[SciPy-user] Sparse with fast element-wise multiply?
Nathan Bell wnbell@gmail....
Mon Dec 17 22:41:32 CST 2007
David Warde-Farley <dwf <at> cs.toronto.edu> writes:
> Thanks for your reply. Actually, though, I'm not looking for matrix-
> multiply; I know CSR/CSC is faster for that.
> I'm looking for elementwise multiply, i.e. take two matrices of the
> same size, multiply each element in matrix 1 with the corresponding
> element in matrix 2, and put the result in the same position in the
> result matrix (as with matlab's a .* b syntax). It seems lil_matrix is
> the only sparse type that implements .multiply(), and converting to
> lil_matrix is slow, so I've written my own function for multiplying
> two coo_matrix's together. It's a LITTLE slower than .multiply() but
> faster than converting both matrices to lil_matrix and then multiplying.
> I'm not entirely sure how CSR and CSC work so it might be possible to
> implement faster elementwise multiplication on top of them.
> David
Currently elementwise multiplication is exposed through A**B where A and B are
csr_matrix or csc_matrix objects. You can expect similar performance to A+B.
I don't know why ** was chosen; it was that way before I started working on
scipy.sparse. I've added a .multiply() method to the sparse matrix base class
that goes through csr_matrix:
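As an illustration, here is a minimal sketch of what such a base-class method might look like. This is not the actual scipy source, and it assumes the 2007-era behaviour described above, where ** on CSR matrices performs the element-wise product:

from scipy.sparse import csr_matrix

def multiply(self, other):
    # Element-wise (Hadamard) product, routed through CSR format.
    A = csr_matrix(self)
    B = csr_matrix(other)
    return A ** B  # element-wise multiply in scipy.sparse of this era

(In current SciPy releases the same operation is spelled A.multiply(B) directly on any sparse matrix.)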
Type variables instead of concrete types
From HaskellWiki
If you are new to Haskell you may think that type variables and polymorphism are fancy things that are rarely used. Maybe you are surprised that type variables and type class constraints can increase safety and readability even if you eventually use only one concrete type.
Imagine the Prelude contained the following functions:
maximum :: [Integer] -> Integer
sum :: [Integer] -> Integer
Sure, the names are carefully chosen and thus you can guess what they do. But the signatures are not as expressive as they could be. Indeed these functions are in the Prelude, but with a more general signature:
maximum :: (Ord a) => [a] -> a
sum :: (Num a) => [a] -> a
These functions can also be used for Integer, but the signatures show which aspects of integers are actually required. We realize that maximum is not about numbers, but can also be used for other ordered types, like Char. We can also conclude that maximum [] is undefined, since the Ord class has no function to construct certain values and the input list does not contain an element of the required type.
Now consider that you have a complex function that is hard to understand. It is fixed to a concrete type, say Double. You want to divide that function into a function that does the processing of the structure and another function which does the calculations with Double. This is good style in the sense of the "Separation of concerns" idiom. You do it because you want to untangle the explicit recursion, which is hard to understand on its own, from the calculation, which also has pitfalls. The structure processing does not know about the Double values, and it is wise to use type variables instead of Double.
Making the example more concrete, look at a state monad transformer which shall be nested to a nesting depth that is only known at run time. OK, that's not possible, so just consider a State Double applicative functor that shall be nested the same way. The functor depends on an input of the same type as its functor output; that is, we nest functions of type
(Double -> State Double Double)
The nested functor has a list of state values as its state value. The nesting depth depends on the length of the list of state values. (This design also forbids transformer techniques for general applicative functors.)
We could write a nesting function fixed to the type Double:
stackStates :: (Double -> State Double Double) -> (Double -> State [Double] Double)
but it is too easy to mix up state and return value here, because they have the same type. You should really separate them:
stackStates :: (a -> State s a) -> (a -> State [s] a)
stackStates m = State . List.mapAccumL (runState . m)
even if you only use it for Double.
This way the type checker asserts that you never mix up the state with the other type.
Partial Fractions (I think)
I am trying to integrate the following:
$\int \frac{x}{\sqrt{a^2 - b^2x^2}}\,dx$
Using partial fractions I get the following:
$\frac{1}{\sqrt{2b}} \int \frac{\sqrt{x}}{\sqrt{a+bx}}\,dx \;-\; \frac{1}{\sqrt{2b}} \int \frac{\sqrt{x}}{\sqrt{a-bx}}\,dx$
This doesn't seem to help at all, so it has obviously come off the rails. Could someone be so kind as to show the correct workings?
Just need to know a few things mate.
1) Is this the equation: $\int \frac{x}{\sqrt{a^2 - b^2x^2}}\,dx$ ?
2) Are a and b both constants?
Yes and yes
In which case, I'd consider looking at substitution rather than partial fractions because of that annoying square root.
Try substituting $u = a^2 - b^2x^2$ into the equation and integrating that way.
If you use the substitution $u = a^2 - b^2x^2$, so that $du = -2b^2x\,dx$,
the integral then is
$-\frac{1}{2b^2} \int \frac{du}{\sqrt{u}} \;=\; -\frac{\sqrt{u}}{b^2} + C \;=\; -\frac{\sqrt{a^2-b^2x^2}}{b^2} + C$
Thanks guys. Easy once you select the correct method. Practice makes perfect.
Answer is $-\frac{\sqrt{a^2-b^2x^2}}{b^2}$.
Gojinn: You're right about the ANNOYING square root.
Yeah, those buggers can really make life more tedious. I mean, at first it's kinda handy because, in my experience, if you see a big square root in the denominator then you'll probably have to use substitution... but once you're over the joy of knowing which technique to use, you realise the extra work you gotta do to find the answer. :P
Good luck with your other integrals. ^_^
Bapchule Algebra Tutor
Find a Bapchule Algebra Tutor
...I have worked with a variety of students ranging from those with learning disabilities all the way to brilliant students seeking a challenge. Simply put, I love to teach and I thoroughly enjoy
the one-on-one interaction tutoring provides. With a degree in chemistry and minor in math, I have the skills and knowledge to instruct students on a variety of subjects.
10 Subjects: including algebra 1, algebra 2, calculus, chemistry
...I took physiology in undergraduate school and then again in medical school. When I took pathophysiology, the study of disease physiology, it reinforced my knowledge even more. Overall I took 15
credit hours of physiology and 12 credit hours in pathophysiology while in medical school.
14 Subjects: including algebra 1, algebra 2, chemistry, physics
...I have read extensively on metaphysics, symbolic logic, epistemology, ethics, philosophy of science, mathematics, language, and several other subjects within philosophy. Differential equations
is the extension of calculus and is an important foundation to understanding physical sciences, economi...
62 Subjects: including algebra 1, algebra 2, reading, English
...I started off with Computer Science and then added Mathematics, graduating Summa Cum Laude at the top of my class. I teach in the local community colleges where students wished I had taught
them math since elementary school and I found that I truly enjoyed teaching and tutoring as well. I enjoy...
16 Subjects: including algebra 2, algebra 1, chemistry, physics
I have been working with kids for over 30 years and I am the proud father of 3 girls. I have directed a large youth mentoring program that worked with 100's of youth in the Phoenix area. My goal
is to help your kids learn in a way that they will enjoy.
7 Subjects: including algebra 1, elementary math, social studies, volleyball
[plt-scheme] Nifty assignment help
From: Todd O'Bryan (toddobryan at gmail.com)
Date: Tue Mar 10 10:33:49 EDT 2009
Hey all,
Christopher Stone from Harvey Mudd College presented a nifty assignment that I'd love to use with my students.
The idea is that you create random functions that have domain ([-1, 1], [-1, 1]) and range [-1, 1]. You map the domain values to x and y and the range values to a color, plot the results, and get very cool images.
What I need are functions with the following contracts:
;; draw-grayscale: Integer (Integer[0, size] Integer[0, size] -> Integer[0, 255]) -> Image
;; consumes: the size in pixels of a square image,
;;           a function that, given an x and y coordinate on the image,
;;           produces a grayscale value
;; produces: the image
(define (draw-grayscale size plot-function)

;; draw-color: Integer
;;             (Integer[0, size] Integer[0, size] -> Integer[0, 255])
;;             (Integer[0, size] Integer[0, size] -> Integer[0, 255])
;;             (Integer[0, size] Integer[0, size] -> Integer[0, 255]) -> Image
;; consumes: the size in pixels of a square image,
;;           three functions that, given an x and y coordinate on the image,
;;           produce a color value
;; produces: the image
(define (draw-color size red-function green-function blue-function)
I could do these in the image teachpack using color-list->image, but I
suspect that would be remarkably slow.
Where should I look for a better way to do it?
Kingwood, TX Prealgebra Tutor
Find a Kingwood, TX Prealgebra Tutor
I am a Marine Corps veteran and college student pursuing a Bachelor of Science in Applied Mathematics. I have 4+ years of tutoring experience and 3+ years of leading, mentoring, and guiding
young Marines. I tutored my peers in high school and my first year of college.
10 Subjects: including prealgebra, reading, algebra 1, algebra 2
...At Humble ISD, I am an AVID tutor. In AVID, we help the students to learn to study in supervised tutorial sessions. The students fill out a tutorial request form, where they provide their
subject topic, information they know on the topic (such as key terms and definitions), and a point of confusion.
4 Subjects: including prealgebra, chemistry, geometry, nutrition
...I can tutor from early mornings to late at night (about 10 pm). I can meet you where you feel comfortable whether it be at your home, a coffee shop, a bookstore, or even a library. I am a
native English speaker and am extremely patient and will take my time to make sure you understand the conce...
24 Subjects: including prealgebra, chemistry, calculus, physics
...During my experience, I've learned to instruct and explain using puzzles, games & fun analogies to explain the concepts of fractions, integers, negative numbers, and such. I make it fun and
inspire anyone to develop a love for learning Prealgebra. Anatomy is very fresh in my mind.
20 Subjects: including prealgebra, chemistry, physics, GED
...I hold a valid teaching certificate for EC-12 Generalist. I currently teach in a 7th grade middle school math classroom. I have also taught 6th and 8th grade math.
4 Subjects: including prealgebra, algebra 1, elementary math, STAAR
non-linear differential equation system
I am having trouble with the following question:
dx/dt = y
dy/dt = -x + (1-x^2)y
Find the critical points and determine their stability.
Now, I found the critical point to be (0,0).
The matrix representing the linearized system is
A = [ 0, 1; -1, 0 ] and the eigenvalues of this matrix are -i and i, which are complex with the real parts equal to zero, therefore the Linearization Theorem can't be used here.
so.. using coordinate shift I get X=x and Y=y
thus dY/dX = -X/Y + (1-X^2).. to solve this I get an integrating factor
e^(0.5*x^2)... which can't be integrated. This is where I get stuck (assuming I have done everything else above correctly)
Does this mean the critical point is unstable and that the orbits of the non-linear system cannot be approximated by the linear approximation?
Or have I done something wrong along the way?
Thanks in advance
I get $1/2 \pm i\sqrt{3}/2$ for the eigenvalues: the origin is a (spiral) source, but that's true only for solutions near the origin. Further qualitative analysis would show that solutions close to the origin spiral outward toward a periodic solution (a stable limit cycle), and solutions outside that periodic solution spiral in towards it. Here's some Mathematica code to illustrate these two scenarios. Me personally, I'd always do it numerically just to check things.
(* Trajectory starting near the origin: spirals outward toward the limit cycle. *)
tmax = 50;
xinit = 0.2;
yinit = 0.2;
sol = NDSolve[{Derivative[1][x][t] == y[t],
    Derivative[1][y][t] == -x[t] + (1 - x[t]^2)*y[t],
    x[0] == xinit, y[0] == yinit},
  {x[t], y[t]}, {t, 0, tmax}]
p1 = ParametricPlot[Evaluate[{x[t], y[t]} /. sol],
  {t, 0, tmax}, PlotStyle -> Red]

(* Trajectory starting outside the limit cycle: spirals inward toward it. *)
tmax = 50;
xinit = 2.5;
yinit = 2.5;
sol = NDSolve[{Derivative[1][x][t] == y[t],
    Derivative[1][y][t] == -x[t] + (1 - x[t]^2)*y[t],
    x[0] == xinit, y[0] == yinit},
  {x[t], y[t]}, {t, 0, tmax}]
p2 = ParametricPlot[Evaluate[{x[t], y[t]} /. sol],
  {t, 0, tmax}, PlotStyle -> Blue]

(* Overlay both trajectories on the same axes. *)
Show[{p1, p2}, PlotRange -> {{-3, 3}, {-3, 3}}]
Thanks shawsend. I figured out my mistake yesterday, so it's all good.
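This system is, in fact, the Van der Pol oscillator with unit parameter, and the corrected linearization at the origin can be cross-checked numerically. A minimal sketch in Python (not from the thread), assuming numpy is available:

import numpy as np

# Jacobian of (x' = y, y' = -x + (1 - x^2) y) evaluated at the origin (0, 0)
J = np.array([[0.0, 1.0],
              [-1.0, 1.0]])
print(np.linalg.eigvals(J))  # [0.5+0.8660254j 0.5-0.8660254j], i.e. 1/2 +/- i*sqrt(3)/2

Both eigenvalues have positive real part, so the origin is an unstable spiral point, consistent with the phase portraits plotted above.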
Havertown Algebra 2 Tutor
Find a Havertown Algebra 2 Tutor
...I can assist with any proofreading needs or help your child learn to read. My goal is to serve you and your learning needs! There is no one-size-fits-all when it comes to education.
20 Subjects: including algebra 2, reading, statistics, biology
...Also, during the day, I stay at home with my two young daughters. I took a differential equations course in fall of 2007 at Rensselaer Polytechnic Institute. I received an A. I used these topics
in many chemical engineering courses after that.
25 Subjects: including algebra 2, chemistry, writing, geometry
...I have taken a few courses which deal with linear algebra, namely Differential Equations (which introduces linear algebra), Intermediate Linear Algebra, and General Relativity, receiving an A
in each of these courses. I have tutored linear algebra both for my job as a math tutor and privately. ...
26 Subjects: including algebra 2, English, writing, reading
...I always encourage students to look beyond the obstacles and turn them into solutions. My experience includes tutoring math and reading to elementary students as well as junior high and high
school students. One student who attended college needed assistance for her realtor exam.
21 Subjects: including algebra 2, English, reading, algebra 1
I am currently employed as a secondary mathematics teacher. Over the past eight years I have taught high school courses including Algebra I, Algebra II, Algebra III, Geometry, Trigonometry, and
Pre-calculus. I also have experience teaching undergraduate students at Florida State University and Immaculata University.
9 Subjects: including algebra 2, geometry, GRE, algebra 1
Abelianization of a semidirect product
I believe there is a straightforward formula for the abelianization of a semi-direct product: if $G$ acts on $H$, and we form the semi-direct product of $G$ and $H$ in the usual way, then the abelianization of this semi-direct product is the product $G^{ab}\times (H^{ab})_{G}$.
(Here the subscript $G$ denotes taking the coinvariants with respect to $G$. That is, $(H^{ab})_{G}$ is the quotient of $H^{ab}$ by the subgroup generated by elements of the form $h^g-h$ for $h$ in
$H$ and $g$ in $G$, and where the superscript $g$ denotes the action of $G$ on $H^{ab}$ induced by the action of $G$ on $H$.)
Does anyone happen to know a good reference for this?
Does it really need a reference? Write down the presentation for the semi-direct product; that gives you a presentation matrix for the abelianization and it's pretty much immediate from there, no? – Ryan Budney Aug 16 '10 at 3:16
That's what I thought. However, a referee requested that I explain the formula; it seems that giving a reference is more appropriate than explaining the thing in detail. (I'm nervous about only explaining it very briefly, given that the referee made an especial request for clarification...) – blt Aug 16 '10 at 3:21
If you don't find a reference, just write a one-paragraph explanation along the lines of Ryan's comment. If it is a mathematics journal, it should be sufficient. – Victor Protsak Aug 16 '10 at
It is a mathematics journal, for a research paper in number theory (not a textbook). Given the weight of the consensus here, I will write a short explanation along the lines of Greg's below. Thank-you all for giving me the confidence to do so! – blt Aug 16 '10 at 12:37
2 Answers
I agree with Ryan and Victor, except that you don't need presentations. The subgroup $[G \ltimes H,G \ltimes H]$ is generated by $[H,H] \cup [G,H] \cup [G,G]$, so you can write $$(G \ltimes H)^{ab} = (G \ltimes H) / \langle [H,H] \cup [G,H] \cup [G,G] \rangle.$$ If you apply the relators $[H,H]$, you get $G \ltimes H^{ab}$; then if you apply the relators $[G,H]$, you get $G \times (H^{ab})_G$; then finally if you apply $[G,G]$, you get $G^{ab} \times (H^{ab})_G$. You can add this as an extra half-paragraph or footnote rather than giving a citation.
I don't think that the referee has the right to demand a longer explanation than this, unless maybe you are writing a textbook.
You don't even need to mention commutators: any homomorphism from $G\ltimes H$ to an abelian group factors through $G\times H^{ab}$; then through $G\times (H^{ab})_G$; finally through $G^{ab}\times (H^{ab})_G$. – mephisto Apr 21 '11 at 21:32
A description of the derived subgroup of a semidirect product, from which the abelianization can be obtained, was published in:
Daciberg Lima Gonçalves, John Guaschi, The lower central and derived series of the braid groups of the sphere, Trans. Amer. Math. Soc. 361 (2009), 3375-3399. http://www.ams.org/journals/tran/2009-361-07/S0002-9947-09-04766-7/ (Proposition 3.3)
You may also find it in their preprint: http://arxiv.org/abs/math/0603701 (Proposition 29)
Quadratic Question
This is probably a relatively easy question, but I've never done this before, and I'm not sure how to go about it.
For the equation $3x^2+ax+4=0$ find the smallest possible value of $a$ such that the equation will have two distinct rational solutions.
Thanks for your help
*Edit - Never-mind. I just figured it out
So please mark this thread [SOLVED]
Hello, Stroodle!
For the equation $3x^2+ax+4\:=\:0$, find the smallest possible value of $a$
such that the equation will have two distinct rational solutions.
Thanks for your help
*Edit - Never-mind. I just figured it out
Good for you!
But is this a trick question?
I noted that it didn't say smallest positive value of $a$.
Using the Quadratic Formula: $x = \frac{-a \pm\sqrt{a^2-48}}{6}$
For rational roots, the discriminant $(a^2-48)$ must be a positive perfect square.
This is true for: $a = \pm7,\ \pm8,\ \pm13$
Then the smallest (least) value of $a$ is: $a = -13$
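A quick sanity check of those values: a minimal sketch in Python (not from the thread), assuming, as Soroban does, that $a$ is an integer:

import math

# Integer a for which a^2 - 48 is a positive perfect square,
# i.e. 3x^2 + ax + 4 = 0 has two distinct rational roots.
hits = [a for a in range(-20, 21)
        if a * a > 48 and math.isqrt(a * a - 48) ** 2 == a * a - 48]
print(hits)  # [-13, -8, -7, 7, 8, 13]

Writing $a^2 - k^2 = 48$ as $(a-k)(a+k) = 48$ forces $|a| \le 13$, so the search range above covers every integer solution, and $-13$ is indeed the least.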
Haha. Well spotted Soroban!
Actually the question did ask for the smallest positive value of $a$, but in my tired & vague state of mind I accidentally failed to copy it down...
Basics of Lighting
This shader demonstrates the basic lighting formula using ambient, diffuse and specular reflection, light attenuation, and spotlight effects. This article will describe the basics behind lighting and
cover the differences between Gouraud and Phong shading techniques.
Diffuse Reflection
We see objects because objects reflect some of the light energy they receive back into the environment. This is illustrated in the following diagram.
The light ray reflects at multiple angles within ±90 degrees of the surface normal. A mathematician by the name of Johann Lambert discovered this property of diffuse reflection in 1760 and deduced the following formula.
\[I_D = (\vec{N} \cdot \vec{L}) * I_L\]
\(\vec{N}\) is the unit normal vector of the surface.
\(\vec{L}\) is the unit direction vector from a point on the surface to the light source.
\(I_L\) is the intensity of the light source.
\(I_D\) is the calculated intensity of the diffusely reflected light.
If you assume the intensity of your light source remains at a constant of 1.0, then the formula reduces to a simple dot product between the surface normal and the direction vector from the surface to
the light source. The dot product between two vectors will produce a positive value if both vectors are within 90 degrees of each other, 0 if the vectors are perpendicular to each other, or a
negative value if both vectors are greater than 90 degrees apart. If both vectors are of unit length (i.e., normalized), then the dot product will return a value -1.0 <= \(I_D\) <= 1.0. If the value is
less than or equal to 0, then that part of the object is shaded. If the value is greater than 0, then that part of the object is illuminated based on the diffuse intensity value calculated for that
particular point. To apply a diffuse colour to the object, you use the following formula.
\[C_D = D_{RGB} * I_D\]
\(D_{RGB}\) is the red, green, and blue diffuse colour of the object.
\(I_D\) is your Lambert diffuse intensity value.
\(C_D\) is the final diffuse colour.
If you have multiple light sources, you simply sum the diffuse reflection contributions from each light source and clamp the result to your colour depth. Shaders use floating point values, so you
would clamp the result between 0.0 and 1.0.
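As a concrete illustration, here is a minimal GLSL-style sketch of the diffuse term described above; the function and parameter names are illustrative, not taken from the article's actual shader:

// A sketch of the Lambert diffuse term; all names are illustrative.
float diffuseIntensity(vec3 normal, vec3 surfacePos, vec3 lightPos)
{
    vec3 N = normalize(normal);
    vec3 L = normalize(lightPos - surfacePos); // surface -> light source
    return max(dot(N, L), 0.0);                // shaded side clamps to zero
}

The max() clamp implements the rule above: dot products at or below zero mean the point faces away from the light and receives no diffuse contribution.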
Improving Diffuse Reflection
Constant diffuse factor vs bump map
The diffuse reflection formula discussed in this article and demonstrated in the WebGL demo uses a constant diffuse value. This produces a smooth surface that lacks any texture such as cracks, bumps,
pores, scratches, etc. In real life, all surfaces exhibit irregularities causing light to reflect at different angles. This is illustrated in the following diagram.
This can be represented using a high number of displaced polygons, but that comes with a computationally expensive process. A more efficient way to improve diffuse reflection quality is to use bump
maps. Bump maps are image files that contain the perturbed surface normal vectors encoded in RGB space. With a bump map, you calculate the diffuse intensity value for the surface normal as discussed
above in addition to calculating the diffuse intensity value of the perturbed normal stored in the bump map. You then combine (multiply) the two in order to produce the final intensity value. The
following demonstrates what a bump map in RGB space looks like and what the final lighting calculations would produce.
From left to right: Displacement map, bump map conversion, final lighting result
This topic goes outside the scope of this article, but it is mentioned to give you insight into how to improve the visual quality of diffuse reflection. Topics you can investigate include dot 3 bump mapping (aka normal mapping) and the more advanced parallax bump mapping, which adds the illusion of depth by factoring in the viewing angle.
Specular Reflection
Specular reflection is what gives you that shiny look that you often see on billiard balls, leather sofas, or metal surfaces. It can be really shiny and give off a bright glare or it can be really
dull and lower the contrast of a particular object. There are many empirical formulae for calculating specular reflection, but this shader focuses on the more popular Phong reflection model. The
Phong reflection model was developed by Bui Tuong Phong at the University of Utah in 1973\(^{[2]}\). It is calculated using the following formula.
\[I_S = max(\vec{R} \cdot \vec{V}, 0.0)^S\] \[\vec{R} = \vec{L} - 2 * \vec{N} * (\vec{L} \cdot \vec{N})\]
\(\vec{R}\) is the unit reflection vector calculated from the surface normal and the light direction vector.
\(\vec{L}\) is the unit direction vector from the light source to the surface (the incident ray).
\(\vec{N}\) is the unit surface normal vector.
\(\vec{V}\) is the unit view direction vector, pointing from the surface to the viewer.
\(S\) is the shininess term. A high value produces a smaller glare while a low value produces a larger glare.
\(I_S\) is the calculated intensity of the specular reflected light.
Unlike diffuse reflection where light reflects at multiple angles, specular lighting only reflects at one angle, which is relative to the surface normal and light direction vectors. The dot product
between the reflected light vector and view vector will produce a value clamped to the range 0.0 <= \(I_S\) <= 1.0. From this we deduce that a specular highlight will be at its strongest when the
reflected light vector is aligned with the view direction vector and at its weakest when the reflected light vector is greater than or equal to 90 degrees from the view vector. The formula to apply
the intensity value to the specular colour is defined below.
\[C_S = S_{RGB} * I_S\]
\(S_{RGB}\) is the red, green, and blue specular colour of the object.
\(I_S\) is your calculated specular intensity value.
\(C_S\) is the final specular colour.
As with diffuse reflection, if you have multiple light sources you simply sum the specular reflection contributions from each light source and clamp the result to your colour depth.
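Again as an illustrative sketch (names are placeholders; GLSL's built-in reflect() computes the \(\vec{R}\) formula above when given the incident ray):

// A sketch of the Phong specular term; all names are illustrative.
// L points from the surface to the light, V from the surface to the viewer.
float specularIntensity(vec3 N, vec3 L, vec3 V, float shininess)
{
    vec3 R = reflect(-L, N); // -L is the incident ray arriving from the light
    return pow(max(dot(R, V), 0.0), shininess);
}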
Improving Specular Reflection
Using a constant specular colour throughout an object does not always work out well. For example, the human body doesn't shine equally across all body parts. A bald head is more likely to give off a
polished shine than your arms or hands. A sweaty body will also give off more shine compared to a dry body. In order to define these areas and treat them differently, you need to create and supply
your shader with a specular map. This map will control the specular intensities that are permitted for a particular pixel on your object. The following example demonstrates the rendering differences
with and without a specular map.
From left to right: Face without specular map, specular map, face with specular map
In addition to specular maps, normal maps also help bring out specular reflections. The bumpiness of the material will alter the direction light reflects off the surface, giving the following result.
Specular reflection due to bump mapping
This is a common technique applied to rendering oceans. To reduce the polygon count, the waves of an ocean are rendered using low- and high-frequency normal maps. The specular highlights produced by these normal maps produce a more realistic body of water. This topic goes outside the scope of this article, but it is mentioned to give you insight into how to improve the visual quality of specular reflections.
Ambient Lighting
Sphere rendered without ambient lighting and with ambient lighting.
When light is reflected off a surface, that light can be further reflected by other objects in the area, which in turn reflect it again, and so on until the energy dissipates. This is
known as ambient lighting (or background lighting) and it plays a crucial role in generating realistic imagery. The most basic way to represent ambient lighting is to illuminate the object as a whole
with a constant value, but this can create less than realistic images. When combined with smart texturing to cover up the obvious constant ambient factor, it can sometimes be enough to improve the
image quality.
Since ambient is additional lighting from the environment, the ambient colour is added to the result of the lighting equation. This produces the following formula.
\[C_{RGB} = A_{RGB} + ...\]
\(A_{RGB}\) is the red, green, and blue ambient colour value to add.
\(...\) is the remainder of the lighting formula that will sum diffuse, specular, attenuation, and spotlights.
\(C_{RGB}\) is the final colour of the pixel.
Improving Ambient Lighting
Diffuse reflection uses bump maps to improve quality and specular reflection uses specular maps. Ambient quality can be improved in a similar way by using ambient occlusion maps. Ambient occlusion is
an empirical technique for determining how much ambient lighting a pixel on an object receives based on its view of the outside world. These maps even include the amount of shadowing received due to
light sources being occluded by other objects.
Image on the left rendered without AO and with AO on the right.
Ambient occlusion maps are calculated based on the assumption objects remain stationary. If an object is transformed, then the calculated ambient values are no longer valid. One way around that is to
use screen space ambient occlusion (SSAO), which is a real-time technique applied in a fragment shader to approximate ambient occlusion values based on the recorded depth values. It requires a fair
amount of processing power to calculate however.
This topic goes outside the scope of this article, but it is mentioned to give you insight into how to improve the visual quality of ambient lighting.
Attenuation
Light dissipates over the distance it travels and more so when it bumps into other particles such as dust, walls, gases, etc. This visual trait can be simulated by multiplying the calculated light
intensity value by an attenuation factor that will reduce the intensity of light based on the distance it travelled from the light source to the surface of the object. This is represented using the
following formula.
\[A = 1.0 / (A_C + A_L + A_K)\] \[A_L = A_l * d\] \[A_K = A_k * d^2\]
\(A_C\) is an attenuation constant.
\(A_L\) is the calculated linear attenuation and \(A_l\) is the linear constant used in that calculation.
\(A_K\) is the calculated quadratic attenuation and \(A_k\) is the quadratic constant used in that calculation.
\(d\) is the distance between the surface of the object and the light source.
\(A\) is the calculated attenuation factor that will multiply the light intensity value.
Each attenuation factor has a certain effect on luminosity. The constant attenuation factor doesn't dim the light over distance, but instead adjusts its overall intensity. Linear attenuation approximates a light source that emits in a vacuum. Quadratic attenuation dims the light more per distance travelled, which is typical in an environment where light collides with many
particles. This is often used to simulate light sources such as flashlights and lanterns.
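The same calculation as a GLSL sketch (the constant names are illustrative):

// A sketch of distance attenuation; the three constants are illustrative.
float attenuation(float d, float kConstant, float kLinear, float kQuadratic)
{
    return 1.0 / (kConstant + kLinear * d + kQuadratic * d * d);
}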
Spotlights
Spotlights extend the lighting equation by cutting off the light source after a certain angle. This is shown in the formula below.
\[ S_A = \vec{D} \cdot \vec{L} \qquad I_{spot} = \left\{ \begin{matrix} S_A^{S_{exp}} & S_A \geq \cos(Cutoff) \\ 0 & S_A \lt \cos(Cutoff) \end{matrix} \right\} \]
\(\vec{D}\) is the unit spotlight direction vector (the axis along which the light points).
\(\vec{L}\) is the unit direction vector from the light source to the vertex.
\(S_{exp}\) is the spotlight exponent, which concentrates or spreads the spotlight's brightness.
\(S_A\) is the cosine of the angle between the spotlight axis and the direction to the vertex.
\(I_{spot}\) is the resulting spotlight factor: zero outside the cone of influence, and the exponentiated cosine inside it.
This is similar to the diffuse reflection formula except you are examining the situation from the light's point of view instead of the vertex point of view. If the vertex lies within the cone of
influence, it will be lit, otherwise it will be shaded. In addition to the above, sometimes you want a sharp edge and other times you want a nice falloff. The effect is illustrated below.
Spotlight without falloff on the left and with falloff on the right
A falloff can be calculated by factoring an inner ring and an outer ring in the spotlight's cone of influence. This is illustrated below.
The area inside the inner ring is fully lit, whereas the falloff area will dim the luminosity as you approach the outer ring. This can be added to the spotlight formula.
\[I_{spot} = (C_O - S_A) / (C_O - C_I) \qquad [0.0 \le I_{spot} \le 1.0]\]
\(C_O\) is the cosine of the outer cutoff angle. Ex: cos(45.0 * PI / 180.0).
\(C_I\) is the cosine of the inner cutoff angle; the inner angle must be less than or equal to the outer cutoff angle, so \(C_I \ge C_O\).
\(S_A\) is the cosine of the spotlight angle from the previous equation.
\(I_{spot}\) is the calculated spotlight intensity, clamped to the range 0.0 to 1.0.
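In code, the falloff reduces to a single clamp: a GLSL sketch with illustrative names follows.

// A sketch of the spotlight factor with falloff; names are illustrative.
// cosInner >= cosOuter, because the inner cone angle is the smaller one.
float spotFactor(vec3 spotDir, vec3 L, float cosInner, float cosOuter)
{
    // L points from the surface to the light, so -L points from the light
    // toward the surface; compare it against the spotlight axis.
    float sa = dot(normalize(spotDir), -L);
    return clamp((cosOuter - sa) / (cosOuter - cosInner), 0.0, 1.0);
}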
Putting it all Together
We've covered diffuse reflection, specular reflection, ambient lighting, attenuation, and spotlights. When combining all these elements, we get the final lighting formula.
\[C_{RGB} = A_{RGB} + ((D_{RGB} * I_D) + (S_{RGB} * I_S)) * L_{RGB} * A * I_{spot}\]
\(A_{RGB}\) is the ambient component.
\(D_{RGB} * I_D\) is the diffuse component.
\(S_{RGB} * I_S\) is the specular component.
\(L_{RGB}\) is the light colour.
\(A\) is the attenuation factor.
\(I_{spot}\) is the spotlight factor.
\(C_{RGB}\) is the final colour for that pixel.
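Expressed as a single GLSL function, the combined equation might look like the following sketch; the per-term intensities come from the earlier snippets, and none of this is the article's literal shader code:

// A sketch combining the terms above into one light's contribution.
vec3 shadePixel(vec3 ambient, vec3 diffuse, vec3 specular, vec3 lightColor,
                float iD, float iS, float att, float spot)
{
    vec3 c = ambient + (diffuse * iD + specular * iS) * lightColor * att * spot;
    return clamp(c, 0.0, 1.0); // clamp to the displayable range
}

For multiple lights, the attenuated and spotlight-scaled diffuse/specular product is summed per light before the final clamp, as the diffuse and specular sections describe.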
Shading Techniques
There are two ways to apply the lighting formula.
Gouraud Shading
Gouraud shading is an interpolation technique whereby the colour values calculated at the vertices of a polygon are linearly interpolated to fill the rest of the polygon. This is computationally
inexpensive as lighting calculations are performed at the vertices rather than at each pixel, which is the case with Phong shading. There is some quality loss when using a low-polygon model however.
Both diffuse and specular reflections can appear blocky due to interpolating between so few polygons. An example of this is shown below.
The first image demonstrates specular highlights with Gouraud shading. The second image demonstrates specular highlights with Phong shading. Since there are more pixels than polygons when rendering
the image, the linearly interpolated values calculated in the vertex shader display quality loss. If however you have an object with a sufficiently high polygon count, typically a 1:1 polygon per
pixel ratio, then you effectively eliminate the problem.
Here is the same object rendered with Gouraud shading, but with a sufficiently higher polygon count. In this particular case, the performance benefits of Gouraud shading are eliminated by the fact
that more memory and vertex processing are required to produce this level of quality.
Phong Shading
Phong shading, not to be confused with the Phong reflection model, is an interpolation technique whereby the surface normal is interpolated over the polygon in a fragment shader for purposes of
shading the object. By interpolating normals instead of colour values, as is the case with Gouraud shading, you end up with a higher quality image at the expense of additional computations per pixel.
One thing to note is that linearly interpolating normals will not guarantee unit length throughout. You must renormalize the normal vector in order to restore this property. Failing to do so will
result in improper lighting intensities when interpolating normals with large angular differences, such as the vertex normals at each corner in a cube.
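In a fragment shader this renormalization is a one-liner; a sketch with an illustrative varying name:

// Interpolated normals shrink below unit length; renormalize per fragment.
vec3 N = normalize(v_interpolatedNormal);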
References
1. Wikipedia Editors (2012-02-10). “Lambertian reflectance”. Wikipedia. Retrieved 2012-02-11.
2. Wikipedia Editors (2011-11-27). “Phong shading”. Wikipedia. Retrieved 2012-02-11.
Mathematical Astronomy Morsels IV
See also: Mathematical Astronomy Morsels, More Mathematical
Astronomy Morsels, Mathematical Astronomy Morsels III, and
Mathematical Astronomy Morsels V
About Mathematical Astronomy Morsels IV
In his Preface to Mathematical Astronomy Morsels III, Jean Meeus writes: We are living in a period of important astrophysical and cosmological research. Many astronomical journals and scientific books deal with
subjects such as birth and evolution of stars, black holes, dark matter, gamma-ray bursts, supernova remnants or collisions between galaxies. Of course this is important matter, but one almost seems
to have forgotten the `old' astronomy, the classical, mathematical science of the sky. And yet, without this fundamental astronomy modern research on the universe new would never have been possible.
And to this Roger Sinnott responded: “In his Preface the author hints that some readers might accuse him of practicing “old” astronomy. Don't let that fool you. The problems he tackles would have
fascinated astronomers of the early 20th and prior centuries, but those poor souls faced a brick wall of computational difficulty. They had to work out all their answers laboriously, with a pencil
and paper. Freed from that limitation, the author uses today's computers to address each topic with a rigor and finesse beyond the wildest dreams of any old-time practitioner.”
This conversation continues in this “Morsels IV”, where Jean Meeus concludes his Preface as follows: Certainly this book will not make astronomy progress. Rather, most subjects discussed in this
book belong to what might be called recreational astronomy. While making the calculations and writing the text, we felt as Sir Arthur Conan Doyle wrote in The Hound of the Baskervilles:
A dabbler in science, Mr. Holmes, a picker up
of shells on the shores of the great unknown ocean.
And not too far into this book he cites Maurice Ravel's Une Barque sur l'Océan (a boat at sea) piano piece as the launching point for a study of when a horizon-skimming Moon might look like a boat at
sea. Interested? Here are 68 more subjects that have washed up on the beach of Jean Meeus' imagination.
Click here for a PDF of the Table of Contents
From the Reviews
The fourth book in this amazing series contains 370 pages of absolutely fascinating facts. All these books are written assuming an ‘appropriate astronomical background’ and as such there is no
glossary. It can be read in isolation, although there are references to previous Morsels in the text. Jean’s motivation for writing this fourth volume is the same as for the previous three, and many
chapters have been inspired by questions from correspondents, such as ‘can Jupiter be visible with none of its Galilean satellites visible?’ or ‘what is the longest total solar eclipse visible from …?’
Personally I find books of this type absolutely riveting, and frequently dip into the various volumes. I have also given talks based on some of the surprising facts discovered by Jean. For example,
did you know that on 2007 Jan 1, a shadowless transit of Ganymede across Jupiter occurred? This was because the Sun, Earth, Ganymede and Jupiter were so perfectly aligned that Ganymede’s shadow was
obscured by Ganymede itself. Turning this around, an observer within the shadow on Jupiter would have seen a simultaneous total solar eclipse, a transit of Ganymede across the Earth and a transit of
Earth across the Sun!
One of Jean’s passions is eclipses, and it amazes me to find that he can still devote a further 115 pages to new eclipse facts. For example the 2015 total solar eclipse which he predicts (even though
it shouldn’t be) will be visible from exactly the North Pole (it’s all to do with refraction of course).
Jean is a stickler for accuracy and although English is not his mother tongue, the book is remarkably free from grammatical and spelling errors. Mathematically, Jean’s results are a perfect example
of how data should be presented — never using more precision than is justified, well tabulated and well explained.
I have always been intrigued to find out why, if the Earth’s rotation is slowing by one ten-millionth of a second per day, ΔT can amount to around 67 seconds in 100 years, and 2 hours in 1,000
years. Jean explains this admirably. But just sometimes, even Jean cannot explain his results, such as why (in our era) do total solar eclipses fall more often on a Wednesday than any other day?
Also, why between 2001-2400 does the 13th of the month fall on a Friday more often than any other day?
How about this for you to try (p.357) — if you are at roughly latitude 48°N and the declination of the Sun is roughly +20° and you are living in a tall building, you could witness the rise of the
Sun’s upper limb and then have time to take the lift downstairs and see the Sun rise a second time! Jean always explains his workings, and if I give away any more secrets you won’t buy the book. Also
in case you think Jean has a super-powerful computer, he has an ordinary PC and all his programs are written in BASIC.
Journal of the British Astronomical Association (June, 2007)
About The Author
Jean Meeus, born in 1928, studied mathematics at the University of Louvain (Leuven) in Belgium, where he received the Degree of Licentiate in 1953. From then until his retirement in 1993, he was
a meteorologist at Brussels Airport. His special interest is spherical and mathematical astronomy. He is a member of several astronomical associations and the author of many scientific papers. He is
co-author of Canon of Solar Eclipses (1966), the Canon of Lunar Eclipses (1979) and the Canon of Solar Eclipses (1983). His Astronomical Formulae for Calculators (1979, 1982, 1985 and 1988) has been
widely acclaimed by both amateur and professional astronomers. Further works, published by Willmann-Bell, Inc., are Elements of Solar Eclipses 1951-2200 (1989), Transits (1989), Astronomical
Algorithms (1991 and 1998), Astronomical Tables of the Sun, Moon and Planets (1983 and 1995), Mathematical Astronomy Morsels (1997), More Mathematical Astronomy Morsels (2002), and Mathematical
Astronomy Morsels III (2004). For his numerous contributions to astronomy the International Astronomical Union announced in 1981 the naming of asteroid 2213 Meeus in his honor.
Set Theory
April 12th 2007, 05:23 PM #1
Dec 2006
Set Theory
Suppose the set N of natural numbers is less than or equinumerous to A. Show that A is infinite. (Here a set is called infinite if there is no bijection between it and any finite set.)
April 13th 2007, 05:10 AM #2
By definition, if |N| ≤ |A| then there is an injection f from N into A. Now suppose, for contradiction, that A is finite, so there is a bijection g from A onto a finite subset of N. Then the composition g ∘ f is an injection from N into a finite set, which is impossible by the pigeonhole principle, since N is infinite. Therefore A must be infinite.
Redwood Estates Math Tutors
...This is simple math and there will be no problem for me to provide tutoring for such a subject. I have a BS degree in Accounting and have passed all my CPA exams. I'm currently working as an
accountant at a company.
7 Subjects: including algebra 1, prealgebra, accounting, Chinese
I am an energetic and friendly math tutor who enjoys teaching math to students at all levels. I like to talk through examples and discuss the problems to ensure there is a true understanding of
the concepts. I'm currently in school to earn my credential to teach mathematics in grades 6-12, and I have been tutoring for over 10 years.
9 Subjects: including algebra 1, algebra 2, prealgebra, geometry
UPDATE, April 3, 2014 I am not currently accepting new students. I expect to have openings again around April 27. Inquiries in advance are welcome.
17 Subjects: including precalculus, statistics, algebra 1, algebra 2
...It is too basic to use on its own. Then we need to know how to manipulate the basic formula into something like cos(2x) = 2cos²(x) − 1, or 2cos²(x) = cos(2x) + 1. From there we could integrate it
directly or integrate by parts.
13 Subjects: including prealgebra, trigonometry, discrete math, differential equations
...I am able to give public speaking coaching and tips for all age ranges, high school and up. As a lifelong student who has switched careers multiple times, I feel I am uniquely qualified to
advise in a variety of careers and fields. I have worked as a researcher, an engineer, a manager, a teacher and a director of data and research.
22 Subjects: including SAT math, public speaking, algebra 1, chemistry | {"url":"http://www.algebrahelp.com/Redwood_Estates_math_tutors.jsp","timestamp":"2014-04-21T07:13:04Z","content_type":null,"content_length":"24848","record_id":"<urn:uuid:11e34598-0275-4fae-9d0c-16643e7e08b4>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
Proof of geometric probability distribution
February 14th 2010, 05:40 PM #1
Jan 2010
Proof of geometric probability distribution
I would need help with this proof. Whichever way I try it, I get stuck or mess up somewhere. I know it's easy, but I'm bad at proving things, so a bit of help would be appreciated.
$\sum_{y=1}^{\infty} q^{y-1}p = 1$
February 14th 2010, 06:51 PM #2
Well, I assume that q = 1 − p; then you have that
$\sum_{n\geq 0} q^n p = p\sum_{n\geq 0} q^n = \frac{p}{1-q} = \frac{p}{p} = 1$.
If you want to prove what the sum of a geometric series is, you can look it up on Wikipedia.
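For completeness, here is the standard derivation of that geometric series sum (valid here since 0 < p ≤ 1 gives 0 ≤ q < 1):
$(1-q)\sum_{n=0}^{N} q^n = \sum_{n=0}^{N} q^n - \sum_{n=1}^{N+1} q^n = 1 - q^{N+1}$
$\sum_{n=0}^{N} q^n = \frac{1-q^{N+1}}{1-q} \to \frac{1}{1-q} \text{ as } N \to \infty, \text{ since } q^{N+1} \to 0 \text{ for } |q| < 1.$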
The Best Laid Schemes...
"The best laid schemes o' Mice an' Men, Gang aft agley" -- Robert Burns
I've hit on one of those programming conundrums where a quick diversion becomes a huge project. Let me explain what I'm going for; it will probably take three or four blog posts to cover all the
My basic idea for a cool Perl 6 twist on knot vectors was evaluating the basis function to create a polynomial rather than a set value. The basis function defines a piecewise polynomial, with a
continuous polynomial between each pair of distinct neighboring knots. So if the knot vector is (0, 0, 0, 1, 2, 3, 3, 3), that defines three polynomials: from 0 to 1, from 1 to 2, and from 2 to 3.
For any reasonably complicated knot vector, those polynomials are pretty tricky to calculate by hand. The NURBS Book approach is a change of basis, which works well but is not particularly
mathematically enlightening. But the knot vector basis calculation function N we've defined can already do this: if it is passed a reasonably capable polynomial object for the $u parameter. This is
the payoff for not forcing $u to be a Num; we can pass a polynomial instead and get a polynomial out.
Well, almost. We have two KnotVector.N functions, one for the $p == 0 case and one for $p > 0. The latter will work perfectly when passed a polynomial for $u. (At least in theory; there may be issues
with Rakudo bugs, of course.) The former, however, actually needs to know what the numeric value of $u is. Originally I planned to somehow overload the polynomial class to have a value for this
purpose. But as far as I know, to make that work, I'd have to overload prefix:<+> -- and last time I checked, that didn't work in Rakudo.
You could fix that by changing the second N to take an impulse array, instead of calculating it using the value of $u. But once you're thinking about doing that, it's time to face the facts: even the
hypothetical "cool" implementation of N we have is grotesquely inefficient for reasonably complicated knot vectors.
To calculate the basis efficiently, you should only calculate the values that have a chance of being non-zero. And you should get the impulse value by doing something like a binary search.
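For reference, the usual knot-span search looks roughly like this (sketched in Python rather than Perl 6 for brevity; the names are illustrative, not from the Nubs code):

def find_span(knots, u):
    """Binary search for i such that knots[i] <= u < knots[i+1].

    Assumes a non-decreasing knot vector; if u equals the last knot,
    the search is clamped back to the last non-degenerate span.
    """
    n = len(knots) - 1
    if u >= knots[n]:
        hi = n
        while hi > 0 and knots[hi] == knots[n]:
            hi -= 1
        return hi
    lo, hi = 0, n
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if u < knots[mid]:
            hi = mid
        else:
            lo = mid
    return lo

# The knot vector from earlier: spans are [0,1), [1,2), [2,3).
print(find_span([0, 0, 0, 1, 2, 3, 3, 3], 1.5))  # -> 3, i.e. the [1,2) span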
So before I can get where I want to go with knot vectors, I need to have a working polynomial class and a working binary search. Plus I think it might be worthwhile to have a short post discussing
the .Str and .perl functions for KnotVector and Nubs...
Alexandria, VA SAT Math Tutor
Find an Alexandria, VA SAT Math Tutor
Hello, My name is Leona. I am a licensed Civil Engineer and I love helping others to succeed. My strength is in math.
32 Subjects: including SAT math, reading, GRE, English
...I have assisted students in both high school and college by preparing them to be successful both inside and outside the classroom. During the five years I was a high school football coach, I
helped 10th, 11th, and 12th grade students manage their class load and maintain high grades during footba...
40 Subjects: including SAT math, chemistry, English, Spanish
...I utilize hands on material whenever possible. I have a bachelor's degree and master's degree in economics as well as several years of teaching experience. I know how to make economics
understandable and doable, by using simple language and concrete examples.
21 Subjects: including SAT math, reading, calculus, geometry
...Please feel free to contact with questions. I scored very high on my math portions of the SAT and ACT (750 out of 800 SAT, 35 out of 36 ACT). I have helped several other students with study
tips and practice problems for these exams as well. I completed AP Calculus BC through my Junior year of high school as well.
16 Subjects: including SAT math, geometry, statistics, algebra 2
...My promise is to be supportive and to ensure that you have the best chance to do well. Yours in math, ~Dexter
Reviewing basic algebraic concepts (e.g. variables, order of operations, and exponents); understanding and graphing functions; solving linear (with one variable 'x') and quadratic equati...
15 Subjects: including SAT math, chemistry, calculus, geometry
Talk:Romanesco Broccoli
From Math Images
Hi Eliza,
You did a great job covering the Fibonacci aspect of the broccoli on your page. I have some suggestions on how to revise that aspect as well as other suggestions on future directions that your page
could take.
I thought you handled your presentation quite well, especially the way you answered Mr. Taranta's questions and clarified the data results. Your answers during the presentation regarding the
difference in ratios from the side view and aerial view should be incorporated into the page. For instance, you should elaborate on how there are different and misleading ratios of the broccoli
depending on which way you view it, whether it is from the side or from the top. Include this discussion somewhere between the two top images of the broccoli.
Also, what you should do, to clear up any confusion, is to number the line segments. It’s hard to see what line segments you are referring to when you are charting their ratios.
You do a good job explaining why you need to take the ratio of the Fibonacci sequence. However, you should explain how you arrived at the ratios for that sequence as well as for the broccoli
segments. It's not clear why you divided the largest line segment by the next largest. Why does this ratio work? I believe this was something Mr. Taranta asked you during the presentation. Clarify this.
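One concrete way to back this up: the ratio of each Fibonacci number to the one before it converges to the golden ratio φ ≈ 1.618, which is the value your broccoli segment ratios are being compared against. A quick check (a snippet of my own, not from your page):

fib = [1, 1]
for _ in range(10):
    fib.append(fib[-1] + fib[-2])

# Successive ratios converge to phi = (1 + 5**0.5) / 2 ~ 1.6180339...
for a, b in zip(fib, fib[1:]):
    print(b / a)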
Somewhere between your data set and your first graph, where you explain what you noticed about the lines created, you should add another point of discussion. Talk about the potential for the broccoli
ratio 3 line to intersect with the Fibonacci ratio line. If there had been more data points for broccoli line segment 3 (i.e. possibly a bigger broccoli), couldn't you speculate that the broccoli
grows at exactly the same rate as the Fibonacci sequence? Anyway, the intersection between the broccoli ratio 3 line and the Fibonacci ratio line is the reason why you decided to keep the two lines,
right? You forgot to pinpoint exactly why you decided to keep the two lines before launching into a discussion of your final graph, so mention that. It'll keep the logical transitions of your
mathematical discussion intact.
And for the final graph that you produced, you should make your linear approximations more clear. I know you have them underneath their respective colored broccoli and Fibonacci lines but is there
any way to go back and color-code those lines?
Other technical things: You could use a mouseover for "fractal" because you might be engaging with an audience that is not familiar with that term. I wasn't even sure what a fractal was until you told
me! Anyway, if you could provide a definition for fractal, that'd be super helpful. I can deal with the technical aspects of making it a mouseover, where you place your mouse over a linked term and
a box pops up with its definition.
Future directions: You mentioned wanting to cover the fractal aspect of the broccoli. I am not really familiar with fractals, but I am willing to do research on it if you are interested in
extending your page. I know it is summer vacation, so you may not be as willing to do any more work. But try to get in touch with me. I am more than happy to work with you to complete your page by
the end of the summer. Hopefully we'll have a finished product that you can be proud to call your own.
Anyways, this was long, but I hope it all helps! If you have any questions, feel free to contact me at lpeng1@swarthmore.edu
--Lpeng1 16:15, 1 July 2013 (EDT) | {"url":"http://mathforum.org/mathimages/index.php/Talk:Romanesco_Broccoli","timestamp":"2014-04-21T07:26:03Z","content_type":null,"content_length":"17908","record_id":"<urn:uuid:d08d8506-55e9-4a6e-8fea-4d18cafe090d>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00322-ip-10-147-4-33.ec2.internal.warc.gz"} |
April 19th 2010, 01:56 PM
How much was spent on interest between the outstanding principal after 5 years and the outstanding principal after 10 years,
on a $225,000 loan?
interest rate: 4.948%
a 30-year mortgage.
principal outstanding after 5 years = $206,462.32
principal outstanding after 10 years = $182,731.88
just a bit stuck on what formulas to use... and what values to input.
any help would be great!
April 19th 2010, 02:49 PM
Rule of Thumb: If you cannot remember which formula to use, then you have not learned the material in any useful way. Sorry, but formula-based concepts are not good. Formulas only work on
problems designed to meet their assumptions. Think about the structure and dissect it. There is no formula for that.
In this case, we paid down the loan $206,462.32 - $182,731.88 = $23,730.44
How much was paid in that same period? A 30 year mortgage is likely to be monthly, so that's the last piece. We can calculate the payment and see what happens. You SHOULD be able to use either
the 5-year value or the 10-year value or the original value to calculate the payment amount. (I found them to be within 1¢ of each other) Please do that to find the payment, p. Then calculate
60*p - $23,730.44 to answer the question.
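If you want to check the numbers, here is a quick Python sketch of that recipe; the payment comes from the standard level-payment annuity formula, and it is consistent with the balances quoted above:

rate = 0.04948 / 12                  # monthly interest rate
n = 30 * 12                          # total number of monthly payments
principal = 225_000.0

# Level payment: p = P*i / (1 - (1 + i)**-n)
p = principal * rate / (1 - (1 + rate) ** -n)
print(round(p, 2))                   # monthly payment, about 1200.7

paydown = 206_462.32 - 182_731.88    # principal repaid during years 6-10
interest = 60 * p - paydown          # 60 payments minus the paydown
print(round(interest, 2))            # interest paid between years 5 and 10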
April 20th 2010, 03:10 AM
TKHunny is absolutely right, to paraphrase Newton, "The mechanical mathematician at the slightest hint of change will be left in limbo". | {"url":"http://mathhelpforum.com/business-math/140127-annuity-print.html","timestamp":"2014-04-18T04:59:15Z","content_type":null,"content_length":"6075","record_id":"<urn:uuid:c76b7e2f-5cbc-48e5-b71f-fd3fe6ae2040>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00084-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the least common multiple of 12 18 and 24?
Elementary arithmetic is the simplified portion of arithmetic which includes the operations of addition, subtraction, multiplication, and division.
Elementary arithmetic starts with the natural numbers and the written symbols (digits) which represent them. The process for combining a pair of these numbers with the four basic operations
traditionally relies on memorized results for small values of numbers, including the contents of a multiplication table to assist with multiplication and division.
In arithmetic and number theory, the least common multiple (also called the lowest common multiple or smallest common multiple) of two integers a and b, usually denoted by lcm(a, b), is the smallest
positive integer that is divisible by both a and b. Since division of integers by zero is undefined, this definition has meaning only if a and b are both different from zero. However, some authors
define lcm(a, 0) = 0 for all a, which is the result of taking the lcm to be the least upper bound in the lattice of divisibility.
The LCM is familiar from grade-school arithmetic as the "least common denominator" (LCD) that must be determined before fractions can be added, subtracted or compared.
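To answer the question in the title: 12 = 2²·3, 18 = 2·3², and 24 = 2³·3, so the least common multiple is 2³·3² = 72. In code, the usual route is the identity lcm(a, b) = |a·b| / gcd(a, b), folded across the list:

from math import gcd
from functools import reduce

def lcm(a, b):
    return abs(a * b) // gcd(a, b)   # lcm via the gcd identity

print(reduce(lcm, [12, 18, 24]))     # -> 72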
Noun: BMI
1. A number reflecting human weight in relationship to height, defined as weight in kilograms divided by the square of the height in meters
- body mass index
2. A measure of someone's weight in relation to height; to calculate one's BMI, multiply one's weight in pounds by 703 and divide that by the square of one's height in inches; overweight is a BMI
greater than 25; obese is a BMI greater than 30
- body mass index
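Both definitions compute the same quantity in different units; a minimal sketch in Python (illustrative function names):

def bmi_metric(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bmi_imperial(weight_lb, height_in):
    return 703 * weight_lb / height_in ** 2

print(round(bmi_metric(70, 1.75), 1))     # -> 22.9
print(round(bmi_imperial(154, 69), 1))    # -> 22.7 (unit rounding differs slightly)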
Type of: index, index number, indicant, indicator
Encyclopedia: BMI | {"url":"http://www.wordwebonline.com/en/BMI","timestamp":"2014-04-17T15:37:38Z","content_type":null,"content_length":"8170","record_id":"<urn:uuid:afb15480-60a6-440f-aec4-7ecd93b5143b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00263-ip-10-147-4-33.ec2.internal.warc.gz"} |
This book is designed for a three-semester or four-quarter calculus course covering single variable and multivariable calculus for mathematics, engineering, and science majors.
Drawing on their decades of teaching experience, William Briggs and Lyle Cochran have created a calculus text that carries the teacher’s voice beyond the classroom. That voice–evident in the
narrative, the figures, and the questions interspersed in the narrative–is a master teacher leading students to deeper levels of understanding. The authors appeal to students’ geometric intuition to
introduce fundamental concepts and lay the foundation for the more rigorous development that follows. Comprehensive exercise sets have received praise for their creativity, quality, and scope.
CourseSmart textbooks do not include any media or print supplements that come packaged with the bound book.
Table of Contents
1. Functions
1.1 Review of Functions
1.2 Representing Functions
1.3 Trigonometric Functions and Their Inverses
2. Limits
2.1 The Idea of Limits
2.2 Definitions of Limits
2.3 Techniques for Computing Limits
2.4 Infinite Limits
2.5 Limits at Infinity
2.6 Continuity
2.7 Precise Definitions of Limits
3. Derivatives
3.1 Introducing the Derivative
3.2 Rules of Differentiation
3.3 The Product and Quotient Rules
3.4 Derivatives of Trigonometric Functions
3.5 Derivatives as Rates of Change
3.6 The Chain Rule
3.7 Implicit Differentiation
3.8 Related Rates
4. Applications of the Derivative
4.1 Maxima and Minima
4.2 What Derivatives Tell Us
4.3 Graphing Functions
4.4 Optimization Problems
4.5 Linear Approximation and Differentials
4.6 Mean Value Theorem
4.7 L’Hôpital’s Rule
4.8 Antiderivatives
5. Integration
5.1 Approximating Areas under Curves
5.2 Definite Integrals
5.3 Fundamental Theorem of Calculus
5.4 Working with Integrals
5.5 Substitution Rule
6. Applications of Integration
6.1 Velocity and Net Change
6.2 Regions between Curves
6.3 Volume by Slicing
6.4 Volume by Shells
6.5 Length of Curves
6.6 Physical Applications
7. Logarithmic and Exponential Functions
7.1 Inverse Functions
7.2 The Natural Logarithmic and Exponential Functions
7.3 Logarithmic and Exponential Functions with Other Bases
7.4 Exponential Models
7.5 Inverse Trigonometric Functions
7.6 L’Hôpital’s Rule Revisited and Growth Rates of Functions
8. Integration Techniques
8.1 Integration by Parts
8.2 Trigonometric Integrals
8.3 Trigonometric Substitutions
8.4 Partial Fractions
8.5 Other Integration Strategies
8.6 Numerical Integration
8.7 Improper Integrals
8.8 Introduction to Differential Equations
9. Sequences and Infinite Series
9.1 An Overview
9.2 Sequences
9.3 Infinite Series
9.4 The Divergence and Integral Tests
9.5 The Ratio, Root, and Comparison Tests
9.6 Alternating Series
10. Power Series
10.1 Approximating Functions with Polynomials
10.2 Power Series
10.3 Taylor Series
10.4 Working with Taylor Series
11. Parametric and Polar Curves
11.1 Parametric Equations
11.2 Polar Coordinates
11.3 Calculus in Polar Coordinates
11.4 Conic Sections
12. Vectors and Vector-Valued Functions
12.1 Vectors in the Plane
12.2 Vectors in Three Dimensions
12.3 Dot Products
12.4 Cross Products
12.5 Lines and Curves in Space
12.6 Calculus of Vector-Valued Functions
12.7 Motion in Space
12.8 Length of Curves
12.9 Curvature and Normal Vectors
13. Functions of Several Variables
13.1 Planes and Surfaces
13.2 Graphs and Level Curves
13.3 Limits and Continuity
13.4 Partial Derivatives
13.5 The Chain Rule
13.6 Directional Derivatives and the Gradient
13.7 Tangent Planes and Linear Approximation
13.8 Maximum/Minimum Problems
13.9 Lagrange Multipliers
14. Multiple Integration
14.1 Double Integrals over Rectangular Regions
14.2 Double Integrals over General Regions
14.3 Double Integrals in Polar Coordinates
14.4 Triple Integrals
14.5 Triple Integrals in Cylindrical and Spherical Coordinates
14.6 Integrals for Mass Calculations
14.7 Change of Variables in Multiple Integrals
15. Vector Calculus
15.1 Vector Fields
15.2 Line Integrals
15.3 Conservative Vector Fields
15.4 Green’s Theorem
15.5 Divergence and Curl
15.6 Surface Integrals
15.7 Stokes’ Theorem
15.8 Divergence Theorem
Purchase Info
With CourseSmart eTextbooks and eResources, you save up to 60% off the price of new print textbooks, and can switch between studying online or offline to suit your needs.
Once you have purchased your eTextbooks and added them to your CourseSmart bookshelf, you can access them anytime, anywhere.
Calculus, CourseSmart eTextbook
Format: Safari Book
$96.99 | ISBN-13: 978-0-321-69171-2 | {"url":"http://www.mypearsonstore.com/bookstore/calculus-coursesmart-etextbook-0321691717","timestamp":"2014-04-19T17:28:15Z","content_type":null,"content_length":"19648","record_id":"<urn:uuid:7acd309d-358f-4f06-a33f-02f89110b360>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
Buffon's needle problem is a question first posed in the 18th century by Georges-Louis Leclerc, Comte de Buffon:
Suppose we have a floor made of parallel strips of wood, each the same width, and we drop a needle onto the floor. What is the probability that the needle will lie across a line between two
Using integral geometry, the problem can be solved, and the result can be used as a Monte Carlo method to approximate π.
The problem in more mathematical terms is: given a needle of length $l$ dropped on a plane ruled with parallel lines $t$ units apart, what is the probability that the needle will cross a line?
Let $x$ be the distance from the center of the needle to the closest line, let $\theta$ be the acute angle between the needle and the lines, and let $t \ge l$.
The probability density function of $x$ between 0 and $t/2$ is
$f_X(x) = \frac{2}{t}.$
The probability density function of $\theta$ between 0 and $\pi/2$ is
$f_\Theta(\theta) = \frac{2}{\pi}.$
The two random variables, $x$ and $\theta$, are independent, so the joint probability density function is the product
$f_{X,\Theta}(x,\theta) = \frac{4}{t\pi}.$
The needle crosses a line if
$x \le \frac{l}{2}\sin\theta.$
Integrating the joint probability density function gives the probability that the needle will cross a line:
$\int_{\theta=0}^{\pi/2} \int_{x=0}^{(l/2)\sin\theta} \frac{4}{t\pi}\,dx\,d\theta = \frac{2l}{t\pi}.$
For $n$ needles dropped with $h$ of the needles crossing lines, the probability is
$\frac{h}{n} = \frac{2l}{t\pi},$
which can be solved for $\pi$ to get
$\pi = \frac{2ln}{th}.$
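This is easy to simulate directly. The sketch below samples $x$ and $\theta$ uniformly, as in the derivation above, for the case $l \le t$ (the needle length and line spacing chosen here are arbitrary):

import math, random

def estimate_pi(n, l=1.0, t=1.0):
    """Drop n needles of length l on lines spaced t apart (requires l <= t)."""
    hits = 0
    for _ in range(n):
        x = random.uniform(0.0, t / 2)            # distance to nearest line
        theta = random.uniform(0.0, math.pi / 2)  # acute angle to the lines
        if x <= (l / 2) * math.sin(theta):
            hits += 1
    return 2 * l * n / (t * hits)                 # pi = 2*l*n / (t*h)

print(estimate_pi(1_000_000))  # typically within about 0.01 of pi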
Now suppose $t < l$. In this case, integrating the joint probability density function, we obtain:
$\int_{\theta=0}^{\pi/2} \int_{x=0}^{m(\theta)} \frac{4}{t\pi}\,dx\,d\theta,$
where $m(\theta)$ is the minimum between $\frac{l}{2}\sin\theta$ and $\frac{t}{2}$.
Thus, performing the above integration, we see that, when $t < l$, the probability that the needle will cross a line is
$\frac{h}{n} = \frac{2l}{t\pi} - \frac{2}{t\pi}\left(\sqrt{l^2 - t^2} + t\sin^{-1}\left(\frac{t}{l}\right)\right) + 1.$
Lazzarini's estimate
Mario Lazzarini, an Italian mathematician, performed the Buffon's needle experiment in 1901. Tossing a needle 3408 times, he attained the well-known estimate 355/113 for π, which is a very accurate
value, differing from π by no more than 3×10^−7. This is an impressive result, but is something of a cheat, as follows.
Lazzarini chose needles whose length was 5/6 of the width of the strips of wood. In this case, the probability that the needles will cross the lines is 5/(3π). Thus if one were to drop n needles and
get x crossings, one would estimate π as
π ≈ 5/3 · n/x
π is very nearly 355/113; in fact, there is no better rational approximation with fewer than 5 digits in the numerator and denominator. So if one had n and x such that:
355/113 = 5/3 · n/x
or equivalently,
x = 113n/213
one would derive an unexpectedly accurate approximation to π, simply because the fraction 355/113 happens to be so close to the correct value. But this is easily arranged. To do this, one should pick
n as a multiple of 213, because then 113n/213 is an integer; one then drops n needles, and hopes for exactly x = 113n/213 successes.
If one drops 213 needles and happens to get 113 successes, then one can triumphantly report an estimate of π accurate to six decimal places. If not, one can just do 213 more trials and hope for a
total of 226 successes; if not, just repeat as necessary. Lazzarini performed 3408 = 213 · 16 trials, making it seem likely that this is the strategy he used to obtain his "estimate".
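The arithmetic for Lazzarini's reported trial count checks out exactly:

from fractions import Fraction
n, x = 213 * 16, 113 * 16                 # 3408 tosses, 1808 crossings
print(Fraction(5, 3) * Fraction(n, x))    # -> 355/113, about 3.1415929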
Helical Antennas in Satellite Radio Channel
1. Introduction
Monofilar and multifilar helical antennas are the most widely proposed antennas in satellite communications systems. The main reason why these antennas constitute an asset in applications concerning
satellite and space communications generally is circular polarization. A good axial ratio provides precise measurement of the polarization of the received signal, owing to the immunity of the circularly
polarized wave to the Faraday rotation experienced by a signal propagating through the ionosphere.
In addition to circular polarization, monofilar helical antennas offer the advantage of high gain in axial direction over a wide range of frequencies which makes them suitable for applications in
broadband satellite communications. Split beam and conical beam radiation patterns of bifilar and quadrifilar helical antennas respectively, offer even more applications in mobile satellite
communications (Kilgus, 1975; Nakano et al., 1991). Also, backfire helical antenna has stood out as a better feed element for parabolic reflector than the axial mode helical antenna and horn antennas
(Nakano et al., 1988). Besides the number of wires in the helical antenna structure, it is possible to use the antenna's physical parameters to control the directivity pattern. The phase velocity of the current
can be controlled by changing the pitch angle and circumference (Kraus, 1988; Mimaki & Nakano, 1998), and the ground plane can be varied in its size and shape to achieve a certain form of radiation
pattern and higher antenna gain (Djordjevic et al., 2006; Nakano et al., 1988; Olcan et al., 2006). Various materials used in helical antenna design, even only for the purpose of mechanical support
or isolation, can noticeably influence the antenna's performance, so this should be taken into account when designing and modeling the desired helical antenna structure (Casey & Basal, 1988a;
Casey & Basal, 1988b; Hui et al., 1997; Neureuther et al., 1967; Shestopalov et al., 1961; Vaughan & Andersen, 1985).
A theoretical study of the sheath, tape and wire helix given in (Sensiper, 1951) provided the basis for a physical model of the helical antenna radiation mechanism. The complex solutions of the
determinantal equation for the propagation constants of the surface waves traversing a finite tape helix are used to calculate the current distribution on helical antenna in (Klock, 1963). The
understanding of the waves propagating on the helical antenna structure can also provide a good assessment of the circular polarization purity as well as the estimation of varying the helical antenna
radiation characteristics by changing the antenna’s physical parameters and using various materials in helical antenna design (Maclean & Kouyoumjian, 1959; Neureuther et al., 1967; Vaughan &
Andersen, 1985). Although an analytical approach can sometimes provide a fast approximation of helix radiation properties (Maclean & Kouyoumjian, 1959), generally it is a very complicated procedure
for an engineer to apply efficiently and promptly to a specified helical antenna design. Therefore, we combine the analytical with the numerical approach, i.e. the thorough understanding of the
wave propagation on the helix structure with an efficient calculation tool, in order to obtain the best method for analyzing the helical antenna.
In this chapter, a theoretical analysis of monofilar helical antenna is given based on the tape helix model and the antenna array theory. Some methods of changing and improving the monofilar helical
radiation characteristics are presented as well as the impact of dielectric materials on helical antenna radiation pattern. Additionally, backfire radiation mode formed by different sizes of a ground
reflector is presented. The next part is dealing with theoretical description of bifilar and quadrifilar helices which is followed by some practical examples of these antennas and matching solutions.
The chapter is concluded with the comparison of these antennas and their application in satellite communications.
2. Monofilar helical antennas
The helical antenna was invented by Kraus in 1946 whose work provided semi-empirical design formulas for input impedance, bandwidth, main beam shape, gain and axial ratio based on a large number of
measurements and the antenna array theory. In addition, the approximate graphical solution in (Maclean & Kouyoumjian, 1959) offers a rough but also a fast estimation of helical antenna bandwidth in
axial radiation mode. The conclusions in (Djordjevic et al., 2006) established optimum parameters for helical antenna design and revealed the influence of the wire radius on antenna radiation
properties. The optimization of a helical antenna design was accomplished by a great number of computations of various antenna parameters providing straightforward rules for a simple helical antenna
Beyond the conventional design, the monofilar helical antenna allows many modifications governed by its geometry (Adekola et al., 2009; Kraft & Monich, 1990; Nakano et al., 1986; Wong & King,
1979), the size and shape of the reflector (Carver, 1967; Djordjevic et al., 2006; Nakano et al., 1988; Olcan et al., 2006), the shape of the windings (Barts & Stutzman, 1997; Safavi-Naeini & Ramahi, 2008),
the various guiding (and supporting) structures added (Casey & Basal, 1988a; Casey & Basal, 1988b; Hui et al., 1997; Neureuther et al., 1967; Shestopalov et al., 1961; Vaughan & Andersen, 1985),
and others. This variety of possibilities to slightly modify the basic design and still obtain a helical antenna with excellent radiation properties and numerous applications is the
motivation behind the great number of helical antenna studies worldwide.
2.1. Helix as an antenna array
A simple helical antenna configuration, consisting of a perfectly conducting helical conductor wound around an imaginary cylinder of radius a with some pitch angle ψ, is shown in Fig. 1. The
conductor is assumed to be a flat tape of infinitesimal thickness in the radial direction and narrow width δ in the azimuthal direction. The antenna geometry is described by the following
parameters: circumference of the helix C = πD, spacing p between successive turns, diameter of the helix D = 2a, pitch angle ψ = tan⁻¹(p/πD), number of turns N, total length of the antenna L = Np, and total
length of the wire L_w = N·L_0, where L_0 = (C² + p²)^(1/2) is the wire length of one turn.
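Since these relations are pure geometry, they are convenient to check in code; a small Python sketch (the parameter names are illustrative):

import math

def helix_geometry(D, p, N):
    """Derived helix parameters from diameter D, turn spacing p and N turns."""
    C = math.pi * D                        # circumference
    psi = math.degrees(math.atan2(p, C))   # pitch angle in degrees
    L0 = math.hypot(C, p)                  # wire length of one turn
    return {"C": C, "psi_deg": psi, "L": N * p, "L_wire": N * L0}

# The 12-turn antenna analysed in Section 2.2 (D = 42 mm, p = 33 mm):
print(helix_geometry(0.042, 0.033, 12))    # psi_deg ~ 14.0; C ~ 0.132 m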
Considering the tape is narrow, δ << λ, p, a, assuming the existence of electric and magnetic currents in the direction of the antenna axis of symmetry and applying the boundary conditions on the
surface of the helix, we can derive the field expressions for each existing free mode as the sum of an infinite number of space harmonics caused by the helix periodicity, with the propagation constants
h_m = h + 2πm/p, where m is an integer (Sensiper, 1951). Knowing the field components at the antenna surface, the far field in spherical coordinates (R, θ, ϑ) for each existing mode can be obtained
by the Kirchhoff-Huygens method. The contribution to the radiated field of each space harmonic can be written in the form of the product of an element factor and an array factor; thus the total
radiated electric field caused by the particular mode is expressed as (Cha, 1972; Kraus, 1948; Shestopalov, 1961; Vaughan & Andersen, 1985):
The element factors F_θm and F_ϑm represent the contribution of each turn to the total field at some far point of space due to the m-th cylindrical space harmonic, and are determined as:
where E^a_θm, E^a_ϑm and H^a_θm, H^a_ϑm are the m-th cylindrical space harmonic amplitudes of the electric and magnetic field spherical components at the antenna surface,
respectively, k = 2πf√(μ₀ε₀) = 2πf/c is the free-space wave number, Z₀ = √(μ₀/ε₀) = 120π Ω is the impedance of free space, and J_m = J_m(ka sin ϑ) is the ordinary Bessel function of the first kind and order m. The
complex array factor G_m is calculated for each space harmonic as:
where Φ_m is the phase difference for the m-th harmonic between successive turns:
Unlike the element factor, the array factor defines the directivity and does not influence the polarization properties of the antenna. It is found (Kraus, 1949) that, although (3) and (4) are
different in form, the patterns (1) and (2) for the entire helix are nearly the same, and the same could also be stated for the dielectrically loaded antenna. Furthermore, the main lobes of the E_θ
and E_ϑ patterns are very similar to the array factor pattern. Hence, the calculation of the array factor alone suffices for estimation of the antenna properties, at least for long helices.
Assuming only a single travelling wave on the helical conductor, following (1)-(2), a helix antenna can be depicted as an array of isotropic point sources separated by the distance p, as in Fig. 2.
The normalized array factor is:
This is justified as the absolute values of (5) and (7) are approximately equal, and small differences become noticeable only for N ≤ 5. Denoting the phase difference for the fundamental space harmonic of the
axial mode as Φ_0 = Φ in (6), the Hansen-Woodyard condition for the maximum directivity in the axial direction (ϑ = 0) states that (Maclean & Kouyoumjian, 1959):
Ideally, applying (6)-(8), the radiation characteristics of the helical antenna and the antenna geometry can be directly connected through a single variable, the velocity v of the surface wave (Kraus, 1949;
Maclean & Kouyoumjian, 1959; Nakano et al., 1986; Wong & King, 1979). As the wave velocities in a finite helix are hard to calculate, those calculated for the infinite
helix can be applied as a fair approximation. The determinantal equation for the wave propagation constants on an infinite helical waveguide is given and analyzed in (Klock, 1963; Mittra, 1963;
Sensiper, 1951, 1955) and generalized forms of the equation for helices filled with dielectrics are considered in (Blazevic & Skiljo, 2010 ; Shestopalov et al. 1961; Vaughan & Andersen, 1985). The
solutions are obtained in a form of the Brillouin diagram for periodic structures, which dispersion curves are symmetrical with respect to the ordinate (the circumference of the helix in
wavelengths). The calculated propagation constants (phase velocities) of free modes are real numbers settled within the triangles defined by lines ka=±ha∓|m|cotψ , among which those with |m| = 1
comply with the condition (8) for infinite arrays. The m = 0 and m = −1 regions of the diagram refer to the so called normal and the axial mode, respectively. The Brillouin diagram provide the
information about the group velocity of the surface waves calculated as the slope of the dispersion curves at given frequency. It is important to note that the phase and group velocities on the helix
may have opposite directions. When the circumference of the helix is small compared to the wavelength, the normal mode dominates over the others and the maximum radiated field is perpendicular to
helix axis. These electric field components are then out of phase so the total far field is usually elliptically polarized. Due to the narrow bandwidth of radiation, the normal mode helical antenna
is limited to narrow band applications (Kraus, 1988). Axial radiation mode is obtained when the circumference of helix is approximately one wavelength, achieving a constructive interference of waves
from the opposite sides of turns and creating the maximum radiation along the axis. Helical antenna in the axial mode of radiation is a circularly polarized travelling-wave wideband antenna.
However, due to the assumption of the existence of only a single travelling wave, the modeling of helical antenna as a finite length section of the helical waveguide has some practical shortcomings,
which becomes more problematical as the antenna length becomes shorter. Consider an example of the typical axial mode current distribution on Fig. 3, obtained at C [ λ ] = 1.0 for the helical antenna
with ψ = 14° and N = 12. We may observe three regions: the exponential decaying region away from the source, the surface wave region after the first minimum and the standing wave due to reflection of
the outgoing wave at the open antenna end. The works of (Klock, 1963; Kraus, 1948, 1949; Marsh, 1950) showed that the approximate current distribution can be estimated assuming two main current
waves, one with a complex valued phase constant settled in the region of normal mode (m = 0) that forms a standing wave deteriorating antenna radiation pattern, and one with real phase constant in
the region of the axial mode (m = – 1) that contributes to the beam radiation.
An analytical procedure of satisfying accuracy for determining the relationship between the powers of the surface waves traversing an arbitrarily sized helical antenna may still be sought using a
variational technique, assuming the existence of only two principal propagation modes (normal and axial) and a sinusoidal current distribution for each of them, taking into account the velocities
calculated for the infinite helical waveguide, as shown by (Klock, 1963). However, as the formula for the total current on the helix involves integrals of a very complex form, one may rather chose to
use the classical design data given in (Kraus, 1988) which, for helices longer than three turns, define the optimum design parameters in a limited span of the pitch angles in the frequency range of
the axial mode. The semi-empirical formulas for antenna gain G in dB, input impedance R in ohms, half power beam-width HPBW in degrees and axial ratio AR, are given by:
Because of the traveling-wave nature of the axial-mode helical antenna, the input impedance is mainly resistive and frequency insensitive over a wide bandwidth of the antenna and can be estimated by
(10). The discrepancy from a pure circular polarization, described with axial ratio AR, depends on the number of turns N and it approaches to unity as the number of turns increases. It is interesting
to note that this formula is obtained by Kraus using a quasi-empirical approach where the phase velocity is assumed to always satisfy the Hansen- Woodyard condition for increased directivity. The
reflected current degrades desired polarization in forward direction and by suppressing it (with tapered end for example); the formula (11) becomes more accurate (Vaughan & Andersen, 1985). However,
King and Wong reported that without the end tapering the axial ratio formula often fails (Wong & King, 1982). Also, based on a great number of experimental results, they established that in the
equation (13), valid for 12° < ψ< 15°, 3/4 < C/λ< 4/3 and N> 3, numerical factor can be much lower than 15, usually between 4.2 and 7.7 (Djordjevic et al., 2006), providing a different expression for
the helical antenna gain:
where λ_p is the wavelength at peak gain.
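These design estimates are straightforward to evaluate. The sketch below uses the classical forms from the cited literature: Kraus's G = 15·N·C_λ²·S_λ (as a power ratio), R = 140·C_λ, HPBW = 52°/(C_λ·√(N·S_λ)) and AR = (2N+1)/(2N), together with the King-Wong peak-gain expression G = 8.3·(πD/λ_p)^(√(N+2)−1) · (N·S/λ_p)^0.8 · (tan 12.5°/tan ψ)^(√N/2). For the 12-turn antenna of Section 2.2 at 2.43 GHz, they reproduce (to within rounding) the 17.44 dB and 13.21 dB figures quoted there.

import math

def kraus_estimates(C_lam, S_lam, N):
    """Classical Kraus design estimates for an axial-mode helix."""
    G = 15 * N * C_lam ** 2 * S_lam                 # gain as a power ratio
    return {"G_dB": 10 * math.log10(G),
            "R_ohm": 140 * C_lam,
            "HPBW_deg": 52 / (C_lam * math.sqrt(N * S_lam)),
            "AR": (2 * N + 1) / (2 * N)}

def king_wong_gain_dB(D, S, N, psi_deg, lam_p):
    """King-Wong empirical peak-gain expression."""
    g = (8.3 * (math.pi * D / lam_p) ** (math.sqrt(N + 2) - 1)
         * (N * S / lam_p) ** 0.8
         * (math.tan(math.radians(12.5)) / math.tan(math.radians(psi_deg)))
         ** (math.sqrt(N) / 2))
    return 10 * math.log10(g)

lam = 3e8 / 2.43e9                                    # wavelength at 2.43 GHz
print(kraus_estimates(0.132 / lam, 0.033 / lam, 12))  # G ~ 17.4 dB
print(king_wong_gain_dB(0.042, 0.033, 12, 14, lam))   # ~ 13.2 dB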
The existence of multiple free modes on a helical antenna makes the theoretical analysis even more complicated when a dielectric loading is introduced. Consider two examples of the Brillouin diagram
in the region m = −1 for the case of ψ = 13º, δ = 1 mm, N = 10 given on Fig. 4 a) and b) respectively. The first refers to the empty helix and the second to the helix filled uniformly with a lossless
dielectric of relative permittivity ε_r = 6. The A points mark the intersections of the dispersion curves of the determinantal equation with the line defined by the Hansen-Woodyard condition (8).
Obviously, their positions depend on the number of turns. Point B marks the calculated upper frequency limit of the axial mode, f_B, i.e. the frequency at which the SLL is increased to 45% of the
main beam, the criterion adopted from (Maclean & Kouyoumjian, 1959). In the case of helical antenna with dielectric core, due to the difference in permittivity of the antenna core and surrounding
media, it can be noted that the solutions shape multiple branches. It can also be shown that the number of branches increases rapidly by increasing the permittivity and decreasing the pitch angle.
The existence of multiple axial modes, as in Fig. 4 b), implies the possibility of a number of optimal frequencies (A points), one for each axial mode. However, if the permittivity is
high enough and the pitch angle low enough, the power of the lowest axial mode may be found to be insufficient to shape significant beam radiation. Then the solution A on the lowest mode branch of
the dispersion curve is settled below the minimum beam-mode frequency f_L. This frequency limit marks the frequency at which the axial mode power starts to dominate over the normal mode power. It
is usually determined as the lowest frequency at which the circular polarization is formed i.e. the axial ratio is less than two. Also, the HPBW of the main lobe falls below 60 degrees but this
criterion can be strictly applied only for longer helices (longer than ten turns). As the working frequency starts to surmount this limit, the current magnitude distribution is transformed steadily
toward the classical shape of the axial mode current (Kraus, 1988) as in Fig. 3. Also, as the classical current distribution forms, the character of the input impedance starts to be mainly real. It
is found in (Maclean & Kouyoumjian, 1959) that the lower limit remains approximately constant regardless of the antenna length. This fact is confirmed for the dielectrically loaded helices as well in
(Blazevic & Skiljo, 2010). It is also noted that the change in the maximum axial mode frequency with varying permittivity and pitch angle as the consequence of the change of the surface wave group
velocity is much more emphasized than the change of the minimum frequency. This means that, as the optimal frequency becomes lower, the axial mode bandwidth shrinks. The overall effect of the
permittivity and pitch angle on the fractional axial mode bandwidth (defined as the ratio of the bandwidth and twice the central frequency) for the various antenna lengths is depicted on Fig. 5.
2.2. Impact of materials used in helical antenna design
A frequently used antenna is the conventional monofilar helical structure wrapped around a hollow dielectric cylinder providing a good mechanical support, especially for thin and long helical
antennas. Commercially manufactured helical antennas are often covered entirely with low-loss dielectric material, while in amateur applications low-cost lossy materials are sometimes used instead.
The properties of the various materials used in antenna design, and their selection, can be of great importance for meeting the required antenna performance, and the purpose of this chapter is
to provide an insight into their influence based on a practical example.
The CST Microwave Studio was used to analyze the impact of various materials and their composition on helical antenna design and optimal performance. Since the chapter focuses on longer antennas, a
12-turn helix was chosen. We created the helical structure with the following parameters: f = 2430 MHz, D = 42 mm, C = 132 mm, p = 33 mm, L = 396 mm, N = 12, a = 1 mm and ψ = 14°. Instead of the infinite
ground plane commonly used in numerical simulations, we formed a round reflector with a diameter of D_r = 17 cm to be closer to the widespread practical design. The resistance of the source is
selected to be 50 Ω and the thickness of the dielectric tube in the practical design is 1 mm.
The antenna shown in Fig. 6 a) is the reference model of the helical antenna constructed of a perfectly conducting helical conductor and a finite size circular reflector using the hexahedral mesh.
The simulation results in Fig. 7 demonstrate the influence of the applied materials on the antenna VSWR and gain in the frequency band 1.8-2.8 GHz. Each material was examined separately, except for the
practical design of the antenna, which included all the materials used. The first step toward the practical design of the helical antenna depicted in Fig. 6 a) was the replacement of the PEC material with
copper, which produced negligible effects on the antenna parameters, as expected. A lossy dielectric wire coating added to the reference model, with permittivity and conductivity selected to be ε_r = 3
and σ = 0.03 S/m, however, caused a noticeable change in the overall antenna performance. The antenna input impedance is decreased, where primarily the capacitive reactance is decreased because of the
higher permittivity along the helical conductor. Also, the gain is decreased and the frequency bandwidth of the antenna is shifted to somewhat lower frequencies. The empty dielectric tube (EDT),
often used as a mechanical support for long antennas, is analyzed in two steps. First, a lossless EDT (with ε_r = 3) added to the reference model produced a gain decrease and the bandwidth shift. At
the same time, the antenna input impedance decreases causing the improvement of VSWR. When the conductivity of σ = 0.03 S/m is added in second step, these effects are much more emphasized, especially
for the antenna gain.
Comparing the obtained antenna gain of 13.96 dB at f = 2.43 GHz for the reference PEC model with (9) and (13), where the calculated gains are G = 17.44 dB and G = 13.21 dB respectively, it is found that the first formula is, as expected, too optimistic, and the second one is acceptable for a ready estimation of helical antenna gain. Compared with the reference, the final practical antenna design, comprising the copper helical wire covered with lossy dielectric coating, wound around the lossy dielectric tube, with the finite-size circular reflector, achieves a gain of 10.91 dB at 2.43 GHz and a peak gain of 13.18 dB at 2.2 GHz. Thus, in comparison with the PEC helical antenna in free space, the practical antenna performance is significantly influenced by the dielectric coating and the supporting EDT.
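Formulas (9) and (13) are not reproduced in this excerpt, but the quoted 17.44 dB matches Kraus's classic estimate G ≈ 15 N (C/λ)^2 (S/λ) evaluated at the design values above. Treating (9) as that formula (an assumption on our part), a quick check in Python:

```python
import math

lam = 3e8 / 2.43e9          # free-space wavelength, ~0.1235 m
C_l = 0.132 / lam           # circumference in wavelengths, ~1.07
S_l = 0.033 / lam           # turn spacing in wavelengths, ~0.27
N = 12

G = 15 * N * C_l**2 * S_l                  # Kraus's axial-mode gain estimate
print(f"{10 * math.log10(G):.2f} dBi")     # ~17.4 dBi, vs. the quoted 17.44 dB
```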
2.3. Changing the parameters of helix to achieve better radiation characteristics
High antenna gain and good axial ratio over a broad frequency band are easily achieved by various designs of a helical antenna which can take many forms by varying the pitch angle (Mimaki and Nakano,
1998; Nakano et al., 1991 ; Sultan et al., 1984), the surrounding medium (Bulgakov et al., 1960; Casey and Basal, 1988 ; Vaughan and Andersen, 1985) and the size and shape of reflector (Djordjevic
et al., 2006; Nakano et al., 1988; Olcan et al., 2006). In this chapter, we introduce a design of the helical antenna obtained by combining two methods to improve the radiation properties of this
antenna; one is changing the pitch angle, i.e. combining two pitch angles (Mimaki and Nakano, 1998; Sultan et al., 1984) and the other is reshaping the round reflector into a truncated cone reflector
(Djordjevic et al., 2006; Olcan et al., 2006).
It is shown (Mimaki and Nakano, 1998) that the double pitch helical antenna radiates in the endfire mode with slightly higher gain over a wider bandwidth. Two pitch angles were investigated, 2° and 12.5°, along different lengths of the antenna, and their relative lengths were varied in order to obtain a wider bandwidth with higher antenna gain. In (Skiljo et al., 2010) the axial mode bandwidth was examined by means of the parameters defining the limits of the axial radiation mode: axial ratio, HPBW, side lobe level (SLL) and total gain in the axial direction, whereas the method of changing the pitch angle was applied to a helical antenna wound around a hollow dielectric cylinder with a pitch angle of 14°. The maximum gain of the antennas with variable length ratio h/H, where h is the length of the antenna section with pitch angle ψ [ h ] = 2° and H is the rest of the antenna with ψ [ H ] = 12.5°, is achieved with h/H = 0.26 (Mimaki and Nakano, 1998; Skiljo et al., 2010).
Various shapes of ground plane were considered: infinite ground plane, square conductor, cylindrical cup and truncated cone, of which the last produced the highest gain increase relative to the
infinite ground plane. So, we used the truncated cone reflector with optimal cone diameters D [ 1 ] = 1.3λ and D [ 2 ] = 0.4λ and height h = 0.5λ in order to maximize the gain of the previously
simulated double pitch helical antenna (Skiljo et al., 2010). Applying the criteria for the cut-off frequencies of the axial mode from chapter 2.1, it is observed that the bandwidth of the axial mode
is not increased (it is slightly shifted towards lower frequencies) by using two pitch angles and a truncated cone reflector. Fig. 8 a) shows the antenna model used in chapter 2.2 with a non-loss dielectric tube (ε [ r ] = 3) and b) the simulated double pitch helical antenna.
The results in Fig. 9 show that the HPBW is mostly better in the case of the truncated cone reflector but worse with the round reflector, and that the antenna gain is improved when using the truncated cone. Also, Fig. 9 b) shows a significant gain increase of the double pitch helical antenna with the truncated cone reflector in comparison with the standard one around 2.4 GHz, but the bandwidth of the antenna gain is not increased.
2.4. Backfire monofilar helical antenna
This chapter gives information about the effect of the ground plane size on the helical antenna radiation characteristics. It is found that as the diameter of the reflector decreases, the
backfire radiation occurs and at the ground plane diameter smaller than the helix diameter it becomes dominant (Nakano et al., 1988). The analysis of a monofilar backfire helix was carried out
through the example from chapter 2.1: λ = 12.34 cm, ψ = 14°, N = 12, r [ w ] = 0.008λ and D = 0.34λ with the reflector diameter of d = 1.38λ. This antenna can also be used in the form of monofilar
backfire helix in the focus of a paraboloidal reflector. The results of simulations performed in FEKO show the radiation patterns and current distributions of the helical antennas with three
different diameters of ground plane, d [ 1 ] = 0.7λ, d [ 2 ] = 0.35λ and d [ 3 ] = 0.3λ. In Fig. 10 a) the helical antenna operates in the standard axial mode, radiating in the forward direction; the relative phase velocity p = v/c satisfies the Hansen-Woodyard in-phase condition, and the current distribution shows that a surface wave is formed after the first minimum. There are no great discrepancies between this antenna and the one with a larger reflector, as expected. As the diameter of the reflector decreases below 0.5λ, the decaying region of the current distribution (Fig. 10) shifts slightly toward the end and becomes comparable to the surface-wave region of the current. Also, the amplitude of the current in the surface-wave region decreases, meaning that the backward radiation becomes larger. The antenna in Fig. 10 c) is the typical backfire monofilar helical antenna, with a current distribution consisting only of a decaying current and a relative phase velocity nearly equal to one. It can be noticed that the forward and backward wave helical antennas achieve good but opposite-sense circular polarization (Nakano et al., 1988).
3. Multifilar helical antennas
Besides the parameter modifications of the monofilar helical antenna, multiple wires in the helix structure also offer interesting radiation performance for satellite communications. While monofilar helices are usually employed in transmission (Kraus, 1988), the multifilar helical antennas, bifilar and quadrifilar, are mostly utilized at reception, where wide beamwidth coverage is needed to track as many of the visible satellites as possible (Kilgus, 1974; Lan et al., 2004).
3.1. The bifilar helical antenna
Patton was the first to describe the bifilar helical antenna (BHA) with backfire radiation, achieving maximum directivity just above the cut-off frequency of the main mode of the helical waveguide. The beamwidth broadens with frequency and, for pitch angles of about forty-five degrees, the beam splits and turns into a scanning mode toward the broadside direction. As opposed to the monofilar helical antenna, the backfire BHA radiates toward the feed point, its gain is independent of length (provided that the length is large enough) and the beamwidth increases with frequency (Patton, 1962).
The backfire bifilar helix is often used as a feed antenna because of its high efficiency, circularly polarized backward wave and low aperture blockage. In mobile handsets and on various aerodynamic surfaces requiring low-profile antennas, a side-fed bifilar helical antenna can be used, which produces a slant 45° linearly polarized omnidirectional toroidal pattern providing higher diversity gain in all directions (Amin et al., 2007).
In order for the bifilar helix to operate as a backfire antenna, it is necessary that the currents flowing from the terminals to the ends of the two helices are out of phase and the currents in the reversed direction are in phase. Hence, no radiation in the forward direction is possible. This can be explained by the nature of the backward current wave, whose phase progresses toward the feed while the group velocity points away from the feed point. A ground plane is not necessary in bifilar helical antenna design, but this antenna usually achieves a poor front-to-back (F/B) ratio, which
can cause interference problems when used as a receiving antenna. However, bifilar helical antenna with tapered feed end improves F/B ratio as well as the antenna power gain and axial ratio in
comparison with conical and standard bifilar helical antenna (Yamauchi et al., 1981).
The BHA simulations are carried out in the FEKO software on the basis of the following parameters (Yamauchi et al., 1981): the wavelength λ = 10 cm, circumference of the helical cylinder C = λ, the pitch angle ψ = 12.5°, wire radius r = 0.005λ, tapering cone angle θ = 12.5°, and the number of turns in the tapered section n [ t ] = 2.3 and in the uniform section n [ u ] = 3. Three types of BHA with the same axial length were simulated: standard, conical and tapered BHA, Fig. 11 a). The tapered BHA consists of two sections of equal axial lengths, one corresponding to the first half of the conical BHA and the other to half of the standard BHA. According to the radiation patterns in Fig. 11 b) and the results given in Table 1, the tapered BHA provides the best BHA performance considering the F/B ratio and gain, with a satisfactory axial ratio and decreased HPBW. It is important to note that the conical and tapered BHAs give better radiation characteristics than the standard BHA. Further investigation of the tapered BHA in terms of height reduction, motivated by the growing need for antenna miniaturization, shows that good BHA performance can be achieved with an even smaller tapered bifilar helical antenna. The height of this antenna was reduced in steps of one spacing of the standard BHA (p = C tan ψ) and the results are summarized in Table 2. The simulations for the reduced versions of the tapered BHA yielded the best results for the one with n [ u ] = 1 and n [ t ] = 2.3, which corresponds to 2/3 of the total length of the original BHA, with the geometry and radiation pattern shown in Fig. 12.
In order to reduce the antenna length, Nakano et al. examined a bifilar scanning helical antenna with a large pitch angle terminated with a resistive load. This antenna generates a circularly polarized scanning radiation pattern from backfire to normal. The simulations show the scanning radiation patterns of the bifilar helix with six turns, a pitch angle of 68° and a diameter of 1.6 cm, over the frequency band from 1.3 to 2.5 GHz (Nakano et al., 1991). Fig. 13 illustrates typical radiation patterns, the backfire conical and the normal radiation pattern reaching an antenna gain of 10 dB, Fig. 13 a) and b), respectively.
                                            F/B (dB)   Gain (dB)   AR     HPBW (°)
Tapered BHA (n [ t ] = 1.5, n [ u ] = 3)    15.4       7.1         0.72   90
Tapered BHA (n [ t ] = 0.8, n [ u ] = 3)    11.2       5.7         0.89   120
Tapered BHA (n [ u ] = 0, n [ t ] = 2.3)    7.5        6           0.72   85
Tapered BHA (n [ u ] = 1, n [ t ] = 2.3)    14.8       7.8         0.65   82
Tapered BHA (n [ u ] = 2, n [ t ] = 2.3)    14.0       7.8         0.75   87
Contrary to the monofilar helical antenna, the bifilar helical antenna yields a scanning radiation mode when the relative phase velocity p = v/c = 1.0. This is confirmed by comparing the simulated results with the experimental and calculated results (Nakano et al., 1991; Zimmerman, 2000) for the lobe direction at different values of phase velocity, Fig. 14.
3.2. The quadrifilar helical antenna
The quadrifilar helical antenna (QHA), also known as the Kilgus coil, is mostly used for telemetry, tracking and command (TT&C) satellite systems due to its simplicity, small size, wide circularly
polarized beam and insensitivity to nearby metal objects. The QHA consists of four helical wires equally spaced circumferentially and fed from the top or the bottom. The open-ended QHA generally uses a wire length of λ/4 or 3λ/4 for each element, with a typical input impedance in the range of 10 to 20 ohms, while the short-circuited QHA uses a wire length of λ/2 or λ, which produces a resonant input impedance of nearly 50 ohms. Printed QHAs, convenient for high-frequency applications, are manufactured using a dielectric substrate (Chew et al., 2002; Hanane et al., 2007), while wire QHAs can be implemented on cylindrical, conical, square and spherical dielectric mechanical supports (Casey & Bansal, 2002; Hui et al., 2001). The size reduction of quadrifilar helical antennas can be achieved
with geometrical reduction techniques such as sinusoidal (Fonseca et al., 2009; Takacs et al., 2010), rectangular (Ibambe et al., 2007), meander line (Chew et al., 2002) and other techniques (Letestu
et al., 2006).
The radiation pattern of the fractional-turn resonant QHA is cardioid-shaped and circularly polarized with a wide beamwidth, but by extending the fractional-turn QHA to an integral number of turns a shaped-conical radiation pattern can be obtained for many applications in spacecraft communications (Kilgus, 1975).
The Kilgus coil, consisting of four wires λ/2 long and forming a ½ turn of a helix, generates a cardioid-shaped backfire radiation pattern with circular polarization and a very wide HPBW when the two pairs are fed in phase quadrature and the lower ends are short-circuited (Kilgus, 1968, 1974). The antenna is fed with a split-sheath balun, and the phase quadrature is achieved by adjusting the lengths of the bifilar elements.
The performance of the QHA is described with the following parameters: the length of one element consisting of two radials and a helical section, l [ el ] (an integer number of λ/2), the axial length between the radials, l [ ax ], and the number of turns N. We designed a half-turn QHA for the GPS L2 signal with a central frequency of f = 1220 MHz and the following parameters: l [ el ] = λ/2, wire diameter d = 2 mm, bending radius b [ r ] = 5 mm and width-to-height ratio w/h = 0.44 (the lengths of the wires were adjusted to achieve phase quadrature; w is the longitudinal width and h the axial height, l [ ax ], of the antenna). This is the so-called self-phased QHA, where the wire of one bifilar helix is longer than the resonant length, so that the current has a phase lead of 45°, and the other is shorter in order to achieve a phase lag of 45°. Instead of an infinite balun, we proposed a stripline structure for impedance matching and support for the helical wire. Fig. 15 c) shows that the matching stripline is made of a shorter part designed to counteract the imaginary part of the antenna input impedance and a longer quarter-wave part used to tune the real component of the antenna input impedance to the 50-Ω coaxial line impedance (Sekelja et al., 2009).
In many satellite applications, it is also desirable to concentrate the radiated energy into a shaped conical beam with full cone angles from 120° to 180° (Kilgus, 1975). So, for the same frequency,
f = 1220 MHz, we simulated a three-turn QHA (Fig. 16 a)) fed in phase quadrature with short-circuited ends, which achieves a gain decreasing from the maximum of 5.6 dB at the edge of the cone to a local minimum of -2.5 dB at the centre. The radiation pattern in Fig. 16 b) also shows that this antenna gives an excellent axial ratio.
4. Conclusion
In this chapter, the basic theory and simulations of helical antennas are presented. It is shown that various radiation patterns can be obtained with the conventional helical antenna and its modifications: forward and backward radiation; beam, normal and scanning radiation; and hemispherical to conical-shaped radiation patterns. Circular polarization is easily achieved (except for the normal mode) and it can be improved by end tapering. These modifications include changes to the helix geometry, the size and shape of the reflector, the number of wires, and the addition of guiding structures.
However, real materials used in a practical design must be evaluated for their influence on the overall antenna performance. Thus, while the depicted analytical approach offers a tool for the optimal design and basic analysis of the helical antenna, it becomes too complex (although not completely impossible) to apply to the final decisions of a practical design. The performance of the designed antenna must therefore be tested with a numerical tool or by measurements.
Ragged matrix and histogram image problem
Original poster (Joined: Sep 29, 2009; Posts: 17):
Hi, I am a newbie learning Java. I am currently practicing from the book "A Concise and Practical Introduction to Programming Algorithms in Java". So far I have covered the basics up to arrays; I have not yet reached objects, strings, classes and so on. The problems I am stuck with are from the chapter on arrays:

1. A d-dimensional symmetric matrix M is such that M(i,j) = M(j,i) for all 1 ≤ i, j ≤ d. That is, matrix M equals its transpose matrix: M^T = M. Consider storing only the elements M(i,j) with d ≥ i ≥ j ≥ 1 into a ragged array: double [] [] symMatrix = new double [d][];. Write the array allocation instructions that create a 1D array of length i for each row of the symMatrix. Provide a static function that allows one to multiply two such symmetric matrices stored in "triangular" bi-dimensional ragged arrays.

2. Consider that an image with grey levels ranging in [0, 255] has been created and stored in the regular bi-dimensional data structure byte [] [] img;. How do we retrieve the image dimensions (width and height) from this array? Give a procedure that calculates the histogram distribution of the image. (Hint: Do not forget to perform the histogram normalization so that the cumulative distribution of grey colors sums up to 1.)

I don't want any code for these, as I want to try and code it myself. I just don't understand the problems. Can anyone explain to me what the problems are all about?
About the histogram, do I need to display the width as 0 and height as 255 using a function prototype like DisplayDimensions(byte img[][])? I don't think the problem is that simple.
I really don't get how to normalize the histogram, and with what data. And what is a histogram distribution?
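For readers coming to this thread for the ideas themselves (the poster above asked for explanations rather than code, so treat this purely as a reference sketch): the following is written in Python for brevity, though the book's prompt is Java, where row i of the ragged array would be allocated as symMatrix[i] = new double[i+1]; all names are illustrative.

```python
# Problem 1: triangular storage for a symmetric matrix.
# Row i keeps only entries M[i][0..i], since M[i][j] == M[j][i].
d = 4
sym = [[0.0] * (i + 1) for i in range(d)]          # ragged rows of length 1, 2, ..., d

def get(M, i, j):
    """Read M[i][j] out of triangular storage, swapping indices if needed."""
    return M[i][j] if j <= i else M[j][i]

def multiply_symmetric(A, B, d):
    # Multiply two symmetric matrices held in triangular storage.
    # Note: the product of two symmetric matrices is generally NOT
    # symmetric, so the result is returned as a full d x d matrix.
    return [[sum(get(A, i, k) * get(B, k, j) for k in range(d))
             for j in range(d)] for i in range(d)]

# Problem 2: normalized grey-level histogram of img[height][width].
def histogram(img):
    height, width = len(img), len(img[0])          # dimensions come from the array itself
    counts = [0] * 256
    for row in img:
        for grey in row:
            counts[grey] += 1
    total = height * width
    return [c / total for c in counts]             # normalization: entries now sum to 1
```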
Reply (Joined: Oct 13, 2005; Posts: 36453):
You do realise an algorithms book won't teach about objects? An algorithms book will require you to work out how to calculate something.
Reply (Joined: Sep 29, 2008; Posts: 12617):
+1; focus either on Java, or algorithms. If you're learning Java, an algorithms book isn't the best place to start. Even if you're familiar with other programming languages, a book focusing on algorithms isn't necessarily going to teach you how to think in Java.
Original poster:
Yup, I am familiar with that. I know the basics of Java and I am just trying to improve my problem-solving capacity. I am using the book to help me think in a way to solve problems! I just want to be able to solve problems. So can you please help me with what the problem states?
Reply (Joined: Sep 29, 2008; Posts: 12617):
What part of it don't you understand?
Original poster:
In the ragged matrix problem, do I need to create two matrices, say a[5][] and b[5][], and write a function to multiply these two matrices? I don't get the reason why they gave the condition about the matrix transpose. In the histogram problem I do not know from what data I calculate the histogram distribution, and what is meant by normalization. And how do I get the values of grey color? I don't think I can do that with just the given 0 and 255.
Reply (Joined: Oct 13, 2005; Posts: 36453):
You have two questions; you would have done better to start two threads. Until you understand what matrix transformation means, you will not understand our replies.
Original poster:
Okay, I think I'll first learn what matrix transformation is, then I'll ask my doubts. Do I need to know anything about the histogram?
Reply (Joined: Sep 29, 2008; Posts: 12617):
It's critically important to understand the algorithm before trying to implement it, otherwise you have no mental picture of what you're trying to do. If this is the kind of problem you're interested in trying to solve, no worries, but if you're interested in more typical problems I don't think an algorithms book is a great place for that.
Original poster:
Well, I am trying to solve these problems so that I could solve the ones on sites like SPOJ and CodeChef. So please, can you explain what the problem states?
Reply (Joined: Sep 29, 2008; Posts: 12617):
But if I tell you what the problem states you're not really solving the problem, are you--*I* am! I'd recommend either breaking it down piece by piece, or starting with something easier. I'm not really prepared to explain linear algebra etc. in the Java forum ;)
Original poster:
Hey, I know linear algebra, calculus, even matrix transformations like scaling and rotation and all that. All I asked was what the problem states; you need not explain to me what I should do in the problem. Just tell me what the problem says. I think I am asking for too much. Thanks anyway for all your help.
Functional problem
February 25th 2010, 05:39 AM #1
Let $V=R_n[x]$ and let $\alpha_i(p)=p^{(i)}(0)$ be functionals over $V$ (where $p^{(i)}$ is the $i$-th derivative of $p$).
Show that $\{\alpha_i\}_{i=0}^{n}$ is a basis of the dual space $V^*$.
What I should do first is prove that the $\alpha_i$, $\forall i$, are linearly independent.
How should I do it?
Any ideas?
February 25th 2010, 05:56 AM #2
Hint: notice that $\alpha_i(x^k)$ is $k!$ if $i=k$, and $0$ if $i\neq k$.
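Spelling the hint out, since everything follows from the power rule:

$$\alpha_i(x^k)=\left.\frac{d^i}{dx^i}\,x^k\right|_{x=0}=\begin{cases}k! & i=k\\ 0 & i\neq k\end{cases}\qquad 0\le i,k\le n,$$

so, up to the nonzero factors $k!$, the $\alpha_i$ are dual to the monomial basis $\{1,x,\dots,x^n\}$, which gives independence at once.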
February 25th 2010, 08:46 AM #3
$\alpha_0,\dots,\alpha_n$ are independent iff $\lambda_0\alpha_0+\dots+\lambda_n\alpha_n=0$ implies $\lambda_0=\dots=\lambda_n=0$.
So if $p=a_0+a_1x+\dots+a_nx^n$, then $\alpha_i(p)=i!\,a_i$, and
$(\lambda_0\alpha_0+\dots+\lambda_n\alpha_n)(p)=\lambda_0\,0!\,a_0+\lambda_1\,1!\,a_1+\dots+\lambda_n\,n!\,a_n=0$
must hold $\forall a_0,\dots,a_n$, and therefore it's obvious that $\lambda_0=\lambda_1=\dots=\lambda_n=0$.
And therefore $\dim V^*=\dim V=n+1$ $\Rightarrow$ $\{\alpha_i\}$ is a basis.
This is more or less the answer, right?
Primes in Fibonacci
Date: 07/11/99 at 18:27:44
From: Carly White
Subject: Primes in Fibonacci
Dear Dr Math,
Is there any pattern with prime numbers in the Fibonacci sequence?
Date: 07/12/99 at 10:57:24
From: Doctor Floor
Subject: Re: Primes in Fibonacci
Dear Carly,
Thanks for your question!
This is a very interesting question indeed. As far as I know, it
is not known whether there are infinitely many prime numbers in
Fibonacci's sequence or not.
There is some pattern however, in the sense that we know that if a
Fibonacci number F(n) is prime (here we take F(1) = 1, F(2) = 1,
F(3) = 2, etc.) and n > 4, then n is prime too. Or, in other words,
when n > 4 is a composite number, then F(n) is a composite number too.
A proof of this is not too difficult. When n is even, so it can be
written as n = 2m, it follows from the fact that F(2m) = F(m)*L(m),
where L(m) is the mth Lucas number. You can find a proof of this in
the Dr. Math archives:
Since n > 4, we see that m > 2, and thus F(m) and L(m) are both
greater than 1, and F(2m) must be composite.
To prove a more general statement we will use the terminology and
formulae of the message in the Dr. Math archives I pointed you to.
P = (1+SQRT(5))/2
Q = (1-SQRT(5))/2
F(n) = (P^n - Q^n)/SQRT(5)
L(n) = P^n + Q^n
Now let n = p*q with p>2 and q>1 (in this way we describe all existing
composite numbers greater than 4). We have:
F(n) = F(p*q) = [P^(p*q) - Q^(p*q)]/SQRT(5) =
[(P^p)^q - (Q^p)^q]/SQRT(5) =
(P^p - Q^p)/SQRT(5) *
[(P^p)^(q-1) + Q^p*(P^p)^(q-2) + ... + (Q^p)^(q-1) ] = (*)
F(p) * [(P^p)^(q-1) + Q^p*(P^p)^(q-2) + ... + (Q^p)^(q-1)]
(*): This is a special case of:
x^n-y^n = (x-y)[x^(n-1) + y*x^(n-2) + y^2*x^(n-3) + ... + y^(n-2)*x +
Since F(p) > 1, and clearly F(n) > F(p), we can now see that F(n) is a
composite number when
[(P^p)^(q-1) + Q^p*(P^p)^(q-2) + ... + (Q^p)^(q-1) ]
is an integer. This can be shown using that L(n) = P^n + Q^n is an
integer, and P^n*Q^n = (P*Q)^n = (-1)^n. I will not go over writing
this all out exactly, but I think you will understand how to prove it
when I give as an example how to show that F(15) is composite:
F(15) = F(5*3) = [P^15 - Q^15]/SQRT(5) =
[P^3 - Q^3]/SQRT(5)*(P^12 + P^9*Q^3 + P^6*Q^6 + P^3*Q^9 + Q^12) =
F(3) * [P^12 + Q^12 + (P*Q)^3*(P^6 + Q^6) + (P*Q)^6] =
F(3) * [ L(12) - L(6) + 1]
It is clear here that L(12) - L(6) + 1 is an integer, and thus F(15)
is composite.
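Both the worked identity and the general claim are easy to spot-check numerically; a short Python sketch (trial division is plenty at this scale):

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def lucas(n):
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def has_proper_factor(m):
    d = 2
    while d * d <= m:
        if m % d == 0:
            return True
        d += 1
    return False

# The worked identity: F(15) = F(3) * (L(12) - L(6) + 1), i.e. 610 = 2 * 305
assert fib(15) == fib(3) * (lucas(12) - lucas(6) + 1)

# The general claim: F(n) is composite whenever n > 4 is composite
for n in range(5, 60):
    if has_proper_factor(n):
        assert has_proper_factor(fib(n)), n
```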
And our conclusion stands: when F(n) is prime and n > 4, then n itself
must be prime.
If you have a math question again, or if you need more help on this,
don't hesitate to write us back.
Best regards,
- Doctor Floor, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/52705.html","timestamp":"2014-04-17T19:11:58Z","content_type":null,"content_length":"7717","record_id":"<urn:uuid:02a1ef37-3798-4ac1-8dbd-a9c12bb7cb27>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00376-ip-10-147-4-33.ec2.internal.warc.gz"} |
American Mathematical Society
AMS Sectional Meeting Program by Day
Current as of Tuesday, April 12, 2005 15:09:42
1997 Fall Western Sectional Meeting
Albuquerque, NM, November 8-9, 1997
Meeting #928
Associate secretaries: William A. Harris, Jr., AMS
Saturday November 8, 1997
• Saturday November 8, 1997, 7:30 a.m.-4:30 p.m.
Meeting Registration
• Saturday November 8, 1997, 8:00 a.m.-10:50 a.m.
Special Session on Commutative Algebra, I
Laguna Room, Albuquerque Convention Center
Scott Thomas Chapman, Trinity University schapman@trinity.edu
Alan Loper, Ohio State University, Newark lopera@math.ohio-state.edu
• Saturday November 8, 1997, 8:00 a.m.-10:50 a.m.
Special Session on Harmonic Analysis, I
Acoma Room, Albuquerque Convention Center
Jay B. Epperson, University of New Mexico jeppers@math.unm.edu
Cristina Pereyra, University of New Mexico crisp@math.unm.edu
• Saturday November 8, 1997, 8:00 a.m.-4:30 p.m.
Exhibit and Book Sale
• Saturday November 8, 1997, 8:30 a.m.-10:50 a.m.
Special Session on Difference and Differential Equations, I
Tesuque Room, Albuquerque Convention Center
Saber N. Elaydi, Trinity University selaydi@trinity.edu
Robert J. Sacker, University of Southern California rsacker@mtha.usc.edu
• Saturday November 8, 1997, 8:30 a.m.-10:50 a.m.
Special Session on Diophantine Geometry, I
Apache Room, Albuquerque Convention Center
Alexandru Buium, University of New Mexico buium@math.unm.edu
• Saturday November 8, 1997, 8:30 a.m.-10:50 a.m.
Special Session on (Multi) Wavelets and Numerical PDEs, I
Navajo Room, Albuquerque Convention Center
Peter R. Massopust, Sandia National Laboratories prmasso@cs.sandia.gov
• Saturday November 8, 1997, 8:30 a.m.-10:40 a.m.
Special Session on Localization and Other Multiple Scattering Phenomena of Classical Waves, I
Santo Domingo Room, Albuquerque Convention Center
Alexander Figotin, University of North Carolina at Charlotte figotin@uncc.edu
Abel Klein, University of California, Irvine aklein@math.uci.edu
□ 8:30 a.m.
Transport theory for waves and applications .
George C Papanicolaou*, Stanford University
□ 9:15 a.m.
Magnetotelluric Probing.
Benjamin White, Exxon Research & Engineering Co.
L. J. Srnka, Exxon Exploration Co.
Werner Kohler*, Virginia Tech
□ 10:00 a.m.
The bispectral problem and some of its applications in mathematical physics.
F. Alberto Grunbaum*, University of California, Berkeley
• Saturday November 8, 1997, 8:30 a.m.-10:50 a.m.
Special Session on Quaternions in Global Riemannian and Algebraic Geometry, I
Zuni Room, Albuquerque Convention Center
Charles P. Boyer, University of New Mexico cboyer@math.unm.edu
Krzysztof Galicki, University of New Mexico galicki@math.unm.edu
• Saturday November 8, 1997, 9:00 a.m.-10:50 a.m.
Special Session on Computational Mathematics, I
Isleta Room, Albuquerque Convention Center
Richard C. Allen, Jr., Sandia National Laboratories rcallen@cs.sandia.gov
• Saturday November 8, 1997, 9:00 a.m.-10:50 a.m.
Special Session on Computational Mechanics, I
Nambe Room, Albuquerque Convention Center
D. L. Sulsky, University of New Mexico sulsky@math.unm.edu
• Saturday November 8, 1997, 9:00 a.m.-10:50 a.m.
Special Session on Geometry and Analysis of Foliations, I
Jemez Room, Albuquerque Convention Center
Efton L. Park, Texas Christian University epark@gamma.is.tcu.edu
Kenneth S. Richardson, Texas Christian University k.richardson@tcu.edu
□ 9:00 a.m.
Riemannian Foliations and Eigenvalue Comparison.
Jeffrey M Lee*, Texas Tech University
Ken Richardson, Texas Christian University
□ 9:40 a.m.
Metric fibrations in Euclidean space
Detlef Gromoll,
Gerard Walschap*,
□ 10:20 a.m.
Leafwise heat flow and applications
Jesus A. Alvarez-Lopez*,
• Saturday November 8, 1997, 11:15 a.m.-12:05 p.m.
Invited Address
Localization of classical waves.
Taos Room, Albuquerque Convention Center
Abel Klein*, University of California, Irvine
• Saturday November 8, 1997, 1:45 p.m.-2:35 p.m.
Invited Address
An adaptive pseudo-wavelet approach for solving nonlinear partial differential equations
Taos Room, Albuquerque Convention Center
Gregory Beylkin*, University of Colorado
• Saturday November 8, 1997, 3:00 p.m.-6:20 p.m.
Special Session on Commutative Algebra, II
Laguna Room, Albuquerque Convention Center
Scott Thomas Chapman, Trinity University schapman@trinity.edu
Alan Loper, Ohio State University, Newark lopera@math.ohio-state.edu
• Saturday November 8, 1997, 3:00 p.m.-4:50 p.m.
Special Session on Computational Mathematics, II
Isleta Room, Albuquerque Convention Center
Richard C. Allen, Jr., Sandia National Laboratories rcallen@cs.sandia.gov
□ 3:00 p.m.
Discussion: Mathematics of Molecular Science
• Saturday November 8, 1997, 3:00 p.m.-4:50 p.m.
Special Session on Computational Mechanics, II
Nambe Room, Albuquerque Convention Center
D. L. Sulsky, University of New Mexico sulsky@math.unm.edu
• Saturday November 8, 1997, 3:00 p.m.-5:20 p.m.
Special Session on Difference and Differential Equations, II
Tesuque Room, Albuquerque Convention Center
Saber N. Elaydi, Trinity University selaydi@trinity.edu
Robert J. Sacker, University of Southern California rsacker@mtha.usc.edu
• Saturday November 8, 1997, 3:00 p.m.-4:50 p.m.
Special Session on Diophantine Geometry, II
Apache Room, Albuquerque Convention Center
Alexandru Buium, University of New Mexico buium@math.unm.edu
• Saturday November 8, 1997, 3:00 p.m.-4:50 p.m.
Special Session on Geometry and Analysis of Foliations, II
Jemez Room, Albuquerque Convention Center
Efton L. Park, Texas Christian University epark@gamma.is.tcu.edu
Kenneth S. Richardson, Texas Christian University k.richardson@tcu.edu
□ 3:00 p.m.
An extension of Pestov's identity to the frame bundle.
Maung Min-oo*,
□ 3:40 p.m.
Quasigeodesic Flows on Hyperbolic 3-Manifolds Which Fiber Over the Circle
Diane Hoffoss*, UC Santa Barbara
□ 4:20 p.m.
Entropy Rigidity for Foliations.
Christopher G. Connell*, University of Michigan
Jeffrey R. Boland, University of Michigan
• Saturday November 8, 1997, 3:00 p.m.-5:50 p.m.
Special Session on Harmonic Analysis, II
Acoma Room, Albuquerque Convention Center
Jay B. Epperson, University of New Mexico jeppers@math.unm.edu
Cristina Pereyra, University of New Mexico crisp@math.unm.edu
• Saturday November 8, 1997, 3:00 p.m.-5:55 p.m.
Special Session on Localization and Other Multiple Scattering Phenomena of Classical Waves, II
Santo Domingo Room, Albuquerque Convention Center
Alexander Figotin, University of North Carolina at Charlotte figotin@uncc.edu
Abel Klein, University of California, Irvine aklein@math.uci.edu
• Saturday November 8, 1997, 3:00 p.m.-5:20 p.m.
Special Session on Quaternions in Global Riemannian and Algebraic Geometry, II
Zuni Room, Albuquerque Convention Center
Charles P. Boyer, University of New Mexico cboyer@math.unm.edu
Krzysztof Galicki, University of New Mexico galicki@math.unm.edu
• Saturday November 8, 1997, 3:00 p.m.-4:40 p.m.
Session on Contributed Papers
Navajo Room, Albuquerque Convention Center
1. (x^3)^5
A x^-15
B x^2
C x^8
D x^15
E x^-2
F x^-8
#1 - (D), because (x^3)^5 = (x*x*x)*(x*x*x)*(x*x*x)*(x*x*x)*(x*x*x) = x^15.
14. Examine the following solution to a conversion problem. Do you see a mistake? If so, identify the location of the first mistake and explain why it is incorrect.
A The conversion is not possible.
B The first problem occurs in column (1). The problem should begin with mg.
C The first problem occurs in column (2). 1 kg is not equal to 1000 g, so that should not be used as the conversion factor.
D The first problem occurs in column (2). The conversion factor is upside down (grams should be on top, and kg should be on bottom).
E The first problem occurs in column (4). The multiplication and division was performed incorrectly.
F No mistakes.
#14 - (D), because every gram is made up of 1000 milligrams. 3.25 kilograms is equal to 3250 grams, and each gram is made up of 1000 milligrams, so there are 3250 * 1000 = 3,250,000 milligrams. The chart says 1 kilogram = 1000 grams, so to get the number of grams from 3.25 kilograms I would multiply: 3.25 kilograms * 1000 = 3250 grams. So my answer would be (D): the first problem occurs in column (2), and the conversion factor is upside down (grams should be on top, and kg should be on the bottom).
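The direction of the conversion factor is the whole point of question 14; the same arithmetic written out as a trivial Python sketch, matching the numbers above:

```python
kg = 3.25
g = kg * 1000      # 1 kg = 1000 g, so grams go on top of the factor
mg = g * 1000      # 1 g = 1000 mg
print(g, mg)       # 3250.0 grams -> 3250000.0 milligrams
```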
17. Examine the following solution to a conversion problem. Do you see a mistake? If so, identify the location of the first mistake and explain why it is incorrect.
A No mistakes.
B The conversion is not possible.
C The first problem occurs in column (1). The problem should begin with L.
D The first problem occurs in column (4). The multiplication and division was performed incorrectly.
E The first problem occurs in column (3). We are not allowed to use a conversion factor of (1 L/ 1L).
F The first problem occurs in column (2). 1 L is not equal to 1000 kL, so that should not be used as the conversion factor.
#17 - (F), because 1 kL = 1000 L, not the other way around.
19. Estimate the amount of juice in a typical orange juice carton in liters.
A about 8 liters
B about 5 liters
C about 4 liters
D about 3.5 liters
E about 2 liters
F about .5 liters
I estimated it as (D) from a 1-liter carton that I have in the fridge.
20. Estimate the weight of a small can of soup in grams.
A about 30 grams
B about 500 grams
C about 250 grams
D about 325 grams
E about 750 grams
F about 100 grams
#20 - (B), because a gram measures the weight of very light objects, so I estimated it to be 500 grams from the can of soup that was in my refrigerator.
Those are the answers I submitted to her.
An integral equation method for solving exterior Neumann problems on smooth regions
Jumadi, Azlina (2009) An integral equation method for solving exterior Neumann problems on smooth regions. Masters thesis, Universiti Teknologi Malaysia, Faculty of Science.
PDF - Submitted Version
Restricted to Repository staff only
This work develops a boundary integral equation method for numerical solution of the exterior Neumann problem. An integral equation for solving the exterior Neumann problem in a simply connected
region is derived in this dissertation based on the exterior Riemann-Hilbert problem. In the first step the exterior Neumann problem is reduced to an exterior Riemann-Hilbert problem for the
derivative of an auxiliary function which is analytic in the region. Then, the exterior Riemann-Hilbert problem is transformed to a uniquely solvable Fredholm integral equation on the boundary of the
region. Once this equation is solved, the auxiliary function and the solution of the exterior Neumann problem can be obtained. The efficiency of the method is illustrated by some numerical examples.
Item Type: Thesis (Masters)
Additional Information: Thesis (Sarjana Sains (Matematik)) - Universiti Teknologi Malaysia, 2009; Supervisor : Dr. Munira Ismail
Uncontrolled Keywords: integral equation, exterior Neumann problems
Subjects: Q Science > Q Science (General)
Q Science > QA Mathematics
Divisions: Science
ID Code: 12322
Deposited By: S.N.Shahira Dahari
Deposited On: 16 May 2011 04:48
Last Modified: 19 Jul 2012 04:21
Repository Staff Only: item control page | {"url":"http://eprints.utm.my/12322/","timestamp":"2014-04-18T10:51:39Z","content_type":null,"content_length":"19107","record_id":"<urn:uuid:bf235da9-d625-471f-ac75-1b368b53cb69>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00212-ip-10-147-4-33.ec2.internal.warc.gz"} |
34-XX Ordinary differential equations
34Axx General theory
34A05 Explicit solutions and reductions
34A09 Implicit equations, differential-algebraic equations [See also 65L80]
34A12 Initial value problems, existence, uniqueness, continuous dependence and continuation of solutions
34A25 Analytical theory: series, transformations, transforms, operational calculus, etc. [See also 44-XX]
34A26 Geometric methods in differential equations
34A30 Linear equations and systems, general
34A34 Nonlinear equations and systems, general
34A35 Differential equations of infinite order
34A36 Discontinuous equations
34A37 Differential equations with impulses
34A40 Differential inequalities [See also 26D20]
34A45 Theoretical approximation of solutions {For numerical analysis, see 65Lxx}
34A55 Inverse problems
34A60 Differential inclusions [See also 49J24, 49K24]
34A99 None of the above, but in this section | {"url":"http://ams.org/mathscinet/msc/msc.html?t=34A09","timestamp":"2014-04-17T14:43:11Z","content_type":null,"content_length":"13565","record_id":"<urn:uuid:b387c082-2570-484c-bed7-88075f762b6d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00119-ip-10-147-4-33.ec2.internal.warc.gz"} |
Interview Questions and Answers
Financial Analyst Interview Questions and Answers
This series of questions pulled from an actual financial analyst interview are excellent to test out your problem solving and analytical skills – they are almost like the brainteasers that you could
see on a GMAT or LSAT test, so have fun with them. We highly recommend that you try to solve these problems on your own before reading the answers that we have provided. Here is the actual interview
question (first you will need to read some background information):
Our company partners with colleges and universities to coach their students to be more successful in school, typically beginning in the student’s first year. The colleges and universities are our
clients. Below we present some sample data from one of our clients. In order to show our client how effective our coaching is, we separated students into 2 different groups – 1 group in which all of
the students received coaching and another group (also called the “control” group), in which NONE of the students received any coaching. The term “control” group is a scientific term used when
comparing the effect of a particular treatment on some group of people (or whatever it is that’s being studied), versus people who did not receive that treatment. In this case, that “treatment” is
our coaching/tutoring service for colleges. Comparing Coached retention to Control retention provides the basis for measuring our enrollment impact and return on investment in coaching. Retention is
just a term that refers to the percentage of students that were “retained”, or basically the percentage of students who stayed in school and made it to their next year in college. In the table below,
we have enrollment numbers for people who started their 1st and 2nd years in both the coached and non coached groups.
│ Group                         │ Began 1st year in college │ Began 2nd year in college │
│ Coached                       │ 180                       │ 155                       │
│ Not Coached ("Control" Group) │ 200                       │ 155                       │
Financial Analyst Interview Question part 1 A – According to the figures above, how many 2nd year students are still in college due to our coaching?
Let’s break the question down – it specifically asks us how many students made it to their 2nd year in college because they were coached. So, we want to make sense of the numbers presented above and
find out how effective the coaching was in helping students stay in college for one more year.
The answer to Financial Analyst Interview Question part 1 A
You might take a look at the numbers above and notice that 155 people made it to their 2nd year both in the control group (composed of people who were not coached) and the group which received
coaching. This may lead you to believe that the coaching was not effective at all, because exactly the same number of people made it to their 2nd year in college as people who were not even coached!
However, you would be completely wrong to say this. And this is where attention to detail and analytical skills are very important.
Notice that 180 people began their freshman year in the control group, but 200 people began their freshman year in the “control” group. This of course means that there were more people in the control
group to begin with, which means that comparing just the number of people who made it to their 2nd year (the 155 people) is not a valid or accurate way to compare the 2 groups. So, what is an
accurate and meaningful way to compare the 2 groups? Well, with percentages of course! When comparing results from 2 different groups of different sizes, the only meaningful way to do that is to use
Now that we know we must use percentages, let’s do some more analysis on these numbers. In the control group, we can say that 200 (# who started their 1st year) – 155 (# who started their 2nd year) =
45 people did not start their 2nd year. As a percentage, this means that 45/200 = .225 = 22.5% of students in the control group “dropped out” (which just means that they failed to make it to their
2nd year). Remember that the control group consists of students who did not receive any coaching, so, based on the results from the control group, we can say that given a group of any size, 22.5% of
those students will drop out. Of course, that number may change from group to group, but the control group is used as a basis of comparison.
Notice again that the group of students that were coached started out with 180 people. So, we should ask ourselves if those students were not coached, how many of them would have made it to their 2nd
year? Well, we can take the 22.5% that we came up with from our analysis on the control group, and multiply it against the 180 people – so .225 * 180 = 40.5 people, or rounding up – 41 people. This
means that 41 people would have dropped out of a group of 180 if they did not receive coaching, or that 180-41 = 139 people would have made it to the 2nd year in college if they didn’t receive
But how many people did actually make it to their 2nd year college in the group of students that were coached? Well, that’s easy – it’s 155 people. So, if 139 people would have made it if they did
not receive coaching, and 155 people who did receive coaching made it to their second year, the difference between those numbers (155-139 = 16) tells us that 16 students are still in college because
of the coaching provided by the company, which is our answer!
The key thing to note here is that the number of people that are in each group is different – the interviewer deliberately phrased the question this way because the question would have been very
simple if the number of people in both groups were the same. And, this allows the interviewer to test your analytical skills, attention to detail, and ability to apply mathematical analysis to come
out with a meaningful result.
Part B: If tuition at the client college is $20,000, then how much extra revenue is that for the client college?
This question is just asking if 16 students are still in college because of coaching/tutoring then how much extra revenue is that for the college? That’s simple enough – 16*$20,000 = $320,000. So the
extra revenue for the college is $320,000.
Part C: If we had charged $1,000 per student to coach these freshmen for one year, how much ADDITIONAL revenue does the client get to keep (minus the cost of the coaching)?
Because 180 students started out in the coached group, this means that 180 * $1,000 = $180,000 is the cost of coaching (at $1,000 per student). Because the extra revenue for the client college by
having the 16 students for one extra year is $320,000, and the cost of coaching is $180,000, the college's additional revenue is $320,000 – $180,000 = $140,000 – this is the profit that the college gets to keep.
What’s the “return on investment” (profits / costs)?
Because the cost of coaching is $180,000 and the profit is $140,000, this means that the return on investment is $140,000 / $180,000 = 77.78%.
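The whole chain of arithmetic in parts A through D fits in a few lines; a sketch in Python (the round-up of 40.5 to 41 follows the convention used above):

```python
import math

control_start, control_2nd = 200, 155
coached_start, coached_2nd = 180, 155
tuition, fee_per_student = 20_000, 1_000

dropout_rate = (control_start - control_2nd) / control_start   # 45/200 = 22.5%
expected_dropouts = math.ceil(dropout_rate * coached_start)    # 40.5 -> 41, rounding up
would_have_stayed = coached_start - expected_dropouts          # 139 without coaching
kept_by_coaching = coached_2nd - would_have_stayed             # 155 - 139 = 16 students

extra_revenue = kept_by_coaching * tuition                     # 16 * $20,000 = $320,000
coaching_cost = coached_start * fee_per_student                # 180 * $1,000 = $180,000
profit = extra_revenue - coaching_cost                         # $140,000
roi = profit / coaching_cost                                   # ~77.78%
print(kept_by_coaching, extra_revenue, profit, f"{roi:.2%}")
```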
What if it turns out that the groups were improperly balanced, with the coached groups getting a bigger share of lower-retaining “at-risk” students?
This question is just saying that the “control” group and the “coached” group did not receive a proper balance of ‘good’ and ‘bad’ students – because the “coached” group had more students who were
“at-risk”, which just means that they were more likely not to make it to the 2nd year. This is most likely because they were simply not good students.
If what the question above states is true, and the groups were improperly balanced, then that means the coaching provided by the company was actually more effective than thought before – because the
coached group had more “at-risk” students than the “control” group. This also means that the ROI (Return on Investment) for the client must be higher than the 77.78%.
The Leafs, Scoring Chances and Corsi
I recently got around to updating my database for the 09-10 season and, in looking over the EV stats for each team, I noticed that the Leafs continue to have one of the best corsi ratios in the
league at EV with the score tied.
I think that this is unusual for a couple reasons.
For one, the 08-09 Leafs were a poor team according to this metric. While last year's squad outshot the opposition in a general sense, their corsi ratio with the score tied was 0.94, good for 21st in
the league. Thus, if one considers corsi ratio with the score tied to be a crude measure of a team's ability at even strength, the Leafs would appear to be one of the most improved teams in the NHL
(looking strictly at EV play, of course).
Secondly, despite soundly outshooting the opposition, the Leafs have one of the worst EV goal differentials in the league. I haven't filtered out empty netters yet, but only the Lightning, Oilers,
and Jackets are worse than Toronto in terms of goal differential at even strength. This despite directing some 500 more shots towards the other team's net at EV than their opponent over the course of
the season. The effect isn't as extreme when the score is tied -- they're only -7 -- but the unusual profile remains.
The tendency to outshoot without outscoring has led some to question whether the Leafs do, in fact, outplay the opposition at even strength, or whether the shot numbers are deceiving.
One way to settle the issue is to look at the Leafs scoring chance numbers. If Toronto's scoring chance ratio broadly parallels its shot ratio at EV, then that ought to dispel notions that the Leafs
don't legitimately outplay the opposition, or that they shoot from everywhere.
Slava Duris, whose blog can be found here, has been recording scoring chances for Toronto over the course of the season. To date, he's posted 53 of the games for which he's recorded chances.
Taking those 53 games in particular, I looked at how many even strength scoring chances the Leafs had with the score tied, and how many their opponents had. I then determined how many shots the Leafs
directed towards the opposition's net -- again, only at EV with the score tied -- in those same 53 games, and did the same for their opponents. [The per-game table of raw data from the original post is not reproduced here.]
Overall, there were 474 even strength scoring chances with the score tied in the 53 games sampled. Of those 474 chances, Leafs generated 252, whereas the opposition generated the remaining 222. Thus,
the Leafs scoring chance ratio with the score tied was 1.14.
In terms of corsi with the score tied, the Leafs directed 1008 shots towards the opposition's goal, and had 915 directed toward their own, thus giving them a corsi ratio of 1.10.
In other words, the Leafs actually did better in terms of scoring chances than in terms of corsi over the 53 games examined.
Granted, this doesn't allow one to conclude that the Leafs are a better team than their corsi ratio would suggest. For example, if we assume that Toronto's underlying scoring chance ratio is
identical to its corsi ratio (1.10), then the probability of it generating at least 252 chances out of 474 randomly selected chances is 0.376 (or, if one prefers, the probability of it having a corsi
ratio at least as good as 1.14 in a sample of 474 chances). In other words, the two values are not significantly different from each other.
Nevertheless, it would appear that the Leafs have managed to outplay the opposition at even strength over the course of the season, their rotten goal differential notwithstanding.
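That 0.376 figure can also be checked directly against the binomial distribution rather than by simulation; a sketch using SciPy (the exact tail probability comes out near 0.39, consistent with the simulated value):

```python
from scipy.stats import binom

corsi_for, corsi_against = 1008, 915       # shots directed, score tied
chances_for, total_chances = 252, 474      # scoring chances, score tied

p = corsi_for / (corsi_for + corsi_against)          # ~0.524
print(binom.sf(chances_for - 1, total_chances, p))   # P(at least 252 of 474), ~0.39
```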
7 comments:
Aren't you the clever one. If you don't mind me asking, what do you study in school, Jlikens?
Wow, so Vesa Toskala was really *that* bad?
I'm currently in law school, but I also have an undergraduate degree in history.
Probably, although playing to the score effects are relevant, too.
The Leafs have played from behind for much of 2009-10 and the same was true last year (although not, interestingly, in 07-08).
And as Vic demonstrated last fall, the Leafs continued to play aggressively when holding the lead last season.
Both of those factors have adversely affected Toskala's numbers to some degree, I think.
I wouldn't have guessed that, I would have gone with commerce, or perhaps a physical science. I suppose the lesson learned is that I should assume that everyone posting on the Oilogosphere is a
lawyer or law student until there is evidence to the contrary.
I like the order test, I think that's reasonable. I'd model that as an urn full of black and white balls. A white ball (1008 of them) for every TOR corsi+, and a black ball (915 of them) for
every corsi-. Then we'd have TOR draw 474 balls from the urn, without replacing them after each draw. That's the total number of scoring chances in the games recorded, while the score was tied of
And we'd ask the question: what are the chances that TOR picks 252 or fewer white balls?
That model is represented by the cumulative form of the hypergeometric distribution. It's simple math, arithmetic really, but computers won't like the gigantic integers used ... I doubt that
either a spreadsheet or the PHP math addon will work these. The former will probably give an error and the former may yield rogue values.
You could use 'R', which is free.
this function:
phyper(252, 1008, 915, 474, lower.tail = TRUE, log.p = FALSE)
yields p = .665.
So TOR fits in the 67th percentile.
Now we do the same for every other team. Even the ones for whom we only have 10 games or so. The underlying sample size is accounted for with the model here, only the error increases.
Then we'd be on our way.
Makes sense, no?
Your way seems like the better way to do it.
My method was somewhat simpler (although less rigorous).
I 'simulated' 474 chances in which the probability of the Leafs generating each individual chance was 0.525 (i.e. 1008/(1008+915); their Corsi percentage).
I looked at 500 simulations in total. Out of those 500 simulations, the Leafs had at least 252 of the 474 chances in 188 of the simulations (188/500=0.376).
If that figure is doubled in order to take into account the other half of the distribution (which I forgot to do in my post), I get 0.752, which is in the ballpark of your p-value.
Good stuff JLikens. Terrible luck for the Leafs that Toskala was so poor. I'd be more convinced that style was negatively affecting him if the other goalies weren't so much better. At EV:
Toskala: .898
Others: .915
That's just awful. The WOWY for Toskala is pretty damning as well:
Record when Toskala plays: 7-15-4
Record when Toskala out: 23-23-10
The Leafs had other problems for sure but Toskala sure as heck wasn't helping.
I think that it is great that you are updating your database. You can find interesting data when you go back to it.
generate random numbers
"Jomar Bueyes" <jomarbueyes@hotmail.com> wrote in message
> On Tuesday, April 3, 2012 1:23:13 PM UTC-4, ahmed elziery wrote:
>> "Amy" wrote in message <jldo9k$ev9$1@newscl01ah.mathworks.com>...
>> > How can I generate random values within a range in Matlab? Let's say I
>> > need to generate random numbers between [65, 85]
>> > thanks for helping
>> okey here is the solution
>> you can loop between the these two numbers
>> and write this condition if(65<rand<85)
>> arrayb(j)=rand;
>> end
> The above won't work because 1) rand returns pseudo random numbers in the
> open interval (0,1) and thus the condition 65<rand<85 is never satisfied.
As written, it is ALWAYS satisfied. 65 < rand < 85 is equivalent to (65 <
rand) < 85. Since RAND returns values in the interval (0, 1) the expression
inside the parentheses is always false or 0 and since 0 < 85, the expression
is true.
What the poster to whom you're replying meant to write was something along
the lines of "x = rand; if (65 < x) && (x < 85), ..." which would never be
satisfied, as you realized.
>2) Even if rand would return numbers in a range that includes 65 to 85, the
>function is called twice, the first time to use in the if statement and the
>second to assign a random number to arrayb(j). That is, the test is done on
>one pseudo random number but a different one is assigned to arrayb(j) and
>this second number is not necessarily in the (85, 85) interval.
With the obvious typo fixed, you are correct.
> As Steve Lord suggested, see the help for the rand function. It explains
> how to get pseudo random numbers in arbitrary intervals.
RAND may or may not be the answer the OP was looking for; there's not enough
information to determine their desired answer. We'd need to know the
distribution they wanted.
Steve Lord
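For the record, the recipe the RAND documentation points to is a scale-and-shift of a unit-interval draw: in MATLAB, r = a + (b-a).*rand(N,1). The same idea in Python/NumPy (assuming a uniform distribution is what was wanted):

```python
import numpy as np

a, b = 65, 85
x = a + (b - a) * np.random.rand(5)   # maps (0,1) draws into (65, 85)
print(x)
```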
Texas Education Agency gave credit for incorrect octagon perimeters
Question 8 of the Spring 2003, Grade 10, Texas Assessment of Knowledge and Skills (TAKS) released Mathematics test reads as follows:
8 What is the perimeter to the nearest
centimeter of the regular octagon drawn
[To view the drawing, click
F 41 cm
G 36 cm
H 27 cm
J 18 cm
Out of the 246816 students who took the test, the response percentages for answers F, G, H, J, and no answer were, respectively, 8.4%, 52.2%, 16.0%, 22.9%, and 0.5%.*
One angle is specified to be a right angle. It is part of a right triangle. The height of this right triangle is specified to be 4 cm. The hypotenuse of this right triangle is specified to be 4.6 cm.
The base of this right triangle is related to the hypotenuse and height by the Pythagorean Theorem (which is given on a beginning page of the test),
base = [hypotenuse^2 - height^2]^1/2 = [(4.6 cm)^2 - (4 cm)^2]^1/2 = 2.2715... cm.
The angle on the left side of the dotted line that is supplementary to the right angle is also a right angle and this second right angle is part of a second right triangle. The height of this second
right triangle is identical to the height of the first right triangle. The hypotenuse of this second right triangle is specified to be 4.6 cm, which equals the hypotenuse of the first right triangle.
The base of this second right triangle is, therefore, equal to the base of the first right triangle. The bottom side of the regular octagon equals the sum of the bases of the two right triangles,
side = 2(base) = 2 [hypotenuse^2 - height^2]^1/2 = 2 [(4.6 cm)^2 - (4 cm)^2]^1/2 = 4.5431... cm.
The perimeter of the regular octagon is 8 times one of its sides,
perimeter = 8(side) = 16 [hypotenuse^2 - height^2]^1/2 = 16 [(4.6 cm)^2 - (4 cm)^2]^1/2 = 36.3450... cm.
This shows that answer G is the correct answer and that answers F, H, and J are incorrect.
Scoring mistakes
The Texas Education Agency gave credit for the correct answer G but made the mistakes of also giving credit for the incorrect answers, F, H, and J, and for no answer. The TEA wrongly increased the
scores of those roughly (47.8%)(246816) = 117978 students who chose answers F, H, and J, and no answer, wrongly increased the average raw score expected from random guessing from 13.75 to 14.50 out
of 56 questions, and, with respect to the average raw score expected from random guessing, wrongly decreased the scores of those roughly (52.2%)(246816) = 128838 students who chose answer G. The
remedy is to remove credit for answers F, H, and J, and for no answer.
The TEA states that question 8 has "more than one possible correct answer." This TEA statement is false; answer G is the only correct answer.
The TEA states that "If the item is solved in this manner, the perimeter is 36 cm (answer choice G)." The clause "If the item is solved in this manner" is irrelevant; the perimeter is 36 cm (to the
nearest centimeter) and answer G is the only correct answer no matter in what manner question 8 is solved.
The TEA states that "However, if a student used one of the 45-degree angles at the center of the octagon and trigonometry to solve the problem, he or she may have chosen answer choice H (27 cm) or
determined that there was no correct answer." This TEA statement is based on the incorrect assumption that the apex of the triangle is located at the center of the octagon; the TEA knows that this
assumption is contradicted by the specified geometries and lengths.** Any student who made the incorrect assumption had the opportunity, using nothing more advanced than simple geometry and the
Pythagorean Theorem, to find that the assumption was contradicted by the specified geometries and lengths. Any student who "determined that there was no correct answer" had the opportunity to check
for, and find, his or her incorrect assumption(s).
The TEA states that "Because there was more than one possible answer to this problem, we have determined that students did not have a fair opportunity to demonstrate their understanding of this
mathematical skill." The TEA claim that "there was more than one possible answer to this problem", by which the TEA means that there was more than one correct answer to the problem, is false. The TEA
claim that "students did not have a fair opportunity to demonstrate their understanding of this mathematical skill" is false; the solution requires nothing more advanced than simple geometry and the
Pythagorean Theorem, which students were expected to know.
The TEA states that "Therefore, TEA has decided that all students should be credited with a correct response to item 8 of the Grade 10 mathematics test." The TEA's decision to give credit for the
incorrect answers, F, H, and J, and for no answer is due to incompetence. Texans are ill-served by such incompetence.
The TEA states that "When this credit is given, an additional 4,640 Grade 10 students (1.8% of the 246,816 tested statewide) will meet the standard, and an additional 936 students (.3% of the total
tested) will achieve commended performance." These students should be informed that they met the standard or achieved commended performance falsely, due to scoring mistakes made by the TEA.
The TEA states that "TEA has requested that Pearson Educational Measurement provide districts with revised Confidential Student Reports and Labels for each Grade 10 student who originally gave a
response other than G or who did not respond to item 8 on the Grade 10 mathematics test." This TEA request of Pearson Educational Measurement was inappropriate.
The TEA states that "We regret that item 8 was not a valid question, and we apologize for the inconvenience this will cause campus and district personnel." The TEA claim that "item 8 was not a valid
question" is false.
Failures to correct the scoring mistakes
Mark Loewe
notified the Texas Education Agency of the scoring mistakes and some of its false claims in a letter of 19 May 2004. The TEA has failed to correct the mistakes and is continuing to propagate the
mistakes and false claims without correction. On 15 July 2005,
Mark Loewe
gave written and oral testimony to the State Board of Education on the topic "TEA failed to correct TAKS mistakes". The SBOE has also failed to correct the scoring mistakes. Texans are ill-served by
the TEA and SBOE failures to correct the scoring mistakes.
* These response percentages are from "Texas educators flunk minimum skills math test", by Nancy T. King,
University Faculty Voice
, September 2003. The Texas Education Agency withheld response percentages for question
from the
Item Analysis Summary Report
. Nancy T. King obtained the response percentages through a Freedom of Information Act request.
** For any regular octagon, the ratio of distance from the center to (the midpoint of) a side divided by distance from the center to a vertex is given by
(center to side)/(center to vertex) = (2^1/2 + 2)^1/2/2.
This formula is exact and is easy to derive using nothing more advanced than simple geometry and the Pythagorean Theorem. A decimal value for this ratio is 0.9238.... A trigonometric expression for
this ratio is cos(22.5^o). The fact that this ratio differs from the right triangle's height divided by hypotenuse,
height/hypotenuse = 4 cm/4.6 cm = 0.8695...,
contradicts the assumption that the apex of the triangle is located at the center of the octagon.
For any regular octagon and any right triangle whose base is half the side of the octagon, the distance from the center to a side of the octagon is given in terms of the hypotenuse and height of the
triangle by
center to side = (2^1/2 + 1) [hypotenuse^2 - height^2]^1/2.
This formula is exact and is easy to derive using nothing more advanced than simple geometry and the Pythagorean Theorem. Plugging in the hypotenuse and height specified in question 8 gives
center to side = (2^1/2 + 1) [(4.6 cm)^2 - (4 cm)^2]^1/2 = 5.484... cm.
The fact that this distance differs from the triangle's height contradicts the assumption that the apex of the triangle is located at the center of the octagon.
For any regular octagon and any right triangle whose base is half the side of the octagon, the distance from the center to a vertex of the octagon is given in terms of the hypotenuse and height of
the triangle by
center to vertex = (4 + 2^1/22)^1/2 [hypotenuse^2 - height^2]^1/2.
This formula is exact and is easy to derive using nothing more advanced than simple geometry and the Pythagorean Theorem. Plugging in the hypotenuse and height specified in question 8 gives
center to vertex = (4 + 2^1/22)^1/2 [(4.6 cm)^2 - (4 cm)^2]^1/2 = 5.935... cm.
The fact that this distance differs from the triangle's hypotenuse contradicts the assumption that the apex of the triangle is located at the center of the octagon. | {"url":"http://www.markloewe.org/Q8_Spring03_G10_Math.html","timestamp":"2014-04-17T09:34:40Z","content_type":null,"content_length":"12962","record_id":"<urn:uuid:0bc9c42d-5d8e-4922-984c-9b58b0beab79>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
random.gauss: range
Gregory Ewing greg.ewing at canterbury.ac.nz
Sun Feb 28 05:27:11 CET 2010
Steven D'Aprano wrote:
> def pinned_gaussian(a, b, mu, sigma):
> """Return a Gaussian random number pinned to [a, b]."""
> return min(b, max(a, random.gauss(mu, sigma)))
> def truncated_gauss(a, b, mu, sigma):
> """Return a random number from a truncated Gaussian distribution."""
> while 1:
> x = random.gauss(mu, sigma)
> if a <= x <= b:
> return x
If it doesn't have to be strictly gaussian, another way is to
approximate it by adding some number of uniformly distributed
samples together. If you have n uniform samples ranging from
0 to a, the sum will be in the range 0 to n*a and the mean
will be n*a/2. The greater the value of n, the closer the
distribution will be to normal.
More information about the Python-list mailing list | {"url":"https://mail.python.org/pipermail/python-list/2010-February/569717.html","timestamp":"2014-04-16T22:46:26Z","content_type":null,"content_length":"3289","record_id":"<urn:uuid:a3aeeab2-31f9-4f79-82a3-d64f9dad6844>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00437-ip-10-147-4-33.ec2.internal.warc.gz"} |
Merion Station Precalculus Tutor
Find a Merion Station Precalculus Tutor
...I am able to tutor at flexible times and locations. I am able to provide references, documentation, etc. upon request. Thank you for your interest and I hope to hear from you soon!
58 Subjects: including precalculus, reading, chemistry, calculus
...I use learned skills to help students overcome anxieties, build confidence, improve communication with teachers, and raise self advocacy within the classroom. I use conflict resolution skills
to mediate between parents and students regarding time management at home, to place emphasis on study ti...
35 Subjects: including precalculus, chemistry, English, geometry
...I am highly committed to students' performances and to improve their comprehension of all areas of mathematics.I have excelled in courses in Ordinary Differential Equations in both
undergraduate and graduate school, as well as partial differential equations at the graduate level. Also, I have tu...
19 Subjects: including precalculus, calculus, trigonometry, statistics
...I will focus in on the problem solving techniques necessary for success, helping to connect the blocks learned in prealgebra to everyday life in order to engage all students. I have volunteered
for numerous organizations to help students get back up to grade level in their reading. My wife (an elementary school teacher) has also shown me the essentials for helping students get back on
21 Subjects: including precalculus, reading, calculus, physics
...I own two Macbook pros, one Macbook air, an iMac, iPhone, 3 iPods, an iPad mini and an iPad mini. I also utilize a time capsule for all backups. I have beta tested apps for ios and have
volunteered to teach Macintosh at my daughter's middle school.
26 Subjects: including precalculus, chemistry, geometry, GRE | {"url":"http://www.purplemath.com/merion_station_precalculus_tutors.php","timestamp":"2014-04-21T14:41:56Z","content_type":null,"content_length":"24343","record_id":"<urn:uuid:239d5c20-cbde-49a2-a6c3-225665ca3ed0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00125-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent application title: COLOR QUANTIZATION BASED ON DESIRED UPPER BOUND FOR RELATIVE QUANTIZATION STEP
Inventors: Sergey N. Bezryadin (San Francisco, CA, US) Sergey N. Bezryadin (San Francisco, CA, US)
IPC8 Class: AG09G502FI
USPC Class: 345597
Class name: Color or intensity dither or halftone color
Publication date: 2009-04-16
Patent application number: 20090096809
Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP
A computer system (710) receives a desired upper bound δ
for a relative quantization step ∥S'-S''∥/∥S'∥ to be used when quantizing any color in some range of colors. Here S' and S'' are adjacent colors in the set of colors to be made available for the
quantized image, and ∥∥ is a norm in a 70%-orthonormal linear color coordinate system, the norm being the square root of the sum of squares of the tristimulus values. The computer system determines
(510) suitable quantization steps for the brightness coordinate (B) and the chromatic coordinates (e,f) in a non-linear color coordinate system, and quantizes (520) the brightness and chromatic
coordinates accordingly.
A computer-implemented method for quantizing a first digital image, the method comprising:obtaining a first parameter representing a desired upper bound δ
for relative quantization steps to be used when quantizing any color in at least a first range of colors, wherein any adjacent colors S'; S'' available for a quantized image correspond to relative
quantization steps δS'=∥S'-S'∥/∥S'∥ and δS''=∥S'-S''∥/∥S''∥, where for any color S, ∥S∥ is the square root of the sum of squares of tristimulus values of the color S in a predefined 70%-orthonormal
linear color coordinate system ("linear CCS"); andgenerating a quantized digital image using the first parameter.
The method of claim 1 further comprising obtaining color data representing each color of the first digital image in a non-linear color coordinate system ("non-linear CCS"), wherein the non-linear CCS
includes a brightness coordinate representing the square root of the sum of squares of the tristimulus values of the linear CCS, wherein the non-linear CCS also comprises one or more chromatic
coordinates whose values are unchanged if the tristimulus values are multiplied by a positive number;wherein generating the quantized digital image comprises quantizing the brightness coordinate
using the first parameter and quantizing said one or more chromatic coordinates using the first parameter.
The method of claim 2 further comprising:using the first parameter to generate (i) a brightness parameter specifying a desired maximum bound for relative quantization steps δB for the brightness
coordinate in at least a range of values of the brightness coordinate, and (ii) one or more chromatic parameters specifying one or more desired maximum bounds for quantization steps d for the one or
more chromatic coordinates in at least one or more ranges of values of the chromatic coordinates;wherein the brightness coordinate is quantized using the brightness parameter and said one or more
chromatic coordinates are quantized using the one or more chromatic parameters.
The method of claim 3 wherein the brightness parameter and the one or more chromatic parameters are determined from the first parameter to satisfy an equation:δ
where c
, c
are predefined constants independent of the first parameter.
The method of claim 1 further comprising obtaining color data representing each color of the first digital image in a non-linear color coordinate system ("non-linear CCS"), wherein the non-linear CCS
includes a brightness coordinate representing the square root of the sum of squares of the tristimulus values of the linear CCS;wherein generating the quantized digital image comprises quantizing the
brightness coordinate, wherein in at least a range of values of the brightness coordinate the quantizing of the brightness coordinate comprises logarithmic quantizing in which a quantized brightness
coordinate is a predefined first rounded or unrounded linear function, or its rounded value, of a logarithm to a predefined base of a predefined second rounded or unrounded linear function of the
brightness, the first and/or second linear functions and/or the base being dependent on the first parameter.
The method of claim 1 wherein the linear CCS is 90%-orthonormal.
A computer system programmed to perform the method of claim
1. 8.
The computer system of claim 7 wherein the method further comprises obtaining color data representing each color of the first digital image in a non-linear color coordinate system ("non-linear CCS"),
wherein the non-linear CCS includes a brightness coordinate representing the square root of the sum of squares of the tristimulus values of the linear CCS, wherein the non-linear CCS also comprises
one or more chromatic coordinates whose values are unchanged if the tristimulus values are multiplied by a positive number;wherein generating the quantized digital image comprises quantizing the
brightness coordinate using the first parameter and quantizing said one or more chromatic coordinates using the first parameter.
The computer system of claim 8 wherein the method further comprises:using the first parameter to generate (i) a brightness parameter specifying a desired maximum bound for relative quantization steps
δB for the brightness coordinate in at least a range of values of the brightness coordinate, and (ii) one or more chromatic parameters specifying one or more desired maximum bounds for quantization
steps d for the one or more chromatic coordinates in at least one or more ranges of values of the chromatic coordinates;wherein the brightness coordinate is quantized using the brightness parameter
and said one or more chromatic coordinates are quantized using the one or more chromatic parameters.
The computer system of claim 9 wherein the brightness parameter and the one or more chromatic parameters are determined from the first parameter to satisfy an equation:δ
where c
, c
are predefined constants independent of the first parameter.
The computer system of claim 9 wherein the brightness coordinate is quantized logarithmically in a first brightness range B>B
wherein B is the brightness and B
is a predefined positive number, and the brightness coordinate is quantized linearly in the range B<B
The computer system of claim 8 wherein the method further comprises obtaining color data representing each color of the first digital image in a non-linear color coordinate system ("non-linear CCS"),
wherein the non-linear CCS includes a brightness coordinate representing the square root of the sum of squares of the tristimulus values of the linear CCS;wherein generating the quantized digital
image comprises quantizing the brightness coordinate, wherein in at least a range of values of the brightness coordinate the quantizing of the brightness coordinate comprises logarithmic quantizing
in which a quantized brightness coordinate is a predefined first rounded or unrounded linear function, or its rounded value, of a logarithm to a predefined base of a predefined second rounded or
unrounded linear function of the brightness, the first and/or second linear functions and/or the base being dependent on the first parameter.
Computer-readable manufacture comprising a computer-readable program for performing the method of claim
1. 14.
The computer-readable manufacture of claim 13 wherein the method further comprises obtaining color data representing each color of the first digital image in a non-linear color coordinate system
("non-linear CCS"), wherein the non-linear CCS includes a brightness coordinate representing the square root of the sum of squares of the tristimulus values of the linear CCS, wherein the non-linear
CCS also comprises one or more chromatic coordinates whose values are unchanged if the tristimulus values are multiplied by a positive number;wherein generating the quantized digital image comprises
quantizing the brightness coordinate using the first parameter and quantizing said one or more chromatic coordinates using the first parameter.
The computer-readable manufacture of claim 14 wherein the method further comprises:using the first parameter to generate (i) a brightness parameter specifying a desired maximum bound for relative
quantization steps δB for the brightness coordinate in at least a range of values of the brightness coordinate, and (ii) one or more chromatic parameters specifying one or more desired maximum bounds
for quantization steps d for the one or more chromatic coordinates in at least one or more ranges of values of the chromatic coordinates;wherein the brightness coordinate is quantized using the
brightness parameter and said one or more chromatic coordinates are quantized using the one or more chromatic parameters.
The computer-readable manufacture of claim 15 wherein the brightness parameter and the one or more chromatic parameters are determined from the First parameter to satisfy an equation:δ
where c
, c
are predefined constants independent of the first parameter.
The computer-readable manufacture of claim 15 wherein the brightness coordinate is quantized logarithmically in a first brightness range B>B
wherein B is the brightness and B
is a predefined positive number, and the brightness coordinate is quantized linearly in the range B<B
The computer-readable manufacture of claim 14 wherein the method Further comprises obtaining color data representing each color of the first digital image in a non-linear color coordinate system
("non-linear CCS"), wherein the non-linear CCS includes a brightness coordinate representing the square root of the sum of squares of the tristimulus values of the linear CCS;wherein generating the
quantized digital image comprises quantizing the brightness coordinate, wherein in at least a range of values of the brightness coordinate the quantizing of the brightness coordinate comprises
logarithmic quantizing in which a quantized brightness coordinate is a predefined first rounded or unrounded linear function, or its rounded value, of a logarithm to a predefined base of a predefined
second rounded or unrounded linear function of the brightness, the first and/or second linear functions and/or the base being dependent on the first parameter.
A network transmission method comprising transmitting, over a network, a computer program which is operable to cause a computer system to perform the method of claim
1. 20.
The method of claim 19 wherein the method further comprises Obtaining color data representing each color of the first digital image in a non-linear color coordinate system ("non-linear CCS"), wherein
the non-linear CCS includes a brightness coordinate representing the square root of the sum of squares of the tristimulus values of the linear CCS, wherein the non-linear CCS also comprises one or
more chromatic coordinates whose values are unchanged if the tristimulus values are multiplied by a positive number;wherein generating the quantized digital image comprises quantizing the brightness
coordinate using the first parameter and quantizing said one or more chromatic coordinates using the first parameter.
The method of claim 20 wherein the method further comprises:using the first parameter to generate (i) a brightness parameter specifying a desired maximum bound for relative quantization steps δB for
the brightness coordinate in at least a range of values of the brightness coordinate, and (ii) one or more chromatic parameters specifying one or more desired maximum bounds for quantization steps d
for the one or more chromatic coordinates in at least one or more ranges of values of the chromatic coordinates;wherein the brightness coordinate is quantized using the brightness parameter and said
one or more chromatic coordinates are quantized using the one or more chromatic parameters.
The method of claim 21 wherein the brightness parameter and the one or more chromatic parameters are determined from the first parameter to satisfy an equation:δ
where c
, c
are predefined constants independent of the first parameter.
The method of claim 21 wherein the brightness coordinate is quantized logarithmically in a first brightness range B>B
wherein B is the brightness and B
is a predefined positive number, and the brightness coordinate is quantized linearly in the range B<B
The method of claim 20 wherein the method further comprises Obtaining color data representing each color of the first digital image in a non-linear color coordinate system ("non-linear CCS"), wherein
the non-linear CCS includes a brightness coordinate representing the square root of the sum of squares of the tristimulus values of the linear CCS;wherein generating the quantized digital image
comprises quantizing the brightness coordinate, wherein in at least a range of values of the brightness coordinate the quantizing of the brightness coordinate comprises logarithmic quantizing in
which a quantized brightness coordinate is a predefined first rounded or unrounded linear function, or its rounded value, of a logarithm to a predefined base of a predefined second rounded or
unrounded linear function of the brightness, the first and/or second linear functions and/or the base being dependent on the first parameter.
The method of claim 1 wherein the non-linear CCS comprises a brightness coordinate representing the square root of the sum of squares of the tristimulus values of the linear CCS, wherein the
non-linear CCS also comprises two chromatic coordinates each of which is equal to a ratio of a respective one of the tristimulus values to the brightness coordinate;wherein generating the quantized
digital image comprises quantizing each of the chromatic coordinates with a step 1/k
where k
is a value dependent on the first parameter; andwherein generating the quantized digital image also comprises quantizing the brightness coordinate, wherein in at least a range of values of the
brightness coordinate the quantizing of the brightness coordinate comprises logarithmic quantizing in which a predefined first linear function of a natural logarithm of a predefined second linear
function of the brightness coordinate is quantized with a step 1, the first linear function multiplying said natural logarithm by a value k
dependent on the first parameter;wherein k
is about 3K
, or k
is equal to the smallest power of 2 which is greater than or equal to about 3k
The computer system of claim 10 wherein c
=9:1 and c
1. 27.
The computer-readable manufacture of claim 13 wherein the non-linear CCS comprises a brightness coordinate representing the square root of the sum of squares of the tristimulus values of the linear
CCS, wherein the non-linear CCS also comprises two chromatic coordinates each of which is equal to a ratio of a respective one of the tristimulus values to the brightness coordinate;wherein
generating the quantized digital image comprises quantizing each of the chromatic coordinates times k
with a step 1, where k
is a value dependent on the first parameter; andwherein generating the quantized digital image also comprises quantizing the brightness coordinate, wherein in at least a range of values of the
brightness coordinate the quantizing of the brightness coordinate comprises logarithmic quantizing in which a predefined first linear function of a natural logarithm of a predefined second linear
function of the brightness coordinate is quantized with a step 1, the first linear function multiplying said natural logarithm by a value k
dependent on the first parameter;wherein k
is about 3k
, or k
is equal to the smallest power of 2 which is greater than or equal to about 3k
The method of claim 19 wherein the non-linear CCS comprises a brightness coordinate representing the square root of the sum of squares of the tristimulus values of the linear CCS, wherein the
non-linear CCS also comprises two chromatic coordinates each of which is equal to a ratio of a respective one of the tristimulus values to the brightness coordinate;wherein generating the quantized
digital image comprises quantizing each of the chromatic coordinates times k
with a step 1, where k
is a value dependent on the first parameter; andwherein generating the quantized digital image also comprises quantizing the brightness coordinate, wherein in at least a range of values of the
brightness coordinate the quantizing of the brightness coordinate comprises logarithmic quantizing in which a predefined first linear function of a natural logarithm of a predefined second linear
function of the brightness coordinate is quantized with a step 1, the first linear function multiplying said natural logarithm by a value k
dependent on the first parameter;wherein k
is about 3k
, or k
is equal to the smallest power of 2 which is greater than or equal to about 3k
A computer-implemented method for image processing, the method comprising converting a digital image between a first digital representation of the image and a second digital representation of the
image, wherein the image comprises a plurality of image portions, wherein an image at each said image portion has color coordinates S
, S
, S
in a first color coordinate system (CCS), whereinS
= {square root over (T.sub.
, T
, T
are color coordinates in a predefined 70%-orthonormal linear CCS;wherein at least for each said image portion whose color coordinate S
is in a predefined range, the first digital representation comprises color coordinates s
, s
, s
such that:s
)+β) rounded to an integer, where α and β are predefined constants,S
rounded to an integer,S
rounded to an integer,wherein k
is about 3k
, or k
is equal to the smallest power of 2 which is greater than or equal to about 3k
A computer system adapted to perform the method of claim
29. 31.
A network transmission method comprising transmitting, over a network, a computer program which is operable to cause a computer system to perform the method of claim
29. 32.
A computer-readable manufacture comprising computer-readable encoding of digital data representing an image at a plurality of image portions, wherein the image comprises a plurality of image
portions, wherein an image at each said image portion has color coordinates S
, S
, S
in a first color coordinate system (CCS), whereinS
= {square root over (T.sub.
/S.s- ub.1 S
, T
, T
are color coordinates in a predefined 70%-orthonormal linear CCS;wherein at least for each said image portion whose color coordinate S
is in a predefined range, the first digital representation comprises color coordinates s
, s
, S
such that:s
)+β) rounded to an integer, where α and β are predefined constants,S
rounded to an integer,S
rounded to an integer,wherein k
is about 3k
, or k
is equal to the smallest power of 2 which is greater than or equal to about 3k
A computer system adapted to perform the method of claim
32. 34.
A network transmission method comprising transmitting, over a network, a computer program which is operable to cause a computer system to perform the method of claim
BACKGROUND OF THE INVENTION [0001]
The present invention relates to digital image processing involving quantization of digital representations of color images.
Quantization involves reducing the set of colors available for image representation in order to reduce the size of the image data. Quantization errors should preferably be small so that image
distortion would be imperceptible to a human observer. Hence, any two adjacent colors in the reduced set of colors should be indistinguishable to a human. At the same time, the reduced set of colors
should itself be small in order to allow significant reduction in the image data size. These conflicting goals can be hard to balance if the quantization method must be suitable for a large gamut,
e.g. the entire visible gamut (to be able to accommodate high dynamic range (HDR) images for example).
Each color can be represented by its coordinates in a color coordinate system (CCS), and each coordinate can be quantized separately. For this purpose, a reduced set of values can be defined for each
coordinate. The color coordinate system can be chosen to provide luminance/chrominance separation, e.g. one coordinate for the luminance and two for the chrominance, with a larger reduced set for the
luminance to accommodate high human sensitivity to luminance distortion.
The quantized image can be coded in a floating point format (with separate representation of the mantissa and the fraction) or an integer format. The integer format is preferred in those cases in
which the quantized data is subjected to further processing involving arithmetic and/or logic operations because on many computers such operations are faster if the operands are in integer format.
An exemplary quantization method is described in Gregory Ward Larson, "The LogLuv Encoding for Full Gamut, High Dynamic Range Images", Silicon Graphics, Inc. Mountain View, Calif. That method
quantizes colors defined in either the CIE XYZ color coordinate system ("CCS") or its derivative system xyY. The colors are quantized into 32-bit values, and we will refer to this method as LogLuv32.
The luminance component Y is quantized into the following value:
=.left brkt-bot.256(log
Y+64).right brkt-bot. (1)
The function
.left brkt-bot.•.right brkt-bot. is the floor, i.e. .left brkt-bot.x.right brkt-bot. is the largest integer not exceeding x. L
is coded as a 16-bit integer, with one bit for the sign and 15 bits for the magnitude. This logarithmic coding provides a constant or almost constant upper bound for the relative error ΔY/Y. The
relative error is a good representation of the human ability to perceive luminance distortion. The upper bound for the relative error should be uniform (constant), and as large as possible, for
effective reduction of the size of image data. According to the "LogLuv encoding" article cited above, the maximum error in quantization (1) is 0.27% (i.e. 0.0027) over the entire visible gamut.
For each color, the LogLuv32 method quantizes the color's chrominance coordinates x, y into a pair of 8-bit integer values u
, v
as follows:
=.left brkt-bot.410u'.right brkt-bot.
=.left brkt-bot.410v'.right brkt-bot. (2)
where u
'=9y/(-2x+12y+3) (3)
Another quantization method for HDR images is described in Rafal Mantiuk, Karol Myszkowski, Hans-Peter Seidel, "Lossy Compression of High Dynamic Range Images and Vidio", Proceedings of SPIE, Volume
6057 (2006). This method quantizes the chromatic coordinates as in (2), and quantizes the physical luminance y into a 12-bit value l(y) as follows (the physical luminance y is not the same y as in
the xyY system):
( y ) = { a y if y < y l b y c + d if y l ≦ y < y h e log ( y ) if y ≧ y h ( 4 ) ##EQU00001##
Here a
, b, c, e, y
, y
are positive constants, d and f are negative constants. Choosing non-logarithmic quantization for lower values of the physical luminance y in (4) is based on studies of human visual perception as
described in the "Lossy Compression" article.
SUMMARY [0008]
This section summarizes some features of the invention. Other features are described in the subsequent sections. The invention is defined by the appended claims, which are incorporated into this
section by reference.
It is clear that human perception of color distortion depends not only on the luminance errors but also on the chrominance errors. The inventors use a new measure of color distortion which takes into
account both luminance and chrominance errors. Some embodiments of the present invention provide quantization techniques developed by the inventors based on the new measure of color distortion. In
some embodiments, a human user or some other entity can specify a desired upper bound for a relative quantization step, wherein the relative quantization step is a measure of the difference between
adjacent colors in the reduced set of colors available for the quantized image. The relative quantization step is defined in terms of the new measure of the color distortion. The system provides a
quantized image in which the maximum relative quantization step may at least approximately match the specified upper bound. Hence, a controlled trade-off is provided between the image distortion and
the size of the image data. In some embodiments, the quantization methods cover the entire visible gamut.
The invention is not limited to the features and advantages described above. Other features are described below. The invention is defined by the appended claims, which are incorporated into this
section by reference.
BRIEF DESCRIPTION OF THE DRAWINGS [0011]
FIG. 1 is a geometric illustration of an orthonormal color coordinate system DEF2 and its derivative CCS Bef.
[0012]FIG. 2
is a geometric illustration of regions each of which is mapped into a single color when each DEF2 coordinate D, E, F is quantized separately.
[0013]FIG. 3
is another geometric illustration of such regions.
[0014]FIG. 4
is a geometric illustration of estimating a relative quantization step in Bef:
[0015]FIG. 5
is a flowchart of a quantization operation according to some embodiments of the present invention.
FIG. 6 illustrates a format for a quantized digital image according to some embodiments of the present invention.
[0017]FIG. 7
is a flowchart of decoding quantized data to obtain color coordinates of a quantized image according to some embodiments of the present invention.
[0018]FIG. 8
is a block diagram of a computer system suitable for performing quantization coding and/or decoding according to some embodiments of the present invention.
DESCRIPTION OF SOME EMBODIMENTS [0019]
The embodiments described in this section illustrate but do not limit the invention. The invention is defined by the appended claims.
In order to characterize a quantization method both with respect to the luminance and with respect to the chrominance, the inventors use a metric in an orthonormal color coordinate system.
Orthonormal CCS's are defined in U.S. patent application Ser. No. 11/494,393 filed Jul. 26, 2006 by S. Bezryadin (who is also one of the inventors herein), published as no. 2007/0154083 A1 on Jul. 5,
2007, incorporated herein by reference. The orthonormal CCS's are also described Appendix A hereinbelow. In particular, some embodiments use the DET2 CCS described in Appendix A (this CCS is denoted
as DEF in the aforementioned patent application 2007/0154083; "2" in DEF2 denotes the 2° field of the underlying XYZ CCS). See also the inventors' presentations entitled "Local Criterion of Quality
for Color Difference Formula" and "Color Coordinate System for Accurate Color Image Editing Software", incorporated herein by reference. Both presentations were made at the International Conference
on Printing Technology held in St. Petersburg, Russia on Jun. 26-30, 2006 and are available at the web site of KWE International, Inc. of San Francisco, Calif. at the following respective addresses:
http://www.kweii.com/site/color_theory/2006_SPb/lc/lc.pdf http://www.kweii.com/site/color_theory/2006_SPb/ccs/ccs.pdf
Suppose that a color S is replaced with a color S' in quantization. The colors available for a quantized image will be called Q-colors herein. Let ΔS=S'-S denote the difference between the original
color S and the corresponding Q-color S'. The difference can be defined in terms of tristimulus values in a linear CCS. For example, in DEF2, if S=(D, E, F) and S'=(D', E', F'), then ΔS=(D'-D, E'-E,
ΔS may alternatively be set to S-S'.
The color distortion measure used by the inventors is the ratio
δS=∥ΔS∥/∥S∥ (5)
where the operator
∥•∥ represents the Euclidian (Pythagorean) norm in DEF2:
∥S∥= {square root over (D
)} (6)
Using the measure (5), the inventors analyzed the indistinguishability data described in Wyszecki & Stiles, "Color Science; Concepts and Methods, Quantitative Data and Formulae" (2nd Ed. 2000),
section 5.4 (pages 306-330; 793-803), incorporated herein by reference. For a color S, the corresponding indistinguishability region is the set of all colors indistinguishable from S to a human
observer. As described in the Color Science reference, studies by Brown, MacAdam, Wiszecki and Fielder have indicated that in the CIE AYZ CCS, each indistinguishability region can be represented as
an ellipsoid with a center at S. Since the DEF2 CCS is a linear transformation of the XYZ CCS, each indistinguishability region is an ellipsoid in DEF2.
For a number of such ellipsoids, the ellipsoids' geometry parameters in the XYZ CCS are specified in the Color Science reference, Table I (5.4.3), pages 801-803. Using these geometry parameters, the
inventors calculated the lengths of the ellipsoids' shortest axes in DEF2. Let a
(S) denote the length of the shortest axis of the DEF2 ellipsoid centered at S. The value a
(S) is thus the diameter of the ball inscribed into the ellipsoid. Clearly, the inscribed ball consists of all colors S' such that
(S)/2 (7)
The inventors have discovered that for all the ellipsoids in Table
(S)/∥S∥≧0.01 (8)
, if
δS=∥S'-S∥/∥S∥≦0.005 (9)
then S
' lies within the ellipsoid centered at S, and hence S' and S are indistinguishable.
Let us suppose that each DEF2 coordinate D, E, F is quantized linearly with a quantization step d. Then the color space can be partitioned into cubes 210 (
FIG. 2
) of side d. The colors in each cube 210 are mapped into the cube's center, i.e. are replaced with the cube's center in the quantized image. Each cube 210 will be called a Q-region herein. More
generally, a Q-region is the set of all colors mapped into a single Q-color in quantization. Each cube 210 has 26 adjacent cubes. FIG. 2 shows some of the cubes adjacent to a cube 210.0.
FIG. 2
does not show the nine cubes immediately in front of cube 210.0.
FIG. 3
illustrates a block of the 27 cubes which includes the cube 210.0 at the center and also includes the 26 cubes adjacent to cube 210.0. In order to avoid generation of artifacts during quantization,
the centers of all the 27 cubes should lie in a single ellipsoid. This condition is satisfied if the distance between the center of cube 210.0 and the center of any other one of the cubes does not
exceed a
)/2 where S
is the center of cube 210.0.
Clearly, the distance between the centers of cube 210.0 and an adjacent cube is the greatest for the eight cubes positioned diagonally with respect to cube 210.0, including cubes 210.1, 210.2. Each
of the eight cubes shares only a vertex with cube 210.0. The distance between the centers of cube 210.0 and any one of the eight cubes is d {square root over (3)}. Therefore, the following is a
sufficient condition for the centers of all the 27 cubes to lie in the ellipsoid centered at S
{square root over (3)}≦a
)/2 (10)
If the coordinates D
, E, F are quantized with respective different quantization steps, the condition (10) is sufficient for indistinguishability if d is the largest quantization step.
The inequality (8) provides a lower bound for a
) as 0.001∥S
∥, and hence (10) is satisfied if d2 {square root over (3)} v does not exceed that bound, or equivalently if
∥≦0.01/2 {square root over (3)}≈0.003 (11)
DEF2 does not provide luminance/chrominance separation. The Bef CCS described in Appendix A and FIG. 1 does provide luminance/chrominance separation. The quantization step d in the E and F
coordinates (
FIG. 2
) corresponds to step d/B in the chromatic coordinates e=E/B, f=F/B. Let us denote the quantization steps in e and f as d
and d
respectively, and assume for simplicity that d
. Denoting this common value as d
we obtain from (11):
∥≦0.01/2 {square root over (3)}0.003 (12)
The estimate (12) provides heuristic guidance in the choice of d
but this estimate was obtained based on the (11) computation for DEF2, not Bef, i.e. for separate quantization of the D coordinate, not the B coordinate. In particular, the Q-regions in Bef are not
cubes. The DEF2 analysis will now be carried over to Bef to derive estimates for the Bef quantization steps d
, d
and δB=|B'-B|/|B| (the relative step for the brightness).
FIG. 2
, suppose S
(the center of cube 210.0) has DEF2 coordinates (D
, E
, F
). Then the set of the 27 Q-colors at the centers of the 27 Q-regions 210 can be described as (D
, E
, F
), (D
±d, E
, F
), (D
, E
±d, F
), etc. This set is thus the set of all colors (D, E, F) such that:
D can be any one of D
, D
+d, D
E can be any one of E
, E
+d, E
-d; and
F can be any one of F
, F
+d, F
In other words
, the centers of the 27 Q-regions 210 are the following set:
-d}} (13)
Each cube can be defined as the set of all the colors
(D, E, F) such that each coordinate D, E, F is in some interval. For example, cube 210.0 is the following set:
+d/2)} (14)
Each interval may or may not include its end points
, and will typically include one of the end points; for example, (D
-d/2, D
+d/2) can be replaced with (D
-d/2, D
+d/2]. Each end point, e.g.
+d/2 (15)
defines a plane containing the cube
's edge.
Similar analysis will now be performed for Bef. Each coordinate B, e, f can be quantized separately. The B coordinate can be quantized into one of values {B
} where i is some index. Similarly, each of the e and f coordinates can be quantized respectively into one of values {e
} or {f
} where j and k are indices. Thus, the Q-colors are the colors (B
, e
, f
). The indices i, j, k can be in any set. We will assume for the sake of illustration that i, j, k are integers, and that increasing indices correspond to increasing coordinates, i.e.
. . . B
+1< . . .
. . . e
+1< . . .
. . . f
< . . .
By analogy with (14), for each Q-color (B
, e
, f
), the corresponding Q-region is the set of all the colors (B, e, f) such that each of B, e, f is in some interval containing the respective value B
, e
, or f
, (such interval is called a neighborhood of B
, e
, or f
). By analogy with (15), the Q-region's boundaries are surfaces defined by one of the coordinates being a constant equal to an end point of the corresponding interval, i.e. B=const., e=const. or f=
For any constant a>0, the equation B=a defines a sphere 410 (FIG. 4) of radius a and the center at the origin O of the DEF2 coordinate system. Therefore, a neighborhood of B
corresponds to a spherical shell including the sphere B=B
. The equation e=a defines a circular conical surface about the E axis with the vertex at the origin O. Therefore, a neighborhood of e
corresponds to a conical shell. Similarly a neighborhood of f
corresponds to a conical shell about the F axis. The Q-region for the point (B
, e
, f
) is therefore the intersection of the spherical shell for B
with the conical shells for e
and f
and with the visible spectrum. The visible spectrum is a region contained in a non-circular cone centered at the origin O and extending upwards (D≧0) about the D-axis.
By analogy with
FIG. 2
, each Q-region in Bef has 26 adjacent Q-regions, with their 26 Q-colors (except that there are fewer than 26 adjacent regions at the visible spectrum's boundary.) By analogy with (13), each Q-color
, e
, f
) and its 26 adjacent Q-colors (in the adjacent Q-regions) form the following set:
+1 };
+1}} (16)
All the
27 Q-colors should lie in a single ellipsoid. This condition is satisfied (see 8.5) if, for all colors S' in the set (16):
∥.ltoreq- .0.005 (17)
where S[0]
, e
, f
). The condition (17) should be satisfied for any Q-color S
, and hence for the maximum δS value, which will be denoted δ
We will now express δ
in terms of the quantization steps δ
, d
, d
(we use the symbols δ
and δ
interchangeably). Given a Q-color S=(B, e, f) and an adjacent Q-color S'=(B+dB, e+de, f+df), the vector ΔS=S'-S can be represented as the sum of two vectors (
FIG. 4
is obtained by changing e, f while keeping B constant, and Δ
is obtained by changing B while keeping e, f constant:
Since the angle between S and S' is small, Δ
and Δ
are approximately mutually orthogonal in DEF2 because Δ
is directed approximately along the spherical surface (B=constant) and Δ
perpendicularly to the spherical surface, radially from the origin. Therefore:
.parall- el.
^2 Since
∥S∥=|B| and ∥Δ
∥=|ΔB|, we can write (after dividing through by ∥S∥
+δ- B
/2 (19)
Based on their analysis of experimental data on human visual perception as described in Appendix B hereinbelow, the inventors have obtained the following approximate bound for ∥Δ
∥/B≦3max(de,df) (20)
This bound and
(19) provide:
).su- p.1/2 (21)
Denoting [0042]
) (22)
we can write
/2 (23)
The condition
(9) therefore leads to the following desirable condition:
/2≦0.005 (24)
In some embodiments, the B coordinate is quantized using a combination of linear and logarithmic coding, as described in U.S. patent application Ser. No. 11/564,730 filed on 29 Nov. 2006 by S.
Bezryadin (also named as inventor in the present application) and incorporated herein by reference. The e and f coordinates are quantized linearly. More particularly, the values (B, e, q) are coded
as B
, e
, f
as follows:
if B≦B
)) if B>B
e (25B)
f (25C)
) (26A)
) (26B)
) (26C)
where k[1]
, k
, k
, k
and B
are suitable constants. The function Round() represents rounding to the nearest integer. The half-integers can be rounded according to any suitable convention. For example, in some embodiments the
half-integers are rounded up if they are positive, down if they are negative:
(x)=.left brkt-bot.x+1/2.right brkt-bot.if x≧0
(x)=.left brkt-top.x-1/2.right brkt-bot.if x<0. (27)
.left brkt-top..right brkt-bot. is the ceiling function, i.e. .left brkt-top.x.right brkt-bot. is the smallest integer greater than or equal to x. Other rounding techniques are also possible.
In some embodiments, B
is a continuous function of B at B=B
, and therefore (25A-1) and (25A-2) imply that k
. We will denote this common value as k
. Thus:
if B≦B
)) if B>B
For small brightness values (such as B<B
), a suitable measure of human perception of brightness distortion is the absolute error Δ
rather than the relative error δB. The absolute error ΔB is a better measure if the image is so dark as to be viewed without physical adaptation of the eye (e.g. iris constriction). The value B
is therefore chosen suitably low in some cases, e.g. 1 cd/m
. See the aforementioned U.S. patent application Ser. No. 11/564,730. The absolute error
can be low if B[L]
is low and k
is high.
In some embodiments, k
, and we will denote this value as k
The inverse transformation converts an integer triple (B
, e
, f
) into a Q-color (B
, e
, f
) as follows:
if B
-1 if B
We will now describe the set Q
of possible values e
, the set Q
of possible values f
, and the set Q
of possible values B
. The e coordinate can take any value from -1 to 1 inclusive. Therefore, e
can be any integer in the range of Round(-k
) to Round(k
). Assuming that k
is integer, the set Q
consists of the following values (see 10.3C):
,0, . . . , (k
,1} (31)
The set Q[e]
has (2k
+1) possible values, which is the number of possible values of the coded coordinate e
. Therefore, .left brkt-top.log
+1).right brkt-bot. bits are needed if a separate field is used to represent e
in the quantized image. In some embodiments, -1 or 1 or both are omitted from Q
, and so .left brkt-top.log
).right brkt-bot. bits are sufficient. In this case, the value k
can be chosen as a power of 2 in order not to waste the bits in the e
representation. Choosing k
as a power of 2 may also improve the computation speed (see (25B) for example).
Similarly, the set Q
consists of the following values:
, . . . , 0, . . . ,(k
,1} (33)
The set Q
has (2k
+1) possible values, which is the number of possible values of the coded coordinate f
. Therefore, .left brkt-top.log
+1).right brkt-bot. bits are needed if a separate field is used to represent f
. In some embodiments, -1 or 1 or both are omitted from Q
, and so .left brkt-top.log
).right brkt-bot. bits are sufficient. The value k
can be chosen then as a power of 2.
It follows from (22) that
) (35)
The set Q
of the B
values consists of:
1. Linear range values:
, . . . , B
} (36A)
2. Logarithmic range values:
- )/k
, . . . , B
} (36B)
where B[Qmax]
is the maximum value of B
. This set has (B
+1) possible values, so .left brkt-top.log
+1).right brkt-bot. bits are needed if a separate field is used to represent B
In the logarithmic range, the relative brightness error δB is
-1 (37)
In the linear range
For the logarithmic range, the bound δ
can be obtained from (35) and (37). The logarithmic range is believed to be the most important for many images.
The dynamic range DR
in the logarithmic range is
(assuming the minimum brightness value of B
). The total dynamic range DR is:
Let us consider an example. Suppose the user wants to quantize each color into a 32-bit value, i.e. B
, e
, f
must fit into 32 bits. If we want d
, f
to satisfy (12), then each of k
, k
has to be at least 1/0.003=334 (see (32), (34)). The number of bits needed to represent each of e
, f
is therefore .left brkt-top.log
(2334+1).right brkt-bot.=10. Hence, we can increase k
, k
to 512, and thus decrease d
and d
(and hence δ
), without increasing the data size. In this case (see (22)):
If 10 bits are allocated for each of e
, f
, then 12 bits are left for B
. Therefore, .left brkt-top.log
+1).right brkt-bot.=12, and hence B
Preferably, δB should not exceed 0.005. See (24) and the aforementioned U.S. patent application Ser. No. 11/564,730. From (37) we obtain:
so k[B]
In some embodiments
, k
=256. Then δB≈0.4, and δ
is slightly above 0.007. In some embodiments, B
=1 cd/m
The total dynamic range is over 810
, and the dynamic range in the logarithmic range is over 310
(see (39) and (40)).
The dynamic range of the visible spectrum is about 10
(when measured as the ratio of the physical luminance of the moonless sky at night to the physical luminance of the sun's surface). See Mantiouk's article cited above. The dynamic range of the Bef
coding can be increased while keeping the color distortion small by increasing the size of the quantized data (B
, e
, f
) beyond 32 bits. The relative error δS can be reduced if the quantized data size is increased.
Some embodiments allow the user to designate a desired upper bound δ
for the relative error δS. Larger δ
values provide smaller size of quantized data. For example, if the initial image is noisy, e.g. obtained by a noisy camera, then specifying δ
=0.005 may serve to preserve the noise, leading to a waste of storage for the quantized data. Therefore, a larger δ
value may be appropriate. The invention is not limited to particular reasons for selecting a particular δ
value, and values both above and below 0.005 can be used. Of note, values smaller than required for visual indistinguishability may be appropriate if additional errors can be introduced after
quantization (e.g. during image editing or display).
When the user specifies δ
, the computer system quantizes the image so as to meet the δ
bound at least approximately. For that purpose, the computer system automatically selects the number of bits for each of B
, e
, f
and performs quantization.
One quantization embodiment is shown in
FIG. 5
. At step 510 (FIG. 5), the computer system obtains the image data that need to be quantized. The image data may be in Bef, or may be converted to Bef by the computer system. The computer system also
receives the δ
parameter from the user or some other source (e.g. from a computer program).
At step 510, the computer system determines quantization parameters (e.g. k
, k
, etc.) for each of B, e, f as described in more detail below. At step 520, the computer system performs quantization and outputs the quantized data B
, e
, f
. The quantized data can be stored in a file, together with the δ
parameter if desired, as illustrated in FIG. 6.
The quantization parameters can be obtained using the relationship (23) and a predefined ratio r=δB:δ
. The r value can be defined by the user, the system designer, or by some other entity. If δ
is reduced (i.e. if r is reduced), the brightness distortion will be reduced at the expense of the chromatic distortion, and vice versa. At step 510, the computer computes δ
and δ
("sqrt" denotes square root).
The parameters k
, k
, k
can be obtained, for example, as follows. If k
then (35) provides that
Due to
(37), we can set
) (42)
The k[B]
computation can be simplified by setting:
because ln
for small δ
Example: suppose r=1. Then
=0.005, then k
=283, and k
=849. The value k
which can be rounded up to 1024 (10 bits for each of e and f). The invention is not limited to particular computations however.
[0070]FIG. 7
shows decoding of the quantized data to generate the quantized image (i.e. the Q-colors) such as may be performed to display the image encoded as in FIG. 6. At step 710, a computer system obtains the
coded data B
, e
, f
and quantization parameters such as k
, k
for example. The quantization parameters can be provided to the computer system with the coded data, or can be computed by the computer system from other parameters such as, for example, δ
and r. At step 720, the computer system computes the image colors (B
, e
, f
FIG. 8 illustrates a computer system 810 suitable for either the quantization operation of FIG. 5 or the decoding of FIG. 7. The computer system may include one or more computer processors 812 which execute computer instructions, and may also include computer storage 814 (including possibly magnetic, optical, semiconductor, and/or other types of storage, known or to be invented). If data display is desired, the system may include one or more display devices 816 (e.g. monitors, printers, or some other type, known or to be invented). If a network connection is desired, network interface 817 can be provided. Image data and computer programs manipulating the data (such as programs that perform the methods described herein) can be stored in storage 814 and/or read over a network.
The invention is not limited to the embodiments described above. For example, at step 510, δ could be estimated as 4·max(d_e, d_f) for (23), or using some other multiplier k instead of 4. Alternatively, δ could be estimated as
δ² = c_1·d² + c_2·δB²    (44)
where c_1, c_2 are any positive constants. Thus, in some embodiments, the computer system receives the values δ and r and at step 510.2 computes the values d = max(d_e, d_f) and δB to satisfy (44). Then the computer system sets k_e and k_f from d, and computes k_B as in (42) or (43). Other expressions for δ are possible, such that the chromatic steps (e.g. d_e, d_f) and the relative brightness step δB could be increasing functions of δ which are strictly increasing for at least some values of δ. In one example, d_e and d_f do not necessarily coincide; for instance, d_e and d_f can each be set proportional to δB, as in (45) and (46), where c_e, c_f are positive constants. Additional information can be provided instead of r to determine d_e, d_f and δB from δ. An example of such information is the double ratio d_e : d_f : δB. Then k_e, k_f can be determined at step 510.2 as in (32), (34), and k_B can be determined as in (42) or (43). The double ratio can be provided with the quantized data (as in FIG. 6) together with δ.
The invention is not limited to particular theories such as the particular study of ellipsoids and error estimates explained above or in the appendices below. These studies are incomplete because
they do not cover the ellipsoids of all the possible colors and involve other simplifications such as approximate computations, simplified models of human visual perception, and possibly others. The
invention is directed to particular quantization techniques as recited in the claims, and not to any theories leading to these techniques.
In some embodiments, different computations can be combined in a single computation, omitting the computation of intermediate values (such as the values B, e, f). Also, multiplication by k_B can be omitted, and the quantized values can be computed simply by taking the most significant bits in the fixed-point representation of these values. Of note, the term "integer format" as used herein includes fixed-point format. The invention is not limited to integer format. Color coordinate systems other than Bef can be used.
Some embodiments include a computer-implemented method for quantizing a first digital image, the method comprising: obtaining a first parameter representing a desired upper bound δ for relative quantization steps to be used in quantizing any color in at least a first range of colors. The first range of colors may be, for example, the set of all visible colors with B > B_0, where B_0 is a predefined value, e.g. 1 cd/m². Alternatively, the first range may be the entire visible range. Other ranges are also possible. A relative quantization step for a color is defined as follows. If S′, S″ are adjacent Q-colors, then the relative quantization steps δS′, δS″ are:
δS′ = ∥S″ − S′∥/∥S′∥,  δS″ = ∥S″ − S′∥/∥S″∥
where for any color S, ∥S∥ is the square root of the sum of squares of the tristimulus values of the color S in a predefined 70%-orthonormal linear color coordinate system ("linear CCS"), for example, in DEF2.
The definition of a relative quantization step involves the concept of "adjacent" Q-colors. Q-colors S′, S″ are called adjacent if one or both of the following conditions (A), (B) hold true:
(A) S′, S″ correspond to adjacent Q-regions, i.e. the Q-regions that map into S′ and S″ are adjacent to each other;
(B) Suppose that quantization involves separately quantizing each color coordinate in some CCS (e.g. in Bef). Let us denote the CCS coordinates as α, β, γ (for example, α=B, β=e, γ=f). For the Q-colors, denote the set of all possible α values as Q_α; the set of all possible β values as Q_β; and the set of all possible γ values as Q_γ. Then S′=(α′, β′, γ′) and S″=(α″, β″, γ″) are called adjacent if each of the following conditions (i), (ii), (iii) is true:
(i) α′ and α″ coincide or are adjacent in Q_α;
(ii) β′ and β″ coincide or are adjacent in Q_β; and
(iii) γ′ and γ″ coincide or are adjacent in Q_γ.
For example, in the quantized coordinates, the colors (B, e, f) and (B+1, e−1, f) are adjacent.
The method further comprises generating a quantized digital image using the first parameter. For example, generation of the quantized digital image may involve defining the sets Q_α, Q_β, Q_γ using the value δ. See e.g. (31), (33), (36A), (36B). The sets Q_α, Q_β, Q_γ define the set of colors available for quantization and hence define the quantization steps in each coordinate. Alternatively, the sets Q_α, Q_β, Q_γ may be defined incompletely. For example, the value B_0 may be undefined.
In some embodiments, the first digital image is represented by data which represent each color of the first digital image using a non-linear color coordinate system ("non-linear CCS"). One suitable example of the non-linear CCS is Bef. The non-linear CCS includes a brightness coordinate (e.g. B) representing the square root of the sum of squares of the tristimulus values of the linear CCS. For example, the brightness coordinate can be B = sqrt(D² + E² + F²). Alternatively, the brightness coordinate can be −B, or 2B, or some other representation of sqrt(D² + E² + F²). The non-linear CCS also comprises one or more chromatic coordinates (e.g. e, f) whose values are unchanged if the tristimulus values are multiplied by a positive number (for example, the e, f coordinates of a color S with tristimulus values (D, E, F) are the same as of a color kS with tristimulus values (kD, kE, kF) where k is a positive number).
In some embodiments, generating the quantized digital image comprises quantizing the brightness coordinate using the first parameter and also quantizing said one or more chromatic coordinates using
the first parameter.
In some embodiments, the computer system uses the first parameter to generate (i) a brightness parameter (e.g. δB) specifying a desired maximum bound for relative quantization steps δB for the brightness coordinate in at least a range of values of the brightness coordinate (e.g. at least in the logarithmic range), and (ii) one or more chromatic parameters (e.g. d) specifying one or more desired maximum bounds for quantization steps d for the one or more chromatic coordinates in at least one or more ranges of values of the chromatic coordinates (e.g. in the entire ranges −1 ≤ e ≤ 1, −1 ≤ f ≤ 1). The brightness coordinate is then quantized using the brightness parameter, and said one or more chromatic coordinates are quantized using the one or more chromatic parameters.
In some embodiments, the brightness parameter and the one or more chromatic parameters are determined from the first parameter to satisfy the equation:
δ² = c_1·d² + c_2·δB²
where d is a quantization step used in each of the one or more chromatic coordinates, and c_1, c_2 are predefined constants independent of the first parameter. For example, in (21)-(23), c_1 = 9 and c_2 = 1.
In some embodiments, generating the quantized digital image comprises quantizing the brightness coordinate, wherein in at least a range of values of the brightness coordinate (e.g. the range B > B_0), the quantizing of the brightness coordinate comprises logarithmic quantizing in which a quantized brightness coordinate is a predefined first rounded or unrounded linear function of a logarithm to a predefined base (e.g. the natural base e; changing the base corresponds to multiplication by a positive constant) of a predefined second rounded or unrounded linear function of the brightness (e.g. the function B/B_0), the first and/or second linear functions and/or the base being dependent on the first parameter. For example, in the case (28B) the first linear function can be f_1 and the second linear function can be f_2. In this case, the value k_B, and hence the function f_1, are dependent on the first parameter (on δ), while f_2 is independent of the first parameter.
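As an illustration of this logarithmic quantizing, here is a minimal sketch (not code from the patent; the parameter names k_B and B0 are assumptions) in which the quantized brightness is a rounded linear function of ln(B/B0):

import math

def quantize_brightness(B, k_B, B0):
    """Map a brightness B > B0 to an integer code (rounded linear function of a log)."""
    return round(k_B * math.log(B / B0))

def dequantize_brightness(code, k_B, B0):
    """Invert the mapping up to the quantization error."""
    return B0 * math.exp(code / k_B)

# Adjacent codes differ by a relative brightness step of exp(1/k_B) - 1,
# approximately 1/k_B for large k_B; this is how a relative-step bound deltaB
# translates into a choice of k_B (cf. ln(1 + deltaB) ~ deltaB above).
print(quantize_brightness(100.0, 283, 1.0))   # prints 1303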
Some embodiments include computer systems programmed to perform the quantization methods according to embodiments of the present invention. Some embodiments include computer-readable manufactures
(e.g. computer disks, tapes, and other types of memories) comprising computer-readable programs for performing the quantization methods of embodiments of the present invention. The present invention
also covers network transmission methods for transmitting such computer programs (e.g. to upload or download the programs into a computer system).
Other embodiments and variations are within the scope of the invention, as defined by the appended claims.
APPENDIX A. Orthonormal Color Coordinate Systems
Let d̄(λ), ē(λ), f̄(λ) be color matching functions of some linear color coordinate system (CCS). The CCS is called orthonormal if its color matching functions d̄(λ), ē(λ), f̄(λ) form an orthonormal system in the function space L² on [0, ∞) (or on any interval containing the visible range of the λ values if the color matching functions are zero outside of this range), that is:
∫₀^∞ d̄(λ)ē(λ)dλ = ∫₀^∞ d̄(λ)f̄(λ)dλ = ∫₀^∞ ē(λ)f̄(λ)dλ = 0
∫₀^∞ [d̄(λ)]²dλ = ∫₀^∞ [ē(λ)]²dλ = ∫₀^∞ [f̄(λ)]²dλ = K    (A1)
where K is a positive constant defined by the measurement units for the wavelength λ and the radiance P(λ). The units can be chosen so that K=1.
The integrals in (A1) can be replaced with sums if the color matching functions (CMF's) are defined at discrete λ values, i.e.:
Σ_λ d̄(λ)ē(λ) = Σ_λ d̄(λ)f̄(λ) = Σ_λ ē(λ)f̄(λ) = 0
Σ_λ [d̄(λ)]² = Σ_λ [ē(λ)]² = Σ_λ [f̄(λ)]² = K    (A2)
where the sums are taken over a discrete set of the λ values. The constant K can be different than in (A1). Color matching functions will be called orthonormal herein if they satisfy the equations (A1) or (A2).
Color coordinate systems DEF2, BEF, and Bef can be defined, for example, as follows. DEF2 is a linear CCS defined as a linear transformation of the 1931 CIE XYZ color coordinate system for the 2° field. The DEF2 coordinates D, E, F are:
[D E F]^T = A_XYZ-DEF · [X Y Z]^T    (A3)
A_XYZ-DEF = [  0.205306   0.712507   0.467031
               1.853667  -1.279659  -0.442859
              -0.365451   1.011998  -0.610425 ]    (A4)
It has been found that for many computations, adequate results are achieved if the elements of the matrix A_XYZ-DEF are rounded to four digits or fewer after the decimal point, i.e. the matrix elements can be computed with an error Err ≤ 0.00005. Larger errors can also be tolerated in some embodiments.
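For illustration only, here is a minimal sketch (not code from the patent; the function name is an assumption, the matrix entries are as given in (A4)) applying the transformation (A3)-(A4) to XYZ tristimulus values:

import numpy as np

A_XYZ_DEF = np.array([
    [ 0.205306,  0.712507,  0.467031],
    [ 1.853667, -1.279659, -0.442859],
    [-0.365451,  1.011998, -0.610425],
])

def xyz_to_def2(xyz):
    """Convert CIE 1931 XYZ (2-degree) tristimulus values to DEF2."""
    return A_XYZ_DEF @ np.asarray(xyz, dtype=float)

# A D65-like white lands essentially on the D axis (E and F near zero):
print(xyz_to_def2([0.9505, 1.0000, 1.0890]).round(4))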
When D > 0 and E = F = 0, the color is white or a shade of gray. Such colors coincide, up to a constant multiple, with the CIE D65 white color standard.
If a color is produced by monochromatic radiation with λ = 700 nm (this is a red color), then F = 0 and E > 0.
If S1 and S2 are two colors with DEF2 coordinates (D1, E1, F1) and (D2, E2, F2), a dot product of these colors can be defined as follows:
⟨S1, S2⟩ = D1·D2 + E1·E2 + F1·F2    (A5)
Thus, the DEF2 coordinate system can be thought of as a Cartesian coordinate system having mutually orthogonal axes D, E, F (FIG. 1), with the same measurement unit for each of these axes.
The dot product (A5) does not depend on the CCS as long as the CCS is orthonormal in the sense of equations (A1) or (A2) and its CMF's are linear combinations of d̄(λ), ē(λ), f̄(λ). More particularly, let T1, T2, T3 be tristimulus values in a color coordinate system whose CMF's t̄1, t̄2, t̄3 belong to a linear span Span(d̄(λ), ē(λ), f̄(λ)) and satisfy the conditions (A1) or (A2). For the case of equations (A1), this means that:
∫₀^∞ t̄1(λ)t̄2(λ)dλ = ∫₀^∞ t̄1(λ)t̄3(λ)dλ = ∫₀^∞ t̄2(λ)t̄3(λ)dλ = 0
∫₀^∞ [t̄1(λ)]²dλ = ∫₀^∞ [t̄2(λ)]²dλ = ∫₀^∞ [t̄3(λ)]²dλ = K    (A6)
with the same constant K as in (A1). The discrete case (A2) is similar. Suppose the colors S1, S2 have the T coordinates (T1.1, T2.1, T3.1) and (T1.2, T2.2, T3.2) respectively. Then the dot product (A5) is the same as
⟨S1, S2⟩ = T1.1·T1.2 + T2.1·T2.2 + T3.1·T3.2.
The brightness B of a color S can be represented as the length (the norm) of the vector S:
B = ∥S∥ = sqrt(⟨S, S⟩) = sqrt(D² + E² + F²)    (A7)
The BEF color coordinate system defines each color by the coordinates (B, E, F). The Bef color coordinate system is defined as follows:
B = sqrt(D² + E² + F²); e = E/B; f = F/B    (A8)
If B = 0 (absolute black color), then e and f can be left undefined or can be defined in any way, e.g. as zeroes.
D is never negative in the visible spectrum, so the D, E, F values can be determined from the B, e, f values as follows:
E = B·e; F = B·f; D = sqrt(B² − E² − F²) = B·sqrt(1 − e² − f²)    (A9)
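A minimal sketch (the function names are assumptions; the formulas follow (A7)-(A9) above) of the two conversions in code:

import numpy as np

def def_to_bef(D, E, F):
    B = np.sqrt(D * D + E * E + F * F)       # (A7): brightness is the vector norm
    if B == 0.0:                             # absolute black: e, f undefined;
        return 0.0, 0.0, 0.0                 # defined here as zeroes
    return B, E / B, F / B                   # (A8)

def bef_to_def(B, e, f):
    E, F = B * e, B * f
    D = B * np.sqrt(max(1.0 - e * e - f * f, 0.0))   # (A9); D >= 0 for visible colors
    return D, E, F

D, E, F = 1.0, 0.2, -0.1
print(bef_to_def(*def_to_bef(D, E, F)))      # round-trips to (1.0, 0.2, -0.1)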
The invention is not limited to the order of the coordinates. The invention is not limited to DEF2, XYZ, or any other color coordinate system as the initial coordinate system. In some embodiments, the orthonormality conditions (A1) or (A2) are replaced with quasi-orthonormality conditions, i.e. the equations (A1) or (A2) hold only approximately. More particularly, CMF's t̄1(λ), t̄2(λ), t̄3(λ) will be called herein quasi-orthonormal with an error at most ε if they satisfy the following conditions:
1. each of ∫₀^∞ t̄1(λ)t̄2(λ)dλ, ∫₀^∞ t̄1(λ)t̄3(λ)dλ, ∫₀^∞ t̄2(λ)t̄3(λ)dλ is in the interval [−ε, ε], and
2. each of ∫₀^∞ [t̄1(λ)]²dλ, ∫₀^∞ [t̄2(λ)]²dλ, ∫₀^∞ [t̄3(λ)]²dλ is in the interval [K−ε, K+ε]
for positive constants K and ε. In some embodiments, ε is 0.3K, or 0.1K, or some other value at most 0.3K, or some other value. Alternatively, the CMF's will be called quasi-orthonormal with an error at most ε if they satisfy the following conditions:
1. each of Σ_λ t̄1(λ)t̄2(λ), Σ_λ t̄1(λ)t̄3(λ), Σ_λ t̄2(λ)t̄3(λ) is in the interval [−ε, ε], and
2. each of Σ_λ [t̄1(λ)]², Σ_λ [t̄2(λ)]², Σ_λ [t̄3(λ)]² is in the interval [K−ε, K+ε]
for positive constants K and ε. In some embodiments, ε is 0.3K, or 0.1K, or some other value at most 0.3K, or some other value. Orthonormal functions are quasi-orthonormal, but the reverse is not always true. If ε = 0.1K, the functions and the corresponding CCS will be called 90%-orthonormal. More generally, the functions and the CCS will be called n%-orthonormal if ε is (100−n)% of K. For example, for 70%-orthonormality, ε = 0.3K. Some embodiments of the present invention use 90%-orthonormal CCS's or 100%-orthonormal CCS's.
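As a sketch (my own illustration, using the discrete conditions (A2)) of how the quasi-orthonormality of sampled CMF's could be checked numerically:

import numpy as np

def orthonormality_level(t1, t2, t3):
    """Return eps/K for discrete CMFs; n%-orthonormal means eps <= (1 - n/100)*K."""
    cross = [abs(np.dot(t1, t2)), abs(np.dot(t1, t3)), abs(np.dot(t2, t3))]
    norms = [np.dot(t, t) for t in (t1, t2, t3)]
    K = np.mean(norms)
    eps = max(max(cross), max(abs(n - K) for n in norms))
    return eps / K

# Orthonormal sampled functions give eps/K near 0 (up to discretization error):
lam = np.linspace(0.0, np.pi, 1000)
d, e, f = np.sin(lam), np.sin(2 * lam), np.sin(3 * lam)
print(f"eps/K = {orthonormality_level(d, e, f):.4f}")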
APPENDIX B. Contribution of Chromatic Coordinates to Relative Error δS
The distortion measure δS as in (5) combines information on luminance and chrominance, and it may be desirable to controllably distribute this total distortion between the luminance and chrominance coordinates. The inventors studied the impact of the chrominance distortion on the total distortion δS in the color coordinate systems Bef, xyY, and CIELUV (also known as L*u*v*). The system xyY is based on the 1931 CIE XYZ CCS for a 2° field, i.e.
x = X/(X+Y+Z), y = Y/(X+Y+Z)    (B1)
The L*u*v* CCS is defined based on the XYZ CCS as follows:
L* = 116·(Y/Y_n)^(1/3) − 16 if Y/Y_n > (6/29)³
L* = (29/3)³·(Y/Y_n) if Y/Y_n ≤ (6/29)³
u* = 13·L*·(u′ − u′_n), v* = 13·L*·(v′ − v′_n)    (B2)
where u′, v′ are as in (3), Y_n is the Y coordinate of a predefined white color, and (u′_n, v′_n) are the (u′, v′) values defined by (3) for that predefined color.
More particularly, the inventors studied the relative change δS given a predefined quantization step 0.001 in a chromatic coordinate in the three CCS's. The results of this study are summarized in Table 1 below. The study covered the monochromatic colors whose xyY coordinates x and y are provided in the standard for XYZ CIE 1931. These monochromatic colors are those with wavelengths λ = 360 nm to 830 nm at 1-nm intervals. The Y coordinate was set to 1. The inventors believed that a change in the y coordinate was more significant (likely to produce a larger δS value) than a change in x, so only the y changes were investigated. For each color S with xyY coordinates (x, y, Y), the inventors set S′ = (x, y+0.001, 1). Then the xyY coordinates of S and S′ were converted to DEF2, and δS was computed as in (5). The largest δS values are shown in the xyY line of Table 1 below. The inventors also computed δS for S being the stimulus of the blue phosphor of the standard sRGB monitor (this stimulus provides the largest δS value for the sRGB monitor's gamut). The result is given in Table 1.
For the L*u*v* and Bef systems, the inventors used the same set of colors as for xyY, i.e. the monochromatic colors from Table I (3.3.1) of "Color Science" with Y=1 and the blue phosphor sRGB stimulus. For L*u*v*, the chromatic coordinate v* was believed to be more significant than u*. For each color S, the inventors calculated its coordinates (L*, u*, v*), set S′ = (L*, u*, v*+0.001), and calculated δS in DEF2 as in (5). The results are presented in the L*u*v* row in Table 1.
For Bef, S′ was set to (B, e+0.001, f+0.001), and δS was calculated. The results are presented in the Bef row in Table 1.
TABLE 1
          B_sRGB    700 nm    440 nm    405 nm
xyY       0.018     0.005     0.086     0.174
L*u*v*    0.009     0.003     0.030     0.061
Bef       0.002     0.003     0.002     0.002
The Bef system provided the lowest δS value for all the colors S studied and not only for the colors in Table 1. Of note, the condition (9) is always satisfied for Bef, but not always for xyY and L*u*v*.
On the other hand, in the visible gamut, each of the Bef chromaticity coordinates e, f can change over a longer range, from about −1 to about 1, than each of x, y, u* and v*. The range for each of e, f can be about three times larger than for u* and v* and about twice larger than for x and y. Therefore, for a given quantization step, each of e, f needs one or two more bits to represent the quantized values than any one of x, y, u*, v*. However, even if the quantization step is tripled to 0.003 for the e and f coordinates to allow the same number of bits for the quantized values, the Bef CCS still favorably compares with xyY and L*u*v* in terms of color distortion. Indeed, if the quantization step is tripled (i.e. set to 0.003) in Bef but not in xyY or L*u*v*, the Bef values in Table 1 will approximately triple (assuming that δS linearly depends on the quantization step when the quantization step is small; the linear dependence is a reasonable assumption if δS is differentiable with respect to the quantization step at the 0 value of the quantization step). The tripled δS values for Bef will exceed the δS values for xyY and L*u*v* for some colors, e.g. for λ = 700 nm, but the maximum δS for Bef (i.e. 0.009) will still be below the maximum δS for each of xyY and L*u*v*. Further, the tripled δS values for Bef never exceed the 0.01 bound of (8). The xyY and L*u*v* values exceed this limit for the 440 and 405 nm wavelengths, and the xyY value exceeds this limit for the B_sRGB stimulus. Further, the tripled quantization step d = 0.003 still satisfies the upper bound of (11).
With respect to (21), just as it is reasonable to assume that for small d, the value δS is approximately linearly dependent on d (up to the terms of second or higher order in d), it is also reasonable to assume that the value ∥Δ∥/B is approximately linearly dependent on d_e, d_f, and does not exceed a constant times the maximum of d_e, d_f:
∥Δ∥/B ≤ c·max(d_e, d_f)    (B3)
From Table 1, c = 3, and so (20) follows.
Comment about this patent or add new information about this topic: | {"url":"http://www.faqs.org/patents/app/20090096809","timestamp":"2014-04-19T13:59:47Z","content_type":null,"content_length":"116002","record_id":"<urn:uuid:9523c140-efba-4772-9508-bb6d74b99251>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00167-ip-10-147-4-33.ec2.internal.warc.gz"} |
Riemann surface question
Hi guys,
My gf is doing honours and is having some trouble with one question on her assignment for complex analysis. She is really stuck and I've only done this topic at an undergraduate level so I have no
idea. Neither of us have done any subjects in Topology so we don't know what to do.
She is required to describe the Riemann surface associated with a function
f(z) = sqrt( (z - x1)(z - x2)...(z - xn) )
where the x_i are real numbers.
It comes with the hint: consider n odd and n even separately.
If anyone could shed some light on this it would be greatly appreciated.
Thanks for your time, | {"url":"http://www.physicsforums.com/showthread.php?t=191777","timestamp":"2014-04-16T22:03:37Z","content_type":null,"content_length":"22698","record_id":"<urn:uuid:e418acf0-f0ec-4edb-87cc-57321483aad3>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00098-ip-10-147-4-33.ec2.internal.warc.gz"} |
Paths in Coloured Graphs
November 3rd 2010, 06:43 AM #1
Oct 2010
Paths in Coloured Graphs
Suppose $\chi(G)=k$ and $c:V(G) \rightarrow \{1,\ldots,k\}$ is a proper $k$-colouring of $G$. Must there be a path $x_1 \ldots x_k$ in $G$ with $c(x_i)=i$ for each $i$?
I've been trying to find a counter-example without any luck (but on the other hand can't come up with a proof either...)
Take a look at the Gallai-Roy-Vitaver theorem and its corollaries. Apparently, a proof can be found in this article, but you might do better to find the original somewhere.
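For what it's worth, here is a sketch (my paraphrase of the standard Gallai-Roy argument) of why the answer is yes. Orient every edge of $G$ from its endpoint of smaller colour to its endpoint of larger colour; since $c$ is proper, every edge gets an orientation, and colours strictly increase along arcs, so the orientation is acyclic. If the resulting digraph had no directed path on $k$ vertices, then assigning to each vertex $v$ the number of vertices on a longest directed path ending at $v$ would give a proper colouring of $G$ with at most $k-1$ colours (the value strictly increases along every arc), contradicting $\chi(G)=k$. Hence there is a directed path on $k$ vertices whose colours strictly increase, and since the colours come from $\{1,\ldots,k\}$, they must be exactly $1,2,\ldots,k$ in order.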
November 7th 2010, 08:37 AM #2 | {"url":"http://mathhelpforum.com/discrete-math/161939-paths-coloured-graphs.html","timestamp":"2014-04-20T01:58:33Z","content_type":null,"content_length":"32325","record_id":"<urn:uuid:b24c2466-a1e2-4719-8569-75a76e068bb5>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00139-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dividing Polynomials - College Algebra
• This section covers concepts and properties related to dividing polynomials to find a quotient and remainder as well as the zeros of a polynomial. The following are some of the specific topics covered:
□ long division
□ remainder theorem
□ factor theorem
□ synthetic division
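As a quick illustration of the last topic above, here is a minimal sketch (my own, not from the course materials) of synthetic division; by the remainder theorem, the final value equals p(c):

def synthetic_division(coeffs, c):
    """Divide p(x) by (x - c) using synthetic division.

    coeffs: coefficients of p from highest to lowest degree.
    Returns (quotient_coeffs, remainder); by the remainder theorem,
    the remainder equals p(c), and remainder 0 means (x - c) is a
    factor of p (the factor theorem).
    """
    out = [coeffs[0]]
    for a in coeffs[1:]:
        out.append(a + c * out[-1])
    return out[:-1], out[-1]

# Example: (x^3 - 6x^2 + 11x - 6) / (x - 1) -> x^2 - 5x + 6, remainder 0,
# so 1 is a zero of the polynomial.
print(synthetic_division([1, -6, 11, -6], 1))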
• If you need further information about these ideas, the link below will give you access to an online textbook with helpful information and examples.
Click on the tabs to learn about this topic and then try the problems in the Learning Objects at the bottom of the page. | {"url":"https://sites.google.com/a/uwlax.edu/college-algebra/Chapter4/Dividing-Polynomials","timestamp":"2014-04-21T15:28:27Z","content_type":null,"content_length":"31130","record_id":"<urn:uuid:cedf705e-5905-438b-92b8-baac6b9ef6cb>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00022-ip-10-147-4-33.ec2.internal.warc.gz"} |
Black-Body Theory and the Quantum Discontinuity, 1894-1912
398 pages | © 1978, 1987
"A masterly assessment of the way the idea of quanta of radiation became part of 20th-century physics. . . . The book not only deals with a topic of importance and interest to all scientists, but is
also a polished literary work, described (accurately) by one of its original reviewers as a scientific detective story."—John Gribbin, New Scientist
"Every scientist should have this book."—Paul Davies, New Scientist
Part One. Planck's Black-Body Theory, 1894-1906: The Classical Phase
I. Planck's Route to the Black-Body Problem
The Black-Body Problem
Planck and Thermodynamics
Planck and the Kinetic Theory of Gases
Planck and the Continuum and Electromagnetism
II. Planck's Statistical Heritage: Boltzmann on Irreversibility
Boltzmann's H-Theorem
The First Interpretation of the H-Theorem
Loschmidt's Paradox and the Combinatorial Definition of Entropy
The Conflation of "Molecular" and "Molar"
Molecular Disorder
Epilogue: Molecular Disorder and the Combinatorial Definition after 1896
III. Planck and the Electromagnetic H-Theorem 1897-1899
Cavity Radiation without Statistics
The Entry of Statistics and Natural Radiation
Planck's "Fundamental Equation"
Entropy and Irreversibility in the Field
IV. Planck's Distribution Law and Its Derivations, 1900-1901
Planck's Uniqueness Theorem and the New Distribution Law
Recourse to Combinatorials
Deriving the Distribution Law
The New Status of the Radiation Constants
V. The Foundations of Planck's Radiation Theory, 1901-1906
The Continuity of Planck's Theory, 1894-1906
Natural Radiation and Equiprobable States
Energy Elements and Energy Discontinuity
The Quantum of Action and Its Presumptive Source
Planck's Early Readers, 1900-1906
Part Two. The Emergence of the Quantum Discontinuity, 1905-1912
VI. Dismantling Planck's Black-Body Theory: Ehrenfest, Rayleigh, and Jeans
The Origin of the Rayleigh-Jeans Law, 1900-1905
Ehrenfest's Theory of Quasi-Entropies
The Impotence of Resonators
Complexion Theory and the Rayleigh-Jeans Law
VII. A New Route to Black-Body Theory: Einstein, 1902-1909
Einstein on Statistical Thermodynamics, 1902-1903
Fluctuation Phenomena and Black-Body Theory, 1904-1905
Einstein on Planck, 1906-1909
VIII. Converts to Discontinuity, 1906-1910
Lorentz's Rome Lecture and Its Aftermath
Planck on Discontinuity, 1908-1910
The Consolidation of Expert Opinion: Wien and Jeans
IX. Black-Body Theory and the State of the Quantum, 1911-1912
The Decline of Black-Body Theory
The Emergence of Specific Heats
Quanta and the Structure of Radiation
The Quantum and Atomic Structure
The State of the Quantum
Part Three. Epilogue
X. Planck's New Radiation Theory
Planck's "Second Theory"
Revising the Lectures
Some Uses of the Second Theory
The Fate of the Second Theory
Afterword: Revisiting Planck
For more information, or to order this book, please visit http://www.press.uchicago.edu
Google preview here | {"url":"http://press.uchicago.edu/ucp/books/book/chicago/B/bo3622390.html","timestamp":"2014-04-19T09:35:24Z","content_type":null,"content_length":"26230","record_id":"<urn:uuid:70f58efb-5595-424e-990d-c0d8ab3639c2>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
LRT for variance of normal distribution
February 24th 2011, 09:05 AM #1
Nov 2009
LRT for variance of normal distribution
Derive a likelihood ratio test for H0: σ = σ0 vs. Ha : σ ≠ σ0 for a sample from a normal distribution with unknown mean.
I have found the two likelihood functions that make up the ratio, but I am struggling to simplify it into something understandable.
February 24th 2011, 10:41 AM #2
What have you found so far? Please write it down!
February 24th 2011, 11:34 AM #3
$-2\log\lambda = n\log(\sigma_0^2) - n\log(\hat{\sigma}^2) + n\hat{\sigma}^2/\sigma_0^2 - n$
This is the simplest I've gotten it, and it doesn't seem very coherent to me.
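In case it helps, writing $t=\hat{\sigma}^2/\sigma_0^2$, the statistic above becomes $-2\log\lambda = n(t-\log t-1)$, which is zero at $t=1$ and increases as $t$ moves away from 1 in either direction. So rejecting for large $-2\log\lambda$ is the same as rejecting when $n\hat{\sigma}^2/\sigma_0^2=\sum_i(x_i-\bar{x})^2/\sigma_0^2$ is too large or too small; under $H_0$ that quantity has a $\chi^2_{n-1}$ distribution, so the LRT reduces to the usual two-sided chi-square test for a variance.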
Nov 2009 | {"url":"http://mathhelpforum.com/advanced-statistics/172463-lrt-variance-normal-distribution.html","timestamp":"2014-04-18T14:04:05Z","content_type":null,"content_length":"34313","record_id":"<urn:uuid:4f2c8089-0f4b-4755-b75f-6235ebef7608>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00234-ip-10-147-4-33.ec2.internal.warc.gz"} |
AM 224 SYLLABUS, Spring 2006
AM 223-224 (MA 237-238) is the core graduate sequence on partial differential equations. In the first semester, the focus was on the basic theory of linear equations. This semester, we will mainly
study nonlinear equations. The plan is to cover as much as possible of the following topics:
1. Scalar conservation laws and Hamilton-Jacobi equations.
2. Sobolev inequalities and Sobolev spaces.
3. The direct method in the calculus of variations.
4. Elliptic regularity.
5. Symmetric hyperbolic systems.
Prerequisites: Real analysis (AM 211, MA 221 or equivalent) and AM 223 (MA 237).
Textbooks: The two main sources for the class this semester will be
1. L. C. Evans, Partial differential equations , AMS 1998.
2. L. C. Evans, Weak convergence methods for nonlinear PDE , AMS, 1988.
As in AM 223, a few representative examples will be treated in depth, and this will often mean consulting other sources, including original papers. To the extent possible, I will try to hand out
notes. The following sources will be useful at different points:
1. C.M. Dafermos, Hyperbolic conservation laws in continuum physics, Springer, 2000.
2. F. John, Partial differential equations , Springer 1981.
3. D. Gilbarg and N. Trudinger, Elliptic partial differential equations of second order , Springer 2001.
4. M. Struwe, Variational methods , Springer, 2000.
Lecture Notes
Here is a transcript of the lectures by Andreas Kloeckner. Here is Andreas' English translation of Hopf's paper on the Navier-Stokes equations.
HW 1, Solutions.
HW 2, Solutions.
HW 3, Solutions.
HW 4, Solutions.
HW 5, Solutions.
HW 6, Solutions.
Exam.
On 24 Jan 2006, 16:47. | {"url":"http://www.dam.brown.edu/people/menon/am224/am224.html","timestamp":"2014-04-20T15:51:36Z","content_type":null,"content_length":"4155","record_id":"<urn:uuid:2603cb25-d1cb-44ac-afeb-b56db331375e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00307-ip-10-147-4-33.ec2.internal.warc.gz"} |
Contents: Introduction; Repeat-Pass SAR Interferometry; The Atmosphere and its Effects on Repeat-Pass InSAR; Properties of Atmospheric Signals in SAR Interferograms (Atmospheric Signals from SAR Interferograms; Anisotropic Properties of Atmospheric Signals; Gaussianity of Atmospheric Signals; Spectral Characteristics of Atmospheric Signals); Mitigation of Atmospheric Effects on Repeat-Pass InSAR (Correction of Atmospheric Effects based on External Data: ground meteorological observations, GPS observations, high-resolution meteorological models, MODIS data, MERIS data; Correction of Atmospheric Effects based on Correlation Analysis; Correction of Atmospheric Effects based on Pair-Wise Logic; Correction of Atmospheric Effects based on the PSInSAR Technique; Reduction of Atmospheric Effects with the Stacking Method); Conclusions; References and Notes; Figures
Atmospheric Effects on InSAR Measurements and Their Mitigation (Review)
Xiao-li Ding^1,*, Zhi-wei Li^2, Jian-jun Zhu^2, Guang-cai Feng^1, Jiang-ping Long^1
^1 Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China; E-mails: guangcai.feng@polyu.edu.hk; lsjplong@polyu.edu.hk
^2 School of Info-Physics and Geomatics Engineering, Central South University, Changsha 410083, Hunan, China; E-mails: zwli@mail.csu.edu.cn; zjj@mail.csu.edu.cn
* Author to whom correspondence should be addressed; E-mail: lsxlding@polyu.edu.hk
Sensors 2008, 8(9), 5426-5448; doi:10.3390/s8095426; Molecular Diversity Preservation International (MDPI). Received 28 July 2008; accepted 29 August 2008; published 3 September 2008. © 2008 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
Interferometric Synthetic Aperture Radar (InSAR) is a powerful technology for observing the Earth surface, especially for mapping the Earth's topography and deformations. InSAR measurements are
however often significantly affected by the atmosphere as the radar signals propagate through the atmosphere whose state varies both in space and in time. Great efforts have been made in recent years
to better understand the properties of the atmospheric effects and to develop methods for mitigating the effects. This paper provides a systematic review of the work carried out in this area. The
basic principles of atmospheric effects on repeat-pass InSAR are first introduced. The studies on the properties of the atmospheric effects, including the magnitudes of the effects determined in the
various parts of the world, the spectra of the atmospheric effects, the isotropic properties and the statistical distributions of the effects, are then discussed. The various methods developed for
mitigating the atmospheric effects are then reviewed, including the methods that are based on PSInSAR processing, the methods that are based on interferogram modeling, and those that are based on
external data such as GPS observations, ground meteorological data, and satellite data including those from the MODIS and MERIS. Two examples that use MODIS and MERIS data respectively to calibrate
atmospheric effects on InSAR are also given.
Keywords: Interferometric Synthetic Aperture Radar (InSAR); atmospheric effects; atmospheric correction; MODIS; MERIS
Synthetic Aperture Radar (SAR) Interferometry, commonly referred to as InSAR, IFSAR or SARI, is a synthesis of the SAR and the interferometry techniques [1]. InSAR is a powerful technology for
topographic and ground surface deformation mapping due to its all-weather and day-and-night imaging capability, wide spatial coverage, fine resolution, and high measurement accuracy. Rogers and
Ingalls [2] reported the first application of radar interferometry in Earth-based observations of Venus, while Graham [3] was regarded as the first to apply an InSAR system to Earth topographic
mapping. Airborne and spaceborne InSAR systems were then applied to Earth observation by Zebker and Goldstein [4] and Goldstein et al. [5], respectively. Gabriel et al. [6] first demonstrated the
potential of differential InSAR (DInSAR) for centimeter or sub-centimeter level surface deformation mapping over a large area.
Significant progress has been made in further developing InSAR technology in the past two decades with the availability of a vast amount of globally covering SAR images from, e.g., ERS, Radarsat,
JERS, Envisat, ALOS and TerraSAR sensors and with a wide range of applications of the technology (e.g., [7-19]). It is expected that InSAR will play a wider and more important role in both research
and applications in the future with the advances of the technology and many ambitious SAR missions planed.
InSAR technology, however, has also limitations. One of the most intractable is the effect of the atmosphere (mainly the troposphere and the ionosphere) on repeat-pass InSAR. It is well known that
electromagnetic waves are delayed (slowed down) when they travel through the troposphere. The effect often introduces significant errors to repeat-pass InSAR measurements. Massonnet et al. [8] first
identified such effects. Since then, some intensive research has been carried out aiming to better understand and mitigate the effects. Zebker et al. [20] reported, for example, that spatial and
temporal changes of 20% in the relative humidity of the troposphere could lead up to 10 to 14 cm errors in the measured ground deformations and 80 to 290 m errors in derived topographic maps for
baselines ranging from 100 m to 400 m in the case of the SIR-C/X-SAR. A number of researchers have concluded that the tropospheric effects are a limiting factor for wide spread applications of
repeat-pass InSAR (e.g., [11, 21-23]).
Contrary to the effects of the troposphere, the ionosphere tends to accelerate the phases of electromagnetic waves when they travel through the medium. The zenith ionospheric range error is
proportional to the total electron content (TEC) in the ionosphere. For example, for C-band SAR, a TEC of 1 × 10^16 m^-2 causes a phase shift of about half a cycle [28]. The ionosphere is however a
dispersive medium affecting the radar signals proportionately to the square of the wavelength [83]. For example, if the ionosphere causes 1.5 m range errors to the C-band (wavelength = 5.6 cm)
signals, it will cause about 24 m range errors to the L-band (wavelength = 23 cm) signals if the same imaging geometry and atmospheric conditions are assumed. Since there are only very limited
published works available on the ionospheric effects on InSAR, we will limit our discussions to the tropospheric effects hereafter. We will review systematically the work carried out in studying the
atmospheric, especially the tropospheric effects on InSAR. The basic principles of the atmospheric effects on repeat-pass InSAR are first introduced. Research results on the properties of the
atmospheric effects will then be examined. The various methods developed for mitigating the atmospheric effects will finally be studied.
InSAR can be classified into across- and along-track interferometry according to the interferometric baseline formed, or single- and repeat-pass interferometry according to the number of platform
passes involved. Two antennas are mounted on the same platform in along-track interferometry and a single platform pass suffices [24]. Across-track interferometry can be performed either with a
one-antenna (e.g., ERS, Envisat) or a two-antenna (e.g., SRTM) SAR system. Revisit to the same scene is required for a one-antenna SAR system so that this is called repeat-pass SAR interferometry
[25]. The atmospheric effects in the single-pass interferometry are basically removed completely in the interferometric computation as the effects are almost the same for the two SAR images. In
repeat-pass interferometry, however, the atmospheric effects can become significant as the atmospheric conditions can vary considerably between the two SAR acquisitions. We will hereinafter limit our
discussions to repeat-pass InSAR only.
The geometrical configuration of repeat-pass SAR interferometry is illustrated in Figure 1a. A1 and A2 are the positions of the radar platform corresponding to the two acquisitions. The phases, ψ1 and ψ2, measured at the two platform positions to a ground point are:
ψ1 = (4π/λ)L1,  ψ2 = (4π/λ)L2    (1)
where L1 and L2 are the slant ranges and λ is the wavelength of the radar signal. The interferometric phase ϕ is then
ϕ = ψ1 − ψ2 = (4π/λ)(L1 − L2)    (2)
Under the far-field approximation, one gets
ϕ = ψ1 − ψ2 ≈ (4π/λ)B∥ = (4π/λ)B sin(θ − α)    (3)
where α is the orientation angle of the baseline and θ is the look angle.
When assuming a surface without topographic relief as illustrated in Figure 1b, the interferometric phase becomes [11]
ϕ0 = (4π/λ)B sin(θ0 − α)    (4)
where θ0 is the look angle. If topographic relief is present, the look angle will differ from θ0 by δθ:
ϕ = (4π/λ)B sin(θ0 + δθ − α)    (5)
Combining Equations (4) and (5), we get the "flattened" phase
ϕ_flat = ϕ − ϕ0 ≈ (4π/λ)B cos(θ0 − α)δθ = (4π/λ)B⊥δθ    (6)
The relationship between the topographic height and δθ can be easily established (see Figure 1b):
h ≈ L·δθ·sin θ0    (7)
Thus the topographic height can be expressed as
h = [λL sin θ0 / (4πB cos(θ0 − α))]·ϕ_flat    (8)
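As a rough numerical illustration of Equation (8), the following minimal sketch (with made-up, ERS-like imaging parameters, not values from the paper) computes the height corresponding to a given flattened phase:

import math

lam = 0.056                   # radar wavelength (m), C-band
L = 850e3                     # slant range (m)
theta0 = math.radians(23.0)   # look angle
alpha = math.radians(10.0)    # baseline orientation angle
B = 150.0                     # baseline length (m)

def height_from_phase(phi_flat):
    # Eq. (8): h = lam * L * sin(theta0) / (4*pi*B*cos(theta0 - alpha)) * phi_flat
    return lam * L * math.sin(theta0) / (4 * math.pi * B * math.cos(theta0 - alpha)) * phi_flat

# One fringe (2*pi) of flattened phase corresponds to the "height of ambiguity":
print(f"height of ambiguity = {height_from_phase(2 * math.pi):.1f} m")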
The aforementioned process of topography reconstruction is based on the assumption that the imaged surface is stationary during the acquisitions. The interferometric phase in repeat-pass interferometry in fact measures any ground displacement in addition to topography. DInSAR is the technique to extract the displacement signature from a SAR interferogram over the acquisition period. In Figure 2, there is an exaggerated ground displacement Δd between the two acquisitions whose projection onto the radar line-of-sight (LOS) direction is Δr.
The displacement will introduce a variation of the interferometric phase which is proportional to Δr:
Δϕ_flat = (4π/λ)Δr    (9)
Therefore, the interferometric phase includes topographic as well as deformation information:
ϕ_flat ≈ (4π/λ)·[B cos(θ0 − α)/(L sin θ0)]·h + (4π/λ)Δr    (10)
To map the ground deformation between two SAR acquisitions, the topographic contribution must be removed. According to the way the topographic contribution is removed, three types of DInSAR configuration can be distinguished: (1) two-pass plus external DEM, (2) three-pass, and (3) four-pass. In the two-pass plus external DEM formulation, a SAR interferogram (hereinafter the topographic interferogram) is simulated based on the DEM and the imaging geometry of the "real" interferogram (hereinafter the deformation interferogram) and is removed from the deformation interferogram. However, in the three-pass and four-pass formulations, both the topographic and the deformation interferograms are generated from SAR images. The only difference between them is that in three-pass interferometry, one image is shared by both the topographic and the deformation interferograms. The two-pass plus external DEM and the three-pass and four-pass configurations of DInSAR can be expressed as:
Δr_two = (λ/4π)(ϕ_d − ϕ_sim,t)    (11)
Δr_three,four = (λ/4π)(ϕ_d − (B_d⊥/B_t⊥)ϕ_t)    (12)
where ϕ_d and ϕ_t are the phases of the deformation and topographic interferograms, respectively, and B_d⊥ and B_t⊥ are the perpendicular baseline components of the deformation and topographic interferograms, respectively.
The interferometric phase in Equation (10) may also include linear phase ramps caused by orbital errors that should be modeled and removed to derive the ground deformation [22, 28]. This can at times
become a problem when the deformation or topography phases also have linear trends. We will however not discuss this problem further in this paper.
Atmospheric artifacts in SAR interferograms are mainly due to changes in the refractive index of the medium. These changes are mainly caused by the atmospheric pressure, temperature and water vapor.
In most cases, the spatial variations of pressure and temperature are not large enough to cause strong, localized phase gradients in SAR interferograms. Their effects are generally smaller in
magnitude and more evenly distributed throughout the interferogram when comparing with that of the water vapor, and sometimes difficult to be distinguished from errors caused by orbit uncertainties
[22, 26]. The artifact caused by localized water vapor generally dominates the atmosphere induced artifacts in SAR interferograms. Water vapor is mainly contained in the near-ground surface
troposphere (up to about 2 km above ground), where a strong turbulent mixing process occurs. Turbulent mixing can result in three-dimensional (3D) spatial heterogeneity in the refractivity and can
cause localized phase gradient in both flat and mountainous regions [27, 28]. Besides turbulent mixing, another atmospheric process with clear physical origin is the stratification of the atmosphere.
Stratification of the atmosphere into layers of different vertical refractivity causes additional atmospheric delays in mountainous regions [27, 28]. It should be noted that although water vapor is
often considered the most important parameter causing the tropospheric delays, there are cases, e.g., in regions with strong topography, changes in pressure between two acquisitions can generate a
bigger tropospheric delay signal than humidity variation.
Clouds are formed when the water vapor in the air condenses into visible mass of droplets or frozen crystals. Clouds are divided into two general categories, layered and convective. These are named
stratus clouds and cumulus clouds respectively. The liquid water content in the stratiform clouds is usually low so that they do not cause significant range errors to SAR signals. The liquid water
content in the cumulus clouds can however range from 0.5 to 2.0 g/m^3 and cause zenith delays of 0.7 to 3.0 mm/km [26], significant to InSAR measurements.
Due to the propagation delay of radar signals, in repeat-pass SAR interferometry the phase measurements corresponding to Equation (1) become:
ψ1 = (4π/λ)(L1 + ΔL1),  ψ2 = (4π/λ)(L2 + ΔL2)    (13)
where ΔL1 and ΔL2 are the atmospheric propagation delays of the radar signals corresponding to the first and the second acquisitions. This gives the interferometric phase
ϕ = ψ1 − ψ2 = (4π/λ)(L1 − L2) + (4π/λ)(ΔL1 − ΔL2)    (14)
where (4π/λ)(L1 − L2) is the topography- and deformation-induced interferometric phase, and (4π/λ)(ΔL1 − ΔL2) is the atmosphere-induced interferometric phase. From Equation (14), we can see that the atmosphere-induced phase errors are easily interpreted as topography or surface deformation.
It is obvious from Equation (14) that it is the relative tropospheric delay (ΔL[1] − ΔL[2]) that causes errors in InSAR measurements. If the atmospheric profiles remain the same at the two
acquisitions, the relative tropospheric delay will disappear. In addition, if ΔL[1] − ΔL[2] = constant for all the resolution cells in an area of interest, the atmospheric effects will also be
cancelled out. The two conditions are, however, next to impossible to occur in practice. First, the troposphere, especially the tropospheric water vapor, varies significantly over periods of a few
hours or shorter. It is, therefore, highly unlikely to have the same atmospheric profiles even over currently the shortest revisit interval of one day (for ERS-1/ERS-2). Second, it is also rather
rare for the relative tropospheric delays to be constant for all the resolution cells due to local tropospheric turbulences, which affect flat terrain as well as mountainous terrain and to vertical
stratification which only affects mountainous terrain [27-29].
The influences of the atmosphere-induced phase errors on repeat-pass topographic and two-pass surface deformation measurements are straightforward [4, 9]:
σ_h = [λL sin θ/(4πB⊥)]·σ_ϕ    (15)
σ_Δr,two = (λ/4π)·σ_ϕ    (16)
where σ_ϕ is the phase error in the interferogram; σ_h is the resultant height error; and σ_Δr,two is the deformation error for two-pass DInSAR.
Assuming the same standard deviation σ_ϕ on each interferogram, the covariance matrices of ϕ = [ϕ_d ϕ_t]^T for three-pass and four-pass interferometry are:
Cov_ϕ,three = σ_ϕ²·[1, 1/2; 1/2, 1],  Cov_ϕ,four = σ_ϕ²·[1, 0; 0, 1]    (17)
According to the error propagation theorem, the effects of atmosphere-induced phase errors on deformation mapping in three-pass and four-pass interferometry are:
σ_Δr,three = (λ/4π)·sqrt(1 − B_d⊥/B_t⊥ + (B_d⊥/B_t⊥)²)·σ_ϕ    (18)
σ_Δr,four = (λ/4π)·sqrt(1 + (B_d⊥/B_t⊥)²)·σ_ϕ    (19)
A SAR interferogram is a superposition of information on the topography, the surface deformation between the two SAR acquisitions, the differential atmospheric propagation delays between the two SAR
acquisitions, and various noise (e.g., [22, 26]). The contribution from the topography can be removed by using a reference elevation model. That from the surface deformation can be neglected or
removed if the surface deformation of the study area between the two SAR acquisitions is insignificant or the deformation is known. In addition, multi-looking operations and careful interferometric
processing can help to suppress the noise. Therefore at the end an interferogram that contains only the atmospheric signature can be obtained [26]. The atmospheric signature thus obtained is very
useful for studying the properties of atmospheric effects on InSAR. Besides, the atmospheric signals can be used to derive various atmospheric products. For example, Hanssen et al. [30] used
atmospheric signals derived from SAR interferograms to map high-resolution water vapor.
Radon transform is the projection of image intensities along a radial line at a specified angle. A single Radon transform is a mapping of an image from two dimensions to one dimension where the image
intensities collapse to a profile. Radon transform is therefore a tool to investigate anisotropy in images since systematic intensity variations in a particular direction will be visible as a profile
Hanssen [26] first used Radon transform to examine the anisotropy of atmospheric signatures in SAR interferograms, while Jónsson [32] used it to characterize the anisotropy of the noise in SAR
interferograms. Li et al. [33] used Radon transform to study the anisotropy of atmospheric signatures in four SAR interferograms over Shanghai. The results are shown in Figure 3.
The Radon transform of atmospheric signals showed varying degrees of anisotropy. For example, the first transform (Figure 3a) showed strong asymmetry especially for profiles of 0° to 90°. This
implies that there are areas of very different atmospheric signals in the southwest and northeast corners of the interferogram. However, as the authors pointed out, none of the transforms showed
complex variations in the signals, perhaps due to the fact that the studied region is very flat. The results are quite different from those obtained in mountainous regions where the atmospheric
effects vary significantly (e.g., [34]) perhaps due to the vertical stratification or the “static” effect of the troposphere in the mountainous regions [29, 35, 43] and the effects of mountains on
local weather conditions.
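As an illustration of the Radon-transform approach, here is a minimal sketch with synthetic data (the use of scikit-image and the variance comparison are my own choices, not the authors' code):

import numpy as np
from skimage.transform import radon

phase = np.random.randn(256, 256)          # stand-in for an atmospheric phase image
angles = np.arange(0.0, 180.0, 1.0)        # projection angles in degrees
sinogram = radon(phase, theta=angles, circle=False)   # one profile per angle

# Systematic intensity variation along a given direction shows up as structure
# in that angle's profile; comparing profile variances across angles gives a
# simple anisotropy indicator.
profile_var = sinogram.var(axis=0)
print("max/min profile variance ratio:", profile_var.max() / profile_var.min())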
It is important to examine the Gaussianity of atmospheric signals in SAR interferograms as different processing strategies must be applied for Gaussian and non-Gaussian signals. There are a number of
hypothesis tests to study whether a signal is Gaussian or non-Gaussian. The Jarque-Bera test is based on classical measures of skewness and kurtosis and it examines whether the sample skewness and
kurtosis are unusually different from their expected values [36]. The Hinich test is however a frequency domain test that examines the deviation of the bispectrum of the signal from zero as the
bispectrum of a Gaussian signal is zero [37]. Li et al. [33] used both the Jarque-Bera and the Hinich methods to test the atmospheric signals in the four SAR interferograms over Shanghai. The results
from both of the methods indicate that the atmospheric signals in all the interferograms are non-Gaussian.
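A minimal sketch (synthetic data, not the Shanghai interferograms) of how such a Jarque-Bera test can be run:

import numpy as np
from scipy import stats

samples = np.random.standard_t(df=4, size=10000)   # heavy-tailed stand-in signal
jb_stat, p_value = stats.jarque_bera(samples)
print(f"JB = {jb_stat:.1f}, p = {p_value:.3g}")
if p_value < 0.05:
    print("Reject Gaussianity at the 5% level (as found for the Shanghai data).")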
The spectrum of atmospheric signals in a SAR interferogram reveals the energy distribution of the atmospheric effects at different spatial scales. Two-dimensional (2D) FFT is generally used to
estimate the 2D power spectra of atmospheric signals. As the power spectra derived can be very noisy, the one-dimensional (1D) rotationally averaged power spectra are usually calculated from the 2D
power spectra and used to study the energy distribution of atmospheric signals (e.g., [28, 38]).
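A minimal sketch (synthetic data, my own implementation) of computing the 1D rotationally averaged power spectrum and estimating the power exponent by log-log regression:

import numpy as np

phase = np.random.randn(512, 512)                   # stand-in atmospheric signal
power2d = np.abs(np.fft.fftshift(np.fft.fft2(phase)))**2

ny, nx = power2d.shape
y, x = np.indices(power2d.shape)
r = np.hypot(x - nx // 2, y - ny // 2).astype(int)  # radial frequency bin per pixel

# Rotational average: mean power in each radial bin.
counts = np.maximum(np.bincount(r.ravel()), 1)
radial_power = np.bincount(r.ravel(), weights=power2d.ravel()) / counts

k = np.arange(1, min(nx, ny) // 2)                  # skip DC, stay inside Nyquist
slope, _ = np.polyfit(np.log(k), np.log(radial_power[k]), 1)
print("estimated power-law exponent beta =", -slope)  # ~0 for this white-noise input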
Goldstein et al. [39] first calculated the power spectra of atmospheric signals in a SAR interferogram, and demonstrated that the spectra followed a power law distribution with a power exponent of −8
/3. This feature is associated with Kolmogorov turbulences, indicating the nature of scale invariance or scaling [40]. Hanssen [26] analyzed the spectra of atmospheric signals in 26 SAR
interferograms over Netherland. The results also showed the power law feature. Li et al. [33] calculated the power spectra of atmospheric signals for the four interferograms over Shanghai (Figure 4).
It is very clear that the signals follow on the whole the power law distribution. The results are in good agreement with those presented for Mojave desert of California by Goldstein et al. [39] and
for the Groningen and Flevoland area of Netherlands by Hanssen [28].
The power law spectral characteristics of the atmospheric signals are very useful. For example, Ferretti et al. [41] used the spectral characteristics to estimate the powers of thermal noise and
atmospheric effects, and developed a method based on the results to combine SAR DEMs in wavelet domain. Ferretti et al. [42] also utilized the spectral characteristics to design filters to separate
atmospheric effects from nonlinear subsidence. Williams et al. [43] considered that the low-frequency (long wavelength) components of atmospheric effects had larger energy so that sparse external
data such as GPS and ground meteorological data can be used to calibrate the effects. Li et al. [44, 45] used the power law nature of the atmospheric effects in designing algorithms to model and
correct the effects based on GPS and meteorological data.
The power law can be described by
E(k) ∝ k^(−β)    (20)
where E(k) is the power, k is the spatial frequency, and β is the power exponent. The power exponent is an important indicator of data stationarity. Theoretically, when 1 < β < 3, the data series are considered non-stationary but with stationary increments [28, 46]. The estimated spectral exponents range from 2.31 to 2.66, so that the signals have this property. Stationary increments lead to a stationary structure function but do not imply that the variance and covariance of the atmospheric signals can be uniquely determined. Therefore, care should be taken when using InSAR data to constrain geophysical models, where the covariances of the noise are generally needed (e.g., [32, 38, 47]).
The 3D Kolmogorov tropospheric turbulence can occur within the region of up to several kilometers in elevation, usually referred to as the effective height of the wet troposphere. The LOS ranges of
space-borne SAR systems are much larger than the effective height. The wet tropospheric delays can be therefore typically modeled as 2D turbulence [28].
The power exponent is an important parameter for estimating to what extent the atmospheric effects can be determined and removed with the help of external data. The accumulated energy of the
atmospheric signals can be estimated based on the information for different scales by integrating the atmospheric power over the spatial frequencies. The spatial scales corresponding to 90% of the
energy thus computed for the four interferograms mentioned above are 0.82, 1.01, 0.94, and 0.29 km, respectively [33]. The spatial scales can be considered as the lowest spatial resolution of
external atmospheric data required to calibrate 90% of the atmospheric effects in the SAR interferograms. Therefore, to calibrate 90% of the atmospheric effects for the four interferograms, the
spatial resolution of the external atmospheric data (assuming no measurement errors) must reach 0.82, 1.01, 0.94, and 0.29 km, respectively [33]. This can be a reference when one applies corrections
for atmospheric effects on InSAR based on external data.
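The accumulated-energy computation can be sketched as follows (the frequency range and β value below are illustrative assumptions, not the paper's; the real interferograms gave sub-kilometer scales):

import numpy as np

beta = 2.5
k = np.linspace(1.0 / 50.0, 1.0 / 0.05, 5000)   # spatial frequency, cycles/km
E = k ** (-beta)                                 # power-law spectrum, Eq. (20)

cum = np.cumsum(E) / np.sum(E)                   # accumulated energy fraction
k90 = k[np.searchsorted(cum, 0.90)]              # frequency reaching 90% energy
print(f"90% of the energy lies at scales coarser than about {1.0 / k90:.1f} km")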
Using ground meteorological data to calibrate tropospheric delays in radio ranging has been well documented [48-52]. Hanssen and Feijt [53] and Zebker et al. [20] proposed the use of the Saastamoinen
model to assess the potential effects of the troposphere on InSAR measurements. Delacourt et al. [29] presented a case study of correcting atmospheric errors by using meteorological observations at a
reference point together with tropospheric delay models, vertical gradient models of the meteorological parameters, and the DEM of the study area. The results showed that tropospheric corrections
reached 2 fringes for some interferograms, and that on average the accuracy of the interferograms was about ±1 fringe after the corrections were applied. Bonforte et al. [54] demonstrated congruence
between the tropospheric zenith delays estimated from GPS observations and from tropospheric models and meteorological data. The results confirmed that the meteorological data could be applied to
calibrate InSAR measurements like what had been done to correct GPS observations, and suggested a possible integration of the two data sources for improving models of the atmospheric effects. Li et
al. [45] studied InSAR atmospheric correction by using meteorological observations, GPS observations, and both types of observations. The results showed that the integration of the observations
produced better results.
The difficulties in using meteorological data to correct atmospheric effects on InSAR include mainly the poor accuracy of the atmospheric delays estimated from empirical tropospheric models and the
usually very sparse distribution of meteorological stations.
The advances in GPS meteorology have enabled accurate estimation of tropospheric delays from GPS observations [55, 56], and have provided an opportunity to use GPS observations to evaluate and
calibrate the atmospheric effects on InSAR measurements. However, the spatial resolution of GPS stations is in general much lower than that of InSAR data. This poses a potential limitation in
applying GPS observations to correcting InSAR measurements.
Considering the power law nature of the atmospheric noise, Williams et al. [43] however dismissed the belief that the spatially sparse GPS observations (compared to the scales of the atmospheric
irregularities and the resolutions of SAR data) were unsuitable for calibrating the atmospheric effects. Using simulated data, the authors demonstrated that in general it is possible to use sparsely
distributed data to reduce the noise in a more densely distributed data set, and that in particular it is possible to use zenith delays estimated from GPS observations to reduce the atmospheric noise
in InSAR measurements. Bock and Williams [57] reported through a cross validation analysis that using zenith delays estimated from GPS observations and the Kriging interpolator, more than 90% of the
atmospheric delays at the unsampled points in a SAR image can be retrieved and therefore removed for the Los Angeles basin where fairly dense GPS stations had been in operation. On the other hand,
only 40% of the atmospheric delays can be retrieved for regions outside the basin where the density of GPS stations is much lower. Also using cross validation analysis, Janssen et al. [58] tested the
effectiveness of three interpolators, i.e., inverse distance weighting, Kriging and spline, in interpolating the GPS-derived atmospheric delays to the SAR resolution level and correcting the
atmospheric effects on InSAR on a pixel-by-pixel basis. The results showed that the inverse distance weighting and Kriging interpolators are better than the spline interpolator. Webley et al. [59]
proposed a procedure to use the water vapor delays derived from both the GPS observations and the non-hydrostatic three-dimensional (NH3D) meteorological model to calibrate the atmospheric effects on
The research results in [43, 57-59] are mainly from cross validation analysis, but not from corrections to real SAR interferograms. Li et al. [44] recently proposed a new method and applied it to
correct a SAR interferogram. In this method, an atmospheric delay map for each SAR acquisition is generated in two steps. First, a “mean” atmospheric delay map is calculated using the method adopted
by Delacourt et al. [29]. Second, the “mean” atmospheric delay map is amended with the atmospheric zenith delays derived from a dense GPS network, mainly to calibrate the estimated “mean” atmospheric
delays and to compensate their horizontal heterogeneity. Using 14 GPS stations over Mt. Etna, the authors corrected a SAR interferogram and achieved 27.2% overall improvement in the accuracy of the
InSAR measurements. Based on the variance model of water vapor delays derived by Emardson et al. [47], a linear interpolator and the best linear unbiased estimator, Li et al. [60] developed a GPS
topography-dependent turbulence model for InSAR atmospheric correction. Test results show that the model is much better than the inverse distance weighting interpolator.
There are many GPS networks around the world operated in continuous mode. If GPS observations prior to and after SAR acquisitions are available, they can also contribute to the correction of
atmospheric effects on InSAR. Onn [62] and Onn and Zebker [61] applied this method to InSAR atmospheric correction based on Taylor's “frozen-flow” hypothesis. The results showed that additional
improvement can be achieved when GPS observations prior to and after SAR acquisitions are added to the GPS-based InSAR atmospheric correction models.
The various methods proposed to date that use GPS observations to correct InSAR atmospheric effects differ primarily in the algorithms used to generate atmospheric delay maps from the spatially
sparse GPS atmospheric delay measurements. Therefore, the accuracy of the corrections depends on how much atmospheric delays can be retrieved at the unsampled locations from the sparse GPS
measurements. With the gradual increase in the density of GPS networks around the world, the method should become more and more useful.
Numerical meteorological modeling is an essential tool in atmospheric research. Numerical meteorological modeling can be carried out on global, regional or mesoscale. The global and regional
numerical meteorological models are usually too coarse to model atmospheric effects on InSAR. The mesoscale numerical models can have a continuous time scale and a horizontal spatial scale of a few
kilometers, and are therefore suitable for InSAR atmospheric correction. Integrated water vapor content along the radar paths can be retrieved from such models and used to calibrate atmospheric
effects on InSAR.
Wadge et al. [59] used the local-scale non-hydrostatic three-dimensional models (NH3D) to simulate the path delays due to water vapor over Mount Etna, and found that the NH3D delays were in general
agree well with the ERS-2 SAR interferogram and the GPS estimates. Webley [65] and Webley et al. [64] tested correcting atmospheric effects on a descending and two ascending SAR interferograms over
Mount Etna by using the path delays derived from the NH3D models. The results showed that the correction can result in up to 28.6% improvements in terms of the phase standard deviations. The accuracy
improvement is however highly dependent on the data used to initialize the NH3D models. Foster et al. [66] used the MM5 models (a non-hydrostatic mesoscale meteorological model produced by the
National Center for Atmospheric Research (NCAR)/Pennsylvania State University) to predict the atmospheric delay maps and then to correct 44 SAR interferograms over Hawaii. The results showed that on
average atmospheric effects with wavelengths of 30 km or greater can be significantly reduced, while those with wavelengths shorter than 30 km cannot be effectively reduced. More recently, Puysségur
et al. [67] found that water vapor content estimated from Envisat Medium Resolution Imaging Spectrometer (MERIS) and from MM5 model were consistent and unbiased, and thus proposed to integrate MM5
model and MERIS data for InSAR atmospheric correction. Test results showed that about 43% of the atmospheric signals can be removed. High-resolution meteorological models have offered some promising
opportunities for mitigating atmospheric effects on InSAR although further research needs to be carried out to enhance the accuracy and reliability of the method.
The near-IR water vapor products provided by the Moderate Resolution Imaging Spectroradiometer (MODIS) have a spatial resolution of 1 km× 1 km (at nadir) and an accuracy of 5-10% [68]. The
high-resolution water vapor products appear to be very useful for modeling and correcting atmospheric effects on InSAR although as an optical sensor MODIS measurements are sensitive to the presence
of clouds. The resolutions of even the densest GPS networks in the world, e.g., the Southern California Integrated GPS Network (SCIGN), are more than ten times sparser than the resolution of MODIS.
Li et al. [69] first presented some results of using MODIS data to correct atmospheric effects on InSAR over Mount Etna and Los Angeles. Li et al. [70] proposed an integration of MODIS and GPS data
for InSAR atmospheric correction, where the GPS data (more exactly the GPS precipitable water vapor (PWV) data) are mainly used to calibrate the MODIS PWV data. Experiments over Los Angeles area
showed that the atmospheric singals in the SAR interferograms were significantly reduced with this method and the geophysical signals in the InSAR measurements became more prominent after the
corrections were made.
Considering that all data interpolators unavoidably suffer from smoothing effects [71], Li [72] proposed a hybrid algorithm that jointly uses the Kriging interpolator and the conditional spectral
simulation method to interpolate the MODIS PWV in correcting the atmospheric signals in SAR interferograms.
Figure 5 shows the original and the corrected ERS-2 interferogram over Los Angeles by using the MODIS data and the developed hybrid algorithm. Note that the original interferogram has been corrected
for topographic phases with a known DEM and for deformation phases with GPS positioning results from SCIGN [72]. Thus, the signals left in the original interferogram can be considered solely from the
atmospheric effects. After the corrections were applied, the negative atmospheric phases in the southwestern part of the interferogram and the positive atmospheric phases along the eastern margin of
the interferogram were largely removed. The phases at the lower central part of the interferogram became however more significant, coupled with some under-modeled positive/negative residual phases in
the corners of the interferogram. The phase standard deviation in the original deformation-free interferogram (Figure 5a) is 14.9 mm, while this becomes 10.6 mm in the corrected interferogram (Figure
5b), representing an improvement of 28.9% in the measurement accuracy.
The MODIS PWV measurements however are sensitive to the presence of clouds as noted earlier, which limits significantly the use of MODIS PWV in cloudy regions. In addition, systematic biases in space
borne MODIS PWV measurements may exist and need to be calibrated with more accurate PWV measurements (e.g., GPS PWV).
The MERIS onboard the Envisat satellite allows for global retrieval of PWV every three days, with two near infrared water vapor channels. It therefore can acquire water vapor data simultaneously with
the Advanced SAR (ASAR). Its PWV measurements have a resolution as high as 300 m and accuracy higher than that of MODIS [69]. MERIS measurements therefore offer an opportunity for the atmospheric
effects on ASAR measurements to be accurately modeled.
Li et al. [73] assessed the potential of using MERIS near-infrared water vapor products to correct ASAR interferometric measurements. The MERIS and the GPS/radiosonde water vapor products tested
agreed to each other to within 1.1 mm (standard deviation) on average. It was also pointed out that the major limitation with the use of MERIS water vapor products is the low frequencies of cloud
free conditions, i.e., about 25% globally although for certain areas like Easter Tibet and Southern California, the frequencies can be much higher.
Using the Los Angeles area as an example, Li et al. [74] showed that MERIS water vapor data could significantly reduce atmospheric effects in SAR interferograms. After corrections were made with the
MERIS data, the RMS difference between GPS and InSAR range changes in the satellite LOS direction decreased from 0.89 cm to 0.54 cm in one interferogram, and from 0.83 cm to 0.59 cm in another.
Puysségur et al. [67] proposed the integration of MM5 simulated water vapor data and MERIS data for InSAR atmospheric correction, as noted earlier. However, no significant improvements were found by
adding the MERIS data to the MM5 model.
Figure 6 shows an example of correcting atmospheric effects on InSAR using MERIS data over Hong Kong region. The SAR images are acquired on 30 April 2006 and 11 March 2007, respectively. Topographic
phases have been removed with a reference DEM. GPS positioning results have shown that there were no significant deformations during this period in region. The signals in Figure 6a can therefore be
considered from atmospheric heterogeneity only. Figure 6b shows the corrected interferogram with reduced resolution (RR) MERIS water vapor data. It can be seen that the positive atmospheric phases in
some of the areas have been significantly removed. The standard deviation of the phases in the original interferogram (Figure 6a) is 5.72 mm and that in the corrected interferogram (Figure 6b) is
4.51 mm, indicating an improvement of about 21% after the corrections were made.
There are mainly two types of correlation analysis adopted to reduce atmospheric effects on InSAR. The first type analyzes the correlation between interferograms, and the second the correlation
between atmosphere-induced interferometric phases and elevation. Sarti et al. [75] proposed to characterize the atmospheric artifacts in SAR interferograms and to remove them through analyzing the
correlation between interferograms. Fruneau and Sarti [76] proposed to separate the deformation signals from atmospheric artifacts by exploiting the correlation between interferograms. The method
does not aim to remove atmospheric noise from a SAR interferogram, but manages to extract the common (correlated) deformation signals within two SAR interferograms through analyzing the correlation
of the signals. Using this method, the authors successfully extracted the deformation signals from interferograms over Paris. Sarti et al. [77] compared this method with other methods for atmospheric
effect mitigation like the stacking and the persistent scatterer (or permanent scatterer) InSAR (PSInSAR) method and pointed out the advantages of the correlation analysis method when the number of
available SAR images is not large.
Beauducel et al. [35] proposed to separate deformation signals from atmospheric artifacts over Mount Etna by analyzing the correlation between the atmosphere-induced interferometric phases and the
elevations. Using 238 interferograms over the area, the authors jointly estimated the deformations and the tropospheric delays. The results revealed that the estimated large-scale deformation and
magma evolution from this study were much less than those from other studies, perhaps due to the fact that the atmospheric artifacts (ranging from -2.7 to +3.0 fringes) had been better accounted for
in this study. Using ten SAR interferograms over Sakurajima volcano, Remy et al. [78] carefully investigated the relationship between the atmosphere-induced interferometric phases and the elevations,
and found that the non-linear piecewise polynomial form of cubic splines was better in modeling the atmospheric delays than the linear models.
Chaabane et al. [81] suggested using the correlation between interferograms and that between atmosphere-induced interferometric phases and the elevation to correct for the atmospheric effects. In
this approach, the global-scale atmospheric contribution is corrected by exploiting the correlation between the interferometric phases and the elevation, while the local atmospheric artifacts are
corrected based on the correlation between interferograms containing a common acquisition. Test results with 81 differential interferograms covering the Gulf of Corinth (Greece) show that (1) the
average uncertainty of the stacked deformation map has been decreased from ± 26 mm to ±12 mm, and (2) the RMS value of the differences between InSAR and GPS measurements at four stations has
decreased from ±30 mm to ±19 mm after applying the correction.
The method of correlation analysis is advantageous in that no external data are needed. The method however strongly depends on the correlations between the deformations and between the atmospheric
signals in different interferograms. Weak correlation may lead to insufficient atmospheric effect reduction.
The atmospheric signature in a SAR interferogram can be determined with the pair-wise logic method [21]. Atmospheric perturbations that are different from the pattern of local ground displacements
can be identified by comparing interferograms spanning different time intervals. The method was used to find a 25×20 km kidney-shaped feature caused by ionospheric perturbations [8, 21]. Massonnet
and Feigl [21] also found irregular patterns of up to three complete fringes resulted from tropospheric turbulences or increased water vapor over a 5×10 km area with this method. The qualitative
nature of this method however makes it difficult to give exact values of the atmospheric effects. Hanssen [28] therefore suggested to sum or subtract two interferograms that use a common SAR image
for removing atmospheric anomalies. The approach has also been referred to as the method of linear combination. It is effective when the atmospheric anomalies exist only in the common SAR image of
the two interferograms.
PSInSAR is a relatively new interferometic processing method [42, 79, 80]. It works on temporally stable coherent targets (permanent scatterers) only and can overcome the difficulties of coherence
loss and atmospheric heterogeneities in conventional SAR interferometry. In PSInSAR, the atmospheric effects are modeled as linear phase ramps in the azimuth and the range directions for small ground
areas or a more sophisticated model that includes, e.g., the linear ramps as well as the topography dependent term and the turbulence can be used for large rugged ground areas [82]. Parameters of an
atmospheric model are estimated jointly with other unknowns such as the DEM errors and the LOS ground deformations at the permanent scatterers. The estimated atmospheric effects corresponding to each
interferogram are then resampled onto the image grid with an interpolator and removed from the interferogram. Ferretti et al. [42, 79] reported that improved estimation of local topography and
terrain motions was resulted over Ancona, Italy and Pomona, California with this method. Hooper et al. [80] modified the PSInSAR algorithms and applied the method to study the temporal and spatial
deformation of volcanoes. A shortcoming of the method is that a significant number of SAR images over the same area, typically over 30, are needed to get reliable results.
Stacking is a method that reduces the atmospheric effects on InSAR by averaging independent SAR interferograms. Assuming that atmospheric effects are uncorrelated between the interferograms,
averaging N independent interferograms will reduce the atmospheric signals to 1 / N fold. The method was once regarded as the only viable solution to the problem of atmospheric effect mitigation
[20]. Williams et al. [43] considered the method of interferogram stacking and that of atmospheric effect calibration with the assistance of external data (such as continuously operating GPS) to be
complementary and suggested the two to be used simultaneously. Ferretti et al. [41] proposed a weighted averaging method by taking into account the spectral features of the thermal noise and the
atmospheric component. Stacking in general degrades the temporal resolution of InSAR measurements, and the method works when there are only linear ground deformations as non-linear deformations can
be lost in the process of stacking.
We have in this section looked through the various existing methods for mitigating the atmospheric effects on InSAR measurements. It should however be pointed out that in principle an optimal
integration of some of the methods should yield the best results.
Atmospheric effects are one of the limiting error sources in repeat-pass InSAR measurements. They can introduce errors of over ten centimeters to ground deformations and of several hundred meters to
DEMs measured with the conventional DInSAR method when considering the typical baseline geometries used. Studies have shown that atmospheric signals in SAR interferograms are anisotropic and
non-Gaussian in distribution. The spectra of the atmospheric signals follow a power law distribution with the power exponent very close to -8/3. Various methods have been developed for mitigating the
atmospheric effects on InSAR measurements based on external data such as ground meteorological observations, GPS data, satellite water vapor products such as those from MERIS and MODIS, and results
from numerical meteorological modeling. These methods are typically able to reduce the atmospheric effects by about 20-40 percents. The other methods developed for mitigating the atmospheric effects
are mainly based on simple data analysis or numerical solutions, including the pair-wise logic, the stacking, the correlation analysis, and the PSInSAR methods. Each of the methods developed has its
pros and cons. The most suitable method should be chosen considering the number of SAR scenes acquired, the method used for InSAR processing, the atmospheric conditions (e.g., cloud conditions) and
the external data available. Despite the progress already made in the research, further studies are still necessary in the area to develop more effective methods for the mitigation of the atmospheric
The work presented was supported by the Research Grants Council of the Hong Kong Special Administrative Region (Project Nos.: PolyU 5157/05E and PolyU 5161/06E), the National Science Foundation of
China (Project Nos.: 40774003 and 40404001), the Hong Kong Polytechnic University (Project No.: G-YX48), the National High-Tech. Program of China (Project No.: 2006AA12Z156), and West China 1:50000
Topographic Mapping Project. Some of the images used were provided by the European Space Agency under a Category 1 User Project (AO-4458, 4914).
RosenP.A.HensleyS.JoughinI.R.LiF.K.MadsenS.N.RodriguezE.GoldsteinR.M.Synthetic Aperture Radar Interferometry20008833338210.1109/5.838084 RogersA.E.IngallsR.P.Venus: Mapping the surface reflectivity
by radar interferometry196916579779910.1126/science.165.3895.79717742269 GrahamL.C.Synthesis interferometric radar for topographic mapping19746276376810.1109/PROC.1974.9516
ZebkerH.A.GoldsteinR.M.Topographic mapping from Interferometric Synthetic Aperture Radar Observations1986914993499910.1029/JB091iB05p04993 GoldsteinR.M.ZebkerH.A.WernerC.L.Satellite radar
interferometry: Two-dimensional phase unwrapping19882371372010.1029/RS023i004p00713 GabrielA.K.GoldsteinR.M.ZebkerH.A.Mapping small elevation changes over large areas: differential radar
interferometry1989949183919110.1029/JB094iB07p09183 MassonnetD.RossiM.CarmonaC.AdragnaF.PeltzerG.FeiglK.L.RabauteT.The displacement field of the Landers earthquake mapped by radar
interferometry199336413814210.1038/364138a0 MassonnetD.FeiglK.L.RossiM.AdragnaF.Radar interferometry mapping of deformation in the year after the Landers earthquake199436922723010.1038/369227a0
ZebkerH.A.WernerC.L.RosenP.A.HensleyS.Accuracy of Topographic Maps Derived from ERS-1 Interferometric Radar199433823836 ZebkerH.A.RosenP.A.GoldsteinR.M.GabrielA.WernerC.L.On the derivation of
coseismic displacement fields using differential radar interferometry: The Landers Earthquake19949919617963410.1029/94JB01179 RosenP.A.HensleyS.ZebkerH.A.WebbF.H.FieldingE.J.Surface deformation and
coherence measurements of Kilauea Volcano, Hawaii, from SIR-C radar interferometry1996101231092312510.1029/96JE01459
LanariR.FornaroG.RiccioD.MigliaccioM.PapathanassiouK.P.MoreiraJ.R.SchwabischM.DutraL.PuglisiG.FranceschettiG.ColtelliM.Generation of digital elevation models by using SIR-C/X-SAR multifrequency
two-pass interferometry: the Etna case study1996341097111410.1109/36.536526 SmallD.Generation of Digital Elevation Models through Spaceborne SAR InterferometryZurichUniversity of Zurich1998
TobitaM.FujiwaraS.OzawaS.RosenP.A.Deformation of the 1995 North Sakhalin earthquake detected by JERS-1/SAR interferometry199850313335 FujiwaraS.RosenP.A.TobitaM.MurakamiM.Crustal deformation
measurements using repeat-pass JERS-1 Synthetic Radar Interferometry near the Izu Peninsula, Japan1998103B22411242610.1029/97JB02382 RufinoG.MocciaA.EspositoS.D.EM generation by means of ERS Tandem
Data19983619051913 LiuG.X.DingX.L.ChenY.Q.LiZ.L.LiZ.W.Monitoring land subsidence at the Chek Lap Kok Airport Using InSAR Technology20014617781782 LiuG.X.DingX.L.LiZ.L.LiZ.W.ChenY.Q.YuS.B.Pre- and
Co-Seismic Ground Deformations of the 1999 Chi-Chi, Taiwan Earthquake, Measured with SAR Interferometry200430333343 DingX.L.LiuG.X.LiZ.W.LiZ.L.ChenY.Q.Ground subsidence monitoring in Hong Kong with
satellite SAR interferometry20047011511156 ZebkerH. A.RosenP. A.HensleyS.Atmospheric effects in interferometric synthetic aperture radar surface deformation and topographic maps199710275477563
MassonnetD.FeiglK.Discrimination of geophysical phenomena in satellite radar interferograms19952215371540 TarayreH.MassonnetD.Atmospheric propagation heterogeneities revealed by ERS-1199623989992
LiZ.W.DingX.L.LiuG.X.HuangC.Atmospheric Effects on InSAR Measurements - A Review2003794358 GensR.van GenderenJ.L.SAR interferometry- Issues, techniques, applications19961718031835
LiF.K.GoldsteinR.M.Studies of multibaseline spaceborne interferometric synthetic aperture radars1990288897 HanssenR.DEOS Report No.98.1Delft University pressDelft, the Netherlands1998
MassonnetD.FeiglK.Radar interferometry and its application to changes in the earth's surface199836441500 HanssenR.F.Kluwer Academic PublishersDordrecht2001 DelacourtC.BrioleP.AchacheJ.Tropospheric
corrections of SAR interferograms with strong topography: application to Etna19982528492852 HanssenR.F.WechwerthT.M.ZebkerH.A.KleesR.High-resolution water vapor mapping from interferometric radar
measurements19992831297129910037594 BracewellR.N.Prentice HallNew Jersey1995 JónssonS.Modeling Volcano and Earthquake Deformation From Satellite Radar Interferometric ObservationsStanford
University2002 LiZ.W.DingX.L.HuangC.ZouZ.R.Atmospheric effects on repeat-pass InSAR measurements over Shanghai region20076913441356 DingX.L.GeL. L.LiZ.W.RizosC.Atmospheric Effects on InSAR
Measurements in Southern China and Australia: A Comparative StudyInternational Archives of the Photogrammetry, Remote Sensing and Spatial Information ScienceXXXVB17075Istanbul, Turkey12- 23 July.
2004 BeauducelB.BrioleP.FrogerJ.L.Volcano-wide fringes in ERS synthetic aperture radar interferograms of ETNA (1992-1998): Deformation or tropospheric effect?20001051639116402
JarqueC.M.BeraA.K.Efficient tests for normality, homoscedasticity and serial independence of regression residuals198063255259 HinichM.J.WilsonG. R.Detection of Non-Gaussian Signals in Non-Gaussian
Noise Using the Bispectrum19903811261131 LohmanR.B.SimonsM.Some thoughts on the use of InSAR data to constrain models of surface deformation: Noise structure and data downsampling20056Q01007
GoldsteinR.M.Atmospheric limitations to repeat-track radar interferometry19952225172520 TatarskiV.I.McGraw-HillNew York1961 FerrettiA.PratiC.RoccaF.Multibaseline InSAR DEM Reconstruction: The Wavelet
Approach199937705715 FerrettiA.PratiC.RoccaF.Nonlinear Subsidence Rate Estimation Using Permanent Scatters in Differential SAR Interferometry20003822022212 WilliamsS.BockY.FangP.Integrated satellite
interferometry: Tropospheric noise, GPS estimates and implications for interferometric synthetic aperture radar product19981032705127068 LiZ. W.DingX.L.HuangC.WadgeG.ZhengD.W.Modeling of atmospheric
effects on InSAR measurements by incorporating terrain elevation information20066611891194 LiZ. W.DingX.L.LiuG.X.Modeling Atmospheric Effects on InSAR with Meteorological and Continuous GPS
Observations: Algorithms and Some Test results200466907917 DavisA.MarshakA.WiscombeW.CahalanR.Scale invariance of liquid water distribution in marine stratocumulus. Part I: spectral properties and
stationarity issues19965315381558 EmardsonT.R.SimonsM.WebbF.H.Neutral atmospheric delay in interferometric synthetic aperture radar applications: Statistical description and mitigation200310822312238
HopfieldH. S.Tropospheric effect on electromagnetically measured range: Prediction from surface weather data19716357367 SaastamoinenJ.Atmospheric correction for troposphere and stratosphere in radio
ranging of satellites. The use of artificial satellites for Geodesy1972247251 AskneJ.NordiusH.Estimation of tropospheric delay for microwaves from surface weather data198722379386
BabyH.B.GoleP.LavergnatJ.A model for the tropospheric excess path length of radio waves from surface meteorological measurements19882310231038 LiZ.W.DingX.L.ChenW.LiuG.X.SheaY.K.EmersonN.Comparative
study of tropospheric empirical model for Hong Kong region2008in press HanssenR.FeijtA.A first quantitative evaluation of atmospheric effects on SAR interferometryFringe 96′ Workshop on ERS SAR
Interferometry30 Sep.-2 Oct.Zurich, Switzerland2772821996 BonforteA.FerrettiA.PratiC.PuglisiG.RoccaF.Calibration of atmospheric effects on SAR interferograms by GPS and local atmosphere models: first
results20016313431357 BevisM.BusingerS.HerringT.A.RockenR.AnthesR.A.WareR.H.GPS Meteorology: Remote sensing of atmospheric water vapor using the Global Positioning System1992971578715801 LiuY.Remote
Sensing of Water Vapor Content Using GPS Data in Hong Kong RegionHung HomHong Kong Polytechnic University1999 BockY.WilliamsS.Integrated Satellite Interferometry in Southern
California19977829293299300 JanssenV.GeL.L.RizosC.Tropospheric correction to SAR interferometry from GPS observations20048140151 WebleyP.W.BingleyR.M.DodsonA.H.WadgeG.WaughS.J.JamesI.N.Atmospheric
water vapor correction to InSAR surface motion measurements on mountains: results from a dense GPS network on Mount Etna200227363370 LiZ.H.FieldingE.J.CrossP.MullerJ.-P.Interferometric synthetic
aperture radar atmospheric correction: GPS topography-dependent turbulence model2006111B0240410.1029/2005JB003711 OnnF.ZebkerH.A.Correction for interferometric synthetic aperture radar atmospheric
phase artifacts using time series of zenith wet delay observations from a GOS network2006111B0910210.1029/2005JB004012 OnnF.Modeling water vapor using GPS with application to mitigating InSAR
atmospheric distortionsStanford University2006176 WadgeG.WebleyP.W.JamesI.N.BingleyR.DodsonA.WaughS.VeneboerT.PuglisiG.MattiaM.BakerD.EdwardsS.C.EdwardsS.J.ClarkeP.J.Atmospheric models, GPS and InSAR
measurements of the tropospheric water vapor fields over Mount Etna20022919051908 WebleyP.W.WadgeG.JamesI.N.Determining radio wave delay by non-hydrostatic atmospheric modeling of water vapour over
mountains200429139148 WebleyP.W.Atmospheric water vapor correction to InSAR surface motion measurements on mountains: Case Study on Mount EtnaReadingUniversity of Reading, UK2003
FosterJ.BrooksB.CherubiniT.ShacatC.BusingerS.WernerC.Mitigating atmospheric noise for InSAR using a high resolution weather model200633L16304 PuysségurB.MichelR.AvouacJ.-P.Tropospheric phase delay in
interferometric synthetic aperture radar estimated from meteorological model and multispectral imagery2007112B05419 GaoB.-C.KaufmanY.J.Water vapor retrievals using Moderate Resolution Imaging
Spectroradiometer (MODIS) near-infrared channels200310843894398 LiZ.H.MullerJ.P.CrossP.Tropospheric correction techniques in repeat-pass SAR interferometryProceedings of the FRINGE 2003 workshopESA
ESRINFrascati, Italy1- 5, December 2003 LiZ.H.MullerJ.-P.CrossP.FieldingE.J.Interferometric synthetic aperture radar (InSAR) atmospheric correction: GPS, Moderate Resolution Imaging Spectroradiometer
(MODIS), and InSAR integration2005110B03410 JournelA.G.KyriakidisP.C.MaoS.Correcting the Smoothing Effect of Estimators: A Spectral Postprocessor200032787813 LiZ. W.Modeling atmospheric effects on
repeat-pass InSAR measurementsThe Hong Kong Polytechnic UniversityHong Kong2005143 LiZ. H.MullerJ. P.CrossP.AlbertP.FischerJ.BennartzR.Assessment of the potential of MERIS near-infrared water vapour
products to correct ASAR interferometric measurements200627349365 LiZ.H.FieldingE.J.CrossP.MullerJ.P.Interferometric synthetic aperture radar atmospheric correction: Medium Resolution Imaging
Spectrometer and Advanced Synthetic Aperture Radar integration200633L06816 SartiF.VadonH.MassonnetD.A method for the automatic characterization of atmospheric artifacts in SAR interferograms by
correlation of multiple interferograms over the same siteProceedings of IGARRS'99Hamburg, Germany28 June- 2 July 1999 FruneauB.SartiF.Detection of ground subsidence in the city of Paris using radar
interferometry: isolation of deformation from atmospheric artifact using correlation20002739813984 SartiF.FruneauB.CunhaT.Isolation of atmospheric artifacts in differential interferometry for ground
displacement detection: comparison of different methodsProceedings of the European Space Agency ERS-Envisat SymposiumGothenburg, Sweden16 - 20 October 2000 RemyD.BonvalotS.BrioleP.MurakamiM.Accurate
measurements of tropospheric effects in volcanic areas from SAR interferometry data: application to Sakurajima volcano (Japan)2003213299310 FerrettiA.PratiC.RoccaF.Permanent Scatters in SAR
interferometry200139820 HooperA.ZebkerH.SegallP.KampesB.A new method for measuring deformation on volcanoes and other natural terrains using InSAR persistent scatterers200431L23611
ChaabaneF.AvalloneA.TupinF.BrioleP.MaîtreH.A Multitemporal Method for Correction of Tropospheric Effects in Differential SAR Interferometry: Application to the Gulf of Corinth
Earthquake20074516051615 FerretiA.NovaliF.PasseraE.PratiC.RoccaF.Statistical analysis of atmospherical components in ERS SAR dataEsrin, Prascati28 November - 2 December 2005
CurlanderJ.C.McDonoughR.N.WileyNew York1991
Interferometric geometry (from Li et al. [23]).
Geomtry of DInSAR.
Radon transform of atmospheric signals in SAR pairs acquired on: (a) 19 and 20 February, 1996, (b) 25 and 26 March, 1996, (c) 3 and 4 June, 1996, and (d) 16 November and 21 December, 1999 (from Li et
al. [33]).
Power spectra of atmospheric signals for SAR pairs acquired on: (a) 19 and 20 February, 1996, (b) 25 and 26 March, 1996, (c) 3 and 4 June, 1996, and (d) 16 November and 21 December, 1999. (from Li et
al. [33]).
Atmospheric path delay corrections for interferometric pair of 29 July 2000 and 18 August 2001 over Los Angeles basin, South California. (a) original interferogram (deformation field has been modeled
with GPS observations and removed from the interferogram); (b) interferogram corrected using MODIS data. (from Li [72]).
Atmospheric path delay correction for interferometric pair of 30 April 2006 and 11 March 2007 over Hong Kong area. (a) original interferogram; (b) interferogram corrected using MERIS data. | {"url":"http://www.mdpi.com/1424-8220/8/9/5426/xml","timestamp":"2014-04-19T22:13:36Z","content_type":null,"content_length":"136510","record_id":"<urn:uuid:e9180364-d7ae-4275-a863-10a71a6bcfa6>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00275-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fix Excel Numbers That Don't Add Up
Some Excel values look like numbers, but don't add up, because Excel thinks they are text. With the techniques in the article, you can convert text "numbers" to real numbers. Then, 1 + 1 will
equal 2, instead of zero.
For instructions on changing written words to numbers (e.g. from Three to 3), see Words to Numbers in Excel.
Look Like Numbers, But Don't Add Up
Convert Text to Numbers with Paste Special
Convert Dates with Replace All
Convert Text to Numbers with Text to Columns
Convert Trailing Minus Signs
Convert Trailing Minus Signs - Formula
Convert Trailing Minus Signs Programmatically
Paste as CSV
Convert Text to Numbers with VBA
Look Like Numbers, But Don't Add Up
If you copy data from another program, such as Microsoft Access, or from a text file, Excel may treat the numbers as text.* In Excel, the values look like numbers, but they don't act like
numbers, and don't show a correct total, as you can see below.
In the screen shot above, the values in column B looks like numbers, but they don't add up -- the total is zero.
Convert Text to Numbers with Paste Special
1. Select a blank cell
2. Choose Edit>Copy
3. Select the cells that contain the numbers
4. Choose Edit>Paste Special
5. Select Add
6. Click OK
7. To apply number formatting, choose Format>Cells
8. On the Number tab, select the appropriate format, then click OK.
Watch the Video
To see the steps described above, you can watch this short video tutorial.
Convert Dates with Replace All
If dates are formatted with slashes, such as 5/5/04, you can convert them to real dates by replacing the slashes.
1. Select the cells that contain the dates
2. Choose Edit>Replace
3. For Find what, type a forward slash: /
4. For Replace with, type a forward slash: /
5. Click Replace All
6. To apply date formatting, choose Format>Cells
7. On the Number tab, select a date format, then click OK.
Convert Text to Numbers with Text to Columns
1. Select the cells that contain the numbers
2. Choose Data>Text to Columns
3. Click Finish
Convert Trailing Minus Signs
In Excel 2002, and later versions, imported numbers with trailing minus signs can be easily converted to negative numbers.
1. Select the cells that contain the numbers
2. Choose Data>Text to Columns
3. To view the Trailing Minus setting, click Next, click Next
4. In Step 3, click the Advanced button
5. Check the box for 'Trailing minus for negative numbers', click OK
6. Click Finish
Note: If 'Trailing minus for negative numbers' is checked, you can click Finish in Step 1 of the Text to Columns wizard.
Convert Trailing Minus Signs - Formula
Thanks to Bob Ryan, from Simply Learning Excel, who sent this formula to fix imported numbers with trailing minus signs.
1. In this example, the first number with a trailing minus sign is in cell A1
2. Select cell B1, and enter this formula:
3. =IF(RIGHT(A1,1)="-",-VALUE(LEFT(A1,LEN(A1)-1)),VALUE(A1))
4. Copy the formula down to the last row of data.
In the formula, the RIGHT function returns the last character in cell A1.
If that character is a minus sign, the VALUE function returns the number value to the left of the trailing minus sign.
The minus sign before the VALUE function changes the value to a negative amount.
Convert Trailing Minus Signs Programmatically
In all versions of Excel, you can use the following macro to convert numbers with trailing minus signs.
Sub TrailingMinus()
' = = = = = = = = = = = = = = = =
' Use of CDbl suggested by Peter Surcouf
' Program by Dana DeLouis, dana2@msn.com
' modified by Tom Ogilvy
' = = = = = = = = = = = = = = = =
Dim rng As Range
Dim bigrng As Range
On Error Resume Next
Set bigrng = Cells _
.SpecialCells(xlConstants, xlTextValues).Cells
If bigrng Is Nothing Then Exit Sub
For Each rng In bigrng.Cells
If IsNumeric(rng) Then
rng = CDbl(rng)
End If
End Sub
Paste as CSV
To prevent copied numbers from being pasted as text, you may be able to paste the data as CSV.
1. Copy the data in the other program
2. Switch to Excel
3. Select the cell where the paste will start
4. Choose Edit>Paste Special
5. Select CSV, click OK
Convert Text to Numbers With VBA
If you frequently convert text to numbers, you can use a macro.
Add a button to an existing toolbar, and attach the macro to that button. Then, select the cells, and click the toolbar button.
Sub ConvertToNumbers()
Cells.SpecialCells(xlCellTypeLastCell) _
.Offset(1, 1).Copy
Selection.PasteSpecial Paste:=xlPasteValues, _
With Selection
.VerticalAlignment = xlTop
.WrapText = False
End With
End Sub
*For Excel 2002, the problem with Access data has been fixed in Office XP Service Pack 3.
There is information in the following MSKB article:
Numbers that are copied from Access 2002 paste as text in Excel 2002
More Data Entry Tutorials
1. Data Entry -- Tips
2. Data Entry -- Fill Blank Cells
3. Data Entry -- Convert Text to Numbers
4. Data Entry -- Increase Numbers With Paste Special
Excel Data Entry Videos | {"url":"http://www.contextures.on.ca/xlDataEntry03.html","timestamp":"2014-04-18T02:58:10Z","content_type":null,"content_length":"31495","record_id":"<urn:uuid:e9c21412-fba9-40af-a448-95e6871de09a>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lieutenant Governors with term limits
From Ballotpedia
In nineteen states, the office of
Lieutenant Governor
is subject to
term limits
. Most states with term limits specify that an office-holder may serve two consecutive terms. Most states do not specify that the two terms are an absolute limit, so that a former Lieutenant Governor
may usually run again after a time, usually unspecified, out of office.
States with Lieutenant Governor term limits
│ Color Key │
│ No term limits │ 2 consecutive term limit │ 4 two year terms │
│ 2 terms in a 16 year period │ 8 years in a 12 year period │ 2 lifetime term limit │
See also
External links
1. ↑ [1] "Executive branch term limits from U.S. Term Limits" | {"url":"http://ballotpedia.org/Lieutenant_Governors_with_term_limits","timestamp":"2014-04-16T17:56:00Z","content_type":null,"content_length":"38213","record_id":"<urn:uuid:60169470-7120-4233-9730-f1f467c43953>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00245-ip-10-147-4-33.ec2.internal.warc.gz"} |
Izi's blog
I remember siting once in a bar with my friend. I remember that night very well because it was very interesting otherwise. For whatever reason quite unknown to me, at some time of the evening I
decided to take a look at her new phone. Quite cute little phone she had, but I am not a gadget-maniac, no. This wouldn't keep my attention that long - if there wasn't a games section. Well, not even
that - but it had Sudoku. OK, it's not a very interesting game compared to, say, Civilization, Counter Strike or such masterpieces.
However, it was interesting. What kept me to it was one very simple thing. Here, take a look at the sample Sudoku board (taken from the above article about Sudoku on Wikipedia):
Nothing special - 81 squares, some of which (30, to be exact) have digits (1-9 only), some of which are empty. Rules? Very simple: no row, column and block (block is a set of 3x3 squares, separated
by thick lines on the picture) can have more then one instance of the same digit. To solve it, you need to fill all the digits in. Did I mention it's nothing special? Exactly my point - so simple,
but not so simple to solve. If you look at Algorithmics of sudoku, under Exceptionally difficult Sudokus, you will see that there are ones that are even impossible to be solved by applying only logic
rules (i.e. without any guessing of the digits). At least, that is how it stands right now.
The above Wikipedia pages explain pretty well the different ways you can solve many Sudoku puzzles. To use it to solve the Sudoku, you generally need two methods - using only them is pretty
efficient. Here are the two methods:
Neighbors elimination
Singles extraction
This is especially true if you are a computer - in that case, even if you don't solve using the above two methods, you can significantly reduce the number of possibilities. This is when so-called
guessing comes in - you can call it brute force solving.
Let me explain the above two methods in short.
Neighbors elimination
Eliminating neighbors is basically applying the first rule given above: no row, column and block can have more then one instance of the same digit. This is rather simple - if you have a square which
has only one digit as the possibility, no other square in the same row, column and block can have that digit as a possibility. By removing this, we reduce the number of choices, possibly clearing
some other squares to only one possibility - and that is our goal at the end.
Here is what can be eliminated
Since there can be only one 3 in the marked row and the 3 in the cell at the bottom is the only available choice, the three in the cell at the top can be freely cleared - it is not a choice if we
want to solve the board.
Singles extraction
Applying the second rule gives this method. The second rule says: to solve it, you need to fill all the digits in. While this is not explicitly mentioned in the rule, is it obvious that a filled
Sudoku board means that there are no empty squares. This leads us to another significant conclusion when combined with the first rule. Say there is a row, column or block containing only one square
with a specific digit as a possibility. In that case, that square must be filled with that digit. Otherwise, we could not put that digit in any of the other squares and some square would, thus, be
left empty, which violates the second rule.
If you take a look at the marked block, you can see that the circled 3 is the only 3 in the whole block. This means that 3 must be in the cell. Otherwise, it could not go to any other cell in the
block, so the board could not be solved properly.
Solving by the computer
The above solutions are right as applied. However, they cannot lead to solution in many cases, even when applied one after another more then one time. There are other, more complex, techniques. Even
the best of them cannot be used to solve some boards - "Qassim Hamza" is one of them. While these may be solved if some new method of solving is invented in the future, the key point is that using
the above techniques the number of possible variants is very narrowed. After that, very few number of guesses need to be performed and the solution can be found.
The "very few" can be very much for a human. The problem is that you basically need to keep track of all the possibilities, which becomes tiresome very quickly. Luckily, our PCs are very good at not
getting bored quickly. We, thus, can use them and the brute force algorithm to solve the board.
The way brute force algorithm works is rather simple. Take a look at the following board:
Applying any of the two non-guessing methods presented above yields nothing - you cannot eliminate any digit. What brute force solving does is the following:
It takes one of the circled digits in the first cell, starting from left-top corner and going right, then down, which has more then one digit in it. It then takes the first digit - 1. It assumes that
that cell has only that digit. After that, you in essence have the new board with virtually fixed digit 1 at that cell. What the program needs to do is solve that board.
Now, as we already mentioned, using the previous two non-guessing methods is rather fast comparing to brute force algorithm and yields boards that are in many cases much easier to solve. Thus, brute
force solver actually applies the above two methods on the new board, until they cannot be applied anymore. There are three cases:
The board is invalid at some time, meaning that our guess was certainly not right, so we backtrack. That means - return to the starting board and take the next digit - 5 in our case and do the
same thing again,
The board is not invalid, but still not solved. In that case, we are at the next "milestone". We have a board for which we need to guess another digit. We again take the first cell that has at
least two digits and apply the same method,
The board is solved.
The given solution is quite fast even in JavaScript. It solved all the boards I tried in less then half a minute on a P4 2.8 GHz. If you would like to play with it or the code (it's GPL 3), take it
I started with some OO-like concepts in JavaScript with my previous 2-part post. I would like to continue using the mentioned concepts in building something I think is a good way to make applications
- MVP.
MVP = Model View Presenter
If you are interested in MVP, take a look here. As you can see, it was "retired" by Fowler. I sill like to call it like this, but be aware that I will be talking about what he calls Passive View. In
other words - be aware that there are many ways to implement MVP, depending on your needs. Another similar way is using MVC. There are differences between them, but I like to look at them as a set of
options which are very similar, but have enough differences that you can use one over another depending on your needs.
MVP implemented as Passive View is the one I like for several reasons. First, it allows very big degree of separation. Presenter sits in between and there is nothing in View that will ever touch
Model. View doesn't depend on the Model. For efficiency reasons, you might transfer Model classes, but generally Presenter handles all transfers. As common to all MXX architectures, Model is already
separated by Observer pattern from other two, in this case from the Presenter. Presenter contains all the business logic, whatever that may be. Model can contain only domain logic (e.g. data
manipulation methods, which are most of the time rather simple). Presenter never contains any GUI-related code. It acts on the events from the View and sends updates, but doesn't inherently think
about the structure of the View. This allows for View switching - you can have multiple views or even Views that are built in different frameworks - for example, Web (HTML) and WebService (SOAP)
As you have seen, MVP = Model + View + Presenter. I will try to explain how MVP works on a simple example. Do you (or did you) ever play Sudoku? If you did, you will not have a problem following this
article. If you didn't, take a look at the link on Wikipedia. In essence, it is quite simple. You have a 9x9 (in standard Sudoku, which I will stick to) square matrix. It contains 9 rows, 9 columns
and 9 blocks (block is a 3x3 sub-square). Here is a picture, taken from Wikipedia:
The goal is also very simple. Think of each of the 9 rows, 9 columns and 9 blocks as a zone. Thus, you have 27 zones. Each of the zones must contain numbers 1-9. Since each zone has exactly 9 little
squares, you can also deduce from this that there are no duplicate numbers - each of the numbers 1-9 must appear once and only once.
Now, while the above definition of the objective is quite simple, solving it can be very hard in some cases. Another article on Wikipedia lists two of them at the bottom, named "Top 1465 Number 77"
and "Qassim Hamza", in the part called "Exceptionally difficult Sudokus". They are... Or are they? Well, they are if you are a human. Don't, however, forget that there are computers - they can solve
the above in seconds, even in JavaScript, which, according to The Computer Language Benchmarks Game is not quite the fastest programming language humans invented...
To explain MVP, I will try to make a structure that will allow you to play, help you with solving and solve automatically the Sudoku puzzles. I chose this one because:
It is a simple game that everyone can learn
Apparently there are a lot of people playing it or at least being interested in it - this is what Google contrives as the possible number of matches:
Quite a lot, wouldn't you say?
It is not that hard to implement a computer solver that is moderately fast even in JavaScript
I installed M-SudoKu on my phone. Quite nice implementation, lot of options (even for cheating, which I never used - what's the challenge if you use it). Darn, it generates very hard puzzles on
the most difficult level. I tend not to use any paper for solving them and it can take me hours to solve just one. It's a war!
OK, so this is somewhat how I have been thinking. The first thing - I started from is Model. What's a model? A model is generally the set of data classes - the thing you will be transforming in
different ways to satisfy your business (or, in this case, gaming) requirements. It can be a lot of things - you can imagine applications requiring objects like Person, Address, Mail,
WeatherCondition, Car, InsurancePolicy, Stock, TrainStation, AirlineTicket and such. What do we need in Sudoku? I thought I needed only three:
Cell, representing the small square on the board
Board, representing the whole board (which equals to 9x9 Cells)
Messages, which is a set of messages that will be displayed to the user
Thus, we have these three classes as model classes, which I put in the folder named model. These classes are all extended from Model (abstract) class. I'll explain it right away.
Each model is the subject in the Observer pattern. In MVP, Presenters are the observers. They subscribe to the underlying model - each Presenter usually have one dedicated or shared model they look
after. There is, however, nothing that dictates this - you can have one presenter overlook ten models if you want. There can be a presenter that doesn't have a model it subscribes to. A good example
are delegating presenters, which overlook other (sub-)presenters. Look at your top level Presenter. It's the application presenter. In some cases it only groups separate parts of your application -
other presenters - and delegate actions between them, connects them. Usually, however, even these presenters have a (small) model.
In the example I am giving, all presenters overlook one model, but some share the same model - Board, in this case. Generally, I think it is wise to have one model per presenter. If you need more
things to overlook, I choose to make the artificial layer (facade) between them. This way you can separate the part of the model world into two parts:
Top level parts are used directly by the presenters.
Their subparts (and other levels more deep) are used by both presenters and model parts above.
In my example, Board is the top level model. It is directly manipulated by the user. Cell is the subpart. While you do show cells, you generally don't show one Cell only. Thus, they are second-level
objects compared to Board.
If I ever had the need to overlook two model parts in one presenter - say Board and Timer, a fictional class that would represent time spent on solving the current puzzle - I would make another class
(maybe BoardTimer), combine Board and Timer into it and subscribe to notifications from that class, which will in turn subscribe to Board and Timer and just resend the messages up. This is not a good
practice (boilerplate code), but it somewhat simplifies things and is not very common not to have meaningful top-level objects, but to have to "invent the hot water" like this.
Now, here is the abstract Model class. It's quite simple, here:
function Model() {
Model.prototype.notifyAll = function() {
if(this.properties == null)
for(var i = 0; i < this.properties.length; i++) {
Model.prototype.addObserver = function(observer) {
Model.prototype.notify = function(propName) {
this.getChanging()[propName] = true;
for(var i = 0; i < this.getObservers().length; i++) {
var observer = this.getObservers()[i];
var method = observer[propName + 'Changed'];
if(method != null)
method.call(observer, this[propName]);
delete this.getChanging()[propName];
Model.prototype.removeObserver = function(observer) {
for(var i = 0; i < this.getObservers().length; i++) {
if(this.getObservers()[i] == observer) {
this.getObservers().splice(i, 1);
Model.addProperty = function(klass, propName, options) {
var propNameUpper = propName[0].toUpperCase() + propName.substr(1);
var getter = 'get' + propNameUpper;
var setter = 'set' + propNameUpper;
if(options) {
getter = '_' + getter;
setter = '_' + setter;
prototype = klass.prototype;
prototype[getter] = function() {
return this[propName];
prototype[setter] = function(newValue) {
if(this[propName] != newValue) {
this[propName] = newValue;
if(prototype.properties == null)
prototype.properties = [];
Property.addProperty(Model, 'observers');
Property.addProperty(Model, 'changing');
There are several things to take a look here. Start from the end - there are two properties in this class. Property is the utility singleton that looks like this:
Property = {
addProperty : function(klass, propName, options) {
var propNameUpper = propName[0].toUpperCase() + propName.substr(1);
var getter = 'get' + propNameUpper;
var setter = 'set' + propNameUpper;
if(options) {
getter = '_' + getter;
setter = '_' + setter;
prototype = klass.prototype;
prototype[getter] = function() {
return this[propName];
prototype[setter] = function(newValue) {
if(this[propName] != newValue)
this[propName] = newValue;
Very simple - when you say:
Property.addProperty(Model, 'observers');
it will add a property to the class Model. What is a property? If you come from Java, you could call this getters and setters. After the previous line, all instances of Model will have getObservers
and setObservers methods. You could (and should) use these to change the instance. This allows the class to change get/set operations to implements some more logic then the simple get/set. For
example, one useful thing is that you can simply override the methods to provide logging, like this:
Property.addProperty(Model, 'observers', { liftGet: true });
Model.prototype.getObservers = function() {
log.debug('getObservers called');
return this._getObservers();
Note the liftGet in the property definition - it tells the Property utility singleton to make getter with the underscore. This way, you still have the intended getter for your convenience, so you can
call it when necessary. The same thing is liftSet, only this applies to the setter method. In overridden setters you can put e.g. some validation logic:
Property.addProperty(Model, 'observers', { liftSet: true });
Model.prototype.setObservers = function(newObservers) {
if(!(newObservers instanceof Array))
throw 'Array must be given to setObservers';
return this._setObservers(newObservers);
Going up the Model class, you find even more important thing for using the properties. As you can see, Model defines addProperty method. This is similar to Property.addProperty, except that there is
the extra notification code. This code is used to automatically notify observers of this model whenever the value of the property changes.
For that, Model.notify is used. It will first check for loop-notification. This is a very big problem that can occur sometimes. For example, let's say you have two properties, called number and
square. Assume that the outside force (i.e. presenter or higher-level model) can set any of the above and the other gets calculated by the simple formulas square = number * number and number =
Math.sqrt(square). If you do nothing about this, then setting the number will trigger setting the square, which in turn will trigger setting the number and so on until you start feeling dizzy.
To stop this, Model has the other property called changing. When it starts notifying about the change of one property, it will set the corresponding key in changing map to true. Whenever the loop is
about to occur - in other words, the notification about the change of the same property happens - it will just silently ignore to continue notification. Quite logical - why notify twice when you are
sure the notification is already in progress?
The other important thing to note about the notify method is that it uses JavaScript's dynamic nature. A model made this way will notify its observers of property changes by simply invoking a
method on each observer. For example, if the property named 'number' has changed, it will call the method 'numberChanged' on the observer. However, not all observers want to observe every property.
This is where the dynamics kick in - if the method is not present, then the given observer (of all observers that are looped over) will just be skipped, as if it had not been an observer. This is
exactly what we want - subscribe and get notified only about the things that we care about.
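Putting the loop guard and the dynamic dispatch together, a sketch of notify based purely on the description above (not the exact original) could be:
Model.prototype.notify = function(propName) {
  var changing = this.getChanging();
  if(changing[propName])
    return;                              // loop guard - notification already in progress
  changing[propName] = true;
  var observers = this.getObservers();
  for(var i = 0; i < observers.length; i++) {
    var handler = observers[i][propName + 'Changed'];
    if(typeof(handler) == 'function')    // skip observers without the matching method
      handler.call(observers[i]);
  }
  changing[propName] = false;
};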
There are three more methods. notifyAll is the simplest of all - it just notifies all the observers that all the properties changed. This is very useful for one thing. The creation order in MVP is
Model/View then Presenter - because Presenter needs both a model and a view to work properly. However, this means that whatever startup notification happened will simply be lost. To circumvent this,
you can easily simulate it - Presenter, in its constructor, calls this method (after it subscribed as the observer to the model) to, effectively, notify itself of the starting state of the model.
This is the initial setup procedure.
The other two methods seem fairly self-explanatory - addObserver adds the observer to the model, making it the target of all notifications. Method removeObserver does the opposite - removes the
observer, which will stop getting the notifications about model's property changes.
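For completeness, a sketch of how the two subscription methods could look (only the method names come from the post; the bodies are assumed):
Model.prototype.addObserver = function(observer) {
  this.getObservers().push(observer);
};
Model.prototype.removeObserver = function(observer) {
  var observers = this.getObservers();
  for(var i = 0; i < observers.length; i++)
    if(observers[i] == observer) {
      observers.splice(i, 1);
      return;
    }
};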
I have used MVP in a few projects and liked it quite a bit. It's not for small projects - you can use it there, but you get no real benefit. For larger projects, it is very useful.
I was trying to do a quick HTML/CSS shadow the other day. It often happens that you need a noticeable title to specify what the page is doing, if the page is not too complex (i.e. you have enough
space). I just wanted a simple shadow behind it.
It is quite simple, here:
<html>
<head>
<title>Shadow text example</title>
<style type="text/css">
#title {
  position: relative;
}
h1.clear, h1.primary, h1.shadow {
  font: 56px Arial;
  font-weight: bold;
  font-style: italic;
  padding-bottom: 20px;
  margin: 0px;
}
h1.primary, h1.shadow {
  text-decoration: underline;
  color: #dda020;
  position: absolute;
}
h1.shadow {
  color: #808080;
  left: 3px;
  top: 3px;
}
#other_content {
  position: relative;
}
</style>
</head>
<body>
<div id="title">
  <h1 class="shadow"><nobr>Sudoku solver</nobr></h1>
  <h1 class="primary"><nobr>Sudoku solver</nobr></h1>
  <h1 class="clear">&nbsp;</h1>
</div>
<div id="other_content">More content goes here...</div>
</body>
</html>
Take a look here.
Here you have a few elements:
div#title - This is the element that represents the title. It contains three children:
h1.shadow - The shadow of the title,
h1.primary - The primary text of the title. This one and the previous one should have the same text in them,
h1.clear - The clearing element,
div#other_content - The other content that follows the title.
The h1.primary is the text that is displayed and it should be made as normal text without a shadow - set its color or whatever else to your preferences. The shadow should probably only change the color
and the "depth", as was done above. The clearing element is there only to give the size to the div#title. The reason is that whenever you do a position: absolute on the element, it will get out of
the normal flow of elements. This also means that its size will not affect the size of the other elements, including its parent.
The clearing element is a fake element with only one non-breaking space. This space makes it as high as the other two (primary and shadow). All three elements have the same font, padding and margin
settings, so they will occupy the same space. The nobr elements are there to make the width appropriate. Otherwise, if you shrink the width of the browser, it will break the text, so the single space
in the clearing element will not be high enough to allocate the space for the other two.
There are other solutions - one of them is, for example, this one. I liked this one because it works with the layout of the HTML elements.
BDD is a way of building applications in a question-answer-test way. Before you build some part of the application, you make a set of questions, then you answer them and make tests which the code
should pass. The tests are made to satisfy the answers. A simple, probably often used example: assume you are making a calculator. One of its operations is addition. You ask: "What should 2 + 3 be?",
then you answer "5" and you write a test that, when executed, checks your code with the given inputs (2 and 3) and the expected output (5).
In JavaScript, you can use the JSSpec framework to do your BDD. Let's say you have written the following function to represent the wanted addition:
function addition(a, b) {
  return a + b;
}
Here is the BDD description of the test mentioned above:
describe('Addition', {
  '2 + 3 = 5': function() {
    expect(addition(2, 3)).should_be(5);
  }
});
Download the file containing this simple test from here.
This is the basis of BDD. The basic pattern is:
Take a problem that needs to be solved next
Make a set of questions about the problem
Answer the questions
Make a BDD description out of the answers
Make the code
Test the code using the BDD description
Fix the code until the test passes successfully
It is common to make one test per class or module, depending on what you are testing. You should cover everything in a class, that is, test every non-private method in the class. You can test even private
methods, but a good rule of thumb is that if you need to test private methods, then they probably belong to some other class. The sole need to test private methods means they are important.
Everything that is important needs to be tested. Play it by ear here. Often you have private methods that are only utility methods - extracted parts of other methods that simplify the code.
Try to test small pieces of code (for example, at least one test per method is a good thing) and to test everything, especially the corner cases. The above is written assuming you are using JSSpec to
test your code. Take a look at their Wiki for more details - although, at the time of this writing, the documentation is still unfinished and rather sparse. The basics are covered well, however, so
you won't have too much trouble if you are not pushing the limits.
The above example is not very exciting by itself. I will make another one, which incorporates the description for the class I am going to use in the near future. The class is called Model. It will
probably be clearer why it's used in the next post - it is part of the MVP pattern. For now, I will give only a simplified view of this class. This is a good thing anyway - BDD should be used
in a minimalistic way, in my opinion (you might say this is YAGNI).
Say we need the class to represent some data in our application. For example, if application is dealing with issuing airline tickets, you might need information like how many airplanes you have, how
many passengers fit into the airplanes, how many flights you have planned for some day and at which times, etc. Being a good OO designer, you should find out all the objects that are interesting to your
application (e.g. airplane, ticket, passenger, ...). It will probably happen that you will display information about many (if not all) of these.
MVP suggests that these classes be separated into the M part - the Model part. This way you will have the option to subscribe to the notifications from them when some of their properties change. For
example, if you have the Airplane class representing the airplane and it has a numberOfPassengers property, you can subscribe to the notifications about the changes of this property. This allows you
to, for example, forbid booking any more tickets for that flight if the number of passengers equals to the number of seats in that airplane.
It would be daunting, however, to have to write the boilerplate code that does the notifications for each of the classes. It is quite normal for the model classes to be rather short and many times
they only represent the data. This is very similar to Java beans. The only difference is, in fact, that they should have the Observer pattern implanted into them. This pattern is very simple to
factor into the parent class - the Model class I mentioned above.
How would we develop this class using BDD? The problem definition is quite easy in this case. What we need to define is based on a black-box way of thinking. Essentially, you define how the instances
of this class will behave from the outside. This is how BDD is implemented. For our Model class, we need a class that will:
Allow the set of properties be defined (which will be subject to observation),
Allow others to subscribe or unsubscribe to the notifications about the changes on any property of the class,
Notify every observer of the current state of the instance.
The above is the basis for our BDD test suite. Here is a very short Model BDD description:
describe('Model', {
  before_all: function() {
    // Sample class inheriting from Model we are going to test.
    MyModel = function() {
    };
    Klass.extend(MyModel, Model);
    Model.addProperty(MyModel, 'number');
    Model.addProperty(MyModel, 'color', {liftSet: true});
    model = new MyModel();
    model.setColor = function(newValue) {
      this._setColor('Color is ' + newValue);
    };
  },
  'Properties should work': function() {
    model.setColor('green');
    expect(model.getColor()).should_be('Color is green');
  },
  'Should notify observers properly': function() {
    var fired = false;
    var mockObserver = {
      numberChanged: function() {
        fired = true;
      }
    };
    model.addObserver(mockObserver);
    model.setNumber(1); // the concrete value is illustrative
    expect(fired).should_be(true);
    fired = false;
    model.removeObserver(mockObserver);
    model.setNumber(2);
    expect(fired).should_be(false);
  }
});
It starts with the standard BDD 'describe', which says it describes 'Model' with the three-method object. The first method - before_all - is the method that will be executed when BDD testing starts,
once and before all the real tests (i.e. before the other two methods). In before_all, there is a definition of the sample class called MyModel, which inherits from Model. MyModel has two properties:
number and color. You can see that the color property has its setter overridden, which does something irregular (i.e. it's not the standard setter that only sets the property, but does more work).
The actual BDD tests follow. These tests have their names: 'Properties should work' and 'Should notify observers properly'. The names should be as descriptive as possible about what this part of BDD
description actually tests. It should be the part of the business definition of our Model - the three-item list before the code. As you can see 'Properties should work' describes the first item -
allow the properties to be defined. 'Should notify observers properly', on the other hand, combines the second and the third item - allow observers to be added/removed and actually notify them.
In BDD, this is the first step. We have a description that we can automatically run and test whether the class that we write is what we wanted. If the tests pass, we are good. If they fail, the class
is not good. It must be noted that the BDD description can change if the requirements change, which is quite usual when you are agile. However, you should always follow "write tests, then write
class" order.
The above description is also YAGNI. Let me give you an example. If you need to go to the grocery store (this seems like a simple and common problem to take as an example), you have many options:
Walk,
Take a bike,
Take a train or a bus,
Drive in a car,
Use a helicopter.
Any of the above options are possible and quite reasonable in different situations. If you don't have a bike, train, car or a helicopter (and you don't want to die of starvation), you would walk. The
next three options are mostly a matter of preference (if they are available). If the grocery store is on the island, a helicopter might be the best option. The important thing is - even if you had a
helicopter, you probably wouldn't fly if the store is a block away...
The same is with writing the code. To satisfy the above BDD description, you can write the very simple Model class or you can write a 10K lines monster. The question is - do you need a 10K monster?
No. If you did, the BDD description would include everything in there, because BDD description should include everything you need. Simply put - if it's not in BDD, then either don't build it (YAGNI)
or amend the BDD description to include the missing thing(s).
At the end - how would you run the above BDD spec? Go to JSSpec User Guide. There is a "Define new specification" part. Replace the text: "// Your spec goes here" with the above description and that
is it. In fact, it is not - you need to write the class (Model) and include it (in the normal way JavaScript files are included - using script tag). I will give the example Model implementation in my
post about MVP in JavaScript.
March 22nd, New York Auto Show. Breathtaking.
Look at my previous post for the first part - the introduction about OO in JavaScript.
Just a small digression if you didn't come across this. In JavaScript, there is a 'call' method on any function. For example:
function f(a, b) {
  alert([this.hi, a, b]);
}
var passAsThis = { hi: 'Hello!' };
f.call(passAsThis, 55, 66);
What the above will output is Hello!,55,66. The f.call(passAsThis, 55, 66) call is doing something interesting. It calls the given function (in this example - 'f') passing the first argument
('passAsThis') to it as 'this' and all other arguments (55, 66) normally. This means that inside the function, 'this' refers to whatever you passed, which is convenient for simulating parent class
method calling. If you have a method on parent class you want to call instead of the overridden method in the child class, you could do this:
function ParentClass(number) {
  this.number = number;
}
ParentClass.prototype.displayNumber = function() {
  alert(['ParentClass', this.number]);
};
function ChildClass(number) {
  this.number = number * number;
}
ChildClass.prototype = new ParentClass();
// @Override (just joking - but in Java you could)
ChildClass.prototype.displayNumber = function() {
  alert('ChildClass');
  ParentClass.prototype.displayNumber.call(this);
};
var child = new ChildClass(5);
child.displayNumber();
This will output: ChildClass and ParentClass,25. It's a little clumsy, but the call ParentClass.prototype.displayNumber.call(this) is what in Java you would write as super.displayNumber(). That's a lot
of extra characters to type, which can become painful when you have to call into the parent class a lot.
Inheritance is an important OO concept. JavaScript doesn't support this very well. Some propose the following solution:
function Person(name, age) {
  this.name = name;
  this.age = age;
}
Person.prototype.display = function() {
  alert('My name is ' + this.name + ' and I am ' + this.age);
};
function Teacher(name, age, subject) {
  Person.call(this, name, age);
  this.subject = subject;
}
Teacher.prototype = new Person();
Teacher.prototype.teach = function() {
  alert('Teaching ' + this.subject);
};
This "almost" works in the sense that you get the right hierarchy. For example, you could now do:
var teacher = new Teacher('Foo Bar', 35, 'math');
teacher.display();
teacher.teach();
This means that you inherited the 'display' method from the Person class and you could add a new method 'teach'. These are the basics of OO inheritance, so we are OK. Take a look at this for a very good
explanation of this method. This method is very acceptable if you don't mess too much with OO.
However, the call: Teacher.prototype = new Person() is really a function call, so all your code in Person gets executed. This is OK if you don't have any side effects, but if your class was
WebServiceClient and you would connect to Web service in the constructor, this would not be a good solution at all. First, every time you attempt to inherit from WebServiceClient, you would try to
execute the code in the constructor - really not what you intended. Second, when you execute it, you don't pass any parameters, so things could possibly go even worse. Not a good situation at all.
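A contrived sketch of the problem - WebServiceClient and connectToService are hypothetical names used only for illustration:
function WebServiceClient(url) {
  this.connection = connectToService(url); // side effect in the constructor
}
function CachedClient(url) {
  WebServiceClient.call(this, url);
}
// This line runs the constructor with url === undefined,
// attempting a connection nobody asked for:
CachedClient.prototype = new WebServiceClient();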
One more problem is global state modified in constructors. Assume you have a PersonsCache, which would hold all the Person objects ever created. You could not put adding the just-made instance in the
constructor of Person, like this:
function Person(name, age) {
  this.name = name;
  this.age = age;
  PersonsCache.add(this); // register every new Person (the method name is assumed)
}
Every time you do the inheritance Teacher.prototype = new Person(), you would call the constructor and add something. That something is a dangling reference to your new instance object. You will
normally never use it, but what is more important - you will have more persons than you really have...
Similar to this, if you ever touch the DOM or any HTML-related stuff, this will break. The problem is that you inherit while the script is loading, which is before body of the document is created.
You can circumvent this by e.g. putting everything inside body onLoad event handler, but again - that is just clumsy boilerplate and distracts you from what you should be doing.
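A small sketch of that workaround, deferring the setup until the document exists:
window.onload = function() {
  Teacher.prototype = new Person();
  // ... the rest of the setup that touches the DOM ...
};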
Another problem, common to all "solutions" (both the one above and all of the mentioned below) is that method overriding is not supported very well. Assume you have three classes: B, C and D. Assume
C inherits from B, D inherits from C. Assume all of the classes have one method called 'sayHi'. Now, you could override the method only in class D, for example. To call the base methods from that
method, you need to supply the actual base class. This may seem strange - you always know your base classes! Not true, though. Say I now change the hierarchy by inheriting B from class A, so we have
A -> B -> C -> D. If the method 'sayHi' stays in B, that is OK. If it, however, gets moved to class A for some reason, you would have to find all calls like B.sayHi.call(this, ...) and change them to
A.sayHi.call(this, ...). In Java, it would still be super.sayHi(...). Maybe I am nitpicking here - if you override in both C and D, in Java you couldn't reach A's version from D at all (casting doesn't
help, since ((A) this).sayHi() still dispatches to the most-derived override). Here is Java code that is an example of this:
// The method bodies below are assumed - each prints its class
// name and (except for A) delegates up the hierarchy via super.
class A {
  public void sayHi() { System.out.println("A"); }
}
class B extends A {
  public void sayHi() { super.sayHi(); System.out.println("B"); }
}
class C extends B {
  public void sayHi() { super.sayHi(); System.out.println("C"); }
}
class D extends C {
  public void sayHi() { super.sayHi(); System.out.println("D"); }
}
public class Inherit {
  public static void main(String[] args) {
    new D().sayHi();
  }
}
Try commenting out different sayHi methods in base classes and see how it behaves. In Java, super.sayHi() finds the "first method up the hierarchy".
There are ways to circumvent some of JavaScript's inheritance "features". Discussion below is about inheritance without calling the base constructors during inheritance.
Use empty constructors and construct using special constructor method
Back to empty-constructor method. This could look like this for the above Person & Teacher example:
function Person() {}
Person.ctor = function(name, age) {
  this.name = name;
  this.age = age;
};
Person.prototype.display = function() {
  alert('My name is ' + this.name + ' and I am ' + this.age);
};
function Teacher() {}
Teacher.prototype = new Person();
Teacher.ctor = function(name, age, subject) {
  Person.ctor.call(this, name, age);
  this.subject = subject;
};
Teacher.prototype.teach = function() {
  alert('Teaching ' + this.subject);
};
var teacher = new Teacher();
// #1
Teacher.ctor.call(teacher, 'Foo Bar', 35, 'math');
It obviously works pretty well, since we have removed the unwanted call when doing the inheritance. It also obviously looks very clumsy and there are many more characters to type for no good reason.
This is not very OO in another sense: the construction process is separated from instantiation. If you tried to call teacher.display() in place of the #1 comment, you actually could, and you would get
incorrect results, of course.
Copy prototype
Some recommend doing a straight prototype-copy. I used this one on two of my projects and it worked very well.
The idea here is very simple. It gets answered when you ask - what does "to inherit" mean in JavaScript? Take a look at the above ways to implement inheritance. What they all do is take the parent class'
prototype at the start. This is what happens when you say something like Teacher.prototype = new Person(). Actually, this does something slightly different, but the end result is the same - Teacher.prototype
ends up referencing what the parent class' prototype contains. The key word is "copy". JavaScript is dynamic, so we can verify this:
function Parent() {
}
Parent.prototype.display = function() {
  alert('Parent.display');
};
Parent.prototype.callDisplay = function() {
  this.display();
};
function Child() {
}
Child.prototype = new Parent();
// Change parent's definition of display.
Parent.prototype.display = function() {
  alert('Parent.display - modified');
};
var parent = new Parent();
var child = new Child();
parent.display(); // 'Parent.display - modified'
child.display();  // 'Parent.display - modified'
As you can see, Child's prototype now contains a definition of 'display' inherited from Parent and it changes as it should. How is this a copy then? Again, see this for a very good explanation. How
this really works is like this. When you say new Child(), you actually get the object which has __proto__ key. Take a look:
var s = '';
var childProto = new Child().__proto__;
for(var i in childProto) {
  s += i + ': ' + childProto[i] + '\n';
}
alert(s);
You see the display method. Is that the child's display method? No, it's Parent's. How JavaScript uses this thing is rather interesting - it walks it as a tree. Here:
var childProto = new Child().__proto__;
var parentProto = childProto.__proto__;
alert(childProto == Child.prototype);   // true
alert(parentProto == Parent.prototype); // true
Say you have var child = new Child(). This child has its __proto__ pointing to Child.prototype. Each next reference to __proto__ gets you one level up the inheritance chain, thus
child.__proto__.__proto__ gives you Parent.prototype. If we had more levels, we would have got GrandParent.prototype and so on.
The same thing, however, happens when you pick a variable or method. Say you want to do child.display(). JavaScript first looks in child - do you have a display? Nope. OK, go one level up - look at
child.__proto__. Child doesn't define display, sorry. OK, how about child.__proto__.__proto__? Now, since child.__proto__.__proto__ == Parent.prototype, you have a display here. So it gets called.
You can think of this as a sum of all things up the inheritance chain, but child classes have precedence - if they override something, it is overridden as they want. You can even override something
on an object-level:
var child1 = new Child();
var child2 = new Child();
child2.display = function() {
  alert('child2 - modified');
};
child1.display();
child2.display();
You will get 'Parent.display - modified' and 'child2 - modified'.
We need a little more help from here. If you take a look, it says what var child = new Child() does:
Makes new object (i.e. new Object()) - let's call it 'cobj'
Calls Child constructor, passing the created 'cobj' object as this
Implicitly sets cobj.__proto__ to Child.prototype
Sets child to cobj
Hm... Now, what would Child.prototype = new Parent() do? It would set Child.prototype.__proto__ to Parent.prototype. This is how it all works. Let's take a look again at what var child = new Child() does:
Makes new object (i.e. new Object()) - let's call it 'cobj'
Calls Child constructor, passing the created 'cobj' object as this
Implicitly sets cobj.__proto__ to Child.prototype
Remember child.__proto__ == cobj.__proto__ == Child.prototype, but also Child.prototype.__proto__ == Parent.prototype.
Thus, child.__proto__.__proto__ == Parent.prototype and this is how it all can work nicely
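You can check this chain directly in browsers that expose __proto__:
var child = new Child();
alert(child.__proto__ == Child.prototype);            // true
alert(child.__proto__.__proto__ == Parent.prototype); // true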
Back to our "copy" - what gets copied is this __proto__ thing - the fact that Child.prototype.__proto__ == Parent.prototype. Let's say that our classes would never change in the "future". That is,
whenever we make a class, we would never change it later - we will not do the things we did in the previous example. This is a quite reasonable assumption. You can change your classes in JavaScript,
even in Java using reflection, but that is not very OO, right? Kinda hacky and we don't want to have hacky code.
If you have all the above assumptions in place and you are OK with them, then you can simulate the above mechanism. The only thing you actually need to do is copy the things from Parent.prototype to
Child.prototype and that is it. You lose the __proto__ hierarchy, true, but you can simulate that, too (although not completely). It works pretty nicely, however. Here is how I did it on my last project:
Klass = {
  extend: function(klassChild, klassParent) {
    var pkChild = klassChild.prototype;
    var pkParent = klassParent.prototype;
    for(var i in pkParent)
      pkChild[i] = pkParent[i];
    pkChild.constructor = klassChild;
    pkChild.parent = klassParent;
    pkChild.klassName = Klass.name(klassChild);
  },
  create: function(klass) {
    var prototype = klass.prototype;
    prototype.constructor = klass;
    prototype.klassName = Klass.name(klass);
  },
  name: function(klass) {
    return klass.toString().match(/function (.*)\(/)[1];
  }
};
The Klass is the utility object which can be used like this:
function Parent() {
}
Parent.prototype.display = function() {
  alert('Parent.display: ' + this.value);
};
Parent.prototype.callDisplay = function() {
  this.display();
};
Parent.prototype.value = 1;
function Child() {
}
Klass.extend(Child, Parent);
Child.prototype.value = 2;
var child = new Child();
child.callDisplay(); // 'Parent.display: 2'
Here, instanceof doesn't work, and later changes to a parent class do not propagate to child classes. These are the imperfections.
Check arguments for inheritance-related info
Another method is to stop inheritance calls in their roots. For example:
var ONGOING_INHERITANCE = {};
function Person(name, age) {
  if(arguments[0] == ONGOING_INHERITANCE) return;
  this.name = name;
  this.age = age;
}
Person.prototype.display = function() {
  alert('My name is ' + this.name + ' and I am ' + this.age);
};
function Teacher(name, age, subject) {
  if(arguments[0] == ONGOING_INHERITANCE) return;
  Person.call(this, name, age);
  this.subject = subject;
}
Teacher.prototype = new Person(ONGOING_INHERITANCE);
Teacher.prototype.teach = function() {
  alert('Teaching ' + this.subject);
};
var teacher = new Teacher('Foo Bar', 35, 'math');
alert(teacher instanceof Person); // true
This code works as expected. It is a lot less clumsy than the previous solutions. The last line tells you that it gives the right results, i.e. teacher really is an instance of the Person class.
For the alternative approach to this, take a look at this page.
This is most of what should be considered basic JavaScript OO-like programming. Hope you enjoyed it.
According to Wikipedia, JavaScript is:
JavaScript is a scripting language most often used for client-side web development. It was the originating dialect of the ECMAScript standard. It is a dynamic, weakly typed, prototype-based
language with first-class functions.
What is interesting in the previous definition is that there is no mention of object orientation. While there is nothing that says that you must write OO programs, it is definitely one of the most
common ways programs are built these days. While there are other paradigms (functional programming, logic programming, structural programming, ...), JavaScript is nowadays mostly used in either
simple imperative/structural or more complex OO-like ways. OO-like and not OO, because JavaScript is not an OO-capable language. For example, it doesn't support any protection mechanisms - everything is
public. It supports something called prototype-based development. However, most people still call it OO - I suppose it's just "good enough".
Let's take a look at it. I will assume you are using JavaScript from within a standard browser. This is what it is:
function Person(name, age) {
  this.name = name;
  this.age = age;
}
Person.prototype.display = function() {
  alert('My name is ' + this.name + ' and I am ' + this.age);
};
function main() {
  var person = new Person('Foo Bar', 33);
  person.display();
}
If you somehow run method 'main' (e.g. by having it in your body onload event), you would see:
My name is Foo Bar and I am 33
in a dialog that pops up.
Although there is nothing even resembling a class in the above code if you compare it to e.g. Java classes, the above code actually is a basic example of a JavaScript class. The class is called Person.
Multiple constructors
In JavaScript, you cannot have multiple constructors as in other languages (e.g. C++, Java, C#, Ruby, Scala, ...). The reason for this is that classes are represented as functions. JavaScript doesn't
have a concept of method overloading, as e.g. Java has. For example, in Java you can write:
public class MethodOverload {
  public void method(int a) {}
  public void method(String b) {}
  public void method(double[] c) {}
}
The above code means that every instance of MethodOverload class have three methods with the same name. Which one gets called depends on the input parameters. If you would do the following:
MethodOverload mo = new MethodOverload();
mo.method(1);
mo.method("Hi");
mo.method(new double[]{ 1.0d, 2.0d });
then the above three calls would call the methods in order as given in the class definition. method(1) would call method(int a), method("Hi") would call method(String b) and method(new double[]{
1.0d, 2.0d }) would call method(double[] c).
JavaScript, however, doesn't have this capability. In fact, in JavaScript, you can define multiple functions with the same name, but only the last one will "stick". To demonstrate this, try running
the following:
function fun(a) {
  alert('first: ' + a);
}
function fun(a) {
  alert('second: ' + a);
}
fun(55);
The above code would display 'second: 55'. The interpreter will not even complain - the above code is valid. As if the first function hadn't even existed. You can circumvent this by using either
optional named arguments or unnamed arguments. Optional named arguments means that you have one constructor which does all the work, while all other constructors are just the utility constructors for
the user to be able to write the shorter code. For example:
function Person(firstName, lastName, sex, age, email, socialSecurityNumber, personalPhone, businessPhone, personalAddress, businessAddress, blogPage) {
  this.firstName = firstName;
  this.lastName = lastName;
  if(typeof(sex) == 'undefined') {
    // sex is not given.
    this.sexDefined = 'no';
  } else {
    // sex is given.
    this.sexDefined = 'yes';
  }
  // The same for all other optional arguments.
}
Assume that only firstName and lastName are required. You could instantiate it by:
var person = new Person('Foo', 'Bar');
alert([person.firstName, person.lastName, person.sexDefined]);
var person = new Person('Foo', 'Bar', 'male');
alert([person.firstName, person.lastName, person.sexDefined]);
The above would display 'Foo,Bar,no' and 'Foo,Bar,yes'. For the above two lines, you would accomplish the same thing in Java by writing two constructors:
public class Person {
  private String firstName;
  private String lastName;
  private String sex;
  public Person(String firstName, String lastName, String sex) {
    this.firstName = firstName;
    this.lastName = lastName;
    this.sex = sex;
  }
  public Person(String firstName, String lastName) {
    this(firstName, lastName, null);
  }
}
The constructor would have to check whether all other things are undefined or not. A small digression - it is not enough to check for null in this case, for example:
function f(a) {
  alert([a == null, typeof(a), typeof(a) == 'undefined']);
}
f(1);
f(null);
The above would display:
- f(1) => false, number, false
- f(null) => true, object, false
To check whether the argument is defined, always use typeof, except if you are sure that you will not have nulls as inputs.
The other option for multiple constructors is to use variable arguments:
function f() {
  var s = '';
  for(var i = 0; i < arguments.length; i++)
    s += (i == 0 ? '' : ', ') + i + ': ' + arguments[i];
  alert(s);
}
f('a');
f('b', 888);
The above would display:
- f('a') => 0: a
- f('b', 888) => 0: b, 1: 888
Using the above you could make the constructor take any parameters, decide what to do based on the length and types of the arguments, etc. Both options require checking whether the arguments are
defined. The variable-arguments option is even more obscure, since you don't know how to call the constructor unless you put a comment before it. That is maybe the reason why you don't see too much of
this in real life, i.e. one-constructor classes are the most common in JavaScript. For that reason, I will stick to one-constructor classes and occasionally use the optional-arguments constructors.
Instance variables and methods
Back to the first example:
function Person(name, age) {
  this.name = name;
  this.age = age;
}
Person.prototype.display = function() {
  alert('My name is ' + this.name + ' and I am ' + this.age);
};
function main() {
  var person = new Person('Foo Bar', 33);
  person.display();
}
Instance variables are used directly on an object, similar to the way you do it in e.g. Java. When you say this.name, it refers to the 'name' variable in the current instance. Note, however, that the
use of 'this' is obligatory, contrary to many situations in Java. When you say person.display(), it calls the method 'display' on the 'person' object, which happens to be an instance of the Person class.
JavaScript is both dynamically and weakly typed. This means that it resolves the type of the object in the runtime and you never supply it in the code. When you say
var person = new Person('Foo Bar', 33);
it doesn't say anywhere in the code what person is - this is determined at runtime. You could do, for example, the following:
var person = new Person('Foo Bar', 33);
person = new SomeOtherClass();
person.display();
Depending on whether SomeOtherClass has a 'display' method, this will either call that method or raise an 'undefined function'-kind of error in the JavaScript interpreter.
The same way you can have object methods, you can have object variables defined. This is something similar to class-level initialization in Java:
function Person(name, age) {
  this.name = name;
  this.age = age;
}
Person.prototype.display = function() {
  alert('My name is ' + this.name + ' and I am ' + this.age + '. I am from ' + this.planet);
};
Person.prototype.planet = 'Earth';
function main() {
  var person = new Person('Foo Bar', 33);
  person.display();
}
Note that 'planet' is the object-level, not class-level variable. To demonstrate that, try the following in main():
var person1 = new Person('Foo Bar', 33);
person1.planet = 'Mars';
var person2 = new Person('Baz Geez', 22);
person1.display();
person2.display();
person1.planet = 'Jupiter';
person2.planet = 'Neptune';
person1.display();
person2.display();
The output will be:
My name is Foo Bar and I am 33. I am from Mars
My name is Baz Geez and I am 22. I am from Earth
My name is Foo Bar and I am 33. I am from Jupiter
My name is Baz Geez and I am 22. I am from Neptune
As you can see, changing planet of person1 did not have any effect on planet of person2 and vice versa.
Class-level (a.k.a. static) methods and variables
You can have static methods and static variables in JavaScript - just omit the 'prototype' from the definition and that's it. For example:
function Person(name, age) {
  this.name = name;
  this.age = age;
}
Person.findOldest = function(persons) {
  if(persons.length == 0)
    return null;
  var oldest = persons[0];
  for(var i = 1; i < persons.length; i++)
    if(persons[i].age > oldest.age)
      oldest = persons[i];
  return oldest;
};
var persons = [
  new Person('Very Young', 10),
  new Person('Just Right', 25),
  new Person('Middle Age', 50),
  new Person('Very Old', 70)
];
var oldest = Person.findOldest(persons);
alert([oldest.name, oldest.age]);
Person class now has 'findOldest' method. This method is a static method. You call it by giving the name of the class: Person.findOldest.
To comment on the previous: this is just a convention, again a consequence of JavaScript not really being an OO language. There is nothing wrong with defining the method as:
Person.prototype.findOldest = function(persons) {
  // ... same body as above ...
};
I use the first convention since:
- In the second case, every object would have findOldest, so you couldn't really distinguish what is a static method and what is not,
- It means more typing if you also follow the convention (which I do) that static methods should be called with the name of the class instead of an object instance name.
In fact, you could have noticed that '.prototype' is just an object. It is a special object, since when you say 'new Person' it will make the object and hook everything from Person's prototype
object up to the instance. However, with static methods, you could very well do something like this:
Person.staticMethods = {};
Person.staticMethods.findOldest = function(persons) {
  // ... same body as above ...
};
Try it - it works the same. I omit all such qualifiers - prototype, staticMethods or whatever else you could put there. It is clear without these qualifications that this is a static method, if you always
follow this convention (which is not hard).
The same approach works for static variables:
function Person(name, age) {
  this.name = name;
  this.age = age;
}
Person.prototype.isAdult = function() {
  return this.age >= Person.adultAge;
};
Person.adultAge = 18;
var persons = [
  new Person('Very Young', 10),
  new Person('Just Right', 25),
  new Person('Middle Age', 50),
  new Person('Very Old', 70)
];
for(var i = 0; i < persons.length; i++)
  alert(persons[i].isAdult());
'adultAge' is a static variable in Person class. The above will output: false, true, true, true, since only 'Very Young' is not adult.
Hope this helped a little. In the next part, I'll write about inheritance. | {"url":"http://izittm.blogspot.com/","timestamp":"2014-04-21T07:04:40Z","content_type":null,"content_length":"148638","record_id":"<urn:uuid:ba374e24-97d0-44ff-b0d0-368b255688d4>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00632-ip-10-147-4-33.ec2.internal.warc.gz"} |
4 dimensional array
November 17th, 2013, 04:48 AM
4 dimensional array
Hello everybody..
My name is John and I have decided to learn java.
I am fiddling around a bit with it and I want to declare a 4-dimensional int array where EACH array contains 20 cells.
I have this:
Code :
int[][][][] vierdeDimensie = new int[20][20][20][20];
Is this correct?
Thanks in advance!!!
November 17th, 2013, 04:58 AM
Re: 4 dimensional array
Arrays contain elements. How many elements total do you need? As you've currently defined the array, it will hold 20 x 20 x 20 x 20 elements, and that's a lot. (But not a big deal for a
computer.) What will you use this array for? Some part of a game?
November 17th, 2013, 05:10 AM
Re: 4 dimensional array
Thank you very much for replying.
This is a practice exercise I am doing and the exercise literally says:
Declare a 4 dimensional int array where EACH array containing 20 cells.
November 17th, 2013, 05:21 AM
Re: 4 dimensional array
Then I'd say you nailed it.
November 17th, 2013, 05:38 AM
Re: 4 dimensional array
Thank you!!!!
I tried some other stuff too. Like if I have a one-dimensional array of 10 elements and I want to know the content of the first position and last position of the string array.
also I think I know how to get the length of the 3rd array.
I came up with:
Code :
String naam = namen[9];
String naam = namen[0];
int i = arrayvoorbeeld[0][0].length;
Am I on the right track....?
November 17th, 2013, 05:59 AM
Re: 4 dimensional array
Taken together in the same scope, the first two lines would generate an error, but taken separately, the first line assigns the last or 10th element of namen to naam and the second line assigns
the first element of namen to naam.
4-dimensional arrays are hard for me to visualize, so when you say you want to find the length of the third array, I'd ask "Which one is the third array?" because I don't know.
In a two-dimensional array, the question could be "How many columns are in the 8th row?" The answer to that would be:
Code :
my2DArray[7].length
That I can visualize and understand. 4-D arrays, forget it.
November 17th, 2013, 06:15 AM
Re: 4 dimensional array
Taken together in the same scope, the first two lines would generate an error, but taken separately, the first line assigns the last or 10th element of namen to naam and the second line assigns
the first element of namen to naam.
4-dimensional arrays are hard for me to visualize, so when you say you want to find the length of the third array, I'd ask "Which one is the third array?" because I don't know.
In a two-dimensional array, the question could be "How many columns are in the 8th row?" The answer to that would be:
Code :
my2DArray[7].length
That I can visualize and understand. 4-D arrays, forget it.
Thank you for replying...
I am doing an excersise to learn java and the exact question (they are all separate question) is:
What command gives the length of the 3rd dimension of an array example.
lets say I have the following 4 dimensional array:
Code :
int[][][][] arrayvoorbeeld = new int[1][2][3][4];
int i = arrayvoorbeeld[0][0].length;
The last line of code (int i = arrayvoorbeeld[0][0].length;) gives me the length of the 3rd array.
Am I correct?
November 17th, 2013, 06:37 AM
Re: 4 dimensional array
Again, multi-dimensioned arrays greater than 3D are hard to visualize, so let's think this one through. It's important to use specific and precise language. I suspect English is not your native
language, and that's fine, you're doing great, but there's a big difference between finding the length of the 3rd dimension and the length of the 3rd array. So let's stick to using "dimension" in
this case. Now we have to agree on dimension numbering.
If we were talking about a 2D array, the length of the name of the array, my2DArray.length, specifies the number of rows in my2DArray, so let's call that the 1st dimension. The number of columns
in a specific row, or the 2nd dimension, is then found by my2DArray[row].length. Remember that multi-dimensioned arrays don't have to be square in Java. Each row could have a different length or
different number of columns.
Following that easier-to-understand pattern, then the length of the dimensions in your 4D array are:
Code :
arrayvoorbeeld.length; // length of the first dimension or the number of rows
arrayvoorbeeld[x].length; // length of the second dimension or the number of columns in row x
arrayvoorbeeld[x][y].length; // length of the third dimension or the depth of column (x, y)
arrayvoorbeeld[x][y][z].length; // length of the 4th dimension or the time of each depth (x, y, z)
// (assuming the 4th dimension is time)
In your case, every one of the above for x, y, z = 0 through 19 should be 20, right? You might code that up to verify.
November 17th, 2013, 11:29 AM
Re: 4 dimensional array
Again, multi-dimensioned arrays greater than 3D are hard to visualize, so let's think this one through. It's important to use specific and precise language. I suspect English is not your native
language, and that's fine, you're doing great, but there's a big difference between finding the length of the 3rd dimension and the length of the 3rd array. So let's stick to using "dimension" in
this case. Now we have to agree on dimension numbering.
If we were talking about a 2D array, the length of the name of the array, my2DArray.length, specifies the number of rows in my2DArray, so let's call that the 1st dimension. The number of columns
in a specific row, or the 2nd dimension, is then found by my2DArray[row].length. Remember that multi-dimensioned arrays don't have to be square in Java. Each row could have a different length or
different number of columns.
Following that easier-to-understand pattern, then the length of the dimensions in your 4D array are:
Code :
arrayvoorbeeld.length; // length of the first dimension or the number of rows
arrayvoorbeeld[x].length; // length of the second dimension or the number of columns in row x
arrayvoorbeeld[x][y].length; // length of the third dimension or the depth of column (x, y)
arrayvoorbeeld[x][y][z].length; // length of the 4th dimension or the time of each depth (x, y, z)
// (assuming the 4th dimension is time)
In your case, every one of the above for x, y, z = 0 through 19 should be 20, right? You might code that up to verify.
Thank you very much for your patience and your very detailed explanation!
I meant length of the dimension indeed.
You made it very clear for me.
I am very motivated to continue and learn java.
I will be back with questions, count on that ;-)
Thanks a bunch
November 17th, 2013, 12:20 PM
Re: 4 dimensional array
You're welcome. Glad to help. | {"url":"http://www.javaprogrammingforums.com/%20java-theory-questions/33895-4-dimensional-array-printingthethread.html","timestamp":"2014-04-19T04:33:05Z","content_type":null,"content_length":"16541","record_id":"<urn:uuid:d4371eec-4127-4832-bf04-fd951a1e3a11>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00204-ip-10-147-4-33.ec2.internal.warc.gz"} |
The art of modelling using CFD. Part VI – Peripheral Boundary Conditions
This final blog in this series focuses on what is sometimes the most ethereal of CFD modelling arts, where and how to define your peripheral boundary conditions. A fancy phrase but in reality no more
than deciding where the interface is between what you model and what you don't. Heat is contemptuous of such divisions; it will spread out from its source and keep on spreading via convection,
conduction and radiation until it heats up (albeit only slightly) the earth’s atmosphere. From a pragmatic perspective you can’t model the entire earth just to find an active IC’s junction
temperature. You have to truncate your model somewhere and somehow.
The term ‘boundary condition’ is well used and understood when one talks of solving PDEs (partial differential equations). When applied to a 3D solution of the PDEs governing fluid flow and heat
transfer (the Navier Stokes equations) the term historically applied to the outer edges of the computational domain, the volume of space in which you predicted the fluid flow. The earliest CFD models
had an uninterrupted volume of fluid with solid, or well defined flow, conditions on the peripheral faces of that volume. Today, boundary condition (BC) as a phrase is used to describe a much wider
range of inputs to a model, e.g. power dissipations, internal geometries etc. For now let’s use the word peripheral to focus on just those conditions that cement the interface between the volume of
space you are modelling and the volume of space that you are not.
“Words offer the means to meaning” (good film, 5 fictional credits to the first person to comment as to which film, if you Google it you’re just cheating yourself, that’s what I tell my kids anyway).
‘Level’ is another word that is often used to define the type of electronics cooling simulation performed (and other types simulations as well, word re-use here can have some disastrous effects, what
a thermal engineer considers a system level model really isn’t what an EE would consider). From a thermal perspective:
• Wafer level, what goes on in a silicon die
• Package level, what goes on in the packaging around the die
• Board level, considering many components sitting on a board
• System level, the PCB(s) + chassis/box + fans + PSUs …
• Room level, considering the product operating in its final destination as in a Data Center
If you are doing simulation at any level you have to define peripheral BCs that link it to the level(s) above. Heat and fluid need to know how they enter or leave the model from or to that which you
are not modelling.
Take a wafer level simulation. You are modelling a die in detail, maybe a series of stacked die to determine optimal placement of TSVs. How do you represent the thermal effects that the
unmodelled package has on the behaviour of heat in the silicon? Whether the package has a heat slug, whether the die abuts a DAP, whether there will be a heatsink on the package, all these things
will affect the thermal behaviour within the die, all these conditions outside your model will have to be prescribed on the periphery of your wafer level model. For such conduction type models you
would set temperatures and heat transfer coefficients on the sides of your silicon. Temperature is just that, heat transfer coefficient is a measure of how effectively heat can pass through that
peripheral portion of your model. The two together prescribe the effects of the package.
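As a point of reference, the heat transfer coefficient h in such a boundary condition relates the heat flux leaving a face to the temperature difference across it via Newton's law of cooling, q = h x (Tsurface - Treference). Together, the prescribed temperature and h therefore determine how much heat the unmodelled surroundings can draw out of each face of the die.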
Such decisions, such prescriptions, have to be made at any level of modelling. For board level you need to define how the air moves over the board. For a system level you have to set the temperature
of the air coming into your box. This all begs the question where do you get such information from?
Answer 1: set these BCs based on ‘worst case’ operating environments. Doing this will enable you to verify that your package, board, room etc. will be thermally viable in the extreme.
Answer 2: import such BCs from existing models of the adjacent levels. FloTHERM excels at this with a feature called ‘zoom-in’:
e.g. getting actual operating conditions from a system level model to impose them on a board level model. Zoom-in again from board level down to package level. Such automation leads to far more
accurate simulation at any level at literally a press of a button (sorry, rather selection from a pop-up menu).
Combining adjacent levels into a single model alleviates the time and effort needed to pass BCs around using zoom-in (and zoom-out). Gathering all the required information together into a single
model is just as challenging though. In the glorious future all vendors will supply numerical thermal models of all of their parts, and model building in CAD and CAE will simply be a question of
drag+drop, plug and play. Until that time features that improve the effectiveness of the setting of peripheral BCs will aid in the art of this aspect of CFD modelling.
21st June 2010, Ross-on-Wye
- JOURNAL OF LOGIC PROGRAMMING , 1994
Cited by 421 (57 self)
We study the question of determining whether an unknown function has a particular property or is ε-far from any function with that property. A property testing algorithm is given a sample of the
value of the function on instances drawn according to some distribution, and possibly may query the function on instances of its choice. First, we establish some connections between property testing
and problems in learning theory. Next, we focus on testing graph properties, and devise algorithms to test whether a graph has properties such as being k-colorable or having a ρ-clique (clique of
density ρ w.r.t. the vertex set). Our graph property testing algorithms are probabilistic and make assertions which are correct with high probability, utilizing only poly(1/ε) edge-queries into the
graph, where ε is the distance parameter. Moreover, the property testing algorithms can be used to efficiently (i.e., in time linear in the number of vertices) construct partitions of the graph
which corre...
- Machine Learning , 1991
Cited by 214 (16 self)
A higher order recurrent neural network architecture learns to recognize and generate languages after being "trained " on categorized exemplars. Studying these networks from the perspective of
dynamical systems yields two interesting discoveries: First, a longitudinal examination of the learning process illustrates a new form of mechanical inference: Induction by phase transition. A small
weight adjustment causes a "bifurcation" in the limit behavior of the network. This phase transition corresponds to the onset of the network’s capacity for generalizing to arbitrary-length strings.
Second, a study of the automata resulting from the acquisition of previously published training sets indicates that while the architecture is not guaranteed to find a minimal finite automaton
consistent with the given exemplars, which is an NP-Hard problem, the architecture does appear capable of generating non-regular languages by exploiting fractal and chaotic dynamics. I end the paper
with a hypothesis relating linguistic generative capacity to the behavioral regimes of non-linear dynamical systems.
- Artificial Intelligence , 1997
Cited by 129 (20 self)
This paper presents a set of methods by which a learning agent can learn a sequence of increasingly abstract and powerful interfaces to control a robot whose sensorimotor apparatus and
environment are initially unknown. The result of the learning is a rich hierarchical model of the robot's world (its sensorimotor apparatus and environment). The learning methods rely on generic
properties of the robot's world such as almost-everywhere smooth effects of motor control signals on sensory features. At the lowest level of the hierarchy, the learning agent analyzes the effects
of its motor control signals in order to define a new set of control signals, one for each of the robot's degrees of freedom. It uses a generate-and-test approach to define sensory features that
capture important aspects of the environment. It uses linear regression to learn models that characterize context-dependent effects of the control signals on the learned features. It uses these
models to define high-level control laws for finding and following paths defined using constraints on the learned features. The agent abstracts these control laws, which interact with the
continuous environment, to a finite set of actions that implement discrete state transitions. At this point, the agent has abstracted the robot's continuous world to a finite-state world and can
use existing methods to learn its structure. The learning agent's methods are evaluated on several simulated robots with different sensorimotor systems and environments.
, 1999
Cited by 101 (4 self)
XML is rapidly emerging as the new standard for data representation and exchange on the Web. An XML document can be accompanied by a Document Type Descriptor (DTD) which plays the role of a schema
for an XML data collection. DTDs contain valuable information on the structure of documents and thus have a crucial role in the efficient storage of XML data, as well as the effective formulation and
optimization of XML queries. In this paper, we propose XTRACT, a novel system for inferring a DTD schema for a database of XML documents. Since the DTD syntax incorporates the full expressive power
of regular expressions, naive approaches typically fail to produce concise and intuitive DTDs. Instead, the XTRACT inference algorithms employ a sequence of sophisticated steps that involve: (1)
finding patterns in the input sequences and replacing them with regular expressions to generate “general ” candidate DTDs, (2) factoring candidate DTDs using adaptations of algorithms from the logic
optimization literature, and (3) applying the Minimum Description Length (MDL) principle to find the best DTD among the candidates. The results of our experiments with real-life and synthetic DTDs
demonstrate the effectiveness of XTRACT’s approach in inferring concise and semantically meaningful DTD schemas for XML databases. 1
, 1998
Cited by 90 (1 self)
. This paper first describes the structure and results of the Abbadingo One DFA Learning Competition. The competition was designed to encourage work on algorithms that scale well---both to larger
DFAs and to sparser training data. We then describe and discuss the winning algorithm of Rodney Price, which orders state merges according to the amount of evidence in their favor. A second winning
algorithm, of Hugues Juille, will be described in a separate paper. Part I: Abbadingo. 1 Introduction. The Abbadingo One DFA Learning Competition was organized by two of the authors (Lang and
Pearlmutter) and consisted of a set of challenge problems posted to the internet and token cash prizes of $1024. The organizers had the following goals: -- Promote the development of new and better
algorithms. -- Encourage learning theorists to implement some of their ideas and gather empirical data concerning their performance on concrete problems which lie beyond proven bounds, particularly in
the direction o...
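A minimal sketch of the evidence idea behind Price's winning approach, as I read the description above: overlay the two subtrees of a prefix-tree acceptor that a merge would identify, and count how many labeled states agree, rejecting the merge on any label conflict. The `Node` class is my own hypothetical representation, not the competition code.

```python
class Node:
    """Prefix-tree acceptor node: label is True/False/None (accept/reject/unlabeled)."""
    def __init__(self, label=None):
        self.label = label
        self.children = {}  # symbol -> Node

def evidence(u, v):
    """Return the number of agreeing labeled state pairs if u and v were
    merged, or -inf if the merge would conflict on some label."""
    if u is None or v is None:
        return 0
    score = 0
    if u.label is not None and v.label is not None:
        if u.label != v.label:
            return float('-inf')
        score += 1
    for sym in set(u.children) | set(v.children):
        s = evidence(u.children.get(sym), v.children.get(sym))
        if s == float('-inf'):
            return float('-inf')
        score += s
    return score

# Evidence-driven state merging repeatedly performs the valid merge
# with the highest evidence score.
```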
- Machine Learning , 1990
"... We introduce a rigorous performance criterion for training algorithms for probabilistic automata (PAs) and hidden Markov models (HMMs), used extensively for speech recognition, and analyze the
complexity of the training problem as a computational problem. The PA training problem is the problem of ap ..."
Cited by 86 (0 self)
We introduce a rigorous performance criterion for training algorithms for probabilistic automata (PAs) and hidden Markov models (HMMs), used extensively for speech recognition, and analyze the
complexity of the training problem as a computational problem. The PA training problem is the problem of approximating an arbitrary, unknown source distribution by distributions generated by a PA. We
investigate the following question about this important, well-studied problem: Does there exist an efficient training algorithm such that the trained PAs provably converge to a model close to an
optimum one with high confidence, after only a feasibly small set of training data? We model this problem in the framework of computational learning theory and analyze the sample as well as
computational complexity. We show that the number of examples required for training PAs is moderate -- essentially linear in the number of transition probabilities to be trained and a low-degree
polynomial in the example l...
, 1996
"... Agents that operate in a multi-agent system need an efficient strategy to handle their encounters with other agents involved. Searching for an optimal interactive strategy is a hard problem
because it depends mostly on the behavior of the others. In this work, interaction among agents is represented ..."
Cited by 84 (2 self)
Agents that operate in a multi-agent system need an efficient strategy to handle their encounters with other agents involved. Searching for an optimal interactive strategy is a hard problem because
it depends mostly on the behavior of the others. In this work, interaction among agents is represented as a repeated two-player game, where the agents' objective is to look for a strategy that
maximizes their expected sum of rewards in the game. We assume that agents' strategies can be modeled as finite automata. A model-based approach is presented as a possible method for learning an
effective interactive strategy. First, we describe how an agent should find an optimal strategy against a given model. Second, we present an unsupervised algorithm that infers a model of the
opponent's automaton from its input/output behavior. A set of experiments that show the potential merit of the algorithm is reported as well. Introduction In recent years, a major research effort has
been invested in desi...
- Journal of the Association for Computing Machinery , 1993
"... Abstract. The minimum consistent DFA problem is that of finding a DFA with as few states as possible that is consistent with a given sample (a finite collection of words, each labeled as to
whether the DFA found should accept or reject). Assuming that P # NP, it is shown that for any constant k, no ..."
Cited by 82 (4 self)
Abstract. The minimum consistent DFA problem is that of finding a DFA with as few states as possible that is consistent with a given sample (a finite collection of words, each labeled as to whether
the DFA found should accept or reject). Assuming that P ≠ NP, it is shown that for any constant k, no polynomial-time algorithm can be guaranteed to find a consistent DFA with fewer than opt^k
states, where opt is the number of states in the minimum state DFA consistent with the sample. This result holds even if the alphabet is of constant size two, and if the algorithm is allowed to
produce an NFA, a regular expression, or a regular grammar that is consistent with the sample. A similar nonapproximability result is presented for the problem of finding small consistent linear
grammars. For the case of finding minimum consistent DFAs when the alphabet is not of constant size but instead is allowed to vary with the problem specification, the slightly
- 2nd Int. Workshop on Analogical and Inductive Inference (AII , 1989
"... This paper surveys recent results concerning the inference of deterministic finite automata (DFAs). The results discussed determine the extent to which DFAs can be feasibly inferred, and
highlight a number of interesting approaches in computational learning theory. 1 ..."
Cited by 78 (1 self)
This paper surveys recent results concerning the inference of deterministic finite automata (DFAs). The results discussed determine the extent to which DFAs can be feasibly inferred, and highlight a
number of interesting approaches in computational learning theory. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=51354","timestamp":"2014-04-16T21:53:43Z","content_type":null,"content_length":"39329","record_id":"<urn:uuid:c4ecc06e-7ecb-485f-b5b6-b060ebc18cd0>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00519-ip-10-147-4-33.ec2.internal.warc.gz"} |
• The book Theories of Computational Complexity is used at Asian Institute of Technology for the course AT02.20 Theory of Computation, for the courses Complejidad Algoritmica, Modelos de
Informatica Teorica and Introduccion a la complejidad algoritmica at Universidad De Granada, the course Symbolische Mathematik in der Theoretischen Physik by Karin Seifert-Lorenz, Christoph F.
Strnadl, Andreas Eichler.
• The book Information and Randomness is used for the following courses: at the University of Ulm by Prof. Dr. U. Schöning, at Martin-Luther-Universität Halle-Wittenberg by Prof. Dr. L. Staiger,
Theoretische Informatik 2" at the University of Potsdam by Prof. Dr. H. Jürgensen, Information and Coding Theory at the Universitat Politecnica de Valencia,Spain, by Prof. J. M. Sempere, LOG6:
Complexité et combinatoire at University of Nice Sophia Antipolis, France by Igor Litovsky and Bruno Martin.
• The book Computing with Cells and Atoms is used for the courses "COMP 4/6601: Models of Computation", The University of Memphis, USA , "Introduction to Quantum Computing (M743)", Indiana
University, USA , "06-12411 Introduction to Molecular and Quantum Computation", The University of Birmingham, UK , Seminar "Bits und Zellen", Hamburg University, Germany, Bioinformatics and
Biocomputing, Seoul National University, Korea.
• Stanford University at "Papers and Books for CS 446 and Beyond''.
• Opinions on the second edition of the book Information and Randomness, Springer-Verlag, Heidelberg, 2002:
• "A good review and comparison of molecular, quantum and membrane computing can be found in Computing with Cells and Atoms: An Introduction to Quantum, DNA and Membrane Computing by Cristian S.
Calude and Gheorghe Paun [22]." In Introduction to Quantum Computing (M743) , p. 23, by Zdzislaw Meglicki, Department of Mathematics at Indiana University, April 2, 2002.
• MathWorld citations: Almost all real numbers are lexicons, Chaitin's Constant and Bead-Sort.
• Wikipedia citations: Chaitin's Constant; see also Dictionary of Computer Science
• Y. Amaya. Characterization of Chaitin Machine Satisfying the Algorithmic Coding Theorem, Masters Thesis, Japan Advanced Institute of Science and Technology, Japan, February 2001.
• P. Andreasen. Universal Source Coding, Masters Thesis, Department of Mathematics, University of Copenhagen, Denmark, July 2001.
• G. Segre. Algorithmic Information Theoretic Issues in Quantum Mechanics, PhD Thesis, Department of Theoretical Physics, University of Geneva, Switzerland, October 2001.
• M. Lipponen's mathematics doctoral dissertation judged the best of 1996; Nevanlinna Prize awarded to the Turku researcher, Turun Sanomat, 22 April 1998, p. 11.
• Stanford University: Papers and Books for CS 446 and Beyond
• References to Quantum Computers; see also CERN
• A bibliography on primality testing
• E-journals in science
• Nagoya selected home-page
• Yonezawa Lab's WWW server, at the Department of Information Science of the University of Tokyo, bibliography in quantum computation.
• Presented in The Royal Society of New Zealand Science Digest (Science and Technology Alert 25), 17 April 1998.
• Applications, Interactions, and Connections, of Results and Methods From Core ("Pure") Computability Theory (C.C.T).
• Citations of Calude-Stay. From Heisenberg to Goedel via Chaitin: Quantum Computing SIG, Research in Quantum Computing and Information
• Michael Trott. Mathematica GuideBooks, Springer, 2004.
• Petrus H. Potgieter's talk: Topological aspects of the "random" sequences: The set of Kolmogorov-Chaitin-Solomonoff complex (KCSC) sequences is small in category in Cantor space but
Calude-Marcus-Staiger have recently given a metric topology in which the set of KCSC sequences is in fact co-meagre
• Included with the paper "From Heisenberg to Goedel via Chaitin" (with M.A. Stay) in the CERN list of most downloaded papers of year 2004 | {"url":"https://www.cs.auckland.ac.nz/~cristian/varia_quotations.html","timestamp":"2014-04-17T15:26:25Z","content_type":null,"content_length":"16642","record_id":"<urn:uuid:11d72253-1855-438f-bd43-45a8b0ad65f4>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00509-ip-10-147-4-33.ec2.internal.warc.gz"} |
Please Help!
Total # Posts: 89
S.S redo
8d 9c 10a 11b 12b
S.S redo
1d 2c 3b 4a 5b 6b 7c
S.S redo
1. The economic centers of the Northeast are the region's __________. (1 point) farms fisheries forests cities 2. Two major ports in the South are ______________________. (1 point) Atlanta and Dalton
Raleigh and Austin Miami and New Orleans New York City and Philadelphia 3...
S.S please help!
Oops wrong questions! posting new ones right now!
S.S please help!
1d 2c 3b 4a 5b 6b 7c My answers
S.S please help!
1 Marks: 3 Which statement best describes the government of Canada? Choose one answer. a. The monarch of Britain has total control over Canada. b. The Canadian parliament is ruled by the United
States. c. Canada has complete power over its own government. d. The prime minister...
Please help with Problem 2!
Jon and Melissa agree to meet in Chicago for the weekend. Jon travels 236 miles in the same time that Melissa travels 224 miles. If Jon's rate of travel is 3 mph more than Melissa's, and they travel
the same length of time, at what speed does Jon travel?
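If it helps, here is one standard setup, letting r be Melissa's speed in mph (equal travel times on both sides):

$$\frac{236}{r+3} = \frac{224}{r} \;\Longrightarrow\; 236r = 224(r+3) \;\Longrightarrow\; 12r = 672 \;\Longrightarrow\; r = 56,$$

so Jon travels at 56 + 3 = 59 mph (and both drive for 4 hours).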
What is the molarity of 5g of O2 in 250 mL? Show work, please.
The quantity demanded x (in units of a hundred) of the Mikado miniature cameras/week is related to the unit price p (in dollars) by p = −0.2x^2 + 80 and the quantity x (in units of a hundred) that the
supplier is willing to make available in the market is related to the u...
A spherical generator is used to produce a -15.00D surface on glass of refractive index 1.80. The diameter of the cutting tool is 80mm and the radius of the cutting surface is 4mm. What is the angle
between the axis of the tool and the axis of the lens?
1.) A bird can fly 20 km/h. How long does it take to fly 12 km?
Vector V1 is 6.29 units long and points along the negative x axis. Vector V2 is 4.23 units long and points at +35.0° to the +x axis. (a) What are the x and y components of each vector? (b) Determine
the sum V1 + V2.
I guess it's the format but I am little confused about what you did....
Can someone help me understand how to find the derivative of these two problems: y = ((t^2-1)/(t^2+2))^3 and y = sin^3(2x)? I think with the first one, I could just foil it maybe and cancel but I don't know
what to do with the ^3...and then the second one I haven't gotten the ha...
I have to evaluate the limit of these problems and I can't figure them out! 1.) Lim x->1 (x^2+x-2)/(x^2-x) 2.) (please note that the -2 is not under the square root) Lim h->0 (sqrt(4+h) - 2)/h
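Assuming the intended expressions are (x² + x − 2)/(x² − x) and (√(4+h) − 2)/h, a standard factor-and-conjugate sketch:

$$\lim_{x\to 1}\frac{x^2+x-2}{x^2-x}=\lim_{x\to 1}\frac{(x+2)(x-1)}{x(x-1)}=\frac{1+2}{1}=3,$$
$$\lim_{h\to 0}\frac{\sqrt{4+h}-2}{h}=\lim_{h\to 0}\frac{(4+h)-4}{h\,(\sqrt{4+h}+2)}=\lim_{h\to 0}\frac{1}{\sqrt{4+h}+2}=\frac{1}{4}.$$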
Algebra 2
log4 (x-2) - log4 (x+1) = 1
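One way this works out, combining the logs and then checking domains:

$$\log_4(x-2)-\log_4(x+1)=1 \;\Longrightarrow\; \frac{x-2}{x+1}=4 \;\Longrightarrow\; x-2=4x+4 \;\Longrightarrow\; x=-2,$$

but x = −2 makes both logarithms undefined, so the equation has no solution.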
Algebra 2
log x 8 = 2
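Rewriting in exponential form:

$$\log_x 8 = 2 \;\Longrightarrow\; x^2 = 8 \;\Longrightarrow\; x = 2\sqrt{2},$$

taking the positive root, since a logarithm base must be positive and not equal to 1.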
oops the equation is actually 2SO2 (g) + O2 (g) yield 2SO3 (g) the yield part was an arrow facing both directions, not just the product side
Consider the following equilibrium at 1000K: 2SO2 (g) + O2 (g) ⇌ 2SO3 (g) A study of this system reveals that there are 3.5E-3 moles of sulfur dioxide gas, and 4.8E-3 moles of oxygen gas present in
a 11.0L flask at equilibrium. The equilibrium constant for this re...
Change into standard form 1. x^2+6x+4y+5=0 2. y^2+12x+2y=-25
Write the standard equation of the ellipse with the given characteristics Vertices: (0,7) (0,-7) foci: (0,2) (0,-2)
Suppose $5000 is deposited in a bank account that compounds interest four times per year. The bank account contains $9900 after 13 years. What is the annual interest rate for this bank account? i
already set it up but i don't know where to go from there. Please help
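Assuming quarterly compounding as stated, one hedged way to finish the setup (the final figure is approximate):

$$9900 = 5000\left(1+\frac{r}{4}\right)^{4\cdot 13} \;\Longrightarrow\; r = 4\left[\left(\frac{9900}{5000}\right)^{1/52}-1\right] \approx 0.0529,$$

i.e. roughly a 5.3% annual rate.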
I do not understand the part in which you solve for x. Is it supposed to be the cos of 60 equals x divided by .2?
The three 500 g masses in the diagram are connected by massless, rigid rods to form a triangle. What is the triangle's rotational energy (in J) if it rotates at 1.50 rev/s about an axis through the
center? (Diagram shows equilateral triangle with 500g masses at each point,...
I actually just redid it and found something close to 9.21?
omg, thank u! I ended up getting something 1.70! THANKS!
How many grams of NaC2H3O2 must be dissolved in enough water to prepare 250.0 mL of a solution with a pH = 9.200? Molar mass of NaC2H3O2 is 82.03 g/mol. Ka for C2H3O2H is 1.8x10^-5
i'm having trouble with this math equation 5/2x^2-1/2^4 (need to find greatest common factor)
ETHANOL (C2H5OH) HAS A DENSITY OF 0.80g/ml. IF A 10%(v/v) MIXTURE OF ETHANOL IN WATER WAS PREPARED HOW MANY GRAMS OF ETHANOL WOULD BE IN 100ML OF THE MIXTURE? (ASSUMING MIXED VOLUME IS THE SUM OF THE
Hello, Suppose that a newly discovered element called centium has three isotopes that occur in nature. These are centium-200, centium-203, and centium-209. Assume that these isotopes occur in equal
amounts in nature. What will be the average atomic mass of this element?
Write an equation to match the problem situation below. Callie has 4 times as many CDs as she has DVDs. She has a total of 60 CDs and DVDs. How many DVDs, d, does Callie have?
when water is combined with magnesium nitride to produce magnesium hydroxide and ammonia gas, is this reaction endothermic or exothermic and why?
c - 1/4c = 49.95
2/7 k - 1/14 k = -3 does k= -42?
x/4 - 3x/2 = -1/2 I do not understand at all
Luke bought some things at the store. He paid $5.98 for a pizza, $3.75 for a case of dr.pepper, $2.50 for a box of brownie mix. And $1.01 in sales tax. What information is needed to find Lukes
correct change?
Write an expression that can be used to find the average if these numbers: 7, -2, 10, -4, -9
Martha bakes between 4 1/4 and 9 3/4 hours each day at her bakery. About how many hours does she spend baking in one week?
A triangle with two congruent sides and two angles that both measure 45 is a.scalene and right b.isosceles and acute c.isosceles and right d. Equilateral and acute
Math Please Help
Oh sorry I mean two angles that are complementary have a sum of 45
Math Please Help
Which statement is not true? Two angles that are supplementary have a sum of 180 or Two angles that are complementary have a sum of 45
Which of the following shapes could not represent a top, front, or side view of a pentagonal prism? a. A pentagon b. A triangle c. A square d. A rectangle
Math Please Help
Ok I get it. Thank you!!!
Math Please Help
But that equals 24 and that not a answer on the thing
Math Please Help
2/3 = .6 repeating x 16 = 10.6 repeating and 16/1 x 2/3 = 10.6 repeating
Math Please Help
I got 10 something but it's not one of the answers
Math Please Help
Math Please Help
could I do it in a proportion?
Math Please Help
The perimeters of two equilateral triangles are in the ratio 2:3. If the perimeter of the smaller triangle is 16 cm what is the measure of a side of the larger triangle?
A 66.0 kg man is ice-skating due north with a velocity of 5.70 m/s when he collides with a 36.0 kg child. The man and child stay together and have a velocity of 2.20 m/s at an angle of 33.0° north
of east immediately after the collision. What was the velocity of the child...
I tried this and got 118882 for Va' and 390097 for Va but it was wrong?
An alpha particle collides with an oxygen nucleus, initially at rest. The alpha particle is scattered at an angle of 60.9° above its initial direction of motion, and the oxygen nucleus recoils at an
angle of 51.9° on the opposite side of that initial direction. The fin...
Thank you! what is the 2257 from?
If 1.10 g of steam at 100.0 degrees celsius condenses into 38.5 g of water, initially at 27.0 degrees celsius, in an insulated container, what is the final temperature of the entire water sample?
Assume no loss of heat into the surroundings
A horizontal rifle is fired at a bull's-eye. The muzzle speed of the bullet is 795 m/s. The barrel is pointed directly at the center of the bull's-eye, but the bullet strikes the target 0.042 m below
the center. What is the horizontal distance between the end of the ri...
A golfer imparts a speed of 32.1 m/s to a ball, and it travels the maximum possible distance before landing on the green. The tee and the green are at the same elevation. (a) How much time does the
ball spend in the air? (b) What is the longest "hole in one" that the...
College Calculus 1
A runner sprints around a circular track of radius 150 m at a constant speed of 7 m/s. The runner's friend is standing at a distance 300 m from the center of the track. How fast is the distance
between the friends changing when the distance between them is 300 m? (Round to...
The Ruby Throated Hummingbird has a resting heart rate of 250 beats per minute. When it is threatened or chased, it's heart rate can reach 1200 beats per minute. Predict its lifespan and justify your
I need to know how to write an equation for the horizontal line with the points of ( -2, -5). I understand that it should be written as y=k. So would it be, y=-2x-5 or would it be y=-5? Where do you
find k?
The total sales made by a salesperson was $25,000 after 3 months and $68,000 after 23 months. Predict the total sales after 42 months.
Advanced Functions
Rena has been offered entry-level positions with two firms. Firm A offers a starting salary of $40 000 per year, with a $2000 per year increase guaranteed each subsequent year. Firm B offers a
starting salary of $38 500, with a 5% increase every year after that. a)After how ma...
prob and stats
A test is conducted in 22 cities to see if giving away free transit system maps will increase the number of bus riders. In a regression analysis, the dependent variable Y is the increase in bus
riders (in thousands of persons) from the start of the test until its conclusion. T...
Prob and Stats
In a multiple regression with 5 predictors in a sample of 56 U.S. cities, what would be the critical value for an F test of overall significance at a= .05? A. 2.45 B. 2.37 C. 2.40 D. 2.56
1 + ((2*3)/4) =2.5
chemistry lab repost
i need help locating a lab for regents chemistry. i have the data sheet but misplaced the rest of the lab. i know it's online somewhere, but i'm not sure which lab it is.. (what it is called). The
data sheet has 3 charts on it, 1 titled 'solubility'. it include...
Nevermind. 1 m = 100 cm. I got confused. Thank you.
How? 1000 * 1000 = 1 million.
1 km = ? cm. I was thinking 1,000,000. Could someone double check that?
A tourist being chased by an angry bear is running in a straight line toward his car at a speed of 4.38 m/s. The car is a distance d away. The bear is 25.4 m behind the tourist and running at 5.88 m/
s. The tourist reaches the car safely. What is the maximum possible value for ...
A boat travels at 3.8 m/s and heads straight across a river 240 m wide. The river flows at 1.6 m/s. Find A) The boat's resultant velocity. B) How long it takes the boat to cross the river. C) How far
downstream the boat is when it reaches the other side. Please help, i hav...
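A quick sketch of the standard decomposition (the boat's heading is perpendicular to the current, so the two components act independently):

$$v = \sqrt{3.8^2 + 1.6^2} \approx 4.1\ \text{m/s}, \qquad t = \frac{240}{3.8} \approx 63\ \text{s}, \qquad d = 1.6 \times 63 \approx 101\ \text{m}.$$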
i keep trying to do these problems and i don't understand them. please help me!! 1. Gas stored in a tank at 273 K has a pressure of 388 kPa. The safe limit for the pressure is 825 kPa. At what
temperature will the gas reach this pressure? A. 850 K B. 925 K C. 580 K D. 273 ...
When deriving an individuals demand curve how do you find the optimal bundle if you are only given: price of 2 goods and income. i.e- price of wine: $35 price of beer: $12 Income= $419
why won't anybody help me
The region R, is bounded by the graphs of x = 5/3 y and the curve C given by x = (1+y^2)^(1/2), and the x-axis. a) Set up and evaluate an integral expression with respect to y that gives the area of
R. b) Curve C is part of the curve x^2 - y^2 = 1, Show that x^2 - y^2 = 1 can ...
So sales must increase at a rate of 300 books per week. Thank U So much! im gonna work on the second half of the problem now.
ok i redid it so q'=300?
Ok $3000 is the rate at that the revenue is rising. Ok for the second answer i plugged it in and i got 30=q'
ok i got p=20 p'=-1 q=1000 q'=200 and when i plugged them in i got r'=3000 is that right?
i am a different person and i still do not understand it i get R'=3000-400X but i dont understand wat to do with the 5000 | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Please+Help!","timestamp":"2014-04-17T22:26:52Z","content_type":null,"content_length":"24976","record_id":"<urn:uuid:1dfae786-38d0-4ac7-b3ab-3a8e6090ffbf>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00140-ip-10-147-4-33.ec2.internal.warc.gz"} |
Loglinear Models and Goodness-of-Fit Statistics for Train Waybill Data
Herbert Lee*
Duke University
Kert Viele
University of Kentucky
Counts of carloads of train shipments are effectively described with loglinear models. This paper presents models of counts by origin, destination, and commodity type. Such models can highlight
structures in the data and give useful predictions. In particular, there are definite interactions between origin and destination and between origin and commodity, and these models can capture these
relationships. Model selection depends on the choice of goodness-of-fit statistic; this paper addresses several issues relating to this choice.
Roughly 1.7 billion tons of cargo is moved by train every year within the United States. In this paper, we explore a statistical method for modeling data from train waybills. In particular, we focus
on the counts of carloads of cargo by commodity type and by origin and destination. This information can be arranged into a large three-dimensional table and is thus suitable for analysis via
loglinear models. In addition to describing the data, such models allow us to compare flows of freight between different areas, search the data for unusual flows, and make predictions of future
flows. Choosing a good model requires the selection of a goodness-of-fit statistic, and we discuss issues involved in this process. We also note several challenges that this data set presents,
including a lack of symmetry and a large number of zero counts.
The data we analyze are from the Carload Waybill Sample issued by the Interstate Commerce Commission for the years 1988 through 1992 (ICC 1992). The data are a stratified sample from all waybills for
railroads with over 4,500 cars per year of traffic or 5 percent or more of a state's traffic. There are over 1.9 million total records, each of which has 62 fields of information. Here we focus on
three fields: the origin of the shipment, the destination, and the type of commodity. Both the origin and destination are classified into 1 of 181 regions (for the continental United States) as
defined by the Bureau of Economic Analysis (BEA) although some are missing or unknown. The commodities are classified by Standard Transportation Commodity Codes (STCC), as per the Association of
American Railroads. Using the two-digit aggregate codes gives us 37 categories of commodities in this data set (for example, farm products, coal, printed matter, etc.). Each record in the file is a
sample shipment, which may consist of multiple carloads of freight. The sample is stratified, and strata were sampled with different frequencies. Thus, to get an estimate of the total count of
carloads of commodity with a particular origin and destination, we first multiply the number of carloads in a record by the inverse sampling frequency and then sum these products over all such
commodity shipments from the same origin to the same destination. For example, a record of 7 carloads in a stratum that was sampled with frequency 1 in 40 would get a weighted product of 280
carloads. These sums are entered into a large three-dimensional table, which is then ready for analysis.
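The expansion-weighting step described here is simple to express in code. The sketch below is my own illustration, assuming a pandas frame with hypothetical column names, not the authors' pipeline.

```python
import pandas as pd

def expand_to_table(waybills: pd.DataFrame) -> pd.Series:
    """Estimate total carloads per (origin, destination, commodity) cell by
    weighting each sampled record by its inverse sampling frequency.
    E.g. a 7-carload record sampled at rate 1/40 contributes 280 carloads."""
    w = waybills.assign(est=waybills["carloads"] / waybills["sampling_rate"])
    return w.groupby(["origin_bea", "dest_bea", "stcc2"])["est"].sum()
```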
As an example of the heterogeneity in the data, we spotlight Chicago, Illinois, and Huntington, West Virginia. Chicago is both the origin of the most traffic, as well as the most frequent
destination. Over eight million carloads originate from the Chicago region, and these shipments are spread over many different categories of commodities and are well distributed across the country.
In contrast, Huntington is in the top 5 regions by origin of total freight (over 3.5 million carloads), but this freight is nearly all coal. It goes to a smaller number of destinations, and much less
freight is sent to Huntington in return. In modeling this data set, we need a model flexible enough to work for both general-freight cities like Chicago and for commodity-specific cities such as
Unlike many tables, there is no symmetry in the data since commodities (such as coal) are generally shipped along particular routes, with cities either being origins or destinations but not both.
Another potential problem is the large number of zero counts. For example, few things besides coal originate from the Huntington area. However, we do note that these zeroes are not structural zeroes.
While many of the zeroes are easily predictable, there is no inherent reason any entry is zero. For example, much freight now moves via intermodal transport, meaning that it could go by truck partway
and then be transferred to a train at an intermediate location. Thus, the intermediate location would show as the origin with respect to the train shipment even though it is not the true origin of
the commodity.
Loglinear Models
Data consisting of counts, such as the waybills, are naturally modeled by the Poisson distribution, which takes values on the nonnegative integers. Instead of a standard regression model with an
assumption of Gaussian error, we use a Poisson regression model. Such models are often called loglinear because they are a linear model for the mean after logarithms are taken. Here we model the mean
of the distribution of counts from origin i to destination j of commodity k by m[ijk]. The full loglinear model in this context is

m[ijk] = a[i] b[j] c[k] d[ij] e[ik] f[jk] g[ijk]     (1)

where a[i] is a main effect for origin i (and b[j] and c[k] are analogous), d[ij] is an interaction effect for when origin i and destination j have cargo flows not proportional to the product of the
main effects a[i] and b[j] (e and f are analogous), and g[ijk] is a three-way interaction between origin i, destination j, and commodity k. The actual counts n[ijk] of commodity k from i to j thus
follow a Poisson distribution with mean m[ijk]:

n[ijk] ~ Poisson(m[ijk])
In practice, not all interaction terms may be necessary, and some may be dropped from the model. Also note that the model in (1) is overspecified (there are more free parameters than degrees of
freedom), so some sort of restriction is needed. For example, b[1] = c[1] = d[i1] = d[1j] = e[i1] = e[1k] = f[j1] = f[1k] = g[i11] = g[1j1] = g[11k] = 1 for all i, j, k. While loglinear
models with only a few interaction terms can be fit directly, most complex models require iterative solutions, the most popular method being iterative proportional fitting (Deming and Stephan 1940).
For more background on loglinear models, the reader is referred to one of the many good references on the topic (Agresti 1990; McCullagh and Nelder 1989; Bishop et al. 1975).
Loglinear models are part of the same family of models as gravity models (such as Sen and Smith 1995). Gravity models also contain a term relating the distance between the origin and destination to
the rate of flow and so would have a term depending on this distance in equation (1). We have found that train cargo flow is not related to distance, and thus the additional term in the gravity model
is unhelpful for our data. In contrast to focusing on modeling the effect of distance, we focus on the complex interaction effects of the covariates.
In this paper, we take the Bayesian approach. The gamma distribution serves as a conjugate prior for all parameters, and the posterior can be easily estimated via Markov chain Monte Carlo. With the
full posterior, one can easily get estimates of uncertainty, in addition to simple point estimate. Either an informative prior or a noninformative (improper) prior can be used. Using a noninformative
prior leads to posterior mode estimates equal to the maximum likelihood estimates. We use an essentially noninformative prior. More details on Bayesian loglinear models can be found in Gelman et al.
(1995), and West (1994) discusses Bayesian loglinear models in the context of gravity models.
Assessing Goodness-of-Fit
To compare how well different models fit, we employed cross-validation (see Stone 1974). For this data set, annual counts seemed a natural unit of validation. Thus for each year s =1,...,5, we fit
each model under consideration using the other 4 years of data and used the fitted model to predict the counts for year s. These fitted counts were then used to compute a goodness-of-fit statistic q
[r,s] for model r for year s. To get the overall cross-validation score, the goodness-of-fit statistics are summed across all years, giving

Q[r] = q[r,1] + q[r,2] + ... + q[r,5]     (2)
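In code, the leave-one-year-out scheme looks like the following sketch. The function names are placeholders, not the authors' implementation, and scaling the fit of the four-year training total down to a one-year prediction is my own assumption.

```python
def cv_score(tables_by_year, fit_model, gof):
    """Sum a goodness-of-fit statistic over leave-one-year-out folds.

    tables_by_year: dict mapping year -> 3-D count table
    fit_model:      callable taking a summed training table, returning fitted means
    gof:            callable taking (observed, fitted), returning a scalar
    """
    total = 0.0
    for held_out, observed in tables_by_year.items():
        train = sum(t for y, t in tables_by_year.items() if y != held_out)
        fitted = fit_model(train) / (len(tables_by_year) - 1)  # per-year scale
        total += gof(observed, fitted)
    return total
```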
Mean square error is an appropriate goodness-of-fit statistic when the variance of observations is the same for all observations (not true for Poisson data) or when one is not interested in adjusting
for differing variances, such as when one is most interested in predicting the largest table entries correctly, that is, when nominal error is more important than relative error. This may be the case
for train data in that predicting 100 carloads when the truth was 200 (a nominal error of 100, relative error of 100%) is much less of a concern than predicting 100,000 carloads when the truth is
150,000 (nominal error of 50,000, relative error of 50%). Those 50,000 extra carloads could represent a much larger logistical problem than the 100 extra carloads, in which case mean square error
would be a useful summary. Equivalent to mean square error is its square root, root mean square error (RMSE), which has the advantage of being on the scale of the data and thus being more
Alternatively, one may be more interested in relative error. Statistical theory says that one should adjust for the variance in computing goodness-of-fit. The Pearson chi-squared statistic is

X^2 = Σ (n[ijk] - m[ijk])^2 / m[ijk]     (3)

where the sum is over all cells (i,j,k), n[ijk] is the actual count, and X^2 is asymptotically distributed as a chi-squared distribution (see, for example, Agresti 1990). The denominator in (3) is the estimated variance of the
prediction, and thus X^2 is a measure of relative error. However, for an application such as cargo, it does not make much sense to inflate the error when the prediction is smaller than one. For
example, if the model predicts 0.1 carloads in a year, and in truth 2 carloads were observed, the contribution to X^2 would be (2 - .1)^2/.1 = 36.1, larger than the nominal error. When routes have
hundreds of thousands of cases, a nominal error of 2 carloads is rather insignificant, and its contribution to the total error does not seem like it should be inflated. As a further complication,
when this goodness-of-fit statistic is used for predictions, the model might predict a count of zero when the actual count could be nonzero. In that case, X^2 is infinite, and it is impossible to
compare models. If a small number were added to each cell of the table, the comparison of models can depend on the size of the value added. To avoid these problems, we modify X^2 so that the
denominator is no smaller than one:
The Cressie-Read power-divergence family of goodness-of-fit statistics (Read and Cressie 1988), indexed by a single power parameter
F^2, employing the variance-stabilizing transformation for a Poisson distribution, represents a compromise between the mean square error and the Pearson chi-square statistic. Note that while all
members of the Cressie-Read family have the same asymptotic chi-square distribution, their distributions may be different for finite samples. In particular, when the data table is sparse (with many
zeroes, as with the waybill data), there can be problems with the chi-square approximation for all of the statistics (for example, Koehler 1986).
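For reference, the two main statistics used above are straightforward to compute; a short NumPy sketch of my own, with m taken to be the table of fitted means:

```python
import numpy as np

def rmse(n, m):
    """Root mean square error between observed counts n and fitted means m."""
    return np.sqrt(np.mean((n - m) ** 2))

def modified_pearson(n, m):
    """Pearson chi-square with the Poisson variance estimate floored at one,
    so a near-zero prediction cannot make the statistic blow up (or go
    infinite) during cross-validation."""
    return np.sum((n - m) ** 2 / np.maximum(m, 1.0))
```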
Data Analysis
The models under serious consideration were the full model (equation 1), the model without a three-way interaction (g of equation 1) but including all two-way interactions, and the three models with
no three-way interaction and only two two-way interactions (that is, no g and only two of d, e, and f in equation 1). Models with fewer terms were unable to capture the complexity of the data. Table
1 compares goodness-of-fit statistics for all models with at least two two-way interaction terms. We note that we cannot use the unmodified X^2 statistic because, during cross-validation, some
entries predicted to be zero are instead nonzero, leading to infinite values of X^2.
From the table, we see that the choice of best model does depend on the choice of measure of goodness-of-fit. The full model seems best for reducing absolute error since it has the lowest RMSE (and
does fairly consistently for each year of the cross-validation). If relative error is more important, the model using only two-way interactions for origin versus destination and for origin versus
commodity performs best. Also of note is the model with all two-way interactions. It is a reasonable compromise model having an RMSE close to that of the full model yet also having the second lowest
F^2 and X*^2. Thus, this model might be chosen for its robust performance with respect to multiple goodness-of-fit statistics.
Substantively, it is interesting that the other models with only two two-way interactions do not perform as well. It seems clear that any reasonable model must include both an interaction term for
origin versus destination and a term for origin versus commodity. An instructive example is that of Huntington. As mentioned earlier, Huntington primarily exports coal and only to a specific set of
destinations. Yet Huntington is one of the largest areas in terms of total number of carloads. Thus, any model must be able to account for both the fact that Huntington exports a very large amount of
coal but little else as well as the fact that it exports large amounts to a relatively small number of destinations, unlike general shipping hubs like Chicago. In contrast, there are no obvious
examples of cities that are destinations for large amounts of particular commodities out of balance with their imports of other commodities, so the removal of the interaction term for destination
versus commodity has much less impact on the fit of the model.
The train waybills data set is interesting both for its information on commodity flows as well as for its statistical challenges. Loglinear models provide an effective method for describing the
relationship between cargo volume and origin, destination, and commodity type. The size of the data set^1 is much larger than in a standard statistical problem. While this size is beyond the
capabilities of many standard statistical software packages, loglinear models can be programmed directly.
Model selection raised a number of statistical issues. In contrast to many data sets used with loglinear regression, the waybills are sorted by year, providing a natural breakdown for cross-validation. The choice of goodness-of-fit measures has been discussed. Dealing with the large number of zero counts during cross-validation appears to be a topic not fully addressed in the
statistical literature. The analyses of this paper should be seen as a starting point for further work, both methodological and relating to the interpretations of the indicated models.
The authors would like to thank three anonymous referees and the Editor-in-Chief for their valuable suggestions. This work was partially supported by National Science Foundation grants DMS-9971954
and DMS-9873275.
Agresti, A. 1990. Categorical Data Analysis. New York: Wiley.
Bishop, Y.V.V., S.E. Fienberg, and P.W. Holland. 1975. Discrete Multivariate Analysis. Cambridge, MA: MIT Press.
Deming, W.E. and F.F. Stephan. 1940. On a Least Squares Adjustment of a Sampled Frequency Table When the Expected Marginal Totals are Known. Annals of Mathematical Statistics 11:427-44.
Fienberg, S.E. 1979. The Use of Chi-Squared Statistics for Categorical Data Problems. Journal of the Royal Statistical Society B 41:54-64.
Gelman, A., J.B. Carlin, H.S. Stern, and D.B. Rubin. 1995. Bayesian Data Analysis. London: Chapman & Hall.
Interstate Commerce Commission (ICC). 1992. Carload Waybill Statistics: Territorial Distribution, Traffic and Revenue by Commodity Class. Published on CD-ROM by the Bureau of Transportation
Statistics, Washington, DC.
Koehler, K.J. 1986. Goodness-of-Fit Tests for Log-Linear Models in Sparse Contingency Tables. Journal of the American Statistical Association 81:483-93.
McCullagh, P. and J.A. Nelder. 1989. Generalized Linear Models. London: Chapman & Hall.
Read, T.R.C. and N.A.C. Cressie. 1988. Goodness-of-Fit Statistics for Discrete Multivariate Data. New York: Springer-Verlag.
Sen, A. and T.E. Smith. 1995. Gravity Models of Spatial Interaction Behavior. Berlin: Springer-Verlag.
Stone, M. 1974. Cross-Validatory Choice and Assessment of Statistical Predictions. Journal of the Royal Statistical Society B 36:111-47.
West, M. 1994. Statistical Inference for Gravity Models in Transportation Flow Forecasting. Technical Report 94-40, Duke University, Institute of Statistics and Decision Sciences, Durham, NC.
Address for Correspondence and End Notes
Herbert Lee, ISDS, Duke University, Box 90251, Durham, NC 27708. Email: herbie@stat.duke.edu.
^1 Approximately 2 million records in the data file, resulting in over 119 million carloads distributed in a three-way table containing over 1.2 million cells. | {"url":"http://www.rita.dot.gov/bts/sites/rita.dot.gov.bts/files/publications/journal_of_transportation_and_statistics/volume_04_number_01/paper_05/index.html","timestamp":"2014-04-19T07:34:58Z","content_type":null,"content_length":"57166","record_id":"<urn:uuid:5e9b2bc0-cde0-4514-a50a-1253334de9fa>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00235-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - My thought experiment on determining the moment of inertia of an irregular object
Hello everyone, (This might be a little long winded but I would very much appreciate some help on this!)
Today while in psychology class, I was thinking about a much more important subject, namely a project I want to start working on. Basically what I want to do is build a motor from the ground up. Like
a real motor that would be in a machine. I want to start from complete theory and them move to actually building it once. (I am a senior undergraduate in Physics and I want to get into theoretical
physics in my future.)
So my first thought would be, how would I determine how much current I need to run through each of the windings to get the motor to move? I then realized that I would need to know the torque required
to move the armature, and this would be the LEAST amount of torque I would need. (I would obviously want more torque than this so the motor can actually drive things. And I would actually need a
little more than this to get it moving due to friction effects.) Here is something I came up with to measure this torque.
Have an external force spin the armature to a constant frequency ω. When it reaches this point, cut the power to the armature (The force would be coming from some external motor) and see how long it
takes it to stop spinning. The torque will be approximately equal to..
So I would need to determine the rotational inertia of the armature first. It is spinning about one of its principle axis so it will not be a tensor. To determine the rotational inertia, the idea I
came up with would be very similar to determining the torque. I would again drive the armature to spin at a constant frequency ω and then cut the power. But right when I cut the power, have a
"negligible" string tied to one end of the armature. This string is attached to a block of known weight and will consequently do work on the block by moving it linearly. From the work-kinetic energy
theorem we can find the kinetic energy.
So here is the part I need help on. So for a rotating body the kinetic energy is..

K = (1/2) I ω²

But ω would not be constant since the object is negatively accelerating (slowing down). So I would have..

K = (1/2) I (dθ/dt)²
I have taken so many advanced physics and mathematics courses, I find it funny that I do not know how to integrate this. I have never worked with a squared differential before. How would I go about
doing this?
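Not an authoritative answer, just how I'd untangle the squared differential: K = ½Iω² is an instantaneous (state) quantity, so nothing over dt² needs integrating; you evaluate ω at the moment the power is cut. Assuming essentially all of the rotational kinetic energy becomes the work W measured on the block (an idealization that ignores friction in the string and bearings):

$$W \approx K = \tfrac{1}{2} I \omega^2 \quad\Longrightarrow\quad I \approx \frac{2W}{\omega^2}.$$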
Also, if anyone has any other ways about measuring the moment of inertia of something, or any comments about my way, feel free to let me know! | {"url":"http://www.physicsforums.com/showpost.php?p=4176639&postcount=1","timestamp":"2014-04-20T00:59:33Z","content_type":null,"content_length":"11428","record_id":"<urn:uuid:49f2982c-a0b8-4210-acb2-26b3689e7b95>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00379-ip-10-147-4-33.ec2.internal.warc.gz"} |
How many squares indeed
This is neither video game nor film related, but since it's my site, I can do what I like with it.
You see, there’s this image floating around the social inter-tubes of a grid of different size squares and the question all the posters pose is “How many squares do you see?”. Some only count 24,
others 32 or 36, but the correct answer is 40. And each time I answer 40, I get the same response: “I just don’t see it”.
So, either because I feel like doing a public service, or because I just want to see this dumb debate put to rest (or maybe because I figure this might be a good way to drive traffic to our modest
little site), please join with me in counting the flashing red squares:
you guys left out the smaller squares adding up to an additional square making it a total of 42 squares, how can you publish this as if it is fact and you are completely wrong lol
• Are you talking about squares 17 and 18?
□ It was counted as 17 and 18, I guess u missed it.
• Hi Michael, I’m pretty certain the two you describe are highlighted in the animation above as Jeremy pointed out. If not, grab a screenshot and color in the squares that I missed and I’ll be
happy to post an update and credit your find.
□ Christopher,
I found the forty after a bit of mind crunching/relaxing/and rethinking. The 3×3′s escaped me for awhile. Thanks for posting the animation. Very cool. (((And you were a whole lot nicer to
Michael than I would have been.)))
□ Thanks so much for this animation Christopher. I was trying to explain the answer of 40 to my friends. I posted some other pretty good answers on my wall, but this one anybody can follow &
understand. Some insist that there are 44, but I don’t see it, & some self-proclaimed geniuses say that there are only 16 true squares because you can’t count the ones broken up by lines. I
am not a genius or math expert by-far, but I think that square can be made up of smaller squares, so when it says “how many squares”, it means all, not just those with no lines.
☆ i also see 44
☆ You are right because the groups of squares people are saying can’t be counted because of intersecting lines is actually wrong because if you look at the perimeter of those groups, they
are solid lines and haven’t been counted yet.
□ In fact there are only 16 true squares. All the other squares shown actually have lines going through them therefore making them not perfect squares. But that’s my opinion, probably thinking
too much inside the box instead of outside.
☆ Christal – I agree but don’t classify myself as a genius. The way I see the graphic once you count past 16 you are counting squares that have already been counted .. that is if you are
willing to accept the premise that a “square” can have intersecting lines.
Between any two points there are an infinite number of intersecting lines at infinite different angles.
☆ It doesn’t matter if there is a line going through it, it is still a square.
Bizarre definitions of squares going on here. People clearly didn’t pay any attention in geometry. The fact that there are intersecting lines on a plane does not mean that a plane
figure with four equal straight sides and four right angles isn’t still a square. It’s amazing to me that people are clearly and graphically shown the answer to a fairly simple puzzle
and they still want to make bizarre arguments.
□ cool animation
but it does not highlight the 40th square being the entire outside square.
☆ Frank. Sure it does. It’s the very last one highlighted.
• what are you talking about? which “smaller squares”? put a graphics up somewhere, smartie
• No, they counted those.
• Thank you! I thought I got ‘em all but missed 4
• If you are referring to the two sets of four squares that make two squares, I counted those when I got 40. If you are getting more than forty, you are probably counting some squares more than
once or including rectangles.
• do you always count things twice?
• I go 40. I’d like to know what two you are talking about :)
• Idon’t see 41 & 42? Explain thoroughly please =)
• why isn’t anyone counting the square that make up the whole puzzle? that would make 41.
□ ….they did?
□ Brenda, they did. It was number 40. Good lord, now people can’t even watch red squares popping up on their screen. I weep for American education.
☆ Clearly, there are squares: how do you use a simple puzzle to expose the mind to an ever expanding universe (or contracting). I think this puzzle allows the mind to focus on, not the
right answer, the question. 41
• Didnt you watch the answer the 2 small squares were counted after 16. Way to spout off about yourself being so wrong. Think before you speak next time!
• What the hell are you talking about there are only 40 squares.
• They missed nothing out, I been going over this puzzle for months, I thought it was 36, until a maths teacher showed me and gave me the answer in another link, the correct answer is 40 squares
• they counted all the squares. you must be drunk…again
• What are you looking at, the answer is even up there. This proves that people will argue even in the face of pure facts. it is 40 genius.
• Actually he did, you retard. You counted those squares twice.
I’m not sure how you’re getting only 40. I tilt my laptop screen back and see lighter squares inside of the black bold squares. (seems like a rough draft)The total count I have is 50. When I tip my
screen back, the first square does not have an inner square. The rest of the squares do have the second inner square which adds up to 30. (15 squares x 2 inner squares each=30)
Now, the 2 center squares each have 4 squares. Those 4 squares have the lighter squares inside of them. And then counting those 2 squares alone.
That total is 18.
Then we have the square at the lefthand top that does not have its’ inner square so that is counted alone. The outer square which is holding them all in.
I could be reading too much into it. However, the game didn’t specify what counted and what didn’t :) A square is a square is a square!
• Don’t you think that “tilting your screen back” is by definition “reading too much into it”, or adding a variable that may not be consistent across different types of computer screens? Thanks for
the graphical “40″ explanation BTW!
• Well, if you’re looking to read too much into it and get technical, this image is made of of hundreds of thousands of square pixels that make up hundreds of thousands of squares when combined,
but this puzzle comes from a good old fashioned geometry puzzle, probably drawn with a good old fashioned pencil, on good old fashioned paper.
So, assuming this was a simple drawing, you would be able to trace over 40 squares using the lines laid out in the illustration.
□ Right On!
□ No Like button, so I’ll just say, LIKE.
□ Pixels are not square.
□ The third column is one pixel too wide therefore not part of any square–messes up the whole count. (Just kidding.)
Actually, thanks for posting this it helped settle our little debate. Hopefully it does bring a lot of traffic to your site.
□ Terrific response, Christopher.
Sorry Tracy. You’re an idiot.
• If I cross my eyes I see double so there are actually 80. Ha!
□ LOL! Good one!
• the way you are counting is going too far. what about the border, or the web page, or each pixel, or the screen? i guess with current resolution of my screen, there are about 1366×780 squares.
no. there are 40 squares. 8 – 1/4 size, 18 – 1×1, 9 – 2×2, 4 – 3×3, 1 – 4×4. therefore, 18+9+8+4+1= 40. easy
• You’re creating an optical illusion when you tilt your screen. Just think of this puzzle as it was when originally created on paper years ago.
• I don’t think the “rough draft” squares are meant to be counted, since they don’t show up on everyone’s computer.
• The lighter squares are ghost images from a poor scan of the original. Common with lower quality scanners.
• The lighter squares you are seeing (and bare with me on this because it’s going to get very technical) are in fact artifacts resultings from the jpeg image compression (lossy algorithm). This
occurs because jpeg algorithms tend to smooth areas where there’s high contrast transitions, thus creating the gray areas between the black lines and the white background. This helps achieving
greater compression ratios but the tradeoff is loss of image quality/fidelity. That’s why jpeg is usually not a very good candidate for images with lots of text and charts (there’s lots of high
contrast colors between text and backgroud; large one colored areas are not well handled). One can use lossless image formats such as png (or even jpeg but the lossless version which is not
mainstreamed yet, probably due to some legal aspects related to the usage of such algorithms). Lastly, I counted 40 squares. :)
□ You make several good points and really aren’t far off, except for one thing…. The image is an animated GIF. :)
□ You know, I would have thought your comment was intelligent, but you said “bare” with me. Really? Do you want me to get naked with you? Is your comment just a big complicated come-on?
Make sure you know what words to use if you’re trying to sound smart or you will fail.
• you must be drunk too…again
• You seem a bit “tipsy”.
• Yeah, Tracey is right, but there is even more. I counted the squares across the top of my screen I got 1366 squares, and going down there is 766. My screen even says there is that many “pixels”,
whatever that means. I have lost count at 1,049,088. Genius Tracey, the answer is 40.
• Tracy, what kind of hallucinogenic drugs are you using? 42 squares. You think inside the box outside of the square. Then you can see all the cubes. Since they all have lines in them then, the
last 2 have lines inside them.
looks like there might be some artifacting in that image, but all you should be counting is the black outlined boxes, not any kind of anti-aliasing or negative image when you flip your screen to
different angles. 40 makes sense to me.
• Kimmy,
Right! 40 makes sense, and the math. Good eye!
i see a total of 43…the extras i see are at #17, #18 and #31
I call bullshit… There is, arguably, 80, not 40. The reasoning for my argument is set theory; Yes, there is 8 .5x.5 squares, yes there is 18 1×1 squares, yes there are 9 2×2 squares, and 4 3×3
squares, and 1 4×4 square, however there is also negative space which is in the shape of a square as well, this would instantly double the count. Why exclude one from being counted, and allow the
• I looked at your video, it seems you did highlight the negative space, well alright.. instead imagine the negative space and the line segments as separate entities, to better clarify.
□ NO. It’s not asking to count boxes. It is asking to count squares. Squares are defined by the LINES not the negative space or Ewoks that walk on them or any other imaginary bull shit you want
to stroke yourself over. There are 40 squares. This is ridiculous now.
☆ You, sir, are my hero.
• Malarkey!!!
What about the square created by using your #28 and deleting #17 from it? That adds two more…likewise, another unique square type can be formed combining your #1, #2, #3, #4, #5, #8, #9, #12, #13, #
14, #15, #15…additional variations of this total 5 additional squares.
Grand Total 47 that I can find.
• Whoops…small typo…should be #13, #14, #15, #16…
□ umm…that’s not a square, randy. that’s a toroid.a square-boundaried toroid to be sure, but it asked how many squares, not how many squares and toroids.
☆ Point conceded! I am back to 40…and seriously considering the point that squares, technically, should not have lines through them…leaving a count at 16. Fun puzzle!
If you were to only count squares without lines through them, there would only be 8 squares, not 16. Just an observation…
☆ I stand corrected! 40 it is! Although, someone’s point about the squares with lines going through them should technically count…an interesting point for sure. Fun puzzle!
Sheeez…”SHOULDN’T count”…not “should count”.
Wish I could reply to my own post…and I’m back to 16 “real” squares…forgot the teeny ones in the middle.
But I still think 40 is the correct answer.
☆ Wouldn’t it be a square-boundaried annulus?
I agree with Dennis. I see 43, but i’ve been told that I’m wrong and that the only squares that should be counted are solid squares.
Dennis, 16,17, and 31 are already in the count. Why would you count them twice? Making the total still 40.
• the outside of those square is another square
□ You wouldn’t count the outside of those ones, because they aren’t squares. You Only find SQUARES.
Trevor, I don’t think the spirit of the game is to include each square twice based on the thickness of the pencil.
You can only really count the lines, as the “negative space” is divided up by the lines. If you wanted to count negative space squares as well there are only 16 proper ones, making 56.
You missed 5. There are the four 2X2 squares in the corners and the one in the centre. So I got 45.
• Oops I recounted and concede. There are 40 :)
Am I the ONLY one that counts 16 squares? A REAL square should have NO lines in the middle of it. Why am I the ONLY one that counts this as a REAL square?
• No, that would be 16 (though I thoroughly disagree with you on premise).
There are the 4 squares along the left side and 4 along the right to give us 8.
Then there are the 8 mini-squares.
That makes a total of 16
□ The definition of a square is ‘A shape having four equal length sides’. It says nothing about what is inside that shape, it only defines the sides. Therefore a square divided by any number of
lines is still a square.
The answer is 40 and YOU are and Idiot.
☆ Why am I an idiot? I was simply following his logic (he originally stated 12, I believe). I made that clear by saying, “I thoroughly disagree with you on premise.”
☆ Now why would you call this person an idiot? Uncalled for. We are all just having fun. Apologize, my goodness.. Show some class and sportsmanship in this mind-bender game…
• No, you’re not the only one in the entire world to come up with that answer. Read some of the comments, & you will see that. But I don’t agree that a square with smaller squares inside of it is
no longer a square. That would mean that a checker board is not a square because it has smaller squares inside of it. What would you call the shape of a checker board? I could not find anything
supporting your claim. Please post a link proving that a square is not a square if it has lines through it. And I think that your cocky, superior attitude sucks! Like you’re a genius & the rest
of us are slumped over dragging our hairy fingers across the ground.
□ Why so serious? lol
If a line is inside a square, then that square is broken up. Sure, it’s still technically a square, but the INSIDE is NO LONGER a square.
But remember, there seem to be no rules for this game, so actually no one is right. Which is actually quite stupid, because brain teasers should have a logical answer. But ones like this one open up discussion, which is fun. I was having FUN with my comments, but people like you take them too seriously. I was serious about my answer of 16, but NOT about the rest. Just saying.
Oh, and the answer is still 16. lol
☆ Technically, if you are arguing that a square has no lines through it, then it would be 17 squares because of the outer box. All the lines hit it perpendicularly and don’t run through it.
☆ The INSIDE was NEVER a square. The SQUARE IS DEFINED BY THE LINES.
This is really quite a nonsense argument.
□ … a grid?
□ http://www.math.cornell.edu/~hatcher/Top/TopNotes.pdf
Basic topology.
Kilahill is correct provided that we are to assume that the lines in the image all lie in the same plane (which I feel is the intent) and are not a projection of a third dimension, in which
case 40 is a correct answer.
• A square can have something inside of it, specifically a line in this matter, and still be a square. The definition doesn’t say anything about the inside of a square. If you would count the
squares as you would in the diagram, then you will get 40, not 16.
• Kilahill
It’s because your definition of a “real” square is flawed. There’s nothing wrong with the lines going through. A square is defined only by the lines that create it. So back to 40.
It is 16. Period. End of discussion. If you disagree, you’re an idiot.
This is meant to make you think there HAS to be a high number of squares. You’re all over thinking it! That’s the whole point to this. It is 16. I am literally the ONLY one EVER to guess 16. Am I a
genius? I think so, lol.
• Again, cocky, arrogant, offensive attitude, thinking that everyone has to agree with you, or they’re an idiot. You sound like my mother-in-law.
• http://www.teachingideas.co.uk/maths/chess.htm http://www.basic-mathematics.com/checkerboard-puzzle.html These math websites, as well as others I’ve seen, show that in counting the squares on a
checkerboard, you count all squares, yes, even those with lines through them, which would equal 204 squares total.
• The definition of a square is ‘A shape having four equal length sides’. It says nothing about what is inside that shape, it only defines the sides. Therefore a square divided by any number of
lines is still a square.
The answer is 40 and YOU are and Idiot.
□ Oh people, oh people. So angry over such a fun brain teaser! lol
Sure, the OUTSIDE shape is still a square, but the INSIDE is broken up, making it no longer a square. It’s a square with many shapes on the INSIDE. Rendering it NOT A SQUARE ANYMORE.
I was serious about my answer of 16, but NOT serious about my attitude. I am having fun with you people. Relax a little.
☆ Did you edit your answer? I could have sworn you said 12 (thus my comment).
☆ Geez people kilahill was being sarcastic, not once was he demeaning! Oh and to the person that keeps calling him ‘and idiot’ it’s ‘an idiot’ I’ll refrain from calling you a dope.
□ I agree with Jon..and anyone else who knows the definition of a square being a shape having four equal length sides. PERIOD. The answer is 40.
□ If a square “has” a side, may the side belong to another? The question of side ownership also relates to separability. If you print out the diagram on paper, you cannot cut out 40 squares.
So, many of the squares are imaginary. Is the task to count real squares or imaginary ones?
☆ Hi Matt. I’m not sure I quite understand the question. A square, in geometry is defined as having all sides equal, and its interior angles all right angles. So it’s about the lines. Yes,
a line can be used for one square and again for another. And it’s not about cutting anything out. It’s about finding 2 sets of 2 parallel lines that cross each other making 90 degree
angles and in such a way that the four bordering segments are equal.
There are no imaginary squares here unless we’re drinking lots and tipping our laptop and holding our head at an odd angle and squinting our eyes. (this last part wasn’t directed at you).
• I’m not meaning to offend you in any way.
But how do you not count the 4×4 and 2×2 squares that the diagram shows?
And you shouldn’t be calling people idiots.
• I got 16 as well. Good job.
□ Open your mind. It’s 40. Look around and see.
• It could never be 16. It would have to be 17. You have to count the entire grid. It has 4 equal sides. But the correct answer is really 40.
• No, you’re still wrong because your definition of a square is just….oh my god, re-take high school geometry. Please.
You’d think a site that starts out with a gif that counts the squares for idiots completely incapable of counting for themselves would end this. It’s astounding, really.
I see where they are getting more than 40 from…IF you copy and paste the image into something like Microsoft Word and blow it up, it has little squares at the corners of the squares in images 17, 18 and 31. In fact the one I blew up has even more little squares than that. Like Tracy said, though, you have to tip your screen back and such. But looking at ONLY the black lines and not the little gray ones that pop up at the corners of the squares, there are 40, and that is what I counted before finding this site! Hope this helps some of you!!! Have a great night!
@ Kilahill
If you are getting picky then there are only 8 squares. A square needs 4 lines; if you draw square 1, then to draw square 5 you are only drawing 3 lines, so is that really a square?
Square 1 and 9 can be squares, or 5 and 13, and 4 and 12, or 8 and 16, then 4 of the small ones.
If squares are allowed to be drawn using sides that are already in place, then this drawing can be made by drawing 29 squares that at the time of drawing do not have any other lines inside them, e.g. the 4×4 square, then square 27, then 1.
I just did a quick count of these 29, so there may be a way of drawing this with more than 29 squares that do not have any internal lines at the time of drawing.
• There is no reason to deny a square its shape by having other shapes or lines inside of it.
□ @ starknerd.
Thank you!
I got 40. I found this on Facebook and decided to try it out. It took me going into Paint and pasting the puzzle square 11 times to color in all the different squares to get my number. I felt kind of stupid when I scrolled up through the comments and saw that you had pasted the answer and how to figure it out. It was fun trying though, so I wasn’t too bummed. Actually I was kind of surprised how many people only got like 24 squares.
• At first, I only got like 19… I wasn’t counting the overlapping squares.
• Holly, don’t ever feel stupid about something like this – even if it takes scrolling up. It’s a mind thing, self-test in “fun”, as you said. It’s too bad more people don’t accept the fun
I guessed 16 because I was looking for only actual visible squares and not overlaying and reusing what I had already counted. It is your perception that causes different answers to come up. So
depending on the method you use and your own personal perception we are all right to some degree.
So I’d say 40 and 16 are both correct. Take a crayon, color the squares different colors as you count them and don’t reuse the colored squares and you can only come up with one answer….16. That is my
reasoning. The 40 is only possible if you reuse what you have already counted.
• No I disagree. You are not reusing them, they are different sizes. You are not reusing the same square at any point.
It is simply a parlor trick with more than one answer. I bet you could win money at a bar with this easily. Because no matter how they did it you could still tell them they are wrong. LOL!
• No. You’re wrong. There are 40 squares.
Once again kids, squares are defined by the lines that make them, not the color in the middle, or the space in the middle. Get any geometry book or look it up online. A definition of a square is
about the lines.
This isn’t a game of perception. It’s about counting 40 damn squares.
I got 40 right away then counted rectangles. 74 correct?
• Every square is a rectangle. Not every rectangle is a square. We’re only looking at squares. Have a nice day :)
□ OK, 40 squares, 74 additional rectangles that are not squares.
I count 41 squares. Your animation misses the square encompassing your #6, 7, 10, and 11.
Never mind; I’m an idiot. There are 40 squares after all.
OK…So you have the squares figured out….how many RECTANGLES are in the same group? No cheating!
104 rectangles so far
There are (5 choose 2)^2 + 2*(3 choose 2)^2 = 10^2 + 2*3^2 = 118 rectangles. The first term comes from choosing which of the big vertical lines are the sides of the rectangle, and which of the big
horizontal lines are the top and bottom. The other term comes from asking the same question in the smaller 2×2 grids that are offset from the big lines.
This is the same approach I used to get 40 for the original problem, except the 1^2 + 2^2 + 3^2 + … + n^2 = n(n+1)(2n+1)/6 squares in an n-by-n grid are replaced by (n+1 choose 2)^2 rectangles.
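If you would rather check that arithmetic with code than by hand, here is a quick Python sketch (it only verifies the counting argument above, under the same assumption that the two offset 2×2 grids share no usable lines with the big grid):

    from math import comb

    big = comb(5, 2) ** 2        # choose 2 of 5 vertical and 2 of 5 horizontal lines: 100
    small = 2 * comb(3, 2) ** 2  # the same choice inside each offset 2x2 grid: 18
    print(big + small)           # 118 rectangles (squares included)

    n = 4                        # the squares-vs-rectangles analogy for an n-by-n grid
    print(comb(n + 1, 2) ** 2)              # 100 rectangles
    print(n * (n + 1) * (2 * n + 1) // 6)   # 30 squares

Running it prints 118, 100 and 30, matching the numbers above.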
Depends on what you consider a square. A square to me is a shape with 4 points, and I counted 16 because of it.
Based on my definition, I am correct.
• Ramik. Was that sarcasm? If so, it was funny.
On the small chance you were serious, there is no “what you consider a square.” A square has a very specific definition.
But I’m going with that your post was pointed and funny.
This is really funny. I counted 3 times and got 36, then I saw 36-39 the fourth time I counted. Although I have never participated in an online discussion, I find them really amusing, because this really is a simple geometric puzzle and people are getting all in a rut about something so small. Really, there are 40 squares; this isn’t a debate. But so many people seem to be getting in a huff about it. Debate something that can be debated!
• Agreed. I stopped trying to explain to people and just decided to sit back and watch the shenanigans. I can’t help but laugh at people who have their own definitions of a square. Last I checked
there was only one definition, and a pretty solid one at that.
□ I agree. “A square is a shape with 4 sides of all the same length.”
☆ Four right angles and all sides are the same length, to be exact. :)
Show me a shape with 4 equal sides that’s not a square. No?
It’s superfluous to specify the 90 degree angle requirement, since “4 sides of all the same length” must also make 4 right angles.
The shape YOU’RE thinking of is a Rhombus. A shape can very easily have 4 equal sides without having 4 equal angles. That’s why the definition of a square stands at 4 equal sides and
4 equal angles. http://en.wikipedia.org/wiki/Rhombus
Oh, and to quote from your other comment regarding a simple spelling error:
“Make sure you know what words to use if you’re trying to sound smart or you will fail.”
You were close to right at least. A triangle with 3 equal sides automatically has 3 equal angles. A rhombus however does not.
Dang, you got me there Greg. I think it was too late and I had too much gin to drink. Completely forgot about parallelograms.
And one of my pet peeves is when a person uses a whole bunch of big words to try to sound smart, but then gets a simple word wrong. It just bugs me. Kind of like when people think
they know how maths work. :)
People always forget about parallelograms.
… and the Spanish Inquisition.
I have consistently found 40, but I am curious as to how anyone is finding 44. I don’t see how that works and would like to know their logic.
Hello, When counting white squares, yes, there are 40. However, think ‘outside the box’ and count both white solid squares AND black lined squares. There are two sets here, taking the total to 80.
• There are no white “solid” squares. The squares ARE THE LINES NOT THE SPACE IN BETWEEN.
40 squares.
If you look at a street and are asked to count buildings, you wouldn’t also count the dead air space between. Why do people keep doing this with squares???
Holy God, whoever made this gif has WAY too much time on their hands. Furthermore, I can’t believe this “puzzle” is still going around the internet. I think I remember something like this in 2ND GRADE MATH! There is no excuse for anyone not being able to count squares.
I saw this first on Facebook and for ages I counted 36, but then realised I’d missed the 3×3 squares. A square is a shape with 4 equal sides. A pizza is still a circle when it is cut into pieces. The Pentagon is still a pentagon even if the centre is cut with another pentagon. I think the question should be: without moving any lines, how many squares can you make? It’s 40; that’s a logical way to work it out. No hidden squares, use the lines. How many squares can you separate from the rest?
The shapes around the small squares in the interior have 6 sides so they are not squares. You have to ignore what you see and you have to ignore the definition of a square to count them. If you get
to play psychological games to arrive at the answer you want, anything goes.
• Where are there six sides???? The small squares in the interior do not have six sides. Put down the bong.
http://www.flickr.com/photos/disneywizard/7648693432/ I agree, I made this animation independently. It results in agreement – there are 40 squares.
No way are there 40! The video went back over 16 of them to get that number!!! My protest is lodged…though I may be wrong!
• Shoshana;
There are no duplicates. Please see MY animation to confirm. http://www.flickr.com/photos/disneywizard/7648693432/
• Shoshanna,
You are correct that you are wrong.
Hey guys
There are actually more. The trick is how many you can ‘see’. If you look at the 2 smaller squares, you will notice that around it is a square with a thick border (If you can visualize it). That
creates 2 more squares, more than 40. You can also find a few more that way.
The second catch is that if you look long enough, a visual image of a square will appear at all the intersections of 4 lines. This has been documented with the right google search.
That brings the total so far to 63.
I hope someone can find more, but my google research has only gone this far. Enjoy! :)
• I counted 63 by myself and wondered if I was the only one on the net to come up with that count. Thank you for your post^^
OOH OOH, someone just gave me a new insight. If you imagine a square that is only white (ignoring the black surrounding border) you can add ANOTHER 16 white squares. Cool!!
Take care
• First, there are 40.
Second, there are 40.
You’re playing games with the resolution of the picture–which is a fun exercise–but there are 40.
Third, there are no white squares. The lines are black. Since squares are defined by LINES, and since there are no white lines (that we can “see,” since that is the exercise), we’re again back
to 40.
□ I think there are a whole bunch of white “lines” inside the white space, so the answer must be 2,464 squares.
☆ Either that or someone has been “doing” a whole bunch of white lines. ;)
THANK YOU! I was getting so frustrated telling people!!
Okay, if you wanna be a technical f tard about it! There are 40. BUT if you count the inside of the border / the border / the outside of the border, then you’ve got 120. Just saying!
• Using this method, being a wise ass and all, with m-theory on dimensions, you’d have 2640 squares, but using the typical 10 dimensions you’d have 2400 squares….but this is thinking way outside
the box!….
□ I skipped mentioning a step, but no one needs to know it!
it’s 40. I drew it manually and color-coded each square and I found 4×4=1, 3×3=4, 2×2=9, 1×1=18, .5×.5=8, a total of 40. I still remember my grade school fun games..
There are definitely 27
I get 11 sets of 4 squares + the 1 square that outlines it all, which makes it 45 squares. No imagination, just a straight count.
The answer is either 40 (as you demonstrate) or 16, depending on how “square” is defined.
The conventional definition is something like ” A plane figure with four equal straight sides and four right angles.”
Only 1, 4, 5, 8, 9, 12, 13, 16, 19, 20, 21, 22, 23, 24, 25 and 26 meet that definition, since the others have lines inside and therefore more than four right angles in the figures.
But thanks for the great demonstration
• Only 1, 4, 5, 8, 9, 12, 13, 16, 19, 20, 21, 22, 23, 24, 25 and 26 meet that definition, since the others have lines inside and therefore more than four right angles in the figures.
No, this is not correct. The lines inside don’t have anything to do with the 4 lines and their angles. They don’t negate them creating a square. If you take a piece of aluminum and cut it in
such a way that all 4 sides are equal and are connected by 4 right angles and write “Eat at Joe’s” in the middle, it doesn’t suddenly mean that the sign isn’t square because there are scribbles
in the middle.
The lines inside the squares are forming other objects to themselves, but any instance of two sets of parallel lines intersecting at 90 degree angles and whose lines are equal is a square. No
matter what else is going on around it.
□ Your argument is only true if we are to assume that the image is a composition of objects projected onto a single plane for our viewing.
It does matter if there are lines “inside” the larger “squares”. The larger “squares” are not topologically equivalent to those which are smaller. What you propose would also lead to the
conclusion a figure 8 is equivalent to a circle, topologically. This makes no sense if the squares are indeed supposed to lie in the same plane.
The answer is 16 unless we assume some really strange projection space.
☆ But Bob, if you had a big circle, and there was a smaller circle inside it, does the outside one cease being a circle? No, it doesn’t. Despite the clear amount of logic, do you cease being an idiot? No, you don’t.
A circle inside of a circle is topologically different from lines intersecting inside a larger square. There is an issue with connectedness…
The question posed was “how many squares”, not “how many projections of a square” onto the image plane.
☆ @ Bob
In a flat plane, on a piece of A4 paper, how many square shapes can you draw on the image under discussion, using different coloured pencils and only tracing along an existing line?
That is the question, plain and simple. And, Bob, as others have shown you, if you do this (colour-coding as per @Tolits post 4×4, 3×3, 2×2 sized squares etc.), you will be able to trace
EXACTLY 40 SQUARES.
Why don’t you try it? It is elementary, key stage 3 Maths, and the answer is not subject to perception or interpretation, only knowledge.
Made up my own little graphic to illustrate my finding of 47 squares…
• To clarify (as some people may refute some of my choices in what constitutes a square):
- I have decided that hollow squares count. Why? Well, if you draw 4 lines connected at four corners, that is a square (regardless of whether it’s ‘filled in’ or not). Who’s to say how thick
these ‘lines’ can be?
- You kinda need to think ‘outside the box’ – pun intended – in order to count over 40. 40 seems to be a fairly obvious number of conventional squares to discover in the image.
□ Rather than thinking outside the box, get back in it and seal the lid please.
• You’re kidding right? Putting a hole in the middle of a square you’ve already counted doesn’t make it another square. The point of the exercise is to count how many squares can be drawn with the
lines provided. It has nothing to do with filling them or not filling them. What you’ve done with your image is link up rectangles.
“Who’s to say how thick these ‘lines’ can be?” That original image lays it out pretty clearly. The black lines are the rules that have been presented for this puzzle. You’re just making your own
up. That’s like saying the wood grain of a basketball court should act as some sort of boundary. The lines painted on the court are pretty clearly the lines the refs want you to pay attention to.
• Yeeaaahhh, ok….I get where you’re going. Make the “lines” thicker and you can grasp seven more…sorta…but naw. I’m not buyin’ it. But, thanks for participating!
I realistically stand by 16.
“Everything should be made as simple as possible, but not simpler.”
~ Albert Einstein
If you allow horizontal & vertical wrap-around there are more, & if you say allowing Horizontal and vertical wrap-around implies diagonal wrap-around too (ie, mapping onto a sphere) then there’s more
again. In all, on top of the 40 already counted, you can add 12 [16-sqs] + 16 [9-sqs] + 4 [4-sqs] or additional 32 squares, making 72 total.
Of course, allowing spherical mapping thus, actually limits the number of possible plane-squares, by making the following impossible:
without wrap-around, you can add an infinity of squares, if you allow the 16-sq to also represent a 25-sq, the edges of which don’t completely show; then a 36-sq, 49-sq, etc etc. These could all be counted, since part of each IS shown in the diagram…
Plus, as someone else on FB said, add one for the WORD ‘square’ in the question. So, 73, plus.. And, a shot in the dark, maybe people familiar with 4-to-n-dimensional topology could add more by
mapping the 16/18-sq onto more exotic concepts like klein bottle etc..
Did you count the 3×3 squares as well, which would make the total higher than 40? If not, why?
• Yes, if you’ll take another look at the animation above, you’ll see the four 3×3 squares account for numbers 36, 37, 38 and 39.
• Did you even watch the gif at the top of the page??? Just watch it and stop asking stupid questions.
I see circles….time for zzzzzzzzzzzzz
YOU VILL SEE 40 SQUARES AND LIKE IT!!!
Either 16 or 40, depending on how you look at it…but I think that’s the point. I guess that technically there are a fair few irregular hexagons. Ooh what fun!!
When I first saw this picture on Facebook, I counted 40 squares. I don’t need any animation to tell me there are 40 four-sided figures in this picture!!
there are 44. you missed 4 in your animation.
If you have 4 equal length lines making a box with four 90 degree angles you have 1 square. You could say you have 1-squared squares.
With a 2X2 grid you would have 2-squared plus 1-squared squares which equals 5 squares, some with lines in them.
(From here on out I will write that as 2² + 1² = 5 squares.)
With a 3X3 grid you would have 3² + 2² + 1² = 14 squares.
With a 4X4 grid (like here) you would have 4² + 3² + 2² + 1² = 30 squares.
With a 5X5 grid you would have 5² + 4² + 3² + 2² +1² = 55 squares.
What we have here is a 4X4 (that’s 30 squares) and two 2X2 squares (that’s 5 each, so 10 squares) which means you have 30+5+5=40 total squares.
And now you can figure them out for other combinations.
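If anyone wants to play with other combinations, Jack’s decomposition fits in a few lines of Python (a minimal sketch; the function name is just for illustration):

    def squares_in_grid(n):
        # number of squares traceable in an n-by-n grid: n^2 + (n-1)^2 + ... + 1^2
        return sum(k * k for k in range(1, n + 1))

    # the puzzle: one 4x4 grid plus two independent 2x2 grids
    print(squares_in_grid(4) + 2 * squares_in_grid(2))  # 40

It prints 40, matching the 30 + 5 + 5 worked out above.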
I like Peter BJ’s suggestion. Back when I was young we played checkers. We called it 3-D checkers, but actually it was more esoteric than that. The two sides of the board were the same line. So for
example, you could move diagonally forward to the right when you were already on the right side of the board — IF the correct square on the left side of the board was empty!
The two ends of the board were another same line. So for example, if you arrived at the next-to-last row and jumped over the other’s piece he hadn’t moved yet, you didn’t land on the king row, you jumped over it (back into your own king row). We eventually decided that the rules said you had to land on the king row to become a king, so jumping over like that you didn’t become a king.
It’s two cylinders, going two directions at once, so you can’t create a board in real (3D) space. But I finally figured out that if I in my imagination copied the playing board as a 3X3 grid of
checkerboards, I could jump from the center board onto one of the imaginary boards to see where my position was — but on the real (the center) board. That helped.
We never could get it to work with chess (first player has check mate without moving).
We wrote it up and sent it in to Scientific American, but never heard from them, so we sent a copy to the Library of Congress for copyright purposes but never got anybody else interested in it, so it
didn’t go anywhere.
• hey Jack, nice to see *someone* isn’t being dogmatic etc. See my new post below too. I was thinking, about my last idea, of connecting in a kind-of 3d spiral .. say square labelled clockwise
fashion ABCD, then connect side BC with side DA, but moved so B actually connects one square down from the top of DA, kinda spirally.. But then I had a thought – say your game, you made some kind
of ‘Hyper-Transport/Wormhole’ rule that allowed you to (metaphorically) bend/stretch the board so the king-line of one side could connect up with ANYWHERE on the board (that’s ‘topology’ I guess;
google it if unfamiliar). Guess you’d have to think a lot about the rules involved..makes me think of those Star Wars (was it?) 3-D Chess games, sorta,.. Or if the game was virtualised on a screen rather than IRL, maybe AI could generate random wormholes (just from one square to one other random square) or major board warp (connecting a whole rank to another rank) – each event random & connection existing only for random short time, say one or two moves??.. though that would prolly require more like chess pieces to take most advantage than draughts.. and you’d prolly
have to think over it carefully so the game didn’t get TOO random.. anyway, jus’ an idea. Cheers !
If you do not count squares with lines, you can’t even count the original square as there are lines running through it.
• Correction: the lines do not run through the big square; they hit it perpendicularly.
□ Correction, every line intersects perpendicularly on this, as with all 40 squares.
I get 40 every time but have a hard time getting others to see it. This is FABULOUS!! Thanks for taking the time to create it.
But you’re all wrong. After all, how many squares CAN’T you see? The question doesn’t ask for a number. You can see them all.
If you ask “How many squares are there?” …. that requires a number.
I tell you what. Some of the posts on here are genuinely retarded. Some will pick fault in my comment as it’s not in US English etc., however, the puzzle is great. Squares with lines through them are still squares, as it’s the outer border which is counted. You all need to lighten up and say “yeh, this is good. It got me thinking.” (like a gimp, for most of you) and admit that you are all trying to be too clever and it really doesn’t suit any of you. The comments saying they appreciate the puzzle are much more valuable than some of yours.
40 is correct. Stop questioning it.
I think the answer is infinite, as no-one seems to be counting the “squares” counting the squares. On this page I see many squares who have been TAKEN IN by counting squares, thus meaning you are all “squares” who are now part of the puzzle .. hehehehe. Also, if the negative space inside a square and the positive space inside a square are squares in their own right, and you have all been taken into one of those spaces by taking part, then you must also be counted. That’s real math for you lol
• Wow Arron. You are a meta genius. :)
• Aaron. LMAO Good call!
I have to agree with you Ian. I counted 40. Did the math after I realized the number, and yes, it is 40. What this discussion has turned into is perception. Pretty much the two schools of Athens going on here in geometry and math. Which is absolutely appropriate, given the Greek reference. Aristotle: inductive reasoning; use the senses to establish patterns and create an absolute definition (simply put). Or Plato: there are Shapes and shadows that we, normal humans, are not aware of because our simple senses fail us.
I see 40 in traditional 2 dimensional math that are described, but also see a total of 57 if you count each intersection that has a full x/y cross (square black pixels at each intersection). If you
count partial intersections I count 81, but traditional math dictates 40 as each line is considered to be 0 space, only used as a reference. I am no math genius, but I see 40 as described.
There is actually well over 150 there.
I got to 172 so far. I’m sure a second look will show more.
There is an infinite number of squares that are not demarcated by visible black lines. Just kidding. 40 is correct if you go by what is surely the spirit of the puzzle. (OK, maybe 80 if you count
negative space, or 120 if you count both the outside and inside of the lines as squares in their own right. But again, the spirit of the puzzle probably doesn’t include those somewhat esoteric
squares). 40 is the commonsense answer.
I would like you to know that the squares middle-left and middle-right are not in fact squares, as they have 6 edges and not four, so the 40 squares is wrong
hey Christopher Kirkman, I found this picture on facebook and I have a question about it. the picture itself is square so could you consider the whole of the picture outside the lines as being square
number 41?
• I don’t believe that to be the intention of the puzzle. You can read back through dozens of comments here and on FB and see that people have as many explanations and answers as there are
comments. However, remembering back to when I first saw this in 6th grade geometry class, I believe solving it correctly meant interpreting the question as “how many squares (4 sides of equal
length) can you TRACE using the lines provided?”
There are many others like this using other shapes, and although the immediate answer isn’t usually correct, the overly complicated ones people seem to fabricate so only they can appear correct
are, at best, stretching things.
The truth is, the stated question is “how many squares do you see” so in essence you could scroll down for 3 pages worth of comments and count each of the squares surrounding the icons of every
commenter, every checkbox, every thumbnail image and every social feed icon to get hundreds, but that’s just a little silly, don’t you think?
Aside from the clever philosophical debates and attempts at humor – I really fear for the state of mathematics in the world. There are 40 squares. Even with this flawless animation (thank you!) there
is still attempted debate. Less than half of more than 200,000 posts to Facebook appear to have answered it correctly. There is only one mathematical definition of a square: a planar figure with 4 equal-length straight sides connected at 4 right angles. Arguments concerning what is inside the square are just absurd. If you have a square driveway and draw a line bisecting it to make two rectangles –
the driveway is still a square. One post defines a square as any 4 points! Still I’m encouraged that this inspires such passion from so many people. – Maybe math can make a comeback!
Another fun test – how many squares are on a Chess Board?
• You aren’t alone. It astounds me not only how much traffic I’ve gotten from this little one-off post, but how many people are refuting the simple, and what I feel is pretty clear, animation.
And yes, I do find it a little disconcerting that basic mathematics principles seem to be lost on so many people. Much in the same way correct spelling is a thing of the past, especially on Facebook.
But to answer your last question, a commenter above, “Jack”, typed out the formula for figuring out how many squares can be made from any normal grid. Ours up there isn’t normal, but a chess board
would be, so an 8×8 board you add:
8² + 7² + 6² + 5² + 4² + 3² + 2² + 1²
and get [drumroll]….. 204.
□ Humans are funny animals. I witnessed a whole week of debate on talk radio about why a corn cob looks as it does after having been eaten (the shape before and after, viewed from the end). These are the things that make one go hmmmmmm.
If you come on here arguing for anything other than 40 squares, it is obvious to every sane mind here that you are either-
1. Completely ignorant
2. A pretentious a** hole
3. Trying desperately to get people’s attention so they will reply to your comment
Please attempt to do the puzzle in the nature it was intended. It is obvious how this was meant to be counted, do it as such. We get it, you think you’re smart; you don’t have to prove it to
strangers on some random website.
ok…none… they are all rectangles, it is an optical illusion and if you don’t believe me, measure the sides!!!! really????? a million people and no one has measured the sides!!!!
• I hope that is a joke.
□ If you measure all the sides, they are all rectangles. I see two squares of 4 blocks high and 3 blocks across that are actually squares…
☆ When I open the image file on my computer it says the image is 2.67″ by 2.67″, meaning the length of the overall image is the same as the width of the overall image. Since all the lines
are straight and parallel to the line across from it, perpendicular to the lines adjacent to it, that is one big square by definition. Next, the vertical and horizontal lines are all
spaced evenly apart and placed perpendicular to the outer square’s lines, separating the big square into 16 smaller squares (and yes, I did measure the lengths of the distances between
all the crossing lines, they were all nearly exactly the same, likely off by some absurdly small inconsequential distance). The other 2 central squares (each of which can be separated
into 4 additional squares) have equal lengths and widths, as well as parallel opposite sides and 90 degree angles at the corners. I honestly have no idea what you are doing to measure
this, but you are clearly wrong. Terri, if you think you’re the first person in, as you say, millions of people to find out that these are not squares, you are out of your mind.
Unless you are measuring at some ridiculously small and insignificant scale just to be cute and differentiate yourself from the others, I recommend you stop measuring things with whatever
tool you are using (likely the width of your thumb or placing gaps between your fingers and moving them from line to line). Be realistic here.
Thank you @LostFaith.
Just when I thought this exercise couldn’t be any more stupid, Terri proved you can take stupid to a whole new level. Well done.
☆ In addition, to extinguish any lingering doubts (although I’m sure in your fantasy world you have to be right and I can’t convince you): clearing out the white background, leaving just the black lines, I duplicated the image, so the grid (with a transparent background) was superimposed on the original image. I then rotated the superimposed image 90 degrees (typing in a 90 degree rotation, not by hand) and the lines of the top rotated image were indistinguishable from the original image below (except the two central squares were now left and right, instead of top and bottom). If you were right, the longer lines would have stretched out beyond their underlying shorter counterparts. That did not happen; in fact, the lines didn’t even look thicker, meaning this image is really well designed and the squares are perfectly square. I understand this logic is hard to follow without a visualization, but try it yourself and see. I think even simple paint programs can rotate an image and let you superimpose them. Again, definitively, you are wrong; completely and utterly wrong. I would advise you to reconsider your stance, and I’m sure others will take the opportunity to let you know you are wrong as well.
• The shame about social media is that every idiot now has a voice, and people that should never be heard now are on a global scale. Thank you for proving my point.
• Terri.
You’re an idiot. That is all.
What I don’t understand is how the middle 2 columns of squares can be counted when they are actually not squares… Because parts of them have been chopped out by the middle 2 boxes…
I do not understand everyone arguing over this.
Define square.
Those are all rectangles.
Zero squares.
• Katy. Yes, they are rectangles. But they are a special kind of rectangle called squares. Please continue on to 6th grade before ever speaking or typing again.
@Christopher Kirkman – you haven’t commented on wrap-around. It seems a perfectly obvious thing to me since every time one types more than one line of text wrapping is used. If you (or others in
here) say that wrapping implies 3D, hence disallowed, that is incorrect: when mapping a 2D (plane) figure onto a 3D figure, the ‘surface’ is still plane in itself, even though it may appear to be 3D
from the 3D perspective. For example, the surface area of our planet, Earth, is measured in square miles/kilometres (plane units), even though we know the earth is, approximately, a sphere, a 3D
object. Or to put it the other way, if I draw a square ABCD (clockwise) on a piece of paper: what is its volume? = (B-A) x (D-C) sq units. If I then curl the paper into a tube with the side AD on
the side BC, and ask the same question, the answer remains the same, even if the tube of paper looks like a 3D shape. Of course you can’t physically wrap both directions, paper isn’t flexible enough,
so you use theoretical (or virtual) paper, Photoshop it, so to speak, and/but the answer to the question, what is the area of the square ABCD?, is the same, in *square* (ie, plane) units.
Of course, for purposes of a junior math type question, no argument with 40. But if it were a figure in say, a physics or astronomy or etc etc* paper then the expectations might be different, and so
might the results, and a simple answer of ’40′ might not be sufficient.??
just for interest: [SURFACE AREA : Volume] (ratio) is a useful thing to contemplate in some situations – (eg: http://en.wikipedia.org/wiki/Surface-area-to-volume_ratio)
@Stan K – care to comment on this (above) ?
re squares on a chessboard: I assume you mean chessboard as example of 8×8 square plane figure – in which case, drawing them all out for myself, I get:
[1x1]64 + [2x2]40 + [3x3]36 + [4x4]21 + [5x5]16 + [6x6]9 + [7x7]4 + [8x8]1 = 191.
but if wrap-around is allowed (horizontal, vertical, & diagonal),:
[1x1]64 + [2x2]50 + [3x3]64 + [4x4]52 + [5x5]56 + [6x6]49 + [7x7]28 + [8x8]29 = 392, or 201 extra over the 191.
I spent the last half-afternoon drawing all the figures – I looked, removed duplicates (hope I got them all), so this is *not* just off the top of my head.
Next part – if I did a fancy wrap, say a spiral one where the top right corner wraps to connect to one square down from the top left corner..or that PLUS bottom right connects to one square along from
top left.. and then varied the ‘pitch’ again after that for possibly more results… checking carefully not to include what’s already been counted… but I’ve spent long enough on this already, too long
even, so if you want to work on this, good luck to you. LOL. Cheers !
i see it now! it is 40. sorry for saying you were wrong!
I now see 40 but first saw 32. I missed the ones in the center the first time.
@Peter Bay If you allow wrap around – it does become more interesting. If you take a square ABCD with side length =2m then the area is 2×2=4m^2 (obviously). You can’t compute the Volume of a 2D
figure – When you wrap it into a sphere like shape (using virtual figures) you are correct the surface area is still 4. When you asked about the “Volume” I think you meant surface area. Units of
length are one dimensional (like miles), Area is 2 dimensional (Square miles), Volume is 3 dimensional (Cubic Miles). If you could do the mapping of the square piece of paper onto the surface of the
cube – the Surface area would be exactly 4. The cube would have a diameter of about 1.13m from S.A. = 4*pi*r^2 (integrating the Surface area yields the volume equation) V = 4/3 *pi * r^3 = 0.7524m^3
Giving a surface area to volume of 4/0.7524 = 3/r = ~5.317 for a Sphere with radius 0.564 (S.A. =4)
Regarding the basic chess board calc, you had two small errors. There are 49 2×2 squares and 25 4×4 squares – for a total of 204. For an N-sided board, the number of squares without wrap-around is the sum of n² from n = 1 to N, or 1 + 4 + 9 + 16 + 25 + 36 + 49 + 64 = 204 (for a chessboard). Note each term is a perfect “square” (to end on a pun).
I’ll think about the wrapping aspect of a chessboard and write back on the number of squares in this case when I have more time.
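In the meantime, the corrected per-size counts are easy to verify with a short Python loop (a sketch; N is just the board size, and (N − k + 1)² is the number of positions for a k×k square):

    N = 8  # chessboard
    for k in range(1, N + 1):
        print(k, (N - k + 1) ** 2)  # 64, 49, 36, 25, 16, 9, 4, 1
    print(sum((N - k + 1) ** 2 for k in range(1, N + 1)))  # 204

In particular it confirms the 49 squares of size 2×2, the 25 of size 4×4, and the total of 204.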
Oops – I said “cube” a few times in my last post- I meant “Sphere”
There are actually little tiny lighter-colored squares all over in the black lines, meant as a focus trick. I count 80.
• The lines are ghost lines from a poor scan
Looking straight forward, using the K.I.S.S. system, “I see” 16. No lines crossing to make squares: 8 large, 8 small; 4 shapes are not squares. The question asked was “how many squares do you see?” and that question leaves it up for debate. Had the question been worded differently, and rules applied (cannot make squares crossing other lines, etc.), certainly 16 would be correct. Remove a square from the grid each time you count it and squares run out at 16. There’s an even number of squares, so all the odd numbers are wrong; some are losing count and repeating. If you ask what’s the highest possible number of squares that can be found, certainly 40 is the best answer, as the animation points out. There are some faint lines that appear, but I believe those are ghost images from a poor scan. Glass half full/empty never seems to be solved; this one may never be solved. Applying rules (math?) makes it simpler, but then what’s the fun in that? ;)
Lost Faith…did it make you feel better to be ugly about it??? I MAY NOT BE AS CEREBRAL AS MOST ON THIS COMMENT STREAM, but I was not being mean…on my computer, with an external measuring device, the only squares are 4 high and 3 wide; the entire pic is a rectangle…if on your computer, or if you can print it and measure, you find 40 squares, fine, but I will stick with 2, as my image is a rectangle…
• I apologize for getting admittedly snappy and “ugly” in my comments. Regrettable decision. As to you seeing rectangles, I don’t know what source your image is from, but I would guess the image is
distorted, or your computer/printer distorts it. Opening the file on my computer, the file stays under the dimensions set by the source I found the picture through. According to my image there
are 40 squares. Again, sorry for being hateful; no real need for that kind of stuff.
• Terri, you’re not as cerebral as a rock. You’re an idiot.
I’m with you, Terri…It asks for squares…they are all rectangles! And initially I thought the only squares were the two shapes in the middle, but they are just off a bit too, which = rectangles. It’s an optical illusion, folks…
Based on an image size of 320 pixels x 320 pixels, the correct answer is 10,973,920. I don’t know why every guess is 5 orders of magnitude off… :/
• Pixels come in round, square, and rectangle according to mfr design. Does that make a difference?
□ Not especially as I’m referring to the digital image pixel, and not to whatever rendering device one might be using.
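For what it’s worth, the 10,973,920 figure checks out: counting every axis-aligned square of whole pixels in a 320×320 image is the same sum-of-squares formula at a much larger n (a quick sketch):

    n = 320
    print(sum(k * k for k in range(1, n + 1)))  # 10973920
    print(n * (n + 1) * (2 * n + 1) // 6)       # 10973920, closed form

Both lines print the same total, so the joke is at least arithmetically sound.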
144 squares: 40 square boxes, defined as boxes with 4 sides of equal length, plus 104 more “squares” as defined by 104 90-degree angles; combined, 144 “squares”…
My mind is killing me!!!! lol
My argument is based, first, on the definition of a square:
A plane figure with four equal straight sides and four right angles. Two important points in the definition: 4 equal sides and 4 right angles.
Using this definition eliminates many squares….from my calculation, 19, but there could be other squares that fit this definition out there…they seem to appear out of nowhere.
I see 42 squares. The square that has a middle square can be looked at as having an empty middle, with a thick edge forming an additional square.
I see 40. This picture finally finished my quest for an answer lol. How is it that someone made this game and never supplied an answer? I would also like to know why everyone still keeps arguing there is more than 40 while saying “how could you post this?” How could you post a comment and not thoroughly make sure your argument is a valid one?! lol good lawddd.
Ok, just wanted to write to thank you for taking the time to publish this animation, and to comment on how eerily similar our brains functioned in regards to the animation — with yours demonstrating
a tad more diligence, having included the counter which I opted to leave out (due to my GIF animator not having an easy way to cleanly add one).
What a fun game for the mind — and the kids :) I spent far too much time geometrically working through my formula only to discover what I thought were two additional squares; thus, arriving at 42. I,
however, did not use highlighted animation. Entirely possible my mind double counted. Any more puzzles available???
When you draw a square and then draw lines inside that square attached to the lines of the original square, you instantly make the original square a different shape than a square. This 40 square
assumption is incorrect unless you qualify that every background square should be counted. Even then that’s cheating!
I couldn’t resist counting the squares.
I found 40 squares…..
I had missed out the 3×3 squares at first.
In my opinion, there are 16 squares. Anything beyond that is based on the assumptions that the image is layered and/or transparent. Since that’s impossible to know unless it is stated as fact along
with the image, the only certain answer is 16.
• Ugh, yer kidding, right? You have to be. Let’s make it simple. Print out the image. Take a crayon to the paper. Now draw a horizontal line from any point on that grid to any other point in the grid on that line. Then draw a vertical line to another point. Then horizontal, then vertical again back to the original point. If your lines are the same length: THAT’S A SQUARE. It doesn’t matter how many other points you passed over to get from point A to point B. There’s no layering, there’s no transparency.
It’s a simple shapes puzzle for a 5th grader; why is this so difficult for everyone?
It’s wrong, there are 41; there is also a central one of the small squares.
Oops sorry, it is counted; there are 40.
I counted 41.
All of the squares you identified + 1 2×2 square in the center!
Correction: 40; re-watched the video, the center 2×2 was counted.
Ridiculous, there are only 17;
the boxes cannot overlap
Thank you. I had gotten 40, also, but I was seeing people saying things like “58” on another posting, and I couldn’t for the life of me figure out what I was missing.
If you want to know how many squares are actually in the image, you must not think in 3D. There are only 17 actual squares. However, if you count the ones that your mind can perceive, there are 40.
I think each of you will find you are in fact all wrong. The original puzzle is in fact an optical illusion, and when drawn properly each ‘square’ is in fact a rectangle; when you turn the paper on its side, only then can you see that the measurements do not match those of the description of a square, so the answer is 0 … hope I cleared that up a bit for people
• Yeah, good one Nicole. Thanks for proving the fact that all people aren’t equal, some are stupid and shouldn’t be allowed to breed.
You’re counting the outer square as 1, but in fact if you count the twelve squares on the inside it would be 41…?
It’s amazing how little common sense most people have if they can’t figure out the simplest things. I guess common sense isn’t that common.
you missed a lot of the squares, please re-evaluate
Why do smart people stoop to name-calling? I guess an increase in intelligence comes with a decrease in maturity? Just state your case and zip it.
it amazes me how people try to “outsmart” other people with things like this. THERE IS NO SPOON PEOPLE…
There are 40 if you count the ones with black edges. There are also another 16 if you count the ones with WHITE edges (i.e. inside the black squares). I make it 56 total, but maybe I am just being
picky :)
I printed out the graphic and got my scissors out. After cutting along each line I had 8 large pieces of paper and 8 smaller pieces of paper in the shape of what is traditionally known as a square. I had 8 large pieces of paper with 6 sides, not squares. So in the “real” world there are only 16 squares, whereas in the “virtual” world there could be 40, more or less, depending on how intelligent you are, the level of education you have, eye color, hair color or any number of traits you possess.
• It didn’t say print them out and cut them, it said look at the screen.
What about the white border outside of the entire square…. Does that not count??
And why are some of you talking about “there are only 16 true squares” etc.. Imagine them as stacked on top of one another..
THANKS for the clever easy to understand graphic! It took me forever to arrive at 40, then wanted to know if it was right. The search led to you!! Vindication!! (and no more searching for more)
You can cut out only 16 squares. How many you can count is another thing; the test is too vaguely described to know whether you should count the hexagons or only the non-“destroyed” squares. 92% here say 40, I say 40, but I understand people saying 16.. maybe there’s just a dude fu**ing with us, and he’s had great success
Do you guys know the difference between a square and a rectangle? My answer is 28
• If you’re referring to me, the author, then yes, I know the difference. A rectangle is a four-sided plane figure with four right angles. All squares are rectangles, but not all rectangles are squares.
As laid out in the animation above, 40 squares (a plane with four EQUAL sides and four right angles) can be created using the lines provided.
I had the same issue and created my own version of your counter here:
This is about the hidden squares. There are 40 squares; the solution is (8×1) + (4×4 + 2) + (3×3) + (2×2) + (1×1) = 8 + 18 + 9 + 4 + 1 = 40 squares
The formula n² + (n−1)² + (n−2)² … will determine the number of squares in a square arrangement. In your chessboard setup with its 8×8 arrangement the number of squares will be 8²+7²+6²+5²+4²+3²+2²+1² = 204.
In the picture we have a 4-square setup, which gives you 4²+3²+2²+1² = 30 squares, plus two 2-square setups: (2²+1²)×2 = 10. All in all, 40 squares in the picture.
Putting the two squares in the middle splits those eight middle squares into 4 smaller squares each, even though they are not outlined.
• Jesus Christ man, some people put way too much time into this (guy two comments up). Just count the squares! lol. Jk
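For the skeptics who don’t trust any formula, here is a brute-force Python sketch that lays down the drawn line segments and counts every square whose four sides are fully present. The coordinates are my own reconstruction of the standard figure (a 4×4 grid plus two unit squares offset by half a cell, each quartered by a central cross), scaled by 2 so half-cells land on integers:

    H, V = set(), set()  # horizontal / vertical unit edges, keyed by left / bottom endpoint

    def hline(y, x0, x1):
        for x in range(x0, x1):
            H.add((x, y))

    def vline(x, y0, y1):
        for y in range(y0, y1):
            V.add((x, y))

    for c in range(0, 9, 2):  # the 4x4 grid: a full line every second lattice step
        hline(c, 0, 8)
        vline(c, 0, 8)

    for y0 in (1, 5):         # the two offset squares, each with a central cross
        for y in (y0, y0 + 1, y0 + 2):
            hline(y, 3, 5)
        for x in (3, 4, 5):
            vline(x, y0, y0 + 2)

    def is_square(x, y, s):
        # every edge of the s-by-s square with bottom-left corner (x, y) must be drawn
        return (all((x + i, y) in H and (x + i, y + s) in H for i in range(s)) and
                all((x, y + j) in V and (x + s, y + j) in V for j in range(s)))

    print(sum(is_square(x, y, s)
              for s in range(1, 9)
              for x in range(9 - s)
              for y in range(9 - s)))  # 40

It prints 40 with no formulas involved. Notably, no “mixed” square that borrows part of a side from the big grid and part from an offset square ever closes, which is why nothing beyond 40 turns up.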
Saw this on FB for the first time today and counted 40 the first time I tried.
Wouldn’t it be 41? The entire box is also a square.
• That was the last one Terri, watch the video.
Terri, that is square #40. haha people crack me up. Just accept it :)
I think the problem with people using the line thickness to get more than 40 squares is that they don’t understand the representative definition of those lines. Using their logic, there would be no
true squares because none of those lines is truly straight, none are truly the same length and even the angles would not be exactly 90 degrees. It would also render the ‘they’re really rectangles’
argument as false. This problem is a simple geometry problem, not something that was meant for college math majors (although I’m sure they could have more fun with it), so all the extra ‘dimensions’
being mentioned have no real place either.
I usually don’t respond to anything like this, but in this case, given the idiocy of the debates and tirades against an absolute (that being 40 squares), I just can’t resist.
Does anyone remember this old maxim? “It is better to remain silent and be thought a fool than to speak up and remove all doubt.” The End
I got it right!
40 is correct, but does anyone know how many rectangles?
I do.
• 58 or 59
I say 59
• Well, there are 118 rectangles… agree?
Yea, I see 40, and 42, but 16 is right. Logically a square is not a stack of cubes making a structure as a whole, unless you want to count all the friggin atoms, which are not square. If we allow a stack of cubes to be a larger cube, then the damn things with the voids count as squares, or cubes.. or whatever you want to call it. Actually, the whole square with a square hole is in fact a square, and the piece that left the void is another square. I’m thinking this debate is HOLY CRAP OVERBOARD! Let the smart people be smart :D I’m not. But there are 42 squares and 16 “real” squares.
• You’re right Dave, you summed it up perfectly with the line that you are not smart. At least you are self aware, most people aren’t.
I love these games. They make me think. Usually I draw them out on graph paper. I got 40 when I drew it myself, going over every line with different colors. I can’t find any more. Thank you for the fun. Keep them coming. I enjoy these and number games.
Well bolter, you needed that. I just feel bad for the people that have the misfortune of being around you.
Sorry, but the solution in the video is incorrect. Since there is no information given concerning the figure as to whether any of the angles have a degree measure of 90 or as to whether any of the
segments have equal lengths, there is no reason to conclude that this figure contains any squares. The only correct answer to this would be zero.
Those who claim a square is not a square based on “lines” inside do not understand that the term relates to the outside shape. Mr Bean’s image could be inside; it’s still a square, just with an image inside it.
Tracy is a fn idiot… tilting your screen back? wtf… are you serious… you’re such an idiot.
Hey Christopher, nice animation. There still seems to be some debate though – by the dummies of course – so I thought I’d throw my 2 cents worth in, which might make it a little clearer (even though for some, it seems, this would be futile). Can you reconfigure the animation to start with the 8 small squares first, then the 2 squares that enclose those 8, before going to the other 16 singles, 9 doubles, 4 triples and then the full plate? It might help those idiots that, beyond all efforts made towards reasonably practicable assistance… still can’t get it through their thick Neolithic skulls.
And it looks like some might need the definition of a square also:
A four sided flat shape characterized by four 90 degree angles (right angles) and four straight sides of equal length. And before you (those imbecilic, block-headed, dim-witted, simpleton clowns) try to argue the point – it doesn’t matter a fuck if there’s another line running through it or… or a fucking train for that matter! Let’s also say it’s safe to assume that they are squares… without the need to debate whether they are a thousandth of a degree off true 90 degree right angles – either way… if only because the puzzle suggests they are squares by asking you how many squares there are in the first place.
If that doesn’t put the debate to rest then… well then I’m at a loss! A tougher puzzle than this one is determining how this sub-culture of humanity ever manages to walk on two legs, let alone string their mental masturbations together into words, or make an argument… and why are they breeding?! Furthermore, and for the love of all humanity – who in the hell is allowing this injustice to continue? We’ve come so far and yet the swelling of stupidity is still on the increase! Why?
There are only 40 squares. Squares are formed by 4 equal sides at right angels whether or not there are other lines running through those squares. Also, you cannot include rectangles because the
sides of a rectangle are not equal.
How do I add a photo.. their are other squares that are left out. You know what, just check out my facebook page.. i’ll post the pictures there! (give me a little time, i’ll post them shortly after
this post.) https://www.facebook.com/jon.michael.988
damn i got it right I counted 40!
One thing is for sure, Jay Oli wins “self-important dbag” of the thread! Bravo, sir, bravo!
I don’t care how many squares there are. All I know is that every time one of my friends comments on this on FB it shows up in my newsfeed again.
People, you have all debated, counted, called each other names and DEFINED a square, but NOBODY bothered to measure one of the “squares” in the diagram. A million people commented on facebook and
everybody came across arrogant and self-important and tried to promote themselves as geniuses, but NOBODY measured the objects in the diagram.
The answer is ZERO squares. They are ALL rectangles!
• Hi Christine,
Any fourth grader can tell you that all squares are rectangles that have four equal straight sides and four right angles. Every square is a rectangle. So yes, in this diagram, they are all
rectangles, but that’s because they are all squares.
Secondly, barring any image artifacting, each of the squares depicted here all have the traits defined above. I should know, because I recreated the original Facebook image, ensuring the pixel
height and width for each shape was equal. So, for example, the first 16 highlighted squares are each 80 pixels by 80 pixels. A perfect, by-the-book definition of a square.
Besides, even if I had made some error in the illustration, the point of the puzzle was never to be a trick question. It’s an exercise for looking beyond the obvious.
27. Some squarez are no longer squares.
I am pretty sure this website is where retards come to die. Everyone get out before you lose all the IQ points you have left
I believe that the black background that makes up the lines when looked at as a solid object sitting behind the white squares would also count as a square.
Because I work with illustrator and photshop I would consider these part of the visual construction but if the black space is not considerd an object then 40 it is.
This comment thread is amazing. You clearly illustrate through the use of animated GIF the CORRECT number of squares, yet people still disagree because:
1. They missed part of the animation and thus assumed you missed some squares, and felt compelled to call you out on “your” error before doublechecking.
2. They missed the lesson in Geometry that defines a square to be a regular quadrilateral (ie equal sides, equal angles), and call you out for allowing lines to overlap (by the way people those are
line SEGMENTS and they still do not interfere with the squares!).
3. They fancy themselves to be mathematically erudite and want to call attention to their knowledge of non-Euclidean Geometry including the projective plane, toroids, and other esoteric facets of
obscure mathematics, and thus call you out for solving this puzzle using basic elementary school knowledge.
4. They are highly suspicious of “puzzles” and are looking for the “gotcha” – i.e., include the number of pixels, oh actually they are all technically rectangles, don’t forget the outer part of the
lines creates a square “different” from the inner part of the lines, the puzzle really contains an optical illusion. Thus, they call you out for trying to bamboozle them, but no! they won’t be
Thank YOU for an exceptionally clear explanation of what should be a relatively simple and classic puzzle.
0 or 8 or 16 or 40 depending on your definition of “square”. There’s quite a few rectangles, too, for that matter. If the lines are used to represent the borders which create squares, print the
picture out and cut along the lines. You will be left with 16 “perfect” squares: 8 large + 8 small, with 4 leftover “L” shapes. I also see bunny rabbits and cotton candy and dead people.
You forgot to count the big square whichcontains all the smaller squares!
• Care to look again Diana? The highlighted squares get progressively larger. Number 40 is the largest, encompassing all the prior squares.
Holy smokes! This is one of the longest clusterfucks of human stupidity I’ve ever seen! Watching people argue over something like this utterly compounds my complete lack of faith in this species. I’m
ashamed to be a human right now. I wish I was an ape, because then I would feel smarter reading this insanely long thread if nonsense.
I don’t see any squares at all. I see dead people.
Nice! 1 Large square divided into 39 smaller squares = 40 squares.
I saw this on line and I counted 43! Then someone post your page saying it was 40? So my answer is why didn’t you count the three squares made up of four squares down the center?
Ok me bad!! I see it!!
Can’t believe people are still debating this..
Well… Even so sir… I found well above 40 squares… Here’s why:
1+16+36+44= 97 Squares is what I’v found…
But these types of problems, I guess, seem to be rigged by the original creator, whomever they are, to make it sound like anyone’s opinion is right no matter who looks at it… But this is quite the
i get 41
I’m not sure if anyone mentioned but does’nt the written word “squares” count it is in the full picture that would make it 41 to me.
• I agree! ^ ^ ^
I’ve always counted the written word ‘squares’ and apparantly the people who count that, have a more creative mind and think ‘outside the square’ so to speak. So we win!
Easy. 40 18 regulate squares, 8 little squares, 9 of 4 regular squares grouped together , 4 of 9 regulate squares grouped together and the whole thing. People say you can’t count groups as a square
because of intersecting lines, but actually if you look at the groups your aren’t count the groups as a square you are counting the perimeter of the groups which is a solid line which makes the
square and you not counting the same ones over because those grouped square to make bigger haven’t been counted yet.
40,according to length of sides:
add up to 40. | {"url":"http://media-geeks.com/special-features/how-many-squares-indeed/","timestamp":"2014-04-16T23:03:18Z","content_type":null,"content_length":"429241","record_id":"<urn:uuid:7b5a2515-892b-4fc5-84ed-2d1bc1575a6f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00041-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Exponential Distribution
April 29th 2009, 12:21 PM #1
[SOLVED] The Exponential Distribution
The question goes as follows...
Annual wind speed maxia for various hurricane-prone locations in the American Deep South can be modelled with an exponential distribution with mean 58 miles per hour.
a) Find the probability that, for a randomly chosen location in the Deep South, the annual maximum wind speed is less than 30 miles per hour.
b) Find the probability that, for one of these locations, the annual maximum wind speed will exceed that observed for New Orleans during Hurricane Katrina of 181 miles per hour.
For the first part I got an answer of 1 when I initally tried it, which is obviously wrong. Thanks in advance for any help
Last edited by chella182; April 30th 2009 at 06:10 AM. Reason: Solved it
The question goes as follows...
Annual wind speed maxia for various hurricane-prone locations in the American Deep South can be modelled with an exponential distribution with mean 58 miles per hour.
a) Find the probability that, for a randomly chosen location in the Deep South, the annual maximum wind speed is less than 30 miles per hour.
b) Find the probability that, for one of these locations, the annual maximum wind speed will exceed that observed for New Orleans during Hurricane Katrina of 181 miles per hour.
For the first part I got an answer of 1 when I initally tried it, which is obviously wrong. Thanks in advance for any help
Let the pdf be f(x).
a) $\Pr(X < 30) = \int_0^{30} f(x) \, dx$.
b) $\Pr(X > 181) = \int_{181}^{+\infty} f(x) \, dx$.
Okay, this is sorted now - one of my friends in the year above me pointed out my silly mistake
Last edited by chella182; April 30th 2009 at 06:10 AM. Reason: LaTex mess-up
April 29th 2009, 06:50 PM #2
April 30th 2009, 05:09 AM #3 | {"url":"http://mathhelpforum.com/advanced-statistics/86484-exponential-distribution.html","timestamp":"2014-04-16T20:08:01Z","content_type":null,"content_length":"38630","record_id":"<urn:uuid:ab08e57d-5986-46f6-8689-7333239c72b4>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00500-ip-10-147-4-33.ec2.internal.warc.gz"} |
Eagleville, PA Algebra Tutor
Find an Eagleville, PA Algebra Tutor
...You will need to master both common writing conventions (such as proper use of the comma and semicolon, etc.) and more advanced grammatical concepts (subjunctive voice, irregular past
participles, etc.) I have all the necessary study materials, including a dozen actual, previously administered S...
34 Subjects: including algebra 1, algebra 2, English, physics
...During my undergraduate career, I tutored mathematics at the Villanova Mathematics Learning and Resource Center (MLRC), primarily in Calculus I, II, and III, Differential Equations, and Linear
Algebra. Aside from that, I occasionally tutored high school mathematics and other more advanced colleg...
26 Subjects: including algebra 2, reading, algebra 1, writing
...I have methods to determine what is the best way a student learns, and am committed to finding a way to teach them effectively using visual, auditory, or kinesthetic strategies, or some
combination thereof. I obtained my International Baccalaureate Diploma in July 2012 at Central High School of ...
18 Subjects: including algebra 2, algebra 1, reading, Spanish
...My doctoral work has focused on the use of technology and group work to enhance learning, particularly in the area of science. Students need to feel comfortable with a tutor, so establishing a
rapport with students is my primary objective, because goals beyond that otherwise can't be achieved. ...
19 Subjects: including algebra 1, algebra 2, chemistry, organic chemistry
...I have created and delivered materials for English Language Learners and students with learning disabilities. I have worked with individual students to improve their grades. This process
involved assessing each student's understanding of the material, identifying the gaps in understanding and the reasons for those gaps, and developing an individualized learning plan for each
12 Subjects: including algebra 2, algebra 1, calculus, physics | {"url":"http://www.purplemath.com/eagleville_pa_algebra_tutors.php","timestamp":"2014-04-21T00:15:15Z","content_type":null,"content_length":"24263","record_id":"<urn:uuid:b86f3cc0-0ebc-41a6-857a-08948b535d9f>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00627-ip-10-147-4-33.ec2.internal.warc.gz"} |
Better performance
We redid the backend interface to the BGL routines. This optimization gave a considerable performance increase.
test_benchmark on MatlabBGL 2.1
2008-10-07, Version 2.1, Matlab 2007b, boost 1.33.0,
g++-3.4 (lib), gcc-? (mex)
airfoil west cs-stan minneso tapir
large 0.223 s 0.024 s 0.390 s 0.073 s 0.046 s
med NaN s 0.955 s NaN s NaN s 6.621 s
small NaN s 0.758 s NaN s NaN s NaN s
test_benchmark on MatlabBGL 3.0
2008-10-07: Version 3.0, Matlab 2007b, boost 1.34.1,
g++-4.0 (lib), gcc-? (mex)
airfoil west cs-stan minneso tapir
large 0.183 s 0.017 s 0.222 s 0.048 s 0.037 s
med NaN s 0.593 s NaN s NaN s 3.901 s
small NaN s 0.543 s NaN s NaN s NaN s
Graph construction functions
MatlabBGL 2.1 had a few graph construction functions. MatlabBGL 3.0 adds the grid_graph function for line, grid, cube, and hyper-cube graphs
[G xy] = grid_graph(6,5); gplot(G,xy,'.-');
In more dimensions...
[G xyz] = grid_graph(6,5,3);
G = grid_graph(2,2,2,2);
G = grid_graph([3,3,3,3,3]);
Targeted search
The graph search algorithms now let you specify a target vertex that will stop the search early if possible.
A = grid_graph(50,50);
tic; d = bfs(A,1,struct()); toc
tic; d = bfs(A,1,struct('target',2)); toc
Elapsed time is 0.001523 seconds.
Elapsed time is 0.000704 seconds.
Also implemented for astar_search, shortest_paths, and dfs.
Edge weights
In Matlab, there is no way to create a sparse matrix with a structural non-zero (used for MatlabBGL edges) and a value of 0 (used for MatlabBGL weights). Consequently, it's impossible to run
algorithms on graphs where the edge weights are 0.
Consequently, some algorithms now take an 'edge_weight' parameter that allows you to provide a different set of edge weights which allow structural non-zeros and 0 values.
This behavior is a bit complicated, so see the REWEIGHTED_GRAPHS example for more information.
Matching algorithms
While maximum cardinality bipartite matching is just a call to max-flow, general graph matching algorithms are not. MatlabBGL 3.0 contains the matching algorithms in Boost 1.34.0.
m = matching(A);
sum(m>0)/2 % matching cardinality should be 8
ans =
New graph statistics
We added a few new statistics functions.
Test for a topological ordering of a graph (only applies to DAGs or directed acyclic graphs)
n = 10; A = sparse(1:n-1, 2:n, 1, n, n); % construct a simple dag
p = topological_order(A);
test_dag(cycle_graph(6)) % a cycle is not acyclic!
ans =
ans =
Core numbers can help identify important regions in a graph. MatlabBGL includes weighted and directed core numbers. Also, the algorithms return the removal time of a particular vertex, which gives
interesting graph orderings.
% See EXAMPLES/CORE_NUMBERS_EXAMPLE
New algorithms for clustering_coefficients on weighted and directed graphs.
A = clique_graph(6) - cycle_graph(6); % A is a clique - a directed cycle
ccfs = clustering_coefficients(A)
B = sprand(A);
ccfs = clustering_coefficients(B)
C = A|A'; % now it's a full clique again
ccfs = clustering_coefficients(C)
ccfs =
ccfs =
ccfs =
Max-flow algorithms
Since Boost added the Kolmogorov max-flow function, we added the full collection of flow algorithms to MatlabBGL.
ans =
ans =
ans =
Dominator tree
Dominator trees are relations about presidence in certain types of graphs. These are also called flow-graphs.
p = lengauer_tarjan_dominator_tree(A,1);
New utility functions
MatlabBGL 3.0 introduces some new utility functions.
The output of a shortest path algorithm is a predecessor matrix. To convert these predecessor relationships to a path, use the path_from_pred function.
[A xy] = grid_graph(6,5); n= size(A,1);
[d dt pred] = bfs(A,1); %
path = path_from_pred(pred,n) % sequence of vertices to upper corner
path =
Let's draw the path
hold on; plot(px,py,'-','LineWidth',2); hold off;
We can also create a full shortest path tree using the tree_from_pred function.
T = tree_from_pred(pred);
hold on; plot(px,py,'-','LineWidth',2); hold off;
Finally, there are a few new routines to make working with reweighted graphs easier. See EXAMPLES/REWEIGHTED_GRAPHS for information about the INDEXED_SPARSE and EDGE_WEIGHT_INDEX functions. | {"url":"http://www.mathworks.com/matlabcentral/fileexchange/10922-matlabbgl/content/matlab_bgl/doc/html/new_in_3/new_in_3_0.html","timestamp":"2014-04-18T15:47:13Z","content_type":null,"content_length":"65593","record_id":"<urn:uuid:3744bc0f-fe2c-4ba7-86a8-4c54c068600e>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
Normality test
On 03/09/2008 10:33 AM, Williams, Robin wrote:
> Hi,
> I am looking for a normality test in R to see if a vector of data I have
> can be assumed to be normally distributed and hence used in a linear
> regression.
Raw data that is suitable for standard linear regression is normally
distributed, but the mean varies from observation to observation. The
necessary assumption is that the errors are normally distributed with
zero mean, but the data itself also includes the non-random parts of the
model. The effect of the varying means is that the data will generally
*not* appear to come from a normal distribution if you just throw it all
into a vector and look at it.
So let's assume you're working with residuals from a linear fit. The
residuals should be normally distributed with mean zero, but their
variances won't be equal. It may be that in a large dataset this will
be enough to get a false declaration of non-normality even with
perfectly normal errors. In a small dataset you'll rarely have enough
power to detect non-normality.
So overall, don't use something like shapiro.test for what you have in
mind. Any recent regression text should give advice on model
diagnostics that will do a better job.
>> help.search("normality test")
> suggests the Shapiro test, ?shapiro.test.
> Now maybe I am interpreting things incorrectly (as is usually the case),
> am I right in assuming that this is a composite test for normality, and
> hence a high p-value would suggest that the sample is normally
> distributed?
A low p-value (e.g. p < 0.05) could suggest there is evidence of
non-normality, but p > 0.05 just shows a lack of evidence. In the case
where the data is truly normally distributed, you'd expect p to be
uniformly distributed between 0 and 1. (I have an article in the
current American Statistician suggesting ways to teach p-values to
emphasize this; unfortunately, it seems to be a surprise to a lot of
Duncan Murdoch
As a test I did
> shapiro.test(rnorm(4500))
> a few times, and achieved very different p-values, so I cannot be sure.
> I had assumed that a random sample of 4500 would have a very high
> p-value on all occasions but it appears not, this is interesting.
> Are there any other tests that people would recommend over this one in
> the base packages? I assume not as help.search did not suggest any.
> So am I right about a high p-value suggesting normality?
> Many thanks for any help.
> Robin Williams
> Met Office summer intern - Health Forecasting
[hidden email]
> [[alternative HTML version deleted]]
> ______________________________________________
[hidden email]
mailing list
> PLEASE do read the posting guide
> and provide commented, minimal, self-contained, reproducible code.
[hidden email]
mailing list
PLEASE do read the posting guide
and provide commented, minimal, self-contained, reproducible code. | {"url":"http://r.789695.n4.nabble.com/Normality-test-td865270.html","timestamp":"2014-04-20T00:37:57Z","content_type":null,"content_length":"58743","record_id":"<urn:uuid:c8f0541b-9c3d-483c-8f75-b00b7b9a3218>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00375-ip-10-147-4-33.ec2.internal.warc.gz"} |
Another Mathmatical Problem..
August 7th 2006, 11:11 AM
Another Mathmatical Problem..
The following experimental data values of x and y are believed to be related by the law y = mx^2 + c, where m and c are constants. By plotting a suitable graph, verify that this law does indeed
relate the values and find approximate values of m and c.
x 2.3 4.1 6.0 8.4 9.9 11.3
y 13.9 31.2 60.2 111.8 153.0 197.5
August 7th 2006, 11:40 AM
I used Excel to plot this graph using your data and a quadric regression. I hope I didn't get it too small. Note the coorelation coefficient R^2. It's 1. That means it's as good as it gets as far
as regressions go.
Incase you can't make it out, the equation is $1.4938x^{2}+0.0809x+5.8118$
August 7th 2006, 11:52 AM
Could you breifly explain how you solved this problem? I'm a little confused.
Once i understand this problem, i shall try some similar ones.
Thanks for your help so far.
August 7th 2006, 12:24 PM
Do not beg for answers! You can wait for a response! :mad:
-=USER WARNED=-
August 7th 2006, 12:25 PM
All I done was to use Excel. It did all the work. If you want to perform a quadric regression by hand....good luck. That's what technology is for. Do you have a TI-83?. I believe it'll do
regression: linear, quadric, cubic, quartic, sine, logistic, exponential, etc....
August 7th 2006, 12:53 PM
Originally Posted by c00ky
The following experimental data values of x and y are believed to be related by the law y = mx^2 + c, where m and c are constants. By plotting a suitable graph, verify that this law does indeed
relate the values and find approximate values of m and c.
x 2.3 4.1 6.0 8.4 9.9 11.3
y 13.9 31.2 60.2 111.8 153.0 197.5
Plot a graph of y against x^2 (x^2 the independent variable and y the
dependent), it should be a straight line, its slope will be m, and the intercept
on the x^2=0 axis will be c.
August 7th 2006, 02:01 PM
Yes, I am sorry, but I was wrong. A linear regression is more suited, as Cap,N said. I saw the x^2 and got carried away.
If you want to perform a linear regression 'by hand' you can use the following formulae:
y=mx+b, n=number of data points.
b=y-intercept= $\frac{\sum{y}}{n}-m\frac{\sum{x}}{n}$
Personally, I would use my calculator to do it, unless, you're forbidden.
Just thought you'd be interested.
August 7th 2006, 02:58 PM
Originally Posted by c00ky
The following experimental data values of x and y are believed to be related by the law y = mx^2 + c, where m and c are constants. By plotting a suitable graph, verify that this law does indeed
relate the values and find approximate values of m and c.
x 2.3 4.1 6.0 8.4 9.9 11.3
y 13.9 31.2 60.2 111.8 153.0 197.5
Actually, it's fairly easy to figure out the equations for the coefficients of a least squares (polynomial) regression curve. (The problem for me is the formulae for the error estimates. I don't
know how to do those.) If you like I can show you the general method. I should warn you that it involves a little Calculus (not much though). I won't post it unless you ask, since coding it will
take a bit of time, but I would be happy to do so.
August 7th 2006, 06:23 PM
Originally Posted by topsquark
I should warn you that it involves a little Calculus (not much though).
It actually involves quite a lot of calculus,...mutli-variable funtions,....partial diffrenciation,....hessian matrix.
August 8th 2006, 03:59 AM
Thanks Guys, I followed captain blacks steps and i think i've cracked it! | {"url":"http://mathhelpforum.com/advanced-statistics/4776-another-mathmatical-problem-print.html","timestamp":"2014-04-17T09:44:03Z","content_type":null,"content_length":"9975","record_id":"<urn:uuid:f05b0033-6215-4b6b-8ffa-85e34ffa4463>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00105-ip-10-147-4-33.ec2.internal.warc.gz"} |
Problem 3
We are adding fractions, so we need common denominators. We don't have common denominators, so we have to find a common denominator. First factor both denominators.
At this point we see that both denominators already have a factor of 2x + 5 in common. The first one does not have the x + 3 which the second one has, and the second denominator does not have the x -
4 which the first one has. So we multiply top and bottom of the first fraction by x + 3 and top and bottom of the second one by x - 4.
Now that we have common denominators, we can add the tops. First remove parentheses.
Then combine like terms on the top.
Now that we have finished the addition process, we still should check to see if this fraction will reduce. Fortunately, we cleverly left the bottom factored. Now that we have simplified the top, we
can check to see if it will factor. It does.
Not only does the top factor, but one of the factors cancels with one of the factors on the bottom leaving us with a simplified answer of | {"url":"http://www.sonoma.edu/users/w/wilsonst/Courses/Math_50/SolnsS98/MT1/MT1p3/MT1p3.html","timestamp":"2014-04-17T10:49:35Z","content_type":null,"content_length":"2938","record_id":"<urn:uuid:ecce1e58-4316-4092-84e3-e5151f87e250>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00136-ip-10-147-4-33.ec2.internal.warc.gz"} |
Section 12.4
Java Security Update: Oracle has updated the security settings needed to run Physlets.
Click here for help on updating Java and setting Java security.
Section 12.4: Wave Packet Dynamics
Please wait for the animation to completely load.
For the infinite square well, we consider a superposition of a large number of states so as to resemble an initial Gaussian wave packet at t = 0 in order to study the dynamics of such packets.^4 For
the simple harmonic oscillator, we will outline the same approach.^5
We can examine the time dependence of such an initially localized state by choosing a Gaussian of the form
Ψ[G](x,0) = (αħπ^1/2)^−1/2 exp(−(x − x[0])^2/(2α^2ħ^2)) exp(ip[0](x − x[0])/ħ),
where by direct calculation: <x>[t = 0 ]= x[0], <p>[t = 0 ]= p[0], and Δx[t = 0 ]= Δx[0] = αħ/2^1/2. The general expression for a wave packet solution constructed from such energy eigenfunctions is
Ψ[G](x,t) = Σ c[n] ψ[n](x) exp(−iE[n]t/ħ) [sum from n = 1 to ∞] (12.15)
where the expansion coefficients satisfy Σ[n] |c[n]|^2 =1. The expansion constants are determined by the integral
c[n ]= ∫ψ*[n](x,0) Ψ[G](x,0) dx [sum from −∞ to +∞]
Once we determine these coefficients, we can use Eq. (12.15) to reconstruct the wave packet and study the corresponding packet dynamics. In the animation, we calculate the time dependence of packets
with x[0 ]= 0 and an α, p[0], and ω that you can vary. We have set ħ = 2m = 1.
The time dependence of a wave packet in a harmonic oscillator is determined by all of the exp(−iE[n]t/ħ) factors. For the harmonic oscillator there is only one characteristic time scale, T[cl] = 2π/
ω, which gives a result in agreement with classical expectations. Because the energy of the harmonic oscillator depends linearly on the quantum number, n, there are no other time scales (unlike wave
packets in the infinite square well and other wells). In fact there are only two possibilities for the wave packet's evolution: it can either be a squeezed state or a coherent state. For a squeezed
state the packet remains Gaussian shaped, but as it moves throughout the well its width grows and contracts. You can see this effect by keeping the default ω and choosing p[0 ]= 3 and α = 0.5 or α =
2. The other situation, a coherent state (keeping the default ω and choosing p[0 ]= 3 and α = 1) occurs for special packets and wells and is easily noticeable as the wave packet retains its exact
shape throughout its motion throughout the well.
^4For a comprehensive review of this topic see: R. W. Robinett, "Quantum Wave Packet Revivals," Phys. Rep. 392, 1-119 (2004).
^5In practice, one uses propagator methods to exactly determine the localized Gaussian wave function and its time dependence. For the details, see pages 206-208 of R. W. Robinett, Quantum Mechanics:
Classical Results, Modern Systems, and Visualized Examples, Oxford, New York (1997).
« previous
next » | {"url":"http://www.compadre.org/PQP/quantum-theory/section12_4.cfm","timestamp":"2014-04-21T15:00:42Z","content_type":null,"content_length":"21155","record_id":"<urn:uuid:4b7fe0e2-e142-4168-bca9-2b110283c42b>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00083-ip-10-147-4-33.ec2.internal.warc.gz"} |
Interference Model: Ripple Tank
Interference Model: Ripple Tank Relations
Other Related Resources
relation created by Anne Cox
The Easy Java Simulations Modeling and Authoring Tool is needed to explore the computational model used in the Interference Model: Ripple Tank.
relation created by wee lookang
Ejs Open Source Ripple Tank Interference Model java applet a remix from Interference Model: Ripple Tank written by Andrew Duffy
Create a new relation | {"url":"http://www.compadre.org/introphys/items/Relations.cfm?ID=9989","timestamp":"2014-04-20T23:33:14Z","content_type":null,"content_length":"19351","record_id":"<urn:uuid:ac3f7838-160e-4e0c-9125-292e46507436>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00259-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Sunflower Lemma
Today’s post is about the Sunflower Lemma (a.k.a the Erdos-Rado Lemma). I learnt about Sunflower Lemma while reading Razborov’s Theorem from Papadimitriou’s computational complexity book.
A sunflower is a family of p sets $\{P_1,P_2,\dots,P_p\}$, called petals, each of cardinality at most l, such that all pairs of sets in the family have the same intersection, called the core of the
The Sunflower Lemma : Let Z be a family of more than $M=(p-1)^{l}l!$ nonempty sets, each of cardinality l or less. Then Z must contain a sunflower.
Exercise : Prove the Sunflower Lemma
The best known lower bound on M is $(p-1)^l$.
Exercise : Construct a family of $(p-1)^l$ nonempty sets that does not have a sunflower.
We develop counterparts
of the Sunflower Lemma for distributive lattices, graphic matroids, and matroids
representable over a xed nite eld.
Sunflower lemma plays a crucial role in Razborov’s theorem [Razborov'85]. [McKenna'05] generalized the Sunflower Lemma for distributive lattices, graphic matroids, and matroids representable over a
fixed finite field.
I am not aware of other applications of Sunflower Lemma. If you know any, please leave a comment.
Open problems :
□ Improve the upper/lower bound on M.
□ Erdos and Rado conjectured that “For every fixed p there is a constant C = C(p) such that a family of more than $C^{l}$ nonempty sets, each of cardinality l or less has a sunflower”. This
conjecture is still open. It is open even for p=3.
References :
• [Razborov'85] A. A. Razborov, Some lower bounds for the monotone complexity of some Boolean functions, Soviet Math. Dokl. 31 (1985), 354-357.
• [McKenna'05] Geoffrey McKenna, Sunflowers in Lattices, The Electronic Journal of Combinatorics Vol.12 2005 [pdf]
5 thoughts on “The Sunflower Lemma”
1. A modification of the sunflower lemma was used by Andreev to prove the exponential bounds for monotone circuits (mentioned in Stasys Jukna “Extremal combinatorics”)
2. Andreev A. E. On the method of obtaining effective lower bounds on monotone complexity, Algebra i Logica, 26:1, 3-26 (Russian, 1987)
□ Thanks Andrei. I will look at this paper.
3. Hello Shiva,
Munro et. al. use the lemma to show an information-theoretic lower bound on the number of bits to be read for a space-efficient code. “Integer Representation and Counting in the Bit Probe Model”.
This is in the conference version only as it wasn’t reproduced in the journal version. http://www.springerlink.com/content/c20040k70k368212/
4. there is some improved sunflower lemmas by rossman in his recent paper on circuit lower bounds and noga alon has a new paper linking them to the complexity of matrix multiplication! see also
detailed material on sunflowers, cstheory stackexchange | {"url":"http://kintali.wordpress.com/2009/07/15/the-sunflower-lemma/","timestamp":"2014-04-20T06:09:02Z","content_type":null,"content_length":"69194","record_id":"<urn:uuid:a027f655-7cbe-43ce-8cf1-592836ca4992>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00612-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 11 - 20 of 23
"... ABSTRACT1 @ ?x: ?x = 68 Id @ ?x ["ID"] *) - s "?x"; - rri "ID";ex(); - p "ABSTRACT1@?x"; (* ABSTRACT2 @ ?x: ?f @ ?a = COMP != ?f @ (ABSTRACT @ ?x) =? ?a ["COMP"] *) - s "?f@?a"; - rri "COMP"; -
right(); right(); ri "ABSTRACT@?x"; - prove "ABSTRACT2@?x"; (* ABSTRACT3 @ ?x: ?a & ? ..."
Cited by 1 (0 self)
Add to MetaCart
ABSTRACT1 @ ?x: ?x = 68 Id @ ?x ["ID"] *) - s "?x"; - rri "ID";ex(); - p "ABSTRACT1@?x"; (* ABSTRACT2 @ ?x: ?f @ ?a = COMP != ?f @ (ABSTRACT @ ?x) =? ?a ["COMP"] *) - s "?f@?a"; - rri "COMP"; - right
(); right(); ri "ABSTRACT@?x"; - prove "ABSTRACT2@?x"; (* ABSTRACT3 @ ?x: ?a & ?b = RAISE0 =? ((ABSTRACT @ ?x) =? ?a) & (ABSTRACT @ ?x) =? ?b [] *) - s "?a&?b"; - right(); - ri "ABSTRACT@?x"; - up();
left(); - ri "ABSTRACT@?x"; - top(); - ri "RAISE0"; - prove "ABSTRACT3@?x"; (* ABSTRACT4 @ ?x: ?a = [?a] @ ?x [] *) - s "?a"; - ri "BIND@?x"; ex(); 69 - p "ABSTRACT4@?x"; (* ABSTRACT@term will
(attempt to) express a target term as a function of its parameter "term" *) (* ABSTRACT @ ?x: ?a = (ABSTRACT4 @ ?x) =?? (ABSTRACT3 @ ?x) =?? (ABSTRACT2 @ ?x) =?? (ABSTRACT1 @ ?x) =? ?a ["COMP","ID"]
*) - s "?a"; - ri "ABSTRACT1@?x"; - ari "ABSTRACT2@?x"; - ari "ABSTRACT3@?x"; - ari "ABSTRACT4@?x"; - p "ABSTRACT@?x"; (* REDUCE will reverse the effect of ABSTRACT; it will "evaluate" functions
built by ABSTRACT *) (* REDUCE: ?f @ ?x = (ABSTRACT4 @ ?x) !!= ((RL @ REDUCE) *? RAISE0) !!= ((RIGHT @ REDUCE) *? COMP) =?? ID =? ?f @ ?x ["COMP","ID"] *) - dpt "REDUCE"; - s "?f@?x"; - ri "ID"; -
ari "(RIGHT@REDUCE)*?COMP"; - arri "(RL@REDUCE)*?RAISE0"; - arri "ABSTRACT4@?x"; - prove "REDUCE"; (* old approach to hypotheses *) (* equational forms of tactics given without proof; the proofs of
the tactics involve no actual rewriting *) PIVOT: (?a = ?b) ------ ?T , ?U = (RIGHT @ LEFT @ EVAL) =? HYP =? (?a = ?b) 70 ------ ((BIND @ ?a) =? ?T) , ?U ["HYP"] REVPIVOT: (?a = ?b) ------ ?T , ?U =
(RIGHT @ LEFT @ EVAL) =? HYP != (?a = ?b) ------ ((BIND @ ?b) =? ?T) , ?U ["HYP"] We now present examples of the use of thes...
"... This paper shows that OO principles can be used to enhance the rigour of mathematical notation without loss of brevity and clarity. It is well known that traditional mathematical notation is not
completely formal. This is so because mathematicians and other users of mathematical notation tend to ..."
Add to MetaCart
This paper shows that OO principles can be used to enhance the rigour of mathematical notation without loss of brevity and clarity. It is well known that traditional mathematical notation is not
completely formal. This is so because mathematicians and other users of mathematical notation tend to sacrifice exactness to obtain brevity and clarity. The mathematician thereby leaves to the reader
to guess the meaning of each formula presented based on the written and unwritten rules of the particular field of research. This works perfectly well for communication between researchers in the
same field, but may be an obstacle for communication between researchers form different fields or for newcomers such as students. The lack of rigour in mathematical notation may also be an obstacle
when mathematical phenomena are to be simulated on computers, where the programmer has to fill out the gaps in the notation. It is generally believed that complete formal rigour leads to an
"... this paper refers to some witness to the appropriate instance of the comprehension axiom, and does not implicitly assume the existence of a unique or even a canonical witness) ..."
Add to MetaCart
this paper refers to some witness to the appropriate instance of the comprehension axiom, and does not implicitly assume the existence of a unique or even a canonical witness)
, 2002
"... form of Godel's first theorem: Let P be a set of Godel numbers of all the provable sentences. If the set # is expressible in correct, then there is a true sentence of not provable in L. ..."
Add to MetaCart
form of Godel's first theorem: Let P be a set of Godel numbers of all the provable sentences. If the set # is expressible in correct, then there is a true sentence of not provable in L.
, 2009
"... The “usual ” model construction for NFU (Quine’s New Foundations with urelements, shown to be consistent by Jensen) starts with a model of the usual set theory with an automorphism that moves a
rank (this rank is the domain of the model). “Most ” elements of the resulting model of NFU are urelements ..."
Add to MetaCart
The “usual ” model construction for NFU (Quine’s New Foundations with urelements, shown to be consistent by Jensen) starts with a model of the usual set theory with an automorphism that moves a rank
(this rank is the domain of the model). “Most ” elements of the resulting model of NFU are urelements (it appears that information about their extensions is discarded). The surprising result of this
paper is that this information is not discarded at all: the membership relation of the original model (restricted to the domain of the model of NFU) is definable in the language of NFU. A corollary
of this is that the urelements of a model of NFU obtained by the “usual ” construction are inhomogeneous: this was the question the author was investigating initially. Other aspects of the mutual
interpretability of NFU and a fragment of ZFC are discussed in sufficient detail to place
, 2006
"... By “alternative set theories ” we mean systems of set theory differing significantly from the dominant ZF (Zermelo-Frankel set theory) and its close relatives (though we will review these
systems in the article). Among the systems we will review are typed theories of sets, Zermelo set theory and its ..."
Add to MetaCart
By “alternative set theories ” we mean systems of set theory differing significantly from the dominant ZF (Zermelo-Frankel set theory) and its close relatives (though we will review these systems in
the article). Among the systems we will review are typed theories of sets, Zermelo set theory and its variations, New Foundations and related systems, positive set theories, and constructive set
theories. An interest in the range of alternative set theories does not presuppose an interest in replacing the dominant set theory with one of the alternatives; acquainting ourselves with
foundations of mathematics formulated in terms of an alternative system can be instructive as showing us what any set theory (including the usual one) is supposed to do for us. The study of
alternative set theories can dispel a facile identification of “set theory ” with “Zermelo-Fraenkel set theory”; they are not the same thing. Contents 1 Why set theory? 2 1.1 The Dedekind
construction of the reals............... 3 1.2 The Frege-Russell definition of the natural numbers....... 4
"... Abstract. In the informal unlimited theory of structures and (particularly) categories, one considers unrestricted statements concerning structures such as that the substructure relation on all
structures of a given kind forms a partially ordered structure. or that the collection of all categories f ..."
Add to MetaCart
Abstract. In the informal unlimited theory of structures and (particularly) categories, one considers unrestricted statements concerning structures such as that the substructure relation on all
structures of a given kind forms a partially ordered structure. or that the collection of all categories forms a category with arbitrary These sorts of propositio~s are not accounted for difunctors
as its morphisms. The aim of the present work is to give a founrectly by currently accepted means. dation for the theory of structures including such unlimited statements-more or less as they are
presented to us-by means of certain formal systems. The theories studied here are based on an extension of Quinels idea of stratification. Their use is justified bya consistency proof. adapting
methods of Jensen. These systems are successful for the basic aim to a considerable extent. but they suffer a specific defect which prevents them from being fully successful. Some possible
alternatives are also suggested.
, 2006
"... This article is essentially an extended review with historical comments. It looks at an algorithm published in 1943 by M. H. A. Newman, which decides whether a lambda-calculus term is typable
without actually computing its principal type. Newman’s algorithm seems to have been completely neglected by ..."
Add to MetaCart
This article is essentially an extended review with historical comments. It looks at an algorithm published in 1943 by M. H. A. Newman, which decides whether a lambda-calculus term is typable without
actually computing its principal type. Newman’s algorithm seems to have been completely neglected by the type-theorists who invented their own rather different typability algorithms over 15 years
later. 1
"... Abstract. A “new ” criterion for set existence is presented, namely, that a set {x | φ} should exist if the multigraph whose nodes are variables in φ and whose edges are occurrences of atomic
formulas in φ is acyclic. Formulas with acyclic graphs are stratified in the sense of New Foundations, so co ..."
Add to MetaCart
Abstract. A “new ” criterion for set existence is presented, namely, that a set {x | φ} should exist if the multigraph whose nodes are variables in φ and whose edges are occurrences of atomic
formulas in φ is acyclic. Formulas with acyclic graphs are stratified in the sense of New Foundations, so consistency of the set theory with weak extensionality and acyclic comprehension follows from
the consistency of Jensen’s system NFU. It is much less obvious, but turns out to be the case, that this theory is equivalent to NFU: it appears at first blush that it ought to be weaker. This paper
verifies that acyclic comprehension and stratified comprehension are equivalent, by verifying that each axiom in a finite axiomatization of stratified comprehension follows from acyclic
, 2012
"... Symmetry motivates a new consistent fragment of NF and an extension of NF with ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=906279&sort=cite&start=10","timestamp":"2014-04-20T05:27:14Z","content_type":null,"content_length":"34208","record_id":"<urn:uuid:4d27c155-1c7f-4287-b791-e65ee784f4dd>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
Raymond James: A column by James Kruzan
The Rule of 72
September 29, 2010
"How long will it take my investment to double?" This is a common question many may have concerning their investments and think a calculator is needed to provide an answer. But a calculator may not
be needed, at all.
The tool to use is called the Rule of 72 and, best of all; it is simple and free. This is how it works. If an individual has an investment they think will grow at an assumed rate of return per year,
then simply dividing that rate of return into 72 will provide a rough estimate of the number of years it will take for the investment to double in size.
For example, let's assume an investment is assumed to grow at an average rate of return of six percent each year. Simply divide six into 72 will give a rough estimate that it will take 12 years for
this investment to double (72 / 6 = 12). This formula assumes a fixed annual rate of return and the reinvestment of all earnings. Keep in mind that very few investments offer a guaranteed rate of
return and that an investment's past performance does not guarantee future performance.
The rule of 72 may also be used to show the negative power of inflation. This may be an especially handy tool to those individuals in their retirement's years and, also, for those approaching the
retirement decision. Using this tool an individual can estimate the number of years it will take for his or her cost of living to double. Or put another way, how long before an individual's
purchasing power is cut in half.
For example, let's assume an individual is retired and forecasts an inflation rate of five percent per year. An inflation rate, in general terms, is the rate of increase in the prices of goods and
services individuals purchase over time. Forecasting an inflation rate of five percent means the individual is assuming the prices of the goods and services he or she will purchase in the future will
increase at a rate of five percent per year. Using the rule of 72, simply dividing five into 72 will provide a rough estimate that the individual's cost of living will double in 14 to 15 years (72 ¸
5 = 14.4).
James B. Kruzan, CFP, is a Registered Principal and Branch Manager for Raymond James Financial Services, Inc., Fenton and Clarkston. | {"url":"http://www.oxfordleader.com/LPprintwindow.LASSO?-token.editorialcall=237902.113121","timestamp":"2014-04-19T02:34:15Z","content_type":null,"content_length":"6898","record_id":"<urn:uuid:06f9ef09-37a5-4d94-ab2c-8f0e842a996c>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00581-ip-10-147-4-33.ec2.internal.warc.gz"} |
The String Merging Problem
, 1997
"... sequence (LCS) of those strings. If D is the simple Levenshtein distance between two strings having lengths m and n, SES is the length of the shortest edit sequence between the strings, and L is
the length of an LCS of the strings, then SES = D and L = (m + n 0D)=2. We will focus on the problem of ..."
Cited by 14 (0 self)
Add to MetaCart
sequence (LCS) of those strings. If D is the simple Levenshtein distance between two strings having lengths m and n, SES is the length of the shortest edit sequence between the strings, and L is the
length of an LCS of the strings, then SES = D and L = (m + n 0D)=2. We will focus on the problem of determining the length of an LCS and also on the related problem of recovering an LCS. Another
related problem, which will be discussed in Chapter 7, is that of approximate string matching, in which it is desired to locate all positions within string y which begin an approximation to string x
containing at most D errors (insertions or deletions). 124 SERIAL COMPUTATIONS OF LEVENSHTEIN DISTANCES procedure CLASSIC( x,<
, 1999
"... . This paper introduces an approach to solving combinatorial optimization problems on partially ordered sets by the reduction to searching source-sink paths in the related transversal graphs.
Dierent techniques are demonstrated in application to nding consistent supersequences, merging partially ..."
Cited by 4 (2 self)
Add to MetaCart
. This paper introduces an approach to solving combinatorial optimization problems on partially ordered sets by the reduction to searching source-sink paths in the related transversal graphs. Dierent
techniques are demonstrated in application to nding consistent supersequences, merging partially ordered sets, and machine scheduling with precedence constraints. Extending the approach to labeled
partially ordered sets we also propose a solution for the smallest superplan problem and show its equivalence to the well studied coarsest regular renement problem. For partially ordered sets of a
xed width the number of vertices in their transversal graphs is polynomial, so the reduction allows us easily to establish that many related problems are solvable in polynomial or pseudopolynomial
time. For example, we establish that the longest consistent supersequence problem with a xed number of given strings can be solved in polynomial time, and that the precedence-constrained release...
, 2006
"... cyclic strings ..."
, 2006
"... Abstract⎯Given a set S = {S 1,..., S k} of finite strings, the k-longest common subsequence problem (k-LCSP) seeks a string L of maximum length such that L is a subsequence of each S i for i =
1,..., k. This paper presents a technique, specialized branching, that solves k-LCSP. Specialized branching ..."
Cited by 1 (0 self)
Add to MetaCart
Abstract⎯Given a set S = {S 1,..., S k} of finite strings, the k-longest common subsequence problem (k-LCSP) seeks a string L of maximum length such that L is a subsequence of each S i for i = 1,...,
k. This paper presents a technique, specialized branching, that solves k-LCSP. Specialized branching combines the benefits of both dynamic programming and branch and bound to reduce the search space.
For large k, this method is shown to be computationally superior to dynamic programming. Keywords⎯Longest common subsequence, Branch and bound, Dynamic programming 1.
, 1999
"... We present an algorithm for building the automaton that searches for all non-overlapping occurrences of each subsequence from the set of subsequences. Further, we define Directed Acyclic
Supersequence Graph and use it to solve the generalized Shortest Common Supersequence problem, the Longest Common ..."
Add to MetaCart
We present an algorithm for building the automaton that searches for all non-overlapping occurrences of each subsequence from the set of subsequences. Further, we define Directed Acyclic
Supersequence Graph and use it to solve the generalized Shortest Common Supersequence problem, the Longest Common Non-Supersequence problem, and the Longest Consistent Supersequence problem.
, 2005
"... We consider the master ring problem (MRP) which often arises in optical network design. Given a network which consists of a collection of interconnected rings R1,..., RK, with n1,..., nK
distinct nodes, respectively, we need to find an ordering of the nodes in the network that respects the ordering ..."
Add to MetaCart
We consider the master ring problem (MRP) which often arises in optical network design. Given a network which consists of a collection of interconnected rings R1,..., RK, with n1,..., nK distinct
nodes, respectively, we need to find an ordering of the nodes in the network that respects the ordering of every individual ring, if one exists. Our main result is an exact algorithm for MRP whose
running time approaches Q · ∏ K k=1 (nk / √ 2) for some polynomial Q, as the nk values become large. For the ring clearance problem, a special case of practical interest, our algorithm achieves this
running time for rings of any size nk ≥ 2. This yields the first nontrivial improvement, by factor of (2 √ 2) K ≈(2.82) K, | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=482556","timestamp":"2014-04-17T13:19:21Z","content_type":null,"content_length":"25689","record_id":"<urn:uuid:588292f9-ed4a-44b9-b786-e5939bdcaadb>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00191-ip-10-147-4-33.ec2.internal.warc.gz"} |
Expressions - Lesson 1
SOL A.1 - The student will represent verbal quantitative situations algebraically and evaluate these expressions for given replacement values of the variables.
Common Core
Warm-up 1. Warm-up: Expressions (doc)
Videos 1. Video: PH Writing Algebraic Expressions from Words
2. Video: PH Writing an Equation
1. Using Algebraic Equations - Translate equations into English sentences and translate English sentences into equations. Read the equation or sentence and select word tiles or
Explore symbol tiles to form the corresponding sentence or equation.
Learning 2. Using Algebraic Expressions - Translate algebraic expressions into English phrases, and translate English phrases into algebraic expressions. Read the expression or phrase and
select word tiles or symbol tiles to form the corresponding phrase or expression.
1. Interactive Notes: Algebraic Representations - lessons and practice problems at Regentsprep.org
Internet Sites 2. Interactive Notes: Writing Equations - notes and workout problems at Math.com
3. Interactive Notes: Writing Inequalities - notes and workout problems at Math.com
Journal 1. Opinion: Explain in your own words why you like or dislike math.
Study Guide 1. ExamView Quiz: Translating Expressions
2. Prentice Hall Quiz | {"url":"http://teachers.henrico.k12.va.us/math/HCPSAlgebra1/module1-1.html","timestamp":"2014-04-18T00:13:53Z","content_type":null,"content_length":"38179","record_id":"<urn:uuid:5227b057-b41c-4c98-8a47-f8c6113ba66c>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00415-ip-10-147-4-33.ec2.internal.warc.gz"} |
Student Support Forum: 'Heat Equation in Polar coordinates' topic
Author Comment/Response
Hi! Can someone please help?
I'm trying to solve the heat equation in polar coordinates. Forgive my way of typing it in, I'm battling to make it look right. The d for derivative should be partial, alpha is the Greek
alpha symbol and theta is the Greek theta symbol.
du/dt = (alpha.alpha)[(d/dr)(du/dr)+(1/r)du/dr+(1/(r.r))(d/theta)(du/dtheta)]
This is the heat equation for a disk with radius a.
u(r,theta,0) = (a-r)cos(theta)
u(a,theta,t) = 0
In Mathematica, I used:
NDSolve[{Derivative[0,0,1][u][r,theta,t]==alpha Derivative[2,0,0][u][r,theta,t]+alpha Derivative[1,0,0][u][r,theta,t]+alpha Derivative[0,2,0][u][r,theta,t], u[r,theta,0] == (a-r)Cos[theta], u
I got an error: NDSolve: bcart
I tried replacing alpha with 2 and a with 4. Still got a problem. Is there a way of keeping the alpha's and a's? And can the radius start from 0, without problems?
Then I need to plot it in 3 dimensions. I tried: (radius starting from 1 since I had problems)
Plot3D[Evaluate[u[r,theta,t] /. First[%]],{r,1,4}, {theta,0,pi}, {t,0,1},PlotPoints -> 50]
I got an error: Plot3D: plnc (repeatedly)
General :: stop
Plot3D[InterpolatingFunction[{{1.,4.},{0.,3.14159},{0.,1.}},<>][r,theta,t],{r,1,4},{theta,0,pi},{t,0,1}, PlotPoints -> 50]
I still need to add a different boundary condition where du/dr(a,theta,t) = 0 and need to solve u(r,theta,t). Would it work the same way?
Please help! I would really appreciate it!
URL: , | {"url":"http://forums.wolfram.com/student-support/topics/9508","timestamp":"2014-04-20T11:00:00Z","content_type":null,"content_length":"24701","record_id":"<urn:uuid:30d2ece4-3bc2-42d4-9352-cb51c6419022>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00034-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gravitomagnetic Induction of Gravitational Fields
Gravitomagnetic Induction of Gravitational Fields
This post is in response to a story originating from the European Space Agency, so check out; http://www.esa.int/SPECIALS/GSP/SEM0LVGJE_0.htmlThis story is entitled; Towards a new test for general
relativity, dated March 23, 2006 and has now been archived at the above site under Education.It would appear that Tajmar & Matos, the scientists responsible, have demonstrated the underlying force we
at Gravity Control refer to as Non-linear Time Field Frequency Acceleration, (ntffa).When you look at the drawing on the above site, you will notice the accelerative spin of the super conductive ring
is opposite to the direction of the Gravitoelectric Field, which is to be expected.The gravity of the sc ring is increasing as the ring accelerates, so this is not antigravity, but is in fact
anti-antigravity. This is the exact opposite of antigravity where the gravity of a system would be decreasing relative to the field in which it is situated.They make reference to gravitons which are
theoretical gravitational particles, but no such particles exist as gravity is merely a condition of field and does not itself move independent of the accelerating underlying field, (ntffa).The fact
that the effect is one hundred million trillion times larger than Einstein’s General Relativity predicts is huge, as this indicates there is something wrong with Einstein’s perception of the
situation.It is my contention that there is a ratio of energy per unit of mass, in that the ratio of energy per unit of mass is different for every system of reference. This means that the ratio of
energy per unit of mass is a relative consideration associated with the dynamic structure of all physical matter.In that the sc ring is linearly accelerating, the underlying dynamic energy of the
ring is decreasing in inverse proportion to the rate of linear acceleration, this accounts for this huge difference in Einstein’s theoretical prediction and the experimental results.The experimental
results reported to the ESA correspond directly to the condition of field associated with the underlying dynamics of the super conductive ring.The linearly accelerating sc ring acts against the
underlying field in which the test is run, which is the field of the Earth.The linear acceleration of the ring is resistant to a further increase in acceleration while the nonlinear acceleration of
(ntffa) is non-resistant to a further increase in nonlinear acceleration. Therefore the ratio of energy per unit of mass associated with the ring decreases in inverse proportion to the increase in
its linear acceleration. This affects an increase in both the gravitoelectric field and the gravitomagnetic field, which may seem a bit confusing to some.It is important to note that the underlying
force of field, (ntffa) does not radiate, but is focused to the center of field, as what radiates is a factor of resistance corresponding to a differential in the underlying force. In this respect
the factor of resistance increases isometrically from the center of field.What has occurred here is that the linear acceleration of the ring has decreased the underlying dynamics of the ring, which
in turn causes a decrease in energy and an increase in resistance relative to the field in which the ring exists.An increase in resistance affects a decrease in energy and an increase in gravity,
whereas a decrease in resistance affects an increase in energy and a decrease in gravity relative to the system of reference.This experimental evidence supports the contentions of gravity control and
the theoretical principles upon which Unity is designed.When the super conductive ring is stationary, it cannot be slowed any further in terms of linear motion, which makes it appear that a
controlled decrease in the gravity of the ring should not be possible other than by the process of freezing and the induction of an electrical charge, which is perfectly correct.To effect a decrease
in gravity the underlying force itself must be modulated in a controlled manner, whereby affecting the underlying dynamics in the desired manner. An increase in (ntffa) can be achieved by the simple
focusing of field relative to the system of reference, which in turn affects an increase in the underlying energy, a decrease in resistance and a decrease in gravity relative to the system of
reference.This would suggest that Einstein’s equation, E=MC2, can only be applied in a very general sense, as there is a discrepancy in energy from one system to another, such as the various atomic
elements.Each element has its own energy potential corresponding to a specific ratio of energy per unit of mass, which means that Einstein’s equation is less than accurate.Furthermore, when a
molecular structure is reduced in size, even to the state of single atoms, each of the constituent portions is affected by an increase in energy, in relation to an underlying ratio of energy per unit
of mass. This means the larger mass has less energy per unit of mass than if the same mass is reduced to a number of smaller portions. The underlying energy of any system cannot be physically
accessed in a manner which would cause energy to radiate, as energy does not radiate but is always focused to the center of field.This spinning super conductor forms a field within the field of the
EarthThe linear acceleration of the super conductive ring is acting against the underlying nonlinear acceleration of field, (aether), which supplies direct experimental evidence of (ntffa)’s (aether)
existence.We at Gravity Control give two thumbs up to Tajmar & Matos and thank them for their important contribution to field dynamics.© 2006 David Barclay
5 Comments: | {"url":"http://gravityc-idealism.blogspot.com/2006/03/gravitomagnetic-induction-of.html","timestamp":"2014-04-16T07:14:23Z","content_type":null,"content_length":"30383","record_id":"<urn:uuid:d88a7818-1a83-4f23-81f5-d0d19cf840ea>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
ALEX Lesson Plans
Thinkfinity Lesson Plans
Subject: Mathematics
Title: Symmetries II: Conclusions Add Bookmark
Description: In this lesson, one of a multi-part unit from Illuminations, students reflect on what they learned in the three previous lessons. Eight thought questions (with a link to the answers) are
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Describing Rotations Add Bookmark
Description: In this lesson, one of a multi-part unit from Illuminations, students learn how to describe rotations of a figure. They then use an interactive Java applet to investigate the effect of
rotations through different angles and on different shapes.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Understanding Congruence, Similarity, and Symmetry Using Transformations and Interactive Figures: Visualizing Transformations Add Bookmark
Description: The interactive figures in this four-part example from Illuminations allow a user to manipulate a shape and observe its behavior under a particular transformation or composition of
transformations. e-Math Investigations are selected e-examples from the electronic version of the Principles and Standards of School Mathematics (PSSM). The e-examples are part of the electronic
version of the PSSM document. Given their interactive nature and focused discussion tied to the PSSM document, the e-examples are natural companions to the i-Math investigations.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Understanding Congruence, Similarity, and Symmetry Using Transformations and Interactive Figures: Composing Transformations Add Bookmark
Description: This is part four of a four-part e-example from Illuminations that features interactive figures that allow a user to manipulate a shape and observe its behavior under a particular
transformation or composition of transformations. In this part, Composing Transformations, the users are challenged to compose equivalent transformations in two different ways. e-Math Investigations
are selected e-examples from the electronic version of the Principles and Standards for School Mathematics (PSSM). Given their interactive nature and focused discussion tied to the PSSM document, the
e-examples are natural companions to the i-Math Investigations.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Understanding Congruence, Similarity, and Symmetry Using Transformations and Interactive Figures: Composing Reflections Add Bookmark
Description: This is part three of a four-part e-example from Illuminations that features interactive figures that allow a user to manipulate a shape and observe its behavior under a particular
transformation or composition of transformations. In this part, Composing Reflections, users can examine the result of reflecting a shape successively through two different lines. e-Math
Investigations are selected e-examples from the electronic version of the Principles and Standards for School Mathematics (PSSM). Given their interactive nature and focused discussion tied to the
PSSM document, the e-examples are natural companions to the i-Math Investigations.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Symmetries I Add Bookmark
Description: In this unit of four lessons, from Illuminations, investigate rotational symmetry. They learn about the mathematical properties of rotations and have an opportunity to make their own
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Reflections Across Two Mirror Lines Add Bookmark
Description: In this lesson, one of a multi-part unit from Illuminations, students learn what happens when a design is reflected twice across two different mirror lines. They use interactive Java
applets to explore reflections across parallel and intersecting mirror lines.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Dihedral Figures Add Bookmark
Description: Students will recognize dihedral symmetry and reflections in figures and examining various symmetries.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8,9,10,11,12
Subject: Arts,Mathematics
Title: Recognizing Transformations Add Bookmark
Description: This lesson introduces students to the world of symmetry and rotation in figures and patterns. Students learn how to recognize and classify symmetry in decorative figures and frieze
patterns, and get the chance to create and classify their own figures and patterns using JavaSketchpad applets.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Finding What Doesn't Change Add Bookmark
Description: In this lesson, one of a multi-part unit from Illuminations, students predict the effect of a rotation through a given angle. They also learn to predict the effect of two or more
rotations performed one after the other, and they find angles that leave a figure unchanged.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Arts,Mathematics
Title: Classifying Transformations Add Bookmark
Description: Students will identify and classify reflections and symmetries in figures and patterns. They will also have the opportunity to create frieze patterns from each of the seven classes using
the supplemental on-line activities.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Describing Reflections Add Bookmark
Description: In this lesson, one of a multi-part unit from Illuminations, students learn how reflections work and what happens when two or more reflections are applied one after the other. They use
interactive Java applets to examine the reflection of a point and how to describe reflections.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Symmetries II Add Bookmark
Description: In this unit of four lessons, from Illuminations, students use Java applets to investigate reflection, mirror, or bilateral symmetry. They learn about the mathematical properties of
mirror symmetry and have a chance to create designs with mirror symmetry.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Symmetries IV Add Bookmark
Description: This lesson, from Illuminations, helps students to understand and identify glide reflections. With the help of a Java applet, students construct glide reflections using a translation and
a reflection. Students then identify glide reflections from groups of band ornaments and wallpaper patterns.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12 | {"url":"http://alex.state.al.us/plans2.php?std_id=54183","timestamp":"2014-04-18T10:34:02Z","content_type":null,"content_length":"37162","record_id":"<urn:uuid:427b8124-cd98-499c-ae01-cecb4c39bc63>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00442-ip-10-147-4-33.ec2.internal.warc.gz"} |
Teaching at the Courant Institute, NYU
1. Verbal Description: Techniques of integration. Further applications. Plane analytic geometry. Polar coordinates and parametric equations. Infinite series, including power series.
2. Verbal Description: Systems of linear equations, Gaussian elimination process, Matrices and matrix operations, Determinants, Cramer’s rule. Vectors, Vector spaces, Basis and dimension, Linear
transformations, Eigenvalues, Eigenvectors, inner product, orthogonal projection, Gram-Schmidt process, quadratic forms and several applications.
Teaching Assistantships at UC Berkeley and Teaching statement
1. Instructor: David Aldous
Verbal Description: A treatment of ideas and techniques most commonly found in the applications of probability: Gaussian and Poisson processes, limit theorems, large deviation principles,
information, Markov chains and Markov chain Monte Carlo, martingales, Brownian motion and diffusion.
2. Instructor: Brad Luen
Verbal Description: Population and variables. Standard measures of location, spread and association. Normal approximation. Regression. Probability and sampling. Binomial distribution. Interval
estimation. Some standard significance tests.
3. Instructor: Elchanan Mossel
Verbal Description: Some knowledge of real analysis and metric spaces, including compactness, Riemann integral. Knowledge of Lebesgue integral and/or elementary probability is helpful, but not
essential, given otherwise strong mathematical background. Measuretheory concepts needed for probability. Expectation, distributions. Laws of large numbers and central limit theorems for
independent random variables. Characteristic function methods. Conditional expectations; martingales and theory convergence. Markov chains. Stationary processes. Also listed as Mathematics C218B.
4. Instructor: Timothy A. Thornton
Verbal Description: For students with mathematical background who wish to acquire basic concepts. Relative frequencies, discrete probability, random variables, expectation. Testing hypotheses.
Estimation. Illustrations from various fields. | {"url":"http://www.cims.nyu.edu/~partha/Teaching.php","timestamp":"2014-04-16T04:22:33Z","content_type":null,"content_length":"6816","record_id":"<urn:uuid:adc74db1-f85b-4969-aff4-95ce4ab7200b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00363-ip-10-147-4-33.ec2.internal.warc.gz"} |
Questions to ask while problem solving
I'm working on a set of possible questions one can ask their students (and teach their students to ask themselves) while they are problem solving in math. Note that these questions are related to the
work of George Pólya from his book How to Solve It.
What would you add?
Questions to ask during problem solving
What are your assumptions?
• What happens if you change those assumptions?
• What assumptions have other people made?
Is there another way to solve it?
• Within your current assumptions?
• With different assumptions?
How is this problem related to other problems you have done?
• Can you solve a related problem?
• Can you simplify the problem, and then solve it?
• Can you find connections between this problem and other problems?
Can you explain the solution to someone else?
• Can they explain your solution to you?
• Can they explain your solution to someone else?
• Can you explain your solution without words?
• Can you explain your solution using only words (no symbols or drawings)?
What tools could you use to help you solve this problem?
• Are there any technological tools that might make the problem easier to visualize or manipulate?
• Are there any mathematical techniques that might be connected to this problem?
How can you justify your solution?
• How can you prove your answer is unique (if it is unique)?
• If your answer is not unique, how many different answers are there?
• How do you know your answer is reasonable?
Can you reflect on your problem solving process?
• How could you change this problem?
• Can you think of related problems?
• What is interesting about this problem?
• How could you generalize this problem?
I might use this for my
Submitted by
Kristina Buenafe
on Tue, 08/20/2013 - 12:39.
I might use this for my beginning of year problem solving challenge (this year it will incorporate building a gumdrop structure)! Cool way to think about it abstractly!
Polya and others and more questions
Submitted by
Howard Phillips
on Tue, 08/20/2013 - 15:26.
I grew up with the books of Polya and also W.W.Sawyer, both geniuses in the 'What is going on? Have I seen anything like this before? Are there other ways of looking at it?' business. Polya asks
other questions, often really useful ones such as 'Is there a simpler problem buried in here, which I might have more success with?' and 'Are there any special cases?'. Questions I like to ask are
"If you had the solution, what would it look like?' and 'So you have a solution, how can you convince me,or anybody else, that it is right?'. Also, 'Can you visualise the situation, draw a picture or
two or a diagram?', and 'Is there any structure in the situation that I have overlooked?'. Advice I have often given is 'Give it a break, think of something else, let the brain get on with the job,
it doesn't need your attention constantly'.
Questions to ask while problem solving
Submitted by
Howard Phillips
on Fri, 08/23/2013 - 18:36.
There are two other books by Polya:
Induction and Analogy in Mathematics, and
Patterns of Plausible Reasoning
Well worth a read if you havent already done so
Hi, my name is Kayla
Submitted by
Kayla Szymanski
on Tue, 08/27/2013 - 15:49.
Hi, my name is Kayla Szymanski and I am currently enrolled in the EDM310 course. I have been assigned your blog for this week, and by this being done I have read your current post. I think that it is
a very good idea to give your students questions they can ask themselves while problem solving. As a former math student myself, sometimes just seeing a math problem will freak you out enough to
where you don't even want to begin. I think by giving your students these quick questions it will ease this sensation that I repeatedly felt while taking math courses. Also just a tip I think maybe
you could condense these rules into about 10 quick easy steps. This would be a great class motto or easy memorizing learning tool for each of your students. They could use it as a way to self check
themselves while solving problems too. I had a teacher that once gave us a saying, and each word meant; subtract, multiply, etc.,and it worked. You will be surprised what works and sticks in your
students heads.
Nice questions. I am sure
Submitted by
Sheila McGinley
on Mon, 09/02/2013 - 01:51.
Nice questions. I am sure these questions are going to help us all. But we do not follow these and we fall into problems. Thanks | {"url":"http://davidwees.com/content/questions-ask-while-problem-solving","timestamp":"2014-04-18T00:32:30Z","content_type":null,"content_length":"60346","record_id":"<urn:uuid:de575db0-1214-47cc-8338-dee87d0487e0>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
Truth table evaluation
Philip Riebold <philip@livenet.ac.uk>
Fri, 25 Feb 1994 16:43:04 GMT
From comp.compilers
| List of all articles for this month |
Newsgroups: comp.compilers
From: Philip Riebold <philip@livenet.ac.uk>
Keywords: logic, question
Organization: Compilers Central
Date: Fri, 25 Feb 1994 16:43:04 GMT
[apologies that this query hasn't got that much to do with compilers but I
thought some compiler techniques might be applicable]
I have an application that generates arbitrary postfix boolean expressions
such as
a b AND c b NOT AND OR END
There are up to 16 different variables in each expression and each
variable may occur more than once.
I need to evaluate the truth table for the expression. At present I use a
straightforward method of evaluating the expression for each of the
possible combination of the variables. For 16 variables this takes ~3
seconds on a SUN IPC.
Are there any ways I can speed this up ? There are special case rules I
can apply if all the operators are the same but I am looking for methods
which work in the general case.
Any help/suggestions would be appreciated.
Post a followup to this message
Return to the comp.compilers page.
Search the comp.compilers archives again. | {"url":"http://compilers.iecc.com/comparch/article/94-02-189","timestamp":"2014-04-19T19:50:12Z","content_type":null,"content_length":"4806","record_id":"<urn:uuid:d9b3c278-fd72-4841-b0f9-c2cbd675f91f>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00515-ip-10-147-4-33.ec2.internal.warc.gz"} |
Decimal Subtraction ( Read ) | Arithmetic
Jeremy and his family are driving to visit his grandparents. On the first day they drove 234.8 miles and on the second day they drove 251.6 miles. How many more miles did they drive on the second
Watch This
Khan Academy Subtracting Decimals
To subtract decimals, first write the decimals using the vertical alignment method. The decimal points must be kept directly under each other and the digits in the same place value must be kept in
line with each other. If the decimal numbers are signed numbers, the rules for adding integers are applied to the problem. The number of greater magnitude should be placed above the number of smaller
magnitude. Magnitude is simply the size of the number without respect to its sign. The number -42.8 has a magnitude of 42.8.
Example A
Subtract: $57.62 - 6.18$
Solution: Subtracting decimals is similar to subtracting whole numbers. Line up the decimal points so that you can subtract corresponding place value digits (e.g. tenths from tenths, hundredths from
hundredths, and so on). As with whole numbers, start from the right and work toward the left remembering to borrow when it is necessary.
$& \quad 57. \cancel{\overset{5}{6}} \ ^1 2\\& \underline{ \; \; -6.1 \; \; \; 8}\\& \quad 51.4 \ \ 4$
Example B
Solution: Begin by writing the question using the vertical alignment method. To ensure that the digits are aligned correctly, add zero to 98.04.
$& {\color{white}-} 98.04 {\color{blue}0}\\& \underline{-32.801}\\&$
Subtract the numbers.
$& {\color{white}-} 9 \overset{7}{\cancel{8}}.^1 0 \overset{3}{\cancel{4}} ~ ^1 {\color{blue}0}\\& \underline{-32. {\color{white} ^1} 80 {\color{white}~ ^1} 1}\\& {\color{white}-} 65. {\color{white}
^1} 23 {\color{white}~ ^1} 9$
Example C
Solution: The first step is to write the problem as an addition problem and to change the sign of the original number being subtracted. In other words, add the opposite.
Now, write and solve the problem using the vertical alignment method.
$67.65 & \\ \underline{ +25.43} & \\ +93.08 &$
Example D
Solution: The first step is to write the problem as an addition problem and to change the sign of the original number being subtracted. In other words, add the opposite.
Now write the problem using the vertical alignment method. Remember to put 259.687 above 137.4 because 259.687 is the number of greater magnitude. The two numbers that are being added have opposite
signs. Apply the same rule that you used when adding integers that had opposite signs – subtract the numbers and use the sign of the larger number in the answer.
$-259.687 & \\ \underline{ +137.4 {\color{white} 00}} &$
To ensure that the digits are aligned correctly, add zeros to 137.4.
$-259.687 & \\ \underline{+137.4 {\color{blue}00}} &$
Subtract the numbers.
$-259.687 & \\ \underline{ +137.4 {\color{blue}00}} & \\ -122.287 &$
The numbers being added have opposite signs. This means that the sign of the answer will be the same sign as that of the number of greater magnitude. In this problem the answer has a negative sign.
Concept Problem Revisited
Jeremy and his family are driving to visit his grandparents. On the first day they drove 234.8 miles and on the second day they drove 251.6 miles.
The decimal number 251.6 is of greater magnitude than 234.8. The numbers must be vertically aligned with the larger one above the smaller one. Now the numbers can be subtracted.
They drove 16.8 miles more on the second day.
Decimal Point
A decimal point is the place marker in a number that separates the whole number and the fraction part. The number 326.45 has the decimal point between the six and the four.
A magnitude is the size of a number without respect to its sign. The number -35.6 has a magnitude of 35.6.
Guided Practice
1. Subtract these decimal numbers: $(243.67)-(196.3579)$
2. $(32.47)-(-28.8)-(19.645)$
3. Josie has $59.27 in her bank account. She went to the grocery store and wrote a check for $62.18 to pay for the groceries. Describe Josie’s balance in her bank account now.
1. $(243.67)-(196.3579)=47.3121$
2. $(32.47)-(-28.8)-(19.645)=41.625$
Write the question as an addition problem and change the sign of the original number being subtracted.
Follow the rules for adding integers.
3. $59.27-$62.18=$-2.91. The account will have a negative value. This means that her account is overdrawn.
Subtract the following numbers:
1. $42.37-15.32$
2. $37.891-7.2827$
3. $579.237-45.68$
4. $4.2935-0.327$
5. $16.074-7.58$
6. $(-17.39)-(-49.68)$
7. $(92.75)+(-106.682)$
8. $(-72.5)-(-77.57)-(31.724)$
9. $(-82.456)-(279.83)+(-567.3)$
10. $(-57.76)-(-85.9)-(33.84)$
Determine the answer to the following problems.
11. The diameter of No. 12 bare copper wire is 0.08081 in., and the diameter of No. 15 bare copper wire is 0.05707 in. How much larger is the diameter of the No.12 wire compared to the diameter of
the No. 15 wire?
12. The resistance of an armature while it is cold is 0.208 ohm. After running for several minutes, the resistance increases to 1.340 ohms. Find the increase in resistance of the armature.
13. The highest temperature recorded in Canada this year was $114.8^\circ F$$-62.9^\circ F$
14. The temperature in Alaska was recorded as $-78.64^\circ F$$-59.8^\circ F$
15. Laurie has a balance of -$32.16 in her bank account. Write a problem that could represent this balance. | {"url":"http://www.ck12.org/arithmetic/Decimal-Subtraction/lesson/Subtraction-of-Decimals-Honors/","timestamp":"2014-04-19T09:51:43Z","content_type":null,"content_length":"111527","record_id":"<urn:uuid:b9428b73-71ae-41ca-8361-6fa0ec841f1e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00245-ip-10-147-4-33.ec2.internal.warc.gz"} |
Schottky locus in genus 2
up vote 8 down vote favorite
Let $\phi_g : \mathcal{M}_g \rightarrow \mathcal{A}_g$ be the period mapping from the open moduli space of genus $g$ Riemann surfaces to the moduli space of $g$-dimensional principally polarized
abelian varieties over $\mathbb{C}$. Thus for a Riemann surface $S$ the image $\phi_g(S)$ is the Jacobian of $S$. The Schottky problem consists in determining the image of $\phi_g$.
It is classical that $\text{Im}(\phi_2)$ is exactly the set of abelian varieties that are not isomorphic to a product of elliptic curves. This is asserted in many places, but I have not been able to
find a nice discussion of it in the literature. Does anyone know one? The more down-to-earth, the better.
add comment
5 Answers
active oldest votes
This will need expansion by a more knowledgable person, but as memory serves, it was proved by Mayer and Mumford that the closure in Ag of the locus of traditional Jacobians is the set
of products of Jacobians. This is probably exposed first in a talk in the 1964 Woods Hole talks on James Milne's site. (I see Mumford credits it there, on page 4 of his talk, in part
three of the Woods Hole notes, to Matsusaka and Hoyt. Apparently Mayer and Mumford computed the closure in the Satake compactification.) But let us try to explain this more in dim two.
A two diml ppav is a compact 2 torus A containing a curve C carrying the homology class a1xb1 + a2xb2, where the aj,bj are a basic symplectic homology basis of H1(A).
It follows from the topological Pontrjagin product that the induced map from the Albanese variety of C to A, has topological degree one, hence is an isomorphism. (I.e. the map from the
Cartesian product of C with itself g times to A, has image whose class is the g fold Pontrjagin product of [C], which equals g! times the fundamental class of A. Hence the induced map
from the g fold symmetric product of C, has image with exactly the fundamental class of A. Hence this map has degree one as does that induced from the Jacobian.)
up vote 6 Since it also induces the identity map on C, it also preserves the polarization.
down vote
accepted Let me speculate on the special cases. If C is reducible it is known (Complex abelian varieties and theta functions, George Kempf, p. 89, Cor. 10.4) that A is a product of elliptic
curves. If C is irreducible and singular then I guess the normalization map extends to a map of the Albanese of C to A. But that seems to imply the image of C in A does not span, a
So it seems that any irreducible curve C contained in a two diml ppav A and carrying the class of a principal polarization, is smooth and induces an isomorphism from the Albanese (i.e.
Jacobian) of the curve to the ppav.
I hope there is some useful information in this.
This was very helpful. Thanks! – G Fiori Jan 25 '12 at 20:21
add comment
By (a possible) definition, a principal polarization on an abelian surface is a curve with self-intersection 2. So, if smooth, it is a genus two curve and the abelian surface is a
jacobian. You have to rule out the case of a singular irreducible curve and the remaining possibility is a union of two elliptic curves meeting at a point and then you show that in this
up vote 7 case, the surface is the product of the two curves.
down vote
add comment
According to MR0364265 (51 #520) Oort, Frans; Ueno, Kenji "Principally polarized abelian varieties of dimension two or three are Jacobian varieties" (J. Fac. Sci. Univ. Tokyo Sect. IA
up vote 2 Math. 20 (1973), 377–381) covers this (I don't have access to the paper and am going solely by the review).
down vote
This was very helpful to me, and I had trouble deciding whether to accept it or roy smith's answer. Thanks! – G Fiori Jan 25 '12 at 20:22
add comment
Geoffrey Mess' paper "The Torelli group of genus 2 or 3 surfaces" provides two proofs of this fact---one cohomological and the other topological. But I understand neither. Maybe
up vote 1 down somebody could provide some commentary on his demonstrations?
I just looked at his paper, and I also understand neither of his proofs. But they look like exactly what I want! Hopefully an expert will come along and unpack them. – G Fiori Jan
24 '12 at 21:50
add comment
In genus 2 there is NO Schottky problem.
g(g+1)/2 = 3g-3.
up vote -1 Dim(Abelian) = dim(moduli curves)
down vote
PS Also in g=2 any curve is hyper-elliptic - again dimension count
2 Yes, this argument shows that the image of $\phi_2$ is dense in $\mathcal{A}_2$. But why is its complement as I described above (ie the Jacobians of curves of compact type)? – G Fiori
Jan 24 '12 at 17:39
As far as I understand the the product of elliptic curves will have decomposable period matrix "B". I.e. B = diag(t1,t2) While it is not decomposable for any curve or any other abelian
variety. So we see, that image is contained in the desired set. But you want more... To show that it coincides we need somehow to restore the curve from the abelian variety... Hmmm,
how to do this ? – Alexander Chervov Jan 24 '12 at 20:07
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry jacobians riemann-surfaces algebraic-curves abelian-varieties or ask your own question. | {"url":"http://mathoverflow.net/questions/86549/schottky-locus-in-genus-2/86623","timestamp":"2014-04-19T09:55:52Z","content_type":null,"content_length":"73361","record_id":"<urn:uuid:b3828e96-e69c-400e-b381-f502791000b2>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00234-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Re: Re: st: Fitting probit - estat gof puzzling results
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: Re: Re: st: Fitting probit - estat gof puzzling results
From Clyde B Schechter <clyde.schechter@einstein.yu.edu>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject Re: Re: Re: st: Fitting probit - estat gof puzzling results
Date Fri, 2 Sep 2011 18:51:36 +0000
As part of a long-running thread, Jesus Gonzalez wrote:
"You said models like this (that discriminate quite well but are not well calibrated), "may still be useful for understanding factors that promote or inhibit applying, even if they are not well calibrated--but such models would not be suitable for some other purposes". My questions are two-fold:
1) If the data generating process does not match with the model I am using, then the assumptions about variable's distribution might not hold (for example probability distribution of residuals). Therefore, every statistic that depends on such assumptions might also be unreliable, and hence, the simplest hypothesis testing might be untrustworthy (for example, if one variable coefficient is statistically different from 0). Am I right? If I am right why and/or how the model "may still be useful for understanding factors that promote or inhibit applying"?. ....
2) What are the kind of purposes for which such a model might not be suitable? For example, an out-of-sample prediction of the probability of applying given the RHS variables?"
The type of hypothesis testing we do with probit and similar models is, in fact, conditional on the model being properly specified! The estimating procedures give us the values of the model's parameters that, in some sense (ml, OLS, whatever) best fit the data. But even the glove that fits your foot better than any other glove won't work very well as a sock. For some models there is a well-developed science of robustness to violation of assumptions. I'm not an expert in that area and won't really go further in this direction, because my comment really was intended to refer to something different:
There is more to life than hypothesis testing! Sometimes what we are really after is getting accurate predictions. Sometimes what we are really interested in is a semi-quantitative or qualitative understanding of relationships among observables.
Try running this example:
// EXAMPLE FOR J GONZALEZ
set obs 1000
set seed 32673099
// POPULATE X WITH VALUES BETWEEN 0 AND 10
gen x = (_n - 1)/100
gen lx = log(x)
gen p_true = invlogit(lx)
gen y = (runiform() < p_true)
// A GRAPH TO SHOW THAT A SIMPLE LOGISTIC REGRESSION OF
// Y ON X WOULD BE SERIOUSLY MIS-SPECIFIED
lowess y x, logit
// FIT A LOGISTIC REGRESSION MODEL OF Y TO LOG X
logit y lx
// (AND IT IS, BY CONSTRUCTION, THE CORRECT MODEL)
estat gof, group(10) table
lroc, nograph
// FIT A LOGISTIC REGRESSION MODEL OF Y TO X
// IT HAS POOR CALIBRATION, ESPECIALLY FOR LOW
// AS THAT OF THE CORRECT MODEL
logit y x
estat gof, group(10) table
lroc, nograph
// END OF EXAMPLE
The model based on x is mis-specified and poorly calibrated, but it discriminates the outcome y just as well as the true model based on log x. While inferences based on the coefficients in the x-model would be misleading, this incorrect model is nevertheless useful in one way: it correctly identifies x as an important determinant of y. It gets the quantitative relationship wrong, but captures the qualitative fact that y and x are strongly associated with each other: higher values of x are associated with greater probabilities of y = 1. If that determination were our major objective, this model would be fine. But if the outcome y were, say, an insurable event, you would be in trouble if you relied on this model to set premiums because the predicted probabilities are not close to the actual probabilities. In fact, for premium-setting, a better model might well be to just use the marginal probability of y: this ultra-simple model would have no discrimination at all (area u!
nder ROC = 0.5) but would be perfect for that purpose. Similar considerations would apply if the purpose of the model is to budget and plan for dealing with occurrences of event y.
This example is obviously contrived, and the nature of the mis-specification could be easily discovered during analysis--but it is, I think, a clearer example of the idea than could be found in real-world data. And although the example is a bit of a caricature, these same phenomena arise in real work. Our handy-dandy logistic, probit, and other models, applied to variables selected because they are what we happen to be able to measure, etc., are almost always mis-specifications of the real data generating process. But depending on the nature and extent of the mis-specification they can be useful for various purposes. The key thing to remember is that a model that is useful for one purpose may be useless or even dangerous if used for other purposes. When developing and using models it is always important to bear that in mind and determine their suitability for the purpose at hand.
Hope this makes it clearer.
Clyde Schechter
Dept. of Family & Social Medicine
Albert Einstein College of Medicine
Bronx, NY, USA
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2011-09/msg00102.html","timestamp":"2014-04-21T15:53:15Z","content_type":null,"content_length":"12314","record_id":"<urn:uuid:112bb9f9-1e05-4afd-b445-db969741a97a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00304-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Misner, Thorne and Wheeler, Exercise 8.5 (c)
Replies: 38 Last Post: Apr 13, 2013 11:57 PM
Messages: [ Previous | Next ]
Re: Misner, Thorne and Wheeler, Exercise 8.5 (c)
Posted: Apr 9, 2013 3:05 AM
"Dirk Van de moortel" wrote in message
"Hetware" <hattons@speakyeasy.net> schreef in bericht
> On 4/8/2013 9:27 AM, Dirk Van de moortel wrote:
>> Hetware <hattons@speakyeasy.net> wrote:
>>> This is the geodesic equation under discussion:
>>> d^2(r)/dt^2 = r(dp/dt)^2
>>> d^2(p)/dt^2 = -(2/r)(dp/dt)(dr/dt).
>>> r is radius in polar coordinates, p is the angle, and t is a path
>>> parameter.
>>> The authors ask me to "[S]olve the geodesic equation for r(t) and
>>> p(t), and show that the solution is a uniformly parametrized straight
>>> line(x===r cos(p) = at+p for some a and b; y===r sin(p) = jt+k for
>>> some j and k).
>> Normally we'd write dotted variables, but with quotes it's easier.
>> So write
> Like this?
> <math xmlns='http://www.w3.org/1998/Math/MathML'
This is a text group :-)
Dirk Vdm
I knew you would snip, you are such a predictable imbecile.
-- "Dork Van de faggot" <dirkvandemoortel@hotspam.not> wrote in message
> Indeed, writing LT and inverse:
> x' = g ( x - v t ) [1]
> t' = g ( t - v x ) [2]
> x = g ( x' + v t' ) [3]
> t = g ( t' + v x' ) [4]
[ but v' = x'/t', the inverse velocity ]
> No, imbecile, v' = 0.
"Dork Van de faggot" wrote in message news:kfp3ba$ku0$1@speranza.aioe.org...
How hard is it to listen to the definitions and stick with the rules?
"Did you ever had algebra?" - Dork Van de faggot
"the transformation equations are valid only for speeds below or up to c" --
Dork Van de faggot
-- So if T = 5 years and v = 0.8c, then the stay at home twin will
have aged 10 years (2T) while his travelling twin sister will have
aged 6 years (2T/g). <no silly grin>
-- Psychodork Van de improper faggot.
Date Subject Author
4/7/13 Misner, Thorne and Wheeler, Exercise 8.5 (c) Guest
4/7/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway
4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway
4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware
4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway
4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware
4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway
4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware
4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware
4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway
4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware
4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway
4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware
4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway
4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware
4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dirk Van de moortel
4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway
4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) rotchm@gmail.com
4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dirk Van de moortel
4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dono
4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) rotchm@gmail.com
4/13/13 MTW, Exercise 8.5 (c), Crotchm reasserts his imbecility Dono
4/13/13 MTW Exercise 8.5 (c) - Crotchm the poser puts his foot in his mouth, Dono
big time
4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dono
4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) rotchm@gmail.com
4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dono
4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) rotchm@gmail.com
4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dono
4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dono
4/13/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) rotchm@gmail.com
4/13/13 MTW Exercise 8.5 (c) - Crotchm the poser puts his foot in his mouth, Dono
big time
4/13/13 Re: MTW Exercise 8.5 (c) - Crotchm the poser puts his foot in his rotchm@gmail.com
mouth, big time
4/13/13 Re: MTW Exercise 8.5 (c) - Crotchm the poser puts his foot in his Dono
mouth, big time
4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Guest
4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Dirk Van de moortel
4/9/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway
4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Rock Brentwood
4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Hetware
4/8/13 Re: Misner, Thorne and Wheeler, Exercise 8.5 (c) Lord Androcles, Zeroth Earl of Medway | {"url":"http://mathforum.org/kb/message.jspa?messageID=8902722","timestamp":"2014-04-19T00:51:59Z","content_type":null,"content_length":"64438","record_id":"<urn:uuid:0e895f51-848f-4f43-a00c-380269b3d255>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
Applied combinatorics with problem solving
Applied combinatorics with problem solving
Bradley W. Jackson, Dmitri Thoro
Addison-Wesley Pub. Co., 1990 - Education - 324 pages
From inside the book
39 pages matching represented in this book
Where's the rest of this book?
Results 1-3 of 39
What people are saying - Write a review
We haven't found any reviews in the usual places.
Related books
Combinatorial Problem Solving 1
CHAPTER 4
CHAPTER 2 40
8 other sections not shown
Other editions - View all
Applied Combinatorics with Problem Solving
Bradley W. Jackson,Dmitri Thoro
No preview available - 1990
Common terms and phrases
adjacent arrangements assigned balls binary tree binomial bipartite graph breadth first search chosen coefficient coins color combinatorics Compute the number connected graph consecutive Consider
contains count the number counting procedure denote Describe an algorithm different objects digits directed graph elements equal equation equivalence classes Eulerian circuit Example exponential
generating function Find the number formula four graph G graph theory Hamiltonian cycle Hamiltonian path identical induction juggler labeled least lexicographic order Multiplication Principle number
of different number of distributions number of edges number of objects number of outcomes number of partitions number of sequences obtain path permutations planar graph polynomial positive integer
possible Principle of Inclusion-Exclusion problem Proof properties Puzzle recurrence relation represented satisfy Section selection sequences of length Show simple graph solve spanning tree subgraph
summands SUPPLEMENTARY COMPUTER PROJECTS Suppose THEOREM total number unlimited repetition vertex weighted graph Write a program
Bibliographic information
Applied combinatorics with problem solving
Bradley W. Jackson, Dmitri Thoro
Addison-Wesley Pub. Co., 1990 - Education - 324 pages
Applied Combinatorics with Problem Solving
Bradley W. Jackson,Dmitri Thoro
No preview available - 1990 | {"url":"http://books.google.com/books?id=GOzuAAAAMAAJ&q=represented&dq=related:OCLC227590280&source=gbs_word_cloud_r&cad=5","timestamp":"2014-04-16T11:05:31Z","content_type":null,"content_length":"113140","record_id":"<urn:uuid:3b4bb1d6-54b5-49e1-b683-70cc827425ee>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
The FLOW CONSULTANT
is the only software endorsed by Richard W. Miller covering all differential producers
and exact equations of state presented in
The Flow Measurement Engineering Handbook.
The Flow Consultant includes Restrictive Orifice, Turbine meters, Vortex meters, Fluidic meters, Swirl meters, All frequency /digital output meters, Fixed Geometry and V-Cone Flowmeters. Constants
for the following Flowmeters: Annubar, Elbow, Foxboro Integral Flow Orifice (IFO), Preso, Multiport Averaging (MPA), Taylor Wedge, Veris Verabar and the ability to enter the flow coefficient.
Users interested in other devices may contact us for consideration.
What's new in Version 8?
Version 8 is now available for purchase or upgrade.
Order Version 8 Now
Upgrades:Renew or Upgrade Licenses.
Accuracy / Uncertainty
AGA 8
Flow Versus Differential Pressure
Gas Mixtures
Fixed Geometry Devices Compact Orifice, Elbow, IFO, Multiport Averaging, Wedge Meters and V-Cone
Fluid PropertiesOnly The FLOW CONSULTANT ™ software contains fluid properties modules using the exact solution and not a generalize equation of state.
ISA DataSheet Export Data to Microsoft ™ Excel using the standardized Datasheet or customize your own corporate version.
Meters covered by The FLOW CONSULTANT ™
Data Storage
Pipe Material And Primary Element Material
Noise Calculation
Restrictive Orifice
Standards and Reference Texts
April 2004 - Regarding ISO 5167 and ASME -3M
Standards Notice From R.W. Miller
Wide Range Metering
Tap into a Wealth of Knowledge
The FLOW CONSULTANT ™continuously monitors input throughout the computational process. Warnings are issued and default data displayed for entries outside the manufacturer's recommendations or
published standards.
Constants are built in for virtually every model of Orifice Plate, Nozzle or Venturi are included. Solve for Liquid, Gas and Vapor fluid states.
Exclusive engineering features include continual checks for Cavitation or Flashing Flows, Critical (Choked) Flow-Rate Computations, Restrictive Orifice Computations, Energy and Overall Pressure
Loss and unique solutions derived by R.W. Miller for computations outside standard limits.
Meters covered by The FLOW CONSULTANT ™
Fixed Geometry Devicesbecame available in The FLOW CONSULTANT ™ Version 5.1.
The Flow Consultant is a modular design. Each segment of our program is compartmentalized for total design flexibility. Users can change any design element without any down time to re-compute.
For example, modify the Beta Ratio or Pipe Size to immediately update the Flow-Rate or vary the Flow-Rate for immediate change to the Beta Ratio.
Print Screens Shots directly from The FLOW CONSULTANT ™ without going into a separate application.
Standards and Reference Texts
Detail Method, Gravity Method and Heating Value
Accepted US standard for the use of orifice meters used to meter natural gas and other fluids.
Accepted standard used by many industries for the metering of many fluid and in particular steam and water flow measurement.
For Throat Tap nozzles
ANSI 2530
The American National Standards Institute's (ANSI) standard that duplicates AGA 3
ANSI 2530/AGA3/ Buckingham/Spink
The original AGA/ANSI/ASME standard for the metering of fluid with the flange tap orifice Flowmeter. Once widely used by all US industries in designing flow measurement equipment, this standard
has been superseded by the above reference documents.
API Chapter 14
The ANSI standards for the metering of Natural gas.
ISO-5167 (2003)*
The International Standards Organization's (ISO) Standard, now in three parts, for the metering of all fluid using Orifices (all tapings), Nozzles, and Venturi Meters.
Miller (1983 to date)
S-GERG Equation of State for Natural Gas is widely used in Europe for custody transfer measurements. S-GERG requires input of the heat content BTU (Joules), the relative density (Specific
Gravity) and the mole fraction of carbon dioxide.
Spink (1935 to date)
Regarding ISO 5167, ASME -3M, and AGA/API/ANSI 2530
From R.W. Miller - April 2004
ASME has recently approved separate standards for Orifice, Nozzle, and Venturi Flowmeters. There is a major change to the
• Orifice Discharge coefficient computational equation for all tappings
• The Gas Expansion factor equation for all tappings
ASME-3M and International standard ISO 5167 now use identical equations.
Other US standards (API/AGA/ANSI) no longer agree with International practice for either calculation.
Only The FLOW CONSULTANT software contains the new equation.
Only The FLOW CONSULTANT ™ software solves for fluid properties modules using the exact solution and not the generalized state equation used in most software. These solutions can be in error by
several percent or more.
The Fluid Properties modules contain equation constants for computing the properties for hundreds of fluids. Fluid Properties constants The FLOW CONSULTANT ™ are maintained in accordance with
all the current reference texts and standards.
Enter only Pressure and Temperature to compute Base Compressibility, Base Specific Gravity, Base Density, Flowing Compressibility, Flowing Density, Flowing Specific Gravity, Liquid
Compressibility, Viscosity or the Isentropic Exponent.
Enter your own fluid properties through our custom fluid screen.
AGA-8 for Natural Gas Heating Value, Gravity Method and theDetail Method
AIChE Constants for over 250 fluids. Exact solution correcting for Liquid Compressibility, Temperature and Pressure.
ASME - equations for Water, Saturated Steam and Superheated Steam
Modified Benedict Webb Rubin exact equations for Air, Ammonia, Argon, Butane, CO2 , Ethane, Heptane, Hexane, I-Butane, I-Pentane, Methane, Nitrogen, Octane, Oxygen, Pentane, Propane and
NX-19and S-GERG for Natural Gas
Fluid Property Reference
Mercury Meters
Two-Phase Flow/Multi-Component Flow using either the James or Murdock flow rate computational solution
Drain and Vent Hole
Saturated Liquids
US and SI units can be interchanged and modified at any time without the time consuming task of recalculating. The Flow Consultant operates on SI units so converting existing data is automatic.
Storage of user defined templates allows for infinite unit combinations with just one click of the mouse.
All units are verified to industry standards for the current design conditions.
If you don't find your units in our software, contact us and we will add it.
Our simplified interface for entering mole fraction percentages solves the often cumbersome detail method solution.
AGA-8 and Gas Mixtures entry screens
Fluids covered by AGA-8 and Gas Mixtures.
Also included: Heating Value, Gravity, N2 and CO2
Developed by R.W. Miller, the Gas Mixtures interface expands the AGA-8 interface with additional fluids not included in the AGA-8 standard.
Select from all standard pipe sizes or manually enter pipe size.
The thermal expansion factor is calculated by selecting from these pipe materials and primary element materials:
Carbon Steel
Chrome-Moly Steel
Stainless Steel (300, 304, 316, 321, 400)
Super Duplex
Hastelloy B
Hastelloy C
Inconel X, annealed
Inconel, 624
Haynes Stellite
Monel 400
Existing Flow Consultant users may note some older materials are no longer listed. These can still be used by Entering the Thermal Expansion Coefficient. Removed materials were Yellow Brass,
Beryllium Copper, Cupronickel and Pyrex Glass. If you are upgrading data from older versions of the Flow Consultant, these will default to entering the thermal expansion coefficient.
Enter the Thermal Expansion Coefficient for pipe material and primary element material if your materials are not in the pick list.
Noise Calculation
The ISA SP1, SP2, etc are ISA standards compatible or identical to the work on noise done by Hans Bauman, Masonelian for control valves.
This work is considered the basis for the most practical solution for noise on restrictive devices available in the literature. It is the source document for the solution used for the Flow
Consultant noise calculation.
Restrictive Orifice
An independent review of our competition's software shows the following:
Users can suppress or print any warnings or error messages generated during the computational process.
Our detailed specification sheet can be completely customized by using the Data Exporter.
Includes a customizable ISA style datasheet.
Print Screens Shots directly from The FLOW CONSULTANT ™ without going into a separate application.
A Print Preview window is now available
The Flow Consultant is backed by Microsoft Access TM. Our software provides a utility to create and maintain multiple databases for easy to use, directory based storage. Create a new data
warehouse by simply picking a name and destination.
Create custom external data objects with The Data Exporter. The Data Exporter creates user defined Microsoft Access TM tables export to spreadsheets, corporate databases or any other means of
Using the Data Exporter, share data with Microsoft Excel TM spreadsheets.
ISA DataSheet for Microsoft Excel TM
Data is exported to an Excel worksheet in the ISA format.
This puts the full capabilities of Excel graphical and mathematical capabilities at your fingertips. Included on the standard printout is a data and graphical showing dp vs flow-rate.
Or Modify the ISA format data sheet to create a custom company specific worksheet.
Click Here To View a Sample Data Sheet
Accuracy / Uncertainty Calculations
The estimated uncertainty based on the measured and unmeasured variables used in the flow rate computation are combined in accordance with the selected standards procedures to give the overall
uncertainty (Accuracy).
The estimated uncertainty of the discharge coefficient and gas expansion factor is computed using the appropriate standards. The user inputs the uncertainties of the other measured or unmeasured
Click Here to View Accuracy Screen
Flow Versus Differential Pressure
Unique to The FLOW CONSULTANT ™ is the exportable (and printable) tabulation shown below. The Standard graph is exported to Microsoft Excel™ via the Export Excel Function.
First you select the number of points to compute then the screen displays:
1. Scaled flow rate as a function of the differential
2. The exact flow rate corrected for the discharge coefficient and gas expansion factor for flow rate
3. The directional Bias error between actual and scaled flow rate at the measured differential.
Click Here to View Flow Vs Differential Pressure
Wide Range Metering
Wide Range Metering Accuracy can now easily be determined by
Step 1 select Flow Vs Dp to determine the +/- Bias, errors that occur by not including changes in discharge coefficient and gas expansion factor with Flow-Rate
Step 2 select Accuracy to determine the+/- error at the lowest acceptable flow rate.
Combine the bias and precision error by summation and repeat for additional transmitters.
Technical Upgrades are free of charge for one year after purchase.
Support Contracts
System Requirements
Windows 95/98/NT 4.0/02K/XP/Vista/Windows 7/Windows 8
Recommended P5 Processor or Higher
License Agreement
User Manual
How to Order
The FLOW CONSULTANT ™
Renewals and License Upgrades
View Screens and Sample Calculation Online
Have a Question?
Concerned about an Equation or Standard?
How does your Current Software Compare to The FLOW CONSULTANT?
Contact Us | {"url":"http://rwmillerassociates.com/FC6.htm","timestamp":"2014-04-18T14:30:09Z","content_type":null,"content_length":"71850","record_id":"<urn:uuid:397ae13e-dd65-4e33-8253-65e819524091>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00068-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tags: Monte Carlo
Note: Results do not include pending, unpublished, and some private items.
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used in simulating physical and mathematical
systems. Because of their reliance on repeated computation of random or pseudo-random numbers, these methods are most suited to calculation by a computer and tend to be used when it is unfeasible or
impossible to compute an exact result with a deterministic algorithm.
Learn more about quantum dots from the many resources on this site, listed below. More information on Monte Carlo method can be found here.
Teaching Materials (1-20 of 22) | {"url":"http://nanohub.org/tags/montecarlo/teachingmaterials?sort=title","timestamp":"2014-04-19T07:11:33Z","content_type":null,"content_length":"59062","record_id":"<urn:uuid:6153b0b5-2196-4ec6-a70f-a1cb6a43dc66>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
search results
Expand all Collapse all Results 26 - 50 of 70
26. CMB 2008 (vol 51 pp. 460)
On Primitive Ideals in Graded Rings
Let $R=\bigoplus_{i=1}^{\infty}R_{i}$ be a graded nil ring. It is shown that primitive ideals in $R$ are homogeneous. Let $A=\bigoplus_{i=1}^{\infty}A_{i}$ be a graded non-PI just-infinite
dimensional algebra and let $I$ be a prime ideal in $A$. It is shown that either $I=\{0\}$ or $I=A$. Moreover, $A$ is either primitive or Jacobson radical.
Categories:16D60, 16W50
27. CMB 2008 (vol 51 pp. 424)
Noncommutative Symmetric Bessel Functions
The consideration of tensor products of $0$-Hecke algebra modules leads to natural analogs of the Bessel $J$-functions in the algebra of noncommutative symmetric functions. This provides a simple
explanation of various combinatorial properties of Bessel functions.
Categories:05E05, 16W30, 05A15
28. CMB 2008 (vol 51 pp. 261)
On the Classification of Rational Quantum Tori and the Structure of Their Automorphism Groups
An $n$-dimensional quantum torus is a twisted group algebra of the group $\mathbb{Z}^n$. It is called rational if all invertible commutators are roots of unity. In the present note we describe a normal form for rational $n$-dimensional quantum tori over any field. Moreover, we show that for $n = 2$ the natural exact sequence describing the automorphism group of the quantum torus splits over any field.
Keywords:quantum torus, normal form, automorphisms of quantum tori
29. CMB 2008 (vol 51 pp. 291)
Group Algebras with Minimal Strong Lie Derived Length
Let $KG$ be a non-commutative strongly Lie solvable group algebra of a group $G$ over a field $K$ of positive characteristic $p$. In this note we state necessary and sufficient conditions so that
the strong Lie derived length of $KG$ assumes its minimal value, namely $\lceil \log_{2}(p+1)\rceil $.
Keywords:group algebras, strong Lie derived length
Categories:16S34, 17B30
30. CMB 2008 (vol 51 pp. 81)
Homotopy Formulas for Cyclic Groups Acting on Rings
The positive cohomology groups of a finite group acting on a ring vanish when the ring has a norm one element. In this note we give explicit homotopies on the level of cochains when the group is
cyclic, which allows us to express any cocycle of a cyclic group as the coboundary of an explicit cochain. The formulas in this note are closely related to the effective problems considered in
previous joint work with Eli Aljadeff.
Keywords:group cohomology, norm map, cyclic group, homotopy
Categories:20J06, 20K01, 16W22, 18G35
31. CMB 2007 (vol 50 pp. 105)
On Valuations, Places and Graded Rings Associated to $*$-Orderings
We study natural $*$-valuations, $*$-places and graded $*$-rings associated with $*$-ordered rings. We prove that the natural $*$-valuation is always quasi-Ore and is even quasi-commutative (i.e., the corresponding graded $*$-ring is commutative), provided the ring contains an imaginary unit. Furthermore, it is proved that the graded $*$-ring is isomorphic to a twisted semigroup algebra. Our results are applied to answer a question of Cimprič regarding $*$-orderability of quantum groups.
Keywords:$*$--orderings, valuations, rings with involution
Categories:14P10, 16S30, 16W10
32. CMB 2006 (vol 49 pp. 347)
Affine Completeness of Generalised Dihedral Groups
In this paper we study affine completeness of generalised dihedral groups. We give a formula for the number of unary compatible functions on these groups, and we characterise for every $k \in \mathbb{N}$
the $k$-affine complete generalised dihedral groups. We find that the direct product of a $1$-affine complete group with itself need not be $1$-affine complete. Finally, we give an example of a
nonabelian solvable affine complete group. For nilpotent groups we find a strong necessary condition for $2$-affine completeness.
Categories:08A40, 16Y30, 20F05
33. CMB 2006 (vol 49 pp. 265)
Endomorphisms That Are the Sum of a Unit and a Root of a Fixed Polynomial
If $C=C(R)$ denotes the center of a ring $R$ and $g(x)$ is a polynomial in $C[x]$, Camillo and Simón called a ring $g(x)$-clean if every element is the sum of a unit and a root of $g(x)$. If $V$ is a vector space of countable dimension over a division ring $D$, they showed that $\operatorname{End}_{D}V$ is $g(x)$-clean provided that $g(x)$ has two roots in $C(D)$. If $g(x)=x-x^{2}$ this shows that $\operatorname{End}_{D}V$ is clean, a result of Nicholson and Varadarajan. In this paper we remove the countable condition, and in fact prove that $\operatorname{End}_{R}M$ is $g(x)$-clean for any semisimple module $M$ over an arbitrary ring $R$ provided that $g(x)\in (x-a)(x-b)C[x]$ where $a,b\in C$ and both $b$ and $b-a$ are units in $R$.
Keywords:Clean rings, linear transformations, endomorphism rings
Categories:16S50, 16E50
34. CMB 2005 (vol 48 pp. 587)
Separation of Variables for $U_{q}(\mathfrak{sl}_{n+1})^{+}$
Let $U_{q}(\mathfrak{sl}_{n+1})^{+}$ be the positive part of the quantized enveloping algebra $U_{q}(\mathfrak{sl}_{n+1})$. Using results of Alev-Dumas and Caldero related to the center of $U_{q}(\mathfrak{sl}_{n+1})^{+}$, we show that this algebra is free over its center. This is reminiscent of Kostant's separation of variables for the enveloping algebra $U(\mathfrak{g})$ of a complex semisimple Lie algebra $\mathfrak{g}$, and also of an analogous result of Joseph-Letzter for the quantum algebra $\check{U}_{q}(\mathfrak{g})$. Of greater importance to its representation theory is the fact that $U_{q}(\mathfrak{sl}_{n+1})^{+}$ is free over a larger polynomial subalgebra $N$ in $n$ variables. Induction from $N$ to $U_{q}(\mathfrak{sl}_{n+1})^{+}$ provides infinite-dimensional modules with good properties, including a grading that is inherited by submodules.
Categories:17B37, 16W35, 17B10, 16D60
35. CMB 2005 (vol 48 pp. 445)
On the Garsia Lie Idempotent
The orthogonal projection of the free associative algebra onto the free Lie algebra is afforded by an idempotent in the rational group algebra of the symmetric group $S_n$, in each homogeneous
degree $n$. We give various characterizations of this Lie idempotent and show that it is uniquely determined by a certain unit in the group algebra of $S_{n-1}$. The inverse of this unit, or,
equivalently, the Gram matrix of the orthogonal projection, is described explicitly. We also show that the Garsia Lie idempotent is not constant on descent classes (in fact, not even on coplactic
classes) in $S_n$.
Categories:17B01, 05A99, 16S30, 17B60
36. CMB 2005 (vol 48 pp. 355)
On Maps Preserving Products
Maps preserving certain algebraic properties of elements are often studied in Functional Analysis and Linear Algebra. The goal of this paper is to discuss the relationships among these problems
from the ring-theoretic point of view.
Categories:16W20, 16N50, 16N60
37. CMB 2005 (vol 48 pp. 317)
On Pseudo-Frobenius Rings
It is proved here that a ring $R$ is right pseudo-Frobenius if and only if $R$ is a right Kasch ring such that the second right singular ideal is injective.
Categories:16D50, 16L60
38. CMB 2005 (vol 48 pp. 275)
Krull Dimension of Injective Modules Over Commutative Noetherian Rings
Let $R$ be a commutative Noetherian integral domain with field of fractions $Q$. Generalizing a forty-year-old theorem of E. Matlis, we prove that the $R$-module $Q/R$ (or $Q$) has Krull dimension
if and only if $R$ is semilocal and one-dimensional. Moreover, if $X$ is an injective module over a commutative Noetherian ring such that $X$ has Krull dimension, then the Krull dimension of $X$ is
at most $1$.
Categories:13E05, 16D50, 16P60
39. CMB 2005 (vol 48 pp. 80)
Trivial Units for Group Rings with $G$-adapted Coefficient Rings
For each finite group $G$ for which the integral group ring $\mathbb{Z}G$ has only trivial units, we give ring-theoretic conditions for a commutative ring $R$ under which the group ring $RG$ has
nontrivial units. Several examples of rings satisfying the conditions and rings not satisfying the conditions are given. In addition, we extend a well-known result for fields by showing that if $R$
is a ring of finite characteristic and $RG$ has only trivial units, then $G$ has order at most 3.
Categories:16S34, 16U60, 20C05
40. CMB 2004 (vol 47 pp. 445)
Biprojectivity and Biflatness for Convolution Algebras of Nuclear Operators
For a locally compact group $G$, the convolution product on the space $\mathcal{N}(L^p(G))$ of nuclear operators was defined by Neufang [Neuf_PhD]. We study homological properties of the convolution algebra $\mathcal{N}(L^p(G))$ and relate them to some properties of the group $G$, such as compactness, finiteness, discreteness, and amenability.
Categories:46M10, 46H25, 43A20, 16E65
41. CMB 2004 (vol 47 pp. 343)
Combinatorics of Words and Semigroup Algebras Which Are Sums of Locally Nilpotent Subalgebras
We construct new examples of non-nil algebras with any number of generators, which are direct sums of two locally nilpotent subalgebras. Like all previously known examples, our examples are
contracted semigroup algebras and the underlying semigroups are unions of locally nilpotent subsemigroups. In our constructions we make more transparent than in the past the close relationship
between the considered problem and combinatorics of words.
Keywords:locally nilpotent rings, nil rings, locally nilpotent semigroups, semigroup algebras, monomial algebras, infinite words
Categories:16N40, 16S15, 20M05, 20M25, 68R15
42. CMB 2003 (vol 46 pp. 14)
Generalized Commutativity in Group Algebras
We study group algebras $FG$ which can be graded by a finite abelian group $\Gamma$ such that $FG$ is $\beta$-commutative for a skew-symmetric bicharacter $\beta$ on $\Gamma$ with values in $F^*$.
Categories:16S34, 16R50, 16U80, 16W10, 16W55
43. CMB 2002 (vol 45 pp. 499)
Group Gradings on Matrix Algebras
Let $\Phi$ be an algebraically closed field of characteristic zero, $G$ a finite, not necessarily abelian, group. Given a $G$-grading on the full matrix algebra $A = M_n(\Phi)$, we decompose $A$ as the tensor product of graded subalgebras $A = B\otimes C$, $B\cong M_p(\Phi)$ being a graded division algebra, while the grading of $C\cong M_q(\Phi)$ is determined by that of the vector space $\Phi^n$. Now the grading of $A$ is recovered from those of $B$ and $C$ using a canonical "induction" procedure.
44. CMB 2002 (vol 45 pp. 711)
Classification of Quantum Tori with Involution
Quantum tori with graded involution appear as coordinate algebras of extended affine Lie algebras of type $\mathrm{A}_1$, $\mathrm{C}$ and $\mathrm{BC}$. We classify them in the category of algebras with involution. From this, we obtain precise information on the root systems of extended affine Lie algebras of type $\mathrm{C}$.
45. CMB 2002 (vol 45 pp. 451)
Coordinatization Theorems For Graded Algebras
In this paper we study simple associative algebras with finite $\mathbb{Z}$-gradings. This is done using a simple algebra $F_g$ that has been constructed in Morita theory from a bilinear form $g\colon U\times V\to A$ over a simple algebra $A$. We show that finite $\mathbb{Z}$-gradings on $F_g$ are in one to one correspondence with certain decompositions of the pair $(U,V)$. We also show that any simple algebra $R$ with finite $\mathbb{Z}$-grading is graded isomorphic to $F_g$ for some bilinear form $g\colon U\times V \to A$, where the grading on $F_g$ is determined by a
decomposition of $(U,V)$ and the coordinate algebra $A$ is chosen as a simple ideal of the zero component $R_0$ of $R$. In order to prove these results we first prove similar results for simple
algebras with Peirce gradings.
46. CMB 2002 (vol 45 pp. 388)
Algèbres simples centrales de degré 5 et $E_8$
As a consequence of a theorem of Rost-Springer, we establish that the cyclicity problem for central simple algebras of degree 5 over fields containing a fifth root of unity is equivalent to the study of anisotropic elements of order 5 in the split group of type $E_8$.
Keywords:central simple algebras, Galois cohomology
Categories:16S35, 12G05, 20G15
47. CMB 2002 (vol 45 pp. 448)
Erratum: A Characterization of Left Perfect Rings
An error in "A characterization of left perfect rings", Canad. Math. Bull. (3) 38 (1995), 382-384, is indicated and the consequences identified.
48. CMB 2002 (vol 45 pp. 11)
Polycharacters of Cocommutative Hopf Algebras
In this paper we extend a well-known theorem of M.~Scheunert on skew-symmetric bicharacters of groups to the case of skew-symmetric bicharacters on arbitrary cocommutative Hopf algebras over a
field of characteristic not 2. We also classify polycharacters on (restricted) enveloping algebras and bicharacters on divided power algebras.
Categories:16W30, 16W55
49. CMB 2001 (vol 44 pp. 27)
Normal Subloops in the Integral Loop Ring of an RA Loop
We show that an RA loop has a torsion-free normal complement in the loop of normalized units of its integral loop ring. We also investigate whether an RA loop can be normal in its unit loop.
Over fields, this can never happen.
Categories:20N05, 17D05, 16S34, 16U60
50. CMB 2000 (vol 43 pp. 413)
Non-Isomorphic Maximal Orders with Isomorphic Matrix Rings
We construct a countably infinite family of pairwise non-isomorphic maximal ${\mathbb Q}[X]$-orders such that the full $2$ by $2$ matrix rings over these orders are all isomorphic.
Categories:16S50, 16H05, 16N60 | {"url":"http://cms.math.ca/cmb/msc/16?page=2","timestamp":"2014-04-16T10:14:27Z","content_type":null,"content_length":"61897","record_id":"<urn:uuid:385ea56d-9854-45c9-857d-06345fa34193>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00380-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pre-Calculus Review Videos | MindBites
Series: Pre-Calculus Review
About this Series
• Lessons: 31
• Total Time: 4h 4m
• Use: Watch Online & Download
• Access Period: Unlimited
• Created At: 02/26/2009
• Last Updated At: 07/20/2010
Taught by Professor Edward Burger, this series was selected from a broader, comprehensive course, Precalculus. This course and others are available from Thinkwell, Inc. The full course can be found
at http://www.thinkwell.com/student/product/precalculus. This series covers angles, trigonometric functions and expressions and more. The full course covers angles in degrees and radians,
trigonometric functions, trigonometric expressions, trigonometric equations, vectors, complex numbers, and more.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
Lessons Included
Does not use sin or cos
~ eshoemaker
Faulty advertising! I don't like thinking I am getting an example of something that I want to understand and have the problem be something simple like d=rt.
Pre-Calculus: Graphing Period, Amplitude, Shifts
~ sotake
Extremely helpful--the textbook I'm using leaves some points unexplained, but Professor Burger explains everything clearly and concisely.
Good but abstract
~ Nina3
Still too abstract to get students (not math lovers ;) excited....
Need more practice problems
~ tansley
The information is fantastic. However - practice makes perfect - so it would be nice to see a few examples.
Great lesson, but could use more real application
~ sharon5
When would you have a real problem where you would be looking for a position of a spring? I know that is what my students are going to be asking.
Easy way to identify even and odd functions!
~ brittanie
Great online video tutorial on how to verify if a trig function like sin or cos is even or odd. This question comes up on many college prep assessment tests and it's great to have these videos
available for easy review online.
clear and easy inverse sine and cosine
~ joyherber
This was great. My son was stuck on some homework and it had been years since I had studied sine and cosine. Once he watched the video and reread the assignment it all became so simple. Thanks.
~ hmt79
This lesson walks you through several different proofs of trig identities - he does a good job at going step by step and so I didn't get lost anywhere along the way. Great lesson!
Below are the descriptions for each of the lessons included in the series:
• Pre-Calculus: Solving Trig Equations by Factoring
Professor Burger teaches how to solve more complicated equations (tanx * sin^2x = tan x) involving trigonometric functions in this lesson. Solving these types of problems involves the use of trig identities, factoring, and related techniques to find all of the viable solutions. In the problem listed above, Professor Burger will show you how to factor the equation in order
to help simplify and then solve it. Professor Burger also gives a warning about cancelling out in equations that involve trig functions. By canceling, you risk missing valid solutions and
solution sets.
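To make that warning concrete, here is a worked sketch of the sample equation (our own steps, not a transcript of the lesson): rewriting tanx * sin^2x = tanx as tanx * (sin^2x - 1) = 0 gives tanx = 0 or sin^2x = 1. But sin^2x = 1 forces cosx = 0, where tanx is undefined, so the only solutions on [0, 2*Pi) are x = 0 and x = Pi. Dividing both sides by tanx at the start would have discarded exactly these solutions.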
Taught by Professor Edward Burger, this lesson was selected from a broader, comprehensive course, Precalculus. This course and others are available from Thinkwell, Inc. The full course can be
found at http://www.thinkwell.com/student/product/precalculus. The full course covers angles in degrees and radians, trigonometric functions, trigonometric expressions, trigonometric equations,
vectors, complex numbers, and more.
About Professor Edward Burger:
Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from
Connecticut College.
He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger
has won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association
of America. In 2006, Reader's Digest named him in the "100 Best of America".
Prof. Burger is the author of over 50 articles, videos, and books, including the trade book, Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas and of the textbook The
Heart of Mathematics: An Invitation to Effective Thinking. He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math
journals, including The Journal of Number Theory and American Mathematical Monthly. His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of
numbers, and the theory of continued fractions.
Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures.
• Pre-Calculus: Finding Coterminal Angles
Coterminal angles are angles that have their terminating rays in the exact same location. By definition, any two angles whose difference is some multiple of 360 degrees are coterminal. In this lesson, Professor
Burger will show you what coterminal angles look like, how to determine if two angles are coterminal using simple math and then he will review several examples of coterminal angles. He will also
demonstrate visually how coterminal angles behave and why the definition is appropriate for these types of angles.
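A quick worked check (our own numbers, not from the lesson): 30 degrees and 390 degrees are coterminal because 390 - 30 = 360; likewise -45 degrees and 315 degrees are coterminal, since they too differ by 360 degrees. In general, theta and theta + 360k are coterminal for every integer k.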
• Pre-Calculus: Fundamental Trigonometric Identities
In this lesson, Professor Burger will reveal and explain several basic trigonometric identity proofs. He will begin by reviewing the definitions of sine, cosine, and tangent. From these
definitions, he will prove tanx = sinx/cosx. Then, he uses the Pythagorean Theorem to show you the proofs for 3 more trigonometric identities: cos^2 + sin^2 = 1, 1 + tan^2 = sec^2, and 1 + cot^2
= csc^2. Finally, Professor Burger will tell you which of these identities and proofs you need to memorize and which you can derive simply and don't need to fret about memorizing in advance of
your test.
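As a one-line illustration of the Pythagorean route (our own sketch, not the lecture itself): dividing sin^2x + cos^2x = 1 through by cos^2x gives tan^2x + 1 = sec^2x, and dividing through by sin^2x instead gives 1 + cot^2x = csc^2x.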
• Pre-Calculus: Evaluating Inverse Trig Functions
Professor Burger shows you how to evaluate inverse trigonometric functions, which include arc sine, arc cosine, arc tangent, etc. He reminds you that inverse functions are asking "What is the
angle whose function is X?" Thus, the output for any inverse trig function should be an angle for which, if you apply the indicated trig function, you get the indicated value as a result. Then he
walks through finding the inverse of sine, the inverse of cosine, and the inverse of tangent. Finally, Prof Burger shows you how to interpret the presence of a negative sign and how to evaluate
inverse trig functions using a calculator (indicated in radians).
This lesson is perfect for review for a CLEP test, mid-term, final, summer school, or personal growth!
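A few worked values of the kind covered here (our own, for reference): arcsin(1/2) = Pi/6, since sin(Pi/6) = 1/2; arccos(-1) = Pi; and arctan(1) = Pi/4. Note the range conventions: arc sine returns angles in [-Pi/2, Pi/2] and arc tangent in (-Pi/2, Pi/2), while arc cosine returns angles in [0, Pi].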
• Pre-Calculus: Adding Vectors & Multiplying Scalars
Professor Burger shows you how to add and subtract vectors and use scalar multiplication to elongate or shrink vectors while maintaining their direction angle. The magnitude of a vector can be
altered with scalar multiplication. A scalar is simply a number (positive or negative or a fraction) used to multiply a vector by, with the vector keeping the same line of direction (reversed if the scalar is negative) and changing
magnitude. Vectors can also be added and subtracted by simply adding or subtracting the components. It is also simple to find the answer graphically by creating a parallelogram with the two
vectors, which Professor Burger demonstrates.
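For instance (our own numbers): with u = <1, 2> and v = <3, -1>, addition is componentwise, so u + v = <4, 1>; scalar multiplication scales each component, so 3u = <3, 6>, which triples the magnitude while keeping the direction, and -2u = <-2, -4> doubles the length while reversing the direction.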
• Pre-Calculus: Trig Equations with Coefficients
Now that you have learned how to solve simple trigonometric equations, Professor Burger will show you how to solve trig equations that have a coefficient in the argument (e.g. sine of 2X versus
just sine of X). These are also called multiple angle equations. In evaluating these trigonometry equations, you will generally only be asked to find solutions in one or two periods of the
functions, so you will not have an infinite number of solutions (most often, you'll solve for solutions between 0 and 2*pi). Sample problems from this lesson include (2*sin 3*theta) = 1 and tan^2
(2X) = 1 and sin2x*tan2x + sin2x = 0.
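A worked sketch of the first sample equation (our own steps): 2*sin 3*theta = 1 gives sin 3*theta = 1/2, so 3*theta = Pi/6 + 2*Pi*k or 3*theta = 5Pi/6 + 2*Pi*k, hence theta = Pi/18 + 2*Pi*k/3 or theta = 5Pi/18 + 2*Pi*k/3. On [0, 2*Pi) each family contributes three solutions, six in all, which is exactly the multiplying effect the coefficient 3 has on the solution count.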
• Pre-Calculus: Graphing Period, Amplitude, Shifts
Professor Burger shows you how to use all of the tools at your disposal to effectively graph complicated trigonometric functions involving sine and cosine. He will show you how to recognize
changes in period, amplitude, and vertical and phase shifts in the equation and how to correctly incorporate them into your trig function graph. He will also show you a three-step process of
translating the equation, graphing the intermediate steps, and finalizing the graph. The examples you will use are y = -2sin(x - Pi/4) + 1 and y = 2cos(Pi*x) - 2. These equations both involve
complications like those listed above (as indicated by their added constants and coefficients).
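Reading the first example off directly (our own summary): in y = -2sin(x - Pi/4) + 1 the amplitude is |-2| = 2, the period is 2*Pi, the negative sign reflects the graph across its midline, and the graph is shifted right by Pi/4 (phase shift) and up by 1 (vertical shift). In y = 2cos(Pi*x) - 2, the Pi inside the argument compresses the period to 2*Pi/Pi = 2.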
• Pre-Calculus: Complex Numbers - Trig or Polar Form
This lesson instructs you on how to convert complex numbers into trig form (also known as polar form). Complex numbers, written in the form (a + bi), are an extension of the real numbers obtained
by adjoining an imaginary unit, denoted by i, which is the square root of negative 1. To convert complex numbers into trigonometric or polar form, Professor Burger first walks you through
sketching a graph of the number and drawing a right triangle. From that, he shows you how to use the trig properties to find the unknown values and the modulus. Then, you plug these values into the trig form and determine the angle. To illustrate this method, Professor Burger walks you through an example in which he converts (-(3^1/2) + i) to polar or trigonometric form.
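Carrying that example through (our own arithmetic): for z = -(3^1/2) + i, the modulus is r = sqrt(3 + 1) = 2, and since the point lies in the second quadrant the angle is theta = 150 degrees (that is, 5Pi/6), so z = 2(cos(5Pi/6) + i*sin(5Pi/6)).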
• Pre-Calculus: Inverse Trig Function Equations
An inverse function asks the question "What is the angle whose function is X?" In this lesson, you will learn to solve equations that include an inverse function (arc sine, arc cosine, arc tangent, etc.). Professor Burger first shows you how to untangle the equation, re-writing it so that you can understand what you are solving for. He will also show you examples where there may be an infinite number of solutions, and how you will need to correctly denote this answer. Finally, he suggests that you check your answers by graphing, and shows you how. This lesson will include several examples of evaluating problems involving arc sin, arc cos, etc. You will begin by seeing how to approach and solve a problem like 'inverse cosine of cosine x = pi/4'. While it would seem that the cosine and inverse cosine here would cancel, you will learn in this lesson why this is not the case and how you can correctly solve for the answer.
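To see why no simple cancellation works, a sketch of our own: arccos(cos x) = Pi/4 means cos x = cos(Pi/4) = (2^1/2)/2, so x = Pi/4 + 2*Pi*k or x = -Pi/4 + 2*Pi*k for any integer k, an infinite family of solutions rather than the single value x = Pi/4.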
• Pre-Calculus: The Law of Sines
Trigonometric functions (sin, cos, tan, etc) originally arose from the ratios of the sides of right triangles. But we can still use sine to evaluate the sines of angles that are in a triangle but
not in a right triangle, using the Law of Sines. The Law of Sines states that [(sin a)/A] = [(sin b)/B] = [(sin c)/C], where a is the angle opposite side A (and so on for b/B and c/C). Sometimes
the angles, a, b, and c, in this equation are denoted by the Greek symbols for alpha, beta, and gamma. Professor Burger shows you how to think about and use this this law by working through a
number of different examples. This law lays the foundation for proving properties about triangles that don't have a right angle, including the calculation of the lengths of their sides and the
measures of their angles.
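A worked instance of the law (our own numbers): if angle a = 30 degrees with opposite side A = 5, and angle b = 45 degrees, then B = A*(sin b)/(sin a) = 5*((2^1/2)/2)/(1/2) = 5*(2^1/2), which is about 7.07.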
• Pre-Calculus: The Law of Cosines
In this lesson, Professor Burger begins with a review of the Law of Sines. He then introduces the Law of Cosines, which extends the Pythagorean Theorem to triangles that are not right, allowing
us to solve for any angle in the triangle. The Law of Cosines, also called the Al-Kashi Law, the Cosine Formula, and the Cosine Rule, states that, for the angle you are solving for, opposite side^2 = (sum of the squares of the adjacent sides) - 2 * (product of the adjacent sides) * (cos of the desired angle). The law of cosines is most useful when computing the third side of a triangle
when two sides and their enclosed angle are known (SAS) or when computing the angles of a triangle in which all sides are known (SSS).
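For example (our own numbers, the SAS case): with adjacent sides 5 and 7 enclosing a 60 degree angle, the opposite side c satisfies c^2 = 5^2 + 7^2 - 2*5*7*cos(60 degrees) = 25 + 49 - 35 = 39, so c = sqrt(39), about 6.24.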
• Pre-Calculus: Convert between Degrees and Radians
In this lesson, Professor Burger teaches the basics of degrees and radians as they relate to the measurement of angles. He will cover how these terms are related (e.g. 360 degrees = 2*Pi radians), how they are different from each other, and why they are used in different situations. This lesson should answer an assortment of questions, including: Why do we use 360 degrees? Is there a
better way to measure angles? Radian measurement is a different way to measure angles, and is the method of angle measurement used in trigonometric functions. You will learn how to measure angles
using radians instead of degrees, how to convert from degrees to radians, and what you should memorize to simplify this conversion. Additionally, Professor Burger will explain the rationale
behind using radians in place of degrees at times (mostly with trigonometric functions). Finally, you will review several examples of this conversion.
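The conversion itself is a single proportion (our own worked values): multiply degrees by Pi/180 to get radians, and radians by 180/Pi to get degrees. For example, 135 degrees = 135*Pi/180 = 3Pi/4 radians, and Pi/6 radians = 30 degrees.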
• Pre-Calculus: Word Problems with Sine or Cosine
Professor Burger provides a real-world application of periodicity using tidal waves (waves that happen after an earthquake, also called tsunami waves) as an example. In the example, the waves are
moving 540 feet/second (or 370 mph) and peaking at a height of 50 feet every twenty minutes (the period of the wave). The question posed is: what is the length between each wave? Ocean waves, like
sound waves, have a sine curve. You will use this knowledge and the distance formula (D = r*t) to solve a word problem about tsunami waves. These types of sinusoidal waves occur frequently in
nature (and often in math word problems). Knowing how to approach and evaluate them is key to being able to solve all of them.
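Carrying the stated numbers through (our own arithmetic): wavelength = rate * period = 540 feet/second * (20 * 60) seconds = 648,000 feet, or roughly 123 miles between successive wave crests.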
• Pre-Calculus: Intro to Sine and Cosine Graphs
In this lesson, you will examine the graphs of both the following trigonometric functions: sine and cosine. Professor Burger will show you how to graph sin and cos and teach you the acronym ASTC
(All Students Take Calculus). Prof Burger also defines and shows you where to look to evaluate the amplitude, period, and zeros of the sine and cosine graphs and shows you how to find and determine the maximums and minimums for both sine and cosine functions. Finally, he will compare the graphs of the two functions, demonstrating that they have an identical shape with merely a
shift between them to differentiate the two functions from each other. You'll also learn the importance of the π/2 interval in plotting and remembering the trigonometric function graphs of cosine
and sine.
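A few of the landmark values (our own summary of the standard graphs): sin x has zeros at 0, Pi, and 2*Pi, a maximum of 1 at Pi/2, and a minimum of -1 at 3Pi/2; cos x has zeros at Pi/2 and 3Pi/2, a maximum of 1 at 0, and a minimum of -1 at Pi. The shift between the two graphs is exactly Pi/2, since cos x = sin(x + Pi/2).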
• Pre-Calculus: Trig to find a Right Triangle angle
This lesson teaches you to evaluate trigonometric functions to find one (non-right) angle of a right triangle. To do this, Professor Burger will walk through an example in which he presents a
right triangle with two known sides and one known angle. From this information, all other angles and sides can be determined using trigonometric functions for the angle (sine, cosine, tangent,
cosecant, secant, cotangent, etc). To determine the unknowns, he will apply a range of different formulas (involving the identification and use of opposite and adjacent sides of the various
angles). Trigonometric functions are ratios of the sides of a triangle. You will cover an example finding all the trigonometric functions for a right triangle, beginning with finding the
hypotenuse using the Pythagorean theorem. Then you will learn how to find the rest of the trig functions for a triangle if you are given one function.
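A standard worked case (our own example): in a 3-4-5 right triangle, for the angle theta opposite the side of length 3, sin theta = 3/5, cos theta = 4/5, and tan theta = 3/4; the reciprocals follow immediately as csc theta = 5/3, sec theta = 5/4, and cot theta = 4/3.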
• Pre-Calculus: Simplifying Using Trig Identities
Professor Burger demonstrates how to use the fundamental trigonometric identities to simplify complex trigonometric expressions. Many seemingly complex expressions can be greatly simplified with simple application of several trigonometric identities. You will practice using expressions such as tanx * cosx and [1+(tan^2)x]/(csc^2)x. In these problems, you will see how substituting 1/cos^2 for sec^2, 1/sin for csc, or sin/cos for tan simplifies each expression. By applying the definitions of the different trig functions, you'll often be able to substantially simplify a problem to the point where it will be
very easy for you to evaluate and solve it.
This lesson is perfect for review for a CLEP test, mid-term, final, summer school, or personal growth!
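A quick sketch of this kind of simplification using SymPy (my own illustration; the printed form of the second result may vary, but it is equivalent to tan(x)**2):

from sympy import symbols, cos, tan, csc, simplify

x = symbols('x')
print(simplify(tan(x) * cos(x)))              # sin(x)
print(simplify((1 + tan(x)**2) / csc(x)**2))  # tan(x)**2 (possibly printed as sin(x)**2/cos(x)**2)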
• Pre-Calculus: Finding Vector Magnitude & Direction
In this lesson, Professor Burger will show you how to find the magnitude and direction angle of a vector provided in standard form (and how vector notation associated with standard form looks and
should be interpreted). He will also review how to depict a vector graphically that we have described by standard form. The magnitude is the length of a vector (reminder: it must be positive),
and we use the Pythagorean theorem to calculate the vector's magnitude. The direction angle is always measured counter-clockwise from the positive side of the x-axis. Once we know the vector's
length, we can use trigonometric functions to calculate the direction angle of the vector. Last, Professor Burger solves for the magnitude and direction of some of the vectors using a calculator.
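A minimal Python sketch of the computation, with a hypothetical vector:

import math

vx, vy = -3.0, 4.0                            # hypothetical vector in standard form
magnitude = math.hypot(vx, vy)                # Pythagorean theorem: 5.0
direction = math.degrees(math.atan2(vy, vx)) % 360   # counter-clockwise from the positive x-axis
print(magnitude, round(direction, 2))         # 5.0 126.87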
• Pre-Calculus: Graphing tan, sec, csc, cot
This lesson introduces the graphs of all the other trigonometric functions (cosecant, secant, tangent, cotangent), using the sine and cosine graphs for points of comparison. Professor Burger
shows you how to graph tanx using the identity tanx = sinx/cosx. This graph has asymptotes at all the odd multiples of Pi/2 and a period of Pi/absolute value of b. Next, you learn to graph secx, which is equal to 1/cosx. This means that secant has an asymptote anywhere cosx = 0. Next, Prof. Burger graphs cosecant, using the identity that cscx = 1/sinx. This graph is identical to secx, but shifted, like the relationship between sin and cos. Finally, you will learn to graph cotx, which is equal to 1/tanx. This means that there will be asymptotes where tanx = 0, and zeros where tanx has asymptotes.
• Pre-Calculus: Trig Equations and Quadratic Formula
Sometimes, trigonometric equations cannot be factored. To solve these equations, Professor Burger shows you how to apply the quadratic formula to find their solutions. This is a multi-step process that starts with simplifying the equation. After the equation is simplified, you will be able to apply the quadratic formula and then use that answer to solve for x. In the lesson example, Professor Burger uses both a calculator and graphing to ensure he has the correct points. This lesson explains the covered material by walking through the sample equation 3sin^2(2x) + sin(2x) - 1 = 0. This equation has 2 solutions over the interval from zero to 2*pi. This lesson is loaded with warnings about easy mistakes to make and pitfalls to be wary of when evaluating problems like this.
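A sketch of the same procedure in Python, using the sample equation above (representative solutions only; the full solution set depends on the interval convention):

import math

# Solve 3u^2 + u - 1 = 0 with the quadratic formula, where u = sin(2x)
a, b, c = 3.0, 1.0, -1.0
disc = math.sqrt(b*b - 4*a*c)
for u in ((-b + disc) / (2*a), (-b - disc) / (2*a)):
    if -1.0 <= u <= 1.0:                      # sin(2x) must lie in [-1, 1]
        base = math.asin(u)                   # principal value of 2x
        print(f"sin(2x) = {u:.4f}: x = {base/2:.4f} or x = {(math.pi - base)/2:.4f}")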
• Pre-Calculus: Trig Functions - Odd, Even, Neither?
In this lesson, Professor Burger teaches you how to determine if a function is even, odd, or neither. He begins by defining even and odd functions and graphing them. A function is even if the
function of negative x is equal to the function of x. The graph of an even function is symmetric across the y-axis. A function is odd if the function of negative x is equal to the negative function of x. The graph of an odd function is symmetric around the origin. After defining these, Professor Burger identifies whether sin and cos are even or odd, and then shows several more
examples, including tan x, sin (2x), (sin x)/x, and x cos x. Lastly, Professor Burger describes and illustrates what a function looks like that is neither odd nor even. In this case, it is not
symmetric to the Y axis or the origin.
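The same test can be sketched numerically in Python (sampling a few points; a real proof needs the algebraic argument from the lesson):

import math

def parity(f, samples=(0.3, 1.1, 2.4)):
    if all(math.isclose(f(-x), f(x)) for x in samples):
        return "even"
    if all(math.isclose(f(-x), -f(x)) for x in samples):
        return "odd"
    return "neither"

print(parity(math.sin))                       # odd
print(parity(math.cos))                       # even
print(parity(lambda x: math.sin(x) / x))      # even
print(parity(lambda x: x * math.cos(x)))      # odd
print(parity(lambda x: math.sin(x) + x**2))   # neither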
• Pre-Calculus: Using Double-Angle Identities
Double-angle identities allow you to simplify trigonometric equations with a 2 as the coefficient. (similar formulae exist for trig functions with 1/2 or 3 as the coefficient). In this lesson,
Professor Burger uses the equation cos2x = sinx as an example. If this equation were simply cosx = sinx, we could divide to re-write the formula as sinx/cosx = tanx = 1, but in this case, we
have a coefficient in advance of one of the arguments, which is why we need to use the double-angle formulas. After using the double-angle formulas in the provided example to simplify, you can
further simplify these equations using trig identities (like the Pythagorean identity) and factoring. These tools will help you to solve many trig equations. The double-angle identities for sine, cosine, tangent and cotangent are: sin2x = 2sinxcosx, cos2x = cos^2x - sin^2x, tan2x = 2tanx/(1 - tan^2x), and cot2x = (cot^2x - 1)/(2cotx).
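These identities are easy to sanity-check numerically; a small Python sketch:

import math

x = 0.7   # arbitrary test angle in radians
assert math.isclose(math.sin(2*x), 2*math.sin(x)*math.cos(x))
assert math.isclose(math.cos(2*x), math.cos(x)**2 - math.sin(x)**2)
assert math.isclose(math.tan(2*x), 2*math.tan(x) / (1 - math.tan(x)**2))
assert math.isclose(1/math.tan(2*x), (1/math.tan(x)**2 - 1) / (2/math.tan(x)))
print("all four double-angle identities hold at x =", x)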
• Pre-Calculus: Using Sum and Difference Identities
Using the sum and difference identities, Professor Burger shows you how to solve a trig function for an unknown angle. For an example, he uses sin(15 degrees). You can use known angle values for 45 degrees and 30 degrees and the formula for the sine of a difference to find the solution. That formula is sin(x1 - x2) = sinx1cosx2 - cosx1sinx2. Hence, if you know what the sin and cos values of 30 and 45 degrees are, you should be able to plug them into this formula to arrive at the sine value of 15 degrees. The beauty of the sum and difference
formulas for trig functions is that they allow us to decompose a problem we don't know the answer to into component parts to which we do know the answer, thus solving the original problem.
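The sin(15 degrees) example is easy to check in Python:

import math

a, b = math.radians(45), math.radians(30)
sin15 = math.sin(a)*math.cos(b) - math.cos(a)*math.sin(b)   # sin(45 - 30)
print(sin15, math.sin(math.radians(15)))                    # both ~0.2588190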
• Pre-Calculus: Graph Sine, Cosine with Phase Shifts
Now that you have learned how to graph the sine and cosine functions, Professor Burger asks the question "How does changing the x-value affect the graph?" He shows you how adding or subtracting
to the x-value can actually change graphs of the sine and cosine functions, a process called translation. Professor Burger also warns you about classic mistake #8, reminding you that adding and
subtracting to the x-value actually creates the opposite effect when graphed (adding to X moves the graph in the negative direction). Finally, Professor Burger shows you how to simplify the
equation y = 3sin(x + Pi/2) using translation. The key lies in the fact that adding or subtracting pi/2 or 2*pi to a sine or cosine function means there are some shortcuts that you can take to
determine what the graph of the function looks like (e.g. the graph of sine of (x+pi/2) is the same as the graph of cosine and the same as the graph of sine of (x+2*pi)).
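A quick numeric check of the shortcuts mentioned above:

import math

for x in (0.0, 0.5, 1.3, 2.9):
    assert math.isclose(math.sin(x + math.pi/2), math.cos(x))              # sin(x + pi/2) == cos(x)
    assert math.isclose(math.sin(x + 2*math.pi), math.sin(x), abs_tol=1e-12)  # sin is 2*pi periodic
print("phase-shift identities verified")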
• Pre-Calculus: Solving Trigonometric Equations
This lesson will teach you how to solve equations involving trigonometric functions. Professor Burger shows you two ways to look at these equations, on a graph, and using reference angles. You
will learn to rephrase the equation to determine what it is really asking, "What value of X makes the function of X = n?" You will learn how to write your answer to indicate infinitely many
solutions and the step-by-step process of solving the equations. Examples of problems covered in this lesson involve trig functions, roots, fractions, variables and coefficients, including
problems like cos x = 1/2 and sinx = -(2^(1/2)/2). You'll also learn when and why most trig problems like these have multiple (or infinite) solutions and how to correctly identify and denote
these solution sets.
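For instance, the infinite solution family of cos x = 1/2 can be listed in Python:

import math

base = math.acos(0.5)                  # principal solution: pi/3
for k in range(-1, 2):                 # a few members of the infinite family
    print(2*math.pi*k + base, 2*math.pi*k - base)
# every solution of cos x = 1/2 is x = 2*pi*k +/- pi/3 for an integer k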
• Pre-Calculus: Polar & Rectangular Coordinates
You will learn how to convert from polar coordinates to rectangular coordinates (or Cartesian coordinates, or coordinates in a Cartesian plane), and vice versa in this lesson. First, Professor Burger gives you an overview of polar and rectangular coordinates. Then, you will learn how to convert a polar coordinate (r, Theta) into a rectangular coordinate (x, y), using the equations x = rcosTheta and y = rsinTheta. To convert from rectangular to polar, you will use the equations r = root(x^2 + y^2) and Theta = arctan(y/x). To illustrate the use of all of these formulae, Professor Burger will walk you through the conversion of (3, pi/6) from polar to rectangular coordinates and the conversion of (-1, 1) in rectangular coordinates to equivalent polar coordinates.
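Both conversions, with the lesson's example points, in a short Python sketch (note atan2 instead of arctan(y/x), so the quadrant comes out right for (-1, 1)):

import math

def polar_to_rect(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

def rect_to_polar(x, y):
    return math.hypot(x, y), math.atan2(y, x)   # atan2 picks the correct quadrant

print(polar_to_rect(3, math.pi/6))   # (2.598..., 1.5)
print(rect_to_polar(-1, 1))          # (1.414..., 2.356...) i.e. (sqrt(2), 3*pi/4)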
• Pre-Calculus: Word Problems and Trig Equations
This lesson provides a more real-world application of trigonometric equations using a word problem. Professor Burger walks you through solving the trig word problem about spring motion. The
motion of the particular spring in question is described by the function sin(2T) + 3^(1/2)*cos(2T), where T is the time in seconds. The problem asks us to solve for all times, T, when the object is located where it started the experiment. As he demonstrates how to solve the word problem, Prof. Burger uses much of the trig material he taught in previous lessons, including
identities, graphing, and angles. Finally, he reminds you to check your answer to make sure that the solutions are allowable. Additionally, he highlights that you should 'reality check' your
answer as it's obviously not possible to have solutions for T that are negative given that this is a real-world example and time should never be negative.
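A brute-force numeric version of the same search (a sketch; it scans for sign changes of the displacement from the starting position):

import math

def f(T):   # displacement from the starting position at time T
    return math.sin(2*T) + math.sqrt(3)*math.cos(2*T) - math.sqrt(3)

roots = []
for i in range(7000):                 # scan T in [0, 7) seconds
    t = i * 0.001
    if f(t) == 0.0 or f(t) * f(t + 0.001) < 0:
        roots.append(round(t, 2))
print(roots)   # approximately T = 0, pi/6, pi, pi + pi/6, 2*pi, ...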
• Pre-Calculus: Factoring Trigonometric Expressions
Just as you can simplify trigonometric expressions using the trig identities, you can often simplify the expressions by factoring, as you would with other types of expressions. Simplifying using
the identities and factoring will save you a lot of effort in solving trig problems. In order to recognize opportunities to factor when working trigonometric problems, Professor Burger recommends
that you use trigonometry identities to convert trig functions to sin and cos, whenever possible. Some examples you will learn how to simplify include (sin^2)x + (sin^2)x(cos^2)x and sinx - (cos^2)x - 1 and sin^2x + (2/cscx) + x.
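One of these, sinx - (cos^2)x - 1, can be sketched with SymPy after the substitution the lesson recommends (writing everything in terms of sin):

from sympy import symbols, factor

s = symbols('s')   # stand-in for sinx after using cos^2 x = 1 - sin^2 x
# sinx - cos^2 x - 1  =  s - (1 - s**2) - 1  =  s**2 + s - 2
print(factor(s**2 + s - 2))   # (s - 1)*(s + 2)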
• Pre-Calculus: Graph Sine, Cosine with Coefficients
After learning how to graph the sine and cosine functions, now we will modify the graphs of these functions by adding in coefficients. Professor Burger shows you a simple, 2-step process to determine the graphs. First, he will teach you about changes in the coefficient of the function. The introduction of a coefficient changes the amplitude of the graphed trigonometric function (sine or cosine). This is the difference between AM (amplitude modulation) radio stations; changes in amplitude produce AM radio signals. The amplitude is equal to the absolute value of the coefficient of the trigonometric function. Prof. Burger will also show you how changing the coefficient of the independent variable changes the period of the graphed sine or cosine function. This is the difference in FM radio stations (frequency modulation). The period = (2*Pi)/|coefficient of x|.
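The two rules reduce to a one-liner in Python:

import math

def amplitude_and_period(a, b):   # for y = a*sin(b*x) or y = a*cos(b*x)
    return abs(a), 2*math.pi / abs(b)

print(amplitude_and_period(3, 2))   # (3, 3.14159...): amplitude 3, period pi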
• Pre-Calculus: Power and Roots of Complex Numbers
Professor Burger explains how to find the powers and roots of complex numbers. The equation of a complex number is z = r(cosx + isinx). To raise the complex number to a power, n, the equation is z^n = r^n[cos(nx) + isin(nx)]. When taking the nth root of a complex number, you will come up with n solutions, one for each of the n degrees of the root. To find the nth root of a complex number, the equation is: nth root of z = (nth root of r)*[cos((x + 2*Pi*k)/n) + i*sin((x + 2*Pi*k)/n)], where k = 0, 1, 2, ..., n-1.
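A compact Python sketch of the nth-root formula using cmath (the cube roots of 8 as a test case):

import cmath
import math

def nth_roots(z, n):
    r, x = abs(z), cmath.phase(z)         # z = r(cos x + i sin x)
    return [r**(1/n) * cmath.exp(1j * (x + 2*math.pi*k) / n) for k in range(n)]

for w in nth_roots(8, 3):                 # the three cube roots of 8
    print(w, w**3)                        # each w**3 comes back as ~8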
• Pre-Calculus: Trig to Find Right Triangle sides
Professor Burger explains how to use trigonometry to find the unknown sides of right Triangles. First, he explains how to evaluate and use trig functions with the help of a calculator. Then he
will teach you how to find the hypotenuse of a right triangle using trigonometry, as well as how to find the adjacent side of a right triangle, given the measure of the hypotenuse. Once you have
the measures of two sides of a right triangle, you will be able to deduce what the third side is equal to by applying the Pythagorean Theorem. Professor Burger will also highlight that
trigonometry uses functions (e.g. you are not multiplying by sine, but finding the function, sine, of a number). This is an especially important distinction to remember when manipulating
trigonometric expressions and applying trigonometric properties.
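In Python, with a hypothetical angle and hypotenuse (note sin and cos are functions applied to the angle, exactly as the lesson stresses):

import math

angle = math.radians(35)                  # hypothetical known acute angle
hypotenuse = 10.0                         # hypothetical known hypotenuse
adjacent = hypotenuse * math.cos(angle)   # cos(angle) = adjacent/hypotenuse
opposite = hypotenuse * math.sin(angle)   # sin(angle) = opposite/hypotenuse
print(adjacent, opposite, math.hypot(adjacent, opposite))   # Pythagorean check: last value is 10.0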
• Pre-Calculus: Find Angle Complements & Supplements
This trigonometry lesson introduces and explains the terminology behind complementary angles (angles that add up to 90 degrees) and supplementary angles (that add up to 180 degrees). Professor
Burger gives you the definition of these two types of angles, shows you what these angles will look like when combined, and shows you how to find a complement angle or a supplement angle to a provided angle with a known degree measure. Professor Burger will also explain whether supplementary angles and complementary angles could be negative.
October 7th 2009, 08:30 AM
Since so much of cardinal numbers involve creating in/bijections I'm not sure if that's what I should be doing here instead...
Prove that the addition and multiplication is well defined for cardinal numbers.
My answer was... (roughly, more to see if I'm on the right track)
Let A and B be two sets with cardinality $\kappa$ and $\lambda$ respectively.
Want to show that if card(A) = card(A') and card(B) = card(B') then...
card $(A \cup B)$ = card $(A' \cup B')$
So we have card $(A' \cup B')$ = card(A') + card(B') = $\kappa + \lambda$ = card(A) + card(B) = card $(A \cup B)$.
Hence addition is well defined... Seems wrong looking back on it. Do I even need the $\kappa, \lambda$ bit?
October 7th 2009, 11:12 AM
$dim(A \cup B) = dim(A) + dim(B) - dim(A \cap B)$
don't know for multiplication...
do we look at $A \times B = \{(a,b) \mid a \in A, b \in B \}$ ?
October 7th 2009, 12:34 PM
card $(A' \cup B')$ = card(A') + card(B') = $\kappa + \lambda$ = card(A) + card(B) = card $(A \cup B)$
Doing that you assume the addition for cardinal numbers is well-defined.
Moreover, $cardA=cardA'$ and $cardB=cardB'$ does not mean $card(A\cup B)=card(A'\cup B')$ ; you must assume $A\cap B=A'\cap B'=\emptyset$ to be sure of that (and that's the way cardinal addition
is defined).
What you have to prove is: If $cardA=cardA'$ and $cardB=cardB',$ then $card(A\oplus B)=card(A'\oplus B')$ , where $X\oplus Y := X\times\{0\}\cup Y\times\{1\}$ for any two sets $X$ and $Y$.
$cardA=cardA'$ means there exists a bijection $f:A\rightarrow A'$
$cardB=cardB'$ means there exists a bijection $g:B\rightarrow B'$
Question: Can you define a bijection between $A\oplus B$ and $A'\oplus B'$ ?
For the product, the hypotheses and the question are the same with the cartesian product $\times$ instead of $\oplus$.
@josipive: The question is about sets in general, you cannot apply such result about vector subspaces of finite dimension. | {"url":"http://mathhelpforum.com/discrete-math/106654-cardinality-print.html","timestamp":"2014-04-16T08:47:20Z","content_type":null,"content_length":"11270","record_id":"<urn:uuid:a1e121f5-9b38-4cb5-9fb2-315078d4bc90>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
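(A sketch of the bijections asked for above; the details are routine to check. Define $h: A\oplus B \rightarrow A'\oplus B'$ by $h(a,0)=(f(a),0)$ and $h(b,1)=(g(b),1)$; since $f$ and $g$ are bijections and the $\times\{0\}$ and $\times\{1\}$ parts are disjoint, $h$ is a bijection, so $card(A\oplus B)=card(A'\oplus B')$. For the product, $h(a,b)=(f(a),g(b))$ works the same way.)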
A paint mixture contains 7 gallons of base for every gallon of color. In 280 gallons of paint, how many gallons of color are there?
Now can you get it?
There are 280 gallons of colors of paint.
7 gallons of base per gallon of colour, thus 8 gallons of paint would contain 1 gallon of colour and 7 gallons of base. Find the ratio of colour to paint and multiply by 100 to convert to percent: (1/8) = 0.125 or 12.5%, thus the paint is 12.5% colour and 87.5% base. 0.125 * 280 = 35 gallons of colour.
Seems as though the rest of us are more interested in this problem than the asker.
Australopithecus answered my question and explained how they got the answer!! So I got all the info I needed!!!
Posts about animation on Lucky's Notes
June 14, 2011
I randomly made this thing out of boredom:
Cute, but it’s probably a sign that I need some better ideas for hobby programming projects xD
It uses the pyglet library, and is only a page or so of python code.
The hockey stick theorem: an animated proof
October 22, 2010
An interesting theorem related to Pascal’s triangle is the hockey stick theorem or the christmas stocking theorem. This theorem states that the sum of a diagonal in Pascal’s triangle is the element
perpendicularly under it (image from Wolfram Mathworld):
So here the sum of the blue numbers , 1+4+10+20+35+56, is equal to the red number, 126. It is easy to see why it’s called the hockey stick theorem – the shape looks like a hockey stick!
An alternative, algebraic formulation of the hockey stick theorem is as follows:
$\displaystyle\sum_{i=0}^{k} \binom{n+i}{i} = \binom{n+k+1}{k}$
But this works in two ways, considering the symmetry of Pascal’s triangle. The flipped version would be:
$\displaystyle\sum_{i=0}^{k} \binom{n+i}{n} = \binom{n+k+1}{n+1}$
Using Pascal’s identity, it is fairly trivial to prove either identity by induction. Instead I present an intuitive, animated version of the proof: | {"url":"http://luckytoilet.wordpress.com/tag/animation/","timestamp":"2014-04-18T16:53:42Z","content_type":null,"content_length":"30530","record_id":"<urn:uuid:18b188f5-ebe0-4d8b-b1a7-438c483486ab>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00405-ip-10-147-4-33.ec2.internal.warc.gz"} |
Errata: Predicting Outcomes of Investments in Maintenance and Repair of Federal Facilities, Appendix C
ISBN-13: 978-0-309-22186-3; ISBN-10: 0-309-22186-2
NOTE TO READERS: Following the public release of this report in November 2011, errors were discovered in several of the equations and numerical solutions included in Appendix C. This document includes the corrected equations and solutions.
Copyright 2012 by the National Academy of Sciences. All rights reserved.
C Some Fundamentals of the Risk-Based Approach

BASIC PRINCIPLES

The fundamental tools needed for the quantitative risk-based approach to decision-making include the basic principles of probability. Those principles start with the premise that in the presence of uncertainty, a phenomenon or physical process can be defined or represented by a random variable and its probability distribution. That is, uncertainty is modeled as a random variable with a range of possible values and their probabilities defined by a probability distribution. Thus, if X is a random variable with a range of possible values from a to b, its probability distribution may be defined as $F_X(x) = P(X \le x)$, $a \le x \le b$. Within the range of possible values of a random variable, there will be a mean (or average) value and a measure of dispersion, such as the variance or standard deviation. The ratio of the standard deviation to the mean is the coefficient of variation (COV). Among the useful probability distributions are the normal or Gaussian distribution and the lognormal (or logarithmic normal) distribution.

The Normal or Gaussian Distribution. The normal distribution, whose range of possible values is $-\infty$ to $+\infty$, is denoted as $N(\mu, \sigma)$ where $\mu$ is its mean value and $\sigma$ is its standard deviation. If $\mu = 0$ and $\sigma = 1.0$, the distribution is called the standard normal distribution. For the standard normal distribution, the probability from $-\infty$ to x is $F_X(x) = \Phi(x)$, where $\Phi(x)$ is tabulated in Tables of Standard Normal Probability. The probability of a random variable, X, between a and b can be evaluated as

$P(a < X \le b) = \Phi\left(\frac{b - \mu_X}{\sigma_X}\right) - \Phi\left(\frac{a - \mu_X}{\sigma_X}\right)$

where $\mu_X$ and $\sigma_X$ are, respectively, the mean and standard deviation of X.

The Lognormal Distribution. In the lognormal distribution, whose range of possible values is 0 to $\infty$, there are no negative values. The probability that X will be between a and b becomes

$P(a < X \le b) = \Phi\left(\frac{\ln b - \lambda_X}{\zeta_X}\right) - \Phi\left(\frac{\ln a - \lambda_X}{\zeta_X}\right)$
where $\lambda_X$ and $\zeta_X$ are, respectively, the mean and standard deviation of ln X, and they are the parameters of the lognormal distribution. These parameters are related to the mean and standard deviation of X as follows:

$\lambda = \ln\mu_X - \frac{1}{2}\zeta^2$

and

$\zeta^2 = \ln\left[1 + \left(\frac{\sigma_X}{\mu_X}\right)^2\right] = \ln(1 + \delta_X^2)$

If the COV of X, $\delta_X$, is not large, say < 40%, $\zeta \cong \delta_X$.
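A minimal computational sketch of these interval probabilities in Python, using the error function in place of the standard normal probability tables:

import math

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_interval(a, b, mu, sigma):
    # P(a < X <= b) for X ~ N(mu, sigma)
    return Phi((b - mu) / sigma) - Phi((a - mu) / sigma)

def lognormal_interval(a, b, lam, zeta):
    # P(a < X <= b) for lognormal X with parameters lam and zeta
    return Phi((math.log(b) - lam) / zeta) - Phi((math.log(a) - lam) / zeta)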
MATHEMATICS OF PROBABILITY

A few rules that pertain to the mathematics of probability may be described briefly as follows. Probability is defined with reference to the occurrence (or non-occurrence) of an event, and for an event E, $0 \le P(E) \le 1.0$.

The Addition Rule. For two or more events, A and B, the "union" of A and B, denoted $A \cup B$, means the occurrence of A or B (or both), and the probability is given as the addition rule, namely,

$P(A \cup B) = P(A) + P(B) - P(AB)$

in which P(AB) stands for the simultaneous occurrence of A and B.

The Multiplication Rule. The probability of the simultaneous occurrence of two events, A and B, is given by the multiplication rule, namely

$P(AB) = P(A \mid B) \cdot P(B)$; or $= P(B \mid A) \cdot P(A)$

in which $P(A \mid B)$ stands for the "conditional probability" of A given (or assuming) the occurrence of B. Those two simple rules, together with the "theorem of total probability" and the "theorem of Bayes," constitute the basic rules of the mathematics of probability. For a more complete description of the theory of probability and illustrations of its many applications in engineering, see Ang and Tang (2007).
ILLUSTRATIVE APPLICATIONS TO SPECIFIC OUTCOMES Described below are the numerical calculations of the risk or probability of “negative benefits” of three specific outcomesņaccident rates and types,
deferred maintenance, and energy use. Accident Rates and Types In this example, let X = recordable incident rate (RIR) Y = lost-time incident rate (LTR), and Z = number of worker compensation claims
Assume that the current incident rates and claims are as follows: X = 4 per 100,000 hours Y = 0.5 per 100,000 hours Z=1 The average cost per incident and the average lost time cost per incident is
$75,000, whereas that of a worker compensation claim is $100,000. With an investment of $200,000 for maintenance and repair, the incident rates would be reduced as follows: X’ = 2 per 100,000 hours
Y’ = 0.1 per 100,000 hours Z’ = 0.2 In this case, the current cost of an incident is C = c1X + c2Y + c3Z; and the corresponding reduced cost is, C’ = c1X’ + c2Y’ + c3Z’ where c1, c2, and c3 are the
corresponding costs in dollars. The pertinent costs are, therefore, as follows: Current cost, C = 75,000 x 4 + 75,000 x0.5 + 100,000 x1 = $437,500 Reduced cost, C’ = 200,000 + 75,000 x 2 + 75,000 x
0.1 + 100,000 x 0.2 = $377,500 The benefit derived from the investment in maintenance and repair, therefore, would be Benefit = C – C’. In this case, the benefit of maintenance and repair investment
= 437,500 í $377,500 = $60,000 C-3
OCR for page 125
In this example, the risk that the investment will be greater than the savings (negative benefit) is C < C’. Because there are uncertainties in all the variables X, X’, Y, Y’, Z, and Z’, there is
some probability of negative benefit. For example, suppose that the uncertainties are ±30% in all the variables. The risk would be calculated as follows. Assume that the variables are independent
normal random variables; the means and standard deviations of each of the variables are: X = N(4, 1.2); Y = N(0.5, 0.15); Z = N(1, 0.3); and X’ = N(2, 0.6); Y’ = N(0.1, 0.03); Z’ = N(0.2, 0.06) The
respective means of C and C’ (assuming no uncertainties in the costs), are μC = $437,500 and μC ' = $377,500 whereas the variances are, and, Therefore, the risk of negative benefit would be, That
means that with the investment of $200,000 in maintenance and repair, the risk of negative benefit will be about 28.5 percent. Deferred Maintenance Any equipment or facility has a finite and variable
operational life. In realistic terms, the operational life may be represented as a random variable and described with a probability distribution. The probability distribution often used for this
purpose is the lognormal distribution. C-4
OCR for page 125
The Risk Problem Consider the maintenance problem of air-conditioning (A/C) units. Assume that the operational life T of a typical A/C unit can be described with the lognormal distribution, with a
median life of tm months or years, and a COV of įT (or a standard deviation of σ T ≈ δ T × tm ). Suppose further that the current maintenance schedule calls for inspection and repair (if necessary)
of an A/C unit every n months or years. However, if inspection or repair is deferred beyond the schedule, what will be the reliability (probability of performance) of the A/C unit until the next
scheduled inspection? And what would be the cost implication of deferring maintenance? Solutions Assume that the A/C unit has an operational life of tm = 5 years, and a COV of įT = 0.30. The
probability that the A/C unit will fail to perform within a life of t years is given by P(T < t). With the lognormal distribution of the operational life T, the probability is § ln t − λ · P(T < t )
= Φ ¨ m ¸ © ζ ¹ in which Ȝ and ȗ are the parameters of the lognormal distribution. The reliability is then (1 – P). Problem I. The probability that the operational life of an A/C unit will be less
than 2 years is determined as follows: The parameters of the lognormal distribution λ and ζ are: . Ȝ § 1ntm = 1n5 = 1.61; and ȗ ~ įT = 0.30. The required probability of failure (non-performance) in 2
years is ln 2 − 1.61 0.69 − 1.61 P(T < 2) = Φ ( ) = Φ( ) = Φ (−3.07) 0.30 0.30 =1-Φ(3.07)=1-0.9989=0.0011 Therefore, the probability that a typical A/C unit will fail within a 2-year period is 0.11
percent. Its reliability of performance, therefore, is (1 – 0.0011) = 0.9989 = 99.89 percent. Problem II. Suppose that the A/C units of an agency are scheduled for routine maintenance at 2-year
intervals; this maintenance schedule should ensure a high performance reliability (of 99.89 percent) However, because of circumstances (such as a shortage of technicians, or a shortage of funding),
the inspection and repair schedule is deferred for 2 years (until the next scheduled maintenance). The average cost of repair per A/C unit is estimated to be $1,500; the cost implication of the
deferred maintenance will be as follows. In this problem, the operational life is assumed to be longer than 2 years (the schedule for maintenance), so it is necessary to calculate the probability of
failure in 4 years (2 years beyond the scheduled maintenance). The solution requires conditional probability as outlined below: P[(T ≤ 4) (T > 2)] P(2 < T ≤ 4) P(T ≤ 4 T > 2) = = P(T > 2) P(T > 2)
OCR for page 125
where, from Problem I, P(T > 2) = 1 − 0.0011 = 0.9989 ln 4 − 1.61 ln 2 − 1.61 1.39 − 1.61 0.69 − 1.61 P(2 < T ≤ 4) = Φ ( ) − Φ( ) = Φ( ) − Φ( ) = Φ (−0.73) − Φ (−3.07) 0.30 0.30 0.30 0.30 = 1 − Φ
(0.73) − (1 − Φ (3.07)) = Φ (3.07) − Φ (0.73) = 0.9989 − 0.7673 = 0.23 and 0.23 P(T ≤ 4 T > 2) = = 0.23 0.9989 Therefore, deferring the maintenance of the A/C units for 2 years, or until the next
scheduled maintenance, will result in a probability of failure of a typical A/C unit of 23 percent. If the agency has 1,000 A/C units, 230 of them are likely to fail within 2 years beyond the
scheduled maintenance. If the average repair cost is $1,500 per unit, the deferred maintenance cost will be 230 × 1500 = $345,000. Energy Use Problem Determine the benefit of investments in
maintenance and repair of energy systems. Consider savings in oil equivalents (gallons) of gasoline consumption at a price of $3.00 per gallon. Assume that with an investment of I (dollars) the
reduction in gasoline consumption is Y = f(I); this function may have to be developed empirically from historical data. Let the current consumption be X gallons; and in dollars = 3X. Assume that with
an investment of I dollars, the reduced consumption would be Y gallons, and in dollars = 3Y. Therefore, the energy saving with investment I is (X – Y) gal; or in dollars is (3X – 3Y). Hence, failure
in this case may be defined as “saving (in dollars) is less than the investment”; that is, in dollars, [3(X – Y) – I] < 0. There will be uncertainty in X (the current consumption) and in Y (the
reduced consumption), so there will be a probability of failure, or risk that the investment will be greater than the savings (negative benefit). To calculate that probability, assume that X and Y
are both normal (or Gaussian) random variables, with respective means and standard deviations ȝX, ȝY, and ıX, ıY; i.e., denoted as N(ȝX, ıX) and N(ȝY, ıY). C-6
OCR for page 125
The probability of failure, P, is P = P[3(X – Y) – I] < 0 or P[3(X – Y) < I]. For normal random variables, it becomes, 0 − [3( μ X − μY ) − I ] P = Φ( ) (3σ X )2 + (3σ Y )2 For numerical
illustrations, assume hypothetically the following: Current average gasoline consumption, is ȝX = 10 million gallons, with a standard deviation of ıX = 2 million gallons. With an investment of
$10,000,000, the average reduced consumption is expected to be ȝY = 8 million gallons, and ıY = 2 million gallons. The risk of a negative benefit is Therefore, with an investment of $10 million,
there is a 68 percent probability of a negative benefit. RELIABILITY ANALYSIS The practical approach to ensure the reliability or safety of an engineered system is to apply the first-order
reliability method (FORM). The basics of the method can be described below. The evaluation of reliability of an engineered system may be considered as a problem of supply versus demand; for this
purpose, define the following random variables: X = the supply and Y = the demand. The objective of a reliability analysis is to insure that (X > Y). If the probability density functions (PDFs) of X
and Y are, respectively, fX(x) and fY(y), then the reliability of the system is measured by the probability of failure (non)performance), ∞ pF = P ( X ≤ Y ) = ³0 FX ( y ) fY ( y )dy in which, y FX (
y ) = ³0 f X ( x)dx C-7
OCR for page 125
The above is a convolution integral, shown graphically in Figure C.1. FIGURE C.1 The probability of failure. The corresponding probability of performance is then pS = 1 − p F Consider a system in
which the available supply, X, is a Gaussian or normal random variable N(ȝX, ıX) and the demand is also a Gaussian random variable N(ȝY, ıY). The difference, M = X – Y, called “safety margin”, is
also a Gaussian variable with a mean of μ M = μ X − μY 2 2 2 If X and Y are statistically independent, the variance of M is σ M = σ X + σY M − μM Furthermore, σM is N(0,1). Hence, the probability of
nonperformance is (μ)− pF = FM (0) = Φ σ M = 1 − Φ σ M M M (μ ) in which Ɏ is the cumulative probability of the standard Gaussian distribution, N(0,1) Clearly, the reliability of the system is a
function of the ratio ȝM/ıM, which may be called the safety index or reliability index and denoted by ȕ μM μ − μY In this case, β= = X σM σ X + σY 2 2 C-8
OCR for page 125
If the supply and demand are both lognormal random variables, the corresponding reliability index would be ln( xm / y m ) β = δ X + δY 2 2 and the probability of nonperformance can be expressed as pF
= Φ ( − β ) = 1 − Φ ( β ) In the first case, where X and Y are both Gaussian random variables, the quantitative relation between pF and ȕ is unique (one to one) as follows: pF β pF β 0.5 0 0.01 2.33
0.25 0.67 10−3 3.10 -4 0.16 1.00 10 3.72 -5 0.10 1.28 10 4.25 -6 0.05 1.65 10 4.75 The First-Order Reliability Method (FORM) Engineers are, traditionally, reluctant to admit a probability of failure;
for this reason, a good alternative strategy is to use an equivalent measure, the safety index ȕ – which is a complete measure of the safety or performance of an engineered system. This has served to
spur the practical implementation of the probabilistic approach in engineering. Using the ȕ and FORM has contributed greatly to the practical implementation of reliability engineering (Ang and
Cornell, 1974). The basics of FORM may be described as follows: Introduce the reduced variates, X’ and Y’, X −μ Y −μ X ' = σ X and Y ' = σ Y X Y In the space of X’ and Y’, the safe and failure states
of the system may be represented as shown in Figure C.2. C-9
OCR for page 125
FIGURE C.2 Illustration of safe and failure regions. In terms of the reduced variates, the limit state equation M = 0 (X – Y = 0) becomes σ X X '− σ Y Y '+ μ X − μY = 0 From the above figure in the
reduced variates, we can clearly distinguish the failure region from the safe region, and distinguish the limit state equation (or failure surface) that separates the two regions. On that basis, the
distance, d, from the failure surface to the origin, o, is a measure of safety or reliability and in fact is the safety index ȕ. That distance is (from analytic geometry) μ X − μY β= σ X +σ Y 2 2 and
thus the probability of failure is pF = Φ ( − β ) C-10 | {"url":"http://books.nap.edu/openbook.php?record_id=13280&page=125","timestamp":"2014-04-19T15:47:44Z","content_type":null,"content_length":"55435","record_id":"<urn:uuid:edb0a8d5-20e4-456c-b7e7-ba0e81cd8948>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00139-ip-10-147-4-33.ec2.internal.warc.gz"} |
A metric on a set of objects defines a measure of distance on the set, that is, for each pair of objects it defines the distance between those objects. Formally a metric is a function \(d\) from
pairs of elements of the set to the non-negative real numbers satisfying,
1. \(d(a,b) = 0\) if and only if \(a=b\),
2. \(d(a,b)=d(b,a)\), and
3. \(d(a,b)+d(b,c)\geq d(a,c)\)
for all \(a\), \(b\), and \(c\) in the set. The third condition is known as the triangle inequality, and captures the idea that you can't draw a triangle with three lengths if one is longer than the
other two combined.
The distance formula in the Cartesian plane is an example of a metric, but there are many other possible metrics on the Cartesian plane, and in more abstract settings the concept of a metric is
central, for instance in vector spaces and Hilbert spaces.
• [MLA] “metric.” Platonic Realms Interactive Mathematics Encyclopedia. Platonic Realms, 28 Mar 2013. Web. 28 Mar 2013. <http://platonicrealms.com/>
• [APA] metric (28 Mar 2013). Retrieved 28 Mar 2013 from the Platonic Realms Interactive Mathematics Encyclopedia: http://platonicrealms.com/encyclopedia/metric/ | {"url":"http://platonicrealms.com/encyclopedia/metric","timestamp":"2014-04-17T06:42:27Z","content_type":null,"content_length":"50476","record_id":"<urn:uuid:f5ea0044-50b0-457e-a822-c6a3cfacdbd0>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00576-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Equation that Couldn't Be Solved: How Mathematical Genius Discovered the Language of Symmetry
The astrophysicist and popular science writer Mario Livio is clearly not afraid of long titles and subtitles. His previous books include The Golden Ratio: The Story of Phi, the World's Most
Astonishing Number and The Accelerating Universe: Infinite Expansion, the Cosmological Constant, and the Beauty of the Cosmos and both books take a single topic and analyze it from many different
angles. Livio's new book, The Equation that Couldn't Be Solved: How Mathematical Genius Discovered the Language of Symmetry continues in this niche, both in its format and in its unwieldy title. The
book is perhaps best viewed as a series of disjoint chapters, all of which are related to the overall ideas of group theory, but presented without a coherent narrative.
The first chapter is entitled "Symmetry" and covers much of the same ground as Hermann Weyl's classic book of the same name, albeit with many of the references updated from the 1950's to include Andy
Warhol, Woody Allen, and poker, among others. This chapter and the next discuss examples of symmetry in nature and art and why it is aesthetically as well as biologically important to us. In the
second chapter, Livio also introduces the notion of a group and some nice examples of symmetry groups. He then uses this terminology to help discuss many of the symmetry phenomena which he has
already described.
After these two introductory chapters, the tone of the book changes entirely as Livio moves into discussing some aspects of the history of mathematics for three chapters. The first of these three
chapters starts as close to the beginning of mathematics as one gets, in the Sumerian communities of the fourth millennium BC. Livio then discusses the emergence of equations as a mathematical tool,
and in just over a dozen pages takes the reader through the introduction of algebra and up to attempts to solve the cubic equation.
The story of Fiore and Tartaglia, two mathematicians who lived in the sixteenth century and who had public 'duels' to see who was better at solving cubic equations, is a great story, and one that
never fails to captivate students to whom I have told it. I won't spoil the ending for those of you who have never heard these tales, but this chapter alone is worth the price of admission to Livio's
After discussing cubic equations (and quartic equations, which make a surprise cameo appearance in the above tale), the natural question to consider is the quintic equation, and how to find a
solution to a general fifth degree polynomial equation. The title of Livio's book gives away the punchline which is almost certainly familiar to any mathematician — that there is no way to solve a
general quintic equation and that many such equations have no easily expressible solution — but it is a very interesting tale of how this punchline was reached by different mathematicians.
The fourth and fifth chapters of Livio's book tell the stories of Niels Abel and Evariste Galois, two of the mathematicians who tackled the problem of solving the quintic but whose work on this
problem led to much deeper ideas and to the creation of entirely new fields of mathematics. The two stories have a decent amount in common — both men lived in the early part of the nineteenth
century, both men died young, both led tragic lives which would make good made-for-television movies, and both made great contributions to mathematics. Livio tells the stories of Abel, the
"poverty-stricken mathematician", and Galois, the "romantic mathematician" in a very interesting way, even for a reader who already knows the stories, and includes many nice photographs and
reproductions of historical documents such as the mathematicians' manuscripts.
Of course, the one detail that many people know about Galois's life is that he died in a duel — one with pistols rather than cubic equations. Depending on whom you ask you may hear that he was killed
over a woman or that he was killed for political reasons or for any number of other reasons. Livio details all of these 'conspiracy theories' and devotes an entire section of the chapter on Galois to
a Patricia Cornwall-esque attempt to unravel the mysteries surrounding Galois's death. His conclusions, not to mention his presentation of the 'evidence' are very compelling and offer a take on the
story that I had not previously seen.
After spending over a hundred pages looking like a history book, the book then does another 180 and once again looks like a math book in a chapter entitled simply "Groups." Luckily for the reader,
Livio is just as good at writing in this manner. He begins by expanding his earlier notions of symmetry groups by looking at the permutations of the letters in the word GALOIS, and before you know it
has discussed Rubik's cubes, the draft lottery for the Vietnam War, Levi-Strauss's Elementary Structures of Kinship, and non-euclidean geometry, all in the name of developing some of the key ideas in
group theory. Most impressively, Livio gives a very readable two page summary of Galois's proof of the insolvability of the quintic. Livio does not give a rigorous treatment of group theory by any
stretch of the imagination — and there are ways that I think he could have approached the material in a better way, such as including the concept and language of group actions — but he does a good
job at giving a flavor of the kinds of results and ideas in the field that will hopefully inspire the non-mathematician reader to go sign up for the nearest course in abstract algebra.
The seventh chapter changes flavors once again, and focuses primarily on ways in which principles of symmetry show up in physics, ranging from Newton's study of the dynamics of celestial bodies to
more modern ideas such as supersymmetry and Lie groups. The eighth chapter brings us full circle, as Livio once again discusses snapshots of symmetry in fields such as cognitive science and biology.
Livio also once again digresses into some high-powered mathematics, finishing the main body of the book with a discussion of the so-called Monster group, and discussing the classification of finite
simple groups.
The book concludes with a chapter entitled "Requiem for a Romantic Genius" in which Livio tackles the very notions of creativity and genius, and makes a compelling argument that Galois should be
considered one of the most creative and influential mathematicians throughout history, an argument that this reviewer would certainly agree with.
As I have mentioned several times, Livio is a very engaging writer. The Equation that Couldn't Be Solved is a very well written book about very interesting subject matter. Livio supplements his words
with many great illustrations, and also throws in pop culture references ranging from Shakespeare to the John Cusack film Serendipity.
Personally, I found the lack of a cohesive narrative and the scattershot nature of the different chapters to be a bit disconcerting, and I would have preferred a book that seemed a bit more
intentionally put together. On the other hand, I imagine that many readers will enjoy reading the different views on the material, and the fact that much of the book is not very technical might trick
a few civilians (by which I mean non-mathematicians) into learning some of the ideas of group theory and seeing the beauty of abstract mathematics.
In the end, the misgivings I have are far outweighed by the quality of the book. As a mathematician, I did not learn much in the way of mathematics from reading this book, but I did learn quite a bit
of history and I enjoyed reading the exposition of the math and physics. I also think that The Equation that Could Not Be Solved would be an excellent book for a student to pick up to get drawn into
the world of abstract algebra, and I have already been recommending the book to friends in the other science departments on campus. This is the kind of book, and Livio is the kind of author, that
will convince the kind of scientifically-minded people who read magazines like Seed and Discover (the latter of which put Livio's book on their list of the Best Science Books of 2005) that
mathematics in general, and group theory in particular, is an exciting and relevant field.
Darren Glass is an Assistant Professor at Gettysburg College. His mathematical interests include algebraic geometry, number theory, and (wait for it) Galois Theory. He can be reached at
dglass@gettysburg.edu . | {"url":"http://www.maa.org/publications/maa-reviews/the-equation-that-couldnt-be-solved-how-mathematical-genius-discovered-the-language-of-symmetry","timestamp":"2014-04-17T10:01:58Z","content_type":null,"content_length":"105474","record_id":"<urn:uuid:0695e726-8fd5-479f-8853-9bf2afa0db8a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00595-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
What is the correct symbol for this line segment?
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4fa81c29e4b059b524f3ff81","timestamp":"2014-04-20T13:41:44Z","content_type":null,"content_length":"51798","record_id":"<urn:uuid:aaca483e-935f-4a25-9ffa-4a86950a8e2b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Greatest Mystery In All Of Physics
Because this is me, I must start with a lot of disclaimers. First, the title is catchy, but many would disagree with the mystery I’ve identified. Even I might. So, please try to avoid flaming me for
my choice. Second, very shortly I will post “The Most Elegant Solution In All Of Physics,” a post that might allow one to argue that what I’m about to identify as the greatest mystery isn’t a mystery
at all! Groundwork laid, here we go….
In physics, there is this quantity “mass” that we use to describe how much “stuff” there is in a particle. Technically speaking, “mass” is the energy content of an object measured by an observer when
that object is at rest with respect to the observer, and when the observer is viewing that object as a closed system from the outside. (In other words, we don’t know anything about “internal energy,”
because the object is just a thing.)
The thing is, there really are two different kinds of mass. First, there is inertial mass, which describes how much an object resists being pushed around by any kind of force. Second, there is
gravitational mass, which describes how strongly an object couples to the gravitational field. And, yet, to the best precision we’ve been able to measure, these two kinds of mass are exactly the same
. So much so that in introductory physics classes we just call it “mass”, and students may not even realize that there’s anything surprising about the fact that the two are the same!
If you’ve taken freshman Physics, you’ve seen a couple of equations. (Yes, I’m about to post equations; they’re very simple, and even people who are “not math people” can understand what I want you
to understand about them; please bear with me!)
First, we have the second-most famous equation in all of Physics:
In this equation, a is acceleration. If something is at rest, you must accelerate it to get it moving. F is the force it takes to move an object with acceleration a, and m is that object’s mass, or
how much inertia the object has. It takes more force to get a heavier object moving; this is intuitive, and matches our everyday experience.
The second equation I write in a slightly non-standard form:
Here, F is the gravitational force between two objects, one of mass big-M, and one of mass little-m. G is the universal gravitational constant (basically, a parameter of our Universe that describes
how strong gravity is in general), and r is the distance between the two objects.
For present purposes, think of this slightly differently. Assume that M is a really big mass– say, the Earth– and that m is a much smaller mass&ndash: say, you. Then, the quantity in parentheses, (GM
/r^2), describes how strong the gravitational field of the Earth is. Put in the radius of the Earth for r. Multiply that by your mass, and you get how much the Earth is pulling on you due to its
In freshman physics, we get so used to using these two equations that we never stop to think… why should the little-m in those two equations be the same? Indeed, perhaps we should be writing the
equations as follows:
The two concepts are very different. Inertial mass says how much an object resists being pushed around by any force whatsoever. Gravitational mass says how much an object couples to the very specific
force of gravity. Why are they the same? To try to put this in higher relief, consider another formula that many see in freshman physics:
This is the electric force between two objects, one of charge big-Q, the other of charge little-q. (It’s conventional to use the variable q for charge; don’t ask me why, I have no clue.) Or, to put
it another way, the quantity in parentheses is the electric field strength created by a particle of charge big-Q a distance r away (where the 1/4πε_0 represents the strength of electric forces in
general, and is just a constant). Then, little-q tells you how strongly the particle with charge little-q couples to that electric field. If you then wanted to figure out how much the particle moved
around, you’d need its inertial mass. Notice, however, that the electrical charge is completely different from the inertial mass. One says how strong something couples to a field, the other says how
much you need to push something around to get it moving.
Yet, with gravity, how much an object couples to the field is exactly the same as how much the object resists moving around. This isn’t true with any other force. There’s clearly something special
about gravity.
So here’s my candidate for the greatest mystery in all of physics: why are inertial mass and gravitational mass the same?
Coming soon to this blog: the solution, in the form of Einstein’s General Relativity.
1. #1 NJ February 26, 2007
OK, naive mineralogist question here: What would be the consequences if they weren’t the same?
2. #2 Jonathan Vos Post February 26, 2007
NJ: Eotvos experiments of sufficient accuracy would give non-null results. The Moon, whose iron-rich core in aluminum-rich crust is off center, would have measurably different orbital dynamics
(perhaps at level detectable by bouncing laser beams off those corner reflectors left by Project Apoillo astronauts).
Trivial or pointless to a layman, perhaps, but it would be a Nobel-prize winning experiment if such results were found.
3. #3 Rob Knop February 26, 2007
Indeed, I have a friend who is working on measuring the orbit of the Moon to centimeter or millimeter accuracy in an attempt to see any deviations from the predictions of General Relativity. See
More broadly speaking, the “Galileo experiment” wouldn’t work to sufficiently high precision. The fact that objects of different masses fall at the same rate is a consequence of (or, indeed, a
restatement of) the equivalence of inertial and gravitational mass. Differences between the orbits of things of different composition, indeed, are exactly that: things of different mass falling
at different rates.
4. #4 Benjamin Franz February 26, 2007
I guess I am in the “naive Mach’s principle” camp. It just doesn’t seem especially mysterious on that level to me: They are are measured to be the same simply because they are two different ways
of measuring the same thing. The questions then occur on a different level: What types of theories and universes are compatible with Mach’s principle?
5. #5 Melissa G February 26, 2007
Gravity is wacky!
6. #6 Rob Knop February 26, 2007
Benjamin– whether or not it’s surprising depends on the point of view you’re coming from.
The way we usually talk about it in Freshman physics, where the force of gravity is “just another” force, it’s surprising.
If you want to talk about Unification and gravity somehow mixing in with the other three forces, it’s surprising.
If, on the other hand, you start a priori with the idea of gravity as geometry, then it’s not surprising.
7. #7 Keith February 26, 2007
What if m(inertial) and m(gravitational) wern’t exactly equal but just proportionate. The gravitational law would hold but the constant would have a different value.
8. #8 Rob Knop February 26, 2007
Keith — in that case, we’d just interpret it as a different value of G. That would be indistinguishable from them being the same.
Contrast to electrodynamics: you can double the mass of an object without changing its electric charge.
9. #9 amesolaire February 27, 2007
Can someone please explain to me why modern ether theories are considered as fringe and are generally dismissed as junk science? I’m a layman but I find the idea that the Universe is filled with
electron-positron dipoles (as a medium for EM-waves, among other things) very convincing. In one such theory (A.Rykov) the force of gravity is just the result of polarization of said dipoles
(“ether”), hence the relationship between electric and gravitational forces – both are manifestations of the Coulomb’s law. Notice that the Michaelson-Morley experiment does not rule out this
kind of ether. (I’m genuinely interested, so please don’t take it out on me if you find the above preposterous)
10. #10 Rob Knop February 27, 2007
Can someone please explain to me why modern ether theories are considered as fringe and are generally dismissed as junk science?
I’ll try to write more on this at some point– although I’ll probably write more in general about “junk science detection” methods. I’m pretty sure that Mark Chu-Carroll here has written some on
this, and others have as well.
In general, any theory that requires some other well-established theory to be wrong should be approached with extreme suspicion. Sometimes, they may be OK; that was the case with MOND for a long
time. (MOND is a theory that says that dynamics of galaxies can be explained by deviations of gravity from the Newtonian relationship in regimes of extremely low acceleration, rather than via the
invocation of dark matter.) Most of us were immediately suspicious of MOND, because anything that starts with modifying such a well-established bit of Physics as the Newtonian limit of GR sets
off our suspicion alarm; however, MOND survived for a while because if you thought enough about it, it was working in a regime where we hadn’t really tested Newton’s gravity as well as we might
have liked. Nowadays, I think that MOND should be relegated to the regime of discarded theories alongside the steady-state Universe, but there are some who disagree with me.
Most junk science out there, however, doesn’t just modify a well-established theory in a regime where it’s not terribly well tested. They throw out well-established theories and the results of
all the experiments that confirm them in wide regimes. When you see that sort of thing, 999,999 times out of 1,000,000 (at least), you’re dealing with crackpot science rather than an intuitive
I read the beginning of the article you pointed two and saw this:
“This is despite the paradox that an isolated rotated object should not experience centrifugal forces.”
That sentence alone is enough to convince me that this guy is a clueless crackpot who should be ignored.
First, centrifugal forces are scary things. First, in one sense, they aren’t real forces; they are artifacts of being in a rotating reference frame. Second, an isolated object does feel them. Put
a cylinder of gas deep, deep into intergalactic space in a static Universe that’s not expanding (so there’s no traditional gravity anywhere), set it spinning, and the gas will differentiate just
as if it were in a centrifuge here on Earth.
That sets off my crackpot alarm enough for me to know that there’s no need to read the rest of this guy’s paper. I don’t trust him to have a tenth of a clue to know what he’s talking about.
11. #11 Rob Knop February 27, 2007
Doh… “pointed to,” not “pointed two.” Thinking sentence n+1 while typing sentence n can be hazardous.
12. #12 amesolaire February 27, 2007
Thank you for your very kind and considerate response. I agree with the point that hypotheses and theories requiring us to discard experimental results are not worth considering.
I am afraid that with my initial sentence I conditioned you to a certain attitude towards the hypothesis I’m referring to, which I should have been more careful about.
The author of said hypothesis is the chief of a seismometry lab, which does not exactly qualify him as an easy physics crackpot candidate, but does not preclude him from being one either. Still,
the meaning of the sentence you are talking about, it appears, has been lost in translation (he is russian). I think what he meant by that was to illustrate that if you assume the Mach’s view of
inertia, then you might agree that “an isolated rotating object should not experience centrifugal forces”, which is obviously not true, i.e. constitutes a paradox. I might be wrong on that, and
I’m not really prepared to address every point of contention, but I’d be happy to have such points identified.
I’m weary of wasting your time and that of your readers, but on the other hand I find the inability to neither generate on my own nor gather expert opinion in either direction on something that
seemingly makes sense, frustrating. So if anyone can give me arguments either for or against his line of reasoning – I’d be eternally grateful.
Thank you again.
13. #13 Keith February 27, 2007
Adding to my comment yesterday.
You state that m(inertial) and m(gravitational) are exactly the same.
I suspect that the value of the gravitational constant was determined by assuming that they are the same so I smell a circular argument here.
Suppose a constant does exist that relates the gravitiational mass of an object to the inertial mass; for fun let’s call it Keith’s Constant.
For your statement to be true it is necessary to prove that Keith’s constant equals exactly 1.
Can it be done?
14. #14 Rob Knop February 28, 2007
I suspect that the value of the gravitational constant was determined by assuming that they are the same so I smell a circular argument here.
Sure. So say that m(inertial) and m(gravitational) are perfectly proportional, instead. The rest of the argument is still there; for no other force do you have that perfect proportionality. Yes,
we choose G so that they have the same numerical value, but that doesn’t really make the argument circular.
15. #15 Rob Knop February 28, 2007
Re: proving that “Keith’s Constant” is 1, the fact that GR work so well tells us that just as the speed of light gives a natural, universal scaling between space and time, the gravitational
constant (together with the speed of light) gives a natural, universal scaling between mass, space, and time.
If GR didn’t work, then we wouldn’t have that, so I offer the empirical validation of GR as the proof that Keith’s Constant is 1.
Just as it’s completely arbitrary to measure distance in meters and speed in seconds, it’s arbitrary to choose G to be whatever you want it to be; by choosing it, you’re effectively choosing the
mass units you’re using. But everything still falls at exactly the same rate, so the equivalence between inertial and gravitational mass remains.
16. #16 raj February 28, 2007
You state that m(inertial) and m(gravitational) are exactly the same.
I suspect that the value of the gravitational constant was determined by assuming that they are the same so I smell a circular argument here.
Actually, that last is incorrect, and can be shown to be incorrect by a couple of simple experiments. First, measure the inertial mass of two test particles–that’s fairly easy to do. And
thereafter measure the force of the gravitational attraction between the two–that’s fairly easy to do, too. And from the latter experiment and Newton’s equation solve for “G,” Newton’s constant.
Next, run another experiment. Substitute for one of the test particles in the first experiment, a test particle of another inertial mass. As I suggested above, we know how to measure inertial
masses, so we know how to measure the inertial mass of the new test particle, too. And then measure the force of the gravitational attraction between the new test test particle, and the remaining
test particle from the earlier experient. Use Newton’s equation, solve for “G,” and voila! you’ll get the same value to within a margin of experimental error. It isn’t that complicated.
It really is an interesting fact that inertial mass and gravitational mass are precisely the same.
17. #17 raj February 28, 2007
Question for Prof. Knop. Isn’t it the case that the similarities in the force equations for gravity and electrodynamics have more to do with the fact that they are central (point source) force
equations, than other things? In those cases, inverse-square relationships actually do make a lot of sense. But, IIR my electrodynamics course correctly, electrodynamics equations involving, for
example, planar surfaces such as those in typical capacitors are much more complicated than mere inverse-square relationships.
18. #18 Wayne McCoy February 28, 2007
I have another mystery. Why is the velocity of light in vacuo finite? What is it about spacetime that constrains c to be about 3 x 10^8 m/s? Is there some field (e.g., Higgs), or possibly the
structure of spacetime itself, with which the electromagnetic field of photons interacts, thereby impeding the motion of the photon through it? Or is it that spacetime is an illusion (some
current theories suggest this) and thus “motion of photons” is also an illusion? Of the first suggestion, there are the current theories of the aether (similar to but not the same concept that
provoked the Michelson-Morley experiment). On the second, some interpretations of string theory suggest that space and spacetime are illusory. What’s your take on this?
19. #19 philw March 1, 2007
Speakng of String Theory, I think it would be an excellent topic for an essay or 3. As a non-scientist but science fascinated ex-engineer, I have real problems with a system of thought that has
produced no significant falsifyable predictions, unlike GR and the other well tested theories you cite previously. Why is there such group think by the vast majority of theorists? Tenure track?
20. #20 Wayne McCoy March 1, 2007
Actually, there are some significant falsifiable predictions of string theory. Lisa Randall of Harvard and others will be testing whether there are multiple hidden dimensions that string theory
requires to be a coherent and consistent theory, in series of experiments that will begin later this year or early next, when the Large Hadron Collider in Europe cranks up. Stay tuned. In the
mean time, take a look at Lisa Randall’s excellent book, “Warped Passages: Unraveling the Mysteries of the Universe’s Hidden Dimensions,” and don’t be too swayed by Lee Lee Smolin just yet.
21. #21 philw March 1, 2007
Good luck with the LHC experiments. I’ll await the results. Meanwhile have someone there contact physicist John Cramer who needs to warn them about The Hive.
22. #22 Rob Knop March 1, 2007
Why is the velocity of light in vacuo finite?
I’m not sure there’s a good answer to this.
I can answer “why do we believe the speed of light in vacuo is finite,” and that is that the theory that results from starting with that postulate is extremely well tested by experiment.
But why should it be that way? Dunno. It just is.
23. #23 Trevor Hearnden March 2, 2007
I’m new here os this may have come up before? It may also be a bit off topic but it concerns gravity. I understand there are two very different ways of looking at gravity: the warping of
space-time as described in General relativity and the exchange of Gravitons as postulated in many particle theories including string theory. Gravitons and indeed gravity are said to be limited to
the speed of light like everything else. So how do the gravitons cross the event horizon of a black hole? Clearly something must if the proposed giant black holes in Galactic centers exist. | {"url":"http://scienceblogs.com/interactions/2007/02/26/the-greatest-mystery-in-all-of/","timestamp":"2014-04-20T21:45:09Z","content_type":null,"content_length":"65440","record_id":"<urn:uuid:83ff717d-d33c-443e-9031-f07fffe380ee>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00602-ip-10-147-4-33.ec2.internal.warc.gz"} |
A module for working with directed graphs (digraphs). Some of the functions are specifically for working with directed acyclic graphs (DAGs), that is, directed graphs containing no cycles.
data Digraph v Source
A digraph is represented as DG vs es, where vs is the list of vertices, and es is the list of edges. Edges are directed: an edge (u,v) means an edge from u to v. A digraph is considered to be in
normal form if both es and vs are in ascending order. This is the preferred form, and some functions will only work for digraphs in normal form.
Functor Digraph
Eq v => Eq (Digraph v)
Ord v => Ord (Digraph v)
Show v => Show (Digraph v)
isoRepDAG :: Ord a => Digraph a -> Digraph IntSource
Given a directed acyclic graph (DAG), return a canonical representative for its isomorphism class. isoRepDAG dag is isomorphic to dag. It follows that if isoRepDAG dagA == isoRepDAG dagB then dagA is
isomorphic to dagB. Conversely, isoRepDAG dag is the minimal element in the isomorphism class, subject to some constraints. It follows that if dagA is isomorphic to dagB, then isoRepDAG dagA ==
isoRepDAG dagB.
The algorithm of course is faster on some DAGs than others: roughly speaking, it prefers "tall" DAGs (long chains) to "wide" DAGs (long antichains), and it prefers asymmetric DAGs (ie those with
smaller automorphism groups). | {"url":"http://hackage.haskell.org/package/HaskellForMaths-0.4.3/docs/Math-Combinatorics-Digraph.html","timestamp":"2014-04-21T00:54:58Z","content_type":null,"content_length":"17866","record_id":"<urn:uuid:a00da02b-ed64-4b72-ae93-dd55c36aeac2>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00219-ip-10-147-4-33.ec2.internal.warc.gz"} |
BrainBashers : Puzzles and Brain Teasers
Kevin and Daniel were rowing their canoe along the River Trent.
In the morning they managed to row upstream at an average speed of 2 miles per hour.
They then stopped for a spot of lunch and a nice rest.
In the afternoon, the pace was a little easier as they were now rowing downstream, and managed an average speed of 4 miles an hour.
The morning trip took them 3 hours longer than the afternoon.
How far did they row upstream? | {"url":"http://www.brainbashers.com/schoolpuzzles.asp","timestamp":"2014-04-16T04:17:26Z","content_type":null,"content_length":"8726","record_id":"<urn:uuid:ede7494b-81cd-447a-8c58-b6fb29d9680e>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebraic and Geometric Topology 3 (2003), paper no. 40, pages 1119-1138.
Global structure of the mod two symmetric algebra, H^*(BO;F_2), over the Steenrod Algebra
David J. Pengelley, Frank Williams
Abstract. The algebra S of symmetric invariants over the field with two elements is an unstable algebra over the Steenrod algebra A, and is isomorphic to the mod two cohomology of BO, the classifying
space for vector bundles. We provide a minimal presentation for S in the category of unstable A-algebras, i.e., minimal generators and minimal relations.
From this we produce minimal presentations for various unstable A-algebras associated with the cohomology of related spaces, such as the BO(2^m-1) that classify finite dimensional vector bundles, and
the connected covers of BO. The presentations then show that certain of these unstable A-algebras coalesce to produce the Dickson algebras of general linear group invariants, and we speculate about
possible related topological realizability.
Our methods also produce a related simple minimal A-module presentation of the cohomology of infinite dimensional real projective space, with filtered quotients the unstable modules F(2^p-1)/A bar{A}
_{p-2}, as described in an independent appendix.
Keywords. Symmetric algebra, Steenrod algebra, unstable algebra, classifying space, Dickson algebra, BO, real projective space.
AMS subject classification. Primary: 55R45. Secondary: 13A50, 16W22, 16W50, 55R40, 55S05, 55S10.
DOI: 10.2140/agt.2003.3.1119
E-print: arXiv:math.AT/0312220
Submitted: 24 October 2003. Accepted: 5 November2003. Published: 10 November 2003.
Notes on file formats
David J. Pengelley, Frank Williams
New Mexico State University
Las Cruces, NM 88003, USA
Email: davidp@nmsu.edu, frank@nmsu.edu
AGT home page
Archival Version
These pages are not updated anymore. They reflect the state of . For the current production of this journal, please refer to http://msp.warwick.ac.uk/. | {"url":"http://www.emis.de/journals/UW/agt/AGTVol3/agt-3-40.abs.html","timestamp":"2014-04-17T04:16:20Z","content_type":null,"content_length":"3652","record_id":"<urn:uuid:268e6fbb-d444-4ffb-a2bd-708ca735b822>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: April 2002 [00392]
[Date Index] [Thread Index] [Author Index]
Re: Row vs. Column Vectors (or Matrices)
• To: mathgroup at smc.vnet.net
• Subject: [mg33928] Re: [mg33908] Row vs. Column Vectors (or Matrices)
• From: Murray Eisenberg <murraye at attbi.com>
• Date: Tue, 23 Apr 2002 07:14:08 -0400 (EDT)
• Organization: Mathematics & Statistics, Univ. of Mass./Amherst
• References: <200204220457.AAA02121@smc.vnet.net>
• Reply-to: murray at math.umass.edu
• Sender: owner-wri-mathgroup at wolfram.com
It's not clear from your message whether you merely want to display a
"row vector" or to create one (these are different issues). Moreover,
the very notion of "row vector" is ambiguous (at least in ordinary
mathematical parlance!).
You can represent an ordinary vector in Mathematica as a list (which is
a one-dimensional creature):
v = {1, 2, 3}
Or you can represent it as a list whose sole element is a list (which is
essentiall a two-dimensional creature):
vRow = {{1, 2, 3}}
Then using MatrixForm[v] will produce a "stacked" column 3-rows high!
But using MatrixForm[vRow] will produce probably what you want to SEE:
(1 2 3)
You wrote " * " for the operation combining these two vectors. Do you
mean "dot product"? If so, then this is abbreviated by " . " in
Mathematica. Thus:
v . w
Unfortunately the following creates an error:
vRow . wRow
Dot::"dotsh": "Tensors ({{1, 2, 3})} and ({4, 5, 6})} have incompatible
vRow . Transpose[wRow]
And the latter agrees perfectly with the true definition of dot product,
in mathematics, of a 1-by-3 matrix with a 3-by-1 matrix, which yields a
1-by-1 matrix (and NOT a scalar).
You see that ordinary mathematical notation gets a bit sloppy about
these distinctions, whereas an executable notation such as Mathematica
must be very precise.
John Resler wrote:
> Hi,
> I'm new to Mathematica and am doing a little linear algebra. I am
> aware of the MatrixForm[m]
> function but I know of no way to create a row vector eg. [ 1.0 2.0 3.0
> ] * [ 1.0
> 2.0
> 3.0].
> Can someone point me in the right direction? Thanks ahead of time.
> -John
Murray Eisenberg murray at math.umass.edu
Mathematics & Statistics Dept.
Lederle Graduate Research Tower phone 413 549-1020 (H)
University of Massachusetts 413 545-2859 (W)
710 North Pleasant Street
Amherst, MA 01375
• References: | {"url":"http://forums.wolfram.com/mathgroup/archive/2002/Apr/msg00392.html","timestamp":"2014-04-16T10:16:13Z","content_type":null,"content_length":"36764","record_id":"<urn:uuid:21cfae97-809d-43ad-9707-19651dd42770>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-user] Fitting an arbitrary distribution
josef.pktd@gmai... josef.pktd@gmai...
Fri May 22 09:40:00 CDT 2009
On Fri, May 22, 2009 at 12:46 AM, David Warde-Farley <dwf@cs.toronto.edu> wrote:
> On 21-May-09, at 11:47 PM, David Baddeley wrote:
>> Thanks for the prompt replies!
>> I guess what I was meaning was that the PDF / histogram was the sum
>> or multiple Gaussians/normal distibutions. Sorry about the
>> ambiguity. I've had a quick look at the Em package and mixture
>> models, and while my problem is similar they might be a little more
>> general.
>> I guess I should describe the problem in a bit more detail - I'm
>> measuring the length of an objects which can be built up from
>> multiple unit cells. The measured size distribution is thus
>> multimodal, and I want to extract both the unit size and the
>> fraction of objects having each number of unit cells. This makes the
>> problem much more constrained than what is dealt with in the Em
>> package.
> So there is exactly one kind of 'unit cell' and then different lengths
> of objects? Are your observations expected to be particularly noisy?
> What you're staring down isn't quite a standard mixture model, your
> 'hidden variable' is just this unit size.
> Depending on the scale of your problem PyMC would work very well here.
> You'd have one Stochastic for the unit size, another for the integer
> multiple associated with each quantity, a Deterministic that
> multiplies the two and either make the Deterministic observed or add
> another Stochastic with the Deterministic as its mean if you believe
> your observations are noisy with a certain noise distribution.
> Note that since these things you are measuring are supposedly discrete
> multiples of the unit size a Gaussian distribution isn't appropriate
> for the multiples. Something like a Poisson would make more sense.
> Then you'd just fit the model and look at the posterior distributions
> over each quantity of interest, taking the maximum a posteriori
> (posterior mean) estimate where appropriate. To determine the
> fraction of the population that have a given number of unit cells, you
> basically just count (you'd have an estimate for each observation of
> how many unit cells it has).
> You could also do this by EM, but pyem would not be suitable, as it's
> built specifically for the case of vanilla Gaussian mixtures. PyMC
> would be a ready-made solution which would give you the additional
> flexibility of inferring a distribution over all estimated parameters
> rather than just a point-estimate.
I also think that pymc offers the best and well tested way of doing this.
Just to show how it is possible to do it with stats distributions, I
attach a script where I quickly hacked together different pieces to
try out the maximum likelihood estimation for this case. It's not
cleaned up and has pieces left over from copy and paste, but it's a
proof of concept for how univariate mixture distributions can be
constructed as subclasses of rv_continuous. But to be really useful,
several parts need to be improved.
-------------- next part --------------
# -*- coding: utf-8 -*-
* estimates a mixture of normal distribution given by random sum of iid normal
* only works for good initial conditions and if variance of individual
normal distribution is not too large
* maximum likelihood estimation using generic fit method is not very precise
and makes interpretation of parameters more difficult
* maximum likelihood estimation with fixed location and scale done in a quick
hack, gives good results for the "nice" estimation problem (good initial
conditions and small variance)
* to use generic methods, only pdf would need to be specified
* restriction: hard coded number of mixtures is four
* to be useful would need AIC, BIC, and covariance matrix of maximum
likelihood estimates
* does not impose non-negativity of the mixture probabilities, could use
logit transformation
* this pattern might work well for arbitrary distributions, when the
estimation problem is nice, e.g. mixture of only 2 distributions or well
separated distributions
* need proper implementation of estimation with frozen parameters like
location and scale
#see: for application of frozen parameter in fit
License: same as scipy
import numpy as np
from scipy import stats, special, optimize
from scipy.stats import distributions
class normmix_gen(distributions.rv_continuous):
def _rvs(self, mu, sig, p1, p2, p3):
return np.hstack((
mu+ sig*stats.norm.rvs(size=p1*self._size),
def _pdf(self, x, mu, sig, p1, p2, p3):
return p1*stats.norm.pdf(x,loc=mu, scale=sig) + \
p2*stats.norm.pdf(x,loc=2.0*mu, scale=2.0*sig) +\
p3*stats.norm.pdf(x,loc=3.0*mu, scale=3.0*sig) +\
(1-p1-p2-p3)*stats.norm.pdf(x,loc=4.0*mu, scale=4.0*sig)
def _nnlf_(self, x, *args): # inherited version for comparison
#print 'args in nnlf_', args
return -np.sum(np.log(self._pdf(x, *args)),axis=0)
def nnlf_fix(self, theta, x):
# quick hack to remove loc and scale, removed also bound checking
# - sum (log pdf(x, theta),axis=0)
# where theta are the parameters (including loc and scale)
# try:
# loc = theta[-2]
# scale = theta[-1]
# args = tuple(theta[:-2])
# except IndexError:
# raise ValueError, "Not enough input arguments."
# if not self._argcheck(*args) or scale <= 0:
# return inf
# x = arr((x-loc) / scale)
# cond0 = (x <= self.a) | (x >= self.b)
# if (any(cond0)):
# return inf
# else:
# N = len(x)
args = tuple(theta)
#print args
return self._nnlf_(x, *args)# + N*log(scale)
def fit_fix(self, data, *args, **kwds):
'''stolen from frozen distribution estimation and partial
removal of loc and scale'''
loc0, scale0 = map(kwds.get, ['loc', 'scale'],[0.0, 1.0])
Narg = len(args)
if Narg == 0 and hasattr(self, '_fitstart'):
x0 = self._fitstart(data)
elif Narg > self.numargs:
raise ValueError, "Too many input arguments."
#args += (1.0,)*(self.numargs-Narg)
# location and scale are at the end
x0 = args# + (loc0, scale0)
if 'frozen' in kwds:
frmask = np.array(kwds['frozen'])
if len(frmask) != self.numargs+2:
raise ValueError, "Incorrect number of frozen arguments."
# keep starting values for not frozen parameters
x0 = np.array(x0)[np.isnan(frmask)]
frmask = None
#print x0
#print frmask
return optimize.fmin(self.nnlf_fix, x0,
args=(np.ravel(data), ), disp=0)
normmix = normmix_gen(a=0.0,name='normmix',longname='A normmix',
shapes=('mu, sig, p1, p2, p3'),
true = (1.,0.05,0.4,0.2,0.2)
rvs = normmix.rvs(size=1000,*true)
#rvs = normmix.rvs(1.,0.01,0.4,0.2,0.2, size=1000)
#rvs = normmix.rvs(1.,0.05,0.5,0.49,0.01, size=1000)
est = normmix.fit(rvs,1.,0.005,0.4,0.2,0.2,loc=0,scale=1)
startval = np.array((1.,0.005,0.4,0.2,0.2))*1.1
#est2 = normmix.fit_fix(rvs,*startval)
est2 = normmix.fit_fix(1.0*rvs,1.05,0.005,0.3,0.21,0.21)
print 'estimate with generic fit'
print est
print 'estimate with fixed loc scale'
print est2
import matplotlib.pyplot as plt
#x = rvs
mu, sig, p1, p2, p3 = true
plt.plot(b,p1*stats.norm.pdf(b,loc=mu, scale=sig),
b,p2*stats.norm.pdf(b,loc=2.0*mu, scale=2.0*sig),
b,p3*stats.norm.pdf(b,loc=3.0*mu, scale=3.0*sig))
plt.title('estimated pdf, generic fit')
plt.title('estimated pdf, loc,scale fixed')
print np.array(est)[:-2]-true
print np.array(est2)-true
print 'estimated mean unit size', est2[0]
print 'estimated standard deviation of unit size', est2[1]
More information about the SciPy-user mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2009-May/021182.html","timestamp":"2014-04-18T13:58:31Z","content_type":null,"content_length":"11944","record_id":"<urn:uuid:83186661-1677-47b9-bd49-98e34073dc96>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00277-ip-10-147-4-33.ec2.internal.warc.gz"} |
If x > 1, what is the value of integer x?
Question Stats:
30%70% (00:52)based on 30 sessions
l0rrie wrote:
If x > 1, what is the value of integer x?
(1) There are x unique factors of x.
(2) The sum of x and any prime number larger than x is odd.
First of all, it is not a very straight forward question. Definitely needs some thinking so relax...
Ques: What is the value of x?
So we are looking for a single value of x.
First consider statement 2 since it is easier.
x + prime number greater than x = odd
There will be many many prime numbers greater than x. All prime numbers are odd except 2. So if you can add any prime number greater than x to x and get an odd number, it means x must be even.
(because Even + Odd = Odd)
So all statement 2 tells you is that x is even. It could be 2 or 4 or 6 etc
Now look at statement 1.
There are x unique factors of x. Think of the first number greater than 1.
2 has 2 unique factors: 1 and 2 ( 2 is a prime number)
What about 3? It has 2 unique factors: 1 and 3 (a prime number)
4 has 3 unique factors: 1, 2 , 4
Is it possible that any greater number x has x unique factors? No. Why?
For x to have x unique factors, each number from 1 to x must be a factor of x. Say if 10 had 10 unique factors, each number 1, 2, 3, 4, 5..., 9,10 would have to be a factor of 10 (because factors are
always positive integers)
But can 9 be a factor of 10 i.e. can (x-1) be a factor of x? No. 2 consecutive positive integers share only one common factor i.e. 1. Why? Check out the post given below for the answer:
So statement 1 is enough to tell us that x is 2.
Veritas Prep | GMAT Instructor
My Blog
Save $100 on Veritas Prep GMAT Courses And Admissions Consulting
Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options.
Veritas Prep Reviews | {"url":"http://gmatclub.com/forum/if-x-1-what-is-the-value-of-integer-x-111097.html?fl=similar","timestamp":"2014-04-17T01:28:59Z","content_type":null,"content_length":"196150","record_id":"<urn:uuid:0f95fe52-52ee-4a3c-ac0d-be464a5941cf>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00064-ip-10-147-4-33.ec2.internal.warc.gz"} |