% -*- texinfo -*-
% @deftypefn {Function File} {} hygecdf (@var{x}, @var{t}, @var{m}, @var{n})
% Compute the cumulative distribution function (CDF) at @var{x} of the
% hypergeometric distribution with parameters @var{t}, @var{m}, and
% @var{n}. This is the probability of obtaining not more than @var{x}
% marked items when randomly drawing a sample of size @var{n} without
% replacement from a population of total size @var{t} containing
% @var{m} marked items.
% The parameters @var{t}, @var{m}, and @var{n} must be positive integers
% with @var{m} and @var{n} not greater than @var{t}.
% @end deftypefn
function cdf = hygecdf (x, t, m, n)
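For readers outside Octave, the same quantity can be computed directly from the definition. This is a standard-library Python sketch (the function name simply mirrors the Octave one; it is not part of any library):

```python
from math import comb

def hygecdf(x, t, m, n):
    """P(at most x marked items) in a sample of size n drawn without
    replacement from t items, m of which are marked."""
    total = comb(t, n)
    return sum(comb(m, k) * comb(t - m, n - k)
               for k in range(min(x, m, n) + 1)) / total
```

For example, with t = 10, m = 4, n = 5, the probability of at most one marked item is (C(4,0)C(6,5) + C(4,1)C(6,4)) / C(10,5) = 66/252. Note that `math.comb(a, b)` returns 0 when b > a, which handles the edge terms automatically.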
This function calls:
• discrete_cdf % For each element of @var{x}, compute the cumulative distribution
• hygepdf % Compute the probability density function (PDF) at @var{x} of the
This function is called by: Generated on Sat 16-May-2009 00:04:49 by m2html © 2003 | {"url":"http://biosig-consulting.com/matlab/help/freetb4matlab/statistics/distributions/hygecdf.html","timestamp":"2014-04-21T12:31:06Z","content_type":null,"content_length":"3431","record_id":"<urn:uuid:b42f88ca-5bf2-4338-8407-20d06f076b34>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
Module Mathematics Types Of Modules
A selection of articles related to module mathematics types of modules.
Original articles from our library related to Module Mathematics Types Of Modules. See the Table of Contents for further available material (downloadable resources) on Module Mathematics Types Of Modules.
Module Mathematics Types Of Modules is described in multiple online sources; in addition to our editors' articles, see the section below for printable documents, books, and related discussion.
Suggested Pdf Resources
Suggested Web Resources
Module Mathematics Types Of Modules articles, reference materials. Need more on Module Mathematics Types Of Modules?
All # methods are module methods and should be called on the Math module.
New York J. Math. 2000 Mathematics Subject Classification.
Discusses the different number types, such as integers and reals, and explains the relationships between http://www.purplemath.com/modules/numtypes.
Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We
appreciate your suggestions and comments on further improvements of the site. | {"url":"http://www.realmagick.com/module-mathematics-types-of-modules/","timestamp":"2014-04-18T21:07:25Z","content_type":null,"content_length":"30104","record_id":"<urn:uuid:96d986ea-f7f1-48bb-84e8-987d6e3d17c0>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00298-ip-10-147-4-33.ec2.internal.warc.gz"} |
Toyota Shift Lever Bushing Replacement
OEM Replacement Parts Catalog
Latest Toyota Shift Lever Bushing Applications
Shift Lever Bushing, Lower/large
This is the lower one and larger of the two. The smaller/upper one is 33548-31010. OE calls this bushing a "seat." A Heavy Duty Urethane version is also listed for these applications.
  01/95 - 04/95 Toyota Tacoma 4WD I4 RegCab 3RZFE
  01/95 - 08/04 Toyota Tacoma 2WD I4 RegCab 2RZFE
  08/88 - 01/95 Toyota Pickup V6 4WD X-Cab 3VZE

Shift Lever Bushing, Upper/small
This is the upper one and smaller of the two. The larger/lower ones are 33505-35020 or -35030. Caution: the larger/lower ones are NOT used in every application listed.
  08/80 - 12/81 Toyota Corona 22R
  08/82 - 07/84 Toyota Cressida Sedan 5MGE
  08/84 - 07/88 Toyota Cressida Sedan 5MGE
  08/82 - 07/85 Toyota Celica GT 22REC
  08/83 - 07/88 Toyota Truck 22R
  08/85 - 07/88 Toyota Pickup Truck EFI 22REC
  08/85 - 07/88 Toyota PUP 2WD Crb Reg&XCab 22R
  08/88 - 07/90 Toyota PUP 2WD X-Cab Carb 22R
  08/88 - 07/93 Toyota Pickup V6 2WD RegCab 3VZE
  08/88 - 01/95 Toyota Pickup V6 2WD X-Cab 3VZE
  08/88 - 04/93 Toyota Supra Non-Turbo 7MGE
  04/89 - 10/95 Toyota 4Runner 4Cyl 4WD SR5 22RE
  11/95 - 07/02 Toyota 4Runner V6 2WD 5VZFE
  11/95 - 07/02 Toyota 4Runner V6 4WD 5VZFE
  01/95 - 08/04 Toyota Tacoma 2WD I4 RegCab 2RZFE
Some Toyota Shift Lever Bushing Applications (View Full Catalog Above)
Toyota 4Runner 4Cyl 4WD EFI Toyota 4Runner 4Cyl 4WD SR5 Toyota 4Runner V6 2WD
Toyota 4Runner V6 4WD Toyota 4Runner V6 4WD SR5 Toyota Celica GT
Toyota Celica GTS Cpe/LBack Toyota Celica ST Coupe Toyota Celica ST Cpe/L-Back
Toyota Corona Toyota Cressida Sedan Toyota Cressida Wagon
Toyota Pickup 4WD Long Bed Toyota Pickup 4WD Std Bed Toyota Pickup I4 4WD RegCab
Toyota Pickup I4 4WD X-Cab Toyota Pickup V6 2WD RegCab Toyota Pickup V6 2WD X-Cab
Toyota Pickup V6 4WD RegCab Toyota Pickup V6 4WD X-Cab Toyota Truck
Toyota PUP 2WD Crb Reg&XCab Toyota Pickup Truck EFI Toyota PUP 2WD Reg-Cab Carb
Toyota PUP 2WD Reg-Cab EFI Toyota PUP 2WD X-Cab Carb Toyota PUP 2WD Xtra-Cab EFI
Toyota Pickup Truck Toyota PUP 4WD EFI Reg&XCab Toyota Supra Celica
Toyota Supra Non-Turbo Toyota Supra Turbo Toyota Tacoma 2WD I4 RegCab
Toyota Tacoma 2WD I4 XtrCab Toyota Tacoma 4WD I4 RegCab Toyota Tacoma 4WD I4 XtrCab
Toyota Tundra 2WD V6 4 Dr
20R 22R 22RE 22REC 2RZFE 3RZFE 3VZE 5MGE
5VZFE 7MGE 7MGTE
More Applications (View Full Catalog Above)
BMW Mercedes Benz Porsche Saab | {"url":"http://www.redlinemotive.com/replacement/toyota/shiftleverbushing.asp","timestamp":"2014-04-21T01:59:59Z","content_type":null,"content_length":"39461","record_id":"<urn:uuid:9484029d-4ed0-4968-8aa5-51d9075c4c67>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00311-ip-10-147-4-33.ec2.internal.warc.gz"} |
Deriving moment of Inertia
I'm attempting to derive the moment of inertia for a cylindrical object.
I know that I=[tex]\int r^2 dm[/tex]
which equals =[tex]\int r^2 p dV[/tex]
My question begins here: the derivations I've seen pull p out of the integral, which makes sense to do because in this case it's a constant, p = M/([tex]\pi[/tex]r^2 L). If I don't pull p out before
integrating I get I = Mr^2; if I do pull it out, I get I = (1/2)Mr^2. I know the answer should be I = (1/2)Mr^2 because I have a solid cylindrical object. So why am I getting a different result when I leave p
in than when I pull p out, or am I just making a silly math error?
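A quick numeric check of the two versions (a Python sketch with arbitrary illustrative values for M, R, L; it sums over thin cylindrical shells of volume 2πrL dr by the midpoint rule) shows that the density must use the fixed outer radius R, not the integration variable r:

```python
import math

M, R, L = 3.0, 2.0, 5.0        # mass, outer radius, length (arbitrary values)
N = 100_000
dr = R / N
r = [(i + 0.5) * dr for i in range(N)]   # shell midpoints

# Correct: constant density rho = M / (pi R^2 L) with the fixed outer radius R.
rho = M / (math.pi * R**2 * L)
I_correct = sum(ri**2 * rho * 2 * math.pi * ri * L * dr for ri in r)

# The mistake: substituting rho = M / (pi r^2 L) with the *integration variable* r.
I_wrong = sum(ri**2 * (M / (math.pi * ri**2 * L)) * 2 * math.pi * ri * L * dr
              for ri in r)

print(I_correct)   # ~6.0  = (1/2) M R^2
print(I_wrong)     # ~12.0 = M R^2
```

So the discrepancy is not about pulling p out versus leaving it in; it is that writing p = M/(πr²L) silently replaces the fixed outer radius R with the dummy integration variable r.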
Below is my work when I leave p inside the integral
I=[tex]\int r^2 \, p \, (2\pi r L)\, dr[/tex]
=2M[tex]\int r dr[/tex] (replacing p with M/([tex]\pi[/tex]r^2L) before integrating) | {"url":"http://www.physicsforums.com/showthread.php?t=354206","timestamp":"2014-04-16T07:38:12Z","content_type":null,"content_length":"25174","record_id":"<urn:uuid:6d5290c9-b151-4bcf-9720-bf2056033a90>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00122-ip-10-147-4-33.ec2.internal.warc.gz"} |
Position Vector of Reflection of a Point
August 10th 2009, 12:18 AM #1
Hello, I am currently having trouble understanding how to find the reflection of a point, given the coordinates of that point and the vector equation of the line. I have tried finding the
magnitude of the foot of the perpendicular, but I still could not get the answer. I hope my doubts can be cleared.
This is the question :
Find the position vector of the image C' of the point C(5,2,-1) under a reflection in the line l joining the points A(3,1,3) and B(-1,-1,-3).
Hello MathJunction
Welcome to Math Help Forum!
Using vectors, suppose that the position vector of $A$ is $\vec{a} = 3\vec{i} + \vec{j} + 3\vec{k}$. Similarly $\vec{b} = -\vec{i} -\vec{j} -3\vec{k},\, \vec{c} = 5\vec{i} + 2\vec{j} -\vec{k}$.
Then if $M$ is the mid-point of $AB$, its position vector is given by $\vec{m} = \tfrac12(\vec{a}+\vec{b}) = -\vec{i}$.
$\Rightarrow \vec{CM} = \vec{m} - \vec{c} = -6\vec{i} -2\vec{j} +\vec{k}$
Now if $C'$ is the reflection of $C$ in $AB$, $\vec{MC'}=\vec{CM}$
$\Rightarrow \vec{c'} = \vec{m} + \vec{MC'} =\vec{m} + \vec{CM}$, where $\vec{c'}$ is the position vector of $C'$.
Can you complete it now?
Hello Grandad, thank you for taking some time off in helping me. I regret to tell you I am still blurred. Shouldn't vector m be 1/2(a+b) = (1,0,0) since vector AB is ( -4,-2,-6) ? Thus AM is half
of AB which is (-2,-1,-3) and then OM would be vector AM + vector OA?
Sorry to trouble you again.
Hello MathJunction
You're quite right - I got a sign wrong. Sorry!
$\vec{m} = \tfrac12(\vec{a}+\vec{b})= \vec{i}$
So $\vec{CM}=\vec{m}-\vec{c}=-4\vec{i} -2\vec{j} +\vec{k}$
Then continue as I said before.
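As a numeric cross-check, here is a Python sketch. Note that, in general, the mirror image of C in the line AB is taken through the foot of the perpendicular from C to the line, which coincides with the midpoint M of AB only when CM happens to be perpendicular to AB:

```python
A, B, C = (3, 1, 3), (-1, -1, -3), (5, 2, -1)

d  = tuple(b - a for a, b in zip(A, B))      # direction of line AB
AC = tuple(c - a for a, c in zip(A, C))
t  = sum(p * q for p, q in zip(AC, d)) / sum(q * q for q in d)

F  = tuple(a + t * q for a, q in zip(A, d))  # foot of perpendicular from C
C2 = tuple(2 * f - c for f, c in zip(F, C))  # mirror image C' = 2F - C

print(F)    # (2.0, 0.5, 1.5)
print(C2)   # (-1.0, -1.0, 4.0)
print(sum((f - c) * q for f, c, q in zip(F, C, d)))   # 0.0, so CF is perpendicular to AB
```

By contrast, reflecting through the midpoint m = (1, 0, 0) of AB gives 2m - c = (-3, -2, 1). Here CM . AB = 14, not 0, so the two constructions differ, and it is worth checking which one the exercise intends.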
| {"url":"http://mathhelpforum.com/geometry/97536-position-vector-reflection-point.html","timestamp":"2014-04-17T20:53:23Z","content_type":null,"content_length":"49278","record_id":"<urn:uuid:98db29db-0c6f-4d90-9ab6-bd1af16ce41d>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00266-ip-10-147-4-33.ec2.internal.warc.gz"} |
One Variable Equations
Many financial situations can be modeled with one-variable equations and there are a handful of contexts in the problems shared below to explore these concepts. Sharing the bill at restaurant,
looking at the change in a savings or checking account based on deposits and bill payments, payment models for different kinds of service, all of them familiar and accessible to students of Algebra
and useful in everyday life. Thinking about these situations is useful for students as they start to think about how to model their different options in financial scenarios.
Highlighted Problem: Susita’s Savings Account
At the beginning of January, Susita had some money in her savings account.
Each month she was able to deposit enough from her allowance to double the amount currently in the account. However, she had a loan to pay off, requiring her to withdraw $10 from the account.
At the end of May, she had $2 left in the account.
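One way to model the implied question (how much was in the account at the beginning of January?) is the recurrence x_next = 2x - 10, one step per month from January through May. This Python sketch assumes the $10 withdrawal, like the doubling deposit, happens once each month; it solves the problem by working backward from the known $2:

```python
def month(x):                # double the balance, then pay $10 on the loan
    return 2 * x - 10

x = 2.0                      # balance at the end of May
for _ in range(5):           # undo May, April, March, February, January
    x = (x + 10) / 2         # inverse of month()
print(x)                     # 9.75

# Forward check: 9.75 -> 9.5 -> 9 -> 8 -> 6 -> 2
y = x
for _ in range(5):
    y = month(y)
print(y)                     # 2.0
```

Working backward like this turns each "double then subtract 10" step into "add 10 then halve," which is often the easiest way to solve this style of problem by hand as well.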
This is a great problem in terms of thinking about how bank accounts really function, we’re not just putting money in all the time! It’s a useful early step in thinking and learning about budgeting,
true costs of living and the reality of everyday life for adults.
| {"url":"http://mathforum.org/fe/math-topics/algebra-2/one-variable-equations/","timestamp":"2014-04-24T00:08:55Z","content_type":null,"content_length":"36496","record_id":"<urn:uuid:fb4a79ea-9d45-46b9-8156-f35fa55b1a74>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
Windsor, CT Math Tutor
Find a Windsor, CT Math Tutor
...I can help you turn that stress into positive energy and excitement. I recently worked with a student who was struggling with their Algebra II assignments and tests. Over the course of the
semester, the student was able to increase their skill and confidence.
11 Subjects: including precalculus, algebra 1, algebra 2, geometry
...Previously, I taught high school. In my free time, I like to hang out with my baby daughter and husband. I really enjoy one on one tutoring.
10 Subjects: including precalculus, trigonometry, statistics, probability
...The word Algebra scares many people, but simple explanations and basic terminology make math seem so much less scary. The approach I take is showing students the strengths that they already
have and how to apply those to learning and doing Algebra. The SAT prep that I offer involves doing a pra...
11 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...It's one thing to know how to plug numbers into the quadratic formula and get the right answer, but it's way better in the long run if they know HOW the formula gives them that answer. I
don't have a set curricula but prefer instead to get a baseline of where the student currently stands in the...
18 Subjects: including algebra 2, geometry, prealgebra, precalculus
...I would also determine any areas the student has not mastered and provide practice in these areas along with several ways of introducing this material to the student. Prealgebra leads a
student from working with the concrete to working with the abstract. Along with practice using basic operations, the student is introduced to working with variables and integers.
7 Subjects: including algebra 2, algebra 1, grammar, prealgebra
| {"url":"http://www.purplemath.com/Windsor_CT_Math_tutors.php","timestamp":"2014-04-19T17:19:04Z","content_type":null,"content_length":"23792","record_id":"<urn:uuid:9da176d0-8c3b-4570-9e32-fdbdbd28be4c>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00273-ip-10-147-4-33.ec2.internal.warc.gz"} |
Alan Turing
Early Life
Alan Turing was an English mathematician born on 23 June 1912 in Maida Vale, London. His father served in the Indian Civil Service, and his mother was a daughter of the chief engineer of the
Madras Railways. His parents wanted their children to be brought up in England so they could gain a good education, so the boys were raised there while their father continued his career in India.
Turing was enrolled in St Michael's day school when he was six years of age. At 14 he moved to Sherborne School, a famous public school in Dorset.
His mathematical talent, however, was not acclaimed by the teachers at Sherborne, who placed more importance on learning the classics than on producing a "scientific specialist," as they called it. Their
disapproval did not seem to affect Turing, who continued to show immense progress in the subjects he loved. He solved advanced problems in calculus without having studied the subject formally. By
the time he was sixteen he was rapidly grasping Albert Einstein's ideas; he not only mastered them but also followed Einstein's questioning of Newton's famous laws of motion.
After Sherborne, Turing went to King's College, Cambridge, where he graduated with first-class honours in mathematics. He was then elected a Fellow of King's College in 1935, when he was just 22
years old. In 1936 he published a paper called "On Computable Numbers, with an Application to the Entscheidungsproblem," in which he presented the abstract machine later known as the Turing machine. It contained
fundamental ideas of mathematics and computer science, showing what such a machine could and could not compute. Turing machines remain a central object in the theory of computation
to this day.
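The flavour of the idea can be conveyed with a minimal simulator. This Python sketch, including the tiny example machine that flips bits until it reaches a blank cell, is an illustration invented for this summary, not anything from Turing's paper:

```python
def run(tape, rules, state="s", blank="_"):
    """Run a Turing machine: rules maps (state, symbol) to
    (new_state, new_symbol, move), with move in {-1, +1}.
    A missing (state, symbol) key means the machine halts."""
    tape, head = dict(enumerate(tape)), 0
    while (state, tape.get(head, blank)) in rules:
        state, tape[head], move = rules[(state, tape.get(head, blank))]
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1))

# Example machine: flip every bit while moving right; halt on a blank cell.
flip = {("s", "0"): ("s", "1", +1),
        ("s", "1"): ("s", "0", +1)}

print(run("1011", flip))   # 0100
```

A universal machine is then one whose rule table can read a description of any other machine from the tape and simulate it, which is the insight behind the stored-program computer.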
Later Works
Turing spent the years from 1936 to 1938 studying under Alonzo Church at Princeton University in New Jersey. He obtained his PhD from Princeton, introducing the
concepts of ordinal logics and relative computing in his dissertation. From September 1938, Turing worked for the Government Code and Cypher School, the British code-breaking
organization. He made many advances in this field too, especially during the Second World War. He wrote his "Treatise on Enigma," and he was awarded the OBE in 1945 for his
significant contribution to the war effort. In 1950 he published "Computing Machinery and Intelligence" in the journal Mind, a remarkable work of a bright, innovative mind that could foresee the development of
computing before it happened. The "Turing test" it proposed remains a highly influential touchstone in debates about artificial intelligence today.
Turing worked on mathematical biology from 1952 till his last days. His paper on the subject named ‘The Chemical Basis of Morphogenesis’ was published in 1952. His contribution to the field of
mathematical biology is considered to be of immense prominence. More papers by Turing were published posthumously in 1992.
Alan Turing was found dead at his home by his cleaner on 8 June 1954; the cause of death was cyanide poisoning. The inquest ruled his death a suicide, although it has since been argued that the poisoning may have been accidental. | {"url":"http://www.famous-mathematicians.com/alan-turing/","timestamp":"2014-04-20T23:26:57Z","content_type":null,"content_length":"35669","record_id":"<urn:uuid:3b353103-9f07-4123-a36c-092f5aaece9a>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
J. Schmidhuber. The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions. In J. Kivinen and R. H. Sloan, editors, Proceedings of the 15th Annual Conference on
Computational Learning Theory (COLT 2002), Sydney, Australia, Lecture Notes in Artificial Intelligence, pages 216--228. Springer, 2002. PS. PDF. HTML. Or in Section 6 of Algorithmic Theories of
Everything (PDF) (first published in the physics arXiv (2000); the contents of sections 2-5 have also appeared in the International Journal on Foundations of Computer Science - see here ).
Ray Solomonoff's optimal but non-computable method for inductive inference assumes that observation sequences x are drawn from a recursive prior distribution mu(x). Instead of using the unknown mu(x)
we predict using the celebrated universal enumerable prior or Solomonoff-Levin (semi)measure M(x) which for all x exceeds any recursive mu(x), save for a constant factor independent of x. The
simplicity measure M(x) naturally implements ``Occam's razor'' (simple solutions are preferred over complex ones) and is closely related to K(x), the Kolmogorov complexity or algorithmic information
of x. Predictions based on M are optimal in a certain (noncomputable) sense. However, M assigns high probability to certain data x that are extremely hard to compute. This does not match our
intuitive notion of simplicity.
Schmidhuber suggested a more plausible measure derived from the fastest way of computing data. In absence of contrarian evidence, he assumes that the physical world is generated by a computational
process, and that any possibly infinite sequence of observations is therefore computable in the limit (this assumption is more radical and stronger than Solomonoff's). Then he replaces M by the novel
Speed Prior S, under which the cumulative a priori probability of all data whose computation through an optimal algorithm requires more than O(n) resources is 1/n. To evaluate the plausibility of
this, consider that most data generated on your own computer are computable within a few microseconds, some take a few seconds, few take hours, very few take days, etc...
Essentially, the Speed Prior S(x) is the probability that the output of the following probabilistic algorithm starts with x:
1. Set t:=1. Let instruction pointer IP point to some cell of the initially empty internal storage of a universal binary computer (with separate, initially empty output storage).
2. While the number of instructions executed so far exceeds t: toss a coin; if heads is up set t:=2t; otherwise exit. If IP points to a cell that already contains a bit, execute the corresponding
instruction. Else if IP points to another cell, toss the coin again, set the cell's bit to 1 if heads is up (0 otherwise), and set t:=t/2.
3. Go to 2.
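The key property of step 2, that the cumulative probability of ever reaching a compute budget of n or more falls off like 1/n, can be illustrated with a toy simulation. This Python sketch models only the budget-doubling coin tosses, not the universal machine itself, and `final_budget` is a name invented for the sketch: the budget t reaches 2^k only if k consecutive tosses come up heads, which happens with probability 2^-k.

```python
import random

random.seed(0)

def final_budget():
    t = 1
    while random.random() < 0.5:   # heads: double the budget t
        t *= 2
    return t                       # tails: exit

budgets = [final_budget() for _ in range(200_000)]
for n in (2, 8, 32):
    frac = sum(t >= n for t in budgets) / len(budgets)
    print(n, round(frac, 3))       # roughly 1/2, 1/8, 1/32
```

This matches the intuition given in the text: most sampled computations are cheap, and expensive ones become rare in inverse proportion to their cost.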
The Speed Prior allows for deriving a computable strategy for optimal prediction (within some given epsilon) of future y, given past x.
Now consider the case that the data actually stem from a non-optimal, unknown computational process. We can use Marcus Hutter's recent results on universal prediction machines to derive excellent
expected loss bounds for S-based inductive inference.
SPEED PRIOR APPLICATION TO PHYSICS
Here is an extreme application. In absence of contrarian evidence, we assume the entire history of our universe is computable, and sampled from S (or a less dominant prior reflecting suboptimal
computation of the history). (The legendary Konrad Zuse was the first who seriously suggested the universe is being computed on a grid of computers or cellular automaton; compare related work by Ed
Fredkin who initiated the translation of Zuse's 1969 book.)
Under our assumptions, we can immediately predict that our universe won't get many times older than it is now, that large scale quantum computation will not work well (essentially because it would
require too many computational resources in interfering ``parallel universes''), and that any apparent randomness in any physical observation is in fact due to some yet unknown but fast pseudo-random
generator which we should try to discover. (Details in the 2002 COLT paper or in the 2000 physics paper above).
Such ideas have recently attracted a lot of attention --- check out Wei Dai's "everything" mailing list archive and the Great Programmer Religion. | {"url":"http://www.idsia.ch/~juergen/speedprior.html","timestamp":"2014-04-20T05:44:10Z","content_type":null,"content_length":"8380","record_id":"<urn:uuid:2c39d7ed-c48f-420f-86bb-ef424d778b3c>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00061-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by jump
Total # Posts: 9
diff eq
solve 2y^2(y'')+2y(y')^2=1
determine the volume of the solid of revolution generated by revolving the region bounded by y=x^3-x^5, y=0, x=0 and x=1 about the line x=3
f(x)=(x^2+1)(1-x-x^3).Determine the equation of the line tangent to f(x) at the point (1,-2).
Find the equation of the tangent line to the curve x^3 + xy^2 =5 at the point (1,2).
What is the slope of the curve x = cos(y) at the point where y=-pi/3
determine any values of x for which a tangent line to the curve f(x)=1/x will have a slope of -4
The height, h, of a cylinder is 3 times its radius, r. Which of the following represents the rate of change of the volume, V, of the cylinder with respect to its height, h?
The slope of the curve x^3y^2 + 2x - 5y + 2 = 0 at the point (1,1) is
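For the implicit-differentiation question just above: differentiating x^3 y^2 + 2x - 5y + 2 = 0 gives 3x^2 y^2 + 2x^3 y y' + 2 - 5y' = 0, so at (1, 1) this reads 5 - 3y' = 0 and y' = 5/3. A numeric check (a Python sketch; it traces the curve near (1, 1) with Newton's method in y, a construction chosen for this sketch):

```python
def F(x, y):
    return x**3 * y**2 + 2*x - 5*y + 2

def y_on_curve(x, y0=1.0):
    """Solve F(x, y) = 0 for y near y0 by Newton's method in y."""
    y = y0
    for _ in range(50):
        y -= F(x, y) / (2 * x**3 * y - 5)   # dF/dy = 2 x^3 y - 5
    return y

h = 1e-5
slope = (y_on_curve(1 + h) - y_on_curve(1 - h)) / (2 * h)
print(slope)   # close to 5/3
```

The central difference of the traced branch agrees with the implicit derivative, confirming the hand computation.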
A window consists of a rectangular part of height h surmounted by a semicircle of radius r. The perimeter of the window is held constant at 10 feet, but h and r are allowed to vary. Find an
expression for the rate of change of h with respect to r. | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=jump","timestamp":"2014-04-18T14:16:10Z","content_type":null,"content_length":"7352","record_id":"<urn:uuid:23d3e810-5f7c-417a-86e1-71581eb38f75>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00379-ip-10-147-4-33.ec2.internal.warc.gz"} |
There are 119 electricians listed in the yellow pages of the phone book. If a customer selects an electrician at random, what is the probability that Andy (an electrician) will be selected? What is
the probability as a percent? Round to the nearest tenth of a percent.
The probability of a certain outcome is the number of favoured possibilities divided by the total number of possibilities.
I don't really understand...
Say I rolled a die, hoping for a three or a four. The die has six faces, so there are 6 possibilities; I only want a three or a four, so there are 2 favoured possibilities. The probability of being successful is \[2/6 = 1/3 \approx 33.3\%\]
So I would divide 119 by 1?
Almost, it's the other way around.
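The computation the helper is walking the student toward can be spelled out in a few lines (my own code, not part of the thread):

```python
# Probability of selecting one specific electrician out of 119,
# chosen uniformly at random: favoured outcomes / total outcomes.
favoured = 1              # Andy is one specific electrician
total = 119               # electricians listed in the yellow pages
p = favoured / total      # probability of selecting Andy
print(round(p * 100, 1))  # as a percent, to the nearest tenth: 0.8
```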
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50feeb2be4b0426c6367ff65","timestamp":"2014-04-17T06:47:21Z","content_type":null,"content_length":"37717","record_id":"<urn:uuid:895992d3-9b20-496b-945c-9827f279fcfd>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00439-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Art of R Programming
Here are my first impressions of The Art of R Programming. I haven’t had time to read it thoroughly, and I doubt I will any time soon. Rather than sitting on it, I wanted to get something out
quickly. I may say more about the book later.
The book’s author, Norman Matloff, began his career as a statistics professor and later moved into computer science. That may explain why his book seems to be more programmer-friendly than other
books I’ve seen on R.
My impression is that few people actually sit down and learn R the way they’d learn, say, Java. Most learn R in the context of learning statistics. Here’s a statistical chore, and here’s a snippet of
R to carry it out. Books on R tend to follow that pattern, organized more by statistical task than by language feature. That serves statisticians well, but it’s daunting to outsiders.
Matloff’s book is organized more like a typical programming book and may be more accessible to a programmer needing to learn R. He explains some things that might require no explanation if you were
learning R in the context of a statistics class.
The last four chapters would be interesting even for an experienced R programmer:
• Debugging
• Performance enhancement: memory and speed
• Interfacing R to other languages
• Parallel R
No one would be surprised to see the same chapters in a Java textbook if you replaced “R” with “Java” in the titles. But these topics are not typical in a book on R. They wouldn’t come up in a
statistics class because they don’t provide any statistical functionality per se. As long as you don’t make mistakes, don’t care how long your code takes to run, and don’t need to interact with
anything else, these chapters are unnecessary. But of course these chapters are quite necessary in practice.
As I mentioned up front, I haven’t read the book carefully. So I’m going out on a limb a little here, but I think this may be the book I’d recommend for someone wanting to learn R, especially for
someone with more experience in programming than statistics.
Related post:
Thanks. I have a lot of experience with Matlab, and am looking to take up R. This looks like it might work well for me.
I’ll second the warning that most other R books fail to introduce is as a programming language, instead treating it as a black box that you poke by typing things. I looked at the TOC for this book
and it could be really good.
I’ve been waiting for books like this for too long. I hope for stronger R ~programming~ tools in the future.
Having read through parts of the book my brief summary is that it contains a lot of useful information but falls short of providing an insightful and cohesive way of thinking about R programming.
That said, it’s way better than anything else that’s out there (including “R in a nutshell”) and it does provide information on many important topics (e.g.-useful details on debugging, all the
approaches to classes except for the new reference classes).
This book works well as a first read for programmers moving into statistics.
Although other books cover some of these topics, none of the Amazon top sellers is entirely dedicated to programming basics, and that is exactly where programmers coming to R are weakest.
I would recommend it as a first R book, and the reason is quite simple: start at the beginning.
I learned R by basically inhaling R in a Nutshell when I needed to move away from SAS for data analysis. Since then I have used R Cookbook, R in Action and a few other R books for reference since
most of my use was just “poking the black box” for analytic needs. I did have a reasonable understanding of R as a language from R in a Nutshell, but Art of R Programming really stressed R as a
language decoupled from the stats. Having this almost independent discussion of R as a language really improved my code, both functions and script files. I cannot recommend it enough to someone who
wants to learn how to really use R as opposed to using it as a FOSS form of SAS/Stata/SPSS/etc.
I recently came to R as a quantitative behavioral researcher who is steeped in SAS and SPSS programming. I have been working through several books on statistics using R and took a graduate level
course that emphasized the use of R, but even after all that I still felt very uncomfortable using R. I realized that in order to truly feel comfortable with R, I needed something that presented R to
me as a programming language. I figured that once I learned how to use R from the ground up as a programming language, I would feel much more comfortable in using R stat packages in general. In this
regard, The Art of R Programming has been enormously useful for me. I have no real coding background, and I find this book to be very user friendly and informative.
[...] Starch Press sent me a copy of The Art of R Programming last Fall and I wrote a review of it here. Then a couple weeks ago, Manning sent me a copy of R in Action. Here I’ll give a quick [...]
Tagged with: Books, Rstats
Posted in Software development | {"url":"http://www.johndcook.com/blog/2011/10/10/the-art-of-r-programming/","timestamp":"2014-04-19T02:03:10Z","content_type":null,"content_length":"38543","record_id":"<urn:uuid:5f99c87d-fa0c-46c9-813e-f58236109ca7>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00522-ip-10-147-4-33.ec2.internal.warc.gz"} |
Precision pins down the electron's magnetism - CERN Courier
Precision measurement of the electron's magnetism
By combining several state-of-the-art techniques, a new level of precision has been reached in the measurement of the electron's magnetic moment, g, reducing the uncertainty by a factor of six. This more precise measurement allows the fine structure constant to be calculated with an uncertainty ten times smaller than in the most precise previous measurements. It provides a test of the theory of quantum electrodynamics and sets limits on any internal structure of the electron. These improved results were made possible by a one-electron quantum cyclotron. The electron is confined in a cylindrical Penning trap, which is also a microwave cavity, making it possible to suppress synchrotron radiation. The measurements are based on spectroscopy of the lowest energy levels of the trapped electron.
The electron's magnetic moment has recently been measured to an accuracy of 7.6 parts in 10^13 (Odom et al. 2006). As figure 1a indicates, this is a six-fold improvement on the last measurement of
this moment made nearly 20 years ago (Van Dyck et al. 1987). The new measurement and the theory of quantum electrodynamics (QED) together determine the fine structure constant to 0.70 parts per
billion (Gabrielse et al. 2006). This is nearly 10 times more accurate than has so far been possible with any rival method (figure 1b). Higher accuracies are expected, based upon convergence of many
new techniques – the subject of a half-dozen Harvard PhD theses during the past 20 years. A one-electron quantum cyclotron, cavity-inhibited spontaneous emission, a self-excited oscillator and a
cylindrical Penning trap contribute to the extremely small uncertainty. For the first time, researchers have achieved spectroscopy with the lowest cyclotron and spin levels of a single electron fully
resolved via quantum non-demolition measurements, and a cavity shift of g has been directly observed.
Unusual features
A circular storage ring is the key to these greatly improved measurements, but the storage ring is unusual compared with those at CERN, for example. To begin with it uses only one electron, stored
and reused for months at a time. The radius of the storage ring is much less than 0.1 µm, and the electron energy is so low that we use temperature units to describe it – 100 mK. Furthermore, the
electron does not orbit in a familiar circular orbit even though it is in a magnetic field; instead, it makes quantum jumps between only the ground state and the first excited states of its cyclotron
motion – non-orbiting stationary states. It also makes quantum jumps between spin up and spin down states. Blackbody photons stimulate transitions between the two cyclotron ground states until we
cool our storage ring to 100 mK to essentially eliminate them. The spontaneous emission of synchrotron radiation is suppressed because of its low energy and by locating the electron in the centre of
a microwave cavity. The damping time is typically about 10 seconds, about 10^24 times slower than for a 104 GeV electron in the Large Electron–Positron collider (LEP). To confine the electron weakly
we add an electrostatic quadrupole potential to the magnetic field by applying appropriate potentials to the surrounding electrodes of a Penning trap, which is also a microwave cavity (figure 2a).
The lowest cyclotron and spin energy levels for an electron in a magnetic field are shown in figure 2b. (Very small changes to these levels from the electrostatic quadrupole and special relativity
are well understood and measured, though they cannot be described in this short report.) Microwave photons introduced into our trap cavity stimulate cyclotron transitions from the ground state to the
first excited state. The long cyclotron lifetime allows us to turn on a detector to count the number of quantum jumps for each attempt as a function of cyclotron frequency ν[c] (figure 3d). A similar
quantum jump spectroscopy is carried out as a function of the frequency of a radiofrequency drive at a frequency ν[a] = ν[s] – ν[c], which stimulates a simultaneous spin flip and cyclotron
excitation, where ν[s] is the spin precession frequency (figure 3c). The lineshapes are understood theoretically. One-quantum cyclotron transitions (figure 3b) and spin flips (figure 3a) are detected
with good signal-to-noise from the small shifts that they cause to an orthogonal, classical electron oscillation that is self-excited.
The dimensionless electron magnetic moment is the magnetic moment in units of the Bohr magneton, ehbar/2m, where the electron has charge –e and mass m. The value of g is determined by a ratio of the
frequencies that we measure, g/2 = 1 + νa/νc, with the result that g/2 = 1.00115965218085(76) [0.76 ppt]. The uncertainty is nearly six times smaller than in the past, and g is shifted downwards by
1.7 standard deviations (Odom et al. 2006).
What can be learned from the more accurate electron g? The first result beyond g itself is the fine structure constant, α = e^2/4πε[0]hbarc – the fundamental measure of the strength of the
electromagnetic interaction, and also a crucial ingredient in our system of fundamental constants. A Dirac point particle has g = 2. QED predicts that vacuum fluctuations and polarization slightly
increase this value. The result is an asymptotic series that relates g and α:
(Eq. 1)
g/2 = 1 + C[2](α/π) + C[4](α/π)^2 + C[6](α/π)^3 + C[8](α/π)^4 + … + a[µτ] + a[hadronic] + a[weak]
According to the Standard Model, hadronic and weak contributions are very small and believed to be well understood at the accuracy needed. Impressive QED calculations give exact C[2], C[4] and C[6],
a numerical value and uncertainty for C[8], and a small a[µτ]. Using the newly measured g in equation 1 gives α^–1 = 137.035999710(96) [0.70 ppb] (Gabrielse et al. 2006). The total uncertainty of
0.70 ppb is 10 times smaller than for the next most precise methods (figure 1b), which determine α from measured mass ratios and optical frequencies, together with rubidium (Rb) or caesium (Cs) recoil measurements.
The second use of the newly measured electron g is in testing QED. The most stringent test of QED – which is one of the most demanding comparisons of any calculation and experiment – continues to
come from comparing measured and calculated g-values, the latter using an independently measured α as an input. The new g, compared with equation 1 with α(Cs) or α(Rb), gives a difference δg/2 < 15 ×
10^–12 (see Gabrielse 2006 for details and a discussion). The small uncertainties in g/2 will allow a 10 times more demanding test if ever the large uncertainties in the independent α values can
be reduced. The prototype of modern physics theories is thus tested far more stringently than its inventors ever envisioned – as Freeman Dyson remarks in his letter at the beginning of the article –
with better tests to come.
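Equation 1 can be cross-checked numerically. The sketch below is my own illustration: the coefficient values are standard QED literature numbers (the article quotes the series but not the coefficients), and the tiny a[µτ], hadronic and weak terms are neglected.

```python
import math

# Truncated QED series for the electron anomaly, a_e = g/2 - 1.
# Coefficients C2..C8 are standard literature values, not from the article.
C2, C4, C6, C8 = 0.5, -0.328478965579, 1.181241456587, -1.9144

alpha = 1.0 / 137.035999710        # fine structure constant from the article
x = alpha / math.pi
g_half = 1.0 + C2 * x + C4 * x ** 2 + C6 * x ** 3 + C8 * x ** 4
print(f"{g_half:.12f}")            # close to the measured 1.00115965218085
```

The agreement to roughly a part in 10^11 is expected, since the quoted α was itself obtained by inverting this series against the measured g.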
The third use of the measured g is in probing the internal structure of the electron – limiting the electron to constituents with a mass m* > m/√(δg/2) = 130 GeV/c^2, corresponding to an electron
radius R < 1 × 10^–18 m. If this test was limited only by our experimental uncertainty in g, then we could set a limit m* > 600 GeV. This is not as stringent as the related limit set by LEP, which
probes for a contact interaction at 10.3 TeV. However, the limit is obtained quite differently, and is somewhat remarkable for an experiment carried out at 100 mK.
The fourth use of the new electron g concerns measurements of the muon g – 2 as a way to search for physics beyond the Standard Model. Even though the muon g values have nearly 1000 times larger
uncertainties than the new electron g, heavy particles – possibly unknown in the Standard Model – are expected to make a contribution that is much larger for the muon. However, this contribution
would still be very small compared with the calculated QED contribution, which depends on α and must be subtracted out. The electron g provides α and a confidence-building test of the QED, both
needed for the large subtraction.
CERN has long embraced particle physics at whatever energy scales are most appropriate for learning about fundamental reality. It is impressive that CERN is replacing the highest energy
electron–positron collider, LEP, with the world's highest energy proton collider, the Large Hadron Collider. Also at CERN, however, the lowest energy antiproton storage rings are also operating. One
antiproton cooled to 4.2 K was used to show that the magnitudes of q/m for the proton and antiproton were the same to better than nine parts in 10^11 – the most stringent test of CPT invariance with
a baryon system.
Now, these low-energy antiproton techniques are being used to make the coldest possible antihydrogen atoms, to be used for higher-precision tests of fundamental symmetries. It is fitting that the new
measurement of the electron magnetic moment and the fine structure constant were carried out in the lab of a long-time CERN researcher, since they illustrate the power of low-energy techniques of the
sort that we are applying to antihydrogen studies at CERN's Antiproton Decelerator facility, the unique source of low-energy antiprotons.
Further reading
F Dyson 2006 Letter to G Gabrielse.
G Gabrielse et al. 2006 Phys. Rev. Lett. 97 030802.
B Odom et al. 2006 Phys. Rev. Lett. 97 030801.
S Peil and G Gabrielse 1999 Phys. Rev. Lett. 83 1287.
R S Van Dyck, Jr et al. 1987 Phys. Rev. Lett. 59 26. | {"url":"http://cerncourier.com/cws/article/cern/29724","timestamp":"2014-04-18T16:43:55Z","content_type":null,"content_length":"37120","record_id":"<urn:uuid:2fe153ae-41f7-4397-b594-5a65eaaa701a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00400-ip-10-147-4-33.ec2.internal.warc.gz"} |
Conductors of non-abelian number fields?
Is there a definition out there of the notion of conductor of a non-abelian number field (i.e. a finite extension of Q whose Galois group is non-abelian)? If not, is there anyone you know of working
on it? The definition for abelian number fields uses class field theory; it comes out of Artin reciprocity (see page 525 of Neukirch's Algebraic number theory).
Sorry, does non-abelian mean "with non-abelian Galois group"? – Ben Webster♦ Oct 31 '09 at 16:49
Yeah, sorry. Similarly, abelian means with abelian Galois group. And the Galois group I mean is that over Q. You can also define the conductor of an extension of number fields L/K with abelian
Galois group (again using the Artin reciprocity map). – Rob Harron Oct 31 '09 at 17:02
3 Answers
I think that a good notion of "conductor" isn't going to be intrinsic to the extension K/Q; rather, you might choose some finite-dimensional complex representation rho of Gal(K/Q) and then use the Artin conductor of the resulting Galois representation. When K/Q is abelian, there aren't so many interesting choices of rho.
I don't know the precise definition of the conductor of a nonabelian Galois extension of Q, but see page 10 of Langlands's expository article "Representation Theory: Its Rise and Its Role in Number Theory" sunsite.ubc.ca/DigitalMathArchive/Langlands/pdf/gibbs-ps.pdf . Presumably the thesis of Joe Buhler referenced therein gives a precise definition, or a reference to one.
Buhler is discussing: given a Galois extension K/Q with Galois group A_5, what is the smallest Artin conductor of a two-dimensional Galois representation factoring through Gal(K/Q) whose
image in PGL(2,C) is A_5? Buhler finds it's 800, and occurs for the quintic polynomial Langlands mentions. Langlands (and Buhler 2 or 3 times) refers to this as the "conductor of K", but
Buhler generally refers to this as the minimal conductor of a corresponding projective representation. He makes no assertion that this is a definition of the conductor of K. It certainly
offers a possibility though. Thanks. – Rob Harron Oct 31 '09 at 19:45
The conductor of an order O \subset K is defined on p. 79 of Neukirch's Algebraic Number Theory (so that for K you define the conductor to be the conductor of its ring of integers). This conductor tells you how to compute how a prime decomposes in K. I think in the abelian case these are the same conductors as in class field theory.
In the non-abelian case, maybe the conductor is related to the zeta function of K (I'm not sure off the top of my head).
The notion of conductor you are talking about is unrelated. It simply measures how far away a ring is from its integral closure. Since the ring of integers is by definition integrally
closed, its conductor (in this sense) is the unit ideal. (This conductor tells you where you can't easily determine the decomposition of a prime.) The conductor I'm talking about is
defined on page 525 of Neukirch. – Rob Harron Oct 31 '09 at 16:59
Not the answer you're looking for? Browse other questions tagged algebraic-number-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/3550/conductors-of-non-abelian-number-fields","timestamp":"2014-04-18T23:22:51Z","content_type":null,"content_length":"59895","record_id":"<urn:uuid:ef3fe8a7-206f-401c-a8a1-e7236cf9a040>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
2011-2012 Seminars
Postdoctoral Seminars
September 08, 2011 10:30 - 11:30AM
Systems biology aims to explain how a biological system functions by investigating the interactions of its individual components from a systems perspective. Modeling is a vital tool as it helps to
elucidate the underlying mechanisms of the system. Many discrete model types can be translated into the framework of polynomial dynamical systems (PDS), that is, time- and state-discrete dynamical
systems over a finite field where the transition function for each variable is given as a polynomial. This allows for using a range of theoretical and computational tools from computer algebra, which
results in a powerful computational engine for model construction, parameter estimation, and analysis methods.
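As a concrete toy instance of the framework described above (the update polynomials are invented for illustration, not a published biological model), here is a three-variable polynomial dynamical system over the finite field F_2, iterated in plain Python:

```python
# A toy polynomial dynamical system over F_2: each variable's transition
# function is a polynomial, and the state updates synchronously.
F = 2

def step(state):
    x1, x2, x3 = state
    return ((x2 * x3) % F,      # f1 = x2 * x3
            (x1 + x3) % F,      # f2 = x1 + x3
            (1 + x1) % F)       # f3 = 1 + x1

state = (1, 0, 1)
for _ in range(4):
    state = step(state)
print(state)   # (1, 1, 1)
```

Because the state space is finite (2^3 = 8 states here), every trajectory eventually enters a cycle, which is what makes exhaustive computer-algebra analysis of such models feasible.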
September 22, 2011 10:30 - 11:30AM
I will present the problem of adequate data subsampling for asymptotically consistent parametric estimation of unobservable stochastic differential equations (SDEs) when data are generated by
multiscale dynamic systems approximated by these SDEs. The challenge is that the approximation accuracy is scale dependent, and degrades at very small scales. Data from multiscale dynamic systems,
namely the Additive Triad model, will be used to illustrate this subsampling problem. I will also indicate the general framework for estimation under this indirect observability and present practical
numerical techniques to identify the correct subsampling regime to construct bias-corrected estimators.
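The subsampling issue can be illustrated with a toy computation. This is my own construction, not the talk's: quadratic-variation estimates of the diffusion coefficient of an Ornstein-Uhlenbeck process are scale dependent (here coarse subsampling biases the estimate low through mean reversion, whereas in the multiscale setting of the talk the bias instead appears at very fine sampling).

```python
import math
import random

# Simulate an OU process exactly, then estimate sigma**2 from squared
# increments at two different subsampling strides.
random.seed(0)
kappa, sigma, dt, n = 1.0, 0.5, 0.001, 200_000
a = math.exp(-kappa * dt)                             # exact one-step OU factor
s = sigma * math.sqrt((1.0 - a * a) / (2.0 * kappa))  # one-step noise scale
x, path = 0.0, [0.0]
for _ in range(n):
    x = a * x + s * random.gauss(0.0, 1.0)
    path.append(x)

def qv_estimate(stride):
    """Mean squared increment divided by the sampling interval stride*dt."""
    sub = path[::stride]
    h = dt * stride
    return sum((q - p) ** 2 for p, q in zip(sub, sub[1:])) / (len(sub) - 1) / h

est_fine = qv_estimate(1)        # near the true sigma**2 = 0.25
est_coarse = qv_estimate(1000)   # one sample per time unit: biased low
print(round(est_fine, 2), est_coarse < est_fine)
```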
September 29, 2011 10:30 - 11:30AM
In this talk, I will discuss how to discretize space in the stochastic model for chemical reaction-diffusion networks based on the chemical master equation. A system with reaction and diffusion is
modeled using a continuous time Markov jump process. Diffusion is described as a jump to the neighboring computational cell with proper spatial discretization. Considering the steady-state mean and
variance of the number of molecules of each species in each computational cell, an upper bound for the computational cell size for spatial discretization will be suggested.
Then, I will show conditions for the exponential convergence of concentration to its uniform solution in the corresponding PDE model for chemical reaction-diffusion networks. Conditions obtained from
the PDE model give an estimate for the maximal compartment size for space discretization in the stochastic model.
This is a joint work with Hans G. Othmer and Likun Zheng at the University of Minnesota.
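The jump-process view of diffusion sketched in the abstract can be illustrated with a tiny Gillespie-style simulation (geometry and rates are illustrative, not taken from the talk): each molecule hops to a neighbouring computational cell at rate D/h^2.

```python
import random

# Diffusion as a continuous-time Markov jump process on three cells.
random.seed(1)
D, h = 1.0, 0.1
k = D / h ** 2                 # per-molecule hop rate to one neighbour
cells = [30, 0, 0]             # molecule counts per computational cell
moves = [(i, j) for i in range(3) for j in (i - 1, i + 1) if 0 <= j < 3]

t = 0.0
while t < 2.0:
    props = [cells[i] * k for i, _ in moves]   # propensity of each directed hop
    total = sum(props)
    t += random.expovariate(total)             # exponential waiting time
    r = random.uniform(0.0, total)
    for (i, j), p in zip(moves, props):
        r -= p
        if r <= 0.0:
            cells[i] -= 1                      # one molecule hops i -> j
            cells[j] += 1
            break

print(sum(cells))   # the total molecule number is conserved: 30
```

Shrinking h raises the hop rate as 1/h^2, which is exactly where the question of a sensible maximal or minimal cell size for the spatial discretization arises.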
October 06, 2011 10:30 - 11:30AM
Proton transport plays an important role in biological energy transduction and sensory systems. A multi-scale model is introduced to study the proton transport in membrane channels. Quantum dynamics
is utilized to model the motion of the target particle (protons), while multi-scale treatments are given to the surrounding environments in classical mechanics, according to the priorities of interest and importance. All the system components are assembled on an equal footing in a total energy framework, from which the generalized Poisson-Boltzmann, Kohn-Sham and Laplace-Beltrami equations are
derived. Simulations are implemented based on the coupled governing equations and compared with experimental results.
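Among the governing equations named here, the linearized Poisson-Boltzmann equation is simple enough to sketch in one dimension. The following is my own toy illustration (a screened potential phi'' = kappa^2 phi with phi(0) = 1 and phi(L) = 0, solved by a tridiagonal finite-difference scheme), far simpler than the coupled multiscale system of the talk:

```python
# Finite-difference solve of the linearized Poisson-Boltzmann equation
# phi'' = kappa**2 * phi on (0, L), phi(0) = 1, phi(L) = 0 (toy values).
kappa, L, n = 2.0, 5.0, 500
h = L / n
# Interior unknowns phi_1..phi_{n-1}; solve the tridiagonal system with
# the Thomas algorithm.
a = [1.0] * (n - 1)                        # sub-diagonal
b = [-(2.0 + (kappa * h) ** 2)] * (n - 1)  # diagonal
c = [1.0] * (n - 1)                        # super-diagonal
d = [0.0] * (n - 1)
d[0] = -1.0                                # boundary value phi(0) = 1 moved to RHS
for i in range(1, n - 1):
    w = a[i] / b[i - 1]
    b[i] -= w * c[i - 1]
    d[i] -= w * d[i - 1]
phi = [0.0] * (n - 1)
phi[-1] = d[-1] / b[-1]
for i in range(n - 3, -1, -1):
    phi[i] = (d[i] - c[i] * phi[i + 1]) / b[i]

print(round(phi[99], 3))   # phi at x = 1, close to exp(-2) ≈ 0.135 here
```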
October 20, 2011 10:30 - 11:30AM
The cytoskeleton of dividing cells is highly dynamic with microtubules stochastically transitioning between states of growth and shortening. In this dynamic environment "primitive" polymeric machines
can generate force. In eukaryotic cells, chromosomes move to the cell equator by attaching to multiple dynamic microtubules. Attachment is mediated by complex multi-protein scaffolds called
kinetochores. In this talk, we present a mathematical model for force generation at the microtubule/kinetochore interface in eukaryotic cells. Movement is modeled using a jump-diffusion process that
incorporates both biased diffusion due to microtubule lattice binding by kinetochore elements as well as thermal ratchet forces due to microtubule polymerization against the kinetochore plate. A key
result is that kinetochore motors obey nonlinear force-velocity relations. Finally, time permitting, we extend our modeling to explore how polymeric assemblies might facilitate the motility of the
circular chromosome of Caulobacter Crescentus.
October 26, 2011 2:30 - 3:30PM
Ecological systems may exhibit complex dynamics, yet the spatial and temporal scales over which these play out make them difficult to explore experimentally. An alternative approach is to develop
models based on detailed biological information about the systems and then fit them to observational data using nonlinear time-series techniques. I will give two examples of this approach, both
involving systems with alternative states. The first is the dynamics of midges in Lake Myvatn, Iceland, which show fluctuations with amplitudes >10^5 yet with irregular period. A nonlinear time-series
analysis demonstrates that these dynamics could be caused by the system having two states, a stable point and a stable cycle, with the irregular period caused by the population stochastically jumping
from the domain of one state to the other. The second example is the dynamics of salvinia, an aquatic weed, and the salvinia weevil that was introduced into the billabongs of Kakadu to control the
weed. Here the alternative states are two environmentally (seasonally) forced cycles, one in which salvinia is kept in check by the weevil and one in which it escapes. Understanding complex
ecological dynamics may improve our management of vigorously fluctuating natural systems.
November 03, 2011 10:30 - 11:30AM
A new approach to anti-cancer therapy modeling is presented, that reconciles existing observations for the combined action of carboplatin (a Pt-based chemotherapeutic agent) and ABT-737 (a small
molecule inhibitor of Bcl-2/xL) against ovarian cancers. To accurately simulate the action of these compounds, an age-structure together with a delay is imposed on proliferating cancer cells, and
detailed biochemistry of Bcl-xL-mediated apoptotic pathways is incorporated. The model is calibrated versus in vitro experimental results, and is then used to predict optimal doses and administration
time scheduling for the treatment of a tumor growing in vivo. The age-structured model gives rise to a 1D hyperbolic Partial Differential Equation which can be reduced to a nonlinear, non-autonomous
Delay Differential Equation by projecting along the characteristics. I prove the existence of periodic solutions and derive conditions for their stability. This has clinical implications since it
leads to a lower bound for the amount of therapy required to effect a cure.
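The reduction to a delay differential equation can be illustrated with a toy integration. Here Hutchinson's delayed logistic stands in for the model (the talk's equation and parameters are different): with r*tau > pi/2 the equilibrium loses stability and a periodic solution appears, the kind of behaviour whose existence and stability the analysis above addresses.

```python
# Forward-Euler integration of x'(t) = r * x(t) * (1 - x(t - tau)).
r, tau, dt = 2.0, 1.0, 0.001
lag = int(tau / dt)
x = [0.5] * (lag + 1)            # constant history on [-tau, 0]
for k in range(lag, lag + 40_000):
    x.append(x[k] + dt * r * x[k] * (1.0 - x[k - lag]))

tail = x[-10_000:]                   # last 10 time units
print(min(tail) < 1.0 < max(tail))   # True: sustained oscillation about x = 1
```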
November 10, 2011 10:30 - 11:30AM
Waterborne diseases cause over 3.5 million deaths annually, with cholera alone responsible for 3-5 million cases/year and over 100,000 deaths/year. Many waterborne diseases exhibit multiple
characteristic timescales or pathways of infection, which can be modeled as direct and indirect transmission. A major public health issue for waterborne diseases involves understanding the modes of
transmission in order to improve control and prevention strategies. One question of interest is: given data for an outbreak, can we determine the role and relative importance of direct vs.
environmental/waterborne routes of transmission? We examine these issues by exploring the identifiability and parameter estimation of a differential equation model of waterborne disease transmission
dynamics. We use a novel differential algebra approach together with several numerical approaches to examine the theoretical and practical identifiability of a waterborne disease model and establish
if it is possible to determine the transmission rates from outbreak case data (i.e. whether the transmission rates are identifiable).
Our results show that both direct and environmental transmission routes are identifiable, though they become practically unidentifiable with fast water dynamics. Adding measurements of pathogen shedding or water concentration can improve identifiability and allow more accurate estimation of waterborne transmission parameters, as well as the basic reproduction number. Parameter estimation for
a recent outbreak in Angola suggests that both transmission routes are needed to explain the observed cholera dynamics. I will also discuss some ongoing applications to the current cholera outbreak
in Haiti.
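A minimal direct-plus-waterborne transmission model of the kind described can be sketched as follows. All parameter values here are invented for illustration; they are not estimates for Angola or Haiti.

```python
# SIWR-style model: direct transmission bI*S*I plus waterborne bW*S*W,
# with pathogen shed into (xi) and lost from (delta) a water compartment.
bI, bW, gamma, xi, delta = 0.3, 0.2, 0.25, 1.0, 1.0
R0 = (bI + bW * xi / delta) / gamma      # basic reproduction number: 2.0 here
S, I, W, R = 0.99, 0.01, 0.0, 0.0
dt = 0.01
for _ in range(20_000):                  # forward Euler to t = 200
    infection = (bI * I + bW * W) * S
    S, I, W, R = (S - dt * infection,
                  I + dt * (infection - gamma * I),
                  W + dt * (xi * I - delta * W),
                  R + dt * gamma * I)
print(R0, round(R, 2))                   # final epidemic size is ~0.8 for R0 = 2
```

The identifiability question in the abstract is whether case data alone can separate the two terms of `bI*I + bW*W`, which produce nearly indistinguishable epidemic curves when the water dynamics are fast.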
December 01, 2011 10:30 - 11:30AM
Alternans, a long-short alternation of cardiac action potential durations, emerges as a period-doubling bifurcation under rapid pacing. Detecting alternans, or bifurcation of the cardiac restitution, has been a major task in preventing heart disease. We developed a new stochastic protocol and a regression method to approximate the full dynamics in a time interval. We also discuss the propagation of alternans in a 1D cardiac fiber.
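The period-doubling picture can be demonstrated with a toy restitution map (functional form and constants are mine, not from the talk): the next action potential duration is APD_{n+1} = g(DI_n) with diastolic interval DI_n = BCL - APD_n, and alternans appears when the map's slope magnitude exceeds 1 at the fixed point.

```python
import math

# Toy exponential restitution curve, APD in ms as a function of DI in ms.
def g(di):
    return 300.0 * (1.0 - 0.6 * math.exp(-di / 80.0))

def apd_range(bcl, n=500):
    """Spread of the last few APDs under pacing at basic cycle length bcl."""
    apd, seq = 200.0, []
    for _ in range(n):
        apd = g(bcl - apd)
        seq.append(apd)
    tail = seq[-20:]
    return max(tail) - min(tail)

# Slow pacing settles to a fixed APD; rapid pacing shows long-short alternation.
print(apd_range(450) < 1e-6, apd_range(280) > 1.0)   # True True
```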
December 08, 2011 10:30 - 11:30AM
In a given population there are usually more than two configurations of sex chromosomes vs. phenotypical gender differentiation. A common configuration is XX chromosomes for females and XY
chromosomes for males; variations of this theme (e.g. XY female) occur naturally at low frequencies. There is a vast family of such variations as a result of environmental intervention. In this talk
I will present a general formulation for multi-sexual populations using hypermatrices. I will present a method to compare the asymptotic behavior (i.e. who goes to extinction first, if at all) of
these competitive dynamical systems of different dimension under certain conditions of biological relevance.
January 19, 2012 10:30 - 11:30AM
In this two-part talk I will summarize work from my Ph.D. thesis, then introduce some ongoing projects as an MBI Postdoctoral Fellow and part of OSU's Aquatic Ecology Laboratory (AEL). The first part
of this talk will focus on an infectious disease in house finches (Carpodacus mexicanus) and other wild birds caused by the pathogen Mycoplasma gallisepticum. After introducing the biological system,
I will present results from a mathematical model of the immune-pathogen interaction which address the immune system's role in mediating disease symptoms and controlling infection. For the second part
of the talk, we will shift gears and consider population dynamics in the context of simple aquatic food webs. I will start off with a brief but general introduction of the biology. I will then
present results from a model that combines consumer-resource (predator-prey) and host-parasite interactions. These results describe the consequences of some unexpected connections between consumer-resource and host-parasite interactions, as motivated by recent empirical findings from the study of Daphnia (a kind of freshwater zooplankton), their parasites, and Daphnia's algal food
source. The last part of the talk will introduce two ongoing projects with Stuart Ludsin and others at the AEL. The first of these focuses on the role of hypoxia in shaping disease risk among fish.
The second investigates the importance of an aquatic larval insect (phantom midges; family Chaoboridae) in freshwater lakes and reservoirs in Ohio by modelling how they affect the dynamics of those systems.
January 26, 2012 10:30 - 11:30AM
In this study, we use multi-stage cell lineage models, which include stem cell and multiple progenitor cell stages, to study how feedback regulation from different growth factors controls homeostasis of tissue growth and the generation of a robust spatial stratification. ODE and PDE models have been presented for the multi-stage cell lineages. Our analysis shows how negative feedbacks enhance the stability of steady states and how inter-regulation among different growth factors is responsible for developing spatial stratification. We also showed that feedback on the cell cycle from the growth factor is important for forming a temporary "stem cell niche" during the development of the tissue.
February 16, 2012 10:30 - 11:30AM
Complement Receptor 3 (CR3) and Toll-like Receptor 2 (TLR2) are pattern recognition receptors expressed on the surface of human macrophages. Although these receptors are essential components of the
innate immune system, pathogen coordinated crosstalk between them can suppress the production of protective cytokines and promote infection. I will discuss a mathematical model of TLR2/CR3 crosstalk
in the context of Francisella tularensis infection.
March 01, 2012 10:30 - 11:30AM
Most phytoplankton movement is passive and occurs through either sinking/ floating (depending on their density relative to water) or through turbulent diffusion. As they move vertically in the water
column, phytoplankton experience gradients in critical environmental factors, such as light intensity and nutrient concentrations. The rate at which phytoplankton move across these gradients can be
critical to their persistence and vertical distribution. Grazing can also play a critical role in dictating where in the water column phytoplankton are found. However, theoretical models of critical
sinking and diffusion rates either do not explicitly consider grazing loss or treat it as vertically homogenous, thus making it independent of movement. In nature, however, grazing intensity is often
vertically heterogeneous. Despite its common occurrence, how such grazing heterogeneity influences critical rates of phytoplankton movement is not well understood. Here we put forth some basic
predictions regarding phytoplankton persistence and spatial heterogeneity of grazing, using a reaction-diffusion-advection model. We introduce some new ideas to investigate the combined effects of
advection, diffusion, and heterogeneous grazing pressure on the persistence of phytoplankton and to determine the unique number of critical sinking/buoyant rates that are specified by the inclusion
of depth dependent mortality that is a result of heterogeneous predation.
March 15, 2012 10:30 - 11:30AM
PhyloPTE/P (Phylogeny with Path to Event, in People) is a method to bridge the gap between large, gene sequencing based (and, in the near future, other *omic based) studies and
phylogenetically-driven approaches developed in other fields, for example epistatic-effect detection using comparison of phylogenetic tree reconstructions for different genes. PhyloPTE/P should be of
interest to a wide audience of investigators, including those in biomedical informatics or medical genomics, as well as those in systems biology or evolutionary biology, and serve as a software
platform to foster collaboration between the two areas.
March 29, 2012 10:30 - 11:30AM
Cholera, a waterborne diarrheal disease, is a major public health threat in many parts of the world. It is spread via direct contact with infected individuals as well as indirectly through a
contaminated water source. Cholera dynamics can be described by the SIWR model, a modified SIR model incorporating an equation to track the concentration of the pathogen in the water (W) and the
additional water transmission pathway. Factors affecting both transmission rates are likely to vary among different populations. Here we consider a multi-patch SIWR model, specifically a system of
non-mixing patches sharing a common water source, and explore the effect of heterogeneity in transmission on the spread of the disease, as well as the implications for control.
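As a rough illustration of the single-patch SIWR equations the abstract describes, here is a plain Euler integration sketch. The code and all parameter values are my own placeholders, not values or code from the talk; W is treated as a scaled pathogen concentration so that S + I + R remains a population fraction.

```python
# Illustrative single-patch SIWR model.  Parameters are made-up placeholders,
# not fitted estimates; integration is a plain Euler loop.
def siwr_step(s, i, w, r, dt, b_i=0.5, b_w=0.3, xi=0.1, gamma=0.25):
    new_inf = (b_i * i + b_w * w) * s    # direct + waterborne transmission
    ds = -new_inf
    di = new_inf - gamma * i             # recovery at rate gamma
    dw = xi * (i - w)                    # shedding into, and decay of, water
    dr = gamma * i
    return s + dt * ds, i + dt * di, w + dt * dw, r + dt * dr

s, i, w, r = 0.99, 0.01, 0.0, 0.0
for _ in range(2000):                    # integrate to t = 100
    s, i, w, r = siwr_step(s, i, w, r, dt=0.05)
```

Because ds + di + dr = 0 at every step, S + I + R is conserved exactly by the scheme, which gives a quick sanity check on any implementation.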
Visitors Seminars
September 06, 2011 10:30 - 11:30AM
We formulate and analyze a Markov process modeling the motion of DNA nanomechanical walking devices. We consider a molecular biped restricted to a well-defined one-dimensional track and study its
asymptotic behavior. Our analysis allows for the biped legs to be of different molecular composition, and thus to contribute differently to the dynamics. Our main result is a functional central limit
theorem for the biped with an explicit formula for the effective diffusivity coefficient in terms of the parameters of the model. A law of large numbers, a recurrence/transience characterization and
large deviation estimates are also obtained. Our approach is applicable to a variety of other biological motors such as myosin and motor proteins on polymer filaments. This is joint work with Iddo
Ben-Ari and Alexander Roitershtein.
September 19, 2011 10:30 - 11:30AM
In my talk I will discuss how predator-prey population dynamics can be altered by predator adaptive foraging behavior and/or avoidance strategies of prey. I will consider the Lotka-Volterra
predator-prey population model which assumes that interaction strength between predator and prey is fixed. Increasing empirical evidence, however, indicates prey and/or predators change their
behavior in response to the presence of the other species. For example, prey decrease their activity, become vigilant, or move to a refuge to avoid predators. Similarly, predator foraging behavior
(e.g., prey switching) depends on prey densities. These observations clearly show that interaction strength in the Lotka-Volterra model are not fixed, but is itself a function of population
densities. As behavioral effects often operate on a short time scale when compared to a population time scale, it is also not clear if behavioral effects attenuate at the population time scale or
not. In my talk, I will show how the games predator and prey play can change predictions of the Lotka-Volterra model.
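For reference, the fixed-interaction-strength baseline the talk starts from is the classical Lotka-Volterra system. The sketch below (mine, with arbitrary illustrative parameters, not the speaker's code) integrates it with a small Euler step and exhibits the characteristic cycling around the coexistence equilibrium.

```python
# Classical Lotka-Volterra system with fixed interaction strengths -- the
# baseline model the talk modifies.  Parameters are illustrative only.
a, b, c, d = 1.0, 0.5, 0.5, 0.2     # prey growth, attack, conversion, predator death
x, y = 2.0, 1.0                     # prey and predator densities
xs, ys = [x], [y]
dt = 0.001
for _ in range(20000):              # integrate to t = 20 (roughly 1.4 cycles)
    dx = a * x - b * x * y
    dy = c * b * x * y - d * y
    x, y = x + dt * dx, y + dt * dy
    xs.append(x)
    ys.append(y)
```

The orbit encircles the equilibrium (x*, y*) = (d/(c·b), a/b) = (0.8, 2.0), so over a full cycle the predator density overshoots y* and the prey density dips below x*.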
September 20, 2011 10:30 - 11:30AM
Molecular motors are either proteins or macromolecular complexes which move along filamentous tracks utilizing some form of input energy. In contrast to their macroscopic counterparts, these natural
nano-machines are (i) made of soft matter, (ii) driven by isothermal engines, (iii) far from thermodynamic equilibrium, and (iv) their dynamics is dominated by viscous forces and thermal noise.
Mathematical models based on master equation (or, Langevin equation) are the most appropriate for a quantitative theory of their stochastic kinetics. In this talk I'll begin with a brief discussion
on the basic theoretical and experimental techniques that are used for studying molecular motors. One characteristic feature of their "directed", albeit noisy, movements is an alternating sequence of
pause and translocation. The main aim of this talk is to show how important "hidden" information on the kinetics of such motors can be extracted from the statistics of the durations of
pause+translocation. I'll present our recent results on dwell-time distributions of two motors, namely, a member of the kinesin superfamily and the ribosome. I'll also mention the nature of
collective spatio-temporal organization of the motors on the track and the effects of their crowding on the dwell time distribution.
September 27, 2011 10:30 - 11:30AM
Classical population genetics begins with a Markov chain model for the genetic types of the individuals in a finite population and then replaces the discrete model by a diffusion approximation under
the assumption that the population is large. "Lookdown" constructions of these models, introduced in work with Peter Donnelly, allow one to retain discrete individuals in the diffusion limit and, in
particular, obtain population genealogies coupled to the diffusion approximations. These constructions will be described along with related constructions for spatially distributed populations.
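The finite-population Markov chains mentioned here can be made concrete with the textbook neutral Wright-Fisher model, in which the next generation's allele count is a Binomial(N, k/N) draw. This sketch is an illustrative example of the discrete models the diffusion limit starts from, not code from the talk.

```python
# Neutral Wright-Fisher chain: N haploid individuals, allele count k,
# next generation drawn Binomial(N, k/N).  Illustrative sketch only.
import random

def wright_fisher(N, k, max_gens, rng):
    for _ in range(max_gens):
        p = k / N
        k = sum(1 for _ in range(N) if rng.random() < p)  # Binomial(N, p)
        if k == 0 or k == N:      # absorbed: allele lost or fixed
            break
    return k

rng = random.Random(0)
finals = [wright_fisher(50, 25, 5000, rng) for _ in range(200)]
fix_frac = sum(1 for k in finals if k == 50) / len(finals)
```

Starting from frequency 1/2, the neutral fixation probability is 1/2, so `fix_frac` should land near 0.5 over many replicates.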
October 04, 2011 10:30 - 11:20AM
For a given biochemical network of interest it is often desirable to estimate its reaction constants. I shall discuss several different approaches to rate-constant estimation from partial trajectory data. The presentation will discuss the LSE as well as Bayesian and MLE approaches, along with possible conditions on the data process which guarantee identifiability and estimator consistency. We shall also consider ways of approximating the likelihood of a partially observed biochemical network with certain other likelihoods (e.g., Gaussian) for which the inference problem is simplified.
October 18, 2011 10:30 - 11:30AM
The focus of this talk is to provide a basic but informative answer to the ecological question: How do habitat disturbances and fragmentation affect species persistence and diversity? In order to
answer this question, I will develop and analyze a deterministic metapopulation model that takes into account a time-dependent patchy environment. The variability of the patchy-environment could be
thought to be due to environmental changes. I will demonstrate, accordingly to the model, the effects of spatial variations on persistence and coexistence of two competing species. Also, I will
compare the analytical results of the deterministic model with simulations of a stochastic version of the model.
November 01, 2011 10:30 - 11:30AM
The Euler-Poisson system is a fundamental two-fluid model in physics. It describes the motion of ions and electrons coupled through their self-consistent electric field. For the three-dimensional
case Guo '98 first constructed a smooth global solution around the constant-density equilibrium by using a Klein-Gordon dispersive effect. It has been long conjectured that same results should also
hold in the two-dimensional case. The main issues in 2D are slow dispersion, quasilinearity and certain nonlocal obstructions in the nonlinearity. I will discuss some recent advances which lead to
the complete resolution of this conjecture.
November 08, 2011 10:30 - 11:30AM
Rhythmic behaviors in neural systems often combine features of limit cycle dynamics (stability and periodicity) with features of near heteroclinic or near homoclinic cycle dynamics (extended dwell
times in localized regions of phase space). Proximity of a limit cycle to one or more saddle equilibria can have a profound effect on the timing of trajectory components and response to both fast and
slow perturbations, providing a possible mechanism for adaptive control of rhythmic motions. Reyn showed that for a planar dynamical system with a stable heteroclinic cycle (or separatrix polygon),
small perturbations satisfying a net inflow condition will generically give rise to a stable limit cycle (Reyn, 1980; Guckenheimer and Holmes, 1983). Here we consider the asymptotic behavior of the
infinitesimal phase response curve (iPRC) for examples of two systems satisfying Reyn's inflow criterion, (i) a smooth system with a chain of four hyperbolic saddle points and (ii) a piecewise linear
system corresponding to local linearization of the smooth system about its saddle points. For system (ii), we obtain exact expressions for the limit cycle and the iPRC as a function of a parameter
$\mu>0$ representing the distance from a heteroclinic bifurcation point. In the $\mu \to 0$ limit, we find that perturbations parallel to the unstable eigenvector direction in a piecewise linear region
lead to divergent phase response, as previously observed (Brown, Moehlis and Holmes, 2004). In contrast to previous work, we find that perturbations parallel to the stable eigenvector direction can
lead to either divergent or convergent phase response, depending on the phase at which the perturbation occurs. In the smooth system (i), we show numerical evidence of qualitatively similar phase
specific sensitivity to perturbation. Having the exact expression for the iPRC for the piecewise linear system allows us to investigate its stability under diffusive coupling. In addition, we
qualitatively compare iPRCs obtained for systems (i) and (ii) to iPRCs for the Morris-Lecar equations near a bifurcation from limit cycles to a saddle-homoclinic orbit. Joint work with K. Shaw, Y.
Park, and H. Chiel.
November 22, 2011 10:30 - 11:30AM
In this talk, we shall discuss the stochastic modeling of chemotaxis and derive an anisotropic diffusion chemotaxis models from the Langevin stochastic equations which takes into account the movement
persistence. The various techniques, such as mean-filed theory, minimization principle, moment closure and scaling argument, will be used to carry out the results. In addition the stochastic
simulations exhibiting the chemotactic behavior will be presented.
November 28, 2011 2:30 - 3:30PM
Clustering data into groups of similarity is well recognized as an important step in many diverse applications, including biomedical imaging, data mining and bioinformatics. Well known clustering
methods, dating to the 70's and 80's, include the K-means algorithm and its generalization, the Fuzzy C-means (FCM) scheme, and hierarchical tree decompositions of various sorts. More recently,
spectral techniques have been employed to much success. However, with the inundation of many types of data sets into virtually every arena of science, it makes sense to introduce new clustering
techniques which emphasize geometric aspects of the data, the lack of which has been somewhat of a drawback in most previous algorithms.
In this talk, we focus on a slate of "random-walk" distances arising in the context of several weighted graphs formed from the data set, in a comprehensive generalized FCM framework, which allow one to assign "fuzzy" variables to data points that respect in many ways their geometry. The method we present groups together data which are in a sense "well-connected", as in spectral clustering, but
also assigns to them membership values as in FCM. We demonstrate the effectiveness and robustness of our method on several standard synthetic benchmarks and other standard data sets such as the IRIS
and the YALE face data sets. This is joint work with Sijia Liu and Sunder Sethuraman.
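For readers unfamiliar with FCM, the standard update the talk generalizes computes memberships from inverse squared distances and then recomputes centers as membership-weighted means. The one-dimensional sketch below is the textbook scheme with toy data, not the authors' code or data.

```python
# One full iteration of standard fuzzy c-means (memberships, then centers)
# on toy 1-D data.  Illustrative sketch of the textbook update only.
def fcm_step(data, centers, m=2.0):
    U = []
    for x in data:
        d2 = [max((x - c) ** 2, 1e-12) for c in centers]   # squared distances
        inv = [(1.0 / d) ** (1.0 / (m - 1)) for d in d2]
        s = sum(inv)
        U.append([v / s for v in inv])                     # memberships sum to 1
    new_centers = []
    for j in range(len(centers)):
        num = sum((u[j] ** m) * x for u, x in zip(U, data))
        den = sum(u[j] ** m for u in U)
        new_centers.append(num / den)
    return U, new_centers

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
centers = [0.5, 4.5]
for _ in range(20):
    U, centers = fcm_step(data, centers)
```

On this well-separated data the centers converge near the two cluster means, and each point's membership concentrates on its own cluster.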
November 29, 2011 10:30 - 11:20AM
Currently, efforts are underway to develop vaccines for several viral infections, including Human Immunodeficiency Virus type 1 (HIV-1) and Herpes Simplex Virus type 2 (HSV-2). In this talk, I will
present the results of mathematical models that address vaccination strategies for these viral infections. I will demonstrate the use of these results to predict the impact of prevention efforts as
well as to assess the mechanisms of virus-host interactions. I will also show how such studies can guide the development of future vaccines and other therapeutic interventions.
January 17, 2012 10:30 - 11:20AM
My primary goal in this talk will be to provide a summary of some current research interests with the hope of stimulating potential collaborations with other faculty and postdocs during my time at
MBI. The general theme will concern the effective statistical description of a complex microbiological system consisting of a number of individual dynamical components with some structural
interactions as well as with stochastic noise sources. I will briefly touch on the examples of molecular motors and swimming microorganisms, then describe in some more detail a recent study of
synchrony in stochastically driven neuronal networks.
January 24, 2012 10:30 - 11:20AM
A brief introduction is presented to stochastic differential equations (SDEs) in mathematical biology. In particular, a procedure is described for deriving accurate SDE models for randomly varying
biological dynamical systems. Next, several research projects involving SDE models in biology are briefly described. Specifically summarized are: an investigation of a schistosomiasis infection with
biological control, a derivation of stochastic partial differential equations for size-and age-structured populations, and the development of SDE models for biological diversity. Finally, current/
future work is pointed out.
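As background for the SDE-modelling theme, the workhorse numerical scheme is Euler-Maruyama, X_{n+1} = X_n + f(X_n) dt + g(X_n) sqrt(dt) Z_n. The example below applies it to a generic logistic SDE with multiplicative noise; the model and parameters are my own illustration, not one of the speaker's models.

```python
# Euler-Maruyama for a logistic SDE  dX = r X (1 - X/K) dt + sigma X dW.
# Generic illustration of the scheme, not a model from the talk.
import math
import random

def euler_maruyama(x0, r, K, sigma, dt, steps, rng):
    x = x0
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))     # Brownian increment
        x += r * x * (1.0 - x / K) * dt + sigma * x * dW
        x = max(x, 0.0)                        # crude positivity safeguard
    return x

rng = random.Random(1)
finals = [euler_maruyama(0.1, 1.0, 1.0, 0.1, 0.01, 2000, rng) for _ in range(100)]
mean_final = sum(finals) / len(finals)
```

With weak noise the trajectories settle into a stationary distribution concentrated near the carrying capacity K = 1.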
January 31, 2012 10:30 - 11:20AM
A mathematical model which incorporates the spatial dispersal and interaction dynamics of mistletoes and birds is derived and studied to gain insight into the spatial heterogeneity in abundance of mistletoes. Fickian diffusion and chemotaxis are used to model the random movement of birds and the aggregation of birds due to the attraction of mistletoes, respectively. The spread of mistletoes by birds is expressed by a convolution integral with a dispersal kernel. Two different types of kernel functions are used to study the model: one is the Dirac delta function, which reflects the extreme case in which the spread behavior is local, and the other is a general non-negative symmetric function, which describes the nonlocal spread of mistletoes. When the kernel function is taken as the Dirac delta function, the threshold condition for the existence of mistletoes is given and explored in terms of the parameters. For the general non-negative symmetric kernel case, we prove the existence and
stability of non-constant equilibrium solutions. Numerical simulations are conducted by taking specific forms of kernel functions. Our study shows that the spatial heterogeneous patterns of the
mistletoes are related to the specific dispersal pattern of the birds which carry mistletoe seeds.
February 14, 2012 10:30 - 11:30AM
Propagating Lyapunov Functions to Prove Noise-Induced Stabilization
March 27, 2012 10:30 - 11:20AM
Remarkable progress in advanced microscopy has yielded unprecedented access to a path-wise observation of the diffusive behavior of bacteria, viruses, organelles and various invasive particulates in
biological fluids. Upon inspection of the data one immediately notes, "That's not Brownian motion!" Perhaps not surprisingly, media such as human mucus are highly heterogeneous and exhibit
significant viscoelastic properties. In this talk, I will provide a survey of recent experimental observations along with mathematical models that are currently in use. Wherever possible I will point
out open problems in this burgeoning area of research.
April 10, 2012 10:30 - 11:20AM
Classical mathematical formulation of the dynamics of chemical reaction systems involves setting up and analyzing a system of ODEs, or PDEs if spatial effects are considered. However, a system may be
sensitive to the stochasticity inherent in the mechanism of chemical reactions, for example due to having small numbers of molecules, or reaction rates which vary over several orders of magnitude. We
consider such a reaction system in a cellular environment, and also impose a 'global' cell division mechanism, which adds noise to the concentrations of chemical species along a given lineage, and
find parameter regimes for which this produces a qualitative change in the dynamics. We model these reaction and division processes as Jump Markov Processes, and discuss some toy models in which the
stochasticity can allow the system to exhibit behavior that is not possible with a deterministic formulation. One such behavior is bistability, for which we find two processes that have similar
macroscopic signatures but whose underlying causes are fundamentally different; one such case leads to the Large-Deviation theory of Freidlin and Wentzell. Such bistability is characteristic of many
gene expression systems that effectively incorporate an ON/OFF switch, but the framework is very general and is applicable in other areas, such as population genetics, where bistability may represent
alternating dominance of allelic types in a population. This is joint work with Lea Popovic of Concordia University (Montreal).
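Jump Markov process models of reaction networks like those in this abstract are typically simulated with Gillespie's stochastic simulation algorithm. Here is a minimal sketch for a simple birth-death network (constant production, per-molecule degradation); it is an illustration of the algorithm, not the bistable network from the talk.

```python
# Gillespie SSA for a birth-death process: 0 -> X at rate kb, X -> 0 at
# rate kd per molecule.  Minimal illustration of the algorithm.
import random

def gillespie_birth_death(kb, kd, x0, t_end, rng):
    t, x = 0.0, x0
    while True:
        total = kb + kd * x               # sum of reaction propensities
        t += rng.expovariate(total)       # exponential waiting time
        if t >= t_end:
            return x                      # state at time t_end
        if rng.random() < kb / total:
            x += 1                        # birth event
        else:
            x -= 1                        # death event

rng = random.Random(2)
samples = [gillespie_birth_death(10.0, 1.0, 0, 50.0, rng) for _ in range(200)]
mean_x = sum(samples) / len(samples)
```

The stationary distribution of this chain is Poisson with mean kb/kd = 10, which gives a handy correctness check on the sampler.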
May 18, 2012 2:30 - 3:30PM
This talk will attempt to provide a synthesis of the topics discussed at the workshop and to distill some central themes that point to opportunities and challenges in the field. | {"url":"http://mbi.osu.edu/programs/seminars/2011-2012/","timestamp":"2014-04-16T19:55:36Z","content_type":null,"content_length":"108141","record_id":"<urn:uuid:7b17a110-18a0-42e4-86d3-2f9bdd5fccac>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00111-ip-10-147-4-33.ec2.internal.warc.gz"} |
more Adjoint of Linear operator confusion...

December 3rd 2009, 06:45 PM  #1
So I have to prove the following, and not sure where to go with it:

Prove that if $V=W\oplus W^{\perp}$ and $T$ is the projection on $W$ along $W^{\perp}$, then $T=T^*$.

Any help? Thanks

December 3rd 2009, 07:01 PM  #2
Let $v_1=w_1+w_1', \ v_2=w_2+w_2'$ where $w_i \in W, \ w_i' \in W^{\perp}.$ Then $<Tv_1,v_2>=<w_1,w_2+w_2'>=<w_1,w_2>$ and $<v_1,Tv_2>=<w_1+w_1',w_2>=<w_1,w_2>,$
so $<Tv_1,v_2>=<v_1,Tv_2>$ and thus $T=T^*.$
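The self-adjointness argument in this thread can also be sanity-checked numerically: for the orthogonal projection T onto a subspace W, the inner products <T v1, v2> and <v1, T v2> must agree. The sketch below (mine, not from the thread) checks this for W = span{w} in R^3 with arbitrary test vectors.

```python
# Numerical check of the thread's claim: for the orthogonal projection T
# onto W = span{w} in R^3, <T v1, v2> = <v1, T v2>.  Vectors are arbitrary.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(v, w):                 # orthogonal projection of v onto span{w}
    c = dot(v, w) / dot(w, w)
    return [c * wi for wi in w]

w = [1.0, 2.0, 2.0]
v1 = [3.0, -1.0, 0.5]
v2 = [-2.0, 4.0, 1.0]
lhs = dot(project(v1, w), v2)      # <T v1, v2>
rhs = dot(v1, project(v2, w))      # <v1, T v2>
```

Applying `project` twice should also return the same vector, confirming T is idempotent (a projection).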
Math Forum Discussions
Topic: Cantor's absurdity, once again, why not?
Replies: 77 Last Post: Mar 19, 2013 11:02 PM
fom Re: Cantor's absurdity, once again, why not?
Posted: Mar 14, 2013 2:38 PM
Posts: 1,969
Registered: 12/4/12

On 3/14/2013 5:43 AM, david petry wrote:
> On Thursday, March 14, 2013 3:17:06 AM UTC-7, fom wrote:
>> For example, Wittgenstein understood perfectly
>> well how to apply Cantor's argument and he
>> certainly is not thought of as believing in
>> a completed infinity.
> Here's an "acceptable" use of the diagonal argument: Given a provably well-defined list of provably well-defined real numbers (so that every digit of every number on the list can provably be computed), the diagonal argument gives us a new provably well-defined real number not on the list.
> Notice that that argument doesn't require the use of an actual infinite.
Right. I have a preprint of a text on enumerable sets written
long ago by Robert Soare. In the opening pages he discusses
the diagonalizability of total recursive functions. So, there
is an "acceptable" diagonal argument for mathematics restricted
to recursive function theory. Soare goes on to observe that
the total functions do not span every computable function
so that partial recursive functions are introduced.
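Petry's "acceptable" form of the diagonal argument can be made concrete on a finite prefix: given n listed digit sequences, flipping the k-th digit of the k-th sequence yields a sequence provably different from each of the first n entries. A small sketch (my illustration, with arbitrary example data):

```python
# Finite illustration of the diagonal construction: build a digit sequence
# that differs from the k-th listed sequence in its k-th digit.
def diagonal(rows):
    # use digit 5, or 6 when the diagonal digit is already 5 (this also
    # avoids the usual 0/9 decimal-representation ambiguity)
    return [5 if rows[k][k] != 5 else 6 for k in range(len(rows))]

rows = [
    [1, 4, 1, 5, 9],   # digits of some listed "reals"
    [2, 7, 1, 8, 2],
    [5, 7, 7, 2, 1],
    [3, 3, 3, 3, 3],
    [9, 9, 9, 9, 9],
]
d = diagonal(rows)
```

Nothing here requires a completed infinity: for any finite n the construction provably produces a sequence not among the first n.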
>> However, he also did not attack the mathematicians
>> who conducted investigations along those lines.
> He most certainly did mock them.
Perhaps. I have a small book transcribing certain
lectures he gave to certain notable mathematicians.
It was civil. He gave wonderful explanations of how
natural language structure accommodates the typical
beliefs of mathematicians as they work with abstract
The problem is that "language game" theories do not
help to explain why a bridge or building does not
crash to the ground. Early pyramids do not have the
majesty of those built later at Gizeh. Once one begins
the process of using mathematics for real-world situations,
one begins referring to mathematical terms with the
same linguistic forms as one would with material objects
like chairs and tables.
Next, you have a 7 year old child asking "Why?"
Because physics has seemingly given up on explaining
"the observable universe" in favor of mathematical
abstractions, non-material physicalism is becoming
the only reasonable choice.
As for mathematics, I have no reason to support its
uses to discredit anyone's metaphysical beliefs. It
is perfectly plausible to treat mathematics along the
lines of Lesniewskian nominalism and regard its
essential value in the interderivability of its
facts. That is, because mathematics can be understood
in the sense of an Aristotelian demonstrative science,
there is no reason to confuse its facticity with
More precisely, mathematical facticity relative to
the epistemic justification of a derivational system
is more important than mathematical actuality relative
to dialectical argument. Thanks to modern advances
beginning in the nineteenth century, it is possible
to divorce "essence" from "substance" in the
Aristotelian framework.
Modern logicians, however, try to distance themselves
from epistemology. Therefore, one has only the
Aristotelian distinction to guide this interpretation.
> FWIW, I was motivated to start this thread after reading an article you wrote about problems you had in your encounters with mathematicians who didn't accept your ideas. I'd like to know a little more about that, although I admit you really have no obligation to tell us more if you don't want to.
> The mathematics community has simply lost touch with reality.
I have no real issues along those lines except that
sometimes the personal frustration becomes somewhat
emotional. Professional mathematicians are busy
people. Most who might be interested in "pure
mathematics" have responsibilities that must be
met to justify the salaries they receive. On the
one hand, they are expected to teach. To the extent
that they receive additional time for their own
interests, they are expected to publish. That latter
expectation is demanded from the same people who
think quarterly profits are important to the security
of the endowments with which they are entrusted.
Because I had been unable to fulfill my own academic
goals, I can only hope that perhaps someone would
take an interest in my work. Given the reality of
the world, I probably have a better chance of winning
a lottery (which, of course, is less likely than being
hit by lightning walking in a grassy field during a
thunderstorm).
There is even more to consider. If common accounts of
Greek mathematics as the first mathematics that can
be considered "abstract" are reliable, then mathematics
is a 2500 year old subject. Any given mathematician
knows only a small part of what that entails and even
less of how mathematics is being practically applied.
Almost anyone who investigates a particular aspect
of mathematics that will require investigation into
the historical record will be surprised at what they find.
In "A Mathematical History of the Golden Number,"
Roger Herz-Fischler writes:
"At first it seemed as if this
mathematical history would be fairly
short and straightforward. This
opinion was based on preliminary
reading not only of parts of the
Elements [Euclid], but also some of
the standard histories of Greek
mathematics. However, two things
soon became clear: the early Greek
aspect was not as clear-cut as it
was often made out to be and the
historical aspects that needed to be
considered neither started nor ended
with the early Greeks."
But, then if certain more informed accounts
are to be believed, I have mislead you.
In "Men of Mathematics" by E. T. Bell
we find the remark,
"[...] Finally we arrive at the first
great age of mathematics, about 2000 B.C,
in the Euphrates Valley.
"The descendants of the Sumerians in
Babylon appear to have been the first
"moderns" in mathematics; certainly
their attack on algebraic equations is
more in the spirit of the algebra we
know than anything done by the Greeks
in their Golden Age. More important
than the technical algebra of those
ancient Babylonians is their recognition --
as shown by their work -- of the necessity
for *proof* in mathematics. Until recently
it had been supposed that the Greeks were
the first to recognize that proof is
demanded for mathematical propositions.
This is one of the most important steps
ever taken by human beings. Unfortunately
it was taken so long ago that it led
nowhere in particular so far as our own
civilization is concerned -- unless the
Greeks followed consciously, which they
may well have done. They were not
particularly generous to their predecessors."
How could anyone expect a handful of men
to be expert on the full array of
mathematical knowledge? And, how could
anyone expect men and women to be
different from the way men and women
have to be in order to live from day
to day and succeed in their personal
pursuits? I would be irrational if
I failed to recognize just how small
the small indignations I have experienced
really are.
Whatever difficulties I may have had
through the years, I was educated in
a respectable program. I respect the
people who study mathematics. I respect
mathematics as an academic discipline.
And, although one has to view one's own
interest in particular ways, I try to
respect other mathematician's interests
by trying to understand what they find
so interesting in what they study.
In view of my general isolation from
actively practicing mathematicians,
however, all of that effort is simply
personal effort to learn through
You may find some justifications
for your beliefs in comments you
might read that I have written.
And, I might sometimes be sharp
with my remarks to others. But,
you will not find me sympathetic to
your interpretations in this regard.
MathGroup Archive: October 2003 [00073]
Re: Plotting functions with undefined values
• To: mathgroup at smc.vnet.net
• Subject: [mg43772] Re: Plotting functions with undefined values
• From: "David W. Cantrell" <DWCantrell at sigmaxi.org>
• Date: Sat, 4 Oct 2003 02:04:51 -0400 (EDT)
• References: <blgk0b$jb2$1@smc.vnet.net> <blj62u$1hg$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
"Bo Le" <bole79 at email.si> wrote:
> You can define your function with the If clause:
> f[x_,y_,m_]:=If[y==1-x,0,(m+x+1)/(x+y-1)];
> Plot3D[ f[x,y,1],{x,0,1},{y,0,1} ]
But unfortunately that produces a graph which is _even more deceptive_ than
naively using just
f[x_,y_,m_]:= (m+x+1)/(x+y-1); Plot3D[f[x,y,1],{x,0,1},{y,0,1}]
Let me suggest something which produces a graph which is a fair
representation. And, although certainly not as nice as David Park's
solution, mine is substantially simpler:
f[x_,y_,m_]:= If[x+y != 1,(m+x+1)/(x+y-1)]; Plot3D[f[x,y,1],{x,0,1},{y,0,1}]
Of course, there are various comments generated before the graph, but the
graph produced is, as I said, a fair representation. Another possibility
(perhaps closer to what Bo Le had in mind), would be to use
which again gives a graph having no spurious portions of the surface.
David Cantrell
> "Ronaldo Prati" <rcprati at bol.com.br> wrote in message
> news:blgk0b$jb2$1 at smc.vnet.net...
> > I need to plot the function pos[m_] = (m + x -1)/(x+y-1), where
> > 0<=pos<1, x and y are between 0 and 1 but x+y-1!=0. My problem is
> > that I could not input this constraint. When plotting the function using
> > Plot3D, several errors of infinite expression 1/0 encountered are given
> > at the region where x+y = 1. I'm not interested in plotting the
> > region where x+y = 1, but the region where the function has a true
> > value (the graphic appears, but with a straight region where it is
> > undefined). How can I solve it?
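The same idea, returning an undefined value near the singular set so the plotter simply leaves that region blank, carries over to other environments. Below is a NumPy sketch of it (my own illustration, not from the thread, using the (m+x+1)/(x+y-1) form from the replies); surface plotters such as matplotlib's plot_surface skip NaN cells much as Mathematica skips unevaluated points:

```python
import numpy as np

def f(x, y, m=1.0, tol=1e-9):
    """(m + x + 1)/(x + y - 1), masked to NaN on the singular line x + y = 1."""
    denom = x + y - 1.0
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (m + x + 1.0) / denom
    return np.where(np.abs(denom) < tol, np.nan, z)

x, y = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
z = f(x, y)
# Only the grid points on x + y = 1 come out NaN; everything else is finite,
# so a surface plot shows the true values with a blank seam along that line.
```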
PAPER AIRPLANE ACTIVITY
In the paper airplane activity students select and build one of five different paper airplane designs and test them for distance and for time aloft. Part of this activity is designed to explore NASA
developed software, FoilSim, with respect to the lift of an airfoil and the surface area of a wing.
Technology Needed
• Internet Access
• Graphing Calculator (optional)
Time Required
• 2-3 class periods
Classroom Organization
• Students should work in groups of 3 or 4.
1. Give students a sheet of unlined paper and instructions for construction of a paper airplane (See download above).
2. Students should give their plane a name using the aviation alphabet. (Example N 831 FE represents November 831 Foxtrot Echo. Identification numbers and letters must not exceed 7; and the
identification must begin with N, which stands for the United States.)
3. Students should determine the area of the wings of their planes. If students are able, have them unfold their planes and lay out basic geometric shapes to fill the wing area. Then have them
calculate the total area from the sum of the areas of the shapes. (See example.)
If students are not able to calculate geometric areas, they could make a duplicate plane, cut off the wings, and lay the wings onto measured grids or pieces of graph paper and count the total
squares covered, estimating partial squares.
A variation of this technique that eliminates a duplicate plane and cutting wings is to draw or trace a grid on a blank transparency with a sharpie marker and then hold the clear grid over the
wings to count squares covered.
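The shape-decomposition idea in step 3 can also be scripted. The shapes and dimensions below are made-up examples for illustration, not measurements of any of the five designs:

```python
# Hypothetical wing decomposed into one rectangle and two triangles
# (all dimensions in centimeters are invented for this example).
def rectangle_area(width, height):
    return width * height

def triangle_area(base, height):
    return 0.5 * base * height

one_wing = rectangle_area(12, 5) + triangle_area(12, 4) + triangle_area(5, 3)
total_wing_area = 2 * one_wing  # the plane has two wings
print(total_wing_area)  # 183.0 square cm
```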
4. Have students fly their planes in the gym or hallway or other large indoors area (to eliminate wind effects) five times, each time trying for maximum distance. Stress trying to duplicate the same
launch angle and speed. Now do another five trials, this time trying for maximum time aloft. Students should record their distances and times and average the three longest distances and the three
longest times.
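Step 4's scoring rule, keeping the three longest of the five trials and averaging them, is a one-liner; the sample distances here are invented:

```python
def top_three_average(trials):
    """Average the three best of a list of trial results (step 4's rule)."""
    best_three = sorted(trials, reverse=True)[:3]
    return sum(best_three) / 3

distances_m = [6.2, 7.5, 5.9, 8.1, 7.0]  # hypothetical flight distances in meters
print(round(top_three_average(distances_m), 2))  # 7.53
```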
5. Have students put their data onto a graph for the class, one graph of time aloft vs. wing area and the other of distance vs. wing area.
6. Discuss the results from the graphs as a class, and then ask for predictions as to what would happen if the wings were made smaller.
7. Have the students draw a line two centimeters from and parallel to the trailing edges of their wings, and then cut that 2 cm portion off the wings (Shown in red).
The cut off part should be tucked on the inside of the plane when it is refolded in order to keep mass constant. You might ask the class to provide an explanation for doing this.
8. Repeat steps three through six.
9. Have the students investigate their results using FoilSim. They should set the ANGLE OF ATTACK to 5 degrees and then vary only the area of the wing and note the effect on the value of LIFT. They
can compare these results to their own experimental results.
10. ADDITIONAL QUESTION: "Why don't all planes have the biggest wing area possible? Why do some fighter jets have small wings?" (ANSWER: There are other factors that contribute to lift, such as
velocity and shape of the wing. The weight of a plane is also very important.) Students can investigate these other factors by going through the lessons that are part of FoilSim.
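The pattern students should see in step 9 follows from the standard lift equation, in which lift grows linearly with wing area when speed, air density, and the lift coefficient are held fixed. A sketch (the coefficient and speed values are illustrative guesses, not FoilSim outputs):

```python
def lift(cl, rho, v, area):
    """Standard lift equation: L = Cl * (rho / 2) * v^2 * A."""
    return cl * 0.5 * rho * v ** 2 * area

rho = 1.225  # sea-level air density, kg/m^3
small = lift(cl=0.5, rho=rho, v=3.0, area=0.02)  # paper-plane-sized numbers
large = lift(cl=0.5, rho=rho, v=3.0, area=0.04)
print(large / small)  # 2.0 -- doubling the area doubles the lift
```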
Extension Activity
Assessment Strategies/Evaluation
1. Each group could make a presentation on their airplane and what made its design successful.
2. Students could individually graph the experimental data and make a report.
3. Challenge students to fold a better plane and explain the reasons for changes in design.
4. Students could write a summary of experimental results and relate the variables tested.
a^p≡1 (mod p^n) iff a≡1 (mod p^(n-1)), n≥2
March 8th 2010, 08:59 AM
The problem is the following:
p an odd prime, a an integer, n≥2
Show: a^p≡1 (mod p^n) iff a≡1 (mod p^(n-1))
I started with the following, but it didn't get me anywhere:
a^p≡1 (mod p^n)
a^p-1≡0 (mod p^n)
(a-1)(a^(p-1)+...+1)≡0 (mod p^n) | {"url":"http://mathhelpforum.com/number-theory/132693-p-1-mod-p-n-iff-1-mod-p-n-1-n-2-a-print.html","timestamp":"2014-04-19T10:17:01Z","content_type":null,"content_length":"3495","record_id":"<urn:uuid:e5a9ae08-c559-41f0-a4cb-09d31c7f4802>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00047-ip-10-147-4-33.ec2.internal.warc.gz"} |
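A brute-force check (not a proof) confirms the statement on small cases; for an actual proof, one standard route after the factoring above is the lifting-the-exponent fact that for odd p and a ≡ 1 (mod p), $v_p(a^p-1) = v_p(a-1)+1$. The script below is just a sanity check:

```python
# Sanity check of: a^p = 1 (mod p^n)  <=>  a = 1 (mod p^(n-1)),  n >= 2,
# over small odd primes (this is verification, not a proof).
for p in (3, 5, 7):
    for n in (2, 3):
        modulus = p ** n
        for a in range(1, modulus + 1):
            lhs = pow(a, p, modulus) == 1
            rhs = a % p ** (n - 1) == 1
            assert lhs == rhs, (p, n, a)
print("equivalence holds for all residues, p in {3,5,7}, n in {2,3}")
```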
Automorphy Factors and Bundles
The question I'm considering is the following: given a 1-cocycle of the modular group in Hom$(H;\textrm{GL}_{r}(C))$, call it $f$, when does it induce a vector bundle structure on the corresponding
curve $X(1)$ (or more generally $X(\Gamma)$)?
The reason I'm confused is that I have two contradictory views on whether this should hold in general. The first is based on an intuition similar to filling in a punctured Riemann
surface: namely, we can clearly define a vector bundle over $X(\Gamma)$ minus the cusps and the points with non-trivial stabilizers. As these points form a finite set, we can choose suitable charts around them
such that the cocycle need not be defined at these points, so we can just take it to be as for the simpler quotient space.
The competing reasoning is by example: if we look at a non-trivial 1-dimensional rep of the modular group, say $S \mapsto i$ and $T \mapsto -\overline{\rho}$, then this gives us a 1-cocycle (that is
independent of $\tau$) and thus would induce a line bundle on $X(1)$. But as this line bundle would have its 6th power isomorphic to the canonical line bundle, this isn't possible.
I can't really see which of these statements is wrong; any ideas would be much appreciated.
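For reference, the cocycle condition presumably intended here is the standard factor-of-automorphy relation (stated as an assumption, since the question does not spell out its convention): a map $j : \Gamma \times H \to \mathrm{GL}_r(\mathbb{C})$ satisfying

```latex
% assumed convention for the 1-cocycle (factor of automorphy):
j(\gamma\gamma', \tau) = j(\gamma, \gamma'\tau)\, j(\gamma', \tau)
\qquad \text{for all } \gamma, \gamma' \in \Gamma, \ \tau \in H,
```

so that $(\tau, v) \sim (\gamma\tau, j(\gamma,\tau)v)$ is an equivalence relation on $H \times \mathbb{C}^r$ and the quotient is a candidate bundle.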
cv.complex-variables rt.representation-theory dg.differential-geometry
2 Answers
A cocycle for a modular group is precisely the same as a descent datum for the quotient map $\mathbf{H} \to [\mathbf{H}/\Gamma]$, where the target is the quotient orbifold. This is a
special case of the fact that pullback induces an equivalence of categories between vector bundles on the quotient, and vector bundles $V$ on the upper half-plane equipped with a cocycle
with coefficients in $\operatorname{Aut}(V)$.
If you want to make a vector bundle on $X(1)$ or $X(\Gamma)$, your cocycle has to satisfy some properties, and you need to specify some additional data. In particular, you need triviality
at elliptic points to descend from $[\mathbf{H}/\Gamma]$ to the affine coarse space $\mathbf{H}/\Gamma$ (which is often written $Y(\Gamma)$), and you need a gluing datum to describe the
behavior at the cusps in order to define a vector bundle on the compact curve $X(\Gamma)$.
We have a standard example in level 1. The stack $Ell$ has Picard group $\mathbb{Z}/12\mathbb{Z}$ (I think this may be a theorem of Fulton). The trivial bundles descend to the coarse space $Y(1)$, which is an affine line, and all vector bundles on $Y(1)$ are trivial. Adding a cusp yields $X(1)$, which is a projective line, and vector bundles on $X(1)$ are just sums of line bundles parametrized by degree. The upshot is that if you don't specify gluing data at infinity, your cocycle will not tell you anything about the weight of a modular form.
$Y(1)$, which is an affine line, and all vector bundles on $Y(1)$ are trivial. Adding a cusp yields $X(1)$, which is a projective line, and vector bundles on $X(1)$ are just sums of line
bundles parametrized by degree. The upshot is that if you don't specify gluing data at infinity, your cocycle will not tell you anything about the weight of a modular form.
Regarding your comment about level 1 forms of weight 6: they all vanish at $i$.
Thanks I get it i think - we take our space $Y(\Gamma)$ say then as we have only a finite no. of cusps we can extend our bundle to the whole space non-uniquely. The gluing data I guess
can be described canonically 'iff' we have automorphic forms that obey nice conditions at the cusps. I assume this holds also for the case for the orbifold which I see quite easily
accepts all cocycles as vector bundles. + nice remark about weight 6 – Sarah Dec 11 '12 at 18:52
The way you would want to produce that vector bundle is gluing the trivial vector bundle $\mathbb H \times \mathbb C^r$ together along the maps:
for each $g \in \Gamma$, the map sending $(x,y)$ to $(g(x),\rho_g(y))$
First, note that to be well-defined at the elliptic points, the stabilizers of these elliptic points, meaning the torsion elements of the modular group, must act trivially. This is already enough to force the representation to be trivial, as the modular group is generated by its torsion elements.
One could solve this problem by considering the modular curve $X(1)$ as a stack or an orbifold.
Second, note that even if one could make a nontrivial vector bundle on $\mathbb H/\Gamma$, it is not clear at all how to extend this to the cusps.
Okay great - I get the global construction: Just one question though-in the case that we have $\rho_{g}(x) = (cx + d)^{2k}$, for modular forms of weight 2k, then we only get that $\rho_
{S}(i) = 1$ if k is even - but we can obviously get forms of weight 6 for example. Thanks for the idea about orbifolds as well - I was thinking about whether this may also be suitable
but I didn't do any calculations yet ... – Sarah Dec 10 '12 at 22:30
I believe it is correct to see modular forms as sections of line bundles on the appropriate orbifold. – Will Sawin Dec 10 '12 at 22:38
Fairfax, CA ACT Tutor
Find a Fairfax, CA ACT Tutor
...S. taught pre-medical Chemistry, nursing Chemistry and Algebra at Dominican University in San Rafael California before starting her own business. She has been teaching and tutoring since 1979.
As an educator, Dr.
29 Subjects: including ACT Math, reading, chemistry, physics
...I love teaching and enjoy working with students of all ages. I served as a teacher's assistant in my math, English and Chinese classes since I excelled in those classes during middle and high
school. I have taught my younger sister piano and my friends' kids Mandarin and math.
22 Subjects: including ACT Math, calculus, algebra 1, algebra 2
...I have extensive knowledge of pretty much all of math through the end of college, and of statistics well beyond that. I'm good at zeroing in on precisely what's giving you trouble, and will
break each problem into pieces you can understand and learn. While a grad student at UC Berkeley, I recei...
14 Subjects: including ACT Math, statistics, geometry, GRE
...If you approach test-taking in a way that makes it fun, it takes a lot of the anxiety out of the process, and your scores will improve. Working with a kind and patient tutor can be
instrumental in this process. I have experience with the ACT, SAT, PSAT, GRE, and LSAT.I began studying French int...
48 Subjects: including ACT Math, reading, Spanish, English
...Also, I've passed the rigorous CSET Math 1 and 2 which covers the algebra, geometry, and other math on the CBEST. The Reading comprehension portion of the CBEST requires textual analysis. My
Master's degree work contained a significant bulk of textual analysis and comprehension.
37 Subjects: including ACT Math, reading, English, writing
Related Fairfax, CA Tutors
Fairfax, CA Accounting Tutors
Fairfax, CA ACT Tutors
Fairfax, CA Algebra Tutors
Fairfax, CA Algebra 2 Tutors
Fairfax, CA Calculus Tutors
Fairfax, CA Geometry Tutors
Fairfax, CA Math Tutors
Fairfax, CA Prealgebra Tutors
Fairfax, CA Precalculus Tutors
Fairfax, CA SAT Tutors
Fairfax, CA SAT Math Tutors
Fairfax, CA Science Tutors
Fairfax, CA Statistics Tutors
Fairfax, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/fairfax_ca_act_tutors.php","timestamp":"2014-04-17T04:19:58Z","content_type":null,"content_length":"23579","record_id":"<urn:uuid:b4a87266-7157-4d31-b0f1-399bca2073e8>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00407-ip-10-147-4-33.ec2.internal.warc.gz"} |
Laplace Transform of this signal?
November 17th 2011, 02:59 PM #1
Feb 2010
In my linear systems class, we are doing Laplace transforms using transform tables and the properties. I can usually do the problems that closely resemble the table, but when they involve heavy
algebraic manipulations, I don't know what to do.
Can someone explain the steps of this solution?
ImageShack® - Online Photo and Video Hosting
Why do the +1 and -1 appear? Where did the extra exponential come from? If someone could go line by line, I would appreciate it, along with how to solve these kinds of signals in general.
Re: Laplace Transform of this signal?
The goal of the algebra is so that we can use the t-axis translation theorem. This states that
$L\{f(t-a)u(t-a) \}=e^{-as}F(s)$
The problem is that you have different time shifts in your unit step function and the other function.
Consider the function
We want to write in with only factors of (t-1) and constants
Now for simplicity let $u=t-1$ this gives
Now we need to do something similar with the exponential term $e^{-5t}$ lets just focus on the exponent
$-5t=-5t+\underbrace{5-5}_{\text{add zero}}=-5(t-1)-5$ so
Now lets put everything to gether
Now just distribute and use the t-axis translation theorem to transform each part!
November 21st 2011, 10:02 AM #2 | {"url":"http://mathhelpforum.com/differential-equations/192142-laplace-transform-signal.html","timestamp":"2014-04-19T04:26:05Z","content_type":null,"content_length":"37073","record_id":"<urn:uuid:d8bee56d-5d90-4458-ad7c-0289131b359c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00562-ip-10-147-4-33.ec2.internal.warc.gz"} |
Normal, t, chi square distribution confidence intervals
January 29th 2013, 09:54 AM #1
Super Member
Oct 2012
I've been looking into different ways of calculating confidence intervals and I came across methods using the normal distribution, t-distribution and chi square distribution.
I want to find the confidence intervals for a binomial distribution with a high sample size (around N=5000). I am wary of using a normal approximation to the binomial distribution, not because the
approximation is continuous, but because the quantity being approximated cannot be negative while the normal distribution assigns probability to negative values.
I have two questions
1. Is the t-distribution always better than a normal distribution for finite values of N (or rather N less than the population size)? I realise at N=5000 the difference is negligible but I am
wondering about this from a theory point of view.
2. Is there any distribution that always gives the best estimate of the confidence interval for a sample smaller than the population?
Re: Normal, t, chi square distribution confidence intervals
Hi Shakarri!
The z-distribution (which is the same as the normal distribution) is always better than the t-distribution if and only if you have knowledge of the standard deviation of the population ( $\sigma$
Since you are talking about a binomial distribution that implies you have knowledge about $\sigma$. Therefore the normal distribution is a better approximation than the t-distribution.
There is no single distribution that always gives the best estimate for a confidence interval.
It depends on the (assumed) distribution of the population (normal, binomial, uniform, poisson, ...).
Btw, in the tails of any distribution the probability density becomes very unreliable in practice, since there are always other effects that are unaccounted for.
In practice extreme outcomes are more probable than any normal distribution predicts.
Last edited by ILikeSerena; January 29th 2013 at 11:10 AM.
Re: Normal, t, chi square distribution confidence intervals
Thank you for the information. Particularly about tails of the distribution.
Just to confirm, when you say "knowledge of the standard deviation" do you mean that having a sample standard deviation suffices? In what I am sampling it would be impossible to get the real standard deviation.
Re: Normal, t, chi square distribution confidence intervals
No, you have omitted the part of my statement that says: the standard deviation of the population (usually denoted as $\sigma$).
This is not the standard deviation of the sample (usually denoted as s).
This is the key difference between the z-test and the t-test.
If you do not have the standard deviation of the population, you cannot apply the z-test and are stuck with the t-test that uses the standard deviation of the sample.
Since you are talking about a binomial distribution, this implies knowledge of the standard deviation of the population: $\sigma = \sqrt{np(1-p)}$.
Re: Normal, t, chi square distribution confidence intervals
I noticed you used sigma, not s; I was just unsure what "knowledge" meant. I was confused by you saying I implied I have knowledge of sigma even though I said that my sample size was less than the
population; the population size is infinite, so I can never get the true value of the standard deviation.
Anyway, cheers for the advice. I'll stick with t-tests unless the distribution is not approximately normal.
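For concreteness, here is the normal-approximation ("Wald") interval at the sample size discussed in the thread; the success count and the 1.96 quantile are illustrative choices of mine, not numbers from the thread:

```python
import math

def wald_interval(successes, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a binomial proportion."""
    p_hat = successes / n
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

low, high = wald_interval(1500, 5000)  # hypothetical: 1500 successes in N = 5000
print(round(low, 4), round(high, 4))  # about 0.2873 0.3127
```

At N = 5000 this interval is narrow and sits well inside [0, 1], but near p-hat = 0 or 1 the same formula can spill outside the unit interval, which is exactly the worry raised above; boundary-respecting alternatives such as the Wilson score interval exist for that situation.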
January 29th 2013, 11:07 AM #2
January 29th 2013, 12:53 PM #3
January 30th 2013, 03:26 AM #4
January 30th 2013, 05:23 AM #5
Brian Spencer
Professor Spencer's research interests are in the applied mathematics of materials. The research combines physics-based mathematical models with asymptotic, analytic and numerical methods to describe
growth processes, instabilities and microstructure formation in materials. Specific research programs include:
● instabilities and pattern formation in strained alloy film deposition
● formation of quantum dots in strained solid films
● corner regularizations in crystal growth models
scientific areas of interest: crystal growth, elasticity, phase transformations, chemical thermodynamics, fluid mechanics, strained solid films, solidification, diffusion, contact angles, dendrites,
surface energy effects, anisotropy, facetting.
mathematical areas of interest: free boundary and moving boundary problems, continuum mechanics, morphological instability, pattern formation, differential equations, integral equations, perturbation
methods, singular perturbations, numerical methods, spectral methods, finite difference methods, numerical stability, bifurcation and stability, energy minimization, variational calculus. | {"url":"http://math.buffalo.edu/~spencerb/","timestamp":"2014-04-20T01:03:22Z","content_type":null,"content_length":"9322","record_id":"<urn:uuid:5cf8b4c0-7440-4716-a264-01d3c6a15bd7>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00156-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lionville Algebra Tutor
Find a Lionville Algebra Tutor
...I have taught Geometry as a private tutor since 2001. I completed math classes at the university level through Advanced Calculus. This includes two semesters of elementary calculus, vector and
multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and non-euclidean geometry.
12 Subjects: including algebra 2, algebra 1, calculus, writing
...It also provided me with a better understanding of the types of problems the SAT focuses on, as well as some of the simple math tips and tricks we learned so long ago in elementary school that
really simplify some of those complicated problems. While purchasing an SAT book is a good first step i...
9 Subjects: including algebra 1, algebra 2, chemistry, geometry
...Further, having studied classical languages, I have a very good linguistic sense and could thus tutor English language and composition. Aside from my primary strengths in mathematics,
philosophy, and English language, I am also competent in some other subject areas, specifically literature, clas...
26 Subjects: including algebra 2, reading, algebra 1, writing
...Where does this fit in to other subjects you've taken before? What will this course not include? -Provide a thorough understanding of basic concepts. If you understand the foundations of a
subject very well, you can keep your head in a reasonable place when things get complicated. -Work on the same level as my students.
25 Subjects: including algebra 2, algebra 1, chemistry, writing
Hi,I graduated from the College of William and Mary with a Ph. D. degree in Chemistry, and this is my 7th year teaching chemistry in college. I like to tutor chemistry as well as math, and I look
forward to working with you to improve your understandings of chemistry and/or math.I am an instructor in college teaching chemistry, and I have taught organic chemistry (both semesters) many
9 Subjects: including algebra 1, algebra 2, chemistry, geometry | {"url":"http://www.purplemath.com/Lionville_Algebra_tutors.php","timestamp":"2014-04-17T07:43:11Z","content_type":null,"content_length":"24023","record_id":"<urn:uuid:a64bc16e-bd8c-46d3-acb2-96fb98fd84f9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00263-ip-10-147-4-33.ec2.internal.warc.gz"} |
Windsor, CT Math Tutor
Find a Windsor, CT Math Tutor
...I can help you turn that stress into positive energy and excitement. I recently worked with a student who was struggling with their Algebra II assignments and tests. Over the course of the
semester, the student was able to increase their skill and confidence.
11 Subjects: including precalculus, algebra 1, algebra 2, geometry
...Previously, I taught high school. In my free time, I like to hang out with my baby daughter and husband. I really enjoy one on one tutoring.
10 Subjects: including precalculus, trigonometry, statistics, probability
...The word Algebra scares many people, but simple explanations and basic terminology make math seem so much less scary. The approach I take is showing students the strengths that they already
have and how to apply those to learning and doing Algebra. The SAT prep that I offer involves doing a pra...
11 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...It's one thing to know how to plug numbers in to the quadratic formula and get the right answer, but it's way better in the long run if they know HOW the formula gives them that answer. I
don't have a set curricula but prefer instead to get a baseline of where the student currently stands in the...
18 Subjects: including algebra 2, geometry, prealgebra, precalculus
...I would also determine any areas the student has not mastered and provide practice in these areas along with several ways of introducing this material to the student. Prealgebra leads a
student from working with the concrete to working with the abstract. Along with practice using basic operations, the student is introduced to working with variables and integers.
7 Subjects: including algebra 2, algebra 1, grammar, prealgebra
Related Windsor, CT Tutors
Windsor, CT Accounting Tutors
Windsor, CT ACT Tutors
Windsor, CT Algebra Tutors
Windsor, CT Algebra 2 Tutors
Windsor, CT Calculus Tutors
Windsor, CT Geometry Tutors
Windsor, CT Math Tutors
Windsor, CT Prealgebra Tutors
Windsor, CT Precalculus Tutors
Windsor, CT SAT Tutors
Windsor, CT SAT Math Tutors
Windsor, CT Science Tutors
Windsor, CT Statistics Tutors
Windsor, CT Trigonometry Tutors | {"url":"http://www.purplemath.com/Windsor_CT_Math_tutors.php","timestamp":"2014-04-19T17:19:04Z","content_type":null,"content_length":"23792","record_id":"<urn:uuid:9da176d0-8c3b-4570-9e32-fdbdbd28be4c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00273-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rahns, PA Algebra 2 Tutor
Find a Rahns, PA Algebra 2 Tutor
...I am highly committed to students' performances and to improve their comprehension of all areas of mathematics.I have excelled in courses in Ordinary Differential Equations in both
undergraduate and graduate school, as well as partial differential equations at the graduate level. Also, I have tu...
19 Subjects: including algebra 2, calculus, geometry, statistics
...The most important aspect of my tutoring is the feedback that I receive from my students. I am constantly looking to improve my teaching style and welcome any suggestions of how I can be a
more effective tutor. If any student is unsure whether or not I will be able to help them in a particular subject, I encourage them to contact me so that we can discuss if I can be of assistance
to them.
19 Subjects: including algebra 2, calculus, statistics, geometry
...Having worked with a diverse population of students, I have strong culturally competent teaching practices that are adaptive to diverse student learning needs. I have a robust knowledge of
various math curricula and resources that will get your child to love math in no time! I look forward to w...
9 Subjects: including algebra 2, geometry, ESL/ESOL, algebra 1
...I have received extensive training on how to take the PSAT/SAT and can offer a variety of methods for achieving high scores on those tests. I understand that many may have reservations about
hiring me due to my young age, but I believe my youth provides a unique advantage: I have been where your...
42 Subjects: including algebra 2, reading, English, calculus
I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching
because this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including algebra 2, calculus, physics, ACT Math
airirm @ PaGaLGuY
Please help me out in solving this DI caselet.
7 Comments
@clxlfms This is the complete question ji. Is there any....
@clxlfms Ji please share the explanation.. .
Let total combined income be I and total combined tax be ....
Please help me out in solving this DI caselet.
Escalators _/\_
(Q1) P and Q walk up a moving up escalator at constant speeds. For every 5 steps that P takes Q takes 3 steps. How many steps would each have to climb when the escalator is switched off, given that P
takes 50 and Q takes 40 steps to climb up the moving up escalator respecti...
6 Comments
@garry1337 Ji please share your approach.. .
@garry1337 Thanks a lot for the explanation and the lin....
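A worked sketch of (Q1), assuming the standard model for these puzzles: constant walking rates in the ratio 5:3 for P and Q, a constant escalator rate, and total visible steps N equal to steps walked plus the escalator's contribution while on it.

```python
from fractions import Fraction

def escalator_steps(p_steps, q_steps, p_rate, q_rate):
    """Total visible steps N when the escalator is switched off.

    Assumed model: constant walking rates, constant escalator rate e, and
    N = steps_walked + e * time_on_escalator for each walker.
    """
    t_p = Fraction(p_steps, p_rate)   # time P spends on the escalator
    t_q = Fraction(q_steps, q_rate)   # time Q spends on the escalator
    # p_steps + e*t_p = q_steps + e*t_q = N; solve for e, then N
    e = Fraction(p_steps - q_steps) / (t_q - t_p)
    return p_steps + e * t_p

print(escalator_steps(50, 40, 5, 3))  # -> 80 steps for either walker
```

With rates 5 and 3, the escalator turns out to move 3 steps per unit time, so the stopped escalator shows 80 steps.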
Escalators _/\_
(Q1) P and Q walk up a moving up escalator at constant speeds. For every 5 steps that P takes Q takes 3 steps. How many steps would each have to climb when the escalator is switched off, given that P
takes 50 and Q takes 40 steps to climb up the moving up escalator respecti...
6 Comments
@garry1337 Ji please share your approach.. .
@garry1337 Thanks a lot for the explanation and the lin....
couldn't get the third question.. ;(.
replied to Quant by Arun Sharma
In a photograph, ten family members named J through S are sitting on ten different chairs arranged in the shape of pyramid. The rows are marked as R-I through R-IV from top to bottom. There are four
chairs in row R-IV,three in row R-III, two in R-II and one chair in row R-I. All the family member...
9 Comments
@Tina24 Ji can you please share the approach? thanks a ....
Delaunay Triangulation of Points on the Unit Sphere
SPHERE_DELAUNAY is a MATLAB library which computes the Delaunay triangulation of points on the unit sphere.
According to Steven Fortune, it is possible to compute the Delaunay triangulation of points on a sphere by computing their convex hull. If the sphere is the unit sphere at the origin, the facet
normals are the Voronoi vertices.
SPHERE_DELAUNAY uses this approach, by calling MATLAB's convhulln function to generate the convex hull. The information defining the convex hull is actually the desired triangulation of the points.
Since this computation is so easy, other parts of the program are designed to analyze the resulting Delaunay triangulation and return other information, such as the areas of the triangles and so on.
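One such derived quantity is the area of each spherical triangle. On the unit sphere this follows from Girard's theorem: the area equals the sum of the triangle's angles minus pi. SPHERE_DELAUNAY itself is MATLAB; the following is an independent Python sketch of the computation, not the library's code.

```python
import math

def spherical_triangle_area(a, b, c):
    """Area of the spherical triangle with vertices a, b, c on the unit
    sphere, via Girard's theorem: area = (sum of angles) - pi."""
    def dot(x, y):
        return sum(xi * yi for xi, yi in zip(x, y))

    def angle_at(p, q, r):
        # angle at vertex p between the great-circle arcs pq and pr:
        # project q and r onto the plane tangent to the sphere at p
        u = [q[i] - dot(p, q) * p[i] for i in range(3)]
        v = [r[i] - dot(p, r) * p[i] for i in range(3)]
        cos_angle = dot(u, v) / math.sqrt(dot(u, u) * dot(v, v))
        return math.acos(max(-1.0, min(1.0, cos_angle)))

    return (angle_at(a, b, c) + angle_at(b, c, a) + angle_at(c, a, b)) - math.pi

# The "octant" triangle has three right angles, so its area is
# 3*(pi/2) - pi = pi/2; eight such triangles tile the sphere (total 4*pi).
octant = spherical_triangle_area((1, 0, 0), (0, 1, 0), (0, 0, 1))
```

The octant check is a convenient sanity test, since eight copies must sum to the full sphere area of 4*pi.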
The computer code and data files described and made available on this web page are distributed under the GNU LGPL license.
SPHERE_DELAUNAY is available in a FORTRAN90 version and a MATLAB version.
Related Data and Programs:
GEOMETRY, a MATLAB library which computes various geometric quantities, including grids on spheres.
SPHERE_CVT, a MATLAB library which creates a mesh of well-separated points on a unit sphere by applying the Centroidal Voronoi Tessellation (CVT) iteration.
SPHERE_DESIGN_RULE, a FORTRAN90 library which returns point sets on the surface of the unit sphere, known as "designs", which can be useful for estimating integrals on the surface, among other uses.
SPHERE_GRID, a MATLAB library which provides a number of ways of generating grids of points, or of points and lines, or of points and lines and faces, over the unit sphere.
SPHERE_VORONOI, a MATLAB program which computes the Voronoi diagram of points on a sphere.
SPHERE_VORONOI_DISPLAY_OPENGL, a C++ program which displays a sphere and randomly selected generator points, and then gradually colors in points in the sphere that are closest to each generator.
SPHERE_XYZ_DISPLAY, a MATLAB program which reads XYZ information defining points in 3D, and displays a unit sphere and the points in the MATLAB graphics window.
SPHERE_XYZF_DISPLAY, a MATLAB program which reads XYZF information defining points and faces, and displays a unit sphere, the points, and the faces, in the MATLAB graphics window. This can be used,
for instance, to display Voronoi diagrams or Delaunay triangulations on the unit sphere.
STRIPACK, a FORTRAN90 library which computes the Delaunay triangulation or Voronoi diagram of points on a unit sphere.
STRIPACK_DELAUNAY, a FORTRAN90 program which reads an XYZ file of 3D points on the unit sphere, computes the Delaunay triangulation, and writes it to a file.
TOMS772, a FORTRAN77 library which is the original text of the STRIPACK program.
1. Jacob Goodman, Joseph ORourke, editors,
Handbook of Discrete and Computational Geometry,
Second Edition,
CRC/Chapman and Hall, 2004,
ISBN: 1-58488-301-4,
LC: QA167.H36.
2. Robert Renka,
Algorithm 772:
STRIPACK: Delaunay Triangulation and Voronoi Diagram on the Surface of a Sphere,
ACM Transactions on Mathematical Software,
Volume 23, Number 3, September 1997, pages 416-434.
Source Code:
Examples and Tests:
Last revised on 22 May 2012. | {"url":"http://people.sc.fsu.edu/~jburkardt/m_src/sphere_delaunay/sphere_delaunay.html","timestamp":"2014-04-18T08:02:24Z","content_type":null,"content_length":"11700","record_id":"<urn:uuid:bb726e43-11df-4061-96fe-2166fa2c721b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
The relations between conservative part and conservativity
up vote 3 down vote favorite
I revised the question. In smooth ergodic theory, a diffeomorphism is said to be conservative (I) if it preserves the Lebesgue measure. So for some of us, conservativity is just short for volume preservation.
On the other hand, we can define the conservative part $C_f$ for general measure-class preserving maps (see below). We can also say $f$ is conservative (II) if $C_f=M$.
My question is:
• given a smooth map $f:M\to M$, when could we upgrade from conservative (II) to conservativity (I) (up to a change of Riemannian metric, or to some measure $\mu\sim m$)?
• More generally, when could the restriction $f|_{C_f}$ be conservative (I) (assuming $m(C_f)>0$)?
Let $(X,\mu)$ be a standard measure space and $f:X\to X$ be an isomorphism under which $\mu$ is quasi-invariant. That is, $f^\ast\mu\ll \mu$ and $\mu\ll f^\ast\mu$. A measurable set $E$ is said to be
wandering if all $f^nE$, $n\in\mathbb{Z}$ are mutually disjoint.
(We may call it topologically wandering if $E$ is an open subset. So we generalize the classical definition of wandering.)
It has been proved that there exists a maximal wandering set $W$ (up to a $\mu$-null set). Then the dissipative part $D_f$ of $(X,f,\mu)$ is $D_f=\bigsqcup_{\mathbb{Z}}f^nW$. Then $C_f=X\backslash
D_f$ is called the conservative part of $(X,f,\mu)$. The induced partition $\lbrace C_f,D_f\rbrace$ is called Hopf decomposition (named by Halmos?)
For example, the dissipative part is trivial if $\mu$ is a probability measure preserved by $f$.
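To see the dichotomy concretely, here is a small numerical illustration (my own, informal, and outside the measure-theoretic setup above): the doubling map on the circle preserves Lebesgue measure, so it is conservative and orbits keep re-entering any interval of positive measure, while the translation x -> x + 1 on the real line is totally dissipative, since every bounded interval is a wandering set.

```python
def first_return(f, x0, lo, hi, max_steps=60):
    """Smallest n >= 1 with f^n(x0) in [lo, hi], or None if none is found."""
    x = x0
    for n in range(1, max_steps + 1):
        x = f(x)
        if lo <= x <= hi:
            return n
    return None

doubling = lambda x: (2.0 * x) % 1.0  # preserves Lebesgue measure on the circle: conservative
shift = lambda x: x + 1.0             # translation on the line: totally dissipative

print(first_return(doubling, 0.07, 0.0, 0.1))  # a finite return time (Poincare recurrence)
print(first_return(shift, 0.07, 0.0, 0.1))     # None: every bounded interval wanders
```

For the doubling map the orbit of 0.07 re-enters [0, 0.1] after finitely many steps, exactly as Poincaré recurrence predicts for a finite invariant measure; for the shift it never comes back.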
Observation: by introducing an artificial measure $\nu=\sum_{\mathbb{Z}}f^n(\mu|_W)$, the map can be made $\nu$-preserving on the dissipative part $D_f$.
• What about the conservative part? Could we make it measure-preserving with respect to some measure?
• More specifically, let $M$ be a closed manifold, $f:M\to M$ be a smooth diffeomorphism (say $C^\infty$ if necessary), and $m$ be the normalized Lebesgue measure (automatically quasi-invariant).
Assume $m(C_f)>0$. When could we find some $\mu\sim m|_{C_f}$ that is preserved by $f$?
Thank you!
ds.dynamical-systems ergodic-theory measure-theory
1 Answer
Several comments
(1) There is no need to require invariance or finiteness of the measure in order to define the Hopf decomposition - it makes sense for any quasi-invariant measure.
(2) There is no need to evoke metric spaces, homeomorphisms, manifolds, etc. The Hopf decomposition is defined entirely in the measure category.
(3) There is no reason for existence of an equivalent invariant measure on the conservative part. There are ergodic actions (of so-called type III), for which there is no equivalent
invariant measure.
Hi! In fact, your third comment is the reason why I require much stronger regularity in my question: do there exist examples of type III in the category of smooth diffeomorphisms on closed manifolds? That is, in the class of special $\mathbb{Z}$-actions on special spaces. – Pengfei Apr 3 '13 at 13:52
I did forget to make the quasi-invariance assumption. I will edit the question. I am aware of the existence of the general definition and the classification; I am just not that familiar with the general scheme... I wonder whether type III can happen in my daily life (that is, for diffeomorphisms) :) – Pengfei Apr 3 '13 at 13:57
I see. I wouldn't immediately know any type III examples for smooth $\mathbb Z$ actions. The ones I know are either $\mathbb Z$ actions on pretty abstract spaces (e.g., shift on
product spaces) or smooth actions of much bigger groups (e.g., boundary actions of Fuchsian or Kleinian groups). – R W Apr 3 '13 at 17:01
@r-w You might have noticed that I changed the question a little bit. By the way, could you give another answer elaborating on some of these examples? – Pengfei Apr 4 '13 at 15:06
The basics of FPGA mathematics | EE Times
Help with a simple Sin proof
November 18th 2010, 04:06 AM #1
Junior Member
Nov 2010
In any triangle, sin(A + B) = sin C.
State true or false and explain why.
The angles A, B and C are alpha, beta and gamma; I just don't know how to type those here, so I used A, B and C.
I tried to use an addition formula for sin and expand A+B into sines and cosines, but I don't think that is right. The teacher wrote on the paper A+B = 180-C, which makes sense, but there must be more than just that. If someone could get me going in the right direction I would appreciate it.
Could you try to show that sin(180-C) = sin(C)? If that's true, then you're pretty much done, right? If it's false, then the original claim is false.
Thanks for the quick reply. So the statement is true. At first I thought that angles A and B would have to be acute to add up to angle C, but that does not seem to be the case. You can take the sine of the sum of any two angles, as long as the third angle is the supplement. Why is this? Does it have to do with the unit circle and sine being positive from 0 to 180? Also, they want me to explain why. How do I put it into words?
Well, show me your proof that sin(180-C) = sin(C). How would you go about showing this?
I am not sure I understand what a proof is. Can you just put angles in for C and if it is true that is the proof? If so
sin(180-90) = sin(90)
sin(90) = sin(90)
1 = 1
sin(180-30) = sin(30)
sin(150) = sin(30)
1/2 = 1/2
maybe add for all angles from 0-180
No, you cannot "prove by example", which is what you're proposing. You must prove in the abstract. Examples are useful in proofs only insofar as they make a proposition believable, but you can
never prove by example (at least, not in this case. There is the proof method of "proof by cases", but that's only valid if you can enumerate every single example and prove all of them). You must
show that sin(180-C) = sin(C) for all C. Considering that C can be any real number, you're going to be at it a very long time if you try to prove by example.
Instead, I would recommend that you look at some trig identities, especially sum and difference identities. What ideas does that give you?
sin(180-C) = sinC
sin180cosC - cos180sinC = sinC
0(cosC) - (-1)sinC = sinC
-(-sinC) = sinC
sinC = sinC
Any better?
Works for me.
I agree with dwsmith. I'd say you're pretty much done. All you have to do, really, is preface your post # 7 with three lines:
A + B + C = 180, which implies
A + B = 180 - C, which implies
sin(A+B) = sin (180-C),
and then you insert your Post # 7 here. Got it?
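As a numerical spot check (which, as Ackbeet stresses above, illustrates the identity but cannot prove it), one can verify sin(A + B) = sin C on many random triangles:

```python
import math
import random

random.seed(0)
for _ in range(1000):
    # random triangle: A + B + C = 180 (degrees), all angles positive
    A = random.uniform(1.0, 178.0)
    B = random.uniform(1.0, 179.0 - A)
    C = 180.0 - A - B
    assert math.isclose(math.sin(math.radians(A + B)),
                        math.sin(math.radians(C)), abs_tol=1e-12)
print("sin(A + B) = sin(C) held on 1000 random triangles")
```

Passing a thousand cases builds confidence, but only the supplement argument sin(180 - C) = sin C proves it for all triangles.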
Thanks Ackbeet that helped a lot. There are two others that I have to do. One I think I have figured out. I may have to create a thread on the other. Thanks again for the explanation on proofs.
One of my teachers at school was very pedantic about every small thing, especially the degree signs....
So 180 is in degrees?
AsZ is right. Don't write so that people can understand you. Write so they can't misunderstand you.
(Replies #2 through #12 were posted between November 18 and November 22, 2010.)
Input parameters
Next: Additional Notes Up: DMPLSTSQR Previous: Files
iscrn - a switch to plot the parameter adjustments and updated
velocity model to the screen (default: 1)
dmpfct - an overall damping factor to control the trade-off
between resolution and variance (default: 1.0)
velunc - an estimate of the uncertainty (square root of the
variance) of the model velocity values (km/s) (default:
0.1; however, if velunc is equal to zero, the uncertainties
listed in the file i.out are used)
bndunc - an estimate of the uncertainty (square root of the
variance) of the depth of model boundary values (km)
(default: 0.1; however, if bndunc is equal to zero, the
uncertainties listed in the file i.out are used)
xmax - maximum distance (km) of velocity model (default: none)
Ingo Pecher
Sat Mar 7 19:13:54 EST 1998 | {"url":"http://pubs.usgs.gov/of/2004/1426/rayinvr/node21.html","timestamp":"2014-04-20T21:54:42Z","content_type":null,"content_length":"2389","record_id":"<urn:uuid:904b8368-f8b5-450b-971c-7a2eedf5eee7>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00263-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - Re: Some important demonstrations on negative numbers
Date: Nov 28, 2012 7:00 AM
Author: Robert Hansen
Subject: Re: Some important demonstrations on negative numbers
I am all for helping students make sense of these properties that at first seem arbitrary. And I do it in ways similar to yours, except, I am more clear regarding the distinction between definitions and axioms versus the rest that "follows from". You don't "prove" axioms (as you often seem to be doing) but at the same time you certainly must show the student that they are more than just arbitrary decisions. I generally show how these choices (i.e. the definition of exponents) make sense because other choices would prove to be inconsistent down the road and lead to contradictions. The instinctive sense of the logical structure and layering in mathematics is reachable by students in algebra, at least by algebra 2.
Bob Hansen
On Nov 27, 2012, at 5:20 PM, Peter Duveen <pduveen@yahoo.com> wrote:
> Bob, I usually give similar demonstrations for exponents. That is, I begin with the self-evident properties of exponents, and then use them and extend them to interpret negative exponents, fractional exponents, raising to the zero power, and raising to the first power, all of which are, in my opinion, not self evident. I'm not so interested in conventions as I am the demonstration of the properties of exponents extended to all reals, and ways to interpret unfamiliar concepts in terms of the familiar. | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7929291","timestamp":"2014-04-16T14:04:21Z","content_type":null,"content_length":"2377","record_id":"<urn:uuid:7a28567e-4050-4f02-92e9-eebfba393864>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00647-ip-10-147-4-33.ec2.internal.warc.gz"} |
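The kind of consistency Duveen and Hansen describe can be spot-checked numerically. A small Python illustration (my own, not from the thread) of how the product rule forces the extended definitions:

```python
import math

a = 2.0
# Taking the product rule a^m * a^n = a^(m+n) as the governing property,
# the extended definitions are forced rather than arbitrary:
assert a ** 0 == 1.0              # a^n * a^0 = a^n forces a^0 = 1
assert a ** -3 == 1.0 / a ** 3    # a^3 * a^-3 = a^0 = 1 forces a^-n = 1/a^n
assert math.isclose(a ** 0.5 * a ** 0.5, a)      # a^(1/2) must square to a
assert math.isclose((a ** 3) ** (1.0 / 3.0), a)  # consistent with (a^m)^n = a^(mn)
print("extended exponent definitions are consistent with the product rule")
```

Any other choice for a^0 or a^-n would contradict the product rule somewhere down the road, which is exactly the "leads to contradictions" argument above.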
duplicate detection problem
Restated from stackoverflow:
• array a[1:N] consisting of integer elements in the range 1:N
Is there a way to detect whether the array is a permutation (no duplicates) or whether it has any duplicate elements, in O(N) steps and O(1) space without modifying the original array?
clarification: the array takes space of size N but is a given input, you are allowed a fixed amount of additional space to use.
The stackoverflow crowd has dithered around enough to make me think this is nontrivial. I did find a few papers on it citing a problem originally stated by Berlekamp and Buhler (see "The Duplicate
Detection Problem", S. Kamal Abdali, 2003)
permutations algorithms
What is your model of computation? Such an array (and in particular such a permutation) takes $\mathcal{O}(N\log N)$ space to store, so it would take that much time just to read it. – Noah Stein
May 20 '10 at 15:09
So you want this to be faster than sorting (which can be done in $O(N\mathrm{log}(N))$ steps if I am not mistaken)? – Roland Bacher May 20 '10 at 15:18
Sorting is not an O(n log n) solution, because it uses much more than O(1) space and/or modifies the array. And this input could be sorted in O(n) time anyway because it's all small integers. –
David Eppstein May 20 '10 at 17:26
By small I meant polynomially bounded in the input size. Integers in the range 1..n can be sorted by bucket sort in linear time and integers in the range 1..polynomial can be sorted by radix sort
in linear time. It's not a question of what's realistically large, it's a question of whether you allow your inputs to be used as array indexes or you artificially pretend your computer can only
access them via pairwise comparisons. – David Eppstein May 20 '10 at 20:50
Stupid observation: with only O(1) space, you can't actually address the whole array. So you probably want something like "O(1) space, but pointers count as constant space." – David Speyer Jun 2
'10 at 18:27
5 Answers
It's at least possible to test whether the input is a permutation with a randomized algorithm that uses O(1) space, always answers "yes" when it is a permutation, and answers "yes"
incorrectly when it is not a permutation only with very small probability.
Simply pick a hash function $h(x)$, compute $\sum_{i=1}^n h(i)$, compute $\sum_{i=1}^n h(a[i])$, and compare the two sums.
Ok, some care needs to be used in defining and choosing among an appropriate family of hash functions if you want a rigorous solution (and I suppose we do want one, since we're on
up vote 15 mathoverflow not stackoverflow). Probably the simplest way is just to fill another array $H$ with random numbers and let $h(x)=H[x]$, but that is unacceptable because it uses too much
down vote space. I'll leave this part as unsolved and state this as a partial answer rather than claiming full rigor at this point.
See also my paper Space-Efficient Straggler Identification in Round-Trip Data Streams via Newton's Identitities and Invertible Bloom Filters which solves a more general problem (if
there are O(1) duplicates, say which ones are duplicated, using only O(1) space) with the same lacuna in how the hash functions are defined. It also contains a proof that an algorithm
that makes only a single pass over the data cannot solve the problem exactly and deterministically, but of course that doesn't apply to algorithms with random access to the input array.
Why use a hash function when you can use the identity and have correct output every time? Just compute $\sum a[i]$ and compare with $n(n+1)/2$. – Dror Speiser May 20 '10 at 18:12
@Dror: Because sum and compare with n(n+1)/2 does not correctly check for duplicates, e.g. a valid permutation 1,2,3,4,5,6...N vs. 2,2,2,4,5,6,...,N – Jason S May 20 '10 at 18:19
David: +1. I assume this is like primality testing where you can use multiple passes to increase probability? – Jason S May 20 '10 at 20:52
Yes, or just use a hash function with a bigger range. – David Eppstein May 20 '10 at 21:01
One hash function that takes $O(1)$ storage and obviously works is $h(x)=2^x\bmod p$ where $p$ is a large random prime. But that leads to an $O(n\log n)$ time algorithm because it
takes $O(\log n)$ time to evaluate $h(x)$ (for integer arguments in the range 1..n) in any reasonable model of machine arithmetic. I'm pretty sure that taking polynomials modulo a
prime doesn't work (there are specific inputs that will trick all low-degree polynomials), but that doesn't exhaust the possibilities. – David Eppstein May 23 '10 at 0:01
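A concrete Python sketch of the $h(x)=2^x \bmod p$ test suggested in the comment above (my own illustration; the fixed Mersenne prime here stands in for the random prime the scheme actually calls for):

```python
def looks_like_permutation(a, p=(1 << 61) - 1):
    """One-sided randomized test with h(x) = 2^x mod p (p prime): always
    True when a is a permutation of 1..n; False with high probability
    otherwise.  O(n log n) time, O(1) words of extra space."""
    n = len(a)
    expected = sum(pow(2, i, p) for i in range(1, n + 1)) % p
    observed = sum(pow(2, x, p) for x in a) % p
    return observed == expected

print(looks_like_permutation([3, 1, 4, 2, 5]))  # True
# The plain-sum test from the comments would wrongly accept this array
# (2+2+2+4+5 = 15 = 1+2+3+4+5), but the hash-sum test rejects it:
print(looks_like_permutation([2, 2, 2, 4, 5]))  # False
```

Note the O(n log n) running time comes from the modular exponentiations, matching the comment's observation.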
In the complexity theory literature there is a related problem known as the element distinctness problem: given a list of $n$ numbers, determine if they are all distinct.
Of course this problem isn't quite the same; one might expect that if you assume all numbers are in the range {$1,\ldots,n$} that you might solve the problem more efficiently.
The wikipedia article http://en.wikipedia.org/wiki/Element_distinctness_problem mentions the linear time bucket sort solution for the special case of $\{1,\ldots,n\}$. The purpose of my answer is to let you know of a common name for the problem so that maybe your web searches will fare better. Much is known about element distinctness and I am sure that your special case
has been studied to death.
thank you -- seems like many of the problems in math / CS have already been solved so much of the problem is just figuring out what they are called – Jason S May 21 '10 at 0:38
This is still an open and interesting problem. The best deterministic algorithm that I know of takes $O(n \log n)$ time and $O(\log n)$ words of space by Munro, Fich and Poblete in Permuting
in place. This paper doesn't explicitly mention the problem of detecting if there is a duplicate but the method they develop for permuting in place is directly applicable. It is still
possible that there is a true linear time and $O(1)$ words of space solution (either randomised or deterministic).
If you simply increase the alphabet size from $n$ the situation changes drastically. Even if you change it to $2n$ the complexity of finding if there is a duplicate is unknown and in
particular no near linear time solution is known for small space. The most obvious randomised approach is to hash the elements down to the range $[1,\dots,n]$. You are then left with the
problem of trying to distinguish real duplicates from ones created by hash collisions. With full independence it seems you can most likely do this in something like $O(n^{3/2})$ time but I am
not sure if this has ever been formally analyzed in published work. However, we can't actually use a hash function with full independence without also using linear space so the problem as
before is to show that a hash function family whose members can be represented in small space and which has the desired properties actually exists.

For even larger alphabets of size $n^2$ there is an existing lower bound for small space algorithms given in Time-space trade-off lower bounds for randomized computation of decision problems.
With space $O(\log n)$ bits (or $O(1)$ words) it simplifies to approximately $\Omega(n \sqrt{\log n/\log{\log{n}}})$. This means that no linear time solution is possible in this case.
COMMENT: This should be a comment to David Eppstein's answer but I don't have the points for that. The function $h(x) = 2^x \bmod p$ with $p$ a prime with $O(\log n)$ bits is very
interesting. Although it is clear that it takes $\Theta(\log n)$ time to evaluate the hash function once (by repeated squaring, assuming constant time operations on words), is it obvious that
it can't be done faster on average when evaluating at $n$ points by some clever method? Consider, for example, an array with the elements in increasing order. In this case it takes only $O(n)
$ time to compute all the hash values.
EDIT: This turns out to be not an answer for practical N as well. Given two numbers and the right N, one can play games with the modulo pattern of the two numbers and create two other
numbers such that their contribution to the modulo counts replicates that of the two given numbers. Thus a practical solution based on modular arithmetic may need space O(ln N) and a
multiplicative time factor of O(ln N). Oops. END EDIT
For arbitrary N, this is not an answer, but for practical N, say N < 2^64, one approach is to consider the residues mod p of the array entries for primes p from 2 up to a sufficient limit,
say 60.
If the counts match the expected distribution, then (I think) the list is a permutation if no element lies outside the range [1,P], where P > 2^64 and is the product of the primes from 2 up
to 60. In general, the algorithm uses space Q * B and time O( Pi(Q)*N ), where Q is the largest prime used, B is the size of N (or of an array element), and Pi(Q) is the number of primes
less than or equal to Q. Additionally, pi(Q) is significantly less than ln(N) and Q is not much larger (with respect to N) than pi(Q). For practical N, this approach should suffice.
Gerhard "Ask Me About System Design" Paseman, 2010.05.20
Basically David's approach: we fix $M$ = number of bits storage, and compute the indicator $h = XOR(\operatorname{hash}_M(a[i]))$ where $\operatorname{hash}_M$ is a hash function to $M$
bits (eg MD5 masked to M bits). We decide that it is a permutation without repetitions by comparing with the same indicator for the ordered array (1..N). This is order N. And there is a
probability of error which should be around $1/2^M$... if I'm not mistaken.
Oxford, GA Math Tutor
Find an Oxford, GA Math Tutor
...I have over 17 years' experience working with students in a variety of subjects including math, reading and writing! I am currently teaching Math 3 (advanced algebra) in a high school. I make
the learning interesting and easy to remember.
22 Subjects: including geometry, ACT Math, grammar, algebra 2
...I have chosen to leave the classroom to tutor from home so that I can be a stay at home mom. I can provide references upon request. I look forward to hearing from you.
10 Subjects: including linear algebra, logic, algebra 1, algebra 2
I have 10 years experience as an elementary math teacher. My experience ranges from teaching 5th grade to 7th grade. I have also tutored in Algebra.
3 Subjects: including algebra 1, prealgebra, study skills
...If you believe my instructional philosophy may be well aligned, please do not hesitate to contact me to inquire about tutoring in areas that may not be listed above. I have a PhD in
Educational psychology and my work has focused on elementary-aged children. Additionally, I am a former kindergarten teacher who has worked with students of this age for many years.
30 Subjects: including algebra 1, reading, prealgebra, English
...I have a Bachelor of Fine Arts degree in Theatre Arts, which prepared me well for public speaking and performing. I have also directed a number of plays, and have assisted numerous children in
preparing public speaking assignments for school and 4-H projects. I love to watch a child's self-esteem rise as she or he learns to speak publicly.
21 Subjects: including algebra 1, prealgebra, reading, English
Books By Faculty
Our faculty members frequently publish textbooks which obtain national recognition. Listed below are a selection of these books.
The Cosmic Cocktail: Three Parts Dark Matter by Professor Katherine Freese. Blending cutting-edge science with her own behind-the-scenes insights as a leading researcher in
the field, acclaimed theoretical physicist Katherine Freese recounts the hunt for dark matter, from the discoveries of visionary scientists like Fritz Zwicky--the Swiss
astronomer who coined the term "dark matter" in 1933--to the deluge of data today from underground laboratories, satellites in space, and the Large Hadron Collider.
Theorists contend that dark matter consists of fundamental particles known as WIMPs, or weakly interacting massive particles. Billions of them pass through our bodies every
second without us even realizing it, yet their gravitational pull is capable of whirling stars and gas at breakneck speeds around the centers of galaxies, and bending light
from distant bright objects. Freese describes the larger-than-life characters and clashing personalities behind the race to identify these elusive particles. (Amazon.com)
Author: Katherine Freese
Published by Princeton University Press (May 4, 2014)
Hardcover, 272 pages
ISBN: 978-0691153353
Amazon Page
Supersymmetry and Beyond: From the Higgs Boson to the New Physics by Professor Gordon Kane. This book tells the epic story of the quest to uncover a fully unified theory of
physics. He introduces the theory of supersymmetry, which implies that each of the fundamental particles has a "superpartner" that can be detected at energies and
intensities only now being achieved in the giant accelerators. If the theory is correct, these superpartners will also help solve many of the puzzles of modern physics -
such as the existence of the Higgs boson - as well as one of the biggest mysteries in cosmology: the notorious "dark matter" of the universe. This book has been updated to
reflect recent discoveries at the Large Hadron Collider. (Amazon.com)
Author: Gordon Kane
Published by Basic Books; Revised Edition (May 14, 2013)
Paperback, 216 pages
ISBN: 978-0465082971
Amazon Page
Equilibrium Statistical Physics With Computer Simulations in Python by Professor Leonard Sander. This book is intended primarily as a graduate textbook for students of
Physics. Students in other field such as Biophysics, Materials Science, Chemical Engineering, or Chemistry may also find much to interest them. This book can also serve as a
reference for interested students and researchers.
Author: Leonard M. Sander
Published by CreateSpace Independent Publishing Platform; 1st Edition (July 27, 2013)
Paperback, 334 pages
ISBN: 978-1491066515
Amazon Page
Table of Contents
Principles of Laser Spectroscopy and Quantum Optics by Professor Paul R. Berman and Vladimir S. Malinovsky. This book is a graduate student-level text book for students
studying the interaction of optical fields with atoms. The book provides a rigorous introduction to the prototypical problems of radiation fields interacting with two- and
three-level atomic systems. It examines the interaction of radiation with both atomic vapors and condensed matter systems, the density matrix and the Bloch vector, and
applications involving linear absorption and saturation spectroscopy. Other topics include hole burning, dark states, slow light, and coherent transient spectroscopy, as
well as atom optics and atom interferometry. In the second half of the text, the authors consider applications in which the radiation field is quantized. Topics include
spontaneous decay, optical pumping, sub-Doppler laser cooling, the Heisenberg equations of motion for atomic and field operators, and light scattering by atoms in both weak
and strong external fields. The concluding chapter offers methods for creating entangled and spin-squeezed states of matter. (Amazon.com)
Authors: Paul R. Berman & Vladimir S. Malinovsky
Published by Princeton University Press (January 2, 2011)
Hardcover, 544 pages
ISBN: 978-0691140568
Amazon Page
PDF of Book
Networks: An Introduction by Professor Mark E. J. Newman. The study of networks is broadly interdisciplinary and important developments have occurred in many fields,
including mathematics, physics, computer and information sciences, biology, and the social sciences. This book brings together for the first time the most important
breakthroughs in each of these fields and presents them in a coherent fashion, highlighting the strong interconnections between work in different areas. (Amazon.com)
Author: Mark E. J. Newman
Published by Oxford University Press, USA; 1st Edition (May 20, 2010)
Hardcover, 720 pages
ISBN: 978-0199206650
Amazon Page
Innovation Was Not Enough: A History of the Midwestern Universities Research Association (MURA) by Professor Lawrence Jones. This book presents a history of the Midwestern
Universities Research Association (MURA) during its lifetime from the early 1950s to the late 1960s. MURA was responsible for a number of important contributions to the
science of particle accelerators, including the invention of fixed field alternating gradient accelerators (FFAG), as well as contributions to accelerator orbit theory,
radio frequency acceleration techniques, colliding beams technology, orbit instabilities, computation methods, and designs of accelerator magnets and linear accelerator
cavities. A number of students were trained by MURA in accelerator techniques, and went on to important posts where they made further contributions to the field. The authors
were all members of the MURA staff and themselves made many contributions to the field. No other such history exists, and there are relatively few publications devoted to
the history of particle accelerators. (Amazon.com)
Authors: Lawrence Jones, Frederick Mills, Andrew Sessler, Keith Symon, and Donald Young
Published by World Scientific Publishing Company (October 29, 2009)
Hardcover, 268 pages
ISBN: 978-9812832832
Amazon Page
First Chapter
Advanced Condensed Matter Physics by Professor Leonard Sander. This text includes coverage of important topics that are not commonly featured in other textbooks on condensed
matter physics; these include surfaces, the quantum Hall effect, and superfluidity. The author avoids complex formalism, such as Green's functions, which can obscure the
underlying physics, and instead emphasizes fundamental physical reasoning. This textbook is ideal for physics graduates as well as students in chemistry and engineering; it
can equally serve as a reference for research students in condensed matter physics. Engineering students, in particular, will find the treatment of the fundamentals of
semiconductor devices and the optics of solids of particular interest. (Amazon.com)
Author: Leonard M. Sander
Published by Cambridge University Press; 1st Edition (March 16, 2009)
Hardback, 286 pages
ISBN: 978-0521872904
Amazon Page
PDF of Book
Perspectives on LHC Physics by Professor Gordon Kane and Professor Aaron Pierce. This book provides an overview of the techniques that will be crucial for finding new
physics at the LHC, as well as perspectives on the importance and implications of the discoveries. Among the accomplished contributors to this book are leaders and
visionaries in the field of particle physics beyond the Standard Model, including two Nobel Laureates (Steven Weinberg and Frank Wilczek), and presumably some future Nobel
Laureates, plus top younger theorists and experimenters. With its blend of popular and technical contents, the book will have wide appeal, not only to physical scientists
but also to those in related fields. (Amazon.com)
Authors: Gordon Kane & Aaron Pierce
Published by World Scientific Publishing Company; 1st Edition (June 27, 2008)
Paperback, 352 pages
ISBN: 978-1403986115
Amazon Page
Encyclopedia of Modern Optics by Professor Duncan G. Steel and Bob D. Guenther. The encyclopedia provides valuable reference material for those working in the field who wish
to know more about a topic outside their area of expertise, as well as providing an authoritative reference source for students and researchers. Undergraduate students
should find it a useful source of material, as will teachers and lecturers. It will also be useful at the postgraduate level for summarizing a broad range of theoretical
topics, for practical advice on research techniques and for insights into new ways of approaching research problems. (Amazon.com)
Authors: Duncan G. Steel & Bob D. Guenther
Published by Elsevier; 1st Edition (December 17, 2004)
Hardcover, 5-volume set, 2400 pages
ISBN: 978-0122276002
Amazon Page
Origins of Existence by Professor Fred C. Adams. Professor Adams gives us a stunning new perspective on how the laws of physics created our non-random universe and life
itself. This handful of laws resulted in the big bang, which led to stars, galaxies and then to solar systems with planets such as Earth. That process was absolutely
necessary for all the tiny chemical structures and vast landscapes required for life to emerge. One of Adams's amazing claims is that organisms did not evolve in a primordial soup in a pond on the Earth's surface, but rather in one of many such structures in the vast cosmic landscape. In seven chronological chapters, Adams takes the reader from the
general subjects of physics and the universe to the specific origins of life on earth - showing clearly how energy flowed, exploded and was harnessed in replicating
organisms. He reveals how the evolution of the universe followed a clear path toward the emergence of life. His insight provides an answer to humankind's deepest anxiety -
we are almost certainly not alone in the universe - and throws a whole new light on our identity and beliefs. Life wasn't a lucky break, but the result of physics.
Author: Fred C. Adams
Published by Free Press; 1st Edition (October 15, 2002)
Hardcover, 272 pages
ISBN: 978-0743212625
Amazon Page
Probability and Statistics in Experimental Physics by Professor Byron P. Roe. This book is a practical introduction to the use of probability and statistics in experimental
physics for graduate students and advanced undergraduates. It is intended as a practical guide, not as a comprehensive text in probability and statistics. The emphasis is on
applications and understanding, on theorems and techniques that are actually used in experimental physics. Proofs of theorems are generally omitted unless they contribute to
the intuition in understanding and applying the theorem. The problems, some with worked solutions, introduce the student to the use of computers; occasional reference is
made to some of the Fortran routines available in the CERN library, but other systems, such as Maple, will also be useful. Topics covered include: basic concepts and
definitions; general results, independent of specific distributions; discrete distributions; the normal distribution and other continuous distributions; generating and
characteristic functions; the Monte Carlo method and computer simulations; multi-dimensional distributions; the central limit theorem; inverse probability and confidence
limits; estimation methods; curve fitting, robustness estimates, and likelihood ratios; interpolating functions and unfolding problems; fitting data with constraints; robust
estimation methods. (books.google.com)
Author: Byron P. Roe
Published by Springer; 2nd Edition (July 1, 2001)
Hardcover, 252 pages
ISBN: 978-0387951638
Amazon Page
Table of Contents
The Five Ages of the Universe by Professor Fred C. Adams and Greg Laughlin. In this book, Adams and Laughlin provide a detailed description of the physical processes that
guided our past and that will shape our future. (Amazon.com)
Authors: Fred C. Adams and Greg Laughlin
Published by Free Press (June 19, 2000)
Paperback, 288 pages
ISBN: 978-0684865768
Amazon Page
The Particle Garden by Professor Kane gives the clearest survey of particle physics, including the theory, its experimental foundations, its relations to cosmology and
astrophysics, and its future. Known as an excellent expositor of physics, Kane has marshaled his research and teaching experience to make this daunting subject
understandable to all readers. (Amazon.com)
Author: Gordon Kane
Published by Basic Books; Unstated Edition (July 2, 1996)
Paperback, 240 pages
ISBN: 978-0201408263
Amazon Page
Modern Elementary Particle Physics by Professor Kane. Professor Kane explains the modern standard model and the gauge theory of the interactions of quarks and leptons via
exchange of photons, W and Z bosons, and gluons. The treatment avoids technical details, but fully explains the basic physics involved. Open questions and directions of
future research are discussed. (Amazon.com)
Author: Gordon Kane
Published by Basic Books; Unstated Edition (July 2, 1996)
Paperback, 240 pages
ISBN: 978-0201408263
Amazon Page
Particle Physics at the New Millennium by Professor Byron Roe. Intended for beginning graduate students or advanced undergraduates, this text provides a thorough
introduction to the phenomena of high-energy physics and the Standard Model of elementary particles. It should thus provide a sufficient introduction to the field for
experimenters, as well as sufficient background for theorists to continue with advanced courses on field theory. The text develops the Standard Model from the bottom up,
showing the experimental evidence for each theoretical assumption and emphasizing the most recent results. It includes thorough discussions of electromagnetic interactions
(of interest in particle detection), magnetic monopoles, and extensions of the Standard Model. (fishpond.com)
Author: Byron Roe
Published by Springer-Verlag New York Inc. (1996)
ISBN: 0387946152
fishpond.com Page
binspikes Bin spikes at a specified sampling frequency, i.e. with bins of width 1/sampling
coherencypt Multi-taper coherency - point process times
coherencysegpt Multi-taper coherency computed by segmenting two univariate point processes into chunks
cohgrampt Multi-taper time-frequency coherence - two point processes given as times
cohmatrixpt Multi-taper coherency matrix - point process times
countsig Give the program two spike data sets and one
createdatamatpt Helper function to create an event triggered matrix from a single
extractdatapt Extract segements of spike times between t(1) and t(2)
isi Calculate the inter-spike-interval histogram
minmaxsptimes Find the minimum and maximum of the spike times in each channel
mtdspecgrampt Multi-taper derivative time-frequency spectrum - point process times
mtdspectrumpt Multi-taper spectral derivative - point process times
mtfftpt Multi-taper fourier transform for point process given as times
mtspecgrampt Multi-taper time-frequency spectrum - point process times
mtspecgramtrigpt Multi-taper event triggered time-frequency spectrum - point process times
mtspectrumpt Multi-taper spectrum - point process times
mtspectrumsegpt Multi-taper segmented spectrum for a univariate binned point process
mtspectrumtrigpt Multi-taper time-frequency spectrum - point process times
padNaN Creates a padded data matrix from input structural array of spike times
psth function to plot trial averaged rate smoothed by
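For orientation, the first two operations in this index (binning spike times at a given sampling rate and computing inter-spike intervals) are simple enough to sketch outside MATLAB. The Python below is my own illustration of what such routines compute, not Chronux code, and the function names are my own:

```python
def bin_spikes(spike_times, sampling, t_max):
    """Count spikes in bins of width 1/sampling over [0, t_max],
    in the spirit of a binspikes-style routine."""
    width = 1.0 / sampling
    counts = [0] * (int(t_max / width) + 1)
    for t in spike_times:
        if 0 <= t <= t_max:
            counts[int(t / width)] += 1
    return counts

def isi(spike_times):
    """Inter-spike intervals of a list of spike times (isi-style)."""
    ts = sorted(spike_times)
    return [b - a for a, b in zip(ts, ts[1:])]

spikes = [0.01, 0.05, 0.06, 0.31, 0.33]              # seconds
print(bin_spikes(spikes, sampling=10.0, t_max=0.4))  # -> [3, 0, 0, 2, 0]
print(len(isi(spikes)))                              # -> 4 intervals
```

A histogram of the `isi` output is exactly the inter-spike-interval histogram mentioned above; the real toolbox routines additionally handle multichannel structural arrays and trial averaging.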
Generated on Fri 28-Sep-2012 12:34:27 by | {"url":"http://chronux.org/Documentation/chronux/spectral_analysis/pointtimes/index.html","timestamp":"2014-04-18T06:35:39Z","content_type":null,"content_length":"5534","record_id":"<urn:uuid:ff421329-f5a2-453e-af03-0b67939df9ab>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00232-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Cone Complementarity Linearization Algorithm for Static Output-Feedback and Related Problems
Results 1 - 10 of 25
, 2007
Cited by 218 (15 self)
The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a
diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the
general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds
for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given
affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently
large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a
dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate
our results with numerical examples.
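The dictionary between cardinality and rank minimization that this abstract mentions can be illustrated in the vector special case, where the l1 norm plays the role the nuclear norm plays for matrices. The sketch below is my own toy example, not from the paper; the matrix, penalty weight, step size, and iteration count are arbitrary choices. It uses iterative soft-thresholding to minimize 0.5*||Ax - b||^2 + lam*||x||_1 over an underdetermined system:

```python
def ista_l1(A, b, lam=0.01, step=0.3, iters=5000):
    """Iterative soft-thresholding for: min 0.5*||A x - b||^2 + lam*||x||_1.
    The l1 penalty is the convex surrogate for cardinality, mirroring
    the nuclear norm's role as a surrogate for rank."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - b and gradient g = A^T r of the smooth part
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        v = [x[j] - step * g[j] for j in range(n)]
        # soft-threshold: shrink toward zero, snapping small entries to 0
        x = [max(abs(w) - step * lam, 0.0) * (1.0 if w >= 0 else -1.0) for w in v]
    return x

A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]   # 2 equations, 3 unknowns
b = [2.0, 2.0]          # generated by the sparse vector [0, 0, 2]
print(ista_l1(A, b))    # two coordinates exactly 0, third near 2
```

Among all solutions of Ax = b here, the sparsest one is [0, 0, 2], and the l1 relaxation finds it (up to a small shrinkage bias of lam/2 in the nonzero coordinate); this is the same phenomenon the abstract establishes for nuclear-norm recovery of low-rank matrices.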
- Proceedings of the IEEE, 2007
Cited by 114 (6 self)
Networked Control Systems (NCSs) are spatially distributed systems for which the communication between sensors, actuators, and controllers is supported by a shared communication network. In this
paper we review several recent results on estimation, analysis, and controller synthesis for NCSs. The results surveyed address channel limitations in terms of packet-rates, sampling, network delay
and packet dropouts. The results are presented in a tutorial fashion, comparing alternative methodologies. I.
- In American Control Conference, 2004
Cited by 26 (0 self)
Abstract - In this tutorial paper, we consider the problem of minimizing the rank of a matrix over a convex set. The Rank Minimization Problem (RMP) arises in diverse areas such as control, system identification, statistics and signal processing, and is known to be computationally NP-hard. We give an overview of the problem, its interpretations, applications, and solution methods. In particular, we focus on how convex optimization can be used to develop heuristic methods for this problem.
- SYSTEMS & CONTROL LETTERS, 2007
- in Proceedings American Control Conference, 2001
Cited by 8 (0 self)
This paper presents the rank minimization approach to solve general bilinear matrix inequality (BMI) problems. Due to the NP-hardness of BMI problems, no proposed
algorithm that globally solves general BMI problems is a polynomial-time algorithm. We present a local search algorithm based on the semidefinite programming (SDP) relaxation approach to indefinite
quadratic programming, which is analogous to the well-known relaxation method for a certain class of combinatorial problems. Instead of applying the branch and bound (BB) method for global search, a
linearization-based local search algorithm is employed to reduce the relaxation gap. Furthermore, a random search approach is introduced along with the deterministic approach. Four numerical
experiments are presented to show the search performance of the proposed approach.
- In Panos J. Antsaklis, Paulo Tabuada, Networked Embedded Sensing and Control, volume 331 of Lect. Notes in Contr. and Inform. Sci, 2006
Cited by 6 (1 self)
Summary. We propose a numerical procedure to design a linear output-feedback controller for a remote linear plant in which the loop is closed through a network. The controller stabilizes the plant in
the presence of delays, sampling, and packet dropouts in the (sensor) measurement and actuation channels. We consider two types of control units: anticipative and non-anticipative. In both cases the
closed-loop system with delays, sampling, and packet dropouts can be modeled as delay differential equations. Our method of designing the controller parameters is based on the Lyapunov-Krasovskii theorem and a linear cone complementarity algorithm. Numerical examples show that the proposed design method is significantly better than the existing ones.
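As a toy illustration of the kind of model such dropout analyses start from (my own sketch, not from the paper; the scalar plant, gain, and hold-last-input policy are arbitrary assumptions), one can simulate a control loop whose packets are dropped at random:

```python
import random

def simulate(a=1.2, k=1.0, drop_prob=0.3, steps=200, x0=1.0, seed=0):
    """Scalar plant x[t+1] = a*x[t] + u[t]. The control packet u = -k*x
    is delivered with probability 1 - drop_prob; on a dropout the
    actuator simply holds the last input it received."""
    rng = random.Random(seed)
    x, u = x0, 0.0
    for _ in range(steps):
        if rng.random() > drop_prob:  # packet made it through the network
            u = -k * x
        x = a * x + u
    return x

print(abs(simulate(drop_prob=0.0)))  # no drops: |x| contracts by a factor 0.2 each step
print(abs(simulate(drop_prob=0.5)))  # with drops, behavior depends on the realization
```

Sweeping `drop_prob` shows the loop degrading from clean geometric decay to possible divergence under long runs of consecutive drops, which is the kind of behavior the stability certificates in these papers must rule out.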
, 2004
Cited by 4 (3 self)
Abstract: We present an algorithm for the solution of static output feedback problems formulated as semidefinite programs with bilinear matrix inequality constraints and collected in the library
COMPleib. The algorithm, based on the generalized augmented Lagrangian technique, is implemented in the publicly available general purpose software PENBMI. Numerical results demonstrate the behavior
of the code.
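The "generalized augmented Lagrangian technique" named here can be illustrated on a toy equality-constrained problem. The sketch below is my own minimal illustration and is unrelated to PENBMI's actual algorithm or to BMI constraints; it alternates an inner unconstrained minimization with a multiplier update:

```python
def solve(rho=10.0, outer=15, inner=500, lr=0.01):
    """Augmented-Lagrangian loop for: min x^2 + y^2  s.t.  x + y = 1.
    Inner loop: gradient descent on L = f(x, y) + mu*h + (rho/2)*h^2
    with h = x + y - 1; outer loop: multiplier update mu <- mu + rho*h."""
    x = y = mu = 0.0
    for _ in range(outer):
        for _ in range(inner):
            h = x + y - 1.0
            gx = 2.0 * x + mu + rho * h
            gy = 2.0 * y + mu + rho * h
            x -= lr * gx
            y -= lr * gy
        mu += rho * (x + y - 1.0)
    return x, y, mu

x, y, mu = solve()
print(round(x, 3), round(y, 3), round(mu, 2))  # -> 0.5 0.5 -1.0
```

The iterates approach the constrained minimizer (1/2, 1/2) while the multiplier converges to its KKT value -1; the penalty parameter rho need not go to infinity, which is the practical advantage of the augmented Lagrangian over a pure quadratic penalty.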
"... An iterative LMI approach to H ∞ networked control with random communication delays ..."
Cited by 1 (0 self)
The static output feedback stabilization problem for linear and nonlinear (affine) systems is discussed. A novel necessary and sufficient condition for linear systems is proposed. For nonlinear
systems a sufficient condition is established and a (partial) converse is also discussed. The nonlinear formulation is used to derive a simple characterization of stabilizing static output feedback
control laws for linear systems in terms of the intersection of two convex sets and a (generally) non-convex set. This characterization is used to establish a series of simple obstructions to the
solvability of the problem for linear SISO systems. A fully worked out example completes the paper. 1 Introduction The static output feedback (SOF) stabilization problem is probably one of the best known puzzles in systems and control. The simple statement of the problem is as follows: find a static output feedback control such that the closed-loop system is asymptotically stable. This problem is important in its...
Is this a Julia set (and if so, for which function family is it the Julia set)?
Consider the function family given by $f_\lambda(z) = z - p_\lambda(z)/p_\lambda'(z)$ where $p_\lambda(z) = (z^2 - 1)(z - \lambda)$. Every attracting cycle and every rational neutral cycle of $f_\
lambda$ attracts the one critical point of $f_\lambda$, which is $\lambda/3$ (see this other MathOverflow question and Alexandre Eremenko's answer for context).
If the sequence of iterates, $f^n_\lambda(\lambda/3)$, converges to a fixed point of $f_\lambda$ then the fixed point is one of the roots of $p_\lambda$, that is $1$, $-1$, or $\lambda$. If we mark
the points $\lambda$ in $\mathbb{C}$ such that $f^n_\lambda(\lambda/3)$ converges to each of these fixed points then the result is an image that is reminiscent of a Newton basin fractal. Let $K$ be
the set of values $\lambda\in\mathbb{C}$ such that $f^n_\lambda(\lambda/3)$ converges to a fixed point of $f_\lambda$. My question is:
Is the border, $\partial K$, a Julia set, and if so then for what function family $g$ does $J(g) = \partial K$?
I suspect that the answer is "yes" and that $g$ is a family of iterates of a rational function. In the attached images, the very bright green points are the ones nearest to the border $\partial K$.
A note about the little Mandelbrot sets visible in two of the attached images: In these images the white points indicate parameters $\lambda$ such that $f^n_\lambda(\lambda/3)$ converges to a
rational neutral cycle of period greater than $1$. The white points indicate the bifurcation locus of $f_\lambda$ ($J(f_\lambda)$ is not continuously determined, in the sense of the Hausdorff metric,
by the parameter at these points). The white points appear to be composed of many scaled and rotated copies of the boundary of the Mandelbrot set and I believe they are contained by $\partial K$.
Perhaps that is a useful thing to know when searching for $g$.
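For concreteness, the parameter-plane classification described above can be sketched as follows (my own illustration of the procedure, not code from the images; the iteration budget and tolerance are arbitrary):

```python
def newton_step(z, lam):
    """f_lam(z) = z - p(z)/p'(z) for p(z) = (z**2 - 1)*(z - lam)."""
    p  = (z * z - 1) * (z - lam)
    dp = 3 * z * z - 2 * lam * z - 1
    return z - p / dp

def classify(lam, max_iter=200, tol=1e-9):
    """Iterate the free critical point lam/3 and report which root of p_lam
    (if any) it converges to: 0 -> 1, 1 -> -1, 2 -> lam, None otherwise."""
    roots = [1.0, -1.0, lam]
    z = lam / 3
    for _ in range(max_iter):
        try:
            z = newton_step(z, lam)
        except ZeroDivisionError:
            return None
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i
    return None

# Marking each lam in a grid by classify(lam) produces the Newton-basin-like
# parameter picture; e.g. for lam = 2 the critical point 2/3 falls to the root 1:
print(classify(2.0))  # -> 0
```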
rational-functions complex-dynamics ds.dynamical-systems
It is very unlikely that the set defined in the parameter plane $\lambda$ is the Julia set of anything. Mandelbrot set is not a Julia set of anything. Why are you asking such a strange question? –
Alexandre Eremenko Dec 3 '12 at 2:56
@Alexandre Eremenko: I'm mostly interested in the bifurcation locus of $f_\lambda$ (and, in a more general way, in scenarios in which the Julia set of the family of Newton-Raphson iterates of a
polynomial is not continuously determined by the roots of the polynomial). In this case the bifurcation locus (the boundaries of the little Mandelbrot sets) is contained by something that reminds
me of a Julia set I recognize (a Newton basin fractal) so I thought maybe it would actually turn out to be the Julia set of some related family, which would give me some insights into the
bifurcation locus. – Aaron Golden Dec 3 '12 at 3:21
I agree with Eremenko; this is a Mandelbrot-style construction. Your pictures are quite interesting: they really DO locally look like Newton fractals, analogously to the Julia/Mandelbrot correlation mentioned below. You can try to "fit" your data to a rational function, or at least estimate its possible degree by counting possible fixed points in your image, which should be dark centers (dark centers also correspond to attracting periodic points). Poles should be in regions which are black, as in one of the middle pictures. But your initial picture really does not look like a rational Julia fractal – Per Alexandersson Dec 3 '12 at 7:23
Parts of pictures in the parameter plane really look "somewhat like" the parts of the pictures in the dynamical plane, and there are theorems of this sort. However, it is extremely unlikely that the boundaries of some regions in the parameter plane exactly coincide with any Julia sets. (This can happen, see my reply to Rodrigo's answer, but very rarely, and I think such coincidences can have no significance). – Alexandre Eremenko Dec 3 '12 at 14:45
What do you mean by "a function family"? Certainly it is not true that your set is the Julia set of a rational function. (If you care enough, you should be able to fashion a formal proof, e.g. from
the fact that the complement contains some domains bounded by analytic curves, in the small Mandelbrot copies.) On the other hand, a one-dimensional bifurcation locus can usually be described as
the locus of non-normality of a suitable family of analytic functions, formed by taking the iterates of the free critical point. – Lasse Rempe-Gillen Dec 3 '12 at 22:47
2 Answers
Your $f_\lambda$ implements Newton's method for $p_\lambda$. As you said, the zeros $\{ 1,-1,\lambda \}$ of $p_\lambda$ are fixed points of $f_\lambda$, but even more, they are
superattracting fixed points; that is, $f'_\lambda(1) = f'_\lambda(-1) = f'_\lambda(\lambda) = 0$.
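This superattracting property is easy to verify numerically (a sketch of my own; a finite difference stands in for the exact derivative $f_\lambda' = p_\lambda p_\lambda''/(p_\lambda')^2$, which vanishes at every simple root of $p_\lambda$):

```python
def newton_map(z, lam):
    """f_lam(z) = z - p(z)/p'(z) for p(z) = (z**2 - 1)*(z - lam)."""
    p  = (z * z - 1) * (z - lam)
    dp = 3 * z * z - 2 * lam * z - 1
    return z - p / dp

def deriv(z, lam, h=1e-6):
    # central difference; fine here since the map is analytic near simple roots
    return (newton_map(z + h, lam) - newton_map(z - h, lam)) / (2 * h)

lam = 2.0 + 1.0j
for root in (1.0, -1.0, lam):
    print(abs(deriv(root, lam)))  # all ~0: each root is superattracting
```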
If you choose a fixed $\lambda$ and iterate different points to see whether they are attracted to $1$, $-1$, or $\lambda$, coloring them accordingly, you will be plotting the Julia set of
$f_\lambda$; this looks like the Newton fractals that you mention. Indeed, the first fractal in the Wikipedia page you linked to is the Julia set of the Newton method function associated
to $z \mapsto z^3-1$.
Instead, what you are doing is plotting the results of iterating one particular value for many different parameters $\lambda$. This is not a dynamical procedure because you use a
different $f_\lambda$ every time. This makes it very unlikely that your pictures are Julia sets. However, it can be shown that locally, near the critical value $f_{\lambda}(\lambda/3)$,
the Julia set of $f_\lambda$ looks similar to this parametric picture, and this is why your pictures look like Newton fractals. For a proof of the similarity statement in the quadratic family, see Tan Lei's paper: Similarity between the Mandelbrot set and Julia sets, Comm. Math. Phys. Volume 134, Number 3 (1990), 587-617.

The original Mandelbrot set is constructed by iterating 0 with different parameters $c$ in the family $z \mapsto z^2+c$; a parametric construction just as above. The appearance of little Mandelbrot sets in your pictures signals a region of parameters $\lambda$ where an iterate of $f_\lambda$ has some (local) behavior that is conjugate to a quadratic polynomial. If you pick $\lambda$ in one of those small Mandelbrots and draw the Julia set of $f_\lambda$, you will find small regions that look like the Julia set of that quadratic polynomial!
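The parametric construction of the Mandelbrot set mentioned here fits in a few lines (my own sketch; the escape radius 2 and a 100-iteration cap are the conventional choices):

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate the critical point 0 under z -> z*z + c; c is declared a member
    if the orbit stays inside the escape radius 2 for max_iter steps."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0))   # True: the orbit stays at 0
print(in_mandelbrot(-1))  # True: 0 -> -1 -> 0 is periodic
print(in_mandelbrot(1))   # False: 0 -> 1 -> 2 -> 5 -> ... escapes
```

Replacing this iteration by the Newton map $f_\lambda$ applied to its free critical point $\lambda/3$ gives exactly the parameter-plane procedure of the question.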
Added: Eremenko is right. Just because a picture is not generated by a dynamical process, it doesn't necessarily follow that it can't be a Julia set. A heuristic argument to support that
conclusion is as follows: Even though Julia sets of rational functions have structure at all scales, this structure can be described by a sequence of finite combinatorial objects (for
quadratic maps, we have kneading sequences and external angles). Once a picture contains something as complicated as a Mandelbrot set, it is possible to find infinitely many different
such sequences of combinatorial data. It is in this sense that I say it is very unlikely that a parametric fractal like yours can be generated as a Julia set.
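For concreteness, the parametric-versus-dynamical distinction can be sketched in Python for the quadratic family $z \mapsto z^2+c$ (a minimal sketch; the iteration budget and escape bound below are arbitrary choices of mine, not from the discussion above):

```python
def escapes(z, c, max_iter=200, bound=2.0):
    """Iterate z -> z**2 + c and report whether the orbit escapes."""
    for _ in range(max_iter):
        if abs(z) > bound:
            return True
        z = z * z + c
    return False

def in_mandelbrot(c):
    """Parametric plot: iterate the critical point 0 while varying c."""
    return not escapes(0j, c)

def in_filled_julia(z, c):
    """Dynamical plot: fix the map (fix c) and vary the starting point z."""
    return not escapes(z, c)

# c = 0 lies in the Mandelbrot set; its filled Julia set is the unit disk.
print(in_mandelbrot(0j), in_filled_julia(0.5 + 0j, 0j), in_filled_julia(2 + 0j, 0j))
```

The only difference between the two membership tests is which argument varies across pixels — exactly the distinction between the Mandelbrot (parameter-space) picture and a Julia (dynamical-space) picture.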
Your explanation is correct, except the sentence which begins with "Therefore...". How do you really know that these pictures (or parts of them) are not the Julia sets of some other
function? For example, the boundary of the Mandelbrot set of $\lambda z(1-z)$ contains a circle. And this circle is the Julia set of $z^2$. Of course this is very unlikely, and
probably can be proved in this example, if anyone cares about such...strange, to say the least, problem. – Alexandre Eremenko Dec 3 '12 at 14:39
Thank you for the explanation, Rodrigo, but I am confused by the statement, "locally, near $\lambda$, the Julia set of $f_\lambda$ looks similar to this parametric picture." Since
$\lambda$ is a (super)attracting fixed point of $f_\lambda$, the Julia set of $f_\lambda$ is bound away from $\lambda$, right? What does it mean for the $J(f_\lambda)$ to look like
anything near $\lambda$? Did you mean near a different point? If you can point me to a reference that explains this visual correspondence between the Julia sets and the parametric
picture I would greatly appreciate it. – Aaron Golden Dec 3 '12 at 18:54
@Aaron: I corrected the answer. The similarity between the dynamical and parameter pictures exists near the critical point (the one iterated in the parametric process). My mistake
stems from thinking in terms of the quadratic family, where the normalization $f_c:z \mapsto z^2+c$ is chosen so that the critical point of $f_c$ is precisely $c$ itself. – Rodrigo A.
Pérez Dec 4 '12 at 0:01
@Rodrigo: I think you mean that the similarity will occur near the critical value, not the critical point? – Lasse Rempe-Gillen Dec 4 '12 at 9:10
@Lasse: Of course. Thanks :) – Rodrigo A. Pérez Dec 4 '12 at 15:28
The set in question is the bifurcation locus of the family $f_{\lambda}$. It is hence the set of non-normality of the family $$\bigl(\lambda\mapsto f_{\lambda}^n(\lambda/3)\bigr)_{n\in\mathbb{N}};$$
see Theorem 4.2 in McMullen's book "Complex Dynamics and Renormalization".

As has been pointed out, you cannot expect the set to exactly coincide with the Julia set of a rational function. It should be possible to prove this formally. Indeed, your set has
complementary components bounded by analytic curves, or regions bounded by curves analytic except at a single cusp. Such curves cannot bound Fatou components of a rational map, unless the
map is a Blaschke product and the Julia set is itself a circle.
(I cannot seem to find a reference for this fact right now. However, the boundary of a Siegel disk may be smooth, but cannot be analytic anywhere; otherwise, the conjugacy to a rotation
would extend beyond the boundary. On the other hand, boundaries of attracting basins are even known to have Hausdorff dimension strictly greater than one; see Przytycki, "On hyperbolic
Hausdorff dimension of the boundary of a basin of attraction for a holomorphic map and of quasirepellers". Showing that the boundary cannot be analytic is much easier, both for attracting
and parabolic basins.)
Thank you for the reference, Lasse. This may be a silly question, but how do you know that $\partial K$ is the bifurcation locus of the family $f_\lambda$? Is it sufficient to know that
the limit of the iterates $f^n_\lambda(\lambda/3)$ changes as $\lambda$ crosses between connected components of $K$? I ask because I started out looking for $\lambda$ such that $f_\lambda$
had neutral cycles of period greater than one, thinking that the bifurcation locus could include only those parameters. In the images, those parameters are indicated in white, the
boundaries of the little Mandelbrot sets. – Aaron Golden Dec 7 '12 at 20:43
@Aaron, the fact that $\partial K$ is contained in the bifurcation locus is clear (since the family in question cannot be normal near a point of the boundary). The other direction follows
from Montel's theorem. I am not sure what you mean about the neutral cycles. There are many parameters in the bifurcation locus without neutral cycles (e.g. parameters for which the free
critical point is eventually mapped to a repelling periodic cycle). – Lasse Rempe-Gillen Dec 9 '12 at 23:12
@Aaron - do you have any further questions on the topic, or might you consider accepting an answer? – Lasse Rempe-Gillen Dec 26 '12 at 17:47
Programming Praxis - Tracking Santa
In today’s Programming Praxis, our task is to calculate the total distance traveled by Santa based on data published by NORAD. Let’s get started, shall we?
First, some imports:
import Data.List.HT
import Text.HJson
import Text.HJson.Query
The easiest version of the algorithm to calculate the distance between two coordinates can be found here. I’ve made a few small adjustments to get rid of some duplication. The Scheme solution rounds
off the result, but I don’t believe that is correct. Granted, it doesn’t result in a big deviation (3 miles on a total of almost 200000), but rounding off should be saved until the end.
dist :: RealFloat a => (a, a) -> (a, a) -> a
dist (lat1, lng1) (lat2, lng2) =
let toRad d = d * pi / 180
haversin x = sin (toRad $ x / 2) ^ 2
a = haversin (lat2 - lat1) +
cos (toRad lat1) * cos (toRad lat2) * haversin (lng2 - lng1)
in 2 * 6371 * atan2 (sqrt a) (sqrt (1 - a))
Rather than hunting through the string ourselves for the coordinates, we use a Json library.
coords :: Json -> [(Double, Double)]
coords = map ((\[JString lat, JString lng] -> (read lat, read lng)) .
getFromKeys ["lat", "lng"]) . getFromArr
The total distance can be easily calculated by summing the distances between the subsequent points of the route.
totalMiles :: RealFloat a => [(a, a)] -> Int
totalMiles = round . (* 0.621371192) . sum . mapAdjacent dist
All that’s left to do is to read in the route and print out the result.
main :: IO ()
main = either print (print . totalMiles . coords) . fromString .
drop 16 =<< readFile "santa.txt"
Initially there was a much larger difference between my version and the provided answer. It turns out that the route on page 2 is different from the current route published by NORAD, resulting in a
difference of about 2000 miles. Using the same route as the Scheme version reduced this to 3, due to the rounding error present in the provided solution.
Tags: bonsai, christmas, code, Haskell, kata, praxis, programming, santa, tracking
programmingpraxis Says:
December 24, 2010 at 3:45 pm | Reply
Santa must have changed his route. Merry Christmas! | {"url":"http://bonsaicode.wordpress.com/2010/12/24/programming-praxis-tracking-santa/","timestamp":"2014-04-20T04:43:24Z","content_type":null,"content_length":"53293","record_id":"<urn:uuid:50d95746-f00f-4791-bb95-b9107bdf2bec>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00314-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Triangular Koch Fractal Surface
Many fractal curves can be generated using L-systems or string-rewrite rules, in which each stage of the curve is generated by replacing each line segment with multiple smaller segments in a
particular arrangement. The same technique can be extended to surfaces, where each stage is constructed by replacing each triangle with multiple smaller triangles. This Demonstration shows an
analogy of the Koch curve as a three-dimensional surface.
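The string-rewrite idea can be illustrated in one dimension with a minimal Python sketch of the standard Koch-curve L-system (the axiom and rewrite rule below are the usual ones for the flat Koch curve, not taken from this Demonstration):

```python
def koch_string(n, axiom="F", rule="F+F--F+F"):
    """Apply the Koch rewrite n times: every forward segment F is replaced
    by four smaller segments separated by 60-degree turns (+ / -)."""
    s = axiom
    for _ in range(n):
        s = s.replace("F", rule)
    return s

# Each iteration multiplies the number of segments by four: 1, 4, 16, 64, ...
for n in range(4):
    print(n, koch_string(n).count("F"))
```

The surface version described here works the same way, except the rewrite acts on triangles instead of line segments.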
Snapshot 1: creation of the surface begins with a single triangle
Snapshot 2: each successive iteration is created by dividing each triangle into four smaller triangles, raising the midpoint of the middle triangle, and then closing the surface by building a
tetrahedron-shaped shell on the middle triangle
Snapshot 3: the bottom face of the surface is a Sierpinski sieve | {"url":"http://demonstrations.wolfram.com/TriangularKochFractalSurface/","timestamp":"2014-04-21T12:24:26Z","content_type":null,"content_length":"42528","record_id":"<urn:uuid:d8f6db07-c04e-497e-aea8-ad7fe86e4f6e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00234-ip-10-147-4-33.ec2.internal.warc.gz"} |
Small Area Estimation resources
Small Area Estimation is a field of statistics that seeks to improve the precision of your estimates when standard methods are not enough.
Say your organization has taken a large national survey of people’s income, and you are happy with the precision of the national estimate: The estimated national average income has a tight confidence
interval around it. But then you try to use this data to estimate regional (state, county, province, etc.) average incomes, and some of the estimates are not as precise as you’d like: their standard
errors are too high and the confidence intervals are too wide to be useful.
Unlike usual survey-sampling methods that treat each region’s data independent, a Small Area Estimation model makes some assumptions that let areas “borrow strength” from each other. This can lead to
more precise and more stable estimates for the various regions.
Also note that it is sometimes called Small Domain Estimation because the “areas” do not have to be geographic: they can be other sub-domains of the data, such as finely cross-classified demographic
categories of race by age by sex.
If you are interested in learning about the statistical techniques involved in Small Area Estimation, it can be difficult to get started. This field does not have as many textbooks yet as many other
statistical topics do, and there are a few competing philosophies whose proponents do not cross-pollinate so much. (For example, the U.S. Census Bureau and the World Bank both use model-based small
area estimation but in quite different ways.)
Recently I gave a couple of short tutorials on getting started with SAE, and I’m polishing those slides into something stand-alone I can post. Meanwhile, below is a list of resources I recommend if
you would like to be more knowledgeable about this field.
Textbooks on SAE:
• Nicholas Longford, Missing Data and Small-Area Estimation: Modern Analytical Equipment for the Survey Statistician, Springer, 2005.
• J.N.K. Rao, Small Area Estimation, Wiley-Interscience, 2003.
• Parimal Mukhopadhyay, Small Area Estimation in Survey Sampling, Narosa Publishing House, 1998.
• Plater, Rao, Särndal, and Singh (ed.), Small Area Statistics: An International Symposium, Wiley, 1987.
Book sections on SAE:
• Wayne Fuller, Sampling Statistics, Wiley, 2009. Ch. 5.5, “Small area estimation,” pp. 311-324.
• Peter Congdon, Applied Bayesian Modelling, Wiley, 2003. Ch. 4.6, “Small domain estimation,” pp. 163-167.
• Peter Congdon, Bayesian Statistical Modelling, Wiley, 2001. Ch. 8.8, “Small area and survey domain estimation,” pp. 415-421.
Classic articles:
Review articles:
• Gauri S. Datta, “Model-based approach to small area estimation,” pp. 251-288, Handbook of Statistics: Sample Surveys: Inference and Analysis, vol. 29B, eds.: D. Pfeffermann and C.R. Rao,
North-Holland, 2009.
• Risto Lehtonen and Ari Veijanen, “Design-based methods of estimation for domains and small areas,” pp. 219-249, Handbook of Statistics: Sample Surveys: Inference and Analysis, vol. 29B, eds.: D.
Pfeffermann and C.R. Rao, North-Holland, 2009.
• M. Ghosh and J. N. K. Rao, “Small area estimation: an appraisal,” Statist. Sci., vol. 9, no. 1, pp. 55 – 76, 1994. (See also comments and rejoinder.) [Project Euclid]
Other resources:
• Pushpal Mukhopadhyay and Allen McDowell, “Small Area Estimation for Survey Data Analysis using SAS Software,” SAS Global Forum 2011. [SAS]
Examples of unit-level and area-level estimation with PROC MIXED and hierarchical Bayes estimation with PROC MCMC.
• Virgilio Gómez-Rubio, “Tutorial on Small Area Estimation,” 2008 useR! Conference. [Website]
Slides and R code from tutorial session.
• Arman Bidarbakht-Nia et al., “Workshop on Concepts & Methods for Producing Disaggregated Statistics Using Census Data,” Bangkok, 2011. [Website]
Slides from tutorial session by UN-ESCAP staff.
• World Bank staff et al. (?), “More Frequent, More Timely & More Comparable Data for Better Results,” PREM 2011. [Website]
Slides from workshop on poverty monitoring.
• Bedi, Coudouel, and Simler (ed.), More Than a Pretty Picture: Using Poverty Maps to Design Better Policies and Interventions, The World Bank, 2007.
• Elliott, Cuzick, English, and Stern (ed.), Geographical and Environmental Epidemiology: Methods for Small-Area Studies, Oxford University Press, 1992.
3 Responses to Small Area Estimation resources
1. Great resources! Looking forward to the longer treatment! I don’t do much survey research, but I have definitely done hierarchical modeling. I typically use mixed-effects/Bayesian approaches ala
Gelman to “partially pool” subsets of the data towards the global trend. Are the SAE approaches you describe substantially different? How so?
□ Thanks, Harlan! Yep, many of the SAE approaches are mixed-effects models as you describe, but with a special focus on accounting for the survey weights and sampling variances explicitly.
In the examples I’ll post, you’ll see a model like $y_i = X^T_i \beta + u_i + e_i$, where i indexes the small areas, and the observed $y_i$ are already aggregated to the area level using
sampling weights. So we usually also have a survey-design-based estimate of the area-level sampling variance $Var(e_i)$ that we often just treat as known. Then it’s just a matter of
estimating the area-level random-effect variance $Var(u_i)$ and regression coefficients $\beta$, then combining it all to get estimates of $Y_i = X^T_i \beta + u_i$.
More details to follow!
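A toy numerical sketch of that area-level model (all data are simulated; the simple OLS fit and method-of-moments variance estimate are my own illustrative choices — one of several ways to fit such a model, not necessarily what any agency uses):

```python
import math
import random

random.seed(42)
m = 40
x = [random.uniform(0.0, 1.0) for _ in range(m)]
psi = [random.uniform(0.2, 1.0) ** 2 for _ in range(m)]        # known sampling variances Var(e_i)
theta = [2.0 + 3.0 * xi + random.gauss(0.0, 0.5) for xi in x]  # true small-area means
y = [t + random.gauss(0.0, math.sqrt(p)) for t, p in zip(theta, psi)]  # direct estimates

# Step 1: fit the regression part by simple OLS (normal equations).
xbar, ybar = sum(x) / m, sum(y) / m
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar
synth = [b0 + b1 * xi for xi in x]          # "synthetic" model predictions

# Step 2: method-of-moments estimate of the area-effect variance Var(u_i).
sigma_u2 = max(0.0, sum((yi - si) ** 2 - p
                        for yi, si, p in zip(y, synth, psi)) / (m - 2))

# Step 3: the EBLUP shrinks each direct estimate toward its model prediction,
# with more shrinkage where the sampling variance psi_i is large.
gamma = [sigma_u2 / (sigma_u2 + p) for p in psi]
eblup = [g * yi + (1.0 - g) * si for g, yi, si in zip(gamma, y, synth)]

print(round(sigma_u2, 3), [round(e, 2) for e in eblup[:3]])
```

Each shrinkage estimate lands between the direct estimate and the regression prediction — that is the "borrowing strength" described in the post.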
Physics Forums - why only first brillouin zone?
I realise this a fundamental flaw in my understanding but I can't seem to get my head around it.
I understand that bragg reflection occurs at the zone boundary and so electron wavevectors are diffracted back into the 1st brillouin zone, but what is the justification for only considering the 1st
brillouin zone, for example, when considering energy gaps?
This is also connected with my earlier question about why Umklapp processes are required, where large k is transmitted back into 1st brillouin zone by G:
Repetit Apr18-07 11:59 AM
The reason why it is only necessary to consider the first Brillouin zone lies in the fact that the wave function in a crystal is written as a function with the periodicity of the lattice times a
plane wave, so that in one dimension:
\psi_{nk}(x) = e^{i k x} u_{nk}(x)
The plane wave factor is unique only up to a reciprocal lattice vector. This is seen by considering the reciprocal lattice vectors, which in 1D are given by $G_n = \frac{2n\pi}{a}$: at any lattice
translation R = ma,

e^{i G_{n1} R} = e^{i G_{n2} R},

for all values of n1 and n2, so adding a reciprocal lattice vector to k leaves the Bloch state unchanged at the lattice sites. The interval is often chosen to be -pi/a to pi/a to get the band
structure around a symmetric interval.
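A small numerical check of this equivalence (the lattice constant and sample wavevector below are arbitrary choices, just for illustration):

```python
import cmath
import math

def reduce_to_first_bz(k, a=1.0):
    """Fold an arbitrary 1D wavevector into the first Brillouin zone
    (width 2*pi/a around 0) by subtracting a reciprocal lattice
    vector G_n = 2*pi*n/a."""
    g = 2.0 * math.pi / a
    return (k + math.pi / a) % g - math.pi / a

a = 1.0
k = 5.0                          # some wavevector outside the first zone
k_red = reduce_to_first_bz(k, a)

# k and k_red differ by a reciprocal lattice vector, so their plane-wave
# factors agree at every lattice translation x = m*a.
for m in range(-3, 4):
    x = m * a
    assert cmath.isclose(cmath.exp(1j * k * x), cmath.exp(1j * k_red * x))
```

This is why a band plotted against the folded k carries the same information as one plotted in any higher zone.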
Ok, thanks for that
I understand Bloch waves. So are you saying that a Bloch wave travelling through the crystal is the same in all Brillouin zones, so the consideration of which one is arbitrary?
So what about the electrons which remain within the 1st Brillouin zone because their wavevector k is large enough to experience diffraction (Laue condition: delta k = G, so at the zone boundary the
minimum k for diffraction is delta k/2 = G/2)?
I think I understand. The pure isotropic crystal has translational symmetry: any position has the same properties as that position translated by the lattice vectors. So every Brillouin zone is equivalent
and the choice between any 1st zone, or any 2nd zone, is arbitrary. So electrons which are bound behave in the same way in every Brillouin zone, as do electrons passing through the crystal.
So we may consider only the 1st Brillouin zone.
is that the reasoning or am I missing something? Sorry to go on about it!
Boletim de Ciências Geodésicas
Services on Demand
Related links
On-line version ISSN 1982-2170
Bol. Ciênc. Geod. vol.18 no.3 Curitiba July/Sept. 2012
Applications of Voronoi and Delaunay Diagrams in the solution of the geodetic boundary value problem
Aplicações de Diagramas de Voronoi e Delaunay na solução do problema geodésico do valor de contorno
C. A. B. Quintero^I; I. P. Escobar^II; C. F. Ponte-Neto^I
^IObservatório Nacional, Ministério da Ciência e Tecnologia, Rio de Janeiro, Brazil cosme@on.br; bonilla@on.br
^IIDepartamento de Engenharia Cartográfica, Universidade do Estado do Rio de Janeiro, Rio de Janeiro, Brazil irisescobar@gmail.com
Voronoi and Delaunay structures are presented as discretization tools to be used in numerical surface integration, aiming at the computation of solutions to geodetic problems in which the integrand
is a non-analytical function (e.g., gravity anomaly and height). In the Voronoi approach, the target area is partitioned into polygons, each of which contains one observed point, and no interpolation
is necessary; only the original data are used. In the Delaunay approach, the observed points are vertices of triangular cells and the value for a cell is interpolated at its barycenter. If the amount
and distribution of the observed points are adequate, a gridding operation is not required and the numerical surface integration is carried out point-wise. Even when the amount and distribution of
the observed points are not sufficient, the structures of Voronoi and Delaunay can combine a grid with the observed points in order to preserve the integrity of the original information. Both schemes
are applied to the computation of the Stokes' integral, the terrain correction, the indirect effect and the gradient of the gravity anomaly, in the area of the State of Rio de Janeiro, Brazil.
Keywords: 2-D tessellation; Delaunay Triangulation; Voronoi Cells; Geodesy; Stokes' Integral
Este trabalho apresenta as estruturas de Voronoi e Delaunay como ferramentas de discretização a serem usadas na integração numérica de superfícies com o objetivo de resolver problemas computacionais
geodésicos, quando no integrando a função não é analítica. No enfoque de Voronoi, a região de trabalho é particionada em polígonos, os quais contêm um ponto por polígono o que faz desnecessária a
interpolação. No enfoque de Delaunay, os pontos observados são os vértices de um triangulo, e o valor da célula é o resultado da interpolação sobre o triangulo pelo seu baricentro. Se a quantidade de
pontos observados e a sua distribuição são adequadas, a interpolação em grade não é necessária, e a integração é levada a cabo ponto a ponto. Mesmo quando a quantidade e distribuição dos pontos
observados não são suficientes, as estruturas de Voronoi e Delaunay podem combinar grade com pontos observados a fim de preservar a integridade da informação original. Ambos os enfoques são aplicados
ao cálculo da integral de Stokes, da correção de terreno, do efeito indireto, e do gradiente da anomalia da gravidade na região do estado de Rio de Janeiro, no Brasil.
Palavras-chave: Tesselação 2-D; Triangulação de Delaunay; Celulas de Voronoi; Geodésia; Integral de Stokes
In spite of the known techniques for geodetic parameters computing, either in the space-domain (SANTOS and ESCOBAR, 2000) or in frequency-domain (HAAGMANS et al., 1993; SCHWARZ et al., 1990), usually
the target area is required to be partitioned into geographical grid elements. Geoid undulations, terrain corrections, indirect effects, for instance, are computed at these cells, based on gravity
anomalies and heights, which are not evenly distributed. These are interpolated in order to produce a regular grid (HIRVONEN, 1956). Similarly, integral methods (LEHMANN, 1997) and combined
ones (e.g., space-frequency domains) also require evenly distributed data (LI and SIDERIS, 1992). Gridding is usually (see Sideris 1995) required for fast Fourier transform (FFT) geoid determination
techniques (e.g., STRANG VAN HEES, 1986; SIDERIS and TZIAVOS, 1988; HAAGMANS et al., 1993; FORSBERG and SIDERIS, 1993; KUROISHI, 2001), as well as for the fast Hartley transform (FHT) and fast T
transform (FTT) techniques (AYHAN, 1997; LI and SIDERIS, 1992). Similarly, some traditional space-domain techniques, such as discrete summation (e.g., HIRVONEN, 1956; GEMAEL, 1999) and integral
methods (e.g., JIANG and DUQUENNE, 1997; LEHMANN, 1997; NOVÁK et al., 2001), require the observed data to be gridded. Although this is not just a problem, nevertheless modified data are used instead
of the original ones. Also, gridding usually expends excessive manual/computational effort, as well as it is liable to produce spurious data with the loss of genuine information (GOLDEN SOFTWARE,
1999). Besides, the interpolated value depends on the chosen gridding technique and on the grid 'nodes' separation, which are inherent to the spatial data distribution (e.g., GOLDEN SOFTWARE, 1999).
In the worst case, it may produce spurious data that may lead to an inaccurate geoid. Also, different gridding methods provide different interpretations of the data because the grid node values are
computed by different algorithms.
In this work, the discretization by means of Voronoi and Delaunay structures (AURENHAMMER, 1991; WATSON, 1981) is used for the computation of the Stokes' integral, terrain correction, indirect effect
and gradient of the Helmert anomaly. Both approaches are reported in Santos and Escobar (2006, 2010). Genuine data are used and preserved if they have such a spatial distribution that does not
require filling blank areas with interpolated data. In spite of a natural "smoothing" due to the data distribution, Voronoi and Delaunay schemes avoid the "synthetic" smoothing due to an
interpolation step. Alternatively, if the original data are not sufficient, they can still be preserved in combination with a grid used to fill the blank areas. In Voronoi scheme, the target area is
subdivided into a unique set of convex and adjacent polygonal cells, in which each one holds an original data point. In Delaunay scheme, the area is tessellated into contiguous triangular cells
(triangulated irregular network - TIN). Mean values are locally interpolated at the barycenter (centroid) from the respective triangles' vertices and remain enclosed within each cell (GOLDEN SOFTWARE, 1999).
Several authors have investigated Voronoi and Delaunay structures, their properties and applications in many different scientific fields. Those structures are used in the analysis of elements that
have to be partitioned into contiguous domains called "natural neighbors". This term was coined by mathematicians, when noticing the frequent instances of those relationships found in the nature.
Many geometric natural shapes tend to be organized into Voronoi-like figures, such as the surface layer of mud that has dried and contracted, or the tessellated bony shell of some turtles.
In gravimetry, Delaunay triangulation has been used to model the topography for terrain corrections computation, in which the relief is represented by triangular prisms (RUPERT, 1988). Lehmann (1997)
used a triangulation structure to model equilateral spherical triangles for the evaluation of geodetic surface integrals.
A Voronoi diagram is also referred to as the Dirichlet tessellation, and might be viewed as a geometric complement (a duality) to Delaunay triangulation (AURENHAMMER, 1991; TSAI, 1993). The polygon
vertices are associated with Delaunay triangles by the same construction rule - the circumcircle criterion or the Delaunay criterion (TSAI, 1993).
2.1. Construction of Delaunay Triangulation and Voronoi Diagram
A Delaunay triangulation (also called a Delaunay simplicial complex) is a partition of an m-dimensional space, S, into adjacent triangular elements (Figure 1b). Given a set of n distinct points in S,
so that n > m, every circumcircle associated with the triangles must contain no other points inside it. With k points belonging to the perimeter of the area (peripheral points), the number of
triangles generated is equal to 2(n - 1) - k. The interesting property of this structure (the approximately equiangular form) is that minimum angles are maximized and maximum angles are not
minimized, which is an advantage over any other triangulation of the same set of points. By the Delaunay criteria, any triangulation with no obtuse angles has to be a Delaunay triangulation.
According to Aurenhammer (1991), a triangulation without extreme angles (or "compact") is desirable, especially in methods of data interpolation. Additionally, in graphical computation the
equiangular property provides the best visualization for displaying figures.
The vertices in Delaunay triangulation are all and only the n points of S, and its circumcenters are the actual Voronoi polygon vertices, or Voronoi polytopes (Figure 1a). The remaining circumcenters
not satisfying the 'empty circumcircle criterion' are discarded from the construction.
The automatic contouring of the points is according to the triangulation algorithm by D.F. Watson (RUPERT, 1988), which was modified to include the Voronoi polygons' computation, in which the
topological data structures set up the relations between data points, edges and Delaunay triangles. The algorithm implicitly ensures a closed bounding area perimeter (the convex hull), but it does
not preserve its outer limits because this information is not required for the triangulation.
A Voronoi diagram is a partition of S into n convex cells

$V_i = \{\, p \in S : d(p, p_i) \le d(p, p_j) \ \forall j \ne i \,\}, \quad i = 1, \dots, n,$

where d is the point-to-vertex Euclidean distance function.

The Voronoi structure is unique in the sense that each polygon edge is exactly halfway between a pair of sites in S. For this reason, the n - 1 half planes give rise to convex polygons, each of
which has at most n - 1 edges (AURENHAMMER, 1991). Also, any point on an edge is equidistant to two sites, and any vertex is equidistant to three sites, thus forming a polygonal partition of
the region. By construction, no polygon can be empty, and as a consequence, space is partitioned into exactly n polygons.
The topological data structures for the Voronoi diagram construction are almost the same as in the Delaunay triangulation, but in the Voronoi diagram the sequence of vertices and polygon edges is
necessary to ensure the same area of computation as in the Delaunay triangulation. Additionally, a check on both the Voronoi and Delaunay constructions is performed to verify that the sum of the
plane areas of the figures is the same.
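The nearest-site rule that defines the Voronoi partition can be illustrated with a brute-force ("discrete") assignment over a grid — a minimal Python sketch with illustrative sites and resolution, not the topological construction used here:

```python
import math

def discrete_voronoi(sites, nx=21, ny=21):
    """Label each node of an nx-by-ny grid over the unit square with the
    index of its nearest site: a brute-force Voronoi partition."""
    labels = {}
    for i in range(nx):
        for j in range(ny):
            p = (i / (nx - 1), j / (ny - 1))
            labels[(i, j)] = min(range(len(sites)),
                                 key=lambda s: math.dist(p, sites[s]))
    return labels

sites = [(0.2, 0.2), (0.8, 0.3), (0.5, 0.9)]
labels = discrete_voronoi(sites)
# By construction no cell is empty and the grid is partitioned exactly.
cell_sizes = [list(labels.values()).count(s) for s in range(len(sites))]
print(cell_sizes, sum(cell_sizes))
```

Every grid node gets exactly one label, mirroring the property that the polygons partition the region with no empty cell.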
A comprehensive treatment of Stokes' and Molodensky's formulations, as well as of the main Boundary Value Problem of Geodesy (BVPG) alternatives, can be found in Guimarães and Blitzkow (2011). Voronoi and
Delaunay diagrams were applied to compute the Stokes' integral for the local gravimetric geoid determination in the Rio de Janeiro State (and nearby regions), Brazil. The relief is very rugged, and
varies between 0 - 2,600 m (Figure 2). In the same area Delaunay triangulation was applied to the computation of terrain correction, indirect effect and the gradient of Helmert gravity anomaly.
The dataset includes 1940 terrestrial gravity stations filled out with 491 Geosat 5-arcmin resolution gravity anomalies (Figure 3). The data on land are along some roads and a kriging interpolation
was used to fill in most of the blank areas between the roads to a 5-arcmin resolution grid, amounting to 3973 data points.
Figures 4 and 5 depict the target area tessellated according to the Delaunay and Voronoi schemes, respectively. The process removes clustered data inside a circle of radius 2000 m, in order to avoid
rather irregular cells. These clusters accounted for 744 points in both schemes, leaving 3229 data points, and the same number of Voronoi cells was produced. The Delaunay tessellation gave rise to
6278 triangular cells, whose vertices are the data points.
3.1 The Stokes' Integral Computation
In general, geodesists deal with two distinct surfaces to represent the figure of the Earth. The determination of the distance between them is the main goal of the geodetic sciences. One is a
theoretical surface - the reference ellipsoid - adopted as the planimetric referential for geodetic and geophysical applications. The other one is the geoid, which is the most important equipotential
surface of the Earth's gravity field, the closest to the Earth's physical surface. It is a real surface that might be materialized, and may be approximately viewed as the mean sea surface, supposedly
extended through the continents. The geoid is used as the altimetric referential for engineering applications.
Stokes established the theoretical basis for the gravimetric determination of the geoid, considering the variation of gravity at different points on the Earth's surface. Stokes' formula solves the
problem assuming a global and continuous data distribution over the Earth, and is given by (STOKES, 1849)

$N = \frac{R}{4\pi\gamma} \iint_{\sigma} \Delta g \, S(\psi) \, d\sigma , \qquad (1)$

where R is the radius of a spherical Earth, $\gamma$ is the normal gravity, $\Delta g$ is the gravity anomaly and $S(\psi)$ is Stokes' function; Figure 6 outlines the geometrical relationship between
the polar co-ordinates $(\psi, \alpha)$ and the surface element $d\sigma$.
From Figure 6, we get the relationship

$d\sigma = \sin\psi \, d\psi \, d\alpha . \qquad (2)$

Substitution of Eq. (2) in Eq. (1) yields the geoidal undulation N as a function of the elemental area $d\psi \, d\alpha$:

$N = \frac{R}{4\pi\gamma} \int_{0}^{2\pi}\!\int_{0}^{\pi} \Delta g \, S(\psi) \sin\psi \, d\psi \, d\alpha . \qquad (3)$
In practice, Stokes' integral is replaced by a discrete summation, due to the impossibility of an ideal data distribution. A method proposed by Hirvonen (1956) solves the discretization problem in
order to determine the geoid undulations. It subdivides the studied area into a regular geographical grid, and each grid cell contains an interpolated mean gravity anomaly that represents that cell
for the discrete evaluation. The geoidal undulation might be written as

$N = \frac{R}{4\pi\gamma} \sum_{i} \Delta g_i \, S(\psi_i) \, \Delta\sigma_i , \qquad (4)$

where $\Delta g_i$ is the mean gravity anomaly and $\Delta\sigma_i$ the area of the i-th cell.
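The discrete evaluation can be sketched numerically; the closed form of Stokes' function below is the standard one from Heiskanen & Moritz, while the single toy cell is an illustrative value, not data from this study:

```python
import math

def stokes(psi):
    """Stokes' function S(psi), psi in radians (Heiskanen & Moritz, eq. 2-164)."""
    s = math.sin(psi / 2.0)
    return (1.0 / s - 6.0 * s + 1.0
            - 5.0 * math.cos(psi)
            - 3.0 * math.cos(psi) * math.log(s + s * s))

def geoid_undulation(cells, R=6371000.0, gamma=9.81):
    """Discrete Stokes summation: cells are (dg, psi, dsigma) triples with
    the mean anomaly dg [m/s^2], spherical distance psi [rad] and
    cell area dsigma [sr]."""
    return R / (4.0 * math.pi * gamma) * sum(
        dg * stokes(psi) * dsigma for dg, psi, dsigma in cells)

# A single toy cell: a 10 mGal anomaly, 5 degrees away, 1e-4 sr in area.
print(geoid_undulation([(10e-5, math.radians(5.0), 1e-4)]))
```

The 1/sin(psi/2) term is the singularity at psi = 0 mentioned below for the Voronoi scheme, which is why the computation point needs special treatment there.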
In the Voronoi scheme the actual values observed at the data points are assigned to polygons, and the computation point coincides with a point of integration (i.e., $\psi = 0$), leading to the
indetermination of $S(\psi)$. To avoid the indetermination, in this case, Eq. (4) is replaced by (HEISKANEN & MORITZ, 1967),
where $\psi_0$ is the mean spherical distance between the point and the respective edges of the Voronoi polygon.
In the Delaunay scheme, as the computation is performed at the triangles' vertices (data points) and the value for each cell is interpolated at its barycenter, the Stokes' function singularity
vanishes (SANTOS and ESCOBAR, 2010).
Both the Delaunay and Voronoi schemes were used to compute the geoidal heights; Figures 7 and 8 depict the results for the area of the Rio de Janeiro State.
Involving almost twice the number of discretization cells, the Delaunay scheme provides a smoother aspect in Figure 9. The discrepancies range from -35 cm to 14 cm, with a mean value of -2 cm and a
standard deviation of 4 cm.
A comparison between the application of the Delaunay and Voronoi schemes and the classical technique for geoidal height computation was done. As the first two terms in Eq. (6) are independent of the method used to compute Stokes' integral (they do not depend on the way the discretization is carried out), to examine the differences it is sufficient to consider only the term computed from Stokes' integral.

Table 1 presents the statistics of those differences for the Rio de Janeiro dataset. The RMS differences indicate an almost perfect agreement (99%) between the Voronoi and Delaunay schemes. Only one point in the Voronoi scheme is outside the 99% confidence interval for the RMS difference, and the maximum difference is less than 10 cm. With the exception of the area of the Corcovado peak, results are statistically the same for the two methods.
3.2 The Terrain Correction Computation
The terrain correction takes into account the attraction effect of the irregularities of the topography in the vicinity of the gravity station and is always added to the observed value. Since the terrain correction can take values larger than other corrections to gravity (Earth's tide, free-air, Bouguer), it is very important, mainly in regions of rugged topography. The gravitational attraction of a vertical triangular prism is the mathematical model used here for the terrain correction computation (WOODWARD, 1975; RUPERT, 1988). The terrain is geometrically represented by vertical prisms with a triangular base (B1, B2, B3) in a horizontal plane at the altitude of the gravity point (P) and a tilted top (A1, A2, A3) modeling the topography (Figure 10). The vertices of the triangular base are determined according to the Delaunay structures, and the density is assumed to be constant. Within a given radial distance from each gravity point, the vertical component of the attraction of the prisms is computed.
Terrain corrections were computed for the State of Rio de Janeiro area in a 1.5-arcmin grid, using the Delaunay triangulation (Figure 4) and the dataset presented in Figure 3. A DTE dataset made available online by the National Institute for Space Research, INPE, Brazil, was used to fill in the blank areas. This topographical database was unified and structured for the whole Brazil area under the TOPODATA Project (VALERIANO and ALBUQUERQUE, 2008). It was produced by a refined combination of local terrain elevation data and topographical information derived from Shuttle Radar Topography Mission (SRTM) data (USGS, 2007), for a 1-arcsec horizontal resolution grid. A subset DTE grid of 3-arcsec horizontal resolution was used up to a radial distance of 3 km, and a 15-arcsec resolution subset grid for radial distances between 3 km and 24 km. Distances and azimuths were calculated using Vincenty's formulas for solving the inverse problem of Geodesy (VINCENTY, 1975). The values range between 0 and 37.22 mGal, with a mean value of 1.66 mGal and a standard deviation of 2.60 mGal. Figure 11 presents the contribution, in mGal, per distance range, in km, up to 24 km from the point with the maximum terrain correction (37.22 mGal); it can be observed that, for distances larger than 24 km, the contribution to the terrain correction tends to zero. Figure 12 shows the map of the terrain correction in the Rio de Janeiro State area, computed using the Delaunay triangulation (Figure 4) and the dataset presented in Figure 3.
3.3 The Indirect Effect Correction
As gravity reductions deal with topographic mass displacement, the resulting indirect effect on the geoidal height must be computed (HEISKANEN & MORITZ, 1967). Helmert's second method of condensation generally involves small indirect effects, given by (SIDERIS and SHE, 1995)

N_ind = −(π G ρ / γ) H_P² − (G ρ / 6γ) Σ_i a_i (H_i³ − H_P³) / s_i³ ,

where G is the gravitational constant, ρ is the topographic density, and s_i is the Euclidean distance from the computation point P, with elevation H_P, to the generic integration cell i, with elevation H_i and area a_i.
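The discretized computation can be sketched as follows (Python; the condensation formula used is the standard one associated with Sideris and She (1995), and the cell geometry below is hypothetical, not taken from the paper's data):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
RHO = 2670.0       # standard topographic density, kg/m^3
GAMMA = 9.81       # normal gravity, m/s^2

def indirect_effect(h_p, cells):
    """Indirect effect of Helmert's second condensation, in metres, at a
    point with elevation h_p (m); cells = [(s_i, h_i, a_i), ...] with
    distance s_i (m), cell elevation h_i (m) and cell area a_i (m^2)."""
    primary = -math.pi * G * RHO * h_p ** 2 / GAMMA
    secondary = -(G * RHO) / (6.0 * GAMMA) * sum(
        a * (h ** 3 - h_p ** 3) / s ** 3 for s, h, a in cells)
    return primary + secondary

# flat topography around the point: only the primary (planar) term remains
print(indirect_effect(500.0, [(1000.0, 500.0, 1e6)]))
```

For flat surroundings the summation vanishes, and the result is the familiar planar term −πGρH_P²/γ, a centimetre-level negative value at moderate elevations.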
Using the same process applied to the terrain correction, with the same Delaunay scheme, dataset and scanned distance, the indirect effect for Helmert's second method of condensation was computed. The values range between −0.424 m and 0 m. The mean value and standard deviation are, respectively, −0.025 m and 0.032 m.

Figure 13 presents the contribution, in mm, per distance range, in km, up to 24 km from the point with the minimum indirect effect correction (−0.424 m); it can be observed that, for distances larger than 21 km, the contribution tends to zero. A map of the indirect effect is shown in Figure 14.
3.4 The Gradient Of The Helmert Gravity Anomaly
For the solution of some geodetic problems, involving downward or upward continuation of the gravity, it is useful to know the gradient of the gravity anomaly,

∂Δg/∂h = (R² / 2π) ∬_σ (Δg − Δg_P) / l³ dσ ,    (10)

where R is the mean radius of the Earth and l is the spatial distance between the fixed point P and the variable surface element R² dσ. This expression can be evaluated in an approximate form without error larger than 0.0006 %.

Since the gravity anomaly is not known as a continuous function, a numerical integration, based on Eq. (10), may be used for computing its vertical gradient, by discretizing the Earth's surface in cells, that is:

∂Δg/∂h ≈ (R² / 2π) Σ_{i=1}^{n} a_i (Δg_i − Δg_P) / l_i³ ,    (11)

where n is the number of cells over the integration area around the point P, and i is the index of a generic cell with area a_i, mean gravity anomaly value Δg_i and distance l_i to P.
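A minimal sketch of this cell-by-cell sum (Python; treating the cell area as a solid angle is an assumption made here for dimensional consistency, and the values are hypothetical):

```python
import math

R = 6371000.0  # mean radius of the Earth, m

def anomaly_gradient(dg_p, cells):
    """Discrete vertical gradient of the gravity anomaly at the point P.
    dg_p is the anomaly at P; cells = [(l_i, dg_i, sigma_i), ...] with
    spatial distance l_i (m), mean cell anomaly dg_i, and cell area
    sigma_i taken here as a solid angle (an assumption; the text only
    says "area"). Result has the units of dg per metre."""
    return R ** 2 / (2.0 * math.pi) * sum(
        sigma * (dg - dg_p) / l ** 3 for l, dg, sigma in cells)

# if the surrounding anomalies all equal the value at P, the gradient vanishes
print(anomaly_gradient(30e-5, [(5000.0, 30e-5, 1e-6), (9000.0, 30e-5, 1e-6)]))  # 0.0
```

The sum depends only on the departures Δg_i − Δg_P, so a locally constant anomaly field yields a zero gradient, as expected.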
Vertical gradients of the Helmert gravity anomaly were computed for the State of Rio de Janeiro area in a 1.5-arcmin grid, using the Delaunay triangulation (Figure 4) and the dataset presented in Figure 3. A grid of 1.5-arcmin horizontal resolution was used to fill in the blank areas. The neighboring area was scanned up to a radial distance of 24 km. The values of the gradient vary from −68.6 mGal/km to 27.2 mGal/km.

Figure 15 presents the contribution, in mGal/km, per distance range, in km, up to 24 km from the point with the minimum vertical gradient (−68.6 mGal/km); it can be observed that, for distances larger than 22 km, the contribution tends to zero. A map of the vertical gradient of the Helmert gravity anomaly is shown in Figure 16.
4. CONCLUSIONS
Voronoi and Delaunay structures have been applied as alternative discretization tools for numerical surface integration in the solution of geodetic problems, when the integrand contains a non-analytical function. Both schemes were used to compute Stokes' integral, while the terrain correction, the indirect effect and the vertical gradient of the Helmert gravity anomaly were computed using the Delaunay triangulation.

The main advantage of these schemes is that the computation is performed without an interpolation grid, when the amount and distribution of data points are sufficient. Even when this condition is not satisfied, it is possible to merge the data points with a grid of interpolated data, used to fill in the blank areas. In both cases, the original data are preserved. However, when merged data are used, it is important to check the consistency between the interpolated data grid and the original data. Any bias or conflict should be eliminated a priori, to avoid artificial effects on the results.

Both structures, of simple and efficient geometrical construction, are useful for the tessellation of a site in order to evaluate the geoidal undulations by means of Stokes' technique. The Voronoi approach uses fewer discretization cells than the Delaunay triangulation; nevertheless, both schemes lead to the same results, which are somewhat more efficient than those of the classical method. Although the terrain corrections, indirect effects and vertical gradients of the Helmert gravity anomaly could also have been computed with the Voronoi scheme, the Delaunay triangulation was used here, in view of the better fit of the triangles to rugged surfaces.
AURENHAMMER F. Voronoi diagrams: A survey of a fundamental geometric data structure. ACM Computing Surveys 23(3): 345-405, 1991.
AYHAN M.E. Updating and computing the geoid using two dimensional fast Hartley transform and fast T transform. Journal of Geodesy 71: 362-369, 1997.
DENKER H., WENZEL H.G. Local geoid determination and comparison with GPS results. Bulletin Géodésique 61: 349-366, 1987.
FORSBERG R., SIDERIS M.G. Geoid computations by the multi-band spherical FFT approach. Manuscripta Geodaetica 18: 82-90, 1993.
GEMAEL C. Introdução à Geodésia Física. UFPR, Curitiba, Brazil, 1999.
GOLDEN SOFTWARE. Surfer Version 7 User Guide. Golden Software Inc., Golden, Colorado, 1999.
GUIMARÃES G. do N., BLITZKOW D. Problema de valor de contorno da Geodésia: uma abordagem conceitual. Boletim de Ciências Geodésicas 17(4): 607-624, Curitiba, Oct./Dec. 2011.
HAAGMANS R.R., DE MIN E., VAN GELDEREN M. Fast evaluation of convolution integrals on the sphere using 1D FFT, and a comparison with existing methods for Stokes' integral. Manuscripta Geodaetica 18: 227-241, 1993.
HEISKANEN W.H., MORITZ H. Physical Geodesy. WH Freeman & Co, San Francisco, 1967.
HIRVONEN R.A. On the precision of the gravimetric determination of the geoid. Transactions - American Geophysical Union 3(1): 1-8, 1956.
JIANG Z., DUQUENNE H. On fast integration in geoid determination. Journal of Geodesy 71: 59-69, 1997.
KUROISHI Y. An improved gravimetric geoid for Japan, JGEOID98, and relationships to marine gravity data. Journal of Geodesy 74(11-12): 745-765, 2001.
LAMBERT W.D. The reduction of observed values of gravity to sea level. Bulletin Géodésique 26: 107-181, 1930.
LEHMANN R. Fast space-domain evaluation of geodetic surface integrals. Journal of Geodesy 71: 533-540, 1997.
LI J., SIDERIS M.G. The fast Hartley transform and its application in physical geodesy. Manuscripta Geodaetica 17: 381-387, 1992.
MORITZ H. Geodetic Reference System 1980. Bulletin Géodésique 58(3): 388-398, 1984.
RAPP R.H., WANG Y.M., PAVLIS N.K. The Ohio State University 1991 geopotential and sea surface topography harmonic coefficient models. Rep 410, Department of Geodetic Science and Surveying, The Ohio State Univ, Columbus, 1991.
RAPP R.H., BASÍC T. Ocean-wide gravity anomalies from GEOS-3, Seasat and Geosat altimeter data. Geophysical Research Letters 19(19): 1979-1982, 1992.
RUPERT J. A gravitational terrain correction program for IBM compatible personal computers. Vol. 2.21, Geological Survey of Canada, GSC, Open File 1834, 1988.
SANTOS N.P., ESCOBAR I.P. Gravimetric geoid determination in the municipality of Rio de Janeiro and nearby region. Brazilian Journal of Geophysics 18(1): 49-62, 2000.
SANTOS N.P., ESCOBAR I.P. Use of EGM08 model and Shuttle Radar Topography Mission data for geoid computation in the State of Rio de Janeiro, Brazil: a case of study with Voronoi/Delaunay discretizations. Studia Geophysica et Geodaetica 54: 239-255, 2010.
SIDERIS M.G., TZIAVOS I.N. FFT-evaluation and applications of gravity-field convolution integrals with mean and point data. Bulletin Géodésique 62: 521-540, 1988.
SIDERIS M.G., SHE B.B. A new high-resolution geoid for Canada and part of the U.S. by the 1D-FFT method. Bulletin Géodésique 69: 92-108, 1995.
SIDERIS M.G. Fourier geoid determination with irregular data. Journal of Geodesy 70: 2-12, 1995.
STOKES G.G. On the variation of gravity on the surface of the Earth. In: Mathematical and Physical Papers, Vol. II, New York, pp 131-171 (from the Transactions of the Cambridge Philosophical Society, Vol. VIII, pp 672-695), 1849.
STRANG VAN HEES G.L. Precision of the geoid, computed from terrestrial gravity measurements. Manuscripta Geodaetica 11: 86-98, 1986.
TSAI V.J.D. Fast topological construction of Delaunay triangulations and Voronoi diagrams. Computers & Geosciences 19(10): 1463-1474, 1993.
VINCENTY T. Direct and inverse solutions of geodesics on the ellipsoid with application of nested equations. Survey Review XXII(176): 88-93, 1975.
WOODWARD D.J. The gravitational attraction of vertical triangular prisms. Geophysical Prospecting 23(3): 526-532, 1975.
Received in April 2012
Accepted in July 2012
intuitive visualizations of categorization for non-technical audiences
For a project I’m working on at work, I’m building a predictive model that categorizes something (I can’t tell you what) into two bins. There is a default bin that 95% of the things belong to and a
bin that the business cares a lot about, containing 5% of the things. Some readers may be familiar with the use of predictive models to identify better sales leads, so that you can target the leads
most likely to convert and minimize the amount of effort wasted on people who won’t purchase your product. Although my situation doesn’t have to do with sales leads, I’m going to pretend it does, as
it’s a common domain.
My data is many thousands of “leads”, for which I’ve constructed hundreds of predictive features (mostly 1/0, a few numeric) each. I can plug this data into any number of common statistical and
machine learning systems which will crunch the numbers and provide a black box that can do a pretty good job of separating more-valuable leads from less valuable leads. That’s great, but now I have
to communicate what I’ve done, and how valuable it is, to an audience that struggles with relatively simple statistical concepts like correlation. What can I do?
I’m generally interested in finding better ways to build clean, intuitive, and informative visualizations of data, especially when the visualizations can leverage intuitions and skills that everyone
has. For example, almost everyone has a surprisingly good approximate number sense, the ability to quickly identify about how many items are in a largish group. For example, if shown a photo of 30
oranges and a photo of 20 oranges, you would be able to immediately say that there were more oranges in the first photo, and you would happily say that that photo had a few dozen oranges in it. This
psychological skill can be used to make more effective visualizations of certain types of data. Instead of comparing two quantities by lines in a chart, or even a number in a table, it may be useful
to compare visual density.
How can this be used to make better visualizations of prediction quality? Consider the standard ways that predictive model quality is reported. I have obfuscated the test set data from the problem I
mentioned above, and placed it in a public Dropbox in Rdata format. I’ve also put together an R script to demonstrate various ways of looking at the predictions and put it in a Github gist. Follow
along if you’d like.
First, take a look at the data frame and some summary statistics:
> head(pred.df)
predicted actual actual.bin
7379 0.6020833 yes 1
5357 0.5791667 yes 1
7894 0.5791667 yes 1
5893 0.5604167 yes 1
16093 0.5541667 yes 1
2883 0.5520833 yes 1
> summary(pred.df)
predicted actual actual.bin
Min. :0.000000 no :7785 Min. :0.0000
1st Qu.:0.004167 yes: 366 1st Qu.:0.0000
Median :0.016667 Median :0.0000
Mean :0.040827 Mean :0.0449
3rd Qu.:0.041667 3rd Qu.:0.0000
Max. :0.602083 Max. :1.0000
The model predicts about 4% of the items will be in the "yes" category, which is similar to the 4.5% that actually were. Using the very flexible ROCR package, I can quickly and easily convert this data frame into an object that can then be used to calculate any number of standard measures of predictiveness. First, I calculate the AUC value, which has a very intuitive interpretation. Consider sorting the list of items from most-predicted-to-be-"yes" to least. If the predictions are good, most of the "yes" values will be relatively high in the list. The AUC is equivalent to asking: if I randomly pick a "yes" item and a "no" item out of the list, how likely is the "yes" item to be higher on the list? If the list were randomly shuffled, it would be 0.5; if it were perfectly sorted with 20/20 hindsight, the AUC would be 1.0.
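That pairwise interpretation is easy to check on a toy example. This quick sketch (Python, with made-up scores rather than the data above) counts concordant pairs directly:

```python
# AUC as the probability that a random "yes" outranks a random "no":
# count concordant (yes > no) pairs, counting ties as half a win.
def pairwise_auc(yes_scores, no_scores):
    wins = sum((y > n) + 0.5 * (y == n)
               for y in yes_scores for n in no_scores)
    return wins / (len(yes_scores) * len(no_scores))

yes = [0.9, 0.8, 0.4]        # predicted scores of actual "yes" items
no = [0.7, 0.3, 0.2, 0.1]    # predicted scores of actual "no" items
print(pairwise_auc(yes, no))  # 11 of 12 pairs are concordant -> 0.9166...
```

With large datasets you would never enumerate all pairs like this, but it makes the meaning of a number like 0.82 concrete.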
> # convert to their object type (labels should be some sort of ordered type)
> pred.rocr <- prediction(pred.df$predicted, pred.df$actual)
> # Area Under the ROC Curve
> performance(pred.rocr, 'auc')@y.values[[1]]
[1] 0.8237496
In this case, it’s about .82, which is probably valuable but far from perfect. Another common way of looking at this type of predictions comes from business uses, where the goal is to identify leads
(or whatever) that are likely to convert to purchases. From this point of view, the goal is to lift the leads higher in the list, so that you can focus on the top of the list and got more benefit
from sales effort with less work. Two common ways of looking at lift are with a decile table, which shows how much value you get by focusing on the top 10%, 20%, etc. of the list, sorted by the
predictive model, and the lift chart, which visualizes the same thing by showing how much benefit over random guessing you get by looking at more or less of the sorted list. Here they are for this
# decile table: share of "yes" captured in the top x of the score-sorted list
sorted.bin <- pred.df$actual.bin[order(-pred.df$predicted)]
dec.table <- ldply((1:10)/10, function(x) data.frame(decile = x,
  prop.yes = sum(head(sorted.bin, floor(x * length(sorted.bin)))) / sum(sorted.bin)))
dec.table$lift <- dec.table$prop.yes / dec.table$decile
print(dec.table, digits=2)
decile prop.yes lift
1 0.1 0.61 6.1
2 0.2 0.69 3.4
3 0.3 0.76 2.5
4 0.4 0.80 2.0
5 0.5 0.84 1.7
6 0.6 0.90 1.5
7 0.7 0.92 1.3
8 0.8 0.95 1.2
9 0.9 0.99 1.1
10 1.0 1.00 1.0
# Lift Curve
plot(performance(pred.rocr, 'lift', 'rpp'))
This graph shows, not particularly intuitively in my view, that if you focus on the top 10% of the data, you get more than 5 times the bang for the buck than if you focus evenly on the whole set of items. The decile table shows the same thing: the top decile is lifted by a factor of 6.1, and in fact you get 61% of the "yes" items in that top 10% of the data. These are very useful numbers to know, but I think there are considerably more intuitive ways of showing how the predictive model pulls the "yes" values away from the 5% base rate.
These more intuitive ways are not the standard graphs used in statistics and machine learning, such as the sensitivity/specificity curve and the ROC curve. Those graphs, shown below, illustrate
trade-offs between accepting false positives and false negatives. Useful, yes, but to understand them you have to think about the ways you could set a threshold and what effect that threshold would
have on the nature of your predictions. That’s not particularly intuitive, and the visualization doesn’t visually contrast two things, so it’s difficult to get an intuitive understanding of what has
been gained.
I’ve put some thought and some tinkering into potentially better ways of visualizing the output of predictive models. The key, I think, is to use a visualization that builds on the scatter graph.
Scatter graphs are great for less-technical audiences, because you can tell them that every individual dot is a customer (widget, whatever). They can immediately see the number of items in question,
and if you can plot the points on axes that make sense to them, they can go from “that dot there represents one person with this level of X and this level of Y”, to “this set of dots represents a set
of people with similar levels of X and Y”, to “this graph represents everyone, and their respective levels of X and Y.” And because of skills like the approximate number sense and the ability to
quickly understand visual density, scatter graphs can give a vastly better understanding of the range of a data set than summary graphs that just plot a line and maybe some error bars.
Here are several versions of a graph that illustrates how the predictive model smears out the set of dots from the 5% base rate, disproportionately pulling the “yes” items to the right, separating at
least some of them from the much larger set of “no” items. One key change from a basic scatter graph is to jitter the Y position of each point randomly, which I think makes these graphs look a little
like a PCR gel image.
This first approach is built around a basic scatter graph, where the X axis is the predicted likelihood of being a “yes”, and the Y axis is 0 for actual “no” and 1 for actual “yes” items. On top of
that is an orange line representing the base rate of about 5%, a blue line showing the smoothed ratio between “yes” and “no” items at each level of prediction, and a thin grey line showing where the
blue line ought to be. In this case, the model tends to underestimate the likelihood that some items are "yes" items. At 50%, half of the items should be "yes" and half should be "no", but it's
more like 3:1.
I like this graph as it intuitively lets people see the extent to which the predictive model is separating the categories, and how much better it does than just assuming the base rate. My second
approach at this combines the “smears” with another way of visualizing lift.
In this graph, the smeared real data is at the bottom of the graph, and the black line represents the lift, or how much better you are at identifying “yes” items by using the predictions. It’s also
an intuitive way of motivating the need to draw a boundary to focus effort. When trying to convert the points at the 25% level or above, you may be ineffective 75% of the time, but you’re also more
than 10 times more efficient than you would be otherwise.
My final attempt worth sharing is this one, which combines the dual smear approach with the cumulative value numbers from the lift table.
Now, in addition to being able to see the density of yes and no items for various levels of the prediction, you can see what proportion of the potential “yes” values exist to the right of each level
of the prediction. For example, at a threshold of 25%, you capture 40 or 45% of the “yes” items. At a 5% threshold you capture more than 70% of the “yes” items.
I’d love some feedback on these graphs! Do you agree with my assertion that scatter graphs are more visually intuitive and easier to motivate to non-technical audiences? Do these variations on lift
charts seem clearer or more valuable than traditional alternatives to you? Have I re-invented something that should be cited?
The R code for these graphs is available in the Github gist. I used ggplot2, naturally, which is an essential tool for exploring the space of possible visualizations without being tied down by
traditional graph structures.
Incidentally, for people interested in building graphs that leverage people’s innate visual capabilities, I recommend Kosslyn’s book, Graph Design for the Eye and Mind.
Also incidentally, the question of how to communicate or visualize the potentially incredibly complex sets of rules/weights/whatever inside the categorization black box is another fascinating issue,
the subject of ongoing research, and maybe something I’ll write about soon.
Polynomial functions: finding equation from graph, etc.
I have two questions :
1/ Please help me write a quadratic function whose graph has only one x-intercept, -4 and whose y-intercept, -8 in factored form. Thank a lot.
2/ If we have a graph of polynomial function, how can we write its equation ? Actually, if i have a graph of cubic or 4th-degree polynomial function, how can i know the sign of their leading
coefficient, whether positive or negative ?
For EX : I have two following graphs :
One is quadratic function :
Two is 4th-degree function :
How can i write these equations ?
Thanks a lot!
honest_denverco09 wrote:1/ Please help me write a quadratic function whose graph has only one x-intercept, -4 and whose y-intercept, -8 in factored form.
I'm afraid I don't know what is meant by "the y-intercept 'in factored form'". The y-intercept is where the graph crosses the y-axis; "factoring" is not involved.
To learn how to find quadratics from their zeroes, try here. Thinking back to what you learned when you were factoring, solving, and graphing quadratics, you know that "one x-intercept" for a
quadratic means one repeated root -- that is, there is one root that occurs twice -- and the graph touches the x-axis at the one zero (rather than passing through the axis).
honest_denverco09 wrote:2/ If we have a graph of polynomial function, how can we write its equation ? Actually, if i have a graph of cubic or 4th-degree polynomial function, how can i know the
sign of their leading coefficient, whether positive or negative ?
To learn about polynomial behavior and the relationships between equations and graphs, try here.
honest_denverco09 wrote:For EX : I have two following graphs :
One is quadratic function :
Two is 4th-degree function :
How can i write these equations ?
Once you've studied the lesson on polynomial behavior, you will understand why it is obvious that neither of these pictures displays a quadratic or a quartic polynomial!
The first graph is a positive odd-degree polynomial, probably a cubic. The second graph is a negative odd-degree polynomial, possibly of degree five.
Re: Polynomial functions: finding equation from graph, etc.
Thank a lot!
I can not open the links that you sent to me, but from your answer, i guess so :
If the graph of quadratic or 4th-degree functions that have the start-point below the x-axis, their leading coefficients will be positive, and if they have the start-point above the x-axis, their
leading coefficients will be negative.
For EX in these pictures, the first graph has the positive "a" and the second one has the negative "a".
I just guess so, is that true a little bit ?
honest_denverco09 wrote:I can not open the links that you sent to me....
I'm sorry to hear there was some difficulty. The links are working now.
Please review the material, keeping in mind what you already know about graphing linear functions (degree one, and thus odd) and graphing quadratic functions (degree two, and thus even), as they are
strongly indicative of what you will encounter for end-behavior in all even and odd polynomial functions.
Re: Polynomial functions: finding equation from graph, etc.
Thank a lot!
Last edited by honest_denverco09 on Thu Apr 23, 2009 11:20 pm, edited 1 time in total.
Re: Polynomial functions: finding equation from graph, etc.
Here is the answer for my problem :
1/ We have only one x-intercept, -4, which means the vertex of this function's graph will be (-4, 0) and this zero will be a double root.
Since this polynomial function is a quadratic one, it has two zeros (counting multiplicity), so it can be written in factored form as y = a(x - r1)(x - r2).
With the double root -4, the equation will be y = a(x + 4)^2.
We also have the y-intercept -8, which means: the graph will open downward / the leading coefficient will be negative.
The point (0, -8) will be on the graph.
So far, we have: -8 = a(0 + 4)^2 = 16a, and then solving for a, we'll get a = -1/2.
Eventually, our equation is y = -(1/2)(x+4)^2
2/ For this graph, I will do it like this:
When we have a graph of any polynomial function, first, we need to identify the sign of "a", the leading coefficient, whether positive or negative, by looking at its graph. In this case, "a" is negative.
Then, we need to recognize the degree of this function. In this case, the degree is 5.
Set up the parentheses based on the degree and the graph. In this case, we will have y = -()()^3()
Find the x-intercepts to fill out the set above. In this case, the x-intercepts are -5, -2, 1. So, y = -(x+5)(x+2)^3(x-1)
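To double-check that y = -(x+5)(x+2)^3(x-1) matches the graph, here is a quick numerical check (a Python sketch; it just substitutes values):

```python
# Check the reconstructed degree-5 equation y = -(x+5)(x+2)^3(x-1):
# the zeros and the end behavior should match what the graph shows.
def y(x):
    return -(x + 5) * (x + 2) ** 3 * (x - 1)

for r in (-5, -2, 1):
    assert y(r) == 0  # each x-intercept really is a zero

# negative leading coefficient with odd degree: up on the left, down on the right
assert y(-100) > 0 and y(100) < 0
print("all checks pass")
```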
Dilations of Quadratic Graphs - Concept
You guys know that any time you are making a graph in Math class, you can make a table of x and y values and substitute the x's in one at a time. But oh my gosh! It takes all day. I understand. I'm a
Math teacher, I assign this stuff, I have to do it myself. I know it's a real drag. That's why I personally learn a whole bunch of shortcuts. And when I'm grading homework, I don't make tables of
values all the time. I use the shortcuts in my brain to just check students' homework like estimation. That's how you guys can do your graphs. If your teacher says just make a sketch. It doesn't have
to be exact and that's what we're going to be looking at here.
We're going to be looking at what happens with a quadratic equation meaning you have x squared as your highest exponent. If it's multiplied by a negative number, like it has a negative a value, or if
a is not one. We're going to be checking that out. I'm going to be using one of these calculators that your teacher might have available to you, might not. But these are sometimes useful just to
check your work. Okay, so I'm going to go on ahead and turn on the computer and we're going to see what we can do when we explore quadratic equations.
The first thing I'm going to graph is y=x squared. That's what called a parent function. It's like the function that creates the shape that all quadratic equations have. I'm also going to make it so
whenever I draw that graph, it's going to show up as bold. At the same time I'm going to be graphing y=-x squared so we can see how they're alike and how they're different.
Let's go ahead and hit graph and see what happens. There's my bold and there's my regular. That means this is the graph of y=x squared and this is the graph of y=-x squared. Look at how it's upside
down. A lot of students remember that negative means upside down. It's like a sad face. Negative is sad. So you frown. That's how they remember that this graph is going to be upside down.
Let's go look at the table of values so you guys can see what else is going on. This is my graph or this this column here represents y=x squared. This represents y=x squared but then negativise, you
see all these negative signs. That's the points that create the graph that we just saw.
Okay, let's go back and look at a couple of different equations. We have y=x squared. Next let's look move those graphs up and down on the axis. Let's do y=x squared plus two and then we'll do y=-x
squared plus two. Again I want you guys to think about what's alike and what's different. Before I hit graph try to predict what this is going to look like. The negative means that it's going to be a
frowny face or upside down. And this +2 business means that my vertex is going to be moved up two vertically. Here comes the graph.
My parent function y=x squared plus two and y=-x squared plus two. Sad frowny face. Same vertex though, the vertex got moved from 0 0, up two because of that +2 business.
Let's look at a couple of these equations where the vertex is not on the origin or it's not even on the y axis. What I'm going to do is move the vertex over and up and down by using some parentheses.
I'm going to do y=x take away two quantity squared and then plus three. Remember that the stuff inside the parentheses represents a horizontal shift and the stuff outside the parentheses represents a
vertical shift in my case it's going to be up three.
I'm also going to be graphing y=-x+2 inside the parentheses squared plus three. So these are, oops, I meant to do minus two. Back up. Back up. Back up. Let's make that a negative sign. Okay, so these
are going to be the exact same equation except for the one tiny difference I'm going to have is this negative sign outside. I shouldn't call it a tiny difference, it's actually a big deal. Remember
also the order of operations. If you are making a table of values, you would do your x number, subtract two square it, negativise that answer and then plus three.
Okay, let's look at the graphs. There's my parent function, there's my parabola that's opening up because a was a positive value or there's a positive outside the parentheses, this is the negative
one. See how the vertex is no longer on the y axis but that pattern of flipping up and down still stayed the same.
Okay, so that's all good. The next thing we are going to be looking at is what happens if a is a number other than one or -1. For example, I'm going to be graphing y=x squared along with y=2 times x
squared. Think about what that might mean. I'm also going to graph y=4x squared so we can really start seeing what difference it makes when a is a number bigger than one. Here comes the graph, y=x
squared, y=2x squared, y=4x squared. Notice all three of those have the same vertex of 0 0, but y=2x squared is steeper because your y values are being multiplied by two every time. It gets steeper
more quickly than y=x squared. Along those same lines, when I multiply by 4x squared it gets even steeper still. When your a value's absolute value is larger than one, it makes your
graph what we call skinny.
That's as opposed if I use a fraction and what I'm going to do next, is half or 0.5 of x squared. We'll also do 0.2 so you guys can really see the couple difference. 0.2 of x squared. Let's graph all
three of those, see what happens. There's x squared, half of x squared and 0.2. Ooh, they got wider.
Now I think that's kind of counter intuitive. I would think that fractions would mean like get skinnier, but actually fractions make the parabola get wider. There's x squared, there's half x squared,
there's 0.2x squared. So you'll see as my decimal or fraction, makes the a value smaller and smaller, my parabola actually gets wider because my y value is changing in a less rapid way.
Okay, let's go back and do something where we can put it all together. I'm going to leave the y=x squared in bold, but let's do something where we have negative and a fractional value of a. I'm going
to use y=-0.2 as my coefficient times x squared, and then we'll see how that looks.
Try to predict before I hit graph what that might look like. The negative means that it's going to be a frowny face, upside down because negative's sad like a frown and then 0.2 is going to make my
parabola wider, skinny? It's going to make it wide. Let's check. There it is, upside down and wide.
Let's do one that's super super messy putting together everything you guys have ever learned just to show you that once you get the hang of these shifting rules, you can use them to graph any
parabola you come across. And by the way these shifting rules don't only apply to parabolas or quadratics. They also apply to absolute values, they're going to apply to cubics, are going to apply to
all kinds of different shapes that you're going to see throughout your Math career. So I'm not just showing you this for fun or for shortcuts. These are actually going to be really important once you
start moving through your math classes.
Okay, so I'm moving the vertex side to side, moving it up and down and I'm turning it upside down and I'm making it wide. Before I hit graph, let's just review what all of these different numbers
mean. Negative means upside down, half means make it wide, this +2 moves my vertex side to side, it's going to move it two in the left direction, even though plus is usually to the right, and then +3
is going to move my vertex up three. Let's go ahead and graph it, see what it looks like.
There it is. My vertex got moved over two, up three, upside down and wide. I could have drawn this graph by hand without making a table of values and that would make me feel like a superstar A+ Math student.
Start practicing these shifting rules guys. They're going to make your graphing a lot easier.
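If you want to double-check these shifting rules numerically, here's a short Python sketch (the function name `parabola` and the parameters a, h, k are my own labels, not from the lesson) that evaluates the vertex form y = a(x - h)^2 + k:

```python
def parabola(a, h, k):
    """Return y = a*(x - h)**2 + k as a function of x (vertex form)."""
    return lambda x: a * (x - h) ** 2 + k

# The last example in the lesson: y = -0.5(x + 2)^2 + 3
f = parabola(-0.5, -2, 3)
print(f(-2))          # 3.0 -- the vertex landed at (-2, 3)
print(f(0), f(-4))    # 1.0 1.0 -- symmetric about x = -2
print(f(-1) < f(-2))  # True -- values drop away from the vertex: it opens down
```

Plugging in a couple of x values like this is a quick way to confirm a graph you drew by hand.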
dilation vertex quadratic equation coefficient
The proper name for a kind of ordered space
I'm trying to find the correct term for a specific kind of totally ordered space:
Let $S$ be a totally ordered space with asymmetric relation $<$.
Property: For any two $s_{1}$ and $s_{2}$ in $S$ where $s_1 < s_2$, there must exist some $s_{3}$ such that $s_{1} < s_{3}$ and $s_{3} < s_{2}$.
What is the name of this property? Thank you!
gn.general-topology names
1 Answer
Dense order is one name that concept goes by.
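A concrete illustration (my own addition, not from the answer): in the rationals the midpoint of $s_1$ and $s_2$ always serves as the witness $s_3$, so $(\mathbb{Q}, <)$ is a dense order, while the usual order on the integers fails the property:

```python
from fractions import Fraction

def witness(s1, s2):
    """Midpoint of two rationals: always strictly between them."""
    return (s1 + s2) / 2

a, b = Fraction(1, 3), Fraction(1, 2)
m = witness(a, b)
print(a < m < b)   # True: 1/3 < 5/12 < 1/2

# No integer lies strictly between 1 and 2, so (Z, <) is not dense.
print(any(1 < n < 2 for n in range(-100, 100)))   # False
```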
Thank you so much, Mariano! – user1998 Dec 20 '09 at 1:37
Social Explorer | Census 1960 Census Tract Only
Documentation: Census 1960 Tracts Only Set
Publisher: U.S. Census Bureau
Document: Occupational Characteristics (Volume II, Part VII - Subject Reports)
citation: U.S. Bureau of the Census. U.S. Census of Population: 1960. Subject Reports, Occupational Characteristics. Final Report PC(2)-7A. U.S. Government Printing Office, Washington, D.C. 1963.
Occupational Characteristics (Volume II, Part VII - Subject Reports)
For persons in housing units at the time of the 1960 Census, the sampling unit was the housing unit and all its occupants; for persons in group quarters, it was the person. On the first visit to an
address, the enumerator assigned a sample key letter (A, B, C, or D) to each housing unit sequentially in the order in which he first visited the units, whether or not he completed an interview. Each
enumerator was given a random key letter to start his assignment, and the order of canvassing was indicated in advance, although these instructions allowed some latitude in the order of visiting
addresses. Each housing unit to which the key letter "A" was assigned was designated as a sample unit, and all persons enumerated in the unit were included in the sample. In every group quarters, the
sample consisted of every fourth person in the order listed. The 1960 statistics in this report are based on a subsample of one-fifth of the original 25-percent sample schedules. The subsample was
selected on the computer, using a stratified systematic Sample Design. The strata were made up as follows: For persons in regular housing units there were 36 strata, i.e., 9 household size groups by
2 tenure groups by 2 color groups; for persons in group quarters, there were 2 strata, i.e., the 2 color groups.
Although the sampling procedure did not automatically insure an exact 5 percent sample of persons, the Sample Design was unbiased if carried through according to Instructions. Generally, for large
areas, the deviation from the estimated sample size was found to be quite small. Biases may have arisen, however, when the enumerator failed to follow his listing and sampling instructions exactly.
Table C compares the distribution by major occupation group of employed persons, as presented in this report, based on the 5-percent sample with corresponding statistics based on the 25-percent
sample presented in Volume I of the 1960 Census of Population. Differences in this table reflect primarily sampling error.
Table C. Occupation-Comparison of 25-Percent and 5-Percent Sample Data on Major Occupation Group of Employed Persons, For the United States: 1960
The statistics based on the 5-percent sample of the 1960 Census returns are estimates that have been developed through the use of a Ratio Estimation procedure. This procedure was carried out for each
of the following groups of persons in each of the sample weighting areas:
│Group│Sex, color, and age │Relationship and tenure │
│ │Male white: │ │
│1 │Under 5 │ │
│2 │5 to 13 │ │
│3 │14 to 24 │Head of owner household │
│4 │14 to 24 │Head of renter household │
│5 │14 to 24 │Not head of household │
│6-8 │25 to 44 │Same groups as age group 14 to 24 │
│9-11 │45 and over │Same groups as age group 14 to 24 │
│ │Male nonwhite: │ │
│12-22│Same groups as male white│ │
│ │Female white: │ │
│23-33│Same groups as male white│ │
│ │Female nonwhite: │ │
│34-44│Same groups as male white│ │
The sample weighting areas were defined as those areas within a State consisting of central cities of urbanized areas, the remaining portion of urbanized areas not in central cities, urban places not
in urbanized areas, or rural areas.
For each of the 44 groups, the ratio of the complete count to the sample count of the population in the group was determined. Each specific sample person in the group was assigned an integral weight
so that the sum of the weights would equal the complete count for the group. For example, if the ratio for a group was 20.1, one-tenth of the persons (selected at random) within the group were
assigned a weight of 21, and the remaining nine-tenths a weight of 20. The use of such a combination of integral weights rather than a single fractional weight was adopted to avoid the complications
involved in rounding in the final tables. In order to increase the reliability, where there were fewer than 275 persons in the complete count in a group, or where the resulting weight was over 80,
groups were combined in a specific order to satisfy both of these two conditions.
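As an illustration of this integral-weight scheme (the code below is a hypothetical sketch, not Census Bureau software), a ratio of 20.1 splits into whole-number weights exactly as described:

```python
def integral_weights(complete_count, sample_count):
    """Assign whole-number weights to sample persons so the weights
    sum exactly to the complete count (the 20/21 split in the text)."""
    low = complete_count // sample_count
    n_high = complete_count - low * sample_count  # these get weight low + 1
    return [low + 1] * n_high + [low] * (sample_count - n_high)

w = integral_weights(201, 10)   # ratio 20.1
print(sorted(w))                # nine weights of 20 and one of 21
print(sum(w))                   # 201, matching the complete count
```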
These ratio estimates reduce the component of sampling error arising from the variation in the size of household and achieve some of the gains of stratification in the selection of the sample, with
the strata being the groups for which separate ratio estimates are computed. The net effect is a reduction in the sampling error and bias of most statistics below what would be obtained by weighting
the results of the 5-percent sample by a uniform factor of twenty. The reduction in sampling error will be trivial for some items and substantial for others. A byproduct of this estimation procedure,
in General, is that estimates for this sample are generally consistent with the complete count with respect to the total population and for the subdivisions used as groups in the estimation
procedure. A more complete discussion of the technical aspects of these ratio estimates will be presented in another report.
Estimates of characteristics from the sample for a given area are produced using the formula

x' = Σ (x[i] / y[i]) Y[i]   (summed over the 44 groups)
Where x' is the estimate of the characteristic for the area obtained through the use of the Ratio Estimation procedure,
x[i] is the count of sample persons with the characteristic for the area in one (i) of the 44 groups,
y[i] is the count of all sample persons for the area in the same one of the 44 groups, and
Y[i] is the count of persons in the complete count for the area in the same one of the 44 groups.
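From these definitions, the ratio estimate weights each group's sample count by the ratio of its complete count to its sample count. A hypothetical sketch with toy numbers (not census figures):

```python
def ratio_estimate(groups):
    """x' = sum over groups of x_i * (Y_i / y_i).

    groups: iterable of (x_i, y_i, Y_i) triples as defined in the text:
    sample count with the characteristic, total sample count for the
    group, and complete count for the group.
    """
    return sum(x * Y / y for x, y, Y in groups if y)

# Two made-up groups, each with roughly a 1-in-20 sampling rate:
print(ratio_estimate([(10, 50, 1000), (3, 40, 805)]))   # 260.375
```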
The figures from the 5-percent sample tabulations are subject to Sampling Variability, which can be estimated roughly from the standard errors shown in tables D and E.
These tables do not reflect the effect of response variance, processing variance, or bias arising in the collection, processing, and estimation steps. Estimates of the magnitude of some of these factors in the
total error are being evaluated and will be published at a later date. The chances are about two out of three that the difference due to Sampling Variability between an estimate and the figure that
would have been obtained from a complete count of the population is less than the standard error. The chances are about 19 out of 20 that the difference is less than twice the standard error and
about 99 out of 100 that it is less than 2½ times the standard error. The amount by which the estimated standard error must be multiplied to obtain other odds deemed more appropriate can be found in
most statistical textbooks.
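Those odds (2 out of 3, 19 out of 20, 99 out of 100) correspond to roughly 1, 2, and 2½ standard errors under a normal approximation; a quick check using Python's error function (an illustrative sketch, not part of the original report):

```python
from math import erf, sqrt

def prob_within(k):
    """P(|Z| < k) for a standard normal variable Z."""
    return erf(k / sqrt(2))

print(round(prob_within(1.0), 3))   # 0.683  ("2 out of 3")
print(round(prob_within(2.0), 3))   # 0.954  ("19 out of 20")
print(round(prob_within(2.5), 3))   # 0.988  ("99 out of 100")
```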
Table D shows rough standard errors of estimated numbers up to 50,000. The relative sampling errors of larger estimated numbers are somewhat smaller than for 50,000. For estimated numbers above
50,000, however, the nonsampling errors, e.g., response errors and processing errors may have an increasingly important effect on the total error. Table E shows rough standard errors of data in the
form of percentages. Linear interpolation in tables D and E will provide approximate results that are satisfactory for most purposes.
Table D. Rough Approximation to Standard Error of Estimated Number
(Range of 2 chances out of 3)
│Estimated number │Standard error │
│50 │30 │
│100 │40 │
│250 │60 │
│500 │90 │
│1,000 │120 │
│2,500 │200 │
│5,000 │280 │
│10,000 │390 │
│15,000 │480 │
│25,000 │620 │
│50,000 │880 │
Table E. Rough Approximation to Standard Error of Estimated Percentage
(Range of 2 chances out of 3)
│ │Base of percentage │
│Estimated percentage ├────┬─────┬─────┬──────┬──────┬───────┤
│ │500 │1,000│2,500│10,000│25,000│100,000│
│2 or 98 │3.3 │2.3 │1.3 │0.8 │0.3 │0.3 │
│5 or 95 │5.0 │4.0 │2.3 │1.0 │0.5 │0.3 │
│10 or 90 │7.0 │5.0 │3.0 │1.5 │0.8 │0.5 │
│25 or 75 │10.0│6.8 │3.8 │1.8 │1.0 │0.5 │
│50 │11.0│7.8 │4.0 │2.0 │1.3 │0.8 │
For a discussion of the Sampling Variability of medians and means and of the method for obtaining standard errors of differences between two estimates, see 1960 Census of Population, Volume I,
Characteristics of the Population, Part 1, United States Summary.
Illustration: Table 8 shows that there are 47,315 total native males 14 years old and over who are aeronautical engineers. Table D shows that a rough approximation to the standard error for an
estimate of 47,315 is 852, which means that the chances are about 2 out of 3 that the results of a complete census would not differ by more than 852 from this estimated 47,315. It also follows that
there is only about 1 chance in 100 that a complete census result would differ by as much as 2,130, that is, by about 2 ½ times the number estimated from table D.
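The figure of 852 in the illustration follows from linear interpolation in Table D between the 25,000 and 50,000 rows. A sketch (TABLE_D transcribed from above; the function name is illustrative):

```python
TABLE_D = [(50, 30), (100, 40), (250, 60), (500, 90), (1000, 120),
           (2500, 200), (5000, 280), (10000, 390), (15000, 480),
           (25000, 620), (50000, 880)]

def rough_standard_error(estimate):
    """Linear interpolation in Table D, as the text recommends."""
    for (n0, s0), (n1, s1) in zip(TABLE_D, TABLE_D[1:]):
        if n0 <= estimate <= n1:
            return s0 + (estimate - n0) * (s1 - s0) / (n1 - n0)
    raise ValueError("estimate outside the range of Table D")

print(round(rough_standard_error(47315)))   # 852, as in the text
```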
Winfield, IL Statistics Tutor
Find a Winfield, IL Statistics Tutor
...Since graduation, I've worked as a vision therapist for children ages 6-18, combining my love of optometry with one-on-one tutoring. I believe that students are able to learn anything with the
right instruction and I know my passion for science and math will contribute greatly to this process. ...
25 Subjects: including statistics, chemistry, calculus, physics
...Thank you for considering me!I have excelled in math classes throughout my school career. I have a bachelor's in Mathematics, so I have used algebra for many years. I have a knowledge of tips
and tricks to simplify elementary algebra concepts.
5 Subjects: including statistics, algebra 1, prealgebra, probability
...A little bit about my education background: I have a Bachelor's and Master's in Industrial Engineering from Bangalore University, India and New York State College of Ceramics (Alfred
University). I have over 20 years of professional experience and during these years I have designed and developed MS...
9 Subjects: including statistics, calculus, algebra 1, algebra 2
I will help your child learn by using personalized teaching solutions. I have been successful in the past tutoring high school students in math and science, but enjoy all ages and skill levels.
As a current dental student, my passion for learning is proven every day, and I hope to inspire your child to achieve their academic goals.
17 Subjects: including statistics, chemistry, biology, algebra 2
...Control systems theory consists of many differential equations. In association with this I received an A in differential equations when I received my bachelor's degree in mechanical
engineering. I have my master's degree in mechanical engineering.
20 Subjects: including statistics, calculus, physics, geometry
Related Winfield, IL Tutors
Winfield, IL Accounting Tutors
Winfield, IL ACT Tutors
Winfield, IL Algebra Tutors
Winfield, IL Algebra 2 Tutors
Winfield, IL Calculus Tutors
Winfield, IL Geometry Tutors
Winfield, IL Math Tutors
Winfield, IL Prealgebra Tutors
Winfield, IL Precalculus Tutors
Winfield, IL SAT Tutors
Winfield, IL SAT Math Tutors
Winfield, IL Science Tutors
Winfield, IL Statistics Tutors
Winfield, IL Trigonometry Tutors
Newest 'textbook-recommendation homological-algebra' Questions
I want to know how to prove that a torsion-free module over a general ring is flat. (In "Lectures on Modules and Rings", T. Y. Lam proves it in the case where R is an integral domain.) Please help me prove it or give me some ...
I have studied some basic homological algebra. But I can't seem to get started on spectral sequences. I find Weibel and McCleary hard to understand. Are there books or web resources that serve as ...
I would like to hear the community's ideas on good Homological Algebra textbooks / references. The standard example is of course Weibel (which I'll leave for someone else to describe). As usual, ...
Math Forum @ Drexel: Professional Development
The problem solving process that the Math Forum has been developing since 1993, can be compared to the writing process. We encourage problem solvers to:
• read the problem
• get started
• carry out a strategy
• draft a solution and explanation
• reflect
• get feedback
• revise
This course aligns well with the Math Forum's Problems of the Week but could also be used to develop techniques to use with problem solving prompts in general. Participants are not required to have a
Problem of the Week (PoW) Membership, although at the end of the course, they may find value in considering that as a logical next step as a resource for their students.
We hope this course helps you:
• learn about the Math Forum's approach to problem solving as a process. Our goal is not to be over and done. Our goal is to think, reflect, revise, and master.
• learn about the resources provided with each of the Math Forum's Problems of the Week (PoWs) and how they can help you enhance student competence and confidence in problem solving and communication.
• develop concepts of mathematical problem solving and communication.
• enhance your understanding of NCTM's Process Standards and the role of PoWs in addressing them.
• learn about assessing student work and providing effective feedback in the context of the PoWs.
Course Requirements
Participants enrolled in this course are only expected to have an Internet-accessible computer. There is no need to have any level of Problem of the Week membership. All problems and accompanying
teacher resources will be available within the course environment and also in PDF format if the participant prefers to download any of the documents for printing.
Participants will have some flexibility within each week but are expected to complete the activities during the assigned week. Participants who successfully complete the course activities will
receive a Certificate of Completion from the Drexel University School of Education indicating they have completed 15 hours (1.5 CEUs) of Professional Development. For Pennsylvania residents we
are also able to provide Act 48 credit.
Most assignments can be completed anytime during the assigned week. Generally, the deadline for each week's assignments will be 10 pm (eastern time) on Wednesday nights. Occasionally some
assignments will have a different deadline. Those will be noted in the weekly overview.
Contributions to the Discussions should be thoughtful and add something of value to the topic. Our approach is to
1. value everyone's contributions as we all share our explorations and wonderings.
2. ask and answer questions of ourselves and others.
3. think of how this can transfer to our classrooms.
Participants may request an online (synchronous) chat with the instructor and/or arrange a chat time to have with other participants.
Weekly Schedule
Week 1: Understanding the Problem
Focus: Problem solving as a vehicle for teaching and learning mathematics.
• Become oriented to the Blackboard Vista environment (only an Internet connection and Web browser are required).
• Introduce yourself and become acquainted with the other course participants.
• Individually solve a Problem of the Week (PoW).
• Post your answer and explanation as a journal entry in the Blackboard Vista environment.
• Increase understanding of what good problem solvers do.
Week 2: Communication
Focus: The nature of good communication in problem solving and the teacher's role in facilitating it.
• After discussion and feedback from the instructor, revise your PoW journal post.
• Increase understanding of good communication in problem solving.
• Practice thinking in terms of "I notice..." and "I wonder..." to encourage reflection and revision.
Week 3: Representation
Focus: Representations (physical objects, drawings, charts, graphs, and symbols) help students communicate their thinking.
• View and discuss the Enhanced Problem Packet for Teachers.
• Examine samples of student work.
• Examine suggestions of how to facilitate students' reflection and revision of their work.
• Discuss the different representations students might use.
Week 4: Reasoning and Proof
Focus: Communication is key to understanding each student's reasoning.
• Reflect on and discuss the role of reasoning at your grade level.
• Explore ways to develop student's ability to justify their thinking.
• Continue to think in terms of "I notice" and "I wonder."
• View and discuss the Problem Solving and Communication Activity Series.
• Reflect on and discuss how problem solving is or could be implemented in the classroom.
Week 5: Connections and Reflections
Focus: The mathematical ideas presented in our mathematics classes should interconnect and build on one another to produce a coherent whole.
• Individually solve another Problem of the Week (PoW). [Optional: print copies of one of the problems offered in PDF format and present to students in your classroom. Post one of their solutions
instead of your own.]
• Post your answer and explanation (or a student's) as a journal entry in the Blackboard Vista environment.
• View and discuss the Enhanced Problem Packet for Teachers.
• View and discuss the Problem Solving and Communication Activity Series.
• After discussion and feedback from the instructor, revise your PoW journal post.
Week 6: Set up a Trial Account
Focus: Become aware of the Problem of the Week resources that are available.
• Visit all of the different Problem of the Week resources, including
Current Problems
Problems Library
Write Math
• Have the opportunity to ask questions about the PoW services and resources.
• Discuss strategies for managing problem solving in your classroom.
Title: Principles and Standards for School Mathematics
Author: National Council of Teachers of Mathematics (NCTM)
Edition/Year: 2000
If you are not an NCTM member and do not have access to the print form of this document, you can sign up for 120-day free online access to the full Principles and Standards at NCTM's website. This
document will be used throughout the course.
Title: Problem Solving and Communication Activity Series: Program Description & Introduction
Author: The Math Forum
PDFs of these are linked from the weekly readings assignment pages.
Title: Problem Solving and Communication Activity Series
Author: The Math Forum
PDFs of these are linked from the weekly readings assignment pages.
Title: Enhanced Problem Packet for Teachers
Author: The Math Forum
PDFs of these are linked from the weekly readings assignment pages.
Recommended Resources
Title: Dr. Math® Gets You Ready for Algebra
Author: The Math Forum
Publisher: John Wiley & Sons
The book is a series of questions and answers arranged according to a standard math pre-algebra class, and supplemented with Internet references and a glossary.
Available here: http://mathforum.org/pubs/dr.mathbooks.html
Title: Dr. Math® Explains Algebra
Author: The Math Forum
Publisher: John Wiley & Sons
The book is a series of questions and answers arranged according to a standard Algebra I class, and supplemented with Internet references and a glossary.
Available here: http://mathforum.org/pubs/dr.mathbooks.html
Feel free to email if you have any questions.
Binary division of large numbers
05-31-2007 #1
Registered User
Join Date
May 2007
As a newbie to this forum can I first say that I'm far from an expert programmer. I write in C using gcc on a Linux machine, and I'm always surprised when something actually works! Please could
anyone help me with a problem I can't see any way to resolve?
I am trying to make an encoder for OneCode, the new 4-state barcode system used by the USPS. Once done it will form part of my larger barcode encoding project at (http://
Following the instructions on the USPS website I have got as far as the correct calculation of a 102-digit binary number which I have stored in a short int array (not the best use of memory, but
it works). In order to get this far I have already written functions which handle binary addition, subtraction and multiplication. The next step is that I need to divide. As an example (in hex):
016907B2A24ABC16A2E5C004B1 divided by 27C.
I will need both the quotient and the remainder. I have written code that will recursively take away until I get a negative number, and from it I can get the correct values, but it takes forever
to do the calculation this way. I've looked at the 'long division' method of binary division but can't figure out how to put it into code.
I'm sure there must be a quicker way, but how?
Thank you for any suggestions.
If you don't mind the extra bulk of a library, libgmp is a math library for large numbers.
This Wikipedia article may be of help to you.
Otherwise, I once implemented division thus, which should be quicker than your way:
1) Find the smallest number of digits that can contain the answer. For base-10, this is: (number_of_digits_in_numerator) - (number_of_digits_in_denominator - 1) + 1 (for safety/rounding) This is
the max. number of digits possible in your answer. (But may be less)
2) Start with all digits 0.
3) Take the most significant digit, increment it by one.
4) Multiply the number by the denominator.
5) If the result of the multiplication is larger than the numerator, reduce the digit that was incremented by 1, and move down to the next less significant place. Repeat to step 3 using this
digit. If there is no less significant digit, you're done.
100 / 5
Digits to use: 3 - (1 - 1) + 1 = 4.
Start with: 0000.
1000 * 5 = 5000, too big.
0100 * 5 = 500, too big.
0010 * 5 = 50, too small.
0020 * 5 = 100, right on. (Special case, quit early!)
This algorithm needs only (digits_in_answer * base) iterations to find an answer. Although my examples (and implementation) was in base-10, it should work the same in base-2.
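The scheme above can be sketched in Python (which has native bignums, so this only checks the logic; porting it onto a digit array in C is the remaining work, and the names here are illustrative):

```python
def trial_digit_divide(num, den, base=10):
    """Quotient and remainder by guessing one digit at a time,
    most significant place first, as in the steps described above."""
    ndigits = len(str(num)) - (len(str(den)) - 1) + 1
    q = 0
    for pos in range(ndigits - 1, -1, -1):
        place = base ** pos
        # bump this digit while the product still fits under num
        while (q + place) * den <= num:
            q += place
    return q, num - q * den

# The numbers from the original post:
n, d = 0x016907B2A24ABC16A2E5C004B1, 0x27C
print(trial_digit_divide(n, d) == (n // d, n % d))   # True
```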
Last edited by Cactus_Hugger; 05-31-2007 at 02:49 PM.
long time; /* know C? */
Unprecedented performance: Nothing ever ran this slow before.
Any sufficiently advanced bug is indistinguishable from a feature.
Real Programmers confuse Halloween and Christmas, because dec 25 == oct 31.
The best way to accelerate an IBM is at 9.8 m/s/s.
recursion (re - cur' - zhun) n. 1. (see recursion)
The only thing I can suggest is a google search. http://www.google.ca/search?hl=en&q=...division&meta=
BTW, the "license" link in your contents does not work. Also, I think "GNU Public Licence" should be "GNU General Public Licence", but perhaps what you have is sufficient.
[edit] Also, the last link contains text that should be a link:
Last edited by dwks; 05-31-2007 at 03:31 PM.
Seek and ye shall find. quaere et invenies.
"Simplicity does not precede complexity, but follows it." -- Alan Perlis
"Testing can only prove the presence of bugs, not their absence." -- Edsger Dijkstra
"The only real mistake is the one from which we learn nothing." -- John Powell
Other boards: DaniWeb, TPS
Unofficial Wiki FAQ: cpwiki.sf.net
My website: http://dwks.theprogrammingsite.com/
Projects: codeform, xuni, atlantis, nort, etc.
I'm afraid the Wikipedia article may as well be in Russian for all I understand of it.
I think you're on to something with that division method, however. It wouldn't cut down any processing time as it is because it would still have to work out 1000 * 5 as 5+5+5+5...., but if I
combine it with a lookup table to take shortcuts (like "1000 * 5 makes 5000, so don't bother to work it out!") it might be just what I'm looking for. I'll play with it a bit and see if I can make
it work.
It wouldn't cut down any processing time as it is because it would still have to work out 1000 * 5 as 5+5+5+5....
Multiplication can be implemented better than that. Think of how a student works out a multiplication problem, and you'll get a decent algorithm with just that.
First, for any multiplication problem, especially ones like 1000 * 5, do not do 5 + 5 + 5..., even if that is your algorithm of choice. Reverse your operands: 1000 + 1000 + 1000 + 1000 + 1000,
Take 123 * 45:
Always place the operand with less digits on bottom. (Swapping operands by calling your multiplication function recursively is usually sufficient, depending on your data structures.) From there,
take each single digit, and multiply it as a schoolchild would, getting a total for that digit. For our problem:
1) 5 * 3 = 15. Record the 5, carry the 1. (Sub-answer: '5', Answer: 0)
2) 5 * 2 = 10 + carry = 11. Record the 1, carry the 1. (Sub-answer: '15', answer: 0)
3) 5 * 1 = 5 + carry = 6. (Sub-answer: '615', answer: 0)
4) No carry, no digits. Add sub-answer to answer (0 + 615 = 615)
5) Repeat for next digit (4 * 123 = sub answer of 492. Since this is tens place, multiply the sub answer by 10. (100 for hundreds, 1000 for thousands, etc)). 4920 + 615 = 5535.
Using this algorithm, all the multiplications should be simple (for base-10 or base-2) and not require bignumber work. After that, your bignum adder is called only as many times as there are
digits in the smaller operand, minus 1. (No addition is needed in 1000 * 5.)
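The digit-by-digit procedure above can be sketched in Python (a hypothetical `schoolbook_multiply` helper, for illustration only — Python's built-in integers are already arbitrary precision, so a real bignum library would work on digit or limb arrays instead):

```python
def schoolbook_multiply(x: int, y: int) -> int:
    """Long multiplication as described above: multiply the top operand by
    one digit of the bottom operand at a time, then add the scaled
    sub-answers together."""
    a, b = str(x), str(y)
    if len(b) > len(a):           # place the operand with fewer digits on bottom
        a, b = b, a
    answer = 0
    for place, bottom_ch in enumerate(reversed(b)):
        digit = int(bottom_ch)
        carry = 0
        sub_answer = 0
        for p, top_ch in enumerate(reversed(a)):
            prod = digit * int(top_ch) + carry
            sub_answer += (prod % 10) * 10 ** p   # record the low digit
            carry = prod // 10                    # carry the rest
        sub_answer += carry * 10 ** len(a)        # left-over carry
        answer += sub_answer * 10 ** place        # x10 for tens, x100 for hundreds, ...
    return answer

print(schoolbook_multiply(123, 45))  # 5535, matching the worked example above
```

Note that every per-digit product stays below 10 * 10, so only the sub-answer additions need bignum arithmetic.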
Again, try wiki. This article has info on multiplying. The method I've attempted to demonstrate is the first one, long multiplication. The "Peasant or binary multiplication" may be more useful to
you. (I read that part, and it's easy enough to understand.)
And if it requires this much work with large numbers, perhaps you should think about libgmp, or some other library. No point in reinventing the wheel, and their work is likely going to be faster
and less prone to bugs.
Pseudo code for the basics of division (A / B):
shift B left until B > A, counting the shifts as n
set result to zero
repeat n times:
    shift B right 1 bit
    shift result left 1 bit
    if B <= A then subtract B from A and increment result
result contains the quotient
A contains the remainder
C++ code for this is here:
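Since the linked code isn't reproduced here, a rough Python sketch of the same shift-and-subtract loop (hypothetical function name; illustration only — Python integers are already arbitrary precision):

```python
def shift_subtract_divide(a: int, b: int):
    """Return (quotient, remainder) of a / b by shift-and-subtract."""
    if b <= 0:
        raise ValueError("divisor must be positive")
    shifts = 0
    while b <= a:            # align: shift B left until it exceeds A
        b <<= 1
        shifts += 1
    quotient = 0
    for _ in range(shifts):
        b >>= 1              # bring B back down one bit per step
        quotient <<= 1
        if b <= a:           # B fits: subtract it and record a 1 bit
            a -= b
            quotient += 1
    return quotient, a       # a now holds the remainder

print(shift_subtract_divide(1000, 7))  # (142, 6)
```

The alignment step at the top is the part that is easy to forget: without it, B shifts below its original value and both quotient and remainder come out wrong.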
ab initio
Alpha strings
Introduction to Determinant CI | (ee to Graphical Representation of Alpha | Restricted Active Space CI
graphical representation
Antisymmetry principle
Atomic natural orbitals
Beta strings
see Alpha strings
Why Configuration Interaction? | Energy Contributions of the | Energy Contributions of the | Size of the CI | Size of the CI | Truncated CI is not | Truncated CI is not | Truncated CI is not
Energy Contributions of the | Energy Contributions of the | Energy Contributions of the | Size of the CI | Size of the CI
Complete Active Space
Complete CI
Why Configuration Interaction? | (ee to The Correlation Energy | Energy Contributions of the | Energy Contributions of the | Truncated CI is not | Truncated CI is not
Why Configuration Interaction? | Why are Coupled-Cluster and to Why are Coupled-Cluster and | Truncated CI is not
Davidson correction
Direct CI
Distinct row table
DRT see Distinct row table
Dynamical correlation
FCI see Full CI
Frozen core
Full CI
Why Configuration Interaction? | Why Configuration Interaction? | The Correlation Energy | Slater's Rules | Variational Theorem for the | Variational Theorem for the | Energy Contributions of the | Size of the CI | Size of the CI | Size of the CI | Size of the CI | Size of the CI | Size of the CI | Size of the CI | Truncated CI is not | Truncated CI is not | Restricted Active Space CI | Restricted Active Space CI
Handy, N. C.
Introduction to Determinant CI | Alpha and Beta Strings | Alpha and Beta Strings | Alpha and Beta Strings
Aa. Jensen, H. J.
Jørgensen, P.
see Perturbation theory
Nondynamical correlation
Olsen J.
Pauli principle
see Antisymmetry principle
Perturbation theory
Why Configuration Interaction? | Why are Coupled-Cluster and to Why are Coupled-Cluster and | Truncated CI is not
Restricted Active Space
Alpha and Beta Strings | (ee to Full CI Algorithm
Reverse-lexical ordering
Roos, B. O.
Second quantization
Introduction and Notation | (ee to Second Quantization | Olsen's Full CI
Siegbahn, P. E. M.
Size consistency
(ee to Truncated CI is not
Size extensivity
(ee to Truncated CI is not
Slater's rules
Static correlation
see Nondynamical correlation
Variational theorem
Virtual orbitals
Weyl's dimension formula
C. David Sherrill
[SOLVED] vector identity proof
August 26th 2006, 04:44 AM #1
Need help proving this:
let f(x,y,z) and g(x,y,z) be any C^2 scalar functions. Without using the basic identities of vector analysis, prove that
grad.(f grad g - g grad f) = f grad^2 g - g grad^2 f
(I've put "grad" instead of the upside-down triangle symbol)
To start you off:
$\nabla\cdot(f\nabla g - g\nabla f) = \sum_i \frac{\partial}{\partial x_i}(f\nabla g - g\nabla f)_i = \sum_i \frac{\partial}{\partial x_i}\left( f\frac{\partial g}{\partial x_i} - g\frac{\partial f}{\partial x_i}\right)$
Now apply the product rule to the what is inside the right most summation
and simplify and you should have what you want.
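For the record, the remaining step is a one-line product-rule expansion in which the first-derivative cross terms cancel:

```latex
\sum_i \frac{\partial}{\partial x_i}\left( f\frac{\partial g}{\partial x_i}-g\frac{\partial f}{\partial x_i}\right)
= \sum_i \left( \frac{\partial f}{\partial x_i}\frac{\partial g}{\partial x_i}
+ f\frac{\partial^2 g}{\partial x_i^2}
- \frac{\partial g}{\partial x_i}\frac{\partial f}{\partial x_i}
- g\frac{\partial^2 f}{\partial x_i^2}\right)
= f\nabla^2 g - g\nabla^2 f
```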
Quick start
To get a first glimpse on what the Phoenix framework offers, let us start with an example. We want to find the first odd number in an STL container.
1) Normally we use a functor or a function pointer and pass that in to STL's find_if generic function (sample1.cpp):
Write a function:
bool is_odd(int arg1)
{
    return arg1 % 2 == 1;
}
find_if(c.begin(), c.end(), &is_odd)
2) Using Phoenix, the same can be achieved directly with a one- liner (sample2.cpp):
find_if(c.begin(), c.end(), arg1 % 2 == 1)
The expression "arg1 % 2 == 1" automagically creates a functor with the expected behavior. In FP, this unnamed function is called a lambda function. Unlike 1, the function pointer version, which is
monomorphic (expects and works only with a fixed type int argument), the Phoenix version is completely polymorphic and works with any container (of ints, of doubles, of complex, etc.) as long as its
elements can handle the "arg1 % 2 == 1" expression.
3) Write a polymorphic functor using Phoenix (sample3.cpp)
struct is_odd_ {
template <typename ArgT>
struct result { typedef bool type; };
template <typename ArgT>
bool operator()(ArgT arg1) const
{ return arg1 % 2 == 1; }
};

function<is_odd_> is_odd;
Call the lazy is_odd function:
find_if(c.begin(), c.end(), is_odd(arg1))
is_odd_ is the actual functor. It has been proxied in function<is_odd_> by is_odd (note no trailing underscore) which makes it a lazy function. is_odd_::operator() is the main function body.
is_odd_::result is a type computer that answers the question "What should be our return type given an argument of type ArgT?".
Like 2, and unlike 1, function pointers or plain C++ functors, is_odd is a true lazy, polymorphic functor (rank-2 polymorphic functoid, in FC++ jargon). The Phoenix functor version is fully
polymorphic and works with any container (of ints, of doubles, of complex, etc.) as long as its elements can handle the "arg1 % 2 == 1" expression. However, unlike 2, this is more efficient and has
less overhead especially when dealing with much more complex functions.
This is just the tip of the iceberg. There are more nifty things you can do with the framework. There are quite interesting concepts such as rank-2 polymorphic lazy functions, lazy statements,
binders etc; enough to whet the appetite of anyone wishing to squeeze more power from C++.
norm one approximate identities in separable C* algebras
up vote 0 down vote favorite
I'm trying to prove Corollary 1.4.9 in K. Davidson's book (Exercise 1.5):
If A is a separable C* algebra, then there is an increasing sequence $E_i, i=1,...,\infty$ of positive norm-one elements which form an approximate identity for A.
The hint suggests to choose $E_n$ successively so that $||E_n A_k - A_k|| < \frac{1}{n}$ for all $k$ from $1$ to $n$.
It is clear from how approximate identities are constructed in this book that this can be done with $||E_i||<1$. Why can the $E_i$ be chosen to be increasing and norm = 1?
1 What is $A_k$? (I also feel this question would be more appropriate for math.stackexchange.com; either the hint is mistaken, or the exercise is not "research level", or both.) – Yemon Choi Aug 19
'11 at 2:20
$(A_k)$ is a dense sequence in the C*-algebra. – Jonas Meyer Aug 19 '11 at 3:18
Swift Middle School Blog
Statistics is an area of Math that centers on data collection, presentation, and analysis. For example, the data from Math Competitions show how Swift Middle School homerooms compare. Room 312 has set a goal of 55,000 points by 4-30; do you think they'll reach their goal given the data shown below? The Math Competition also shows room 313 in 1st place for the Fall Competition; will they be able to regain the lead in the Spring? Room 315 has set a goal of 30,000 points; is this goal attainable?
One popular game on Sumdog.com is "Junk Pile"- the game pictured on the logo above:) If you haven't tried out the games on this free web site, why not give it a try?!
Spring 2012 Spring Math Competition

Homeroom | Total Points (4-30-12)
312 | 43,221
313 | 7,632
314 | 2,218
315 | 5,242
Total | 55,122

Overall Totals for Fall and Spring

Rooms | Total Points | Percent
312 | 55,937 | 57%
313 | 30,160 | 32%
315 | 7,370 | 8%
314 | 3,052 | 3%
Total | 96,519 | 100%

Fall 2011 Sumdog Math Competition

Homeroom | Final Points
313 | 22,528
312 | 12,716
315 | 2,128
Total | 38,206
The area that covers the outside of a figure is its surface area. Surface area can be compared to wrapping a present or covering the outside of the object with paper.
The interactive tool at Annenberg's web page shows how to find the surface area of cylinders and prisms. The surface area of a cylinder is the sum of its two circular ends and the middle (lateral) section:
Surface Area = Middle (Circumference * height) + Ends (2 * Area of one end)
Which container below has the greater surface area, and how much more area?
How would you verify and make sure that both containers have a volume of 750ml?
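To check answers, the formulas above can be turned into a short Python sketch (the helper names and the example dimensions are made up here, since the post's container image is not shown):

```python
import math

def cylinder_surface_area(radius, height):
    middle = 2 * math.pi * radius * height   # circumference * height
    ends = 2 * math.pi * radius ** 2         # two circular ends
    return middle + ends

def cylinder_volume(radius, height):
    return math.pi * radius ** 2 * height    # area of one end * height

# A container 8 cm across (radius 4 cm) and about 14.9 cm tall holds
# close to 750 cubic cm, i.e. roughly 750 ml:
print(cylinder_volume(4, 14.9))
print(cylinder_surface_area(4, 14.9))
```

Trying a few (radius, height) pairs with the same volume shows that their surface areas differ, which is the point of the comparison question.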
Math as a Civil Right
Voting rights advocate calls for mathematics literacy
Mathematics literacy is a new civil rights battleground, according to the renowned activist and political organizer Robert Parris Moses. Using the same ideas and methods that he once used to fight
for voting rights in the South, Moses is working to increase access to quality mathematics education through the Algebra Project, a nationwide program that he founded.
Before the 1965 Voting Rights Act, many Americans were excluded from participation in their country's democracy through laws that made literacy a requirement to vote. Today, Moses says, many young
people are excluded from full participation in the country's economy because they lack mathematical literacy.
In the early 1960s, he worked in the Mississippi Delta as part of the Student Nonviolent Coordinating Committee, mixing with sharecroppers and encouraging them to demand the right to vote. "Now we're
trying to build the same kind of momentum behind their grandchildren," Moses says. "We need to get them demanding what everyone says they don't want."
The ubiquity of computers makes abstract, quantitative reasoning skills critical to a wide range of job opportunities. "Information age technology put math on the table as a literacy requirement in
the same way that industrialism made reading literacy a requirement," says Moses. For that reason, he says, the country needs to raise math education standards for all students.
Moses founded the Algebra Project in 1982 with funds from a MacArthur Foundation "genius grant" that he received for his work on voting rights. Initially, he focused on the goal of making sure that
all students learn algebra, which he calls "the gatekeeper of citizenship." When students learn algebra, he says, they make a leap in their ability to manipulate abstract symbolic representations.
Over time, the Algebra Project has developed curricula and methods for teaching algebra in the late middle school and early high school grades. Moses and his fellow project leaders now run
professional development programs for teachers around the country. To encourage student and parent commitment, project leaders conduct community meetings at schools taking part in the program. With
support from the National Science Foundation, General Electric Foundation's Math Excellence program, and the Marguerite Casey Foundation, the Algebra Project currently has a $1.7 million budget.
The Algebra Project is now working with a team of research mathematicians and educators to develop a curriculum for college preparatory math. It uses an experiential approach to introduce new
concepts through concrete events. For example, to understand how to represent motion on a graphical plot, students take a subway or bus trip and then draw a "trip line" to represent the experience.
In Jackson, Miss. and Miami, Fla., Moses has started a program for ninth-graders who are performing in the bottom quartile of their peer group. Students commit to spending 90 minutes a day in math
class throughout their four years of high school, including six weeks each summer. In 2003, the most recent year for which data are available, 56 percent of students in Jackson who participate in the
Algebra Project passed the state algebra test, compared to only 38 percent of their peers who are not taking part.
Some alumni of the Algebra Project are now campaigning for math literacy by working at the community level to increase students' interest in math. Applying their efforts both within and outside
schools, they aim to encourage students to see math as "cool" and as essential to future success. They have now formed a separate organization for this work, the Young People's Project, which has
received a major grant from the National Science Foundation. To develop a culture around learning that mimics the culture of community-based basketball, the group is starting competitive math leagues
in urban and rural communities.
Moses spends most of his time on grass-roots efforts, teaching children and organizing communities, but he also advocates for change on a national scale by encouraging schools and families to demand
quality mathematics education.
He believes that achieving such a goal might even require Constitutional change. In 1973, the Supreme Court ruled that the U.S. Constitution does not guarantee the right to an education, and that it
is therefore not a violation of the Fourteenth Amendment's equal protection clause for schools in poor areas to receive less money than schools in wealthy areas.
"Because of the deep racial and class divisions," Moses says, "we've never been able to quite hold ourselves responsible for all the children in the country."
[Numpy-discussion] Status of numeric3 / scipylite / scipy_core
Tim Hochberg tim.hochberg at cox.net
Thu Mar 17 12:20:22 CST 2005
Perry Greenfield wrote:
> Before I delve too deeply into what you are suggesting (or asking),
> has the idea to have a slice be equivalent to an index array been
> changed? For example, I recall seeing (I forget where) the suggestion
> that
> X[:,ind] is the same as X[arange(X.shape[0]), ind]
> The following seems to be at odds with this. The confusion of mixing
> slices with index arrays led me to just not deal with them in
> numarray. I thought index arrays were getting complicated enough.
Yes! Not index arrays by themselves, but the indexing system as a whole
is already on the verge of being overly complex in numarray. Adding
anything more to it is foolish.
> I suppose it may be useful, but it would be good to give some
> motivating, realistic examples of why they are useful. For example, I
> can think of lots of motivating examples for:
> using more than one index array (e.g., X[ind1, ind2])
> allowing index arrays to have arbitrary shape
> allowing partial indexing with index arrays
My take is that having even one type of index array overloaded onto the
current indexing scheme is questionable. In fact, even numarray's
current scheme is too complicated for my taste. I particularly don't
like the distinction that has to be made between lists and arrays on one
side and tuples on the other. I understand why it's there, but I don't
like it.
Is it really necessary to pile these indexing schemes directly onto the
main array object. It seems that it would be clearer, and more flexible,
to use a separate, attached adapter object. For instance (please excuse
the names as I don't have good ideas for those):
X.rows[ind0, ind1, ..., ind2, :]
would act like take(take(take(X, ind0, 0), ind1, 1), ind2, -1)). That is
it would select the rows given by ind0 along the 0th axis, the rows
given by ind1 along the 1st axis (aka the columns) and the rows given by
ind2 along the -2nd axis.
X.atindex[indices] would give numarray's current indexarray behaviour.
Etc, etc for any other indexing scheme that's deemed useful.
As I think about it more I'm more convinced that basic indexing should
not support index arrays at all. Any indexarray behaviour should be
implemented using helper/adapter objects. Keep basic indexing simple.
This also gives an opportunity to have multiple different types of index
arrays behaviour.
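A minimal Python/NumPy sketch of the proposed adapter (the class name is hypothetical, and only list indices and ':' slices are handled — no Ellipsis):

```python
import numpy as np

class Rows:
    """Adapter sketch: each index list selects rows along successive axes
    via take(), instead of overloading the array's own [] operator."""
    def __init__(self, arr):
        self.arr = arr
    def __getitem__(self, key):
        if not isinstance(key, tuple):
            key = (key,)
        out = self.arr
        for axis, ind in enumerate(key):
            if isinstance(ind, slice):
                continue              # a ':' leaves that axis untouched
            out = np.take(out, ind, axis=axis)
        return out

X = np.arange(24).reshape(4, 6)
print(Rows(X)[[0, 2], [1, 3]])        # rows 0 and 2, then columns 1 and 3
```

Because each axis is handled by an independent take(), the result is an outer-product selection, with no interaction between the index lists.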
> Though I'm not sure I can think of good examples of arbitrary
> combinations of these capabilities (though the machinery allows it).
> So one question: is there a good motivating example for
> X[:, ind]? By the interpretation I remember (maybe wrongly), I'm not
> sure I know where that would be commonly used (it would suggest that
> all the sizes of the sliced dimensions must have consistent lengths
> which doesn't seem typical). Anyone have good examples?
> Perry
> On Mar 17, 2005, at 1:32 AM, Travis Oliphant wrote:
>> Travis Oliphant wrote:
>>> - Where there is more than one index array, what should replace the
>>> single-axis subspaces that the indexes are referencing? Remember,
>>> all of the single-axis subspaces are being replaced with one
>>> "global" subspace. The current proposal states that this indexing
>>> subspace should be placed first and the "remaining subspaces" pasted
>>> in at the end.
>>> Is this acceptable, or can someone see a problem??
>> Answering my own question...
>> I think that it makes sense to do a direct subspace replacement
>> whenever the indexing arrays are right next to each other. In other
>> words, I would just extend the "one-index array" rule to
>> "all-consecutive-index-arrays" where of course one index array
>> satisfies the all-consecutive requirement.
>> Hence in the previous example:
>> X[:,ind1,ind2,:,:] would result in a (10,2,3,4,40,50) with the
>> (20,30)-subspace being replaced by the (2,3,4) indexing subspace.
>> result[:,i,j,k,:,:] = X[:,ind1[i,j,k],ind2[i,j,k],:,:]
>> Any other thoughts. (I think I will implement this initially by just
>> using swapaxes on the current implementation...)
>> -Travis
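For what it's worth, NumPy's eventual fancy-indexing rules match this behaviour for consecutive index arrays; a small sketch with scaled-down shapes (the thread's (10, 20, 30, 40, 50) example, shrunk to keep it light):

```python
import numpy as np

# The thread's X of shape (10, 20, 30, 40, 50) with (2, 3, 4)-shaped index
# arrays, shrunk here to (4, 5, 6, 7, 8) with (2, 3)-shaped indices:
X = np.arange(4 * 5 * 6 * 7 * 8).reshape(4, 5, 6, 7, 8)
ind1 = np.zeros((2, 3), dtype=int)
ind2 = np.ones((2, 3), dtype=int)

result = X[:, ind1, ind2, :, :]
print(result.shape)   # (4, 2, 3, 7, 8): the (5, 6) subspace is replaced
                      # in place by the (2, 3) indexing subspace, and
                      # result[:, i, j] == X[:, ind1[i, j], ind2[i, j]]
```

When the index arrays are separated by slices, there is no unambiguous position, and the indexing subspace moves to the front instead — exactly the complication discussed above.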
Eight New Distinguished Research Chairs Join Perimeter Institute
Dr. Neil Turok, Director of Canada’s Perimeter Institute for Theoretical Physics (PI), is pleased to announce the appointment of eight more outstanding international scientists as Perimeter Institute
Distinguished Research Chairs.
In making the announcement, Dr. Turok stated, "We are thrilled to welcome these eight world-leading scientists to PI’s research community. Science is an inherently human process, and bringing the
right people together is often the key to success. Each of these new DRCs will bring significant new ideas and expertise to PI. We cannot tell exactly what they will do, but based on their past
record we know it will be very exciting."
Distinguished Research Chairs come to PI for extended periods each year to do research. The program enables them to be part of Perimeter’s scientific community while retaining permanent positions at
their home institutions. The new appointees join PI’s 19 current Distinguished Research Chairs.
James Bardeen is an Emeritus Professor of Physics at the University of Washington in Seattle. He has made major contributions in general relativity and cosmology, including the formulation of the
laws of black hole mechanics with Stephen Hawking and Brandon Carter and the development of a gauge-invariant approach to cosmological perturbations and the origin of large scale structure in the
present universe from quantum fluctuations during an early epoch of inflation. His recent research has focused on improving calculations of the generation of gravitational radiation from merging
black hole and neutron star binaries by formulating the Einstein equations on asymptotically null constant mean curvature hypersurfaces. This makes possible numerical calculations with an outer
boundary at future null infinity, where waveforms can be read off directly, without any need for extrapolation. Dr. Bardeen received his PhD from Caltech under the direction of Richard Feynman.
Ganapathy Baskaran is an Emeritus Professor at the Institute of Mathematical Sciences, Chennai in India, where he has recently founded the Quantum Science Centre. He has made important contributions
to the field of strongly correlated quantum matter. Novel emergent quantum phenomena in matter, including biological ones, are his passion and research focus. He is well known for his contributions
to the theory of high temperature superconductivity and for discovering emergent gauge fields in strongly correlated electron systems. He predicted p-wave superconductivity in Sr2RuO4, a system
believed to support Majorana fermion mode, which is a popular qubit for topological quantum computation. In recent work, he predicted room temperature superconductivity in optimally doped graphene.
From 1976-2006, Baskaran contributed substantially to the Abdus Salam International Centre for Theoretical Physics in Trieste, Italy where he worked closely with scientists from third and first world
countries and helped run scientific programs. He received the S.S. Bhatnagar Award from the Prime Minister of India (1990), the Alfred Kasler ICTP Prize (1983), Fellowships of The Indian Academy of
Sciences (1989), Indian National Science Academy (1991) and Third World Academy of Sciences (2008), and the Distinguished Alumni Award of the Indian Institute of Science, Bangalore (2008).
James S. Gates is the John S. Toll Professor and Director for the Center for String and Particle Theory at the University of Maryland, College Park. He has made numerous contributions to
supersymmetry, supergravity, and superstring theory, including the introduction of complex geometries with torsion (a new contribution in the mathematical literature), and the suggestion of models of
superstring theories that exit purely as four-dimensional constructs similar to the standard model of particle physics.
Professor Gates is a past recipient of the Public Understanding & Technology Award from the American Association for the Advancement of Science (AAAS) and the Klopsteg Award from the American
Association of Physics Teachers. Professor Gates is a Fellow of the American Physical Society, a Fellow of AAAS, and a past President of the National Society of Black Physicists. In 2011, he was
elected to the American Academy of Arts and Sciences. He currently serves on the U.S. President’s Council of Advisors on Science and Technology, the Maryland State Board of Education, and the Board
of Directors of the Fermi National Laboratory, and is on the Board of Trustees for the Society for Science and the Public.
Professor Gates’ current research probes questions on the relation between a set of graphs (given the name of 'Adinkras' from traditional African cultures), supersymmetry and a class of codes similar
to those that allow browsers to operate in an error-free manner.
Frans Pretorius is a Professor of Physics at Princeton University. His primary field of research interest is general relativity, specializing in numerical solution of the field equations. His work
has included studies of gravitational collapse, black hole mergers, cosmic singularities, higher-dimensional gravity, models of black hole evaporation, and using gravitational wave observations to
test the dynamical, strong-field regime of general relativity. He also designs algorithms to efficiently solve the equations in parallel on large computer clusters, and software to manipulate and
visualize the simulation results. Among his honours, in 2007, Dr. Pretorius was awarded an Alfred P. Sloan Research Fellowship, and was the 2010 recipient of the Aneesur Rahman Prize for
Computational Physics of the American Physical Society. He is a Scholar in the Canadian Institute for Advanced Research (CIFAR) Cosmology and Gravity Program.
Eva Silverstein is a Professor of Physics at Stanford University in the department of physics and the Stanford Linear Accelerator Center (SLAC). Dr. Silverstein's major contributions include
predictive new mechanisms for inflationary cosmology, which helped motivate a more systematic understanding of the process and the role of UV-sensitive quantities in observational cosmology;
mechanisms for singularity resolution in string theory; a novel duality in string theory between extra dimensions and negative curvature; extensions of the AdS/CFT correspondence to more realistic
field theories (with applications to particle physics and condensed matter model building) and to landscape theories; and simple mechanisms for stabilizing the extra dimensions of string theory. She
is a former MacArthur Fellow and past recipient of a Sloan Research Fellowship. Dr. Silverstein's current interests range over many of these areas.
Paul Steinhardt is the Albert Einstein Professor in Science and Director of the Princeton Center for Theoretical Science at Princeton University. Dr. Steinhardt is a Fellow of the American Physical
Society (APS) and a member of the National Academy of Sciences. He shared the P.A.M. Dirac Medal from the International Centre for Theoretical Physics for the development of the inflationary model of
the universe, and the Oliver E. Buckley Prize of the APS for his contributions to the theory of quasicrystals. His research interests include particle physics, astrophysics, cosmology and condensed
matter physics. Recently, with Neil Turok, he has developed a cyclic model for cosmology, according to which the big bang is explained as a collision between two "brane-worlds" in M-theory. In
addition to his continued research on inflationary and cyclic cosmology, Dr. Steinhardt has been one of the developers of a new class of disordered "hyperuniform" photonic materials with complete
bandgaps, and he conducted a systematic search for natural quasicrystals that has culminated in discovering the first known example. He is currently organizing an expedition to Far Eastern Russia to
find more samples and study the local geology where they are found.
Gerard ’t Hooft is a Professor at the Institute for Theoretical Physics at Utrecht University. He shared the 1999 Nobel Prize in Physics with Martinus J. G. Veltman "for elucidating the quantum
structure of electroweak interactions." His research interests include gauge theories in elementary particle physics, quantum gravity and black holes, and fundamental aspects of quantum physics. In
addition to being a Nobel laureate, Dr. 't Hooft is a past winner of the Wolf Prize, the Lorentz Medal, the Franklin Medal and the High Energy Physics Prize from the European Physical Society, among
other honours. He is a member of the Royal Netherlands Academy of Arts and Sciences (KNAW) and is a foreign member of many other science academies, including the French Académie des Sciences, the
National Academy of Sciences (US), and the Institute of Physics (UK).
Professor 't Hooft's present research concentrates on the question of nature's dynamical degrees of freedom at the tiniest possible scales. In his latest model, local conformal invariance is a
spontaneously broken symmetry, which may have very special implications for the interactions between elementary particles.
Senthil Todadri is an Associate Professor of Physics at the Massachusetts Institute of Technology (MIT). Dr. Todadri’s research interests are in condensed matter theory. Specifically, he is working
to develop a theoretical framework to describe the behaviour of electronic quantum matter in circumstances in which individual electrons have no integrity. A prime example is the quest for a
replacement to the Landau theory of Fermi liquids that describes many metals extremely successfully but fails in a number of situations studied in modern experiments in condensed matter physics. He
is a past Sloan Research Fellow and winner of a Research Innovation Award from the Research Corporation for Science Advancement. | {"url":"https://perimeterinstitute.ca/news/eight-new-distinguished-research-chairs-join-perimeter-institute","timestamp":"2014-04-19T07:14:48Z","content_type":null,"content_length":"40520","record_id":"<urn:uuid:cbbdfc22-b3dc-4b08-9b72-98bc4898d703>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00300-ip-10-147-4-33.ec2.internal.warc.gz"} |
Figure 1.
Scores of individual doctors on Vocation/Engagement (ordinate) and BurnedOut (abscissa). The fitted line is a lowess curve. Note that because both measures are factor scores, they are expressed as
z-scores (that is, a mean of zero and a standard deviation of one).
McManus et al. BMC Medicine 2011 9:100 doi:10.1186/1741-7015-9-100
Use Kirchhoff's Rules To Analyze The Circuit In ... | Chegg.com
(a) Let I[1] be the branch current through R[1] and I[2] be the branch current through R[2]. Write Kirchhoff's loop rule relation for a loop that travels through battery 1, resistor 1, and battery 2.
(Use the following as necessary: ε[1], I[1], R[1], ε[2], I[2], and R[2]. Do not substitute numerical values, use variables only.)
0= _____
(b) Write Kirchhoff's loop rule relation for a loop that travels through battery 2 and resistor 2. (Use the following as necessary: ε[1], I[1], R[1], ε[2], I[2], and R[2]. Do not substitute numerical
values, use variables only.)
(c) You should now have two equations and two unknowns (I[1] and I[2]). Solve for the two branch currents. | {"url":"http://www.chegg.com/homework-help/questions-and-answers/use-kirchhoff-s-rules-analyze-circuit-figure--assume-resistance-values-r1-2100-r2-1100-bat-q1287241","timestamp":"2014-04-21T11:38:26Z","content_type":null,"content_length":"26145","record_id":"<urn:uuid:480572f9-5bcb-42b3-bf36-21e0c527cb37>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00573-ip-10-147-4-33.ec2.internal.warc.gz"} |
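Part (c) is just a small linear system. As a sketch (the EMF and resistance values below are hypothetical, and the sign convention — EMF gains positive, IR drops negative when traversing each loop along its branch current — is an assumption about the circuit's layout, since the diagram is not reproduced here), the two loop equations can be solved directly:

```python
# Hypothetical values; the actual circuit's numbers are not given in the text.
emf1, emf2 = 9.0, 6.0      # battery EMFs in volts (assumed)
R1, R2 = 2100.0, 1100.0    # resistances in ohms (assumed)

# Assumed loop equations (signs depend on the actual circuit diagram):
#   loop through battery 1, R1, battery 2:  0 = emf1 - I1*R1 - emf2
#   loop through battery 2 and R2:          0 = emf2 - I2*R2
I1 = (emf1 - emf2) / R1    # branch current through R1
I2 = emf2 / R2             # branch current through R2

print(I1, I2)
```

With these assumed values the branch currents come out to about 1.43 mA and 5.45 mA.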
FOM: The meaning of truth
Joe Shipman shipman at savera.com
Wed Nov 1 14:38:53 EST 2000
First of all, we can agree that the notion of true-in-a-model is
unproblematic, and for mathematicians willing to commit to an ontology
containing infinite sets like {0,1,2,3,....} the problem with statements
like Goldbach's Conjecture (henceforth GC) is purely epistemological and
the statement "GC is true but not provable" makes sense (and can be
fully formalized if we choose a proof system like ZFC).
Professor Sazonov does not like to hear about "the" integers and would
not accept the structure {N,0,1,+,*} as a well-defined object. He
therefore has trouble understanding what it could mean for an
arithmetical statement to be "just true" without reference to a theory,
and does not see how we could make any sense out of "A is true" absent a
proof of A. Thus, he finds statements like "It is possible that the
Goldbach conjecture is true but not provable" incoherent. I am
sympathetic to this formalist position but do not agree with it.
Professor Kanovei, while still unwilling to accept the infinite
structure {N,0,1,+,*} as a complete, well-defined object, lists three
ways in which the statement "GC is true" can be given a meaning:
(a) "it has been correctly proved mathematically"
[Kanovei did not specify an axiom system, but I will assume that he has
one, and that he will accept statements correctly proven from it as
"true" but will not necessarily accept a statement proven from a
different axiom system as "true". I would hope his axiom system
includes PA but do not expect that it includes ZF since ZF allows a
truth definition for arithmetical statements like GC.]
(b) "it is true as a fact of nature"
[Here Kanovei refers to counting pebbles, but more generally to
statements about the physical universe. In the case of facts that can
be established by computation, this is only different from case (a) as a
matter of scale. For example, Appel and Haken proved (in a traditional,
humanly verifiable manner) that the 4-color conjecture 4CC was implied
by a certain logically simpler statement S that a particular computer
program had a particular output, and then verified S by a computation on
a real physical machine. Only the size of the computation prevents us
from regarding 4CC as "proven" in the traditional sense, because we can
only duplicate the experiment (running the computer) and not verify the
proof directly, but we are still justified in believing 4CC and calling
it "true" and even saying that we have established it to be true and we
know it to be true. The justification lies in our scientific
understanding of the way computers work and their record of reliability.]
(c) "That it is true is given in a sacred script" [I suspect Kanovei is
joking here, thinking that nobody believes in the truth of a
mathematical statement because of a religious text. But this is not so
clear. A committed traditional theist will probably believe not only in
the existence of infinite sets (since infinity is a traditional
attribute of God) but also in arithmetical corollaries like Con(PA) and
maybe even Con(ZF).]
In the following, I will use the Twin Prime Conjecture (TPC) rather than
GC as an example, since it is Pi^0_2 and neither it NOR its negation
can, if true, be finitely verified so far as we know. This will avoid
some confusion.
The following seems like it might be acceptable to Professor Sazonov:
The only way that we could come to KNOW that there are arbitrarily large
twin primes (which is the same as saying "TPC is true" as far as I'm
concerned; the general truth predicate, as opposed to
truth-in-a-given-model, has the property that saying " 'A' is true" is
the same as saying "A") is by a mathematical proof. The only way we
could come to know TPC is not true (that there is a largest twin prime)
is by a mathematical proof.
By the Principle of Parsimony, we should therefore not introduce a
notion of "truth" that is distinct from provability because it is not
needed and accomplishes nothing for us.
But I disagree with this (does Kanovei?). In addition to "sacred
scriptures", I would also allow the possibility of empirical discovery
of a mathematical-sentence-generating oracle which had never been known
to emit a sentence that was known to be either false or inconsistent
with its previous utterances. The degree of faith we would have in the
truth of the sentences it provides would depend on how good a physical
model we had for it; but for sufficiently plausible physical models and
sufficiently impressive oracular performance the epistemological status
of an utterance, while not attaining "mathematically proven", might
still qualify as "scientifically known". We "know" the 4-color map
theorem to be true although no human has verified the proof because we
have a sufficiently plausible model of how our computing machines work
and they have a sufficiently good performance. The only difference here
is that the physical experiment we run will not necessarily be
algorithmically representable so that a human cannot "in principle"
verify it as he could for the proof of 4CC.
-- Joe Shipman
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2000-November/004508.html","timestamp":"2014-04-16T05:59:49Z","content_type":null,"content_length":"7515","record_id":"<urn:uuid:86f8354c-d429-409c-aca3-715ecdf835b3>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
Scale Reading of Normal Force: Due by Midnight!
Remember that a scale records the value of
the normal force, not a person’s actual weight.
Draw a FBD. Rotate your coordinate system.
A 61 kg student weighs himself by standing
on a scale mounted on a skateboard that is
rolling down an incline, as shown. Assume
there is no friction so that the force exerted
by the incline on the skateboard is normal to
the incline.
The acceleration of gravity is 9.81 m/s².
What is the reading on the scale if the angle
of the slope is 29°?
Answer in units of N
I have my free body diagram with Fn going up and to the left and Fg going straight down. I do not understand how to rotate the FBD or what that would accomplish...
ANY HELP WOULD BE MUCH APPRECIATED!!! | {"url":"http://www.physicsforums.com/showthread.php?t=552457","timestamp":"2014-04-16T16:15:15Z","content_type":null,"content_length":"25545","record_id":"<urn:uuid:2993cd89-10c1-4704-a2d7-6bf04042769a>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00640-ip-10-147-4-33.ec2.internal.warc.gz"} |
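For what it's worth, here is a numeric sketch of the usual resolution. Rotating the coordinate system so one axis runs along the incline puts all of the acceleration (g·sinθ, down the slope) on that axis; perpendicular to the incline the net force is zero, so the scale reads N = mg·cosθ. This assumes the scale's surface is parallel to the frictionless incline:

```python
import math

m = 61.0          # kg
g = 9.81          # m/s^2
theta = math.radians(29.0)

# Perpendicular to the incline the net force is zero:
#   N - m*g*cos(theta) = 0
N = m * g * math.cos(theta)
print(round(N, 1))   # about 523.4 N
```

That is the point of rotating the FBD: in the tilted axes the unknown normal force appears in an equation by itself instead of being mixed with the acceleration.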
Three SIP faculty will be examining students in four different SIP areas, as listed below. Students should specify which 3 of the 4 areas they want covered in their exam, and their available
schedule during the exam week, as soon as possible.
• Convolution
• Fourier Transforms
• Continuous and Discrete Time Systems
◦ finding system inputs/outputs, system properties
C.L. Phillips and J.M. Parr, Signals, Systems, and Transforms, Chapters 1-7,9-11
• Random Variables and Probability
• Probability fundamentals - one and multiple random variables, expectations, characteristic functions, independence/uncorrelatedness.
• Classification of random processes - 2nd order characterization (correlation, power spectral density), ergodicity, stationarity
• Filtering by LTI systems
• Special Processes - Markov Chains/Processes, Gaussian Processes.
Y. Viniotis, Probability & Random Proc. for Electrical Engineers, McGraw Hill, 1998.
A.V. Oppenheim, R.W. Schafer, J.R. Buck, Discrete Time Signal Processing, Prentice Hall 1999
Chapter 7: Filter Design Techniques
Chapter 8.4-8.10: The Discrete Fourier Transform
Chapter 9: Computations of Discrete Fourier Transform
R.C. Gonzales and R.E. Woods, Digital Image Processing, Addison Wesley, 1992.
Chapter 3: Image Transforms
Chapter 4: Image Enhancement
Chapter 6: Image Compression
Chapter 7: Image Segmentation | {"url":"http://www.ee.washington.edu/academics/graduate/qual/syllabi/sip.a02.html","timestamp":"2014-04-16T16:19:25Z","content_type":null,"content_length":"3334","record_id":"<urn:uuid:4f5b1859-6846-479e-b0f0-7c1e43b472a3>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00223-ip-10-147-4-33.ec2.internal.warc.gz"} |
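As a small concrete illustration of two topics from the list above (convolution and the discrete Fourier transform), the circular convolution theorem — the DFT of a circular convolution equals the pointwise product of the DFTs — can be checked with a naive pure-Python DFT (the sequences below are made-up examples):

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, same naive form with a 1/N factor."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def circular_convolve(x, h):
    """Length-N circular (modulo-N) convolution."""
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
h = [1.0, 0.0, 0.5, 0.0]

direct = circular_convolve(x, h)
via_dft = [v.real for v in idft([X * H for X, H in zip(dft(x), dft(h))])]
print(direct)    # [2.5, 4.0, 3.5, 5.0]
print(via_dft)   # same values, up to floating-point rounding
```

The FFT covered in Oppenheim & Schafer's Chapter 9 computes the same transform, just in O(N log N).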
Bounds for the triple integrals!
November 24th 2008, 02:33 PM
Hello peeps!
I need help with setting up the integrals. I know how to evaluate but I need help setting it up!
1 ) The region common to the interiors of the cylinders x^2 + y^2 = 100 and x^2 + z^2 = 100. (cylindrical coordinates)
And also problems 14 (Headbang)(spherical coordinates), 18 (cylindrical coordinates) in the attachment.
please elaborate how you set up the integral on #14.
November 24th 2008, 05:54 PM
Chris L T521
#18 : By cylindrical coordinates:
$0\leq z\leq r^2$
$0\leq r\leq 10$
$0\leq\vartheta\leq 2\pi$
So, your integral is $\int_0^{2\pi}\int_0^{10}\int_0^{r^2}r\,dz\,dr\,d\vartheta$
I'm sure you're capable of solving this...
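A quick numeric sanity check of that setup: the inner z-integral gives $\int_0^{r^2} r\,dz = r^3$, the $\vartheta$-integral contributes a factor of $2\pi$, and $\int_0^{10} r^3\,dr = 2500$, so the integral should equal $5000\pi \approx 15707.96$. A crude midpoint sum confirms it:

```python
import math

# Integrate r^3 over 0 <= r <= 10 with a midpoint rule,
# then multiply by 2*pi for the theta integral.
steps = 10_000
dr = 10.0 / steps
radial = sum(((i + 0.5) * dr) ** 3 for i in range(steps)) * dr
volume = 2 * math.pi * radial

print(volume, 5000 * math.pi)   # both about 15707.96
```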
Contemporary Concepts of Condensed Matter Science is dedicated to clear expositions of the concepts underlying experimental, theoretical and computational developments, new phenomena and probes at
the advancing frontiers of the rapidly evolving sub-fields of condensed matter science. The term "condensed matter science" is central, because the boundaries between condensed matter physics,
condensed matter chemistry, material science and biomolecular science are disappearing.
The overall goal of each volume in the Series is to provide the reader with an intuitively clear discussion of the underlying concepts and insights that are the "driving force" for the high profile
major developments of the sub-field, while providing only the amount of theoretical, experimental and computational detail, data, and results that would be needed for the reader from whatever field
to gain a conceptual understanding of the subject. | {"url":"http://felix.physics.sunysb.edu/~allen/electron-transport.html","timestamp":"2014-04-19T19:34:09Z","content_type":null,"content_length":"3706","record_id":"<urn:uuid:bfe1a017-9f08-4d33-b47d-47304db80dc4>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
August 2
EDITORIAL WRITERS: Any time someone calls the presidential race a "marathon" is a chance to point out a Kenyan will likely win it.
Okay, so there's an easy one for your students, right? I thought I'd try it with mine and let them argue a bit over what the answer is and why they came up with it. When someone asks, I'll tell them
that this is why we developed the arbitrary rule of PEMDAS - so that there would be no confusion, so that anyone who approached it came back with something we could all agree on.
But that's too easy, you say?
Well, following the break (Sorry to those reading in Google Reader) is a small portion of the Facebook feed. This is 100 or so of 254,542! According to the poster, only 30% got it right.
Naturally, the Onion was on the case first.
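The viral expression itself isn't reproduced above, but the genre is always the same: mixed division and multiplication at equal precedence. Programming languages encode the same left-to-right convention that PEMDAS prescribes, so you can let the computer arbitrate — here with a representative expression of that genre, not the original one:

```python
# Operators of equal precedence associate left to right,
# just as PEMDAS/BODMAS prescribes:
value = 6 / 2 * (1 + 2)   # parentheses first, then 6/2 = 3.0, then 3.0*3
print(value)              # 9.0, not 1.0
```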
from Joanne:
Stop teaching dumbed-down algebra to unprepared eighth graders and we can solve America’s math problem, argues Jacob Vigdor in an American Enterprise Institute report. Unprepared students don’t
benefit from Algebra Lite and prepared students are turned off to math, he argues.
Charlotte-Mecklenburg schools pushed most eighth graders into algebra classes, he writes. Students scored much lower on the end-of-course exam than those allowed to take algebra in ninth grade — and
accelerated students did worse in geometry. The district abandoned the experiment after two years.
Our math problems are largely “self-inflicted,” Vigdor writes. In order to bring low performers up to the standard, schools have lowered standards.
Closing the achievement gap by improving the performance of struggling students is hard; closing the gap by reducing the quality of education offered to high performers—for example, by
eliminating tracking and promoting universal access to “rigorous” courses while reducing the definition of rigor—is easy.
The first step to improving math performance is to concede that students differ in abilities, he concludes.
I have an idea. Let's try this everywhere.
A friend uploaded his helmet cam video already.
I was shooting the ballistae over his head at the enemy shield wall (especially Black Talon - those guys with dark red shield with white circle and a black raptor talon), so I'm not in these scenes.
So Andrew Hacker got on the NYTimes and asked "Is Algebra Necessary?"
"This debate matters. Making mathematics mandatory prevents us from discovering and developing young talent. In the interest of maintaining rigor, we’re actually depleting our pool of
From a cognitive sense, brain-power depletion is a myth and he's barking up the wrong tree to start with it. His points get worse as he continues.
"The toll mathematics takes begins early. To our nation’s shame, one in four ninth graders fail to finish high school. Most of the educators I’ve talked with cite algebra as the major academic
Really? Every school I know of has some program in place for those who cannot pass algebra - consumer math, business math, basic algebra - they can get their three math credits and move on. Then,
too, the educators who responded to this man's anecdotal survey probably had no idea of the real reasons kids dropped out - math was just the convenient scapegoat to help make his argument.
States such as California who make Algebra I a requirement for 8th grade are forgetting that not everyone will be a STEM major and that there are plenty of adults in the world who can't "do math" yet
who are doing just fine and consider themselves successful. Mandatory 8th grade algebra is bad policy. I have never been a proponent of 8th grade algebra except for the 15% or so who are ready for
it. Making it a requirement for all 8th grade students is educational abuse in my book.
There are certainly students who will never do well in a strictly abstract, mathematically intense field ... but that's okay. The world needs artists, too. And property management, and construction,
and politicians, and entrepreneurs, and salesmen, and fast food, and so on. Where Hacker goes astray is in taking the previous paragraph and then running out of the algebra classroom,
Pied-Piper-style, taking the students with him.
Why? All high school students should climb the mathematical ladder to the greatest height they can manage. 9th graders are not in much of a position to know their strengths and weaknesses and should
be pushed, cajoled, tutored, helped, or cheered as necessary. It's not until the summer jobs between 10th and 11th grade that students start getting a sense of "this is necessary" or "this is useful"
and decide to apply themselves a little more.
"If you can't swim the first time
you get in the water, you should never try
to learn." seems to be his message.
Why not let them experience algebra first? Many students just don't realize what they are good at. Many parents and elementary school teachers (the two biggest influences in students' lives so far)
are rarely good at math and pass on that limitation to the kids. Give me some time to reverse that before you write off this generation as math losers.
The depressing conclusion of a faculty report: “failing math at all levels affects retention more than any other academic factor.” A national sample of transcripts found mathematics had twice as
many F’s and D’s compared as other subjects. (Which means that math teachers are grading for knowledge and other disciplines for development? Hardly an argument for changing the curriculum. C.)
If students fail in the abstract path, they should be diverted to a parallel ladder of courses that are less abstract (more vo-tech, perhaps) while still teaching them math.
Hacker is also fairly sloppy about mixing in topics from pre-calculus and calculus and using somewhat obscure terminology to scare the reader ... "vectorial angles and discontinuous functions," two
topics that wouldn't be a graduation requirement anywhere and I wouldn't expect very many of the NYT readers to know (I'm not sure Hacker knows either - I think he just picked up a math book).
I'll throw in my favorite, "But there’s no evidence that being able to prove (x² + y²)² = (x² - y²)² + (2xy)² leads to more credible political opinions or social analysis." Well, Andrew, no one ever
claimed that it would.
It’s true that students in Finland, South Korea and Canada score better on mathematics tests. But it’s their perseverance, not their classroom algebra, that fits them for demanding jobs.
Actually, it's their perseverance that helps them score better on the tests. It's that same perseverance that dictates whether they will succeed in college and in life. Removing algebra doesn't
change that.
But a definitive analysis by the Georgetown Center on Education and the Workforce forecasts that in the decade ahead a mere 5 percent of entry-level workers will need to be proficient in algebra
or above.
Not to press the point, but entry-level jobs rarely require much of anything. They won't ask the entry-level intern to write a report, but grammar and writing skills are necessary. An entry-level
construction worker isn't interpreting designs, and the entry-level brewery go-fer isn't using his biology knowledge. It's at the next levels, where the decisions are made, that companies require the
writing, programming, algebra, science, and tech skills.
I'll end with this:
Go ahead and deny the algebra classes to anyone who seems unlikely to use the skills. Tell them that they aren't going to take algebra, geometry, calculus. Remove them from all the purely theoretical
classes you want.
If you aren't sued for discrimination and neglect, keep on doing it. My students will continue to return and thank me for what we did in class and have an easier time of it because your students
won't even be in their rear-view mirror.
I’ll grant that with an outpouring of resources, we could reclaim many dropouts and help them get through quadratic equations. But that would misuse teaching talent and student effort. It would
be far better to reduce, not expand, the mathematics we ask young people to imbibe.
Works for me. | {"url":"http://mathcurmudgeon.blogspot.com/2012_08_01_archive.html","timestamp":"2014-04-19T17:43:30Z","content_type":null,"content_length":"142174","record_id":"<urn:uuid:cc6216d2-c297-40f6-8a31-729f043579f3>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00154-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding the number of triangles with a SSA triangle given angle a, side a, and side b
November 13th 2011, 02:32 PM
Solve the triangle completely and determine how many triangles there are:
Given: angle a equals 41.2 degrees, side a equals 8.1, side b equals 10.6
I worked out this triangle and got:
---angle c=79.3 degrees
---angle b=59.5 degrees
---# of triangles 2
the second triangle
---angle b=120.5 degrees
---angle c=18.3 degrees
I got the second triangle because, for an SSA example having side a = 8.1, side b = 10.6, and angle a = 41.2:
1. Find h = b sin(angle A); if h > a there is no triangle.
h = 10.6 sin 41.2 = 6.98; 6.98 is not > 8.1, so there must be a triangle.
2. If h = a there is one right triangle: angle b is 90 degrees and side b is the hypotenuse; solve using right-angle trig.
6.98 is not equal to 8.1.
3. If h < a < b there are 2 triangles (this one is correct: 6.98 < 8.1 < 10.6), one with angle b acute and one with angle b obtuse. Find the acute b using the law of sines: sin 41.2 / 8.1 = sin B / 10.6, so sin B = 10.6 sin 41.2 / 8.1 and angle B = 59.5. (59.5 cannot be the acute angle because angle a is 41.2 degrees, so what am I doing wrong?) Subtract it from 180 degrees to get the obtuse angle b. In each of the two triangles, find angle c using angle c = 180 degrees - angle a - angle b, and side c using the law of sines.
4. If a is greater than or equal to b there is only one triangle and angle b is acute (angle a or angle c might be obtuse). Find angle b using the law of sines, then find angle c using angle c = 180 - angle a - angle b. Find side c using the law of sines.
6.98 is not > or = 8.1
The issue is that angle b is supposed to be an acute angle but it is not because it is 59.5 degrees while angle a is 41.2 degrees. What did I do wrong? | {"url":"http://mathhelpforum.com/trigonometry/191826-finding-number-triangles-ssa-triangle-given-angle-side-side-b-print.html","timestamp":"2014-04-18T15:55:37Z","content_type":null,"content_length":"5186","record_id":"<urn:uuid:dfdb32d9-088f-45e1-8de2-8f58ca18438a>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00463-ip-10-147-4-33.ec2.internal.warc.gz"} |
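A note on the sticking point: 59.5° is an acute angle (it is less than 90°); nothing requires angle B to be smaller than angle A — in fact b > a forces B > A by the law of sines. The four-case procedure described above can be sketched in a few lines (a hedged reimplementation, not any textbook's code):

```python
import math

def solve_ssa(A_deg, a, b):
    """Return the list of (B, C, c) solutions for SSA data:
    angle A (degrees) opposite side a, with side b also given."""
    A = math.radians(A_deg)
    h = b * math.sin(A)                # step 1: altitude test
    if a < h:
        return []                      # no triangle
    sinB = b * math.sin(A) / a         # law of sines
    B_acute = math.degrees(math.asin(min(sinB, 1.0)))
    solutions = []
    for B in (B_acute, 180.0 - B_acute):   # acute and obtuse candidates
        C = 180.0 - A_deg - B
        if C > 0:                          # keep only geometrically valid cases
            c = a * math.sin(math.radians(C)) / math.sin(A)
            solutions.append((round(B, 1), round(C, 1), round(c, 1)))
    return solutions

print(solve_ssa(41.2, 8.1, 10.6))
# two triangles, matching the worked answer:
# roughly (59.5, 79.3, ...) and (120.5, 18.3, ...)
```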
I have some data displayed as CDF (Distribution Function) what does it show
August 24th 2012, 10:23 AM #1
Aug 2012
Hi, I am not very good at statistics, but I want to know what exactly a CDF is; I have attached a graph here. It is the CDF of distance with respect to IP addresses. Can anyone explain to me a bit
about the CDF and what exactly it represents, in simple words?
Re: I have some data displayed as CDF (Distribution Function) what does it show
The CDF shows the proportion of IP addresses which have less than or equal to the distance on the X axis.
August 26th 2012, 10:46 AM #2
MHF Contributor
May 2010 | {"url":"http://mathhelpforum.com/statistics/202510-i-have-some-data-displayed-cdf-distribution-function-what-does-show.html","timestamp":"2014-04-18T18:57:32Z","content_type":null,"content_length":"33536","record_id":"<urn:uuid:3ae68f09-e0c1-45ab-874e-4d05221d8b4e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00180-ip-10-147-4-33.ec2.internal.warc.gz"} |
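To make that answer concrete: an empirical CDF is built by sorting the data and, for each x, counting the fraction of observations that are ≤ x. A tiny sketch with made-up distance data (not the poster's actual data):

```python
def ecdf(sample):
    """Return F with F(x) = fraction of sample values <= x."""
    s = sorted(sample)
    n = len(s)
    def F(x):
        # count of values <= x (bisect would also work on the sorted list)
        return sum(1 for v in s if v <= x) / n
    return F

distances = [2, 5, 5, 7, 9, 12, 12, 12, 20, 31]   # hypothetical distances
F = ecdf(distances)
print(F(12))   # 0.8 -> 80% of the "IP addresses" lie within distance 12
```

Reading the graph in the thread works the same way: pick a distance on the x-axis, and the curve's height is the proportion of IP addresses at that distance or less.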
Analyzing Classical Form : An Approach for the Classroom
ISBN: 9780199747184 | 0199747180
Format: Hardcover
Publisher: Oxford University Press, USA
Pub. Date: 11/5/2013
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help | {"url":"http://www.knetbooks.com/analyzing-classical-form-approach/bk/9780199747184","timestamp":"2014-04-17T07:20:53Z","content_type":null,"content_length":"32436","record_id":"<urn:uuid:ce0a51a3-b491-462c-8610-dcf34c3cc319>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complex Projective Space as a $U(1)$ quotient
up vote 4 down vote favorite
As is well known, one can view $\mathbb{CP}^n$ as a quotient of the unit $(2n + 1)$-sphere in $\mathbb{C}^{n+1}$ under the action of $U(1)$, since every line in $\mathbb{C}^{n+1}$ intersects the unit
sphere in a circle.
Moreover, we have $S^{2n + 1} = SU(n+1)/SU(n)$, where $SU(n)$ embeds into the bottom right-hand corner (say).
My question is: Is there an embedding $j$ of $U(1)$ into $SU(n+1)/SU(n)$ that gives the $U(1)$ action as a left (right) multiplication, i.e. such that $A.e^{i \theta} = Aj(e^{i \theta})$, for all $A
\in SU(n+1)/SU(n)$?
For $n=1$, it's easy: $SU(2) = S^3$, and we embed $e^{i\theta}$ as
$\left( \begin{array}{cc} e^{i \theta} & 0 \\\\ 0 & e^{-i \theta} \end{array} \right)$.
Well, there is an embedding through left multiplication. – Mariano Suárez-Alvarez♦ Jan 13 '10 at 6:17
I mean an embedding such that the quotient of $SU(n+1)/SU(n)$ under the resulting multiplicative action of $U(1)$ is homeomorphic to $\mathbb{CP}^n$. – Aston Smythe Jan 13 '10 at 6:47
3 Answers
The U(1) group is the torus in SU(N+1) which commutes with SU(N). In the example given in the question, where SU(N) is chosen as the bottom N-dimensional block, U(1) consists of
the diagonal matrices
diag{exp(N*i*theta), exp(-i*theta), ..., exp(-i*theta)}   (exp(-i*theta) repeated N times)
Please observe that the restriction to the bottom N-dimensional block is proportional to the unit matrix, thus it commutes with the whole of SU(N); also it belongs to SU(N+1),
since it has a unit determinant.
The reason that the U(1) and the SU(N) factors commute is due to a theorem by A. Borel which states that the denominator subgroup of homogeneous Kaehlerian spaces must be the
centralizer of a torus. In our case the torus is the U(1) subgroup and the centralizer is SU(N)*U(1).
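The accepted answer's claims — that diag(e^{Niθ}, e^{-iθ}, …, e^{-iθ}) has unit determinant and commutes with the bottom-block copy of SU(N) — are easy to sanity-check numerically for N = 2 (the particular SU(2) element below is just a sample):

```python
import cmath

def matmul(A, B):
    """Multiply square complex matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

theta, N = 0.7, 2
# T = diag(e^{N i theta}, e^{-i theta}, e^{-i theta})
d = [cmath.exp(1j * N * theta)] + [cmath.exp(-1j * theta)] * N
T = [[d[i] if i == j else 0.0 for j in range(N + 1)] for i in range(N + 1)]

# A sample SU(2) element embedded in the bottom-right 2x2 block:
a, b = 0.6 * cmath.exp(0.3j), 0.8 + 0.0j         # |a|^2 + |b|^2 = 1
S = [[1.0, 0.0, 0.0],
     [0.0, a, b],
     [0.0, -b.conjugate(), a.conjugate()]]

err = max(abs(matmul(T, S)[i][j] - matmul(S, T)[i][j])
          for i in range(3) for j in range(3))
print(abs(det3(T) - 1), err)   # both essentially zero
```

The commutation is visible from the structure: restricted to the bottom block, T is the scalar matrix e^{-iθ}·I, which commutes with everything there.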
From wikipedia: $\mathbb{CP}^n$ is a Symmetric space of type AIII for $p=n$, $q=1$. There are embeddings of both $U(1)$ and $SU(n)$ into $S(U(n) \times U(1)) \subset SU(n+1)$ which give you your quotient by right multiplication.
My original answer was unsalvageable so I've deleted it and am posting a new "answer". As with the first one, I don't rate this as particularly an answer but more just trying to understand
what's going on.
I was initially having trouble understanding Scott's answer, but now I think I do and I think it gives the matrix representation wanted which isn't quite what David wrote.
We have $SU(n+1)$ and inside this we have $SU(n)$ and quotient out to get $S^{2n+1}$. We also have a slightly larger subgroup which is $S(U(n) \times U(1))$, which contains $SU(n)$, such
that the quotient is $\mathbb{CP}^n$.
Now, $S(U(n) \times U(1))$ is $U(n)$ via $A \mapsto (\det A^{-1},A)$ and the inclusion $SU(n) \to S(U(n) \times U(1))$ goes over to the standard inclusion. Here, $SU(n)$ is a normal
subgroup and $U(n)$ is the semi-direct product of $SU(n)$ and $U(1)$ with the map $U(1) \to U(n)$ given by $\lambda \mapsto (\lambda, 1,\dots,1)$ (diagonal matrix). When taken over to
$S(U(n) \times U(1))$ this becomes $\lambda \mapsto (\lambda^{-1},\lambda,1,\dots,1)$.
So then $SU(n+1)/S(U(n) \times U(1)) \cong (SU(n+1)/SU(n))/U(1)$ where $U(1) \to SU(n+1)$ is the map $\lambda \to (\lambda^{-1},\lambda,1,\dots,1)$.
This isn't the same as David's, I know, so it may not be what you want (since that answer's been accepted). Presumably only one satisfies the condition that you want and presumably it's
David's since that answer's been accepted. Still, I was confused and I think I've straightened myself out now.
simplify question
March 23rd 2012, 10:55 AM #1
Junior Member
Mar 2011
I don't know if this particular part is calculus or not, but this was on a calculus test. It was the last step of a third derivative question. I lost a mark because the teacher says (x^2-1)/x^2
is more simplified than 1-(1/x^2).. is this fair? Is there an official rule that states (x^2-1)/x^2 is more simplified? This is making a common denominator for no damn reason, nothing cancels and
it looks more complicated than before. Normally I wouldn't care about one mark but I was getting 100 in the course and that section was only out of 8 so now my mark is going to plummet for no
reason.. -_-
Re: simplify question
no, i would not consider $\frac{x^2 -1}{x^2}$ to be simpler than $1-x^{-2}$. in fact i think the opposite.
Of course, your teacher is probably a maths professor or something, and im not. and he sets the marks, so probably best to go with what he says
Re: simplify question
just clarifying, i wrote 1-1/x^2 not 1-1/x^(-2) (i know negative exponents aren't allowed)
he's not a professor he's just a high school teacher..
is there anything solid i can say to convince my teacher? because obviously everyone else that did this also lost a mark so i have to convince him really good because he'll have to change
everyone else's tests too which is a pain for him.. so unless i convince him fully it's not going to happen..
i was planning on saying since the teacher said always simplify before taking the next derivative, i was going to say using my form is easier to take the derivative therefore more simplified, any
other ideas?
Re: simplify question
I think (x^2 - 1)/x^2 can be said to be "simpler" because it involves only positive exponents; however, yours is "simpler" in the sense that it's easier to work with in calculus for derivative and integral evaluations. I think this so-called rule is ambiguous enough that it can be argued either way. Arguing the derivative/integral simplicity could work. If not, arguing that there is no quotient but just two terms is another option. I find losing marks for that ridiculous. Ask him to prove himself right by citing something credible.
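For what it's worth, a quick numeric check (a Python sketch) confirms the two forms are the same function at every nonzero x, so "more simplified" really is just a convention:

```python
# Both forms of the answer are algebraically identical for x != 0,
# so "more simplified" is a matter of convention, not correctness.
def quotient_form(x):
    return (x**2 - 1) / x**2

def split_form(x):
    return 1 - 1 / x**2

# spot-check agreement at several nonzero points
for x in (-3.0, -0.5, 0.25, 2.0, 10.0):
    assert abs(quotient_form(x) - split_form(x)) < 1e-12
```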
SAS-L archives -- September 2008, week 3 (#336), LISTSERV at the University of Georgia
Date: Thu, 18 Sep 2008 07:28:11 -0700
Reply-To: sounpra@YAHOO.COM
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Song <sounpra@YAHOO.COM>
Organization: http://groups.google.com
Subject: PLS using both IML and PROC PLS
Comments: To: sas-l@uga.edu
Content-Type: text/plain; charset=windows-1252
Hi All –
I am performing PLS using the Iris data. In the first section, I am
using PROC PLS with the following code:
proc pls data=iris method=pls(algorithm=eig) nfac=2 details;
   model species = sepallength sepalwidth petallength petalwidth;
   output out=outpls xscore=xscr yscore=yscr;
run;
Next I wanted to compare the first two x-scores from PROC PLS with those obtained from the singular value decomposition of the X`YY`X matrix using IML.
The following is my IML code (note: E and F are column centered and normalized):
proc iml;
use iris;
read all var {species} into y;
read all var {sepallength sepalwidth petallength petalwidth} into x;
/** Column centered and normalized X matrix **/
mu = x[:,];
xdiff = x - J(nrow(x),1,1)*mu;
sig = (1/(nrow(x) - 1))*xdiff`*xdiff;
d = inv(sqrt(diag(sig)));
E = xdiff*d;               /** Standardized X matrix **/
/** Column centered and normalized Y vector **/
muy = y[:,];
ydiff = y - J(nrow(y),1,1)*muy;
sigy = (1/(nrow(y) - 1))*ydiff`*ydiff;
dy = inv(sqrt(diag(sigy)));
F = ydiff*dy; /** Standardized Y vector **/
/** X`YY`X matrix **/
pls =E`*F*F`*E;
call svd(u,v,q,pls);
print u;
W = u[,1:2];
/** obtaining the first two x-scores **/
t = E*W;
beta1 = inv(t[,1]`*t[,1])*t[,1]`*F;
print beta1;
varnames = {"t1" "t2"};    /* column names for the two score vectors */
create PLS from t [colname=varnames];
append from t;
quit;
When I print out the U matrix (the matrix of eigenvectors of X`YY`X), the first eigenvector is exactly the same as the one produced by PROC PLS; thus the first x-scores match up exactly. Also, beta1 from IML is the same as the first "inner regression coefficient" given in the PROC PLS output.
However, the second eigenvectors do not match; therefore, my second x-scores from IML are different from those of PROC PLS. Am I missing something in the IML code? Should I be subtracting my first x- and y-scores from my respective matrices and recomputing the SVD of my updated E`FF`E matrix?
Any thoughts or suggestions will be much appreciated.
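A likely explanation, sketched here in NumPy rather than SAS/IML (random stand-in data, hypothetical names): with a single response column, E`F is a vector, so E`FF`E has rank one and only its first eigenvector is determined; later factors require deflating E by the previous factor before recomputing the SVD, which is what PROC PLS does internally.

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in data with the same shape as the Iris setup (150 x 4, one response)
E = rng.standard_normal((150, 4))
F = rng.standard_normal((150, 1))
E = (E - E.mean(axis=0)) / E.std(axis=0, ddof=1)
F = (F - F.mean(axis=0)) / F.std(axis=0, ddof=1)

def pls_scores_with_deflation(E, F, nfac=2):
    """PLS x-scores: SVD of the cross-product, deflating E between factors."""
    E = E.copy()
    scores = []
    for _ in range(nfac):
        u, s, vt = np.linalg.svd(E.T @ F, full_matrices=False)
        w = u[:, 0]               # weight vector (first left singular vector)
        t = E @ w                 # x-score for this factor
        p = E.T @ t / (t @ t)     # x-loading
        E = E - np.outer(t, p)    # deflate: remove the part explained by t
        scores.append(t)
    return np.column_stack(scores)

T = pls_scores_with_deflation(E, F)
# with deflation, successive x-scores come out orthogonal, matching
# standard PLS behavior; a single SVD of the original E'FF'E cannot
# supply a meaningful second vector when F has one column
```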
Math Forum Discussions
Topic: 8x8 bit patterns
Replies: 3 Last Post: Mar 13, 2013 12:09 PM
8x8 bit patterns
Posted: Feb 28, 2013 4:51 AM
I have a little puzzle where I think the answer is that it is not
possible but having tried all the tricks I can think of cannot prove it.
The problem arises from considering the possible bit patterns in an 8x8
JPEG encoding square and searching for one that includes all possible
states for subsampling up to 2x2 - that is 2x1, 4x1, 2x2
ie 0000, 0001, ...., 1111
00 00 00 ........... 11
00, 01, 10, 11
It would be really nice if it did 1x4 as well.
It is obvious that the 4x1 subsampling requirement means that the final
solution if it exists must be a permutation of the nibbles 0,1,...,15
It is also obvious that taken as pairs and interpreted as 2x2 subsampled
there are 16x15 possible states of which those involving 0,5,10,15 will
have duplicate 2x2 subsampling patterns if used together. Using any pair
taken from this set prevents a solution.
Pair {0,5} (duplicate 2x2 blocks) versus pair {0,1} (distinct blocks):

nibble  4x1    2x2
0       00|00  0001 = 1
5       01|01  0001 = 1

nibble  4x1    2x2
0       00|00  0000 = 0
1       00|01  0001 = 1
The best I have been able to obtain by a directed brute force attack is
any number of solutions getting 11/15 of the 2x2 states and all of 4x1.
I am still not convinced my algorithm is working correctly.
This is based on computing the 2x2 patterns for all the pairs and then
combining them efficiently to try and maximise coverage in both domains.
The 4x1 pair {0,7} is represented as 2x2 {1,3} and bitmasks are used to
compute worthwhile continuations and cull all branches already worse
than existing solutions.
A complete solution should represent 0..15 in both domains 2x2 and 4x1.
But alas I can't find one :(
I can't help thinking there should be some clever parity based argument
to show that it is impossible to do better. Any suggestions?
Thanks for any enlightenment.
Martin Brown | {"url":"http://mathforum.org/kb/message.jspa?messageID=8437009","timestamp":"2014-04-17T02:37:01Z","content_type":null,"content_length":"21158","record_id":"<urn:uuid:191c03b2-43e9-4896-99c4-2f452cbabba6>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00471-ip-10-147-4-33.ec2.internal.warc.gz"} |
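Reading the post's examples as: each vertical pair of nibbles (a over b) yields two 2x2 blocks, one from the left 2-bit halves and one from the right halves, a small Python sketch makes coverage easy to test. Note that the pairing below (my own construction, not from the post) appears to cover all 16 block values under this reading, which would suggest re-checking the search code rather than looking for an impossibility proof:

```python
def blocks(top, bottom):
    """2x2 block values when nibble `top` sits above nibble `bottom`.

    A nibble is 4 bits wide; stacking two nibbles gives a 2x4 region whose
    two 2x2 blocks read as (top 2-bit half << 2) | (bottom 2-bit half).
    """
    return ((top >> 2) << 2 | (bottom >> 2),
            (top & 3) << 2 | (bottom & 3))

# reproduce the examples from the post
assert blocks(0, 7) == (1, 3)   # pair {0,7} -> 2x2 values {1,3}
assert blocks(0, 5) == (1, 1)   # pairs within {0,5,10,15} duplicate
assert blocks(0, 1) == (0, 1)

def coverage(pairs):
    """Set of distinct 2x2 block values produced by a pairing of nibbles."""
    seen = set()
    for a, b in pairs:
        seen.update(blocks(a, b))
    return seen

# one pairing of all 16 nibbles (each used once) hitting every block value
pairing = [(0, 1), (8, 3), (2, 9), (10, 11), (5, 4), (13, 6), (7, 12), (15, 14)]
assert sorted(n for p in pairing for n in p) == list(range(16))
assert coverage(pairing) == set(range(16))
```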
Anyone goes to CA ?
• one year ago
Page:Grundgleichungen (Minkowski).djvu/35
(58) $[\Phi \Psi] = i[w, \Omega]^{*}$,
$\Phi_{1}\Psi_{2} - \Phi_{2}\Psi_{1} = i(w_{3}\Omega_{4} - w_{4}\Omega_{3})$, etc.
The vector $\Omega$ fulfills the relation
(59) $(w\bar{\Omega})=w_{1}\Omega_{1}+w_{2}\Omega_{2}+w_{3}\Omega_{3}+w_{4}\Omega_{4}=0$,
which we can write as $w\bar{\Omega}=0$,
and $\Omega$ is also normal to w. In case $\mathfrak{w} =0$, we have $\Phi_{4} = 0,\ \Psi_{4} = 0,\ \Omega_{4} = 0$, and
(60) $\Omega_{1} = \Phi_{2} \Psi_{3} - \Phi_{3} \Psi_{2},\ \Omega_{2} = \Phi_{3} \Psi_{1} - \Phi_{1} \Psi_{3},\ \Omega_{3} = \Phi_{1} \Psi_{2} - \Phi_{2} \Psi_{1}$,
I shall call $\Omega$, which is a space-time vector of the 1st kind, the Rest-Ray.
As for the relation E), which introduces the conductivity $\sigma$, we have
This expression gives us the rest-density of electricity (see §8 and §4). Then
represents a space-time vector of the 1st kind, which since $w\bar{w}=1$, is normal to w, and which I may call the rest-current. Let us now conceive of the first three component of this vector as the
x-, y-, z co-ordinates of the space-vector, then the component in the direction of $\mathfrak{w}$ is
and the component in a perpendicular direction is $\mathfrak{s_{\bar{w}}}=\mathfrak{F_{\bar{w}}}$.
This space-vector is connected with the space-vector $\mathfrak{F}=\mathfrak{s}-\varrho\mathfrak{w}$, which we denoted in § 8 as the conduction-current.
Now by comparing with $\Phi = -wF$, the relation (E) can be brought into the form
(E) $s+(w\bar{s})w=-\sigma wF$. | {"url":"http://en.wikisource.org/wiki/Page:Grundgleichungen_(Minkowski).djvu/35","timestamp":"2014-04-16T11:17:02Z","content_type":null,"content_length":"26766","record_id":"<urn:uuid:d85bcf4e-38d7-408e-9e37-72b383853eaf>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00535-ip-10-147-4-33.ec2.internal.warc.gz"} |
Paul H.
I am a retired electrical/nuclear engineer searching for ways to help young people succeed in education. My specialties are math at any level and physics. I have tutored before at the high school
level, both as a volunteer and for pay. My tutoring method is to find the student's current level of understanding and to build from there to the level the student desires. I stick to the material the student wants to cover. I will try to show the student the method, but will let the student work out the problems. Often, I present additional problems to solve to ensure the student has a grasp of the subject matter. I will always be positive with the student.
Scaling analysis for the investigation of slip mechanisms in nanofluids
The primary objective of this study is to investigate the effect of slip mechanisms in nanofluids through scaling analysis. The role of nanoparticle slip mechanisms in both water- and ethylene
glycol-based nanofluids is analyzed by considering shape, size, concentration, and temperature of the nanoparticles. From the scaling analysis, it is found that all of the slip mechanisms are
dominant in particles of cylindrical shape as compared to that of spherical and sheet particles. The magnitudes of slip mechanisms are found to be higher for particles of size between 10 and 80 nm.
The Brownian force is found to dominate in smaller particles below 10 nm and also at smaller volume fraction. However, the drag force is found to dominate in smaller particles below 10 nm and at
higher volume fraction. The effect of the thermophoresis and Magnus forces is found to increase with particle size and concentration. In terms of time scales, the Brownian and gravity forces act over a considerably longer duration than the other forces. For a copper-water nanofluid, the effective contribution of the slip mechanisms leads to a heat transfer augmentation of approximately
36% over that of the base fluid. The drag and gravity forces tend to reduce the Nusselt number of the nanofluid while the other forces tend to enhance it.
Nanofluid was first proposed by Choi and Eastman [1] about a decade ago, to indicate engineered colloids composed of nanoparticles dispersed in a base fluid. Contrary to the milli- and micro-sized
particle slumped explored in the past, nanoparticles are relatively close in size to the molecules of the base fluid and thus can realize very stable suspensions with little gravitational settling
over long periods of time. It has long been recognized that suspensions of solid particles in liquid have great potential as improved heat management fluids. The enhancement of thermal transport
properties of nanofluids was even greater than that of suspensions of coarse-grained materials. In the recent years, many studies show that there is an abnormal increase in single phase convective
heat transfer coefficient relative to the base fluid [2]. Such an increase mainly depends on factors such as the form and size of the particles and their concentration, the thermal properties of the
base fluid as well as those of the particles, kinetics of particle in flowing suspension, and nanoparticle slip mechanisms. The enhancement mechanism of heat transfer in nanofluid can be explained
based on the following two aspects: (1) the suspended nanoparticles increase the thermal conductivity of the two-phase mixture, and (2) the chaotic movement of the ultrafine particles, caused by slip between the particles and the base fluid, results in thermal dispersion, which plays an important role in heat transfer enhancement. Slip mechanisms of the particles increase the energy exchange rates in
the nanofluid. Thermal dispersion will flatten the temperature distribution inside the nanofluid and make the temperature gradient between the fluid and wall steeper, which augments heat transfer
rate between the fluid and the wall [3]. Understanding the effect of different forces that bring about the slip mechanism is therefore essential in the study of convective transport of nanofluids.
An overall understanding of the effect of nanoparticle slip mechanisms for the augmentation of heat transport in nanofluids is in its infancy. In the past, several authors have attempted scaling
analysis for convective transport of nanofluids to show the effect of slip mechanisms. Scaling analysis [4-6] is an effective tool to apply and develop mathematical models for describing transport
processes. Through scaling analysis, the solution for any quantity that can be obtained from the governing equations can be reduced to a function of the dimensionless independent variables and the
dimensionless groups. Ahuja [7] examined the augmentation in heat transport of flowing suspensions due to the contribution of rotational and translational motions by an order of magnitude analysis
and concluded that the translational motion is expected to be negligibly small compared to that of the rotational motion of the particles. Savino and Paterna [8] performed order of magnitude analysis
for Soret effect in water/alumina nanofluid and concluded that the thermofluid-dynamic behavior may be influenced by gravity and the relative orientation between the residual gravity vector and the
imposed temperature gradient. Khandekar et al. [9] used scaling analysis for different nanofluids to show that entrapment of nanoparticles in the grooves of surface roughness leads to deterioration
of the thermal performance of nanofluid in closed two-phase thermosyphon. Hwang et al. [10], in his study for water/alumina nanofluid, showed that both thermophoresis and Brownian diffusion have
major effect on the particle migration and that the effect of viscosity gradient and non-uniform shear rate can be negligible. Buongiorno [11] estimated the relative importance of different
nanoparticle transport mechanisms through scaling analysis for water/alumina nanofluid and concluded that Brownian diffusion and thermophoresis are the two most important slip mechanisms. Also, he
ascertained that these results hold good for any nanoparticle size and nanofluid combination.
However, the different slip mechanisms between the nanoparticles and the base fluid depend on several factors such as the shape, size, and volume fraction of the particles. Also, the thermophysical properties of the nanofluid used in the scaling analysis affect the magnitude of the slip forces; these factors were not taken into consideration in the previous studies discussed above.
Therefore, the objective of the present work is to carry out a detailed scaling analysis to understand the effect of seven different slip mechanisms in both water- and ethylene glycol-based
nanofluids. A comprehensive parametric study has been carried out by varying the shape, size, concentration, and temperature of the nanoparticle in the fluid in order to understand the relative
effect of these parameters on the magnitude of slip forces. The study is extended across different nanoparticles such as gold, copper, alumina, titania, silica, carbon nanotube (CNT), and graphene,
suspended in the base fluid. The effect of slip mechanism on heat transfer augmentation in these nanofluids due to the slip mechanisms is also studied.
Governing equations
The fluid surrounding the nanoparticles will be assumed to be a continuum. The Knudsen number is defined as the ratio of the mean free path of the base fluid molecules to the nanoparticle diameter:

Kn = λ/d_p,

where d_p is the particle diameter and λ is the mean free path of the base fluid molecules, given by:

λ = RT/(√2 π d_m² N_A P),

where R is the universal gas constant, T is the temperature, d_m is the molecular diameter of the base fluid, N_A is Avogadro's constant, and P is the pressure.
For water and ethylene glycol, the values of molecular mean free path are 0.278 and 0.26 nm, respectively. Therefore, for the nanoparticles in range of interest (1-100 nm), the Knudsen number is
relatively small (Kn < 0.3); thus, the assumption of continuum is reasonable.
Continuous fluid phase
The governing equations for the continuous phase include the continuity equation (mass balance), equation of motion (momentum balance), and energy equation (energy balance). They are given,
respectively, in the following:
Continuity equation
Momentum equation
Energy equation
T[bf ]in Equation 2 is the stress tensor defined as:
where μ[bf ]is the shear viscosity of the base fluid phase and I is the unit vector. S[p ]in Equation 4 is the source term representing the momentum transfer between the fluid and particle phases and
is obtained by computing the momentum variation due to several forces of slip, ∑ F, experienced by the control volume as:
In the Lagrangian frame of reference, the equation of motion of a nanoparticle is given by m_p (dv_p/dt) = ∑F. The sum contains the drag force F_D, gravity F_G, the Brownian motion force F_B, the thermophoresis force F_T, Saffman's lift force F_L, the rotational force F_R, and the Magnus effect F_M:

∑F = F_D + F_G + F_B + F_T + F_L + F_R + F_M
The abbreviations for the different forces are listed in Table 1.
Table 1. Abbreviations for different forces
The coupling between the continuous fluid phase and the discrete phase is realized through Newton's third law of motion. Inter-particle forces such as the Van der Waals and electrostatic forces
are neglected in the analysis due to their relatively negligible contributions in nanofluids.
The above forces are computed separately as shown below.
Drag force
Drag is the force generated in opposition to the direction of motion of a particle in a fluid. Drag force is proportional to the relative velocity between the base fluid and nanoparticle and is
expressed by [12]:
where v[bf ]is the velocity of the base fluid, v[p ]is the particle velocity, m[p ]is the mass of the particle, β is the interphase momentum exchange coefficient:
Re[D ]is the Reynolds number due to drag:
C[D ]is the drag coefficient for spherical particles and is given by:
For non-spherical particles [13]:
Here, Ψ is the shape factor, defined as Ψ = A_sp/A,
where A[sp ]is the surface area of the sphere of the same volume as the non-spherical particle:
and A is the actual surface area of the non-spherical particle.
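The sphericity correction and the drag Reynolds number can be sketched as follows (the Schiller-Naumann form for C_D is my assumption; the paper's exact correlation is not shown):

```python
import math

def reynolds_drag(rho_bf, v_rel, d_p, mu_bf):
    """Particle Reynolds number based on the slip (relative) velocity."""
    return rho_bf * abs(v_rel) * d_p / mu_bf

def drag_coefficient_sphere(Re):
    """Schiller-Naumann drag correlation for a sphere (assumed form)."""
    return 24.0 / Re * (1.0 + 0.15 * Re**0.687)

def shape_factor(volume, area):
    """Psi = surface area of the volume-equivalent sphere / actual surface area."""
    d_eq = (6.0 * volume / math.pi) ** (1.0 / 3.0)
    return math.pi * d_eq**2 / area

# a sphere has shape factor 1 by construction
assert abs(shape_factor(4.0 / 3.0 * math.pi, 4.0 * math.pi) - 1.0) < 1e-12
```

For nanoparticle slip velocities the Reynolds number is tiny, so the correlation reduces to the Stokes limit C_D ≈ 24/Re.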
The gravity (buoyancy) force is proportional to the volume of the particle and to the relative density of the nanoparticle and the base fluid:

F_G = V_p (ρ_p − ρ_bf) g,

where V_p is the volume of the particle, ρ_p is the density of the nanoparticle, ρ_bf is the density of the base fluid, and g is the acceleration due to gravity.
Brownian force
The random motion of nanoparticles within the base fluid is called Brownian motion and results from continuous collisions between the nanoparticles and the molecules of the base fluid. Brownian force
is a function of concentration gradient, surface area of the particle, and the Brownian diffusion coefficient [11]:
where D_B is the Brownian diffusion coefficient, which for spherical particles is given by the Stokes-Einstein relation [11]:

D_B = K_B T/(3π μ_nf d_p)
For non-spherical particles:
where K_B is the Boltzmann constant, T is the temperature, h is the length of the non-spherical particle, d_p is the particle diameter, μ_nf is the dynamic viscosity of the nanofluid, and v_B is
the Brownian velocity and is a function of temperature and diameter of the particle.
For spherical particles:
For non-spherical particles:
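For the spherical case, the Stokes-Einstein diffusivity discussed above can be sketched as follows (using the nanofluid viscosity, per the symbol list; illustrative property values):

```python
import math

K_B = 1.380649e-23  # J/K, Boltzmann constant

def brownian_diffusivity(T, mu_nf, d_p):
    """Stokes-Einstein Brownian diffusion coefficient for a sphere."""
    return K_B * T / (3.0 * math.pi * mu_nf * d_p)

# smaller particles diffuse faster (D_B ~ 1/d_p), consistent with the
# Brownian force dominating below 10 nm
D_small = brownian_diffusivity(293.0, 1.0e-3, 1e-9)   # 1 nm particle
D_large = brownian_diffusivity(293.0, 1.0e-3, 1e-8)   # 10 nm particle
assert D_small > D_large
```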
Thermophoresis force
Small particles suspended in a fluid that has a temperature gradient experience a force in the direction opposite to that of the gradient. This phenomenon is known as the thermophoresis.
Thermophoresis is a function of thermophoretic velocity, temperature gradient, Knudsen number, thermal conductivity, dynamic viscosity, and density of the nanofluid [11]:
where ϕ is the volume fraction of the particle, v[T ]is a thermophoretic velocity:
where ρ_nf is the density of the nanofluid, μ_nf is the dynamic viscosity of the nanofluid, ∇T is the temperature gradient, and Kn = λ/d_p is the Knudsen number;
k[nf ]is the thermal conductivity of nanofluid and k[p ]is the particle thermal conductivity.
Saffman's lift force
A free-rotating particle moving in a shear flow gives rise to a lift force. Lift due to shear, F[L ]has been derived by Saffman [14] and it can be expressed as [15]:
where r is the radius of the particle, v[bf ]is the velocity of the base fluid, v[nf ]is the kinematic viscosity of the fluid, K[L ]= 81.2, and D is the diameter of the tube.
Particle rotational force
The force experienced by the particle due to rotational motion around a fixed axis is given as [7]:
Magnus force
Under the effect of the shear stress, a particle rotates about an axis perpendicular to the main flow direction. If a relative axial velocity exists between the particle and the fluid, a force
perpendicular to the main flow direction will arise. This is known as the Magnus effect. It is a function of the difference between the axial velocity and radial velocity of the particle [11]:
The empirical relation proposed by Segre and Silberberg [16] for the velocity of radial motion of the particles (v[M]) is used:
where v[m ]is the mean velocity,
Thermophysical properties of nanofluids
The correlations used to compute the physical and thermal properties of the nanofluids are listed in Table 2. In this table, the subscripts p, bf, and nf refer to the particles, the base fluid, and
the nanofluid, respectively. The density and specific heat of nanofluid are assumed to be a linear function of volume fraction due to lack of experimental data on their temperature dependence. Widely
accepted correlations for determining the dynamic viscosity and thermal conductivity as a function of volume fraction for different nanofluids as shown in Table 2 are used in the analysis. For Al[2]O
[3]-water nanofluids, the equation for effective thermal conductivity as suggested by Li and Peterson [17] is used.
Table 2. Thermophysical properties of nanofluids
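The linear volume-fraction mixing mentioned above can be sketched generically (the specific correlations of Table 2 are not reproduced here; the copper/water property values below are illustrative):

```python
def mixture_density(phi, rho_p, rho_bf):
    """Linear (volume-weighted) density of the nanofluid."""
    return phi * rho_p + (1.0 - phi) * rho_bf

def mixture_rho_cp(phi, rho_p, cp_p, rho_bf, cp_bf):
    """Volume-weighted heat capacity per unit volume, rho*cp."""
    return phi * rho_p * cp_p + (1.0 - phi) * rho_bf * cp_bf

# 1% copper (~8933 kg/m^3) in water (~998 kg/m^3)
rho_nf = mixture_density(0.01, 8933.0, 998.0)
assert abs(rho_nf - 1077.35) < 1e-6
```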
Scaling analysis methodology
In this section, the forces contributing to the slip between the particle and base fluid are analyzed through scaling analysis. A Reynolds number is introduced for each force depending on the
velocity of the particle and the base fluid velocity. The time scale is defined as the time that a nanoparticle takes to diffuse a length scale that is equal to its diameter under the effect of that
mechanism. In the present study, scaling analysis is used to understand the order of magnitude of the forces involved in the slip mechanisms of nanofluids.
Mass of the particle: m_p = ρ_p V_p.
The drag force acting on the particle, F[D ]is given by:
The time scale for drag:
The velocity of nanoparticle due to gravitational settling, v[G], can be calculated from a balance of buoyancy and viscous forces[11]:
The corresponding Reynolds number and the force due to gravity can be expressed as:
and the time scale for gravity:
Brownian force
The corresponding Reynolds number and the force due to Brownian motion can be expressed as:
and the time scale for Brownian:
The corresponding Reynolds number and the force due to thermophoresis can be expressed as:
and the time scale for thermophoresis:
Saffman's lift force
The corresponding Reynolds number and the force due to lift can be expressed as:
and the time scale for lift:
Rotational force
The corresponding Reynolds number and the force due to particle rotation can be expressed as:
and the time scale for rotational:
Magnus effect
The corresponding Reynolds number and the force due to Magnus effect can be expressed as:
and the time scale for Magnus effect:
Applying the scaling analysis, to Equation 8, the acceleration term is normalized by the ratio of particle velocity to the relaxation time of the particle:
The particle relaxation time can be expressed as [11]:
and the particle Reynolds number as:
By substituting all the forces in Equation 9, we get the Reynolds number of the particle as a function of the Reynolds numbers of all the seven slip mechanisms:
Thus, from scaling analysis, we arrive at an expression for the particle Reynolds number which is dependent on the Reynolds number of each of the slip mechanisms, the relaxation time of the particle,
and the time scale of each mechanism.
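The relaxation-time normalization can be sketched with the standard Stokes form (an assumed expression, since the formula cited from [11] is not shown here):

```python
def stokes_relaxation_time(rho_p, d_p, mu_bf):
    """Stokes particle relaxation time: tau = rho_p * d_p**2 / (18 * mu_bf)."""
    return rho_p * d_p**2 / (18.0 * mu_bf)

# copper (~8933 kg/m^3) in water (~1e-3 Pa s): nanoparticles relax almost
# instantly, so they closely follow the surrounding fluid motion
tau_10nm = stokes_relaxation_time(8933.0, 1e-8, 1.0e-3)
assert tau_10nm < 1e-9
```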
Results and discussion
For the present study, nanoparticle suspensions flowing through a horizontal circular tube is considered as the model system. All the forces are calculated based on the equations derived in Section
4. The temperature-dependent nanofluid properties listed in Table 2 are used for calculating the forces described in Section 4. As an initial guess, the relative velocity between the fluid and
particle is obtained by assuming Poiseuille flow [15]:
Since the effect of the forces on the particle velocity cannot be incorporated in the above correlation, an iterative procedure is used to calculate the final velocity of the particle from the particle force balance, given by Equation 8. Based on the assumptions that particle-particle interactions are negligible and that the density of a nanoparticle is greater than the density of the base fluid, Equation 8 can be expressed as [18]:
The above equation is iteratively solved by using the initial guess for v[p ]from Equation 43 until the velocity of the particle is stabilized. A detailed parametric study is conducted to study the
relative importance of different slip mechanisms by varying parameters such as shape, size, concentration, and temperature of the particle in the nanofluid as shown in Table 3. These mechanisms are
analyzed for both water and ethylene glycol based nanofluids separately. The parametric study is done to study the shape effect for three different shapes, namely spherical, cylindrical, and sheet,
while fixing the particle size to be 100 nm, volume fraction as 1%, and temperature as 20°C. The size effect is studied by varying the particle diameter between 1 and 80 nm, while fixing the shape of
the particle to be cylindrical and the volume fraction and temperature to be 1% and 20°C, respectively. For the study involving variation of particle concentration, the concentration is varied between 0.5% and 5%, while the particle size is fixed at 10 nm and the temperature at 20°C. The parametric study on temperature is conducted by fixing the volume fraction of nanoparticles at 1% and varying the temperature between 20°C and 70°C. The effect of each of the parameters on the slip mechanisms is explained in the following sections.
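The iterative velocity update described above can be sketched as a simple explicit march of the force balance (illustrative linear drag plus a constant body force, not the paper's full force set; all numerical values are hypothetical):

```python
def settle_particle_velocity(v_bf, net_force, m_p, dt=1e-9, tol=1e-15, max_iter=10**6):
    """March m_p * dv/dt = net_force(v_p) until the velocity stabilizes.

    Starts from the fluid velocity as the initial guess, mimicking the
    paper's iteration of the particle force balance from a Poiseuille
    profile guess.
    """
    v_p = v_bf
    for _ in range(max_iter):
        v_new = v_p + dt * net_force(v_p) / m_p
        if abs(v_new - v_p) < tol:
            return v_new
        v_p = v_new
    return v_p

# linear drag toward the fluid plus a constant settling force:
# the analytical fixed point is v_bf + F_g / beta
beta, F_g, m_p, v_bf = 1e-10, -1e-12, 1e-18, 0.1
v = settle_particle_velocity(v_bf, lambda v_p: beta * (v_bf - v_p) + F_g, m_p)
assert abs(v - 0.09) < 1e-6
```

The drag term pulls v_p back toward v_bf, which is what makes the fixed-point iteration stable.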
Table 3. Parameters and their ranges used for scaling analysis
Effect of particle shape
The result of the scaling analysis on the particle shape is plotted in Figure 1. The results are plotted for water based nanofluids. From this figure, it is observed that all the seven slip
mechanisms, namely the drag, rotational, gravity, thermophoresis, Brownian, Saffman lift, and Magnus forces, are strongest for cylindrical particles compared to sheet and spherical particles. In terms of order of magnitude, the drag force is the largest, with values in the range 10^-6 to 10^-9, and the Brownian force is the smallest, with values in the range 10^-25 to 10^-27. The results
are in agreement with the finding of Lazarus et al. [19] that cylindrical-shaped particles lead to a higher thermal transport over spherical-shaped particles. Gao et al. [20] also concluded that the
non-spherical nanoparticle shape is helpful in achieving appreciable enhancement of effective thermal conductivity.
Figure 1. Effect of particle shape. (a) Gravity and thermophoresis forces, (b) lift and rotational forces, (c) drag and Magnus forces, and (d) Brownian force for water-based nanofluids.
Effect of particle size
The results of the scaling analysis on particle size are plotted for cylindrical-shaped nanoparticles in Figures 2 and 3. Here, the size of a cylindrical particle refers to its diameter. From Figure 2, it is observed that the Brownian force decreases with increasing particle size. This is because, as the particle size increases, the probability of random
motion comes down. For a particle size of 1 nm, the Brownian force is 10 orders of magnitude higher compared to that of a particle with size of 80 nm. It is observed that the drag force decreases
with increase in size for cylindrical particles. All the other forces show an increase in magnitude with increasing particle diameter.
Figure 2. Effect of particle size in cylindrical particles. (a) Brownian force and (b) drag force for water-based nanofluids.
Figure 3. Effects of particle size in cylindrical particles. (a) Gravity and thermophoresis forces, (b) lift force, (c) Magnus force, and (d) rotational force for water-based nanofluids.
Effect of particle concentration
The results of the scaling analysis on the particle concentration are plotted for 10-nm cylindrical-shaped particles in Figures 4 and 5. From this figure, it is seen that the rotational and Magnus
forces decrease continuously with increasing particle concentration whereas the thermophoresis, drag, and lift forces increase continuously with increasing volume fraction of nanoparticles. It is
observed that the variation of gravity force with volume fraction is very small. The effect of volume fraction on Brownian force is also obtained from Figure 6. The Brownian force is initially found
to increase with concentration of particles and reaches a maximum value at 1% concentration. Further increase in the concentration reduces the effect of this force.
Figure 4. Effect of particle concentration in cylindrical particles. (a) Rotational force and (b) Magnus force for water-based nanofluids.
Figure 5. Effect of particle concentration in cylindrical particles. (a) Gravity and thermophoresis forces, (b) drag force, and (c) lift force for water-based nanofluids.
Figure 6. Effect of particle concentration in cylindrical particles. (a) Brownian force for water-based nanofluids and (b) Brownian force for alumina-water nanofluid.
Effect of particle temperature
The results of the scaling analysis on the particle temperature are plotted in Figures 7 and 8. The Brownian and Magnus forces are found to increase with increasing temperature. An increase in the
temperature of nanofluids augments the chaotic movement of the suspended nanoparticles thereby increasing the force due to Brownian motion. Gravity force has very little impact with temperature. All
other forces including drag, thermophoresis, Saffman's lift, and rotational forces are found to decrease with increasing temperature. This is because for a fixed temperature gradient, the
thermophoresis force is inversely proportional to the temperature of the nanofluid as given by Equations 17 and 18.
Figure 7. Effect of particle temperature in cylindrical particles. (a) Brownian force and (b) Magnus force for water-based nanofluids.
Figure 8. Effect of particle temperature in cylindrical particles. (a) Gravity and thermophoresis forces, (b) drag force, (c) lift force, and (d) rotational force for water-based nanofluids.
Slip mechanisms in different nanofluids
The role of slip mechanisms in different nanofluids is also understood from the scaling analysis in Figures 1, 2, 3, 4, 5, 6, 7, and 8, and it is observed that for gold and copper nanoparticles
suspended in water, the values of gravity, Saffman's, Brownian, Magnus, and rotational forces are higher than that of other nanoparticles. The value of drag force is found to be higher in silica- and
alumina-based nanofluids. The value of thermophoresis force is also found to be very high in alumina-based nanofluids. The trend observed for water-based nanofluid is found to hold good for ethylene
glycol-based nanofluids as well. Comparisons of forces in both the nanofluids are discussed below. The Brownian force is found to be about two orders of magnitude higher in water-based nanofluids as compared to ethylene glycol-based nanofluids due to higher viscosity of water-based nanofluids. Thermophoresis force is 10 orders of magnitude higher in ethylene glycol-based nanofluids compared to
water-based nanofluids. The value of rotational and drag forces are an order of magnitude higher in ethylene glycol-based nanofluids. However, the values of gravity and Magnus forces are found to be
an order of magnitude higher in water-based nanofluids.
Time scale
Time scale is defined as the ratio of the particle size to the velocity of each mechanism (Section 4). Time scale is an important parameter for the comparison of different slip mechanisms in scaling
analysis [5]. The results of the time scale analysis for both water- and ethylene glycol-based nanofluids are plotted in Figures 9 and 10. From these figures, it is observed that for water- and
ethylene glycol-based nanofluids, the time scales associated with rotational and lift forces are smaller than that of the other forces. It is also found that drag, rotational, and lift forces in both
the nanofluids occur within a very short duration of the order of 10^-7 to 10^-10 s, whereas the time scales of Magnus and thermophoresis forces are in the range between 10^-4 and 10^-8 s. The time
scale of Brownian and gravity forces are in the range of 10^-2 to 10^-1 and 1 to 100 s, respectively. Hence, the Brownian and gravity tend to act upon for a considerably longer duration than the
other forces.
Figure 9. Time scales of slip mechanisms for water-based nanofluids.
Figure 10. Time scales of slip mechanisms for ethylene glycol-based nanofluids.
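A rough feel for the drag time scale comes from the Stokes relaxation time tau_p = rho_p d_p^2/(18 mu), a quantity listed in the nomenclature below. The sketch assumes alumina particles in water; the values are illustrative, and the paper's own time-scale definitions are in its Section 4:

```python
mu    = 8.9e-4    # Pa.s, water (assumed)
rho_p = 3970.0    # kg/m^3, alumina (assumed)

for d_nm in (1, 10, 80):
    d_p = d_nm * 1e-9
    tau_p = rho_p * d_p**2 / (18 * mu)   # Stokes relaxation time, s
    print(f"{d_nm:3d} nm : tau_p = {tau_p:.2e} s")
```

Even at 80 nm this is of order 10^-9 s, in line with the very short drag time scales quoted above.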
Role of slip mechanisms on heat transfer augmentation
In this section, the role of slip mechanisms on heat transfer augmentation is studied. The correlation for the turbulent flow of nanofluids inside a tube developed by Xuan and Li [2] is used to
calculate the Nusselt number (Nu[nf]), for copper-water nanofluid:
Here Pe[d] is the particle Peclet number, defined as the product of the particle Reynolds number and the Prandtl number. The particle Reynolds number in Equation 42 is a function of the Reynolds numbers for the different slip mechanisms. Thus, it can be inferred that the slip mechanisms have an effect on heat transfer augmentation in nanofluids. The Nusselt number of the base fluid corresponding to a pipe diameter of 10 mm is found to be 120. For copper-water nanofluid, the effect of each slip mechanism on the Nusselt number, calculated for a particle concentration of 2% and a particle diameter of 1 nm, is plotted in Figure 11. The heat transfer in copper-water nanofluid is found to be approximately 36% higher than in the base fluid. The effect of each slip mechanism on heat transfer is discussed below. The drag and gravity forces tend to reduce the Nusselt number of the nanofluid due to their adverse effect on heat transfer, whereas the other
forces (Brownian, thermophoresis, lift, rotational, Magnus) contribute to heat transfer augmentation.
Figure 11. The effect of slip mechanisms on Nusselt number for copper-water-based nanofluid.
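The Xuan-Li turbulent correlation referenced here is commonly quoted as Nu[nf] = 0.0059 (1.0 + 7.6286 phi^0.6886 Pe[d]^0.001) Re^0.9238 Pr^0.4. Since Equation 42 itself is not reproduced above, the constants in this sketch are an assumption and should be checked against the original paper:

```python
def nu_xuan_li(Re, Pr, phi, Pe_d):
    """Nusselt number for turbulent nanofluid flow in a tube (Xuan & Li
    form, constants as commonly quoted; verify against the original paper)."""
    return 0.0059 * (1.0 + 7.6286 * phi**0.6886 * Pe_d**0.001) * Re**0.9238 * Pr**0.4

# Loading the flow with particles (phi > 0) raises Nu relative to phi = 0:
print(nu_xuan_li(1e4, 5.0, 0.02, 100.0))
print(nu_xuan_li(1e4, 5.0, 0.0, 100.0))
```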
Conclusions
The objective of this work is to understand the effect of nanoparticle slip mechanisms in different nanofluids through scaling analysis. The role of each mechanism is studied by considering the
parameters of nanoparticle such as its shape, size, concentration, and temperature. From the scaling analysis, it is found that all of the slip mechanisms are dominant in particles of cylindrical
shape as compared to the spherical and sheet particles. The parametric study on the effect of nanoparticle size is conducted by considering the particle size in the range of 1-80 nm. It is found that
the magnitudes of slip mechanisms are higher for particles of large size between 10 and 80 nm. However, the Brownian and drag forces are found to dominate in smaller particles below 10 nm. The volume
fraction of the nanoparticles in the suspension plays an important role in the thermophysical properties of the nanofluids. The random motion of the particles is enhanced in dilute suspensions;
hence, Brownian force dominates at lower volumetric loading of nanoparticles. The Brownian and Magnus forces are found to be dominating at higher temperatures while the other forces are negligible.
With respect to forces, it is found that the Brownian force is more active for nanoparticles of cylindrical shape and in smaller-sized particles with a volumetric concentration of 1%. This force is
predominant in water-based nanofluids due to higher viscosity as compared to ethylene glycol-based nanofluids. Thermophoresis force is found to be dominant for cylindrical nanoparticles at lower
temperature of nanofluid. The drag force is found to increase with the increasing volume fraction and temperature of the nanoparticles and decrease with the increasing particle size. In terms of time
scales, the Brownian and gravity forces act considerably over a longer duration than the other forces. This result holds good for both water- and ethylene glycol-based nanofluids. For
copper-water-based nanofluids, the effective contributions of slip mechanisms lead to a heat transfer augmentation which is approximately 36% over that of the base fluid. The drag and gravity forces
tend to reduce the Nusselt number of the nanofluid while the other forces tend to enhance it.
A, actual surface area of the non-spherical particle; A, area (m^2); A[sp], surface area of the sphere of the same volume as the non-spherical particles; C, specific heat (J/kg K); C[m], coefficient
in Equation 18; C[s], coefficient in Equation 18; C[t], coefficient in Equation 18; d[p], particle diameter (nm); D, diameter of the tube (m); d[m], molecular diameter; F, force acting on the
particle (N); g, gravity acceleration (m/s^2); h, length of the cylindrical particle; I, unit vector; k, thermal conductivity (W/mK); K, thermal conductivity ratio (k[nf]/k[p]); K[B], Boltzmann
constant = 1.3806504 × 10^-23 (J/K); Kn, Knudsen number; K[L], coefficient in Equation 19; m[p], mass of the particle (kg); N[A], Avagadro's number = 6.0221367 × 10^23/mol; Nu, Nusselt number; P,
pressure (Pa); Pe, Peclet number; R, radius of the tube (m); R, universal gas constant = 8.3145 J/mol.K; Re, Reynolds number; S[p], source term; t, time (s); T, temperature (K); T[bf], stress tensor
of nanofluids; v, velocity (m/s); V, relative velocity of nanofluid (m/s); V[p], volume of the particle (m^3); z, location; z, location inside the tube (m)
Greek symbols
β, inter-phase momentum exchange coefficient; λ, mean free path of the fluid (m); μ, dynamic viscosity (kg/ms); Ψ, shape factor; ρ, fluid density (kg/m^3); τ, time (s); τ[p], relaxation time of the particle; ϕ, volume fraction of the nanoparticle; ν, kinematic viscosity (m^2/s)
Subscripts
B, Brownian; bf, base fluids; D, drag; G, gravity; L, lift; m, mean; M, Magnus; nf, nanofluids; p, particle; R, rotational; T, thermophoresis
Authors' contributions
SS carried out the scaling analysis for each slip mechanisms and also the effect of each slip mechanisms in heat transfer enhancement. AP drafted the manuscript, together with contributions to the
analysis of the results and in discussions on the technical content. SKD participated in the finalization of manuscript and in discussions on the technical content. All authors read and approved the
final manuscript.
Sign up to receive new article alerts from Nanoscale Research Letters | {"url":"http://www.nanoscalereslett.com/content/6/1/471?fmt_view=mobile","timestamp":"2014-04-16T04:18:47Z","content_type":null,"content_length":"131745","record_id":"<urn:uuid:ba433309-f753-4bc9-a837-8966b5727241>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00249-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics and Plausible Reasoning, vols
- COGNITIVE SCIENCE, 1989
Cited by 276 (19 self)
A theory of analogical mapping between source and target analogs based upon Interacting structural, semantic, and pragmatic constraints is proposed here. The structural constraint of isomorphism
encourages mappings that maximize the consistency of relational corresondences between the elements of the two analogs. The constraint of semantic similarity supports mapping hypotheses to the degree
that mapped predicates have similar meanings. The constraint of prog-mafic central/! / favors mappings involving elements the analogist believes to be Important in order to achieve the purpose for
which the analogy Is being used. The theory is implemented in a computer program called ACME (Analogical Constraint Mapping Engine), which represents constraints by means of a network of supporting
and competing hypotheses regarding what elements to map. A coop-erative algorithm for parallel constraint satisfaction identifies mapping hypotheses that collectively represent the overall mapping
that best fits the interacting constraints. ACME has been applied to a wide range of examples that include problem analogies, analogical arguments, explanatory analogies, story analogies, formal
analogies, and metaphors. ACME is sensitive to semantic and pragmatic Information if it Is available,.and yet able to compute mappings between formally Isomorphic analogs without any similar or
identical elements. The theory Is able to account for empirical findings regarding the impact of consistency and similarity on human processing of analogies.
- 2000), Article 17. [ONLINE: http://jipam.vu.edu.au/v1n2/014_99.html
Cited by 22 (1 self)
Communicated by A. Lupaş ABSTRACT. New families of inequalities involving the elementary symmetric functions are built as a consequence that all zeros of certain real polynomials are real numbers.
, 2003
Cited by 7 (4 self)
The existence of probability in the sense of the frequency interpretation, i.e. probability as “long term relative frequency, ” is shown to follow from the dynamics and the interpretational rules of
Everett quantum mechanics in the Heisenberg picture. This proof is free of the difficulties encountered in applying to the Everett interpretation previous results regarding relative frequency and
probability in quantum mechanics. The ontology of the Everett interpretation in the Heisenberg picture is also discussed.
Aristotelian, or non-Platonist, realism holds that mathematics is a science of the real world, just as much as biology or sociology are. Where biology studies living things and sociology studies
human social relations, mathematics studies the quantitative or structural aspects of things, such as ratios, or patterns, or complexity,
ABSTRACT It seems plausible that the conception of the mind has evolved over the first hundred years of psychology in America. In this research, we studied this evolution by tracing changes in the
kinds of metaphors used by psychologists to describe mental phenomena. A corpus of metaphors from 1894 to the present was collected and examined. The corpus consisted of all metaphors for mental
phenomena used in the first issue of Psychological Review in each decade, beginning with the inception of the journal in 1894 and continuing with 1905, 1915, and so on through 1975. These nine issues
yielded 265 mental metaphors, which were categorized according to the type of analogical domain from which the comparison was drawn. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=4857858","timestamp":"2014-04-17T15:59:53Z","content_type":null,"content_length":"23392","record_id":"<urn:uuid:7f12ce58-6371-4847-baa1-cc700f0a360e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00630-ip-10-147-4-33.ec2.internal.warc.gz"} |
Classical Fluids via Quantum Mechanics
The subject of this post is probably a bit too technical to interest many readers, but I’ve been meaning to post something about it for a while and seem to have an hour or so to spare this morning so
here goes. This is going to be a battle with the clunky WordPress latex widget too so please bear with me if it’s a little difficult to read.
The topic something I came across a while ago when thinking about the way the evolution of the matter distribution in cosmology is described in terms of fluid mechanics, but what I’m going to say is
not at all specific to cosmology, and perhaps isn’t all that well known, so it might be of some interest to readers with a general physics background.
Consider a fluid with density $\rho= \rho (\vec{x},t)$. The velocity of the fluid at any point is $\vec{v}=\vec{v}(\vec{x},t)$. The evolution of such a fluid can be described by the continuity
$\frac{\partial \rho}{\partial t} + \vec{\nabla}\cdot (\rho\vec{v})= 0$
and the Euler equation
$\frac{\partial \vec{v}}{\partial t} + (\vec{v}\cdot\vec{\nabla})\vec{v} +\frac{1}{\rho} \vec{\nabla} P + \vec{\nabla} V = 0,$
in which $P$ is the fluid pressure (pressure gradients appear in the above equation) and $V$ is a potential describing other forces on the fluid (in a cosmological context, this would include its
self-gravity). To keep things as simple as possible, consider a pressureless fluid (as might describe cold dark matter) and restrict consideration to the case of a potential flow, i.e. one in which
$\vec{v} = \vec{\nabla}\phi$
where $\phi=\phi(\vec{x},t)$ is a velocity potential; such a flow is curl-free. It is convenient to take the first integral of the Euler equation with respect to the spatial coordinates, which yields
an equation for the velocity potential (cf. the Bernoulli equation):
$\frac{\partial \phi}{\partial t} + \frac{1}{2} (\nabla \phi)^{2} + V=0.$
The continuity equation becomes
$\frac{\partial \rho}{\partial t} + \vec{\nabla}\cdot(\rho\vec{\nabla}\phi) = 0$
This is all standard basic classical fluid mechanics. Now here’s the interesting thing. Introduce a new quantity $\Psi$ defined by
$\Psi(\vec{x},t) \equiv R\exp(i\phi/u),$
in which $R=R(\vec{x},t)$ and $u$ is a constant. Using this construction, it turns out that
$\rho = \Psi\Psi^{\ast}= |\Psi|^2=R^2$.
After a little bit of fiddling around putting this in the previous equation you can obtain the following:
$iu \frac{\partial \Psi}{\partial t} = -\frac{u^2}{2} \nabla^2{\Psi} + V\Psi + Q\Psi$
which, apart from the last term $Q$ and a slightly different notation, is identical to the Schrödinger equation of quantum mechanics; the term $u$ would be proportional to Planck’s constant $h$ in
that context, but in this context is a free parameter.
The mysterious term $Q$ is pretty horrible:
$Q = \frac{u^2}{2} \frac{\nabla^2 R}{R},$
and it turns the Schrödinger equation into a non-linear equation, but its role can be understood by seeing what happens if you start with the normal single-particle Schrödinger equation and work
backwards; this is the approach taken historically by David Bohm and others. In that case the term $Q$ appears as a strange extra potential term in the Bernoulli equation which is sometimes called
the quantum potential. In the context of fluid flow, however, the term describes the the effect of pressure gradients that would arise if the fluid were barotropic. In the approach I’ve outlined,
going in the opposite direction, this term is consequently sometimes called the “quantum pressure”. The parameter $u$ controls the size of this term, which has the effect of blurring out the
streamlines of the purely classical solution.
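The reduction above is easy to verify symbolically. This sketch (one spatial dimension, SymPy, with my own variable names) substitutes the construction into the Schrödinger-type equation and checks that the residual vanishes once the Bernoulli and continuity equations are imposed:

```python
import sympy as sp

x, t, u = sp.symbols('x t u', positive=True)
R = sp.Function('R')(x, t)      # amplitude, R^2 = density
phi = sp.Function('phi')(x, t)  # velocity potential
V = sp.Function('V')(x)

Psi = R * sp.exp(sp.I * phi / u)
Q = (u**2 / 2) * R.diff(x, 2) / R   # the "quantum pressure" term

# Residual of:  i u Psi_t = -(u^2/2) Psi_xx + V Psi + Q Psi
resid = sp.I*u*Psi.diff(t) + (u**2/2)*Psi.diff(x, 2) - V*Psi - Q*Psi

# Impose the Bernoulli equation:  phi_t = -((phi_x)^2/2 + V)
resid = resid.subs(phi.diff(t), -(phi.diff(x)**2/2 + V))
# ...and continuity written for R:  R_t = -(R_x phi_x + R phi_xx/2)
resid = resid.subs(R.diff(t), -(R.diff(x)*phi.diff(x) + R*phi.diff(x, 2)/2))

print(sp.simplify(sp.expand(resid)))  # 0, so the two formulations agree
```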
This transformation from classical fluid mechanics to quantum mechanics is not a new idea; in fact it goes back to Madelung who, in the 1920s, was trying to find a way to express quantum theory in
the language of classical fluids.
What interested me about this approach, however, is more practical. It might seem strange to want transform relatively simple classical fluid-mechanical setup into a quantum-mechanical framework,
which isn’t the obvious way to make progress, but there are a number of advantages of doing so. Perhaps chief among them is that the construction of $\Psi$ means that the density $\rho$ is guranteed
positive definite; this means that a perturbation expansion of $\Psi$ will not lead to unphysical negative densities in the same way that happens if perturbation theory is applied to $\rho$ directly.
This approach also has interesting links to other methods of studying the growth of large-scale structure in the Universe, such as the Zel’dovich approximation; the “waviness” controlled by the
parameter $u$ is useful in ensuring that the density does not become infinite at shell-crossing, for example.
Anyway, here are some links to references with more details:
I think there are many more ways this approach could be extended, so maybe this will encourage someone out there to have a look at it!
3 Responses to “Classical Fluids via Quantum Mechanics”
1. I knew this mathematics starting from the other way round, ie begin with the Schroedinger equation and derive (coupled, nonlinear) equations for the amplitude and phase of the wavefunction. David
Bohm attempted to give these a physical interpretation.
In fluid mechanics the Navier-Stokes equations (or whatever approximation to them is used) need to be supplemented by the condition that the pressure cannot go negative and that, whenever their
solution is on the point of doing so, you get a vacuum with P=0, as in the cavitation phenomenon around ship propellers. I don’t know if that condition has any analogy in cosmology, or even in
nonrelativistic quantum theory where it might be tested.
Yes, I should have mentioned the Bohm approach because that’s the more familiar way of doing this. That interprets the term Q as a quantum potential for a single particle.
Incidentally, googling about I see there’s been some work on this approach in the context of Clifford Algebra. Very interesting.
2. I haven’t tried to integrate it with WordPress, but mathjax is a wonderful way of getting mathematics nicely typeset on the web. I stumbled across it on the PRL website when there was a news item about the
APS supporting the project. Now the difficult battle is to get my University to integrate it with their VLE. Sorry for all the TLAs. | {"url":"https://telescoper.wordpress.com/2012/06/17/classical-fluids-via-quantum-mechanics/","timestamp":"2014-04-19T09:24:59Z","content_type":null,"content_length":"63753","record_id":"<urn:uuid:4a9c24c3-aad0-4b5f-8ed9-ebba5606b6e0>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00278-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding limit at infinity
Find the limit at infinity. g(z) = (4z^6 - 7z^3)/(z^2 - 4)^3. Not sure how to do this... please help.
Expand the denominator and divide it out through the numerator, the form may then be easier to work with.
I expanded the denominator and just got one long equation... Not seeing how this helps... Please show and explain
Not sure if this helps and plus I am also really new to these forums, but I remember my teacher taught me this: Look at the highest power in the numerator and in the denominator. 1) If they both have
the same highest power, the ratio between the numerator and the denominator is the limit. 2)If the denominator has a higher power, the limit goes to zero 3)If the numerator has a higher power, the
limit goes to infinity ex1) $5x^2 / x^2$ limit goes to 5 ex2) $5x / x^2$ limit goes to zero ex3) $5x^2 / x$ limit goes to infinity So for your problem, try to compare the highest powers and see what
happens. I hope this may be of some help to you and I hope I remembered right
Yes, usually when I did these kinds of problems and found an answer I would check it with this approach: Plug in a very large value into the equation (such as 500,000 or something), and see what
happens. Make sure that you just don't go about doing this to find the answer, because you need to understand how that answer came to be. For this problem, plugging in a large value such as 500,000
would lead to the limit, 4.
$\frac{4z^6-7z^3}{z^6-12z^4+48z^2-64}$ now divide both numerator and denominator by $z^6$ $\frac{\frac{4z^6}{z^6}-\frac{7z^3}{z^6}}{\frac{z^6}{z^6}-\frac{12z^4}{z^6}+\frac{48z^2}{z^6}-\frac{64}{z^6}}$ $\frac{4-\frac{7}{z^2}}{1-\frac{12}{z^2}+\frac{48}{z^4}-\frac{64}{z^6}}$ So the limit will be 4.
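A quick numerical sanity check of the answers above, plugging large values of z into the function as suggested earlier in the thread:

```python
def g(z):
    return (4*z**6 - 7*z**3) / (z**2 - 4)**3

for z in (1e3, 1e6, 1e9):
    print(z, g(z))   # the values approach 4 as z grows
```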
integral of 1/x is ln|x| +C; why the absolute value?
September 6th 2011, 08:13 AM #1
Aug 2010
integral of 1/x is ln|x| +C; why the absolute value?
I have seen two justifications for the absolute value sign in ln|x| as the antiderivative of 1/x, but neither one seems sufficient. The first one is quite lame, that ln can only deal with a
non-zero positive domain (as long as we are sticking to the real numbers). But this would not rule out a definition such as (as example only)
ln(x) if x is positive
-ln(|x|) if x is negative.
Or something like this. I am not proposing this as a definition; only showing how the justification above is insufficient.
The next justification I have seen is that the area under the curve 1/x over an interval (a,b), with a<b<0, will be the same as the area under the curve 1/x over the interval (|b|,|a|), so we
take the absolute value. However, the integral is not the same thing as the area; if we are looking at the negative side of the x-axis, we get negative "signed areas" from the integral, not the
So, why the absolute value? Thanks.
Re: integral of 1/x is ln|x| +C; why the absolute value?
$\displaystyle \frac{d}{dx}\left[\ln{(x)}\right] = \frac{1}{x}$, where $\displaystyle x > 0$, and $\displaystyle \frac{d}{dx}\left[\ln{(-x)}\right] = \frac{1}{x}$, where $\displaystyle x < 0$.
So that means $\displaystyle \int{\frac{1}{x}\,dx} = \begin{cases}\ln{(x)}\textrm{ if }x > 0 \\ \ln{(-x)} \textrm{ if }x < 0\end{cases}$.
Now what is the definition of $\displaystyle |x|$?
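Both branches can be checked with a CAS, e.g. restricting to x < 0 with SymPy:

```python
import sympy as sp

x = sp.symbols('x', negative=True)   # consider only x < 0
deriv = sp.diff(sp.log(-x), x)
print(deriv)                         # 1/x
print(sp.simplify(deriv - 1/x))      # 0
```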
Re: integral of 1/x is ln|x| +C; why the absolute value?
In my opinion for any $x \ne 0$ is...
$\frac{d}{dx} \ln x = \frac{1}{x}$ (1)
... and the 'absolute value' isn't necessary. If You consider that...
$\ln (-x)= \ln x + i\ \pi$ (2)
... the difference between them is only the constant $i\ \pi$ the derivative of which is 0. For the same reason is...
$\int_{a}^{b} \frac{dx}{x} = \ln b - \ln a$ (3)
... if $0 i [a,b]$. What You have to keep in mind however is that that is my own opinion and I am not sure that in a test exam You can sustain that without negative consequences
Kind regards
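chisigma's identity can be checked numerically with the principal branch of the complex logarithm:

```python
import cmath

x = 2.0
print(cmath.log(-x))                 # ln 2 + i*pi
print(cmath.log(x) + 1j * cmath.pi)  # the same value
```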
Re: integral of 1/x is ln|x| +C; why the absolute value?
Thanks to both of you, Prove It and chisigma. Both excellent answers.
Frank Morgan's Math Chat - A ONE-PAGE PROOF OF FERMAT?
February 18, 1999
MATH JOKES. Math Chat has received a number of mathematical jokes, most of them terrible. This week's winner is a riddle from Peter Hegarty:
RIDDLE. Which mathematical term is named after a well-known American politician?
The answer is near the end of this column.
OLD CHALLENGE (Joe Shipman). Select the best occurrence in the world of each number from 1 to 10. For example, 12 is the number of eggs in a dozen or the number of months in a year.
ANSWER. Here is the winning response compiled from John Robertson, Henry Ricardo, Bob Swanson, Jean-Pierre Carmichael, and Ryan Grove:
1. Number of Earth's moons
2. Romeo and Juliet
3. Dimensions; Mazur, Ribet, and Wiles (contributors to the proof of Fermat's Last Theorem)
4. The Four Freedoms (FDR, 1941: freedom of speech, freedom of worship, freedom from want, freedom from fear)
5. Platonic solids [cube, octahedron, tetrahedron, dodecahedron, icosahedron]
6. Legs on insects
7. Notes of the musical scale
8. Cylinders in a V8 engine (such as in the Ferrari 308 series); vegetables in V8 juice (tomatoes, carrots, celery, beets, parsley, lettuce, watercress, and spinach)
9. Planets; Beethoven symphonies
10. Fingers
(Can readers improve on this list?) Then there is this from Al Zimmermann:
1. The amount, in cents, of the recent postage increase for first class US mail.
2. The number of sides to every story.
3. The number of moving parts in a Wankel engine.
4. The number of Beatles.
5. The number of players on a basketball team.
6. The number of sodas in a six-pack.
7. The number of days in a week.
8. The number of days in a week, according to the Beatles.
9. The number of months in a pregnancy.
10. The number of years it seems the Clinton/Lewinsky scandal has been going on.
Finally, in his beautiful response, David Shay relates that, "At the end of the Seder night, which begins the Jewish holiday of Passover, it is common to sing a song named 'Who Knows One.' This song
gives an exact Jewish answer to your challenge, in the range of 1 to 13. Here it is:
• Thirteen are the attributes of God;
Twelve are the tribes of Israel;
Eleven were the stars in Joseph's dream;
Ten commandments were given on Sinai;
Nine is the number of the holidays;
Eight are the days to the service of the covenant;
Seven days there are in a week;
Six sections the Mishnah has;
Five books there are in the Torah;
Four is the number of the matriarchs;
Three is the number of the patriarchs;
Two are the tables of the covenant;
One is our God in heaven and earth."
NEW CHALLENGE. Critique the following short proof of Fermat's Last Theorem sent in by reader Rob Connelly. (In perhaps the biggest mathematics news of the century, Andrew Wiles recently came up with
a very long and complicated proof of this 350-year-old problem.)
Fermat's Last Theorem. The equation
(1) x^n + y^n = z^n has no positive integer solutions for n > 2.
Proposed proof. Suppose there were such a solution. Since x, y < z, we may write x = y + a and z = y + b, where b > a and b > 0. Define N by
(2) z^(n-1) = x^(n-1) + y^(n-1) + N.
Then
x^n + y^n = z^n = z(x^(n-1) + y^(n-1) + N).
Solving for N yields:
N = [(y+a)^(n-1) (a-b) + y^(n-1) (-b)]/(y+b) = [F(y)]/(y+b),
so y+b divides F(y) and
0 = F(-b) = (a-b)^n + (-b)^n,
so that, up to sign,
0 = (b-a)^n + b^n > 0,
the desired contradiction.
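(A hint for would-be critics: the step from "y+b divides F(y)" to "F(-b) = 0" treats y as a polynomial indeterminate, but y here is one fixed integer. A toy example, with F, y, and b chosen purely for illustration, shows that integer divisibility does not force the value at -b to vanish:)

```python
# Toy illustration only: (y+b) divides F(y) as integers,
# yet F(-b) != 0, so the factor-theorem step needs care.
F = lambda y: y**2 + 1
y, b = 2, 3
print(F(y) % (y + b))   # 0  -> (y+b) | F(y) as integers
print(F(-b))            # 10 -> but F(-b) is not 0
```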
ANSWER TO RIDDLE. The mathematical term named after a well-known American politician is "Algorithm." (Readers are invited to continue to submit more jokes for future columns.)
Send answers, comments, and new questions by email to:
Frank.Morgan@williams.edu, to be eligible for Flatland and other book awards. Winning answers will appear in the next Math Chat. Math Chat appears on the first and third Thursdays of each month.
Prof. Morgan's homepage is at www.williams.edu/Mathematics/fmorgan.
Copyright 1999, Frank Morgan. | {"url":"http://www.maa.org/frank-morgans-math-chat-a-one-page-proof-of-fermat","timestamp":"2014-04-18T15:25:58Z","content_type":null,"content_length":"92857","record_id":"<urn:uuid:a6ce09df-09fa-4b69-aa90-b86950e9f903>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00324-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Total # Posts: 155
Calculate the molarity of a solution that contains 4.0 mol of a solute dissolved in 12.0 L of solution
Also, how would you calculate the IRR? Thank you for any help!
An income-producing property is priced at $600,000 and is expected to generate the following after-tax cash flows: Year 1: $42,000; Year 2: $44,000; Year 3: $45,000; Year 4: $50,000; and Year 5:
$650,000. Would an investor with a required after-tax rate of return of 15 percent be...
what is 18 percent of 240?
it is a
Sierra Company is considering a long-term investment project called ZIP. ZIP will require an investment of $121,200. It will have a useful life of 4 years and no salvage value. Annual revenues would
increase by $79,180, and annual expenses (excluding depreciation) would increa...
Business Finance
Neville Corporation, an amusement park, is considering a capital investment in a new exhibit. The exhibit would cost $174,777 and have an estimated useful life of 9 years. It will be sold for $69,200
at that time. (Amusement parks need to rotate exhibits to keep people interes...
Thank you!
how do you solve: 4=1(2)/(x-2) Thank you!
1.) Find the producers' surplus if the supply function is: S(q) = q^7/2+3q^5/2 + 54. Assume the supply and demand are in equilibrium at q= 25. 2.) S(q) = q^2 + 12q and D(q) = 900 - 18q - q^2 The
point at which the supply and demand are equilibrium is (15, 405). a.) Find t...
588 is the correct answer!**
588 is the correct answer? But, I don't understand how to get to that number. How did you calculate that? Sorry, that may be a lot to type out
Use the definite integral to find the area between the x-axis over the indicated interval. f(x) = 36 - x^2; [-1,13] So, what does be the area between the x-axis and f(x) equal? Thank you for any
help! I'm really confused with this problem!
Thank you!
Solve for x. .006x^3=8889 x = approx 114, but I don't understand how to get that answer. Can anyone help me solve? Thank you!
Thank you guys!
Can someone please help me find the derivative of the following: y = (-9e^(7x)) / (5x+3) Thank you!
Data given are related to the Born-Haber cycle for HCl Calculate the amount of energy lost when the ionic species return to their molecular form. Atomization of ½H2(g)=217.6KJ/mol 1st Ionisation of
½H2(g)=1312KJ/mol Atomization of ½Cl2(g)=121 KJ/mol 1st Io...
Any help?
The balance sheet of Burger King reports current assets of $30,500 and current liabilities of $15,800. Calculate the current ratio of Burger King and determine whether it will increase or decrease as
a result of the following transactions: -Paid $2,030 cash for a new oven. ...
Find the value of x: 20x ≡ 9 (mod 15)
Thanks for your help! Also, I am kind of confused about finding the value of x. Example: 9x ≡ 1 mod 10 How do I solve this?
How do I solve for negative modular arithmetic? Here is an example: -45 mod 13 = 7, but how?
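For what it's worth, a mechanical illustration (mine, not from the thread): keep adding the modulus until the value lands in the range 0..12, which is exactly what Python's % operator does when the modulus is positive.

```python
r = -45 % 13          # Python's % gives the remainder with the sign of the modulus
print(r)              # 7, because -45 = 13 * (-4) + 7

# the long way: add the modulus until the result is in range 0..12
v = -45
while v < 0:
    v += 13
print(v)              # 7
```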
Computing in Security (Conversion Help)
1.) Encrypt the hexadecimal message F9E8 using the Rail Fence cipher for binary numbers with 3 Rails. [Give answer in hexadecimal encrypted message] 2.) Decrypt the hexadecimal encrypted message CDEF
created by the Rail Fence cipher for binary numbers with 3 Rails. [Give answ...
Sorry it's computing in security, so I guess that would fall under computers. . .
IP address assignment
Yea I tried doing a google search, but nothing good explaining where to start. Thnx for the reply though
IP address assignment
Imagine that you are a system administrator for a company, and that the company operates from two locations Phoenix and Boston. Phoenix has 350 hosts and Boston has 2100. Given that the following
class C IP addresses have been assigned to your company as a whole by ICANN: 220....
Computing in Security (Conversion Help)
1.) Express each decimal number as an 8-bit binary number in the 2's complement form and then find the negative of the representation of that number in two's complement a.) +18 b) -117 Thanks for any
Two cars collide head on while each is travelling at 80 km/hr. Suppose all of their kinetic energy is transformed into thermal energy. What is the temperature increase of each car? [You may assume
that the specific heat capacity of each car is that of iron, 449 J kg-1K-1.]
Physics of materials
Gaseous nitrogen has a density of 1.17 kg/m3 and liquid nitrogen has a density of 810 kg/m3. [The relative molecular mass of nitrogen is 28.0] What is the mean volume per nitrogen molecule in each
case? What is the mean separation between nitrogen molecules in each case?
Discrete Math
THANK YOU! :)
Discrete Math
A factory makes automobile parts. Each part has a code consisting of a letter and three digits, such as C117, O076, or Z920. Last week the factory made 60,000 parts. Prove that there are at least
three parts that have the same serial number.
Discrete Math
isisDOTpolyDOTedu/courses/discretemath/problemsDOTpdf Link is above can you please take a look? Specifically #21 on pg. 345. [There are 900 3DN], I need help with g-h. Thank you.
Discrete Math
Can someone help? Very confused. . . John Sununus was once the governor of New Hampshire, and his name reminds one of the authors of a palindrome (a word which is spelt the same way forwards as
backwards, such as SUNUNUS). How many seven-letter palindromes (not necessarily re...
Discrete Math
Basically for the MPD problem I have to make it more precise.
Discrete Math
Ok now I am confused again with 5:08, it seems the same as the first subpart of the problem. They can't be the same answer, for this one: How many seven-letter palindromes contain at most three
different letters one of which is S? And for the MPD problem that was really th...
Discrete Math
Can you at all help with this? Multiple personality disorder (MPD) is a condition in which different personalities exist within one person and at various times control that person s behavior. In a
recent survey of people with MPD, it was reported that 98% had been e...
Discrete Math
Oh ok so there are 13800 that contain at most three different letters one of which is S.
Discrete Math
So, how many seven-letter palindromes contain at most three different letters one of which is S? We would start out with 26^3, but I don't understand how to make sure S will be included as one of the
different letters. Any suggestions? Thank you.
Discrete Math
Oh okay I think I am following. . .
Discrete Math
To be honest I haven't started yet, but your method sounds like a step in the right direction. . .I'll play around with it for a little and see what I get. . .If you figure out anything post.
Hopefully someone who knows something will post cuz I'm lost
Discrete Math
Any suggestions?
Discrete Math
Okay I continued the first problem: |A ∩ B| = [2000/6] = 333 |B ∩ C| = [2000/15] = 133 |C ∩ D| = [2000/35] = 57 |A ∩ D| = [2000/14] = 142 |A ∩ B ∩ C ∩ D| = [2000/210] = 9 1000 + 666 + 400 + 285 - 333
- 133 - 57 - 142 + 9 = 1695 <--A...
Discrete Math
Is this correct? Using the Principle of Inclusion-Exclusion, find the number of integers between 1 and 2000 (inclusive) that are divisible by at least one of 2, 3, 5, 7. A = {n| 1 ≤ n ≤ 2000, 2 |n}
B = {n| 1 ≤ n ≤ 2000, 3 |n} C = {n| 1 ≤ n ...
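A brute-force cross-check is cheap here (my addition, with names of my choosing); it also computes the inclusion-exclusion sum over all nonempty subsets of {2, 3, 5, 7}, which is the full version of the count started above.

```python
from itertools import combinations
from math import prod

primes = (2, 3, 5, 7)
N = 2000

# direct count: how many n in 1..2000 are divisible by at least one of 2, 3, 5, 7
direct = sum(1 for n in range(1, N + 1) if any(n % p == 0 for p in primes))

# inclusion-exclusion: alternate signs over all nonempty subsets of the primes
ie = 0
for k in range(1, len(primes) + 1):
    for subset in combinations(primes, k):
        ie += (-1)**(k + 1) * (N // prod(subset))

print(direct, ie)      # 1542 1542: the two counts agree
assert direct == ie
```

Note the full sum needs every pairwise and triple intersection (e.g. |A ∩ C| = ⌊2000/10⌋ and |B ∩ D| = ⌊2000/21⌋ as well), not just a few of them.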
Discrete Math
Hey thanks a lot for help!
Discrete Math
Well a_1 = 37 not 31
Discrete Math
So, going back to the previous 9:05. The solution is a_n = 40(1)^n - 3(1)^n, is this correct or way off?
Discrete Math
How about this: Solve the recurrence relation a_{n+1} = -8a_n - 16a_{n-1}, n ≥ 1, given a₀ = 5, a₁ = 17. Characteristic polynomial is: x^2 + 8x + 16 with distinct roots -4. Since the roots are equal a0 = 5 = C1(-4)^0 + C2(0)(-4)^0 making C1 = 5, right? But...
Discrete Math
So the characteristic roots are 1?
Discrete Math
Idk what happened at 7:24 I think my computer had a glitch or something and it reposted. This one is throwing me for a loop: Solve the recurrence relation a_n = 2a_{n-1} - a_{n-2}, n ≥ 2, given a₀ = 40, a₁ = 37. Characteristic polynomial: x^2 - 2x + 1 How ...
Discrete Math
Oh I feel dumb. . . Ok so now a_n = C_1(1)^n + C_2(-6)^n a0 = C_1 + C_2 = 5 (1) a1 = C_1 - 6C_2 = 19 (2) So to find C1 I eliminated it by 6(1) + (2) <--is this allowed? (6C1 + 6C2) = 30 + (C1 - 6C2)
= 19 ___________________ 7C1 = 49 C1 = 7 Plug this into (1) and this is whe...
Discrete Math
Thank you so much for your help! I think I am getting the hang of it better. Can you please check: Solve the recurrence relation a_n = -5a_{n-1} + 6a_{n-2}, n ≥ 2, given a₀ = 5, a₁ = 19.
characteristic polynomial is x^2 + 5x - 6 it has the distinct root 2 and...
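Answers of this kind are easy to sanity-check numerically (my sketch; it assumes the factorization x^2 + 5x - 6 = (x - 1)(x + 6), so roots 1 and -6, and the fitted constants C1 = 7, C2 = -2 obtained from a₀ = 5, a₁ = 19): iterate the recurrence directly and compare with the closed form.

```python
# Recurrence a_n = -5*a_{n-1} + 6*a_{n-2}, a_0 = 5, a_1 = 19.
# Characteristic polynomial x^2 + 5x - 6 = (x - 1)(x + 6): roots 1 and -6.
# Fitting C1 + C2 = 5 and C1 - 6*C2 = 19 gives C1 = 7, C2 = -2.

closed = lambda n: 7 * 1**n - 2 * (-6)**n

a = [5, 19]
for n in range(2, 10):
    a.append(-5 * a[n - 1] + 6 * a[n - 2])

for n, value in enumerate(a):
    assert value == closed(n), (n, value, closed(n))
print(a[:5])   # [5, 19, -65, 439, -2585]
```

If the two disagree at any index, either the roots or the fitted constants are wrong.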
Discrete Math
Oops posted twice. . .Sorry
Discrete Math
If you don't mind can you help with this problem? Solve the recurrence relation a_{n+1} = 7a_n - 10a_{n-1}, n ≥ 2, given a₁ = 10, a₂ = 29. The characteristic polynomial is x^2 - 7x + 10 with characteristic roots 2 and 5. Once again I get confused when I ge...
Discrete Math
Solve the recurrence relation a_n = -6a_{n-1} + 7a_{n-2}, n ≥ 2, given a₀ = 32, a₁ = -17. This is what I have figured out so far: polynomial: x² + 6x - 7 distinct roots: 1 and -7 I do not understand how to find C₁ and C₂. How do I complete this...
Discrete Math
Sorry I still don't get it. Can someone please explain?
Discrete Math
Solve the recurrence relation a_n = -2a_{n-1} + 15a_{n-2}, n ≥ 2, given a₀ = 1, a₁ = -1. x² + 2x - 15, the distinct roots 3 and -5, so a_n = C₁(3^n) + C₂(-5)^n. The initial condition gives a₀ = 1 = C₁ - C₂, a₁ = -1 = 3C...
Discrete Math
REVISED QUESTION: Why use mathematical induction to prove the sum of a sequence is valid?
Discrete Math
Why use mathematical induction to get the sum of a sequence? Also, if there are any websites you can recommend that will be a help too. However, I need an explanation rather than examples. Thank you
for any helpful replies.
So, this is how far I got. . .I'm getting weird numbers. . . [-3072(1 - (-1/2)⁹)] / [1 - (-1/2)] =
Thank you for responding. Hmm... IDK. . .I'll have to ask if that's a typo on the other end. Hey do you mind seeing if this is correct, and helping with the second part? Consider the geometric
sequence that begins -3072 and common ratio 1/2. Find the 13th and 20t...
Can someone please calculate this: 48(-1/2)^6 The answer is 3/2, but I get 3/4. What am I doing wrong?
Discrete Math
OK thank you!
Discrete Math
I want to verify that this is correct: An arithmetic sequence begins, 116, 109, 102 Determine whether -480 belongs to this sequence, if it does, what is its term number? -480 = 116 + (n - 1)(-7) n
- 1 = 85.142. . . So, that means -480 does not belong to this sequence, r...
Discrete Math
Thank you!
Discrete Math
An arithmetic sequence begins, 116, 109, 102 Find the 300th term of this sequence.
Discrete Math
Yes, they both follow the same recursive definition. I was just trying the second part on my own to see if I understand. Sorry about the misunderstanding. . .
Discrete Math
f(n) is defined recursively by f(0) = 1 and for n = 0, 1, 2, . . . Find f(1), f(2), f(3)
Discrete Math
Oops. . .Sorry disregard previous post. . .
Discrete Math
OK example: f(n+1) = 3f(n) f(1) = 3 f(2) = 6 f(3) = 9 Right?
Discrete Math
The f(n+1) is throwing me off what does that mean?
Discrete Math
Ok thank for the responses, but there seems to be a contradiction between the two. Wouldn't f(1) = 1 + 2, which equals 3?
Discrete Math
Find f(1), f(2), and f(3) if f(n) is defined recursively by f(0) = 1 and for n = 0, 1, 2, . . . f(n+1) = f(n) + 2 So, would it be f(n) = f(n+1) + 2? Or would I just keep it like the original and
plug in 1, 2, 3. Thanks for any helpful replies.
Discrete Math
Here is the solution: The mistake is in applying the inductive hypothesis to look at max(x −1, y −1) . Notice that we are doing induction on n not on x or y. Even though x and y are positive
integers, x −1 and y −1 need not be (one or both could be 0). ...
Discrete Math
Oh okay. . .I get it. . .Thank you so much for your help :)
Discrete Math
Thank you! So, going back to your counterexample in post 9:52: x=4, y=6, n=max(x,y)=6 Why does it =6? Sorry if this seems like a silly question. . .
Discrete Math
No, the question verbatim is "What is wrong with this proof?"
Discrete Math
Thank you for responding. Yes everything is typed correctly. I want to find what is wrong with proof.
Discrete Math
Theorem: For every integer n, if x and y are positive integers with max(x, y) = n, then x = y. Basic Step: Suppose that n = 1. If max(x, y) = 1 and x and y are positive integers, we have x = 1 and y
= 1. Inductive Step: Let k be a positive integer. Assume that whenever max(x, ...
Discrete Math
Use mathematical induction to prove the truth of each of the following assertions for all n ≥ 1. 5^(2n) - 2^(5n) is divisible by 7 If n = 1, then 5^(2·1) - 2^(5·1) = 25 - 32 = -7, which is divisible by 7. For the
inductive case, assume k ≥ 1, and the result is true for n = k; ...
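A quick check before writing out the induction (my addition): 5² = 25 and 2⁵ = 32 are both congruent to 4 (mod 7), so 5^(2n) - 2^(5n) ≡ 4^n - 4^n ≡ 0 (mod 7) for every n. The loop confirms the first cases.

```python
for n in range(1, 20):
    assert (5**(2*n) - 2**(5*n)) % 7 == 0

# the reason: both bases collapse to the same residue mod 7
print(5**2 % 7, 2**5 % 7)   # 4 4
```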
Discrete Math
Any suggestions?
Discrete Math
Yea that's what I thought. . .Hey if you don't mind helping me further I have been working on this problem for a while and I am a bit stuck. IDK where to go from here or if I am doing it correctly:
Use mathematical induction to prove the truth of each of the following ...
Discrete Math
Ok thank you for your helpful response! I have a couple of questions though. . . Is the 15th line suppose to be '(k+1)(k+2)/(22k+3)'? Also, the 16th line = RS, which is what exactly?
Discrete Math
Use mathematical induction to establish the following formula. Σ_{i=1}^{n} i² / [(2i-1)(2i+1)] = n(n+1) / [2(2n+1)] Thanks for any helpful replies :)
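An identity like this can be spot-checked with exact rational arithmetic before attempting the induction (my illustration, with names of my choosing):

```python
from fractions import Fraction

def lhs(n):
    # sum of i^2 / ((2i-1)(2i+1)) for i = 1..n, kept as exact fractions
    return sum(Fraction(i * i, (2*i - 1) * (2*i + 1)) for i in range(1, n + 1))

def rhs(n):
    return Fraction(n * (n + 1), 2 * (2*n + 1))

for n in range(1, 30):
    assert lhs(n) == rhs(n), n
print(lhs(3), rhs(3))   # 6/7 6/7
```

A check like this does not replace the proof, but it catches a mistyped formula immediately.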
Discrete Math
Any suggestions?
Discrete Math
Thank you so much for your response! But I have completed that particular question. However, can you please help with this one? I am confused. . . Use mathematical induction to establish the
following formula. Σ_{i=1}^{n} i² / [(2i-1)(2i+1)] = n(n+1) / [2(2n+1)] Thanks for...
Discrete Math
Use mathematical induction to prove the truth of each of the following assertions for all n ≥ 1. n³ + 5n is divisible by 6 I really do not understand this too much. This is what I have so far: n = 1,
1³ + 5(1) = 6, which is divisible by 6 Then I really don't ...
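The usual inductive step here (my sketch, not from the thread) uses the identity (k+1)³ + 5(k+1) = (k³ + 5k) + 3k(k+1) + 6: the first summand is divisible by 6 by the induction hypothesis, 3k(k+1) is divisible by 6 because k(k+1) is a product of consecutive integers and hence even, and so is 6 itself. The identity and the divisibility claims can be checked mechanically:

```python
# Inductive step identity: (k+1)^3 + 5(k+1) = (k^3 + 5k) + 3k(k+1) + 6
for k in range(0, 100):
    lhs = (k + 1)**3 + 5 * (k + 1)
    rhs = (k**3 + 5*k) + 3*k*(k + 1) + 6
    assert lhs == rhs
    assert (3 * k * (k + 1)) % 6 == 0   # k(k+1) is even, so 3k(k+1) is a multiple of 6
    assert (k**3 + 5*k) % 6 == 0        # the statement itself, as a cross-check
print("identity holds for k = 0..99")
```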
Discrete Math
OK you gave me a lot to think about. . .Thank you so much for your help. Until next time (which may be tomorrow). Thanks again :)
Discrete Math
So, I worked out a couple. . . 1.) 4x ≡ 2(mod 6) x = 0, 4(0) - 2 = -2 is not divisible by 6 x = 1, 4(1) - 2 = 2 is not divisible by 6 x = 2, 4(2) - 2 = 6 is divisible by 6 x = 3, 4(3) - 2 = 10 is not
divisible by 6 x = 4, 4(4) - 2 = 14 is not divisible by 6 x = 5, 4(5) -...
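The trial-and-error table is exactly what a one-line search does (my illustration); it also turns up the second residue, x = 5, which is expected since gcd(4, 6) = 2 divides 2 and so there are two solutions mod 6.

```python
# Solve 4x ≡ 2 (mod 6) by checking every residue class
solutions = [x for x in range(6) if (4 * x - 2) % 6 == 0]
print(solutions)   # [2, 5]
```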
Discrete Math
So, once I reduce it to this:x≡80 (mod 148). Then I have to do the trial and error process? Idk, I get how you got 80, but how did get 228, 376, 524. . .I think I see a pattern though each are
incremented by 148. Hmm. . .I have another example that I would like to share,...
Closure of singular points
Let $f(x,y)$ be a complex degree $d$ polynomial that has this particular form.
$$ f = \frac{f_{02}}{2} y^2 + \frac{f_{21}}{2} x^2 y + \frac{f_{12}}{2} x y^2 + \frac{f_{03}}{6} y^3 + \frac{f_{40}}{24} x^4+ \ldots $$
This polynomial $f$ can be thought of as an element of $\mathbb{C}^{M_d}$, where $M_d = \frac{d^2+3d-10}{2}$. Note that aside from vanishing at the origin, the following derivatives at the origin
also vanish $$ f_{10}, f_{01}, f_{20}, f_{11}, f_{30}=0.$$
Let us now define a subset of $$ A_4^1 \subset \mathbb{C}^{M_d} \times \mathbb{C}^2$$ given by
$$ A_{4}^1:= \{ (f,x,y) \in \mathbb{C}^{M_d} \times \mathbb{C}^2 : f(x,y)=0, ~~f_{x} =0, ~~ f_{y} =0, ~~ (x,y) \neq (0,0), ~~ f_{02} \neq 0, $$
$$ f_{40} f_{02} - 3 f_{21}^2 =0 \}.$$
I have a question regarding the closure of the space $\overline{A_{4}^1}$. Suppose the curve $x(t) = L_1 t$ and $ y(t) = t^2 $, $t\neq 0$, lies in the space $A_4^1$ for all $t\neq 0 $. Further
suppose that $f_{02}(t) = L_2 t^r$, for some $r > 0$. Assume that $L_1$ and $L_2$ are fixed non zero complex numbers (they don't depend on $t$).
What happens to the derivatives $f_{ij}$ in the limit as $t$ tends to zero? We basically want to see what happens in the closure when you approach it via the path $ x = L_1 t$, $y = t^2$ and $f_{02}
= L_2 t^r$.
It is easy to see that $f_{21}$ will tend to zero, using the equation $f_{y}=0$. Further, using that $f_{21}$ will tend to zero and using $f_{x} =0$ we get that $f_{40}$ will go to zero. I expect
another condition to come up, using the fact that $$ f_{40} f_{02} - 3 f_{21}^2 =0.$$
In fact, I expect (but can't prove) that in the limit
$$ \frac{-f_{31}^2}{24} + \frac{f_{50} f_{12}}{40} =0.$$
In any case even if that last claim is wrong, I still expect another condition to come up. The remaining coefficients can not be arbitrary is what I think. May be we get different conditions
depending on what $r$ is?
This may seem like a random question, but let me explain intuitively what I am asking. Look at the form of the function $f$ that I have taken. This curve has an $A_3$ singularity (a tacnode) at the
origin. What this is means is that at the origin, the first derivatives vanish, the Hessian has a Kernel ( which we have fixed to be $(1,0)$) and the third derivative along the kernel of the Hessian
is zero. The condition $$ f_{40} f_{02} - 3 f_{21}^2 =0$$ is the condition for an $A_4$ singularity. Hence, the space $A_4^1$ is the space of curves with an $A_4$ singularity at the origin and one
node at a point distinct form the origin. I wish to know how much more singular the curves becomes if the two points come together in the particular way I said i.e $ x = L_1 t$, $y = t^2$ and $f_{02}
= L_2 t^r$. The conditions $f_{02}=0$, $f_{21} =0$ and $f_{40} =0$ imply that the curve is at least as singular as a $D_6$-node. I expect it to be as singular as a $D_7$ node which is the condition
$$ \frac{-f_{31}^2}{24} + \frac{f_{50} f_{12}}{40} =0.$$
singularity-theory real-analysis
You probably mean that (f(t),x(t),(t)) is an analytic (?) germ of curve which for $t\ne 0$ lies in $A^1_4$. No? – quim Oct 4 '11 at 14:50
The limit is indeed a D_7 singularity. I know how to prove it using the "Horace method", and will try to post an answer, but do you even know what the Horace method is? – quim Oct 4 '11 at 15:01
I do not know what the horace method is. But I would appreciate a reply based on that. If you can give me a reference to look up Horace method, it would be nice. – Ritwik Oct 4 '11 at 16:48
What I was wondering is, isn't there some simple way to prove this? It seems to me that some "clever" combination like $$f_{02} yf+\frac{1}{4} x^2 y f_{12}f_{x}=0$$ should "magically" yield the
expression $$ \frac{-f_{31}^2}{24} + \frac{f_{50} f_{12}}{40}$$ and cancel some things earlier. I am not sure what the combination is but something like that. Does that seem to be a promising thing
to try? – Ritwik Oct 4 '11 at 16:55
Évain computed a few "collisions" using the (differential) Horace method in his thesis math.univ-angers.fr/~evain/0lancement.ps and in the paper Compactifications des espaces de configuration dans
les schémas de Hilbert Bull. Soc. Math. France 133 (2005), no. 4, 497--539. With his method it is not hard to show that a node colliding transversely with an ordinary cusp gives as a limit
singularity a D5 singularity (which can probably be proved in other ways) and then blow up the origin (as explained in my answer below) gives what you want. – quim Oct 5 '11 at 7:47
3 Answers
If I understand correctly, you ask what can be the results of the collision of two singular points, of $A_4$ and $A_1$ types. In general there does not seem to exist an ultimate
effective method to treat such questions. Only in some simple cases, for example in this case.
A somewhat simpler question is: given a point of some singularity type, to which other types can it split by deformation? Of course, it is enough to classify only the "primitive
splittings", i.e. those that cannot be factorized through others.
In your particular case, if one restricts to ADE types, you ask: Which types deform to $A_4+A_1$? For ADE's you can use the classical criterion (Grothendieck/Brieskorn/Lyashko) that
says: a type S (one of ADE's) deforms to a bunch of types $(S_1,..,S_k)$ iff the disjoint union of Dynkin diagrams of $(S_1,..,S_k)$ is obtained from that of $S$ by erasing some
vertices.

Therefore you get immediately that $D_7$, $E_6$ and $A_6$ deform to $A_4+A_1$ while $D_6$ does not deform.
Unfortunately for higher singularities no such simple general "iff" criterion is known. In your particular case, however the deformation of any other singularity to $A_4+A_1$ factorizes
through these "prime" splittings.
You can see some additional results and references in my paper.
This approach is nice, but Ritwik is asking about a specific nongeneric collision, not about all possible collisions. More precisely, his node approaches along a smooth curve which
has intersection exactly 2 with the smooth curve with maximal contact with the $A_4$ – quim Oct 4 '11 at 16:18
Yes exactly! I am asking about a SPECIFIC collision. – Ritwik Oct 4 '11 at 17:35
Well, the local Bezout theorem is of some help: this smooth curve along which the points collide intersects the singularities with the total multiplicity 4+2. So, it must intersect
the resulting singularity with multiplicity at least 6. This rules out $E_6$. One can use some other very special tricks (of very limited use). The general approach is just to take an
honest flat limit of your family. I guess this can be done in many computer systems (Singular etc.) – Dmitry Kerner Oct 4 '11 at 18:56
@qui-vadis: (1) Ritwik already knows it is at least a $D_6$, so $E_6$ and $A_6$ are ruled out, and it follows it is at least a $D_7$ (using your GBL criterion). (2) Indeed your "local
Bézout" argument proves that in addition to being a $D_7$ it has maximal contact 6 with the curve $x^2=L_1^2y$, so one can in fact prove two more "relations between derivatives", not
just one. – quim Oct 5 '11 at 12:42
@qui-vadis:(3) The initial family $A^1_4$ is not linear (the equation setting $A_4$ is quadratic) and this means it is not given by an ideal (or subscheme) which would be the natural
setting for the computation of a flat limit. We would need a second parameter u to account for the variation of $f_{40}/f_{21}$, compute the flat limit for each u and show that it is
in fact independent of u. – quim Oct 5 '11 at 12:47
Assuming you know that a node colliding transversely with an ordinary cusp gives as a limit singularity a $D_5$ singularity, the answer is easier.
Blow up the point $(0,0)$ (ie, take y=xz, $f_t$ becomes divisible by $x^2$) and the family of proper transforms $f_t(x,xz)/x^2$ of your $f_t$'s have exactly an ordinary cusp and a node
approaching transversely. The limit curve has at least a $D_5$, ie, it has intersection multiplicity 3 with the exceptional. The proper transform of a point of multiplicity 2 cannot
intersect with multiplicity 3, so x (the equation of the exceptional divisor) divides $f_0(x,xz)/x^2$ at least once (it is the smooth branch of the $D_5$), and the quotient (which is the
actual strict transform of the limit curve, $f_0(x,xz)/x^3$) has at least an $A_2$ (ie an ordinary cusp). So the limit curve has a $D_7$ as you say.
Almost everything in this answer has already been said by qui-vadis or in the comments, but now I'll translate it to your notation. I'll write $f(x,y,t)$ for $f$, and $f(x,y,0)$ for the
limit $f$.
First remark that $u^4(u-t)^2$ divides $f(L_1u,u^2,t)$ so $u^6$ divides $f(L_1u,u^2,0)$ (this is qui-vadis' local Bézout). Expanding this and passing to the limit, $$\frac{L_1^5}{5!}f_{50}+\frac{L_1^3}{3!}f_{31}+\frac{L_1}{2}f_{12}=0.$$

Next observe that the limit vanishings of $f_{40}$, $f_{21}$ and $f_{02}$, together with $f_{40}(L_1t,t^2,t)f_{02}(L_1t,t^2,t)−3f^2_{21}(L_1t,t^2,t)=0$, give $$Q:=f_{t40}(0,0,0)f_{t02}(0,0,0)−3f^2_{t21}(0,0,0)=0,$$ i.e., the limit of $f_t=\partial f/ \partial t$ also has an $A_4$ at least. Now using $f_x=0, f_y=0$ and the vanishing (in the limit) of $f_{ij}$ for $i+2j\le 4$
which you already know, it is possible to write the unknowns $f_{t40}(0,0,0)$, $f_{t02}(0,0,0)$, $f_{t21}(0,0,0)$ in terms of $f_{50}$, $f_{31}$ and $f_{12}$. Substitute in $Q$, and the
resulting equation is exactly what you were looking for.
Posts by
Total # Posts: 42
physical science
its 5th grade math, what other way can i show it
how did you get #8a, can you help me step by step please
three times as many children as adults attended a concert on saturday. an adult ticket cost $7 and a child ticket cost $3. the theater collected a total of $6000. how many people bought tickets?
its 5 grade math, can you make it more understanding
the cost of 5 similar digital cameras and 3 similar video cameras is $3213 each video camera costs 4 times as much as each digital camera. john buys a digital camera and a video camera. how much does
he pay?
mr. jacob is 55 years old and tony is 7 years old. in how many years will mr. jacobs be 4 times as old as tony?
naomi macy and sebastian have 234 stamps in all. naomi gives 16 stamps to macy and 24 stamps to sebastian. naomi then has 3 times as many stamps as macy, and macy has twice as many stamps as
sebastian. how many stamps does naomi have at first?
on a farm, there are some cows and some chickens. if the animals have a total of 40 heads and 112 legs, how many cows are there?
jane had $7 and her sister had $2. their parents gave each of them an equal amount of money. then, jane had twice as much money as her sister. how much money did their parents give each of them
apples are sold at 3 for $2 at busy mart. at big food, the same apples are sold at 5 for $2. kassim buys 15 apples from big foods instead of buys mart. how much does he save?
how many times can you make a perimeter of 10?
Find the slope of the line that passes through the given pair of points. (-1, 2) and (3, 4)
If a line L1 has equation y = mx + b, where m and b are constants with m ≠ 0, then an equation of a line L2 perpendicular to L1 has the form ___, where C is a constant.
carol has died, begin death
what is electric power?
MINE Math
Am i right?
MINE Math
i think the anser is 200$
MINE Math
16. Calculate the interest earned on a savings account with $800.00 that is invested at 5% annual simple interest for 5 years. (1 point)
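For this simple-interest question the formula is I = P·r·t; a one-line check (mine, not from the thread):

```python
P, r, t = 800.00, 0.05, 5   # principal, annual rate, years
interest = P * r * t
print(interest)   # 200.0
```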
The data below are the final exam scores of 10 randomly selected statistics students and the number of hours they studied for the exam. Calculate the correlation coefficient r.
A ball is thrown off a cliff from a ht of 180 ft above sea level. The ht s of the ball above sea water level at time t is s(t)=-16t2+32t+180. When will the ball strike the water?
Spanish 1-Please check
It goes like this: La Clase de Economia and La Clase de Anatomia
Spanish 1-Please check someone
1. Me gusta nadar y correr en patineta 3. Me gustan las hamburguesas con papas fritas o la paella. Me gusta comer en restaurantes pequenos y rapidos
How many grams of sucrose would you need to prepare 10 mL of .01M solution of sucrose (FW = 342 g/mole)? Show your work
How many 1's go into 4??
Zn(s) + 2 HCl(aq) H2(g) + ZnCl2(aq) When 25.0 g of Zn reacts, how many L of H2 gas are formed at STP? 1. 0.382 L 2. 0.0171 L 3. 22.4 L 4. 8.56 L 5. 4.28 L
functional english
How can i describe three different scenarios where stress can build up and affect a person? Also i have no idea on how to start! please help?
fuctional english level 2
Describe three scenarios where stress can build up and affect a person. Can someone help me on how to describe a scenario? Examples?
A hotdog vendor named Sam is walking from the southwest corner of Central park to the Empire State Building. He starts at the intersection of 59th Street and Eighth Avenue, walks 3 blocks east to
59th Street and Fifth Avenue, and then 25 blocks south to the Empire State Buildi...
a fruit is most commonly _______? A. modified root B. a mature female gametophyte c. mature ovary d. a thickened style e. enlarged ovule
thanks so much I like it that you did not just put the correct answer but gave me sights it really helps me learn. the answer i have to the worksheet is the use of chlorophylls a and b, starch as a
major stored compound, and cellulose in cell walls does that sound right?? Than...
what are the characteristics that link plants with green algae??
Algebra: Simple Interest
Problem: Geoff has $5000 to invest. He wants to earn $568 in interest in one year. He will invest part of the money at 12% and the other part at 10%. How will he invest. +I already know how to do
Simple Interest, I just can't read the problem to find what i = p = r = t =
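One standard setup (my sketch, with names of my choosing): let x be the amount invested at 12% and 5000 − x the amount at 10%; the interest condition is 0.12x + 0.10(5000 − x) = 568. Keeping everything in whole percents keeps the arithmetic exact:

```python
total, target = 5000, 568

# 12% on x plus 10% on the rest must give the target:
# 12*x + 10*(total - x) = 100*target  =>  2*x = 100*target - 10*total
x = (100 * target - 10 * total) // 2
print(x, total - x)        # 3400 1600
assert 12 * x + 10 * (total - x) == 100 * target
```

So $3400 goes at 12% and $1600 at 10% (check: 408 + 160 = 568).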
Divide first since you do it left to right so 12/6 = 2 2 x 3 - 5 6 - 5 =1
Ohhh I get it! Thank you :D It took be 45seconds to figure that out, after you helped.
There are two similar triangles XYZ, and ABC. If x = 8, c = 5, and y = b, find z and a. How am I suppose to solve this problem if y = b? Do I find what b is first? Or will the answer be with a
There are two similar triangles XYZ, and ABC. If x = 12 and z = 3c, find a. How do I work with that problem when z = 3c?
What is 8.2% as a fraction in it's lowest term Please someone kindly help!
what are 2 examples(policy positions) in which the obama administration has worked to the advantage of corporate interests?
there are 24 hours in a day and scientists tell us that we should sleep for 3/8 of the day.how much time should we spend sleeping?
Need help finding a good website on Paris? Ones I find are always the same info or are just ads for places you can stay in Paris. These sites have a lot of information about the history of my
favorite city plus the backgrounds of some of its most famous attractions. http://en...
We're looking at c = 2 and f(c) = f(2) = 4. We want this point to be in the center of the window, so start by graphing f with
0 ≤ x ≤ 4
0 ≤ y ≤ 8.
We see this:
We want to have f(x) within 0.1 of f(c). Therefore we want to have f(x) within 0.1 of 4, or 3.9 ≤ f(x) ≤ 4.1.
Change the values of y in the calculator window accordingly, to make this picture:
Yes, it looks crazy. That means we'll need to restrict x a lot. Start with letting x only move 0.1 away from 2,
so 1.9 ≤ x ≤ 2.1,
and the picture looks like this:
Since we don't have a picture yet, try some other numbers. Having 1.95 ≤ x ≤ 2.05 doesn't work, and neither does 1.97 ≤ x ≤ 2.03.
However, 1.98 ≤ x ≤ 2.02 does work, as does 1.99 ≤ x ≤ 2.01.
Our final answer could be either of these. We could say "restrict x to within 0.02 of 2" or "restrict x to within 0.01 of 2," and either of these answers would be right.
There are other right answers, also. As long as the point (c,f(c)) is in the center of the graph, the y values are what we want, and the graph shows a curve exiting on the sides of the graph (meaning
that we can see what f(x) is doing for all values of x in the window), we have a picture and we've restricted x appropriately. | {"url":"http://www.shmoop.com/continuity-function/continuity-informal-examples.html","timestamp":"2014-04-20T21:03:35Z","content_type":null,"content_length":"28932","record_id":"<urn:uuid:6cd31230-2853-4376-b4a8-1e40706835fd>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00534-ip-10-147-4-33.ec2.internal.warc.gz"} |
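This trial-and-error window search can also be scripted. Assuming f(x) = x² — consistent with f(2) = 4 and the 0 ≤ y ≤ 8 window, though the excerpt never restates f — sample each candidate interval and test whether f stays inside [3.9, 4.1]:

```python
def stays_in_band(f, lo, hi, band_lo, band_hi, samples=2001):
    # Sample f on [lo, hi] and check every value lands in the band.
    xs = [lo + (hi - lo) * i / (samples - 1) for i in range(samples)]
    return all(band_lo <= f(x) <= band_hi for x in xs)

def f(x):
    return x * x   # assumed from f(2) = 4 and the 0 <= y <= 8 window

for delta in [0.1, 0.05, 0.03, 0.02, 0.01]:
    ok = stays_in_band(f, 2 - delta, 2 + delta, 3.9, 4.1)
    print(delta, ok)
```

The loop reports failures down through δ = 0.03 and success at δ = 0.02 and δ = 0.01, matching the windows tried by hand above.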
• http://hdl.handle.net/10048/1205
• Extensions of Skorohod’s almost sure representation theorem
• Hernandez Ceron, Nancy
• en
• Skorohod’s a. s. representation theorem
Weak convergence of probability measures
Convergence in probability
• Jul 8, 2010 5:06 PM
• Thesis
• en
• Adobe PDF
• 810314 bytes
• A well known result in probability is that convergence almost surely (a.s.) of a sequence of random elements implies weak convergence of their laws. The Ukrainian mathematician Anatoliy
Volodymyrovych Skorohod proved the lemma known as Skorohod’s a.s. representation Theorem, a partial converse of this result. In this work we discuss the notion of continuous representations,
which allows us to provide generalizations of Skorohod’s Theorem. In Chapter 2, we explore Blackwell and Dubins’s extension [3] and Fernique’s extension [10]. In Chapter 3 we present Cortissoz’s
result [5], a variant of Skorokhod's Theorem. It is shown that, given a continuous path in M(S), one can associate with it a continuous path with fixed endpoints in the space of S-valued random elements
on a nonatomic probability space, endowed with the topology of convergence in probability. In Chapter 4 we modify Blackwell and Dubins representation for particular cases of S, such as certain
subsets of R or R^n.
• Master's
• Master of Science
• Department of Mathematical and Statistical Sciences
• Fall 2010
• Schmuland, Byron (Mathematical and Statistical Sciences)
• Litvak, Alexander (Mathematical and Statistical Sciences)
Beaulieu, Norman C. (Electrical and Computer Engineering) | {"url":"https://era.library.ualberta.ca/public/view/item/uuid:fd919733-a9a5-4c1b-a795-cf0935fba884","timestamp":"2014-04-19T22:09:33Z","content_type":null,"content_length":"57824","record_id":"<urn:uuid:23d461e4-0c8e-4114-a63b-46c9ee7f9580>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00222-ip-10-147-4-33.ec2.internal.warc.gz"} |
[R] sample size > 20K? Was: fitness of regression tree: how to measure???
Ravi Varadhan rvaradhan at jhmi.edu
Thu Apr 1 23:23:13 CEST 2010
The discussion of Leo Breiman's paper in Statistical Science: Statistical Modeling - The Two cultures, is a must read for all statisticians doing prediction modeling. Especially see the exchange between Cox and Breiman (I call this the Cox-Breiman duel).
Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University
Ph. (410) 502-2619
email: rvaradhan at jhmi.edu
----- Original Message -----
From: Bert Gunter <gunter.berton at gene.com>
Date: Thursday, April 1, 2010 12:55 pm
Subject: Re: [R] sample size > 20K? Was: fitness of regression tree: how to measure???
To: 'Frank E Harrell Jr' <f.harrell at vanderbilt.edu>, 'vibha patel' <vibhapatelddu at gmail.com>
Cc: r-help at r-project.org
> Since Frank has made this somewhat cryptic remark (sample size > 20K)
> several times now, perhaps I can add a few words of (what I hope is) further
> clarification.
> Despite any claims to the contrary, **all** statistical (i.e. empirical)
> modeling procedures are just data interpolators: that is, all that
> they can
> claim to do is produce reasonable predictions of what may be expected
> within
> the extent of the data. The quality of the model is judged by the goodness
> of fit/prediction over this extent. Ergo the standard textbook caveats
> about
> the dangers of extrapolation when using fitted models for prediction.
> Note,
> btw, the contrast to "mechanistic" models, which typically **are** assessed
> by how well they **extrapolate** beyond current data. For example, Newton's
> apple to the planets. They are often "validated" by their ability to "work"
> in circumstances (or scales) much different than those from which they
> were
> derived.
> So statistical models are just fancy "prediction engines." In particular,
> there is no guarantee that they provide any meaningful assessment of
> variable importance: how predictors causally relate to the response.
> Obviously, empirical modeling can often be useful for this purpose,
> especially in well-designed studies and experiments, but there's no
> guarantee: it's an "accidental" byproduct of effective prediction.
> This is particularly true for happenstance (un-designed) data and
> non-parametric models like regression/classification trees. Typically,
> there
> are many alternative models (trees) that give essentially the same quality
> of prediction. You can see this empirically by removing a modest random
> subset of the data and re-fitting. You should not be surprised to see
> the
> fitted model -- the tree topology -- change quite radically. HOWEVER,
> the
> predictions of the models within the extent of the data will be quite
> similar to the original results. Frank's point is that unless the data
> set
> is quite large and the predictive relationships quite strong -- which
> usually implies parsimony -- this is exactly what one should expect.
> Thus it
> is critical not to over-interpret the particular model one get, i.e. to
> infer causality from the model (tree)structure.
> Incidentally, there is nothing new or radical in this; indeed, John Tukey,
> Leo Breiman, George Box, and others wrote eloquently about this
> decades ago.
> And Breiman's random forest modeling procedure explicitly abandoned efforts
> to build simply interpretable models (from which one might infer causality)
> in favor of building better interpolators, although assessment of "variable
> importance" does try to recover some of that interpretability
> (however, no
> guarantees are given).
> HTH. And contrary views welcome, as always.
> Cheers to all,
> Bert Gunter
> Genentech Nonclinical Biostatistics
> -----Original Message-----
> From: r-help-bounces at r-project.org [ On
> Behalf Of Frank E Harrell Jr
> Sent: Thursday, April 01, 2010 5:02 AM
> To: vibha patel
> Cc: r-help at r-project.org
> Subject: Re: [R] fitness of regression tree: how to measure???
> vibha patel wrote:
> > Hello,
> >
> > I'm using rpart function for creating regression trees.
> > now how to measure the fitness of regression tree???
> >
> > thanks n Regards,
> > Vibha
> If the sample size is less than 20,000, assume that the tree is a
> somewhat arbitrary representation of the relationships in the data and
> that the form of the tree will not replicate in future datasets.
> Frank
> --
> Frank E Harrell Jr Professor and Chairman School of Medicine
> Department of Biostatistics Vanderbilt University
> ______________________________________________
> R-help at r-project.org mailing list
> PLEASE do read the posting guide
> and provide commented, minimal, self-contained, reproducible code.
Question about limits
March 21st 2010, 07:49 AM #1
Mar 2010
Question about limits
In part (a) of my questions I'm asked to show $\lim_{n \rightarrow{\infty}}[\frac{1}{n-1}]^\frac{1}{n} = 1$
I have done this, and part (b) asks me to consider a sequence of functions
$f_n : [0,1] \to \mathbb{R} : x \to \frac{x}{1 + x^n}$,
then for each fixed $n \in \mathbb{N}$ determine the point $\varepsilon(n) \in [0,1]$ at which the function $f_n$ attains its maximum value and hence, calculate
$\lim_{n \rightarrow{\infty}}f_n(\varepsilon(n))$
I'm told I may use part a. I'm not sure if $\varepsilon(n)$ is meant to represent x values in which case the max would be achieved (as I can see) at x = 1, but I can't see the relation with part
(a) by doing that.
Let's solve $f_n'(x)=0 \implies \frac{1-(n-1)x^n}{(x^n+1)^2}=0 \implies 1-(n-1)x^n=0 \implies x=\left(\frac{1}{n-1}\right)^\frac{1}{n}$.
Hence $f_n(x)$ has a critical point at $x=\left(\frac{1}{n-1}\right)^\frac{1}{n}$. Now verify that this is indeed a maximum.
Aah, I see, I was thinking in terms of looking for the supremum like when establishing uniform convergence. Didn't think of maximum like that. Silly me
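A quick numeric check of the answer (a sketch): at the critical point x^n = 1/(n−1), so f_n(ε(n)) = ε(n)·(n−1)/n, which tends to 1 by part (a).

```python
def f(n, x):
    return x / (1 + x ** n)

def critical_point(n):
    # solves 1 - (n-1) x^n = 0 on [0, 1]; valid for n >= 2
    return (1 / (n - 1)) ** (1 / n)

for n in [2, 10, 100, 1000]:
    x = critical_point(n)
    print(n, round(x, 6), round(f(n, x), 6))
```

The printed values of f_n(ε(n)) climb toward 1, matching the limit computed above.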
Rigged Partitions
Class and methods of the rigged partition which are used by the rigged configuration class. This is an internal class used by the rigged configurations and KR tableaux during the bijection, and is
not to be used by the end-user.
We hold the partition as a 1-dim array of positive integers where each value corresponds to the length of a row. This is the shape of the partition, which can be accessed by the regular index.
The data for the vacancy numbers is also stored in a 1-dim array in which each entry corresponds to a row of the tableau, and similarly for the partition values.
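To make the storage description concrete, here is a hypothetical plain-Python mock of the layout (not the Sage class itself):

```python
# Three parallel lists indexed by row, mirroring the partition [2, 1]
# with vacancy numbers [0, -1] and riggings [0, -1] from the examples.
shape = [2, 1]       # row lengths of the partition
vacancy = [0, -1]    # vacancy number of each row
rigging = [0, -1]    # rigging of each row

# Each row renders as  <vacancy>[ ]...[ ]<rigging>, matching the ASCII
# display produced in the examples.
for length, v, r in zip(shape, vacancy, rigging):
    print(f"{v}{'[ ]' * length}{r}")
```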
• Travis Scrimshaw (2010-09-26): Initial version
Convert this to using multiplicities \(m_i\) (perhaps with a dictionary?)?
class sage.combinat.rigged_configurations.rigged_partition.RiggedPartition(shape=None, rigging_list=None, vacancy_nums=None)
Bases: sage.combinat.combinat.CombinatorialObject
The RiggedPartition class is the data structure of a rigged (i.e. marked or decorated) Young diagram of a partition.
Note that this class as a stand-alone object does not make sense since the vacancy numbers are calculated using the entire rigged configuration. For more, see RiggedConfigurations.
sage: RC = RiggedConfigurations(['A', 4, 1], [[2, 2]])
sage: RP = RC(partition_list=[[2],[2,2],[2,1],[2]])[2]
sage: RP
0[ ][ ]0
-1[ ]-1
sage: type(RP)
<class 'sage.combinat.rigged_configurations.rigged_partition.RiggedPartition'>
get_num_cells_to_column(end_column, t=1)
Get the number of cells in all columns before the end_column.
☆ end_column – The index of the column to end at
☆ t – The scaling factor
sage: RC = RiggedConfigurations(['A', 4, 1], [[2, 2]])
sage: RP = RC(partition_list=[[2],[2,2],[2,1],[2]])[2]
sage: RP.get_num_cells_to_column(1)
sage: RP.get_num_cells_to_column(2)
sage: RP.get_num_cells_to_column(3)
sage: RP.get_num_cells_to_column(3, 2)
Insert a cell given at a singular value as long as it is less than the specified width.
Note that insert_cell() does not update riggings or vacancy numbers, but it does prepare the space for them. Returns the width of the row we inserted at.
☆ max_width – The maximum width (i.e. row length) that we can insert the cell at
☆ The width of the row we inserted at.
sage: RC = RiggedConfigurations(['A', 4, 1], [[2, 2]])
sage: RP = RC(partition_list=[[2],[2,2],[2,1],[2]])[2]
sage: RP.insert_cell(2)
sage: RP
0[ ][ ][ ]None
-1[ ]-1
remove_cell(row, num_cells=1)
Removes a cell at the specified row.
Note that remove_cell() does not set/update the vacancy numbers or the riggings, but guarantees that the location has been allocated in the returned index.
☆ row – The row to remove the cell from
☆ num_cells – (Default: 1) The number of cells to remove
☆ The location of the newly constructed row, or None if unable to remove a row or if a row was deleted.
sage: RC = RiggedConfigurations(['A', 4, 1], [[2, 2]])
sage: RP = RC(partition_list=[[2],[2,2],[2,1],[2]])[2]
sage: RP.remove_cell(0)
sage: RP
0[ ]0
-1[ ]-1
class sage.combinat.rigged_configurations.rigged_partition.RiggedPartitionTypeB(arg0, arg1=None, arg2=None)
Bases: sage.combinat.rigged_configurations.rigged_partition.RiggedPartition
Rigged partitions for type \(B_n^{(1)}\) which has special printing rules which comes from the fact that the \(n\)-th partition can have columns of width \(\frac{1}{2}\). | {"url":"http://sagemath.org/doc/reference/combinat/sage/combinat/rigged_configurations/rigged_partition.html","timestamp":"2014-04-21T04:34:35Z","content_type":null,"content_length":"23034","record_id":"<urn:uuid:1e05ef8c-82e6-42fc-89ff-a443bef2853e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00029-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sparse Multiscale Gaussian Process Regression
C. Walder, K.I. Kim and B. Schölkopf
In: Proceedings of the 25th International Conference on Machine Learning (ICML 2008) (2008), ACM Press, New York, NY, USA.
Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of
its two inputs fixed. We generalise this for the case of Gaussian covariance function, by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length
scales). For a fixed number of basis functions and any given criteria, this additional flexibility permits approximations no worse and typically better than was previously possible. We perform
gradient based optimisation of the marginal likelihood, which costs O(m2n) time where n is the number of data points, and compare the method to various other sparse g.p. methods. Although we focus on
g.p. regression, the central idea is applicable to all kernel based algorithms, and we also provide some results for the support vector machine (s.v.m.) and kernel ridge regression (k.r.r.). Our
approach outperforms the other methods, particularly for the case of very few basis functions, i.e. a very high sparsity ratio.
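As a toy, pure-Python illustration of the underlying idea only — regression on m = 2 Gaussian basis functions, each with its own centre and its own length scale (the extra flexibility the abstract describes) — and not the paper's marginal-likelihood method; the data, centres, and scales below are invented, and regularised least squares stands in for the full g.p. machinery:

```python
import math

def phi(x, c, s):
    # Gaussian basis function with centre c and length scale s.
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

bases = [(-1.0, 0.5), (1.0, 1.5)]          # (centre, length scale): made up
xs = [i / 10 - 2 for i in range(41)]       # 41 inputs on [-2, 2]
ys = [math.exp(-x * x) for x in xs]        # made-up target curve

# Regularised least squares for the m = 2 weights:
#   (Phi^T Phi + lam I) w = Phi^T y,  solved here by Cramer's rule.
lam = 1e-6
A = [[sum(phi(x, *bi) * phi(x, *bj) for x in xs) for bj in bases] for bi in bases]
b = [sum(phi(x, *bi) * y for x, y in zip(xs, ys)) for bi in bases]
A[0][0] += lam
A[1][1] += lam

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
w = [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
     (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def pred(x):
    return sum(wj * phi(x, *bj) for wj, bj in zip(w, bases))

rmse = math.sqrt(sum((pred(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs))
print(rmse)
```

With the centres and length scales made free parameters, one would optimise them by gradient descent on a model-selection objective — which is where the paper's O(m²n) cost per gradient step arises.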
The equation a^M = b^N c^P in a free group
Results 1 - 10 of 88
- HANDBOOK OF FORMAL LANGUAGES, 1997
, 1992
Cited by 32 (13 self)
Alberto Apostolico (Purdue University and Università di Padova), Dany Breslauer (Columbia University), Zvi Galil (Columbia University and Tel-Aviv University). Summary of results: Optimal
concurrent-read concurrent-write parallel algorithms for two problems are presented: • Finding all the periods of a string. The period of a string can be computed by previous efficient parallel
algorithms only if it is shorter than half of the length of the string. Our new algorithm computes all the periods in optimal O(log log n) time, even if they are longer. The algorithm can be used to
compute all initial palindromes of a string within the same bounds. • Testing if a string is square-free. We present an optimal O(log log n) time algorithm for testing if a string is square-free,
improving the previous bound of O(log n) given by Apostolico [1] and Crochemore and Rytter [12]. We show matching lower bounds for the optimal parallel algorithms that solve the problems above on a
general alphab...
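The first result above — computing all the periods of a string — has a standard sequential counterpart via the border (failure) array, which makes the object being parallelized concrete. This sketch is mine, not the paper's parallel algorithm:

```python
def all_periods(s):
    # fail[i] is the length of the longest proper border of s[:i+1];
    # p is a period of s exactly when p = len(s) - b for some border
    # length b in the chain fail[n-1], fail[fail[n-1]-1], ...,
    # plus the trivial period n.
    n = len(s)
    fail = [0] * n
    k = 0
    for i in range(1, n):
        while k and s[i] != s[k]:
            k = fail[k - 1]
        if s[i] == s[k]:
            k += 1
        fail[i] = k
    periods = []
    b = fail[n - 1] if n else 0
    while b:
        periods.append(n - b)
        b = fail[b - 1]
    periods.append(n)
    return periods

print(all_periods("abaababaab"))   # [5, 8, 10]
```

Note that the short periods can exceed half the string length (here 8 > 10/2), which is exactly the regime the abstract says earlier parallel algorithms could not handle.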
, 1991
"... A string w covers another string z if every position of z is within some occurrence of w in z. ..."
- SIAM J. COMPUT, 1990
"... An optimal O(log log n) time parallel algorithm for string matching on CRCW-PRAM is presented. It improves previous results of [G] and [V]. ..."
Cited by 27 (11 self)
- Computers and Mathematics with Applications 47, 2004
Cited by 22 (8 self)
Codes play an important role in the study of combinatorics on words. Recently, we introduced pcodes that play a role in the study of combinatorics on partial words. Partial words are strings over a
finite alphabet that may contain a number of “do not know ” symbols. In this paper, the theory of codes of words is revisited starting from pcodes of partial words. We present some important
properties of pcodes. We give several equivalent definitions of pcodes and the monoids they generate. We investigate in particular the Defect Theorem for partial words. We describe an algorithm to
test whether or not a finite set of partial words is a pcode. We also discuss two-element pcodes, complete pcodes, maximal pcodes, and the class of circular pcodes. A World Wide Web server interface
has been established at
- AMS/IP Studies in Advanced Mathematics
, 1997
Cited by 17 (1 self)
An O(m)-work, O(m)-space, O(log m)-time CREW-PRAM algorithm for constructing the suffix tree of a string s of length m drawn from any fixed alphabet set is obtained. This is the first known work and
space optimal parallel algorithm for this problem. It can be generalized to a string s drawn from any general alphabet set to perform in O(log m) time and O(m log |Σ|) work and space, after the
characters in s have been sorted alphabetically, where |Σ| is the number of distinct characters in s. In this case too, the algorithm is work-optimal.
- In Proc. 36th Symposium on Foundations of Computer Science (FOCS 95), 1995
Cited by 17 (2 self)
We establish a variety of combinatorial bounds on the tradeoffs inherent in reconstructing strings using few rounds of a given number of substring queries per round. These results lead us to propose
a new approach to sequencing by hybridization (SBH), which uses interaction to dramatically reduce the number of oligonucleotides used for de novo sequencing of large DNA fragments, while preserving
the parallelism which is the primary advantage of SBH. 1 Introduction Sequencing by hybridization (SBH) [4, 11] is a new and promising approach to DNA sequencing which offers the potential of reduced
cost and higher throughput over traditional gel-based approaches. In this paper, we propose a new approach to sequencing by hybridization which permits the sequencing of arbitrarily large fragments
without the inherently exponential chip area of SBH, while retaining the massive parallelism which is the primary advantage of the technique. We establish the potential of our technique through both
, 1994
Cited by 17 (2 self)
this paper we characterize all the covers of x in terms of an easily computed normal form for x. The characterization theorem then gives rise to a simple recursive algorithm which computes all the
covers of x in time Θ(n). By avoiding unnecessary checks, this algorithm appears to execute more quickly than that given in [2]. Let x denote a string of length n = |x| ≥ 0; in particular, let ε
denote the empty string of zero length. For any nonempty string x, let k ≥ 1 be the greatest integer such that (1.1) x = (v
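The cover characterization discussed above can be checked directly: every cover of x must be a border of x (both a prefix and a suffix), and a border u covers x when consecutive occurrences of u are at most len(u) apart. This quadratic-time sketch is for illustration only — the paper's algorithm runs in Θ(n):

```python
def covers(x):
    # Enumerate borders u of x; keep those whose occurrences leave no
    # position of x uncovered (gap between occurrences <= len(u)).
    n = len(x)
    found = []
    for m in range(1, n + 1):
        u = x[:m]
        if not x.endswith(u):
            continue
        occ = [i for i in range(n - m + 1) if x.startswith(u, i)]
        if all(nxt - cur <= m for cur, nxt in zip(occ, occ[1:])):
            found.append(u)
    return found

print(covers("abababab"))   # ['ab', 'abab', 'ababab', 'abababab']
```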
Carl Wilhelm Borchardt
Born: 22 February 1817 in Berlin, Germany
Died: 27 June 1880 in Rudersdorf (near Berlin), Germany
Carl Borchardt was born into a Jewish family. His father, Moritz Borchardt, was a merchant who was very well off and a very respected member of the community. Carl's mother was Emma Heilborn. He was
tutored privately by a number of outstanding tutors, the best known of whom were Plücker and Steiner. He studied at Berlin from 1836 under Dirichlet then, in 1839, he went to Königsberg and studied
under Bessel, Franz Neumann and Jacobi. Certainly Borchardt was impressed with Franz Neumann and, much later, he was one of three mathematicians who proposed Franz Neumann for external membership of
the Berlin Academy in 1853.
Borchardt's doctoral work, on non-linear differential equations, was supervised by Jacobi and submitted in 1843. However, the thesis was not published and has since been lost. Jacobi was in poor
health and it was agreed that he could spend a year in Italy convalescing. He went with Borchardt and they spent time in both Rome and Naples. Dirichlet and Steiner were also in Rome at the same time
and it proved a useful time for Borchardt. The year 1846-47 he spent in Paris where he met Chasles, Hermite and Liouville. He attended a course by Liouville on doubly periodic functions and although
Liouville intended to publish the notes which Borchardt took of his lectures, in the end they were not published due to a priority dispute between Liouville and Hermite. Borchardt married Rosa
Oppenheim and recently there has been speculation that after Borchardt's death, Rosa had a child with Weierstrass.
Borchardt taught at the University of Berlin from 1848 when he was appointed as a Privatdozent. He quickly became a close personal friend of Weierstrass and was one of the privileged few, along with
Sofia Kovalevskaya, whom Weierstrass addressed with the familiar 'Du' form. He succeeded Crelle as editor of Crelle's Journal in 1856, a task he undertook until 1880 despite not being in very good
health. The correct title of the Journal was the Journal für die Reine und Angewandte Mathematik but it had been known as Crelle's Journal up to the time Borchardt took over as editor. The journal
was then often referred to as "Borchardt's Journal" or in France as "Journal de M Borchardt". After Borchardt's death, the Journal für die Reine und Angewandte Mathematik again became known as Crelle
's Journal.
He did important research on the arithmetic geometric mean continuing work in this area which had been begun by Gauss and Lagrange. In 1881 Borchardt published an algorithm for the
arithmetic-geometric mean of two elements from (two) sequences, although it was actually first proposed by Gauss in a letter to Pfaff written in 1800. Although Gauss's letter is lost we know its
contents through Pfaff's reply which was published in Gauss's Complete Works and indicates that Gauss had discovered the result. From this 1881 paper by Borchardt the name "Borchardt algorithms" has
come into use to describe algorithms of this type. Borchardt also generalised results of Kummer on equations determining the secular disturbances of the planets. A secular disturbance is one which is
not periodic, but continually acts in the same direction. In fact this was his first contribution and was published in his first paper of 1846. In this work he used determinants and Sturm functions.
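The arithmetic-geometric mean discussed above comes from a simple two-term iteration; this sketch shows the classical Gauss form (the "Borchardt algorithms" named in the text are a related family of iterations of this type):

```python
import math

def agm(a, b, tol=1e-15):
    # Gauss iteration: replace the pair by its arithmetic and geometric
    # means; the two sequences converge quadratically to a common limit.
    while abs(a - b) > tol * max(a, b):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2

print(agm(1.0, 2.0))
```

The quadratic convergence — roughly doubling the number of correct digits each step — is what made AGM-type algorithms interesting to Gauss, Lagrange, and Borchardt.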
In several further papers Borchardt applied the theory of determinants to algebraic equations, mostly in connection with symmetric functions, the theory of elimination, and interpolation.
After Jacobi's death there was considerable speculation as to the exact role he had played in the theory of elliptic functions. This was partially answered when Jacobi's letters to Legendre were
published by Bertrand in 1869 but the position was still somewhat confused as the letters from Legendre to Jacobi were not included in this work. Borchardt completed publishing the remaining parts of
the correspondence in 1875 and Jacobi was then able to get full recognition for his contributions to the theory of elliptic functions made independent of those of Abel. Borchardt contributed to
spreading the mathematical ideas introduced by Jacobi but he also spread Jacobi's ideas on the way that universities should be organised, namely in a research oriented way.
Borchardt's complete works, published in 1888, contains 25 papers and, in addition to the topics discussed above, contains papers on maxima and on the theory of elasticity. Finally we note that the
first of the eight volumes of Jacobi's Collected Works was edited by Borchardt and published in 1881. Borchardt died before being able to edit further volumes which were edited by Weierstrass.
Article by: J J O'Connor and E F Robertson
JOC/EFR © August 2006 School of Mathematics and Statistics
Copyright information University of St Andrews, Scotland
Remainder Theorem
March 20th 2007, 04:28 PM
Remainder Theorem
"Use the remainder theorem to find the remainder quickly when the polynomial on the left is divided by the linear binomial on the right"
1. x^5 - 3x^2 + 14 by (x+2)
2. x^51 + 51 by (x+1)
I have absolutely no clue how to do either of these... Thanks
March 20th 2007, 04:33 PM
for the first one you substitute x=-2 because of some reason that i can't remember and for the second you substitute x=-1 for the same reason, it should give you the remainder.
March 20th 2007, 06:33 PM
I need you to be a bit more specific; I have to show proper work on my test.
March 20th 2007, 06:35 PM
that's nice! "i don't know why, i just know"
it's alright though. you're the man jonannekeke :)
March 20th 2007, 07:01 PM
March 20th 2007, 07:03 PM
see this, if you still don't get it, get back to me Remainder Theorem
March 21st 2007, 02:14 AM
In my textbook it says:
If a polynomial f(x) is divided by (ax - b) then the remainder is f(b/a)
in your case:
Compare (x + 2) which is the same as (1x + 2) to (ax - b), so a=1, b=-2 and the remainder is f(-2/1)
Does that help? | {"url":"http://mathhelpforum.com/algebra/12787-remainder-theorem-print.html","timestamp":"2014-04-18T18:33:37Z","content_type":null,"content_length":"7972","record_id":"<urn:uuid:067f3c7a-5226-4c8a-9274-9df894f62700>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00553-ip-10-147-4-33.ec2.internal.warc.gz"} |
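Yes — and synthetic division is one way to show the work: dividing by (x − r) leaves remainder f(r), so here r = −2 and r = −1. A quick check of both problems (sketch):

```python
def synthetic_division(coeffs, r):
    # Divide p(x) (coefficients listed from the highest power down) by
    # (x - r); the last value is the remainder, which by the remainder
    # theorem equals p(r).
    row = [coeffs[0]]
    for c in coeffs[1:]:
        row.append(c + r * row[-1])
    return row[:-1], row[-1]

# 1. x^5 - 3x^2 + 14 divided by (x + 2) = (x - (-2)):
_, rem1 = synthetic_division([1, 0, 0, -3, 0, 14], -2)
print(rem1)   # -30, and indeed f(-2) = -32 - 12 + 14 = -30

# 2. x^51 + 51 divided by (x + 1):
_, rem2 = synthetic_division([1] + [0] * 50 + [51], -1)
print(rem2)   # 50, since (-1)^51 + 51 = 50
```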
Missouri City, TX SAT Math Tutor
Find a Missouri City, TX SAT Math Tutor
...S. Air Force Academy for 7 years. In addition I taught at the following universities: San Antonio College, University of Maryland, University of Colorado, Auburn University at Montgomery, AL.
11 Subjects: including SAT math, calculus, geometry, statistics
...My goal as your tutor is to make sure you fully understand the concepts surrounding your questions. If you're not comfortable with a topic we'll work on it from every possible angle until it
clicks for you. I recently received my Master's degree in Medical Sciences at the University of North Te...
29 Subjects: including SAT math, English, writing, physics
...Dear Sir/Madam, I have a BA in Communications, and have spent most of my career as a CEO where I was required to make many public presentations. Further, I taught a public speaking class in
Thailand to students studying English as a second language at "It's Easy" English association in Thailand. Thank You, Allan D.
45 Subjects: including SAT math, English, reading, writing
My love for learning is what draws me to teaching and tutoring. I am a 29 year old math and science tutor in Sugar Land, TX. I graduated Magna cum laude from Texas A&M University in 2007 with a
bachelors in Biology.
13 Subjects: including SAT math, chemistry, biology, algebra 1
...I have scored a 5 on both AP English tests, as well as the US History, Government, and Calculus exams (and a 4 on Art History). This isn't because I'm a genius--it's because I'm a good test
taker, and I can help you become a better one as well. Whether you want to try to make dramatic gains on a...
32 Subjects: including SAT math, Spanish, reading, writing
Related Missouri City, TX Tutors
Missouri City, TX Accounting Tutors
Missouri City, TX ACT Tutors
Missouri City, TX Algebra Tutors
Missouri City, TX Algebra 2 Tutors
Missouri City, TX Calculus Tutors
Missouri City, TX Geometry Tutors
Missouri City, TX Math Tutors
Missouri City, TX Prealgebra Tutors
Missouri City, TX Precalculus Tutors
Missouri City, TX SAT Tutors
Missouri City, TX SAT Math Tutors
Missouri City, TX Science Tutors
Missouri City, TX Statistics Tutors
Missouri City, TX Trigonometry Tutors | {"url":"http://www.purplemath.com/Missouri_City_TX_SAT_math_tutors.php","timestamp":"2014-04-17T11:26:46Z","content_type":null,"content_length":"24210","record_id":"<urn:uuid:98bf0960-7d2a-4d26-a807-ab4bf34c62b8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00016-ip-10-147-4-33.ec2.internal.warc.gz"} |