content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Highbridge, NY Algebra 1 Tutor
Find a Highbridge, NY Algebra 1 Tutor
...I first began to tutor at an after school center. There I was able to work with students who had challenging learning needs. I greatly enjoyed my time there because I was able to work with
students who were not typically given personal academic attention.
27 Subjects: including algebra 1, English, reading, writing
...My students have consistently achieved results at the apex of the learning and test curve, winning admission to highly competitive academic programs at Harvard, Stanford, MIT, and many other
top-tier universities. My Ivy Boot Camp-style mission is to help students achieve peak test preparation a...
52 Subjects: including algebra 1, English, reading, writing
I hold my master's in Drama from Pace University and I'm very passionate about the arts. I have been performing since I was in diapers. In front of family and friends, at church, in school, and
even the Broadway stage.
4 Subjects: including algebra 1, geometry, prealgebra, theatre
I came to the United States in July, 2005 to study Chemical Engineering at the City College of New York. After my first semester, I became a peer leader for the Chemistry department. As a peer
leader, I tutored first and second semester freshman chemistry for eight semesters.
12 Subjects: including algebra 1, chemistry, geometry, algebra 2
...I have an MS in Statistics and a BA in Psychology from Wesleyan University. I graduated from one of the most prestigious and competitive high schools in Manhattan (and in the country) and I am
familiar with the demands placed on today's students. I have been around students of all academic levels and offer a patient, encouraging method to help students understand new concepts.
18 Subjects: including algebra 1, English, writing, statistics
Related Highbridge, NY Tutors
Highbridge, NY Accounting Tutors
Highbridge, NY ACT Tutors
Highbridge, NY Algebra Tutors
Highbridge, NY Algebra 2 Tutors
Highbridge, NY Calculus Tutors
Highbridge, NY Geometry Tutors
Highbridge, NY Math Tutors
Highbridge, NY Prealgebra Tutors
Highbridge, NY Precalculus Tutors
Highbridge, NY SAT Tutors
Highbridge, NY SAT Math Tutors
Highbridge, NY Science Tutors
Highbridge, NY Statistics Tutors
Highbridge, NY Trigonometry Tutors
Nearby Cities With algebra 1 Tutor
Allerton, NY algebra 1 Tutors
Beechhurst, NY algebra 1 Tutors
Bronx algebra 1 Tutors
Castle Point, NJ algebra 1 Tutors
Fort George, NY algebra 1 Tutors
Fort Lee, NJ algebra 1 Tutors
Hamilton Grange, NY algebra 1 Tutors
Hillside, NY algebra 1 Tutors
Inwood Finance, NY algebra 1 Tutors
Manhattanville, NY algebra 1 Tutors
Morsemere, NJ algebra 1 Tutors
Parkside, NY algebra 1 Tutors
Rochdale Village, NY algebra 1 Tutors
West Englewood, NJ algebra 1 Tutors
West Fort Lee, NJ algebra 1 Tutors | {"url":"http://www.purplemath.com/highbridge_ny_algebra_1_tutors.php","timestamp":"2014-04-17T10:45:18Z","content_type":null,"content_length":"24291","record_id":"<urn:uuid:8a76ebe5-415f-4229-b96a-698696cf00f8>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00572-ip-10-147-4-33.ec2.internal.warc.gz"} |
How the Area Function Works
The area function is a bit weird. Brace yourself. Say you’ve got any old function, f(t). Imagine that at some t-value, call it s, you draw a fixed vertical line. (Note that because this line is
fixed, s is a constant, not a variable.) Check out the figure below.
Then, you add a moveable vertical line (the dotted line in the figure) at the t-value x. You start with the dotted line at s ( s is for starting point), and then drag it to the right. As you drag
the line, you sweep out a larger and larger area under the curve between s and x. This area is a function of x, the position of the moving line.
In symbols, you write
The dt is a little increment along the t-axis — actually an infinitesimally small increment.
Here’s a simple example to make sure you’ve got a handle on how the area function works. By the way, don’t feel bad if you find this extremely hard to grasp — you’ve got lots of company. Say you’ve
got the simple function f(t) = 10 — that’s a horizontal line at y = 10. If you sweep out area beginning at s = 3, you get the following area function:
You can see that the area swept out from 3 to 4 is 10 because, in dragging the line from 3 to 4, you sweep out a rectangle with a width of 1 and a height of 10, which has an area of 1 times 10, or
10. See the figure below.
Now, imagine that you drag the line across at a rate of one unit per second. You start at x = 3, and you hit 4 at 1 second, 5 at 2 seconds, 6 at 3 seconds, and so on. How much area are you sweeping
out per second? Ten square units per second because each second you sweep out another 1-by-10 rectangle. Notice — this is huge — that because the width of each rectangle you sweep out is 1, the area
of each rectangle — which is given by height times width — is the same as its height because anything times 1 equals itself. You’ll see why this is huge in a minute.
Okay, are you sitting down? You’ve reached one of the big Ah ha! moments in the history of mathematics. Recall that a derivative is a rate. So, because the rate at which the previous area function
grows is 10 square units per second, you can say its derivative equals 10. Thus, you can write —
Now here’s the critical thing: Notice that this rate or derivative of 10 is the same as the height of the original function f(t) = 10 because as you go across 1 unit, you sweep out a rectangle that’s
1 by 10, which has an area of 10, the height of the function.
This works for any function, not just horizontal lines. The next figure shows the function g(t) and its area function
that sweeps out the area beginning at s = 2.
You can see that
equals roughly 20 because the area swept out between 2 and 3 has a width of 1 and the curved top of the rectangle has an average height of about 20. So, during this interval, the rate of growth of
is about 20 square units per second. Between 3 and 4, you sweep out about 15 square units of area because that’s roughly the average height of g(t) between 15 and 4. So, during the second number two
— the interval from x = 3 to x = 4 — the rate of growth of
is about 15.
The rate of area being swept out under a curve by an area function at a given x-value is equal to the height of the curve at that x-value.
Although it’s a bit loose — in the discussion of the above figure — saying things like roughly this and average that, don’t worry; when you do the math, it all works out. The important thing to
focus on is that the rate of area being swept out under a curve is the same as the height of the curve. | {"url":"http://www.dummies.com/how-to/content/how-the-area-function-works.navId-403863.html","timestamp":"2014-04-19T00:23:24Z","content_type":null,"content_length":"57160","record_id":"<urn:uuid:2eff61fb-aca5-4c99-8a77-5f1e52afb026>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00091-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Math Forum » Discussions » Policy and News » geometry.announcements
Topic: The Antiquity of the Sumerian PI Value, Part 14 - THE FALSIFICATIONS OF THE HISTORY OF MATHEMATICS in 3rd-4th Cent. BC, and 17th-18th Cent. AD
Replies: 0
The Antiquity of the Sumerian PI Value, Part 14 - THE FALSIFICATIONS OF THE HISTORY OF MATHEMATICS in 3rd-4th Cent. BC, and 17th-18th Cent. AD
Posted: Mar 4, 2013 8:38 AM
Copyrights 2013 by Ion Vulcanescu.
Please, be aware that the understanding of this article
requires familiarization with the prior Research Data
of Part 1 through Part 13 that can be found at:
In this article we shall see the ancient civilizations
as being not " just a simple Line" as today's Universities
and Academies are sutaining it, but
in which the PI Values 3.14626437
is enveloping the Circle's Area, and PI Value 3.141592654
(of the today's Science of Mathematics)
is enveloping the PI Value 3.14626437.
When reporting the Circle's Radius (R=2) of
THE GEOMETRICAL EQUILIBIRUM to the Decimal System,
and dividing by SWASTIKA's High Number 10,000 this
signals THE ORIGIN OF "3.160493827", a Code send in time
by the ancient Egyptian Priests, and today called
"THE RHIND MATHEMATICAL PAPYRUS PI VALUE".
This philosopher, based on the Historical and Mathematical Prooves
declares that:
"THE CIRCLE'S CIRCUMFERENCE ...HAS A THICKNESS",
and this Mathematical Truth was known before 3300 BC.
The research signals that the Greeks of Hellenistic Era
attempted to erase it, but it appears that
it was Saved by the Old Hebrews Priests, then it Survived
millenia after millenia until reaching 17th-18th century AD, when
it is possible that in the then England or France
it was decided that..HAVE TO BE REPRESSED,
or, the then Learned Jewish Scholars ended up tortured
in the 17th-18th Century AD European jails,
when the then Secret Services of the Kings of England and/or France
may have extracted it from them, and then
being realized that it can not be redesigned to look as English or French
it was decided to be let Dissapear, or
...the then Jewish Scholars have died
without making it known to their Captors, or it may have survived
until reaching the 20th century AD when the Jewish Elders were killed
during the Hollocoust Time, taking it with them to the their gaves...
The Circle's Circumference having a Thickness
IT IS NOT MY DISCOVERY, but MY REDISCOVERY
that came in my attention through a Research
for which I have worked half of my life.
I have carried this research
through over 20 years of Discrimination in the United States of America
by USA Universities, by USA National Academy of Science, and by
a Discrimination well organized and covered up
through the Evecutive Powers of USA Presidents
George H.W. Bush, Bill Clinton, and also George H. Bush...
For this philosopher it is now proved not only Historic,
but also Mathematical, that before cca 3300 BC in the land area
we today call as Egypt, and Lebanon-Israel
WERE LIVING, or ARRIVED a People who knew
this Body of Knowledge, and a Sellect Group of Learned Jews
encodedd in the Torah, and then century-after-century
carried it in time until wee see it Europe encoded:
- In the Cologne Ounce ( of Germany) of 29.16 grams,
- In the Ounce Tower ( England) of 29.16 grams
- In the Weight of Paris Livre ( France) of 489.3070348 grams,
- In the Master Frequency "839808" of Ratio 13.5 between the sum
of the letters, and all letters of THE ENGLISH ALPHABET,
and them we see it appearing again in the word "JEW",
until suddenly Dissapear or is Repressed from the History of the Civilization
sometimes in the time of 17th - 18th Century AD.
The Research Data of the Effect shown above is visualy presented on
Draw Bega 1 through Draw Bega 7
that as-soon-as this article appears on Mathforum,
they shall be posted on my Facebook at:
and/or also at:
If you intend to study this Research Data, you are strongly advised
to print the DRAW BEGA 1 through DRAW BEGA 7,
put hem in front of your eyes, and reading this article look to such draws...
Only in doing so you shall have a full understanding of the fact that
that before 3300 BC on the Earth it existed
another Highly Spiritual Civlization
who knew to the Exactitude of the 9th decimal the PI Value 3.141592654,
of the todays Science of Mathematics...
...and now let's mark these Three Periods !
This is the time of before cca 3300 BC, and possible
even earlier, known being the fact that from this Period
known in Egypt as NAGADA PERIOD
appears in the graves Gold Weights known today as.
BEGA GOLD WEIGHTS.
This Period it is the time between 3rd and 4th century BC
when it appears THE LIBRARY OF ALEXANDRIA, when
appears under the so called signature of ...EUCLID.
For this philosopher the research signals that,
the Greeks of this time found out that
THE CIRCLE was known as being under Two Concepts:
- A First Concept through which the Point we call it today as
"the Circle's Center", was coming from the antiquity of the time
and seatted where it is today in the Science of mathematics, and
- A Second Concept where such " Circle's Center"
was seatted direct on THE CIRCLE CIRCUMFERENCE.
The Second Concept was rediscovered be my during 1988-1990,
and when I published it, it was immediately Seriously Considered
during 1991 by a Team of NASA Senior Engineers
who insisted so much that the USA National Academy of Sciences
to be asked to Investigate it, until they were sillenced
by the then USA President George H.W. Bush,
and Problems begun to be rised against me in the United States.
So powerfull was the hate of the USA Special Interests that
in order that me to be obligatted to Stop the research,
problems and traps were creatted for me, and
even attempts to create to me a File of Terrorist
appears that were taken.
They begun Undercover, and by 1993 these USA Special Interests
reached their goal by bribing
a New York City Realty Board Arbitrator ( Case # 21132)
with a $ 10,000 depositted direct in the Wall Street Acount
of his daughter to have me "legaly Fired",
and then continuatted in 1992 through using Entrapments
by the Town of South Fallsburg Police, and the use by the Special Interests
of the town judge Ivan Kalter thus being prooved to me that
in the United States of America, the Police and the Judiciary
were fully involved in sillencing the making public
by the 17th -18th Century AD Kings of England and/or France,
and that what was happening to me was for the Political Interests
of the Government of England and France...
They stopped somehow
only with the arrival of USA President Barack Obama,
when it appears that the US Attorney General Eric Holder
took my call, and presented the my Case to The White House...
Now the USA Interested Parties running for Political Cover
and trully believing that the Mathematical Exactitude
does not have to be Mathematical, but Political,
decided to take their Effort to THE USA CONGRESS
and sillence The White House of the Obama Adiminstration.
Refusing to Stop the Research, and accept to remain sillent,
and year afer year living a simple life, and making progress
into the Recovery of this Repressed Body of Knowledge,
again in 2007 " an employee"
of Town of South Fallsburg Police Department
presenting to me as Police Officer,
and the same town judge Ivan Kalter step-in for more Entrapment
under the pretext that "I have defaimed him"
reporting him to Law Enforcement for refusal to pay for the work
I did on his Property. (Cases 882/08, and 4843/08 in New York State,
Sullivan County, Monticello Supreme Court)
...and so, in this now more than 20 years of
in the United Sates not only the USA Universities, but also
the USA National Academy of Sciences,
all USA Presidents, one after the other
appears to have been involved.
What was the Reason that this Research to be stopped?
Let's look to the implication of these former USA Presidents:
- GEORGE H. W. BUSH sillence the NASA Team of Engineers
who sutain the Research Data, and later he is Honoured by
THE QUEEN ELISABETH II OF ENGLAND !
For what Exceptional Reason was he Honoured?
Does THE ROAYL FAMILY OF ENGLAND
has Something to hide relatted to the 17th-18th Jews of England
that the ordinary English People, and the Civilization
not to know it?
- BILL CLINTON under the Pretext that he is "improving"
the US Secret Service, eliminates from the inside of the White House
US Secret Service Agents who possible were implicatted
with Other Intelligence Agents sustaining the Research.
-GEORGE W. BUSH extends the Cover-up of this father
Sabottage of this Research when becaming President,
through "extending the time" for keeping Secret and under Lock
the actions of his father, and finally...
...as the Administration of USA President Barack Obama
took the Controll, all these aspects somehow ceased,
myself I continued the research that reached the levels
I am herein now explaining...
...and I am now making Public these Facts from my life
not because I am upset, but because to be known that
for this Research, for more than 20 years
is Discriminated against me AT THE HIGHEST LEVELS
Universitar, Academic, Politic, and that
has fully participatted in Supporting this Discrimination
and Harssament, in order to Stop this Research
not convenient for the Governments of England and France
17th-18th Century AD Repressing of the Body of Knowledge
of the then Learned Jewish Scholars...
Not to make it Public, would be to Participate in the Effect
that takes place now in the United States when those bent on Serving
the American People, and/or the Civilization
...and as even the now USA President Barack Obama
does not escape of this Effect, this Aspect in itself,
tells the History how in the United States of America
not only how Low Moral and Dangerous have became
the US Special Interests, but also how European Interests
are involved in the USA Politics
for keeping the lid on the 17th - 18th Cent. AD
repression of this Body of Mathematical Knowledge, and
This period of time appears to be between 17th and 18th Century AD.
It is prooved by:
- the appearition of the word "JEW" whose
Master Frequency "839808" appears THE SAME with the
Master Frequency of the ratio 13.5 betwen the sum ( 351) of
the order of the letters, and the total letters (26) of
- the appearition of THE FRENCH METRIC SYSTEM
through which the Old Units of Measures in which was encoded
THE SUMERIAN NATURAL GEOMETRIC PI VALUE 3.14626437,
are being Repressed from use, and
- THE ACADEMIE FRANCAISE stopping the Investigation
of the Exactitude of the PI Value, locks the Civilization
into a Numerical, not Geometric PI Value
in which the Science of Mathematics is now ...
...and now you may understand WHY it was so IMPORTANT
that this Research Data not to appear...
Well...IT APPEARED...
It is now PUBLIC !
You USA Universities... have LOST !
You USA National Academy of Science... have LOST !
YOU Foreign Interests of England and France in USA... have LOST !
CIVILIZATION ...HAS WON !
In order understand the fact that
by cca 3th- 4th century BC the then Civilizations Priests Scientists
knew that from the remote anitquity
were coming toward the then Greek era in time Two Concepts
of Circle's Center, we must keep under our eyes
THE GREEK FORMULATION of
as the presente Schools, Universities and Academies are sutaining:
1- "THE POINT is that Part of the Geometry that has no value".
(Thus THE POINT is of value ZERO), and then we see appearing
the Mathemtical Demagogy:
2- "THE LINE it is a Continuity of Points"
Thus, THE LINE being a Continuity of Points
and formulatted mathematically ( by the fact that the Point has no value)
as bellow indicatted:
"0 + 0 + ) +.... = 0"
it prooves mathematically that
or that the Schools, the Universities and the Academies are
Thus, through their own
OFFICIAL ACCEPTED CONCEPT OF "POINT" AND "LINE",
the Schools, the Universities, and the Academies
shows that their Concepts of POINT and LINE
are based not on Mathematical Reason, but on Demagogy and Falsity
SHAME TO YOU ACADEMIES !
SHAME TO YOU UNIVERSTIES !
...and now let's see these BEGA GOLD WEIGHTS !
The bellow presentation is based on what I found it in a 1954 edition
As in many parts of the world the researchers have no acces to
under the Consideration of the Copyright Fair Uses Rights,
I took a decision to republish here some parts of the text,
for Research Purpose only...
...and here now I present these BEGA GOLD WEIGHTS:
Volume 23, Page 488K, at the letter "W", Ancient Weights...
"1- The Oldest known standard it is BEGA, found in the Early Amratian
graves in Egypt and Palestine ( now Lebanon - Israel Area).
2- They are called Gold Standard because many of these gold weights
bear the hierogliph for gold.
3- Heavier Bega has 210 grains ( English Grains) and weights cca. 13.61 grams
Lighter Bega has 196 ( English Grains), and weights cca. 12.70 grams.
4- the names of Egyptian Kings are more frequenct on this than on other standards.
5- The Heavier and Lighter Bega were not unified until 23rd Dynasty
though they were used nefore for Royal Weights.
5- The System was Decimal up to 2000 Bega with Fractions down to 1/16.
6- The earlier weights ( Amratian) were Short Cylinders with Domed Ends, later
(Gerzean) were Dome with Convex base...
7- Abroad Egypt this Standard is often found at Gerar, and there are several
weights from Knossos ( 194-205 English Grains).
8 - In the west there are six Double Axex from Elbe and Rhine on an unit of
191 English Grains and, 50 examples of Irish Gold on 2022 and 202 English Grains.
9- The Iron Curency bars from Celtic England are satted on multiples of 191,
though they vary rather widely.
10- Some of these western variations are due to the Greek adoption of the Bega
as the Standard of Aigina ( 199 English Grains) which was widely spread
by trade in Asia Minor, and Ionia, as well as in Greece ( the Old Mina of Athens),
and passed to Italy...
11- The Etruscan Pound ( from which itoriginatted the Roman Libra) at its
lightest it was 25 x 197, divided into 12 Unciae"
As I looked to this Textual Evidence my attention was immediate atracted
of the following aspects:
a- The Evidence was presented in English Grains, and I knew that I have to
switch to the Sumerian Grains.
b- Knowing that the English Grain was based on the French Metric System,
and also knowing that the French Metric System had an error,
I realized that I have "to move" the weight through the Mathematical Model
and keep my eyes on what may appears relatted or not to the Sumerian Grain.
c- As the appearition of the English Grain was realizated by me as coming
into use in almost the same time with the French Metric System, and the
ACADEMIE FRANCAISE imposing through sillence the prezente PI Value
3.141592654, I had an hint that I have to look to the BEGA through
both PI Values: 3.141592654 of the science of mathematics and
3.14626437 sensed in the Sumerian weight.
...and as you shall see the Volumes and the Weights in Draw Bega 2.
d- My attention was also atracted by the BEGAas being "decimal up to 2000"
and also " down to 1/16" This imediat was saw it by me as
" A LONG LINE OF 32000" units seen as 2000 X 16 = 32000
e- But, MOST IMPORTANT fact seen by me it was THE FORMS of the weights
presented as " the earlier Amratian were
" SHORT CYLINDERS WITH DOMES ENDS"
immdiate sensed by me as :
SHORT CYLINDER can be an HALF CILINDER
of the type inscribed in a Cube, and
THE DOMED ENDS it can be a Sphere cut in two and "seatted" to both Base
of the Cylinder.
With respect to " the later BEGA being a DOME WITH THE CONVEX BASE
I immediate sensed that somehow the Egyptain Priests were able to
infuse THE CYLINDER into the Sphere.
...an d knowing that in the "History of Mathematics" it is said that
Archimedes discovered the Formnula of calculation of the Sphere's Volume,
I had a good laugh realizing the likly posibility that the Egyptian Priests knew
the Sphere Volume MILENIA BEFORE THE GEEKS...
...and I remebered what a scholar has written in his book:
"It appears that the Greeks of Helenistic era inheritted more mathematics than
they creatted..."
The research to the BEGA GOLD WEIGHT I done it in many years.
The Research File it is very big. For publishing purpose what I say now
is just a summary of the research. You must go to:
and also on the web at:
and research yourself the Draws marked as
Draw Bega 1 through Draw Bega 7
in order see all the Spiritual Beauty of the Bega Gold Weight
and their Relation to the thikness of 76 units of
Please, be aware that in order acces and research this Draws
you may have to register. IT is highly advisable that you to print
these draws, put them under your eyes and research them
by reading what I present now in the following explications of them...
... and now I shall present each draw in part!
DRAW BEGA 1
Step 1
Here you see the Geometric Point of the Bega Gold Weights
being a Square that I present in English Grains of .0648 grams, and
in Sumnerian Grains of .045 grams.
Step 2
Observe that the Evolution of the English and Sumerian Grains takes
effect only on 1/8 of the Square Perimeter, and that this 1/8 for the English Grain
it reaches 225 English Grains, while for the Sumerian Grains also for 1/8
of the Square Perimeter it reaches 7200. Thus the Full SQuare Perimeter
for the Englsih Grain it is 1800 as 225 x 8, and for Sumerian Grain
it is 57600 and 7200x 8.
Step 3
Observe that the BEGA of 13.61 grams was found by me to be as 13.608 grams,
and has the step of evolution as 210 (same with the Evidence) and
the Bega of 12.70 grams was found to be 12.7008 grams, and has a
step of 196 ( same as the Evidence).Then observe that on the Sumerian side
the English Grain step seen as 210 as a coordoantion with step 6720,
while the English Grain step 196 has a coordonation with step 6272.
In the following Draw you shall se Exactitude extablished by me through
research when we shall attach the Science of Mathematics Pi value 3.1415926564
to the English Grain and the Sumerian Natural Geometric PI value 3.l14626437
to the Sumerian Grain.
Step 4
For the eye-view I have drawn a Geometric Point of side 14400 units,
and also it Cube of edge 14400. Became familiar with the Perimeter
of the Geometric Point of 57600 units and of the Volume of the Cube
as 2.985984 cubic units.
Step 6
Please, became familiar with the values of the Circle and Day in seconds,
the English Grain weight, the Swastika's High Number 10,000 and
the known for egyptologist Earliest Egyptian Finger of .0185 units.
Step 7
Became familiar with the Master Frequency "839808" explainde
already few times herein.
Step 8
...and now being familiar with all I have explained until now
see the Four Coordonations I present.
Step 9
If you were able to read these Four Coordonations you have to realize that:
The Time when these Coordonations were designed was a time when it was known
that the Circle has 1296000 seconds and the day 86400,
and also the word JEW appeared.
...and as the word JEW appeared only in 18th century AD, we realize that
by this era in time, in Europe the Jewish Scientists KNEW :
-The Sumerian PI Value 3.14626437,
- The Science of mathematics PI value 3.141592654
- The Day has 86400 Seconds
- The Circle has 1296000 seconds
- The Swastika's High Number it is 10,000 ( as known so in the ancient Japan)
- The Egyptyian Finger .01875,
- The Master Frequency of the word JEW "839808" it is THE SAME with
the Master Frequency of the ratio 13.5 between the sum 351
of the order, and the all 26 numbers of THE ENGLISH ALPHABET.
(... and from Parts 13 at:
as we find out that the day separated in 86400 seconds
was not earlier than 17th-18th century AD)
we realize how highly spiritual were the then
and the fact that the science of mathematrics PI Value 3.141592654
was then known to the Mathematical Exactitude of the 9th decimal, or
we must assume that there is a Superpower
that creates these Mathematical Coordonations
and that this Superpower is spiritualy speaking to me.
But, please, take this Effect as you wish, and at your own risk...
Myself many times I had the feeling that Spiritual I am somehow a High Priest,
but I keep Both Aprocahes on my Research Table...
These Coordonations being a Mathematical Reality
can be researched by anyone. I may be sometime wrong,
because not paying attention to a calculation,
but in the majority of explanations I know that I am correct...
DRAW BEGA 2
Step 1
On this page you can see that in the "square" I have attached the Circle,
and I have coordonatted the Circle's Circumference length
for the English Grain with the PI Value 3.141592654,
and for the Sumerian Grain with with the PI Value 3.14626437.
Observe that although we can see the Geometric Point of
through the English Grain as of perimeter of 1800 units,
and through the Sumnerian Grain as of a perimeter of 57600 units,
when see it through the Decimal System it is just 80 units
which appears configuratted as:
"80 = The 8 parts of the Geometric Point x the Decimal System".
What a beautifull Spiritual Coodonation !
Step 2
Look now in the space between the Sumerian steps7200 and 6720.
What you see there are the over 5000 years old
which signals WHY they have chosen to leave in use
only 2 types of Gold Weights that measured
in the today's weights measurement through the French Metric System
ar seen to be 13.608 grams and 12.7008 grams
Step 3
...and now you are seeing the Weight of these Bega through
THE CIRCLE'S CIRCUMFERENCE as:
a - For the Heavier Bega Weight of 13.608 grams you have a weight
in the PI Value 3.141592654 of 7.330382858 representing
the Short cylinder and the Sphere while for the same Bega
in the Sumerian view you havea weight of 7.34128353
also representing the Short Cylinder and the Sphere, while
b - For the Lighter Bega Weight of 12.7008 grams
you have a weight in the PI Value 3.141592654 of 6.841690667 grams
while for the PI Value 3.14626437 you have a weight of
6.851864624 grams.
..and you may realize that the Designers of the Bega Gold Weights
of the Amratian Period of Egypt ( 4000 - 3500 BC)
when designing the physical shapes of the Bega Weights,
through making them as "short Cylinders and domed ends"
they designed in their shapes the Circle's Cicumference
(known being the fact that a Circle Circumference of the PI Value
also repsents the weights and the volume of a Cylinder), but
they created their weight in Sumerian Grains, that the now
it is measured in English Grains...
Thus it appears that in this period of between 4000 BC and 3500 BC
the Sumerian influence in Egypt it was massive...
Look now to the Gerzean Period of between 3500 BC and 3300 BC
and realize that now the Bega Gold Weights are REDESIGNED
as DOME WITH CONVEX BASE. If this was an attempt to Falsificate
the really Sumerian Origin of the Design, as to look as Egyptian,
so it is with the appearition of the English Grain designed to be... "English.
Step 4
to the value of the corresponding Circle Circumference
for the English Grain: it is 1.092452906(-03).
You shall see it appearing in full Spiritual Beauty in DRAW BEGA 4.
Step 5
Look now to the "Square from Center" where 1/8 of it is marked
with the Sumerian Grain ".o45". Eight parts of it would make .36,
and eight parts of 1.092452906 (-03) would make8.73962325 (-03)
The Mathematical Effect presented now shall be explaind in another research.
Now it is only shown as to be a type of ORIGIN...
DRAW BEGA 3
Step 1
...and now in this Draw 3 you are seeing
which signals the Historical Facts that
a - the Day was separated in 86400 seconds,
b - the Circle in separated in 1296000 seconds, and
c - THE ENGLISH ALPHABET was designed to have
only 26 Letters, and thier sum to be 351
by those who knew of the Historical Existence of
THE BEGA GOLD WEIGHTS, and of the fact that
THE CIRCLE'S CIRCUMFERENCE has a thickness of 76 units,
and that this was done it in the period of time we call as
17th-18th Century AD, and not earlier....
Step 2
Now, under your eyes you have Four Coodonations
that I shall explain now:
1- Here you see the Armonious Ballance of the English Grain 1/8 evolution
on the ancients form of Geometric Point, with the Sumerian Evolution
of the Sumnerian Grain, and with the "8)' which in Draw 2
you saw it sa being the DEcimal View, and with the Elnglish Alphabet
Ratio of 351/26 as coming to signal the mathematical coordonation
of the Day's Seconds with the Circle's Seconds.
2-In this coordonation you see the coordonation of the ONE : the Egyptian
Finger and then divided by 4 with Swastika's NUmber 10,000,
with the ratio 839808 coming also to show the Armonious Ballance
of the Day's Seconds with the Circle's Seconds...
3- Now you see the Geometric Point of the Bega Gold Weights
in both English and Sumerian Grains, coordonatted with the Decimal
System and being divided with the ratio between the Master Frequencies of
the PI Value 31.41592654 and 3.14626437 signaling the Master Frequency
of the word JEW...
4- ...and now you see the appearition in coordonation
of the Circle's Circumference, and tha fact that the Master Frequency
(5184) of theSumerian PI value 3.14626437 relates to
the Evolution of the Sumerian Grain (57600),
and the Master Frequency ( 6400) of the Science of Mathematics PI Value
3.141592654 relates to the Evolution of the so called English Grains,
thus pinpointing to the fact that at such time when it was designed
that the English Grain to be of .o048 grams, the Designers knew of a PI
Value 3.141592654 to the exactitude of the 9th decimal.
Then... what is the History of mathematics sustaining that
the exactitude of the PI value to the 9th decimal was obtained only later
in time... when the herein seen coordonatioon signals that
in 17th-18th Cent. AD the designers of the English Grain knew of' the PI Value
3.141592654 to the exactitude of the 9th decimal ?
Either the History of the PI Value it is a Fasification, or
there it is a Supernatural Power who spiritually is speaking to me ...
Just look yourself:
"(29.16 : 10) x 8 = (5184 x 36 x 57600) : (6400 x 40 x 1800)"
DRAW BEGA 4
Here you can see that "part" of the Circle's Cicumference
announced before "1.092452906 (-03)".
I shall lett you research it yourself...
Try read its Spiritual beauty looking to what it is "80",
"36", and "40", and to what is it "839808"...
DRW BEGA 5
Step 1
In this Draw you shall realize that the only Geometric PI Value
with which the Exactitude can be controlled, it is
THE SUMERIAN NATURAL GEOMETRIC PI VALUE 3.14626437
which "exists" on the inside side of the Circle Circumference.
It relaation to the Decimal System it signals that not only
the Sumerian Civilization Priests knew this Effect,
but also the Egyptian Priestood knew it.
Step 2
...and also here when looking to the Master FRequency "839808" you
shall realize that also the Designers of the word JEW
knew it also in 18th century AD when they coined the word JEW,
and gave it to their descendents to carry in time...
Step 3
Here you shall see also another Proof through which the Exactitude it is kept
only by the Sumerian PI, while the Science of Mathematics PI
it looses the Exactitude...
Observe how the Master Frequency of the word JEW "839808"
cooronates with the volume (5.625 cubic cubits) of
THE ARK OF THE COVENANT,
and how well the PI Value 3.160493827 of
comes to complete the coodonation, thus prooving the historical fact that
THE WRITERS OF THE TORAH DID NOT HATE THE EGYPTIANS,
but those who hated the Egyptians were
THE HYSCOS DESCENDETS who taking over the then Jews
obligatted the writers of the Torah to infuze in the Torah
Religious Themes for their Political Intereses...
Draw Bega 6
This is a beautiful draw !
Here you see THE ORIGIN of
It springs in time from an era when the Civilization knew
that there are Two PI Values, and that they are relatted
to THE VALUE OF ONE of the Decimal Stystem, and also that
after they are multiplied by "1.111...", which is
THE FLAW OF THE DECIMAL SYSTEM, and then divided by
SWASTIKA'S HIGH NUMBER 10,000
they create "another PI Value 3.160493827" that later in Egypt
a scribe copies for the Civilization from an older papyrus...
DRAW BEGA 7
...and here you see the Historical and mathematical PROOF which signals
Honoured Egyptiam Body of Knowledge so much that they named
their God with the letters YWHY and the First Priests of Solomon Temple
with the name of AZARIAH...
...and of course, they reported "3.160493827",
to the SUMERIAN NATURAL GEOMETRIC PI VALUE.
...and here is one of the most beautiful prooves of the Old Hebrews :
"1- (5184 x 3.160493827) = 16384 = YWHY = AZARIAH"
...and here is the Draw explained !
Step 1
Observe how for each PI Value I have designed a Finger
by following the Sumerian Prototype.
Observe that when I divided the Radius of the Circle of
by the Circle Circumference thickness of each PI Value
to the Sumerian Pi value appears the complete sum of
the Decimal System, numbers (45) while at the PI Value of
the Science of Mathematics appears only 40.5, thus the value of 45
it is divided by 1.111.....
Now you may have an hint why Leibnitz and Newton
in the Calculus had to employ an Assuming Concept where an .000...1
it is carried to the Infinity and thrown out as of no importance,
while eher you see such .000...1 under the expression " 1.111..."
See the article:
" The Invalidation of the calculus Exactitude through the DNA Concept"
where I presented how I tooks such .000...1 of the calculus" Assuming Concept,
and carried back through Decimals Steps until it could be read as 1.111...
that you see it now signaling the fact that:
"When the Decimal System's sum of 45 (that represent
The Sumerian PI Value 3.14626437) it is divided by 1.111... it became
representing the PI Value 3.14626437..."
Step 3
Observe that when you multiply the Decimal system sum of 45 with
the 40.5, and divide by the
square side" of the Bega Gold Weights you reach the known
THE EGYPTIAN CUBE OF 91.125 GRAMS.
Step 4
...and now you have the geometrical image of how both PI Values
3.141592654 and 3.14626437 are tied through the Decimal system with
THE BEGA GOLD WEIGHTS. Please read the bellow expression:
""(5184 : 45) x (6400 x 40.5) : 5184 = (14400 x 4) = 57600"
Step 5
...and still one more Proof
that the Old Hebrews apppears to have Honoured
THE EGYPTIAN CUBE OF 91.125 GRAMS
that we see it in the experssion:
"YHWH : BEGA GOLD WEIGHTS GEOMETRIC POINT SIDE =
= The Egyptian Cube of 91.125 grams..."
From the darkness of the time the Mathematical Image of
BEGA GOLD WEIGHT and of THE GEOMETRIC POINT
as being a Square with an inscribed Circle,
pulsate in time signaling that the Circumference of the Circle
has a Thickness of 76 units with two PI Values known
to the Exactitude of the 9th decimal...
They survived millenia after millenia until reaching the Hellenistic Era.
The Greeks of the Hellenistic Era through Redesigning
signals their intention to have this Body of Knowledge
Repressed from the History of the Civilization.
Fortunately for the Civilization
the Old Hebrews ( even that appears to have been under Occupation
by the Hyksos Descendents) appears that were able to Save for the Civilization
the Body of Knowledge that the Greeks of Hellenistic era
were hard to work to repress it...
As the time passed, and the Romans appears, then disapears, and
the Papal State takes Controll of the Europe, wee see Carlemagne
honouring the Cologne Ounce of 29.16 grams, and also we see
the same weight of 29.16 grams also in England as The Tower Ounce.
As this unit of weight appears tied in the Body of Knowledge of
THE BEGA GOLD WEIGHTS,
we realize that this Body of Knowledge survived with
the then Learned Jewish Scientists.
...and again looking into the passing of the time we reach
the 17th-18th Century AD, when we see the English Alphabet
already established, and the appearition of the word JEW
whose Master Frequency 839808 it is also the Master Frequency
of the ratio 13.5 between the sum of all letters of the English
Alphabet (351) and all letters (26).
As in the Draws (1 through 7) of THE BEGA GOLD WEIGHTS
I have shown that the 839808 (the Master Frequency
of the word JEW), and the cubic volume of the Ark of the Covenant,
are fully in coordonation with the elements known in Egypt, and they
fully coordonates through the Decimal System
with the Master Frequency of the Sumerian PI Value 3.14626437, and
also with the Master Frequency of the presente PI Value 3.141592654
of the Science of Mathematics, and then we see this Body of Knowledge
"just dissapears" during 17th-18th Century AD.
All these aspects indicate without doubt that the Learned Jews ended up
in the then jails of the English and/or French Kings, were tortured until
they gave up parts from this Body of Knowledge, for that the appearition of
clear being of no other then only English Origin pinpoint
this aspect to have taken place in England...
It also may be that the Jewish Learned Keepers of this Knowledge
survived somehow, but that what appears that were not able to do
the Secret Services of the Kings of England and France,
was able to finished by the Hitler's Administration....
It is this the SECRET
that understably Greece, England and France does not want to pe known
by the now Jews of England and France
by the People of England, of France, and by all Civilization ?
As the "beneficiary" of the Elimination of the Learned Jewish Keepers
of this Knowledge appears to have been also the Religious Rabbis
of 17th -18th Century AD England and France, did these High Wheels
of the then Jewish Religion participatted " in sillence" to the Repression
of this Body of Knowledge , and the Elimination of their own ?
Possible, or not this philosopher has no Access
to the Secret Files of England and France to proove it, or to dismiss it...
As the Body of Knowledge discovered by this Non-Jewish philosopher
appears in Full Coordonation with the TORAH aspects, and
with the History of the Jews, even with the word JEW,
it is then left to you Jewish Scholars, to research further this aspect,
and honestly to Report to the Jewish People,
and to the Civilization what indeed took place
in the then England and France of 17th-18th Century AD,
or be Forever Condemned to live in the ignorance and naivity
the now Universities, and Academies are perpetuating in time,
covering up the possible Secret Acts of the then Kings of England and France,
that ressulted in the dissapearing of this Body of Knowledge,
THE FALSIFICATION OF THE HISTORY OF MATHEMATICS, and
THE RESEARCH CONTINUES...
You have reached an United States of America's Independent Research
facilitated by the wisdom of the USA Drexel University's Mathforum
to give to the Independent Researchers of outside of USA Universities
an Internet Voice where free of any Censorship,
they to be able publish their Research and Opinions.
Please, be advised that this article contains not only the Research Data,
but also my Opinions of Philosopher, both protected by
the Federal Laws of the United States, and by the Intrenational Laws.
If you intend to use partial or total this research, you can use it by
the Specifications of the Copyright Notice of
that can be found attached at the end of the article:
"The Condemnation by the Paris Livre
of the French Academy of Sciences" at
Please, abide by the Mathforum's Terms of Use, and be aware that
any infringement on this author Ideas and/or Philosophical Concepts
presented on Mathforum for worldwide researchers, as also
any attempts of using Reverse Mathematical Aproaches, or
Deisgned Concepts of going around or altering for Universitar
or Academic Credits, or for Financial Gains of the Philosdophical Concepts
herein indicatted are Severe Violations of the USA,
and International Copyrights Treaties.
Detailed Visual Explanations through Geometrical Draws of
the Mathematical Exacitutude of
THE SUMERIAN NATURAL GEOMETRIC PI VALUE 3.14626437
can be found on web at:
or on the the author Facebook Page at;
at the end of the Album's Photos.
Please, be aware that in order to open and research these pages
you may have to Register and/or Sign in.
Providing that the Mathforum and the Author are mentioned,
the PH Dr. Students are Permitted Free of Charge
to use on their papers this and any other Research Data of this Author,
but a Copy of such papers have to be delivered ASAP to Mathforum,
and also to this Author.
The Same Permission ( as above indicatted)
it is also given to all Worldwide Newspapers,
and to all Student Asociation Newspapers,
who may consider to republish partial this Research Data for their readers.
Also, please, be aware that as this article here-and-there
may have typing errors, or other errors,
it shall be continuu corrected by author
until it shall remain fidel to the intended form.
Due to this effect
all researchers who shall find this article on internet
are asked that allways to return to Mathforum at:
and consider only the last corrected edition, or simplu
pass over such error(s), and research further...
Ion Vulcanescu - Philosopher
Independent Researcher in the History of Mathematics
Author, Editor, and Publisher of:
March 4 2013 | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2591031","timestamp":"2014-04-16T22:12:27Z","content_type":null,"content_length":"58200","record_id":"<urn:uuid:5c63b927-7bd2-42d9-8609-ef102f2ff35a>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00571-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about gross on Quomodocumque
Happy birthday, Dick Gross
Just returned from Dick Gross’s 60th birthday conference, which functioned as a sort of gathering of the tribe for every number theorist who’s ever passed through Harvard, and a few more besides. A
few highlights (not to slight any other of the interesting talks):
• Curt McMullen talked about Salem numbers and the topological entropy of automorphisms of algebraic surfaces (essentially the material discussed in his 2007 Arbeitstagung writeup.) In particular,
he discussed the fact that the logarithm of Lehmer’s number — conjecturally the “simplest” algebraic integer — is in fact the smallest possible positive entropy for an automorphism of a compact
complex surface. Here’s a question that occurred to me after his talk. If f is a Cremona transformation, i.e. a birational automorphism of P^2, then there’s a way to define the “algebraic
entropy” of f, as follows: the nth iterate of f is given by two rational functions (R_n(x,y),S_n(x,y)), you let d_n be the maximal degree of R_n and S_n, and you define the entropy to be the
limit of (1/n) log d_n. Question: do we know how to classify the Cremona transformations with zero entropy? The elements of PGL_3 are in here, as are the finite-order Cremona transformations
(which are themselves no joke to classify, see e.g. work of Dolgachev.) Are there others?
• Serre spoke about characters of groups taking few values, or taking the value 0 quite a lot — this comes up when you want, e.g., to be sure that two varieties have the same number of points over
F_p for all but finitely many p, supposing that they have the same number of points for 99.99% of all p. The talk included the amusing fact that a character taking only the values -1,0,1 is
either constant or a quadratic character. (But, Serre said, there are lots of characters taking only the values 0,3 — what are they, I wonder?)
• Bhargava talked about his new results with Arul Shankar on average sizes of 2-Selmer groups. It’s quite nice — at this point, the machine, once restricted to counting orbits of groups acting on
the integral points of prehomogenous vector spaces, is far more general: it seems that the group of people around Manjul is getting a pretty good grasp on the general problem of counting orbits
of bounded height of the action of G(Z) on V(Z), where G is a group over Z (even a non-reductive group!) and V is some affine space on which G acts. With the general counting machine in place,
the question is: how to interpret these orbits? Manjul showed a list of 70 representations to which the current version of the orbit-counting machine applies; each one, hopefully, corresponds
to some interesting arithmetic enumeration problem. It must be nice to know what your next 70 Ph.D. students are going to do…
Dick has a lot of friends — the open mike at the banquet lasted an hour and a half! My own banquet story was from my college years at Harvard, where Dick was my first-year advisor. One time I asked
him, in innocence, whether he and Mazur had been in graduate school together. He fixed me with a very stern look.
“Jordan,” he said, “as you can see, I am a very old man. But I am not as old as Barry Mazur.“
Tagged benedict gross, bhargava, birthday, dick gross, entropy, gross, McMullen, number theory, serre
Reader survey: what open question would you ask Dick Gross?
I’m speaking on an “open problems” panel in honor of Dick Gross’s 60th birthday. I’ve got 10 minutes. I think I know what I’m going to say, but I was just informed that the panel is to be allowed
to run late if audience interest demands it. So I thought it would be good to stock up a little more material. And it occurred to me this might be a good opportunity to blogsource! So, readers:
if you were going to promote an open problem in front of Dick Gross and his wellwishers, what would it be?
Tagged benedict gross, gross, number theory, open problems, reader survey | {"url":"http://quomodocumque.wordpress.com/tag/gross/","timestamp":"2014-04-20T10:46:20Z","content_type":null,"content_length":"51021","record_id":"<urn:uuid:f926d326-e41c-44b8-b120-435538126083>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00276-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Art of Computer Programming, Vol I: Fundamental Algorithms
Results 1 - 10 of 57
- SERIAL COMPUTER, COMM. ACM , 1977
"... A real-time list processing system is one in which the time required by the elementary list operations (e.g. CONS, CAR, COR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small)
constant. Classical implementations of list processing systems lack this property because allocating a list cell ..."
Cited by 214 (13 self)
Add to MetaCart
A real-time list processing system is one in which the time required by the elementary list operations (e.g. CONS, CAR, COR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant.
Classical implementations of list processing systems lack this property because allocating a list cell from the heap may cause a garbage collection, which process requires time proportional to the
heap size to finish. A real-time list processing system is presented which continuously reclaims garbage, including directed cycles, while linearizing and compacting the accessible cells into
contiguous locations to avoid fragmenting the free storage pool. The program is small and requires no time-sharing interrupts, making it suitable for microcode. Finally, the system requires the same
average time, and not more than twice the space, of a classical implementation, and those space requirements can be reduced to approximately classical proportions by compact list representation.
Arrays of different sizes, a program stack, and hash linking are simple extensions to our system, and reference counting is found to be inferior for many applications. Key Words and Phrases:
real-time, compacting, garbage collection, list processing, virtual memory, file or database management, storage management, storage
- Journal of Computer and System Sciences , 1998
"... We define and study the notion of min-wise independent families of permutations. We say that F ⊆ Sn is min-wise independent if for any set X ⊆ [n] and any x ∈ X, when π is chosen at random in F
we have Pr(min{π(X)} = π(x)) = 1 |X |. In other words we require that all the elements of any fixed set ..."
Cited by 191 (11 self)
Add to MetaCart
We define and study the notion of min-wise independent families of permutations. We say that F ⊆ Sn is min-wise independent if for any set X ⊆ [n] and any x ∈ X, when π is chosen at random in F we
have Pr(min{π(X)} = π(x)) = 1 |X |. In other words we require that all the elements of any fixed set X have an equal chance to become the minimum element of the image of X under π. Our research was
motivated by the fact that such a family (under some relaxations) is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
However, in the course of our investigation we have discovered interesting and challenging theoretical questions related to this concept – we present the solutions to some of them and we list the
rest as open problems.
- ACM SIGPLAN Notices , 1992
"... this paper. associated with a relocating collector is saved; other costs remain, however, such as the costs of updating all pointers and foregoing some compiler optimizations. ..."
Cited by 85 (4 self)
Add to MetaCart
this paper. associated with a relocating collector is saved; other costs remain, however, such as the costs of updating all pointers and foregoing some compiler optimizations.
- USENIX SUMMER TECHNICAL CONFERENCE , 1994
"... This paper presents a comprehensive design overview of the SunOS 5.4 kernel memory allocator. This allocator is based on a set of object-caching primitives that reduce the cost of allocating
complex objects by retaining their state between uses. These same primitives prove equally effective for mana ..."
Cited by 70 (3 self)
Add to MetaCart
This paper presents a comprehensive design overview of the SunOS 5.4 kernel memory allocator. This allocator is based on a set of object-caching primitives that reduce the cost of allocating complex
objects by retaining their state between uses. These same primitives prove equally effective for managing stateless memory (e.g. data pages and temporary buffers) because they are space-efficient and
fast. The allocator’s object caches respond dynamically to global memory pressure, and employ an objectcoloring scheme that improves the system’s overall cache utilization and bus balance. The
allocator also has several statistical and debugging features that can detect a wide range of problems throughout the system. 1.
- IN ISMM ’04: PROCEEDINGS OF THE 4TH INTERNATIONAL SYMPOSIUM ON MEMORY MANAGEMENT , 1990
"... Automatic storage management, or garbage collection, is a feature usually associated with languages oriented toward ‘‘symbolic processing,’’ such as Lisp or Prolog; it is seldom associated with
‘‘systems’’ languages, such as C and C++. This report surveys techniques for performing garbage collection ..."
Cited by 62 (5 self)
Add to MetaCart
Automatic storage management, or garbage collection, is a feature usually associated with languages oriented toward ‘‘symbolic processing,’’ such as Lisp or Prolog; it is seldom associated with
‘‘systems’’ languages, such as C and C++. This report surveys techniques for performing garbage collection for languages such as C and C++, and presents an implementation of a concurrent copying
collector for C++. The report includes performance measurements on both a uniprocessor and a multiprocessor.
- ACM Computing Surveys , 1986
"... A unified model of a family of data flow algorithms, called elimination methods, is presented. The algorithms, which gather information about the definition and use of data in a program or a set
of programs, are characterized by the manner in which they solve the systems of equations that describe d ..."
Cited by 53 (8 self)
Add to MetaCart
A unified model of a family of data flow algorithms, called elimination methods, is presented. The algorithms, which gather information about the definition and use of data in a program or a set of
programs, are characterized by the manner in which they solve the systems of equations that describe data flow problems of interest. The unified model
- In STOC ’98: Proceedings of the thirtieth annual ACM symposium on Theory of computing , 1998
"... We define and study the notion of min-wise independent families of permutations. We say that F⊆Sn is min-wise independent if for any set X ⊆ [n] and any x ∈ X, when π is chosen at random in F we
have Pr ( min{π(X)} = π(x) ) = 1 |X |. In other words we require that all the elements of any fixed set ..."
Cited by 40 (1 self)
Add to MetaCart
We define and study the notion of min-wise independent families of permutations. We say that F⊆Sn is min-wise independent if for any set X ⊆ [n] and any x ∈ X, when π is chosen at random in F we have
Pr ( min{π(X)} = π(x) ) = 1 |X |. In other words we require that all the elements of any fixed set X have an equal chance to become the minimum element of the image of X under π. Our research was
motivated by the fact that such a family (under some relaxations) is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
However, in the course of
, 1978
"... ing with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works whether directly or by
incorporation via a link, requires prior specific permission and/or a fee. Permissions may be requested from Publica ..."
Cited by 32 (7 self)
Add to MetaCart
ing with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works whether directly or by incorporation
via a link, requires prior specific permission and/or a fee. Permissions may be requested from Publications Dept, ACM Inc., 1515 Broadway, New York, NY 10036 USA, fax +1 (212) 869-0481, or
permissions@acm.org. This research was supported by the Advanced Research Projects Agency of the Department of Defense and was monitored by the Office of Naval Research under contract number
N00014-75-C-0522. Author's present address: Computer Science Department, University of Rochester, Rochester, NY 14627. 1. Introduction A severe problem in Lisp 1.5 [6] systems is the amount of time
required to access the value of a variable. This problem is compounded by Lisp's choice of "fluid" or "dynamic" scoping for nonlocal (free) variables, wherein a procedure's free variables are
considered bound in the envi...
- Information Systems , 1990
"... : Bitmaps are data structures occurring often in information retrieval. They are useful; they are also large and expensive to store. For this reason, considerable effort has been devoted to
finding techniques for compressing them. These techniques are most effective for sparse bitmaps. We propose a ..."
Cited by 25 (2 self)
Add to MetaCart
: Bitmaps are data structures occurring often in information retrieval. They are useful; they are also large and expensive to store. For this reason, considerable effort has been devoted to finding
techniques for compressing them. These techniques are most effective for sparse bitmaps. We propose a preprocessing stage, in which bitmaps are first clustered and the clusters used to transform
their member bitmaps into sparser ones, that can be more effectively compressed. The clustering method efficiently generates a graph structure on the bitmaps. In some situations, it is desired to
impose restrictions on the graph; finding the optimal graph satisfying these restrictions is shown to be NPcomplete. The results of applying our algorithm to the Bible is presented: for some sets of
bitmaps, our method almost doubled the compression savings. 1. Introduction Textual Information Retrieval Systems (IRS) are voracious consumers of computer storage resources. Most conspicuous, of
course, is the...
, 1999
"... Performance estimation of computer systems is an important topic to a large number of people in the computer industry. Computer architects need to be able to study future machines, compiler
writers need to be able to evaluate the compiler output before a machine exists, and developers need insight i ..."
Cited by 24 (1 self)
Add to MetaCart
Performance estimation of computer systems is an important topic to a large number of people in the computer industry. Computer architects need to be able to study future machines, compiler writers
need to be able to evaluate the compiler output before a machine exists, and developers need insight into the machine's performance in order to tune their code. There are many performance estimation
techniques that range from profile -based approaches to full machine simulation. Detailed simulation is one of the most common methods for estimating performance. It suffers, however, from
potentially long run times when simulating large applications using detailed processor models. This thesis | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=494442","timestamp":"2014-04-21T01:40:52Z","content_type":null,"content_length":"37469","record_id":"<urn:uuid:6e3a7af5-7352-47e7-a043-f90fcd4a29f6>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00242-ip-10-147-4-33.ec2.internal.warc.gz"} |
West Carteret, NJ Math Tutor
Find a West Carteret, NJ Math Tutor
I am a highly motivated, passionate math teacher who has taught in high performing schools in four states and two countries. I have previously taught all grades from 5th to 10th and am extremely
comfortable teaching all types of math to all level learners. I am a results driven educator who motivates and educates in a fun, focused atmosphere.
7 Subjects: including algebra 2, American history, accounting, prealgebra
...The GRE test includes math up to algebra and geometry, both of which I have been tutoring for a long time. I can help with the material covered on the tests as well as test taking strategies
which will result in a top score. I have successfully tutored many students who went on to be admitted to great colleges because they scored high on the entrance exams.
15 Subjects: including algebra 1, algebra 2, calculus, geometry
Hi, my name is Ryan A. I am 24 years old and in my final year of law school at Columbia Law School. I transferred to Columbia from Fordham University School of Law.
16 Subjects: including prealgebra, algebra 1, reading, writing
...I am an motivated teacher who can teach you to your understanding. I am an educator who motivates and educates in a fun, focused atmosphere. I look forward to help and educate all students who
are willing to learn.
6 Subjects: including calculus, algebra 1, algebra 2, geometry
...I was an Assistant Baseball coach at the United States Military Academy Preparatory School, Ft. Monmouth, NJ coaching college-level players the basics and advanced techniques on pitching,
hitting, fielding, and positioning. I was a college Baseball player at the United States Military Academy at West Point and coached fellow teammates on the points of hitting a ball and fielding.
73 Subjects: including algebra 2, French, public speaking, business
Related West Carteret, NJ Tutors
West Carteret, NJ Accounting Tutors
West Carteret, NJ ACT Tutors
West Carteret, NJ Algebra Tutors
West Carteret, NJ Algebra 2 Tutors
West Carteret, NJ Calculus Tutors
West Carteret, NJ Geometry Tutors
West Carteret, NJ Math Tutors
West Carteret, NJ Prealgebra Tutors
West Carteret, NJ Precalculus Tutors
West Carteret, NJ SAT Tutors
West Carteret, NJ SAT Math Tutors
West Carteret, NJ Science Tutors
West Carteret, NJ Statistics Tutors
West Carteret, NJ Trigonometry Tutors
Nearby Cities With Math Tutor
Avenel Math Tutors
Bayway, NJ Math Tutors
Bergen Point, NJ Math Tutors
Chestnut, NJ Math Tutors
Hopelawn, NJ Math Tutors
Maplecrest, NJ Math Tutors
Menlo Park, NJ Math Tutors
Parkandbush, NJ Math Tutors
Peterstown, NJ Math Tutors
Port Reading Math Tutors
Townley, NJ Math Tutors
Tremley Point, NJ Math Tutors
Tremley, NJ Math Tutors
Union Square, NJ Math Tutors
Winfield Park, NJ Math Tutors | {"url":"http://www.purplemath.com/West_Carteret_NJ_Math_tutors.php","timestamp":"2014-04-20T06:52:23Z","content_type":null,"content_length":"24150","record_id":"<urn:uuid:2fd811ad-5f59-4eaf-a654-b0cea1be71a5>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00022-ip-10-147-4-33.ec2.internal.warc.gz"} |
sine and cosine rule?
May 4th 2010, 12:25 PM #1
Junior Member
Apr 2010
sine and cosine rule?
Hi I know the formula
$sin(\alpha \pm n)\pi=(-1)^nsin\alpha \pi$
but what about
$cos(\alpha \pm n)\pi=?$
$cos(\alpha \mp n)\pi=?$
$sin(\alpha \mp n)\pi=?$
Hi I know the formula
$sin(\alpha \pm n)\pi=(-1)^nsin\alpha \pi$ , but what about
$cos(\alpha \pm n)\pi=?$
$cos(\alpha \pm n)\pi=(-1)^n\cos\alpha\pi$ , assuming $n\in\mathbb{Z}$ , and the other two below are, of course, exactly the same as the first two (why do you think they're different??)
$cos(\alpha \mp n)\pi=?$
$sin(\alpha \mp n)\pi=?$
Just making sure about the other 2! Thanks!
cos(a+ b)= cos(a)cos(b)- sin(a)sin(b).
In particular, if $b= n\pi$ and $a= \alpha$,
$cos(\alpha+ n\pi)= cos(\alpha)cos(n\pi)- sin(\alpha)sin(n\pi)$.
Of course, $sin(n\pi)= 0$, for all n, and $cos(n\pi)= (-1)^n$.
Therefore, $cos(\alpha+ n\pi)= (-1)^ncos(\alpha)$.
For $cos(\alpha- n\pi)$, use the fact that cosine is an even function: $cos(-n\pi)= cos(n\pi)= (-1)^n$
there is no difference at all between $\alpha\pm n\pi$ and $\alpha\mp n\pi$. They both mean exactly the same thing.
$cos(\alpha\pm{n}){\pi}=cos(\alpha{\pi})cos(\pm{n}{ \pi})-sin(\alpha{\pi})sin(\pm{n}{\pi})<br />$
If n=1, this is
If n=2, it's
so the power of $-1$ is n
May 4th 2010, 12:45 PM #2
Oct 2009
May 4th 2010, 12:48 PM #3
Junior Member
Apr 2010
May 12th 2010, 04:13 AM #4
Junior Member
Apr 2010
May 12th 2010, 04:24 AM #5
MHF Contributor
Apr 2005
May 12th 2010, 04:31 AM #6
MHF Contributor
Dec 2009 | {"url":"http://mathhelpforum.com/trigonometry/143041-sine-cosine-rule.html","timestamp":"2014-04-18T10:18:39Z","content_type":null,"content_length":"51211","record_id":"<urn:uuid:e960f029-8f3c-4c3d-a530-8499bc6b7efc>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00052-ip-10-147-4-33.ec2.internal.warc.gz"} |
HowStuffWorks "How Chess Computers Work"
Computers and Chess
The current state-of-the-art in computer chess is fairly intricate, but all of it involves blind computation that is very simple at the core.
Let's say you start with a chess board set up for the start of a game. Each player has 16 pieces. Let's say that white starts. White has 20 possible moves:
• The white player can move any pawn forward one or two positions.
• The white player can move either knight in two different ways.
The white player chooses one of those 20 moves and makes it.
For the black player, the options are the same: 20 possible moves. So black chooses a move.
Now white can move again. This next move depends on the first move that white chose to make, but there are about 20 or so moves white can make given the current board position, and then black has 20
or so moves it can make, and so on.
This is how a computer looks at chess. It thinks about it in a world of "all possible moves," and it makes a big tree for all of those moves, like this:
In this tree, there are 20 possible moves for white. There are 20 * 20 = 400 possible moves for black, depending on what white does. Then there are 400 * 20 = 8,000 for white. Then there are 8,000 *
20 = 160,000 for black, and so on. If you were to fully develop the entire tree for all possible chess moves, the total number of board positions is about 1,000,000,000,000,000,000,000,000,
000,000,000,000, or 10^120, give or take a few. That's a very big number. For example, there have only been 10^26 nanoseconds since the Big Bang. There are thought to be only 10^75atoms in the entire
universe. When you consider that the Milky Way galaxy contains billions of suns, and there are billions of galaxies, you can see that that's a whole lot of atoms. That number is dwarfed by the number
of possible chess moves. Chess is a pretty intricate game!
No computer is ever going to calculate the entire tree. What a chess computer tries to do is generate the board-position tree five or 10 or 20 moves into the future. Assuming that there are about 20
possible moves for any board position, a five-level tree contains 3,200,000 board positions. A 10-level tree contains about 10,000,000,000,000 (10 trillion) positions. The depth of the tree that a
computer can calculate is controlled by the speed of the computer playing the game. The fastest chess computers can generate and evaluate millions of board positions per second.
Once it generates the tree, then the computer needs to "evaluate the board positions." That is, the computer has to look at the pieces on the board and decide whether that arrangement of pieces is
"good" or "bad." The way it does this is by using an evaluation function. The simplest possible function might just count the number of pieces each side has. If the computer is playing white and a
certain board position has 11 white pieces and nine black pieces, the simplest evaluation function might be:
11 - 9 = 2
Obviously, for chess that formula is way too simple, because some pieces are more valuable than others. So the formula might apply a weight to each type of piece. As the programmer thinks about it,
he or she makes the evaluation function more and more complicated by adding things like board position, control of the center, vulnerability of the king to check, vulnerability of the opponent's
queen, and tons of other parameters. No matter how complicated the function gets, however, it is condensed down to a single number that represents the "goodness" of that board position. | {"url":"http://electronics.howstuffworks.com/chess1.htm","timestamp":"2014-04-21T12:54:32Z","content_type":null,"content_length":"116545","record_id":"<urn:uuid:91759fc1-7353-4619-94c5-a1ba194ead94>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00002-ip-10-147-4-33.ec2.internal.warc.gz"} |
Measuring historical volatility
> Measuring historical volatility
Measuring historical volatility
July 24, 2011
Say we are trying to estimate risk on a stock or a portfolio of stocks. For the purpose of this discussion, let’s say we’d like to know how far up or down we might expect to see a price move in one
First we need to decide how to measure the upness or downness of the prices as they vary from day to day. In other words we need to define a return. For most people this would naturally be defined as
a percentage return, which is given by the formula:
$(p_t - p_{t-1})/p_{t-1},$
where $p_t$ refers to the price on day $t$. However, there are good reasons to define a return slightly differently, namely as a log return:
If you know your power series expansions, you will quickly realize there is not much difference between these two definitions for small returns- it’s only when we are talking about pretty serious
market days that we will see a difference. One advantage of using the log returns is that they are additive- if you go down 0.01 one day, then up 0.01 the next, you end up with the same price as you
started. This is not true for percentage returns (and is even more not true when you consider large movements like 50% down one day, 50% up the next).
Once we have our returns defined, we can keep a running estimate of how much we have seen it change recently, which is usually measured as a sample standard deviation, and is called a volatility
A critical decision in measuring the volatility is in choosing a lookback window, which is a length of time in the past we will take our information from. The longer the lookback window is, the more
information we have to go by for our estimate. However, the shorter our lookback window, the more quickly our volatility estimate responds to new information. Sometimes you can think about it like
this: if a pretty big market event occurs, how long does it take for the market to “forget about it”? That’s pretty vague but it can give one an intuition on the appropriate length of a lookback
window. So, for example, more than a week, less than 4 months.
Next we need to decide how we are using the past few days worth of data. The simplest approach is to take a strictly rolling window, which means we weight each of the previous n days equally and a
given day’s return is counted for those n days and then drops off the back of a window. The bad news about this easy approach is that a big return will be counted as big until that last moment, and
it will completely disappear. This doesn’t jive with the sense of the ways people forget about things- they usually let information gradually fade from their memories.
For this reason we instead have a continuous look-back window, where we exponentially downweight the older data and we have a concept of the “half-life” of the data. This works out to saying that we
scale the impact of the past returns depending on how far back in the past they are, and for each day they get multiplied by some number less than 1 (called the decay). For example, if we take the
number to be 0.97, then for 5 days ago we are multiplying the impact of that return by the scalar 0.97^5. Then we will divide by the sum of the weights, and overall we are taking the weighted average
of returns where the weights are just powers of something like 0.97. The “half-life” in this model can be inferred from the number 0.97 using these formulas as -ln(2)/ln(0.97) = 23.
Now that we have figured out how much we want to weight each previous day’s return, we calculate the variance as simply the weighted sum of the squares of the previous returns. Then we take the
square root at the end to estimate the volatility.
Note I’ve just given you a formula that involves all of the previous returns. It’s potentially an infinite calculation, albeit with exponentially decaying weights. But there’s a cool trick: to
actually compute this we only need to keep one running total of the sum so far, and combine it with the new squared return. So we can update our vol estimate with one thing in memory and one easy
weighted average. This is easily seen as follows:
First, we are dividing by the sum of the weights, but the weights are powers of some number s, so it’s a geometric sum and the sum is given by $1/(1-s).$
Next, assume we have the current variance estimate as
$V_{old} = (1-s) \cdot \sum_i r_i^2 s^i$
and we have a new return $r_0$ to add to the series. Then it’s not hard to show we just want
$V_{new} = s \cdot V_{old} + (1-s) \cdot r_0^2.$
Note that I said we would use the sample standard deviation, but the formula for that normally involves removing the mean before taking the sum of squares. Here we ignore the mean, mostly because we
are typically taking daily volatility, where the mean (which is hard to anticipate in any case!) is a much smaller factor than the noise. If we were to measure volatility on a longer time scale such
as quarters or years, then we would not ignore the mean.
In my next post I will talk about how people use and abuse this concept of volatility, and in particular how it is this perspective that leads people to say things like, “a 6-standard deviation event
took place three times in a row.”
1. July 24, 2011 at 4:26 pm |
Any market is a mixture of people trying to understand the fundamentals of what is being bought and sold, and people trying to understand the market and its participants. Small fish doing the
latter are the suckers that make the big fish rich. Even more so when the small fish have predictable behaviour that the big fish are big enough to trigger.
□ July 29, 2011 at 8:02 am |
awesome thanks! I got pretty formulas!! | {"url":"http://mathbabe.org/2011/07/24/measuring-historical-volatility/","timestamp":"2014-04-18T05:31:20Z","content_type":null,"content_length":"54203","record_id":"<urn:uuid:55bd4ffd-30ed-4a84-a4e0-aa73f1af13d4>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00641-ip-10-147-4-33.ec2.internal.warc.gz"} |
Volo, IL Algebra Tutor
Find a Volo, IL Algebra Tutor
...In 2011, I graduated from the University of Michigan (Ann Arbor) with a BA in Classical Civilization. As a part of my coursework, I have taken the equivalent of seven semesters of Latin, and I
have taken six semesters of Classical Greek. Additionally, I gained a broad array of Latin teaching skills by taking a Latin teaching practicum class.
20 Subjects: including algebra 2, English, algebra 1, writing
...We will go over any subject as many times as necessary for the student to understand. I have had many physics courses and much related subject matter in my under-graduate engineering
coursework and graduate work in applied mathematics. I bring a diverse background to the tutoring sessions.
18 Subjects: including algebra 1, algebra 2, physics, GRE
...I am able to help in pre-algebra, algebra, Geometry, College-algebra, Trigonometry and Calculus. I am also helping students who is planning to take the AP Calculus, ACT and SAT exams. Many
students who hated math started liking it after my tutoring.
12 Subjects: including algebra 2, algebra 1, calculus, trigonometry
...While I do have a 24-hour cancellation policy, I strive to be flexible and offer make-up lessons. I can do private tutoring or small group lessons. I most often tutor at the Warren-Newport
Public Library but if there is another library that is more convenient for you, I can arrange to meet there.
5 Subjects: including algebra 1, biology, trigonometry, SAT math
...In addition, I took one semester worth of Organic Chemistry and one semester worth of Molecular and Cellular Biology. In addition to my classes, I have had experience researching at
Northwestern University in Evanston, IL and Feinberg School of Medicine in Chicago, IL. My research projects regarded estrogen receptors and tuberculosis respectively.
16 Subjects: including algebra 1, algebra 2, chemistry, calculus
Related Volo, IL Tutors
Volo, IL Accounting Tutors
Volo, IL ACT Tutors
Volo, IL Algebra Tutors
Volo, IL Algebra 2 Tutors
Volo, IL Calculus Tutors
Volo, IL Geometry Tutors
Volo, IL Math Tutors
Volo, IL Prealgebra Tutors
Volo, IL Precalculus Tutors
Volo, IL SAT Tutors
Volo, IL SAT Math Tutors
Volo, IL Science Tutors
Volo, IL Statistics Tutors
Volo, IL Trigonometry Tutors
Nearby Cities With algebra Tutor
Buffalo Grove algebra Tutors
Bull Valley, IL algebra Tutors
Crystal Lake, IL algebra Tutors
Fox Lake, IL algebra Tutors
Gurnee algebra Tutors
Holiday Hills, IL algebra Tutors
Island Lake algebra Tutors
Johnsburg, IL algebra Tutors
Lakemoor, IL algebra Tutors
Mchenry, IL algebra Tutors
Port Barrington, IL algebra Tutors
Round Lake Beach, IL algebra Tutors
Round Lake Park, IL algebra Tutors
Round Lake, IL algebra Tutors
Wheeling, IL algebra Tutors | {"url":"http://www.purplemath.com/volo_il_algebra_tutors.php","timestamp":"2014-04-17T08:05:27Z","content_type":null,"content_length":"24015","record_id":"<urn:uuid:69fb8799-d853-4ad5-9c2a-e3c0ee820240>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00279-ip-10-147-4-33.ec2.internal.warc.gz"} |
Marina Del Rey Algebra 1 Tutor
...At the Screen Actors Guild Foundation and the American Federation of Radio and TV Artists for actors in Hollywood I did many workshops and presentations that involved public speaking and
performance. I also was a member of Toastmasters and did their public speaking training as well. In addition, I wrote a book on stage fright in 2003.
109 Subjects: including algebra 1, reading, Spanish, chemistry
...In my tutoring, I demonstrate the thinking skills and techniques to be learned and then lead the student in practicing them. This is in accord with the most recent research on effective
teaching. I am aware that the development and effective use of vocabulary in all areas of study is essential for mastery.
22 Subjects: including algebra 1, English, reading, biology
...Students enjoy learning high interest words and utilizing a multi-sensory approach. These approaches to learning boost a student's motivation & ability to read. I've been a Special Education
Teacher for over 3 years with a Master's degree in Special Education.
15 Subjects: including algebra 1, reading, accounting, ESL/ESOL
...I've been tutoring privately on and off for about ten years. I tutored math, Spanish, sociology, and some science courses for six years at a community college and I have also tutored middle and
high school students either privately or through after-school programs. I've passed the CBEST as well as the Spanish language proficiency test for the Culver City Unified School District.
10 Subjects: including algebra 1, Spanish, writing, chemistry
So about me: I graduated with honors from UCLA in 2004 with a Bachelors of Science degree in Applied Mathematics. I then went on to graduate school to receive my Master of Science in Pure
Mathematics from CSUN (in 2008). I am currently (at the time I write this) a tenured professor of Mathematics...
14 Subjects: including algebra 1, calculus, algebra 2, geometry | {"url":"http://www.purplemath.com/Marina_Del_Rey_algebra_1_tutors.php","timestamp":"2014-04-17T04:34:33Z","content_type":null,"content_length":"24476","record_id":"<urn:uuid:3aabcdd6-a894-44f5-88bf-923d771ec83e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00306-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: HESSIAN RIEMANNIAN GRADIENT FLOWS
FELIPE ALVAREZ, J´ER^OME BOLTE, AND OLIVIER BRAHIC§
SIAM J. CONTROL OPTIM. c 2004 Society for Industrial and Applied Mathematics
Vol. 43, No. 2, pp. 477501
Abstract. In view of solving theoretically constrained minimization problems, we investigate
the properties of the gradient flows with respect to Hessian Riemannian metrics induced by Legendre
functions. The first result characterizes Hessian Riemannian structures on convex sets as metrics that
have a specific integration property with respect to variational inequalities, giving a new motivation
for the introduction of Bregman-type distances. Then, the general evolution problem is introduced,
and global convergence is established under quasi-convexity conditions, with interesting refinements
in the case of convex minimization. Some explicit examples of these gradient flows are discussed. Dual
trajectories are identified, and sufficient conditions for dual convergence are examined for a convex
program with positivity and equality constraints. Some convergence rate results are established. In
the case of a linear objective function, several optimality characterizations of the orbits are given:
optimal path of viscosity methods, continuous-time model of Bregman-type proximal algorithms,
geodesics for some adequate metrics, and projections of q-trajectories of some Lagrange equations
and completely integrable Hamiltonian systems.
Key words. gradient flow, Hessian Riemannian metric, Legendre-type convex function, ex-
istence, global convergence, Bregman distance, Lyapunov functional, quasi-convex minimization, | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/065/2749625.html","timestamp":"2014-04-20T21:33:27Z","content_type":null,"content_length":"8840","record_id":"<urn:uuid:3af827e0-b610-409f-8438-127f60acda92>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00577-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reduction of computation time for crossed-grating problems: a group-theoretic approach
A systematic approach based on group theory is established to deal with diffraction problems of crossed gratings by exploiting symmetries. With this approach, a problem in an asymmetrical incident
mounting can be decomposed into a superposition of several symmetrical basis problems so that the computation efficiency is improved effectively. This methodology offers a convenient and succinct way
to treat all possible symmetry cases by following only several mechanical steps instead of intricate mathematical considerations or physical intuition. It is also general, applicable to both
scalar-wave and vector-wave problems and in principle can be easily adapted to any numerical method. A numerical example is presented to show its application and effectiveness.
© 2004 Optical Society of America
OCIS Codes
(000.3860) General : Mathematical methods in physics
(050.1950) Diffraction and gratings : Diffraction gratings
(050.2770) Diffraction and gratings : Gratings
Benfeng Bai and Lifeng Li, "Reduction of computation time for crossed-grating problems: a group-theoretic approach," J. Opt. Soc. Am. A 21, 1886-1894 (2004)
Sort: Year | Journal | Reset
1. P. Vincent, “A finite-difference method for dielectric and conducting crossed gratings,” Opt. Commun. 26, 293–296 (1978).
2. D. Maystre and M. Nevière, “Electromagnetic theory of crossed gratings,” J. Opt. (Paris) 9, 301–306 (1978).
3. S. T. Han, Y. L. Tsao, R. M. Walser, and M. F. Becker, “Electromagnetic scattering of two-dimensional surface-relief dielectric gratings,” Appl. Opt. 31, 2343–2352 (1992).
4. D. C. Dobson and J. A. Cox, “An integral equation method for biperiodic diffraction structures,” in International Conference on the Application and Theory of Periodic Structures, J. M. Lerner and
M. R. McKinney, eds., Proc. SPIE 1545, 106–113 (1991).
5. G. H. Derrick, R. C. McPhedran, D. Maystre, and M. Nevière, “Crossed gratings: a theory and its applications,” Appl. Phys. 18, 39–52 (1979).
6. R. C. McPhedran, G. H. Derrick, M. Nevière, and D. Maystre, “Metallic crossed gratings,” J. Opt. (Paris) 13, 209–218 (1982).
7. J. B. Harris, T. W. Preist, J. R. Sambles, R. N. Thorpe, and R. A. Watts, “Optical response of bigratings,” J. Opt. Soc. Am. A 13, 2041–2049 (1996).
8. G. Granet, “Analysis of diffraction by surface-relief crossed gratings with use of the Chandezon method: application to multilayer crossed gratings,” J. Opt. Soc. Am. A 15, 1121–1131 (1998).
9. R. Bräuer and O. Bryngdahl, “Electromagnetic diffraction analysis of two-dimensional gratings,” Opt. Commun. 100, 1–5 (1993).
10. E. Noponen and J. Turunen, “Eigenmode method for electromagnetic synthesis of diffractive elements with three-dimensional profiles,” J. Opt. Soc. Am. A 11, 2494–2502 (1994).
11. L. Li, “New formulation of the Fourier modal method for crossed surface-relief gratings,” J. Opt. Soc. Am. A 14, 2758–2767 (1997).
12. J. J. Greffet, C. Baylard, and P. Versaevel, “Diffraction of electromagnetic waves by crossed gratings: a series solution,” Opt. Lett. 17, 1740–1742 (1992).
13. V. Bagnoud and S. Mainguy, “Diffraction of electromagnetic waves by dielectric crossed gratings: a three-dimensional Rayleigh–Fourier solution,” J. Opt. Soc. Am. A 16, 1277–1285 (1999).
14. O. P. Bruno and F. Reitich, “Numerical solution of diffraction problems: a method of variation of boundaries. III. Doubly periodic gratings,” J. Opt. Soc. Am. A 10, 2551–2562 (1993).
15. O. P. Bruno and F. Reitich, “Calculation of electromagnetic scattering via boundary variations and analytic continuation,” Appl. Comput. Electromagn. Soc. J. 11, 17–31 (1996).
16. P. Lalanne and G. M. Morris, “Highly improved convergence of the coupled-wave method for TM polarization,” J. Opt. Soc. Am. A 13, 779–784 (1996).
17. G. Granet and B. Guizal, “Efficient implementation of the coupled-wave method for metallic lamellar gratings in TM polarization,” J. Opt. Soc. Am. A 13, 1019–1023 (1996).
18. L. Li, “Use of Fourier series in the analysis of discontinuous periodic structures,” J. Opt. Soc. Am. A 13, 1870–1876 (1996).
19. G. Granet and J. Plumey, “Parametric formulation of the Fourier modal method for crossed surface-relief gratings,” J. Opt. A Pure Appl. Opt. 4, S145–S149 (2002).
20. L. Li, “Fourier modal method for crossed anisotropic gratings with arbitrary permittivity and permeability tensors,” J. Opt. A Pure Appl. Opt. 5, 345–355 (2003).
21. P. Lalanne, “Improved formulation of the coupled-wave method for two-dimensional gratings,” J. Opt. Soc. Am. A 14, 1592–1598 (1997).
22. P. Lalanne and D. Lemercier-Lalanne, “On the effective medium theory of subwavelength periodic structures,” J. Mod. Opt. 43, 2063–2085 (1996).
23. C. Zhou and L. Li, “Formulation of Fourier modal method of symmetric crossed gratings in symmetric mountings,” J. Opt. A Pure Appl. Opt. 6, 43–50 (2004).
24. B. Y. Kinber and A. B. Kotlyar, “Use of symmetry in solving diffraction problems,” Radio Eng. Electron. Phys. 16, 581–587 (1971).
25. W. Ludwig and C. Falter, Symmetries in Physics: Group Theory Applied to Physical Problems (Springer, Berlin, 1988).
26. J. V. Smith, Geometrical and Structural Crystallography (Wiley, New York, 1982).
27. J. F. Cornwell, ed., “Appendix C: Character tables for the crystallographic point groups,” in Group Theory in Physics: an Introduction (Academic, San Diego, Calif., 1997), pp. 299–318.
28. T. Hahn, ed., International Tables for Crystallography, Vol. A (Reidel, Dordrecht, The Netherlands, 1983), pp. 92–109.
OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies.
In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed.
« Previous Article | Next Article » | {"url":"http://www.opticsinfobase.org/josaa/abstract.cfm?uri=josaa-21-10-1886","timestamp":"2014-04-18T16:20:51Z","content_type":null,"content_length":"130620","record_id":"<urn:uuid:28efccd3-707f-4486-97b4-54bdd3c3379d>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00145-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: prove that an angle inscribed in a semicircle must be a right angle
Replies: 3 Last Post: Jan 25, 2008 11:39 PM
Messages: [ Previous | Next ]
Ladnor Geissinger Re: prove that an angle inscribed in a semicircle must be a right angle
Posted: Jan 25, 2008 11:39 PM
Posts: 313
From: University of North Carolina at Here is an interesting variant of that problem which is appropriate for any discussion of vector geometry.
Chapel Hill
Registered: 12/4/04 Suppose A and B are any two different points in 2,3, or n-space. Let S be the locus of all points X such that
(X-A)#(X-B)= 0 , where Y#W is the inner (dot) product of Y and W. Explain why S is a sphere (circle) with the segment from A to B as a diametral line.
Hint: complete the square.
Date Subject Author
1/21/08 prove that an angle inscribed in a semicircle must be a right angle niki7a
1/21/08 Re: prove that an angle inscribed in a semicircle must be a right angle Frederick Williams
1/21/08 Re: prove that an angle inscribed in a semicircle must be a right angle niki7a
1/25/08 Re: prove that an angle inscribed in a semicircle must be a right angle Ladnor Geissinger | {"url":"http://mathforum.org/kb/thread.jspa?threadID=1686067&messageID=6073363","timestamp":"2014-04-19T08:07:51Z","content_type":null,"content_length":"20082","record_id":"<urn:uuid:25b38580-dca8-4034-89ff-847983a2edaf>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00295-ip-10-147-4-33.ec2.internal.warc.gz"} |
computational trinitarianism
computational trinitarianism
Type theory
Category theory
Universal constructions
Constructivism, Realizability, Computability
Under the identifications
the following notions are equivalent:
1. A proof of a proposition. (In logic.)
2. A program with output some type. (In type theory and computer science.)
3. A generalized element of an object. (In category theory.)
This is referred to as “computational trinitarianism” in (Harper), where also an exposition is given.
The central dogma of computational trinitarianism holds that Logic, Languages, and Categories are but three manifestations of one divine notion of computation. There is no preferred route to
enlightenment: each aspect provides insights that comprise the experience of computation in our lives.
Computational trinitarianism entails that any concept arising in one aspect should have meaning from the perspective of the other two. If you arrive at an insight that has importance for logic,
languages, and categories, then you may feel sure that you have elucidated an essential concept of computation–you have made an enduring scientific discovery. (Harper)
computational trinitarianism = propositions as types +programs as proofs +relation type theory/category theory
In the introduction of
• Paul-André Melliès, Functorial boxes in string diagrams, Procceding of Computer Science Logic 2006 in Szeged, Hungary. 2006 (pdf)
the insight is recalled to have surfaced in the 1970s, with an early appearance in print being the monograph
• Joachim Lambek, Phil Scott, Introduction to Higher Order Categorical Logic, Cambridge Studies in Advanced Mathematics Vol. 7. Cambridge University Press, 1986.
See also at History of categorical semantics of linear type theory for more on this.
A exposition of the relation between the three concepts is in
An exposition with emphasis on linear logic/quantum logic and the relation to physics is in
For further references see at programs as proofs, propositions as types, and relation between category theory and type theory.
Revised on March 6, 2014 13:11:08 by
Urs Schreiber | {"url":"http://www.ncatlab.org/nlab/show/computational+trinitarianism","timestamp":"2014-04-16T21:56:46Z","content_type":null,"content_length":"65574","record_id":"<urn:uuid:0b687f2a-5b94-42e4-bc86-bef1f977e7df>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00619-ip-10-147-4-33.ec2.internal.warc.gz"} |
The electric dipole moment of a system with two electric charges is defined as the product of the two charges divided by the distance between them. It is a vector quantity, with the positive
direction defined as pointing from the (more) negative charge toward the (more) positive charge. The electric dipole moment of a more complicated system of charges is simply the sum of the moments of
each pair of charges. | {"url":"http://www.learner.org/courses/physics/glossary/definition.html?invariant=dipole_moment","timestamp":"2014-04-16T13:08:41Z","content_type":null,"content_length":"2720","record_id":"<urn:uuid:92194d3c-4728-4fee-9f27-ebbd6c68f5f6>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exploring Haskell's List Functions
primary data type is the list and there are a rich variety of functions for transforming lists. Most of these functions are familiar to me from
sequences but there are some other useful functions that I haven't seen before.
, as you'd expect, gives you the last item in a list. It has a corresponding function
that gives you everything but the last item.
allows you to insert an element between each element (e.g.
init (intersperse 'a' "bnnn") => "banana"
. Similarly,
inserts the lists between lists and concatenates the results. For different arrangements of the items, there are two functions.
gives you a list of all subsequences of the argument and
gives all the possible permutations.
Prelude> subsequences "abc"
Prelude> permutations "abc"
To reduce lists to a single value there are many versions of the fold function (in Clojure there was just
is the direct equivalent to reduce which reduces a list to a single binary operator.
foldl (+) 0 [1,2,3]
is equivalent to (((0 + 1) + 2) + 3).
is a convenience method without the initial argument, that only applies to non-empty lists
are right-associative so
foldr (+) 0 [1,2,3]
evaluates to (0 + (1 + (2 + 3))).
The fold family of functions can be extremely powerful - I need to read
"A tutorial on the universality and expressiveness of fold" [PDF]
that explores this in more detail. Some example functions that operate on lists defined as folds in Haskell include
On a side note, there are strict versions of the foldl functions, that with a '. Why'd you need these? Haskell is lazy by default which can mean you build up a great big thunk (a pending
calculation). This can be bad (for example, increased space requirements). By making a strict version you evaluate the values as they becomes available. This stops the huge think building up and can
be more efficient. There's a good discussion of this
doesn't have a corresponding strict version, and looking at the expansion it's easy to see why - there's nothing to be lazy with as the first value you can evaluate is right at the end of the list!
is an interesting function. It can be considered the opposite of
as it is used to build up a list from a seed value. The first argument is a function that returns
if it's finished or
Just (a,b)
is prepended to the list, and
is used as the next element. For example we can generate the
Fibonacci sequence
fibonacci :: [Integer]
fibonacci = unfoldr (\[a,b] -> Just(a+b,[b,b+a])) [0,1]
Prelude> take 10 fibonacci
can be written as
iterate f == unfoldr (\x -> Just (x, f x))
. Another paper to add to the reading list is
"The under appreciated unfold"
are functions for getting prefixes and suffixes of (potentially) infinite lists.
does both at the same time, returning a tuple
(take n xs, drop n xs)
take or drop errors whilst some predicate holds. Putting these together we can write a function
which groups elements into sublists of size N.
groupN :: [a] -> Int -> [[a]]
groupN [] _ = []
groupN xs n = a : groupN b n where
(a,b) = splitAt n xs
Prelude> groupN [1..7] 2
The Haskell list library is very complete and there's definitely some new ideas for me to absorb there. In particular, understanding unfoldr and folding in more detail seems to be an important thing
to do! | {"url":"http://www.fatvat.co.uk/2009/09/exploring-haskells-list-functions.html","timestamp":"2014-04-19T04:20:16Z","content_type":null,"content_length":"78061","record_id":"<urn:uuid:7fb4fda6-d55c-422b-96e8-ea031953ac56>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00359-ip-10-147-4-33.ec2.internal.warc.gz"} |
Deciding the word problem in the union of equational theories
Results 1 - 10 of 19
- Journal of Automated Reasoning , 2004
"... We extend Nelson-Oppen combination procedure to the case of theories which are compatible with respect to a common subtheory in the shared signature. The notion of compatibility relies on model
completions and related concepts from classical model theory. ..."
Cited by 40 (10 self)
Add to MetaCart
We extend Nelson-Oppen combination procedure to the case of theories which are compatible with respect to a common subtheory in the shared signature. The notion of compatibility relies on model
completions and related concepts from classical model theory.
- THEORETICAL COMPUTER SCIENCE , 2001
"... In this paper we outline a theoretical framework for the combination of decision procedures for constraint satisfiability. We describe a general combination method which, given a procedure that
decides constraint satisfiability with respect to a constraint theory T1 and one that decides constraint s ..."
Cited by 35 (4 self)
Add to MetaCart
In this paper we outline a theoretical framework for the combination of decision procedures for constraint satisfiability. We describe a general combination method which, given a procedure that
decides constraint satisfiability with respect to a constraint theory T1 and one that decides constraint satisfiability with respect to a constraint theory T2, produces a procedure that (semi-)
decides constraint satisfiability with respect to the union of T1 and T2. We provide a number of model-theoretic conditions on the constraint language and the component constraint theories for the
method to be sound and complete, with special emphasis on the case in which the signatures of the component theories are non-disjoint. We also describe some general classes of theories to which our
combination results apply, and relate our approach to some of the existing combination methods in the field.
- In Proc. IJCAR-3, U. Furbach and , 2006
"... Abstract. In the context of combinations of theories with disjoint signatures, we classify the component theories according to the decidability of constraint satisfiability problems in arbitrary
and in infinite models, respectively. We exhibit a theory T1 such that satisfiability is decidable, but s ..."
Cited by 22 (16 self)
Add to MetaCart
Abstract. In the context of combinations of theories with disjoint signatures, we classify the component theories according to the decidability of constraint satisfiability problems in arbitrary and
in infinite models, respectively. We exhibit a theory T1 such that satisfiability is decidable, but satisfiability in infinite models is undecidable. It follows that satisfiability in T1 ∪ T2 is
undecidable, whenever T2 has only infinite models, even if signatures are disjoint and satisfiability in T2 is decidable. In the second part of the paper we strengthen the Nelson-Oppen decidability
transfer result, by showing that it applies to theories over disjoint signatures, whose satisfiability problem, in either arbitrary or infinite models, is decidable. We show that this result covers
decision procedures based on rewriting, complementing recent work on combination of theories in the rewrite-based approach to satisfiability. 1
- The Journal of Symbolic Logic , 2007
"... Abstract. Basically, the connection of two many-sorted theories is obtained by taking their disjoint union, and then connecting the two parts through connection functions that must behave like
homomorphisms on the shared signature. We determine conditions under which decidability of the validity of ..."
Cited by 19 (5 self)
Add to MetaCart
Abstract. Basically, the connection of two many-sorted theories is obtained by taking their disjoint union, and then connecting the two parts through connection functions that must behave like
homomorphisms on the shared signature. We determine conditions under which decidability of the validity of universal formulae in the component theories transfers to their connection. In addition, we
consider variants of the basic connection scheme. 1
, 2003
"... If there exist ecient procedures (canonizers) for reducing terms of two rst-order theories to canonical form, can one use them to construct such a procedure for terms of the disjoint union of
the two theories? We prove this is possible whenever the original theories are convex. As an application, w ..."
Cited by 15 (4 self)
Add to MetaCart
If there exist ecient procedures (canonizers) for reducing terms of two rst-order theories to canonical form, can one use them to construct such a procedure for terms of the disjoint union of the two
theories? We prove this is possible whenever the original theories are convex. As an application, we prove that algorithms for solving equations in the two theories (solvers) cannot be combined in a
similar fashion. These results are relevant to the widely used Shostak's method for combining decision procedures for theories. They provide the rst rigorous answers to the questions about the
possibility of directly combining canonizers and solvers.
- In David A. Basin and Michaël Rusinowitch, editors, IJCAR ’04 , 2004
"... Previous results for combining decision procedures for the word problem in the non-disjoint case do not apply to equational theories induced by modal logics---whose combination is not disjoint
since they share the theory of Boolean algebras. Conversely, decidability results for the fusion of mod ..."
Cited by 13 (7 self)
Add to MetaCart
Previous results for combining decision procedures for the word problem in the non-disjoint case do not apply to equational theories induced by modal logics---whose combination is not disjoint since
they share the theory of Boolean algebras. Conversely, decidability results for the fusion of modal logics are strongly tailored towards the special theories at hand, and thus do not generalize to
other equational theories.
- In Proc. 6th Int. Symp. Frontiers of Combining Systems (FroCos 2007), LNCS 4720 , 2007
"... Abstract. We present an overview of results on hierarchical and modular reasoning in complex theories. We show that for a special type of extensions of a base theory, which we call local,
hierarchic reasoning is possible (i.e. proof tasks in the extension can be hierarchically reduced to proof tasks ..."
Cited by 11 (7 self)
Add to MetaCart
Abstract. We present an overview of results on hierarchical and modular reasoning in complex theories. We show that for a special type of extensions of a base theory, which we call local, hierarchic
reasoning is possible (i.e. proof tasks in the extension can be hierarchically reduced to proof tasks w.r.t. the base theory). Many theories important for computer science or mathematics fall into
this class (typical examples are theories of data structures, theories of free or monotone functions, but also functions occurring in mathematical analysis). In fact, it is often necessary to
consider complex extensions, in which various types of functions or data structures need to be taken into account at the same time. We show how such local theory extensions can be identified and
under which conditions locality is preserved when combining theories, and we investigate possibilities of efficient modular reasoning in such theory combinations. We present several examples of
application domains where local theories and local theory extensions occur in a natural way. We show, in particular, that various phenomena analyzed in the verification literature can be explained in
a unified way using the notion of locality. 1
- In Proc. 5th FroCoS , 2005
"... functions ..."
, 2006
"... We define a general notion of a fragment within higher-order type theory; a procedure for constraint satisfiability in combined fragments is outlined, following Nelson-Oppen schema. The
procedure is in general only sound, but it becomes terminating and complete when the shared fragment enjoys suitab ..."
Cited by 6 (3 self)
Add to MetaCart
We define a general notion of a fragment within higher-order type theory; a procedure for constraint satisfiability in combined fragments is outlined, following Nelson-Oppen schema. The procedure is
in general only sound, but it becomes terminating and complete when the shared fragment enjoys suitable noetherianity conditions and admits an abstract version of a ‘Keisler-Shelah like’ isomorphism
theorem. We show that this general decidability transfer result covers recent work on combination in first-order theories as well as in various intensional logics such as description, modal, and
temporal logics.
, 2001
"... this paper. On the one hand, dening a semantics for the combined system may depend on methods and results from formal logic and universal algebra. On the other hand, an ecient combination of the
actual constraint solvers often requires the possibility of communication and cooperation between the sol ..."
Cited by 5 (0 self)
Add to MetaCart
this paper. On the one hand, dening a semantics for the combined system may depend on methods and results from formal logic and universal algebra. On the other hand, an ecient combination of the
actual constraint solvers often requires the possibility of communication and cooperation between the solvers. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=979263","timestamp":"2014-04-16T21:40:56Z","content_type":null,"content_length":"36357","record_id":"<urn:uuid:73dac075-6fab-457e-ba4f-d77edc7d91ce>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00027-ip-10-147-4-33.ec2.internal.warc.gz"} |
what is the radius of convergence of the power series Σan xⁿ
December 27th 2011, 06:18 AM #1
suppose Σan xⁿ be a power series and an= the number of divisors of n⁵⁰.
Then what will be the radius of convergence of this power series?
Thanks in advance . regards.
Re: what is the radius of convergence of the power series Σan xⁿ
December 27th 2011, 07:15 AM #2 | {"url":"http://mathhelpforum.com/differential-geometry/194698-what-radius-convergence-power-series-x.html","timestamp":"2014-04-18T05:32:48Z","content_type":null,"content_length":"34587","record_id":"<urn:uuid:7198da6c-5a29-4096-8d07-b3127bbfb6cf>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00261-ip-10-147-4-33.ec2.internal.warc.gz"} |
easy integration by parts- cant figure out what im doing wrong
November 12th 2009, 11:07 AM
easy integration by parts- cant figure out what im doing wrong
ok its
(integral) sin x cos x dx
i did u= sinx dv= cosx du=cos x v=sinx
i then plug in and get sin^2 x - (integral)(sin x cos x)
that second term is exactly the problem we started with, so how does that help me?--it seems like i will just be doing an endless integration by continuing when that is clearly not what the
answer dictates...
November 12th 2009, 11:17 AM
ok its
(integral) sin x cos x dx
i did u= sinx dv= cosx du=cos x v=sinx
i then plug in and get sin^2 x - (integral)(sin x cos x)
that second term is exactly the problem we started with, so how does that help me?--it seems like i will just be doing an endless integration by continuing when that is clearly not what the
answer dictates...
post deleted,
sorry i messed up. i'll put the right answer in a moment.
November 12th 2009, 11:20 AM
Hello, twostep08!
$\int \sin x\cos x\,dx$
i did: . $\begin{array}{ccccccc} u&=& \sin x& dv&=& \cos x \\ du &=& \cos x & v &=& \sin x\end{array}$
i then plug in and get: . $\sin^2\!x - \int \sin x\cos x\,dx$
That second term is exactly the problem we started with,
so how does that help me?
We have: . $I \;=\; \int\sin x\cos x\,dx$
$\text{Using By-parts, we get: }\;I =\;\sin^2\!x - \underbrace{\int\sin x\cos x\,dx}_{\text{This is }I} +\: c$
Hence: . $I \;=\;\sin^2\!x - I + c \quad\Rightarrow\quad 2I \;=\;\sin^2\!x + c \quad\Rightarrow\quad I \;=\;\tfrac{1}{2}\sin^2\!x + C$
Therefore: . $\int\sin x\cos x\,dx \;=\;\tfrac{1}{2}\sin^2\!x + C$
November 12th 2009, 11:25 AM
ok its
(integral) sin x cos x dx
i did u= sinx dv= cosx du=cos x v=sinx
i then plug in and get sin^2 x - (integral)(sin x cos x)
that second term is exactly the problem we started with, so how does that help me?--it seems like i will just be doing an endless integration by continuing when that is clearly not what the
answer dictates...
I'd use a trig sub for this one - no need for integration by parts
$sin(2x) = 2sin(x)cos(x) \: \rightarrow \: sin(x)cos(x) = \frac{1}{2}sin(2x)$
$\frac{1}{2} \int sin(2x)\,dx = -\frac{1}{4}cos(2x)+C$
My answer is the same as Soroban's above. First let the constant of integration be $k$ to make it different from Soroban
$cos(2x) = 1-2sin^2(x)$
$-\frac{1}{4} + \frac{1}{2}sin^2(x) + k$
As $-\frac{1}{4}$ is constant we can define the following $C = k - \frac{1}{4}$
Therefore $-\frac{1}{4}cos(2x)+k = \frac{1}{2}sin^2(x) + C$
November 12th 2009, 11:41 AM
It's worth knowing, though (edit: and this is just what Soroban is also pointing out), that when the 'endlessness' thing occurs in integration by parts, you can often break out of it like this...
... i.e. by solving the top row for I.
... is the basic pattern for integration by parts, and...
... is the product rule, straight continuous lines differentiating downwards (integrating up) with respect to x, and choosing legs crossed or un-crossed is equivalent to all the labelling with u
and v and du and dv.
Don't integrate - balloontegrate!
Balloon Calculus: Gallery
Balloon Calculus Drawing with LaTeX and Asymptote! | {"url":"http://mathhelpforum.com/calculus/114151-easy-integration-parts-cant-figure-out-what-im-doing-wrong-print.html","timestamp":"2014-04-19T12:44:42Z","content_type":null,"content_length":"13138","record_id":"<urn:uuid:44bb637b-5a18-4964-a62c-517bd75ffa8a>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00118-ip-10-147-4-33.ec2.internal.warc.gz"} |
On supervised learning of Bayesian network parameters
- Machine Learning 2006 , 2002
"... Classification problems have a long history in the machine learning literature. One of the simplest, and yet most consistently well performing set of classifiers is the Nave Bayes models.
However, an inherent problem with these classifiers is the assumption that all attributes used to describe an in ..."
Cited by 11 (1 self)
Add to MetaCart
Classification problems have a long history in the machine learning literature. One of the simplest, and yet most consistently well performing set of classifiers is the Nave Bayes models. However, an
inherent problem with these classifiers is the assumption that all attributes used to describe an instance are conditionally independent given the class of that instance. When this assumption is
violated (which is often the case in practice) it can reduce classification accuracy due to "information double-counting" and interaction omission.
, 2002
"... this paper we show, how this supervised learning problem can be solved e#ciently. We introduce an alternative parametrization in which the supervised likelihood becomes concave. From this result
it follows that there can be at most one maximum, easily found by local optimization methods. We present ..."
Cited by 2 (1 self)
Add to MetaCart
this paper we show, how this supervised learning problem can be solved e#ciently. We introduce an alternative parametrization in which the supervised likelihood becomes concave. From this result it
follows that there can be at most one maximum, easily found by local optimization methods. We present test results that show this is feasible and highly beneficial | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1289583","timestamp":"2014-04-18T06:15:30Z","content_type":null,"content_length":"14405","record_id":"<urn:uuid:3b68db8d-d11f-4066-9d7b-e49209e160e3>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00127-ip-10-147-4-33.ec2.internal.warc.gz"} |
System Identification: Theory for the User
Results 11 - 20 of 952
- In IEEE International Conference on Image Processing , 1996
"... Temporal textures are textures with motion. Examples include wavy water, rising steam and fire. We model image sequences of temporal textures using the spatio-temporal autoregressive model
(STAR). This model expresses each pixel as a linear combination of surrounding pixels lagged both in space and ..."
Cited by 118 (1 self)
Add to MetaCart
Temporal textures are textures with motion. Examples include wavy water, rising steam and fire. We model image sequences of temporal textures using the spatio-temporal autoregressive model (STAR).
This model expresses each pixel as a linear combination of surrounding pixels lagged both in space and in time. The model provides a base for both recognition and synthesis. We show how the least
squares method can accurately estimate model parameters for large, causal neighborhoods with more than 1000 parameters. Synthesis results show that the model can adequately capture the spatial and
temporal characteristics of many temporal textures. A 95% recognition rate is achieved for a 135 element database with 15 texture classes. 1.
- International Journal of Computer Vision , 2002
"... What does it mean for a deforming object to be "moving" (see Fig. 1)? How can we separate the overall motion (a finite-dimensional group action) from the more general deformation (a di#
eomorphism)? In this paper we propose a definition of motion for a deforming object and introduce a notion of "shap ..."
Cited by 105 (16 self)
Add to MetaCart
What does it mean for a deforming object to be "moving" (see Fig. 1)? How can we separate the overall motion (a finite-dimensional group action) from the more general deformation (a di#eomorphism)?
In this paper we propose a definition of motion for a deforming object and introduce a notion of "shape average" as the entity that separates the motion from the deformation. Our definition allows us
to derive novel and e#cient algorithms to register non-equivalent shapes using region-based methods, and to simultaneously approximate and register structures in grey-scale images. We also extend the
notion of shape average to that of a "moving average" in order to track moving and deforming objects through time.
, 1994
"... Recently a great deal of attention has been given to numerical algorithms for subspace state space system identification (N4SID). In this paper, we derive two new N4SID algorithms to identify
mixed deterministic-stochastic systems. Both algorithms determine state sequences through the projection of ..."
Cited by 103 (12 self)
Add to MetaCart
Recently a great deal of attention has been given to numerical algorithms for subspace state space system identification (N4SID). In this paper, we derive two new N4SID algorithms to identify mixed
deterministic-stochastic systems. Both algorithms determine state sequences through the projection of input and output data. These state sequences are shown to be outputs of non-steady state Kalman
filter banks. From these it is easy to determine the state space system matrices. The N4SID algorithms are always convergent (non-iterative) and numerically stable since they only make use of QR and
Singular Value Decompositions. Both N4SID algorithms are similar, but the second one trades off accuracy for simplicity. These new algorithms are compared with existing subspace algorithms in theory
and in practice. Key words : Subspace identification, non-steady state Kalman filter, Riccati difference equations, QR and Singular Value Decomposition 1 Introduction The greater part of the systems
, 2001
"... Dynamic textures are sequences of images that exhibit some form of temporal stationarity, such as waves, steam, and foliage. We pose the problem of recognizing and classifying dynamic textures
in the space of dynamical systems where each dynamic texture is uniquely represented. Since the space is no ..."
Cited by 87 (6 self)
Add to MetaCart
Dynamic textures are sequences of images that exhibit some form of temporal stationarity, such as waves, steam, and foliage. We pose the problem of recognizing and classifying dynamic textures in the
space of dynamical systems where each dynamic texture is uniquely represented. Since the space is non-linear, a distance between models must be defined. We examine three different distances in the
space of autoregressive models and assess their power. 1.
- Proc. IEEE , 2002
"... Mathematical and computational modeling of genetic regulatory networks promises to uncover the fundamental principles governing biological systems in an integrarive and holistic manner. It also
paves the way toward the development of systematic approaches for effective therapeutic intervention in di ..."
Cited by 84 (17 self)
Add to MetaCart
Mathematical and computational modeling of genetic regulatory networks promises to uncover the fundamental principles governing biological systems in an integrarive and holistic manner. It also paves
the way toward the development of systematic approaches for effective therapeutic intervention in disease. The central theme in this paper is the Boolean formalism as a building block for modeling
complex, large-scale, and dynamical networks of genetic interactions. We discuss the goals of modeling genetic networks as well as the data requirements. The Boolean formalism is justified from
several points of view. We then introduce Boolean networks and discuss their relationships to nonlinear digital filters. The role of Boolean networks in understanding cell differentiation and
cellular functional states is discussed. The inference of Boolean networks from real gene expression data is considered from the viewpoints of computational learning theory and nonlinear signal
processing, touching on computational complexity of learning and robustness. Then, a discussion of the need to handle uncertainty in a probabilistic framework is presented, leading to an introduction
of probabilistic Boolean networks and their relationships to Markov chains. Methods for quantifying the influence of genes on other genes are presented. The general question of the potential effect
of individual genes on the global dynamical network behavior is considered using stochastic perturbation analysis. This discussion then leads into the problem of target identification for therapeutic
intervention via the development of several computational tools based on first-passage times in Markov chains. Examples from biology are presented throughout the paper. 1
- ARTIFICIAL INTELLIGENCE (ACCEPTED FOR PUBLICATION) , 1997
"... Autonomous robots must be able to learn and maintain models of their environments. Research on mobile robot navigation has produced two major paradigms for mapping indoor environments:
grid-based and topological. While grid-based methods produce accurate metric maps, their complexity often prohibits ..."
Cited by 83 (12 self)
Add to MetaCart
Autonomous robots must be able to learn and maintain models of their environments. Research on mobile robot navigation has produced two major paradigms for mapping indoor environments: grid-based and
topological. While grid-based methods produce accurate metric maps, their complexity often prohibits efficient planning and problem solving in large-scale indoor environments. Topological maps, on
the other hand, can be used much more efficiently, yet accurate and consistent topological maps are often difficult to learn and maintain in large-scale environments, particularly if momentary sensor
data is highly ambiguous. This paper describes an approach that integrates both paradigms: grid-based and topological. Grid-based maps are learned using artificial neural networks and naive Bayesian
integration. Topological maps are generated on top of the grid-based maps, by partitioning the latter into coherent regions. By combining both paradigms, the approach presented here gains advantages
from both worlds: accuracy/consistency and efficiency. The paper gives results for autonomous exploration, mapping and operation of a mobile robot in populated multi-room environments.
- Proc. IEEE , 1998
"... this paper is to review developments in blind channel identification and estimation within the estimation theoretical framework. We have paid special attention to the issue of identifiability,
which is at the center of all blind channel estimation problems. Various existing algorithms are classified ..."
Cited by 79 (2 self)
Add to MetaCart
this paper is to review developments in blind channel identification and estimation within the estimation theoretical framework. We have paid special attention to the issue of identifiability, which
is at the center of all blind channel estimation problems. Various existing algorithms are classified into the moment-based and the maximum likelihood (ML) methods. We further divide these algorithms
based on the modeling of the input signal. If input is assumed to be random with prescribed statistics (or distributions), the corresponding blind channel estimation schemes are considered to be
statistical. On the other hand, if the source does not have a statistical description, or although the source is random but the statistical properties of the source are not exploited, the
corresponding estimation algorithms are classified as deterministic. Fig. 2 shows a map for different classes of algorithms and the organization of the paper.
, 1993
"... In this paper, we derive a new subspace algorithm to consistently identify stochastic state space models from given output data without forming the covariance matrix and using only semi-infinite
block Hankel matrices. The algorithm is based on the concept of principal angles and directions. We descr ..."
Cited by 77 (14 self)
Add to MetaCart
In this paper, we derive a new subspace algorithm to consistently identify stochastic state space models from given output data without forming the covariance matrix and using only semi-infinite
block Hankel matrices. The algorithm is based on the concept of principal angles and directions. We describe how they can be calculated with QR and Quotient Singular Value Decomposition. We also
provide an interpretation of the principal directions as states of a non-steady state Kalman filter bank. Key Words Principal angles and directions, QR and quotient singular value decomposition,
Kalman filter, Riccati difference equation, stochastic balancing, stochastic realization. 1 Introduction Let y k 2 ! l ; k = 0; 1; : : : ; K be a data sequence that is generated by the following
system : x k+1 = Ax k + w k (1) y k = Cx k + v k (2) where x k 2 ! n is a state vector. The vector sequence w k 2 ! n is a process noise while the vector sequence v k 2 ! l is a measurement noise.
They are bo...
- In Plenary talk at the proceedings of the 17th IFAC World Congress, Seoul, South Korea , 2008
"... System identification is the art and science of building mathematical models of dynamic systems from observed input-output data. It can be seen as the interface between the real world of
applications and the mathematical world of control theory and model abstractions. As such, it is an ubiquitous ne ..."
Cited by 77 (2 self)
Add to MetaCart
System identification is the art and science of building mathematical models of dynamic systems from observed input-output data. It can be seen as the interface between the real world of applications
and the mathematical world of control theory and model abstractions. As such, it is an ubiquitous necessity for successful applications. System identification is a very large topic, with different
techniques that depend on the character of the models to be estimated: linear, nonlinear, hybrid, nonparametric etc. At the same time, the area can be characterized by a small number of leading
principles, e.g. to look for sustainable descriptions by proper decisions in the triangle of model complexity, information contents in the data, and effective validation. The area has many facets and
there are many approaches and methods. A tutorial or a survey in a few pages is not quite possible. Instead, this presentation aims at giving an overview of the “science ” side, i.e. basic principles
and results and at pointing to open problem areas in the practical, “art”, side of how to approach and solve a real problem. 1.
- IEEE Transactions on Pattern Analysis and Machine Intelligence , 2000
"... AbstractÐStandard, exact techniques based on likelihood maximization are available for learning Auto-Regressive Process models of dynamical processes. The uncertainty of observations obtained
from real sensors means that dynamics can be observed only approximately. Learning can still be achieved via ..."
Cited by 76 (1 self)
Add to MetaCart
AbstractÐStandard, exact techniques based on likelihood maximization are available for learning Auto-Regressive Process models of dynamical processes. The uncertainty of observations obtained from
real sensors means that dynamics can be observed only approximately. Learning can still be achieved via ªEM-KºÐExpectation-Maximization (EM) based on Kalman Filtering. This cannot handle more complex
dynamics, however, involving multiple classes of motion. A problem arises also in the case of dynamical processes observed visually: background clutter arising for example, in camouflage, produces
non-Gaussian observation noise. Even with a single dynamical class, non-Gaussian observations put the learning problem beyond the scope of EM-K. For those cases, we show here how ªEM-CºÐbased on the
CONDENSATION algorithm which propagates random ªparticle-sets,º can solve the learning problem. Here, learning in clutter is studied experimentally using visual observations of a hand moving over a
desktop. The resulting learned dynamical model is shown to have considerable predictive value: When used as a prior for estimation of motion, the burden of computation in visual observation is
significantly reduced. Multiclass dynamics are studied via visually observed juggling; plausible dynamical models have been found to emerge from the learning process, and accurate classification of
motion has resulted. In practice, EM-C learning is computationally burdensome and the paper concludes with some discussion of computational complexity. Index TermsÐComputer vision, learning dynamics,
Auto-Regressive Process, Expectation Maximization. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=8755&sort=cite&start=10","timestamp":"2014-04-19T01:28:57Z","content_type":null,"content_length":"41018","record_id":"<urn:uuid:0a1e3a53-0780-4ef8-94e6-4b0b69de3246>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00578-ip-10-147-4-33.ec2.internal.warc.gz"} |
AP Physics Chapter 22 Study Guide
Current and Resistance
Model of Current
Charge Carriers in Metals are Electrons
Creating Current
Conservation of Current
Law of Conservation of Current – The current is the same at all points in a current carrying wire.
Defining and Describing Current
Definition of Current
Direction of Conventional Current is the direction the positive charge carriors would travel.
Conservation of Current at a Junction
Kirchoff's Junction Law – the sum of the currents coming into a junction equals the sum of the currents leaving a junction.
Example 1
The disk drive in a portable CD player is connected to a battery that supplies it with a current of 0.22 A. How many electrons pass through the drive in 4.5 s?
Example 2
The currents through several segments of a wire object are shown in the figure below. What are the magnitudes and directions of the currents
Batteries and EMF
Connecting Potential and Current
Factors that affect the current in a wire
Resistance (R) – Opposition of a circuit to the flow of electric current.
Example 3
A potential difference of 24 V is applied to a 150
Where L is length of the wire, A is the cross-section area, and
Example 4
Wire 1 has a length and a circular cross section of diameter D. Wire 2 is constructed from the same material as wire 1 and has the same shape, but its length is 2L, and its diameter is 2D. Is the
resistance of wire 2 (a) the same as that of wire 1, (b) twice that of wire 1, or (c) half that of wire 1?
Example 5
A current of 1.82 A flows through a copper wire 1.75 m long and 1.10 mm in diameter. Find the potential difference between the ends of the wire.
Resistors and Ohm's Law
Ohmic and Nonohmic Materials
Energy and Power
In general, energy is required to cause an electric current to flow through a circuit. The rate at which the energy must be supplied is the power.
Example 6
A handheld electric fan operates on a 3.00 V battery. If the power generated by the fan is 2.24 W, what is the current supplied by the battery?
Example 7
A battery that produces a potential difference V is connected to a 5 W light bulb. Later, the 5 W light bulb is replaced with a 10 W light bulb. (a) In which case does the battery supply the greatest
current? (b) Which light bulb has the greatest resistance?
Example 8
A battery with an emf of 12 V is connected to a 545
Energy Usage
Example 9
A holiday goose is cooked in the kitchen oven for 4.00 h. Assume that the stove draws a current of 20.0 A, operates at a voltage of 220.0 V, and uses electrical energy that costs $0.048 per kWh. How
much does it cost to cook your goose? | {"url":"http://whs.wsd.wednet.edu/Faculty/Busse/MathHomePage/busseclasses/apphysics/studyguides/chapter22_2008/Chapter22StudyGuide2008.html","timestamp":"2014-04-19T04:33:11Z","content_type":null,"content_length":"15319","record_id":"<urn:uuid:6f843204-1c89-4398-bbd9-3eb004e55c98>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00373-ip-10-147-4-33.ec2.internal.warc.gz"} |
Next: Properties Up: Functions, continuity and differentiability Previous: Functions, continuity and differentiability
The study of differentiable functions is the study of functions that mimic the behaviour of polynomials ``approximately''. To begin with we must formally define the notion of approximation.
Exercise 34 For any real number 0 < x < 1 show that x^n is a decreasing sequence with limit 0.
In particular, we see that a polynomial that vanishes to order (n + 1) at 0 satisfies the following condition on functions of one variable.
Definition 1
A function
) of one variable is said to be in
) if for any
| g(x)| < x^n| for all x so that | x| <
An alternative notion is
Definition 2
A function
) one variable is said to be in
) is there is a
so that
| g(x)| < C| x^n| for all x so that | x| <
Clearly, any polynomial that vanishes to order n is O(x^n). Further, it is clear that an function g(x) that is O(x^n) is o(x^n - 1) and any function that is o(x^n) is O(x^n).
We can extend these notions to many variables as well. A function g(x[1],..., x[n]) of n variables is said to be in o(x^n) (respectively O(x^n)) if for all lines (x[1],..., x[n]) = (xc[1],..., xc[n])
through the origin the restricted function f (x) = g(xc[1],..., xc[n]) is in o(x^n) (respectively O(x^n)). We can further extend this to define o((x - b)^n) and O((x - b)^n) where b = (b[1],..., b
[n]) is some point, as a way of approximating functions near this point.
We say that g and f agree upto o((x - b)^n) (or f approximates g upto o((x - b)^n)) if f - g is in o((x - b)^n). Note in particular, that f and g take the same value at b.
A function is differentiable n times at the point c if it is approximated upto o((x - b)^n) by a polynomial (of degree n). Clearly, a polynomial of any degree is differentiable by the results of the
previous section. In the one variable case we write this as follows
f (x) = a[0] + a[1]x + ^ ... + a[n]x^n + o((x - c)^n)
Exercise 35 Show that for any two functions f and g in o(x^n) and a function h which is differentiable n times at the origin, the function h^ . f + g is in o(x^n).
Exercise 36 Show that the numbers a[k] are uniquely determined by the function f.
Now the number a[1] depends on f and the point c. Now suppose that f is differentiable (1 times) at all points c so that it can be written as above near every point c. Then we can define the derived
function f' by letting f'(c) = a[1] for each point c; the function f' is also called the derivative of f. Now it clear that if f is the function given by a polynomial P then f' is dP/dx. Thus we also
use the notation df /dx for f'. We have the derivation property as well.
Exercise 37
are differentiable then so is
hf + g) = f + h
However, unlike the condition of vanishing to order n at c, the condition o((x - c)^n) is not very well behaved.
Exercise 38 Show that f (x) = x^2sin(1/x) is o(x) but the derivative of f' is not o(x^0).
A function f (x) is called continuous at a point c if f (x) - f (c) is o(x - c) (i. e. it is differentiable 0 times!). It is called continuous it it has this property at all points. Thus we would
like to study functions f which are differentiable and in addition the derivative f' is continuous. Such functions are provided by the fundamental theorem of calculus.
Next: Properties Up: Functions, continuity and differentiability Previous: Functions, continuity and differentiability Kapil H. Paranjape 2001-01-20 | {"url":"http://www.imsc.res.in/~kapil/geometry/prereq1/node9.html","timestamp":"2014-04-21T00:11:12Z","content_type":null,"content_length":"10967","record_id":"<urn:uuid:a153d762-f584-41c7-8972-a30be167e5e8>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00162-ip-10-147-4-33.ec2.internal.warc.gz"} |
, a round or spherical body, more usually called a sphere, bounded by one uniform convex surface, every point of which is equally distant from a point within called its centre. Euclid defines the
Globe, or sphere, to be a solid figure described by the revolution of a semi-circle about its diameter, which remains unmoved. Also, its axis is the fixed line or diameter about which the semi-circle
revolves; and its centre is the same with that of the revolving semicircle, a diameter of it being any right line that passes through the centre, and terminated both ways by the superficies of the
sphere. Elem. 11. def. 14, 15, 16, 17.
Euclid, at the end of the 12th book, shews that spheres are to one another in the triplicate ratio of their diameters, that is, their solidities are to one another as the cubes of their diameters.
And Archimedes determines the real magnitudes and measures of the surfaces and solidities of spheres and their segments, in his treatise de Sphæra et Cylindro: viz, 1, That the superficies of any
Globe is equal to 4 times a great circle of it.—2, That any sphere is equal to 2/3 of its circumscribing cylinder, or of the cylinder of the same diameter and altitude.—3, That the curve surface of
the segment of a globe, is equal to the circle whose radius is the line drawn from the vertex of the segment to the circumference of the base.—4, That the content of a solid sector of the Globe, is
equal to a cone whose altitude is the radius of the Globe, and its base equal to the curve superficies or base of the sector. With many other properties. And from hence are easily deduced these
practical rules for the surfaces and solidities of Globes and their segments; viz,
1. For the Surface of a Globe, multiply the square of the diameter by 3.1416; or multiply the diameter by the circumference.
2. For the Solidity of a Globe, multiply the cube of the diameter by .5236 (viz 1/<*> of 3.1416); or multiply the surface by 1/6 of the diameter.
3. For the Sursace of a Segment, multiply the diameter of the Globe by the altitude of the segment and the product again by 3.1416.
4. For the Sol<*>dity of a Segment, multiply the square of the diameter of the Globe by the difference between 3 times that diameter and 2 times the altitude of the segment, and the product again by
.5236, or 1/6 of 3.1416.
Hence, if d denote the diameter of the Globe, c the circumference, a the altitude of any segment, and p = 3.1416; then
The surface. The solidity
In the Globe pd^2 = cd 1/<*> pd^3
In the Segt. pad 1/6 pa^2 × ―(3d - 2a)
See the art. Sphere, and my Mensuration, p. 197 &c, 2d edit.|
The Globe, or Terraque<*>s Globe, is the body or mass of the earth and water together, which is nearly globular.
, or Artificial Globe, is more particularly used for a Globe of metal, plaister, paper, pasteboard, &c, on the surface of which is drawn a map, or representation of either the heavens or the earth,
with the several circles conceived upon them. And hence
Globes are of two kinds, Terrestrial, and Celestial; which are of considerable use in geography and astronomy, by serving to give a lively representation of their principal objects, and for
performing and illustrating many of their operations in a manner easy to be perceived by the senses, and so as to be conceived even without any knowledge of the mathematical grounds of those
Description of the Globes.
The fundamental parts that are common to both Globes, are an axis, representing the axis of the world, passing through the two poles of a spherical shell, representing those of the world, which shell
makes the body of the Globe, upon the external surface of which is drawn the representation of the whole surface of the earth, sea, rivers, islands, &c, for the Terrestrial Globe, and the stars and
constellations of the heavens, for the Celestial one; besides the equinoctial and ecliptic lines, the zodiac, the two tropics and polar circles, and a number of meridian lines. There is next a brazen
meridian, being a strong circle of brass, circumscribing the Globe, at a small distance from it quite round, in which the globe is hung by its two poles, upon which it turns round within this circle,
which is divided into 4 times 90 degrees, beginning at the equator on both sides, and ending with 90 at the two poles. There are also two small hour circles, of brass, divided into twice 12 hours,
and fitted on the meridian round the poles, which carry an index pointing to the hour. The whole is set in a wooden ring, placed parallel to, and representing the horizon, in which the Globe slides
by the brass meridian, elevating or depressing the pole according to any proposed latitude. There is also a thin slip of brass, called a Quadrant of Altitude, made to fit on occasionally upon the
brass meridian, at the highest or vertical point, to measure the altitude of any thing above the horizon. A magnetic compass is sometimes set underneath. See the figure of the Globes so mounted, at
fig. 1, plate xii.
Such is the plain and simple construction of the artificial Globe, whether celestial or terrestrial, as adapted to the time only for which it is made. But as the angle formed by the equator and
ecliptic, as well as their point of intersection, is always changing; to remedy these inconveniences, several contrivances have been made, so as to adapt the same Globes to any other time, either
past or to come; as well as other contrivances to answer particular purposes.
Thus, Mr. Senex, a celebrated maker of Globes, had a contrivance which, by means of a nut and serew, caused the pole of the equator to revolve about the pole of the ecliptic, by any quantity
answering to the precession of the equinoxes, since the time for which the Globe was made. Philos. Trans. number 447, or Abr. vol. 8, p. 217, also Philos. Trans. vol. 46, p. 290.
Mr. Joseph Harris, late assay-master of the Mint, made some contrivances to shew the effects of the <*>arth's motions. He fixed two horary circles under the brass meridian, to the axis, one at each
pole, so as to turn round with the Globe, and that meridian served as an index to cut the horary divisions. The Globe in this state serves equally for resolving problems in both north and south
latitudes, as also in places near the equator; whereas, in the common construction, the axis and horary circle prevent the brass meridian from being moveable quite round in the horizon. This Globe is
also adapted for shewing how the vicissitudes of day and night, and the alteration of their lengths, are really occasioned by the motion of the earth: for this purpose, he divides the brass meridian,
at one of the poles, into months and days, according to the sun's declination, reckoning from the pole. Therefore, by bringing the day of the month to the horizon, and rectifying the Globe according
to the time of the day, the horizon will represent the circle separating light and darkness, and the upper half of the Globe the illuminated hemisphere, the sun being in the zenith. Mr. Harris also
gives an account of a cheap machine for shewing how the annual motion of the earth in its orbit causes the change of the sun's declination, without the great expence of an orrery. Philos. Trans.
number 456, or Abr. vol. 8, p. 352.
The late Mr. George Adams made also some useful improvements in the construction of the Globes. Besides what is usual, his Globes have a thin brass semicircle moveable about the poles, with a small
thin sliding circle upon it. On the terrestrial Globe, the former of these is a moveable meridian, and the latter is the visible horizon of any particular place to which it is set. But on the
celestial Globe, the semi-circle is a moveable circle of declination, and its small annexed circle an artisicial sun or planet. Each Globe has a brass wire circle, placed at the limits of the
twilight. The terrestrial Globe has many additional circles, as well as the rhumb-lines, for resolving all the necessary geographical and nautical problems: and on the celestial Globe are drawn, on
each side of the ecliptic, 8 parallel circles, at the distance of one degree from each other, including the zodiac; which are crossed at rightangles by segments of great circles at every 5th degree
of the ecliptic, for the more readily noting the place of the moon or of any planet upon the Globe. On the strong brass circle of the terrestrial Globe, and about 23 1/2 degrees on each side of the
north pole, the days of each month are laid down according to the sun's declination: and this brass circle is so contrived, that the Globe may be placed with the north and south poles in the plane of
the horizon, and with the south pole elevated above it. The equator, on the surface of either Globe, serves the purpose of the horary circle, by means of a semi-circular wire placed in the plane of
the equator, carrying two indices, one of which is occasionally to be used to point out the time. For a farther account of these Globes, with the method of using them, see Mr. Adams's treatise on
their construction and use.
There are also what are called Patent Globes, made by Mr. Neale; by means of which he resolves several astronomical problems, which do not admit of solution by the common Globes.|
Mr. Ferguson likewise made several improvements of the Globes, particularly one for constructing dials, and another called a planetary Globe. See Philos. Trans. vol. 44, p. 535, and Ferguson's
Astron. p. 291, and 292.
Lastly, in the Philos. Trans. for 1789, vol. 79, p. 1, Mr. Smeaton has proposed some improvements of the celestial Globe, especially with respect to the quadrant of altitude, for the resolution of
problems relating to the azimuth and altitude. The difficulty, he observes, that has occurred in fixing a semicircle, so as to have a centre in the zenith and nadir points of the Globe, at the same
time that the meridian is left at liberty to raise the pole to its desired elevation, I suppose, has induced the Globe-makers to be contented with the strip of thin flexible brass, called the
quadrant of altitude; and it is well known how imperfectly it performs its office. The improvement I have attempted, is in the application of a quadrant of altitude of a more solid construction;
which being affixed to a brass socket of some length, and this ground, and made to turn upon an upright steel spindle, fixed in the zenith, steadily directs the quadrant, or rather arc, of altitude
to its true azimuth, without being at liberty to deviate from a vertical circle to the right hand or left: by which means the azimuth and altitude are given with the same exactness as the measure of
any other of the great circles. For a more particular description of this improvement, illustrated with figures, see the place above quoted.
The use of the Terrestrial Globe.
Prob. I. To find the latitude and longitude of any place. —Bring the place to the graduated side of the first meridian: then the degree of the meridian it cuts is the latitude sought; and the degree
of the equator then under the meridian is the longitude.
II. To find a place, having a given latitude and longitude.—Find the degree of longitude on the equator, and bring it to the brass meridian; then find the degree of latitude on the meridian, either
north or south of the equator, as the given latitude is north or south; then the point of the Globe just under that degree of latitude is the place required.
III. To find all the places on the Globe that have the same latitude, and the same longitude, or hour, with a given place, as suppose London.—Bring the given place London to the meridian, and observe
what places are just under the edge of it, from north to south; and all those places have the same longitude and hour with it. Then turn the Globe quite round; and all those places which pass just
under the given degree of latitude on the meridian, have the same latitude with the given place.
IV. To find the Antœci, Periœci and Antipodes, of any given place, suppose London.—Bring the given place London to the meridian, then count 51 1/2 the same degree of latitude southward, or towards
the other pole, and the point thus arrived at will be the Antœci, or where the hour of the day or night is always the same at both places at the same time, and where the seasons and lengths of days
and nights are also equal, but at half a year distance from each other, because their sea- sons are opposite or contrary. London being still under the meridian, set the hour index to 12 at noon, or
pointing towards London; then turn the Globe just half round, or till the index point to the opposite hour, or 12 at night; and the place that comes under the same degree of the meridian where London
was, shews where the Periœci dwell, or those people that have the same seasons and at the same time as London, as also the same length of days and nights &c at that time, but only their time or hour
is just opposite, or 12 hours distant, being day with one when night with the other, &c. Lastly, as the Globe stands, count down by the meridian the same degree of latitude south, and that will give
the place of the Antipodes of London, being diametrically under or opposite to it; and so having all its times, both hours and seasons opposite, being day with the one when night with the other, and
summer with the one when winter with the other.
V. To find the Distance of two places on the Globe.— If the two places be either both on the equator, or both on the same meridian, the number of degrees in the distance between them, reduced into
miles, at the rate of 70 English miles to the degree, (or more exact 69 1/3), will give the distance nearly. But in any other situations of the two places, lay the quadrant of altitude over them, and
the degrees counted upon it, from the one place to the other, and turned into miles as above, will give the distance in this case.
VI. To find the Difference in the Time of the day at any two given places, and thence the Difference of Longitude.—Bring one of the places to the meridian, and set the hour index to 12 at noon; then
turn the Globe till the other place comes to the meridian, and the index will point out the difference of time; then by allowing 15° to every hour, or 1° to 4 minutes of time, the difference of
longitude will be known.—Or the difference of longitude may be found without the time, thus:
First bring the one place to the meridian, and note the degree of longitude on the equator cut by it; then do the same by the other place; which gives the longitudes of the two places; then
subtracting the one number of degrees from the other, gives the difference of longitude sought.
VII. The time being known at any given place, as suppose London, to find what hour it is in any other part of the world.—Bring the given place, London, to the meridian, and set the index to the given
hour; then turn the Globe till the other place come to the meridian, and look at what hour the index points, which will be the time sought.
VIII. To find the Sun's place in the ecliptic, and also on the Globe, at any given time.—Look into the calendar on the wooden horizon for the month and day of the month proposed, and immediately
opposite stands the sign and degree which the sun is in on that day. Then in the ecliptic drawn upon the Globe, look for the same sign and degree, and that will be the place of the sun required.
IX. To find at what place on the earth the sun is vertical, at a given moment of time at another place, as suppose London.—Find the sun's place on the Globe by the last problem, and turn the Globe
about till| that place come to the meridian, and note the degree of the meridian just over it. Then turn the Globe till the given place, London, come to the meridian, and set the index to the given
moment of time. Lastly, turn the Globe till the index points to 12 at noon; then the place of the earth, or Globe, which stands under the before noted degree, has the sun at that moment in the
X. To find how long the sun shines without setting, in any given place in the frigid zones.——Subtract the degrees of latitude of the given place from 90, which gives the complement of the latitude,
and count the number of this complement upon the meridian from the equator towards the pole, marking that point of the meridian; then turn the Globe round, and carefully observe what two degrees of
the ecliptic pass exactly under the point marked on the meridian. Then look for the same degrees of the ecliptic on the wooden horizon, and just opposite to them stand the months and days of the
months corresponding, and between which two days the sun never sets in that latitude.
If the beginning and end of the longest night be required, or the period of time in which the sun never rises at that place; count the same complement of latitude towards the south or farthest pole,
and then the rest of the work will be the same in all respects as above.
Note, that this solution is independent of the horizontal refraction of the sun, which raises him rather more than half a degree higher, by that means making the day so much longer, and the night the
shorter; therefore in this case, set the mark on the meridian half a degree higher up towards the north pole, than what the complement of latitude gives; then proceed with it as before, and the more
exact time and length of the longest day and night will be found.
XI. A place being given in the torrid zone, to find on what two days of the year the sun is vertical at that place.——Turn the Globe about till the given place come to the meridian, and note the
degree of the meridian it comes under. Next turn the Globe round again, and note the two points of the ecliptic passing under that degree of the meridian. Lastly, by the wooden horizon, find on what
days the sun is in those two points of the ecliptic; and on these days he will be vertical to the given place.
XII. To find those places in the torrid zone to which the sun is vertical on a given day.——Having found the sun's place in the ecliptic, as in the 8th problem, turn the Globe to bring the same point
of the ecliptic on the Globe to the meridian; then again turn the Globe round, and note all the places which pass under that point of the meridian; which will be the places sought.
After the same manner may be found what people are Ascii for any given day. And also to what place of the earth, the moon, or any other planet, is vertical on a given day; finding the place of the
planet on the globe by means of its right ascension and declination, like finding a place from its longitude and latitude given.
XIII. To rectify the Globe for the latitude of any place. ——By sliding the brass meridian in its groove, elevate the pole as far above the horizon as is equal to the latitude of the place; so for
London, raise the north pole 51 1/2 degrees above the wooden horizon: then turn the Globe on its axis till the place, as London, come to the meridian, and there set the index to 12 at noon. Then is
the place exactly on the vertex, or top point of the Globe, at 90° every way round from the wooden horizon, which represents the horizon of the place. And if the frame of the Globe be turned about
till the compass needle point to 22 1/2 degrees, or two points west of the north point (because the variation of the magnetic needle is nearly 22 1/2 degrees west), so shall the Globe then stand in
the exact position of the earth, with its axis pointing to the north pole.
XIV. To find the length of the day or night, or the sun's rising or setting, in any latitude; having the day of the month given.——Rectify the Globe for the latitude of the place; then bring the sun's
place on the globe to the meridian, and set the index to 12 at noon, or the upper 12, and then the Globe is in the proper position for noon day. Next turn the Globe about towards the east till the
sun's place come just to the wooden horizon, and the index will then point to the hour of sunrise; also turn the Globe as far to the west side, or till the sun's place come just to the horizon on the
west side, and then the index will point to the hour of sunset. These being now known, double the hour of setting will be the length of the day, and double the rising will be the length of the
night.—And thus also may the length of the longest day, or the shortest day, be found for any latitude.
XV. To find the beginning and end of Twilight on any day of the year, for any latitude.——It is twilight all the time from sunset till the sun is 18° below the horizon, and the same in the morning
from the time the sun is 18° below the horizon till the moment of his rise<*> Therefore, rectify the Globe for the latitude of the place, and for noon by setting the index to 12, and screw on the
quadrant of altitude. Then take the point of the ecliptic opposite the sun's place, and turn the Globe on its axis westward, as also the quadrant of altitude, till that point cut this quadrant in the
18th degree below the horizon, then the index will shew the time of dawning in the morning; next turn the Globe and quadrant of altitude towards the east, till the said point opposite the sun's place
meet this quadrant in the same 18th degree, and then the index will shew the time when twilight ends in the evening.
XVI. At any given day, and hour of the day, to find all those places on the Globe where the sun then rises, on sets, as also where it is noon day, where it is day light, and where it is in darkness.
——Find what place the sun is vertical to, at that time; and elevate the Globe according to the latitude of that place, and bring the place also to the meridian; in which state it will also be in the
zenith of the Globe. Then is all the upper hemisphere, above the wooden horizon, enlightened, or in day light; while all the lower one, below the horizon, is in darkness, or night: those places by
the edge of the meridian, in the upper hemisphere, have noon day, or 12 o'clock; and those by the meridian below, have it midnight: lastly, all those places by the eastern side of the horizon, have
the sun just set-| ting, and those by the western horizon have him just rising.
Hence, as in the middle of a lunar eclipse the moon is in that degree of the ecliptic opposite to the sun's place; by the present problem it may be shewn what places of the earth then see the middle
of the eclipse, and what the beginning or ending; by using the moon's place instead of the sun's place in the problem.
XVII. To find the bearing of one place from another, and their ang'e of position.——Bring the one place to the zenith, by rectifying the Globe for its latitude, and turning the Globe till that place
come to the meridian; then screw the quadrant of altitude upon the meridian at the zenith, and make it revolve till it come to the other place on the Globe; then look on the wooden horizon for the
point of the compass, or number of degrees from the fouth, where the quadrant of altitude cuts it, and that will be the bearing of the latter place from the former, or the angle of position sought.
The Use of the Celcstial Globe.
The Celestial Globe differs from the terrestrial only in this; instead of the several parts of the earth, the images of the stars and constellations are designed. The meridian circle drawn through
the two poles and through the point Cancer, represents the solstitial colure; but that through the point Arics, represents the equinoctial colure.
Pros. XVIII. To exhibit the true representation of the face of the heavens at any given time and place.—— Rectify for the lat. of the place, by prob. 13, setting the Globe with its pole pointing to
the pole of the world, by means of a compass. Find the sun's place in the ecliptic, and turn the Globe to bring it to the meridian, and there set the index to 12 at noon. Again revolve the Globe on
its axis, till the index point to the given hour of the day or night: so shall the Globe in this position exactly represent the face of the heavens as it appears at that time, every constellation and
star, in the heavens, answering in position to those on the Globe; so that, by examining the Globe, it will immediately appear which stars are above or below the horizon, which on the east or western
parts of the heavens, which lately risen, and which going to set, &c. And thus the positions of the sevcral planets, or comets, may also be exhibited; having marked the places of the Globe where they
are, by means of their declination and right ascension.
XIX. To find the Declination and Right-ascension of any star upon the Globe.——Turn the Globe till the star come to the meridian: then the number of degrees on the meridian, between the equator and
the star, is its declination; and the degree of the equator cut by the meridian, is the right-ascension of the star.——In like manner are found the declination and right-ascension of the sun, or any
other point.
XX. To find the Latitude and Longitude of any star drawn upon the Globe.——Bring the solstitial colure to the meridian, and there fix the quadrant of altitude over the pole of the ecliptic in the same
hemisphere with the star, and bring its graduated edge to the star: then the degree on the quadrant cut by the star is its latitude, counted from the ecliptic; and the degree of the ecliptic cut by
the quadrant its longitude.
XXI. To find the place of a star, planet, comet, &c. on the Globe; its declination and right-ascension being known.——Find the given point of right-ascension on the equinoctial, and bring it to the
meridian; then count the degrees of declination upon the meridian from the equinoctial, and there make a mark on the Globe, which will be the place of the planet, &c, sought.
XXII. To find the place of a star, planet, comet, or other object on the Globe; its latitude and longitude being given.——Bring the pole of the ecliptic to the meridian, and there fix the quadrant of
altitude, which turn round till its edge cut the given longitude on the ecliptic; then count the given latitude, from the ecliptic, upon the quadrant of altitude, and there make a mark on the Globe,
which will be the place of the planet, &c, sought.——The place on the Globe, of any such planet, &c, being found by this or the foregoing problem, its rising, or setting, or any other circumstance
concerning it, may then be found, the same as the sun, by the proper problems.
XXIII. To find the rising, setting, and culminating of a star, planet, sun, &c; with its continuance above the horizon, for any place and day; as also its oblique ascension and descension, with its
eastern and western amplitude and azimuth.——Adjust the Globe to the state of the heavens at 12 o'clock that day. Bring the star, &c, to the eastern side of the horizon: which will give its eastern
amplitude and azimuth, and the time of rising, as for the sun. Again, turn the Globe to bring the same star to the western side of the horizon: so will the western amplitude and azimuth, with the
time of setting, be found. Then, the time of rising, subtracted from that of setting, leaves the continuance of the star above the horizon: this continuance above the horizon taken from 24 hours,
leaves the time it is below the horizon. Lastly, bring the star to the meridian, and the hour to which the index then points is the time of its culmination, or southing.
XXIV. To find the altitude of the sun, or star, &c, for any given hour of the day or night.——Adjust the Globe to the position of the heavens, and turn it till the index point at the given hour. Then
six on the quadrant of altitude, at 90 degrees from the horizon, and turn it to the place of the sun or star: so shall the degrees of the quadrant, intercepted between the horizon and the sun or
star, be the altitude sought.
XXV. Given the altitude of the sun by day, or of a star by night, to find the hour of the day or night.——Rectify the Globe as in the foregoing problem; and turn the Globe and quadrant, till such time
as the star or degree of the ecliptic the sun is in, cut the quadrant in the given degree of altitude; then will the index point at the hour required.
XXVI. Given the azimuth of the sun or a star, to find the time of the day or night.——Rectify the Globe, and bring the quadrant to the given azimuth in the horizon; then turn the Globe till the sun or
star come to the quadrant, and the index will then shew the time of the day or night. | {"url":"http://words.fromoldbooks.org/Hutton-Mathematical-and-Philosophical-Dictionary/g/globe.html","timestamp":"2014-04-17T03:55:07Z","content_type":null,"content_length":"34790","record_id":"<urn:uuid:27f6e870-bd53-4a53-9cc5-1f59d492f6d3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00597-ip-10-147-4-33.ec2.internal.warc.gz"} |
STANDARD ERROR: Linear Regression Trendline?
CAVEAT: I have a limited knowledge of statistics and and am not particularly adept with respect to formulas. Hence, kindly explain using prose if you would.
ISSUE: Using stock trading software by prophet.
net, a 64-day Linear Regression Trendline ("LRT") is currently the best fit for a particular stock. The 64-day LRT has an R-squared of .933. Now, the trading software has an indicator called
"Standard Error Bands" which allows you, for example, to enter (64,3), (64,2), (64,1) etc. In this example, of course, the indicator plots "standard error band" values ("SE") (3SE, 2SE, 1SE etc.)
both above and below the 64-day LRT.
With all this in mind, I have researched how to calculate "standard error" but my results don’t seem to match the trading software. The trading software determines the 64-day LRT by using the CLOSING
prices for the last 64-days. Can someone give me a step-by-step on how to calculate the 1SE of the 64-day LRT. ASSUME THAT I ALREADY HAVE THE LRT VALUE–I do.
It’s my understanding that the "Standard Error" is the "Standard Deviation" divided by the square root of the number of samples. I tried taking the Standard Deviation of the Closing Prices over the
last 64 days and, alternatively, the Standard Deviation of the 64-day Linear Regression Trendline itself–both of which come out to 3.15 and 3.18 respectively, and then divided both of those numbers
by 8, which gives me a value of .39 or so for 1 Standard Error. The software, however, has a value that is much greater–closer to .80.
Trading indicators are best used along with money management and good risk control, using trading indicators alone will not enable you to be a successful trader, even if you learn everything about
day trading indicators the market is just too random and unless risk is controlled, over time your account will slowly get wiped out, regardless how good a “trader” you think you are.
This question was about day trading indicators and there have been some pretty good answers that should help in your trading, and especially in relation to day trading indicators, the answer has been
posted in the categories listed below:
1. 14 January 11, 4:17pm
Remember, your LRT is simply a best fit line.
A line is of the form y = mx + b.
Your software is taking all the points and determining the "best" values for both m, the slope, and b, the y intercept.
This is a *two variable* problem, so it’s not just simply the square root of the number of samples.
Instead, you have to use the following formula:
SE = Standard Dev of the sample / sqrt (x – x(mean)) [summed]
SE = 3.15 /…
So for the denominator, you will take your first x value, and subtract the mean. You will take your second x value, and subtract the mean, you will do the same for your third, fourth etc all the
way to 64 and add them all up. Then, take the square root of that.
Let me know how it goes.
2. 15 January 11, 7:56pm
[...] STANDARD ERROR: Linear Regression Trendline? | Stock Trading … [...]
3. 27 May 13, 3:01am
This article offers clear idea for the new visitors
of blogging, that actually how to do blogging.
Leave a reply | {"url":"http://www.tradingindicators.org/standard-error-linear-regression-trendline","timestamp":"2014-04-20T13:18:18Z","content_type":null,"content_length":"38122","record_id":"<urn:uuid:b302e752-d6be-4aeb-b702-2c00d366a4e7>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00316-ip-10-147-4-33.ec2.internal.warc.gz"} |
September Notices
The September issue of the Notices of the American Mathematical Society is out, and the highlight is Robert Gompf’s 3 page introduction, WHAT IS… a Lefschetz Pencil?.
The WHAT IS… series is a terrific addition to the magazine, which began back in September 2002 with WHAT IS… an Amoeba?. Since then, it’s provided quick introductions to various terms that float
around mathematics, such as the Monster group, motives, flips, and even operads. | {"url":"http://www.arsmathematica.net/2005/09/04/september-notices/","timestamp":"2014-04-21T10:20:02Z","content_type":null,"content_length":"9215","record_id":"<urn:uuid:8eb6a636-89a6-49f0-b4f0-2f3ecd50d267>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00347-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the homotopy type of a free simplicial ring?
up vote 7 down vote favorite
Is there a good description of the homotopy type of a free simplicial ring (or simplicial $R$-algebra) on a given simplicial set, in terms of the homotopy type of that simplicial set?
(This is mostly an idle question, but also motivated by the fact that it is a theorem of Milnor that a similar construction with the free group gives a model for the $\Omega \Sigma X$, perhaps
believable in view of the fact that $\Omega \Sigma$ is supposed to be the left adjoint from spaces into grouplike $A_\infty $-(i.e., with a coherently associative multiplication law) spaces.)
homotopy-theory simplicial-stuff
I seemed to have asked the same question twice by accident. Sorry about that; I've deleted the duplicate. – Akhil Mathew May 7 '12 at 0:58
add comment
1 Answer
active oldest votes
Let $R[-]$ be the free $R$-module functor, from sets to $R$-modules, and $T_R$ the free (tensor) $R$-algebra functor, from $R$-modules to $R$-algebras. The free $R$ algebra
functor from sets to $R$-algebras is the compoisite $T_RR[-]$.
Given a simplicial set $X$, the homotopy type of $R[X]$ is well understood. It is a product of Eilenberg-MacLane spaces, and its homotopy groups are the homology groups of $X$
with coefficients in $R$,
$$R[X]\simeq \prod_{n\geq 0}K(H_n(X,R),n).$$
up vote 9 down vote $$T_RM=\bigoplus_{m\geq 0}M^{\otimes m}$$
we have
$$T_RR[X]=\bigoplus_{m\geq 0}R[X\times\stackrel{m}\cdots\times X]$$
$$T_RR[X]\simeq \prod_{n\geq 0}K\left(\bigoplus_{m\geq 0}H_n(X\times\stackrel{m}\cdots\times X,R),n\right).$$
Thanks! This is very nice. – Akhil Mathew May 7 '12 at 15:31
add comment
Not the answer you're looking for? Browse other questions tagged homotopy-theory simplicial-stuff or ask your own question. | {"url":"http://mathoverflow.net/questions/96172/what-is-the-homotopy-type-of-a-free-simplicial-ring","timestamp":"2014-04-17T18:56:26Z","content_type":null,"content_length":"53264","record_id":"<urn:uuid:e597b275-4c99-43fc-a849-c0eee7039aeb>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00103-ip-10-147-4-33.ec2.internal.warc.gz"} |
Answer - Formula
Answer :
The number of different ways is 63,504.
The general formula for such arrangements, when the number of letters in the sentence is 2n + 1, and it is a palindrome without diagonal readings, is [4(2^n - 1)]^2.
I think it will be well to give here a formula for the general solution of each of the four most common forms of the diamond-letter puzzle.
By the word "line" I mean the complete diagonal.
Thus in A, B, C, and D, the lines respectively contain 5, 5, 7, and 9 letters.
A has a non-palindrome line (the word being BOY), and the general solution for such cases, where the line contains 2n + 1 letters, is 4(2^n - 1).
Where the line is a single palindrome, with its middle letter in the centre, as in B, the general formula is [4(2^n - 1)]^2.
In cases C and D we have double palindromes, but these two represent very different types. In C, where the line contains 4n-1 letters, the general expression is 4(2^2n-2).
But D is by far the most difficult case of all.
I had better here state that in the diamonds under consideration (i.) no diagonal readings are allowed—these have to be dealt with specially in cases where they are possible and admitted; (ii.)
readings may start anywhere; (iii.) readings may go backwards and forwards, using letters more than once in a single reading, but not the same letter twice in immediate succession.
This last condition will be understood if the reader glances at C, where it is impossible to go forwards and backwards in a reading without repeating the first O touched—a proceeding which I have
said is not allowed.
In the case D it is very different, and this is what accounts for its greater difficulty.
The formula for D is this:
where the number of letters in the line is 4n + 1. In the example given there are therefore 400 readings for n = 2. | {"url":"http://www.pedagonet.com/puzzles/yeoman1.htm","timestamp":"2014-04-19T09:24:26Z","content_type":null,"content_length":"12448","record_id":"<urn:uuid:af55071e-226b-4f34-b9ea-d8fd748d7f18>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00058-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Fun question: evaluate (p-a)(p -b)(p-c)............ (p-z)
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50094d54e4b020a2b3be69ac","timestamp":"2014-04-20T23:50:06Z","content_type":null,"content_length":"58185","record_id":"<urn:uuid:6f1fb3cd-3c47-49ec-954e-f0d7f1c647f4>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00003-ip-10-147-4-33.ec2.internal.warc.gz"} |
Type II Error Rate Question?
January 11th 2013, 12:55 PM #1
Jan 2013
Type II Error Rate Question?
I can't seem to figure out how to obtain the type 2 error rate from this question.
A researcher is conducting a hypothesis test to determine whether people can be trained to be better at detecting when someone's lying. He found training on average an increase in probability of
detection by 0.06, data's normally distributed and standard error of training effect of the population is 0.135. If type I error rate is 0.05 what's your type II error rate?
Any help greatly appreciated! Thank you
Re: Type II Error Rate Question?
Hey Donmarco.
The first thing you should look at is what the definition is. We know Type I error is P(H1 retained|H0 correct) while Type II is P(H0 retained|H1 correct) = 1 - P(H1 retained|H1 correct).
So now we need to know what the hypotheses are to get a specific value: what are your hypotheses for this particular example?
Re: Type II Error Rate Question?
Thanks Chiro
I started using this hypothesis H0: p ≥ 0.06 vs. H1: p < 0.06 I assume its right, this is still quite new to me
Re: Type II Error Rate Question?
Is p a proportion?
Re: Type II Error Rate Question?
Hi Donmarco!
To calculate a type II error, you need knowledge about the actual alternative population.
In the problem it is given that the alternative population has an average that is 0.06 greater than the null population.
Let's call $\pi$ the chance that someone detects a lie after training. This is what is measured.
Actually, it's he proportion of successful detections that is counted.
We'll call that measured proportion $p$.
Let's call $\pi_0$ the average chance that someone detects a lie.
And let's call $\pi_1$ the average chance that someone detects a lie after training.
From the problem statement, we can say that $\pi_1 = \pi_0 + 0.06$.
The null hypothesis is:
H0: training does not help, or $\pi = \pi_0$
H1: training does help, or $\pi > \pi_0$
Now, what is the critical value $p_{crit}$ that we need to say with at least 95% certainty that training helps?
And since we already know the actual alternative population, what is the chance that H0 is retained even while knowing that H1 is true ( $\pi = \pi_0 + 0.06$)?
This latter chance is the type II error.
January 11th 2013, 03:42 PM #2
MHF Contributor
Sep 2012
January 11th 2013, 06:49 PM #3
Jan 2013
January 11th 2013, 09:38 PM #4
MHF Contributor
Sep 2012
January 12th 2013, 03:31 AM #5 | {"url":"http://mathhelpforum.com/statistics/211172-type-ii-error-rate-question.html","timestamp":"2014-04-23T12:13:55Z","content_type":null,"content_length":"42162","record_id":"<urn:uuid:3e3d2ea3-24ca-4835-95a6-a2bfa97b0a19>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00209-ip-10-147-4-33.ec2.internal.warc.gz"} |
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 491.41001
Autor: Erdös, Paul; Vertesi, P.
Title: On the almost everywhere divergence of Lagrange interpolation. (In English)
Source: Approximation and function spaces, Proc. int. Conf., Gdansk 1979, 270-278 (1981).
Review: [For the entire collection see Zbl 471.00018.]
In a previously published paper, P.Erdös [Acta Math. Acad. Sci. Hung. 9, 381-388 (1958; Zbl 083.29001)] stated without proof that if (x[kn])[1 \leq k \leq n,1 \leq n] denotes a triangular matrix of
knots in the compact internal I = [-1,+1] ordered such that x[n,n] < x[n-1,n] < ... < x[1,n] (n \geq 1) holds then there exists a continuous function f: I > R such that the sequence (L[n]f)[n \geq
1] of Lagrange interpolation polynomials L[n] = sum[1 \leq k \leq n]f(x[kn])\ell[kn] diverges almost everywhere in I. In fact lim.\sup{n > oo}|L[n]f(x)| = oo for almost all x in I. The authors give
a brief account of the preliminary results in this direction (G. Faber, S. Bernstein, G. Grünwald, J. Marcinkiewicz, A. A. Privalov, P. Turán, P. Erdös) and point out a sketch of the proof. The
detailed proof is rather long and quite complicated, although it uses only elementary techniques. One of its important ingredients is the following result: Lemma. Let A > 0 be an arbitrary fixed
number and consider an arbitrary integer m \geq m[0](A). For any integer n \geq n[0](m) exists a set H[n]\subset I for which meas(H[n]) \leq 1/ ln\ln m and
sum[\over{1 \leq k \leq n]{x[k]\not in I[j(x),n]}}|\ell[kn](x)| \geq (ln m)^1/3geq2A (n \geq n[0](m))
where I[j(x),m] denotes that interval of the equivalent partition of I in m subintervals which contains the point x in I|H[n]. Details of the proof (about 300 pages) will be published in a paper to
appear in Acta Math. Acad. Sci. Hung.
Reviewer: W.Schempp
Classif.: * 41A05 Interpolation
Keywords: almost everywhere divergence; triangular matrix of knots; Lagrange interpolation polynomials
Citations: Zbl.471.00018; Zbl.083.290
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
│Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │
│Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │
│Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │ | {"url":"http://www.emis.de/classics/Erdos/cit/49141001.htm","timestamp":"2014-04-19T22:26:18Z","content_type":null,"content_length":"5327","record_id":"<urn:uuid:16bf5c46-c977-446a-a015-5c580a8515d7>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00572-ip-10-147-4-33.ec2.internal.warc.gz"} |
Arbitrage Trading
by Sandor Lehoczky,
Senior Trader, Henry Capital,
co-author, the Art of Problem Solving
I tend to procrastinate. So it wasn't until a year and a half out of college that I began to think hard about pursuing a career.
In high school and college my main focus had been mathematical, first on the math team at Grissom High School (Huntsville, AL), then in the physics program at Princeton University. However, I had
realized near the end of college that an academic career wasn't going to happen.
Now, after pursuing various jobs and projects, I wanted to find something to do which was engaging, paid well, and had at least some mathematical side to it.
What I found was the field of arbitrage trading. Arbitrage, pronounced "AR-bi-trazh," means executing more than one trade such that 1) there is little or no risk, and 2) there is some positive value.
For example, suppose that at a farmer's market, three apples could be swapped for two bananas, three bananas could be swapped for two peaches, and three peaches could be swapped for seven apples.
Then I could take twenty-seven apples, turn them into eighteen bananas, turn the bananas into twelve peaches, and turn the peaches into twenty-eight apples.
After all is said and done, I've earned an extra apple for my efforts, by taking advantage of the fact that the swap rates between the different fruits aren't exactly right---the market has a slight
anomaly. Until the farmers adjust their swap rates, I can execute my series of trades over and over again, netting an extra apple each time.
Of course, in the real markets, anomalies as simple as this rarely occur, and are quickly eradicated when they do. Intense competition from arbitrage traders forces all securities to stay very close
to fair valuations: the market is what economists would call efficient.
Trading takes a lot of different forms. At first, I was standing in a "pit" on the floor of the Chicago Board Options Exchange, wearing a brightly colored jacket and waving my arms to get a broker's
attention. Later, I sat in front of a bank of computer monitors, directing split-second trades over the phone. In both cases, however, the underlying idea was he same: to exploit the mathematical
relationships between different stocks or bonds or options or stock indexes.
That isn't to say it's all about the math. In many trading situations, the mathematical relationships are well-understood, and what matters are things like speed, attention to detail, and
gamesmanship. From moment to moment, this type of trading closer resembles a race-to-the-buzzer game show than a math test.
Other types of trading are more cerebral, focused on extensive analysis to extract the relationships underlying security prices. The pricing of options---securities which grant the holder the right,
if so desired, to buy a stock at a specific price at a specific price---has inspired a wealth of mathematical research. The 1997 Nobel Prize went to two economists who pioneered a key options pricing
model based on partial differential equations. And the field has expanded enormously since their 1973 work, with new models continuing to proliferate in answer to ever-deeper questions. Further
afield, areas like bond analysis rest on a deep mathematical foundation.
Traders aren't mathematicians, however. Rather, they are skilled users of mathematical models, often developed by quantitative researchers at the trader's firm. However elaborate, models always fail
to capture the human side of the equation - much as poker-playing software can calculate probabilities, but can't infer an opponent's weak hand from her demeanor.
For most types of trading, a solid math background at the college level is crucial. Coursework in finance and economics is also useful. For those more inclined to quantitative research rather than
trading, graduate study in math or science may be required.
Trading is in many cases a very competitive, intense enterprise. Those who like games and don't mind stress will often excel. However, building relationships is also a key component, and people
skills are a must.
There are a lot of books that can help you get a sense of the culture of trading and finance. A few I'd recommend are Peter Bernstein's Against the Gods, Roger Lowenstein's When Genius Failed, and
Burton Malkiel's A Random Walk Down Wall Street.
If you're seriously interested in trading - or finance more broadly - talk to as many people close to the industry as you can. Opportunities range from large investment banks (most of which have
large trading operations) to small "boutique" operations with fewer than 100 employees. Use of headhunters is also an important way to gain exposure to Wall Street firms. | {"url":"http://www.mualphatheta.org/problem_corner/Mathematical_Log/Issues/0402/MathematicalLogCareersSpring02.aspx","timestamp":"2014-04-18T18:11:37Z","content_type":null,"content_length":"16011","record_id":"<urn:uuid:6aad6dfe-d01c-4a5c-bd59-191fcabd1b0f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
Generalizing groups via the Hall-Witt identity
up vote 0 down vote favorite
In studying the integrability problem for Lie algebra representations, I have been led to wonder whether generalizing the notion of group by dropping associativity, while keeping the Hall-Witt
identity, might be useful. Of course, I am assuming that associativity is stronger than Hall-Witt... So, let me ask first: is this the case? If yes, has such a notion of generalized group, or a
similar one, been considered in the literature? Is there a reason I am missing why this might not be a good idea, either in itself or to study non-integrable Lie algebra representations?
Since the question is somewhat vague, let me try to explain myself better. Let us think of a representation of a Lie algebra by means of vector fields over a manifold. This representation can fail to
integrate, i.e. give rise to an action of the corresponding Lie group, even if the vector fields are individually integrable. Nelson, in the paper where he introduces analytic vectors, gives a nice
example with two commuting vector fields $X$, $Y$ on a manifold $M$, with the following property: let us start at a point $x_0\in M$, and move along the vector field $X$ for an amount of time, say $t
=1$, to end up at $x_1$ (i.e. solve the equation $\dot x_t = X(x_t)$). Then, from $x_1$ let us move along the vector field $Y$ for a time $t=1$ to end up at $x_2$. Well, it is possible that, if one
reverses the order of the displacements, i.e. if one moves first along $Y$ and then along $X$, one ends up at a point which is different from $x_2$. Thus, the representation of the Lie algebra $\
mathbb R^2$ given by $X$ and $Y$ acting on, say, $C_0^\infty(M)$, cannot exponentiate to give an action of the Lie group $\mathbb R^2$ on $M$. Now, the idea is that these vector fields might still
give rise to a an action of some other kind of structure---perhaps a nonassociative one. Since Hall-Witt has a geometric meaning (see http://lamington.wordpress.com/2011/11/20/the-hall-witt-identity/
), it seems plausible to me that it might still make sense for the looked-for nonassociative object. Given that we are talking about actions on manifolds, one should perhaps define Hall-Witt in such
a context associating from the left, i.e. defining $g_1g_2\cdots g_n = g_1(g_2(\cdots g_n))$. If one does so, then my questions can be rephrased as follows: let $G$ be the not necessarily associative
algebraic object generated by taking products of a finite set of symbols together with their inverses, and quotienting away Hall-Witt.
1. Is $G$ a group? I'm guessing the answer is no, so:
2. Have things like $G$ been considered in the literature?
3. Is there any reason I'm failing to see why $G$ might not be an interesting object, either in itself or to study the integrability of Lie algebra representations?
reference-request gr.group-theory lie-algebras
2 Asking if associativity is stronger than Hall-Witt amounts to asking if Hall-Witt impliee associativity. But what would that mean? To state Hall-Witt, you need commutators and conjugation. How
would you define those in a nonassociative context? – Marty Isaacs Jun 9 '12 at 22:31
Marty, sorry for the delay in answering. I have edited the question with your commentary in mind. I think now it is much clearer. – Rodrigo Vargas Jun 16 '12 at 20:30
This seems problematic. It is far from clear that your parenthesization is the one that gives the "right" Hall-Witt identity. – Qiaochu Yuan Jun 16 '12 at 20:33
Could you be more precise regarding this? Indeed, what is the right parenthesization should be understood as part of the question, so the reasons you have to believe just associating from the
right may be wrong could actually evolve into a complete answer... – Rodrigo Vargas Jun 16 '12 at 21:14
I just mean that there's no good reason to prefer this particular parenthesization over any of the others. One might prefer a parenthesization which computes the conjugations first (somehow), then
the commutators (somehow), then the actual products, for example. – Qiaochu Yuan Jun 16 '12 at 22:22
add comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged reference-request gr.group-theory lie-algebras or ask your own question. | {"url":"http://mathoverflow.net/questions/97995/generalizing-groups-via-the-hall-witt-identity","timestamp":"2014-04-18T00:56:29Z","content_type":null,"content_length":"54889","record_id":"<urn:uuid:ab33ea47-f646-4e5b-848d-8506e10bad1f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00108-ip-10-147-4-33.ec2.internal.warc.gz"} |
Life of Francis Galton by Karl Pearson Vol 3a
: image 43
Correlation and Application of Statistics to Problems of Heredity 27
discussion. He perceived for the first time that the problem of multiple correlation when solved would give the closest prediction possible to the probable value of the character in an individual
from known characters in the kinsfolk, but he also recognised that long selection could not indefinitely reduce variability, that 300/o reduction in variability was about as much as could be hoped
for (i.e. p to b in his notation).
"The possible problems are obviously very various and complicated, I do not propose to speak further about them now. It is some consolation to know that in the commoner questions of hereditary
interest, the genealogy is fully known for two generations, and that the average
influence of the preceding ones is small.
"In conclusion it must be borne in mind that I have spoken throughout of heredity in respect to a quality that blends freely in inheritance. I reserve for a future inquiry (as yet incomplete) the
inheritance of a quality that refuses to blend freely, namely the colour of the eyes. These
may be looked upon as extreme cases, between which all ordinary phenomena of heredity lie*."
These words show that Galton fully recognised that his theory applied only to continuously varying and blending characters.
The paper in the Anthropological Journal Miscellanea, while less replete with ideas requiring mathematical interpretation than that in the R. S. Proceedings, contains two matters which deserve
notice. Over and over again we meet with the statement that more able men are born from undistinguished parents than from parents of marked ability. In the year 1927 it formed the subject of a series
of controversial letters in The Times newspaper, in which neither side seemed to have any statistical ammunition, nor appeared to be aware that they were dealing with a forty year old paradox, which
Galton had refuted in 1885
"Let it not be supposed for a moment that any of these statements invalidate the general doctrine that the children of a gifted pair are much more likely to be gifted than the children of a mediocre
pair. What they assert is that the ablest child of one gifted pair is not as likely to be as able as the ablest of all the children of very many mediocre pairs t."
In 1900$ the biographer gave exact numbers for the production of ability on the assumption that one man in twenty may be treated as "able." It turned out that in 10,000 matings the 52 pairs of
exceptional parents produced 26 exceptional sons, while the 9948 non-exceptional pairs produced 474 exceptional sons, thus the rate of production of exceptional sons by exceptional parents was 10
times greater than the rate by non-exceptional parents, but the latter produced more than 18 times as many exceptional sons as the former. The result flows merely from the fact that a rate of 10
times the production in the case of exceptional parents is counteracted in total output, by the fact that there are some 200 times more non-exceptional than exceptional pairs of parents. It is
distressing to note how such distinguished scientists as Dr Leonard Hill are unable to grasp the interpretation of this simple statistical paradox first provided by Galton in 1885!
The second point is the publication .of a diagram illustrating the variability of a stable population in the parental generation, for the midparentages, for the generants, and for the filial
generation. The diagram (see our p. 28) is
* R. S. Proc. Vol. ixii, pp. 62-63. t dourn. Anthrop. Instit. Vol. xv, p. 254.
$ Phil. Trans. Vol. 195, A, p. 47. | {"url":"http://galton.org/cgi-bin/searchImages/galton/search/pearson/vol3a/pages/vol3a_0043.htm","timestamp":"2014-04-20T04:08:53Z","content_type":null,"content_length":"9499","record_id":"<urn:uuid:399ef4e2-945c-4d8b-95f4-ffe5778c83ba>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00501-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finite Difference Approximation for Linear Stochastic Partial Differential Equations with Method of Lines
McDonald, Stuart (2006): Finite Difference Approximation for Linear Stochastic Partial Differential Equations with Method of Lines.
Download (228Kb) | Preview
A stochastic partial differential equation, or SPDE, describes the dynamics of a stochastic process defined on a space-time continuum. This paper provides a new method for solving SPDEs based on the
method of lines (MOL). MOL is a technique that has largely been used for numerically solving deterministic partial differential equations (PDEs). MOL works by transforming the PDE into a system of
ordinary differential equations (ODEs) by discretizing the spatial dimension of the PDE. The resulting system of ODEs is then solved by application of either a finite difference or a finite element
method. This paper provides a proof that the MOL can be used to provide a finite difference approximation of the boundary value solutions for two broad classes of linear SPDEs, the linear elliptic
and parabolic SPDEs.
Item Type: MPRA Paper
Institution: Social and Information Systems Laboratory, California Institute of Technology
Original Finite Difference Approximation for Linear Stochastic Partial Differential Equations with Method of Lines
Language: English
Keywords: Finite difference approximation; linear stochastic partial differential equations (SPDEs); the method of lines (MOL)
Subjects: C - Mathematical and Quantitative Methods > C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling > C63 - Computational Techniques; Simulation Modeling
Item ID: 3983
Depositing Stuart McDonald
Date 11. Jul 2007
Last 13. Feb 2013 07:57
Allen, E.J., Novosel, S.J. and Zhang, Z. (1998) Finite Element and Difference Approximation of Some Linear Stochastic Partial Differential Equations. Stochastics and Stochastic
Reports. 64, 117--142.
Ames, W.F. (1992) Numerical Methods for Partial Differential Equations (2nd ed.) Barnes and Noble Inc., New York.
Brace, A. and Musiela, M. (1994) A Multifactor Gauss Markov Implementation of Heath, Jarrow and Morton. Mathematical Finance 4, 259--283.
Brace, A., Gatarak, D. and Musiela, M. (1997) The Market Model of Interest Rate Dynamics. Mathematical Finance 7 (2), 127--147.
Duffie, D., Ma, J., Yong, J-M. (1995) Black's consol rate conjecture. Annals Applied Probability 5 (2), 356--382.
Kloeden, P.E. and Platen, E. (1992) Numerical Solutions of Stochastic Differential Equations. Springer-Verlag, Berlin.
Liskovets, O.A. (1965) The Method of Straight Lines. \QTR{em}{Differential Equations} 1, 1662--1678.
Ma, J., Protter, P., Yong, J-M. (1994) Solving forward-backward stochastic differential equations explicitly---a four step scheme. Probability Theory and Related Fields} 98 (3),
Ma, J. and Yong, J-M. (1997) Adapted solution of a degenerate backward SPDE, with applications. Stochastic Process and their Applications 70 (1), 59--84.
Ma, J. and Yong, J. (1999a) Forward-backward stochastic differential equations and their applications. Lecture Notes in Mathematics, 1702. Springer-Verlag, Berlin.
Ma, J.; Yong, J. (1999b) On linear, degenerate backward stochastic partial differential equations. Probability Theory and Related Fields 113 (2), 135--170.
Musiela, M. (1993) Stochastic PDEs and the Term Structure Model. Journees Internationales de France IGR--AFFI. La Baule, June 1993.
Musiela, M. and Sondermann, D. (1997) Different Dynamical Specifications of the Term Structure of Interest Rates and their Implications. (Working Paper).
Pardoux, E. (1993) Stochastic Partial Differential Equations, A Review. Bulletin des Sciences Mathematiques, 2e Serie, 117, 29--47.
Tian, T., Burrage, K. and Volker, R. (2003) Stochastic Modelling and Simulations for Solute Transport in Porous Media. Australian & New Zealand Industrial and Applied Mathematics
Journal. 45 Part C, 551--564.
Walsh, J.B. (1985) An Introduction to Stochastic Partial Differential Equations. In Carmona, R. Kuston, H. and Walsh, J.B. (eds.) Ecole d'Ete de Probabilities de Saint-Flour XIV --
1984, 265--439. Springer-Verlag, Berlin.
URI: http://mpra.ub.uni-muenchen.de/id/eprint/3983 | {"url":"http://mpra.ub.uni-muenchen.de/3983/","timestamp":"2014-04-20T11:51:11Z","content_type":null,"content_length":"23234","record_id":"<urn:uuid:a3025e78-2da5-4068-a8e3-52eb331a7ba6>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00556-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Given A(0,6) B(2,h) C(7,3) are three vertices of a square ABCD. Find value h
Best Response
You've already chosen the best response.
Put the distance formula and compare them !
Best Response
You've already chosen the best response.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4e22ead00b8ba00f8f00a544","timestamp":"2014-04-16T10:36:37Z","content_type":null,"content_length":"29912","record_id":"<urn:uuid:6e6eeb62-619d-467f-aff9-4177002e08a1>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00647-ip-10-147-4-33.ec2.internal.warc.gz"} |
Commutativity of the fundamental group of any Lie Group
Take the 2-minute tour ×
MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required.
How do we formally prove that the fundamental group of any Lie group is always commutative?
@Marina: by and large, MO is for questions of research interest. For questions at this level we have math.stackexchange.com.
Qiaochu Yuan May 25 '12 at 13:58
I think the more important issue is that this is an exact dubplicate: mathoverflow.net/questions/35868/…
Sean Tilson May 25 '12 at 14:24
@Sean: the other question was phrased less offensively (to me) -- the OP wanted to know "why" this is true. The current OP simply asks us to do her homework for her (I know this is homework because
this was the first slightly nontrivial exercise on fundamental groups I remember doing when I was a student (I believe it is in Massey's book). It is extremely important to keep this sort of junk
out of MO.
Igor Rivin May 25 '12 at 14:41
Igor, I'm not a mathematician and this is not my homework. It came when I was reading Rubikov on field theory. I'm very sorry if my lack of background offended you. Please close the question if you
feel you need to.
user14210 May 25 '12 at 14:48
show 2 more comments
This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally
applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center.If this question can be reworded to fit the rules in the help center
, please edit the question.
As Vahid says, it is true for any topological group. Here is a proof. I'm sure there are nicer, more conceptual ones out there, but here goes.
Let $G$ be your topological group. Take two loops $\sigma$ and $\gamma$ in $G$, based at the identity of $G$, which we will denote by $e$. Let $\sigma \cdot \gamma$ be the concatenation of the two
loops. This is given by $$ (\sigma \cdot \gamma) (t) = \begin{cases} \sigma(2t) & \quad \text{ if } 0 \le t \le 1/2 \\\ \gamma(2t-1) &\quad \text{ if } 1/2 \le t \le 1 \end{cases} $$ (Sorry,
couldn't manage to format that any better. Feel free to edit if you know how to put a nice brace bracket to the left of that definition.)
The idea is this. We will show that $\sigma \cdot \gamma$ is homotopic to to the loop given by the pointwise product of $\sigma$ and $\gamma$. Let's call that loop $\rho$, so $$ \rho(t) = \sigma(t)\
Now define an auxiliary function $P : [0,1] \times [0,1] \to G$ by $$ P(s,t) = \begin{cases} \sigma\left( \frac{2t}{1+s} \right) & \quad \text{ if } 0 \le t \le \frac{1+s}{2} \\\ e &\quad \text{ if
} \frac{1+s}{2} \le t \le 1 \end{cases}$$
At $s=0$, this function does the whole loop $\sigma$ as $t$ goes from $0$ to $1/2$, then sits at $e$. In other words, at $s=0$ this is the first half of the loop $\sigma \cdot \gamma$. As $s$ gets
larger, $P$ does the whole loop $\sigma$ as $t$ goes from $0$ to $\frac{1+s}{2}$. At $s=1$, $P$ does the loop $\sigma$ at normal speed.
Then similarly define a function $Q : [0,1] \times [0,1] \to G$ by $$ Q(s,t) = \begin{cases} e & \quad \text{ if } 0 \le t \le \frac{1-s}{2} \\\ \gamma \left( \frac{2t-1+s}{1+s} \right) &\quad \text
{ if } \frac{1-s}{2} \le t \le 1 \end{cases}$$
At $s=0$ this is just the second half of the loop $\sigma\cdot\gamma$, while at $s=1$ it is exactly the loop $\gamma$.
So finally, define $$ H(s,t) = P(s,t) \cdot Q(s,t). $$ At $s=0$ this is $\sigma \cdot \gamma$, while at $s=1$ it is the pointwise product loop $\rho$. $H$ is clearly continuous, and $H(s,0) = e = H
(s,1)$ for all $s$, so this is a homotopy of loops between $\sigma \cdot \gamma$ and $\rho$.
Now we can redo that process and show that $\rho$ is homotopic to the other concatenation $\gamma \cdot \sigma$. So this shows that $\pi_1(G)$ is abelian.
Were were you when I needed to do my homework?
Igor Rivin May 25 '12 at 14:46
@Igor Rivin, I thought it might be homework, but then I spent long enough thinking about the problem that I decided to write it up anyway.
MTS May 25 '12 at 14:59
Thanks for fixing the latex, Henrik! For some reason the cases environment wasn't rendering properly when I was typing up the answer.
MTS May 25 '12 at 15:00
@MTS you did a wonderful job, my comment is meant to be negative on the question, but positive on the answer
Igor Rivin May 25 '12 at 16:52
@Igor, thank you! I only wish I knew a good way to draw the picture that I had in my head in latex somehow.
MTS May 25 '12 at 17:35
add comment
up vote 24 down One-sentence explanation: because the fact that a topological group $G$ is a group object in topological spaces makes its fundamental group $\pi_1(G)$ a group object in groups, and
vote this is an abelian group.
It's certainly the nicest proof of this result I've ever seen!
Stefano V. May 25 '12 at 10:30
This is terrific.
Steven Landsburg May 25 '12 at 12:11
"...by the Eckmann-Hilton argument."
Qiaochu Yuan May 25 '12 at 12:20
the point is that the group law in the fundamental group of a topological group is a group homomorphism; a group whose law is a homomorphism is easily checked to abelian (I knew this
argument but not with the interpretation by group objects).
Yves Cornulier May 25 '12 at 13:18
Nice! I knew there was some clever way of doing it.
MTS May 25 '12 at 14:58
add comment
Geometric proof: A connected Lie group $G$ is homotopy equivalent to a maximal compact subgroup, so we may assume $G$ is compact. Being compact, $G$ admits a bi-invariant Riemannian metric
up vote with respect to which it is a symmetric space, the symmetry $s$ at the identity being just the inversion map. Now a homotopy class in $\pi_1(G,1)$ can be represented by a closed geodesic $\
9 down gamma$ (of minimal length in its homotopy class, by a shortening process). Since the differential of $s$ at $1$ is minus identity, $s$ sends $\gamma$ to itself parametrized backwards. It
vote follows that the homomorphism induced by $s$ on the $\pi_1$-level is inversion. However, the inversion map in a group is a homomorphism if and only if the group is Abelian.
+1 That's a very cool argument!
Igor Rivin May 25 '12 at 14:49
Igor: Thanks a lot!
Claudio Gorodski May 25 '12 at 22:25
add comment
up vote 7 It is actually true for all topological groups. Topological groups possess a structure which makes them H-spaces and fundamental group of every H-space is abelian. The formulation and
down vote the proof is given in Algebraic Topology, Homotopy and Homology, by Switzer Pages 14-16.
add comment | {"url":"http://mathoverflow.net/questions/97909/commutativity-of-the-fundamental-group-of-any-lie-group/97922","timestamp":"2014-04-18T13:37:00Z","content_type":null,"content_length":"83569","record_id":"<urn:uuid:f3d95847-158e-44b6-b976-d646c5055771>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00595-ip-10-147-4-33.ec2.internal.warc.gz"} |
A fact about $t/W$ and the centralizer bundle on $\mathfrak{g}^{\text{reg}}$
up vote 2 down vote favorite
Let $\mathfrak{g}$ be a simple Lie algebra; let $R = \mathfrak{g}^{*, reg}$ denote the regular locus in the dual Lie algebra. Consider the vector bundle $\mathfrak{z}$ over $R$, whose fiber over a
point $\xi \in R$ is the stabilizer of $\xi$ in the dual to $\mathfrak{g}$. Using the isomorphism $R/G = \mathfrak{t}/W$, we have a map $p: R \rightarrow \mathfrak{t}/W$; let $\mathcal{T}'$ denote
the co-tangent bundle to $R$.
Question: Why is $p^* \mathcal{T}' = \mathfrak{z}$?
It should suffices to show that the fibers of these two vector bundles over a point in the base, $\xi$, are the same, but I was having trouble computing the fibers of $p^* \mathcal{T}'$.
(This fact was mentioned in the second paragraph of Section $2.6$, pg 6 of this paper.) Sorry about my poor choice of notation, I was having trouble using Latex on MO.
rt.representation-theory ag.algebraic-geometry
add comment
1 Answer
active oldest votes
I think this can be understood in terms of Hamiltonian reduction.
The group $G$ acts on $\mathfrak g^{\ast,reg}$ inducing an action on $T^\ast \mathfrak g^{\ast,reg}$. The associated moment map $\mu: T^\ast \mathfrak g^{\ast,reg} \to \mathfrak g^\
ast$ is:
up vote 3 down $\mu(\xi,x) = coad(x)(\xi) - \xi$.
vote accepted
A basic result of Hamiltonian reduction is that a covector at $[\xi]$ on the quotient $\mathfrak g^{\ast,reg}/G = \mathfrak t^\ast /W$ is given by an element $x \in T^\ast _\xi \
mathfrak g^{\ast,reg}$ such that $\mu(x,\xi)=0$. Noting that $\mu^{-1}(0) \to \mathfrak g^{\ast,reg}$ is equal to the bundle $\mathfrak z$ of centralizers implies your result.
add comment
Not the answer you're looking for? Browse other questions tagged rt.representation-theory ag.algebraic-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/123462/a-fact-about-t-w-and-the-centralizer-bundle-on-mathfrakg-textreg/123489","timestamp":"2014-04-18T11:15:50Z","content_type":null,"content_length":"51144","record_id":"<urn:uuid:be8578ad-e6b7-431c-881a-c2c2b99ef7b1>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00064-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pacifica SAT Math Tutor
...I have an undergraduate degree in biology and math and have worked many years as a data analyst in a medical environment. I have a PhD in economics and have taken 6 PhD level classes in
econometrics. I have years of experience using STATA.
49 Subjects: including SAT math, physics, geometry, calculus
...I am very effective in helping students improve their test scores: in a past testing year all of my SAT students scored 700 and above! Some of them were scoring in mid-500, when I started
working with them. I bring my full attention and dedication to the students that I work with.
14 Subjects: including SAT math, calculus, statistics, geometry
...My expertise is especially strong in problem solving, and applying concepts to real world situations. I enjoy working with students. When my son was in college, we used to problem solve on the
phone and via email.
39 Subjects: including SAT math, chemistry, reading, English
...I have also worked as a court-appointed translator and have taught have Bilingual Biology (in Spanish). I lived in 3 Spanish-speaking countries for a total of 4 years (including 2 years in the
Peace Corps in Bolivia, and another year doing Master's research in Costa Rica), and have led student t...
43 Subjects: including SAT math, Spanish, geometry, chemistry
...A pre-algebra course is important to develop a strong foundation of arithmetic concepts, with applications in the real world. This provides a step toward success in more advanced classes. I
have taught regular and Advanced Placement Statistics (college-level class). I know that it is one of th...
6 Subjects: including SAT math, statistics, geometry, prealgebra | {"url":"http://www.purplemath.com/pacifica_sat_math_tutors.php","timestamp":"2014-04-20T09:23:19Z","content_type":null,"content_length":"23824","record_id":"<urn:uuid:25c2703a-3f70-48e7-987a-f75e5447de6c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00614-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prince Louis de Broglie
In 1923, while still a graduate student at the University of Paris, Louis de Broglie published a brief note in the journal Comptes rendus containing an idea that was to revolutionize our
understanding of the physical world at the most fundamental level.   He had been troubled by a curious "contradiction" arising from Einstein's special theory of relativity.
First, he assumed that there is always associated with a particle of mass $m$ a periodic internal phenomenon of frequency $u \text{.}$   For a particle at rest, he equated the rest mass energy $m
{c}^{2}$ to the energy of the quantum of the electromagnetic field $hu \text{.}$ That is, $m{c}^{2}=hu \text{,}$ where $h$ is Planck's constant and $c$ is the speed of light.
De Broglie noted that relativity theory predicts that, when such a particle is set in motion, its total relativistic energy will increase, tending to infinity as the speed of light is approached. &
nbsp Likewise, the period of the internal phenomenon assumed to be associated with the particle will also increase (due to time dilation).   Since period and frequency are inversely related, a
period increase is equivalent to a decrease of frequency and, hence, of the energy given by the quantum relation $hu \text{.}$   It was this apparent incompatibility between the tendency of the
relativistic energy to increase and the quantum energy to decrease that troubled de Broglie.^1 [footnote]
The manner in which de Broglie resolved this apparent contradiction is the subject of the famous 1923 Comptes rendus note [Comptes rendus de l'Académie des Sciences, vol. 177, pp. 507-510 (1923)]. &
nbsp The original note in French and an English translation are available here by kind permission of the Académie des Sciences and the Fondation Louis de Broglie.   The assistance of Professor
Sophie Papaefthymiou of the University of Paris in obtaining permission to post these materials is gratefully acknowledged.
1923 Comptes rendus Note
De Broglie's 1923 Comptes rendus note is available here as follows:
English translation as a web page,
English translation in Adobe PDF format, and
Facsimile of the original French in Adobe PDF format.
The Adobe PDF files require that you have Adobe Acrobat Reader in order to view and print them.   A free copy of Acrobat Reader can be downloaded from the Adobe web site.
Phase Wave Animation
For an illustration of the relationship between the phase wave proposed by de Broglie and the instantaneous state of the internal periodic phenomenon de Broglie assumed to be associated with a
particle, see the accompanying animated graphic .
Further Details
De Broglie presented a more detailed exposition of the ideas contained in his 1923 note in the first chapter of his doctoral thesis Recherches sur la théorie des Quanta (University of Paris, 1924). &
nbsp An English translation can be found in "Phase Waves of Louis deBroglie", Am. J. Phys. vol. 40 no. 9, pp. 1315-1320, September, 1972.   The ideas are also revisited in the first chapter of de
Broglie's book Non-linear Wave Mechanics: A Causal Interpretation, Elsevier Publishing Company, 1960.
^1 A consequence of de Broglie's reasoning is that a phase wave, often referred to as the "pilot" wave, appears to accompany the particle.   This is made evident in de Broglie's 1923 Comptes
rendus note.   Yet, modern introductions to quantum mechanics often fail to emphasize that this phase wave arises as an inevitable consequence of de Broglie's assumption of the internal periodic
phenomenon of the particle and the transformation laws of the special theory of relativity.   A notable exception is "Introduction to Quantum Mechanics," A. P. French and Edwin F. Taylor, W. W.
Norton & Co., pp. 55-62 (1978).
Given de Broglie's assumptions, quantum mechanics, which is to say the study of the behavior and interpretation of the phase wave, is the study of an inherently relativistic phenomenon.   In this
sense, if it were not for relativistic effects, quantum (wave) mechanics would not exist!   Yet, the phrase "non-relativistic quantum mechanics" is a commonplace.   Clearly, this phrase
should be understood to refer to the quantum mechanical description of particles moving at speeds very much less than the speed of light, but not to imply that relativistic effects are ever of no
consequence in quantum mechanics.
An example is provided by the orbits of the electron in the hydrogen atom.   Even for the innermost orbits, the speed of the electron is very much less than the speed of light.   While the
motion of the electron is, therefore, "non-relativistic" in the sense that the relativistic corrections to the classically (non-quantum mechanical) predicted behavior of the electron would be
inconsequential, relativistic effects in fact dominate.   That is, the phase wave associated with the electron, which leads directly to the quantization of the allowed orbits, is, according to de
Broglie's model, relativistic in origin.   If not for the relativistic transformations that give rise to the phase wave, quantization of the orbits and, hence, the spectrum of the hydrogen atom
would not occur.   Clearly then, relativistic effects cannot be considered to be inconsequential in determining the electronic properties of the hydrogen atom, even though the speed of the
orbiting electron is very much less than the speed of light. [return from footnote] | {"url":"http://www.davis-inc.com/physics/","timestamp":"2014-04-20T23:38:30Z","content_type":null,"content_length":"12659","record_id":"<urn:uuid:8af5f910-03f1-492c-9d09-c97a4545ac0e>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00296-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ordinary Differential Equations
Here at Iowa State University, the traditional sophomore-level “diffyQ” course comes in two versions: a three-credit course (Math 266) that does not cover Laplace Transforms (hereinafter, LTs), and a
four-credit version (Math 267) that does. But for this one topic, there is no structural difference in the courses, and indeed some sections of them meet simultaneously for a part of the semester,
with 266 ending before 267. The textbook for both courses is, therefore, the same, and, like most differential equations texts, it covers LTs near the end, so that this topic can be omitted if
Not this book, however. The major distinction between this book and the existing textbook literature is that it introduces LTs quite early (in chapter 2, in fact), and actually uses them throughout
the book. For example, they play a big role in the authors’ treatment of linear differential equations of degree n with constant coefficients. Such an equation can be written as q(D)(y) = f where D
is the differentiation operator and q is a polynomial of degree n; as is well known, if f is the zero function, the general solution to this equation is a vector space. The authors use LTs to define
the standard basis (which they call ℬ[q]) for this space. A bit later in the book the authors use LTs to transform some second order linear ODEs with non-constant coefficients into first order ODEs.
Still later in the text, connections between LTs and matrices are established; the matrix exponential e^At, for example, is the inverse LT of the resolvent matrix A.
On those few occasions, many years ago as a graduate student, when I taught ordinary differential equations, I discussed LTs only briefly and only as one more in a list of mechanical solution
techniques: starting with a differential equation, such as, say, y'' + 4y = 4x, with y(0) and y'(0) specified, I would apply the LT ℒ to both sides of the equation and, using previously stated facts
about the relationship of ℒ{y''} and ℒ{y'} to ℒ{y}, would convert the differential equation into an algebraic equation in the variable ℒ{y}. After solving this algebraic equation, use of a table of
inverse LTs would allow us to solve for y. The whole process struck me not only as mechanical but, to be blunt about it, fairly tedious and not terribly interesting; it wouldn’t surprise me if I
conveyed to the class a distinct lack of enthusiasm at this point of the course. What I didn’t realize at the time was that Laplace transforms could (as described above) also be used to shed some
light on other aspects of the course that were covered long before we got to this topic. A book like this one would have prevented some of these misunderstandings. Even if I didn’t present the
material that way in class (and doing so, I think, would have somewhat increased the level of sophistication of the course), it would have been nice for me to have known these facts.
Of course, there are going to be a number of teachers of sophomore ODE courses who, either because of personal preference or (as at ISU) the demands of a syllabus, do not want to introduce LTs early.
The natural question then becomes whether this book can be used in a way that suits their needs. The authors’ own chapter dependence chart suggests a negative answer, because it lists chapter 2 (the
one introducing LTs) as necessary background for most of the subsequent chapters (the one exception being chapter 8 on basic matrix algebra). This dependence chart notwithstanding, I had the
impression that an instructor could, with some effort, navigate a course through the book that avoided LTs. However, it does not seem to me that the benefits of doing so would outweigh the effort.
Books on this subject are not exactly scarce and, with the exception of the unusual feature of introducing LTs early, this book follows a fairly conventional path through the typical basic sophomore
ODE course: basic solutions methods for categories of first order ODEs, second order and then nth order linear ODEs with constant coefficients, second order linear ODEs with non-constant coefficients
using such methods as variation of parameters and the Wronskian, power series methods, and systems of linear ODEs (including matrix exponentials). The writing is competent and clear, but not
particularly memorable; there is good attention paid to applications of the subject but, again, this is not an unusual feature among texts of this nature. So, there is no compelling reason to choose
this text unless you are prepared to buy in to the authors’ philosophy regarding LTs. If you do buy into that philosophy, or are willing to be persuaded to do so, this book is tailor-made for you.
The exercises in a mathematics book of this level are always important, and this book does a good job with them. There are quite a lot of them, mostly of a fairly computational nature; a full
solutions manual of almost 350 pages is available to adopters (and at least one reviewer), and there is also a student solutions manual for the odd-numbered problems that is, not surprisingly, just
about half that size. There are also a number of solutions to odd-numbered problems in the back of the book (about fifty pages worth).
I do have a couple of quibbles, one of them not the responsibility of the authors and the other something that may be more of an issue to me than to others. Taking the latter first, in the chapter on
separation of variables, the authors tell the students to “multiply by dt”, which to me (at least at this level) is like hearing squeaky chalk on a blackboard. I do realize that the theory of
differential forms actually does allow the expression dy/dx to be treated as a fraction, but this comes far later in a student’s mathematical career and many calculus textbooks make a point of saying
that the expression is not a fraction. Different books treat this issue in different ways. The popular ODE textbook by Blanchard, Devaney and Hall, for example, manages to address the subject in a
reasonably precise and informative manner; they acknowledge that the reader “probably became nervous at one point” by treating dt as a variable and then proceed to explain “what is really going on”
in terms of the chain rule. But perhaps I am nitpicking here; I suspect many of my colleagues wouldn’t object at all to just “multiplying” by dt in a course at this level.
As for the first of the two quibbles that I mentioned, there are a large number of blank pages in this book. I didn’t make any attempt to count them all, but from a quick estimate I would say the
number of them was in the vicinity of 75 or more. There are also a great many other pages that, although not completely blank, contain very little writing. One would think that the publishers of an
800-page book would be making an effort to keep the size down and thus make it less cumbersome, but instead about ten percent of a book that is already quite thick and unwieldy is simply blank. I
found this annoying and distracting.
But these, as I said, are quibbles. My overall impression of this book is quite positive: I enjoy books with a novel and interesting point of view, and, even though this is an elementary subject, I
learned things from this text that I did not know before.
Mark Hunacek (mhunacek@iastate.edu) teaches mathematics at Iowa State University. | {"url":"http://www.maa.org/publications/maa-reviews/ordinary-differential-equations","timestamp":"2014-04-16T14:20:16Z","content_type":null,"content_length":"102555","record_id":"<urn:uuid:82cac7fe-0f97-44e5-962b-bce0f38d4771>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Tawny on Saturday, October 16, 2010 at 8:27pm.
How do I solve:
-9x + 2x to the third power +8x -12x to the third power
• Math - drwls, Saturday, October 16, 2010 at 8:58pm
Isn't there suppsed to be an "=" sign in there somewhere? If so, where?
You can't "solve" it if it isn't an equation.
• Math - Bosnian, Sunday, October 17, 2010 at 12:10am
(-7x)^3=(-7)^3 *x^3=-343x^3
(-4x)^3=(-4)^3 *x^3=-64x^3
-343x^3 +(-64x^3)=
-343x^3-64x^3= -407x^3
Related Questions
Algebra - I am so confused with the rules of exponents. I know them, but I have ...
Algebra - I'm not sure if anyone saw my question yesterday, but here it is again...
math - one third to the third power times two fifth to the third power is
math - What is P(-4) given that P(x) = 2x to the fourth power - 3x to the third ...
math - What is P(-4) given that P(x) = 2x to the fourth power - 3x to the third ...
math - What is P(-4) given that P(x) = 2x to the fourth power - 3x to the third ...
math - What is P(-4) given that P(x) = 2x to the fourth power - 3x to the third ...
algebra - 8x to the third power + 27=0. solve
algebra - 8x to the third power + 27=0. solve
geometry - 10x(to the 5th power)y(to the 4th power) over 15x(to the third power)... | {"url":"http://www.jiskha.com/display.cgi?id=1287275239","timestamp":"2014-04-20T23:55:41Z","content_type":null,"content_length":"8511","record_id":"<urn:uuid:78a4963b-db4c-4ca8-9ad4-d84467e451a0>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
Oakton Precalculus Tutor
Find an Oakton Precalculus Tutor
...While in college, I was on the Putnam Intercollegiate Math Competition team for 3 consecutive years, and won several math competitions. I had a 4.0 GPA in math as an undergraduate (graduating
with more than twice the number of required credit hours in math). I also obtained a perfect score in Co...
37 Subjects: including precalculus, physics, calculus, GRE
...Second Derivatives. E. Applications of Derivatives.
21 Subjects: including precalculus, calculus, world history, statistics
...I was always a better teacher than a student; and I guarantee your scores will raise with my help! Let's keep up the good work!I am qualified to tutor the Algebra SOL, Chemistry SOL and any
math SOL. I have been tutoring these subjects since 2003.
13 Subjects: including precalculus, chemistry, physics, algebra 2
...I can help you master statistics concepts and become more confident in your skills! I have several years of experience teaching all aspects of English. I have worked with high school and
college students studying English literature, as well as students from elementary through graduate school who were struggling with reading, writing, and grammar.
46 Subjects: including precalculus, English, Spanish, algebra 1
...As an undergraduate, I tutored peers in Spanish including grammar, writing, and speaking skills. I studied abroad for 4 months in Madrid, Spain. While there I volunteered with Helenski Espana,
a human rights group.
17 Subjects: including precalculus, Spanish, writing, physics | {"url":"http://www.purplemath.com/oakton_va_precalculus_tutors.php","timestamp":"2014-04-21T12:48:29Z","content_type":null,"content_length":"23677","record_id":"<urn:uuid:1eeb990d-846a-424d-96a1-41792ac2d24e>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00016-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computer scientist claims to have solved one of the world’s most complex mathematical problems
12 August 2010, JellyBean @ 4:21 pm
2 Comments
1. Pingback by Tweets that mention Computer scientist claims to have solved one of the world’s most complex mathematical problems | Level Beyond -- Topsy.com — August 12, 2010 @ 4:38 pm
[...] This post was mentioned on Twitter by Level Beyond, area51 . org. area51 . org said: Level Beyond: Computer scientist claims to have solved one of the world's most complex mathematical
problems http://bit.ly/a58nXa [...]
2. Trackback by World Wide News Flash — August 12, 2010 @ 4:42 pm
Computer scientist claims to have solved one of the world?s most complex mathematical problems…
I found your entry interesting do I’ve added a Trackback to it on my weblog
You must be logged in to post a comment. | {"url":"http://levelbeyond.com/2010/08/12/computer-scientist-claims-to-have-solved-one-of-the-world%E2%80%99s-most-complex-mathematical-problems/","timestamp":"2014-04-18T23:24:24Z","content_type":null,"content_length":"90795","record_id":"<urn:uuid:4301ed4b-d9d4-4b90-afae-e58cab66039f>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00463-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cone problem for cone head
June 8th 2008, 05:50 PM #1
Jun 2008
Cone problem for cone head
Answer in exact values
In the diagram, the bases of the two cones are parallel and the center X of the smaller cone is the midpoint of the height (h) of the larger cone.
Explain now you know that the two cones are similar.
Find the ratio of the volume of the smaller cone to the volume of the larger one.
How do I reach answers in EXACT values?
You reach exact values by leaving it as surds and fractions where necessary.
thanks Sean - I'm having trouble answering this last item for my class tomorrow. Looks like a is asking to explain that two angles are the same and the height of the smaller cone is 1/2 that of
the larger. In b I think the ratio is 6?
June 8th 2008, 06:05 PM #2
May 2008
June 8th 2008, 06:13 PM #3
Jun 2008
June 8th 2008, 07:58 PM #4 | {"url":"http://mathhelpforum.com/geometry/41022-cone-problem-cone-head.html","timestamp":"2014-04-16T20:44:05Z","content_type":null,"content_length":"38625","record_id":"<urn:uuid:20c46c7b-4410-4a5b-b074-053c0c30ab30>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
The number of group elements whose squares lie in a given subgroup
up vote 14 down vote favorite
This number is divisible by the order of the subgroup http://arxiv.org/abs/1205.2824.
The proof is short but non-trivial. Is this fact new or is it known for a long time?
gr.group-theory finite-groups
6 Welcome to Mathoverflow! – HJRW Jun 2 '12 at 10:12
add comment
1 Answer
active oldest votes
Here is an easy character-theoretic proof of the fact that given a subgroup $H$ of a finite group $G$ and a positive integer $k$, the number of elements $y \in G$ such that $y^k \in H$ is
divisible by $|H|$. Let $\theta_k$ be the class function on $G$ defined by $\theta_k(x)$ = |{ $y \in G \mid y^k = x$ }|. It is well known that this class function is a generalized
character. (In other words, it is a $\Bbb Z$-linear combination of irreducible characters.) The number of interest here is $\sum_{x \in H} \theta_k(x)$, which is equal to $|H|[(\theta_k)
up vote _H,1_H]$. This is clearly divisible by $|H|$ since the second factor is an integer because $\theta_k$ is a generalized character.
23 down
vote In fact, the coefficient of an irreducible character $\chi$ in $\theta_k$ is the integer I called $\nu_k(\chi)$ in my character theory book. For $k = 2$, this is the famous Frobenius-Schur
indicator, whose value lies in the set {0,-1,1}. For other integers $k$, it is true that $\nu_k(\chi)$ is an integer, but there is no upper bound on its absolute value.
Thank you, Marty! Your proof is shorter than ours but less elementary. However, I would be happy if someone provides a reference proving that the fact is known. Actually, the paper cited
in the question contains a more general fact (Corollary 5). <i>Suppose that $H$ is a subgroup of a group $G$ and $W$ is a subgroup (or a subset) of a free group $F$. Then the number of
homomorphisms $f\colon F\to G$ such that $f(W)\subseteq H$ is divisible by $|H|$.</i> (Taking $F=\Bbb Z$, we obtain the statement from the question.) Does there exist a short
character-theoretic proof for this too? – Anton Klyachko Jun 4 '12 at 23:41
1 It seems to be easy to prove via character theory that if |W| = 1, then number of homomorphisms f such that f(W) <= H is a multiple of |H|. I don't see a proof along these lines if W has
cardinality exceeding 1. – Marty Isaacs Jun 5 '12 at 22:12
Is this because the number of tuples $(g_1,\dots,g_n)$ such that $w(g_1,\dots,g_n)=x$ is a generalised character $\theta(x)$ for ANY word $w\in F$ or this is not so easy? – Anton
Klyachko Jun 6 '12 at 17:59
add comment
Not the answer you're looking for? Browse other questions tagged gr.group-theory finite-groups or ask your own question. | {"url":"http://mathoverflow.net/questions/98639/the-number-of-group-elements-whose-squares-lie-in-a-given-subgroup","timestamp":"2014-04-16T07:14:43Z","content_type":null,"content_length":"55990","record_id":"<urn:uuid:95b6dcb6-e477-4f8c-a205-ae2983d3e73f>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00417-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating correct mean and standard deviation
August 14th 2010, 06:47 AM #1
Junior Member
Mar 2010
Calculating correct mean and standard deviation
This question is asking me to calculate the correct mean and standard deviations for data that was over-measured by 3.5cm. The problem is that I only have the sample mean and standard deviation,
both calculated from the incorrect measurements (which it does not give). What do I do...? There were 25 measurements taken. I have a feeling that I should calculate the mean and standard
deviation using 25 values of "3.5cm" and then subtract the results from the original incorrect values, but this is a total guess.
This question is asking me to calculate the correct mean and standard deviations for data that was over-measured by 3.5cm. The problem is that I only have the sample mean and standard deviation,
both calculated from the incorrect measurements (which it does not give). What do I do...? There were 25 measurements taken. I have a feeling that I should calculate the mean and standard
deviation using 25 values of "3.5cm" and then subtract the results from the original incorrect values, but this is a total guess.
If the all the data samples were overmeasured by 3.5cm, then the sample mean is 3.5cm too high.
$\displaystyle\huge\frac{a+3.5+b+3.5+c+3.5+......}{ 25}=\frac{a+b+c+....}{25}+\frac{25(3.5)}{25}$
where a, b, c.... are the true sample values.
Will this make any difference to the sample standard deviation?
the standard deviation will remain the same because it's calculated using the "differences" between the samples and the mean
(the deviations from the mean remain the same).
August 14th 2010, 07:43 AM #2
MHF Contributor
Dec 2009
August 16th 2010, 11:51 PM #3
Junior Member
Mar 2010
August 17th 2010, 01:44 AM #4
MHF Contributor
Dec 2009 | {"url":"http://mathhelpforum.com/statistics/153668-calculating-correct-mean-standard-deviation.html","timestamp":"2014-04-17T05:54:53Z","content_type":null,"content_length":"40922","record_id":"<urn:uuid:1070945d-78e7-4482-8ec9-58a4e371b565>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the difference between RMS and average current?
1. 14th November 2007, 08:28 #1
Junior Member level 1
Join Date
Oct 2007
0 / 0
What is the difference between RMS and average current?
what is difference between RMS and average current?
after simulating, what should i see to measure current consumption?
till now, i'v checked the average current.
when can i use RMS value?
thanks in advance~!:D
□ 14th November 2007, 08:28
2. 14th November 2007, 10:50 #2
Full Member level 6
Join Date
Jun 2004
Sao Paulo - Brasil
64 / 64
Re: BASIC
RMS current and voltage are mostly used in alternate current (AC) measurements.
If you try to calculate the simple average of an AC sinusoidal (or any other periodic) waveform (current or voltage) symetrical from ground, you will get the value zero, as the waveform is
composed of positive and negative values, unless you take only one semi cycle (positive or negative alone).
But AC waveforms of current or voltage can generate power, so using RMS (root mean square) calculation, you can obtain a value relationed to power (V x I) without the cancellation of negative
with positive values. The RMS value is the equivalent AC voltage or current that generates the same power of a DC voltage or current value.
When you squared (to 2nd power) the negative samples/values of the waveform, you turn them into positive values that are added to the positive squared samples. After that you divide by the
number of samples and do the root square of the "averaged sum of squares" and obtain the RMS value (for sinusoidal waveform, RMS value is 70% of the peak value, and average value of one semi
cycle is 63% of the peak value).
RMS calculation can be used also in measuring vibration or any other physic event that is variable. Obs. power is not measured in RMS, there is no the unit Wrms, as it does not make sense.
1 members found this post helpful.
□ 14th November 2007, 10:50
3. 17th November 2007, 13:11 #3
Full Member level 6
Join Date
Jun 2004
19 / 19
which software of your simulation,different software is different
□ 17th November 2007, 13:11
4. 19th November 2007, 00:45 #4
Junior Member level 1
Join Date
Oct 2007
0 / 0
really?...I didnt know...
sim. sftware is hspice or hsim..and
i usually use awaves or sandwork waveview to see power.
5. 19th November 2007, 16:43 #5
Member level 2
Join Date
Feb 2005
1 / 1
Re: BASIC
the average valure is the DC component in the AC signal, whereas the RMS is the effective value that would generate a heat through a resistor like DC. | {"url":"http://www.edaboard.com/thread110649.html","timestamp":"2014-04-21T04:42:04Z","content_type":null,"content_length":"70909","record_id":"<urn:uuid:0600ec6f-13d3-4dbc-b1cd-ce62544f6a6c>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00156-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Predicted probability after xtlogit with i option
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: Predicted probability after xtlogit with i option
From Urmi Bhattacharya <ub3@indiana.edu>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Predicted probability after xtlogit with i option
Date Wed, 29 Jun 2011 20:43:07 -0400
Hi Joerg,
Thank you for the help. But I have to use the xtlogit which I guess
means that I have to do with predicted probabilities assuming the
random effect component to be 0.
On Wed, Jun 29, 2011 at 1:59 PM, Joerg Luedicke
<joerg.luedicke@gmail.com> wrote:
> I am not 100% sure if its doable with -xtlogit-, but you could use -xtmelogit-
> for which the default prediction (-mu-) takes the random effect into account
> (see -help xtmelogit postestimation-). (Using the option -fixedonly- with
> -predict- after -xtmelogit- should replicate your predictions from -xtlogit-).
> J.
> On Wed, Jun 29, 2011 at 1:29 PM, Urmi Bhattacharya <ub3@indiana.edu> wrote:
>> Hello Statalisters,
>> I am running the following xtlogit model
>> xtlogit school_left childage i.childfemale i.urban i.scstobc
>> i.casteother i.dadp i.dadm i.momp i.momm wagep wage5 wage8 wage9 distp
>> distm disth percapcons durat1 durat2 durat3 durat4 durat5 durat6
>> durat7 durat8 durat9 durat10 durat11, nocons nolog i(childcaseid)
>> After estimation, I need to find the predicted probability for each individual.
>> I do the following:
>> predict prob,pu0
>> The help file says that this calculation assumes that the random
>> effect is 0. My question is if it would be possible to find the
>> predicated probability not assuming that? Or is this the best I can
>> do.
>> Thanks
>> Urmi Bhattacharya
>> *
>> * For searches and help try:
>> * http://www.stata.com/help.cgi?search
>> * http://www.stata.com/support/statalist/faq
>> * http://www.ats.ucla.edu/stat/stata/
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2011-06/msg01289.html","timestamp":"2014-04-19T04:50:38Z","content_type":null,"content_length":"10063","record_id":"<urn:uuid:716b54d3-cfcf-49c9-a483-310fd5525efa>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
Staten Island Precalculus Tutor
Find a Staten Island Precalculus Tutor
...Many GMAT students have not seen quadratic equations, right isosceles triangles, sentence corrections, etc., since their days in school. I provide a comprehensive review of all the concepts
tested on the GMAT, and focus on the specific subjects you remember least well, so that by the time Test D...
18 Subjects: including precalculus, geometry, GRE, algebra 1
...I earned 5s on both AP Calculus AB and AP Calculus BC in 2000 and 2001. Additionally, calculus has remained an essential part of the "bread and butter" that I have used to understand the
dynamic variations in the populations of cells undergoing therapeutic treatment during my work in the Nationa...
13 Subjects: including precalculus, reading, writing, physics
...I am very focused on being approachable and flexible in scheduling.I am a medical school graduate, and now a physician at the NIH. I have passed Step 1 and Step 2 of the USMLE medical licensing
exam, and am interviewing for residency training posts in obstetrics/gynecology. I am a lifelong ivy league academic with both a graduate degree and a medical degree.
44 Subjects: including precalculus, English, reading, writing
...I have an extensive background in physics (bachelors degree) and chemistry (2 year degree + 2 years of undergrad coursework). Keen on building a rock-solid foundation for this topic for my
students through examples and problem-based learning, never skimping on understanding the underlying basic p...
17 Subjects: including precalculus, chemistry, Spanish, physics
...In the past, for example, I've drawn pictures, sung songs, broken down information into bullet points, acted it out, made flash cards, and so many more! Whatever works best for the student
works for me. I travel to student's homes (or any other location you prefer) to make the experience as convenient as possible for you.
37 Subjects: including precalculus, chemistry, physics, calculus | {"url":"http://www.purplemath.com/staten_island_ny_precalculus_tutors.php","timestamp":"2014-04-17T07:39:20Z","content_type":null,"content_length":"24417","record_id":"<urn:uuid:69624eb1-9b5d-4f7d-9ad0-cb0658b22fbc>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00389-ip-10-147-4-33.ec2.internal.warc.gz"} |
Post a reply
y' = 0.11y, where y' is the amount you get per year, y is the amount you have, and t is time (in years)
dy/y = 0.11dt
ln(|y|) = 0.11t + C
y = C*e^(0.11t)
y(0) = 3,415,569
y(0) = C*e^0
C = 3,415,569
y = 3,415,569*e^(0.11t) --> This is the equation you're looking for
y = 5,000,000
5,000,000 = 3,415,569*e^(0.11t)
e^(0.11t) = 1.464
0.11t = 0.381
t = 3.464
If you don't know calculus, don't worry about the above. If you do, just ask and I'll explain all the steps.
Or are you saying that the interest rate is compounded daily? I'm having a little trouble understanding your wording. | {"url":"http://www.mathisfunforum.com/post.php?tid=2389&qid=22947","timestamp":"2014-04-19T09:23:55Z","content_type":null,"content_length":"20037","record_id":"<urn:uuid:92e205ef-57d2-4087-82e0-1595e6286b27>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00434-ip-10-147-4-33.ec2.internal.warc.gz"} |
On W 1,p estimates for elliptic equations in divergence form
Results 1 - 10 of 20
, 2005
"... Abstract. This is the fourth article of our series. Here, we apply the results of [AM1] to study weighted norm inequalities for the Riesz transform of the Laplace-Beltrami operator on Riemannian
manifolds and of subelliptic sum of squares on Lie groups, under the doubling volume property and Poincar ..."
Cited by 23 (6 self)
Add to MetaCart
Abstract. This is the fourth article of our series. Here, we apply the results of [AM1] to study weighted norm inequalities for the Riesz transform of the Laplace-Beltrami operator on Riemannian
manifolds and of subelliptic sum of squares on Lie groups, under the doubling volume property and Poincaré inequalities. 1. Introduction and
- Ann. Inst. Fourier
"... Abstract. Let L = −div(A(x)∇) be a second order elliptic operator with real, symmetric, bounded measurable coefficients on R n or on a bounded Lipschitz domain subject to Dirichlet boundary
condition. For any fixed p> 2, a necessary and sufficient condition is obtained for the boundedness of the Rie ..."
Cited by 10 (1 self)
Add to MetaCart
Abstract. Let L = −div(A(x)∇) be a second order elliptic operator with real, symmetric, bounded measurable coefficients on R n or on a bounded Lipschitz domain subject to Dirichlet boundary
condition. For any fixed p> 2, a necessary and sufficient condition is obtained for the boundedness of the Riesz transform ∇(L) −1/2 on the L p space. As an application, for 1 < p < 3 + ε, we
establish the L p boundedness of Riesz transforms on Lipschitz domains for operators with V MO coefficients. The range of p is sharp. The closely related boundedness of ∇(L) −1/2 on weighted L 2
spaces is also studied. 1.
, 2007
"... Abstract. We consider non-linear elliptic equations having a measure in the right hand side, of the type div a(x, Du) = µ, and prove differentiability and integrability results for solutions.
New estimates in Marcinkiewicz spaces are also given, and the impact of the measure datum density propertie ..."
Cited by 8 (4 self)
Add to MetaCart
Abstract. We consider non-linear elliptic equations having a measure in the right hand side, of the type div a(x, Du) = µ, and prove differentiability and integrability results for solutions. New
estimates in Marcinkiewicz spaces are also given, and the impact of the measure datum density properties on the regularity of solutions is analyzed in order to build a suitable Calderón-Zygmund
theory for the problem. All the regularity results presented in this paper are provided together with explicit local a priori estimates.
- Math. Z , 2008
"... m u), D m ∫ 〈 ϕ 〉 dx = |F | Ω p(x)−2 F, D m 〉 ϕ dx, for allϕ ∈ C ∞ c (Ω; RN), m> 1, with variable growth exponent p: Ω → (1, ∞) we prove that if |F | p(·) ∈ L q loc (Ω) with 1 < q < n n−2 +
δ, then |Dmu | p(·) ∈ L q loc (Ω). We should note that we prove this implication both in the non–degenerate ..."
Cited by 6 (3 self)
Add to MetaCart
m u), D m ∫ 〈 ϕ 〉 dx = |F | Ω p(x)−2 F, D m 〉 ϕ dx, for allϕ ∈ C ∞ c (Ω; RN), m> 1, with variable growth exponent p: Ω → (1, ∞) we prove that if |F | p(·) ∈ L q loc (Ω) with 1 < q < n n−2 + δ,
then |Dmu | p(·) ∈ L q loc (Ω). We should note that we prove this implication both in the non–degenerate (µ> 0) and in the degenerate case (µ = 0). 1.
, 2004
"... Abstract. We develop a new approach to the L p Dirichlet problem via L 2 estimates and reverse Hölder inequalities. We apply this approach to second order elliptic systems and the polyharmonic
equation on a bounded Lipschitz domain Ω in Rn. For n ≥ 4 and 2 − ε < + ε, we establish the solvability of ..."
Cited by 4 (2 self)
Add to MetaCart
Abstract. We develop a new approach to the L p Dirichlet problem via L 2 estimates and reverse Hölder inequalities. We apply this approach to second order elliptic systems and the polyharmonic
equation on a bounded Lipschitz domain Ω in Rn. For n ≥ 4 and 2 − ε < + ε, we establish the solvability of the Dirichlet problem with boundary data in p < 2(n−1) n−3 L p (∂Ω). In the case of the
polyharmonic equation ∆ ℓ u = 0 with ℓ ≥ 2, the range of p is sharp if 4 ≤ n ≤ 2ℓ + 1. 1.
- Adv. Math
"... Abstract. Let Ω be a bounded Lipschitz domain in Rn. We develop a new approach to the invertibility on Lp (∂Ω) of the layer potentials associated with elliptic equations and systems in Ω. As a
consequence, for n ≥ 4 and 2(n−1) − ε < p < 2 where ε> 0 depends on Ω, n+1 we obtain the solvability of the ..."
Cited by 3 (0 self)
Add to MetaCart
Abstract. Let Ω be a bounded Lipschitz domain in Rn. We develop a new approach to the invertibility on Lp (∂Ω) of the layer potentials associated with elliptic equations and systems in Ω. As a
consequence, for n ≥ 4 and 2(n−1) − ε < p < 2 where ε> 0 depends on Ω, n+1 we obtain the solvability of the Lp Neumann type boundary value problems for second order elliptic systems. The analogous
results for the biharmonic equation are also established.
, 2005
"... Abstract. We study the homogeneous elliptic systems of order 2ℓ with real constant coefficients on Lipschitz domains in Rn, n ≥ 4. For any fixed p> 2, we show that a reverse Hölder condition
with exponent p is necessary and sufficient for the solvability of the Dirichlet problem with boundary data i ..."
Cited by 3 (1 self)
Add to MetaCart
Abstract. We study the homogeneous elliptic systems of order 2ℓ with real constant coefficients on Lipschitz domains in Rn, n ≥ 4. For any fixed p> 2, we show that a reverse Hölder condition with
exponent p is necessary and sufficient for the solvability of the Dirichlet problem with boundary data in Lp. We also obtain a simple sufficient condition. As a consequence, we establish the
solvability of the Lp Dirichlet problem for n ≥ 4 and 2 −ε < p < 2(n−1) n−3 +ε. The range of p is known to be sharp if ℓ ≥ 2 and 4 ≤ n ≤ 2ℓ + 1. For the polyharmonic equation, the sharp range of p is
also found in the case n = 6, 7 if ℓ = 2, and n = 2ℓ + 2 if ℓ ≥ 3. 1.
"... Abstract. In this paper we study the crack initiation in a hyper-elastic body governed by a Griffith’s type energy. We prove that, during a load process through a time dependent boundary datum
of the type t → tg(x) and in absence of strong singularities (this is the case of homogeneous isotropic mat ..."
Cited by 2 (0 self)
Add to MetaCart
Abstract. In this paper we study the crack initiation in a hyper-elastic body governed by a Griffith’s type energy. We prove that, during a load process through a time dependent boundary datum of the
type t → tg(x) and in absence of strong singularities (this is the case of homogeneous isotropic materials) the crack initiation is brutal, i.e., a big crack appears after a positive time ti> 0. On
the contrary, in presence of a point x of strong singularity, a crack will depart from x at the initial time of loading and with zero velocity. We prove these facts (largely expected by the experts
of material science) for admissible cracks belonging to the large class of closed one dimensional sets with a finite number of connected components. The main tool we employ to address the problem is
a local minimality result for the functional Z E(u, Γ): = f(x, ∇v) dx + kH 1 (Γ),
, 708
"... Abstract. We give dimension-free regularity conditions for a class of possibly degenerate sub-elliptic equations in the Heisenberg group exhibiting super-quadratic growth in the horizontal
gradient; this solves an issue raised in [40], where only dimension dependent bounds for the growth exponent ar ..."
Cited by 1 (0 self)
Add to MetaCart
Abstract. We give dimension-free regularity conditions for a class of possibly degenerate sub-elliptic equations in the Heisenberg group exhibiting super-quadratic growth in the horizontal gradient;
this solves an issue raised in [40], where only dimension dependent bounds for the growth exponent are given. We also obtain explicit a priori local regularity estimates, and cover the case of the
horizontal p-Laplacean operator, extending some regularity proven in [17]. In turn, the a priori estimates found are shown to imply the suitable local Calderón-Zygmund theory for the related class of
non-homogeneous, possibly degenerate equations involving discontinuous coefficients. These last results extend to the sub-elliptic setting a few classical non-linear Euclidean results [30, 14], and
to the non-linear case estimates of the same nature that were available in the sub-elliptic setting only for solutions to linear equations.
"... Abstract. This paper continues the study of the mixed problem for the Laplacian. We consider a bounded Lipschitz domain Ω ⊂ R n, n ≥ 2, with boundary that is decomposed as ∂Ω = D ∪ N, D and N
disjoint. We let Λ denote the boundary of D (relative to ∂Ω) and impose conditions on the dimension and shap ..."
Cited by 1 (0 self)
Add to MetaCart
Abstract. This paper continues the study of the mixed problem for the Laplacian. We consider a bounded Lipschitz domain Ω ⊂ R n, n ≥ 2, with boundary that is decomposed as ∂Ω = D ∪ N, D and N
disjoint. We let Λ denote the boundary of D (relative to ∂Ω) and impose conditions on the dimension and shape of Λ and the sets N and D. Under these geometric criteria, we show that there exists p0>
1 depending on the domain Ω such that for p in the interval (1, p0), the mixed problem with Neumann data in the space L p (N) and Dirichlet data in the Sobolev space W 1,p (D) has a unique solution
with the non-tangential maximal function of the gradient of the solution in L p (∂Ω). We also obtain results for p = 1 when the Dirichlet and Neumann data comes from Hardy spaces, and a result when
the boundary data comes from weighted Sobolev spaces. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=11475131","timestamp":"2014-04-18T10:56:30Z","content_type":null,"content_length":"36074","record_id":"<urn:uuid:ee8d78ef-c228-4901-aa68-ccefeebab72b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/copythat/asked","timestamp":"2014-04-18T03:54:25Z","content_type":null,"content_length":"106936","record_id":"<urn:uuid:29ba1fe9-5cf2-44e1-81a2-76ad86f55bb3>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00634-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Find the value of lim x→1 f(x)g(x) − 1 f(x) + 8g(x) when lim x→1f(x) = 6, lim x→1g(x) = 6
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/504bc6bce4b0985a7a58c050","timestamp":"2014-04-21T15:49:44Z","content_type":null,"content_length":"82122","record_id":"<urn:uuid:b917e254-2983-489b-b9d3-7566903f60a8>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00061-ip-10-147-4-33.ec2.internal.warc.gz"} |
Density stability; questions for those who like computer calculation
up vote 5 down vote favorite
BACKGROUND: The question, which has its roots in a question asked on MO by O'Bryant, concerns the relative density of certain subsets, $B$, of ${\mathbb N}$ in congruence classes modulo a power of 2.
Let $I$ be such a congruence class. I'll say that $B$ is "stable in $I$" if there is a $c$ such that $B$ has relative density $c$ in $J$ whenever $J$ is a congruence class contained in $I$ whose
modulus is also a power of 2.
Suppose $B$ consists of all $n$ such that the coefficient of $x^n$ in the reciprocal of the element $g=1+x+x^4+x^9+x^{16}+\dots$ of ${\mathbb Z}/2[[x]]$ is 1. Cooper et al. showed that $B$ has
density 0 in 12 of the mod 16 congruence classes. I extended the result to 3 of the 4 remaining classes. But calculations by O'Bryant suggest that in the class 15 mod 16, $B$ is stable with relative
density $1/2$. For a detailed account see my note Disquisitiones Arithmeticae and online sequence A108345.
These QUESTIONS pertain to sets introduced by Cooper et. al:
1. Replace the exponents $0, 1, 4, 9, \dots$ in $g$ by the numbers $3n^2-2n$, $n \in {\mathbb Z}$, to get a new $B$. This $B$ has density 0 in 7 of the classes mod 8. Does the computer suggest that
it is stable with relative density 1/2 in the class 0 mod 8?
2. Suppose the exponents are $5n^2-4n$, $n \in {\mathbb Z}$. Does the computer suggest that there's a $q$ such that the new $B$ you get is stable in each mod $q$ congruence class? And if so, what do
the relative densities appear to be? (The density is provably 0 in some mod 8 classes).
3. Answer the same question as 2. when the exponents are the $5n^2-2n$, $n \in {\mathbb Z}$.
EDIT: I'll give a modified and generalized version of the question (and an expansion of my answer) using notation and ideas from my MO question on characteristic 2 thetas. Let $L$ be the field of
formal Laurent series in $x$ over ${\bf Z}/2$. If $f$ (not zero) is in $L$, $B(f)$ consists of all $n$ for which the coefficient of $x^n$ in $1/f$ is 1. Now fix $l=2m+1$, $m>0$, and for $i$ in $\
lbrace1,...,m\rbrace$ let $[i]$ be the element of ${\bf Z}/2[[x]]$ defined in the "thetas" question.
Question: For $q$ a power of 2, what does the computer suggest about the relative density of $B([i])$ in the various mod $q$ congruence classes? (Since all elements of $B[i]$ are congruent to $-(i^2)
$ mod $l$, these relative densities are at most $1/l$).
Example: When $l=3$, it can be shown that $B([i])$ has density 0 in all congruence classes mod 8, with the possible exception of 7. And the computer (perhaps) indicates that in the 7 mod 8 class (or
any class contained therein) the relative density is 1/6.
My "answer" generalizes the first sentence of the example. I made no computer calculations--indeed the computer evidence is at first sight contrary to my results because of the slow approach to zero.
Let $L(q)$ contained in $L$ be the field of formal Laurent series in $x^q$. Then $L$ is the direct sum of the $(x^k)L(q)$, $k$ in $\lbrace0,...,q-1\rbrace$. Let $p_{(q,k)}$ be the obvious projection
map $L\to(x^k)L(q)$. Let $S$ contained in
${\bf Z}/2[[x]]$
be the smallest ring that contains all the $[i]$ and is stable under the $p_{(q,k)}$ for all $q$ and $k$. It can be shown that every element of $S$ is the mod 2 reduction of the Fourier series of an
integral weight modular form for a congruence group. A theorem of Serre then shows that if $\sum((c_n)(x^n))$ is in $S$ then the set of $n$ for which $c_n$ is 1 has density 0.
As a corollary one finds: Let $p$ be a $p_{(q,k)}$. If $p(1/[i])$ is in $S$ then $B([i])$ has density 0 in the class $k$ mod $q$.
By making use of the quintic relations from my theta question I can show that the hypothesis of the theorem holds in various cases. In particular suppose $i$ is prime to l. When $l=5$, $B=B([i])$ has
density 0 in each mod 32 class except perhaps the 5 classes $n=7$ mod 8 and $n=28$ mod 32. When $l=7$, $B$ has density 0 in each mod 32 class except perhaps the 7 classes 7 mod 8, 14 mod 16 and 28
mod 32. When $l=9$, $B$ has density 0 in each mod 64 class except perhaps the 19 classes 1 and 7 mod 8, 28 mod 32, and 48 mod 64.
In the various classes qualified by "except perhaps" in the above paragraph (and the subclasses contained therein) it seems plausible that the relative densities are $1/(2l)$. But this may be wishful
thinking. I hope that someone will make further calculations.
FURTHER EDIT: Heres a more explicit and more speculative version of my question. Let n_j be the negative exponents appearing in the Laurent series 1/[i], 1/[2i], 1/[4i], i/[8i],..., and q_j be the
largest power of 2 dividing n_j.
QUESTION: Does computer evidence support the following speculations?
(1) The relative density of B([i]) in each congruence class n_j mod 8q_j, and in all congruence classes modulo a power of 2 contained therein, is 1/(2l).
(2) Outside of these congruence classes B([i]) has density 0.
For example when l=9 and i=1 the n_j are -16,-7,-4 and -1, and the classes in (1) are 1 mod 8, -1 mod 8, -4 mod 32 and -16 mod 128. The technique I indicated in my earlier edit shows that (2) holds
in this case, so one gets 128-37 classes mod 128 where B has density 0. The technique also shows that (2) holds when l=3,5 or 7. This isn't much evidence, and there's far less for (1). But as these
are the simplest answers one might hope for, I'd be interested in any calculations concerning them.
nt.number-theory computational-number-theo modular-forms
Hi Paul, you forgot to put brackets around the exponent $16$ in your equation g=1+x+x^4+x^9+x^16..., you should have used g=1+x+x^4+x^9+x^{16} to get the correct result $g=1+x+x^4+x^9+x^{16}...$ I
don't have enough points to edit that for you, so you'll have to fix that. And you should just be able to hit the "edit" button under your question to perform the edit. MO shouldn't lose the
question if you do that. – sleepless in beantown Nov 18 '10 at 4:25
I've attempted to TeX the most recent edit. Hope I didn't corrupt the meaning. – Gerry Myerson Nov 18 '10 at 5:23
Gerry-Thanks for the TeXing; I don't know how to do such things. The meaning is intact, though I prefer Z/2 to the ambiguous Z_2 – paul Monsky Nov 19 '10 at 2:39
@paul, ${\bf Z}_2$ replaced by ${\bf Z}/2$. – Gerry Myerson Nov 19 '10 at 11:31
add comment
2 Answers
active oldest votes
This is really commentary on 2. and 3. of my question in the light of recent discoveries, but as I don't know how to edit the question without losing it, I'll post my discoveries as an
answer. Following Cooper et. al. denote the sets B of 2. and 3. by B_(1,10) and B_(3,10). I can now prove:
a. The n in B_(1,10) that are neither 7 mod 8 nor 0 mod 32 have density 0.
b. The n in B_(3,10) that are neither 0 mod 8 nor 25 mod 32 have density 0.
I suspect that B_(1,10) is stable with relative density 1/2 in each of the 5 remaining congruence classes mod 32, and that the same is true for B_(3,10). It would in fact suffice to handle
the classes 7 mod 8 for B_(1,10) and 0 mod 8 for B_(3,10), and I would appreciate calculations for these mod 8 congruence classes to confirm or disconfirm my suspicions.
up vote 1
down vote The proofs of a. and b. use a deep result of Serre's on the mod J reduction of the Fourier expansion of a modular form of integral weight,when the Fourier coefficients lie in a number
field, together with some simple techniques from my answers to O'Bryant's questions. I hope to post the proofs on arXiv one of these days.
EDIT: The arguments can now be found on arXiv NT 1107.4137, "The reciprocals of some characteristic 2 'theta series' ". I use the techniques mentioned above to prove the analogues to (a)
and (b) when l=7,9,11,13 and 15 as well. (See the edit to my question to see what I'm describing). But I can't handle one class mod (128) for l=13 and 15, and though my method might give
density 0 results in some congruence classes when l>15, the computer isn't up to the job. So I'd still be grateful to anyone willing to make further computer calculations to see whether my
speculations are plausible.
add comment
For prime l I've now proved (a corrected version of) the speculation (2) made in the further edit to my question. (See Theorem I below). The proof avoids the extensive computer
verifications made in arXiv NT 1107.4137. So let K be an algebraic closure of Z/2. Call an element g=a_0+a_1(x)+a_2(x^2)+... of K[[x]] "sparse" if the n with a_n non-zero form a set of
density 0.
Lemma 1----Suppose g is in the subring R of K[[x]] generated by K and the [i]. Then g is sparse. (This follows from the fact that the elements of R are the mod 2 reductions of Fourier
expansions of modular forms of integral weight, and the theorem of Serre mentioned in my previous answer).
We have shown in another question that the above ring R is the co-ordinate ring of an affine curve C. Let m_0 be the maximal ideal of R generated by [1],...,[l-1], and p_0 be the point of
C corresponding to m_0. We showed in addition (using the fact that l is prime) that m_0 is the only maximal ideal of R containing any of [1],...,[l-1], and that there are (l-1)_/2 linear
branches at p_0 with distinct branch tangents.
Lemma 2---Suppose g=a_0+a_1(x)+... is in K[[x]], that g is the quotient of an element of R by a product of powers of [i], and that g has positive ord at every branch of C centered at p_0.
Then g is sparse. (For g is in the localization of R at every maximal ideal other than m_0. Furthermore if n is large g^n is in the localization of R at m_0. So for n large, g^n is in R,
and is sparse by Lemma 1. Take n to be a large power of 2 to get the result.
---------Now let U=U_2 be the operator K[[x]]-->K[[x]] taking sum(a_n)(x^n) to sum(a_2n)(x^n). Note that U([i][i]g)=[i]U(g).
up vote 1 Lemma 3---The subring of Z/2[[x]] generated by the [i] is stable under U. (It suffices to show that U takes a product of terms, [ ], to an element of this ring. We argue by induction on
down vote the number of terms in the product. We may assume that the first 2 terms are [2i] and [2j]. Then [2i][2j] is the sum of [2i][j]^4, [2j][i]^4 and ([i+j][i-j])^2. Multiplying by the
remaining terms in the product, applying U, and using induction we get the result).
Now let L be the field of Laurent series in x over Z/2, and L(q) be the field of Laurent series in x^q, where q is a power of 2. We write p_(q,k) for the obvious projection map L-->(x^k)L
(q). Note that for g in Z/2[[x]], U(g) is the square root of p_(2,0)(g). So if g is in the ring of Lemma 3, then p_(q,0)(g) is a qth power in that ring.
Theorem I---Let q be a power of 2, and suppose that k is in {0,1,2,3,4,5,6,7}. Suppose that g_i=p_(8q,kq)(1/[i]) is in Z/2[[x]] for all i--that is to say that no negative exponents appear
in any g_i. Then each g_i is sparse. (In other words each B([i]) has density 0 in each congruence class kq mod 8q).
To prove this note that p_(q,0)(1/[i]) is the quotient of p_(q,0)([i]^(8q-1)) by [i]^8q. The paragraph before Theorem I shows that this is the quotient of v^q by [i]^8q for some v in R.
Applying p_(8q,kq) we find that g_i=(1/[i]^8q)(w^q), where w=p_(8,k)(v). Since p_(8,k) stabilizes R, g_i is the quotient of an element of R by a power of [i]. The exponent restriction
tells us that if g is a g_j, then g has positive ord at each branch of C centered at m_0, and we invoke Lemma 2.
Example: Suppose l=13, q=16 and k=3. The negative exponents appearing in the i/[i] are -36,-23,-10, -25, -16,-9,-4,and -1. Since none of these is congruent to 48 mod 128, all the g_i are
in Z/2[[x]]. It follows from Theorem I that each B([i]) has density 0 in the congruence class 48 mod 128, a result that had eluded me.
add comment
Not the answer you're looking for? Browse other questions tagged nt.number-theory computational-number-theo modular-forms or ask your own question. | {"url":"http://mathoverflow.net/questions/39635/density-stability-questions-for-those-who-like-computer-calculation/84069","timestamp":"2014-04-17T21:51:24Z","content_type":null,"content_length":"69139","record_id":"<urn:uuid:48205acd-73aa-43fb-8177-757aea33ccb4>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00133-ip-10-147-4-33.ec2.internal.warc.gz"} |
Determining Mass Transfer Coefficients
It isn't reasonable to expect mass transfer coefficients to be readily available for any and all systems.
The "best" solution is to experimentally measure coefficients on a bench scale (using a wetted-wall column, etc.) and then use the results to design a full scale separation column. When this isn't
feasible, more approximate arrangements must be made.
Dimensional analysis of mass transfer suggests correlations of the form:
A number of correlations matching this form are presented in your textbook, pp. 533-40.
Treybal (1987) suggest the following correlations for use with beds packed with Raschig rings or Berl saddles:
subject to the following definitions
We will be using these in a class example.
Since the basic mechanisms of heat, mass, and momentum transport are essentially the same, it is sometimes possible to directly relate heat transfer coefficients, mass transfer coefficients, and
friction factors by means of analogies.
Analogies involving momentum transfer are only valid if there is no form drag, hence they are pretty much limited to flow over flat plates and inside (but not across) conduits. Also recognize that if
there is much heat or mass transfer, it may change fluid and flow characteristics enough to make analogy worthless; in some cases, a viscosity correction may be used to compensate.
A simple, crude analogy recognizes that turbulent eddies transport heat and mass as well as momentum, thus one can argue that the eddie diffusivities are the same for all modes of transport, that is:
E[T] = E[H] = E[M]. These values are seldom at hand, though.
Another analogy -- probably the oldest -- is the "Reynolds Analogy", which relates the Fanning friction factor for fluid flow to heat transport:
where the right hand side is the "Stanton Number". The Stanton number is a dimensionless group made up of other, more familiar groups. It can be defined for heat transfer or for mass transfer.
The Reynolds analogy gives reasonable values for gases where the Prandtl number is roughly one.
Note that one-half the friction factor is the ratio of the overall momentum transported to the wall to the inertial effects in the mainstream. The Stanton number represents the ratio of the overall
heat transport to the wall to the convective effects in the mainstream. The Reynolds analogy says that these ratios are equal for mass and momentum transport.
The Reynolds analogy postulates direct interaction between the turbulent core of the flow and the walls. If a laminar sublayer is included between these, the Prandtl-Taylor analogy applies:
This form includes the ratio of the mean velocities in the sublayer and core as well as the Prandtl number for heat transfer. Note that when the Prandtl number is equal to one, this equation reduces
to the Reynolds analogy.
Probably the most widely used is the Colburn (or Colburn-Chilton) analogy. It is based on correlations and data rather than on assumptions about transport mechanisms. The Colburn "j-factor" for heat
transfer and the Colburn-Chilton j-factor for mass transfer are:
The heat transfer factor may be modified with the Seider-Tate viscosity correction
although this does not seem to be universally done.
When the j-factors are used, the fluid properties in the Stanton number are evaluated at the mean bulk average temperature and those for the Prandtl number at the film temperature (this means two
heat capacities!).
The Colburn-Chilton analogy is simply
valid for turbulent flow in conduits with N[Re] > 10000, 0.7 < N[Pr] < 160, and tubes where L/d > 60 (the same constraints as Seider-Tate).
A wider range of data is correlated by the Friend-Metzner analogy:
which is valid when N[Re] > 10000, 0.5 < N[Pr] < 600, 0.5 < N[Sc] < 3000.
Coefficients from Reference Conditions
Another possibility is to estimate mass transfer coefficients by comparison with measured values for reference systems.
For instance, the overall mass transfer coefficients for the oxygen-water system has been measured (see MSH Fig 18.21, p. 581) and can be used to predict overall coefficients for other systems using
MSH suggest a typical value of n=0.3, so new values can be obtained using
For gas-film coefficients, MSH provide data for ammonia-water, and recommend estimation using
1. Brodkey, R.S. and H.C. Hershey, Transport Phenomena: A Unified Approach, McGraw-Hill, 1988, pp. 516-20.
2. McCabe, W.L., J.C. Smith, and P. Harriott, Unit Operations of Chemical Engineering (5th Edition), McGraw-Hill, 1993, pp. 348-52.
3. McCabe, W.L., J.C. Smith, and P. Harriott, Unit Operations of Chemical Engineering (6th Edition), McGraw-Hill, 2001, pp. 532-40, 580-88.
R.M. Price
Original: 12/99
Modified: 1/27/2003
Copyright 1999, 2003 by R.M. Price -- All Rights Reserved | {"url":"http://facstaff.cbu.edu/~rprice/lectures/mtcoeff.html","timestamp":"2014-04-23T17:26:52Z","content_type":null,"content_length":"6723","record_id":"<urn:uuid:cfe2bdef-27a1-4bb4-bc46-a92dc3e06e93>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00121-ip-10-147-4-33.ec2.internal.warc.gz"} |
Area of a hyperbolic triangle
November 22nd 2013, 11:04 PM #1
Nov 2013
Hong Kong
Area of a hyperbolic triangle
In a Pointcare disc model, how to find the area of a hyperbolic triangle with vertices 0, i and (sqr(2)) - 1 ??
Re: Area of a hyperbolic triangle
Hey yfishmoony.
Have you tried setting up an integral for a portion of the surface in that particular co-ordinate system?
November 23rd 2013, 07:11 PM #2
MHF Contributor
Sep 2012 | {"url":"http://mathhelpforum.com/geometry/224553-area-hyperbolic-triangle.html","timestamp":"2014-04-21T10:17:29Z","content_type":null,"content_length":"31139","record_id":"<urn:uuid:faf70a54-feba-42e9-a142-c64436664f8d>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finite formula found for partition numbers - PLOS Blogs Network
Finite formula found for partition numbers
By S.C. Kavassalis
Posted: January 20, 2011
So this isn’t physics*, but if you squint hard enough, you can probably make a connection. The hot topic today is Ken Ono‘s latest work on the partition function:
Ken Ono, Amanda Folsom, & Zach Kent (2011). l-adic properties of the partition function American Institute of Mathematics.
Ken Ono & Jan Bruinier (2011). AN ALGEBRAIC FORMULA FOR THE PARTITION FUNCTION American Institute of Mathematics.
A EurekAlert press release appeared today, entitled: New math theories reveal the nature of numbers and people are already whispering “Fields Medal”. Now, I haven’t thoroughly read the paper yet,
but, since I’m not a number theorist, my commentary probably won’t change very much anyway. Obviously, like most press releases, this one is full of hyperbole and ridiculous sentences like, “the
team was determined go beyond mere theories”, but the actual work being discussed is fascinating.
Now, when we talk about a partition function in the context of Ono’s work, we don’t mean the partition function that is familiar to most physicists, we mean what number theorists call a partition
In this setting, a partition is a way of representing a natural number $n$ as the sum of natural numbers (ie. for $n = 3$, we have three partitions, $3$, $2 + 1$, and $1 + 1 + 1$, independent of
order). Thus, the partition function, $p(n)$, represents the number of possible partitions of $n$. So, $p(3) = 3$, $p(4) = 5$ (for $n = 4$, we have: $4$, $3 + 1$, $2 + 2$, $2 + 1 + 1$, $1 + 1 + 1
+ 1$) , etc..
To be slightly more technical, from Ken Ono and Kathrin Bringman [1],
A partition of a non-negative integer n is a non-increasing sequence of positive integers whose sum is $n$.
The concept is straight forward, but how to obtain these partition numbers, in general, is actually no trivial matter.
The master of series, Leonhard Euler, worked on solving this problem, to less than fully satisfying results. Using the reciprocal of what is now called Euler’s function, we get the generator for $p
(n)$ by this infinite product,
$sum_{n=0}^{infty} p(n)q^n= prod_{n=1}^{infty}frac{1}{1-q^n}$.
Here, $q^n$ counts the number of ways to write, $n = a_1 + 2a_2 + 3a_3 +ldots$, for $a_i in mathbb{N}$, where each number $i$ appears $a_i$ times.
Obviously, for large $n$, this can be unwieldy, and it doesn’t lead to an explicit formula, but as long as you didn’t need more than 200~ partition numbers, it was okay.
Mathematics had to wait until the early 1900s before anyone was to expand on Euler’s partition number generator, when Srinivasa Ramanujan made contact with G.H. Hardy. Ken Ono actually has a
beautiful historical, and mathematical, account of the Ramanujan and Hardy story, called “The Last Words of a Genius” [pdf].
Ramanujan famously proved an unusual and surprising result that [2],
$p(5n + 4) = 0 (mod 5)$,
$p(7n + 5) = 0 (mod 7)$,
$p(11n + 6) = 0 (mod 11)$.
He was also responsible for the first attempt at an explicit, although not exact, formula for $p(n)$ with Hardy,
$p(n)simfrac{exp(pisqrt{2n/3})}{4nsqrt{3}}$ as $n rightarrow infty$.
A decade later, Hans Rademacher came up with an exact formula, involving a convergent series, Dedekind eta functions, and Farey sequences; it was computationally unpleasant, to say the least (and not
worth TeXing in here, but see Wikipedia if interested). It was also not substantially more useful than Euler’s initial work (although more direct).
In 2007, Ono was an author of a paper [1] that provided an arithmetic reformulation of Rademacher’s formula, using a Maass–Poincaré series. Based on some discussion, it wasn’t a giant improvement
over Rademacher work.
It seems that since Euler initially came up with his generating function, there haven’t been major leaps in our understanding of partition numbers.
Apparently that all changes tomorrow. Ken Ono and colleagues, Jan Bruinier, Amanda Folsom and Zach Kent, will be announcing results that include a finite, algebraic formula for partition numbers
thanks to the discovering that partitions are fractal. Well, so what does this mean, for partition numbers to be fractal?
Ken Ono, in the press release:
The sequences are all eventually periodic, and they repeat themselves over and over at precise intervals.
Alright, so obviously without going deeply into paper we can’t go further here (check out the pdf), but one can see how this insight could make the generation of partitions simple and explicit. This
also, apparently, explains, and is linked to, Ramanujan’s congruences above. How? Well, they’re part of this pattern.
Ken Ono, in the press release:
I can take any number, plug it into P, and instantly calculate the partitions of that number. P does not return gruesome numbers with infinitely many decimal places. It’s the finite, algebraic
formula that we have all been looking for.
There is already an extension on the Ono-Folsom-Kent fractal issue by John Webb called, “An improved “zoom rate” for the Folsom-Kent-Ono l-adic fractal behavior of partition values” [pdf here].
*The physics tie in? Alright, so this is reaching, but here we go. Partitions are visualized using Young tableaus, and anyone in particle physics (see pdf for relevant introduction) has probably
come across this, as well as other forms of group representation theory. Could an ability to always explicitly write down partition numbers translate to physics? I couldn’t ever imagine using groups
so large that this could at all come into play… but, one could possibly draw some conclusions about the fractal nature of… ah, I give up.
Update: Comments on connections to actual physics can be found here.
Related Literature:
[1] KATHRIN BRINGMANN, & KEN ONO (2007). An arithmetic formula for the partition function Proceedings of the American Mathematical Society, 135, 3507-3514
[2]Ken Ono (2010). The Last Words of a Genius Notices of the American Mathematical Society, 57, 1410-1419
[3] Folsom A, & Ono K (2008). The spt-function of Andrews. Proceedings of the National Academy of Sciences of the United States of America, 105 (51), 20152-6 PMID: 19091951
[4]Ken Ono, & Jan H. Bruinier (2009). Identities and congruences for the coefficients of Ramanujan’s omega(q) Ramanujan Journal
The Finite formula found for partition numbers by PLOS Blogs Network, unless otherwise expressly stated, is licensed under a Creative Commons Attribution 4.0 International License.
Latest Network Comments
About S.C. Kavassalis
This entry was posted in Local News and tagged Amanda Folsom, Jan Bruinier, Ken Ono, l-adic properties of the partition function, mathematics, number theory, partition function, Zach Kent. Bookmark
the permalink. | {"url":"http://blogs.plos.org/blog/2011/01/20/ono/","timestamp":"2014-04-20T08:17:44Z","content_type":null,"content_length":"87295","record_id":"<urn:uuid:20efe3a4-01aa-47d7-a8fa-997bbec9dc2f>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00553-ip-10-147-4-33.ec2.internal.warc.gz"} |
COSC 594 - 201420
Scientific Computing for Engineers: Spring 2014 – 3 Credits
This class is part of the Interdisciplinary Graduate Minor in Computational Science. See IGMCS for details.
Wednesdays from 1:30 – 4:15, Room 233 Claxton
Prof. Jack Dongarra with help from Profs. George Bosilca, Jakub Kurzak, Piotr Luszczek, Heike McCraw, Gabriel Marin, Stan Tomov
Email: dongarra@eecs.utk.edu
Phone: 865-974-8295
Office hours: Wednesday 11:00 - 1:00, or by appointment
TA: Reazul Hoque <rhoque@utk.edu>
TAÕs Office : Claxton 309
TAÕs Office Hours: WednesdayÕs 10:00 – 12:00 or by appointment
There will be four major aspects of the course:
● Part I will start with current trends in high-end computing systems and environments, and continue with a practical short description on parallel programming with MPI, OpenMP, and Pthreads.
● Part II will illustrate the modeling of problems from physics and engineering in terms of partial differential equations (PDEs), and their numerical discretization using finite difference, finite
element, and spectral approximation.
● Part III will be on solvers: both iterative for the solution of sparse problems of part II, and direct for dense matrix problems. Algorithmic and practical implementation aspects will be covered.
● Finally in Part IV, various software tools will be surveyed and used. This will include PETSc, Sca/LAPACK, MATLAB, and some tools and techniques for scientific debugging and performance analysis.
The grade would be based on homework, a midterm project, a final project, and a final project presentation. Topics for the final project would be flexible according to the student's major area of
Class Roster
If your name is not on the list or some information is incorrect, please send mail to the TA :
First Name Last Name Department Email Credit/Audit
Cole Gentry Nuclear Eng. cgentry7@utk.edu Credit
Anthony Gianfraesco ESE(energy science and Eng.) agianfra@utk.edu Credit
Jiahui Guo EECS guo.jiahui07@gmail.com Credit
Reazul Hoque EECS rhoque@utk.edu Credit
Aktem Maksov ESE(energy science and Eng.) amaksov@utk.edu ?
Marshall Mcdonnell Chemical Eng. mmcdonn1@utk.edu Credit
Christopher Ostrouchov Material Science costrouc@utk.edu Credit
Thananon Patinyasakdikul EECS tpatinya@utk.edu Credit
Chunyan Tang EECS ctang7@utk.edu Credit
Book for the Class:
The Sourcebook of Parallel Computing, Edited by Jack Dongarra, Ian Foster, Geoffrey Fox, William Gropp, Ken Kennedy, Linda Torczon, Andy White, October 2002, 760 pages, ISBN 1-55860-871-0, Morgan
Kaufmann Publishers.
Lecture Notes: (Tentative outline of the class)
1. January 8^th (Dr. Dongarra)
Class Introduction
Introduction to High Performance Computing
Read Chapter 1, 2, and 9
Homework 1 (due January 22, 2014)
Tar file of timer
2. January 15^th (Dr. Luszczek )
Hybrid MPI/OpenMP programming
Homework 2 (due January 29, 2014)
3. January 22rd (Dr. Tomov)
Projection and its importance in scientific computing
Homework 3 (due February 5, 2014)
4. January 29th (Dr. Tomov)
Discretization of PDEs and Tools for the Parallel Solution of the Resulting
SystMesh generation and load balancing
Homework 4 (due February 12, 2014)
5. February 5th (Dr. Tomov)
Sparse Matrices and Optimized Parallel Implementations
NVIDIA's Compute Unified Device Architecture (CUDA)
SGEMM example
Homework 5 (due February 19, 2014)
6. February 12th (Dr. Tomov)
Iterative Methods in Linear Algebra (Part 1)
Iterative Methods in Linear Algebra (Part 2)
Video of CUDA -- "Better Performance at Lower Occupancy", V. Volkov
Video of OpenCL -- "What is OpenCL"
Homework 6 (due February 26, 2014)
Homework 6 tar file
High Performance Linear Algebra with Intel Xeon Phi Coprocessors
7. February 19^th (Heike McCraw)
Performance Modeling
Homework 7 (due March 5, 2014)
8. February 26^th (Dr. Marin)
Performance Analysis and Tools
Homework 8 (due March 19, 2014)
Homework 8 tar file
9. March 5th (Dr. Kurzak )
10. March 12th (Dr. Bosilca)
Parallel programming paradigms and their performances
Homework 9 (due March 26th, 2014)
March 19th – Spring Break
11. March 26th (Dr. Dongarra)
Dense Linear Algebra part1
Dense Linear Algebra part2
Homework 10 (due April 9^th, 2014)
12. April 2nd (Dr. Bosilca)
Homework 11 (due April 23rd, 2014)
13. April 9^th (Dr. Bosilca)
Continue with the slides from last week
14. April 16^th No Class
15. April 23th (Dr. Bosilca)
Message Passing Interface (MPI)
Dynamic Processes
16. May 2^th (Friday) 1:30 – 4:00 (Dr. Dongarra)
Class Final reports & Final
The project is to describe and demonstrate what you have learned in class.
The idea is to take an application and implement it on a parallel computer.
Describe what the application is and why this is of importance.
You should describe the parallel implementation, look at the performance,
perhaps compare it to another implementation if possible.
You should write this up in a report, 10-15 pages, and in class you will have
20 minutes to make a presentation.
Here are some ideas for projects:
o Projects and additional projects.
Additional Reading Materials
Message Passing Systems
Several implementations of the MPI standard are available today. The most widely used open source MPI implementations are Open MPI and MPICH.
Here is the link to the MPI Forum.
Other useful reference material
á Here are pointers to specs on various processors:
á Introduction to message passing systems and parallel computing
J.J. Dongarra, G.E. Fagg, R. Hempl and D. Walker, Chapter in Wiley Encyclopedia of Electrical and Electronics Engineering, October 1999 ( postscript version )
``Message Passing Interfaces'', Special issue of Parallel Computing, vol 20(4), April 1994.
Ian Foster, Designing and Building Parallel Programs, see http://www-unix.mcs.anl.gov/dbpp/
Alice Koniges, ed., Industrial Strength Parallel Computing, ISBN1-55860-540-1, Morgan Kaufmann Publishers, San Francisco, 2000.
Ananth Gramma et al., Introduction to Parallel Computing, 2^nd edition, Pearson Education Limited, 2003.
Michael Quinn, Parallel Programming: Theory and Practice, McGraw-Hill, 1993
David E. Culler & Jaswinder Pal Singh, Parallel Computer Architecture, Morgan Kaufmann, 1998, see http://www.cs.berkeley.edu/%7Eculler/book.alpha/index.html
George Almasi and Allan Gottlieb, Highly Parallel Computing, Addison Wesley, 1993
Matthew Sottile, Timothy Mattson, and Craig Rasmussen, Introduction to Concurrency in Programming Languages, Chapman & Hall, 2010
á Other relevant books
Stephen Chapman, Fortran 95/2003 for Scientists and Engineers, McGraw-Hill, 2007
Stephen Chapman, MATLAB Programming for Engineers, Thompson, 2007
Barbara Chapman, Gabriele Jost, Ruud van der Pas, and David J. Kuck, Using OpenMP: Portable Shared Memory Paralllel Programming, MIT Press, 2007
Tarek El-Ghazawi, William Carlson, Thomas Sterling, Katherine Yelick, UPC: Distributed Shared Memory Programming, John Wiley & Sons, 2005
David Bailey, Robert Lucas, Samuel Williams, eds., Performance Tuning of Scientific Applications, Chapman & Hall, 2010
Message Passing Standards
``MPI - The Complete Reference, Volume 1, The MPI-1 Core, Second Edition'',
by Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, Jack Dongarra, MIT Press, September 1998, ISDN 0-262-69215-5.
``MPI: The Complete Reference - 2nd Edition: Volume 2 - The MPI-2 Extensions'',
by William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing Lusk, Bill Nitzberg, William Saphir, and Marc Snir, published by The MIT Press, September, 1998; ISBN 0-262-57123-4.
MPI-2.2 Standard, September 2009
PDF format: http://www.mpi-forum.org/docs/mpi-2.2/mpi22-report.pdf
Hardcover: https://fs.hlrs.de/projects/par/mpi//mpi22/
On-line Documentation and Information about Machines
High-performance computing systems:
á TOP500 Supercomputer Sites
á Green 500 List of Energy –Efficient Supercomputers
Other Scientific Computing Information Sites
á Netlib Repository at UTK/ORNL
á BLAS Quick Reference Card
á LAPACK
á ScaLAPACK
á GAMS - Guide to Available Math Software
á Fortran Standards Working Group
á Message Passing Interface (MPI) Forum
á OpenMP
á Unified Parallel C
á DOD High Performance Computing Modernization Program
á DOE Accelerated Strategic Computing Initiative (ASC)
á NSF XSEDE (Extreme Science and Engineering Discovery Environment
á AIST Parallel and High Performance Application Software Exchange (in Japan)
(includes information on parallel computing conferences and journals)
á HPCwire
á Supercomputing Online
Related On-line Books/Textbooks
á Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM Publication, Philadelphia, 1994.
á LAPACK Users' Guide (Second Edition), SIAM Publications, Philadelphia, 1995.
á Using MPI: Portable Parallel Programming with the Message-Passing Interface by W. Gropp, E. Lusk, and A. Skjellum
á Parallel Computing Works, by G. Fox, R. Williams, and P. Messina (Morgan Kaufmann Publishers)
á Designing and Building Parallel Programs. A dead-tree version of this book is available by Addison-Wesley.
á Introduction to High-Performance Scientific Computing, by Victor Eijkhout with Edmond Chow, Robert Van De Geijn, February 2010
á Introduction to Parallel Computing, by Blaise Barney
Performance Analysis Tools Websites
á PAPI
á PerfSuite
á TAU
á Vampir
á Scalasca
á HPCToolkit
á PerfExpert
á mpiP
á ompP
á Open|Speedshop
á IPM
á Eclipse Parallel Tools Platform
Other Online Software and Documentation
á Matlab documentation is available from several sources, most notably by typing ``help'' into the Matlab command window. See this url
á SuperLU is a fast implementations of sparse Gaussian elimination for sequential and parallel computers, respectively.
á Sources of test matrices for sparse matrix algorithms
á Matrix Market
á University of Florida Sparse Matrix Collection
á Templates for the solution of linear systems, a collection of iterative methods, with advice on which ones to use. The web site includes on-line versions of the book (in html and pdf) as well as
á Templates for the Solution of Algebraic Eigenvalue Problems is a survey of algorithms and software for solving eigenvalue problems. The web site points to an html version of the book, as well as
á Updated survey of sparse direct linear equation solvers, by Xiaoye Li
á MGNet is a repository for information and software for Multigrid and Domain Decomposition methods, which are widely used methods for solving linear systems arising from PDEs.
á Resources for Parallel and High Performance Computing
á ACTS (Advanced CompuTational Software) is a set of software tools that make it easier for programmers to write high performance scientific applications for parallel computers.
á PETSc: Portable, Extensible, Toolkit for Scientific Computation
á Issues related to Computer Arithmetic and Error Analysis
á Efficient software for very high precision floating point arithmetic
á Notes on IEEE Floating Point Arithmetic, by Prof. W. Kahan
á Other notes on arithmetic, error analysis, etc. by Prof. W. Kahan
á Report on arithmetic error that cause the Ariane 5 Rocket Crash Video of the explosion
á The IEEE floating point standard is currently being updated. To find out what issues the standard committee is considering, look here.
Jack Dongarra | {"url":"http://web.eecs.utk.edu/~dongarra/WEB-PAGES/SPRING-2014/cs594-2014.htm","timestamp":"2014-04-21T07:04:55Z","content_type":null,"content_length":"273374","record_id":"<urn:uuid:598734e8-f074-4137-b5d2-a9798368590f>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
Albert Einstein Achievements
• Einstein showed that absolute time had to be replaced by a new absolute: the speed of light. Einstein went against the grain and totally dismissed the "Old Physics." He envisioned a world where
space and time are relative and the speed of light is absolute (at the time, it was believed that space and time were absolute and the speed of light was relative).
• He asserted the equivalence of mass and energy, which would lead to the famous formula E=mc^2
• Einstein challenged the wave theory of light, suggesting that light could also be regarded as a collection of particles. This helped to open the door to a whole new world--that of quantum
physics. For ideas in this paper, he won the Nobel Prize in 1921.
• His paper concerning the Brownian motion of particles. With profound insight, Einstein blended ideas from kinetic theory and classical hydrodynamics to derive an equation for the mean free path
of such particles as a function of the time.
• Einstein showed how to calculate Avogadro's number and the size of molecules.
• In 1910, Einstein answered a basic question: 'Why is the sky blue?' His paper on the phenomenon called critical opalescence solved the problem by examining the cumulative effect of the
scattering of light by individual molecules in the atmosphere.
• Einstein later published a paper in 1915 called "General Relativity." General Relativity took over when Special Relativity started to fail. Controversy started to rise when Einstein released his
second paper called "General Relativity."
• In 1917, Einstein published a paper which uses general relativity to model the behavior of an entire universe. General relativity has spawned some of the weirdest, and most important results in
modern astronomy.
• Einstein recognized that there might be a problem with the classical notion of cause and effect. Given the peculiar, dual nature of quanta as both waves and particles, it might be impossible, he
warns, to definitively tie effects to their causes.
• Between 1905 and 1925, Einstein transformed humankind's understanding of nature on every scale, from the smallest to that of the cosmos as a whole. Now, nearly a century after he began to make
his mark, we are still exploring Einstein's universe.
• In 1924, Einstein received a short paper from a young Indian physicist named Satyendra Nath Bose, describing light as a gas of photons, and asking for Einstein's assistance in publication.
Einstein realised that the same statistics could be applied to atoms, and published an article in German (then the lingua franca of physics) which described Bose's model and explained its
implications. Bose Einstein statistics now describes any assembly of these indistinguishable particles known as bosons.
• Einstein and de Sitter in 1932 proposed a simple solution of the field equations of general relativity for an expanding universe. They argued that there might be large amounts of matter which
does not emit light and has not been detected. This matter, now called 'dark matter', has since been shown to exist by observing is gravitational effects. | {"url":"http://www.bibalex.org/Einstein2005/Achievements.htm","timestamp":"2014-04-20T16:33:17Z","content_type":null,"content_length":"15545","record_id":"<urn:uuid:b79c52c3-e752-45c8-82eb-b0cd9f4f5d15>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00532-ip-10-147-4-33.ec2.internal.warc.gz"} |
Alternating colors on a line: infinitely often or converge?
up vote 43 down vote favorite
Suppose we have intervals of alternating color on $\mathbb{R}$ (say, red / blue / red / blue / ...). All intervals have independent length, with all red intervals distributed as $\mathbb{P}_{R}$, all
blue intervals distributed as $\mathbb{P}_B$.
Once the intervals are generated, I update them iteratively as follows: if an interval is smaller than its two immediate neighbors, then it get swallowed by its neighbors: all three intervals join to
become one big interval of the same color as the two end ones.
For example, if we have: ... (red 2) (blue 3) (red 4) (blue 1) (red 3) (blue 4) (red 5) ...
Then the next step we will see ... (red 2) (blue 3) (red 8) (blue 4) (red 5) ...
And the next step we will see ... (red 2) (blue 3) (red 17) ...
And so on.
The question is: does one color eventually wins out, or do the two colors switch back and forth infinitely often? Concretely: for any fixed point $x \in \mathbb{R}$, if $B_n(x)$ denotes the color of
$x$ at time step $n$ (= 1 if blue, = 0 if red), then will the sequence $\{B_n(x)\}$ converges, or will it visits 1 and 0 infinitely often (with probability 1)?
Clearly we need to know something about the two distributions. Then, the question really is: under what assumptions on $\mathbb{P}_R$ and $\mathbb{P}_B$ can we answer the above question?
One can check that the process is well-defined, in the sense that it doesn't matter which interval is updated first. Also, for the process to get stuck, one needs to have an infinite sequence of
increasing numbers, and this happens with probability 0. We further assume that the distributions are such that there is no tie between intervals with positive probability. (eg: when the
distributions are continuous)
We know the following:
If $\mathbb{P}_R = \mathbb{P}_B$ (that is, if red and blue have the same distributions), then the answer is infinitely often w.p. 1.
Proof: for any point $x$, the event of {converge to 1} and {converge to 0} lie in the tail sigma-field, hence occurs w.p. 1 or 0. But by symmetry they have the same probability, hence it must be 0.
Since the process is translation invariant, sequences $\{B_n(x)\}$ have the same tail distribution at all points, and hence there is no convergence in color for the whole line. (In particular, this
argument implies that if one point converges to a color with probability 1, then the whole line converges to the same color with probability 1). \qed
Intuitively, suppose that at the start, red has a slight "advantage" over blue. (say, $\mathbb{P}_R$ stochastically dominates $\mathbb{P}_B$). Then one would expect this advantage to get amplified in
later rounds, and eventually red wins with probability 1. But we can't find a concrete proof of this. The hardest step seems to be finding a characterization of "advantage" (that is, a property of a
distribution relative to another) that is invariant under updates, and tells us that it improves the probability of one color winning eventually. Stochastic domination is a fairly strong assumption -
we think that there should be a milder condition.
For a toy case, a simple example is when all red intervals have length 1, and all blue intervals have length either $1 - \epsilon_1$ or $1 + \epsilon_2$, where $\epsilon_1, \epsilon_2$ are distinct
irrational numbers (so as to stop having ties with positive probability).
For simple examples as above, one can write down the distribution of red and blue after step 1 of the update (one step of the update means: updating all local minima in the previous round) -- note
that the intervals of the same color are still i.i.d, but the red and blue are not independent (since we know that the surviving blues and reds have to have "big" size to stop the neighbor from
expanding). However, I think they should be exchangeable, in which case it might be easier to do the following: - update the process once. Now red and blue have new distributions $\mathbb{P}_R'$ and
$\mathbb{P}_B'$ - "peel off" all the reds and blue and "shuffle" them (that is: generate another sequence of interlacing red/blue following the new distribution). Then run another update. If we have
exchangeability, then this does not change the tail distribution. But analytically it is easier to work with. And now one can search for the characterization of an "advantage" that get amplified in
further updates.
For instance, one might hope that if red has a "huge advantage" over blue, then $P(B_n(x) = 1)$ decays fast enough so $\sum_{n}P(B_n(x) = 1) < \infty$, then the result follows from Borel-Cantelli.
Then the next step would be to show that one can reach this "huge advantage" starting from a "small advantage" in finite time step.
Intuitively it seems that the tightest (minimal definition of advantage) would be $P(B_0(x) = 1) < 1/2$. In this case we can compute and show that $P(B_n(x) = 1)$ is a monotone decreasing function in
$n$, but quantifying the rate seems very difficult.
If anybody has fresh idea to approach this problem, please let me know!
Here are some updates on this problem: We can also run this problem in continuous time as follows: at each end point of the intervals there is a balloon. At time $t$ the balloon as radius $t/2$. When
two balloons meet they "pop", and the two points are removed from the line.
This put the process in the world of stable matching, hierarchical coalescent process, and some other physics literature. The advantage of this continuous time approach is that one can check that the
sequence of points after running for some time $T$ is still a renewal process, and furthermore, one can write down a set of ODEs for the Laplace transform of the distribution of the two lengths. In
fact, if the distribution $P_R$ and $P_B$ are identical, then one can show that the normalized length $X_t/t$ converges to a "universal" limit. (Universal in the sense that it almost doesn't depend
on the initial distribution).
The relevant paper is: http://arxiv.org/abs/1112.4630
This paper considers a very similar, so to apply the result in this paper directly one needs to do a little argument, which unfortunately does not carry over when $P_R$ and $P_B$ have different
distributions. I'll skip writing down the argument because I think that we might benefit from a fresh approach.
pr.probability ergodic-theory stochastic-processes open-problem
It seems there are distributions for which it matters which interval you use to start the update process. So I do not know what you mean by "well-defined" For example, suppose all intervals are
length 1, except for two adjoining intervals which are length 2. How do you go about updating this case? Gerhard "Ask Me About System Design" Paseman, 2011.02.19 – Gerhard Paseman Feb 20 '11 at
Ah, numbering the colors leads to part of the confusion. You are looking for oscillatory behaviour versus non-oscillatory behaviour. I will re-examine your claim of "well-defined". Gerhard "Ask Me
About System Design" Paseman, 2011.02.19 – Gerhard Paseman Feb 20 '11 at 7:53
3 This is a wonderful question. You write that $[B_n(x)\to1]$ is a tail event (space $x$ fixed, time $n$ going to infinity). For which sequence of sigma-algebras? Since the update process is
deterministic once $B_0=(B_0(x))$ is known, I understand you consider the sigma-algebras $G_h$ generated by $\{B_0(x);|x|\ge h\}$ and the tail sigma-algebra $T$ of the sequence $(G_h)$. But then,
why is $T$ trivial? And, more importantly, why should $[B_n(0)\to1]$ belong to $T$? – Did Feb 20 '11 at 9:06
@Didier: I also wondered about that. It is easy to see that the event is translation invariant, which also gives us a proof by the ergodic theorem. – Ori Gurel-Gurevich Feb 20 '11 at 16:57
@Ori Even that... To get invariance by translation one should specify the initial distribution. If at time $0$, one imposes that a blue interval begins on the right of site $0$, then the events $
[B_0(x)=1]$ do not have the same probability for various sites $x$ hence the probability that $[B_n(x)\to1]$ might depend on $x$. Probably one should start from the stationary distribution of the
renewal process of the colors. And I would love to see things written carefully here... – Did Feb 20 '11 at 18:06
show 9 more comments
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged pr.probability ergodic-theory stochastic-processes open-problem or ask your own question. | {"url":"http://mathoverflow.net/questions/56048/alternating-colors-on-a-line-infinitely-often-or-converge?answertab=active","timestamp":"2014-04-19T12:41:23Z","content_type":null,"content_length":"60308","record_id":"<urn:uuid:7a0296cd-8e7b-4afe-ad08-3bc2de4d29ee>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
Student Support Forum: 'Minimize not working when expr value 10 or more' topic
Author Comment/Response
I have a specific problem I'm trying to solve. Given a particular area, find the integer side lengths of the most square-like rectangle with that area. Here, "most square" means that the
difference between the two side lengths is minimised.
So, if given the area 48, the best answer is a 6 by 8 rectangle. If given the area 17, the only possible answer is a 1 by 17 rectangle.
The following function _sometimes_ works:
MSR := Function[{n},
Minimize[Abs[a - b],
a b == n
&& 0 < a <= n
&& a <= b <= n
&& Element[a, Integers]
&& Element[b, Integers], {a, b}]]
It works for cases where the difference between a and b is small (less than 10), but does not work where the difference is 10 or more.
It appears the Minimize function can't cope with situations where the closest solution that fits the constraints results in the expression to be minimized being 10 or more. That includes all
prime numbers from 11 onwards, plus many other numbers including 39 (3 * 13 would be OK, but 13-3 >= 10 so it doesn't find it.), 51 (3 * 17), 57 (3 * 19), 69 (3 * 23), etc. It works fine with
prime numbers less than 10.
Is this a known bug, or is it by design? Is there a workaround?
URL: , | {"url":"http://forums.wolfram.com/student-support/topics/9223","timestamp":"2014-04-18T03:26:43Z","content_type":null,"content_length":"24394","record_id":"<urn:uuid:f96d6998-5cb9-49e7-bc0c-bf431f394605>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00528-ip-10-147-4-33.ec2.internal.warc.gz"} |
Two-parameter spectral averaging and localization for non-monotonic random Schrödinger operators Trans
Results 1 - 10 of 25
- J. Funct. Anal , 2001
"... We study the integrated density of states of random Anderson-type additive and multiplicative perturbations of deterministic background operators for which the single-site potential does not
have a fixed sign. Our main result states that, under a suitable assumption on the regularity of the random v ..."
Cited by 44 (6 self)
Add to MetaCart
We study the integrated density of states of random Anderson-type additive and multiplicative perturbations of deterministic background operators for which the single-site potential does not have a
fixed sign. Our main result states that, under a suitable assumption on the regularity of the random variables, the integrated density of states of such random operators is locally Hölder continuous
at energies below the bottom of the essential spectrum of the background operator for any nonzero disorder, and at energies in the unperturbed spectral gaps, provided the randomness is sufficiently
small. The result is based on a proof of a Wegner estimate with the correct volume dependence. The proof relies upon the L function for p 1 [9], and the vector field methods of [20]. We discuss the
application of this result to Schrödinger operators with random magnetic fields and to band-edge localization.
, 2001
"... The integrated density of states (IDS) for random operators is an important function describing many physical characteristics of a random system. Properties of the IDS are derived from the
Wegner estimate that describes the influence of finite-volume perturbations on a background system. In this pap ..."
Cited by 28 (10 self)
Add to MetaCart
The integrated density of states (IDS) for random operators is an important function describing many physical characteristics of a random system. Properties of the IDS are derived from the Wegner
estimate that describes the influence of finite-volume perturbations on a background system. In this paper, we present a simple proof of the Wegner estimate applicable to a wide variety of random
perturbations of deterministic background operators. The proof yields the correct volume dependence of the upper bound. This implies the local Hölder continuity of the integrated density of states at
energies in the unperturbed spectral gap. The proof depends on the L - theory of the spectral shift function (SSF), for p 1, applicable to pairs of...
, 2000
"... this paper is to contribute to the understanding of spectral localization for random Schrdinger operators in multi-dimensional Euclidean space ..."
- IN MATHEMATICAL RESULTS IN QUANTUM MECHANICS , 1999
"... We introduce the concept of a spectral shift operator and use it to derive Krein’s spectral shift function for pairs of self-adjoint operators. Our principal tools are operator-valued Herglotz
functions and their logarithms. Applications to Krein’s trace formula and to the Birman-Solomyak spectral ..."
Cited by 16 (7 self)
Add to MetaCart
We introduce the concept of a spectral shift operator and use it to derive Krein’s spectral shift function for pairs of self-adjoint operators. Our principal tools are operator-valued Herglotz
functions and their logarithms. Applications to Krein’s trace formula and to the Birman-Solomyak spectral averaging formula are discussed.
, 2003
"... We survey recent results on spectral properties of random Schrödinger operators. The focus is set on the integrated density of states (IDS). First we present a proof of the existence of a
self-averaging IDS which is general enough to be applicable to random Schrödinger and Laplace-Beltrami operators ..."
Cited by 12 (2 self)
Add to MetaCart
We survey recent results on spectral properties of random Schrödinger operators. The focus is set on the integrated density of states (IDS). First we present a proof of the existence of a
self-averaging IDS which is general enough to be applicable to random Schrödinger and Laplace-Beltrami operators on manifolds. Subsequently we study more specific models in Euclidean space, namely of
alloy type, and concentrate on the regularity properties of the IDS. We discuss the role of the integrated density of states and its regularity properties for the spectral analysis of random
Schrödinger operators, particularly in relation to localisation. Proofs of the central results are given in detail. Whenever there are alternative proofs, the different approaches are compared.
, 2001
"... We develop the L^p-theory of the spectral shift function, for p # 1, applicable to pairs of self-adjoint operators whose difference is in the trace ideal I p , for 0 < p # 1. This result is a
key ingredient of a new, short proof of the Wegner estimate applicable to a wide variety of additive and mul ..."
Cited by 12 (4 self)
Add to MetaCart
We develop the L^p-theory of the spectral shift function, for p # 1, applicable to pairs of self-adjoint operators whose difference is in the trace ideal I p , for 0 < p # 1. This result is a key
ingredient of a new, short proof of the Wegner estimate applicable to a wide variety of additive and multiplicative random perturbations of deterministic background operators. The proof yields the
correct volume dependence of the upper bound. This implies the local Hölder continuity of the integrated density of states at energies in the unperturbed spectral gap. Under an additional condition
of the single-site potential, local Hölder continuity is proved at all energies. This new Wegner estimate, together with other, standard results, establishes exponential localization for a new family
of models for additive and multiplicative perturbations.
, 2002
"... We prove a Wegner estimate for generalized alloy type models at negative energies (Theorems 8 and 13). The single site potential is assumed to be non- positive. The random potential does not
need to be stationary with respect to translations from a lattice. Actually, the set of points to which t ..."
Cited by 11 (3 self)
Add to MetaCart
We prove a Wegner estimate for generalized alloy type models at negative energies (Theorems 8 and 13). The single site potential is assumed to be non- positive. The random potential does not need to
be stationary with respect to translations from a lattice. Actually, the set of points to which the individual single site potentials are attached, needs only to satisfy a certain density condition.
The distribution of the coupling constants is assumed to have a bounded density only in the energy region where we prove the Wegner estimate.
- COMM. MATH. PHYS , 2000
"... We use scattering theoretic methods to prove exponential localization for random displacement models in one dimension. The operators we consider model both quantum and classical wave
propagation. Our main tools are the reflection and transmission coefficients for compactly supported single site per ..."
Cited by 10 (6 self)
Add to MetaCart
We use scattering theoretic methods to prove exponential localization for random displacement models in one dimension. The operators we consider model both quantum and classical wave propagation. Our
main tools are the reflection and transmission coefficients for compactly supported single site perturbations. We show that randomly displaced, non-reflectionless single sites lead to localization.
, 2003
"... ABSTRACT. The present paper is devoted to the study of spectral properties of random Schrödinger operators. Using a finite section method for Toeplitz matrices, we prove a Wegner estimate for
some alloy type models where the single site potential is allowed to change sign. The results apply to the c ..."
Cited by 9 (2 self)
Add to MetaCart
ABSTRACT. The present paper is devoted to the study of spectral properties of random Schrödinger operators. Using a finite section method for Toeplitz matrices, we prove a Wegner estimate for some
alloy type models where the single site potential is allowed to change sign. The results apply to the corresponding discrete model, too. In certain disorder regimes we are able to prove the Lipschitz
continuity of the integrated density of states and/or localization near spectral edges. 1.
- Mathematical Results in Quantum Mechanics , 2002
"... We study spectral properties of Schrodinger operators with a random potential of alloy type on L (R) and their restrictions to finite intervals. A Wegner estimates for non-negative single site
potentials with small support is proven. It implies the existence and local uniform boundedness of the de ..."
Cited by 8 (4 self)
Add to MetaCart
We study spectral properties of Schrodinger operators with a random potential of alloy type on L (R) and their restrictions to finite intervals. A Wegner estimates for non-negative single site
potentials with small support is proven. It implies the existence and local uniform boundedness of the density of states. Our estimate is valid for all bounded energy intervals. Wegner estimates play
a key role in an existence proof of pure point spectrum. 1. Model and results We study spectral properties of families of Schrodinger operators on L (R). The considered operators consist of a
non-random periodic Schrodinger operator plus a random potential of Anderson or alloy type: H# := H 0 + V# , H 0 := -#+ V per . (1) Here # is the Laplace operator on R and V per L # (R) is a
Z-periodic potential. The random potential V# is a stochastic process of the following form (2) V# (x) = # k u(x k). | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1288431","timestamp":"2014-04-20T12:43:45Z","content_type":null,"content_length":"36921","record_id":"<urn:uuid:910bba5f-1fd1-4cbb-adeb-6fa5aa2e86e0>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00426-ip-10-147-4-33.ec2.internal.warc.gz"} |
ACP Atmospheric Chemistry and Physics ACP Atmos. Chem. Phys. 1680-7324 Copernicus GmbH Göttingen, Germany 10.5194/acp-12-2969-2012 A conceptual framework to quantify the influence of convective
boundary layer development on carbon dioxide mixing ratios Pino D. ^1 Vilà -Guerau de Arellano J. ^2 Peters W. ^2 Schröter J. ^2 van Heerwaarden C. C. ^3 Krol M. C. ^2 Applied Physics Department,
BarcelonaTech (UPC) and Institute for Space Studies of Catalonia (IEEC-UPC), Barcelona, Spain Meteorology and Air Quality Section, Wageningen University, Wageningen, The Netherlands Max Planck
Institute for Meteorology, Hamburg, Germany 26 03 2012 12 6 2969 2985 This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits
unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. This article is available from http://www.atmos-chem-phys.net/12/2969/2012/
acp-12-2969-2012.html The full text article is available as a PDF file from http://www.atmos-chem-phys.net/12/2969/2012/acp-12-2969-2012.pdf
Interpretation of observed diurnal carbon dioxide (CO<sub>2</sub>) mixing ratios near the surface requires knowledge of the local dynamics of the planetary boundary layer. In this paper, we study the
relationship between the boundary layer dynamics and the CO<sub>2</sub> budget in convective conditions through a newly derived set of analytical equations. From these equations, we are able to
quantify how uncertainties in boundary layer dynamical variables or in the morning CO<sub>2</sub> distribution in the mixed-layer or in the free atmosphere (FA) influence the bulk CO<sub>2</sub>
mixing ratio. <br></br> We find that the largest uncertainty incurred on the mid-day CO<sub>2</sub> mixing ratio comes from the prescribed early morning CO<sub>2</sub> mixing ratios in the stable
boundary layer, and in the free atmosphere. Errors in these values influence CO<sub>2</sub> mixing ratios inversely proportional to the boundary layer depth (<i>h</i>), just like uncertainties in the
assumed initial boundary layer depth and surface CO<sub>2</sub> flux. The influence of uncertainties in the boundary layer depth itself is one order of magnitude smaller. If we "invert" the problem
and calculate CO<sub>2</sub> surface exchange from observed or simulated CO<sub>2</sub> mixing ratios, the sensitivities to errors in boundary layer dynamics also invert: they become linearly
proportional to the boundary layer depth. <br></br> We demonstrate these relations for a typical well characterized situation at the Cabauw site in The Netherlands, and conclude that knowledge of the
temperature and carbon dioxide profiles of the atmosphere in the early morning are of vital importance to correctly interpret observed CO<sub>2</sub> mixing ratios during midday. | {"url":"http://www.atmos-chem-phys.net/12/2969/2012/acp-12-2969-2012.xml","timestamp":"2014-04-16T06:02:38Z","content_type":null,"content_length":"25044","record_id":"<urn:uuid:73c1a961-c72c-43f8-9822-cf2911c2991e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00338-ip-10-147-4-33.ec2.internal.warc.gz"} |
anyone good with doing math on excel?
Re: anyone good with doing math on excel?
Hi tommoy;
Welcome to the forum.
Just let me say this:
I sure wish someone would send a memo to the instructor who is using that as an exercise to teach his students the "the joys of programming."
Someone should tell him that is a horrible example or a great example of the misuse of a simulation. I would like to give him a piece of my mind so to speak. Nothing important, maybe just the medulla
oblongata. Or maybe my left brain since according to Duke University, neanderthals like me never use it.
According to the ZAA, the left brain when viewed under the microscope is made up of tiny i's with little hearts over them while the right brain smells like old socks and gym shorts.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=269829","timestamp":"2014-04-19T05:05:57Z","content_type":null,"content_length":"16575","record_id":"<urn:uuid:f9fa263a-a46f-4553-bc7e-e7cccb2fa878>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Cannot get inverse of y=x/logx to work
Replies: 5 Last Post: May 30, 2011 2:03 PM
Messages: [ Previous | Next ]
Re: Cannot get inverse of y=x/logx to work
Posted: May 30, 2011 7:33 AM
"William Elliot" <marsh@rdrop.remove.com> wrote in message
> On Sun, 30 May 2011, Chris Richardson wrote:
>> On Mon, 30 May 2011 15:16:32 +1000, Brad Cooper wrote:
>>> I cannot get the correct result for the inverse calculation of y =
>>> x/log(x).
>> Equations of this type are solvable in terms of the
>> Lambert function:
>> x = -y * W(-1/y), where W is the Lambert function
> W(y) is the solution to y = xe^x; W(y).e^W(y) = y.
> You claim -yW(-1/y)/log(-yW(-1/y) = y.
> How so?
Hi Chris and William,
Thanks for taking the trouble to reply. I am trying to follow the thread of
the ideas put forward by R. P. Boas Jr.
Are you able to shed any light on why the equation he gives "appears" not to
produce correct results? He was such a great mathematician I am convinced I
have got something wrong, but I simply cannot find where I have made an
error. Some pointers would be greatly appreciated :-)
Date Subject Author
5/30/11 Cannot get inverse of y=x/logx to work Brad Cooper
5/30/11 Re: Cannot get inverse of y=x/logx to work root
5/30/11 Re: Cannot get inverse of y=x/logx to work William Elliot
5/30/11 Re: Cannot get inverse of y=x/logx to work Brad Cooper
5/30/11 Re: Cannot get inverse of y=x/logx to work G. A. Edgar
5/30/11 Re: Cannot get inverse of y=x/logx to work mjc | {"url":"http://mathforum.org/kb/message.jspa?messageID=7473889","timestamp":"2014-04-19T17:37:57Z","content_type":null,"content_length":"23092","record_id":"<urn:uuid:d8fa6250-6dee-479e-9786-ea06945cf16a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00510-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Definition of correlation, correlate and so on ?
David Cournapeau david at ar.media.kyoto-u.ac.jp
Tue Dec 12 01:49:30 CST 2006
I am polishing some code to compute autocorrelation using fft, and
when testing the code against numpy.correlate, I realised that I am not
sure about the definition... There are various function related to
correlation as far as numpy/scipoy is concerned:
For me, the correlation between two sequences X and Y at lag t is
the sum(X[i] * Y*[i+lag]) where Y* is the complex conjugate of Y.
numpy.correlate does not use the conjugate, scipy.signal.correlate as
well, and I don't understand numpy.corrcoef. I've never seen complex
correlation used without the conjugate, so I was curious why this
definition was used ? It is incompatible with the correlation as a
scalar product, for example.
Could someone give the definition used by those function ?
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-December/024992.html","timestamp":"2014-04-17T10:01:31Z","content_type":null,"content_length":"3591","record_id":"<urn:uuid:ed09ed42-1584-4623-81a6-f34457b12ab6>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to find the minium coefficient of static friction?
1. The problem statement, all variables and given/known data
For a system (with frictionless pulley) Mewsubk = .25 and the system is initially at rest. The mass of block 1 is 16 kg and the mass of block 2 is 12 kg. Determine the minimum coefficient of static
friction needed to prevent any movement.
2. Relevant equations
3. The attempt at a solution
I have no idea how to figure it out.
I figured out the acceleration to be 2.8m/s^2 because:
a= 12kg*9.8m/s^2 - .25*16kg*9.8m/s^2/(16kg+12kg)
I figured out the tension in the connecting string to be 84 N because:
Any help on how to figure out minimum coefficient of static friction? | {"url":"http://www.physicsforums.com/showthread.php?t=390558","timestamp":"2014-04-20T21:31:43Z","content_type":null,"content_length":"25187","record_id":"<urn:uuid:9105358e-166b-4370-b6e0-d20463111476>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
15 lines 16 points problem
August 24th 2010, 03:51 AM #1
Apr 2010
15 lines 16 points problem
I recently got given a problem by a friend, which I have been struggling over for a while. It is this:
Draw 16 points, and 15 lines, with exactly 4 points on each line.
Can anyone help?
Do the lines have to be straight? Any particular configuration of points?
Also, are intersections of lines considered points? ie. can I have four points on a line as well as an intersection not considered a point?
Yes the lines have to be straight, sorry, and the puzzle is to find the configuration of points for which this is possible. No, intersections of lines don't have to be points, so you can have as
many intersections on a line as you like.
Place 15 points where the red dots are in the diagram. You then have 10 lines with four points on each, as shown. Now put the 16th point at the centre of the picture, where the blue dot is. You
then get five more lines by joining the outer five red dots to the central dot.
Thank you! I've spent so much time trying to do this problem
August 24th 2010, 09:28 AM #2
August 24th 2010, 08:02 PM #3
August 26th 2010, 02:55 AM #4
Apr 2010
August 26th 2010, 07:23 AM #5
August 27th 2010, 02:42 PM #6
Apr 2010 | {"url":"http://mathhelpforum.com/geometry/154305-15-lines-16-points-problem.html","timestamp":"2014-04-21T12:11:56Z","content_type":null,"content_length":"44373","record_id":"<urn:uuid:4c26292f-91c4-4a04-a6f3-cc0a8a72dde2>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00606-ip-10-147-4-33.ec2.internal.warc.gz"} |
Special Week on Ranks of Elliptic Curves and Random Matrix Theory
Purpose of the special week
The application of random matrix ideas to ranks of elliptic curves shows great promise for shedding light on longstanding problems. This focused week of seminars and discussion sessions is aimed at
identifying the key issues needed to make further progress.
Topics for discussion
The connection between ranks of elliptic curves and random matrix theory arises from random matrix models for the values of L-functions in families. These models have been found to predict accurately
the small values of L-functions; in conjunction with formulas for special values of L-functions, these have been used to predict the frequency of rank two quadratic twists of a fixed elliptic curve.
Recent work of David, Fearnley, and Kivilevsky has explored a similar scenario for cubic twists.
Random matrix theory gives an order of magnitude prediction for the number of vanishings mentioned above, but there is an unknown constant which is required to produce an asymptotic formula. There is
evidence that Cohen-Lenstra like heuristics for Tate-Shafarevich groups (accomplished by C. Delaunay) will play a role in determining these constants. This is one area which will benefit from
bringing together mathematicians with a variety of perspectives, and we expect this to be an active topic of discussion during the week.
It is hoped that random matrix theory can be used to predict the frequency with which curves in a family have a given rank. There has been some success with small ranks, and it is predicted that
about x^(1/4) quadratic twists of a fixed elliptic curve should have rank 3. However, there is work by various authors suggesting that this frequency is higher in certain instances. So, it may be
that x^(1/4) is the generic answer, but this may be exceeded for certain special families. Or it may be that the random matrix model doesn't work, or has been formulated incorrectly. One possibility
is that the current model makes incorrect assumptions about the distribution of heights of generating points of rank 1 curves. It will be valuable to involve experts in elliptic curves in this
Finally, there will be some attention devoted to computation. Most of the interest is with quadratic and cubic twists. Formulas of Waldspurger and Kohnen-Zagier together with an algorithm of Gross
and an implementation by Rodriguez-Villegas allow for the quick evaluation of critical values of twists of elliptic curve L-functions by imaginary quadratic characters; recent work of Mao and Baruch
now offers the opportunity to extend this work to real quadratic characters. We would like to discuss how to go about compiling a large database of L-values for families. This will be a valuable
resource for the field, comparable to the role played by databases of elliptic curves.
The schedule for Monday February 9th, 2004, is listed on the page for the LMS Spitalfields Day Random Matrix Theory and the Birch and Swinnerton-Dyer Conjecture. These are lectures aimed at a general
mathematical audience. From the morning of Tuesday February 10th until late afternoon on Friday February 13th there will be a series of seminars and discussions of a more specialised nature.
Location and Cost
The special week will take place at the Newton Institute. There is no registration fee, but the support available for local expenses is also very limited.
The Institute has limited accommodation available. If you wish to participate in the special week, please fill in the application form below and on receipt of your formal workshop acceptance letter
you are strongly advised to confirm your accommodation requirements immediately. Once all Institute accommodation has been allocated, you are likely to have to arrange your own. A list of local Guest
Houses are available here.
Applications Forms
The closing date for the receipt of applications has now passed
Scientific enquiries may be addressed to Nina Snaith (e-mail:
Travel and Local Information
How to reach the Newton Institute & Local Information
Random Matrix Approaches in Number Theory | Workshops | Newton Institute Home Page | {"url":"http://www.newton.ac.uk/programmes/RMA/rmaw01.html","timestamp":"2014-04-16T22:03:29Z","content_type":null,"content_length":"7815","record_id":"<urn:uuid:084e55ff-bfa4-4334-97bc-4efdc1e88047>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00230-ip-10-147-4-33.ec2.internal.warc.gz"} |
Voorhees Township, NJ Math Tutor
Find a Voorhees Township, NJ Math Tutor
...I work with students to develop strong conceptual understanding and high math fluency through creative math games. Having worked with a diverse population of students, I have strong culturally
competent teaching practices that are adaptive to diverse student learning needs. I have a robust knowledge of various math curricula and resources that will get your child to love math in no
9 Subjects: including SAT math, algebra 1, algebra 2, geometry
Your search for an experienced and knowledgeable tutor ends here.I have been coaching school teams for math league contests. and have coached school literature groups in preparation of Battle of
the Books contests. I enjoy teaching math and language arts. Although my specialty lies in elementary m...
15 Subjects: including algebra 2, study skills, Hindi, algebra 1
...My goal is to train you on how to become a better writer by giving you tips and directions that you can refer back to days after our session has ended. That goal applies whether you are
writing creatively or analytically, at any age. The Details of a Session: I prefer my sessions to last one hour.
37 Subjects: including algebra 1, ACT Math, reading, precalculus
...I have been a substitute teacher, piano teacher and a tutor for various subjects - many of which require my special attention to teach the kids what practicing really means, relating it to
something they understand i.e. sports, crafts, etc. AND it has also required me to understand the child/stu...
51 Subjects: including SPSS, SAT math, English, algebra 1
...I am very comfortable teaching algebra and geometry. I have a masters in teaching mathematics which I acquired in 2003. I've worked with several students over the years and am capable of
teaching a wide variety of students.
6 Subjects: including geometry, linear algebra, algebra 1, algebra 2
Related Voorhees Township, NJ Tutors
Voorhees Township, NJ Accounting Tutors
Voorhees Township, NJ ACT Tutors
Voorhees Township, NJ Algebra Tutors
Voorhees Township, NJ Algebra 2 Tutors
Voorhees Township, NJ Calculus Tutors
Voorhees Township, NJ Geometry Tutors
Voorhees Township, NJ Math Tutors
Voorhees Township, NJ Prealgebra Tutors
Voorhees Township, NJ Precalculus Tutors
Voorhees Township, NJ SAT Tutors
Voorhees Township, NJ SAT Math Tutors
Voorhees Township, NJ Science Tutors
Voorhees Township, NJ Statistics Tutors
Voorhees Township, NJ Trigonometry Tutors
Nearby Cities With Math Tutor
Collingswood Math Tutors
Deptford Township, NJ Math Tutors
Echelon, NJ Math Tutors
Evesham Twp, NJ Math Tutors
Gibbsboro Math Tutors
Haddonfield Math Tutors
Hi Nella, NJ Math Tutors
Laurel Springs, NJ Math Tutors
Lindenwold, NJ Math Tutors
Mount Laurel Math Tutors
Pine Hill, NJ Math Tutors
Somerdale, NJ Math Tutors
Stratford, NJ Math Tutors
Voorhees Math Tutors
Voorhees Kirkwood, NJ Math Tutors | {"url":"http://www.purplemath.com/voorhees_township_nj_math_tutors.php","timestamp":"2014-04-19T14:47:05Z","content_type":null,"content_length":"24466","record_id":"<urn:uuid:d554b466-d662-403f-a4fd-e7cb4ce91ffc>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00084-ip-10-147-4-33.ec2.internal.warc.gz"} |
Two cyclists are racing up a mountain at different constant
Question Stats:
56%43% (02:13)based on 73 sessions
Asifpirlo wrote:
Two cyclists are racing up a mountain at different constant rates. Cyclist A is now 50 meters ahead of cyclist B. How many minutes from now will cyclist A be 150 meters ahead of cyclist B?
(1) 5 minutes ago, cyclist A was 200 meters behind cyclist B.
(2) Cyclist A is moving 25% faster than cyclist B.
The question basically asks the relative speed of A wrt B. This means that we don't need to find the individual speed of A & BStatement 1
In 5 minutes A cycled 250 m more than the distance traveled by B.
In other words, Relative Speed of A is = 250/5 = 50m/min
Time to cover extra 100 m will be 2 Min
Thus Sufficient.
Statement 2
In 5 minutes A cycled 250 m more than the distance traveled by B.
Speed of A is 25% more than B's.
This option will give different answers depending on the speed of B.
Thus Insufficient.
Answer A
If you like my Question/Explanation or the contribution, Kindly appreciate by pressing KUDOS.
Kudos always maximizes GMATCLUB worth -Game Theory
If you have any question regarding my post, kindly pm me or else I won't be able to reply | {"url":"http://gmatclub.com/forum/two-cyclists-are-racing-up-a-mountain-at-different-constant-158373.html","timestamp":"2014-04-24T11:08:59Z","content_type":null,"content_length":"169649","record_id":"<urn:uuid:aeadaf8f-7abb-4a40-bd44-22100c5bbae1>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00365-ip-10-147-4-33.ec2.internal.warc.gz"} |
Monponsett Math Tutor
Find a Monponsett Math Tutor
I can help you excel in your math or physical science course. I have experience teaching, lecturing, and tutoring undergraduate level math and physics courses for both scientists and
non-scientists, and am enthusiastic about tutoring at the high school level. I am currently a research associate in...
16 Subjects: including algebra 1, geometry, precalculus, trigonometry
...My proficiency is in handling data (informatics), so I can help you hands-on with dealing with data. While I can support you by helping you with homework and classes, I can also help those of
you, faculty or students, conducting independent research for dissertations or theses in getting through...
18 Subjects: including prealgebra, trigonometry, SPSS, English
...I have been teaching since 2005. I have taught language arts for grades 6-8 and currently teach technology to students in K0 through grade 8. I have worked with Microsoft Windows since the mid
14 Subjects: including prealgebra, reading, ESL/ESOL, grammar
...When working with students with diverse needs, I enjoy finding methods that make use of each student's unique strengths, talents and learning styles. With all students, I am excited to use
student-centered approaches to encourage critical thought and facilitate academic success. In other words, I love to teach!
16 Subjects: including SAT math, algebra 1, elementary (k-6th), grammar
I am a former physics teacher at an elite private school in Virginia, and I have been tutoring on and off in the years since I left this job. I am now a Ph.D. student at Boston College, where I
continue to teach classes and work with students one-on-one. I am a patient and encouraging teacher, and...
9 Subjects: including algebra 1, algebra 2, calculus, geometry | {"url":"http://www.purplemath.com/monponsett_ma_math_tutors.php","timestamp":"2014-04-18T21:35:47Z","content_type":null,"content_length":"23726","record_id":"<urn:uuid:8c490145-91cd-4148-b73b-303bfff381ee>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00117-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stanford, CA Precalculus Tutor
Find a Stanford, CA Precalculus Tutor
...Next is developing a work ethic that will remain forever, even after school. After I find the individual's areas of knowledge needing a stronger foundation and reinforcement, I try to develop a
style of communication and interaction that is unique to the student, promoting strong methods of lear...
13 Subjects: including precalculus, calculus, statistics, geometry
...My approach is always to be sure that the basics are mastered before attempting more advanced topics (e.g., students need to have mastered the graphical properties of linear functions before
working on quadratic ones (hyperbolas, ellipses, parabolas). Since so many homework and exam problems in t...
17 Subjects: including precalculus, physics, GRE, calculus
My years as a UCLA student were great memorable years. I worked as TA and RA at UCLA, while attending classes. I tutored my classmates as well as middle school, high school and college students
outside UCLA, in math (prealgebra to calculus at all levels, including multivariable, and more), and also physics at all levels including college and university, supporting myself financially that
9 Subjects: including precalculus, physics, calculus, geometry
...After all, math has its own language and when some concepts appear unclear or confusing it is because the person does not fully understand the language. I am fully versed in both Common Core
State standards as well as California Standards in Math. I can effectively help both students who have difficulty with various areas in math and those who are already good in it and want to excel.
7 Subjects: including precalculus, calculus, geometry, algebra 1
...PS: I have a PhD in theoretical physics, am a Phi Beta Kappa, graduated from the two best universities in China, and was once a NASA scientist.I have a PhD in theoretical physics which requires
comprehensive training in mathematical methods and have working experience with differential equations ...
15 Subjects: including precalculus, physics, calculus, statistics | {"url":"http://www.purplemath.com/Stanford_CA_precalculus_tutors.php","timestamp":"2014-04-18T19:06:39Z","content_type":null,"content_length":"24488","record_id":"<urn:uuid:ce31e713-7292-40e5-817e-79190962feb5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00604-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stabilization of Heegaard splittings
In the last lecture of a course on Heegaard splittings, professor Gabai sketched an example due to Hass-Thompson-Thurston of two genus $g$ Heegaard splittings of a $3$-manifold that requires at least
$g$ stabilization to make them equivalent. The argument is, in my opinion, very metric-geometric. The connection is so striking (to me) so that I feel necessary to give a brief sketch of it here.
(Side note: This has been a wonderful class! Although I constantly ask stupid questions and appear to be confused from time to time. But in fact it has been very interesting! I should have talked
more about it on this blog…Oh well~)
The following note is mostly based on professor Gabai’s lecture, I looked up some details in the original paper ( Hass-Thompson-Thurston ’09 ).
Recall: (well, I understand that I have not talked about Heegaard splittings and stabilizations here before, hence I’ll *try to* give a one minute definition)
A Heegaard splitting of a 3-manifold $M$ is a decomposition of the manifold as a union of two handlebodies intersecting at the boundary surface. The genus of the Heegaard splitting is the genus of
the boundary surface.
All smooth closed 3-manifolds has Heegaard splitting due the mere existence of a triangulation ( by thicken the 1-skeleton of the triangulation one gets a handlebody perhaps of huge genus, it’s easy
exercise to see its complement is also a handlebody). However it is of interest to find what’s the minimal genus of a Heegaard splitting of a given manifold.
Two Heegaard splittings are said to be equivlent if there is an isotopy of the manifold sending one splitting to the other (with boundary gluing map commuting, of course).
A stabilization of a Heegaard splitting $(H_1, H_2, S)$ is a surgery on $S$ that adds genus (i.e. cut out two discs in $S$ and glue in a handle). Stabilization will increase the genus of the
splitting by $1$)
Let $M$ be any closed hyperbolic $3$-manifold that fibres over the circle. (i.e. $M$ is $F_g \times [0,1]$ with the two ends identified by some diffeomorphism $f: F_g \rightarrow F_g$, $g\geq 2$):
Let $M'_k$ be the $k$ fold cover of $M$ along $S^1$ (i.e. glue together $k$ copies of $F_g \times I$ all via the map $f$:
Let $M_k$ be the manifold obtained by cut open $M'_k$ along $F_g$ and glue in two handlebodies $H_1, H_2$ at the ends:
Since $M$ is hyperbolic, $M'_k$ is hyperbolic. In fact, for any $\varepsilon > 0$ we can choose a large enough $k$ so that $M_k$ can be equipped with a metric having curvature bounded between $1-\
varepsilon$ and $1+\varepsilon$ everywhere.
( I’m obviously no in this, however, intuitively it’s believable because once the hyperbolic part $M'_k$ is super large, one should be able to make the metric in $M'_k$ slightly less hyperbolic to
make room for fitting in an almost hyperbolic metric at the ends $H_1, H_2$). For details on this please refer to the original paper. :-P
Now there comes our Heegaard splittings of $M_k$!
Let $k = 2n$, let $H_L$ be the union of handlebody $H_1$ together with the first $n$ copies of $M$, $H_R$ be $H_2$ with the last $n$ copies of $M$. $H_L, H_R$ are genus $g$ handlebodies shearing a
common surface $S$ in the ‘middle’ of $M_k$:
Claim: The Heegaard splitting $\mathcal{H}_1 = H_L \cup H_R$ and $\mathcal{H}_2 = H_L \cup H_R$ cannot be made equivalent by less than $g$ stabilizations.
In other words, first of all one can not isotope this splitting upside down. Furthermore, adding handles make it easier to turn the new higher genus splitting upside down, but in this particular case
we cannot get away with adding anything less than $g$ many handles.
Okay, not comes the punchline: How would one possible prove such thing? Well, as one might have imagined, why did we want to make this manifold close to hyperbolic? Yes, minimal surfaces!
Let’s see…Suppose we have a common stabilization of genus $2g-1$. That would mean that we can sweep through the manifold by a surface of genus (at most) $2g-1$, with $1$-skeletons at time $0, 1$.
Now comes what professor Gabai calls the ‘harmonic magic’: there is a theorem similar to that of Pitts-Rubinstein
Ingredient #1: (roughly thm 6.1 from the paper) For manifolds with curvature close to $-1$ everywhere, for any given genus $g$ Heegaard splitting $\mathcal{H}$, one can isotope the sweep-out so that
each surface in the sweep-out having area $< 5 \pi (g-1)$.
I do not know exactly how is this proved. The idea is perhaps try to shrink each surface to a ‘minimal surface’, perhaps creating some singularities harmless in the process.
The ides of the whole arguement is that if we can isotope the Heegaard splittings, we can isotope the whole sweep-out while making the time-$t$ sweep-out harmonic for each $t$. In particular, at each
time there is (at least) a surface in the sweep-out family that divides the volume of $M'_n$ in half. Furthermore, the time 1 half-volume-surface is roughly same as the time 0 surface with two sides
We shall see that the surfaces does not have enough genus or volume to do that. (As we can see, there is a family of genus $2g$ surface, all having volume less than some constant independent of $n$
that does this; Also if we have ni restriction on area, then even a genus $g$ surface can be turned.)
Ingredient #2: For any constant $K$, there is $n$ large enough so no surfaces of genus $<g$ and area $<K$ inside the middle fibred manifold with boundary $M'_n$ can divide the volume of $M'_n$ in
The prove of this is partially based on our all-time favorite: the isoperimetric inequality:
Each Riemannian metric $\lambda$ on a closed surface has a linear isoperimetric inequality for 1-chains bounding 2-chains, i.e. any homologically trivial 1-chain $c$ bounds a $2$ chain $z$ where
$\mbox{Vol}_2(z) \leq K_\lambda \mbox{Vol}_1(c)$.
Fitting things together:
Suppose there is (as described above) a family of genus $2g-1$ surfaces, each dividing the volume of $M_{2n}$ in half and flips the two sides of the surface as time goes from $0$ to $1$.
By ingredient #1, since the family is by construction surfaces from (different) sweep-outs by ‘minimal surfaces’, we have $\mbox{Vol}_2(S_t) < 5 \pi (2g-2)$ for all $t$.
Now if we take the two component separated by $S_t$ and intersect them with the left-most $n$ copies of $M$ in $M'_{2n}$ (call it $M'_L$), at some $t$, $S_t$ must also divide the volume of $M'_L$ in
Since $S_t$ divides both $M'_2n$ and $M'_L$ in half, it must do so also in $M'_R$.
But $S_t$ is of genus $2g-1$! So one of $S_t \cap M'_L$ and $S_t \cap M'_R$ has genus $< g$! (say it's $M'_L$)
Apply ingredient #2, take $K = 5 \pi (2g-2)$, there is $n$ large enough so that $S_t \cap M'_L$, which has area less than $K$ and genus less than $g$, cannot possibly divide $M'_L$ in half.
Leave a Reply Cancel reply | {"url":"http://conan777.wordpress.com/2011/05/09/a-note-on-stabilization-of-heegaard-splittings/","timestamp":"2014-04-16T20:25:45Z","content_type":null,"content_length":"94488","record_id":"<urn:uuid:188d2806-215c-4af5-a074-745f3b9542a4>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00484-ip-10-147-4-33.ec2.internal.warc.gz"} |
Angles at a point
February 7th 2010, 01:18 PM #1
Feb 2010
Angles at a point
Hi all, im a new member and a high shcool Algerbra student. Im having quite a bit of trouble with this homework(which is being graded as a quiz if line AB and line CD intersect at E, Angle AEC=2=
x+10 and angle DEB=2x-30, find angle AEC I have no idea how to even set this up, my own notes dont even explain it. I would be very apreceative if anyone could help
Hi all, im a new member and a high shcool Algerbra student. Im having quite a bit of trouble with this homework(which is being graded as a quiz if line AB and line CD intersect at E, Angle AEC=2=
x+10 and angle DEB=2x-30, find angle AEC I have no idea how to even set this up, my own notes dont even explain it. I would be very apreceative if anyone could help
did you draw a diagram?
recall that angles at a point sum to 360 degrees. so,
AED + DEB + BEC + AEC = 360
as this will be graded as a quiz, that's all the help you'll get. you finish up the rest
Hmm, I have no idea how you got that diagram. the line CD doesn't even intersect AB and you have three extra points G, H and F. clearly it can't be right.
the correct diagram is below.
OH! *facepalm*
I think its x+10+2x-30=360
But the check for it comes out to be 359.8, but then again i suck at math....so this is wrong correct?
i wouldn't use decimals. leave your answers in terms of fractions and you will get the exact answer.
anyway, note that my equation has four angles in it. yours has two.
Final Hint: see number 10 here
that will give you the right equation to solve for x. note that "x" is NOT what you are looking for. you use it to find your answer.
OH!!!!! Now i think i get it... Much thanks man!
EDIT: I think i did somthing wrong though....
February 7th 2010, 01:36 PM #2
February 7th 2010, 01:44 PM #3
Feb 2010
February 7th 2010, 03:26 PM #4
February 7th 2010, 05:00 PM #5
Feb 2010
February 7th 2010, 05:38 PM #6
February 7th 2010, 05:53 PM #7
Feb 2010 | {"url":"http://mathhelpforum.com/algebra/127655-angles-point.html","timestamp":"2014-04-19T11:48:59Z","content_type":null,"content_length":"52394","record_id":"<urn:uuid:bfb0c077-1e57-41ba-85ef-5202366220c3>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00653-ip-10-147-4-33.ec2.internal.warc.gz"} |
Huffman Code
Huffman Code
The algorithm as described by David Huffman assigns every symbol to a leaf node of a binary code tree. These nodes are weighted by the number of occurences of the corresponding symbol called
frequency or cost.
The tree structure results from combining the nodes step-by-step until all of them are embedded in a root tree. The algorithm always combines the two nodes providing the lowest frequency in a bottom
up procedure. The new interior nodes gets the sum of frequencies of both child nodes.
Code Tree according to Huffman
The branches of the tree represent the binary values 0 and 1 according to the rules for common prefix-free code trees. The path from the root tree to the corresponding leaf node defines the
particluar code word.
< ^ > | {"url":"http://binaryessence.com/dct/en000079.htm","timestamp":"2014-04-20T20:55:49Z","content_type":null,"content_length":"4333","record_id":"<urn:uuid:26d2ab74-b6d8-422a-a053-b7d58e60f7e8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00318-ip-10-147-4-33.ec2.internal.warc.gz"} |
IACR News
22:17 [Pub][ePrint] NTRU-KE: A Lattice-based Public Key Exchange Protocol, by Xinyu Lei and Xiaofeng Liao Public key exchange protocol is identified as an important application in the field of
public-key cryptography. Most of the existing public key exchange schemes are Diffie-Hellman (DH)-type, whose security is based on DH problems over different groups. Note that there exists Shor\'s
polynomial-time algorithm to solve these DH problems when a quantum computer is available, we are therefore motivated to seek for a non-DH-type and quantum resistant key exchange protocol. To this
end, we turn our attention to lattice-based cryptography. The higher methodology behind our roadmap is that in analogy to the link between ElGamal, DSA, and DH, one should expect a NTRU lattice-based
key exchange primitive in related to NTRU-ENCRYPT and NTRU-SIGN. However, this excepted key exchange protocol is not presented yet and still missing. In this paper, this missing key exchange protocol
is found, hereafter referred to as NTRU-KE, which is studied in aspects of security and key-mismatch failure. In comparison with ECDH (Elliptic Curve-based Diffie-Hellman), NTRU-KE features faster
computation speed, resistance to quantum attack, and more communication overhead. Accordingly, we come to the conclusion that NTRU-KE is currently comparable with ECDH. However, decisive advantage of
NTRU-KE will occur when quantum computers become a reality. | {"url":"https://www.iacr.org/news/index.php?p=detail&id=3006","timestamp":"2014-04-19T19:33:36Z","content_type":null,"content_length":"25823","record_id":"<urn:uuid:a841e1fd-885b-4282-9d79-01a8617185d8>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00265-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Horizontal Vinyle Record Of Mass .10kg And Radius ... | Chegg.com
A horizontal vinyle record of mass .10kg and radius 0.1m rotates freely about a vertical axis through its center with an angular speed of 4.7 rad/s. The rotational inertia of the record about its
axis of rotation is 5.0 x 10^-4kgm^2. A wad of wet putty of mass .020kg drops vertically onto the record from above and sticks to the edge of the record. What is the angular speed of the record
immediately after the putty sticks to it?
The answer is ?f=3.36 rad/s
But why? | {"url":"http://www.chegg.com/homework-help/questions-and-answers/horizontal-vinyle-record-mass-10kg-radius-01m-rotates-freely-vertical-axis-center-angular--q1080345","timestamp":"2014-04-18T03:31:17Z","content_type":null,"content_length":"21513","record_id":"<urn:uuid:7b3a5953-8012-4974-a869-d510f89b6f06>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00137-ip-10-147-4-33.ec2.internal.warc.gz"} |
4-bit binary subtractor
Hello folks :
the task is to design a circuit which subtracts 4 bit binary number a3,a2,a1,a0 and b3...b0. the MSB is a sign bit.
Assume that you are given half-adders and full-adders, i.e. you don't have to design the logic of the half-adder or full adder chip.
the two numbers are fed into the circuit on 2 serial lines and the output is also read out serially.
Please have look at the attachement and Let me know if its Correct!!!
electro ;) | {"url":"http://www.physicsforums.com/showthread.php?t=68167","timestamp":"2014-04-19T12:38:37Z","content_type":null,"content_length":"41392","record_id":"<urn:uuid:a84c2618-5758-4c41-93cf-4aee68481d31>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00636-ip-10-147-4-33.ec2.internal.warc.gz"} |
Next: Kristeva Up: Pseudoscience Previous: Pseudoscience
Lacan is one of the most famous and controversial psychoanalysts. I shall not discuss his work on psychoanalysis; but one finds many mathematical notions in his writings. Here are some examples: at a
conference held in Baltimore (USA) in 1966, Lacan said:
This diagram [the Möbius strip] can be considered the basis of a sort of essential inscription at the origin, in the knot which constitutes the subject. This goes much further than you may think
at first, because you can search for the sort of surface able to receive such inscriptions. You can perhaps see that the sphere, that old symbol for totality, is unsuitable. A torus, a Klein
bottle, a cross-cut surface , are able to receive such a cut. And this diversity is very important as it explains many things about the structure of mental disease. If one can symbolize the
subject by this fundamental cut, in the same way one can show that a cut on a torus corresponds to the neurotic subject, and on a cross-cut surface to another sort of mental disease. [Lacan
(1970), pp. 192-193]
Lacan does not give any reason to think that the rather abstract geometrical notions which he mentions explain ``many things about the structure of mental disease." Of course, one might think that
this is just an analogy. Well, even so: what purpose would these analogies fill? But here is a dialogue following Lacan's lecture:
HARRY WOOLF: May I ask if this fundamental arithmetic and this topology are not in themselves a myth or merely at best an analogy for an explanation of the life of the mind?
JACQUES LACAN: Analogy to what? ``S'' designates something which can be written exactly as this S. And I have said that the ``S'' which designates the subject is instrument, matter, to symbolize
a loss. A loss that you experience as a subject (and myself also). In other words, this gap between one thing which has marked meanings and this other thing which is my actual discourse that I
try to put in the place where you are, you as not another subject but as peoplethat are able to understand me. Where is the analogon? Either this loss exists or it doesn't exist. If it exists it
is only possible to designate the loss by a system of symbols. In any case, the loss does not exist before this symbolization indicates its place. It is not an analogy. It is really in some part
of the realities, this sort of torus. This torus really exists and it is exactly the structure of the neurotic. It is not an analogon; it is not even an abstraction, because an abstraction is
some sort of diminution of reality, and I think it is reality itself. [Lacan (1970), pp. 195-196]
So, when he is asked explicitly whether it is an analogy, Lacan denies it. Of course, to say that the torus is ``reality itself", makes no sense, even if one were speaking about physics, where
mathematics can be applied in a precise way.
Lacan used also notions of (point set) topology:
In this space of jouissance , to take something bounded, closed, is a location, and to speak about it is a topology. ...What does the most recent development of topology allow us to put forward
concerning the location of the Other, of this sex as Other, as absolute Other? I will put forward the notion of compactness. Nothing is more compact than a fracture; clearly, the intersection of
everything that closes being admitted as existing on an infinite number of sets, it follows that the intersection implies this infinite number. It is the very definition of compactness. [Lacan
Here Lacan uses several words that enter into the mathematical definition of compactness (intersection, closed etc...) without paying any attention to their meaning. His ``definition" makes no sense
whatsoever. And, of course, no argument is given that could conceivably justify a relationship between compactness and ``jouissance".
In other texts, Lacan ``develops" the role of imaginary numbers:
Thus, by calculating that signification according to the algebraic method used here, namely
...Thus the erectile organ comes to symbolize the place of jouissance, not in itself, or even in the form of an image, but as a part lacking in the desired image: that is why it is equivalent to
the jouissance that it restores by the coefficient of its statement to the function of the lack of signifier -1. [Lacan (1971); seminar held in 1960.]
Clearly, the square root of -1 looks deep and mysterious to people who have not studied mathematics. But the relation between jouissance is even more mysterious, at least for us.
In the works of Lacan, one finds many other abuses, e.g. on mathematical logic, physics and knot theory. It seems reasonable to assume that, far from providing honest and useful analogies, these
references allowed Lacan to impress his non-mathematical audience with a superficial erudition and to put a varnish of scientificity on his discourse.
Next: Kristeva Up: Pseudoscience Previous: Pseudoscience Kuroki Gen
Mon Aug 30 11:49:39 JST 1999 | {"url":"http://www.math.tohoku.ac.jp/~kuroki/Sokal/bricmont/node3.html","timestamp":"2014-04-16T16:20:19Z","content_type":null,"content_length":"8421","record_id":"<urn:uuid:7c4902e3-0f82-4d4e-b219-20d81a7027ca>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00499-ip-10-147-4-33.ec2.internal.warc.gz"} |
Milton, WA Calculus Tutor
Find a Milton, WA Calculus Tutor
...I am happy to help with many different math classes, from Elementary math to Calculus. I have helped my former classmates and my younger brother many times with Physics. I have been learning
French for more than 6 years.
16 Subjects: including calculus, chemistry, French, geometry
...Also, I recently scored 770 out of 800 on both the reading and writing portions of an SAT test. I am highly qualified to tutor SAT Math, and I am currently tutoring math at levels ranging from
Algebra 1 through Calculus. I recently scored over 2300 out of 2400 on a full SAT test.
21 Subjects: including calculus, chemistry, English, physics
...I view any tutoring appointment as a contract to which I am obligated, and ask the same from my clients. As a result, I ask for 4 hours notice of a need to cancel or reschedule. Your
satisfaction is what matters to me.
8 Subjects: including calculus, geometry, algebra 1, algebra 2
With my teaching experience of all levels of high school mathematics and the appropriate use of technology, I will do everything to find a way to help you learn mathematics. I can not promise a
quick fix, but I will not stop working if you make the effort. -Bill
16 Subjects: including calculus, geometry, GRE, statistics
...Any skill requires practice and effort, and I am able able to explain to the student that math competency is like exercising any muscle, except this particular one is the “brain muscle.”
Likewise, I show students the relevancy of what they are learning and how it applies to the world at large. O...
11 Subjects: including calculus, geometry, algebra 1, algebra 2
Related Milton, WA Tutors
Milton, WA Accounting Tutors
Milton, WA ACT Tutors
Milton, WA Algebra Tutors
Milton, WA Algebra 2 Tutors
Milton, WA Calculus Tutors
Milton, WA Geometry Tutors
Milton, WA Math Tutors
Milton, WA Prealgebra Tutors
Milton, WA Precalculus Tutors
Milton, WA SAT Tutors
Milton, WA SAT Math Tutors
Milton, WA Science Tutors
Milton, WA Statistics Tutors
Milton, WA Trigonometry Tutors
Nearby Cities With calculus Tutor
Algona, WA calculus Tutors
Auburn, WA calculus Tutors
Covington, WA calculus Tutors
Dupont, WA calculus Tutors
Edgewood, WA calculus Tutors
Federal Way calculus Tutors
Fife, WA calculus Tutors
Fircrest, WA calculus Tutors
Graham, WA calculus Tutors
Jovita, WA calculus Tutors
Maple Valley calculus Tutors
Pacific, WA calculus Tutors
Puy, WA calculus Tutors
Puyallup calculus Tutors
Sumner, WA calculus Tutors | {"url":"http://www.purplemath.com/Milton_WA_Calculus_tutors.php","timestamp":"2014-04-17T20:01:45Z","content_type":null,"content_length":"23671","record_id":"<urn:uuid:ed18c582-74a1-4291-b17f-57a15a8d147d>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00315-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tax Chapter 20 HW Answers
Course Hero has millions of student submitted documents similar to the one below including study guides, practice problems, reference materials, practice exams, textbook help and tutor support.
Find millions of documents on Course Hero - Study Guides, Lecture Notes, Reference Materials, Practice Exams and more. Course Hero has millions of course specific materials providing students with
the best way to expand their education.
Below is a small sample set of documents:
Community College of Baltimore County - ACCT - 225
21-3 In essence, the discounted cash-flow method calculates the expected cash inflows and outflows of a project as if they occurred at a single point in time so that they can be aggregated (added,
subtracted, etc.) in an appropriate way. This enable
Colorado - PHYSICS - 4340
Lehigh - CHEM - 151
om the definition of the thermal radiative properties and a radiation balance for anopaque surface on a total wavelength basis, according to Eq. 12.59,<α = 1 − ρ = 1 − 0.4 = 0.6.(b) The net
radiation heat transfer rate to the surfacefoll
Manhattan College - ACCT - 325
16-1Chapter 16: Check figures (2014 edition)updated: October 22, 201325. The SSTP is a cooperative effort of 44 states, the District of Columbia, local governments and the business community to
simplify sales and use tax collection and administration b
Manhattan College - ACCT - 325
Maturity 1 2 3 4 5 Face ValuePrice Yield Forward Rate $990.00 1.01% 1.01% $975.00 1.27% 1.54% $955.00 1.55% 2.09% $905.00 2.53% 5.52% $850.00 3.30% 6.47% $1,000.00Forward Rate8.00% 6.00% 4.00% 2.00%
0.00% 1 2 3 4 5
Manhattan College - ACCT - 325
Valuing a Corporate Bond Settlement Date Maturity Date Coupon Rate Required Rate (Yield) Redemption (as % of par) Frequency Basis (Day Count Convention) Price (value) of Bond (as % of par) Price in
Dollars Solve using PV Solving for Yield on Corporate Bon
Manhattan College - ACCT - 325
Stock A Stock B Correlation Standard Deviation on Portfolio Standard Deviation Expected Return on Market Risk-Free Rate Standard Deviation on Market Required Return on PortfolioWeight STD 30% 20% 70%
40% 0.6 31.96% Low Risk High Risk 10% 20% 15% 10% 15%
Manhattan College - ACCT - 325
Year 0 1 2 3Cash Flow Cumulative Balance -100 -100 10 -90 60 -30 80 50 Payback 2.375 yearsYear 0 1 2 3 Discount Rate Discounted Payback PeriodCash Flow PV CFS Cumulative Balance -100 $(100.00) $
(100.00) 10 $9.09 $(90.91) 60 $49.59 $(41.32) 80 $60.11 $1
Manhattan College - ACCT - 325
Call Option Stock Price $10.00 $15.00 $20.00 $25.00 $30.00 $35.00 $40.00 Put Option Stock Price $10.00 $15.00 $20.00 $25.00 $30.00 $35.00 $40.00 Black-Scholes Model Stock Price Strike Price Risk-Free
Rate Time to Expiry Variance Standard Deviation (Volati
Manhattan College - ACCT - 325
Present Value Rate # Years Future Value Future Value Future Value Rate # Years Present Value Present Value Annuities Annual Payments Rate # Payments Future Value Present Value Annual Payments Rate #
Payments Present Value Future Value Solving for Payment
Manhattan College - ACCT - 325
Year 1 2 3 4 Invest 1 2 3 4 Year 1 2 3 4 5Stock A 10% 11% 9% 10% 10000 11000 12210 13308.9 $14,639.79 Price $50.00 $38.00 $55.00 $40.00 $25.00Stock B 22% -22% 52% -4% 10000 12200 9516 14464.32
$13,885.75 Dividends $2.00 $2.00 $2.00 $2.00Stock A +1 110%
Manhattan College - ACCT - 325
Year 0 1 2 3 4 5Cash Flow Cumulative Balance (1,000,000) -1000000 150,000 -850000 270,000 -580000 420,000 -160000 450,000 290000 1,400,000 1,690,000 Payback 3.114 years($1,000,000) 150,000 270,000
420,000 450,000 1,400,000Year 0 1 2 3 4 5 Discount Rate
Manhattan College - ACCT - 325
EBIT $500,000 # SHARES 100000 Stock's Required Return 12% Stock Price $25.00 Tax Rate 40% Beta 1 Risk-Free Rate 6% Risk Premium on Market 6% Weight on Debt 0 20% 30% 40% 50% Yield on Debt 8.00% 8.50%
10.00% 12.00% D/S 0.25 0.43 0.67 1.00 beta 1.00 1.15 1.
Manhattan College - ACCT - 325
Accounting 757Prof. MandelkornHomework problems on corporate distributions, dividends and redemptions. 1/ Benson Corp. has current "E & P" = $30,000. It has an accumulated deficit =
($60,000) as of the beginning of its current year. The corporation made
Manhattan College - ACCT - 325
46a626d68368e714a80e3f52d8824f20b21cb56c.xlsTentative Course Outline - Spring 2013. Day Corporate Tax Class M. Date Day Cls Ch. Jan 9WedTopicsIntroduce concepts & research. Supplement throughout
semester. Forms. Entity Election, Compare. Cap-gains, Ch
Manhattan College - ACCT - 325
Accounting 757Supplemental NotesProf. MandelkornCapital Structure of a Corp./ Special Rules for Corps. A corporation needs capital, financing, money or funds to operate and grow. Funds may come from
within the corporation using reinvested earnings, or
Manhattan College - ACCT - 325
Accounting 757 Summary notes on stock dividends In this unit, we have studied corporate distributions of cash and property in the context of dividends, reductions of basis, possible capital gain and
redemptions. Another kind of distribution that a corpora
Manhattan College - ACCT - 325
Figure 5.1 Section 305 Stock DividendsWas distribution payable in either stock or property at the option of shareholder? No YesDid it result in a receipt of property (cash) by some shareholders and
an increase in assets or E&P to others (stock dividend)
Manhattan College - ACCT - 325
Chapter 1-Part A.Tax ResearchEdited January 1, 2013Howard Godfrey, Ph.D., CPA Professor of AccountingCopyright 2013. Howard Godfrey C13-Chp-01-1A-Research-Sources-2013Chapter 1A. Learning
Objectives-1A student should be able to: 1. Understand Reason
UC Davis - GEL 001 - 001
Solar Nebula Hypothesis 1) Collapse a clump of gas and dust within the cloud. 2) Disk (proplyd) and Protosun develop. 3) Planets form in the disk (through accretion). 4)Planetesimals grow to become
protoplanets. 5) Protoplanets collide. Left with planets.
UC Davis - GEL 001 - 001
SEARCH:goCornell Global Labor InstituteCornell GLI Study Finds Keystone XL Pipeline Will Create Few JobsPrevious Studies Are Misleading; Project May Kill More Jobs Than It Creates.Cornell GLI's new
report, Pipe Dreams? Jobs Gained, Jobs Lost by the C
UC Davis - GEL 001 - 001
Life After Oil and Gas - NYTimes.com3/25/13 11:37 AMHOME PAGETODAY'S PAPERVIDEOMOST POPULARU.S. EditionSubscribe to Home DeliverychemrgHelpSearch All NYTimes.comWORLDU.S.N.Y. /
UC Davis - GEL 001 - 001
By Paul C. "Chip" Knappenberger This article appeared in The Hill (Online) on May 3, 2013.he U.S. Environmental Protection Agency recently got in a rather public spat with the U.S.
Department of State over the proposed Keystone XL pipeline project. Apart
UC Davis - GEL 001 - 001
UC Davis - GEL 001 - 001
Keystone XL pipelineThe BasicsThe Canadian company TransCanada hopes to begin building the northern section of an oil pipeline that would trek close to 2,000 miles from Alberta, Canada to the Gulf
Coast of Texas. If constructed, the pipeline, known as K
UC Davis - GEL 001 - 001
UC Davis - GEL 001 - 001
UC Davis - GEL 001 - 001
Paper 1 prompt: Keystone XL Pipeline: Jobs vs. Environment? Energy for electricity that powers homes and industry commonly comes from sources like coal, natural gas, hydroelectricity or nuclear
power. Renewables like wind and solar constitute a very smal
UC Davis - GEL 001 - 001
UC Davis - GEL 001 - 001
Tang Kei ManGoogle Earth Assignment #21) Distance from epicenter to Port au Prince airport? 29.88km2) Highest magnitude aftershock? 5.93) How many days after the main quake did large aftershocks
occur? The main quake occurred on 12th Jan and the la
UC Davis - GEL 001 - 001
Using Google Earth to Explore the 2010 Haiti earthquakeGoal: Last week you learned to use Google Earth. In this exercise youll use GoogleEarth to explore the geological factors and societal effects
of the Haitiearthquake. It should take you about an ho
Purdue - STAT - 301
STAT 301 (Traditional and Online)SYLLABUS for Summer 2011INSTRUCTOR:EMAIL:Any student may attend the office hours of any STAT 301 instructor. A master scheduleof all STAT 301 office hours will be
posted on the course website.COURSE WEBSITE: http:/ww
Georgia Perimeter - MATH - 2431
Evaluating Limits analyticallyOne-sided limits (Infinite limits)Given ( ), + ( ), + ( ) ( )Note: can be any real number, and ( ) can be a rational functionFirst factor and simplify the function ()If
the value of x is approaching or pick anumber smal
Georgia Perimeter - MATH - 2431
Evaluating Limits analyticallyLimits at InfinityGiven lim ( ) or lim ( ), where( ) = + 1 1 +. . . 2 2 + 1 + 0is a polynomial function of degree nLimit of a polynomial is determined by the behavior of
theleading term ( )Example 1: Evaluate lim (5 3
Georgia Perimeter - MATH - 2431
CONTINUOUS/DISCONTINUOUS FUNCTIONSTo determine if a function, f, is continuous ata number, c, test the following conditions:Check to be sure that the function is defined atc. In other words, c must
be in the domain of f.That is, f(c) must exist.If f
Georgia Perimeter - MATH - 2431
Limits & ContinuityA function is continuous at a if:1. f(a) exists2.existsRemember, for a limit to exist,3. f(a) =Created by Drs. Marjorie Lewkowicz & Behnaz Rouhani, February 2013Example 1:
Given the following graph find:a.b. lim3c. limd.
Georgia Perimeter - MATH - 2431
The Tangent and Velocity ProblemsTangent ProblemThe word tangent is derived from the latin word tangens, which means touching.Thus, tangent to a curve touches the curve. We all know what is meant by
the slopeof a straight line, which tells us the rate
Georgia Perimeter - MATH - 2431
The Tangent and Velocity ProblemsTangent ProblemThe word tangent is derived from the latin word tangens, which means touching.Thus, tangent to a curve touches the curve. We all know what is meant by
the slopeof a straight line, which tells us the rate
Georgia Perimeter - MATH - 2431
The Limit of a functionWe have seen in previous section how limits arise when we want to find tangent toa curve or instantaneous rate of change of a function. If fx gets arbitrarily close to L(as
close to L as we like) for all x sufficiently close to a
Georgia Perimeter - MATH - 2431
Calculating Limits Using the Limit LawsIn previous section we used calculator and graphs to guess the values of limits. Butwe saw that such methods do NOT always lead to correct answer. In this
section weuse the following properties of limits called Li
Georgia Perimeter - MATH - 2431
ContinuityWhen we plot function values generated or collected in the field, we often connectthe plotted points with an unbroken curve to show what the functions values are likelyto have been at the
times we did not measure. In doing so we are assuming
Georgia Perimeter - MATH - 2431
Derivatives and Rates of ChangeNow that we have learned limits and techniques for computing them, we will returnto the tangent and velocity problems.TangentsThe tangent line to the curve y fx at
point a, fa is the linefx faprovided this limit exist.
Georgia Perimeter - MATH - 2431
The Derivative as a FunctionIn previous section we defined the derivative of a function as:f a limh0fa h fah(1)fx faxa(2)ORf a limxaWe know that value of f at x can be interpreted geometrically as
the slope of thetangent line to the graph of
Georgia Perimeter - MATH - 2431
Limits Involving InfinityInfinite Limits & Vertical AsymptotesPreviously we concluded that lim 12 does not exist.x0 xBy observing from the table 1 that as x becomes close to 0, x 2 becomes close
to 0,and 12 becomes very large as shown below in table
Georgia Perimeter - MATH - 2431
Limits at InfinityHere to find limits at infinity we let x become arbitrarily large (positive or negative)and see what happens to y.Lets begin by investigating the behavior of the function2fx x 2 1x
1as x becomes large.y1.00.5-7-6-5-4-3-2
Georgia Perimeter - MATH - 2431
Related RatesIf two functions at and bt are functions of time and they are related by someequation, then by implicit differentiation, their derivative a t and b t are also related.For instance when
air is pumped into a balloon, both radius and volume a
Georgia Perimeter - MATH - 2431
Related RatesRead the problem carefully, make a sketch and label the quantitiesWrite down one or more equations that express the relationship among variablesUsing the Chain Rule, implicitly
differentiate both sides of the equation withrespect to tSol
Georgia Perimeter - MATH - 2431
Derivatives and Shapes of CurvesThe Mean Value TheoremWe know that constant functions have zero derivatives, but could there be a morecomplicated function whose derivative is always zero? If two
functions have identicalderivatives over an interval, ho
Georgia Perimeter - MATH - 2431
Extreme Values of FunctionsSome of the most important applications of differential calculus are optimizationproblems. (Finding the best way of doing something) Some examples are as follows:a. What is
the shape of a can that minimizes manufacturing cost
Georgia Perimeter - MATH - 2431
Intervals of Increasing & DecreasingFind the first derivative of the function, f.Find critical numbers.Set the first derivative equal to zeroand solve.Determine whether the first derivative
isundefined for any x-values.Make sure these numbers are i
Georgia Perimeter - MATH - 2431
Absolute Maximum & Minimum Values of a FunctionFind the first derivative of the function, f.Find critical numbers.Set the first derivative equal to zero and solve.Determine whether the first
derivative is undefined for anyx-values.Make sure these nu
Georgia Perimeter - MATH - 2431
CONCAVITY AND INFLECTION POINTSFind the Second Derivative of the function, f.Set the Second Derivativeequal to zero and solve.To determine if these numbers arepotential Inflection Points, makesure
they are in the domain of theoriginal function, f.
Georgia Perimeter - MATH - 2431
Optimization ProblemsThe methods we learned in this unit for finding extreme values have practicalapplications in many areas of life. In solving such problems the greatest challenge isto convert the
word problem into a mathematical optimization problem
Georgia Perimeter - MATH - 2431
Newtons MethodFor a quadratic equation ax 2 bx c 0 there is a well known formula for theroots. For third and fourth degree equations there are also formulas for the roots, butthey are extremely
complicated. If f is a polynomial of degree 5 or higher, t
Georgia Perimeter - MATH - 2431
Areas and DistancesAreaWe know how to find area of a region with straight sides. For instance, the area ofa rectangle is defined as the product of the length and the width. The are of a triangleis
half the base times the height. The are of a polygon i
Georgia Perimeter - MATH - 2431
AntiderivativesWe have studied how to find the derivative of a function. However, many problemsrequire that we recover a function from its known derivative. For instance, we mayknow the velocity
function of an object falling from an initial height and
Georgia Perimeter - MATH - 2431
Georgia Perimeter - MATH - 2431
Georgia Perimeter - MATH - 2431
Georgia Perimeter - BIO - 2108
ForestsunEngelman spruceaspenlodgepole pinejack rabbitjunipersagebrushred-tailed hawkred squirrelgrasslichenchipmunkwood peckerwaxwingdeerbeetleelkgrasshopperfungigarter snakebacteriaturkey vulture
Georgia Perimeter - BIO - 2108
POPULATION GENETICS PROBLEMS1.PKU is a disease caused by a recessive allele. If untreated, it causes damage tothe nervous system and is usually fatal before the individual can reproduce. InEurope the
incidence is 1 in 10,000, or a q2 = 0.0001.a.b.I | {"url":"http://www.coursehero.com/file/8480990/Tax-Chapter-20-HW-Answers/","timestamp":"2014-04-17T19:09:33Z","content_type":null,"content_length":"55294","record_id":"<urn:uuid:dc667ae9-be7c-4d72-b985-a9c03e05a69f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
MAA Distinguished Lecture Series
James A. Yorke, University of Maryland
Thursday, November 17, 2011
Abstract: Chaos is a real-world phenomenon that arises in many different contexts, making it difficult to tell exactly what chaos is. Yorke will give examples of the aspects of chaos.
James A. Yorke earned his bachelor's degree from Columbia University in 1963. He came to the University of Maryland for graduate studies, in part because of interdisciplinary opportunities offered by
the faculty of the Institute for Physical Sciences and Technology (IPST). After receiving his doctoral degree in 1966 in Mathematics, Yorke stayed at the University as a member of IPST. Today he
holds the title of Distinguished University Professor and also is a member of the Mathematics and Physics Departments.
Professor Yorke's current research projects range from chaos theory and weather prediction and genome research to the population dynamics of the HIV/AIDS epidemic. He is perhaps best known to the
general public for coining the mathematical term "chaos" with T.Y. Li in a 1975 paper entitled "Period Three Implies Chaos," published in the American Mathematical Monthly. "Chaos" is a mathematical
concept in nonlinear dynamics for systems that vary according to precise deterministic laws but appear to behave in a random fashion. | {"url":"http://www.maa.org/meetings/calendar-events/maa-distinguished-lecture-series?page=5&device=desktop","timestamp":"2014-04-19T02:50:14Z","content_type":null,"content_length":"107069","record_id":"<urn:uuid:ae2b0783-4052-4995-a808-828aa17a7531>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00422-ip-10-147-4-33.ec2.internal.warc.gz"} |
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole.
Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material,
please consider the following text as a useful but insufficient proxy for the authoritative book pages.
Do not use for reproduction, copying, pasting, or reading; exclusively for search engines.
OCR for page 55
Statistical Methods for Testing and Evaluating Defense Systems: Interim Report APPENDIX B A Short History of Experimental Design, with Commentary for Operational Testing Some of the most important
contributions to the theory and practice of statistical inference in the twentieth century have been those in experimental design. Most of the early development was stimulated by applications in
agriculture. The statistical principles underlying design of experiments were largely developed by R. A. Fisher during his pioneering work at Rothamsted Experimental Station in the 1920s and 1930s.
The use of experimental design methods in the chemical industry was promoted in the 1950s by the extensive work of Box and his collaborators on response surface designs (Box and Draper, 1987). Over
the past 15 years, there has been a tremendous increase in the application of experimental design techniques in industry. This is due largely to the increased emphasis on quality improvement and the
important role played by statistical methods in general, and design of experiments in particular, in Japanese industry. The work of the Japanese quality consultant G. Taguchi on robust design for
variation reduction has shown the power of experimental design techniques for quality improvement. Experimental design techniques are also becoming popular in the area of computer-aided design and
engineering using computer/simulation models, including applications in manufacturing (automobile and semiconductor industries), as well as in the nuclear industry (Conover and Iman, 1980).
Statistical issues in the design and analysis of computer/simulation experiments are discussed in Sacks et al. (1989). Robust design uses designed experiments to study the response surfaces
associated with both mean and variation, and to choose the factor settings judiciously so that both variability and bias are made simultaneously small. Variability is studied by identifying important
“noise” variables and varying them systematically in off-line experiments. Robust design ideas have been used extensively in industry in recent years (see Taguchi, 1986; Nair, 1992). Some basic
insights of experimental design have had revolutionary impact, but many of these insights are not well known among scientists without specialized training in statistics, partly because elementary
texts and first courses seldom allocate time to this topic at all, or with any depth. For example, the role of randomization and the inefficiency of the practice of varying one factor at a time are
OCR for page 55
Statistical Methods for Testing and Evaluating Defense Systems: Interim Report not well appreciated. To the extent that this is true of the operational testing community, it should be surprising,
since many of the applications and much of the support for research in experimental design derived from problems faced by DoD during and shortly after World War II. The reason may be that practical
considerations in carrying out operational testing often impose such complex restrictions on the nature of the experimental design that one cannot rely on standard formulae to optimize the design.
Here, as in many other applications of statistical theory to practice, it seems likely that the limited standard textbook rules and dogmas are inadequate for dealing intelligently with the problem.
What is required is the kind of expertise that can adapt underlying basic principles to the current situation, an expertise rarely found outside the scope of well-trained statisticians who understand
the relation of standard rules to underlying principles. Both to serve as a reference point for later discussion and to help summarize the progress made in this field, we describe a few of the basic
principles and tools of experimental design in barest outline. It is our hope that appreciation of the basic principles will thus be enhanced, and the potential for more sophisticated applications
developed. THE VALUE OF CONTROLS, BLOCKING, AND RANDOMIZATION Several basic principles of design of experiments are widely understood. One is the need for a control. In comparing two systems, a new
one and a standard one whose behavior is relatively well known, there used to be a natural tendency to test and evaluate the new system separately. The result of such an evaluation tends to be biased
by a “halo” factor because the new system is being evaluated under conditions somewhat different from the everyday conditions under which the old system has been used. To avoid this bias, it is
commonplace to test both systems simultaneously under similar circumstances. With complicated weapon systems, satisfactory control may require careful consideration of the training of the personnel
handling the system. The use of controls has an additional advantage besides that of eliminating a potential inadvertent bias. This advantage stems from the factors that contribute to the variability
in the outcomes of individual tests. Ordinarily, the outcome of an experiment depends not only on the overall quality of the system, but also on more or less random variations, some of which are due
to the general environment. To the extent that the two systems are tested in the same environment, which is likely to have a similar effect on both systems, the difference in performance is less
likely to be affected by the environment, and the experiment yields a more precise estimate of the overall difference in performance of the two systems. If natural variations in the environment have
a relatively large effect on the variability in performance, the ability to match pairs has a correspondingly large effect on increasing the precision of conclusions. When this principle of matching
is generalized to more than two systems, it is referred to as blocking, a term derived from agricultural experiments in which several treatments are applied in each of many blocks of land. In the
context of operational testing, a series of prototypes and controls are tested simultaneously under a variety of conditions defined by such factors as terrain, weather, degree of training of troops,
and type of attack. Here one expects considerable homogeneity within blocks and nontrivial variation from block to block. The process of blocking raises another issue. How should the various
treatments be distributed within a block? In an agricultural experiment, if one assumes that position within the block has no effect, position will not matter. But if there is a systematic gradient
in soil fertility in one direction, the use of a systematic allocation might introduce a bias. One way to deal with this possibility is to anticipate the bias and allocate within the various blocks
in a clever fashion designed to cancel out the
OCR for page 55
Statistical Methods for Testing and Evaluating Defense Systems: Interim Report extraneous and irrelevant gradient effect. This is tricky, and the history of such attempts is full of misguided
failures. Another approach to reducing the bias is to select the allocation within the block by randomization. Often in operational testing applications with a small number of test articles,
randomization may not be necessary, and small systematic designs can be used safely. Or one can select a design at random from a restricted class of “reasonably safe” designs. However, in larger and
more complicated experiments where there are many blocks, the possible biasing effect due to “unfortunate” randomizations is very likely to be minimal. Moreover, one byproduct of randomization is
that it permits the statistician to ignore the complications due to many poorly understood potential biasing phenomena in constructing the probabilistic model on which to base the analysis. VARYING
MORE THAN ONE FACTOR AT A TIME Perhaps one of the most important insights of experimental design is that the traditional policy of varying one factor at a time is inefficient; that is the resulting
estimates have higher variance than estimates derived from experiments with the same number of replications in which several factors are simultaneously varied. We illustrate with two examples. One
example, due to Hotelling and based on work by Yates, involves the weighing of eight objects whose weights are wi, 1 ≤ i ≤ 8. A chemist's scale is used, which provides a reading equal to the weight
in one pan minus the weight in the other pan plus a random error with mean 0 and variance σ2. Hotelling proposes the design represented by the equations: X1 = w1 + w2 + w3 + w4 + w5 + w6 + w7 + w8 +
u1 X2 = w1 + w2 + w3 − w4 − w5 − w6 − w7 + w8 + u2 X3 = w1 − w2 − w3 + w4 + w5 − w6 − w7 + w8 + u3 X4 = w1 − w2 − w3 − w4 − w5 + w6 + w7 + w8 + u4 X5 = w1 + w2 − w3 + w4 − w5 + w6 − w7 − w8 + u5 X6 =
w1 + w2 − w3 − w4 + w5 − w6 + w7 − w8 + u6 X7 = w1 − w2 + w3 + w4 − w5 − w6 + w7 − w8 + u7 X8 = w1 − w2 + w3 − w4 + w5 + w6 − w7 − w8 + u8 where Xi is the observed outcome of the ith weighing, every
+1 before a wj means that the jth object is in the first pan, a −1 means that it is in the other pan, and ui is the random error for the ith weighing and is not observed directly. We estimate the wj
by solving the equations derived by assuming all ui = 0. This gives, for example, the estimate ŵ1 of w1, where: ŵ1 = (X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8)/8 or ŵ1 = w1 + (u1 + u2 + u3 + u4 + u5 +
u6 + u7 + u8)/8 Since the ui are the errors resulting from independent weighings, we assume that they are independent
OCR for page 55
Statistical Methods for Testing and Evaluating Defense Systems: Interim Report with mean 0 and variance σ2. Then a straightforward computation yields the result that the ŵj have mean wj and variance
σ2/8, and are uncorrelated. If one had applied 8 weighings to the first object alone, no better result would have been obtained for w1. Thus a design in which each object is weighed separately would
require 64 weighings to achieve our results, which were derived from 8 weighings. Another example, from Mead (1988), confronts the practice of varying one factor at a time more directly. Suppose that
the outcome of a treatment is affected by three factors, p, q, and r, each of which can be controlled at two levels, p0 or p1, q0 or q1, and r0 or r1. We are allowed 24 observations. In one
experiment we use: p0q0r0 and p1q0r0 four times each p0q0r0 and p0q1r0 four times each p0q0r0 and p0q0r1 four times each An alternative second experiment uses each of the eight combinations p0q0r0,
p0q0r1, p0q1r0, p1q0r0, p1q1r0, p1q0r1, p0q1r1, and p1q1r1 three times. We are interested in estimating the difference in average effect due to the use of p1 rather than p0. Assume that effects of
the factors are additive, and the observations have a common variance σ2 about their expectation. Then the variance of the estimate of the difference due to p1qkrm rather than p0qkrm is σ2/2 in the
first experiment and σ2/6 in the second. The same holds for the differences due to the second and third factors. A threefold reduction in variance can be achieved by a design that varies several
factors at once. The more efficient design consisted of replicating the eight-case block three times. This design also has the advantage of allowing the designer to select quite distinct environments
for each block without worrying much about the contribution of the environmental factors to the overall effect being studied. In case the variations in environment have a large effect on the result,
the blocking aspect of the design is useful in increasing the efficiency of the estimation of the contrasting effects of p, q, and r over a design that ignores blocking. Moreover, the design is well
balanced in a technical sense, permitting simple analyses of the resulting data, as well as efficiency of the resulting estimates. The simplicity of the analysis, even in this day of cheap and fast
computing, retains an advantage in permitting the analyst to present the results in a convincing way to those without a background in statistics. An experiment in which each combination of
controllable factors is considered at several levels is called a factorial experiment. (Factorial designs were developed by Fisher and Yates at Rothamsted.) So, for example, if one has four factors
involving five levels each, a factorial experiment would require 54 = 625 distinct observations. Such a large number could be impractical. For such cases, an elegant mathematical theory of incomplete
block designs was developed, supplemented by a theory dealing with fractional factorial designs, latin squares, and graeco-latin squares for studying the main effects and low-order interactions in a
small number of runs. These designs tend to achieve efficiency and balance while reducing potential biases, leading to relatively simple analysis. Fractional factorial designs were introduced by
Finney (1945). Orthogonal arrays, recently popularized by Taguchi, include the fractional factorial designs developed by Finney, the designs developed by Plackett and Burman (1946), and the
orthogonal arrays developed by Rao (1946, 1947), Bose and Bush (1952), and others. OPTIMAL EXPERIMENTAL DESIGNS A major advance in the theory of experimental design was the introduction of optimal
experimental design. This theory provides asymptotically optimal or efficient designs for estimating a single un-
OCR for page 55
Statistical Methods for Testing and Evaluating Defense Systems: Interim Report known parameter for problems in which the relationship between the outcome Y and the independent variables x1, x2, etc.
is well understood and easily modeled as a function of a few unknown parameters. While this theory has some limitations in an applied setting, its results can be useful in pointing out targets of
efficiency one should approximate, and where one should aim to get reasonably good designs. There are several such limitations. First, since the theory is a large-sample theory, except in the case of
regression models, it may approximate good designs only for situations in which limited sample sizes are available. Second, the optimal designs often depend on the value of the unknown parameter. For
example, if the reliability r of a device under stress x is given by r(x) = exp(−θx), then the optimal design for estimating the unknown parameter θ consists of stressing a sample of devices with the
stress x = 1.6/θ. In these cases, one must rely on some prior knowledge about the unknown parameter or carry out some preliminary less inefficient experiments to “get the lay of the land.” The latter
is often good policy if it is feasible and not inconvenient. Third, the optimality may depend on an assumed model that is incorrect, causing the resulting design to be suboptimal and possibly even
noninformative. For example, consider a linear regression for probability of hit Y, which is a linear function of distance x for x in the range 3 to 4; i.e., Y = α + βx + u, where it is desired to
estimate the slope β (Of course, this model makes sense only for a relatively short range of x, since there is the danger of predicting probabilities that are less than 0 or greater than 1.) For each
value of x between 3 and 4, one may observe the corresponding value of Y, which depends not only on x but also on the random noise u, which is assumed to have mean 0 and constant variance
(independent of x), and is not observed. Then an optimal experiment would consist of selecting half of the x values at 3 and the other half at 4. However, if this was wrong and a more suitable model
for Y as a function of distance was, instead, Y = α + βx + γx2 + u, adding a quadratic term, then an optimal design for estimating β would require the use of three values of x, and the above design
that is concentrated on two values of x could not be used to estimate this three-parameter model. Note that for this quadratic model, the slope is no longer constant, and β represents the slope at x
= 0. This raises the additional question of whether β is the parameter we wish to estimate if the regression is not linear in x. More likely one would want to estimate β + 7γ, the slope at the
half-way point. On the other hand, if one were fairly certain that the linear model was an adequate approximation, but were somewhat concerned with the possibility that gamma was substantial, and so
wanted to be highly efficient for the linear model with some recourse in case the quadratic model was appropriate, then minor variations from the optimal design for the linear model could be used to
reveal deviations from the model without affecting the efficiency greatly should the linear model be appropriate. Finally, in many cases the object of the experiment involves the estimation of more
than one unknown parameter. It is rarely possible to design an experiment that is simultaneously maximally efficient for estimating each of these parameters. In such cases, it is necessary to
establish an appropriate criterion for measuring how well an experimental design does. Several criteria have been advanced. One possibility is to convert estimates of the parameters to estimates of
performance of the equipment for each of several environments likely to be encountered. For each such environment, the estimate would have a variance. One could then determine a design that would
minimize an average, over the range of environments, of the variances of estimated performance. RESPONSE SURFACE DESIGNS The choice of control settings is typically the subject of response surface
design and analysis.
OCR for page 55
Statistical Methods for Testing and Evaluating Defense Systems: Interim Report Response surfaces are simple linear or quadratic functions of independent variables that are used to approximate a more
complex relationship between a response and those independent variables. Two types of optimality are studied. In one case, the response is measured by a simple output that is to be maximized as a
function of several control variables. This type of study requires the estimation of a surface that is typically quadratic in the relevant neighborhood. The usual 2n factorial design in which each
variable is examined at two levels is inadequate for estimating the necessary quadratic effects for locating the optimal setting. However a 3n factorial design may involve too many settings.
Composite designs that supplement the 2n factorial designs with additional points contribute useful information about quadratic effects. In particular, there is a useful class of rotatable designs
that are efficient and easy to analyze and comprehend. In a second kind of problem, there may be several output variables to deal with, with good performance requiring that each of these lie within
certain acceptable bounds. In many cases, each of these output variables behaves in a roughly linear fashion as a function of the control variables in the region under discussion. Then a 2n factorial
design may be appropriate for estimating the linear trends necessary to determine control settings that will contribute satisfactory results. Typically, the goal in many industrial experiments is to
identify the important factors that affect one or more responses from among a large set of factors. Highly fractional, typically main-effect plans are used as screening designs to identify the
important factors. The high cost of industrial experimentation limits the number of runs; hence fractional designs with factors typically at two levels are used in these experiments. Once a smaller
set of important factors has been identified, the response surface can be studied more thoroughly using designs with more than two levels, and process/product performance can be optimized. This is
the rationale behind the response surface methodology developed by Box and others (see Box and Draper, 1987). It should be pointed out that most of the industrial applications along the lines of
Taguchi focus on product or process development, and so are closer to the application of developmental testing. In selecting the settings of the controls in a factorial design, an experimenter must
use some background information on what to expect. It would be useless to carry out an experiment in which all the values of a factor were too extreme or too similar. Thus the operational tester must
depend on information cumulated from previous experience, for example, from developmental testing, at least to establish what constitutes a useful design from which an analyst, who may be willing to
discard that previous history, can draw useful conclusions. To the extent that an experimenter depends on an educated intuition about likely outcomes of an experiment or appropriate models, he or she
tends to be subjective. This subjective element can never be fully removed from the design of an experiment, and in the minds of many, not even from the analysis of the resulting outcomes. BAYESIAN
AND SEQUENTIAL EXPERIMENTAL DESIGNS The theory of Bayesian inference deals with the formalism of subjective beliefs by assigning prior probabilities to such beliefs. With care, this theory can be
used productively in both the design and analysis of experiments. One advantage of such a theory is that it forces users to lay out their assumptions explicitly. It also provides a convenient way of
expressing the effect of the experiment in the user's posterior probability. Care is required, for priors that seemingly express general ignorance about a parameter sometimes assume much more
information than the user thinks. During World War II, a theory of sequential analysis was developed in connection with weapons testing. According to this theory, there is no point in proceeding
further with expensive tests if the results of the first few trials are overwhelming. For example, if a fixed-sample-size test might reject a
OCR for page 55
Statistical Methods for Testing and Evaluating Defense Systems: Interim Report weapon that failed in 3 out of 10 trials, it would make sense to stop testing and reject if the weapon failed on the
first two trials. This theory led to tests that were as effective as previous fixed-sample procedures, with considerable savings in the cost of experimentation. Although the initial theory confined
attention to experiments in which identical trials were repeated, the concept is naturally extended to sequential experimentation. Here, after each trial or experiment, the analyst-designer can
decide whether to stop experimentation and make a terminal decision, or continue experimentation. If the decision is to continue, the analyst-designer can then elect which of the alternative trials
or experiments to carry out next. Two-stage experiments, in which a preliminary experiment is devoted to gaining useful information for the design of a final stage, are special cases of sequential
experimentation. Finally, two active areas of research in experimental design (not specifically Bayesian or sequential) are the use of screening experiments, in which one wishes to discover which of
many possible factors has a substantial effect on some response, and designs for testing computer software. The panel is interested in pursuing the application of these two new areas of research as
they relate to operational testing. | {"url":"http://www.nap.edu/openbook.php?record_id=9074&page=55","timestamp":"2014-04-18T03:17:11Z","content_type":null,"content_length":"60131","record_id":"<urn:uuid:9a8789c0-8f1d-4025-8fea-62a7c930ca0c>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00144-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Help! I need to show work for this question. 12b^-2-7b^-1+1=0
• 11 months ago
• 11 months ago
Best Response
You've already chosen the best response.
I got b = 4,3
Best Response
You've already chosen the best response.
\[\frac{ 12 }{ b^{2} }-\frac{ 7 }{ b }+1=0\] Multiply both sides by b^2 \[b^{2}-7b+12=0\] So your answer looks right.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/519294bee4b0e9fd532983ba","timestamp":"2014-04-21T04:56:56Z","content_type":null,"content_length":"30018","record_id":"<urn:uuid:5b727c22-895e-46e2-910d-2b4919495e42>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00020-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent US6198486 - Method of using a hybrid error metric for multi-resolution mesh generation
1. Field
The present invention relates generally to graphics for display systems and, more specifically, to multi-resolution modeling of scenes or objects in display systems.
2. Description
In computer graphics, a model is a structured digital representation of an object or scene. Many computer graphics applications employ complex, detailed models of scenes or objects to maintain a
convincing level of realism for a user. Consequently, models are often created or acquired at a resolution to accommodate this desire for detail. However, depending on the application, this
complexity of such models may be excessive, and since the computational cost of using a model is typically related to its complexity, it is often useful to have simpler versions of complex models.
Hence, methods of automatically and efficiently producing simplified models are desirable.
A goal of multi-resolution modeling is to extract the details from complex models that are desirable for rendering a scene and to remove other, excessive details. A multi-resolution model is a model
which captures a wide range of levels of detail of an object and which can be used to reconstruct any one of those levels. Such models are typically represented as a mesh of many triangles, each
triangle having three vertices and three edges. A mesh may be represented by a data structure stored in a data storage device. One area of research in multi-resolution modeling has been the
development of iterative edge contraction techniques. An edge contraction (also known as an edge collapse) takes the two endpoints (vertices) of a target edge within a mesh, moves them to the single
position, links all incident edges to one of the vertices of the mesh, deletes the other vertex, and removes any faces that have degenerated into lines or points. Typically, this removes two
triangular faces per edge contraction, thereby simplifying the model. Edge contraction processes work by iteratively contracting edges of the mesh until a desired resolution is achieved. Differences
in such processes lie primarily in how a particular edge to be contracted is chosen.
Surface simplification is a restricted form of edge contraction in multi-resolution modeling. In polygonal surface simplification, a goal is to take a polygonal model as input data and generate a
simplified model (e.g., an approximation of the original) as output data. The focus of such simplification is on polygonal models represented as meshes comprising only triangles (e.g., wire frame
models). This implies no loss of generality, because every polygon in an original model can be triangulated as part of a pre-processing phase.
Simplification is useful in order to make storage, transmission, computation, and display of models more efficient. A compact approximation of a model can reduce disk and memory utilization and can
speed network transmission. It can also accelerate a number of computations involving shape information, such as finite element analysis, collision detection, visibility testing, shape recognition,
and display. Reducing the number of polygons in a model can make the difference between slow display and real time display.
A surface simplification process is described by Michael Garland and Paul S. Heckbert in “Surface Simplification Using Quadric Error Metrics”, SIGGRAPH 97 Proceedings, pages 209-216, August, 1997,
although the invention is not limited in scope in this respect. Garland and Heckbert describe a process for producing simplified versions of polygonal models that is based on the iterative
contraction of vertex pairs (which is a generalization of edge contraction). In this process, a geometric error approximation is maintained at each vertex of the current model. The error
approximation is used as the determining factor in identifying the order for iterative edge contractions. According to Garland and Heckbert, the error approximation is represented using quadric
matrices. The quadrics stored with the final vertices can also be used to characterize the overall shape of the model's surfaces. In this process, ten floating point numbers are used for storing the
error approximation at each vertex.
This process was extended by Michael Garland and Paul S. Heckbert in “Simplifying Surfaces With Color And Texture Using Quadric Error Matrics”, Proceedings Visualization 1998, to models having
material properties such as colors, textures, and surface normals, although, again, the invention is not limited in scope in this respect. The quadric error metric was modified to account for a range
of vertex attributes. However, the generalized error metric incurred additional space and processing overhead when implemented on a computer system because the size of the quadric matrix grows
quadratically in the size of the attributes. For example, the number of unique coefficients according to this method for each vertex of the model becomes 21 with a model having a geometry and
two-dimensional texture, 28 with a model having a geometry and color, and 28 with a model having a geometry and surface normals.
In addition, the methods disclosed by Garland and Heckbert often make mistakes in simplifying a model, particularly when dealing with objects having long, skinny features such as claws and tails, for
example. Their methods tend to remove or shorten these features of the model prematurely.
Therefore, an improved iterative edge contraction process for simplifying multi-resolution models is desired which reduces storage and processing utilization and produces more accurate
An embodiment of the present invention is a method of generating a first mesh from a second mesh representing a scene or object. This embodiment includes collapsing edges of the second mesh in a
first order defined by a surface area error metric to produce a third mesh, and collapsing edges of the third mesh in a second order defined by a combination quadric and surface area error metric to
produce the first mesh.
The features and advantages of the present invention will become apparent from the following detailed description of the present invention in which:
FIG. 1 is a diagram of a multi-resolution mesh generation process according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a multi-resolution mesh generation process according to an embodiment of the present invention; and
FIG. 3 is a diagram illustrating a sample system suitable to be programmed according to an embodiment of a method for generating a multi-resolution mesh.
An embodiment of the present invention is a method of generating simplified versions of models represented as meshes. In particular, an embodiment comprises an improvement in the error metric
component of existing mesh generation processes. Embodiments of the present invention use a surface area error metric and optionally a volume error metric in conjunction with a well-known quadric
error metric to efficiently determine which vertices are lesser important vertices to the shape of a mesh. As used herein, a metric is a process for determining vertices of a mesh, that, when removed
from a mesh, cause a reduced amount of damage visually to the shape of the model in comparison with other vertices. The edges affected by these vertices are iteratively removed one at a time from the
mesh to produce a series of models from the original, greater complexity mesh to a lesser complexity mesh. Since the iterative edge contraction process generates a series of models, a user may adjust
the resolution of any generated mesh. The meshes may be stored for future use at any point in the iterative edge contraction process, thereby making available models with different levels of detail.
A model may be obtained from a variety of sources such as digital image scanners, authoring tools, digital cameras, and so on. The model may then be processed into a wire frame mesh format. Some
meshes may contain thousands or even millions of triangles, although the invention is not limited in this respect. When the meshes are large, it becomes more desirable to use a simplification
technique that is efficient in terms of storage and processing utilization. The mesh may be represented in a data structure comprising an array of vertices, an array of edges, and an array of faces.
These arrays define the model's features such as its connectivity, shape, and possibly colors and textures. In this embodiment, each vertex may be represented as a set of x, y, and z coordinates in a
three dimensional coordinate system. In this embodiment, each edge may be represented as pointers or links to two vertices. In this embodiment, each face may be represented as pointers or links to
three vertices. In some models, color, texture, and surface normal information for faces may also be included. The model's data structure may be input to a simplification process to produce a data
structure representing a simplified multi-resolution mesh which corresponds to an original, complex mesh.
FIG. 1 is a diagram of a multi-resolution mesh generation process according to an embodiment of the present invention. Multi-resolution mesh generation process 200 accepts a complex mesh 202 data
structure as input data and produces a simplified multi-resolution mesh 204 data structure as output data. Edge collapse using surface and volume error metrics 206 accepts the complex mesh, reduces
it using edge collapse techniques based on edges identified by surface area and volume error metrics, and produces modified mesh 208. Block 206 eliminates a large percentage of the edges in the
complex mesh 202 in a manner which reduces storage and processing utilization. Edge collapse using quadric, surface, and volume error metrics 210 accepts the modified mesh, reduces it using edge
collapse techniques based on edges identified by quadric, surface area, and volume error metrics, and produces simplified multi-resolution mesh 204.
In this embodiment, in both edge collapse using surface and volume error metrics 206 and edge collapse using quadric, surface and volume error metrics 210, the same well-known edge collapse process
is used, however, different error metrics are employed and, thus, the ordering of edges may be different. Block 210 operates only on the edges remaining in modified mesh 208. In some example complex
meshes operated on by one embodiment of the present invention, up to approximately 80% of the edges may be eliminated by edge collapse using surface and volume error metrics 206 and up to
approximately 20% of the edges may be eliminated by edge collapse using quadric, surface, and volume error metrics 210. Of course, the actual percentage of edges eliminated at each block may depend
on the features and complexity of the complex mesh being processed. Since edge collapse using surface and volume error metrics 206 is computationally efficient in processing the complex mesh, and
only a portion of the original complex mesh is passed to edge collapse using quadric, surface and volume error metrics 210, the overall efficiency of multi-resolution mesh generation process 200 may
be improved as compared to the efficiency of employing an edge collapse technique using a quadric error metric alone.
FIG. 2 is a flow diagram of a multi-resolution mesh generation process according to an embodiment of the present invention. Initially, at block 300 a data structure is generated for complex mesh 202
to represent the relationships between the vertices, edges, and faces of the mesh in a format which may be more efficient for error metric processing than the format of complex mesh 202. This data
structure comprises mesh relationship information, such as which edges belong to vertices, which vertices are the end points of edges, and which vertices defines faces, for example. The data
structure may also comprise information relating to color, texture, and surface normals for faces. The data structure may be quicker to traverse when collapsing edges, yet it includes all of the
original information of the complex mesh 202, thereby reducing overall processing time for surface simplification. Additionally, the surface area of all faces may be summed to form a total surface
area for the complex mesh for use in determining a threshold percentage discussed below.
At block 302, a cost may be assigned to each edge of the mesh according to a surface area error metric. That is, each edge of the mesh may be examined to determine the damage to the shape of the mesh
when that edge is collapsed, taking into account the surface area of all faces affected by this edge collapse. In one embodiment, the cost determined by the surface area error metric may be the
difference between the sum of the face areas adjacent to an edge's vertices and the sum of the face areas adjacent to the remaining vertices after edge contraction. In an alternate embodiment, the
surface error metric may be the square of the difference between the sum of the face areas adjacent to an edge's vertices and the sum of the face areas adjacent to the remaining vertices after edge
contraction. In either embodiment, the cost is efficiently computed and employs only one floating point storage location in memory to store the cost for the edge. In other embodiments, the color,
texture, or surface normal of a face may also be used to affect the surface area error metric by weighting the cost according such features.
At block 304, the edges of the complex mesh may be sorted based on the cost assigned to them in block 302. The edges may be sorted from lowest surface area change of the model to highest surface area
change of the model. This ordering corresponds to a ranking from the lowest surface area change to the highest surface area change. In alternate embodiments, different sorting criteria may be
employed. After block 304, an ordered list of edges is available to use for edge collapse operations based on the results of the surface area error metric. At block 306, the edges in the list may be
collapsed according to the surface area error metric cost order using a well-known edge contraction process or a later developed edge contraction process until a cost is encountered that exceeds a
pre-defined threshold amount. Each edge collapse means the mesh is defined by one less edge. In one embodiment, the threshold may be a 1% change in local surface area. The 1% threshold has been
empirically determined to work well for a large class of three dimensional models. However, other threshold values may be applicable, depending on the model being reduced.
In one embodiment, each collapse may be subject to a veto by a volume error metric. That is, if the volume error metric detects a defect in the proposed model, then the edge may not be collapsed. A
volume error metric may be employed because occasionally an edge collapse may substantially preserve the surface area of the model despite the shape of the model being radically altered. This is
undesirable because then the model's appearance may not accurate. The volume error metric may be used to detect these cases where edge collapses yield large changes in volume. Such edges may then be
given a new cost according to the volume error metric based at least in part on the volume change. In one embodiment, edges identified by the volume error metric as causing a volume problem for the
model may be appended to the end of the surface area error metric cost list, thereby delaying their collapse.
The volume error metric works on the intuition that large changes in volume (either positive or negative) are detrimental to the perceived shape of a model. In one embodiment, the volume error metric
may be determined as follows. Given two vertices v[k ]and v[r ]which form an edge that is a candidate for collapse, measure the volume of tetrahedrons incident on those two vertices. Each tetrahedron
may be formed between the three vertices belonging to a face incident on v[k ]and with v[r]. Sum the volumes of each such tetrahedron. Call this volume sum V. If the edge is collapsed and v[r ]is
“moved” to position v[k], alter the local volume of the model by V. If the total volume of the model has changed greater than a predetermined threshold, then the edge is not collapsed at this time
and the edge may be added to the end of the surface area error metric cost list. Note that this volume computation is only valid for the edge collapse from v[r ]to v[k]. There is a corresponding
volume for the edge collapse from v[k ]to v[r].
At block 308, a cost may be assigned to all edges remaining in the modified mesh 208 using a combination of the quadric, surface and volume error metrics.
In an embodiment of the present invention, the cost for removing an edge may be computed as the cost according to the surface area error metric as discussed above multiplied by a constant value
multiplied by the cost of removing the edge according to the quadric error metric as described by Michael Garland and Paul S. Heckbert in “Surface Simplification Using Quadric Error Metrics”,
SIGGRAPH 97 Proceedings, pages 209-216, August, 1997, although other quadric error metrics may also be employed and the invention is not limited in this respect. More specifically, in this embodiment
the cost may be: (quadric cost+1.0)*((surface area cost*scale factor)+1.0). The scale factor may be empirically determined. In one embodiment, the scale factor may be 1.0. In one embodiment of the
present invention, increasing the scale factor results in preserving color boundaries over preserving the shape of the model during edge collapses, while decreasing the scale factor results in
preserving the shape of the model over preserving color boundaries. Of course, variations of this cost equation may be used as will be apparent to those skilled in the art. Note that constant 1.0
value is used to prevent the cost from going to zero when one of the quadric or surface area costs is zero. In another embodiment, the combination of the surface and quadric error metrics for
determining the cost of removing an edge may be combined with a volume error metric.
At block 310, the remaining edges may be sorted according to the newly assigned costs, from a lowest cost to a highest cost, based on the potential damage to the shape of the mesh. At block 312, the
edges may be iteratively collapsed in the combination metric cost order until a desired resolution of the model is reached. In one embodiment, all edges may be collapsed. At any point in the
iterative edge collapse process, the multi-resolution mesh may be saved for later use, thereby obtaining meshes with differing levels of detail.
Embodiments of the present invention may reduce large complex meshes several orders of magnitude faster than the existing quadric error metric approach. For example, using prior art techniques,
reduction of large complex meshes may take on the order of days of processing time, whereas with an embodiment of the present invention reduction of large complex meshes may take on the order of
minutes of processing time. This makes authoring of multi-resolution, three dimensional graphical content more productive. The multi-resolution meshes generated by embodiments of the present
invention are also more accurate than those generated with the quadric error metric, resulting in visibly improved images for the user.
Embodiments of the present invention employ less processing time than the quadric error metric approach when operating on the same mesh at least in part because they use less memory, which may allow
the surface simplification process to be executed from random-access memory (RAM) of a system instead of from disk-based virtual memory. Since processes for multi-resolution mesh generation
inherently have little temporal or spatial locality, a virtual memory system may be forced to frequently access disk storage within the system whenever the complex mesh data structure exceeds the
size of the RAM. Thus, it is desirable to reduce the storage requirements of a multi-resolution mesh generation method. The well-known quadric error metric employs at least ten floating point numbers
per vertex to represent the metric. With an embodiment of the present invention, one floating point number per vertex may be employed. Because in this embodiment only surface and volume metrics are
used initially, the memory requirements to process the complex mesh may be greatly reduced. In embodiments of the present invention, the creation of quadric error matrics may be delayed until the
surface and volume metrics may be deemed inadequate (based on a threshold, for example) for further accurate reduction of the model. Furthermore, embodiments of the present invention may be more
accurate than the quadric error metric approach because in some embodiments undesirable artifacts associated with long skinny features within a model may be reduced, and a more visually appealing
mesh may be produced, especially at lower resolutions.
Embodiments of the present invention enable graphical content developers to use a single high resolution three dimensional model and at run-time extract the level of detail appropriate for the
capabilities of the processor of the system being used. Application programs having such models may be more visually appealing to a user when executed on more powerful processors. The
multi-resolution meshes generated by embodiments of the present invention may be used to transmit three dimensional images over networks, such as the Internet, for example. In these cases, the lowest
resolution model may be sent over the network first, followed by incremental updates which increase the resolution. The result is that the three dimensional model being viewed in a browser program
may gradually become more detailed as the download progresses.
Embodiments of the present invention may also have applications beyond the reduction of three dimensional graphical models. Many types of data, including images, video, audio, and animation, may be
represented as surfaces. Because surface simplification effectively compresses the original data by removing lesser important data, embodiments of the present invention may be applicable to the
compression of these multimedia data types.
In the preceding description, various aspects of the present invention have been described. For purposes of explanation, specific numbers, systems and configurations were set forth in order to
provide a thorough understanding of the present invention. However, it is apparent to one skilled in the art that the present invention may be practiced without the specific details. In other
instances, well known features were omitted or simplified in order not to obscure the present invention.
Embodiments of the present invention may be implemented in hardware or software, or a combination of both. However, embodiments of the invention may be implemented as computer programs executing on
programmable systems comprising at least one processor, a data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output
device. Program code may be applied to input data to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in
known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an
application specific integrated circuit (ASIC), or a microprocessor.
The programs may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The programs may also be implemented in assembly or machine
language, if desired. In fact, the invention is not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
The programs may be stored on a storage media or device (e.g., hard disk drive, floppy disk drive, read only memory (ROM), CD-ROM device, flash memory device, digital versatile disk (DVD), or other
storage device) readable by a general or special purpose programmable processing system, for configuring and operating the processing system when the storage media or device is read by the processing
system to perform the procedures described herein. Embodiments of the invention may also be considered to be implemented as a machine-readable storage medium, configured for use with a processing
system, where the storage medium so configured causes the processing system to operate in a specific and predefined manner to perform the functions described herein.
An example of one such type of processing system is shown in FIG. 3. Sample system 400 may be used, for example, to execute the processing for methods in accordance with the present invention, such
as the embodiment described herein. Sample system 400 is representative of processing systems based on the PENTIUM®, PENTIUM® Pro, and PENTIUM® II microprocessors available from Intel Corporation,
although other systems (including personal computers (PCs) having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In one embodiment, sample system 400
may be executing a version of the WINDOWS™ operating system available from Microsoft Corporation, although other operating systems and graphical user interfaces, for example, may also be used.
FIG. 3 is a block diagram of a system 400 of one embodiment of the present invention. The computer system 400 includes a processor 402 that processes data signals. The processor 402 may be a complex
instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination
of instruction sets, or other processor device, such as a digital signal processor, for example. FIG. 3 shows an example of an embodiment of the present invention implemented as a single processor
system 400. However, it is understood that embodiments of the present invention may alternatively be implemented as systems having multiple processors. Processor 402 may be coupled to a processor bus
404 that transmits data signals between processor 402 and other components in the system 400.
System 400 includes a memory 406. Memory 406 may be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, or other memory device. Memory 406 may store
instructions and/or data represented by data signals that may be executed by processor 402. The instructions and/or data may comprise code for performing any and/or all of the techniques of the
present invention. Memory 406 may also contain additional software and/or data (not shown). A cache memory 408 may reside inside processor 402 that stores data signals stored in memory 406. Cache
memory 408 in this embodiment speeds up memory accesses by the processor by taking advantage of its locality of access. Alternatively, in another embodiment, the cache memory may reside external to
the processor.
A bridge/memory controller 410 may be coupled to the processor bus 404 and memory 406. The bridge/memory controller 410 directs data signals between processor 402, memory 406, and other components in
the system 400 and bridges the data signals between processor bus 404, memory 406, and a first input/output (I/O) bus 412. In some embodiments, the bridge/memory controller provides a graphics port
for coupling to a graphics controller 413. In this embodiment, graphics controller 413 interfaces to a display device (not shown) for displaying images rendered or otherwise processed by the graphics
controller 413 to a user. The display device may comprise a television set, a computer monitor, a flat panel display, or other suitable display device.
First I/O bus 412 may comprise a single bus or a combination of multiple buses. First I/O bus 412 provides communication links between components in system 400. A network controller 414 may be
coupled to the first I/O bus 412. The network controller links system 400 to a network that may include a plurality of processing systems (not shown in FIG. 3) and supports communication among
various systems. The network of processing systems may comprise a local area network (LAN), a wide area network (WAN), the Internet, or other network. In some embodiments, a display device controller
416 may be coupled to the first I/O bus 412. The display device controller 416 allows coupling of a display device to system 400 and acts as an interface between a display device (not shown) and the
system. The display device may comprise a television set, a computer monitor, a flat panel display, or other suitable display device. The display device receives data signals from processor 402
through display device controller 416 and displays information contained in the data signals to a user of system 400.
In some embodiments, camera 418 may be coupled to the first I/O bus. Camera 418 may comprise a digital video camera having internal digital video capture hardware that translates a captured image
into digital graphical data. The camera may comprise an analog video camera having digital video capture hardware external to the video camera for digitizing a captured image. Alternatively, camera
418 may comprise a digital still camera or an analog still camera coupled to image capture hardware. A second I/O bus 420 may comprise a single bus or a combination of multiple buses. The second I/O
bus 420 provides communication links between components in system 400. A data storage device 422 may be coupled to the second I/O bus 420. The data storage device 422 may comprise a hard disk drive,
a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device. Data storage device 422 may comprise one or a plurality of the described data storage devices.
A keyboard interface 424 may be coupled to the second I/O bus 420. Keyboard interface 424 may comprise a keyboard controller or other keyboard interface device. Keyboard interface 424 may comprise a
dedicated device or may reside in another device such as a bus controller or other controller device. Keyboard interface 424 allows coupling of a keyboard to system 400 and transmits data signals
from a keyboard to system 400. A user input interface 425 may be coupled to the second I/O bus 420. The user input interface may be coupled to a user input device, such as a mouse, joystick, or
trackball, for example, to provide input data to the computer system. Audio controller 426 may be coupled to the second I/O bus 420. Audio controller 426 operates to coordinate the recording and
playback of audio signals. A bus bridge 428 couples first I/O bridge 412 to second I/O bridge 420. The bus bridge operates to buffer and bridge data signals between the first I/O bus 412 and the
second I/O bus 420.
Embodiments of the present invention are related to the use of the system 400 to simplify multi-resolution models. According to one embodiment, simplifying a multi-resolution model using a hybrid
error metric may be performed by the system 400 in response to processor 402 executing sequences of instructions in memory 404. Such instructions may be read into memory 404 from another
computer-readable medium, such as data storage device 422, or from another source via the network controller 414, for example. Execution of the sequences of instructions causes processor 402 to
simplify a multi-resolution model using a hybrid error metric according to embodiments of the present invention. In an alternative embodiment, hardware circuitry may be used in place of or in
combination with software instructions to implement embodiments of the present invention. Thus, the present invention is not limited to any specific combination of hardware circuitry and software.
The elements of system 400 perform their conventional functions well-known in the art. In particular, data storage device 422 may be used to provide long-term storage for the executable instructions
and data structures for embodiments of methods of simplifying multi-resolution models in accordance with the present invention, whereas memory 406 is used to store on a shorter term basis the
executable instructions of embodiments of the methods for simplifying multi-resolution models in accordance with the present invention during execution by processor 402.
While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications of the illustrative
embodiments, as well as other embodiments of the invention, which are apparent to persons skilled in the art to which the inventions pertains are deemed to lie within the spirit and scope of the | {"url":"http://www.google.co.uk/patents/US6198486","timestamp":"2014-04-18T00:37:15Z","content_type":null,"content_length":"98229","record_id":"<urn:uuid:27e8b043-ef8a-41f2-932d-0554c10120c2>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00106-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculate V0(t) In The Circuit Shown In The Figure ... | Chegg.com
Electrical Engineering
Need help!
Image text transcribed for accessibility: Calculate v0(t) in the circuit shown in the figure below if i1(t) is 200 cos(105t + 60 degree ) mA, i2(t) is 100 sin(105t + 90 degree ) mA, and vS(t) = 10
sin(105t) V. Find the amplitude of v0(t). Find the phase of v0(t) in degrees. V degrees.
Electrical Engineering
Answers (1)
• Need help!
Rating:5 stars 5 stars 1
TheGladiator answered 26 minutes later | {"url":"http://www.chegg.com/homework-help/questions-and-answers/calculate-v0-t-circuit-shown-figure-i1-t-200-cos-105t-60-degree-ma-i2-t-100-sin-105t-90-de-q3939821","timestamp":"2014-04-16T22:27:07Z","content_type":null,"content_length":"20771","record_id":"<urn:uuid:058b66ab-a5e5-4a9a-9550-20d362b6cf6c>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00363-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patterns in Fractals
Shodor > Interactivate > Lessons > Patterns in Fractals
This lesson is designed to introduce students to the idea of finding patterns in the generation of several different types of fractals.
Upon completion of this lesson, students will:
• have been introduced to patterns.
• have learned the terminology used with patterns.
• have practiced finding patterns in the observable process of fractal generation.
Standards Addressed:
Textbooks Aligned:
Student Prerequisites
• Arithmetic: Student must be able to:
□ perform integer and fractional arithmetic
□ perform the Pythagorean theorem
□ calculate the area of a triangle
• Technological: Students must be able to:
□ perform basic mouse manipulations such as point, click and drag
□ use a browser for experimenting with the activities
Teacher Preparation
• Access to a browser
• pencil and paper
• Access to a calculator (optional)
• Copies of supplemental materials for the activities:
□ Patterns in Fractals Data Table
Lesson Outline
1. Focus and Review
Remind students what has been learned in previous lessons that will be pertinent to this lesson and/or have them begin to think about the words and ideas of this lesson.
□ If the students have studied fractals previously, you may ask, "Do you remember fractals? What can you tell me about them?" or "Can anyone tell me what fractals might have to do with
□ If students are not familiar with fractals, that is okay. They do not need knowledge of fractals for this lesson. You can begin with questions such as, "Does anyone know what a pattern or a
sequence is?" or "Can anyone tell me a sequence that we see everyday?"
2. Objectives
Let the students know what it is they will be doing and learning today. Say something like this:
□ Today, class, we will be talking about patterns. After this lesson you will understand them better, be able to talk about them, and be able to pick them out of a process.
□ We are going to use the computers to learn about patterns, but please do not turn your computers on or go to this page until I ask you to. I want to show you a little about patterns first.
3. Teacher Input
Explain to the students how to do the assignment. You should model or demonstrate it for the students, especially if they are not familiar with how to use our computer applets.
□ Open your browser to The Hilbert Curve in order to demonstrate this activity to the students.
□ Ask the students what they see. They should tell you that they see a line segment. Point out to the students that the box at the top of the applet tells you that the line segment has a size
of 1.0 units.
□ Explain to the students that when you press the button to go to the next stage, a process will take place or that the applet will do something to the line segment on the screen.
□ Press the button to proceed to the next stage. Ask the students to describe what they see. They should tell you that there is now a rectangle in the middle of the line segment standing on
□ Ask the students to describe the lengths of the segments in the rectangle and the line. Help the students to see that the new figure is made up of 9 line segments that are all the same
length. Point out to the students that the box at the top of the applet tells us that there are 9 line segments of size 1/3.0 units.
□ Ask the students what 1/3.0 means. They should tell you that it means one-third. Ask them, "One-third of what?" Help the students see that these line segments on the screen are one-third of
the length of the original line segment.
□ Have the students guess what will happen when you press the button to go to the next stage. Explain to them that the process that happened before will happen to every line segment in the
□ Press the button to go to the next stage. Ask the students if they are surprised. Have a student explain why the picture looks as it does. Point out the box at the top of the applet that
tells the students how many segments there are in the figure and how long the segments are.
□ Ask the students, "Does it make sense that when we divided each of the line segments in the previous stage into three parts, that these line segments should be 1/9 in length?" Have a student
explain why this is true.
□ Pass out the Patterns in Fractals Data Table. With the students, show how you would answer the questions for The Hilbert Curve.
4. Guided Practice
Try another example, letting the students direct your moves. Or, you may simply ask, "Can anyone describe the steps you will take for this assignment?"
□ If your class seems to understand the process for doing this assignment, simply ask, "Can anyone tell me the steps that you will need to take to fill in the rest of this chart?"
□ If your class seems to be having a little trouble with this process, do another example together, but let the students direct your actions:
☆ Open the applet Another Hilbert Curve.
☆ Ask, "What do I need to do to answer this first question on our data table?"
☆ Let the students take the class through the steps to answer the questions for the second applet. If they seem to be having trouble, give them a hint. If they do something incorrectly, see
if they find their own mistake, or gently suggest they try another way.
5. Independent Practice
□ Allow the students to work on their own to complete the rest of the data table worksheet. Monitor the room for questions and to be sure that the students are on the correct web site.
□ Students will use these additional sites:
☆ Sierpinski's Triangle
☆ Koch's Snowflake
☆ Sierpinski's Carpet
□ Students may need help with the questions involving finding areas on the last three applets. You may choose to let the students work in groups to determine a method for finding the area of
the requested spaces. If the question seems too difficult, allow the students to move on to another applet or complete the challenging questions for extra credit.
6. Closure
You may wish to bring the class back together for a discussion of the findings. Once the students have been allowed to share what they found, summarize the results of the lesson. You may make a
list of characteristics for the students to keep in their notebooks.
Alternate Outline
This lesson can be rearranged in several ways.
• You may choose to only use a couple of the computer applets for this lesson.
• You may assign each of the five applets to a different group that would report their finding back to the class.
• You may invent your own way of using this lesson to suit the needs of your students.
Suggested Follow-Up
This lesson can be followed by Patterns in Pascal's Triangle, which will allow students to continue to build the skills necessary to identify patterns. Another lesson, An Introduction to Sequences
will introduce students to sequences of numbers unrelated to geometric shapes or fractals.
©1994-2014 Shodor Website Feedback
This lesson is designed to introduce students to the idea of finding patterns in the generation of several different types of fractals.
Upon completion of this lesson, students will:
• have been introduced to patterns.
• have learned the terminology used with patterns.
• have practiced finding patterns in the observable process of fractal generation.
Standards Addressed:
Textbooks Aligned:
Student Prerequisites
• Arithmetic: Student must be able to:
□ perform integer and fractional arithmetic
□ perform the Pythagorean theorem
□ calculate the area of a triangle
• Technological: Students must be able to:
□ perform basic mouse manipulations such as point, click and drag
□ use a browser for experimenting with the activities
Teacher Preparation
• Access to a browser
• pencil and paper
• Access to a calculator (optional)
• Copies of supplemental materials for the activities:
□ Patterns in Fractals Data Table
Lesson Outline
1. Focus and Review
Remind students what has been learned in previous lessons that will be pertinent to this lesson and/or have them begin to think about the words and ideas of this lesson.
□ If the students have studied fractals previously, you may ask, "Do you remember fractals? What can you tell me about them?" or "Can anyone tell me what fractals might have to do with
□ If students are not familiar with fractals, that is okay. They do not need knowledge of fractals for this lesson. You can begin with questions such as, "Does anyone know what a pattern or a
sequence is?" or "Can anyone tell me a sequence that we see everyday?"
2. Objectives
Let the students know what it is they will be doing and learning today. Say something like this:
□ Today, class, we will be talking about patterns. After this lesson you will understand them better, be able to talk about them, and be able to pick them out of a process.
□ We are going to use the computers to learn about patterns, but please do not turn your computers on or go to this page until I ask you to. I want to show you a little about patterns first.
3. Teacher Input
Explain to the students how to do the assignment. You should model or demonstrate it for the students, especially if they are not familiar with how to use our computer applets.
□ Open your browser to The Hilbert Curve in order to demonstrate this activity to the students.
□ Ask the students what they see. They should tell you that they see a line segment. Point out to the students that the box at the top of the applet tells you that the line segment has a size
of 1.0 units.
□ Explain to the students that when you press the button to go to the next stage, a process will take place or that the applet will do something to the line segment on the screen.
□ Press the button to proceed to the next stage. Ask the students to describe what they see. They should tell you that there is now a rectangle in the middle of the line segment standing on
□ Ask the students to describe the lengths of the segments in the rectangle and the line. Help the students to see that the new figure is made up of 9 line segments that are all the same
length. Point out to the students that the box at the top of the applet tells us that there are 9 line segments of size 1/3.0 units.
□ Ask the students what 1/3.0 means. They should tell you that it means one-third. Ask them, "One-third of what?" Help the students see that these line segments on the screen are one-third of
the length of the original line segment.
□ Have the students guess what will happen when you press the button to go to the next stage. Explain to them that the process that happened before will happen to every line segment in the
□ Press the button to go to the next stage. Ask the students if they are surprised. Have a student explain why the picture looks as it does. Point out the box at the top of the applet that
tells the students how many segments there are in the figure and how long the segments are.
□ Ask the students, "Does it make sense that when we divided each of the line segments in the previous stage into three parts, that these line segments should be 1/9 in length?" Have a student
explain why this is true.
□ Pass out the Patterns in Fractals Data Table. With the students, show how you would answer the questions for The Hilbert Curve.
4. Guided Practice
Try another example, letting the students direct your moves. Or, you may simply ask, "Can anyone describe the steps you will take for this assignment?"
□ If your class seems to understand the process for doing this assignment, simply ask, "Can anyone tell me the steps that you will need to take to fill in the rest of this chart?"
□ If your class seems to be having a little trouble with this process, do another example together, but let the students direct your actions:
☆ Open the applet Another Hilbert Curve.
☆ Ask, "What do I need to do to answer this first question on our data table?"
☆ Let the students take the class through the steps to answer the questions for the second applet. If they seem to be having trouble, give them a hint. If they do something incorrectly, see
if they find their own mistake, or gently suggest they try another way.
5. Independent Practice
□ Allow the students to work on their own to complete the rest of the data table worksheet. Monitor the room for questions and to be sure that the students are on the correct web site.
□ Students will use these additional sites:
☆ Sierpinski's Triangle
☆ Koch's Snowflake
☆ Sierpinski's Carpet
□ Students may need help with the questions involving finding areas on the last three applets. You may choose to let the students work in groups to determine a method for finding the area of
the requested spaces. If the question seems too difficult, allow the students to move on to another applet or complete the challenging questions for extra credit.
6. Closure
You may wish to bring the class back together for a discussion of the findings. Once the students have been allowed to share what they found, summarize the results of the lesson. You may make a
list of characteristics for the students to keep in their notebooks.
Alternate Outline
This lesson can be rearranged in several ways.
• You may choose to only use a couple of the computer applets for this lesson.
• You may assign each of the five applets to a different group that would report their finding back to the class.
• You may invent your own way of using this lesson to suit the needs of your students.
Suggested Follow-Up
This lesson can be followed by Patterns in Pascal's Triangle, which will allow students to continue to build the skills necessary to identify patterns. Another lesson, An Introduction to Sequences
will introduce students to sequences of numbers unrelated to geometric shapes or fractals.
This lesson is designed to introduce students to the idea of finding patterns in the generation of several different types of fractals.
Remind students what has been learned in previous lessons that will be pertinent to this lesson and/or have them begin to think about the words and ideas of this lesson.
Let the students know what it is they will be doing and learning today. Say something like this:
Explain to the students how to do the assignment. You should model or demonstrate it for the students, especially if they are not familiar with how to use our computer applets.
Try another example, letting the students direct your moves. Or, you may simply ask, "Can anyone describe the steps you will take for this assignment?"
You may wish to bring the class back together for a discussion of the findings. Once the students have been allowed to share what they found, summarize the results of the lesson. You may make a list
of characteristics for the students to keep in their notebooks.
This lesson can be followed by Patterns in Pascal's Triangle, which will allow students to continue to build the skills necessary to identify patterns. Another lesson, An Introduction to Sequences
will introduce students to sequences of numbers unrelated to geometric shapes or fractals. | {"url":"http://www.shodor.org/interactivate/lessons/PatternsInFractals/","timestamp":"2014-04-16T10:32:24Z","content_type":null,"content_length":"42248","record_id":"<urn:uuid:fe067117-27a8-4dcc-9234-cdc2f5f02e19>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00101-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Post a New Question | Current Questions
college algebra
4^(x-3) = 3^(2x) (x-3)*Log4 = 2x*Log 3 Divide both sides by Log 4: x-3 = 2x * 0.79248 = 1.5850x x - 1.5850x = 3 -0.5850x = 3 X = -5.1282
Tuesday, November 19, 2013 at 9:09pm
literature (bejamin Franklin)
As a poor boy, Franklin realized that he could not buy many books, but he came up with an idea that could give people access to as many books as they wanted. This idea was essentially.. a) get a
government grant b)get the government to buy the books c)have all your friends ...
Tuesday, November 19, 2013 at 6:07pm
A community college has 2,000 students and 50 instructors. Next year the enrollment will be 6,000. How many new instructors should be hired if the college wants to keep the same student to instructor
Tuesday, November 19, 2013 at 5:17pm
college algebra
visit calc101.com and click on the long division button to play around with polynomial division.
Tuesday, November 19, 2013 at 12:30am
college algebra
x^3 + 3x^2 − 14x − 20 = (x+5)(x^2-2x-4) Now just solve the quadratic for the other two values of x.
Tuesday, November 19, 2013 at 12:27am
college algebra
not sure how to find the t, tried using the log approach but can't get the right answer.
Monday, November 18, 2013 at 9:24pm
college algebra
A partial solution set is given for the polynomial equation. Find the complete solution set. (Enter your answers as a comma-separated list.) x^3 + 3x^2 − 14x − 20 = 0; {−5}
Monday, November 18, 2013 at 8:36pm
college algebra
Use long division to perform the division. (Express your answer as quotient + remainder/divisor.) (x^4 + 7x^3 − 2x^2 + x − 1) / (x − 1)
Monday, November 18, 2013 at 8:10pm
college algebra
Solve the exponential equation using logarithms. Give the answer in decimal form, rounding to four decimal places. (Enter your answers as a comma-separated list.) 4^(x − 3) = 3^(2x) x = _____
Monday, November 18, 2013 at 7:41pm
college algebra
Use a calculator to help solve the problem. An isotope of lead, 201Pb, has a half-life of 8.4 hours. How many hours ago was there 40% more of the substance? (Round your answer to one decimal place.)
Monday, November 18, 2013 at 7:38pm
college algebra
no it looks more like this: 4^(x-3)=3^2x
Monday, November 18, 2013 at 7:37pm
Which words go in the following sentences. Young Charles was born in 1902 and was growing/grew up on a farm. Later, although he has studied/had studied engineering in college, he was more interested
in flying planes.
Monday, November 18, 2013 at 4:39pm
College Accounting
A light truck is purchased on January 1 at a cost of $27,000. It is expected to serve for eight years and have a salvage value of $3,000. Calculate the depreciation expense for the first and third
years of the truck's life using the following methods. If required, round ...
Monday, November 18, 2013 at 8:26am
Business 101
1. Carmen Santiago works for a number of businesses as a consultant. She has helped design accounting systems, provided accounting services, and analyzed the financial strength of her clients
businesses. Carmen is working as a A. public accountant. B. ...
Sunday, November 17, 2013 at 6:59pm
college algebra
(1/2)^(4/k) = 0.7 Now just solve for k. Expect a number greater than 4, since if the half-life were 4, then 50% would decay in 4 years.
Sunday, November 17, 2013 at 4:29pm
college algebra
I assume you meant 4^(x-3) = 32^x Since 4=2^2 and 32=2^5, we have 2^(2x-6) = 2^(5x) 2x-6 = 5x 3x = -6 x = -2 Check: 4^(-5) = 2^-10 32^(-2) = 2^-10
Sunday, November 17, 2013 at 4:27pm
college algebra
If there was 40% more, then that's 1.4 times what's there now. So, starting with an original amount of 1, there is now 1/1.4 of it left. (1/2)^(t/8.4) = 1/1.4 Now just solve for t hours. Note: It
will be less than 8.4, since 8.4 hours ago, there was 100% more.
Sunday, November 17, 2013 at 4:14pm
college algebra
An isotope of lead, 201Pb, has a half-life of 8.4 hours. How many hours ago was there 40% more of the substance? (Round your answer to one decimal place.) ____hr
Sunday, November 17, 2013 at 4:10pm
college algebra
Solve the exponential equation using logarithms. Give the answer in decimal form, rounding to four decimal places. (Enter your answers as a comma-separated list.) 4x − 3 = 32x x = _____
Sunday, November 17, 2013 at 3:15pm
college algebra
In 4 years, 30% of a radioactive element decays. Find its half-life. (Round your answer to one decimal place.) _____ yr
Sunday, November 17, 2013 at 3:10pm
college essay conclusion edit
what is it too wordy?
Friday, November 15, 2013 at 7:07pm
College and Math Courses
Contact the particular college or university you have in mind. Only they can tell you.
Friday, November 15, 2013 at 7:02pm
College and Social studies Courses
You'd have to contact the particular college or university to be sure.
Friday, November 15, 2013 at 7:01pm
college essay conclusion edit
Then send it.
Friday, November 15, 2013 at 7:01pm
college essay conclusion edit
I did, I edited my entire essay, Im just trying to explain to admissions my point of view
Friday, November 15, 2013 at 7:00pm
College and Social studies Courses
Is Economic 201 or 202 equivalent to AP Microeconomic or AP Macroeconomic?
Friday, November 15, 2013 at 6:58pm
College and Math Courses
Is statistic 243 equivalent to AP Statistic?
Friday, November 15, 2013 at 6:56pm
college essay cover letter
never mind mind I figured it out
Friday, November 15, 2013 at 6:56pm
college essay conclusion edit
You don't seem to have understood or applied anything Bob Pursley has suggested to you. Send it in as is. The admissions people need to see YOUR writing, not our fixes.
Friday, November 15, 2013 at 6:55pm
college essay conclusion edit
can someone edit this. also, does it sound too melodramatic? And so, dear reader, I am no longer a child. I was unable to maintain my simplest and purest form, as the fragile innocence of my
childhood has been taken. I am like an old book, torn and weathered from time, ...
Friday, November 15, 2013 at 6:54pm
college essay cover letter
and if so, would I just say; Dear Director of Admissions, Lee H. Melvin, or would I just put dear Lee H. Melvin,
Friday, November 15, 2013 at 6:52pm
college essay cover letter
is the director of admissions the same as a dean of admssions?
Friday, November 15, 2013 at 6:50pm
college letter edit
Send it.
Friday, November 15, 2013 at 6:04pm
college essay cover letter
Christine, you are still wordy. Most students do not do a cover letter, but I have to say, your second paragraph is really distracting from your story. I would recommend this.... Dear Dean of
Admissions (I would put his/her name in that salutation) Attached is a supplemental ...
Friday, November 15, 2013 at 4:57pm
college letter edit
can someone help edit this? I literally need this in by tonight Dear Admissions, As I was recently speaking with an admissions officer from Uconn, I was exponentially pleased to be informed that the
University of Connecticut has allowed for the submition of a writing ...
Friday, November 15, 2013 at 4:52pm
college essay cover letter
Can someone edit this Dear Admissions, I was pleased to find that the University of Connecticut allowed for applicants to submit a writing supplement along with their applications. So, I have taken
this opportunity to show the main experiences that I have experienced the past ...
Friday, November 15, 2013 at 3:13pm
Salaries of 31 college graduates who took a course in college have a mean, x of 66,700 a standard deviation , o of 17,120, construct a 90% confidence interval for estimating the population mean u.
Friday, November 15, 2013 at 11:06am
A couple plans to save for their child s college education. What principal must be deposited by the parents when their child is born in order to have $100,000 when the child reaches age 18? Assume
the money earns 4% interest, compounded quarterly.
Friday, November 15, 2013 at 6:09am
College Chemistry
Thursday, November 14, 2013 at 1:53pm
COMMON CORE? IT'S PART OF COMMON CORE, WHICH WILL ADVANCE ALL OUR KIDS TO NEXT GRADE LEVEL WEATHER OR NOT IF THEY PASS. If your kid is advanced they will be expected to maintain a certain percentage
not based on their grade and if they don't they will be put in class ...
Wednesday, November 13, 2013 at 5:19pm
Personal Finance
If the economy is not doing well and jobs are hard to find, what would you suggest for your friend to do if they are graduating high school in a month? A) sit at home until the economy improves B)
save money by going to a local community college C) look for a job D) all of the...
Wednesday, November 13, 2013 at 10:41am
College Level Calculus
number of extra trees per acre --- n number of trees per acre = 20+n yield per tree = 720 - 15n number of oranges = N = (20+n)(720-15n) = 14400 + 420n - 15n^2 N ' = 420 - 30n = 0 for a max of N 30n =
420 n = 14 for the best yield there should be 20+14 or 34 trees per acre
Tuesday, November 12, 2013 at 11:18pm
College Level Calculus
Each orange tree grown in California produces 720 oranges per year if not more than 20 trees are planted per acre. For each additional tree planted per acre, the yield per tree decreases by 15
oranges. How many trees per acre should be planted to obtain the greatest number of ...
Tuesday, November 12, 2013 at 10:31pm
math 540
suppose it takes a college student an average of 5 minutes to find a parking space on the schools parking lot. Assume also that this time is normally distributed with a standard deviation of 2
minutes. What time is exceeded by approximately 75% of the college students when ...
Tuesday, November 12, 2013 at 9:21pm
college physics
A weightlifter has a basal metabolic rate of 92.4 W. As he is working out, his metabolic rate increases by about 650 W. (a) How many hours does it take him to work off a 450 Calorie bagel if he stays
in bed all day? (b) How long does it take him if he's working out? (c) ...
Tuesday, November 12, 2013 at 8:44pm
sinθ = 1/1.33 θ=48.7° Now, the radius r is r/2 = tanθ
Tuesday, November 12, 2013 at 12:24am
a point source is on the bottom of a pool 2m deep. find the diameter of the largest circle at the surface which light can emerge from the water (n=1.33)
Monday, November 11, 2013 at 11:36pm
College anatomy
never mind negative feedback is right. lul
Monday, November 11, 2013 at 9:56pm
College anatomy
i tried negative and negative feedback both wrong
Monday, November 11, 2013 at 9:51pm
the ratio of males to females in a college class is 2:6. If there are 72 students in the class, how many are male?
Monday, November 11, 2013 at 9:19pm
College anatomy
It was observed that blood glucose levels were falling in an individual. This event would be sensed by (a) cells in the (b) of the pancreas, causing them to release the hormone (c). This series of
events would be part of an (d) loop/mechanism, serving as a self-adjusting ...
Monday, November 11, 2013 at 7:33pm
college essay
I am my truest and purest self, a mere child embarking on the beginning of life. So, how can it be that my mother, my main source of comfort, can possibly be dying? Stage 4 Metastases Melanoma
Cancer: the doctor says is the grim diagnosis, the definite fate that we have no say...
Monday, November 11, 2013 at 5:10pm
An opinion poll asks a random sample of 100 college seniors how they view their job prospects. In all, 53 say "Good." The 95% confidence interval has a margin of error of 9.9%. The sample used
actually included 130 college seniors but 30 of the seniors refused to ...
Monday, November 11, 2013 at 3:51pm
To plan the budget for next year a college needs to estimate what impact current economic downturn might have on student requests for financial aid. Historically, this college has provided aid to 35%
of its students. Officials look at a random sample of this year s ...
Monday, November 11, 2013 at 2:50pm
U.S. and Global economics; involving math problems
I've been working on this problem for "hours" and I can't quite figure out how to solve it; can anyone help plzzz? I would appreciate it. :) Two college students went on spring break to Guadalajara,
Mexico. One took the vacation in 2002, while the other went ...
Monday, November 11, 2013 at 10:03am
In the space before each of the following items, place a C if the capitalization is correct, a zero (0) if it is incorrect. Then insert capitals where they should be, and draw a line through capitals
that should be small letters. 1. the Rocky Mountains 2. my spanish course 3. ...
Monday, November 11, 2013 at 3:22am
College Physics
If the two lines are attached at the same point on the branch so a triangle is formed, then the center of gravity of the plank, hiker system must be directly below the attachment point. mass of plank
= 2.25 * length for length (.5L)^2 = 21.78^2 - 10.89^2 (.5 L)^2 = 474.4 - 118...
Monday, November 11, 2013 at 12:17am
College Physics
A wooden bridge crossing a canyon consists of a plank with length density λ = 2.25 kg/m suspended at h = 10.89 m below a tree branch by two ropes of length L = 2h and with a maximum rated tension of
2000 N, which are attached to the ends of the plank, as shown in the ...
Sunday, November 10, 2013 at 11:50pm
College Algebra
Sunday, November 10, 2013 at 4:22pm
college essay
Then I'll repeat what Bob Pursley told you: "What I am saying to you, is that to communicate clearly, you need to shed the extravagant language. It does not impress the educated." ~~ Simplify your
vocabulary. ~~ Write shorter and less complicated sentences. ...
Sunday, November 10, 2013 at 3:11pm
college essay
Im trying to say that I was like a hollow shell. I dont know Im trying to make the essay developed and stand out to the admissions officers, while also not making it overdone or overly complicated.
Sunday, November 10, 2013 at 3:07pm
college essay
Why are you using the word "callous"? What do you mean by this?
Sunday, November 10, 2013 at 3:01pm
college essay
I entered a callous shell of myself,and once again allowed my obstacles overtake my life.
Sunday, November 10, 2013 at 2:56pm
college essay
also is it that im writing too much in general or is it the vocab?
Sunday, November 10, 2013 at 2:51pm
college essay
Concentrate on Bob Pursley's main points: 1. ... to communicate clearly, you need to shed the extravagant language. It does not impress the educated. http://grammar.ccc.commnet.edu/grammar/
concise.htm 2. ... colleges are looking to admit young people who have a ...
Sunday, November 10, 2013 at 2:34pm
college essay
so should I make it shorter?
Sunday, November 10, 2013 at 2:16pm
college essay
Your essay is much to sweet, and full of fat....and words. Reread just this part: <<Now, I put full zeal into my academics, community service, and life goals. Once a callous shell deprived of
substance and motivation, I am now an ambitious allegiant, heading towards my ...
Sunday, November 10, 2013 at 1:43pm
college essay
Here's another one for editing papers. Your paper was well-written btw. Hope you make it in! I can't paste the link in so if you go to Paper Rater's website, that could help.
Sunday, November 10, 2013 at 12:05pm
college essay
We don't edit/proofread first or rough drafts here. Sorry ... takes way too long. Make sure you go through these steps 3 or 4 times before posting a much more polished piece, please.
-------------------------- Please go over your paper with the following in mind. Thanks to...
Sunday, November 10, 2013 at 9:45am
college essay
can someone edit my college essay? I was told by an admissions counselor to write a supplement describing my low cumulative GPA. It is a rough first draft, and probolaly needs to be heavily edited I
am my truest and purest self, a mere child embarking on the beginning of the ...
Sunday, November 10, 2013 at 9:37am
College Algebra
x/10000=sin(14º) x=10000sin(14º) , since sin(14º)=0.24192 x=10000*(0.24192) x=2419.2 feet
Sunday, November 10, 2013 at 12:24am
college essay
I think you need to discuss how your experiences, and future at UConn will make you better, and make the world a better place.
Saturday, November 9, 2013 at 7:40pm
college essay
In the essay, make sure you emphasize the steady rise of your GPA and what it took for you to achieve that. Spend most of the essay telling how you achieved that GPA increase.
Saturday, November 9, 2013 at 7:10pm
college essay
I know its definitely a reach school but as far as the essay goes, basically I should just write an essay about what happened freshan and sophmore year and how I grew from it? do you have any other
advice on how to write it?
Saturday, November 9, 2013 at 6:55pm
college essay
Less than 50 percent of applicants are admitted to UConn. Check this site for more details. http://collegeapps.about.com/od/collegeprofiles/p/connecticut.htm
Saturday, November 9, 2013 at 6:49pm
college essay
do any of you think I have a chance?
Saturday, November 9, 2013 at 6:45pm
college essay
no the admissions counselor gave me an email to send the writing supplement to, so its not sent through the common app. She said it can be as "long as it takes to tell my story" and there really isnt
a minimum sat requirement, but they only look at reading and math
Saturday, November 9, 2013 at 6:33pm
college essay
Your essay may make the difference then. All applicants must apply through the Common Application. Due to the number of applications received, interviews are not part of the admissions process.
Students are welcome to share additional information with us through the Common ...
Saturday, November 9, 2013 at 6:28pm
college essay
I know Uconns competitive but its definitely my first choice school, so Im trying to do everything I can to increase my chances of getting in. My cumulative GPA is under the 3.33 that they look for,
but now its at a 3.77 so Im hoping they see that and understand why my grades ...
Saturday, November 9, 2013 at 6:22pm
college essay
Uconn and Ill probolaly also send it i as a writing supplement to Fairfield University
Saturday, November 9, 2013 at 6:19pm
college essay
Make it an essay, not a story. Be sure you emphasize the steady rise in your GPA from your sophomore year to now. What college is this? I'd like to look up their admission requirements.
Saturday, November 9, 2013 at 6:18pm
college essay
should I write it like a story or more like an essay that I would write in school? Also, I dont know if you give college advice or not, but do you think I have a chance for being admitted?
Saturday, November 9, 2013 at 5:49pm
college essay
If this is the supplement recommended by the UConn counselor -- you should stick to the reasons for your low grades. Your thesis could be something like "My grades are a disappointment to me, but I
had several problems to contend with in high school."
Saturday, November 9, 2013 at 5:45pm
college essay
what should be the main focus of the essay? Im obviously going to write about my mom, but im confused on how to structure it and how exactly to write it. It just seems confusing to me on how Im
supposed to write an impacting and honest story while also creating a d=well ...
Saturday, November 9, 2013 at 5:38pm
college essay
I'd make it shorter than 5 pages. Perhaps one page is enough for this essay.
Saturday, November 9, 2013 at 5:33pm
college essay
so how long do you think it should approximately be? im not sure whether I should be writing a 5 pg narrative or a short essay. also, do you give college advice?
Saturday, November 9, 2013 at 5:19pm
college essay
Also -- please check back here to see if other tutors have additional advice.
Saturday, November 9, 2013 at 5:15pm
college essay
Check these sites. http://www.englishclub.com/writing/college-application-essays/lfi_introductions.html http://www.usnews.com/education/blogs/professors-guide/2010/09/15/
10-tips-for-writing-the-college-application-essay http://www.infoplease.com/edu...
Saturday, November 9, 2013 at 5:14pm
college essay
Can you give me any advice on writing the essay
Saturday, November 9, 2013 at 5:09pm
college essay
sorry about your mom
Saturday, November 9, 2013 at 5:06pm
college essay
I have a 2.68 GPA. I know its bad, but this was mainly due to my mother dying freshman year and my fathers debilitating state after her death. I had a 2.0 freshman yr, a 2.65 sophmore yr, a 3,259
junior yr, and a 3.67 senior yr (so far) A Uconn admissions counselor told me to ...
Saturday, November 9, 2013 at 5:05pm
College Physics
Friday, November 8, 2013 at 10:54pm
College Physics
just for others who are looking for this answer: You have to convert 24Cal/hr into J/s since 1Cal=4190 Joules 24Cal/hr x 1hr/3600s x 4190 J = Q/t Then plug into formula stated above to get a final
answer of 0.026858974 = 0.027 (2 sigfigs)
Friday, November 8, 2013 at 5:21am
In a study of caffeine and stress, college students indicate how many cups of coffee they drink per day and their stress level on a scale of 1 to 10. The date follow: Number of Cups of Coffee
3,2,4,6,5,1,7,3 Stress Level 5,3,3,9,4,2,10,5 Calculate a Pearsons r to determine the...
Thursday, November 7, 2013 at 8:47pm
To estimate the mean score of those who took the Medical College Admission Test on your campus, you will obtain the scores of an SRS of students. From published information you know that the scores
are approximately Normal with standard deviation about 6.4. You want your ...
Wednesday, November 6, 2013 at 4:18pm
Career (Ms.Sue)
I want to be a lawyer do I have to do a master degree? what qualifications do I need? what subjects a suitable for college?
Wednesday, November 6, 2013 at 3:13pm
college math
i dont understand what information to put in and how to find the percent
Tuesday, November 5, 2013 at 8:52pm
college math
You can play around with lots of Z table stuff at http://davidmlane.com/hyperstat/z_table.html have fun
Tuesday, November 5, 2013 at 8:26pm
college math
Math homework please help!! 2. Suppose that people s weights are normally distributed, with mean 175 pounds and a standard deviation of 6 pounds. Round to the nearest hundredth of a percent a. What
percent of the population would weigh between 165 and 170 pounds? b. What ...
Tuesday, November 5, 2013 at 5:37pm
college resume
Go through it all and cut as much excess wording as possible. As it is right now, it's far too long, whether formatted and printed or sent electronically. The best resumes can be printed on one side
of one page ... mostly because the people reading them don't have time...
Tuesday, November 5, 2013 at 4:22pm
Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>> | {"url":"http://www.jiskha.com/college/?page=7","timestamp":"2014-04-19T21:00:14Z","content_type":null,"content_length":"36363","record_id":"<urn:uuid:0429495a-a7b3-4bc0-9bcb-34f401978aaf>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
Morse theory in TOP and PL categories?
up vote 15 down vote favorite
Apparently there are topological and piecewise linear versions of Morse theory. I would like to know of references that treat these topics.
How is a Morse function defined for compact manifolds (with boundary) in the TOP and PL categories?
It is well known that smooth Morse functions always exits for compact smooth manifolds. Are there similar results in the TOP and PL categories?
It is possible to classify closed smooth surfaces via smooth Morse theory. Is there a classification theorem for closed TOP (respestively PL) surfaces via topological (respectively PL) Morse theory?
morse-theory surfaces gt.geometric-topology
3 en.wikipedia.org/wiki/Discrete_Morse_theory – Daniel Moskovich May 14 '12 at 11:06
add comment
4 Answers
active oldest votes
For TOP Morse function a reference is the classical book of Kirby and Siebenmann "Foundational essays on topological manifolds, smoothings, and triangulations" (1977). The key point is to
consider the local standard coordinate charts given by the Morse lemma in the smooth category, and use this to define the TOP Morse functions. These are strictly related to topological
handlebody decompositions (so do not exist for non-smoothable topological 4-manifolds). In the PL category you can refer to the link of Daniel Moskovich or google "PL Morse function" (but
the TOP approach is not likely to work).
up vote
10 down Regarding the second part of the question, you can get such classification once you have proved TOP Morse functions exist! However, the techniques to prove in general that TOP Morse
vote functions exist are typically high-dimensional (dim $\geq 6$). So for surfaces it is likely that proving the existence of TOP Morse functions is equivalent to proving existence of
triangulations (which depends on the Schoenflies theorem).
@Daniele: Thank you for your answer. I want to point out that the existence of triangulations for surfaces does not depend on the Schoenflies theorem (ST). Usually, proofs of the
1 triangulation theorem for surfaces use the ST. This is arguably because once the ST is known the proof becomes simpler. However, using the ST to to prove the existence of triangulations
for surfaces is misleading in the sense that the ST fails in dimension 3, but the triangulation theorem holds for 3-manifolds. A proof of the triangualtion theorem for surfaces that does
not use the ST is due to E.E. Moise. – Victor May 23 '12 at 6:14
@Daniele: Moise's proof of the triangulation theorem for surfaces can be found Chapter 7 of "Geometric Topology in Dimensions 2 and 3". – Victor May 23 '12 at 6:15
Thank you for pointing out this VCF, you're right. Anyway, the proof of Moise is based on the combinatorial Schoenflies theorem (which holds also in $R^3$), since this used in the proof
of the approximation Theorem 3 in Chapter 6 in the book of Moise. This approximation theorem is a key ingredient in the proof of the triangulation theorem. – Daniele Zuddas May 24 '12 at
add comment
Forman's papers and the books by Kozlov and Orlik-Welker that you see cited at Daniel's Wikipedia link are good starting points for the more combinatorial tradition of PL Morse theory.
For more geometric approaches see Bestvina's PL Morse theory and the ancient Piecewise linear critical levels and collapsing by Kearton and Lickorish. Unfortunately, the literature on
up vote discrete Morse theory seems to be unaware of the Kearton-Lickorish 1972 paper and the still earlier papers by Kosinski and Kuiper (cited by Kearton and Lickorish) which also constructed
10 down some kinds of PL Morse functions. Even if they did it with heavier notation and by less illuminating arguments, their alternative understanding of PL Morse functions is still worth to be
vote aware of.
There is also Ken Brown's notion of a collapsing scheme from The geometry of rewriting systems: A proof of the Anick-Groves-Squier theorem. Algorithms and classification in combinatorial
1 group theory (Berkeley, CA, 1989), 137–163, Math. Sci. Res. Inst. Publ., 23, Springer, New York, 1992, which is an axiomatization of a special case from An infinite-dimensional
torsion-free FP_\infty group (with Ross Geoghegan). Invent. Math. 77 (1984), no. 2, 367–381 which is exactly the same as the acyclic matching formulation of Forman's approach but without
the assumption of finiteness. – Benjamin Steinberg May 14 '12 at 17:22
Thanks for these references -- I'm one of these combinatorics people who works with discrete Morse theory and hadn't known about some of them. I suppose it often happens that different
areas of math aren't fully aware of each other, though it looks like MathOverflow may help. I'm really appreciative of all the helpful topological insights, references, etc. I've been
finding here. Regarding your comment: I wonder if combinatorics people may not all know the precise relationship between being a CW complex and being in PL category, hence the
relationship of these different Morse theories. – Patricia Hersh May 16 '12 at 12:26
2 One more comment: it's been my impression for awhile now that Bestvina's PL Morse theory is actually quite different from Forman's discrete Morse theory, whereas indeed Brown's notion of
collapsing scheme is the same idea from a different viewpoint (with more emphasis on simple homotopy) as Forman's discrete Morse theory. – Patricia Hersh May 16 '12 at 14:24
Patricia, thanks for your comments. Finite CW complexes with PL attaching maps are polyhedra, i.e. objects of the PL category (not to be confused with polyhedra is the sense of
Combinatorics). The relationship of different Morse theories is not simple. Let's start with a discrete Morse function (in the sense of Forman) on a regular CW-complex $X$ such that all
cells of $X$ are critical (I guess this is kind of a trivial case of Forman's theory). If $X$ happens to be a PL manifold, the simplicial neighborhoods of the barycenters of cells of $X$
in the second barycentric subdivision of $X$... – Sergey Melikhov May 16 '12 at 14:39
... are the handles of a handlebody structure on $X$. Each handle $H=B^n\times B^m$ contains a smaller handle $h=B^n\times (\frac12 B^m)$ near the core of $H$. The given discrete Morse
function may be thought of as defined on the barycenters of cells of $X$. One can extend it to a PL function on all of $X$ so that it is constant on the smaller handles $h$, and
"vertical" on the collars $H\setminus h$. This will be a Morse function in the sense of Kearton and Lickorish. – Sergey Melikhov May 16 '12 at 14:57
show 8 more comments
I think that Daniele and Sergey have, between the two pretty much answered my question. However, I would like to add the following:
1. Regarding TOP Morse theory, in [Mor1959] M. Morse laid the foundations of the theory of topological non-degenerate functions, and proved the TOP Morse inequalities.
Also, in [Kui1961] the topological version of the Reeb-Milnor theorem for DIFF manifolds is proven. That is, a TOP n-dimensional closed manifold admiting a TOP Morse function having
exactly two non-degenerate critical points is homeomorphic to the n-sphere.
Another source containing further results for topological manifolds via TOP Morse theory that parallel those obtained for differentiable manifolds is J. Cantwell's paper [Can1967].
Kirby and Siebenmann's "Foundational essays on topological manifolds, smoothings, and triangulations is freely available here. In section 3 of Essay III (p. 80) they define Morse
functions in the DIFF and TOP categories for manifolds possibly with boundary.
up vote 4 Finally, simple examples of non-differentiable TOP Morse function are easily found. For example, the absolute value function on $\mathbb{R}$, $x\rightarrow |x|$. The origin is a
down vote non-degenerate critical point. Also the height function restricted to the double cone (i.e., the space formed by the cones $x^{2}+y^{2}=(z\pm1)^{2}$) has exactly two non-degenerate
critical points (the tips of the cones).
2. Regarding PL Morse theory, J. Harer's slides contain an interesting approach using homology. In particular, a PL Morse function is defined using Betti numbers.
[Cant1967] J. Cantwel, Topological non-degenerate functions, Tohoku Math. Journ., 20 (1968), 120-125.
[Kui1961] N.H. Kuiper, A continuous function with two critical points, Bull. Amer. Math. Soc., 67(1961), 281-285.
[Mor1959] M. Morse, Topological non-degenerate functions on a compact manifold $M$, Journal d'Analyse Math., 7 (1959), 189-208.
Thanks for the helpful information! – Patricia Hersh May 23 '12 at 15:57
add comment
There is also a completely unrelated theory the roots of which go to some work of Gromov, see:
up vote 2 down vote MR1621571 (99i:58027) Gershkovich, V.(5-MELB); Rubinstein, H.(5-MELB) Morse theory for Min-type functions. Asian J. Math. 1 (1997), no. 4, 696–715. 58E05 (53C20 53C23 57R70)
add comment
Not the answer you're looking for? Browse other questions tagged morse-theory surfaces gt.geometric-topology or ask your own question. | {"url":"http://mathoverflow.net/questions/96883/morse-theory-in-top-and-pl-categories/96926","timestamp":"2014-04-19T04:47:12Z","content_type":null,"content_length":"79618","record_id":"<urn:uuid:1bfc1d21-2f88-4787-8826-23b46cf1be51>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Berry-Esseen Type Bound in Kernel Density Estimation for Negatively Associated Censored Data
Journal of Applied Mathematics
Volume 2013 (2013), Article ID 541250, 9 pages
Research Article
A Berry-Esseen Type Bound in Kernel Density Estimation for Negatively Associated Censored Data
^1College of Science, Guilin University of Technology, Guilin 541004, China
^2Department of Mathematics, Ji'nan University, Guangzhou 510630, China
Received 19 February 2013; Accepted 11 July 2013
Academic Editor: XianHua Tang
Copyright © 2013 Qunying Wu and Pingyan Chen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work is properly cited.
We discuss the kernel estimation of a density function based on censored data when the survival and the censoring times form the stationary negatively associated (NA) sequences. Under certain
regularity conditions, the Berry-Esseen type bounds are derived for the kernel density estimator and the Kaplan-Meier kernel density estimator at a fixed point .
1. Introduction
Let be a sequence of the true survival times. The random variables (r.v.s.) are not assumed to be mutually independent; it is assumed, however, that they have a common unknown continuous marginal
distribution function (d.f.) and density function . Let the r.v.s. be censored on the right by the censoring r.v.s. , so that one observes only , where here and in the sequel, and is the indicator
random variable of the event . In this random censorship model, the censoring times , , are assumed to have the common d.f. ; they are also assumed to be independent of the r.v.s. . Following the
convention in the survival literature, we assume that both and are nonnegative random variables. In contrast to statistics for complete data, we observe only the pairs , , and the estimators are
based on these pairs.
The following nonparametric estimation of the distribution functions and due to Kaplan and Meier [1] is widely used to estimate and on the basis of the data : where denote the order statistics of and
is the concomitant of .
We introduce the kernel density estimator where are bandwidths and is some kernel function. When is known, (3) can be used to estimate the common density of the lifetimes. However, in most practical
cases is unknown and must be replaced by the Kaplan-Meier estimator , so the Kaplan-Meier kernel density estimator of the is defined by
There is an extensive literature on the Kaplan-Meier estimator for censored independent observations. We refer to papers by Földes and Rejtő [2], Gu and Lai [3], Gill [4], and Sun and Zhu [5]. Sun
and Zhu obtained the following Berry-Esseen bound for i.i.d. censored sequences.
Theorem A. Let be a bounded probability kernel function with compact support satisfying for integer ,
Let be -order continuously differentiable and let be continuously differentiable in a neighborhood of with for . Then where denotes the standard normal distribution function, and .
However, the censored dependent data appear in a number of applications. For example, repeated measurements in survival analysis follow this pattern; see Kang and Koehler [6]. In the context of
censored time series analysis, Shumway et al. [7] considered (hourly or daily) measurements of the concentration of a given substance subject to some detection limits, thus being potentially censored
from the right. Lecoutre and Ould-Said [8], Cai [9], and Liang and Uña-Álvarez [10] studied the convergence for the stationary -mixing data. However, the convergence for the NA data has not been
The main purpose of this paper is to study the kernel density estimator and the Kaplan-Meier kernel estimator of a density function based on censored data when the survival and the censoring times
form the stationary NA (see the following definition) sequences. Under certain regularity conditions, the Berry-Esseen type bounds are derived for the kernel density estimator and the Kaplan-Meier
kernel estimator at a fixed point .
Definition 1. Random variables , are said to be negatively associated (NA) if for every pair of disjoint subsets and of , where and are increasing for every variable (or decreasing for every
variable) such that this covariance exists. A sequence of random variables is said to be NA if every finite subfamily is NA.
Obviously, if is a sequence of NA random variables and is a sequence of nondecreasing (or nonincreasing) functions, then is also a sequence of NA random variables.
This definition was introduced by Joag-Dev and Proschan [11]. Statistical test depends greatly on sampling. The random sampling without replacement from a finite population is NA but is not
independent. NA sampling has wide applications such as those in multivariate statistical analysis and reliability theory. Because of the wide applications of NA sampling, the limit behavior of NA
random variables has received more and more attention recently. One can refer to Joag-Dev and Proschan [11] for fundamental properties, Matuła [12] for the three-series theorem, and Wu and Jiang [13,
14] for the strong convergence.
2. Main Results
In what follows, let be the d.f. of the ’s, . Since the sequences and are independent, it follows that .
Define (possibly infinite) times , , and by Then, .
We give the following four lemmas, which are helpful in proving our theorems.
Lemma 2 (Chang and Rao, [15]). Let and be random variables, then for any here and in the sequel, where denotes the standard normal distribution function.
Lemma 3 (Su et al. [16, Theorem 1]). Let be a sequence of NA r.v.s. with zero means and , and . Then for , where depends only on .
Lemma 4. Let be a sequence of NA r.v.s. with continuous d.f. , and let be the empirical d.f. based on the segments . Then
Proof. Similar to the proof of Lemma 4 in Yang [17], we can prove Lemma 4.
Lemma 5 (Wu and Chen [18, Theorem 1.3]). Let and be two sequences of NA r.v.s. Suppose that the sequences and are independent. Then for any ,
In order to formulate our main results, we now list some assumptions.() and are two sequences of stationary NA random variables, and and are independent.() Suppose that , , and and have bounded
derivative in a neighborhood of .() For all integers , the conditional distribution , given , has a density , and for all , for and some , where represents a neighborhood of . () The kernel is a
bounded derivative function with for and .() Let , , and be positive integers with where .
Remark 6. () Implies and .
Let , .
Theorem 7. Suppose that are satisfied; then where , , , .
Consider the following: where .
Furthermore, if then
Theorem 8. Assume that the conditions of Theorem 7 hold. Then Furthermore, if (16) holds, then
3. Proofs
Proof of Theorem 7. We observe that, by (3),
Let , , , where and then By (20), We first estimate , , and . Obviously, implies that and are stationary; thus,
From , , and , we obtain Hence, by , .
For and , by ,
Therefore, by , By , , , and Lemma 2.3 of Zhang [19], for ,
Thus, by and , Therefore, by the combination of , (24), (26), (28), and (30),
By (26), (27), , , and ,
By (25), , and , Note that for any random variables and ; from (31)–(33), Therefore, from the combination of (23) and (31)–(34), it follows that Thus, (14) holds.
Now, we prove (15). Let , , , . Then, . According to Lemma 2, (14), (20), (32), and (33), we have
Let , be independent random variables with the same distribution as for . Put , . Obviously, Note that and from (20) and (24). By (14), (30), (32), and (33), Note that , , are independent random
variables, and . Therefore, by (from (39)), (14), and Berry-Esseen inequality (cf. Petrov [20, page 154, Theorem 5.7]), there exists some constant such that
Similar to (26), we can get and . It is easy to see from Property P7 of Joag-Dev and Proschan [11] that is also sequence of NA r.v.s., so by using Lemma 3, we have
Assume that and are the characteristic functions of and , respectively. By Esseen inequality (cf. Petrov [20, page 146, Theorem 5.3]), for any , there exists some constant such that
By Theorem 10 in Newman [21], (14), and (30), Therefore, On applying (39)–(41), we have Thus, Choosing , then by (42)–(46), Therefore, the combination of (37)–(39), (41), (47), and (15) holds.
Finally, we prove (17). By Lemma 2 and (15), for any ,
Applying (14), , , and differential mean value theorem, there exists a constant , such that Hence, there exists a constant sufficiently large such that . Let in (48); then . Therefore, by (48), (16)
Proof of Theorem 8. Using (15) and Lemma 2,
Let be the empirical d.f. of . Then, by (2),
Thus, by Lemmas 4 and 5, for ,
Using (14), we get
Therefore, (18) holds from (50) and (53).
Using (18), similar to the proof of (17), we can prove (19). This completes the proof of Theorem 8.
The authors are very grateful to the referees and the editors for their valuable comments and helpful suggestions that improved the clarity and readability of the paper. This paper is supported by
the National Natural Science Foundation of china (11061012), project supported by Program to Sponsor Teams for Innovation in the Construction of Talent Highlands in Guangxi Institutions of Higher
Learning ((2011) 47), and the Support Program of the Guangxi China Science Foundation (2012GXNSFAA053010 and 2013GXNSFDA019001).
1. E. L. Kaplan and P. Meier, “Nonparametric estimation from incomplete observations,” Journal of the American Statistical Association, vol. 53, pp. 457–481, 1958. View at Publisher · View at Google
Scholar · View at Zentralblatt MATH · View at MathSciNet
2. A. Földes and L. Rejtő, “A LIL type result for the product limit estimator,” Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 56, no. 1, pp. 75–86, 1981. View at Publisher ·
View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
3. M. G. Gu and T. L. Lai, “Functional laws of the iterated logarithm for the product-limit estimator of a distribution function under random censorship or truncation,” The Annals of Probability,
vol. 18, no. 1, pp. 160–189, 1990. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
4. R. D. Gill, Censoring and Stochastic Integrals, vol. 124 of Mathematical Centre Tracts, Mathematisch Centrum, Amsterdam, The Netherlands, 1980. View at MathSciNet
5. L. Q. Sun and L. X. Zhu, “A Berry-Esseen type bound for kernel density estimators under random censorship,” Acta Mathematica Sinica, vol. 42, no. 4, pp. 627–636, 1999. View at Zentralblatt MATH ·
View at MathSciNet
6. S.-S. Kang and K. J. Koehler, “Modification of the Greenwood formula for correlated response times,” Biometrics, vol. 53, no. 3, pp. 885–899, 1997. View at Publisher · View at Google Scholar ·
View at Scopus
7. R. H. Shumway, A. S. Azari, and P. Johnson, “Estimating mean concentrations under transformation for environmental data with detection limits,” Technometrics, vol. 31, pp. 347–356, 1988.
8. J.-P. Lecoutre and E. Ould-Said, “Convergence of the conditional Kaplan-Meier estimate under strong mixing,” Journal of Statistical Planning and Inference, vol. 44, no. 3, pp. 359–369, 1995. View
at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
9. Z. Cai, “Estimating a distribution function for censored time series data,” Journal of Multivariate Analysis, vol. 78, no. 2, pp. 299–318, 2001. View at Publisher · View at Google Scholar · View
at Zentralblatt MATH · View at MathSciNet
10. H.-Y. Liang and J. de Uña-Álvarez, “A Berry-Esseen type bound in kernel density estimation for strong mixing censored samples,” Journal of Multivariate Analysis, vol. 100, no. 6, pp. 1219–1231,
2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
11. K. Joag-Dev and F. Proschan, “Negative association of random variables, with applications,” The Annals of Statistics, vol. 11, no. 1, pp. 286–295, 1983. View at Publisher · View at Google Scholar
· View at Zentralblatt MATH · View at MathSciNet
12. P. Matuła, “A note on the almost sure convergence of sums of negatively dependent random variables,” Statistics & Probability Letters, vol. 15, no. 3, pp. 209–213, 1992. View at Publisher · View
at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
13. Q. Wu and Y. Jiang, “A law of the iterated logarithm of partial sums for NA random variables,” Journal of the Korean Statistical Society, vol. 39, no. 2, pp. 199–206, 2010. View at Publisher ·
View at Google Scholar · View at MathSciNet
14. Q. Wu and Y. Jiang, “Chover's law of the iterated logarithm for negatively associated sequences,” Journal of Systems Science & Complexity, vol. 23, no. 2, pp. 293–302, 2010. View at Publisher ·
View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
15. M. N. Chang and P. V. Rao, “Berry-Esseen bound for the Kaplan-Meier estimator,” Communications in Statistics, vol. 18, no. 12, pp. 4647–4664, 1989. View at Publisher · View at Google Scholar ·
View at Zentralblatt MATH · View at MathSciNet
16. C. Su, L. Zhao, and Y. Wang, “Moment inequalities and weak convergence for negatively associated sequences,” Science in China A, vol. 40, no. 2, pp. 172–182, 1997. View at Publisher · View at
Google Scholar · View at Zentralblatt MATH · View at MathSciNet
17. S. C. Yang, “Consistency of the nearest neighbor estimator of the density function for negatively associated samples,” Acta Mathematicae Applicatae Sinica, vol. 26, no. 3, pp. 385–394, 2003. View
at MathSciNet
18. Q. Y. Wu and P. Y. Chen, “Strong representation results of Kaplan-Meier estimator for censored NA data,” Journal of Inequalities and Applications, vol. 2013, article 340, 2013. View at Publisher
· View at Google Scholar
19. L.-X. Zhang, “The weak convergence for functions of negatively associated random variables,” Journal of Multivariate Analysis, vol. 78, no. 2, pp. 272–298, 2001. View at Publisher · View at
Google Scholar · View at Zentralblatt MATH · View at MathSciNet
20. V. V. Petrov, Limit Theorems of Probability Theory, vol. 4, Oxford University Press, New York, NY, USA, 1995. View at MathSciNet
21. C. M. Newman, “Asymptotic independence and limit theorems for positively and negatively dependent random variables,” in Inequalities in Statistics and Probability, vol. 5 of IMS Lecture Notes
Monogr. Ser., pp. 127–140, Institute of Mathematical Statistics, Hayward, Calif, USA, 1984. View at Publisher · View at Google Scholar · View at MathSciNet | {"url":"http://www.hindawi.com/journals/jam/2013/541250/","timestamp":"2014-04-20T18:51:36Z","content_type":null,"content_length":"823831","record_id":"<urn:uuid:efa0e3ac-5680-432f-adb4-5624bc769f0f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Hierarchical model
Roberto G. Gutierrez, StataCorp wrote:
Evans Jadotte <evans.jadotte@uab.es> provides more detail:
My original model is:
.xi: xtmixed logequiv i.education i.mpsex mpage* i.sector non_farm
small_cattle large_cattle s_s_cattle shock_cattle dumland1 tech_land
electricity landline piped_water road_access death d_rem i.hhtype ||strata:
||cluster:, var
Strata is my level-3 identifier and cluster (which is nested into strata) is
my level-2 identifier. Then I have households nested into clusters.
Predict stub* produces the empirical Bayes (EB) prediction of the random
effects on strata ad clusters, which I hoped would coincide with the one
calculated manually after estimation via "restricted maximum
.egen REML_id = mean(resid), by(id)
The residuals were calculated previously using the command: predict double
resid, residuals and account for both random intercepts above. They are
supposed to be the total residuals, i.e. the raw residuals from level-1 for
households, plus the residuals for the two random intercepts (strata and
cluster). I understand that stub* should inferior or equal to REML_id, which
is not for level-3 (strata). Specifically, Stata gives me an EB for strata
that is more than 300 times greater than REML_strata
In the -egen- command above, I assume you mean -by(strata)- instead of
-by(id)-, because variable -id- is not part of your mixed model.
In any case, why would you expect the estimated random effects to be smaller
in magnitude than the group-averaged residuals? Random effects can vary
significantly from group to group around their mean of zero. Total residuals,
however, have mean zero over the entire dataset, and although they don't need
to have mean zero within each group, it would be weird for them to have a mean
significantly far from zero for any particular group. After all, these
residuals do take into account the estimated random intercepts, and so all you
have left is observation-level variability.
As an example, consider
. webuse pig, clear
(Longitudinal analysis of pig weights)
. xtmixed weight week || id:
(output omitted)
. predict r1, reffects
. predict resid, residuals
. egen reml_id = mean(resid), by(id)
. sum r1 reml_id
Variable | Obs Mean Std. Dev. Min Max
r1 | 432 -1.98e-08 3.794276 -7.17375 9.079873
reml_id | 432 -3.93e-09 .1223594 -.2313422 .2928116
This shows the estimated random effects to have larger magnitude (see the
Std. Dev. column) than the group-averaged residuals, which is what you should
What I think you are trying to calculate for -reml_id- are group mean
residuals that DO NOT take into account random effects. Try
. predict xbeta, xb // No random effects
. gen resid2 = weight - xb // Residual calculated manually
. egen reml_id2 = mean(resid2), by(id)
. sum r1 reml_id2
Variable | Obs Mean Std. Dev. Min Max
r1 | 432 -1.98e-08 3.794276 -7.17375 9.079873
reml_id2 | 432 -7.02e-07 3.916635 -7.405093 9.372684
No you see the behavior that you expect. The key is that REML-based estimates
of random effects should be based on residuals that do not take into account
random effects. After all, you are trying to estimate the random effect
itself -- if you leave it in, there is nothing left to estimate.
Additionally, the sum of the residuals provided by Stata in the random part
of the output should be the total residual (residual level-1+ residual
level-2 + residual level-3), whose variance I calculate as follows:
.nl(resid2=exp({xb: $list one}))
.predict sigma2, xb
where $list is a skedasticity function. The estimate I get is less than the
residual at level-1, which I did not expect. I do not see quite well what I
am doing wrong here. I would really appreciate your feedback.
Given the above, my guess is that if you redefine your residuals to be those
that do not take into account random effects, you'll see behavior more in line
with your expectations.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Hi Bobby,
(Time difference inhibited prompt response). Indeed many thanks for your feedback. Yes the 'id' stands for either strata or cluster. I shall replace it now. And yes I have tried your procedure of
calculating the REMLs based on the raw residuals, but since the results did not make sense to me (they do not have zero mean) so I thought I should proceed with calculations based on total residuals,
which gave me a mean zero. My calculations for instance are,
.xi: quietly xtmixed logequiv i.education i.mpsex mpage* i.sector non_farm
small_cattle large_cattle s_s_cattle shock_cattle dumland1 tech_land
electricity landline piped_water road_access death d_rem i.hhtype
||cluster:, var
estimates store model1
.xi: quietly xtmixed logequiv i.education i.mpsex mpage* i.sector non_farm
small_cattle large_cattle s_s_cattle shock_cattle dumland1 tech_land
electricity landline piped_water road_access death d_rem i.hhtype ||strata:
||cluster:, var
. estimates store model2
The random part of the model2 look like this:
Random-effects Parameters | Estimate Std. Err. [95% Conf. Interval]
strata: Identity |
var(_cons) | .1586556 .0841078 .0561318 .4484368
cluster: Identity |
var(_cons) | .2896315 .0253914 .243906 .3439291
var(Residual) | 1.025492 .0178371 .9911216 1.061055
LR test vs. linear regression: chi2(2) = 1355.88 Prob > chi2 = 0.0000
. lrtest model1 model2
Likelihood-ratio test LR chibar2(01) = 135.49
(Assumption: model1 nested in model2) Prob > chibar2 = 0.0000
The last test strongly suggests that cluster is nested into strata and that the latter should be in the model. I tried estimating the other way around (strata nested into cluster) just to check, and
convergence failed. So, based on the three-level model:
. predict xbeta, xb
. gen res_fix=logequiv-xbeta //raw residuals calculated manually
. predict res_tot, residuals //get total residuals (which account for the random intercepts)
. egen random_c=mean(res_fix), by(cluster) //get the random effect for cluster
. egen random_s=mean(res_fix), by(strata) //get the random effect for strata
. predict EB_cluster, reffects level(cluster) //get the empirical Bayes of re_cluster
. predict EB_strata, reffects level(strata) //get the empirical Bayes of re_strata
. sum res_fix res_tot EB_cluster random_c EB_strata random_s
Variable | Obs Mean Std. Dev. Min Max
res_fix | 7157 .1470659 1.192167 -5.220241 5.231438
res_tot | 7157 5.62e-10 .983868 -4.57931 4.469897
EB_cluster | 7157 .0126679 .4600395 -1.764327 1.255301
random_c | 7157 .1470659 .6867958 -2.18074 2.053363
EB_strata | 7157 .1343979 .3803131 -.7883288 .6443056
random_s | 7157 .1470659 .3739612 -.8865975 .622515
As can be observed here, except res_tot (total residuals accounting for the random intercepts) have zero mean, and to some extend EB_cluster. I also expected the Std.Dev. of (res_fix^2 + random_c^2
+random_s^2) to be + or - equal to the sum of the variances of the random parameters from the Stata output, which in turn should be equal to res_tot^2 (its Std.Dev.). Also, the 300 times greater I
mentioned in my previous mail regarding the EB and the REML for strata is for some observations. However, results seems to be correct here on balance so this can be disregarded. Now to get the
. gen res_tot2=res_tot^2
. xi: nl(res_tot2=exp({xb: Z one})) //Z is skedasticity function, which is based on variables where skedasticity was detected.
. predict sigma2
. sum sigma2
Variable | Obs Mean Std. Dev. Min Max
sigma2 | 7157 .967807 .1937023 .2968854 4.334954
The total (predicted) variance 0.97, which is the sum (Var(raw residuals) + Var(radom_c) + Var(random_s)) is inferior to the variance of the raw residuals 1.03
In any case, the nesting in the data seems to be correct, but the results are not what I expected according to theory even by trying to set Z to overlap totally the fixed part matrix of th original
model. I am quite puzzled and the reference books (Rabe-Hesketh ad Skrondal, 2005); (Raudenbush and Bryck, 2002) do not help much.
Well, this is a long thread Bobby and I do not want to take too much of your time with this and I will understand if you have better fish to fry. I will keep digging to see what I find but any
further suggestions would be very helpful! | {"url":"http://www.stata.com/statalist/archive/2009-10/msg00127.html","timestamp":"2014-04-18T03:06:34Z","content_type":null,"content_length":"21540","record_id":"<urn:uuid:5f796d56-c5b7-4a89-8e27-771d5586335b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00330-ip-10-147-4-33.ec2.internal.warc.gz"} |
Speed Punk
Outline curvature illustration tool for modern font editors such as Glyphs and RoboFont.
Speed Punk is a tool for modern font editors such as Glyphs and RoboFont that will illustrate the amount of curvature of glyph outlines. It will be released online soon, and available offline on the
RoboThon 2012 conference in The Hague.
Speed Punk is a learning tool. It teaches you to better understand the nature of Bézier curves and their curvature, the technical basis of digital type design.
The Bézier curve was developed almost simultaneously by two Frenchmen working in computer aided design (CAD) in the French automobile industry in the early 1960s, Paul de Casteljau working for
Citroën and Pierre Bézier working for Renault. De Casteljau is today attributed with the invention, but Citroën decided to keep his research secret until the late '60s, hence they carry Bézier's
The curves play an important role in industrial design, namely aqua- and aerodynamics, as well as computer animation (smoothly controlling the velocity of an object over time). And of course computer
graphics in general and type design in particular, as they provide a memory efficient means of storing illustrations and can easily be scaled and rasterized on the fly for sharp printing on output
devices with varying resolutions.
Without Bézier curves many things we take for granted wouldn't be possible. Taking off in a roller coaster from the flat into a perfect circle loop would brake your spine, as the curvature would
infinitely increase from the straight line to a fixed amount of curve speed in the circle. Or imagine the impossibility of your car's steering wheel needing to be turned instantly by a fixed angle
from the straight road into a curve. Instead, any moving object turns into a curve progressively, starting with no curvature and slowly increasing it over time.
Curvature of cubic Bézier curves (the most commonly used form of the Bézier curves in computer graphics, also part of the PostScript language) is controlled by two off-curve points per curve segment
also known as "handles" or "vectors". Pulling one of them will influence the distribution of the curvature of the entire curve segment.
Any design more complex than a quarter circle will require the use of several consecutive curve segments, and this is especially true for letter forms. One of the problems the designer is confronted
with is to distribute the curvature equally across two joining curve segments, across their shared on-curve point. If the curvature across this point is not continuous, one will see a smaller or
bigger dent in the curve on either side of the shared on-curve point.
Continuous curvature is called G2 in mathematical terms as opposed to G1 continuity where two joining curves have a common tangent in the joining point but no continuous curvature, as opposed to G0
continuity where two curves join but have no common tangent, as opposed to no continuity at all (the curves don't even join). "While it may be obvious that a curve would require G1 continuity to
appear smooth, for good aesthetics, such as those aspired to in architecture and sports car design, higher levels of geometric continuity are required. For example, reflections in a car body will not
appear smooth unless the body has G2 continuity." (Wikipedia)
A technical requirement of digital typefaces adds to the type designer's problem: the necessity of adding on-curve points at the curves' extremes in both X and Y direction. It is (or was, as some
voices say) necessary to speed up glyph arithmetics especially for making glyph bounding box calculation efficient and reliable. But those extremum points will unfortunately in many cases have to be
positioned in shape-wise unnatural locations.
Take the construction of the letter "o" mimicking a broad nib pen for instance. Wouldn't it feel natural to position the on-curve points along the contrast axis for the counter or along a mirrored
contrast axis for the outline? Instead we have to position the nodes horizontally and vertically in the most awkward spots. This poses the risk of discontinuous curvature.
Speed Punk illustrates the curvature on top of the outlines. This is a technique commonly known from CAD software. The "bigger" the illustration is, the higher the curvature is at this point. This
way it is easy to judge curvature continuity at on-curve points: if the illustrations are of same distance from the on-curve point (they "meet"), the two curves are of continuous curvature. If you
see a jump in the curvature illustration, curvature is discontinuous. Simple.
Another thing to judge other than continuous curvature across on-curve points is the distribution of curvature within curve segments: how lose or tight are the curves? While it is very hard to assess
equal curvature on two opposing corners of an "o", visually comparing the sizes of the curvature illustrations is much easier. As another visual aid, the illustrations' colors are being interpolated
between the highest and lowest measured curvature amounts for each glyph. When the two opposing corners of the "o" have a similar color, their curvature amount is also similar.
All of this design trouble, however, will go away with increasing experience of the type designer. Which is why I call it a learning tool.
Still with Speed Punk we can observe some interesting anomalies of Bézier curves.
Have you ever wondered what the much talked about "inflections" actually are? We get taught that a curve's one handle is not supposed to cross over the other handle's imaginary continued direction,
but why? What this actually means is that the curvature at some point in the curve segment inflects into the negative — the curve becomes an "s" form. However small the amount is — if not intended it
should be avoided. In Speed Punk this will show as the curvature illustration crossing over the outline into the other side (in "Outer side of curve" mode) or bounce back from the outline (in
"Outside of glyph" mode).
There are exceptions, of course. One letter that is supposed to carry two inflections is — not surprisingly — the "s".
The other interesting observation is the curvature of a circle. It is known that with no amount of Bézier curve segments can a perfect circle be constructed. The lower the amount of segments, the
less perfect the result. A normal circle's radius constructed of four curve segments will deviate from the exact radius by a maximum of about 0,27 per mill according to my measurements. The curvature
starts out at the on-curve point with its minimum, reaching a maximum (highest deviation from exact radius) at about 14 degrees angle, then going back to a lower curvature at 45 degrees, then
repeating over in reverse until the end of the segment.
Speed Punk lends its name from the curve speed (another term for curvature) and the Mohawk-like shapes that illustrate the curvature on the letter outlines. And it suggests that it's cool to draw
curvature continuous Bézier curves. | {"url":"http://yanone.de/typedesign/code/speedpunk/","timestamp":"2014-04-17T06:43:08Z","content_type":null,"content_length":"13824","record_id":"<urn:uuid:0b66bdf4-9ada-4af6-a78e-e64161f1ee1a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Essential Role of Statistics
Modern psychology couldn't get by without statistics. Some of these simply describe research data and stop there. An example is correlation, which yields a single number that indicates the extent to
which two variables are “related.” Another example is the set of often-complex statistical computations that help researchers decide whether the results of their experiments are likely to be “real.”
A variable is literally anything in the environment or about a person that can be modified and have an influence on his or her particular thoughts, reactions, or behaviors. The amount of light or
noise in a room is a variable — it differs from one room to the next. Height and weight are variables, as are intelligence, personality characteristics, and a host of observable behaviors, because
these differ from one person to the next.
In correlation, the resulting number can range from 0 to +1.00 or 0 to –1.00. Where it falls indicates the strength of the correlation. The sign of the correlation indicates its direction. A
correlation of 0 is nonexistent; a correlation of either +1.00 or –1.00 is perfect.
For example, to assess the correlation between height and weight, a researcher would measure the height and weight of each of a group of individuals and then plug the numbers into a mathematical
formula. This correlation will usually turn out to be noticeable, perhaps about +.63. The “.63” tells us that it is a relatively strong correlation, and the “+” tells us that height and weight tend
to vary in the same direction — taller people tend to weigh more, shorter people less. But the correlation is far from perfect and there are many exceptions.
What the correlations you encounter in this book mean vary somewhat according to the application, but here's a rule of thumb: A correlation between 0 and about +.20 (or 0 and –.20) is weak, one
between +.20 and +.60 (or –.20 and –.60) is moderate, and one between +.60 and +1.00 (or –.60 and –1.00) is strong.
As another example, a researcher might assess the extent to which people's blood alcohol content (BAC) is related to their ability to drive. The participants might be asked to drink and then attempt
to operate a driving simulator. Their BACs would then be compared with their scores on the simulator, and the researcher might find a correlation of –.68. This is again a relatively strong
correlation, but the “–” tells us that BAC and driving ability vary in an opposite direction — the higher the BAC, the lower the driving ability.
Inferential Statistics
For descriptive statistics such as correlation, the “mean,” or average, and some others that will be considered in context later in the book, the purpose is to describe or summarize aspects of
behavior to understand them better. Inferential statistics start with descriptive ones and go further in allowing researchers to draw meaningful conclusions — especially in experiments. These
procedures are beyond the scope of this book, but the basic logic is helpful in understanding how psychologists know what they know.
Again recalling Bandura's experiment of observational learning of aggression, consider just the model-punished and model-rewarded groups. It was stated that the former children imitated few behaviors
and the latter significantly more. What this really means is that, based on statistical analysis, the difference between the two groups was large enough and consistent enough to be unlikely to have
occurred simply by “chance.” That is, it would have been a long shot to obtain the observed difference if what happened to the model wasn't a factor. Thus, Bandura and colleagues discounted the
possibility of chance alone and concluded that what the children saw happen to the model was the cause of the difference in their behavior.
This logic may seem puzzling to you, and it isn't important that you grasp it to understand the many experiments that are noted throughout this book. Indeed, it isn't mentioned again. The point of
mentioning it at all is to underscore that people are far less predictable than chemical reactions and the like, and therefore have to be studied somewhat differently — usually without formulas.
Psychologists study what people tend to do in a given situation, recognizing that not all people will behave as predicted — just as the children in the model-rewarded group did not all imitate all
the behaviors. In a nutshell, the question is simply whether a tendency is strong enough — as assessed by statistics — to warrant a conclusion about cause and effect. | {"url":"http://www.netplaces.com/psychology/how-psychologists-know-what-they-know/the-essential-role-of-statistics.htm","timestamp":"2014-04-17T03:53:38Z","content_type":null,"content_length":"20830","record_id":"<urn:uuid:08f3b007-f539-4274-a009-e3ea71b97f01>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00418-ip-10-147-4-33.ec2.internal.warc.gz"} |
Non-geometric approach to gravity impossible?
The classical electromagnetic wave is a coherent state of photons on flat spacetime. Similarly, classical curved spacetime (that can be covered by harmonic coordinates) is a coherent state of
gravitons on flat spacetime.
So since everybody believe there must be a quantum theory of gravity, then it's almost definite and categorical that "classical curved spacetime (that can be covered by harmonic coordinates) is a
coherent state of gravitons on flat spacetime" then why don't we hearing from say sci.am or other news items reporting that "Universe is really flat spacetime a priori!". Hmm.. maybe I should write
an article in sci am and make it a cover subject or someone else with credentials write it because sci am doesn't seem to accept contributions by unknown people. But in the book "Philosophy Meets
Physics at the Planck Scale". It seems they are saying there that there is a quantum gravity programme where spacetime is really curved and gravitons just quanta of it without any flat spacetime
Within string theory, gravitons are only approximate degrees of freedom, and strings are more primary. So in the string theory picture, curved spacetime is a coherent state of strings on flat
spacetime. In the AdS/CFT picture, strings and space are both emergent, and neither are primary.
This proves that in string theory, spacetime is really flat with the curved spacetime as only coherent state of strings.. although I'm still trying to imagine how these two can co-exist together.
Why didn't you answer this question "2. Can Loop Quantum Gravity be formulated as spin-2field in flat spacetime? Or does LQG stay valid only if spacetime is actually curved?" anyone else knows the | {"url":"http://www.physicsforums.com/showthread.php?p=3790215","timestamp":"2014-04-20T08:37:16Z","content_type":null,"content_length":"77666","record_id":"<urn:uuid:7efd766f-0b49-4e05-b5a0-6248854ada10>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00027-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Tutors
Woburn, MA 01801
Efficient Math/Physics and Electrical Engineering Tutor/Instructor
...I tested this method thoroughly, throughout the years, take a look: I have more than ten years of tutoring high school students, from
, Geometry, Trigonometry and Pre-Calculus to Calculus, AP Physics and College level Physics with Calculus Math and Electrical...
Offering 6 subjects including algebra 1 | {"url":"http://www.wyzant.com/Medford_MA_Algebra_tutors.aspx","timestamp":"2014-04-17T15:44:26Z","content_type":null,"content_length":"59267","record_id":"<urn:uuid:b2055e3c-d637-4bbc-bcdc-cdf49584632a>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
Old Mill Crk, IL Math Tutor
Find an Old Mill Crk, IL Math Tutor
Math and science have been my great interest since childhood. Everything around us has some explanation involving these subjects! I would like to share my enthusiasm with others.
2 Subjects: including algebra 1, geometry
I graduated magna cum laude with honors from Illinois State University with a bachelor's degree in secondary mathematics education in 2011. I taught trigonometry and algebra 2 to high school
juniors in the far north suburbs of Chicago for the past two years. I am currently attending DePaul University to pursue my master's degree in applied statistics.
19 Subjects: including calculus, statistics, discrete math, GRE
...Like many Fortran programmers of the 1980s, Pascal captured my interest because it was a way to transfer engineering simulation programs from expensive mainframes to minicomputers and desktop
computers like the Macintosh 512K I had just purchased. I went on to develop several computer programs f...
67 Subjects: including SAT math, ACT Math, ASVAB, finance
...I've taught at two community colleges and tutored students at the collegiate level. I know the importance of elementary education and the trouble students have when they fall behind from
personal experience. At times students need one-on-one attention with someone who listens to them to understand how they learn.
6 Subjects: including prealgebra, reading, business, geography
...Does the student highlight the text while reading, take notes while reading, and study their notes? Regarding tennis, I have been a lifelong player, starting to play at the age of seven. I
took lessons at our local park district at the beginner, intermediate and advanced level.
37 Subjects: including algebra 2, precalculus, GED, SAT math
Related Old Mill Crk, IL Tutors
Old Mill Crk, IL Accounting Tutors
Old Mill Crk, IL ACT Tutors
Old Mill Crk, IL Algebra Tutors
Old Mill Crk, IL Algebra 2 Tutors
Old Mill Crk, IL Calculus Tutors
Old Mill Crk, IL Geometry Tutors
Old Mill Crk, IL Math Tutors
Old Mill Crk, IL Prealgebra Tutors
Old Mill Crk, IL Precalculus Tutors
Old Mill Crk, IL SAT Tutors
Old Mill Crk, IL SAT Math Tutors
Old Mill Crk, IL Science Tutors
Old Mill Crk, IL Statistics Tutors
Old Mill Crk, IL Trigonometry Tutors
Nearby Cities With Math Tutor
Antioch, IL Math Tutors
Beach Park, IL Math Tutors
Green Oaks, IL Math Tutors
Gurnee Math Tutors
Lake Villa Math Tutors
Lindenhurst, IL Math Tutors
North Chicago Math Tutors
Old Mill Creek, IL Math Tutors
Pleasant Prairie Math Tutors
Round Lake Beach, IL Math Tutors
Round Lake Park, IL Math Tutors
Round Lake, IL Math Tutors
Volo, IL Math Tutors
Wadsworth, IL Math Tutors
Waukegan Math Tutors | {"url":"http://www.purplemath.com/old_mill_crk_il_math_tutors.php","timestamp":"2014-04-17T20:02:07Z","content_type":null,"content_length":"23982","record_id":"<urn:uuid:02e711ae-206b-4c4b-bd17-f255a96ecaec>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
Percent Length Contraction (check solution)
680/300,000 is what you were intending I trust?
Yep. That's why I thought it was off because it's such a small number that you'll have 1 under the radical...
Or is the percent contraction supposed to be very small since the aircraft, compared to the speed of light, is going extremely slow? | {"url":"http://www.physicsforums.com/showthread.php?t=289139","timestamp":"2014-04-21T12:18:00Z","content_type":null,"content_length":"31661","record_id":"<urn:uuid:f11abc33-47dd-45aa-bc50-15c7f7c2c7c2>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00368-ip-10-147-4-33.ec2.internal.warc.gz"} |