Here's the question you clicked on:
The slope of a line is -2. What is the slope of a line that is parallel to it?
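(Answer, for reference: parallel lines share the same slope, so any line parallel to one with slope -2 also has slope -2.)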
| {"url":"http://openstudy.com/updates/507880f7e4b0ed1dac50f740","timestamp":"2014-04-20T23:46:46Z","content_type":null,"content_length":"46435","record_id":"<urn:uuid:8dd901d8-d268-4cae-82d3-572142c1b220>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
Number 7250081
Number 7250081 in British English words is seven million two hundred fifty thousand eighty-one. The number consists of 7 digits: it is a seven-digit number.
Number 7250081 in American English words is seven million two hundred fifty thousand eighty-one, in German words is sieben Millionen zweihundertfünfzigtausendeinundachtzig, in French words is sept millions deux cent cinquante mille quatre-vingt-un, in Spanish words is siete millones doscientos cincuenta mil ochenta y uno, in Italian words is settemilioniduecentocinquantamilaottantuno, in Dutch words is zeven miljoen tweehonderdvijftigduizend eenentachtig, in Danish words is syv millioner to hundrede halvtreds tusinde og en og firs. If you want to write the number 7250081 in English words, it is necessary to use 44 letters.
Number 7250081 in binary code can be written 11011101010000010100001.
Number 7250081 in octal code can be written 33520241.
Number 7250081 in hexadecimal code can be written 6EA0A1.
Unix timestamp 7250081 converted to human readable date and time is Thursday March 26, 1970, 12:54 AM.
Decimal IP address 7250081 converted to dotted-decimal format is 0.110.160.161.
The square root of 7250081 is 2692.5974448476. Number 7250081 multiplied by 2 equals 14500162. Divided by 2 it equals 3625040.5. The sum of all its digits equals 23. Number 7250081 raised to the power of 2 is 52563674506561. Number 7250081 raised to the power of 3 is 3.810908978302E+20.
The cosine of number 7250081 is 0.13177219342563. The sine of number 7250081 is 0.99128002554263. The tangent of number 7250081 is 7.522679859633. The radian equivalent of number 7250081 is
126537.78448628. The equivalent of number 7250081 in degrees is 415399042.42799. The base-10 logarithm of number 7250081 is 6.8603428586615.
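Most of these conversions can be reproduced with a few lines of Python; the sketch below is mine, not from the page (note that the timestamp line prints UTC, so the hour differs from the page's local-time rendering above):

import math, ipaddress
from datetime import datetime, timezone

n = 7250081
print(bin(n)[2:], oct(n)[2:], hex(n)[2:].upper())          # binary, octal and hexadecimal codes
print(datetime.fromtimestamp(n, tz=timezone.utc))          # 1970-03-25 21:54:41+00:00
print(ipaddress.ip_address(n))                             # 0.110.160.161
print(math.sqrt(n), 2 * n, n / 2, sum(map(int, str(n))))   # root, double, half, digit sum
print(n ** 2, n ** 3)                                      # powers
print(math.cos(n), math.sin(n), math.tan(n))               # trigonometric values
print(math.radians(n), math.degrees(n), math.log10(n))     # angle conversions and logarithm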
This number is odd (an integer that is not evenly divisible by 2). Number 7250081 is not a prime number. | {"url":"http://www.all-numbers.com/7250081","timestamp":"2014-04-21T02:00:21Z","content_type":null,"content_length":"9508","record_id":"<urn:uuid:f61ecc67-86ad-430c-afb2-b8043238c328>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
Chebyshev finite difference approximation for the boundary value problems.
(English) Zbl 1027.65098
Summary: This paper presents a numerical technique for solving linear and non-linear boundary value problems for ordinary differential equations. The technique is based on matrix operator expressions applied to the differential terms, and it can be regarded as a non-uniform finite difference scheme. The values of the dependent variable at the Gauss-Lobatto points are the unknowns one solves for.
The application of the method to boundary value problems leads to algebraic systems, which can then be solved by iterative methods. The effectiveness of the method is demonstrated by four examples.
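The abstract does not reproduce the operator matrices, but the standard Chebyshev differentiation matrix on the Gauss-Lobatto points (Trefethen's well-known construction) conveys the idea. The sketch below, including the example problem, is my own illustration rather than one of the paper's four examples:

import numpy as np

def cheb(N):
    # Differentiation matrix on the N+1 Chebyshev-Gauss-Lobatto points x_j = cos(pi*j/N)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))        # diagonal entries via the "negative row sum" trick
    return D, x

# Linear BVP: u'' = exp(4x) on [-1, 1] with u(-1) = u(1) = 0
N = 16
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]               # delete first/last rows and columns to impose the BCs
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(D2, np.exp(4 * x[1:-1]))
# exact solution for comparison: (exp(4x) - x*sinh(4) - cosh(4)) / 16
print(np.max(np.abs(u - (np.exp(4 * x) - x * np.sinh(4) - np.cosh(4)) / 16)))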
65L10 Boundary value problems for ODE (numerical methods)
65L12 Finite difference methods for ODE (numerical methods)
34B05 Linear boundary value problems for ODE
34B15 Nonlinear boundary value problems for ODE | {"url":"http://zbmath.org/?q=an:1027.65098","timestamp":"2014-04-20T23:42:47Z","content_type":null,"content_length":"21526","record_id":"<urn:uuid:ec6160ea-b5f4-41ad-9e80-fc10503fe79d>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
Slice of Pi, Anyone?
About Pi Day and the Transcendental Number Pi
By: John Shepler
"For a time I stood pondering on circle sizes. The large computer mainframe quietly processed all of its assembly code. Inside my entire hope lay for figuring out an elusive expansion. Value: pi...."
is a transcendental figment of mathematics. It is a number that has been chased by scholars for almost 4,000 years. Its precision has been calculated to over two billion decimal places without an end
in sight. Examine those digits and the frequency of the numbers is no more than random. Yet, pi is everywhere around us. There is pi in pie. Cut a pie in half. Pi is the number of times the length of
that cut will go around the outside of the pie. Pi pie? That would be one each for three of us with some left over.
Ah, but how much left over? That is the very question that has agonized mathematicians throughout the centuries. The supercomputers crunch and crunch and crunch those numbers until somebody cries
"enough" and moves on to something more pressing, like trying to predict next week's weather. Like that's a more likely problem to be solved. There may be issues seemingly more pressing to humankind,
but the pursuit of pi has always had a romance that captivated mathematicians...sometimes to obsession.
The Bible tells us that pi has a value of around 3. Oh, yes. It's there in the specifications for the great temple of Solomon, describing the pouring of what seems to be a large brass casting. "And
he made a molten sea, ten cubits from the one brim to the other: it was round all about, and his height was five cubits: and a line of thirty cubits did compass it about." (I Kings 7, 23).
Al'Khwarizmi, who lived in Baghdad around the year 800, calculated a more precise value of 3.1416 for pi. His name lives on in the term "algorithm." The title of one of his books, "al jabr" gave us
the word "algebra." Over the intervening centuries, famous mathematicians such as Leibniz, De Morgan and Euler worked on expanding the precision of pi. Ludolph Van Ceulen, who lived from 1540 to
1610, spent most of his days tediously performing the calculations for the first 35 decimal places of pi. In honor of his tenacity in sticking with pen, paper and fingers in lieu of even something as rudimentary as a 386SX, pi is sometimes referred to as Ludolph's Constant. It's 3.14159265358979323846264338327950288...something.
Find that hard to remember? Well, if the truth be known, mathematicians and most everybody else do too. You can look it up, but you might not have your weighty reference book handy. You can get a rough value by dividing 355 by 113 on your calculator, if your calculator is handy. Or you can be infinitely more clever by converting pi into words.
Consider the opening lines of the story "Circle Digits" written by Michael Keith, which I've quoted at the beginning of this article. Notice the number of letters in each word. The first is "for"
with 3 letters. The second is "a" with 1, followed by "time" with 4. That's 3.14 or pi. Michael's complete story provides the first 402 decimals of pi and is called a pimnemonic. Here's another of these encodings, a parody of Poe:
Poe, E.
Near a Raven
Midnights so dreary, tired and weary.
Silently pondering volumes extolling all by-now obsolete lore.
During my rather long nap - the weirdest tap!
An ominous vibrating sound disturbing my chamber's antedoor.
"This", I whispered quietly, "I ignore".
Still too much? You can just go with the title and be close for most purposes, or do what the insiders do and remember this catchy phrase: "How I need a drink, alcoholic in nature, after the heavy
lectures involving quantum mechanics." One has to wonder how much alcohol was involved in advancing the science of pi over the years. The popularity of this expression might just tip-it, so to speak.
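Decoding such a mnemonic is mechanical: each word's letter count gives one digit (with ten-letter words standing for zero, in the usual convention). A throwaway Python sketch of mine:

def pilish_digits(text):
    words = [w.strip('.,;:!?"\'') for w in text.split()]
    return [len(w) % 10 for w in words if w]

print(pilish_digits("How I need a drink, alcoholic in nature, "
                    "after the heavy lectures involving quantum mechanics."))
# -> [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9]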
Why all the fuss about pi? You didn't know? You missed it? Oh, my gosh. Pi Day is March 14 or 3-14. The official celebrations are scheduled for 1:59 PM, just to make it an appropriate 3-14 1:59.
There are Pi Day songs, Pi Day games, the history of pi...all sponsored by the San Francisco Exploratorium, an interactive museum of science, art and human perception. It's on the Web at the link
shown below. Push the button marked "The Ridiculously Enhanced Pi Pages" and join in the hoopla.
I used the pi birthday calculator at a site called "Am I in Pi?" I'll bet you are, too. The calculator checked over a million digits of pi and found my birth date starting at a location some 15,000 places into the decimal expansion. Since pi's digits go on forever and are believed (though it has never been proven) to contain every finite pattern, it's likely EVERYTHING is in pi. And you thought it was only four and twenty blackbirds.
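A brute-force version of that birthday search is easy to write; here is my own sketch with mpmath (the site's actual method and its date format are unknown to me, so both are guesses):

from mpmath import mp

mp.dps = 100_005                     # digits of pi to generate (the site searched ~1,000,000)
digits = mp.nstr(mp.pi, 100_000).replace('.', '')

date = "31499"                       # a birth date written as a digit string -- format made up
print(digits.find(date))             # 0-based position of the first match, or -1 if absent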
Hmmm. All this PI talk is making me hungry for some reason. Cherry or blueberry?
Special thanks to Barbara Shepler for carving out this delicious information on Pi Day!
Wait! There's more. March 14, Pi Day, is also the birthday of that most famous mathematician of them all, Albert Einstein. Coincidence or...? Here's a fascinating story about his childhood:
Einstein's Compass
Books of Interest:
A History of Pi by: Petr Beckmann. Here's the life and times of Pi, if you will, including the world background in which the development of Pi took place. Suitable for readers of all ages.
The Joy of Pi by: David Blatner. Blatner explores the many facets of Pi and humankind's fascination with it--from the ancient Egyptians and Archimedes to Leonardo da Vinci and the modern-day
Chudnovsky brothers.
First Published: March 15, 1999 as part of A Positive Light
Last Updated: March 6, 2014
| {"url":"http://www.johnshepler.com/articles/piday.html","timestamp":"2014-04-19T01:49:50Z","content_type":null,"content_length":"17597","record_id":"<urn:uuid:356864c4-321d-4829-b29d-9add7fbd549e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Quincy, MA Precalculus Tutor
...I also took 3D calculus at MIT. While tutoring in my junior and senior years of college, I tutored freshmen in calculus. I took geometry in high school.
10 Subjects: including precalculus, physics, calculus, algebra 2
...I am a trained lawyer and a graduate of one of the top 20 law schools in the country. I scored better than the 90th percentile based on a self-study program. I know this test well and have tutored for it many times.
29 Subjects: including precalculus, reading, calculus, geometry
I am a retired university math lecturer looking for students who need an experienced tutor. Relying on more than 30 years' experience in teaching and tutoring, I strongly believe that my profile is a very good fit for tutoring and teaching positions. I have significant experience of teaching and ment...
14 Subjects: including precalculus, calculus, geometry, statistics
...In addition I have taught Geometry and Trigonometry. This varied background has allowed me to develop clear insight into how best to teach Algebra 2. I have taught Geometry for more than 25 years.
...My schedule is extremely flexible and I am willing to meet you wherever is most convenient for you. I graduated from the University of Connecticut with a B.S. in Physics and a minor in Mathematics
before attending graduate school at Brandeis University and Northeastern University, where I received a M...
9 Subjects: including precalculus, calculus, physics, geometry
| {"url":"http://www.purplemath.com/Quincy_MA_precalculus_tutors.php","timestamp":"2014-04-19T15:22:43Z","content_type":null,"content_length":"23836","record_id":"<urn:uuid:05f5290e-1ab0-45e9-a864-d0a2466e6f60>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
cube root 560
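(For reference, the arithmetic being asked for: 560 = 8 × 70, so the cube root of 560 is 2 × ∛70 ≈ 8.2426.)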
| {"url":"http://www.evi.com/q/cube_root_560","timestamp":"2014-04-18T03:56:52Z","content_type":null,"content_length":"48172","record_id":"<urn:uuid:c987a847-8880-4213-939b-f6ec2d1380be>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
equivalence's examples
• Being representable by one number such as we see on clocks is a binary relation on the set of natural numbers and it is an example of equivalence relation we are going to study here. The concept
of equivalence relation is characterized by three properties as follows:. — “Equivalence Relation”, cs.odu.edu
• Definition of equivalence in the Online Dictionary. Meaning of equivalence. Pronunciation of equivalence. Translations of equivalence. equivalence synonyms, equivalence antonyms. Information
about equivalence in the free online English. — “equivalence - definition of equivalence by the Free Online”,
• All objects in each equivalence-list share the same storage area. If one of the objects in equivalence-list is of type default integer, default real, double precision real, default complex,. —
• A partition of a set S is a set of nonempty subsets of S which are pairwise disjoint and whose union is all of S. An equivalence relation on a set S is a reflexive, symmetric, transitive relation
on S. Both of these structures is motivated by. — “Abstract Math: Equivalence Relations”,
• We found 35 dictionaries with English definitions that include the word equivalence: Equivalence (category theory), Equivalence (disambiguation), Equivalence (measure theory), Equivalence
(trade), Equivalence: Wikipedia, the Free Encyclopedia [home, info]. — “Definitions of equivalence - OneLook Dictionary Search”,
• The site for European Photography, Andreas Müller-Pohle and Vilém Flusser © Equivalence, Berlin 1997–2010. Last update: 12 November, 2010. European Photography | EyeMind | Riverproject | .
Concept and design: Andreas Müller-Pohle and Andreas Freund. — “Equivalence: European Photography, Andreas Müller-Pohle”,
• In mathematics, an equivalence relation is a binary relation between two elements of a set which groups them together as being "equivalent" in some The equivalence class a under "~", denoted [a],
is the subset of X whose elements b are. — “Equivalence relation”, schools-
• of Equivalence. The General Theory of Relativity is formulated in terms of mathematics well beyond the scope of our survey course in astronomy (primarily in fields of mathematics that go by the names of tensor analysis and Riemannian geometry). — “The Principle of Equivalence”, csep10.phys.utk.edu
• An example :corollary formula from which a :equivalence rule might be The effect of such a rule is to identify equiv as an equivalence relation. — “EQUIVALENCE.html -- ACL2 Version 2.9”,
• Structural equivalence. • Automorphic equivalence. • Maximal regular equivalence. • Notes Three equivalence concepts from. theoretical point of view. — “Equivalence”, ***
• Equivalence is only relevant when comparing different formats. Equivalence says nothing about shallow DOF being superior to deep DOF, as this is entirely subjective (DOF is discussed in more
detail here). — “Equivalence”,
• Equivalence relation. In mathematics, an equivalence relation on a set X is a binary relation on X that is reflexive, symmetric and transitive, i.e., if the relation is written as ~ it holds for
all a, b and c in X that (Reflexivity) a ~ a (Symmetry) if a ~ b then b ~ a. — “Kids.Net.Au - Encyclopedia > Equivalence relation”, .au
• Equivalence Relations. Equivalence Classes. Partitions. Sometimes, We Want to Classify Objects Via. Certain Characteristics. 1. We have not defined negative differences yet, but 3 − 5 and. 2 − 4
should be "the same", even though they involve. different symbols. ( But here we'll actually think of. — “Equivalence Relations”, www2.latech.edu
• Equivalence Relation. by Jimmy Tseng. Equivalence relation, a mathematical concept, is a type of relation on a given set that provides a way for elements of that set to be identified with
(meaning considered equivalent to for some present purpose) other elements of that set. — “Equivalence Relation”,
• equivalence n. The state or condition of being equivalent; equality. An equivalence relation. Logic. The relationship that holds for two propositions that are either both true or both false, so
that the affirmation of one and the denial of the other results in contradiction. — “equivalence: Definition, Synonyms from ”,
• Automorphic equivalence is not as demanding a definition of similarity as structural equivalence, but is more There is a hierarchy of the three equivalence concepts: any set of structural
equivalences are also automorphic and regular equivalences. — “Introduction to social network methods: Chapter 14”, faculty.ucr.edu
• Equivalence definition, the state or fact of being equivalent; equality in value, force, significance, etc. See more. — “Equivalence | Define Equivalence at ”,
• Definition of word from the Merriam-Webster Online Dictionary with audio pronunciations, thesaurus, Word of the Day, and word games. — “Equivalence - Definition and More from the Free Merriam”,
• (countable, mathematics) An equivalence relation; ; ~ (uncountable, logic) The equivalence" Categories: English nouns | English uncountable nouns. — “equivalence - Wiktionary”,
• When a translator attempts to translate a text from one language (source) to another language (target), s/he should first of all understand and comprehend the source text and then translates it
to the target language. Therefore, the full. — “Equivalence and Translation”,
• Equivalence (trade), a requirement imposed on WTO Member countries regarding acceptable sanitary protection measures. This disambiguation page lists articles associated with the same title. If an
internal link led you here, you may wish to change. — “Equivalence - Wikipedia, the free encyclopedia”,
• Equivalence is implemented in Mathematica as Equal[A, B, ]. Binary equivalence has the following truth table (Carnap 1958, p. Similarly, ternary equivalence has the following truth table. —
“Equivalent -- from Wolfram MathWorld”,
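The recurring theme in these quotes, that an equivalence relation on a set induces a partition into equivalence classes, is easy to make concrete. A small sketch of mine, using the congruence-mod-5 example mentioned in one of the videos below:

def equivalence_classes(elements, related):
    # Partition `elements` under `related`, assumed reflexive, symmetric and transitive;
    # by transitivity it suffices to compare against one representative per class.
    classes = []
    for x in elements:
        for cls in classes:
            if related(x, cls[0]):
                cls.append(x)
                break
        else:
            classes.append([x])
    return classes

print(equivalence_classes(range(12), lambda a, b: a % 5 == b % 5))
# -> [[0, 5, 10], [1, 6, 11], [2, 7], [3, 8], [4, 9]]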
related videos for equivalence
What's wrong with equivalence? (Exploring Translation Theories) Short transition from equivalence to Skopos theory, from chapter 3 to chapter 4 of the book Exploring Translation Theories.
www.tinet.cat Recorded at the Monterey Institute of International Studies on September 15, 2009.
The Law of Equivalence of Form bit.ly - The law of equivalence of form operates to bring every part of nature to complete balance. Taken from the "Kabbalah Revealed" Kabbalah introductory series.
Titration Curves and Half Equivalence Titration Curves and Half Equivalence - A2 Chemistry Revision (AQA CHM4) Music: Walcott by Vampire Weekend
Equivalence relations 3 Proof that an equivalence relation on a set S partitions S into equivalence classes
Weak Equivalence Principle test on the moon ref: science.nasa.gov & nssdc.gsfc.nasa.gov The Apollo 15 Hammer-Feather Drop --- Astronaut Dave Scott drops a feather and hammer on the moon.
Equivalence Principle of Gravity FOUR Terrestrial and Spaceborne Tests of the Equivalence Principle of Gravity Dr James D Phillips, Harvard Smithsonian Center for Astrophysics Since before
Galileo Galilei, experimenters have used free fall to investigate gravity. The motivation has varied. In 2006, we test general relativity. GR is founded upon the Einstein equivalence principle,
which states that all sufficiently small test objects, subject only to gravity, fall the same, independent of their size, composition, location and the time. Theories of quantum gravity such as
string theory and supersymmetry predict violations of the equivalence principle, setting the stage for modern experimental tests. The most sensitive test at present employs a torsion pendulum
with a composition dipole, and is sensitive to small components of gravitational acceleration perpendicular to the suspension fiber. The highest sensitivity is Delta-g/g = 4x10^-13. These
experiments are reaching the limit imposed by thermal noise in the fiber. Furthermore, various groups including our own are considering a space-based test achieving a sensitivity as high as 10^
-20, and a torsion fiber suspension is a poor starting point. Therefore, we began a terrestrial equivalence principle test of the Galilean type, employing freely-falling masses. This experiment
will improve the terrestrial limit on EP violation by one order of magnitude, and would serve as the basis for a space-based version. Significant obstacles must be overcome. For example, the ...
Insomnium - Equivalence(Live in Prague 16/12/2009) 16.12.2009. live in Czech Republic, the concert took place in Music Hall Exit Chmelnice
The Equivalence Principle A demonstration of the equivalence principle, one of the ideas behind the origin of Einstein's General Theory of Relativity.
Lecture 22 - Order and Relations and Equivalence Relations Discrete Mathermatics Order and Relations and Equivalence Relations
Michael Coren - Check your moral equivalence at the door A simple question is put to two left-wing morons, and hilarity ensues.
Equivalence Principle of Gravity THREE: Terrestrial and Spaceborne Tests of the Equivalence Principle of Gravity, Dr James D Phillips, Harvard Smithsonian Center for Astrophysics (same talk description as Part FOUR above).
Ron Hatch: Using GPS to Refute the Equivalence Principle - Part 1 The GPS system along with other experimental evidence is used to refute the equivalence principle. The paper is divided into
three major parts followed at the end by a short section which deals with a look at some as yet unexplained experimental data. The first major section looks at the equivalence principle in the
light of "falling" electromagnetic radiation. In this first section the equivalence principle is defined and the arguments of Einstein, Feynman and Clifford Will are presented. Each of the
arguments is refuted and the GPS evidence plays a significant role in that refutation. The section is closed with a strange quote from the "GPS Bible" regarding the equivalence principle. This
quote is followed by a transitional argument to the next section using two clocks, fore and aft, in an accelerating rocket. The second major section deals with the relationship of the equivalence
principle to infinitesimal Lorentz Transformations (ILTs). Goldstein, Meisner Thorne and Wheeler, Muller, and Ashby and Spilker are quoted in support of ILTs. Goy provides a valid alternative to
the ILTs. The difference between ILTs and the "clock hypothesis" of Goy is explored. Evidence from multiple sources (including GPS) is cited in support of the clock hypothesis. This evidence
contradicts ILTs. The section is closed with the suggestion for a fairly simple experiment which could remove any doubt as to the validity (or lack thereof) of the ILTs. The third major section
returns to the ...
General Relativity: The Equivalence Principle The equivalence principle is key to understanding that light is deflected by gravity, and to understanding gravitational redshift.
Equivalence Principle of Gravity ONE: Terrestrial and Spaceborne Tests of the Equivalence Principle of Gravity, Dr James D Phillips, Harvard Smithsonian Center for Astrophysics (same talk description as Part FOUR above).
Lecture 23 - Equivalence relations and partitions Discrete Mathematical Structures - Equivalence relations and partitions
False Equivalency (Matt) Bai the Mainstream Media majority.fm Need a helping of false equivalence? Call Matt Bai! There will be ninety-seven shiploads of stupid written in the next three days
about the sad events of yesterday, but this lede from the Dean of Both Sides Are Just As Bad™ journalism will take a back seat to none of it. via
Ricardian Equivalence - Macroeconomics, Spring 2010 A video project by Evan Hewitt and Owen Burbank for Professor Ferderer's Macroeconomics class.
E = mc² | Einstein's Relativity Science & Reason on Facebook: Albert Einstein's Theory of Relativity (Chapter 4): E = mc² (Mass-Energy Equivalence) In physics, mass-energy equivalence is the
concept that the mass of a body is a measure of its energy content. In this concept the total internal energy E of a body at rest is equal to the product of its rest mass m and a suitable
conversion factor to transform from units of mass to units of energy. If the body is not stationary relative to the observer then account must be made for relativistic effects where m is given by
the relativistic mass and E the relativistic energy of the body. --- Please subscribe to Science & Reason: • • • • --- Albert Einstein proposed mass-energy equivalence in 1905 in one of his Annus
Mirabilis papers entitled "Does the inertia of a body depend upon its energy-content?" The equivalence is described by the famous equation E = mc2, where E is energy, m is mass, and c is the
speed of light in a vacuum. The formula is dimensionally consistent and does not depend on any specific system of measurement units. For example, in many systems of natural units, the speed of
light is set equal to 1, and the formula becomes the identity E = m; hence the term "mass-energy equivalence". The equation E = mc2 indicates that energy always exhibits mass in whatever form the
energy takes. Mass-energy equivalence also means that mass conservation becomes a restatement, or ...
Albert Einstein on the mass energy equivalence Albert Einstien discusses the physical principle that a measured quantity of energy is equivalent to a measured quantity of mass
Equivalence Principle of Gravity FIVE: Terrestrial and Spaceborne Tests of the Equivalence Principle of Gravity, Dr James D Phillips, Harvard Smithsonian Center for Astrophysics (same talk description as Part FOUR above).
Juan Williams Smacks Down Brigadeführer Baier's False Equivalence Between Attack Obama and Questioning Bush (Via ) Williams slaps down Baier's false equivalence between those questioning Obama's
legitimacy and those who questioned Bush's.
Radio-Canada Anchor Simon Durivage Equates Israel with Sudan and Iran Radio-Canada Anchor Simon Durivage Equates Israel with Sudan and Iran on October 15, 2010
The Curse of Dynamic Equivalency Do you want to know what God said or what Dr. Scholar says God ment? Conviction vs. preference. What shall I wear today and what translation shall I read. By Dr.
DA Waite A study of the history of the translation work that went from formal equivalency to dynamic equivalency and the consequences. TheBible For Today. 900 Park Avenue. Collingswood NJ 08108.
Dr. DA Waite, Th.D., Ph.D.. Pastor of the Bible For Today. Ba The Bible For Today. 900 Park Avenue. Collingswood NJ 08108. Dr. DA Waite, Th.D., Ph.D.. Pastor of the Bible For Today. Baptist
Church. Orders - 1-800-John 10:9. or 1-800-564-6109. Q & A - 1-856-854-4452. or 1-856-854-4747. Fax - 1-856-854-2464. . BFT@. Every thing you wanted to know about the modern translation issue -
eBay VJET -- Java and JavaScript Language Equivalence (screencast) VJET brings structural, syntactic and semantic equivalence between Java and JavaScript
Insomnium - Across the Dark - Equivalence and Down With the Sun "New Tracks 2009" HQ Released September 7, 2009 Genre Melodic death metal Label Candlelight BUY IT !!! DON'T DOWNLOAD FOOLS !! From
Insomnium's New up coming album - Across the Dark 2 new tracks off the cd 1.Equivalence 2.Down With the Sun WATCH IN HIGH QUALITY
Theories of equivalence 2 (Exploring Translation Theories) Introduction to chapters 2 and 3 of Exploring Translation Theories:
Equivalence relations 2 Formal definition, and example of congruence mod 5. Introduction of the idea of equivalence classes
Equivalence relations 1 Introducing equivalence relations with some examples using relationships between people
Chakal feat. Babyface Fensta - Law Of Equivalence Song About Life, *** Militant Rap, colabo between Us and Latino/european. Keeping energy moving!
Chemistry: Titration (pH Equivalence Point) Watch more free lectures and examples of Chemistry at Other subjects include Algebra, Trigonometry, Calculus, Biology, Physics, Statistics, and
Computer Science. -All lectures are broken down by individual topics -No more wasted time -Just search and jump directly to the answer
Equivalence Principle of Gravity TWO: Terrestrial and Spaceborne Tests of the Equivalence Principle of Gravity, Dr James D Phillips, Harvard Smithsonian Center for Astrophysics (same talk description as Part FOUR above).
False Moral Equivalence (Don't be a D*ck coughlan666) Coughlan666 tries really hard to paint TF as a coward. Funny enough I don't care about that at all. But he tries to do it using a false moral
equivalence. That I do not agree with. I wanted to make a video about false moral equivalence anyway, so this is my excuse. He banned me because I called him out on it. I think it's ironic given
that he called TF a coward for not engaging with him. But it's irrelevant. I don't make this video for him, but for anybody who make claims false moral equivalences. coughlan666's videos on the
topic: Sources: PLEASE NOTE THAT I DO NOT SAY THERE IS ANY CORRELATION WITH ANY WORLD VIEW AND BLASPHEMY LAW SEVERITY. Check wikipedia: Many countries of many faiths have mild or no blasphemy
laws. But we still have to be able to talk about the cases where free speech is punishable by death. That is not the same as 250 pounds and 100 hours of community service. And we shouldn't treat
it as the same.
Blogs & Forum
blogs and forums about equivalence
• “'False Equivalence' in Campaign Reporting. 10/14/2008 by Gabriel Voiles coming up with a sort of false equivalence, the idea that bringing up John McCain's”
— FAIR Blog " Blog Archive " 'False Equivalence' in Campaign,
• “let's make it a good year - Equivalence's blogs at Ultimate- | Guitar Community let's make it a good year blog. Sign-in register NOW! Equivalence. Contributions. Pictures. Blogs. MP3s. Profile
index. Subscribe! Contacting Equivalence. Send message. Forward. Add to friends. Favorites. Add to group”
— let's make it a good year - Equivalence's Blogs | Ultimate, profile.ultimate-
• “Blog Equivalence of Page Rank - Web Analytics World Blog Equivalence of Page Rank - Web Analytics World Blog”
— Blog Equivalence of Page Rank - Web Analytics World Blog, web***
• “restore doctrine of equivalence Forum Moderator. Lead Member. Posts: 4417. Re: restore doctrine of equivalence " Reply #1 on: 12-01-06 at 12:51 pm " No, I don't think it works. We've had many
discussions here about that, and caselaw subsequent to Festo suggests that any surrender”
— restore doctrine of equivalence,
• “Mental energy is the concept of a principle of activity powering the operation of the mind, soul or psyche. Energy in this context is used as the literal meaning of the activity or operation.
Mental energy has been defined as the driving force of”
— cbirdesign " Equivalence,
• “Oh -- did I say that? I wonder what I could have meant. I think it means I'm trying to conceal the facts. forum=1022&message=30811435. You'll note that I also do the same with my examples
between 1.5x and 1.6x here for the same reasons: http:///equivalence”
— Re: Conveniently Approximate Equivalence (II): Olympus SLR,
• “Definition of Ricardian equivalence This is the idea that increased government borrowing Ricardian equivalence is also known as the Barro-Ricardo equivalence proposition because”
— Ricardian Equivalence — Economics Blog,
• “The Galois mission is to create trustworthiness in critical systems. To that end, Galois develops high assurance software for demanding applications”
— Galois " Blog " Blog " Equivalence and Safety Checking in Cryptol,
• “Moral Equivalence!!!! Matt Barganier, August 04, 2010. Email This | Print This | Share This | Comment | Antiwar Forum. Wednesday's Antiwar Forum. 76082
— Moral Equivalence!!!! " Blog,
| {"url":"http://wordsdomination.com/equivalence.html","timestamp":"2014-04-20T13:21:18Z","content_type":null,"content_length":"67795","record_id":"<urn:uuid:94045de3-5e0a-4c8c-bbe2-f745bb0087c9>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
[Axiom-developer] how to expand trig functions ?
[Axiom-developer] how to expand trig functions ?
From: Francois Maltey
Subject: [Axiom-developer] how to expand trig functions ?
Date: Fri, 03 Jun 2005 15:01:54 +0200
User-agent: Gnus/5.1006 (Gnus v5.10.6) XEmacs/21.5 (chestnut, linux)
I play with expand(sin(...)) :
expand (sin (x+y)) is right, but
1/ I can't get expand (sin (2*x))
2/ The result of expand (sin (x+y+z)) doesn't seem complete:
Axiom only applies the expanded sin(a+b) and cos(a+b) rules twice.
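For reference, the fully expanded forms I'd expect (standard angle-addition identities) are:

  sin(2*x)   = 2*sin(x)*cos(x)
  sin(x+y+z) = sin(x)*cos(y)*cos(z) + cos(x)*sin(y)*cos(z)
             + cos(x)*cos(y)*sin(z) - sin(x)*sin(y)*sin(z)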
I prefer the Maple and MuPAD method: the formula is completely expanded.
Is there a reason for this ?
Do you think the axiom-code must remain as now ?
or is it possible to improve it ? If you think so, I propose to look at this.
Am I right when I see it's coming from this src/algebra/manip.spad file?
)abbrev package TRMANIP TranscendentalManipulations
++ Transformations on transcendental objects
++ Author: Bob Sutor, Manuel Bronstein
++ Date Created: Way back
++ Date Last Updated: 22 January 1996, added simplifyLog MCD.
++ Description:
++ TranscendentalManipulations provides functions to simplify and
++ expand expressions involving transcendental operators.
++ Keywords: transcendental, manipulation.
TranscendentalManipulations(R, F): Exports == Implementation where
R : Join(OrderedSet, GcdDomain)
F : Join(FunctionSpace R, TranscendentalFunctionCategory)
| {"url":"http://lists.gnu.org/archive/html/axiom-developer/2005-06/msg00059.html","timestamp":"2014-04-17T16:18:29Z","content_type":null,"content_length":"5805","record_id":"<urn:uuid:f94fc872-6a36-452b-9984-376a4b556741>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: LEFSCHETZ DECOMPOSITIONS FOR QUOTIENT VARIETIES
Abstract. In an earlier paper, the authors constructed an explicit Chow–Künneth decomposition for quotient varieties of Abelian varieties by actions of finite groups. In the present paper, the authors extend the techniques there to obtain an explicit Lefschetz decomposition for such quotient varieties for the Chow–Künneth projectors constructed there.
1. Introduction
It has been conjectured that every smooth projective variety X over a field k has a Chow–Künneth decomposition. Currently, Chow–Künneth decompositions are known to exist for curves and projective spaces [14], surfaces [16], abelian varieties ([5], [19]), varieties with "finite-dimensional" motives [10], and several other special classes. In an earlier paper [1], the authors proved that the quotient A/G of an abelian variety A by the action of a finite group G has a Chow–Künneth decomposition, the projectors of which can be described explicitly by pushing forward the Chow–Künneth projectors of A (as constructed by Deninger and Murre [5]) via the quotient map A×A → A/G×A/G. Although A/G is not in general smooth, the finiteness of G ensures that the machinery of intersection theory and Chow motives can be extended to varieties of this sort, which we term pseudosmooth. Moreover, there are quotient varieties of the above form which are smooth, but not abelian varieties; Igusa [9] gives such a construction, possibly due earlier to Enriques.
In [13], Künnemann proves the existence of a Lefschetz decomposition for Chow motives of abelian schemes; it seems natural to ask whether such a decomposition can be given for the quotient of an abelian variety, and, if so, whether this can be given explicitly. Kahn, Murre, and Pedrini [10] have shown the existence of such | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/1021/3779499.html","timestamp":"2014-04-17T13:03:17Z","content_type":null,"content_length":"8959","record_id":"<urn:uuid:0b52cee3-ffdf-4ca8-b33a-a972a28d5a83>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
Code Algebra
From OnceAndOnlyOnce: This appears to me to be a parameterized/paraphrased discussion from the RefactoringBrowserThesis?. The RefactoringBrowser and thesis use FirstOrderPredicateLogic? to prove
various facts about refactorings. --ShaeErisson
I think GroupTheory, together with two other sub-disciplines of mathematics, may be a useful background for building a code algebra. All three were founded on the idea that there are universal patterns in mathematics that can be extracted and matched. Other areas may also prove themselves useful. The concept called "(homo)morphism" in GroupTheory and its relatives is just a formal version of the concept of Analogy. And then if you can find strong analogies between different portions of code, then maybe you have a redundancy to eliminate there. --
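As a toy illustration of that idea (my own sketch, not from the original page): a map between two small algebraic structures is a homomorphism, a formal analogy, exactly when it preserves the operation, f(a op b) = f(a) op f(b).

from itertools import product

def is_homomorphism(f, op_a, op_b, domain):
    # brute-force check of f(x op_a y) == f(x) op_b f(y) over all pairs
    return all(f[op_a(x, y)] == op_b(f[x], f[y]) for x, y in product(domain, repeat=2))

domain = range(6)
parity = {x: x % 2 for x in domain}    # Z6 -> Z2
print(is_homomorphism(parity,
                      lambda x, y: (x + y) % 6,
                      lambda x, y: (x + y) % 2,
                      domain))         # True: addition mod 6 maps onto addition mod 2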
| {"url":"http://c2.com/cgi/wiki?CodeAlgebra","timestamp":"2014-04-17T04:55:37Z","content_type":null,"content_length":"2372","record_id":"<urn:uuid:508cbde1-4252-4bb7-81bd-3ece030a85de>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
implement random selection in Python
Bruza benruza at gmail.com
Sat Nov 17 01:47:16 CET 2007
On Nov 16, 6:58 am, duncan smith <buzz... at urubu.freeserve.co.uk>
> Bruza wrote:
> > I need to implement a "random selection" algorithm which takes a list
> > of [(obj, prob),...] as input. Each of the (obj, prob) represents how
> > likely an object, "obj", should be selected based on its probability
> > of
> > "prob".To simplify the problem, assuming "prob" are integers, and the
> > sum of all "prob" equals 100. For example,
> > items = [('Mary',30), ('John', 10), ('Tom', 45), ('Jane', 15)]
> > The algorithm will take a number "N", and a [(obj, prob),...] list as
> > inputs, and randomly pick "N" objects based on the probabilities of
> > the
> > objects in the list.
> > For N=1 this is pretty simply; the following code is sufficient to do
> > the job.
> > def foo(items):
> >     index = random.randint(0, 99)
> >     currentP = 0
> >     for (obj, p) in items:
> >         currentP += p
> >         if currentP > index:
> >             return obj
> > But how about the general case, for N > 1 and N < len(items)? Is there
> > some clever algorithm using Python standard "random" package to do
> > the trick?
> I think you need to clarify what you want to do. The "probs" are
> clearly not probabilities. Are they counts of items? Are you then
> sampling without replacement? When you say N < len(items) do you mean N
> <= sum of the "probs"?
> Duncan
I think I need to explain on the probability part: the "prob" is a
relative likelihood that the object will be included in the output
list. So, in my example input of
items = [('Mary',30), ('John', 10), ('Tom', 45), ('Jane', 15)]
So, for any size of N, 'Tom' (with prob of 45) will be more likely to
be included in the output list of N distinct member than 'Mary' (prob
of 30) and much more likely than that of 'John' (with prob of 10).
I know "prob" is not exactly the "probability" in the context of
returning a multiple member list. But what I want is a way to "favor"
some member in a selection process.
So far, Boris's solution is the closest (but not quite) to what I need: it returns a list of N distinct objects from the input "items". However, I tried it with an input of
items = [('Mary',1), ('John', 1), ('Tom', 1), ('Jane', 97)]
and have a repeated calling of
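For what it's worth, here is a minimal sketch of the kind of routine being discussed: weighted selection of N distinct items, by repeatedly drawing and removing a winner. This is my own illustration, not code from the thread:

import random

def weighted_sample(items, n):
    # items: [(obj, weight), ...]; returns n distinct objects, heavier
    # weights being more likely to be picked. Assumes n <= len(items).
    pool = list(items)
    chosen = []
    for _ in range(n):
        r = random.uniform(0, sum(w for _, w in pool))
        acc = 0
        for i, (obj, w) in enumerate(pool):
            acc += w
            if acc >= r:
                chosen.append(obj)
                del pool[i]            # sample without replacement
                break
    return chosen

items = [('Mary', 30), ('John', 10), ('Tom', 45), ('Jane', 15)]
print(weighted_sample(items, 2))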
| {"url":"https://mail.python.org/pipermail/python-list/2007-November/429760.html","timestamp":"2014-04-21T04:59:01Z","content_type":null,"content_length":"5491","record_id":"<urn:uuid:db862c94-1f72-480c-8cce-61ea1283b3fa>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Talk:263: Certainty
This was done 6 years later by Fox News. 72.70.180.234 10:44, 31 May 2013 (UTC)
It's easy to politicize that. Abelians versus non-Abelians ;) Not all vector spaces will likely share the property seen there.67.204.136.58 23:34, 15 August 2013 (UTC)
If you flip ab + ac around, you end up with ac + ab which looks a lot like ACAB and that can get political very fast. 94.76.233.42 (talk) (please sign your comments with ~~~~)
Abelian means that ab = ba, but this distributive law is different. Both the distributive property and the Abelian property are assumed properties of numbers, i.e., accepted as true and used to prove
more complicated properties. Non-Abelian examples of objects that "look" like numbers are not too hard to construct. One interesting example is where "a" and "b" are rotating a book clockwise 90
degrees (a) and rotating the book forward 90 degrees (b). Start with the book facing you for reading and first do "a", then "b", which is written "ab". The result has the front of the book facing up.
Now do "b" first, then "a", to get "ba". Now the binding of the book is facing up and the front of the book is facing to the right. So, "ab" is not "ba". The best I can think of for the distributive
type of thing is for everything to make sense, except b+c is something for which multiplying by "a" is undefined.--DrMath 09:07, 22 November 2013 (UTC)
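(A quick numerical version of the book experiment, my own sketch rather than part of the discussion:)

import numpy as np

a = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])   # 90-degree rotation about the vertical axis
b = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])   # 90-degree rotation about a horizontal axis

print(a @ b)   # one order of composition...
print(b @ a)   # ...the other order: the matrices differ, so ab != ba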
But what about cryptography? A mathematical topic, and hardly apolitical nowadays. However, I appreciate and enjoy Randall's sentiment about the purity of mathematics. 108.162.219.223 20:23, 17
January 2014 (UTC) | {"url":"http://www.explainxkcd.com/wiki/index.php/Talk:263:_Certainty","timestamp":"2014-04-19T05:23:05Z","content_type":null,"content_length":"24017","record_id":"<urn:uuid:2fc5ad3e-5173-4ac5-bff0-8a3ca1b9ce5a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00064-ip-10-147-4-33.ec2.internal.warc.gz"} |
Formulating Differential equations for RLC circuit with switches.
For Question 2 (attached) I am trying to figure out how to go about answering part a, but I am not confident with the physics so I am unsure how to begin formulating the required equations.
Any help is much appreciated,
Re: Formulating Differential equations for RLC circuit with switches.
Switch 1 is closed for a long time so the current in the inductor builds up to its steady state value.
This inductor current (which is easily calculated) is then the initial value for the differential equation involving R2 and L1
For convenience I have assumed that S1 opens and S2 close at time equals zero.
Re: Formulating Differential equations for RLC circuit with switches.
Thanks, I think I get it.
I have a very similar question:
A fully charged inductor is hooked up to be in series with a resistor and capacitor. How do I figure out the differential equation for the current change over time in the inductor, and the voltage change over time across the capacitor? I would be able to figure it out if it were just a resistor and inductor, but I can't find anywhere that explains it in terms of a previously charged inductor.
I'll give you a few pointers.
When your question talks about the capacitor being charged or the inductor carrying current (inductors cannot be "charged") they are giving you a clue about how to work out the initial
conditions. YOU DON'T NEED THIS INFORMATION until after you have formulated your differential equation and found the general form of the solution. You then use the initial conditions to determine
some constants in the final solution.
You can also FORGET that the question is asking for specific information UNTIL you have solved your differential equation. Once you have a solution for the circuit current (or capacitor charge)
then finding the answers to their questions will be easy enough.
FIRST you must formulate the differential equation for an RLC circuit. To start with you don't need any other information from the question. To formulate the DE calculate the voltage across each
component in terms of current i or charge q. Add these voltages together and set them equal to the supply voltage (zero in your case).
The last piece of information that you need is that the "charge" on a capacitor is the integral of the current that passes through it. You can formulate your DE in terms of current or capacitor
charge, it is YOUR CHOICE.
If you choose current then your DE will have a differential term, the term iR and an integral term.
If you choose charge then your DE will have a second differential term, a differential term an the term q/C.
I hope that this helps.
Re: Formulating Differential equations for RLC circuit with switches.
Thanks, that makes more sense. Just to clarify the situation I am referring to, attached is the question.
So for the current through the inductor, is the equation I = (I initial)e^(-t/(L/R)) = (I initial)e^(-Rt/L), and for the voltage across the capacitor, V = (V initial)(1 - e^(-t/RC))?
Then just put in the L, R and C values to get the actual equations?
Thanks so much for your help.
Re: Formulating Differential equations for RLC circuit with switches.
I'm a bit confused about what they are asking for, but the key DE I would be working from is:
$L\frac{di}{dt}+iR+\frac 1C \int i dt = 0$
this is simply the sum of the voltages around the loop.
You also have
$V_C=\frac qC =\frac 1C \int i dt$
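If you want to see the behaviour without solving anything by hand, here is a small numerical sketch (Python; the component values are invented for illustration and are not from the question):

R, L, C = 10.0, 1.0, 1e-3      # assumed values, not from the question
q, i = 1e-3, 0.0               # initial capacitor charge (C) and current (A)
dt = 1e-5                      # time step for a simple Euler integration
for _ in range(5000):
    di = (-R * i - q / C) / L  # from L di/dt + iR + q/C = 0
    q += i * dt                # dq/dt = i
    i += di * dt
print(q, i)                    # an underdamped oscillation decaying to zero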
Note that there is also a differential equations page on this site. These electrical problems are so common that you would probably get more responses there.
Melbourne Australia | {"url":"http://mathhelpforum.com/advanced-applied-math/198039-formulating-differential-equations-rlc-circuit-switches.html","timestamp":"2014-04-17T14:58:44Z","content_type":null,"content_length":"46127","record_id":"<urn:uuid:d35cd2db-8a79-4910-b565-2351025d7181>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00118-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
how do you Find the slope of a line that passes through (–2, –3), and (1, 1)
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50994c31e4b085b3a90da2ae","timestamp":"2014-04-20T08:24:39Z","content_type":null,"content_length":"34727","record_id":"<urn:uuid:157bfbf4-3eee-4233-a90a-58bd7feef0f2>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00207-ip-10-147-4-33.ec2.internal.warc.gz"} |
The gimbal lock shows up in my quaternions
I suspect this is a bit basic for MathOverflow, seeing as I'm still just an undergraduate.
I've been playing around with quaternions as a means to eliminate the gimbal lock. From what I understand, one place the gimbal lock occurs is when you rotate $\frac{\pi}{2}$ around the y-axis. If I
create two rotation matrices, $R_{1}$ rotates first $\phi$ around x-axis and $\frac{\pi}{2}$ around the y-axis, while $R_{2}$ rotates first $\frac{\pi}{2}$ around the y-axis and then $\theta$ around
the z-axis.
$$R_{1} = R_{z}(0)\, R_{y}(\tfrac{\pi}{2})\, R_{x}(\phi) = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\phi) & -\sin(\phi) \\ 0 & \sin(\phi) & \cos(\phi) \end{bmatrix} = \begin{bmatrix} 0 & \sin(\phi) & \cos(\phi) \\ 0 & \cos(\phi) & -\sin(\phi) \\ -1 & 0 & 0 \end{bmatrix},$$

$$R_{2} = R_{z}(\theta)\, R_{y}(\tfrac{\pi}{2})\, R_{x}(0) = \begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -\sin(\theta) & \cos(\theta) \\ 0 & \cos(\theta) & \sin(\theta) \\ -1 & 0 & 0 \end{bmatrix}.$$
Since $R_{1}(\phi) = R_{2}(-\phi)$, we've lost a degree of freedom. Which is what I expect.
From what I understand, if I perform the same rotations using quaternions, I should be avoiding the gimbal lock?
$Q_{1} = Q_{z}(0) \times Q_{y}(\frac{\pi}{2}) \times Q_{x}(\theta) = (1, 0, 0, 0) \times (\frac{1}{\sqrt{2}}, 0, \frac{1}{\sqrt{2}}, 0) \times (\cos\frac{\theta}{2}, \sin\frac{\theta}{2}, 0, 0) = \frac{1}{\sqrt{2}}(\cos\frac{\theta}{2}, \sin\frac{\theta}{2}, \cos\frac{\theta}{2}, -\sin\frac{\theta}{2})$

$Q_{2} = Q_{z}(\phi) \times Q_{y}(\frac{\pi}{2}) \times Q_{x}(0) = (\cos\frac{\phi}{2}, 0, 0, \sin\frac{\phi}{2}) \times (\frac{1}{\sqrt{2}}, 0, \frac{1}{\sqrt{2}}, 0) \times (1, 0, 0, 0) = \frac{1}{\sqrt{2}}(\cos\frac{\phi}{2}, -\sin\frac{\phi}{2}, \cos\frac{\phi}{2}, \sin\frac{\phi}{2})$
By setting $\phi = -\theta$, $Q_{2}$ becomes
$Q_{2} = \frac{1}{\sqrt{2}}(\cos\frac{-\theta}{2}, -\sin\frac{-\theta}{2}, \cos\frac{-\theta}{2}, \sin\frac{-\theta}{2})$, which due to trig properties becomes $Q_{2} = \frac{1}{\sqrt{2}}(\cos\frac{\theta}{2}, \sin\frac{\theta}{2}, \cos\frac{\theta}{2}, -\sin\frac{\theta}{2})$.
Which means that $Q_{1}$ and $Q_{2}$ rotate around the same axis, only in the opposite direction, and we've lost a degree of freedom (??). Am I missing something fundamental?
quaternions geometry
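As a quick numerical cross-check (a Python sketch, not part of the original question; qmul is just the Hamilton product written out by hand):

import math

def qmul(a, b):
    # Hamilton product of quaternions given as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qaxis(angle, axis):
    # unit quaternion for a rotation of `angle` about the x, y or z axis
    q = [math.cos(angle / 2), 0.0, 0.0, 0.0]
    q[1 + "xyz".index(axis)] = math.sin(angle / 2)
    return tuple(q)

theta = 0.7
Q1 = qmul(qaxis(math.pi / 2, "y"), qaxis(theta, "x"))    # Q_y(pi/2) * Q_x(theta)
Q2 = qmul(qaxis(-theta, "z"), qaxis(math.pi / 2, "y"))   # Q_z(-theta) * Q_y(pi/2)
print(all(abs(p - q) < 1e-12 for p, q in zip(Q1, Q2)))   # True, for any theta

This prints True, confirming numerically that the two products are the same quaternion when $\phi = -\theta$, exactly as derived above.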
While a lot of people will say to ask this on MSE, I would personally like to see the question answered here. – Steven Gubkin May 3 '12 at 19:12
1 Answer
There's no paradox here: you did the same calculation in two different ways and got the same answer, as you should. The issue is how to think about gimbal lock.
How should you represent a rotation in three dimensions? You can try using Euler angles to represent it using three rotation angles, but there's something fishy about this. That
naturally parametrizes a three-dimensional torus, but the rotation group is not a torus (rather, it's a projective space). It doesn't even have a torus as a covering space, but rather a
3-sphere. So the problem is that the naive coordinates just don't give the right topology, and therefore something must go wrong in degenerate cases to fix the topology. Gimbal lock is
essentially a name for what goes wrong.
When people say quaternions avoid gimbal lock, they mean the unit quaternions naturally form a 3-sphere, so there are no topology issues and they give a beautiful double cover of the rotation group (via a very simple map). Keeping track of a unit quaternion is fundamentally a more natural way to describe a rotation than keeping track of three Euler angles.
On the other hand, if you describe your quaternion via Euler angles, then gimbal lock shows up again, not in the quaternions themselves but in your coordinate system for them. That's
what you are seeing in your calculations: you are doing a standard calculation to see the effects of gimbal lock, and then redoing the same calculation using quaternions.
Some explanations of gimbal lock don't distinguish clearly between the underlying geometry/topology and the choice of coordinates, which has always annoyed me, since that's essential
for understanding what's going on mathematically.
Not the answer you're looking for? Browse other questions tagged quaternions geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/95902/the-gimbal-lock-shows-up-in-my-quaternions?sort=oldest","timestamp":"2014-04-19T02:38:29Z","content_type":null,"content_length":"54872","record_id":"<urn:uuid:752c1689-0a7a-46f9-90ae-88f9fedb2db6>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00426-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: algebraic varieties
Replies: 2 Last Post: Mar 25, 2011 9:51 PM
algebraic varieties
Posted: Mar 25, 2011 9:37 AM
Is an algebraic variety a union or an intersection? For example, three first-order equations in three unknowns is called a system of equations, whose solution is geometrically an intersection. Some references I read tell me the obvious fact that UNIONS are much more abstract and the geometry is not often (if ever) "visualizable". So this makes for more exciting math and the basis for new directions and abstractions. But most of the references begin by stating that algebraic varieties are intersections and then kind of float into saying unions. And from there on, of course, I need many credits under my belt (hat?) to proceed. Any answers or references are appreciated.
Max Buscher | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2248073","timestamp":"2014-04-18T15:39:03Z","content_type":null,"content_length":"18767","record_id":"<urn:uuid:f05b5011-2114-4ede-8ee6-4f96198ad557>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00208-ip-10-147-4-33.ec2.internal.warc.gz"} |
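For reference (a standard fact, with no claim about which convention the references in question use): an affine algebraic variety is by definition the set of common zeros of a family of polynomials, so it is an intersection of hypersurfaces, V(f1, ..., fk) = V(f1) ∩ ... ∩ V(fk). Unions enter through products, since a product of polynomials vanishes exactly where some factor does: V(f·g) = V(f) ∪ V(g). For example, V(xy) in the plane is the union of the two coordinate axes. The class of varieties is therefore closed under both finite unions and intersections, which may be why the references appear to slide between the two words.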
Calculate Seconds that are over 60 minutes, to Days, Hours, Minutes, Seconds
I have an issue with calculating totals of seconds that go over 60 minutes and add up to a few days.
One of my eval calculations sums to 496089.166322 seconds, and if I use | fieldformat "Total Time"=strftime('Total Time', "%M:%S") I get 48:09 as the sum, but it should come out to 5 days, 17 hours,
48 minutes and 9 seconds.
I am not sure if I have to use a macro to do the job? LINK
Or am I missing something obvious?
I have searched through every variation of this and have tried all the common date and time format variables with strftime (which converts an epoch time to the format string Y).
Here is my current search string where I have to break down the Days Hours Minutes and Seconds along with a ScreenCapture
Search String:
index="snort" ( 2222222 dest_port="") OR (1111111 src_port="") OR ( 1111111 src_ip="") OR (2222222 dest_ip="") | eval disconnect_time=if(match(_raw,"2222222"),_time,null()) | eval connect_time=if
(match(_raw,"1111111"),_time,null()) | eval Ephemeral=if(isnotnull(disconnect_time),dest_port,Ephemeral) | eval Ephemeral=if(isnotnull(connect_time),src_port,Ephemeral) | stats min(connect_time) as
Connect max(disconnect_time) as Disconnect min(src_ip) as "Source IP" by Ephemeral | eval Seconds=Disconnect-Connect | fieldformat "Seconds"=strftime('Seconds', "%s") | eval Minutes=Seconds/60 |
eval Hours=Minutes/60 | eval Days=Hours/24 | convert timeformat="%a %b-%d %Y "at" %H:%M:%S" ctime(Connect) ctime(Disconnect) | search Connect= Disconnect= | rename Ephemeral as "Connection Port",
Total_time as "lala"
asked 12 Feb '13, 10:11
accept rate: 0%
One Answer:
Hello, I think that you have to use "tostring" on the eval command
| eval "Total Time"=tostring(Seconds,"duration")
The result of that command is 5+17:48:09.166322 where "5+" is the number of days.
I hope this helps you :)
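As a cross-check of the arithmetic outside Splunk, the same breakdown in Python (using the total from the question):

seconds = 496089.166322
days, rem = divmod(seconds, 86400)     # 86400 seconds per day
hours, rem = divmod(rem, 3600)
minutes, secs = divmod(rem, 60)
print(f"{int(days)}+{int(hours):02d}:{int(minutes):02d}:{secs:09.6f}")
# -> 5+17:48:09.166322, matching tostring(Seconds, "duration")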
answered 12 Feb '13, 10:28
accept rate: 100%
Yes that worked! To make it pretty... is there a way to take away the milliseconds? Also, how would I sum the "Total Seconds" as a "Total Time", like: | transpose | "Total Time" string -- so the total
time shows left justified?
I found addcoltotals gives me a total in seconds for the field I specify, then I will have to convert the seconds.
Can you get the amount of days on its own? | {"url":"http://answers.splunk.com/answers/75534/calculate-seconds-that-are-over-60-minutes-to-days-hours-minutes-seconds","timestamp":"2014-04-16T04:21:06Z","content_type":null,"content_length":"46060","record_id":"<urn:uuid:54cb967b-712f-49b3-8bd2-d3cb940bce6e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00069-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Lack of Relevancy
A LACK OF RELEVANCE
As a college student I always disliked the quantitative courses like statistics, calculus, management science, and operations management. Unless one is a math or science major, there is very little
in these courses that will be remembered, utilized or even relevant in one's future. I've been a management professor for almost 30 years, and I have never needed to recall or utilize any formula
from any of those mandatory quantitative courses I took as a business student.
I now teach operations management, along with two management courses that are non-quantitative. I stopped using operations management textbooks for my course, because they all seemed to be fixated on
formulas like the one above. If you are curious, it is a formula for determining the probability of zero customers (in this case trucks) in a multiple-server system where there are four unloading
bays, each crewed by two employees taking one hour to unload each truck as the trucks arrive at the rate of three per hour. Try getting a student to remember it. Harder yet is to convince students
why it is important at all, when the reality is that it couldn't be less important.
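(The formula itself appeared on the original page as an image that has not survived. The standard M/M/c zero-customer probability it describes, for arrival rate $\lambda$, per-server service rate $\mu$ and $c$ servers, is

$$P_0 = \left[\sum_{n=0}^{c-1}\frac{(\lambda/\mu)^n}{n!} + \frac{(\lambda/\mu)^c}{c!\,\bigl(1 - \lambda/(c\mu)\bigr)}\right]^{-1},$$

which for the scenario described - $\lambda = 3$ trucks per hour, $\mu = 1$ truck per hour per bay, $c = 4$ bays - works out to $P_0 = 1/26.5 \approx 0.038$.)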
Thanks to computer technology I can do this analysis quite simply without ever seeing a formula. I don't do statistical analysis, but I could do it quite easily with statistical software. I can also
use linear programming software to solve problems without knowing how to solve simultaneous linear equations. I consider the quantitative courses to have been a waste of my time and energy. What I
retained from all of them could be put into one very short lecture.
So why are so many quantitative courses required of students who are not quantitatively oriented and not pursuing a career that will utilize that kind of knowledge? The answer is simple. Professors
decide curriculum based on what they know and how they learned it. It's not a customer-driven process. And professors, many of whom claim to be on the leading edge of research, are decades behind in
teaching what students really need to know.
There is a phenomenon called the "paradox of expertise," which states that as humans become increasingly expert at performing a task, they become increasingly unable to explain what they are doing.
Experts perform advanced tasks without even thinking about them, and when forced to give an explanation of what they are doing, they revert to reasoning and strategies that were taught to them as
novices. Formulas were the essence of how most current quantitative experts learned their trade, so that is how they teach it. Unfortunately they also write the textbooks – most of which are littered
with formulae like the above example.
There is a sign in my office that says: "Immediacy in relevance is particularly critical in adult learning. Adults are quick to forget anything without foreseeable relevance." Solving equations like
the one above, as well as a lot of other stuff, is irrelevant to 99% of all college graduates and their future. But professors still teach this stuff – because they can, and because that is how they
learned it.
There ought to be a law for teaching that says: "Don't teach me what you know – teach me what I need to know." Of course some liberal arts and quantitative professors might disagree with this
because, if relevancy were the criterion for curriculum, then a lot of professors would be out of work. Enlightenment is a noble cause, and learning about history, language, music, and art is relevant
to improving a society. Formulas for the masses are not relevant.
│Photos│Stories │Bizarre │Links│Home│ | {"url":"http://www.tcnj.edu/~hofmann/Relevancy.htm","timestamp":"2014-04-21T14:40:13Z","content_type":null,"content_length":"6900","record_id":"<urn:uuid:bd78db84-c888-4a7b-b8d0-86a9158246de>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00122-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why Can't Math Teaching Be More Like Computer Science?
From elementary to high school I've had teachers who've droned on and on trying to explain minute details about the concepts they're teaching. I've almost always found it more difficult to follow the
teacher's train of thought than to just learn the concept myself, so I always just read the book during lecture. I mean "read" in the loosest sense of the word though, as I'd just skim through
looking for the equations. From middle-school and upwards, the textbooks were written so as to be incomprehensibly dense to any student not three grade levels higher than the grade the book was
intended for. The most peculiar aspect of this arrangement was that I always found that solving the problem always depended on only a few things. Far less than the length of the teacher's lectures
would suggest.
When I got to college and took computer science classes we had to document our code. For each function we needed to present a concise explanation of:
1. What the function does
2. What the function's arguments are and how they relate to what the function does
3. What conditions the function requires in order to work properly.
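A minimal sketch of what that documentation looks like in practice (Python here, with the quadratic formula standing in for any equation):

import math

def quadratic_roots(a, b, c):
    """Solve a*x**2 + b*x + c = 0 for x.                 (1) what it does

    Arguments:
        a, b, c -- real coefficients of the polynomial.  (2) the arguments

    Requires:
        a != 0 and b*b - 4*a*c >= 0 (real roots only).   (3) the conditions
    """
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

Those three docstring sections are exactly the three items listed above, and together they are all you need in order to use the function - or the equation it implements.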
And it hit me. These three things are all you need in order to use any mathematics equation. And so I'm baffled. All the math and science textbooks are designed around the author raising some
question about the concept, going through a derivation, and arriving at an equation that answers the original question. Even ignoring the fact that the questions raised by the book have almost always
been incomprehensible to me, you still have the fact that this form of teaching results in the important information being strewn all over the place. The explanations on what the variables are could
be in any chapter, the explanation of what the function does is written in terms of its derivation and mixed all throughout the section, and the conditions on the function's use are hidden somewhere
in the derivation. In the worst case, you have arguments that are only defined in tiny tables in the margins of some page in the last chapter. So this makes me wonder: What is the use in making
students go through all this hullabaloo? Why can't a mathematics textbook, for each equation, provide concise summaries of what the function does, what its arguments are and how they relate to the
function, and what conditions invalidate the function? I see absolutely no use in making the student skip and hunt around the book every time they want to do a homework problem. I personally find it
much easier to understand mathematics by relating the equations to the concepts than trying to piece together some haphazard, imprecise English explanation, or some dense, etymologically sterilized
"mathematically correct" explanation.
Any decent mathematics book should do what you are saying.
Sometimes the definitions themselves are found in the Appendix or in part of the preamble of the textbook.
I do agree though that a lot of mathematics books do gloss over things and they do not retain the context that is required for a more intuitive understanding.
Having said this there are, nowadays, a lot more resources and people are realizing that there are people who need the extra context and explanation that is usually stripped out of many textbooks.
If you find a book that outlines all the structural definitions in set notation, as well as all the constraints, then that should be sufficient to understand the question completely.
As for working out mathematical problems, that is not so clear-cut. Everyone has problems using results in creative ways, even the mathematical experts. I don't know if there is an optimal way to
do this, and I have not come across a universal approach in my readings.
George Polya wrote on this topic in his book "How to Solve It", but I am unaware of any other sources that analyze heuristics in a more general way. | {"url":"http://www.physicsforums.com/showthread.php?p=3572239","timestamp":"2014-04-20T03:21:59Z","content_type":null,"content_length":"91352","record_id":"<urn:uuid:bc003d00-5f00-4b46-816b-78a3d58f02e6>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00531-ip-10-147-4-33.ec2.internal.warc.gz"} |
HELP!! differential equation and bessels function
June 11th 2008, 07:54 AM
HELP!! differential equation and bessels function
I'm studying for my final exam at the moment and I found this question in a past exam paper, but I have no idea how to solve it.
Can anyone please, please help me!!!
June 11th 2008, 08:54 AM
Make the change of variable $y'=y/x^{\alpha}$ and rewrite the DE in terms of $y'$ and $x$. Then put $x'=\beta x^{\gamma}$ and you should find that $x'$ and $y'$ satisfy the Bessel equation of
order $n$.
June 11th 2008, 09:49 AM
thanks for the reply
Just one more question:
How do you know which variable to use?
For instance, how do you choose the 'change of variable'?
June 11th 2008, 10:08 AM
By comparing the Bessel equation with that given, and more directly looking at the form of the solution that you are to demonstrate. (in fact the suggested change of variables comes directly from
the suggested solution)
June 14th 2008, 06:58 AM
Using [equation image not preserved: http://www.mathhelpforum.com/math-he...1ef959be-1.gif] I got
(y'x^2 - yx·alpha)·x^(-alpha) + yx·x^(-alpha) - (2·alpha·x·y)·x^(-alpha) + (beta^2·gamma^2·x^(2·gamma) + alpha^2 - n^2·gamma^2)·y = 0
Is this right???? I wasn't sure how to use x' = beta·x^gamma from here.
June 14th 2008, 09:35 AM
Using [equation image not preserved: http://www.mathhelpforum.com/math-he...1ef959be-1.gif] I got
(y'x^2 - yx·alpha)·x^(-alpha) + yx·x^(-alpha) - (2·alpha·x·y)·x^(-alpha) + (beta^2·gamma^2·x^(2·gamma) + alpha^2 - n^2·gamma^2)·y = 0
Is this right???? I wasn't sure how to use x' = beta·x^gamma from here.
You do know that the ' does not denote differentiation here, don't you?
Try instead $u(x)=y(x)/x^{\alpha}$ first, from this compute $\frac{dy}{dx}$ and $\frac{d^2y}{dx^2}$ in terms of $x$, $u$, $\frac{du}{dx}$ and $\frac{d^2u}{dx^2}$ and substitute into the DE. | {"url":"http://mathhelpforum.com/differential-equations/41300-help-differential-equation-bessels-function-print.html","timestamp":"2014-04-17T23:12:47Z","content_type":null,"content_length":"10547","record_id":"<urn:uuid:d8cf71cb-5c79-4c1b-911e-1e05eaa3bedb>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00134-ip-10-147-4-33.ec2.internal.warc.gz"} |
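For the record, the computation the previous post asks for is just the product and chain rules (nothing here depends on the lost problem statement): with $y = x^{\alpha}u$ we get $\frac{dy}{dx} = \alpha x^{\alpha-1}u + x^{\alpha}\frac{du}{dx}$ and $\frac{d^2y}{dx^2} = \alpha(\alpha-1)x^{\alpha-2}u + 2\alpha x^{\alpha-1}\frac{du}{dx} + x^{\alpha}\frac{d^2u}{dx^2}$, which can then be substituted into the DE.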
How Tall Will Your Baby Be?
I plugged Caleb's numbers into three different height predictors online, and got 5'7", 5'11", and 5'10". 5'7" seems much more likely, given that we're short people (I'm 5'3" and my husband is 5'6").
Doubling his height at 2 years gave me 5'8", and given that the first predictor said there's a 68% chance of him being 2 inches from that height and a 95% chance of him being 4 inches from that
height, that seems to be a pretty good indication for Caleb, at least. I hadn't heard of doubling their height, but in this case at least it lines up with the most common way of predicting height
(estimating by entering the parents' heights into a formula).
Both boys have the same parents, so they have a probable lower point of 5'3" and high point of 5'11" (yeah, I'm not seeing that happening). Doubling Isaac's height at 18 months gives me 5'3", and I
suspect he will be shorter than his brother, so that's possible and within his range; the third predictor on the site below gave me a height of 5'8" for Isaac where it gave Caleb 5'10". Funny. | {"url":"http://www.whattoexpect.com/forums/august-2009-babies/archives/how-tall-will-your-baby-be.html","timestamp":"2014-04-18T09:51:42Z","content_type":null,"content_length":"128561","record_id":"<urn:uuid:edfdd22f-f046-48dd-8ef6-36090c6ffefc>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
Scientific Notation - Problem 1
To change a number from standard notation to scientific notation, move the decimal so that all the 0s are to the right of the decimal. Another way to think about it is to move the decimal so that
there is only one non-zero digit to the left of the decimal. Count the number of places that you moved the decimal. This number tells you what factor of ten you must multiply the "new" number by in
order to get the original number in standard notation. The number of spaces that you moved the decimal will tell you what is the exponent of 10. If you moved the decimal to the left, the exponent
will be positive. If you moved the decimal to the right, the number will be negative.
Here we're given a bunch of numbers that have lots of zeros and we're asked to write them in scientific notation. So I'm going to start with this problem thinking about where my decimal place is. I'm going to count how many places I'm moving it to the left: 1, 2, 3, 4, 5, 6, 7. That means I'm going to have 1.56 times 10 to the 7th.
This number is the same thing as 1.56 times 1 with 7 zeros behind it. That's how it got to be so big. Here I'm doing the process backwards. My decimal point is moving to the right. There we go. I'm
going to stop so that I have an A value that’s between 1 and 10. So I moved 1, 2, 3, 4, 5 places so my answer will be 5.3 times 10 to the negative, because I moved in the right direction, negative
how much was it? 1, 2, 3, 4, 5; 10 to the negative 5th exponent.
This last one is really similar: I want to move my decimal place over until I get an A value that's between 1 and 10, so it will look like 5. Sometimes people like to write it as 5.0 times 10 to the
negative 5th. I personally prefer scientific notation to these kinds of numbers where you have to write lots and lots of zeros. I think this is a lot easier and shorter than this whole stuff and the
nice thing about it is that these are equivalent statements in Math and Science classes. So anytime you write this your reader will know that you actually meant this number.
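(The actual worksheet numbers were not preserved in this transcript, but a couple of lines of Python reproduce the two conversions narrated above:

for x in (15600000, 0.000053):
    print(f"{x:.2e}")   # 1.56e+07 and 5.30e-05

which is the computer's shorthand for 1.56 x 10^7 and 5.3 x 10^-5.)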
One other thing that I wanted to point out to you is that sometimes students look at this and say why don’t you just round that? That’s like zero. Well we have to get really precise especially when
we are moving into the time of like nanotechnology and numbers that are really, really small or thinking about world populations and national debts that are in the trillions of dollars, like really,
really big. Scientific notation lets you be precise without having to write these long, long, long numbers.
power scientific notation | {"url":"https://www.brightstorm.com/math/algebra/exponents/scientific-notation-problem-1/","timestamp":"2014-04-18T03:21:31Z","content_type":null,"content_length":"55089","record_id":"<urn:uuid:50ddcec1-3fb3-4821-930b-a806ac0ff539>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00000-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] The boundary of objective mathematics
Paul Budnik paul at mtnmath.com
Sat Mar 14 12:19:01 EDT 2009
Timothy Y. Chow wrote:
> I'm curious about this "increasing scepticism" that you speak of. Do you
> have any statistical evidence of increasing scepticism? Or it is just a
> rhetorical flourish?
It's an impression I have based on limited evidence. Solomon Feferman
expresses his doubts about the CH near the end of his paper "Does
Mathematics Need New Axioms?"
(math.stanford.edu/~feferman/papers/newaxioms.pdf ). There use to be a
web page that gave the opinions of various prominent mathematicians on
CH. Several of them expressed doubts about whether it was objectively
true or false. I cannot now find the page so it may no longer exist.
When I was a student (admittedly quite some time ago) I had the
impression that the vast majority of logicians thought CH was an
objective question.
> ... In one direction, I could argue
> that as far as we know, the continuum hypothesis might play a key role in
> the physical world. ...
Einstein, Feynman, a recent Noble Prize winner Gerard 't Hooft and many
others have come to suspect that physics is ultimately discrete or
digital (see http://www.mtnmath.com/digital.html for quotes and
additional names). Thus it is at least a respectable philosophical
position to take. Potential infinity is very useful in developing
mathematics and our universe could be potentially infinite.
Paul Budnik
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2009-March/013480.html","timestamp":"2014-04-16T06:02:40Z","content_type":null,"content_length":"3999","record_id":"<urn:uuid:e52bcbaf-5448-44f6-9b21-9883bf381756>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00431-ip-10-147-4-33.ec2.internal.warc.gz"} |
HELP with Kirchhoff's law equation
March 4th 2013, 06:23 AM
HELP with Kirchhoff's law equation
Using Kirchhoff’s Laws calculate the following:
a) The three branch currents I[1], I[2], and I[L]
b) The power dissipated in the load resistor
Attachment 27353
R1 = 12 Ω
R2 = 14 Ω
RL = 7 Ω
Explain how you got there if possible. Thanks!
March 4th 2013, 01:00 PM
Re: HELP with Kirchhof's law equation
For starters the only time you have a problem like this without capacitors (or inductors) you will not have a differential equation.
So you have the junction rule: $I_1 + I_L = I_2$
You didn't put a unit on the 19 (tsk tsk). I'm going to assume that it's the potential difference across the source. Now we need to talk about potentials. There is a loop around the whole thing
and I'm going to assume the current is clockwise. (So we have a loop through the source and around the circuit going through the 12 (Ohm) and the 14 (Ohm) resistors.) You will need one more loop:
Let's make it go across the bridge...that is a clockwise current going through the 12 (Ohm) and 7 (Ohm) resistors. (What is the potential across the bridge?) Note that your 7 (Ohm) resistor has
the current going the "wrong" way, What do you do with the potential across the 7 (Ohm) resistor in this case? | {"url":"http://mathhelpforum.com/differential-equations/214194-help-kirchhofs-law-equation-print.html","timestamp":"2014-04-20T19:57:02Z","content_type":null,"content_length":"4975","record_id":"<urn:uuid:208bc4f6-18a5-4f8c-90ca-4d4f79a49547>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00471-ip-10-147-4-33.ec2.internal.warc.gz"} |
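A sketch of the bookkeeping in Python/SymPy. Note that the topology is guessed from the hints above, because the attachment with the circuit has not survived, so treat the numbers as illustrative only:

from sympy import symbols, solve

I1, I2, IL = symbols("I1 I2 IL")
R1, R2, RL, V = 12, 14, 7, 19        # ohms and volts, as quoted in the thread

equations = [
    I1 + IL - I2,                    # junction rule
    V - I1 * R1 - I2 * R2,           # outer loop through the source, R1, R2
    I1 * R1 - IL * RL,               # bridge loop through R1 and RL
]
sol = solve(equations, [I1, I2, IL])
print(sol)                           # {I1: 19/50, I2: 361/350, IL: 114/175}
print(sol[IL] ** 2 * RL)             # power dissipated in the load, about 2.97 W

Whether those loop equations match the actual figure is exactly the question the previous post is prompting you to answer.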
Homework Help
physics - tchrwill, Monday, March 21, 2011 at 2:50pm
The Law of Universal Gravitation states that each particle of matter attracts every other particle of matter with a force which is directly proportional to the product of their masses and inversely
proportional to the square of the distance between them. Expressed mathematically,
F = GM(m)/r^2
where F is the force with which either of the particles attracts the other, M and m are the masses of two particles separated by a distance r, and G is the Universal Gravitational Constant. The
product of G and, let's say, the mass of the earth, is sometimes referred to as GM or mu (the Greek letter, pronounced "mew" as opposed to "meow"), the earth's gravitational constant. Thus the force of
attraction exerted by the earth on any particle within, on the surface of, or above, is F = 1.40766x10^16 ft^3/sec^2(m)/r^2 where m is the mass of the object being attracted and r is the distance
from the center of the earth to the mass.
The gravitational constant for the earth, GM(E), is 1.40766x10^16ft^3/sec^2. The gravitational constant for the moon, GM(M), is 1.7313x10^14ft^3/sec^2. Using the average distance between the earth
and moon of 239,000 miles, let the distance from the moon, to the point between the earth and moon, where the gravitational pull on a 32,200 lb. satellite is the same, be X, and the distance from the
earth to this point be (239,000 - X). Therefore, the gravitational force is F = GMm/r^2 where r = X for the moon distance and r = (239000 - X) for the earth distance, and m is the mass of the
satellite. At the point where the forces are equal, 1.40766x10^16(m)/(239000-X)^2 = 1.7313x10^14(m)/X^2. The m's cancel out and you are left with 81.30653X^2 = (239000 - X)^2 which results in
80.30653X^2 + 478000X - 5.7121x10^10 = 0. From the quadratic equation, you get X = 23,859 miles, roughly one tenth the distance between the two bodies from the moon. So the distance from the earth is
~215,140 miles.
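The same computation in a few lines of Python, for anyone who wants to vary the numbers:

import math

GM_E = 1.40766e16        # earth's gravitational constant, ft^3/sec^2
GM_M = 1.7313e14         # moon's gravitational constant, ft^3/sec^2
D = 239000               # earth-moon distance in miles, as used above

# Equal pull: GM_E/(D - x)^2 = GM_M/x^2, with x measured from the moon
k = math.sqrt(GM_E / GM_M)           # about 9.017
x = D / (k + 1)
print(round(x), round(D - x))        # 23859 and 215141 (the text rounds to 215,140)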
Checking the gravitational pull on the 32,200 lb. satellite, whose mass m = 1000 lb.sec.^2/ft.: the pull of the earth is F = 1.40766x10^16(1000)/(215,140x5280)^2 = 10.91 lb. The pull of the moon is
F = 1.7313x10^14(1000)/(23,859x5280)^2 = 10.91 lb.
This point is in the neighborhood of what is usually labelled L1, the Lagrangian point that lies on the earth-moon line between the two bodies (the true L1 also accounts for orbital motion, so it does not sit exactly where the two pulls balance). There is an L5 Society which supports building a space station at the L5 point. There are five such points in space, L1 through L5, at which a small body can hold a fixed position relative to two very massive bodies. The points are called Lagrangian Points and are the rare cases where the relative motions of three bodies can be computed exactly. For a body orbiting a much larger body, such as the moon about the earth, L1 lies between the earth and moon, L2 lies on the same line just beyond the moon, and L3 lies on the line on the far side of the earth, roughly opposite the moon. The remaining L4 and L5 points are located on the moon's orbit such that each forms
an equilateral triangle with the earth and moon. | {"url":"http://www.jiskha.com/display.cgi?id=1300673946","timestamp":"2014-04-18T22:30:41Z","content_type":null,"content_length":"11543","record_id":"<urn:uuid:4e3ca65f-fa38-4fd9-bb1c-544a9368cef8>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00315-ip-10-147-4-33.ec2.internal.warc.gz"} |
Saber-Slant #5: Everything Regresses
You often hear a lot about Spring Training and early season performances, and people often wonder aloud if these performances are “fact or fiction.” Baseball has a (sometimes excruciatingly) long
season, and pundits need some talking points during April when the races don't yet matter. As a result, different players get discussed as to whether or not they can maintain their level of performance over
the course of the season based on two or three weeks worth of good or bad play.
The truth of the matter is that the answer to the “fact or fiction” question is always fiction.
This question gets to the heart of an important concept in sabermetrics that is prevalent in anything and everything baseball. The concept in question is called regression to the mean. Regression to
the mean is actually a statistical phenomenon, as described here in short by Wikipedia.
In statistics, regression toward the mean refers to the phenomenon that a variable that is extreme on its first measurement will tend to be closer to the center of the distribution on a later
measurement. To avoid making wrong inferences, the possibility of regression toward the mean must be considered when designing experiments and interpreting experimental, survey, and other
empirical data in the physical, life, behavioral and social sciences.
Essentially, every time you observe a result, the most likely outcome is that the next observation of that result will be closer to the mean (or average) of the population distribution. This means
that as a rule, everything and everybody regresses.
Believe it or not, all baseball fans regress players to the mean in their minds; you are using regression to the mean even if you don’t know it. Let’s illustrate this concept with an example.
Here are four players and their batting averages so far during the 2010 season:
Player A: .325
Player B: .324
Player C: .324
Player D: .323
These BA were all picked up in about 80-90 PA. Now, just knowing this information, what do you think these players will hit in their next 90 PA?
That's a pretty daunting task to try and predict. However, let me help you: over the last five years the National League average BA (these are all NL players) has
been around .260. The AL average is around .267. Call the major league average, without pitchers, around .263. Now what do you think?
Well, you do have some information about the hitters, but you have much more information about the population distribution of those hitters. We don’t have any way of differentiating between them, so
we assume they are all in the same population (major league hitters, for example). Based on this, we can use regression and say that we’d expect these hitters to hit around .265 in their next 90 PA
(there is a rigorous way to do the calculations, but I’m just guessing as an illustration).
What if I gave you more information? Here are those player’s career BA, alongside their career PA.
Player A: .267, 2379 PA
Player B: .275, 1566 PA
Player C: .323, 113 PA
Player D: .260, 603 PA
You can see that each of those players have slightly varying batting averages and widely varying playing times. That playing time gives us more of an idea of their true talent than what we knew from
their first 90 PA. We know the most about Player A, which means we would regress his numbers the least. We know the least about Player C, which means the mean would encompass most of our guess about
his next 90 PA. The other two guys are in between. Armed with this much information, we can now make better guesses. Player C is probably still around .264-.265, Player A closer to .266, for example.
Now, what if we included all of that player's history, including other descriptive statistics that would help us determine batting average, including stuff like Batting Average on Balls in Play
(BABIP), batted ball rates, and home run rates? What if we also weighted recent years more heavily to reflect age and changes in true talent that career numbers couldn’t take into account? Well, then
we would have a projection system. The ZiPS projection system does just that with its in-season projections available on FanGraphs. It combines historical data about a player and regression to a mean
tailored to that player’s skill to determine what we would expect from that player going forward. Here’s ZiPS batting average projections for those players:
Player A (Jayson Werth): .280
Player B (Ryan Doumit): .283
Player C (David Freese): .273
Player D (Colby Rasmus): .266
Taking all the historical data, properly weighting it, and regressing to the mean yielded these guesses. Note that this is still closer to the .263-ish league average than it is to their 90 PA
batting averages from this season so far.
Does this mean that players cannot improve? Of course not. Does this mean it is absolutely true that a player will always perform closer to the mean the next time out? Definitely not. Baseball is a
game of chance and skill, and that chance (or luck or random variation, pick your term) will always push some players up and others down. Furthermore, remember that regression to the mean implies
that you know what mean to regress to. Players can improve and change the mean to which they must be regressed. I don’t think anyone here thinks we should regress Albert Pujols’ power numbers to the
same mean as David Eckstein’s. It just means that our best guess for the future is still going to be closer to that mean than to anywhere else. Keep that in mind the next time you are waiting for a
player with questionable peripherals to “finally break out.” Regression to the mean is a powerful entity in the world of baseball.
Topics: Colby Rasmus, David Freese, Jayson Werth, Regression To The Mean, Ryan Doumit | {"url":"http://calltothepen.com/2010/05/01/saber-slant-5-everything-regresses/","timestamp":"2014-04-19T04:50:32Z","content_type":null,"content_length":"72168","record_id":"<urn:uuid:90279e00-8059-4c40-801c-70ef57939bba>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00077-ip-10-147-4-33.ec2.internal.warc.gz"} |
The sweet program
October 7th, 2011, 02:03 AM #1
Junior Member
Join Date
Oct 2011
The sweet program
I'm new to programming!
My problem:
n children numbered 1 to n are sitting in a circle. Starting at child 1, a sweet is passed. After m passes the child holding the sweet is eliminated. If child x gets eliminated, he gives the sweet to child x+1 and leaves the ring. That does not count as a pass. The children in the circle close ranks and the game continues with the child who was sitting after the eliminated child taking the sweet. Assume m is constant for each elimination.
Write a program that will determine which child would get the sweet in the end.
I'm totally confused, please help!!!!!!
How would I go about solving such a problem?
Re: The sweet program
Get a pencil and paper and work through a few examples. You will then see the process in action and will be able to write out (in English or whatever your first language is, but not code) the
sequence of actions to get from, say, 5 children to a winner. Now look at your sequence of actions to find the repetitions; if there are any, these are your loops. Now think about which data
structure allows items to be easily removed from it at any point.
Now you are ready to convert your sequence into code.
Posting code? Use code tags like this: [code]...Your code here...[/code]
Click here for examples of Java Code
Re: The sweet program
I'm totally confused, please help!!!!!!
How would I go about solving such a problem?
If you're familiar with a data structure called a circular linked list, it's very easy. The problem translates almost one-to-one onto such a data structure.
You put all children (or rather their numbers) into the nodes of a circular linked list. You also introduce a pointer which always points at the node of the child holding the sweet.
Now the algorithm: Advance the sweet-pointer M nodes forward, then advance it one node further and remove the previous node. Repeat this until there's just one node left, the winner of the
Implementing a circular linked list is easy but you really only need to know how it works because then you can quite easily use an ordinary array as if it were a circular linked list. This has
pros and cons. For example "advance the sweet pointer" can be done in just one arithmetic operation in the array case. On the other hand "remove the previous node" is more expensive than in the
linked list case.
Note that if M >= N you will be doing (at least one but potentially many) full cycles around the circle of children. That's not necessary so "advance M nodes" should really read "advance M modulo
N nodes".
Last edited by nuzzle; October 8th, 2011 at 06:45 PM.
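To make that concrete, here is a minimal sketch of the array variant (written in Python rather than Java purely for brevity; the logic carries over directly):

[code]
def last_child(n, m):
    # Children 1..n sit in a circle; the sweet starts at child 1.
    # After m passes the holder is eliminated and hands the sweet on
    # (which does not count as a pass).
    children = list(range(1, n + 1))
    holder = 0                                 # index of the sweet holder
    while len(children) > 1:
        holder = (holder + m) % len(children)  # m passes, modulo the circle
        children.pop(holder)                   # eliminate the holder
        holder %= len(children)                # sweet goes to the next child
    return children[0]

print(last_child(5, 2))   # 4
[/code]

The pop is what costs O(N) per elimination, giving the O(N*N) total discussed in the follow-up post below.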
Re: The sweet program
In addition to my previous post:
Both the array and the linked list approaches lead to an O(N*N) algorithm. I was thinking maybe that could be improved.
In both approaches all N children must be considered, so O(N) is a minimum. Then in the array approach, removing a child is an O(N) operation and the M advancement is O(1). In the linked list
approach it's the other way around. Child removal is O(1) whereas M advancement is O(N) (it's really O(M), so if M is small it tends towards O(1) but if M is big it tends towards O(N), so that's the
worst case).
The traditional way to strike a compromise between an array and a linked list is to use a deque. This would probably improve the actual complexity (for large N) but theoretically it would
still be O(N*N).
So is there a way to get a better theoretical complexity? I think so. I think O(N * log N) is possible. The idea is to find a data structure where both child removal and M advancement are O(log
N). This would give a total complexity of O(N * log N) (like, for example, a sort of N elements).
What would this data structure look like? It would be an array supported by a binary tree. I call it a Binary Tree Array because I think this idea may be novel. At least I haven't heard of it
To avoid confusing child as a participant in the game, with child as a child node in the binary tree, I'm going to call the former 'game participant' from now on.
The array holds the N game participant's identification numbers. When these are entered into the array the binary tree is built at the same time in an O(N) operation. The tree leaves hold the
array indexes. In addition each tree node holds the number of children. A leaf is considered to have one child (the array index if you will).
A game participant is never actually removed from the array. Instead the corresponding tree leaf is set to have no children (instead of one). This is an O(log N) operation on the binary tree.
The M advancement doesn't take place on the array itself. Instead it takes place in the binary tree. To skip M game participants becomes a question of going up a binary tree from a leaf and then
going down the tree again to a leaf M leaves away. This is also an O(log N) operation.
So the Binary Tree Array gives both O(log N) removals and O(log N) M advancements, which together result in an overall O(N * log N) algorithm.
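One concrete way to realize the idea is a Fenwick (binary indexed) tree instead of an explicit tree of nodes - the same principle, with the child counts stored over array prefixes, and both operations O(log N). A Python sketch:

[code]
def last_child_fast(n, m):
    size = 1
    while size < n + 1:
        size *= 2
    tree = [0] * (size + 1)          # tree[i] counts survivors in a prefix

    def add(i, v):                   # add v at position i
        while i <= size:
            tree[i] += v
            i += i & -i

    def kth(k):                      # position of the k-th survivor
        pos, mask = 0, size
        while mask:
            if pos + mask <= size and tree[pos + mask] < k:
                pos += mask
                k -= tree[pos]
            mask //= 2
        return pos + 1

    for i in range(1, n + 1):
        add(i, 1)
    alive, rank = n, 1               # rank of the holder among survivors
    while alive > 1:
        rank = (rank - 1 + m) % alive + 1    # advance m places
        add(kth(rank), -1)                   # eliminate the holder
        alive -= 1
        rank = (rank - 1) % alive + 1        # sweet moves to the next child
    return kth(1)

print(last_child_fast(5, 2))   # 4, agreeing with the O(N*N) version
[/code]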
I've posted a reference to this thread in the Algorithms & Data Structures forum.
Last edited by nuzzle; October 10th, 2011 at 10:37 PM.
May 2009 | {"url":"http://forums.codeguru.com/showthread.php?517046-The-sweet-program&p=2037048","timestamp":"2014-04-21T16:19:07Z","content_type":null,"content_length":"84463","record_id":"<urn:uuid:7623025d-dc10-466d-b347-29cf93c769ab>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00615-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kernel and Image
January 31st 2007, 07:13 PM #1
Nov 2006
Kernel and Image
Let V be the R-space of real polynomials of degree at most 3.
For p(X) in V, define T by [the formula was given as an image that has not survived].
I need to find a basis for each of the kernel ker(T) of T, and the image (or range) Im(T) of T (T is a linear operator on V).
Any ideas?
Let V be the R-space of real polynomials of degree at most 3.
For p(X) in V, define T by [the formula was given as an image that has not survived].
I need to find a basis for each of the kernel ker(T) of T, and the image (or range) Im(T) of T (T is a linear operator on V).
Any ideas?
After you find the standard matrix for the linear transformation, does it not make sense to say that the kernel of the map is the nullspace of the standard matrix and the range of the map is the column space of the standard matrix? However, what that standard matrix is, I did not find.
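Since the definition of T is lost, here is that workflow on a stand-in map, T(p) = p', using the basis {1, X, X^2, X^3} (a Python sketch):

import numpy as np
from numpy.linalg import matrix_rank

# Columns are the images of the basis vectors: 1 -> 0, X -> 1, X^2 -> 2X, X^3 -> 3X^2
A = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]])
print(matrix_rank(A))   # 3 = dim Im(T); the three nonzero columns give a basis
# The nullspace is spanned by (1, 0, 0, 0), so ker(T) = the constant polynomials.

Whatever the actual T is, the same two computations (column space and nullspace of the standard matrix) give the two bases asked for.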
New York City | {"url":"http://mathhelpforum.com/advanced-algebra/10965-kernel-image.html","timestamp":"2014-04-17T07:11:12Z","content_type":null,"content_length":"33262","record_id":"<urn:uuid:e996c2d6-fe5b-4af6-ba8c-93948d213965>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00565-ip-10-147-4-33.ec2.internal.warc.gz"} |
Word Problems I Got 99 Word Problems (don't worry, there aren't really that many) True or False
1. Thomas received a 50% increase in his allowance this year. He now receives 12 dollars per week. No more minimum wage! What was his weekly allowance last year? -> 8
2. Jenny has a jar for collecting spare change. So far she only has pennies, nickels, and dimes, but she dreams of owning a quarter one day. She has twice as many nickels as pennies, four more dimes than nickels, and nineteen coins total. How many nickels does she have? -> 10
3. The cost to rent a building for a dance party is $8 per hour plus $1.50 per person. If the Ballroom Dance Club spent $70.50 to put on a three-hour party, how many people were at the party? -> [answer missing in the source]
4. Mr. and Mrs. Smith get into their cars to go to work. Not the Mr. and Mrs. Smith from the movie. There are no car chases or gunfire in this word problem. They leave their house at exactly the same time. Mr. Smith heads west at 60 mph and Mrs. Smith heads east at 45 mph. How long does it take before they are 42 miles apart? Incidentally, this is their favorite distance to be in relation to one another. They've been married a long time. -> Too long
5. A rectangular box with a square base has volume 180. If one side of the base has length 6, what is the height of the box? -> 6
6. A triangle's second angle is 12° more than its first angle, and its third angle is twice the first angle. What is the second angle of the triangle? -> 54°
7. Leonardo was given a new phone on his last birthday. His parents had told him it was for emergencies only, but they are sure he has used it mainly to play Fruit Ninja. The plan they got costs $49.98 per month plus 10 cents per text message. If the bill for the first year Leonardo had his new phone was $612.96, how many text messages did Leonardo use per month, on average? -> 13.2
8. A batty old lady has a lot of cats and birds in her house. It's actually not the animals that qualify her as batty. She thinks Elvis is alive and living in her basement. See? We told you. There are a total of 252 legs in her house, including her own legs. Fortunately, she's not so crazy that any of those legs are detached human ones. The number of birds is 3 less than twice the number of cats. How many cats does the crazy old lady have? -> 32
9. Ashleigh is taking a class in which her grade is determined by her scores on four exams. Ashleigh has taken three exams so far, and has gotten scores of 92, 84, and 88. What does Ashleigh need to score on the last exam in order to average 90% in the class? -> 96
10. Janine baked 90 cookies, and now she needs to frost them...because they are not yet fattening enough on their own. The top of each cookie is 3 square inches. If a can of frosting can cover 20
square inches, how many cans of frosting does Janine need? -> 6 | {"url":"http://www.shmoop.com/word-problems/quiz-3-true-false.html","timestamp":"2014-04-19T17:26:48Z","content_type":null,"content_length":"39537","record_id":"<urn:uuid:8c3f2fdf-19ed-4848-be4b-85796c00c11e>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00199-ip-10-147-4-33.ec2.internal.warc.gz"} |
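(A quick way to judge statements like #2 is to just solve the system. A sketch in Python/SymPy, not part of the quiz itself:

from sympy import symbols, solve

p, n, d = symbols("p n d", positive=True)
print(solve([n - 2*p, d - (n + 4), p + n + d - 19], [p, n, d]))
# {p: 3, n: 6, d: 10} -- so "10" counts Jenny's dimes, not her nickels.)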
Bonsall Trigonometry Tutor
...I believe that it is not always the student's fault for their lack of understanding and/or confusion. Einstein once said that "You do not understand something completely if you cannot explain
it simply." With a classroom of 30+ students it can be difficult for a teacher to sufficiently explain ...
10 Subjects: including trigonometry, chemistry, calculus, algebra 1
...Even the students that breezed through Algebra 1 can get stumbled in Algebra 2 concepts just because there are so many things that have to come together in their brains before they can master
it. I enjoy the challenge of "getting all their ducks in the row" so to speak, so that they can then pro...
11 Subjects: including trigonometry, calculus, geometry, algebra 1
...Often just a few sessions are enough to help students to get back on track, give them confidence, reduce stress and improve their grades. I have a lot of patience and a true passion for
tutoring math. I usually tutor from my home office in Dana Point, but I am willing to drive to a more convenient location.
5 Subjects: including trigonometry, algebra 1, algebra 2, precalculus
I am a UCI honors graduate in mathematics with a minor in computer science. I started tutoring as a favor for a friend and have found that tutoring is one of the most rewarding experiences I can
have. Many of my students have gone from D's with no understanding to A's with the ability to peer tutor their classmates.
11 Subjects: including trigonometry, calculus, geometry, algebra 1
...It is important to me that the student understands the concept of each function and its role in logic solutions. I enjoy students that are eager and ready to learn. My pace of tutoring is
driven by what I think the student is absorbing.
20 Subjects: including trigonometry, calculus, geometry, accounting | {"url":"http://www.purplemath.com/Bonsall_Trigonometry_tutors.php","timestamp":"2014-04-21T02:29:15Z","content_type":null,"content_length":"24046","record_id":"<urn:uuid:13db9409-dce5-4bbe-8dd6-67b14d457423>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00631-ip-10-147-4-33.ec2.internal.warc.gz"} |
NYS 3rd Grade Math Word Match Part 4 of 5
Proportion _____ 1 The results of a division problem.
Rectangular Prism _____ 2 A prism with rectangular bases.
Rectangle _____ 3 Intersecting lines that form right angles.
Polygon _____ 4 A decimal whose digits repeat in groups of 1 or more.
Perpendicular Lines _____ 5 A three-dimensional figure whose base is a polygon and having triangular faces that meet at a common point over the base.
Perfect Square _____ 6 A drawing that is similar but either larger or smaller than the actual object.
Right Angle _____ 7 In a right triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the length of the legs.
Ratio _____ 8 A zero-dimensional figure, an exact location in space.
Point _____ 9 A comparison of 2 numbers by division.
Scale Drawing _____ 10 An equation that shows that 2 ratios are equivalent.
Quotient _____ 11 The chance that some event will happen.
Perimeter _____ 12 The answer to a multiplication problem.
Pythagorean _____ 13 The difference between the greatest number and the least number in a set of data.
Product _____ 14 A polygon having 4 sides.
Place Value _____ 15 A system of writing numbers in which the position of the digit determines its value.
Prime Number _____ 16 A number whose square root is a whole number.
Quadrilateral _____ 17 A whole number greater than 1 that has exactly 2 factors, 1 and itself.
Rational Number _____ 18 A part of a line that extends forever in only 1 direction.
Range _____ 19 A simple closed figure in a plane formed by 3 or more line segments.
Repeating Decimals _____ 20 A rational number is a number that is the ratio of 2 integers. All other real numbers are said to be irrational.
Probability _____ 21 The reciprocal of the number x is the number 1/x.
Ray _____ 22 The distance around a geometric figure.
Pyramid _____ 23 A quadrilateral with 4 equal angles.
Reciprocal _____ 24 An angle formed by 2 perpendicular lines; a 90 degree angle. | {"url":"http://www.armoredpenguin.com/wordmatch/Data/2011.02/1719/17191529.835.html","timestamp":"2014-04-16T04:13:57Z","content_type":null,"content_length":"18731","record_id":"<urn:uuid:efb6aede-3827-4fca-9dff-cd3e220b1cad>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
Energy Stored in a spring
Why is the energy stored in a spring 1/2 * distance * force? Isn't work just force * distance?
The full force is not applied for the full distance.
In the simple case of a constant force over a fixed distance you can just multiply force by distance. But in the case of a spring being compressed, the force is not constant.
You could approximate the answer by analyzing the situation in small steps... You compress the spring the first 1/10th of the distance using 1/10th of the full force, the next 1/10th of the distance
using 2/10th of the total force and so on. This approach would give you an answer that is pretty close.
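That stepwise idea is easy to try numerically. A sketch with an arbitrary spring constant and compression:

k, d, steps = 100.0, 0.5, 10      # spring constant (N/m), compression (m), steps
work = sum(k * (i * d / steps) * (d / steps) for i in range(1, steps + 1))
print(work)   # 13.75 with 10 steps; more steps converge to 0.5*k*d**2 = 12.5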
An exact answer can be obtained by using calculus and integrating instantaneous force over incremental distance. That answer turns out to be 1/2 * distance * maximum-force | {"url":"http://www.physicsforums.com/showthread.php?p=4215614","timestamp":"2014-04-16T04:17:41Z","content_type":null,"content_length":"25468","record_id":"<urn:uuid:96b65925-134f-4863-84aa-bc3705a88682>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00167-ip-10-147-4-33.ec2.internal.warc.gz"} |
[plt-scheme] log
From: Jos Koot (jos.koot at telefonica.net)
Date: Wed Apr 23 20:01:15 EDT 2008
After playing a litle with procedure log i found:
(1) (log positive-exact-integer) never returns +inf.0 (although some time may be required, especially during compilation, for exponents beyond one million)
(2) (log positive-exact-rational) may return +inf.0, even when log does not return +inf.0 for larger exact integers.
(3) Therefore I use (let ((x (inexact->exact x))) (- (log (numerator x)) (log (denominator x)))) for any real x.
(4) I ignore the fact that even the log may be too big for a floating point. Anyway, when that is the case log does not return before exhausting my patience, which is quite acceptable of course (I mean the not returning, not my impatience :)
(2) is there somewhere a solidly based (integer-log positive-exact-integer-n) function that returns the greatest exact natural number p such that (expt e p)<=n?
(3) An absolute error less than a few units would not be a problem. I can always make a few steps up or down afterwards.
In fact I use base 10 logs, but that's only a matter of scaling.
I am speaking about Welcome to DrScheme, version 3.99.0.23-svn22apr2008 [3m].
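(Editorial aside, not part of the original message: the numerator/denominator trick is easy to sketch outside Scheme too. In Python, whose math.log happens to accept arbitrarily large integers directly, it might look like the following; the name rational_log is invented for illustration.)

from fractions import Fraction
from math import log

def rational_log(x):
    # log(p/q) = log(p) - log(q), so the (possibly huge) ratio p/q
    # itself never has to be converted to a float
    x = Fraction(x)
    return log(x.numerator) - log(x.denominator)

# float(Fraction(10**1000, 3**500)) would overflow, but this works:
print(rational_log(Fraction(10**1000, 3**500)))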
Richard Elwes
27th February, 2014
If you shuffle a deck of cards perfectly eight times, you get back exactly to where you started. Here’s a video of the magician Adam West doing it:
In this post, we’ll see why this works. First let’s tighten up our terminology: a “perfect shuffle” here is really what magicians call a Faro Out-Shuffle: you split the deck in half, and then
interleave the two halves. To make things easier, let’s work with a deck of ten cards, initially in numerical order.
Step one: split 1,2,3,4,5 from 6,7,8,9,10.
Step two, interleave the two: 1,6,2,7,3,8,4,9,5,10.
Notice that the outermost cards (1 and 10) remain in place. (That’s why this is an out-shuffle — interleaving the other way produces an in-shuffle.)
At the same time, notice that the card initially in 4th position is now in 7th, and vice versa. We can indicate this in cycle notation by putting the two together in brackets: (4 7).
What about the rest? The card in second position to start with is now in third place, while the card from third position is now in fifth, the card from fifth is now in ninth,… Continuing this line of
thought produces the cycle (2 3 5 9 8 6).
So the shuffle is totally described by the collection of cycles (1) (10) (4 7) (2 3 5 9 8 6). (I’ve included the two trivial cycles of length one.)
Now, notice that performing any cycle of length two twice brings us back to where we started: in this case 4 goes to 7 and then back to 4. Similarly, performing a cycle of length 6 six times takes
us back to the starting position. In fact then, performing this ten-card shuffle six times brings everything back to where it started.
Ok. What about a full 52 card deck? Well, for cards initially in the top half of the deck (i.e. numbered 1-26), the card in \(n\)th position is moved to \((2n -1)\)th position after one shuffle,
(meaning \(1 \to 1, \ 2 \to 3,\ 3 \to 5, \ldots, 26 \to 51\)).
For cards in the second half (numbered 27-52), the card in \(n\)th position is moved to \((2n-52)\)th position (meaning \(27 \to 2, \ 28 \to 4, \ldots, 51 \to 50, \ 52 \to 52 \)).
Repeatedly applying the first rule, we can see that 2 goes to 3 which goes to 5 which goes to 9 which goes to 17 which goes to 33. Now applying the second rule, 33 goes to 14 which goes to 27 (by the
first rule) which goes back to 2 (by the second rule). Thus we have the cycle:
(2 3 5 9 17 33 14 27).
The same reasoning tells us the other cycles which make up the shuffle, and altogether they turn out to be
(2 3 5 9 17 33 14 27) (4 7 13 25 49 46 40 28) (6 11 21 41 30 8 15 29)
(10 19 37 22 43 34 16 31) (12 23 45 38 24 47 42 32) (20 39 26 51 50 48 44 36)
(18 35) (1) (52)
which is to say six cycles of length eight, one of length two, and the two trivial cycles of length 1. Each of these cycles will take us back to the starting position when applied eight times… as the
video above shows!
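(A brute-force check, my addition rather than the post's, for anyone who wants to verify these counts or attack the in-shuffle question that follows; the Python function names are made up for illustration.)

def out_shuffle(deck):
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    # interleave so the original top and bottom cards stay in place
    return [card for pair in zip(top, bottom) for card in pair]

def order(n):
    # number of perfect out-shuffles needed to restore an n-card deck
    start = list(range(n))
    deck, count = out_shuffle(start), 1
    while deck != start:
        deck, count = out_shuffle(deck), count + 1
    return count

print(order(10), order(52))  # 6 and 8, matching the cycle analysis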
Question: what if we do a Faro in-shuffle instead? This means interleaving the other way. On ten cards, after one shuffle the deck reads 6,1,7,2,8,3,9,4,10,5. How many applications of this rule are
needed to bring us back to the starting position in this case?
29th January, 2014
I’m teaching a third year Combinatorics module this term, which has led me to revisit some old friends: permutations and combinations. I thought I knew them well, but have already learned something new.
First a quick refresher course:
Combinations: say \(C(n,k)\) is the the number of subsets of size \(k\) from a collection of size \(n\). So if you select 3 pieces of fruit from a total of 6 (all different), and don’t care about the
order, then the number of selections you might make is \(C(6,3)\). The formula^[1] here is \(C(n,k) = \frac{n!}{k!(n-k)!}\) and \(C(6,3)=20\).
Permutations: If you do care about the order of selection (maybe you want to arrange your fruit into military columns so that “strawberry, apple, pear” is different from “apple, pear, strawberry”),
then the number you need is \(P(n,k)\), the number of permutations of \(k\) objects from \(n\). Its formula is \(P(n,k) = \frac{n!}{(n-k)!}\) and \(P(6,3)=120\).
Total numbers of combinations: A set of size \(n\) has \(2^n\) subsets in total. To say the same thing another way, \(C(n,0)+C(n,1)+…+C(n,n)= 2^n\). So the total number of collections of fruit you
could take from the box of 6 is \(2^6=64\) (this includes both the empty-plate option and the take-everything option).
All of the above I have known for many years. But I recently discovered something new (to me):
Total numbers of permutations: if you pick between \(0\) and \(n\) objects, in order, from a set of size \(n\), how many choices can you make? To ask the same thing another way, what is \(P(n,0)+P
(n,1)+…+P(n,n)\)? And the answer turns out to be \( \lfloor n! \cdot e \rfloor \), that is the number you get if you round \(n! \cdot e\) down to a whole number. So the number of military columns of
fruit (including the empty column) you could make from your six are \( \lfloor 6! \cdot e \rfloor = 1957\).
I think this is very cool, and the proof of this is not hard. It’s been written out very nicely here by Michael Lugo. (When I first saw this result I assumed that it would be a consequence of
Stirling’s approximation – but that’s not required.)
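(An added numerical check, not in the original post: the identity holds for every n ≥ 1, and is easy to confirm in Python.)

from math import e, factorial, floor

def total_permutations(n):
    # sum of P(n, k) = n!/(n-k)! over k = 0, 1, ..., n
    return sum(factorial(n) // factorial(n - k) for k in range(n + 1))

for n in range(1, 11):
    assert total_permutations(n) == floor(factorial(n) * e)

print(total_permutations(6))  # 1957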
[1] These formulae all use factorial \(! \) notation where \(7!=7\times 6 \times 5 \times 4 \times 3 \times 2 \times 1\)
18th January, 2014
There’s been a bit of a rumpus recently about this video from Numberphile which purports to show that \[1+2+3+\ldots = -\frac{1}{12}\]
It got posted on Slate with an accompanying blogpost by Phil Plait, which then got shot to pieces by various incensed bloggers, and Phil now has a much better follow-up post.
I’m not going to rehearse the story again – the short version is that the equation above is a really bad and misleading way of communicating a really interesting and surprising result. But I thought
I’d share another titbit which has a similar moral, namely:
Infinite series often behave in very weird ways.
This in turn has two corollaries, both amply illustrated in the story above:
Unexpected and cool things can happen.
but at the same time
It’s very easy to start talking nonsense if you’re not extremely careful.
The video is about the series \(1+2+3+4+5+\ldots \) This is what we call a divergent series, meaning that as you add up more and more terms, the series grows without limit. An example of a convergent
series is \(0.9+0.09+0.009+0.0009+\ldots\) Here, as you add up more and more terms, you get ever closer to the fixed value 1. This is the limit of the series. Convergent series are generally
friendlier beasts than divergent series, but the purpose of this post is to illustrate that even convergent series can do shocking things.
Humans are not used to infinite series. But we are used to adding up finite collections of numbers. If we have three numbers \(A,B,C\) we know that the order in which we add them up does not matter:
\(A+B+C=B+A+C=A+C+B\) etc. This sort of thinking is so natural and automatic that it is all too easy to transfer it to the realm of infinite series, where such rules may not hold – not even (necessarily)
for convergent series.
This post is about the following series: \[S: \ \ 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \frac{1}{7} - \frac{1}{8} + \frac{1}{9} \ldots\]
The first obvious question is: does \(S\) converge? And the answer is yes. It turns out to converge to a number near 0.69. (More precisely, but unexpectedly and coolly, the limit is^[1] the natural
logarithm of 2 or \( \ln 2 \). But you don’t have to know what that means to follow the rest of the post.)
Now, let’s reorder the elements of \(S\). Notice that the numbers on the bottom alternate between being odd and even. Let’s mix things up so that they go ‘odd even even odd even even’ instead, but
being careful to keep the sign of every element the same at it was before:
\[ 1 - \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{6} - \frac{1}{8} + \frac{1}{5} - \frac{1}{10} - \frac{1}{12} + \ldots \]
To reiterate, we have not added or removed any elements here, just reordered the elements of \(S\). But if you look carefully at this new series, you’ll see that the positive terms are all followed
by the negative of their halves – I’ll group these together in brackets to make it clearer:
\[ \left(1 - \frac{1}{2} \right)- \frac{1}{4} + \left( \frac{1}{3} - \frac{1}{6} \right) - \frac{1}{8} + \left( \frac{1}{5} - \frac{1}{10} \right) - \frac{1}{12} + \ldots \]
Now let’s compute the bits in the brackets and give a name to the new series:
\[ T: \ \ \frac{1}{2} - \frac{1}{4} + \frac{1}{6} - \frac{1}{8} + \frac{1}{10} - \frac{1}{12} + \ldots \]
But what’s this? If you compare a term of \(T\) to the term in the corresponding position in the original series \(S\), you’ll see that it’s exactly half. And indeed, the limit of \(T\) is exactly
half the limit of \(S\) – around 0.34 (more precisely, \(\frac{1}{2} \ln 2 \)).
So just by rearranging the terms of \(S\) we have completely changed its limit! That might seem surprising, but Riemann’s rearrangement theorem tells us that this is the tip of the iceberg. It is
actually possible to rearrange the terms of \(S\) so that it converges to any number you care to name – or so that the series diverges!
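(A quick numerical illustration, my addition rather than the original post's, comparing partial sums of the two orderings in Python:)

from math import log

def s_partial(n):
    # 1 - 1/2 + 1/3 - 1/4 + ... , first n terms
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def t_partial(n):
    # 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ... , first n groups of three terms
    return sum(1/(2*j - 1) - 1/(4*j - 2) - 1/(4*j) for j in range(1, n + 1))

print(s_partial(10**5), log(2))      # ~0.6931
print(t_partial(10**5), log(2) / 2)  # ~0.3466, same terms but half the limit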
So, be very careful when handling infinite series! Here endeth the lesson. But to continue with the mathematics a little: this happens because \(S\) is what is termed conditionally convergent, that
means that if you replace all its negative terms with their positives:
\[H: \ \ 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \frac{1}{9} \ldots\]
you get a series which doesn’t converge. \(H\) is called the harmonic series; that it diverges was one of the first truly surprising facts that mathematicians discovered about infinite series.
Absolutely convergent series (ones where the corresponding entirely positive series converges, as 0.9 + 0.09 + 0.009 +… does) are much better behaved, and Riemann’s theorem doesn’t apply there.
Finally, a shameless plug: All this material is covered in my book Maths 1001.
[1] This is the case \(x=1\) of the series expansion for \( \ln (1+x)\) which is given by the Mercator series: \(x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \ldots \)
17th October, 2013
For many years, maths lessons have run in roughly the same way: the teacher stands at the blackboard, giving a mini-lecture on some mathematical topic or technique, introducing the idea, outlining
the theory, and then running through an example or two. The students would sit patiently (or not), taking notes (or not), so that at the end of the day they would be able to tackle some exercises on
the same subject as homework… or not.
As the years have rolled by, the blackboard may have been replaced by a whiteboard, and then by a smartboard, but otherwise the formula has remained by and large the same: theory in the classroom
followed by exercises for homework.
The idea of “flipping” the classroom, is to reverse this process. The insight is that in the age of youtube, today’s students can perfectly well take in the lectury bit at home — with the added
advantage of being able to pause, rewind, and rewatch in their own time. This leaves lesson-time free for practice, providing the teacher with more time to go around talking to students individually
or in groups, meaning more opportunities to help those who are struggling or to supply extra challenges to those who need stretching.
I have no experience of this method myself — all the same it immediately appeals to me as a way in which technology may really be able to improve the teaching and learning experience, rather than
just adding bells and whistles. One person who is convinced by this new approach is Colin Hegarty, an old friend of mine from university, who I’m delighted to say has returned to the world of maths
after a few years in finance, and is now expertly flipping classrooms in North London.
Even if you don’t have the good fortune to be one of Mr Hegarty’s students, you can still peruse the 611 (!) videos that he and his colleague Brian Arnold have created for this purpose, all of which
are freely available on Hegartymaths, or on youtube. Judging by the 800,000 odd views their videos have gathered, it’s not only their own pupils who are benefiting from these experiments in flipping.
As a sample I’m embedding one in which Colin talks through the Chinese postman problem:
7th October, 2013
You can read my interview with Kevin Houston (or should that be Kevin’s interview with me?) on his blog.
13th September, 2013
Constructible Numbers
A sure route to mathematical fame is to resolve a problem that has stood open for centuries, defying the greatest minds of previous generations. In 1837, Pierre Wantzel’s seminal analysis of
constructible numbers was enough to settle not just one, but an entire slew of the most famous problems in the subject, namely those relating to ruler-and-compass constructions.
As with so much in the history of mathematics, the topic had its origins in the empire of ancient Greece. The geometers of that period were interested not only in contemplating shapes in the
abstract, but also in creating them physically. Initially, this was for artistic and architectural purposes, but later for the sheer challenge it posed. In time, mathematicians came to understand
that the obstacles they encountered in these ruler-and-compass constructions brought with them a great deal of mathematical insight. Nowhere was this more true than in the ancient enigma of squaring
the circle, and what that revealed about the number \(\pi\).
Classical problems
Greek geometers decided on a set of simple rules for building shapes, using only the simplest possible tools: a ruler and pair of compasses. The ruler is unmarked, so it can only be used for drawing
straight lines, not for measuring length (therefore these are sometimes called straight-edge-and-compass constructions). The compass is used to draw circles, but it may only be set to a length that
has already been constructed.
Today’s schoolchildren still learn how to use these devices to divide a segment of straight line into two equal halves and to bisect a given angle. These were two of the very first ruler-and-compass
constructions. A more sophisticated technique allows a line to be trisected, that is, divided into three equal parts. What of trisecting an angle, though? Various approximate methods were discovered,
which were accurate enough for most practical purposes, but no one could find a method which worked exactly. This proved a mystery, and gave the first hint that there was real depth beneath this
question. But what does it mean if one task can be carried out by ruler and compass and another cannot?
The most famous of the ruler-and-compass problems, and indeed one of the most celebrated questions in mathematics, is that of squaring the circle. The question is this: given a circle, is it possible
to create, by ruler and compass, a square which has exactly the same area? At the heart of this question lies the number \(\pi\) (see page 54). The problem ultimately reduces to this: given a line 1
unit long, is it possible to construct by ruler and compass another line exactly \(\pi\) units long?
Another classical problem was that of doubling the cube. This problem had its origins in a legend from around 430 BC. To overcome a terrible plague, the citizens of the island of Delos sought help
from the Oracle of Apollo. They were instructed to build a new altar exactly twice the size of the original. At first they thought it should be easy: it could be done by doubling the length of each
side. But that process leads to the volume of the altar increasing by a factor of 8 (since that is the number of smaller cubes that can fit inside the new one). To produce a cube whose volume is
double that of the original, the sides need to be increased by a factor of \( \sqrt[3]{2}\) (that is the cube root of 2, just as 2 is itself the cube root of 8). The question of doubling the cube
therefore reduces to this: given a line segment 1 unit long, is it possible to construct another exactly \( \sqrt[3]{2}\) units long?
Wantzel’s deconstruction
Working in the turbulent setting of France in the early 19th century, Pierre Wantzel turned these ancient questions over in his mind. He recognized that the form of many ruler-and-compass questions
is the same. The key to them was this: given a line 1 unit long, which other lengths can be constructed? And which cannot? If a line of length \(x\) can be constructed, then Wantzel deemed \(x\) a
constructible number. Setting aside the geometrical origins of these problems, he devoted himself to studying the algebra of constructible numbers. Some things were obvious: for example, if \(a\) and
\(b\) are constructible, then so must be \(a + b\), \(a - b\), \(a \times b\), and \(a \div b\). But these operations do not exhaust the range of constructible numbers; Wantzel realized that it is
also possible to construct square roots, such as \(\sqrt{a}\).
His great triumph came in 1837, when he showed that everything constructible by ruler and compass must boil down to some combination of addition, subtraction, multiplication, division and square
roots. Since \(\sqrt[3]{2}\) is a cube root, and cannot be obtained via these algebraic operations, it followed immediately that the Delians’ ambition to double the cube was unattainable. A similar
line of thought revealed the impossibility of trisecting an angle.
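(In modern algebraic language, a standard gloss rather than a quotation from the book: a constructible number \(x\) sits at the top of a tower of quadratic field extensions, so its degree \([\mathbb{Q}(x):\mathbb{Q}]\) over the rationals must be a power of 2. But \(t^3 - 2\) is irreducible over \(\mathbb{Q}\), so \(\sqrt[3]{2}\) has degree 3, which is not a power of 2; this is precisely why doubling the cube is impossible.)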
As for the greatest problem of all, squaring the circle, the final piece didn’t fall into place until 1882, when Ferdinand von Lindemann proved that \(\pi\) is a transcendental number (see page 197).
Then Wantzel’s work immediately implied the non-constructibility of \(\pi\), and the impossibility of squaring the circle was finally established.
12th September, 2013
I’m pleased to present a new book:
Maths in 100 Key Breakthroughs
is published by Quercus and is now available as a softback or e-book. You can buy it from the publisher, or in the usual other places.
As the title suggests, its hundred chapters, ordered chronologically, each deal with a major mathematical development (e.g. Aristotle’s analysis of logical syllogisms circa 350BC, the discovery of
transcendental numbers in 1844, and the creation of Weaire-Phelan foam in 1993). My hope is that it should be accessible, attractive, and entertaining to people with little or no background in the
subject – jargon and technical notation are kept to a minimum, and each chapter is accompanied by a beautiful full-page colour illustration.
My major concern was to avoid wrenching these breakthroughs out of context and artificially presenting them as stand-alone events. After all, mathematicians typically make advances by contemplating
the insights of previous generations and answering questions posed by earlier thinkers. Without Kelvin’s conjecture (and perhaps without the work of Pappus and Thomas Hales on related geometrical
questions) the discovery of Weaire-Phelan foam would have been less exciting. Equally, it often takes time and further insight for the significance of a breakthrough to become apparent: it was some
years after their initial discovery that the deep importance of transcendental numbers was recognised.
So I hope that the book not only presents some wonderful discoveries, but also tells the back-stories, gives some sense of what the characters involved thought they were up to, and discuss why their
work matters to us today.
14th August, 2013
A few weeks ago, I was excited to receive correspondence from a certain Kenneth Perko. The tale of the Perko pair is a wonderful mathematical story, and one I have told on numerous occasions, so let me do so again:
The Perko Pair I: the story so far
In the late 19th century, mathematicians and physicists started producing tables of knots. The idea here is that some knots are genuinely different from each other, while others can be deformed to
match each other without cutting or gluing, making them essentially the same thing from a topologist’s point of view. The trouble is whether or not two knots are really the same is extremely hard to
tell on first sight…. as we shall see.
The tables begin with the unknot, the only knot with no crossings. Then comes the trefoil or overhand knot, which (so long as we don’t distinguish between it and its mirror image) is the only knot
with three crossings. There’s similarly one knot with four crossings, then two with five, and so on. In the late 19th century Peter Guthrie Tait and Charles Little got as far as listing the knots
with 10 crossings.
However the early knot tabulators (unsurprisingly) made a few errors, and it was not until 1976 that Dale Rolfsen put together a comprehensive list of the knots with up to 10 crossings, based on
earlier efforts by John Conway, in turn building on the work of James Alexander in the 1920s. Of the ten-crossing knots they counted 166 separate varieties. (Today’s mathematicians have made it as
far as 16 crossings.)
The surprise was that Rolfsen and Conway had also made a mistake! Even armed with 20th century techniques of algebraic topology, a duplication had slipped through. In 1973^[1] Kenneth Perko had been
studying a 19th century table of Little, and had realised that two ten-crossing knots (subsequently labelled by Rolfsen as 10[161] and 10[162]) were actually the same thing in disguise.
This story has a very clear moral: telling knots apart is difficult. Really difficult. Even after 75 years of brainpower, mathematicians were still coming badly unstuck, and who knows how long it
might have taken without Perko’s alertness. And of course, this just for knots with a paltry ten crossings. Imagine trying to decipher knots with thousands…
Now for the sequel:
The Perko Pair II: Perko strikes again
The above story has been told countless times, and is usually accompanied by a picture of the offending pair of knots, something like this:
The trouble is that this picture is itself wrong!
This error has infected, most likely among many other places, Wolfram Mathworld and Mathematics 1001 by myself. And guess who pointed this error out…
The explanation of the mistake is that two updated versions of Rolfsen’s 1976 table are now in circulation. In both, 10[162] has been deleted as it should be. But one version (occurring for instance
in the latest editions of his book) keeps his original numbering up to 10[166] with a space between 10[161] and 10[163]. In the other, see here for example, Rolfsen’s 10[163] has been renumbered as
10[162], and 10[164] as 10[163], etc., thus counting up to 10[165].
So the knot numbered 10[162] in the picture above was actually Rolfsen’s original 10[163], and thus not equivalent to 10[161]! For the avoidance of doubt, here is the real Perko pair, with drawings
provided by the man himself. Accept no imitations!
While we’re at it, let’s also put paid to the idea that Ken is an ‘amateur’ mathematician. Before going on to a career in law, he says “at Princeton I studied under the world’s top knot theory
topologists (Fox, Milnor, Neuwirth, Stallings, Trotter and Tucker)”. I apologise for suggesting otherwise. At least I didn’t pronounce him dead unlike certain other popular maths authors… Also, he
adds “That stuff about rope on the living room floor is pure internet nonsense. I did it with diagrams on a yellow legal pad. Ropes wouldn’t work anyway since the two knots are non-isotopic mirror
images of each other.”
[1] Note the corrected date, usually given as 1974.
28th July, 2013
I have just heard the very sad news that Eric Jaligot has died.
I did not know Eric well, although our paths crossed several times and we once collaborated in a piece of work [pdf] on the model theory of groups, an area in which Eric was a leading expert. He was
knowledgeable, thoughtful, kind and generous with his time, and often to be seen with a wry smile on his face. I’m sure he will be sorely missed by his many friends within the model-theory community
and beyond.
A tribute page is hosted at the Institut Camille Jordan in Lyon, where Eric was based.
10th July, 2013
Here’s a little probability exercise. My wife and I are having some things shipped to the UK from Japan, on board the container ship MOL Comfort. On 17th June, the ship broke clean in half.
pictures for more details.)
The crew escaped, and for the next 10 days, the two halves of the ship drifted apart in the Arabian Sea, each laden with containers.
On 27th June, the rear half sank with all its cargo, in 4km of water.
Meanwhile the front half began to be towed (backwards) towards Oman.
Since 6th July, the front section has been on fire.
The salvage company/coastguard estimate that approximately 90% of the front half’s cargo has already been burnt. Further, because of the possible presence of dangerous chemicals among the cargo,
along with large quantities of oil within the hull, they fear the risk of explosion.
Estimate the probability that our stuff is:
1. at the bottom of the ocean
2. on fire
3. unharmed
Assuming that our stuff has survived so far, estimate the probability that it is:
1. going to blow up
2. going to sink
3. going to reach us safe and sound
Update: did you guess right?
The fore half continued burning until its balance was so out of kilter that it sank. It didn’t explode.
Volume 04.02
Table of Contents
C0029_C0029M-09 Test Method for Bulk Density ("Unit Weight") and Voids in Aggregate
C0031_C0031M-10 Practice for Making and Curing Concrete Test Specimens in the Field
C0033_C0033M-11 Specification for Concrete Aggregates
C0039_C0039M-10 Test Method for Compressive Strength of Cylindrical Concrete Specimens
C0040_C0040M-11 Test Method for Organic Impurities in Fine Aggregates for Concrete
C0042_C0042M-10A Test Method for Obtaining and Testing Drilled Cores and Sawed Beams of Concrete
C0070-06 Test Method for Surface Moisture in Fine Aggregate
C0078_C0078M-10 Test Method for Flexural Strength of Concrete (Using Simple Beam with Third-Point Loading)
C0087_C0087M-10 Test Method for Effect of Organic Impurities in Fine Aggregate on Strength of Mortar
C0088-05 Test Method for Soundness of Aggregates by Use of Sodium Sulfate or Magnesium Sulfate
C0094_C0094M-11A Specification for Ready-Mixed Concrete
C0117-04 Test Method for Materials Finer than 75-µm (No. 200) Sieve in Mineral Aggregates by Washing
C0123-04 Test Method for Lightweight Particles in Aggregate
C0125-11 Terminology Relating to Concrete and Concrete Aggregates
C0127-07 Test Method for Density, Relative Density (Specific Gravity), and Absorption of Coarse Aggregate
C0128-07A Test Method for Density, Relative Density (Specific Gravity), and Absorption of Fine Aggregate
C0131-06 Test Method for Resistance to Degradation of Small-Size Coarse Aggregate by Abrasion and Impact in the Los Angeles Machine
C0136-06 Test Method for Sieve Analysis of Fine and Coarse Aggregates
C0138_C0138M-10B Test Method for Density (Unit Weight), Yield, and Air Content (Gravimetric) of Concrete
C0142_C0142M-10 Test Method for Clay Lumps and Friable Particles in Aggregates
C0143_C0143M-10A Test Method for Slump of Hydraulic-Cement Concrete
C0156-11 Test Method for Water Loss [from a Mortar Specimen] Through Liquid Membrane-Forming Curing Compounds for Concrete
C0157_C0157M-08 Test Method for Length Change of Hardened Hydraulic-Cement Mortar and Concrete
C0171-07 Specification for Sheet Materials for Curing Concrete
C0172_C0172M-10 Practice for Sampling Freshly Mixed Concrete
C0173_C0173M-10B Test Method for Air Content of Freshly Mixed Concrete by the Volumetric Method
C0174_C0174M-06 Test Method for Measuring Thickness of Concrete Elements Using Drilled Concrete Cores
C0192_C0192M-07 Practice for Making and Curing Concrete Test Specimens in the Laboratory
C0215-08 Test Method for Fundamental Transverse, Longitudinal, and Torsional Resonant Frequencies of Concrete Specimens
C0227-10 Test Method for Potential Alkali Reactivity of Cement-Aggregate Combinations (Mortar-Bar Method)
C0231_C0231M-10 Test Method for Air Content of Freshly Mixed Concrete by the Pressure Method
C0232_C0232M-09 Test Methods for Bleeding of Concrete
C0233_C0233M-10A Test Method for Air-Entraining Admixtures for Concrete
C0260_C0260M-10A Specification for Air-Entraining Admixtures for Concrete
C0289-07 Test Method for Potential Alkali-Silica Reactivity of Aggregates (Chemical Method)
C0293_C0293M-10 Test Method for Flexural Strength of Concrete (Using Simple Beam With Center-Point Loading)
C0294-05 Descriptive Nomenclature for Constituents of Concrete Aggregates
C0295-08 Guide for Petrographic Examination of Aggregates for Concrete
C0309-11 Specification for Liquid Membrane-Forming Compounds for Curing Concrete
C0311-11A Test Methods for Sampling and Testing Fly Ash or Natural Pozzolans for Use in Portland-Cement Concrete
C0330_C0330M-09 Specification for Lightweight Aggregates for Structural Concrete
C0331_C0331M-10 Specification for Lightweight Aggregates for Concrete Masonry Units
C0332-09 Specification for Lightweight Aggregates for Insulating Concrete
C0341_C0341M-06 Practice for Length Change of Cast, Drilled, or Sawed Specimens of Hydraulic-Cement Mortar and Concrete
C0387_C0387M-11 Specification for Packaged, Dry, Combined Materials for Mortar and Concrete
C0403_C0403M-08 Test Method for Time of Setting of Concrete Mixtures by Penetration Resistance
C0418-05 Test Method for Abrasion Resistance of Concrete by Sandblasting
C0441-05 Test Method for Effectiveness of Pozzolans or Ground Blast-Furnace Slag in Preventing Excessive Expansion of Concrete Due to the Alkali-Silica Reaction
C0457_C0457M-10A Test Method for Microscopical Determination of Parameters of the Air-Void System in Hardened Concrete
C0469_C0469M-10 Test Method for Static Modulus of Elasticity and Poisson's Ratio of Concrete in Compression
C0470_C0470M-09 Specification for Molds for Forming Concrete Test Cylinders Vertically
C0490_C0490M-10 Practice for Use of Apparatus for the Determination of Length Change of Hardened Cement Paste, Mortar, and Concrete
C0494_C0494M-10A Specification for Chemical Admixtures for Concrete
C0495-07 Test Method for Compressive Strength of Lightweight Insulating Concrete
C0496_C0496M-04E01 Test Method for Splitting Tensile Strength of Cylindrical Concrete Specimens
C0511-09 Specification for Mixing Rooms, Moist Cabinets, Moist Rooms, and Water Storage Tanks Used in the Testing of Hydraulic Cements and Concretes
C0512_C0512M-10 Test Method for Creep of Concrete in Compression
C0535-09 Test Method for Resistance to Degradation of Large-Size Coarse Aggregate by Abrasion and Impact in the Los Angeles Machine
C0566-97R04 Test Method for Total Evaporable Moisture Content of Aggregate by Drying
C0567-05A Test Method for Determining Density of Structural Lightweight Concrete
C0586-05 Test Method for Potential Alkali Reactivity of Carbonate Rocks as Concrete Aggregates (Rock-Cylinder Method)
C0597-09 Test Method for Pulse Velocity Through Concrete
C0617-10 Practice for Capping Cylindrical Concrete Specimens
C0618-08A Specification for Coal Fly Ash and Raw or Calcined Natural Pozzolan for Use in Concrete
C0637-09 Specification for Aggregates for Radiation-Shielding Concrete
C0638-09 Descriptive Nomenclature of Constituents of Aggregates for Radiation-Shielding Concrete
C0641-09 Test Method for Iron Staining Materials in Lightweight Concrete Aggregates
C0642-06 Test Method for Density, Absorption, and Voids in Hardened Concrete
C0666_C0666M-03R08 Test Method for Resistance of Concrete to Rapid Freezing and Thawing
C0670-10 Practice for Preparing Precision and Bias Statements for Test Methods for Construction Materials
C0672_C0672M-03 Test Method for Scaling Resistance of Concrete Surfaces Exposed to Deicing Chemicals
C0684-99R03 Test Method for Making, Accelerated Curing, and Testing Concrete Compression Test Specimens
C0685_C0685M-10 Specification for Concrete Made by Volumetric Batching and Continuous Mixing
C0702-98R03 Practice for Reducing Samples of Aggregate to Testing Size
C0779_C0779M-05R10 Test Method for Abrasion Resistance of Horizontal Concrete Surfaces
C0796-04 Test Method for Foaming Agents for Use in Producing Cellular Concrete Using Preformed Foam
C0802-09A Practice for Conducting an Interlaboratory Test Program to Determine the Precision of Test Methods for Construction Materials
C0803_C0803M-03R10 Test Method for Penetration Resistance of Hardened Concrete
C0805_C0805M-08 Test Method for Rebound Number of Hardened Concrete
C0823_C0823M-07 Practice for Examination and Sampling of Hardened Concrete in Constructions
C0827_C0827M-10 Test Method for Change in Height at Early Ages of Cylindrical Specimens of Cementitious Mixtures
C0856-11 Practice for Petrographic Examination of Hardened Concrete
C0869-91R06 Specification for Foaming Agents Used in Making Preformed Foam for Cellular Concrete
C0873_C0873M-10A Test Method for Compressive Strength of Concrete Cylinders Cast in Place in Cylindrical Molds
C0876-09 Test Method for Corrosion Potentials of Uncoated Reinforcing Steel in Concrete
C0878_C0878M-09 Test Method for Restrained Expansion of Shrinkage-Compensating Concrete
C0881_C0881M-10 Specification for Epoxy-Resin-Base Bonding Systems for Concrete
C0882_C0882M-05E01 Test Method for Bond Strength of Epoxy-Resin Systems Used With Concrete By Slant Shear
C0884_C0884M-98R10 Test Method for Thermal Compatibility Between Concrete and an Epoxy-Resin Overlay
C0900-06 Test Method for Pullout Strength of Hardened Concrete
C0918_C0918M-07 Test Method for Measuring Early-Age Compressive Strength and Projecting Later-Age Strength
C0928_C0928M-09 Specification for Packaged, Dry, Rapid-Hardening Cementitious Materials for Concrete Repairs
C0937-10 Specification for Grout Fluidifier for Preplaced-Aggregate Concrete
C0938-10 Practice for Proportioning Grout Mixtures for Preplaced-Aggregate Concrete
C0939-10 Test Method for Flow of Grout for Preplaced-Aggregate Concrete (Flow Cone Method)
C0940-10A Test Method for Expansion and Bleeding of Freshly Mixed Grouts for Preplaced-Aggregate Concrete in the Laboratory
C0941-10 Test Method for Water Retentivity of Grout Mixtures for Preplaced-Aggregate Concrete in the Laboratory
C0942-10 Test Method for Compressive Strength of Grouts for Preplaced-Aggregate Concrete in the Laboratory
C0943-10 Practice for Making Test Cylinders and Prisms for Determining Strength and Density of Preplaced-Aggregate Concrete in the Laboratory
C0944_C0944M-99R05E01 Test Method for Abrasion Resistance of Concrete or Mortar Surfaces by the Rotating-Cutter Method
C0953-10 Test Method for Time of Setting of Grouts for Preplaced-Aggregate Concrete in the Laboratory
C0979_C0979M-10 Specification for Pigments for Integrally Colored Concrete
C0989-10 Specification for Slag Cement for Use in Concrete and Mortars
C1017_C1017M-07 Specification for Chemical Admixtures for Use in Producing Flowing Concrete
C1040_C1040M-08 Test Methods for In-Place Density of Unhardened and Hardened Concrete, Including Roller Compacted Concrete, By Nuclear Methods
C1059_C1059M-99R08 Specification for Latex Agents for Bonding Fresh To Hardened Concrete
C1064_C1064M-08 Test Method for Temperature of Freshly Mixed Hydraulic-Cement Concrete
C1067-00R07 Practice for Conducting A Ruggedness or Screening Program for Test Methods for Construction Materials
C1073-97AR03 Test Method for Hydraulic Activity of Ground Slag by Reaction with Alkali
C1074-11 Practice for Estimating Concrete Strength by the Maturity Method
C1077-11A Practice for Agencies Testing Concrete and Concrete Aggregates for Use in Construction and Criteria for Testing Agency Evaluation
C1084-10 Test Method for Portland-Cement Content of Hardened Hydraulic-Cement Concrete
C1090-10 Test Method for Measuring Changes in Height of Cylindrical Specimens of Hydraulic-Cement Grout
C1105-08A Test Method for Length Change of Concrete Due to Alkali-Carbonate Rock Reaction
C1107_C1107M-11 Specification for Packaged Dry, Hydraulic-Cement Grout (Nonshrink)
C1116_C1116M-10A Specification for Fiber-Reinforced Concrete
C1138M-05R10E01 Test Method for Abrasion Resistance of Concrete (Underwater Method)
C1140-03A Practice for Preparing and Testing Specimens from Shotcrete Test Panels
C1141_C1141M-08 Specification for Admixtures for Shotcrete
C1152_C1152M-04E01 Test Method for Acid-Soluble Chloride in Mortar and Concrete
C1170_C1170M-08 Test Method for Determining Consistency and Density of Roller-Compacted Concrete Using a Vibrating Table
C1176_C1176M-08 Practice for Making Roller-Compacted Concrete in Cylinder Molds Using a Vibrating Table
C1202-10 Test Method for Electrical Indication of Concrete's Ability to Resist Chloride Ion Penetration
C1218_C1218M-99R08 Test Method for Water-Soluble Chloride in Mortar and Concrete
C1231_C1231M-10A Practice for Use of Unbonded Caps in Determination of Compressive Strength of Hardened Concrete Cylinders
C1240-10A Specification for Silica Fume Used in Cementitious Mixtures
C1245_C1245M-11 Test Method for Determining Bond Strength Between Hardened Roller Compacted Concrete and Other Hardened Cementitious Mixtures (Point Load Test)
C1252-06 Test Methods for Uncompacted Void Content of Fine Aggregate (as Influenced by Particle Shape, Surface Texture, and Grading)
C1260-07 Test Method for Potential Alkali Reactivity of Aggregates (Mortar-Bar Method)
C1293-08B Test Method for Determination of Length Change of Concrete Due to Alkali-Silica Reaction
C1315-11 Specification for Liquid Membrane-Forming Compounds Having Special Properties for Curing and Sealing Concrete
C1362-09 Test Method for Flow of Freshly Mixed Hydraulic-Cement Concrete
C1383-04R10 Test Method for Measuring the P-Wave Speed and the Thickness of Concrete Plates Using the Impact-Echo Method
C1385_C1385M-10 Practice for Sampling Materials for Shotcrete
C1399_C1399M-10 Test Method for Obtaining Average Residual-Strength of Fiber-Reinforced Concrete
C1435_C1435M-08 Practice for Molding Roller-Compacted Concrete in Cylinder Molds Using a Vibrating Hammer
C1436-08 Specification for Materials for Shotcrete
C1438-11 Specification for Latex and Powder Polymer Modifiers in Hydraulic Cement Concrete and Mortar
C1439-08A Test Methods for Evaluating Polymer Modifiers in Mortar and Concrete
C1451-05 Practice for Determining Uniformity of Ingredients of Concrete From a Single Source
C1480_C1480M-07 Specification for Packaged, Pre-Blended, Dry, Combined Materials for Use in Wet or Dry Shotcrete Application
C1524-02AR10 Test Method for Water-Extractable Chloride in Aggregate (Soxhlet Method)
C1542_C1542M-02R10 Test Method for Measuring Length of Concrete Cores
C1543-10A Test Method for Determining the Penetration of Chloride Ion into Concrete by Ponding
C1550-10A Test Method for Flexural Toughness of Fiber Reinforced Concrete (Using Centrally Loaded Round Panel)
C1556-04 Test Method for Determining the Apparent Chloride Diffusion Coefficient of Cementitious Mixtures by Bulk Diffusion
C1567-08 Test Method for Determining the Potential Alkali-Silica Reactivity of Combinations of Cementitious Materials and Aggregate (Accelerated Mortar-Bar Method)
C1579-06 Test Method for Evaluating Plastic Shrinkage Cracking of Restrained Fiber Reinforced Concrete (Using a Steel Form Insert)
C1580-09E01 Test Method for Water-Soluble Sulfate in Soil
C1581_C1581M-09A Test Method for Determining Age at Cracking and Induced Tensile Stress Characteristics of Mortar and Concrete under Restrained Shrinkage
C1582_C1582M-04 Specification for Admixtures to Inhibit Chloride-Induced Corrosion of Reinforcing Steel in Concrete
C1583_C1583M-04E01 Test Method for Tensile Strength of Concrete Surfaces and the Bond Strength or Tensile Strength of Concrete Repair and Overlay Materials by Direct Tension (Pull-off Method)
C1585-04E01 Test Method for Measurement of Rate of Absorption of Water by Hydraulic-Cement Concretes
C1602_C1602M-06 Specification for Mixing Water Used in the Production of Hydraulic Cement Concrete
C1603-10 Test Method for Measurement of Solids in Water
C1604_C1604M-05 Test Method for Obtaining and Testing Drilled Cores of Shotcrete
C1609_C1609M-10 Test Method for Flexural Performance of Fiber-Reinforced Concrete (Using Beam With Third-Point Loading)
C1610_C1610M-10 Test Method for Static Segregation of Self-Consolidating Concrete Using Column Technique
C1611_C1611M-09BE01 Test Method for Slump Flow of Self-Consolidating Concrete
C1621_C1621M-09B Test Method for Passing Ability of Self-Consolidating Concrete by J-Ring
C1622_C1622M-10 Specification for Cold-Weather Admixture Systems
C1646_C1646M-08A Practice for Making and Curing Test Specimens for Evaluating Resistance of Coarse Aggregate to Freezing and Thawing in Air-Entrained Concrete
C1679-09 Practice for Measuring Hydration Kinetics of Hydraulic Cementitious Mixtures Using Isothermal Calorimetry
C1688_C1688M-10A Test Method for Density and Void Content of Freshly Mixed Pervious Concrete
C1697-10 Specification for Blended Supplementary Cementitious Materials
C1698-09 Test Method for Autogenous Strain of Cement Paste and Mortar
C1701_C1701M-09 Test Method for Infiltration Rate of In Place Pervious Concrete
C1712-09 Test Method for Rapid Assessment of Static Segregation Resistance of Self-Consolidating Concrete Using Penetration Test
C1723-10 Guide for Examination of Hardened Concrete Using Scanning Electron Microscopy
C1740-10 Practice for Evaluating the Condition of Concrete Plates Using the Impulse-Response Method
C1741-11 Test Method for Bleed Stability of Cementitious Post-Tensioning Tendon Grout
E0329-11A Specification for Agencies Engaged in Construction Inspection, Testing, or Special Inspection | {"url":"http://www.astm.org/BOOKSTORE/BOS/TOCS_2011/04.02.html","timestamp":"2014-04-18T08:27:04Z","content_type":null,"content_length":"23347","record_id":"<urn:uuid:de832f96-9f68-424d-9193-ea624a21d4df>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00592-ip-10-147-4-33.ec2.internal.warc.gz"} |
NYS 3rd Grade Math Word Match Part 4 of 5
Proportion _____ 1 The results of a division problem.
Rectangular Prism _____ 2 A prism with rectangular bases.
Rectangle _____ 3 Intersecting lines that form right angles.
Polygon _____ 4 A decimal whose digits repeat in groups of 1 or more.
Perpendicular Lines _____ 5 A three-dimensional figure whose base is a polygon and having triangular faces that meet at a common point over the base.
Perfect Square _____ 6 A drawing that is similar but either larger or smaller than the actual object.
Right Angle _____ 7 In a right triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the length of the legs.
Ratio _____ 8 A zero-dimensional figure, an exact location in space.
Point _____ 9 A comparison of 2 numbers by division.
Scale Drawing _____ 10 An equation that shows that 2 ratios are equivalent.
Quotient _____ 11 The chance that some event will happen.
Perimeter _____ 12 The answer to a multiplication problem.
Pythagorean _____ 13 The difference between the greatest number and the least number in a set of data.
Product _____ 14 A polygon having 4 sides.
Place Value _____ 15 A system of writing numbers in which the position of the digit determines its value.
Prime Number _____ 16 A number whose square root is a whole number.
Quadrilateral _____ 17 A whole number greater than 1 that has exactly 2 factors, 1 and itself.
Rational Number _____ 18 A part of a line that extends forever in only 1 direction.
Range _____ 19 A simple closed figure in a plane formed by 3 or more line segments.
Repeating Decimals _____ 20 A rational number is a number that is the ratio of 2 integers. All other real numbers are said to be irrational.
Probability _____ 21 The reciprocal of the number x is the number 1/x.
Ray _____ 22 The distance around a geometric figure.
Pyramid _____ 23 A quadrilateral with 4 equal angles.
Reciprocal _____ 24 An angle formed by 2 perpendicular lines; a 90 degree angle. | {"url":"http://www.armoredpenguin.com/wordmatch/Data/2011.02/1719/17191529.835.html","timestamp":"2014-04-16T04:13:57Z","content_type":null,"content_length":"18731","record_id":"<urn:uuid:efb6aede-3827-4fca-9dff-cd3e220b1cad>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
What “sufficient assumption” means
Yesterday, I offered a definition of the word “assumption” using a very simplistic mathematical example. Today, I’m going to dig a bit deeper into the Assumption category by using another
super-simple bit of math. Don’t panic! If you passed third grade, you’ve seen this math before.
Many students struggle with assumption questions because they don’t understand that there are two very different types of assumptions. The purpose of this post is to start teaching you the
The previous post offered an example of an assumption that was both sufficient and necessary. Today I am going to talk about just one of those types, the “Sufficient Assumption.” So consider the
following argument:
Premise: Anything times zero equals zero. Conclusion: Therefore A times B equals zero.
Question: ”Which one of the following, if true, would allow the conclusion to be properly inferred?” Or, stated another way, “Which one of the following, if assumed, would justify the argument’s
conclusion?” Both of these are asking for sufficient assumptions. (You might want to memorize the wording of those questions, so that you can differentiate a sufficient assumption question from a
necessary assumption question.)
This question is asking you to prove the argument’s conclusion. In order to prove a conclusion on the LSAT, the conclusion of the argument must be connected, with no gaps, to the evidence offered. So
we need an answer that connects the evidence “anything times zero is zero” to the conclusion “A times B is zero.”
It’s pretty simple. The answer must contain one of the following:
“A equals zero.” If it’s true that A is zero, and if it’s true that anything times zero equals zero, then no matter what B is, the conclusion “A times B equals zero” would be proven correct. And
proof is what we’re looking for on a sufficient assumption question.
“B equals 0” would be just as good, because no matter what A is, the conclusion “A times B equals zero” would be proven correct.
Here’s the really interesting part (if you’re a nerd like me, which I hope you are). While “A equals zero” and “B equals zero” are each sufficient to prove the conclusion correct, neither of these
statements, independently, is necessary in order for the argument to possibly make sense. A could be 1,000,000, and the conclusion “A times B equals zero” could still be conceivable (if B equals
zero). Likewise, B could be 1,000,000 and the conclusion could still be possible (as long as A equals zero.) So if the question had said “which one of the following is an assumption required by the
argument,” (that’s asking for a necessary component of the argument) then “A equals zero” would not be a good answer. Nor would “B equals zero.”
The definition of “sufficient assumption” is “something that would prove the argument’s conclusion to be correct.” Go ahead and memorize that. I’ll be back soon to offer a definition of “necessary
assumption.” Once I’m done with that, I promise I won’t use any math for a while. | {"url":"http://www.foxlsat.com/what-sufficient-assumption-means/","timestamp":"2014-04-16T16:08:58Z","content_type":null,"content_length":"60040","record_id":"<urn:uuid:73f9161c-0575-433e-8b56-5e4cdae694c6>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
Phase space plot of the kicked rotor
April 21, 2012
By sieste
In the idealized physical world, a rotor is simply a mass $m$ attached to an axis of length $r$, free to move in the plane. Gravity and friction are absent. Such a rotor becomes a kicked rotor if it
is periodically hit with a hammer. Every kick transfers momentum $p_k$ to the rotor and the time between successive kicks is $T$.
The current state of the rotor is completely described by the current angle of rotation $\theta$ and the current angular momentum $L$. We shall be interested only in the state of the rotor right
after the kick. All quantities evaluated right after the kick are indicated by a prime.
If the rotor rotates at an angular velocity $\omega$, the angle of rotation changes between two kicks by $\omega T$. Only the component of $p_k$ in the direction of motion is transferred. Therefore
the angular momentum increases by $p_k r \sin\theta$ as a result of the kick. We have
$L' = L + p_k r \sin\theta$
$\theta' = \theta + \omega' T$,
where $\omega'$ is the angular velocity after the kick. We can make the above equations dimensionless (unitless) by dividing the angular momentum $L$ by a typical angular momentum of the system. This
typical angular momentum is given by $\hat{L}=rp=rmv=mr^2/T$ where $v=r/T$ is the typical velocity of the system. We divide the $L$ – equation by $\hat{L}$ and get
$l' = l + \frac{p_k T}{mr}\sin\theta \equiv l + K\sin\theta$.
The dimensionless angular momentum $l$ is the rotor's angular momentum measured in units of the typical angular momentum $\hat{L}$. We have summarized some parameters into the constant $K$.
Now, the (already dimensionless) angle equation can be manipulated a little. We notice that $\omega' T = v' T / r = p'T/mr = L'T/mr^2$ which is the same as $L'/\hat{L}$ which is simply the
dimensionless angular momentum after the kick $l'$. So we now have
$l' = l + K\sin\theta$
$\theta' = \theta + l'$,
where we can, without loss of generality, apply $\mod 2\pi$ to the angle equation and thus also to the angular momentum equation.
This is the famous Chirikov standard map which describes the behavior of the kicked rotor right after the kicks. It maps a phase space point $(\theta,l)$ before the kick to its corresponding point $
(\theta',l')$ after the kick.
What follows is an R-script that chooses 1000 random points on the square $[0,2\pi]\times[0,2\pi]$ and uses each point as the starting point to draw 1000 iterations of the standard map:
# Iterate the standard map 1000 times; return the orbit as a matrix.
smap <- function(t, l) {
  K <- 1
  for (i in seq(1e3)) {
    t[i+1] <- (t[i] + l[i]) %% (2*pi)
    l[i+1] <- (l[i] + K*sin(t[i+1])) %% (2*pi)
  }
  cbind(t, l)
}
# Plot the orbits of 1000 random starting points on [0, 2*pi]^2.
plot(NULL, xlim=c(0, 2*pi), ylim=c(0, 2*pi), xlab="theta", ylab="l")
for (i in seq(1000)) {
  t <- runif(2, 0, 2*pi)
  m <- smap(t[1], t[2])
  points(m[,1], m[,2], col=rgb(runif(1), runif(1), runif(1)), pch=".", cex=2)
}
And this is the output:
Differential Equations and Mathematical Biology, Second Edition
• Uses various differential equations to model biological behavior
• Discusses the modeling of biological phenomena, including the heartbeat cycle, chemical reactions, electrochemical pulses in the nerve, predator–prey models, tumor growth, and epidemics
• Explains how bifurcation and chaotic behavior play key roles in fundamental problems of biological modeling
• Presents a unique treatment of pattern formation in developmental biology based on Turing’s famous idea of diffusion-driven instabilities
• Provides answers to selected exercises at the back of the book
• Offers downloadable MATLAB files on www.crcpress.com
Deepen students’ understanding of biological phenomena
Suitable for courses on differential equations with applications to mathematical biology or as an introduction to mathematical biology, Differential Equations and Mathematical Biology, Second Edition
introduces students in the physical, mathematical, and biological sciences to fundamental modeling and analytical techniques used to understand biological phenomena. In this edition, many of the
chapters have been expanded to include new and topical material.
New to the Second Edition
• A section on spiral waves
• Recent developments in tumor biology
• More on the numerical solution of differential equations and numerical bifurcation analysis
• MATLAB® files available for download online
• Many additional examples and exercises
This textbook shows how first-order ordinary differential equations (ODEs) are used to model the growth of a population, the administration of drugs, and the mechanism by which living cells divide.
The authors present linear ODEs with constant coefficients, extend the theory to systems of equations, model biological phenomena, and offer solutions to first-order autonomous systems of nonlinear
differential equations using the Poincaré phase plane. They also analyze the heartbeat, nerve impulse transmission, chemical reactions, and predator–prey problems. After covering partial differential
equations and evolutionary equations, the book discusses diffusion processes, the theory of bifurcation, and chaotic behavior. It concludes with problems of tumor growth and the spread of infectious
Table of Contents
Population growth
Administration of drugs
Cell division
Differential equations with separable variables
Equations of homogeneous type
Linear differential equations of the first order
Numerical solution of first-order equations
Symbolic computation in MATLAB
Linear Ordinary Differential Equations with Constant Coefficients
First-order linear differential equations
Linear equations of the second order
Finding the complementary function
Determining a particular integral
Forced oscillations
Differential equations of order n
Systems of Linear Ordinary Differential Equations
First-order systems of equations with constant coefficients
Replacement of one differential equation by a system
The general system
The fundamental system
Matrix notation
Initial and boundary value problems
Solving the inhomogeneous differential equation
Numerical solution of linear boundary value problems
Modelling Biological Phenomena
Nerve impulse transmission
Chemical reactions
Predator–prey models
First-Order Systems of Ordinary Differential Equations
Existence and uniqueness
The phase plane and the Jacobian matrix
Local stability
Limit cycles
Forced oscillations
Numerical solution of systems of equations
Symbolic computation on first-order systems of equations and higher-order equations
Numerical solution of nonlinear boundary value problems
Appendix: existence theory
Mathematics of Heart Physiology
The local model
The threshold effect
The phase plane analysis and the heartbeat model
Physiological considerations of the heartbeat cycle
A model of the cardiac pacemaker
Mathematics of Nerve Impulse Transmission
Excitability and repetitive firing
Travelling waves
Qualitative behavior of travelling waves
Piecewise linear model
Chemical Reactions
Wavefronts for the Belousov–Zhabotinskii reaction
Phase plane analysis of Fisher’s equation
Qualitative behavior in the general case
Spiral waves and λ − ω systems
Predator and Prey
Catching fish
The effect of fishing
The Volterra–Lotka model
Partial Differential Equations
Characteristics for equations of the first order
Another view of characteristics
Linear partial differential equations of the second order
Elliptic partial differential equations
Parabolic partial differential equations
Hyperbolic partial differential equations
The wave equation
Typical problems for the hyperbolic equation
The Euler–Darboux equation
Visualization of solutions
Evolutionary Equations
The heat equation
Separation of variables
Simple evolutionary equations
Comparison theorems
Problems of Diffusion
Diffusion through membranes
Energy and energy estimates
Global behavior of nerve impulse transmissions
Global behavior in chemical reactions
Turing diffusion driven instability and pattern formation
Finite pattern forming domains
Bifurcation and Chaos
Bifurcation of a limit cycle
Discrete bifurcation and period-doubling
Stability of limit cycles
The Poincaré plane
Numerical Bifurcation Analysis
Fixed points and stability
Path-following and bifurcation analysis
Following stable limit cycles
Bifurcation in discrete systems
Strange attractors and chaos
Stability analysis of partial differential equations
Growth of Tumors
Mathematical model I of tumor growth
Spherical tumor growth based on model I
Stability of tumor growth based on model I
Mathematical model II of tumor growth
Spherical tumor growth based on model II
Stability of tumor growth based on model II
The Kermack–McKendrick model
An incubation model
Spreading in space
Answers to Selected Exercises
Editorial Reviews
…Much progress by these authors and others over the past quarter century in modeling biological and other scientific phenomena makes this differential equations textbook more valuable and better motivated than ever. …The writing is clear, though the modeling is not oversimplified. Overall, this book should convince math majors how demanding math modeling needs to be and biologists that taking another course in differential equations will be worthwhile. The coauthors deserve congratulations as well as course adoptions.
—SIAM Review, Sept. 2010, Vol. 52, No. 3
… Where this text stands out is in its thoughtful organization and the clarity of its writing. This is a very solid book … The authors succeed because they do a splendid job of integrating their
treatment of differential equations with the applications, and they don’t try to do too much. … Each chapter comes with a collection of well-selected exercises, and plenty of references for further
—MAA Reviews, April 2010
Praise for the First Edition
A strength of [this book] is its concise coverage of a broad range of topics. … It is truly remarkable how much material is squeezed into the slim book’s 400 pages.
—SIAM Review, Vol. 46, No. 1
It is remarkable that without the classical scheme (definition, theorem, and proof) it is possible to explain rather deep results like properties of the Fitz–Hugh–Nagumo model … or the Turing model …
. This feature makes the reading of this text pleasant business for mathematicians. … [This book] can be recommended for students of mathematics who like to see applications, because it introduces
them to problems on how to model processes in biology, and also for theoretically oriented students of biology, because it presents constructions of mathematical models and the steps needed for their
investigations in a clear way and without references to other books.
—EMS Newsletter
The title precisely reflects the contents of the book, a valuable addition to the growing literature in mathematical biology from a deterministic modeling approach. This book is a suitable textbook
for multiple purposes … Overall, topics are carefully chosen and well balanced. …The book is written by experts in the research fields of dynamical systems and population biology. As such, it
presents a clear picture of how applied dynamical systems and theoretical biology interact and stimulate each other—a fascinating positive feedback whose strength is anticipated to be enhanced by
outstanding texts like the work under review.
—Mathematical Reviews, Issue 2004g | {"url":"http://www.crcpress.com/product/isbn/9781420083576","timestamp":"2014-04-16T04:17:44Z","content_type":null,"content_length":"110983","record_id":"<urn:uuid:19bbbc6e-80c1-4cbe-8af6-522659e90e87>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00640-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra in Matrix Calculator
Free online calculator to solve linear equations of algebra.
Algebra calculation using the matrix method: this calculator helps you solve systems of linear algebraic equations easily and interactively. The matrix method is used to compute the result.
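As a sketch of what the matrix method amounts to: a system of linear equations is written as A x = b and solved by factorising the coefficient matrix. The Python/numpy fragment below illustrates the same computation; it is not the calculator's own code.

import numpy as np

# Solve the system  2x + y = 5,  x - 3y = -8  in matrix form A x = b.
A = np.array([[2.0, 1.0],
              [1.0, -3.0]])
b = np.array([5.0, -8.0])
x = np.linalg.solve(A, b)   # LU factorisation under the hood
print(x)                    # -> [1. 3.], i.e. x = 1, y = 3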
This tool will help you find the solutions of systems of linear equations using the matrix method. It can be used both to check your own math problems and to create new ones. | {"url":"http://easycalculation.com/matrix/matrix-algebra.php","timestamp":"2014-04-18T05:41:35Z","content_type":null,"content_length":"29829","record_id":"<urn:uuid:a881bed2-58db-4391-9094-8299190239ec>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Is the magnetic field a mathematical abstraction?
If two protons are floating in space, they'll feel an electrical field between each other and repel, but if you see them zip by you in space at relativistic speeds, they'll behave differently from
your point of view by repelling slower than they should, and you'll see a new force.
But it doesn't matter -- the math was invented to describe the behavior, so ultimately it's all mathematical abstraction and you use whatever level of complexity is necessary to solve your problem. | {"url":"http://www.physicsforums.com/showpost.php?p=4255496&postcount=3","timestamp":"2014-04-19T02:20:14Z","content_type":null,"content_length":"7536","record_id":"<urn:uuid:e96d5e72-082e-42dc-a1b3-d2e718e6b516>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
• Axiomatic definition of sequential algorithms
2011-03-30 Microsoft Channel 9 Lecture 3 on Algorithms
* What are sequential algorithms?
* Postulate 1 (Sequential Time)
* Postulate 2 (Abstract State)
* Postulate 3 (Bounded Exploration)
* Axiomatic definition
* Representation Theorem
• What's an algorithm?
2011-02-07 Microsoft Channel 9 Lecture 2 on Algorithms
* Is it possible to define algorithms?
* What kind of entities are algorithms?
* Why bother to define algorithms?
* Algorithms = Turing machines ?
• Algorithms and computablitlity
2010-06-02 Microsoft Channel 9 Lecture 1 on Algorithms
* Euclid's algorithms and his time
* One classification of algorithms
* Turing's analysis of symbolic algorithms
* Computability and decidability
* Limitations of the Turing machine model
* On the verge of computational complexity
• 2010-02-02 Microsoft Channel 9 interview
The interview, conducted by Charles Torre of Channel 9, touched upon a number of topics.
* Logic in Soviet Union.
* A critique of functional programming and declarative specifications.
* The analysis of algorithms that motivated the introduction of abstract state machines. This is by far the longest piece of the conversation.
* A recent article, "When are two algorithms the same?"
* A quick motivation of a new Distributed Knowledge Authorization Language (DKAL).
* The sequential character of human thought.
• 2009-06-02 lecture on Church-Turing thesis at Google
It is the same lecture as the #1 but to a different audience with different questions.
• 2009-01-20 lecture on Church-Turing thesis at Microsoft
The lecture was organized by the Association for the Advancement of Artificial Intelligence of Greater Seattle
ABSTRACT: The Church-Turing thesis is one of the foundations of computer science. The thesis heralded the dawn of the computer revolution by enabling the construction of the universal Turing machine
which led the way, at least conceptually, to the von Neumann architecture and first electronic computers. One way to state the Church-Turing thesis is as follows: A Turing Machine computes every
numerical function that is computable by means of a purely mechanical procedure.
It is that remarkable and a priori implausible characterization that underlies the ubiquitous applicability of digital computers. But why do we believe the thesis? Careful analysis shows that the
existing arguments are insufficient. Kurt Gödel surmised that it might be possible to state axioms which embody the generally accepted properties of computability, and to prove the thesis on that
basis. That is exactly what we did in a recent paper with Nachum Dershowitz of Tel Aviv University.
Beyond our proof, the story of the Church-Turing thesis is fascinating and scattered in specialized and often obscure publications. We try to do justice to that intellectual drama. | {"url":"http://research.microsoft.com/en-us/um/people/gurevich/video.htm","timestamp":"2014-04-18T20:54:35Z","content_type":null,"content_length":"5882","record_id":"<urn:uuid:7ac6785d-6fc9-43b9-bc31-8a1c77ecb237>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00590-ip-10-147-4-33.ec2.internal.warc.gz"} |
Size refers to the number of nodes in a parse tree. Generally speaking, you can think of size as code length.
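As an illustration of what counting parse-tree nodes means (here in Python, using the standard-library ast module; Cody itself scores MATLAB code, so this only mirrors the idea):

import ast

src = "y = sum(x) / len(x)"            # a small expression to score
tree = ast.parse(src)
size = sum(1 for _ in ast.walk(tree))  # one count per parse-tree node
print(size)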
{"url":"http://www.mathworks.com/matlabcentral/cody/problems/299-vectorization-in-n/solutions?page=2","timestamp":"2014-04-19T19:44:42Z","content_type":null,"content_length":"132153","record_id":"<urn:uuid:e19c9258-6425-40b7-afc3-8f61c1f397ef>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Mission Viejo Math Tutor
...Majoring in Biology in college. Received an A grade in Precalculus Honors. Long-time proficiency in all math; long-time interest and proficiency in the sciences.
19 Subjects: including algebra 1, algebra 2, biology, vocabulary
...I have 2+ years of tutoring experience: In high school, I taught pre-Algebra, Algebra, Trigonometry, Pre-Calculus, and Calculus to both groups and individual students. I have an SAT score of
1980. I am knowledgeable in all high-school and undergraduate math courses and physics.
12 Subjects: including algebra 2, precalculus, tennis, physical science
...It's necessary to keep on top of the latest computer trend and programs no matter what our age. I have taught a high school study skills class. It is one of the most important classes to begin
the year with.
52 Subjects: including algebra 1, reading, Spanish, differential equations
...I have done work in research and health care field during a two year stint before I went back to school to earn a doctorate from Northeastern University in Boston. I now practice full time in
the Healthcare field, but still find joy in tutoring individual students part time on the side. My appr...
6 Subjects: including calculus, geometry, precalculus, physics
...My experience is long and varied and my qualifications are excellent. It can be said that I am passionate about the subject. During my long career I have taught most elementary grades:
kindergarden, second through fifth, and for the past fifteen years, sixth grade.
11 Subjects: including prealgebra, reading, English, French | {"url":"http://www.purplemath.com/mission_viejo_math_tutors.php","timestamp":"2014-04-18T13:27:59Z","content_type":null,"content_length":"23752","record_id":"<urn:uuid:ca03141b-7f14-486d-b414-3da0b70f7e09>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00459-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find a Garden Grove, CA Math Tutor
...I am also available to tutor in Photoshop as well; I do freelance graphic design work and basic web design for various clients. I will make my portfolio available to you if you need to see my
work. My tutoring method involves first assessing the students for their strengths and weaknesses in a ...
19 Subjects: including algebra 2, English, Spanish, algebra 1
...Although I hold my degree in the subject, I'll agree that Biology is one of the toughest sciences to master; there's a ton of material, and the field keeps changing its mind about how much of
our DNA is actually junk. That said, I am completely at ease guiding my students through introductory Bi...
43 Subjects: including algebra 1, algebra 2, biology, calculus
...I could discuss it endlessly, and in as much depth as any audience would allow. The benefit of this to any math student would be that they will find my patience profuse, my eagerness to
clarify its techniques tireless, my ingenuity unparalleled in creating the most illuminating metaphors for its...
13 Subjects: including differential equations, linear algebra, algebra 1, algebra 2
...I speak Spanish and am capable of tutoring the basics in this area as well. I enjoy sports and can offer service in this area as well. I am flexible with my hours and provide ongoing
assessment and diagnosis to ensure that my students are on track and achieving.
47 Subjects: including ACT Math, grammar, Spanish, reading
...I have very strong reading comprehension skills as they are required as an attorney. I have great approaches to the section for students who are not performing as well as they'd like. I have
tutored the SAT/ACT one-on-one and in group settings for several major tutoring agencies.
19 Subjects: including geometry, algebra 1, ACT Math, grammar | {"url":"http://www.purplemath.com/Garden_Grove_CA_Math_tutors.php","timestamp":"2014-04-20T16:23:09Z","content_type":null,"content_length":"24040","record_id":"<urn:uuid:bd6e14cf-fd8f-4a4a-8383-4f149c93db17>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
I was born on the 9th of July 1973, in Troy, New York. My father was a professional musician and my mother a sociologist-to-be. At the age of 6 months, we moved to Europe. Until the age of 22, I
lived mostly between France, Italy and Germany.
After getting my bachelor's in mathematics at the UNSA, Nice, France, I moved back to the U.S.A. and started a Ph.D. program soon after. Until then, I was mostly interested in music, spending most of my time studying at the "Conservatoire de Nice" and attending the science university only to benefit from the scholarship advantages.
During my Ph.D. program at Emory University, I started to become more interested in mathematics. My predilection was for unifying disparate results into a subsuming one through the creation of a unifying structure and language. Since the scholarship lent itself to it, I decided to work towards a Master's in computer science simultaneously. I bought my first computer at age 14 (for the sole purpose of trying, unsuccessfully, to program it to converse with me) but had scarcely had any formal education in computer science till then.
During and after my graduate student years, I taught several classes such as “mathematics of computer science”, “probability and statistics”, “calculus”, “theory of computing”, for Emory University,
John Hopkins, and Illinois State University.
In 2000 a computer hardware company called me to see if I could use mathematics to help them with a few problems they had, and thus started my first project as a consultant in mathematics. I sat in
meetings to learn their expertise and give them my “mathematical” perspective. I researched topics relevant to their problems and conducted workshops to teach the engineers certain techniques of
interest. I served as an “on-call” mathematician when a programmer had questions about the effectiveness or efficiency of his algorithms. Finally, I developed some novel systems and methods for them,
one of them which we later patented.
Word of mouth had its way and brought me many subsequent projects in various sectors.
Since 2003 an increasing proportion of projects has involved data acquisition, analysis, mining, and utilization to create or optimize various business processes. Since 2006 my main focus has gravitated toward marketing, behavior modeling and language processing. | {"url":"http://thorwhalen.com/biography.html","timestamp":"2014-04-21T09:35:38Z","content_type":null,"content_length":"5428","record_id":"<urn:uuid:3323707e-b8fe-4b7c-afc5-6989e4058f90>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Blaise Pascal
French theologian, mathematician, and philosopher
Pascal was born in Clermont-Ferrand on June 19, 1623, and his family settled in Paris in 1629. Under the tutelage of his father, Pascal soon proved himself a mathematical prodigy, mastering Euclid's
Elements by the age of 12. At the age of 16 he formulated one of the basic theorems of projective geometry, known as Pascal's theorem and described in his Essai pour les coniques (Essay on Conics,
In 1642 he invented the first mechanical adding machine. Pascal proved by experimentation in 1648 that the level of the mercury column in a barometer is determined by an increase or decrease in the
surrounding atmospheric pressure rather than by a vacuum, as previously believed. This discovery verified the hypothesis of the Italian physicist Evangelista Torricelli concerning the effect of
atmospheric pressure on the equilibrium of liquids. Six years later, in conjunction with the French mathematician Pierre de Fermat, Pascal formulated the mathematical theory of probability, which has
become important in such fields as actuarial, mathematical, and social statistics and as a fundamental element in the calculations of modern theoretical physics. Pascal's other important scientific
contributions include the derivation of Pascal's law or principle, which states that fluids transmit pressures equally in all directions, and his investigations in the geometry of infinitesimals.
In 1647, a few years after publishing Essai pour les coniques he suddenly abandoned the study of mathematics. Because of his chronically poor health, he had been advised to seek diversions from study
and attempted for a time to live in Paris in a deliberately frivolous manner. His interest in probability theory has been attributed to his interest in calculating the odds involved in the various
gambling games he played during this period.
At the end of 1654, after several months of intense depression, Pascal had a religious experience that altered his life. He entered the Jansenist monastery at Port-Royal, although he did not take
orders, and led a rigorously ascetic life until his death eight years later. He never published in his own name again. The Jansenists encouraged him in his mathematical studies, which he resumed. To
assist them in their struggles against the Jesuits, he wrote, under a pseudonym, a defense of the famous Jansenist Antoine Arnauld, the famous Lettres provinciales (Provincial Letters), in which he
attacked the Jesuits for their attempts to reconcile 16th-century naturalism with orthodox Roman Catholicism. His most positive religious statement appeared posthumously (he died August 19, 1662); it
was published in fragmentary form in 1670 as Apologie de la religion Chrétienne (Apology of the Christian Religion). In these fragments, which later were incorporated into his major work, he posed
the alternatives of potential salvation and eternal damnation, with the implication that only by conversion to Jansenism could salvation be achieved. Pascal asserted that whether or not salvation was
achieved, humanity's ultimate destiny is an afterlife belonging to a supernatural realm that can only be known intuitively.
Pascal's most famous work is the Pensées (published 1670), a set of deeply personal meditations in somewhat fragmented form on human suffering and faith in God. In the Pensées he attempted to explain
and justify the difficulties of human life by the doctrine of original sin, and he contended that revelation can be comprehended only by faith, which in turn is justified by revelation. "Pascal's
wager" expresses the conviction that belief in God is reasonable on the ground that there are no rational grounds either for belief or disbelief, so belief is not less reasonable than disbelief; but
this being so it is wiser to gamble on the truth of religion since this policy involves success if religion is true and no significant loss if it is false. He had admirers both Roman Catholic and
Protestant, including John Wesley, the founder of Methodism, who praised an essay he wrote on the psychology of conversion. Pascal died at the age of 39 from a combination of tuberculosis and stomach
Works by Blaise Pascal
Only CCEL
CCEL + External
Cover Art
Title / Description ▲
Popularity ▲
On November 23, 1654, Pascal had an intense religious vision; later that night, he wrote himself a note detailing the experience. Until his death, he kept this note sewn into whichever coat he wore,
and only by chance did a servant discover it. In it, Pascal claims to have had a personal encounter with Christ rather than the abstracted God of philosophy. As a result, he felt his faith
reinvigorated, and he began his first major work on religion, the Provincial Letters.
The Pensées is simply the compelling "Thoughts" of mathematician, physicist, and religious thinker Blaise Pascal. Originally intending to publish a book defending Christianity, Pascal died before he
could complete it. The thoughts and ideas for his book were collected and complied, posthumously, and then published as the Pensées. Pascal's thoughts are as powerful as they are comprehensive. He
discusses with great wonder and beauty the human condition, the incarnation, God, the meaning of life, revelation, and the paradoxes of Christianity. He passionately argues for the Christian faith,
using both argumentation and his famous "Wager." His ideas and arguments are sometimes developed and intricate, at other times, abrupt and mysterious. Consequently, the Pensées is a startling and
powerful book--with each successive read, one discovers new profound insights. Anyone curious about the Christian faith, or simply looking for an impassioned defense of it, should look no further
than Pascal's Pensées.
Works About Blaise Pascal
Wikipedia Article | {"url":"http://www.ccel.org/ccel/pascal","timestamp":"2014-04-21T03:09:15Z","content_type":null,"content_length":"36857","record_id":"<urn:uuid:263554a7-63b9-4180-aa52-0e39b472e938>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00186-ip-10-147-4-33.ec2.internal.warc.gz"} |
Anybody here good at physics and curious to know how long it takes a speeding Moped to reach Ke$ha? | {"url":"http://openstudy.com/updates/4f11f1d1e4b0a4c24f580aa2","timestamp":"2014-04-19T15:25:46Z","content_type":null,"content_length":"96923","record_id":"<urn:uuid:c3b24843-cd4d-4a5f-b310-98bf0df4599b>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
The first plot shows the quadratic polynomial $1-(x/2-1)^2 = -x^2/4 + x$ shifted to the interval [-8,-4]. The second plot shows its negative, $-1+(x/2-1)^2 = x^2/4 - x$, shifted to the interval [-4,0].
The last plot shows a piecewise polynomial constructed by alternating these two quadratic pieces over four intervals. It also shows its first derivative, which was constructed after breaking the
piecewise polynomial apart using unmkpp.
cc = [-1/4 1 0];                                 % coefficients of -x^2/4 + x (in local coordinates)
pp1 = mkpp([-8 -4],cc);                          % first quadratic piece on [-8,-4]
xx1 = -8:0.1:-4;
plot(xx1,ppval(pp1,xx1))                         % first plot
pp2 = mkpp([-4 0],-cc);                          % negated piece on [-4,0]
xx2 = -4:0.1:0;
plot(xx2,ppval(pp2,xx2))                         % second plot
pp = mkpp([-8 -4 0 4 8],[cc;-cc;cc;-cc]);        % alternate the two pieces over four intervals
xx = -8:0.1:8;
plot(xx,ppval(pp,xx))                            % the piecewise polynomial
[breaks,coefs,l,k,d] = unmkpp(pp);               % break the piecewise polynomial apart
dpp = mkpp(breaks,repmat(k-1:-1:1,d*l,1).*coefs(:,1:k-1),d);   % construct its first derivative
hold on, plot(xx,ppval(dpp,xx),'r-'), hold off   % overlay the derivative
ISRN Economics
Volume 2013 (2013), Article ID 158240, 16 pages
Research Article
Choosing the Right Spatial Weighting Matrix in a Quantile Regression Model
Lancashire Business School, University of Central Lancashire, Greenbank Building, Preston, Lancashire PR1 2HE, UK
Received 4 December 2012; Accepted 27 December 2012
Academic Editors: D. M. Hanink and W. R. Reed
Copyright © 2013 Philip Kostov. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
This paper proposes computationally tractable methods for selecting the appropriate spatial weighting matrix in the context of a spatial quantile regression model. This selection is a notoriously
difficult problem even in linear spatial models and is even more difficult in a quantile regression setup. The proposal is illustrated by an empirical example and manages to produce tractable models.
One important feature of the proposed methodology is that by allowing different degrees and forms of spatial dependence across quantiles it further relaxes the usual quantile restriction attributable
to the linear quantile regression. In this way we can obtain a more robust, with regard to potential functional misspecification, model, but nevertheless preserve the parametric rate of convergence
and the established inferential apparatus associated with the linear quantile regression approach.
1. The Spatial Quantile Regression Model
The spatial quantile regression model [1] is a straightforward quantile regression generalisation of the popular, in spatial econometrics, linear spatial lag model. More specifically it can be written as
$$y = \lambda(\tau)\,Wy + X\beta(\tau) + \varepsilon(\tau), \qquad (1)$$
where $Wy$ is a spatially lagged dependent variable, specified via a predetermined spatial weighting matrix $W$, $X$ is the design matrix containing the independent variables (covariates), and $\varepsilon(\tau)$ is a residuals vector. Here we only have one spatially lagged dependent variable but this is not an essential assumption, and more than one spatial weighting matrix can be easily incorporated. This representation is similar to the linear spatial lag regression model, but here the coefficients $\lambda(\tau)$ and $\beta(\tau)$ are allowed to vary with the quantile $\tau$, rather than being assumed fixed.
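To fix ideas about the objects in (1), the numpy sketch below builds a row-standardised k-nearest-neighbour weighting matrix from hypothetical coordinates and forms the spatial lag $Wy$. The coordinates, sample size, and number of neighbours are illustrative assumptions, not choices made in this paper.

import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 4
coords = rng.uniform(size=(n, 2))             # illustrative spatial locations
y = rng.normal(size=n)

# Pairwise distances; a unit is never its own neighbour.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)

W = np.zeros((n, n))
for i in range(n):
    W[i, np.argsort(d[i])[:k]] = 1.0          # k nearest neighbours
W /= W.sum(axis=1, keepdims=True)             # row-standardisation

Wy = W @ y                                    # the spatially lagged dependent variable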
This model has some attractive properties. First, the original motivation for Kostov’s [1] proposal is to alleviate the potential bias arising from inappropriate functional form assumptions in a
spatial model. In simple terms the underlying logic is as follows. Omitting spatial dependence typically introduces estimation bias in the presence of spatial lag dependence when the wrong functional
form specification is employed. Hence a natural way to circumvent the problem is to estimate the underlying function nonparametrically. The sample sizes used in many empirical studies are however
often too small for efficient application of nonparametric methods. Semiparametric methods could then be used to alleviate the problem. The linear quantile regression is such a semi-parametric
method. Although it cannot be guaranteed to entirely eliminate the adverse effects of functional form assumptions, such methods can greatly reduce them. In particular Kostov [1] argues that for a
typical hedonic model the (linear) quantile restriction is appropriate.
A major advantage of the quantile regression approach is the opportunity to estimate a flexible semiparametric model, which is nevertheless characterised by parametric rate of convergence, thus
making it suitable for empirical analysis in small sample cases. Furthermore a well-developed set of tools for efficient inference is available (see [1] for details).
Spatial modelling has however been focused mostly on estimation issues. For example, Kostov [1] assumes that the exact form of the process generating the spatial dependence is given. This is a
typical assumption of an “estimation focused” approach to spatial modelling in that the spatial weighting matrix used to specify the model is known. The spatial weighting matrix is however a part of
the specification process. It needs to be prespecified. There could be cases where the underlying theoretical model provides some guidance but more often than not this is not the case. Consequently
in empirical applications of spatial models the selection of spatial weight matrices is characterised by a great deal of arbitrariness. This arbitrariness presents a serious problem to the inference
in such models since estimation results have been shown to critically depend on the choice of spatial weighting matrix [2–4]. Even more importantly, there is an interplay between spatial weighting
matrix and functional form choice. Using the wrong spatial weighting matrix has broadly speaking the same implications as ignoring existing spatial dependence. Therefore functional form and spatial
weighting matrix specification have to be considered simultaneously. The problem is not as severe in nonparametric models, because most nonparametric estimation methods are typically consistent even
in the presence of spatial dependence. The wrong spatial weighting matrix however would still introduce inefficiency in the non-parametric estimates, which with smaller samples can seriously impede
inference. In a parametric setup, the wrong spatial weighting matrix introduces bias even when the right functional form is used.
2. Selection of Spatial Weighting Matrix
Owing to these considerations it would be advantageous to have methods to choose an appropriate spatial weighting matrix. Selecting the “right” spatial weighting matrix can serve twofold purpose.
First, it will increase the efficiency of the model estimates, as discussed previously. Second, when the nature of the process generating spatial dependence is of particular interest (e.g., in social
interaction models) the form of the spatial weighting matrices consistent with that data generation process becomes a major inferential problem. In such cases we need to find the appropriate spatial
weighting matrix, since this is the explicit subject of the research problem. In this paper we consider the issue in a spatial quantile regression framework.
In the following we will briefly review some approaches designed to reduce the arbitrariness of spatial weighting matrix choice (mostly) in linear models. Then we will discuss the possible extensions
to the spatial quantile regression. The approach taken in this paper falls in the framework of selecting the spatial weighting matrix either implicitly or explicitly from a pre-defined set of
Holloway and Lapar [5] used a Bayesian marginal likelihood approach to select a neighbourhood definition (cut-off points for the neighbourhood), but one can consider their approach as a general model
selection approach, which could be applied to any other set of competing models. A particularly active strand of research is concerned with Bayesian model averaging (BMA) approaches. LeSage and
Parent [6] proposed a BMA procedure for spatial models which incorporates the uncertainty about the correct spatial weighting matrix. LeSage and Fischer [7] extended the latter approach into an MC3 (Markov Chain Monte Carlo Model Composition) method to select an inverse distance nearest neighbour type of spatial weighting matrix for the linear spatial model. Crespo-Cuaresma and Feldkircher [8]
further extend this procedure to deal with different types of spatial weighting matrices by introducing Bayesian model averaging inference conditional on a given spatial weighting matrix.
Crespo-Cuaresma and Feldkircher [8] use spatial filtering to resolve the endogeneity issue and in this way focus on the regression part of the model rather than on the spatial dependence itself. The
approach above implicitly assumes that the spatial dependence can be characterised by a single spatial weighting matrix. This assumption can be relaxed but at a considerable computational cost.
Eicher et al. [9] proposed instrumental variables Bayesian model averaging procedure which is essentially a hierarchical Bayesian counterpart to the frequentist two-step estimation that accounts for
model uncertainty in both steps. Although Eicher et al. [9] do not deal with spatial dependence, but only with the more general issue of endogeneity, since spatial lag dependence is a particular type
of endogeneity, their approach can be readily applied to spatial lag models.
Finally from a non-Bayesian point of view Kostov [10] suggested a two-step procedure for selecting spatial weighting matrix that is applicable to a wide range of prespecified candidates. This
procedure is motivated by considerations specific to spatial models (and the proposed computational algorithms are tuned for this purpose), but otherwise it deals with the endogeneity problem in the
same way as Eicher et al. [9].
3. Proposal Outline
This paper proposes extending the methodology adopted in Kostov [10] to a quantile regression setting. In what follows we will first briefly explain the previously mentioned approach. We will then
highlight the particularities of the extension of this procedure to quantile regression models. Furthermore we will briefly comment on the different alternative options and the reasons for the
specific choices we adopt. Our contribution is twofold. First we adapt the approach of Kostov [10] to a (linear) quantile regression model. Second, since as we will explain later, the original
approach has a prediction focus, we further expand it to focus on structure discovery (i.e., identifying the “true sparsity pattern”).
Kostov’s [10] approach is based on Kelejian and Prucha’s [11] two-stage least squares method to estimate spatial models. In this method, spatially lagged independent variables are used as instruments
for the spatially lagged dependent variable. The first step (instrumentation) is a least squares regression of the lagged dependent variable on the lagged independent variables. In the second step,
the fitted values from the first stage regression replace the original endogenous variable in the estimation of the model’s coefficients. Kostov [10] retains the first step of this procedure (which
projects the spatially lagged dependent variable in the vector space of the instruments). He however suggests implementing this first step for a number of different candidate spatial weighting matrices, resulting in an augmented second stage model that includes a large number of variables transformed in the first step (one per candidate), instead of the original spatial weighting matrices. In this
way the problem of choice of spatial weighting matrix becomes a variable selection problem (amongst the previously mentioned transformed variables). The other interesting feature of Kostov’s [10]
paper is the application of a component-wise boosting algorithm as a variable selection method in the second step. Any other variable selection method could be used but Kostov’s [10] choice is mainly
motivated by computational considerations in dealing with a large number of potential alternatives.
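For the linear case, the instrumentation step just described can be sketched as follows (numpy; a schematic illustration under the assumption that $X$ holds the covariates only, with the intercept omitted for simplicity, not the paper's actual code):

import numpy as np

def first_stage_fits(y, X, W_list):
    # For each candidate W, regress the spatial lag Wy on the
    # instruments Z = [X, WX] by least squares and keep the fitted
    # values (the projection of Wy on the span of the instruments).
    fits = []
    for W in W_list:
        Z = np.hstack([X, W @ X])
        coef, *_ = np.linalg.lstsq(Z, W @ y, rcond=None)
        fits.append(Z @ coef)
    return np.column_stack(fits)

# The returned columns then augment X in the second stage, so that
# choosing among W1, ..., Wm becomes a variable selection problem.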
In a nutshell the approach of Kostov [10] amounts to transforming the spatial weighting matrix selection problem into a high-dimensional (due to the potentially large number of alternatives) variable
selection problem, for which “standard” methods could be applied. The crucial point is Kostov’s [10] approach to establish equivalence between the two-stage spatial least squares method and the
proposed component-wise boosting alternative. Therefore in order to extend the same logic to a spatial quantile regression model we need to find a variable selection equivalent to a quantile
regression estimation method. We will deal with these two issues in turn.
The first issue is the estimation method for spatial quantile regression. We are aware of two main approaches able to consistently estimate such models. The first is the application of Zietz et al. [
12] who use the results of Kim and Muller [13] for quantile regression estimation under endogeneity. The other approach is presented in Kostov [1] who builds upon the methods developed by
Chernozhukov and Hansen [14, 15]. In Kostov’s [1] application one minimises a matrix norm over a range of values for the spatial dependence parameter. This is convenient when there is a single
spatial weighting matrix. With many candidates however this would involve such minimisation over a multidimensional grid, which makes such an approach prohibitively expensive in terms of
computational requirements, particularly when the number of potential spatial weighting matrices is large. Alternatively the methods developed in Chernozhukov and Hong [16] could be used to estimate
such a model, but this will still involve considerable computational costs, and we will not pursue this option here. Furthermore, the main appeal of this procedure over the two-stage quantile
regression is the availability of robust inference tools, since it is computationally more demanding (see [1] for detailed comparison). Here we are interested in selecting the model specification,
rather than estimating a prespecified model. With view to this simpler methods are preferable. Once the final model specification is established and inference is the main focus, any estimation method
could be applied, depending on the purpose of the analysis.
The Zietz et al. [12] approach on the other hand represents a simple two-stage quantile regression. As such it is very similar to the spatial two-stage least squares approach of Kelejian and Prucha [
11], which is being used in Kostov [10]. Therefore using the theoretical results of Kim and Muller [13] we can extend their two-stage quantile regression estimator to include variable selection,
using essentially the same arguments as Kostov [10]. Such an extension however comes at a cost. The previous approach uses two consecutive quantile regression estimators defined at the same quantiles
at both steps. In the context of selecting spatial weighting matrices the first step would carry considerable computational burden, mainly because of the large number of alternatives to be
considered. This means that the computational burden will be increased since separate first step estimation would need to be carried over each quantile that is to be considered. It would therefore
have been very useful if one could have replaced the first step with, for example, least squares estimation, because this would then only need to be carried once. There have been empirical
applications of two-stage estimation where the estimators used in the first and the second stage are different. For example, Arias et al. [17] and Garcia et al. [18] used least squares in the first
step followed by quantile regression in the second. Unfortunately in general settings such an approach could induce asymptotic bias in the overall estimator (see [13] for details). In simple terms
the robustness of two-stage estimators could be lost when the first stage applies an estimator that is not robust. Owing to this we consider here only estimators that employ the same type of
estimator for both steps. This means that we will have to use quantile regression in both steps. The use of quantile regression for each estimated quantile greatly increases the computational costs
of the method compared to the linear model.
The proposal of Kostov [10] translates into using a variable selection algorithm in the second stage estimation. As discussed previously this variable selection algorithm needs to be of the same type as
the one in the original two-stage estimator. Therefore we need a quantile regression variable selection method. There are several possibilities for the latter. First, the component-wise boosting
approach used in Kostov [10] can be adapted to do variable selection in a quantile regression setting. To this end Fenske et al. [19] demonstrated that using the check function used to define the
quantile regression as an empirical loss function leads to an alternative quantile regression estimator. Using this approach looks like a natural extension to the logic of Kostov [10], particularly
since he does mention the potential use of alternative empirical risk functions.
Another option is to use regularised (i.e., penalised) quantile regression to select covariates. Two of the most popular regularisation approaches, namely the least absolute shrinkage and selection
operator (lasso) of Tibshirani [20] and the smoothed clipped absolute deviations (SCAD) method of Fan and Li [21] have already been considered in quantile regression setting (see [22–24]. In general
these papers have established the consistency of such regularised estimators for quantile regression problems, subject to appropriately chosen “optimal” penalty parameter(s).
So, a straightforward generalisation of the approach of Kostov [10] to quantile regression involves a similar two-step procedure. In the first step a number of quantile regressions are implemented
(for each candidate spatial weighting matrix) regressing the spatially lagged dependent variable on the spatially lagged independent variables. The fitted values from the first step are then used as
additional explanatory variables (thus augmenting the original set of covariates). This second step is estimated using variable selection methods to effectively select the appropriate spatial
weighting matrix.
There are several important features of such implementation. First, since it is based on a consistent two-stage estimator (the two-stage quantile regression estimator of Kim and Muller [13]) it
should retain the consistency properties of the original estimator as long as the second step is also consistent. As already discussed the price we have to pay for maintaining such consistency is the
need to estimate separate first step quantile regression for each quantile considered. Second, similarly to other the two-step procedures, standard errors, or indeed any inference based solely on the
second step estimation would be invalid. One could consider asymptotic inference based on the results of Kim and Muller [13]. Alternatively the overall (two-step) estimator could be bootstrapped.
Note however that due to the computational costs of the first step (details of which we present later on) such an implementation would be prohibitively expensive. The best option is to follow the
suggestion of Kostov [10] and only use the proposed estimator to select the structure of the model, which can then be estimated using standard methods.
4. Variable Selection Step
From now on we will take the first (instrumentation) step as given and will focus entirely on the variable selection step. We will argue that in order to obtain efficient inference it is desirable
that in the second step a variable selection procedure characterised by the so-called oracle property is implemented. In simple words if an estimator possesses the oracle property this means that the
asymptotic distribution of the obtained estimates is the same as this of the “oracle estimator,” that is, an estimator constructed from a priori knowledge of which coefficients should be zero.
Therefore estimators possessing the oracle property can be used for both variable selection and inference. Here we deviate considerably from Kostov [10] who claimed that since the proposed procedure
is only to be used for selecting the model structure, the oracle property in not essential. Actually the brief discussion provided in Kostov [10] implies (without explicitly mentioning it) that
instead of consistency, the weaker condition of persistence [25] would be sufficient. While the oracle property aims at minimising prediction error, the persistence tries to avoid wrongly excluding
significant variables.
Therefore using a persistent estimator implicitly includes a measure of uncertainty very much in the spirit of Bayesian methods. The actual aim in many typical applications however would be to discover
the “true sparsity pattern.” For such purposes a combination of consistent and oracle estimators have been shown to be able to discover the underlying structure and retain the oracle property. This
idea has been formalised and theoretically developed in Fan and Lv [26]. Their methodology consists of a screening step (using a consistent variable selection method) followed by an oracle method
(estimation step) to produce the final model. Even if both methods used in such a combination do not possess the oracle property the overall procedure will gain from improved convergence rates and
can still be consistent subject to some additional conditions (see e.g., [27] for detailed discussion and simulation evidence). Here however we prefer to avoid imposing such additional conditions and
would prefer applying a method possessing the oracle property in the estimation step.
An additional advantage of combining screening and estimation steps is the reduction in computational requirements and improved convergence rates. The convergence rates of estimators possessing the
oracle property depend on the relative (to the complexity of the employed model) sample size. Owing to this it would be desirable if the size of the initial model is reduced. Applying an estimator
possessing the oracle property to such a reduced model will improve this estimator’s efficiency (compared to the case when it is applied directly to the larger, unrestricted model). In addition to
the theoretical efficiency gain, this could bring considerable practical gains in greatly reducing the computational requirements of the selection algorithm(s) involved. Such a reduced model can be
produced by using any consistent estimator (i.e., an estimator that (asymptotically) retains the important variables (i.e., variables with nonzero coefficients)). In simple terms the combination of
screening and estimation steps reduces the false positive discovery rate (i.e., falsely retaining unimportant variables) and hence is tuned to structure discovery. Retaining such unimportant
variables often improves prediction accuracy or uncertainty measures and hence can result in larger models (see [27], for a detailed discussion).
So we propose applying a combination of screening and estimation steps to the already transformed model. Such a proposal can be viewed as unnecessary complication to an already involved procedure.
Nevertheless it has significant advantages. First, as we will show, it nests within itself the straightforward implementation of the Kostov’s [10] proposal. Second since the combination of screening
and estimation steps is equivalent to a single step estimation, but has better convergence rates, one can potentially further reduce the set of potential spatial weighting matrices by maintaining the
consistency of the overall estimation procedure. The previously mentioned equivalency means that the overall proposed spatial model estimator which comprises three distinct steps (instrumentation,
screening, and estimation) is still equivalent to the two-step method used to motivate it (i.e., the two-step quantile regression).
As discussed previously using either a boosting or regularisation approach can be viewed as different implementations of the same idea, namely, implementing a variable selection step in a two-stage
quantile regression estimator. In order to ascertain the relative merits of these two alternatives let us first consider their relative computational requirements. The boosting approach is
considerably less intensive in terms of computation. It has another important, in the context of spatial weighting matrix selection, advantage over the regularisation approach. Since the
component-wise boosting approach processes the candidate variables one by one (see the next section for a description of the component-wise boosting algorithm), high degrees of correlation amongst variables (and therefore singularity issues due to a highly nonorthogonal design) do not present a significant problem for effectively reducing the set of alternatives. The nature of the spatial weighting
matrix selection problem could involve simultaneous consideration of numerically very similar alternatives, which could be infeasible in the regularisation approach. Furthermore although extensively
studied and shown to be consistent it is unclear whether the boosting approach possesses the oracle property. It is therefore desirable to implement the component-wise boosting as a screening method.
Then an oracle property regularisation approach can be implemented in the estimation step. Note that if we stop after the screening step, we obtain a straightforward quantile regression
generalisation of the approach of Kostov [10].
Due to the fact that component-wise boosting is much faster than direct implementation of any regularisation approach, the previous strategy achieves considerable reduction in the computational
requirements and makes the overall approach computationally feasible. Note that in addition to the computational requirements, direct application of a regularisation estimator could be infeasible in
many spatial problems, simply because of the nature of the spatial weighting matrices to be considered. When a large number of such matrices is considered (as in [10]), the resulting transformed
variables could be quite similar numerically. This could result in singularities that would prevent direct application of a regularised quantile regression estimation of the transformed problem.
In addition to the approach outlined previously we will also consider adopting the stability selection approach of Meinshausen and Bühlmann [28] to the boosting estimation. Strictly speaking
stability selection is not an estimator per se, but rather the application of a combination of subsampling (although other forms of bootstrap could be used) and a variable selection algorithm. It provides a
measure of how often a variable is selected, and therefore by using a threshold only persistent variables can be selected.
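Schematically, stability selection wraps any variable selection routine in a subsampling loop, as in the Python sketch below; the routine select(y, X, tau) returning a boolean mask of retained columns is a placeholder, and the subsample fraction and threshold are conventional illustrative values, not prescriptions from [28].

import numpy as np

def stability_selection(y, X, tau, select, n_subsamples=100, frac=0.5, threshold=0.6):
    # Run `select` on random subsamples and keep the variables whose
    # selection frequency exceeds `threshold`.
    rng = np.random.default_rng(0)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        counts += select(y[idx], X[idx], tau)   # boolean mask per run
    freq = counts / n_subsamples
    return freq >= threshold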
5. Technical Implementation Details
The screening step will use component-wise boosting estimation of quantile regression, following Fenske et al. [19]. Consider the general linear quantile regression model
$$Q_{y_i}(\tau \mid x_i) = \eta(x_i) = x_i^{\top}\beta(\tau), \qquad (2)$$
where $y_i$ and $x_i$ are the dependent and independent variables (the latter collected in the matrix $X$) and $\tau \in (0,1)$ is the quantile of interest.
Boosting can be viewed as a functional gradient descent method that minimises the constrained empirical risk function $\sum_{i=1}^{n}\rho(y_i,\eta(x_i))$, where $\rho$ is some suitable loss function. The $\tau$th quantile regression is obtained when the so-called check function is used as empirical risk:
$$\rho_{\tau}(y_i,\eta(x_i)) = \big(y_i - \eta(x_i)\big)\Big(\tau - I\big(y_i - \eta(x_i) < 0\big)\Big). \qquad (3)$$
In the notation above we intentionally use the general additive predictor $\eta(x_i) = \sum_{j} f_j(x_{ij})$, since it allows for generalisation of the approach to nonlinear and indeed nonparametric versions of the quantile regression problem. Since the check function is used to define the conventional linear quantile regression estimator of Koenker and Bassett [29], using it as an empirical risk function solves an equivalent optimisation problem.
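For reference, the check function (3) is a one-line computation; the Python/numpy fragment below is a direct transcription of the formula, not library code.

import numpy as np

def check_loss(u, tau):
    # rho_tau(u) = u * (tau - I(u < 0)), where u are residuals y - eta.
    return u * (tau - (u < 0))

# Empirical risk of a candidate predictor eta at quantile tau:
# risk = check_loss(y - eta, tau).sum()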
The boosting algorithm is initialised by an initial value for the predictor, for example, $\hat{\eta}^{[0]} \equiv 0$. This implies an initial evaluation for the underlying functions $\hat{f}_j^{[0]}$. In this case all underlying functions will be linear. Typically one starts with an offset set to the unconditional mean of the response variable, but in the quantile regression the unconditional median is used instead (see [19] for details and justification of this choice).
Let $\hat{g}_j^{[m]}$ and $\hat{f}_j^{[m]}$ denote the evaluations of the corresponding learners (in this case linear functions) for component $j$ at iteration $m$. Here $\hat{g}_j^{[m]}$ represents the learner (i.e., linear function) fitted to the current "residuals" (the negative gradients), while $\hat{f}_j^{[m]}$ is the "global" evaluation of the same function (see the following algorithm).
Then the component-wise boosting algorithm iteratively goes through the following steps.
(1) Compute the negative gradient of the empirical risk function evaluated at the current function estimate (at every iteration $m$, for observations $i = 1,\dots,n$):
$$u_i^{[m]} = -\left.\frac{\partial\rho_{\tau}(y_i,\eta)}{\partial\eta}\right|_{\eta=\hat{\eta}^{[m-1]}(x_i)} = \begin{cases}\tau, & y_i - \hat{\eta}^{[m-1]}(x_i) \ge 0,\\ \tau - 1, & \text{otherwise}.\end{cases} \qquad (4)$$
(2) Use the previously calculated negative gradients to fit the underlying function for each independent variable (component); here $\hat{g}_j^{[m]}$ is fitted to the current negative gradient values $u_i^{[m]}$ at iteration $m$. Find the best fitting base learner $j^{*} = \arg\min_{j}\sum_{i=1}^{n}\big(u_i^{[m]} - \hat{g}_j^{[m]}(x_{ij})\big)^2$.
(3) Update the best fitting base learner for a given step size $\nu$:
$$\hat{f}_{j^{*}}^{[m]} = \hat{f}_{j^{*}}^{[m-1]} + \nu\,\hat{g}_{j^{*}}^{[m]}, \qquad \hat{f}_{j}^{[m]} = \hat{f}_{j}^{[m-1]}\ \text{for}\ j \ne j^{*}. \qquad (5)$$
The algorithm iterates between steps (1) and (3) until a maximum number of iterations $m_{stop}$ is reached. The algorithm described above needs an updating step size $\nu$; in this application we use a small fixed value. See Kostov [10] and references therein for a discussion of this choice and a demonstration that the final results are insensitive to a wide range of choices. The other element of interest is the criterion used to decide which is the "best fitting" component in step (2). Here we use the $L_2$ norm (see above), but other choices are also possible. The greatest advantage of the $L_2$ norm is that the base learners can be updated by simple least squares fitting, which is computationally fast and convenient (see [19]). In this particular case, since we use linear quantile regression, updating the base learners amounts to applying univariate least squares.
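Putting steps (1)-(3) together, a compact numpy implementation with univariate linear base learners might look as follows (a sketch only: the columns of $X$ are assumed centred and nondegenerate, the offset is the unconditional median as described above, and the values of $\nu$ and $m_{stop}$ are illustrative).

import numpy as np

def cw_boost_qr(y, X, tau, nu=0.1, m_stop=500):
    # Component-wise boosting for linear quantile regression with
    # no-intercept univariate least squares base learners.
    n, p = X.shape
    beta = np.zeros(p)
    offset = np.median(y)                        # offset, as in Fenske et al. [19]
    eta = np.full(n, offset)
    col_ss = (X ** 2).sum(axis=0)                # precomputed ||x_j||^2
    for _ in range(m_stop):
        u = np.where(y - eta >= 0, tau, tau - 1.0)   # negative gradients, eq. (4)
        b = X.T @ u / col_ss                     # univariate LS fits
        j = np.argmax(b ** 2 * col_ss)           # best-fitting component (L2 criterion)
        beta[j] += nu * b[j]                     # update, eq. (5)
        eta = offset + X @ beta
    return beta, beta != 0                       # coefficients and the selected set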
A regularised linear quantile regression estimator can be formally defined as
$$\hat{\beta}(\tau) = \arg\min_{\beta}\ \sum_{i=1}^{n}\rho_{\tau}\big(y_i - x_i^{\top}\beta\big) + \lambda\sum_{j=1}^{p} p\big(|\beta_j|\big), \qquad (6)$$
where $\beta$ is the vector of the linear coefficients pertaining to the covariates, that is, $\beta = (\beta_1,\dots,\beta_p)^{\top}$, and $p(\cdot)$ is a given penalty function. The shrinkage effect is determined by the positive penalty parameter $\lambda$, which needs to be chosen according to some criterion (typically an information criterion or cross-validation).
The SCAD penalty is symmetric around the origin (i.e., $p_{\lambda}(\theta) = p_{\lambda}(-\theta)$). It is defined as follows:
$$p_{\lambda}(\theta) = \begin{cases} \lambda|\theta|, & |\theta| \le \lambda,\\ -\dfrac{|\theta|^{2} - 2a\lambda|\theta| + \lambda^{2}}{2(a-1)}, & \lambda < |\theta| \le a\lambda,\\ \dfrac{(a+1)\lambda^{2}}{2}, & |\theta| > a\lambda, \end{cases} \qquad (7)$$
where $\lambda$ and $a > 2$ are tuning parameters. In this paper we will fix $a$, following Zou and Yuan [30], which helps us avoid searching for optimal tuning parameters over a two-dimensional grid; for this reason we suppress $a$ in the notation below.
The SCAD estimator can then be formally defined as
$$\hat{\beta}_{SCAD}(\tau) = \arg\min_{\beta}\ \sum_{i=1}^{n}\rho_{\tau}\big(y_i - x_i^{\top}\beta\big) + \sum_{j=1}^{p} p_{\lambda}\big(|\beta_j|\big). \qquad (8)$$
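The piecewise penalty (7) translates directly into code; the numpy sketch below is a transcription of the formula, with $a = 3.7$ given only as the conventional default of Fan and Li [21], not necessarily the value used in this paper.

import numpy as np

def scad_penalty(theta, lam, a=3.7):
    # SCAD penalty of Fan and Li [21], evaluated elementwise; a > 2.
    t = np.abs(theta)
    linear = lam * t
    quad = -(t ** 2 - 2 * a * lam * t + lam ** 2) / (2 * (a - 1))
    const = (a + 1) * lam ** 2 / 2
    return np.where(t <= lam, linear, np.where(t <= a * lam, quad, const))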
Straightforward implementation of regularised estimators is however computationally demanding. The main issue is that expensive repeated optimisation calls are needed to select the regularisation parameter(s), typically via some form of cross-validation. Furthermore, the nonconvex nature of the SCAD optimisation problem can lead to a considerable increase in computation time at some quantiles, particularly when a larger number of spatial weighting matrices is retained by the screening step, which is consistent with the results of Wu and Liu [23]. In order to select the optimal amount of regularisation we need some criterion. Given the computational costs of SCAD estimation, information criteria would be preferable. Here we will employ the g-prior Minimum Description Length (gMDL) criterion used in Kostov's [10] boosting application. This choice is however dictated mostly by computational reasons, and to the best of our knowledge there is no evidence (such as simulation studies) to ascertain the performance of this criterion in empirical studies of nonlinear models.
The adaptive lasso estimator for the linear quantile regression can be defined as a weighted lasso problem in the following way:
$$\hat{\beta}_{AL}(\tau) = \arg\min_{\beta} \sum_{i=1}^{n} \rho_\tau\left(y_i - x_i^{\top}\beta\right) + \lambda \sum_{j=1}^{p} w_j |\beta_j|,$$
where the penalty is a weighted $\ell_1$ norm and the weights are given by $w_j = 1/|\tilde{\beta}_j|^{\gamma}$ for some $\gamma > 0$, where $\tilde{\beta}$ is a vector of initial estimates for the parameters. In this case $\tilde{\beta}$ will be obtained by an unpenalised quantile regression. The conventional lasso estimator is a particular case when all weights are equal, rather than adaptively chosen.
The adaptive lasso when implemented in a quantile regression setting retains the oracle property [30] similarly to the mean regression case. Therefore the adaptive lasso estimator is a reasonable
choice in this setting, particularly bearing in mind the computational cost associated with the transformation step. Furthermore, $\ell_1$ norm estimators are by far the most widely studied regularisation
estimators for quantile regression (see, e.g., [23, 24, 30] for variable selection applications).
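One way the weighted problem can be posed directly is sketched below with the cvxpy modelling library (an illustrative choice: the paper's own computations are in R [40]); writing the check loss as $\rho_\tau(u) = \max(\tau u, (\tau - 1)u)$ keeps the whole problem a linear programme:

    import numpy as np
    import cvxpy as cp

    def adaptive_lasso_qr(X, y, tau, lam, beta_init, gamma=1.0):
        n, p = X.shape
        w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)   # adaptive weights
        beta = cp.Variable(p)
        r = y - X @ beta
        loss = cp.sum(cp.maximum(tau * r, (tau - 1) * r)) / n
        penalty = lam * cp.sum(cp.multiply(w, cp.abs(beta)))
        cp.Problem(cp.Minimize(loss + penalty)).solve()
        return beta.value

Here beta_init would come from the unpenalised first-stage quantile regression described above.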
Li and Zhu [22] proposed an algorithm to estimate the whole regularisation path for lasso type of quantile regression problem. Their proposal is potentially valuable since it can be applied to non-
(or semi-) parametric additive quantile regression models and therefore results in a much more general approach, intrinsically immune to functional form misspecification. The advantage to such
algorithms is that since they exploit the piecewise linear property of the regularisation path, the latter can be obtained at a fraction of the computational cost of the overall regularised
estimator. This facilitates implementation of cross-validation and/or information criteria.
The elastic net [31] penalty is a combination of the $\ell_1$ and $\ell_2$ norms, and for the quantile regression the resulting estimator can be written as
$$\hat{\beta}_{EN}(\tau) = \arg\min_{\beta} \sum_{i=1}^{n} \rho_\tau\left(y_i - x_i^{\top}\beta\right) + \lambda_1 \sum_{j=1}^{p} |\beta_j| + \lambda_2 \sum_{j=1}^{p} \beta_j^2.$$
An important property of the elastic net penalty is that the inclusion of the $\ell_2$ norm induces a grouping effect in that correlated variables are grouped together. This would help avoid spuriously
selecting only one variable from a group of highly correlated variables. Given that in many empirical problems the spatial weighting matrices considered can lead to highly correlated designs, it
would be desirable to avoid such a pitfall. One should note however that elastic net penalisation could be expected to retain more variables compared to the other approaches.
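The elastic net variant differs from the earlier sketch only in the penalty; a corresponding illustration under the same assumptions:

    import cvxpy as cp

    def elastic_net_qr(X, y, tau, lam1, lam2):
        n, p = X.shape
        beta = cp.Variable(p)
        r = y - X @ beta
        loss = cp.sum(cp.maximum(tau * r, (tau - 1) * r)) / n
        # The ridge term lam2 induces the grouping of correlated covariates
        # (here, similar spatial lags) discussed in the text.
        cp.Problem(cp.Minimize(loss + lam1 * cp.norm1(beta)
                               + lam2 * cp.sum_squares(beta))).solve()
        return beta.value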
The least squares approximation (LSA) estimator [32] is given by
$$\hat{\beta}_{LSA} = \arg\min_{\beta}\; \left(\beta - \tilde{\beta}\right)^{\top} \hat{V} \left(\beta - \tilde{\beta}\right) + \lambda \sum_{j=1}^{p} |\beta_j|,$$
where $\hat{V}$ is the second derivative of the unpenalised loss function, evaluated at the unregularised estimates $\tilde{\beta}$. It is technically obtained as an approximation based on a first order Taylor series expansion (see [32]).
In the case of quantile regression, the respective loss function (i.e., the check function $\rho_\tau$) is not sufficiently smooth. Nevertheless, as long as a consistent covariance matrix estimate pertaining to the unpenalised problem can be obtained, the corresponding LSA estimator defined above exists. Furthermore, when the regularisation parameters are chosen optimally it possesses the oracle property (see [32] for a formal proof). Since the LSA objective is essentially a linear lasso type of problem, it can be estimated using standard methods. In particular the computationally
efficient least angle regression algorithm (LARS) of Efron et al. [33] can be used to compute the regularisation path. Here we will apply the BIC-type tuning parameter selector of Wang et al. [34] to
select the optimal amount of shrinkage. Application of the LSA to a quantile regression requires a covariance matrix estimator for the latter. Any consistent estimator would be appropriate. In this
paper we will use the kernel-based covariance estimator proposed in Newey and Powell [35].
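Since the LSA objective is an ordinary lasso problem after a Cholesky factorisation of $\hat{V}$, the whole regularisation path can be sketched in a few lines (scikit-learn's lars_path is assumed here purely for illustration):

    import numpy as np
    from sklearn.linear_model import lars_path

    def lsa_path(beta_tilde, V_hat):
        # (beta - beta_tilde)' V_hat (beta - beta_tilde) equals
        # ||R beta_tilde - R beta||^2 with V_hat = R'R, so LSA reduces to a
        # lasso with design R and pseudo-response R beta_tilde; LARS then
        # gives the full path cheaply.
        R = np.linalg.cholesky(V_hat).T
        alphas, _, coefs = lars_path(R, R @ beta_tilde, method='lasso')
        return alphas, coefs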
6. Study Design and Implementation Details
For comparative purposes we follow closely the design outlined in Kostov [10]. This involves using the same dataset, model specification as well as a set of competing alternative spatial weighting
matrices. Since all these are discussed in some detail in Kostov [10] we will only briefly sketch them here.
The corrected version of the popular Boston housing dataset [36] is used. It consists of 506 observations and incorporates some corrections and additional latitude and longitude information, due to
Gilley and Pace [37]. This dataset contains one observation for each census tract in the Boston Standard Metropolitan Statistical Area. The variables comprise of proxies for pollution, crime,
distance to employment centres, geographical features, accessibility, housing size, age, race, status, tax burden, educational quality, zoning, and industrial externalities. A detailed description of
the variables, to be used in this study, is presented in Table 1.
The basic model specification is as implemented in Kostov [10], using the variables described in Table 1.
The basic specification mentioned previously is augmented with alternative candidate spatial weighting matrices, constructed using the longitude and latitude information. The set of alternative
spatial weighting matrices is constructed using inverse distance raised on a power weights specification and nearest neighbours definition of the neighbourhood scheme. We will adopt the naming
conventions used in Kostov [10] combining the codes for the neighbourhood definition and the weighting scheme to refer to the corresponding spatial weighting matrix and the resulting additional
variables to be included in the boosting model. All these variables are named using the following convention: nxwy, where x is the number of neighbours and y is the weighting parameter (which is the
inverse power of the weight decay). For example, the spatial weighting matrix with the nearest 50 observations as neighbours and inverse squared distance weights as well as the resulting transformed
variable will be denoted as n50w2. We employ all values for number of neighbours from 1 to 50 and evaluate w in the interval [0.4, 4] using increments of 0.1. In simple words this means that we are
combining 50 possible neighbourhood definitions with 37 alternatives for the weighting parameter resulting in 1,850 alternative spatial weighting matrices to be considered simultaneously.
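A sketch of how one such matrix can be constructed (the row standardisation, like every name here, is an assumption of this sketch):

    import numpy as np
    from scipy.spatial.distance import cdist

    def knn_inverse_distance_W(coords, k, w):
        # 'n{k}w{w}': the k nearest neighbours of each tract get weight
        # d**(-w); everything else gets zero.
        D = cdist(coords, coords)
        np.fill_diagonal(D, np.inf)        # a tract is not its own neighbour
        W = np.zeros_like(D)
        for i in range(len(coords)):
            nn = np.argsort(D[i])[:k]
            W[i, nn] = D[i, nn] ** (-w)
        return W / W.sum(axis=1, keepdims=True)   # row-standardise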
Kostov [10] projects the spatially weighted dependent variable into the column vector space of the spatially weighted independent variables, by taking the fitted values from a least-squares
regression to obtain the transformed variables, named according to the previous convention. As discussed before here we need to replace this first step with a quantile regression defined over a
pre-determined quantile to obtain a model augmented with the alternative spatial weighting matrices. The second stage is then implemented in two consecutive steps. First we apply a component-wise
boosting quantile regression, defined over the same quantile (as in the first stage) to the augmented model. This is the screening step that reduces the set of variables to be considered in the
model. Then a regularized quantile regression (defined over the same quantile) is applied to the screened dataset. The previous three steps (transformation, screening and estimation) can be run over
any prespecified quantile, and their consecutive implementation defines our estimator.
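Schematically, the first (instrumentation) stage for a single candidate matrix W might look as follows (statsmodels' QuantReg is assumed here for illustration; the paper's computations are in R [40]):

    import statsmodels.api as sm

    def instrument_spatial_lag(W, X, y, tau):
        # Project the spatially lagged response Wy onto the spatially lagged
        # covariates WX with a quantile regression at the same tau; the
        # fitted values are the transformed variable entering the augmented
        # model (one such column per candidate W).
        fit = sm.QuantReg(W @ y, sm.add_constant(W @ X)).fit(q=tau)
        return fit.fittedvalues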
In the present setting some caution should be exercised in applying the estimation step. Note that in conditionally parametric models, there is a certain trade-off between variables and spatial
dependence. The spatial dependence structure could approximate the effect of missing variables, provided these are spatially correlated. Therefore simultaneously shrinking the coefficients of both
variables and spatial lags will be a manifestation of this trade-off. Whenever the model contains such related terms in both the spatial part (i.e., spatial weighting matrices) and in the regression
part (variables the effect of which could be approximated by these spatial weighting matrices) simultaneous shrinkage is undesirable. The danger here is that one can spuriously exclude important
variables and approximate their effect by additional spatial terms. Note however that if we assume that the regression part is given, this trade-off will disappear. Ideally one would want to
eliminate this trade-off. In order to avoid the impact of the approximation on this trade-off we suggest a two-step implementation of the estimation step. In the initial step only the spatial lag
coefficients are penalised, while in the following final step all coefficients are penalised. In this way the initial step should select the appropriate spatial dependence structure, while the final
step would perform final variable selection. Hence the initial step makes structural inference about the spatial part conditional on the regression part of the model. If the screening step has
produced a model that is reasonably close to the true one, then the proposed approach should be able to discover the true underlying structure. Alternatively one may wish to implement an iterative
estimation in which the estimator alternates between steps in which only the spatial structure is penalised and steps in which only the regression part is penalised, until convergence (defined in terms
of obtaining a stable structure in that no more terms are eliminated). Such steps can be viewed as conditioning one part (spatial or regression) of the model on the other hence avoiding the
trade-off. The latter approach would however be computationally more expensive.
Another issue is the highly correlated design of the spatial quantile regression model, when there are large number of potential spatial weighting matrices. Since in principle the variable selection
methods rely upon marginal correlations, they could fail to perform in such highly correlated designs. For the mean regression model recent contributions by Wang [38] and Cho and Fryzlewicz [39] have
suggested alternative methods that overcome such a reliance on marginal correlations and hence are applicable to highly correlated designs. It is however unclear how such methods can be extended to
the quantile regression case. The two-step approach adopted in this paper conditions selection for the spatial and hedonic variables on the other part of the model and hence reduces this trade-off.
Such an approach is justified if the regression part of the model is correctly specified, but could be suboptimal if this is not the case. This is of course an area that deserves further research.
7. Results
We implement the proposed estimator for the 0.1 to 0.9 quantiles with a step of 0.1 (i.e., 9 different quantile regressions). Table 2 presents comparative computational time details for the different
procedures. All these are calculated from the first of the considered quantiles (i.e., the 0.1th one) and are given as a guidance only since the actual computational time could vary according to the
nature of the optimisation problem that can change over different quantiles. All computations are undertaken using the statistical programming language R [40] on Intel Core2 2.13GHz processor with
2Gb RAM, not using any parallel computation. Parallelising some of the more computationally demanding tasks and/or using compiled code could considerably reduce the computational time. Furthermore
it cannot be claimed that the actual implementation of these procedures is optimised in terms of computational time.
The instrumentation step is the most time-consuming task. In our implementation it takes over 30 minutes for 1850 spatial weighting matrices. In many empirical problems one would probably consider
much smaller number of alternative spatially weighting matrices. Furthermore most of the time in this step is spent on creating the spatially weighted dependent and independent variables, rather than
fitting the actual quantile regressions.
The actual boosting procedure requires running the boosting algorithm for a large number of iterations and then calculating a stopping criterion to decide upon the estimated structure. The boosting
algorithm is very efficient computationally. The stopping criterion calculation however takes considerable time. Efficient parallel implementations for the latter exist, and these can considerably
reduce the computation time.
The time needed to calculate the stopping criterion is directly proportional to the number of boosting steps (which is effectively the number of alternative “models” for which it is calculated).
Since in this case, at all considered quantiles, we need at least three times fewer iterations than the 5,000 used here, a practical implementation would have taken 6-7 minutes rather than the 18 reported
in Table 2.
We apply the stability selection to the already reduced (in the instrumentation step) dataset. Yet again this is relatively time-consuming procedure, but it can be parallelised for further
computational gains.
One has to be careful in directly comparing these implementations of the estimation step: as the instrumentation step mentioned previously demonstrates, calculating the stopping criterion (i.e., the optimal penalty parameters) is by far the most computationally demanding part of these procedures, and the reported implementations use different methods for this.
With regard to the estimation methods we report separately the computation times for step one (where only the spatial weighting matrix coefficients are penalised) and the consecutive second step
where all the coefficients are penalised. As it is to be expected the LSA is the fastest method. This is due to two underlying facts. The first is that it uses the efficient least angle regression
algorithm [33] while the other refers to use of the BIC-type tuning parameter selector of Wang et al. [34] which is easy to compute.
The full path estimation for adaptive lasso, accompanied by cross-validation to choose the optimal amount of regularization, appears to be the most computationally demanding estimation method. Most
of the computational costs however come from the use of cross-validation. Furthermore this is the most universally applicable method in the sense that many of the other methods can run into
difficulties during the optimisation (at different quantiles) which can considerably inflate their computational costs.
We present computational details for implementing SCAD with gMDL over a predefined grid of 50 penalty values. Although the computational times appear acceptable, one has to take into account some
caveats. The nonconvex nature of the SCAD optimisation problem means that in some cases the actual computation time can increase considerably (with a factor of over 100 in some cases). Furthermore we
have opted to fix one of the regularisation parameters which artificially reduces the computational time. Another important point to make is that no set of penalisation parameters is ex ante
guaranteed to span the whole regularisation path. In our implementation we run a preliminary SCAD estimation over a range of such values designed to identify a feasible set that does span most of the
regularisation path and then manually select the grid of such values. In cases where the optimisation is difficult, this can lead to considerable increase of computational time. Therefore a path
estimation algorithm for the SCAD estimator for quantile regression is essential if a reliable implementation of this method is to be designed. The use of the gMDL as an optimality criterion is also
somewhat ad hoc in that there is no firm evidence on its performance for this type of problem, and it is mostly dictated by computational reasons (since cross-validation, for example, would be very computationally expensive).
The elastic net implementation is reasonably efficient. Both the BIC and the generalised approximate cross-validation yield the same models. The reported computational costs refer to the routines
that compute internally both of the above criteria, but this only marginally increases the computational costs. Most of the computational load comes from the double regularisation needed to solve for
the two underlying penalties.
The component-wise boosting algorithm manages to achieve considerable reduction in the model space. It retains between three and eleven spatial weighting matrices across the different quantiles. We
will not present these intermediate results here for brevity reasons, but details are available upon request. This intermediate step yields a reduced model space that can be explored for the
underlying structure as discussed in the methodology section. Table 3 presents the results from the stability selection applied to the prescreened model (i.e., after the boosting application).
Typically stability selection applies a prespecified probability threshold to select variables. Here instead of proper stability selection we present the corresponding inclusion probabilities for the
spatial weighting matrices. We omit spatial weighting matrices with inclusion probability less than 10%. Full results are available upon request. Table 3 provides a background against which the
actual estimation results can be evaluated. If one was to use a threshold of 0.6, most quantiles would have resulted in a single spatial weighting matrix being selected. Such a choice would however
have been based solely on the component-wise boosting algorithm, which as already discussed may not possess the oracle property and therefore may be inappropriate for structure discovery purposes. The
estimation results from the least squares approximation (LSA), full-path adaptive lasso (full path), and smoothly clipped absolute deviations (SCAD) by quantile are presented in Tables 4–12. The
elastic net results are not presented. In simple terms due to the implicit correlation penalty, the elastic net possesses a grouping property in that it groups together correlated variables. In this
case due to the highly correlated nature of the spatial weighting matrices the elastic net groups them together and cannot exclude spatial weighting matrices whenever they are similar to the ones
already included in the model.
Bearing in mind the relative computational cost for the different estimation approaches, it is useful to determine whether the LSA results differ substantially from the other approaches. Table 4
presents the results at the 0.1th quantile. With regard to the main (i.e., hedonic) variables the results are comparable across estimation methods. Where a variable is omitted by one of these
methods, it is either dropped by the other methods or found to be statistically insignificant. With regard to the spatial weighting matrices the stability selection approach (see Table 3) determined
that three such matrices have inclusion probability of over 0.5, but one such matrix, namely, n5w0.6 has a very high inclusion probability of 0.96. The estimation results reflect this in that n5w0.6
is selected by all methods, while the full path and the SCAD estimators also include another spatial weighting matrix. It hence looks like the LSA chooses a slightly more parsimonious model. One
should note however that the additional spatial weighting matrices chosen by the other two methods are statistically insignificant. Therefore all methods point to the same underlying model structure.
Table 5 presents the results for the 0.2th quantile. Again the LSA closely approximates the full path solution. The additional spatial weighting matrix selected by the full path estimation is
insignificant, and both methods lead to the same conclusions. The SCAD estimation however produces slightly different results. Most notably it chooses a different spatial weighting matrix. In terms
of implementation, the SCAD optimisation problem at this quantile did take longer to solve. Nevertheless, the spatial weighting matrix selected by SCAD is the one characterised by the highest inclusion
probability, according to the stability selection. Due to the somewhat ad hoc implementation of the SCAD estimation in this paper (because it is evaluated over a grid, instead of computing the whole
regularisation path and the use of the gMDL as stopping criterion) it is however difficult to evaluate the latter result. As far as the LSA is concerned however, it provides a reliable approximation
of the adaptive lasso estimator at a fraction of its computational cost.
The results for the 0.3th quantile (Table 6) are broadly similar across different methods. LSA and the adaptive lasso select the same spatial weighting matrices and although these show some small
differences in terms of statistical significance for the latter, the structural inference is essentially the same. Once again SCAD results yield a small difference with regard to the preferred
spatial weighting matrices.
At the 0.4th quantile (Table 7) all methods choose n3w0.4 together with a slightly different second spatial weighting matrix. With regard to the second retained spatial weighting matrix LSA is very
similar to the SCAD (n6w0.5 versus n6w0.4) while the adaptive lasso selection is slightly different. Hence although LSA results are comparable to the other methods, the approximation to the adaptive
lasso is not as close as at the previously considered quantiles. One can note that in this case out of the 8 spatial weighting matrices originally retained by the boosting algorithm, 7 have a model
inclusion probability exceeding 0.1 (see Table 3). Therefore one can tentatively conclude that LSA provides a reasonable approximation to the adaptive lasso for quantile regression whenever there are
relatively few competing spatial weighting matrices. This property is to be expected given the trade-off between the spatial and the regression parts in a quantile regression model, that was
discussed in the methodology section. It is possible that if one applies a different method to control for this trade-off, instead of the two-step implementation of the oracle estimator, implemented
here, the quality of the LSA approximation may not deteriorate with increasing dimension of the spatial weighting matrices space.
The estimation results for the 0.5th quantile (see Table 8) are similar across the different methods. The adaptive lasso finds it difficult to discriminate between n5w0.4 and n6w0.4, but since these
two are very difficult to distinguish between one can conclude that the methods agree. Interestingly although SCAD produces the same spatial weighting matrix as LSA, the technical difficulty in
discriminating between two very similar spatial weighting matrices results in considerable increase in computational time.
Similarly at the 0.6th quantile (Table 9) results are consistent across methods. The LSA and adaptive lasso both select n6w0.6 with LSA also selecting another spatial weighting matrix, which is
however statistically insignificant. The SCAD selects n6w0.4, which by the way is the highest inclusion probability spatial matrix, according to the stability selection results (see Table 3). This
again raises the issue of the comparative performance of SCAD and adaptive lasso, but the results are nevertheless qualitatively similar.
Table 10 shows the estimation result for the 0.7th quantile. This is where there are considerable differences between the different methods. There is disagreement about the sign of TAX between SCAD
and the other two methods. Furthermore all three methods choose different spatial weighting matrices. Once again the source for such difference is probably the size of the (screened) spatial
weighting matrices space, which consists of 11 such matrices, 8 of which have stability selection-derived inclusion probability of at least 0.2.
At the 0.8th quantile all methods select n6w1.1 (see Table 11). SCAD selects another spatial weighting matrix, which leads to some differences in the regression part. However since this additional
spatial weighting matrix is not statistically significant, omitting it would produce results consistent with the other methods.
Finally Table 12 presents the results for the 0.9th quantile which differ considerably amongst methods. The main reason for these differences is that different main hedonic variables are selected by
different methods which results in differences for the spatial part. The latter raises the issue of the trade-off between the regression part and the spatial dependence.
8. Conclusions
This paper proposes methods for selecting spatial weighting matrix in a quantile regression context. We build upon previous work in this area. In discussing our proposal we outline the different
alternatives and the potential different implementations of the same set of ideas. Our proposals are designed to reduce the arbitrariness of choice of spatial weighting matrix and the impact of the
trade-off between functional form and spatial dependence specifications. The spatial quantile regression is already a quite flexible model allowing for different impacts, across different quantiles.
Our procedure introduces additional flexibility in not only different degrees of spatial dependence but also different spatial weighting matrices for different quantiles, resulting in potentially
interesting inferences about the nature of the underlying process.
The methodology proposed in this paper consists of three steps: instrumentation, screening, and estimation. Several different alternative methods for the estimation step are explored. This allows for
conclusions to be drawn with regard to the performance of these estimators in different circumstances. In general terms, the proposed methods are tractable with small and moderate size samples, where
the advantages of the spatial quantile regression model are most pronounced.
1. P. Kostov, "A spatial quantile regression hedonic model of agricultural land prices," Spatial Economic Analysis, vol. 4, no. 1, pp. 53–72, 2009.
2. F. Stetzer, "Specifying weights in spatial forecasting models: the results of some experiments," Environment & Planning A, vol. 14, no. 5, pp. 571–584, 1982.
3. L. Anselin, "Under the hood: issues in the specification and interpretation of spatial regression models," Agricultural Economics, vol. 27, no. 3, pp. 247–267, 2002.
4. B. Fingleton, "Externalities, economic geography, and spatial econometrics: conceptual and modeling developments," International Regional Science Review, vol. 26, no. 2, pp. 197–207, 2003.
5. G. Holloway and M. L. A. Lapar, "How big is your neighbourhood? Spatial implications of market participation among Filipino smallholders," Journal of Agricultural Economics, vol. 58, no. 1, pp. 37–60, 2007.
6. J. P. LeSage and O. Parent, "Bayesian model averaging for spatial econometric models," Geographical Analysis, vol. 39, no. 3, pp. 241–267, 2007.
7. J. P. LeSage and M. M. Fischer, "Spatial growth regressions: model specification, estimation and interpretation," Spatial Economic Analysis, vol. 3, no. 3, pp. 275–304, 2008.
8. J. Crespo-Cuaresma and M. Feldkircher, "Spatial filtering, model uncertainty and the speed of income convergence in Europe," in Annual Meeting of the Austrian Economic Association, 2010.
9. T. S. Eicher, A. Lenkoski, and A. E. Raftery, "Bayesian model averaging and endogeneity under model uncertainty: an application to development determinants," UW Working Paper UWEC-2009-19, University of Washington, Department of Economics, 2009.
10. P. Kostov, "Model boosting for spatial weighting matrix selection in spatial lag models," Environment and Planning B, vol. 37, no. 3, pp. 533–549, 2010.
11. H. H. Kelejian and I. R. Prucha, "A generalized spatial two-stage least squares procedure for estimating a spatial autoregressive model with autoregressive disturbances," Journal of Real Estate Finance and Economics, vol. 17, no. 1, pp. 99–121, 1998.
12. J. Zietz, E. N. Zietz, and G. S. Sirmans, "Determinants of house prices: a quantile regression approach," Journal of Real Estate Finance and Economics, vol. 37, no. 4, pp. 317–333, 2008.
13. T. H. Kim and C. Muller, "Two-stage quantile regression when the first stage is based on quantile regression," Econometrics Journal, vol. 7, pp. 218–231, 2004.
14. V. Chernozhukov and C. Hansen, "Instrumental quantile regression inference for structural and treatment effect models," Journal of Econometrics, vol. 132, no. 2, pp. 491–525, 2006.
15. V. Chernozhukov and C. Hansen, "Instrumental variable quantile regression: a robust inference approach," Journal of Econometrics, vol. 142, no. 1, pp. 379–398, 2008.
16. V. Chernozhukov and H. Hong, "An MCMC approach to classical estimation," Journal of Econometrics, vol. 115, no. 2, pp. 293–346, 2003.
17. O. Arias, K. F. Hallock, and W. Sosa-Escudero, "Individual heterogeneity in the returns to schooling: instrumental variables quantile regression using twins data," Empirical Economics, vol. 26, no. 1, pp. 7–40, 2001.
18. J. García, P. J. Hernández, and A. López-Nicolás, "How wide is the gap? An investigation of gender wage differences using quantile regression," Empirical Economics, vol. 26, no. 1, pp. 149–167, 2001.
19. N. Fenske, T. Kneib, and T. Hothorn, "Identifying risk factors for severe childhood malnutrition by boosting additive quantile regression," Technical Report No. 52, Department of Statistics, LMU München, 2009.
20. R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society B, vol. 58, pp. 267–288, 1996.
21. J. Fan and R. Li, "Variable selection via nonconcave penalized likelihood and its oracle properties," Journal of the American Statistical Association, vol. 96, no. 456, pp. 1348–1360, 2001.
22. Y. Li and J. Zhu, "L1-norm quantile regression," Journal of Computational and Graphical Statistics, vol. 17, no. 1, pp. 163–185, 2008.
23. Y. Wu and Y. Liu, "Variable selection in quantile regression," Statistica Sinica, vol. 19, no. 2, pp. 801–817, 2009.
24. A. Belloni and V. Chernozhukov, "L1-penalized quantile regression in high-dimensional sparse models," WP10/09, Centre for Microdata Methods and Practice, Institute for Fiscal Studies, 2009.
25. E. Greenshtein and Y. Ritov, "Persistence in high-dimensional linear predictor selection and the virtue of overparametrization," Bernoulli, vol. 10, no. 6, pp. 971–988, 2004.
26. J. Fan and J. Lv, "Sure independence screening for ultrahigh dimensional feature space," Journal of the Royal Statistical Society B, vol. 70, no. 5, pp. 849–911, 2008.
27. L. Wasserman and K. Roeder, "High-dimensional variable selection," Annals of Statistics, vol. 37, no. 5, pp. 2178–2201, 2009.
28. N. Meinshausen and P. Bühlmann, "Stability selection," Journal of the Royal Statistical Society B, vol. 72, no. 4, pp. 417–473, 2010.
29. R. Koenker and G. Bassett, "Regression quantiles," Econometrica, vol. 46, pp. 33–50, 1978.
30. H. Zou and M. Yuan, "Regularized simultaneous model selection in multiple quantiles regression," Computational Statistics and Data Analysis, vol. 52, no. 12, pp. 5296–5304, 2008.
31. H. Zou and T. Hastie, "Regularization and variable selection via the elastic net," Journal of the Royal Statistical Society B, vol. 67, no. 2, pp. 301–320, 2005.
32. H. Wang and C. Leng, "Unified LASSO estimation by least squares approximation," Journal of the American Statistical Association, vol. 102, no. 479, pp. 1039–1048, 2007.
33. B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, "Least angle regression," Annals of Statistics, vol. 32, no. 2, pp. 407–499, 2004.
34. H. Wang, R. Li, and C. L. Tsai, "Regression coefficient and autoregressive order shrinkage and selection via the lasso," Journal of the Royal Statistical Society B, vol. 69, no. 1, pp. 63–78, 2007.
35. W. K. Newey and J. L. Powell, "Efficient estimation of linear and type I censored regression models under conditional quantile restrictions," Econometric Theory, vol. 6, pp. 295–317, 1990.
36. D. Harrison and D. L. Rubinfeld, "Hedonic housing prices and the demand for clean air," Journal of Environmental Economics and Management, vol. 5, pp. 81–102, 1978.
37. O. W. Gilley and R. K. Pace, "On the Harrison and Rubinfeld data," Journal of Environmental Economics and Management, vol. 31, pp. 403–405, 1996.
38. H. Wang, "Factor profiling for ultra high dimensional variable selection," Working Paper, 2011, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1613452.
39. H. Cho and P. Fryzlewicz, "High-dimensional variable selection via tilting," 2010, http://stats.lse.ac.uk/fryzlewicz/tilt/tilt.pdf.
40. R Development Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria, 2009, http://www.R-project.org.
9. A farmer wished to construct a rectangular fence with one side along a barn. The farmer has 240 feet of fence and wishes to have the length be three times the width. If the side of the barn
accounts for the length of the fence, find the dimensions of the fence.
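On the usual reading of this setup (an assumption, since the wording is ambiguous), the barn replaces one of the length sides, so the 240 feet of fencing must cover the remaining length plus the two widths:
$$L + 2W = 240, \qquad L = 3W \;\Rightarrow\; 5W = 240, \quad W = 48 \text{ ft}, \; L = 144 \text{ ft}.$$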
[SciPy-User] 4-D gaussian mixture model.
Éric Depagne eric@depagne....
Fri Nov 26 10:00:10 CST 2010
Here is my problem.
I have a set of data made of 4 parameters: x, y, dx and dy.
I'd like to classify this set in the following way: put together all (x, y) that have similar (dx, dy).
I've had a look at the Gaussian mixture models implementation in scikit, and it seems to be what I need. But the examples I've found only fit y vs x.
In my case for instance, all my (x,y) would be in red, but some of the (dx,
dy) would point towards you, and some would point away from you, and I'd like
to sort the data according to this "parameter": the pointing direction.
How can I modify the example so that it fits 2 dims, keeping the first two as input? And does it make sense to use this kind of method? My knowledge of statistics is quite limited.
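Roughly, what I am after would look something like this (just a sketch; I am guessing at the current scikit-learn class names, and 'points.dat' is a placeholder):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    data = np.loadtxt('points.dat')            # columns: x, y, dx, dy

    # Fit the mixture on the (dx, dy) components only ...
    gmm = GaussianMixture(n_components=2).fit(data[:, 2:4])
    labels = gmm.predict(data[:, 2:4])

    # ... so each (x, y) inherits the label of its (dx, dy):
    towards = data[labels == 0, :2]
    away = data[labels == 1, :2]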
Many thanks.
An AZERTY keyboard is worth two.
Éric Depagne eric@depagne.org
Need a C program for the following question
You have ten boxes, each of which contains nine balls. The balls in one box each weigh 0.9 pounds; the balls in all the other boxes weigh exactly one pound each. You have an accurate scale in front of you, with which you can determine the exact weight, in pounds, of any given set of balls. How can you determine which of the ten boxes contains the lighter balls with only one weighing?
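One way to see the classic answer, sketched as arithmetic rather than as the requested C program: take $i - 1$ balls from box $i$ for $i = 1, \dots, 10$ (possible, since no box is asked for more than its nine balls) and weigh all $0 + 1 + \dots + 9 = 45$ of them together. If box $k$ holds the light balls, exactly $k - 1$ of the weighed balls are each $0.1$ pounds short, so the scale reads
$$W = 45 - 0.1\,(k - 1) \quad\Longrightarrow\quad k = 1 + \frac{45 - W}{0.1},$$
and a reading of exactly $45$ pounds points to box $1$.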
Rooks in three dimensions
Given is an infinite 3-dim chess board and a black king. What is the minimum number of white rooks necessary that can guarantee a checkmate in a finite number of moves?
(In 3-dimensional chess rooks move in straight lines, entering each cube through a face and departing through the opposite face. Kings can move to any cube which shares a face, edge, or corner with
the cube that the king starts in. See wikipedia).
Update: Comments (below the line) give interesting information on this problem including a connection to Conway's angel problem with 2-angel (Zare) and interesting comments towards positive answer
and connection with "kinggo" (Elkies). Also a link to an identical SE question is provided http://math.stackexchange.com/questions/155777/guaranteed-checkmate-with-rooks-in-high-dimensional-chess
It's closed because there seem to be too many trigger-happy people on the local `closing committee'. This is a bit worrying. – Nikita Sidorov Jun 8 '12 at 18:17
META: tea.mathoverflow.net/discussion/1384/rooksinthreedimensions – Will Jagy Jun 8 '12 at 20:53
In three dimensions a finite number of rooks should suffice by projection to "Kinggo": the King starts in the middle of an $N \times N$ square, then after each King move the opponent gets to claim a square at the edge of the board; the King is known to lose by force ($N=33$ suffices for Kinggo, maybe here we'll use something like $N=45$ to accommodate the initial setup). The king is then restricted to an $N \times N \times \infty$ box, so $N^2$ rooks suffice to force checkmate, and actually $5N$ will do. So $200$ rooks win in ${\bf Z}^3$. In ${\bf Z}^4$ and above, I don't know. – Noam D. Elkies Jun 9 '12 at 0:57
Will Jagy, are you able to direct me to the part of the FAQ that stipulates that it is considered undesirable to post a MathOverflow question before leaving work, and then come back the next morning to deal with replies? I ask because some might be unaware of this rule, and, finding that it is in concord with their personal schedules, feel a certain compulsion to do so. – James Cranch Jun 9 '12 at 1:04
And to be clear, I don't think "asked on rec.puzzles in 1994" is a good enough reason to close a question. If you provided a link, maybe; but just a statement like that and then voting to close is not really becoming of a high-rep user such as yourself. And I don't think the policy here should be to close these questions, then have a high-rep user repost it with more details. Shouldn't we encourage the OP to alter the question instead? – Steve D Jun 9 '12 at 20:45
2 Answers
As far as I know, it is still open whether or not there is any finite number of rooks which can force checkmate. However, this is possible. I answered the question over at math.stackexchange describing a strategy which forces checkmate with 96 rooks. This is certainly not optimal. The strategy I describe is very wasteful, keeping some rooks in place even when they are not actually needed, so it can be improved at the expense of becoming more difficult to describe.
@George Lowther: Looked at your nice answer on math.stackexchange. Shouldn't it generalize to $n$ dimensions? As per your answer, the rooks should be placed far away from the king along the $n$-th coordinate. As per your answer, the initial steps would still allow the king to roam freely along the $n$-th coordinate but aim to restrict its movement projected to the first $n-1$ coordinates. Each rook, far away along the $n$-th coordinate, guards one square having the same $n$-th coordinate as the king. – user22202 Jun 11 '12 at 5:02
As in your answer, at the penultimate step, the king should be trapped in the middle of a $3^{n-1}$ cube, but be able to move freely along the $n$-th coordinate. (Each position on the boundary of this cube being guarded by a rook far away along the $n$-th coordinate.) Then place a rook far away along the $n$-th coordinate but having the same first $n-1$ coordinates as the king, to give checkmate. The initial number of rooks to begin with probably depends on $n$, no idea how exactly though. – user22202 Jun 11 '12 at 5:03
The following is for a finite board (the question actually assumes an infinite board).
For an $N\times N \times N$ board, wouldn't $2N$ rooks suffice? The idea comes from adapting the checkmate with two rooks for the two dimensional case in which the two rooks alternate rows
and force the king to the last rank.
For the three dimensional case, for each $y$ with $1\leq y \leq N$, place one rook at $(1,y,k)$ and another at $(2,y,k+1)$. Then, for $y$ going from $1$ to $N$, move the rook at $(1,y,k)$ to
the square $(1,y,k+2)$. Again, for $y$ going from $1$ to $N$, move the rook at $(2,y,k+1)$ to $(2,y,k+3)$. Each for loop over $y$ involves moving, alternately, the rooks with $x$ coordinate
$1$ by increasing their $z$ coordinate by $2$ units, or the rooks with $x$ coordinate $2$ by increasing their $z$ coordinate by $2$ units. We alternate, so that a for loop in which the rooks
with $x$ coordinate $1$ are moved is followed by a for loop in which the rooks with $x$ coordinate $2$ are moved, and vice versa. Eventually, either the rooks with $x$ coordinate $1$ or the
rooks with $x$ coordinate $2$ will have $z$ coordinate $N$.
The effect of this is that a subset of the squares guarded by the rooks form a "floor" of two layers that keeps moving upward. So if the black king is between this "floor" and the top of the
$N\times N\times N$ cube, it gets pushed to the top face. The "floor" must be moved upward in such a way that it never becomes disconnected, so that the black king can never escape to
beneath the "floor" through some gap.
For an infinite board, @Noam Elkies has already mentioned $5N$, so the following is not an improvement: the "ceiling" (and the other walls of the $N\times N \times N$ cube) can be formed by
placing $N$ more rooks at $(N,y,N)$, for each $y$ with $1\leq y\leq N$, and $N-1$ more rooks at $(x,1,N)$, for each $x$ with $1 \leq x \leq N-1$, and $N-1$ more rooks at $(x,N,N)$ for each
$x$ with $1 \leq x \leq N-1$.
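For what it is worth, a toy Python sketch of the floor choreography above (coordinates as in the answer; purely illustrative):

    def push_floor(N, k, passes):
        # Two layers of N rooks each, at (1, y, z1) and (2, y, z2) for
        # y = 1..N.  Each pass lifts one whole layer by two, alternating,
        # so the layers stay adjacent and the king cannot slip underneath.
        z = {1: k, 2: k + 1}
        for p in range(passes):
            layer = 1 if p % 2 == 0 else 2
            z[layer] += 2
            assert abs(z[1] - z[2]) == 1   # the two-layer floor never opens
        return z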
Actually, as long as $N$ is large enough to get started, 12 rooks should suffice. Use 4 rooks to perform the "blocking move" described in my math.SE post, with another 4 rooks protecting them. This traps the king in one half of the board - i.e. to one side of a plane containing these 8 rooks. Then, move the other 4 rooks in to push the king one step away from this plane, and move 4 of the original 8 rooks across to protect them. Repeat until the king gets pushed off the edge of the board. – George Lowther Jun 11 '12 at 1:04
The question is for an infinite board though, so there is no "last rank". – George Lowther Jun 11 '12 at 1:08
@George Lowther: Right! An infinite board. Corrected the answer, thank you very much! – user22202 Jun 11 '12 at 1:17
Substitution of differentials
I have never thought that differentials could be manipulated as numbers. I do not see why that is, in any case, justified.
Can someone prove that the product of dy/du and 1/(dx/du) is equal to dy/dx? I have seen that if you manipulate the differentials you will get the result easily, but is that justified, and what are the domain restrictions in any case where it is justified?
Sorry if my question isn't well phrased.
Most people prefer to prove it from the chain rule alone, which doesn't depend on the fractional notation.
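Spelled out, assuming $y$ and $x$ are differentiable functions of $u$:
$$\frac{dy}{dx} = \frac{dy}{du}\cdot\frac{du}{dx} = \frac{dy}{du}\cdot\frac{1}{dx/du},$$
which is legitimate at any point where $dx/du \neq 0$: there $x(u)$ is locally invertible with a differentiable inverse, and the inverse function theorem gives $du/dx = 1/(dx/du)$. So "cancelling the $du$'s" is shorthand for the chain rule combined with the inverse function rule, not literal arithmetic with infinitesimals.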
[The original post illustrated this with "balloon calculus" diagrams, which have not survived: straight continuous lines differentiate downwards (integrate up) with respect to the main variable (in this case x), and the straight dashed line does the same with respect to the dashed balloon expression (the inner function of the composite, which is subject to the chain rule). From the bottom row of the diagram, dx/du = one over du/dx.]
See here for an interesting historical perspective on (and reaction against) nervousness about taking the fractional notation literally...
Non-standard analysis - Wikipedia, the free encyclopedia
FOM: 57:Fixpoints/Summation/Large Cardinals
Harvey Friedman friedman at math.ohio-state.edu
Thu Sep 9 22:47:49 EDT 1999
This is the 57th in a series of self contained postings to fom covering a
wide range of topics in f.o.m. Previous ones are:
1:Foundational Completeness 11/3/97, 10:13AM, 10:26AM.
2:Axioms 11/6/97.
3:Simplicity 11/14/97 10:10AM.
4:Simplicity 11/14/97 4:25PM
5:Constructions 11/15/97 5:24PM
6:Undefinability/Nonstandard Models 11/16/97 12:04AM
7.Undefinability/Nonstandard Models 11/17/97 12:31AM
8.Schemes 11/17/97 12:30AM
9:Nonstandard Arithmetic 11/18/97 11:53AM
10:Pathology 12/8/97 12:37AM
11:F.O.M. & Math Logic 12/14/97 5:47AM
12:Finite trees/large cardinals 3/11/98 11:36AM
13:Min recursion/Provably recursive functions 3/20/98 4:45AM
14:New characterizations of the provable ordinals 4/8/98 2:09AM
14':Errata 4/8/98 9:48AM
15:Structural Independence results and provable ordinals 4/16/98
16:Logical Equations, etc. 4/17/98 1:25PM
16':Errata 4/28/98 10:28AM
17:Very Strong Borel statements 4/26/98 8:06PM
18:Binary Functions and Large Cardinals 4/30/98 12:03PM
19:Long Sequences 7/31/98 9:42AM
20:Proof Theoretic Degrees 8/2/98 9:37PM
21:Long Sequences/Update 10/13/98 3:18AM
22:Finite Trees/Impredicativity 10/20/98 10:13AM
23:Q-Systems and Proof Theoretic Ordinals 11/6/98 3:01AM
24:Predicatively Unfeasible Integers 11/10/98 10:44PM
25:Long Walks 11/16/98 7:05AM
26:Optimized functions/Large Cardinals 1/13/99 12:53PM
27:Finite Trees/Impredicativity:Sketches 1/13/99 12:54PM
28:Optimized Functions/Large Cardinals:more 1/27/99 4:37AM
28':Restatement 1/28/99 5:49AM
29:Large Cardinals/where are we? I 2/22/99 6:11AM
30:Large Cardinals/where are we? II 2/23/99 6:15AM
31:First Free Sets/Large Cardinals 2/27/99 1:43AM
32:Greedy Constructions/Large Cardinals 3/2/99 11:21PM
33:A Variant 3/4/99 1:52PM
34:Walks in N^k 3/7/99 1:43PM
35:Special AE Sentences 3/18/99 4:56AM
35':Restatement 3/21/99 2:20PM
36:Adjacent Ramsey Theory 3/23/99 1:00AM
37:Adjacent Ramsey Theory/more 5:45AM 3/25/99
38:Existential Properties of Numerical Functions 3/26/99 2:21PM
39:Large Cardinals/synthesis 4/7/99 11:43AM
40:Enormous Integers in Algebraic Geometry 5/17/99 11:07AM
41:Strong Philosophical Indiscernibles
42:Mythical Trees 5/25/99 5:11PM
43:More Enormous Integers/AlgGeom 5/25/99 6:00PM
44:Indiscernible Primes 5/27/99 12:53 PM
45:Result #1/Program A 7/14/99 11:07AM
46:Tamism 7/14/99 11:25AM
47:Subalgebras/Reverse Math 7/14/99 11:36AM
48:Continuous Embeddings/Reverse Mathematics 7/15/99 12:24PM
49:Ulm Theory/Reverse Mathematics 7/17/99 3:21PM
50:Enormous Integers/Number Theory 7/17/99 11:39PN
51:Enormous Integers/Plane Geometry 7/18/99 3:16PM
52:Cardinals and Cones 7/18/99 3:33PM
53:Free Sets/Reverse Math 7/19/99 2:11PM
54:Recursion Theory/Dynamics 7/22/99 9:28PM
55:Term Rewriting/Proof Theory 8/27/99 3:00PM
56:Consistency of Algebra/Geometry 8/27/99 3:01PM
We present two closely related new kinds of discrete independence results.
They are closely related technically to the previous numbered postings 29,
31, 39, and 45. The abstract is self contained.
We postpone discussion of finite forms.
***LARGE CARDINALS, FIXED POINTS, AND SUMMATION***
Harvey M. Friedman
Department of Mathematics
Ohio State University
September 10, 1999
We present new necessary uses of large cardinals in discrete mathematics.
The first approach involves fixed points given by the usual contraction
mapping theorem on the Cantor space P(N) of sets of natural numbers. The
second approach involves the action of restricted integer summation as an
operator from P(Z) into P(Z). The approaches are closely related.
1. APPROXIMATE FIXED POINTS - OVERVIEW
The usual contraction mapping theorem in this context says that every
contraction phi:P(N) into P(N) has a unique fixed point. It is inconvenient
that this fixed point is not always an infinite set. This is an issue
because we will be considering P(N) as a partial ordering under inclusion.
This situation is remedied by a simple condition called normality, which
asserts that finite sets are mapped to cofinite sets.
We then define an approximate fixed point of a normal contraction as a
chain of infinite elements of P(N) under inclusion which progressively get
closer and closer to being a fixed point. I.e., each set in the chain is
close to its image under the operator as measured by the preceding element
in the chain.
Of course, we can just take each term to be the unique fixed point, and
then the terms are actual fixed points. But we then throw a wild card into
the mix by demanding that the first set in the approximate fixed point
consist entirely of even integers. This requirement cannot be met with
actual fixed points. However this requirement can be met with approximate
fixed points of finite length - if the normal contraction obeys a
monotonicity condition (decreasing) and we use large cardinals. This use of
large cardinals is demonstrably necessary.
Another wild card with the same effect: ask for approximate fixed points of
two normal contractions which start with the same term.
In this posting, we consider only the infinitary discrete mathematical
context. We do not address finitary considerations. However, we do know
that the statements involved are provably equivalent to 1-consistency.
The issue of finite forms will be taken up in later postings.
We use N for the set of all nonnegative integers. Let A,B,C containedin N.
We say that A,B agree on C if and only if A intersect C = B intersect C.
Let P(N) be the Cantor space of all subsets of N. We are interested in
mappings phi:P(N) into P(N).
We say that phi:P(N) into P(N) is a contraction if and only if
for all n >= 0 and A,B in P(N), if A,B agree on [0,n) then phi(A),phi(B)
agree on [0,n]. Throughout this paper, all contractions will map P(N) into P(N).
Note that a contraction is determined by its values at finite sets. In this
sense, we are in the realm of discrete mathematics.
In this context, the classical contraction mapping theorem says the following.
THEOREM 1. Every contraction has a unique fixed point.
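For illustration, a Python sketch of why Theorem 1 is effective: since phi is a
contraction, whether n lies in phi(A) depends only on A intersect [0,n), so the
unique fixed point can be generated one coordinate at a time (the example
contraction here is an arbitrary invented one):

    def fixed_point_prefix(phi, N):
        # phi takes a finite set (a frozenset) and returns a set of naturals.
        # Because phi is a contraction, n in phi(A) is decided by A intersect
        # [0,n), so the fixed point is built coordinate by coordinate.
        A = set()
        for n in range(N):
            if n in phi(frozenset(A)):
                A.add(n)
        return A

    # Example: phi(S) = {n : n-1 not in S} is a contraction; its unique
    # fixed point is the set of even numbers.
    print(fixed_point_prefix(lambda S: {n for n in range(30) if n - 1 not in S}, 20))
    # -> {0, 2, 4, 6, 8, 10, 12, 14, 16, 18}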
This fixed point may be finite. However we can get an infinite "fixed
point" in the following sense. We say that B is an initial segment of A if
and only if B = A intersect [0,n), where 0 <= n <= infinity. Note that A is
an initial segment of A.
THEOREM 2. Let phi be a contraction. There is an infinite A such that
phi(A) is an initial segment of A. The value phi(A) is unique, and in fact
is the unique fixed point of phi.
Let n,k >= 1. An approximate fixed point of phi of type (n,k) is a chain of
infinite sets A1 containedin ... containedin An containedin N such that
*for all 1 <= i <= n-1, phi(Ai+1) agrees with some initial segment of Ai+1
on the set of all k length sums from Ai union {0,1}.*
We also allow n and/or k to be infinite, where k = infinity indicates that
we take all finite length sums.
Condition ii) is needed in order to handle, e.g., the constant contractions.
THEOREM 3. Let A be the unique fixed point of the contraction phi. If A is
infinite then the infinite sequence of A's is an approximate fixed
point of phi of type (infinity,infinity). If A is finite then the infinite
sequence consisting of A union (max(A),infinity) is an approximate fixed
point of phi of type (infinity,infinity), where max(emptyset) = -1.
A decreasing contraction is a contraction phi:P(N) into P(N) such that A
containedin B implies phi(B) containedin phi(A).
PROPOSITION 4. Every decreasing contraction has approximate fixed points of
every finite type whose first set consists of even integers.
THEOREM 5. Proposition 4 is provably equivalent to the 1-consistency of MAH
= ZFC + {there exists a k-Mahlo cardinal}_k. In particular, it is
independent of ZFC.
Theorem 5 also holds for the following weakening of Proposition 4.
PROPOSITION 6. Every decreasing contraction has approximate fixed
points of every finite type (3,k) whose first set consists of even
integers. Every decreasing contraction has approximate
fixed points of every finite type (n,2) whose first set consists of even
integers.
We can enlarge the class of contractions considered. For n
in N, we can look at increasing chains of subsets of N, and ask how many
times the truth value of the statement "n lies in the value of the
operator" changes as we pass through the sets in the chain in order. If
there is a uniform bound r not depending on n then we call this an
admissible contraction. Then the results above hold for admissible
contractions. Note that every decreasing contraction is an admissible
contraction, where the uniform bound is taken to be 1.
What's special about the even integers here? Nothing. You can require that
the first set be a subset of any given infinite subset of N, and the
statement is still provable from large cardinals. In fact, the mere
existence of a subset of N so that you can always insist on the appearance
of some element from that subset is already provably equivalent to the
1-consistency of Mahlo cardinals of finite order. On a related note, we get
the necessary use of Mahlo cardinals of finite order for this:
PROPOSITION 7. Let n,k >= 1. Any two decreasing (or admissible)
contractions have approximate fixed points of type (n,k) with the
same first set.
In the definition of approximate fixed points, we can use other functions
instead of or in addition to summation. We can use finitely many functions
of several variables on N that are strictly increasing in each argument.
More generally, we can use finitely many functions of several variables
that are increasing in each argument and dominating.
Can we use minus instead of or in addition to + and get the same results?
Yes, provided we put an additional condition on the contractions. A
weak contraction is a phi:P(N) into P(N) such that for some constant c > 1,
we have that for all n >= 0 and A,B in P(N), if A,B agree on [0,n) then
phi(A),phi(B) agree on [0,cn]. This amounts to a higher order Lipschitz
condition on phi.
Let A,B,C be three sets. We say that A,B partitions C if and only if
i) A intersect B intersect C = emptyset;
ii) C containedin A union B.
We say that A,B disjointly partitions C if and only if
i) A intersect B = emptyset;
ii) C containedin A union B.
Let Z be the set of all integers, Z+ be the set of all positive integers.
For A containedin Z, let Sigma(A) be the set of all sums x1 + ... + xn, n
>= 1, such that x1,...,xn in A. We call Sigma:P(Z) into P(Z) the summation function.
Let Z* be the set of all nonempty finite sequences of integers. We view Z
as a subset of Z*. We say that U containedin Z* is finite dimensional if
and only if there is a finite upper bound to the lengths of the elements of U.
Each U containedin Z* defines the restricted summation function
Sigma_U:P(Z) into P(Z) given by Sigma_U(A) = {x1 + ... + xn: n >= 1,
x1,...,xn in A, and (x1,...,xn) in U}. Note that if U = Z* then Sigma_U = Sigma.
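For finite U and A the restricted summation function is directly
computable. A small Python sketch (names hypothetical) that mirrors the
definition:

    from itertools import product

    def sigma_u(A, U):
        # Sigma_U(A): all sums x1+...+xn with each xi in A and the tuple
        # (x1,...,xn) in U.  Taking A and U finite gives a finite
        # approximation of the Sigma_U defined above.
        return {sum(t) for t in U if all(x in A for x in t)}

    A = {1, 3, 5}
    U = set(product(range(1, 6), repeat=2))  # all length-2 tuples over 1..5
    print(sorted(sigma_u(A, U)))             # pairwise sums: [2, 4, 6, 8, 10]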
THEOREM 1. Let U containedin Z*\Z. There is a unique A containedin Z+ such
that A,Sigma_U(A) partitions Z+.
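Since every tuple in U has length at least 2 and A sits inside Z+,
whether n lies in Sigma_U(A) is settled by A intersect [1,n-1], so the
unique A of Theorem 1 can be built greedily. A sketch reusing sigma_u
and product from above:

    def complement_fixed_set(U, limit):
        # Decide membership of 1, 2, ..., limit in order: put n into A
        # exactly when n is not a U-sum of smaller members of A.
        A = set()
        for n in range(1, limit + 1):
            if n not in sigma_u(A, U):
                A.add(n)
        return A

    # With U = all pairs, A comes out as the odd numbers and Sigma_U(A) as
    # the even numbers >= 2; together they partition Z+.
    print(sorted(complement_fixed_set(set(product(range(1, 13), repeat=2)), 12)))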
PROPOSITION 2. Let U,V be finite dimensional subsets of Z*\Z. There exist
infinite sets A containedin B containedin C containedin Z+ such that
B,Sigma_U(B) partitions Sigma_V(A), and C\A,Sigma_U(C)\A partitions Sigma_V(B).
We can strengthen Proposition 2 considerably in many ways. Here is an example.
PROPOSITION 3. Let U,V be finite dimensional subsets of Z*\Z, and n >= 1.
There exist infinite sets A1 properlycontainedin ... properlycontainedin
An properlycontainedin Z+ such that for all 1 <= i <= n-1,
Ai+1\A1,Sigma_U(Ai+1)\A1 disjointly partitions Sigma_V(Ai).
THEOREM 4. Propositions 2 and 3 are provably equivalent, over RCA_0, to the
1-consistency of MAH = ZFC + {there exists an n-Mahlo cardinal}_n. In
particular, they are independent of ZFC.
- In CADE-20 , 2005
"... This paper shows how to harness existing theorem provers for first-order logic to automatically verify safety properties of imperative programs that perform dynamic storage allocation and
destructive updating of pointer-valued structure fields. One of the main obstacles is specifying and proving the ..."
Cited by 35 (7 self)
Add to MetaCart
This paper shows how to harness existing theorem provers for first-order logic to automatically verify safety properties of imperative programs that perform dynamic storage allocation and destructive
updating of pointer-valued structure fields. One of the main obstacles is specifying and proving the (absence) of reachability properties among dynamically allocated cells. The main technical
contributions are methods for simulating reachability in a conservative way using first-order formulas—the formulas describe a superset of the set of program states that can actually arise. These
methods are employed for semi-automatic program verification (i.e., using programmer-supplied loop invariants) on programs such as mark-and-sweep garbage collection and destructive reversal of a
singly linked list. (The mark-and-sweep example has been previously reported as being beyond the capabilities of ESC/Java.)
, 2004
"... We define 2 operators on relations over natural numbers such that they generalize the operators '+' and '*' and show that the membership and emptiness problem of relations constructed from
finite relations with these operators and is decidable. This generalizes Presburger arithmetics and allows to ..."
Cited by 15 (0 self)
Add to MetaCart
We define 2 operators on relations over natural numbers such that they generalize the operators '+' and '*' and show that the membership and emptiness problems of relations constructed from finite
relations with these operators are decidable. This generalizes Presburger arithmetic and allows us to decide the reachability problem for those Petri nets where inhibitor arcs occur only in some
restricted way. In particular, the reachability problem is decidable for Petri nets with only one inhibitor arc, which solves an open problem in [KLM89]. Furthermore we describe the corresponding
automaton having a decidable emptiness problem.
- in: Proc. 19th LICS, IEEE Comp. Soc , 2004
"... Abstract. Formal verification using the model checking paradigm has to deal with two aspects: The system models are structured, often as products of components, and the specification logic has
to be expressive enough to allow the formalization of reachability properties. The present paper is a study ..."
Cited by 14 (2 self)
Add to MetaCart
Abstract. Formal verification using the model checking paradigm has to deal with two aspects: The system models are structured, often as products of components, and the specification logic has to be
expressive enough to allow the formalization of reachability properties. The present paper is a study on what can be achieved for infinite transition systems under these premises. As models we
consider products of infinite transition systems with different synchronization constraints. We introduce finitely synchronized transition systems, i.e. product systems which contain only finitely
many (parameterized) synchronized transitions, and show that the decidability of FO(R), first-order logic extended by reachability predicates, of the product system can be reduced to the decidability
of FO(R) of the components. This result is optimal in the following sense: (1) If we allow semifinite synchronization, i.e. just in one component infinitely many transitions are synchronized, the FO
(R)-theory of the product system is in general undecidable. (2) We cannot extend the expressive power of the logic under consideration. Already a weak extension of firstorder logic with transitive
closure, where we restrict the transitive closure operators to arity one and nesting depth two, is undecidable for an asynchronous (and hence finitely synchronized) product, namely for the infinite
grid. 1.
"... We develop a unified framework for dealing with constructibility and absoluteness in set theory, decidability of relations in effective structures (like the natural numbers), and domain
independence of queries in database theory. Our framework and results suggest that domain-independence and absolut ..."
Cited by 4 (2 self)
Add to MetaCart
We develop a unified framework for dealing with constructibility and absoluteness in set theory, decidability of relations in effective structures (like the natural numbers), and domain independence
of queries in database theory. Our framework and results suggest that domain-independence and absoluteness might be the key notions in a general theory of constructibility, predicativity, and
computability.
- In First-Order Logic Revisited (Hendricks et al., eds.), 37-58, Logos Verlag, 2004
"... ..."
"... We suggest a new basic framework for the Weyl-Feferman predicativist program by constructing a formal predicative set theory PZF which resembles ZF. The basic idea is that the predicatively
acceptable instances of the comprehension schema are those which determine the collections they define in an a ..."
Cited by 2 (1 self)
Add to MetaCart
We suggest a new basic framework for the Weyl-Feferman predicativist program by constructing a formal predicative set theory PZF which resembles ZF. The basic idea is that the predicatively
acceptable instances of the comprehension schema are those which determine the collections they define in an absolute way, independent of the extension of the “surrounding universe”. This idea is
implemented using syntactic safety relations between formulas and sets of variables. These safety relations generalize both the notion of domain-independence from database theory, and Gödel's notion of
absoluteness from set theory. The language of PZF is type-free, and it reflects real mathematical practice in making an extensive use of statically defined abstract set terms. Another important
feature of PZF is that its underlying logic is ancestral logic (i.e. the extension of FOL with a transitive closure operation).
"... To Boaz Trakhtenbrot: a scientific father, a friend, and a great man. Abstract. We present a new unified framework for formalizations of axiomatic set theories of different strength, from
rudimentary set theory to full ZF. It allows the use of set terms, but provides a static check of their validity ..."
Cited by 1 (1 self)
Add to MetaCart
To Boaz Trakhtenbrot: a scientific father, a friend, and a great man. Abstract. We present a new unified framework for formalizations of axiomatic set theories of different strength, from rudimentary
set theory to full ZF. It allows the use of set terms, but provides a static check of their validity. Like the inconsistent “ideal calculus ” for set theory, it is essentially based on just two
set-theoretical principles: extensionality and comprehension (to which we add ∈-induction and optionally the axiom of choice). Comprehension is formulated as: x ∈{x | ϕ} ↔ϕ, where {x | ϕ} is a legal
set term of the theory. In order for {x | ϕ} to be legal, ϕ should be safe with respect to {x}, where safety is a relation between formulas and finite sets of variables. The various systems we
consider differ from each other mainly with respect to the safety relations they employ. These relations are all defined purely syntactically (using an induction on the logical structure of
formulas). The basic one is based on the safety relation which implicitly underlies commercial query languages for relational database systems (like SQL). Our framework makes it possible to reduce
all extensions by definitions to abbreviations. Hence it is very convenient for mechanical manipulations and for interactive theorem proving. It also provides a unified treatment of comprehension
axioms and of absoluteness properties of formulas.
, 2004
"... In the last WSML phone conference we had a brief discussion about the expressivity of First-order Logic and Datalog resp. the relation between the expressiveness of those two languages. In
particular, there has been some confusion around the description of the transitive closure R + of some binary r ..."
Add to MetaCart
In the last WSML phone conference we had a brief discussion about the expressivity of First-order Logic and Datalog resp. the relation between the expressiveness of those two languages. In
particular, there has been some confusion around the description of the transitive closure R+ of some binary relation R. In this short document, we want to clarify the situation and hope to remedy
the confusion. 1 Starting point During the discussion in the last WSML phone conference the statement arose that Datalog with its particular semantics can express some things which cannot be
expressed in the First-order Logic (FOL) under the standard modeltheoretic (resp. Tarski) semantics [14]. As an example the transitive closure of a (binary) relation has been mentioned. Some people
didn’t believe this claim because it’s a straightforward
, 2007
"... So the question naturally arises: what kinds of sentences belonging to PA’s language LA can we actually establish to be true even though they are unprovable in PA? There are two familiar classes
of cases. First, there are sentences like the canonical Gödel sentence for PA. Second, there are sentence ..."
Add to MetaCart
So the question naturally arises: what kinds of sentences belonging to PA’s language LA can we actually establish to be true even though they are unprovable in PA? There are two familiar classes of
cases. First, there are sentences like the canonical Gödel sentence for PA. Second, there are sentences like the arithmetization of Goodstein’s Theorem. In the first sort of case, we can come to
appreciate the truth of the Gödelian undecidable sentences by reflecting on PA’s consistency or by coming to accept the instances of the Π1 reflection schema for PA. And those routes involve
deploying ideas beyond those involved in accepting PA as true. To reason to the truth of the Gödel sentence, we need not just to be able to do basic arithmetic, but to be able to reflect on our
practice. In the second sort of case, we come to appreciate the truth of the sentences which are undecidable in PA by deploying transfinite induction or other infinitary ideas. So the reasoning again
involves ideas which go beyond what’s involved in grasping basic | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2437928","timestamp":"2014-04-17T16:08:36Z","content_type":null,"content_length":"34394","record_id":"<urn:uuid:562fc136-070a-4fd0-8113-4aa1a8cbdb11>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00042-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by MS on Wednesday, September 4, 2013 at 1:47am.
Eq of curve is y=b sin^2(pi.x/a). Find mean value for part of curve where x lies between b and a.
I have gone thus far-
y=b[1-cos(2pi x/a)/2]/2
Integral y from a to b=b/2(b-a)-ab/4pi[sin(2pi b/a)-sin2pi)
MV=b/2-[ab sin(2pi b/a)]/(b-a)
Ans given is b/a. I am not getting further.
• Maths - Graham, Wednesday, September 4, 2013 at 3:36am
y(x) = b sin^2(πx/a)
The mean of the curve over the range b to a is:
y_ave = 1/(a-b) ∫(x=b to a) y(x) dx
sin^2(πx/a) = (1 - cos(2πx/a))/2
∫y(x) dx
= (b/2) ∫ (1 - cos(2πx/a)) dx
= (b/2) (x - a sin(2πx/a)/(2π)) + constant
= bx/2 - ab sin(2πx/a)/(4π) + constant
∫(x=b to a) y(x) dx
= b(a-b)/2 + ab sin(2πb/a)/(4π)
1/(a-b)∫(x=b to a) y(x) dx
= (b/2) + (ab sin(2πb/a))/(4π(a-b))
• Maths - Graham, Wednesday, September 4, 2013 at 3:39am
And, that is just about as far as it goes. You can play around with the sine identities, but it doesn't simplify much further.
• Maths - MS, Wednesday, September 4, 2013 at 3:52am
Does it indicate that the answer 'b/a' given in the book may be wrong? I tried many times but could not get it.
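A quick numerical check (a sketch with arbitrary test values a = 3,
b = 1) confirms the closed form above and suggests the printed answer
b/a cannot be right in general:

    import numpy as np

    a, b = 3.0, 1.0                     # any a > b > 0 will do
    x = np.linspace(b, a, 200001)
    y = b * np.sin(np.pi * x / a) ** 2

    # mean value by direct (trapezoid-rule) integration
    numeric = np.sum((y[:-1] + y[1:]) / 2 * np.diff(x)) / (a - b)
    closed  = b / 2 + a * b * np.sin(2 * np.pi * b / a) / (4 * np.pi * (a - b))
    print(numeric, closed)              # both print 0.6034..., while b/a = 0.3333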
Hungarian mathematician Endre Szemerédi gets 2012 Abel Prize
The winner of the prestigious Abel Prize of the Norwegian Academy of Science and Letters for the year 2012 is 72-year-old Hungarian mathematician Endre Szemerédi of the Alfréd Rényi Institute of
Mathematics, Hungarian Academy of Sciences, Budapest, and Department of Computer Science, Rutgers, The State University of New Jersey in the United States.
Szemerédi's highly influential work has proved to be a game-changer in many areas of mathematics.
The announcement was made by the President of the Norwegian Academy in Oslo on Tuesday and the award is being given “for his fundamental contributions to discrete mathematics and theoretical computer
science, and in recognition of the profound and lasting impact of these contributions on additive number theory and ergodic theory.”
Szemerédi has been described as a mathematician with exceptional research power and his influence in diverse areas of present-day mathematics has been enormous. The festschrift volume, titled An
Irregular Mind, published on his 70th birthday, ascribes his unique way of thinking and extraordinary mathematical vision as perhaps due to his brain being wired differently — “an irregular mind” —
than most mathematicians.
Discrete mathematics is the study of structures such as graphs, sequences, permutations and geometric configurations and it is the mathematics of such structures that forms the foundation of
theoretical computer science and information theory. For example, the tools of graph theory can be used to analyse communication networks such as the Internet. Similarly, the designing of efficient
computational algorithms relies crucially on insights from discrete mathematics.
Szemerédi, says the citation, “has revolutionized discrete mathematics by introducing ingenious and novel techniques, and by solving many fundamental problems”. His work has brought combinatorics to
the centre-stage of mathematics by bringing to bear its application in many areas of mathematics such as additive number theory, ‘ergodic' theory, theoretical computer science and ‘incidence'
The Abel Committee has noted that Szemerédi's approach belongs to the strong Hungarian problem-solving tradition exemplified by mathematicians such as George Pólya and yet the theoretical impact of
his work has been enormous.
Interestingly, Szemerédi entered mathematics somewhat late. He attended medical school for a year and worked in a factory before switching to mathematics. His extraordinary mathematical talent was
discovered when he was a young student in Budapest by his mentor, famous Hungarian mathematician Paul Erdős. He studied at the Eötvös Loránd University in Budapest and obtained his Ph.D. in 1970
under Israel M. Gelfand at Moscow State University.
Szemerédi proved several fundamental theorems of tremendous importance. Many of his results have opened up new avenues in mathematics and form the basis for future research. He first attracted
international attention in 1976 with his solution of what is known as the Erdős-Turán Conjecture. In its proof, Szemerédi had used a masterpiece of combinatorial reasoning, which was immediately
recognised to have exceptional depth and power. A key step in the proof, now known as the Szemerédi Regularity Lemma, is used for classification of large graphs.
Many of Szemerédi's discoveries that have had great impact on discrete mathematics and theoretical computer science carry his name. Examples in discrete mathematics include the Szemerédi-Trotter
Theorem, the Ajtai-Komlós-Szemerédi semi-random method, the Erdős-Szemerédi sum-product theorem, and the Balog-Szemerédi-Gowers Lemma. Examples in theoretical computer science include the
Ajtai-Komlós-Szemerédi sorting network, the Fredman-Komlós-Szemerédi hashing scheme and the Paul-Pippenger-Szemerédi-Trotter theorem.
The Abel Prize, named after great Norwegian mathematical genius Niels Henrik Abel (1802-1829), is given in recognition of outstanding contributions to mathematical sciences and has been awarded
annually since 2003.
Abel, who died at the age of 26, has often been compared with the Indian mathematical genius Srinivasa Ramanujan. The Prize was established in 2001 as part of Abel's 200th birth anniversary. It
carries a cash award of 6 million Norwegian Kroner (NOK), equivalent to €750,000 (about US$1 million), and is comparable in prestige, value and eligibility criterion to the Nobel Prize, which does
not cover mathematics.
The winning candidate is selected on the basis of the recommendation of an international committee of outstanding mathematicians chaired by a Norwegian. The current committee is headed by Ragni
Piene, Professor at the University of Oslo and includes M.S. Raghunathan, formerly of the Tata Institute of Fundamental research (TIFR) and currently at the Indian Institute of Technology-Bombay
(IIT-B), in Mumbai.
The selection of Szemerédi for the award was made in February at a meeting of the Committee held at the TIFR. | {"url":"http://www.thehindu.com/news/article3025783.ece?homepage=true","timestamp":"2014-04-19T04:34:09Z","content_type":null,"content_length":"74628","record_id":"<urn:uuid:85992e72-6b0b-44bb-a754-f62e112f6025>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00176-ip-10-147-4-33.ec2.internal.warc.gz"} |
Proceedings of the American Mathematical Society
ISSN 1088-6826(online) ISSN 0002-9939(print)
Billingsley's packing dimension
Author: Manav Das
Journal: Proc. Amer. Math. Soc. 136 (2008), 273-278
MSC (2000): Primary 28A78, 28A80
Published electronically: October 18, 2007
MathSciNet review: 2350413
Abstract: For a stochastic process on a finite state space, we define the notion of a packing measure based on the naturally defined cylinder sets. For any two measures
then we prove that
Additional Information
Manav Das
Affiliation: Department of Mathematics, 328 Natural Sciences Building, University of Louisville, Louisville, Kentucky 40292
Email: manav@louisville.edu
DOI: http://dx.doi.org/10.1090/S0002-9939-07-09069-7
PII: S 0002-9939(07)09069-7
Keywords: Billingsley's dimension, packing dimension, Hausdorff dimension
Received by editor(s): May 4, 2006
Received by editor(s) in revised form: December 18, 2006
Published electronically: October 18, 2007
Communicated by: Jane M. Hawkins
Article copyright: © Copyright 2007 American Mathematical Society
The copyright for this article reverts to public domain 28 years after publication. | {"url":"http://www.ams.org/journals/proc/2008-136-01/S0002-9939-07-09069-7/home.html","timestamp":"2014-04-19T04:31:21Z","content_type":null,"content_length":"29633","record_id":"<urn:uuid:216ee1db-549e-4b54-9b8a-b387d38b1de8>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00570-ip-10-147-4-33.ec2.internal.warc.gz"} |
Position location techniques and applications - Part 3: Mobility in wireless networks
[Part one begins a discussion of the basic evolution of wireless networks and how position location could be considered from the networking point of view. Part two offers a general overview of
wireless mobile ad hoc networks and sensor networks, including their evolution, applications, and issues such as connectivity and scalability.]
5.2 MOBILITY IN WIRELESS NETWORKS
Mobility is a very important issue that needs to be addressed in a wireless network, since it limits capacity. For example, in a cellular network with no mobility, we can establish a capacity
criterion based on the Erlang-B formula. Channels will be occupied by users that contribute to traffic, so that a cell with C channels will have a blocking probability given by this formula (Equation (5.1) below).
If we allow users to move, then some users from adjacent cells will hand off their calls to the cell of interest, producing a higher occupancy state in the cell and thus a higher blocking
probability. This simple reasoning shows how mobility could limit capacity.
In the following sections, we introduce some aspects of mobility that need to be considered to balance the capacity–coverage trade-off that mobility causes.
5.2.1 Capacity and Coverage Issues
Demand for wireless services has increased, which in turn brings new issues to consider such as mobility management for service providers [13]. One of the major objectives of a telecommunications
system is to offer a service of excellent quality based on user requirements. In order to evaluate how a network or a communication system performs, we need to define measures that quantify the
effects of varying parameters such as demand and capacity. In wireless networks, we allow users to move from area to area, thereby causing handoffs in the network. A user requesting service for the
first time from the network is considered a new call.
Two of the most important performance measures in wireless networks are the new call and the handoff blocking probabilities [34, 48]. The handoffs are a fundamental feature in cellular systems; their
performance and efficiency strongly depend on the use of adequate algorithms. For cellular communication systems, to ensure mobility and capacity, to maintain the desired coverage areas, and to avoid
problems of interference, it is necessary to correctly assign the calls to the corresponding service areas in the entire cell and in the entire network.
In a CDMA network, soft handoff has been modeled considering overlapping areas, such as in Kwon and Suang [42], Miranda-Guardiola and Vargas-Rosales [53], and Scaglione et al. [72]. Admission of
handoffs is done using one of three criteria. The first is to consider handoffs and new call arrivals equally for occupancy of the channels, the second reserves channels to give priority to handoffs,
and the third sends them to a queue if no channel is available. Several performance evaluation algorithms have been introduced for these handoff strategies (e.g., see [34] and [79]) for reservation
and queueing strategies and in McMillan [48] for the reservation and no-reservation strategies.
In a simplistic way, we can see each i-th cell as a resource with C_i channels offered Poisson traffic for new calls with λ_i calls per time unit, with exponential channel residence times with mean
1/µ_i. This would translate to considering each cell as an M/M/C_i/C_i system with performance provided by the Erlang-B formula,

B_i = (A_i^C_i / C_i!) / (Σ_{k=0}^{C_i} A_i^k / k!)    (5.1)

where A_i is the traffic offered to cell i in erlangs (λ_i/µ_i when only new calls are offered).
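As a concrete illustration (a minimal sketch, not code from the text),
the Erlang-B blocking probability is straightforward to evaluate:

    from math import factorial

    def erlang_b(channels, traffic):
        # Erlang-B blocking for an M/M/C/C loss system, Equation (5.1):
        # B = (A**C / C!) / sum_{k=0..C} A**k / k!
        num = traffic ** channels / factorial(channels)
        den = sum(traffic ** k / factorial(k) for k in range(channels + 1))
        return num / den

    print(erlang_b(10, 6.0))   # about 0.043: roughly 4% of calls are blocked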
But we need to see that each cell receives offered traffic due to handoffs from adjacent cells, as shown in Figure 5.12. And since the success of the new call traffic depends directly on the
available capacity of the cell to which it is being offered, and the handoff calls depend on the available capacity of the cell from which it comes, we can see a cellular network such as that in
Figure 5.12 as a Jackson-type network of queues [5] as shown in Figure 5.13.
FIGURE 5.12 Traffic offered to cell in wireless network.
FIGURE 5.13 Cellular network as Jackson network of queues.
In Vargas-Rosales et al. [79], it was shown that this viewpoint has a solution for the blocking probability of new calls and handoff calls with channel reservation. The fundamental idea in this model
is that the traffic offered to a cell is given by the new call traffic plus handoffs that have already been accepted in an adjacent cell; that is, if we denote as v_ij the handoff traffic offered
from cell i to cell j, B_i as the new call blocking in cell i, and B_hi as the handoff blocking in cell i, we can obtain the handoff rate out of cell j offered to cell i as

v_ji = p_ji [ λ_j (1 - B_j) + Σ_{k in A_j} v_kj (1 - B_hj) ]    (5.2)
where A_j is the set of neighboring cells to cell j, and p_ij is the routing probability from cell i to cell j. The first term in Equation (5.2) is due to those new calls offered to cell j that are
accepted and, after a residence time, handed off to cell i with probability p_ji. The second term is due to all handoff calls that are offered and accepted into cell j from its adjacent cells and
then handed off to cell i.
One can see that this model helps us to comprehend the effects that mobility has on performance by varying the routing probabilities to increase the proportion of handoff calls being offered. To
solve for the blocking probabilities, one needs to consider a fixed point because Equation (5.1) is in terms of traffic offered, and this depends on new call arrivals plus handoffs as given by
Equation (5.2). Equation (5.1) now also has traffic from both arrivals of new calls and handoff calls. This immediately tells us that traffic increases, and thus blocking also increases; that is,
capacity in terms of number of possible simultaneous users active is reduced.
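To make the fixed-point idea concrete, here is a toy sketch for two
identical neighboring cells that exchange handoffs; it is a deliberately
simplified instance of Equations (5.1)-(5.2), not the algorithm of [79],
it assumes no channel reservation (so handoff blocking equals B), and it
reuses the erlang_b function sketched above:

    def solve_blocking(lam, mu, C, p, iters=200):
        # Iterate Eq. (5.2) for the handoff rate v and Eq. (5.1) for the
        # blocking B until the pair settles to a fixed point.
        B, v = 0.0, 0.0
        for _ in range(iters):
            v = p * (lam * (1 - B) + v * (1 - B))  # Eq. (5.2), symmetric case
            B = erlang_b(C, (lam + v) / mu)        # Eq. (5.1), total offered traffic
        return B, v

    # With handoffs (p > 0) the blocking comes out higher than the
    # no-mobility Erlang-B value, illustrating how mobility costs capacity.
    print(solve_blocking(lam=5.0, mu=1.0, C=10, p=0.4))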
This model has been evaluated for FDMA and TDMA in Vargas-Rosales et al. [79], and for CDMA in Miranda-Guardiola and Vargas-Rosales [53]. In addition, the use of reservation degrades performance for
the type of traffic with higher bandwidth requests. Even though these networks have limitations in terms of interference levels, it has been shown that relevant limitations are due to blocking [37].
It is well known that CDMA capacity depends on the processing gain and the bit energy-to-noise ratio, but from another viewpoint, we can consider a scenario where a central cell is influenced by
interference from an infinite number of rings (tiers), each of which contains cells transmitting at the same frequency with users working with perfect power control. The scenario was considered with
homogeneous circular cells of radius R in Munoz et al. [56]. The advantages of using such a model are that we get bounds on the interference levels even for an infinite number of cells due to the
infinite number of rings surrounding the center cell.
In the model, the same number of users for each cell is considered, and in order to obtain the major influence of the users, in each cell all users are located at the closest point toward the center.
The model used a simple propagation model with a path-loss exponent between 2.5 and 4, and cell radius was varied to consider cases with a radius of 3, 5, and 10 km. In the worst-case scenario, a
cell capacity of 20 users was obtained when the number of interferents was infinity. Voice activity and sectoring were considered as well. The important aspect of this result is that regardless of
the number of interferents, the capacity of CDMA cells with perfect power control will be lower-bounded by 20. We must be cautious when referring to this number since in these conditions FDMA and
TDMA would be useless due to interference. The final result of the analysis in [56] is provided by the following lower bound:
where N is the number of users in each cell, R is the cell radius, the path-loss exponent is the one introduced above, C/I is the carrier-to-interference level usually set to -15 dB, and x > 1 and y > 0.
In general, network capacity also depends on limitations encountered by the underlying channels. These limitations determine the data rates at which one can transmit with small bit error rates
(BERs). Once the physical layer provides a reliable link to transmit, then the network functions take place, consuming some of the available bandwidth in order to achieve network control. So in order
to consider capacity in wireless networks, we have to see that channel capacity or single-user system capacity and multiuser capacity need to be integrated.
For networks with infrastructure, it is well known that the uplink will have a degraded performance once the number of users increases since interference will be an issue. The base station transmits
at a certain power level in the downlink that is also affected by the amount of interference, creating a coverage problem in some areas since the downlink signal will not be received with as much
power as it seems. We also know that higher frequencies will require higher sensibility from the receivers since received power is inversely proportional to frequency. For treatments of these
capacity issues in single-user and multiuser systems, see Goldsmith [27].
For networks that have no infrastructure (i.e., reconfigurable networks such as ad hoc and sensor networks) capacity has been an important research issue. For these networks, it is not as simple
since the concept of simultaneous number of users does not apply directly due to the distributed use of the bandwidth. In addition, issues such as bit rates, interference suppression, multiple
access, geographic position, topology, connectivity, and reachability, among others, play an important role in determining the number of nodes that could be active at a given time in a network. Also,
certain types of algorithms implemented could be improved if used in a distributed or cooperative way. Capacity in these networks has been studied in general [30], as has how mobility increases
capacity when cooperation is used (e.g., see [29]).
The work of Grossglauser and Tse [29] contains a study of a network with no mobility with nodes generated randomly on a disk or sphere, and as the node density increases, the throughput per
origin–destination pair decreases with a bound determined by 1/sqrt(n). It was also shown that this is the best performance that one can get even with optimal cooperation in relaying, routing, and
scheduling. One issue would then be scalability, since the result in Gupta and Kumar [30] gives practically a zero throughput when the network grows. In contrast, mobility can help maintain constant
origin–destination pair throughput even when the network grows, as shown in Grossglauser and Tse [29]. The result is based on the use of relaying as a form of multiuser diversity. | {"url":"http://www.eetimes.com/document.asp?doc_id=1279190","timestamp":"2014-04-16T22:29:05Z","content_type":null,"content_length":"138684","record_id":"<urn:uuid:7a2d2483-ab01-470a-9511-973436d5a627>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00032-ip-10-147-4-33.ec2.internal.warc.gz"} |
Principles of Inorganic Materials Design, 2nd Edition
ISBN: 978-0-470-56753-1
500 pages
February 2010
Unique interdisciplinary approach enables readers to overcome complex design challenges
Integrating concepts from chemistry, physics, materials science, metallurgy, and ceramics, Principles of Inorganic Materials Design, Second Edition offers a unique interdisciplinary approach that
enables readers to grasp the complexities of inorganic materials. The book provides a solid foundation in the principles underlying the design of inorganic materials and then offers the guidance and
tools needed to create specific materials with desired macroscopic properties.
Principles of Inorganic Materials Design, Second Edition begins with an introduction to structure at the microscopic level and then progresses to smaller-length scales. Next, the authors explore both
phenomenological and atomistic-level descriptions of transport properties, the metal–nonmetal transition, magnetic and dielectric properties, optical properties, and mechanical properties. Lastly,
the book covers phase equilibria, synthesis, and nanomaterials.
Special features include:
• Introduction to the CALPHAD method, an important, but often overlooked topic
• More worked examples and new end-of-chapter problems to help ensure mastery of the concepts
• Extensive references to the literature for more in-depth coverage of particular topics
• Biographies introducing twentieth-century pioneers in the field of inorganic materials science
This Second Edition has been thoroughly revised and updated, incorporating the latest findings and featuring expanded discussions of such key topics as microstructural aspects, density functional
theory, dielectric properties, mechanical properties, and nanomaterials.
Armed with this text, students and researchers in inorganic and physical chemistry, physics, materials science, and engineering will be equipped to overcome today's complex design challenges. This
textbook is recommended for senior-level undergraduate and graduate course work.
Foreword to Second Edition.
Foreword to First Edition.
Preface to Second Edition.
Preface to First Edition.
1.1 Degrees of Crystallinity.
1.1.1 Monocrystalline Solids.
1.1.2 Quasicrystalline Solids.
1.1.3 Polycrystalline Solids.
1.1.4 Semicrystalline Solids.
1.1.5 Amorphous Solids.
1.2 Basic Crystallography.
1.2.1 Space Lattice Geometry.
1.3 Single Crystal Morphology and its Relationship to Lattice Symmetry.
1.4 Twinned Crystals.
1.5 Crystallographic Orientation Relationships in Bicrystals.
1.5.1 The Coincidence Site Lattice.
1.5.2 Equivalent Axis-Angle Pairs.
1.6 Amorphous Solids and Glasses.
Practice Problems.
2.1 Materials Length Scales.
2.1.1 Experimental Resolution of Material Features.
2.2 Grain Boundaries in Polycrystalline Materials.
2.2.1 Grain-Boundary Orientations.
2.2.2 Dislocation Model of Low Angle Grain Boundaries.
2.2.3 Grain-Boundary Energy.
2.2.4 Special Types of Low-Energy Grain Boundaries.
2.2.5 Grain-Boundary Dynamics.
2.2.6 Representing Orientation Distributions in Polycrystalline Aggregates.
2.3 Materials Processing and Microstructure.
2.3.1 Conventional Solidification.
2.3.2 Deformation Processing.
2.3.3 Consolidation Processing.
2.3.4 Thin-Film Formation.
2.4 Microstructure and Materials Properties.
2.4.1 Mechanical Properties.
2.4.2 Transport Properties.
2.4.3 Magnetic and Dielectric Properties.
2.4.4 Chemical Properties.
2.5 Microstructure Control and Design.
Practice Problems.
3.1 Structure Description Methods.
3.1.1 Close Packing.
3.1.2 Polyhedra.
3.1.3 The Unit Cell.
3.1.4 Pearson Symbols.
3.2 Cohesive Forces in Solids.
3.2.1 Ionic Bonding.
3.2.2 Covalent Bonding.
3.2.3 Metallic Bonding.
3.2.4 Atoms and Bonds as Electron Charge Density.
3.3 Structural Energetics.
3.3.1 Lattice Energy.
3.3.2 The Born-Haber Cycle.
3.3.3 Goldschmidt's Rules and Pauling's Rules.
3.3.4 Total Energy.
3.3.5 Electronic Origin of Coordination Polyhedra in Covalent Crystals.
3.4 Common Structure Types.
3.4.1 Iono-Covalent Solids.
3.4.2 Intermetallic Compounds.
3.5 Structural Disturbances.
3.5.1 Intrinsic Point Defects.
3.5.2 Extrinsic Point Defects.
3.5.3 Structural Distortions.
3.5.4 Bond Valence Sum Calculations.
3.6 Structure Control and Synthetic Strategies.
Practice Problems.
4 THE ELECTRONIC LEVEL I: AN OVERVIEW OF BAND THEORY.
4.1 The Many-Body Schrödinger Equation.
4.2 Bloch’s Theorem.
4.3 Reciprocal Space.
4.4 A Choice of Basis Sets.
4.4.1 Plane-Wave Expansion - The Free-Electron Models.
4.4.2 The Fermi Surface and Phase Stability.
4.4.3 Bloch Sum Basis Set - The LCAO Method.
4.5 Understanding Band-Structure Diagrams.
4.6 Breakdown of the Independent Electron Approximation.
4.7 Density Functional Theory - The Successor to the Hartree-Fock Approach.
Practice Problems.
5.1 The General LCAO Method.
5.2 Extension of the LCAO Treatment to Crystalline Solids.
5.3 Orbital Interactions in Monatomic Solids.
5.3.1 s-Bonding Interactions.
5.3.2 p-Bonding Interactions.
5.4 Tight-Binding Assumptions.
5.5 Qualitative LCAO Band Structures.
5.5.1 Illustration 1: Transition Metal Oxides with Vertex-Sharing Octahedra.
5.5.2 Illustration 2: Reduced Dimensional Systems.
5.5.3 Illustration 3: Transition Metal Monoxides with Edge-Sharing Octahedra.
5.5.4 Corollary.
5.6 Total Energy Tight-Binding Calculations.
Practice Problems.
6.1 An Introduction to Tensors.
6.2 Thermal Conductivity.
6.2.1 The Free Electron Contribution.
6.2.2 The Phonon Contribution.
6.3 Electrical Conductivity.
6.3.1 Band Structure Considerations.
6.3.2 Thermoelectric, Photovoltaic, and Magnetotransport Properties.
6.4 Mass Transport.
6.4.1 Atomic Diffusion.
6.4.2 Ionic Conduction.
Practice Problems.
7.1 Correlated Systems.
7.1.1 The Mott-Hubbard Insulating State.
7.1.2 Charge-Transfer Insulators.
7.1.3 Marginal Metals.
7.2 Anderson Localization.
7.3 Experimentally Distinguishing Disorder from Electron Correlation.
7.4 Tuning the M-NM Transition.
7.5 Other Types of Electronic Transitions.
Practice Problems.
8.1 Phenomenological Description of Magnetic Behavior.
8.1.1 Magnetization Curves.
8.1.2 Susceptibility Curves.
8.2 Atomic States and Term Symbols of Free Ions.
8.3 Atomic Origin of Paramagnetism.
8.3.1 Orbital Angular Momentum Contribution - The Free Ion Case.
8.3.2 Spin Angular Momentum Contribution - The Free Ion Case.
8.3.3 Total Magnetic Moment - The Free Ion Case.
8.3.4 Spin-Orbit Coupling - The Free Ion Case.
8.3.5 Single Ions in Crystals.
8.3.6 Solids.
8.4 Diamagnetism.
8.5 Spontaneous Magnetic Ordering.
8.5.1 Exchange Interactions.
8.5.2 Itinerant Ferromagnetism.
8.5.3 Noncolinear Spin Configurations and Magnetocrystalline Anisotropy.
8.6 Magnetotransport Properties.
8.6.1 The Double Exchange Mechanism.
8.6.2 The Half-Metallic Ferromagnet Model.
8.7 Magnetostriction.
8.8 Dielectric Properties.
8.8.1 The Microscopic Equations.
8.8.2 Piezoelectricity.
8.8.3 Pyroelectricity.
8.8.4 Ferroelectricity.
Practice Problems.
9.1 Maxwell’s Equations.
9.2 Refractive Index.
9.3 Absorption.
9.4 Nonlinear Effects.
9.5 Summary.
Practice Problems.
10.1 Stress and Strain.
10.2 Elasticity.
10.2.1 The Elasticity Tensor.
10.2.2 Elastically Isotropic Solids.
10.2.3 The Relation Between Elasticity and the cohesive Forces in a Solid.
10.2.4 Superelasticity, Pseudoelasticity, and the Shape Memory Effect.
10.3 Plasticity.
10.3.1 The Dislocation-Based Mechanism to Plastic Deformation.
10.3.2 Polycrystalline Metals.
10.3.3 Brittle and Semibrittle Solids.
10.3.4 The Correlation Between the Electronic Structure and the Plasticity of Materials.
10.4 Fracture.
Practice Problems.
11 PHASE EQUILIBRIA, PHASE DIAGRAMS, AND PHASE MODELING.
11.1 Thermodynamic Systems and Equilibrium.
11.1.1 Equilibrium Thermodynamics.
11.2 Thermodynamic Potentials and the Laws.
11.3 Understanding Phase Diagrams.
11.3.1 Unary Systems.
11.3.2 Binary Metallurgical Systems.
11.3.3 Binary Nonmetallic Systems.
11.3.4 Ternary Condensed Systems.
11.3.5 Metastable Equilibria.
11.4 Experimental Phase-Diagram Determinations.
11.5 Phase-Diagram Modeling.
11.5.1 Gibbs Energy Expressions for Mixtures and Solid Solutions.
11.5.2 Gibbs Energy Expressions for Phases with Long-Range Order.
11.5.3 Other Contributions to the Gibbs Energy.
11.5.4 Phase Diagram Extrapolations - the CALPHAD Method.
Practice Problems.
12 SYNTHETIC STRATEGIES.
12.1 Synthetic Strategies.
12.1.1 Direct Combination.
12.1.2 Low Temperature.
12.1.3 Defects.
12.1.4 Combinatorial Synthesis.
12.1.5 Spinodal Decomposition.
12.1.6 Thin Films.
12.1.7 Photonic Materials.
12.1.8 Nanosynthesis.
12.2 Summary.
Practice Problems.
13.1 History of Nanotechnology.
13.2 Nanomaterials Properties.
13.2.1 Electrical Properties.
13.2.2 Magnetic Properties.
13.2.3 Optical Properties.
13.2.4 Thermal Properties.
13.2.5 Mechanical Properties.
13.2.6 Chemical Reactivity.
13.3 More on Nanomaterials Preparative Techniques.
13.3.1 Top-Down Methods for the Fabrication of Nanocrystalline Materials.
13.3.2 Bottom-Up Methods for the Synthesis of Nanostructured Solids.
Appendix 1.
Appendix 2.
Appendix 3.
JOHN N. LALENA, PhD, is a Visiting Professor of Chemistry at The Evergreen State College, an Adjunct Assistant Professor of Chemistry at the University of Maryland University College–Europe, and an Affiliate
Research Assistant Professor at Virginia Commonwealth University. Previously, Dr. Lalena was a senior research scientist for Honeywell Electronic Materials and a product/process semiconductor
fabrication engineer for Texas Instruments.
DAVID A. CLEARY, PhD, is Professor and Chair of the Department of Chemistry at Gonzaga University.
Sample Secondary Texts and Supplementary Resources for MA 380
Secondary Education Seminar in Mathematics
1. Pre-Algebra. Upper Saddle River, NJ: Globe Fearon Educational Publisher, 1997.
Includes text, classroom resource binder, workbook, and solutions.
Algebra I
1. Algebra I Applications, Equations, and Graphs by R. Larson, L. Boswell, T. Kanold, & K. Stiff. Evanston, IL: McDougal Littell, 2001.
2. Algebra: Structure and Method Book 1 by R. Brown, M. Dolciani, R. Sorgenfrey, & W. Coe. Evanston, IL: McDougal Littell, 2000.
1. Geometry Applying, Reasoning, and Measuring by R. Larson, L. Boswell, T. Kanold, & K. Stiff. Evanston, IL: McDougal Littell, 2001.
2. Geometry for Enjoyment and Challenge by R. Rhoad, G. Milauskas, & R. Whipple. Evanston, IL: McDougal Littell/Houghton Mifflin, 1997.
1. Geometry by R. Jurgensen, R. Brown, & J. Jurgensen. Evanston, IL: McDougal Littell, 2000.
2. Patty Paper Geometry by M. Serra. Emeryville, CA: Key Curriculum Press, 1994.
Algebra II/Trigonometry
1. Algebra II Applications, Equations, and Graphs by R. Larson, L. Boswell, T. Kanold, & K. Stiff. Evanston, IL: McDougal Littell, 2001.
2. Algebra and Trigonometry: Functions and Applications, classic edition by P. Foerster. Menlo Park, CA: Addison-Wesley, 1999.
3. Algebra II: Integration, Applications, Connections by W. Collins, G. Cuevas, A. Foster, B. Gordon, B. Moore-Harris, J. Rath, D. Swart, & L. Winters. New York: Glencoe McGraw-Hill, 2001.
4. Algebra and Trigonometry: Structure and Method Book 2 by R. Brown, M. Dolciani, R. Sorgenfrey, & R. Kane. Evanston, IL: McDougal Littell, 2000.
Advanced Mathematics
1. Functions, Statistics, and Trigonometry, 2^nd edition by S. Senk, S. Viktora, Z.
Usiskin, N. Ahbel, V. Highstone, D. Witonsky, R. Rubenstein, J. Schultz, M. Hackworth, J. McConnell, D. Aksoy, J. Flanders, & B. Kissane (The University of Chicago School Mathematics
Project). Glenview, IL: Scott Foresman Addison Wesley, 1998.
1. Precalculus, 5^th edition by M. Sullivan. Upper Saddle River, NJ: Prentice-Hall, 1999.
Unified Approaches
1. Contemporary Mathematics in Context: A Unified Approach by A. Coxford, J. Fey, C. Hirsch, H. Schoen, G. Burrill, E. Hart, & A. Watkins (CORE-PLUS Mathematics Project). Chicago: Everyday Learning, 1997.
Includes Course 1 Parts A, B teacher's guides, teaching resources, and assessment resources; and Course 2 Part A text.
2. Interactive Mathematics Program, Year 1 by D. Fendel & D. Resek. Emeryville, CA: Key Curriculum Press, 1997.
Includes the text and the following IMP Year 1 modules:
a. Patterns
b. The Game of Pig
c. The Overland Trail
d. The Pit and the Pendulum
e. Shadows | {"url":"http://academics.smcvt.edu/gashline/381/schooltexts.htm","timestamp":"2014-04-16T10:17:53Z","content_type":null,"content_length":"4828","record_id":"<urn:uuid:dbd7a744-637b-4000-80b8-52f8ce4d386c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00545-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complex Analysis
Mathematics 352, Spring 2005 Complex Analysis It is no paradox to say that in our most theoretical moods we may be nearest to our most practical applications.
Ma 416: Complex Variables Solutions to Homework Assignment 9
Ma 416: Complex Variables Solutions to Homework Assignment 9 Prof. Wickerhauser Due Thursday, November 10th, 2005 Read R. P. Boas, Invitation to Complex Analysis ...
Math 522-Analysis II Fall 2010
Classes: MWF 8:50 a.m., van Vleck B313. Instructor: Andreas Seeger Text: Principles of mathematical analysis, by Walter Rudin.
Solutions to Real and Complex Analysis
Solutions to Real and Complex Analysis Steven V Sam ssam@mit.edu July 14, 2008 Contents 1 Abstract ... which is measurable third edition, by Walter Rudin 1
Problems and Solutions in REAL AND COMPLEX ANALYSIS
Problems and Solutions in REAL AND COMPLEX ANALYSIS ... Complex Analysis 38 2.1 1989 April ... the skeptical reader is encouraged to consult Rudin ...
examples and counterexamples, about limits, continuity ...
Elementary Real Analysis: Fall 2001 TEXT: Michael Spivak, Calculus, 3rd edition, Houston: Publish or Perish, Inc., 1994 Four main goals you should have for this term ...
, Complex analysis, 3rd ed., McGraw-Hill, New York, 1979.
Here is a list of few books dealing with complex analysis and related ideas. ... [32] Walter Rudin, Real and complex analysis, 2nd ed., McGraw-Hill, New York, St. Louis, San ...
Rudin's Principles of Mathematical Analysis: Solutions to Selected ...
The following are solutions to selected exercises from Walter Rudin's Principles of Mathematical Analysis, Third Edition, which I compiled during the Winter of 2008 while a ...
CALIFORNIA STATE UNIVERSITY, SACRAMENTO Department of Mathematics and Statistics SYLLABUS Math 241A: Foundations of Applied Mathematics Prerequisite: Math 134 is ...
John Gilbert, Rudin Management
November 2009 @ the Intersection of Commercial/Corporate Real Estate and Technology John Gilbert, Rudin Management The Strategic Value of Technology www.realcomm.com IN THIS ...
Problems and Solutions in REAL AND COMPLEX ANALYSIS
Problems and Solutions in REAL AND COMPLEX ANALYSIS William J. DeMeo July 9, 2010 © William J. DeMeo. All rights reserved. This document may be copied for personal use.
Suggested additional references: H. L. Royden, Real Analysis, 2nd Edition, Chapters 1-14. W. Rudin, Real and Complex Analysis, 2nd Edition, Chapters 1-9.
MATHEMATICS Advanced Calculus Analysis COURSE OUTLINE 2010-2011
MATHEMATICS Advanced Calculus Analysis COURSE OUTLINE 2010-2011 Course Number - MATH-3101(6)-001 Course Name - ADVANCED CALCULUS AND ANALYSIS Instructor Information ...
Complex Analysis II Questions for Exam 1
Complex Analysis II Questions for Exam 1 Harmonic and subharmonic functions Definition of the Poisson transform. Poisson transform of a function continuous in T is ...
Mathematics 352, Spring 2005 Complex Analysis
Mathematics 352, Spring 2005 Complex Analysis It is no paradox ... difficult than the required text, by Walter Rudin ... Use the Examples in the text as exercises with solutions ...
Homework 10 - Chapter 7 of Rudin | {"url":"http://www.cawnet.org/docid/solutions+rudin+complex+analysis/","timestamp":"2014-04-16T20:08:45Z","content_type":null,"content_length":"46702","record_id":"<urn:uuid:555f8be3-7b41-446d-a6d1-3eaeacece321>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lilburn Precalculus Tutor
...Yes, I started pretty young. So I still remember how it is to be a student and struggle in math classes. I can tutor pretty much any math subject and basically have tutored every math subject.
20 Subjects: including precalculus, calculus, statistics, geometry
...When I was in high school I went to statewide math contests for two years and placed in the top ten both times. This algebra deals mostly with linear functions. Algebra 2 is a more advanced,
more complex version of algebra 1.
22 Subjects: including precalculus, calculus, geometry, ASVAB
...I have been using Microsoft Word since it came out on the market for Mac almost 30 years ago. I use Word on multiple platforms for work and private letters daily. I will be very happy to
teach any student how to be proficient on Microsoft Word.
15 Subjects: including precalculus, geometry, algebra 1, algebra 2
...I believe I can teach players to improve their skills and understand the concepts of basketball to be able to play an important role on any team of which they are members. Over the last five
years of being a Youth Pastor at my church, one of the jobs that I have carried has been to design cards, banners, and special tracts for various occasions.
35 Subjects: including precalculus, chemistry, English, physics
...As an undergraduate and graduate student in genetics, this subject is one that I know inside and out. I can tutor basic Mendelian genetics, complex patterns of inheritance, molecular biology/
genetics, and eukaryotic and prokaryotic genetics. I have also tutored genetics to undergraduate students.
15 Subjects: including precalculus, chemistry, biology, algebra 2 | {"url":"http://www.purplemath.com/Lilburn_Precalculus_tutors.php","timestamp":"2014-04-16T13:36:24Z","content_type":null,"content_length":"23874","record_id":"<urn:uuid:8922dbc3-7247-45dd-a48d-d524081926f4>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00218-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probability and Finance: It’s Only a Game!
Glenn Shafer and Vladimir Vovk
Abstracts and keywords for the e-book edition
August 11, 2011
Chapter 1. Introduction: Probability and Finance as a Game
This chapter sketches the game-theoretic framework that is expounded and used in the rest of the
book. We propose it as a mathematical foundation for probability. But it also has philosophical content;
all the classical interpretations of probability fit into it.
The framework begins with a two-person sequential game of perfect information. On each round,
Player II states odds at which Player I may bet on what Player II will do next. In statistical modeling,
Player I is a statistician and Player II is the world. In finance, Player I is an investor and Player II is a
market. The framework is based on two principles: the principle of pricing by dynamic hedging (Player I
can combine his bets over time), and the hypothesis of the impossibility of a gambling system, also called
Cournot's principle or the efficient market hypothesis (no strategy for Player I can avoid all risk of
bankruptcy and have a reasonable chance of making him rich).
KEYWORDS: game-theoretic probability, upper price, lower price, subjective probability,
objective probability, stochasticism, fundamental interpretative hypothesis, dynamic hedging.
Game theory can handle classical topics in probability, including the weak and strong limit theorems. No
measure theory is needed.
Chapter 2. The Game-Theoretic Framework in Historical Context
This chapter sketches the historical development of the mathematics and philosophy of
probability, starting from the seventeenth-century work of Pascal and Fermat. It covers the work of De
Moivre, Bernoulli, and Laplace, and the rise of measure theory at the beginning of the twentieth century.
Special attention is paid to Kolmogorov’s axioms and their philosophical interpretation, and to the path
from von Mises’s collectives to Jean Ville’s martingales. The hypothesis of the impossibility of a gambling
system is also traced historically.
KEYWORDS: the problem of points, equal possibility, frequency, measure theory, Kolmogorov’s
axioms, collective, gambling system, complexity, martingale, prequential principle, neosubjectivism.
Chapter 3. The Bounded Strong Law of Large Numbers
This chapter states and proves the strong law of large numbers in its simplest forms. We begin
with the very simplest case, where a coin is tossed repeatedly. Player I (Skeptic) is allowed to bet each
time on heads or tails, as he chooses and in the amount he chooses. Then Player II (Reality), who sees how
Skeptic has bet, decides on the outcome. The strong law says that Skeptic has a strategy for betting,
beginning with a finite stake, that does not risk bankruptcy and makes him infinitely rich unless Reality
makes the proportion of heads converge to one-half. The result generalizes easily to the case where Reality
chooses a real number in a bounded interval; in this case, a third player (Forecaster) sets a price at which
Skeptic can buy positive or negative amounts of Reality’s move. The proofs explicitly exhibit Skeptic’s
KEYWORDS: strong law of large numbers, fair-coin game, bounded forecasting, martingale,
diabolical reality.
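As an illustration of the protocol only (a minimal sketch, not the
strategy constructed in the chapter's proof), here is the fair-coin game
with a Skeptic who stakes a small fraction of his capital against the
currently over-represented side; the points being illustrated are the
order of moves and the no-bankruptcy constraint:

    import random

    def play_fair_coin_game(rounds, epsilon=0.01):
        # Each round: Skeptic announces a stake (positive = bet on heads),
        # then Reality announces the outcome; here Reality simply tosses a
        # fair coin rather than playing adversarially.
        capital, heads = 1.0, 0
        for n in range(rounds):
            excess = heads - n / 2                  # deviation after n rounds
            stake = (-1 if excess > 0 else 1) * epsilon * capital
            outcome = random.random() < 0.5         # Reality's move: heads?
            heads += outcome
            capital += stake if outcome else -stake
            # |stake| < capital, so Skeptic can never go bankrupt
        return capital, heads / rounds

    print(play_fair_coin_game(100000))   # frequency comes out near one-half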
Chapter 4. Kolmogorov's Strong Law of Large Numbers
This chapter proves a game-theoretic version of Kolmogorov’s strong law, which applies to an
unbounded sequence of predictions. In this case, Forecaster sets not only a price for Reality’s forthcoming
move, but also a game-theoretic variance: a price for the squared deviation of this move from its price. We
show that Skeptic has a strategy that will make him infinitely rich without risking bankruptcy unless
Reality satisfies the condition that holds with probability one in the measure-theoretic version of
Kolmogorov’s strong law: the average difference between Reality’s move and its price converges to zero if
the sum over n of the nth variance divided by n² is finite. Using Martin’s theorem, which asserts the
determinateness of Borel games, we can also conclude that Reality can avoid the convergence to zero
without making Skeptic infinitely rich whenever Forecaster makes the weighted sum of variances diverge.
KEYWORDS: unbounded strong law, martingale, supermartingale, upper forecasting, probability
game, Martin’s theorem, Borel game.
Chapter 5. The Law of the Iterated Logarithm
The law of the iterated logarithm, first proven for coin tossing by Aleksandr Khinchin in work
published in 1924, concerns the rate and oscillation of the convergence that the strong law of large numbers
asserts will take place. It sets an asymptotic bound on the deviation from the limit, and it asserts that the
oscillation will eventually stay within that bound (validity) but within no tighter bound (sharpness). The game-
theoretic version of this theorem is analogous to the game-theoretic version of the strong law: Skeptic has
a strategy that will make him infinitely rich without risking bankruptcy unless the oscillation satisfies the
stated conditions. In the game-theoretic framework, however, it is natural to distinguish between the
conditions under which the bound is valid and the stronger conditions under which it is sharp.
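For reference, the classical measure-theoretic statement for coin tossing (outcomes $x_i = \pm 1$, $S_n = x_1 + \cdots + x_n$) is

$$\limsup_{n\to\infty} \frac{S_n}{\sqrt{2n \ln \ln n}} = 1 \quad \text{almost surely.}$$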
KEYWORDS: iterated logarithm, validity, sharpness, unbounded forecasting, predictably
unbounded forecasting, large deviations.
Chapter 6. The Weak Laws
The weak law of large numbers and the central limit theorem are concerned with a game that has
only a finite number of rounds. The game-theoretic framework formulates them as theorems about the
price at which Skeptic can reproduce certain variables—the lowest initial capital with which he can be sure
to equal or exceed the variable’s value at the end of the game. This chapter explains this for the simplest
cases, the weak law of large numbers for coin tossing (Bernoulli’s theorem) and the central limit theorem
for coin tossing (De Moivre’s theorem). In the case of Bernoulli’s theorem, we are concerned with the
price of a variable that is equal to one in the event that the final proportion of heads is sufficiently close to
one-half and zero otherwise; this is the game-theoretic probability of the event. In the case of De Moivre’s
theorem, we use Lindeberg’s method of proof to obtain the price for a payoff that depends on the final
deviation of the proportion of heads from one-half; this leads to the heat equation and its solution, an
integral with respect to the normal distribution. We conclude by using parabolic potential theory to
generalize De Moivre's theorem to the case where Skeptic is allowed to bet on the errors being small but
not on their being large; this corresponds to heat propagation with heat sources.
KEYWORDS: game-theoretic price, game-theoretic probability, upper price, lower price, upper
probability, lower probability, weak law of large numbers, central limit theorem, martingale, parabolic
potential theory, heat diffusion, normal distribution.
Chapter 7. Lindeberg's Theorem
In the early 1920s, Lindeberg gave the most general conditions under which the central limit
theorem holds. This chapter expresses these conditions in game-theoretic terms and derives Lindeberg’s
theorem, using the same type of argument as the preceding chapter used for De Moivre’s theorem. We also
give a number of examples of the theorem, including an application to weather forecasting.
KEYWORDS: Lindeberg protocol, Lindeberg’s condition, central limit theorem, coherence,
game-theoretic price, game-theoretic variance, martingale gains, probability forecasting.
Chapter 8. The Generality of Probability Games
This chapter formulates the game-theoretic framework more abstractly and demonstrates its power
more generally. We show that the strongest forms of the classical measure-theoretic limit theorems are
special cases of the corresponding game-theoretic ones. We give general definitions of game-theoretic
price and probability. We show how the framework accommodates quantum mechanics and statistical
models that do not specify full probability measures. Finally, we briefly recount the life and relevant work
of Jean Ville.
KEYWORDS: measure-theoretic limit theorems, gambling protocol, probability protocol,
quantum mechanics, Cox’s regression model, Ville’s theorem.
The game-theoretic framework can dispense with the stochastic assumptions currently used in finance
theory. It can use the market, instead of a stochastic model, to price volatility. It can test for market
efficiency with no stochastic assumptions.
Chapter 9. Game-Theoretic Probability in Finance
This chapter introduces the game-theoretic approach to finance that is developed in the remaining
chapters of the book. We begin by reviewing the standard probabilistic treatment of stock-market prices, in
which the price of a stock is assumed to follow a geometric Brownian motion. We also review the idea,
championed by Mandelbrot, of measuring the wildness of prices using concepts related to fractal
dimension. Then, at a heuristic level, we review the derivation of the classical Black-Scholes formula and
explain our game-theoretic alternative. Instead of relying on the assumption of geometric Brownian
motion, this alternative asks the market to price a derivative that pays a measure of market volatility as a
dividend. Other derivatives can then be priced using the Black-Scholes formula with the market price of
the dividend-paying derivative substituted for the theoretical variance of the underlying security. The
chapter concludes with an introduction to our approach to the efficient-market hypothesis.
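For orientation, the classical Black-Scholes formula referred to here prices a European call (spot $S$, strike $K$, interest rate $r$, volatility $\sigma$, time to maturity $T$) as

$$C = S\,N(d_1) - K e^{-rT} N(d_2), \qquad d_{1,2} = \frac{\ln(S/K) + (r \pm \sigma^2/2)\,T}{\sigma\sqrt{T}},$$

where $N$ denotes the standard normal distribution function.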
KEYWORDS: geometric Brownian motion, Wiener process, variation spectrum, Hölder exponent,
fractal dimension, Black-Scholes equation, Black-Scholes formula, dividend-paying derivative,
informational efficiency, stochastic volatility, stochastic differential equations, Itô’s lemma, risk-neutral
valuation, Girsanov’s theorem.
Chapter 10. Games for Pricing Options in Discrete Time
Although the standard probabilistic theory for pricing and hedging options is formulated in
continuous time, real hedging must be conducted in discrete time. In this chapter, we develop the game-
theoretic treatment in discrete time, with a precise treatment of the errors that arise from the hedging. We
begin, for simplicity, by describing a discrete-time version of the model of option pricing that Bachelier
invented in 1900. This model, which uses ordinary Brownian motion instead of geometric Brownian
motion, leads to a variant of the game-theoretic central limit theorem in which the remaining variance for a
sequence of trials is priced by the market on every round. We develop precise error bounds on this central
limit theorem. The analogous procedure for geometric Brownian motion leads to precise error bounds for
game-theoretic Black-Scholes hedging. The chapter concludes with some empirical studies of the
parameters that affect the hedging error.
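A minimal sketch of the kind of discrete-hedging experiment described (a generic illustration, not the book's protocol or its error bounds; the Black-Scholes delta and a risk-neutral geometric Brownian path are assumed):

```python
import math, random

def N(x):
    """Standard normal distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bs_delta(S, K, r, sigma, tau):
    """Black-Scholes delta of a European call, N(d1)."""
    d1 = (math.log(S / K) + (r + sigma**2 / 2) * tau) / (sigma * math.sqrt(tau))
    return N(d1)

def hedging_error(S0=100, K=100, r=0.0, sigma=0.2, T=1.0, steps=250, seed=1):
    """Sell a call at its Black-Scholes price, delta-hedge at `steps`
    discrete times, and return the terminal error (portfolio minus payoff)."""
    rng = random.Random(seed)
    dt = T / steps
    d1 = (math.log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    cash = S0 * N(d1) - K * math.exp(-r * T) * N(d2)  # premium received
    S, shares = S0, 0.0
    for i in range(steps):
        target = bs_delta(S, K, r, sigma, T - i * dt)
        cash -= (target - shares) * S       # rebalance the stock position
        shares = target
        cash *= math.exp(r * dt)            # cash accrues interest over dt
        S *= math.exp((r - sigma**2 / 2) * dt
                      + sigma * math.sqrt(dt) * rng.gauss(0, 1))
    return cash + shares * S - max(S - K, 0.0)  # shrinks as steps grows

print(round(hedging_error(steps=250), 4))
```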
KEYWORDS: option pricing, discrete hedging, Bachelier’s central limit theorem, Black-Scholes
pricing, stochastic hedging, relative variation.
Chapter 11. Games for Pricing Options in Continuous Time
Using nonstandard analysis, we pass from the practical but messy discrete theory of the preceding
chapter to an idealized continuous limit, in which exact hedging is achieved. We derive the limiting theory
for both the Bachelier and Black-Scholes cases. We also show that the (dt)^{1/2} effect, a crucial part of the
standard theory’s assumption that stock prices follow a Brownian motion, emerges in the limit from the
game-theoretic approach. In appendices, we review nonstandard analysis and connections with the
stochastic theory.
KEYWORDS: nonstandard analysis, Bachelier pricing, Black-Scholes pricing, diffusion model.
Chapter 12. The Generality of Game-Theoretic Pricing
In this chapter, we show that the game-theoretic approach can handle various practical and
theoretical complications that are discussed in the existing literature on the stochastic approach. For
simplicity, we conduct the discussion in the continuous setting. We show how to take interest rates into
account and how to handle jumps. We also discuss alternatives to our dividend-paying derivative.
KEYWORDS: interest rate, risk-free bond, jump process, Poisson distribution, weather
derivative, Poisson protocol, stable distribution, infinitely divisible distribution, Lévy process.
Chapter 13. Games for American Options
The elementary theory of option pricing, considered in the last four chapters, is concerned with
European options, which have a fixed date of maturity. A somewhat more complicated theory is needed for
the more common American options, which can be exercised by the holder whenever he pleases. In
general, a higher initial capital may be required to hedge an American option, since the greater freedom of
action of the holder must be replicated. This chapter develops the game-theoretic approach to American
options quite generally and shows how the upper price for such options can be found using the same
techniques from parabolic potential theory that we encountered in our study of the one-sided central limit
theorem in Chapter 6.
KEYWORDS: weak price, strong price, market protocol, passive instrument, exotic option, super-
replication, parabolic potential theory.
Chapter 14. Games for Diffusion Processes
The idea that a process obeys a particular stochastic differential equation can be expressed game-
theoretically; we simply interpret the stochastic differential equation as a strategy for the third player in the
game, Forecaster. This leads to a game-theoretic version of Itô’s lemma and to an alternative game-
theoretic derivation of the Black-Scholes formula. This way of looking at diffusion processes has the
advantage that it allows us to omit completely the drift term in applications where it is irrelevant; we
simply assume that Forecaster prices only the square of Reality’s move, not the move itself.
KEYWORDS: stochastic differential equation, drift, volatility, quadratic variation, Itô’s lemma,
diffusion protocol.
Chapter 15. The Game-Theoretic Efficient-Market Hypothesis
Much of the theory of Part I of the book can be applied to financial markets. In this case,
Cournot’s principle is replaced by the efficient-market hypothesis, which says that a speculator cannot beat
the market by a large factor without risking bankruptcy. In this chapter, we exploit this idea to derive
finance-theoretic strong and weak laws. We also discuss relations between risk and return that follow from
this form of the efficient-market hypothesis.
KEYWORDS: securities market, numéraire, finance-theoretic strong law, arbitrage, horse races,
iterated logarithm, empirical volatility, risk and return, value at risk. | {"url":"http://www.docstoc.com/docs/89497929/ebook","timestamp":"2014-04-20T00:15:45Z","content_type":null,"content_length":"66109","record_id":"<urn:uuid:714a895b-5b89-4fc2-90a4-5298ec4b1e02>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00123-ip-10-147-4-33.ec2.internal.warc.gz"} |
uniform convergence
March 11th 2010, 06:45 AM #1
Let $f_n= n \chi_{[\frac{1}{n},\frac{2}{n}]} \,$ on $\mathbb{R}$ for all $n \in \mathbb{N}$ and $f=0$ on $\mathbb{R}.$
Show that there does not exist a set of measure zero, on the complement of which $(f_n)$ is uniformly convergent.
Your functions $f_n$ are converging pointwise to the zero function. If you had some set $E$ as above, then you'd need, for any $\varepsilon>0$, that there exists some $N_\varepsilon$ so that $|
f_n|<\varepsilon$ on $E^c$ for all $n>N_\varepsilon$. But if $E$ has measure zero, then the complement contains points arbitrarily near zero, i.e. points $x$ such that $x\in[\frac1n,\frac2n]$ for
$n$ as large as you like, which contradicts uniform convergence.
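(In symbols: since $E$ has measure zero, $E^c \cap [\frac1n,\frac2n] \neq \emptyset$ for every $n$, and at any such point $x$ we have $|f_n(x)-f(x)| = n$, so $\sup_{E^c}|f_n-f| \geq n$ infinitely often, which rules out uniform convergence on $E^c$.)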
Proof by Induction (Urgent hint required)
November 24th 2008, 05:38 AM #1
Let X be a discrete random variable and $g: R_X \rightarrow \mathbb{R}$ a continuous function, which is convex, i.e. for all $x_1,x_2 \in R_X$ and $\lambda \in (0,1)$
$g(\lambda x_1 + (1-\lambda)x_2) \leq \lambda g(x_1) + (1-\lambda) g(x_2).$
(a) For $\lambda_i \geq 0$ with $\sum_{i=1}^{n} \lambda_i = 1$ show that $g(\sum_{i=1}^{n} \lambda_i x_i ) \leq \sum_{i=1}^{n} \lambda_i g(x_i)$
I have let P(n) denote the above statement and shown that it holds for n = 1, but I am finding it hard to progress much further. I really need help with this and any hints or tips would be appreciated.
Google for Jensen's inequality.
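(For the record, the induction step the hint points to: assume the inequality holds for $n$, take $\lambda_1,\dots,\lambda_{n+1} \geq 0$ summing to 1 with $\lambda_{n+1} < 1$ (the case $\lambda_{n+1}=1$ is trivial), and set $\mu_i = \lambda_i/(1-\lambda_{n+1})$, so that $\sum_{i=1}^{n}\mu_i = 1$. Then

$$g\Big(\sum_{i=1}^{n+1}\lambda_i x_i\Big) = g\Big((1-\lambda_{n+1})\sum_{i=1}^{n}\mu_i x_i + \lambda_{n+1}x_{n+1}\Big) \leq (1-\lambda_{n+1})\,g\Big(\sum_{i=1}^{n}\mu_i x_i\Big) + \lambda_{n+1}\,g(x_{n+1}) \leq \sum_{i=1}^{n+1}\lambda_i\,g(x_i),$$

where the first inequality is two-point convexity and the second is the induction hypothesis.)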
Thank you, managed to solve it
Vista, CA Math Tutor
Find a Vista, CA Math Tutor
...I taught first grade for four years and I know exactly what your child needs to be a successful reader! In college, I was an AVID Tutor for High School Students. I learned multiple ways to be
more successful in school just by changing a few habits at school and home.
19 Subjects: including prealgebra, English, reading, Spanish
...I am more than prepared to teach math, English, and any other general academic subjects. Patience, humor, and flexibility are all key themes in how I tutor students regardless of any
obstacles they are currently facing. Most times they are more than capable to succeed in struggling academic subjects, however they just need to unlock this potential.
25 Subjects: including algebra 1, English, reading, Spanish
...Her images have also been published widely, including in LIFE, The New York Times, and Architectural Record. I took extensive art history courses at the University of Rome, Italy, and also
for my BA and MFA degrees at the University of California at San Diego. I've traveled extensively in Eur...
26 Subjects: including algebra 1, English, geometry, prealgebra
...I like to help students achieve their goals in math! I have a very relaxed and friendly approach. I know that most students that need help in math are already somewhat intimidated.
17 Subjects: including SAT math, algebra 1, algebra 2, grammar
...This reduces the anxiety that many students often face. I look forward to new tutoring challenges and helping students achieve their goals! I am a certified biology teacher with a master's
degree in education with specialization in biology which required me to take and pass multiple genetics courses.
11 Subjects: including algebra 1, algebra 2, geometry, precalculus
Tyrone, GA Math Tutor
Find a Tyrone, GA Math Tutor
...I am uniquely qualified to tutor students in Zoology. While in college I took two courses in Zoology and made an A in each course. I have extensive tutoring experience in Biology, which is the
major branch of science that includes both Zoology and Botany.
57 Subjects: including geometry, algebra 1, algebra 2, SAT math
Hello. I have six years experience as a Certified teacher of middle grades and high school science. I spent those years teaching high school science (grades 9-12) including Biology, Physical
Science, and Chemistry to at-risk youth in an alternative school environment.
5 Subjects: including algebra 1, prealgebra, chemistry, biology
...Everything around us can somehow traced back to the laws of Physics, which I think is very fascinating. I have tutored Physics for over 10 years and believe my success with students comes from
being able to relate physics concepts to day to day experiences and examples. This makes it much easie...
19 Subjects: including calculus, algebra 1, algebra 2, geometry
...I also spent eight years at a private academy in Atlanta where I was responsible for the language arts program for grades K-8. It was in this capacity that I became proficient in teaching
phonics/reading. I have been retired for a year and am now returning to part-time teaching as a private tutor. My lessons are based on Latin and Greek prefixes, suffixes, and roots.
15 Subjects: including SAT math, reading, English, writing
...For three years I taught first grade. For two years I taught remedial math skills to 3rd, 4th, and 5th graders. I love helping students reach their potential and their educational goals!
10 Subjects: including algebra 1, prealgebra, geometry, reading
A message to calm your nerves...using math
Hey everyone!
A sentiment that used to be quite obvious seems to have been forgotten here in the buckeye nation as of late. Going undefeated is hard.
It seems like everyone is very worried about the possibility of the buckeyes going undefeated and being left out of the championship game. This is a very real possibility, and I completely understand
your concerns, but the sentiment has been skewed over the season to the point where some people think it's "unlikely" an undefeated Ohio State team would be selected to go to the national
championship game, and even a post saying "almost ZERO" chance today. It frustrates me when people start using words like "chance" or "probability" which can easily be calculated, and then proceed to
spew a baseless opinion, which many times happens to be quite incorrect, so I decided to go the scientific route.
I thought I'd use some probabilities to actually see what the numbers looked like. It's rather easy to calculate; I'll summarize very briefly: if you assign each game a probability of a win, you simply multiply all those probabilities together for that team to find the odds of it going undefeated. I'll attach my spreadsheet for the numbers I assigned for the 5 teams most are worried
about, but here were my findings:
*note: these are obviously a little subjective, as even though the math is objective the percentage for each individual game was something I came up with. I tried to be as fair as I could. See the
spreadsheet for details.
Alabama has a 28.6% chance of going undefeated (despite having at least a 60% chance of winning each individual game)
Oregon has a 13.3% chance of going undefeated
Clemson has a 13.9% chance of going undefeated
Stanford has a 13.8% chance
Florida State has a 11.58% chance.
Obviously, subtract from 100 and you get the odds of each team losing a game.
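(For anyone who wants to replicate the method, a minimal sketch; the per-game probabilities below are placeholders, not the spreadsheet's actual numbers:)

```python
from math import prod

# Placeholder per-game win probabilities -- illustrative only.
schedules = {
    "Alabama": [0.90, 0.85, 0.80, 0.75, 0.70, 0.60],
    "Oregon":  [0.85, 0.80, 0.75, 0.70, 0.65, 0.60],
}

for team, games in schedules.items():
    p = prod(games)  # independent games: multiply the win probabilities
    print(f"{team}: {p:.1%} chance of running the table")
```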
If you take Ohio State going undefeated as a given (which you definitely shouldn't, but for the sake of the "if we go undefeated" argument, we will), there's actually a 40.6% chance that we'd be the ONLY undefeated team (of the six teams most consider "in the hunt").
Even if you are of the crowd that thinks the SEC automatically gets into the NC game, and we're fighting for the number 2 spot, there's still a 56.9% chance that Oregon, Clemson, Stanford, and FSU ALL
lose a game between now and the end of the season. That's right, by my projections it's actually more likely than not that all of those teams lose a game.
Are the odds good enough that you should bet your house on it? Absolutely not. It's very possible OSU gets left out. But for those of you who are about ready to give up hope, we've got a lot better
shot than you think.
I think where many start to panic is when you look at a team like Oregon or Clemson, see they are favored in every game, and think that means they are more likely than not to go undefeated.
That's not how probability works. If you roll a six-sided die, most of the time it's going to be something other than, say, 6, but if you have to roll the die 8 times, you're likely to get a six somewhere in there. So even though a team like Oregon will be favored every week from here on out, it's actually less likely than not that they finish undefeated. The 6 is a loss, in this case.
If you have a beef with any of the numbers in my spread sheet for individual games, I'd be happy to discuss my reasoning.
I hope this little math break calms your buckeye nerves!
This is the kind of math they need to be teaching in high school! Music class should be all the OSU songs. Hmm, what else . . .
Nice job. Incidentally, what were your game by game probabilities for OSU?
I wouldn't rule out Nebraska just yet. Very impressive win in Champaign for the Huskers, against a very good Illinois offense where they held them to just 19 points. As long as Taylor Martinez never
plays for the Huskers again, we should meet them in the B1GCCG
The offseason begins when your season ends. Even then there are no days off.
Good job putting this together.
A question came up yesterday: How to adjust the calculations given that Stanford plays Oregon and FSU plays Clemson? Obviously, there is an auto loss to distribute to either Stanford or Oregon and an
auto loss to distribute to either FSU or Clemson. For example, there is a zero percent chance that three+ of those four teams will go undefeated. Do your numbers account for this problem?
One totally unscientific way I tried to think about this problem was to assume two "survivors" from those four teams and then sort of average out the probability that the survivors would win their
other six games. I'm sure there's a much better way to do it, mathematically, but logically it makes some sense to think of these four teams => two survivors.
I don't blame you at all. For some silly reason, I got curious about this problem and was hoping to hear how to deal with it, in theory of course, because I don't want to do the work myself, either.
Like you, I enjoy spitting out some back-of-the-envelope numbers, but please don't ask me to crack one of my old text books.
I'm not sure about that - is there a math whiz in the house? From what I can tell, your calculations for each individual team are just fine - keeping in mind, as you noted, that we have to make the
game-by-game probability guesses. The tricky part, I think, is when we start to gauge the odds of 1, 2, or 3, etc. undefeated teams from those independently-arrived numbers. From what I can tell, your
numbers do not show that there's a zero percent probability that 3 of those 4 will go undefeated, and so working backward from that problem, I wonder if the calculations for 2, 1, or 0
undefeated teams are also off just a bit?
Oregon and Stanford going undefeated are mutually exclusive events so the probability of either of them going undefeated is the sum of the probabilities of each one going undefeated.
Let O = Oregon going undefeated, S = Stanford undefeated, etc. for the ACC and P12.
P (2 teams going undefeated from ACC & P12) = (P(S) + P(O))* (P(C) + P(FS))
The multiplication is allowed since ACC and P12 games are independent events.
Ends up being 27.1% chance of undefeated P12 team, 25.5% chance of undefeated ACC team, and the 28.6% chance of undefeated Alabama. Probability of all 3 happening (product of the 3 above) becomes
1.97%. Probability of 2 happening (sum of the product of the 3 combinations of 2) is 21.9%.
Probability of P12 or ACC champ being undefeated would be 52.6% (sum of all 4). Probability of exactly 1 of the 2 is 52.6 - 6.9 = 45.7%, since you have to subtract the probability of both events happening.
The 40.6% should be 38.9% (chance none of them go undefeated) and the 57 should be 54.3 (chance neither P12 or ACC champ goes undefeated). You just didn't account for the fact that someone has to win
the game between S and O, C and FSU.
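(The same correction in code, a sketch using the thread's numbers; the variable names are mine:)

```python
# Per-team undefeated probabilities from the post. Stanford/Oregon play
# each other, as do Clemson/FSU, so within each pair the events are
# mutually exclusive and the probabilities simply add.
p_S, p_O  = 0.138, 0.133    # Stanford, Oregon
p_C, p_FS = 0.139, 0.1158   # Clemson, Florida State
p_A       = 0.286           # Alabama (independent of both pairs)

p_p12 = p_S + p_O           # undefeated Pac-12 champ: 27.1%
p_acc = p_C + p_FS          # undefeated ACC champ:   25.5%

print(f"all three leagues undefeated: {p_p12 * p_acc * p_A:.2%}")    # ~1.97%
print(f"no undefeated P12 or ACC champ: {(1-p_p12)*(1-p_acc):.2%}")  # ~54.3%
```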
Thanks! And roughly 22 percent seems about right. Perhaps the more likely scenario that would keep an undefeated Ohio State out is an undefeated ACC or P12 team + one loss SEC champ.
I thought it said meth... I'll walk backwards slowly...
I don't think meth calms nerves very well!!! ;)
I knew history said more than two undefeated teams are unlikely (but certainly possible) - Thanks for the math to back that up.
Good work here, thanks for doing the Math so I didn't have to. Hopefully this will calm some folks down, but probably not
This is cool what you did here, but it's still just your opinion fused together by a cute little math problem. It's no basis for anything but i'm picking up what you're putting down. Good work I like
it. +1 for sheer creativity.
It is for sure and I enjoyed it good post brother.
but what if it snows?! Calculate that!
Seriously, nice work..
Interesting analysis, but using math does not calm my nerves.
"It was my understanding that there would be no math"
Italics are for emphasis; an ellipsis represents an unfinished thought.
I like it, now time to have a drink to calm me down!
Go Bucks!!
Great work. You can't argue with math, because, if you do, you will lose!
Here's another really basic math fact that should help to illustrate your point that it's rare and difficult to go undefeated.
In 123 seasons, Ohio State has had exactly 6 (SIX) undefeated seasons.
123-6=117 seasons in which the Buckeyes have lost at least one game.
Basically one undefeated in every 20.5 seasons.
It's not crazy. It's math!
It would appear your post has a lot of numbers.
(All kidding aside this is what all of the "chicken littles" needed.)
Toledo - Ohio's right armpit
"A troll by any other name is still a troll".
Some smart dudes at Stanford actually developed a statistical model to calculate the probability of a team winning a game: http://www.stanford.edu/class/stats50/handouts/stern.pdf
I know of only two things that are infinite, space and human stupidity.....and I'm not sure about space". Albert Einstein.
Number of results: 215,501
math -fractions
ordering fractions 1/2,7/8,9/10,1/3,3/5,1/4 write the above fractions in order.
Monday, January 10, 2011 at 4:20pm by arjel
I agree with ^ and fractions are not my favorite units but I'm only in 5th grade and have a lot of fractions in my future. Just try and trust me ( a complete stranger ). :)
Friday, May 7, 2010 at 12:12am by Aubree loves Math, just not fractions!!
math fractions
Adam wants to compare the fractions 3/12, 1/6, and 1/3. He wants to order them from least to greatest and rewrite them so they all have the same denominator. Explain how Adam can rewrite the fractions. If anyone could help me explain this to my 4th grader; I am really bad with fractions.
Tuesday, February 26, 2013 at 4:49pm by ttuuffyy
Like fractions are fractions with the same denominator. You can add and subtract like fractions easily - simply add or subtract the numerators and write the sum over the common denominator. Before
you can add or subtract fractions with different denominators, you must first ...
Tuesday, January 17, 2012 at 9:07pm by Laruen
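(A quick illustration of the rule in that snippet, a minimal sketch using Python's standard fractions module; the example numbers come from other answers on this page:)

```python
from fractions import Fraction

# Like fractions: same denominator, so just add the numerators.
print(Fraction(3, 8) + Fraction(1, 8))   # 4/8, reduced automatically to 1/2

# Unlike fractions: Fraction rewrites them over a common denominator.
print(Fraction(2, 3) + Fraction(3, 8) + Fraction(5, 12))  # 35/24
```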
3rd grade math
Change all fractions to equivalent fractions with a denominator of 60. Either that or change these fractions to decimals.
Wednesday, March 20, 2013 at 7:02pm by Ms. Sue
3rd grade math
Change all fractions to equivalent fractions with a denominator of 60. Either that or change these fractions to decimals.
Wednesday, March 20, 2013 at 7:03pm by Ms. Sue
(Egyptian fractions are fractions where the numerator can only be one.) Find two Egyptian fractions which, when added together, equal 11/32.
Tuesday, May 8, 2012 at 8:02pm by livy
6 = 5 4/4; 5 4/4 - 1/4 = 5 3/4. You could find the common denominator for those fractions and convert them to equivalent fractions. But the easier way is to convert these fractions to decimals. 1/3 =
0.33 4/9 = 0.44 and so on
Monday, April 18, 2011 at 7:51pm by Ms. Sue
Algebra 1-Fractions
Or, eliminating fractions, I should say. So, I need some help. See, I am really not a big fan of fractions. But I need to eliminate fractions to do a math problem. First one is 1/2-x=3/8. I know how
to find LCD, then multiply both sides, distributive property, etc etc. But ...
Monday, September 16, 2013 at 8:16pm by Breighton
Applied Business Math/College student
2/3+1/6+11/12= First you need to change these fractions to equivalent fractions with a common denominator. If you post the equivalent fractions, we'll be glad to help you solve this problem.
Saturday, January 29, 2011 at 8:19pm by Ms. Sue
The simplest way is to change the fractions to decimals and multiply. 13.25 * 11.5 = ? If your teacher wants you to use the fractions, then change the two numbers to improper fractions. 53/4 * 23/2 =
1219/8 = 152 3/8
Saturday, July 25, 2009 at 4:04pm by Ms. Sue
Convert all of these fractions to equivalent fractions with a common denominator. An easier way is to use a calculator to convert each of these fractions to decimals. Then you can compare them
easily. 7/10 = 0.7 5/12 = 0.42 1/2 = 0.5 5/16 = 0.31
Tuesday, February 15, 2011 at 10:25pm by Ms. Sue
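(The same comparison in code, a minimal sketch; Fraction objects compare exactly, so no decimal rounding is needed:)

```python
from fractions import Fraction

fracs = [Fraction(7, 10), Fraction(5, 12), Fraction(1, 2), Fraction(5, 16)]
print(sorted(fracs))   # [5/16, 5/12, 1/2, 7/10]
```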
It is possible to divide fractions by fractions. You can write a unit rate dealing with fractions by using fraction division. For example: If ½ of the apples are rotten in every ¾ of the boxes then
the unit rate is: ⅔ rotten apples per box. The only difference is that ...
Thursday, September 5, 2013 at 8:24pm by Graham
b. What do you think the fractions that are expressed as terminating decimals have in common? Think about equivalent fractions and common multiples. c. Do these fractions follow the same pattern as what
you decided about the first set of fractions? d. Why or why not? Note: The ...
Monday, August 2, 2010 at 11:22am by Ms. Sue
Fractions are an important part of your daily lives. Describe some practical applications for fractions in your daily life and some challenges that you have experienced regarding the use of
Tuesday, May 18, 2010 at 10:30am by ree
Shelly and Marcom are selling popcorn for their music club. Each of them received a case of popcorn to sell. Shelly has sold 7/8 of her case and Marcom has sold 5/6 of his case. Which of the
following explains how to find the portion of popcorn they have sold together? A.Add ...
Thursday, November 8, 2012 at 7:45pm by Jerald
math -fractions
Change the fractions to equivalent fractions with the same denominator. 7/8 = 21/24 5/6 = 20/24 Follow the same directions I posted before -- except you could draw rectangles, rather than circles.
Tuesday, January 4, 2011 at 6:25pm by Ms. Sue
Actually, that is the best reason. I use the following criterion. If I see one of the variables having a coefficient of 1 OR -1, I solve for that variable and use substitution, resulting in no
fractions, unless the equation contains fractions to begin with. As a matter of fact...
Monday, February 18, 2008 at 9:52pm by Reiny
Separate the fractions 2/6, 2/5, 6/13, 1/25, 7/8, and 9/29 into two categories: those that can be written as a terminating decimal and those that cannot. Write an explanation of how you made your decisions.
b. Form a conjecture about which fractions can be expressed as terminating ...
Sunday, September 27, 2009 at 7:22pm by Anonymous
By the way, fractions are also a great part of standard music notation. You need to know fractions to read musical notes.
Monday, July 27, 2009 at 1:14am by mathland
Fractions don't format well here. Try using a/b for fractions. I'll do #1 and you can post your own answers for the others, which we will be happy to check. #1. 7 2/3 + 8 5/6 One way is to add the
whole numbers, then add the fractions: 7+8 + 2/3 + 5/6 15 + 2/3 + 5/6 Now, set ...
Tuesday, December 20, 2011 at 2:14pm by Steve
I got part a. I do not understand the rest. b. form a conjecture about which fractions can be expressed as terminating decimals. c. test your conjecture on the following fractions; 6/12, 7/15, 28/
140, and 0/7. d. use the idea of equivalent fractions and common multiples to ...
Monday, August 2, 2010 at 11:22am by Betty
math -fractions
Change these fractions to equivalent fractions with the same denominator. 1/2 = 60/120 7/8 = 105/120 9/10 = 108/120 1/3 = 40/120 3/5 = 72/120 1/4 = 30/120 Now you can arrange them in order.
Monday, January 10, 2011 at 4:20pm by Ms. Sue
I know, but wouldn't you have to go through confusing fractions and multiply by fractions to get a whole number for x? I am trying to follow the teacher's directions, which do not include
Tuesday, December 14, 2010 at 8:08pm by David
math 4th grade
You can find these decimals by long division. The other way to solve this is to find a common denominator. Then convert all of the fractions to fractions with a common denominator. That's complicated
with these five fractions.
Tuesday, April 13, 2010 at 9:50pm by Ms. Sue
Math- Fractions
I know this seems easy, but i stink at fractions. What is .105 as a fraction?
Monday, November 19, 2007 at 7:59pm by Ariana
Find the LCD for each of the fractions. Then convert them to equivalent fractions.
Monday, September 27, 2010 at 3:23pm by Ms. Sue
Change the fractions to equivalent fractions with the same denominator.
Sunday, November 7, 2010 at 3:47pm by Ms. Sue
Joanne, do you want me to help Change the fractions to equivalent fractions with the same denominator??
Sunday, November 7, 2010 at 3:47pm by Erin
I would change the mixed fractions to improper fractions first.
Monday, June 27, 2011 at 4:11pm by bobpursley
math estimating w/fractions
If you round these fractions, 4 5/9 is more than 4 1/2 so it could be rounded to 5.
Thursday, December 15, 2011 at 6:18pm by Ms. Sue
how do you compare 8 fractions with different denominators, including improper fractions?
Tuesday, February 14, 2012 at 4:47pm by john
Change the fractions to equivalent fractions with a common denominator.
Tuesday, March 27, 2012 at 7:55pm by Ms. Sue
What is the common denominator of these fractions? What are the equivalent fractions?
Monday, November 1, 2010 at 7:29pm by Ms. Sue
The sum of two fractions is 1 1/2. Their difference is 5/12. What are the fractions?
Sunday, March 9, 2014 at 9:30pm by Kristen
8th grade
youtube(dot)com/watch?v=BeCQWUl1p00&feature=related Be sure to change your mixed fractions into improper fractions. When you divide fractions, you multiply by the reciprocal.
Sunday, September 12, 2010 at 9:14pm by Anonymous
Math Fractions
What is the first step to adding multiple fractions (3) with different denominators?
Wednesday, July 8, 2009 at 1:40pm by Angie
x - 1/3 = 4/5 x = 4/5 + 1/3 Convert the fractions to equivalent fractions with a common denominator. Add.
Sunday, November 7, 2010 at 8:36pm by Ms. Sue
rounding fractions to 0, 0.5, or 1. Where would 0.599 lie? Can you round fractions down?
Thursday, January 26, 2012 at 6:16pm by lexi
Change the fractions to equivalent fractions with a common denominator or to decimals.
Tuesday, January 31, 2012 at 6:04pm by Ms. Sue
If I'm naming fractions, are these fractions in the right order from least to greatest? 1/4 1/3 1/2?
Thursday, January 31, 2013 at 5:52pm by Jerald
Change all of these fractions to equivalent fractions with a denominator of 12.
Wednesday, February 6, 2013 at 9:02pm by Ms. Sue
4th grade math fractions
three different improper fractions that equal 4 1/2
Tuesday, March 23, 2010 at 8:24pm by rita
Change all of the fractions to equivalent fractions with a common denominator.
Sunday, June 2, 2013 at 6:00pm by Ms. Sue
I don't see where you need to use fractions to find the total. 9 + 6 = 15
Tuesday, February 11, 2014 at 9:41am by PsyDAG
Ok, I don't know the answers for these percents, decimals, and fractions. You have to change decimals to percents, fractions to decimals, and percents to fractions. 0.23 3/100 32 1/2% 0.25 3/5 75% 1/8
0.835 10% 95% 4% 120% 0.3333.... 1.05 1/6 If you can please help me anyone =\
Friday, March 14, 2008 at 1:23am by Kenya
5th grade math
rewrite each pair of fractions as equivalent fractions with a common denominator? 2/3, 3/4
Monday, January 9, 2012 at 4:30pm by Alexia
The least common denominator is 238. Change these fractions to equivalent fractions.
Thursday, November 29, 2012 at 7:09pm by Ms. Sue
What formula to use when finding patterns of sum of unit fractions #egyptian fractions
Wednesday, April 24, 2013 at 12:23pm by Jessica
4th grade math
Do you know how to change these fractions to equivalent fractions with the same denominator?
Monday, March 24, 2014 at 6:14pm by Ms. Sue
Math repost for Grace
Check this site. http://themathpage.com/arith/add-fractions-subtract-fractions-1.htm
Friday, September 28, 2007 at 6:50pm by Ms. Sue
Please explain this; not sure about fractions, thanks :) 1/3, 5/6, and 3/8 = add the fractions, reduce to lowest terms
Monday, November 12, 2007 at 4:25pm by Anonymous
Math adding fractions
What is the best way to add fractions and reduced them to the lowest terms example 3/8+1/3+5/6
Friday, November 30, 2007 at 8:27pm by Elena
Rename percent as fractions and fractions as percents. 1/10 40% 66 2/3% 8/4 125% 2/25
Wednesday, January 30, 2008 at 3:17pm by brittany
Rename percent as fractions and fractions as percents. 1/10 40% 66 2/3% 8/4 125% 2/25
Wednesday, January 30, 2008 at 2:55pm by brittany
It's easier to subtract or add similar fractions, rather than fractions and whole numbers. 1 1/4 - 3/4 = ??? 5/4 - 3/4 = 2/4 = 1/2
Monday, January 25, 2010 at 10:30pm by Ms. Sue
math fractions
Convert the second two fractions so they have the same common denominator.
Wednesday, December 15, 2010 at 3:57pm by Ms. Sue
Math fractions
use multiplication to write three fractions equivalent to each given fraction. 1. 2/3 2. 3/5 3. 5/8 4. 9/10
Tuesday, January 24, 2012 at 8:58pm by Anonymous
Since the fractions seem to be run together, are you asking about these fractions? 0.4/0.2 = 2/1 0.4/0.2=2/1.2 0.4/0.2=1.6/1 0.4/0.2=1.6/1.2 Which one do you think is proportional?
Sunday, November 29, 2009 at 5:09pm by Ms. Sue
Help! These are fractions, so bear with me: -5/8 x = -9/10. Solve with the multiplication principle. How do I do this? The fractions throw me off every time!!!
Sunday, March 28, 2010 at 11:56am by bob
math dividing fractions
divide the fractions: 3/4 ÷ 1/2 = 3/4 × 2/1 = 6/4 = 1.5 That is, Mark ate 1.5 times as much as Julia, or 50% more.
Friday, September 28, 2012 at 12:14am by Steve
When adding or subtracting fractions, we must have equivalent fractions with the same denominator. 1 1/2 = 1 4/8 (1 4/8) - (3/8) = 1 1/8 cups
Wednesday, November 7, 2012 at 5:04pm by Ms. Sue
Change the fractions to equivalent fractions using 12 as the common denominator. Then add the numerators. What do you get?
Thursday, January 10, 2013 at 5:04pm by Ms. Sue
yay...lol Rename percent as fractions and fractions as percents. 1/10 40% 66 2/3% 8/4 125% 2/25
Wednesday, January 30, 2008 at 2:55pm by brittany
False. You only do that in adding fractions. In multiplying, you may cross out fractions, but you do not have to find a common denominator.
Thursday, April 4, 2013 at 9:24pm by Knights
Algebra 1
c. Multiplying x/6 - 5/8 = 4 by 6 did not eliminate all the fractions. What could you have multiplied by to get rid of all the fractions? Explain how you got your answer and write the equivalent equation
that has no fractions. HELP ME PLEASE!!!!!!! I don't understand this.
Monday, May 11, 2009 at 8:06pm by Kelsie
HELP!!!!!! i have a huge test on circles radius , circumference , diameter. Also Equivalent Fractions and Comparing and ordering fractions.
Wednesday, January 16, 2008 at 8:43pm by Gabrielle
math 116
Is it always necessary to have a common denominator to add and subtract fractions? Why? What about multiplication and division with fractions?
Monday, March 24, 2008 at 12:37am by Rosetta
Those three fractions do not add up to 10/20. Convert the fractions to their equivalents with a denominator of 60.
Friday, September 11, 2009 at 1:29pm by Ms. Sue
You and manias are both wrong. You should see that since 2 + 3 = 5. Change the fractions to equivalent fractions with a common denominator.
Thursday, August 30, 2012 at 6:55pm by Ms. Sue
how do you estimate fractions with different denominators on a number line quickly? including mixed and improper fractions 4/7 8/9 3/2 1/3 1/2 5/8 9/11
Tuesday, September 18, 2012 at 7:09pm by basketball king
how do you order fractions while estimating with different denominators including mixed numbers and improper fractions quickly?
Tuesday, September 18, 2012 at 7:46pm by basketball king
Math - fractions
http://www.aaamath.com/ Click on Fractions and then on Multiplying Fractions. 9/16 * 14/15 = ??
Saturday, October 8, 2011 at 11:28am by Writeacher
Study these sites carefully. http://www.slideshare.net/sondrateer/fractions-1196964 http://www.math.com/school/subject1/lessons/S1U4L3GL.html https://www.khanacademy.org/math/arithmetic/fractions/
Adding_and_subtracting_fractions/v/adding-and-subtracting-fractions https://www....
Wednesday, June 12, 2013 at 7:46pm by Ms. Sue
Check this site for explanations and examples. http://www.themathpage.com/Arith/add-fractions-subtract-fractions-1.htm
Saturday, July 12, 2008 at 3:23pm by Ms. Sue
The two fractions in (a) are both equal to 4. The numerators and denominators are in the same proportion. Saying that the fractions themselves are "proportional" makes no sense to me.
Friday, June 27, 2008 at 11:33pm by drwls
I am having to add, subtract, multiply and divide fractions, and whole numbers with fractions. My question is that I do not know how to do fractions at all and I need some help.
Monday, February 16, 2009 at 9:16pm by marie
You need to either convert these fractions to decimals or to fractions with a common denominator. 1/7 = 0.143 1/9 = 0.111 2/7 = 0.286 1/12 = 0.083
Tuesday, February 7, 2012 at 2:51pm by Ms. Sue
What is x? x + 2 11/14 = 5 13/21. Basically, I want to know the formula for finding x. I tried to change the fractions to equivalent fractions... any other ways?
Wednesday, February 29, 2012 at 4:10pm by jocelyn
Two fractions equivalent to 2/5: 4/10 and 6/15. Two fractions equivalent to 7/11: 14/22 and 21/33 Two fractions equivalent to 150/325: 300/650 and 450/975. To solve this I multiplied each fraction by
a number (e.g. 2/2) For 2/5, I did 2/5 x 2/2 = 4/10. I hope I helped!
Monday, May 24, 2010 at 8:43pm by Daniel
The common denominator is 30. http://www.coolmath4kids.com/fractions/fractions-04-equivalent-01.html
Monday, February 10, 2014 at 6:38pm by Ms. Sue
show how to simplify before you multiply 3 1/2 x 2 2/7. Change both mixed fractions to "improper" fractions: 3 1/2 x 2 2/7 = 7/2 x 16/7
Tuesday, May 22, 2007 at 6:17pm by jan
Describe some practical applications for fractions in your daily life. What are the challenges you have experienced regarding the use of fractions? Explain your answers.
Sunday, September 21, 2008 at 2:53pm by nikki
Describe some practical applications for fractions in your daily life. What are the challenges you have experienced regarding the use of fractions? Explain your answers.
Monday, October 27, 2008 at 11:37am by Carla
Describe some practical applications for fractions in your daily life. What are the challenges you have experienced regarding the use of fractions? Explain your answers.
Sunday, September 21, 2008 at 2:53pm by Anonymous
No. Your original question was about adding and subtracting fractions. When you multiply or divide fractions, you do not need a common denominator. 3/5 divided by 2/3 = 3/5 * 3/2 = 9/10
Friday, October 9, 2009 at 6:00pm by Ms. Sue
Prompt. Use equivalent fractions to order these fractions from least to greatest: 2/3, 1/2, 4/12, 5/6. Explain the steps you took to find your answer.
Monday, November 26, 2012 at 9:48pm by Susie
The error? The denominators can't be added. The student should have changed both fractions to equivalent fractions with a common denominator. (2/3)(3/5) = 6/15 = 2/5
Tuesday, February 4, 2014 at 5:24pm by Ms. Sue
Well - Two fractions equivalent to 2/5: 4/10 and 6/15. - Two fractions equivalent to 7/11: 14/22 and 21/33. - Two fractions equivalent to 150/325: 300/650 and 450/975. To solve this you should have
multiplied each fraction by a number (e.g. 2/2). For 2/5, I did 2/5 * 2/2 = 4/10. I ...
Monday, May 24, 2010 at 8:43pm by smiley
Change the fractions to equivalent fractions with the same denominator. 2/3 = 16/24 3/8 = 9/24 5/12 = 10/24 Add the numerators. Simplify the answer. What do you get?
Tuesday, May 28, 2013 at 7:17pm by Ms. Sue
Check this site. http://www.intmath.com/Factoring-fractions/6_Multiplication-division-fractions.php
Thursday, September 18, 2008 at 9:14pm by Ms. Sue
7th Grade Math
http://www.khanacademy.org/math/arithmetic/fractions/v/decimals-and-fractions http://www.mathsisfun.com/converting-decimals-fractions.html
Wednesday, June 20, 2012 at 8:34pm by Ms. Sue
When we add or subtract fractions, we must have a common denominator. 2 1/4 = 2 2/8 When I added the two fractions, I added the whole numbers and the numerators. That is 5 9/8. But since 9/8 is
larger than 1, I simplified the fraction to 6 1/8. Which part doesn't the child ...
Thursday, March 14, 2013 at 5:09pm by Ms. Sue
Study this site. http://www.coolmath4kids.com/fractions/fractions-02-mixed-numbers-01.html What do you think? I'll be glad to check your answer.
Monday, June 17, 2013 at 7:43pm by Ms. Sue
6th grade math
This is a multiplication of fractions problem. (3/10) * (3/4) = 9/40 in. If you wanted to use a calculator, change the fractions to decimals. 0.3 * 0.75 = 0.225 in.
Monday, September 17, 2012 at 7:27pm by Ms. Sue
-8 2/3 - -5 1/3 How would you set up? Go here and scroll to 4 http://www.themathpage.com/ARITH/add-fractions-subtract-fractions-1.htm the answer would be -3 1/3
Monday, November 13, 2006 at 6:05pm by Jessica
Those are correct. One way to illustrate these fractions is to draw pizzas. http://www.mathsisfun.com/fractions.html
Monday, April 29, 2013 at 5:59pm by Ms. Sue
Change these fractions to equivalent fractions with a common denominator. 3/7 = 27/63 1/9 = 7/63 2/3 = 42/63
Thursday, February 16, 2012 at 8:12pm by Ms. Sue
elementary math
I am going to be a teacher one day. I need help explaining this: A student asks whether it is easier to add fractions or multiply fractions. What should my response be?
Saturday, August 22, 2009 at 7:33pm by micah
st: Re: FE over two columns
st: Re: FE over two columns
From Christopher Baum <kit.baum@bc.edu>
To tazz_ben <tazz_ben@wsu.edu>
Subject st: Re: FE over two columns
Date Mon, 24 Oct 2011 13:41:38 -0400
Then something like
foreach c in US UK MX {
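// dummy = 1 when country `c' appears as either exporter or importer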
g `c'dum = expt=="`c'" | impt=="`c'"
will work to generate the set of country dummies. You could use the levelsof command to get the list of all the countries in your data.
I imagined that as you were speaking of FEs you meant you had panel data, as that term is usually used
in the context of panel data.
Kit Baum | Boston College Economics & DIW Berlin | http://ideas.repec.org/e/pba1.html
An Introduction to Stata Programming | http://www.stata-press.com/books/isp.html
An Introduction to Modern Econometrics Using Stata | http://www.stata-press.com/books/imeus.html
On Oct 24, 2011, at 1:31 PM, tazz_ben wrote:
> Hi Christopher,
> If you are running time series panel data what you said makes a lot of
> sense. But in my case, I'm running a model that occurs in a single year.
> You might say that's dumb (and I actually don't disagree), but it is a
> homework assignment (not my own research) I don't get to change the model.
> So, if I were to apply fixed effects in the way you are describing to my
> data, each row would have its own fixed effect; which obviously isn't what
> I want.
> So, what I'm wanting to do is not combine the two columns as you described
> with Group, but in fact there should be two 1 valued fixed effects on each
> row. So, to your example, for UK->US trade data row, there should be the
> US Fixed Effect, and UK Fixed Effect. For the US->Mexico trade data row,
> it should have 1 values for the US Fixed Effect and Mexico Fixed Effect.
> Ben
> ---
> <>
> This is probably a stupid question that can easily be done within Stata.
> So I have a gravity model with two columns containing the country names
> (obviously). I want to apply country fixed effects: which means the index
> of the FE model is over two columns (ie. a country specific dummy should
> be set to one regardless of whether it appears in column one or column
> two).
> I only have about 50 countries, so I can easily do this manually with
> excel then push it into stata; but I thought I should learn how to do this
> if Stata can do it internally.
> If the two columns are labeled exporter and importer (as is usual in
> analyzing trade data)
> you can't specify country fixed effects based on a country being either an
> X or an M, as the
> dummies would not be mutually exclusive (e.g. both US->UK and UK->US
> contain each of those
> countries). Usually with this sort of data you create fixed effects for
> all combinations, e.g.
> egen exorem = group(exporter importer)
> tsset exorem year (or whatever is your time unit)
> exorem will take on different values for each trade linkage.
> Kit
> Kit Baum | Boston College Economics & DIW Berlin |
> http://ideas.repec.org/e/pba1.html
> An Introduction to Stata Programming |
> http://www.stata-press.com/books/isp.html
> An Introduction to Modern Econometrics Using Stata |
> http://www.stata-press.com/books/imeus.html
when ur going to the mall
and u actually have money
(via sodamnrelatable)
Ghost Car Appears Out of Nowhere
do you ever want to sleep for 14 years without waking up
(Source: giggle, via imnotyourprincessleavemealone)
i don’t think anyone realises how much effort i put into trying not punching everyone at school
(via guy) | {"url":"http://pinkhobofish.tumblr.com/","timestamp":"2014-04-16T15:58:48Z","content_type":null,"content_length":"31526","record_id":"<urn:uuid:6e008bcd-e82e-4da2-acb5-c0e662d9e7c0>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00533-ip-10-147-4-33.ec2.internal.warc.gz"} |
Plotting Confidence Intervals in R
Hi all,
I see some nice graphs where both 90% and 95% confidence intervals are plotted (e.g. the line is the 95% interval, whereas small ticks indicate the 90% interval). Could someone point to example code for this in R?
I would really appreciate it.
I did that but could not find anything that describes plotting 90% and 95% CIs together. Would appreciate any help.
ask someone else:
Reg y x, 95 90
I assume from the way you stated the problem that you know how to do 95% lines alone?
Replicate that exact same code, but obviously calculating 90% interval values instead of 95, then use the plot function on them on the existing 95% plot but change line type = 2.
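(No complete code appears in the thread; for what it's worth, here is a minimal sketch of that idea, thin lines for the 95% intervals with a thicker overlay for the 90%, shown in Python/matplotlib rather than R and with made-up numbers:)

```python
import matplotlib.pyplot as plt

# Hypothetical point estimates and interval endpoints.
labels = ["beta1", "beta2", "beta3"]
est        = [0.40, -0.15, 0.25]
lo95, hi95 = [0.10, -0.45, -0.05], [0.70, 0.15, 0.55]
lo90, hi90 = [0.15, -0.40,  0.00], [0.65, 0.10, 0.50]

y = range(len(labels))
plt.hlines(y, lo95, hi95, lw=1)   # thin line: 95% interval
plt.hlines(y, lo90, hi90, lw=4)   # thick segment: 90% interval
plt.plot(est, y, "o")             # point estimates
plt.yticks(y, labels)
plt.axvline(0, ls="--", lw=0.5)
plt.show()
```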
what exactly is the problem? get your CIs, plot one set of lines with the 95% CIs, and another set of lines with the 90% CIs. Voila.
Op is good at R, but not Rochester good.
Is that what you're talking about? If so, code is there.
Since someone asked, my preferred style: https://gist.github.com/4332698
^ so much wasted space in that graph. remember, graphs are supposed to improve on a traditional tabular presentation. sometimes, an ol' fashioned table is the way to go.
^No. That graph reports 18 parameters in a concise fashion that is easily understood. It's not perfect (I don't report the constant in these graphs unless they're substantively important, for
instance), but it's better than a table.
My only complaint is that it is in color. Most journals don't accept color graphics except under special circumstances, so I prefer to do things in black and white to start with.
^ i only see 12 estimates reported. and due to differences in scale, the CIs are all over the map. at best, it's ugly. at worst, it's uninformative, since some of the CIs can't even be determined.
this is a graph for graphing's sake.
^Okay, I don't actually like THAT graph, but I like the idea. The scaling is a big issue, but that's more a question of implementation than concept.
And in terms of the parameters reported, I can't tell very well, but you could imagine a graph with 3 models, 6 parameters each, color-coded by model, etc. I've used graphs like that and it worked
well. But you're right that this is less useful (at best) in actual practice in this case. My point was more conceptual than about that graph in particular. | {"url":"http://www.poliscijobrumors.com/topic.php?id=74188","timestamp":"2014-04-18T00:15:31Z","content_type":null,"content_length":"17234","record_id":"<urn:uuid:c37c3469-1897-417d-912f-e01a1bb822a0>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00422-ip-10-147-4-33.ec2.internal.warc.gz"} |
Weekly Problem 50 - 2013
Copyright © University of Cambridge. All rights reserved.
How many different solutions are there to this letter sum?
Different letters stand for different digits, and no number begins with zero.
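(The letter sum itself is an image on the original page and is not reproduced here. Purely as an illustration of how such counts can be checked by brute force, here is a counter for a classic cryptarithm of the same type, SEND + MORE = MONEY, which is not this problem's sum:)

```python
from itertools import permutations

def count_solutions(a="SEND", b="MORE", total="MONEY"):
    """Count digit assignments solving a + b = total, with distinct
    digits and no leading zeros. A stand-in puzzle, not NRICH's."""
    letters = sorted(set(a + b + total))
    leading = {w[0] for w in (a, b, total)}
    count = 0
    for digits in permutations(range(10), len(letters)):
        m = dict(zip(letters, digits))
        if any(m[ch] == 0 for ch in leading):
            continue  # no number begins with zero
        val = lambda w: int("".join(str(m[ch]) for ch in w))
        if val(a) + val(b) == val(total):
            count += 1
    return count

print(count_solutions())  # SEND + MORE = MONEY has exactly one solution
```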
If you liked this problem, here is an NRICH task which challenges you to use similar mathematical ideas.
This problem is taken from the UKMT Mathematical Challenges. | {"url":"http://nrich.maths.org/2934/index?nomenu=1","timestamp":"2014-04-19T20:26:45Z","content_type":null,"content_length":"3514","record_id":"<urn:uuid:15a18172-06ea-47bd-be29-a4bb8c962ba0>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
Understanding Stresses and Modular Ratio | RCC Structures
In one of our previous articles, we discussed “Basic definitions and formulas”.
Now we will move on with our discussion on “Permissible stresses in concrete and steel” and “Understanding Modular ratio”.
Permissible Stresses in Concrete
Reinforced concrete designs make use of M15 grade concrete. The permissible stresses are different for the different grades of concrete; they are given below (values in N/mm²):
│Sr. No.│Permissible stress                      │M15│M20│M25│M30│
│1.     │Stress in compression: bending          │5  │7  │8.5│10 │
│       │Stress in compression: direct           │4  │5  │6  │8  │
│2.     │Stress in bond (average) for plain bars │0.6│0.8│0.9│1.0│
│3.     │Characteristic compressive strength     │15 │20 │25 │30 │
Also refer to IS:456-1978 for other values.
Permissible Stresses in Steel
The permissible stresses for the different grades of steel are specified in IS:456-1978.
The different grades of steel available in the market, with their market names, are as follows:
Mild Steel
Grade I steel is known as mild steel. The abbreviation used for Mild steel is (m.s.)
High Tensile deformed steel has two types. They are as follows:
1. Grade Fe415 (Tor-40 or Tistrong I)
2. Grade Fe500 (Tor-50 or Tistrong II)
The names of the high tensile deformed steel have been derived from their manufacturers.
For example:
• Tor-Isteg Steel Corporation in Calcutta manufactures Tor-40 and Tor-50. Hence, the name.
• Tata Iron and Steel Co. Ltd, Calcutta manufactures Tistrong I and Tistrong II.
(Being aware of the names of the manufacturers is important for students especially those studying Civil and Structural Engineering.)
Understanding Modular Ratios
It is defined as the ratio of the modulus of elasticity of steel (Es) to that of concrete (Ec). It is denoted by the letter "m".
The modular ratio is not constant for all grades of concrete; it varies with the grade. The ratio Es/Ec is, however, not used directly to calculate the modular ratio in reinforced concrete design.
As per IS:456-1978, m is calculated by the following formula:
m = 280 / (3σcbc)
where σcbc is the permissible compressive stress in concrete in bending (N/mm²).
Calculation of Modular ratio values for different grades of concrete
│Grade of concrete │Modular ratio           │
│M15               │m = 280/(3×5) = 18.66   │
│M20               │m = 280/(3×7) = 13.33   │
│M25               │m = 280/(3×8.5) = 10.98 │
│M30               │m = 280/(3×10) = 9.33   │
It should be remembered that rounding off the modular ratio values is not permitted by Indian Standard.
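(A quick Python check of the table above; note that 280/15 = 18.666..., so the tabulated 18.66 is a truncation rather than a rounding.)

# modular ratio m = 280 / (3 * sigma_cbc), with sigma_cbc in N/mm^2
sigma_cbc = {"M15": 5.0, "M20": 7.0, "M25": 8.5, "M30": 10.0}
for grade, stress in sigma_cbc.items():
    print(grade, 280.0 / (3.0 * stress))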
We shall discuss the following in our succeeding articles:
#1 by lalit on April 12, 2012 - 6:59 am
Thanks Benzujk for your valuable post. I really appreciate your effort.
#2 by Debal Chatterjee on April 13, 2012 - 1:13 am
May I also add to the concept of modular ratio. Please correct me if I am wrong. Suppose I ask u to add 1 apple + 1 guava, u cannot add them because they are two different item. Now while comparing
the moment of area of steel and concrete in a beam (say), the area of concrete is several times larger than steel, hence their moment of area cannot be compared because these 2 materials have
different properties. Hence the area of steel has to be converted in terms of concrete area and modular ratio is used as the conversion factor. As for example σst = m x σcbc, means the equivalent
area of steel in terms of concrete.
#3 by bhanudas on November 14, 2013 - 10:17 am
very good and understanding material available. | {"url":"http://www.civilprojectsonline.com/building-construction/understanding-stresses-and-modular-ratio-rcc-structures/","timestamp":"2014-04-18T00:13:20Z","content_type":null,"content_length":"47275","record_id":"<urn:uuid:d9019ed8-f85e-4d3e-91bc-9ab88a8ff41d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00336-ip-10-147-4-33.ec2.internal.warc.gz"} |
question about graphs / undirected tree
June 12th 2011, 02:00 AM #1
Junior Member
Jun 2011
question about graphs / undirected tree
I am a little bit stuck in understanding this question ....
Can someone please help me with this and explain how I find the answer?
"In a directed graph every vertex has an "in-degree" and an "out-degree".
The "in-degree" is the number of heads of edges (arrows) adjacent to a vertex, and the "out-degree" is the number of tails of edges adjacent to a vertex.
A tree is called a "rooted tree" if one vertex has been designated the root, which by definition must have an in-degree of 0 and an out-degree of 1.
All the other vertices must have an in-degree of 1.
Below is given an undirected tree. How many non-isomorphic "rooted trees" can be created from this tree by choosing an appropriate root?"
I would say you can create "10" trees here ... because there are 10 points where you can create a root of.
I just don't get the "non-isomorphic "rooted trees" part here ... kind of confuses me ...
Or is the answer "5" because I can only pick the vertices that are on "on the "left sides" and the vertices that are starting from below?
Or is the answer "1" because there is no way you can redraw this tree in another way?
Or is the answer 106 ... that we need to try to create a tree and see how many combinations is possible with 10 vertices?
I am kind of confused here ...
Last edited by iwan1981; June 12th 2011 at 03:16 AM.
Do you know what a graph isomorphism is ?
Here is an easy one to check your understanding:
How many non-isomorphic spanning trees does this graph have ?
isomorphism is that you can create a different shape of the graph by just only moving the vertices.
So you have 2 different shapes (graphs) for the eye ... but if you move the vertices around you can move back and forward between the 2 graphs:
example: with pictures I found here --> Graph isomorphism - Wikipedia, the free encyclopedia
And I have no clue what the answer is to your question ... maybe you can explain that ?
Re: question about graphs / undirected tree
Can't you make each of the 14 vertices a root? Many of the resulting trees will be isomorphic, though.
I just don't get the "non-isomorphic "rooted trees" part here ... kind of confuses me ...
You need to find the number of rooted trees such that there is no isomorphism between any pair of them. If you understand isomorphism, the question should be clear. I assume that we are
considering unordered trees, i.e., one can change the order of children of any vertex without making it a different tree. This is important for seeing that the trees with the root in the leftmost
vertex and in the topmost vertex are isomorphic.
I see 4 non-isomorphic trees rooted at the leftmost vertex, second left, central bottom and the one just above it, but this needs to be double-checked.
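For what it's worth, this is easy to check by computer using the AHU canonical form: encode each rooted tree as a string so that two rooted trees are isomorphic exactly when their strings are equal. A Python sketch follows; the adjacency list is a made-up example, since the thread's tree was an image, and the problem's definition forces the root to have in-degree 0 and out-degree 1, so only degree-one vertices qualify as roots.

def canon(adj, v, parent=None):
    # AHU encoding: sort the children's encodings so their order doesn't matter
    return "(" + "".join(sorted(canon(adj, c, v) for c in adj[v] if c != parent)) + ")"

adj = {0: [1], 1: [0, 2], 2: [1, 3, 4], 3: [2], 4: [2]}  # made-up tree
roots = [v for v in adj if len(adj[v]) == 1]             # only leaves may be roots
print(len({canon(adj, v) for v in roots}), "non-isomorphic rooted trees")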
{"url":"http://mathhelpforum.com/discrete-math/182883-question-about-graphs-undirected-tree.html","timestamp":"2014-04-17T04:01:16Z","content_type":null,"content_length":"41058","record_id":"<urn:uuid:6773fd61-dd07-4350-ab93-a488b53bc397>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
My students love Geometry because of Math out of the Box®!
25th February, 2010 - Posted by Sammie Gann - 15 Comments
Not all inquiry is based in the science classroom. My students have learned so much through a math program called Math Out of the Box®. It is a K-5 inquiry based math curriculum filled with
engaging, hands-on activities based on the latest research about how children learn. I love how it has reading and writing connections embedded within every lesson. It arrives with everything that
I need to get my year rolling. The students are engaged and look forward to every lesson. Sometimes, I cannot get them to lunch on time because they refuse to stop. That has made my center
planning much easier because I just move the lesson manipulatives over to the math station and they are already familiar with the concept. They could work for hours using the materials if I had the
time. Are any of you using the program in your classroom? I would love to hear which lesson is your favorite. Here is a copy of the Geometry Journal if you want to try it out with your kids.
Bright Idea Geometry Journal for Third Graders PDF version
Tags: Curriculum, engaging, free download, geometry, geometry journal, hands on, manipulatives, math, math out of the box, reading and writing connections, video
Posted on: February 25, 2010
Filed under: Uncategorized
15 Comments
February 25th, 2010 at 11:37 am
My daughter is in 2nd grade, and loves working with the Math Out of the Box manipulatives. At her school, she is currently working out of the traditional math textbook. She began to have a little
trouble with her math, so I got her some of math out of the box manipulatives to work with to see if the manipulatives would help her. She loves working with the manipulatives! When she works with
them, she can see it “hands on”, and it makes it easier for her to understand rather than just reading about how to do it.
May 25th, 2010 at 10:03 am
3rd grade student:
Hi, my name is Avery and I learned that not all prisims have six faces.Some have five!That is the trangle prism.also I learned that some cyilenders have ten faces and sixteen vertices!
May 25th, 2010 at 10:17 am
3rd grade student:
I learned that there are 2 3-d shapes that have 6 faces, 12 edges,and 8 virticeis.
May 25th, 2010 at 10:46 am
3rd grade student:
I learned that a cyclinder has 0 edges and vertices and that a cyclinder has 2 faces.
May 26th, 2010 at 12:01 pm
Thank you Avery, Noah, and Nevin for sharing what you have learned using the Geometry pieces and journal. You really have learned a lot. You guys are using some really big math vocabulary in your
reflections which is wonderful! Can you tell me more about the difference between a face, vertex, and edge? How could you explain those words to someone that isn’t as familiar with geometry terms
like you are? I challenge your class to come up with definitions to help us understand these terms!
May 26th, 2010 at 12:15 pm
Hmmm…. When looking at your responses guys, I have the following questions:
1. Does a cylinder really have vertices? How do you know? What is a vertex?
2. What is a face and what is an edge? I am confused by some of your responses. Can you get together and come up with a common definition to help me understand?
3. Nevin you on right on buddy! Can you explain how you got your answers to us?
May 26th, 2010 at 12:26 pm
Just wondering if you had any 3-D shapes to use in your classroom? Sometimes remembering all of the definitions gets very confusing, but if you have something concrete to work with, it might help you.
I'm excited to see students journaling about their math ideas. Notebooking isn't only for science!
June 8th, 2010 at 11:10 am
1. A cylinder does not have vertices because it does not have corners since it rolls.
2. A face is the flat surface but an edge is where 2 faces connect, I think….
3. A cylinder has 0 edges and vertices because there are no flat surfaces connecting because the 2 flat circle faces connect at a curve to a rectangle folded around it.
June 9th, 2010 at 9:03 am
Noah, Nevin, and Avery
It is so refreshing to see you guys work together to come up with a common definition.
* Pointing out that a cylinder rolls is a great observation and that it does not have corners. I am taking from this statement that vertices and corners are similar.
You guys have come a long way. I am proud of you!
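One more check the class could try: for any solid with flat faces and no holes, Euler's formula says V - E + F = 2, so counts of vertices, edges and faces can be verified in a couple of lines of Python (cylinders don't qualify, since they have curved faces):

shapes = {"cube": (8, 12, 6), "triangular prism": (6, 9, 5)}
for name, (V, E, F) in shapes.items():
    print(name, V - E + F == 2)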
{"url":"http://iteachinquiryblog.com/?p=123","timestamp":"2014-04-18T08:05:51Z","content_type":null,"content_length":"35821","record_id":"<urn:uuid:44302d2e-8355-47a3-b43f-bedd2a6aa8a7>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
Link Homology and Categorification in Kyoto
Posted by John Baez
Aaron Lauda points out an interesting conference:
You can see notes for some of the talks!
Categorifying knot invariants is all the rage, as these talks show:
The last talk here hints most strongly at the big picture. But, it’s best to look at all the talks, including ones I haven’t listed here, if you want to understand what’s really going on. It’ll take
a lot of work unless you’re pretty familiar with quantum invariants of knots, since the notes are quite terse.
Maybe some people who went to the conference can say a bit about the high points?
Posted at May 26, 2007 6:38 PM UTC
Re: Link Homology and Categorification in Kyoto
For those who are interested in this stuff, I’m sure there will be plenty of it at the Faro conference, which I’ll be talking about on my own weblog while I’m there in early July.
Posted by: John Armstrong on May 26, 2007 7:54 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
As Aaron is not in the same position as Derek and Jeff, could he be persuaded to tell us what he thought of this conference, or perhaps something on whether he feels ‘categorification’ means
something different to those working around Khovanov homology?
Posted by: David Corfield on May 27, 2007 12:12 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
I can say for a fact that many people working on knot homologies from the knot theory side think it means something different. Here’s a rough sketch of the evolution as filtered through my
“Categorification” in the sense we know it is introduced. Khovanov, as a student of Frenkel, brings representation theory to bear on certain categorifications and develops a homology theory whose
Euler characteristic is the Kauffman bracket (called “Jones polynomial” in the usual confounding of the terms). Knot theorists’ only exposure to the term “categorification” is through this homology,
and they identify the term as specifically referring to this sort of construction.
This became apparent to me at the AMS meeting in Oxford, OH this past March, when a number of the participants in the quantum topology session were confused by my use of the term “categorification”
over dinner one night. I was referring to the possibility of different categorifications of the bracket that might have nothing to do with homology theories. Since then I’ve become excited about
categorifications by anafunctors, and I’m hoping to talk about them with Dr. Przytycki and his group at George Washington over the summer to help bring them into the wider world of categorifications.
Posted by: John Armstrong on May 27, 2007 12:58 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
I wouldn’t have characterised knot theorists’ understanding of categorification in this way. It may be true that there are many people interested in Khovanov homology who don’t know about the wider
picture, but that’s not universal, and certainly not the case amongst most of the people at the Kyoto conference, for example.
Perhaps a better way to put it is the following! This is perhaps the “big question” about Khovanov homology for the “Baez school”. :-)
“Why are triangulated categories unreasonably effective in producing good categorificiations?”
(Saying triangulated categories here is perhaps just a fancy way of saying ‘theories involving homology’.)
To back up this claim of unreasonable effectiveness, let me quickly (?) describe the Bar-Natan model of Khovanov homology, in a way that emphasises how things “get better” when we pass to a
triangulated category (that is, switch to thinking about complexes up to homotopy). This will follow my introductory talk in Kyoto, slides at http://tqft.net/kyoto1, especially pages 3,8,10-12.
The Bar-Natan model produces Cob(su[2]), a 2-category. Its objects are ‘points on a line’, its 1-morphisms ‘Temperley-Lieb diagrams’, but with no relations between them, and its 2-morphisms are
(linear combos of) surfaces between Temperley-Lieb diagrams, modulo some relations. (The surfaces have to have ‘vertical boundary’ matching the the sources and targets of the source and target
The relations are ‘sphere = 0’, ‘torus = 2’, and ‘neck cutting’, which says ‘cylinder = 1/2 ((punctured torus and disc) + (disc and punctured torus))’.
Let’s now pass to the ‘matrix category’, allowing formal direct sums of objects and matrices of surfaces. Now these relations allow you to prove an isomorphism: the circle is isomorphic to the direct
sum of two empty diagrams. If you go back through everything above and introduce gradings the right way, you’ll in fact see that these two empty diagrams are shifted up and down in grading by one.
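(To spell out the decategorification step: writing $\emptyset\{\pm 1\}$ for the grading shifts, the isomorphism reads $\bigcirc \cong \emptyset\{+1\} \oplus \emptyset\{-1\}$, so in the split Grothendieck group $[\bigcirc] = (q + q^{-1})[\emptyset]$, which is exactly the value of a closed loop in Temperley-Lieb, in the conventions where the loop parameter is $q + q^{-1}$.)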
What does this say? That the Grothendieck group (we better take the split group here, as we’re not in an abelian category), is exactly the usual Temperley-Lieb 1-category. (Actually, from what I’ve
said here, it might just be a quotient, but this is easy to patch; see, for example, math.GT/0612754.)
Thus we can say: “Cob(su[2]) categorifies TL, as a tensor category”. Of course, TL is a braided tensor category! How do we see Cob(su[2]) as a categorification of TL like that? The answer turns out
to be “switch to complexes over Cob(su[2]), up to formal homotopy”. The content of the proofs of the invariance of Khovanov homology then translates to: “Kom(Cob(su[2])) decategorifies (via a
triangulated Grothendieck group) to TL as a braided tensor category.”
And essentially this story is repeated all across knot homology. We make some construction, which decategorifies to some familiar gadget (for example, the MOY skein, for su[n] polynomials), but
there’s no sign of the braiding until we pass to a category of complexes. Even further afield, the work of Stroppel et. al, and of Kamnitzer and Cautis (see the conference notes, linked earlier in
the thread), are finding braidings in algebraic geometry, but in every case you have to insert the words “derived category of” in the right place to make things work.
I don’t know the answer to this, but perhaps you guys do!
Posted by: Scott Morrison on May 28, 2007 2:03 AM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
That’s a fascinating question about braidings. I guess in representation theory one finds braid groups appearing on the derived level because the fundamental braid symmetries in Lie theory factor
through Weyl group or Hecke algebra actions when one factors through cohomology or K-theory.
I think the same comment applies in mirror symmetry - e.g. the constructions of braid group actions through spherical twists (which if I understand is what Cautis-Kamnitzer use?) factors through a
reflection group action on K-theory.
I wonder if there is any general result along these lines?
I guess this doesn’t explain why you need derived as opposed to abelian categories though… my favorite answer for that is that derived categories are “function spaces” in the sense that you can do
harmonic analysis on them — e.g. you have good pullbacks and pushforwards and integral kernels, and functors between them are given by correspondences — like K-theory or cohomology. But I’d be
curious to hear a more pertinent answer :-)
Posted by: David Ben-Zvi on May 28, 2007 4:52 AM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
But I’d be curious to hear a more pertinent answer
You mean it’s not because you’re all secretly doing 2D TFT?
Posted by: Aaron Bergman on May 28, 2007 6:39 AM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
Scott said, “Why are triangulated categories unreasonably effective in producing good categorificiations?”
Scott, do you mean categorifications in general? Or those specific to knot theory? If the latter, I have absolutely nothing to say, but I can take a stab at the general question. Beware that I’m not
at all an expert.
A derived category is in many ways a linear analogue of the stable homotopy category. So the nonlinear, unstable analogue of your question would be Why are homotopy categories unreasonably effective
in producing good categorifications?
My feeling is that question is a little like the question of why the number 1 is so useful in counting. An $n$-category ought to give rise to a “shadow” $m$-category by forcing the $k$-isomorphisms
for $k\geq m+1$ to be equalities. In the case $n=0$, we get the set of isomorphisms classes. In the case $n=1$, I think we ought to get the homotopy category.
Now if categorification does anything, it makes $n$-categories for $n\geq 1$. In particular, taking the $m=1$ shadow, it makes homotopy categories. So a homotopy category really ought to be the first
thing you get to when categorifying.
Put another way, the reason why theories involving homology are useful in categorification is simply that homological algebra is the linearization of simplicial algebra, and simplicial algebra is
essentially what categorification is about.
Perhaps people who have written papers on these things can correct me now.
Posted by: James on May 28, 2007 10:07 AM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
Let me rephrase Scott’s question from the higher category theory perspective.
Why do we like Khovanov homology? Because it gives invariants of 2-tangles! (The sign problem having been resolved in math.GT/0701339.) What’s “the right way” to have an invariant of 2-tangles?
According to the periodic table philosophy, it’s to have a 2-monoidal 2-category with duals (that is, a 4-category with only one 0-morphism and only one 1-morphism) as explained in math.QA/9811139.
(From here on in every time I say “category” I’ll leave “with lots of duals” implicit.)
From this perspective what does the sentence “Khovanov homology is a categorification of the Jones polynomial” mean? Well “decategorification” means take (split) Grothendieck group. What does this do
on the periodic table? If you have a 1-monoidal 1-category, then its Grothendieck group is a ring, which is a 1-monoidal 0-category (everything here is enriched over Vect). Similarly if you have a
2-monoidal 1-category (a braided category) its decategorification is automatically abelian, that is it is a 2-monoidal 0-category.
So, when we take the 2-monoidal 2-category KHOV and we decategorify it we should get a 2-monoidal 1-category. But that’s just a braided category! And the Jones polynomial comes from a braided
category. So the whole theory of Khovanov homology is just that we have some 2-monoidal 2-category KHOV whose split Grothendieck group is just the 2-monoidal 1-category JONES (or Rep(U_q(sl_2)) if
you’re so inclined).
Alright, now we’re ready to ask the question. Why is it that if you start with an abelian 2-monoidal 1-category its categorification is triangulated? Restated, why are all interesting known
2-monoidal 2-categories triangulated? Why doesn’t that issue appear at the 2-monoidal 1-category level? What’s going on here?
Posted by: Noah Snyder on May 29, 2007 10:11 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
Noah, thanks, but what’s does it mean for a (2-monoidal) 2-category to be triangulated? I thought we were talking about triangulated 1-categories. Perhaps if I knew anything about KHOV, it would be
Posted by: James on May 29, 2007 11:20 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
Good question. Let me try to unpack things a little bit. The category that Scott was talking about was a category of (complexes of formal direct sums of) cobordisms between Temperley-Lieb diagrams.
However, Temperley-Lieb diagrams themselves form a category! The objects are unions of points and the Temperley-Lieb diagrams are the morphisms. So this all fits into a 2-category: objects = points,
morphisms = TL diagrams, 2-morphisms = complexes of cobodisms of TL diagrams modulo the Bar-Natan relations. Note that we are only taking complexes in the 3rd (cobordism) dimension. We don’t allow
complexes of unions of points (with TL diagrams between them).
So the triangulated categories that come up are categories of morphisms in a 2-category. So I guess I’m calling a 2-category triangulated if its morphism categories are triangulated. Maybe we need a
bit more. I’m not entirely sure what the right definition is. Somehow the triangulated structure only comes in on the 2-morphism level.
Posted by: Noah Snyder on May 30, 2007 12:33 AM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
Let’s see if I understand Scott’s post.
We start with the 2-category $Cob(su_2)$ as he described. Then we form a new 2-category $Kom(Cob(su_2))$ whose hom-categories are the (derived categories?) of the (chain-complex version of the?)
hom-categories in $Cob(su_2)$.
And $Kom(Cob(su_2))$ is a 2-monoidal 2-category, in other words a braided monoidal 2-category.
Then we decategorify $Kom(Cob(su_2))$ by forming the category whose hom-sets are the grothendieck groups of the hom-categories in $Kom(Cob(su_2))$. The resultant braided monoidal category turns out
to be equivalent to $TL$.
In this sense one says that “$Cob(su_2)$ is a categorification of $TL$”. Is that right?
I’m interested in nice explicit examples of “braided monoidal 2-categories with duals”.
Posted by: Bruce Bartlett on May 30, 2007 12:35 AM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
I think you have this basically right Bruce. But a couple clarifications.
Cob(su_2) is a monoidal 2-category with duals (though clarifying exactly to what extent it “has duals” is an area of active research).
Given any 2-category C, we can make Kom(C) which has the old objects, but where the 1-morphisms are complexes (that is a bunch of 1-morphisms f_i with 2-morphisms d_i: f_i -> f_i+1) and 2-morphisms
are chain maps up to homotopy.
In particular we have a new 2-category Kom(Cob(su_2)). This 2-category isn’t just monoidal, it’s braided monoidal! (It also still has duals to some extent, though see the above caveat.)
So a simply form of the question (without reference to triangulated stuff) is why do we seem to get braided monoidal 2-categories by looking at complexes in certain monoidal 2-categories?
Finally, n-category theory cafe readers should note that “canopolis” means roughly “monoidal 2-category where the 0- and 1-morphisms have duals.” This will help you read the relevant Khovanov
homology literature.
Posted by: Noah Snyder on May 30, 2007 1:58 AM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
In Noah Snyder’s 10:11 post yesterday he mentioned that KhoHo gives invariants of 2-tangles. But for closed surfaces these invariants are trivial. See: Khovanov-Jacobsson numbers and invariants of
surface-knots derived from Bar-Natan’s theory, by Tanaka, Khovanov’s invariant for closed surfaces, by Rasmussen, and our paper Ribbon-moves for 2-knots with 1-handles attached and Khovanov-Jacobsson
numbers with Masahico Saito and Shin Satoh. Maybe Scott Morrison can tell me if one has a non-closed cobordism if these invariants are non trivial — I am not sure what that means. In the closed case
you are looking for a number.
Later in this thread, (12:35 AM) Bruce Bartlett asked for explicit examples of braided monoidal 2-categories with duals. Of course, he means something other than the catgory of 2-tangles:
Higher-Dimensional Algebra IV: 2-Tangles, JB and Laurel Langford. So anyway, the joke around here was when any representation theorist would visit, I would ask, do you have a good example of a
braided monoidal 2-category with duals. And no one did.
So I stopped asking the question. When we (CJKLS) figured out the quandle cocycle invariant Quandle Cohomology and State-sum Invariants of Knotted Curves and Surfaces the question ceased having
meaning to me. But I have a very narrow minded point of view of the subject: The reason for finding a braided monoidal category with duals was to find a knotted surface invariant.
Our original plan of attack was to try to use Neuchl’s cocycles (I don’t think that paper is on the ArXiv?) to construct knotted surface invariants.
So there are a couple of good leads for Bruce's question. First, see if Neuchl's example has duals. A good approach would be to finish the work that Masahico, Laurel, and I started: first get the knotted surface invariant; the duality structure will pop out of JB's work with Laurel. The other approach that I always thought would work would be to just construct the category you desire from
the quandle cocycles.
There are new reasons that *I* believe that this will work. In Cohomology of the Adjoint of Hopf Algebras and Cohomology of Categorical Self-Distributivity we are seeing cocycles appearing very
2-categorically looking.
Posted by: Scott Carter on May 30, 2007 7:21 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
Scott wrote:
So anyway, the joke around here was when any representation theorist would visit, I would ask, do you have a good example of a braided monoidal 2-category with duals. And no one did.
So I stopped asking the question. When we (CJKLS) figured out the quandle cocycle invariant Quandle Cohomology and State-sum Invariants of Knotted Curves and Surfaces the question ceased having
meaning to me. But I have a very narrow minded point of view of the subject: The reason for finding a braided monoidal category with duals was to find a knotted surface invariant.
Right — from that viewpoint, once you get your invariant of knotted surface invariant you can forget about braided monoidal 2-categories. But, if you actually like braided monoidal 2-categories — as
some of us do — you can turn the process around: take any invariant of knotted surfaces and look for the braided monoidal 2-category underlying it! Of course this can only work if your invariant can
be defined on 2-tangles (not just closed knotted surfaces).
Another thing: I think Aaron Lauda and Hendryk Pfeiffer believed they could use their work to squeeze a braided monoidal 2-category out of Khovanov homology. However, they felt there was not enough
interest on the part of the Khovanov homology crowd to make this work worthwhile. I hope this changes someday.
Posted by: John Baez on May 31, 2007 12:55 AM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
I mentioned a promising candidate for an easy-ish braided monoidal 2-category with duals a while ago at the end of this post. I think I can now even take this further : every rational vertex operator
algebra equipped with a finite group of automorphisms should give you an invariant of 2-tangles :-)
Posted by: Bruce Bartlett on May 31, 2007 8:49 AM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
Bruce wrote
I think I can now even take this further […]
That’s really cool, Bruce! I wish I had more time currently to walk along such quantum paths with you – along the third edge, you know.
Unfortunately, right now I am way down a different edge. So I’ll just trust that once I get back to the Quantum-Do I’ll just pick up your thesis and be enlightened. :-)
Posted by: urs on May 31, 2007 10:26 AM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
The trouble is I’d need some help for this project (of linking vertex operator algebras to invariants of 2-tangles, via explicit examples of objects inside braided monoidal 2-categories with duals).
I think I could come up with the braided monoidal 2-category with duals : that’s actually the easy part. Caveat : one might need to weaken slightly the notion of a braided monoidal 2-category with
duals as presented in John and Laurel’s 2-tangles paper… and you’d need to check that this weakening doesn’t do violence to the construction of 2-tangle invariants.
So I guess I need help from (a) an expert on vertex operator algebras (since I know very little about them) and (b) an expert on 2-tangles. And “higher” help (of the higher categorical nature) would
always be appreciated too.
Posted by: Bruce Bartlett on May 31, 2007 1:08 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
Posted by: urs on May 31, 2007 1:28 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
Bruce Bartlett asks about 2-tangles. I think I am familiar with these. But Vertex Operator Algebras still scare me.
JB says, “But, if you actually like braided monoidal 2-categories, as some of us do, you can turn the process around: take any invariant of knotted surfaces and look for the braided monoidal
2-category underlying it! Of course this can only work if your invariant can be defined on 2-tangles (not just closed knotted surfaces).”
The cocycle invariants *should* give such invariants. This will be analogous to Masahico’s work with Kheira Ameur On classical tangles . Maybe there are problems with colorings of the boundary, but I
don’t see how that can prevent the construction of the invariant. So I am pretty sure there is a braided monoidal 2-category sitting there.
Posted by: Scott Carter on June 1, 2007 10:56 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
However, they felt there was not enough interest on the part of the Khovanov homology crowd to make this work worthwhile.
I find this statement rather disturbing. How many of the very best pieces of science and mathematics might not have occurred had their authors thought like this?
Posted by: David Corfield on June 3, 2007 9:34 AM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
David wrote:
How many of the very best pieces of science and mathematics might not have occurred had their authors thought like this?
Don’t put it in the subjunctive like that! It’s a very common occurence, more the norm than the exception. Young scientists often ask themselves if working on a lengthy and difficult project will pay
off in a tenured job or not. So, research gets focused on ‘fashionable’ areas.
You may consider this regrettable, but it’s far from clear-cut. I’m sure most people would say the projects Pfeiffer and Lauda are actually doing are more interesting than constructing braided
monoidal 2-categories. Maybe even they think so. And, maybe it’s true.
Of course, I’m trying to convince the world otherwise.
Posted by: John Baez on June 3, 2007 6:55 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
Young scientists often ask themselves if working on a lengthy and difficult project will pay off in a tenured job or not. So, research gets focused on ‘fashionable’ areas.
Oh, so that’s why it took me so long to get a job.
Maybe even they think so.
A totally different kettle of fish, if so.
Posted by: David Corfield on June 3, 2007 9:30 PM | Permalink | Reply to this
The Human Factor
David wrote:
A totally different kettle of fish, if so.
The interesting thing is: the tastes of others are not totally different from one's own, except for a few rebels like you.
To what extent does a young scientist develop his or her tastes based on the current fashions? To what extent should this happen? One's tastes don't develop in a vacuum. Complete flouting of one's intellectual environment can lead to crackpottery. Learning by example is a wonderful thing. On the other hand, completely slavish devotion to fashion leads to mediocrity. Somewhere between lies the golden mean.
Ideally, the funding agencies that support the hard scientists would also want sociologists, historians and philosophers to study these questions. To ignore them is less than wholly rational! But
alas, right now the ‘soft sciences’ suffer from neglect — perhaps because few people realize how incredibly important the human factors are in every endeavor!
But you know all this.
Posted by: John Baez on June 3, 2007 9:58 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
In subjects with a more experimental component, I think it’s quite common when one has a “vague idea” to estimate the amount of work involved in setting up experiments, data collection and analysis
and estimate whether fully working things up is likely to be worth the effort based on how interesting, useful and important the idea could turn out to be. I know nothing about Lauda and Pfeiffer’s
work, but can easily imagine making stuff rigorous might be an analogue of experimental work.
A more interesting question is whether Lauda and Pfeiffer thought “they wouldn’t be interested because they’re an insular community” or “they wouldn’t be interested because having a B-M 2-category
doesn’t really advance the things they’re interested in”? In the first case you can be persistent, in the second case it’s a tougher sell.
Posted by: dave tweed on June 3, 2007 11:11 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
Let’s not talk about my friends Aaron Lauda and Hendryk Pfeiffer in this way anymore. They might not like it. We can talk about the sociological issues we’re discussing without using them as
I only brought their names up for mathematical reasons: namely, I believe that it’s possible to construct braided monoidal 2-categories using the math behind Khovanov homology, in part because they
thought it might be doable, and I trust their judgement.
Posted by: John Baez on June 4, 2007 1:01 AM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
If I have offended anyone, my apologies. My only reason for repeating the names was to avoid tripping myself up in multiple “they”s.
Let me try and reframe the scientific enquiry point: I have definitely had ideas that seemed like they’d probably “work”, tried to estimate the amount of experimental work they’d require to validate
and decided not to do proceed. That’s why I was a bit surprised David C was so perturbed by the suggestion that in science what one works on is partly based on the amount of “extraneous” effort. I
have also been in the situation of deciding an idea, whilst interesting itself was very likely “the end of the line” in the methodology and wouldn’t be extendable, which thus wouldn’t lead to new
papers on the idea, which then wouldn’t lead to citations.
If anyone wants to discuss the scientific philosophy issues behind this phenomenon, using me as an example if they need one, they’re welcome to.
Posted by: dave tweed on June 4, 2007 12:10 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
Of course, there is a very complex mix of considerations at play when deciding what to work on, one which is dependent on the stage of one’s career. What perturbs me is when people refuse to follow
what they believe to be the right thing to do, and the dominant consideration is not that they have been exposed to good reasons to adopt another course, but because they only care about winning
I was talking about this to a fellow philosopher a while ago, and he commented on how many young people say to themselves that they’ll play the game until they’re established, but then find they’re
too far committed to return back to what they originally wanted to do when they finally have a secure post.
Posted by: David Corfield on June 4, 2007 12:27 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
…how many young people say to themselves that they’ll play the game until they’re established, but then find they’re too far committed to return back to what they originally wanted to do when
they finally have a secure post.
On the other hand, it can be supremely nerve-wracking to be a young, unestablished academic and to do what you think is interesting, despite being told (by supporters) that phrases like “co-C object”
are the sorts of things that get you not-hired.
That said, I find it incredibly disappointing that popularity of ideas holds such weight in academic mathematics. If I were the type to proceed along popular lines I’d have dropped the math side of
my major back in college, finished the computer science, gotten picked up in time to catch the tail end of the dot-com bubble, and worked my way into some obscenely high-paid consulting gig. I’m in
this line of work because I buck the trends, and I’m not very likely to stop now.
Posted by: John Armstrong on June 4, 2007 3:49 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
Oh, and another thing… This is really just a plug for my own work, but I think the difference between categorifying the Kauffman bracket and categorifying the su[2] quantum group skein relations
(which I’ll go ahead and confound with the Jones polynomial) is closer to being understood. See math.GT/0701339 for my and Kevin Walker’s explanation of how categorifying quantum su[2] gives you a
fully functorial version of Khovanov, not one that’s just functorial ‘up to sign’, a problem which plagued the theory for years.
Posted by: Scott Morrison on May 28, 2007 2:10 AM | Permalink | PGP Sig | Reply to this
Re: Link Homology and Categorification in Kyoto
David wrote:
As Aaron is not in the same position as Derek and Jeff…
Since the conference in Kyoto ended on the 25th, I’m uncertain of Aaron’s position — but his velocity is several hundred kilometers per hour, zipping back to Columbia as we speak.
I’ll try to coax him to post when he’s de-jetlagged.
Posted by: John Baez on May 27, 2007 5:37 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
Since the conference web page has a schedule with notes from the lectures, it doesn’t make sense to tell you what each person talked about. So instead I will give my general impression of the
conference. It was very exciting to see all the different ways that link homology theories can be understood. Everything from the triangulated categories, geometric picture worlds, derived categories
of coherent sheaves, geometric representation theory, matrix factorizations, topological string theory, and gauge theory. It is quickly becoming difficult for the average mathematician to keep up
with all of the sophisticated and beautiful tools that are being used to understand and develop link homology theories. But the conference did a great job of supplying many introductory talks on
these topics.
I was particularly interested in learning more about the matrix factorization approach to link homology and Lev Rozansky gave a great introduction. Sergei Gukov’s lectures described how ideas from
gauge theory and string theory could be used to choose potentials to plug into the matrix factorization approach and get interesting link homologies.
Fans of n-categories will be excited to see non trivial examples of braided monoidal 2-categories emerging “in the wild”. Many of the link homology theories are defined for tangle cobordisms, or
2-tangles as n-category theorists like to call them.
There was also a six part lecture series on knot Floer homology given by Ciprian Manolescu and Peter Ozsvath. There are sort of opposite stories with knot Floer homology and Khovanov homology.
Khovanov homology began as a combinatorial invariant and its geometric origins are just beginning to be understood. Knot Floer homology on the other hand began as a geometric theory using symplectic
topology inspired by gauge-theory. At the conference we got to hear how this theory can now be defined purely combinatorially. So far knot Floer homology has not been extended to tangle cobordisms
which leaves some exciting work still left to be done.
Posted by: Aaron Lauda on June 4, 2007 1:51 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
Fans of n-categories will be excited to see non trivial examples of braided monoidal 2-categories emerging “in the wild”.
With duals? Could you give us hints as to what some of these beasts look like?
Posted by: David Corfield on June 5, 2007 8:57 AM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
The closest that I’ve seen to an argument that these BM 2-categories have duals is pages 23-24 of Morrison and Walker’s paper math.GT/0701339.
Posted by: Noah Snyder on June 5, 2007 6:10 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
The functoriality of the planar algebra operations ensure that we can build a ‘city of cans’ (hence the name canopolis) any way we like, obtaining the same result: either constructing several
‘towers of cans’ by composing morphisms, then combining them horizontally, or constructing each layer by combining the levels of all the towers using the planar operations, and then stacking the
levels vertically. (p. 61)
Urs, looks like you’ve been pipped to the use of tin can imagery.
Posted by: David Corfield on June 5, 2007 6:47 PM | Permalink | Reply to this
Re: Link Homology and Categorification in Kyoto
The last part of Gukov’s notes linked to above sounds really fascinating.
Ever since I learned about Khovanov homology (not that I really know much about it beyond the mere basic notions, and even these I am about to begin forgetting) I was wondering what its natural
interpretation would be. Like the Jones polynomial is a Wilson loop observable in a 3-dimensional quantum field theory, the Khovanov thing should be related to “Wilson surfaces” in 4-dimensional
quantum field theory.
I have no idea what is actually known about this. But on his latter slides, Gukov seems to at least hint at lots of relations to known TQFT, ranging from Donaldson-Witten invariants, to – apparently
– the Kapustin-Witten realization of geometric Langlands, including t’Hooft operators and all that.
It is not clear from Gukov’s slides whether he is just reviewing interesting known aspects of 4D gauge theory there, or if he actually claims to see a direct relation of Khovanov homology to these
known aspects (beyond the mere fact that it is to be expected that there is a relation).
Does anyone know?
Posted by: urs on May 30, 2007 9:51 AM | Permalink | Reply to this
Via feedback from Not Even Wrong #: check out Dror Bar-Natan’s
Posted by: urs on June 1, 2007 11:59 AM | Permalink | Reply to this | {"url":"http://golem.ph.utexas.edu/category/2007/05/link_homology_and_categorifica.html","timestamp":"2014-04-18T06:07:13Z","content_type":null,"content_length":"83039","record_id":"<urn:uuid:c349eb22-47ba-4ee6-93fb-89e8f891a86b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00261-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving IVP in Matlab (ode45)
December 15th 2008, 07:36 PM #1
Senior Member
Nov 2008
Solving IVP in Matlab (ode45)
Good day to you!
I want to solve an IVP in matlab, given by
$x_1 ' (t) = -4*10^{-2} x_1(t) +3*10^7 x_2(t)x_3(t)$
$x_2'(t) = 4*10^{-2}-10^{4}x_2(t)x_3(t) - 3*10^7x_2^2(t)$
$x_3'(t) = 3*10^7 x_2^2(t)$
I tried to write this as a first order vector
$\dot{x} = \begin{pmatrix} -4*10^{-2} & 3*10^7 & 3*10^7 \\ ... & ...&...\\...&...&...\end{pmatrix}*x$
Because of $x_2(t)x_3(t)$ (or because of the first line in the matrix) I think this does not work that way.
Thanks for spending time on my problem/posting.
Kind regards,
This is just like the other problem: set up a function with the derivative:

function dx=deriv(t,x)
% ode45 expects a column vector
dx=zeros(3,1);
dx(1)=-4*10^(-2)*x(1) + 3*10^7*x(2)*x(3);
dx(2)=4*10^(-2) - 10^4*x(2)*x(3) - 3*10^7*x(2)^2;
dx(3)=3*10^7*x(2)^2;

and then integrate with, for example, [t,x] = ode45(@deriv, tspan, x0).
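Worth noting: the $10^7$-sized rate constants make this system stiff, so if ode45 crawls, MATLAB's stiff solver ode15s (same calling syntax) is the usual fix. For comparison, here is a sketch of the same right-hand side in Python/SciPy; the initial condition and time span below are made up:

from scipy.integrate import solve_ivp

def deriv(t, x):
    x1, x2, x3 = x
    return [-4e-2*x1 + 3e7*x2*x3,
            4e-2 - 1e4*x2*x3 - 3e7*x2**2,
            3e7*x2**2]

# LSODA switches automatically to a stiff method when needed
sol = solve_ivp(deriv, (0.0, 40.0), [1.0, 0.0, 0.0], method="LSODA")
print(sol.y[:, -1])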
Hello CB!
Wow, you remember that?! Long memory...
The IVPs are so confusing.
Thank you for the code! This helps me to understand the problem.
Best wishes, Rapha
{"url":"http://mathhelpforum.com/math-software/65177-solving-ivp-matlab-ode45.html","timestamp":"2014-04-18T03:56:26Z","content_type":null,"content_length":"41011","record_id":"<urn:uuid:6aae038e-0d27-4a54-862a-57f0061cd42b>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
February 20th 2009, 06:46 AM #1
Feb 2009
What are the min and max values of:
for reals $x,y,z$ such that $xyz=1$?
EDIT: I now see that $1$ is the only possible value given only that $xyz=1$, but I don't know how to reduce the expression to show that fact. Anyone?
Last edited by Jone; February 20th 2009 at 08:07 AM.
$\frac{1}{1+x+xy}+\frac{1}{1+y+yz}+\frac{1}{1+z+zx} =$

$=\frac{z}{z+zx+zxy}+\frac{zx}{zx+zxy+zxyz}+\frac{1}{1+z+zx}=$

$=\frac{z}{1+z+zx}+\frac{zx}{1+z+zx}+\frac{1}{1+z+zx}=\frac{z+zx+1}{1+z+zx}=1,$

using $xyz = 1$, so that $zxy = 1$ and $zxyz = z$.
So, the expression is constant.
{"url":"http://mathhelpforum.com/algebra/74671-equality.html","timestamp":"2014-04-17T12:37:42Z","content_type":null,"content_length":"33789","record_id":"<urn:uuid:fa68b4cc-acd7-410d-a7a2-2b68aa3e6ec2>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Elementary particles pre-Higgs coupling
I must concede that in the Standard Model, the interactions of the Higgs particle and the elementary fermions provide additional constraints. The interactions:
(left quark) . (up Higgs) . (right up quark)
(left quark) . (down Higgs) . (right down quark)
(left lepton) . (up Higgs) . (right neutrino)
(left lepton) . (down Higgs) . (right electron)
where the up Higgs has U(1) strength yhu and the down Higgs has U(1) strength yhd.
For the up and down Higgs in the MSSM, anomaly cancellation gives yhu + yhd = 0. This constraint one also gets from the SM Higgs.
The EF-Higgs interactions give these constraints:
yq + yhu - yu = 0, yq + yhd - yd = 0, yl + yhu - yn = 0, yl + yhd - ye = 0
Plugging in my previous post's results, yhu = yqx = ylx, yhd = - yqx = - ylx
One gets from these yhu + yhd = 0
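(A quick symbolic check that the constraint system is consistent with yhu + yhd = 0; a sympy sketch, with variable names mirroring the post:)

from sympy import symbols, solve

yq, yl, yu, yd, yn, ye, yhu, yhd = symbols('yq yl yu yd yn ye yhu yhd')
constraints = [yq + yhu - yu, yq + yhd - yd, yl + yhu - yn, yl + yhd - ye, yhu + yhd]
print(solve(constraints, [yu, yd, yn, ye, yhd]))
# -> {yu: yq + yhu, yd: yq - yhu, yn: yl + yhu, ye: yl - yhu, yhd: -yhu}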
These values are the values that make electromagnetism non-chiral. Left-handed and right-handed EF's couple the same to the photon. Is this a coincidence? Or is there something deeper? | {"url":"http://www.physicsforums.com/showthread.php?s=a1f0d34414434b0ba7f6951927286e94&p=4507068","timestamp":"2014-04-18T03:04:25Z","content_type":null,"content_length":"20610","record_id":"<urn:uuid:7610eb09-e301-4483-a24e-ac0f14c42f21>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00462-ip-10-147-4-33.ec2.internal.warc.gz"} |
Residue theorem
February 10th 2009, 06:04 AM
Residue theorem
Hey guys.
So I got this integral I need to solve, of course using the residue theorem.
The thing is, that I don't understand the curve.
I know that whenever Z^2 = integer, this function has a singularity point because e^(2*pi*i*n) = 1.
But again, I'm not sure what this curve encloses.
February 10th 2009, 09:15 AM
The poles occur when $e^{2\pi iz^2} - 1 = 0 \implies 2\pi i z^2 = 2\pi i k, k\in \mathbb{Z}$.
Therefore, the poles of this function are at $z^2 = k$.
We are also told that $n< R^2 < n+1$ for some integer $n\geq 0$.
For the purpose of illustration, say $n=3$; then $3<R^2 < 4$.
By definition $\Gamma_R$ is the circle $|z| = R$.
The poles need to occur inside this circle, so $|z| < R \implies |z^2| < R^2$.
Thus, the poles are all $k\in \mathbb{Z}$ with $|k| < R^2$.
In this case $k=0,\pm 1, \pm 2, \pm 3$.
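(One more step, assuming the lost attachment was the integrand $\frac{1}{e^{2\pi i z^2}-1}$: for $z_0^2 = k \neq 0$ the pole at $z_0$ is simple, with $\operatorname{Res}_{z=z_0} \frac{1}{e^{2\pi i z^2}-1} = \frac{1}{4\pi i z_0 e^{2\pi i z_0^2}} = \frac{1}{4\pi i z_0}$, since $e^{2\pi i k} = 1$. At $z_0 = 0$ the exponent has a double zero, so the pole there is of order two; but the integrand is even in $z$, so its residue at $0$ vanishes.)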
{"url":"http://mathhelpforum.com/calculus/72828-residue-theorem-print.html","timestamp":"2014-04-19T15:54:03Z","content_type":null,"content_length":"7102","record_id":"<urn:uuid:4674dd71-a49e-4084-8b00-f6f1359c5398>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Key Topics:
DC Circuit, Ohm’s Law
This problem focuses on the EGIL flight controller, who monitors the electrical systems, fuel cells and associated cryogenics of NASA's space shuttle. Using a circuit layout from the space shuttle,
students will apply Ohm's law to solve for unknowns.
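As a taste of the kind of calculation the problem set involves, here is a minimal sketch with made-up values for a two-resistor series loop (Ohm's law V = IR throughout):

# one current flows through a series circuit; the voltage drops add up to V
V = 28.0            # illustrative bus voltage, volts
R1, R2 = 4.0, 10.0  # illustrative resistances, ohms

I = V / (R1 + R2)
print("I  =", round(I, 2), "A")
print("V1 =", round(I * R1, 2), "V")
print("V2 =", round(I * R2, 2), "V")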
Students will
• apply Ohm’s law to solve for unknowns in a DC circuit; and
• analyze data to derive a solution to a real life problem.
DOWNLOADS > Space Shuttle Short Circuit Educator Edition (PDF 361 KB) > Space Shuttle Short Circuit Student Edition (PDF 295 KB) | {"url":"http://www.nasa.gov/audience/foreducators/mathandscience/controlcenter/Prob_ShortCircuit_detail.html","timestamp":"2014-04-18T18:20:46Z","content_type":null,"content_length":"19431","record_id":"<urn:uuid:3b2137c8-29cb-4fab-bf6d-31a83778a762>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
braided infinity-group
An ∞-group $G$ is braided if its delooping $\mathbf{B}G$ is itself equipped with the structure of an ∞-group; equivalently, if $G$ is a groupal $E_2$-algebra.
See the examples at braided 2-group, braided 3-group.
Revised on December 12, 2012 16:49:59 by Urs Schreiber | {"url":"http://ncatlab.org/nlab/show/braided+infinity-group","timestamp":"2014-04-18T13:18:42Z","content_type":null,"content_length":"29199","record_id":"<urn:uuid:a15639ce-8ff6-4cc3-9838-0889766af79b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Problems:
Physics Problems have a special place in Physics Learning.
To understand physics means to be able to solve physics problems.
At the same time, to understand physics we need to solve as many problems as possible.
Only by solving problems can we understand the laws of physics and how to apply them.
There are a few general rules we need to follow when solving physics problems, such as:
• drawing the picture of the problem (it is very important!)
- the correct picture of a physics problem is more than 80% of the success. You can read about drawing the picture of a physics problem, with a lot of examples, here
"Physics Problems: How to Draw a Picture".
• having the same units in the problem;
• checking the dimensionality of analytical expressions,
and so on.
Solutions of Physics Problems:
Another important point about physics problems is how to read their solutions:
It is very important to actually understand a solution when you read it in a book.
You read the solution and it looks very simple. You think you understand it. But you can be wrong.
To find out whether you really understand the solution, close the book and try to solve the problem by yourself.
If you can solve the problem, then you understand the solution.
If not, open the book and read the solution again, then close the book and try once more.
It is very important to be able to solve physics problems without looking at the solutions in the book.
Solutions of Physics Problems: Final Step
And the last step: how can we solve the problem and learn the most from the solution of a physics problem?
To extract the most information from the solution of the problem we need to apply the method of questions. You can read about this method here "Physics Problems: Method of Questions".
With the method of questions you can learn physics 5 or even 10 times faster. | {"url":"http://www.solvephysics.com/","timestamp":"2014-04-19T14:56:39Z","content_type":null,"content_length":"23513","record_id":"<urn:uuid:b5048e9f-f673-4611-8690-2087b97488de>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00420-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Log Logistic and Weibull distribution
Replies: 1 Last Post: Mar 3, 2013 10:58 PM
Sata Log Logistic and Weibull distribution parameters
Posted: Mar 3, 2013 10:58 PM
Posts: 36

I have an additional question:
For the Normal distribution, I know that the 2 parameters are the mean and standard deviation, and I know the concepts and applications of the Normal
distribution. However, in the Log Logistic distribution, what do the 2 parameters alpha (scale) and beta (shape) stand for? A similar question goes for the
Weibull distribution: what do the 2 parameters lambda (scale) and kappa (shape) stand for? What are the values or ranges of values for these 4
parameters? Where can I find some simple examples and simple exercises to understand the concepts of these 2 distributions?
"Dua" wrote in message <kgmjgs$g6n$1@newscl01ah.mathworks.com>...
> 1) How to decide when to use Log Logistic and Weibull distribution to solve problems and know how to work such problems?
> 2) What are the alternative continuous distributions for Log Logistic and Weibull distribution. Which means if I don't use Log Logistic and Weibull, which distributions can I use
instead to solve the similar problems?
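One way to build intuition for the scale and shape parameters is to plot the densities for a few values (a minimal sketch using SciPy, which calls the log-logistic distribution "fisk"; the parameter values below are arbitrary illustrations):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import fisk, weibull_min

    x = np.linspace(0.01, 5, 400)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

    # Log-logistic: beta (shape) controls how heavy the right tail is,
    # alpha (scale) stretches the distribution along the x-axis.
    for beta in (1, 2, 4):
        ax1.plot(x, fisk.pdf(x, c=beta, scale=1.0), label="beta=%s" % beta)
    ax1.set_title("Log-logistic, alpha = 1")
    ax1.legend()

    # Weibull: kappa (shape) < 1 gives a decreasing hazard rate, kappa > 1
    # an increasing one; lambda (scale) again stretches the x-axis.
    for kappa in (0.5, 1, 2):
        ax2.plot(x, weibull_min.pdf(x, c=kappa, scale=1.0), label="kappa=%s" % kappa)
    ax2.set_title("Weibull, lambda = 1")
    ax2.legend()
    plt.show()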
• Physicists propose a modular quantum computer architecture that offers scalability // February 22, 2014
• WHAT’S SO ‘SUPER’ ABOUT THIS SUPERFLUID // February 25, 2014
More News
Upcoming Events
Bruno Laburthe-Tolra | Laboratoire de Physique des Lasers – Paris 13 University & CNRS – UMR 7538
Russell Bisset | Los Alamos National Laboratory
People News
• Release from NIST Tech Beat, April 15, 2014
Three National Institute of Standards and Technology (NIST) researchers were among those honored April 14, 2014, at a... read more
• "Exploring Quantum Physics,” a massively open online course taught by JQI Fellows Victor Galitski and Charles Clark, begins its third edition on Monday, April 7. The course is available free,...
• From CMNS at UMD
Three University of Maryland students have been awarded scholarships by the Barry M. Goldwater Scholarship and Excellence in Education Foundation, which... read more
• At the March 2014 APS meeting in Denver, Ian Spielman gives his patented talk about the quantum spin Hall effect in cold atoms.
• The following papers with JQI authors will be presented at the March meeting of the American Physical Society, being held March 2-7... read more
Recent Publications
"Hysteresis in a quantized superfluid ‘atomtronic’ circuit," S. Eckel, J.G. Lee, F. Jendrzejewski, N. Murray, C.W. Clark, C.J. Lobb, W.D. Phillips, M. Edwards, G.K. Campbell, Nature, 200, 506 (2014)
Twitter Updates
• "Exploring Quantum Physics," JQI’s Coursera online course taught by Victor Galitski and Charles Clark, resumes
People Profiles
Mohammad Hafezi
Hafezi is a senior research associate and works at the interface of condensed matter theory and quantum optics. The focus of his research is on theoretical and experimental investigations of
artificial gauge fields and topological order in photonics systems. Such systems can be exploited as robust optical devices insensitive to disorder, which is the subject of his NSF Physics
Frontier Center’s seed funding program. Moreover, in the presence of strong optical nonlinearity, such systems are expected to exhibit fractional quantum Hall physics, providing a platform for
potentially observing anoynic statistics. He received his Ph.D. from Harvard in 2009 where he worked with Mikhail Lukin and Eugene Demler. There, he studied strongly correlated physics in AMO
systems. In particular, he worked on the topological characterization of ultracold atoms in 2D and also non-equilibrium dynamics of strongly interacting photons in 1D.
Crystal Senko
Crystal Senko is a graduate student in Chris Monroe's ion trapping group. While in the group she has focused on ultrafast spin manipulation as well as quantum simulation of magnetism. Senko is an
undergraduate alumni of Duke University, where she worked with Dan Gauthier on magneto-optical trapping using distributed feedback lasers.
Phil Richerme
Phil Richerme is a postdoc in Chris Monroe's Trapped Ion Quantum Information Group. He studies quantum magnetism using a well-controlled and well-isolated system of atomic ion spins, realizing
Feynman's original proposal for a quantum simulator. These experiments probe the ground state and dynamical evolution of interacting spin systems, which are difficult (or impossible) for
classical computers to calculate for even a few dozen spins. Phil received his Ph.D. from Harvard in 2012, working with Gerald Gabrielse and the ATRAP collaboration at CERN to trap antihydrogen
atoms for sensitive tests of CPT symmetry.
Gretchen Campbell, Fellow
Campbell is a NIST JQI fellow and works in the Laser Cooling and Trapping group. In her atom circuits lab, researchers probe Na BECs in toroidal traps. The goals of these experiments include
studying superfluidity, as well as superfluid analogs to superconducting circuits. A second experiment with ultracold strontium is being built. She received a Ph.D from MIT in 2006, where she
worked with Wolfgang Ketterle and Dave Pritchard. There, she used Rb BECs in optical lattices to study atom interferometry, nonlinear atom optics and the superfluid – Mott insulator phase
transition. These experiments included the first direct observation of the atomic recoil momentum in dispersive media. More recently, she worked with Jun Ye on precision measurements and
frequency metrology with an ^87Sr optical lattice clock.
Stephen Powell
Stephen Powell, a former JQI postdoctoral fellow at CMTC, now works at the Nordic Institute of Theoretical Physics or Nordita in Stockholm, Sweden. His research in the group of Sankar Das Sarma
centered around strongly correlated systems with a specific focus on frustrated magnetism and ultracold gases. At Nordita, he will continue this line of research, which is at the meeting point of
condensed matter and atomic physics. He will help organize the Nordita program “Pushing the boundaries with cold atoms,” to be held in early 2013. In talking of his postdoctoral experience he
says, “Something I've particularly enjoyed about being at JQI is having close contact with various experimental groups here.”
Alexey V. Gorshkov
Alexey Gorshkov is a JQI fellow and theoretical physicist at NIST. He grew up in Moscow until his parents brought him to Boston when he was in 10th grade. In high school, he was good at math, so
that's what he was planning to do in college, but then math ended up being too dry. Physics offered a perfect alternative since it involved lots of interesting mathematics and grappled with
problems related to real life.
He attended Harvard for his undergraduate and graduate degrees, obtaining a physics PhD in 2010 studying under Mikhail Lukin. After that he was a postdoctoral fellow at Caltech, working with John
Preskill. He won numerous university teaching and research awards during these years.
His research is at the intersection of AMO physics, condensed matter physics, and quantum information science. He has authored dozens of papers and has a patent entitled: “Scalable Room
Temperature Quantum Information Processor.”
Quantum physics began with revolutionary discoveries in the early twentieth century and continues to be central in today’s physics research. Learn about quantum physics, bit by bit. From definitions to the latest research, this is your portal.
Geometry and Grid
The 2½-stage compressor geometry used in this study models the midspan geometry of an experiment by Dring (AGARD, 1989) and is identical to the 50% axial gap configuration of Gundy-Burlet (1991). The
experimental configuration consists of an inlet guide vane (IGV) followed by two rotor/stator pairs. There are 48 IGVs while each of the other rotor and stator blade rows contain 44 airfoils. It
would be prohibitively expensive to compute the flow through the entire 224 airfoil system, so for this computation, the number of IGVs has been reduced to 44. The IGVs have been rescaled by a factor
of 48/44 in order to maintain the same blockage as in the experiment. The flow has been computed only through one passage and periodicity has been used to model the other 43 passages. Note, scaling
the IGV blade count in itself represents a form of airfoil clocking because the locations of IGV wakes are modified with respect to the first stator.
The axial gaps between airfoil rows in the experimental and computational configurations are approximately 50% of the average axial chord. The circumferential positions of the first-stage rotor
(rotor-1) relative to the second-stage rotor (rotor-2) and the first-stage stator (stator-1) relative to the second-stage stator (stator-2) were not documented in the experiment. For the
calculations, the rotors were circumferentially aligned. The full computational model with all 8 separate stator displacements in terms of percentage of pitch is shown in Fig. 1. The zero
displacement position was defined as the one in which the stators were circumferentially aligned. The other positions were evenly spaced at 12.5% of pitch apart.
A zonal grid system is used to discretize the flowfield within the 2½-stage compressor. Figure 2 shows the zonal grid system used for the 0% displacement case. In Fig. 2, every other point in the grid
has been plotted for clarity. There are two grids associated with each airfoil. An inner, body-centered "O" grid is used to resolve the flow near the airfoil. The thin-layer Navier-Stokes equations
are solved on the inner grids. The grid points of the inner grids are clustered near the airfoil to resolve the viscous terms. The Euler equations are solved on the outer sheared cartesian "H" grids.
The rotor and stator grids are allowed to slip past each other to simulate the relative motion between rotor and stator airfoils. In addition to the two grids used for each airfoil, there is also an
inlet and an exit grid, for a total of 12 grids.
Fine grids are used to obtain detailed data regarding the steady and unsteady flow structure in the compressor. The inner grids are dimensioned
Kindergarten Math Vocabulary
VocabularySpellingCity provides these kindergarten math word lists so teachers and parents can supplement the kindergarten math curriculum with interactive educational vocabulary games. The games and
lists are ready-to-use: just select any math area and word list and then pick one of the 25 learning activities. The material was specifically written for use in a kindergarten math class.
The math vocabulary lists are based on the Common Core Kindergarten Math Standards. VocabularySpellingCity has selected these kindergarten math words as they that apply to the key math concepts.
Kindergarteners can use this academic vocabulary at school or at home. Teachers can import the lists into their account and then edit or extend the lists for their own purposes.
Common Core State Overview for Kindergarten Math
│ Counting & Cardinality │ • Know number names and the count sequence. │
│ View Our Lists │ • Count to tell the number of objects. │
│ │ • Compare numbers. │
│ Algebraic Thinking │ • Understand addition as putting together and adding to, and understand subtraction as taking apart and taking from. │
│ View Our Lists │ │
│ Number & Operations in Base 10 │ • Work with numbers 11-19 to gain foundations for place value. │
│ View Our Lists │ │
│ Measurement & Data │ • Describe and compare measurable attributes │
│ View Our Lists │ • Classify objects and count the number of objects in each category │
│ Geometry │ • Identify and describe shapes. │
│ View Our Lists │ • Analyze, compare, create, and compose shapes. │
│ Source: www.corestandards.org │
Click for more information on Math Vocabulary and the Common Core Standards in general. For information pertaining to kindergarten in particular, please refer to the chart above. To preview the
definitions, select a list and use the Flash Cards. For help on using the site, watch one of our short videos on how to use the site.
As educators will agree, kindergarten math vocabulary and spelling words are a useful foundation to understanding math concepts. These lists give children the building blocks of kindergarten math
vocabulary that help them discuss and understand mathematical principles.
VocabularySpellingCity has taken kindergarten number words that apply to key math concepts and incorporated appropriate kindergarten math definitions combined with meaningful example sentences
to enable young learners to gain a comprehensive understanding of basic mathematics. Whether using the games for drill and practice or the tests to assess understanding, these themed lists give
kindergartners everywhere an opportunity to learn while having lots of fun!
Kindergarten Math Vocabulary
Words at a Glance:
Counting & Cardinality
Comparison: big, equal, more, between, less, before, after, opposite, small, compare
Counting: hundred, count forward, even, number, odd, numeral, quantity, small, big
Grouping: pair, table, add, equal, ten, one, count forward, tally, group
Money: coin, money, cent, penny, dime, quarter, count, dollar, nickel
Sequence: fourth, number line, sequence, order, tens, ones, even numbers, odd numbers
Algebraic Thinking
Operations & Algebraic Thinking: different, alike, input, output, sort, outside, object, match, size, similar
Base Ten Operations
Number & Operations in Base Ten: minus, value, behind, sum, above, difference, add, compare, zero, below, subtract, under, ones, tens, beside, between, addition, sort
Measurement & Data
Measurement & Data: measure, long, estimate, longest, shorter, small, size, big, short, biggest, today, time, minute, calendar, hour, second, yesterday, morning, afternoon, date, minute hand, first,
second hand, hour hand, clock, year, equal parts, month, day, week
Geometry: square, shapes, pattern, triangle, rectangle, cylinder, halves, cone, in front of, cube, inside, middle, sphere, corner, curves, slide, right, graph, circle, left
For a complete online Math curriculum in Kindergarten Math, First Grade Math, Second Grade Math, Third Grade Math, Fourth Grade Math, Fifth Grade Math, Sixth Grade Math, Seventh Grade Math, or Eighth
Grade Math visit Time4Learning.com.
Here are some fun Math Games from LearningGamesForKids by grade level: Kindergarten Math Games, First Grade Math Games, Second Grade Math Games, Third Grade Math Games, Fourth Grade Math Games, Fifth
Grade Math Games, Addition Math Games, Subtraction Math Games, Multiplication Math Games, or Division Math Games. | {"url":"http://www.spellingcity.com/kindergarten-math-vocabulary.html","timestamp":"2014-04-17T22:17:51Z","content_type":null,"content_length":"71248","record_id":"<urn:uuid:de5c2f9e-5368-4979-adb8-753fa6797ff9>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] weibull distribution has only one parameter?
Ryan May rmay@ou....
Mon Nov 12 11:21:31 CST 2007
D.Hendriks (Dennis) wrote:
> Alan G Isaac wrote:
>> On Mon, 12 Nov 2007, "D.Hendriks (Dennis)" apparently wrote:
>>> All of this makes me doubt the correctness of the formula
>>> you proposed.
>> It is always a good idea to hesitate before doubting Robert.
>> <URL:http://en.wikipedia.org/wiki/Weibull_distribution#Generating_Weibull-distributed_random_variates>
>> hth,
>> Alan Isaac
> So, you are saying that it was indeed correct? That still leaves the
> question why I can't seem to confirm that in the figure I mentioned (red
> and green lines). Also, if you refer to X = lambda*(-ln(U))^(1/k) as
> 'proof' for the validity of the formula, I have to ask if
> Weibull(a,Size) does actually correspond to (-ln(U))^(1/a)?
Have you actually looked at a histogram of the random variates generated
this way to see if they are wrong?
Multiplying the the individual random values by a number changes the
distribution differently than multiplying the distribution/density
function by a number.
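For what it's worth, a quick check along those lines might look like this
(a minimal sketch; the shape k = 2 and scale lam = 1.5 are arbitrary):

    import numpy as np
    import matplotlib.pyplot as plt

    k, lam, n = 2.0, 1.5, 100000
    # numpy draws from the standard (scale-1) Weibull; multiply by lam to rescale
    samples = lam * np.random.weibull(k, n)

    x = np.linspace(0.001, 6, 300)
    # two-parameter Weibull density for comparison
    pdf = (k / lam) * (x / lam) ** (k - 1) * np.exp(-(x / lam) ** k)

    plt.hist(samples, bins=100, density=True, alpha=0.5)
    plt.plot(x, pdf)
    plt.show()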
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Proof of Normal Distribution
Date: 10/15/97 at 07:08:30
From: Kevin Sauter
Subject: Proof of Sample Mean and Sample Std. Dev. being normal if
sample is from a Normally Distributed Population
Dr. Math,
Can you explain (and prove) to me why samples taken from a normally
distributed population will also be normally distributed? Everyone in
class agrees that it makes sense, but how can we show it?
If we have a population X and mean (mu) with Standard deviation
(sigma), how can it be shown that the mean of the samples (X bar) will
have a sample mean of mu X bar that equals mu, and a standard
deviation of sigma Xbar equal to sigma divided by the square root of n
(the sample size)?
Thanks a lot.
K. Sauter
Date: 10/15/97 at 09:51:47
From: Doctor Statman
Subject: Re: Proof of Sample Mean and Sample Std. Dev. being normal if
sample is from a Normally Distributed Population
Dear Kevin,
There are three aspects to your question.
Consider the mean of a random sample of observations taken from a
Normal population.
1. The sample mean, xbar, follows a Normal distribution.
I would use moment generating functions to show this. I'm not sure
what grade level you teach, but I would cover this in a
mathematical statistics course for junior or senior college math
majors. The bad news is that it is advanced, but the good news is
that if you do it this way, you will see that the mean is mu and
the standard deviation is sigma/root(n).
2. The mean of the distribution of xbar is mu.
3. The standard deviation of xbar is sigma/root(n)
These last two can be built up using the following:
Show E[aX+b] = aE[X]+b
Show Var[aX+b] = a^2 Var[X]
Show E[X+Y] = E[X]+E[Y]
Show Var[X+Y] = Var[X]+Var[Y]+2 Cov(X,Y)
Show Var[X+Y] = Var[X]+Var[Y] if X and Y are independent.
Now you are ready to go for the main results:
E[xbar] = E[(1/n) sum of xi] = (1/n) sum of E[xi] = (1/n) n mu = mu !

Var[xbar] = Var[(1/n) sum of xi] = (1/n)^2 sum of Var[xi]  (using independence)
          = (1/n)^2 n sigma^2 = sigma^2/n,

so, with a little algebra, the standard deviation of xbar is sigma/root(n).
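If you want a quick numerical check of both results, a short simulation
will do (a minimal sketch; mu = 5, sigma = 2, and n = 30 are arbitrary
choices):

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n, trials = 5.0, 2.0, 30, 200000

    # each row is one sample of size n from Normal(mu, sigma)
    xbars = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)

    print(xbars.mean())           # close to mu = 5
    print(xbars.std())            # close to sigma/root(n) = 2/sqrt(30) ~ 0.365
    print(sigma / np.sqrt(n))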
Hope this helps!
-Doctor Statman, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Date: 10/15/97 at 15:42:29
From: Kevin Sauter
Subject: Re: Proof of Sample Mean and Sample Std. Dev. being normal if
sample is from a Normally Distributed Population
Thank you, Doctor! I understand, and I would like to discuss this further
with my students.

Goodbye,
Kevin Sauter | {"url":"http://mathforum.org/library/drmath/view/52747.html","timestamp":"2014-04-18T21:28:04Z","content_type":null,"content_length":"7418","record_id":"<urn:uuid:4a9d4d59-49ee-4992-8763-2896b8c09ca5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00282-ip-10-147-4-33.ec2.internal.warc.gz"} |
Interpreting the Importance and Precision
of Therapeutic Results
Centre for Evidence Based Medicine Stats Calculator
Vanderbilt- Power and Sample Size Calculator
The magnitude of effect is usually expressed as one of the following, where:
│ │Yes │No│
│Exposed │ a │b │
│Not Exposed │ c │d │
Control event rate (CER) = c/(c+d)
Experimental event rate (EER) = a/(a+b)
(a) Relative Risk (RR) = EER/CER = [a/(a+b)] / [c/(c+d)]
(b) Relative Risk Reduction (RRR) = (CER − EER)/CER
(commonest reported measure of dichotomous treatment effect)
(c) Absolute Risk Reduction (ARR) = CER-EER
(d) Number Needed to Treat (NNT) = 1/ARR
A certain risk reduction may appear impressive, but how many patients would you have to treat before seeing a benefit? This concept is called "number needed to treat" and is one of the most intuitive
statistics for clinical practice.
For example if:
│ │Yes│No │
│Exposed │ 8 │992│
│Not Exposed │10 │990│
The RR = (8/1000) / (10/1000) = 0.8, making the RRR = 1 − 0.8 = 0.2 or 20%. Although this sounds impressive, the absolute risk reduction is only 0.01 − 0.008 = 0.002 or 0.2%. Thus the NNT is 1/0.002 = 500
patients. It is obvious that on an individual patient basis the pre-intervention risk or probability is a major determinant of the degree of possible post-intervention benefit, yield, or risk
The estimate of where the true value of a result lies is usually expressed in terms of a 95% confidence interval (CI), or confidence limits. These define the range that includes the true relative
risk reduction 95% of the time.
If confidence limits are not provided you can calculate them if you have been given the standard error of the RRR or relative risk. Just multiply the standard error by 2: adding and subtracting this from the point estimate gives the upper and lower values of the confidence interval.
Alternately the 95% confidence interval for an ARR can be calculated by:

ARR ± 1.96 × sqrt[ CER × (1 − CER) / (# control patients) + EER × (1 − EER) / (# experimental patients) ]

The CI for the NNT is obtained by taking the reciprocals of the confidence limits for the ARR.
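The example above can be worked through in a few lines (a minimal sketch; the counts are the ones from the 2×2 table above):

    import math

    a, b = 8, 992      # exposed: events / no events
    c, d = 10, 990     # not exposed: events / no events

    eer = a / (a + b)
    cer = c / (c + d)

    rr  = eer / cer                  # 0.8
    rrr = (cer - eer) / cer          # 0.2
    arr = cer - eer                  # 0.002
    nnt = 1 / arr                    # 500

    se = math.sqrt(cer * (1 - cer) / (c + d) + eer * (1 - eer) / (a + b))
    ci = (arr - 1.96 * se, arr + 1.96 * se)
    print(rr, rrr, arr, nnt, ci)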
Computer Science
Professor of Mathematics
Assistant Professor of Mathematics
Lab Instructor
Visiting Assistant Professor of Mathematics
Professor of Mathematics and Director of Quantitative Analysis
Professor of Mathematics
Professor of Mathematics
Chair, Department of Mathematics and Computer Science
Assistant Professor of Mathematics
Teaching Associate in Mathematics/Computer Science
Professor of Mathematics, Emeritus
Computer Science
Assistant Professor of Computer Science
Professor of Computer Science
Professor of Computer Science, Meneely Chair (2010-2015) | {"url":"http://arts@wheatoncollege.edu/mathematics-computer-science/faculty/","timestamp":"2014-04-19T01:56:07Z","content_type":null,"content_length":"16514","record_id":"<urn:uuid:b9d8181c-c5c9-43b5-ba7e-bbb2eb15a3f4>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00290-ip-10-147-4-33.ec2.internal.warc.gz"} |
Centreville, VA Prealgebra Tutor
Find a Centreville, VA Prealgebra Tutor
...I also tutor outside of BITS other children in my neighborhood and have experience tutoring children with learning disabilities. I am a patient person as I realize all children are different
and learn at various paces. I am a hard worker and I will put my best foot forward to not only help your...
12 Subjects: including prealgebra, Spanish, reading, grammar
...Relevant coursework includes: general chemistry (two semesters), physical chemistry (two semesters), inorganic chemistry, analytical chemistry, biochemistry, geochemistry. I took a
microapplications class in college, which was entirely based on Microsoft Excel, and received an A in the course. ...
10 Subjects: including prealgebra, chemistry, Spanish, algebra 1
...I have been playing the clarinet since I was 13 years of age. I have played in national ensembles, state wide ensembles, and collegiate ensembles. While in high school, I gave lessons and was
involved with multiple groups.
20 Subjects: including prealgebra, reading, writing, biology
...I like to get feedback from my students often in order to improve their experience with tutoring continuously and to be able to cater to their specific needs. I have gotten tremendous
satisfaction from seeing my students' grades improve and from hearing positive feedback from them (including con...
40 Subjects: including prealgebra, English, reading, chemistry
...Though I am located in Arlington, Virginia, I am happy to travel to meet students, particularly to areas that are easily accessible via Metro.I work as a professional economist, where I
utilize econometric models and concepts regularly using both STATA and Excel. I have also had extensive course...
16 Subjects: including prealgebra, calculus, geometry, statistics
Adapting Annualized Volatility to Other Time Frames
In Calculating Centered and Non-centered Historical Volatility I attempted to walk through the steps and calculations necessary for determining of historical volatility (HV) using an Excel
The second to last step in the calculation (before translating the final number into a percentage) is to annualize the standard deviation by multiplying it by the square root of the number of days in
a year. In the example I chose, I used 252 trading days, reasoning that there are 365 days per year, 104 weekend days and approximately 9 holidays. One could also argue that while the markets are not
open on weekends and holidays, there is market-moving news that makes the jump from Friday to Monday generally more volatility than a typical overnight period. By that line of reasoning, it could be
appropriate to use 365 calendar days in the calculation. I am not aware of any options traders that use the square root of 365 in their calculations instead of 252, but note that such an approach
would yield a historical volatility number about 20.4% higher (e.g., 18.81 instead of the 15.63 I arrived at in the example in Calculating Centered and Non-centered Historical Volatility.)
Most floor traders simplified the volatility calculation process by assuming 256 trading days in a year. With the square root of 256 an even 16, this greatly simplified the calculations that were
done in one’s head.
Not everyone is interested in annualized volatility data. Traders who have options expiring in a week are more interested in determining historical volatility in weekly terms. In order to convert
annualized volatility into weekly volatility, simply divide by the square root of 52.14 (the number of weeks in a year) instead of multiplying by the square root of 252. The divisor for weekly
volatility thus becomes approximately 7.22. Using the example referenced above, the 15.63 per cent annualized volatility translates into about 2.16 per cent weekly volatility. A similar approach could be used to
calculate historical volatility over other periods, such as a month or perhaps even two years.
Sometimes called statistical volatility or realized volatility, the 15.63 historical volatility means that looking backward, approximately 68% of the time (one standard deviation), the underlying (S&P 500 index) moved 15.63% or less on an annualized basis. Similarly, given the same data set, approximately 68% of the time the underlying moved 2.16% or less on a weekly basis.
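These conversions are easy to script (a minimal sketch; the 15.63% figure is the annualized value from the earlier example):

    import math

    annualized = 0.1563          # annualized volatility from the earlier example

    daily   = annualized / math.sqrt(252)    # per trading day
    weekly  = annualized / math.sqrt(52.14)  # ~0.0216, i.e. about 2.16% per week
    monthly = annualized / math.sqrt(12)     # per month
    two_yr  = annualized * math.sqrt(2)      # over a two-year horizon

    print(daily, weekly, monthly, two_yr)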
Before I wrap up the current discussion of historical volatility, I will use the next two or three posts to talk about how investors might want to use historical volatility data.
For more on historical volatility, readers are encouraged to check out:
Disclosure: none | {"url":"http://vixandmore.blogspot.com/2009/12/adapting-annualized-volatility-to-other.html","timestamp":"2014-04-17T10:01:41Z","content_type":null,"content_length":"120016","record_id":"<urn:uuid:bd98141b-0a6a-43b6-8d97-4dac1f92f290>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00132-ip-10-147-4-33.ec2.internal.warc.gz"} |
ISRN Artificial Intelligence
Volume 2013 (2013), Article ID 691707, 11 pages
Research Article
Probabilistic Multiagent Reasoning over Annotated Amalgamated F-Logic Ontologies
University of Zagreb, Faculty of Organization and Informatics, Pavlinska 2, 42000 Varaždin, Croatia
Received 2 May 2013; Accepted 17 June 2013
Academic Editors: H. A. Guvenir, J. A. Hernandez, and C. Kotropoulos
Copyright © 2013 Markus Schatten. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
In a multiagent system (MAS), agents can have different opinions about a given problem. In order to solve the problem collectively they have to reach consensus about the ontology of the problem. A
solution to probabilistic reasoning in such an environment by using a social network of trust is given. It is shown that frame logic can be annotated and amalgamated by using this approach which
gives a foundation for collective ontology development in MAS. Consider the following problem: a set of agents in a multiagent system (MAS) model a certain domain in order to collectively solve a
problem. Their opinions about the domain differ in various ways. The agents are connected into a social network defined by trust relations. The problem to be solved is how to obtain consensus about
the domain.
1. Introduction
To formalize the problem let $A$ be a set of agents, let $T \subseteq A \times A$ be a trust relation defined over $A$, and let $D$ be a problem domain consisting of a set of objects. Let further $S$ be a set of all possible statements about $D$, and let $E \subseteq A \times S$ be a relation over $A$ and $S$ recording which agent expressed which statement. We will denote by $SO = (A, T, S, E)$ the social ontology expressed by the agents. What is the probability that a certain statement from the expressed statements in $SO$ is true?
By modeling some domains of interest (using a formalism like ontologies, knowledge bases, or other models) a person expresses his/her knowledge about it. Thus the main concept of interest in modeling
any domain is knowledge. Nonaka and Takeuchi once defined knowledge as a “justified true belief” [1] whereby this definition is usually credited to Plato. This means that the modeling person
implicitly presumes that the expressed statements in his/her model are true. On the other hand if one asks the important question what is the truth?, we arrive at one of the fundamental philosophical
questions. Nietzsche once argued in [2] that a person is unable to prove the truth of a statement which is nothing more than the invention of fixed conventions for merely practical purposes, like
repose, security, and/or consistence. According to this view, no one can prove that this paper is not just a fantasy of the reader reading it.
The previously outlined definition of knowledge includes, intentionally or not, two more crucial concepts: justified and belief. An individual will consider something to be true that he believes in,
and, from that perspective, the overall truth will be a set of statements that the community believes in. This mutual belief makes this set of statements justified. The truth was once that the Earth
was the center of the universe until philosophers and scientists started to question that theory. The Earth was also once a flat surface residing on the back of an elephant. So an interesting fact
about the truth, from this perspective, is that it evolves depending on the different beliefs of a certain community.
In an environment where a community of agents collaborates in modeling a domain there is a chance that there will be disagreements about the domain which can yield certain inconsistencies in the
model. A good example of such disagreements is the so-called “editor wars” on Wikipedia the popular free online encyclopedia. A belief about the war in ex-Yugoslavia will likely differ between a
Croat and a Serb, but they will probably share the same beliefs about fundamental mathematical algebra.
Following this perspective, our conceptualization of statements as units of formalized knowledge will consider the probability of giving a true statement a matter of justification. An agent is
justified if other members of a social system believe in his statements. Herein we would like to outline a social network metric introduced by Bonacich [3] called eigenvector centrality which
calculates the centrality of a node based on the centrality’s of its adjacent nodes. Eigenvector centrality assigns relative values to all nodes of a social network based on the principle that
connections to nodes with high values contribute more to the value of the node in question than equal connections to nodes with low values. In a way, if we interpret the network under consideration
as a network of trust, it yields an approximation of the probability that a certain agent will say the truth in a statement as perceived by the other agents of the network. The use of eigenvector
centrality here is arbitrary; any other metric with the described properties could be used as well.
In order to express knowledge about a certain domain, one needs an adequate language. Herein we will use frame logic or F-logic introduced by [4], which is an object-oriented, deductive knowledge
base and ontology language. The use of F-logic here is arbitrary, and any other formal (or informal) language could be used that allows expressing an ontology of a given domain. Nevertheless, F-logic
allows us to reason about concepts (classes of objects), objects (instances of classes), attributes (properties of objects) and methods (behavior of objects), by defining rules over the domain, which
makes it much more user friendly than other approaches.
2. Introducing Frame Logic
The syntax of F-logic is defined as follows [4].
Definition 1. The alphabet of an F-logic language consists of the following: (i) a set of object constructors, $\mathcal{F}$; (ii) an infinite set of variables, $\mathcal{V}$; (iii) auxiliary symbols, such as $($, $)$, $[$, $]$, $\rightarrow$, $\twoheadrightarrow$, $\bullet\!\rightarrow$, $\bullet\!\twoheadrightarrow$, $\Rightarrow$, and $\Rrightarrow$; and (iv) usual logical connectives and quantifiers, $\vee$, $\wedge$, $\neg$, $\leftarrow$, $\forall$, and $\exists$.
Object constructors (the elements of $\mathcal{F}$) play the role of function symbols in F-logic whereby each function symbol has an arity. The arity is a nonnegative integer that represents the number of arguments the symbol can take. A constant is a symbol with arity 0, and symbols with arity $\geq 1$ are used to construct larger terms out of simpler ones. An id term is a usual first-order term composed of function symbols and variables, as in predicate calculus. The set of all variable free or ground id terms is denoted by $U(\mathcal{F})$ and is commonly known as the Herbrand universe. Id terms play the role of logical
object identities in F-logic which is a logical abstraction of physical object identities.
A language in F-logic consists of a set of formulae constructed out of alphabet symbols. The simplest formulae in F-logic are called F-molecules.
Definition 2. A molecule in F-logic is one of the following statements: (i) an is-a assertion of the form $C :: D$ ($C$ is a nonstrict subclass of $D$) or of the form $O : C$ ($O$ is a member of class $C$), where $C$, $D$, and $O$ are id terms; (ii) an object molecule of the form O [a “;” separated list of method expressions], where $O$ is an id term that denotes an object. A method expression can be either a noninheritable data expression, an inheritable data expression, or a signature expression. (a) Noninheritable data expressions can be in either of the following two forms. (1) A non-inheritable scalar expression, $M@Q_1,\dots,Q_k \rightarrow T$. (2) A non-inheritable set-valued expression, $M@Q_1,\dots,Q_k \twoheadrightarrow \{T_1,\dots,T_m\}$. (b) Inheritable scalar and set-valued expressions are equivalent to their non-inheritable counterparts except that $\rightarrow$ is replaced with $\bullet\!\rightarrow$ and $\twoheadrightarrow$ with $\bullet\!\twoheadrightarrow$. (c) Signature expressions can also take two different forms. (1) A scalar signature expression, $M@Q_1,\dots,Q_k \Rightarrow (A_1,\dots,A_r)$. (2) A set-valued signature expression, $M@Q_1,\dots,Q_k \Rrightarrow (A_1,\dots,A_r)$.
All methods’ left hand sides (e.g., $M$ and $Q_i$) denote arguments, whilst the right hand sides (e.g., $T$, $T_i$, and $A_i$) denote method outputs. Single-headed arrows ($\rightarrow$, $\bullet\!\rightarrow$, and $\Rightarrow$) denote scalar methods, and double-headed arrows ($\twoheadrightarrow$, $\bullet\!\twoheadrightarrow$, and $\Rrightarrow$) denote set-valued methods.
As in a lot of other logics, F-formulae are built out of simpler ones by using the usual logical connectives and quantifiers mentioned above.
Definition 3. A formula in F-logic is defined recursively: (i) F-molecules are F-formulae; (ii) $\varphi \vee \psi$, $\varphi \wedge \psi$, and $\neg\varphi$ are F-formulae if so are $\varphi$ and $\psi$; (iii) $\forall X\,\varphi$ and $\exists Y\,\psi$ are F-formulae if so are $\varphi$ and $\psi$, and $X$ and $Y$ are variables.
F-logic further allows us to define logic programs. One popular class of logic programs is Horn programs.
Definition 4. A Horn F-program consists of Horn rules, which are statements of the form
$$head \leftarrow body,$$
whereby $head$ is an F-molecule, and $body$ is a conjunction of F-molecules. Since the statement is a clause, we consider all variables to be implicitly universally quantified.
For our purpose these definitions of F-logic are sufficient, but the interested reader is advised to consult [4] for profound logical foundations of object-oriented and frame based languages.
3. Introducing Social Network Analysis
A formal approach to defining social networks is graph theory [5].
Definition 5. A graph is the pair $G = (V, E)$ whereby $V$ represents the set of vertices or nodes and $E$ the set of edges or arcs connecting pairs from $V$.
A graph can be represented with the so-called adjacency matrix.
Definition 6. Let $G$ be a graph defined with the set of nodes $V = \{v_1, \dots, v_n\}$ and edges $E$. For every $v_i, v_j \in V$ ($1 \leq i \leq n$ and $1 \leq j \leq n$) one defines
$$a_{ij} = \begin{cases} 1, & \text{if } (v_i, v_j) \in E, \\ 0, & \text{otherwise.} \end{cases}$$
Matrix $\mathbf{A} = [a_{ij}]$ is then the adjacency matrix of graph $G$. The matrix is symmetric since if there is an edge between nodes $v_i$ and $v_j$, then clearly there is also an edge between $v_j$ and $v_i$. Thus $a_{ij} = a_{ji}$.
The notion of directed and valued-directed graphs is of special importance to our study.
Definition 7. A directed graph or digraph is the pair $G = (V, E)$, whereby $V$ represents the set of nodes and $E$ the set of ordered pairs of elements from $V$ that represents the set of graph arcs.
Definition 8. A valued or weighted digraph is the triple $G = (V, E, w)$ whereby $V$ represents the set of nodes or vertices, $E$ the set of ordered pairs of elements from $V$ that represent the set of graph arcs, and $w : V \to \mathbb{R}$ a function that attaches values or weights to nodes.
A social network can be represented as a graph $G = (V, E)$ where $V$ denotes the set of actors and $E$ denotes the set of relations between them [6]. If the relations are directed (e.g., support, influence, message sending, trust, etc.), we can conceptualize a social network as a directed graph. If the relations additionally can be measured in a numerical way, social networks can be represented as valued digraphs.
One of the main applications of graph theory to social network analysis is the identification of the “most important” actors inside a social network. There are lots of different methods and
algorithms that allow us to calculate the importance, prominence, degree, closeness, betweenness, information, differential status, or rank of an actor. As previously mentioned we will use the
eigenvector centrality to annotate agents’ statements.
Definition 9. Let $x_i$ denote the value or weight of node $v_i$, and let $\mathbf{A} = [a_{ij}]$ be the adjacency matrix of the network. For node $v_i$ let the centrality value be proportional to the sum of all values of nodes which are connected to it. Hence
$$x_i = \frac{1}{\lambda} \sum_{j \in M(i)} x_j = \frac{1}{\lambda} \sum_{j=1}^{N} a_{ij} x_j,$$
where $M(i)$ is the set of nodes that are connected to the $i$th node, $N$ is the total number of nodes, and $\lambda$ is a constant. In vector notation this can be rewritten as
$$\mathbf{A}\mathbf{x} = \lambda \mathbf{x}.$$
PageRank is a variant of the eigenvector centrality measure, which we decided to use herein. PageRank was developed at Google, or more precisely by Larry Page (from whose name the word play PageRank comes) and Sergey Brin. They used this graph analysis algorithm for the ranking of web pages on a web search engine. The algorithm uses not only the content of a web page but also the incoming and
outgoing links. Incoming links are hyperlinks from other web pages pointing to the page under consideration, and outgoing links are hyperlinks to other pages to which the page under consideration
PageRank is iterative and starts with a random page following its outgoing hyperlinks. It could be understood as a Markov process in which states are web pages, and transitions (which are all of
equal probability) are the hyperlinks between them. The problem of pages which do not have any outgoing links, as well as the problem of loops, is solved through a jump to a random page. To ensure
fairness (because of a huge base of possible pages), a transition to a random page is added to every page; this transition has probability $q$, which is in most cases 0.15. The equation which is used for rank calculation (which could be thought of as the probability that a random user will open this particular page) is as follows:
$$PR(p_i) = \frac{q}{N} + (1 - q) \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)},$$
where $p_i$ are the nodes under consideration, $M(p_i)$ the set of nodes pointing to $p_i$, $L(p_j)$ the number of arcs which come from node $p_j$, and $N$ the number of all nodes [7, 8].
A very convenient feature of PageRank is that the sum of all ranks is 1. Thus, semantically, we can interpret the ranking value of agents (or actors in the social network) participating in a given
MAS as the probability that an agent will say the truth in the perception of the others. In the following we will use the ranking, obtained through such an algorithm in this sense.
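As an illustration, this ranking can be computed with a few lines of code (a minimal power-iteration sketch of the PageRank equation above; the edge list is the Pepperland trust network used later in Section 6):

    # power iteration for PageRank, q = 0.15 as in the text
    def pagerank(nodes, edges, q=0.15, iters=100):
        n = len(nodes)
        out = {v: [u for (w, u) in edges if w == v] for v in nodes}
        pr = {v: 1.0 / n for v in nodes}
        for _ in range(iters):
            new = {}
            for v in nodes:
                inflow = sum(pr[u] / len(out[u]) for u in nodes if v in out[u])
                # dangling nodes redistribute their rank uniformly
                dangling = sum(pr[u] for u in nodes if not out[u]) / n
                new[v] = q / n + (1 - q) * (inflow + dangling)
            pr = new
        return pr

    agents = ["John", "Paul", "Ringo", "George", "Max", "Glove"]
    trust = [("Ringo", "Paul"), ("Ringo", "John"), ("Paul", "John"),
             ("John", "George"), ("George", "John"), ("Max", "Glove")]
    print(pagerank(agents, trust))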
4. Probability Annotation
As shown in Section 2 there are basically three types of statements agents can make: (1) is-a relations, (2) object molecules, and (3) Horn rules. While is-a relations and Horn rules can be
considered atomic, object molecules can be compound since object molecules of the form
$$O[a_1;\, a_2;\, \dots;\, a_n]$$
can be rewritten as corresponding atomic F-molecules
$$O[a_1],\; O[a_2],\; \dots,\; O[a_n].$$
We will consider in the following that all F-molecule statements are atomic. Now we are able to define the annotation scheme of agent statements as follows.
Definition 10. Let $S$ be a set of statements, let $A$ be a set of agents, let $SO$ be a corresponding social ontology, let $T$ be a trust relation between agents over $A$, and let $r$ be a function that assigns ranks to agents based on $T$. Then the annotation of the statements is defined as follows:
$$p(s) = \sum_{a \in A,\; (a,\, s) \in E} r(a), \quad \text{for every } s \in S.$$
An extension to such a probability annotation is the situation when statements can have a negative valency. This happens when a particular agent disagrees to a statement of another agent. Such an
annotation would be defined as follows.
Definition 11. Let $S$ be a set of signed statements, let $A$ be a set of agents, let $SO$ be a corresponding social ontology, let $T$ be a trust relation between agents, and let $r$ be a function that assigns ranks to agents based on $T$. Then the annotation of the statements is defined as follows:
$$p(s) = \max\Big(0,\; \sum_{a \in A^{+}(s)} r(a) \;-\; \sum_{a \in A^{-}(s)} r(a)\Big),$$
where $A^{+}(s)$ and $A^{-}(s)$ denote the sets of agents that agree and disagree with statement $s$, respectively.
Such a definition is needed in order to avoid possible negative probability (the case when disagreement is greater than approval).
5. Query Execution
In a concrete system we need to provide a mechanism for query execution that will allow agents to issue queries of the following form:
$$?\text{-}\; \varphi \;@\; p,$$
where $\varphi$ is any formula in frame logic and $p$ a probability. The semantics of the query is: does the formula $\varphi$ hold with probability $p$ with regard to the social ontology?
The solution of this problem is equivalent to finding the probabilities of all possible solutions of query $?\text{-}\; \varphi$.
Definition 12. Let $R$ be the set of solutions to query $?\text{-}\; \varphi$; then $R_p$ is a subset of $R$ consisting of those solutions from $R$ whose probability is greater or equal to $p$, and $R_p$ represents the set of solutions to query $?\text{-}\; \varphi \;@\; p$.
The probability of a solution is obtained by a set of production rules.
Rule 1. If $\varphi$ is a conjunction of two formulas $\varphi_1$ and $\varphi_2$, then $p(\varphi) = p(\varphi_1) \cdot p(\varphi_2)$.
Rule 2. If $\varphi$ is a disjunction of two formulas $\varphi_1$ and $\varphi_2$, then $p(\varphi) = p(\varphi_1) + p(\varphi_2) - p(\varphi_1) \cdot p(\varphi_2)$.
Rule 3. If $\varphi$ is an F-molecule of the form of an atomic statement $s$, then $p(\varphi)$ is the annotated probability of $s$.
The implications of these three definitions are given in the following four theorems.
Theorem 13. If $\varphi$ is an F-molecule of the form $O[a_1;\, \dots;\, a_n]$, then $p(\varphi) = \prod_{i=1}^{n} p(O[a_i])$.
Proof. Since in this case $\varphi$ can be written as
$$O[a_1] \wedge O[a_2] \wedge \dots \wedge O[a_n],$$
due to Rule 3 the probabilities of the components of this conjunction are $p(O[a_1]), \dots, p(O[a_n])$. Due to Rule 1 the probability of a conjunction is the product of the probabilities of its elements, which yields $p(\varphi) = \prod_{i=1}^{n} p(O[a_i])$.
Theorem 14. If $\varphi$ is an F-molecule of the form $O[m \twoheadrightarrow \{t_1, \dots, t_n\}]$, then $p(\varphi) = \prod_{i=1}^{n} p(O[m \twoheadrightarrow t_i])$.
Proof. Since the given F-molecule can be written as $O[m \twoheadrightarrow t_1] \wedge \dots \wedge O[m \twoheadrightarrow t_n]$, the proof is analogous to the proof of Theorem 13.
Theorem 15. If $\varphi$ is a statement of generalization of the form $C_1 :: C_2$, if $\Pi$ is the set of all paths between $C_1$ and $C_2$, and if $::_i$ is the relation of immediate generalization, then
$$p(C_1 :: C_2) = 1 - \prod_{\pi \in \Pi} \Big( 1 - \prod_{(c,\, c') \in \pi} p(c ::_i c') \Big).$$
Proof. Since any class hierarchy can be presented as a directed graph, it is obvious that there has to be at least one path from $C_1$ to $C_2$. If the opposite was true, the statement would not hold and thus
wouldn’t be in the initial solution set.
For the statement to hold, at least one path statement of the form $c_1 ::_i c_2 \wedge c_2 ::_i c_3 \wedge \dots$ has to hold as well. This yields according to Rule 1 that the probability of one path $\pi$ would be
$$p(\pi) = \prod_{(c,\, c') \in \pi} p(c ::_i c').$$
Since there is a probability that there are multiple paths which are alternative possibilities for proving the same premise, it holds that
$$p(C_1 :: C_2) = p\Big( \bigvee_{\pi \in \Pi} \pi \Big).$$
Thus from Rule 2 we get
$$p(C_1 :: C_2) = 1 - \prod_{\pi \in \Pi} \big( 1 - p(\pi) \big),$$
what we wanted to prove.
Theorem 16. If $\varphi$ is a statement of classification of the form $O : C$, where $:_i$ denotes immediate classification and $c$ is the class of which $O$ is an immediate member, then
$$p(O : C) = p(O :_i c) \cdot p(c :: C).$$
Proof. Since the statement can be written as $O :_i c \wedge c :: C$, the given probability is a consequence of Rule 1 and Theorem 15.
A special case of query execution is when the social ontology contains Horn rules. Such rules are also subject to probability annotation. Thus we have
$$head \leftarrow body \quad @\; p_r,$$
where $p_r$ is the annotated probability of the rule $r$.
In order to provide a mechanism to deal with such probability annotated rules, we will establish an extended definition by using an additional counter predicate for each Horn rule. Thus, each rule $r$ is extended as
$$head \wedge count_r \leftarrow body,$$
whereby $count_r$ is a predicate which will count the number of times the particular rule has been successfully executed for finding a given solution.
The query execution scheme has to be altered as well. Instead of finding only the solutions from formula $\varphi$, an additional variable $X_{r_i}$ for every rule $r_i$ in the social ontology is added to the formula. For rules $r_1, \dots, r_m$ we would thus have
$$?\text{-}\; \varphi \wedge count_{r_1}(X_{r_1}) \wedge \dots \wedge count_{r_m}(X_{r_m}).$$
In order to calculate the probability of a result obtained by using some probability annotated rule we establish the following definition.
Definition 17. Let $s$ be a result obtained with probability $p(s)$ by query $\varphi$ from a social ontology, let $p_r$ be the probability of rule $r$, and let $k_r$ be the number of times rule $r$ was executed during the derivation of result $s$. The final probability of $s$ is then defined as
$$p'(s) = p(s) \cdot p_r^{\,k_r}.$$
This definition is intuitive since for the obtainment of result $s$ the rule has to hold $k_r$ times. Thus if a social ontology contains rules $r_1, \dots, r_m$, their corresponding annotated probabilities are $p_{r_1}, \dots, p_{r_m}$, and the numbers of executions during the derivation of result $s$ are $k_1, \dots, k_m$, then the final probability is defined as
$$p'(s) = p(s) \cdot \prod_{i=1}^{m} p_{r_i}^{\,k_i}.$$
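These combination rules are straightforward to implement (a minimal sketch; the function names are illustrative, not from the paper):

    from functools import reduce

    def p_and(*ps):
        # Rule 1: probability of a conjunction is the product
        return reduce(lambda x, y: x * y, ps, 1.0)

    def p_or(*ps):
        # Rule 2 generalized: noisy-or over alternative derivations
        return 1.0 - reduce(lambda x, y: x * (1.0 - y), ps, 1.0)

    def final_probability(p_s, rule_probs, rule_counts):
        # Definition 17: discount a result by every rule used to derive it
        for r, k in rule_counts.items():
            p_s *= rule_probs[r] ** k
        return p_s

    # e.g. a result with base probability 0.6 derived by using rule "r1"
    # (annotated with 0.9) twice:
    print(final_probability(0.6, {"r1": 0.9}, {"r1": 2}))   # 0.486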
6. Annotated Reasoning Example
In order to demonstrate the approach we will take the following (imaginary) example of an MAS (all images, names, and motives are taken from the 1968 movie “Yellow Submarine” produced by United
Artists (UA) and King Features Syndicate). Presume we have a problem domain entitled “Pepperland” with objects entitled “Music” and “Purpose of life.” Let us further presume that we have six agents
collaborating on this problem, namely, “John,” “Paul,” “Ringo,” “George,” “Max,” and “Glove.”
Another intelligent agent “Jeremy Hilary Boob Ph.D (Nowhere man)” tries to reason about the domain, but as it comes out, the domain is inconsistent. Table 1 shows the different viewpoints of agents.
Due to the disagreement on different issues a normal query would yield at least questionable results. For instance, if the disagreement statements are ignored in frame logic syntax, the domain would
be represented with a set of sentences similar to the following:
Thus a query asking for the class to which the object entitled “Music” belongs would yield two valid answers, namely, “evil noise” and “harmonious sounds.” Likewise if querying for the value of the
“main purpose” attribute of the object entitled “Purpose of life,” for example, the valid answers would be “glove,” “love,” and “drums.” But, these answers do not reflect the actual state of the MAS, since one answer is more
meaningful to it than the others.
Nowhere man thinks hard and comes up with a solution. The agents form a social network of trust, shown in Figure 1.
The figure reads as follows: Ringo trusts Paul and John, Paul trusts John, John trusts George, George trusts John, Max trusts Glove, and Glove does not trust anyone. Using the previously described
PageRank algorithm Nowhere man was able to order the agents by their respective rank (Table 2).
Now, Nowhere man uses these rankings to annotate the statements given by the agents:
$$p(\text{music} : \text{evil\_noise}) = r(\text{Glove}) + r(\text{Max}) - r(\text{George}).$$
As we can see, the probability that the object entitled “Music” is an “evil noise” is equal to the sum of the rankings of the agents who agree to this statement (Glove and Max) minus the sum of the rankings of the agents who disagree (George). Note that if an agent had expressed the same statement twice with the same attribute name, his ranking would be counted only once. Also note that, if an agent had agreed and disagreed to the same statement, his contribution would be zero, since he would appear on both the agreeing and the disagreeing side.
From this probability calculation Nowhere man is able to conclude that the formula music : evil_noise holds with the resulting probability. Likewise he calculates the probability of music : harmonious_sounds.
He can now conclude that music : harmonious_sounds is more likely to hold than music : evil_noise with regard to the social network of agents, and from these calculations he obtains the final solutions to the query.
Nowhere man continues reasoning and calculates the probabilities for the other queries in the same manner.
From these calculations Nowhere man concludes that purpose_of_life[main_purpose -> love] is the most likely statement to hold, and the final result of the query follows.
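The annotation step itself is just a signed sum of ranks (a minimal sketch reusing the pagerank function from Section 3; the sets of agreeing agents come from Table 1, which is not reproduced here, so they are illustrative):

    def annotate(agrees, disagrees, ranks):
        # Definition 11: agreeing ranks minus disagreeing ranks, clipped at zero
        p = sum(ranks[a] for a in agrees) - sum(ranks[a] for a in disagrees)
        return max(0.0, p)

    ranks = pagerank(agents, trust)   # from the sketch in Section 3
    p_evil = annotate(["Glove", "Max"], ["George"], ranks)
    p_harm = annotate(["John", "Paul", "Ringo", "George"], [], ranks)
    print(p_evil, p_harm)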
Now we can complicate things a bit to see the other parts of the approach in action. Assume now that John has expressed a statement that relates the object entitled “Music” to the object entitled
“Purpose of life” and named the attribute “has to do with.” We would now have the following social ontology:

music : evil_noise.
music : harmonious_sounds.
music[has_to_do_with -> purpose_of_life].
purpose_of_life[main_purpose -> glove].
purpose_of_life[main_purpose -> love].
purpose_of_life[main_purpose -> drums].

Now suppose that Nowhere man wants to issue the following query:
$$?\text{-}\; \text{music} : C \wedge \text{music}[\text{has\_to\_do\_with} \rightarrow Y] \wedge Y[\text{main\_purpose} \rightarrow Z].$$
The solutions using “normal” frame logic are the six bindings s[1], …, s[6] obtained by combining the two possible classes for $C$ (evil_noise and harmonious_sounds) with the three possible values for $Z$ (glove, love, and drums).
To calculate the probabilities Nowhere man uses the following procedure. The variables in the query are exchanged with the actual values for a given solution s[1], …, s[6].
Now according to Rule 1 the probability of each conjunction becomes the product of the probabilities of its conjuncts. The second parts of the equations (the main-purpose statements) were already calculated, and according to Theorem 14 the first parts of the equations reduce to products over their atomic F-molecules. We already know the probabilities of the is-a statements, and so the final probabilities of the six solutions follow.
7. Amalgamation
To provide a mechanism for agents to query multiple annotated social ontologies we decided to use the principles of amalgamation. The model of knowledge base amalgamation which is based on online
querying of underlying sources is described in [9]. The intention of amalgamation is to show if a given solution holds in any of the underlying sources.
Since the local annotations of different ontologies that are subject to amalgamation do not necessarily hold for the global ontology, we need to introduce a mechanism to integrate the ontologies in a
coherent way which will yield global annotations. Since the set of ontologies is a product of a set of respective social agent networks surrounding them, we decided to firstly integrate the social
networks in order to provide the necessary foundation for global annotation.
Definition 18. The integration of social networks represented with the valued digraphs $G_1 = (V_1, E_1, w_1), \dots, G_n = (V_n, E_n, w_n)$ is given as the valued digraph $G = \big( \bigcup_{i=1}^{n} V_i,\; \bigcup_{i=1}^{n} E_i,\; w \big)$, where $w$ is a function that attaches values to nodes.
Definition 19. Let $S_1, \dots, S_n$ be sets of statements as defined above representing particular social ontologies. The integration is given as $S = \bigcup_{i=1}^{n} S_i$.
What remains is to provide the annotation that is at the same time the amalgamation scheme.
Definition 20. Let $G$ be the integration of the social networks of agents, let $S$ be the integration of their corresponding social ontologies, let $T$ be a trust relation between agents, and let $r$ be a function that assigns ranks to agents based on $T$, computed on the integrated network $G$; then the amalgamated annotation scheme of the metadata statements is defined as follows:
$$p(s) = \max\Big(0,\; \sum_{a \in A^{+}(s)} r(a) \;-\; \sum_{a \in A^{-}(s)} r(a)\Big), \quad s \in S.$$
8. Amalgamated Annotated Reasoning Example
To demonstrate the amalgamation approach proposed here let us again assume that our intelligent agent “Jeremy Hilary Boob Ph.D. (Nowhere man)” tries to reason about the “Pepperland” domain, but this
time he wants to draw conclusions from the domain “Yellow submarine” as well. The “Yellow submarine” domain is being modeled by “Ringo,” “John,” “Paul,” “George,” and “Young Fred” which form the
social network shown in Figure 2. Since the contents of this domain as well as the particular ranks of the agents in it will not be used further in the example, they have been left out.
Since Nowhere man wants to reason about both domains he needs to find a way to amalgamate these two domains.
Again he thinks hard and comes up with the following solution. All he needs to do is to integrate the two social networks together and recalculate the ranks of all agents of this newly established
social network in order to reannotate the metainformation in both domains.
Since the networks of “Pepperland” and “Yellow submarine” can each be represented as a set of trust tuples, all he needs is to form the union of the two sets and recalculate the ranks of this new, combined network.
The newly established integrated social network is shown in Figure 3.
Now Nowhere man calculates the ranks of this new network and uses the previously described procedure to annotate the meta information (Section 4) and reason about the amalgamated domain (Section 5).
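In code, the amalgamation amounts to a union of edge sets followed by a re-ranking (a minimal sketch; the “Yellow submarine” edges are illustrative only, since the paper does not list them):

    # Pepperland trust edges (Figure 1)
    pepperland = {("Ringo", "Paul"), ("Ringo", "John"), ("Paul", "John"),
                  ("John", "George"), ("George", "John"), ("Max", "Glove")}
    # Yellow submarine trust edges -- illustrative, not from the paper
    yellow_sub = {("Young Fred", "Ringo"), ("Ringo", "John"),
                  ("Paul", "George")}

    merged_edges = pepperland | yellow_sub
    merged_nodes = {v for edge in merged_edges for v in edge}

    # re-rank on the integrated network (pagerank as sketched in Section 3)
    print(pagerank(sorted(merged_nodes), sorted(merged_edges)))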
9. Towards a Distributed Application
As we have seen from the previous examples, in order to gain accurate knowledge and accurate probabilities about a certain domain, we had to introduce an all-knowing agent (Nowhere man). This agent had to be aware of the knowledge of every agent and of all trust relations they engage in. Such a scenario is not feasible for large-scale MAS (LSMAS). Thus we need to provide a mechanism that lets agents reason in a distributed manner and still obtain accurate enough results.
This problem consists of two parts; namely, an agent needs (1) to acquire an accurate approximation of the ranks of each agent in its network and (2) to acquire knowledge about the knowledge of other
agents. The first part deals with annotation and the second with amalgamation of the ontology.
A solution to the first problem might be to calculate ranks in a distributed manner, as has been shown in [10]. In this way agents acquire approximate knowledge about the ranks of their neighbouring agents.
The second problem could be addressed by the proposed amalgamation algorithm. Each agent can ask the agents it trusts about their knowledge and then amalgamate their ontologies with its own. In this way the agent acquires continuously better knowledge about its local environment. We could easily have considered Nowhere man in the last example to be following this very procedure, asking one agent after another about their knowledge.
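A minimal sketch of such a local update is given below. It is a drastic simplification of the aggregation-disaggregation method of [10]: each agent recomputes its own rank estimate from the shares received from its in-neighbours and then forwards its own shares. The function name and the message format are illustrative assumptions.

```python
def local_rank_step(inbox, out_neighbours, damping=0.85, n_total=1):
    """One distributed PageRank-style update performed by a single agent.
    `inbox` holds (sender, share) messages from in-neighbours; returns the
    agent's new rank estimate and the messages to send to out-neighbours."""
    new_rank = (1.0 - damping) / n_total + damping * sum(share for _, share in inbox)
    fanout = max(len(out_neighbours), 1)  # avoid division by zero for sinks
    out_msgs = [(nb, new_rank / fanout) for nb in out_neighbours]
    return new_rank, out_msgs
```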
10. Possible Application Areas
In order to provide a practical example, consider a network of store-and-forward e-mail routing agents in which spam bots try to send unsolicited messages. Some routers (agents) might be under the control of spam bots and send out messages that are harmful to users and other routers. The domain these agents reason about is the domain of spam messages: for example, which messages (from which user, forwarded by which router, and with what kind of content) are spam and should be discarded.
This scenario can be modeled by using the previously described approach: agents form trust relations and mutually exchange new rules about spam filtering. An agent will amalgamate rules (ontologies)
of other agents with its own but will decide about a message (using an adequate query) based not only on the given rules but also on the probability annotation given by the network of trust.
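A hedged sketch of the router-side logic follows. The rule representation (rules as predicates over messages) and the acceptance threshold are invented for illustration and are not prescribed by the approach itself:

```python
def amalgamate_rules(own_rules, peer_rules, ranks):
    """Merge spam-filtering rules received from trusted peers with the
    router's own rules; each rule keeps the best annotation seen so far.
    `own_rules` maps rule -> annotation; `peer_rules` maps agent -> rules."""
    annotated = dict(own_rules)
    for agent, rules in peer_rules.items():
        for rule in rules:
            annotated[rule] = max(annotated.get(rule, 0.0), ranks[agent])
    return annotated

def is_spam(message, annotated_rules, threshold=0.5):
    """Trust-weighted decision: fire every rule that matches the message
    and compare the strongest annotation against a threshold."""
    scores = [ann for rule, ann in annotated_rules.items() if rule(message)]
    return max(scores, default=0.0) >= threshold
```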
11. Related Work
Alternative approaches to measuring trust, in the form of reputation inference and the SUNNY algorithm, are presented in [11, 12], respectively. Either could have been used instead of PageRank in the approach outlined herein. A much more elaborate system for measuring reputation, and likewise trust, in MAS, called the Regret system, is presented in [13]. It is based on three different dimensions of reputation (individual, social, and ontological) and allows for measuring several types of reputation in parallel. The approach is partly incompatible with ours, but a few adjustments would allow the two to be combined.
A different approach to a similar problem, related to trust management in the Semantic Web, is presented in [14]. It provides a profound model based on path algebra and inspired by Markov models. The model yields a method of deriving the degree of belief in a statement that is explicitly asserted by one or more individuals in a network of trust, whilst a calculus for computing the belief in derived statements is left to future research. Herein, a formalism for deriving the belief in any computable statement is presented for F-logic.
12. Conclusion
When agents have to solve a problem collectively, they have to reach consensus about the domain, since their opinions can differ. Especially when agents are self-interested, their goals in a given situation can vary quite considerably. Herein, an approach to reaching this consensus based on a network of trust between agents has been presented, generalizing the work done in [15, 16], which dealt with semantic wiki systems and semantic social networks, respectively. By using network analysis, trust ranks of agents can be calculated, which can be interpreted as an approximation of the probability that a certain agent will tell the truth. Using this interpretation, an annotation scheme for F-logic based Horn programs has been developed which allows agents to reason about the modeled domain and make decisions based on the probability that a certain statement (derived or explicit) is true. Based on this annotation scheme and the network of trust, an amalgamation scheme has been developed as well, which allows agents to reason about multiple domains.
Still, there are open questions: how does the approach scale in fully decentralized environments like LSMAS? What are the implications of self-interest, and could agents develop strategies to “lie” on purpose to attain their goals? These and similar questions are the subject of our future research.
1. I. Nonaka and H. Takeuchi, The Knowledge-Creating Company, How Japanese Companies Create the Dynamics of Innovation, Oxford University Press, 1995.
2. F. Nietzsche, “Über wahrheit und lüge im außermoralischen sinn,” 1873, http://www.textlog.de/455.html.
3. P. Bonacich, “Factoring and weighting approaches to clique identification,” Journal of Mathematical Sociology, vol. 2, pp. 113–120, 1972.
4. M. Kifer, G. Lausen, and J. Wu, “Logical foundations of object-oriented and frame-based languages,” Journal of the Association for Computing Machinery, vol. 42, no. 4, pp. 741–843, 1995.
5. S. Wasserman and K. Faust, “Social network analysis, methods and applications,” in Structural Analysis in the Social Sciences, Cambridge University Press, 1994.
6. P. Mika, Social Networks and the Semantic Web, Springer, New York, NY, USA, 2007.
7. S. Brin, “The anatomy of a large-scale hypertextual Web search engine,” Computer Networks, vol. 30, pp. 107–117, 1998.
8. L. Page, S. Brin, R. Motwani, and T. Winograd, “The pagerank citation ranking: bringing order to the web,” 1999.
9. A. Lovrenčić and M. Čubrilo, “Amalgamation of heterogeneous data sources using amalgamated annotated hilog,” in Proceedings of the 3rd IEEE Conference on Intelligent Engineering Systems (INES'99), 1999.
10. Y. Zhu and X. Li, “Distributed pagerank computation based on iterative aggregation-disaggregation methods,” in Proceedings of the 14th ACM International Conference on Information and Knowledge Management, pp. 578–585, 2005.
11. J. Golbeck and J. Hendler, “Accuracy of metrics for inferring trust and reputation in semantic web-based social networks,” in Proceedings of the 14th International Conference (EKAW '04), pp. 116–131, October 2004.
12. U. Kuter and J. Golbeck, “Using probabilistic confidence models for trust inference in web-based social networks,” ACM Transactions on Internet Technology, vol. 10, no. 2, article 8, 2010.
13. J. Sabater and C. Sierra, “Reputation and social network analysis in multi-agent systems,” in Proceedings of the 1st International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS '02), pp. 475–482, New York, NY, USA, July 2002.
14. M. Richardson, R. Agrawal, and P. Domingos, “Trust management for the semantic web,” in Proceedings of the 2nd International Semantic Web Conference, pp. 351–368, 2003.
15. M. Schatten, Programming Languages for Autopoiesis Facilitating Semantic Wiki Systems [Ph.D. thesis], University of Zagreb, Faculty of Organization and Informatics, Varaždin, Croatia, 2010.
16. M. Schatten, “Knowledge management in semantic social networks,” Computational and Mathematical Organization Theory, pp. 1–31, 2012.
Two-variable p-adic L-functions of elliptic curves
Suppose $K$ is an imaginary quadratic field (with class number 1, for simplicity), $p \ne 2$ a prime split in $K$, and $K_\infty$ the $\mathbb{Z}_p^2$-extension of $K$.
If $E / \mathbb{Q}$ is an elliptic curve with CM by $K$, then there is a construction (due to Katz) of a "two-variable $p$-adic $L$-function" attached to $E$, which is a $p$-adic measure on the Galois group of $K_\infty / K$, interpolating $L$-values of the twists of the Groessencharacter of $K$ attached to $E$ by finite-order characters of p-power conductor. See e.g. de Shalit's book "Iwasawa theory of elliptic curves with complex multiplication" (Academic Press, 1987).
If $E / \mathbb{Q}$ is any elliptic curve with good ordinary reduction at $p$ (or more generally any ordinary modular form of weight $\ge 2$), but not necessarily with CM by $K$, there is also a
construction of a two-variable $L$-function attached to $E$, written down by Perrin-Riou (J. London Math. Soc. 38 (1988), 1–32), based on earlier work by Hida and others. This interpolates $L$-values
of the twists of $E$ by certain 2-dimensional Artin representations of $\mathbb{Q}$, obtained by inducing up finite-order characters of $\operatorname{Gal}(K_\infty / K)$.
My question is this: if we apply Perrin-Riou's method to an $E$ which happens to have CM by $K$, then what is the relation between the $L$-functions coming from the two constructions?
(My impression is that Perrin-Riou's construction should give the product of Katz's $L$-function with its conjugate, corresponding to the decomposition of the Tate module of $E$ as a $\operatorname{Gal}(\overline{K} / K)$-representation into the direct sum of two conjugate characters. But I'm puzzled by the discrepancy of coefficient fields: Perrin-Riou's measure takes values in some finite extension of $\mathbb{Q}_p$, while Katz's lives in the completion of the unramified $\mathbb{Z}_p$-extension of $\mathbb{Q}_p$, which is far larger.)
nt.number-theory iwasawa-theory p-adic-analysis l-functions
My understanding is that if your E is CM and ordinary, then Katz's measure can be shown to live in some finite extension of Qp as well, but this is based more on general expectations than on any direct knowledge I have of the construction of Katz. – Olivier Jun 14 '11 at 17:30
+1. I would also be interested to know the answer! I remember being told that the Katz two-variable p-adic L-function specializes to the classical (one-variable) p-adic L-function of E, but I don't know about your more general question. – François Brunault Jun 14 '11 at 17:34
@Olivier: I'm sorry, that's not true. (The values of Katz's L-function at algebraic characters involve a period $\Omega_p$ which is transcendental over $\mathbb{Q}_p$.) – David Loeffler Jun 14 '11
at 17:34
@David: maybe one just needs to divide the Katz L-function by $\Omega_p$? Indeed there is no "p-adic period" in the definition of the classical p-adic L-function. – François Brunault Jun 14 '11 at
Again, I really don't know but, generally speaking, p-adic periods arise as determinants of comparison isomorphisms. If your motive is ordinary to begin with, these comparison isomorphisms and the determinants in question live in your original ring of coefficients. At any rate, this is certainly what happens on the algebraic side, which is the only one I really know anything about. But perhaps I am talking nonsense here, and I really should stop writing before I know more about this $\Omega_{p}$. – Olivier Jun 14 '11 at 17:58
1 Answer
The conditions on K in Perrin-Riou's paper (Heegner hypothesis) probably exclude the case that we can take K to be the field of complex multiplication.
The Heegner hypothesis, as I understand it, is that all primes $\ell \mid N$ are split in $K$. P-R doesn't assume this, but she does assume that all primes dividing $N$ are unramified in $K$. It is true that this does actually exclude the case I'm looking at here, but I'm pretty sure that this is more for simplification than because it's essential to the method. – David Loeffler Jul 23 '11 at 7:14
Over the past few days I have received several emails asking for clarification of the effective sample size derivation in “Introducing Monte Carlo Methods with R” (Section 4.4, pp. 98-100). Formula
(4.3) gives the Monte Carlo estimate of the variance of a self-normalised importance sampling estimator (note the change from the original version in Introducing Monte
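(The quantity discussed in this excerpt, the effective sample size of a set of importance weights, is standard: $\mathrm{ESS} = (\sum_i w_i)^2 / \sum_i w_i^2$. A minimal sketch follows, in Python purely for illustration:)

```python
import numpy as np

def effective_sample_size(weights):
    """Effective sample size of importance weights: (sum w)^2 / sum(w^2)."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

# Example: heavily skewed weights give a small ESS.
print(effective_sample_size([1.0, 1.0, 1.0, 1.0]))   # 4.0
print(effective_sample_size([10.0, 0.1, 0.1, 0.1]))  # ~1.06
```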
Seeing the Big Picture
Here’s a nice snippet from a 2009 article by Kass that I read yesterday: According to my understanding, laid out above, statistical pragmatism has two main features: it is eclectic and it emphasizes
the assumptions that connect statistical models with observed data. The pragmatic view acknowledges that both sides of the frequentist-Bayesian debate made important
Global done!
Over the past few weeks I’ve been working at getting Moshtemp to work entirely in the raster package. I’ve been aided greatly by the author of that package, Robert, who has been turning out
improvements to the package with regularity. For a while I was a bit stymied by some irregularities in getting source from
R/Finance 2011 Call for Papers
Held annually in Chicago, R/Finance is the conference for R users in the financial services industry. This year's conference included presentations from practitioners and researchers at institutions
like Invesco Asset Management, Black Mesa Capital, and some of the leading academic institutions from around the world. Next year's conference, to be held 29-30 April 2011, could include you. If
New housedata release 20100923
New Housedata release of 49,914 summary filings from 7897 candidates for the US House 2002-2010.
IIATMS Guest Contribution
After my recent posts fiddling around with heat maps for pitch location, Jason at It's About the Money, Stupid contacted me to ask if I would contribute some location maps for Yankee pitchers.
Obviously, I couldn't pass up the chance to contribute to ...
R Project and Google Summer of Code: Wrapping up
As this year's admin, I wrote up the following summary which has now been posted at the R site in the appropriate slot. My thanks to this year's students, fellow mentors and everybody else who helped
to make it happen. ...
Monte Carlo Statistical Methods third edition
Last week, George Casella and I worked around the clock on starting the third edition of Monte Carlo Statistical Methods by detailing the changes to make and designing the new table of contents. The
new edition will not see a revolution in the presentation of the material but rather a more mature perspective on what | {"url":"http://www.r-bloggers.com/2010/09/page/6/","timestamp":"2014-04-21T07:34:00Z","content_type":null,"content_length":"37485","record_id":"<urn:uuid:fe99227d-41e7-46ec-a012-7775a63cc22a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00589-ip-10-147-4-33.ec2.internal.warc.gz"} |