Augustana College
Associate Professor: M. Gregg
Assistant Professors: J. Smith, T. Sorenson
Department Chair: E. Wells
The Mathematics curriculum is designed to provide for the educational needs of many students. For general education there are courses which develop basic competence in mathematical reasoning. More
advanced courses furnish necessary mathematical background for a variety of majors. A major in Mathematics suits students intending to become mathematics teachers, planning to enter certain
professions in business or industry, preparing for graduate study in mathematics or related areas, or simply wishing to support another major.
Mathematics Major:
41 credit hours
Required Courses: 33 credit hours
MATH 151 — Calculus I (4 cr)
MATH 152 — Calculus II (4 cr)
MATH 153 — Calculus III (3 cr)
MATH 200 — Foundations of Mathematics (3 cr)
MATH 220 — Linear Algebra (3 cr)
MATH 490 — Senior Seminar (1 cr)
*MATH 300-level — Elective courses (3 courses) 9 cr
Two of the following courses:
MATH 340 — Abstract Algebra (3 cr)
MATH 345 — Topology (3 cr)
MATH 350 — Real Analysis (3 cr)
MATH 355 — Complex Analysis (3 cr)
*May use the two courses not used for the elective area above.
Required Supportive Courses: 8 credit hours
COSC 210 — Computer Science I (4 cr)
PHYS 221 — General Physics I (4 cr)
Mathematics Minor:
18 credit hours
MATH 152 — Calculus II (4 cr)
MATH 200-level — Elective (or higher) (3 cr)
*MATH — Elective courses (One COSC course allowed as a substitute) (11 cr)
Mathematics Courses:
MATH 110 — Structure of Mathematics (3 credits)
Recommended for Elementary Education majors as a preliminary to MATH 113. An introduction to basic mathematical ideas including counting and measuring, calculation, symbol manipulation, algebra and
logic. Topics are matched to the elementary school curriculum. The emphasis is on developing understanding, intuition, and imagination rather than rigidly following prescribed methods. Offered Every
MATH 113 — Teaching Mathematics in Elementary and Middle School (3 credits)
This course is an introduction to the pedagogy and curriculum of an NCTM standards-based arithmetic program in grades K-12. Using the content strands of numbers and operations, analyzing patterns,
geometry, and measurement, the course includes planning, teaching, assessment, diagnosis, and evaluation of student learning in mathematics. This course will present current best-practice,
research-based instructional methods in mathematical procedures and processes, and the use of technology in teaching/student learning and classroom management as it applies to mathematics. It is
based on the recommendations of NCTM; namely that all children learn best by actively exploring and investigating math, that problem-solving, reasoning and communication are important goals of
mathematics teaching and learning and that all children have highly qualified teachers. Prerequisite: MATH 140 or higher and Admission to Teacher Education; Offered Every Semester.
MATH 140 — Quantitative Reasoning (Area 2.3) (3 credits)
For students with one or two years of high school algebra. This course is at the level of college algebra but is not focused on algebra. It stresses application of mathematics in careers of
non-scientists and in the everyday lives of educated citizens, covering basic mathematics, logic, and problem solving in the context of real-world applications. Offered Every Semester.
MATH 150 — Pre-Calculus (Area 2.3) (4 credits)
Algebra review, functions and graphs, logarithmic and exponential functions, analytic geometry, trigonometric functions, trigonometric identities and equations, mathematical induction, complex
numbers. Students completing this course are prepared to enter calculus. Offered Every Semester.
MATH 151 — Calculus I (Area 2.3) (4 credits)
Limits and continuity for functions of one real variable. Derivatives and integrals of algebraic, trigonometric, exponential, and logarithmic functions. Applications of the derivative. Introduction
to related numerical methods. Offered Every Semester.
MATH 152 — Calculus II (4 credits)
Techniques of integration, numerical integration, and applications of integrals. Infinite series including Taylor series. Introduction to differential equations. Calculus in polar coordinates.
Offered Every Semester.
MATH 153 — Calculus III (3 credits)
The calculus of vector-valued functions, functions of several variables, and vector fields. Includes vector operations, equations of curves and surfaces in space, partial derivatives, multiple
integrals, line integrals, surface integrals, and applications. Offered Every Spring Semester.
MATH 200 — Foundations of Mathematics (3 credits)
Bridges the gap between computational, algorithmic mathematics courses and more abstract, theoretical courses. Emphasizes the structure of modern mathematics: axioms, postulates, definitions,
examples, conjectures, counterexamples, theorems, and proofs. Builds skill in reading and writing proofs. Includes careful treatment of sets, functions, relations, cardinality, and construction of the
integers, and the rational, real, and complex number systems. Prerequisite: MATH 152; Offered Every Fall Semester.
MATH 220 — Linear Algebra (3 credits)
Vector spaces, linear independence, basis and dimension, linear mappings, matrices, linear equations, determinants, eigenvalues, and quadratic forms. Prerequisite: MATH 152; Offered Every Spring Semester.
MATH 310 — Differential Equations (3 credits)
Methods of solving first and second order differential equations, applications, systems of equations, series solutions, existence theorems, numerical methods, and partial differential equations.
Prerequisite: MATH 152; Offered Every Fall Semester.
MATH 315 — Probability and Statistics (3 credits)
Probability as a mathematical system, random variables and their distributions, limit theorems, statistical inference, estimation, decision theory and testing hypotheses. Prerequisite: MATH 152;
Offered Every Fall Semester.
MATH 320 — Discrete Structures (3 credits)
Topics to be selected from counting techniques, mathematical logic, set theory, data structures, graph theory, trees, directed graphs, algebraic structures, Boolean algebra, lattices, and
optimization of discrete processes. Prerequisites: MATH 151 and COSC 210; Offered Every Spring Semester.
MATH 330 — History of Mathematics (W - Area 2.1B) (3 credits)
The history of mathematics from ancient to modern times. The mathematicians, their times, their problems, and their tools. Major emphasis on the development of geometry, algebra, and calculus.
Prerequisite: MATH 200; Offered Interim, Odd Years.
MATH 335 — Modern Geometry (3 credits)
A review of Euclidean geometry, an examination of deficiencies in Euclidean geometry, and an introduction to non-Euclidean geometries. Axiomatic structure and methods of proof are emphasized.
Prerequisite: MATH 200; Offered Interim, Even Years.
MATH 340 — Abstract Algebra (3 credits)
A survey of the classical algebraic structures taking an axiomatic approach. Deals with the theory of groups and rings and associated structures, including subgroups, factor groups, direct sums of
groups or rings, quotient rings, polynomial rings, ideals, and fields. Prerequisite: MATH 200 and 220; Offered Fall Semester, Even Years.
MATH 345 — Topology (3 credits)
An introduction to topological structures from point-set, differential, algebraic, and combinatorial points of view. Topics include continuity, connectedness, compactness, separation, dimension,
homeomorphism, homology, homotopy, and classification of surfaces. Prerequisite: MATH 200 and 220; Offered Spring Semester, Odd Years.
MATH 350 — Real Analysis (3 credits)
This course develops the logical foundations underlying the calculus of real-valued functions of a single real variable. Topics include limits, continuity, uniform continuity, derivatives and
integrals, sequences and series of numbers and functions, convergence, and uniform convergence. Prerequisite: MATH 200 and 220; Offered Fall Semester, Odd Years.
MATH 355 — Complex Analysis (3 credits)
A study of the concepts of calculus for functions with domain and range in the complex numbers. The concepts are limits, continuity, derivatives, integrals, sequences, and series. Topics include
Cauchy-Riemann equations, analytic functions, contour integrals, Cauchy integral formulas, Taylor and Laurent series, and special functions. Prerequisite: MATH 200 and 220; Offered Spring Semester,
Even Years.
MATH 197, 297, 397 — Topics in Mathematics (2-4 credits)
MATH 199, 299, 399 — Independent Study (2-4 credits)
MATH 490 — Senior Seminar (1 credit)
This course reviews and correlates the courses in the mathematics major. Each student is responsible for preparing the review of one area. Students also read papers from contemporary mathematics
journals and present them to the class. The course uses the ETS mathematics major exam. Prerequisite: MATH 200 and 220; Offered Every Spring Semester.
Determine whether these sets S are bounded, and determine sup(S) and inf(S) if they exist
April 21st 2009, 09:24 PM #1
For each set S below, determine whether S is bounded, and determine sup(S) and inf(S), if they exist.
a) S = {x: x^2 < 5x}
b) S = {x: 2x^2 < x^3 + x}
c) S = {x: 4x^2 > x^3 + x}
Last edited by qtpipi; April 21st 2009 at 09:57 PM.
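For intuition only (this is not a proof), each inequality factors: (a) x^2 < 5x gives x(x - 5) < 0, so 0 < x < 5 (bounded, inf 0, sup 5); (b) 2x^2 < x^3 + x gives x(x - 1)^2 > 0, so x > 0 with x != 1 (inf 0, unbounded above); (c) 4x^2 > x^3 + x gives x(x^2 - 4x + 1) < 0, so x < 0 or 2 - sqrt(3) < x < 2 + sqrt(3) (unbounded below, sup 2 + sqrt(3)). A numeric sample over a finite window agrees; the window [-50, 50] is an arbitrary stand-in for all of R, so unboundedness only shows up as the set running into a window edge:

```python
import math

def members(pred, lo=-50.0, hi=50.0, steps=100001):
    # Sample the predicate on an evenly spaced grid over [lo, hi].
    step = (hi - lo) / (steps - 1)
    xs = [lo + k * step for k in range(steps)]
    return [x for x in xs if pred(x)]

a = members(lambda x: x * x < 5 * x)            # expect (0, 5)
b = members(lambda x: 2 * x * x < x**3 + x)     # expect x > 0, x != 1
c = members(lambda x: 4 * x * x > x**3 + x)     # expect x < 0 or (2-sqrt3, 2+sqrt3)

print(min(a), max(a))   # about 0 and 5: bounded, inf = 0, sup = 5
print(min(b), max(b))   # min about 0; max hugs the window edge (unbounded above)
print(min(c), max(c))   # min hugs the window edge; max about 2 + sqrt(3)
```

The sampled endpoints only approximate sup and inf to the grid spacing, of course; the exact values come from the factorizations above.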
April 22nd 2009, 03:05 AM #2
Did you graph each side separately?
Can Jerry Seinfeld crack P vs NP ?
The following is a quote from comedian Jerry Seinfeld. The source is Seinfeld Universe: The Entire Domain by Greg Gattuso (publisher of Nothing: The Newsletter for Seinfeld Fans), page 96.
I was great at Geometry. If I wanted to train someone as a comedian, I would make them do lots of proofs. That's what comedy is: a kind of bogus proof. You set up a fallacious premise and then
prove it with rigorous logic. It just makes people laugh. You'll find that most of my stuff is based on that system ... You must think rationally on a completely absurd plane.
I doubt that many comedians have seen lots of proofs, though they may have an intuitive sense of logic for their routines. And not all comedians use this style.
I know of one theoretical computer scientist who is a comedy writer. Jeff Westbrook got his PhD in 1989 with Robert Tarjan on "Algorithms and Data Structures for Dynamic Graph Algorithms". He was faculty at Yale, and then a researcher at AT&T before working on the TV shows The Simpsons. I actually met him in 1989; he didn't seem that funny at the time.
Are there other theorists or mathematicians that are also professional comedians or comedy writers? I doubt there are many. If you define theorist or mathematician as having a PhD, then I assume it's very, very few. If you define it as majored in math or CS, there would probably be some.
11 comments:
1. cutting a pi, 4:45 PM, July 17, 2007
One immediately thinks of Lewis Carroll, both as an example to a mathematician+humorist, and as a living example of what Seinfeld suggests as a "technique" of doing humor.
The surrealists were good at this kind of thing.
2. Noam Slonim is a computational biologist/machine learnist who wrote stuff for israeli sitcoms,
incl. the great 'hahamishia hakamerit'
3. Here's a list of the mathematical background of just writers of The Simpsons. Lots of math geeks there!
4. David X. Cohen, co-creator of Futurama, was getting his Ph.D. in theoretical CS at Berkeley, but left around his third year with a Masters to go write for the Simpsons. There's a Harvard Lampoon connection that has led a number of "math geeks" to the Simpsons and other comedy shows (animated and otherwise).
5. Tom Lehrer should qualify as a mathematician-humorist.
6. John Rogers is a stand-up comedian, TV/film screenwriter and physics graduate.
7. And let's not forget the greatest comic talent of all, and he does in fact perform professionally...
Scott Aaronson.
8. On the physics side of things, Albert Einstein and Richard Feynman have always been known for their sense of humor, especially through some notable quotes attributed to them, although they were
never formally comedians.
Also for the gamers out there, it's actually interesting to note that among the two original creators of the internet show pure pwnage, one was a former physics phd student and the other was an
undergraduate studying math, cs, and physics.
9. I've met (and drank) with a few of the Simpson's writers. One looks and acts like Crusty the Clown. I can't remember his name, but the margaritas were good. The Mercedes SLR was nice too.
Something to think about if you don't get tenure....
10. The idea that humor and scientific creativity are related was raised by Arthur Koestler in his 1964 book "The Act of Creation". But more important - when I finish my phd., how do I become a
Simpson's writer?
11. I've got a PhD in math and I do improv comedy (and I've dabbled in sketch). So add me to your short list.
I actually just had a brief blog post on math and comedy at Math for Love.
[SciPy-User] Integrating a matrix exponential
Ryan Krauss ryanlists@gmail....
Tue Jul 27 10:23:10 CDT 2010
I am trying to discretize a state-space model. linalg.expm makes this
easy for the plant matrix. I am having trouble with the input matrix.
The continuous model is
xdot = A*x(t) + B*u
and the discrete time model will be
x(k+1) = G(T) *x(k) + H(T)*u(k)
Following Ogata's "Discrete-Time Control Systems" second edition, page 317:
G(T) = linalg.expm(A*T) #this works great
H(T) = int(expm(A*t)) dt from 0 to T then dot with B
That seems easy enough, and I think I want to use numeric integration
to approximate the integral of expm(A*t). But is there a method in
scipy.integrate for definite integration of a matrix of integrands?
And is this the best approach?
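A standard way to sidestep the quadrature altogether is the augmented-matrix (Van Loan) trick: exponentiate [[A, B], [0, 0]] * T once, and the top blocks of the result are exactly G(T) and H(T); when A is invertible there is also the closed form H = inv(A) (expm(A*T) - I) B. The sketch below uses a hand-rolled Taylor-series expm only so it runs without dependencies; in practice you would just pass the augmented matrix to scipy.linalg.expm.

```python
# Hand-rolled helpers so the sketch runs without numpy/scipy; in practice use
# scipy.linalg.expm on the augmented matrix instead.
def mat_mul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def expm(M, terms=30):
    """Taylor-series matrix exponential; fine only for small, well-scaled M."""
    n = len(M)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, M)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

def discretize(A, B, T):
    """Return G = expm(A*T) and H = (integral_0^T expm(A*t) dt) B using the
    augmented-matrix trick: expm([[A, B], [0, 0]] * T) = [[G, H], [0, I]]."""
    n, m = len(A), len(B[0])
    aug = [[A[i][j] * T for j in range(n)] + [B[i][j] * T for j in range(m)]
           for i in range(n)]
    aug += [[0.0] * (n + m) for _ in range(m)]
    E = expm(aug)
    return [row[:n] for row in E[:n]], [row[n:] for row in E[:n]]

# Double-integrator sanity check: closed forms are G = [[1, T], [0, 1]] and
# H = [[T**2 / 2], [T]].
G, H = discretize([[0.0, 1.0], [0.0, 0.0]], [[0.0], [1.0]], 0.1)
```

The same idea generalizes to other integrals of the matrix exponential by enlarging the augmented block matrix.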
More information about the SciPy-User mailing list
Volume between two shapes (one inscribed in the other)
November 26th 2011, 04:56 PM #1
Problem: If a square prism is inscribed in a right circular cylinder of radius 3 and height 6, the volume inside the cylinder but outside the prism is?
I know I need to subtract the volume of the square prism from the cylinder. The volume of the cylinder is 54pi. I am having trouble visualizing/drawing a diagram to get the side length of the square.
The answer is 61.6, and the side length of the square is 3 times the square root of 2. Thanks!
Re: Volume between two shapes (one inscribed in the other)
Did you draw a circle with a square in it? The radius of the circle is 3 and half the diagonal of the square is the same thing. What is the length of the side of the square in this
cross-sectional drawing? Hint: 45-45-90.
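Following the hint: in the cross-section the square's diagonal is a diameter of the circle, so the 45-45-90 triangle gives side = 6/sqrt(2) = 3*sqrt(2), and the leftover volume is 54*pi - (3*sqrt(2))^2 * 6 = 54*pi - 108, about 61.6. A quick check:

```python
import math

r, h = 3, 6
side = 2 * r / math.sqrt(2)        # 45-45-90: diagonal = 2r, legs equal
v_cylinder = math.pi * r**2 * h    # 54*pi
v_prism = side**2 * h              # (3*sqrt(2))**2 * 6 = 108
leftover = v_cylinder - v_prism    # about 61.6
```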
November 26th 2011, 06:31 PM #2
Cullen Primes
Cullen Primes: Definition and Status
News Flash!
On August 1, 2009, Magnum Bergman (working with PrimeGrid) found the largest known Cullen prime, 6679881*2^6679881+1
News Flash!
On April 20, 2009, Dennis R. Gesker (working with PrimeGrid) found the largest known Cullen prime, 6328548*2^6328548+1
Cullen Primes are Cullen numbers that are prime, i.e. primes of the form C(n) = n*2^n + 1.
C(n) is prime for n = 1, 141, 4713, 5795, 6611, 18496, 32292, 32469, 59656, 90825, 262419, 361275, 481899, 1354828, 6328548 and for no other n < 10,000,000; Chris Caldwell maintains the top 20 Cullen primes.
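The defining form can be sanity-checked for small indices with a few lines of Python. Miller-Rabin is probabilistic, so "prime" below means "strong probable prime"; for n = 141 the candidate n*2^n + 1 is a 45-digit number, which the test still handles instantly.

```python
import random

def cullen(n):
    # C(n) = n * 2^n + 1
    return n * 2**n + 1

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probable-prime test (probabilistic, not a proof)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False          # a is a witness: n is composite
    return True
```

Running it confirms the first two entries of the list above: C(1) = 3 and C(141) test prime, while e.g. C(2) = 9 and C(100) do not.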
A list of contributors to the Cullen project is here.
PrimeGrid is coordinating a distributed search for Cullen primes using BOINC.
To search for Cullen Primes of other bases, check out Günter Löh's Generalized Cullen Search for 3 <= b <= 100 and Daniel Hermle's Generalized Cullen Search for 101 <= b <= 200
Woodall numbers (W(n) = n*2^n - 1) are related to Cullen numbers and are sometimes called Cullen numbers of the second kind. Check here for the Woodall prime search.
If you have any questions about the Cullen Search, you can e-mail Mark Rodenkirch or Ray Ballinger
URL: http://www.prothsearch.net/cullen.html
Last Modified: March 19, 2013
Binary Numbers Comparison.
Hello all ,
I have two arbitrary length binary numbers,
ex: 100011111 & 1111001110001.
Is it possible to compare them using only their 0s & 1s representations?
i.e. I don't want to convert them to base 10 & compare them (probably using BigInteger).
If it's possible, how can I implement it?
"A single conversation with a wise man is better than ten years of study."
You don't have to convert to decimals. Every bit is 2^x, so you at least need to find the bits that are 1 and calculate what x is for each bit. Without converting to decimal, I think the faster way is to get the most significant bit that is 1 of each number and compare those. Oh, you do need to watch out for negatives (such as when the first bit is 1).
K. Tsang JavaRanch SCJP5 SCJD/OCM-JD OCPJP7
Heimdall Ksu wrote: Hello all, I have two arbitrary length binary numbers, ex: 100011111 & 1111001110001. Is it possible to compare them using only their 0s & 1s representations?
How are you storing these numbers? As Strings?
Heimdall Ksu wrote:i.e I don't want to convert them to base 10 & compare them...
In Java, all numbers are in base 10. Yes, you can use octal or hexadecimal literals, but their value will still be stored as a decimal (base 10).
In Java, all numbers are in base 10. Yes, you can use octal or hexadecimal literals, but their value will still be stored as a decimal (base 10).
Didn't you mean that all numbers (i.e. numeric primitives) are stored in binary and then by default formatted for output as decimal?
Yes, I am storing them as strings.
I padded them to equal length and then used the compare function in the String class and got the result I was looking for.
Thanks guys,
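The pad-then-compare trick works because, once lengths are equal, binary strings sort lexicographically exactly like the numbers they encode ('0' < '1' character-wise), so the String comparison is not luck. A sketch of the idea in Python, offered as pseudocode for the Java String.compareTo approach; stripping leading zeros first even removes the need to pad, since then a longer string is always the larger number:

```python
def compare_binary(a: str, b: str) -> int:
    """Return -1, 0, or 1 as the binary numeral a is <, ==, or > b."""
    a = a.lstrip("0") or "0"
    b = b.lstrip("0") or "0"
    if len(a) != len(b):                  # more significant bits wins
        return -1 if len(a) < len(b) else 1
    return (a > b) - (a < b)              # equal length: lexicographic

assert compare_binary("100011111", "1111001110001") == -1
```

This only handles non-negative numerals; a sign bit or two's-complement encoding would need the extra care mentioned above.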
I suspect String.compareTo works as much from luck as anything else
Chemistry HW Problem [Archive] - FinHeaven
I got a B+ in my Honors Chemistry class....but that was over 20 years ago. I just remember the girl next to me in class was hot. :lol:
i got a b in chemistry this first semester, i just have a tight teacher who gives a bunch of easy extra credit like cross words so i couldnt help you
wow, i work as a chemist (medical tech) and i dont remember all that stuff.
Do what I did, drop Chemistry and take Astronomy or another science. Chemistry is very tough, you won't remember a thing when your done. A total waste of 2 sesmesters..
Im in Chemistry now, but its high school chemistry so im not sure if i could help you.
Ehh, too late, I just took the test and flunked, oh well.
I tried .03214 x 6.02e23 and still got the same answer. It's not a big deal though, but thanks for trying.
Mount Vernon, NY Statistics Tutor
Find a Mount Vernon, NY Statistics Tutor
...While there, I tutored students in everything from counting to calculus, and beyond. I then earned a Masters of Arts in Teaching from Bard College in '07. I've been tutoring for 8+ years, with
students between the ages of 6 and 66, with a focus on the high school student and the high school curriculum.
26 Subjects: including statistics, physics, calculus, geometry
...I don't believe in lecturing too much--you will learn the material much better when you are talking out loud through examples with some guidance along the way. I like to give clear
explanations for each important concept and do examples right after. Most importantly, I am personable and easy to talk to; Lessons are thorough but generally informal.
10 Subjects: including statistics, calculus, physics, geometry
...I continue to actively use Russian in my everyday life. Besides, my daughter attends the School at Russian Mission in the United Nations (she is in 7th grade), and I help her with her
homework, including Russian language and literature. I took a Linear Algebra course in Moscow Institute of Physics and Technology and used it for my Master's thesis in Optimization.
24 Subjects: including statistics, physics, GRE, Russian
...Looking forward to working with you. -RonaldI was exposed extensively to SAS in my statistics courses, performing numerous statistical analysis in my projects. I have dealt with both the UNIX
and Windows version of SAS. As a full time Statistician, I continue to use SAS almost on a daily basis.
18 Subjects: including statistics, calculus, algebra 1, algebra 2
...I have tutored many people in the use of statistical tools for analysis like SPSS. I'm a certified Six Sigma Master Black Belt. In Six Sigma we use statistics to conduct all my data analysis.
4 Subjects: including statistics, writing, SPSS, biostatistics
Posts by Erin
Total # Posts: 578
27-f(x)=1/4x^2-2x-12. Can you show me how to find the vertex and x intercepts, step-by-step? I tried to put it in standard form, with (1/2x-1)^2-13, but my book said that's not the answer. Thanks!
Where did the 1 in the standard form come from? How did the 5/4 become 1 when it was moved outside of the parentheses?
I have to find the vertex, axis of symmetry, and x intercepts of x^2-x+(5/4). I put it in standard form as -(x+-.5)^2+1.5. Thus, the vertex would be .5, 1.5. But the book's answer is 1/2, 1. How did
they get this?
17-Identify the vertex, axis of symmetry and x-intercept(s) of F(x)=(x+5)^2-6. I have no idea how the x intercepts are -5 +/- sq. root 6 and 0. Can you show me how this came to be? I factored it so it would be x^2+10x+19, but there are no common factors in ten and 19. Can yo...
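All three vertex questions in this list yield to the same two tools: the vertex sits at h = -b/(2a), and the intercepts come from setting the function to zero. In particular, completing the square on x^2 - x + 5/4 gives (x - 1/2)^2 - 1/4 + 5/4 = (x - 1/2)^2 + 1, which is where the book's "1" comes from, and (x + 5)^2 - 6 = 0 gives x = -5 ± sqrt(6) directly, with no factoring needed. A quick numeric check of all three:

```python
import math

def vertex(a, b, c):
    # Vertex of a*x^2 + b*x + c at h = -b / (2a).
    h = -b / (2 * a)
    return h, a * h * h + b * h + c

# f(x) = (1/4)x^2 - 2x - 12  ->  vertex (4, -16), intercepts 12 and -4
assert vertex(0.25, -2, -12) == (4.0, -16.0)
assert all(abs(0.25 * x * x - 2 * x - 12) < 1e-9 for x in (12, -4))

# f(x) = x^2 - x + 5/4 = (x - 1/2)^2 + 1  ->  vertex (1/2, 1), no real intercepts
assert vertex(1, -1, 1.25) == (0.5, 1.0)

# F(x) = (x + 5)^2 - 6 = 0  ->  x = -5 ± sqrt(6)
roots = (-5 + math.sqrt(6), -5 - math.sqrt(6))
assert all(abs((x + 5) ** 2 - 6) < 1e-9 for x in roots)
```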
I tried googling this, but I couldn't find it. What is the name of the journalist (who was either British or American...I can't remember) who snuck into Saudi Arabia in the early twentieth century
and disguised himself as a Muslim since no non-Muslims were allowed in S...
If a/4=b/7, then a/b=?
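A proportion like this pins down only the ratio: write a = 4t and b = 7t for a common multiplier t, so a/b = 4/7 no matter what t is. A quick check with one arbitrary t:

```python
from fractions import Fraction

t = 3                      # any nonzero t works
a, b = 4 * t, 7 * t        # then a/4 == b/7 == t
assert Fraction(a, 4) == Fraction(b, 7)
assert Fraction(a, b) == Fraction(4, 7)
```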
7th grade
Its Oh Lay
Chem - Stoichiometry
How many grams of sodium carbonate is required for a complete reaction with 1.00g of calcium chloride dihydrate? Help!
Chem - Stoichiometry
How many grams of sodium carbonate is required for a complete reaction with 1.00g of calcium chloride dihydrate? Help!
I know...I realized that right after I accidentally posted - that is why there is the CORRECT post and subject right after this one called Stoichiometry. THANKS FOR THE HEADS UP, THOUGH.
How many grams of sodium carbonate is required to have a complete reaction with 1.00g of calcium chloride dihydrate? I can't figure this out...help!
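Assuming the usual double-displacement reaction CaCl2·2H2O + Na2CO3 → CaCO3 + 2 NaCl + 2 H2O (a 1:1 mole ratio; the problem statement doesn't give the reaction, so this is an assumption) and approximate atomic masses, the arithmetic is one mole-ratio conversion, giving roughly 0.72 g of sodium carbonate:

```python
# Assumed reaction: CaCl2·2H2O + Na2CO3 -> CaCO3 + 2 NaCl + 2 H2O (1:1 ratio).
# Atomic masses are approximate.
Ca, Cl, Na, C, O, H = 40.078, 35.453, 22.990, 12.011, 15.999, 1.008

M_hydrate = Ca + 2 * Cl + 2 * (2 * H + O)   # CaCl2·2H2O, about 147.0 g/mol
M_soda = 2 * Na + C + 3 * O                 # Na2CO3, about 106.0 g/mol

grams = 1.00 / M_hydrate * M_soda           # 1:1 moles -> about 0.721 g
```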
1.66 m/s^2! Thanks
Liz rushes down onto a subway platform to find her train already departing. She stops and watches the cars go by. Each car is 8.60 m long. The first moves past her in 1.75 s and the second in 1.17 s.
Find the constant acceleration of the train. I know how to find average accel...
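The trick is that each car gives one equation of the form x = v0*t + (1/2)a*t^2, measured from the instant the front of the first car reaches the observer: 8.60 m after 1.75 s (first car past) and 17.20 m after 1.75 + 1.17 s (second car past). Two equations, two unknowns (v0 and a); solving gives a ≈ 1.67 m/s^2 (the "1.66" answer above is presumably a rounding difference):

```python
t1 = 1.75
t2 = 1.75 + 1.17
L = 8.60
# v0*t1 + 0.5*a*t1^2 = L    (first car fully past)
# v0*t2 + 0.5*a*t2^2 = 2L   (both cars fully past)
det = t1 * 0.5 * t2**2 - t2 * 0.5 * t1**2        # Cramer's rule
v0 = (L * 0.5 * t2**2 - 2 * L * 0.5 * t1**2) / det
a = (t1 * 2 * L - t2 * L) / det                  # about 1.67 m/s^2
```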
A motorist travels at a constant speed of 34.0 m/s through a school zone; exceeding the posted speed limit. A policeman, waits 7.0 s before giving chase at an acceleration of 3.9 m/s2. (a) Find the
time required to catch the car, from the instant the car passes the policeman. ...
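Measuring T from the instant the car passes (as the problem asks), the car is at v*T while the cruiser, which starts 7.0 s later, is at (1/2)(3.9)(T - 7)^2. Setting the positions equal gives a quadratic in T whose physical root (the one after the cruiser has started) is about 29.8 s:

```python
import math

v, delay, ap = 34.0, 7.0, 3.9
# 0.5*ap*(T - delay)^2 = v*T  ->  0.5*ap*T^2 - (ap*delay + v)*T + 0.5*ap*delay^2 = 0
A = 0.5 * ap
B = -(ap * delay + v)
C = 0.5 * ap * delay**2
T = (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)   # larger root: about 29.8 s
```

The smaller root of the quadratic falls before the cruiser even starts, so it is discarded.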
it is everything that occupies a place in the space
Social Studies
how did geography affect the development of African kingdoms?
5.03 m/s^2
14.84 m/s^2
mitochondria - hydraulic dam; ribosomes - small shops; nucleus - town hall; endoplasmic reticulum - special carts; golgi apparatus - post office; protein - widget; cell membrane - fence; lysosomes - scrap yard; nucleolus - carpenter's union
Math- calculus
Given f(x) = x^3 - x^2 - 4x + 4. Find the zeros of f. Write an equation of the line tangent to the graph of f at x = -1. The point (a,b) is on the graph of f and the line tangent to the graph at (a,b) passes through the point (0,-8), which is not on the graph of f. Find a and b.
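One workable route: f factors as (x - 1)(x - 2)(x + 2), the tangent at x = -1 has slope f'(-1) = 1 through (-1, 6), and for the last part the tangent-through-(0, -8) condition reads b + 8 = f'(a)·a with b = f(a), which simplifies to 2a^3 - a^2 - 12 = 0 and has a = 2 (so b = 0) as its only real root. A quick numeric verification of each claim:

```python
def f(x):
    return x**3 - x**2 - 4 * x + 4

def fp(x):
    return 3 * x**2 - 2 * x - 4   # derivative f'(x)

# zeros: f factors as (x - 1)(x - 2)(x + 2)
assert all(f(x) == 0 for x in (1, 2, -2))

# tangent at x = -1: through (-1, 6) with slope 1, i.e. y = x + 7
assert (f(-1), fp(-1)) == (6, 1)

# tangent at (a, b) through (0, -8): f(a) + 8 = a * f'(a); a = 2 works, b = f(2) = 0
a = 2
assert f(a) + 8 == a * fp(a)
assert f(a) == 0
```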
college physics
Two small spheres, each with mass m=3g and charge q, are suspended from a point by threads of length L=0.22m. What is the charge on each sphere if the threads make an angle theta of 15 degrees with
the vertical?
college physics
There are four charges, each with a magnitude of 2.0uC. Two are postive and two are negative. The charges are fixed to the corners of a 3.0m square, one to each corner, in such a way that the net
force on any charge is directed toward the center of the square. Find the magnitu...
college physics
Two small spheres, each with mass m=3g and charge q, are suspended from a point by threads of length L=0.22m. What is the charge on each sphere if the threads make an angle theta of 15 degrees with
the vertical?
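For the hanging-spheres problem, equilibrium of either sphere means the thread tension balances gravity vertically and the Coulomb repulsion horizontally, so F = m·g·tan(theta) with separation r = 2·L·sin(theta). With the assumed constants g = 9.8 m/s^2 and k = 8.99e9 N·m^2/C^2, q = sqrt(F·r^2/k) comes out near 0.11 microcoulombs:

```python
import math

m, L, theta = 0.003, 0.22, math.radians(15)
g, k = 9.8, 8.99e9                    # assumed constants

r = 2 * L * math.sin(theta)           # separation between the spheres
F = m * g * math.tan(theta)           # horizontal Coulomb force in equilibrium
q = math.sqrt(F * r**2 / k)           # about 1.1e-7 C (0.11 microcoulombs)
```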
Guess my rule? Write a function for the rule. Input: 6 7 8 9; Output: 5 5.5 6 6.5
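Each output is two more than half the input, i.e. y = x/2 + 2 (check: 6/2 + 2 = 5 and 7/2 + 2 = 5.5):

```python
def rule(x):
    # y = x/2 + 2 matches every given input/output pair.
    return x / 2 + 2

assert [rule(x) for x in (6, 7, 8, 9)] == [5.0, 5.5, 6.0, 6.5]
```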
Explain how the African European and the Native American cultures blended to form a latin american culture?
I am having trouble writing a topical outline and expanding it to a sentence for this paper: Marie Curie is best known as the discoverer of the radioactive elements polonium and radium, and is the
first person to earn the honor of two Nobel prizes. Her work not only influenced...
Thank you Ms. Sue!
You're not dividing IN half, but BY half. I did it using fractions, because half is .5 or 1/2. I have no idea how he got 12 either!
Divide 30 by half and add 10. What do you get? I think it is 70, but my friend disagrees. He thinks it's 12.
But how did you figure it out like that.
There is a circle that is cut in three slices; one slice is (in fractions) 1/3 and one slice is 1/5. What fraction is the other slice? (It is a whole circle.) Can you please tell me how I can figure this out on my own.
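The three slices have to add up to one whole circle, so the missing slice is 1 - 1/3 - 1/5; with a common denominator of 15 that is 15/15 - 5/15 - 3/15 = 7/15:

```python
from fractions import Fraction

# The slices of a whole circle sum to 1.
missing = 1 - Fraction(1, 3) - Fraction(1, 5)
assert missing == Fraction(7, 15)
```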
Pre Algebra
(h^4)^4; (-6x^7)(-9x^12); the answer is -63x^19
Jake was new to Fair Ridge Elementary, and he was big. Jake bullied Tommy when his jeans jacket was too small for him. It stretched tight across his back and the sleeves were too short for his arms.
And, Jake is taking his things from the other students. But now Jake and Tommy...
Secret Pal Surprises Characters and Setting? It is for my book report. Please tell me the characters (behavioral description) and where's the setting.
Eng 102
How can I support my thesis with compelling arguments and counterarguments? Here is my thesis statement? Although many coal industries argue that mountaintop removal coal mining is more cost
effective than underground coal mining, MTR is destroying the beautiful mountains and ...
y^3 - 2y^2 + 3y - 4 for y = 5; (-9)(-9)(-9); m·m·m; 7·a·a·b·b·b
9th grade Pre Algebra
y^3 - 2y^2 + 3y - 4 for y = 5
9th grade Pre Algebra
I need help with exponents: -n^6 for n = 2
9th Pre Algebra
I need help with exponents: -n^6 for n = 2
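On the -n^6 question, the exponent binds before the negation, so -n^6 at n = 2 is -(2^6) = -64, whereas (-n)^6 would be +64. Reading the other exercise as evaluating y^3 - 2y^2 + 3y - 4 at y = 5 (an assumption, since the post is garbled) gives 86:

```python
# Exponent binds tighter than unary minus: -n**6 means -(n**6).
n = 2
assert -n**6 == -64
assert (-n) ** 6 == 64

# y^3 - 2y^2 + 3y - 4 at y = 5 (assumed reading of "for y+5"):
y = 5
assert y**3 - 2 * y**2 + 3 * y - 4 == 86
```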
How many hours is 150 minutes, for the reading log?
thank you!
You must be at least 48 inches tall to ride an amusement park ride, and your little sister is 30 inches tall. How many inches must she grow before she may ride the ride? You need no more than 3,000
calories in a day. You consumed 840 calories at breakfast and 1,150 calories at...
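Both are one-step inequalities. For the height: g + 30 >= 48, so g >= 18 inches. For the calories (assuming the truncated question asks how many more you may consume): 840 + 1150 + c <= 3000, so c <= 1010:

```python
# Height: 30 + g >= 48  ->  g >= 18 inches
g = 48 - 30
assert g == 18

# Calories (assumed question: how many more may you consume?):
# 840 + 1150 + c <= 3000  ->  c <= 1010
c = 3000 - 840 - 1150
assert c == 1010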
Wilders Preparatory
What are the modifiers for the word flexible?
biology repost
Predict what would happen to a red blood cell placed in each of the following external solutions. Would it shrink, swell, or not change its volume? Explain each answer in ONE sentence. a. 150 mM NaCl
b. 100 mM NaCl plus 50 mM albumin
Predict what would happen to a red blood cell placed in each of the following external solutions. Would it shrink, swell, or not change its volume? Explain each answer in ONE sentence. a. 150 mM NaCl
b. 100 mM NaCl plus 50 mM albumin
Predict what would happen to a red blood cell placed in each of the following external solutions. Would it shrink, swell, or not change its volume? Explain each answer in ONE sentence. a. 150 mM NaCl
b. 100 mM NaCl plus 50 mM albumin
Do you have any connections with the myth you chose? Lack of connections? Discuss (write) about your connections.
Myth Literature Response Questions about Pandora's Box? I need help?!? What happened in the story? (What is the plot?) Write a summary with rising action, climax and resolution.
Myth Literature Response Questions about Pandora's Box? I need help?!?
chem post
what are the point groups for mer- Co Cl3 (CO3) and fac- Co Cl3 (CO3)
are you good with times? I am struggling with studying times, so help me: The first walker to finish the 12 miles crossed the finish line 3 hours after the official starting time of 10:00 a.m. At
what time did the first walker finish? The judges left the finish line at 5:30...
what are the symmetry elements for: cis-CoCl2(NH3)4 and trans-CoCl2(NH3)4 would it be C2 S2?
Choosing High School Courses
A good choice is to choose easy high school courses: Algebra 1, PE, and home ec (cooking class), etc. English is the best for writing strategy, and reading is kind of good.
chem please
yes. The solubility of KNO3(s) at 0.0 degrees Celsius is 15 g KNO3/100 g H2O. Also, a 320 g sample of a saturated solution of KNO3(s) in water is prepared at 25.0 degrees Celsius (this is part of the info
given). I don't know how to solve this.
chem please
If 60 g water is evaporated from the solution at the same time as the temperature is reduced from 25.0 to 0.0 degree celsius, what mass of KNO3(s) will recrystallize?
chemistry: again
What is the predominant type of intermolecular force in I2 (iodine)?
Chemistry: help please
What is the millimolar solubility of oxygen gas, O2, in water at 20 degrees Celsius, if the pressure of oxygen is 1.00 atm?
science please!
Estimate the value of the cell potential for the following rxn: MnO2(s) + 4H3O+(pH=3.8) + 2Ag(s) --> Mn2+(aq, 0.1M) + 6H2O(l). I got Ecell = 0.48 V, delta G° = -92640 J/mol, and delta G = -107484. Is
this correct?
how does the life cycle of hatena compare to the origin of eukaryotic cells as noted in Margulis Endosymbiotic Theory? something about how one life form engulfs another and incorporates it, evolving
until it becomes dependent on it and has created an entirely new life form?
An electrochemical cell uses Al and Al3+ in one compartment and Ag and Ag+ in the other. Write a balanced equation for the rxn that will occur spontaneously in this cell. Would that just be Al + Al3+ +
2e- --> Ag + Ag+?
Algebra II- please help!
Thank you, Reiny. My teacher taught me something very similar to your method and now I understand it. Thank you for your help! : )
Algebra II- please help!
Thank you so much, Damon! You really helped me out! : )
Algebra II- please help!
I just cannot seem to be able to grasp the concept of solving polynomial inequalities. Can someone please explain, step by step, how to solve them? Here's a problem I can't solve. Please use this as
an example: (x-2)(x-5)<0 I cannot thank you enough for helping me w...
what are the three parts of the cell theory?
what are the four properties that can be used to classify elements?
algebra 2
how do you solve this? -2√7 - 4√7
history HHHEEELLLPPP!!!
What is the society and environment of the coastal plains?
What is a summary of observed behavior? Maybe I'm thinking about it too much and it's simple, but I'm lost! HELP
Why did the cowboys go west?
history HHHEEELLLPPP!!!
did the railroad workers go west for job opportunities and technological advances?
they went west for job opportunities and for technological advances?
what are two reasons why the railroad workers went west?
history HHHEEELLLPPP!!!
history HHHEEELLLPPP!!!
Why did the miners move west? 2 reasons
Evaluate the following limit: lim (x→5) (x^2 + 3x - 40) / (3x + 10 - x^2)
how does yeast cause bread to rise?
Consider the following reaction: 2HBr(g)---> H_2(g) + Br_2(g) B)In the first 15.0s of this reaction, the concentration of HBr dropped from 0.510M to 0.455M. Calculate the average rate of the reaction
in this time interval. C)If the volume of the reaction vessel in part (b) ...
Need help with symbolism project. Thank you for your help. I need help with explaining the symbolism of the ax from 'The Black Cat.' Do you know what symbolism means?
A closed surface encloses a net charge of 0.00000210 C. What is the net electric flux through the surface?
A point charge of -4.00nC is at the origin, and a second point charge of 6.00nC is on the axis at x = 0.800m . Find the magnitude and direction of the electric field at each of the following points
on the axis. x_2 = 17.0cm, x_3 = 1.30m , x_4=-17.0cm
history HHHEEELLLPPP!!!
Thank you, Ms. Sue!!! It helped a lot.
history HHHEEELLLPPP!!!
How did the rich Southern planters regain control in the south after the civil war?
history HHHEEELLLPPP!!!
I couldn't find it in my textbook.
history HHHEEELLLPPP!!!
can you help?
history HHHEEELLLPPP!!!
How did the rich southern planters regain control in the south after the civil war???
Thank you, Ms. Sue!!!!
How did congress try to counteract the Jim Crow Laws?
Benzene has a heat of vaporization of 30.72kJ/mol and a normal boiling point of 80.1 degrees C. At what temperature does benzene boil when the external pressure is 470 torr?
Calculate the amount of heat required to completely sublime 24.0 g of solid dry ice CO_2 at its sublimation temperature. The heat of sublimation for carbon dioxide is 32.3 kJ/mol.
what does E mean? Could you explain it another way because I'm confused.
I tried that and got 1.8x10^8 but it wasn't the right answer
Calculate the electrical attraction that a proton in a nucleus exerts on an orbiting electron if the two particles are 1.13×10^-10 m apart.
Solve for w: P=2L+2W
Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | Next>> | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Erin&page=5","timestamp":"2014-04-20T19:23:22Z","content_type":null,"content_length":"25966","record_id":"<urn:uuid:e3d5edec-e701-4659-b408-b585e6ba4432>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00094-ip-10-147-4-33.ec2.internal.warc.gz"} |
Strafford, PA Calculus Tutor
Find a Strafford, PA Calculus Tutor
...When someone can understand how a concept is working, then they can apply it to solve a whole range of problems, and most memorization will be unnecessary. This approach helps students
achieve a higher understanding of these subjects and promotes critical thinking. Let me show y...
16 Subjects: including calculus, Spanish, physics, algebra 1
...I taught introductory and intermediate physics classes at New College, Duke University and RPI. Some years ago I started to tutor one-on-one and have found that, more than classroom
instruction, it allows me to tailor my teaching to students' individual needs. Their success becomes my success.
21 Subjects: including calculus, reading, writing, algebra 1
...I have been tutoring for pay for over five years, as well as volunteering for countless years previous. Although most of the students I tutor are at a level between prealgebra and precalculus,
I am very well versed in higher mathematics. I am friendly and considerate of all learning styles and abilities.
11 Subjects: including calculus, geometry, algebra 1, algebra 2
No one has more experience. No one has more expertise. Over the last 15 years I've worked for several different test-prep companies.
23 Subjects: including calculus, English, geometry, statistics
...I also know classical Russian authors, including Tolstoy, Dostoevsky, Chekhov and others. This course does not have a set curriculum. It often is called "World Cultures," a survey course that
examines the great civilizations of the past (Mesopotamia, China, India, Egypt, Meso-America, Greece, R...
32 Subjects: including calculus, English, geometry, biology
Related Strafford, PA Tutors
Strafford, PA Accounting Tutors
Strafford, PA ACT Tutors
Strafford, PA Algebra Tutors
Strafford, PA Algebra 2 Tutors
Strafford, PA Calculus Tutors
Strafford, PA Geometry Tutors
Strafford, PA Math Tutors
Strafford, PA Prealgebra Tutors
Strafford, PA Precalculus Tutors
Strafford, PA SAT Tutors
Strafford, PA SAT Math Tutors
Strafford, PA Science Tutors
Strafford, PA Statistics Tutors
Strafford, PA Trigonometry Tutors
Nearby Cities With calculus Tutor
Broad Axe, PA calculus Tutors
Center Square, PA calculus Tutors
Charlestown, PA calculus Tutors
Cynwyd, PA calculus Tutors
Gulph Mills, PA calculus Tutors
Ithan, PA calculus Tutors
Miquon, PA calculus Tutors
Oakview, PA calculus Tutors
Penllyn, PA calculus Tutors
Radnor, PA calculus Tutors
Rose Tree, PA calculus Tutors
Saint Davids, PA calculus Tutors
Southeastern calculus Tutors
Valley Forge calculus Tutors
Wayne, PA calculus Tutors | {"url":"http://www.purplemath.com/Strafford_PA_calculus_tutors.php","timestamp":"2014-04-17T13:05:47Z","content_type":null,"content_length":"24022","record_id":"<urn:uuid:bd8bc26d-b6e3-40ff-895a-7ef15126b923>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00436-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Perpendicular force calculated from torque and point of application?
I hope I managed to post my question the right place!
I have a body consisting of a bunch of mass-points in ℝ^3, and when torque is applied to this body, I'm interested in finding the force that must have caused the torque based on the
point of application and the torque vector, which are given.
I see the form
τ = r × F
quite often, such as it is seen in
(which offers a nice overview btw).
I understand that it is not possible to calculate F, but I find it hard to believe that F⊥ is impossible to calculate, since it should be unique, yet I don't see such an equation
anywhere. Any pointers would be greatly appreciated! | {"url":"http://www.physicsforums.com/showpost.php?p=4266176&postcount=1","timestamp":"2014-04-17T00:56:34Z","content_type":null,"content_length":"9313","record_id":"<urn:uuid:d028e182-1c3d-4eba-8332-17c019d33f00>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00349-ip-10-147-4-33.ec2.internal.warc.gz"} |
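One possible pointer, sketched here as an aside with arbitrary example vectors: from τ = r × F and the BAC-CAB identity (A × B) × C = B(A·C) - A(B·C), one gets τ × r = F(r·r) - r(r·F), so the unique perpendicular component is F⊥ = (τ × r)/|r|². A quick numerical check in plain Python:

```python
# Pure-Python 3-vector helpers (no external libraries needed).
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

# Arbitrary example vectors (illustrative only).
r = (1.0, 2.0, 0.5)       # point of application
F = (0.3, -1.0, 2.0)      # some applied force
tau = cross(r, F)         # torque: tau = r x F

# Recover the component of F perpendicular to r from tau and r alone:
F_perp = tuple(c / dot(r, r) for c in cross(tau, r))

# It matches the direct projection F - (F.r/|r|^2) r, and is orthogonal to r:
F_perp_direct = tuple(f - dot(F, r) / dot(r, r) * ri for f, ri in zip(F, r))
print(all(abs(a - b) < 1e-12 for a, b in zip(F_perp, F_perp_direct)))  # True
```

Only F⊥ is recoverable this way; any component of F parallel to r is annihilated by the cross product and is lost.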
The European Mathematical Society
The Hamilton Mathematics Institute, Trinity College Dublin, Ireland
Short description of the event:
The workshop is on "Homological invariants in low-dimensional topology and geometry" and will consist of a two-day mini-course, August 26-27, followed by a three-day lecture series, August 28-30.
This year's topic for the workshop is "Homological invariants in low-dimensional topology and geometry".
This year's minicourse will be given by Jacob Rasmussen (Cambridge) and Liam Watson (Glasgow) on "Floer homology and low-dimensional topology".
The minicourse will run August 26-27 and consist of a two-day series of lectures and discussions. The target audience is graduate students and junior researchers.
Lecture Series:
The lecture series will be August 28-30, speakers to be announced. There will also be scheduled discussion and problem sessions. | {"url":"http://euro-math-soc.eu/conferences.html?page=7","timestamp":"2014-04-19T14:34:25Z","content_type":null,"content_length":"46604","record_id":"<urn:uuid:094f1bcc-dd70-4c15-8367-e11c1bc57226>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00292-ip-10-147-4-33.ec2.internal.warc.gz"} |
ChiliMath - Free Math Tutor - ChiliMATH is here!
ChiliMATH is here! This site contains free online math tutorials created to supplement class lectures and to guide students in solving math problems in a straightforward way. My goal is
for students to build confidence as they develop their own mathematical skills and knowledge in the process. One secret to succeeding in math is doing a lot of practice. ChiliMATH offers
many worked examples which can be printed easily for offline use. I hope that you find these resources helpful in your studies.
URL: http://www.chilimath.com
Title: ChiliMATH - Free Math Tutor
Description: Provides free algebra tutorials with many handwritten examples. Topics include equations, radicals, lines, logarithms, matrix, polynomials, and more.
Category: Equations - Lines - Matrix - Logarithms - Polynomials - Radicals - Free Math Tutor - Many Handwritten Examples | {"url":"http://www.directoryofscience.com/site/4546806","timestamp":"2014-04-17T13:09:38Z","content_type":null,"content_length":"56161","record_id":"<urn:uuid:61a15eef-4d82-4df1-a01e-9d8f813f1c63>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00439-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math errors in python
Alex Martelli aleaxit at yahoo.com
Sun Sep 19 18:41:49 CEST 2004
Chris S. <chrisks at NOSPAM.udel.edu> wrote:
> Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this
Of course it doesn't. What a silly assertion.
> arithmetic is meant for. Any decimal can be represented by a fraction,
And pi can't be represented by either (if you mean _finite_ decimals and
> yet not all fractions can be represented by decimals. My point is that
> such simple accuracy should be supported out of the box.
In Python 2.4, decimal computations are indeed "supported out of the
box", although you do explicitly have to request them (the default
remains floating-point). In 2.3, you have to download and use any of
several add-on packages (decimal computations and rational ones have
very different characteristics, so you do have to choose) -- big deal.
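A minimal illustration of the point (shown with the stdlib decimal module as it exists today; in 2.4 it had just landed):

```python
from decimal import Decimal, getcontext

getcontext().prec = 28                     # default working precision
print(0.1 + 0.2)                           # 0.30000000000000004 (binary float)
print(Decimal('0.1') + Decimal('0.2'))     # 0.3  (exact in base 10)
print(Decimal('12.10') + Decimal('8.30'))  # 20.40
```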
> > While I'd love to compute with all those numbers in infinite
> > precision, we're all stuck with FINITE sized computers, and hence with
> > the inaccuracies of finite representations of numbers.
> So are our brains, yet we somehow manage to compute 12.10 + 8.30
> correctly using nothing more than simple skills developed in
Using base 10, sure. Or, using fractions, even something that decimals
would not let you compute finitely, such as 1/7+1/6.
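For instance, sketched with the stdlib fractions module (a later addition to the standard library; at the time, gmpy or similar add-ons played this role):

```python
from fractions import Fraction

# 1/7 + 1/6 has no finite decimal expansion, but as a rational it is exact:
print(Fraction(1, 7) + Fraction(1, 6))  # 13/42
```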
> grade-school. You could theoretically compute an infinitely long
> equation by simply operating on single digits,
Not in finite time, you couldn't (excepting a few silly cases where the
equation is "infinitely long" only because of some rule that _can_ be
finitely expressed, so you don't even have to LOOK at all the equation
to solve [which is what I guess you mean by "compute"...?] it -- if you
have to LOOK at all of the equation, and it's infinite, you can't get
done in finite time).
> yet Python, with all of
> its resources, can't overcome this hurtle?
The hurdle of decimal arithmetic, you mean? Download Python 2.4 and
play with decimal to your heart's content. Or do you mean fractions?
Then download gmpy and ditto. There are also packages for symbolic
computation and even more exotic kinds of arithmetic.
In practice, with the sole exception of monetary computations (which may
often be constrained by law, or at the very least by customary
practice), there is no real-life use in which the _accuracy_ of floating
point isn't ample. There are nevertheless lots of traps in arithmetic,
but switching to forms of arithmetic different from float doesn't really
make all the traps magically disappear, of course.
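A small sketch of how the traps merely move around rather than disappear: decimals cure the 0.1-style surprises but still truncate 1/3, while rationals handle 1/3 exactly (and no finite representation captures sqrt(2) or pi):

```python
from decimal import Decimal
from fractions import Fraction

third = Decimal(1) / Decimal(3)    # 0.3333... truncated at the working precision
print(third * 3 == Decimal(1))     # False: 0.9999...9 is not 1
print(Fraction(1, 3) * 3 == 1)     # True: rationals are exact here
```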
> However, I understand Python's limitation in this regard. This
> inaccuracy stems from the traditional C mindset, which typically
> dismisses any approach not directly supported in hardware. As the FAQ
Ah, I see, a case of "those who can't be bothered to learn a LITTLE
history before spouting off" etc etc. Python's direct precursor, the
ABC language, used unbounded-precision rationals. As a result (obvious
to anybody who bothers to learn a little about the inner workings of
arithmetic), the simplest-looking string of computations could easily
consume all the memory at your computer's disposal, and then some, and
apparently unbounded amounts of time. It turned out that users object,
most of the time, to having some apparently trivial computation take
hours, rather than seconds, in order to be unboundedly precise rather
than, say, precise to "just" a couple hundred digits (far more digits
than you need to count the number of atoms in the Galaxy). So,
unbounded rationals as a default are out -- people may sometimes SAY
they want them, but in fact, in an overwhelming majority of the cases,
they actually do not (oh yes, people DO lie, first of all to
As for decimals, that's what a very-high level language aiming for a
niche very close to Python used from the word go. It got started WAY
before Python -- I was productively using it over 20 years ago -- and
had the _IBM_ brand on it, which at the time pretty much meant the
thousand-pounds gorilla of computers. So where is it now, having had
all of these advantages (started years before, had IBM behind it, AND
was totally free of "the traditional C mindset", which was very far from
traditional at the time, particularly within IBM...!)...?
Googlefight is a good site for this kind of comparisons... try:
and you'll see...:
Number of results on Google for the keywords python and rexx:
python: 10 300 000 results
rexx: 419 000 results
The winner is: python
Not just "the winner", an AMAZING winner -- over TWENTY times more
popular, despite all of Rexx's advantages! And while there are no doubt
many fascinating components to this story, a key one is among the pearls
of wisdom you can read by doing, at any Python interactive prompt:
>>> import this
and it is: "practicality beats purity". Rexx has always been rather
puristic in its adherence to its principles; Python is more pragmatic.
It turns out that this is worth a lot in the real world. Much the same
way, say, C ground PL/I into the dust. Come to think of it, Python's
spirit is VERY close to C (4 and 1/2 half of the 5 principles listed as
"the spirit of C" in the C ANSI Standard's introduction are more closely
followed by Python than by other languages which borrowed C's syntax,
such as C++ or Java), while Rexx does show some PL/I influence (not
surprising for an IBM-developed language, I guess).
Richard Gabriel's famous essay on "Worse is Better", e.g. at
<http://www.jwz.org/doc/worse-is-better.html>, has more, somewhat bitter
reflections in the same vein.
Python never had any qualms in getting outside the "directly supported
in hardware" boundaries, mind you. Dictionaries and unbounded precision
integers are (and have long been) Python mainstays, although neither the
hardware nor the underlying C platform has any direct support for
either. For non-integer computations, though, Python has long been well
served by relying on C, and nowadays typically the HW too, to handle
them, which implied the use of floating-point; and leaving the messy
business of implementing the many other possibly useful kinds of
non-integer arithmetic to third-party extensions (many in fact written
in Python itself -- if you're not in a hurry, they're fine, too).
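For instance, trivially (in any Python of that era onward), integers outgrow the hardware word size transparently:

```python
# Unbounded-precision integers: no word-size overflow, ever.
n = 2 ** 100
print(n)           # 1267650600228229401496703205376
print(n + 1 > n)   # True: no silent wraparound
```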
With Python 2.4, somebody finally felt enough of an itch regarding the
issue of getting support for decimal arithmetic in the Python standard
library to go to the trouble of scratching it -- as opposed to just
spouting off on a mailing list, or even just implementing what they
personally needed as just a third-party extension (there are _very_ high
hurdles to jump, to get your code into the Python standard library, so
it needs strong motivation to do so as opposed to just releasing your
own extension to the public).
> states, this problem is due to the "underlying C platform". I just find
> it funny how a $20 calculator can be more accurate than Python running
> on a $1000 Intel machine.
You can get a calculator much cheaper than that these days (and "intel
machines" not too out of the mainstream for well less than half, as well
as several times, your stated price). It's pretty obvious that the
price of the hardware has nothing to do with that "_CAN_ be more
accurate" issue (my emphasis) -- which, incidentally, remains perfectly
true even in Python 2.4: it can be less, more, or just as accurate as
whatever calculator you're targeting, since the precision of decimal
computation is one of the aspects you can customize specifically...
More information about the Python-list mailing list | {"url":"https://mail.python.org/pipermail/python-list/2004-September/246719.html","timestamp":"2014-04-21T08:40:41Z","content_type":null,"content_length":"10660","record_id":"<urn:uuid:40a710b4-8082-41b1-bd54-c92bfa7c7b37>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00343-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Anonymous on Saturday, October 17, 2009 at 10:06am.
Explain which method you would use to determine whether the following proportion is true or false. Show how you know that it is true or false: 19/6 = 4/27
• Math - jim, Saturday, October 17, 2009 at 10:19am
I like this question! It's simple, but should make you think, and there are many ways to approach it. I'd hate to deprive you of the thinking experience, but here are some questions for you.
What does it mean for a statement about numbers being equal to be "true" or "false"?
If numbers are equal, is one bigger than the other?
Consider any fraction of the form a/b, like 13/17 or 184/63. How can you know really quickly whether that fraction is less than, equal to, or greater than 1?
Is 3/4 greater or less than 1? What about 4/4? or 5/4?
Now go look at your two fractions again.
• Math - Anonymous, Saturday, October 17, 2009 at 10:24am
Well, 3/4 is less than one, 4/4 is equivalent to 1, and 5/4 is greater than one....right?
• Math - jim, Saturday, October 17, 2009 at 10:45am
Right! If it's bigger than one, the number on top is larger.
Now, is 19/6 greater or less than one?
Is 4/27 greater or less than one?
So without calculating anything, you can show that the two fractions cannot be equal.
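(As a side note, the shortcut jim describes, comparing the top number with the bottom number, is easy to sketch in a few lines of Python; the helper name here is made up:)

```python
def compare_to_one(a, b):
    """Return -1, 0, or 1 as a/b is less than, equal to, or greater than 1 (b > 0)."""
    return (a > b) - (a < b)

print(compare_to_one(19, 6))   # 1  (19/6 is greater than one)
print(compare_to_one(4, 27))   # -1 (4/27 is less than one)
```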
• Math - Anonymous, Saturday, October 17, 2009 at 10:56am
Yes! 19/6 is greater than one and 4/27 is less than one!
Thank you!
• Math - jim, Saturday, October 17, 2009 at 10:58am
Glad I could help. :-)
• Math - Anonymous, Saturday, October 17, 2009 at 11:10am
Jim, one more question....
In a case of 4/15 and 5/16 which would be bigger?
• Math - jim, Saturday, October 17, 2009 at 11:20am
Good question! Really good question, since you're thinking of the more general case.
You can't use the same trick there; I'm afraid you'll have to calculate.
The standard way, that will always work, is to find a common denominator of the two fractions, so that you transform both to have the same bottom number. You can always do that by multiplying top
and bottom of each by the bottom of the other. Example:
4/15 - we want to multiply top and bottom by 16:
4 / 15 = (4 * 16) /(15 * 16) = 64 / 240
5/16 - we want to multiply top and bottom by 15
5 / 16 = (5 * 15) / (16 * 15) = 75 / 240
Note: We haven't _changed_ the numbers; just the way we write them: 2/4 is exactly the same as 1/2, and 64/240 is still exactly the same as 4/15.
So now we have the question which is bigger: 64 / 240 or 75/240?
Since they're over the same denominator, it's the same question as which is bigger: 64 or 75? Obviously 75.
So 75/240 is bigger than 64/240
So 5/16 is bigger than 4/15.
(If you want a quick check, pull out a calculator, divide 5 by 16 and see that the decimal is bigger than dividing 4 by 15.)
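(Another quick check, if you have Python handy: the fractions module does this common-denominator arithmetic exactly, and the numbers below are the same 64 and 75 worked out above:)

```python
from fractions import Fraction

print(4 * 16, 5 * 15)                     # 64 75 (the numerators over 240)
print(Fraction(4, 15) < Fraction(5, 16))  # True: 5/16 is bigger
```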
• Math - Anonymous, Saturday, October 17, 2009 at 11:27am
You are very smart! I don't know why I didn't think of that. That makes the most sense.
• Math - jim, Saturday, October 17, 2009 at 1:40pm
The reason you didn't think of that is just that you haven't had enough practice yet. :-)
Keep plugging at it, and you'll be doing these things without even having to think about them!
• Math - Anonymous, Saturday, October 17, 2009 at 2:28pm
Thank you!
Related Questions
Math - I do not understand how to decide which method would you use to determine...
calculus - Determine whether the statement is true or false. If it is true, ...
calculus - determine whether the statement is true or false. If it is true, ...
Math - Determine whether the statement is true, false, or sometimes true. ...
math - Algebra For questions 1-5, find the value of the variable. 1. x = (1 ...
Math~! - 1) Is -7+9=-9+7 true, false, or open? (1point) True False*** Open 2) Is...
Physical Ed - 11. Disagreements are a normal part of life. (1 point) True False ...
7th grade health Ms. Sue please thanks - True/False Indicate whether the ...
7th grade health true or false - 11. Disagreements are a normal part of life. (1...
HEALTH HELP PLEASE!! - Fats help maintain your cell membranes. (1 point) True ... | {"url":"http://www.jiskha.com/display.cgi?id=1255788361","timestamp":"2014-04-16T22:18:51Z","content_type":null,"content_length":"12293","record_id":"<urn:uuid:be0872c2-1c06-4037-8d28-85348c6295fd>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00544-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westfield, NJ ACT Tutor
Find a Westfield, NJ ACT Tutor
...I have also attended Chinese school for a couple of years in the States and go back to China to visit periodically. At home, the predominant language spoken is Mandarin. I have taken two years of AP
Calculus AB and BC.
35 Subjects: including ACT Math, chemistry, English, SAT math
...I currently work at the Math Center in South Orange, NJ. I have also tutored elementary school kids in math and writing. Since I took the SAT three times (2050 composite score), my memory is
fresh on the techniques I used.
29 Subjects: including ACT Math, chemistry, English, reading
...The Method: Students first take a diagnostic test, identifying their Reading weaknesses. Afterward they study and practice in an ACT Science workbook. Initially this is done in tutoring
sessions, in which I serve as an academic coach. (In the education biz, this is called "scaffolding.") Soon, however, students also do this study/practice themselves during independent study
between sessions.
11 Subjects: including ACT Math, chemistry, physics, biology
...Tutoring for many years has afforded me the opportunity to go through most of the available material, and I have pinpointed the techniques and strategies that work most effectively on each
question type. In a conventional classroom setting, many students are unable to keep up with the pace of th...
55 Subjects: including ACT Math, English, writing, reading
...I have experience tutoring in a broad subject range, from Algebra through college-level Calculus. I recently passed and am proficient in the material on both Exams P/1 and FM/2. I am able to
tutor for the Praxis for Mathematics Content Knowledge. I have passed this test myself, getting only one question incorrect.
21 Subjects: including ACT Math, calculus, geometry, statistics
Related Westfield, NJ Tutors
Westfield, NJ Accounting Tutors
Westfield, NJ ACT Tutors
Westfield, NJ Algebra Tutors
Westfield, NJ Algebra 2 Tutors
Westfield, NJ Calculus Tutors
Westfield, NJ Geometry Tutors
Westfield, NJ Math Tutors
Westfield, NJ Prealgebra Tutors
Westfield, NJ Precalculus Tutors
Westfield, NJ SAT Tutors
Westfield, NJ SAT Math Tutors
Westfield, NJ Science Tutors
Westfield, NJ Statistics Tutors
Westfield, NJ Trigonometry Tutors | {"url":"http://www.purplemath.com/westfield_nj_act_tutors.php","timestamp":"2014-04-21T15:31:02Z","content_type":null,"content_length":"23781","record_id":"<urn:uuid:403b00ff-4b1c-41f8-aad8-57e7ec8ef2e8>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00187-ip-10-147-4-33.ec2.internal.warc.gz"} |
Electron. J. Diff. Eqns., Vol. 2007(2007), No. 100, pp. 1-22.
A numerical scheme using multi-shockpeakons to compute solutions of the Degasperis-Procesi equation
Håkon A. Hoel Abstract:
We consider a numerical scheme for entropy weak solutions of the DP (Degasperis-Procesi) equation
Multi-shockpeakons are solutions of the DP equation with a special property; their evolution in time is described by a dynamical system of ODEs. This property makes multi-shockpeakons relatively easy to simulate
numerically. We prove that if we are given a non-negative initial function
Submitted June 13, 2007. Published July 19, 2007.
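For reference, since the displayed formula did not survive in this text-only copy: the Degasperis-Procesi equation in its standard form is

```latex
u_t - u_{txx} + 4\,u\,u_x = 3\,u_x u_{xx} + u\,u_{xxx}
```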
Math Subject Classifications: 35Q53, 37K10.
Key Words: Shallow water equation; numerical scheme; entropy weak solution; shockpeakon; shockpeakon collision.
Show me the PDF file (804 KB), TEX file, and other files for this article.
│ │ Håkon A. Hoel │
│ │ Centre of Mathematics for Applications, University of Oslo │
│ │ P.O. Box 1053, Blindern, NO-0316 Oslo, Norway │
│ │ email: haakonah1@gmail.com │
│ │ http://www.folk.uio.no/haakonah │
Return to the EJDE web page | {"url":"http://ejde.math.txstate.edu/Volumes/2007/100/abstr.html","timestamp":"2014-04-19T09:27:51Z","content_type":null,"content_length":"2645","record_id":"<urn:uuid:d401f38f-419e-433f-b193-1bbf02b5f2de>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00066-ip-10-147-4-33.ec2.internal.warc.gz"} |
Krasnosel’skii iteration process in hyperbolic spaces
Results 1 - 10 of 12
, 2008
"... In this paper we prove general logical metatheorems which state that for large classes of theorems and proofs in (nonlinear) functional analysis it is possible to extract from the proofs
effective bounds which depend only on very sparse local bounds on certain parameters. This means that the bounds ..."
Cited by 31 (18 self)
In this paper we prove general logical metatheorems which state that for large classes of theorems and proofs in (nonlinear) functional analysis it is possible to extract from the proofs effective
bounds which depend only on very sparse local bounds on certain parameters. This means that the bounds are uniform for all parameters meeting these weak local boundedness conditions. The results
vastly generalize related theorems due to the second author where the global boundedness of the underlying metric space (resp. a convex subset of a normed space) was assumed. Our results treat
general classes of spaces such as metric, hyperbolic, CAT(0), normed, uniformly convex and inner product spaces and classes of functions such as nonexpansive, Hölder-Lipschitz, uniformly continuous,
bounded and weakly quasinonexpansive ones. We give several applications in the area of metric fixed point theory. In particular, we show that the uniformities observed in a number of recently found
effective bounds (by proof theoretic analysis) can be seen as instances of our general logical results.
- CIE 2005 NEW COMPUTATIONAL PARADIGMS: CHANGING CONCEPTIONS OF WHAT IS COMPUTABLE , 2005
"... ..."
- J. of the European Math. Soc , 2007
"... This paper provides a fixed point theorem for asymptotically nonexpansive mappings in uniformly convex hyperbolic spaces as well as new effective results on the Krasnoselski-Mann iterations of
such mappings. The latter were found using methods from logic and the paper continues a case study in the g ..."
Cited by 8 (7 self)
This paper provides a fixed point theorem for asymptotically nonexpansive mappings in uniformly convex hyperbolic spaces as well as new effective results on the Krasnoselski-Mann iterations of such
mappings. The latter were found using methods from logic and the paper continues a case study in the general program of extracting effective data from prima-facie ineffective proofs in the fixed
point theory of such mappings. 1
, 2005
"... In this paper we obtain a quadratic bound on the rate of asymptotic regularity for the Krasnoselski-Mann iterations of nonexpansive mappings in CAT(0)-spaces, whereas previous results guarantee
only exponential bounds. The method we use is to extend to the more general setting of uniformly convex hy ..."
Cited by 8 (0 self)
In this paper we obtain a quadratic bound on the rate of asymptotic regularity for the Krasnoselski-Mann iterations of nonexpansive mappings in CAT(0)-spaces, whereas previous results guarantee only
exponential bounds. The method we use is to extend to the more general setting of uniformly convex hyperbolic spaces a quantitative version of a strengthening of Groetsch’s theorem obtained by
Kohlenbach using methods from mathematical logic (so-called “proof mining”).
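For readers unfamiliar with the iteration these abstracts analyze: the Krasnoselski-Mann scheme applied to a nonexpansive map T is x_{n+1} = (1-λ)x_n + λT(x_n), and asymptotic regularity means that the residuals |x_n - T(x_n)| tend to 0. Here is a minimal numerical sketch of that behavior (our illustration only; the particular map T below is an arbitrary nonexpansive example, not taken from these papers):

```python
def km_iterates(T, x0, lam=0.5, steps=100):
    """Krasnoselski-Mann iteration x_{n+1} = (1-lam)*x_n + lam*T(x_n).
    Returns the residuals |x_n - T(x_n)|; asymptotic regularity means
    these tend to 0."""
    x = x0
    residuals = []
    for _ in range(steps):
        tx = T(x)
        residuals.append(abs(x - tx))
        x = (1.0 - lam) * x + lam * tx
    return residuals

# T(x) = max(0, x - 1) is nonexpansive on the reals with fixed point 0.
res = km_iterates(lambda x: max(0.0, x - 1.0), x0=10.0)
# The residuals are nonincreasing and tend to 0.
```

The nonincreasing residual sequence is exactly the quantity for which the papers above extract explicit (e.g. quadratic) rates.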
, 810
We propose the class of uniformly convex W-hyperbolic spaces with monotone modulus of uniform convexity (UCW-hyperbolic spaces for short) as an appropriate setting for the study of nonexpansive
iterations. UCW-hyperbolic spaces are a natural generalization both of uniformly convex normed spaces and CAT(0)-spaces. Furthermore, we apply proof mining techniques to get effective rates of
asymptotic regularity for Ishikawa iterations of nonexpansive self-mappings of closed convex subsets in UCW-hyperbolic spaces. These effective results are new even for uniformly convex Banach spaces.
This paper provides an effective uniform rate of metastability (in the sense of Tao) on the strong convergence of Halpern iterations of nonexpansive mappings in CAT(0) spaces. The extraction of this
rate from an ineffective proof due to Saejung is an instance of the general proof mining program which uses tools from mathematical logic to uncover hidden computational content from proofs. This
methodology is applied here for the first time to a proof that uses Banach limits and hence makes a substantial reference to the axiom of choice. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1164644","timestamp":"2014-04-18T22:56:04Z","content_type":null,"content_length":"30733","record_id":"<urn:uuid:d2f4dec3-0712-490c-9a9e-1134d5d24399>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
softmod 5900xt to quadro3000, seems im the luck one - Page 18 - Guru3D.com Forums
Originally posted by Unwinder
> 2. will we ever see any new drivers besides the 45.28 ?
Answered in RivaTuner's FAQ. The answer is NO.
bad news
how come rui managed to use new drivers for his modded 6800GT@4000FX ? | {"url":"http://forums.guru3d.com/showthread.php?p=1083397","timestamp":"2014-04-21T02:01:04Z","content_type":null,"content_length":"202583","record_id":"<urn:uuid:b7e0ec25-dda9-40b5-b7b1-56b39d3d0e81>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00339-ip-10-147-4-33.ec2.internal.warc.gz"} |
Path-Structured Smooth (∞,1)-Toposes
Posted by Urs Schreiber
It seems that this Friday I’ll give a talk to the group of Ieke Moerdijk, where I just started a new position (as I mentioned).
Over on the $n$Lab I am preparing some notes along which such a talk might proceed:
Path-Structured Smooth $(\infty,1)$-Toposes (wiki page)
Abstract. A smooth topos is a context in which (synthetic) differential geometry exists. An $(\infty,1)$-topos is a context in which higher groupoids exist. Merging these two concepts yields the
notion of a smooth $(\infty,1)$-topos: a context in which $\infty$-Lie groupoids exist.
A lined topos is a context in which each space has a notion of path. A path-structured smooth $(\infty,1)$-topos is a context in which each $\infty$-Lie groupoid comes with its smooth path $\infty$-groupoid, naturally.
Path-structured and smooth $(\infty,1)$-toposes are the context in which gauge fields given by principal $\infty$-bundles with connection exist.
Posted at October 13, 2009 1:15 AM UTC
Re: Path-Structured Smooth (∞,1)-Toposes
Ah, so you’re no longer in Bonn?
Posted by: Kevin Lin on October 13, 2009 4:49 PM | Permalink | Reply to this
Ah, so you’re no longer in Bonn?
Currently I am oscillating back and forth. Luckily there is a good train connection.
I did certainly miss your GW-seminar today, unfortunately. I should be available at the elliptic seminar tomorrow, though.
Posted by: Urs Schreiber on October 13, 2009 5:08 PM | Permalink | Reply to this
Re: whereabouts
That's a bad link; I don't know what it should be.
Posted by: Toby Bartels on October 13, 2009 11:35 PM | Permalink | Reply to this
entries on Gromov-Witten invariants
That’s a bad link;
Thanks for catching that. Fixed now.
I don’t know what it should be.
It was supposed to and now does point to the $n$Café entry called A Seminar on Gromov-Witten Invariants – whose point is to attract contributors to the entry $n$Lab:Gromov-Witten invariants.
Posted by: Urs Schreiber on October 14, 2009 10:03 AM | Permalink | Reply to this | {"url":"http://golem.ph.utexas.edu/category/2009/10/pathstructured_smooth_1toposes.html","timestamp":"2014-04-17T00:49:06Z","content_type":null,"content_length":"18712","record_id":"<urn:uuid:b3391154-86ba-49b4-8b7a-8ae6e5ef225b>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00418-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gromov-Witten/Donaldson-Thomas correspondence
Let $X$ be a nonsingular projective 3-fold. I am trying to understand the proof of the GW/DT correspondence as presented in Gromov-Witten/Donaldson-Thomas correspondence for toric 3-folds. I would appreciate it if anyone could explain the general idea behind virtual localization. To be more specific: how does capped localization express the primary GW/DT invariants of $X$ as a sum of capped vertex and capped edge data?

Is it possible to explain this in terse statements without going into the details? I believe that this will help my intuition as I try to wade through the detailed contents.
Thank you.
ag.algebraic-geometry gromov-witten-theory moduli-spaces
| {"url":"http://mathoverflow.net/questions/97211/gromov-witten-donaldson-thomas-correspondence","timestamp":"2014-04-21T07:31:16Z","content_type":null,"content_length":"45067","record_id":"<urn:uuid:032625f4-a8b8-4383-89e5-bfcd5dec1557>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
Comments on Eide Neurolearning Blog: Overthinking and Creativity - Think Like Child

Me Edgell (2014-03-18):
People who answer quickly and think they are correct because they reached 2 are missing the point. Applying the same reasoning to all the sums means your answer would be incorrect. 1111 = 0 but 1012 = 1.

Tushar Yadav (2014-02-07):
Same here, took me a minute figuring out the same logic. Well hell yeah, I am a programmer...

Chris Hennick (2013-07-18):
Reminds me of a puzzle I read not long ago: "Which of these verbs is not like the others? bring, buy, catch, draw, fight, find, teach, think" When I read "catch, fight" I thought of "catch fish" -- which led me to notice that "think" was the only verb that couldn't take a concrete direct object (you can bring a fish, you can buy a fish, you can catch a fish, you can draw a fish, you can fight a fish, you can find a fish, you can teach a fish to swim, but you can't think a fish). But that wasn't the answer on the next page.

CHAD (2012-08-01):
Ten seconds. I feel like the problem was advertised as much harder than it is.

Cindy (2012-05-25):
I used logic to get my answer in under a minute. I decided that if preschool children got the answer quickly, it couldn't have anything to do with math ... patterns, or counting, or calculating. So, I thought like a child and figured out to look for the "circles." I'm definitely an overthinker, but I'm also highly logical, which helped me in this problem.

zencat (2012-04-30):
Same method as you... in under a minute. And plenty of higher education.

mathme (2012-04-23):
I got the right answer in under a minute, but just in a totally different way. I looked at the numbers: 2=0, because 2222=0; 1=0 because 1111=0; 3333=0; 9=1 because 9999=4; and since 8193=3, 8=2; therefore, 2581=2. | {"url":"http://eideneurolearningblog.blogspot.com/feeds/7485343887801012767/comments/default","timestamp":"2014-04-18T08:14:11Z","content_type":null,"content_length":"14713","record_id":"<urn:uuid:24c4647f-c468-4cd4-ade1-fbdd7773b42e>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
Convert rpm to m/s
Error! You cannot convert rpm to m/s
The reason is that the two units are not compatible. See the unit definitions below as well as possible conversions for each unit.
Revolutions per minute is a unit of measurement of frequency. The definition for revolutions per minute is the following:
A revolution per minute is equal to one rotation completed around a fixed axis in one minute of time.
The symbol for revolutions per minute is rpm.

rpm conversions
No compatible conversions found.
Metre per second is a unit of measurement of speed. The definition for metre per second is the following:
Metre per second is the SI unit of speed.
The symbol for metre per second is m/s.
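The error above is correct: rpm measures frequency and m/s measures speed, so no direct conversion exists. The conversion only becomes well defined once a rotation radius is supplied, via v = 2πr · rpm / 60. A small illustrative sketch (the function name and numbers are ours, not part of this site):

```python
import math

def rpm_to_mps(rpm, radius_m):
    """Tangential speed in m/s of a point radius_m metres from the axis
    of an object rotating at `rpm` revolutions per minute."""
    revolutions_per_second = rpm / 60.0
    metres_per_revolution = 2.0 * math.pi * radius_m  # circumference
    return revolutions_per_second * metres_per_revolution

# At 60 rpm, a point 1 m from the axis moves at 2*pi (about 6.28) m/s.
```

Without the radius argument there is no answer, which is exactly why the converter reports the two units as incompatible.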
| {"url":"http://www.convertaz.com/convert-rpm-to-m~s/","timestamp":"2014-04-17T21:22:59Z","content_type":null,"content_length":"33915","record_id":"<urn:uuid:f88fdfe8-fb5a-4e60-9795-ddbb3bf692a1>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
and Nonlinear Programming. Addison-Wesley Publishing Company
- Advanced Lectures on Machine Learning, LNCS, 2003
"... Boosting algorithms like AdaBoost and Arc-GV are iterative strategies to minimize a constrained objective function, equivalent to Barrier algorithms. ..."
, 2000
Cited by 18 (9 self)
We examine methods for constructing regression ensembles based on a linear program (LP). The ensemble regression function consists of linear combinations of base hypotheses generated by some boosting-type base learning algorithm. Unlike the classification case, for regression the set of possible hypotheses producible by the base learning algorithm may be infinite. We explicitly tackle the issue of how to define and solve ensemble regression when the hypothesis space is infinite. Our approach is based on a semi-infinite linear program that has an infinite number of constraints and a finite number of variables. We show that the regression problem is well posed for infinite hypothesis spaces in both the primal and dual spaces. Most importantly, we prove there exists an optimal solution to the infinite hypothesis-space problem consisting of a finite number of hypotheses. We propose two algorithms for solving the infinite and finite hypothesis problems. One uses a column generation simplex-type algorithm and the other adopts an exponential barrier approach. Furthermore, we give sufficient conditions for the base learning algorithm and the hypothesis set to be used for infinite regression ensembles. Computational results show that these methods are extremely promising.
- Proceedings of the Second International Workshop on Independent Component Analysis and Blind Signal Separation, 2000
Cited by 13 (6 self)
The error-entropy-minimization approach in adaptive system training is investigated. The effect of Parzen windowing on the location of the global minimum of entropy has been investigated. An analytical proof shows that the global minimum of the entropy is a local minimum, possibly the global minimum, of the nonparametrically estimated entropy using Parzen windowing with Gaussian kernels. The performances of error-entropy-minimization and mean-square-error-minimization criteria are compared in short-term prediction of a chaotic time series. Statistical behavior of the estimation errors and the higher order central moments of the time series data and its predictions are utilized as the comparison criteria. 1. INTRODUCTION Starting with the early work of Wiener [1] on adaptive filters, mean square error (MSE) has been almost exclusively employed in the training of all adaptive systems including artificial neural networks. There were mainly two reasons lying behind this choice: Analyti...
- In Advances in Neural Information Processing Systems (NIPS), 2002
Cited by 10 (2 self)
We give a unified convergence analysis of ensemble learning methods including e.g. AdaBoost, Logistic Regression and the Least-Square-Boost algorithm for regression. These methods have in common
that they iteratively call a base learning algorithm which returns hypotheses that are then linearly combined. We show that these methods are related to the Gauss-Southwell method known from
numerical optimization and state non-asymptotical convergence results for all these methods. Our analysis includes ℓ1-norm regularized cost functions leading to a clean and general way to regularize
ensemble learning.
Cited by 1 (0 self)
Adaptive signal processing theory was born and has lived by exclusively exploiting the mean square error criterion. When we think of the goal of least squares without restrictions of Gaussianity, one
has to wonder why an information theoretic error criterion is not utilized instead. After all, the goal of adaptive filtering should be to find the linear projection that best captures the
information in the desired response. In this paper we summarize our efforts to extend adaptive linear filtering to information filtering. We briefly review Renyi’s entropy definition, Parzen windows
and put them together in a framework to estimate entropy directly from samples (nonparametric). Once this criterion is developed we can train linear or nonlinear adaptive networks for entropy
maximization or minimization. We present results on the properties of Renyi's nonparametric entropy estimator, and show how it performs in chaotic time series prediction.
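The nonparametric estimator referred to in the abstracts above has a closed form worth seeing: with Gaussian Parzen kernels, Renyi's quadratic entropy of a sample reduces to the negative log of the mean pairwise kernel value (the "information potential"). A hedged one-dimensional sketch of that formula (our own illustration, not code from these papers; the kernel width sigma is a free choice):

```python
import math

def renyi2_entropy(samples, sigma):
    """Parzen-window estimate of Renyi's quadratic entropy
    H2 = -log( integral of p(x)^2 dx ). With Gaussian kernels of width
    sigma, the plug-in estimate is
    -log( (1/N^2) * sum_{i,j} G(x_i - x_j; 2*sigma^2) )."""
    n = len(samples)
    var2 = 2.0 * sigma * sigma  # pairwise kernel variance is 2*sigma^2
    norm = 1.0 / math.sqrt(2.0 * math.pi * var2)
    info_potential = sum(
        norm * math.exp(-(x - y) ** 2 / (2.0 * var2))
        for x in samples for y in samples
    ) / (n * n)
    return -math.log(info_potential)

# Tightly clustered samples have lower estimated entropy than spread-out ones.
```

Because the estimate is a smooth function of the samples, it can be differentiated and used directly as a training criterion, which is the point of the error-entropy-minimization work described above.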
2002 This work is dedicated to all scientists and researchers, who have lived in pursuit of knowledge, and have dedicated themselves to the advancement of science. ACKNOWLEDGMENTS I would like to
start by thanking my supervisor, Dr. Jose C. Principe, for his encouraging and inspiring style that made possible the completion of this work. Without his guidance, imagination, and enthusiasm, which
I admire, this dissertation would not have been possible. I also wish to thank the members of my committee, Dr. John G. Harris, Dr. Tan F. Wong, and Dr. Mark C.K. Yang, for their valuable time and
interest in serving on my supervisory committee, as well as their comments, which helped improve the quality of this dissertation. Throughout the course my PhD research, I have been in interaction
with many CNEL colleagues and I have benefited from the valuable discussions we had together | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=33467","timestamp":"2014-04-19T01:50:42Z","content_type":null,"content_length":"33290","record_id":"<urn:uuid:f8736b40-57e3-4b01-8ef0-89f747a07aa6>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
OpenStudy Comment:

I've been using OpenStudy for nearly a month now, and was just realizing how much better OpenStudy is than Yahoo! Answers. At OpenStudy, you can get an explanation and real help - through equations and/or drawings. This helps you understand the material - not just get an answer. Thanks OpenStudy!

Glad you like it! We try our best to keep our users happy and contented :)

"how much better OpenStudy is than Yahoo! Answers" Well, glad you like OpenStudy, but everything is better than Yahoo! Answers.

I don't wish to steal OpenStudy's thunder, but here is a list of other awesome places for this stuff. http://stackexchange.com/ : undergraduate to graduate level academics http://www.scienceforums.net/ : any level of academics, science, math, or stats http://physicsforums.com/ : like above, except with more people http://www.reddit.com/r/learnmath : a fast community, not as helpful as StackExchange, but pretty cool nonetheless http://www.khanacademy.org/ : undergraduates only, but really awesome

lol reddit has an educational site o.O

'Preciate it @Study23!

OS is really the best website ever, but you guys should try and popularize it more. None of my classmates actually got to know of OS until I recommended it to them.

I came to know OS through hyperphysics and that too just by chance. Never expected that I would find a new world of education, friends and help on the Internet :)

Interestingly enough, I stumbled upon OpenStudy through the HyperPhysics Website. Coincidentally, I visited the HyperPhysics website by chance - through a search engine result. If I had never clicked the HyperPhysics search engine result, I would never have come upon this site for sharing knowledge, and learning new things!

I got to know OS through cellsalive.com. I wanted to find out about the function of the golgi apparatus in cell functioning and stumbled upon OS.
| {"url":"http://openstudy.com/updates/4f9c9f4ae4b000ae9ed197ad","timestamp":"2014-04-19T22:24:07Z","content_type":null,"content_length":"50745","record_id":"<urn:uuid:3dd72e63-0498-44dc-a01b-6f2940c61dde>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: power series
Dave Marker marker at math.uic.edu
Wed Nov 12 21:49:41 EST 1997
A couple of comments on power series and nonstandard models.
- Suppose you are just looking for nonstandard elementary extensions
of the field of real numbers.
Vaughan Pratt suggests looking at the field of Laurent series over the
reals. This won't work, as not every element will have a square root.
However, the field of Puiseux series will work.
This is the union of the Laurent series fields R(t^{1/n}) for
n=1..infinity. In the natural (only) order t is infinitesimal. (To argue
that this is a real closed field you can
appeal to Ax-Kochen--this is a henselian valued field with real
closed residue field and divisible value group--or argue adjoining i
gives an algebraically closed field--this is done say in Walker's
book on Algebraic Curves.)
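As a sketch of the valuation argument behind this (our addition for context, not part of Marker's original post): for a nonzero Laurent series $f$, write $v(f)$ for the least exponent carrying a nonzero coefficient. Then

```latex
v(fg) = v(f) + v(g) \;\Longrightarrow\; v(f^2) = 2\,v(f) \in 2\mathbb{Z},
\qquad \text{while } v(t) = 1 \notin 2\mathbb{Z},
```

so the positive infinitesimal $t$ has no square root in the Laurent series field $\mathbb{R}((t))$, whereas in the Puiseux field the element $t^{1/2} \in \mathbb{R}((t^{1/2}))$ squares to $t$ and the obstruction disappears.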
-Of course this is only appropriate for nonstandard analysis of algebraic
functions. It is impossible to define a reasonable exponential as there is
no room for exp(1/t). Van den Dries, Macintyre and I have given a
reasonably natural construction of an elementary extension of
(R,+,x,<,exp) using power series (actually a limit of limits of power
series fields). This allows you to use some algebraic nonstandard
arguments to analyze asymptotics of definable functions. We used this to
answer a problem of Hardy's on rates of growth of logarithmic-exponential
functions. (a preprint is available at http://www.math.uic.edu/~marker).
-All of these models have a reasonably natural notion of "nonstandard
integers". But they are only models of quantifier free induction
(indeed the square root of two is rational) and completely unsuitable
for the type of construction you usually want to do in nonstandard analysis.
Dave Marker
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/1997-November/000253.html","timestamp":"2014-04-21T13:21:33Z","content_type":null,"content_length":"3958","record_id":"<urn:uuid:19fe38d1-bfd2-4778-9ac8-709a435037fb>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00497-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homogeneity (statistics)
For homogeneity of variance see homoscedasticity.
In statistics, homogeneity arises in describing the properties of a dataset, or several datasets, and relates to the validity of the often convenient assumption that the statistical properties of any
one part of an overall dataset are the same as any other part. In meta-analysis, which combines the data from several studies, homogeneity measures the differences or similarities between the several
Homogeneity can be studied to several degrees of complexity. For example, considerations of homoscedasticity examine how much the variability of data-values changes throughout a dataset. However,
questions of homogeneity apply to all aspects of the statistical distributions, including the location parameter. Thus, a more detailed study would examine changes to the whole of the marginal
distribution. An intermediate-level study might move from looking at the variability to studying changes in the skewness. In addition to these, questions of homogeneity apply also to the joint distributions.
The concept of homogeneity can be applied in many different ways and, for certain types of statistical analysis, it is used to look for further properties that might need to be treated as varying
within a dataset once some initial types of non-homogeneity have been dealt with.
Differences in the typical values across the dataset might initially be dealt with by constructing a regression model using certain explanatory variables to relate variations in the typical value to
known quantities. There should then be a later stage of analysis to examine whether the errors in the predictions from the regression behave in the same way across the dataset.
Time series
The initial stages in the analysis of a time series may involve plotting values against time to examine homogeneity of the series in various ways: stability across time as opposed to a trend; stability of local fluctuations over time.
Combining information across sites
In hydrology, data-series across a number of sites composed of annual values of the within-year annual maximum river-flow are analysed. A common model is that the distributions of these values are the same for all sites apart from a simple scaling factor, so that the location and scale are linked in a simple way. There can then be questions of examining the homogeneity across sites of the distribution of the scaled values.
Combining information sources
In meteorology, weather datasets are acquired over many years of record and, as part of this, measurements at certain stations may cease occasionally while, at around the same time, measurements may start at nearby locations. There are then questions as to whether, if the records are combined to form a single longer set of records, those records can be considered homogeneous over time.
Homogeneity within populations
Simple population surveys may start from the idea that responses will be homogeneous across the whole of a population. Assessing the homogeneity of the population would involve looking to see whether the responses of certain identifiable sub-populations differ from those of others. For example, car-owners may differ from non-car-owners, or there may be differences between different
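The sub-population comparison described above is often formalized as a chi-square test of homogeneity on a contingency table. A small self-contained sketch (the survey counts below are invented purely for illustration):

```python
def chi2_homogeneity(table):
    """Pearson chi-square statistic and degrees of freedom for a test of
    homogeneity on an r x c contingency table (rows = groups,
    columns = response categories)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return stat, dof

# Hypothetical survey: rows are car-owners and non-car-owners,
# columns are yes/no responses to the same question.
stat, dof = chi2_homogeneity([[30, 20], [10, 40]])
# A large statistic relative to a chi-square distribution with `dof`
# degrees of freedom indicates the sub-populations respond differently.
```

Comparing the statistic to the chi-square distribution (e.g. via a statistics library) then gives a p-value for the hypothesis that the sub-populations share one response distribution.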
| {"url":"http://www.reference.com/browse/Homogeneity+(statistics)","timestamp":"2014-04-19T11:39:53Z","content_type":null,"content_length":"76971","record_id":"<urn:uuid:f8d5b4d9-0a13-425f-9e75-28f359b8125e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/WARC/CC-MAIN-20140416005217-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
"Size Balanced Tree" - more efficient than any known algorithm?
A member ("Chen Qifeng") of an online forum about teenagers'
programming contests claims to have found a new binary search tree
algorithm that outperforms all existing well-known algorithms of its
kind, and other members of that forum who have replied all confirmed
his finding. So I think it's time to forward this result to comp.theory
for a wider-scope evaluation.
The thesis in PDF format and an accompanying Pascal source code file
are in a zip file available at:
Also below is the plain text version of that PDF and the source code
for the Usenet's records.
Yao Ziyuan
==== PLAIN TEXT VERSION OF THE THESIS PDF ====
Size Balanced Tree
Chen Qifeng (Farmer John)
Zhongshan Memorial Middle School, Guangdong, China
December 29, 2006
This paper presents a unique strategy for maintaining balance in
changing Binary Search Trees that has optimal expected behavior at
worst. Size
Balanced Tree is, as the name suggests, a Binary Search Tree (abbr.
BST) kept
balanced by size. It is simple, efficient and versatile in every
aspect. It is very easy to
implement and has a straightforward description and a surprisingly
simple proof of
correctness and runtime. Its runtime matches that of the fastest BST
known so far.
Furthermore, it works much faster than many other famous BSTs due to
tendency of a perfect BST in practice. It not only supports typical
operations but also Select and Rank.
Key Words And Phrases
Size Balanced Tree
This paper is dedicated to the memory of Heavens.
1 Introduction
Before presenting Size Balanced Trees it is necessary to explicate
Binary Search Trees
and rotations on BSTs, Left-Rotate and Right-Rotate.
1.1 Binary Search Tree
Binary Search Tree is a significant kind of advanced data structures.
It supports many
dynamic-set operations, including Search, Minimum, Maximum,
Predecessor, Successor,
Insert and Delete. It can be used both as a dictionary and as a
priority queue.
A BST is an organized binary tree. Every node in a BST contains two
children at most.
The keys for compare in a BST are always stored in such a way as to
satisfy the
binary-search-tree property:
Let x be a node in a binary search tree. Then the key of x is not less
than that in left
subtree and not larger than that in right subtree.
For every node t we use the fields of left[t] and right[t] to
store two pointers to its
children. And we define key[t] to mean the value of the node t for
compare. In addition
we add s[t], the size of subtree rooted at t, to keep the number of the
nodes in that tree.
Particularly we call 0 the pointer to an empty tree and s[0]=0.
1.2 Rotations
In order to keep a BST balanced (not degenerated to be a chain) we
usually change the
pointer structure through rotations to change the configuration, which
is a local operation
in a search tree that preserves the binary-search-tree property.
Figure 1.1: The operation Left-Rotate(x) transforms the configuration of the two nodes on the right into the configuration on the left by changing a constant number of pointers. The configuration on the left can be transformed into the configuration on the right by the inverse operation, Right-Rotate(y).
1.2.1 Pseudocode Of Right-Rotate
The Right-Rotate assumes that the left child exists.
Right-Rotate (t)
1 k←left[t]
2 left[t] ←right[k]
3 right[k] ←t
4 s[k] ←s[t]
5 s[t] ←s[left[t]]+s[right[t]]+1
6 t←k
1.2.2 Pseudocode Of Left-Rotate
The Left-Rotate assumes that the right child exists.
Left-Rotate (t)
1 k←right[t]
2 right[t] ←left[k]
3 left[k] ←t
4 s[k] ←s[t]
5 s[t] ←s[left[t]]+s[right[t]]+1
6 t←k
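The two rotations, including the size bookkeeping from the pseudocode above, can be sketched in Python (an illustration only, not the paper's code; the Node class and size helper are inventions of this sketch):

```python
# Hedged sketch of the two rotations, keeping s[t] (subtree size) correct.
def size(t):
    return t.s if t is not None else 0   # s[0] = 0 for the empty tree

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.s = 1 + size(left) + size(right)

def right_rotate(t):
    """Mirrors the pseudocode: k <- left[t], ...; returns the new subtree root."""
    k = t.left
    t.left = k.right
    k.right = t
    k.s = t.s
    t.s = size(t.left) + size(t.right) + 1
    return k

def left_rotate(t):
    """The inverse operation, assuming the right child exists."""
    k = t.right
    t.right = k.left
    k.left = t
    k.s = t.s
    t.s = size(t.left) + size(t.right) + 1
    return k
```

Both run in O(1): each touches a constant number of pointers and recomputes only the two affected size fields.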
2 Size Balanced Tree
Size Balanced Tree (abbr. SBT) is a kind of Binary Search Tree kept balanced by size. It supports many dynamic primary operations in O(log n) time:
Insert(t, v): Inserts a node whose key is v into the SBT rooted at t.
Delete(t, v): Deletes a node whose key is v from the SBT rooted at t. If there is no such node in the tree, deletes the last node examined in the search.
Find(t, v): Finds the node whose key is v and returns it.
Rank(t, v): Returns the rank of v in the tree rooted at t; in other words, one plus the number of keys which are less than v in that tree.
Select(t, k): Returns the node which is ranked at the kth position. This subsumes the operations Get-min and Get-max, because Get-min is equivalent to Select(t, 1) and Get-max is equivalent to Select(t, s[t]).
Pred(t, v): Returns the node with the maximum key which is less than v.
Succ(t, v): Returns the node with the minimum key which is larger than v.
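To illustrate how the size field s[t] supports Rank and Select, here is a hedged Python sketch of those two operations on a size-augmented BST (the node representation is an assumption of this sketch, not taken from the paper's Pascal code):

```python
# Hedged sketch: Rank and Select on a size-augmented BST.
def size(t):
    return t.s if t is not None else 0   # s[0] = 0 for the empty tree

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.s = 1 + size(left) + size(right)

def rank(t, v):
    """One plus the number of keys less than v in the tree rooted at t."""
    if t is None:
        return 1
    if v <= t.key:
        return rank(t.left, v)
    return size(t.left) + 1 + rank(t.right, v)

def select(t, k):
    """The node ranked k-th; select(t, 1) is Get-min, select(t, s[t]) is Get-max."""
    left = size(t.left)
    if k == left + 1:
        return t
    if k <= left:
        return select(t.left, k)
    return select(t.right, k - left - 1)
```

Each call walks a single root-to-node path, so on a balanced tree both take O(log n) time.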
==== END OF PLAIN TEXT VERSION OF THE THESIS PDF ====
==== PASCAL SOURCE CODE ====
{$inline on}
program CQF_SBT;
const maxn=2000000;
var key,s,left,right,a,b:array[0..maxn] of longint;
procedure init;
for q:=1 to q do
procedure work;
var t,k:longint;
procedure right_rotate(var t:longint);inline;
procedure left_rotate(var t:longint);inline;
procedure maintain(var t:longint;flag:boolean);inline;
if flag=false then
if s[left[left[t]]]>s[right[t]] then
if s[right[left[t]]]>s[right[t]] then begin
if s[right[right[t]]]>s[left[t]] then
if s[left[right[t]]]>s[left[t]] then begin
procedure insert(var t,v:longint);inline;
if t=0 then begin
else begin
if v<key[t] then
function delete(var t:longint;v:longint):longint;inline;
if (v=key[t])or(v<key[t])and(left[t]=0)or(v>key[t])and(right[t]=0)
then begin
if (left[t]=0)or(right[t]=0) then
if v<key[t] then
function find(var t,v:longint):boolean;inline;
if t=0 then
if v<key[t] then
find:=(key[t]=v)or find(right[t],v);
function rank(var t,v:longint):longint;inline;
if t=0 then
if v<=key[t] then
function select(var t:longint;k:longint):longint;inline;
if k=s[left[t]]+1 then
if k<=s[left[t]] then
function pred(var t,v:longint):longint;inline;
if t=0 then
if v<=key[t] then
else begin
if pred=v then
function succ(var t,v:longint):longint;inline;
if t=0 then
if key[t]<=v then
else begin
if succ=v then
for q:=1 to q do
case a[q] of
==== END OF PASCAL SOURCE CODE ==== | {"url":"http://coding.derkeiler.com/Archive/General/comp.theory/2007-01/msg00009.html","timestamp":"2014-04-16T13:02:51Z","content_type":null,"content_length":"19939","record_id":"<urn:uuid:e98b8b49-b1f7-42ff-813f-a4f535eb1872>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00048-ip-10-147-4-33.ec2.internal.warc.gz"} |
[plt-scheme] calculation time for 11.5.3
From: Stephen Bloch (sbloch at adelphi.edu)
Date: Thu Nov 6 10:05:55 EST 2008
On Nov 5, 2008, at 8:08 PM, mike wrote:
> Using my def for add ,multiply and finally exponent which is
> dependent
> on the def for add and multiply i find calculation times increase
> dramatically as i raise the exponent. (exponent 10 4) takes a long
> long
> time.
> Can i assume that the nature of the functions is such that the
> number
> of calculations increases hyperbolically?? and not a function of sick
> computer?
Doing this in the "obvious" way, by structural recursion on Peano
natural numbers, the add function should take linear time, the
multiply function quadratic time, and the exponent function cubic
time. No, wait... that's too easy.
add(n1, n2) takes O(n1) time (assuming that's the parameter you recur on).
mult(n1, n2) takes sum_n1(O(n2)) time, i.e. O(n1 * n2) time.
raise(x, n) takes sum_{i=1}^n{x^i}, which is O(x^n) time.
In other words, the time the function takes is approximately equal to
its answer. Which is obvious in retrospect, since the only
fundamental operation it does on answers is add1; you can't get 10^4
from 0 without doing that operation at least 10^4 times.
On my computer, (raise 10 4) takes well under a second, but (raise 10
6) takes about four seconds.
BTW, I tried rewriting all three functions to be tail-recursive, and
it didn't make any noticeable difference.
As long as you're using Peano numbers (whose only constructor is
add1), this is unavoidable. You can do much better, of course, if
you use binary representation:
; A nat-num is either
; 0,
; 2n where n is a nat-num, or
; 2n+1 where n is a nat-num
Note that this uses "multiplication", but only the restricted case of
multiplication by 2. With this representation,
add(n1, n2) takes O(max(log(n1), log(n2))) time
mult(n1, n2) takes O(log(n1) * log(n2)) time (assuming you don't
start doing fast Fourier transforms!)
raise(x, n) takes O(n*log(x)) time by the obvious algorithm, or
O(log(n)*log(x)) time by a less-obvious but fairly straightforward one.
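To see the speed-up concretely, here is one possible sketch in Python (my own illustration, not the poster's code or the homework solution; numbers are represented as lists of bits, least significant first):

```python
# Binary natural numbers as bit lists, e.g. 6 = [0, 1, 1] (little-endian).

def to_bits(n):          # helper for building test values from ints
    bits = []
    while n > 0:
        bits.append(n % 2)
        n //= 2
    return bits

def to_int(bits):        # helper for checking results
    return sum(b << i for i, b in enumerate(bits))

def add(a, b):
    """Ripple-carry addition: O(max(len(a), len(b))) bit operations."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % 2)
        carry = s // 2
    if carry:
        result.append(1)
    return result

def mult(a, b):
    """Shift-and-add multiplication: O(len(a) * len(b)) bit operations."""
    result = []
    for i, bit in enumerate(a):
        if bit:
            result = add(result, [0] * i + b)   # b shifted left by i places
    return result

def raise_(x, n):
    """Exponentiation by repeated squaring over the bits of n."""
    result, base = [1], x
    for bit in n:
        if bit:
            result = mult(result, base)
        base = mult(base, base)
    return result
```

Here add1 never appears; shifting a bit list replaces the Peano constructor, which is where the logarithmic running times come from.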
Homework problem: define a data structure to represent natural
numbers in binary form (not using the built-in number type), and
define these functions on that data type.
Stephen Bloch
sbloch at adelphi.edu
Posted on the users mailing list. | {"url":"http://lists.racket-lang.org/users/archive/2008-November/028354.html","timestamp":"2014-04-18T00:38:53Z","content_type":null,"content_length":"7597","record_id":"<urn:uuid:2e13506c-bc52-4348-a4b3-0d59be9d43c7>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00573-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westminster, MA Algebra Tutor
Find a Westminster, MA Algebra Tutor
...Organic Chemistry is a very interesting subject. I scored the highest marks in this subject and was awarded the Coca-Cola gold medal for securing first rank at Goa University. I have been tutoring Organic Chemistry for 3 years and my students are very happy with my teaching.
13 Subjects: including algebra 2, algebra 1, chemistry, geometry
Greetings prospective students and parents, my name is Matt and I grew up in central Massachusetts. I went to college at the University of Massachusetts where I graduated in 2010 with my Bachelor of Science in Geology. I have worked as an environmental scientist and field geologist for the past two years.
25 Subjects: including algebra 1, algebra 2, English, reading
...My practicum was at a therapeutic school for children with behavioral and emotional disorders. My Bachelor's program in Special Needs covered how to deal with students who have ADD/ADHD. I
have a mild form of ADD myself, and have learned many of the tricks to successful learning.
22 Subjects: including algebra 2, reading, public speaking, writing
...Outside of tutoring, I have 5 years of research lab experience and real-life application of the science. I have taken math courses through calculus two. I have applied mathematics to many
science courses, including physics and chemistry.
16 Subjects: including algebra 1, algebra 2, Spanish, English
...I have taught courses in Algebra 1, Geometry, Trigonometry, and Pre-calculus as well. I am a licensed, certified teacher for the state of Massachusetts. I am also the advisor for the high
school math club and the advisor of the National Honor Society at a local high school. I have taught: SAT Pr...
12 Subjects: including algebra 2, algebra 1, geometry, SAT math
Related Westminster, MA Tutors
Westminster, MA Accounting Tutors
Westminster, MA ACT Tutors
Westminster, MA Algebra Tutors
Westminster, MA Algebra 2 Tutors
Westminster, MA Calculus Tutors
Westminster, MA Geometry Tutors
Westminster, MA Math Tutors
Westminster, MA Prealgebra Tutors
Westminster, MA Precalculus Tutors
Westminster, MA SAT Tutors
Westminster, MA SAT Math Tutors
Westminster, MA Science Tutors
Westminster, MA Statistics Tutors
Westminster, MA Trigonometry Tutors | {"url":"http://www.purplemath.com/westminster_ma_algebra_tutors.php","timestamp":"2014-04-16T07:31:20Z","content_type":null,"content_length":"24065","record_id":"<urn:uuid:e18ab4e0-62bf-4029-8022-a6ade1546cca>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
What Is Trigonometry?
@everetra - I can give you one simple way to remember your trigonometry formulas: the made-up word “SOHCAHTOA” (pronounced so-kah-toh-ah). “SOH” is used for sine, and it stands for opposite over
hypotenuse. “CAH” is for cosine and stands for adjacent over hypotenuse. “TOA” is for tangent and it stands for opposite over adjacent. I learned this one simple trick and I’ve never forgotten since. | {"url":"http://www.wisegeek.com/what-is-trigonometry.htm","timestamp":"2014-04-19T19:10:16Z","content_type":null,"content_length":"68972","record_id":"<urn:uuid:977806ed-c2c3-470f-9aa6-8dd49fd70149>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
Science of Roundworld
Has Earth been detected? – Another try
Monday, August 6, 2007
Yesterday I posted a modification of the Drake equation that was intended as a speculative tool – no, rather a speculative toy – to assess the probability that Earth and its biosphere have been
detected by at least one alien civilization during the last three billion years. I presented an example according to which there was a 99 percent chance that our planet has been detected.
Let’s now take another try with the equation: If we change just the value of the fraction of extraterrestrial civilizations which conduct exoplanet searches (f[s] in the equation) from 0.1 to 0.01,
the probability that our planet has been detected lowers from 99 to 35 percent. If we further lower the average number of habitable planets per star that has planets (n[e]) from 2 to 0.5 and the
fraction of life-harbouring planets on which intelligence evolves (f[i]) from 0.01 to 0.001, the probability of detection swings to 1 percent. So according to this example it is virtually certain
that Earth has not been noticed.
The point is that both this and the Drake equation are speculative – they can be nice toys to play around with and maybe sometimes even good tools to clear up muddy thinking, but they offer no firm
answers to anything.
Has Earth been detected?
Sunday, August 5, 2007
Given that in the last ten years we have found roughly 250 planets outside our solar system, it is perhaps prudent to ask whether an alien civilization may have already detected our own planet and
its biosphere during the approximately three billion years it has harboured photosynthesising life.
The Drake equation, the famous speculative tool to estimate the number of extraterrestrial civilizations in our galaxy with which we might come in contact, was not intended to address this question
but can be modified to do so. Unfortunately (but perhaps not surprisingly) this exercise necessarily involves some mathematical notation, but bear with me – it is relatively simple and in the end you
will have a tool to answer (or rather, speculate) this question yourself.
The Drake equation reads:
N = R* × f[p] × n[e] × f[l] × f[i] × f[c] × L
where N is the number of extraterrestrial civilizations which we might be able to contact, R* is the number of stars forming in the galaxy per year, f[p] the fraction of those stars that have
planets, n[e] the average number of habitable planets per star that has planets, f[l] the fraction of habitable planets on which life emerges, f[i] the fraction of life-harbouring planets on which
intelligence evolves, f[c] the fraction of civilizations that start to emit detectable signals into space, and L the average number of years such civilizations continue to emit signals into space.
Drake and his colleagues originally came up with N = 10. With different assumptions we can end up with vastly varying N values, ranging for example from 0.0000001 to 5000 (see examples here). But N
is the estimated number of civilizations in the Milky Way which we might be able to contact, not the probability that the Earth and its biosphere have been detected by an extraterrestrial
civilization sometime during the last three billion years. For that, we need to change the equation a bit.
Given a maximum detection distance from us, for each star within that distance there is a possibility that an alien civilization residing in that system has found Earth. That possibility can be
quantified as a probability:
p = f[p] × n[e] × f[l] × f[i] × f[s] × f[d]
where f[p], n[e], f[l] and f[i] are the same as in the Drake equation, f[s] is the fraction of civilizations that conduct exoplanet searches and f[d] is the fraction of Earth-like planets actually
detected in those searches. This equation assumes that life and intelligence emerged always within the last three billion years, excluding the possibility of emergence and extinction before
photosynthesising life appeared on Earth.
Given the probability of single detection above, we can formulate the probability that Earth has not been detected for a given number of stars S within the maximum detection distance:
p[0] = (1 – p)^S
Now let’s plug in some numbers. I assume that alien civilizations are interested in and capable of detecting Earth-like planets within 100 light-years from their home stars. According to the Gliese
Catalogue of Nearby Stars, 3rd Edition, there are 1064 stars within 50 light-years from our solar system (although it is a conservative estimate because not all stars have been catalogued). After a
bit of calculation using the volume equation of a sphere and assuming constant density of stars, we end up with 8512 stars within 100 light-years from us. This will be the value of S. For f[s] and f
[d] we assign values 0.1 and 0.5, respectively, and for the rest of the variables we use the original values of Drake and his colleagues (f[p] = 0.5, n[e] = 2, f[l] = 1, f[i] = 0.01).
So, what is the probability that Earth has not been detected? It is 0.01, so the probability that Earth has been detected by an alien civilization is 0.99 or 99 percent. Now go speculate.
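The arithmetic above can be checked in a few lines of Python; every input below is one of the post's speculative assumptions, not a measured value:

```python
# Per-star detection probability, using the post's assumed values
f_p, n_e, f_l, f_i = 0.5, 2, 1, 0.01   # Drake and colleagues' original figures
f_s, f_d = 0.1, 0.5                     # assumed: fraction searching, fraction detecting

p = f_p * n_e * f_l * f_i * f_s * f_d   # = 0.0005 per star

# 1064 stars within 50 light-years, scaled by volume (2**3) to 100 light-years
S = 1064 * 8                            # = 8512 stars

p_not_detected = (1 - p) ** S           # probability no civilization has found us
print(round(p_not_detected, 2))         # prints 0.01, i.e. ~99% chance of detection
```

Rerunning with the earlier post's alternative values (f_s = 0.01, or additionally n_e = 0.5 and f_i = 0.001) reproduces its 35 percent and 1 percent detection figures.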
Phoenix Mars Lander launches successfully
Saturday, August 4, 2007
Phoenix, the NASA’s latest robotic mission to Mars, lifted off successfully from Cape Canaveral Air Force Station in Florida aboard a Delta II rocket today. After landing on Mars in May 2008 the
mission will study the history of water and habitability potential in the Martian arctic’s ice-rich soil.
Harry Potter and children’s injury prevention
Tuesday, July 24, 2007
The launch of the last Harry Potter book – Harry Potter and the Deathly Hallows – can be seen as both good and bad news for children’s health. In December 2005 researchers from John Radcliffe
Hospital, Oxford, UK, reported in British Medical Journal that they
observed a significant fall in the numbers of attendees to the emergency department on the weekends that of the two most recent Harry Potter books were released. Both these weekends were in
mid-summer with good weather. It may therefore be hypothesised that there is a place for a committee of safety conscious, talented writers who could produce high quality books for the purpose of
injury prevention.
So now, when there has been a new book lauch, we can expect a drop in children’s traumatic injuries – but on the other hand it was the final book.
We seem to have an urgent need for another J. K. Rowling.
Number of known extrasolar planets rising
Tuesday, July 24, 2007
This is a record year for exoplanet research. So far, the statistics of exoplanet.eu show that there have already been 36 new exoplanet candidates either announced in refereed papers and scientific
conferences or included in papers submitted to scientific journals (see figure below).
The figure shows only the exoplanet candidates detected by the radial velocity method. By the end of the year or early next year the COROT space telescope mission is expected to start to report new
candidates based on the transit method. The mission has already reported its first gas giant (but only in a press release, hence it is not included in the statistics above).
Do we live in a computer simulation?
Tuesday, July 24, 2007
In 2003 Nick Bostrom from Oxford University published an interesting article in Philosophical Quarterly. He argued
that at least one of the following propositions is true: (1) the human species is very likely to become extinct before reaching a ‘posthuman’ stage; (2) any posthuman civilization is extremely
unlikely to run a significant number of simulations of its evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief
that there is a significant chance that we shall one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.
Since then he has set up a dedicated website for his ‘simulation argument’. Wikipedia has a related entry. Interesting thought, but don’t lose sleep over it :) | {"url":"http://roundworldscience.wordpress.com/","timestamp":"2014-04-18T23:56:22Z","content_type":null,"content_length":"50790","record_id":"<urn:uuid:aa55e898-687e-49f7-b5d6-5d013e3a961e>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Econometrica, Vol. 82, Iss. 2: Contents
Frontmatter of Econometrica Vol. 82 Iss. 2
Hazardous Times for Monetary Policy: What Do Twenty-Three Million Bank Loans Say About the Effects of Monetary Policy on Credit Risk-Taking?
Non-Manipulable House Allocation With Rent Control
Stable Matching With Incomplete Information
Preference Aggregation With Incomplete Information
Dynamic Mechanism Design: A Myersonian Approach
Dynamic Preference for Flexibility
Returns to Tenure or Seniority?
Macroeconomic Implications of Agglomeration
Optimal Test for Markov Switching Parameters
Local Identification of Nonparametric and Semiparametric Models
Identifying Treatment Effects Under Data Combination
Forthcoming Papers
Backmatter of Econometrica Vol. 82 Iss. 2
In parametric, nonlinear structural models, a classical sufficient condition for local identification, like Fisher (1966) and Rothenberg (1971), is that the vector of moment conditions is
differentiable at the true parameter with full rank derivative matrix. We derive an analogous result for the nonparametric, nonlinear structural models, establishing conditions under which an
infinite dimensional analog of the full rank condition is sufficient for local identification. Importantly, we show that additional conditions are often needed in nonlinear, nonparametric models to
avoid nonlinearities overwhelming linear effects. We give restrictions on a neighborhood of the true value that are sufficient for local identification. We apply these results to obtain new,
primitive identification conditions in several important models, including nonseparable quantile instrumental variable (IV) models and semiparametric consumption-based asset pricing models. | {"url":"http://onlinelibrary.wiley.com/rss/journal/10.1111/(ISSN)1468-0262","timestamp":"2014-04-25T03:25:39Z","content_type":null,"content_length":"42929","record_id":"<urn:uuid:5e9dcec1-a47d-4b76-9bca-af69525a415a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stubborn MuleHave wheelchair, will travel…probablyHave wheelchair, will travel…probably
Have wheelchair, will travel…probably
Spending a couple of weeks down the south coast of New South Wales, spotting dolphins and echidnas, has slowed down my blogging. Fortunately, regular contributor James Glover has once more come to the rescue with a guest post. This time his topic is wheelchairs and air-travel.
Perhaps you’ve heard of a recent court case in which a wheelchair user, Sheila King, took Jetstar to court (and lost) on the basis of the Disability Discrimination Act? If you are a wheelchair user and you book a flight on one of our airline carriers then a fairly obvious thing won’t happen. Unlike on, say, a bus, you won’t be able to board the aircraft in your chair and be strapped in for the journey. What actually happens is that when making the booking you tick a box (or tell the booker on the phone) that you are in a wheelchair. If there are seats available for wheelies when you get to the airport, you will give up your chair and be made to use a specially designed “wheelchair” (it’s a chair, it has wheels) that is designed to fit the narrow aisle of most planes – the narrowness, for you, only apparent when the person ahead of you is blocking the aisle, loading 3 pieces of carry-on luggage into the overhead lockers while chatting to their new friends in the seat they are meant to occupy. We all suffer this situation. These “wheelchairs” are not designed to be used without help; they are more like children’s toy carts and cannot be operated by the user, as the wheels are very small and low down. For a wheelchair user, to be taken out of their wheelchair in a public place can be quite discombobulating. Many wheelchair users develop a personal relationship with their chair – it is, after all, a place you spend many of your waking hours.
Digression. The very first time I was in a wheelchair outside the confines of a hospital ward (it was a hospital wheelchair but is the exact same model I now own, like I said it is personal) I was
being pushed by none other than the proprietor of this very website! Without going into the details let’s just say it was a pretty dramatic event and we both learned a valuable lesson in wheelchair
use and the wheelchair repair workshop at the hospital was kept busy. But I digress.
So here is the thing. According to Google about 1% of the population uses wheelchairs. And a Jetstar plane has about 200 seats so they expect to get about 2 wheelchair users on average per flight. So
what is the problem with only allowing this same number on each flight, as some airlines do? Well the problem is that statistically wheelchair users don’t travel in pairs and sometimes there will be
less than 2 users and sometimes there will be more. Just as if you toss 10 coins sometimes there will be fewer than 5 heads (the average or expected number) and sometimes there will be more. Only on
average will there be 5. In fact it is a simple problem to work out the probability of there being, say, n wheelchair users, given the average of 1% on a 200 seat plane. This is called the Binomial
Distribution. If you have access to Excel, the function BINOMDIST(n, 200, 1%, FALSE) will tell you this probability. Before I give you some numbers I admit that the overall population average may not be
the same as the average flying on planes. It may be less than 1% due to wheelchair users being put off flying. But maybe on some routes it is higher: but I am guessing the annual “snowbird” migration
of retired people from the northern United States to Florida at the start of Winter would track above the 1% rate.
So here are the Binomial probability figures.
Count Probability
0 13%
1 27%
2 27%
3 18%
4 9%
5 4%
6+ 2%
Binomial Probabilities (N=200, p=1%)
For example, assuming a 1% chance that any given passenger on a 200-seat plane is in a wheelchair, the probability that there will be exactly 4 wheelchair passengers wanting seats is 9%. To work out the probability of a passenger being denied a seat on their preferred flight, note that with an airline allowing only two wheelchair passengers per flight, if more than two book then at least one will have to change their travel plans. From the table above, the chance of the flight having only 0, 1 or 2 wheelchair passengers totals 68%, so there’s a 32% chance that at least one wheelchair passenger cannot fly. For any one wheelchair passenger, there is an (n-1)/(n+1) chance of being bumped if n other wheelchair passengers book on the flight. Weighting that by the probability that there are n passengers and adding it up for all n>1 gives a probability of 27%. As a frequent flyer in a wheelchair, you can expect to miss out on a seat quite regularly! [Note: these calculations have been updated: the editor's "corrections" were undone. Ed.]
I am quite fortunate now that I no longer need to travel in my wheelchair. But as I still use a walking stick I wait for everyone else to get off the plane. You sit there, looking behind you to see
if everyone else has left. But there are always these strange people who seem to sit there at the back of the plane and wait for 10 minutes or more, after everyone has disembarked, before even
moving. You wonder why the airline staff don’t just hurry them off. I assume they aren’t disabled, because they are sitting at the back of the plane. If airlines really had a problem with the extra time it takes to get wheelies off the plane, they could make this up by just moving these people along.
When I first read about this case my initial response was that being disabled and traveling is a bit of challenge anyway and you just get on with it. But the more I thought about it I wondered if the
airlines just took it for granted that wheelchair users would change their plans to fit in with the rules. I am glad Sheila King took the issue up!
{ 20 comments… read them below or add one }
Hey Jimbo!
I don’t suppose those limitations should affect you when you travel to the Davos 2012 meeting this week. ;-D
If you need a body-guard, just drop me a line!
In my original version of this post I asserted that the probability of getting bumped in a wheelchair was 26%. The Mule (unilaterally) changed this to 14%, calculated as the probability of there being n wheelchair users on the plane with, for n>2, (n-2) of them bumped, so prob = Sum((n-2)/n*B(n,N,p)) = 14%, which for N=200, p=1% is the correct value of that sum. My figure was based on calculating the expected number of people being bumped divided by the expected number of wheelchair users. The latter is just pN while the former is Sum((n-2)*B(n,N,p)) for n>2, so prob = Sum((n-2)*B(n,N,p))/pN = 26%. To reconcile them, note that the Mule hasn't taken into account that the probability of being a wheelchair user on a plane with, say, 3 wheelchair users isn't B(3,N,p) (the prob of a plane having 3 wheelchair users); you also have to take into account that there are, for example, 3/2 times as many chances of being in that cohort as in a cohort with the same binomial prob but only 2 members. The correct formula is then: prob = Sum((n-2)/n*B(n,N,p)*(n/Np)) = Sum((n-2)*B(n,N,p)/pN), which is my first formula. In order to be sure I simulated this process and got the same result.
@magpie – bodyguard? As in Kevin Costner and Whitney Houston?
Of course!
But you better not sing to me or come up with funny ideas (not that there’s anything wrong with that: just that I’m too old to change habits).
@magpie – if this doesnt make your blood boil and break into “I Will Always Love You Poor Billionaires” then nothing will. http://www.bloomberg.com/news/2012-01-24/
I suspect one guy (the one speaking more like C. Montgomery Burns) did say the truth: “it’s all for the cameras”.
In any case, seeing is believing.
In case anyone is wondering why I went to the extreme of simulating this situation to prove (to myself) I was right I should point out that in the 14 years or so of knowing The Mule I have only
ever bested him in an argument once (about the definition of the word “Canonical” of all things). So I wanted to be confident before calling him on it. I am looking forward to an admission of him
being wrong. If so I will copy it and frame it and put it on my wall. And get him to sign it. With witnesses. Oh yes I will.
Actually, I was wondering about that…
So, Stubborn, what do you have to say in your defense?
Haven't checked it yet, but in my defence, I did call the Zebra to discuss the calculation…the response was something along the lines of “change whatever you like”.
P.S. @Zebra don’t forget you can edit your own posts too!
In my defense (note spelling) there are very few people who I am reluctant to argue against on the spot, if they suggest I might be wrong, before going back to recheck my working. The Mule is one
of them. Possibly the only one.
@mule – reframing the argument?
OK, I’ve thought about it again: my initial attempt was wrong: I didn’t condition on the fact that the given wheelchair passenger was on the plane. I’ve redone it with n representing the number
of other wheelchair passengers on the plane (n=0, 1,…). This probably ends up being equivalent to your approach.
Mr. Zebrovsky,
Our friend may be stubborn, but he’s fair, eventually… ;D
The assumption that the proportion of wheelchair users seeking to fly matches their proportion in the population would almost certainly be incorrect. Socioeconomic factors and other illnesses would reduce the probability that wheelchair users would fly. It is surprising that the rate of refusal wasn’t raised in the court case.
@ken – I generally agree – the inconvenience alone would deter many from frequent air travel. On the other hand, given that on average people in wheelchairs are economically worse off, they may be more inclined to travel on the budget airlines, and this may have pushed the numbers back up towards the average.
Mule – please consider retitling said article to
“Have wheelchair, will travel…probability”
First, a note from our editor-in-chief: “For some as yet undiagnosed reason, the Stubborn Mule’s email subscription service re-sent a post from January. I will be digging into the cause and hope
it doesn’t happen again.” Well, I kinda liked blissfully reading it thinking it was brand new. It’s still pretty current with NDIS and the topic below being talked about.
Now… “If you can’t solve it, upscope it.” – (don’t know who said it, but someone smart must have… otherwise I’ll take the credit ;))
N2S, this is a sensitive topic, of course. (Btw, disclaimer: I am one of those ‘strange people’ lingering about at the back of the plane, aloof in my reading of something more interesting than
standing in line with the other sheeple to the leave the plane, pick up luggage, go through immigration, customs, taxi rank… you get the drift. People like me are patient enough to walk slowly
behind less mobile people and assist them, if need be). So, I had this idea, which I’d have very few better venues to share other than with you problem-solving lot. Could plane seat arrangements
be made mobile and configurable prior to every flight?
While doing that for wheelies would be noble, it’d be hard to get the business case through. So, here comes the upscoping. I’m sure you quants would be aware of Bharat P Bhatta’s paper (Journal
of Revenue and Pricing Management, 2013) on weight-based airfaring: http://www.smh.com.au/travel/travel-news/airfares-should-be-pay-what-you-weigh-professor-20130325-2gp12.html
And you may also be aware that Samoa Air started doing just that last month: http://edition.cnn.com/2013/04/02/travel/samoa-air-fare-by-weight
There obviously are a lot of criticisms against this idea, which TBH does sound like an economist’s armchair pipe dream. Most of them are related to discrimination, and no one wants to do that,
of course. I even wondered whether companies would slant toward Asian women for their travelling salesforce (which, according to some willing customers, would not be unwelcome). While many people would nonetheless agree with overweight people paying more (as we know, those gluttons are sinners… not!), no one – and I repeat: no one! – would agree with charging wheelies more.
Course not. And ‘course it can be easily excluded from the weight calc. And so can any other vital equipment (no, iPads are not vital!… ok, unless adapted for medical use).
However, one must ask whether discrimination is not already in practice. It is a self-evident truth that aircraft seat space has dwindled over the last couple of decades. Is that not favouring
lighter, smaller people? Under flat-rate airfare, all an airline needs to do is reduce seat space (to the point of discomfort of larger, heavier people), put more of them in the airplane, and
give enough of a discount over the competition to bite into the shorties’ market (ahem, Tiger!). So, it ain’t price discrimination, but it sure is supply discrimination.
Incredibly, the biggest offender in this space is Finnair (those poor vikings!)! And we, in the beautiful Kingdom of Oz, are lucky to have thou most generous, Virgin Australia: http://
Libertarians amongst us (which I suspect are over-represented in this forum) will immediately say: “hey, it’s a free, competitive market; let it be”. So I won’t propose further regulation, but
rather a business idea (herein thrown away into the public domain).
What if the seat space could be configurable for each passenger composition? It’s like Tetris, in’it? If you were (or are!) a large guy (no, not that, you know what I mean!) or gal (nothing wrong
with that either), wouldn’t you pay more than the average airfare for a little more comfort (meaning un-numbed, fully-functional limbs that you can actually use to get off the sardine can)? Yeah,
we already have some of that; it's called business class. Or economy+ (honestly??). But I'm talking about a fully-configurable seat plan. If you're heavier (presumably bigger, unless extremely
dense!), then yes, you'd pay more, but you'd get more than elsewhere too. Current seats are already removable, as we'd all imagine, so only the seat trains would need to be made to move slots.
Yes, there’d be safety issues. And scheduling issues (to allow for reconfiguration). On the positive side, it’d mean more thorough security checks on the seats as they get moved around.
“So, what does that have to do with wheeling passengers?”, you ask. A plane like that could accommodate however many wheelies are booked in for any given flight. No bumps, no excuses.
Btw, Virgin Australia’s aircraft are made in Brazil, and, as we know, Brazilians just need more space: http://youtu.be/XuybWspTU1U?t=1m20s
Three coins are tossed. Find the probability that no more than one coin lands heads up
Number of results: 102,213
math answer check
Three coins are tossed, and the number of heads observed is recorded. (Give your answers as fractions.) (a) Find the probability for 0 heads. My answer of 0/3 was marked incorrect. (b) Find ...
Saturday, June 1, 2013 at 4:34pm by Nora 4
Advanced Math
The probability of getting 2 heads and 1 tail when three coins are tossed is 3 in 8. Find the odds of not getting 2 heads and 1 tail. ANSWER: 5:8? Three Coins are tossed. Find the probability that
exactly 2 coins show heads if the first coin shows heads.? ANSWERS: Could it be ...
Wednesday, June 10, 2009 at 5:36pm by Patrick
Three coins are tossed. Find the probability that no more than one coin lands heads up.
Thursday, November 19, 2009 at 11:49am by shree
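The title question can be settled by enumerating the eight equally likely outcomes of three fair coins; a minimal Python sketch (assuming fair, independent coins):

```python
from itertools import product
from fractions import Fraction

outcomes = list(product("HT", repeat=3))                 # 8 equally likely outcomes
favorable = [o for o in outcomes if o.count("H") <= 1]   # "no more than one head"
p = Fraction(len(favorable), len(outcomes))
print(p)  # 1/2
```

"No more than one head" covers the 1 + 3 = 4 outcomes with zero or one head, giving probability 1/2.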
math probabilty
Find the (theoretical) probability of the given event, assuming that the coins are distinguishable and fair, and that what is observed are the faces uppermost. Six coins are tossed; the result is at
most one head.
Sunday, December 5, 2010 at 12:10am by Liz
if three coins are tossed, what is the number of equally likely outcomes? 3, 4, 6, 8, 9
Wednesday, March 19, 2014 at 2:04pm by HELEN
Twenty-three coins are tossed; in how many ways can you have at least two tails?
Friday, May 3, 2013 at 1:08pm by marlene
Consider a collection of n biased coins, each showing Heads with probability p and Tails with probability 1-p, independently of the others. The coins are tossed and all coins showing Heads are
collected together and tossed again. Write down an expression for the probability mass ...
Sunday, January 22, 2012 at 4:45am by sand
Suppose three coins are tossed. What's the probability of having exactly 1 tail, 3 tails, or all the same?
Friday, April 20, 2012 at 9:37am by Carol
three coins are tossed simultaneously ,what is the is the possible outcome of getting two head one tail ???
Wednesday, October 3, 2012 at 11:31am by TUHITUHI
Three coins are tossed, and the number of heads observed is recorded. (Give your answers as fractions.) (a) Find the probability for 0 heads. (b) Find the probability for 1 head. (c) Find the
probability for 2 heads. (d) Find the probability for 3 heads.
Friday, July 6, 2012 at 2:19am by jilla
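All four parts of the question above come from the same 8-outcome enumeration; a short sketch assuming fair coins:

```python
from itertools import product
from fractions import Fraction
from collections import Counter

# Tally head counts across the 8 equally likely outcomes of 3 fair coins
counts = Counter(o.count("H") for o in product("HT", repeat=3))
dist = {k: Fraction(v, 8) for k, v in sorted(counts.items())}
print(dist[0], dist[1], dist[2], dist[3])  # 1/8 3/8 3/8 1/8
```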
College math
Three coins are tossed. How many ways are there to get more than 2 tails, exactly 2 heads, only 1 tail. Thanks in advance for the help!
Thursday, April 25, 2013 at 12:34pm by Lorie
Clayton has three fair coins. Find the probability that he gets two tails and one head when he flips the three coins.
Sunday, January 31, 2010 at 8:47pm by haseeb
two coins are tossed list all the possible outcomes what is the probability that both of the coins land heads-up?
Monday, September 3, 2012 at 3:48am by frank
Susan has some $2-coins and $5-coins. If there are 18 coins and the total amount of these coins is not less than $75, find the minimum number of $5-coins.
Sunday, October 2, 2011 at 12:17am by Nicole
three fair coins are tossed. What is the probability that at least one is tail. Enter probability as a fraction
Sunday, March 3, 2013 at 8:42pm by Andrzej
9 fair coins are independently tossed in a row. Let X be the random variable denoting the number of instances in which a Head is immediately followed by a Tail during these 9 tosses. The variance of
X has the form a/b, where a and b are coprime integers. What is the value of a...
Saturday, April 20, 2013 at 7:17am by Zeneth
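The 2^9 = 512 sequences in the question above are few enough to enumerate outright, which pins down the variance exactly; a sketch assuming fair, independent tosses:

```python
from itertools import product
from fractions import Fraction

def ht_count(seq):
    """Number of positions where a Head is immediately followed by a Tail."""
    return sum(1 for a, b in zip(seq, seq[1:]) if (a, b) == ("H", "T"))

vals = [ht_count(s) for s in product("HT", repeat=9)]
n = len(vals)                                   # 512 equally likely sequences
mean = Fraction(sum(vals), n)
var = Fraction(sum(v * v for v in vals), n) - mean ** 2
print(mean, var)  # 2 5/8
```

Enumeration gives Var(X) = 5/8, which under the a/b reading of the question suggests a+b = 13.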
Math - Algebra
The number 350 is separated into three parts (numbers). Dividing the first number by the third will give 2, and dividing the second number by the first number will give 5 and a remainder of 25. Find
the three numbers. Andy has 145 coins in his coin bank. The number of 5-...
Sunday, January 5, 2014 at 12:14am by Ashley Kate
9 fair coins are independently tossed in a row. Let X be the random variable denoting the number of instances in which a Head is immediately followed by a Tail during these 9 tosses. The variance of
X has the form a/b, where a and b are coprime integers. What is the value of a+...
Saturday, April 20, 2013 at 7:11am by Ichigo
Start with one quarter, 3 dimes. That is 55 cents, and you have 5 coins left to make 38 cents? I don't see how. So try 2 quarters, 4 dimes. That is six coins, which is 90 cents. You have three coins
left, can you make 3 cents with those three coins?
Monday, November 5, 2007 at 6:14pm by bobpursley
Can someone tell me if this is correct: How many forces act on an upwardly tossed coin when it gets to the top of its path? A. one, the force due to gravity B. two, gravity & the force in the coin
itself C. three, gravity, the coin's internal force & a turnaround force D. none of ...
Friday, September 21, 2012 at 8:20pm by Sue
Three coins are selected from 10 coins: 4 dimes, 4 nickels, and 2 quarters. In how many possible ways can the selection be made so that the value of the coins is at least 25 cents? I know that the
total outcomes equals 120, but how do I find how many of these have the value of...
Saturday, February 21, 2009 at 9:08pm by Zoe
Two dice are rolled and two fair coins are tossed. Let X be the sum of the number of spots that show on the top faces of the dice and the number of coins that land heads up. The expected value of X
is _____ .
Friday, July 22, 2011 at 8:31pm by mike
5 coins are tossed. How many outcomes are possible?
Monday, March 17, 2008 at 9:04pm by Jesse
A bag contained a number of 20-cent, 50-cent and $1 coins. 1/3 of the 20-cent coins is equal to 2/3 of the 50-cent coins. There are 3/5 as many 50-cent coins as $1 coins. If the total value of these
coins is $23.10, how many coins are there altogether? (non-algebra solution ...
Wednesday, December 28, 2011 at 9:20am by joi
common fraction
Sara tossed a fair coin five times, and Kaleb tossed a fair coin three times. There were five heads and three tails in the eight tosses. What is the probability that either Sara or Kaleb tossed
exactly three heads? Express your answer as a common fraction.
Wednesday, June 27, 2012 at 1:15pm by Anonymous
How many possible outcomes are there for 15 coins tossed simultaneously?
Sunday, April 17, 2011 at 8:30pm by Zac
A game is played flipping three coins. If all three coins land heads up you win $15. If two coins land heads up you win $5. If one coin lands heads up you win $2. If no coins land heads up you win
nothing. If you pay $4 to play, what are your expected winnings?
Sunday, December 20, 2009 at 11:26pm by Michelle
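The expected winnings in the game above can be checked by enumerating all eight equally likely outcomes; a minimal Python sketch (dollar payouts keyed by head count, fair coins assumed):

```python
from itertools import product
from fractions import Fraction

payout = {3: 15, 2: 5, 1: 2, 0: 0}  # dollars, keyed by number of heads
ev = sum(Fraction(payout[o.count("H")], 1) for o in product("HT", repeat=3)) / 8
print(ev - 4)  # 1/2
```

The gross expectation is $4.50, so after the $4 entry fee the expected winnings are +$0.50 per game.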
5th grade math
I need to use quarters, dimes, nickels and pennies to make the following: $1.53 in 12 coins; $2.53 in 20 coins; $2.32 in 15 coins; $2.00 in 22 coins; $1.45 in 10 coins; $1.76 in 16 coins; $0.85 in
11 coins; $1.01 in 12 coins; $1.29 in 15 coins
Tuesday, September 24, 2013 at 10:48pm by Kyle
You have six coins. Three of them are 25-cent coins. What fraction of the coins are 25-cent coins?
Sunday, November 25, 2012 at 9:47pm by huxley
Nine coins are tossed; in how many ways can you have at least two tails?
Sunday, January 17, 2010 at 3:21pm by Anonymous
Intro to probability
Wait a minute, they said two coins. Assume each tossed once. n = 2
Saturday, February 23, 2008 at 9:52pm by Damon
Word Problem.
A game is played by flipping three coins. If all three coins land heads up you win $15. If two coins land heads up you win $5. If one coin lands heads up you win $2. If no coins land heads up you
win nothing. If you pay $4 to play, what are your expected winnings?
Sunday, March 7, 2010 at 8:40pm by Lyndse
Two coins are tossed, find the probability that two heads are obtained. Note: Each coin has two possible outcome H(heads), and T (tails).
Wednesday, May 30, 2012 at 10:40am by miss c
2 coins and 1 six-sided cube are tossed. What is the probability of getting two heads and a four?
Wednesday, June 17, 2009 at 8:33pm by Elizabeth
Need to know the number of possible outcomes. If you spin a spinner with sections A, B, C, D, and E and two coins are tossed
Monday, January 9, 2012 at 7:11pm by Linda
A bag contains forty coins, all of them either 2c or 5c coins. Let there be x 2c coins and y 5c coins. If the value of money in the bag is $1.55, find the number of each kind.
Thursday, May 12, 2011 at 9:36am by AJ
You are asked to draw THREE coins from a jar one at a time without replacement. The jar contains FIVE dimes, TEN pennies, and EIGHT quarters. (a) What is the probability that all three coins are
Tuesday, July 10, 2012 at 6:34pm by Anonymous
jeremy has 19 coins of 2 different values. The total value of the coins is $1. What are the coins and how many of each coin does he have?
Thursday, February 23, 2012 at 7:32pm by sam
10th grade
well first you find how many coins there are in all. In this problem the entire amount is 6 coins, 3 of the 6 coins are dimes, if you pull out one dime there are now 2 dimes out of 5 coins ( because
you took one away) so the answer would be two out of five chances.
Sunday, December 5, 2010 at 11:26pm by Meagan
4th grade
Bobby has only pennies, dimes, and quarters. One half of Bobby's coins are pennies. One fourth of his coins are quarters. Three of his coins are dimes. How many coins does Bobby have? please help me
figure this one out?
Wednesday, April 8, 2009 at 5:57pm by Lynnese
Could someone please help me with those 2 problems...thanks a lot for any of your help Use an algebraic approach to solve the problem. Find three consecutive integers whose sum is 57. Suppose that
Maria has 140 coins consisting of pennies, nickels, and dimes. The number of ...
Friday, September 10, 2010 at 5:30pm by Joe
selecting three (3) coins, but assume that there are 5 dimes, 4 nickels, and 2 quarters. In how many possible ways can the selection be made so that the value of the coins is at least 25 cents?
Wednesday, September 15, 2010 at 10:41pm by Shawn
Select three (3) coins, but assume that there are 4 dimes, 4 nickels, and 2 quarters. In how many possible ways can the selection be made so that the value of the coins is at least 25 cents?
Wednesday, January 26, 2011 at 11:29pm by Amanda
father gives his son three coins totaling 55 cents for school lunch. What coins did the boy receive if one coin is not a nickel?
Monday, December 3, 2012 at 8:54am by suzy
Sasa had some 20-cent and 50-cent coins. 7/8 of the coins were 20-cent coins and the rest were 50-cent coins. After Sasa had spent $72.50 worth of 50-cent coins and 5/7 of the 20-cent coins, she had
2/7 of the coins left. Find the total amount of money Sasa had left.
Thursday, April 19, 2012 at 12:03am by Al
math homewok
There are nine coins on the table: 5 nickels, 3 quarters and 1 dime. Tom, Bill and Drew each pick up three coins. Bill has 25 cents more than Tom. Drew has as much money as Tom and Bill together.
What three coins did Drew pick up? my answer: Drew ===) 2 quarters + 1 nickel. Is ...
Wednesday, October 24, 2012 at 10:41pm by Bob
statistic 125
I have three coins, two of which are fair and the other is a double header. Suppose a coin is selected using random selection and tossed twice, one after the other. If I got two heads, what is the
probability that the coin that was selected was the double header?
Wednesday, March 21, 2012 at 7:08pm by vanessa
4 fair coins are tossed. What is the probability that all the outcomes are heads?
Wednesday, November 28, 2012 at 11:49am by Tina
Dave started his coin collection with 25 coins. The first week he added 2 coins, the next week he added 4 coins, and the third week he added 6 coins. If Dave continues adding coins in the same way,
how many coins will he have in his collection after one year (52 weeks)? My ...
Friday, June 29, 2012 at 1:11pm by luckybee
1. What do you think the judge did? He made the greedy baker hear the sound of three gold coins. By doing that, he paid for the smell of delicious cakes that the poor traveler used. Do you agree with
the judge's decision or not? If you don't agree with the judge's decision, ...
Tuesday, November 10, 2009 at 1:54am by rfvv
Bobby has only pennies, dimes, and quarters. One half of Bobby's coins are pennies. One fourth of coins are quarters. Three of his coins are dimes. How many coins does Bobby have?
Sunday, February 5, 2012 at 1:38pm by Anonymous
Andrew has 7 20c coins and 5 50c coins in his piggy bank. What is the smallest number of coins he might have to shake out to be certain of having 2 50c coins?
Wednesday, April 25, 2012 at 12:12am by Carolyn
Shirley has 18 coins. One sixth of the coins are quarters, one third of the coins are dimes, and one half of the coins are nickels. What is the value of Shirley's coins?
Monday, May 3, 2010 at 12:52am by Mia
Intro to probability
Two balanced coins are tossed. What are the expected value and the variance of the number of heads observed? ================================== I'm confused on how to start. I don't know how many
times im suppose to toss coin, and how to set up my table for the expected value ...
Saturday, February 23, 2008 at 9:52pm by amelie
Liza has three hundred seventeen coins in her collection. Tamca has found five hundred eighty-one coins in her collection. Tamca has how many more coins than Liza?
Monday, October 25, 2010 at 9:08pm by jojo
aig math
Janet has 73 coins. Linda has 28 more coins than Janet, but 17 fewer coins than Alan. Sam has 33 fewer coins than Alan. How many more coins does Sam have than Janet? My answer is 12 coins, am I
right? If not please tell me your answer and how you got it.
Sunday, March 10, 2013 at 6:37pm by lidia
A person has 1 & 2 rupee coins with him. The total number of coins with him is 50. The total amount with him is 75. Find how many 1 & 2 rupee coins there are with him.
1. What do you think the judge did? He made the greedy baker listen to the sound of three gold coins dropping on the table. By doing that, he paid for the aroma of delicious cakes that the poor
traveler smelled. Do you agree with the judge's decision or not? If you don't agree...
Tuesday, November 10, 2009 at 1:54am by Writeacher
Three dimes are tossed at the same time. What is the probability that all three land on heads?
Thursday, October 17, 2013 at 3:30pm by Anonymous
Three coins, a half-dollar, a quarter and a nickel, are tossed at the same time. To say that there is an outcome xyz means that the half-dollar landed x, the quarter landed y, and the nickel landed
z. The eight equally likely possibilities can be listed as follows: HHH HHT HTH HTT THH...
Monday, May 14, 2012 at 11:16pm by jose
5th Grade Math
divide the coins into four equal numbered piles. How many coins are in each pile? Three of those piles are pennies, and one is nickels. So what is the value of the money?
Saturday, August 29, 2009 at 3:43pm by bobpursley
Grade 5 Maths
A coin box contained some twenty-cent and fifty-cent coins in the ratio 4:3. After 20 twenty-cent coins were taken out to exchange for fifty-cent coins of the same value and put back in the box, the
ratio of the number of twenty-cent coins to the number of fifty-cent coins ...
Sunday, January 19, 2014 at 11:37pm by LARA
Not enough information. What kind of coins? The US has only 4 commonly used coins (hard to find half-dollars or 'silver dollars'). Canada has 6 commonly used coins. The Euro has a 20-cent coin, etc.
Sunday, January 9, 2011 at 8:39am by Reiny
GEOmetry(Special Coin Placement)
Three coins are randomly placed into different positions on a 4×10 grid. The probability that no two coins are in the same row or column can be expressed as a/b where a and b are coprime positive
integers. What is the sum of a+b?
Tuesday, April 2, 2013 at 3:56am by please help me out!!
Karen collects local and foreign coins. Of the coins in her collection, 1/4 are foreign coins. Of the foreign coins, 2/5 are from Mexico. What fraction are foreign coins that are not from Mexico?
Tuesday, January 29, 2013 at 7:39pm by shohanur
There are 10 similar coins, but 2 of them are fake. The fake ones are lighter than the real ones, and both fakes have the same weight. You are given a balance scale, and you should determine which
coins are fake. At least how many weighings do you need to find the fake coins?
Wednesday, December 23, 2009 at 9:42pm by Jill
You have 1000 coins. Start by placing one coin in its own "stack." Next to the first stack, make a stack of two coins. Then make a stack of four coins and continue creating stacks of coins, each
stack twice as high as the previous one. Now I figured it would be 13 stacks by ...
Sunday, December 1, 2013 at 7:20pm by Terri
Three coins are randomly placed into different positions on a 4×10 grid. The probability that no two coins are in the same row or column can be expressed as a/b where a and b are coprime positive
integers. What is the value of a+b?
Monday, April 1, 2013 at 5:44am by rohit
I don't know what country you are in, but I am not aware of "20c coins." To be absolutely certain of getting 2 50c coins, you would have to shake out 9 coins. This is assuming the worst scenario
that all the 20c coins come out first.
Wednesday, April 25, 2012 at 12:12am by PsyDAG
math answer check
Tossing a coin was a good idea, but you can't control what happens. Better would have been to take three coins and line them up, then ask what are all the possible outcomes of 3 coins in a row? and
start flipping them around.
Saturday, June 1, 2013 at 4:34pm by Steve
Six fair coins are flipped; the probability that exactly three coins turn up heads is T. If the coins are flipped again, the probability that exactly 16T coins turn up heads is S. If the six coins
are flipped again, the probability that exactly 32S coins turn up heads is Q...
The ratio of the number of coins Azam had to the number of coins Eddie had was 3:7. Eddie gave 42 coins to Azam and they ended up having the same number of coins. How many coins did each person have
at first?
Sunday, September 30, 2012 at 7:07am by Zayn
9 fair coins are independently tossed in a row. Let X be the random variable denoting the number of instances in which a Head is immediately followed by a Tail during these 9 tosses. The variance of
X has the form a/b, where a and b are coprime integers. What is the value of a+b?
Thursday, April 18, 2013 at 3:11am by Daniel
Cheryl gave Brenda 7 coins worth 92 cents. Two coins were quarters. What were the other 5 coins?
Monday, November 24, 2008 at 4:10pm by Tracy
Lavere has sixty-one coins, all of which are dimes and quarters. If the total value of the coins is $9.85, how many of each kind of coin has she?
Sunday, March 8, 2009 at 10:15pm by ranujan
Two coins are tossed in order. What is the probability of getting a head on the first coin and then getting a tail on the second coin?
Saturday, June 13, 2009 at 12:03pm by Maya
A game is played by flipping three coins. If all three coins land heads up you win $15. If two coins land heads up you win $5. If one coin lands heads up you win $2. If no coins land heads up you
win nothing. If you pay $4 to play, what are your expected winnings? Answer ...
Tuesday, April 13, 2010 at 9:10pm by nicole
Zach has two pennies, a nickel, three dimes and two quarters. If he picks two of the coins at random, how many distinct values are possible for the two coins combined?
Friday, September 20, 2013 at 7:08pm by Anonymous
Mary had 6 pennies and 4 nickels. If she took three coins, what is the probability that all the coins were pennies? I think it's 18/24.
Sunday, March 30, 2008 at 5:21pm by Ktowns
the value of 2 coins is 35 cents. one of the coins is not a quarter. what are the 2 coins
Monday, June 7, 2010 at 10:47pm by Jordan
grade 9 math!!!!!!!!!! please help
Lavere has sixty-one coins, all of which are dimes and quarters. If the total value of the coins is $9.85, how many of each kind of coin has she?
Sunday, March 8, 2009 at 10:33pm by selena
the ratio of the number of coins Azam had to the number of coins Eddie had was 3:7.Eddie gave 42 coins to Azam and they ended up having the same number of coins.how many coins did each person have at
first? pls show ur answer purely!
Friday, February 3, 2012 at 12:26am by Da S
Jo has 37 coins (all nickels, dimes and quarters) worth $5.50. She has 4 more quarters than nickels. How many of each type of coins does she have? d=#dimes n=#nickels q=#quarters. ==============
d+n+q=37 n+4=q 0.1d + 0.05n + 0.25q = 5.50 ============================= three ...
Monday, January 1, 2007 at 11:45am by Kt
Julian has 10 coins. Sue has 5 coins. There is a total of 4 quarters. Sue has more than $1.00. Sue's coins are worth twice as much as Julian's coins.
Monday, May 16, 2011 at 11:29am by Jon
the ratio of the number of coins Azam had to the number of coins Eddie had was 3:7.Eddie gave 42 coins to Azam and they ended up having the same number of coins.how many coins did each person have at
first? pls sh
Friday, February 3, 2012 at 12:25am by Da S
Hi, I need some help with this question. Could anyone help? Tammy has 15 coins that total $1. What coins does she have and how many of each? Is there more than one answer? Find all the possibilities.
Thanks, Hannah
Monday, August 12, 2013 at 6:33pm by Hannah
You have 13 bags of gold coins. Out of the 13 bags, there is one bag of gold coins that is lighter. Assume that one real gold coin is 10g and a fake is 9g. Now, you are given an electronic beam
balance. Find the minimum number of weighings to figure out which bag of gold coins is lighter.
Friday, September 30, 2011 at 11:40am by alya
I do not know how to answer this question. Would someone please help me with each of the four question below? Thank you. Question 1. Write the sample space for the outcomes of tossing three coins
using H for heads and T for tails. Question 2. What is the probability for each ...
Thursday, February 17, 2011 at 7:57am by Dave
Suppose you toss three fair coins. You win 32 cents if all three coins come up the same. You win 8 cents if exactly 1 head occurs, and you win 48 cents if exactly 2 heads appears. Would you pay 30
cents to play this game? If not, then how much would you be willing to pay?
Sunday, November 25, 2012 at 4:47pm by cd
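The game above can be priced by enumerating the eight equally likely outcomes; a minimal sketch (payoffs in cents, fair coins assumed):

```python
from itertools import product
from fractions import Fraction

def payoff(o):
    """Winnings in cents for one outcome of three fair coins."""
    h = o.count("H")
    if h in (0, 3):      # all three coins come up the same
        return 32
    if h == 1:           # exactly one head
        return 8
    return 48            # exactly two heads

ev = Fraction(sum(payoff(o) for o in product("HT", repeat=3)), 8)
print(ev)  # 29
```

The fair price works out to 29 cents, so paying 30 cents loses one cent per game on average.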
data management grade 12
Cecile tosses 5 coins,one after the other. a) How many different outcomes are possible? b) In how many ways will the first coin turn up heads and the last coin turn up tails? c) In how many ways will
the second, third and fourth coins all turn up heads? d) would the same ...
Friday, February 5, 2010 at 8:35pm by Paul
Split the coins into three groups, A, B and C, consisting of 3 coins in each group. Weigh A against B. 1. If A=B, then the counterfeit is in group C. Weigh any two coins in group C and figure out
the rest. 2. If A≠B, then the counterfeit is in the lighter group. Weigh any two coins ...
Sunday, November 28, 2010 at 9:52pm by MathMate
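MathMate's two-weighing strategy can be sketched in code; the 10g/9g weights and the function name are illustrative assumptions, not from the thread:

```python
# Find the single lighter counterfeit among 9 coins in two weighings.
def find_light(coins):
    """coins: list of 9 weights with exactly one strictly lighter coin."""
    a, b, c = coins[0:3], coins[3:6], coins[6:9]
    # Weighing 1: A vs B selects the suspect group of 3
    if sum(a) == sum(b):
        group, base = c, 6
    elif sum(a) < sum(b):
        group, base = a, 0
    else:
        group, base = b, 3
    # Weighing 2: compare two coins of the suspect group
    if group[0] == group[1]:
        return base + 2
    return base if group[0] < group[1] else base + 1

coins = [10] * 9
coins[5] = 9              # plant the light counterfeit at index 5
print(find_light(coins))  # 5
```

Two weighings suffice because each weighing has three outcomes, so two of them distinguish up to 9 possibilities.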
In each of the following situations, state whether or not the given assignment of probabilities to individual outcomes is legitimate, that is, satisfies the rules of probability. If not, give
specific reasons for your answer. a.) when a coin is spun, P(H)=0.55 and P(T)=0.45. b...
Wednesday, November 2, 2011 at 9:24pm by Mandy
Cindy had $1.08 using 9 coins. She had as many pennies as dimes. There must be 4 pennies as 9 pennies would not allow for any other coins. Since there are supposed to be as many dimes as there are
pennies, 4P + 4D = 44 cents. That leaves 65 cents to come from 1 coin which is ...
Wednesday, May 21, 2008 at 7:56pm by tchrwill
ESP Routines
Hey, so far I know 5 ESP routines, but only 2 of them are decent.... Here they are.
[B]ESP 1 - Between 1 and 4[/B]
The simplest: ask them to think of a number between 1 and 4; the answer is usually 3.
[B]ESP 2 - Between 1 and 10[/B]
Ask them to think of a number between 1 and 10; the answer is usually 7.
[B]ESP 3 - Between 1 and 50[/B]
This one is a bit harder; the answer is usually somewhere in the 30s, and about 50% of the time it's 37.
[B]ESP 4 - Vegetables[/B]
Ok, this is one of the more interesting ones. It works on the principle of accessing one side of the brain a fair bit, then quickly changing sides and asking for an answer (you are asking for the
first vegetable that comes into their brain; it is more than likely carrot). Here's the method I have used; it has worked successfully 98% of the time.
First, write down carrot on a piece of paper, as that is the most likely answer, due to its colour. When they switch sides of the brain, the mind reaches for the most comforting colour possible.
This is orange. And basically the only vegetable which is orange is carrot. Other possibilities are pumpkin and sweet potato.
Fire simple mathematical equations at them quickly such as
Then say "Ok, say 6 over and over again till I say stop" - it does not have to be 6, it can be anything
"STOP, what is the first vegetable that comes into your mind"
If she says carrot - show her the piece of paper
If she says something different - say that she is very unique and that is an amazing quality (and then maybe do a freezeout)
[B]ESP 5 - Grey Elephants in Denmark[/B]
I usually only use it if I have failed in a different ESP test, as it is VERY effective.
Think of a number between 1 and 10 - dont tell me
*she says ok*
Times this number by 9
Add the digits of the answer together - this will always result in the answer 9... so this mathematical part is foolproof
Subtract 5 from this answer
Convert this answer into a letter, say like A=1, B=2 etc.etc.
Think of a country that starts with this letter (it will more than likely be Denmark; the only countries which start with the letter D are Denmark, Djibouti, Dominica and the Dominican Republic)
Think of an animal that starts with the second letter of this country, say from a circus or something (this will almost guarantee it is an elephant, though they could choose echidna or emu, which is
very common for Australians)
Think of the colour of this animal
"I just have one more question, have you been drinking much? cause your answer is quite weird, there are no grey elephants in Denmark"... Works like a charm, they always are amazed. HUGE DHV.
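The arithmetic this routine leans on (the digit sum of 9×n is always 9 for n from 1 to 10, so subtracting 5 always lands on 4, the letter D) can be verified directly; a minimal Python sketch, with
n=7 as an arbitrary example:

```python
# Any n in 1..10, multiplied by 9, has digit sum 9, so every spectator
# ends up at 9 - 5 = 4, which maps to the letter D (using A=1, B=2, ...).
def digit_sum(n):
    return sum(int(d) for d in str(n))

for n in range(1, 11):
    assert digit_sum(n * 9) == 9   # the foolproof step

letter = chr(ord("A") + digit_sum(7 * 9) - 5 - 1)  # subtract 5, map 1 -> A
print(letter)  # D
```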
[I]Variation on ESP #5[/I]
I just thought up a good variation; many people know the grey elephants in Denmark, but do they know the black gorillas in Hungary?
Do the normal think of the number between 1 and 10
Times by 9
add digits together
this time subtract 1 instead of 5
think of a country - it will almost certainly be Hungary (if they say Hong Kong later on, bust them by saying that is not a sovereign state)
Think of an animal that starts with the 4th letter of the country
think of the colour.
Basically it has the same layout but a different answer... This has not been used outside of Australia (outside of Adelaide, in fact).
Post here and i will update the methods.
Last edited by DoubleBishop; 01-13-2007, 06:40 AM. Reason: BBcode, Updated methods ;)
The Queen of Hearts
Basically, you tell them to close their eyes and, in their mind, on a giant piece of paper, to draw/imagine a face card, and that this face card is bright and colorful; once they have that image in
their mind, they should let you know.
Then you ask them what the card was, now at this point some will argue that you are supposed to guess, You playfully agrue back that you already know what it is and that she should just tell you.
(this is if she did not give you the answer you wanted you can just cocky funny it and play it off)
So she gives you the answer, and you tap your shirt pocket and say maybe you should check this pocket. She then pulls out her card The queen of Hearts.
Now most women will pick the queen of hearts the majority of the time and guys tend to pick the king of spades. So before you go out put a queen of hearts in your shirt pocket and a king of spades in
your wallet. C&F any other answer that you get.
[B]Re: The Queen of Hearts[/B]
When you use the words, "bright and colorful," you're forcing a red card, as black cards have little or no color. It may be a cultural thing, but most men pick Jack of Diamonds in the southeastern
[B]Jack of Spades[/B]
[I]Another brilliant subliminal force, credited to Derren Brown. A card is freely chosen and is shown to match the card in her back pocket.[/I]
[I]The magician sneaks a card into the spectator's back pocket and subliminally forces the Jack of Spades.[/I]
Presentation/Patter (instructions in [B]bold[/B]):
[I]We're going to try a little experiment, and if it doesn't work, that's quite alright, but I need you to really believe for me that this is going to work. Are you happy to do that? It's important
that you legitimately follow along with this and it will work, do you understand?
On the table is an imaginary deck of cards. If you would, pick them up and shuffle them for me. Excellent, thank you. Now I'd like you to deal the cards onto the table, one at a time, into piles of
red and black, saying "red, black, red, black" to yourself as you go ([B]demonstrate this[/B]).
[B](As they do this, note where they lay the FIRST card--this is the red pile. You instructed them to repeat "red black red black" in their minds as they dealt the cards, which would lead them to
place the red card first.)[/B]
Now you've separated the cards into two piles, red and black, and I couldn't possibly know which pile is which, agreed? Okay, I'd like you to get red of one color, and keep the others. ([B]No, that's
not a typo. Say "get red" of one and motion to the red "pile" to suggest that they are to get rid of that one. Also, you tell them to get "red" of a "color", and black isn't a color, now is it?[/B])
Now take the remaining pile and deal them into two suits--you either have hearts and diamonds or clubs and spades, I don't know which--but deal them out and say it over in your mind as you do so--say
"hearts, diamonds" or "clubs, spades" as you deal. ([B]Watch where they lay the first card--that is the "clubs" pile[/B]).
Excellent, thanks for humoring me with this... Now we're going to eliminate one pile, the klutzes and bums, and keep the other pile, the spare suit, or whichever you prefer ([B]motion to get rid of
the clubs. "Klutzes and bums" somehow is reminiscent of "clubs" and "spare" or "favored" will lead to "spades"[/B]).
Okay, so we're down to one suit. I don't know if you've got Hearts, Clubs, Spades, or Diamonds, but I'd like you to spread these cards out in front of you ([B]motion through the air from your right
to your left[/B]). Now you've got some number cards ([B]point to your right[/B]) and some face cards ([B]point to your left[/B]). The Ace we'll count as a number card ([B]point to the leftmost part
of the "spread"[/B]). Let's get rid of one category ([B]motion as if you're sweeping away the number cards[/B]) and keep just a few cards ([B]point to the face cards on your left. "A few" means
"3"--the three face cards[/B]). Got that? Wonderful.
Now I'd like you to see these cards in your mind. Get rid of a couple--a pair--for me, and keep one card ([B]picturing three cards in your own mind, indicate that they are to remove the two cards to
your left--the king and queen--and keep the one to your right--the jack[/B]). Great, so you've got that? Good. For the first time, what card are you holding? The Jack of Spades? Look in your back pocket!
[I]This is a very "soft" effect, meaning there is a lot of room for error. One option is to keep a blank business card in your jacket pocket and to write the name of the card on the back of it
(without her seeing, of course) when she reveals her card. "QH" is clearly an abbreviation for Queen of Hearts, so you can abbreviate whatever card they happen to be holding.[/I]
I've gotten this wrong only once over the past two years. It works especially well on drunk people--I can't fathom why. While this may be a little lengthy, it's more than worth it, if not for the
spectator's reaction then for the knowledge that you've got a very suggestible target and can read her like a book, but I digress.
This is probably a bit confusing, so I'll shoot a video tomorrow afternoon and post it somewhere so I can give you all a link. If you have any questions, just ask.
Last edited by RobLaughter; 01-13-2007, 04:29 PM. Reason: Fixed presentation. | {"url":"http://www.venusianarts.com/forum/forum/discussion-and-resources/main-discussion/248-esp-routines","timestamp":"2014-04-17T07:41:02Z","content_type":null,"content_length":"66568","record_id":"<urn:uuid:324d751e-77d0-4cf6-94fa-6faa9dbd0d53>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00170-ip-10-147-4-33.ec2.internal.warc.gz"} |
Foundations of atomic spectra > Hydrogen atom states
The hydrogen atom is composed of a single proton and a single electron. The solutions to the Schrödinger equation are catalogued in terms of certain quantum numbers of the particular electron state.
The principal quantum number is an integer n that corresponds to the gross energy states of the atom. For the hydrogen atom, the energy state E_n is equal to -me^4/(2ħ^2 n^2) = -hcR∞/n^2, where m is
the mass of the electron, e is the charge of the electron, c is the speed of light, h is Planck's constant, ħ = h/2π, and R∞ is the Rydberg constant. The energy scale of the atom, hcR∞, is equal to
13.6 electron volts. The energy is negative, indicating that the electron is bound to the nucleus; zero energy corresponds to infinite separation of the electron and proton. When an atom makes a
transition from an eigenstate of energy E_m to an eigenstate of lower energy E_n, where m and n are two integers, the transition is accompanied by the emission of a quantum of light whose frequency
is given by ν = |E_m - E_n|/h = cR∞(1/n^2 - 1/m^2). Alternatively, the atom can absorb a photon of the same frequency ν and be promoted from the quantum state of energy E_n to the higher-energy
state of energy E_m. The Balmer series, discovered in 1885, was the first series of lines whose mathematical pattern was found empirically. The series corresponds to the set of spectral lines where
the transitions are from excited states with m = 3, 4, 5, . . . to the specific state with n = 2. In 1890 Rydberg found that the alkali atoms had a hydrogen-like spectrum that could be fitted by
series formulas that are a slight modification of Balmer's formula: E = hν = hcR∞[1/(n - a)^2 - 1/(m - b)^2], where a and b are nearly constant numbers called quantum defects.
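The formulas above are easy to evaluate numerically. Here is a short Python sketch of my own; it uses the rounded values hcR∞ ≈ 13.6 eV (the energy scale quoted in the text) and hc ≈ 1240 eV·nm, not exact constants.

```python
# Hydrogen transition energies and Balmer wavelengths from the formulas above.
# Assumes hcR_inf = 13.6 eV and hc = 1240 eV*nm; both are rounded textbook
# values, not exact constants.

HC_RYDBERG_EV = 13.6   # hcR_inf in electron volts
HC_EV_NM = 1240.0      # hc in eV*nm

def transition_energy_ev(n, m):
    """Photon energy for a transition from level m down to level n (m > n)."""
    return HC_RYDBERG_EV * (1.0 / n**2 - 1.0 / m**2)

def wavelength_nm(n, m):
    """Wavelength of the emitted photon, lambda = hc / E."""
    return HC_EV_NM / transition_energy_ev(n, m)

# Balmer series: transitions ending on n = 2.
for m in range(3, 7):
    print(f"m={m} -> n=2: E = {transition_energy_ev(2, m):.3f} eV, "
          f"lambda = {wavelength_nm(2, m):.0f} nm")
```

The m = 3 to n = 2 line comes out near 656 nm, matching the familiar red H-alpha line of the Balmer series.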
Contents of this article: | {"url":"http://britannica.com/nobelprize/article-80606","timestamp":"2014-04-20T03:19:26Z","content_type":null,"content_length":"16327","record_id":"<urn:uuid:6da620f1-c3b0-40b9-a926-4f196590e543>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00058-ip-10-147-4-33.ec2.internal.warc.gz"} |
Iselin Precalculus Tutor
Find an Iselin Precalculus Tutor
...Also, I am equipped with Algebra resources like online tutoring videos, PowerPoints, worksheets, and online resources for preparing question papers for tests and quizzes. Whatever the needs of
the students in Pre-Algebra, I can provide for them appropriately and teach satisfactorily. Since I have ex...
10 Subjects: including precalculus, calculus, algebra 2, algebra 1
...I tutor one on one and have the ability to determine what the student's strengths and weaknesses are. I work on not only improving the weaknesses but also further developing the strengths. I
will meet you in a public place, such as a public library, and have your parent or guardian attend each and every tutoring session if they choose.
3 Subjects: including precalculus, calculus, prealgebra
...Over the course of my college career, I have had the privilege of tutoring a high school student into an English honors class, a significant personal accomplishment for his strong math skill
set. In addition to tutoring I have edited a section of a student newspaper and have led educational prog...
25 Subjects: including precalculus, chemistry, reading, algebra 2
...I currently use a macbook air and have taught friends and family to use their iPhones, apple computers, iPads, and iTunes. Using these products requires an understanding of their software and
competitor products. I have taken Organic Chemistry 1 and 2 including the lab sections.
26 Subjects: including precalculus, chemistry, calculus, physics
...I will work with you until you learn the concept, task, or skill, trying various approaches to find the one that works best with your learning style. If you do not learn it, I did not do my
job. I have been drawing since I was a child. I took classes at Pratt Institute in Brooklyn, NY.
21 Subjects: including precalculus, reading, writing, geometry | {"url":"http://www.purplemath.com/iselin_precalculus_tutors.php","timestamp":"2014-04-20T21:33:52Z","content_type":null,"content_length":"23895","record_id":"<urn:uuid:ca64513f-c7fb-4343-932f-8c47996cc612>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00634-ip-10-147-4-33.ec2.internal.warc.gz"} |
math (double checking)
Number of results: 218,278
thank you.. this is what i thought. just double checking
Monday, January 10, 2011 at 11:09pm by LULU
Chemistry - %v/v
thank you!
Sunday, June 2, 2013 at 2:00pm by double checking
english double checking
thank you :)
Thursday, November 24, 2011 at 12:48pm by Anonymous
english double checking
7- *** THE
Thursday, November 24, 2011 at 12:48pm by Anonymous
double checking if i paraphrased correctly
Thank you :)
Saturday, October 29, 2011 at 5:06pm by Anonymous
english grammar checking and double checking qu
#2 Stepmother and stepsister are single words. "Obsessed" is misspelled. Was Mandy's (former) boyfriend really a talking book? "past away" should be "passed away" #4 "Their" should be "there".
"herself" is one word. Is "watcher" supposed to be "viewer" (the moviegoer)? There ...
Tuesday, January 24, 2012 at 6:48am by drwls
College Algebra
Thank you! I'm double checking everything too.
Wednesday, November 18, 2009 at 11:05pm by LeAnn
I am just double checking: doesn't a manor mean a castle?
Tuesday, September 11, 2007 at 9:05pm by Amy L.
english double checking
I'll be back later and will check your work then.
Thursday, November 24, 2011 at 12:48pm by Writeacher
it is not a test, it is a practice quiz they helped us make in order to prepare for the test, I just want help double checking my answers
Tuesday, October 7, 2008 at 2:19pm by cal
Computer science
a typical double method looks like this: double areaOfTrapezoid(double A, double B, double H){ double area; area = (A + B) / 2.0 * H; /* the usual trapezoid formula; adjust according to your instructions */ return area; }
Wednesday, November 2, 2011 at 10:22pm by MathMate
Thank you for your help. I used that same webpage for help but I still wasn't sure that I was doing it correctly. Thanks for double checking.
Saturday, February 5, 2011 at 4:48pm by SHelia
Maths urgent help needed!
thanks, just double checking to see if i got it right.
Sunday, October 23, 2011 at 9:28am by joi
Maybe he has tried, and just needs to check to see if he did it correctly; it's not cheating, it is double checking.
Monday, January 31, 2011 at 12:17pm by ANY
Social STUDIES
Yeah, it's a really important exam; I'm just double checking my answers.
Thursday, July 7, 2011 at 6:30pm by Anonymous**
medical coding 1
If you get your answers off of this site or others like it, double-check your work. Just because they are on here does not always make them correct. I wanted to do a little research on an
assignment I had and came across this site. I started double-checking my answers ...
Wednesday, August 4, 2010 at 3:14pm by Belinda
english double checking
#2 - sounds fine #3 - Are you sure the spots are the result of the rabbit's beating? or the ants' sewing?
Thursday, November 24, 2011 at 12:48pm by Writeacher
problems posting
Are you making sure you have a space after every period? Are you double-checking all your spelling?
Monday, July 14, 2008 at 6:42pm by Writeacher
english double checking
I already gave you a corrected response for #5. 8. ... show the foolishness ... 9. ... negative act by the ... which results in ... with a positive ending...
Thursday, November 24, 2011 at 4:23pm by Writeacher
computer science
Sorry, can't help much there. Don't have access to myprogramminglab. But double-checking with a real programming language is the way to go.
Friday, February 24, 2012 at 5:37pm by MathMate
english double checking
You're welcome. Make sure you understand why I made the changes in that last sentence. If you don't know why using "they" is incorrect in that sentence, let me know.
Thursday, November 24, 2011 at 12:48pm by Writeacher
A college student is looking at her monthly checking account records. On September 1, 2008, her checking account held a balance of $1,050. At the end of March 2009, her checking account held a
balance of $800. What is the monthly rate of change for the student’s checking ...
Friday, December 13, 2013 at 8:37am by Beebee
Double checking your work is a good idea because if you made an error, say you meant to write one number and accidentally wrote another number, that would throw the whole equation off. If you
check your answer, you will catch the error.
Thursday, August 5, 2010 at 7:08pm by barb
Oh I think I was just looking at the question the wrong way. I was wondering how I could get 44.8mL from 7.0mL forgetting that the 2.5M solution would need to be diluted. Thanks!
Sunday, June 2, 2013 at 1:47pm by double checking
The volume of the larger pie will be double. Since the height of the filling is the same, the area will be double. To double the area, increase the diameter by a factor of sqrt 2 = 1.414. Multiply 8
inches by that for the answer.
Saturday, March 8, 2008 at 10:01pm by drwls
A virus scanning program is checking every file for viruses. It has completed checking 40% of the files in 300 seconds. How long should it take to check all files?
Wednesday, July 23, 2008 at 5:34pm by Ann
english double checking
5- If someone asked me what kind of tale this was I would say it is a folktale because it is passed on and told by a person. The tale also uses animals and teaches lessons.
Thursday, November 24, 2011 at 12:48pm by Anonymous
Avirus scanning program is checking every file for viruses. It has completed checking 40% of the files in 300 seconds. How long should it take to check all the files?
Friday, April 4, 2008 at 10:18am by Ms. Teri
Business and finance. Avirus scanning program is checking every file for viruses. It has completed checking 40% of the files in 300 seconds. How long should it take to check all the files?
Thursday, April 9, 2009 at 2:49am by Allen
I'm still not understanding... It's okay, I'll keep trying to figure it out. I'm clearly not double checking it right, since I checked for 1 and got a positive answer but it's supposed to be negative
till 2 :-/ Thank you so much for your time! It is honestly appreciated :)
Saturday, June 22, 2013 at 7:08pm by Abi
algebra 1
A college student is looking at her monthly checking account records. On September 1, 2008, her checking account held a balance of $1,050. At the end of March 2009, her checking account held a
balance of $800. What is the monthly rate of change for the student’s checking ...
Friday, September 6, 2013 at 10:35am by sabina brown
If you have an answer that you'd like to double-check, feel free to post it for checking. Otherwise try pencil and paper or a calculator. ... =£12,000 *(1-0.10)^2 =$12,000 *0.90^2 =?
Sunday, June 5, 2011 at 5:14pm by MathMate
A college student is looking at her monthly checking account records. On September 1, 2008, her checking account held a balance of $1,050. At the end of March 2009, her checking account held a
balance of $800. What is the monthly rate of change for the student’s checking ...
Friday, January 10, 2014 at 8:51am by 1215
A college student is looking at her monthly checking account records. On September 1, 2008, her checking account held a balance of $1,050. At the end of March 2009, her checking account held a
balance of $800. What is the monthly rate of change for the student’s checking ...
Friday, January 10, 2014 at 8:51am by xfgh
english double checking
***** "The Turtle and the Rabbit Run a Race" and "The Tortoise and the Hare" both teach lessons in a humorous way that attracts the reader, and both of the stories use talking animals to show
foolishness of human actions.
Thursday, November 24, 2011 at 12:48pm by Anonymous
english double checking
5- If someone asked me what kind of tale this was I will say it is a folktale because it is passed on and told by a person. The tale uses animals and teaches lessons. The folktale also comes from an
oral tradition.
Thursday, November 24, 2011 at 12:48pm by Anonymous
Physics[CONCEPT- NO MATH]
F = G(m1 x m2)/(d)^2 If you double the mass of one object and double the distance between the center of the two objects what happens to the force? If you double the mass of each object and double the
distance between them what happens to the force?
Tuesday, December 4, 2012 at 5:52pm by Michael
Chemistry(Please check)
Last question, I just want to make sure I understand the concept of hybridization. For carboxylic acid the hybridization is: CH3= sp3 because there are only single bonds COOH= Sp2 because the oxygen
is double bonded to the C Is this correct? Thank you for checking my work!
Wednesday, September 12, 2012 at 11:52pm by Hannah
Computer Science (java1)
can someone please tell me the equation for compshift? double Angle = 45.0; double AngleRadians; double ComptonShift ; //calculations AngleRadians = Math.toRadians(Angle); ComptonShift = Helpppp here
Wednesday, March 14, 2012 at 2:31pm by Nick
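The thread above never supplies the missing line. For reference, the standard Compton shift formula is delta_lambda = (h/(m_e c))(1 - cos theta); here is a sketch of my own (in Python rather than the asker's Java, with rounded constants and variable names of my own choosing):

```python
# The question above asks for the Compton shift equation. The standard
# formula (not given anywhere in the thread) is
#     delta_lambda = (h / (m_e * c)) * (1 - cos(theta))
# where h/(m_e*c) ~ 2.426e-12 m is the Compton wavelength of the electron.
import math

H = 6.626e-34       # Planck's constant, J*s (rounded)
M_E = 9.109e-31     # electron mass, kg (rounded)
C = 2.998e8         # speed of light, m/s (rounded)

def compton_shift(angle_degrees):
    angle_radians = math.radians(angle_degrees)
    return (H / (M_E * C)) * (1.0 - math.cos(angle_radians))

print(compton_shift(45.0))  # wavelength shift in metres at 45 degrees
```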
Repeating in your own words the verbal content of the critic's message is: paraphrasing. checking for feelings. checking for inferences. buying time with limited agreement.
Thursday, June 24, 2010 at 4:04pm by anna
Tonia deposited a total of $174.52 into her checking account. She also withdrew a total of $186.15. Use addition to find the net change in Tonia’s checking account.
Sunday, September 25, 2011 at 5:04pm by Caitlin
english double checking
2. What lesson does it teach about small friends? 2. "The Turtle and the Rabbit Run a Race" taught us that small friends can always help us in our worse times.
Thursday, November 24, 2011 at 12:48pm by Anonymous
Sure you could. How about: double the 7, then double again, and then subtract 7. 7(2)(2) - 7 = 7(4 - 1) = 7(3)
Tuesday, March 2, 2010 at 8:48am by Reiny
math (double checking)
I've posted this question before but I forgot to say that I was dealing with polynomial inequalities (not sure if that makes a difference) The question is: Solve the following polynomial
inequalities. (9 marks) 4x - 5 ≤ 2(x - 7) x^3 - 5x^2 + 2x ≥ -8 2(x^3 - 2x^2 + ...
Saturday, January 12, 2013 at 9:47pm by Anonymous
What are the major organic products of this reaction: CH3CH2CH2COCH2CH3 (an O is double bonded to the C before the O) + H2O --> (acid is above the arrow) DrBob222- There are 2 oxygens. I said there
is an O double bonded to the C. So there is the O after the single C and ...
Sunday, March 21, 2010 at 9:18pm by Bobby
Math - Trig - Double Angles
Okay? But, I still don't get what you did in: 2cos^2(2x) - 1 Double-angle again. 2(cos2x * cos2x) - 1 Double-angle again.
Saturday, November 17, 2007 at 6:17pm by Anonymous
Java programming
List all overloading methods (including constructors) in this coding.. public class Circle { private double radius; private String colour; public Circle() { radius = 0; colour = ""; } public Circle
(double r) { radius = r; } double getRadius() { return radius; } double getArea...
Tuesday, April 16, 2013 at 11:59am by nina
Computing grades: the fifth test counts as double (and I got the answer 74), so how much would it be as double? Thanks!
Tuesday, March 30, 2010 at 7:25pm by Joe
If it's a double bond C=C (as opposed to a COOH or CHO(aldehyde) or RCOR'(a ketone), the longest chain containing the double bond is the parent. All of the other "longer" chains or parts of them are
names as substituents of the parent. Double bonds (C=C double bonds) take ...
Sunday, June 12, 2011 at 12:20pm by DrBob222
20 is ok 80 is ok If you double again, that is 160 seconds. Or double x double = 4x and 4 x 40 = 160
Wednesday, January 18, 2012 at 2:29pm by DrBob222
How do I write a psuedocode for checking the balances and how do I do a flow chart for checking the balances?
Wednesday, March 23, 2011 at 12:19pm by Sherlyn
Math - Trig - Double Angles
For the following lines, 2cos^2(2x) - 1 Double-angle again. 2(cos2x * cos2x) - 1 Double-angle again. I don't get how you got the second line from the first line...
Saturday, November 17, 2007 at 6:17pm by Anonymous
3rd math
No. 6x8 is equal to 4 times 3x4. 6 is double 3. 8 is double 4. 2x2=4. 12 doubled is 24, or 6x4, or 3x2x4.
Tuesday, November 13, 2012 at 10:33pm by Jace
english double checking
... in our worst times. (Use "worse" when comparing only two things; use "worst" when comparing three or more things.)
Thursday, November 24, 2011 at 12:48pm by Writeacher
Garland High school
Suppose you must maintain a balance of least $500 in your checking account in order to have free checking. The current balance in your checking account is $635.27. Which inequality describes how much
can write a check for a and still have free checking? A. 635.27>500-x B. ...
Thursday, December 30, 2010 at 5:39pm by Maria
What is a double factor? I am filling in a multiplication table. I am supposed to color the double factors red. I really need help in multiplication and division.
Thursday, January 18, 2007 at 10:09pm by alex
double checking if i paraphrased correctly
She refused to pay the fine. She defended and explained her act in a speech saying to the people everywhere, and particularly to the citizens of the United States that she did not do anything wrong
or illegal. participating every American citizen *** has the right forbid, not ...
Saturday, October 29, 2011 at 5:06pm by Damon
A virus-scanning program is checking every file for viruses. It has completed checking 40% of the files in 300 seconds. How long should it take to check all the files
Thursday, January 29, 2009 at 7:42pm by Julie
Consider the following four possibilities for two point charges and choose the one(s) that do not change the magnitude of the electrostatic force that each charge exerts on the other: A. Double the
magnitude of each charge and double the separation between them. B. Double the ...
Wednesday, July 18, 2012 at 12:36pm by Patrick
QUICKLY RUNNING OUT And your question is?? You've received several answers. Now, it's your turn. What parts of speech do you think these words are? I just need help. Well, do you think you can help
me and explain it? I wasn't expecting people to give me the answers. I was ...
Wednesday, October 18, 2006 at 10:51pm by Synester
CH3CH2-C=C-CH2CH2CH3 I am assuming the double bond is between the carbons that are not linked to any hydrogens. Count the number of carbons: 7, so heptene. Whenever there's a double bond, you would
end with -ene (instead of -ane [single bonds only]). Note where the double bond is...
Saturday, September 11, 2010 at 12:47pm by Anonymous
If you double the concentration of HCl acid, will that double the rate of reaction? I have to explain it in terms of collision theory. Won't doubling the concentration double the rate as well?
Tuesday, September 25, 2012 at 10:38pm by Shreya
Ms. Powell has -$25 in her checking account after withdrawing money from the ATM. How much did Ms. Powell withdraw if she had $85 in her checking account before she withdrew the money?
Wednesday, July 10, 2013 at 5:25pm by LESLIE
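The question above goes unanswered in the thread; the arithmetic is a single subtraction (my own sketch, not from the thread):

```python
# Balance before the withdrawal was 85; after, it is -25.
# The withdrawal is the drop between the two balances.
withdrawal = 85 - (-25)
print(withdrawal)  # 110 dollars
```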
1.What is the nature of th roots of f(x) = x^2 + 4x+16-I think it is two conjugate complex roots because it is -2+-2isquare root 3? 2.Nature of roots of f(x) = X^2 + 2x+1 it would be one double
root,correct? I get confused with doing these sometimes, just wanted to verify this...
Thursday, September 22, 2011 at 9:43am by Sue C.
english grammar checking and double checking qu
The metric system is the most widely used system of measurement in the world. Three basic units of measurement is gram which is used to measure weight, liter which is used to measure capacity and
meter which is used to measure length. A beam balance is used to measure the ...
Tuesday, January 24, 2012 at 6:48am by anonomous
Terry has 43 dollars in a checking account. If Terry writes a check for 62 dollars, what is the new checking balance?
Monday, September 17, 2012 at 11:13pm by emily h
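The thread gives no answer here; the new balance is one subtraction (my own sketch):

```python
# Writing a 62-dollar check against a 43-dollar balance overdraws the account.
new_balance = 43 - 62
print(new_balance)  # -19 dollars
```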
Barbara had $123.54 in her checking account. she made a deposit of $45.62. when she went shopping later that day, she saw a coat she really wanted. it was on sale for 40% off the original price of
$240. Assuming that she bought nothing else that day, does she have enough money...
Wednesday, September 11, 2013 at 10:36pm by Evelyn
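No answer appears in the thread for the question above; a quick check of the arithmetic (my own sketch):

```python
# Balance after the deposit, versus the sale price of the coat (40% off 240).
balance = 123.54 + 45.62        # 169.16 in the account
coat_price = 240 * (1 - 0.40)   # 144.00 on sale
print(balance >= coat_price)    # True: she has enough money for the coat
```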
3rd grade math
Double my tens digit to get my ones digit. Double me and I am less than 50. Who am I?
Wednesday, May 12, 2010 at 7:05pm by Pam & Hannah
If it doubles every 15 years, then by 30 years, it has doubled twice. What is the factor that equals double of double?
Tuesday, November 29, 2011 at 7:08pm by MathMate
riddle:double my tens digit to get my ones digit. double me and i am less than 50.
Thursday, March 17, 2011 at 6:12pm by jan
Jill likes to play Blackjack. She heard about a winning strategy to ensure she does not lose. The strategy is: choose a starting amount to bet, and then double your bet if you lose. Keep doubling
your bet until you win. When you win a hand, go back to the starting bet. Hoping ...
Tuesday, November 30, 2010 at 12:03am by Nic
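The doubling scheme described above is the classic martingale. A Python sketch of my own (the thread itself gives no answer) tabulates how fast the required bet and the accumulated loss grow during a losing streak, which is why the strategy is not a sure thing on a finite bankroll:

```python
# Martingale: start at 1 unit, double after each loss. After k straight
# losses the next bet is 2**k and the total already lost is 2**k - 1,
# so a modest losing streak demands a large bankroll for a 1-unit net win.
start_bet = 1
for losses in range(6):
    next_bet = start_bet * 2**losses
    total_lost = start_bet * (2**losses - 1)
    print(f"after {losses} losses: next bet = {next_bet}, already lost = {total_lost}")
```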
double my tens digit to get my ones digit. double me and i am less than 50
Tuesday, March 16, 2010 at 5:25pm by christian
double my tens digit to get my ones digit. double me and i am less than 50? who am i
Wednesday, March 31, 2010 at 4:52pm by sydney
double my tens digit to get my ones digit. double me and i am less than 50
Thursday, May 6, 2010 at 8:58pm by Anonymous
Double my tens digit to get my ones digit. Double me and I am less than fifty.
Monday, March 28, 2011 at 8:48pm by jack
Double my tens digit to get my ones digit Double me and i am less than 5o.
Tuesday, May 24, 2011 at 6:55pm by tony
Double my tens digit to get my ones digit double me and I am less than 50
Thursday, April 26, 2012 at 9:50pm by Ann
Double my tens digit to get my ones digit. Double me and I am less than 50.
Thursday, May 6, 2010 at 8:58pm by mykhal
This is pretty simple. Let Linda's age be x. So we have: x + (x+6) = 36 2x + 6 = 36 2x = 36 - 6 2x = 30 x = 30/2 x = 15 Double checking, if Linda is 15, Lynn will be 21. 15 + 21 = 36 So x is
definitely 15.
Tuesday, February 15, 2011 at 8:17pm by Qpee
Chem 151
The atmosphere in a sealed diving bell contained oxygen and helium. If the gas mixture has 0.200 atm of oxygen and a total pressure of 3.00 atm, calculate the mass of helium in 10.0 L of the gas
mixture at 40 degrees Celsius.
Thursday, March 21, 2013 at 9:50pm by Double checking ans
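The diving-bell question above is not answered in the thread. One standard approach, assuming ideal-gas behavior, is Dalton's law plus PV = nRT; here is a Python sketch with rounded constants of my own choosing:

```python
# Helium partial pressure = total - oxygen = 3.00 - 0.200 = 2.80 atm.
# Then n = PV/RT for the 10.0 L sample at 40 degrees Celsius.
R = 0.08206           # L*atm/(mol*K)
T = 40 + 273.15       # kelvin
n_he = 2.80 * 10.0 / (R * T)   # moles of helium
mass_he = n_he * 4.003         # grams, using the molar mass of He
print(mass_he)                 # roughly 4.4 g of helium
```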
Whole Number Math Riddle Clue 1: Double my tens digit and get my ones digit. Clue 2: Double me and I am less than 50
Monday, April 4, 2011 at 9:32pm by Savannah
A virus scanning program is checking every file for viruses. It has completed checking 40% of the files in 300 seconds. How long should it take to check all the files? I am not sure how to set this
problem up, can someone please help! 40% = 40/100 40/100 = 300/x Solve for x. ...
Saturday, May 26, 2007 at 4:22pm by Stacy H
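Following the proportion set up in the reply above, the solve is one line (my own sketch):

```python
# 40% of the files are checked in 300 seconds; solve 40/100 = 300/x for the
# total time x by cross-multiplying.
total_seconds = 300 * 100 / 40
print(total_seconds)  # 750.0 seconds to check all the files
```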
This is ok but you omitted the double bond in CH2=CHCH2CH3. The double bond needs to be there (even though it's a condensed formula). When it's omitted the observer doesn't know if it is a double
bond or if you just forget and left out the two H atoms.
Tuesday, February 14, 2012 at 10:12pm by DrBob222
english double checking
This is also a really good and comprehensive website for all things grammar and usage! http://grammar.ccc.commnet.edu/grammar/ The INDEX is especially helpful when you are looking for something
Thursday, November 24, 2011 at 12:48pm by Writeacher
english double checking
#5 The term "oral tradition" is already stated in your first sentence here: "is passed on and told by a person." Revision: This tale is a folktale because it is passed on by people, telling stories
again and again -- in an oral tradition. The tale also uses animals and teaches...
Thursday, November 24, 2011 at 12:48pm by Writeacher
In general, divided by demand deposits (checking accounts) in the bank. (Note that, for banks, savings account deposits have a different reserve requirement than checking account deposits)
Sunday, August 31, 2008 at 12:36pm by economyst
Finance: if given the equation: y=12000(1.07)^x a)Estimate the time it will take to double using the rule of 72 b)determine the time it takes for the investment to double useing the fuction
Wednesday, January 19, 2011 at 6:38pm by Dane
Finance: if given the equation: y=12000(1.07)^x a)Estimate the time it will take to double using the rule of 72 b)determine the time it takes for the investment to double useing the fuction
Wednesday, January 19, 2011 at 7:02pm by Dane
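Neither copy of this question gets an answer in the thread. A Python sketch of my own covers both parts, using the standard rule-of-72 estimate and the exact logarithm:

```python
# y = 12000 * 1.07**x doubles when 1.07**x = 2.
# (a) Rule of 72 estimate: doubling time ~ 72 / (interest rate in percent).
# (b) Exact: x = ln(2) / ln(1.07).
import math

estimate = 72 / 7                       # about 10.29 years
exact = math.log(2) / math.log(1.07)    # about 10.24 years
print(estimate, exact)
```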
English/ Social studies
Here's another idea about "double citations" as the rule from the MLA Citations says to double space. That could be what your teacher wants: Double space all citations and indent if the citation runs
longer than one line. Sra
Monday, June 6, 2011 at 5:02pm by SraJMcGin
can some1 explain to me how double-slit and interference are related? what is double slit????
Friday, May 15, 2009 at 2:45am by tony
Luckily many fishing boats work on this system of shares where the captain and cook for example get double shares so I am used to this. four single share employees + one double share employee = 6
shares each 10 hours so four people get 10 hours and the double share person gets...
Monday, May 18, 2009 at 3:30pm by Damon
College microeconomics
I am not looking for someone to do my work for me. I am just trying to see if I am on the right track. I have answered these questions already; I am just double checking my work. For #1 I got
"A", because a firm will hire up to the point where MRP equals the wage rate; anything after ...
Tuesday, April 14, 2009 at 7:43pm by sue
when should you use a double bar graph and double line graph.Tell what type of graph would be most appropriate to represent the data listed
Monday, April 18, 2011 at 8:46pm by hellokitty
double **dptr; dptr = new double*[5]; for (int i = 0; i < 5; i++) dptr[i] = new double[3]; How many rows and columns will there be in dptr? Also, what would the code look like to Delete all the space
allocated for dptr???
Friday, August 31, 2012 at 1:12am by Bill
Chem 151
A CaCO3 mixture weighing 2.75 grams was treated with 9.75 mL of 3.00 M HCl to liberate CO2. After the CO2 was liberated, the mass of the mixture decreased to 2.621 grams. Determine the mass of CaCO3
present in the mixture.
Thursday, March 21, 2013 at 10:23pm by Double checking ans
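The thread leaves the CaCO3 question above unanswered. One common approach, assuming the entire mass decrease is CO2 driven off by the acid, is sketched below (my own Python, with rounded molar masses):

```python
# CaCO3 + 2 HCl -> CaCl2 + H2O + CO2. The mass the mixture loses is the
# CO2 that escapes; each mole of CO2 came from one mole of CaCO3.
mol_co2 = (2.75 - 2.621) / 44.01   # grams lost / molar mass of CO2
mass_caco3 = mol_co2 * 100.09      # molar mass of CaCO3
print(mass_caco3)                  # roughly 0.29 g of CaCO3 in the mixture
```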
Physics again.
In the study of a double slit diffraction pattern it is noted that the 5th double slit nodal line corresponds exactly with the 1st nodal line of the overlapping single slit pattern. The double slits
used are 0.25mm apart. What is the width of each single slit used to create ...
Tuesday, June 10, 2008 at 6:30pm by Alex
Double "me" and I am less than 50 means "me" has to be less than 25, so the tens digit can be either 1 or 2; doubling gives a ones digit of 2 or 4. The numbers are 12 or 24.
Thursday, May 6, 2010 at 8:58pm by Reiny
english double checking
Better. I'll make a few corrections (below), but this is much clearer in meaning! "The Turtle and the Rabbit Run a Race" personally taught me that, if I don't work hard and I cheat to reach success,
I will get negative results. The turtle won the race without working hard. If ...
Thursday, November 24, 2011 at 12:48pm by Writeacher
Mrs. Rossi wrote the following clues for a mystery number. It is a 4-digit number. The tens are double the ones. The thousands are double the tens. The sum of the digits is 19.
Tuesday, February 22, 2011 at 7:57pm by rita
math 3rd grade
4 x 7 = 28. Double of 4 is 8, so we will double 28: (4x2) x 7 = (28x2), or (4+4) x 7 = (28+28). Answer: 56.
Monday, November 26, 2012 at 7:47pm by kriti
Math Forum Discussions
Topic: looking for example of closed set that is *not* complete in a metric space
Replies: 26 Last Post: Feb 3, 2013 11:06 AM
Re: looking for example of closed set that is *not* complete in a metric space
Posted: Feb 2, 2013 11:43 AM
On Saturday, February 2, 2013 4:14:23 PM UTC+8, Butch Malahide wrote:
>If (X,d) is not complete, then it has at least one closed
>subspace which is not complete, namely, (X,d) is a closed
>subspace of itself.
On Feb 2, 1:01 am, quasi <qu...@null.set> wrote:
> Moreover, if (X,d) is not complete, it has uncountably many
> subsets which are closed but not complete.
Butch Malahide wrote
> Oh, right. At least 2^{aleph_0} of them.
Not understood. Can someone help me understand this one?
4. Carotene, the pigment responsible for the color of carrots, has a percent composition of 89.49% C and 10.51% H. Its molecular mass was found to be 546.9 g. Calculate its empirical formula and its
molecular formula.
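Since the answer itself was not captured, here is a hedged sketch of one way to work the problem, using standard atomic masses. Note that the accepted molar mass of beta-carotene is about 536.9 g/mol; the 546.9 g stated in the question gives a non-integer multiple of the empirical mass (about 8.15), so it is likely a typo for 536.9:

```python
mass_C, mass_H = 12.011, 1.008     # standard atomic masses (g/mol)

mol_C = 89.49 / mass_C             # moles of C per 100 g of carotene
mol_H = 10.51 / mass_H             # moles of H per 100 g
ratio = mol_H / mol_C              # ~1.399 ~ 7/5, so the empirical formula is C5H7
assert abs(ratio - 7 / 5) < 0.01

empirical_mass = 5 * mass_C + 7 * mass_H   # ~67.11 g/mol
n = 536.9 / empirical_mass         # ~8.0, using the accepted molar mass of beta-carotene
assert round(n) == 8               # molecular formula: C40H56
```

With the corrected molar mass, the molecular formula comes out as C40H56, the textbook formula for beta-carotene.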
Connected Components Algorithms For Mesh-Connected Parallel Computers (1995)
by Steve Goddard, Subodh Kumar, Jan F. Prins
Venue: Parallel Algorithms: 3rd DIMACS Implementation Challenge October 17-19, 1994, volume 30 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science
Citations: 12 - 0 self
Math Forum Discussions - split the sublists into parts according to some rules
Date: Aug 31, 2012 4:00 AM
Author: Dr.J
Subject: split the sublists into parts according to some rules
Dear all,
I have a long list which contains many sublists.
Each sublist has length > 1 (no single-element sublist exists), and the
lengths of the sublists are different and unknown in advance. The lengths of
some of the sublists are odd numbers, such as {a,b,c,d,e} and {x,y,z}. Some
sublists have even lengths, like {a1,a2,a3,a4}.
What I want to achieve is to split each sublist into two (or three, or
more) parts. In the two-part case, if the length of the original sublist is
even, the two new parts will have the same length, e.g., {a1,a2,a3,a4}
becomes {{a1,a2},{a3,a4}}. If the sublist has odd length, then
after splitting, one of the two parts should have one more element than the other.
That is,
Input: Thelonglist={{a,b,c,d,e}, {x,y,z}, {a1,a2,a3,a4},...}
Output: Newlist={{{a,b,c}, {d,e}}, {{x,y}, {z}}, {{a1,a2}, {a3,a4}}, ...}
Or the same idea for the three parts case,
Input: Thelonglist={{a,b,c,d,e}, {x,y,z}, {a1,a2,a3,a4},...}
Output: Newlist={{{a,b}, {c,d}, {e}}, {{x},{y},{z}}, {{a1}, {a2},
{a3,a4}}, ...}
For the case of 4 parts, the number 4 is larger than the length of some
sublists and I will abandon those list with short length.
Input: Thelonglist={{a,b,c,d,e}, {x,y,z}, {a1,a2,a3,a4},...}
Output: Newlist={{{a}, {b}, {c}, {d,e}}, {{a1}, {a2}, {a3}, {a4}}, ...}
Could there be a simple function to achieve this idea generally? Say, a
function like SplitToPart[list_, partnumber_], in which I just need
to give the input list and the number of parts I want each sublist split into.
Then it will do the job above. If the number of parts is larger than the
length of some sublists, the function just abandons those short lists and does
the split (or partition) work on the other lists of sufficient length.
Could some one help me on this?
If that is too complicated, I would still be happy if someone could
give me a solution only for the case of splitting into two parts,
Input: Thelonglist={{a,b,c,d,e}, {x,y,z}, {a1,a2,a3,a4},...}
Output: Newlist={{{a,b,c}, {d,e}}, {{x,y}, {z}}, {{a1,a2}, {a3,a4}}, ...}
Thanks a lot for your kind help!
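No reply is preserved in this snippet, so here is a sketch of the requested behavior in Python rather than Mathematica. Note the original post's own examples are not fully consistent about which parts receive the extra elements; this version gives the extras to the earlier parts, matching the two-part examples like {a,b,c,d,e} -> {{a,b,c},{d,e}}. The function name mirrors the hypothetical SplitToPart from the question:

```python
def split_to_parts(lists, k):
    """Split each sublist into k nearly equal parts, dropping sublists with
    fewer than k elements. Here the first (n mod k) parts get one extra
    element each -- one possible convention, since the post's examples differ."""
    out = []
    for sub in lists:
        n = len(sub)
        if n < k:
            continue                  # abandon sublists that are too short
        q, r = divmod(n, k)           # every part gets q elements; first r get q+1
        parts, i = [], 0
        for j in range(k):
            size = q + 1 if j < r else q
            parts.append(sub[i:i + size])
            i += size
        out.append(parts)
    return out

example = [list("abcde"), list("xyz"), ["a1", "a2", "a3", "a4"]]
assert split_to_parts(example, 2) == [
    [["a", "b", "c"], ["d", "e"]],
    [["x", "y"], ["z"]],
    [["a1", "a2"], ["a3", "a4"]],
]
```

The same `divmod` idea translates directly into Mathematica with `Quotient`/`Mod` and `Take`.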
Network Theory (Part 12)
October 9, 2011
John Baez
Last time we proved a version of Noether's theorem for stochastic mechanics. Now I want to compare that to the more familiar quantum version.
But to do this, I need to say more about the analogy between stochastic mechanics and quantum mechanics. And whenever I try, I get pulled toward explaining some technical issues involving analysis:
whether sums converge, whether derivatives exist, and so on. I've been trying to avoid such stuff—not because I dislike it, but because I'm afraid you might. But the more I put off discussing these
issues, the more they fester and make me unhappy. In fact, that's why it's taken so long for me to write this post!
So, this time I will gently explore some of these technical issues. But don't be scared: I'll mainly talk about some simple big ideas. Next time I'll discuss Noether's theorem. I hope that by getting
the technicalities out of my system, I'll feel okay about hand-waving whenever I want.
And if you're an expert on analysis, maybe you can help me with a question.
Stochastic mechanics versus quantum mechanics
First, we need to recall the analogy we began sketching in Part 5, and push it a bit further. The idea is that stochastic mechanics differs from quantum mechanics in two big ways:
• First, instead of complex amplitudes, stochastic mechanics uses nonnegative real probabilities. The complex numbers form a ring; the nonnegative real numbers form a mere rig, which is a 'ring
without negatives'. Rigs are much neglected in the typical math curriculum, but unjustly so: they're almost as good as rings in many ways, and there are lots of important examples, like the natural
numbers $\mathbb{N}$ and the nonnegative real numbers, $[0,\infty)$. For probability theory, we should learn to love rigs.
But there are, alas, situations where we need to subtract probabilities, even when the answer comes out negative: namely when we're taking the time derivative of a probability. So sometimes we need $
\mathbb{R}$ instead of just $[0,\infty)$.
• Second, while in quantum mechanics a state is described using a 'wavefunction', meaning a complex-valued function obeying
$$ \int |\psi|^2 = 1 $$
in stochastic mechanics it's described using a 'probability distribution', meaning a nonnegative real function obeying
$$ \int \psi = 1 $$
So, let's try our best to present the theories in close analogy, while respecting these two differences.
We'll start with a set $X$ whose points are states that a system can be in. Last time I assumed $X$ was a finite set, but this post is so mathematical I might as well let my hair down and assume it's
a measure space. A measure space lets you do integrals, but a finite set is a special case, and then these integrals are just sums. So, I'll write things like
$$ \int f $$
and mean the integral of the function $f$ over the measure space $X$, but if $X$ is a finite set this just means
$$ \sum_{x \in X} f(x) $$
Now, I've already defined the word 'state', but both quantum and stochastic mechanics need a more general concept of state. Let's call these 'quantum states' and 'stochastic states':
• In quantum mechanics, the system has an amplitude $\psi(x)$ of being in any state $x \in X$. These amplitudes are complex numbers with
$$\int | \psi |^2 = 1$$
We call $\psi: X \to \mathbb{C}$ obeying this equation a quantum state.
• In stochastic mechanics, the system has a probability $\psi(x)$ of being in any state $x \in X$. These probabilities are nonnegative real numbers with
$$\int \psi = 1$$
We call $\psi: X \to [0,\infty)$ obeying this equation a stochastic state.
In quantum mechanics we often use this abbreviation:
$$ \langle \phi, \psi \rangle = \int \overline{\phi} \psi $$
so that a quantum state has
$$ \langle \psi, \psi \rangle = 1 $$
Similarly, we could introduce this notation in stochastic mechanics:
$$ \langle \psi \rangle = \int \psi $$
so that a stochastic state has
$$ \langle \psi \rangle = 1 $$
But this notation is a bit risky, since angle brackets of this sort often stand for expectation values of observables. So, I've been writing $\int \psi$, and I'll keep on doing this.
In quantum mechanics, $\langle \phi, \psi \rangle$ is well-defined whenever both $\phi$ and $\psi$ live in the vector space
$$L^2(X) = \{ \psi: X \to \mathbb{C} \; : \; \int |\psi|^2 < \infty \} $$
In stochastic mechanics, $\langle \psi \rangle$ is well-defined whenever $\psi$ lives in the vector space
$$L^1(X) = \{ \psi: X \to \mathbb{R} \; : \; \int |\psi| < \infty \} $$
You'll notice I wrote $\mathbb{R}$ rather than $[0,\infty)$ here. That's because in some calculations we'll need functions that take negative values, even though our stochastic states are nonnegative.
A state is a way our system can be. An observable is something we can measure about our system. They fit together: we can measure an observable when our system is in some state. If we repeat this we
may get different answers, but there's a nice formula for the average or 'expected' answer.
• In quantum mechanics, an observable is a self-adjoint operator $A$ on $L^2(X)$. The expected value of $A$ in the state $\psi$ is
$$ \langle \psi, A \psi \rangle $$
Here I'm assuming that we can apply $A$ to $\psi$ and get a new vector $A \psi \in L^2(X)$. This is automatically true when $X$ is a finite set, but in general we need to be more careful.
• In stochastic mechanics, an observable is a real-valued function $A$ on $X$. The expected value of $A$ in the state $\psi$ is
$$ \int A \psi $$
Here we're using the fact that we can multiply $A$ and $\psi$ and get a new vector $A \psi \in L^1(X)$, at least if $A$ is bounded. Again, this is automatic if $X$ is a finite set, but not otherwise.
Besides states and observables, we need 'symmetries', which are transformations that map states to states. We use these to describe how our system changes when we wait a while, for example.
• In quantum mechanics, an isometry is a linear map $U: L^2(X) \to L^2(X)$ such that
$$ \langle U \phi, U \psi \rangle = \langle \phi, \psi \rangle$$
for all $\psi, \phi \in L^2(X)$. If $U$ is an isometry and $\psi$ is a quantum state, then $U \psi$ is again a quantum state.
• In stochastic mechanics, a stochastic operator is a linear map $U: L^1(X) \to L^1(X)$ such that
$$ \int U \psi = \int \psi $$
$$ \psi \ge 0 \; \; \Rightarrow \; \; U \psi \ge 0 $$
for all $\psi \in L^1(X)$. If $U$ is stochastic and $\psi$ is a stochastic state, then $U \psi$ is again a stochastic state.
In quantum mechanics we are mainly interested in invertible isometries, which are called unitary operators. There are lots of these, and their inverses are always isometries. There are, however, very
few stochastic operators whose inverses are stochastic:
Puzzle 1. Suppose $X$ is a finite set. Show that any isometry $ U: L^2(X) \to L^2(X)$ is invertible, and its inverse is again an isometry.
Puzzle 2. Suppose $X$ is a finite set. Which stochastic operators $ U: L^1(X) \to L^1(X)$ have stochastic inverses?
This is why we usually think of time evolution as being reversible in quantum mechanics, but not in stochastic mechanics! In quantum mechanics we often describe time evolution using a '1-parameter
group', while in stochastic mechanics we describe it using a 1-parameter semigroup... meaning that we can run time forwards, but not backwards.
But let's see how this works in detail!
Time evolution in quantum mechanics
In quantum mechanics there's a beautiful relation between observables and symmetries, which goes like this. Suppose that for each time $t$ we want a unitary operator $U(t) : L^2(X) \to L^2(X)$ that
describes time evolution. Then it makes a lot of sense to demand that these operators form a 1-parameter group:
Definition. A collection of linear operators $U(t)$ ($t \in \mathbb{R}$) on some vector space forms a 1-parameter group if
$$ U(0) = 1 $$
$$ U(s+t) = U(s) U(t) $$
for all $s,t \in \mathbb{R}$.
Note that these conditions force all the operators $U(t)$ to be invertible.
Now suppose our vector space is a Hilbert space, like $L^2(X)$. Then we call a 1-parameter group a 1-parameter unitary group if the operators involved are all unitary.
It turns out that 1-parameter unitary groups are either continuous in a certain way, or so pathological that you can't even prove they exist without the axiom of choice! So, we always focus on the
continuous case:
Definition. A 1-parameter unitary group is strongly continuous if $U(t) \psi$ depends continuously on $t$ for all $\psi$, in this sense:
$$ t_i \to t \;\; \Rightarrow \; \;\|U(t_i) \psi - U(t) \psi \| \to 0 $$
Then we get a classic result proved by Marshall Stone back in the early 1930s. You may not know him, but he was so influential at the University of Chicago during this period that it's often called
the "Stone Age". And here's one reason why:
Stone's Theorem. There is a one-to-one correspondence between strongly continuous 1-parameter unitary groups on a Hilbert space and self-adjoint operators on that Hilbert space, given as follows.
Given a strongly continuous 1-parameter unitary group $U(t)$ we can always write
$$ U(t) = \exp(-i t H)$$
for a unique self-adjoint operator $H$. Conversely, any self-adjoint operator determines a strongly continuous 1-parameter group this way. For all vectors $\psi$ for which $H \psi$ is well-defined,
we have
$$ \left.\frac{d}{d t} U(t) \psi \right|_{t = 0} = -i H \psi $$
Moreover, for any of these vectors, if we set
$$ \psi(t) = \exp(-i t H) \psi $$
we have
$$ \frac{d}{d t} \psi(t) = - i H \psi(t) $$
When $U(t) = \exp(-i t H)$ describes the evolution of a system in time, $H$ is called the Hamiltonian, and it has the physical meaning of 'energy'. The equation I just wrote down is then called
Schrödinger's equation.
So, simply put, in quantum mechanics we have a correspondence between observables and nice one-parameter groups of symmetries. Not surprisingly, our favorite observable, energy, corresponds to our
favorite symmetry: time evolution!
However, if you were paying attention, you noticed that I carefully avoided explaining how we define $\exp(- i t H)$. I didn't even say what a self-adjoint operator is. This is where the
technicalities come in: they arise when $H$ is unbounded, and not defined on all vectors in our Hilbert space.
Luckily, these technicalities evaporate for finite-dimensional Hilbert spaces, such as $L^2(X)$ for a finite set $X$. Then we get:
Stone's Theorem (Baby Version). Suppose we are given a finite-dimensional Hilbert space. In this case, a linear operator $H$ on this space is self-adjoint iff it's defined on the whole space and
$$ \langle \phi , H \psi \rangle = \langle H \phi, \psi \rangle $$
for all vectors $\phi, \psi$. Given a strongly continuous 1-parameter unitary group $U(t)$ we can always write
$$ U(t) = \exp(- i t H) $$
for a unique self-adjoint operator $H$, where
$$ \exp(-i t H) \psi = \sum_{n = 0}^\infty \frac{(-i t H)^n}{n!} \psi $$
with the sum converging for all $\psi$. Conversely, any self-adjoint operator on our space determines a strongly continuous 1-parameter group this way. For all vectors $\psi$ in our space we then have
$$ \left.\frac{d}{d t} U(t) \psi \right|_{t = 0} = -i H \psi $$
and if we set
$$ \psi(t) = \exp(-i t H) \psi $$
we have
$$ \frac{d}{d t} \psi(t) = - i H \psi(t) $$
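The baby theorem can be illustrated numerically. Below is a sketch (not from the original post) using a made-up 2×2 Hermitian matrix, namely the Pauli X matrix: summing the truncated exponential series really does give a unitary $U(t)$.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=40):
    # Truncated Taylor series: exp(A) = I + A + A^2/2! + ...
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for n in range(1, terms):
        power = mat_mul(power, A)
        fact *= n
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

# A made-up Hermitian H (the Pauli X matrix) and a sample time t.
H = [[0.0, 1.0], [1.0, 0.0]]
t = 0.7
U = mat_exp([[-1j * t * H[i][j] for j in range(2)] for i in range(2)])

# Check unitarity: U times its conjugate transpose is the identity.
Udag = [[U[j][i].conjugate() for j in range(2)] for i in range(2)]
P = mat_mul(U, Udag)
assert abs(P[0][0] - 1) < 1e-9 and abs(P[0][1]) < 1e-9
```

For this particular $H$ the exact answer is $U(t) = \cos(t) I - i \sin(t) H$, which the series reproduces to machine precision.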
Time evolution in stochastic mechanics
We've seen that in quantum mechanics, time evolution is usually described by a 1-parameter group of operators that comes from an observable: the Hamiltonian. Stochastic mechanics is different!
First, since stochastic operators aren't usually invertible, we typically describe time evolution by a mere 'semigroup':
Definition. A collection of linear operators $U(t)$ ($t \in [0,\infty)$) on some vector space forms a 1-parameter semigroup if
$$ U(0) = 1 $$
$$ U(s+t) = U(s) U(t) $$
for all $s, t \ge 0$.
Now suppose this vector space is $L^1(X)$ for some measure space $X$. We want to focus on the case where the operators $U(t)$ are stochastic and depend continuously on $t$ in the same sense we
discussed earlier.
Definition. A 1-parameter strongly continuous semigroup of stochastic operators $U(t) : L^1(X) \to L^1(X)$ is called a Markov semigroup.
What's the analogue of Stone's theorem for Markov semigroups? I don't know a fully satisfactory answer! If you know, please tell me.
Later I'll say what I do know—I'm not completely clueless—but for now let's look at the 'baby' case where $X$ is a finite set. Then the story is neat and complete:
Theorem. Suppose we are given a finite set $X$. In this case, a linear operator $H$ on $L^1(X)$ is infinitesimal stochastic iff it's defined on the whole space,
$$ \int H \psi = 0 $$
for all $\psi \in L^1(X)$, and the matrix of $H$ in terms of the obvious basis obeys
$$ H_{i j} \ge 0 $$
for all $j \ne i$. Given a Markov semigroup $U(t)$ on $L^1(X)$, we can always write
$$ U(t) = \exp(t H) $$
for a unique infinitesimal stochastic operator $H$, where
$$ \exp(t H) \psi = \sum_{n = 0}^\infty \frac{(t H)^n}{n!} \psi $$
with the sum converging for all $\psi$. Conversely, any infinitesimal stochastic operator on our space determines a Markov semigroup this way. For all $\psi \in L^1(X)$ we then have
$$ \left.\frac{d}{d t} U(t) \psi \right|_{t = 0} = H \psi $$
and if we set
$$ \psi(t) = \exp(t H) \psi $$
we have the master equation:
$$ \frac{d}{d t} \psi(t) = H \psi(t) $$
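The theorem above can be checked numerically in a tiny case. The sketch below (with made-up rates, not from the original post) builds a 2×2 infinitesimal stochastic matrix, whose columns sum to zero with nonnegative off-diagonal entries, and verifies that the truncated exponential series turns it into a stochastic matrix:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=60):
    # Truncated Taylor series: exp(A) = I + A + A^2/2! + ...
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for n in range(1, terms):
        power = mat_mul(power, A)
        fact *= n
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

# Made-up rates: hop from state 1 to 2 at rate a, from 2 to 1 at rate b.
a, b = 2.0, 1.0
H = [[-a, b], [a, -b]]   # columns sum to zero; off-diagonal entries >= 0
t = 0.5
U = mat_exp([[t * H[i][j] for j in range(2)] for i in range(2)])

for j in range(2):
    # Each column of exp(tH) is a stochastic state:
    assert abs(U[0][j] + U[1][j] - 1) < 1e-9
    assert U[0][j] >= 0 and U[1][j] >= 0
```

For this $H$ the exact answer is a mixture of the stationary distribution $(1/3, 2/3)$ and a transient decaying like $e^{-3t}$.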
In short, time evolution in stochastic mechanics is a lot like time evolution in quantum mechanics, except it's typically not invertible, and the Hamiltonian is typically not an observable.
Why not? Because we defined an observable to be a function $A: X \to \mathbb{R}$. We can think of this as giving an operator on $L^1(X)$, namely the operator of multiplication by $A$. That's a nice
trick, which we used to good effect last time. However, at least when $X$ is a finite set, this operator will be diagonal in the obvious basis consisting of functions that equal 1 at one point of $X$
and zero elsewhere. So, it can only be infinitesimal stochastic if it's zero!
Puzzle 3. If $X$ is a finite set, show that any operator on $L^1(X)$ that's both diagonal and infinitesimal stochastic must be zero.
The Hille–Yosida theorem
I've now told you everything you really need to know... but not everything I want to say. What happens when $X$ is not a finite set? What are Markov semigroups like then? I can't abide letting this
question go unresolved! Unfortunately I only know a partial answer.
We can get a certain distance using the Hille–Yosida theorem, which is much more general.
Definition. A Banach space is a vector space with a norm such that any Cauchy sequence converges.
Examples include Hilbert spaces like $L^2(X)$ for any measure space, but also other spaces like $L^1(X)$ for any measure space!
Definition. If $V$ is a Banach space, a 1-parameter semigroup of operators $U(t) : V \to V$ is called a contraction semigroup if it's strongly continuous and
$$ \| U(t) \psi \| \le \| \psi \| $$
for all $t \ge 0 $ and all $\psi \in V$.
Examples include strongly continuous 1-parameter unitary groups, but also Markov semigroups!
Puzzle 4. Show any Markov semigroup is a contraction semigroup.
The Hille–Yosida theorem generalizes Stone's theorem to contraction semigroups. In my misspent youth, I spent a lot of time carrying around Yosida's book Functional Analysis. Furthermore, Einar
Hille was the advisor of my thesis advisor, Irving Segal. Segal generalized the Hille–Yosida theorem to nonlinear operators, and I used this generalization a lot back when I studied nonlinear
partial differential equations. So, I feel compelled to tell you this theorem:
Hille–Yosida Theorem. Given a contraction semigroup $U(t)$ we can always write
$$ U(t) = \exp(t H)$$
for some densely defined operator $H$ such that $H - \lambda I $ has an inverse and
$$ \displaystyle{ \| (H - \lambda I)^{-1} \psi \| \le \frac{1}{\lambda} \| \psi \| } $$
for all $\lambda \gt 0 $ and $\psi \in V$. Conversely, any such operator determines a contraction semigroup this way. For all vectors $\psi$ for which $H \psi$ is well-defined, we have
$$ \left.\frac{d}{d t} U(t) \psi \right|_{t = 0} = H \psi $$
Moreover, for any of these vectors, if we set
$$ \psi(t) = U(t) \psi $$
we have
$$ \frac{d}{d t} \psi(t) = H \psi(t) $$
If you like, you can take the stuff at the end of this theorem to be what we mean by saying $U(t) = \exp(t H)$. When $ U(t) = \exp(t H)$, we say that $ H$ generates the semigroup $ U(t)$.
But now suppose $V = L^1(X)$. Besides the conditions in the Hille–Yosida theorem, what extra conditions on $H$ are necessary and sufficient for $H$ to generate a Markov semigroup? In other words,
what's a definition of 'infinitesimal stochastic operator' that's suitable not only when $X$ is a finite set, but an arbitrary measure space?
I asked this question on Mathoverflow a few months ago, and so far the answers have not been completely satisfactory.
Some people mentioned the Hille–Yosida theorem, which is surely a step in the right direction, but not the full answer.
Others discussed the special case when $\exp(t H)$ extends to a bounded self-adjoint operator on $L^2(X)$. When $X$ is a finite set, this special case happens precisely when the matrix $H_{i j}$ is
symmetric: the probability of hopping from $j$ to $i$ equals the probability of hopping from $i$ to $j$. This is a fascinating special case, not least because when $H$ is both infinitesimal
stochastic and self-adjoint, we can use it as a Hamiltonian for both stochastic mechanics and quantum mechanics! Someday I want to discuss this. However, it's just a special case.
After grabbing people by the collar and insisting that I wanted to know the answer to the question I actually asked—not some vaguely similar question—the best answer seems to be Martin Gisser's
reference to this book:
• Zhi-Ming Ma and Michael Röckner, Introduction to the Theory of (Non-Symmetric) Dirichlet Forms, Springer, Berlin, 1992.
This book provides a very nice self-contained proof of the Hille–Yosida theorem. On the other hand, it does not answer my question in general, but only when the skew-symmetric part of $ H$ is
dominated (in a certain sense) by the symmetric part.
So, I'm stuck on this front, but that needn't bring the whole project to a halt. We'll just sidestep this question.
For a good well-rounded introduction to Markov semigroups and what they're good for, try:
• Ryszard Rudnicki, Katarzyna Pichór and Marta Tyran-Kamińska, Markov semigroups and their applications.
You can also read comments on Azimuth, and make your own comments or ask questions there!
Here are the answers to the puzzles. The answer to Puzzle 2 is an expanded version of one given by Graham Jones.
Puzzle 1. Suppose $X$ is a finite set. Show that any isometry $ U: L^2(X) \to L^2(X)$ is invertible, and its inverse is again an isometry.
Answer. Remember that $U$ being an isometry means that it preserves the inner product:
$$\langle U \psi, U \phi \rangle = \langle \psi, \phi \rangle $$
and thus it preserves the $L^2$ norm given by $\| \psi \| = \langle \psi, \psi \rangle^{1/2}$:
$$\|U \psi \| = \| \psi \| $$
It follows that if $U\psi = 0$, then $\psi = 0$, so $U$ is one-to-one. Since $U$ is a linear operator from a
finite-dimensional vector space to itself, $U$ must therefore also be onto. Thus $U$ is invertible, and because $U$ preserves the inner product, so does its inverse: given $\psi, \phi \in L^2(X)$ we
have $$\langle U^{-1} \phi, U^{-1} \psi \rangle = \langle \phi, \psi \rangle $$
since we can write $\phi' = U^{-1} \phi,$ $\psi' = U^{-1} \psi$ and then the above equation says
$$ \langle \phi' , \psi' \rangle = \langle U \phi' , U \psi' \rangle $$
Puzzle 2. Suppose $X$ is a finite set. Which stochastic operators $ U: L^1(X) \to L^1(X)$ have stochastic inverses?
Answer. Suppose the set $ X$ has $ n$ points. Then the set of stochastic states
$$ S = \{ \psi : X \to \mathbb{R} \; : \; \psi \ge 0, \quad \int \psi = 1 \} $$
is a simplex. It's an equilateral triangle when $ n = 3$, a regular tetrahedron when $ n = 4$, and so on.
In general, $S$ has $ n$ corners, which are the functions $ \psi$ that equal 1 at one point of $X$ and zero elsewhere. Mathematically speaking, $S$ is a convex set, and its corners are its extreme
points: the points that can't be written as convex combinations of other points of $ S$ in a nontrivial way.
Any stochastic operator $ U$ must map $ S$ into itself, so if $ U$ has an inverse that's also a stochastic operator, it must give a bijection $ U : S \to S$. Any linear transformation acting as a
bijection between convex sets must map extreme points to extreme points (this is easy to check), so $ U$ must map corners to corners in a bijective way. This implies that it comes from a permutation
of the points in $ X$.
In other words, any stochastic matrix with an inverse that's also stochastic is a permutation matrix: a square matrix with every entry 0 except for a single 1 in each row and each column.
It is worth adding that there are lots of stochastic operators whose inverses are not, in general, stochastic. We can see this in at least two ways.
First, for any measure space $X$, every stochastic operator $U : L^1(X) \to L^1(X)$ that's 'close to the identity' in this sense:
$$ \| U - I \| \lt 1 $$
(where the norm is the operator norm) will be invertible, simply because every operator obeying this inequality is invertible! After all, if this inequality holds, we have a convergent geometric series:
$$ \displaystyle{ U^{-1} = \frac{1}{I - (I - U)} = \sum_{n = 0}^\infty (I - U)^n } $$
Second, suppose $X$ is a finite set and $H$ is infinitesimal stochastic operator on $ L^1(X)$. Then $H$ is bounded, so the stochastic operator $\exp(t H)$ where $t \gt 0$ will always have an inverse,
namely $\exp(-t H)$. But for $t$ sufficiently small, this inverse $\exp(-tH)$ will only be stochastic if $-H$ is infinitesimal stochastic, and that's only true if $H = 0$.
In something more like plain English: when you've got a finite set of states, you can formally run any Markov process backwards in time, but a lot of those 'backwards-in-time' operators will involve
negative probabilities for the system to hop from one state to another!
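A tiny numerical illustration of that last point, with made-up entries (not from the original post): inverting a stochastic matrix preserves its column sums, but the inverse picks up negative entries, so it is not stochastic.

```python
U = [[0.9, 0.2], [0.1, 0.8]]   # stochastic: columns sum to 1, entries >= 0

det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
Uinv = [[ U[1][1] / det, -U[0][1] / det],
        [-U[1][0] / det,  U[0][0] / det]]

# The inverse still preserves totals (its columns sum to 1)...
assert abs(Uinv[0][0] + Uinv[1][0] - 1) < 1e-12
# ...but it has negative entries, so it is not a stochastic operator:
assert Uinv[0][1] < 0 and Uinv[1][0] < 0
```

This is exactly the 'negative probabilities' phenomenon: the backwards-in-time operator exists as a matrix, but it cannot be interpreted stochastically.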
Puzzle 3. If $X$ is a finite set, show that any operator on $L^1(X)$ that's both diagonal and infinitesimal stochastic must be zero.
Answer. We are thinking of operators on $L^1(X)$ as matrices with respect to the obvious basis of functions that equal 1 at one point and 0 elsewhere. If $H_{i j}$ is an infinitesimal stochastic
matrix, the sum of the entries in each column is zero. If it's diagonal, there's at most one nonzero entry in each column. So, we must have $H = 0$.
Puzzle 4. Show any Markov semigroup $ U(t): L^1(X) \to L^1(X)$ is a contraction semigroup.
Answer. We need to show $$ \|U(t) \psi\| \le \| \psi \| $$
for all $t \ge 0$ and $\psi \in L^1(X)$. Here the norm is the $L^1$ norm, so more explicitly we need to show
$$ \int |U(t) \psi | \le \int |\psi| $$
We can split $ \psi$ into its positive and negative parts:
$$\psi = \psi_+ - \psi_-$$
$$ \psi_{\pm} \ge 0$$
Since $ U(t)$ is stochastic we have
$$ U(t) \psi_{\pm} \ge 0$$
$$ \int U(t) \psi_\pm = \int \psi_\pm $$
$$ \begin{array}{ccl} \int |U(t) \psi | &=& \int |U(t) \psi_+ - U(t) \psi_-| \\ &\leq & \int |U(t) \psi_+| + |U(t) \psi_-| \\ &=& \int U(t) \psi_+ + U(t) \psi_- \\ &=& \int \psi_+ + \psi_- \\ &=& \int |\psi| \end{array} $$
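The inequality chain above can also be checked numerically in a tiny made-up case (a sketch, not part of the original post): a stochastic matrix never increases the $L^1$ norm of a signed vector, and preserves it exactly on stochastic states.

```python
def l1_norm(v):
    return sum(abs(x) for x in v)

def apply(U, v):
    return [U[0][0] * v[0] + U[0][1] * v[1],
            U[1][0] * v[0] + U[1][1] * v[1]]

U = [[0.9, 0.2], [0.1, 0.8]]      # a stochastic matrix (columns sum to 1)

psi = [0.7, -0.4]                 # a signed vector, not a stochastic state
assert l1_norm(apply(U, psi)) <= l1_norm(psi) + 1e-12

phi = [0.25, 0.75]                # a stochastic state: the norm is exactly preserved
assert abs(l1_norm(apply(U, phi)) - 1) < 1e-12
```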
© 2011 John Baez
if number closest to zero does sign matter?
July 27th 2010, 08:13 AM #1
Jul 2010
if number closest to zero does sign matter?
Ok this is kind of silly.
If I had these numbers {3, 5, -2, 7, 4, 9, 2, 12, -3, 8, -5} where I am to pick the one closest to zero and only one number must be returned, would the sign of the number matter? No the order of
the numbers presented does not matter.
Both -2 and 2 are closest to zero. Is it the positive number that is considered closest or the negative one? Is there a math rule for this kind of situation?
Appreciate any answers.
Both -2 and 2 are the same distance away from zero, and they are, as you said, the closest to zero in that list. In this situation, there are two solutions. If the problem asks for only one
solution, then either the problem statement itself has a method of choosing between them, or you can just pick the one you want.
Hmm... thanks for the quick reply.
What situations would have you picking one from the other. Can you cite an instance please?
I'm thinking about a line drawn from either side, so it crosses one number first. So in that case, that number would be closest to zero. Am I making sense at all?
Your second question is kind of vague. From "either side" of what?
Well, the problem statement might specify the closest number to zero that is also positive. Or, perhaps you're looking at the solution set of an equation, and you had to square the equation in
order to solve. You might, at that point, introduce extraneous solutions, which you can then throw out later, perhaps on physical grounds if it's a physics problem.
It depends
In terms of distance it makes no difference. In terms of direction it does.
Oh my apologies, I meant a line drawn coming in from the left side towards zero or a line coming in from the right towards zero.
Thanks to both of you for the answers . It's clear now. I appreciate it much.
You're welcome. Have a good one!
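Incidentally, the same issue shows up when the question is coded up: a tie must be broken somehow. In Python, `min` with `key=abs` keeps the first minimal element it encounters, so list order silently becomes the tie-breaker; a sketch:

```python
nums = [3, 5, -2, 7, 4, 9, 2, 12, -3, 8, -5]

closest = min(nums, key=abs)        # ties go to the earliest element: -2 here
print(closest)                      # -2

# If the problem statement prefers the positive candidate, make that explicit:
closest_pos_bias = min(nums, key=lambda x: (abs(x), x < 0))
print(closest_pos_bias)             # 2
```

Either rule is fine; the point from the replies above is that the problem statement, not the math, has to choose it.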
Convert lambda/minute to kilolitre/second - Conversion of Measurement Units
›› Convert lambda/minute to kilolitre/second
›› More information from the unit converter
How many lambda/minute in 1 kilolitre/second? The answer is 60000000000.
We assume you are converting between lambda/minute and kilolitre/second.
You can view more details on each measurement unit:
lambda/minute or kilolitre/second
The SI derived unit for volume flow rate is the cubic meter/second.
1 cubic meter/second is equal to 60000000000 lambda/minute, or 1 kilolitre/second.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between lambdas/minute and kiloliters/second.
Type in your own numbers in the form to convert the units!
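The conversion itself is just a ratio through the SI base unit for volume flow: 1 lambda = 1 microlitre = 10^-9 m^3, and 1 kilolitre/second = 1 m^3/s. A sketch of both directions in Python:

```python
LAMBDA_PER_MIN_IN_M3_PER_S = 1e-9 / 60.0   # 1 lambda/minute in cubic metres/second
KL_PER_S_IN_M3_PER_S = 1.0                 # 1 kilolitre/second is exactly 1 m^3/s

def lambda_per_min_to_kl_per_s(x):
    return x * LAMBDA_PER_MIN_IN_M3_PER_S / KL_PER_S_IN_M3_PER_S

def kl_per_s_to_lambda_per_min(x):
    return x * KL_PER_S_IN_M3_PER_S / LAMBDA_PER_MIN_IN_M3_PER_S

print(kl_per_s_to_lambda_per_min(1.0))   # ~60000000000 lambda/minute
```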
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data.
Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres
squared, grams, moles, feet per second, and many more!
Find a SAT Math Tutor
...I am able to identify where students are having difficulty and show them how to correctly do the problems. Sometimes, other basic skills need to be reviewed! I try to relate the concepts to
real-life situations so they have a reference point and then they can make the connection.
10 Subjects: including SAT math, geometry, algebra 2, algebra 1
...I'm a Princeton graduate in Mechanical Engineering specializing in math, science, and test prep. I scored 790M/780W/760CR on my SATs; I am a National Merit Finalist for the PSAT, and I earned
perfect 5s on: Physics C E&M, Calculus BC, Physics C Mech, Biology, Psychology, Physics B, and English L...
26 Subjects: including SAT math, English, calculus, writing
...In a similar fashion, I teach my students how to spot the clues or indicators that high achieving students intuitively look for and find so readily. I show students how to use this information
to know what to do when solving math/science problems and develop smarter, more efficient problem solvi...
40 Subjects: including SAT math, chemistry, writing, English
...I am a National Forensic League Member of Premier Distinction and was proud to receive their prestigious All-American Award about a year ago. I have a great deal of experience with college
admissions, especially ivy league admissions. I was admitted to three of the eight ivy league schools before selecting UPenn.
43 Subjects: including SAT math, reading, English, algebra 1
...While I am based in NYC and conduct my sessions in my clients’ homes, I regularly work remotely with boarding school, undergraduate and graduate school students utilizing Skype, FaceTime, and
Google Drive. I have a 98% client satisfaction rate and 96% of my students have gained admission to thei...
36 Subjects: including SAT math, English, reading, Spanish | {"url":"http://www.purplemath.com/southern_md_facility_md_sat_math_tutors.php","timestamp":"2014-04-21T10:53:14Z","content_type":null,"content_length":"23987","record_id":"<urn:uuid:9df3ab11-81a5-4d5e-8096-69c53e4ab212>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cost Accounting - Acct 3334 Chapter 9 Solutions by Lindaluo1
Chapter 9 solutions (P9-28, -29, -40)
9-28 (10 min.) Capacity management, denominator-level capacity concepts. 1. d 2. c, d 3. d 4. a 5. c 6. a, b 7. a 8. b 9. c, d 10. b 11. a, b
9-29 (25 min.) Denominator-level problem
1. Budgeted fixed manufacturing overhead cost rates:
[Table: for each denominator-level capacity concept, the budgeted fixed overhead cost and the resulting budgeted fixed overhead rate per unit; the figures did not survive extraction.]
The rates are different because of varying denominator-level concepts. Theoretical and practical capacity levels are driven by supply-side concepts, i.e., “how much can we produce?” Normal and
master-budget capacity levels are driven by demand-side concepts, i.e., “how much can we sell?” (or “how much should we produce?”)
2.In order to incorporate fixed manufacturing costs into unit product costs, fixed manufacturing costs have to be unitized for inventory costing. Absorption costing is the method used for tax
reporting and for financial reporting using generally accepted accounting principles. The choice of a denominator level becomes relevant under absorption costing because fixed costs are accounted for
along with variable costs at the individual product level. Variable and throughput costing account for fixed costs as a lump sum, expensed in the period incurred.
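As a concrete illustration of that unitizing step, here is a sketch with made-up figures (not from the textbook problem): the same budgeted fixed overhead divided by each denominator-level choice gives a different rate per unit.

```python
# Hypothetical figures (not from the problem): $1,200,000 of budgeted fixed
# overhead unitized under four denominator-level choices.
budgeted_fixed_overhead = 1_200_000

denominator_levels = {            # assumed units of output for each concept
    "theoretical":   100_000,     # supply-side: maximum conceivable output
    "practical":      80_000,     # supply-side: maximum realistic output
    "normal":         60_000,     # demand-side: average expected demand
    "master budget":  50_000,     # demand-side: this period's planned output
}

rates = {level: budgeted_fixed_overhead / units
         for level, units in denominator_levels.items()}
print(rates)   # larger denominator -> smaller fixed overhead cost per unit
```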
3.The variances that arise from use of the theoretical or practical level concepts will signal that there is a divergence between the supply of capacity and the demand for capacity. This is useful
input to managers. As a general rule, however, it is important not to place undue reliance on the production volume variance as a measure of the economic costs of unused capacity. | {"url":"http://www.studymode.com/course-notes/Cost-Accounting-Acct-3334-Chapter-1255401.html","timestamp":"2014-04-20T05:51:21Z","content_type":null,"content_length":"34881","record_id":"<urn:uuid:0a8be676-d5e5-498a-a264-677879c3473f>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00479-ip-10-147-4-33.ec2.internal.warc.gz"} |
Use vectors to prove ...
December 31st 2006, 07:54 AM #1
Nov 2006
question 1
Use vectors to prove that the line segment joining the midpoint of two sides of a triangle is parallel to the third side and half as long.
Question 2
Use vectors to prove that the midpoints of the sides of a quadrilateral are the vertices of a parallelogram.
Can you show me how to do them? Thank you very much.
Well, two things first,
1) HAPPY NEW YEAR TO ALL !
2) My Pittsburgh Steelers have just eliminated the Cinci Bungles from NFL playoffs! WE DEY!
Okay, for your Question #1 now.
Here is one way. It's very crude because I'd be using numbers or definite lengths for the vectors. Variables are better for proofs, but hey, it's holiday today. Why crack our heads on a holiday?
Say we have triangle whose vertices are A(0,0), B(8,6) and C(10,0).
The two sides AB and AC are halved each. D(4,3) is midpoint of AB. E(5,0) is midpoint of AC.
b) Is DE half as long as BC?
DE = sqrt[(4-5)^2 +(3-0)^2] = sqrt[1 +9] = sqrt(10) units long.
BC = sqrt[(8-10)^2 +(6-0)^2] = sqrt[4 +36] = sqrt(40) = 2sqrt(10) units long.
Therefore, yes, DE is half in length of BC.-------proven.
a) Is DE parallel to BC?
DE = AE - AD -----in vectors.
DE = ((5 - 4),(0 - 3))
DE = (1,-3) ---------------***
BC = AC - AB .....in vectors.
BC = ((10 - 8),(0 - 6))
BC = (2,-6)
BC = 2(1,-3) ---------------***
Since BC is just DE multiplied by a scalar of 2, then BC and DE are parallel. ----proven.
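The coordinate check above is easy to script; this sketch verifies both claims (parallel via a zero cross product, and half the length) for the same triangle:

```python
A, B, C = (0, 0), (8, 6), (10, 0)

def midpoint(P, Q):
    return ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)

def vec(P, Q):                      # vector from P to Q
    return (Q[0] - P[0], Q[1] - P[1])

D, E = midpoint(A, B), midpoint(A, C)
DE, BC = vec(D, E), vec(B, C)

parallel = DE[0] * BC[1] - DE[1] * BC[0] == 0       # zero cross product
half = (BC[0], BC[1]) == (2 * DE[0], 2 * DE[1])     # BC is exactly twice DE
print(parallel, half)                               # True True
```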
That's all for now. I just wanted to say Happy New Year and to needle the Bungleds.
Last edited by ticbol; December 31st 2006 at 02:08 PM.
Umm, before I lose this, let me prove what's in your Question #2 by words only.
Take any convex quadrilateral. Draw the two diagonals.
Any diagonal divides the quadrilateral into two triangles with a common side or base---the diagonal. Draw the line segments connecting the midpoints of the other two sides for each triangle.
Remembering your Question #1 above, each of these line segments is half of the diagonal and each is parallel to the diagonal. So both line segments are equal in length and are parallel to each other.
A little more thinking (like, in a parallelogram, opposite sides are equal and parallel), add two and two, and you have the proof for your Question #2.
Last edited by ticbol; December 31st 2006 at 02:07 PM.
The above discussion of #2 assumes that the quadrilateral is convex. But it need not be for this to be true. Suppose that ABCD is a quadrilateral and J is midpoint of AB, K is the midpoint of BC,
L is the midpoint of CD, and M is the midpoint of DA. Now we prove that JKLM is a parallelogram.
$\begin{array}{rcl} \overrightarrow {JK} & = & \frac{1}{2}\overrightarrow {AB} + \frac{1}{2}\overrightarrow {BC} \\ \overrightarrow {LM} & = & \frac{1}{2}\overrightarrow {CD} + \frac{1}{2}\overrightarrow {DA} \\ \overrightarrow {JK} + \overrightarrow {LM} & = & \frac{1}{2}\left( {\overrightarrow {AB} + \overrightarrow {BC} + \overrightarrow {CD} + \overrightarrow {DA} } \right) = 0 \\ \overrightarrow {JK} & = & - \overrightarrow {LM} \\ \end{array}.$
Thus JKLM has opposite sides that are parallel and have the same length.
Hello, Jenny!
1) Use vectors to prove that the line segment joining the midpoint of two sides of a triangle
is parallel to the third side and half as long.
          A
         * *
        *   *
     D *-----* E
      *       *
     *         *
    B           C
Let $D$ and $E$ be the midpoints of $AB$ and $AC$, respectively.
Draw line segment $DE.$
We know that: . $\overrightarrow{BA} + \overrightarrow{AC} \:=\:\overrightarrow{BC}$[1]
We know that: . $\overrightarrow{DA} + \overrightarrow{AE} \:=\:\overrightarrow{DE}$[2]
We are told that: . $\overrightarrow{DA} \:=\:\frac{1}{2}\overrightarrow{BA}$ .and . $\overrightarrow{AE} \:=\:\frac{1}{2}\overrightarrow{AC}$
Substitute into [2]: . $\overrightarrow{DE} \:=\:\frac{1}{2}\overrightarrow{BA} + \frac{1}{2}\overrightarrow{AC} \:=\:\frac{1}{2}\left(\overrightarrow{BA} + \overrightarrow{AC}\right)$
From [1], we have: . $\overrightarrow{DE} \:=\:\frac{1}{2}\overrightarrow{BC}$
Therefore: . $\overrightarrow{DE} \,\parallel\, \overrightarrow{BC}$ .and . $\left|\overrightarrow{DE}\right| \:=\:\frac{1}{2}\left|\overrightarrow{BC}\right|$
I worked out two proofs for these, but Plato and Soroban beat me. I am going to post anyway, so there.
Let a, b, and c be vectors along the sides of a triangle, directed so that $a+b+c=0$, and A,B the midpoints of a and b. Then the vector u from A to B is
$u=\frac{1}{2}a+\frac{1}{2}b=-\frac{1}{2}c,$
so u is parallel to c and half as long.
Let a,b,c,d be vectors along the sides of the quadrilateral and A,B,C,D be the corresponding midpoints, then
$u=\frac{1}{2}b+\frac{1}{2}c$ and $v=\frac{1}{2}d-\frac{1}{2}a$,
but $d=a+b+c$
so, $v=\frac{1}{2}(a+b+c)-\frac{1}{2}a=\frac{1}{2}b+\frac{1}{2}c=u$
Therefore $v=u$. Hence sides AD and BC are equal and parallel, and ABCD is a parallelogram.
Click on the links to see the respective diagrams. They're not much, but I hope they help.
Last edited by galactus; November 24th 2008 at 05:39 AM.
The Harvard Crimson
What is 256 times 98? Can you do the multiplication without using a calculator? Two thirds of Massachusetts fourth-graders could not when they were asked this question on the statewide MCAS
assessment test last year.
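For the record, the product is well within pencil-and-paper reach by the standard tricks; one route:

```
256 × 98 = 256 × (100 - 2)
         = 25,600 - 512
         = 25,088
```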
Math education reformers have a prescription for raising the mathematical knowledge of schoolchildren. Do not teach the standard algorithms of arithmetic, such as long addition and multiplication,
they say; let the children find their own methods for adding and multiplying two-digit numbers, and for larger numbers, let them use calculators. One determined reformer puts it decisively: "It's
time to acknowledge that continuing to teach these skills (i.e., pencil-and-paper computational algorithms) to our students is not only unnecessary, but counterproductive and downright dangerous."
Mathematicians are perplexed, and the proverbial man on the street, when hearing the argument, appears to be perplexed as well: improve mathematical literacy by downgrading computational skills?
Yes, precisely, say the reformers. The old ways of teaching mathematics have failed. Too many children are scared of mathematics for life. Let's teach them mathematical thinking, not routine skills.
Understanding is the key, not computations.
Mathematicians are not convinced. By all means, liven up the textbooks, make the subject engaging and include interesting problems. But don't give up on basic skills! Conceptual understanding can and
must coexist with computational facility--we do not need to choose between them.
The disagreement extends over the entire mathematics curriculum, kindergarten through high school. It runs right through the National Council of Teachers of Mathematics (NCTM), the professional
organization of mathematics teachers. The new NCTM curriculum guidelines, presented with great fanfare on April 12, represent an earnest effort at finding common ground, but barely manage to
paper-over the differences.
Among teachers and mathematics educators, the avant-garde reformers are the most energetic, and their voices drown out those skeptical of extreme reforms. On the other side, among academic
mathematicians and scientists who have reflected on these questions, a clear majority oppose the new trends in math education. The academics, mostly unfamiliar with education issues, have been
reluctant to join the debate. But finally, some of them are speaking up.
Parents, for the most part, have also been silent, trusting the experts--the teachers' organizations and math educators. Several reform curricula do not provide textbooks in the usual sense, and this
deprives parents of one important source of information. Yet, also among parents, attitudes may be changing. A recent front-page headline in the New York Times declares that "The New, Flexible Math
Meets Parental Rebellion."
The stakes are high in this argument. State curriculum frameworks need to be written, and these serve as basis for assessment tests; some of the reformers receive substantial educational research
grants, consulting fees or textbook royalties. For now, the reformers have lost the battle in California. They are redoubling their efforts in Massachusetts, where the curriculum framework is being
revised. The struggle is fierce, by academic standards.
Both sides cite statistical studies and anecdotal evidence to support their case. Unfortunately, statistical studies in education are notoriously unreliable--blind studies, for example, are difficult
to construct. And for every charismatic teacher who succeeds with a "progressive" approach in the classroom, there are other teachers who manage to raise test scores dramatically by "going back to basics."
The current fight echoes an earlier argument, over the "New Math" of the '60s and '70s. Then, as now, the old ways were thought to have failed. A small band of mathematicians proposed shifting the
emphasis towards a deeper understanding of mathematical concepts, though on a much more abstract level than today's reformers. Math educators took up the cause, but over time, most mathematicians and
parents became unhappy with the results. What had gone wrong? Preoccupied with "understanding," the "New Math" reformers had neglected computational skills. Mathematical understanding, it turned out,
did not develop well without sufficient computational practice. Understanding and skills grow best in tandem, each supporting the other. In most areas of human endeavor, mastery cannot be attained
without technique. Why should mathematics be different?
American schoolchildren rank near the bottom in international comparisons of mathematical knowledge. Our reformers see this as an argument for their ideas. But look at Singapore, the undisputed
leader in these comparisons: their math textbooks try hard to engage the students and to stimulate their interest. In early grades, they present mathematical problems playfully, often in the guise of
puzzles. Yet the textbooks are coherent, systematic, efficient, and cover all the basics--worlds apart from the reform curricula in this country. How I wish Singapore's approach were adopted in my
daughter's school!
The curriculum, of course, is not the only reason for Singapore's success, nor is it even the most important reason. The teachers' grasp and feeling for mathematics: that is the crucial issue,
already for teachers in the early grades. Here, it turns out, many of the reformers agree with the critics. Teacher training in America has traditionally and grossly stressed pedagogy over content.
The implicit message to the teachers is: If you know how to teach, you can teach anything! It will take a heroic effort--by mathematicians and math educators--to change the entrenched culture of
teacher training.
Mathematicians do not want to invade the educators' turf. We are not qualified to do their work. Yet we are qualified as critics of reforms in math education. We should call attention to reforms we
see as well meaning, but hectic and harmful. Most music critics would not do well as orchestra musicians. They do have acute hearing for shrill sounds from the orchestra.
Wilfried Schmid is Dwight Parker Robinson Professor of Mathematics. Earlier this year, he served as a mathematics advisor to the Massachusetts Department of Education. | {"url":"http://www.thecrimson.com/article/2000/5/4/new-battles-in-the-math-wars/","timestamp":"2014-04-17T03:57:19Z","content_type":null,"content_length":"22695","record_id":"<urn:uuid:b6d599bb-08e1-489d-b4f8-96e050048812>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00557-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Total # Posts: 27
How many integers are common to the solution sets of both inequalities? x+7≥3x 3x+4≤5x
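(A brute-force check of this one, a sketch: test both inequalities over the integers and intersect.)

```python
# x + 7 >= 3x gives x <= 3.5; 3x + 4 <= 5x gives x >= 2.
common = [x for x in range(-1000, 1001) if x + 7 >= 3 * x and 3 * x + 4 <= 5 * x]
print(common)          # [2, 3], so 2 integers are common to both solution sets
```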
I have a final due within this week. I have to create a 15-20 slide power point presentation storyboard. I do not know which one to pick; either a medical documentary or a training presentation for
new hires. I have to include 45 medical words. Can someone help me on how to ev...
Health Care communication
What are the major components in health communication?
Health Care communication
To know the similarities and differences of the principles of communication with major components of health communication
Health Care communication
Compare the similarities and differences of the principles of communication with the major components of health communication. What observations do you have from your comparison? Include a list of
the principles of communication and the major components of health communication...
Health Care Communication
Compare the similarities and differences of the principles of communication you learned in Week One with the major components of health communication you learned in Week Eight. What observations do
you have from your comparison? Include a list of the principles of communicatio...
The legs of an isosceles triangle have lengths 2x+4 and x-8; the base has length 5x-2. What's the length of the base?
Algebra-Please help me!
x - 3 = 2 - x add x to both sides 2x - 3 = 2 Add 3 to both sides 2x = 5 divide by 2 x = 2.5
AP Lit.
Thank you!
AP Lit.
It's confusing because I don't know whether to separate the interjection "Rawr" from what I am saying. Also, I'd like to know if it is grammatically correct to put extra information ('And as I walked
over to the closet and checked there was...nothing....
AP Lit.
I have to write a personal statement and I have a very unintelligent question. I'm trying to start off my statement with this "anecdote-ish" type thing. Is the below paragraph formated properly:
Rawr! "There are monsters in my closet!" I screamed. And a...
Figure ABCD is a trapezoid with DC = (1/3) AB. What is the ratio of area ABCD to area DCE? Provide an argument to support your answer. Could it be 1:3 ? Can you please explain?
thank you
A shopping center plot of land is in the shape of a parallelogram. Give two other facts about the plot or roads through it. I do not understand; can you please explain? I know that a quadrilateral is
a parallelogram if and only if it has two pairs of parallel sides. A quadrila...
A box contains two dimes and four quarters. What is the expected value of a single draw of one coin under the assumption that one cannot distinguish between the coins on the basis of size? can you
please explaing for elementery students
Thank you!
A parent's chance of passing a rare inherited disease on to a child is 0.15. What is the probability that, in a family of three children, none of the children inherits the disease from the parent?
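(This one reduces to independence of the three births; a sketch of the computation:)

```python
p_pass = 0.15                  # chance of passing the disease to one child
p_none = (1 - p_pass) ** 3     # three independent children, none inheriting
print(round(p_none, 6))        # 0.614125
```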
Suppose a contest offers $1000 to the person who can guess the winning four digit number. How many possibilities are there?
Two marbles are drawn from a bag containing 6 white, 4 red, and 6 green marbles. Find the probability of both marbles being white.
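(A sketch of this one, drawing without replacement, using exact fractions:)

```python
from fractions import Fraction

white, total = 6, 6 + 4 + 6    # 6 white among 16 marbles
p_both_white = Fraction(white, total) * Fraction(white - 1, total - 1)
print(p_both_white)            # 1/8
```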
A basketball player has made 135 free throws in 180 attempts. Of the next 50 free throws attempted, how many would she have to make to raise her percent of free throws made by 5%?
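(A sketch of this one in integer arithmetic, searching for the smallest number of makes that lifts the rate from 75% to 80%:)

```python
made, attempts = 135, 180      # 135/180 = 75% made so far
target_pct = 75 + 5            # raise the rate by 5 points to 80%

needed = next(x for x in range(51)
              if (made + x) * 100 >= target_pct * (attempts + 50))
print(needed)                  # 49 of the next 50
```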
why is my question getting ignored?
The experiment requires 1 mg per ml of iron (Fe3+). What mass of iron (III) nitrate hexahydrate (Fe(NO3)3·6H2O) would you need to dissolve in 1 litre to achieve this value?
What is the shortest length of television cable that could be cut into either a whole number of 18-ft pieces or a whole number of 30-ft pieces?
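(This is the least common multiple of the two piece lengths; a sketch:)

```python
from math import gcd

lcm = 18 * 30 // gcd(18, 30)   # least common multiple of 18 and 30
print(lcm)                     # 90, so a 90-ft cable works for either cut
```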
Consider a bowl containing 36 different slips of paper. Ten of the slips of paper each contain one of the digits from the set 0 through 9 and 26 slips each contain one of the 26 capital letters of
the alphabet. If one slip is drawn at random, what is P(slip contains a letter f...
The shorter leg of a 30° 60° 90° triangle is 10. What are the lengths of the longer leg and the hypotenuse, to the nearest tenth? can you help me to understand? I know that The length of the
hypotenuse of any 30° 60° 90° triangle is ...
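(Completing the 1 : sqrt(3) : 2 side ratio the poster started from, a sketch:)

```python
from math import sqrt

short = 10
longer = short * sqrt(3)       # side ratio 1 : sqrt(3) : 2
hyp = 2 * short
print(round(longer, 1), hyp)   # 17.3 20
```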
pigskin geography
pigskin geography
yo i dont get this crap do u? | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=purple","timestamp":"2014-04-18T09:46:04Z","content_type":null,"content_length":"11804","record_id":"<urn:uuid:34f18a34-39f5-4657-8b3d-29ea196beffa>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00393-ip-10-147-4-33.ec2.internal.warc.gz"} |
Series of terms following polynomial recurrence relation
I'm stuck trying to prove a statement that seems very reasonable based on computer experiments. For an integer $N$ and any $\epsilon \in (0,1)$, I define a sequence $u_0 = N$ and for all $k\geq 1$:
$$u_k = u_{k-1} - \frac{\epsilon^2}{2N}u_{k-1}(u_{k-1}-1) + \frac{\epsilon^3}{6N^2} u_{k-1} (u_{k-1}-1) (u_{k-1}-2)$$ The claim is that $$ \sum_{k\geq 0} (u_k - 1) \leq C N \log N$$ for some $0 < C <
\infty$ independent of $N$. I don't need $C$ to be explicit, just independent of $N$.
By defining the function $g$ such that $u_k = g(u_{k-1})$, and looking at $g$, I can say that $g$ is increasing, concave, $\lambda$-contracting with $\lambda \leq 1 - \epsilon^2/N = \sup_{u\in[1,N]}
\vert g'(u)\vert = \vert g'(1)\vert$, which is not good enough when I compute the sum; I get a bound in $N^2$ instead of the desired $N \log N$.
I've tried looking at $u_{k+1}/u_k$, $u_{k+1} - u_k$, tried to bound $g$ in various ways but couldn't conclude... any help would be hugely appreciated!
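A quick numerical probe of the setup (this only checks the monotonicity facts, it cannot establish the $N \log N$ bound):

```python
def orbit(N, eps, tol=1e-6, max_iter=200_000):
    # Iterate u_k from u_0 = N until u_k - 1 < tol; return all iterates.
    u, us = float(N), [float(N)]
    for _ in range(max_iter):
        if u - 1 < tol:
            break
        u = u - eps**2/(2*N) * u*(u-1) + eps**3/(6*N**2) * u*(u-1)*(u-2)
        us.append(u)
    return us

us = orbit(100, 0.5)
partial_sum = sum(u - 1 for u in us)
print(len(us), us[-1], partial_sum)   # decreasing orbit from N down towards 1
```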
If it helps, I now have that $$ \sum_{k=0}^\tau (u_k-1) \leq C N \log N$$ where $\tau = \inf\{k: u_k \leq 2\}$. This can be proven by looking at the number of iterations $k$ required to go from any
$M$ to $\alpha M$ for any $\alpha \in (0,1)$, that is, $k$ such that $u_{n} = M$ implies $u_{n+k} \leq \alpha M$. So I now want to prove the same claim as in my original question, but starting from
$u_0 = 2$. Don't know if it's much easier since the problem is mostly "near $1$". – Pierrot Jun 19 '13 at 11:54
Note that if $u_0\in(1,2)$, then $1<u_k<2$ for all $k$ (induction) and $u_k-1\le (1-\frac{\varepsilon^2}{2N})(u_{k-1}-1)$. To sum a geometric progression shouldn't be a big headache. – fedja Jun 19
'13 at 16:38
Aaaah of course! If doing this reasoning from $u_0 = N$ it yields a bound in $N^2$ but from $u_0 = 2$ it yields a bound in $N$, indeed! Thanks fedja, that was a big help. I'll write the full answer
for the record. – Pierrot Jun 20 '13 at 0:42
1 Answer
Here is the full answer in case you are interested. Let $N\in \mathbb{N}$ and $\epsilon \in (0,1)$, and define $(u_k)_{k\geq 0}$ by $u_0 = N$ and $$u_k = g_{N,\epsilon}(u_{k-1}) = u_{k-1} - \
frac{\epsilon^2}{2N}u_{k-1}(u_{k-1}-1) + \frac{\epsilon^3}{6N^2} u_{k-1} (u_{k-1}-1) (u_{k-1}-2).$$
Note first that $g_{N,\epsilon}$ is contracting and is such that $g_{N,\epsilon}(1) = 1$, so that $u_k$ goes to $1$ using Banach fixed-point theorem. The contraction coefficient of $g_{N,\
epsilon}$ can be bounded by \begin{align*} \sup_{x} \lvert g_{N,\epsilon}'(x) \rvert &\leq g_{N,\epsilon}'(1) = 1 - \frac{\epsilon^2}{2N} < 1, \end{align*} however this contraction coefficient
depends on $N$ and a direct use of it yields a bound on $\sum_{k\geq 0} (u_k - 1)$ that is not in $N \log N$.
Note also that even though $u_k$ goes to $1$, we can focus on the partial sum $\sum_{k=0}^{\sigma_2}(u_k-1)$ where $\sigma_2 = \inf\{k : u_k \leq 2\}$, because $\sum_{k=\sigma_2}^\infty (u_k -
1)$ is essentially bounded by $N$. Indeed note that for $u_k\leq 2$ we have $$\frac{\epsilon^3}{6N^2} u_{k-1} (u_{k-1}-1) (u_{k-1}-2)\leq 0$$ so that \begin{align*} u_k - 1 &\leq u_{k-1} - 1 -
\frac{\epsilon^2}{2N}u_{k-1}(u_{k-1}-1)\\ & \leq (u_{k-1} - 1 )(1 - \frac{\epsilon^2}{2N}) \mbox{ since }u_{k-1}\geq 1 \end{align*} Hence we have \begin{align*} \sum_{k\geq \sigma_2} (u_k-1) &\
leq 2 \frac{1}{1 - (1 - \frac{\epsilon^2}{2N})} = \frac{4N}{\epsilon^2} \end{align*} Therefore we can focus on bounding $\sum_{k=0}^{\sigma_2}(u_k - 1)$ by $N\log N$. Let us split this sum into
partial sums, where the first partial sum is over indices $k$ such that $N/2 \leq u_k \leq N$, the second is over indices $k$ such that $N/4 \leq u_k \leq N/2$, etc. More formally, we introduce
$(k_j)_{j = 0}^J$ such that $k_0 = 0$, $k_1 = \inf\{k: u_k \leq N/2\}$, ..., $k_j = \inf\{k: u_k \leq N/2^j\}$, up to $k_J = \inf\{k: u_k \leq N/2^J\}$ where $J$ is such that $N/2^J \leq 2$, or
equivalently \begin{align*} &\log N - J \log 2 \leq \log 2 \\ \Leftrightarrow &\log N / \log 2 - 1 \leq J. \end{align*} For instance we take $J = \lceil \log N / \log 2\rceil$. Thus we have
split $\sum_{k=0}^{\sigma_2}(u_k - 1)$ into $J$ partial sums of the form $$ \sum_{k=k_j}^{k_{j+1}} (u_k - 1) $$ and we are now going to bound each of these partial sum by the same quantity $C(\
epsilon) N$ for some $C(\epsilon)$ that depends only on $\epsilon$.
To do so, we consider the time needed by $(u_k)_{k\geq 0}$ to go from a value $N/m_j$ to a value $N/m_{j+1}$, with $m_{j+1} > m_j$; we will later take $m_j = 2^j$ and $m_{j+1} = 2^{j+1}$. Note that for any $m$ we have \begin{align*} g_{N, \epsilon}\left(\frac{N}{m}\right) &= \frac{N}{m}\left(1 - \frac{\epsilon^2}{2N}(\frac{N}{m}-1) + \frac{\epsilon^3}{6N^2} (\frac{N}{m}-1) (\frac{N}{m}-2)\right)\\ &= \frac{N}{m}\left(1 - \frac{1}{m}\left[\frac{\epsilon^2}{2} - \frac{m \epsilon^2}{2N} - \frac{\epsilon^3}{6m} + \frac{\epsilon^3}{2N} - \frac{m\epsilon^3}{3N^2}\right]\right). \end{align*} Define $$\beta(N,m,\epsilon) = \frac{\epsilon^2}{2} - \frac{m \epsilon^2}{2N} - \frac{\epsilon^3}{6m} + \frac{\epsilon^3}{2N} - \frac{m\epsilon^3}{3N^2}$$ and note that for any $N$ and $m\leq N/2$ we have $$\underline{\beta}(\epsilon) := \frac{\epsilon^2}{4} \leq \beta(N,m,\epsilon),$$ which is clear upon noticing that $\beta(N,m,\epsilon)\geq \beta(N,N/2,\epsilon) = \epsilon^2/4$. For any $x > N/m_{j+1}$ we can check that $$g_{N,\epsilon}(x) \leq \frac{g_{N,\epsilon}(N/m_{j+1})}{N/m_{j+1}} \times x$$ by noticing that $x\mapsto g_{N,\
epsilon^2/4$. For any $x > N/m_{j+1}$ we can check that $$g_{N,\epsilon}(x) \leq \frac{g_{N,\epsilon}(N/m_{j+1})}{N/m_{j+1}} \times x$$ by noticing that $x\mapsto g_{N,\epsilon}(x)/x$ is
decreasing. Hence for $k\geq 0$ such that $u_{k-1}\geq N/m_{j+1}$, we have $$u_{k} \leq \left(1 - \frac{1}{m_{j+1}} \underline{\beta}(\epsilon)\right) u_{k-1}.$$ Now suppose that for some $k_j\
geq 0$ we have $u_{k_j}\leq N/m_j$. Then let us find $K$ such that $u_{k_j+K} \leq N/m_{j+1}$. It is sufficient to find $K$ such that \begin{align*} &\left(1 - \frac{1}{m_{j+1}} \underline{\
beta}(\epsilon)\right)^{K} \frac{N}{m_j} \leq \frac{N}{m_{j+1}}\\ &\Leftrightarrow K\log \left(1 - \frac{1}{m_{j+1}} \underline{\beta}(\epsilon)\right) \leq \log \frac{m_j}{m_{j+1}}\\ &\
Leftrightarrow K \geq \log \frac{m_{j+1}}{m_j} \left(-\log\left(1 - \frac{1}{m_{j+1}} \underline{\beta}(\epsilon)\right)\right)^{-1} \end{align*} Finally we conclude that $K$ defined as follows
$$ K= \left\lceil \left(\log \frac{m_{j+1}}{m_j}\right) \frac{m_{j+1}}{\underline\beta(\epsilon)}\right\rceil $$ guarantees the inequality $u_{k_j+K} \leq N/m_{j+1}$. In other words $(u_k)_{k\
geq 0}$ needs less than $K$ steps to decrease from $N/m_j$ to $N/m_{j+1}$. Summing the terms between $k_j$ and $k_j + K$, we obtain \begin{align*} \sum_{k = k_j}^{k_j+K} u_k &\leq K \frac{N}
{m_j}\leq \left\lceil \left(\log \frac{m_{j+1}}{m_j}\right) \frac{m_{j+1}}{\underline\beta(\epsilon)}\right\rceil \frac{N}{m_j}\\ &\leq \left[\left(\log \frac{m_{j+1}}{m_j}\right) \frac{m_
{j+1}}{\underline\beta(\epsilon)} + 1\right]\frac{N}{m_j}. \end{align*} Taking $m_j = 2^j$ and $m_{j+1} = 2^{j+1}$, we have $k_{j+1}\leq k_j + K$ and thus obtain \begin{align*} \sum_{k = k_j}^
{k_{j+1}} u_k \leq \sum_{k = k_j}^{k_j+K} u_k &\leq \left[\left(\log 2\right) \frac{2^{j+1}}{\underline\beta(\epsilon)} + 1\right]\frac{N}{2^j}\\ &\leq \left[\left(\log 2\right) \frac{2}{\
underline\beta(\epsilon)} + \frac{1}{2^j}\right]N\\ &\leq C(\epsilon) N \end{align*} with $C(\epsilon) = \left(\log 2\right) \frac{2}{\underline\beta(\epsilon)} + \frac{1}{2}$. To summarize the
full sum can be bounded as follows \begin{align*} \sum_{k\geq 0} (u_k - 1) &\leq \sum_{k=0}^{\sigma_2} (u_k - 1) + \sum_{k\geq \sigma_2} (u_k - 1)\\ &\leq \sum_{j=1}^J \sum_{k=k_{j-1}}^{k_j}
u_k + \frac{4N}{\epsilon^2} \\ &\leq \left\lceil \frac{\log N}{\log 2} \right\rceil C(\epsilon)N + \frac{4N}{\epsilon^2} \\ &\leq D(\epsilon) N \log N \end{align*} for some $D(\epsilon)$ that
depends only on $\epsilon$.
| {"url":"http://mathoverflow.net/questions/134017/series-of-terms-following-polynomial-recurrence-relation","timestamp":"2014-04-19T19:53:53Z","content_type":null,"content_length":"59205","record_id":"<urn:uuid:0db83a5d-6649-44c3-9fa6-de2fc0dbabd3>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
The chain rule (PDE)
I'm new on the forum, so I hope you guys will have some patience with me :-)
I have a question about the chain rule and partial differential equations that I can't solve. It's:
Write the appropriate version of the chain rule for the derivative:
∂z/∂u if z=g(x,y), where y=f(x) and x=h(u,v)
I have tried to do a dependence chart of it, but it's just not working for me.
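One standard way to resolve this dependence chart, treating z as depending on x both directly and through y = f(x), with x in turn depending on u and v, is:

```latex
z = g(x,y),\qquad y = f(x),\qquad x = h(u,v)
\quad\Longrightarrow\quad
\frac{\partial z}{\partial u}
  = \left(\frac{\partial g}{\partial x}
          + \frac{\partial g}{\partial y}\,f'(x)\right)
    \frac{\partial h}{\partial u}.
```

The inner factor collects both paths from z to x; the outer factor carries the dependence of x on u.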
Thank you very much. | {"url":"http://www.physicsforums.com/showthread.php?p=3752819","timestamp":"2014-04-18T21:28:40Z","content_type":null,"content_length":"22120","record_id":"<urn:uuid:bcb87354-3624-49ec-9f5d-c341ee8a0059>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00055-ip-10-147-4-33.ec2.internal.warc.gz"} |
The question was to solve the n-queens problem using backtracking. Using recursion it's easy, but I am trying to explore it without recursion.
Errors :
There is no compile error, but there are some bugs. It gives the correct answer for some steps, but when backtracking reaches the last element it does not give the correct answer.
What i am trying to do:
I am trying to solve the n-queens problem using backtracking and without recursion, in which I first block the positions and then, if there is no position for the next queen, backtrack by
unblocking the position of the previous queen. Here is my code.
/*blocked= queen number
queen placed=16*/
#include <iostream>
using namespace std;
#define MAX 20
void init(int);
void place(int*,int);
void block_path(int ,int ,int);
void unblock(int,int,int);
void display(int);
int board[MAX][MAX];
int main()
int n;
cout<<"\nEnter number of queens: ";
cin>>n;
for(int i=0;i<n;i++)
return 0;
void init(int n)
for(int i=0;i<n;i++)
for(int j=0;j<n;j++)
void place(int *i,int n)
int k,j;
else if(j==n-1)
void block_path(int i,int j,int n)
for(int p=0;p<n;p++)
for(int q=0;q<n;q++)
for(int k=0;k<n;k++)
void unblock(int i,int j,int n)
for(int p=0;p<n;p++)
for(int q=0;q<n;q++)
for(int k=0;k<n;k++)
void display(int n)
for(int i=0;i<n;i++)
for(int j=0;j<n;j++) | {"url":"http://www.dreamincode.net/forums/topic/263432-is-my-code-for-n-queens-problem-using-backtracking-correct/","timestamp":"2014-04-20T12:35:59Z","content_type":null,"content_length":"146206","record_id":"<urn:uuid:7e2b8cbf-6384-4fd7-8083-e4d3b33bb83f>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00007-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Metric Time (was Re: Why not 13 months? (Was La Systeme Metrique))
Robert A. Uhl (ruhl@phoebe.cair.du.edu)
12 Oct 1995 16:38:24 GMT
In article <45h3ek$2na@fg70.rz.uni-karlsruhe.de>,
Thomas Koenig <Thomas.Koenig@ciw.uni-karlsruhe.de> wrote:
>In alt.folklore.computers, arromdee@jyusenkyou.cs.jhu.edu (Ken Arromdee) wrote:
>>Converting miles and miles per hour to hours is no harder than doing the same
>>for kilometers and kilometers per hour.
>In Europe, you're likely to be driving an average of around 100 km/h on
>motorways, given reasonable speeds, driving times, roadworks, traffic jams,
>and stops for petrol. This makes converting distances to driving times quite
>easy: divide by 100.
What is 100 km/h in mph?
>Dunno about the US - what do you average, over long distances? 50
>miles per hour?
Well, the speed limit on most roads is 55, with some at 45 and others
at 65. 35 and below are for residentials (where travel time doesn't
matter), parking lots &c. So of course on roads we figure on 60 mph
(no-one goes the limit; you _always_ are over). A mile a minute is a
nice measure. If I'm going at 65 I subtract 15 from my time. Not
very scientific, but it gives me an idea at least.
Hoo boy! Our system is accurate to minutes whilst yours is only to
hrs. Heh heh.
| Bob Uhl | Spectre | `En touto nika' + |
| U of D | PGG FR No. 42 | http://mercury.cair.du.edu/~ruhl/ | | {"url":"http://unauthorised.org/anthropology/sci.anthropology/october-1995/0299.html","timestamp":"2014-04-19T04:39:24Z","content_type":null,"content_length":"5985","record_id":"<urn:uuid:3eaf2d42-9469-493d-a0a0-0a73a29cca8d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00511-ip-10-147-4-33.ec2.internal.warc.gz"} |
Essentials of Mathematics
This beautifully written book is intended to be used by first or second year undergraduate university students to bridge the gap from school mathematics to "what comes next."...This is a wonderful
book.... It is a thought-provoking resource for anyone involved in curriculum development and teaching of mathematics at the university entrance level...It should be essential reading for every
enthusiastic undergraduate. — Cheryl E. Praeger, Gazette, Australian Mathematical Society
The presentation style is conversational, employing frequent thought questions and directions for readers to try for themselves. A unique and refreshing feature of this book is the introduction of
the interesting and fun problems in mathematics that students do not usually encounter in a core course.... The book breaks the mold of traditional texts by portraying mathematics in a different
light. Through this broader view of mathematics, this text may attract more undergraduate students into the beautiful and delightful world of mathematics. — The American Statistician
This book is well written, and will, I'm sure be useful. I would love to teach a course from it. — Marion Cohen, MAA Online
Every mathematician must make the transition from the calculations of high school to the structural and theoretical approaches of graduate school. Essentials of Mathematics provides the knowledge
needed to move onto advanced mathematical work, and a glimpse of what being a mathematician might be like. No other book takes this particular holistic approach to the task.
The content is of two types. There is material for a "Transitions" course at the sophomore level; introductions to logic and set theory, discussions of proof writing and proof discovery, and
introductions to the number systems (natural, rational, real, and complex). The material is presented in a fashion suitable for a Moore Method course, although such an approach is not necessary. An
accompanying Instructor's Manual provides support for all flavors of teaching styles. In addition to presenting the important results for student proof, each area provides warm-up and follow-up
exercises to help students internalize the material.
The second type of content is an introduction to the professional culture of mathematics. There are many things that mathematicians know but weren't exactly taught. To give college students a sense
of the mathematical universe, the book includes narratives on this kind of information. There are sections on pure and applied mathematics, the philosophy of mathematics, ethics in mathematical work,
professional (including student) organizations, famous theorems, famous unsolved problems, famous mathematicians, discussions of the nature of mathematics research, and more. The prerequisites for a
course based on this book include the content of high school mathematics and a certain level of mathematical maturity. The student must be willing to think on an abstract level. Two semesters of
calculus indicates a readiness for this material.
Print on demand (POD) books are not returnable because they are printed at your request. Damaged books will, of course, be replaced (customer support information is on your receipt). All print on
demand books are paperback.
* As a textbook, Essentials of Mathematics does have DRM. Our DRM protected PDFs can be downloaded to three computers. Please note that the secure PDFs will open only on the Mac and Windows operating
systems. iOS (iPad & iPhone) and Linux are not supported at this time.
Electronic ISBN: 9780883859827
Print ISBN: 9780883857298
Chapter 0: Mathematics | {"url":"http://www.maa.org/publications/ebooks/essentials-of-mathematics?device=mobile","timestamp":"2014-04-19T19:13:49Z","content_type":null,"content_length":"25191","record_id":"<urn:uuid:b4265645-ba5e-4f30-854c-7f6e82927183>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00436-ip-10-147-4-33.ec2.internal.warc.gz"} |
Asymptotic ordinal inefficiency of random serial dictatorship
Theoretical Economics 4 (2009), 165–197
Mihai Manea
We establish that the fraction of preference profiles for which the random serial dictatorship allocation is ordinally efficient vanishes for allocation problems with many object types. We consider
also a probabilistic setting where in expectation agents have moderately similar preferences reflecting varying popularity across objects. In this setting we show that the probability that the random
serial dictatorship mechanism is ordinally efficient converges to zero as the number of object types becomes large. We provide results with similarly negative content for allocation problems with
many objects of each type. One corollary is that ordinal efficiency is a strict refinement of ex-post efficiency at most preference profiles.
Keywords: Allocation problem, ex-post efficiency, ordinal efficiency, probabilistic serial, random serial dictatorship
JEL classification: D6
Full Text: | {"url":"http://econtheory.org/ojs/index.php/te/article/viewArticle/20090165","timestamp":"2014-04-19T01:48:22Z","content_type":null,"content_length":"4130","record_id":"<urn:uuid:e3344a3b-b925-4485-ad87-f1699a0392fc>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
Which year was Jessica Simpson born in?
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we
will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire. | {"url":"http://www.evi.com/q/which_year_was_jessica_simpson_born_in","timestamp":"2014-04-24T22:14:37Z","content_type":null,"content_length":"52116","record_id":"<urn:uuid:3858a9b9-9687-4703-a4b8-a58e92c94923>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00030-ip-10-147-4-33.ec2.internal.warc.gz"} |
Magnetic Field Theory
This last SparkNote concerning magnetic fields is a purely theoretical one. We don't examine particular configurations of wires, solenoids, and magnets. We don't look at the force on moving charges.
Instead, we simply look at magnetic fields as a special kind of vector field, and describe them purely in terms of the mathematical properties of such a field. We are able to do this such that a
magnetic field can be described completely by two simple equations. In essence, we are able to compress all of the preceding topics into two equations.
Before we make these mathematical statements, however, we must first develop the multivariable calculus used to derive our equations. We develop the concepts of divergence and curl, and introduce the
two important theorems: Stokes' Theorem and Gauss' Theorem. Equipped with this background we can then apply the math to magnetic fields, generating our two important equations.
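The two equations themselves are never written out in this summary; presumably they are the magnetostatic Maxwell equations, which in SI differential form read:

```latex
\nabla \cdot  \mathbf{B} = 0, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J}.
```

The first (Gauss's law for magnetism) says magnetic field lines have no sources or sinks, i.e. there are no magnetic monopoles; the second is Ampère's law for steady currents.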
By finally analyzing magnetic fields on a purely theoretical level we complete our study of magnetic fields. We have looked at the effects of magnetic fields, the sources of magnetic fields and,
finally, in this SparkNote, the theory of magnetic fields. This complex topic must be attacked from many angles in order for us to understand it. | {"url":"http://www.sparknotes.com/physics/magneticforcesandfields/magneticfieldtheory/summary.html","timestamp":"2014-04-21T02:18:39Z","content_type":null,"content_length":"50430","record_id":"<urn:uuid:5d0eeb8f-c272-4aee-86b4-1ccebb6c0818>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the Definition of 'Number'?
Date: 01/26/2006 at 11:24:55
From: Steven
Subject: Definition of "number"
Dr. Math,
I'm at an algebra/trigonometry level of mathematics. I feel silly
asking this, but can you tell me the definition of "number"?
I read the following response to a commentary on imaginary numbers
and realized that I too have the same concept of what a number is,
but I'm not sure it's correct and that may be the "block" that I run
into occasionally. Here it is:
"One of the problems with the concept of "i" as a number is that most
people associate the word "number" with the concept of a measure of
the magnitude of some set, such as the number of people in a stadium.
Since one cannot say there are "2+i" slices of bread in a loaf, people
have a bad reaction to calling "i" some sort of number."
This statement implies that while numbers are often used as a concept
of measurement, that is not their true purpose. Did I misunderstand?
What DOES a number really represent?
Date: 01/26/2006 at 12:50:07
From: Doctor Peterson
Subject: Re: Definition of "number"
Hi, Steven.
I think the key here is to realize that we often start with a basic,
common-sense concept, and then extend the definition to cover
something broader. The basic idea of a number that children develop
starts with mere counting, but soon extends to the more general idea
of measuring (2 1/2 feet, for example), which is probably where most
people stop, because that's enough for most everyday uses of number.
Those who continue in math continue to extend the concept further;
ultimately a number is just anything we choose to call a number,
because it behaves like a number. Complex numbers are the ultimate
extension in the development that starts with natural numbers, then
includes negative numbers, fractions, irrational numbers, and
finally imaginary numbers. Each extension does not change the basic
behavior of numbers (we can still add, multiply, and so on), but lets
us do more--for example, once we have fractions, we can divide any
two numbers (except for division by zero); once we have complex
numbers, we can take the square root of any number. There are other
kinds of "numbers" that are more abstract than this, and not strictly
an extension of ordinary numbers in that sense; but these are still
called numbers because you can do the same operations on them.
You may notice that I've avoided actually giving a single definition
of "number"! That's because we really don't need to do that; we just
take the basic term and apply it to more and more abstract objects on
the basis of analogy to the numbers we already know. Just for fun, I
looked up "number" in the Merriam-Webster dictionary (m-w.com) and
found these definitions as part of a long entry:
1 c (1) : a unit belonging to an abstract mathematical system and
subject to specified laws of succession, addition, and
multiplication; especially : NATURAL NUMBER
(2) : an element (as π) of any of many mathematical systems
obtained by extension of or analogy with the natural
number system
I think that says what I'm saying, pretty concisely.
Does that help?
- Doctor Peterson, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/69853.html","timestamp":"2014-04-19T01:50:36Z","content_type":null,"content_length":"8346","record_id":"<urn:uuid:4fee0a85-fed6-4fd1-81cf-3b3d15b3754e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00436-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by Kaitlyn
Total # Posts: 197
Geometry - Area
Thank You so much Damon!
Geometry - Area
Ms. Sue would you be able to help me! I don't want the straight answer i want to actually learn how to solve a question like this!
Geometry - Area
Find the area of a regular hexagon with an apothem 10.4 yards long and a side 12 yards long. Round your answer to the nearest tenth.
Quinn used a scale drawing to build a soccer field near his school. Initially, he wanted the field to be 28 yards long and 17.5 yards wide. He decided to change the length of the field to 36 yards.
If the width is to be changed by the same scale factor, what is the new width o...
Math Help please Check my answer
You have $59.95 in your wallet and want to buy some new CDs. If the CDs are $11.99 each, what number of CDs, x, can you buy? Write and solve an inequality. My answer: 11.99x ≤ 59.95 11.99x/11.99 ≤
59.95/11.99 x ≤ 5
1. The fossil remains of a ____ were discovered and named Lucy. (1 point) A. Neanderthal B. Cro-Magnon C. prosimian D. hominid 2. Scientists can tell whether organisms are closely related by
comparing there ____. (1 point) A. Hair color B. Teeth C. DNA D. Scientific names 3. W...
math (please help)
Thank You so much Jai
math (please help)
1. Write the equation for the horizontal line that contains point G(-8, 8). 2. What is an equation for the line that passes through points (-1, -4) and (1, 4)? Write the equation in slope-intersect
form. 3. Write an equation for the vertical line that contains point E(10, -3).
1. A skateboard ramp has a 158º entry angle. Find the measure of ∠1. A.) 22º B.) 68º C.) 78º D.) 58º 2. Which of the following equations parallel to the line passing through A(-2, 7);B(-2, -2)? A.) y
= -2 B.) y = x C.) 3x = 5 D.) -2x = 7y 3. Whic...
To me "Taking something to heart" means that whatever he/she had said had meant a lot to me (even if its good or bad). other words to me it's like taking something seriously than what it should be.
do you have an Ti - 84? that will work it for you in a faster and reasonable manner.
1.) A skateboard ramp has a 158° entry angle. Find the measure of ∠1. 2.) Which of the following equations is parallel to the line passing through A(-2, 7);B(-2, -2)? 3.) Which of the following
equations would graph a line parallel to 3y = 2x - x + 5? *Please help me!
Determine whether a solid forms when solutions containing the following salts are mixed. If so, write the ionic equation and the net ionic equation. KCl and Na2S
Algebra 1
d is to 4 as 32 is to 56. if answer is a fraction change to decimal
Algebra 1
A triangle has a perimeter of 81 inches. The side lengths can be found by x, 3x-1, and 4x+2 what is the value of x?
Write an algebraic expression for "55 more than the product of 134 and n"
Lindsey has 47 coins in her change purse that are either dimes or quarters. If n represents the number of quarters she has,write an expression in terms of n that describes the number of dimes
What is an algebraic expression for "49 more than q"
earth science
what are the variables on rocks and minerals
algebra 1
Tell whether each number in the list is a real number, a rational number, an irrational number, an integer, or a whole number. square root 3, 5.5, negative square root of 16,and 0
A woman goes out running. She first travels 5.2km north. She then turns and travels 3.6km west. Finally, she turns again and travels 2.6km south. We are assuming a flat rectangular world. If a bird
were to start out from the origin (where the woman starts) and fly directly (in...
The stopping distance D in feet for a car traveling at x miles per hour is given by d(x)= (1/12)x^2+(11/9)x. Determine the driving speeds that correspond to stopping distances between 300 and 500
feet, inclusive. Round speeds to the nearest mile per hour.
1. Attention Grabber for Odyssey: 2. State/Explain topic: 3. Thesis Statement: Please help me with this!
World Geography
I know what she met, but we don't have a book, it was the voice recorder and it doesn't even say the answers to these.
World Geography
Well everything is virtual and we use technology for everything and i am a visual learner not jus someone that listens to something snd learns... It's a quick check.
World Geography
But we don't have a book... I'm homeschooled so we don't have books.
World Geography
Tbh i need help!
World Geography
1.Which of the following demographic characteristics distinguishes Europe's population from that of Asia? A.) More Europeans inhabit rural areas. B.) The European population is much younger. C.)
Europeans are more concentration in urban areas. D.) Europe has a larger popul...
World Geography
Thank You, Ms.Sue!
World Geography
1.B. 2.B. 3.A. 4.C. Are these correct?
World Geography
Ms.Sue, is number 5. B ?
World Geography
1. During the late 18th and mid-19th centuries, Europeans and North Americans were attracted to nearby coastal waters due to potential economic gains from? A.) Catching trout and cod fish. B.)
Catching seals and whales. C.) Shipbuilding. D.) Finding riches and unexplored islan...
In a class of 50 students, 35 are Democrats, 17 are business majors, and 6 of the business majors are Democrats. If one student is randomly selected from the class, find the probability of choosing a
Democrat or a business major.
If 13.5 ml of concentrated (17.0 M) of sulfuric acid was spilled on the chemistry counter, calculate the mass of sodium bicarbonate needed to neutralize the acid
AP World History
I have to analyze how sub-Saharan Africa & south & south-east Asia's relationship to global trade patterns changed from 1850 to present. please help !
General(dont know actual subject)
Thts not wat i expected from you. I am sure i chose the right one otherwise i wouldnt tell u. Because tht would literally be wasting my time as well as yours. So from tht u should know i chose the
rite one. I dont know which is why i was asking you. What are u supposed to help...
General(dont know actual subject)
My project is research based... I chose plastic surgery.... Question is: why have you chosen this task? ( in the box below write a paragraph explaining why you have selected your particular task
idea.) I really cant think of why i chose plastic surgery among all the other one...
Math trigonometry
Find the measures of the acute angles of a right triangle whose legs are 9 cm and 16 cm long.
6th grade math
Equal 15=15 they are the same number
Educational Assitant
Say that is his opinion everyone has an opinion
4th grade math
If he starts with a white bead, every fourth time, boom, there's a white bead 1_234_5_678_9_ 1, 5, 9
5th grade math
Instead of using fractions, use quarters: I have one dollar and three quarters, and 2 dollars and three quarters. Try this.
5th grade math (word problems)
For the first question divide 3 in two 25 your answer is the equation subtracted from 25 2nd question=200×20=your answer
3rd grade language
1-3 follow ms sue and Joshua but 4 answer=is=state of being
Who is correct
Who is correct
5th grade math
what is a word problem for 4.23x10
edie needs to start from the ones place then subtract the rest of the problem. simple as pie.
Spelling 4th grade
Fly is correct; jump is wrong.
Determine whether a solid forms when solutions containing the following salts are mixed. If so, write the ionic equation and the net ionic equation. KCl and Na2S
If argon has a volume of 5.0 dm3 and the pressure is .92 atm and the final temperature is 30 degrees celcius and the final volume is 5.7 L and the final pressure is 800 mm Hg, what was the initial
temperature of the argon?
A motorcycle breaks down and the rider has to walk the rest of the way to work. The motorcycle was being driven at 45mi/hr and the rider walks at a speed of 6mi/hr. Teh distance from home to work is
25 miles and the total time for the trip was 2 hours. How far did the motorcyc...
Angular Speed-Physics
Stars originate as large bodies of slowly rotating gas. Because of gravity, these clumps of gas slowly decrease in size. What happens to the angular speed of a star as it shrinks? Explain.
Physics Inertia!
Rank from fastest to slowest. a)a solid ball rolling down a ramp without slipping b)a cylinder rolling down the same ramp without slipping. c)a block sliding down a firctionless ramp with the same
height and slope. Select all that apply. The ball is fastest The cylinder is fas...
An asteroid with mass m = 1.85*10^9 kg comes from deep space, effectively from infinity, and falls toward Earth. How much work would have to be done on the asteroid by friction to slow it to 550 m/s
by the time it reached a distance of 1.50*10^8 m from Earth?
Math (kindergarten)
18 + 21 = 39 Answer: 39
Henry almost had it right.... s=(2+4+4.5)km/ 20min --> convert to m/s: s=10.5km/20min x 1000m/1km x 1min/60s --> 8.75 m/s
A board is leaning against a building so that the top of the board reaches a height of 18 feet. The bottom fo the board is on the groung 4 feet away from the wall. What is the slope of the board as a
positive number?
simplify 36/3*4-(-2) ?
What is the final speed?
Light-rail passenger trains that provide transportation within and between cities are capable of modest accelerations. The magnitude of the maximum acceleration is typically 1.3 {\rm m}/{\rm s}^{2},
but the driver will usually maintain a constant acceleration that is less than...
Why is it important to list what you already know about a problem ?
Physics (one more)
A lead ball has a volume of 94.3 cm3 at 19.3°C. What is the change in volume when its temperature changes to 34.3°C? Someone on here has already answered this but it was wrong!
PHYSICS please!!!!
A cube of wood and a cube of concrete, each 0.17 m on a side, are placed side by side. One of the long faces of the rectangular prism formed by the two cubes is held at 17°C, and the opposite long
face is held at 32°C. What is the total rate of heat transfer through th...
nevermind I get it
I don't understand why you're multiplying by two for the second question.
world history/World Geography
B. Is the correct answer for this question. Source: Took a World Geography with the same question and got a hundred, and the correct answer is B. (Courtnei, A is wrong answer.)
Is the answer C?
Can you please show me how you calculated your answer?
The maximum wavelength of emission from Neptune's cloud tops is 6.0 x 10-5m. Using this value, determine the temperature of Neptune's outer atmosphere.
A river is flowing 4.0 m/s to the east. A boater on the south shore plans to reach a dock on the north shore 30.0 Degrees downriver by heading directly across the river. What should be the boat's
speed relative to the water?
A river is flowing 4.0 m/s to the east. A boater on the south shore plans to reach a dock on the north shore 30.0 Degrees downriver by heading directly across the river. What should be the boat's
speed relative to the water?
If 12 apples weigh 5 lb, how much do 36 apples weigh?
1. set it up: s=d/t --> s=(2+4+4.5)km/ 20min 2. Convert to meters/ seconds s= 10.5km/20 min x 1000m/km x 1min/60s= 8.75 m/s
Thank you for your help!
Explain the idea of parallaz. The closet star to the Sun, Proxima Centauri, is 4.2 light years away from us. What is its parallax angle?
Okay, say im doing a science fair project, and I am not really good at coming up with questions, what would be a good question to ask about Newtons First Law of Motion
If we assume the Sun is made totally out of hydrogen, how many hydrogen atoms are contained with the Sun? If the Sun converts half of these hydrogen atoms into helium over its lifetime of about ten
billion years, how many hydrogen atoms are converted per second? Mass of Sun = ...
Calculate the wavelength of peak emission for a typical sunspot and compare it to that of the photosphere. How many times brighter is the photosphere than a typical sunspot? (Hint: Assume the
photosphere and a sunspot are good blackbodies for these calculations.)
Posted earlier but I wrote it incorrectly. How fast would a rocket have to be going in order to completely leave the solar system? (Assume that the Sun is the only mass in the solar system and that
the rocket is beginning its journey from Earth's orbit?)
How fast would a rocket have to be going in order to completely leave the solar system? (For the second part of this problem, assume that the Sun is the only mass in the solar system and that the
rocket is beginning its journey from Earth's orbit?)
How fast does a rocket need to be going to break free of Earth's gravity? How fast would it have to be going in order to completely leave the solar system? (For the second part of this problem,
assume that the Sun is the only mass in the solar system and that the rocket is...
How fast does a rocket need to be going to break free of Earth's gravity? How fast would it have to be going in order to completely leave the solar system? (For the second part of this problem,
assume that the Sun is the only mass in the solar system and that the rocket is...
what effect does sugar water have on a plant?
what is incorrect in this argument: the equation 5x + 3 = 3 + 5x isalways true because of the symmetric property of equality
Please help me unscramble the following words in french: penctoier uernigrsul aceir cuehaebrisnfa arlabtce botromen eercn nealum lstaa
A particle starts from xi = 10 m at t0 = 0 and moves with the velocity graph shown in the graph below. Sorry for no graph it's at 12 m/s when time is zero. Moves down consistently so that it is 8 at
1 4 at 2 0 at 3 and negative 2 at 4. (a) What is the object's position...
In a problem where you have to solve the limit f(x) as x approaches x sub zero, what does x sub zero mean?
Well 9 times 6 gets 54 and there is 2 left over so the answer is 6 and two-ninths
Acids and Bases
NaOH was added to 1.0 L of HI(aq) 0.40 M. Calculate H3O in the solution after the addition of .14 mol of NaOH. Calculate H3O in the solution after the addition of 0.91 mol of NaOH.
1/6 students like math, 2/3 like reading and 12 like science. How many students were asked.
Marine Ecology
what tissue do humans have that cnidarians do not?
A steel tank containing helium is cooled to 15◦C. If you could look into the tank and see the gas molecules, what would you observe?
8. Cognitive psychology can best be described as:
social studies
In 1963 Viginia's public schools opened their doors to:...
What is Pangea?
Taken as a whole, what would you say is the single most pressing problem found in Latin America or South America? What possible solutions can you develop?
Pages: 1 | 2 | Next>> | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Kaitlyn","timestamp":"2014-04-18T23:09:53Z","content_type":null,"content_length":"26898","record_id":"<urn:uuid:f3b415da-cc13-4478-984a-c454eb74b042>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00019-ip-10-147-4-33.ec2.internal.warc.gz"} |
Canton, GA Geometry Tutor
Find a Canton, GA Geometry Tutor
...After all these years from browser wars to mobile, I remain a fan of MS Windows and look forward to its next evolution in both the personal and business server spaces. I first took Organic
Chemistry and related courses such as General Chemistry and Biochemistry some 20 years ago as an undergradu...
126 Subjects: including geometry, chemistry, English, calculus
I was a National Merit Scholar and graduated magna cum laude from Georgia Tech in chemical engineering. I can tutor in precalculus, advanced high school mathematics, trigonometry, geometry,
algebra, prealgebra, chemistry, grammar, phonics, SAT math, reading, and writing. I have been tutoring profe...
20 Subjects: including geometry, reading, chemistry, algebra 1
...I have had over 30 students for Calculus over the years. I teach both the AP Calculus tests and college level calculus. Several students have done very well on the AP test, scoring 5's on the
AB and BC tests.
41 Subjects: including geometry, reading, physics, writing
...Polynomials and word problems. The students will learn the 2-D and 3-D shapes, area, perimeter, volume, how to draw shapes, and angles. We will make sure the students are thorough with some complex and everyday vocabulary words at the end of the class.
9 Subjects: including geometry, statistics, algebra 1, algebra 2
...I have a Bachelor's of Science in Middle Grades Education. I am certified in math and science for those grade levels. I taught seventh grade science, and as part of the curriculum taught
14 Subjects: including geometry, biology, algebra 1, elementary math | {"url":"http://www.purplemath.com/Canton_GA_Geometry_tutors.php","timestamp":"2014-04-18T23:55:02Z","content_type":null,"content_length":"23727","record_id":"<urn:uuid:337a2e4d-98f1-4815-bd6e-81e03e4e6115>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00091-ip-10-147-4-33.ec2.internal.warc.gz"} |
This is the second volume of a three-volume set comprising a comprehensive study of the tractability of multivariate problems. The second volume deals with algorithms using standard information
consisting of function values for the approximation of linear and selected nonlinear functionals. An important example is numerical multivariate integration.
The proof techniques used in volumes I and II are quite different. It is especially hard to establish meaningful lower error bounds for the approximation of functionals by using finitely many
function values. Here, the concept of decomposable reproducing kernels is helpful, allowing one to find matching lower and upper error bounds for some linear functionals. It is then possible to
conclude tractability results from such error bounds.
Tractability results, even for linear functionals, are very rich in variety. There are infinite-dimensional Hilbert spaces for which the approximation with an arbitrarily small error of all linear
functionals requires only one function value. There are Hilbert spaces for which all nontrivial linear functionals suffer from the curse of dimensionality. This holds for unweighted spaces, where the
role of all variables and groups of variables is the same. For weighted spaces one can monitor the role of all variables and groups of variables. Necessary and sufficient conditions on the decay of
the weights are given to obtain various notions of tractability.
The text contains extensive chapters on discrepancy and integration, decomposable kernels and lower bounds, the Smolyak/sparse grid algorithms, lattice rules and the CBC (component-by-component)
algorithms. This is done in various settings. Path integration and quantum computation are also discussed.
This volume is of interest to researchers working in computational mathematics, especially in approximation of high-dimensional problems. It is also well suited for graduate courses and seminars.
There are 61 open problems listed to stimulate future research in tractability.
A publication of the European Mathematical Society (EMS). Distributed within the Americas by the American Mathematical Society.
Graduate students and research mathematicians interested in computational mathematics.
• Discrepancy and integration
• Worst case: General linear functionals
• Worst case: Tensor products and decomposable kernels
• Worst case: Linear functionals on weighted spaces
• Average case setting
• Probabilistic setting
• Smolyak/Sparse grid algorithms
• Multivariate integration for Korobov and related spaces
• Randomized setting
• Nonlinear functionals
• Further topics
• Summary: Uniform integration for three Sobolev spaces
• Appendices: List of open problems and Errata for volume I
• Bibliography
• Index | {"url":"http://ams.org/bookstore-getitem/item=EMSTM-12","timestamp":"2014-04-16T11:15:49Z","content_type":null,"content_length":"16786","record_id":"<urn:uuid:20f8ca8a-6a6b-4552-89a4-c6587db2ad74>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00278-ip-10-147-4-33.ec2.internal.warc.gz"} |
Think on this
OK, so how would you even start to figure it out?
Here's a cool calculator that will show you how far your car can travel on a gallon of gas. I put in a bunch of different makes/models of cars, of different ages. For the sake of easy math, I'm going
to say, on average, a car can go 20 miles on a gallon of gas. (Yes, I'm sure your car does better. And this is, after all, a guesstimate.)
According to Twin Cities gas prices, a gallon of gasoline today will run you about $2.20.
So $2.20/gal divided by 20 mpg = $0.11/mile (or $0.07/km).
When I carry passengers on my bike, I can go a maximum speed of maybe 15 miles an hour, but I'm much slower on hills, maybe 7 miles an hour, and I probably average somewhere around 11 miles an hour.
But I'm carrying two kids, not two adults, so I should pick a number at the low end of that range. And, of course, a pedicab is not always moving. So for the sake of easy math, I'll just say an
average of maybe 5 miles per hour. Assume an 8-hour working day, and that gives you 5 mph x 8 hrs/day, for a total of 40 miles per day.
Calculating fuel for the cyclist is the tricky bit. The CIA's World Factbook estimates the US per capita GDP at $48,000. That works out to $132/day. And this 2006 Forbes article says Americans spend
about 13% of their income on food. So we spend maybe $17/day on food.
$17.00/day divided by 40 miles/day = $0.42/mi or $0.27/km.
The car uses more energy per mile, but gasoline is a cheaper fuel than food. However, the fuel costs for the car depend on the distance traveled. If you double the distance traveled by car, you
double the fuel cost. With a pedicab, the fuel cost is the same, per day, no matter how far you go.
And, of course, you're probably eating today whether or not you're pedaling a rickshaw. So get out there and ride! Don't let that fuel go to waste!
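The back-of-the-envelope comparison above fits in a few lines of code (all numbers are the post's own estimates, not measured data):

```python
# Car: dollars per mile from gas price and fuel economy.
GAS_PRICE = 2.20              # $/gallon (Twin Cities, at time of writing)
CAR_MPG = 20.0                # guesstimated average fuel economy
car_cost_per_mile = GAS_PRICE / CAR_MPG          # $0.11/mile

# Pedicab: the "fuel" is food, estimated from per-capita GDP and food share.
GDP_PER_CAPITA = 48_000.0     # $/year, CIA World Factbook estimate
FOOD_SHARE = 0.13             # fraction of income spent on food
food_per_day = GDP_PER_CAPITA / 365 * FOOD_SHARE # ~$17/day

AVG_SPEED_MPH = 5.0           # averaged over stops, hills, and idle time
HOURS_PER_DAY = 8
miles_per_day = AVG_SPEED_MPH * HOURS_PER_DAY    # 40 miles/day

rider_cost_per_mile = food_per_day / miles_per_day

print(f"car:     ${car_cost_per_mile:.2f}/mile")
print(f"pedicab: ${rider_cost_per_mile:.2f}/mile")
```

Note that the post rounds food spending to $17/day before dividing, which gives the quoted $0.42/mile; carrying full precision through gives about $0.43/mile, the same conclusion either way.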
posted on Thu, 05/14/2009 - 3:38pm
Guess what? This article that I just read has motivated me to hop on my neglected bike and pedal to the metal, as they say. I am sure that my workplace will enjoy and/or appreciate all of the work that I will be doing, and I might get some positive reinforcement from my fellow colleagues! However, I regret to inform you that I was a little concerned with the content portion of this article, as I believe more information is necessary to have a more substantial impact on the denizens of society. I was, however, pleasantly surprised at the amount of money per mile a cyclist pays. Thank you, and have a nice day.
posted on Wed, 08/18/2010 - 12:19pm
| {"url":"http://www.sciencebuzz.org/blog/think","timestamp":"2014-04-20T03:17:35Z","content_type":null,"content_length":"29510","record_id":"<urn:uuid:1b268158-5e12-48f8-a759-0dd4c84a7dcf>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
Campbell, CA Precalculus Tutor
Find a Campbell, CA Precalculus Tutor
...I love to read. A good writer can stimulate the thoughts, the imagery, and emotions of the reader. I prefer to gain information through reading rather than through listening.
37 Subjects: including precalculus, reading, English, physics
...I started tutoring when I was an undergrad in Electrical Engineering at UC Berkeley. At first, I started helping my friends with their classes in math, physics, and chemistry. As I continued
working with them, they kept telling me things such as "You should really consider being a teacher," "Yo...
11 Subjects: including precalculus, chemistry, algebra 2, calculus
...I have taught business psychology in a European university. I tutor middle school and high school math students. I can also teach Chinese at all levels.
11 Subjects: including precalculus, calculus, statistics, geometry
...My goal is to help any student understand, summarize, and prepare for comprehensive chemistry exams in order to obtain a desired result in a relatively short amount of time. I am excited to
work with passionate students in achieving their goals!AP chemistry should be as high-level as a general c...
18 Subjects: including precalculus, chemistry, calculus, physics
...I then travelled overseas, with the Peace Corps, to the Pacific Islands and Africa to help local teachers develop methods for teaching science, math and English. Since returning home, I have
tutored a few students that were recommended to me by former teaching colleagues and found that I enjoy ...
13 Subjects: including precalculus, chemistry, physics, geometry
| {"url":"http://www.purplemath.com/Campbell_CA_Precalculus_tutors.php","timestamp":"2014-04-18T21:19:28Z","content_type":null,"content_length":"24031","record_id":"<urn:uuid:3280686c-c810-4c33-aec4-01eaf1be80d8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00431-ip-10-147-4-33.ec2.internal.warc.gz"} |
Regular Octagon Inscribed Inside of a Square
September 10th 2012, 03:48 PM #1
Sep 2012
United States
Regular Octagon Inscribed Inside of a Square
Normally I wouldn't do this, as I derive a certain satisfaction from doing my own work, but I'm growing desperate. This assignment is due tomorrow and the clock is ticking. I thank you all in
advance for any help given - it is always appreciated.
A regular octagon is inscribed inside of a square. The length of each side of the square is 8 cm.http://mathcentral.uregina.ca/QQ/database/QQ.09.06/richard1.1.gif This image is quite similar,
disregarding the numbers given.
1) What is the exact length of a side of the octagon? Write your answer in simplest radical form.
2) What is the exact area of the octagon? Again, in simplest radical form.
3) To the nearest hundredth, what is the length of a side of the octagon in centimeters?
4) To the nearest hundredth, what is the area of the octagon in square centimeters?
I've gotten as far as determining that the triangles are 45, 45, 90 triangles, and the sides thus represent x, x, x * sqrt(2). Beyond that, I have no idea how to approach the problem. I didn't
immediately resort to the internet, I've tried other sources and yes, even my own knowledge. Again, thanks in advance for your help.
side = $\frac{8}{1+\sqrt{2}}$
area = $64-\frac{64}{\left(1+\sqrt{2}\right)^2}$
Re: Regular Octagon Inscribed Inside of a Square
Max, do you mind showing me a bit as to how you got your answer?
Re: Regular Octagon Inscribed Inside of a Square
a = side of octa
8 cm = 2*[a*cos(45)]+a find a =....
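A quick numerical check of the formulas in this thread (my own sketch, not part of the original posts): each corner triangle is 45-45-90 with hypotenuse a, so its legs are a/√2, and two legs plus one octagon side span the square.

```python
import math

S = 8.0                            # side of the square, cm
a = S / (1 + math.sqrt(2))         # from 2*(a/sqrt(2)) + a = S
leg = a / math.sqrt(2)             # leg of each corner triangle
area = S**2 - 4 * (leg**2 / 2)     # square minus four corner triangles

print(round(a, 2))                 # 3.31 cm
print(round(area, 2))              # 53.02 cm^2

# Agrees with the closed form quoted above, and with 128*(sqrt(2)-1).
assert abs(area - (64 - 64 / (1 + math.sqrt(2)) ** 2)) < 1e-9
assert abs(area - 128 * (math.sqrt(2) - 1)) < 1e-9
```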
Re: Regular Octagon Inscribed Inside of a Square
You said, "This assignment is due tomorrow and the clock is ticking". Are you saying that you want to get credit for work you have not done? Max Jastper has given you the answer and a hint toward
that answer. If you want to take credit for it, then you get that answer.
(Why do you tell us "each side of the square is 8" and link to a picture showing length of each side to be 27?)
Last edited by HallsofIvy; September 10th 2012 at 04:59 PM.
Re: Regular Octagon Inscribed Inside of a Square
I clearly said in my post to disregard the numbers given. I simply asked for help, because you know, this is a forum dedicated for help related to mathematics. If you think I'm the only person on
this forum seeking guidance for work that was/will be checked or graded, then you are a fool. If you don't wish to help me, then fine, don't help. This forum is here for a reason. Not everybody
knows the answer to every question asked.
United States | {"url":"http://mathhelpforum.com/pre-calculus/203240-regular-octagon-inscribed-inside-square.html","timestamp":"2014-04-16T20:03:22Z","content_type":null,"content_length":"45190","record_id":"<urn:uuid:d5cf2a17-af95-4835-89ba-b6508c76222c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00226-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lacunary Statistical Limit and Cluster Points of Generalized Difference Sequences of Fuzzy Numbers
Advances in Fuzzy Systems
Volume 2012 (2012), Article ID 459370, 6 pages
Research Article
^1Department of Mathematics, Haryana College of Technology and Management, Haryana, Kaithal 136027, India
^2School of Mathematics and Computer Application, Thapar University, Punjab, Patiala 147004, India
Received 20 April 2012; Accepted 14 June 2012
Academic Editor: Katsuhiro Honda
Copyright © 2012 Pankaj Kumar et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The aim of present work is to introduce and study lacunary statistical limit and lacunary statistical cluster points for generalized difference sequences of fuzzy numbers. Some inclusion relations
among the sets of ordinary limit points, statistical limit points, statistical cluster points, lacunary statistical limit points, and lacunary statistical cluster points for these type of sequences
are obtained.
1. Introduction
The notion of statistical convergence of sequences of numbers was introduced by Fast [1] and Schoenberg [2] independently and later discussed in [3–6], and so forth. In 1993, Fridy and Orhan [7]
presented an interesting generalization of statistical convergence with the help of a lacunary sequence and called it lacunary statistical convergence or -convergence. Demirci [8] defined -limit and
cluster points of number sequences and obtained some interesting results analogous to [4]. In past years, statistical convergence has also become an interesting area of research for sequences of
fuzzy numbers. The credit goes to Nuray and Savaş [9] who first introduced statistical convergence of sequences of fuzzy numbers. After their pioneer work, many authors have made their contribution
to study different generalizations of statistical convergence for sequences of fuzzy numbers (see [10–13], etc.).
Quite recently, statistical convergence of sequences of fuzzy numbers has been studied with the help of the difference operator $\Delta$. For instance, Bilgin [14] introduced strongly $\Delta$-summable and $\Delta$-statistical
convergence of sequences of fuzzy numbers. Işik [15] studied some notions of generalized difference sequences of numbers. In 2006, Altin et al. [16] united lacunary sequences to introduce the concept
of lacunary statistical convergence of generalized difference sequences of fuzzy numbers and obtained some interesting results. Some more work in this direction can be found in [17–19]. In present
work, we continue with this study and introduce the concepts of lacunary statistical limit and cluster points of generalized difference sequences of fuzzy numbers. We obtain some relations among the
sets of ordinary limit, points, lacunary statistical limit, and cluster points for these type of sequences.
2. Background and Preliminaries
We begin with the following terminology on fuzzy numbers. Given any interval $A$, we shall denote its end points by $\underline{A}$ and $\overline{A}$, and by $D$ the set of all closed bounded intervals on the real line $\mathbb{R}$, that is, $D = \{A \subset \mathbb{R} : A = [\underline{A}, \overline{A}]\}$. For $A, B \in D$ we define $A \le B$ if and only if $\underline{A} \le \underline{B}$ and $\overline{A} \le \overline{B}$. Moreover, the distance function defined by $d(A, B) = \max\{|\underline{A} - \underline{B}|, |\overline{A} - \overline{B}|\}$ is a Hausdorff metric on $D$, and $(D, d)$ is a complete metric space. Also $\le$ is a partial order on $D$.
A fuzzy number is a function $X$ from $\mathbb{R}$ to $[0, 1]$ satisfying the following conditions: (i) $X$ is normal, that is, there exists an $x_0 \in \mathbb{R}$ such that $X(x_0) = 1$; (ii) $X$ is fuzzy convex, that is, for any $x, y \in \mathbb{R}$ and $0 \le \lambda \le 1$, $X(\lambda x + (1 - \lambda) y) \ge \min\{X(x), X(y)\}$; (iii) $X$ is upper semicontinuous; and (iv) the closure of the set $\{x \in \mathbb{R} : X(x) > 0\}$, denoted by $X^0$, is compact.
Properties (i)–(iv) imply that for each $\alpha \in (0, 1]$, the $\alpha$-level set $X^\alpha = \{x \in \mathbb{R} : X(x) \ge \alpha\}$ is a nonempty compact convex subset of $\mathbb{R}$. Let $L(\mathbb{R})$ denote the set of all fuzzy numbers. The linear structure of $L(\mathbb{R})$ induces an addition $X + Y$ and a scalar multiplication $\mu X$ in terms of $\alpha$-level sets by $[X + Y]^\alpha = [X]^\alpha + [Y]^\alpha$ and $[\mu X]^\alpha = \mu [X]^\alpha$ for each $\alpha \in [0, 1]$. Define a map $\bar{d} : L(\mathbb{R}) \times L(\mathbb{R}) \to \mathbb{R}$ by $\bar{d}(X, Y) = \sup_{\alpha \in [0, 1]} d(X^\alpha, Y^\alpha)$. Puri and Ralescu [20] proved that $(L(\mathbb{R}), \bar{d})$ is a complete metric space. Also the ordered structure on $L(\mathbb{R})$ is defined as follows. For $X, Y \in L(\mathbb{R})$, we define $X \le Y$ if and only if $\underline{X}^\alpha \le \underline{Y}^\alpha$ and $\overline{X}^\alpha \le \overline{Y}^\alpha$ for each $\alpha \in [0, 1]$. We say that $X < Y$ if $X \le Y$ and there exists $\alpha_0 \in [0, 1]$ such that $\underline{X}^{\alpha_0} < \underline{Y}^{\alpha_0}$ or $\overline{X}^{\alpha_0} < \overline{Y}^{\alpha_0}$. The fuzzy numbers $X$ and $Y$ are said to be incomparable if neither $X \le Y$ nor $Y \le X$.
We next recall some definitions and results which form the base for the present study. For any set $K \subseteq \mathbb{N}$, let $K(n)$ denote the set $\{k \in K : k \le n\}$ and $|K(n)|$ denote the number of elements in $K(n)$. The natural density of $K$ is defined by $\delta(K) = \lim_{n \to \infty} |K(n)|/n$. The natural density may not exist for each set $K$, but the upper density, defined by $\bar{\delta}(K) = \limsup_{n \to \infty} |K(n)|/n$, always exists for each set $K$. Moreover, $\bar{\delta}(K)$ different from zero means that $K$ is an infinite set. Besides that, $\bar{\delta}(K_1 \cup K_2) \le \bar{\delta}(K_1) + \bar{\delta}(K_2)$, and if $\delta(K)$ exists, then $\bar{\delta}(K) = \delta(K)$.
For any sequence $X = (X_k)$ of fuzzy numbers, we write $\{X\}$ to denote the range of $X$. If $Y = (X_{k(j)})$ is a subsequence of $X$ and $K = \{k(j) : j \in \mathbb{N}\}$, then we abbreviate $Y$ by $\{X\}_K$. If $\delta(K) = 0$, $\{X\}_K$ is called a thin subsequence; otherwise, if $\bar{\delta}(K) > 0$, $\{X\}_K$ is called a nonthin subsequence of $X$.
For $X = (X_k) \in w(F)$, the set of all sequences of fuzzy numbers, the operator $\Delta^m$ is defined by $\Delta^m X_k = \Delta^{m-1} X_k - \Delta^{m-1} X_{k+1}$ with $\Delta^0 X_k = X_k$.
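As a concrete illustration of natural density (a sketch of mine, not from the paper): the perfect squares have density zero while the even numbers have density 1/2, and the finite truncations |K(n)|/n already show this.

```python
import math

def density_estimate(indicator, n):
    """Finite approximation |K(n)|/n of the natural density of K."""
    return sum(1 for k in range(1, n + 1) if indicator(k)) / n

is_square = lambda k: math.isqrt(k) ** 2 == k
is_even = lambda k: k % 2 == 0

for n in (100, 10_000, 1_000_000):
    print(n, density_estimate(is_square, n), density_estimate(is_even, n))
# squares: 0.1, 0.01, 0.001   -> density 0
# evens:   0.5 at every even n -> density 1/2
```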
Definition 1. A sequence $X = (X_k)$ of fuzzy numbers is said to be $\Delta^m$-statistically convergent to a fuzzy number $X_0$ if for each $\varepsilon > 0$, $\lim_{n \to \infty} (1/n) |\{k \le n : \bar{d}(\Delta^m X_k, X_0) \ge \varepsilon\}| = 0$. Let $S(\Delta^m, F)$ denote the set of all $\Delta^m$-statistically convergent sequences of fuzzy numbers.
Definition 2. Let be a sequence of fuzzy numbers. A fuzzy number is said to be a statistical limit point (s.l.p) of the generalized difference sequence of fuzzy numbers provided that there is a
nonthin subsequence of that is -convergent to .
Let denote the set of all s.l.p. of the generalized difference sequence of fuzzy numbers.
Definition 3. Let be a sequence of fuzzy numbers. A fuzzy number is said to be a statistical cluster point (s.c.p) of the generalized difference sequence of fuzzy numbers provided that, for each ,
Let denote the set of all s.c.p of the generalized difference sequence of fuzzy numbers.
By a lacunary sequence we mean an increasing sequence $\theta = (k_r)$ of positive integers such that $k_0 = 0$ and $h_r = k_r - k_{r-1} \to \infty$ as $r \to \infty$. The intervals determined by $\theta$ will be denoted by $I_r = (k_{r-1}, k_r]$, whereas the ratio $k_r / k_{r-1}$ is denoted by $q_r$. Further, a lacunary sequence $\theta' = (s_r)$ is called a lacunary refinement of the lacunary sequence $\theta = (k_r)$ if $(k_r) \subseteq (s_r)$.
Definition 4 (see [21]). Let be a lacunary sequence. A sequence of fuzzy numbers is said to be lacunary statistical convergent to a fuzzy number provided that for each , Let denote the set of all
lacunary statistically convergent sequences of fuzzy numbers.
Let be a lacunary sequence and a sequence of fuzzy numbers. If where is a subsequence of such that we call a -thin subsequence. On the other hand, is a -nonthin subsequence of provided that
Definition 5. Let be a lacunary sequence. A sequence of fuzzy numbers is said to be lacunary -statistically convergent to a fuzzy number , in symbol: , if for each , Let denote the set of all
lacunary -statistically convergent sequences of fuzzy numbers.
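To make the blockwise counting behind lacunary statistical convergence concrete, here is a small sketch of mine (not from the paper) using a real-valued surrogate sequence and the lacunary sequence θ = (2^r): in each block I_r = (k_{r-1}, k_r] it computes the fraction of indices where the sequence stays at least ε away from the candidate limit; lacunary statistical convergence requires these fractions to tend to zero as r grows.

```python
import math

def block_exception_fraction(x, k_prev, k_cur, limit, eps=0.5):
    """Fraction of k in the block (k_prev, k_cur] with |x(k) - limit| >= eps."""
    h_r = k_cur - k_prev                       # block length h_r
    bad = sum(1 for k in range(k_prev + 1, k_cur + 1)
              if abs(x(k) - limit) >= eps)
    return bad / h_r

# x_k = 1 on perfect squares and 0 elsewhere: lacunary statistically
# convergent to 0, since squares are sparse in every long block.
x = lambda k: 1.0 if math.isqrt(k) ** 2 == k else 0.0

for r in (4, 8, 12, 16):
    print(r, block_exception_fraction(x, 2 ** (r - 1), 2 ** r, 0.0))
# the fractions shrink roughly like 2**(-r/2), hence tend to 0
```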
We now consider the natural definitions of statistical limit and cluster points for generalized difference sequences of fuzzy numbers with respect to lacunary sequences.
3. Main Results
Definition 6. Let be a lacunary sequence and a sequence of fuzzy numbers. A fuzzy number is said to be a lacunary statistical limit point (l.s.l.p) of the generalized difference sequence of fuzzy
numbers provided that there is a -nonthin subsequence of that is -convergent to .
Let denote the set of all l.s.l.p. of the generalized difference sequence of fuzzy numbers.
Definition 7. Let be a lacunary sequence and a sequence of fuzzy numbers. A fuzzy number is said to be a lacunary statistical cluster point (l.s.c.p) of the generalized difference sequence of fuzzy
numbers provided that, for each , Let denote the set of all l.s.c.p of the generalized difference sequence of fuzzy numbers.
Example 8. Let be a lacunary sequence. We define a sequence of fuzzy numbers as follows. For , define Then, we obtain Thus, for , it is clear that the sequence has two different subsequences which
converge to and , respectively, where and . Hence, if denotes the set of ordinary limit points of , then ; however, .
Theorem 9. Let be a lacunary sequence and a sequence of fuzzy numbers. Then, one has .
Proof. Suppose . By definition, there is a -nonthin subsequence of which is -convergent to , and therefore we have Since, for every , so we have the containment Now, is -convergent to , which implies
that, for every , is finite for which we have Thus from (15), we obtain using (13)and(16). This shows that and therefore the result is proved.
Theorem 10. Let be a lacunary sequence. Then, for any sequence of fuzzy numbers, one has .
Proof. Assume . By definition, for each we have We set a -nonthin subsequence of such that for . Since , it follows that is an infinite set. Thus we have a subsequence of that is -convergent to .
This shows that . Hence .
Theorem 11. Let be a lacunary sequence. If and are two sequences of fuzzy numbers such that , then and .
Proof. We prove the theorem into two parts. In the first part we prove that ; however, in the second part we shall prove .
Part (i). Let . By definition, there is a -nonthin subsequence of that is -convergent to . Since , it follows that . Therefore, from the later set, we can yield a -nonthin subsequence of that is
-convergent to . Hence, , and therefore we have . Also by symmetry one get . On combining we have .
Part (ii). Let . By definition, for each , Since for all most all , it follows that, for each , This shows that and therefore . By symmetry, we see that , whence .
Theorem 12. Let be a lacunary sequence. If is a sequence of fuzzy numbers such that , then .
Proof. We prove the theorem in two parts. In the first part, we prove that whereas in the second part we obtain .
Part (i). Suppose that , where , that is, is a l.s.l.p. of the generalized difference sequence different from . Choose such that . By definition there exist two -nonthin subsequences and of the
sequence which are -convergent to and , respectively. Since is -convergent to , so for each , is a finite set for which Further, we can write for which we have Since is nonthin, so, by use of (21),
we have Since , so for each and therefore we can write Furthermore for , which immediately gives the containment for which we have As left side of (29) cannot be negative, so we must have This
contradicts (24). Hence, .
Part (ii). Let be a l.s.c.p. of the generalized difference sequence different from , that is, , where . Choose such that . Since is a l.s.c.p of , so for each we have Since for every , it follows
that for which we have by (31), which is impossible as by (25) . In this way we obtained a contradiction. Hence, .
Theorem 13. Let be a lacunary sequence and a sequence of fuzzy numbers. Then one has the following:(i)if, then ,(ii)if, then ,(iii)if, then .
Proof. (i) Suppose ; there exists a such that for sufficient large , which implies that . Assume that , then there is -nonthin subsequence of that is -convergent to and Since it follows by (33) that
. Since is already -convergent to , so we have . Hence .
(ii) If , then there exists a real number such that for all . Without loss of generality, we can assume (as otherwise ). Now for all . Let , then there is a set with and . Let and . For any integer
satisfying , we can write Suppose as . Since is a lacunary sequence and the first part on the right side of above expression is a regular weighted mean transform of the sequence , therefore it too
tends to zero as . Since as , it follows that which is a contradiction as . Thus and therefore . Hence .
(iii) This is an immediate consequence of (i) and (ii).
Theorem 14. Let be a lacunary sequence and a sequence of fuzzy numbers. Then one has the following:(i)if, then ,(ii)if, then ,(iii)if, then .
Proof. The proof of the theorem can be obtain on the similar lines as that of the above theorem and therefore is omitted here.
Theorem 15. For any lacunary refinement of a lacunary sequence , and .
Proof. Suppose each of contains the points of so that , where . Note that for all , . Let be the sequence of abutting intervals ordered by increasing right end points. Let , then for each , As
before, write and . Now for each , we can write where is the characteristics function of the set and . Suppose . Then the right side of above expression is a regular weighted mean transform of and
therefore tends to zero as which contradicts (36). Thus , which shows that . Hence .
Similarly, we can prove .
Acknowledgment
The authors are grateful to the referees for their valuable suggestions which improved the readability of the paper.
References
1. H. Fast, “Sur la convergence statistique,” Colloquium Mathematicum, vol. 2, pp. 241–244, 1951.
2. I. J. Schoenberg, “The integrability of certain functions and related summability methods,” American Mathematical Monthly, vol. 66, pp. 361–375, 1959.
3. J. A. Fridy, “On statistical convergence,” Analysis, vol. 5, pp. 301–313, 1985.
4. J. A. Fridy, “Statistical limit points,” Proceedings of the American Mathematical Society, vol. 118, no. 4, pp. 1187–1192, 1993.
5. J. A. Fridy and C. Orhan, “Statistical limit superior and limit inferior,” Proceedings of the American Mathematical Society, vol. 125, no. 12, pp. 3625–3631, 1997.
6. M. A. Mammadov and S. Pehlivan, “Statistical cluster points and Turnpike theorem in nonconvex problems,” Journal of Mathematical Analysis and Applications, vol. 256, no. 2, pp. 686–693, 2001.
7. J. A. Fridy and C. Orhan, “Lacunary statistical convergence,” Pacific Journal of Mathematics, vol. 160, no. 1, pp. 43–51, 1993.
8. K. Demirci, “On lacunary statistical limit points,” Demonstratio Mathematica, vol. 35, pp. 93–101, 2002.
9. F. Nuray and E. Savaş, “Statistical convergence of sequences of fuzzy numbers,” Mathematica Slovaca, vol. 45, pp. 269–273, 1995.
10. S. Aytar, “Statistical limit points of sequences of fuzzy numbers,” Information Sciences, vol. 165, no. 1-2, pp. 129–138, 2004.
11. S. Aytar and S. Pehlivan, “Statistical cluster and extreme limit points of sequences of fuzzy numbers,” Information Sciences, vol. 177, no. 16, pp. 3290–3296, 2007.
12. S. Aytar, M. A. Mammadov, and S. Pehlivan, “Statistical limit inferior and limit superior for sequences of fuzzy numbers,” Fuzzy Sets and Systems, vol. 157, no. 7, pp. 976–985, 2006.
13. E. Savaş, “On statistically convergent sequences of fuzzy numbers,” Information Sciences, vol. 137, no. 1–4, pp. 277–282, 2001.
14. T. Bilgin, “$\Delta$-statistical and strong $\Delta$-Cesàro convergence of sequences of fuzzy numbers,” Mathematical Communications, vol. 8, pp. 95–100, 2003.
15. M. Işik, “On statistical convergence of generalized difference sequences,” Soochow Journal of Mathematics, vol. 30, no. 2, pp. 197–205, 2004.
16. Y. Altin, M. Et, and R. Çolak, “Lacunary statistical and lacunary strongly convergence of generalized difference sequences of fuzzy numbers,” Computers and Mathematics with Applications, vol. 52, no. 6-7, pp. 1011–1020, 2006.
17. Y. Altin, M. Et, and M. Basarir, “On some generalized difference sequences of fuzzy numbers,” Kuwait Journal of Science & Engineering, vol. 34, no. 1, pp. 1–14, 2007.
18. H. Altin and R. Çolak, “Almost lacunary statistical and strongly almost lacunary convergence of generalized difference sequences of fuzzy numbers,” Journal of Fuzzy Mathematics, vol. 17, no. 4, pp. 951–967, 2009.
19. R. Çolak, H. Altinok, and M. Et, “Generalized difference sequences of fuzzy numbers,” Chaos, Solitons and Fractals, vol. 40, no. 3, pp. 1106–1117, 2009.
20. M. L. Puri and D. A. Ralescu, “Differential of fuzzy numbers,” Journal of Mathematical Analysis and Applications, vol. 91, pp. 552–558, 1983.
21. F. Nuray, “Lacunary statistical convergence of sequences of fuzzy numbers,” Fuzzy Sets and Systems, vol. 99, no. 3, pp. 353–355, 1998. | {"url":"http://www.hindawi.com/journals/afs/2012/459370/","timestamp":"2014-04-20T21:18:43Z","content_type":null,"content_length":"662594","record_id":"<urn:uuid:25772421-4786-41dc-b23e-3d7e059d6eb3>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00335-ip-10-147-4-33.ec2.internal.warc.gz"} |
Now, Calculate The Potential Produced By A Uniform ... | Chegg.com
Image text transcribed for accessibility: Now, calculate the potential produced by a uniform disk of charge at a point on the axis a distance z above the center of the disk. First consider a ring with radius R' and width dR'. The area of the ring is dA = and if a is the area charge density the charge contained in the ring is dq = All charge on the ring is the same distance from the
point; in particular, in terms of z and R',r = . Thus the potential produced by the ring at z is dV = and the potential is given by the integral . Since (R'2 + z2 )-1/2 R' dR' = (R'2 + z2 )1/2 the
integral can be evaluated easily. Carefully note the limits of integration. The result is: V = To get this we need to recall the following: | {"url":"http://www.chegg.com/homework-help/questions-and-answers/calculate-potential-produced-uniform-disk-charge-point-axis-distance-z-center-disk-first-c-q206732","timestamp":"2014-04-19T09:09:21Z","content_type":null,"content_length":"18711","record_id":"<urn:uuid:6663c23a-99b8-4adc-9442-8bb114810862>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
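Filling in the blanks of the question above in the standard way (dA = 2πR′dR′, dq = σdA, r = √(R′² + z²)) leads to the closed form V(z) = (σ/2ε₀)(√(R² + z²) − z). The following sketch (mine, with example values, not part of the exercise) checks that formula by summing the ring contributions numerically:

```python
import math

EPS0 = 8.8541878128e-12          # vacuum permittivity, F/m
sigma, R, z = 1e-6, 0.05, 0.02   # example charge density (C/m^2), geometry (m)

# Midpoint-rule sum of dV = sigma * r' dr' / (2 eps0 sqrt(r'^2 + z^2)).
N = 100_000
dr = R / N
V_num = sum(
    sigma * (i + 0.5) * dr * dr / (2 * EPS0 * math.hypot((i + 0.5) * dr, z))
    for i in range(N)
)

# Closed form from the antiderivative quoted in the problem.
V_exact = sigma / (2 * EPS0) * (math.hypot(R, z) - z)

print(V_exact)                                  # ~1.9e3 volts for these values
print(abs(V_num - V_exact) / V_exact < 1e-6)    # True
```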
Fractal Physiology and the Fractional Calculus: A Perspective
The theme of this paper is to indicate the necessity for a fractal view of physiology that explicitly takes into account the complexity of living matter and its dynamics. Complexity in this context
incorporates the recent advances in physiology concerned with the applications of the concepts from fractal geometry, fractal statistics and nonlinear dynamics, to the formation of a new kind of
understanding within the life sciences. A parallel development has been the understanding of the dynamics of fractal processes and how those dynamics are manifest in the control of physiologic
networks. For a number of years the study of fractals and its application to physiology was restricted to the determination of the fractal dimension of structure, in particular, the static structure
of objects and the scaling of time series. However, now we explore the dynamics of fractal processes using the fractional calculus, and apply this dynamical approach to both regular and stochastic
physiologic processes. To understand the need for such an approach a historical perspective is useful.
It is not a coincidence that the modern view of how the human body operates mirrors our understanding of the technological society in which we live, where a thermostat controls the temperature of a
home, the sound of a voice can turn the lights on and off, and cruise control determines the speed of a car. It is not clear when this idea of how the body works began to permeate society, but in
medicine the concept was introduced by the nineteenth century scientist Claude Bernard (1813–1878). He developed the notion underlying homeostasis in his study of stability of the human body. The
word homeostasis was popularized half a century later by Walter Cannon (1871–1945) in his book The Wisdom of the Body (Cannon, 1932). Homeostasis is what many consider to be the guiding principle of
medicine, whereby every human body has multiple automatic inhibition mechanisms that suppress disquieting influences of the environment. Homeostasis is the evolutionary strategy selected to enable
the human body to maintain an internal balance, although it is not always evident how a particular suppressing response is related to a specific antagonism. Biology teaches that evolution has, over
the millennia, reduced homeostatic networks to the bare minimum, so that in the spirit of parsimony, every internal mechanism of a physiological network is necessary to maintain either the structural
or functional integrity of the organism.
But why should physiologic networks be homeostatic? Why has nature determined that this is the “best” way to control the various complex networks in the human body? In part, nature’s choices have to
do with the fact that no physiologic network is isolated; these networks are, in fact, made up of a mind-numbing number of subnetworks, the cells. The task of a cell is simple and repetitive, but
that of an organ is not. Therefore a complex network like the cardiovascular system is made up of a variety of cell types, each type performing a different function. If responses to changes in the
external environment were at the cellular level, physiology would be much more complicated than it is already, and organs would no doubt be unstable. But nature has found that if the immediate
environment of the cells is kept within certain narrowly defined limits, then the cells can continue to perform their specific tasks and no others, even while organs respond to sometimes extravagant
external disturbances. As long as the internal environment stays within a certain operational range the cells continue to function without change. Thus, homeostasis is the presumed strategy that
nature has devised to keep the internal state of the body under control.
The level of sophistication of control mechanisms was brought to light with the centrifugal fly-ball governor (1788) constructed by J. Watt for regulating the speed of the rotary steam engine. This
artificial control mechanism heralded the onset of the Industrial Revolution. The first mathematical description and consequent understanding of Watt’s governor was constructed by the English
physicist J. C. Maxwell in 1868, when he linearized the differential equations describing the governor’s dynamics. The solutions to the linearized differential equations (control) are stable when the
eigenvalues have negative real parts (stabilizing feedback) and in this way the language for the control of dynamical networks was introduced.
The homeostatic control of physiologic networks classifies the dynamics as negative feedback, because such homeostatic networks respond in ways to dampen environmental disturbances including
fluctuations. However the control of certain networks has the opposite behavior, that is, they have a positive feedback, because the networks respond in ways to amplify perturbations. Of course, such
responses lead to unstable behavior in general, but such instability is sometimes useful. Consequently feedback can either amplify or suppress disturbances depending on the network’s dynamics.
The picture of reducing the variability in the size of widgets coming off an assembly line to meet specifications and the suppression of physiologic variability by homeostatic control remains
compatible. However the scaling of physiologic time series and the interpretation of that scaling in terms of long-term memory and fractal dimensions (Mandelbrot, 1977, 1982) is not consistent with a
simple view of the world in general or of physiology in particular. Therefore we explore some of the ways fractal dynamics has required modification of the principle of homeostasis (Goldberger, 2006)
and how allometric control (West, 2009) may replace homeostatic control. Consequently we hypothesize that complex physiologic networks require allometric control.
Another important hypothesis that developed from this view of physiologic time series is that disease and aging are associated with the loss of complexity and not with the loss of regularity (
Goldberger et al., 1990). This hypothesis could be a consequence of a loss of interactions among component networks, or as Pincus (1994) suggested the increased isolation of network elements can
result in a decrease in the complexity of the network’s signal. The complexity hypothesis may also be related to the idea that disease marks a departure from normal physiologic behavior and because
that departure may be either more or less irregular it has been called “dynamic disease” by Glass (2001) and is caused by modifications in the underlying physiologic control network.
The fractal concept was formally introduced into the physical sciences by Benoit Mandelbrot more than 20 years ago in his monograph (Mandelbrot, 1977), which brought together mathematical, experimental,
and physical arguments that undermined the traditional picture of the physical world. It had been accepted that celestial mechanics and physical phenomena are, by and large, described by smooth,
continuous, and unique functions, since before the time of Lagrange (1736–1813). This belief was part of the conceptual infrastructure of the physical sciences. The changes in physical processes were
modeled by systems of dynamical equations and the solutions to such equations are continuous and differentiable at all but a finite number of points. Therefore the phenomena being described by these
equations were thought to have these properties of continuity as well as differentiability.
From the phenomenological side, Mandelbrot called into question the fidelity of the traditional perspective by pointing to the failure of the equations of physics to explain such familiar phenomena
as turbulence and phase transitions, for example, the melting of ice and the clotting of blood. In his books (Mandelbrot, 1977, 1982) Mandelbrot catalogued and described dozens of physical, social,
and biological phenomena that cannot be properly described using the familiar tenets of dynamics from physics. The functions required to explain these complex phenomena have properties that for a hundred years had been thought to be mathematically pathological. Mandelbrot argued that, rather than being pathological, these functions capture essential properties of reality and are therefore better
descriptors of the real world than the traditional analytic functions of theoretical physics.
Schrödinger (1943), using the principles of equilibrium statistical physics, laid out his understanding of the connection between the world of the microscopic and macroscopic. In that discussion he
asked why atoms are so small relative to the dimension of the human body. The high level of organization necessary for life is only possible in a macroscopic network; otherwise the order would be
destroyed by microscopic (thermal) fluctuations. A living network must be sufficiently large to maintain its integrity in the presence of thermal fluctuations that randomly disrupt its constitutive
elements. Thus, macroscopic phenomena are characterized by averages over ensemble distribution functions characterizing microscopic fluctuations. The dynamics of macroscopic variables therefore
generally do not contain thermal fluctuations; the fluctuations typically observed in physiologic time series are macroscopic not microscopic. Consequently any strategy for modeling physiology must
be based on an understanding of the statistical properties of complex macroscopic phenomena, and as we shall see, on our understanding of fluctuating phenomena that lack characteristic scales and are
therefore fractal.
There are three types of fractals that appear in the life sciences: geometrical fractals, that determine the spatial properties of the tree-like structures of the mammalian lung, arterial and venous
systems, and other ramified structures (West and Deering, 1994); statistical fractals (Mandelbrot, 1982), that determine the properties of the distribution of intervals in the beating of the
mammalian heart (Peng et al., 1993), breathing (Altemeier et al., 2000), walking (Hausdorff et al., 1995; West and Griffin, 1998, 1999; Griffin et al., 2000) and in the firing of certain neurons (Das
et al., 2003) and finally dynamical fractals (West et al., 2003a), that determine the dynamical properties of networks having a large number of characteristic time scales. In complex physiologic
networks the distinctions between these three kinds of fractals often blur, and herein we focus our attention on the dynamics rather than on the geometry of fractals; although in this journal we
fully expect to entertain studies involving all three types of fractals.
We have made three interrelated hypotheses in this Introduction. The first is that complex physiologic networks require allometric control; the second is that disease is the loss of complexity; and
finally that the fractal dimension is a significantly better indicator of organismic functions in health and disease than are traditional averages. These hypotheses are interrelated due to the fact
that complex physiologic time series have 1/f variability, manifest in an inverse power-law spectrum, an inverse power-law probability density or both. The power-law index is related to the fractal
dimension, which is a measure of the complexity of the underlying process.
In support of these hypotheses we briefly review how such concepts as complexity, fractals, diverging moments, nonlinear dynamics, and other related mathematical topics along with their experimental
testing are used to understand physiologic networks. Of course, a number of books have been written about any one of these ideas – books for the research expert (Meakin, 1998), books for the informed
teacher (Schroeder, 1991), books for the struggling graduate student (West, 1999), and books for the intelligent lay person (Prigogine and Stengers, 1984). Different authors stress different
characteristics of complex phenomena, from the erratic data collected by clinical researchers (Dewey, 1997) to the fluctuations generated by deterministic dynamical equations used to model such
networks (Ott, 1993). Some authors have painted with broad brushstrokes, indicating only the panorama that these concepts reveal to us (Briggs and Peat, 1971), whereas others have sketched with
painstaking detail the structure of such phenomena and have greatly enriched those that could follow the arguments (Rosen, 1991). Herein we view our efforts as being midway between the two since
Fractal Physiology is itself a hypothesis that is continually being tested.
Manifestations of Variability
Healthy physiologic networks give rise to time series that display erratic fluctuations not unlike those found in dynamical systems driven from the vicinity of a set point, or from an equilibrium
state (Stanley et al., 1999). The statistical properties of physiological fluctuations, such as found in the time series for heartbeat dynamics, respiration, human locomotion, and posture control (
Collins and DeLuca, 1994), have been the focus of interdisciplinary research on complex networks for more than two decades (West, 1999). The rationale for this persistent interest is related in part
to the idea that unlike the thermal fluctuations found in physics, which perturb a system but do not contain useful information, physiologic fluctuations are often the result of internal control and
therefore frequently contain useful information. The goal here is to better understand self-regulatory control systems for complex physiologic phenomena that produce such fluctuations and to describe
the dynamics of such phenomena with tools capable of capturing their nonlinear and often exotic statistical character (Bassingthwaighte et al., 1994).
One outcome of the research into the properties of these fluctuations has been a profound change in our understanding of the significance of homeostasis and, as suggested by Stanley et al. (1999), the possibility of there existing a "non-homeostatic physiologic variability". The discovery of fractal and multifractal properties in physiological time series has led to the suggestion that the
intrinsic variability of many physiological phenomena reflects the adaptability of the underlying control networks (West and Goldberger, 1987). Consequently, the statistical properties, including
correlations of physiological fluctuations, may be more important in the control of health and disease than are the average properties, such as those under homeostatic control.
Power Laws
Scale invariance is the property that relates the elements of time series across multiple time scales and has been found to hold empirically for a number of complex physiologic phenomena including
the inter-beat intervals of the human heart (Ivanov et al., 1999; West et al., 1999a), switching times in vision (Gao et al., 2006), inter-stride intervals of human gait (Jordan et al., 2006), brain
wave data from EEGs (West et al., 1995) and inter-breath intervals (Szeto et al., 1992), to name a few. One way to understand scaling in these and other experimental data is by means of a
renormalization group approach. Consider an unknown function Z(t) that satisfies a relation of the form

Z(bt) = aZ(t).   (1)
We solve this equation in the same manner that differential equations are solved, by assuming a trial solution, inserting the trial solution into the equation of motion and solving for the appropriate constants. In the present case we assume a trial solution of the form

Z(t) = A(t)t^α.   (2)
Substituting Eq. 2 into both the lhs and the rhs of Eq. 1 yields the condition that the function A(t) is periodic in the logarithm of the time with period log b, that is, A(bt) = A(t), and the power-law index has the value

α = log a/log b.   (3)
In the literature Z(t) is called a homogeneous function (Barenblatt, 1994). Note that the parameter a scales the amplitude of the function being measured and the parameter b scales the resolution of
the time scale. The power-law index is the ratio of the logarithms of these two scaling parameters, indicating how the amplitude of the function is modified as the units of time are modified.
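The argument above is easy to check numerically. In the following sketch the scaling parameters a and b are hypothetical choices made for illustration (they are not values from the text), and the modulation A(t) is held constant:

```python
import math

# Hypothetical scaling parameters (chosen for illustration, not from the text).
a, b = 3.0, 2.0
alpha = math.log(a) / math.log(b)  # the power-law index: ratio of the logarithms

def Z(t, A=1.0):
    # Trial solution Z(t) = A(t) * t**alpha with the log-periodic
    # modulation A(t) held at a constant value.
    return A * t ** alpha

# The renormalization relation Z(b*t) = a*Z(t) holds for any t > 0.
for t in (0.5, 1.0, 7.3, 100.0):
    assert math.isclose(Z(b * t), a * Z(t), rel_tol=1e-12)
print("alpha = log a / log b =", round(alpha, 5))  # → 1.58496
```

Any other positive pair (a, b) works the same way; only the value of the index changes.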
The homogeneous function Z(t) is now used to define the scaling observed in the moments calculated from the experimental time series with long-time memory. The second moment of a time-dependent stochastic process is identified as Z(t) = 〈X(t)^2〉, so that for a process with long-time memory the scaling is given by (Bassingthwaighte et al., 1994)

〈X(bt)^2〉 = b^(2H) 〈X(t)^2〉.   (4)
For the same process a different scaling is given for the stationary autocorrelation function Z(τ) = 〈X(t)X(t + τ)〉 = C(τ) yielding (Bassingthwaighte et al., 1994)

C(bτ) = b^(2H−2) C(τ).   (5)
Finally, the spectral density for this time series, given by the Fourier transform of the autocorrelation function and written in terms of the frequency f as Z(f) = S(f), is (Bassingthwaighte et al., 1994)

S(bf) = b^−(2H−1) S(f).   (6)
The solutions to each of these three scaling equations are of precisely the algebraic form implied by Eq. 2, and in the simplest case the modulation amplitude A(t) is fixed at a constant
(time-independent) value.
The above renormalization scaling yields a mean-square signal level that increases nonlinearly with time according to Eq. 4 as

〈X(t)^2〉 ∝ t^(2H),   (7)
and the exponent H is a real constant, often called the Hurst exponent (Mandelbrot, 1977). In a complex physiologic network the response X(t) is expected to depart from the entirely random condition
of a simple random walk model, because real fluctuations are expected to have memory and correlation quantified by H. In the physics literature anomalous diffusion (H ≠ 0.5) is associated with
phenomena with long-time memory such that the two-time autocorrelation function is (Bassingthwaighte et al., 1994; Beran, 1994)

C(τ) ∝ τ^β.   (8)
Here the power-law index is given by β = 2H − 2 in agreement with Eq. 5. Note that the two-point autocorrelation function is assumed to depend only on the time difference, thus, the underlying
process is stationary. The autocorrelation function is an inverse power law in time because 0 ≤ H ≤ 1 implying that the correlation between data points decreases in time with increasing time
separation. Note that inverse power law loss of memory is much slower than the exponential decay that is often assumed. This scaling behavior is also manifest in the spectrum, which according to Eq.
6 is a power law in frequency f:

S(f) ∝ 1/f^α, with α = 2H − 1,   (9)

and is an inverse power law for H > 0.5, a superdiffusive process.
These three properties, the algebraic increase in time of the mean-square signal strength (Eq. 7), the inverse power law in time of the stationary autocorrelation function (Eq. 8) and the inverse
power law in frequency of the spectrum (Eq. 9), are typical of observed physiologic time series. These properties are usually assumed to be the result of long-time memory in the underlying
statistical process. Beran (1994) discusses these power-law properties of the spectrum and autocorrelation function, as well as a number of other properties for discrete and continuous time series.
In particular he points out that the interpretation in terms of how to generate long-time memory in complex networks is not unique and reviews the use of fractional difference random walks. Herein we
extend the discussion to fractional stochastic differential equations (West, 1999) and the dynamics of fractals. But first we note the long history associated with the 1/f spectrum (Eq. 9).
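The mean-square growth law of Eq. 7 can be illustrated with a minimal simulation (an illustration under the stated assumptions, not a result from the text): an uncorrelated random walk has Hurst exponent H = 0.5, so 〈X(t)^2〉 should grow as t^(2H) = t.

```python
import math
import random

# Sketch: for an uncorrelated random walk H = 0.5, so the mean-square
# signal <X(t)^2> should grow as t^(2H) = t, i.e. linearly in time.
random.seed(2)
n_walks, n_steps = 2000, 400
msd = [0.0] * (n_steps + 1)  # msd[t] accumulates X(t)^2 over the ensemble
for _ in range(n_walks):
    x = 0.0
    for t in range(1, n_steps + 1):
        x += random.choice((-1.0, 1.0))  # unit step, equally likely up or down
        msd[t] += x * x
msd = [m / n_walks for m in msd]  # ensemble average <X(t)^2>

# Estimate 2H from the ratio of the mean-square values at t = 100 and t = 400.
two_H = math.log(msd[400] / msd[100]) / math.log(4.0)
print("estimated 2H =", round(two_H, 2))
```

A process with memory (H ≠ 0.5) would show a slope different from unity in this estimate, which is the anomalous diffusion referred to in the text.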
The phenomenon of 1/f noise was discovered by Schottky (1918) at the turn of the last century in his study of electrical conductivity. Between then and now this spectral form has been found in
biological, economic, linguistic, medical, neurological, and social phenomena as well as in physics (West et al., 2008). The spectra of such complex phenomena are given by Eq. 9 and the spectral
index falls within the interval 0.5 < α < 1.5. Complex phenomena span the dynamic range from the macroscopic behavioral level down to the microscopic level. It is evident that 1/f variability appears
in body movements such as walking, postural sway, and movement in synchrony with external stimulation such as a metronome; also such variability resides in physiologic networks as manifest in heart
rate variability (HRV, Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiolgy, 1996), human vision (Alvarez-Ramirez et al., 2008), the dynamics
of the human brain (Gilden, 2001; Grigolini et al., 2009), and in human cognition (Van Orden et al., 2005; Kello et al., 2007); also 1/f noise is measured at the level of single-ion channels (
Liebovitch and Krekora, 2002; Roy et al., 2008) and in single neuron adaptation to various stimuli (Das et al., 2003). Each of these psychophysical phenomena manifests 1/f variability (West et al.,
2008). The original assertion that α = 1 was relaxed by these subsequent studies, which extended the spectral index over the broader range indicated above.
Allometric Relations
The term scaling denotes a power-law relation between two variables x and y,

y = ax^α,   (10)
and as Barenblatt (1994) explained such scaling laws are not merely special cases of more general relations; they never appear by accident and they always reveal self-similarity. In biology Eq. 10 is
historically referred to as an allometric relation between two observables. Such relations were introduced into biology in the nineteenth century. Typically an allometric equation relates two
properties of a given organism. For example, the total mass of a deer y is proportional to the mass of the deer’s antlers x raised to a specific power α. Huxley summarized the experimental basis for
this relation in his 1931 book (Huxley, 1931) and developed the mathematics to describe and explain allometric growth laws. He reasoned that in biological systems two parts of an organism grow at
different rates, but the growth rates are proportional to one another. Consequently, how rapidly one part of the organism grows can be related to how rapidly the other part of the organism grows and
the ratio of the two rates is constant. Another such application has y as the body’s metabolic rate with x the body’s mass and recent theory in terms of fractal transport of material within the body
purports to explain the observed value of the power-law index α ≈ 0.75 (West et al., 1997).
The notion of an allometric relation has been generalized to include measures of time series. In this view y is interpreted to be the variance and x the average value of the quantity being measured.
The fact that these two central measures of a time series satisfy an allometric relation implies that the underlying time series is a fractal random process and therefore scales. Taylor (1961) first determined empirically that certain statistical data satisfy a power-law relation of the form of Eq. 10, and this is where we begin our discussion of the allometric aggregation method of data analysis.
Taylor was interested in biological speciation. For one thing, he was curious about how many species of beetle can be found in a given area of land and he therefore sectioned off a large field into
plots. In each plot he sampled the soil for the variety of beetles that were present. This enabled him to determine the distribution in the number of new species of beetle spatially distributed
across the field. From the distribution he could then extract the mean 〈X〉 and the variance VarX of the number of new species. After this first calculation he partitioned his field into smaller plots and redid the sampling,
again determining the mean and variance in the number of species at this increased resolution. This process was repeated a number of times, yielding a set of values for the mean and variance. In the
ecological literature a graph of the logarithm of the variance versus the logarithm of the average value is called a power curve, which is linear in the logarithms of the two variables and b is the
slope of the curve. The algebraic form of the relation between the variance and mean is

VarX = a〈X〉^b,   (11)
where the two parameters a and b determine how the variance and mean are related to one another.
Taylor (1961) exploited the curves obtained from data in a number of ways using the slope and intercept parameters. If the slope of the curve and the intercept are both equal to 1, a = b = 1, then
the variance and mean are equal to one another. This equality is only true for a Poisson distribution, which, when it occurred, allowed him to interpret the number of new species as being randomly
distributed over the field, with the number of species in any one plot being independent of the number of species in any other plot. If, however, the slope of the curve was less than unity, the
number of new species appearing in the plots was interpreted to be quite regular. The spatial regularity of the number of species, in this case, was compared with the trees in an orchard and given
the name evenness. Finally, if the slope of the variance versus mean curve was greater than 1, the number of new species was interpreted as being clustered in space, like disjoint herds of sheep
grazing in a meadow. This clustering is a form of spatial intermittency.
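Taylor's random (slope-equal-to-one) case can be sketched numerically. The counts below are simulated Poisson samples, not Taylor's beetle data; for a Poisson process the variance equals the mean, so the power curve has slope b = 1:

```python
import math
import random

random.seed(7)

def poisson(lam):
    # Knuth's multiplicative algorithm for Poisson-distributed counts.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Sample counts at several mean densities and record (log mean, log variance).
pairs = []
for lam in (2.0, 5.0, 10.0, 20.0, 40.0):
    counts = [poisson(lam) for _ in range(5000)]
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / len(counts)
    pairs.append((math.log(m), math.log(v)))

# Least-squares slope of log(variance) versus log(mean): the power curve.
n = len(pairs)
sx = sum(x for x, _ in pairs); sy = sum(y for _, y in pairs)
sxx = sum(x * x for x, _ in pairs); sxy = sum(x * y for x, y in pairs)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
print("Taylor slope b =", round(slope, 2))  # near 1 for a Poisson process
```

Clustered (intermittent) data would push the fitted slope above 1, and highly regular ("evenness") data would push it below 1, matching Taylor's interpretations.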
Of particular interest to us here was the mechanism that Taylor and Taylor (1977) postulated to account for the experimentally observed allometric relation:
We would argue that all spatial dispositions can legitimately be regarded as resulting from the balance between two fundamental antithetical sets of behavior always present between individuals. These
are, repulsion behavior, which results from the selection pressure for individuals to maximize their resources and hence to separate, and attraction behavior, which results from the selection
pressure to make the maximum use of available resources and hence to congregate wherever these resources are currently most abundant.
Consequently, they postulated that it is the tension between the attraction and repulsion, migration and congregation, which produces the interdependence (scaling) of the spatial variance and the
average population density. We suggest that this mechanism is generic and may underlie a number of natural phenomena including those in complex physiologic networks.
We can now reinterpret Taylor’s observations because the kind of clustering he observed in the spatial distribution of species number, when the slope of the power curve is greater than 1, is
consistent with an asymptotic inverse power-law distribution of the underlying data set. Furthermore, the clustering or clumping of events is due to the fractal nature of the underlying dynamics.
Willis, some 40 years before Taylor, established the inverse power-law form of the number of species belonging to a given genera (Willis, 1922). Willis used an argument associating the number of
species with the size of the area they inhabit. It was not until the decade of the 1990s that it became clear to more than a handful of experts that the relationship between an underlying fractal
process and its space filling character obeys a scaling law (Mandelbrot, 1977, 1982). It is this scaling law that is manifest in the allometric relation between the variance and mean.
It is possible to test the allometric relation of Taylor using computer-generated data. But before we do so, we note that Taylor and Woiwod (1980) were able to extend the discussion from the
stability of the population density in space, independent of time, to the stability of the population density in time, independent of space. Consequently, just as spatial stability, as measured by
the variance, is a power function of the mean population density over a given area at all times, so too the temporal stability, as measured by the variance, is a power function of the mean population
density over time at all locations. With this generalization in hand we apply Taylor’s Law to time series.
Scaling Time Series
Allometric relations such as Eq. 10 have been extended to include measures of time series. In this extended view y is interpreted to be the variance and x the average value of the quantity being
measured as in Taylor’s Law (Eq. 11). The fact that these two central measures of the time series satisfy an allometric relation implies that the underlying time series is a fractal random process.
The scaling of time series data is here determined by grouping the data into aggregates of two, three, and more of the original data points and calculating the mean and variance at each level of
aggregation. The idea is that if the data are fractal in nature then we need not increase the resolution as Taylor did. We should be able to determine the scaling behavior by coarse graining or
aggregating the data. In this spirit the variance, for a monofractal random time series, is given by (Bassingthwaighte et al., 1994)

VarX^(n) = n^(2H) VarX^(1),   (12)
where the superscript on the variable indicates that it is determined using the aggregation of n-adjacent data points. It is well established (Mandelbrot, 1977; Bassingthwaighte et al., 1994) that
the exponent in a scaling equation such as Eq. 12 is related to the fractal dimension D of the underlying time series by the relation D = 2 − H.
The allometric aggregation approach has been applied to a number of data sets by implementing linear regression analysis on the logarithms of the variances and the averages as follows:

log VarX^(n) = log a + b log 〈X^(n)〉.   (13)
Consequently the processed data from self-similar data would appear as straight lines on log–log graph paper. For example, in Figure 1 we apply Eq. 13 to one million computer-generated data points
with Gaussian statistics. The far left dot in this figure contains all the data in the calculation of the aggregated mean and variance so that n = 1 in Eq. 13. The next point to the right in the
figure contains the nearest-neighbor data points added together to define a data set with a half million data points from which to calculate the mean and variance and so on moving from left to right.
Consequently, this process of aggregating the data is equivalent to decreasing the resolution of the time series and as the resolution is systematically decreased, the adopted measure, the allometric
relation between the mean and variance, reveals an underlying property of the time series. The increase in the variance with increasing average values for increasing aggregation number shown in the
figure is not an arbitrary pattern. The curve indicates that the aggregated data points are interconnected. The original computer-generated data points are not correlated, but the adding of data
points in the aggregation process induces a correlation, one that is completely predictable. The induced correlation is linear if the original data is uncorrelated, but the induced correlation is not
linear if the original data is correlated.
FIGURE 1
Figure 1. The logarithm of the variance is plotted versus the logarithm of the mean for the successive aggregation of 10^6 computer-generated random data points with Gaussian statistics. The slope of
the curve is essentially one, determined by a linear regression using Eq. 13, so the fractal dimension of the time series is D = 1.5.
The aggregated variance versus the aggregated mean falls along a straight line in Figure 1 with a slope of 1 for the uncorrelated random process with computer-generated Gaussian statistics.
Therefore, in the case of Gaussian statistics, we obtain from the slope of the curve b = 1, so that the fractal dimension is given by D = 2 − b/2 = 1.5 corresponding to the fractal dimension of
Brownian motion (Suki et al., 2003). In the same way a completely regular time series would have b = 2, so that D = 1. The fractal dimension for most time series falls somewhere between these two
extremes; the closer the fractal dimension is to 1, the more regular the process; the closer the fractal dimension is to 1.5, the more it is like an uncorrelated random process. The data analyzed in
Figure 1 certainly have a single fractal dimension characterizing the entire computer-generated time series. If the power-law index, the slope of the above curve, is 1 then the data are from an
uncorrelated random process. If the index is greater than 1 then the data cluster, indicating correlations in the random process as interpreted by Taylor.
We emphasize that the allometric aggregation approach is just one of many procedures designed to take advantage of the scaling properties of the central moments of time series. We refer to such
methods collectively as finite variance statistical methods (FVSM). However, it should be emphasized that not all time series that scale have finite variance. Time series having Lévy α-stable
statistics exemplify processes with diverging variance, but they are described by probability density functions that scale (West, 1999). We review these matters after some discussion of the scaling
properties of physiological time series.
Fractal Time Series
Let us consider the time series from a number of complex physiologic networks such as the cardiovascular, the respiratory, and the motor control. In each case a time series associated with the
particular physiologic network is found to be a random fractal as determined by scaling behavior. We have applied the allometric aggregation approach to these time series and others as reviewed by
West (2006a) and here we begin the discussion with the observed variability of the inter-beat intervals of the heart.
The mechanisms producing the observed variability in the size of a human heart’s inter-beat intervals apparently arise from a number of sources. The sinus node (the heart’s natural pacemaker)
receives signals from the autonomic (involuntary) portion of the nervous system that has two major branches: the parasympathetic, whose stimulation decreases the firing rate of the sinus node, and
the sympathetic, whose stimulation increases the firing rate of the sinus node pacemaker cells. The influence of these two branches produces a continual tug-of-war on the sinus node, one decreasing
and the other increasing the heart rate. It has been suggested that it is this tug-of-war that produces the fluctuations in the heart rate of healthy subjects in direct analogy with the observations
of Taylor and Woiwod (1980), but alternate suggested mechanisms are pursued subsequently. Consequently, HRV provides a window through which we can observe the heart’s ability to respond to normal
disturbances that can affect its rhythm. The clinician focuses on retaining the balance in regulatory impulses from the vagus nerve and sympathetic nervous system and in this effort requires a robust
measure of that balance (West et al., 2008). A quantitative measure of HRV time series, such as the fractal dimension, serves this purpose.
Heart rate variability time series have been used as a quantitative indicator of autonomic activity. Physicians became interested in developing this indicator of variability because experiments
indicated a relationship with lethal arrhythmias. A task force was formed and charged with the responsibility of developing the standards of measurement, physiological interpretation and clinical use
of HRV. They published their findings (Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, 1996) in 1996, after which the importance of HRV to medicine became more widely apparent.
When an individual’s heart rate departs from its typical pattern, quantifying the variation in heart rate becomes clinically consequential. There are a number of ways to calculate measures of HRV, some sixteen at last count, each related to scaling in one way or another and most belonging to the FVSM category. However, it would not be productive to review them all here. Instead we identify the scaling index as the
most revealing of the characteristics of HRV and use the allometric aggregation approach relating the variance and mean of empirical data to determine the scaling index or equivalently the fractal
dimension. We apply the allometric aggregation approach to the heart’s RR-intervals for a healthy young adult male in Figure 2.
FIGURE 2
Figure 2. The logarithm of the standard deviation is plotted versus the logarithm of the average value for the heartbeat interval time series for a young adult male, using sequential values of the
aggregation number (West, 2006a). The solid line segment is the best fit to the aggregated data points and yields a fractal dimension D = 1.24 midway between the curve for a regular process (D = 1)
and that for an uncorrelated random process (D = 1.5) as indicated by the dashed curves.
In Figure 2 the logarithm of the standard deviation is plotted versus the logarithm of the mean value for a typical HRV time series. Note that we use the standard deviation in the figure and not the
variance, but this makes no essential difference to the discussion. At the left-most position the data point indicates the standard deviation and mean computed using all the data points. Moving from left to
right the next data point is constructed from the time series with two nearest-neighbor data points added together and the procedure is repeated moving right until the right-most data point has 20
nearest-neighbor data points added together. The solid line segment is the best linear representation of the scaling obtained using a mean-square minimization procedure that intercepts most of the
data points with a positive slope of 0.76. We can see that the slope of the HRV data is midway between the dashed curves depicting an uncorrelated random process (slope = 1/2) and one that is
deterministically regular (slope = 1).
We emphasize that the conclusions we draw here are not from this single figure or set of data presented, but are representative of a much larger body of work. The conclusions are based on a large
number of similar observations (West, 1999; Glass, 2001; Suki et al., 2003) made using a variety of data processing techniques, all of which yield results consistent with the scaling of the HRV time
series indicated in Figure 2. So we conclude that the heartbeat intervals do not form an uncorrelated random sequence. Instead we see that the HRV time series is a statistical fractal, indicating
that the heartbeats have long-time memory. The implications of this long-time memory concerning the underlying physiological control system are taken up in the subsequent discussion of the
mathematical models.
Scaling phenomena, such as shown for the HRV time series data in Figure 2, are said to be self-similar. The fact that the standard deviation and mean values change in a certain way as a function of
aggregation number implies that the magnitudes of these measures depend on the size of the ruler used to measure the time interval. Recall that this is one of the defining characteristics of fractal
curves; the length of the curve becomes infinite as the size of the ruler goes to 0. The dependence of the mean and standard deviation on the ruler size, for a self-similar time series, implies that
the statistical process is fractal and consequently defines a fractal dimension for the HRV time series.
The average scaling exponent obtained by Peng et al. (1993) for a group of 10 healthy subjects having a mean age of 44 years, using 10000 data points for each subject, was b = 0.19 for the difference
in heartbeat interval time series, not the heartbeat intervals themselves. They interpreted this value to be consistent with a theoretical value of b = 0, which they conjectured would be obtained for
an infinitely long time series. The latter scaling implies that the scaling exponent for the heartbeat intervals themselves would be 1.0. However, all data sets are finite and it was determined that
the asymptotic scaling coefficients for the heartbeat interval time series of healthy young adults lie in the interval 0.7 ≤ b ≤ 1.0. The value of the scaling coefficient obtained using much shorter
time series and the relatively simple processing technique of allometric aggregation is consistent with these results.
We also investigate in the same way the dynamics of breathing; the apparently regular breathing as you sit quietly reading this paper. Here evolution’s design of the lung may be closely tied to the
way the lung carries out its function. It is not by accident that the cascading branches of the bronchial tree become smaller and smaller, nor is it good fortune alone that ties the dynamics of our
every breath to this physiologic structure. We argue that, like the heart, the lung is made up of fractal processes, some dynamic and others now static. As with the heart, the variability of the
breathing rate using breath-to-breath time intervals is denoted by breathing rate variability (BRV), to maintain a consistent notation. We present a BRV plot in Figure 3 and obtain a figure similar
to that in Figure 2. Both kinds of processes lack a characteristic scale and a simple argument establishes that such lack of scale has evolutionary advantages (West, 1990). Here again we observe that
the data fall on a line segment midway between the regular and the random with a fractal dimension of D = 1.14, perhaps tilting more toward the regular. It is also observed that as we age the fractal
dimension increases and our breathing becomes increasingly random – a loss of regularity with age (West, 2006a).
FIGURE 3
Figure 3. A fit to the aggregated standard deviation versus the aggregated mean for a typical BRV time series (West, 2006a) is depicted. The points are calculated from the data and the solid curve is
the best least-square fit to the processed BRV data and yields a fractal dimension D = 1.14 midway between the curve for a regular process (D = 1) and that for an uncorrelated random process (D =
1.5) as indicated by the dashed curves.
Such observations regarding the self-similar nature of breathing time series have been used in a medical setting to produce a revolutionary way of utilizing mechanical ventilators. Historically
ventilators have been used to facilitate breathing after an operation and have a built-in frequency of ventilation. The single-frequency ventilator design has recently been challenged by Mutch et al.
(2000), who have used an inverse power-law spectrum of respiratory rate to drive a variable ventilator. They demonstrated that this way of supporting breathing produces an increase in arterial
oxygenation over that produced by conventional control-mode ventilators. This comparison indicates that the fractal variability in breathing is not the result of happenstance, but is an important
property of respiration. A reduction in variability of breathing reduces the overall efficiency of the respiratory system.
Altemeier et al. (2000) measured the fractal characteristics of ventilation and determined that not only are local ventilation and perfusion highly correlated, but they scale as well. Finally, Peng
et al. (2002) analyzed the BRV time series for 40 healthy adults and found that under supine, resting, and spontaneous breathing conditions, the time series scale. This result implies that human BRV
time series have: “long-range (fractal) correlations across multiple time scales.”
Another exemplar of the many fractal time series is that for walking. Applying the allometric aggregation approach to stride rate variability (SRV) time series (West and Griffin, 1998, 1999; Griffin
et al., 2000) determines the scaling index as shown in Figure 4. Note the similarity of these last three figures. So, as in the cases of HRV and BRV time series, we again find an erratic
physiological time series to represent a random fractal process (West, 2006b). In the SRV context, the implied clustering indicated by a slope greater than the random dashed line means that the
intervals between strides change in clusters and not in a uniform manner over time. This result suggests that the walker does not smoothly adjust his/her stride from step to step. Rather, there are a
number of steps over which adjustments are made followed by a number of steps over which the changes in stride are completely random. The number of steps in the adjustment process and the number of
steps between adjustment periods are not independent. The results of a substantial number of stride interval experiments support the universality of this interpretation.
FIGURE 4
Figure 4. A fit to logarithm of the aggregated standard deviation versus the logarithm of the aggregated mean of SRV data for a typical walker (West, 2006a) is depicted. The points are calculated
from the data and the solid curve is the best least-square fit to the processed SRV data and yields a fractal dimension D = 1.3 midway between the curve for a regular process (D = 1) and that for an
uncorrelated random process (D = 1.5) as indicated by the dashed curves.
The SRV time series for sixteen healthy adults were downloaded from PhysioNet and the allometric aggregation approach carried out. Each of the curves looked more or less like that in Figure 4, with the experimental curve lying between the indicated regular and random limits (dashed curves). On average the 16 individuals have fractal dimensions for gait in the interval 1.2 ≤ D ≤ 1.3 (West
and Griffin, 2003). The fractal dimension obtained from the analysis of an entirely different dataset, obtained using a completely different protocol, yields consistent results (Jordan et al., 2006).
The narrowness of the interval around the fractal dimension suggests that this quantity may be a good quantitative measure of an individual’s dynamical variability. We suggest the use of the fractal
dimension as a quantitative measure of how well the motor control system is doing in regulating locomotion. Furthermore, excursions outside the narrow interval of fractal dimension values for
apparently healthy individuals may be indicative of hidden pathologies.
It should not go unnoticed that people use essentially the same control system when they are standing still, maintaining balance, as when they are walking. This observation leads one to suspect that the body’s slight movements around its center of mass would have the same statistical behavior as that observed during walking. These tiny movements are called postural sway
in the literature and have been interpreted using random walks (Collins and DeLuca, 1994). It has been determined that postural sway may well be chaotic (Blaszczyk and Klonowski, 2001), so one might
expect that there exists a relatively simple dynamical model for balance regulation that can be used in medical diagnosis. Here again fractal dynamics can be determined from the scaling properties of
postural sway time series and it is determined that a decrease of postural stability is accompanied by an increase of fractal dimension. Consequently, it has been conjectured that the control of
human movement and postural behaviors occurs as a scaling process (Hong et al., 2006).
Control of Variability
The physiological time series processed in the previous section clearly show that the complex phenomena supporting life, although they may appear to be random, do in fact scale in time and therefore
contain information about the underlying dynamic process. This scaling indicates that the fluctuations that occur on multiple time scales are tied together and the way we understand such
interdependency in the physical sciences is through underlying mechanisms that are coupled one to the other. This coupling is typically done through the equations of motion governing the dynamical
description of the process. Unfortunately we generally do not have available such dynamic equations to describe physiologic phenomena. Therefore we must take a more phenomenological approach and
develop mathematical models to explain the patterns in the data based on heuristic reasoning.
The individual mechanisms giving rise to the observed statistical properties in physiological networks are very different, so we do not attempt to find a common source to explain the observed scaling
in walking, breathing, thinking, and the heart beating. On the other hand, the physiological time series in each of these phenomena scale in the same mathematical way; they have 1/f variability, so
that at a certain level of abstraction the separate mechanisms cease to be important and only the relations matter and not those things being related. Consider that traditionally such relations have
been assumed to be linear, and their control was assumed to be in direct proportion to the disturbance through negative feedback. Classical control theory has been the backbone of homeostasis, but it
is not sufficient to describe the full range of variability in HRV, SRV, and BRV time series, and the variability in other physiologic networks, since it cannot explain how the statistics of these
time series become fractal, or how the fractal dimension changes over time (West and Deering, 1994; Ivanov et al., 1998; West et al., 2008).
The issue we address in this section is control of variability. Such control is one of the goals of medicine, in particular, understanding and controlling physiological networks in order to insure
their proper operation. We distinguish between homeostatic control and allometric control; the former is familiar and has a negative feedback character, which is both local and rapid; the latter is a
relatively new concept that can take into account long-time memory (West, 2009). The long-time memory is manifest in correlations that are inverse power law in time, as well as, long-range
interactions in complex phenomena as manifest by inverse power-law distributions in the network variable. Allometric control introduces the fractal character into otherwise featureless random time
series to enhance the robustness of physiological networks. We introduce the fractional calculus as one way to describe the control of physiologic networks (West and Griffin, 2003).
It is not only a new kind of control that is suggested by the scaling of physiologic time series. Scaling also suggests that the historical notion of disease, which has the loss of regularity at its
core, is inadequate for the treatment of dynamical diseases. Instead of loss of regularity, we identify the loss of variability with disease (Goldberger et al., 1990), so that disease not only
changes average measures, such as heart rate, which it does in late stages, but is manifest in changes in HRV at very early stages. Loss of variability implies a loss of physiologic control and this
loss of control is reflected in the changing of the scaling index of the corresponding time series (Mutch and Lefevre, 2003; West and Griffin, 2003), that is, in the change of fractal dimension.
The well-being of the body’s network of networks is measured by the fractal scaling properties of the various dynamic networks and such scaling determines how well the overall harmony is maintained.
Once the perspective that disease is the loss of variability (complexity) has been adopted the strategies presently used for combating disease must be critically examined. For example, recent
experiments (Yu et al., 2005) show a preference in the response of physiologic networks to 1/f signals over that of white noise indicating a sensitivity of physiologic networks to scaling control.
Fractional Random Walks and Scaling
Let us begin the discussion of the dynamics of fractals with a brief review of the formal generation of discrete time series. We define the variable of interest as X[j] where j = 0,1,2,… indexes the
time step. In the simplest random walk model a random step is taken in each increment of time and for convenience we set the time increment to 1. The shift operator B lowers the index by one unit such that

B X[j] = X[j − 1],   (14)

so that a simple random walk can be formally written

(1 − B) X[j] = ξ[j],   (15)

where ξ[j] is +1 or −1 and the choice of values is made by flipping a coin. The solution to the discrete Eq. 15 is given by the position of the walker after N steps, the sum over the sequence of random steps

X[N] = Σ_(j=1)^(N) ξ[j].   (16)
The total number of steps N can be interpreted as the total time t over which the walk unfolds, since we have set the time increment to 1. Note that Eq. 16 is also equivalent to coarse graining a
sequence of discrete measurements by aggregating the data. For N sufficiently large the sum in Eq. 16 can be replaced by an integral and the central limit theorem proves that the statistics of the
dynamic variable X(t) are Gaussian. Consequently such sums of empirical data are often assumed to be Gaussian when closer analysis shows they are not. This is not a contradiction because the real
world often does not satisfy the assumptions necessary for the proofs of mathematical theorems.
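The random walk of Eq. 16 and the central limit theorem argument can be illustrated with a minimal sketch of our own; the step count and ensemble size are arbitrary choices:

```python
import random
import statistics

random.seed(7)

def walk(n):
    """Position of the walker after n coin-flip steps: the sum in Eq. 16."""
    return sum(random.choice((-1, 1)) for _ in range(n))

# An ensemble of independent walks. The endpoint has mean near 0 and
# variance near n, and by the central limit theorem is near-Gaussian.
n, trials = 400, 2000
ends = [walk(n) for _ in range(trials)]
print(statistics.mean(ends), statistics.pvariance(ends))
```

The ensemble variance grows linearly with the number of steps, the hallmark of uncorrelated diffusion; the anomalous scaling discussed next replaces this linear growth with a power law.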
In the simple random walk the steps are statistically independent of one another. The most direct generalization of this model is to make each step dependent on the preceding steps in such a way that
the second moment of the walker displacement is

⟨X(t)^2⟩ = 2D t^(2H).   (17)

The brackets in Eq. 17 denote an average over an ensemble of realizations of the walk, D is the strength of the fluctuations (diffusion coefficient), and when H ≠ 1/2 the underlying process is called
anomalous diffusion in the physics literature (West and Deering, 1994). A value of H < 1/2 is interpreted as an anti-persistent process in which case a random step in one direction is preferentially
followed by a reversal of direction. A value of H > 1/2 is interpreted as a persistent process in which case a random step in one direction is preferentially followed by another step in the same
direction. A value of H = 1/2 is interpreted as the random walk model of classical diffusion in which case the steps are statistically independent of one another (West, 1999).
One way of introducing long-term memory into a random walk model is by means of fractional differences. Following Hosking (1982) we define a fractional difference process as

(1 − B)^α X[j] = ξ[j],   (18)

where the exponent α is not an integer. As it stands Eq. 18 is just a formal definition without physiologic content. To make this equation usable we must determine how to represent the operator (1 − B)^α acting on X[j], as reviewed by West (1999), to obtain the formal solution

X[j] = Σ_(k=0)^(∞) [Γ(k + α)/(Γ(k + 1)Γ(α))] ξ[j − k].   (19)
A formulation of this process in terms of fractional autoregressive integrated moving average models (FARIMA) applied to temporal physiologic signals yields similar results (Eke et al., 2002). The
solution to the fractional random walk is clearly dependent on fluctuations that have occurred in the remote past; note the time lag k in the index on the fluctuations in Eq. 19 and the fact that it
can be arbitrarily large. The extent of the influence of these distant fluctuations on the present time network response is determined by the relative size of the coefficients in the series. Using
Stirling’s approximation on the gamma functions determines the size of the coefficients in Eq. 19 as the fluctuations recede into the past, that is, as k → ∞,

Γ(k + α)/(Γ(k + 1)Γ(α)) ≈ k^(α−1)/Γ(α),   (20)

since k >> α. Thus, the strength of the contributions to Eq. 19 decreases with increasing time lag as an inverse power law in the time lag as long as α < 1. The spectrum of the time series (Eq. 19) is obtained in the low-frequency limit to be (West, 1999)

S(f) ∝ 1/f^(2α),   (21)

where, unlike the white noise spectrum that is flat, the fractal walk spectrum is inverse power law.
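The inverse power-law decay of the memory can be checked numerically. This short sketch of ours, with an arbitrary illustrative α, generates the weights Γ(k + α)/(Γ(k + 1)Γ(α)) of the formal solution Eq. 19 by a stable recursion and compares them with the Stirling asymptote k^(α−1)/Γ(α) of Eq. 20:

```python
import math

alpha = 0.25  # illustrative fractional-difference exponent

# Weights of the formal solution Eq. 19, w[k] = Gamma(k+alpha)/(Gamma(k+1)*Gamma(alpha)),
# generated by the recursion w[k] = w[k-1]*(k-1+alpha)/k with w[0] = 1.
w = [1.0]
for k in range(1, 5001):
    w.append(w[-1] * (k - 1 + alpha) / k)

# Stirling asymptote, Eq. 20: w[k] -> k**(alpha-1)/Gamma(alpha) for large k.
k = 5000
asym = k ** (alpha - 1) / math.gamma(alpha)
print(w[k], asym)  # the two agree to better than 0.1% at this lag
```

The weights never cut off; every fluctuation in the remote past contributes, which is precisely the long-time memory discussed above.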
Thus, since the fractional-difference dynamics are linear, the network response is Gaussian, and from these analytic results we conclude that X[j] is analogous to fractional Gaussian noise. The analogy is complete if we set α = H − 1/2 so that the spectrum (Eq. 21) can be expressed as

S(f) ∝ 1/f^(2H−1).   (22)

Taking the inverse discrete Fourier transform of the exact expression for the spectrum yields the correlation coefficient (West, 1999)

r_k ∝ k^(2H−2)   (23)

as the lag time k increases without limit. It is clear that for the power-law index in the interval 1/2 ≤ H ≤ 1, both the spectrum and the correlation coefficient are inverse power law.
The probability density function (pdf) for the fractional-difference diffusion process in the continuum limit satisfies the scaling condition

P(x, t) = (1/t^δ) F(x/t^δ),   (24)

where δ = H = α + 1/2. The manifestation of complexity is indicated by two distinct quantities. The first indicator of complexity is the scaling parameter δ departing from the familiar value δ = 0.5,
which it would have for a simple diffusion process. But for fractional diffusive motion considered here the value of the scaling index can be quite different. A second indicator of complexity is the
function F(·) in Eq. 24 departing from the conventional Gaussian form, although in the argument presented so far it does not.
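A worked special case, standard material rather than anything drawn from the data above, makes the scaling condition concrete: the Gaussian solution of classical diffusion already has the form of Eq. 24 with δ = 1/2 and a Gaussian F,

```latex
P(x,t) = \frac{1}{\sqrt{4\pi D t}}\, e^{-x^{2}/4Dt}
       = \frac{1}{t^{1/2}}\, F\!\left(\frac{x}{t^{1/2}}\right),
\qquad
F(y) = \frac{e^{-y^{2}/4D}}{\sqrt{4\pi D}},
\qquad \delta = \tfrac{1}{2}.
```

Complexity is then signaled either by δ departing from 1/2 or by F departing from this Gaussian form.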
The scaling index δ is usually determined by calculating the second moment of a time series. This method of analysis is reasonable only when F(y) has the Gaussian form, or some other distribution
with a finite second moment, that is, the process is a member of the FVSM class. If the scaling condition (Eq. 24) is realized it is convenient to measure the scaling parameter δ by the method of
diffusion entropy analysis (DEA, Scafetta and Grigolini, 2002) that, in principle, works independently of whether the second moment is finite or not. The DEA method affords many advantages, including
that of being totally independent of a constant bias.
Fractional Rates
Fractal functions often describe complex phenomena characterized by fractal time series. Such functions are known to have divergent integer-valued derivatives, and consequently traditional control
theory, involving integer-valued differentials and integrals, cannot be used to determine feedback in fractal phenomena. However a fractional operator of order α acting on a fractal function of
fractal dimension D yields a new fractal function with fractal dimension D + α, where α > 0 for a derivative and α < 0 for an integral. Therefore it seems reasonable that one strategy for modeling
the dynamics and control of complex physiologic phenomena is through the application of the fractional calculus (West, 2009). The fractional calculus has been used to model the interdependence,
organization and concinnity of complex phenomena ranging from the vestibulo-oculomotor system, to the electrical impedance of biological tissue to the biomechanical behavior of physiologic organs,
see, for example Magin (2006) for an excellent review of these applications and many more. Such descriptions can also be obtained from the continuum limit of the fractional difference equations of
the previous section.
We can relate the allometric aggregation approach to this recently developed branch of control theory involving the fractional calculus. The generalization of control theory to include fractional
operators enables the designer to take into account memory and hereditary properties that are traditionally neglected in integer-order control theory (Podlubny, 1999), such as in traditional
homeostasis. A fractional time integral is defined (West et al., 2003a; West, 2006b)

D_t^(−α)[X(t)] ≡ (1/Γ(α)) ∫_0^t X(t′)(t − t′)^(α−1) dt′,   (25)

and the corresponding fractional time derivative is defined

D_t^(α)[X(t)] ≡ (d^(n+1)/dt^(n+1)) D_t^(−(n+1−α))[X(t)],   (26)

where [α] + 1 ≥ n ≥ [α] and the bracket denotes the integer part of α. Consequently, for α < 1 we have n = 0, so that Eq. 25 is the Riemann–Liouville (RL) formula for the fractional integral operator when α > 0 and Eq. 26 is the corresponding RL-fractional-differential operator.
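To make the operator in Eq. 25 concrete, here is a short numerical sketch of our own; the test function, α, and step count are illustrative. It approximates the RL fractional integral by a midpoint Riemann sum and checks it against the known result D_t^(−α)[1] = t^α/Γ(1 + α):

```python
import math

def rl_fractional_integral(f, t, alpha, n=20000):
    """Midpoint Riemann-sum approximation of the RL fractional integral, Eq. 25:
    (1/Gamma(alpha)) * integral_0^t f(u) (t-u)**(alpha-1) du. Midpoints keep
    the weakly singular kernel finite at u = t."""
    h = t / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += f(u) * (t - u) ** (alpha - 1) * h
    return total / math.gamma(alpha)

# Check against the exact result for f = 1: D_t^{-alpha}[1] = t**alpha/Gamma(1+alpha).
alpha, t = 0.5, 2.0
approx = rl_fractional_integral(lambda u: 1.0, t, alpha)
exact = t ** alpha / math.gamma(1 + alpha)
print(approx, exact)
```

The power-law kernel weights the entire history of f, which is how the fractional operator builds the memory and hereditary properties mentioned above into the dynamics.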
Fractional Langevin equation
Of course, the fractional calculus does not in itself constitute a physical/biological theory, but requires such a theory in order to interpret the fractional derivatives and integrals in terms of
physical/biological phenomena (West et al., 2003a). For example how is the negative feedback, so central to homeostasis, included in the fractional calculus modeling? The generalization of a
relaxation equation to fractional form is given by (Nonnenmacher and Metzler, 1995)

D_t^(α)[X(t)] = −λ^α X(t) + (t^(−α)/Γ(1 − α)) X(0),   (27)

and the initial value becomes an inhomogeneous term in this fractional relaxation equation of motion. Note that the dissipation parameter λ is positive definite and λ^α has the same units as the
fractional derivative. Equations of the form (Eq. 27) are mathematically well defined, and strategies for solving such equations have been developed by a number of investigators, particularly the
book by Miller and Ross (1993) that is devoted almost exclusively to solving such equations when the index is rational. Here we allow α to be irrational and consider the Laplace transform of Eq. 27
to obtain

X̂(s) = [s^(α−1)/(s^α + λ^α)] X(0),   (28)
whose inverse Laplace transform is the solution to the fractional-differential equation. Nonnenmacher and Metzler (1995) inverted the Laplace transform in Eq. 28 using Fox functions. The solution to
the initial value problem for the fractional relaxation equation is given by the series for the standard Mittag-Leffler function (MLF)

X(t) = X(0) E_α(−(λt)^α),  E_α(z) ≡ Σ_(k=0)^(∞) z^k/Γ(kα + 1),   (29)

which in the limit α → 1 yields the exponential function

X(t) = X(0) e^(−λt),

as it should, since under this condition Eq. 27 reduces to the usual relaxation rate equation. Note that in this limit the initial value term on the rhs of Eq. 27 vanishes because the gamma function of zero diverges.
The MLF has interesting properties in both the short-time and the long-time limits. In the short-time limit it yields the Kohlrausch–Williams–Watts Law from stress relaxation in rheology (West et
al., 2003a) given by

X(t) ≈ X(0) exp[−(λt)^α/Γ(1 + α)],   (30)

also known as the stretched exponential. In the long-time limit it yields the inverse power law, known as the Nutting Law (West et al., 2003a),

X(t) ≈ X(0) (λt)^(−α)/Γ(1 − α),   (31)
clearly an inverse power law in time. Figure 5 displays the MLF as well as its two asymptotes, the dashed curve being the stretched exponential and the dotted curve the inverse power law. What is
apparent from this figure is that the long-time memory associated with fractional relaxation processes is inverse power law rather than being the exponential of ordinary relaxation. The MLF smoothly
joins these two empirically determined asymptotic distributions.
FIGURE 5
Figure 5. The solid curve is the MLF, the solution to the fractional relaxation equation (Eq. 29). The dashed curve (Eq. 30) is the stretched exponential (Kohlrausch–Williams–Watts Law) and the
dotted curve (Eq. 31) is the inverse power law (Nutting Law).
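The MLF and its asymptotes in Figure 5 are easy to evaluate from the series in Eq. 29. The sketch below is ours; the truncation length and parameter values are arbitrary. It checks the α → 1 exponential limit and the short-time agreement with the stretched exponential of Eq. 30:

```python
import math

def mittag_leffler(z, alpha, terms=120):
    """Truncated series for the standard MLF of Eq. 29:
    E_alpha(z) = sum_k z**k / Gamma(k*alpha + 1).
    Adequate for moderate |z|; the series is delicate for large arguments."""
    return sum(z ** k / math.gamma(k * alpha + 1.0) for k in range(terms))

lam = 1.0  # illustrative relaxation rate

# alpha -> 1 recovers ordinary exponential relaxation, E_1(-t) = exp(-t).
print(mittag_leffler(-lam * 1.0, 1.0), math.exp(-lam * 1.0))

# Short-time limit: the MLF tracks the stretched exponential of Eq. 30.
alpha, t = 0.5, 0.01
mlf = mittag_leffler(-(lam * t) ** alpha, alpha)
kww = math.exp(-((lam * t) ** alpha) / math.gamma(1 + alpha))
print(mlf, kww)
```

At long times the same series crosses over to the inverse power law of Eq. 31, which is the interpolation displayed in Figure 5.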
We can now generalize the fractional-differential equation to include a random force ξ(t) and in this way obtain a fractional stochastic differential equation, such as we did in the last section. In
physics nomenclature such a fractional stochastic differential equation is called a fractional Langevin equation (West et al., 2003a)

D_t^(α)[X(t)] = −λ^α X(t) + (t^(−α)/Γ(1 − α)) X(0) + ξ(t).   (32)
The average response of the network is given by the fractional relaxation equation for a random force that is zero-centered, which is to say, by averaging over Eq. 32 we obtain Eq. 27 for the average
network response. The solution to Eq. 32 is obtained using Laplace transforms as done previously

X̂(s) = [s^(α−1)/(s^α + λ^α)] X(0) + [1/(s^α + λ^α)] ξ̂(s).   (33)
Note the difference in the s-dependence of the two coefficients of the rhs of Eq. 33. The inverse Laplace transform of the first term yields the MLF as found for the homogeneous fractional relaxation
equation, whereas the inverse Laplace transform of the second term is the convolution of the random force and a stationary kernel. The stationary kernel is given by the series (West et al., 2003a)

K(t) = t^(α−1) E_(α,α)(−(λt)^α),  E_(α,β)(z) ≡ Σ_(k=0)^(∞) z^k/Γ(kα + β),   (34)

which is a generalized MLF. The function defined by Eq. 34 reduces to the usual MLF when β = 1, so that both the homogeneous and inhomogeneous terms in the solution to the fractional Langevin
equation can be expressed in terms of these series.
The explicit inverse of Eq. 33 yields the solution (West et al., 2003a)

X(t) = X(0) E_α(−(λt)^α) + ∫_0^t (t − t′)^(α−1) E_(α,α)(−[λ(t − t′)]^α) ξ(t′) dt′.   (35)

In the case α = 1, the MLF becomes the exponential, so that the solution to the fractional Langevin equation reduces to that for an Ornstein–Uhlenbeck process

X(t) = X(0) e^(−λt) + ∫_0^t e^(−λ(t − t′)) ξ(t′) dt′,
as it should. The analysis of the autocorrelation function of Eq. 35 can be quite daunting and so we do not pursue it further here, but refer the reader to the literature (Kobelev and Romanov, 2000;
West et al., 2003a). However it is useful to point out that Eq. 35 is the kind of formal expression that is necessary to investigate when the physiologic phenomenon is not stationary.
Monofractal solutions
A somewhat simpler problem than Eq. 32 is the fractional Langevin equation without dissipation, that is, the solution to the fractional-dynamic stochastic equation with λ = 0. The solution to this equation expressed in terms of the fractional integral is

X(t) = X(0) + (1/Γ(α)) ∫_0^t (t − t′)^(α−1) ξ(t′) dt′,   (36)

and the kernel can also be interpreted as a filter. Here we see that if the stochastic driver has fractal Gaussian statistics it scales as

ξ(γt) = γ^h ξ(t),   (37)

which for a Wiener process would have h = 1/2, but for a more general fractal statistical process 1 ≥ h > 0. This property can be used to express the scaled-time solution to the fractional-dynamical equation as

X(γt) = γ^(α+h) X(t),   (38)

which given its linear form also has Gaussian statistics. Using the strategy of writing the scaling parameter as γ = 1/t we can express the solution (Eq. 38) in the scaling form

X(t) = t^(α+h) X(1),   (39)

so that the second moment can be expressed as

⟨X(t)^2⟩ = 2D t^(2(α+h)).   (40)
The time-dependence of the second moment (Eq. 40) agrees with that obtained for anomalous diffusion when we identify H = α + h. If the stochastic force is that of classical diffusion, that is, h = 1/2, and 1 ≥ H > 0, then the interval of values for the fractional operator in Eq. 36 is −1/2 ≤ α ≤ 1/2. Consequently the process described by the dissipation-free fractional Langevin equation can cover the full range of values 1 ≥ H > 0.
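The scaling claim in Eq. 40 can be checked numerically. The sketch below is an illustration, not the author's computation: it assumes a dissipation-free solution of convolution form with an illustrative power-law kernel K(u) = u^α and a white-noise driver (so h = 1/2). Since variances of independent increments add, Var Y(t) is the cumulative sum of squared kernel weights, and its log-log slope recovers 2H with H ≈ α + h.

```python
import numpy as np

# Illustrative check of Eq. 40: Var Y(t) ~ t^(2H) with H = alpha + h.
# Assumes a power-law kernel K(u) = u^alpha and a white-noise driver (h = 1/2);
# the kernel normalization is irrelevant to the scaling exponent.
alpha = 0.2
n, dt = 400, 1.0
t = np.arange(1, n + 1) * dt
kern = t ** alpha                       # K evaluated on the grid

# For independent Gaussian increments, variances add:
# Var Y(t_k) = dt * sum_{j <= k} K(t_j)^2
var = dt * np.cumsum(kern ** 2)

# Fit the log-log slope, discarding the first points where discretization bites
slope = np.polyfit(np.log(t[20:]), np.log(var[20:]), 1)[0]
H_est = slope / 2
print(H_est)                            # close to alpha + 0.5 = 0.7
```

Changing `alpha` within the admissible range −1/2 ≤ α ≤ 1/2 sweeps the estimated H across (0, 1], in line with the statement above.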
The interval 1/2 ≥ H > 0 has in the past been interpreted in terms of an anti-persistent random walk. An anti-persistent explanation of time series was given by Peng et al. (1993) for the
differences in time intervals between heart beats. They interpreted their time series, as did a number of subsequent investigators, in terms of random walks with H < 1/2. In this model the
anti-persistent behavior led to an avoidance of the extremes, so that the time intervals became neither too large nor too small. However, we can see from Eq. 40 that the fractional Langevin equation
without dissipation is an equivalent description of the underlying dynamics. The scaling behavior alone cannot distinguish between these two models; what is needed is the complete statistical distribution, not just the time-dependence (scaling behavior) of the central moments.
There are a number of ways to test the interpretation of the scaling behavior observed in Eq. 40. Podlubny (1999) showed that if reality has the dynamics of a fractional-differential equation, then
attempting to control it with an integer-order feedback leads to extremely slow convergence, if not divergence, of the network output. On the other hand, a fractional-order feedback, with the indices
appropriately chosen, leads to rapid convergence of output to the desired signal. Thus, we anticipate that dynamic physiologic networks with scaling properties, because they can be described by
fractional dynamics, would have fractional-differential, that is to say allometric, controls (West, 2009).
Multifractal solutions
The solution to the fractional Langevin equation (Eq. 37) is monofractal if the fluctuations are monofractal, which is to say, the time series given by the trajectory Y(t) is a fractal random process
if the random force is a fractal random process. However, the model presented is not adequate as it stands for describing multifractal statistical processes. A number of investigators have recently
developed multifractal random walk models to account for the multiple fractal character of various physiological phenomena and here we introduce a variant of those discussions based on the fractional
calculus. The most recent generalization of the Langevin equation incorporates memory into the network’s dynamics and has the simple form of Eq. 33 with the dissipation parameter set to 0. Equation
37 could also be obtained from the construction of a fractional Langevin equation by Lutz (2001) for a free particle coupled to a fractal heat bath, when the inertial term is negligible. The
analysis of the previous section provides us with Eq. 40 as the starting point for the present discussion.
One way to make the solution to the fractional Langevin equation a multifractal is to assume that the parameter η = 1 − α in the kernel of Eq. 36 is a random variable. To construct the traditional
measures of multifractal stochastic processes we calculate the q^th moment of the solution (Eq. 40) by averaging over both the random force ξ(t) and the random parameter η to obtain
The scaling relation in Eq. 41 determines the q^th order structure function exponent ρ(q). Note that if ρ(q) is linear in q the underlying process is monofractal, whereas, when it is nonlinear in q
the process is multifractal. We can relate the structure function to the mass exponent (Rajagopalan and Tarboton, 1993)
Consequently we have that ρ(0) = h so that τ(0) = 2 − h, as it should because of the well known relation between the fractal dimension and the global Hurst exponent D[0] = 2 − H.
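For a Gaussian self-similar process the linearity of ρ(q) can be seen directly: if Y(t) is Gaussian with variance t^(2H), then ⟨|Y(t)|^q⟩ = m_q t^(qH), with m_q the q-th absolute moment of a standard normal, so the structure-function exponent is ρ(q) = qH. A small sketch with an assumed illustrative value of H:

```python
import math

H = 0.7                      # illustrative global Hurst exponent

def abs_moment(q):
    # E|Z|^q for a standard normal Z
    return 2 ** (q / 2) * math.gamma((q + 1) / 2) / math.sqrt(math.pi)

def structure_exponent(q, t1=10.0, t2=1000.0):
    # For Gaussian Y(t) with Var = t^(2H): <|Y(t)|^q> = m_q * t^(qH),
    # so the log-log slope between two times recovers rho(q) = q*H exactly.
    m1 = abs_moment(q) * t1 ** (q * H)
    m2 = abs_moment(q) * t2 ** (q * H)
    return (math.log(m2) - math.log(m1)) / (math.log(t2) - math.log(t1))

rhos = [structure_exponent(q) for q in (1, 2, 3, 4)]
# rho(q) = q*H: [0.7, 1.4, 2.1, 2.8], linear in q (monofractal)
```

Any nonlinearity of ρ(q) in q would therefore have to come from non-Gaussian statistics or from randomness in the scaling exponent itself, which is the route the text takes by randomizing η.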
A monofractal time series is characterized by a single fractal dimension. In general, time series have a local Hölder exponent h that varies over the course of the trajectory and is related to the
fractal dimension by D = 2 − h (Falconer, 1990). Note that for an infinitely long time series the Hölder exponent h and the Hurst exponent H are identical; however, for a time series of finite length
they need not be the same. We stress that the fractal dimension and the Hölder exponent are local quantities, whereas the Hurst exponent is a global quantity, consequently the relation D = 2 − H is
only true for an infinitely long time series. The function f(h), called the multifractal or singularity spectrum, describes how the local Hölder (fractal) exponents contribute to such time series.
Here h and f are independent variables, as are q and τ. The general formalism of Legendre transform pairs interrelates these two sets of variables by the relation (Feder, 1988),
The local Hölder exponent h varies with the q-dependent mass exponent through the equality
so the singularity spectrum can be written as
where the mass exponent τ(q) and its derivative are determined by data or from theory as in Eq. 42.
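The Legendre machinery of Eqs 43-45 can be made concrete with a hypothetical mass exponent. The quadratic form below is an assumption chosen only for illustration (its normalization τ(0) = −1 corresponds to a support of dimension one, whereas the text works with 2 − h); the mechanics of the transform, h = dτ/dq followed by f = qh − τ(q), are the point.

```python
import numpy as np

H, c = 0.8, 0.2                       # hypothetical parameters

def tau(q):
    # assumed quadratic mass exponent; nonlinear in q => multifractal
    return q * H - 0.5 * c * q * (q - 1) - 1

q = np.linspace(-3, 3, 601)
h = np.gradient(tau(q), q[1] - q[0])  # Eq. 44: h(q) = d tau / d q
f = q * h - tau(q)                    # Eq. 45: Legendre transform
# f is a concave parabola in h; its maximum is -tau(0) = 1, attained at q = 0.
```

Setting c = 0 makes τ linear in q, in which case h collapses onto the single value H and the spectrum degenerates to a point: the monofractal limit.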
To determine the mass exponent in Eq. 45 we assume that the statistics of the parameter η are generated by a stable Lévy process with index β; the structure function exponent can then be shown to be (Feder, 1988)
Therefore the solution to the fractional Langevin equation corresponds to a monofractal process only in the case β = 1 and q > 0, otherwise the process is multifractal. We restrict the remaining
discussion to positive moments.
Thus, we observe that when the exponent in the memory kernel in the fractional Langevin equation is random, the solution consists of the product of two random quantities giving rise to a multifractal
process. We apply this approach to the SRV time series data previously discussed and observe, for the statistics of the multiplicative exponent given by Lévy statistics, the singularity spectrum as a
function of the positive moments shown by the points in Figure 6. The solid curve in this figure is obtained from the analytic form of the singularity spectrum
which is determined by substituting Eq. 46 into the equation for the singularity spectrum (Eq. 45), through the relationship between exponents (Eq. 42). It is clear from Figure 6 that the data are
well fit by the solution to the fractional Langevin equation with the parameter values β = 1.45 and b = 0.1, obtained through a mean-square fit of Eq. 47 to the SRV time series data.
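The mean-square fit referred to above can be sketched generically. Everything below is a placeholder (the points, the vertex, the noise level), not the actual SRV fit: the aim is only to show that least-squares fitting a concave quadratic to estimated (h, f(h)) points returns a peak position (the most probable Hölder exponent), a peak height (the fractal dimension), and a curvature (the spectral width).

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder 'measured' spectrum points: a parabola with vertex (0.9, 1.0)
# plus small noise, standing in for estimates from a real time series.
h = np.linspace(0.6, 1.2, 13)
f_true = 1.0 - 8.0 * (h - 0.9) ** 2
f_obs = f_true + 0.01 * rng.standard_normal(h.size)

# Least-squares quadratic fit, then read off vertex and curvature
a2, a1, a0 = np.polyfit(h, f_obs, 2)
h_star = -a1 / (2 * a2)              # most probable Hölder exponent
f_star = a0 - a1 ** 2 / (4 * a2)     # spectrum maximum (fractal dimension)
print(h_star, f_star)                # near 0.9 and 1.0
```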
Figure 6. The singularity spectrum for q > 0 obtained through the numerical fit to the human gait data. The curve is the average over the ten data sets obtained in the experiment (Peng et al., 1993).
The nonlinear form of the mass exponent obtained from the fit in Figure 6 is evidence that the inter-stride interval time series are multifractal. This analysis is further supported by the fact that
the maxima of the singularity spectra coincide with the fractal dimensions determined from the scaling properties of the time series via the allometric aggregation approach.
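The allometric aggregation approach mentioned here can be sketched as follows, on a synthetic uncorrelated series rather than the gait data: the series is summed over blocks of increasing size m, and the slope of log variance versus log mean across aggregation levels estimates the scaling, with slope 1 for uncorrelated data and 2H in general for fractal noise.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=2**17)   # uncorrelated surrogate series

ms, means, variances = [1, 2, 4, 8, 16, 32, 64, 128], [], []
for m in ms:
    # aggregate: sum non-overlapping blocks of length m
    blocks = x[: (x.size // m) * m].reshape(-1, m).sum(axis=1)
    means.append(blocks.mean())
    variances.append(blocks.var())

# Allometric plot: slope of log variance vs log mean across aggregation levels
b = np.polyfit(np.log(means), np.log(variances), 1)[0]
print(b)   # ~1 for an uncorrelated series; 2H in general for fractal noise
```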
Of course, different physiologic processes generate different fractal time series, because the long-time memory of the underlying dynamical processes can be quite different. Physiological signals,
such as cerebral blood flow (CBF), are typically generated by complex self-regulatory systems that handle inputs with a broad range of characteristics. Ivanov et al. (1999) established that healthy
human heartbeat intervals, rather than being fractal, exhibit multifractal properties and uncovered the loss of multifractality for a life-threatening condition of congestive heart failure. West et
al. (2003b) similarly determined that CBF in healthy humans is also multifractal and this multifractality is severely narrowed for people who suffer from migraines.
Migraine headaches have been the bane of humanity for centuries, afflicting such notables as Caesar, Pascal, Kant, Beethoven, Chopin, and Napoleon. However, their etiology and pathomechanism have to date not been satisfactorily explained. It was demonstrated (West et al., 2003b) that the characteristics of CBF time series differ significantly between normal healthy individuals and migraineurs.
Transcranial Doppler ultrasonography (TCD) enables high-resolution measurement of middle cerebral artery blood flow velocity. Like the HRV, SRV, and BRV time series data, the time series of CBF
velocity consists of a sequence of waveforms. These waveforms are influenced by a complex feedback system involving a number of variables, such as arterial pressure, cerebral vascular resistance,
plasma viscosity, arterial oxygen, and carbon dioxide content, as well as other factors. Even though the TCD technique does not allow us to directly determine CBF values, it helps clarify the nature
and role of vascular abnormalities associated with migraine.
The dynamical aspects of CBF regulation were recognized by Zhang et al. (1999). Rossitti and Stephensen (1994) used the relative dispersion, the ratio of the standard deviation to mean, of the middle
cerebral artery flow velocity time series to reveal its fractal nature, a technique closely related to the allometric aggregation approach. West et al. (1999b) extended this line of research by
taking into account the more general properties of fractal time series, showing that the beat-to-beat variability in the flow velocity has a long-time memory and is persistent with the average
scaling exponent 0.85 ± 0.04, a value consistent with that found earlier for HRV time series. They also observed that CBF was multifractal in nature.
In Figure 7 we compare the multifractal spectrum for middle cerebral artery blood flow velocity time series for a healthy group of five subjects and a group of eight migraineurs (West et al., 2003b).
A significant change in the multifractal properties of the blood flow time series is apparent: the interval of the multifractal distribution of the local scaling exponent is greatly constricted. This is reflected in the small width of the multifractal spectrum for the migraineurs, 0.013, almost three times smaller than the width for the control group, 0.038. For both migraineurs with and without aura the distributions are centered at 0.81, the same as that of the control group, so the average scaling behavior would appear to be the same.
Figure 7. The average multifractal spectrum for middle CBF time series is depicted by f(h). (A) The spectrum is the average of ten time series measurements from five healthy subjects (filled
circles). The solid curve is the best least-squares fit of the parameters to the predicted spectrum using Eq. 48. (B) The spectrum is the average of 14 time series measurements of eight migraineurs
(filled circles). The solid curve is the best least-squares fit to the predicted spectrum using Eq. 48.
However, the contraction of the spectrum for migraineurs suggests that the underlying process has lost its flexibility. The biological advantage of multifractal processes is that they are highly
adaptive, so that in this case the brain of a healthy individual adapts to the multifractality of the inter-beat interval time series. Here again we see that disease, in this case migraine, may be
associated with the loss of complexity and consequently the loss of adaptability, thereby suppressing the normal multifractality of CBF time series. Thus, the reduction in the width of the
multifractal spectrum is the result of excessive dampening of the CBF fluctuations and is the manifestation of the significant loss of adaptability and overall hyperexcitability of the underlying
regulation system. West et al. (2003b) emphasize that hyperexcitability of the CBF control system seems to be physiologically consistent with the reduced activation level of cortical neurons observed
in some transcranial magnetic stimulation and evoked potential studies.
Regulation of CBF is a complex dynamical process: flow remains relatively constant over a wide range of perfusion pressures via a variety of feedback control mechanisms, such as metabolic, myogenic, and neurally mediated changes in cerebrovascular impedance in response to changes in perfusion pressure. The contribution to the overall CBF regulation by different areas of the brain is modeled by the statistics of the fractional derivative parameter, which determines the multifractal nature of the time series. The source of this multifractality is over and above that produced by the cardiovascular system.
The multifractal nature of CBF time series is here modeled using a fractional Langevin model. We again implement the scaling properties of the random force and the memory kernel to obtain Eq. 41 as
the scaling of the solution to the fractional Langevin equation. Here when we calculate the q^th moment of the solution we assume Gaussian, rather than the more general Lévy statistics. Consequently
we obtain the quadratic function for the singularity spectrum
which can be obtained from Eq. 47 by setting β = 2. Another way to express Eq. 48 is
where we have used the fact that the fractal dimension is given by 2 − H, which is the value of the function at h = H.
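The peak property stated here is easy to verify: at h = H the quadratic spectrum attains its maximum value 2 − H, the fractal dimension. The width parameter below is an assumed placeholder rather than a fitted value, and the quadratic form is written generically, consistent with the statement above rather than copied from Eq. 49.

```python
import numpy as np

H, w = 0.81, 0.02          # w: assumed width parameter (placeholder)

def f(h):
    # quadratic singularity spectrum (Gaussian case, beta = 2):
    # peak value 2 - H at h = H; w sets the parabola's width
    return (2.0 - H) - (h - H) ** 2 / (2.0 * w)

h = np.linspace(H - 0.2, H + 0.2, 401)
peak_h = h[np.argmax(f(h))]
print(peak_h, f(peak_h))    # 0.81 and 2 - 0.81 = 1.19
```

Narrowing w toward zero squeezes the spectrum onto the single point (H, 2 − H), which is exactly the constriction toward monofractal behavior described for the migraineurs.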
It seems that the changes in the cerebral autoregulation associated with migraine can strongly modify the multifractality of middle cerebral artery blood flow. The constriction of the multifractal to
monofractal behavior of the blood flow depends on the statistics of the fractional derivative index. As the distribution of this parameter narrows down to a delta function, the nonlocal influence of
the mechanoreceptor constriction disappears. On the other hand, the cerebral autoregulation does not modify the monofractal properties characterized by the single global Hurst exponent, presumably
that produced by the cardiovascular system.
Conclusions and Summary
We now draw a number of conclusions. First of all, physiologic time series are often erratic and have scaling properties. The second moment scales algebraically in time, the autocorrelation function is an inverse power law in time, and the power spectrum is an inverse power law in frequency. The power-law nature of these second-order measures is the signature
of fractal random processes. So we surmise that HRV is a fractal random point process, as are SRV and BRV, among dozens of other complex physiologic phenomena. Consequently, the dynamics of
traditional stochastic processes described by differential equations for the dynamic variables, or in phase space for the probability densities, are not sufficient to describe the properties of
complex physiologic networks. The fractional calculus can describe at least one class of complex phenomena for which other, more traditional, methods do not suffice. As mentioned the fractional
calculus has been used to model the interlinking of elements and harmony of complex phenomena ranging from the electrical impedance of biological tissue to the biomechanical behavior of physiologic
organs; see, for example, Magin (2006) for an excellent review of such applications.
The empirical evidence supports the interpretation that physiologic time series are described by fractal stochastic networks. Furthermore, the fractal nature of these time series is not constant but
may change with the vagaries of the interaction of the network with its environment and internal dynamics; therefore, physiologic phenomena are often weakly multifractal. The scaling index or fractal
dimension marks a physiologic network’s response and can be used as an indicator of the state of health.
We reiterate that controlling physiological networks in order to ensure their proper operation is one of the goals of medicine. We have emphasized the difference between homeostatic and allometric
control. Homeostatic control is familiar and has as its basis a negative feedback character, which is both local and relatively fast. Allometric control, on the other hand, can take into account
long-time memory, correlations that are inverse power law in time, as well as long-range interactions in complex phenomena as manifest by inverse power-law distributions in network variables. An
allometric control network achieves its purpose through scaling, enabling a complex network such as one performing physiologic regulation to be adaptive and accomplish concinnity of its many
interacting subnetworks. Allometric control is a generalization of the idea of feedback regulation implicit in homeostasis. The basic notion is to take part of the network’s output and feed it back
into the input, thus making the network self-regulating by minimizing the difference between the input and the sampled output. More complex networks, such as autoregulation of the heartbeat
variation, human gait variability, and cognition have more intricate feedback arrangements. In particular, because each sensor responds to its own characteristic set of frequencies, the feedback
control must carry signals appropriate to each of the interacting subnetworks. The coordination of the individual responses of the separate subnetworks is manifest in the scaling of the time series
in the output and the separate subnetworks select that aspect of the feedback to which they are the most sensitive. In this way an allometric control network not only regulates, but also adapts to
changing environmental and biophysical conditions.
It is not merely a new kind of control that is suggested by the scaling of physiologic time series. Scaling also implies that the historical notion of disease, which has the loss of regularity at its
core, is inadequate for the treatment of dynamical diseases. Instead of loss of regularity, the loss of variability is identified with disease, so that a disease not only changes an average measure,
such as heart rate or breathing rate, but is manifest in changes in variability at very early stages. Loss of variability implies a loss of physiologic control, and this loss of control is reflected
in the change of fractal dimension, that is, in the scaling index of the corresponding time series. The change in fractal dimension with age and with disease suggested the new definition of disease
as a loss of complexity, rather than the loss of regularity (Goldberger et al., 1990; West, 1990, 2009; Van Orden et al., 2005). However, this new definition has not been universally embraced (Shiau, 2008).
The well-being of the body’s network of networks is measured by the fractal scaling properties of the various dynamic networks, and such scaling determines how well the overall harmony is maintained.
Once the perspective that disease is the loss of complexity has been adopted, the strategies presently used in combating disease must be critically examined. Life-support equipment is one such
strategy, but the tradition of such life-support is to supply blood at the average rate of the beating heart, to ventilate the lungs at their average rate, and so on. So how does the new perspective
regarding disease influence the traditional approaches to assisting the healing of the body?
Alan Mutch applied the lessons of fractal physiology to point out that blood flow and ventilation are delivered in a fractal manner in both space and time in a healthy body.
However, he argues, during critical illness, conventional life-support devices deliver respiratory gases by mechanical ventilation or blood by cardiopulmonary bypass pump in a monotonously periodic
fashion. This periodic driving overrides the natural aperiodic operation of the body. Mutch speculates that these devices result in the loss of normal fractal transmission and, consequently, life
support winds up doing more damage the longer it is required and becomes more problematic the sicker the patient (Mutch et al., 2000). In this perspective, the loss of complexity is the loss of the
body as a cohesive whole; the body can be reduced to a disconnected set of organ systems.
One of the traditional views of disease is what Tim Buchman calls the “fix-the-number” imperative (Buchman, 2006). He argues that if the bicarbonate level is low, then give bicarbonate; if the urine
output is low, then administer a diuretic; if the bleeding patient has a sinking blood pressure, then make the blood pressure normal. He goes on to say that such interventions are commonly
ineffective and even harmful. For example, sepsis, which is a common predecessor of multiple organ dysfunction syndrome (MODS), is often accompanied by hypocalcemia; in controlled experimental
conditions, administering calcium to normalize the laboratory value increases mortality. Consequently, one’s first choice of options, based on an assumed simple linear homeostatic relationship
between input and output, is probably wrong and a more circumspect intervention based on a fractal perspective is warranted.
Conflict of Interest Statement
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Altemeier, W. A., McKinney, S., and Glenny, R. W. (2000). Fractal nature of regional ventilation distribution. J. Appl. Physiol. 88, 1551–1557.
Alvarez-Ramirez, J., Ibarra-Valdez, C., Rodriguez, E., and Dagdug, E. (2008). 1/f noise structures in Pollock's drip paintings. Physica A 387, 281–295.
Barenblatt, G. I. (1994). Scaling Phenomena in Fluid Mechanics. Cambridge: Cambridge University Press.
Bassingthwaighte, J. B., Liebovitch, L. S., and West, B. J. (1994). Fractal Physiology. New York: Oxford University Press.
Blaszczyk, J. W., and Klonowski, W. (2001). Postural stability and fractal dynamics. Acta Neurobiol. Exp. 61, 105–112.
Buchman, T. G. (2006). “Physiologic failure: multiple organ dysfunction syndrome,” in Complex Systems Science in BioMedicine, eds T. S. Deisboeck and S. A. Kauffman (New York: Kluwer Academic/Plenum
Publishers), 631–640.
Collins, J. J., and DeLuca, C. J. (1994). Random walking during quiet standing. Phys. Rev. Lett. 73, 764–767.
Das, M., Gebber, G. L., Bauman, S. M., and Lewis, C. D. (2003). Fractal Properties of sympathetic nerve discharges. J. Neurophysiol. 89, 833–840.
Eke, A., Herman, P., Kocsis, L., and Kozak, L. R. (2002). Fractal characterization of complexity in temporal physiological signals. Physiol. Meas. 23, R1–R38.
Gao, J. B., Billock, V. A., Merk, I., Tung, W. W., White, K. D., Harris, J. G., and Roychowdhury, V. P. (2006). Inertia and memory in ambiguous visual perception. Cogn. Process 7, 105–112.
Gilden, D. L. (2001). Cognitive emissions of 1/f noise. Psychol. Rev. 108, 33–56.
Glass, L. (2001). Synchronization and rhythmic processes in physiology. Nature 410, 277–284.
Goldberger, A. L. (2006). Complex systems. Proc. Am. Thorac. Soc. 3, 467–471.
Goldberger, A. L., Rigney, D. R., and West, B. J. (1990). Chaos, fractals and physiology. Sci. Am. 262, 42–49.
Griffin, L., West, D. J., and West, B. J. (2000). Random stride intervals with memory. J. Biol. Phys. 26, 185–202.
Grigolini, P., Aquino, G., Bologna, M., Lukovic, M., and West, B. J. (2009). A theory of 1/f noise in human cognition. Physica A 388, 4192–4204.
Grizzi, F., and Chiriva-Internati, M. (2005). The complexity of anatomical systems. Theor. Biol. Med. Model. 2, 26.
Hausdorff, J. M., Peng, C. K., Ladin, Z., Wei, J. Y., and Goldberger, A. L. (1995). Is walking a random walk? Evidence for long-range correlations in stride interval of human gait. J. Appl. Physiol.
78, 349–358.
Hong, S. L., Bodfish, J. W., and Newell, K. M. (2006). Power-law scaling for macroscopic entropy and microscopic complexity: evidence from human movement and posture. Chaos 16, 013135.
Hosking, J. T. M. (1982). Fractional differencing. Biometrika 68, 165–176.
Ivanov, P. C., Amaral, L. A. N., Goldberger, A. L., Havlin, S., Rosenblum, M. G., Struzik, A. R., and Stanley, H. E. (1999). Multifractality in human heartbeat dynamics. Nature 399, 461–465.
Ivanov, P. C., Amaral, L. A. N, Goldberger, A. L., and Stanley, H. E. (1998). Stochastic feedback and the regulation of biological rhythms. Europhys. Lett. 43, 363–368.
Jordan, K., Challis, J., and Newell, K. (2006). Long range correlations in the stride interval of running. Gait Posture 24, 120–125.
Kello, C. T., Beltz, C., Holden, J. G., and Van Orden, G. C. (2007). The emergent coordination of cognitive function. J. Exp. Psychol. Gen. 136, 551–568.
Kobelev, V., and Romanov, E. (2000). Fractional Langevin equation to describe anomalous diffusion. Prog. Theor. Phys. Suppl. 139, 470–476.
Liebovitch, L. S., and Krekora, P. (2002). “The physical basis of ion channel kinetics: the importance of dynamics,” in Institute for Mathematics and its Applications Volumes in Mathematics and Its
Applications, Membrane Transport and Renal Physiology, Vol. 129, eds H. E. Layton and A. M. Weinstein (Berlin: Springer-Verlag), 27–52.
Lutz, E. (2001). Fractional Langevin equation. Phys. Rev. E 64, 051106.
Magin, R. L. (2006). Fractional Calculus in Bioengineering. Redding, CT: Begell House Publishers, Co.
Meakin, P. (1998). Fractals, Scaling and Growth Far From Equilibrium. Cambridge Nonlinear Science Series 5. Cambridge, UK: Cambridge University Press.
Miller, K. S., and Ross, B. (1993). An Introduction to the Fractional Calculus and Fractional Differential Equations. New York: John Wiley.
Mutch, A., and Lefevre, G. R. (2003). Health, ‘small-worlds’, fractals and complex networks: an emerging field. Med. Sci. Monit. 9, MT55–MT59.
Mutch, W. A. C., Harm, S. H., Lefevre, G. R., Graham, M. R., Girling, L. G., and Kowalski, S. E. (2000). Biologically variable ventilation increases arterial oxygenation over that seen with positive
end-expiratory pressure alone in a porcine model of acute respiratory distress syndrome. Crit. Care Med. 28, 2457–64.
Nonnenmacher, T. F., and Metzler, R. (1995). On the Riemann–Liouville fractional calculus and some recent applications. Fractals 3, 557.
Peng, C. K., Mietus, J., Li, Y., Lee, C., Hausdorff, J. M., Stanley, H. E., Goldberger, A. L., and Lipsitz, L. A. (2002). Quantifying fractal dynamics of human respiration: age and gender effects.
Ann. Biomed. Eng. 30, 683–692.
Peng, C. K., Mietus, J., Hausdorff, J. M., Havlin, S., Stanley, H. E., and Goldberger, A. L. (1993). Long-range anticorrelations and non-Gaussian behavior of the heartbeat. Phys. Rev. Lett. 70,
Pincus, S. M. (1994). Greater signal regularity may indicate increased system isolation. Math. Biosci. 122, 161.
Rajagopalan, B., and Tarboton, D. G. (1993). Understanding complexity in the structure of rainfall. Fractals 1, 6060.
Rossitti, S., and Stephensen, H. (1994). Temporal heterogeneity of the blood flow velocity at the middle cerebral artery in the normal human characterized by fractal analysis. Acta Physiol. Scand.
151, 191.
Roy, S., Mitra, I., and Llinas, R. (2008). Non-Markovian noise mediated through anomalous diffusion with ion channels. Phys. Rev. E 78, 041920.
Scafetta, N., and Grigolini, P. (2002). Scaling detection in time series: diffusion entropy analysis. Phys. Rev. E 66, 036130.
Schottky, W. (1918). Über spontane Stromschwankungen in verschiedenen Elektrizitätsleitern. Ann. Phys. 362, 541–567.
Schrödinger, E. (1943). What is Life? The Physical Aspects of the Living Cell. London: Cambridge University Press.
Shiau, Y. (2008). “Detecting well-harmonized homeostasis in heart rate fluctuations,” in Proceedings of the 2008 International Conference on BioMedical Engineering and Informatics (Washington, DC:
IEEE Computer Society), 399–403.
Stanley, H. E., Amaral, L. A. N., Goldberger, A. L., Havlin, S., Ivanov, P. C., and Peng, C.-K. (1999). Statistical physics and physiology: monofractal and multifractal approaches. Physica A 270,
Suki, B., Alencar, A. M., Frey, U., Ivanov, P. C., Buldyrev. S. V., Majumdar, A., Stanley, H. E., Dawson, C. A., Krenz, G. S., and Mishima, M. (2003). Fluctuations, noise and scaling in the
cardio-pulmonary system. Fluct. Noise Lett. 3, R1–R25.
Szeto, H. H., Cheng, P. Y., Decena, J. A., Chen, Y., Wu, Y., and Dwyer, G. (1992). Fractal properties of fetal breathing dynamics. Am. J. Physiol. Regul. Integr. Comp. Physiol. 263, R141–R147.
Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. (1996). Heart rate variability: standards of measurement, physiological interpretation,
and clinical use. Eur. Heart J. 17, 354–381 (and references cited therein).
Taylor, L. R. (1961). Aggregation, variance and the mean. Nature 189, 732–735.
Taylor, L. R., and Taylor, R. A. J. (1977). Aggregation, migration and population mechanics. Nature 265, 415–421.
Taylor, L. R., and Woiwod, I. P. (1980). Temporal stability as a density-dependent species characteristic. J. Anim. Ecol. 49, 209–224.
Van Orden, G. C., Holden, J. G., and Turvey, M. T. (2005). Human cognition and 1/f scaling. J. Exp. Psychol. Gen. 134, 117–123.
West, B. J. (1990). Physiology in fractal dimension: error tolerance. Ann. Biomed. Eng. 18, 135–149.
West, B. J. (1999). Physiology, Promiscuity and Prophecy at the Millennium: A Tale of Tails. Singapore: World Scientific.
West, B. J. (2006b). “Fractal physiology, complexity and the fractional calculus,” in Fractals, Diffusion and Relaxation in Disordered Complex Systems (Advances in Chemical Physics Series), eds W. T.
Coffey and Y. P. Kalmykov (New York: Wiley & Sons), 1–92.
West, B. J. (2009). “Control from an allometric perspective,” in Progress in Motor Control: A Multidisciplinary Perspective (Advances in Experimental Medicine and Biology), Vol. 629, ed. D. Sternad
(New York: Springer), 57–82.
West, B. J., Bologna, M., and Grigolini, P. (2003a). Physics of Fractal Operators. New York: Springer.
West, B. J., Latka, M., Galaubic-Latka, M., and Latka, D. (2003b). Multifractality of cerebral blood flow. Physica A 318, 453–460.
West, B. J., Geneston, E. L., and Grigolini, P. (2008). Maximizing information exchange between complex networks. Phys. Rep. 468, 1–99.
West, B. J., and Deering, W. (1994). Fractal physiology for physicists: Lévy statistics. Phys. Rep. 246, 1–100.
West, B. J., and Griffin, L. (1998). Allometric control of human gait. Fractals 6, 101–108.
West, B. J., and Griffin, L. (1999). Allometric control, inverse power laws and human gait. Chaos Solitons Fractals 10, 1519–1527.
West, B. J., and Griffin, L. (2003). Biodynamics: Why the Wirewalker Doesn’t Fall. New York: Wiley & Sons.
West, B. J., Novaes, M. N., and Kavcic, V. (1995). “Fractal probability density and EEG/ERP time series (Chapter 10),” in Fractal Geometry in Biological Systems, eds P. M. Iannoccone and M. Khokha
(Boca Raton: CRC), 267–316.
West, B. J., Zhang, R., Sanders, A. W., Miniyar, S., Zuckerman, J. H., and Levine, B. D. (1999a). Fractal fluctuations in cardiac time series. Physica A 270, 552–566.
West, B. J., Zhang, R., Sanders, A. W., Miniyar, S., Zuckerman, J. H., and Levine, B. D. (1999b). Fractal fluctuations in transcranial Doppler signals. Phys. Rev. E 59, 3492–3498.
West, G. B., Brown, J. H., and Enquist, B. J. (1997). A general model for the origin of allometric scaling in biology. Science 276, 122–128.
Willis, J. C. (1922). Age and Area: A Study in Geographical Distribution and Origin of Species. Cambridge: Cambridge University Press.
Yu, Y., Romero, R., and Lee, T. S. (2005). Preference of sensory neural coding for 1/f signals. Phys. Rev. Lett. 94, 108103.
RE: st: from normal to bimodal distribution
From: Linn Renée Naper <linn.naper@ecgroup.no>
To: <statalist@hsphsun2.harvard.edu>
Subject: RE: st: from normal to bimodal distribution
Date: Mon, 17 Nov 2008 16:17:59 +0100
The command from you, Maarten, works well with regard to generating a new variable with a
bimodal distribution.
I have generated a bimodal variable, one for each observation,
and then added it to the original price.
But, I am still not sure how adding this kind of variable to the original prices will help me
to change the distribution in the way I want to.
At least I haven't yet figured out exactly how this may be done.
I want to "force" the old prices into having a "new distributional shape" with two peaks,
but I do not necessarily want to change the global mean very much. My prices have the
following distribution (in case it may be useful to know in order to comment on this):
variable | mean min max sd variance p25 p50 p75
mip | 60.28128 8.918235 776.2332 39.47987 1558.66 36.53056 48.77862 69.81553
I realize that the distribution of the "new prices"
will vary with how I define means and sd of the bimodal variable.
Moreover, the prices will be changed randomly even though the changes are
drawn from a bimodal distribution, so what I might need is some weighting of the observations
in order to get the "shape" that I want.
Alternatively, is there a way to for example redefine the percentiles in an
already existing distribution perhaps? Or in some other way imposing the distributional change
directly to the original data (the original price)?
-----Original message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On behalf of Maarten buis
Sent: 17 November 2008 11:45
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: from normal to bimodal distribution
--- Linn Renée Naper <linn.naper@ecgroup.no> wrote:
> How do I tell Stata to draw random variables or generate a stochastic
> variable with a bimodal distribution? Can the rnd-command by Hilbe be
> used?
You can create a mixture of Gaussian (normal) distributions, like in
the example below: the local p represents the probability of belonging
to group 1, the locals mu1 and mu2 represents the means of group 1 and
2 respectively, and the locals sd1 and sd2 the standard deviations in
group 1 and group 2.
*--------------- begin example -----------------
drop _all
set obs 10000
local p = .5
local sd1 = .75
local sd2 = 1.5
local mu1 = -2
local mu2 = 2
gen u = uniform()
gen e = invnorm(uniform()) * ///
cond(u < `p', `sd1', `sd2') + ///
cond(u < `p', `mu1', `mu2')
hist e
*---------------- end example ---------------------
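For readers outside Stata, the same two-component Gaussian mixture can be sketched in Python; the parameter names mirror Maarten's locals `p`, `mu1`, `sd1`, `mu2`, `sd2` (the sample size and seed are arbitrary choices for illustration):

```python
import random

def draw_mixture(rng, p=0.5, mu1=-2.0, sd1=0.75, mu2=2.0, sd2=1.5):
    """With probability p, draw from N(mu1, sd1); otherwise from N(mu2, sd2)."""
    if rng.random() < p:
        return rng.gauss(mu1, sd1)
    return rng.gauss(mu2, sd2)

rng = random.Random(1)
sample = [draw_mixture(rng) for _ in range(10_000)]
# A histogram of `sample` shows two peaks, one near -2 and one near +2.
```

As in the Stata version, the uniform draw decides group membership first, and the normal draw then uses that group's mean and standard deviation.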
(For more on how to use examples I sent to the Statalist, see
http://home.fsw.vu.nl/m.buis/stata/exampleFAQ.html )
Hope this helps,
Maarten L. Buis
Department of Social Research Methodology
Vrije Universiteit Amsterdam
Boelelaan 1081
1081 HV Amsterdam
The Netherlands
visiting address:
Buitenveldertselaan 3 (Metropolitan), room N515
+31 20 5986715
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Converting RPM to m/s - OnlineConversion Forums
Originally Posted by
I'm trying to discover the means of converting RPM to metres per second. Even a formula would be handy. The radius is 4 cm.
I found a site but it crashes my computer every time I go on it. Please help, Internet!
The circumference is 2*pi*radius
Each revolution advances one circumference
If you want m/s you need two conversions
100 cm = 1 m
60 s = 1 min
2*pi*4 cm * (1 m/100 cm) * RPM * (1 min/60 s)
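The same unit bookkeeping as a tiny, hypothetical helper function in Python (the 100 RPM figure is only an example, since the original question gives just the radius):

```python
import math

def rpm_to_mps(rpm, radius_m):
    """Each revolution advances one circumference, 2*pi*r; divide by 60 for seconds."""
    return rpm * 2.0 * math.pi * radius_m / 60.0

# radius of 4 cm = 0.04 m; at 100 RPM the rim speed is about 0.419 m/s
speed = rpm_to_mps(100, 0.04)
```

At 60 RPM with a 1 m radius this gives exactly 2*pi m/s (one revolution per second), a quick sanity check on the formula.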
Basic Set Theory Question
December 21st 2009, 10:49 AM #1
Dec 2009
Basic Set Theory Question
1. How many subsets are there of the set {1,2,3,...,n}? How many maps of this set into itself? How many maps of this set onto itself?
2. How many functions are there from a nonempty set, S, into the null set? How many functions are there from the null set into an arbitrary set, S?
To anyone helping me with the above two questions, thank you! I am studying analysis on my own, so any help is certainly appreciated! Those two questions came out of "Introduction to Analysis" by
Also, if you have any suggestions that might be more suitable for self study, I'd be very thankful. I don't have any solutions in the back of this text! Haha.
1. How many subsets are there of the set {1,2,3,...,n}? How many maps of this set into itself? How many maps of this set onto itself?
2. How many functions are there from a nonempty set, S, into the null set? How many functions are there from the null set into an arbitrary set, S?
To anyone helping me with the above two questions, thank you! I am studying analysis on my own, so any help is certainly appreciated! Those two questions came out of "Introduction to Analysis" by
Also, if you have any suggestions that might be more suitable for self study, I'd be very thankful. I don't have any solutions in the back of this text!
If you have to ask about these two questions, then I do not think you have the grounding necessary to self-study analysis.
I would suggest that you start with a lower-level textbook, say a discrete mathematics text which includes chapters on set theory.
Last edited by Plato; December 21st 2009 at 12:59 PM.
I agree with Plato. Although I can still give you some help:
1. Let p denote the number of subsets of {1,..., n}. Then observe that $p= \sum_{k=0}^n{n \choose k}$.
Now observe that the binomial theorem gives $2^{n} = (1+1)^n = p$
I expect you to work out the details yourself.
Observe that an injective map f has the whole set {1,...,n} as image. How many ways can we permute this set?
Observe that an onto map f:{1,..,n}-->{1,..,n} is automatically injective....
Last edited by Dinkydoe; December 22nd 2009 at 03:48 AM.
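The counts hinted at above can be checked by brute force for small n: a set of n elements has 2^n subsets; a map of the set into itself assigns each of the n elements one of n possible images, giving n^n maps; and the onto (hence bijective) maps are exactly the n! permutations. A minimal Python sketch (the choice n = 4 is arbitrary):

```python
from itertools import chain, combinations, permutations, product

def subsets(s):
    """All subsets of s, smallest first."""
    s = list(s)
    return list(chain.from_iterable(combinations(s, k) for k in range(len(s) + 1)))

n = 4
S = range(1, n + 1)

assert len(subsets(S)) == 2 ** n                  # 16 subsets
all_maps = list(product(S, repeat=n))             # f encoded as (f(1), ..., f(n))
assert len(all_maps) == n ** n                    # 256 maps of the set into itself
onto = [f for f in all_maps if set(f) == set(S)]  # image is the whole set
assert len(onto) == len(list(permutations(S)))    # 24 = 4! maps onto the set
```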
1. How many subsets are there of the set {1,2,3,...,n}? How many maps of this set into itself? How many maps of this set onto itself?
2. How many functions are there from a nonempty set, S, into the null set? How many functions are there from the null set into an arbitrary set, S?
To anyone helping me with the above two questions, thank you! I am studying analysis on my own, so any help is certainly appreciated! Those two questions came out of "Introduction to Analysis" by
Also, if you have any suggestions that might be more suitable for self study, I'd be very thankful. I don't have any solutions in the back of this text! Haha.
Very interesting questions,particularly the 2nd one.
The 1st one is trivial and can be found in many books.But the 2nd one is a bit tricky and is based on the following theorems:
1) For all sets S: if $S \neq \emptyset$, then there does not exist a function from S to the empty set.
2) For all sets S: the empty set is a function from the empty set to S.
Or in a more mathematical notation:
1) $\forall S$: $S \neq \emptyset \Longrightarrow \neg\exists f(f:S\rightarrow\emptyset)$.
2) $\forall S$: $\emptyset :\emptyset\rightarrow S$.
Now since those questions came out of your Analysis book, and since most Analysis books offer basic set theory, you should be able, by looking carefully at your book's set theory section, to tackle those questions.
But the 2nd question i must admit is quite difficult.
1. How many subsets are there of the set {1,2,3,...,n}? How many maps of this set into itself? How many maps of this set onto itself?
2. How many functions are there from a nonempty set, S, into the null set? How many functions are there from the null set into an arbitrary set, S?
To anyone helping me with the above two questions, thank you! I am studying analysis on my own, so any help is certainly appreciated! Those two questions came out of "Introduction to Analysis" by
Also, if you have any suggestions that might be more suitable for self study, I'd be very thankful. I don't have any solutions in the back of this text! Haha.
See my previous post (click here). This will answer Q2.
In case you are interested in other perspective of an empty function, below is some additional stuff.
Empty set can also be thought of as an initial object in the category of Sets. That means there exists precisely one morphism $\emptyset \rightarrow X$ for every object X in the category of Sets.
Thank You - to all.
Thank you to all of you who have posted. Yes, I agree it would be beneficial to go to a discrete/combinatorial mathematics text and read through it prior to looking at analysis. Usually, I
believe that is the case! I am going to make an attempt to tackle this intro analysis/advanced calculus text because the preface does clearly state that undergraduate calculus is the only
I actually did figure out question one the way that it has been explained here (through the use of the binomial theorem), but I was curious if there was another way to actually prove that the
power set of the set {1,2,3,...,n} is the solution. Perhaps there isn't. Perhaps I don't know enough to figure out a different way!
The second problem that several of you made a post on - now I'm gonna have to go study my logic again... sigh. This is a labor of love. Thank you, and Merry Christmas!
The "vacuously true" proof that the Wikipedia is showing is a semantical proof.
In mathematics all proofs are syntactical.
For example in proving : The empty set is a subset of all sets ,A.
We have to prove that for all,x : $x\in\emptyset\Longrightarrow x\in A$.
Now to avoid a proper proof we can say :
Since $x\in\emptyset$ is false ,the conditional $x\in\emptyset\Longrightarrow x\in A$ is always true.
This is a semantical proof.
So the two theorems above that i mentioned DO have a proper mathematical proof .
A difficult one i must admit
projections and distances
figure 1: a, b, and the projection of b onto a.
We've now spent some time talking about projections and distances. This is an attempt to summarize that in some way or another. Recall how we found the vector projection of a vector b onto a vector a
(figure 1, to the right): we said that the length of the projection is |b| cos(theta), and so, because
|a| |b| cos(theta) = a . b,
we can divide both sides by |a| to get
|b| cos(theta) = the length of the projection = a . b / |a|
The actual vector projection is therefore a unit vector in the correct direction times this length, that is,
proj[a]b = (a / |a|)(a . b / |a|).
Next consider the other (unlabeled) vector in the figure. This is the orthogonal projection of b onto a, and its length is (hopefully obviously) |b| sin(theta). Recalling that
|a x b| = |a| |b| sin(theta),
we can find this length by dividing both sides by |a|:
|b| sin(theta) = |a x b| / |a|.
(Note that we can also find this by subtracting vectors: the orthogonal projection orth[a]b = b - proj[a]b. Make sure this makes sense!)
Points and Lines
Now, suppose we want to find the distance between a point and a line (top diagram in figure 2, below). That is, we want the distance d from the point P to the line L. The key thing to note is that,
given some other point Q on the line, the distance d is just the length of the orthogonal projection of the vector QP onto the vector v that points in the direction of the line! That is, we notice
that the length d = |QP| sin(theta), where theta is the angle between QP and v. So
d = |QP x v| / |v|.
figure 2: distances from a point to a line, and from a point to a plane.
Let's do an example. Suppose we want to know the distance between the point P = (1,3,8) and the line x(t) = -2 + t, y(t) = 1 - 2t, z(t) = -3 - t. We need some point ("Q") on the line---let's take the
point (-2, 1, -3). Then a vector from this point on the line to the point P is <3, 2, 11>. This is the vector QP in the figure. We want the length d, which is
d = |QP| sin(theta) = |QP x v| / |v|
What's v? It's the vector that points along the line, which is just <1, -2, -1>. So
QP x v = 20 i + 14 j - 8 k
(work it out to be sure), so
|QP x v| = sqrt(400 + 196 + 64) = sqrt(660) = 2 sqrt(165)
and |v| = sqrt(1 + 4 + 1) = sqrt(6), so d = 2 sqrt(165) / sqrt(6) = sqrt(110), which is approximately 10.49.
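The recipe d = |QP x v| / |v| translates directly into code. A minimal plain-Python sketch (the sanity-check point and line are fresh, easily verified choices, not the example from the text):

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(a):
    return math.sqrt(sum(x * x for x in a))

def point_line_dist(P, Q, v):
    """Distance from point P to the line through Q with direction v: |QP x v| / |v|."""
    QP = tuple(p - q for p, q in zip(P, Q))
    return norm(cross(QP, v)) / norm(v)

# The point (0, 1, 0) sits at distance 1 from the x-axis:
assert abs(point_line_dist((0, 1, 0), (0, 0, 0), (1, 0, 0)) - 1.0) < 1e-12
```

Note that Q can be any point on the line; the cross product with v wipes out the component of QP along the line, which is exactly why the choice of Q does not matter.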
Points and Planes
Ok, how about the distance from a point to a plane? We'll do the same type of thing here. Consider the lower diagram in figure 2. Here we're trying to find the distance d between a point P and the
given plane. Again, finding any point on the plane, Q, we can form the vector QP, and what we want is the length of the projection of this vector onto the normal vector to the plane. But this is
really easy, because given a plane we know what the normal vector is. So we can say
d = |QP| cos(theta)
|QP| cos(theta) = QP . n / |n|,
d = QP . n / |n|
(taking the absolute value as necessary to get a positive distance). Cool!
An example: find the distance from the point P = (1,3,8) to the plane x - 2y - z = 12. We need a point on the plane. Hmm. There sure are a lot of them to choose from. :) Let's pick something easy:
I'll pick x = 3, y = -3 and z = -3. (Why? Just because they have to satisfy the equation x - 2y - z = 12, and I was picking numbers to try and keep x, y and z moderately small.) Then our point Q =
(3,-3,-3). A vector from the plane to P is QP = <-2, 6, 11>, so
d = <-2, 6, 11> . <1, -2, -1> / |<1, -2, -1>|
(because we know that the components of n are the coefficients of x, y, and z in the equation for the plane), or
d = (-2 - 12 - 11) / sqrt(6)
= -25 / sqrt(6)
except that a distance should be positive, so we'll take the absolute value of this: d = 25 / sqrt(6), which is approximately 10.21.
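The point-to-plane computation is even shorter in code, since the normal is read straight off the plane's equation. A minimal Python sketch checking the example above:

```python
import math

def point_plane_dist(P, Q, n):
    """Distance from P to the plane through Q with normal n: |QP . n| / |n|."""
    QP = [p - q for p, q in zip(P, Q)]
    return abs(sum(a * b for a, b in zip(QP, n))) / math.sqrt(sum(c * c for c in n))

# P = (1, 3, 8); plane x - 2y - z = 12 has normal (1, -2, -1), and Q = (3, -3, -3) lies on it
d = point_plane_dist((1, 3, 8), (3, -3, -3), (1, -2, -1))
# d = 25 / sqrt(6), approximately 10.21
```

The absolute value in the function is the same "take the positive distance" step done by hand in the text.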
Why did we use the angle theta opposite the component of the vector giving the distance in the case of the line, and the angle adjacent for the plane? It all has to do with what we know: in the case
of the line, we already know the vector that points along the line, so if we start doing dot or cross products with this vector, the angle that's involved will be the angle we used. Similarly for a
plane, the vector associated with the plane that we know is the normal, so we're interested in angles from this vector to other vectors.
Last modified: Fri Jan 16 10:09:13 EST 2004
Comments to:glarose@umich.edu
©2004 Gavin LaRose, UM Math Dept.
Posts by
Total # Posts: 55
Whats 4,580+250
Determine the energy of a photon with a wavelength of 6.74 × 10^2 nm.
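Taking the wavelength as 6.74 × 10^2 nm (674 nm), the Planck relation E = hc/λ gives the energy directly; a sketch with rounded constants, not a vetted solution:

```python
h = 6.626e-34                # Planck constant, J*s
c = 2.998e8                  # speed of light, m/s
wavelength = 6.74e2 * 1e-9   # 674 nm expressed in metres

E = h * c / wavelength       # photon energy, about 2.95e-19 J
```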
From a laboratory process designed to separate water into hydrogen and oxygen gas, a student collected 11.6 g of hydrogen and 65.9 g of oxygen. How much water was originally involved in the process?
don't understand how to work it out
thanks. i just need 1 more
thanks. i just need 1 more
Imagine you are an energy expert on a planning council for a new town to be built on an island. Evaluate resources and methods you will suggest the new town to use.
3 to the power of 3..nvm
What is the numeral for Two Million?
For nine months of the year farmers tended their crops. What fraction of a year did they farm? Reduce the fraction to its lowest terms.
Which customary unit is most reasonable to measure the capacity of a swimming pool?
Marine Park
what customary unit is most reasonable to measure the capacity of a Bathtub, A drinking glass, A swimming Pool, and a cereal bowl.
Marine Park.
How many liters of Rainbow Juice can be made with 600 milliliters of fruit juice?
Marine Park.
How many ML of fruit juice are in a 1/2 liter bottle
Do I multiply or divide 2 pounds to ounces
What do you think might happen as the size of the population in the english colonies increased
Consumer Economics
What are three ways teens are preserving and cleaning the environment?
math dividing fractions
Julia ate 1 over 2 pint of mint ice cream. Mark ate 3 over 4 of malt ice cream. How many times more ice cream did mark eat?
word problems math
Laura wants to cut a board into three equal pieces. The board is 5-8 feet long. How long will each piece be?
explain how you would write the number. that is 2 tens more than 53.
3RD GRADE MATH
hey im in 6th grade so i know whaat this means.... you have to find out wat 8x6 is (48) and find other ways you can multiply it!! ( 2x24, 3x16) hint hint i gave you the answers!!!
1 3/4, 1 9/10, 1 1/2 in a numberline from least to greatest
if you live 71 mi from a river does it make sense to say you live about 80 mi from the river
Draw a quadrangle below.measure the sides to the nearest 1.5 inch.
draw a quadrangle below measure the sides to the nearest 1.5 inch.write the lenght next to each side.find the perimeter.
Mr.lopez is putting a fence around his vegetable garden. The garden is shaped liked a rectangle. The longer sides are 14 feet long, and the shorter sides are 9.5 feet long. How much fencing should
Mr.Lopez buy?
How do you say please in french.
what PA town was the first to be lit by electricity in 1881
how many arrays can you draw for the number 30
how many arrays can you draw for the number 30
7th grade Science
will an object with a density of 0.79g/mL float, sink, or remain suspended in water
Physical Science
What are twenty questions that you could ask a scuba diver about scuba gear and diving?
Physical Science
What are at least twenty words and clues relate to the Gas Law?
how much fencing is needed to close a circular garden whose radius is 4.5
4th grade
math 4th grade
5th grade math
i'd say 3/4, but not for sure
Blah**** Find the answer out yourself.... do the wrok
1st grade
a rectangle with measurements of length of 11, width of 4, diagonal of 11.7, area of 44 and perimeter of 30 needs to be increased by 20% and decreased by 20%. How do I get the new dimensions?
Thank you so much for your help.
Can anyone tell me if I answered this correctly? Please help. 2. Assignment: Erikson's Timeline Write a 350- to 700-word paper that explains in which of Erikson's eight stages of life you believe
you are currently. Explain why you think you are a...
6th grade
8th grade
On my homework it says: on your paper, write the word or words that are being modified by the adverbs in italics in each sentence. How do I do this?
How do i solve the pair of simultaneous equations y=4-2x y=2x^2-3x+1 ?? how do i find minimum value of 2x^2-3x+1 and the value of x for which the minimum occurs???
terrific i had no clue!
5th Grade
I think the answer is t=6 e= 0 n=4 and 3 c=6 s=2
4th grade
What is the plural of maestro?
In a class of 28 sixth graders, all but one of the students are 12 years old. Which two measurements are the same for the student's ages? What are those measurements?
Algebra One
Daniel has $8 more than Aisha. Let x represent the amount of money, in dollars, that Aisha has. Which expression shows how many dollars Daniel will have left if he gives $5 to Aisha? - 5th grade
question-5th grader
world history
i need help with yhe neolithic age report
Does anyone know any good websites for the Valley of the Kings? http://en.wikipedia.org/wiki/Valley_of_the_Kings Is a very good description I do have that one.... any more websites? http://
Social Studies
What were mummy's portrait masks made of? From the Encyclopedia Britannica (1991), "Mask": funerary masks and death masks were used in ancient Egypt and were associated with the return of the spirit to the
body. Such masks were generalized portraits and, in the case of nobility,...
Removing Outliers
"the cyclist" <thecyclist@gmail.com> wrote in message
> ln wrote:
>>>> the cyclist wrote:
>>> Well, an outlier is usually an observation that one believes is
>>> inconsistent with the rest of the data set. For example, if you are
>>> making delicate measurements of noise levels, and someone drops a
>>> wrench during a measurement, then that point is probably an outlier.
>>> If you have no reason to believe that points > 3 sigma are "wrong"
>>> in that way, you should not remove them just because they are far
>>> away from the mean.
>> Hi, actually the data is continuous RTK-GPS data. So, can I use the
>> 3-sigma standard deviation rule for rejection of the outliers?
>> ln,
>> kin
> I think you misunderstood my basic point. Just being > 3 sigma
> away is NOT A GOOD CRITERION for labeling an outlier. Being out on
> the tail of a (e.g. normal) distribution is NOT the same as being an
> outlier. An outlier is a point that is a discrepancy, a point that
> was somehow sampled incorrectly, and does not really belong in the
> data sample at all.
> To say it again in a different way:
> If I sample one million points from a normal distribution, and I do
> it PERFECTLY, a few thousand of them will be > 3 sigma. But none
> of them are outliers, and none of them should be removed.
> I am not sure I can be more helpful in your doing it the right way,
> but I am trying to help you not do it the wrong way. 8-)
Yes, but... if your data were really normally distributed, and if you
sampled one million data points, your representation of the sample space
with the data would not necessarily be significantly impaired by discarding
data 3+ SDs above and 3+ SDs below the mean. The notion of "outlier" does
NOT necessarily imply incorrect sampling. For instance, if I made 1000
atomic force microscopy (AFM) measurements of elasticity moduli on the
tectorial membrane (of the inner ear), occasionally my AFM tip might come to
rest on an errant piece of hard material (bone, for instance). My
measurement at that location is not incorrect, but the value would be
significantly higher than the mean. It would be an outlier, and it would
very likely be more than 3 SDs away from the mean. So discarding it based on
that criterion makes a good bit of sense if I want to measure only soft
That said, Grubbs' test (implemented in DELETEOUTLIERS.m) is a
well-established test for detecting and omitting outlying data (from
normally distributed populations) based on the relative values of the
samples, and is highly regarded at NIST. Some references are cited in the
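The cyclist's numerical claim above — that a perfectly sampled normal data set of a million points is expected to put a few thousand of them beyond 3 sigma, none of which are outliers in the "dropped wrench" sense — is easy to verify. A minimal Python sketch (the thread itself concerns MATLAB; this only illustrates the statistics):

```python
import random

random.seed(0)
n = 1_000_000
sample = [random.gauss(0.0, 1.0) for _ in range(n)]

# The two-sided tail mass beyond 3 sigma for a normal distribution is about
# 0.27%, so roughly 2,700 of these perfectly sampled points exceed 3 sigma.
beyond = sum(1 for x in sample if abs(x) > 3.0)
```

Discarding those points with a blanket 3-sigma rule would throw away legitimate tail data; a Grubbs-style test at least frames the decision as a hypothesis test rather than a fixed cutoff.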
Monty Hall
July-August 2003
Monty Hall
A few weeks ago I did one of my occasional "Math Guy" segments on NPR's Weekend Edition. The topic that I discussed with host Scott Simon was probability. [Click here to listen to the interview.]
Among the examples we discussed was the famous - or should I say infamous - Monty Hall Problem. Predictably, our discussion generated a mountain of email, both to me and to the producer, as listeners
wrote to say that the answer I gave was wrong. (It wasn't.) The following week, I went back on the show to provide a further explanation. But as I knew from having written about this puzzler in
newspapers and books on a number of occasions, and having used it as an example for many years in university probability classes, no amount of explanation can convince someone who has just met the
problem for the first time and is sure that they are right - and hence that you are wrong - that it is in fact the other way round.
Here, for the benefit of readers who have not previously encountered this puzzler, is what the fuss is all about.
In the 1960s, there was a popular weekly US television quiz show called Let's Make a Deal. Each week, at a certain point in the program, the host, Monty Hall, would present the contestant with three
doors. Behind one door was a substantial prize; behind the others there was nothing. Monty asked the contestant to pick a door. Clearly, the chance of the contestant choosing the door with the prize
was 1 in 3. So far so good.
Now comes the twist. Instead of simply opening the chosen door to reveal what lay behind, Monty would open one of the two doors the contestant had not chosen, revealing that it did not hide the
prize. (Since Monty knew where the prize was, he could always do this.) He then offered the contestant the opportunity of either sticking with their original choice of door, or else switching it for
the other unopened door.
The question now is, does it make any difference to the contestant's chances of winning to switch, or might they just as well stick with the door they have already chosen?
When they first meet this problem, most people think that it makes no difference if they switch. They reason like this: "There are two unopened doors. The prize is behind one of them. The probability
that it is behind the one I picked is 1/2, the probability that it is behind the one I didn't is also 1/2, so it makes no difference if I switch."
Surprising though it seems at first, this reasoning is wrong. Switching actually DOUBLES the contestant's chance of winning. The odds go up from the original 1/3 for the chosen door, to 2/3 that the
OTHER unopened door hides the prize.
There are several ways to explain what is going on here. Here is what I think is the simplest account.
Suppose the doors are labeled A, B, and C. Let's assume the contestant initially picks door A. The probability that the prize is behind door A is 1/3. That means that the probability it is behind one
of the other two doors (B or C) is 2/3. Monty now opens one of the doors B and C to reveal that there is no prize there. Let's suppose he opens door C. Notice that he can always do this because he
knows where the prize is located. (This piece of information is crucial, and is the key to the entire puzzle.) The contestant now has two relevant pieces of information:
1. The probability that the prize is behind door B or C (i.e., not behind door A) is 2/3.
2. The prize is not behind door C.
Combining these two pieces of information yields the conclusion that the probability that the prize is behind door B is 2/3.
Hence the contestant would be wise to switch from the original choice of door A (probability of winning 1/3) to door B (probability 2/3).
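The 2/3 claim is also easy to check by simulation. A minimal Python sketch (the door numbering and the deterministic tie-break for Monty's choice when he has two options are arbitrary; neither affects the odds):

```python
import random

def play(switch, trials=100_000, seed=1):
    """Simulate the game; return the fraction of games won by the given strategy."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)   # door hiding the prize
        pick = rng.randrange(3)    # contestant's initial choice
        # Monty opens a door that is neither the pick nor the prize
        monty = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != monty)
        wins += (pick == prize)
    return wins / trials

# Sticking wins about 1/3 of the time; switching wins about 2/3.
```

With 100,000 games, the stick rate lands within a percent or so of 1/3 and the switch rate within a percent of 2/3 — exactly the doubling described above.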
Now, experience tells me that if you haven't come across this problem before, there is a probability of at most 1 in 3 that the above explanation convinces you. So let me say a bit more for the
benefit of the remaining 2/3 who believe I am just one sandwich short of a picnic (as one NPR listener delightfully put it).
The instinct that compels people to reject the above explanation is, I think, a deep rooted sense that probabilities are fixed. Since each door began with a 1/3 chance of hiding the prize, that does
not change when Monty opens one door. But it is simply not true that events do not change probabilities. It is because the acquisition of information changes the probabilities associated with
different choices that we often seek information prior to making an important decision. Acquiring more information about our options can reduce the number of possibilities and narrow the odds.
(Oddly enough, people who are convinced that Monty's action cannot change odds seem happy to go on to say that when it comes to making the switch or stick choice, the odds in favor of their
previously chosen door are now 1/2, not the 1/3 they were at first. They usually justify this by saying that after Monty has opened his door, the contestant faces a new and quite different decision,
independent of the initial choice of door. This reasoning is fallacious, but I'll pass on pursuing this inconsistency here.)
If Monty opened his door randomly, then indeed his action does not help the contestant, for whom it makes no difference to switch or to stick. But Monty's action is not random. He knows where the
prize is, and acts on that knowledge. That injects a crucial piece of information into the situation. Information that the wise contestant can take advantage of to improve his or her odds of winning
the grand prize. By opening his door, Monty is saying to the contestant "There are two doors you did not choose, and the probability that the prize is behind one of them is 2/3. I'll help you by
using my knowledge of where the prize is to open one of those two doors to show you that it does not hide the prize. You can now take advantage of this additional information. Your choice of door A
has a chance of 1 in 3 of being the winner. I have not changed that. But by eliminating door C, I have shown you that the probability that door B hides the prize is 2 in 3."
Still not convinced? Some people who have trouble with the above explanation find it gets clearer when the problem is generalized to 100 doors. You choose one door. You will agree, I think, that you
are likely to lose. The chances are highly likely (in fact 99/100) that the prize is behind one of the 99 remaining doors. Monty now opens 98 or those and none of them hides the prize. There are now
just two remaining possibilities: either your initial choice was right or else the prize is behind the remaining door that you did not choose and Monty did not open. Now, you began by being pretty
sure you had little chance of being right - just 1/100 in fact. Are you now saying that Monty's action of opening 98 doors to reveal no prize (carefully avoiding opening the door that hides the
prize, if it is behind one of those 99) has increased to 1/2 your odds of winning with your original choice? Surely not. In which case, the odds are high - 99/100 to be exact - that the prize lies
behind that one unchosen door that Monty did not open. You should definitely switch. You'd be crazy not to!
Okay, one last attempt at an explanation. Back to the three door version now. When Monty has opened one of the three doors and shown you there is no prize behind, and then offers you the opportunity
to switch, he is in effect offering you a TWO-FOR-ONE switch. You originally picked door A. He is now saying "Would you like to swap door A for TWO doors, B and C ... Oh, and by the way, before you
make this two-for-one swap I'll open one of those two doors for you (one without a prize behind it)."
In effect, then, when Monty opens door C, the attractive 2/3 odds that the prize is behind door B or C are shifted to door B alone.
So much for the explanations. Far more fascinating than the mathematics, to my mind, is the psychology that goes along with the problem. Not only do many people get the wrong answer initially
(believing that switching makes no difference), but a substantial proportion of them are unable to escape from their initial confusion and grasp any of the different explanations that are available
(some of which I gave above).
On those occasions when I have entered into some correspondence with readers or listeners, I have always prefaced my explanations and comments by observing that this problem is notoriously
problematic, that it has been used for years as a standard example in university probability courses to demonstrate how easily we can be misled about probabilities, and that it is important to pay
attention to every aspect of the way Monty presents the challenge. Nevertheless, I regularly encounter people who are unable to break free of their initial conception of the problem, and thus unable
to follow any of the explanations of the correct answer.
Indeed, some individuals I have encountered are so convinced that their (faulty) reasoning is correct that when you try to explain where they are going wrong, they become passionate, sometimes angry,
and occasionally even abusive. Abusive over a math problem? Why is it that some people feel that their ability to compute a game show probability is something so important that they become
passionately attached to their reasoning, and resist all attempts to explain what is going on? On a human level, what exactly is going on here?
First, it has to be said that the game scenario is a very cunning one, cleverly designed to lead the unsuspecting player astray. It gives the impression that, after Monty has opened one door, the
contestant is being offered a choice between two doors, each of which is equally likely to lead to the prize. That would be the case if nothing had occurred to give the contestant new information.
But Monty's opening of a door does yield new information. That new information is primarily about the two doors not chosen. Hence the two unopened doors that the contestant faces at the end are not
equally likely. They have different histories. And those different histories lead to different probabilities.
That explains why very smart people, including many good mathematicians when they first encounter the problem, are misled. But why the passion with which many continue to hold on to their false
conclusion? I have not encountered such a reaction when I have corrected students' mistakes in algebra or calculus.
I think the reason the Monty Hall problem raises people's ire is because a basic ability to estimate likelihoods of events is important in everyday life. We make (loose, and generally non-numeric)
probability estimates all the time. Our ability to do this says something about our rationality - our capacity to live a successful life - and hence can become a matter of pride, something to be
The human brain did not evolve to calculate mathematical probabilities, but it did evolve to ensure our survival. A highly successful survival strategy throughout human evolutionary history, and
today, is to base decisions on the immediate past and on the evidence immediately to hand. If that movement in the undergrowth looks as though it might be caused by a hungry tiger, the smart move is
to make a hasty retreat. Regardless of the fact that you haven't seen a tiger in that vicinity for several years, or that when you saw a similar rustle yesterday it turned out to be a gazelle. Again,
if a certain company stock has been rising steadily for the past week, we may be tempted to buy, regardless of its stormy performance over the previous year. By presenting contestants with an actual
situation in which a choice has to be made, Monty Hall tacitly encouraged people to use their everyday reasoning strategies, not the mathematical reasoning that in this case is required to get you to
the right answer.
Monty Hall contestants are, therefore, likely to ignore the first part of the challenge and concentrate on the task facing them after Monty has opened the door. They see the task as choosing between
two doors - period. And for choosing between two doors, with no additional circumstances, the probabilities are 1/2 for each. In the case of the Monty Hall problem, however, the outcome is that a
normally successful human decision making strategy leads you astray.
Finally, just to see how well you have done on this teaser, suppose you are playing a seven door version of the game. You choose three doors. Monty now opens three of the remaining doors to show you
that there is no prize behind it. He then says, "Would you like to stick with the three doors you have chosen, or would you prefer to swap them for the one other door I have not opened?" What do you
do? Do you stick with your three doors or do you make the 3 for 1 swap he is offering?
Mathematician Keith Devlin ( devlin@csli.stanford.edu) is the Executive Director of the Center for the Study of Language and Information at Stanford University and "The Math Guy" on NPR's Weekend
Edition. His most recent book is The Millennium Problems: The Seven Greatest Unsolved Mathematical Puzzles of Our Time, published last fall by Basic Books. Devlin's Angle is updated at the beginning
of each month. | {"url":"http://www.maa.org/external_archive/devlin/devlin_07_03.html","timestamp":"2014-04-16T10:14:24Z","content_type":null,"content_length":"15368","record_id":"<urn:uuid:c2f1ea63-dd7c-4159-9ea1-324984e05767>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
Did you know there's a direct correlation between the decline in Spirograph popularity and the rise in gang activity? Reverse this deplorable trend by playing around with the
Guilloché spiral pattern generator
[more inside] posted by Iridic on Mar 28, 2014 - 34 comments
Each month, the Notices of the American Math Society runs a column called "What is...." which aims to explain an advanced mathematical concept in two pages, at a level accessible to a good undergrad
math major. Armin Straub, a postdoc at Illinois,
has collected them all in one place
[more inside] posted by escabeche on Feb 26, 2014 - 33 comments
Closing in on the twin prime conjecture
) - "Just months after
announced his result,
has presented an independent proof that pushes the gap down to 600. A
new Polymath project
is in the planning stages, to try to combine the collaboration's techniques with Maynard's approach to push this bound even lower."
[more inside] posted by kliuless on Dec 1, 2013 - 16 comments
Ben Blatt "sat in a Barnes & Noble for three hours flipping through all seven
Where’s Waldo
books with a tape measure" and emerged with
a method for finding Waldo
with speed more than 50% of the time.
[more inside] posted by Iridic on Nov 19, 2013 - 43 comments
is an application of
-based data mining to music, which helps you get recommendations for other musicians. Based on 140K user-defined tags from
that are collected for over 400K artists, results are sorted by the "nearest" or most probable matches for your artist of interest (algorithm
described here
[more inside] posted by Blazecock Pileon on Oct 2, 2013 - 17 comments
Paper Matrix
is a blog that gives instructions for cool papercraft objects, "reinterpreting the
Danish tradition
of woven paper
and ornaments." Cut paper in the prescribed ways and weave it together carefully to make a mobile of colorful
hot air balloons
, gorgeous and complex
; simple but satisfying
and much more... including a full theater for performances by paper dolls.
posted by LobsterMitten on Sep 23, 2013 - 18 comments
Revelations in the field of quantum physics have resulted in the discovery of the
, a jewel-like higher dimensional object whose volume elegantly predicts fundamental physical processes that took the brilliant Dr. Richard Feynman
hundreds of pages of abstruse mathematics
to describe. The theoretical manifold not only enables simple pen-and-paper calculation of physics that would
normally require supercomputers
to work out, but also challenges basic assumptions about the nature of reality -- forgoing the core concepts of
and suggesting that space and time are merely emergent properties of a timeless, infinitely-sided "master amplituhedron," whose geometry represents the sum total of all physical interactions.
More: The 152-page source paper on arXiv [PDF]
- Lead author
Nima Arkani-Hamed
hour-long lecture at SUSY 2013
Scans of Arkani-Hamed's handwritten lecture notes
- A far more detailed lecture series "Scattering Without Space Time":
Arkani-Hamed previously on MeFi
A hot-off-the-presses Wikipedia page
(watch this space)
posted by Rhaomi on Sep 18, 2013 - 128 comments
Making Music with a Möbius Strip
: "It turns out that musical chords naturally inhabit various topological spaces, which show all the possible paths that a composer can use to move between chords. Surprisingly, the space of two-note
chords is a Möbius strip."
posted by dhruva on Aug 15, 2013 - 16 comments
This Simple Math Puzzle Will Melt Your Brain
"Adding and subtracting ones sounds simple, right? Not according to the old Italian mathematician Grandi—who showed that a simple addition of 1s and -1s can give three different answers."
posted by andoatnp on Jul 2, 2013 - 61 comments
In August of last year, mathematician Shinichi Mochizuki reported that he had solved one of the great puzzles of number theory: the ABC conjecture (
previously on Metafilter
). Almost a year later, no one else knows whether he has succeeded.
No one can understand his proof. posted by painquale on May 10, 2013 - 59 comments
Mathematician Kenneth Appel has died at the age of 80.
He is best known for having proved, with Wolfgang Haken, the four-color theorem, which states that only four colors are needed to have a map in which no two adjacent countries have the same color.
[more inside] posted by Cash4Lead on Apr 29, 2013 - 21 comments
Mathematicians Henry Segerman and Saul Schleimer
have produced a triple gear
, three linked gears in space that can rotate together. A short writeup of the topology and geometry behind the triple gear
on the arXiv
posted by escabeche on Apr 26, 2013 - 36 comments
The origins of plus and minus signs
- "There be other 2 signes in often use of which the first is made thus + and betokeneth more: the other is thus made – and betokeneth lesse."
posted by spbmp on Mar 12, 2013 - 30 comments
Bayesian analysis shows redshirts are not most likely to die on Star Trek:TOS. Although Enterprise crew members in redshirts suffer many more casualties than crew members in other uniforms, they
suffer fewer casualties than crew members in gold uniforms when the entire population size is considered. Only 10% of the entire redshirt population was lost during the three year run of Star Trek.
This is less than the 13.4% of goldshirts, but more than the 5.1% of blueshirts. What is truly hazardous is not wearing a redshirt, but being a member of the security department. The red-shirted
members of security were only 20.9% of the entire crew, but there is a 61.9% chance that the next casualty is in a redshirt and 64.5% chance this red-shirted victim is a member of the security
department. The remaining redshirts, operations and engineering make up the largest single population, but only have an 8.6% chance of being a casualty. posted by Cash4Lead on Feb 20, 2013 - 75
Tim Gowers
a series of
overlay journals called the Episciences Project that
to exclude existing
from research publication in mathematics. As arXiv overlays, the Episciences Project avoids the editing and typesetting costs that existing open-access journals pay for using article processing
charges. The French
Centre pour la Communication Scientifique Directe
(CCSD) is backing the remaining expenses, such as developing the platform.
[more inside] posted by jeffburdges on Jan 19, 2013 - 11 comments
"The models we discuss belong to the class of two-variable systems with one delay for which appropriate delay stabilizes an unstable steady state. We formulate a theorem and prove that stabilization
takes place in our case. We conclude that considerable (meaning large enough, but not too large) values of time delay involved in the model can stabilize
love affairs dynamics
[more inside] posted by bluefly on Jan 16, 2013 - 12 comments
is a website containing short videos (approx. 5-10 min.) about numbers and stuff. Mathematicians and physicists play around with the tools of their trade and explain things in simple, clear language.
Learn things you didn't know you were interested in! Find out why
is a pretty cool phone number! What's the significance of
, anyway? What the heck is a
vampire number
? Why does Pac-Man have only
screens? Suitable for viewing by everyone from intelligent and curious middle-schoolers to math-impaired adults. Browse their YouTube channel
. (
posted by BitterOldPunk on Dec 29, 2012 - 20 comments
The Nature of Computation
Intellects Vast and Warm and Sympathetic
: "I hand you a network or graph, and ask whether there is a path through the network that crosses each edge exactly once, returning to its starting point. (That is, I ask whether there is a
'Eulerian' cycle.) Then I hand you another network, and ask whether there is a path which visits each node exactly once. (That is, I ask whether there is a 'Hamiltonian' cycle.) How hard is it to
answer me?" (
[more inside] posted by kliuless on Dec 1, 2012 - 19 comments
"We have little trouble recognizing that a chess grandmaster’s victory over a novice is skill, as well as assuming that Paul the octopus’s ability to predict World Cup games is due to chance. But
what about everything else?" [
Luck and Skill Untangled: The Science of Success
posted by vidur on Nov 20, 2012 - 16 comments | {"url":"http://www.metafilter.com/tags/Mathematics","timestamp":"2014-04-18T04:20:09Z","content_type":null,"content_length":"92604","record_id":"<urn:uuid:ec8e3309-807c-4e32-afb2-0c05d834d6c0>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00503-ip-10-147-4-33.ec2.internal.warc.gz"} |
What to use to express the variability of data: Standard deviation or standard error of mean? Barde MP, Barde PJ - Perspect Clin Res
Year : 2012 | Volume : 3 | Issue : 3 | Page : 113-116
What to use to express the variability of data: Standard deviation or standard error of mean?
Mohini P Barde^1, Prajakt J Barde^2
^1 Shrimohini Centre for Medical Writing and Biostatistics Pune, Maharashtra, India
^2 Glenmark Pharmaceutical Ltd., Mumbai, Maharashtra, India
Date of Web Publication 5-Sep-2012
Correspondence Address:
Prajakt J Barde
Glenmark Pharmaceutical Ltd., Mumbai, Maharashtra
DOI: 10.4103/2229-3485.100662
PMID: 23125963
Statistics plays a vital role in biomedical research. It helps present data precisely and draws the meaningful conclusions. While presenting data, one should be aware of using adequate statistical
measures. In biomedical journals, Standard Error of Mean (SEM) and Standard Deviation (SD) are used interchangeably to express the variability; though they measure different parameters. SEM
quantifies uncertainty in estimate of the mean whereas SD indicates dispersion of the data from mean. As readers are generally interested in knowing the variability within sample, descriptive data
should be precisely summarized with SD. Use of SEM should be limited to compute CI which measures the precision of population estimate. Journals can avoid such errors by requiring authors to adhere
to their guidelines.
Keywords: Standard deviation, standard error of mean, confidence interval
How to cite this article:
Barde MP, Barde PJ. What to use to express the variability of data: Standard deviation or standard error of mean?. Perspect Clin Res 2012;3:113-6
How to cite this URL:
Barde MP, Barde PJ. What to use to express the variability of data: Standard deviation or standard error of mean?. Perspect Clin Res [serial online] 2012 [cited 2014 Apr 20];3:113-6. Available from:
Statistics plays a vital role in biomedical research. It helps present data precisely and draws meaningful conclusions. A large number of biomedical articles have statistical errors either in
presentation ^[1],[2],[3] or analysis of data. The scathing remark by Yates "It is depressing to find how much good biological work is in danger of being wasted through incompetent and misleading
analysis." highlights need of proper understanding of statistics and its appropriate use in medical literature.
In late nineties, biomedical journals have made a concerted effort to improve quality of statistics. ^[4],[5],[6] Despite this, errors are still present in published articles. One such common error
is use of SEM instead of SD to express variability of data. ^[7],[8],[9],[10] Negele et al, also showed clearly that a significant number of published articles in leading journals had misused SEM in
descriptive statistics. ^[11] In this article, we discussed the concept and use of SD and SEM.
To study the entire population is time and resource intensive and not always feasible; therefore studies are often done on the sample; and data is summarized using descriptive statistics. These
findings are further generalized to the larger, unobserved population using inferential statistics.
For example, in order to understand cholesterol levels of the population, cholesterol levels of study sample, drawn from same population are measured. The findings of this sample are best described
by two parameters; mean and SD. Sample mean is average of these observations and denoted by ^[12]
s = sample SD; X - individual value;
[Figure 1]a shows cholesterol levels of population of 200 healthy individuals. Cholesterol of the most of individuals is between 190-210mg/dl, with a mean (μ) 200mg/dl and SD (s) 10mg/dl. A study in
10 individuals drawn from same population with cholesterol levels of 180, 200, 190, 180, 220, 190, 230, 190, 190, 180mg/dl gives
Figure 1: If one draws three different groups of 10 individuals each, one will obtain three different mean and SD. (Adapted from Glantz, 2002)
Click here to view
These sample results are used to make inferences based on the premise that what is true for a randomly selected sample will be true, more or less, for the population from which the sample is chosen.
This means, sample mean ([Figure 1]b, c and d would be observed; and therefore we may expect different estimate of population mean every time.
[Figure 2] shows mean of 25 groups of 10 individuals each drawn from the population shown in [Figure 1]. If these 25 group means are treated as 25 observations, then as per the statistical "Central
Limit Theorem" these observations will be normally distributed regardless of nature of original population. Mean of all these sample means will equal the mean of original population and standard
deviation of all these sample means will be called as SEM as explained below.
Figure 2: This figure illustrates the mean of 25 groups of 10 individuals each drawn from the population of 200 individuals shown in the Figure 1. The means of three groups shown in Figure 1 are
shown using circles filled with corresponding patterns
Click here to view
SEM is the standard deviation of mean of random samples drawn from the original population. Just as the sample SD (s) is an estimate of variability of observations, SEM is an estimate of variability
of possible values of means of samples. As mean values are considered for calculation of SEM, it is expected that there will be less variability in the values of sample mean than in the original
population. This shows that SEM is a measure of the precision with which sample mean [Figure 3].
Figure 3: The figure shows that the SEM is a function of the sample size
Click here to view
Thus, SEM quantifies uncertainty in the estimate of the mean. ^[13],[14] Mathematically, the best estimate of SEM from single sample is ^[15]
σ[M] = SEM; s = SD of sample; n = sample size.
However, SEM by itself doesn't convey much useful information. Its main function is to help construct confidence intervals (CI). ^[16] CI is the range of values that is believed to encompass the
actual ("true") population value. This true population value usually is not known, but can be estimated from an appropriately selected sample. If samples are drawn repeatedly from population and CI
is constructed for every sample, then certain percentage of CIs can include the value of true population while certain percentage will not include that value. Wider CIs indicate lesser precision,
while narrower ones indicate greater precision. ^[17]
CI is calculated for any desired degree of confidence by using sample size and variability (SD) of the sample, although 95% CIs are by far the most commonly used; indicating that the level of
certainty to include true parameter value is 95%. CI for the true population mean μ is given by ^[12]
s = SD of sample; n = sample size; z (standardized score) is the value of the standard normal distribution with the specific level of confidence. For a 95% CI, Z = 1.96.
A 95% CI for population as per the first sample with mean and SD as 195 mg/dl and 17.1 mg/dl respectively will be 184.4 - 205.5 mg/dl; indicating that the interval includes true population mean m =
200 mg/dl with 95% confidence. In essence, a confidence interval is a range that we expect, with some level of confidence, to include the actual value of population mean. ^[17]
As explained above, SD and SEM estimate quite different things. But in many articles, SEM and SD are used interchangeably and authors summarize their data with SEM as it makes data seem less variable
and more representative. However, unlike SD which quantifies the variability, SEM quantifies uncertainty in estimate of the mean. ^[13] As readers are generally interested in knowing the variability
within sample and not proximity of mean to the population mean, data should be precisely summarized with SD and not with SEM. ^[18],[19]
The importance of SD in clinical settings is discussed below. In a atherosclerotic disease study, an investigator reports mean peak systolic velocity (PSV) in the carotid artery, a measure of
stenosis, as 220cm/sec with SD of 10cm/ sec. ^[20] In this case it would be unusual to observe PSV less than 200 cm/sec or greater than 240cm/sec as 95% of population fall within 2SD of the mean,
assuming that the population follows a normal distribution. Thus, there is a quick summary of the population and the range against which to compare the specific findings. Unfortunately, investigators
are quite likely to report the PSV as 220cm/ sec ± 1.6 (SEM). If one confused the SEM with the SD, one would believe that the range of the population is narrow (216.8 to 223.2cm/sec), which is not
the case.
Additionally, when two groups are compared (e.g. treatment and control groups), SD helps in visualizing the effect size, which is an index of how much difference is there between two groups. ^[12]
Effect size gives an idea of magnitude of difference to help differentiate between statistical significance and practical importance. Effect size is determined by calculating the difference between
the means divided by the pooled or average standard deviation from two groups. Generally, effect size of 0.8 or more is considered as a large effect and indicates that the means of two groups are
separated by 0.8SD; effect size of 0.5 and 0.2, are considered as moderate or small respectively and indicate that the means of the two groups are separated by 0.5 and 0.2SD. ^[12] However, same
can't be interpreted with SEM. More importantly, SEMs do not provide direct visual impression of the effect size, if number of subjects differs between groups.
Exceptionally the SD as an index of variability may be a deceptive one in many experimental situations where biological variable differs grossly from a normal distribution (e.g. distribution of
plasma creatinine, growth rate of tumor and plasma concentration of immune or inflammatory mediators). In these cases, because of the skewed distribution, SD will be an inflated measure of
variability. In such cases, data can be presented using other measures of variability (e.g. mean absolute deviation and the interquartile range), or can be transformed (common transformations include
the logarithmic, inverse, square root, and arc sine transformations). ^[17]
Some journal editors require their authors to use the SD and not the SEM. There are two reasons for this trend. First, the SEM is a function of the sample size, so it can be made smaller simply by
increasing the sample size (n) [Figure 3]. Second, the interval (mean ± 2 SEM) will contain approximately 95% of the means of samples, but will never contain 95% of the observations on individuals;
in the latter situation, mean ± 2 SD is needed. ^[21]
In general, the use of the SEM should be limited to inferential statistics where the author explicitly wants to inform the reader about the precision of the study, and how well the sample truly
represents the entire population. ^[22] In graphs and figures too, use of SD is preferable to the SEM. Further, in every case, standard deviations should preferably be reported in parentheses [i.e.,
mean (SD)] than using mean ± SD expressions, as the latter specification can be confused with a 95% CI. ^[17]
Proper understanding and use of fundamental statistics, such as SD and SEM and their application will allow more reliable analysis, interpretation, and communication of data to readers. Though, SEM
and SD are used interchangeably to express the variability; they measure different parameters. SEM, an inferential parameter, quantifies uncertainty in the estimate of the mean; whereas SD is a
descriptive parameter and quantifies the variability. As readers are generally interested in knowing variability within the sample, descriptive data should be precisely summarized with SD. Use of SEM
should be limited to compute CI which measures the precision of population estimate.
1. Pocock SJ, Hughes MD, Lee RJ. Statistical problems in the reporting of clinical trials - a survey of three medical journals. N Engl J Med 1987;317:426-32.
2. García-Berthou E, Alcaraz C. Incongruence between test statistics and P values in medical papers. BMC Med Res Methodol 2004;4:13-7.
3. Cooper RJ, Schriger DL, Close RJ. Graphical literacy: The quality of graphs in a large-circulation journal. Ann Emerg Med 2002;40:317-22.
4. Goodman SN, Altman DG, George SL. Statistical reviewing policies of medical journals. J Gen Intern Med 1998;13:753-6.
5. Gore SM, Jones G, Thompson SG. The Lancet's statistical review process: Areas for improvement by authors. Lancet 1992;340:100-2.
6. Altman DG, Gore SM, Gardner MJ, Pocock SJ. Statistical guidelines for contributors to medical journals. BMJ 1983;286:1489-93.
7. Viswanatha Swamy AH, Wangikar U, Koti BC, Thippeswamy AH, Ronad PM, Manjula DV. Cardioprotective effect of ascorbic acid on doxorubicin-induced myocardial toxicity in rats. Indian J Pharmacol
8. Bihaqi SW, Singh AP, Tiwari M. In vivo investigation of the neuroprotective property of Convolvulus pluricaulis in scopolamine-induced cognitive impairments in Wistar rats. Indian J Pharmacol
9. Adenubi OT, Raji Y, Awe EO, Makinde JM. The effect of the aqueous extract of the leaves of boerhavia diffusa linn. on semen and testicular morphology of male Wistar rats. Sci World J 2010;5:1-6.
10. Banji D, Pinnapureddy J, Banji OJ, Kumar AR, Reddy KN. Evaluation of the concomitant use of methotrexate and curcumin on Freund's complete adjuvant-induced arthritis and hematological indices in
rats. Indian J Pharmacol 2011;43:546-50.
11. Nagele P. Misuse of standard error of the mean (SEM) when reporting variability of a sample. A critical evaluation of four anaesthesia journals. Br J Anaesth 2003;90:514-6.
12. Dawson-Sanders B, Trapp RG. Basic and clinical biostatistics. Norwalk, Connecticut: Appleton & Lange; 1990.
13. Glantz SA. How to summarize data. "Primer of Biostatistics." 5th ed. Philadelphia: McGraw-Hill; 2002. p. 10-30.
14. Lang TA. How to report statistics in medicine: Annotated guidelines for authors, editors, and reviewers. Philadelphia: American College of Physicians; 1997.
15. Lee HB, Comrey AL. Elementary statistics: A Problem Solving Approach. 4th ed. UK: William Brown; 2007.
16. Armitage P, Berry G. Statistical methods in medical research. 3rd ed. Cambridge, MA: Blackwell Scientific; 1994.
17. Curran-Everett D, Taylor S, Kafadar K. Fundamental concepts in statistics: Elucidation and illustration. J Appl Physiol 1998l;85:775-86.
18. Jaykaran, Yadav P, Chavda N, Kantharia ND. Some issue related to the reporting of statistics in clinical trials published in Indian medical journals: A Survey. Int J Pharmacol 2010;6:354-9.
19. Tom L. Twenty statistical error even YOU can find in biomedical research articles. Croat Med J 2004;45:361-70.
20. Medina LS, Zurakowski D. Measurement variability and confidence intervals in medicine: Why should radiologists care? Radiology 2003;226:297-301.
21. Bartko JJ. Rationale for reporting standard deviations rather than standard errors of mean. Am J Psychiatry 1985;142:1060.
22. Strasak AM, Zaman Q, Pfeiffer KP, Gobel G, Ulmer H, Statistical errors in medical research--a review of common pitfalls. Swiss Med Wkly 2007;137:44-9.
[Figure 1], [Figure 2], [Figure 3]
This article has been cited by
1 The use and misuse of statistical methodologies in pharmacology research
Michael J. Marino
Biochemical Pharmacology. 2013; | {"url":"http://www.picronline.org/article.asp?issn=2229-3485;year=2012;volume=3;issue=3;spage=113;epage=116;aulast=Barde","timestamp":"2014-04-20T08:22:53Z","content_type":null,"content_length":"61973","record_id":"<urn:uuid:d343d183-31b9-43c4-8a0a-2b3e7fadca0b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00293-ip-10-147-4-33.ec2.internal.warc.gz"} |
Functions as interfaces
Part of the "Why use F#?" series
An important aspect of functional programming is that, in a sense, all functions are “interfaces”, meaning that many of the roles that interfaces play in object-oriented design are implicit in the
way that functions work.
In fact, one of the critical design maxims, "program to an interface, not an implementation", is something you get for free in F#.
To see how this works, let’s compare the same design pattern in C# and F#. For example, in C# we might want to use the “decorator pattern” to enhance some core code.
Let’s say that we have a calculator interface:
interface ICalculator
{
    int Calculate(int input);
}
And then a specific implementation:
class AddingCalculator : ICalculator
{
    public int Calculate(int input) { return input + 1; }
}
And then if we want to add logging, we can wrap the core calculator implementation inside a logging wrapper.
class LoggingCalculator : ICalculator
{
    ICalculator _innerCalculator;

    public LoggingCalculator(ICalculator innerCalculator)
    {
        _innerCalculator = innerCalculator;
    }

    public int Calculate(int input)
    {
        Console.WriteLine("input is {0}", input);
        var result = _innerCalculator.Calculate(input);
        Console.WriteLine("result is {0}", result);
        return result;
    }
}
So far, so straightforward. But note that, for this to work, we must have defined an interface for the classes. If there had been no ICalculator interface, it would be necessary to retrofit the
existing code.
And here is where F# shines. In F#, you can do the same thing without having to define the interface first. Any function can be transparently swapped for any other function as long as the signatures
are the same.
Here is the equivalent F# code.
let addingCalculator input = input + 1
let loggingCalculator innerCalculator input =
    printfn "input is %A" input
    let result = innerCalculator input
    printfn "result is %A" result
    result
In other words, the signature of the function is the interface.
Generic wrappers
Even nicer is that by default, the F# logging code can be made completely generic so that it will work for any function at all. Here are some examples:
let add1 input = input + 1
let times2 input = input * 2
let genericLogger anyFunc input =
    printfn "input is %A" input   //log the input
    let result = anyFunc input    //evaluate the function
    printfn "result is %A" result //log the result
    result                        //return the result
let add1WithLogging = genericLogger add1
let times2WithLogging = genericLogger times2
The new "wrapped" functions can be used anywhere the original functions could be used — no one can tell the difference!
// test
add1WithLogging 3
times2WithLogging 3
[1..5] |> List.map add1WithLogging
Exactly the same generic wrapper approach can be used for other things. For example, here is a generic wrapper for timing a function.
let genericTimer anyFunc input =
    let stopwatch = System.Diagnostics.Stopwatch()
    stopwatch.Start()          //start the timer
    let result = anyFunc input //evaluate the function
    printfn "elapsed ms is %A" stopwatch.ElapsedMilliseconds
    result                     //return the result
let add1WithTimer = genericTimer add1WithLogging
// test
add1WithTimer 3
The ability to do this kind of generic wrapping is one of the great conveniences of the function-oriented approach. You can take any function and create a similar function based on it. As long as the new function has exactly the same inputs and outputs as the original function, the new one can be substituted for the original anywhere. Some more examples:
• It is easy to write a generic caching wrapper for a slow function, so that the value is only calculated once.
• It is also easy to write a generic “lazy” wrapper for a function, so that the inner function is only called when a result is needed.
The strategy pattern
We can apply this same approach to another common design pattern, the “strategy pattern.”
Let’s use the familiar example of inheritance: an Animal superclass with Cat and Dog subclasses, each of which overrides a MakeNoise() method to make different noises.
In a true functional design, there are no subclasses, but instead the Animal class would have a NoiseMaking function that would be passed in with the constructor. This approach is exactly the same as
the “strategy” pattern in OO design.
type Animal(noiseMakingStrategy) =
    member this.MakeNoise =
        noiseMakingStrategy() |> printfn "Making noise %s"

// now create a cat
let meowing() = "Meow"
let cat = Animal(meowing)
cat.MakeNoise

// .. and a dog
let woofOrBark() =
    if (System.DateTime.Now.Second % 2 = 0)
    then "Woof" else "Bark"
let dog = Animal(woofOrBark)
dog.MakeNoise
dog.MakeNoise //try again a second later
Note that again, we do not have to define any kind of INoiseMakingStrategy interface first. Any function with the right signature will work. As a consequence, in the functional model, the standard
.NET “strategy” interfaces such as IComparer, IFormatProvider, and IServiceProvider become irrelevant.
Many other design patterns can be simplified in the same way.
Graph Theory in Practice: Part I
What is the diameter of the World Wide Web? The answer is not 7,927 miles, even though the Web truly is World Wide. According to Albert-László Barabási, Réka Albert and Hawoong Jeong of the University of Notre Dame, the diameter of the Web is 19.
The diameter in question is not a geometric distance; the concept comes from the branch of mathematics called graph theory. On the Web, you get from place to place by clicking on hypertext links, and
so it makes sense to define distance by counting your steps through such links. The question is: If you select two Web pages at random, how many links will separate them, on average? Among the 800
million pages on the Web, there's room to wander down some very long paths, but Barabási et al. find that if you know where you're going, you can get just about anywhere in 19 clicks of the mouse.
Barabási's calculation reflects an interesting shift in the style and the technology of graph theory. Just a few years ago it would have been unusual to apply graph-theoretical methods to such an
enormous structure as the World Wide Web. Of course just a few years ago the Web didn't exist. Now, very large netlike objects seem to be everywhere, and many of them invite graph-theoretical
analysis. Perhaps it is time to speak not only of graph theory but also of graph practice, or even graph engineering.
Connect the Dots
The graphs studied by graph theorists have nothing to do with the wiggly-line charts that plot stock prices. Here is a definition of a graph, in all its glory of abstraction: A graph is a pair of
sets, V and E, where every element of E is a two-member set whose members are elements of V. For example, this is a graph: V = {a, b, c}, E = {{a, b}, {a, c}}.
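The set-based definition maps directly onto code. Here is a minimal Python sketch of the example graph above (the representation is illustrative, not canonical):

```python
# A graph is a pair (V, E): V a set of vertices, E a set of
# two-element subsets of V. frozenset gives hashable, unordered
# edges, matching the definition exactly.
V = {"a", "b", "c"}
E = {frozenset({"a", "b"}), frozenset({"a", "c"})}

# Every edge must be a two-member subset of V.
assert all(len(e) == 2 and e <= V for e in E)

# The neighbors of a vertex are the vertices it shares an edge with.
def neighbors(v, E):
    return {w for e in E if v in e for w in e if w != v}

print(neighbors("a", E))  # the two vertices joined to 'a'
```

Note that nothing here records where the dots are drawn; only the pattern of connections survives, which is exactly the point made below.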
So much for definitions; most of us prefer to think of our graphs graphically. And in fact everyone knows that what graph theory is really about is connecting the dots. The set V is made up of
vertices (also known as nodes), which are drawn as dots. The set E consists of edges (also called arcs, links or bonds), and each edge is drawn as a line joining the two vertices at its end points.
Thus the graph defined abstractly above looks like this:
Most of the time, a picture is worth at least a thousand sets, and yet there are reasons for retaining the more formal definition. When you look at a graph drawing, it's hard not to focus on the
arrangement of the dots and lines, but in graph theory all that matters is the pattern of connections: the topology, not the geometry. These three diagrams all depict the same graph:
Each of the graphs sketched above is in one piece, but not all the vertices in a graph have to be joined by edges; disconnected components can be parts of a single graph. "Multigraphs" are allowed to
have multiple edges connecting the same pair of vertices. And some graphs have self-loops: edges whose two ends are both attached to the same vertex. Another variation is the directed graph, where
each edge can be traversed in only one direction.
Euler to Erdos
Graph theory got its start in the 18th century, when the great Swiss-born mathematician Leonhard Euler solved the puzzle of the Königsberg bridges. At the time, Königsberg (now Kaliningrad) had seven
bridges spanning branches of the Pregel River. The puzzle asked whether a walk through the city could cross each bridge exactly once. The problem can be encoded in a graph (actually a multigraph) by
representing the land areas as vertices and the bridges as edges:
Euler showed that you can answer the question by tabulating the degree, or valency, of each vertex—the number of edges meeting there. If a graph has no more than two odd vertices, then some path
traverses each edge once. In the Königsberg graph all four vertices are odd.
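Euler's degree criterion is mechanical to check. A Python sketch of the Königsberg multigraph follows; the vertex labels (N, S, I, E for the two banks, the island and the eastern area) are assumptions, but the seven bridges and the degree counts match the classical puzzle:

```python
from collections import Counter

# Königsberg as a multigraph: four land masses, seven bridges.
bridges = [("N", "I"), ("N", "I"), ("S", "I"), ("S", "I"),
           ("N", "E"), ("S", "E"), ("I", "E")]

def has_euler_path(edges):
    """Euler's criterion: a connected (multi)graph has a walk using
    every edge exactly once iff at most two vertices have odd degree."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = [v for v, d in degree.items() if d % 2 == 1]
    return len(odd) <= 2

print(has_euler_path(bridges))  # False: all four vertices are odd
```

Removing any one bridge would leave only two odd vertices and make the walk possible, which is essentially what happened when Königsberg later lost bridges to war and reconstruction.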
The techniques of graph theory soon proved useful for more than planning a stroll along the Pregel. The German physicist Gustav Kirchhoff analyzed electric circuits in terms of graphs, with wires as
edges and junction points as vertices. Chemists found a natural correspondence between graphs and the structural diagrams of molecules: An atom is a vertex, and an edge is a bond between atoms.
Graphs also describe communications and transportation networks, and even the neural networks of the brain. Other applications are less obvious. For example, a chess tournament is a graph: The
players are nodes, and matches are edges. An economy is also a graph: Companies or industries are nodes, and edges represent transactions.
In the 20th century graph theory has become more statistical and algorithmic. One rich source of ideas has been the study of random graphs, which are typically formed by starting with isolated
vertices and adding edges one at a time. The master of this field was the late Paul Erdos. With his colleague Alfred Rényi, Erdos made the central finding that a "giant component"—a connected piece
of the graph spanning most of the vertices—emerges suddenly when the number of edges exceeds half the number of vertices.
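The Erdos-Rényi threshold is easy to observe by simulation. A Python sketch using union-find (the parameters are illustrative; self-loops are allowed in this rough model):

```python
import random

def largest_component(n, m, seed=1):
    """Drop m random edges among n vertices and return the size of
    the largest connected component, tracked with union-find."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for _ in range(m):
        parent[find(rng.randrange(n))] = find(rng.randrange(n))

    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

n = 10_000
for m in (n // 4, n // 2, n):
    print(m, largest_component(n, m))
```

Below n/2 edges the largest component stays tiny; by the time the edge count reaches n, a single component spans most of the vertices, just as Erdos and Rényi proved.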
The recent work on the World Wide Web and other very large graphs is also statistical and algorithmic in nature, and it has close ties to the Erdos-Rényi theory of random graphs. But there is a new
twist. Many of these huge graphs are not deliberate constructions but natural artifacts that have grown through some accretionary or evolutionary process. The Web, in particular, is an object no one
designed. On close examination, the structure of such graphs seems neither entirely random nor entirely regular. Understanding the balance of order and chaos in these graphs is one of the aims of the
current undertakings. A more basic goal is simply finding computational techniques that will not choke on a graph with 10^8 nodes.
Reach Out and Touch Everyone
A good example of a really big graph comes from telephone billing records. Joan Feigenbaum of the AT&T Shannon Laboratories in Florham Park, New Jersey, heads a group working with a graph known as the
call graph. The vertices are telephone numbers, and the edges are calls made from one number to another. A specific call graph recently analyzed by James M. Abello, P. M. Pardalos and M. G. C.
Resende of AT&T has 53,767,087 vertices and more than 170 million edges.
The call graph is actually a directed multigraph—directed because the two ends of a call can be distinguished as originator and receiver, a multigraph because a pair of telephones can exchange more
than one call in a day. For ease of analysis, these aspects of the graph are sometimes ignored: Sets of multiple edges are collapsed into a single edge, and the graph is treated as if it were
undirected. (The graph also has some 255 self-loops, which I find rather puzzling. I seldom call myself, and it's never long-distance.)
The first challenge in studying the call graph is that you can't swallow it whole. Even though the analysis was done on a computer with six gigabytes of main memory, the full graph would not fit.
Under these conditions most algorithms are ruinously inefficient, because pieces of the graph have to be repeatedly shuttled between memory and disk storage. The call graph has therefore become a
test-bed for algorithms designed to run quickly on data held in external storage.
What did Abello, Pardalos and Resende learn about the call graph? It is not a connected graph but has 3.7 million separate components, most of them tiny; three-fourths of the components are pairs of
telephones that called only each other. Yet the call graph also has one giant connected component, with 44,989,297 vertices, or more than 80 percent of the total. The emergence of a giant component
is characteristic of Erdos-Rényi random graphs, but the pattern of connections in the call graph is surely not random. Some models that might describe it will be taken up in Part II of this article,
to appear in the March–April issue.
Abello and his colleagues went hunting within the call graph for structures called cliques, or complete graphs. They are graphs in which every vertex is joined by an edge to every other vertex.
Identifying the largest such structure—the maxclique—is computationally difficult even in a graph of moderate size. In the call graph, the only feasible strategy is a probabilistic search that finds
large cliques without proving them maximal. Abello et al. found cliques of size 30, which are almost surely the largest. Remarkably, there are more than 14,000 of these 30-member cliques. Each clique
represents a distinct group of 30 individuals in which everyone talked with everyone else at least once in the course of a day.
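A probabilistic clique search of this general flavor can be sketched as a greedy randomized heuristic. This is an illustrative reconstruction, not the AT&T authors' actual algorithm:

```python
import random

def greedy_clique(adj, trials=200, seed=0):
    """adj: dict vertex -> set of neighbors. Repeatedly grow a clique
    from a random start vertex, at each step adding a vertex adjacent
    to every member so far. Returns the largest clique found; like the
    search described in the text, it proves nothing about maximality."""
    rng = random.Random(seed)
    best = []
    vertices = list(adj)
    for _ in range(trials):
        v = rng.choice(vertices)
        clique = [v]
        candidates = set(adj[v])
        while candidates:
            w = rng.choice(sorted(candidates))
            clique.append(w)
            candidates &= adj[w]  # keep only common neighbors
        if len(clique) > len(best):
            best = clique
    return best

# Tiny example: a triangle {1,2,3} plus a pendant vertex 4.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(sorted(greedy_clique(adj)))  # [1, 2, 3]
```

On 53 million vertices the bookkeeping is far harder, which is why the call-graph work doubles as a test of external-memory algorithms.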
People Who Know People
Some of the most interesting large graphs are those in which we are the vertices. These "social graphs" are associated with the phrase "six degrees of separation," popularized by a 1990 play of that
title and a later film, both written by John Guare. The idea is that the acquaintanceship graph connecting the entire human population has a diameter of six or less. Guare attributes this notion to
Guglielmo Marconi, who supposedly said that wireless telegraphy would so contract the world that any two people could be connected by a chain of 5.83 intermediaries. Did Marconi really make such a
statement? I have been unable to find any evidence. (And the two decimal places of precision do nothing to increase my faith in the number's authenticity).
Even if Marconi did have ideas about the acquaintanceship graph, they were unknown to those who later took up the subject. In the 1950s and 60s Anatol Rapoport based a theory of social networks on
the idea of random graphs. He showed that any bias in the random placement of edges tends to reduce the overall connectivity of the graph and increases its diameter. Thus social structures that bring
people together in clusters have the side effect of pushing the clusters farther apart. On the basis of this mathematical result, the sociologist M. S. Granovetter argued that what holds a society
together are not the strong ties within clusters but the weak ones between people who span two or more communities.
Also in the 1950s, Ithiel de Sola Pool and Manfred Kochen tried to estimate the average degree of the vertices in the acquaintanceship graph and guessed that the order of magnitude is 1,000. This
high density of interpersonal contacts led them to conjecture that anyone in the U.S. "can presumably be linked to another person chosen at random by two or three intermediaries on the average, and
almost with certainty by four."
This "small-world hypothesis" was put to the test a decade later in a famous experiment by Stanley Milgram. Packets addressed to an individual in the Boston area were given to volunteers in Nebraska
and Kansas. Each volunteer was directed to pass the packet along to any personal acquaintance who might get it closer to its intended recipient. Instructions within the packet asked each person who
received it to follow the same procedure. For the packets that made it all the way to their destination, the mean number of intermediary nodes was 5.5.
Milgram's experiment was ingenious, and yet it did not quite establish that everyone on the planet is within six handshakes of everyone else. In the first place, the reported path length of 5.5 nodes
was an average, not a maximum. (The range was from 3 to 10.) Two-thirds of the packets were never delivered at all. Furthermore, although Nebraska and Kansas may seem like the ends of the earth from
Massachusetts, the global acquaintanceship graph probably has a few backwaters even more remote. And if Milgram's result is not an upper bound on the diameter of the graph, neither is it a lower one:
There is no reason to believe that all the participants in the study found the shortest possible route.
Certain subgraphs of the acquaintanceship graph have been explored more thoroughly. The prototype is the "collaboration graph" centered on Paul Erdos, who was the most prolific mathematician ever
(Euler was second). In this graph distance from Erdos's node is termed the Erdos number. Erdos himself has an Erdos number of 0. All those who co-authored a paper with him have Erdos number 1. Those
who did not write a joint paper with Erdos but who are co-authors of a co-author have Erdos number 2, and so on. The graph built up in this way, by adding concentric layers of co-authors, can be
viewed as a component of a larger graph with a node for every contributor to the literature of science. Although the graph as a whole cannot be connected—if only because of "soloists" who never
collaborate with anyone—the connected component centered on Erdos is thought to encompass almost all active scientists and to have a small diameter.
Another collaboration graph has movie actors instead of scientists at the vertices, with the central role given to Kevin Bacon. Because feature films are a smaller universe than scientific
publications, the structure of this "Hollywood graph" can be determined in greater detail. If the records of the Internet Movie Database can be taken as complete and definitive, then the Hollywood
graph has 355,848 vertices, representing actors who have appeared in 170,479 films.
Brett C. Tjaden and Glenn Wasson of the University of Virginia maintain a Web site (The Oracle of Bacon) that tabulates Bacon numbers. Because the entire graph is known, there is no need to speculate
about whether or not it is connected or what its diameter might be. The questions can be answered directly. The Hollywood graph includes exactly one person with Bacon number 0 (that one's easy to
guess); there are 1,433 with Bacon number 1; another 96,828 have Bacon number 2, and 208,692 occupy nodes at Bacon number 3. But because the number of actors is finite, the rings around Bacon cannot
continue expanding. At Bacon number 4 there are 46,019 actors, then 2,556 at distance 5, and 252 at Bacon number 6. Finally there are just 65 actors who require seven intermediaries to be connected
to Kevin Bacon, and two exceptionally obscure individuals whose Bacon number is 8. (Finding any actor in tiers 7 or 8 will earn you a place in the Oracle's hall of fame.)
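Computing Bacon (or Erdos) numbers is breadth-first search outward from the central vertex, adding one concentric layer at a time. A Python sketch over a toy co-appearance graph (the names are hypothetical):

```python
from collections import deque

def distances_from(source, adj):
    """Breadth-first search: returns each reachable vertex's distance
    (number of edges) from the source vertex."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj.get(v, ()):
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

adj = {
    "Bacon": {"A"},
    "A": {"Bacon", "B"},
    "B": {"A", "C"},
    "C": {"B"},
    "Loner": set(),  # a disconnected soloist gets no number at all
}
print(distances_from("Bacon", adj))  # {'Bacon': 0, 'A': 1, 'B': 2, 'C': 3}
```

The tiers reported by the Oracle of Bacon are exactly the layer sizes such a search produces.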
A new attempt to construct a major piece of the global acquaintanceship graph is now under way at a Web site called sixdegrees.com, founded by Andrew Weinreich. Here you are invited to fill out a
form listing the e-mail addresses of your friends, who will be invited to create database entries of their own. Thus, you should be able to explore the social graph as seen from your own position in
it—everyone gets a chance to be Kevin Bacon or Paul Erdos. When I last checked, sixdegrees.com had 2,846,129 members. Statistics on the structure of the evolving graph have not been published, but a
review by Janelle Brown in Salon magazine offers some clues. Brown reports: "I, for example, have fourteen contacts in my inner circle, 169 in my second degree, 825 in my third, 3,279 in my fourth,
10,367 in my fifth and 26,075 in my sixth." The fact that these numbers continue increasing and have not begun to approach the total size of the graph suggests that a giant connected component has
not yet emerged at sixdegrees.com.
The Width of the Web
As an object of study for graph theorists, the World Wide Web has the advantage that it comes already encoded for computer analysis. The vertices and edges do not have to be catalogued; any computer
attached to the Internet can navigate through the graph just by following links from node to node. Like the AT&T call graph, the Web is a directed multigraph with self-loops, but many analyses ignore
these complications and treat the Web as if it were a simple undirected graph.
To estimate the diameter of the Web, Barabási and his colleagues at Notre Dame did not visit every node and traverse every link; they studied a small corner of the Web and extrapolated to the rest of
the graph. The Barabási group used a software "robot" to follow all the links on a starting page, then all the links on each page reached from that page, and so on. This is the same technique
employed by search engines to index the Web, but search engines are never exhaustive; they are tuned to catalogue documents of interest to people, not to measure the connectivity of a graph.
Initially, the Notre Dame robot looked only at the nd.edu Internet domain and gathered information on 325,729 documents and 1,469,680 links (about 0.3 percent of the Web). The key step in the
analysis of these data was to calculate the probability that a page has a given number of inward and outward links. Barabási and his colleagues found that both probabilities obey a power law.
Specifically, the probability that a page has k outward links is proportional to k^–2.45, and the probability of k inward links is given by k^–2.1. The power law implies that pages with just a few
links are the most numerous, but the probability of larger numbers of links falls off gradually enough that pages with several hundred or several thousand links are to be expected.
Although nodes of very high degree are rare, they have an important effect on the connectivity of the Web. Such nodes shrink the graph by providing shortcuts between otherwise distant vertices. For
the nd.edu domain, Barabási et al. measured an average diameter of 11.2 edges; the power-law model predicted 11.6. Extrapolating to the Web as a whole yielded a diameter of about 19 links.
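The extrapolation rests on a logarithmic fit of diameter against graph size. The published study reported a fit of roughly d ≈ 0.35 + 2.06 log10 N; the constants below should be treated as approximate quoted values, not a derivation:

```python
import math

# Approximate logarithmic fit reported by Barabási et al.
# (constants quoted, not derived; treat as illustrative).
def web_diameter(n_pages):
    return 0.35 + 2.06 * math.log10(n_pages)

# Close to the 11.6 predicted for the nd.edu crawl:
print(round(web_diameter(325_729), 1))
# And about 19 for the estimated 800 million pages of the whole Web:
print(round(web_diameter(800_000_000), 1))
```

The logarithm is the whole story: multiplying the Web's size a thousandfold adds only about six links to its diameter.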
The diameter of the graph is an important statistic when you are trying to find something on the Web. A blind, random search would typically have to examine half the 800 million documents before
stumbling on the right one. But the Notre Dame result suggests that from any reasonable starting point, there should be a path to the target page crossing only about 19 links. Barabási et al. remark:
"The relatively small value of [the diameter] indicates that an intelligent agent, who can interpret the links and follow only the relevant one, can find the desired information quickly by navigating
the web." (But finding the relevant link is not always easy! When I tried searching for paths between randomly chosen pages, I came away doubting that I qualify as an intelligent agent.)
Rare nodes of high degree also play a role in other graph-theoretical analyses of the Web. One group doing such work calls itself the Clever project. The vertices in the Clever collaboration graph
include Jon Kleinberg of Cornell University and Prabhakar Raghavan and Sridhar Rajagopalan of the IBM Almaden Research Center. The Clever group draws attention to two special kinds of nodes in the
Web. "Hubs" are nodes of high out-degree—pages that point to many other pages. "Authorities" have high in-degree—they are pointed to by many other pages, and especially by hubs. Typical hubs are
lists of personal bookmarks or pages from directory services such as Yahoo. An authority is a Web page that many people find interesting enough to create a link to it.
The Clever algorithm defines hubs and authorities by an iterative feedback process. An initial scan of the Web identifies pages of high out-degree and high in-degree, which form the initial sets of
candidate hubs and authorities. Then these sets are refined by a recursive procedure that discards a hub candidate unless many of its outward links point to pages that are members of the authority
set; likewise authorities are weeded out unless they are pointed to by many of the hubs. Repeated application of this algorithm narrows the focus to those hubs and authorities that are most densely
connected to one another.
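This recursive refinement is essentially the hubs-and-authorities (HITS) algorithm associated with Kleinberg. A compact Python sketch on a toy link graph, as a simplified rendering rather than the Clever project's production code:

```python
def hits(links, iterations=50):
    """links: dict page -> set of pages it points to.
    Each round: authority(p) = sum of hub scores of pages pointing
    to p; hub(p) = sum of authority scores of pages p points to;
    both vectors are then normalized."""
    pages = set(links) | {q for out in links.values() for q in out}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        auth = {p: sum(hub[q] for q in pages if p in links.get(q, ()))
                for p in pages}
        norm = sum(v * v for v in auth.values()) ** 0.5
        auth = {p: v / norm for p, v in auth.items()}
        hub = {p: sum(auth[q] for q in links.get(p, ())) for p in pages}
        norm = sum(v * v for v in hub.values()) ** 0.5
        hub = {p: v / norm for p, v in hub.items()}
    return hub, auth

# Two directory-style pages both point at "a" and "b"; "b" also cites "a".
links = {"hub1": {"a", "b"}, "hub2": {"a", "b"}, "a": set(), "b": {"a"}}
hub, auth = hits(links)
print(max(auth, key=auth.get))  # "a" emerges as the strongest authority
```

The mutual reinforcement is what lets the method converge on densely interlinked hub/authority pairs while ignoring page content entirely.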
In one project, members of the Clever group have employed links between hubs and authorities to identify more than 100,000 "emerging communities"—collections of Web sites that share some common
theme. For example, the survey found pages associated with Australian fire brigades and with Turkish student organizations in the U.S. Remarkably, the communities were identified by a method that did
not rely in any way on the content of the Web pages; the algorithm looked only at the pattern of connectivity.
Similar principles are at work in a Web search engine called Google, developed by Sergey Brin and Lawrence Page of Stanford University. Google employs a conventional text-based scan to create an
index of the Web's content, but the pages recommended in response to a query are ranked according to information from the link analysis. A page is rated highly if many pages point to it, and if many
other pages point to those pages, and so on.
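That mutually recursive rating is the PageRank idea. A minimal power-iteration sketch follows; the damping factor 0.85 is the conventional choice, and the link graph is a toy:

```python
def pagerank(links, damping=0.85, iterations=100):
    """links: dict page -> list of pages it points to.
    A page's rank is (1 - d)/N plus d times the rank it inherits from
    pages linking to it, split evenly across their outgoing links."""
    pages = set(links) | {q for out in links.values() for q in out}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, out in links.items():
            if out:
                share = damping * rank[p] / len(out)
                for q in out:
                    new[q] += share
        rank = new
    return rank

links = {"x": ["y", "z"], "y": ["z"], "z": ["x"]}
rank = pagerank(links)
print(max(rank, key=rank.get))  # "z": linked from both x and y
```

As with the Clever algorithm, the ranking uses only the pattern of connectivity, not the text of the pages.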
Measuring properties of a graph such as the diameter or the distribution of vertex degrees is a first step toward understanding its structure. The next step is to develop a mathematical model of the
structure, which typically takes the form of an algorithm for generating graphs with the same statistical properties. Such models of very large graphs will be the subject of Part II of this article.
© Brian Hayes | {"url":"http://www.americanscientist.org/issues/id.3279,y.0,no.,content.true,page.3,css.print/issue.aspx","timestamp":"2014-04-20T01:29:14Z","content_type":null,"content_length":"119464","record_id":"<urn:uuid:f445363b-f2bb-4760-b415-5e9114ab421f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00431-ip-10-147-4-33.ec2.internal.warc.gz"} |
United States Patent Application 20020116348
Kind Code A1
Phillips, Robert L. ;   et al. August 22, 2002
Dynamic pricing system
The present invention provides a dynamic pricing system that generates pricing recommendations for each product in each market. In particular, the system normalizes historic pricing and sales data,
and then analyzes this historic data using parameters describing the user's business objectives to produce a pricing list to achieve these objectives. The system uses historical market data to
forecast expected sales according to a market segment, product type, and a range of future dates and to determine the effects of price changes on the forecasted future sales. The system further
calculates unit costs for the product. The system then estimates profits from sales at different prices by using the sales forecasts, adjusting these sales forecasts for changes in prices, and the
cost determinations. The system optionally optimizes prices given current and projected inventory constraints and generates alert notices according to pre-set conditions.
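The forecast-adjust-optimize loop the abstract describes can be sketched as follows. Every name, the logistic price-response form (cf. claim 17), and all numbers are illustrative assumptions, not the patented method itself:

```python
import math

def demand(price, base_volume, ref_price, sensitivity):
    """Logistic price response: the normalized baseline forecast is
    scaled down as price rises above the reference price."""
    return base_volume / (1.0 + math.exp(sensitivity * (price - ref_price)))

def optimal_price(base_volume, ref_price, sensitivity, unit_cost,
                  min_price=None, max_price=None):
    """Grid-search the price maximizing (price - cost) * demand,
    subject to optional strategic min/max price constraints
    (cf. claims 5-7)."""
    lo = min_price if min_price is not None else unit_cost
    hi = max_price if max_price is not None else ref_price * 3
    best_p, best_profit = lo, float("-inf")
    steps = 300
    for i in range(steps + 1):
        p = lo + (hi - lo) * i / steps
        profit = (p - unit_cost) * demand(p, base_volume, ref_price, sensitivity)
        if profit > best_profit:
            best_p, best_profit = p, profit
    return best_p, best_profit

p, profit = optimal_price(base_volume=1000, ref_price=10.0,
                          sensitivity=0.8, unit_cost=4.0)
print(round(p, 2), round(profit))
```

A production system would replace the toy demand curve with parameters estimated from the normalized historical data per channel segment.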
Inventors: Phillips, Robert L.; (Palo Alto, CA) ; Gordon, Michael S.; (San Mateo, CA) ; Ozluk, Ozgur; (San Francisco, CA) ; Alberti, Stefano; (Mountain View, CA) ; Flint, Robert A.; (Redwood
City, CA) ; Andersson, Jorgen K.; (Sunnyvale, CA) ; Rangarajan, Keshava P.; (Twickenham MiddleSex, IN) ; Grossman, Tom; (Rockville, MD) ; Cooke, Raymond Mark; (Half Moon Bay, CA) ;
Cohen, Jeremy S.; (Sunnyvale, CA)
Correspondence Celine Jimenez Crowson
Address: Hogan & Hartson L.L.P.
555 13th Street, N.W.
Serial No.: 859674
Series Code: 09
Filed: May 18, 2001
Current U.S. Class: 705/400
Class at Publication: 705/400
International Class: G06F 017/00
What is claimed:
1. A system for dynamically pricing a product, the system comprising: a. means for collecting and storing data on past sales; b. means for forecasting normalized future sales volume based upon the past sales data; c. means for determining price sensitivity of consumers to changes in price of the product based upon past data; d. means for forecasting future sales volume at different prices by adjusting the normalized future sales volume forecast by the price sensitivity; and e. means for determining an optimal price that maximizes profits using the future sales volume forecast and costs for the product.
2. The system of claim 1 further comprising means for classifying the past sales into one or more channel segments, whereby each of the past sales is classified into only one channel segment.
3. The system of claim 2, wherein the means for determining an optimal price determines an optimal price in each of the channel segments.
4. The system of claim 2, wherein the costs for the product include a different channel segment cost in each of the channel segments.
5. The system of claim 1, wherein the means for determining an optimal price accounts for one or more strategic objectives.
6. The system of claim 5, wherein one of said strategic objectives is a minimum price for the product.
7. The system of claim 5, wherein one of said strategic objectives is a maximum price for the product.
8. The system of claim 5, wherein one of said strategic objectives is a minimum sales volume for the product.
9. The system of claim 5, wherein one of said strategic objectives is a maximum sales volume for the product.
10. The system of claim 1 further comprising a means for forecasting a response of a competitor to a change in the price of the product by the seller, whereby the means for forecasting future sales
volume at different prices accounts for the competitor's response.
11. The system of claim 1 further comprising a means for determining lost sales data, whereby the means for forecasting future sales volume at different prices accounts for the lost sales data.
12. The system of claim 1 further comprising a means for alerting the seller of an occurrence of a pre-specified event.
13. The system of claim 12, wherein the means for alerting the seller compares prices for actual sales to the optimal price, and the pre-specified event is a difference between the actual sales and
the optimal price.
14. The system of claim 12, wherein the means for alerting the seller compares actual sales at the optimal price to the forecasted sales volumes at the optimal price.
15. The system of claim 14, wherein the pre-specified event occurs when a ratio of actual sales volume to the forecasted sales volume is less than a first pre-specified amount.
16. The system of claim 14, wherein the pre-specified event occurs when the forecasted sales volume exceeds the actual sales volume by more than a second pre-specified amount.
17. The system of claim 1, wherein the means for determining price sensitivity uses a logistic mathematical model.
18. A method of dynamically pricing a product, the method comprising the steps of: a. collecting data on past sales; b. forecasting normalized future sales volume based upon the past sales data; c.
determining price sensitivity of consumers to changes in price of the product based upon the past sales data; d. forecasting future sales volume at different prices by adjusting the normalized future
sales volume forecast by the price sensitivity; and e. determining an optimal price that maximizes profits using the future sales volume forecast and costs for the product.
19. The method of claim 18 further comprising the step of dynamically determining the costs for the product.
20. The method of claim 18 further comprising the step of classifying the past sales into different channel segments, wherein each of the past sales is classified into only one of the channel
segments and wherein the step of forecasting future sales at different prices further comprises forecasting future sales in each of the channel segments.
21. The method of claim 20, wherein the costs for the product include a different channel segment cost for each of the channel segments.
22. The method of claim 20, wherein the step of determining an optimal price is performed for each of the channel segments.
23. The method of claim 18, wherein the step of determining an optimal price includes accounting for one or more strategic objectives.
24. The method of claim 23 further comprising accepting and storing one or more strategic objectives from the seller.
25. The method of claim 23, wherein one of said strategic objectives is a minimum price for the product.
26. The method of claim 23, wherein one of said strategic objectives is a maximum price for the product.
27. The method of claim 23, wherein one of said strategic objectives is a minimum sales volume for the product.
28. The method of claim 23, wherein one of said strategic objectives is a maximum sales volume for the product.
29. The method of claim 18, wherein the step of forecasting future sales volume further accounts for inventory of the product.
30. The method of claim 29, wherein the inventory accounts for the forecasted sales for the product at the optimal price.
31. The method of claim 18, wherein the step of forecasting future sales volume further accounts for an expected response of a competitor.
32. The method of claim 18, wherein the step of forecasting future sales volume further accounts for lost sales data.
33. The method of claim 18, further comprising the step of comparing actual sales at the optimal price to forecasted sales volumes at the optimal price.
34. The method of claim 33 further comprising the step of adjusting the optimal price to account for actual sales.
35. The method of claim 33 further comprising the step of alerting the seller when the ratio of actual sales volume to forecasted sales volume at the optimal price is less than a first pre-specified amount.
36. The method of claim 33 further comprising the step of alerting the seller when the actual sales volume is less than the forecasted sales volume by more than a second pre-specified amount.
37. The method of claim 18, wherein the step of determining an optimal price further comprises accounting for a volume discount for the product.
38. The method of claim 18, wherein the step of determining price sensitivity further comprises using a logistic mathematical model.
39. The method of claim 18, wherein the step of determining price sensitivity further comprises accounting for a relationship between sales of the product and a second product.
40. A dynamic pricing network for determining a recommended price for a product, the network comprising: a database storing information on prior transactions of the product; a normalized sales
forecast module that accesses the information in the database to form a normalized forecast of future sale volumes; a price sensitivity module that accesses the information in the database to
determine price sensitivity of consumers to changes in price of the product; a sales forecast module that uses the normalized forecast and the price sensitivity to form a forecast of future sales
volumes at each of multiple different prices; a costs module that accesses the information in the database to determine costs for the product; and an optimizer that recommends a profit-maximizing
price using the forecast of future sales volumes and the costs.
41. The dynamic pricing network of claim 40 further comprising a pre-processor that accesses the information in the database and classifies the past transactions into one or more channel segments,
whereby the pre-processor classifies each of the transactions into only one channel segment.
42. The dynamic pricing network of claim 41, wherein the optimizer further determines an optimal price in each of the channel segments.
43. The dynamic pricing network of claim 41, wherein the cost module further determines a cost in each of the channel segments.
44. The dynamic pricing network of claim 40 further comprising a strategic objectives database storing data on one or more strategic objectives, wherein the optimizer accesses the strategic
objectives database and accounts for one or more strategic objectives when recommending the profit-maximizing price.
45. The dynamic pricing network of claim 40 further comprising: an alert condition database that stores one or more alert conditions; and an alert generator that notifies a user when one of the alert
conditions occurs.
46. An article of manufacture, which comprises a computer readable medium having stored therein a computer program for dynamically determining a price for a product, the computer program comprising:
(a) a first code segment which, when executed on a computer, defines a database storing information on prior transactions of the product; (b) a second code segment which, when executed on a computer,
defines a normalized sales forecast module that automatically forms a normalized forecast of future sales; (c) a third code segment which, when executed on a computer, defines a price sensitivity
module that automatically determines price sensitivity for the product; (d) a fourth code segment which, when executed on a computer, uses the normalized forecast and the price sensitivity to form
forecasts of future sales of the product at different prices; (e) a fifth code segment which, when executed on a computer, determines costs for the product; and (f) a sixth code segment which, when
executed on a computer, uses the forecast of future sales at different prices and the costs to automatically recommend a profit-maximizing price.
47. A program storage device readable by a machine, tangibly embodying a program of instructions executable by a machine to perform method steps for dynamically determining a price for a product, said
method steps comprising: a. collecting data on past sales; b. forecasting a normalized future sales volume under current conditions identified in the past sales data; c. determining price sensitivity
of consumers to changes in price of the product based upon the past sales data; d. forecasting an adjusted future sales volume at different prices by adjusting the normalized future sales volume
forecast by the price sensitivity; and e. determining an optimal price that maximizes profits using the adjusted future sales volume forecast and costs for the product.
[0001] This application claims priority from U. S. Provisional Application No. 60/205,714, filed on May 19, 2000, the disclosure of which is hereby incorporated by reference in full.
[0002] The present invention is a dynamic pricing system for producing an optimized price recommendation to maximize expected profits based upon forecasted sales and price sensitivity derived from
prior transaction statistics.
[0003] Historically, there has been no way for a supplier to predict, with high certainty, the price at which a product must be sold in order to maximize profits. Under traditional sales models,
pricing decisions are made based on estimates, such as anticipated product demand and presumed price sensitivity, in the hope of maximizing profits. The procedure for forming these estimates is time
and labor intensive. For example, it is known in existing spreadsheet programs to recalculate derived values automatically from data changes entered into the spreadsheet. Display of such recalculated
values facilitates evaluation of hypothetical "what if" scenarios for making business decisions. However, this is done by changing a value in a cell of the spreadsheet, resulting in recalculating all
variable entries dependent on the variable changed. It is not easy for the user to see the global effect of such changes without a careful review of the recalculated spreadsheet or separate screens
showing graphs derived from the recalculated spreadsheet. The result is a cumbersome iterative process in which the user must change a value in a cell of the spreadsheet, obtain a graph of the
resulting dependent variable changes, determine whether those results are as desired and, if not, go back to the spreadsheet, make another value change in a cell, redraw the graph, and so on until
desired results are achieved. The process is even more cumbersome if the user desires to add a line to a graph, which requires the generation of new cells in the spreadsheet. An improved system would
automatically perform these functions with little input from users.
[0004] There are several difficulties in forming an automated dynamic pricing system. One problem is that most sellers keep incomplete pricing data. For example, while the ideal client for the system
would maintain data on lost customers, competitor prices, industry availability and the like, most sellers will have data on only a subset of the potential drivers of market response. Furthermore,
known dynamic pricing systems can neither adjust rapidly to account for changes in market conditions nor suggest different prices for different markets.
[0005] In response to these and other needs, the present invention provides a dynamic pricing system that generates pricing recommendations for one or more products. The system divides records of
prior sales to define market segments, such that each sale only falls into a single segment. The system then uses pricing and sales data from these sales to determine optimal prices in view of
parameters describing the user's business objectives to produce a pricing list to achieve these objectives. In particular, the system uses historical market data to forecast expected sales within
each channel segment, product type, and a range of future dates. Historical market data is further used to predict the effects of price changes on the forecasted future sales. The system then
estimates profits from sales at different prices by using the sales forecasts, adjusting these sales forecasts for the different prices, and then subtracting costs for the product, which are an input to the system. The system optionally optimizes prices given current and projected inventory constraints and different strategic objectives, also known as business rules. The system therefore provides
the user with prices that maximize profits within the desired sales volume levels.
[0006] In one embodiment, after making price recommendations using the forecasted sales numbers, the system monitors actual sales and pricing information. The system then compares the forecasted
sales statistics with the actual sales statistics and notifies the users of any differences, such as actual sales volumes or prices that differ greatly from the forecasted values.
[0007] In another embodiment, the dynamic pricing system is general enough to provide price recommendations with varying degrees of available data. In particular, the system produces a viable pricing
value estimate using available data, and then refines that price estimate with increased forecasting accuracy by incorporating new data as it becomes available. In this way, the system
functions constantly and in real time to update and alter price recommendations to reflect the most recently acquired sales data.
[0008] Referring now to the drawings in which like reference numbers represent corresponding parts throughout:
[0009] FIGS. 1 and 6 are schematic diagrams of systems incorporating the dynamic pricing system of FIG. 2 in accordance with a preferred embodiment of the present invention;
[0010] FIG. 2 is a schematic diagram of a dynamic pricing system in accordance with a preferred embodiment of the present invention;
[0011] FIGS. 3-5 are output images from the system of FIG. 2 in accordance with a preferred embodiment of the present invention; and
[0012] FIG. 7 is a flowchart diagram for a dynamic pricing method in accordance with a preferred embodiment of the present invention.
[0013] As depicted in FIG. 1, the present invention provides a dynamic pricing system 100 for automatically producing a set of price recommendations. The dynamic pricing system 100 is electronically
connected to an input device 10 and one or more output devices 20. The input device 10, such as a keyboard or mouse, allows a user to provide data into the dynamic pricing system 100 by transferring
information into an electronic format as needed by the dynamic pricing system 100. Analogously, the output devices 20, such as a monitor or a printer, present price recommendations and other
information from the dynamic pricing system 100 to the user in a non-electronic format. The input and output devices 10 and 20 allow an electronic dialogue between the user and the dynamic pricing
system 100.
[0014] As depicted in FIG. 2, the dynamic pricing system 100 generally includes a Transaction Database 120, a Normalized Sales Forecaster 130, a Price Sensitivity Model 140, a Cost Model 150, a Sales
Forecaster 160, and a price optimizer 200. The components combine to allow the dynamic pricing system 100 to use historical data from prior transactions to form profit maximizing price
recommendations for future sales. The dynamic pricing system 100 specifically uses the historical data to estimate price elasticity for a product in a particular channel segment. The dynamic pricing
system 100 further uses the historical data to predict future product sales at current prices. The dynamic pricing system 100 then combines the sales predictions with the price elasticity results to
form a prediction of sales levels in the market segment in the future at different prices for the product. The dynamic pricing system 100 then determines costs for the product and combines the costs
result with the predicted sales at the different price levels to determine a set of optimal, profit maximizing prices for a product in different markets. The function of these individual components
is now described in greater detail.
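The flow among these components can be sketched in code. The following Python fragment is an illustrative assumption only: every name and parameter value is fabricated, the arctangent sensitivity curve is in the spirit of Eq. 1 below, and the simple grid search stands in for whatever optimizer the system actually uses.

```python
import math

# Hypothetical figures for one product in one channel segment; none of
# these names or values come from the patent itself.
REF_PRICE = 35.0          # reference price P_REF
NORMALIZED_VOLUME = 100   # NSF 130 forecast at the reference price
UNIT_COST = 20.0          # adjusted product cost from the CM 150
ALPHA = 0.2               # empirically fitted sensitivity parameter

def price_sensitivity(price):
    """Arctangent form of F_PS(P): equals 1 at P_REF, asymptotes at 0 and 2."""
    return 1.0 - (2.0 / math.pi) * math.atan(ALPHA * (price - REF_PRICE))

def forecast_volume(price):
    """SF 160: scale the normalized forecast by the sensitivity factor."""
    return NORMALIZED_VOLUME * price_sensitivity(price)

def optimal_price(candidates):
    """Price optimizer 200: grid search for the profit-maximizing price."""
    return max(candidates, key=lambda p: (p - UNIT_COST) * forecast_volume(p))

best = optimal_price(range(25, 46))   # candidate prices $25 .. $45
```

A production optimizer would search a finer grid or solve analytically, but the structure (normalized forecast, sensitivity adjustment, cost subtraction) mirrors the component flow described above.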
[0015] Transaction Data
[0016] The system 100 stores a record of prior transactions in a transaction database 120. The user may input this information using the input device 10 or, as described below, transaction data may
be automatically fed into the transaction database 120 from outside sources, for example, by monitoring shipments to customers. It should be appreciated, however, that the particular manner and
method of processing and storing transaction data may be selected as necessary to fulfill the user's needs. In particular, the present invention relates to the analysis of transaction data and does
not generally concern the collection of this data. In fact, the dynamic pricing system 100 adjusts to create accurate price recommendations where the data collection is flawed or incomplete, as
describe in greater detail below.
[0017] Typically, a pre-processor 110 analyzes transaction data so that the transaction database 120 is organized in a usable, functional manner. In this way, the transaction database may have
any usable storage format, as needed for quick and consistent data access. In one embodiment, the transaction database 120 is a multi-dimensional database for On Line Analytical Processing (OLAP).
Multi-dimensional databases facilitate flexible, high performance access and analysis of large volumes of complex and interrelated data, even when that data spans several applications in different
parts of an organization. Aside from its inherent ability to integrate and analyze large volumes of enterprise data, the multi-dimensional database offers a good conceptual fit with the way end-users
visualize business data. For example, a monthly revenue and expense statement with its row and column format is an example of a simple two-dimensional data structure. A three-dimensional data
structure might be a stack of these worksheets, one for each month of the year. With the added third dimension, end-users can more easily examine items across time for trends. Insights into business
operations can be gleaned and powerful analysis
such as forecasting and statistics can be applied to examine relationships and project future opportunities.
[0018] The transaction data in the transaction database 120 generally includes information that specifies the details of each transaction, such as the date of the transaction, the transacted product,
the price for the transacted products, the parties involved in the transaction, etc. Each transaction has several attributes specifying its different features, and by exploiting the similarities
within the attributes, the transactions can be grouped by market segments. Furthermore, different market segments may be grouped into mutually exclusive and collectively exhaustive sets called
channel segments (CS). Within this disclosure, channel segments are defined to be aggregations of transactions along market segment dimensions. For example, geographic area, size of sales, method of
delivery, buyers' characteristics, etc. may be used to define channel segments. The channel segments are specified by the user through the input device 10, and the channel segments must combine to
form a mutually exclusive, exhaustive set on the universe of all sales transactions (the "market"). In other words, each and every sale can be classified into only one channel segment. These channel
segments are the level at which product prices will be recommended and are the level at which the dynamic pricing system 100 computes forecasts. Broadly defining the channel segments improves
numerical analysis by increasing the number of samples available for analysis. However, broadly defining the channel segments limits the user/seller's possible gains in profit from pricing
multiple smaller channel segments individually.
[0019] Ideally, the user may view the transaction database 120 to review the prior transactions, as illustrated in FIG. 3. Each transaction 121 in the illustrated transaction database 120 includes a
product identifier 122, a channel segment identifier 123, a quantity of sale identifier 124, and a sales price identifier 125.
[0020] Price Sensitivity
[0021] A price sensitivity model (PSM) 140, FIG. 2, uses the information in the transaction database 120 to predict price sensitivity of buyers for the product(s) in issue. In other words, the PSM
140 mathematically estimates how changes in price for a product affect buyers' demand for that product. The price sensitivity calculations from the PSM 140 are important because the dynamic pricing
system 100 uses these calculations to predict changes in sales of the product at different prices when producing a profit maximizing price for the product. For a specific channel segment, the PSM 140
generally models price sensitivity for a particular product through a function that varies with price P to represent the relative changes in sales volumes X. The parameters for the price sensitivity
function, F.sub.PS(P), may be empirically determined through surveys, experiments, or analysis, or may otherwise be supplied by the user through the input device 10. Alternatively, the dynamic
pricing system 100 may dynamically determine the parameters for the F.sub.PS(P) from analyzing the transaction data in the transaction database 120 according to known accounting and statistical
methods. In other words, the PSM 140 looks to see how price changes in the past have affected sales within the channel segment and uses these results to predict the effect of future price
adjustments. The dynamic pricing system 100 determines separate price sensitivity functions F.sub.PS(P) for every product and channel segment.
[0022] In one implementation, the PSM 140 looks to changes in sales prices and models the changes in sales as a function of changes in prices (.delta.X/.delta.P). This method is premised on the
assumption that price has an instantaneous impact on sales volume and that this impact is consistent over time. The PSM 140 therefore assumes that sales volume is strictly a function of the price
level. In this implementation, the PSM assumes that at a starting or reference price P.sub.ref, all the demand for the particular product turns into sales. If a transaction takes place at a final
price P.sub.final, different than P.sub.ref, then the transaction quantity is assumed to be different than what it would have been at the reference price. The transaction quantity is then normalized
using a normalization factor that is produced by the price sensitivity function, F.sub.PS(P). For example, if 100 units of product are sold at P.sub.final=$30/unit, where P.sub.REF=$35/unit and
F.sub.PS(P.sub.final)=0.9, then the normalized transaction quantity is 100/0.9.apprxeq.111, implying that pricing the product at $30 in this channel segment would result in the sale of 111 units.
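The normalization arithmetic in this example can be sketched as follows; the function name is an invention, and the figures are the hypothetical ones from the paragraph above.

```python
def normalize_quantity(quantity_sold, sensitivity_factor):
    """Convert an observed sale into its reference-price equivalent.

    sensitivity_factor is F_PS(P_final); dividing the observed quantity
    by it backs out the quantity implied at the reference price.
    """
    return quantity_sold / sensitivity_factor

# 100 units sold where F_PS(P_final) = 0.9 normalize to roughly 111 units.
normalized = normalize_quantity(100, 0.9)
```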
[0023] The PSM 140 may determine the F.sub.PS(P) from a logistic model, such as that developed by the Belgian mathematician Pierre Verhulst. The logistic model is frequently used in biological
population studies and assumes upper and lower asymptotes on changes. The price sensitivity function can therefore be estimated through the following equation:
F.sub.PS(P)=1-[Arc Tan (.alpha.*(P.sub.Final-P.sub.REF))*2/Pi] (Eq. 1),
[0024] where the value of .alpha. is empirically determined according to the transaction records. For example, if the PSM 140 is selecting between two possible options for .alpha. (say .alpha..sub.1
and .alpha..sub.2), the PSM 140 then chooses the value for .alpha. that best corresponds to the sales and price numbers from prior transactions. Equation 1 has asymptotes at 0.0 and 2.0, so sales
cannot be negative and price reductions can, at most, double sales volumes. Another result of using Equation 1 is that sales volumes do not change when prices do not change.
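The empirical selection of .alpha. described above can be sketched as a least-squares comparison against prior transactions. The candidate values and the fabricated sales history below are assumptions for illustration, and the arctangent curve is in the spirit of Eq. 1.

```python
import math

REF_PRICE = 35.0   # hypothetical reference price

def f_ps(price, alpha):
    """Arctangent price-sensitivity curve: 1 at P_REF, asymptotes at 0 and 2."""
    return 1.0 - (2.0 / math.pi) * math.atan(alpha * (price - REF_PRICE))

def pick_alpha(candidates, history):
    """Choose the alpha whose curve best matches observed relative volumes.

    history holds (price, observed_volume / reference_volume) pairs drawn
    from the transaction database.
    """
    def squared_error(alpha):
        return sum((f_ps(p, alpha) - rel) ** 2 for p, rel in history)
    return min(candidates, key=squared_error)

# Fabricated history in which volume reacts strongly to price changes.
history = [(30.0, 1.55), (35.0, 1.0), (40.0, 0.5)]
best_alpha = pick_alpha([0.05, 0.2], history)
```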
[0025] The PSM 140 can similarly generalize the price sensitivity function of Equation 1 through the following equation: F.sub.PS(P)=r*exp(K.sub.0+K.sub.1*P)/[1+exp(K.sub.0+K.sub.1*P)] (Eq. 2)
[0026] where K.sub.i.gtoreq.0 and r.apprxeq.2.0. In Eq. 2, the variable r represents the maximum possible value of the price sensitivity function, and the K.sub.i represent market factors that
limit that maximum in time period i. As before, Equation 2 requires that F.sub.PS(P.sub.REF)=1, so that sales within the channel segment do not change if prices do not change. The r
and K.sub.i are determined using known statistical techniques by analyzing the transaction records and parameters related to the product's price elasticity. Also, the model may further assume that
F.sub.PS(0)=2, so that offering free products doubles consumption of that product within the channel segment. Other functional forms for F.sub.PS are possible, corresponding to alternative
expressions for equations 1 and 2.
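A sketch of the logistic form of Eq. 2 follows. The parameter values are assumptions; because the sign conventions in the source are ambiguous, k1 is taken negative here so that sensitivity falls as price rises, and k0 is pinned by the requirement that F.sub.PS(P.sub.REF)=1.

```python
import math

def f_ps_logistic(price, ref_price, r=2.0, k1=-0.3):
    """Logistic form of F_PS (Eq. 2): r * sigmoid(k0 + k1 * price).

    k0 is chosen so the sigmoid equals 1/r at the reference price,
    which forces F_PS(ref_price) = 1.
    """
    k0 = -math.log(r - 1.0) - k1 * ref_price
    z = k0 + k1 * price
    return r * math.exp(z) / (1.0 + math.exp(z))
```

With these assumed parameters the curve equals 1 at the reference price, approaches the upper asymptote r near a price of zero, and decays toward zero at high prices.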
[0027] Alternatively, the PSM 140 may use a linear model. In the linear model, F.sub.PS(P) is a line defined by a slope estimating the change in sales per change in price and an intercept on the
price axis at which sales volume is zero.
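The linear alternative is straightforward to sketch; the slope value below is a hypothetical stand-in for one fitted from transaction data.

```python
def f_ps_linear(price, ref_price, slope=-0.1):
    """Linear F_PS: equals 1 at the reference price and is clipped at the
    price-axis intercept where sales volume reaches zero."""
    return max(0.0, 1.0 + slope * (price - ref_price))
```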
[0028] The system 100 may display the results produced by the PSM 140, as illustrated in FIG. 4. Specifically, FIG. 4 illustrates the display of a price sensitivity model type 141 used to analyze the
product in each channel segment and price sensitivity model variable values 142a and 142b. FIG. 4 further illustrates the display of graphs 143 of price sensitivity curves using the linear model
between maximum and minimum prices.
[0029] If the transaction database 120 includes lost sales data that represents the number of sales lost through changeable conditions such as insufficient inventory, then a lost sales model (LSM)
135, FIG. 2, could employ a win probability function, Fwp, analogous to the price sensitivity function F.sub.PS of the PSM 140. The win probability function takes a control variable as its
independent variable (such as inventory levels) and produces an estimate of increased sales for the product in the particular channel segment as the control variable is varied. Typically, the control
variable for the win probability function is either price or an adjusted margin for the channel segment.
[0030] Sales Forecaster
[0031] Using transaction information from the transaction database 120, a Normalized Sales Forecaster (NSF) 130, FIG. 2, predicts future sales within the particular channel segment assuming that the
reference price is charged. In particular, the NSF 130 functions as a generic, univariate time-series forecaster to predict sales volume, assuming that a constant reference price is applied
throughout the forecast horizon. The NSF 130 may further forecast the number of total offers made as well as normalized sales quantities.
[0032] The Sales Forecaster (SF) 160 then uses the sales forecast from the NSF 130 and the price sensitivity conclusions from the PSM 140 to predict sales for the product within the channel segment at
different prices. Specifically, the SF 160 predicts decreases in sales from increases in prices and increases in sales from decreases in product prices. The dynamic pricing system 100 then uses the
sales forecasts from the SF 160 to determine profit-maximizing prices for various products within various channel segments.
[0033] The accuracy of the sales forecasts from the NSF 130 and the SF 160 allows the dynamic pricing system 100 to produce reasonable pricing recommendations. In forecasting future sales, the NSF
130 and the SF 160 use a defined forecast horizon that specifies how far in the future to forecast sales, and the accuracy of the sales forecast is improved by using shorter-term forecast horizons
where possible since short-term forecasts are intrinsically more accurate. Because the date range over which forecasts are made may depend on the length of restocking intervals, these intervals
should be chosen carefully. In the case of very long restocking cycles, the dynamic pricing system 100 can model the restocking intervals as a series of shorter forecast horizons.
[0034] The accuracy of the sales forecast may be further improved by a clear, sound definition of loss if lost sales data is available. The sales forecasts from the NSF 130 and the SF 160 may be
further improved by using relatively few channel segments and by grouping the separate products into a manageable set of model categories. A smaller number of channel segments means more historical
data for each channel segment and fewer channel segments to manage. Likewise, a smaller number of model categories results in more historical data for each model category and fewer model categories
to manage.
[0035] In one embodiment, the NSF 130 and the SF 160 use the information from the transaction database 120 to produce a total sales, X.sub.SKU, for a particular product (SKU) in a channel segment
(CS) over a range of time (t.sub.i) by summing sales for that product in that channel segment over that range of time. Similarly, an aggregate sales total, .SIGMA.X.sub.SKU, for multiple products
(SKU.sub.1-n) in the channel segment is found by summing the sales total X.sub.SKU for each of the products. The system can then determine a product's fraction of total sales volume by dividing the
sales total for a particular product by the aggregate sales total for all the products. The dynamic pricing system 100 then forecasts a group of products' daily sales volume by channel segment.
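The SKU- and segment-level aggregation just described can be sketched on hypothetical transaction records; the field layout below is an assumption, not the patent's schema.

```python
from collections import defaultdict

# Hypothetical transaction records: (sku, channel_segment, date, quantity).
transactions = [
    ("A", "retail", "2000-05-01", 10),
    ("A", "retail", "2000-05-02", 15),
    ("B", "retail", "2000-05-01", 25),
    ("A", "direct", "2000-05-01", 40),
]

def sales_by_sku(records, segment):
    """Total X_SKU per product within one channel segment."""
    totals = defaultdict(int)
    for sku, cs, _date, qty in records:
        if cs == segment:
            totals[sku] += qty
    return dict(totals)

def volume_fraction(records, segment, sku):
    """A product's share of the segment's aggregate sales volume."""
    totals = sales_by_sku(records, segment)
    return totals.get(sku, 0) / sum(totals.values())
```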
[0036] The dynamic pricing system 100 may perform forecasting through known statistical methods, such as linear regression or non-linear regression analysis using curve-fitting based on exponential,
power, logarithmic, Gompertz, logistic, or parabola functions. In addition, numerous averaging, smoothing, and decomposition techniques to increase the accuracy of statistical forecasts are known and
may be employed by the dynamic pricing system 100. As will be appreciated by one skilled in the art, the NSF 130 and the SF 160 may employ any commercially available forecasting program.
[0037] In a preferred embodiment, the NSF 130 and the SF 160 are adapted to forecast sales cycles in which the number of prior sales varies predictably over a period of time. To forecast these sales
cycles accurately, the NSF 130 and the SF 160 may forecast each day-of-week separately; i.e., forecast the Monday time series separately from Tuesday, Wednesday, etc. The NSF 130 and the SF 160 can
then perform an analysis of variance (ANOVA) or t-test to detect which days of the week are statistically "different" in their mean level. Alternatively, the NSF 130 and the SF 160 can aggregate
across weeks and forecast the aggregate series, applying a multiplicative (average proportion of whole week) factor to disaggregate back to the daily level. The NSF 130 and the SF 160 can further
employ autoregressive integrated moving average (ARIMA) methods that explicitly model time lags and cyclical dependencies. The above techniques may similarly be generalized to different
time cycles, such as day-of-month cycles, days-to-end-of-month cycles, and week-of-month cycles.
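Both day-of-week approaches above — forecasting each weekday's series separately and applying average weekly proportion factors — can be sketched on fabricated data; the volumes and the simple per-day mean forecast are assumptions for illustration.

```python
from statistics import mean

# Hypothetical daily volumes for two weeks, Monday through Sunday.
weeks = [
    [50, 40, 42, 45, 60, 80, 30],
    [54, 38, 44, 47, 66, 84, 34],
]

def forecast_by_weekday(history):
    """Forecast each day-of-week as the mean of its own series."""
    return [mean(day_series) for day_series in zip(*history)]

def weekday_factors(history):
    """Average proportion of the whole week contributed by each day,
    usable to disaggregate a weekly forecast back to the daily level."""
    props = [[d / sum(week) for d in week] for week in history]
    return [mean(p) for p in zip(*props)]

daily_forecast = forecast_by_weekday(weeks)
factors = weekday_factors(weeks)
```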
[0038] The NSF 130 and the SF 160 may evaluate accuracy of the sales forecast through known methods to determine "Goodness of Fit" statistics. If the forecast does not have a good fit, the dynamic
pricing system 100 can improve the results by changing the forecasting procedure, such as using non-linear regression to determine the forecast.
[0039] The results of the NSF 130 may be displayed to the user, as illustrated in FIG. 5. FIG. 5 is a spreadsheet 131 with a column 132 listing forecast demand for a product in a channel segment.
[0040] Cost Model
[0041] The pricing system 100 further includes a Cost Model (CM) 150, FIG. 2, that calculates cost assumptions used in determining the profit maximizing prices. The CM 150 may operate by accepting
inputs from the users through the input device 10. In this way, the system forecasts only revenues and relies on the user's cost estimates when considering profits.
[0042] In a preferred embodiment of the system 100, however, the CM 150 examines externally provided data to determine a base product cost that represents the actual costs to the seller for the
product. For manufacturers, the base product cost represents the costs of acquiring raw materials and turning these materials into one unit of finished good, and for resellers, the base product cost
represents actual amount paid to acquire one unit of the product.
[0043] The base product cost only includes the expenses intrinsically related to acquiring a unit of the product and does not include all costs associated with the production and/or acquisition of the
product. For example, advertising costs are not a base product cost because the sales of additional units of the product do not intrinsically increase this cost. Some other additional costs are
overhead costs, inventory and handling costs, administrative costs, development costs, warranty costs, training costs, and freight costs. These types of additional costs may be handled as product
cost adjustments by the dynamic pricing system 100, so that the costs may be considered when determining profit-maximizing prices. In a preferred embodiment, the dynamic pricing system 100 allows
users to provide the incremental and/or percentage adjustments for each product. The total cost for the product, the base cost modified by all of the adjustments, is referred to as the adjusted product cost.
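A minimal sketch of the adjusted product cost calculation follows; the function name, the order in which the percentage and incremental adjustments are applied, and the sample figures are all assumptions.

```python
def adjusted_product_cost(base_cost, incremental=0.0, percentage=0.0):
    """Apply the user's incremental and percentage adjustments to base cost.

    percentage is expressed as a fraction, e.g. 0.05 for 5%; applying the
    percentage before the incremental adjustment is an assumed convention.
    """
    return base_cost * (1.0 + percentage) + incremental

# $20 base cost, 5% overhead allocation, $1.50 freight per unit.
cost = adjusted_product_cost(20.0, incremental=1.50, percentage=0.05)
```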
[0044] In one embodiment, the CM 150 may account for differences in costs for transactions in different channel segments. These cost differences may be due to different
methods of distribution, differences in location, or other common characteristics of sales in the channel segments. The CM 150 may dynamically determine these costs by evaluating the prior transaction
data. Preferably, the dynamic pricing system 100 also allows the user to input incremental and percentage adjustment components for product sales in the channel segment to produce an adjusted product
cost. In this way, the user has access to different types of cost metrics by initializing the adjustment factors with different values.
[0045] In addition to channel segment specific adjustments which consider the additional costs associated with the product at the channel segment level, it is possible that a seller needs special
cost considerations for specific buyers, or buyer specific cost adjustments. For example, sales to a particular buyer may be more expensive because of greater transaction and delivery costs. The CM
150 may dynamically determine the additional costs for any particular buyer by evaluating the prior transaction data, using known statistical analysis techniques. The dynamic pricing system 100 also
preferably allows the user to supply cost adjustments for product sales to particular buyers, to produce a buyer-adjusted product cost.
[0046] In another embodiment, the CM 150 further accounts for any discounts given to a buyer for large volume sales. These discounts are generally modeled through a function that represents the
increasing discount as the sales volumes increase. For example, the discount may be a step function that produces increasing discount amounts with increasing amounts of sales. The dynamic pricing
system 100 treats a discount as a cost because the discount diminishes expected profit from a particular sale but does not affect other transactions within the channel segment.
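One possible shape for the step-function discount mentioned above; the tier boundaries and rates here are invented for illustration:

```python
def step_discount(units):
    """Fractional discount that steps up as the sales volume grows.

    The tiers (10/100/1000 units) and the rates are hypothetical; the
    text only requires increasing discounts with increasing volume.
    """
    if units >= 1000:
        return 0.15
    if units >= 100:
        return 0.10
    if units >= 10:
        return 0.05
    return 0.0
```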
[0047] System 100 may also display costs and discount numbers to the user, as illustrated in the spreadsheet 131 of FIG. 5. The spreadsheet 131 includes an adjusted cost column 151 and a discount
column 152 for each product in each channel segment.
[0048] Supply Forecast
[0049] In one embodiment, the system 100 further considers inventory levels. In particular, a basic premise of the dynamic system 100 is that future sales cannot exceed future inventory levels.
Accordingly, the dynamic pricing system 100 caps sales forecasts at the forecasted inventory levels. In the dynamic pricing system 100, a Supply Forecaster (SUF) 190 forms an estimate of the future
inventory in each channel segment. The SUF 190 may form an inventory forecast using any known accounting techniques and typically looks to current inventory levels and expected future changes to the
inventory levels, such as sales and restocking. Where the seller may purchase unlimited additional inventory, the system can operate without the SUF 190 since any level of sales may be accomplished.
The SUF 190 may also be replaced with a corresponding third party system to provide the same supply inputs.
[0050] If the forecast horizon ends before a restocking date, then not all of the current inventory may be available to satisfy demand through the forecast horizon. In this case, the SUF 190
determines how much of the current inventory is available to satisfy a future demand through the forecast horizon. One simple approach uses a linear approximation in which an amount of new inventory
is added constantly, rather than using a step function having large, sudden changes in the inventory levels. For example, available inventory may be approximated as the current inventory multiplied
by the ratio of the forecast horizon divided by the time until the next restocking.
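The linear approximation can be sketched directly; capping the ratio at 1.0 for horizons that reach past the restocking date is an added safeguard, not stated in the text:

```python
def available_inventory(current, horizon, time_to_restock):
    """Approximate inventory available within the forecast horizon.

    Follows the stated rule: current inventory times the ratio of the
    forecast horizon to the time until the next restocking. Units for
    `horizon` and `time_to_restock` just need to match (e.g. days).
    """
    return current * min(1.0, horizon / time_to_restock)
```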
[0051] Price Optimizer
[0052] Referring to FIG. 2, the dynamic pricing system 100 includes a Price Optimizer (OPT) 200 that produces a set of optimal prices that maximize total profit under given constraints across all
channel segments, where the constraints are defined either by the general settings of the pricing problem or by specific rules selected by the user. The OPT 200 creates the profit maximizing prices
using various data, including the product cost data from the CM 150 and the sales forecasts from the SF 160.
[0053] The OPT 200 generally assumes that a product sells at a single price for a particular channel segment. Differences in prices may be modeled in the form of volume discounts, as described in the
above discussion of cost calculations. The OPT 200 then estimates the profits from sales of a product within the channel segment at different prices. In particular, the OPT 200 looks to the expected profit
Π_(P,CS) = X_(P,CS) * (P_CS - C_CS)  (Eq. 3),
[0054] where P_CS is the price for the product in the channel segment, C_CS is the cost per product in the channel segment, X_(P,CS) is the forecasted sales of the product in the channel segment at price P, and Π_(P,CS) is the expected profit from the product's sales in the channel segment at price P. As described above, SF 160 forecasts X_(P,CS) by using the forecasted future sales at current price levels, as determined by NSF 130, and then adjusting the number of forecasted sales by the price elasticity of buyers in the channel segment, as determined by PSM 140:
X_(P,CS) = X_(Pref,CS) * F_PS(P)  (Eq. 4),
[0055] where X_(Pref,CS) is the normalized sales forecast at the current price from the NSF 130 and F_PS(P) is the price sensitivity adjustment to sales at price P. Likewise, CM 150 determines the
costs per product within the channel segment. The OPT 200 generally starts at a base price, P_base, and gradually increases the price by a set increment. The OPT 200 then suggests the particular
price(s) for the product that maximize profits within the channel segment. The OPT 200 may present the price recommendation in any form of output, such as printed page, but generally presents the
prices through a graphic user interface (GUI) on a display monitor.
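The incremental price search of Eq. 3 and Eq. 4 can be sketched as a grid search; the linear price-sensitivity function below is a hypothetical stand-in for the PSM 140's F_PS(P):

```python
def optimal_price(x_ref, cost, p_base, p_max, step, sensitivity):
    """Grid search for the profit-maximizing price in one channel segment.

    profit(P) = X(P) * (P - C) with X(P) = x_ref * sensitivity(P),
    mirroring Eq. 3 and Eq. 4 in sketch form.
    """
    best_p, best_profit = p_base, float("-inf")
    p = p_base
    while p <= p_max:
        profit = x_ref * sensitivity(p) * (p - cost)
        if profit > best_profit:
            best_p, best_profit = p, profit
        p += step
    return best_p, best_profit

# Hypothetical demand that falls linearly to zero at a $20 price:
f_ps = lambda p: max(0.0, 1.0 - p / 20.0)
price, profit = optimal_price(x_ref=100, cost=5.0, p_base=5.0,
                              p_max=20.0, step=0.25, sensitivity=f_ps)
```

With this demand curve the search lands at the midpoint between the $5 cost and the $20 choke price.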
[0056] In one embodiment, the OPT 200 looks only to changes in profits caused by increases in prices. In this implementation, the OPT 200 can recommend a price increase that maximizes profits,
generally a price that does not substantially decrease sales volumes while increasing revenues per product.
[0057] In another embodiment, the OPT 200 makes a more global analysis by performing estimates of a seller's profit levels within multiple relevant channel segments and provides prices for the
multiple channel segments. This way, a seller may sacrifice profits within one channel segment to increase profits in a second channel segment. For example, the seller having a limited total
inventory to be distributed in all channel segments may be better off selling fewer items in a first market to increase profits in a second market.
[0058] In the above-described analysis to determine optimal prices for a product, the OPT 200 uses several basic assumptions, such as that the pricing and sales of one product do not affect the pricing and
sales of a second product. As a result, the amount of the forecasted sales equals the normalized forecasted sales times the price sensitivity adjustments. Furthermore, the OPT 200 may optionally
assume that there are a minimum and a maximum allowable price within a channel segment. Given these assumptions, the OPT 200 can always produce one or more profit maximizing prices.
[0059] The OPT 200 may also assume a minimum and a maximum number of sales within the channel. The OPT 200 may optionally further assume that there is a maximum difference in prices for a product in
two channel segments, where this maximum difference is an absolute amount (such as prices cannot differ by more than $10) or a relative ratio in prices (such as prices cannot differ by more than
10%). As the OPT 200 makes additional assumptions, it becomes increasingly likely that a set of profit maximizing prices does not exist because no solution is possible within the assumptions. The OPT
200 then relaxes assumptions until a solution becomes possible.
[0060] The assumptions are stored in the strategic objectives (or business rules) database 210. The users may adjust these assumptions according to the realities of the products and markets. For
example, where pricing or sales of a first product affect pricing or sales of a second product, the OPT 200 cannot assume that demand (or sales) for one product is independent of demand (or sales)
for other products, or that cross-product price elasticity does not exist. The OPT 200 must therefore use a sales forecast from the SF 160 that accounts for this dependency, and then produce pricing
that maximizes sales from both products. The sales for two products may be positively correlated, so that the sale of one product increases sales of the second product. Alternatively, sales of the
two products may be negatively correlated, where sales of the first product decrease sales of the second product, such as products that are substitutable. In this case, a decrease in the price of the
first product increases demand for this product while decreasing demand and sales for the second product. The dynamic pricing system 100 can account for these market conditions through altering the
operation of the SF 160 so that forecasts of the demand of a certain product, in addition to using the historical demand data for that product, also examine the historical demand data for related
products. The OPT 200 may consider cross-product elasticity in determining the optimal prices. Typically, total forecasted profit for the first product then becomes the originally expected profit plus any adjustments to profits caused by sales of the second product, to reflect the codependence of the two products:
∂(Total Profit)/∂(price of product 1) = ∂(sales of product 1)/∂(price of product 1) * Unit Profit(Product 1) + (Sales of Product 1) * ∂(Unit Profit of Product 1)/∂(price of product 1) + ∂(sales of product 2)/∂(price of product 1) * Unit Profit(Product 2).  (Eq. 5)
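As a toy illustration of the coupling Eq. 5 captures, assume linear own- and cross-price responses (all coefficients invented); raising product 1's price then trades its own margin against the extra product 2 sales it induces:

```python
def total_profit(p1, p2, c1, c2, x1_ref, x2_ref, own, cross):
    """Two-product profit with a cross-price effect (substitutes).

    Sales of product 1 fall with its own price (`own`); sales of
    product 2 rise with product 1's price (`cross`). The linear
    response model is illustrative only, not from the source.
    """
    x1 = max(0.0, x1_ref - own * p1)
    x2 = max(0.0, x2_ref + cross * p1)
    return x1 * (p1 - c1) + x2 * (p2 - c2)
```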
[0061] In the above-described operation, the OPT 200 further assumes that unsold inventory does not incur any actual or opportunity cost. To improve the price prediction, the sellers may provide an estimate of storage costs for unsold inventory that is included in the calculations of the CM 150. For example, the OPT 200 may employ cost accounting that treats any unsold inventory as a cost against future profits. The user must specify how to value inventory at the end of the forecasting horizon and/or restocking date. Issues that arise include valuing excess inventory at the end of the decision period, as well as any opportunity costs associated with carrying the items over a sales period and how to capture any increase in product value that occurs during storage (appreciation) until the next period. Similarly, the OPT 200 should consider the cost of lost sales due to insufficient inventory.
[0062] The OPT 200 also does not account for uncertainty in supply and demand. Instead, the OPT 200 treats these factors as deterministic once supply and demand are forecasted. The SUF 190 and the SF
160 could easily be modified to incorporate an uncertainty factor. Alternatively, the demand and supply could be modeled as stochastic processes having a known mean and variance, such as lognormal
functions. The OPT 200's objective function is then replaced by one that maximizes expected total profits.
[0063] The OPT 200 also operates under the assumption that competitor data is not available. Competitor data relates to information on the prices and sales of competing products in the same channel
segments. Where this information is available, the dynamic pricing system 100 could improve its sales forecasts, since the price and supply of competing products obviously affect sales. For example, the
existence of a closely related product at a lower price substantially limits the ability of the seller to increase prices. The PSM 140 and the SF 160 may use known techniques to incorporate and use
the competitor data.
[0064] In another embodiment, the dynamic pricing system 100 uses available information on competitors in the OPT 200's determination of optimal, profit maximizing prices. For example, a Competitor
Response Model (CRM) 170 uses historical data on competitor pricing and supply information to modify the price sensitivity findings of the PSM 140 and sales forecasts of the SF 160. These adjustments
are based on the logical assumption that the price and availability of substitute products within a market influence the price sensitivity of consumers and similarly affect future sales. The OPT 200
could use known techniques to determine the demand elasticity of a certain product with respect to the competitor price and incorporate that in the objective function. Alternatively, the control
variable within the system to determine price sensitivity (currently the price of the product) can be replaced by the ratio of the seller's price of the product to the competitor's price or the
difference of the two values.
[0065] Therefore, the dynamic pricing system 100 may produce optimized price recommendations by exploiting a broad range of available pricing and sales data. If this broad range of market information
is available, the dynamic pricing system 100 can model the size of the potential market as well as the market's sensitivity to price. The dynamic pricing system 100 forms a sales forecast, as a
function of price and time, by modeling the market size from the market's price sensitivity. The dynamic pricing system 100 can then evaluate this sales forecast with respect to the available supply
data and the seller's strategic objectives to generate the optimized price recommendation. Unfortunately, a broad range of market data is rarely available.
[0066] In most cases, therefore, the dynamic pricing system 100 must analyze the market using less-than-perfect pricing information. For example, if loss data is unavailable or not meaningful, market
size is difficult to capture. A more direct way to achieve a price recommendation is to forecast sales directly as a function of price and time. In this way, the system bypasses the need to model
market size and response but possibly produces less accurate forecasts.
[0067] Similarly, the dynamic pricing system may make optimized price recommendations even where data on some drivers of market response is unavailable, because some important market drivers cannot be captured reliably in data. For instance, the overall supply in the market is an observation that may be more qualitative than quantitative. As a result, corresponding adjustments to the price or market response need to be made on a simpler basis, with the user supplying the size of the adjustment to the final price or the shift to the market response. These adjustments can be achieved through
overrides to the sales forecasts, demand forecasts, or market response, or more directly, by a simple percentage adjustment to the price recommendation derived from the available data. The user may
choose which adjustments to make.
[0068] The price recommendations from the price optimizer 200 may be further modified by a post-processor 240 to allow the system 100 to address various issues not explicitly addressed in the other
components. A miscellaneous parameters database 250 stores parameters which are used to adjust prices to reflect behavior not represented in the above models. This may include items such as vendor
and channel management rules, as well as industry/market availability.
[0069] System 100 may store the price recommendations in a price recommendation database 260 so that the system 100 can later access the price recommendations. The price recommendation database 260
may also store the assumptions/forecasts used to form the price recommendations.
[0070] Alert Generator
[0071] In another embodiment, the dynamic pricing system 100 further includes an alert generator 220, FIG. 2, that operates after a new set of product prices has been generated or a new day's worth
of transactions has been loaded. The alert generator 220 notifies the user of any significant changes in prices or other product characteristics, including the number of actual units sold or actual
margin that may indicate when actual sales behavior differs significantly from earlier forecasted behavior.
[0072] The user can choose, through the input device 10, conditions that cause the alert generator 220 to give notices, and these selected alert conditions are stored in an alert database 230. For
example, the alert generator 220 may inform the user when statistics in the actual sales differ from the expected, forecast values. For any particular product in a channel segment, the alert
generator 220 may look at inventory statistics, the number of sales, the actual price of the products in the sales, the actual costs, revenues or the actual profits. The alert generator 220 notifies
the user when the actual numbers differ from the forecasted values determined by other components of the dynamic pricing system 100.
[0073] In order to make these comparisons, the alert generator 220 stores the results from the OPT 200. The alert generator 220 further receives and analyzes data from the actual transactions, to
compare the transactions with the forecasts. The alert generator generally operates by comparing new entries in the transaction database 120 with forecasts contained in the price recommendation
database 260.
[0074] Optionally, the user can also specify the time period over which the alert generator 220 compares expected results to actual results. For instance, the user may select the previous day,
previous week, previous month, or previous year. Likewise, the thresholds for alerts may be chosen to vary with the selected time span, since a small deviation from expected profits may be
important in the short term but may not matter over an extended period.
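A minimal form of the forecast-vs-actual comparison might look like this; the metric names and the 10% default threshold are illustrative, since the text leaves the alert conditions to the user:

```python
def check_alerts(actual, forecast, thresholds, default=0.10):
    """Return the metrics whose actual value deviates from the forecast
    by more than a relative threshold (per-metric, with a default)."""
    alerts = []
    for metric, expected in forecast.items():
        if expected == 0:
            continue  # relative deviation undefined; skipped in this sketch
        deviation = abs(actual[metric] - expected) / abs(expected)
        if deviation > thresholds.get(metric, default):
            alerts.append(metric)
    return alerts

# Units sold fell 20% short of forecast; profit was within 5%:
alerts = check_alerts(
    actual={"units": 80, "profit": 950.0},
    forecast={"units": 100, "profit": 1000.0},
    thresholds={"units": 0.10, "profit": 0.10},
)
```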
[0075] Integration of Dynamic Price System
[0076] As illustrated in FIG. 6, the dynamic pricing system 100 may coexist within a larger framework 400. In particular, the system 100 may interact with various elements in the user's supply chain,
including a warehouse 410, a production center 420, and a purchasing center 430, to ensure that supply matches appropriately with the demand forecasted by the dynamic pricing system 100. The dynamic
pricing system 100 further sets prices in view of inventory levels. Similarly, the dynamic pricing system 100 connects to sales sites for the user, such as a store 440 and a mail order center 450. In
this way, the dynamic pricing system 100 sets sales prices and monitors actual sales at the sales sites 440 and 450. Much like a feedback loop, the dynamic pricing system 100 uses the sales data to
adjust prices to the sales chain and inventory requests to the supply chain.
[0077] Based on this model, a dynamic pricing process 500 is illustrated in FIG. 7. Specifically, the dynamic pricing system collects past sales data, step 510, and uses this data to forecast future
sales at different prices, step 520. Using results from the step 520, the dynamic pricing system selects prices that optimize profits, step 530. The profit maximization may be adjusted accordingly by
choosing conditions, step 540. In step 550, the seller then sells in each channel segment at the recommended prices from the step 530. New sales information reflecting the price recommendations from
the step 530 is collected, step 560, and added to the other past sales data (step 510), and the process repeats from the start.
[0078] Conclusion
[0079] The foregoing description of the preferred embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the
invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed
description, but rather by the claims appended hereinafter.
* * * * *
choosing hard classes
I think you're applying a short-term strategy that's likely to hurt you in the long run.
A few questions you may want to ask yourself:
1. Are these classes really so much easier?
2. What are they not covering?
3. How much higher are your marks really going to be? Some people actually perform better when challenged.
4. If the only way you can get into grad school is by taking easy classes, do you think that you'll really be all that successful? Grad school isn't any easier than undergrad, and usually there aren't
"easier" options.
Once you get into grad school, and more importantly, once it's over, what really matters is what you've gotten out of the classes you took.
To add to what Choppy said,
1) and 2) Where I go, it seems there's a palpable difference between the Honours and regular courses. I may be biased, since I'm taking the Honours ones, but from what I hear from other students
the regular ones are much, much easier, and that's what the professors that teach our courses say, as well. I don't know how it is at your school, though, so you might want to check that out.
Also, I see quite a lot of topics here where upper-year students are asking whether they should take a course or read a book on proofs. I don't know how your Maths courses are, but we've been doing
proofs in our Honours Calculus and Linear Algebra courses almost from day one. I guess the biggest difference then comes from just that, namely that the Honours courses seem to be much more rigorous
and proof-based, and I can honestly say I'm not intimidated by having to prove stuff anymore, since basically all of our homeworks are just prove this, prove that. That isn't to say I find it easy proving
things, it's just that the techniques are not the bottle-neck. And, again, from what I hear, in regular courses it's more about computation, whereas we almost treat that as trivial. We still need to
know how to compute everything that those taking regular courses need to, but it seems that is taken almost for granted, and the hard stuff lies elsewhere.
3) Apart from the challenge, the averages are also higher in Honours courses, since it's my university's policy not to punish those taking tougher courses. That, of course, doesn't mean there are only
A's, but instead of a C average, you'll have a B- average or something akin to that. That may or may not be the case where you're at, though.
4) This, I think, is the best point of all. If you're just taking the easy way out, then I think that's pretty weak.
Madison Dynamo Experiment
The role of magnetic fields in astrophysical processes has gained greater attention in recent times. The plasma physics community is making efforts to create experiments that explore processes which
are fundamental to the evolution of planets, stars, and galaxies. One such process is the generation of the magnetic fields we observe in each of these astrophysical bodies.
The dynamic formation of these magnetic fields is explained by the dynamo. Flowing conducting fluids can bend and stretch magnetic field lines so that they amplify the magnetic field. If the
geometry of the flow also provides a positive feedback, the magnetic field continues to grow until it becomes strong enough to affect the flow. The result is a system whereby kinetic energy in the
flow is spontaneously converted into magnetic energy.
Our experiment is designed to create a flow in a liquid metal that is predicted to produce a dynamo that can be studied in the laboratory. We are interested in answering questions about what types of
flow are required for a dynamo, what are the effects of turbulence on a laminar dynamo, and what causes the saturation of the magnetic field growth in a dynamo.
The Kinematic Dynamo:
The evolution of the velocity and the magnetic field is governed by magnetohydrodynamics. The equation which describes the evolution of the magnetic field is the magnetic induction equation, which is
non-linearly coupled to the Navier-Stokes equation describing the evolution of the velocity. Solving this system of non-linear equations is a formidable task. The kinematic dynamo addresses this
problem by assuming that the magnetic field is sufficiently weak so that the Lorentz force, the force the magnetic field applies to the fluid, is weak. The magnetic induction equation becomes
decoupled from the fluid equation and the problem reduces to solving a linear partial differential equation for which we assume that the velocity field is fixed.
In solving the magnetic induction equation, we find that there are two important criteria for the production of a dynamo. First, the fluid must be able to move the magnetic field faster than it can
diffuse away. A parameter which describes this comparison of advection to diffusion is the magnetic Reynolds number Rm. The magnetic Reynolds number depends on the fluid conductivity, the size of the
system, and the characteristic speed of the fluid. A dynamo typically has a critical magnetic Reynolds number. When the fluid is sufficiently conductive, large, and fast the magnetic Reynolds number
is large enough to result in a dynamo. For fluid flow that is below the critical Reynolds number, the magnetic field decays away with some characteristic decay rate that approaches zero as the
magnetic Reynolds number reaches criticality. To the left is shown a plot of the growth rate of the magnetic field as magnetic Reynolds number is increased. The critical magnetic Reynolds number is
at about 82 for this flow. Notice as well that the gain, the ratio of the magnetic response to the applied field, starts to grow rapidly just before criticality.
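As a rough illustration, the magnetic Reynolds number Rm = mu0 * sigma * L * v can be evaluated with typical liquid-sodium numbers; the parameter values below are textbook estimates, not the experiment's actual figures:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, T*m/A

def magnetic_reynolds(sigma, length, speed):
    """Magnetic Reynolds number Rm = mu0 * sigma * L * v."""
    return MU0 * sigma * length * speed

# Liquid sodium: conductivity ~1e7 S/m, a 0.5 m scale, 15 m/s flow speed.
rm = magnetic_reynolds(sigma=1.0e7, length=0.5, speed=15.0)
```

At these values Rm comes out near 94, above the critical value of about 82 quoted for the flow.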
The second criterion is that the flow have the correct geometry for amplifying the magnetic field and creating a feedback loop that leads to magnetic field growth. This requirement is illustrated in
the figure to the right. The magnetic field that is excited by the flow we are studying is created by a process similar to the stretch-twist-fold rope dynamo of Vainshtein and Zeldovich. The figure
shows how the magnetic field is carried by the fluid flow in the limit of infinite conductivity. A magnetic field line lies tangent to the axis of rotation of the fluid as indicated by the arrows in
the figure. Calculating the new position of the fluid element in which the field rests allows us to follow the motion of the field in the flow. Progressing from (a) to (d), the magnetic field is
stretched (or amplified) and twisted by the flow. In (d) we can see that there is a new magnetic field line lying along the original straight field line indicating the generation of new magnetic
flux. The complicated structure in the core would tend to be smoothed out by resistive diffusion resulting in a stronger field. We can also see that the degree that the field is twisted plays an
important role in generating the dynamo. If the field line were twisted too far, or not far enough, then the resulting field would not align itself with the original field line. This angle of twist
is governed by the geometry of the flow.
Here is a movie depicting the evolution of a pair of magnetic field lines.
The geodynamo relies on the large size of the Earth to achieve the critical magnetic Reynolds number. In a laboratory experiment, however, we must rely on larger flow speeds. Fluids at these speeds
(in excess of 20 m/s) are turbulent resulting in a number of interesting changes to the dynamics of the system.
Mean-field Electrodynamics addresses the presence of fluctuations in the velocity and magnetic field by assuming the fluctuations are small compared to some mean value and performing statistical
averages over the fluctuating quantities to determine the appropriate modifications to the equations.
Given the assumptions of homogeneous, isotropic turbulence, the theory predicts a turbulent EMF described by two terms: the alpha-effect and the beta-effect.
The alpha-effect arises due to the small-scale helical motions of the turbulent flow. These small helices can produce the same stretch-twist-fold mechanism described above, but on a smaller scale.
The alpha-effect produces current anti-parallel to the direction of the magnetic field line. If the helical motions of the fluctuations are correlated, the net current generated can produce a
large-scale magnetic field.
The beta-effect describes the increased transport of magnetic flux due to turbulent stirring. In the laminar dynamo, the dissipation of magnetic flux is governed by the rate of diffusion. Turbulent
stirring, however, can increase the transport of magnetic flux from one region to another. The result is an enhanced diffusivity of the fluid. This can also be thought of as a reduction in the
effective conductivity in the fluid. Since the critical magnetic Reynolds number depends on the conductivity, the beta-effect raises the critical magnetic Reynolds number for the flow, thus making it
harder to produce a dynamo in the laboratory.
The magnetic Reynolds number decreases and the critical magnetic Reynolds number increases leading to saturation of the field.
Once the magnetic field begins to grow, the Lorentz force will become strong enough to modify the flow. The kinematic dynamo model breaks down and in order to capture the dynamics of the system, we
have to solve the full non-linear set of MHD equations. One anticipates that the flow will be modified in such a way to limit growth of the magnetic field. Simulations that solve the fully non-linear
system show that this saturation is accomplished in two ways. First, the magnetic field acts as a braking force which slows down the fluid flow. The result is that the magnetic Reynolds number
(which depends on the characteristic speed of the fluid) is reduced. Second, the magnetic field also changes the geometry of the flow, thereby increasing the critical magnetic Reynolds number for the
flow. Saturation occurs when the magnetic Reynolds number and the critical magnetic Reynolds number are equal, as depicted in the figure to the right.
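The two mechanisms can be caricatured in a toy model in which the field's back-reaction lowers Rm (braking) and raises the critical Rm (geometry change) until they meet; every coefficient here is invented for illustration, and the real dynamics require the full non-linear MHD equations:

```python
def saturate(rm0, rm_crit0, k_brake, k_geom, steps=10000, dt=1e-3):
    """Toy saturation: dB/dt = (Rm - Rm_crit) * B, with
    Rm = rm0 - k_brake * B^2 and Rm_crit = rm_crit0 + k_geom * B^2."""
    b = 1e-3  # small seed field
    for _ in range(steps):
        rm = rm0 - k_brake * b * b
        rm_crit = rm_crit0 + k_geom * b * b
        b += (rm - rm_crit) * b * dt
    return b, rm0 - k_brake * b * b, rm_crit0 + k_geom * b * b

b, rm, rm_crit = saturate(rm0=100.0, rm_crit0=82.0, k_brake=1.0, k_geom=1.0)
```

The field settles where the two Reynolds numbers meet: B^2 = (100 - 82) / 2, so B = 3 and both Rm values equal 91.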
An example of planetary dynamos:
The magnetic field of Uranus is somewhat similar to the field which we are attempting to generate. Click here to see a movie of the time-history of the magnetic field of Uranus. (Super big 6M
file...don't dare do this with a modem!)
The magnetic dipole moment of Uranus is not aligned with the planet's axis of spin. Our experiment will produce a similar field.
Mplus Discussion >> EFA eigenvalues
Bill Roberts posted on Thursday, September 05, 2002 - 10:02 am
I have a question about the eigenvalues for EFA in Mplus. Using maximum likelihood for EFA in Mplus I get 12 eigenvalues GE 1 that are all positive. The eigenvalues ranged from .35 to 6.379 for the
45 Likert-type items in the analysis. I compared these results with the same items and the same analysis using sas. Eigenvalues from sas ranged from -.46 to 9.25 with 6 eigenvalues GE 1. If I use the
typical cut-off of an eigenvalue GE 1 to explore the underlying dimensionality of the items, sas would indicate 6 factors and Mplus would indicate 12. I compared the rotated promax factor pattern
matrix between sas and Mplus for six factors and found that they are similar, with minor differences that are probably due to rounding. I tried to run EFA in Mplus with 12 factors and ran into a
convergence problem. How should I interpret the eigenvalues given by Mplus?
bmuthen posted on Thursday, September 05, 2002 - 5:37 pm
If SAS and Mplus have eigenvalue differences, I would assume that they are not computed for the same matrix. Assuming that your outcomes are continuous variables, the eigenvalues should all be
positive. Perhaps SAS handles any missing data differently in this run, e.g. using pair-wise present data which could lead to negative eigenvalues. Another reason for differences in the matrix used
for eigenvalue computation is if an iterated principal factor method has been used in SAS, so that it is not the sample matrix that is used but a sample matrix with an adjusted diagonal. Mplus considers the
eigenvalues of the sample correlation matrix. The Mplus eigenvalues GE 1 can be used as a guide in determining the number of factors, but such a guide is a very rough one - a somewhat less rough guide
is to use the scree approach.
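Both rules of thumb mentioned here (counting eigenvalues GE 1, and the scree approach) are easy to reproduce outside Mplus. A minimal sketch in Python with NumPy; the data are simulated for illustration, not from this thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 observations on 9 items driven by 2 common factors,
# then compute the sample correlation matrix (1's on the diagonal,
# which is the matrix Mplus uses for its printed eigenvalues).
loadings = rng.uniform(0.4, 0.8, size=(9, 2))
factors = rng.standard_normal((500, 2))
data = factors @ loadings.T + 0.6 * rng.standard_normal((500, 9))
R = np.corrcoef(data, rowvar=False)

eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending order

# Kaiser criterion: count eigenvalues >= 1 (a rough guide only).
kaiser_count = int(np.sum(eigvals >= 1.0))
print("eigenvalues:", np.round(eigvals, 3))
print("eigenvalues >= 1:", kaiser_count)

# Scree "elbow": inspect successive drops instead of a hard cutoff.
drops = -np.diff(eigvals)
print("successive drops:", np.round(drops, 3))
```

With listwise-complete data the matrix is positive definite, so all eigenvalues are positive and they sum to the number of variables.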
Bill Roberts posted on Friday, September 06, 2002 - 8:06 am
Thank you for discussing reasons that could account for differences in the eigenvalues. I am fairly certain that I can rule out pair-wise deletion of missing cases. According to the SAS
documentation, missing cases are deleted listwise by default. The sample size is the same in both programs. The simple descriptive statistics and correlation matrix look nearly identical. Perhaps SAS
uses a different matrix to compute the eigenvalues, as you suggested.
bmuthen posted on Friday, September 06, 2002 - 1:45 pm
Which factor extraction method in SAS is being used?
Bill Roberts posted on Friday, September 06, 2002 - 2:08 pm
The method for proc factor is set to maximum likelihood and rotation is set to promax.
Bengt O. Muthen posted on Friday, September 06, 2002 - 2:30 pm
They must be doing something to the correlation matrix; otherwise no eigenvalue would be negative, because when a correlation matrix is computed from listwise present data the correlation matrix is
positive definite.
Bill Roberts posted on Monday, September 09, 2002 - 2:55 pm
I am finding that by default, SAS sets the prior communality for each variable to its squared multiple correlation with the other variables in the analysis when the method is maximum likelihood. After
taking a closer look at the SAS output, I see that the cumulative variance exceeds 100 percent, at which point, the eigenvalues become negative bringing the cumulative variance back to 100 percent.
If, however, the priors option is set to one for method = maximum likelihood all eigenvalues are positive. When squared multiple correlations are inserted along the diagonal of the correlation
matrix, then the total variance to be decomposed into factors is less than the number of variables. Eigenvalues using principal components as the method of analysis in SAS were identical to what I am
finding in the Mplus output using maximum likelihood as the estimator. Is there a way to specify EFA in Mplus using maximum likelihood and insert squared multiple correlations along the diagonal of
the correlation matrix using nonsummary data?
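The SAS behavior described above (squared multiple correlations as prior communalities on the diagonal) can be mimicked directly, which also shows why the reduced matrix can have negative eigenvalues while the unreduced one cannot. A sketch with NumPy on simulated one-factor data (illustrative, not the 45 items from this thread):

```python
import numpy as np

rng = np.random.default_rng(1)

# One common factor, 12 standardized items, loading 0.5 each.
n, p, lam = 5000, 12, 0.5
f = rng.standard_normal((n, 1))
data = lam * f + np.sqrt(1 - lam**2) * rng.standard_normal((n, p))
R = np.corrcoef(data, rowvar=False)

# Squared multiple correlation of item i with the other items:
# SMC_i = 1 - 1 / (R^{-1})_{ii}
smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))

# "Reduced" matrix: SMCs replace the 1's on the diagonal.
R_reduced = R.copy()
np.fill_diagonal(R_reduced, smc)

eig_full = np.linalg.eigvalsh(R)
eig_reduced = np.linalg.eigvalsh(R_reduced)

print("min eigenvalue, full matrix   : %.3f" % eig_full.min())
print("min eigenvalue, reduced matrix: %.3f" % eig_reduced.min())
print("trace of reduced matrix: %.2f (less than %d variables)" % (R_reduced.trace(), p))
```

Because the SMCs are strictly below 1, the reduced matrix has a smaller trace than the number of variables, and some of its eigenvalues are pushed below zero even though the full correlation matrix is positive definite.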
bmuthen posted on Monday, September 09, 2002 - 6:28 pm
The answer is no, and I can't see the need for it in terms of model estimation. The adjustments to the diagonal are connected with either descriptive purposes - getting a picture of relevant
eigenvalues to guide in choosing number of factors - or are part of simpler estimation methods such as principal factoring. You can get Mplus to give such eigenvalues if you input an adjusted
correlation matrix. But to get maximum-likelihood estimation from a correlation matrix, you don't want to adjust the diagonal of the correlation matrix.
Just a few more words on this. Mplus computes eigenvalues just like in principal component analysis (keeping the diagonal elements as they are - here 1). You can use such eigenvalues to descriptively
guide you in choosing the number of factors. The idea of adjusting the diagonal is that this perhaps makes the resulting matrix more closely approximate Lambda*Psi*Lambda' in
Sigma = Lambda*Psi*Lambda' + Theta,
where the eigenvalues shed light on the rank (number of factors) of Lambda*Psi*Lambda'.
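The rank argument can be made concrete with a small numeric example: Sigma itself has all positive eigenvalues, but Sigma - Theta = Lambda*Psi*Lambda' has exactly as many nonzero eigenvalues as there are factors. A sketch with NumPy and arbitrary illustrative parameter values:

```python
import numpy as np

# Arbitrary 6-item, 2-factor population structure (illustrative values).
Lambda = np.array([[0.8, 0.0],
                   [0.7, 0.0],
                   [0.6, 0.0],
                   [0.0, 0.7],
                   [0.0, 0.6],
                   [0.0, 0.5]])
Psi = np.array([[1.0, 0.3],
                [0.3, 1.0]])             # factor covariance (correlation) matrix
common = Lambda @ Psi @ Lambda.T          # Lambda*Psi*Lambda'
Theta = np.diag(1.0 - np.diag(common))    # uniquenesses so that diag(Sigma) = 1
Sigma = common + Theta

# Sigma (1's on the diagonal) has all positive eigenvalues ...
print(np.round(np.sort(np.linalg.eigvalsh(Sigma))[::-1], 3))

# ... but the common part Sigma - Theta has exactly 2 nonzero ones,
# revealing the number of factors as the rank of Lambda*Psi*Lambda'.
eig_common = np.sort(np.linalg.eigvalsh(common))[::-1]
print(np.round(eig_common, 3))
nonzero = int(np.sum(eig_common > 1e-10))
print("rank of common part:", nonzero)
```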
Hervé CACI posted on Thursday, August 07, 2003 - 2:41 am
Bengt & Linda,
I'm using WLSMV with 4-point Likert like item scores (Mplus 2.01). I understand that I can use other estimators as well.
I'm puzzled by the fact that Mplus outputs a different item correlation matrix as the number of factors extracted grows. I would rather assume that the correlation matrix remained unchanged, because
the eigenvalues of this matrix are a guide to the number of factors to extract/rotate.
Also, I'd like to know on which correlation matrix are computed the eigenvalues ? The first printed out, or some hidden matrix ?
What am I missing ?
Thank you in advance.
Linda K. Muthen posted on Thursday, August 07, 2003 - 6:46 am
The correlation matrices printed are model estimated and then after that the residuals are printed. The residuals are the model estimated minus the observed values. As the model changes, that is, as
more factors are extracted, the model estimated values will change. The eigenvalues are based on the observed correlation matrix.
Anonymous posted on Tuesday, August 24, 2004 - 9:53 am
I am wondering how to correctly describe an EFA of continuous variables in Mplus (for a write-up). Since ones are placed on the diagonal, is this analysis really a principal components analysis or a
factor analysis with a principal component extraction method? To my understanding, there should not be residual variances for the manifest variables under PCA. Is this correct?
Linda K. Muthen posted on Tuesday, August 24, 2004 - 10:25 am
It is a factor analysis using a maximum likelihood or unweighted least squares estimator. It does not use the principal components estimator.
anonymous posted on Wednesday, October 19, 2005 - 12:02 pm
Hi Linda,
I have a similar question to the one posed above. How would one describe an EFA using categorical variables in Mplus (for a write-up)? Would it be correct to write, "a factor analysis using the WLSMV
estimator and promax rotation"? What would you write about the extraction method?
Linda K. Muthen posted on Wednesday, October 19, 2005 - 3:04 pm
Mplus does promax and varimax rotations. And the factor extraction method is the estimator, so if you are using WLSMV, it would be weighted least squares.
James J. Prisciandaro posted on Friday, September 01, 2006 - 12:46 pm
Hi Dr.s Muthen,
In EFA, Mplus outputs eigenvalues from the sample correlation matrix (i.e., with 1's on the diagonal) that can be used to determine the number of factors to retain (e.g., using scree plot, parallel
analysis). However, some researchers have argued that when one is conducting an EFA, it may be more accurate to use eigenvalues from the reduced correlation matrix (e.g., with communalities on the
diagonal) to determine the number of factors to retain. I was hoping you could explain to me how to obtain these latter eigenvalues in MPlus. In particular, how to do so in the context of my current
research situation: EFA with binary data (WLSMV estimation; data are weighted).
Jim Prisciandaro
Bengt O. Muthen posted on Monday, September 04, 2006 - 4:13 pm
Mplus does not produce eigenvalues for a reduced correlation matrix. I use a scree plot of eigenvalues for the unadjusted sample correlation matrix. Same with binary items and sampling weights.
Julia Diemer posted on Thursday, May 03, 2007 - 11:37 pm
Just a quick question: Is there a way of doing a parallel analysis (Horn) for determining the number of factors to extract in EFA with Mplus?
Thanks in advance,
Julia Diemer
Linda K. Muthen posted on Friday, May 04, 2007 - 7:11 am
I am not familiar with this method, and it certainly isn't directly implemented in Mplus. I am not sure whether it could be done in Mplus using the Monte Carlo simulation features, however.
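Horn's parallel analysis retains a factor when its observed eigenvalue exceeds the corresponding eigenvalue distribution from random uncorrelated data of the same dimensions. A generic sketch in Python (NumPy only; this is the textbook algorithm, not a description of any Mplus internals):

```python
import numpy as np

def parallel_analysis(data, n_draws=200, quantile=0.95, seed=0):
    """Horn's parallel analysis for the number of components to retain.

    Compares the eigenvalues of the sample correlation matrix with the
    eigenvalues of correlation matrices from random normal data of the
    same shape; retain components whose observed eigenvalue exceeds the
    chosen quantile of the random eigenvalues at the same position.
    """
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.empty((n_draws, p))
    for d in range(n_draws):
        noise = rng.standard_normal((n, p))
        rand[d] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    threshold = np.quantile(rand, quantile, axis=0)
    return int(np.sum(obs > threshold)), obs, threshold

# Demo: 3-factor simulated data should come back as 3 retained components.
rng = np.random.default_rng(42)
loadings = np.zeros((12, 3))
for j in range(3):
    loadings[4 * j:4 * j + 4, j] = 0.7   # 4 items per factor
data = rng.standard_normal((600, 3)) @ loadings.T + rng.standard_normal((600, 12))
n_retain, obs, thr = parallel_analysis(data)
print("retained:", n_retain)
```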
Xuan Huang posted on Friday, June 08, 2007 - 3:05 pm
Dear professors,
I conducted an EFA with eight 7-point scale items. I treated these variables as categorical and used WLSMV as the estimator. I got one eigenvalue larger than 1, which is 5.419; all other eigenvalues range from
.149 to .610. The eigenvalues indicate that a one-factor model may be good.
Here are my results:
One-factor: χ²(14)=109.876, P=0.0000, RMSEA=.154, RMSR=.0469;
Two-factor: χ²(10)=47.601, P=0.0000, RMSEA=.114, RMSR=.0280;
Three-factor: χ²(6)=21.236, P=0.0017, RMSEA=.094, RMSR=.0165;
Four-factor: χ²(2)=4.803, P=.0906, RMSEA=.07, RMSR=.009, one residual variance is negative.
I am confused about how to interpret the eigenvalues. They indicate a one-factor model, but the one-factor model has a large RMSEA value and a significant χ².
Could you give me some hints on the inconsistency between what the eigenvalues suggest and what the model fit indices suggest?
Thank you very much!
Linda K. Muthen posted on Friday, June 08, 2007 - 3:17 pm
When these eight items were developed, for how many dimensions were they developed?
Xuan Huang posted on Friday, June 08, 2007 - 3:40 pm
Thanks for your reply. The eight items were developed to measure one dimension:
parental warmth.
Linda K. Muthen posted on Friday, June 08, 2007 - 4:20 pm
In view of that, the eigenvalues, and the RMSR, I would conclude one factor. I would, however, look at the other factor solutions, see which items cross-load, and think about whether that is what you
would expect. Some items may not be behaving properly.
QianLi Xue posted on Wednesday, September 30, 2009 - 6:31 am
Hi, Linda,
Does MPLUS provide summary statistics for the amount or % of variance explained by each of the factors in EFA?
Bengt O. Muthen posted on Wednesday, September 30, 2009 - 8:42 am
No, because amount of variance explained is not the focus of factor analysis, but rather of principal component analysis. Also, the percentages are well-defined only for orthogonal rotations such as
Varimax, which may not be an optimal rotation method. In the case of orthogonal rotation, you can compute the percentages yourself by summing the squared loadings in a column.
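For an orthogonal solution, the computation Muthén describes is a one-liner: sum the squared standardized loadings in each column and divide by the number of variables. A sketch with a made-up varimax-style loading matrix (not from any real analysis):

```python
import numpy as np

# Hypothetical orthogonally rotated loading matrix (6 variables, 2 factors).
loadings = np.array([[0.75, 0.10],
                     [0.70, 0.05],
                     [0.65, 0.15],
                     [0.10, 0.70],
                     [0.05, 0.65],
                     [0.20, 0.60]])

ssl = (loadings**2).sum(axis=0)        # sum of squared loadings per factor
pct = 100 * ssl / loadings.shape[0]    # % of total (standardized) variance
print("sum of squared loadings:", np.round(ssl, 4))
print("% variance explained   :", np.round(pct, 1))
```

The percentages never sum to 100 unless the factors reproduce the variables perfectly; the remainder is unique variance.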
Helen Skerman posted on Wednesday, September 07, 2011 - 10:11 pm
In an EFA of categorical variables, I have negative eigenvalues for 2 of the 32 variables. Is the solution inadmissible? Can this be ignored, or should I make some adjustment such as eliminating some
low-frequency variables? I tried "LISTWISE=ON", but this made no difference. Any other suggestions would be appreciated.
Bengt O. Muthen posted on Thursday, September 08, 2011 - 7:20 am
I think this is ignorable. With categorical variables and WLSMV, you work with tetrachoric and polychoric correlations, which are computed for pairs of variables at a time and can therefore produce a
non-positive definite sample correlation matrix - one with some negative eigenvalues. You can still get a positive definite model-estimated correlation matrix. If the model fits well to this sample
correlation matrix, you can view the situation as one where the sample correlation matrix was not "significantly non-positive definite." There have been ideas in the literature about deleting the eigenvalues
and eigenvectors for the negative eigenvalues, recreating the sample correlation matrix this way (smoothing it), and then fitting the model, but I am not sure that is an important improvement.
If you use ML instead, this issue does not come up, because ML does not fit the model to those sample correlations.
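The smoothing idea mentioned here (delete the negative-eigenvalue part, rebuild the matrix) can be sketched as follows. The toy non-positive-definite matrix below is constructed directly for illustration rather than estimated from real pairwise polychorics:

```python
import numpy as np

def smooth_to_psd(R, floor=0.0):
    """Clip negative eigenvalues, rebuild, then restore a unit diagonal."""
    vals, vecs = np.linalg.eigh(R)
    vals = np.clip(vals, floor, None)       # delete the negative part
    S = vecs @ np.diag(vals) @ vecs.T       # reconstruct the matrix
    d = np.sqrt(np.diag(S))
    S = S / np.outer(d, d)                  # rescale back to correlations
    np.fill_diagonal(S, 1.0)
    return S

# A symmetric "correlation" matrix that is not positive definite,
# as can happen when correlations are estimated pair by pair.
R = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.9],
              [0.1, 0.9, 1.0]])
print("eigenvalues before:", np.round(np.linalg.eigvalsh(R), 3))

S = smooth_to_psd(R)
print("eigenvalues after :", np.round(np.linalg.eigvalsh(S), 3))
```

With `floor=0.0` the smoothed matrix is positive semidefinite but singular; a small positive floor gives a strictly positive definite result.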
Lisa M. Yarnell posted on Tuesday, November 29, 2011 - 7:40 pm
Hello. When one is choosing among EFA factor solutions using criteria such as the scree plot and overall model fit, is it true that when you pick a greater number of factors to be extracted, there is
necessarily better fit (according to CFI, TLI, and RMSEA)? Or can it sometimes occur that extracting more factors actually results in a more poorly fitting model than extracting fewer
factors? Thanks.
Linda K. Muthen posted on Wednesday, November 30, 2011 - 9:27 am
Chi-square will improve but I don't necessarily think that would hold with CFI, TLI, and RMSEA. Note that there is a maximum number of factors that can be extracted from a set of indicators. Also,
you can get negative residual variances which make the solution inadmissible.
Tracy Waasdorp posted on Thursday, March 01, 2012 - 7:11 am
In an EFA, what is the equation used to calculate the eigenvalues for ML?
Thank you
Linda K. Muthen posted on Thursday, March 01, 2012 - 8:01 am
It is the regular algorithm for computing the eigenvalues of a matrix - here, the sample correlation matrix. Try Googling eigenvalue.
emmanuel bofah posted on Monday, September 24, 2012 - 11:48 am
Which example in chapter 4 can save the eigenvalues, so I can use the O’Connor
(2000) macros to generate the random-data eigenvalues? It is a process recommended in the recent volume: Hancock, G. R., & Mueller, R. O. (Eds.). (2013). Structural equation modeling: A second
course (2nd ed.). Charlotte, NC: Information Age Publishing, Inc.
Linda K. Muthen posted on Monday, September 24, 2012 - 12:04 pm
The eigenvalues are printed in the output. They are not saved. This test will be in Version 7 using the PARALLEL option.
ellen posted on Tuesday, October 15, 2013 - 11:00 am
I just installed Mplus version 7.11 yesterday. It runs well with regular SEM analyses. I wanted to perform a Parallel Analysis to determine the optimum number of factors in an exploratory factor
analysis. I used the syntax below, but it's been running since yesterday evening till now for 15 hours, and it's still running with no output available yet. I am wondering whether I made a mistake in
the syntax. Why is it taking so long?
NAMES ARE sex age race y1-y50;
USEVARIABLES ARE y1-y50;
MISSING IS all (-99);
TYPE = EFA 1 50;
PARALLEL = 1000;
TYPE= PLOT2 ;
could you please let me know whether this is the correct syntax to run a Parallel Analysis?
Thanks so much!
Linda K. Muthen posted on Tuesday, October 15, 2013 - 11:13 am
You are asking for 50 factor solutions with 1000 random data sets for each of the 50 solutions. I would imagine that could take some time, given that you have 50 items. The problem is most likely that you
are trying to extract too many factors and are getting negative residual variances, which can cause slow convergence. I would choose a range of factors related to the number of factors for which the
data were developed. For example, if the fifty items should contain four factors, I would perhaps ask for solutions from 1 to 6 or 2 to 6.
Calculation of the Dielectric Constant as a Function of Temperature Close to the Smectic A-Smectic B Transition in B5 Using the Mean Field Model
Advances in Condensed Matter Physics
Volume 2012 (2012), Article ID 262069, 4 pages
Research Article
Calculation of the Dielectric Constant as a Function of Temperature Close to the Smectic A-Smectic B Transition in B5 Using the Mean Field Model
^1Department of Physics, Middle East Technical University, 06531 Ankara, Turkey
^2Department of Physics, Yuzuncu Yil University, 65080 Van, Turkey
Received 4 July 2012; Revised 24 September 2012; Accepted 25 September 2012
Academic Editor: Durga Ojha
Copyright © 2012 Hamit Yurtseven and Emel Kilit. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
The temperature dependence of the static dielectric constant is calculated close to the smectic A-smectic B transition (71.3°C) for the liquid crystal compound B5. By expanding the free
energy in terms of the order parameter in the mean field theory, the expression for the dielectric susceptibility (dielectric constant) is derived and fitted to the experimental data, which were
obtained at field strengths of 0 and 67 kV/cm from the literature. The coefficients in the free energy expansion are determined from our fit for the transition of B5. Our results show that the observed
behaviour of the dielectric constant close to the transition in B5 can be described satisfactorily by our mean field model.
1. Introduction
Various smectic phases which occur in ferroelectric liquid crystals are of interest to study close to the phase transitions. In the smectic A phase, the long axes of the liquid crystal molecules are
parallel to the director which is perpendicular to the smectic layers. In the smectic C (or C*) phase, those molecules are tilted (a tilt angle between the long axis and the director) and in the
presence of the chiral molecules, the AC (or AC*) transition becomes more interesting to study in ferroelectric liquid crystals. As in the smectic C (or C*) phase, the molecules are tilted in the
smectic G phase and it has been observed experimentally [1] that the transitions between the ferroelectric phases (smectic A-C, smectic A-G and smectic C-G) are influenced by an applied electric
field. However, it has also been observed [1] that transitions between the nonferroelectric phases (smectic A-B, smectic B-E) are not influenced by an applied electric field. This has been
demonstrated experimentally [1] for the nonferroelectric phases of smectic A, B, and E of compound B5 by measuring the temperature dependence of the dielectric constant at fixed field strengths.
Some theoretical models have been given in the literature to explain the transitions between the smectic phases. The mean field models, where the free energy is expanded in terms of the order
parameters with coupling terms, have been used to analyze the experimental data. Regarding the spontaneous polarization and the tilt angle, a bilinear coupling [1, 2] and a biquadratic coupling
[3–5] in the free energy expansion have been used in the mean field models for the smectic AC (or AC*) transitions. In our earlier studies [6–10], we have also studied the mean field models with the
bilinear and biquadratic couplings for the AC (or AC*) transitions in ferroelectric liquid crystals.
In this study, we focus on the smectic A-B transition in compound B5 and analyze the experimental data [1] for the temperature dependence of the dielectric constant at constant electric fields of
0 and 67 kV/cm. For this analysis, we use our mean field model [6] with a biquadratic coupling between the order parameters (the polarization and a long-range bond-orientational order [11]) for the
smectic A-smectic B transition in B5.
Below, in Section 2 we give our mean field model for the smectic A-B transition. In Section 3, our calculations and results are given. We discuss our results in Section 4 and finally, conclusions are
given in Section 5.
2. Theory
The smectic A-smectic B transition can be described by a free energy expanded in terms of the two order parameters, namely, the polarization (smectic A and B phases) and a long-range bond-orientational
order (smectic B phase only), under an external electric field; this expansion is given in (1). The leading coefficient of the expansion depends on temperature as in (2), where the transition temperature
between the smectic A and B phases, a constant dielectric susceptibility, and the permittivity enter; the remaining coefficients in (1) are constants, one of them being the biquadratic coupling constant.
The temperature dependence of the polarization and of the bond-orientational order can be obtained from the minimization of the free energy (1) with respect to each order parameter in turn, which gives
the two equilibrium conditions in (3).
Substituting the polarization from (3) into (2) then gives, in (4), the free energy in terms of the bond-orientational order parameter only, with the new variables defined in (5). Since (4) gives
the free energy of the smectic B phase (in terms of the bond-orientational order parameter of this phase), the dielectric susceptibility can be obtained as a function of temperature by taking the
second derivative of the free energy with respect to the polarization, which gives the reciprocal susceptibility in (6).
In order to predict the temperature dependence of the reciprocal dielectric susceptibility (6), a functional form of the long-range bond-orientational order parameter in the smectic B phase is needed,
which can be adopted from the molecular field theory [12]. Close to the smectic A-smectic B transition, a power-law formula is valid, as given in (7).
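The molecular-field form referred to here is the standard one. Since the article's equations were lost in extraction, the symbols below are supplied for illustration only: with an order parameter $\psi$ and transition temperature $T_c$, the self-consistency relation and its expansion near the transition read

```latex
\psi = \tanh\!\left(\frac{T_c}{T}\,\psi\right),
\qquad
\psi \simeq \sqrt{3}\,\left(1-\frac{T}{T_c}\right)^{1/2}
\quad \text{as } T \to T_c^{-},
```

i.e. a power law with the mean-field exponent $\beta = 1/2$, consistent with the power-law formula (7).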
3. Calculations and Results
We analyzed the temperature dependence of the dielectric susceptibility according to (6), which was fitted to the experimental data for the dielectric constant [1] close to the smectic A-smectic B
transition in B5. In (6), we first calculated the temperature dependence of the order parameter using the power-law formula (7), where the temperature for the smectic A-smectic B transition was taken
as 71.3°C at zero electric field in B5. By fitting (6) to the experimental data [1], the fitted parameters below the transition were obtained, as given in Table 1. Similarly, (6) was fitted to the
experimental data [1] for the electric field of 67 kV/cm close to the smectic A-smectic B transition, with the fitted parameters also given in Table 1. The remaining parameter values were deduced from
the experimental data [1] through (6). Since the transition temperatures are not shifted by the external electric field for nonferroelectric phases [1], we took the same transition temperature
(71.3°C) for 67 kV/cm in B5, as shown in Figure 1. Thus, we get the same fits (6) for the fixed bias field strengths of 0 and 67 kV/cm for the smectic A-smectic B transition in B5 (Figure 1).
In the same manner, we analyzed the experimental data for the dielectric constant [1] according to (1) using the fixed electric fields of 0 and 67 kV/cm. Values of the fitted parameter a are given
within the temperature intervals in Table 2. These data are also plotted in Figure 1 with the observed data [1].
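The fitting procedure can be sketched generically. The functional form below, a reciprocal susceptibility linear in the squared order parameter with psi = (1 - T/Tc)^(1/2) and Tc = 71.3°C, is an assumption for illustration (the article's own equations (6) and (7) are not reproduced above), and the "data" are synthetic:

```python
import numpy as np

Tc = 71.3  # transition temperature in deg C (from the article)

def psi(T, beta=0.5):
    """Mean-field order parameter below Tc (power law, exponent 1/2)."""
    return np.where(T < Tc, (1 - T / Tc) ** beta, 0.0)

# Synthetic "measured" reciprocal susceptibility below Tc, following the
# assumed form chi_inv = a1 + a2 * psi**2 with illustrative coefficients.
rng = np.random.default_rng(3)
T = np.linspace(60.0, 71.0, 40)
a1_true, a2_true = 0.02, 0.15
chi_inv = a1_true + a2_true * psi(T) ** 2 + 0.001 * rng.standard_normal(T.size)

# Linear least-squares fit of chi_inv against psi**2.
X = np.column_stack([np.ones_like(T), psi(T) ** 2])
(a1_fit, a2_fit), *_ = np.linalg.lstsq(X, chi_inv, rcond=None)
print("fitted a1 = %.4f, a2 = %.4f" % (a1_fit, a2_fit))
```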
4. Discussion
We analyzed here the temperature dependence of the dielectric constant through the reciprocal dielectric susceptibility (6) in our mean field model with the biquadratic coupling for the smectic
A-smectic B transition in B5. As pointed out previously, the transition temperature is not shifted for the external electric fields of 0 and 67 kV/cm (Figure 1), due to the fact that the smectic A and
smectic B phases of compound B5 are nonferroelectric. For the ferroelectric transitions of smectic A-smectic C, smectic A-smectic G, and smectic C-smectic G of compound A6, it has been observed
experimentally [1] that the transition temperatures are shifted under various fixed field strengths. It has also been observed [1] that transitions between nonferroelectric phases (smectic B-smectic E)
are not influenced by an electric field for the compound B5 studied here. Regarding nonferroelectric phases, in fact the smectic A phase at high temperatures occurs at zero electric field only, since an
external electric field induces a tilt angle such as exists in the smectic C phase. A tilted smectic phase possesses a spontaneous electric polarization if the molecules have a permanent dipole
moment and are chiral [13].
As shown in Figure 1, the dielectric constant increases abruptly with decreasing temperature near the smectic A-smectic B transition (71.3°C) of compound B5. This is due to the larger electroclinic
effect in the smectic B phase [1].
As we pointed out previously, the dielectric constant (or dielectric susceptibility) was calculated as a function of temperature from (6) in our mean field model with the biquadratic coupling (1) for
the smectic A-smectic B transition of compound B5 (Figure 1). In this model, the long-range bond-orientational order parameter was considered as the primary order parameter and the polarization as
the secondary order parameter in the free energy expansion (1). Through the biquadratic coupling, quadrupolar interactions between the molecules are attributed to the mechanism of the smectic
A-smectic B transition of compound B5 in our mean field model. Regarding the temperature dependence of the dielectric constant calculated from (6), which was fitted to the experimental data [1] as
stated above, our mean field model describes the observed behaviour satisfactorily for the smectic A-smectic B transition of compound B5. This indicates that the main mechanism for this transition
is due to quadrupole-quadrupole interactions, which involve a long-range bond-orientational ordering in B5.
5. Conclusions
The dielectric constant was predicted using our mean field model with a biquadratic coupling between the polarization and the long-range bond-orientational order parameter for the smectic
A-smectic B transition at constant electric fields in compound B5. The predicted dielectric constant was fitted to the experimental data from the literature, and the fitted parameters were determined
for the smectic A-smectic B transition of compound B5. It was shown here that our mean field model describes the observed behaviour of the dielectric constant adequately for this transition of the
liquid crystalline material studied here. The observed data also show that the transition temperature is not shifted under the electric field for this nonferroelectric transition.
1. Ch. Bahr, G. Heppke, and B. Sabaschus, “Influence of an electric field on phase transitions in ferroelectric liquid crystals,” Liquid Crystals, vol. 11, no. 1, pp. 41–48, 1992.
2. Ch. Bahr and G. Heppke, “Influence of electric field on a first-order smectic-A–ferroelectric-smectic-C liquid-crystal phase transition: a field-induced critical point,” Physical Review A, vol. 41, pp. 4335–4342, 1990.
3. C. C. Huang and S. Dumrongrattana, “Generalized mean-field model for the smectic-A chiral-smectic-C phase transition,” Physical Review A, vol. 34, no. 6, pp. 5020–5026, 1986.
4. T. Carlsson, B. Zeks, A. Levstik, C. Filipic, I. Levstik, and R. Blinc, “Generalized Landau model of ferroelectric liquid crystals,” Physical Review A, vol. 36, no. 3, pp. 1484–1487, 1987.
5. R. Blinc, “Models for phase transitions in ferroelectric liquid crystals: theory and experimental results,” in Phase Transitions in Liquid Crystals, S. Martellucci and A. N. Chester, Eds., Plenum Press, New York, NY, USA, 1992.
6. S. Salihoğlu, H. Yurtseven, A. Giz, D. Kayışoğlu, and A. Konu, “The mean field model with $P^2\theta^2$ coupling for the smectic A-smectic C* phase transition in liquid crystals,” Phase Transitions, vol. 66, pp. 259–270, 1998.
7. S. Salihoğlu, H. Yurtseven, and B. Bumin, “Concentration dependence of polarization for the AC* phase transition in a binary mixture of liquid crystals,” International Journal of Modern Physics B, vol. 12, no. 20, pp. 2083–2090, 1998.
8. H. Yurtseven and E. Kilit, “Temperature dependence of the polarization and tilt angle under an electric field close to the smectic AC* phase transition in a ferroelectric liquid crystal,” Ferroelectrics, vol. 365, no. 1, pp. 122–129, 2008.
9. E. Kilit and H. Yurtseven, “Calculation of the dielectric constant as a function of temperature near the smectic AC* phase transition in ferroelectric liquid crystals,” Ferroelectrics, vol. 365, no. 1, pp. 130–138, 2008.
10. H. Yurtseven and M. Kurt, “Tilt angle and the temperature shifts calculated as a function of concentration for the AC* phase transition in a binary mixture of liquid crystals,” International Journal of Modern Physics B, vol. 25, no. 13, pp. 1791–1806, 2011.
11. Z. Kutnjak and C. W. Garland, “Generalized smectic-hexatic phase diagram,” Physical Review E, vol. 57, no. 3, pp. 3015–3020, 1998.
12. R. Brout, Phase Transitions, chapter 2, Benjamin, New York, NY, USA, 1965.
13. R. B. Meyer, L. Liebert, L. Strzelecki, and P. Keller, “Ferroelectric liquid crystals,” Journal de Physique Lettres, vol. 36, no. 3, pp. 69–71, 1975.
Can any compactly supported continuous function be written as a linear combination of functions with small support
Does anyone have a reference for the following result? I am pretty sure that it is true, and it should not be hard to prove, but it would surprise me if it is not already proven in many places:
Let $G$ be a locally compact Abelian group and $U$ an open precompact set in $G$. Then for all $f \in C_C(G)$ we can find $n$ and $f_1,\dots,f_n \in C_C(G)$ so that $$f=f_1+\dots+f_n \qquad (*)$$ and for all
$i$ we have ${\rm supp}(f_i) \subset t_i+U$ for some $t_i \in G$.
Here $C_C(G)$ denotes the space of compactly supported continuous functions on $G$.
ca.analysis-and-odes harmonic-analysis
The key word you want is "partition of unity", which you can find explained in any differential topology textbook. The proof is a consequence of the standard partition of unity argument applied to
a finite open cover of $supp(f)$ of the form $\{t_i+U\}$, which you get using compactness of $supp(f)$. Also, the "abelian" hypothesis is unnecessary. – Lee Mosher Jun 20 '13 at 16:34
@Lee Mosher Thank you. – Nick S Jun 20 '13 at 16:40
What exactly do you need a reference for? There are many simple proofs of it (Stone-Weierstraß, partition of unity, approximate identity), but probably there is no place where it is stated exactly
in this form. – The User Jun 20 '13 at 17:02
Sorry, I thought you were interested in an approximation (because you had mentioned Stone-Weierstraß). For precise results you will need partitions of unity. – The User Jun 20 '13 at 18:08
And you do not have to look into differential topology textbooks: it is a standard result from general topology that a space admits partitions of unity if and only if Urysohn’s lemma holds, if and
only if the space is normal. Then you have to consider the Alexandrov compactification. – The User Jun 20 '13 at 18:18
1 Answer
Regarding partitions of unity: Every locally compact group is paracompact, thus it is normal, thus there exist partitions of unity. But you do not need this argument: For general locally
compact groups and any compact subset $A$ and any finite precompact open cover of $A$ there exists a continuous function with the value $1$ on $A$ and vanishing outside the union of the
cover which can be written as the sum of continuous functions supported on the elements of the cover (Folland, Real Analysis, page 134). You use the cover described in the comment by Lee
and then you take these functions given by the theorem and multiply them with the function $f$ (the proof uses the Alexandrov compactification which is compact, thus it is normal).
up vote 0 First I thought you were only interested in an approximation (because you had mentioned Stone Weierstraß), for an approximation you can use this proof: If $\mathcal{U}$ is a local basis
down vote at identity consisting of compact sets and $(f_U)_{U\in \mathcal{U}}$ is any family of continuous, non-negative, symmetric functions such that $\int_G f_U=1$ and $\mathrm{supp}(f_U)\
accepted subset U$ for all $U\in \mathcal{U}$, then $(f_U)_{U\in \mathcal{U}}$ is an approximate identity for the convolution algebra of compactly supported continuous functions (reference: Gerald
B. Folland, A Course in Abstract Harmonic Analysis, proposition 2.42). Since convolutions can be approximated by finite linear combinations of left-translates of the functions, we get
your result.
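In symbols, the last step can be compressed as follows (my own sketch of the standard argument, not a quotation from Folland; $L_y f(x) = f(y^{-1}x)$ denotes left translation):

```latex
% For f compactly supported and continuous and U a small member of the
% local basis, the approximate-identity property gives
\[
  f \;\approx\; f_U * f \;=\; \int_G f_U(y)\, L_y f \, dy
  \;\approx\; \sum_{i=1}^{m} c_i \, L_{y_i} f ,
  \qquad c_i \ge 0,\quad \sum_i c_i \approx 1,
\]
% where the last step discretizes the integral (a Riemann-sum argument
% using uniform continuity of f and compactness of supp(f_U)).
```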
Mplus Discussion >> Size of classes in the sem mixture modeling
Anonymous posted on Tuesday, May 10, 2005 - 5:07 am
I work on an SEM mixture model. The model with four classes is better than the three-class model: AIC and BIC are lower and entropy is greater, but in this four-class model the size of the third class is only 13 cases, [4146, 223, 13, 1832]. Is it correct to choose such a model? Is it preferable to choose the three-class model, [4209, 1853, 152]?
Thank you for your help.
bmuthen posted on Tuesday, May 10, 2005 - 5:55 am
If the small class in the 4-class solution is clearly interpretable, it seems you can go with 4 classes. In this context, I would also bring in covariates before I make the decision.
Anonymous posted on Tuesday, May 10, 2005 - 6:53 am
Thank you very much Dr Muthén
Lisanne Warmerdam posted on Tuesday, July 26, 2005 - 3:11 am
Dear Mr. Muthen,
I have a similar question about sample sizes. I don't have data yet, but I'm thinking about how many respondents I need in my project. I will use general growth mixture modeling to search for
subgroups in the sample, based on intercept and slope. It's more or less exploratory research. My question is: what is the minimum number of respondents there has to be in a subgroup? Doesn't a
subgroup with, for example, 10 subjects result in very low power?
Thanks for helping
bmuthen posted on Tuesday, July 26, 2005 - 7:57 am
10 subjects in a class can be sufficient or inadequate - it depends very much on the model and its parameter values. For example, you may be in the advantageous situation that your model has only 2
parameters specific to the small class with 10 subjects - e.g. the means of the intercept and slope - while other parameters of that class are the same as those of other classes. In this case you may
for example have enough power to reject that the slope mean is zero if the slope mean estimate has a low enough SE - which is a function of how clearly the data determines this slope. You can do a
Monte Carlo simulation study to shed light on this, but if you have no pilot data nor strong theory, it is hard to choose parameter values for such a study.
Annie Desrosiers posted on Thursday, October 05, 2006 - 12:45 pm
Hi, I have a question about assignment to classes.
I use the mixture model and I want to know if it's possible to know, for every individual, which class they are assigned to.
Thank you.
analysis: type = mixture missing;
starts = 20 2;
model: %overall%
i s q | hyb1@0 hyb2@1 hyb3@2 hyb4@3;
plot: type = plot3;
series = hyb1-hyb4(*);
Linda K. Muthen posted on Thursday, October 05, 2006 - 1:37 pm
Use the CPROBABILITIES option of the SAVEDATA command.
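For instance, a minimal SAVEDATA block looks like this (a sketch only; the file name is arbitrary and the rest of the input is assumed unchanged):

```
savedata:
    file = cprobs.dat;
    save = cprobabilities;
```

The saved file then contains, for each case, the estimated posterior probabilities for each class along with the most likely class membership.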
Master Course Description
No: EE 233
Title: CIRCUIT THEORY
Credits: 5 (4 lecture - 1 laboratory)
Coordinator: Mani Soma, Professor of Electrical Engineering
Goals: To learn how to analyze electric circuits in the frequency domain; to calculate power for electric circuits; to recognize and analyze common filters such as low-pass, high-pass, band-pass, and
band-reject both for passive and active circuits; to learn how to use laboratory instruments such as the function generator, oscilloscope and multimeter for analyzing electric circuits that you build
in the laboratory; to learn how to use MultiSim; to learn how to write a lab report on your experiments; to prepare students for more advanced courses in circuit analysis and design.
Learning Objectives: At the end of this course, students will be able to:
1. Identify linear circuits, passive and active filters.
2. Develop analytical models for circuits in the frequency domain by using Kirchhoff's current and voltage laws, Ohm's law, mesh analysis, nodal analysis, Thevenin and Norton equivalents, phasor,
and Laplace Transform techniques.
3. Analyze linear circuits and passive and active filters with sinusoidal inputs.
4. Design simple circuits and passive and active filters to meet given specifications.
5. Derive the power generated/absorbed in a circuit when there are sinusoidal inputs.
6. Use MultiSim to verify the results of frequency domain circuit analysis.
7. Measure basic signal parameters (amplitude, frequency, etc.) using basic laboratory instruments: oscilloscope, power supply, function generator, and multimeter.
Textbooks: J.W. Nilsson and S.A. Riedel, Electric Circuits, 9th Edition. Prentice Hall, 2010.
Prerequisites by Topic:
1. DC circuit analysis (EE 215)
2. Transient analysis of electric circuits in the time domain (EE 215)
3. Solution of first and second order linear differential equations
4. Manipulation of complex numbers
Topics:
1. Sinusoidal sources and responses, Phasors, network theorems (2 weeks, Ch 9)
2. Average and Reactive power, complex power, power factor (1 week, Ch 10)
3. Laplace transformation techniques (2 weeks, Ch. 12)
4. Circuit analysis with Laplace Transforms, transfer functions (1 week, Ch 13)
5. Passive filters (2 weeks, Ch. 14)
6. Active filters (2 weeks, Ch 15)
7. Basic EE laboratory, components, instrumentation and simulation (in Laboratory section)
Course Structure: Lecture (3 hours / week), Quiz (2 hours / week), Laboratory (3 hours / week). Weekly homework. Weekly quizzes. Three exams in class (two midterms and one final). Hands-on lab exam
at the end of the quarter.
Computer Resources: Use of MultiSim simulation software for analysis of electrical circuits related to the content of the laboratory.
Laboratory: At the end of the quarter, each student is required to take an individual hands-on exam in the Laboratory to demonstrate sufficient knowledge in using the instruments. Representative
topics of the experiments are listed below.
1. Introduction to laboratory instruments (power supply, multimeter, function generator, oscilloscope).
2. Step input response of RC circuits. Report required.
3. AC steady state analysis of RC and RLC circuits, frequency response, simple filters. Report required.
4. Operational amplifiers in both time and frequency domains. Report required.
5. Design and analysis of simple and more complex filters. Report required.
Grading: 20% Homework, 20% Laboratories, 5% Lab Test, 5% Quizzes, 25% Two Midterms, 25% Final Exam
Outcome coverage: (a) An ability to apply math, science and engineering knowledge. The vast majority of the lectures, homework, quizzes, and laboratories deal with the application of circuit theory
to analyze and design linear passive circuits, passive filters, and active op amp filters. Mathematical formulations are commonplace throughout the course. Relevance: H.
(b) An ability to design and conduct experiments, as well as to analyze and interpret data. All of the laboratory experiments require students to build circuits, collect data, and analyze data to
demonstrate that the circuits perform as designed. Relevance: L.
(e) An ability to identify, formulate, and solve engineering problems. The homework and laboratory experiments involve solving engineering problems identified in the assignments or in the experiment
descriptions. Relevance: M.
(g) An ability to communicate effectively. Students are required to write and submit laboratory report for each experiment. The body of the lab report must include the following sections: abstract,
introduction, lab procedure, experimental results, analysis of results, conclusions, team roles, appendix. Relevance: L.
(k) An ability to use the techniques, skills, and modern engineering tools necessary for engineering practice. Students use Matlab or a similar software tool to solve homework problems. Students use
MultiSim to simulate circuits built in the laboratory. Relevance: H.
Prepared By: Linda Bushnell
Last Revised: 10/15/2012
Finding a sensible balance for natural hazard mitigation with mathematical models
Uncertainty issues are paramount in the assessment of risks posed by natural hazards and in developing strategies to alleviate their consequences.
In a paper published last month in the SIAM/ASA Journal on Uncertainty Quantification, the father-son team of Jerome and Seth Stein describe a model that estimates the balance between costs and
benefits of mitigation—efforts to reduce losses by taking action now to reduce consequences later— following natural disasters, as well as rebuilding defenses in their aftermath. Using the 2011
Tohoku earthquake in Japan as an example, the authors help answer questions regarding the kinds of strategies to employ against such rare events.
"Science tells us a lot about the natural processes that cause hazards, but not everything," says Seth Stein. "Meteorologists are steadily improving forecasts of the tracks of hurricanes, but
forecasting their strength is harder. We know a reasonable amount about why and where earthquakes will happen, some about how big they will be, but much less about when they will happen. This
situation is like playing the card game '21', in which players see only some of the dealer's cards. It is actually even harder, because we do not fully understand the rules of the game, and are
trying to figure them out while playing it."
Earthquake cycles—triggered by movement of the Earth's tectonic plates and the resulting stress and strain at plate boundaries —are irregular in time and space, making it hard to predict the timing
and magnitude of earthquakes and tsunamis. Hence, forecasting the probabilities of future rare events presents "deep uncertainty," Stein says. "Deep uncertainties arise when the probabilities of
outcomes are poorly known, unknown, or unknowable. In such situations, past events may give little insight into future ones."
Another conundrum for authorities in such crisis situations is the appropriate amount of resources to direct toward a disaster zone. "Much of the problem comes from the fact that formulating
effective natural hazard policy involves using a complicated combination of geoscience, mathematics, and economics to analyze the problem and explore the costs and benefits of different options. In
general, mitigation policies are chosen without this kind of analysis," says Stein. "The challenge is deciding how much mitigation is enough. Although our first instinct might be to protect ourselves
as well as possible, resources used for hazard mitigation are not available for other needs. For example, does it make sense to spend billions of dollars building buildings in the central U.S. to the
same level of earthquake resistance as in California, or would these funds do more good if used otherwise?"
The Japanese earthquake and tsunami in 2011 toppled seawalls 5-10 meters high. The seawalls being rebuilt are about 12 meters high, and would be expected to protect against large tsunamis expected
every few hundred years. But critics argue that it would be more cost effective and efficient to focus on relocation and evacuation strategies for populations that may be affected by such tsunamis
rather than building higher seawalls, especially in areas where the population is small and dwindling.
In this paper, Stein says, the authors set out to "find the amount of mitigation—which could be the height of a seawall or the earthquake resistance of buildings—that is best for society." The
objective is to provide methods for authorities to use their limited resources in the best possible way in the face of uncertainty.
Selecting an optimum strategy, however, depends on estimating the expected value of damage. This, in turn, requires prediction of the probability of disasters.
It is still unknown whether to assume that the probability of a large earthquake on a fault line is constant with time (as routinely assumed in hazard planning) or whether the probability gets
smaller after the last incidence and increases with time. Hence, the authors incorporate both these scenarios using the general probability model of drawing balls from an urn. If an urn contains
balls that are labeled "E" for event and "N" for no event, each year is like drawing a ball. "If after drawing a ball, we replace it, the probability of an event stays constant. Thus an event is
never 'overdue' because one has not happened recently, and the fact that one happened recently does not make another less likely," explains Stein. "In contrast, we can add E-balls after a draw when
an event does not occur, and remove E-balls when an event occurs. This makes the probability of an event increase with time until one happens, after which it decreases and then grows again."
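The two urn schemes can be sketched in a few lines of Python (ball counts are my own illustration, not values from the paper):

```python
def constant_hazard(p, years):
    # Drawing with replacement: the yearly event probability never changes,
    # so an event is never "overdue".
    return [p] * years

def renewal_hazard(e0, n, years):
    # Start with e0 E-balls and n N-balls; after each event-free year one
    # E-ball is added, so the hazard grows until an event resets it.
    probs, e = [], e0
    for _ in range(years):
        probs.append(e / (e + n))
        e += 1
    return probs

print(constant_hazard(0.01, 5))  # flat: [0.01, 0.01, 0.01, 0.01, 0.01]
print(renewal_hazard(1, 99, 5))  # strictly increasing with elapsed time
```

With replacement the hazard stays flat; with E-balls added after event-free years it climbs until the next event resets it.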
Since the likelihood of future earthquakes depends on strain accumulation at plate boundaries, the model incorporates parameters for how fast strain accumulates between quake incidences, and strain
release that happens during earthquakes.
The authors select the optimal mitigation strategy by using a general stochastic model, which is a method used to estimate the probability of outcomes in different situations under constrained data.
They minimize the expected present value of damage, the costs of mitigation, and the risk premium, which reflects the variance, or inconsistency, of the hazard. The optimal mitigation is the bottom
of a U-shaped curve summing up the cost of mitigation and expected losses, a sensible balance.
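A toy version of that U-shaped tradeoff (the functional forms and numbers are my own assumptions, not the paper's model):

```python
import math

def total_cost(x, c=1.0, loss0=100.0, k=0.5):
    # Illustrative U-shaped tradeoff: linear mitigation cost c*x plus
    # expected losses that decay exponentially with the mitigation level x.
    return c * x + loss0 * math.exp(-k * x)

# Grid search for the bottom of the U; for this form, setting the
# derivative to zero gives x* = ln(k * loss0 / c) / k.
xs = [i * 0.01 for i in range(2001)]
x_best = min(xs, key=total_cost)
x_star = math.log(0.5 * 100.0 / 1.0) / 0.5
print(round(x_best, 2), round(x_star, 2))  # both approximately 7.8
```

The point of the sketch is only that too little mitigation leaves large expected losses and too much wastes resources, so the optimum sits between the extremes.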
To determine the advantages and pitfalls of rebuilding after such disasters, the authors present a deterministic model. Here, outcomes are precisely determined by taking into account relationships
between states and events. The authors use this model to determine if Japan should invest in nuclear power plant construction given the Fukushima Daiichi nuclear reactor meltdown during the 2011
tsunami. Taking into account the financial and societal benefits of reactors, and balancing them against risks—both financial and natural—the model determines the preferred outcome.
Such models can also be applied toward other disaster situations, such as hurricanes and floods, and toward policies to diminish the effects of climate change. Stein gives an example: "Given the
damage to New York City by the storm surge from Hurricane Sandy, options under consideration range from doing nothing, using intermediate strategies like providing doors to keep water out of
vulnerable tunnels, to building up coastlines or installing barriers to keep the storm surge out of rivers. In this case, a major uncertainty is the effect of climate change, which is expected to
make flooding worse because of the rise of sea levels and higher ferocity and frequency of major storms. Although the magnitude of these effects is uncertain, this formulation can be used to develop
strategies by exploring the range of possible effects."
More information: Formulating Natural Hazard Policies under Uncertainty, epubs.siam.org/doi/abs/10.1137/120891149. Jerome L. Stein and Seth Stein, SIAM/ASA Journal on Uncertainty Quantification, 1 (1), 42-56. (Online publish date: March 27, 2013).
ReBlogging the Likelihood Principle #2: Solitary Fishing:SLP Violations
Reblogging from a year ago. The Appendix of the “Cox/Mayo Conversation” (linked below [i]) is an attempt to quickly sketch Birnbaum’s argument for the strong likelihood principle (SLP), and its
sins. Couple of notes: Firstly, I am a philosopher (of science and statistics) not a statistician. That means, my treatment will show all of the typical (and perhaps annoying) signs of being a
trained philosopher-logician. I’ve no doubt statisticians would want to use different language, which is welcome. Second, this is just a blog (although perhaps my published version is still too
informal for some).
But Birnbaum’s idea for comparing evidence across different methodologies is also an informal notion! He abbreviates by Ev(E, x): the inference, conclusion or evidence report about the parameter μ
arising from experiment E and result x, according to the methodology being applied.
So, for sampling theory (I prefer “error statistics”, but no matter), the report might be a p-value (it could also be a confidence interval with its confidence coefficent, etc).
The strong LP is a general conditional claim:
(SLP): For any two experiments E’ and E” with different probability models but with the same unknown parameter μ, and x’ and x” data from E’ and E” respectively, where the likelihoods of x’ and x” are proportional to each other, then x’ and x” ought to have the identical evidential import for any inference concerning parameter μ.
For instance, E’ and E” might be Binomial sampling with n fixed, and Negative Binomial sampling, respectively. There are pairs of outcomes from E’ and E” that could serve in SLP violations. For a
more extreme example, E’ might be sampling from a Normal distribution with a fixed sample size n, and E” might be the corresponding experiment that uses an optional stopping rule: keep sampling until
you obtain a result 2 standard deviations away from a null hypothesis.
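For the binomial/negative-binomial pair, the proportionality of likelihoods is easy to verify numerically (the particular numbers — k = 3 successes with n fixed at 12, versus sampling until the 3rd success, which happens to occur on trial 12 — are my own illustration):

```python
from math import comb

def binom_lik(theta, n=12, k=3):
    # E': n fixed in advance; k successes observed.
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

def negbinom_lik(theta, r=3, n=12):
    # E'': sample until the r-th success, which occurs on trial n.
    return comb(n - 1, r - 1) * theta**r * (1 - theta)**(n - r)

# The ratio is a constant (C(12,3)/C(11,2) = 4) for every theta, i.e. the
# likelihoods are proportional -- exactly the antecedent of the SLP.
print([round(binom_lik(t) / negbinom_lik(t), 6) for t in (0.1, 0.3, 0.5, 0.7)])
# [4.0, 4.0, 4.0, 4.0]
```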
Suppose we are testing the null hypothesis that μ = 0 (and for simplicity, a known standard deviation).
The SLP tells us (in relation to the optional stopping rule) that once you have observed a 2-standard deviation result, there ought to be no evidential difference between its having arisen from
experiment E’, where n was fixed at 100, and experiment E” where the stopping rule happens to stop at n = 100 (i.e., it just happens that a 2-standard deviation result was observed after n = 100 observations).
The key point is that there is a difference in the corresponding p-values from E’ and E”, which we may write as p’ and p”, respectively. While p’ would be ~.05, p” would be much larger, perhaps ~.3 (the numbers do not matter). The error probability accumulates because of the optional stopping.
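A quick Monte Carlo illustrates the accumulation (my own sketch: standard normal draws under the null, peeking after every observation up to n = 100):

```python
import random

def one_experiment(rng, n_max=100, z_crit=2.0):
    # One run under the null (mu = 0, sigma = 1 known).  Returns whether a
    # fixed-n test rejects at n_max, and whether "peeking" after every draw
    # ever crossed the 2-standard-deviation boundary.
    s, peeked = 0.0, False
    for n in range(1, n_max + 1):
        s += rng.gauss(0.0, 1.0)
        if abs(s / n ** 0.5) >= z_crit:
            peeked = True
    return abs(s / n_max ** 0.5) >= z_crit, peeked

rng = random.Random(0)
reps = 4000
fixed = stop = 0
for _ in range(reps):
    f, p = one_experiment(rng)
    fixed += f
    stop += p
# The fixed-n rejection rate stays near .05; the optional-stopping rate
# is several times larger.
print(fixed / reps, stop / reps)
```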
Clearly p’ is not equal to p”, so the two outcomes are not evidentially equivalent for a frequentist. This constitutes a violation of the strong LP (which of course is just what is proper for a frequentist).
Unless a violation of the SLP is understood, it will be impossible to understand the issue about the Birnbaum argument. Some people are forgetting that for a “sampling theory” person, evidential
import must always consider the sampling distribution. This sounds awfully redundant, and it is, but given what I’m reading on some blogs, it bears repeating.
One excellent feature of Kadane’s book is that he is very clear in remarking how frequentists violate the SLP.
I should note that Birnbaum himself rejected the SLP.
The SLP is a conditional (if-then claim) that makes a general assertion, about any x’, x” that satisfy the conditions in the antecedent. Therefore, it is false so long as there is any case where the
antecedent holds and the consequent does not. Any STP violation takes this form.
(SLP Violation): Any case of two experiments E’ and E” with different probability models but with the same unknown parameter μ, where
• x’ and x” are results from E’ and E” respectively,
• likelihoods of x’ and x” are proportional to each other
• AND YET x’ and x” have different evidential import (i.e., Ev(E’,x’) is not equal to Ev(E”, x”))
I’ll wait a bit to continue with this. I am traveling around different countries, so blog posts may be erratic (with possible errors you'll point out).
(Made it to Zurich and rented car to Konstanz)
[i] “A Statistical Scientist Meets a Philosopher of Science: A Conversation between Sir David Cox and Deborah Mayo”
9 thoughts on “ReBlogging the Likelihood Principle #2: Solitary Fishing:SLP Violations”
With respect to your example of the normal experiment with the optional stopping rule, the SLP does not imply what you claim it implies. In experiment E”, the probability distribution of the
outcome is not the product of distributions of independent and identically distributed normal random variables. Rather, it’s a random walk Markov chain with an absorbing boundary that grows with
the square root of N. Ignoring the actual data for the moment, the probability distribution for the number of trials in the experiment depends on mu, which means that when considered as a
likelihood, the number of trials alone is informative about mu. In fact, for some values of mu, there’s a non-zero probability that the experiment never terminates…
See Wikipedia’s article on the law of the iterated logarithm for more information about the behavior of this random walk (without the boundary).
Here’s a case where the SLP applies. Suppose you’re measuring the weight of an item. In experiment E’, the scale’s display has an upper bound of 99.9. If the number would exceed that bound, it displays “ERR” instead. In experiment E”, the upper
bound is so large relative to the weight of the items that it is effectively infinite. In both experiments, the measured weight is subject to random error of known distribution (the same
distribution in E’ and E”) and the number of trials is fixed. The SLP asserts that when all measured weights are below 100.0, the data provide the same evidence no matter which scale was used. In
contrast, the error statistical approach requires taking the truncation of the sample space into account when computing p-values, creating confidence procedures or constructing rejection regions.
I think the binomial/negative binomial sampling problem is a special case that has perhaps misled some Bayesian philosophers or even statisticians into making false claims about optional
stopping. Chapter 6 of Gelman’s Bayesian text provides the actual theory giving the conditions in which the data collection mechanism can safely be ignored.
In the above comment, imagine a paragraph break wherever the space after a period is missing.
Sorry, are you saying the SLP does not deny the relevance of the stopping rule in this example? It does (see for example Savage forum 1962)–and famously so*. I don’t deny there are many,
many OTHER violations, which is why I recommend studying them before approaching the Birnbaum argument. Any will do to make the argument more vivid, even though the issue doesn’t depend
on any particular violation of the SLP. Excuse me if I’m missing your point…
*”The likelihood principle emphasized in Bayesian statistics implies, among other things, that the rules governing when data collection stops are irrelevant to data interpretation. It is
entirely appropriate to collect data until a point has been proved or disproven … (Edwards, Lindman and Savage 1963, p. 193).
“In general, suppose that you collect data of any kind whatsoever — not necessarily Bernoullian, nor identically distributed, nor independent of each other …— stopping only when the data
thus far collected satisfy some criterion of a sort that is sure to be satisfied sooner or later, then the import of the sequence of n data actually observed will be exactly the same as
it would be had you planned to take exactly n observations in the first place (ibid., 238-239)”.
This irrelevance of the stopping rule is sometimes called the Stopping Rule Principle (SRP); it is an implication of the (strong) LP.
Edwards, W., H. Lindman and L. J. Savage. 1963. “Bayesian Statistical Inference for Psychological Research.” Psychological Review 70, 450-499.
Sad to say, but the famous statisticians you quote are simply wrong on the math.
This blog format isn’t great for trying to write equations, but the reason is essentially that in E’ there are some sample paths that cross the stopping boundary before the final sample size is reached, whereas those sample paths have zero probability in E”, and the probability mass for the set of those sample paths is a function of mu. If you like, I can email you a pdf file showing the sampling densities of the data in the two experiments, from which it will be obvious that the likelihoods are not proportional.
Actually, the above explanation is incorrect. The reason is that the fact that the experiment terminated conveys information about mu.
Erm, they’re not wrong if they were talking about two-sided tests… I was thinking about one-sided tests because that’s what the formaldehyde paper is about.
The example of the SLP violation here refers to two-sided tests. It is an extreme example, but it’s the one often used, as in the “Savage forum.” Of course there are tons of others, less extreme.
In any event, the Birnbaum argument begins with an SLP violation. That is why I say understanding such violations is needful to understand the issue.
I think you’re implicitly referring to the difference in sampling distributions rather than in likelihoods.
Corey: While the number of trials N alone is indeed informative about the model parameter, the joint density of N and data(N) is truly proportional to the density of data(N). This may sound
surprising but it is nonetheless the case. Check with the normal example. Or in Berger and Wolpert (1988).
The effects of population aging on optimal redistributive taxes in an overlapping generations model
Brett, Craig (2008): The effects of population aging on optimal redistributive taxes in an overlapping generations model.
The impact of population aging on the steady state solution to an Ordover-Phelps (1979) overlapping generations optimal nonlinear income tax problem with two types of workers and
quasilinear-in-leisure preferences is investigated. A decrease in the rate of population growth, which leads to an aging population, increases the relative price of consumption per person in
retirement, which tends to decrease optimal consumption for retirees of both skill types. It is also shown that the optimal steady state rate of interest equals the rate of population growth. As a
result, the steady state interest rate unambiguously declines when the rate of population growth declines. The resulting adjustments in production plans have an ambiguous effect on the aggregate wage
rate. This article identifies factors contributing to an increase in the aggregate wage when the population ages, namely normality of consumption in retirement, complementarity between capital and
labor in production, and a large capital deepening effect relative to the increase in dependency owing to demographic change. Depending on the sign of this wage effect, ambiguities may arise in the
direction of change in the optimal steady state consumption and production plans. It is also shown that the optimal marginal income tax rates are independent of the rate of population growth.
Item Type: MPRA Paper
Original Title: The effects of population aging on optimal redistributive taxes in an overlapping generations model
Language: English
Keywords: optimal income taxation; overlapping generations model; population aging
Subjects: H - Public Economics > H2 - Taxation, Subsidies, and Revenue > H21 - Efficiency; Optimal Taxation
D - Microeconomics > D8 - Information, Knowledge, and Uncertainty > D82 - Asymmetric and Private Information; Mechanism Design
Item ID: 8585
Depositing User: Craig Brett
Date Deposited: 06 May 2008 04:49
Last Modified: 19 Feb 2013 16:37
References:
Apps, P., Rees, R., 2006. Repeated optimal nonlinear income taxation, unpublished manuscript, University of Sydney and University of Munich.
Berliant, M., Ledyard, J. O., 2005. Optimal dynamic nonlinear income taxes with no commitment, downloadable from the Working Paper Archive at Washington University at St. Louis, http:/
Boadway, R., Cuff, K., Marchand, M., 2000. Optimal income taxation with quasi-linear preferences revisited. Journal of Public Economic Theory 2, 435-460.
Boadway, R., Pestieau, P., 2007. Tagging and redistributive taxation. Annales d'Economie et de Statistique 83-84, 123-147.
Brett, C.,Weymark, J. A., 2008a. The impact of changing skill levels on optimal nonlinear income taxes. Journal of Public Economics 92, 1765-1771.
Brett, C., Weymark, J. A., 2008b. Optimal nonlinear taxation of income and savings without commitment. Discussion paper 08-W05, Vanderbilt University.
Brett, C., Weymark, J. A., 2008c. Public good provision and the comparative statics of optimal nonlinear income taxation. International Economic Review 49, 255-290.
Cutler, D. M., Poterba, J. M., Sheiner, L. M., Summers, L. H., 1990. An aging society: Opportunity or challenge? Brookings Papers on Economic Activity 1, 1-56.
Dillen, M., Lundholm, M., 1996. Dynamic income taxation, redistribution, and the ratchet effect. Journal of Public Economics 59, 69-93.
Hamilton, J., Pestieau, P., 2005. Optimal income taxation and the ability distribution:Implications for migration equilibria. International Tax and Public Finance 12, 29-45.
Intriligator, M. D., 1971. Mathematical Optimization and Economic Theory. Prentice-Hall, Englewood Cliffs.
McDaniel, S. A., 2003. Toward disentangling policy implications of economic and demographic changes in Canada's aging population. Canadian Public Policy 29, 491-509.
Meijdam, L., Verbon, H. A. A., 1997. Aging and public pensions in an overlapping generations model. Oxford Economic Papers 49, 29-42.
Mirrlees, J. A., 1971. An exploration in the theory of optimum income taxation. Review of Economic Studies 38, 175-208.
Myles, G. D., 1995. Public Economics. Cambridge University Press, Cambridge.
Ordover, J., Phelps, E., 1979. The concept of optimal taxation in the overlapping generations model of capital and wealth. Journal of Public Economics 12, 1-26.
Pirttilä, J., Tuomala, M., 2001. On optimal non-linear taxation and public good provision in an overlapping generations economy. Journal of Public Economics 79, 485-501.
Simula, L., 2007. Optimality conditions and comparative static properties of non-linear income taxes revisited, unpublished manuscript, Paris School of Economics.
Visco, I., 2001. The fiscal implications of ageing populations in OECD countries, Organization for Economic Cooperation and Development, presented at the Oxford Centre on Population Ageing Pensions Symposium, June.
Weymark, J. A., 1986. A reduced-form optimal nonlinear income tax problem. Journal of Public Economics 30, 199-217.
Weymark, J. A., 1987. Comparative static properties of optimal nonlinear income taxes. Econometrica 55, 1165-1185.
URI: http://mpra.ub.uni-muenchen.de/id/eprint/8585 | {"url":"http://mpra.ub.uni-muenchen.de/8585/","timestamp":"2014-04-19T22:11:52Z","content_type":null,"content_length":"25848","record_id":"<urn:uuid:1cadf581-2865-4a7f-9880-95205ff2d3db>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00459-ip-10-147-4-33.ec2.internal.warc.gz"} |
College Math Refresher - OEDB.org
College Math Refresher
Whether you have a Ph.D. in civil engineering or a bachelor’s degree in music, chances are you’ll have to use math at some point in your adult life. From taxes and cooking to design projects and
computer programming, math is — and will remain — an essential skill set. This refresher guide was designed to help you revitalize your rusty math skills for today’s world, and can even introduce you
to a few new concepts along the way.
Five Mathematical Areas to Familiarize Yourself With
Throughout the history of science and philosophy, the concept of reason has been picked apart by the world’s greatest minds. Of all the many investigations made into the concept, mathematics remains
one of the best tools we can use to get closer to fully understanding the “laws” which govern reality. Below is a brief overview of several mathematical areas, with brief descriptions and test
questions you can use to study.
Pre-Algebra
This area of mathematics is usually studied by middle school students as an introduction to algebra. In some senses, it's the backbone of everyday math. Therefore, refreshing yourself on the core
concepts can be quite easy if you have some background with arithmetic, multiplication and division in the form of basic algebraic equations.
Test Your Knowledge!
Solve for “x”:
1. 2 + x = 4
2. (15 – 5) – x = 10
There are a few steps we need to follow in order to "solve for x." The key is to isolate the "x" on one side of the equal sign. For the first question, simply subtract 2 from both sides of the equal sign.
Subtracting 2 from 2 equals 0, and 2 from 4 equals 2 (the x itself is left alone). Now your equation should look like 0 + x = 2, which is essentially x = 2. Now try the second question and
see how you do!
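Both steps amount to simple arithmetic; as a quick check, here is a short Python sketch (the helper name is just for illustration) that reproduces both answers:

```python
def solve_for_x(total, known):
    """Isolate x in an equation of the form known + x = total."""
    return total - known

# Question 1: 2 + x = 4  ->  subtract 2 from both sides, so x = 2.
q1 = solve_for_x(4, 2)

# Question 2: (15 - 5) - x = 10  ->  10 - x = 10, so x = 0.
q2 = (15 - 5) - 10
```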
Calculus
Typically, students study some form of calculus as a junior or senior in high school, or in the first years of college. It certainly is not the easiest area of mathematics! The invention of calculus
is often credited to 17th century thinkers Isaac Newton and Gottfried Wilhelm Leibniz and is defined as the mathematical study of change. Change, as we all know, can be quite complicated, and the
study of calculus definitely follows suit.
Because calculus equations can get rather complicated and unwieldy, there will not be any practice questions for you to take in this article. If you’re looking to spark your memory, this basic
calculus refresher created by Dr. Ismor Fischer at the University of Wisconsin is a great resource to keep handy.
Geometry
Like calculus, this area of mathematics is something many people tend to just "get" or struggle their whole lives trying to grasp. It requires a keen sense of spatial intuition, in addition to a robust
knowledge of algebra and trigonometry. Systematized by the ancient Greek mathematician Euclid, geometry is the study of shapes and sizes, as well as distinguishing the various properties of space.
Test Your Knowledge!
1. What is the supplementary angle to 24?
2. Find the area of a circle with a radius of 7.
For the first question, you should know that supplementary angles must add up to a total of 180. So solving this question is rather easy, just subtract 24 from 180 and voila! Question number two
requires the memorization of a formula (geometry requires that you memorize many different formulas). The formula you need in order to find the area of a circle is: Area = Pi (3.14) x Radius squared.
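If you like, you can check both answers with a few lines of Python (the 3.14 approximation of Pi matches the guide; the function name is just for illustration):

```python
def circle_area(radius, pi=3.14):
    """Area = Pi x Radius squared, using the guide's 3.14 approximation."""
    return pi * radius ** 2

# Question 1: supplementary angles add up to 180, so subtract 24 from 180.
supplementary = 180 - 24

# Question 2: radius 7 gives 3.14 * 49 = 153.86.
area = circle_area(7)
```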
Trigonometry
Trigonometry is quite similar to geometry, except it's all about triangles. You'd never realize how important triangles are to our world until you study a bit of trigonometry. In addition to
naturally following geometry during a student's course of study in mathematics, it also tends to precede the study of calculus. Many of the functions and equations learned in trigonometry are often
heavily used in careers that rely upon applied mathematics, such as engineering.
Test Your Knowledge!
1. Find the Sine of angle A with an adjacent side of 8, an opposite side of 9 and a hypotenuse of 10.
If the question above seems like a bunch of gibberish, don't worry; you're not alone. Without going into too much detail, the Sine is really just a ratio of two sides of a right triangle: the
opposite side of angle A and the triangle's hypotenuse (the diagonal side). So, the equation would be: Sin A = opposite / hypotenuse (the adjacent side doesn't matter for this formula). See if you can do the math!
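As a quick numeric check of the ratio just described (the function name is purely illustrative):

```python
def sine_from_sides(opposite, hypotenuse):
    """Sin A = opposite / hypotenuse in a right triangle."""
    return opposite / hypotenuse

# Opposite side 9, hypotenuse 10; the adjacent side of 8 plays no role here.
sin_a = sine_from_sides(9, 10)   # 0.9
```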
Applied Math
As the name would imply, the various systems of applied math are considered to be the more practical areas of mathematics. Examples of applied math include statistics, computer science and economics.
Typically, areas within applied mathematics fall under the auspices of the liberal sciences, and do not often require the same degree of mathematical knowledge needed by students of the hard
sciences.
Test Your Knowledge!
1. (Probability) If you roll a die once, what is the probability that it will show an even number?
2. (Economics) Last month, shoppers bought 200 boxes of cereal out of 250 total boxes from the grocery store. This month, 300 shoppers wanted to buy cereal, but only 250 new boxes of cereal arrived.
Did the supply or demand increase for cereal this month? Was the demand met?
The first question is pretty easy compared to other probability scenarios. On a normal,
6-sided die, there are 6 possible outcomes: 3 possible even numbers and 3 possible odd numbers. Therefore, there is a 50% (1/2) chance the die will show an even number. The second question is also
relatively simple. Since 300 shoppers (100 more than the previous month) wanted to buy cereal, the demand obviously increased. Moreover, 50 boxes were carried over from the previous month, making the
total number of boxes available for this month 300. Therefore, the supply also increased and the demand was met.
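Both answers are easy to verify with a few lines of Python (variable names are just for illustration):

```python
from fractions import Fraction

# Probability: 3 of the 6 faces on a standard die are even.
p_even = Fraction(3, 6)          # reduces to 1/2, i.e. a 50% chance

# Economics: 200 of 250 boxes sold last month, so 50 carry over;
# 250 new boxes arrive, giving 300 available for 300 shoppers.
carried_over = 250 - 200
available = carried_over + 250
demand_met = available >= 300
```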
Resources for Refreshing Your Math Skills
Below is a list of some of the greatest resources you can use to continue refreshing your knowledge of mathematics:
• The Ultimate Math Refresher for the GRE, GMAT, and SAT: This comprehensive study guide is an excellent resource for refreshing your skills in several mathematical areas, such as arithmetic,
algebra and geometry. Not only does the book go over these areas in depth, it can help you prepare for the math component of several college and graduate school entrance exams.
• Oxford University Mathematics OpenCourseWare: This site, hosted by Oxford University, offers several massive open online courses (MOOCs) across many areas of mathematics that any sort of
student can study for free. While many of these courses probably shouldn’t be taken by beginners, the courses available, such as Introduction to Pure Mathematics, offer great refresher materials
for college or grad school students.
• S.O.S. Math: Helpful to both beginning and advanced math enthusiasts, this site is a boon for many of us who need to brush up on our math skills across several popular areas of mathematics. In
addition to several areas of math outlined earlier in this guide, S.O.S. Math also provides helpful information about books and other math-related sites you can learn from.
• King of Math: This game app for iOS devices provides people with a fun way to refresh their skills across several areas of mathematics. The game starts you off as a farmer who must level up by
answering several fast-paced math exercises across areas such as statistics, arithmetic and geometry.
Math Makes the World Go Round
Resources that refresh our skills across various areas of mathematics are not only helpful for promoting our ability to think quantitatively, but can also prepare you for real-world situations
where such knowledge might come in handy. Even though your current major or profession may not require a strong knowledge of math to get by, taking some time to refresh yourself on many of the topics
outlined above could end up helping you in more ways than you think.
Answers to Questions Above:
• Pre-Algebra: 2; 0
• Geometry: 156; 153.86
• Trigonometry: 0.9
• Applied Math: (See Guide) | {"url":"http://oedb.org/ilibrarian/college-math-refresher/","timestamp":"2014-04-16T08:00:30Z","content_type":null,"content_length":"49210","record_id":"<urn:uuid:421bbd4c-7cad-4cff-a774-c60e2a73f553>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00578-ip-10-147-4-33.ec2.internal.warc.gz"} |
Domain / Page Authority - Logarithmic
Comments latest first
Matt Peters
Hi Adam,
You can see a distribution of DA over the entire Mozscape index here:
(look at the bottom panel in the 4th chart, titled "Domain Authority, distribution full index"). Note that this plots log(Domain Count), so there are many, many more domains with small DA than large
ones! The mean is about 10.88 and the median is 8.77.
Keyword difficulty uses the Mozscape metrics and it too takes some logs before computing the score. Since the raw metrics are highly skewed, we apply the log frequently to remove some of the
skewness. I'm not sure what the median difficulty score is and this would be really hard to calculate since it would also depend on the distribution of keywords themselves which will have a very long
tail. The best estimate I could make would just use the difficulty scores for keywords that have been run in the tool, but we don't save those in an easily usable form.
December 05, 2012 04:06 PM
Well, if your input data is rescaled logarithmically, then yes, it would be impossible to tell me what a DA of 30 is in relationship to a higher or lower DA.
But can you tell me this? What is the distribution of DA values across all values? It would be nice to know that "the median DA across all sites in our database is x." That would at least put the
numbers in some perspective - and it's perspective I'm trying to get.
Can you also confirm if the "keyword difficulty" is also calculated with logarithmic inputs? And what's the median keyword difficulty?
December 04, 2012 11:10 PM
Matt Peters
Hi eatyourveggies,
The scales on PA and DA run from 1-100, with the largest, most important sites on the internet having PA/DA of 100 (Google, Facebook, etc.). Beyond that, we don't attribute any special meaning to a
value of "30" or "50" other than as a relative ordering. The keyword difficulty scale is similar, with 100 signifying the most difficult keyword to rank for.
PA and DA are the output from a machine learning model that we then rescale to values between 1-100. The raw output from the model is dimensionless and doesn't have any interesting meaning. The
rescaling is linear, but the inputs to the model are rescaled logarithmically before being used in the model. We use the natural log (base e) but the base is pretty arbitrary since one can transform
from one base to another by changing coefficients, and the coefficients themselves are set in a regression. The key point is that since the inputs have a log applied to them it is much harder to
increase DA from, say, 70 to 80 than it is from 30 to 40.
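The practical consequence of log-scaled inputs can be illustrated with a toy model. This is not Moz's actual formula — just a hypothetical sketch of why high scores are progressively harder to move:

```python
import math

def toy_score(raw_metric):
    """Hypothetical authority score that is linear in log10 of a raw metric.
    Illustrative only -- not the real PA/DA model."""
    return 10 * math.log10(raw_metric)

def raw_needed(score):
    """Invert the toy score: the raw metric required to reach a given score."""
    return 10 ** (score / 10)

# Under this toy model, moving from 30 to 40 needs far less raw growth
# than moving from 70 to 80:
jump_30_40 = raw_needed(40) - raw_needed(30)   # 10,000 - 1,000 = 9,000 raw units
jump_70_80 = raw_needed(80) - raw_needed(70)   # 100,000,000 - 10,000,000 raw units
```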
December 03, 2012 04:26 PM | {"url":"https://seomoz.zendesk.com/entries/22546573-domain-page-authority-logarithmic","timestamp":"2014-04-17T21:22:11Z","content_type":null,"content_length":"21695","record_id":"<urn:uuid:5a403f55-88b2-450f-9f56-2ecad2996a57>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
IMU-Net 56: November 2012
A Bimonthly Email Newsletter from the International Mathematical Union
Editor: Mireille Chaleyat-Maurel, University Paris Descartes, Paris, France
Mathematics of Planet Earth 2013 (MPE2013) is about to start. It has
grown from a North-American initiative to an unprecedented gathering
under the patronage of UNESCO of more than one hundred partners from
all around the world. And new partners join regularly. The success of
MPE2013 comes from the fact that it is timely. Indeed, addressing
climate change and sustainability issues requires the use and
development of sophisticated mathematical tools. A concerted, massive,
long-term involvement of the mathematical sciences in collaboration
with other disciplines is essential to any significant progress in the
understanding of planetary problems. The world mathematical community
is becoming more and more aware of the urgency of planetary and
sustainability issues, and it is time to train a new generation of
young researchers to sustainability problems. For all these reasons,
an impressive number of scientific programs and workshops have been
organized around the world. And the effort will not stop in 2013.
Already in the US, the NSF has funded follow-up activities in 2014.
The outreach component of MPE2013 is no less important. National
launches of MPE2013 are organized in several countries with public
activities. MPE public lectures will take place around the world in
2013. Congresses of teachers will highlight MPE2013. In the US, the
theme of the Math Awareness month will be sustainability, while the
French week of mathematics will be on MPE. Mathematical magazines and
enrichment material are prepared so as to raise the interest of the
future scientists still in the schools.
An important component of MPE2013 is its Open Source Exhibition of
museum-quality exhibits (modules) that will be hosted through the
Imaginary Project by the Mathematisches Forschungsinstitut Oberwolfach.
The launch of the exhibition is organized by IMU jointly with MPE2013.
It will occur at the UNESCO Headquarters in Paris on March 5th 2013 and
modules will be exhibited at UNESCO on March 5 to 8. The basis of the
exhibition will come from the MPE competition
(www.mpe2013.org/competition), the winners of which will receive their
prize at the launch.
MPE2013 is an exceptional opportunity of increasing the collaborations
of mathematicians with other disciplines. For IMU, it provides an
occasion of strengthening its links with other scientific unions, with
ICIAM and with ICSU bodies, on capacity-building projects in different
areas of the world. The workshop on mathematics of climate change,
related hazards and risks (see item 3) as a satellite activity of the
Mathematical Congress of the Americas 2013 is a first step in this
It is not too late to join MPE2013, to benefit from collaborating with
the planet, and to share your enthusiasm with your students or the
public. The spirit of MPE2013 is there to stay.
Christiane Rousseau,
Vice-president of IMU
IMU on the Web
- IMU's CEIC has been represented by Olga Caprotti at the sessions
organized by the ICSU's World Data System at the 23rd CODATA
International Conference -- Open Data and Information for a Changing
Planet -- held in Taipei on 28-31 October 2012 [1]. Discussions have
centered on data publication and data citation, the summary 'Research
Data enters Scholarly Communication' is online [2]. IMU's poster was
presented during the WDS Members' Forum -- to update member
organizations on ongoing activities [3].
[1] http://www.codata2012.com/
[3] http://www.icsu-wds.org/images/files/WDS_Members_Forum_Programme.pdf
- Open Access Week was recognized at many schools, research
institutes, universities and colleges across the world last month. A
good summary of the spectrum of such activities is presented at this
web page, http://www.openaccessweek.org/ .
- At the same time, concerns about ongoing sustainability of the
various financial aspects of OA are voiced in various quarters. For
instance, three mathematical societies in France have gone on record
with cautionary advice to the government regarding undesirable effects
of attempts to rely upon author-funded publication.
- We continue to track the progress of the effort to redesign the
governance and support model for arxiv.org. As of September 2012, the
Simons Foundation has announced beginning 2013 through 2017 support in
the amount of up to $300,000 per year as matching funds to that from
contributing institutions. If arxiv.org is important to you, please
check its web site for the list of institutional contributors. Is
yours there? http://arxiv.org/help/support
- Some interesting things happen without much fanfare. It is not
clear that many noticed that Elsevier has recently made open access a
number of backfiles of mathematical journals. This is a good
outcome, one might wish they had come to this decision much earlier.
http://www.elsevier.com/wps/find/P11.cws_home/archivedjournals Please
have a look.
Workshop of Mathematics of Climate Change, Related Natural Hazards
This is the first announcement of a 5-day workshop that is organized
as a satellite activity of the 2013 Mathematical Congress of the
Americas at CIMAT in Guanajuato (Mexico) during July 29 -- August 2
2013. The workshop will bring together about 40 young researchers,
mainly from Latin America and the Caribbean and a dozen distinguished
scientists, each of whom will give several lectures on a chosen topic.
The workshop is part of the world initiative "Mathematics of Planet
Earth 2013" which is endorsed by IMU (www.mpe2013.org). It is jointly
organized by IMU together with the International Union of Geodesy and
Geophysics (IUGG) and the International Union of Theoretical and
Applied Mechanics (IUTAM). It is supported by the International
Council of Industrial and Applied Mathematics (ICIAM), by ICSU
Regional Office for Latin America and the Caribbean, by two
interdisciplinary bodies of ICSU, namely IRDR (Integrated Research on
Disaster Risk) and WCRP, by the US National Academy of Sciences, by
the Academia Mexicana de Ciencias, and by CIMAT (Centro de
Investigación en Matemáticas) in Mexico. Hopefully the workshop will
be funded by ICSU. The members of the Scientific Committee are Susan
Friedlander (IMU), Ilya Zaliapin (IUGG) and Paul F. Linden (IUTAM).
The website will be ready to receive applications by January 15 2013.
More details at: http://cams.usc.edu/mathgeo/
Nominations for IMU Awards 2014
The President of the IMU, Ingrid Daubechies, has written to the
Adhering Organizations, asking them to submit nominations for the IMU
awards listed below.
* Fields Medals - fields14-chair(at)mathunion.org
The Fields Medals are awarded every 4 years on the occasion of the
International Congress of Mathematicians to recognize outstanding
mathematical achievement for existing work and for the promise of
future achievement.
* Rolf Nevanlinna Prize - nevanlinna14-chair(at)mathunion.org
The Nevanlinna Prize is awarded once every 4 years at the
International Congress of Mathematicians, for outstanding
contributions in mathematical aspects of information sciences.
* Carl Friedrich Gauss Prize - gauss14-chair(at)mathunion.org
The Gauss Prize is awarded once every 4 years to honor a scientist
whose mathematical research has had an impact outside mathematics -
either in technology, in business, or simply in people's everyday lives.
* Chern Medal Award - chern14-chair(at)mathunion.org
The Chern Medal is awarded every 4 years on the occasion of the
International Congress of Mathematicians to an individual whose
accomplishments warrant the highest level of recognition for
outstanding achievements in the field
of mathematics.
* Leelavati Prize, sponsored by Infosys - leelavati14-chair(at)mathunion.org
The Leelavati Prize is intended to accord the high recognition and great
appreciation of the IMU and Infosys to outstanding contributions for
increasing public awareness of mathematics as an intellectual
discipline and the crucial role it plays in diverse human endeavors.
* ICM 2014 Emmy Noether Lecture - noether14-chair(at)mathunion.org
The ICM Emmy Noether lecture is a special lecture at an ICM which
honors women who have made fundamental and sustained contributions to
the mathematical sciences.
More details about each of these awards and the Noether lecture, as
well as lists of past laureates, can be found on the IMU Web site, at URL:
Deadline for nominations: December 31, 2012
The names of the chairs of the various prize committees and their
contact information can be found at:
The names of the other prize committee members remain confidential and
will be announced at the Opening Ceremony of ICM 2014 only.
Call for Nominations for the Mathematical Congress of the Americas Prizes
The organizers of the Mathematical Congress of the Americas 2013
invite nominations for the prizes to be delivered in connection with
the Congress (see
http://www.mca2013.org/prizes.html). There are 12 Prizes:
- five MCA Prizes of USD$ 1,000 each will be awarded to mathematicians
who are no more than 12 years past their PhD in August 2013 and
either received their graduate education or currently hold a position
in one or more
countries in the Americas.
- five Americas Prizes of USD$ 5,000 each will be awarded to
individual or groups in recognition of their work to enhance
collaboration and the development of research that links
mathematicians in several countries in the Americas.
- two Solomon Lefschetz Medals carrying a cash award of USD$ 5,000
will be given to mathematicians in recognition of their excellence in
research and their contributions to the development of Mathematics in
a country or countries in the Americas.
Nominations and requests for information concerning the nominating
process should be sent by e-mail to
The deadline for nominations is January 31, 2013.
Call for Nomination for the 2013 Ramanujan Prize
The Ramanujan Prize has been awarded annually since 2005. The 2013
Prize will be jointly funded and administered by ICTP and the IMU.
The Ramanujan Prize is usually awarded to one person, but may be
shared equally among recipients who have contributed to the same body
of work. Eligible for
the prize is a person who has conducted outstanding work in a
developing country, he/she must be less than 45 years of age on 31
December of the year of the award.
February 1, 2013 is the deadline for nominations.
Nominations are to be sent to math(at)ictp.it.
Fome (Friends of Mathematics Education) Conference
The Committee on Education of the European Mathematical Society is
organizing a Conference on March 14-15, 2013 in Berlin "Friends of
Mathematics Education - A European Initiative -" to which all European
foundations, NGOs and institutions which are engaged in mathematics
education, are invited. Contact: Prof. Dr. Guenter Toerner, Chair of
EMS Committee on Education) guenter.toerner(at)uni-due.de
Mathematics of Planet Earth 2013 (MPE2013)
Several news about MPE2013 can be found in the editorial and in the item 3.
It is not too late to participate in the "Mathematics of Planet
Competition for an open source exhibition of virtual modules"
(museum-quality exhibits): www.mpe2013.org/competition. The modules
submitted will form the basis of the permanent Mathematics of Planet
Earth Open Source Exhibition which will be launched at the UNESCO
Headquarters in Paris on March 5-8 2013.
Examples of modules or themes to be covered are available on the website.
The competition is open until December 20, 2012.
If you have not visited the website recently (www.mpe2013.org), then
please do so: new partners and new activities are posted regularly.
Also, some educational and bibliographic resources are now starting to
be posted on the website, and several partners committed to produce
many more during 2013.
In addition to the MPE blog, several countries intend to run their
national blogs in 2013, including Australia and France. Links to these
blogs will be posted on the main MPE website.
The South African launch already occurred on October 30 2012
(http://www.sams2012.org/public-lecture/). The Canadian launch will
take place on December 7-10 2012
(http://cms.math.ca/Events/winter12/), the UK launch on December 17
2013 (http://www.newton.ac.uk/mpe2013/), and the US launch at the JMM
on January 9-12 2013 (http://jointmathematicsmeetings.org/jmm).
Code of Practice of the European Mathematical Society
An important issue for our whole community is the clear formulation and
understanding of ethical principles and accepted practice related to the
publication of mathematical results; this is all the more important in
view of troubling examples that have surfaced in recent years. (See e.g.
Several learned societies around the world have published a Code of
Ethics addressing these issues, and advertise it to their members. In
2010, the European Mathematical Society constituted a Committee of
Ethics, and asked it to draft a Code of Practice. The resulting document
was approved by the EMS Executive Committee at the end of October 2012;
it can be found on
Subscribing to IMU-Net
There are two ways of subscribing to IMU-Net:
1. Click on http://www.mathunion.org/IMU-Net with a Web browser and
go to the "Subscribe" button to subscribe to IMU-Net online.
2. Send an e-mail to imu-net-request(at)mathunion.org with the Subject-line:
Subject: subscribe
In both cases you will get an e-mail to confirm your subscription
so that misuse will be minimized. IMU will not use the list of IMU-Net
emails for any purpose other than sending IMU-Net, and will not
make it available to others.
Previous issues can be seen at:
IMU-Net is the electronic newsletter of the International Mathematical Union.
More details about IMU-Net can be found at: http://www.mathunion.org/IMU-Net/
You can find here, for instance, detailed information about subscribing to
the IMU-Net mailing list and unsubscribing from it. | {"url":"http://www.mathunion.org/?id=2556","timestamp":"2014-04-17T12:34:26Z","content_type":null,"content_length":"36425","record_id":"<urn:uuid:1499062f-c846-4c34-90b5-c53b4299fb5a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: Help needed to understand behavior of Mata's increment/decrement ope
st: Help needed to understand behavior of Mata's increment/decrement operators
From: Joseph Coveney <jcoveney@bigplanet.com>
To: Statalist <statalist@hsphsun2.harvard.edu>
Subject: st: Help needed to understand behavior of Mata's increment/decrement operators
Date: Sun, 14 May 2006 19:55:51 +0900
Mata's documentation (in "[M-2] op_increment -- Increment and decrement
operators") says that increment and decrement operators are performed in
relation to the "entire expression." I don't understand why the first and
second elements of -myrowvector = (i, i, ++i, i, i++, i)- don't come out the
same as the first element of -myrowvector = (i, ++i, i, i++, i)-.
Joseph Coveney
P.S. The op_increment documentation does admonish, "and many programmers
feel that i++ or ++i should never be coded in a line that has a second
reference to i, before or after." There's undoubtedly a painful story or
two behind that. And I appreciate that Mata variables aren't Stata macros.
The examples are not intended to be realistic; they're just for exploration
of the operators' behavior in various contexts.
set more off
local i = 1
display in smcl as result "`i'"
display in smcl as result "`++i'"
display in smcl as result "`i'"
display in smcl as result "`i++'"
display in smcl as result "`i'"
local i = 1
display in smcl as result "`i', `++i', `i', `i++', `i'"
local i = 1
matrix A = (`i', `++i', `i', `i++', `i')
matrix list A
mata clear // -discard-
mata set matastrict on

mata:
real rowvector function incrementi(real scalar i)
{
        return(i, ++i, i, i++, i)
}

real rowvector function incrementj(real scalar j)
{
        return(j, j, ++j, j, j++, j)
}

real rowvector function incrementk(real scalar k)
{
        return( (k, ++k, k, k++, k) )
}

real rowvector function incrementl(real scalar l)
{
        return( (l, l, ++l, l, l++, l) )
}

real rowvector function incrementm(real scalar m)
{
        real rowvector temp; temp = J(1, 5, 0) // Sorry--old habits & all
        temp = (m, ++m, m, m++, m)
        return(temp)
}

real rowvector function incrementn(real scalar n)
{
        real rowvector temp; temp = J(1, 1, 0)
        temp = (n, n, ++n, n, n++, n)
        return(temp)
}

real rowvector function incremento(real scalar o)
{
        real rowvector temp; temp = J(1, 6, .z)
        temp[1] = o
        temp[2] = ++o
        temp[3] = o
        temp[4] = o++
        temp[5] = o
        return(temp)
}
end
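For comparison, the same element sequence can be traced in a language with a defined left-to-right evaluation order. Python has no ++/-- operators, so the sketch below emulates them with explicit methods (all names hypothetical); it shows what strict left-to-right evaluation would produce, and is not a claim about Mata's actual behavior:

```python
class Counter:
    """Emulates pre- and post-increment on a scalar."""
    def __init__(self, value):
        self.value = value
    def get(self):           # a plain reference: i
        return self.value
    def pre(self):           # ++i: increment first, then yield the new value
        self.value += 1
        return self.value
    def post(self):          # i++: yield the old value, then increment
        old = self.value
        self.value += 1
        return old

i = Counter(1)
row = (i.get(), i.pre(), i.get(), i.post(), i.get())
# Strict left-to-right evaluation yields (1, 2, 2, 2, 3).
```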
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2006-05/msg00427.html","timestamp":"2014-04-18T04:05:19Z","content_type":null,"content_length":"7854","record_id":"<urn:uuid:f8938f2f-3194-4229-a476-0087682496e7>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prospect Heights
Streamwood, IL 60107
The Alchemist--Helping you understand the language of learning
Looking to really excel in Algebra, Geometry, Trigonometry, Calculus, Philosophy, Biology, Chemistry, Spanish, Biochemistry, Writing, or the ACT? Come to the alchemist who will help you understand the language each of these disciplines speaks. In many cases,
Offering 10+ subjects including calculus | {"url":"http://www.wyzant.com/Prospect_Heights_Calculus_tutors.aspx","timestamp":"2014-04-21T15:39:17Z","content_type":null,"content_length":"60586","record_id":"<urn:uuid:d156dd91-2821-476e-9fca-739cbeeaa35d>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00079-ip-10-147-4-33.ec2.internal.warc.gz"} |
A self-consistent equation of state for nuclear matter (1993)
Mark I. Gorenstein, Dirk-Hermann Rischke, Horst Stöcker, Walter Greiner, Kyrill A. Bugaev
The authors formulate a phenomenological extension of the mean-field theory approach and define a class of thermodynamically self-consistent equations of state for nuclear matter. A new equation
of state of this class is suggested and examined in detail.
Baryon number and electric charge fluctuations in Pb+Pb collisions at SPS energies (2006)
Volodymyr P. Konchakovski, Mark I. Gorenstein, Elena L. Bratkovskaya, Horst Stöcker
Event-by-event fluctuations of the net baryon number and electric charge in nucleus-nucleus collisions are studied in Pb+Pb at SPS energies within the HSD transport model. We reveal an important
role of the fluctuations in the number of target nucleon participants. They strongly influence all measured fluctuations even in the samples of events with rather rigid centrality trigger. This
fact can be used to check different scenarios of nucleus-nucleus collisions by measuring the multiplicity fluctuations as a function of collision centrality in fixed kinematical regions of the
projectile and target hemispheres. The HSD results for the event-by-event fluctuations of electric charge in central Pb+Pb collisions at 20, 30, 40, 80 and 158 A GeV are in a good agreement with
the NA49 experimental data and considerably larger than expected in a quark-gluon plasma. This demonstrate that the distortions of the initial fluctuations by the hadronization phase and, in
particular, by the final resonance decays dominate the observable fluctuations.
Baryon number conservation and statistical production of antibaryons (2000)
Mark I. Gorenstein, Marek Gazdzicki, Walter Greiner
The statistical production of antibaryons is considered within the canonical ensemble formulation. We demonstrate that the antibaryon suppression in small systems due to the exact baryon number
conservation is rather different in baryon-free (B=0) and baryon-rich (B>1) systems. At constant values of the temperature and baryon density, the density of antibaryons produced in
baryon-rich systems depends only weakly on the size of the system. For realistic hadronization conditions this dependence appears to be close to B/(B+1), in agreement with the
preliminary data of the NA49 Collaboration for the antiproton/pion ratio in nucleus-nucleus collisions at CERN SPS energies. However, a consistent picture of antibaryon production within the
statistical hadronization model has not yet been achieved, because the condition of a constant hadronization temperature in baryon-free systems contradicts the data on the
antiproton/pion ratio in e+e- interactions.
Charm coalescence at relativistic energies (2003)
Andriy P. Kostyuk Mark I. Gorenstein Horst Stöcker Walter Greiner
The J/psi yield at midrapidity at the top RHIC (relativistic heavy ion collider) energy is calculated within the statistical coalescence model, which assumes charmonium formation at the late
stage of the reaction from the charm quarks and antiquarks created earlier in hard parton collisions. The results are compared to the new PHENIX data and to predictions of the standard models,
which assume formation of charmonia exclusively at the initial stage of the reaction and their subsequent suppression. Two versions of the suppression scenario are considered. One of them assumes
gradual charmonium suppression by comovers, while the other supposes that the suppression sets in abruptly due to quark-gluon plasma formation. Surprisingly, both versions give very similar
results. In contrast, the statistical coalescence model predicts a J/psi yield a few times larger in the most central collisions.
Charm estimate from the dilepton spectra in nuclear collisions (2001)
Marek Gazdzicki Mark I. Gorenstein
The validity of a recent estimate of an upper limit on charm production in central Pb+Pb collisions at 158 A GeV is critically discussed. Within a simple model we study the properties of the
background subtraction procedure used to extract the charm signal from the analysis of dilepton spectra. We demonstrate that a production asymmetry between positively and negatively charged
background muons, together with a large multiplicity of signal pairs, leads to biased results. Therefore the applicability of this procedure to the analysis of nucleus-nucleus data should be
reconsidered before final conclusions on the upper-limit estimate of charm production can be drawn.
Chemical freeze-out parameters at RHIC from microscopic model calculations (2001)
Larissa V. Bravina Eugene E. Zabrodin Steffen A. Bass Amand Faessler Christian Fuchs Mark I. Gorenstein Walter Greiner Sven Soff Horst Stöcker Henning Weber
The relaxation of hot nuclear matter to an equilibrated state in the central zone of heavy-ion collisions at energies from AGS to RHIC is studied within the microscopic UrQMD model. It is found
that the system reaches the (quasi)equilibrium stage for a period of 10–15 fm/c. Within this time the matter in the cell expands nearly isentropically with an entropy-to-baryon ratio S/A = 150–170.
Thermodynamic characteristics of the system at AGS and SPS energies at the endpoints of this stage are very close to the parameters of chemical and thermal freeze-out extracted from the
thermal fit to experimental data. Predictions are made for the full RHIC energy √s = 200 A GeV. The formation of a resonance-rich state at RHIC energies is discussed.
Chemical freezeout in relativistic A+A collisions: is it close to the QGP? (1997)
Mark I. Gorenstein Horst Stöcker Granddon D. Yen Shin Nan Yang Walter Greiner
Preliminary experimental data for particle number ratios in collisions of Au+Au at the BNL AGS (11A GeV/c) and Pb+Pb at the CERN SPS (160A GeV/c) are analyzed in a thermodynamically
consistent hadron-gas model with excluded volume. Large values of the temperature, T = 140–185 MeV, and baryonic chemical potential, µ_B = 590–270 MeV, close to the boundary of the quark-gluon
plasma phase, are found from fitting the data. This seems to indicate that the energy density at chemical freezeout is tremendous, which would indeed be the case for point-like hadrons. However, a
self-consistent treatment of the van der Waals excluded volume reveals much smaller energy densities, which lie far below a lowest-limit estimate of the quark-gluon plasma energy density.
Comment on 'Comparison of strangeness production between A + A and p + p reactions from 2 to 160 A GeV', by J. C. Dunlop and C. A. Ogilvie (2000)
Marek Gazdzicki Mark I. Gorenstein Dieter Röhrich
A recent paper on the energy dependence of strangeness production in A+A and p+p interactions by Dunlop and Ogilvie (Phys. Rev. C 61, 031901(R) (2000)) indicates that there is a significant
misunderstanding about the concept of strangeness enhancement and its role as a signal of quark-gluon plasma creation. In this comment we try to clarify some essential points.
Critical line of the deconfinement phase transition (2005)
Mark I. Gorenstein Marek Gazdzicki Walter Greiner
The phase diagram of strongly interacting matter is discussed within an exactly solvable statistical model of quark-gluon bags. The model predicts two phases of matter: the hadron gas at low
temperature T and baryonic chemical potential µ_B, and the quark-gluon gas at high T and/or µ_B. The nature of the phase transition depends on the form of the bag mass-volume spectrum (its
pre-exponential factor), which is expected to change with the µ_B/T ratio. It is therefore likely that the line of first-order transitions at high µ_B/T is followed by a line of
second-order phase transitions at intermediate µ_B/T, and then by lines of "higher-order transitions" at low µ_B/T.
Dynamical equilibration in strongly-interacting parton-hadron matter (2011)
Vitalii Ozvenchuk Elena L. Bratkovskaya Olena Linnyk Mark I. Gorenstein Wolfgang Cassing
We study the kinetic and chemical equilibration in 'infinite' parton-hadron matter within the Parton-Hadron-String Dynamics transport approach, which is based on a dynamical quasiparticle model
for partons matched to reproduce lattice-QCD results – including the partonic equation of state – in thermodynamic equilibrium. The 'infinite' matter is simulated within a cubic box with periodic
boundary conditions initialized at different baryon density (or chemical potential) and energy density. The transition from initially pure partonic matter to hadronic degrees of freedom (or vice
versa) occurs dynamically by interactions. Different thermodynamical distributions of the strongly interacting quark-gluon plasma (sQGP) are addressed and discussed.