| content | meta |
|---|---|
Physics Forums - View Single Post - My Attempt At Re-entry Temp. Calculation
I wanted to figure out the temperature $t_2$ of any object on re-entry based on properties such as its specific heat capacity, re-entry speed, etc.
Work: $W = Fy$
Force by air (drag): $F_D = \frac{1}{2}\rho v^2 C_D A$
Energy-temperature relationship: $Q = mc(t_2 - t_1)$
Setting things equal & solving for $t_2$ gives:
$t_2 = \dfrac{\frac{1}{2}\rho v^2 C_D A \, y + m c \, t_1}{mc}$
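To see what the formula predicts, here is a quick numeric sketch in Haskell; every input value below is invented for illustration, not taken from the post:

```haskell
-- t2 = (1/2 * rho * v^2 * cD * a * y + m * c * t1) / (m * c)
-- rho: air density (kg/m^3), v: re-entry speed (m/s),
-- cD: drag coefficient, a: cross-sectional area (m^2),
-- y: distance travelled through the air (m), m: mass (kg),
-- c: specific heat capacity (J/(kg K)), t1: initial temperature (K).
reentryTemp :: Double -> Double -> Double -> Double
            -> Double -> Double -> Double -> Double -> Double
reentryTemp rho v cD a y m c t1 =
  (0.5 * rho * v ^ 2 * cD * a * y + m * c * t1) / (m * c)

main :: IO ()
main = print (reentryTemp 1.2 7800 1.0 1.0 1.0e5 100 900 250)
```

Note that the sketch inherits the derivation's assumption that all of the drag work ends up as heat in the object itself, which matters when judging the equation's accuracy.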
So, how accurate is this equation in describing temperature gain through freefall back down to earth? | {"url":"http://www.physicsforums.com/showpost.php?p=4237962&postcount=1","timestamp":"2014-04-20T05:54:41Z","content_type":null,"content_length":"9117","record_id":"<urn:uuid:ae3b8cd5-d556-45ed-859c-ab6ff2e07553>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00334-ip-10-147-4-33.ec2.internal.warc.gz"} |
Factor Pairs
5.1: Factor Pairs
Created by: CK-12
Have you ever been to a class social? Have you ever been in charge of organizing an event at school?
The sixth grade class is having a social in four weeks on a Friday night. The last time that the sixth grade had a social, it was a little unorganized and the teachers weren’t happy. This time,
Allison (President of the sixth grade class) and Hector (the Vice President) have promised to organize it and have a plan for all of the students.
Allison and Hector have been working together to plan different activities. They have decided to have music in the gym, food in the cafeteria, board games in one classroom and basketball outside in
the courtyard. They think that having enough options will keep things less chaotic. Now that they have the activities planned, they have to figure out how to arrange the students in groups. Each
group will have a certain period of time at each activity. The sixth grade has two clusters made up of two classes each.
Cluster 6A has 48 students in it.
Cluster 6B has 44 students in it.
Allison and Hector want to arrange the clusters into reasonably sized groups so that the students can hang out together, but so that the teachers will be happy too. They are struggling with how best
to arrange the students to visit each of the four activities. They want the groups to be a small enough size, but to be even too.
In this Concept you will learn how to identify factor pairs. Factor pairs is one way to help Hector and Allison with their dilemma.
This Concept is all about factors, and that is where we are going to start. In order to complete the work in this Concept, you will first need to understand and identify a factor.
What is a factor?
When you multiply, the numbers that are being multiplied together are the factors of the product. Said another way, a factor is one of the numbers that are multiplied together to give a product. Groups of numbers that include subtraction or addition operations are not single factors.
In this Concept, you will be finding factor pairs. This is when only two numbers are multiplied together for a product.
Let’s find some factors.
What are two factors of twelve? Here we want to find two factors of twelve, or two numbers that multiply together to give us twelve. We could list many possible factors for twelve. Let's choose 3 and 4.
Our answer is $3 \times 4 = 12$.
What if we wanted to list out all of the factors of twelve?
To do this systematically, we should first start with the number 1. Yes, one is a factor of twelve. In fact, one is a factor of every number because any number can be multiplied by one to get itself
as a product.
$1 \times 12 = 12$
After starting with 1, we can move on to 2, then 3 and so on until we have listed out all of the factors for 12.
$1 \times 12$
$2 \times 6$
$3 \times 4$
5, 7, 8, etc., are not factors of 12 because there is no whole number we can multiply them by to get 12.
These are all of the factors for 12.
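A short Haskell sketch of the same systematic search (my own illustration, not part of the lesson):

```haskell
-- All factors of n, found by trial division: factors 12 == [1,2,3,4,6,12]
factors :: Int -> [Int]
factors n = [ d | d <- [1 .. n], n `mod` d == 0 ]

-- The factor pairs of n, listed the way the lesson does:
-- factorPairs 12 == [(1,12),(2,6),(3,4)]
factorPairs :: Int -> [(Int, Int)]
factorPairs n = [ (d, n `div` d) | d <- [1 .. n], d * d <= n, n `mod` d == 0 ]
```

The same functions reproduce the lists used later for the class social: `factorPairs 48` and `factorPairs 44` give exactly the groupings Hector and Allison consider.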
Now it's time for you to practice. List out all of the factors for each value.
Example A
36
Solution: $1, 36, 2, 18, 3, 12, 4, 9, 6$
Example B
24
Solution: $1, 24, 2, 12, 3, 8, 4, 6$
Example C
90
Solution: $1, 90, 2, 45, 3, 30, 5, 18, 6, 15, 9, 10$
Do you know now how to help Hector and Allison? Take a look.
Hector and Allison need to organize the students into four groups to go with the four different activities.
They can start by writing out all of the factors for Cluster 6A. The factors will give them the combinations of students that can be sent in groups.
$& \ \ 48\\&\ \ 1 \times 48\\&\ \ 2 \times 24\\&\ \ 3 \times 16\\&\left . \begin{matrix}\ 4 \times 12 \\6 \times 8 \end{matrix} \right \} \quad \text{These are the two groups that make the most sense.}$
Now let’s find the factors of 44.
$&1 \times 44\\&2 \times 22\\&4 \times 11 - \ \text{This is the group that makes the most sense.}$
If Hector and Allison arrange cluster 6A into 4 groups of 12 and cluster 6B into 4 groups of 11, then the groups will be about the same size. There will be 23 students at each activity at one time.
This definitely seems like a manageable number.
Allison and Hector draw out their plan. They are excited to show their plan for the evening to their teachers.
Here is a vocabulary word from this Concept.
Factor
numbers multiplied together to equal a product.
Guided Practice
Here is one for you to try on your own.
Name the factors of 12 and 18.
First, we can start with 12.
12, 1, 2, 6, 3, 4
Next, we can work with 18.
18, 1, 2, 9, 3, 6
This is our answer.
Video Review
Here is a video for review.
Khan Academy: Finding Factors of a Number
Directions: List out factors for each of the following numbers.
1. 12
2. 10
3. 15
4. 16
5. 56
6. 18
7. 20
8. 22
9. 23
10. 25
11. 27
12. 31
13. 81
14. 48
15. 24
16. 30 | {"url":"http://www.ck12.org/book/CK-12-Concept-Middle-School-Math---Grade-6/r2/section/5.1/","timestamp":"2014-04-23T07:11:19Z","content_type":null,"content_length":"122564","record_id":"<urn:uuid:ea2f6ec7-4e7c-436d-9dec-5e73f726e893>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
A generalization of Rader's utility representation theorem
Bosi, Gianni and Zuanon, Magalì (2010): A generalization of Rader's utility representation theorem.
Rader's utility representation theorem guarantees the existence of an upper semicontinuous utility function for any upper semicontinuous total preorder on a second countable topological space. In
this paper we present a generalization of Rader's theorem to not necessarily total preorders that are weakly upper semicontinuous.
Item Type: MPRA Paper
Original Title: A generalization of Rader's utility representation theorem
Language: English
Keywords: Weakly upper semicontinuous preorder; utility function
Subjects: D - Microeconomics > D1 - Household Behavior and Family Economics > D11 - Consumer Economics: Theory
C - Mathematical and Quantitative Methods > C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling > C60 - General
Item ID: 24314
Depositing User: Gianni Bosi
Date Deposited: 12. Aug 2010 10:19
Last Modified: 11. Feb 2013 11:27
References:
J.C.R. Alcantud, Characterization of the existence of semicontinuous weak utilities, Journal of Mathematical Economics 32 (1999), 503-509.
G. Bosi, G. Herden, On the structure of completely useful topologies, Applied General Topology 3 (2002), 145-167.
G. Bosi, G.B. Mehta, Existence of a semicontinuous or continuous utility function: a unified approach and an elementary proof, Journal of Mathematical Economics 38 (2002), 311-328.
R. Isler, Semicontinuous utility functions in topological spaces, Rivista di Matematica per le Scienze economiche e sociali 20 (1997), 111-116.
G.B. Mehta, A remark on a utility representation theorem of Rader, Economic Theory 9 (1997), 367-370.
T. Rader, The existence of a utility function to represent preferences, Review of Economic Studies 30 (1963), 229-232.
M. Richter, Continuous and semicontinuous utility, International Economic Review 21 (1980), 293-299.
URI: http://mpra.ub.uni-muenchen.de/id/eprint/24314 | {"url":"http://mpra.ub.uni-muenchen.de/24314/","timestamp":"2014-04-19T14:33:03Z","content_type":null,"content_length":"16998","record_id":"<urn:uuid:735b31eb-2a84-4e46-981b-d70c630b45c1>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00461-ip-10-147-4-33.ec2.internal.warc.gz"} |
perplexus.info :: Shapes : Three Bugs
In this problem, the three bugs start at the corners of an equilateral triangle (with side length=10 inches).
Again, the bugs travel directly towards their neighbor (counter-clockwise). And, again, each bug homes in on its target, regardless of its target's motion. So, their paths will be curves spiraling
toward the center of the triangle, where they will meet.
What distance will the bugs have covered by then, and how did you determine it? | {"url":"http://perplexus.info/show.php?pid=1591&cid=11229","timestamp":"2014-04-21T12:08:54Z","content_type":null,"content_length":"14884","record_id":"<urn:uuid:aef8aa1a-935e-48ed-9483-8c6371c0c4dc>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00254-ip-10-147-4-33.ec2.internal.warc.gz"} |
Variables thing with monetary values
March 26th 2009, 05:05 AM #1
Dec 2008
Variables thing with monetary values
In January 2006, US first class mail rates increased to 39 cents for the first ounce plus 24 cents for each additional ounce. If Sabrina spent $15.00 on a total of 45 stamps of these two
denominations, how many of each denomination did she buy?
please show working out because that is what is confusing me =P
thx in advance
March 26th 2009, 06:33 AM #2
MHF Contributor
Mar 2007
You already have "the working out" in the various examples provided in your textbook, your class notes, and whatever other resources you have accessed, so looking at one more worked example is
unlikely to change the situation much.
Instead, let's try working this together. By participating in the process, you will learn, "hands on", how to do this (and thus come to understand all the worked examples you've looked at) and,
by reviewing your work and reasoning, we'll be able to "see" where you're actually needing help. So:
i) She bought two types of stamps. Pick a variable to stand for one of the types of stamps. Let's say we pick "f" to stand for "one first-ounce stamp".
ii) Since she bought 45 in total, and "f" of them were first-ounce stamps, how many were left to be the "additional ounce" stamps? Create an expression, in terms of the variable in (i), for the
number of additional-ounce stamps.
iii) The value of one first-ounce stamp is (39)(1) = 39 cents. The value of two is (39)(2) = 78 cents. Create an expression for the value of "f" first-ounce stamps.
iv) Using reasoning similar to that in (iii), create an expression for the value of the additional-ounce stamps.
v) Since the total value (in cents) is given as being 1500, and since this total is the sum of the values of the two types of stamps, add the two "value" expressions and set the sum equal to the
given total.
vi) Solve the linear equation for the value of the variable.
vii) Back-solve to find the requested values.
If you get stuck, please reply showing how far you have gotten. Thank you!
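For illustration, the finished setup can also be checked by brute force; here is a small Haskell sketch using the variable f from step (i):

```haskell
-- Value in cents of f first-ounce stamps plus (45 - f) additional-ounce stamps.
totalCents :: Int -> Int
totalCents f = 39 * f + 24 * (45 - f)

-- Search the 46 possible values of f for the one costing exactly $15.00.
solution :: [Int]
solution = [ f | f <- [0 .. 45], totalCents f == 1500 ]
-- solution == [28]: 28 first-ounce stamps and 17 additional-ounce stamps.
```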
April 7th 2009, 02:13 PM #3
Apr 2009
I also have this exact problem in my Pre-Algebra class.
I need to use only one variable, so I assigned it as X.
0.39x I put as the amount of 39 cents stamps
0.24(15-x) as the amount of 24 cent stamps
15(45) is the total.
0.39x + 0.24 (15-x) = 15 (45)
Would that be how it would be set up? I've got incorrect answers doing it that way. | {"url":"http://mathhelpforum.com/algebra/80745-variables-thing-monetary-values.html","timestamp":"2014-04-19T18:16:37Z","content_type":null,"content_length":"37319","record_id":"<urn:uuid:b7e87dea-b587-4b13-96dd-93cc51fa1536>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Intercepts and Asymptotes of Tangent Functions - Problem 1
Sometimes on your homework, you’ll be asked to find the x intercepts and asymptotes of a tangent function. Let’s find it for y equals -2 tangent of 5x. Remember the -2 is not going to affect
asymptotes or x intercepts because it’s a vertical stretch and then a reflection, it’s this guy that affects the asymptotes and intercepts so let’s call this theta.
Remember it’s when theta is an integer multiple pi that tangent equals zero. So where n is an integer, now that means that 5x is an integer multiple of pi. Divide both sides by 5 and you get n pi
over 5. These would be the points where this function equals zero, so the intercepts would be, for example when n is one, pi over 5 zero, when n is 2, 2pi over 5 zero, 3pi over 5 zero and so on.
Those would be the intercepts.
What about the asymptotes? Remember tangent is undefined when theta equals pi over 2 plus n pi. Right now theta is 5x. Again we divide by 5 and we get the values of x where this function’s undefined.
Pi over 10 plus n pi over 5 and by the way n pi over 5 is the same as 2n pi over 10. That will help us calculate some values. So for example the asymptotes would be, and these are the vertical
asymptotes, x equals, when n is zero, pi over 10; when n is 1 you get pi over 10 plus 2pi over 10, 3pi over 10. When n is 2 this is 4pi over 10 plus pi over 10, 5pi over 10 and so on. And it goes
in both directions.
The asymptotes would be x equals pi over 10, x equals 3pi over 10, x equals 5pi over 10 and so on. These are the odd multiples of pi over 10, and here are the intercepts: pi over 5 zero, 2pi over 5 zero, 3pi over 5 zero, integer multiples of pi over 5.
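In symbols, with n any integer, the results worked out above are:

```latex
y = -2\tan(5x):\qquad
\text{zeros at } x = \frac{n\pi}{5},\qquad
\text{vertical asymptotes at } x = \frac{\pi}{10} + \frac{n\pi}{5}.
```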
tangent zeros x intercepts vertical asymptotes | {"url":"http://www.brightstorm.com/math/trigonometry/trigonometric-functions/intercepts-and-asymptotes-of-tangent-functions-problem-1/","timestamp":"2014-04-24T10:24:02Z","content_type":null,"content_length":"70395","record_id":"<urn:uuid:8e30cf26-6667-4adc-9d2d-e7b4b7dee7e0>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00147-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Square Koch Fractal Surface
Many fractal curves can be generated using L-systems or string-rewrite rules, in which each stage of the curve is generated by replacing each line segment with multiple smaller segments in a
particular arrangement. The same technique can be extended to surfaces, where each stage is constructed by replacing each square with multiple smaller squares. This Demonstration shows an analogy of
the square Koch curve (Type 1) as a three-dimensional surface.
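For reference, the two-dimensional square Koch curve (Type 1) that the surface generalizes can be written as a one-rule rewrite system; here is a small Haskell sketch (my illustration, using the standard rule for this curve):

```haskell
-- One rewrite step: every segment F becomes a square bump, where
-- '+' and '-' denote 90-degree left and right turns.
step :: String -> String
step = concatMap rule
  where
    rule 'F' = "F+F-F-F+F"
    rule c   = [c]

-- The nth stage of the curve, starting from a single segment.
koch :: Int -> String
koch n = iterate step "F" !! n
```

In the Demonstration, the analogous move replaces each square with nine smaller squares and raises the center one, as described in the snapshots below.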
Snapshot 1: creation of the surface begins with a single square
Snapshot 2: each successive iteration is created by dividing each square into nine smaller squares, "raising" the center square, and then closing the surface by adding squares that connect the
raised center to the base
Snapshot 3: the bottom face of the surface is a Sierpinski carpet | {"url":"http://www.demonstrations.wolfram.com/SquareKochFractalSurface/","timestamp":"2014-04-16T10:09:53Z","content_type":null,"content_length":"42192","record_id":"<urn:uuid:522b3956-7db0-4874-a0d5-2d8c294ff3ae>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00403-ip-10-147-4-33.ec2.internal.warc.gz"} |
A1 - - - RHUMB LINE SAILING - - - 10/03/2004
A line which maintains the same angle with respect to true north, (it also maintains the same angle crossing all meridians) is called a rhumb line. When traveling along a rhumb line it would appear
(over a short distance) that you were traveling in a straight line, but the path is actually a curve, and if extended it will eventually spiral in on the North or South pole. (Except for the case
where the direction is exactly East or West.)
The graphical Mercator sailing involves plotting your location and destination on a Mercator chart or Mercator plotting sheet, drawing a line between the positions and then measuring with a
protractor to determine the course angle. This is the easiest method but if the distance is large both points may not fit on the same chart or plotting sheet, so a calculated course will be required.
The length of a degree of longitude (approximately 60 nautical miles at the equator) is proportional to the cosine of the latitude on a spherical earth, but when traveling at an oblique course angle
the latitude is continuously changing and the relationship between change of longitude and East-West distance is more complex. As an approximation, the middle latitude is used as follows.
When sailing from point 1 (L1, LO1) to point 2 (L2, LO2)
C = course angle
D = distance traveled along the course
Lm = ((L1+L2) / 2) = mid latitude
Ld = ( L2-L1 ) = difference in latitude (degrees)
DLO = difference in longitude (degrees)
tan C = ( DLO * cos Lm ) / Ld
D = 60 * ( Ld / cos C ) (nautical miles)
This method can only be used on one side of the equator. When crossing the equator the parts North and South of the equator are calculated separately and the middle latitude is replaced by the half
latitude of each of the locations.
When L1 is greater than L2, then the angle C will be negative (fourth quadrant). To put the answer into the second quadrant add 180 degrees.
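A small Haskell sketch of the mid-latitude method above (my own illustration; it uses atan2, so the quadrant correction described in the previous paragraph happens automatically):

```haskell
-- Mid-latitude rhumb-line course (degrees from true north) and distance
-- (nautical miles). Latitudes and longitudes are given in degrees.
deg2rad :: Double -> Double
deg2rad d = d * pi / 180

courseAndDistance :: (Double, Double) -> (Double, Double) -> (Double, Double)
courseAndDistance (l1, lo1) (l2, lo2) = (c * 180 / pi, dist)
  where
    lm   = (l1 + l2) / 2          -- mid latitude
    ld   = l2 - l1                -- difference in latitude
    dlo  = lo2 - lo1              -- difference in longitude
    c    = atan2 (dlo * cos (deg2rad lm)) ld   -- course angle in radians
    dist | ld /= 0   = 60 * abs (ld / cos c)
         | otherwise = 60 * abs dlo * cos (deg2rad lm)  -- due east/west case
```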
| {"url":"http://www.angelfire.com/nt/navtrig/A1.html","timestamp":"2014-04-19T17:13:16Z","content_type":null,"content_length":"13652","record_id":"<urn:uuid:594af313-8916-4a0a-a454-810a3dd29607>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
| {"url":"http://openstudy.com/users/moha_10/answered","timestamp":"2014-04-18T20:47:06Z","content_type":null,"content_length":"112401","record_id":"<urn:uuid:6f2968f2-4a90-4c06-b132-dfcaa1626558>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
week beginning Monday 12th June 2006

Monday 12th
Seminar: Number Theory Seminar
Location and Time: Meeting Room 13 (CMS) at 2.30pm
Speaker: Mike Shuter (Cambridge)
Title: Rational points in p-Division fields of elliptic curves

Tuesday 13th
Seminar: Number Theory Seminar
Location and Time: Meeting Room 13 (CMS) at 2.30pm
Speaker: Daniel Caro (Durham)
Title: Towards a good p-adic cohomology

Wednesday 14th
Seminar: Geometry Seminar
Location and Time: Meeting Room 13 (CMS) at 3.00pm
Speaker: Caucher Birkar (Cambridge)
Title: Singularities and flips

Seminar: Complex Analysis Seminar
Location and Time: Meeting Room 5 (CMS) at 4.00pm
Speaker: Ritva Hurri-Syrjanen (University of Helsinki)
Title: TBA

Seminar: Analysis Seminar
Location and Time: Meeting Room 12 (CMS) at 2.00pm
Speaker: Steve Dilworth (University of South Carolina)
Title: Coefficient Quantization in Banach Spaces | {"url":"https://www.dpmms.cam.ac.uk/Seminars/Weekly/2005-2006/seminars-12-Jun-2006.html","timestamp":"2014-04-16T10:10:27Z","content_type":null,"content_length":"5239","record_id":"<urn:uuid:4fc62564-af08-4a8a-92a4-1b4592914a36>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
Eleven papers published by Geometry & Topology Publications
Posted: May 3, 2013 9:31 PM
Nine papers have been published by Algebraic & Geometric Topology,
completing Issue 2 of Volume 13
(1) Algebraic & Geometric Topology 13 (2013) 1049-1051
A streamlined proof of Goodwillie's n-excisive approximation
by Charles Rezk
URL: http://www.msp.warwick.ac.uk/agt/2013/13-02/p032.xhtml
DOI: 10.2140/agt.2013.13.1049
(2) Algebraic & Geometric Topology 13 (2013) 1053-1070
Unstable splittings for real spectra
by Nitu Kitchloo and W Stephen Wilson
URL: http://www.msp.warwick.ac.uk/agt/2013/13-02/p033.xhtml
DOI: 10.2140/agt.2013.13.1053
(3) Algebraic & Geometric Topology 13 (2013) 1071-1087
On the geometric realization and subdivisions of dihedral sets
by Sho Saito
URL: http://www.msp.warwick.ac.uk/agt/2013/13-02/p034.xhtml
DOI: 10.2140/agt.2013.13.1071
(4) Algebraic & Geometric Topology 13 (2013) 1089-1124
On the construction of functorial factorizations for model categories
by Tobias Barthel and Emily Riehl
URL: http://www.msp.warwick.ac.uk/agt/2013/13-02/p035.xhtml
DOI: 10.2140/agt.2013.13.1089
(5) Algebraic & Geometric Topology 13 (2013) 1125-1141
Bridge number and tangle products
by Ryan Blair
URL: http://www.msp.warwick.ac.uk/agt/2013/13-02/p036.xhtml
DOI: 10.2140/agt.2013.13.1125
(6) Algebraic & Geometric Topology 13 (2013) 1143-1159
Nonseparating spheres and twisted Heegaard Floer homology
by Yi Ni
URL: http://www.msp.warwick.ac.uk/agt/2013/13-02/p037.xhtml
DOI: 10.2140/agt.2013.13.1143
(7) Algebraic & Geometric Topology 13 (2013) 1161-1182
Cosimplicial models for the limit of the Goodwillie tower
by Rosona Eldred
URL: http://www.msp.warwick.ac.uk/agt/2013/13-02/p038.xhtml
DOI: 10.2140/agt.2013.13.1161
(8) Algebraic & Geometric Topology 13 (2013) 1183-1224
Homology of moduli spaces of linkages in high-dimensional Euclidean space
by Dirk Schuetz
URL: http://www.msp.warwick.ac.uk/agt/2013/13-02/p039.xhtml
DOI: 10.2140/agt.2013.13.1183
(9) Algebraic & Geometric Topology 13 (2013) 1225-1241
The Kunneth Theorem in equivariant K-theory
for actions of a cyclic group of order 2
by Jonathan Rosenberg
URL: http://www.msp.warwick.ac.uk/agt/2013/13-02/p040.xhtml
DOI: 10.2140/agt.2013.13.1225
Two papers have been published by Geometry & Topology
(10) Geometry & Topology 17 (2013) 639-731
Parametrized ring-spectra and the nearby Lagrangian conjecture
by Thomas Kragh
Appendix: Mohammed Abouzaid
URL: http://www.msp.warwick.ac.uk/gt/2013/17-02/p018.xhtml
DOI: 10.2140/gt.2013.17.639
(11) Geometry & Topology 17 (2013) 733-838
A universal characterization of higher algebraic K-theory
by Andrew Blumberg, David Gepner and Goncalo Tabuada
URL: http://www.msp.warwick.ac.uk/gt/2013/17-02/p019.xhtml
DOI: 10.2140/gt.2013.17.733
Abstracts follow
(1) A streamlined proof of Goodwillie's n-excisive approximation
by Charles Rezk
We give a shorter proof of Goodwillie's Lemma 1.9 [Geom. Topol. 7
(2003) 645--711], which is the key step in proving that the
construction P_n F gives an n-excisive functor.
(2) Unstable splittings for real spectra
by Nitu Kitchloo and W Stephen Wilson
We show that the unstable splittings of the spaces in the Omega
spectra representing BP, BP<n> and E(n) from [Amer. J. Math. 97 (1975)
101--123] may be obtained for the real analogs of these spectra using
techniques similar to those in [Progr. Math. 196 (2001) 35--45].
Explicit calculations for ER(2) are given.
(3) On the geometric realization and subdivisions of dihedral sets
by Sho Saito
We extend to dihedral sets Drinfeld's filtered-colimit expressions of
the geometric realization of simplicial and cyclic sets. We prove
that the group of homeomorphisms of the circle continuously act on the
geometric realization of a dihedral set. We also see how these
expressions of geometric realization clarify subdivision operations on
simplicial, cyclic and dihedral sets defined by Boekstedt, Hsiang and
Madsen, and Spalinski.
(4) On the construction of functorial factorizations for model categories
by Tobias Barthel and Emily Riehl
We present general techniques for constructing functorial
factorizations appropriate for model structures that are not known to
be cofibrantly generated. Our methods use `algebraic'
characterizations of fibrations to produce factorizations that have
the desired lifting properties in a completely categorical fashion. We
illustrate these methods in the case of categories enriched, tensored
and cotensored in spaces, proving the existence of Hurewicz-type model
structures, thereby correcting an error in earlier attempts by others.
Examples include the categories of (based) spaces, (based) G-spaces
and diagram spectra among others.
(5) Bridge number and tangle products
by Ryan Blair
We show that essential punctured spheres in the complement of links
with distance three or greater bridge spheres have bounded complexity.
We define the operation of tangle product, a generalization of both
connected sum and Conway product. Finally, we use the bounded
complexity of essential punctured spheres to show that the bridge
number of a tangle product is at least the sum of the bridge numbers
of the two factor links up to a constant error.
(6) Nonseparating spheres and twisted Heegaard Floer homology
by Yi Ni
If a 3-manifold Y contains a nonseparating sphere, then some twisted
Heegaard Floer homology of Y is zero. This simple fact allows us to
prove several results about Dehn surgery on knots in such manifolds.
Similar results have been proved for knots in L-spaces.
(7) Cosimplicial models for the limit of the Goodwillie tower
by Rosona Eldred
We call attention to the intermediate constructions T_n F in
Goodwillie's Calculus of homotopy functors, giving a new model which
naturally gives rise to a family of towers filtering the Taylor tower
of a functor. We also establish a surprising equivalence between the
homotopy inverse limits of these towers and the homotopy inverse
limits of certain cosimplicial resolutions. This equivalence gives a
greatly simplified construction for the homotopy inverse limit of the
Taylor tower of a functor F under general assumptions.
(8) Homology of moduli spaces of linkages in high-dimensional Euclidean space
by Dirk Schuetz
We study the topology of moduli spaces of closed linkages in R^d
depending on a length vector l in R^n. In particular, we use
equivariant Morse theory to obtain information on the homology groups
of these spaces, which works best for odd d. In the case d = 5 we
calculate the Poincare polynomial in terms of combinatorial
information encoded in the length vector.
(9) The Kunneth Theorem in equivariant K-theory
for actions of a cyclic group of order 2
by Jonathan Rosenberg
The Kunneth Theorem for equivariant (complex) K-theory K^*_G, in the
form developed by Hodgkin and others, fails dramatically when G is a
finite group, and even when G is cyclic of order 2. We remedy this
situation in this very simplest case G=Z/2 by using the power of
RO(G)-graded equivariant K-theory.
(10) Parametrized ring-spectra and the nearby Lagrangian conjecture
by Thomas Kragh
Appendix: Mohammed Abouzaid
Let L be an embedded closed connected exact Lagrangian sub-manifold in
a connected cotangent bundle T*N. In this paper we prove that such an
embedding is, up to a finite covering space lift of T*N, a homology
equivalence. We prove this by constructing a fibrant parametrized
family of ring spectra FL parametrized by the manifold N. The homology
of FL will be (twisted) symplectic cohomology of T*L. The fibrancy
property will imply that there is a Serre spectral sequence converging
to the homology of FL. The fiber-wise ring structure combined with the
intersection product on N induces a product on this spectral
sequence. This product structure and its relation to the intersection
product on L is then used to obtain the result. Combining this result
with work of Abouzaid we arrive at the conclusion that L --> N is
always a homotopy equivalence.
(11) A universal characterization of higher algebraic K-theory
by Andrew Blumberg, David Gepner and Goncalo Tabuada
In this paper we establish a universal characterization of higher
algebraic K-theory in the setting of small stable infinity-categories.
Specifically, we prove that connective algebraic K-theory is the
universal additive invariant, ie the universal functor with values in
spectra which inverts Morita equivalences, preserves filtered
colimits, and satisfies Waldhausen's additivity theorem. Similarly,
we prove that nonconnective algebraic K-theory is the universal
localizing invariant, ie the universal functor that moreover satisfies
the Thomason-Trobaugh-Neeman Localization Theorem.
To prove these results, we construct and study two stable
infinity-categories of `noncommutative motives'; one associated to
additivity and another to localization. In these stable
infinity-categories, Waldhausen's S_dot-construction corresponds to
the suspension functor and connective and nonconnective algebraic
K-theory spectra become corepresentable by the noncommutative motive
of the sphere spectrum. In particular, the algebraic K-theory of every
scheme, stack, and ring spectrum can be recovered from these
categories of noncommutative motives. In the case of a connective ring
spectrum R, we prove moreover that its negative K-groups are
isomorphic to the negative K-groups of the ordinary ring pi_0(R).
In order to work with these categories of noncommutative motives, we
establish comparison theorems between the category of spectral
categories localized at the Morita equivalences and the category of
small idempotent-complete stable infinity-categories. We also explain
in detail the comparison between the infinity-categorical version of
Waldhausen K-theory and the classical definition.
As an application of our theory, we obtain a complete classification
of the natural transformations from higher algebraic K-theory to
topological Hochschild homology (THH) and topological cyclic
homology (TC). Notably, we obtain an elegant conceptual description
of the cyclotomic trace map.
Geometry & Topology Publications is an imprint of
Mathematical Sciences Publishers | {"url":"http://mathforum.org/kb/message.jspa?messageID=8911731","timestamp":"2014-04-19T23:19:29Z","content_type":null,"content_length":"26086","record_id":"<urn:uuid:4130c231-ba58-476c-beab-54eb0a5c1f68>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00026-ip-10-147-4-33.ec2.internal.warc.gz"} |
Methuen Geometry Tutor
Find a Methuen Geometry Tutor
...Then, during my last five years at Lowell High School, I taught both college and honors-level precalculus along with Fundamentals of Calculus. In all my teaching I drew on my 30 years of
experiences in industry between college and becoming a math teacher to provide my students with a perspective...
9 Subjects: including geometry, calculus, algebra 1, algebra 2
...For students with ADHD, as another example, I have had great success having them make flash cards and placing them throughout the room in which the session takes place (this works best for
in-home sessions). This way, they have the freedom to move about to gather the cards, while reviewing key c...
27 Subjects: including geometry, reading, English, algebra 1
...The complexity of the topics involved however, require that your grasp of mathematical concepts and function properties is strong. I have helped numerous students master both the foundations
and the specific skills taught in a variety of calculus courses. Geometry is the study and exploration of logic and visual reasoning.
23 Subjects: including geometry, physics, calculus, statistics
...At the time, it was simply a means to an end and I was completely taken by surprise when I found that I truly loved teaching. I taught in Nicaragua for a year, India for three years, and I most
recently covered a three month maternity leave at Pierce Middle School in Milton, MA, after returning ...
23 Subjects: including geometry, reading, Spanish, chemistry
I am available and eager to tutor anyone seeking additional assistance in the fields of physics (or mathematics), either at the high school or college level! I have been teaching physics as an
adjunct faculty at several universities for the last few years and very much look forward to the opportuni...
9 Subjects: including geometry, calculus, physics, algebra 1 | {"url":"http://www.purplemath.com/Methuen_Geometry_tutors.php","timestamp":"2014-04-20T16:34:47Z","content_type":null,"content_length":"23837","record_id":"<urn:uuid:6b76d829-2aaf-4764-bbed-4056c1e9b210>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
23 Jan 21:49 2013
The Coverage Condition of functional dependencies
Petr P <petr.mvd <at> gmail.com>
2013-01-23 20:49:33 GMT
Hi, trying to understand UndecidableInstances (and to find an answer to <
>), I was trying to find out why "mtl" needs UndecidableInstances.
The reason is that instances like
> instance MonadState s m => MonadState s (ContT r m) where
don't satisfy the Coverage Condition:
"The Coverage Condition. For each functional dependency, tvsleft -> tvsright, of the class, every type variable in S(tvsright) must appear in S(tvsleft), where S is the substitution mapping each type
variable in the class declaration to the corresponding type in the instance declaration. "
In other words, "s" isn't expressed using type variables in "ContT r m".
But in these cases, it's actually possible. Because of the assertion "MonadState s m" and its dependency "m -> s" we know that "s" will be always deducible from "m".
I wonder, would it be possible to augment the type checker to realize this? It seems reasonable: Before comparing if S(tvsright) is a subset of S(tvsleft), we'd add every type variable to S(tvsleft)
that is determined from it using functional dependencies in the assertion of the instance.
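For concreteness, here is a minimal self-contained version of the situation (the types are stripped down from mtl; this should compile with the listed extensions):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

newtype ContT r m a = ContT { runContT :: (a -> m r) -> m r }

class MonadState s m | m -> s where
  get :: m s

-- Fails the Coverage Condition: s does not occur in (ContT r m),
-- yet the dependency m -> s in the context already determines s,
-- which is exactly the relaxation proposed above.
instance (Monad m, MonadState s m) => MonadState s (ContT r m) where
  get = ContT (get >>=)
```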
Best regards,
Haskell-Cafe mailing list
Haskell-Cafe <at> haskell.org | {"url":"http://comments.gmane.org/gmane.comp.lang.haskell.cafe/102928","timestamp":"2014-04-20T21:02:20Z","content_type":null,"content_length":"7109","record_id":"<urn:uuid:62f427fe-caf2-494a-968c-3f05c336cc44>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Connundrum
Here is a question. It’s a sort of subtle question, but one that can be answered with freshman-level physics. But it’s an excellent test of understanding. I’m not promising that the question itself
is not in some sense a “trick” question, but the trick is in how you might think about the physics, not the question itself. Which is to say, I’m not promising that what the question assumes happens
can actually happen. But it might – it’s up to you to tell me!
Ok. You have a perfectly efficient car, which transmits all of the energy in its fuel into kinetic energy of forward motion. Kinetic energy is related to the forward velocity by the famous equation
$K = \frac{1}{2}mv^2$
We’ll make up our own unit of energy (the zap) so the concept is more clear and we don’t have to worry about the particular mass of the car. The driver of the car pours 1 zap worth of fuel into the
engine and the car accelerates from rest to 1 m/s. To double the speed you have to quadruple the kinetic energy. So you pour three more zaps of fuel into the car, and with a total kinetic energy of 4
zaps the car is moving at 2 m/s. To go 100 m/s, the car needs 10,000 zaps of fuel.
But what if you’re watching the experiment from a bus which is itself traveling at 100 m/s alongside the car, then you observe the car standing stationary. One more zap of energy should by your
prediction therefore increase its speed in your reference frame by 1 m/s – or equivalently 101 m/s with respect to the ground. But from the reference frame of the ground, the speed of the car should
be sqrt(K), where K is the 10,001 zaps of energy the car has used in total. That should put the car at 100.005 m/s, not 101 m/s.
Who is right, given that we know both inertial reference frames are equivalent?
1. #1 Kobra March 18, 2009
The distance between the two vehicles is increasing at 101 m/s, but the car only has enough energy to move at 100.005 m/s. Therefore, its velocity is 100.005 m/s.
Ah, the things I learn from random emails.
2. #2 Eric Lund March 18, 2009
There is a flaw in your statement of the problem: whence comes the momentum of the car?
Let’s say that if the car is moving 1 m/s it has one zip of momentum. It acquires this momentum from a standing start in one of two ways: (1) it’s a rocket, so it expels one zip of fuel
backwards, keeping the net momentum at 0 (we will assume that the fuel is expelled at a much higher velocity than the car moves, so that we can neglect the car’s changing mass), or (2) friction
between the tires and the road propels the car forward. (Both effects can come into play, e.g., at the Bonneville track in Utah.) Real cars operate by scenario #2, which means that some of the
energy is lost to friction and wind resistance (100 m/s is 360 km/h, which is triple the highest speed limit–75 MPH, or ~120 km/h–on any American freeway), and the ground observer will be closer
to correct. However, under the rocket scenario the bus observer will be correct. I can’t tell which of those two scenarios was your intent.
3. #3 beebeeo March 18, 2009
Is it possible that the inertia in the two reference frames is changing as well? If we are changing the V in 1/2*m*V*V is it maybe wrong to assume that m is staying the same?
4. #4 Carl Brannen March 18, 2009
It would take a lot less fuel to accelerate if you could accelerate yourself by running your tires against the bus instead of against the pavement. Another way of saying this is to note that W =
F d, work is force times distance. Since the bus is relatively stationary to you, you can apply a force to it with little change in distance. The problem with the ground is that it is receding.
5. #5 beebeeo March 18, 2009
no wrong, my previous post is wrong.
Now I got it, maybe.
momentum stays the same. In order to accelerate you have to throw something backwards. In case of the car you are actually throwing the road backwards (of course the earth is too big to really
start rotating in different direction or so, so we don’t realise it)
In the moving reference frame of the bus, you have to throw back the road. Of course the road is already moving at 100 m/s (compared to the bus reference frame) so you need more energy to
actually accelerate and 1 zap only brings you 0.005 m/s. In case of a rocket I guess the difference would be the speed of the air in which we move.
I wonder what would happen if everybody started accelerating in a certain direction simultaneously. Could we actually, at least temporarily make the earth spin faster/slower ?
I am not a physicist but I think I got it right this time …
I still can’t think of what would be if we were talking about two rockets in a vacuum …
6. #6 Tom March 18, 2009
The key concept here is understanding that kinetic energy, while conserved, is not invariant, and it’s important to understand the distinction. IOW, the speed increase as the result of a zap is
not the same in the two coordinate systems.
7. #7 Tom March 18, 2009
I should clarify that energy, in general, is conserved. Not necessarily kinetic energy, though.
8. #8 Chris P March 18, 2009
Work is force times distance. When you are travelling faster the work has to be done over a longer distance.
This comes out in the problem of high speed bicycles. Now that the top speed is just over 80 mph one of the major problems is the time and distance to actually get to that speed.
Chris P
9. #9 Anonymous March 18, 2009
Tom has the right key point (although the second post about KE not being conserved confuses the issue): KE is NOT INVARIANT, and will depend on your frame of reference.
Invariance has little to do with conservation here. The question of invariance is: If you change frames of reference, what happens to the quantities m, v, KE?
* 1/2 is invariant, i.e. both observers agree that numbers have the same value
* m is invariant, i.e. both oververs will measure the same mass (remember, freshman-level physics here)
* v is NOT invariant, almost by definition, i.e. one observer will say the speed is v1, while the other will say the speed is ( v1 – u ), where u is the relative speeds of their reference frames.
Ergo, 1/2 m v^2 cannot be invariant. That’s OK, that’s no more controversial than stating that v itself is not invariant.
So, what are we left with? (I’ll make the mass equal to 2 to cancel the 1/2 in KE). Let’s start with the reference frame of the person on the bus; I’ll label the appropriate quantities with a
prime (‘). At t=0, she sees the car moving at v = -100 m/s. At t=t1, she sees the car moving at 0 m/s. At t=t2, she see the car moving at 1 m/s. That gives:
KE_0′ = 10,000
KE_1′ = 0
KE_2′ = 1
As for the person on the side of the road, we have:
KE_0 = 0
KE_1 = 10,000
KE_2 = 10,201
Well, here it might get uncomfortable: it seems that differences in KE are not invariant either, i.e. the observers disagree about how much KE the car gained between t1 and t2. But again, no
shock: a quantity that depends on (v1^2 – v2^2) should not be invariant.
In other words, the misleading (or “tricky”) part of the problem is to state that if the observer on the bus sees an increase in KE of 1 zap, the observer on the ground should see that same
increase. This is not true, and this is where the logic in the problem statement leads to a “connundrum”.
10. #10 Mark Harrison March 18, 2009
The frames are equivalent, but an observer on the ground and an observer in the bus won’t agree on how much work was done, because they won’t agree on how much distance the car covered while
accelerating. The adding of one zap of work is ambiguous.
Let’s say a force of one zork over a distance of one meter results in one zap of work. In order to observe one zap of work in the bus frame, an observer on the bus would have to see a one zork
force applied to the car as it moved one meter ahead of it. This would result in the car moving at 1 m/s in the bus frame.
Now let’s switch to an observer on the ground. That same zork of force would be applied over a distance much longer than a meter, namely, 100t + 1, where t is the time over which the one zork
force is applied (classically, the two observers should agree on the force applied). The observer on the ground sees the force applied over a greater distance, and so would measure more work
done. In this case, the final speed is 101 m/s in the ground frame, 1 m/s in the bus frame.
If the observer on the ground measures one zap of work, the observer on the bus would observe less work done because the car wouldn’t move as far ahead of it. In this case, the final speed is
100.005 m/s in the ground frame, 0.005 m/s in the bus frame.
11. #11 Nemo March 18, 2009
Eric Lund has it right.
You cannot push something forward without pushing something else backward. The energy in the fuel goes not only into the car, but also into the ground (or into the exhaust if it is a rocket
Once you take this into account and calculate the delta-v for BOTH the car and the thing it pushes against, it turns out the reference frame does not matter.
To reduce to a simple example…
Consider two identical balls of mass 1kg with a coiled spring between them. Suppose the spring contains an amount of energy such that when you release the spring the balls fly apart in opposite
directions at 1 m/s.
From the point of view of someone stationary, the initial kinetic energy is zero and the final kinetic energy of each ball is .5 * 1kg * 1^2, which is 0.5, so the total final kinetic energy of
both balls is 1 and the change in KE is 1.
From the point of view of someone moving at velocity 1 m/s, the initial kinetic energy of the two balls is .5 * (2kg) * (-1)^2, or 1. The final kinetic energy of one ball is zero (because it is
moving with you) and the final kinetic energy of the other ball is 0.5 * (1kg) * (2^2), which is 2. So again the change in KE is 1.
This will work out no matter what reference frame you use or what masses the balls have. You just need to make sure you take into account Newton’s Third Law.
Thanks for the brain teaser!
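Nemo's bookkeeping is easy to verify mechanically for any frame; a small Haskell sketch (my own check, using the masses and speeds from the comment above):

```haskell
-- Kinetic energy of a point mass.
ke :: Double -> Double -> Double
ke m v = 0.5 * m * v * v

-- Change in total KE of the two 1 kg balls, seen from a frame moving
-- at speed u: before, both balls move at -u; after, at 1-u and -1-u.
deltaKE :: Double -> Double
deltaKE u = (ke 1 (1 - u) + ke 1 (-1 - u)) - ke 2 (-u)

-- deltaKE 0 == 1.0 and deltaKE 1 == 1.0; the change in kinetic
-- energy is the same 1 unit in every inertial frame.
```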
12. #12 Max Fagin March 18, 2009
“Who is right, given that we know both inertial reference frames are equivalent?”
But they’re not equivalent, are they? The car is accelerating, the bus is not. The car is not in an inertial reference frame.
13. #13 Boris Legradic March 18, 2009
The answer clearly is: One n too much
14. #14 rex27 March 18, 2009
I dunno, I think in the description of the situation at the beginning of the post, we’re talking about getting the car to move at a certain speed from rest…i.e. 10,000 zaps are required to make
it move 100m/s when it is initially not moving…
In the case of the bus moving alongside the car: isn’t this equivalent to asking how much faster it moves when one zap is added when it is initially moving at 100m/s as opposed to when it is
initially at rest?
it’s initial K.E. is (0.5*m*v^2)
when one zap is added surely it’s K.E. changes to (0.5*m*v^2 + 1 zap)
if we call it’s new speed c, then
0.5*m*c^2 = (0.5*m*v^2 + 1z)…
if 1 zap is equivalent to 0.5*m*(1)^2, and v=100,
0.5*m*c^2 = (0.5*m*(100)^2) + 0.5*m = 0.5*m*(100^2 + 1)
therefore c^2 = 10 001
and so c = 100.005 m/s
15. #15 rex27 March 18, 2009
Oh, I think I realised the mistake in my last post… I was using K.E. as absolute, compared to the ground… compared to the bus the K.E. of the car is zero…
Is it possible that the car moves 0.005 m/s faster relative to the ground and 1 m/s faster relative to the bus? i.e. both are right? After all, a car moving at 80m/s to the ground moves at 30m/s relative to a bus moving at 50 m/s….
18. #18 Duae Quartunciae March 18, 2009
I found I could think of this problem more easily in terms of sleds on a frictionless ice pan. A sled moves by pushing on another sled.
The stated problem involves a light weight sled (car) pushing on a heavy sled (the highway), and this can be observed from any other sled. As a wrinkle, if you observed from the recoiling sled,
you can’t assume an inertial frame. If we regard the bus as a third sled, and a truly inertial frame, then the push of the car on the highway-sled will give a small change in the velocity of the
highway-sled wrt the bus-sled as well. I’ll presume that the highway on which the car is traveling is able to recoil independently of the bus. That is, the bus is coasting without friction, and
can represented as a third inertial sled on the ice-pan.
The car moves at 100 m/s wrt to the road. The car pushes on the road, until it moves at 101 m/s wrt the initial frame of the road. But road has a new inertial frame. It recoils at a speed v wrt
to its original frame. Assume that the road is X times more massive than the car. Then v = 1/X.
Observed from a busstop initially at rest wrt to the road, the car changes from 100 to 101 m/s, and has additional energy of 101^2-100^2 = 201 zaps. The road also gets a tiny bit of energy which
is X times (1/X)^2, or 1/X.
Total additional energy is 201 + 1/X, where X is large.
Observed from a bus originally sliding alongside the car… the car changes from 0 to 1 m/s, and has additional energy of 1 zap. But the road! The road starts out with a massive amount of energy.
It's receding at 100 m/s at first, and has energy X * 10000. With the tiny additional recoil of 1/X, it gets more energy at X*(100+1/X)^2, or X * 10000 + 200 + 1/X. When observed from the bus,
the recoiling highway has picked up 200 + 1/X additional zaps of energy!
19. #19 ppnl March 18, 2009
I ran into this conundrum years ago from a different direction. Say a rocket uses one unit of chemical energy to accelerate to a velocity of one meter per second. It then uses the same amount of
fuel again and doubles its velocity. But its energy has increased 4 times. What gives? And why is it different than a car that has to use four times the fuel to double its velocity?
It turns out that the Galilean transform can be as counter intuitive as the Lorentz transform.
20. #20 Duae Quartunciae March 18, 2009
With the rocket, most of the energy ends up the exhaust; not in the moving rocket. If the rocket accelerates by 1 m/s, with exhaust that is expelled at 1000 m/s (it’s actually a bit more than
this, usually) then it must have expelled an amount of exhaust weighing about 1000 times less than the rocket. Hence only 0.1% of the kinetic energy is in the rocket. The rest is the exhaust.
With the second pulse of the rocket engine, the expelled exhaust ends up moving at 999 m/s. 999^2 = 998001.
So. With the first pulse, the rocket gets energy 1 and the exhaust gets energy 1000
With the second pulse, the rocket gets energy 3 (from 1 to 4) and the second exhaust gets energy 998. Sum is the same in both cases. (It remains the same when you consider that the energy of the
exhaust is a bit extra 998.001 and the rocket is a bit less, because it weighs less with less fuel on board).
21. #21 T March 19, 2009
So if a spaceship burns 1 fuel container to reach a speed of 1m/s, which it then stays at constantly, what’s to stop the spaceship from declaring itself at rest (since it’s moving at constant
velocity) thus requiring it to burn only 1 more fuel container to reach a speed of 2m/s (since it’s 1m/s w.r.t his new “rest” frame).
Had he not declared himself at rest once he was at 1m/s he would have needed 4 containers of fuel to reach 2m/s.
I can see that KE is frame dependent, but when you put energy into fuel containers it’s harder to understand how that works out in reality when you can’t just say that different observers will
count different numbers of containers on board the spaceship…
So what’s the definitive answer?
22. #22 Duae Quartunciae March 20, 2009
The definitive answer is already in the comment immediately above yours. Comment #20.
The kinetic energy of a rocket’s exhaust is many orders of magnitude larger than the kinetic energy of the rocket. That’s your definitive answer.
Your estimation of 4 containers of fuel for 2 m/s is based on the notion that the fuel goes into energy of the rocket. It doesn’t. It almost all goes into energy of the exhaust. Just calculate
energy, and momentum, in any frame you like. Include exhaust and rocket, and you’ll get the right answer for a rocket working in empty space.
The second m/s for the rocket takes almost exactly the same amount of fuel as the first, no matter what frame you use. You need a bit less fuel because the rocket is a bit lighter; but that’s it.
23. #23 yesfm March 20, 2009
both are valid. you can’t compare different frames of reference. it moves 0.005 m/s faster with respect to the ground and 1 m/s faster with respect to the bus.
24. #24 Anonymous March 20, 2009
“So if a spaceship burns 1 fuel container to reach a speed of 1m/s, which it then stays at constantly, what’s to stop the spaceship from declaring itself at rest (since it’s moving at constant
velocity) thus requiring it to burn only 1 more fuel container to reach a speed of 2m/s (since it’s 1m/s w.r.t his new “rest” frame).”
It does exactly that. The rocket is different from the car in that the car reacts against the ground. If the ground is moving it is harder to react against it. The rocket is always reacting
against fuel that is motionless in the rocket's frame.
It looks like the rocket is getting the better deal here. But that is negated by the fact that the rocket has to carry a huge mass of fuel with it in order to have fuel in its frame of reference.
25. #25 Duae Quartunciae March 20, 2009
You can certainly compare frames of reference; and if you do the calculations you find energy and momentum conserved in any frame you like to consider.
In both cases (rocket, and car) the system gains a certain amount of energy from consuming fuel. You calculate the same CHANGE in total energy no matter what frame is used; but in different
frames the distributions of energy may be different, and the total energy is different, because the velocities depend on the frame.
In the car example, the car is pushing on a very massive object (the highway) that is moving relative to the car. The rocket, however, is pushing on a very light object (its fuel) that is
initially at rest with respect to the rocket.
In all cases and in any frame, momentum is conserved; and a transfer of equal momentum but opposite is made between car and road, or rocket and exhaust. When equal amounts of momentum are
involved, a lighter object has the greater change in kinetic energy, and a rapid object gets the greater change in kinetic energy. The classical invariant for an object is 2Em = p^2, where E is
kinetic energy, m is mass, and p is momentum. This follows from p = mv and E = 0.5*mv^2.
Hence: from the point of view of the car (initial frame), the rapidly moving highway gets nearly all the energy (200 zaps) while the car gets 1 zap. From the point of view of the road (initial
frame), the rapidly moving car gets 201 zaps, with pretty much nothing for the road as it recoils because it is so massive and at rest.
From the point of view of a rocket (the frame in which it is initially at rest) with exhaust expelled at 1000 m/s, accelerating to 1 m/s gives the rocket a zap of energy, and the exhaust gets
1000 zaps. (1000 times lighter)
But in a frame where the rocket moves from 1m/s to 2m/s, the rockets gains 3 zaps (2^2 – 1^2), while the exhaust gains (999^2 – 1)/1000 = 998 zaps of energy.
Same in any other frame. The total energy of the fuel is all accounted for is independent of the frame, for both the car, and the rocket. | {"url":"http://scienceblogs.com/builtonfacts/2009/03/18/a-connundrum/","timestamp":"2014-04-16T04:47:22Z","content_type":null,"content_length":"80189","record_id":"<urn:uuid:d8f92817-0495-4f59-9c07-f60110f43d72>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00643-ip-10-147-4-33.ec2.internal.warc.gz"} |
Napa Calculus Tutors
...I got an A in the class and I understood the concepts and proofs. I didn't have that good of a teacher, so I mainly had to teach myself through my textbook and another one. I did not like the
textbook and thought that class made differential equations more confusing than it should be.
34 Subjects: including calculus, chemistry, physics, writing
...I have acted as a tutor for MBA students in every course they took in their graduate school curriculum. I have a strong background in statistics and econometrics. I have an undergraduate degree
in biology and math and have worked many years as a data analyst in a medical environment.
49 Subjects: including calculus, physics, geometry, statistics
...My favorite subjects to tutor are the Math and Science subjects. I try to make these subjects less intimidating for students by explaining the material using real-world examples as much as
possible. If students can relate the material to something they've actually experienced, then there is a much higher potential for understanding.I love biology.
29 Subjects: including calculus, chemistry, Spanish, reading
...I am a college student, so I understand what it is like to be tight on money. Thank you, and I hope to see you soon!Algebra 1 is the foundation of much of the mathematics that students will
face as they continue to take math classes, and I have helped many people understand the concepts they are expected to learn in Algebra 1. I have helped many students become familiar with calculus
11 Subjects: including calculus, physics, geometry, algebra 1
...At UC Berkeley I taught CE100, an introductory fluid mechanics course, for which I obtained outstanding student reviews. In the past I have also independently tutored engineering graduate
students in physics, water chemistry, calculus, and fluid mechanics. Other technical subjects that I am ava...
15 Subjects: including calculus, Spanish, geometry, ESL/ESOL | {"url":"http://www.algebrahelp.com/Napa_calculus_tutors.jsp","timestamp":"2014-04-19T07:32:03Z","content_type":null,"content_length":"24797","record_id":"<urn:uuid:dcfee989-08e5-43ea-9367-4ac036f16f3c>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00091-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Boardman-Vogt tensor product of operadic bimodules
Seminar Room 1, Newton Institute
(Joint work with Bill Dwyer.) The Boardman-Vogt tensor product of operads endows the category of operads with a symmetric monoidal structure that codifies interchanging algebraic structures. In this
talk I will explain how to lift the Boardman-Vogt tensor product to the category of composition bimodules over operads. I will also sketch two geometric applications of the lifted B-V tensor product,
to building models for spaces of long links and for configuration spaces in product manifolds. | {"url":"http://www.newton.ac.uk/programmes/GDO/seminars/2013040311301.html","timestamp":"2014-04-20T03:17:49Z","content_type":null,"content_length":"3966","record_id":"<urn:uuid:8c0a216c-0eee-48fa-9201-17b118f5cf32>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00436-ip-10-147-4-33.ec2.internal.warc.gz"} |
Advanced Functions question, would love some help
February 17th 2013, 01:57 PM #1
Feb 2013
Experiment Problem... please help.
Another Experiment is performed using coins. This time, the experimenter starts with 4 coins. Each time she tosses the coins into the air, she counts the number of heads that appear; then adds
that number of coins to the amount she previously had. The following table shows her results.
Toss Number of heads Coins
Part A: Graph the results. DONE
Part B: Explain why the graph appears exponential.
Part C: State an exponential equation that best fits the results. (use n to represent the number of the toss.)
Part D: State the equation that would predict the number of remaining coins if you began with N_0 coins.
Part E: Perform the experiment yourself. Graph your results with a different colour.
Part F: The experimenter claims that the equation is explained by the formula for compound interest: A = P(1 + i)^n. She argues that P represents the number of coins she started with, i is 0.5
since the growth rate is about 50% (since about one-half of the coins tossed come up heads) and n is the number of tosses, which is like the compounding period.
If her hypothesis is correct, create a formula that predicts the total number of coins if an unfair (weighted) coin is used that only comes up heads 1 out of every 4 times.
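(A quick way to test her hypothesis is to simulate the experiment; in this Python sketch, p is the chance of heads - 0.5 for a fair coin, 0.25 for the weighted one - and the toss and trial counts are just illustrative:)

import random

def coins_after(n, p=0.5, start=4):
    coins = start
    for _ in range(n):
        coins += sum(random.random() < p for _ in range(coins))
    return coins

# On average this tracks start * (1 + p)**n, the compound-interest form
print([coins_after(8) for _ in range(5)])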
Thank you so much, need some serious help on this one
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/new-users/213277-advanced-functions-question-would-love-some-help.html","timestamp":"2014-04-19T21:47:03Z","content_type":null,"content_length":"32677","record_id":"<urn:uuid:9fe0d048-2c33-4b4e-b053-7e62e2d4d503>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00234-ip-10-147-4-33.ec2.internal.warc.gz"} |
Monads are Elephants Part 1
Introductions to monads are a bit of a cottage industry on the Internet. So I figured, "why buck tradition?" But this article will present Scala's way of dealing with monads.
An ancient parable goes that several blind men were experiencing their first elephant. "It's a tree," one said while wrapping his arms around its legs. "A large snake," another said while holding its
trunk. A third said...um, something about a broom or a fan or whatever.
From this parable we can conclude this: the ancients believed that the visually impaired like to fondle large mammals. Fortunately we live in a more enlightened age.
We're also supposed to learn something about how our limitations can prevent us from grasping the whole picture and that we're all blind in some way. It's so Zen.
I think there's a third lesson to be learned - the opposite of the main intent: that it's possible to learn much about the big picture by getting a series of limited explanations. If you had never
seen an elephant and people told you things like "it has legs as thick as tree trunks," "a nose like a snake," "a tail like a broom," "ears like fans," etc. then you'd soon have a pretty good
understanding. Your conception wouldn't be perfect, but when you finally saw an elephant it would fit neatly into the mental picture you had slowly built up. Just as the elephant was about to step on
you, you'd think "wow, the legs really are like trees."
Monads are Container Types
One of the most commonly used container types is List and we'll spend some time with it. I also mentioned Option in a previous article. As a reminder, an Option is always either Some(value) or None.
It might not be clear how List and Option are related, but if you consider an Option as a stunted List that can only have 0 or 1 elements, it helps. Trees and Sets can also be monads. But remember
that monads are elephants, so with some monads you may have to squint a bit to see them as containers.
Monads are parameterized. List is a useful concept, but you need to know what's in the List. A List of Strings (List[String]) is pretty different from a List of Ints (List[Int]). Obviously it can be
useful to convert from one to the other. Which leads us to the next point.
Monads Support Higher Order Functions
A higher order function is a function that takes a function as a parameter or returns a function as a result. Monads are containers which have several higher order functions defined. Or, since we're
talking about Scala, monads have several higher order methods.
One such method is map. If you know any functional languages then you're probably familiar with map in one form or another. The map method takes a function and applies it to each element in the
container to return a new container. For instance
def double(x: Int) = 2 * x
val xs = List(1, 2, 3)
val doubles = xs map double
// or val doubles = xs map {2 * _}
assert(doubles == List(2, 4, 6))
Map does not change the kind of monad, but may change its parameterized type...
val one = Some(1)
val oneString = one map {_.toString}
assert(oneString == Some("1"))
Here the {_.toString} notation means that the toString method should be called on the element.
Monads are Combinable
Now let's say we have a configuration library that lets us fetch parameters. For any parameter we'll get back an Option[String] - in other words we may or may not get a string depending on whether
the parameter is defined. Let's say we also have a function, stringToInt, which takes a String and returns Some[Int] if the string is parseable as an integer or None if it's not. If we try to combine
them using map we run into trouble.
val opString : Option[String] = config fetchParam "MaxThreads"
def stringToInt(string:String) : Option[Int] = ...
val result = opString map stringToInt
Unfortunately, since we started with an Option and mapped its contained element with a function that results in another Option, the variable "result" is now an Option that contains an Option, i.e.
result is an Option[Option[Int]]. That's probably not terribly useful in most cases.
To motivate a solution, imagine if instead of Option we'd used List and ended up with List[List[Int]] - in other words a list containing some number of lists. Given that, we just need "flatten" - a
function which takes a list of lists (List[List[A]]) and returns a single list (List[A]) by concatenating everything together.^1
A flatten function for Option[Option[A]] works a bit differently.
def flatten[A](outer:Option[Option[A]]) : Option[A] =
  outer match {
    case None => None
    case Some(inner) => inner
  }
If the outer option is None, then result is None. Otherwise the result is the inner Option.
These two flatten functions have similar signatures: they take an M[M[A]] and turn it into an M[A]. But the way they do it is quite different. Other monads would have their own ways of doing flatten
- possibly quite sophisticated ways. This possible sophistication is why explanations of monads will often use "join" instead of "flatten." "Join" neatly indicates that some aspect of the outer monad
may be combined (joined) with some aspect of the inner monad. I'll stick with "flatten," though, because it fits with our container analogy.
Now, Scala does not require you to write flatten explicitly. But it does require that each monad have a method called flatMap^2. What's flatMap? It's exactly what it sounds like: doing a map and
then flattening the result.
class M[A] {
  private def flatten[B](x:M[M[B]]) : M[B] = ...
  def map[B](f: A => B) : M[B] = ...
  def flatMap[B](f: A => M[B]) : M[B] = flatten(map(f))
}
With that, we can revisit our problematic code...
val opString : Option[String] = config fetchParam "MaxThreads"
def stringToInt(string:String) : Option[Int] = ...
val result = opString flatMap stringToInt
Because of flatMap we end up with "result" being an Option[Int]. If we wanted, we could take result and flatMap it with a function from Int to Option[Foo]. And then we could flatMap that with a
function from Foo to Option[Bar], etc.
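Spelled out, such a chain might look like this (Foo, Bar, and the two functions are hypothetical placeholders for whatever your program needs):

def intToFoo(i: Int) : Option[Foo] = ...
def fooToBar(foo: Foo) : Option[Bar] = ...

val result2 : Option[Bar] = opString flatMap stringToInt flatMap intToFoo flatMap fooToBar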
If you're keeping score, many papers on monads use the word "bind" instead of "flatMap" and Haskell uses the ">>=" operator. It's all the same concept.
Monads Can Be Built In Different Ways
So we've seen how the flatMap method can be built using map. It's possible to go the other way: start with flatMap and create map based on it. In order to do so we need one more concept. In most
papers on monads the concept is called "unit," in Haskell it's called "return." Scala is an object oriented language so the same concept might be called a single argument "constructor" or "factory."
Basically, unit takes one value of type A and turns it into a monad of type M[A]. For List, unit(x) == List(x) and for Option, unit(x) == Some(x).
Scala does not require a separate "unit" function or method, and whether you write it or not is a matter of taste. In writing this version of map I'll explicitly write "unit" just to show how it fits
into things.
class M[A](value: A) {
  private def unit[B] (value : B) = new M(value)
  def map[B](f: A => B) : M[B] = flatMap {x => unit(f(x))}
  def flatMap[B](f: A => M[B]) : M[B] = ...
}
In this version flatMap has to be built without reference to map or flatten - it will have to do both in one go. The interesting bit is map. It takes the function passed in (f) and turns it into a
new function that is appropriate for flatMap. The new function looks like {x => unit(f(x))} meaning that first f is applied to x, then unit is applied to the result.
Conclusion for Part I
Scala monads must have map and flatMap methods. Map can be implemented via flatMap and a constructor or flatMap can be implemented via map and flatten.
flatMap is the heart of our elephantine beast. When you're new to monads, it may help to build at least the first version of a flatMap in terms of map and flatten. Map is usually pretty straight
forward. Figuring out what makes sense for flatten is the hard part.
As you move into monads that aren't collections you may find that flatMap should be implemented first and map should be implemented based on it and unit.
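To make that concrete, here is a complete (if tiny) monad - a Box holding a single value - with flatMap written first and map derived from flatMap and the constructor. (Box is my own illustration, not something from the Scala library.)

class Box[A](val value: A) {
  def flatMap[B](f: A => Box[B]) : Box[B] = f(value)
  def map[B](f: A => B) : Box[B] = flatMap {x => new Box(f(x))}
}

val box = new Box(3)
val seven = box flatMap {x => new Box(x + 4)}
assert(seven.value == 7)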
In part 2 I'll cover Scala's syntactic sugar for monads. In part 3 I'll present the elephant's DNA: the monad laws. Finally, in part 4 I'll show a monad that's only barely a container. In the
meantime, here's a cheat sheet for translating between computer science papers on monads, Haskell, and Scala.
│ Generic  │ Haskell                                         │ Scala                                        │
│ M        │ data M a or newtype M a or instance Monad (M a) │ class M[A] or case class M[A] or trait M[A]  │
│ M a      │ M a                                             │ M[A]                                         │
│ unit v   │ return v                                        │ new M(v) or M(v)                             │
│ map f m  │ fmap f m                                        │ m map f                                      │
│ bind f m │ m >>= f or f =<< m                              │ m flatMap f                                  │
│ join     │ join                                            │ flatten                                      │
│          │ do                                              │ for                                          │
^1. The Scala standard library includes a flatten method on List. It's pretty slick, but to explain it I would have to go into implicit conversions which would be a significant distraction. The slick
part is that flatten makes sense on List[List[A]] but not on List[A], yet Scala's flatten method is defined on all Lists while still being statically type checked.
^2. I'm using a bit of shorthand here. Scala doesn't "require" any particular method names to make a monad. You can call your methods "germufaBitz" or "frizzleMuck". However, if you stick with map
and flatMap then you'll be able to use Scala's "for comprehensions".
17 comments:
Type constructor polymorphism, a fairly new addition to Scala (in 2.5) now supports defining monads more conveniently. For a real life example, see my first take on a library for variable binding
that uses a state monad to generate fresh variable names. I intend to eventually integrate this with Scala's library of parser combinators, which are also monads.
Excellent explanation that doesn't need you to already know the answer to make sense of the text.
I wait eagerly for the next part.
Besides being a good proof of Scala's power and expressiveness.
I understand Monads as Computations, but understanding Monads as Containers is more difficult. Most articles on the topic discuss List as a Monad which is easy to understand but how does this
result extend to other "collections" (for example, Tree etc)?
If the blind men had said that an elephant had legs like tree trunks or a nose like a snake, that would be fine. The problem is that they said it the whole elephant was like a tree trunk or a
snake or a fan. So when you're told to look out for a
"trunk/snake/fan/broom feeling thing", you wouldn't know where to begin. And that's where the analogies of Monads as spacesuits, assembly lines, containers, and so on really fall down. Call it an
imperfect analogy or a leaky abstraction, but it's just a dissonant clamor of conflicting mental models that confuse understanding of the whole.
I finally "get" monads now that I see them as a collection of behaviors (i.e. their laws and operators), but I still find myself fighting the poor mental models I was saddled with from the start.
Of course it also doesn't help that I have to fight the type system to do anything complex with Monads, but that's more Haskell's "fault" (or at least my beef with Haskell) than it is with monads
qua monads.
In the table comparing theory, haskell & scala;
map f m
So m stands for "monad" ?
Gang, sorry I've been away from the blog for awhile and so I'm a bit behind on my comments.
Adriaan, thanks for the links. I don't want to get too far down the path of Scala's type system. Trying to explain higher kinded types while explaining monads might cause sensory overload.
Lionel, thanks for the compliment. I do agree that Scala is powerful, but it's possible to implement monads in many languages. It's just that Scala has built in support for them.
I'll take the other two comments individually.
"Monads as Computations" is the more powerful understanding and I plan to work my articles in that direction. Unfortunately, it's not a very natural fit for collection classes in a (mostly)
strict language like Scala. For them the container approach I outline works better.
Still, I'll give it a shot. If you can see List as a computation, then you might see generalized Tree as a computation that when given a traversal order returns a List (or other sequence type).
The tree monad then is a computation that can be bound to tree producing functions in order to produce a new computation.
Hmmm, seems easier to think of a Tree monad based on map and flatten.
You're mostly right that once you "get" monads then most of the other analogies fall away. So my answer is another analogy ;-) Monad analogies are the scaffolding you use to create a tall
building. Once the building can stand on its own you don't need the scaffolding any more.
I think the exception is container. I think it still has some applicability even when you work with more computational monads like continuations, parser combinators and IO. It's just that, as I
hint in the article, you have to squint more to see those as containers.
Haskellers and mathematicians both like to use short variable names. In this case the names are meant to be a reminder of the types that map expects.
Specifically, for "map f m", f is a function of the appropriate type and m is an instance of a monad. Technically, m could be any functor, but "map f f" wouldn't be very helpful in most cases.
I've made some updates to this article. First, I had planned on doing both "for" and the laws in one article but they each deserve their own treatment plus I've added a plan for a 4th
installment. Second, I fixed the embarrassing and repeated use of "outter" instead of "outer." :) Third, I've posted a forward link to part 2.
The code in this suggests that 1.toString gives "one", when it gives "1".
LOL. Fixed.
many thanks for your article. i do believe it is helping.
I like the article, monads seem strange to me. I have no idea what it is (like you say) but I want to learn more for some reason. Love the way you motivate the subject, computer science doesn't
have to be dry and dusty, I didn't realize when you got into the technical stuff :)
That was so much easier to understand than the description of monads on the scala website. Thank you!
After 5 years this is still an awesome article. Thank you
what exactly does "monad" term mean ?? is it another kind of an interface that must implement abstract methods?? In short , if some one asked what is "monad" , what would be the shortest answer? | {"url":"http://james-iry.blogspot.com.au/2007/09/monads-are-elephants-part-1.html","timestamp":"2014-04-16T13:46:12Z","content_type":null,"content_length":"84042","record_id":"<urn:uuid:afa2fba2-cd7c-41af-aecf-d8df7e100ce4>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00221-ip-10-147-4-33.ec2.internal.warc.gz"} |
Haverford SAT Math Tutor
Find a Haverford SAT Math Tutor
I have been a teacher and tutor for more than 20 years. Today, I am an Instructional Aide at Harriton High School, and assist special education and regular students daily in all subjects: math,
physics, chemistry, English, history, writing. As such, I am familiar with course content, assessments, standardized testing, ACT, SAT, PSAT, PSSA, and the Keystone.
35 Subjects: including SAT math, chemistry, reading, English
...I have over 800 hours of experience working with elementary age children. I have experience working with emergent readers in Costa Rica and the United States in learning how to read. I have
experience teaching children of all different background and abilities skills for decoding.
32 Subjects: including SAT math, English, reading, writing
...K taught me that math wasn't just numbers and memorization; there was a more logical way of thinking about it that I could actually grasp. By 9th grade I was at the top of my class. I am now a
junior in high school and am in AP pre-calculus.
22 Subjects: including SAT math, calculus, writing, geometry
...I conduct research at UPenn and West Chester University on colloidal crystals and hydrodynamic damping. Students I tutor are mostly college-age, but range from middle school to adult. As a
tutor with a primary focus in math and science, I not only tutor algebra frequently, but also encounter this fundamental math subject every day in my professional life.
9 Subjects: including SAT math, physics, calculus, geometry
...I was an employee of West Chester's tutoring center and achieved Master - Level 3 certification from the College Reading and Learning Association. I am comfortable tutoring all skill, age, and
confidence levels from middle school math up through Calculus. I am also available for SAT preparation.
8 Subjects: including SAT math, calculus, algebra 2, algebra 1 | {"url":"http://www.purplemath.com/haverford_sat_math_tutors.php","timestamp":"2014-04-18T13:37:42Z","content_type":null,"content_length":"24031","record_id":"<urn:uuid:ddc68932-a9da-472f-8aa4-f831ff71b732>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00571-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: The Problem of Malfatti: Two Centuries of Debate
MARCO ANDREATTA, ANDRÁS BEZDEK AND JAN P. BOROŃSKI
Gianfrancesco Malfatti (Figure 1) was a brilliant Italian mathematician born in 1731 in a small village in the Italian Alps, Ala, near Trento. He first studied at a Jesuit school in Verona, then at the University of Bologna. Malfatti was one of the founders of the Department of Mathematics of the University of Ferrara. He died in Ferrara in 1807.
As a very active intellectual in the Age of Enlightenment, he devoted himself to the promotion of many new ideas and wrote many papers in different fields of mathematics including algebra, calculus, geometry, and probability theory. He played an important role in the creation of the Nuova Enciclopedia Italiana (1779), in the spirit of the French Encyclopédie edited by Diderot and d'Alembert.
His mathematical papers were collected by the Italian | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/611/2652929.html","timestamp":"2014-04-18T09:50:44Z","content_type":null,"content_length":"7953","record_id":"<urn:uuid:d18b3aff-c473-4936-9ee2-2d7f01d01f9c>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00266-ip-10-147-4-33.ec2.internal.warc.gz"} |
the encyclopedic entry of parallelogram
In geometry, a parallelogram is a quadrilateral with two sets of parallel sides. The opposite sides of a parallelogram are of equal length, and the opposite angles of a parallelogram are congruent.
The three-dimensional counterpart of a parallelogram is a parallelepiped.
• The area, $A$, of a parallelogram is $A = BH$, where $B$ is the base of the parallelogram and $H$ is its height.
• The area of a parallelogram is twice the area of a triangle created by one of its diagonals.
• The area of a parallelogram is also equal to the magnitude of the vector cross product of two adjacent sides.
• The diagonals of a parallelogram bisect each other.
• Opposite sides of a parallelogram are equal.
• Opposite angles of a parallelogram are equal.
• Each diagonal bisects the parallelogram into two congruent triangles.
• It is possible to create a tessellation of a plane with any parallelogram.
The properties of having equal opposite sides and opposite angles are shared with the antiparallelogram, a type of non-convex quadrilateral in which the two longer edges cross each other.
Computing the area of a parallelogram
Let $a, b \in \mathbb{R}^2$ and let $V = [a\ b] \in \mathbb{R}^{2 \times 2}$ denote the matrix with columns $a$ and $b$. Then the area of the parallelogram generated by $a$ and $b$ is equal to $|\det(V)|$.
Let $a, b \in \mathbb{R}^n$ and let $V = [a\ b] \in \mathbb{R}^{n \times 2}$. Then the area of the parallelogram generated by $a$ and $b$ is equal to $\sqrt{\det(V^T V)}$.
Let $a, b, c \in \mathbb{R}^2$. Then the area of the parallelogram is equivalent to the absolute value of the determinant of a matrix built using $a$, $b$ and $c$ as rows with the last column padded using ones:
$V = \left| \det \begin{bmatrix} a_1 & a_2 & 1 \\ b_1 & b_2 & 1 \\ c_1 & c_2 & 1 \end{bmatrix} \right|.$
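For example (a worked check of the first formula, with arbitrarily chosen vectors): $a = (3, 0)$ and $b = (1, 2)$ give $|\det(V)| = |3 \cdot 2 - 0 \cdot 1| = 6$, which matches $A = BH$ with base $3$ and height $2$.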
Proof that diagonals bisect each other
To prove that the diagonals of a parallelogram bisect each other, first note a few pairs of equivalent angles:
$\angle ABE \cong \angle CDE$
$\angle BAE \cong \angle DCE$
since they are angles that a transversal makes with parallel lines $AB$ and $DC$.
Also, $\angle AEB \cong \angle CED$ since they are a pair of vertical angles.
Therefore, $\triangle ABE \sim \triangle CDE$ since they have the same angles.
From this similarity, we have the ratios
$\frac{AB}{CD} = \frac{AE}{CE} = \frac{BE}{DE}$
Since $AB = DC$, we have
$\frac{AB}{CD} = 1$.
Therefore,
$AE = CE$
$BE = DE$
so $E$ bisects the diagonals $AC$ and $BD$.
You can also prove that the diagonals bisect each other by placing the parallelogram on a coordinate grid and assigning variables to the vertices; you can then show that the diagonals have the same midpoint.
Derivation of the area formula
The area formula,
$A_\text{parallelogram} = B \times H,$
can be derived as follows:
The area of the parallelogram to the right (the blue area) is the total area of the rectangle less the area of the two orange triangles. The area of the rectangle is
$A_\text{rect} = (B+A) \times H,$
and the area of a single orange triangle is
$A_\text{tri} = \frac{1}{2} A \times H,$ or $S_\text{tri} = \frac{1}{2} bh.$
Therefore, the area of the parallelogram is
$A_\text{parallelogram} = A_\text{rect} - 2 \times A_\text{tri} = \left((B+A) \times H\right) - \left(A \times H\right) = B \times H.$
Alternate method
An alternative, less mathematically sophisticated method to show the area is by rearrangement of the pieces. First, take the two ends of the parallelogram and chop them off to form two more
triangles. Each of these two new triangles is equal in every way with the orange triangles. This first step is shown to the right.
The second step is merely swap the left orange triangle with the right blue triangle. Clearly, the two blue triangles plus the blue rectangle have an area equivalent to $B H$.
To further demonstrate this, the first image on the right could be printed off and cut up along the lines:
1. Cut along the lines between the orange triangles and the blue parallelogram
2. Cut along the vertical lines on the end to form the two blue triangles and the blue rectangle
3. Rearrange all five pieces as shown in the second image
| {"url":"http://www.reference.com/browse/parallelogram","timestamp":"2014-04-17T02:20:00Z","content_type":null,"content_length":"87242","record_id":"<urn:uuid:61f34560-0186-4ebd-8567-bb954e3d0b2b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
D. J. Bernstein
Authenticators and signatures
A state-of-the-art public-key signature system
Signatures; verification
A signature of a message m under a public key pq has four pieces:
• An integer e in {1,-1}.
• An integer f in {1,2}.
• An integer r in {0,1,...,15}.
• An integer s in {0,1,...,2^1536-1}.
The pieces satisfy the equation H0(r,m) = efs^2 mod pq.
Signers are actually required to generate s in the smaller interval [0,(pq-1)/2], but verifiers do not need to bother checking for this.
Note that there are also compressed and expanded forms of signatures.
Note that, starting from a signature (e,f,r,s) and public key pq, one can recover H0(r,m), and thus recover the first 171 bytes of m; so m can be compressed if the signature and public key are
available. However, if m is below 96 bytes, compressed signatures save more space.
How do I encode a signature as a string of bytes?
The standard format is
• 192 bytes: s in little-endian form.
• 1 byte: r, plus 16 if e=-1, plus 32 if f=2; two bits unused.
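In code, packing that format might look like this (a sketch, assuming s is a nonnegative Python int and e, f, r are as above):

def encode_signature(e, f, r, s):
    flags = r + (16 if e == -1 else 0) + (32 if f == 2 else 0)
    return s.to_bytes(192, "little") + bytes([flags])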
How do I verify a signature?
Square s, multiply by e and f, divide by pq, and check that the remainder equals H0(r,m).
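In code (a sketch; H0 is the hash function described elsewhere on this page, assumed here to return a value already reduced mod pq):

def verify(pq, e, f, r, s, m, H0):
    return (e * f * s * s) % pq == H0(r, m)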
Alternatively: Square s, multiply by e and f, subtract H0(r,m), and check that the difference is divisible by pq. | {"url":"http://cr.yp.to/sigs/verify.html","timestamp":"2014-04-16T10:59:48Z","content_type":null,"content_length":"1733","record_id":"<urn:uuid:1e58bc2d-0911-4da7-8a41-01369d2c9cfe>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00262-ip-10-147-4-33.ec2.internal.warc.gz"} |
Braingle: 'Who Likes Whom ?' Brain Teaser
Who Likes Whom ?
Logic puzzles require you to think. You will have to be logical in your reasoning.
Puzzle ID: #19154
Category: Logic
Submitted By: lesternoronha1
The relationships of a person X with 3 people P, Q and R are given below.
1) X likes at least one of the three people.
2) If X likes P, but not R, then he also likes Q.
3) X likes both R and Q, or he likes neither of them.
4) If X likes R, then he also likes P.
Which of the following statements can be deduced from the above?
A) X likes P, Q and R.
B) X likes only P.
C) X likes only Q.
D) X likes P and Q only.
NOTE: Out of the given set of 4 statements, it is not necessary that one and only one statement can be deduced. One or more statements may be possible to be deduced.
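(One way to check your reasoning is to brute-force the eight possibilities; this Python sketch encodes the four conditions directly, with True meaning "X likes that person":)

from itertools import product

for P, Q, R in product([False, True], repeat=3):
    ok = (P or Q or R)                      # condition 1
    ok = ok and ((not (P and not R)) or Q)  # condition 2
    ok = ok and (R == Q)                    # condition 3
    ok = ok and ((not R) or P)              # condition 4
    if ok:
        print("P:", P, "Q:", Q, "R:", R)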
What Next? | {"url":"http://www.braingle.com/brainteasers/19154/who-likes-whom-.html","timestamp":"2014-04-17T12:38:17Z","content_type":null,"content_length":"23151","record_id":"<urn:uuid:27fbb766-30e8-4c94-ba83-4faf1f9df2a9>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00618-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tutorial Series - A simple 3D game
Hey guys. I wanted to do a tutorial for a simple 3D game. I hope that this tutorial will help people to better understand a couple of concepts that I had trouble with when first learning how to write
a game for OpenGl. Specifically, I will try to explain the following concepts that I use in my game,
Lava Ball
, which is coming to the Android Market soon.
(1) How to organize your game objects into classes and how to call them for rendering.
(2) Rotation of objects made simple with Quaternions.
(3) Code for creating a sphere.
(4) Collision detection framework done the right way. (Simplified)
(5) Creating the gameplay for a simple game.
Some disclaimers:
First, I am not an OpenGl expert under any circumstances. I have just learned it in the last couple months. My knowledge is not broad at all. Rather, it is limited to only that which I needed to make
my game. That said, I have done pretty well in a short time and I hope that what I have learned is useful to someone else.
Lava Ball
is a pretty complex game that is running at between 40 and 50 frames a second on my Droid. So, I have done something right. The game I create here will in no way be nearly as complex as LavaBall. So,
we should end up with something that performs exceeding well.
Second, you should note that I have limited time until
Lava Ball
is released. I will be rolling this tutorial out as time permits.
Finally, I am sure that there are other ways, probably even better ways, to organize and create a game. I am sure other people will chime in with comments about what I write. They will probably be
right and I will probably be wrong. That said, what I did works and will work for you. Use what you want and leave the rest.
A Request:
Please check out
Lava Ball 3D
on the android market. Thanks.
Later today I will post "Part 1 - Vector Math Class"
Last edited by seed on Tue Jul 20, 2010 5:00 pm, edited 1 time in total.
Visit Exit 4 Gaming - http://www.exit4games.com/
Home of LavaBall - http://exit4games.com/?page_id=3
Home of Rebound - http://exit4games.com/?page_id=138
Home of Tap Crazy - http://exit4games.com/?page_id=219
Download "Caveman Pool" From the Market Today!
Part 1 - A Vector Math Class
Vector Math is a key element to any 3D Game. You can't do anything without it. So, it seems like a great place to start. This is the vector math class that I use in my game. Hopefully this is useful
to someone even if they don't follow the rest of this tutorial.
The class is pretty basic except for one thing, I implemented a simple stack. We will come back to that. Other than the unique stack, the rest of the class is self explanatory. Here is the class:
Code: Select all
public class Vector3
{
    public float x, y, z;

    // ****************************
    // Stack allocation and freeing
    // ****************************
    private static int STACK_SIZE = 100;
    private static Vector3[] stack;
    private static boolean initialized = false;
    private static int stack_index = 0;

    public static Vector3 alloc()
    {
        if (!initialized)
            init();
        return stack[stack_index++];
    }

    public static void free()
    {
        if (!initialized)
            init();
        if (stack_index > 0)
            stack_index--;
    }

    private static void init()
    {
        stack = new Vector3[STACK_SIZE];
        for (int i = 0; i < STACK_SIZE; i++)
            stack[i] = new Vector3();
        initialized = true;
    }

    // Constructor
    public Vector3()
    {
        x = 0;
        y = 0;
        z = 0;
    }

    public Vector3(float xin, float yin, float zin)
    {
        x = xin; y = yin; z = zin;
    }

    public void Set(float xin, float yin, float zin)
    {
        x = xin; y = yin; z = zin;
    }

    static public void add(Vector3 v1, Vector3 v2, Vector3 v3)
    {
        v1.x = v2.x + v3.x;
        v1.y = v2.y + v3.y;
        v1.z = v2.z + v3.z;
    }

    static public void subtract(Vector3 v1, Vector3 v2, Vector3 v3)
    {
        v1.x = v2.x - v3.x;
        v1.y = v2.y - v3.y;
        v1.z = v2.z - v3.z;
    }

    static public void copy(Vector3 v1, Vector3 v2)
    {
        v1.x = v2.x;
        v1.y = v2.y;
        v1.z = v2.z;
    }

    // Reflect dir about normal, damping the normal component by damp.
    // NOTE: reconstructed body -- a standard damped reflection,
    // r = d - (1 + damp) * (d . n) * n, where n is the unit normal.
    static public void reflect(Vector3 result, Vector3 dir, Vector3 normal, float damp)
    {
        float dot;
        Vector3 v1 = alloc();

        normalize(v1, normal);
        dot = (1 + damp) * dot_product(dir, v1);
        scale(v1, v1, dot);
        subtract(result, dir, v1);

        free(); // v1
    }

    static public void inverse(Vector3 v1, Vector3 v2)
    {
        v1.x = -v2.x;
        v1.y = -v2.y;
        v1.z = -v2.z;
    }

    static public void scale(Vector3 v1, Vector3 v2, float s)
    {
        v1.x = v2.x * s;
        v1.y = v2.y * s;
        v1.z = v2.z * s;
    }

    static public float length(Vector3 v1)
    {
        return (float) Math.sqrt(v1.x * v1.x + v1.y * v1.y + v1.z * v1.z);
    }

    static public void normalize(Vector3 result, Vector3 v1)
    {
        float len = length(v1);
        if (len != 0)
        {
            result.x = v1.x / len;
            result.y = v1.y / len;
            result.z = v1.z / len;
        }
    }

    // Written with temp vector in order to work when result is also one of
    // other two parameters
    static public void cross_product(Vector3 result, Vector3 v1, Vector3 v2)
    {
        Vector3 temp = alloc();
        temp.x = v1.y * v2.z - v1.z * v2.y;
        temp.y = v1.z * v2.x - v1.x * v2.z;
        temp.z = v1.x * v2.y - v1.y * v2.x;
        copy(result, temp);
        free(); // temp
    }

    static public float dot_product(Vector3 v, Vector3 w)
    {
        return (v.x * w.x + v.y * w.y + v.z * w.z);
    }
}
An instance of this class holds three member variables for the vector: x,y,z. The rest of the class is a series of static methods for doing math. Personally, I prefer that the math methods are static
because it makes other code more readable (at least to me) and it is a little faster at execution. You can change this class to have these methods not be static if you want, but I wouldn't.
There are methods for scaling, addition, subtraction, normalization, dot and cross products, and reflection. Each method, other than those that return a variable, puts the result of the operation in
the first parameter. Here is an example of some code using the class.
Code: Select all
// A function to keep the direction of your velocity but change the magnitude
public void changeVelocity(Vector3 velocity, float magnitude)
{
    // Normalize the velocity
    Vector3.normalize(velocity, velocity);
    // Scale the normalized velocity using the incoming magnitude
    Vector3.scale(velocity, velocity, magnitude);
}
The only really unique concept in the class is the stack. Garbage collection is a very bad thing for games. You don't want ANY garbage collection in your games. That said, there is often the need to
create temporary objects in functions to hold working data. But, you don't want to create these objects using new or you get garbage. So, my solution, although others probably won't like it, was to
create a stack for my objects. In this case, if you need a temp Vector3 object in a function, you can call the static alloc to get it and later call the static free to release it. Typically, you
would allocate at the beginning of a function and free at the end. For example:
Code: Select all
// In this method we want to change v1 using v2 BUT NOT CHANGE the data in v2.
// In order to do this we need to copy v2 into a temporary variable, manipulate it,
// and then add it to v1.
public void doSomeStuff (Vector3 v1, Vector3 v2)
{
    // Allocate a temp Vector3 on the stack
    Vector3 temp_vector = Vector3.alloc();

    // Copy v2 and manipulate the copy (a simple scale, shown as an example),
    // then add it to v1 -- v2 itself is never changed
    Vector3.copy(temp_vector, v2);
    Vector3.scale(temp_vector, temp_vector, 2.0f);
    Vector3.add(v1, v1, temp_vector);

    Vector3.free(); // temp_vector
}
OK, well this is really convenient and creates NO garbage. Are there any problems with it? The short answer is TONS! You must be extremely careful with the stack allocation. I wrote this for my own
use and have not tried to make it safe to use in any way. I am depending on my own ability to be safe when using it. In a tutorial, providing this unsafe method is probably a really bad thing to do.
But, it is what I am doing. I will leave it up to others to provide better solutions. I will tell you what to look out for.
1) Stack size. You must size your stack correctly. If you don't, you will get a bounds exception.
2) Don't forget the frees. If you do, well then you are in trouble. You will probably get unexplained exceptions. What I do is comment every free with the variable that it applies to. You need an
equal number of frees at the end of each function to the number of allocs at the beginning of the function.
3) Don't return early. If you do, make sure you free there too.
4) THIS IS NOT THREAD SAFE. Don't allocate Vector3s in multiple threads using alloc, period. Making it thread safe is probably pretty easy, I just didn't have a need to do it.
Well, that is it. My Vector3 class. Not much of a start but without it we go nowhere.
Coming next:
Part 2 - An OpenGl framework application.
Part 3 - Organizing Game Objects into Classes
Visit Exit 4 Gaming - http://www.exit4games.com/
Home of LavaBall - http://exit4games.com/?page_id=3
Home of Rebound - http://exit4games.com/?page_id=138
Home of Tap Crazy - http://exit4games.com/?page_id=219
Download "Caveman Pool" From the Market Today!
Re: Tutorial Series - A simple 3D game
Cool, looking forward to seeing more.
Re: Tutorial Series - A simple 3D game
OK, guys, I am back. Over the next few weeks, I will work on this tutorial and the associated game. The next post will provide a very simple OpenGL framework to develop a game in. No complicated
multithreaded design here. Just draw, move, draw, move. If it worked for Carmack and Quake, then it is good enough for me;)
In the mean time, go download Lava Ball 3D from the market and let me know what you think.
Visit Exit 4 Gaming - http://www.exit4games.com/
Home of LavaBall - http://exit4games.com/?page_id=3
Home of Rebound - http://exit4games.com/?page_id=138
Home of Tap Crazy - http://exit4games.com/?page_id=219
Download "Caveman Pool" From the Market Today! | {"url":"http://www.anddev.org/android-2d-3d-graphics-opengl-tutorials-f2/tutorial-series-a-simple-3d-game-t14268.html","timestamp":"2014-04-20T01:09:44Z","content_type":null,"content_length":"42609","record_id":"<urn:uuid:80299ba8-2018-4392-b8a1-8b263483ccb3>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00034-ip-10-147-4-33.ec2.internal.warc.gz"} |
graph problem
April 15th 2010, 11:38 PM
graph problem
Can any body help me with this?
Show that if G is a simple graph with n vertices, where each vertex has degree >= 1/2*(n-1), then G must be connected.
My answer is,
The total number of degrees of the graph is at least 1/2*n*(n-1).
For a complete graph q = 1/2*n*(n-1), where 'q' is the total number of edges.
So G is at least a complete graph, which means it must be connected.
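One way to sanity-check the statement before proving it is to brute-force all small graphs (a quick sketch in plain Python; a check, not a proof):

from itertools import combinations

def connected(n, edges):
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for a, b in edges:
            v = b if a == u else (a if b == u else None)
            if v is not None and v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

for n in range(2, 6):
    pairs = list(combinations(range(n), 2))
    for mask in range(1 << len(pairs)):
        edges = [e for i, e in enumerate(pairs) if (mask >> i) & 1]
        degree = [sum(u in e for e in edges) for u in range(n)]
        if all(d >= (n - 1) / 2 for d in degree):
            assert connected(n, edges)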
thnx in advance. | {"url":"http://mathhelpforum.com/discrete-math/139456-graph-problem-print.html","timestamp":"2014-04-20T06:44:22Z","content_type":null,"content_length":"3353","record_id":"<urn:uuid:16f9584f-132f-4210-ab23-b18cdf0ad716>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00216-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving Constraint Satisfaction Problems Using Finite State Automata
Nageshwara Rao Vempaty
In this paper, we explore the idea of representing CSPs using techniques from formal language theory. The solution set of a CSP can be expressed as a regular language; we propose the minimized
deterministic finite state automaton (MDFA) recognizing this language as a canonical representation for the CSP. This representation has a number of advantages. Explicit (enumerated) constraints can
be stored in lesser space than traditional techniques. Implicit constraints and networks of constraints can be composed from explicit ones by using a complete algebra of boolean operators like AND,
OR, NOT, etc., applied in an arbitrary manner. Such constraints are stored in the same way as explicit constraints - by using MDFAs. This capability allows our technique to construct networks of
constraints incrementally. After constructing this representation, answering queries like satisfiability, validity, equivalence, etc., becomes trivial as this representation is canonical. Thus, MDFAs
serve as a means to represent constraints as well as to reason with them. While this technique is not a panacea for solving CSPs, experiments demonstrate that it is much better than previously known
techniques on certain types of problems.
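To make the composition idea concrete, here is a minimal sketch (illustrative only, not the paper's implementation) of AND-ing two constraints by intersecting their automata with the product construction. A DFA here is a triple (start state, accepting set, transitions[state][symbol] -> state), and a missing transition means rejection:

def intersect(dfa1, dfa2):
    start1, acc1, d1 = dfa1
    start2, acc2, d2 = dfa2
    start = (start1, start2)
    trans, accepting, todo = {}, set(), [start]
    while todo:
        state = todo.pop()
        if state in trans:
            continue
        p, q = state
        trans[state] = {}
        if p in acc1 and q in acc2:
            accepting.add(state)
        # Step both automata in lockstep on every shared symbol
        for sym in set(d1.get(p, {})) & set(d2.get(q, {})):
            nxt = (d1[p][sym], d2[q][sym])
            trans[state][sym] = nxt
            todo.append(nxt)
    return start, accepting, trans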
This page is copyrighted by AAAI. All rights reserved. Your use of this site constitutes acceptance of all of AAAI's terms and conditions and privacy policy. | {"url":"http://aaai.org/Library/AAAI/1992/aaai92-070.php","timestamp":"2014-04-16T19:43:29Z","content_type":null,"content_length":"3147","record_id":"<urn:uuid:86fc81e7-cb12-48bb-be74-30d0482484eb>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00178-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SPAM-Bayes] - Re: Converting IBM Floats..Help.. - Bayesian Filter detected spam
Jeff Epler jepler at unpythonic.net
Thu Mar 25 20:51:57 CET 2004
> Bayesian Filter detected spam
huh -- sorry your filter thought I was spam, I'm glad you found my
response. I've had very good luck with spambayes, personally.
Here's one page I found with an explanation of how more standard
floating-point numbers are stored:
The most important difference between the IEEE formats and the IBM
format is that the radix is 16 instead of 2. I think there's also a
terminology error--the page says that the significand (mantissa) is
stored in "two's complement binary", but this isn't true. It's
unsigned. The sign is stored separately.
On Thu, Mar 25, 2004 at 02:17:16PM -0500, Ian Sparks wrote:
> Jeff,
> Thanks so much for this, it works fine on the rest of my data too. I'm going to have to create code that will do the reverse conversion too so I want to make sure I understand what happened here so I can apply the same concepts. I have some instructions on the reverse-process and I believe its mostly bit-shifting.
> This is somewhat "teach a man a fish" so I appreciate you're bearing with me on this basic CS stuff. I think I'm nearly there...
> def ibm360_decode(s):
> #Get element 0 of the unpack tuple
> #struct format : > = Big Endian, Q means unsigned long long
> #not sure why we need an unsigned long long ?
> #think its because we need that to do bitwise operations?
I used Q, because the data is 64 bits, and I wanted to get it in a
single Python variable. I could have used 'l, m = struct.unpack(">LL", s)'
to convert the string into a pair integers, but that would have been
less convenient because the mantissa would be split between l and m.
> exponent_with_sign = (l >> 56)
> #But we still have our sign-bit at the left side so we've got 8 digits
> #we need to cut off that extra digit
> #we can do this using the bitwise & to compare our number against 00111111
> # (0x7f - 64 == 127 - 64 == 63 = 111111)
> # e.g. for 7 with a positive sign we'd have
> # 7 = 10000111 (looks like 135)
> # 63 = 00111111 &
> # --------
> # 0000111 = 7
> #
> # But I'm clearly missing something here because 63 is only 6 binary-digits long and we need
> # to cut off the 8th...127 would seem to be more appropriate (but doesn't work). Umph ?
> exponent = exponent_with_sign & 0x7f - 64
(exponent_with_sign & 0x7f) extracts the 7 bits of the exponent.
However, the exponent is biased. Instead of being stored as a signed
number, the unsigned (exponent + bias) is stored. The subtraction
removes the bias.
exponent_with_sign_and_bias = (l>>56)
exponent_with_bias = exponent_with_sign_and_bias & 0x7f # 7 bits
exponent = exponent_with_bias - 64
> #Ok, moving on.
> #The mantissa is the last 56 bits (reading from left->right) so we need to cut off the first
> #8 bits which means ANDing (&) with a large number representing all 1's for 56 of those bits
> #and zero for the others e.g. 00000000111111...1111
> #The / by 16. ** 14 has me scratching my head, I admit. Our mantissa is firmly planted in the
> #right hand side of our binary number isn't it? But we're dividing by a massive number....?
> mantissa = (l & ((1L<<56) - 1)) / (16. ** 14)
The mantissa is an unsigned binary number. If you treat it as a float,
the radix point (decimal point) is actually to the left of the first bit
of mantissa. The division by a large power of 16 accomplishes this.
> #The instructions said the true exponent is 16 * the exponent value we extracted, again, not
> #sure why we're multiplying the mantissa up too?
> return [1,-1][sign] * (16**exponent) * mantissa
The exponent shifts the radix point again, to its final position.
When the exponent changes by 1, the radix point moves by one hex digit,
or 4 bits.
More information about the Python-list mailing list | {"url":"https://mail.python.org/pipermail/python-list/2004-March/264379.html","timestamp":"2014-04-18T06:15:03Z","content_type":null,"content_length":"7386","record_id":"<urn:uuid:d63b3aad-7bbf-4af0-96f7-d4edb2513154>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00415-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complete characterization and synthesis of the response function of elastodynamic networks
Seminar Room 1, Newton Institute
In order to characterize what exotic properties elastodynamic composite materials with high contrast constituents can have in the continuum it makes sense to first understand what behaviors discrete
networks of springs and masses can exhibit. The response function of a network of springs and masses, an elastodynamic network, is the matrix valued function W(omega), depending on the frequency
omega, mapping the displacements of some accessible or terminal nodes to the net forces at the terminals. We give necessary and sufficient conditions for a given function W(omega) to be the response
function of an elastodynamic network assuming there is no damping. In particular we construct an elastodynamic network that can mimic any achievable response in the frequency or time domain. It
builds upon work of Camar-Eddine and Seppecher, who characterized the possible response matrices of static three-dimensional spring networks. Authors: F. Guevara Vasquez (University of Utah), G.W.
Milton (University of Utah), D.Onofrei (University of Utah)
| {"url":"http://www.newton.ac.uk/programmes/AGA/seminars/2010072809001.html","timestamp":"2014-04-19T20:08:08Z","content_type":null,"content_length":"6569","record_id":"<urn:uuid:8aec7ead-ffdb-44e2-913d-0d08692e8dc4>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
Why God is Cruciform
...and how it causes the "cruciform"
shape of the Human Body and results
in the scientific proof of God (SPOG)
...........As an
example, many PhD's in Physics are familiar with General Relativity,
but few of them can tell you why real space is Riemannian (i.e. has
a quadratic, "square", metric), and of course, this is one of the most
fundamental questions underlying General Relativity in the first place.
In fact, it is the answer to "why there are no 3-legged animals".
As an example of what you have to know in order to understand a
simple fact such as why there are "no 3-legged animals" is the fact
Now, for the amateur, let me point out that the terms:
Euclidean Space
Cartesian Space
Pythagorean Space
Riemannian Space
ALL MEAN THE SAME THING vis a vis one fundamental fact:
They all refer to spaces that
have a "homogeneous quadratic
The simplest example of a "homogenous quadratic metric" is simple
Euclidean space:
dR^2 = dX^2 + dY^2 +dZ^2 (Pythagorean theorem)
Note that this is the equation for a SQUARE. The Pythagorean
Theorem is just the equation for the diagonal of a SQUARE or a CUBE.
This is why we say space is "square"... simply because this law is
known to hold true in real space.
All 4 of the above spaces are referred to as "Riemannian" for
this reason. Euclidean space is merely a special case of
a Riemannian space that has zero curvature (flat space).
However, there are non-Riemannian spaces. For instance if
you define the line element to be:
dR^3 = dX^3 + dY^3 + dZ^3
this is a NON-RIEMANNIAN METRIC (it is not a quadratic form).
Interestingly, HELMHOLTZ apparently was the first to show that:
See: http://www.bun.kyoto-u.ac.jp/~suchii/R&H.html
This was later proven rigorously by Weyl and others.
For instance Weyl in _Space, Time & Matter_ (1920)
discusses in the closing pages of Ch.II, Riemann's
famous remark that:
The metric of real space "might be a homogenous
function of the 4th order in the differentials,
or even a function built up in some other way,
and that it might not even depend rationally on
the differentials."
(Weyl, quoting Riemann, ibid ChII, pp 138-148)
Weyl goes on to demonstrate that the "rotation group" requires that
the metric be a QUADRATIC form, and has proven this for 3 dimensions
(the case under discussion here).
Finally of course, for the benefit of eager PhD's in physics lest
they make the same amateur error Dr. Xxxxxx Xxxxxx has recently made,
"curvature" is not the issue here, the existence of a "quadratic
metric" is the issue. Any space that DOES NOT HAVE a quadratic
metric is non-Riemannian, and of interest, is the fact that:
This of course, tells us WHY "Real Space" has to be Riemannian.
Incidentally, also included in the definition of Riemannian, is
the fact that the metric, even when there is curvature present,
MUST reduce to a pure Euclidean quadratic for small distances.
This is a fundamental theorem in general Relativity where it is
known as the "Equivalence Principle".
At any rate, getting back to the matter of "why there are no
3-legged animals", what we see is the following. First, real
space is Riemannian, which means that it is Euclidean to first
order. Real (3D) space, locally obeys the Pythagorean theorem
(and we have just finished explaining WHY):
dR^2 = dX^2 + dY^2 +dZ^2 (Pythagorean Theorem)
Now, it is well known that several simple "Coordinate systems" can
be constructed in such a space:
Cartesian coordinates
Polar coordinates
Cylindrical coordinates
Spherical coordinates
and all of them obey the "quadratic metrical law" of REAL SPACE.
Now here comes the interesting part. A MACHINE it turns out
(in it's simplest form) is nothing more than a "mechanical
coordinate system". Which means that all "simple machines" that
we see are generally one of the above 4 mechanical structures.
For instance we all remember being told about "simple machines"
in grade school:
Lever = 1-axis Cartesian coordinate machine
Pulley = Polar coordinate machine
Screw = Cylindrical coordinate machine
Wheel = Polar coordinate machine
We see that the "simple machines" are all simple mechanical
models of the fundamental (Euclidean) "coordinate systems".
The same holds for more complicated machines:
Gears = Polar coordinate machine
Jib boom = Spherical coordinate machine
Ball bearing = Spherical coordinate machine
Airplane = 3-axis Cartesian coordinate machine
Human Body = 3-axis Cartesian coordinate machine
OBVIOUSLY, the most common of all is the "Cartesian machine"
or SQUARE-MACHINE. Cartesian coordinates may also be called
"cruciform coordinates", in fact we see, that the Cross of
Christianity is nothing more than the "Cartesian coordinate
system" itself. And, as histories number one Psychology instruction,
a Jewish person has been traditionally nailed to it for 2,000 years
to dramatically demonstrate the Cartesian geometry of the human body,
brain, Psychology, and God to the ignorant and terrified Pagan
population. Let us hope, after the establishment of the scientific
explanation of God, this historical abomination can be removed
from the True Cross perhaps by Vatican decree.
For instance a t.v., typewriter, airplane, even a fish,
are all "Cartesian machines":
Obviously, the CARTESIAN MACHINE or SQUARE-MACHINE is the
simplest kind of machine to mechanically construct. It is
for this reason that the BODY PLAN of all living things, both
plants and animals* is Cartesian (Square). Life forms, that
is Plants and Animals are "Cartesian Machines", and this
notably includes HUMAN BEINGS.
Vertebrate body:
(Note orthogonality of Medial, Transverse and Horizontal
septums which intersect at right angles to form the 3
Cartesian body axes of the generalized vertebrate body plan)
(Note 3-Cartesian axes of the human body, Spinal,
Bilateral, Dorso-ventral)
Because of this, all of the higher Animal Phyla are BILATERAL.
(7 out of 9 Phyla) in fact only the two lowest animal Phyla
(jellyfish) which are after all "legless" animals to begin with,
are not Bilateral (they are radial). Any animal with "legs" is
Cartesian and therefore Bilateral, and the "legs" will appear in
Bilateral pairs. Starfish actually have a bilateral larval form.
Therefore, there "is no such thing as a 3-legged animal". The
"Cartesian Body Plan" cannot produce such a thing, and furthermore,
it defies the "quadratic geometry" of the space which produced the
"Cartesian Animal' in the first place. In a bilateral or "cartesian"
machine standing in a plane, 4 is the minimum number of legs that
will produce stability. 3 is not possible because bilateral
symmetry (cartesian symmetry) is imposed.
Now, beyond all this, what we see is that the "geometric properties
of space itself" are the CAUSAL FORCE that determines the geometrical
shape of the human body. Because of this, it turns out that the
"geometrical shape of the brain" is also determined and is found to
be "3-axis Cartesian" in SHAPE (notice I said SHAPE, not volume).
The Brain actually has 3-Axes of mechanical symmetry, just like
the Body:
Human Brain:
(Note the mirror symmetric motor-sensory map of
the body in the brain in the above two URL's)
The structure of the brain is "3-axis Cartesian"
because of this, and produces a 3-axis Cartesian
structure in Psychometry eigenvector space)
Ultimately, this leads to a 3-Axis Cartesian structure in Psychometry
(eigenvector space), see:
And therefore, we see that the whole mathematical
geometry of PSYCHOLOGY is caused by the mathematical geometry of
REAL SPACE itself.
Finally, in one of the most stunning developments of modern
science, Hammond (1994, 1997) has discovered that because REAL
SPACE causes the structure of PSYCHOLOGY SPACE, that there is
a "curvature" in psychometry space which is caused by the
"curvature" of REAL SPACE. And to sum it all up, since this
curvature in psychometry Space is easily and IMMEDIATELY
identified as "God", we see that "Gravity is the cause of God".
IOW, the scientific proof of God has been discovered.
Now as you can see, God is not about to be readily understood by a
layman. A layman cannot even figure out why there are no "3-legged"
animals! If Dr. Xxxxxx Xxxxxx is any example, a run of the mill PhD
in Physics can not even understand it.
Finally, lest anyone get the idea that this discovery only points to
some exotic relativistic quantity that cannot really be demonstrated
to be "God", let me point out that Gravitational Curvature is only
the "ultimate" explanation of God, in fact, the Secular Trend in brain
growth turns out to be the direct-biological cause of God. In other
words, as Sir Roger Penrose pointed out in _Shadows of the Mind_
(1994), Gravitational Curvature on the quantum level ('brain gravity')
mediates brain growth itself and mediates both the Secular Trend and
the Flynn Effect. Brain growth (percentagewise) can be easily seen
to provide a compelling and elementary explanation of the history
of "God" and an explanation of the Bible, miracles, revelation,
Creation, Heaven, Eternal Life, etc. etc. This understanding is well
within the grasp of scientific laymen, even if they can't understand how
"Gravity" causes it.
So, the message remains, for any one who is actually seriously
interested in whether there is a God or not, that- The scientific
proof that there is an ACTUAL REAL GOD has been discovered, found to
be an axiomatic law of Physics, and has been experimentally confirmed.
*This statement refers to "multicellular"
plants and animals, not bacteria, viruses, etc.
Be sure to visit my website below, and please ask your
news service provider to add alt.sci.proof-of-god
George Hammond, M.S. Physics
Email: ghammond@mediaone.net
Website: http://people.ne.mediaone.net/ghammond/index.html
This archive was generated by hypermail 2b29 : Mon Aug 27 2001 - 16:28:09 EDT | {"url":"http://www2.asa3.org/archive/asa/200108/0448.html","timestamp":"2014-04-16T22:10:57Z","content_type":null,"content_length":"17644","record_id":"<urn:uuid:057eb6f1-e07c-4b87-9b3e-23a0eff267e2>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
Students who need mechanics help have certainly come to the right place. We’re ready to provide the mechanics help that you need at all hours of the day. We can offer quantum mechanics help or
perhaps something a bit less obscure. Some instructors place a great deal of emphasis on subatomic particles, so this might feature heavily in mechanics help as well.
Fluid Mechanics Help
Some students have to request fluid mechanics help when they don’t understand free space equations. There’s a great deal to be confused by when a fluid is not under the normal stresses of gravity.
Gravity helps to maintain the structure of so many things at sea level. Crossing a line slightly over 60 miles above sea level takes a person outside of the Earth’s normal atmosphere, which causes a
measurable drop in pressure.
That really changes the way that fluid mechanics homework solutions would need to be calculated. In fact, fluid mechanics homework solutions built around numbers at that altitude would look
quite alien to most garden-variety mathematicians.
Quantum Mechanics Help
Since this second class of problems goes against what's taught in volumes on the usual classical mechanics solutions, these analytical mechanics solutions might be a bit different. One scientist joked
that if the sort of forces at work in these kinds of analytical mechanics solutions took a day off, the universe would fall apart.
In a manner of speaking, that kind of comment about mechanics solutions is true. By looking at a mechanics solution, one can actually view a mathematical picture of how our universe works. Fluid
mechanics homework might not be interesting to most people, but that’s because they’ve lost their sense of wonder. While that might paint an over romanticized view of fluid mechanics help, these
scientific theories are pretty amazing.
PhysicsHomeworkHelp.org for Mechanics
Nevertheless, writing out classical mechanics solutions is never fun. If you need to order some mechanics homework, you should feel free to get in touch with us. Mechanics answers shouldn’t be hard
to find. That’s why our fluid mechanics homework service exists. We can write out your mechanics homework in no time. | {"url":"http://www.physicshomeworkhelp.org/our-physics-homework-solver/mechanics-help/","timestamp":"2014-04-24T13:43:31Z","content_type":null,"content_length":"35991","record_id":"<urn:uuid:ea9287b7-4ab4-4117-a917-fcf4f45fc462>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00417-ip-10-147-4-33.ec2.internal.warc.gz"} |
Analog Integrated Circuit Design 2nd Edition Chapter 5 Solutions | Chegg.com
Here we have to determine the resulting variations in the closed-loop gain if the open-loop gain varies by
Here the value of the open-loop gain A is 30
Normal gain
Feedback factor is
Substitute 30 for A in equation (1) to obtain | {"url":"http://www.chegg.com/homework-help/analog-integrated-circuit-design-2nd-edition-chapter-5-solutions-9781118213735","timestamp":"2014-04-24T23:14:52Z","content_type":null,"content_length":"31664","record_id":"<urn:uuid:76ab8969-d368-4504-9e36-d0828fa60968>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00638-ip-10-147-4-33.ec2.internal.warc.gz"} |
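The intermediate equations did not survive in this fragment, but the standard negative-feedback relations presumably being applied are the following, where A_f denotes the closed-loop gain and β the feedback factor; both symbols are assumed names here, not taken from the original:

\[ A_f = \frac{A}{1 + A\beta}, \qquad \frac{dA_f}{A_f} = \frac{1}{1 + A\beta}\cdot\frac{dA}{A} \]

So with A = 30 and a given β, any fractional variation in the open-loop gain A is attenuated by the desensitivity factor 1 + Aβ in the closed-loop gain.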
Preprints (rote Reihe) des Fachbereich Mathematik
62 search hits
A family of Cohen-Macaulay Modules over Singularities of Type X^t+Y^3 (1997)
Gerhard Pfister Dorin Popescu
An economic approach to discretization of nonstationary iterated Tikhonov method (2002)
Sergei G. Solodky
An adaptive discretization scheme of ill-posed problems is used for nonstationary iterated Tikhonov regularization. It is shown that for some classes of operator equations of the first kind the
proposed algorithm is more efficient compared with standard methods.
An existence theorem for the unmodified Vlasov equation (1979)
Reinhard Illner Helmut Neunzert
An introduction to the nonlinear Boltzmann-Vlasov equation (1981)
Helmut Neunzert
Applications of Number Theory to Ovoids and Translation Planes (1999)
Ulrich Dempwolff Andreas Guthmann
In this paper we show that for each prime p ≥ 7 there exists a translation plane of order p^2 of Mason-Ostrom type. These planes occur as 6-dimensional ovoids being projections of the 8-dimensional
binary ovoids of Conway, Kleidman and Wilson. In order to verify the existence of such projections we prove certain properties of two particular quadratic forms using classical methods from
number theory.
Asymptotic Expansions for Dirichlet Series Associated to Cusp Forms (1998)
Andreas Guthmann
We prove an asymptotic expansion of Riemann-Siegel type for Dirichlet series associated to cusp forms. Its derivation starts from a new integral formula for the Dirichlet series and uses sharp
asymptotic expansions for partial sums of the Fourier series of the cusp form.
Average densities and linear rectifiability of measures (1999)
Peter Mörters
We show that a measure in a Euclidean space is linearly rectifiable if and only if the lower 1-density is positive and finite and agrees with the lower average 1-density almost everywhere.
Brakhage's implicit iteration method and Information Complexity of equations with operators having closed range (1999)
Sergei V. Pereverzev Eberhard Schock
An a posteriori stopping rule connected with monitoring the norm of the second residual is introduced for Brakhage's implicit nonstationary iteration method, applied to ill-posed problems involving
linear operators with closed range. It is also shown that for some classes of equations with such operators the algorithm, consisting of a combination of Brakhage's method with some new discretization
scheme, is order optimal in the sense of Information Complexity.
Calorische Restriktion und Lebenserwartung - Ein mathematisches Modell [Caloric Restriction and Life Expectancy - A Mathematical Model] (2004)
Joachim Türk
Caloric Restriction (CR) is the only intervention proven to retard aging and extend maximum lifespan in mammals. A possible mechanism for the beneficial effects of CR is that the mild
metabolic stress associated with CR induces cells to express stress proteins that increase their resistance to disease processes. In this article we therefore model the retardation of aging by
dietary restriction within a mathematical framework. The resulting model comprises food intake, stress proteins, body growth and survival. We successfully applied our model to growth and survival
data of mice exposed to different food levels.
Clones preserving a quasi-order (1999)
Andrei A. Krokhin Dietmar Schweigert
It is proved that if a finite non-trivial quasi-order is not a linear order then there exist continuum many clones, which consist of functions preserving the quasi-order and contain all unary
functions with this property. It is shown that, for a linear order on a three-element set, there are only 7 such clones.
Buena Park Geometry Tutor
Find a Buena Park Geometry Tutor
...Most of my teaching assignments were at the junior or senior high school level. I have been a Science Fair Mentor, Coordinator, and District Judge. I love to learn and to share my love of learning.
19 Subjects: including geometry, chemistry, reading, English
...I realize that half the battle is figuring out how to approach the test with confidence, and I can get you there, too. I studied for and passed the CBEST test myself. For the math portion: I
understand the importance of learning to estimate.
43 Subjects: including geometry, chemistry, reading, English
...My goal is for my students to see the purpose in what they are learning, and to develop an intellectual curiosity for the subject matter, whether it be English or math. The key to success in
life isn't just hard work--it's having a genuine interest or passion for learning! I've been a tutor for three years at the high school level, working with English language learners domestic and
16 Subjects: including geometry, reading, English, writing
...Additionally, I am a trained and Federally authorized tutor through a subsidiary of the No Child Left Behind program, using technology to assist special needs and ELA students. I'm very
flexible and understanding with time. I do have a 24-hour cancellation policy, but I am more than willing to offer makeup lessons!
60 Subjects: including geometry, chemistry, Spanish, reading
...I entered as a freshman and tutored that spring semester, teaching the senior Med students physics for their MCAT exams. I learned so much about how to teach not only for students below me but
also above me. It was a unique experience that I learned how much I love teaching.
11 Subjects: including geometry, chemistry, calculus, physics | {"url":"http://www.purplemath.com/Buena_Park_Geometry_tutors.php","timestamp":"2014-04-18T09:05:10Z","content_type":null,"content_length":"23992","record_id":"<urn:uuid:d58e5712-98e4-453e-a098-942f6d22d7f7>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00097-ip-10-147-4-33.ec2.internal.warc.gz"} |
Honors Geometry Final Formulas
31 terms · The formulas that were on our final review sheet
Oxford Workshop in Quantum Mathematics and Computation
Posted by Simon Willerton
Guest post by Bruce Bartlett
The newly-created Centre for Quantum Mathematics and Computation (QMAC) at the University of Oxford is holding an inaugural research workshop from October 1-4, 2013, and you are invited to attend.
The workshop will have plenary talks by Steve Awodey, John Baez, Alexander Beilinson, Lucien Hardy, Martin Hyland, Dana Scott, Vladimir Voevodsky, and Anton Zeilinger, along with contributed talks
and a problem session.
The workshop is one of four Clay Research Workshops occurring in Oxford that week (including one on Number Theory and Physics, and one on Computational Intractability) along with the Clay Research
Conference and the conference celebrating the opening of the new Oxford Mathematics building.
A link for registration is available at the QMAC Workshop webpage. There are also webpages on the Clay Research Conference and the New Mathematics Building Opening Conference.
The workshop is funded by the Clay Mathematics Institute and the Engineering and Physical Sciences Research Council.
Inquiries regarding the workshop may be directed to Chris Douglas at cdouglas@maths.ox.ac.uk.
Posted at August 15, 2013 1:02 PM UTC
Re: Oxford Workshop in Quantum Mathematics and Computation
Posted by: Tom Leinster on August 15, 2013 2:34 PM | Permalink | Reply to this
Re: Oxford Workshop in Quantum Mathematics and Computation
It is perhaps worth mentioning that Bruce will be joining the Centre for Quantum Mathematics and Computation in Oxford later in the year. He’ll be there for two years.
Posted by: Simon Willerton on August 15, 2013 3:48 PM | Permalink | Reply to this
Re: Oxford Workshop in Quantum Mathematics and Computation
I’ll be giving a shorter, more intense version of my talks on spans and the categorified Heisenberg algebra. I’m hoping that by then I’ll have worked out a nice example of how this categorification
shows up in physics.
I guess Jamie Vicary will also be there. Who else among us will attend?
Posted by: John Baez on August 21, 2013 8:10 AM | Permalink | Reply to this
Re: Oxford Workshop in Quantum Mathematics and Computation
I am planning to be there.
I am excited to see the connection between quantum theory and type theory getting more attention!
Posted by: Bas Spitters on August 21, 2013 10:06 AM | Permalink | Reply to this
Re: Oxford Workshop in Quantum Mathematics and Computation
Posted by: Bruce Bartlett on August 22, 2013 4:08 PM | Permalink | Reply to this
Re: Oxford Workshop in Quantum Mathematics and Computation
Posted by: Marni on September 2, 2013 11:15 PM | Permalink | Reply to this
Re: Oxford Workshop in Quantum Mathematics and Computation
Just in case there’s any confusion, only the speakers are ‘invited’. Everyone else is welcome as long as they register (which is free)!
Posted by: Jamie Vicary on September 3, 2013 1:38 AM | Permalink | Reply to this
Re: Oxford Workshop in Quantum Mathematics and Computation
We also have some limited funding available, so please send an email and request this if it might be useful for you.
Posted by: Jamie Vicary on September 3, 2013 10:48 AM | Permalink | Reply to this | {"url":"http://golem.ph.utexas.edu/category/2013/08/oxford_workshop_in_quantum_mat.html","timestamp":"2014-04-19T06:52:39Z","content_type":null,"content_length":"20536","record_id":"<urn:uuid:a121a552-88bb-4156-befe-8a27212fa42c>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is it Like to Have an Understanding of Very Advanced Mathematics?
(above: Paul Dirac, who was totally rad and probably could’ve answered this question)
My friend Bill pointed out a terrific question recently asked on Quora: “what is it like to have an understanding of very advanced mathematics?” — something I’ve definitely wondered myself. I have a
pretty solid understanding of calculus and linear algebra, but the really advanced stuff just seems kind of like magic to me, and the people who understand it are sort of like superheroes.
There are about a dozen answers given to the Quora post, but the first one — an anonymous response — is just superb:
□ You can answer many seemingly difficult questions quickly. But you are not very impressed by what can look like magic, because you know the trick. The trick is that your brain can quickly
decide if a question is answerable by one of a few powerful general purpose "machines" (e.g., continuity arguments, the correspondences between geometric and algebraic objects, linear algebra,
ways to reduce the infinite to the finite through various forms of "compactness") combined with specific facts you have learned about your area. The number of fundamental ideas and techniques
that people use to solve problems is, perhaps surprisingly, pretty small — see http://www.tricki.org/tricki/map for a partial list, maintained by Tim Gowers.
□ You are often confident that something is true long before you have an airtight proof for it (this happens especially often in geometry). The main reason is that you have a large catalogue of
connections between concepts, and you can quickly intuit that if X were to be false, that would create tensions with other things you know to be true, so you are inclined to believe X is
probably true to maintain the harmony of the conceptual space. It’s not so much that you can “imagine” the situation perfectly, but you can quickly imagine many other things that are
logically connected to it.
□ You are comfortable with feeling like you have no deep understanding of the problem you are studying. Indeed, when you do have a deep understanding, you have solved the problem and it is time
to do something else. This makes the total time you spend in life reveling in your mastery of something quite brief. One of the main skills of research scientists of any type is knowing how
to work comfortably and productively in a state of confusion. More on this in the next few bullets.
You can read the whole thing here. It’s quite thorough and informative.
Thank you, anonymous mathematician(s), whoever you are!
[h/t Daniel Lemire via Bill Ward]
2 Comments
1. I like the 2nd answer, in particular it reflects exactly how one feels with a mastery of anything IMO. I feel this way with computers and particularly Linux-based IT systems, and I often feel
that way about cars (although with a healthy helping of #3…)
2. Paul Dirac would probably have mumbled the answer then rudely ignore your reply while wandering off.
Sorry, the comment form is closed at this time. | {"url":"http://www.adafruit.com/blog/2011/12/29/what-is-it-like-to-have-an-understanding-of-very-advanced-mathematics/","timestamp":"2014-04-17T09:48:11Z","content_type":null,"content_length":"35809","record_id":"<urn:uuid:0b860a4a-f399-49d2-b827-6e0107a9e67e>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 11
, 1997
"... In many applications, it is necessary to determine the similarity of two strings. A widely-used notion of string similarity is the edit distance: the minimum number of insertions, deletions, and
substitutions required to transform one string into the other. In this report, we provide a stochastic mo ..."
Cited by 193 (2 self)
Add to MetaCart
In many applications, it is necessary to determine the similarity of two strings. A widely-used notion of string similarity is the edit distance: the minimum number of insertions, deletions, and
substitutions required to transform one string into the other. In this report, we provide a stochastic model for string edit distance. Our stochastic model allows us to learn a string edit distance
function from a corpus of examples. We illustrate the utility of our approach by applying it to the difficult problem of learning the pronunciation of words in conversational speech. In this
application, we learn a string edit distance with nearly one fifth the error rate of the untrained Levenshtein distance. Our approach is applicable to any string classification problem that may be
solved using a similarity function against a database of labeled prototypes.
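For orientation (background, not a claim from the paper itself): the classical Levenshtein distance that this stochastic model generalizes is computed for strings a and b by the dynamic-programming recurrence

\[ d(i,j) = \min\bigl\{\, d(i-1,j)+1,\;\; d(i,j-1)+1,\;\; d(i-1,j-1) + [a_i \neq b_j] \,\bigr\}, \]

with base cases d(i,0)=i and d(0,j)=j; the learned version replaces these unit costs with edit probabilities estimated from data.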
- THEORETICAL COMPUTER SCIENCE , 2000
"... We describe the algorithmic and software design principles of an object-oriented library for weighted finite-state transducers. By taking advantage of the theory of rational power series, we
were able to achieve high degrees of generality, modularity and irredundancy, while attaining competitive eff ..."
Cited by 99 (23 self)
Add to MetaCart
We describe the algorithmic and software design principles of an object-oriented library for weighted finite-state transducers. By taking advantage of the theory of rational power series, we were
able to achieve high degrees of generality, modularity and irredundancy, while attaining competitive efficiency in demanding speech processing applications involving weighted automata of more than
10^7 states and transitions. Besides its mathematical foundation, the design also draws from important ideas in algorithm design and programming languages: dynamic programming and shortest-paths
algorithms over general semirings, object-oriented programming, lazy evaluation and memoization.
- LECTURE NOTES IN COMPUTER SCIENCE , 1998
"... ..."
- In Conference on Uncertainty in AI (UAI), 2005
"... The need to measure sequence similarity arises in information extraction, object identity, data mining, biological sequence analysis, and other domains. This paper presents discriminative
string-edit CRFs, a finite-state conditional random field model for edit sequences between strings. Conditional r ..."
Cited by 51 (7 self)
Add to MetaCart
The need to measure sequence similarity arises in information extraction, object identity, data mining, biological sequence analysis, and other domains. This paper presents discriminative string-edit
CRFs, a finitestate conditional random field model for edit sequences between strings. Conditional random fields have advantages over generative approaches to this problem, such as pair HMMs or the
work of Ristad and Yianilos, because as conditionally-trained methods, they enable the use of complex, arbitrary actions and features of the input strings. As in generative models, the training data
does not have to specify the edit sequences between the given string pairs. Unlike generative models, however, our model is trained on both positive and negative instances of string pairs. We present
positive experimental results on several data sets. 1
"... Many pattern recognition algorithms are based on the nearest neighbour search and use the well known edit distance, for which the primitive edit costs are usually fixed in advance. In this
article, we aim at learning an unbiased stochastic edit distance in the form of a finite-state transducer from ..."
Cited by 17 (6 self)
Add to MetaCart
Many pattern recognition algorithms are based on the nearest neighbour search and use the well known edit distance, for which the primitive edit costs are usually fixed in advance. In this article,
we aim at learning an unbiased stochastic edit distance in the form of a finite-state transducer from a corpus of (input,output) pairs of strings. Contrary to the other standard methods, which
generally use the Expectation Maximisation algorithm, our algorithm learns a transducer independently on the marginal probability distribution of the input strings. Such an unbiased way to proceed
requires optimising the parameters of a conditional transducer instead of a joint one. We apply our new model in the context of handwritten digit recognition. We show, carrying out a large series of
experiments, that it always outperforms the standard edit distance. Key words: Stochastic Edit Distance, Finite-State Transducers, Handwritten character recognition.
"... In this paper I present an algorithm for the unsupervised learning of morphology using stochastic finite state transducers, in particular Pair Hidden Markov Models. The task is viewed as an
alignment problem between two sets of words. A supervised model of morphology acquisition is converted to an u ..."
Cited by 10 (2 self)
Add to MetaCart
In this paper I present an algorithm for the unsupervised learning of morphology using stochastic finite state transducers, in particular Pair Hidden Markov Models. The task is viewed as an alignment
problem between two sets of words. A supervised model of morphology acquisition is converted to an unsupervised model by treating the alignment as a further hidden variable. The use of the
Expectation-Maximisation algorithm for this task is studied, which leads to calculations involving the permanent of a matrix of probabilities.
, 1997
"... Motivated by the goal of establishing stochastic and information theoretic foundations for the study of intelligence and synthesis of intelligent machines, this thesis probes several topics
relating to hidden state stochastic models. Finite Growth Models (FGM) are introduced. These are nonnegative f ..."
Cited by 3 (3 self)
Add to MetaCart
Motivated by the goal of establishing stochastic and information theoretic foundations for the study of intelligence and synthesis of intelligent machines, this thesis probes several topics relating
to hidden state stochastic models. Finite Growth Models (FGM) are introduced. These are nonnegative functionals that arise from parametrically-weighted directed acyclic graphs and a tuple observation
that affects these weights. Using FGMs the parameters of a highly general form of stochastic transducer can be learned from examples, and the particular case of stochastic string edit distance is
developed. Experiments are described that illustrate the application of learned string edit distance to the problem of recognizing a spoken word given a phonetic transcription of the acoustic signal.
With FGMs one may direct learning by criteria beyond simple maximum-likelihood. The MAP (maximum a posteriori estimate) and MDL (minimum description length) are discussed along with the application
to cau...
- International Joint Conference on Machine Learning (2005). Workshop: Grammatical Inference Applications: Successes and Future Challenges
"... We aim at learning an unbiased stochastic edit distance in the form of a finite-state transducer from a corpus of (input,output) pairs of strings. Contrary to the other standard methods, which
generally use the Expectation-Maximization algorithm, our algorithm learns a transducer independently on th ..."
Cited by 1 (0 self)
Add to MetaCart
We aim at learning an unbiased stochastic edit distance in the form of a finite-state transducer from a corpus of (input,output) pairs of strings. Contrary to the other standard methods, which
generally use the Expectation-Maximization algorithm, our algorithm learns a transducer independently on the marginal probability distribution of the input strings. Such an unbiased way to proceed
requires optimizing the parameters of a conditional transducer instead of a joint one. This transducer can be very useful in many domains of pattern recognition and machine learning, such as noise
management, or DNA alignment. Several experiments are carried out with our algorithm showing that it is able to correctly assess theoretical target distributions. 1
, 2001
"... We consider the problem of maximizing certain positive rational functions of a form that includes statistical constructs such as conditional mixture densities and conditional hidden Markov
models. The well-known Baum-Welch and expectation maximization (EM) algorithms do not apply to rational function ..."
Add to MetaCart
We consider the problem of maximizing certain positive rational functions of a form that includes statistical constructs such as conditional mixture densities and conditional hidden Markov models.
The well-known Baum-Welch and expectation maximization (EM) algorithms do not apply to rational functions and are therefore limited to the simpler maximum-likelihood form of such models. Our main
result is a general decomposition theorem that like Baum-Welch/EM, breaks up each iteration of the maximization task into independent subproblems that are more easily solved – but applies to rational
functions as well. It extends the central inequality of Baum-Welch/EM and associated high-level algorithms to the rational case, and reduces to the standard inequality and algorithms for simpler
problems. Keywords: Baum-Welch (forward backward algorithm), Expectation Maximization (EM), hidden Markov models (HMM), conditional mixture density estimation, discriminative training, Maximum Mutual
Information (MMI) Criterion. 1
"... Abstract. In order to achieve pattern recognition tasks, we aim at learning an unbiased stochastic edit distance, in the form of a finite-state transducer, from a corpus of (input,output) pairs
of strings. Contrary to the state of the art methods, we learn a transducer independently on the marginal ..."
Add to MetaCart
Abstract. In order to achieve pattern recognition tasks, we aim at learning an unbiased stochastic edit distance, in the form of a finite-state transducer, from a corpus of (input,output) pairs of
strings. Contrary to the state of the art methods, we learn a transducer independently on the marginal probability distribution of the input strings. Such an unbiased way to proceed requires to
optimize the parameters of a conditional transducer instead of a joint one. This transducer can be very useful in pattern recognition particularly in the presence of noisy data. Two types of
experiments are carried out in this article. The first one aims at showing that our algorithm is able to correctly assess simulated theoretical target distributions. The second one shows its
practical interest in a handwritten character recognition task, in comparison with a standard edit distance using a priori fixed edit costs. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=965182","timestamp":"2014-04-21T08:47:01Z","content_type":null,"content_length":"36566","record_id":"<urn:uuid:23fbc364-920a-429e-ab73-b0f69e7b2819>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00250-ip-10-147-4-33.ec2.internal.warc.gz"} |
Random sample from a continuous type dist
March 4th 2009, 04:27 PM #1
Junior Member
Nov 2008
Random sample from a continuous type dist
Let X1, X2, ..., Xn be a random sample from a continuous type distribution
a) find P(X1<=X2),P(X1<=X2,X1<=X3),...,P(X1<=Xi, i=2,3,...,n).
(The answer for this is 1/n but I am not sure of the reasoning behind that.
Can anybody explain this?)
b) Suppose the sampling continues until X1 is no longer the smallest observation, (i.e., Xj < X1 <= Xi, i=2,3,...,j-1). Let Y equal the number of trials until X1 is no longer the smallest
observation, (i.e., Y=j-1). Show that the distribution of Y is P(Y=y) = 1 / y(y+1), y=1,2,3,...
c) Compute the mean and variance of Y if they exist.
March 6th 2009, 10:25 PM #2
Since $1=P(X>Y)+P(X<Y)$.
I would think you just have to show that these probabilities are equal,
giving you .5.
As for (c), if P(Y=y) = 1 / y(y+1), y=1,2,3,..., then
$E(Y)=\sum_{y=1}^{\infty}{1\over y+1}=\sum_{n=2}^{\infty}{1\over n}=\infty$.
Thus the second moment and variance are also infinite.
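A sketch filling in the reasoning asked about in (a), and the step behind (b) (this is a summary, not a post from the original thread). Since $X_1,\dots,X_n$ are i.i.d. of continuous type, ties have probability zero and all $n!$ orderings are equally likely, so by symmetry
$P(X_1\le X_i,\ i=2,\dots,n) = \frac{1}{n}$.
For (b), note that $Y\ge y$ exactly when $X_1$ is the smallest of $X_1,\dots,X_y$, which has probability $1/y$; hence
$P(Y=y) = P(Y\ge y)-P(Y\ge y+1) = \frac{1}{y}-\frac{1}{y+1} = \frac{1}{y(y+1)}$.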
Boundary value problems associated with generalized Q-holomorphic functions
In this work, we discuss the Riemann-Hilbert problem and its adjoint homogeneous problem associated with generalized Q-holomorphic functions, and investigate the solvability of the Riemann-Hilbert problem.
generalized Beltrami systems; Q-holomorphic functions; Riemann-Hilbert problem
Douglis [1] and Bojarskiĭ [2] developed an analog of analytic functions for elliptic systems in the plane of the form
\[ w_{\bar z} = q\,w_z, \qquad (1) \]
where w is a vector and q is a quasi-diagonal matrix. Also, Bojarskiĭ assumed that all eigenvalues of q are less than 1. Such systems are natural ones to consider because they arise from the
reduction of general elliptic systems in the plane to a standard canonical form. Subsequently Douglis and Bojarskiĭ's theory has been used to study elliptic systems of the form
\[ w_{\bar z} = q\,w_z + a\,w + b\,\overline{w}, \]
and the solutions of such equations were called generalized (or pseudo) hyperanalytic functions. Work in this direction appears in [3-5]. These results extend the generalized (or ‘pseudo’) analytic
function theory of Vekua [6] and Bers [7]. Also, classical boundary value problems for analytic functions were extended to generalized hyperanalytic functions. A good survey of the methods
encountered in a hyperanalytic case may be found in [8,9], also see [10].
In [11], Hile noticed that what appears to be the essential property of elliptic systems in the plane for which one can obtain a useful extension of analytic function theory is the self-commuting
property of the variable matrix Q, which means
\[ Q(z_1)\,Q(z_2) = Q(z_2)\,Q(z_1) \]
for any two points z_1, z_2 in the domain of Q. Further, such a Q matrix cannot be brought into a quasi-diagonal form of Bojarskiĭ by a similarity transformation. So, Hile [11] attempted to extend the
results of Douglis and Bojarskiĭ to a wider class of systems of the same form as equation (1). If Q is self-commuting in the domain and if Q has no eigenvalues of magnitude 1 for each z in the domain, then Hile called
the system (1) the generalized Beltrami system and the solutions of such a system were called Q-holomorphic functions. Later in [12,13], using Vekua and Bers techniques, a function theory is given
for the equation
\[ w_{\bar z} = Q\,w_z + A\,w + B\,\overline{w}, \qquad (2) \]
where the unknown w is a complex matrix, Q is a self-commuting complex matrix subject to the conditions above, and the coefficients A and B commute with Q. Solutions of such an equation were called generalized
Q-holomorphic functions.
In this work, as in a complex case, following Vekua (see [[6], pp.228-236]), we investigate the necessary and sufficient condition of solvability of the Riemann-Hilbert problem for equation (2).
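For orientation, in the scalar case treated by Vekua [6] the Riemann-Hilbert problem asks for a solution $w$, continuous up to the boundary $\Gamma$, satisfying

\[ \operatorname{Re}\bigl[\overline{\lambda(t)}\,w(t)\bigr] = \gamma(t), \qquad t\in\Gamma, \]

with $\lambda$ and $\gamma$ given Hölder-continuous functions on $\Gamma$; problem (A) below is the matrix analogue of this condition for generalized Q-holomorphic solutions of (2).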
Solvability of Riemann-Hilbert problems
In a regular domain G with boundary Γ, we consider the problem of finding a solution of equation (2) in G whose boundary values satisfy a condition of Riemann-Hilbert type on Γ.
We refer to this problem as boundary value problem (A). Here the unknown is a complex matrix-valued function, and the boundary coefficient is a Hölder-continuous function which is a self-commuting matrix
commuting with Q.
It is assumed, moreover, that Q commutes with the data of the problem and the data commute with Q. In respect of the data of problem (A), we also impose Hölder regularity on A, B and the boundary data. If the right-hand sides vanish, we have the homogeneous problem.
problem (
We refer to the adjoint, homogeneous problem (A) as (
where ϕ is a generating solution for the generalized Beltrami system ([[11], p.109]), , and ds is the arc length differential. From the Green identity for Q-holomorphic functions (see [[11],
p.113]), we have
where is commuting with Q. For and , this becomes
Since satisfies the boundary condition
we have
where ϰ is a real matrix commuting with Q.
The solutions to problem (ϰ as
see ([[14], p.543]). In (9), P is a constant matrix defined by
called P-value for the generalized Beltrami system [11]. Since ϰ is a real matrix commuting with Q, inserting the expression (9) into the boundary condition (7), we have
The integral in (10) is to be taken in the Cauchy principal value sense. If we denote this equation in operator form and denote its adjoint accordingly, then it may be easily demonstrated that the index of
(10) is k − k′, where k and k′ are the dimensions of the null spaces of the operator and of its adjoint, respectively. Taking a complete system of solutions of (10) and putting each of these into (9), we obtain the solutions of problem (A′); each of these takes on
the boundary values of a Q-holomorphic function on each component of the boundary contours and is, moreover, Q-holomorphic in the domain bounded by the closed contour. Consider next those solutions of
equation (10) to which linearly independent solutions (see [15]) of problem (A′) correspond; these satisfy the boundary condition of the corresponding form,
the functions involved being Q-holomorphic outside the domain. Hence the Q-holomorphic functions satisfy the homogeneous boundary conditions.
In the complex case, Vekua refers to problems of this type as being concomitant to (A′). Let the number of linearly independent solutions of this problem be given; obviously it does not exceed k.
Let us now return to the discussion of problem (A), keeping the above assumptions in force in what follows. The solutions of this problem may be expressed in terms of the generalized Cauchy kernel as follows:
(see [[14], p.543]). From the Plemelj formulas, it is seen that the density μ must satisfy the integral equation
(13) on Γ, where Φ is Q-holomorphic outside the domain. Denote the numbers of linearly independent solutions of (13) and of its adjoint by ℓ and ℓ′ respectively. In order that (13) is solvable, it is necessary and
sufficient that the nonhomogeneous data satisfy the auxiliary conditions
(15), where the test densities are solutions to integral equation (10). These solutions may be broken up into two groups, corresponding to the boundary problems they solve as given by (14).
Consequently, the conditions (15) are seen to hold if (6) holds. From the above discussion, one obtains a Fredholm-type theorem for problem (A).
Theorem 1. Non-homogeneous boundary problem (A) is solvable if and only if the condition (6) is satisfied for an arbitrary solution of the adjoint homogeneous boundary problem (A′).
This theorem immediately implies the following.
Theorem 2. Non-homogeneous boundary problem (A) is solvable for an arbitrary right-hand side if and only if the adjoint homogeneous problem (A′) has no solution.
Solving Linear Systems
Many calculations involve solving systems of linear equations. In many cases, you will find it convenient to write down the equations explicitly, and then solve them using Solve.
In some cases, however, you may prefer to convert the system of linear equations into a matrix equation, and then apply matrix manipulation operations to solve it. This approach is often useful when
the system of equations arises as part of a general algorithm, and you do not know in advance how many variables will be involved.
A system of linear equations can be stated in matrix form as m.x==b, where x is the vector of variables.
Note that if your system of equations is sparse, so that most of the entries in the matrix are zero, then it is best to represent the matrix as a SparseArray object. As discussed in "Sparse Arrays:
Linear Algebra", you can convert from symbolic equations to SparseArray objects using CoefficientArrays. All the functions described here work on SparseArray objects as well as ordinary matrices.
Solving and analyzing linear systems.
If you have a square matrix m with a nonzero determinant, then you can always find a unique solution to the matrix equation m.x==b for any b. If, however, the matrix m has determinant zero, then there may be
either no vector, or an infinite number of vectors x which satisfy m.x==b for a particular b. This occurs when the linear equations embodied in m are not independent.
When m has determinant zero, it is nevertheless always possible to find nonzero vectors x that satisfy m.x==0. The set of vectors x satisfying this equation form the null space or kernel of the matrix m. Any of
these vectors can be expressed as a linear combination of a particular set of basis vectors, which can be obtained using NullSpace[m].
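As a minimal illustration (the matrix below is chosen purely for the example and does not come from the text):

m = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};  (* singular: row3 == 2 row2 - row1 *)
Det[m]                       (* 0 *)
MatrixRank[m]                (* 2 *)
NullSpace[m]                 (* {{1, -2, 1}} spans the kernel *)
LinearSolve[m, {6, 15, 24}]  (* one particular solution; {0, 3, 0} works, and adding any multiple of {1, -2, 1} gives another *)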
NullSpace and MatrixRank have to determine whether particular combinations of matrix elements are zero. For approximate numerical matrices, the Tolerance option can be used to specify how close to
zero is considered good enough. For exact symbolic matrices, you may sometimes need to specify something like ZeroTest->(FullSimplify[#]==0&) to force more to be done to test whether symbolic
expressions are zero.
An important feature of functions like LinearSolve and NullSpace is that they work with rectangular, as well as square, matrices.
When you represent a system of linear equations by a matrix equation of the form m.x==b, the number of columns in m gives the number of variables, and the number of rows gives the number of equations. There
are a number of cases.
Classes of linear systems represented by rectangular matrices.
The number of independent equations is the rank of the matrix, MatrixRank[m]. The number of free directions in which solutions can vary is Length[NullSpace[m]]. Note that the sum of these quantities is always equal to the
number of columns in m; this is the rank-nullity theorem.
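For instance, with a 3×4 matrix (again invented only for illustration):

m = {{1, 2, 3, 4}, {2, 4, 6, 8}, {1, 1, 1, 1}};  (* 3 equations, 4 variables; row2 == 2 row1 *)
MatrixRank[m]                          (* 2 *)
Length[NullSpace[m]]                   (* 2 *)
MatrixRank[m] + Length[NullSpace[m]]   (* 4, the number of columns *)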
Generating LinearSolveFunction objects.
In some applications, you will want to solve equations of the form m.x==b many times with the same m, but different b. You can do this efficiently in Mathematica by using LinearSolve[m] to create a single
LinearSolveFunction that you can apply to as many vectors b as you want.
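A short sketch of the pattern (matrix and vectors invented for the example):

m = {{2, 0}, {1, 3}};
f = LinearSolve[m];  (* a LinearSolveFunction; the factorization of m is computed once and reused *)
f[{2, 4}]            (* {1, 1} *)
f[{0, 6}]            (* {0, 2} *)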
homotopy groups for good rings
I think this question should already be abundant in the literature but the only place I find is from this article:
which seems to be elaborating this definition:
but unfortunately, as I do not understand much algebraic geometry, I do not know how to make use of this definition.
I am thinking about extending classical Bott periodicity to arbitrary rings that are good enough (UFDs, for example). By extending I mean that I want to measure infinite matrices with entries in a ring
$R$ and determinant 1 by the "one point compactification" of $R^{n}$, via introducing some topology. Hence in the classical case we can measure $U$ by $S^{n}$. I want to ask:
1): Is this possible? (I thought about it on a bus trip but do not know how to establish universal bundles if the base ring is discrete, so I am stuck here.)
2): Are there any previous such constructions? What are their properties?
I feel there must be something well-known because the Bott periodicity theorem is a very old theorem. I do not know whether this is more appropriate for MO or for Stack Exchange, but I decided to put it in
kt.k-theory-homology at.algebraic-topology
I'm really curious as to what the answer is. Did you end up thinking about this some more? Also, what do you mean when you say you want to “measure infinite matrices?” Aren’t you just looking for a
result saying something like $\pi_k(U(R)) \cong \pi_{k+2}(U(R))$ for $U_n(R)$ being $n\times n$ unitary matrices over $R$? – David White Apr 14 '11 at 17:33
I’m also curious: how do the links at the top have anything to do with the question on Bott Periodicity? It is clear how to relate Bott Periodicity to etale maps and the algebraic $\pi_1$? – David
White Apr 14 '11 at 17:50
I did thought about the situation more and I still feel it is hopeless as I do not know much about general linear groups. I think you know what Bott periodicity is, I mean an analog by using sphere
like objects to "measure" the topological quality of such matrices. – Kerry Apr 22 '11 at 6:13
The links sited above introduced fundamental groups to schemes, but I do not understand much algebraic geometry. So I got stuck. – Kerry Apr 22 '11 at 6:15
add comment
1 Answer
active oldest votes
Perhaps an easier approach if you wanted to avoid going into the world of schemes would be to define orthogonal, symplectic, and unitary matrices over $R$ in analogy with the case when $R=\
mathbb{C}$. We should be able to at least find conditions on $R$ for which this could work (defining your ``good ring’’ of the title). For example, to define determinant and $GL(R)$, it's
sufficient for $R$ to be commutative. To get the infinite unitary group $U(R)$ you need $R$ to have an involution (because of conjugate transpose). A good place to look for more info on this
would be The Book of Involutions: http://www.amazon.com/Book-Involutions-Colloquium-Publications-Mathematical/dp/0821809040
We know that when quaternion algebras have involutions. This theory is developed in depth in the book. The book also looks at more general central simple algebras and includes a great
example for endomorphism algebras on page 23. They prove that a central simple algebra $A$ over field $F$ has an involution of the first kind (fixing the center elementwise) iff $A\otimes_F
up vote A$ splits. If $B$ is a central simple algebra over $K$, a separable quadratic extension of $F$, then $B$ has an involution of the second kind (acts as an order 2 automorphism on the center)
2 down which fixes $F$ pointwise iff $N_{K/F}(B)$ splits.
This book also discusses the connection between Clifford algebras and orthogonal involutions, between the discriminant algebra and unitary involutions, and between Tits algebras and
irreducible representations of the classical groups. So you might be able to use these connections to at least get some hands-on examples to see if Bott Periodicity extends (although you of
course already have it for Clifford algebras).
Anyway, I’d also be very curious to see if the other approach (via schemes and algebraic homotopy groups) would work. But it seems much harder to me.
Thank you for this comment. It is out of my expectation and very helpful. – Kerry Apr 22 '11 at 6:16
add comment
Not the answer you're looking for? Browse other questions tagged kt.k-theory-homology at.algebraic-topology or ask your own question. | {"url":"http://mathoverflow.net/questions/57765/homotopy-groups-for-good-rings/61731","timestamp":"2014-04-20T11:33:20Z","content_type":null,"content_length":"58696","record_id":"<urn:uuid:ee1365ac-e71f-40c2-acb9-d385c57a61a7>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00125-ip-10-147-4-33.ec2.internal.warc.gz"} |
Could someone please help me with sqrt(3) tan2x=0
what do you want to do with it?
solve for x
\[\sqrt 3 \tan (2x) = 0\] first step divide both sides by \(\sqrt{3}\) what do you get?
x = 0 or x = 180
so tan2x=-1/sqrt(3)
nope. \[\tan (2x) =\frac {0}{\sqrt3}\] now simplify this
oh dear! sorry i didnt realise! The question is sqrt(3) tan2x+1=0
haha..then you're right \[\tan(2x) =-\frac{1}{\sqrt 3}\]
now second step. take the arctangent of both sides
ok so pi/6?
remember that negative sign...and it's not yet over
how does the negative affect it?
well it doesnt affect that much
how do you mean?
oh ok so when you say take the arctangent, where do you go from there?
so \[2x = -\frac{\pi}{6}\] now you divide both sides by 2
ok so x = -pi/12
is this one of the answers?
what do you mean?
well there are 4 values for x right?
because there was a 2 between the tan and the x, which compresses the tan graph
i cant remember that part...is it adding \(\pi\) or \(2\pi\)
what do you mean 2 between tan and x..tan (2x)?
yeah, thats what i meant tan2x
i think its add pi
im not sure about this part so im gonna ask @amistre64 for some assistance. please come sir.
Thank you for your help so far!!
what does amistre64 say?
i dont think he's coming yet...i tried tagging him to come here...
let me call a different one..he might be afk
Thank you :)
@apoorvk pls help
just add pi to the solution and it wont matter.
so 11pi/12?
yep i do think its another answer.
thanks @vamgadu you saved me there heh
yes thank you! :)
its my pleasure @lgbasallote
however, in the back of my text book the answers are 5pi/12 11pi/12 17pi/12 and 23pi/12
and im wondering how my magic maths textbook got there
so we are going good with the 11pi/12 :D
and if pi is added again, we get 23pi/12 which is also good!
you're doing right. so is there a further problem?
well there are sposed to be 4 answers
then you add pi again
i have 2 :)
subtract a pi, if you want the interval [-2pi, 2pi]
well i have 11pi/12 and 23pi/12
-pi/12 and -13pi/12
so 11pi/12-pi?
-pi/12 - pi
11pi/12 - pi is -pi/12 (bringing you back to the orig thingy)
thats not one of the answers :?
You started with 2x = -pi/6. If we call 2x theta, we have 2 solutions: [drawing omitted] and x is 5pi/12 and 11pi/12. Now add pi to each value to get the other 2 solutions, or, noting that 2x+pi divided by 2 is x+pi/2, add multiples of pi/2 to get all four solutions starting with -pi/12 (which is 11pi/12 in the interval [0, 2pi]).
can this be done on a tan graph?
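For the record, the algebra the thread converged on, consolidated in one place (a summary, not a message from the original discussion):

\[\sqrt{3}\tan(2x)+1=0 \;\Rightarrow\; \tan(2x)=-\frac{1}{\sqrt{3}} \;\Rightarrow\; 2x=-\frac{\pi}{6}+k\pi \;\Rightarrow\; x=-\frac{\pi}{12}+k\frac{\pi}{2}\]

Taking k = 1, 2, 3, 4 gives the four solutions in [0, 2pi): 5pi/12, 11pi/12, 17pi/12 and 23pi/12, matching the textbook. And it can indeed be read off a tan graph: tan(2x) has period pi/2, so the horizontal line y = -1/sqrt(3) crosses it once every pi/2, which is exactly why the step between consecutive answers is pi/2 rather than pi.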
The Nature of Space and Time: An Evening of Speculation
What is space? What is time? And how do we fit into it all? These are questions not only for physicists and mathematicians, but also for philosophers and theologians. The John Templeton Foundation
has gathered together just such an eclectic mix of people for a public discussion entitled The Nature of Space and Time: An Evening of Speculation, to be held at Emmanuel College in Cambridge on the
7th of September 2006. The discussion panel for the evening comprises some very eminent names indeed: mathematician and Fields medallist Professor Alain Connes, Rev. Dr. Michael Heller from the
Vatican Observatory, mathematicians Professor Shahn Majid and Sir Roger Penrose and theologian and physicist Rev. Dr. John Polkinghorne.
While we all have an intuitive understanding of space and time that is sufficient to get us through everyday life, when it comes to deeper questions about them one might expect to turn to physics.
The two current fundamental theories of physics are general relativity and quantum mechanics. Whilst general relativity is extremely accurate for describing the universe on the macroscopic level and
quantum mechanics similarly on the sub-atomic level, the two theories have never been united. Complications arise when one considers situations simultaneously involving both large mass scales and
very small distance scales, currently described by general relativity and quantum mechanics respectively. In order to solve these problems, physicists have been searching for a theory combining the
two — called quantum gravity — for several decades. Such a theory would not only give additional insight into how the universe began in the Big Bang, but also predict its ultimate fate.
There have been many efforts to formulate a theory of quantum gravity. Some of these take slightly modified but largely intact versions of general relativity and quantum mechanics and attempt to
bring them together, whilst others, like string theory and the more modern M-theory, take different starting points, such as replacing fundamental particles by short bits of string. Although some
progress seems to have been made, a complete unifying theory has so far proved elusive.
But whether they are complete or not, what do the existing theories tell us about the nature of space and time? Not much, according to some scientists. A particular failing of many current theories
is that they make too many assumptions about the nature of space-time in their formulation: rather than explaining our intuitions of what space and time are, they take these intuitions as a starting
point. Classical physics, for example, assumes both space and time to be continuous. In string theory space-time is assumed as an initial ingredient to be the multidimensional analogue of a smooth
and continuous surface. In both cases, these properties of space and times are assumptions that go into the formulation of the theory, rather than a result of it. "The fundamental issue is that if
space-time is to emerge from quantum gravity then it makes no sense to use our macroscopic intuitions about continuous space-time as a starting point," says panel member Shahn Majid, who is also
organising a related research programme on noncommutative geometry at the Isaac Newton Institute in Cambridge.
The continuity of space-time is not the only intuitive concept that has been called into question. Even the seemingly simple notion of a point in space-time is far from straight-forward. In our
everyday understanding of space-time, a point particle in it should be given by a number of coordinates; some of them describing its position in space and one describing its position in time. But one
of the most fundamental assertions of quantum mechanics, Heisenberg's uncertainty principle, says that, at a tiny scale, it is impossible to measure both the spatial position of a particle and its
momentum with arbitrary accuracy. If time is to be part of the quantum theory then one can expect the same problem as for momentum: the more accurate your measurement of spatial information, the less
accurate your measurement of information concerning time and vice versa. Thus, the idea of a point in space-time — an entity specified by coordinates — is, in a sense, meaningless.
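In symbols (the standard textbook statement, independent of any particular quantum gravity proposal), the position and momentum observables fail to commute:

\[ \hat{x}\hat{p}-\hat{p}\hat{x}=i\hbar, \qquad \Delta x\,\Delta p \ge \frac{\hbar}{2}. \]

It is this failure of commutativity, extended to the coordinates of space-time itself, that noncommutative geometry takes as its starting point.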
So it can be argued that a unified theory of quantum gravity needs to rid itself of many of the classical notions of space-time from the outset. To do this, it needs a far more general mathematical
theory of space-time and even of space — in other words of geometry — than the one we learn about at school. A prime candidate for such a theory, according to Shahn Majid, is what is called
noncommutative geometry.
Noncommutative geometry takes an algebraic approach to understanding space. Algebra is something that also enters classical geometry: given a space, we can describe lines, planes and other surfaces
in it by algebraic equations, and we can define functions which take points in the space as their input. The algebraic properties of these equations and functions can encode the properties of the
space they are based on.
Noncommutative geometry, like classical algebraic geometry before it, works only with the algebraic features of the underlying space, whatever that space may be. Some particular algebraic properties
of classical geometry that have to do with the laws of commutation are missing in the algebra of noncommutative geometry, giving it its name and making it far more general than its classical
counterpart. Crucially, it does away with the notion of a point and the assumption of continuity in the conventional sense. "It could be the right setting for its own, and perhaps correct, approach
to quantum gravity," says Shahn Majid, "it does not assume either a continuum or a discrete space as its input, but can include both as special cases."
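A toy illustration of what losing commutativity means (an elementary aside, not the machinery of noncommutative geometry itself): for the matrices

$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},$$

one finds

$$AB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \;\ne\; \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = BA,$$

so in an algebra of this kind the order of multiplication matters. It is precisely this freedom that noncommutative geometry uses to describe "spaces" that need not be built out of points.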
Even as a purely mathematical theory, noncommutative geometry is not yet fully explored. And results from this mathematical research may tell us things about the properties of a quantum gravity
theory that may otherwise not be obvious — an interesting interplay between pure maths and physics. "The possibilities within noncommutative geometry at a mathematical level provide constraints on
what the unknown theory of full quantum gravity can be. The geometry's own internal consistencies can be a guide to the deeper unknown theory. It's what I call the algebraic approach to quantum
gravity," says Majid.
At this point you may be wondering what the general public could possibly contribute to such a technical and mathematical debate. Quite a lot, according to Majid, since the physics and maths still
fall short of answers to the deeper questions: "Without quantum gravity we do not have honest answers to these questions. If one does not know what time is, then what does it mean to exist (which
usually means at some moment in time)? What is existence? And what about free will? At this point we can hope to have a very broad debate involving theologians, philosophers and the general public.
Personally I think that theoretical physics can only go so far with its current ideology and methods, we actually need some completely fresh angles to make progress."
So if you think that you have something to contribute, or simply want to listen in on the debate, you can register online now. Admission is also free on the door but capacity is limited. And since
no-one has yet developed a working time machine I would recommend registering in case they run out of space.
Further reading
You can learn more about string theory in the article Tying it all up and on the String theory website.
Problems 4.5
4.5 Single-bus transportation system This problem illustrates, once more, one of the main themes of this book: that the "obvious" thing to do does not always produce the best results. For several
types of urban transportation systems (e.g., buses, elevator banks, subways, etc.) it is sometimes better to delay some vehicles than to let them proceed, as soon as possible, with a "trip." This
will result in more regular headways (between the passage of vehicles from stops), which, in turn, improves overall system performance. Consider as an example the following simple situation. A bus
"system" consists of a single bus that operates on a route (Figure P4.5) with a single stop (station A) for picking up passengers, who are then delivered to other stops along the route. (This could
be a primitive model for a local bus system in a suburban community during the evening rush hour. The single pickup stop would be at the train station where commuters from a central business district
return from work.) Define H to be the headway between successive departures of the bus from station A. Assume that:
1. The time interval, X, between the instant when the bus leaves station A and the instant when it returns there for the first time is a discrete random variable.
2. Passengers board the bus instantaneously once it returns to station A and the capacity of the bus is sufficiently large so that no one is ever left out.
3. Passengers arrive at station A to catch the bus randomly, according to some probabilistic process which is independent of the location of the bus at any given time.
a. Find the expected time, E[W], that a random passenger will spend at station A if the bus always leaves station A immediately after it arrives there and the passengers board it.
b. Repeat part (a) for the case in which the bus is held at the station for three extra time units whenever it returns to station A only one time unit after leaving.
Note that E[W] has decreased now despite the fact that the frequency of bus service has decreased as well.
c. Assume that it has been decided to use the dispatching strategy H = Max (a[0], X) at station A, where a[0] is an unknown constant. Determine the value of a[0] that minimizes E[W].
In general, it has been shown that, for any pdf f[x](x) for X, the optimal headway strategy for this problem is to set H = Max (a*[0], X), where a*[0] is the optimal value of the constant a[0]
(note that X is allowed to be any random variable in this case but that the result above is limited to single-bus systems).
d. In part (c) you found that a*[0] = E*[W], where E*[W] is the minimum value of E[W]. This is not an accident but a general property of optimal headway strategies for this problem [OSUN 72].
Does this suggest a good iterative procedure for solving part (c)?
e. Repeat parts (a) and (c) for the case f[x](x) = e^-x for x ≥ 0.
A good review and more general results for problems of this type have been published by Barnett [BARN 78].
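For readers who want to experiment, here is a short Monte Carlo sketch of parts (a)-(c). It relies on the standard random-incidence identity E[W] = E[H^2]/(2E[H]) (a randomly arriving passenger tends to fall into a long headway interval and then waits, on average, half of it). The pmf used for X below, P{X = 1} = P{X = 9} = 1/2, is an illustrative assumption; it is not the pmf from the book, which is not reproduced in this excerpt.

```python
import random

def mean_wait(headways):
    """Random-incidence wait of a typical passenger: E[W] = E[H^2] / (2 E[H])."""
    return sum(h * h for h in headways) / (2.0 * sum(headways))

def sample_X():
    # Illustrative round-trip pmf (an assumption, not the book's data):
    # P{X = 1} = P{X = 9} = 1/2.
    return 1 if random.random() < 0.5 else 9

N = 200_000
xs = [sample_X() for _ in range(N)]

# Part (a): the bus departs immediately on return, so H = X.
print("E[W], depart immediately:", mean_wait(xs))

# Part (b): hold the bus 3 extra time units whenever X = 1, so H = 4 then.
print("E[W], hold when X = 1:   ", mean_wait([x + 3 if x == 1 else x for x in xs]))

# Part (c): strategy H = Max(a[0], X); sweep a[0] to locate the minimizer.
ew, a0 = min((mean_wait([max(a, x) for x in xs]), a)
             for a in [i / 10.0 for i in range(10, 100)])
print("approx. optimal a[0] =", a0, " with E*[W] =", round(ew, 3))
```

With this pmf, holding the bus lowers E[W] even though service becomes less frequent, and the sweep in part (c) lands near a[0] ≈ 3.73 with E*[W] ≈ 3.73 as well, numerically exhibiting the fixed-point property a*[0] = E*[W] stated in part (d).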
Journal of Applied Mathematics
Volume 2012 (2012), Article ID 950323, 27 pages
Research Article
Nonlinear Fluid Models for Biofluid Flow in Constricted Blood Vessels under Body Accelerations: A Comparative Study
^1School of Mathematical Sciences, University Science Malaysia, 11800 Penang, Malaysia
^2Centre for Applicable Mathematics and Systems Science, Department of Computer Science, Liverpool Hope University, Hope Park, Liverpool L16 9JD, UK
Received 3 January 2012; Accepted 12 February 2012
Academic Editor: M. F. El-Amin
Copyright © 2012 D. S. Sankar and Atulya K. Nagar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Pulsatile flow of blood in constricted narrow arteries under periodic body acceleration is analyzed, modeling blood as non-Newtonian fluid models with yield stress such as (i) Herschel-Bulkley fluid
model and (ii) Casson fluid model. The expressions for various flow quantities obtained by Sankar and Ismail (2010) for Herschel-Bulkley fluid model and Nagarani and Sarojamma (2008), in an improved
form, for Casson fluid model are used to compute the data for comparing these fluid models. It is found that the plug core radius and wall shear stress are lower for H-B fluid model than those of the
Casson fluid model. It is also noted that the plug flow velocity and flow rate are considerably higher for H-B fluid than those of the Casson fluid model. The estimates of the mean velocity and mean
flow rate are considerably higher for H-B fluid model than those of the Casson fluid model.
1. Introduction
Atherosclerosis is an arterial disease of large and medium-sized blood vessels which involves complex interactions between the artery wall and the blood flow; it is caused by intravascular plaques
and leads to malfunctions of the cardiovascular system [1]. Intimal thickening of an artery is the initial process in the development of atherosclerosis, one of the most widespread diseases in
humans [2]. In atherosclerotic arteries, the lumen is typically narrowed and the wall is stiffened by the buildup of plaque with a lipid core and a fibromuscular cap, and the narrowing of the lumen of
the artery by the deposit of fats, lipids, cholesterol, and so forth is medically termed as stenosis formation [3]. Different shapes of stenoses are formed in arteries like axisymmetric, asymmetric,
overlapping, and multiple and even sometimes it may be arbitrary in shape [4–7]. Once stenosis develops in an artery, its most serious consequences are the increased resistance and the associated
reduction of blood flow to the vascular bed supplied by the artery [8, 9]. Thus, the presence of a stenosis leads to the serious circulatory disorder. Hence, it is very useful to mathematically
analyze the blood flow in stenosed arteries.
In many situations of our day-to-day life, we are exposed to body accelerations or vibrations: the swinging of kids in a cradle, vibration therapy applied to a patient with heart disease, travel of
passengers in road vehicles, ships, and flights, sudden movements of the body in sports activities, and so forth [10, 11]. Sometimes our whole body is subjected to vibrations, as for a passenger
sitting in a bus or train, while on other occasions a specific part of the body is subjected to vibrations, for example when operating a jackhammer or lathe machine, or when driving a
car [12–14]. Prolonged exposure of the body to high-level, unintended external body accelerations causes serious health hazards due to abnormal blood circulation [15–17]. Some of the
symptoms which result from prolonged exposure of body acceleration are headache, abdominal pain, increase in pulse rate, venous pooling of blood in the extremities, loss of vision, hemorrhage in the
face, neck, eye-sockets, lungs, and brain [18–20]. Thus, an adequate knowledge in this field is essential to the diagnosis and therapeutic treatment of some health problems, like vision loss, joint
pain, and vascular disorder, and so forth, and also in the design of protective pads and machines. Hence, it is important to mathematically analyze and also to quantify the effects of periodic body
accelerations in arteries of different diameters.
Due to the rheological importance of the body accelerations and the arterial stenosis, several theoretical studies were performed to understand their effects on the physiologically important flow
quantities and also their consequences [15–20]. Blood shows anomalous viscous properties. Blood, when it flows through larger diameter arteries at high shear rates, it shows Newtonian character;
whereas, when it flows in narrow diameter arteries at low shear rates, it exhibits remarkable non-Newtonian behavior [21, 22]. Many studies pertaining to blood flow analysis treated it as Newtonian
fluid [4, 15, 23]. Several researchers used non-Newtonian fluids models for mathematical analysis of blood flow through narrow arteries with different shapes of stenosis under periodic body
accelerations [24–27]. Casson and Herschel-Bulkley (H-B) fluid models are some of the non-Newtonian fluid models with yield stress and are widely used in the theoretical analysis of blood flow in
narrow arteries [28, 29]. The advantages of using H-B fluid model rather than Casson fluid model for modeling of blood flow in narrow arteries are mentioned below.
Chaturani and Samy [8] emphasized the use of H-B fluid model for blood flow modeling with the argument that when blood flows in arteries of diameter 0.095mm, it behaves like H-B fluid rather than
other non-Newtonian fluids. Tu and Deville [21] pronounced that blood obeys Casson fluid’s constitutive equation only at moderate shear rates, whereas H-B fluid model can be used still at low shear
rates and represents fairly closely what is occurring in blood. Iida [30] reports “the velocity profiles of blood when it flows in the arterioles having diameter less than 0.1mm are generally
explained fairly by Casson and H-B fluid models. However, the velocity profiles of blood flow in the arterioles whose diameters are less than 0.065mm do not conform to the Casson fluid model, but,
can still be explained by H-B fluid model.” Moreover, Casson fluid’s constitutive equation has only one parameter, namely, the yield stress, whereas the H-B fluid’s constitutive equation has one more
parameter, namely, the power law index “n” and thus one can obtain more detailed information about blood flow characteristics by using the H-B fluid model rather than Casson fluid model [31]. Hence,
it is appropriate to treat blood as H-B fluid model rather than Casson fluid model when it flows through narrow arteries.
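For orientation, the two constitutive laws in one common dimensional parameterization (consistent with the equations reconstructed in Section 2 below; τ_y is the yield stress and γ̇ the shear rate) are

$$\text{H-B:}\quad \dot\gamma = \frac{(\tau - \tau_y)^n}{\mu_H}, \qquad \text{Casson:}\quad \dot\gamma = \frac{(\sqrt{\tau} - \sqrt{\tau_y}\,)^2}{\mu_C}, \qquad \text{for } \tau \ge \tau_y \ (\dot\gamma = 0 \text{ otherwise}).$$

Setting n = 1 in the H-B law recovers the Bingham model, while letting τ_y → 0 recovers the power-law fluid (and, with n = 1, the Newtonian fluid); the Casson law with τ_y → 0 likewise reduces to the Newtonian case. These are exactly the reductions mentioned in Section 2, and the exponent n is the extra degree of freedom referred to above.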
Sankar and Ismail [32] investigated the effects of periodic body accelerations in blood flow through narrow arteries with axisymmetric stenosis, treating blood as H-B fluid model. Nagarani and
Sarojamma [33] mathematically analyzed the pulsatile flow of Casson fluid for blood flow through stenosed narrow arteries under body acceleration. The pulsatile flow of H-B fluid model and Casson
fluid model for blood flow through narrow arteries with asymmetric stenosis under periodic body acceleration has not been studied so far, to the knowledge of the authors. Hence, in the present study,
a comparative study is performed for the pulsatile flow H-B and Casson fluid models for blood flow in narrow arteries with asymmetric shapes of stenoses under periodic body acceleration. The
expressions obtained in Sankar and Ismail [32] for shear stress, velocity distribution, wall shear stress, and flow rate are used to compute data for the present comparative study. The aforesaid flow
quantities obtained by Nagarani and Sarojamma [33] for Casson fluid model in the corrected form are used in this study to compute data for performing the present comparative study. The layout of the
paper is as follows.
Section 2 mathematically formulates the H-B and Casson fluid models for blood flow and applies the perturbation method of solution. In Section 3, the results of H-B fluid model and Casson fluid model
for blood flow in axisymmetric and asymmetrically stenosed narrow arteries are compared. Some possible clinical applications to the present study are also given in Section 3. The main results are
summarized in the concluding Section 4.
2. Mathematical Formulation
Consider an axially symmetric, laminar, pulsatile, and fully developed flow of blood (assumed to be incompressible) in the axial direction through a circular narrow artery with constriction. The
constriction in the artery is assumed as due to the formation of stenosis in the lumen of the artery and is considered as mild. In this study, we consider the shape of the stenosis as asymmetric. The
geometry of segment of a narrow artery with asymmetric shape of mild stenosis is shown in Figure 1(a). For different values of the stenosis shape parameter m, the asymmetric shapes of the stenoses
are sketched in Figure 1(b). In Figure 1(b), one can notice the axisymmetric shape of stenosis when the stenosis shape parameter m = 2. The segment of the artery under study is considered to be long
enough so that the entrance, end, and special wall effects can be neglected. Due to the presence of the stenosis in the lumen of the segment of the artery, it is appropriate to treat the segment of
the stenosed artery under study as rigid walled. Assume that there is periodical body acceleration in the region of blood flow and blood is modeled as non-Newtonian fluid model with yield stress. In
this study, we use two different non-Newtonian fluid models with yield stress for blood flow simulations such as (i) Herschel-Bulkley (H-B) fluid and (ii) Casson fluid. Note that for particular
values of the parameters, H-B fluid model’s constitutive equation reduces to the constitutive equations of Newtonian fluid, power law fluid, and Bingham fluid. Also it is to be noted that Casson
fluid model’s constitutive equation reduces to the constitutive equation of Newtonian fluid when the yield stress parameter becomes zero. The cylindrical polar coordinate system has been used to
analyze the blood flow.
2.1. Herschel-Bulkley Fluid Model
2.1.1. Governing Equations and Boundary Conditions
It has been reported that the radial velocity is negligibly small and can be neglected for a low Reynolds number flow in a narrow artery with mild stenosis. The momentum equations governing the blood
flow in the axial and radial directions then simplify, respectively, to [32]

ρ̄_H ∂ū_H/∂t̄ = −∂p̄/∂z̄ + F̄(t̄) − (1/r̄) ∂(r̄ τ̄_H)/∂r̄, (2.1)

∂p̄/∂r̄ = 0, (2.2)

where ρ̄_H and ū_H are the density and the axial component of the velocity of the H-B fluid, respectively; p̄ is the pressure; t̄ is the time; τ̄_H is the shear stress of the H-B fluid; F̄(t̄) is the
term which represents the effect of body acceleration and is given by

F̄(t̄) = ā₀ cos(ω̄_b t̄ + ϕ), with ω̄_b = 2π f_b, (2.3)

where ā₀ is the amplitude of the body acceleration, f_b is the frequency in Hz and is assumed to be small so that the wave effect can be neglected [14], and ϕ is the lead angle of F̄(t̄) with respect
to the heart action. Since the blood flow is assumed to be pulsatile, it is appropriate to assume the pressure gradient to be the periodic function [25]

−∂p̄/∂z̄ (z̄, t̄) = Ā₀ + Ā₁ cos(ω̄_p t̄), with ω̄_p = 2π f_p, (2.4)

where Ā₀ is the steady component of the pressure gradient, Ā₁ is the amplitude of the pulsatile component of the pressure gradient, and f_p is the pulse frequency in Hz [23]. The constitutive
equation of the H-B fluid (which represents blood) is given by

−∂ū_H/∂r̄ = (1/μ̄_H)(τ̄_H − τ̄_y)ⁿ if τ̄_H ≥ τ̄_y; ∂ū_H/∂r̄ = 0 if τ̄_H ≤ τ̄_y, (2.5)

where τ̄_y is the yield stress of the H-B fluid and μ̄_H is the coefficient of viscosity of the H-B fluid, with dimension (N/m²)ⁿ s. The geometry of the asymmetric shape of stenosis in the arterial
segment is mathematically represented by the following equation [34]:

R̄(z̄)/R₀ = 1 − G [L̄₀^(m−1) (z̄ − d̄) − (z̄ − d̄)^m] for d̄ ≤ z̄ ≤ d̄ + L̄₀; R̄(z̄)/R₀ = 1 otherwise, (2.6)

where G = (δ̄/R₀L̄₀^m) m^(m/(m−1))/(m − 1); m (≥ 2) is the stenosis shape parameter; δ̄ denotes the maximum height of the stenosis, attained at z̄ = d̄ + L̄₀/m^(1/(m−1)), such that δ̄/R₀ ≪ 1; L̄₀ is the
length of the stenosis; d̄ denotes its location; R̄(z̄) is the radius of the artery in the stenosed region; R₀ is the radius of the normal artery. It is to be noted that (2.6) also represents the
geometry of a segment of the artery with axisymmetric stenosis when the stenosis shape parameter m = 2. We make use of the following boundary conditions to solve the system of momentum and
constitutive equations for the unknown velocity and shear stress:

τ̄_H is finite at r̄ = 0; ū_H = 0 at r̄ = R̄(z̄). (2.7)
2.1.2. Nondimensionalization
Let us introduce the following nondimensional variables:

z = z̄/R₀, r = r̄/R₀, R(z) = R̄(z̄)/R₀, d = d̄/R₀, L₀ = L̄₀/R₀, δ = δ̄/R₀, t = ω̄_p t̄, ω = ω̄_b/ω̄_p, u_H = ū_H/(Ā₀R₀²/4μ̄), τ_H = τ̄_H/(Ā₀R₀/2), θ = τ̄_y/(Ā₀R₀/2), e = Ā₁/Ā₀, B = ā₀/Ā₀, (2.8)

where μ̄ = μ̄_H (2/Ā₀R₀)^(n−1), having dimension as that of the Newtonian fluid's viscosity [22, 34]; α_H = R₀ √(ω̄_p ρ̄_H/μ̄) is the generalized Womersley frequency parameter or pulsatile Reynolds
number, and when n = 1, it reduces to the Newtonian fluid's pulsatile Reynolds number. Using the nondimensional variables defined in (2.8), the momentum and constitutive equations (2.1) and (2.5) can
be simplified to the following equations:

α_H² ∂u_H/∂t = 4(1 + e cos t) + 4B cos(ωt + ϕ) − (2/r) ∂(r τ_H)/∂r, (2.9)

−∂u_H/∂r = 2(τ_H − θ)ⁿ if τ_H ≥ θ; ∂u_H/∂r = 0 if τ_H ≤ θ. (2.10)

The geometry of the asymmetric shape of the stenosis in the arterial segment in the nondimensional form reduces to the following equation:

R(z) = 1 − G [L₀^(m−1) (z − d) − (z − d)^m] for d ≤ z ≤ d + L₀; R(z) = 1 otherwise, (2.12)

where G = (δ/L₀^m) m^(m/(m−1))/(m − 1). The boundary conditions in the nondimensional form are

τ_H is finite at r = 0; u_H = 0 at r = R(z). (2.13)

The volume flow rate in the nondimensional form is given by

Q(z, t) = 4 ∫₀^{R(z)} u_H(r, z, t) r dr,

where Q = Q̄/(π R₀⁴ Ā₀/8μ̄) and Q̄ is the volumetric flow rate.
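A few lines of code reproduce Figure 1(b) qualitatively. The sketch below evaluates the nondimensional stenosis geometry (2.12) for several values of the shape parameter m; the values of δ, L₀, and d are illustrative choices, not parameters quoted from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def radius(z, m, delta=0.15, L0=4.0, d=2.0):
    """Dimensionless stenosed radius R(z) from (2.12)."""
    G = (delta / L0**m) * m**(m / (m - 1.0)) / (m - 1.0)
    R = np.ones_like(z)
    inside = (z >= d) & (z <= d + L0)
    zz = z[inside] - d
    R[inside] = 1.0 - G * (L0**(m - 1.0) * zz - zz**m)
    return R

z = np.linspace(0.0, 8.0, 400)
for m in (2, 3, 5, 7):                 # m = 2 gives the axisymmetric case
    plt.plot(z, radius(z, m), label=f"m = {m}")
plt.xlabel("z"); plt.ylabel("R(z)"); plt.legend(); plt.show()
```

The maximum constriction sits at z = d + L₀/m^(1/(m−1)), which drifts downstream as m grows; that is the skewness visible in the plug-flow-velocity plots discussed in Section 3.2.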
2.1.3. Perturbation Method of Solution
Since (2.9) and (2.10) form a system of nonlinear partial differential equations, it is not possible to obtain an exact solution to them. Thus, a perturbation method is used to solve this system of
nonlinear partial differential equations. Since the present study deals with slow flow of blood (low Reynolds number flow), where the effect of the pulsatile Reynolds number is negligibly small and
also occurs naturally in the nondimensional form of the momentum equation, it is more appropriate to expand the unknowns u_H and τ_H in (2.9) and (2.10) in perturbation series about α_H². Let us
expand the velocity u_H in the perturbation series about the square of the pulsatile Reynolds number as below (where α_H² ≪ 1):

u_H(r, z, t) = u_{0H}(r, z, t) + α_H² u_{1H}(r, z, t) + ⋯

Similarly, one can expand the shear stress τ_H, the plug core radius R_p, the plug core velocity u_p, and the plug core shear stress τ_p in terms of α_H². Substituting the perturbation series
expansions of u_H and τ_H in (2.9) and then equating the constant terms and the α_H² terms, we get the equations (2.16). Using the binomial series approximation in (2.10) (assuming θ/τ_H ≪ 1), then
applying the perturbation series expansions of u_H and τ_H in the resulting equation and equating the constant terms and the α_H² terms, one can obtain the equations (2.17). Applying the perturbation
series expansions of u_H and τ_H in the boundary conditions (2.13), we obtain the conditions (2.18). Solving (2.16)–(2.17) with the help of the boundary conditions (2.18) for the unknown shear
stresses and velocities, one can obtain closed-form expressions for the plug core shear stress, the shear stress, the plug core velocity, and the velocity distribution; the details of obtaining these
expressions, and the constants appearing in them, are given in [32]. The wall shear stress is a physiologically important flow quantity which plays an important role in determining the aggregate
sites of platelets [3]. The expression for the wall shear stress τ_w = τ_H|_{r = R(z)} is given in [32]. The expressions for the volumetric flow rate Q and for the plug core radius R_p are likewise
obtained in [32]. The longitudinal impedance to flow in the artery is defined as

Λ = P(t)/Q(z, t),

where P(t) is the pressure gradient in the nondimensional form.
2.2. Casson Fluid Model
2.2.1. Governing Equations and Boundary Conditions
The momentum equations governing the blood flow in the axial and radial directions simplify, respectively, to [33]

ρ̄_C ∂ū_C/∂t̄ = −∂p̄/∂z̄ + F̄(t̄) − (1/r̄) ∂(r̄ τ̄_C)/∂r̄, (2.25)

∂p̄/∂r̄ = 0, (2.26)

where ū_C and ρ̄_C are the axial component of the velocity and the density of the Casson fluid; p̄ is the pressure; t̄ is the time; τ̄_C is the shear stress of the Casson fluid. Equations (2.3) and
(2.4), which define mathematically the body acceleration term and the pressure gradient, are assumed in this subsection. Similarly, (2.6), which mathematically describes the geometry of the
axisymmetric shape of stenosis and the asymmetric shape of stenosis in the segment of the stenosed artery, is also assumed in this subsection (the details of these assumptions can be found in
Section 2.1.1). The constitutive equation of the Casson fluid model (which models blood) is defined as below:

−∂ū_C/∂r̄ = (1/μ̄_C)(√τ̄_C − √τ̄_y)² if τ̄_C ≥ τ̄_y, (2.27)

∂ū_C/∂r̄ = 0 if τ̄_C ≤ τ̄_y, (2.28)

where τ̄_y is the yield stress of the Casson fluid and μ̄_C is the coefficient of viscosity of the Casson fluid, with the dimension of a Newtonian viscosity (N s/m²). The appropriate boundary
conditions to solve the system of momentum and constitutive equations (2.25), (2.27), and (2.28) for the unknown velocity and shear stress are

τ̄_C is finite at r̄ = 0; ū_C = 0 at r̄ = R̄(z̄). (2.29)
2.2.2. Nondimensionalization
Similar to (2.8), let us introduce the following nondimensional variables for the Casson fluid flow modeling:

u_C = ū_C/(Ā₀R₀²/4μ̄_C), τ_C = τ̄_C/(Ā₀R₀/2), θ = τ̄_y/(Ā₀R₀/2), α_C = R₀ √(ω̄_p ρ̄_C/μ̄_C), (2.30)

with the remaining variables scaled as in (2.8), where α_C is the Womersley frequency parameter or pulsatile Reynolds number of the Casson fluid model. Use of the above nondimensional variables
reduces the momentum and constitutive equations (2.25), (2.27), and (2.28), respectively, to the following equations:

α_C² ∂u_C/∂t = 4(1 + e cos t) + 4B cos(ωt + ϕ) − (2/r) ∂(r τ_C)/∂r, (2.31)

−∂u_C/∂r = 2(√τ_C − √θ)² if τ_C ≥ θ, (2.32)

∂u_C/∂r = 0 if τ_C ≤ θ. (2.33)

Equation (2.12), which mathematically defines the nondimensional form of the geometry of the asymmetric shapes of stenosis in the arterial segment, is assumed in this subsection. The boundary
conditions in the nondimensional form are

τ_C is finite at r = 0; u_C = 0 at r = R(z). (2.34)

The volume flow rate in the nondimensional form is given by

Q(z, t) = 4 ∫₀^{R(z)} u_C(r, z, t) r dr,

where Q = Q̄/(π R₀⁴ Ā₀/8μ̄_C) and Q̄ is the volumetric flow rate.
2.2.3. Perturbation Method of Solution
As described in Section 2.1.3, the perturbation method is applied to solve the system of nonlinear partial differential equations (2.31) and (2.32). Let us expand the velocity u_C in the perturbation
series about the square of the pulsatile Reynolds number as below (where α_C² ≪ 1):

u_C(r, z, t) = u_{0C}(r, z, t) + α_C² u_{1C}(r, z, t) + ⋯

Similarly, one can expand the shear stress τ_C, the plug core radius R_p, the plug core velocity u_p, and the plug core shear stress τ_p in terms of α_C². Substituting the perturbation series
expansions of u_C and τ_C in (2.31) and then equating the constant terms and the α_C² terms, one can obtain the equations (2.37). Applying the perturbation series expansions of u_C and τ_C in (2.32)
and then equating the constant terms and the α_C² terms, we get the equations (2.38). Applying the perturbation series expansions of u_C and τ_C in the boundary conditions (2.34) and then equating
the constant terms and the α_C² terms, one can get the conditions (2.39). Solving (2.37)–(2.38) with the help of the boundary conditions (2.39) for the unknown shear stresses and velocities, one can
get closed-form expressions as in [33], but in a corrected form ((2.40)–(2.50)); the constants appearing in these expressions are defined in [33]. Using (2.41) and (2.45), the expression for the wall
shear stress τ_w is obtained. The expressions for the volumetric flow rate Q and for the plug core radius R_p are obtained similarly [33]. The longitudinal impedance to flow in the artery is defined
as

Λ = P(t)/Q(z, t),

where P(t) is the pressure gradient in the nondimensional form.
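Before turning to the pulsatile results, the qualitative contrast between the two models can already be seen in the steady, fully developed limit (e = B = 0 and α_H = α_C = 0), where the momentum balance in the units of Section 2 gives τ = r across the tube. The sketch below integrates the nondimensional constitutive laws (2.10) and (2.32) numerically; it is a simplified steady analogue, not the paper's perturbation solution, and θ = 0.1, n = 0.95 are illustrative values.

```python
import numpy as np

def steady_profile(model, theta=0.1, n=0.95, num=2001):
    """Steady velocity profile in a tube of unit radius: tau(r) = r, and the
    shear rate g = -du/dr follows (2.10) for H-B or (2.32) for Casson."""
    r = np.linspace(0.0, 1.0, num)
    tau = r
    if model == "HB":
        g = np.where(tau >= theta, 2.0 * np.maximum(tau - theta, 0.0) ** n, 0.0)
    else:  # Casson
        g = np.where(tau >= theta, 2.0 * (np.sqrt(tau) - np.sqrt(theta)) ** 2, 0.0)
    # u(r) = integral from r to 1 of g(s) ds, with u = 0 at the wall (no slip).
    u = np.flip(np.cumsum(np.flip(g))) * (r[1] - r[0])
    u -= u[-1]
    return r, u

for model in ("HB", "Casson"):
    r, u = steady_profile(model)
    print(f"{model:6s}: plug radius = theta = 0.1, plug velocity = {u[0]:.3f}")
```

Even this crude steady version reproduces the ordering reported below: at the same yield stress the H-B profile is markedly faster than the Casson one (roughly 0.81 versus 0.35 at the centreline here), although in this steady limit both models share the same plug radius; the pulsatile analysis separates them.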
3. Numerical Simulation of the Results
The main objective of the present mathematical analysis is to compare the H-B and Casson fluid models for blood flow in constricted arteries and spell out the advantages of using the H-B fluid model
rather than the Casson fluid model for the mathematical modeling of blood flow in a narrow artery with asymmetric stenosis. It is also aimed to bring out the effects of body acceleration, stenosis shape
parameter, yield stress, and pressure gradient on the physiologically important flow quantities such as plug core radius, plug flow velocity, velocity distribution, flow rate, wall shear stress, and
longitudinal impedance to flow. The different parameters used in this analysis and their range of values are given below [32–35].
Yield stress θ: 0–0.3; power law index n: 0.95–1.05; pressure gradient ratio e: 0–1; body acceleration parameter B: 0–2; frequency ratio ω: 0–1; pulsatile Reynolds numbers α_H and α_C: 0.2–0.7;
lead angle ϕ: 0.2–0.5; asymmetry parameter m: 2–7; stenosis depth δ: 0–0.2.
3.1. Plug Core Radius
The variation of the plug core radius R_p with the axial distance z in an axisymmetrically stenosed artery (m = 2) for different values of the yield stress θ of the H-B and Casson fluid models with
δ = 0.15, B = 2, α_H = α_C = 0.2, e = ϕ = 0.7 and t = 45° is shown in Figure 2. It is observed that the plug core radius decreases slowly when the axial variable z increases from 0 to 4 and then it
increases when z increases further from 4 to 8. The plug core radius is minimum at the centre of the stenosis (z = 4), since the stenosis is axisymmetric. The plug core radius of the H-B fluid model
is slightly lower than that of the Casson fluid model. One can note that the plug core radius increases very significantly when the yield stress of the flowing blood increases. Figure 3 sketches the
variation of the plug core radius with the pressure gradient ratio e in an asymmetrically stenosed artery (m = 4) for the H-B and Casson fluid models and for different values of the body acceleration
parameter B with θ = δ = 0.1, t = 60°, ϕ = 0.7, m = 4, and z = 4. It is noticed that the plug core radius decreases rapidly with the increase of the pressure gradient ratio e from 0 to 0.5 and then
it decreases slowly with the increase of the pressure gradient ratio from 0.5 to 1. It is seen that the plug core radius increases significantly with the increase of the body acceleration
parameter B. Figures 2 and 3 bring out the influence of the non-Newtonian behavior of blood and the effects of body acceleration and pressure gradient on the plug core radius when blood flows in an
asymmetrically stenosed artery.
3.2. Plug Flow Velocity
Figure 4 shows the variation of the plug flow velocity with the yield stress θ for the H-B and Casson fluid models and for different values of the stenosis shape parameter m with e = 0.5, ϕ = 0.2,
z = 4, t = 60°, ω = 0.5, B = 1, and δ = 0.1. It is noted that for the H-B fluid model, the plug flow velocity decreases very slowly with the increase of the yield stress, whereas, in the case of the
Casson fluid model, it decreases rapidly when the yield stress θ increases from 0 to 0.05 and then it decreases slowly with the increase of the yield stress from 0.05 to 0.3. It is seen that the plug
flow velocity is considerably higher for the H-B fluid model than that of the Casson fluid model. One can easily observe that the plug flow velocity decreases significantly with the increase of the
stenosis shape parameter m. The variation of the plug flow velocity with the axial distance z for the H-B and Casson fluid models and for different values of the body acceleration parameter B and
the pressure gradient ratio e with δ = θ = 0.1, m = 4, t = 60°, ϕ = 0.2, and ω = 0.5 is depicted in Figure 5. It is seen that the plug flow velocity skews more to the right-hand side in the axial
direction, which is attributed to the skewness of the stenosis. It is clear that the plug flow velocity increases considerably with the increase of the body acceleration parameter B and the pressure
gradient ratio e. Figures 4 and 5 show the non-Newtonian character of blood and the effects of body acceleration, pressure gradient, and asymmetry of the stenosis on the plug flow velocity of blood
when it flows through a constricted artery.
3.3. Velocity Distribution
Figure 6 sketches the velocity distribution for the H-B and Casson fluid models and for different values of the yield stress θ and the stenosis depth δ with m = 2, e = 0.2, α_H = α_C = 0.5, ϕ = 0.2,
ω = 1, t = 60°, and B = 1. It is observed that the velocity of the H-B fluid model is considerably higher than that of the Casson fluid model. It is also found that the velocity of the blood flow
decreases with the increase of the
yield stress θ and stenosis depth δ. But the decrease in the velocity is considerable when the stenosis depth δ increases, whereas it decreases significantly with the increase of the yield stress. It
is of interest to note that the velocity distribution of H-B fluid with δ = 0.2 and θ = 0.05 and B = 0 is in good agreement with the corresponding plot in Figure 6 of Sankar and Lee [34]. It is also
to be noted that the velocity distribution of Casson fluid with δ = 0.2, θ = 0.01, and B = 0 is in good agreement with the corresponding plot in Figure 6 of Siddiqui et al. [35].
3.4. Flow Rate
The variation of the flow rate with the pressure gradient ratio e for the H-B and Casson fluid models and for different values of the power law index n, the body acceleration parameter B, and the
stenosis shape parameter m with θ = δ = 0.1, α_H = α_C = ϕ = 0.2, z = 4, t = 60°, and ω = 1 is shown in Figure 7. It is seen that the flow rate increases with the pressure gradient ratio e. But the
increase in the flow rate is linear for the H-B fluid model and almost constant for the Casson fluid model. For a given set of values of the parameters, the flow rate for the H-B fluid model is
considerably higher than that of the Casson fluid model. It is also clear that for a given set of values of e and m, the flow rate increases considerably with the increase of the body acceleration
parameter B. One can observe that for fixed values of e and B, the flow rate decreases significantly with the increase of the stenosis shape parameter m. When the power law index n increases from
0.95 to 1.05 and all the other parameters are held constant, the flow rate decreases slightly when the range of the pressure gradient ratio is 0–0.5, and this behavior is reversed when the range of
the pressure gradient ratio is 0.5 to 1. Figure 7 brings out the effects of body acceleration and stenosis shape on the flow rate of blood when it flows through a narrow artery with mild stenosis.
3.5. Wall Shear Stress
Figure 8 shows the variation of the wall shear stress with the frequency ratio ω for the H-B and Casson fluid models and for different values of ϕ (lead angle), α_H (pulsatile Reynolds number of the
H-B fluid model), and α_C (pulsatile Reynolds number of the Casson fluid model) with m = 2, θ = δ = 0.1, e = 0.5, B = 1, z = 4, and t = 60°. It is seen that the wall shear stress decreases slightly
nonlinearly with the frequency ratio for lower values of the pulsatile Reynolds numbers α_H and α_C and the lead angle ϕ, and it decreases linearly with the frequency ratio for higher values of the
pulsatile Reynolds numbers α_H and α_C and the lead angle ϕ. It is found that for a given set of values of the parameters, the wall shear stress is marginally lower for the H-B fluid model than that
of the Casson fluid model. Also, one can note that for a fixed value of the lead angle ϕ, the wall shear stress decreases significantly with the increase of the pulsatile Reynolds numbers α_H and
α_C. It is also observed that the wall shear stress decreases marginally
with the increase of the lead angle ϕ when all the other parameters are held constant. Figure 8 spells out the effects of pulsatility and the non-Newtonian character of blood on the wall shear
stress when it flows in a narrow artery with mild stenosis.
3.6. Longitudinal Impedance to Flow
The variation of the longitudinal impedance to flow with the axial distance z for different values of the stenosis shape parameter m and the body acceleration parameter B with θ = δ = 0.1, t = 60°,
α_H = α_C = ϕ = 0.2, e = 0.5, and ω = 1 is depicted in Figures 9(a) (for the H-B fluid model) and 9(b) (for the Casson fluid model). It is noticed that the longitudinal impedance to flow increases
with the increase of the axial variable z from 0 to the point where the stenosis depth is maximum and then it decreases as z increases further from that point to 8. One can see the significant
increase in the longitudinal impedance to flow when the stenosis shape parameter m increases and the marginal increase in the longitudinal impedance to flow when the body acceleration parameter B
increases. It is also
clear that for the same set of values of the parameters, the longitudinal impedance to flow is significantly lower for H-B fluid model than that of the Casson fluid model. Figures 9(a) and 9(b) bring
out the effects of body acceleration and asymmetry of the stenosis shape on the longitudinal impedance to blood flow.
The increase in the longitudinal impedance to blood flow due to the asymmetry shape of the stenosis is defined as the ratio between the longitudinal impedance to flow of a fluid model for a given set
of values of the parameters in an artery with asymmetric stenosis and the longitudinal impedance of the same fluid model and for the same set of values of the parameters in that artery with
axisymmetric stenosis. The estimates of the increase in the longitudinal impedance to flow are computed in Table 1 for different values of the stenosis shape parameter m and the body acceleration
parameter B with δ = θ = 0.1, e = 0.5, ω = 1, z = 4, α_H = α_C = ϕ = 0.2, and t = 60°. It is observed that the estimates of the increase in the longitudinal impedance to flow increase considerably
when the stenosis shape parameter m increases and they decrease slightly when the body acceleration parameter B increases. Hence, the longitudinal impedance to flow is significantly higher in
arteries with an asymmetric shape of stenosis compared to arteries with axisymmetric stenosis. It is also noted that the presence of body acceleration decreases the longitudinal impedance to blood
flow.
3.7. Some Possible Clinical Applications
To discuss some possible clinical applications of the present study, the data (for different types of arteries, their corresponding radii, and the steady and pulsatile pressure gradient values)
reported by Chaturani and Wassf Isaac [23] are given in Table 2 and are used in this applications part of our study. For these clinical data (given in Table 2), the estimates of the mean velocity of
the H-B and Casson fluid models for different values of the stenosis shape parameter m and different values of the body acceleration parameter B with θ = δ = 0.1, t = 60°, ω = 1, z = 4, ϕ = 0.2,
α_H = α_C, and e = 0.2 are computed in Table 3. It is recorded that the estimates of the mean velocity increase significantly with the increase of the artery radius, except in arterioles. It is also
found that the estimates of the mean velocity of the H-B fluid model are marginally higher than those of the Casson fluid model. It is noted that the mean velocity increases considerably with the
increase of the body acceleration parameter B, and the reverse behavior is found when the stenosis shape parameter m increases.
For the clinical data given in Table 2, the estimates of the mean flow rate of the H-B and Casson fluid models are computed in Table 4 for different values of the stenosis shape parameter m and
different values of the body acceleration parameter B with θ = δ = 0.1, ω = 1, t = 60°, z = 4, ϕ = 0.2, α_H = α_C, and e = 0.2. It is observed that the estimates of the mean flow rate decrease very
significantly with the increase of the artery radius. It is also found that the estimates of the mean flow rate of the H-B fluid model are considerably higher than those of the Casson fluid model.
It is noted that the estimates of the mean flow rate increase significantly with the increase of the body acceleration parameter B, and the reverse behavior is recorded when the stenosis shape
parameter m increases.
4. Conclusions
The present mathematical analysis brings out various interesting rheological properties of blood when it flows through narrow stenosed arteries with body acceleration, treating it as different
non-Newtonian fluid models with yield stress such as (i) Herschel-Bulkley fluid model and (ii) Casson fluid model. By the use of appropriate mathematical expression for the geometry of segment of the
stenosed artery, both axisymmetric and asymmetric shapes of stenoses are considered to study the effects of stenosis shape and size on the physiologically important quantities. Some major findings of
this mathematical analysis are summarized below.
(i) The plug core radius, wall shear stress, and longitudinal impedance to flow are marginally lower for H-B fluid model than those of the Casson fluid model.
(ii) The plug flow velocity, velocity distribution, and flow rate are considerably higher for H-B fluid model than those of the Casson fluid model.
(iii) The plug core radius and longitudinal impedance to flow increase significantly with the increase of the stenosis shape parameter, and the reverse behavior is observed for plug flow velocity, velocity distribution, and flow rate.
(iv) The estimates of the mean velocity and mean flow rate are considerably higher for H-B fluid model than those of the Casson fluid model.
(v) The estimates of the mean velocity and mean flow rate increase considerably with the increase of the body acceleration, and this behavior is reversed when the stenosis shape parameter increases.
Based on these results, one can note that there is a substantial difference between the flow quantities of the H-B fluid model and the Casson fluid model, and thus it is expected that the use of the H-B fluid
model for blood flow in diseased artery may provide better results which may be useful to physicians in predicting the effects of body accelerations and different shapes and sizes of stenosis in the
artery on the physiologically important flow quantities. Also, it is hoped that this study may provide some useful information to surgeons to take some crucial decisions regarding the treatment of
patients, whether the cardiovascular disease can be treated with medicines or should the patient undergo a surgery. Hence, it is concluded that the present study can be treated as an improvement in
the mathematical modeling of blood flow in narrow arteries with mild stenosis under the influence of periodic body accelerations.
r̄ : Radial distance
r : Dimensionless radial distance
z̄ : Axial distance
z : Dimensionless axial distance
n : Power law index
p̄ : Pressure
p : Dimensionless pressure
P(t) : Dimensionless pressure gradient
Q̄ : Flow rate
Q : Dimensionless flow rate
R₀ : Radius of the normal artery
R̄(z̄) : Radius of the artery in the stenosed region
R(z) : Dimensionless radius of the artery in the stenosed region
F̄(t̄) : Body acceleration function
ā₀ : Amplitude of the body acceleration
R̄_p : Plug core radius
R_p : Dimensionless plug core radius
ū_H : Axial velocity of Herschel-Bulkley fluid
u_H : Dimensionless axial velocity of Herschel-Bulkley fluid
ū_C : Axial velocity of Casson fluid
u_C : Dimensionless axial velocity of Casson fluid
Ā₀ : Steady component of the pressure gradient
Ā₁ : Amplitude of the pulsatile component of the pressure gradient
L̄ : Length of the normal artery
L̄₀ : Length of the stenosis
m : Stenosis shape parameter
L₀ : Dimensionless length of the stenosis
d̄ : Location of the stenosis
d : Dimensionless location of the stenosis
t̄ : Time
t : Dimensionless time.
Greek Letters
Λ : Dimensionless longitudinal impedance to flow
: Azimuthal angle
γ̇ : Shear rate
τ̄_y : Yield stress
θ : Dimensionless yield stress
τ̄_H : Shear stress of the Herschel-Bulkley fluid
τ_H : Dimensionless shear stress of Herschel-Bulkley fluid
τ̄_C : Shear stress for Casson fluid
τ_C : Dimensionless shear stress of Casson fluid
τ_w : Dimensionless wall shear stress
ρ̄_H : Density of Herschel-Bulkley fluid
ρ̄_C : Density of Casson fluid
μ̄_H : Viscosity of Herschel-Bulkley fluid
μ̄_C : Viscosity of the Casson fluid
α_H : Pulsatile Reynolds number of Herschel-Bulkley fluid
α_C : Pulsatile Reynolds number of Casson fluid
δ̄ : Depth of the stenosis
δ : Dimensionless depth of the stenosis
ω̄ : Angular frequency of the blood flow
ϕ : Lead angle.
Subscripts
w : Wall shear stress (used for τ_w)
H : Herschel-Bulkley fluid
N : Newtonian fluid.
This research work was supported by the Research University Grant of Universiti Sains Malaysia, Malaysia (RU Grant ref. no. 1001/PMATHS/811177). The authors thank the reviewers for their valuable
comments which helped to improve the technical quality of this research article.
1. S. Cavalcanti, “Hemodynamics of an artery with mild stenosis,” Journal of Biomechanics, vol. 28, no. 4, pp. 387–399, 1995.
2. M. E. Clark, J. M. Robertson, and L. C. Cheng, “Stenosis severity effects for unbalanced simple-pulsatile bifurcation flow,” Journal of Biomechanics, vol. 16, no. 11, pp. 895–906, 1983.
3. D. Liepsch, M. Singh, and M. Lee, “Experimental analysis of the influence of stenotic geometry on steady flow,” Biorheology, vol. 29, no. 4, pp. 419–431, 1992.
4. G. T. Liu, X. J. Wang, B. Q. Ai, and L. G. Liu, “Numerical study of pulsating flow through a tapered artery with stenosis,” Chinese Journal of Physics, vol. 42, no. 4-I, pp. 401–409, 2004.
5. P. R. Johnston and D. Kilpatrick, “Mathematical modelling of flow through an irregular arterial stenosis,” Journal of Biomechanics, vol. 24, no. 11, pp. 1069–1077, 1991.
6. K. C. Ang and J. Mazumdar, “Mathematical modelling of triple arterial stenoses,” Australasian Physical and Engineering Sciences in Medicine, vol. 18, no. 2, pp. 89–94, 1995.
7. D. Kilpatrick, S. D. Webber, and J. P. Colle, “The vascular resistance of arterial stenoses in series,” Angiology, vol. 41, no. 4, pp. 278–285, 1990.
8. P. Chaturani and R. P. Samy, “Pulsatile flow of Casson's fluid through stenosed arteries with applications to blood flow,” Biorheology, vol. 23, no. 5, pp. 499–511, 1986.
9. D. S. Sankar and K. Hemalatha, “Pulsatile flow of Herschel-Bulkley fluid through stenosed arteries: a mathematical model,” International Journal of Non-Linear Mechanics, vol. 41, no. 8, pp. 979–990, 2006.
10. J. C. Misra and B. Pal, “A mathematical model for the study of the pulsatile flow of blood under an externally imposed body acceleration,” Mathematical and Computer Modelling, vol. 29, no. 1, pp. 89–106, 1999.
11. S. Chakravarty and P. K. Mandal, “Two-dimensional blood flow through tapered arteries under stenotic conditions,” International Journal of Non-Linear Mechanics, vol. 35, no. 5, pp. 779–793, 2000.
12. V. K. Sud and G. S. Sekhon, “Arterial flow under periodic body acceleration,” Bulletin of Mathematical Biology, vol. 47, no. 1, pp. 35–52, 1985.
13. R. Usha and K. Prema, “Pulsatile flow of particle-fluid suspension model of blood under periodic body acceleration,” Zeitschrift für Angewandte Mathematik und Physik, vol. 50, no. 2, pp. 175–192, 1999.
14. P. K. Mandal, S. Chakravarty, A. Mandal, and N. Amin, “Effect of body acceleration on unsteady pulsatile flow of non-Newtonian fluid through a stenosed artery,” Applied Mathematics and Computation, vol. 189, no. 1, pp. 766–779, 2007.
15. N. Mustapha, S. Chakravarty, P. K. Mandal, and N. Amin, “Unsteady response of blood flow through a couple of irregular arterial constrictions to body acceleration,” Journal of Mechanics in Medicine and Biology, vol. 8, no. 3, pp. 395–420, 2008.
16. M. El-Shahed, “Pulsatile flow of blood through a stenosed porous medium under periodic body acceleration,” Applied Mathematics and Computation, vol. 138, no. 2-3, pp. 479–488, 2003.
17. V. P. Rathod, S. Tanveer, I. S. Rani, and G. G. Rajput, “Pulsatile flow of blood under periodic body acceleration and magnetic field through an exponentially diverging vessel,” Proceedings of the National Conference on Advanced Fluid Dynamics, pp. 106–117, 2004.
18. S. Chakravarty and A. K. Sannigrahi, “An analytical estimate of the flow-field in a porous stenotic artery subject to body acceleration,” International Journal of Engineering Science, vol. 36, no. 10, pp. 1083–1102, 1998.
19. P. Chaturani and V. Palanisamy, “Pulsatile flow of blood with periodic body acceleration,” International Journal of Engineering Science, vol. 29, no. 1, pp. 113–121, 1991.
20. S. N. Majhi and V. R. Nair, “Pulsatile flow of third grade fluids under body acceleration: modelling blood flow,” International Journal of Engineering Science, vol. 32, no. 5, pp. 839–846, 1994.
21. C. Tu and M. Deville, “Pulsatile flow of non-Newtonian fluids through arterial stenoses,” Journal of Biomechanics, vol. 29, no. 7, pp. 899–908, 1996.
22. D. S. Sankar and K. Hemalatha, “Pulsatile flow of Herschel-Bulkley fluid through catheterized arteries: a mathematical model,” Applied Mathematical Modelling, vol. 31, no. 8, pp. 1497–1517, 2007.
23. P. Chaturani and A. S. A. Wassf Isaac, “Blood flow with body acceleration forces,” International Journal of Engineering Science, vol. 33, no. 12, pp. 1807–1820, 1995.
24. G. Sarojamma and P. Nagarani, “Pulsatile flow of Casson fluid in a homogenous porous medium subject to external acceleration,” International Journal of Non-Linear Differential Equations, vol. 7, pp. 50–64, 2002.
25. P. Chaturani and V. Palanisamy, “Casson fluid model for pulsatile flow of blood under periodic body acceleration,” Biorheology, vol. 27, no. 5, pp. 619–630, 1990.
26. P. Chaturani and V. Palanisamy, “Pulsatile flow of power-law fluid model for blood flow under periodic body acceleration,” Biorheology, vol. 27, no. 5, pp. 747–758, 1990.
27. S. Chakravarty and P. K. Mandal, “A nonlinear two-dimensional model of blood flow in an overlapping arterial stenosis subjected to body acceleration,” Mathematical and Computer Modelling, vol. 24, no. 1, pp. 43–58, 1996.
28. Y. C. Fung, Biomechanics: Mechanical Properties of Living Tissues, Springer, Berlin, Germany, 1981.
29. J. N. Kapur, Mathematical Models in Biology and Medicine, Affiliated East-West Press Pvt. Ltd., New Delhi, India, 1992.
30. N. Iida, “Influence of plasma layer on steady blood flow in micro vessels,” Japanese Journal of Applied Physics, vol. 17, pp. 203–214, 1978.
31. D. S. Sankar and K. Hemalatha, “A non-Newtonian fluid flow model for blood flow through a catheterized artery: steady flow,” Applied Mathematical Modelling, vol. 31, no. 9, pp. 1847–1864, 2007.
32. D. S. Sankar and A. I. M. Ismail, “Effect of periodic body acceleration in blood flow through stenosed arteries: a theoretical model,” International Journal of Nonlinear Sciences and Numerical Simulation, vol. 11, no. 4, pp. 243–257, 2010.
33. P. Nagarani and G. Sarojamma, “Effect of body acceleration on pulsatile flow of Casson fluid through a mild stenosed artery,” Korea-Australia Rheology Journal, vol. 20, no. 4, pp. 189–196, 2008.
34. D. S. Sankar and U. Lee, “Mathematical modeling of pulsatile flow of non-Newtonian fluid in stenosed arteries,” Communications in Nonlinear Science and Numerical Simulation, vol. 14, no. 7, pp. 2971–2981, 2009.
35. S. U. Siddiqui, N. K. Verma, S. Mishra, and R. S. Gupta, “Mathematical modelling of pulsatile flow of Casson's fluid in arterial stenosis,” Applied Mathematics and Computation, vol. 210, no. 1, pp. 1–10, 2009.
Kedlaya, Kiran Sridhara - Department of Mathematics, Massachusetts Institute of Technology (MIT)
• Getting precise about precision Kiran S. Kedlaya (kedlaya@mit.edu)
• Numerical p-adic integration and (potential) applications to rational points
• Weil Numbers in Prime Cyclotomic Fields Kiran S. Kedlaya
• A construction of polynomials with squarefree discriminants
• Toric coordinates in relative p-adic Hodge theory Kiran S. Kedlaya
• EFFECTIVE p-ADIC COHOMOLOGY FOR CYCLIC CUBIC KIRAN S. KEDLAYA
• Comments/errata for "Counting Points on Hyperelliptic Curves using Monsky-Washnitzer Cohomology"
• Primality Testing Made Simple IAP 2006 Mathematics Lecture Series
• Interval arithmetic for function fields over finite fields (or, How to compute in Cp)
• Semistable reduction for overconvergent F-isocrystals: geometric aspects of the proof
• Recent results on p-adic computation of zeta functions Kiran S. Kedlaya
• The Fourier transform on the affine line and the Weil Conjectures
• Crystals, Crew's conjecture, and cohomology Kiran S. Kedlaya
• Periods for the Fundamental Group Lectures by Pierre Deligne; notes by Kiran Kedlaya
• An Attempt to Refine the Riemann-Hurwitz Formula Kiran S. Kedlaya
• Relative (φ, Γ)-modules Kiran S. Kedlaya
• Rigid cohomology and its coefficients Kiran S. Kedlaya
• Relative p-adic Hodge theory and Rapoport-Zink period domains
• Computing L-series of hyperelliptic curves and distributions of Frobenius eigenvalues
• Errata for "Good formal structures for flat meromorphic connections, I: Surfaces" Liang Xiao has pointed out that the proof of Theorem 4.2.3 is incomplete: it assumes
• The p-adic arithmetic curve: algebraic and analytic aspects
• The differential Swan conductor Kiran S. Kedlaya
• Relative p-adic Hodge theory, II: (φ, Γ)-modules Kiran S. Kedlaya and Ruochuan Liu
• A differential approach to computing zeta functions over finite fields
• Computing zeta functions of surfaces using p-adic cohomology
• ORBITS OF AUTOMORPHISM GROUPS OF FIELDS KIRAN S. KEDLAYA AND BJORN POONEN
• Relative p-adic Hodge theory, I: Foundations Kiran S. Kedlaya and Ruochuan Liu
• Is this number prime? Berkeley Math Circle 2002-2003
• Complex Multiplication and Explicit Class Field Theory
• Frobenius slope filtrations and Crew's conjecture Kiran S. Kedlaya
• Erratum for "Semistable reduction for overconvergent F-isocrystals, I: Unipotence and logarithmic extension"
• Slope filtrations and (φ, Γ)-modules in families Kiran S. Kedlaya
• NEW METHODS FOR (φ, Γ)-MODULES KIRAN S. KEDLAYA
• Effective convergence bounds for Frobenius structures on connections
• V.1 From Quadratic Reciprocity to Class Field Theory
• Semistable reduction for overconvergent F-isocrystals Kiran S. Kedlaya
• Numerical computation of Coleman integrals Kiran S. Kedlaya
• Descent Theorems for Overconvergent F-crystals Kiran Sridhara Kedlaya
• Number fields with squarefree discriminant and prescribed signature
• Absolute de Rham cohomology? A fantasy in the key of p
• Towards uniformity over p in p-adic Hodge theory Kiran S. Kedlaya
• Irregularity of flat meromorphic connections Kiran S. Kedlaya
• Errata to p-adic Differential Equations (updated 4 Oct 2010) Theorem 1.4.9: the given proof is incorrect. A submultiplicative norm on a field need not
• Good formal structures for flat meromorphic connections, III: Towards functorial modifications
• AN ALGEBRAIC SATO-TATE GROUP AND SATO-TATE GRZEGORZ BANASZAK AND KIRAN S. KEDLAYA
• Controlled reduction in the p-adic cohomology of toric hypersurfaces
• Towards a precise Sato-Tate conjecture in genus 2 Kiran S. Kedlaya
• The Sato-Tate conjecture for elliptic and hyperelliptic curves
• On surjectivity of the Witt vector Frobenius Kiran S. Kedlaya
• Convergence of solutions of p-adic differential equations Kiran S. Kedlaya
• CAMBRIDGE STUDIES IN ADVANCED MATHEMATICS 125 Editorial Board
Math Tutors in Snohomish County, WA
Everett, WA 98208
Experienced ACT and SAT tutor, former teacher
...In addition to tutoring test prep, I am also available to tutor a variety of subjects. As a former teacher, I have experience working with students of all ages. I am excellent at teaching math
to those who hate it. I disliked math, and I know how to teach methods...
Offering 10+ subjects including algebra 1, geometry and prealgebra | {"url":"http://www.wyzant.com/Snohomish_County_WA_Math_tutors.aspx","timestamp":"2014-04-17T19:07:28Z","content_type":null,"content_length":"61319","record_id":"<urn:uuid:c5fa4941-88c5-44be-9750-bc38c8dae4b4>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00212-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find a Linden, NJ Prealgebra Tutor
...B. Saul Agricultural and Murrell Dobbins High Schools in Philadelphia. My previous experiences with education have been through The Johns Hopkins University CTY Summer Program as a Resident
Assistant and Teaching Assistant.
26 Subjects: including prealgebra, calculus, statistics, writing
...If you have any questions that haven't been answered above, please feel free to ask me! I look forward to hearing from you! Most secondary level math subjects rely on skills learned in algebra
9 Subjects: including prealgebra, calculus, geometry, algebra 1
...I have a 98% client satisfaction rate and 96% of my students have gained admission to their top choice schools. Most importantly, every student has succeeded qualitatively rather than just
quantitatively, gaining confidence, resourcefulness, and strength of character. My tutoring philosophy begins and ends with the student's sense of self, which I endeavor to illuminate and enrich.
36 Subjects: including prealgebra, reading, Spanish, English
...Since the age of 6, I have prided myself on hustle and sportsmanship. I played 4 years of varsity baseball for Matawan Regional High School, and I am a former player for Stevens Institute of
Technology Division 3 baseball. I am a lefty pitcher and outfielder, with experience at all positions, even shortstop and third base.
46 Subjects: including prealgebra, chemistry, writing, calculus
Hi! I love learning, and can make material engaging and accessible whether you're 8 or 80. I'm a former 5th and 6th grade teacher, and work with middle schoolers in my full-time job, so I'm very
familiar with all elementary and middle school subjects (especially math). I graduated cum laude from a...
16 Subjects: including prealgebra, reading, French, writing
A question about local connectedness in metric spaces
Must every compact and connected metric space be locally connected at at least one of its points?
Umm.. I just noticed that the answer to this was given more than a year ago to another question made by you on MO: mathoverflow.net/questions/36488/…
Tapio Rajala Jan 20 '12 at 13:53
There is a "folklore" counterexample. Peter Nyikos gives the construction here (see the last paragraph for the compactness).
I assume it's folklore, but people more versed in this sort of thing may know differently...
Todd Eisworth Jan 19 '12 at 21:22
Nyikos' example seems fine, but I think he incorrectly uses positive slopes for the "left fan" and negative slopes for the "right fan." I see it making more sense with positive
slopes in both.
Matt Brin Jan 20 '12 at 0:08
It's one of those examples where the picture is much easier to comprehend than the description.
Todd Eisworth Jan 20 '12 at 0:17
And yes, the picture would seem to have slopes being positive in both fans.
Todd Eisworth Jan 20 '12 at 2:30
In the product of the closed unit interval $I$ with the Cantor set $C$, identify $(0,x)$ and $\big(1,f(x)\big)$ where $f(x):=3x \bmod 1$.
The resulting space $X$ is the mapping torus of the continuous map $f:C\to C$. It is a compact metric space locally homeomorphic to $]0,1[\times C$, thus not locally connected at any point. The end-points of $I$ play no special role; we may equivalently obtain $X$ as a larger quotient, $(\mathbb{R}\times C) / \{ (t,x)=\big(t+1,f(x)\big) \}$. The important feature of the map $f:C\to C$ is that it has a dense orbit $f^n(x_0)$. This is easily seen as $f$ is conjugate to the left-shift map $(c_1,c_2,\dots)\mapsto(c_2,c_3,\dots)$ on the space ${\bf 2}^\mathbb{N}$ of binary strings, which is just how $f$ acts on the digit representations of points of the Cantor set. As a consequence, the image of $\mathbb{R}\times \{x_0\}$ in the latter quotient is a path-connected dense subset of $X$, which is therefore connected.
edit. Actually, such spaces are quite common in dynamical systems; another example is the Smale-Williams solenoid and several strange attractors.
The two examples given so far do not contain an illustration of the set. So here is one drawn with a "Cantor-pen": [figure omitted]
No, examples abound, e.g., the $\sin\frac1x$-curve, i.e., the closure of $\lbrace \sin\frac1x : 0 < x \le 1\rbrace$ in the plane. As noted below, this is not a good example (I misread the question).
However, every indecomposable continuum, such as Knaster's bucket-handle continuum, is an example because every proper subcontinuum is nowhere dense. The pseudo-arc is, of course, the ultimate example.
not this one: it has tons of points with locally connected neighbourhoods: anything off the $y$ axis
Anthony Quas Jan 19 '12 at 22:01
Ah yes, I misread the question but I added the bucket handle just to be sure. There's also the pseudoarc of course.
KP Hart Jan 20 '12 at 9:08
I am really embarassed and must apologize to everyone for failing to keep track of some of my past questions-in particular No.36488. As Tapio Rajala has pointed out, this question is
almost a duplicate of my present question and the very neat answer to it given by Victor Protsak also answers my present question-which should probably be closed out on the grounds of
almost duplicating a previous question. Many thanks to all of you for your excellent answers which I was never quite able to discover for myself.
Garabed Gulbenkian Jan 21 '12 at 19:27
Andrew D. Gordon: Details of Publications
· Cryptographic Verification by Typing for a Sample Protocol Implementation. C. Fournet and K. Bhargavan. In Foundations of Security Analysis and Design VI (FOSAD), Bertinoro, 2010. Springer LNCS.
Type systems are effective tools for verifying the security of cryptographic protocols and implementations. They provide automation, modularity and scalability, and have been applied to large
protocols. In this tutorial, we illustrate the use of types for verifying authenticity properties, first using a symbolic model of cryptography, then relying on a concrete computational assumption.
We introduce refinement types (that is, types carrying formulas to record invariants) for programs written in F# and verified by F7, an SMT-based type checker. We describe a sample authenticated RPC
protocol, we implement it in F#, and we specify its security against active adversaries. We develop a sample symbolic library, we present its main cryptographic invariants, and we show that our RPC
implementation is perfectly secure when linked to this symbolic library. We implement the same library using concrete cryptographic primitives, we make a standard computational assumption, and we
show that our RPC implementation is also secure with overwhelming probability when linked to this concrete library.
The Bayesian approach to machine learning amounts to inferring posterior distributions of random variables from a probabilistic model of how the variables are related (that is, a prior distribution)
and a set of observations of variables. There is a trend in machine learning towards expressing Bayesian models as probabilistic programs. As a foundation for this kind of programming, we propose a
core functional calculus with primitives for sampling prior distributions and observing variables. We define combinators for distribution transformers, based on theorems in measure theory, and use
these to give a rigorous semantics to our core calculus. The original features of our semantics include its support for discrete, continuous, and hybrid distributions, and observations of
zero-probability events. We compile our core language to a small imperative language that in addition to the distribution transformer semantics also has a straightforward semantics via factor
graphs, data structures that enable many efficient inference algorithms. We then use an existing inference engine for efficient approximate inference of posterior marginal distributions, treating
thousands of observations per second for large instances of realistic models.
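As a concrete illustration of the sample/observe idiom described above, here is a minimal sketch in Haskell (not the paper's calculus, whose measure-transformer semantics also covers continuous and hybrid distributions): a discrete distribution is modelled as a weighted list of outcomes, and observation zeroes out the weight of branches that contradict the evidence.

    newtype Dist a = Dist { runDist :: [(a, Double)] }

    instance Functor Dist where
      fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

    instance Applicative Dist where
      pure x = Dist [(x, 1)]
      Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

    instance Monad Dist where
      Dist xs >>= k = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x) ]

    coin :: Dist Bool                 -- sampling a fair-coin prior
    coin = Dist [(True, 0.5), (False, 0.5)]

    observe :: Bool -> Dist ()        -- conditioning: a failed observation gets weight 0
    observe b = Dist [((), if b then 1 else 0)]

    posterior :: Dist Bool            -- the first coin, given that at least one flip is heads
    posterior = do { x <- coin; y <- coin; observe (x || y); return x }

    normalise :: Dist a -> [(a, Double)]
    normalise (Dist xs) = [ (x, p / total) | (x, p) <- xs ]
      where total = sum (map snd xs)

Summing the normalised weights of the True outcomes of posterior gives 2/3, the expected posterior probability.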
A special session of the symposium pays tribute to Robin Milner; the session features talks by Robert Harper (Carnegie Mellon University), John Harrison (Intel), and Alan Jeffrey (Bell Labs), and is
organised by Andrew D. Gordon (Microsoft Research) and Peter Sewell (University of Cambridge).
We study a first-order functional language with the novel combination of the ideas of refinement type (the subset of a type whose values satisfy a Boolean expression) and type-test (a Boolean expression
testing whether a value belongs to a type). Our core calculus can express a rich variety of typing idioms; for example, intersection, union, negation, singleton, nullable, variant, and algebraic
types are all derivable. We formulate a semantics in which expressions denote terms, and types are interpreted as first-order logic formulas. Subtyping is defined as valid implication between the
semantics of types. The formulas are interpreted in a specific model that we axiomatize using standard first-order theories. On this basis, we present a novel type-checking algorithm able to
eliminate many dynamic tests and to detect many errors statically. The key idea is to rely on an SMT solver to compute subtyping efficiently. Moreover, using an SMT solver allows us to show the
uniqueness of normal forms for non-deterministic expressions, to provide precise counterexamples when type-checking fails, to detect empty types, and to compute instances of types statically and at
We propose a method for verifying the security of protocol implementations. Our method is based on declaring and enforcing invariants on the usage of cryptography. To this end, we develop
cryptographic libraries that embed a logic model of their cryptographic structures, and specify pre- and post-conditions on their functions so as to maintain their invariants. We present a theory to
justify the soundness of modular code verification via our method. We implement the method for protocols coded in F# and verified using F7, our SMT-based typechecker for refinement types, that is,
types including formulas to record invariants. As illustrated by a series of programming examples, our method can flexibly deal with a range of different cryptographic constructions and protocols. We
evaluate the method on a series of larger case studies of protocol code, previously checked using a whole-program analysis based on ProVerif, a leading verifier for cryptographic protocols. Our
results indicate that compositional verification by typechecking with refinement types is more scalable than the best domain-specific analysis currently available for cryptographic code.
A refinement type {x:T|C} is the subset of the type T consisting of the values x that satisfy the formula C. In these tutorial notes we explain the principles of refinement types by developing from
first principles a concurrent lambda-calculus whose type system supports refinement types. Moreover, we describe a series of applications of our refined type theory and of related systems.
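The F7 typechecker applies this theory to F# source code. As a rough analogue that readers can run, LiquidHaskell expresses the same {x:T|C} idea as refinement annotations on Haskell (this illustrates the concept only; it is not F7's concrete syntax):

    {-@ type Nat = {v:Int | 0 <= v} @-}

    {-@ abs' :: Int -> Nat @-}
    abs' :: Int -> Int
    abs' n = if n < 0 then negate n else n

An SMT solver discharges the obligation that both branches of abs' return a non-negative result.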
We address the problem of reasoning about Haskell programs that use Software Transactional Memory (STM). As a motivating example, we consider Haskell code for a concurrent non-deterministic tree
rewriting algorithm implementing the operational semantics of the ambient calculus. The core of our theory is a uniform model, in the spirit of process calculi, of the run-time state of
multi-threaded STM Haskell programs. The model was designed to simplify both local and compositional reasoning about STM programs. A single reduction relation captures both pure functional
computations and also effectful computations in the STM and I/O monads. We state and prove liveness, soundness, completeness, safety, and termination properties relating source processes and their
Haskell implementation. Our proof exploits various ideas from concurrency theory, such as the bisimulation technique, but in the setting of a widely used programming language rather than an abstract
process calculus. Additionally, we develop an equational theory for reasoning about STM Haskell programs, and establish for the first time equations conjectured by the designers of STM Haskell. We
conclude that using a pure functional language extended with STM facilitates reasoning about concurrent implementation code.
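For readers unfamiliar with STM Haskell, the primitives the paper reasons about look as follows (a standard example, not code from the paper); retry aborts the transaction and blocks it until a variable it read changes:

    import Control.Concurrent.STM

    -- Atomically move n units between two transactional variables.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to n = do
      b <- readTVar from
      if b < n then retry else do
        writeTVar from (b - n)
        modifyTVar' to (+ n)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)
      readTVarIO a >>= print   -- prints 60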
Behavioural type and effect systems regulate properties such as adherence to object and communication protocols, dynamic security policies, avoidance of race conditions, and many others. Typically,
each system is based on some specific syntax of constraints, and is checked with an ad hoc solver. Instead, we advocate types refined with first-order logic formulas as a basis for behavioural type
systems, and general purpose automated theorem provers as an effective means of checking programs. In particular, we describe a triple of security-related type systems: for stack inspection, for
history-based access control, and for role-based access control. The three are all instances of a refined state monad. Our semantics allows a precise comparison of the similarities and differences of
these mechanisms. Moreover, the benefit of behavioural type-checking is to rule out the possibility of unexpected security exceptions, a common problem with code-based access control.
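The refined-state-monad view can be sketched in Haskell (our toy model, far cruder than the paper's type systems). For history-based access control, the state is the meet of the static grants of all code run so far; a demand tests membership, so it can never succeed on the strength of rights that some executed component lacked:

    import qualified Data.Set as Set

    type Perm = String

    -- State: the intersection of the static grants of all code run so far.
    newtype AC a = AC { runAC :: Set.Set Perm -> Maybe (a, Set.Set Perm) }

    instance Functor AC where
      fmap f (AC g) = AC (\s -> fmap (\(a, s') -> (f a, s')) (g s))

    instance Applicative AC where
      pure a = AC (\s -> Just (a, s))
      AC f <*> AC g =
        AC (\s -> do { (h, s') <- f s; (a, s'') <- g s'; Just (h a, s'') })

    instance Monad AC where
      AC g >>= k = AC (\s -> do { (a, s') <- g s; runAC (k a) s' })

    -- Entering code with static grant g shrinks the history to the meet.
    runGranted :: Set.Set Perm -> AC a -> AC a
    runGranted g (AC f) = AC (\s -> f (Set.intersection s g))

    -- demand p succeeds only if every piece of code run so far was granted p.
    demand :: Perm -> AC ()
    demand p = AC (\s -> if p `Set.member` s then Just ((), s) else Nothing)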
We develop a reference implementation for a fragment of the API for a Trusted Platform Module. Our code is written in a functional language, suitable for verification with various tools, but is
automatically translated to a subset of C, suitable for interoperability testing with production code, and for inclusion in a specification or standard for the API. One version of our code
corresponds to the widely deployed TPM 1.2 specification, and is vulnerable to a recently discovered dictionary attack; verification of secrecy properties of this version fails producing an attack
trace and highlights an ambiguity in the specification that has security implications. Another version of our code corresponds to a suggested amendment to the TPM 1.2 specification; verification of
this version succeeds. From this case study we conclude that recent advances in tools for verifying implementation code for cryptographic APIs are reaching the point where it is viable to develop
verified reference implementations. Moreover, the published code can be in a widely understood language like C, rather than one of the specialist formalisms aimed at modelling cryptographic
Storing state in the client tier (in forms or cookies, for example) improves the efficiency of a web application, but it also renders the secrecy and integrity of stored data vulnerable to
untrustworthy clients. We study this general problem in the context of the Links multi-tier programming language. Like other systems, Links stores unencrypted application data, including web
continuations, on the client tier; hence, Links is open to attacks that expose secrets, and modify control flow and application data. We characterise these attacks as failures of the general
principle that security properties of multi-tier applications should follow from review of the source code (as opposed to the detailed study of the files compiled for each tier, for example). We
eliminate these threats by augmenting the Links compiler to encrypt and authenticate any data stored on the client. We model this compilation strategy as a translation from a core fragment of the
language to a concurrent lambda-calculus equipped with a formal representation of cryptography. To formalize source-level reasoning about Links programs, we define a type-and-effect system for our
core language; our implementation can machine-check various integrity properties of the source code. By appeal to a recent system of refinement types for secure implementations, we show that our
compilation strategy guarantees all the properties provable by our type-and-effect system.
In authorization, there is often a wish to shift the burden of proof to those making requests, since they may have more resources and more specific knowledge to construct the required proofs. We
introduce an extreme instance of this approach, which we call Code-Carrying Authorization (CCA). With CCA, access-control decisions can partly be delegated to untrusted code obtained at run-time. The
dynamic verification of this code ensures the safety of authorization decisions. We define and study this approach in the setting of a higher-order spi calculus. The type system of this calculus
provides the needed support for static and dynamic verification.
We present a type and effect system for proving correspondence assertions in a pi-calculus with polarized channels, dependent pair types and effect terms. Given a process P and a type environment E,
we describe how to generate constraints that are formulae in the Alternating Least Fixed-Point (ALFP) logic. A reasonable model of the generated constraints yields a type and effect assignment such
that P becomes well-typed with respect to E if and only if this is possible. The formulae generated satisfy a finite model property; a system of constraints is satisfiable if and only if it has a
finite model. As a consequence, we obtain the result that type and effect inference in our system is polynomial-time decidable.
We present the design and implementation of a typechecker for verifying security properties of the source code of cryptographic protocols and access control mechanisms. The underlying type theory is
a lambda-calculus equipped with refinement types for expressing pre- and post-conditions within first-order logic. We derive formal cryptographic primitives and represent active adversaries within
the type theory. Well-typed programs enjoy assertion-based security properties, with respect to a realistic threat model including key compromise. The implementation amounts to an enhanced
typechecker for the general purpose functional language F#; typechecking generates verification conditions that are passed to an SMT solver. We describe a series of checked examples. This is the
first tool to verify authentication properties of cryptographic protocols by typechecking their source code.
Management is one of the main expenses of running the server farms that implement enterprise services, and operator errors can be costly. Our goal is to develop type-safe programming mechanisms for
combining and managing enterprise services, and we achieve this goal in the particular setting of farms of virtual machines. We assume each server is service-oriented, in the sense that the services
it provides, and the external services it depends upon, are explicitly described in metadata. We describe the design, implementation, and formal semantics of a library of combinators whose types
record and respect server metadata. We describe a series of programming examples run on our implementation, based on existing server code for order processing, a typical data centre workload.
The human operators of datacentres work from a manual, sometimes known as the run book, that lists how to perform operating procedures such as the provisioning, deployment, monitoring, and upgrading
of servers. To improve failure and recovery rates, it is attractive to replace human intervention by software, known as operations logic, that automates such operating procedures. We advocate a
declarative programming model for operations logic, and the use of static analysis to detect programming errors, such as the potential for misconfiguration.
We describe reference implementations for selected configurations of the user authentication protocol defined by the Information Card Profile V1.0. Our code can interoperate with existing
implementations of the client, identity provider, and relying party roles of the protocol. We derive formal proofs of security properties for our code using an automated theorem prover. Hence, we
obtain the most substantial examples of verified implementations of cryptographic protocols to date, and the first for any federated identity-management protocols. Moreover, we present a tool that
downloads security policies from services and token servers and compiles them to a verifiably secure client proxy.
We present a graphical semantics for the pi-calculus, that is easier to visualize and better suited to expressing causality and temporal properties than conventional relational semantics. A pi-chart
is a finite directed acyclic graph recording a computation in the pi-calculus. Each node represents a process, and each edge either represents a computation step, or a message-passing interaction.
Pi-charts enjoy a natural pictorial representation, akin to message sequence charts, in which vertical edges represent control flow and horizontal edges represent data flow based on message passing.
A pi-chart represents a single computation starting from its top (the nodes with no ancestors) to its bottom (the nodes with no descendants). Unlike conventional reductions or transitions, the edges
in a pi-chart induce ancestry and other causal relations on processes. We give both compositional and operational definitions of pi-charts, and illustrate the additional expressivity afforded by the
chart semantics via a series of examples, including secrecy properties and usage bounds guaranteed by a type system.
We consider the problem of statically verifying the conformance of the code of a system to an explicit authorization policy. In a distributed setting, some part of the system may be compromised, that
is, some nodes of the system and their security credentials may be under the control of an attacker. To help predict and bound the impact of such partial compromise, we advocate logic-based policies
that explicitly record dependencies between principals. We propose a conformance criterion, safety despite compromised principals, such that an invalid authorization decision at an uncompromised node
can arise only when nodes on which the decision logically depends are compromised. We formalize this criterion in the setting of a process calculus, and present a verification technique based on a
type system. Hence, we can verify policy conformance of code that uses a wide range of the security mechanisms found in distributed systems, ranging from secure channels down to cryptographic
primitives, including secure hashes, encryption, and public-key signatures.
We present a declarative authorization language that strikes a careful balance between syntactic and semantic simplicity, policy expressiveness, and execution efficiency. The syntax is close to
natural language, and the semantics consists of just three deduction rules. The language can express many common policy idioms using constraints, controlled delegation, recursive predicates, and
negated queries. We describe an execution strategy based on translation to Datalog with Constraints, and table-based resolution. We show that this execution strategy is sound, complete, and always
terminates, despite recursion and negation, as long as simple syntactic conditions are met.
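To give the flavour of such policies (the principals and predicates here are invented, and the concrete syntax is only approximated):

    Admin says x can read Projects if x is an employee
    Admin says HR can say x is an employee
    HR says Alice is an employee

From these three assertions, a query asking whether Admin says Alice can read Projects succeeds, via the controlled delegation to HR.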
We describe a new reference implementation of the web services security specifications. The implementation is structured as a library in the functional programming language F#. Applications written
using this library can interoperate with other compliant web services, such as those written using Microsoft WSE and WCF frameworks. Moreover, the security of such applications can be automatically
verified by translating them to the applied pi calculus and using an automated theorem prover. We illustrate the use of our reference implementation through examples drawn from the sample
applications included with WSE and WCF. We formally verify their security properties. We also experimentally evaluate their interoperability and performance.
Proving cryptographic security protocols has been a challenge ever since Needham and Schroeder threw down the gauntlet in their pioneering 1978 paper on authentication protocols: "The need for
techniques to verify the correctness of such protocols is great, and we encourage those interested in such problems to consider this area." By now, there is a wide range of informal and formal
methods that can catch most design errors. Still, as in other areas of software, the trouble is that while practitioners are typically happy for researchers to write formal models of their natural
language specifications and to apply design principles, they are reluctant to do so themselves. In practice, specifications tend to be partial and ambiguous, and the implementation code is the
closest we get to a formal description of most protocols. This motivates the subject of my talk: the relatively new enterprise of adapting formal methods for security protocols to work on code
instead of abstract models. The goal is to lower the practical cost of security protocol verification by eliminating the need to write a separate formal model. I will describe current tools that
partially address the problem, and discuss what remains to be done.
We present an architecture and tools for verifying implementations of security protocols. Our implementations can run with both concrete and symbolic cryptographic libraries. The concrete
implementation is for production and interoperability testing. The symbolic implementation is for debugging and formal verification. We develop our approach for protocols written in F#, a dialect of
ML, and verify them by compilation to ProVerif, a resolution-based theorem prover for cryptographic protocols. We establish the correctness of this compilation scheme, and we illustrate our approach
with protocols for Web Services security.
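The dual-library idea can be sketched as follows (in Haskell rather than the paper's F#, with invented names): protocol code is written against a single interface, instantiated once by symbolic terms for verification and once by real bytes for production.

    -- Symbolic implementation: cryptographic operations build opaque terms,
    -- so a formal adversary can only manipulate them algebraically.
    data SymBytes
      = Literal String
      | Pair SymBytes SymBytes
      | Hash SymBytes
      deriving (Eq, Show)

    class Crypto bytes where
      literal :: String -> bytes
      pair    :: bytes -> bytes -> bytes
      hash    :: bytes -> bytes

    instance Crypto SymBytes where
      literal = Literal
      pair    = Pair
      hash    = Hash

    -- Protocol code is parametric in the implementation.
    mac :: Crypto b => String -> String -> b
    mac key payload = hash (pair (literal key) (literal payload))

A concrete instance would implement hash with a real algorithm from a cryptographic library; it is omitted here to keep the sketch dependency-free.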
We identify common security vulnerabilities found during security reviews of web services with policy-driven security. We describe the design of an advisor for web services security configurations,
the first tool both to identify such vulnerabilities automatically and to offer remedial advice. We report on its implementation as a plugin for Microsoft Web Services Enhancements (WSE).
A realistic threat model for cryptographic protocols or for language-based security should include a dynamically growing population of principals (or security levels), some of which may be
compromised, that is, come under the control of the adversary. We explore such a threat model within a pi-calculus. A new process construct records the ordering between security levels, including the
possibility of compromise. Another expresses the expectation of conditional secrecy of a message - that a particular message is unknown to the adversary unless particular levels are compromised. Our
main technical contribution is the first system of secrecy types for a process calculus to support multiple, dynamically-generated security levels, together with the controlled compromise or
downgrading of security levels. A series of examples illustrates the effectiveness of the type system in proving secrecy of messages, including dynamically-generated messages. It also demonstrates
the improvement over prior work obtained by including a security ordering in the type system. Perhaps surprisingly, the soundness proof for our type system for symbolic cryptography is via a simple
translation into a core typed pi-calculus, with no need to take symbolic cryptography as primitive.
Distributed systems and applications are often expected to enforce high-level authorization policies. To this end, the code for these systems relies on lower-level security mechanisms such as, for
instance, digital signatures, local ACLs, and encrypted communications. In principle, authorization specifications can be separated from code and carefully audited. Logic programs, in particular, can
express policies in a simple, abstract manner. For a given authorization policy, we consider the problem of checking whether a cryptographic implementation complies with the policy. We formalize
authorization policies by embedding logical predicates and queries within a spi-calculus. This embedding is new, simple, and general; it allows us to treat logic programs as specifications of code
using secure channels, cryptography, or a combination. Moreover, we propose a new dependent type system for verifying such implementations against their policies. Using Datalog as an authorization
logic, we show how to type several examples using policies and present a general schema for compiling policies.
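A hypothetical policy of this kind, written in plain Datalog (the predicates are ours, for illustration):

    canRead(U, F) :- owns(U, F).
    canRead(U, F) :- delegated(O, U, F), owns(O, F).

The embedding then lets code authorise an action, here U reading F, only when the corresponding query is derivable from the policy together with the facts established so far.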
WS-Security provides basic means to secure SOAP traffic, one envelope at a time. For typical web services, however, using WS-Security independently for each message is rather inefficient; besides, it
is often important to secure the integrity of a whole session, as well as each message. To these ends, recent specifications provide further SOAP-level mechanisms. WS-SecureConversation introduces
security contexts, which can be used to secure sessions between two parties. WS-Trust specifies how security contexts are issued and obtained. We develop a semantics for the main mechanisms of
WS-Trust and WS-SecureConversation, expressed as a library for TulaFale, a formal scripting language for security protocols. We model typical protocols relying on these mechanisms, and automatically
prove their main security properties. We also informally discuss some limitations of these specifications.
WS-SecurityPolicy is a declarative configuration language for driving web services security mechanisms. We describe a formal semantics for WS-SecurityPolicy, and propose a more abstract link language
for specifying the security goals of web services and their clients. Hence, we present the architecture and implementation of fully automatic tools that (1) compile policy files from link
specifications, and (2) verify by invoking a theorem prover whether a set of policy files run by any number of senders and receivers correctly implements the goals of a link specification, in spite
of active attackers. Policy-driven web services implementations are prone to the usual subtle vulnerabilities associated with cryptographic protocols; our tools help prevent such vulnerabilities, as
we can verify policies when first compiled from link specifications, and also re-verify policies against their original goals after any modifications during deployment.
Web services security specifications are typically expressed as a mixture of XML schemas, example messages, and narrative explanations. We propose a new specification language for writing
complementary machine-checkable descriptions of SOAP-based security protocols and their properties. Our TulaFale language is based on the pi calculus (for writing collections of SOAP processors
running in parallel), plus XML syntax (to express SOAP messaging), logical predicates (to construct and filter SOAP messages), and correspondence assertions (to specify authentication goals of
protocols). Our implementation compiles TulaFale into the applied pi calculus, and then runs Blanchet's resolution-based protocol verifier. Hence, we can automatically verify authentication
properties of SOAP protocols.
We present a new static analysis to help identify security defects in class libraries for runtimes, such as JVMs or the CLR, that rely on stack inspection for access control. Our tool inputs a set of
class libraries plus a description of the permissions granted to unknown, potentially hostile code. It constructs a permission-sensitive call graph, which can be queried to identify potential
defects. We describe the tool architecture, various examples of security queries, and a practical implementation that analyses large pre-existing libraries for the CLR. We also develop a new formal
model of the essentials of access control in the CLR (types, classes and inheritance, access modifiers, permissions, and stack inspection). In this model, we state and prove the correctness of the analysis.
We consider the problem of specifying and verifying cryptographic security protocols for XML web services. The security specification WS-Security describes a range of XML security tokens, such as
username tokens, public-key certificates, and digital signature blocks, amounting to a flexible vocabulary for expressing protocols. To describe the syntax of these tokens, we extend the usual XML
data model with symbolic representations of cryptographic values. We use predicates on this data model to describe the semantics of security tokens and of sample protocols distributed with the WSE
implementation of WS-Security. By embedding our data model within Abadi and Fournet's applied pi calculus, we formulate and prove security properties with respect to the standard Dolev-Yao threat
model. Moreover, we informally discuss issues not addressed by the formal model. To the best of our knowledge, this is the first approach to the specification and verification of security protocols
based on a faithful account of the XML wire format.
We consider a propositional spatial logic for finite trees. The logic includes A|B (spatial composition), A|>B (the implication induced by composition), and 0 (the unit of composition). We show
that the satisfaction and validity problems are equivalent, and decidable. The crux of the argument is devising a finite enumeration of trees to consider when deciding whether a spatial implication
is satisfied. We introduce a sequent calculus for the logic, and show it to be sound and complete with respect to an interpretation in terms of satisfaction. Finally, we describe a complete proof
procedure for the sequent calculus. We envisage applications in the area of logic-based type systems for semistructured data. We describe a small programming language based on this idea.
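For orientation, the satisfaction clauses for composition and its adjunct (writing F for a finite tree and ≡ for structural congruence) are:

    F |= A | B    iff  F ≡ F1 | F2 for some F1, F2 with F1 |= A and F2 |= B
    F |= A |> B   iff  for every F' with F' |= A, we have F | F' |= B

The finite enumeration mentioned above is what makes the universal quantification in the second clause decidable.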
An XML web service is, to a first approximation, an RPC service in which requests and responses are encoded in XML as SOAP envelopes, and transported over HTTP. We consider the problem of
authenticating requests and responses at the SOAP-level, rather than relying on transport-level security. We propose a security abstraction, inspired by earlier work on secure RPC, in which the
methods exported by a web service are annotated with one of three security levels: none, authenticated, or both authenticated and encrypted. We model our abstraction as an object calculus with
primitives for defining and calling web services. We describe the semantics of our object calculus by translating to a lower-level language with primitives for message passing and cryptography. To
validate our semantics, we embed correspondence assertions that specify the correct authentication of requests and responses. By appeal to the type theory for cryptographic protocols of Gordon and
Jeffrey's Cryptyc, we verify the correspondence assertions simply by typing. Finally, we describe an implementation of our semantics via custom SOAP headers.
Both one-to-one and one-to-many correspondences between events, sometimes known as injective and non-injective agreements, respectively, are widely used to specify correctness properties of
cryptographic protocols. In earlier work, we showed how to typecheck one-to-one correspondences for protocols expressed in the spi-calculus. We present a new type and effect system able to verify
both one-to-one and one-to-many correspondences.
Operational models of fragments of the Java Virtual Machine and the .NET Common Language Runtime have been the focus of considerable study in recent years, and of particular interest have been
specifications and machine-checked proofs of type soundness. In this paper we aim to increase the level of automation used when checking type soundness for these formalizations. We present a
semi-automated technique for reducing a range of type soundness problems to a form that can be automatically checked using a decidable first-order theory. Deciding problems within this fragment is
exponential in theory but is often efficient in practice, and the time required for proof checking can be controlled by further hints from the user. We have applied this technique to two case
studies, both of which are type soundness properties for subsets of the .NET Common Language Runtime. These case studies have in turn aided us in our informal analysis of that system.
We present the first type and effect system for proving authenticity properties of security protocols based on asymmetric cryptography. The most significant new features of our type system are: (1) a
separation of public types (for data possibly sent to the opponent) from tainted types (for data possibly received from the opponent) via a subtype relation; (2) trust effects, to guarantee that
tainted data does not, in fact, originate from the opponent; and (3) challenge/response types to support a variety of idioms used to guarantee message freshness. We illustrate the applicability of
our system via protocol examples.
We define a finite-control fragment of the ambient calculus, a formalism for describing distributed and mobile computations. A series of examples demonstrates the expressiveness of our fragment. In
particular, we encode the choice-free, finite-control, synchronous pi-calculus. We present an algorithm for model checking this fragment against the ambient logic (without composition adjunct). This
is the first proposal of a model checking algorithm for ambients to deal with recursively-defined, possibly nonterminating, processes. Moreover, we show that the problem is PSPACE-complete, like
other fragments considered in the literature. Finite-control versions of other process calculi are obtained via various syntactic restrictions. Instead, we rely on a novel type system that bounds the
number of active ambients and outputs in a process; any typable process has only a finite number of derivatives.
Stack inspection is a security mechanism implemented in runtimes such as the JVM and the CLR to accommodate components with diverse levels of trust. Although stack inspection enables the fine-grained
expression of access control policies, it has rather a complex and subtle semantics. We present a formal semantics and an equational theory to explain how stack inspection affects program behaviour
and code optimisations. We discuss the security properties enforced by stack inspection, and also consider variants with stronger, simpler properties.
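A toy model of the mechanism (ours, not the paper's formal semantics): a demand for permission p walks the call stack from the most recent frame towards its base, failing at any frame whose code was not granted p, and succeeding early at a frame that asserted p.

    type Perm = String
    data Frame = Frame { granted :: [Perm], asserted :: [Perm] }

    -- The stack is listed most-recent frame first.
    demand :: Perm -> [Frame] -> Bool
    demand p = go
      where
        go []       = True                        -- reached the base: grant
        go (f : fs)
          | p `notElem` granted f = False         -- an unauthorised caller is on the stack
          | p `elem` asserted f   = True          -- an assert stops the walk
          | otherwise             = go fs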
The ambient calculus is a concurrent calculus where the unifying notion of `ambient' is used to model many different constructs for distributed and mobile computation. We study a type system that
describes several properties of ambient behavior. The type system allows ambients to be partitioned in disjoint sets (groups), according to the intended design of a system, in order to specify both
the communication and the mobility behavior of ambients.
There is great interest in applying nominal calculi---computational formalisms that include dynamic name generation---to the problems of programming, specifying, and verifying secure and mobile
computations. These notes introduce three nominal calculi---the pi calculus, the spi calculus, and the ambient calculus. We describe some typical techniques, and survey related work.
We propose a new method to check authenticity properties of cryptographic protocols. First, code up the protocol in the spi-calculus of Abadi and Gordon. Second, specify authenticity properties by
annotating the code with correspondence assertions in the style of Woo and Lam. Third, figure out types for the keys, nonces, and messages of the protocol. Fourth, check that the spi-calculus code is
well-typed according to a novel type and effect system presented in this paper. Our main theorem guarantees that any well-typed protocol is robustly safe, that is, its correspondence assertions are
true in the presence of any opponent expressible in spi. It is feasible to apply this method by hand to several well-known cryptographic protocols. It requires little human effort per protocol, puts
no bound on the size of the opponent, and requires no state space enumeration. Moreover, the types for protocol data provide some intuitive explanation of how the protocol works. This paper describes
our method and gives some simple examples. Our method has led us to the independent rediscovery of flaws in existing protocols and to the design of improved protocols.
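Concretely, a correspondence assertion annotates the protocol with begin- and end-events; a run is safe when every end-event is matched by a preceding begin-event with the same label (by a distinct one, for one-to-one correspondences). For a one-message protocol, schematically:

    A: begin (A, B, m); send {m}K       -- A declares the event it intends
    B: receive {m}K; end (A, B, m)      -- B asserts that the event occurred

Robust safety then demands that this matching hold in every run, even when the protocol executes alongside an arbitrary opponent process.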
Woo and Lam propose correspondence assertions for specifying authenticity properties of security protocols. The only prior work on checking correspondence assertions depends on model-checking and is
limited to finite-state systems. We propose a dependent type and effect system for checking correspondence assertions. Since it is based on type-checking, our method is not limited to finite-state
systems. This paper presents our system in the simple and general setting of the pi-calculus. We show how to type-check correctness properties of example communication protocols based on secure channels.
We extend the modal logic of ambients to the full ambient calculus, including name restriction. We introduce logical operators that can be used to make assertions about restricted names, and we study
their properties.
We settle the complexity bounds of the model checking problem for the replication-free ambient calculus with public names against the ambient logic without parallel adjunct. We show that the problem
is PSPACE-complete. For the complexity upper-bound, we devise a new representation of processes that remains of polynomial size during process execution; this allows us to keep the model checking
procedure in polynomial space. Moreover, we prove PSPACE-hardness of the problem for several quite simple fragments of the calculus and the logic; this suggests that there are no interesting
fragments with polynomial-time model checking algorithms.
The Microsoft .NET Framework is a new computing architecture designed to support a variety of distributed applications and web-based services. .NET software components are typically distributed in an
object-oriented intermediate language, Microsoft IL, executed by the Microsoft Common Language Runtime. To allow convenient multi-language working, IL supports a wide variety of high-level language
constructs, including class-based objects, inheritance, garbage collection, and a security mechanism based on type safe execution. This paper precisely describes the type system for a substantial
fragment of IL that includes several novel features: certain objects may be allocated either on the heap or on the stack; those on the stack may be boxed onto the heap, and those on the heap may be
unboxed onto the stack; methods may receive arguments and return results via typed pointers, which can reference both the stack and the heap, including the interiors of objects on the heap. We
present a formal semantics for the fragment. Our typing rules determine well-typed IL instruction sequences that can be assembled and executed. Of particular interest are rules to ensure no pointer
into the stack outlives its target. Our main theorem asserts type safety, that well-typed programs in our IL fragment do not lead to untrapped execution errors. Our main theorem does not directly
apply to the product. Still, the formal system of this paper is an abstraction of informal and executable specifications we wrote for the full product during its development. Our informal
specification became the basis of the product team's working specification of type-checking. The process of writing this specification, deploying the executable specification as a test oracle, and
applying theorem proving techniques, helped us identify several security critical bugs during development.
We present a commitment relation, a kind of labeled transition system, for the ambient calculus. This note is an extract from an unpublished annex to our original article on the ambient calculus.
We show that the typed region calculus of Tofte and Talpin can be encoded in a typed pi-calculus equipped with name groups and a novel effect analysis. In the region calculus, each boxed value has a
statically determined region in which it is stored. Regions are allocated and de-allocated according to a stack discipline, thus improving memory management. The idea of name groups arose in the
typed ambient calculus of Cardelli, Ghelli, and Gordon. There, and in our pi-calculus, each name has a statically determined group to which it belongs. Groups allow for type-checking of certain
mobility properties, as well as effect analyses. Our encoding makes precise the intuitive correspondence between regions and groups. We propose a new formulation of the type preservation property of
the region calculus, which avoids Tofte and Talpin's rather elaborate co-inductive formulation. We prove the encoding preserves the static and dynamic semantics of the region calculus. Our proof of
the correctness of region de-allocation shows it to be a specific instance of a general garbage collection principle for the pi-calculus with effects.
We add an operation of group creation to the typed pi-calculus, where a group is a type for channels. Creation of fresh groups has the effect of statically preventing certain communications, and can
block the accidental or malicious leakage of secrets. Intuitively, no channel belonging to a fresh group can be received by processes outside the initial scope of the group. We formalize this
intuition by adapting a notion of secrecy introduced by Abadi, and proving a preservation of secrecy property.
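Schematically, in notation approximating the paper's (where G[T] types a channel of group G carrying values of type T):

    (new G)(new c:G[T])( c<M> | c(x).P )

Because G is fresh, the type system ensures that c is never received by processes outside the initial scope of (new G); a message M sent only on c therefore stays secret from such processes.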
We add name groups and group creation to the typed ambient calculus. Group creation is surprisingly interesting: it has the effect of statically preventing certain communications, and can thus block
the accidental or malicious escape of capabilities that is a major concern in practical systems. Moreover, ambient groups allow us to refine our earlier work on type systems for ambient mobility. We
present type systems in which groups identify the set of ambients that a process may cross or open.
The Ambient Calculus is a process calculus where processes may reside within a hierarchy of locations and modify it. The purpose of the calculus is to study mobility, which is seen as the change of spatial configurations over time. In order to describe properties of mobile computations we devise a modal logic that can talk about space as well as time, and that has the Ambient Calculus as a model.
An ambient is a named cluster of processes and subambients, which moves as a group. The untyped ambient calculus is a process calculus in which ambients model a variety of concepts such as network
nodes, packets, channels, and software agents. In these models, some ambients are intended to be mobile, some immobile; and some are intended to be ephemeral, some persistent. We describe type
systems able to formalize these intentions: they can guarantee that an ambient will remain immobile, and that an ambient will not be dissolved by its environment. These guarantees could help
establish security properties of models, for instance. A novel feature of our type systems is their distinction between mobile and immobile processes.
The ambient calculus is a process calculus for describing mobile computation. We develop a theory of Morris-style contextual equivalence for proving properties of mobile ambients. We prove a context
lemma that allows derivation of contextual equivalences by considering contexts of a particular limited form, rather than all arbitrary contexts. We give an activity lemma that characterizes the
possible interactions between a process and a context. We prove several examples of contextual equivalence. The proofs depend on characterizing reductions in the ambient calculus in terms of a
labelled transition system.
Java has demonstrated the utility of type systems for mobile code, and in particular their use and implications for security. Security properties rest on the fact that a well-typed Java program (or
the corresponding verified bytecode) cannot cause certain kinds of damage. In this paper we provide a type system for mobile computation, that is, for computation that is continuously active before
and after movement. We show that a well-typed mobile computation cannot cause certain kinds of run-time fault: it cannot cause the exchange of values of the wrong kind, anywhere in a mobile system.
We obtain a new formalism for concurrent object-oriented languages by extending Abadi and Cardelli's imperative object calculus with operators for concurrency from the pi-calculus and with operators
for synchronisation based on mutexes. Our syntax of terms is extremely expressive; in a precise sense it unifies notions of expression, process, store, thread, and configuration. We present a
chemical-style reduction semantics, and prove it equivalent to a structural operational semantics. We identify a deterministic fragment that is closed under reduction and show that it includes the
imperative object calculus. A collection of type systems for object-oriented constructs is at the heart of Abadi and Cardelli's work. We recast one of Abadi and Cardelli's first-order type systems
with object types and subtyping in the setting of our calculus and prove subject reduction. Since our syntax of terms includes both stores and running expressions, we avoid the need to separate store
typing from typing of expressions. We translate communication channels and the choice-free asynchronous pi-calculus into our calculus to illustrate its expressiveness; the types of read-only and
write-only channels are supertypes of read-write channels.
We introduce a definition of bisimulation for cryptographic protocols. The definition includes a simple and precise model of the knowledge of the environment with which a protocol interacts.
Bisimulation is the basis of an effective proof technique, which yields proofs of classical security properties of protocols and also justifies certain protocol optimisations. The setting for our
work is the spi calculus, an extension of the pi calculus with cryptographic primitives. We prove the soundness of the bisimulation proof technique within the spi calculus.
We introduce a calculus describing the movement of processes and devices, including movement through administrative domains.
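For reference, the three characteristic reduction rules of the calculus, for entry, exit, and dissolution, are standardly presented as:

    n[ in m. P | Q ] | m[ R ]   ->  m[ n[ P | Q ] | R ]
    m[ n[ out m. P | Q ] | R ]  ->  n[ P | Q ] | m[ R ]
    open n. P | n[ Q ]          ->  P | Q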
We introduce the spi calculus, an extension of the pi calculus designed for the description and analysis of cryptographic protocols. We show how to use the spi calculus, particularly for studying
authentication protocols. The pi calculus (without extension) suffices for some abstract protocols; the spi calculus enables us to consider cryptographic issues in more detail. We represent protocols
as processes in the spi calculus and state their security properties in terms of coarse-grained notions of protocol equivalence.
We survey several definitions of operational equivalence from studies of the lambda-calculus in the setting of the sigma-calculus, Abadi and Cardelli's untyped object calculus. In particular, we
study the relationship between the following: the equational theory induced by the primitive semantics; Morris-style contextual equivalence; experimental equivalence (the equivalence implicit in a
Milner-style context lemma); and the form of Abramsky's applicative bisimilarity induced by Howe's format.
We repeat this study in the setting of Abadi and Cardelli's polymorphic object calculus obtained by enriching system Fsub with primitive covariant self types for objects. In particular, we obtain for
the first time a co-inductive characterisation of contextual equivalence for an object calculus with subtyping, parametric polymorphism, variance annotations and structural typing rules. Soundness of
the equational theory induced by the primitive semantics of the calculus has not been proved denotationally, because structural typing rules invalidate conventional denotational models. Instead, we
show soundness of the equational theory using operational techniques.
We adopt the untyped imperative object calculus of Abadi and Cardelli as a minimal setting in which to study problems of compilation and program equivalence that arise when compiling object-oriented
languages. We present both a big-step and a small-step substitution-based operational semantics for the calculus and prove them equivalent to the closure-based operational semantics given by Abadi
and Cardelli. Our first result is a direct proof of the correctness of compilation to a stack-based abstract machine via a small-step decompilation algorithm. Our second result is that contextual
equivalence of objects coincides with a form of Mason and Talcott's CIU equivalence; the latter provides a tractable means of establishing operational equivalences. Finally, we prove correct an
algorithm, used in our prototype compiler, for statically resolving method offsets. This is the first study of correctness of an object-oriented abstract machine, and of operational equivalence for
the imperative object calculus.
The spi calculus is an extension of the pi calculus with constructs for encryption and decryption. This paper develops the theory of the spi calculus, focusing on techniques for establishing testing
equivalence, and applying these techniques to the proof of authenticity and secrecy properties of cryptographic protocols.
Needham defines a pure name to be ``nothing but a bit pattern that is an identifier, and is only useful for comparing for identity with other bit patterns---which includes looking up in tables in
order to find other information''. In this paper, we argue that pure names are relevant to both security and mobility. Nominal calculi are computational formalisms that include pure names and the
dynamic generation of fresh, unguessable names. We survey recent work on nominal calculi with primitives representing location failure, process migration and cryptography, and suggest areas for
further work.
We present five axioms of name-carrying lambda-terms identified up to alpha-conversion---that is, up to renaming of bound variables. We assume constructors for constants, variables, application and
lambda-abstraction. Other constants represent a function Fv that returns the set of free variables in a term and a function that substitutes a term for a variable free in another term. Our axioms are
(1) equations relating Fv and each constructor, (2) equations relating substitution and each constructor, (3) alpha-conversion itself, (4) unique existence of functions on lambda-terms defined by
structural iteration, and (5) construction of lambda-abstractions given certain functions from variables to terms. By building a model from de Bruijn's nameless lambda-terms, we show that our five
axioms are a conservative extension of HOL. Theorems provable from the axioms include distinctness, injectivity and an exhaustion principle for the constructors, principles of structural induction
and primitive recursion on lambda-terms, Hindley and Seldin's substitution lemmas and the existence of their length function. These theorems and the model have been mechanically checked in the
Cambridge HOL system.
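For instance, the group (1) equations relating Fv to the constructors are the familiar clauses:

    Fv (Con c)   = {}
    Fv (Var x)   = {x}
    Fv (App t u) = Fv t ∪ Fv u
    Fv (Lam x t) = Fv t \ {x}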
Some applications are most easily expressed in a programming language that supports concurrency, notably interactive and distributed systems. We propose extensions to the purely-functional language
Haskell that allows it to express explicitly concurrent applications; we call the resulting language Concurrent Haskell. The resulting system appears to be both expressive and efficient, and we give
a number of examples of useful abstractions that can be built from our primitives. We have developed a freely-available implementation of Concurrent Haskell, and are now using it as a substrate for a
graphical user interface toolkit.
Bisimilarity (also known as `applicative bisimulation') has attracted a good deal of attention as an operational equivalence for lambda-calculi. It approximates or even equals Morris-style contextual
equivalence and admits proofs of program equivalence via co-induction. It has an elementary construction from the operational definition of a language. We consider bisimilarity for one of the typed
object calculi of Abadi and Cardelli. By defining a labelled transition system for the calculus in the style of Crole and Gordon and using a variation of Howe's method we establish two central
results: that bisimilarity is a congruence, and that it equals contextual equivalence. So two objects are bisimilar iff no amount of programming can tell them apart. Our third contribution is to show
that bisimilarity soundly models the equational theory of Abadi and Cardelli. This is the first study of contextual equivalence for an object calculus and the first application of Howe's method to
subtyping. By these results, we intend to demonstrate that operational methods are a promising new direction for the foundations of object-oriented programming.
We describe the design and use of monadic I/O in Haskell 1.3, the latest revision of the lazy functional programming language Haskell. Haskell 1.3 standardises the monadic I/O mechanisms now
available in many Haskell systems. The new facilities allow more sophisticated text-based application programs to be written portably in Haskell. Apart from the use of monads, the main advances over
standard Haskell 1.2 are: character I/O based on handles (analogous to ANSI C file pointers), an error handling mechanism, terminal interrupt handling and a POSIX interface. The standard also
provides implementors with a flexible framework for extending Haskell to incorporate new language features. In addition to a tutorial description of the new facilities this paper includes a worked
example: a monad for combinator parsing which is based on the standard I/O monad.
Operational intuition is central to computer science. These notes introduce functional programmers to operational semantics and operational equivalence. We show how the idea of bisimilarity from CCS
may be applied to deterministic functional languages. On elementary operational grounds it justifies equational reasoning and proofs about infinite streams.
Morris-style contextual equivalence---invariance of termination under any context of ground type---is the usual notion of operational equivalence for deterministic functional languages such as FPC
(PCF plus sums, products and recursive types). Contextual equivalence is hard to establish directly. Instead we define a labelled transition system for call-by-name FPC (and variants) and prove that
CCS-style bisimilarity equals contextual equivalence---a form of operational extensionality. Using co-induction we establish equational laws for FPC. By considering variations of Milner's `
bisimulations up to ~' we obtain a second co-inductive characterisation of contextual equivalence in terms of reduction behaviour and production of values. Hence we use co-inductive proofs to
establish contextual equivalence in a series of stream-processing examples. Finally, by proving a context lemma we show how Milner's original term-model can be extended to FPC, but in fact our form
of bisimilarity supports simpler co-inductive proofs.
We study the longstanding problem of semantics for input/output (I/O) expressed using side-effects. Our vehicle is a small higher-order imperative language, O, with operations for interactive character
I/O and based on ML syntax. Unlike previous theories, we present both operational and denotational semantics for I/O effects. We use a novel labelled transition system that uniformly expresses both
applicative and imperative computation. We make a standard definition of bisimilarity and prove it is a congruence using Howe's method.
Next, we define a metalogical type theory M in which we may give a denotational semantics to O. M generalises Crole and Pitts' FIX-logic by adding a parameterised recursive datatype, which is used to
model I/O. M comes equipped both with judgements of equality of expressions, and an operational semantics; M itself is given a domain-theoretic semantics in the category CPPO of cppos (bottom-pointed
posets with joins of omega chains) and Scott continuous functions. We use the CPPO semantics to prove that the equational theory is computationally adequate for the operational semantics using formal
approximation relations. The existence of such relations uses key ideas from Pitts' recent work.
A monadic-style textual translation into M induces a denotational semantics on O. Our final result justifies metalogical reasoning: if the denotations of two O programs are equal in M then the O
programs are in fact operationally equivalent.
Co-induction is an important tool for reasoning about unbounded structures. This tutorial explains the foundations of co-induction, and shows how it justifies intuitive arguments about lazy streams,
of central importance to lazy functional programmers. We explain from first principles a theory based on a new formulation of bisimilarity for functional programs, which coincides exactly with
Morris-style contextual equivalence. We show how to prove properties of lazy streams by co-induction and derive Bird and Wadler's Take Lemma, a well-known proof technique for lazy streams.
We present a new strategy for representing syntax in a mechanised logic. We define an underlying type of de Bruijn terms, define an operation of named lambda-abstraction, and hence inductively define
a set of conventional name-carrying terms. The result is a mechanisation of the practice of most authors studying formal calculi: to work with conventional name-carrying notation and substitution,
but to identify terms up to alpha-conversion. This strategy falls between most previous works, which either treat bound variable names literally or dispense with them altogether. The theory has been
implemented in the Cambridge HOL system and used in an experimental application.
This paper contributes to the methodology of using metalogics for reasoning about programming languages. As a concrete example we consider a fragment of ML corresponding to call-by-value PCF and
translate it into a metalogic which contains (amongst other types) computation types and a fixpoint type. The main result is a soundness property (*): if the denotations of two programs are provably
equal in the metalogic, they have the same operationally observable behaviour. As usual, this follows from a computational adequacy result. In early notes, Plotkin showed how such proofs could be
factored into two stages, the first non-trivial and the second (essentially) routine; our contribution is to rework his suggestion within a new framework. We define a metalogic, which incorporates
computation and fixpoint types, and specify a modular translation of the ML fragment. Our proof of (*) factors into two parts. First, the term language of the metalogic is equipped with an
operational semantics and a (generic) computational adequacy result obtained. Second, a simple syntactic argument establishes a correspondence between the operational behaviour of an object program
and of its denotation. The first part is not routine but is proved once and for all. The second is a detailed but essentially trivial calculation that is easily adaptable to other object languages.
Such a factored proof is important because it promises to scale up more easily than a monolithic one. We show that it may be adapted to an object language with call-by-name functions and one with a
simple exception mechanism.
I/O mechanisms are needed if functional languages are to be suitable for general purpose programming and several implementations exist. But little is known about semantic methods for specifying and
proving properties of lazy functional programs engaged in I/O. As a step towards formal methods of reasoning about realistic I/O we investigate three widely implemented mechanisms in the setting of
teletype I/O: synchronised-stream (primitive in Haskell), continuation-passing (derived in Haskell) and Landin-stream I/O (where programs map an input stream to an output stream of characters). Using
methods from Milner's CCS we give a labelled transition semantics for the three mechanisms. We adopt bisimulation equivalence as equality on programs engaged in I/O and give functions to map between
the three kinds of I/O. The main result is the first formal proof of semantic equivalence of the three mechanisms, generalising an informal argument of the Haskell committee.
The Binomial Theorem in HOL is a medium sized worked example whose subject matter is more widely known than any specific piece of hardware or software. This article introduces the small amount of
algebra and mathematical notation needed to state and prove the Binomial Theorem, shows how this is rendered in HOL, and outlines the structure of the proof.
If formal methods of hardware verification are to have any impact on the practices of working engineers, connections must be made between the languages used in practice to design circuits, and those
used for research into hardware verification. Silage is a simple dataflow language marketed for specifying digital signal processing circuits. Higher Order Logic (HOL) is extensively used for
research into hardware verification. This paper presents a formal definition of a substantial subset of Silage, by mapping Silage declarations into HOL predicates. The definition has been mechanised
in the HOL theorem prover to support the transformational design of Silage circuits as theorem proving in HOL.
The semantics of hardware description languages can be represented in higher order logic. This provides a formal definition that is suitable for machine processing. Experiments are in progress at
Cambridge to see whether this method can be the basis of practical tools based on the HOL theorem-proving assistant. Three languages are being investigated: ELLA, Silage and VHDL. The approaches
taken for these languages are compared and current progress on building semantically-based theorem-proving tools is discussed.
If formal methods of hardware verification are to have any impact on the practices of working designers, connections must be made between the languages used in practice to design circuits, and those
used for research into hardware verification. Silage is a simple dataflow language used for specifying digital signal processing circuits. Higher Order Logic (HOL) is extensively used for research
into hardware verification. We have used a novel combination of operational and predicative semantics to define formally a substantial subset of Silage by mapping Silage definitions into HOL
predicates. Here we sketch the method used, discuss what is gained by a formal definition, and explain an immediate practical application---secure transformational design of Silage circuits as
theorem proving in HOL.
A common attraction to functional programming is the ease with which proofs can be given of program properties. A common disappointment with functional programming is the difficulty of expressing
input/output (I/O) while at the same time being able to verify programs. In this dissertation we show how a theory of functional programming can be smoothly extended to admit both an operational
semantics for functional I/O and verification of programs engaged in I/O.
The first half develops the operational theory of a semantic metalanguage used in the second half. The metalanguage M is a simply-typed lambda-calculus with product, sum, function, lifted and
recursive types. We study two definitions of operational equivalence: Morris-style contextual equivalence, and a typed form of Abramsky's applicative bisimulation. We prove operational extensionality
for M---that these two definitions give rise to the same operational equivalence. We prove equational laws that are analogous to the axiomatic domain theory of LCF and derive a co-induction principle.
The second half defines a small functional language, H, and shows how the semantics of H can be extended to accommodate I/O. H is essentially a fragment of Haskell. We give both operational and
denotational semantics for H. The denotational semantics uses M in a case study of Moggi's proposal to use monads to parameterise semantic descriptions. We define operational and denotational
equivalences on H and show that denotational implies operational equivalence. We develop a theory of H based on equational laws and a co-induction principle.
We study simplified forms of four widely-implemented I/O mechanisms: side-effecting, Landin-stream, synchronised-stream and continuation-passing I/O. We give reasons why side-effecting I/O is
unsuitable for lazy languages. We extend the semantics of H to include the other three mechanisms and prove that the three are equivalent to each other in expressive power.
We investigate monadic I/O, a high-level model for functional I/O based on Wadler's suggestion that monads can express interaction with state in a functional language. We describe a simple monadic
programming model, and give its semantics as a particular form of state transformer. Using the semantics we verify a simple programming example.
Posts about Brad Efron on Xi'an's Og
“In place of past experience, frequentism considers future behavior: an optimal estimator is one that performs best in hypothetical repetitions of the current experiment. The resulting gain in
scientific objectivity has carried the day…”
Julien Cornebise sent me this Science column by Brad Efron about Bayes’ theorem. I am a tad surprised that it got published in the journal, given that it does not really contain any new item of
information. However, being unfamiliar with Science, it may be that it also publishes major scientists' opinions or warnings, a label that can fit this column. (It is quite a proper
coincidence that the post appears during Bayes 250.)
Efron’s piece centres upon the use of objective Bayes approaches in Bayesian statistics, for which Laplace was “the prime violator”. He argues through examples that noninformative “Bayesian
calculations cannot be uncritically accepted, and should be checked by other methods, which usually means “frequentistically”. First, having to write “frequentistically” once is already more than I
can stand! Second, using the Bayesian framework to build frequentist procedures is like buying top technical outdoor gear to climb the stairs at the Sacré-Coeur on Butte Montmartre! The naïve reader
is then left clueless as to why one should use a Bayesian approach in the first place. And perfectly confused about the meaning of objectivity. Esp. given the above quote! I find it rather surprising
that this old saw of a claim of frequentism to objectivity resurfaces there. There is an infinite range of frequentist procedures and, while some are more optimal than others, none is “the” optimal
one (except for the most baked-out examples like say the estimation of the mean of a normal observation).
“A Bayesian FDA (there isn’t one) would be more forgiving. The Bayesian posterior probability of drug A’s superiority depends only on its final evaluation, not whether there might have been
earlier decisions.”
The second criticism of Bayesianism therein is the counter-intuitive irrelevance of stopping rules. Once again, the presentation is fairly biased, because a Bayesian approach opposes scenarii rather
than evaluates the likelihood of a tail event under the null and only the null. And also because, as shown by Jim Berger and co-authors, the Bayesian approach is generally much more favorable to the
null than the p-value.
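To make the point concrete, here is a small numerical sketch of my own (not from the column or from this post), using scipy and the classic example of 9 successes in 12 trials: the Bayesian posterior under a Beta(1,1) prior (my choice) depends only on the observed counts, while the p-value changes with the sampling plan.

from scipy.stats import binom, nbinom, beta

k, n = 9, 12                              # 9 successes and 3 failures observed
# One-sided p-values for H0: theta = 0.5 under two stopping rules
p_fixed = binom.sf(k - 1, n, 0.5)         # n fixed in advance        -> ~0.073
p_stop  = nbinom.sf(k - 1, n - k, 0.5)    # sample until the 3rd failure -> ~0.033
# (with theta = 0.5, the count of successes before the 3rd failure has the
#  same negative binomial law as scipy's "failures before the 3rd success")

# Posterior P(theta > 0.5) under a Beta(1,1) prior: a function of the
# counts (9, 3) alone, hence identical under either stopping rule
posterior = beta.sf(0.5, 1 + k, 1 + n - k)    # -> ~0.95
print(p_fixed, p_stop, posterior)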
“Bayes’ Theorem is an algorithm for combining prior experience with current evidence. Followers of Nate Silver’s FiveThirtyEight column got to see it in spectacular form during the presidential
campaign: the algorithm updated prior poll results with new data on a daily basis, nailing the actual vote in all 50 states.”
It is only fair that Nate Silver’s book and column are mentioned in Efron’s column. Because it is a highly valuable and definitely convincing illustration of Bayesian principles. What I object to is
the criticism “that most cutting-edge science doesn’t enjoy FiveThirtyEight-level background information”. In my understanding, the poll model of FiveThirtyEight built up in a sequential manner a
weight system over the different polling companies, hence learning from the data, in a Bayesian manner, about their reliability (rather than forgetting the past). This is actually what caused Larry
Wasserman to consider that Silver’s approach was actually more frequentist than Bayesian…
“Empirical Bayes is an exciting new statistical idea, well-suited to modern scientific technology, saying that experiments involving large numbers of parallel situations carry within them their
own prior distribution.”
My last point of contention is about the (unsurprising) defence of the empirical Bayes approach in the Science column. Once again, the presentation is biased towards frequentism: in the FDR gene
example, the empirical Bayes procedure is motivated by being the frequentist solution. The logical contradiction in “estimat[ing] the relevant prior from the data itself” is not discussed and the
conclusion that Brad Efron uses "empirical Bayes methods in the parallel case [in the absence of prior information]", seemingly without being cautious and "uncritically", does not strike me as the
proper last argument in the matter! Nor does it give a 21st Century vision of what nouveau Bayesianism should be, faced with the challenges of Big Data and the like…
A.12 The optional Floating-Point word set
The Technical Committee has considered many proposals dealing with the inclusion and makeup of the Floating-Point Word Sets in ANS Forth. Although it has been argued that ANS Forth should not address
floating-point arithmetic and numerous Forth applications do not need floating-point, there are a growing number of important Forth applications from spreadsheets to scientific computations that
require the use of floating-point arithmetic. Initially the Technical Committee adopted proposals that made the Forth Vendors Group Floating-Point Standard, first published in 1984, the framework for
inclusion of Floating-Point in ANS Forth. There is substantial common practice and experience with the Forth Vendors Group Floating-Point Standard. Subsequently the Technical Committee adopted
proposals that placed the basic floating-point arithmetic, stack and support words in the Floating-Point word set and the floating-point transcendental functions in the Floating-Point Extensions word
set. The Technical Committee also adopted proposals that:
• changed names for clarity and consistency; e.g., REALS to FLOATS, and REAL+ to FLOAT+ .
• removed words; e.g., FPICK .
• added words for completeness and increased functionality; e.g., FSINCOS, F~, DF@, DF!, SF@ and SF!
Several issues concerning the Floating-Point word set were resolved by consensus in the Technical Committee:
Floating-point stack: By default the floating-point stack is separate from the data and return stacks; however, an implementation may keep floating-point numbers on the data stack. A program can
determine whether floating-point numbers are kept on the data stack by passing the string FLOATING-STACK to ENVIRONMENT? It is the experience of several members of the Technical Committee that with
proper coding practices it is possible to write floating-point code that will run identically on systems with a separate floating-point stack and with floating-point numbers kept on the data stack.
Floating-point input: The current base must be DECIMAL. Floating-point input is not allowed in an arbitrary base. All floating-point numbers to be interpreted by an ANS Forth system must contain the
exponent indicator E (see 12.3.7 Text interpreter input number conversion). Consensus in the Technical Committee deemed this form of floating-point input to be in more common use than the alternative
that would have a floating-point input mode that would allow numbers with embedded decimal points to be treated as floating-point numbers.
Floating-point representation: Although the format and precision of the significand and the format and range of the exponent of a floating-point number are implementation defined in ANS Forth, the
Floating-Point Extensions word set contains the words DF@, SF@, DF!, and SF! for fetching and storing double- and single-precision IEEE floating-point-format numbers to memory. The IEEE
floating-point format is commonly used by numeric math co-processors and for exchange of floating-point data between programs and systems.
A.12.3 Additional usage requirements
A.12.3.5 Address alignment
In defining custom floating-point data structures, be aware that CREATE doesn't necessarily leave the data space pointer aligned for various floating-point data types. Programs may comply with the
requirement for the various kinds of floating-point alignment by specifying the appropriate alignment both at compile-time and execution time. For example:
: FCONSTANT ( F: r -- )
   CREATE FALIGN HERE 1 FLOATS ALLOT F!
   DOES> ( F: -- r ) FALIGNED F@ ;
A.12.3.7 Text interpreter input number conversion
The Technical Committee has more than once received the suggestion that the text interpreter in Standard Forth systems should treat numbers that have an embedded decimal point, but no exponent, as
floating-point numbers rather than double cell numbers. This suggestion, although it has merit, has always been voted down because it would break too much existing code; many existing implementations
put the full digit string on the stack as a double number and use other means to inform the application of the location of the decimal point.
See: RFI 0004 Number Conversion
A.12.6 Glossary
A.12.6.1.0558 >FLOAT
>FLOAT enables programs to read floating-point data in legible ASCII format. It accepts a much broader syntax than does the text interpreter since the latter defines rules for composing source
programs whereas >FLOAT defines rules for accepting data. >FLOAT is defined as broadly as is feasible to permit input of data from ANS Forth systems as well as other widely used standard programming
This is a synthesis of common FORTRAN practice. Embedded spaces are explicitly forbidden in much scientific usage, as are other field separators such as comma or slash.
While >FLOAT is not required to treat a string of blanks as zero, this behavior is strongly encouraged, since a future version of ANS Forth may include such a requirement.
A.12.6.1.1427 F.
For example, 1E3 F. displays 1000. .
A.12.6.1.1492 FCONSTANT
Typical use: r FCONSTANT name
A.12.6.1.1552 FLITERAL
Typical use: : X ... [ ... ( r ) ] FLITERAL ... ;
A.12.6.1.1630 FVARIABLE
Typical use: FVARIABLE name
A.12.6.1.2143 REPRESENT
This word provides a primitive for floating-point display. Some floating-point formats, including those specified by IEEE-754, allow representations of numbers outside of an implementation-defined
range. These include plus and minus infinities, denormalized numbers, and others. In these cases we expect that REPRESENT will usually be implemented to return appropriate character strings, such as
+infinity or nan, possibly truncated.
A.12.6.2.1489 FATAN2
FSINCOS and FATAN2 are a complementary pair of operators which convert angles to 2-vectors and vice-versa. They are essential to most geometric and physical applications since they correctly and
unambiguously handle this conversion in all cases except null vectors, even when the tangent of the angle would be infinite.
FSINCOS returns a Cartesian unit vector in the direction of the given angle, measured counter-clockwise from the positive X-axis. The order of results on the stack, namely y underneath x, permits the
2-vector data type to be additionally viewed and used as a ratio approximating the tangent of the angle. Thus the phrase FSINCOS F/ is functionally equivalent to FTAN, but is useful over only a
limited and discontinuous range of angles, whereas FSINCOS and FATAN2 are useful for all angles. This ordering has been found convenient for nearly two decades, and has the added benefit of being
easy to remember. A corollary to this observation is that vectors in general should appear on the stack in this order.
The argument order for FATAN2 is the same, converting a vector in the conventional representation to a scalar angle. Thus, for all angles, FSINCOS FATAN2 is an identity within the accuracy of the
arithmetic and the argument range of FSINCOS. Note that while FSINCOS always returns a valid unit vector, FATAN2 will accept any non-null vector. An ambiguous condition exists if the vector argument
to FATAN2 has zero magnitude.
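The round trip can be checked numerically. The following Python sketch is mine, not part of the standard; math.atan2 plays the role of FATAN2 and takes its arguments in the same y-then-x order:

import math

def fsincos(theta):
    # angle -> unit vector, returned y-first to mirror the FSINCOS stack order
    return math.sin(theta), math.cos(theta)

theta = 2.5                                    # any angle in (-pi, pi]
y, x = fsincos(theta)
assert math.isclose(math.atan2(y, x), theta)   # FSINCOS FATAN2 is the identity
assert math.isclose(y / x, math.tan(theta))    # FSINCOS F/ behaves like FTAN here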
A.12.6.2.1516 FEXPM1
This function allows accurate computation when its arguments are close to zero, and provides a useful base for the standard exponential functions. Hyperbolic functions such as cosh(x) can be
efficiently and accurately implemented by using FEXPM1; accuracy is lost in this function for small values of x if the word FEXP is used.
An important application of this word is in finance; say a loan is repaid at 15% per year; what is the daily rate? On a computer with single precision (six decimal digit) accuracy:
1. Using FLN and FEXP:
FLN of 1.15 = 0.139762, divide by 365 = 3.82910E-4, form the exponent using FEXP = 1.00038, and subtract one (1) and convert to percentage = 0.038%.
Thus we only have two digit accuracy.
2. Using FLNP1 and FEXPM1:
FLNP1 of 0.15 = 0.139762, (this is the same value as in the first example, although with the argument closer to zero it may not be so) divide by 365 = 3.82910E-4, form the exponent and subtract one
(1) using FEXPM1 = 3.82983E-4, and convert to percentage = 0.0382983%.
This is full six digit accuracy.
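The same computation can be replayed in Python (a sketch of mine, not part of the standard), rounding each intermediate to six significant digits to mimic the single-precision system of the example; math.expm1 and math.log1p correspond to FEXPM1 and FLNP1:

import math

def round6(x):                      # keep six significant digits
    return float('%.6g' % x)

# 1. Using FLN and FEXP: the final subtraction cancels most digits.
daily = round6(round6(math.log(1.15)) / 365)      # 3.82910E-4
rate1 = round6(math.exp(daily)) - 1.0             # 1.00038 - 1 -> 0.00038

# 2. Using FLNP1 and FEXPM1: full six-digit accuracy survives.
daily = round6(round6(math.log1p(0.15)) / 365)    # 3.82910E-4
rate2 = round6(math.expm1(daily))                 # 3.82983E-4

print(rate1 * 100, rate2 * 100)     # 0.038% versus 0.0382983%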
The presence of this word allows the hyperbolic functions to be computed with usable accuracy. For example, the hyperbolic sine can be defined as:
: FSINH ( r1 -- r2 )  \ sinh(x) = (u + u/(u+1)) / 2, where u = FEXPM1(x) = e^x - 1
   FEXPM1 FDUP FDUP 1.0E0 F+ F/ F+ 2.0E0 F/ ;
A.12.6.2.1554 FLNP1
This function allows accurate computation when its arguments are close to zero, and provides a useful base for the standard logarithmic functions. For example, FLN can be implemented as:
: FLN 1.0E0 F- FLNP1 ;
See: A.12.6.2.1516 FEXPM1
A.12.6.2.1616 FSINCOS
See: A.12.6.2.1489 FATAN2
A.12.6.2.1640 F~
This provides the three types of floating point equality in common use -- close in absolute terms, exact equality as represented, and relatively close.
[Numpy-discussion] iterate over multiple arrays
David Froger david.froger@gmail....
Sat Oct 1 11:21:43 CDT 2011
Thanks everybody for the different solutions proposed, I really appreciate it.
What about this solution? So simple that I didn't think of it...
import numpy as np

def f(arr):          # (from the original question; not used below)
    return arr * 2

a = np.array([1, 1, 1])
b = np.array([2, 2, 2])
c = np.array([3, 3, 3])
d = np.array([4, 4, 4])

for x in (a, b, c, d):
    x[:] = x * 2     # in-place slice assignment updates the original arrays
    # instead of: x = x*2, which would only rebind the loop variable x

print a
print b
print c
print d
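A brief aside (not part of the original post): the augmented assignment form performs the same in-place update and is slightly more direct:

for x in (a, b, c, d):
    x *= 2    # ndarray.__imul__ modifies the array in place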
Linear transformation formula
October 23rd 2012, 07:20 PM
Linear transformation formula
I am trying to understand and solve the following problem:
Let T: $R^2 \to R^2$ be a linear transformation. Find a formula for $T(x_{1}, x_{2})$ given that
$T(4,7) = (3,4)$ and $T(3,5) = (4,9)$
My attempt:
Now the basis $B=\{(4,7), (3,5)\}$ is the basis for the domain. And we are trying to find the basis for the codomain? Is this correct?
$T(x_{1}(4,7) + x_{2}(3,5)) = x_{1} T(4,7) + x_{2} T(3,5) = x_{1} (3,4) + x_{2} (4,9) = (3x_{1}+4x_{2},\ 4x_{1}+9x_{2})$
But this is not correct, because if I plug in the value $(4,7)$ the formula doesn't give me $(3,4)$. But I don't understand where I went wrong?
October 23rd 2012, 07:48 PM
Re: Linear transformation formula
Hey M.R.
A linear operator taking R^2 to R^2 will have 4 elements. If this operator is [a, b; c, d] in matrix form then you have [a, b; c, d][x1; y1] = [r0; r1] and [a, b; c, d][x2; y2] = [r2; r3].
You should get ax1 + by1 = r0, cx1 + dy1 = r1, ax2 + by2 = r2, cx2 + dy2 = r3 which gives four equations in four unknowns. You know x1,y1,x2,y2,r0,r1,r2,r3 so you can solve this system with
standard linear algebra techniques.
October 23rd 2012, 07:52 PM
Re: Linear transformation formula
Here is the way I thought of it.
$(4,7)=4e_1+7e_2$ and $(3,5)=3e_1+5e_2$
We know that
$T(4e_1+7e_2)=3e_1+4e_2 \iff 4T(e_1)+7T(e_2)=3e_1+4e_2$
$T(3e_1+5e_2)=4e_1+9e_2 \iff 3T(e_1)+5T(e_2)=4e_1+9e_2$
This gives a system of equations for $T(e_1) \text{ and } T(e_2)$
Solving for each (work not shown) gives
$T(e_1)=13e_1+43e_2 \quad T(e_2)=-7e_1-24e_2$
This gives the matrix
$T=\begin{bmatrix}13 & -7 \\ 43 & -24 \end{bmatrix}$
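As a quick numerical cross-check of this matrix (my addition, not part of the original thread), the same computation in numpy:

import numpy as np

U  = np.array([[4., 3.],
               [7., 5.]])   # columns are u1 = (4,7) and u2 = (3,5)
TU = np.array([[3., 4.],
               [4., 9.]])   # columns are T(u1) = (3,4) and T(u2) = (4,9)

T = TU @ np.linalg.inv(U)   # from T U = TU
print(T)                    # [[ 13.  -7.]
                            #  [ 43. -24.]]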
October 24th 2012, 04:43 AM
Re: Linear transformation formula
Doing it your way: Let u1 = (4,7) and u2 = (3,5) be a basis.
Let v be any vector, then v = x1u1 +x2u2
if Tu1 = (3,4) and Tu2 = (4,9)
Tv = x1Tu1 + x2Tu2 = x1(3,4) + x2(4,9)
If v = u1, v has coordinates x1=1 and x2 = 0 so Tv = (3,4)
If v = u2, v has coordinates x1= 0 and x2 = 1 so Tv = (4,9)
October 24th 2012, 02:38 PM
Re: Linear transformation formula
Already solved by TheEmptySet above.
Just another way of looking at it in the e1 = (1,0), e2 = (0,1) basis.
Given v = x1e1 + x2e2, Tv = x1Te1 + x2Te2.
We know Tu1 and Tu2, so if we find e1 and e2 in terms of u1 and u2 we have the transformation.
e1 = x11u1 + x12u2
e2 = x21u1 + x22u2
(1,0) = x11(4,7) + x12(3,5)
(0,1) = x21(4,7) + x22(3,5)
equating first and second components gives:
e1 = -5u1 + 7u2
e2 = 3u1 -4u2
Te1 = -5(3,4) +7(4,9) = (13,43)
Te2 = 3(3,4) -4(4,9) = (-7,-24)
October 24th 2012, 04:03 PM
Re: Linear transformation formula
Doing it your way: Let u1 = (4,7) and u2 = (3,5) be a basis.
Let v be any vector, then v = x1u1 +x2u2
if Tu1 = (3,4) and Tu2 = (4,9)
Tv = x1Tu1 + x2Tu2 = x1(3,4) + x2(4,9)
If v = u1, v has coordinates x1=1 and x2 = 0 so Tv = (3,4)
If v = u2, v has coordinates x1= 0 and x2 = 1 so Tv = (4,9)
But how was my way correct? Given that the answer is $T=\begin{bmatrix}13 & -7 \\ 43 & -24 \end{bmatrix}$
October 25th 2012, 08:36 AM
Re: Linear transformation formula
MR wrote: "But how was my way correct? Given that the answer is http://latex.codecogs.com/png.latex?... \end{bmatrix}"
Given u1 = (4,7), u2 = (3,5) and Tu1 = (3,4), Tu2 = (4,9)
u1 and u2 are not THE basis for the domain, they are A basis. You want to work with the basis u1 and u2? Fine.
Now you want to know, given a vector (x1,x2), what is it transformed to under T? Since you chose u1, u2 as your basis, you have to assume the vector (x1,x2) has coordinates wrt this basis, ie,
v =x1u1 + x2u2.
Then T(x1,x2) = x1Tu1 + x2Tu2
Now there are 2 options:
Option 1) Express result in e1,e2 basis, which is what you did, to get:
T(x1,x2) = x1(3,4) + x2(4,9), your answer.
So you have answered the question: If I have a vector with coordinates in the u1,u2 basis, what are the coordinates of the transformed vector in the e1,e2 basis.
If x1,x2 are the coordinates of the starting vector in the e1,e2 system, to use your procedure, you must first express it in terms of coordinates wrt the u1,u2 system.
If I want to test my transformation on u1, then u1 = (1)u1 + (0)u2. So you are asking, if (x1,x2) = (1,0), what is T(x1,x2) and the answer is (3,4)
Option 2) Express result in u1, u2 basis, ie, work exclusively in the u1,u2 basis. To do this, you have to express Tu1 and Tu2 in the u1,u2 basis knowing what they are in the e1,e2 basis.
Tu1 = a11u1 + a12u2
Tu2 = a21u1 + a22u2, where u1 = (4,7), u2 = (3,5) and Tu1 = (3,4), Tu2 = (4,9).
The result would have the form: Tv = (a11x1 + a21x2)u1 + (a12x1 + a22x2)u2.
So now, starting with a vector expressed in the u1,u2 basis, you have found the transformed vector in the u1,u2 basis.
Edit: The answer you refer to, given by TheEmptySet, works exclusively in the e1,e2 coordinate system, and assumes (x1,x2) are coordinates in e1,e2.
Edit: There is one interpretation I completely missed. If (x1,x2) are the coordinates of v in the u1,u2 basis, then (x1,x2) are also the coordinates of Tv in the Tu1,Tu2 basis.
October 25th 2012, 10:07 AM
Re: Linear transformation formula
There is an exquisitely simple interpretation to the original post problem, which I saw in the last edit of the previous post and repeat here so it isn't missed:
If v = (x1,x2) in the u1,u2 basis, then Tv = (x1,x2) in the Tu1,Tu2 basis:
v = x1u1 + x2u2
Tv = x1Tu1 + x2Tu2.
October 25th 2012, 10:20 AM
Re: Linear transformation formula
my totally naive approach:
the formula for T (provided we CAN find one) is going to be something like:
T(x,y) = (ax+by,cx+dy) <---general linear function from R^2 to R^2.
so what we really want to do is find a,b,c and d.
now T(4,7) = (3,4). to me this means:
4a+7b = 3
4c+7d = 4
that by itself isn't enough to say what a,b,c and d are, but it's a start.
we also know T(3,5) = (4,9) so:
3a+5b = 4
3c+5d = 9
looking at just a&b, we have the two equations:
4a+7b = 3
3a+5b = 4
multiplying the first by 3, and the second by -4:
12a+21b = 9
-12a-20b = -16
adding these, b = -7 (and then from 4a+7b = 3, we get: 4a - 49 = 3, so 4a = 52, thus a = 13).
similarly, for c and d:
12c+21d = 12
-12c-20d = -36
so d = -24 (and then from 4c+7d = 4, we get: 4c - 168 = 4, so 4c = 172, so c = 43).
so our formula for T is:
T(x,y) = (13x-7y,43x-24y)
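as a quick sanity check (my addition, not part of the post), the formula reproduces the given data:

def T(x, y):
    return (13*x - 7*y, 43*x - 24*y)

assert T(4, 7) == (3, 4)    # first given pair
assert T(3, 5) == (4, 9)    # second given pair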
yes, you CAN consider a basis B, and then find the matrix of T relative to that basis. but since we're in R^2, and using coordinates (x,y) to denote elements of R^2, we can by-pass that
altogether, and just deal with "coordinate functions":
T[1](x,y) = ax+by (the first coordinate of T(x,y))
T[2](x,y) = cx+dy (the second coordinate of T(x,y)).
again, for emphasis: linear functions are linear in EACH coordinate.
NOTE: there is nothing wrong with any of the previous answers. realize that if you are going to find a FORMULA for a linear transformation T in terms of COORDINATES, one really should say "which
basis the coordinates are IN". writing (x,y) usually tacitly assumes that the basis is B = {(1,0),(0,1)}, but the matrix for T will have "different numbers" (i.e., a different formula), if you
use some OTHER basis (in the domain, range, or both).
October 25th 2012, 10:23 AM
Re: Linear transformation formula
one caveat to this way of looking at things: you need to show that {Tu[1],Tu[2]} actually IS a basis, after all: T might be singular.
October 25th 2012, 12:41 PM
Re: Linear transformation formula
Tu1 = (3,4), Tu2 = (4,9). They are obviously linearly independent, as are u1 and u2, so T is non-singular.
But that's not why I write. The point of the original post is as follows:
Example: Imagine u1 and u2 plotted on a piece of paper. I have a vector v whose coordinates I only know in terms of u1 and u2. Now imagine I have a transformation which rotates u1 and u2 through
30 degrees, which become the transformed basis. If I apply the same transformation to v, what are the new coordinates of the transformed vector? Since the vector rotated with u1 and u2, it still has
the same components with respect to the transformed basis. They all rotated together.
Edit: In the more general case for vectors drawn on a piece of paper, the theory becomes quite interesting. Suppose the transformation takes u1 to an arbitrary vector Tu1 and u2 to an arbitrary
vector Tu2. Now what are the components of v after this transformation? The theory says it still has the same components with respect to the transformed vectors. Now that is not obvious.
October 27th 2012, 02:06 PM
Re: Linear transformation formula
Edit: The answer you refer to, given by TheEmptySet, works exclusively in the e1,e2 coordinate system, and assumes (x1,x2) are coordinates in e1,e2.
Edit: There is one interpretation I completely missed. If (x1,x2) are the coordinates of v in the u1,u2 basis, then (x1,x2) are also the coordinates of Tv in the Tu1,Tu2 basis.
Thanks for that explanation. If you look at my original question, I was confused if we needed to apply a change of basis, before and after the transformation. So the answer by TheEmptySet makes
sense, since he applied a change of basis before the transformation (domain) and left the answer in the standard basis after transformation.
October 27th 2012, 02:11 PM
Re: Linear transformation formula
Excellent. I actually tried this approach, but I made a mistake in my arithmetic and didn't get the right answer. And it really confused me, as to why it didn't work.
October 27th 2012, 02:23 PM
Re: Linear transformation formula
One final question. Say I have a non-standard basis $\{v_{1}, v_{2}\}$, like the above. So if I express $x$ in terms of this basis, that is $x = x_{1}v_{1}+x_{2}v_{2}$. Now if I apply the
transformation to $x$ (ie, $T(x)$), would the answer of the transformation be in the same non-standard basis? Or would we assume that after the transformation, the answer is in the
standard basis (ie. $\{(1,0), (0,1)\}$)?
October 27th 2012, 06:04 PM
Re: Linear transformation formula
when you are specifying a matrix, [T], for a linear transformation T, you actually need TWO bases: one for the domain of T, and one for the co-domain (the vector space the image lies in).
if T is a linear endomorphism (that is, the domain, and the co-domain are the same) one can use the same basis for BOTH, but you don't HAVE to.
let me illustrate with a simple example:
suppose T:R^2-->R^2 is the identity function: T(x,y) = (x,y).
FROM the standard basis TO the standard basis, the matrix for T is (as you would suspect):
$[T] = \begin{bmatrix}1&0\\0&1 \end{bmatrix}$.
but suppose we want to use the basis B = {b[1],b[2]} = {(1,0),(1,1)} instead.
in the basis B, the vector (1,0) = (1)(1,0) + (0)(1,1) = 1b[1] + 0b[2] = [1,0][B].
if we want the matrix for T with "inputs" as B-coordinates, and "outputs" in the standard basis, this matrix has to map (1,0) to (1,0), and (0,1) to (1,1) so we have:
$[T] = \begin{bmatrix}1&1\\0&1 \end{bmatrix}$.
if we want the matrix for T with "inputs" as standard coordinates, and "outputs" in B-coordinates, note that:
(1,0) = 1b[1] + 0b[2] = [1,0][B], so the first column of [T] will be as before. BUT:
(0,1) = (-1)b[1] + (1)b[2] = [-1,1][B], so the second column of [T] will be (-1,1) (which is what (0,1) turns into in "B-coordinates").
so NOW the matrix for T is:
$[T] = \begin{bmatrix}1&-1\\0&1 \end{bmatrix}$
you may verify yourself that this matrix is the inverse of the matrix from the B-coordinates to the standard coordinates.
if we want [T] to go from B-coordinates to B-coordinates, we can use the same matrix as from the standard basis to the standard basis. but this is only because T is the identity function; this doesn't
work for EVERY linear transformation.
in general, if P is the matrix that takes the B-coordinates to the standard basis (a change of basis matrix), and [T] is the matrix of of a linear transformation T (with respect to the standard
basis), then the matrix for T from B-coordinates to B-coordinates will be:
[T][B] = P^-1[T]P.
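to see this similarity formula with a non-identity map, here is a numpy sketch of mine (not part of the thread), reusing the matrix found earlier:

import numpy as np

T = np.array([[13., -7.],
              [43., -24.]])    # standard-basis matrix from earlier in the thread
P = np.array([[1., 1.],
              [0., 1.]])       # columns are b[1] = (1,0) and b[2] = (1,1)

T_B = np.linalg.inv(P) @ T @ P    # matrix of T from B-coordinates to B-coordinates

v  = np.array([4., 7.])
cB = np.linalg.solve(P, v)        # B-coordinates of v
# applying T_B in B-coordinates agrees with applying T in standard coordinates:
assert np.allclose(P @ (T_B @ cB), T @ v)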
any vector in R^2 can be REPRESENTED as a pair of numbers. what these numbers MEAN depends on what AXES we've chosen (the x-axis and the y-axis are the "standard" ones). if we choose different
axes to measure along, we're going to get different coordinates in different coordinate systems, but the vector itself remains the same (we just give it "a different address").
what makes this confusing is that the standard basis is "invisible", the standard coordinates of a vector (x,y) are just x and y (in the standard basis, the coordinates of a vector are
"themselves"). or to put it another way: (4,3) doesn't really tell us which vector we have...one needs to ask 4 which way, and 3 which other way? | {"url":"http://mathhelpforum.com/advanced-algebra/205976-linear-transformation-formula-print.html","timestamp":"2014-04-18T21:52:05Z","content_type":null,"content_length":"38565","record_id":"<urn:uuid:9eb25f2a-e99d-49c2-b95f-5588f6f2f3ab>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00398-ip-10-147-4-33.ec2.internal.warc.gz"} |
Map Projections
An Exercise In Precision Agriculture
Module 5, Exercise 1
State Plane Coordinate Conversion
The global positioning system (GPS) has been adopted by many U.S. grain producers as a method for referencing geographic and agronomic data in a geographic information system (GIS) database. GPS
fixes take the format of latitude, longitude and elevation using a WGS84 datum. A datum is a reference for physical parameters such as the shape of the earth. To utilize the GPS data the coordinates
must be projected onto a flat plane or 2 dimensional view. In Kentucky and Ohio, a Lambert Conformal Conic projection is used as the basis of the State Plane Coordinate systems whereas Indiana uses
the Transverse Mercator projection. The nature of these projections and the associated mathematics are detailed in Map Projections A Working Manual, published by the U.S. Geological Survey as
Professional Paper 1395. Once the projected coordinates are obtained, points, lines and polygons can be quantified. These geometric entities form the basis of a GIS database. A more descriptive
definition for GIS is that it is a relational database where properties are assigned to point, line and polygon entities in space. The objective of this Module is to understand GPS point
data in WGS84 and to project that data into a State Plane coordinate system for several fields located in Indiana, Kentucky and Ohio.
Map Projections
Map projections are viewed as a mathematical operation in which latitude and longitude are transformed into Cartesian coordinates (x,y). A map projection allows longitude and latitude coordinates to
be projected from a 3-dimensional position on the earth's surface onto a plane or 2-dimensional surface (paper). Several projections and coordinate systems exist, each defined differently and each
preserving shape, area, distance, and direction differently. The State Plane Coordinate System (SPCS) is a widely used coordinate system, not a map projection, within the United States. The
conversion to projected coordinates is necessary to perform dimensional analysis such as area and length calculations; raw GPS or WGS84 data, being in degrees of longitude and latitude,
cannot provide correct dimensional calculations.
Before discussing the SPCS, it is appropriate to talk briefly about datums and their function. While map projections systematically transform a position on the globe onto a plane, a datum functions as a
reference system to describe the shape and size of the earth. A datum is a smoothed mathematical model of the earth's mean sea-level surface. The earth is not a sphere but an oblate ellipsoid of
revolution, also called an oblate spheroid. The geoid describes the shape the earth would have if it were everywhere at mean sea level. A datum is necessary for the GPS system to model the earth's
surface, calculate the positions of GPS satellites, and ultimately determine one's position on earth using GPS. Horizontal datums consist of the latitude and longitude of a point, the azimuth of a line
from that point, and two radii to describe the shape of the oblate spheroid which best represents the earth. Three datums are frequently associated with GPS use: the North American Datum of 1927,
known as NAD 27; NAD 83; and the World Geodetic System of 1984, or WGS84. WGS84 is the one most associated with the use of GPS data and was developed by the US military in 1984 (it is the basis of
GPS receivers' calculations). NAD 27 is based on the Clarke Spheroid of 1866 while NAD 83 is based on the GRS 80 derived ellipsoid. The radii of the Clarke and GRS80 spheroids are presented in Table 1.
An important note is that NAD 27 coordinates are in feet while NAD 83 is based on meters.
The SPCS was devised by the US Coast and Geodetic Survey in 1933 to establish a common coordinate system across the US. It provides a greater degree of accuracy for area and distance calculations
than other projections and coordinate systems. Many agricultural GIS and mapping packages use this coordinate system, and it will be the emphasis of this exercise. However, the SPCS varies from state
to state, dividing each into zones depending on whether the state is oriented more North-South or East-West. The orientation dictates the map projection, Lambert Conformal Conic or Transverse
Mercator, applied to the state. The Lambert Conformal Conic projection is the most widely used and is the basis of the state plane coordinate system for states with a greater East-West than
North-South extent. The Transverse Mercator projection is an ordinary Mercator projection turned through a 90° angle to coincide with the central meridian; it is used for state plane coordinate
systems in states with a greater North-South than East-West extent. Table 2 presents the projection used along with the number of zones for each state. Table 3 provides the necessary State Plane
parameters and origins for each zone in Indiana, Kentucky and Ohio based on the 1927 North American Datum (NAD 27). These parameters are necessary to transform from WGS84 into the SPCS.
As mentioned, GPS point data is provided in WGS84. Two sets of equations exist that transform this data into the SPCS since both the Lambert Conformal Conic and Transverse Mercator projections are
used depending upon the state orientation. The intent of this assignment is to not fully explain all the equations but to understand that they exist and how they can be used to transform WGS84
positions into the SPCS. Table 4 provides the various equations necessary for the transformation. Notice that several equations exist for each of the Lambert Conformal Conic and Transverse Mercator
projections. By knowing the standard parallels, origin, ellipsoid parameters and point to be transformed, one can simply plug these initial variables into the equations to calculate x and y
coordinates in the SPCS. It should be noted that longitude and latitude must be in decimal degrees, not in degrees/minutes/seconds. If the latter format is present, a conversion must be performed
from minutes and seconds to decimal degrees (1 degree = 60 minutes; 1 minute = 60 seconds).
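For example, a minimal Python helper of mine (not part of the module) for this conversion:

def dms_to_degrees(degrees, minutes, seconds):
    # 1 degree = 60 minutes; 1 minute = 60 seconds
    sign = -1.0 if degrees < 0 else 1.0
    return sign * (abs(degrees) + minutes / 60.0 + seconds / 3600.0)

dms_to_degrees(84, 15, 0)    # -> 84.25, e.g. a central meridian of 84 deg 15 min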
Field boundary traverses of crop fields from Indiana, Kentucky and Ohio will be used to transform from WGS84 to State Plane Coordinates. A downloadable Microsoft Excel file is provided
containing the necessary data and conversion equations to complete this assignment. The St_plane.xls contains the WGS84 DGPS boundary points along with a table containing text and equations for
projecting the DGPS points into the appropriate State Plane Coordinate System zone. Start by downloading the Excel file containing the DGPS data for several fields from the three states. Once the
Excel file has been extracted, fill in the provided tables to convert the DGPS data into State Plane coordinates.
1) Downloading the Excel File
To download the file, click on the button at the bottom of this page. When downloading the "St_plane.xls" file, a download message box will appear;
make sure the 'Save this program to disk' option is checked and press the 'OK' button. Next, another message box will prompt you to select the directory to download this application file into.
Find the appropriate directory or create a new directory to download the application file into and then click 'SAVE.' After completing this step, use your "Windows Explorer" to navigate to this
application file (State Plane Conversion.exe). After locating it, double click on the file icon to run the application. The application will extract the necessary Excel file named 'St_plane.xls'
which can now be used for this assignment. It is suggested to save this file under a new name in case something goes wrong.
2) Exercise Steps
The St_plane.xls file contains the equations and data required to complete this assignment. The first two sheets contain the parameters for each of the projections. The Raw Data WGS84 is provided on
the next six sheets and labeled according to state and particular zone. There are two boundaries from each state with each boundary being obtained from a different zone. For example, KYN represents
data collected from the Kentucky North State Plane zone. The remaining sheets provide prefabricated tables for each boundary from a particular zone to calculate the State Plane Coordinates for each
field boundary. Equations will be entered on these pages to calculate the x and y coordinates.
Below are several steps outlining what is needed to complete this assignment. Before starting, look over the first two sheets labeled 'Lambert Conformal Relationships' and 'Transverse Mercator'.
These two pages contain the necessary variables and their equations for each of the projections. Equations can be viewed by clicking on a variable's particular cell. The raw WGS84 data is contained
on separate sheets and in degrees longitude and latitude. The page following each of these data sheets is provided to perform the transformation into the SPCS. Several variables must be calculated
for each data point before using the x and y equations for the projections to determine the new location in meters. Each table has been labeled with the required variables to calculate. Equations
from Table 4 must be entered to calculate these variables and the new coordinates. Some equations have been entered for the first point due to their complexity and time to type in. These must be
copied and pasted for the rest of the points.
1. Plot the latitude and longitude coordinate pairs in Microsoft Excel using the Chart Wizard to confirm that indeed an enclosed polygon exists (remember that longitude and latitude are in degrees).
2. On the appropriately labeled sheets (i.e., KYN SPCS for Kentucky North), transform the WGS84-GPS coordinates to the correct State Plane Coordinate System using the proper projection and the NAD27
datum; a Python sketch of the forward Lambert equations is given after this list. Equations are provided on the first sheet labeled Projection Relationships. Table headings have already been provided to help keep data organized.
3. Convert the calculated Easting and Northing coordinates from meters to feet (1 meter = 3.281 feet).
4. Again, plot the coordinate pairs in Microsoft Excel to confirm that a polygon exists.
5. Print out a hardcopy of the original boundary and the newly projected boundary in SPCS for comparison and contrast.
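Before working through the steps, it may help to see the forward Lambert Conformal Conic relationships in executable form. The Python sketch below is my own illustration, not part of the module: it follows Snyder's ellipsoidal formulas from Map Projections A Working Manual, and the zone constants shown are assumed values for the Kentucky North NAD27 zone (take the authoritative numbers from Table 3). As in the exercise, the WGS84-to-NAD27 datum shift is ignored; the module's Excel sheets implement these same relationships.

import math

# Clarke 1866 ellipsoid (NAD27)
A  = 6378206.4            # semi-major axis, meters
E2 = 0.00676866           # eccentricity squared
E  = math.sqrt(E2)

# Zone constants: assumed values for Kentucky North (NAD27); see Table 3
PHI1 = math.radians(37 + 58/60.0)     # lower standard parallel
PHI2 = math.radians(38 + 58/60.0)     # upper standard parallel
PHI0 = math.radians(37 + 30/60.0)     # latitude of grid origin
LAM0 = math.radians(-(84 + 15/60.0))  # central meridian (west longitude negative)
X0   = 609601.219                     # false easting: 2,000,000 US survey ft in meters

def m(phi):        # Snyder's m(phi)
    return math.cos(phi) / math.sqrt(1.0 - E2 * math.sin(phi)**2)

def t(phi):        # Snyder's t(phi)
    s = math.sin(phi)
    return math.tan(math.pi/4 - phi/2) / ((1.0 - E*s) / (1.0 + E*s))**(E/2)

n   = (math.log(m(PHI1)) - math.log(m(PHI2))) / (math.log(t(PHI1)) - math.log(t(PHI2)))
F   = m(PHI1) / (n * t(PHI1)**n)
RH0 = A * F * t(PHI0)**n

def lambert_forward(lat_deg, lon_deg):
    "Decimal degrees -> (easting, northing) in meters on the state plane grid."
    phi, lam = math.radians(lat_deg), math.radians(lon_deg)
    rho   = A * F * t(phi)**n
    theta = n * (lam - LAM0)
    x = X0 + rho * math.sin(theta)
    y = RH0 - rho * math.cos(theta)
    return x, y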
Questions and Answers
1. Compare and contrast the shape of both polygons constructed from the WGS84-GPS and NAD27-Kentucky North State Plane coordinate pairs. Why are these shapes similar or different in appearance,
and which set of coordinates will yield a correct estimate of the area of the field?
Click on the file name to download the self extracting executable file: St_plane.xls
(Feb 4, 2000 - All the boundary data has not been collected; only a KY north field is provided)
FOM: Re:no recursive independent axiomatization
Harvey Friedman friedman at math.ohio-state.edu
Fri Dec 19 00:54:03 EST 1997
This posting is a sequel to my 8:05PM 12/18/97 posting.
1. Replace the sentence "To see iii), suppose W[e] is nonempty. Note that
by the 1-consistency of PA1 + A, PA1 + A + W[e] is empty is consistent. is
consistent." by
"To see iii), suppose W[e] is empty. Note that by the truth of PA1 + A,
"PA1 + A + W[e] is empty" is consistent."
2. There is a strengthening of Lemma 2 which suffices to prove the
following sharpened form of the Theorem:
THEOREM'. There is a true r.e. extension of PA1 in the language of PA with
no recursive independent axiomatization. Furthermore, it has a recursive
axiomatization consisting entirely of true pi-0-1 sentences.
Here is the sharper form of Lemma 2:
LEMMA 2'. There is a recursive function f of two variables, from sentences
in the language of PA and elements of N, into pi-0-1 sentences in the
language of PA, such that the following holds. Suppose PA1 + A is true. i)
f(A,e) logically implies A & PA1. ii) if W[e] is nonempty then f(A,e) is
provably equivalent to A over PA1. iii) if W[e] is empty then A does not
imply f(A,e) over PA1. iv) f(A,e) is true.
This Lemma can surely be proved using the theory of creative r.e. sets or
effectively inseparable r.e. sets. But we have a proof of this without
making a detour through recursion theory. In fact, the only proof I know of
the following related classical results from logic even for PA makes a real
detour through recursion theory:
a. The pi-0-1 theorems of PA are not recursive.
b. The pi-0-1 theorems of PA are complete r.e.
In the course of proving Lemma 2' without a real detour through recursion
theory, we also immediately have a proof of these (b implies a, of course)
without a detour through recursion theory. A little background: if we
replace pi-0-1 by sigma-0-1 in a,b, then this is immediate without a real
detour through recursion theory. And the proof of Lemma 2 from my 8:05PM
12/18/97 gives a direct proof using Godel's second theorem for implications
between pi-0-1 sentences. What we have noticed is that we can give a direct
proof using Rosser's construction for the pi-0-1 case.
Proof of Lemma 2': Define f(A,e) to be PA1 + A + "every proof of this
sentence from PA1 + A has a shorter proof of its negation from PA1 + A +
"W[e] is empty." " Let this sentence in outer quotes be B. Now i) is
Suppose first that W[e] is nonempty. Let n be the least proof of notB from
PA1 + A + "W[e] is empty." If there is a proof of B from PA1 + A <= n, then
B is refutable in PA1, which is a contradiction. On the other hand, if not,
then B is provable in PA1. So B is provable in PA1. We also obviously get
that B is true.
Now suppose that W[e] is empty. Suppose B is provable in PA1 + A. Then B is
true. Hence there is no proof of notB from PA1 + A + "W[e] is empty."
Therefore B is false. So B is not provable in PA1 + A. We also obviously
get that B is true. QED.
In fact, we give a version of this proof in isolation, to derive the
classical result b.
THEOREM A (classical). Let T be any consistent recursively axiomatized
extension of a weak fragment of arithmetic. Then the set of pi-0-1 theorems
of T is complete r.e.
Proof: Let A be a pi-0-1 sentence. By self-reference, let B be a sentence
which asserts "every proof of B from T has a shorter witness of the falsity
of A." We claim that A is true if and only if B is not provable in T.
case 1. A is true. Suppose B is provable in T, and let n be the least
proof. Since there is no witness of the falsity of A, B is refutable in T.
This contradicts the consistency of T. Hence B is not provable in T.
case 2. A is false. Let n be the least witness to notA. If there is no
proof of B from T that is <= n, then B is provable in T. If there is a
proof of B from T that is <= n, then B is refutable from T. This
contradicts the consistency of T. Hence B is provable in T. QED.
Is this proof of Theorem A new? You can teach it right after you do
Rosser's theorem.
PS: Of course, the proof of Theorem A uses a simpler, more direct construction, which can also be used for Lemma 2'. Also, it is easy to see how we can get a recursive set of purely universal sentences in predicate calculus with no recursive independent axiomatization, by appropriate modifications.
Probability on infinite sets and the Kalaam argument
Suppose there is an infinite line of paving stones, labeled 1, 2, 3, ..., on each of which there is a blindfolded person. You are one of these persons. That's all you know. How likely is it you're on
a number not divisible by ten? The obvious answer is: 9/10. But now I give you a bit more information. Yesterday, all the same people were standing on the paving stones, but differently arranged. At
midnight, all the people were teleported, in such a way that the people who yesterday were standing on numbers divisible by ten are now standing on numbers not divisible by ten, and vice versa.
Should you change your estimate of the likelihood you're on a number divisible by ten?
Suppose you stick to your current estimate. Then we can ask: How likely is it that you were yesterday on a number not divisible by ten? Exactly the same reasoning that led to your 9/10 answer now
should give you a 9/10 answer to the back-dated question. But the two probabilities are inconsistent: you've assigned probability 9/10 to the proposition p[1] that yesterday you were on a number not
divisible by ten and 9/10 to the proposition p[2] that today you are on a number not divisible by ten, even though p[1] holds if and only if p[2] does not (this violates finite additivity).
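To spell the violation out: since p[1] holds if and only if p[2] does not, any finitely additive probability assignment must satisfy P(p[1]) + P(p[2]) = 1; but the symmetric reasoning has assigned P(p[1]) + P(p[2]) = 9/10 + 9/10 = 9/5.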
So you better not stick to your current estimate. You have two natural choices left. Switch to 1/2 or switch to 1/10. Switching to 1/2 is not reasonable. Let's imagine that today is the earlier day,
and you have a button you can choose to press. If you press it, the big switch will happen—the folks on numbers divisible by ten will be swapped with the folks on numbers not divisible by ten. If you
had switched to 1/2 in my earlier story, then if you press the button, you should also switch your probabilities to 1/2, while if you don't press the button, you should clearly stick with 9/10. But
it's absurd that your decision whether to press the button or not should affect your probabilities (assume that there is no correlation between what decision you make and what number you're on).
Alright, so the right answer seems to be: switch to 1/10. But this means that the governing probabilities in infinite cases are those derived from the initial arrangements. Why should that be so?
Here is a suggestion. We assume that the initial arrangement came from some sort of a regular process, perhaps stochastic (where "regular" is understood in the same sense as "regularity" in
discussions of laws). For instance, maybe God or a natural process brought about which squares the people go on by taking the people one by one, and assigning them to squares using some natural
probability distribution, like probability 1/2 to 1, 1/4 to 2, 1/8 to 3, and so on, with the assignment being iterated until a vacant square is found (equivalently do it in one step: use this
distribution but condition on vacancy). And, maybe, for most of the "regular" distributions, once enough people are laid down, we get about a 9/10 chance that the process will land you on a square
not divisible by ten.
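In symbols, the illustrative distribution assigns probability 2^(-k) to square k, and these weights sum to 1 over k = 1, 2, 3, ...; conditioning on vacancy simply renormalizes the remaining weights over the squares still unoccupied.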
This assumes, however, that there is a process that puts people on squares. Suppose this assumption is false. Then there seems to be no reason to privilege the probability distribution from the first
time the folks are put on squares. And our intuitions now lead to inconsistency: assigning 9/10 to p[1] and 9/10 to p[2].
Where has all this got us? I think there is an argument here that absurdity results from an actual, simultaneous infinity of uncaused objects. But if an actual infinity of objects is possible, and it
is possible to have a contingent uncaused object, then it is very plausible (this is an ampliative inference) that it is possible to have an actual infinity of simultaneous contingent uncaused objects.
Therefore: either it is impossible to have an uncaused object or it is impossible to have an actual infinity of simultaneous contingent objects. But it is possible to have an actual infinity of
simultaneous contingent objects if it is possible to have an infinite past. This follows by al-Ghazali's argument: just imagine at each past day a new immortal soul coming into existence, and observe
that by now we'll have a simultaneous infinity of objects. So, it is either impossible to have an uncaused contingent object or it is impossible to have an infinite past. We thus have an argument for
the disjunction of the premises of the Kalaam argument, which is kind of cool, since both of the premises of the argument have been disputed. Of course, it would be nicer to have an argument for
their conjunction. But this is some progress. And it may be that further thought along these lines will yield more fruit.
37 comments:
A round number being a number divisible by 10, I ask myself how likely I am to be on a round number in your scenario. The obvious answer is 1/10. But having just read your post, another obvious
answer must be that there is no likelihood. Why should there be? I am on some stone, and it is either round numbered or not. Now, I do feel the pull of a tendency to say that there is a
likelihood, and that it is obviously 0.1, but then I ask myself why I feel such a pull.
For myself, it is by analogy with similar finite situations. Were I on a line of very many paving stones, labeled 1, 2, 3 and so forth to some very big number, then the likelihood of my being on
a round numbered stone would be very close to 1/10. I could have arrived at my position by any method imaginable, for all I know, but there are still 9 non-round numbered paving stones for every
round numbered stone, and so my guestimate of 0.1 seems reasonable to me.
But in your scenario, there are infinitely many paving stones, and in your post you have given me a good reason to doubt that I can extrapolate from a similar finite scenario to yours (and in my
background knowledge of Hilbert's Hotel I have more reasons to doubt such extrapolations in general).
I ask myself again why there should be any likelihood at all, and I answer that there was not even much of a reason to guestimate a likelihood in the finite case. My only justification for doing
so was that there seemed to be no problem doing so. And if asked to bet on it, those are the natural odds for me to choose, given how little I know. Still I would wonder, was there something else
I knew in the back of my mind about how the world is which could tell me something more about how I could have got onto the stones and which would give me a better estimate than 9/10.
Your paradox reminds me, incidentally, of quite an old paradox of the infinite, due I think to Galileo: There are clearly twice as many even numbers as natural numbers, but if you double all the
natural numbers, you get just as many even numbers. So twice as many is just as many, which is absurd. Your paradox reminds me of that because although there are nine times as many non-round
numbered paving stones as there are round numbered stones (by analogy with the very large number of stones case, or obviously), there are also the same number (as you can see by multiplying each
number by ten) or rather, slightly fewer (via the hundreds and thousands).
Galileo's paradox was resolved by Cantor, of course; although Cantor's own paradox remains a problem for those who want to argue more positively for the so-called Actual infinite; or rather, for
the natural numbers being collectively Actual infinite (as neither that problem nor the supertask problems, nor any other I know of, is any problem at all for the Actual infinite that is the
number of points in a line of points that is given by the cardinal number 1/0 that I discovered a few years ago and about which I love to go on and on:) Incidentally, Cantor's paradox is an
argument for Open Theism, from the supposition that God is omnipotent and omniscient.
"the people who yesterday were standing on numbers divisible by ten are now standing on numbers not divisible by ten, and vice versa"
Is this possible? Or is that exactly the question you're raising? (I am not familiar with, nor can I find the meaning of, the term 'simultaneous infinity of uncaused objects'.)
Suppose there is an infinite line of paving stones, with labels of 1, 10, 2, 20, 3, 30, ..., 9, 90, 11, 100, 12, 110, ..., 19, 180, 21, 190, 22, 200, ..., on each of which there is a blindfolded
person. You are one of these persons. That's all you know. How likely is it you're on a number not divisible by ten? The obvious answer is: 1/2. But all I have done is to describe the same thing
you described but in a slightly different way (whence the similarity with Galileo’s paradox). You might think that the differing likelihoods following from those different descriptions was
contradiction enough to refute the standard view of the natural numbers, but in view of the Water/Wine paradox, for example, it would arguably be reasonable enough to think that (if the natural
numbers form a set then) there would not be any logical likelihood. There might be a likelihood of 9/10 if your description was more relevant to the likelihood, e.g. because of how the people
were put on the stones, but as you say, there might be no such process. And while your description more accurately described how the stones were laid out (presumably), it seems implausible that
one's likelihood of being on a non-round number labelled stone should depend upon the arrangement of the stones. Would it depend upon one's point of view as one looked at them?
The argument for Open Theism via Cantor’s paradox, incidentally, from God being omnipotent and omniscient, first uses Cantor’s paradox to show that the whole numbers are indefinitely extensible.
Then since God is omnipotent—has the most power He could possibly have—so His knowledge of arithmetic is indefinitely extensible, since only He could make worlds containing numbers of things for
all whole numbers. He would be able to make a world of N things even before He knew the cardinal number N because He would be able to know N and able to make any possible world. And He could also
have, at all times, infallible knowledge of all arithmetical truths if arithmetic is for Him Intuitionistic, if mathematics is something that is divinely created (cf. Divine Command metaethics)
from the basic concepts of a thing and a possibility (which He would always have known perfectly). (And that mathematics is divinely created might also follow from God’s omnipotence.)
If you have an infinite number of stones, each labeled by a natural number, wouldn't the probability of being on a number divisible by 10 be:
P = aleph-0 / aleph-0
Why would it be 9/10? Isn't P undefinable in transfinite math? But if there is no probability of being on a stone divisible by 10, then how can this be? The person must be on some stone. The
following seems plausible:
A) In order for something to be true in the real world there must be a definable probability of it being true.
If A is correct, then we have the reductio. There cannot be an infinite number of stones.
A possible refutation of A is the probability of my cat being yellow. It is, incidentally, true that my cat is yellow. But what is the probability that it is yellow? Given that it is, there is a
trivial probability of 1; but then, that would apply to all truths. So in that sense, A is irrefutably true, but trivial. The probability of you being on a number divisible by 10 is 1 if you are,
and 0 if you are not.
So we are considering other sorts of probability. Let's say that you don't know whether or not I was lying about my cat's colour. What is, for you, the probability that my cat is yellow? Is it
the probability that I was lying? But maybe it is really an orange colour that you would call 'yellow'!
So let's begin again. Is the probability that some cat (about which you know nothing) is yellow 1 in 3, there being three primary colours, or 1 in 10 (3 primary, 3 secondary, 1 tertiary and
black, white and grey)? Or 1 in 22 (as there are striped and mottled coats too), and so forth.
It may be reasonable to say that there is in fact no definite probability of my cat being yellow, but rather lots of probabilities, many of them rather fuzzy (is the chance of something about
which you have no information either way really exactly 0.500000000000..., or is it rather more like 0/0 where that division is given some fuzzy interpretation, such as the famous triangular distribution?)
I think (A) may be false. I think there are at least aleph-0 logically possible worlds. What's the probability of the present world existing?
W = 1 / aleph-0
Does that mean that the probability of our world is 0? If so, it's rather strange that it exists. How can something be true with a 0 probability of it being true?
It may mean that, or it may mean that there is no probability. A probability is a number satisfying the axioms of some mathematical theory of probability. Or, more informally, a probability may
be not a number but high or low, and so again not be a definite number. As to whether the probability is 1/aleph-null, that depends upon whether such a division is defined. It is not standardly
defined, but it may be defined. And it may be defined to be 0, or to be a nonzero infinitesimal. As to what one should do, there are again all sorts of criteria that one might use to judge such
theories. Which are the best criteria? Here we have a risk of an infinite regress; what are the best criteria for deciding which the best criteria are!
Personally, I don't see why a probability of 0 has to mean an impossibility. A simple argument is an endless sequence of coin tosses. That gives us an endless string of heads and tails, perhaps
completely randomly, which is isomorphic to a real number from the interval [0, 1] in binary notation. Now, there is a standard uniform probability distribution over [0, 1], which is just the
unit square. That does not give us the probability of getting any particular result from the endless coin-tosses, but it does take areas to be probabilities, and the area over a single real
number is of course 0, standardly. So it would be quite natural to take the probability of any particular outcome from endlessly tossing a coin to be 0. And of course, all those outcomes are
But maybe there can't be an infinite number of randomly tossed coins. It seems to lead to the strange result that each possible outcome has a probability of 1/aleph-0, but also the outcome itself is undefinable. You could put all the heads in a one-to-one correspondence with all the tails. All the instances with 2 heads in a row could be put in one-to-one correspondence with all the
tails. We could do the same with 3 heads and 4 heads etc.
The result doesn't seem to be definable and the probability would have to be either undefined or 0. But how can something really happen by chance and have an either undefined or zero probability,
if it actually happens?
You try to argue that a zero probability doesn't mean impossibility, but you only do this by setting up an infinite series of coin tosses. Perhaps this is a reductio against the possibility of an
infinite number of coin tosses, as opposed to an argument that 0 probability isn't an impossibility.
Yes, perhaps it is; indeed, I think it is. However, the outcome is not so much indefinable as a particular random sequence. It might be a string of nothing but heads, for example; but even if
there were as many heads as tails, their particular order would define the particular result. But I do think that there are paradoxes involved in such a thing.
Indeed, infinity divided by infinity is, as such, undefined, or an indeterminate form, but that does not mean that when that division crops up in a particular scenario there is not some other way
of defining its particular value in that scenario. The observed frequency may be an indeterminate form whilst the propensity in each case is a definite number, such as 9/10. A similar thing
happens in the calculus (according to d'Alembert), where the differential of a curve at a point is in general 0/0, but for any particular curve takes the value of the gradient of that curve (if
it has one).
A probability with the value of 0 is not necessarily the same as there being no possibility. Similarly, in non-standard analysis there are infinitesimals, which have a zero standard part. Or in
complex analysis, there are complex numbers with a non-zero modulus but a zero real part. A numerical probability depends upon a mathematical theory of probability, whereas a possibility depends
upon a metaphysical theory. So there is this conceptual possibility, of probability 0 of something that turns out to be actual. There may be no reason why you should allow it in your world-view;
but nor does there seem to be a reason why it is absurd.
In a finite space, a probability of 0 probably should mean impossibility, I think (whence our intuitions, perhaps); but when we entertain the concept of an infinite space of possibilities (e.g.
aleph-null logically possible worlds) it is apposite that in general, in mathematics, zero times infinity is an indeterminate form. Insofar as it is not defined within some particular mathematics
(e.g. standard set theory), that does not mean that it is logically indefinable, that it cannot be defined to be a range of values, or even some particular value.
Dr. Pruss,
Thanks very much for pointing me to this interesting paradox. However, I must disagree with your initial remark that a 1/10 probability assignment is "obvious." I must instead opt for what Dr.
Castell pejoratively called the abstention position (Brit J Phil Sci 49, 1998). In other words, as enigMan has already suggested in his first comment, another obvious answer is that there is no
likelihood which may be justifiably assigned.
Now, you seem to interpret this as being a problem with actual infinities, but I rather think it shows us a limitation of probability. We are not guaranteed to always have sufficient information
to begin talking about our propensity to guess correctly certain facts. This paradox uses our intuition against us; but I do not find it a persuasive argument against infinite collections of
physical objects.
Still, it's very interesting. Thanks again!
If we go for the abstention position, then we seem to get this interesting result: In an infinite multiverse, we can't do probability. But science requires probability. Thus, insofar as the
multiverse is a scientific hypothesis, it is self-defeating.
Okay, I can see how that might be an interesting point: An infinite multiverse poses problems for probability assignments insofar as it prevents us from assigning certain values which could
potentially be helpful under a finite multiverse hypothesis. But it seems like you're taking that problem further than I'd be willing to do.
I would make two claims regarding how to temper your view: First, probability assignments regarding physical systems inside our own universe remain demonstrably useful. I think this is pretty
clear from an empirical perspective, and I don't see how we could demonstrate that an infinite multiverse leads us to any logical problems, so long as we stay within the confines of our own
context---this universe. Second, I only find it to be problematic as a scientific hypothesis because it seems not to be falsifiable. So, if you have some other criticism in mind when you call it
"self-defeating," I'd be curious to hear what that is.
Anyway, I don't mean to overwhelm you with comments. I thank you for your responses so far.
The post for the day after this one addresses the in-universe case. If that argument works, then you can't assign probabilities to the outcomes of random processes. And that would be pretty bad,
since science needs to be able to do that.
Your argument is interesting, and makes me think of general objections to probability involving infinite sets. We can't compare the size of say, the integers and the odd numbers since both can be
Cantor-matched as equivalent cardinality. So you're tempted to say, there's no way to talk about probabilities if there are infinite sets involved.
However, consider a coin-toss exercise. You expect 50/50 from a fair coin over time, and whatever appropriate results from doing other things like cards, dice. But suppose the universe is
infinite (and it seems flat, so it supposedly really is.) Now there are Aleph-null people tossing coins, Aleph-null tossings, etc. all over this infinite universe. Aleph null head-landings and
Aleph null tails ... Do you really want to say that this exotic boundary condition makes it impossible for you to "expect" normal outcomes?
Somehow, the proportions for probability are based on intrinsic tendencies of the finite limit and not the idealized set properties. If you "fill it in" by making it infinite, that doesn't change
the proportion (almost like taking dy/dx in reverse.) So suppose I looked at integers between one and ten and the chance of hitting two, we'd say "1/10". If it's numbers made into tenths like 1,
1.1, ... 9.9, 10 then the chance of hitting from say 1.6 through 2.5 is again 1/10. We can keep making it finer, and the ratio holds. Indeed, we can take a range "1.5-2.5" of the continuum (Aleph
= ?!) and it's still intelligible despite there being infinitely many targets.
That's how I look at the problem of constants and features of possible universes. Imagine it being grainy to some fine degree (like 0.001 increments to each spec.), and there's various chances of
this or that. Then cut the grain to 1/10, and so on ... The proportionality should hold. That's what matters, not the limit infinities. Think of it more like the chance a dart would hit one
colored region rather than another on a picture. So yes there are various "chances" we're likely to end up in various universes depending upon the other factors (ie, Bayesian reasoning.) It is
not invalid.
But in the infinite sequence of coin tosses, there are subsequences where it's just heads for a very, very long time, and there are subsequences where it's just tails for a very, very long time.
So how do we know that we're in a place in the infinite sequence where heads and tails are roughly in equal proportion, rather than in one of those portions where it's just heads or just tails?
Well, we might say: There are a lot more portions in the sequence where heads and tails are roughly in equal proportion. But what does "a lot more portions" mean in respect of an infinite sequence?
Thanks for a quick reply, Dr. Pruss! Well you have a point but we could wonder about similar issues of "am I stuck in a statistical fluke" if the universe had "only" 10^1200 such tossings or card
games going on. My point was: do you really think you can't expect a likely outcome, just because you're part of an infinite set of such activities v. a finite one, however large the latter? How
could that be? Our universe likely is infinite and that means infinite copies of us and what we do - should I care? Like I said, the number being infinite should be of no consequence, only the
local proportionality I observe at each successive scale. (That is, ignore the boundary condition.)
If there were a small number of sites, it would make sense to assume that each site is equally likely, and then the number of flukish sites would be a small fraction of the total number of sites,
so the probability of a fluke would be low. But (a) it doesn't seem to make sense to assume all sites are equally likely given an infinite number of sites, and (b) the number of flukish sites is
the same as the number of non-flukish ones.
Moreover, whether it makes sense to talk of the "local", if it includes other close-by universes, depends on what kind of a multiverse we have. A multiverse where the universes are not embedded
in a metric space may not allow one to talk of which universes are local to us.
How about this:
Lets assume we have an infinitely long array of squares. And a fair 6-sided dice.
We roll the dice an infinite number of times and write each roll's number into a square.
When we finish, how many squares have a "1" written in them? An infinite number, right?
How many squares have an even number written in them? Also an infinite number.
How many squares have a number OTHER than "1" written in them? Again, an infinite number.
Therefore, the squares with "1" can be put into a one-to-one correspondence with the "not-1" squares...correct?
Now, while we have this one-to-one correspondence between "1" and "not-1" squares set up, let's put a sticker with an "A" on it in the "1" squares. And a sticker with a "B" on it in the "not-1"
squares. We'll need the same number of "A" and "B" stickers, obviously. Aleph-null.
So, if we throw a dart at a random location on the array of squares, what is the probability of hitting a square with a "1" in it?
What is the probability of hitting a square with a "A" sticker?
The two questions don't have compatible answers, right? So, in this scenario, probability is useless. It just doesn't apply. You should have no expectations about either outcome.
BUT. NOW. Let's erase the numbers and remove the stickers and start over.
This time, let's just fill in the squares with a repeating sequence of 1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,...
And then, let's do our same trick about putting the "1" squares into a one-to-one mapping with the "not-1" squares, and putting an "A" sticker on the "1" squares, and a "B" sticker on the "not-1"
Now, let's throw a dart at a random location on the array of squares. What is the probability of hitting a square with a "1" in it?
What is the probability of hitting a square with a "A" sticker on it?
THIS time we have some extra information! There is a repeating pattern to the numbers and the stickers. No matter where the dart hits, we know the layout of the area. This is our "measure" that
allows us to ignore the infinite aspect of the problem and apply probability.
For any area the dart hits, there will always be an equal probability of hitting a 1, 2, 3, 4, 5, *or* 6. As you'd expect. So the probability of hitting a square with a "1" in it is ~16.67%.
Any area where the dart hits will have a repeating pattern of one "A" sticker followed by five "B" stickers. So the probability of hitting an "A" sticker is ~16.67%.
The answers are now compatible, thanks to the extra "structural" information that gave us a way to ignore the infinity.
In other words, you can't apply probability to infinite sets, but you can apply it to the *structure* of an infinite set.
If the infinite set has no structure, then you're out of luck. At best you can talk about the method used to generate the infinite set...but if this method involves randomness, it's not quite the
same thing.
Alexander, Allen: I accept the intrinsic logical problems with infinite sets per se. Right, we can't compare infinite ratios like we can finite sets. But that's just taking them in the abstract
and asking which is bigger and by how much - and finding we can't, "per se." As we've seen argued, that isn't all there is to it. We need some context. Whether that has to be the idea of
structure that Allen poses or not is perhaps debatable.
So I want people to peel away from the pure abstraction some more and consider again the counter-example of betting in an infinite universe. There are infinite cases of the tosses etc. yet still
we expect (and really do find) the appropriate probabilities. That is supposed to show that the pure abstract argument is inadequate by reductio. We should not imagine, as Mr. Pruss seems to,
that the counter-argument must be wrong because the abstraction is unassailable.
I think that taking the limit of fine graining even if the "real case" goes to continuum is also valid, and there could be more.
Here's a final challenge to taking problems with infinity too seriously, albeit it regards "potential infinity" and not the expressed set per se: probability theory itself is often couched in terms of how the frequentist relative proportions converge to a distinct ratio as we approach an infinite number of trials (presumably Aleph null but could be others I suppose.) In a logically possible world I could keep tossing "forever" and most thinkers would say, I can expect whatever chances per ordinary probability theory. (Note: some thinkers find problems with extension into
an infinite past, e.g. "I was always doing it and never started ...." One could argue it's just a time reversal of the former.)
Few would think that approaching infinity as the idea of the definition in probability, invalidates the essential concept. (However, note that probabilistic claims are not strictly falsifiable in
Popperian terms, since no particular run (and a stated "set of runs" is of course just a fragmented "run") can be dispositive! It's a judgment call, FWIW ...) So it seems to be a valid concept.
The pure abstractions of set theory are inadequate as a critique.
BTW, I recommend Rudy Rucker's Infinity and the Mind for mind-bending excursions into the transfinite world.
"There are infinite cases of the tosses etc. yet still we expect (and really do find) the appropriate probabilities."
The "really do find" depends on the claim that we are actually in an infinite universe, which we do not know. The "expect" could be explained away as an expectation unduly generalized from finite
As for frequentism, unless there are well-defined probabilities, there is no guarantee that there are any limiting frequencies. The limit might just not exist. (For instance, in the pattern
010011000011110000000011111111..., there is no limiting frequency of 1's.) Even if there are well-defined probabilities, the existence of the limit is not logically guaranteed--it is only
guaranteed to exist with probability 1.
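(To see this for the displayed pattern — blocks of 0s and 1s of doubling length — note that the running frequency of 1's returns to 1/2 at the end of each block of 1's but sinks toward 1/3 by the end of each block of 0's, so it oscillates forever and never settles.)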
OK, what if we were in an infinite universe. Do you think probability would cease to be either meaningful or a tool for study and expressing expectations? Really? Why should it matter, whether or
not there's all that "out there," to what happens here?
Why should it matter if the universe is infinite? Because of this argument (and the ones in my other posts on the issue). :-)
I came to the question of the Kalaam argument with very strong intuitions that an infinite universe is possible. I find the conclusions of arguments, like the present one, against the possibility
of an infinite universe to be very counterintuitive. It seems almost obvious that there could be an infinite universe.
But sometimes arguments force one to accept one thing that is counterintuitive, in order to escape another that is even more counterintuitive. Even so, I am reluctant to accept the
conclusions--that's just a psychological statement about me. But I don't know of a plausible refutation of my arguments that lead to the conclusion that it's impossible to have an infinite
universe (or, more precisely, that it's impossible for there to be an infinite number of items in the past of any item).
This comment has been removed by the author.
OK ... I can't blame you for wanting to take seriously the implications of an argument that seems valid. I'm not sure your argument about the stones has the implication that probability for
infinite sets (given the actual context) simply must be absurd in all cases. All I can say is, the counterargument e.g. that local context can't be invalidated by infinite boundary conditions
also seems valid, so we have conflict over what to accept. The limit graining argument seems valid to me also.
And again, consider that "0/0" is in itself absurd, yet dy/dx somehow makes sense. Have you asked colleagues about this argument? I wonder if "surreal numbers" and other ways of handling infinities
and infinitesimals (as in non-standard analysis) could help out here.
I have a further argument directly about the paving stones. It is very much like my previous arguments. Let's say we had groups of ten blindfolded people each, and indeed an infinite number
(A-null) of those groups of ten. You're in one of them. Each group is assigned to a set of stones: 1-10, 11-20, ... 4361-4370, .... You don't know which set you end up on. So I expect that I and
my nine comrades will end up arranged "randomly" on some set of stones.
How could I not think it plausible that I have a "1/10 chance" of ending up on #10, or maybe # 23,780, etc? After all, each group doesn't have to give a hoot - in advance of any further meddling
- whether there are other groups at all, or even other stones! Their existence is not "felt" by my group, how could it be?
Now sure, if you rearrange people that changes the chances but almost by circular argument. You're taking the people who were on e.g. 1, 2, ... 9, 11, 12, ... 137, 138, 139, 141, ... and
switching them with those on 10, 20, 30, ... I know you can do that, it's just another Hilbert Hotel. But you changed something from what it was before. If we accepted the original probability as
accurate and random, then I should believe I was likely not on a decadal stone. Hence it would make sense to change, regardless of it being done in that manner.
So I think it is the argument justifying the original probability that matters. Sure, after the shuffle the sets are "equivalent" but the second set wasn't "used" to establish the probability
directly. I don't know how else to handle this. I admit it is perplexing and we have conflicts of interest here over two compelling ways to look at it: it's a paradox!
I want to put this problem up at my own blog. Is the stones argument specifically yours Dr. Pruss, and in any case I'd like a way to cite. tx
So, start with the groups 1-10, 11-20, etc. Maybe each group is defined by the fact that they have the same color of shoes. (There are infinitely many colors, I suppose.)
Question: How likely is it that your number is divisible by 10? You want to say: 1/10.
Fine. But keep the very same people, but recolor the shoes. So now the people with numbers 1,2,3,4,5,6,7,10,20 have the same color of shoes; the people with numbers 8,9,11,12,13,14,15,16,30,40
form another group, with new shoe colors; so do the people with numbers 17,18,19,21,22,23,24,25,50,60, etc. You get the point. After recoloring the shoes, two out of ten people in each group have
a number divisible by 10. So using the grouping method, we now conclude that the probability you have a number divisible by 10 is 2/10=1/5.
But your probability of being on a number divisible by 10 should not change when someone repaints your shoes!
Perhaps, though, you think there is something special about a case where the grouping is done spatially, rather than by color. I don't know exactly why. Though, I kind of feel a pull of that
claim when the grouping is done temporally.
I expect problems like this have been produced independently by lots of people, but no, I didn't get my initial stones scenario from anybody else.
Yes of course Alexander, a regrouping attempt can rearrange the people and make the expectation different than before. But if my original reasoning is sound, and since it contradicts what you
consider the implications of regrouping - then we have a paradox. Maybe it just can't be resolved, that's the sort of problem it is.
Perhaps you can write up the problem and with my attempted rebuttal, and see what splash it makes. (I'd love to get credit even as informal citation.) I will put something up at my blog
meanwhile, but go ahead and write it up if you wish.
This stuff is tough. Right now, my project is a modality book. Once that's out the door--it's due Sept. 15--I can get on to other publication projects, and this is one option. I am also thinking
of running a conference on probability and infinity, with some good people (I have two really good people who in principle agreed to come). If I run the conference, I might just present this
stuff at it, and then probably there'd be a book from the conference.
Hi again, Alex. I notice that at the end of last month you wrote: "I came to the question of the Kalaam argument with very strong intuitions that an infinite universe is possible. I find the
conclusions of arguments, like the present one, against the possibility of an infinite universe to be very counterintuitive. It seems almost obvious that there could be an infinite universe."
I'd like to reiterate my observation that such arguments as this, even if they do work, do not show that an infinite universe is impossible. They would seem to show most directly that simple,
countable infinities of physical objects are impossible. But mathematicians have long believed that the natural numbers (the intuitive ones, not the formal ones within ZF) might be indefinitely
extensible. If so then we could only have finite numbers of ordinary objects in an infinite space, but we could have the infinite space. The reason why we could not use all that space to get a
simple transfinity of objects would be the nonsensicality of such transfinities.
I've thought about this observation quite deeply over several years, and I can see why most people (myself included, most of the time) do not make it. But it appears to be correct.
Different locations in a non-Euclidean space can have different geometrical properties. If so, then an infinite space on its own, assuming we're substantivalists about space (if we're
relationalists, I don't know that the hypothesis of an infinite space without infinitely many objects or objects of infinite extent makes sense) can be enough to generate the sorts of problems
involved here. Instead of people on different locations, just imagine different bits of space with different local geometrical properties.
This comment has been removed by the author.
I wouldn't suggest we have to take what physicists believe with no grains of salt, but: most of them think the spatial extent of our universe is infinite, due to GR considerations. IOW, it is
neither just an expanding cloud of stuff with empty space beyond (an explosion in classical space) nor a closed hypersphere of finite volume.
I do accept that the paradoxes discussed here are thought-provoking and give pause to glib acceptance of the reasonableness of infinite sets of real objects. However, if space is infinite it's
hard to imagine it not being populated with the same kinds of objects "forever" into space (or else we'd have a special boundary, even if "pure space" could go on.) That means an Aleph_null of
every card game, whatever. And as far as I'm concerned the probabilities are still what they're "supposed to be."
If we put people on squares, I guess it depends on how we do that and not the sheer logic of set theory that matters most. Note also in cases like coin tossing there is an intrinsic "generator"
of the probabilities, not just a hollow comparison between outcome sets. But that doesn't help my own argument about expectations for possible worlds as much as I'd like ...
Dr. Pruss,
I agree with hatsoff that your conclusion to this thought experiment is too hasty. Rather than pointing to problems with infinities, this more likely points to problems with assigning
probabilities. Such problems are nothing new. Our intuitions about "natural" distributions can become confused as setups become more complicated. Faced with an apparent paradox involving
assignments of probabilities, I would much sooner give up on probabilities than leap to such a sweeping metaphysical conclusion.
In this particular situation it is important to realize that in the second case we have additional information, and it is only natural to expect that additional information changes our
probability assignments. At least, it is natural on the view that probability is a function of our ignorance and the measure of our uncertainty. I don’t actually see an inconsistency in assigning
p=9/10 in the first case (where all we know is the state of affairs today) and p1=9/10, p2=1/10 in the second case (where we know about the teleportation). Our knowledge of the universe has
changed – therefore, our probability assignments must change too.
In fact, your conundrum is not specific to scenarios involving infinities. Consider this modified version of the thought experiment:
(a) There are 10 paving stones labeled 1-10, on each of which there is a blindfolded person. You are one of these persons. That's all you know. How likely is it you’re not on number 10? The
obvious answer is: 9/10.
(b) But now I give you a bit more information. Yesterday, all the same people were standing on the paving stones, but differently arranged. At midnight, all the people were teleported, in such a
way that you end up on number 10. Should you change your estimate of the likelihood you're on number 10?
What makes your original and my modified scenarios similar is that additional information that you receive in (b) modifies your initial ignorance-based assignment of probabilities. In your case
the effect is achieved by a non-uniform mapping of the initial distribution, which would not be possible in my case. But that detail does not seem significant. What makes the two experiments
essentially similar is that our knowledge (or our ignorance) changes from (a) to (b), and our probability assessment follows.
In the two paragraphs starting with "Here is a suggestion..." and "This assumes, however, that there is a process...," you hint at the Principle of Indifference. The Principle of Indifference, or
some generalized form of it, is pretty much the only game in town when it comes to objectively assigning prior probabilities. However, it has known challenges: apparent paradoxes not unlike the
one you outlined here (see for instance the Bertrand paradox). These issues are not necessarily related to infinities.
This comment has been removed by the author.
Physics Forums - View Single Post - What is the curl of a electric field?
This should be simple but I know I'm going wrong somewhere and I can't figure out where.
The curl of an electric field is zero,
i.e. [itex]\vec { \nabla } \times \vec { E } = 0[/itex]
Because no set of charges, regardless of their size and position, could ever produce a field whose curl is not zero.
Maxwell's 3rd Equation tells us that,
the curl of an electric field is equal to the negative partial time derivative of the magnetic field [itex] \vec {B}[/itex].
i.e. [itex]\vec { \nabla } \times \vec { E } = -\frac { \partial }{ \partial t } \vec { B } [/itex]
So is the curl zero or is it not? If we equate those two equations we get that the time derivative of the magnetic field is zero. What's wrong? What am I missing?
Probability please help
I have two probability story problems that I can't figure out. Here they are:
Mrs. Miller manages a restaurant which is currently hiring. On Friday she interviewed 6 dishwashers, 1 line cook and 5 waiters. On Saturday she interviewed 3 dishwashers, 3 line cooks and 6
waiters for employment. Each day, one woman applied for a job, while the rest of the applicants were men. How probable is it that at least one of the women was a dishwasher?
On Monday this week Mr. Munton had a meeting at the Cultural Center, and on Friday he had another meeting. Of the conference rooms which his company uses, 3 face North, 3 face South, 2 face East
and 4 face West. All the rooms receive equal use. How probable is it that although he went to meetings on both Monday and Friday, Mr. Munton only saw the view from one side of the Cultural
Please show me which probability formulas you used and a detailed example of how you got the answer you did. Thanks.
Hello euler42
Welcome to Math Help Forum!
1) On Friday, 6 out of 12 interviewees were dishwashers. If one of these 12 is chosen at random as the one woman, the probability that she is a dishwasher is $\tfrac{6}{12} = \tfrac12$.
Similarly the probability that the woman is a dishwasher on Saturday is $\tfrac{3}{12}=\tfrac14$.
The simplest way to calculate the probability that at least one of these events occurred is to calculate the probability that neither of them did, and subtract the result from 1. So, the
probability that the woman was not a dishwasher on Friday = $1 - \tfrac12=\tfrac12$; and the probability that the woman was not a dishwasher on Saturday was $1 - \tfrac14 = \tfrac34$. Therefore
the probability that neither of these happened is $\tfrac12\times \tfrac34 = \tfrac 38$.
So the probability that at least one was a dishwasher is $1 - \tfrac38 = \tfrac58$.
2) The probability that he faced North on both days is $\tfrac14\times\tfrac14 = \tfrac{1}{16}$.
The probability that he faced East on both days is $\tfrac16\times\tfrac16 =\tfrac{1}{36}$.
Similarly the probability that he faced South on both days is ...? ...and West ...?
These four events are mutually exclusive (i.e. if one of them occurs, then none of the others occurs as well). So we can add their probabilities together to find the probability that one of these
occurred. I'll leave the rest to you.
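(For checking your working once you've finished: South contributes $\tfrac14\times\tfrac14 = \tfrac{1}{16}$ and West contributes $\tfrac13\times\tfrac13 = \tfrac19$, so the total comes to $\tfrac{1}{16}+\tfrac{1}{16}+\tfrac{1}{36}+\tfrac19 = \tfrac{38}{144} = \tfrac{19}{72}$.)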
• Units of Work
• Number and Algebra
• Level Four
This unit begins with Freudenthal’s (1983) annihilation model for demonstrating the addition and subtraction of integers, then goes on to introduce other representations. It is designed for students working at Stage 7 of the Number Framework who are able to choose appropriately from a broad range of mental strategies to estimate answers and solve addition and subtraction problems.
• Units of Work
• Number and Algebra
• Level Four
In this unit students are asked to investigate mathematical relationships when they select various numbers to be used in 3-circle and 6-circle configurations. They will explore both whole numbers and fractional numbers as they consider how various configurations impact on the relationships between numbers.
• Problem Solving Activities
• Number and Algebra
• Level Six
This is a problem from the number and algebra strand.
• Problem Solving Activities
• Number and Algebra
• Level Three
Select the appropriate number operation(s) to solve problems.
Devise and use problem solving strategies (act it out, draw a picture, organised list)
• Problem Solving Activities
• Number and Algebra
• Level Three
Explain face, place and total value of numbers
Solve 3-digit subtraction problems
Devise and use problem solving strategies (guess and check, think logically)
• Problem Solving Activities
• Number and Algebra
• Level Two
Give change for sums of money
Solve subtraction problems presented in different forms
Devise and use problem solving strategies to explore situations mathematically (guess and check, use drawing, use equipment, be systematic, act it out).
• Units of Work
• Number and Algebra
• Level Three
In this unit students are asked to investigate mathematical relationships when they select various numbers to be used in 3-circle and 6-circle configurations. They will explore both whole numbers and fractional numbers as they consider how various configurations impact on the relationships between numbers.
• Units of Work
• Number and Algebra
• Level Three
In this unit we look at a range of strategies for solving addition and subtraction problems with whole numbers with a view to students anticipating from the structure of a problem which strategies
might be best suited.
• Units of Work
• Number and Algebra
• Level Three
This unit introduces students to "The Difference Bar" digital learning object, a tool to help students work out the difference between two numbers by breaking numbers into parts.
This unit is for students at Stage 6-Advanced Additive of the Number Framework, Curriculum Level 3, i.e. students who have developed more than one part-whole strategy for addition and subtraction and
can use these strategies to solve problems involving large numbers.
This digital learning object has two versions, one where difference problems are generated and one where students and teachers can make up their own difference problems. The problems at this level
involve solving the difference between 2 two-digit numbers.
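For example, a student finding the difference between 53 and 27 with this tool might bridge through a tidy number: 27 + 3 = 30 and 30 + 23 = 53, so the difference is 3 + 23 = 26.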
• Units of Work
• Number and Algebra
• Level Three
This unit introduces students to "The Take-Away Bar" digital learning object, a tool to help students solve subtraction problems by breaking numbers into parts.
This unit is for students at Stage 6-Advanced Additive of the Number Framework, Curriculum Level 3, i.e. students who have developed more than one part-whole strategy for addition and subtraction and
can use these strategies to solve subtraction problems involving large numbers.
This digital learning object has two versions, one where subtraction problems are generated and one where students and teachers can make up their own subtraction problems. The problems at this level
involve subtracting a two-digit number from a larger two-digit number.
How to solve a trigonometric function
Hey everybody.
I have the following trig function and I have to simplify it so that it will only have $\sin\theta$ in it. Can you show me how to do that?
$9.78049\left(1 + 0.005288\sin^2\theta - 0.000006\sin^2 2\theta\right)$ (m/s$^2$)
I tried pulling out a $\sin\theta$, but that only seemed to complicate the problem because once I did that I had to simplify the $\sin 2\theta$ term and I just couldn't find a way to do it!
Is there another way to approach this problem? I've exhausted all sources and my book has the most basic concepts possible!
Any help is appreciated,
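One route worth trying is the double-angle identity: since $\sin 2\theta = 2\sin\theta\cos\theta$, we have $\sin^2 2\theta = 4\sin^2\theta\cos^2\theta = 4\sin^2\theta\left(1-\sin^2\theta\right)$. Substituting this in gives $9.78049\left(1 + 0.005264\sin^2\theta + 0.000024\sin^4\theta\right)$ m/s$^2$, which contains only powers of $\sin\theta$.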
Class ConditionalSumOfSquares
All Implemented Interfaces: ARMAFit
public class ConditionalSumOfSquares
extends Object
implements ARMAFit
The method Conditional Sum of Squares (CSS) fits an ARIMA model by minimizing the conditional sum of squares. The CSS estimates are conditional on the assumption that the past unobserved errors are zeros. The estimates produced by CSS can be used as a starting point for a better algorithm, e.g., maximum likelihood estimation.
Note that the order of integration is taken as an input, not estimated. The R equivalent function is arima.
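Concretely, in the usual CSS formulation (the exact internal convention of this class is not spelled out on this page), the ARMA(p, q) residuals are computed recursively as ε(t) = x(t) − φ1 x(t−1) − ... − φp x(t−p) − θ1 ε(t−1) − ... − θq ε(t−q), with the pre-sample values x(t) and ε(t) for t ≤ 0 taken to be 0 — the "conditional" assumption above — and φ, θ are chosen to minimize S(φ, θ) = Σt ε(t)².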
See Also:
"P. J. Brockwell and R. A. Davis, "Chapter 8.7, Model Building and Forecasting with ARIMA Processes," Time Series: Theory and Methods, Springer, 2006."
Constructor Summary

ConditionalSumOfSquares(double[] x, int p, int d, int q)
    Fit an ARIMA model for the observations using CSS.

ConditionalSumOfSquares(double[] x, int p, int d, int q, int maxIterations)
    Fit an ARIMA model for the observations using CSS.
│ Method Summary │
│ double │ AIC() │
│ │ Compute the AIC, a model selection criterion. │
│ double │ AICC() │
│ │ Compute the AICC, a model selection criterion. │
│ Matrix │ covariance() │
│ │ Get the asymptotic covariance matrix of the estimated parameters, φ and θ. │
│ ARMAModel │ getARMAModel() │
│ │ Get the fitted ARMA model. │
│ ARIMAModel │ getModel() │
│ │ Get the fitted ARMA model. │
│ int │ nParams() │
│ │ Get the number of parameters for the estimation/fitting. │
│ ImmutableVector │ stderr() │
│ │ Get the asymptotic standard errors of the estimated parameters, φ and θ. │
│ String │ toString() │
│ double │ var() │
│ │ Get the variance of the white noise. │
public ConditionalSumOfSquares(double[] x,
int p,
int d,
int q,
int maxIterations)
Fit an ARIMA model for the observations using CSS. Note that the algorithm fits only an ARMA model. d is taken as an input. If the differenced input time series is not zero-mean, it is first
de-mean-ed before running the algorithm as in Brockwell and Davis. When reporting the model, we compute the intercept to match the mean.
x - the time series of observations
p - the number of AR terms
d - the order of integration
q - the number of MA terms
maxIterations - the maximum number of iterations
public ConditionalSumOfSquares(double[] x,
int p,
int d,
int q)
Fit an ARIMA model for the observations using CSS. Note that the algorithm fits only an ARMA model. d is taken as an input. If the differenced input time series is not zero-mean, it is first
de-mean-ed before running the algorithm as in Brockwell and Davis. When reporting the model, we compute the intercept to match the mean.
x - the time series of observations
p - the number of AR terms
d - the order of integration
q - the number of MA terms
Method Detail

public int nParams()
Get the number of parameters for the estimation/fitting. They are the AR terms, MA terms, and variance (sigma^2).
the number of parameters
public ARIMAModel getModel()
Description copied from interface: ARMAFit
Get the fitted ARMA model.
Specified by:
getModel in interface ARMAFit
the fitted ARMA model
public ARMAModel getARMAModel()
Get the fitted ARMA model.
the fitted ARMA model
public double var()
Description copied from interface: ARMAFit
Get the variance of the white noise.
Specified by:
var in interface ARMAFit
public Matrix covariance()
Get the asymptotic covariance matrix of the estimated parameters, φ and θ. The estimators are asymptotically normal.
Specified by:
covariance in interface ARMAFit
the asymptotic covariance matrix
See Also:
"P. J. Brockwell and R. A. Davis, "Eq. 10.8.27, Thm. 10.8.2, Chapter 10.8, Model Building and Forecasting with ARIMA Processes," Time Series: Theory and Methods, Springer, 2006."
public ImmutableVector stderr()
Get the asymptotic standard errors of the estimated parameters, φ and θ. The estimators are asymptotically normal.
Specified by:
stderr in interface ARMAFit
the asymptotic errors
See Also:
"P. J. Brockwell and R. A. Davis, "Eq. 10.8.27, Thm. 10.8.2, Chapter 10.8, Model Building and Forecasting with ARIMA Processes," Time Series: Theory and Methods, Springer, 2006."
public double AIC()
Compute the AIC, a model selection criterion.
Specified by:
AIC in interface ARMAFit
the AIC
See Also:
public double AICC()
Compute the AICC, a model selection criterion.
Specified by:
AICC in interface ARMAFit
the AICC
See Also:
"P. J. Brockwell and R. A. Davis, "Eq. 9.2.1, Chapter 9.2, Model Building and Forecasting with ARIMA Processes," Time Series: Theory and Methods, Springer, 2006."
public String toString()
toString in class Object
One biased coin
Topic review (newest first)
2013-03-26 22:08:10
Okay, very good!
2013-03-26 21:59:59
2013-03-26 21:34:29
Do you see how each value for the formula is computed?
2013-03-26 21:31:32
Thanks, it's wonderful
2013-03-26 21:15:37
Please look at post #20.
2013-03-26 21:12:51
X occur?
2013-03-26 21:08:58
X = hhhhh
2013-03-26 21:00:55
Please teach me the formula
2013-03-26 20:59:39
Hi Agnishom;
There is a formula to do these. You just plug in. This makes it very simple.
2013-03-26 20:58:09
Okay 50 thanks
Anyway, what I actually need to know is how to do the problem...
Which I can't make any idea of
2013-03-26 20:49:57
Agnishom wrote:
Now, what is the probability that this coin is biased?
That is what his original question asked for in post #1.
Just take the complement. 9 / 41 and add that. You get 50.
2013-03-26 20:46:42
Everything is good except the fact that you got the guy the probability of the coin being biased!
2013-03-26 20:43:54
Wait a minute! This question say fair coin. Your original question asked for biased coin.
2013-03-26 20:42:47
Okay I attach it.
2013-03-26 20:38:16
It seems you posted while I was posting.
It is interesting how weird a result it can produce!
A135017 - OEIS
%S 0,0,1,0,2,1,3,5,7,15,20,48,60,156,205,489,761,1572,2796,5357,10174,
%T 19021,37272,69375,137759,258444,513696,976890,1934900,3727164,
%U 7358675,14316861,28217028,55288907,108942267,214462953,422973649
%N a(n) = number of strings of length n that can be obtained by starting with abc and repeatedly doubling any substring in place and then discarding any string that contains two successive equal
%C These strings may be regarded as the "primitive" strings among those enumerated by A135473.
%C Equals the inverse binomial transform of A135473.
%H <a href="/index/Do#repeat">Index entries for doubling substrings</a>
%F Empirically, grows like 2^n.
%e n=3: abc
%e n=4: -
%e n=5: ababc, abcbc
%e n=6: abcabc
%e n=7: abababc, ababcbc, abcbcbc
%Y Cf. A135473.
%K nonn
%O 1,5
%A _David Applegate_ and _N. J. A. Sloane_, Feb 12 2008
%E Extended to 37 terms by David Applegate, Feb 16 2008 | {"url":"http://oeis.org/A135017/internal","timestamp":"2014-04-19T04:54:53Z","content_type":null,"content_length":"8762","record_id":"<urn:uuid:36066969-2865-4684-ae3d-ada6bf6bfdfd>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00160-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Rocket in the Air
December 5th 2009, 05:35 AM #1
Dec 2009
A Rocket in the Air
A rocket loaded with fireworks is to be shot vertically upward from the ground level with an initial velocity of 200 feet per second. When the rocket reaches a height of 400 feet on its upward
trip, the fireworks will be detonated. How many seconds after liftoff will this take place?
Hello, sologuitar!
A rocket loaded with fireworks is to be shot vertically upward from the ground level
with an initial velocity of 200 feet per second.
When the rocket reaches a height of 400 feet on its upward trip, the fireworks will be detonated.
How many seconds after liftoff will this take place?
We are expected to be familiar with this formula: . $h \:=\:200t - 16t^2$
. . where $h$ is the height of the rocket (in feet) $t$ seconds after liftoff.
When is $h \,=\,400$ ?
We have: . $200t - 16t^2 \:=\:400 \quad\Rightarrow\quad 2t^2 - 25t + 50 \:=\:0$
. . which factors: . $(2t-5)(t-10) \:=\:0$
. . and has roots: . $t \:=\:\tfrac{5}{2},\:10$
The fireworks will be detonated $2\tfrac{1}{2}$ seconds after liftoff.
After providing the formula, the question becomes crystal clear.
I have a serious problem with this question! If an object is thrown upward, perhaps from a catapult, the only force acting on it is gravity and that formula applies. But the whole point of a
rocket is that the rocket engine continues to fire so there is an additional upward force on it. If this really were a rocket we would have to know the thrust from the rocket engine, which is not
given, to answer the question.
Yes but...
Yes, but remember that a detailed rocket math question is found or taught in physics. This question comes from my precalculus textbook. The author is just interested in knowing if students can
manipulate equations.
It's still a bad problem. The student is expected to assume things that are contrary to the information given.
3rd time I need help on this! :(
September 20th 2007, 02:55 PM #1
Junior Member
Sep 2007
Oregon, West Coast ^_^
I still don't understand these. Can someone help me on these equations? And show me how to do them step by step so I can do them on my own in the future?
$3x-\frac{1}{4}=-\frac{5}{11}$

$\frac{3x}{4}+2=4x-1$
and this
Complete the table, and write a rule relating x and y (write an equation)
First, you could clear the fractions by multiplying by 4 and 11, or 44.
$132x - 11 = -20$
Next, get x by itself, adding 11 to both sides, and then dividing by 132...
$132x = -9$
$x = -\frac{9}{132}$
Remember, you can add or subtract the same value to both sides of the equation, and it stays equal. Not just numbers, but variables too, like x, or 5x, etc. And you can multiply or divide every
term on both sides of the equation by the same value. Again, number and variables work.
$\frac{3x}{4}+2=4x-1$
Try it on this one.
1. Clear the fraction by multiplying.
2. Get the x terms on one side, and the numbers on the other, by adding and subtracting as needed.
3. Get x by itself by dividing.
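For reference (worked out here, not in the original reply), those three steps on this equation give:

$\frac{3x}{4}+2=4x-1 \;\Rightarrow\; 3x+8=16x-4 \;\Rightarrow\; 12=13x \;\Rightarrow\; x=\frac{12}{13}$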
and this
Complete the table, and write a rule relating x and y (write an equation)
Well, the rule or equation will be linear, and so should be of the form:
$y = mx + b$
You need to figure out m and b. Use the entries where both x and y are given (there are four of them) to do this. If you pick the right one, you should be able to get a value for b. Then, use one
of the other three to find m.
You'll need to use the techniques from the first two problems when solving for b and m.
Summary: J. Phys. A: Math. Gen. 28 (1995) 3567-3578. Printed in the UK

Numerical investigation of correlation functions for the U_q SU(2) invariant spin-1/2 Heisenberg chain

Peter F Arndt and Thomas Heinzel
Physikalisches Institut, Universität Bonn, Nußallee 12, 53115 Bonn, Germany
Received 8 March 1995

Abstract. We consider the U_q SU(2) invariant spin-1/2 XXZ quantum spin chain at the roots of unity q = exp(iπ/(m + 1)), corresponding to different minimal models of conformal field theory. We conduct a numerical investigation of the correlation functions of U_q SU(2) scalar two-point operators in order to find which operators in the minimal models they correspond to. Using graphical representations of the Temperley-Lieb algebra we are able to deal with chains of up to 28 sites. Depending on q, the correlation functions show different characteristics and finite-size behaviour. For m = 2/3, which corresponds to the Lee-Yang edge singularity, we find the surface and bulk critical exponent -1/5. Together with the known result in the case m = 3 (Ising model) this indicates that in the continuum limit the two-point operators involve conformal fields of spin-s. For other roots of unity q the chains are too short to determine the surface and bulk critical exponents.

1. Introduction

We consider two-point correlation functions for a class of one-dimensional quantum models on a chain of N sites defined in terms of the Hamiltonian [1,2]
Equation of a Straight Line
I've never heard of any of the equations in this thread. In my studies I have always used

where ζ is Riemann's zeta function, Γ(x) is the gamma function, ∇ is the del operator, L^{-1} denotes the inverse Laplace transform, T_n is the nth Chebyshev polynomial of the first kind, C is a simple closed curve bounding a region having z = a as an interior point, σ^m is a simplex of an oriented simplicial complex and [σ^m, σ^{m-1}] is an incidence number, S is a compact, orientable, differentiable k-dimensional manifold with boundary in E^n, ω is a (k - 1)-form in E^n, defined and C^1 at all points of S, and η(x) is Dirichlet's eta function.

(Sorry for stealing your joke, Ricky.)
Last edited by Zhylliolom (2006-08-07 12:11:03) | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=39104","timestamp":"2014-04-16T16:01:54Z","content_type":null,"content_length":"40426","record_id":"<urn:uuid:63dfc031-2e9b-43cf-80f8-d5c01a393833>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00538-ip-10-147-4-33.ec2.internal.warc.gz"} |
January/February 2006
Roll Model
Simulating a Savings Plan Account Balance
By Richard M. Rasiej
Could stochastic modeling help Social Security actuaries achieve more accurate forecasts than traditional deterministic methods? It’s an idea they’ve been looking at for a few years. Here, in a
different context, is basically how stochastic modeling works.
IN 2003, the Social Security trustees began including, as an appendix to their annual report on the system’s financial condition, the results of stochastic projections of the system’s long-range
finances in an effort to illustrate the uncertainty of the results of the traditional deterministic valuation using the intermediate (best estimate) assumption set.
Any time we talk about mathematical modeling, we’re talking about some type of system in which behavior can be abstracted in a way that allows meaningful analysis. Let’s take the example of a thrift
or 401(k) savings plan that currently contains $1,000. Suppose we’re interested in determining what the balance would be five years from now, assuming no additional contributions.
Before we proceed, it’s important to note that this example is not intended to demonstrate how the actuaries in the Social Security Administration do their modeling. Rather, it’s intended to provide
the reader with an example of how stochastic modeling could be performed to study a much simpler system.
If we were modeling the balance in a purely deterministic way, we would be given (or we would assume) some annual rate of growth, say 6 percent. We would then take the $1,000 and multiply it by 1.06
five times in order to obtain an answer of $1,338.23.
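In code, the deterministic projection is a one-liner (a sketch in Python, not part of the original article):

balance = 1000 * 1.06 ** 5   # 1338.2255..., i.e. $1,338.23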
If we were interested in performing what’s called sensitivity analysis, which is basically the methodology being used when the term “scenario testing” is employed, we might want to determine what the
five-year ending balance would be if the growth rate were -2 percent or 12 percent (instead of the assumed 6 percent). In the first case, we would calculate an ending balance of $903.92; in the
second case, it would be $1,762.34.
Now let’s take a look at what would be involved in performing a stochastic simulation of the five-year ending balance. “Stochastic” is simply a synonym for “probabilistic.” The dictionary says it
comes from the Greek word stochastikos or “proceeding by guesswork.” Mathematically, it’s defined as “a process in which a sequence of values is drawn from a corresponding sequence of jointly
distributed random variables.”
In stochastic simulations, we choose inputs into the model (in this case, the rate of growth) according to some mechanical rules that reflect the probability of that input actually being observed.
Stochastic modeling and Social Security
STOCHASTIC (OR PROBABILISTIC) MODELING generally refers to the use of probabilistic and statistical methods to simulate numerous possible evolutionary paths for the important independent variables in
the system being studied. This contrasts with deterministic modeling, in which the path of each independent variable is fixed by the assumed values assigned to that variable. The basic tools of
stochastic analysis are regression analysis, time series analysis, and mathematical simulation.
The simulation itself (often referred to as a Monte Carlo simulation) is performed by repeating a sequence of trials a large number of times (5,000 or even 10,000 trials are common). Each trial
consists of a number of steps:
• Choosing the values to be used for each independent variable (or choosing the amount of random fluctuation to add to an independent variable);
• Running the projection of trust fund balances
• Saving the result.
This process is repeated for a large number of trials. The results of each trial are then tabulated and ordered so that statistical inferences can be made. If, for example, 10,000 trials were run to
the year 2030 and the ending trust fund balance was positive in 9,750 of these trials, we would interpret that result to mean that there is a 97.5 percent probability of having a positive balance in
the year 2030.
An important potential use for stochastic modeling of the Social Security system is the analysis of various reform options before the U.S. Congress. For example, in the case of recent proposals to
partially privatize Social Security through the use of individual investment accounts, stochastic modeling could be used to try to answer the following question: Find an age “X” so that if a worker
invests “w” percent of wages into an individual investment account from age “X” to normal retirement age, there is a 95 percent probability that at the worker’s normal retirement age, the worker will
have combined benefits (from a redefined Social Security reduced benefit and the annuity purchased from the individual investment account) at least as great as under current law.
—Richard Rasiej
Rolling the Die
To restate our problem, let’s suppose we wanted to determine the range of ending account balances in a thrift savings plan five years from now, assuming that we start with $1,000 and that we have
only one asset class (investment category or mutual fund) in which to invest. We also assume that the only possible annual returns are -4 percent, 0 percent, 4 percent, 8 percent, 12 percent, and 16
percent, each equally likely. We could then simulate each five-year period with five consecutive rolls of a standard die, with faces numbered one to six. We’ll interpret a one as indicating a -4
percent return, a two as a 0 percent return, a three as a 4 percent return, and so on.
We can begin the first simulation by rolling the die five times. Suppose the rolls are three, five, one, one, and four. Then, at the end of the first year, the $1,000 has grown by 4 percent, to
$1,040. During the second year, the fund grows by 12 percent, to $1,164.80. During the third year the fund grows by -4 percent (meaning it loses 4 percent) to end the year at $1,118.21. During the
fourth year the fund loses 4 percent again, to end the year at $1,073.48. Finally, during the fifth year, the fund grows by 8 percent to end the five-year period at $1,159.36.
We would then start the second simulation by rolling the die five more times. This time we roll one, six, two, five, and one. Working through what each roll means produces a five-year ending balance
of $1,197.34.
We perform these simulations over and over again (perhaps using a spreadsheet or some other electronic aid), maybe a hundred or a thousand times. After performing all these simulations, we would then
have a range of ending values, from $815.37 (rolling five ones) to $2,100.34 (rolling five sixes), with most values in the vicinity of $1,300 (This is because the way possible returns were set up
implies an average return of 6 percent per year.)
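The die-rolling procedure above is easy to automate. Here is a minimal Python sketch of it (not part of the original article, which used physical dice or a spreadsheet):

import random

RETURNS = [-0.04, 0.00, 0.04, 0.08, 0.12, 0.16]  # one return per die face

def five_year_balance(start=1000.0):
    balance = start
    for _ in range(5):                       # one roll per year
        balance *= 1 + random.choice(RETURNS)
    return balance

results = sorted(five_year_balance() for _ in range(100))
print(f"range: ${results[0]:,.2f} to ${results[-1]:,.2f}")
print("share ending at or below $1,000:",
      sum(r <= 1000 for r in results) / len(results))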
How would the results be interpreted? Let’s suppose that 100 simulations were performed and that the results were distributed as follows:
Then, using these results, we could make the following assertions:
• There is a 12 percent chance of ending the five-year period with less than or no more than the starting balance of $1,000 (since in 12 simulations out of the 100 performed, the ending balance was
less than or equal to $1,000).
Any time we’re talking about mathematical modelling, we’re talking about some type of system in which behavior can be abstracted in a way that allows meaningful analysis.
• There is a 4 percent chance of the account doubling or better by the end of five years.
• The likeliest five-year ending balance is between $1,200 and $1,400.
It’s important to keep in mind that if another set of 100 simulations were performed, the distribution of five-year ending balances would likely be slightly different. Nonetheless, as more and more
simulations were performed, the final results would tend to stabilize.
Often, simulations are performed in models where there’s more than one independent variable affecting the system. For example, most people have their savings invested in more than one type of asset.
It’s usually the case that the various returns on the different asset classes are not entirely independent of each other. Because all asset classes represent available choices in the investment
universe, the returns on some classes are influenced by the available returns on others. This phenomenon is called "correlation"; it indicates the tendency to move in a similar direction (positive correlation) or the opposite direction (negative correlation).
Raising the Stakes
In order to acquire a feel for how correlation can enter into simulations, let’s suppose that we now have two asset classes in which our initial $1,000 account balance is invested, with $500 in each.
Suppose that the first asset class is the same as above and that the second is negatively correlated with the first. In other words, if the first asset class does well, the second tends to do poorly,
and vice versa. It’s possible for both to do well or both to do poorly, although it’s less likely.
The following table shows what we’ll be assuming for possible annual returns on the second asset class, given a return on the first:
In other words, we’ll be assuming that if the annual return on the first asset class is 8 percent, then there’s a one-sixth chance of a -4 percent return on the second asset class, a one-sixth chance
of a -2 percent return, a one-sixth chance of a 0 percent return, and so on.
So now when we perform a simulation, we’ll need a pair of dice. The first die (red, to differentiate it from the second, black die) will be interpreted as before. The interpretation of the black die,
however, will depend on the number showing on the red die.
For the first simulation, suppose we first roll a four on the red die and a two on the black die to simulate the first year. The four on the red die means that the $500 invested in the first asset
class has grown by 8 percent to $540, while the two on the black die indicates that, given an 8 percent return on asset class one, there has been a -2 percent return on the $500 invested in the
second asset class, resulting in a drop in value to $490.
For the second year, suppose we roll a one on the red die and a five on the black die. The $540 invested in asset class one will lose 4 percent and drop to $518.40, while asset class two will gain 10
percent and increase to $539. For years three, four, and five, suppose that the pairs of tosses are (three, three), (one, four), and (four, two). Using the information we have about returns of the
two asset classes, we’d be able to determine that at the end of the five-year period, the amount in asset class one will have grown to $558.98, while the amount in asset class two will have grown to
$581.89, for a total ending account balance of $1,140.87.
As before, we’ll perform a second simulation by tossing the dice five more times and interpreting the results accordingly. After performing simulations numerous times, a range of possible outcomes,
together with a chart showing the distribution of results (similar to the chart illustrating only one asset class), could be developed. From this chart we’d be able to infer the likelihood of various
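The two-dice procedure can be automated the same way. The conditional-returns table itself is not reproduced above, so the mapping below is an assumption, reconstructed to match the worked rolls in the text (red 4/black 2 gives -2 percent, red 1/black 5 gives +10 percent, red 3/black 3 gives +2 percent; it also reproduces the $1,140.87 ending balance exactly): given a first-asset return r, black face k is read as -r/2 + 2(k - 1) percent.

import random

RED_RETURNS = [-0.04, 0.00, 0.04, 0.08, 0.12, 0.16]

def black_return(red_return, black_face):
    # Assumed negatively correlated mapping (see note above).
    return -red_return / 2 + 0.02 * (black_face - 1)

def five_year_two_assets(start_each=500.0):
    a1 = a2 = start_each
    for _ in range(5):
        red, black = random.randint(1, 6), random.randint(1, 6)
        r1 = RED_RETURNS[red - 1]
        a1 *= 1 + r1
        a2 *= 1 + black_return(r1, black)
    return a1 + a2

results = sorted(five_year_two_assets() for _ in range(100))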
Here are a few things to keep in mind. First, each of the returns we used in the previous examples does not have to be equally likely. For example, suppose a -4 percent return occurred one-sixth of
the time, a 0 percent return occurred one-third of the time, a 4 percent return occurred one-third of the time, and an 8 percent return occurred one-sixth of the time. We’d be able to simulate this
situation by using a special die whose six faces were labeled one, two, two, three, three, four and interpreting a one as a -4 percent return, a two as a 0 percent return, a three as a 4 percent
return, and a four as an 8 percent return.
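With a module like Python's random, the relabeled die is just a weighted draw (a sketch, not from the article):

import random
# faces 1,2,2,3,3,4: P(-4%) = 1/6, P(0%) = 1/3, P(4%) = 1/3, P(8%) = 1/6
annual_return = random.choices([-0.04, 0.00, 0.04, 0.08], weights=[1, 2, 2, 1])[0]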
Second, all the possible returns don’t have to “fit” on a standard six-sided die. If there are “N” possible outcomes, we’ll be able to simulate them by envisioning an N-sided die, appropriately
marked. (Here is where it begins to be useful to have a spreadsheet or some other electronic aid.)
Furthermore, the number of possible outcomes can even be infinite, as they would be if our annual returns were chosen randomly from an interval ranging from -4 percent to 16 percent. The important
thing to remember is that what’s needed is a way to sample (simulate) from the “sample space” (the range of possible “rolls” or random numbers) and a way to interpret, in the context of the system
being modeled, the value that was chosen.
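And the case of infinitely many outcomes is a single call (again a sketch):

import random
annual_return = random.uniform(-0.04, 0.16)  # any return in [-4%, +16%], uniformly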
RICHARD M. RASIEJ teaches high school mathematics at Birmingham High School in Los Angeles.
Small Business Formula For Price List
I need to complete my business price list. Basically it's an Excel sheet with all my suppliers' products listed, their price, my margins, and my retail price and bulk price. Currently I'm fiddling with my margins and the best way to implement a formula to reflect these margins. Rather than do it manually, is there any way I can create a formula so that different gross margins are set for a certain range of the supplier's goods? For example, any product I buy that costs me between $0-$20 has a set margin of 1.44 (44%), or a product whose price ranges from $100-$200 has a set margin of 1.26 (26%), etc. This would cut out the need for me to manually check suppliers' prices and change the margin accordingly, thus saving me much time.
Related Forum Messages:
Lookups For An Item's Price From A Price List
The analysis basically has 2 data components to it:
The 1st part, is a basic transaction list of shopping items bought through the year. Each transaction's shopping item also has the quantity of that item purchased at that time.
The 2nd part, is a pricing sheet for all the different types of shopping items. The pricing sheet has different prices for different quantities at which the item is purchased.
What I am trying to do is to find the relevant price for shopping item, which depends on not only what the item is, but also the quantity. In point form, it should follow the logic below:
1) Identify the item in the shopping list (worksheet 1) from the list of prices (worksheet 2)
2) Find quantity in the prices worksheet that is closest to the quantity in the shopping list (i.e. where the difference between the quantity on transaction list and the quantity on the pricing sheet
is the least)
3) Pull the price for this "closest quantity"
I have uploaded a worksheet showing the structure of that data.
Is there some VB code I need to do this, or can it just be a few simple formulas?
Building A Formula To Increase Price List 8%
I've built a spreadsheet that accurately displays my company's price list. However, from time to time there are increases and/or decreases, by percentages. I would like to know how I can build a formula that would allow me to quickly update the price sheets by the appropriate percentage, without having to do so manually, one cell at a time.
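For a one-off change, one common approach (not from the thread): type the multiplier, e.g. 1.08 for an 8% increase, into a spare cell, copy it, select the prices, and use Edit | Paste Special with the Multiply operation; for a formula-based version, a helper column with =A1*1.08 does the same thing.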
Price List
I have some data in tabular form with different criteria such as size, colour, purity and cut; by combining all of these I have a price list in tabular form. If I want to intersect all of this and find the price, how can I do so?
Formula To Generate 3-5 Business Days
we have salespeople in all 50 states and each state uses different rules for business days. Some states include Saturday as a business day, some don't, and Alaska uses 5 business days including
Saturdays. We are open 7 days a week.
What I am trying to create is a worksheet that has each day of the month going down and in the cells next to it, a column for states that include Saturdays as business days; so a date 3 business days
out would appear in the next cell. In the cell after, a date 3 business days out not including Saturdays. And in the last cell for Alaska, a date 5 business days out including Saturday. Of course
federal holidays are all excluded.
Something like this:
March w/ Sat w/o Sat Alaska
Now Vermont has extra holidays certain months so I would have to adjust that in the formula for the middle column during those months.
Price List Lookup
I have a price list Width/Drop
I need to index given :-
My ranges are named Width, Drop and List. The rules are: anything above the largest drop/width must return 0; anything below the lowest width/drop will be priced at the lowest listed; anything in between will choose the next value >= the target. LOOKUP is returning the closest value instead, and my other formula fell foul of nesting limits. I am working in 2007 but it is targeted at XP/2002.
Update Price List
I have a price list with part numbers in Column E and prices in Column C. I want to update the prices from a master list that has part numbers in Column D and prices in Column H and then make only
the updated prices bold.
Currently the master list is in a different workbook, but if I need to, I can copy and paste the master sheet into the same workbook as the sheet I want to update.
Adding Business Days To A Date Formula
I am looking for a formula that looks at a date and could add business days to it.
for example:
If the date in a field is Monday, 15-dec-08 and my formula is to add 5 Business days (mon-fri) to it not including that date. The desired result would be Monday, 22-dec-08 but my formula gives me
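Assuming the WORKDAY function is available (built in from Excel 2007; in earlier versions it ships with the Analysis ToolPak), with the start date in A1:

=WORKDAY(A1,5)

returns Monday 22-Dec-08 for a 15-Dec-08 start — WORKDAY excludes the start date itself — and an optional third argument takes a range of holiday dates.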
VLookup: Old Price List Update
A B C D
1 123 1.99 123 2.09
2 124 3.99 124 4.09
New prices arrived as shown in column C (code) & D (price)
I want my old price list updated A (code) and B (old price)
Can VLookup use the new data and replace B with information from column D.
Multiple Pictures In A Price List
Hi, I've got a price list with 2500 different lines which the boss has decided need a picture against each one (the guy's a legend!!). Can anyone help me with a macro that will look up the code in
Column A, then add .jpg on the end, and insert a picture into Column J?
Auto Complete Items Off The Price List
I am trying to create an order form. I have a price list from my local hardware store that I want linked to my order form. I want my order form to autocomplete items off the price list. I have tried a few things but I'm stumped.
Price List Add 20-30% And End In 99cents
I get a price list from my distributor. The scale I use for my store is: anything below $10 (distributor price) gets a 30% markup, then anything above $10 gets a 20% markup. How would I be able to build this into the list I receive in Excel so I can export the prices directly to my store, i.e. what the prices should be in my store? Also it would be a plus if at the end I can get each price to end in 99 cents. Thank you, I look forward to your response! I am using Excel 2003.
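One possible formula (a sketch: it rounds the marked-up price up to the next whole dollar and then backs off one cent, so every result ends in .99), with the distributor price in A2:

=CEILING(IF(A2<10,A2*1.3,A2*1.2),1)-0.01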
Add 5% To Many Price List's Macro
I have around 150 separate price list files and I would like to create a macro to FindFormat currency cells, then copy 1.05 and use Edit | Paste Special (value, multiply)... then round up or down to the nearest cent (.567 would round up to .57).
The FindFormat and Paste Special steps work when I do them manually, but when I record them as a macro it will not work when played back. Here is the code it records (the code does not include the rounding part; I don't know how to do that):

Sub Macro1() ' opening Sub line assumed; the recorded body below is as posted
ActiveCell.FormulaR1C1 = "1.05" ' puts the multiplier 1.05 in the active cell
Application.FindFormat.NumberFormat = "$#,##0.00" ' sets the currency format to search for
Selection.PasteSpecial Paste:=xlPasteValues, Operation:=xlMultiply, _
SkipBlanks:=False, Transpose:=False ' multiplies the selected values by the copied 1.05
Application.CutCopyMode = False
End Sub
Macro: Small & Large Formula
I wondered if there is a possibility to make this
Range("L3").FormulaR1C1 = "=SMALL(R[-1]C[-11]:RC[-11],1)"
Range("L4").FormulaR1C1 = "=LARGE(R[-2]C[-11]:R[-1]C[-11],1)"
simpler, so I can make the range for my SMALL and LARGE formulas variable? What I am trying to achieve is:
Range("L3") = smallest date In Range("A2", Range("A" & Rows.Count).End(xlUp))
Range("L4") = largest date In Range("A2", Range("A" & Rows.Count).End(xlUp))
INDEX SMALL ROW Formula Showing #REF!
I have the formula (found in cell "C2") on the Report sheet. I need to perform a function, but I cannot get it to work on the sheet I need to pull information from. The sheet RecapWk12 has a small
section pasted (with some cells edited for obvious reasons) from the actual workbook. I can get the formula in Report cell (A10) to work on pulling information from sheet2. You can see I am getting
(#REF!) in cell C2.
Figuring List Price - Cost To Show Discount Percentage
List Price $46.98 (e2)
Net Cost $19.53 (e3)
How do I enter a calculation that will show me my discount percentage from my supplier? (e4)
I then need to be able to drag the formula to the end of the sheet. Discount percentages will be different for each product, but the List Price and Net Costs are present, so the calculation needs to
take these differences into consideration so that I get the correct discount percentage for each item.
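With the layout shown, the standard calculation for E4 is =(E2-E3)/E2, formatted as a percentage — here (46.98-19.53)/46.98 is about 58.4% — and since it only references that column's own list and cost cells, it can be dragged to the end of the sheet.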
Extending Vlookup: Price List To Cover Up To 10,000 Lines
However, I only put in a small price list; when I tried extending it the workbook produced errors (as per the attached). I thought I would just need to amend the row numbers but it doesn't seem to work.
I am unable to upload at present but the previous version is available on this thread: http://www.excelforum.com/excel-work...hoice-sum.html. I need to extend the price list to cover up to 10,000 lines.
Price List Lookups And Additonal Calculations For Out Of Range Values
The sheet has a price list (I attached the sheet). its a width x height(drop) format. If width or height <= minimum width/height then use the minimum listed. if width or height > minimum <=maximum
then lookup in table next heightest value. here is the complication. any oversized items are priced as roundup((size -biggest size) / (biggest - second biggest size),0) * ( price of biggest-price of
second biggest). so if my widths are
and I am pricing 5050 I would do :-
Calculate how much it is oversize
5050-4800 = 250
Calculate the difference in the last 2 sizes
4800-4700 = 100
Calculate the rounded up multiples
250/100=2.5 rounded up = 3...........
Calculate The Implied Volatility Which Minimizes The Sum Of Squared Differences Between The Observed Market Price And The Model Price For Each Day
I have calculated the implied volatility for different single options using the Newton-Raphson method. But I also need to calculate the implied volatility which minimizes the sum of squared differences between the observed market price and the model price for each day. I guess one needs to use vectors (a Jacobian matrix) to do this, but I do not know how to expand the code to be able to do this. Does anyone have any idea how this can be done? I have attached the code I have used to calculate the implied volatility for one option.
VLOOKUP Formula For Price Sheet
The task: create price sheet that calculates pricing based on 2 criteria - quantity, and production time.
There are two worksheets. #1 is the main calculator, #2 is the price sheet, broken down with time (in hours) on the left column B, the quantity across the top in row 1. There are then boxes with
different prices based on both the hours and quantity of products.
On worksheet 1, I have specified the quantity in C6 and the time in C8. How do I pull a price into F8 that calculates based on the time and quantity filled out on sheet 1?
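A two-way lookup handles this (a sketch with made-up ranges): if on sheet 2 the hours run down column A and the quantities across row 1, with the price grid in B2:G20, then

=INDEX(Sheet2!$B$2:$G$20,MATCH(C8,Sheet2!$A$2:$A$20,1),MATCH(C6,Sheet2!$B$1:$G$1,1))

pulls the price; the final 1 in each MATCH returns the nearest band at or below the value, and requires the hour column and quantity row to be sorted ascending.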
Delete The Formula In The Amount, The Unit Price Comes Up #VALUE!
(To keep things simple from left to right, Column A-H)
The Amount column seems to be my problem; it has the formula =H98*B98, just a simple multiplication formula to get my unit price x my qty. When I delete the formula in the Amount column, the unit price #VALUE! error goes away. All that is in the other error box is =IF(P98>0," per piece","") — it just puts "per piece" in the box when something is typed.
I have a vlookup formula in Column F (thank you VoG)
To pull prices from another worksheet.
Manually Enter % And Update Price (Formula)
I'd like to be able to add 10% to column D and have the prices in A, B, C change accordingly. Is there a formula for this?
A B C D
2009 Distributor CASE Price2010 Distributor CASE Price2010 Distributor EACH PriceVariance from 2009 price135.00148.000.0592108.25100.000.040062.8875.006.250066.6096.008.0000
PS: Right now I have it set up working the opposite way, the prices are entered and my formula tells the user by what % the price has changed. The user wants to be able to tell the formula what % is
desired and have the prices change accordingly.
The formula I am using for the way column currently calculates is:
Formula To Calculate Price Based On Sheet2
I am looking for a formula that can change the price of some of the items on sheet1 in column C by the amount found on Sheet2 in Column D. I would like it to base the calculation on column C in
sheet2 (so I can choose if I want to add, subtract, multiply, divide, or make the price exact). I would like all prices that don't match these UPC codes to remain unchanged.
Intended Results can be seen on Sheet1 in Column E. Not sure if I should a formula with the Vlookup function or a macro, or maybe there is even a better solution.
Example spreadsheet may be viewed at http://spreadsheets.google.com/ccc?k...WLMyhNJLiPLTfA.
IF Formula Used To Price Life Insurance Policies
I price life insurance policies and need a formula for the following new fund:
If the sum of the first two years of premium added to 10% of that total is equal to or less than 25% of the face amount, the case fits into the general parameter.
Here is what I have, but it isn't working:
Copy The Current Price Back To Sheet1. The Current Price Needs To Be Pasted Back Into Sheet1 (next To The Existing Price)
All data is located within one book. I have two sheets with material codes in each sheet which include pricing (existing and current)
Sheet1 (has existing material codes plus existing pricing) Has about 1200 lines
Sheet2 (has current material codes plus current pricing), has about 36000 lines
I need to cross-check whether the material codes (taken from Sheet1) are still available in Sheet2, and if they are, copy the current price back to Sheet1. The current price needs to be pasted back into Sheet1 (next to the existing price). If a material code doesn't exist (for whatever reason) in Sheet2, the program needs to move on to the next line and leave the current price for that material code blank. The program should finish once all the lines in Sheet1 are completed. I have attached a sample of what I'm trying to do.
Looking For The Closest Price To A Reference Price
I have have a large array of prices (across rows) and am looking for the closest price to match a price that I have been provided with. It's a basic benchmarking exercise on a row by row basis....and
the price can be positive or negative. Is there a clean way to reference the closest price?
I have come across a fair amount of solutions, but none worked optimally - particularly the =INDEX(Data,MATCH(MIN(ABS(Data-Target)),ABS(Data-Target),0)) approach....it just didn't work for some
lines, and only worked for values less than source price in other instances.
I would also like to reference the source on the next column.
Multiple IF's And AND Inclusive Formula (all In One Cell) That Would Look At The Above Table And Depending Upon The Price Paid
I need an all inclusive formula (all in one cell) that would look at the above table and depending upon the price paid (3000-14999 or 15000-99999 or 100000-249999) and depending upon what monthly
term they choose (24, 30, or 36), the appropriate finance charge would be used to calculate a total cost (9-13%). The only way I know to do this is by using IF's and AND's, but there are simply too
many arguments and I cannot properly write the formula.
Making Average Buy Price And Average Sell Price
to formulate Excel formulas to obtain the average buy price and average sell price for me to do this futures trading. Thanks a lot. I downloaded Htmlmaker to post the spreadsheet here, to show the manual way to calculate the average buy price and average sell price, but when it is in HTML form I clicked on the 'Please click this button to send the source into clipboard' button and then pasted into this thread. Is the way I make my spreadsheet appear here correct? Because it cannot work.
Sum Extended Price Without Extended Price Column
I have a unit price and a quantity. I want to be able to take the sum of the extended price without having to add a column for extended price. I don't want to just hide it, either.
Example attached.
Calculating Business Hours
I use this formula at work to calculate business hours from Mon-Fri:
where Q3= business start time 8.30am
where Q2= business end time 5.30pm
thus the difference between 18-Apr-08 16:30 and 21-Apr-08 13:30 is 6 hours.
I now need to adapt this formula for another Department that also works on Saturday from 8.30am to 5.30pm.
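In Excel 2010 and later, NETWORKDAYS.INTL lets you choose which days count as the weekend: =NETWORKDAYS.INTL(start,end,11) treats only Sunday as the weekend (weekend code 11), so the Monday–Saturday department can reuse the same hour arithmetic with that substitution.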
Calculation Of Business Hours
I am using the following formula to calculate business hours.
The business hours considered here is 8AM - 5PM, Start time in R9 and End time in T9. Now the problem is its calculating the correct value when the days are same, for e.g.,
Condition 1
When I am giving "31 March 2009 15:00:00" as start time (R9) and "31 March 2009 23:00:00" in end time (T9), I am getting the correct value. i.e, "2:00:00"
Condition 2
While giving "31 March 2009 16:00:00" as start time and "01 April 2009 09:00:00" as end time I am getting a value of "1:00:00", actually the value should be "3:00:00".
Return The Corresponding Business Week
I have a spreadsheet (attached) containing 3 columns, week commencing (c), week ending (d) and business week (e). The question is can a user enter a date in one cell (b4) and have the next cell (b5)
return the corresponding business week, I have come up with a couple of solutions involving hidden columns but was wondering if there is a way to do all of this in one cell.
Calculating Business Days
I am looking to calculate business days - more specifically Monday through Friday. I am not currently worried about holidays or vacations, yet I wouldn't mind including it if I could have a list to
"check" from.
I tried previous searches, but found a few functions that I don't have on my computer.
Tat Calculation Only For Business Days
I am not able to calculate TAT(Turn around Time) between two dates without taking Off days. Plz help me on this.. File is attatched as per example..
Calculating Business Fiscal Year
writing a formula that would result in my organization's business fiscal year.
Assuming the fiscal year is 2008, the quarters are as follows:
1st QTR = 7/2007 - 9/2007
2nd QTR = 10/2007 - 12/2007
3rd QTR = 1/2008 - 3/2008
4th QTR = 4/2008 - 6/2008
When 7/1/2007 is entered, the result should be Q1-2008
10/20/2007, Q2-2008 and so forth.
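One possible formula, with the date in A1 (a sketch, not from the thread):

="Q"&CHOOSE(MONTH(A1),3,3,3,4,4,4,1,1,1,2,2,2)&"-"&YEAR(A1)+(MONTH(A1)>6)

CHOOSE maps months 7-9 to quarter 1, 10-12 to 2, 1-3 to 3, and 4-6 to 4, while the (MONTH(A1)>6) term bumps the fiscal year: 7/1/2007 gives Q1-2008 and 10/20/2007 gives Q2-2008, as required.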
Adding 1 To A Number On Business Days
I'm working on a complex A/R aging summary. So I'll have a couple of questions, but right now I'm trying to have Excel automatically keep track of the A/R process. For instance, every business day I would like the number of days the invoice is outstanding to go up by 1. So when I get to work on Monday, the invoices that have been due for 20 days will now show 21 without me touching it.
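One low-maintenance way (a sketch): rather than incrementing a stored number, compute the age on the fly with =NETWORKDAYS(B2,TODAY()) where B2 holds the invoice date; it recalculates whenever the file opens, weekends don't add to the count, and the optional third argument excludes a holiday list. Subtract 1 if the invoice date itself shouldn't count.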
SMALL Function
I'd like to use the SMALL() function in excel to pull out the second lowest unique value in a list, but I'm not sure there is a way to do this. For example, if the array is {1,1,3,10,2,6}, then SMALL
(array,2) returns 1, but I'd like it to return 2. Is there a way I can modify this function or use a different one to achieve what I want?
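One array-formula answer (entered with Ctrl+Shift+Enter): =MIN(IF(A1:A6>MIN(A1:A6),A1:A6)) returns the smallest value strictly greater than the minimum — 2 for {1,1,3,10,2,6}. Equivalently, =SMALL(A1:A6,COUNTIF(A1:A6,MIN(A1:A6))+1) skips past however many copies of the minimum there are.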
Small Greater Than Zero
is there any way of using the =SMALL function to rank only numbers above zero so that the zeros don't keep showing up as the smallest figures?
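Yes, with an array formula (Ctrl+Shift+Enter): =SMALL(IF(range>0,range),1) returns the smallest value above zero, and changing the final 1 gives the further ranks.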
I need to do an hour calculation on two cells which have dates and times in both. the first cell is a call that we get from a customer and the second is the date and time in which that call is closed
by us...meaning that call is complete.
I need to calculate how much time in hours did it take us to complete that call for the customer. I need this calculation to respect our business hours of Monday to Friday 8am-5pm and closed on
Saturdays and Sundays.
here are some examples.
from - 2/12/2004 13:00 (thursday)
to - 2/13/2004 9:00 (friday)
answer should be 5 hours
from - 2/13/2004 14:00 (friday)
to - 2/16/2004 10:00 (monday)
answer should be 5 hours
Subtracting Business Hours From A Data And Time Value
I am trying to subtract 12 business hours from a date/time stamp. It has to be during working hours however which are 8AM - 5PM, Monday-Friday.
For example if my value is 6/23/2008 9AM
Subtracting 12 business hours would give me 6/19/2008 3PM.
Calculating The Number Of Business Days In A Specified Period
It should be displaying dates of weekly days on Monday, Wednesday and Friday, excluding Sundays; or Tuesday, Thursday and Saturday, excluding Sunday. E.g.
June 1, 2006
June 3, 2006
June 6, 2006
Tuesdays, Thursdays and Saturdays of June
June 2, 2006
June 5, 2006
June 7, 2006
This should be happening after entering any date on the first cell of the List and should accommodate up to 3 months
Userform VBA Which Will Determine Business Days
I have created a userform in VB which will determine business days. I need help with the coding part.
In my userform I have a text box, a calendar, and two command buttons: 1 Calculate, 2 Exit. What I would like to achieve is: when I type a number in the text box and press the Calculate command button, the code will shade the corresponding date in the calendar. So, for example, if I type in 10, then the code should shade 13/02/09 on the calendar, since 13/02/09 is the 10th business day.
Time/date Tracking Only During Business Hours
I work dispatch at a company and we're trying to track how long it takes for a technician to arrive on-site. My worksheet currently includes the date of call-in (E1), time of call-in (F1), service date (G1), service time (H1), among many other fields. The easy solution is to use the formula (G1+H1)-(E1+F1) and use the answer as the total amount of time, but the problem is: if a customer calls in at 3:00 PM and we service them at 9:00 AM the following day, it looks like it took 18 hours to arrive on-site, when in reality, since we close at 5:00 PM and open at 9:00 AM, we only took 2 hours to arrive on-site. Is there any formula I can write to account for non-business hours and weekends, or is this a bit above what I should expect from Excel?
Small / Min Function
I have column G listed with numbers from 1-100. In column B I have the corresponding titles for each number in G.
What I want to do is look at G1:G100, take the 10 lowest values, and write the corresponding title next to each.
I think it is possible with the MIN or SMALL functions, but I don't know how, especially pulling the title names alongside.
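A sketch with made-up ranges: put =SMALL($G$1:$G$100,ROW(A1)) in a cell and fill down ten rows to get the ten lowest values; next to it, =INDEX($B$1:$B$100,MATCH(SMALL($G$1:$G$100,ROW(A1)),$G$1:$G$100,0)) pulls the matching title from column B (note that tied values will repeat the first matching title).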
If Greater Than Or Small Than, Or Equal To
I have a cell, M87. The score in M87 can be less than 13 or greater than 25. I need a formula within M94 which refers to M87, and outputs depending on the following criteria. If M87 is less than
13 then output as D. If M87 is 14, 15, 16, or 17 then output as C. If M87 is 18, 19, 20, 21, 22, 23 or 24 then output as B. If M87 is greater than 24 then output as A.
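A straightforward nested IF covers those bands (a sketch, assuming 13 belongs with D and 25 and above with A): =IF(M87<=13,"D",IF(M87<=17,"C",IF(M87<=24,"B","A")))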
Michael Atiyah
Born 1929. Fellow of Trinity College, Cambridge 1954 and
1958. Friend and colleague of
, who was a Research Fellow at St John's College. Unlike Penrose, who moved into mathematical physics, Atiyah continued to specialize in differential geometry and topology. Winner of the
1966. Supervisor of
in his research at Oxford from 1980, when to everyone's surprise Simon himself won a
. President of the
1990-95. Later back at Cambridge as Master of Trinity and involved in the new IsaacNewtonInstitute
. Sir Michael is currently retired and an Honorary Professor at the University of Edinburgh. Read an interview at
. Writer of a fascinating appreciation of
at the start of
, including the rather hopeful phrase (for the non-mathematician) of
. Even for the non-specialist reader the story is clearly both a strange and an inspiring one. But how can we tell? --
RichardDrake CategoryScientist In which case why couldn't he win a NobelPrize in 1966?
Because the
is not given to mathematicians. Rumour has it that Nobel's wife ran off with a mathematician. Instead every four years the
is awarded to appropriate mathematicians, but only if they're under 40 years of age, which is why Wiles didn't get one for his proof (with Taylor) of
. I guess that means he's not really a scientist, except he is.
My point exactly, apart from the juicy bit about Nobel's wife, which is new to me, thanks. I just created FieldsMedal on WhyClublet, which is why I just ended up on Wiki for the first time for a
while. Feel free to augment our Why page, create FieldsMedal here or perhaps join the discussion about the boundaries of the two communities in WhyClublet. -- RichardDrake
Because a
would, I suspect, be far too narrow to be of any use. The general public can't understand the difference, and judging by
and similar pages, neither can lots of | {"url":"http://c2.com/cgi/wiki?MichaelAtiyah","timestamp":"2014-04-21T03:53:18Z","content_type":null,"content_length":"4217","record_id":"<urn:uuid:facb1ca7-3b26-4e36-839c-ec23600a140d>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00408-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - How to calculate the Euler class of a sphere bundle?
Thank you lavinia and zhentil!
To lavinia: I'm not very clear about some of the properties of the Euler class you listed; my background in topology is quite insufficient. On the derivation of the global angular form for an oriented 2-plane bundle: from [tex]\frac{d\theta_\alpha}{2\pi}-\pi^*\xi_\alpha=\frac{d\theta_\beta}{2\pi}-\pi^*\xi_\beta[/tex], where [tex]\theta[/tex] is the angular coordinate and [tex]\xi_\alpha[/tex] is a 1-form on [tex]U_\alpha[/tex], these forms piece together to give a global angular form on E^0.
To zhentil: oh, I'm sorry, there are too many things I don't know. What is a transverse section? Why does the intersection of the zero section with a transverse section represent a homology class? What is the negative of the section?
Thank you again!
Thanks. I will read this through in the book.
I don't totally understand the intersection argument but I think it is related to this.
The manifold is embedded in the vector bundle as the set of zero vectors in each fiber. Its normal bundle (as an embedded submanifold) is just the vector bundle itself. Thus the Thom class of the normal bundle, i.e. the differential form that integrates to 1 along each fiber of the normal bundle and which is zero outside of a tubular neighborhood of the manifold (as described in Bott and Tu), is just the Thom class of the vector bundle. Thus the manifold, viewed as a cycle in the vector bundle, is Poincare dual to the Thom class of the vector bundle.
The pullback of the Thom class to the manifold is the Euler class of the bundle. Its Poincare dual in the homology of the manifold (not the vector bundle) is the intersection that Zhentil describes. I am not sure how to prove that the intersection is the Poincare dual. Let's think it through.
The Thom isomorphism is key to defining Euler classes topologically. The properties that I described, such as naturality and the Whitney sum formula, fall out of it easily. You would benefit from learning about it. There is also a Thom isomorphism for unorientable vector bundles. In this case differential forms can not be used since the cohomology uses Z/2Z coefficients. Nevertheless the construction is much the same.
Brain Teaser Question: Grade 5, Chapter 8
Mrs. Hopkins made four different pies for the fair. Each pie was the same size. At the fair, she cut the blueberry pie into 6 equal slices, the apple pie into 5 equal slices, the peach pie into 3
equal slices, and the chocolate pie into 4 equal slices.
Alyssa, Dalton, Rowan, and Shane each bought one slice of pie but ate only a portion of that slice. Each chose a different kind of pie.
● Alyssa bought the largest slice available and ate
● Dalton bought the smallest slice available and ate
● The slice Rowan bought was bigger than the slice Shane bought. Rowan ate
● Shane ate
What kind of pie did each person choose?
What fraction of the total pie did each person eat? Simplify the fractions in the answer. | {"url":"http://www.eduplace.com/kids/mhm/brain/gr5/ch08/bt_05_08_q.html","timestamp":"2014-04-20T19:30:40Z","content_type":null,"content_length":"5445","record_id":"<urn:uuid:8d1dc9cf-1743-4edf-8999-598b519cec77>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00058-ip-10-147-4-33.ec2.internal.warc.gz"} |
Santa Fe Springs Calculus Tutor
Find a Santa Fe Springs Calculus Tutor
...It is here that students are introduced to some advanced mathematical concepts such as limits, differentiation and integration. Physics is my passion. It describes how the world around us
works, and it is the foundation of the other sciences (chemistry, biology, etc., have their roots in physic...
11 Subjects: including calculus, physics, statistics, SAT math
...This year I have 4 precalculus students and 4 calculus students, one in honors AP and one in the IB program. There are many students in algebra 2 and under. All return students will be charged
the new rate.
9 Subjects: including calculus, geometry, Chinese, algebra 1
...I have a PhD in Chemistry and Mathematics (U of Michigan). I have also been a substitute teacher in Mathematics in the Huntington Beach Union H.S. District. I have tutored high school students
in Chemistry, Algebra, Calculus, Trig and Geometry, and college students in Linear Algebra and Advanced Calculus.
24 Subjects: including calculus, chemistry, physics, geometry
...I have tutored students in the area of Discrete Mathematics for over five years. It is one of the core parts of the undergraduate mathematics curriculum at my university. I can help with simple
tricks to build up arithmetic skills, explain how sets and numbers work, from fractions to exponents.
46 Subjects: including calculus, physics, algebra 1, geometry
With over 20 years as an Engineer, in the aerospace industry, and dual Master's degrees in Applied Mathematics and Mechanical Engineering, I have a knowledge base that is hard to beat. I'm
passionate about knowledge and will go above and beyond to help you understand not only the material, ...
9 Subjects: including calculus, geometry, statistics, algebra 1 | {"url":"http://www.purplemath.com/Santa_Fe_Springs_Calculus_tutors.php","timestamp":"2014-04-19T06:58:24Z","content_type":null,"content_length":"24349","record_id":"<urn:uuid:266ad23a-b88b-4e54-978f-71ce37b494b1>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00037-ip-10-147-4-33.ec2.internal.warc.gz"} |
September 19th 2008, 06:06 PM
The seventh term of a geometric progression is 0.775 and the ninth term is 0.995.
(a) Find the common ratio (to 3 decimal places).
(b) List the first three terms of the geometric progression (to 6 decimal places).
(c) Find the sum of the first ten terms (to 3 decimal places).
September 19th 2008, 06:14 PM
The seventh term of a geometric progression is 0.775 and the ninth term is 0.995.
(a) Find the common ratio (to 3 decimal places).
(b) List the first three terms of the geometric progression (to 6 decimal places).
(c) Find the sum of the first ten terms (to 3 decimal places).
recall that the terms of a geometric progression are given by
$a_n = ar^{n - 1}$ for $n = 1,~2,~3, \dots$
where $a_n$ is the $n$th term, $a$ is the first term, $r = \frac {a_2}{a_1} = \frac {a_3}{a_2} = \cdots$ is the common ratio.
thus, we have the 7th term is $ar^6 = 0.775$ and the 9th term is $ar^8 = 0.995$
so, $\frac {ar^8}{ar^6} = \frac {0.995}{0.775}$
you can use that to find $r$, once you have $r$ you can find $a$. once you have $a$ and $r$, you can answer all the questions
good luck
EDIT: also, recall that the sum $S_n$ of the first n terms is given by $S_n = a \cdot \frac {1 - r^n}{1 - r}$ | {"url":"http://mathhelpforum.com/algebra/49803-progression-print.html","timestamp":"2014-04-20T16:27:28Z","content_type":null,"content_length":"8089","record_id":"<urn:uuid:e98b1125-64b0-4dfe-9e47-4e57b07eb502>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00121-ip-10-147-4-33.ec2.internal.warc.gz"} |
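For a worked check of the numbers, here is a small Python sketch of the solution (taking the positive square root for $r$; the negative root would also satisfy $r^2 = 0.995/0.775$):

import math

a7, a9 = 0.775, 0.995

r = math.sqrt(a9 / a7)           # from (a*r^8)/(a*r^6) = r^2
a = a7 / r**6                    # first term, from a*r^6 = 0.775

terms = [a * r**n for n in range(3)]
s10 = a * (1 - r**10) / (1 - r)  # sum of the first ten terms

print(f"r   = {r:.3f}")
print("first three terms:", [f"{t:.6f}" for t in terms])
print(f"S10 = {s10:.3f}")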
Multi-scale algorithm for the linear arrangement problem
Results 1 - 10 of 21
- IEEE Transactions on Visualization and Computer Graphics , 2006
"... Abstract — MatrixExplorer is a network visualization system that uses two representations: node-link diagrams and matrices. Its design comes from a list of requirements formalized after several
interviews and a participatory design session conducted with social science researchers. Although matrices ..."
Cited by 55 (12 self)
Abstract — MatrixExplorer is a network visualization system that uses two representations: node-link diagrams and matrices. Its design comes from a list of requirements formalized after several
interviews and a participatory design session conducted with social science researchers. Although matrices are commonly used in social networks analysis, very few systems support the matrix-based
representations to visualize and analyze networks. MatrixExplorer provides several novel features to support the exploration of social networks with a matrix-based representation, in addition to the
standard interactive filtering and clustering functions. It provides tools to reorder (layout) matrices, to annotate and compare findings across different layouts and find consensus among several
clusterings. MatrixExplorer also supports Node-link diagram views which are familiar to most users and remain a convenient way to publish or communicate exploration results. Matrix and node-link
representations are kept synchronized at all stages of the exploration process. Index Terms — social networks visualization, node-link diagrams, matrix-based representations, exploratory process,
matrix ordering, interactive clustering, consensus.
Fig. 1. MatrixExplorer showing two synchronized representations of the same network: matrix on the left and node-link on the right.
, 2006
"... The minimum linear arrangement problem is widely used and studied in many practical and theoretical applications. In this paper we present a linear-time algorithm for the problem inspired by the
algebraic multigrid approach which is based on weighted edge contraction rather than simple contraction. ..."
Cited by 18 (7 self)
The minimum linear arrangement problem is widely used and studied in many practical and theoretical applications. In this paper we present a linear-time algorithm for the problem inspired by the
algebraic multigrid approach which is based on weighted edge contraction rather than simple contraction. Our results turned out to be better than every known result in almost all cases, while the
short running time of the algorithm enabled experiments with very large graphs.
- In: Proceedings PKDD’04. Volume 3202 of LNAI , 2004
"... In this paper we introduce a simple probabilistic model, hierarchical tiles, for 0-1 data. A basic tile (X,Y,p) specifies a subset X of the rows and a subset Y of the columns of the data, i.e.,
a rectangle, and gives a probability p for the occurrence of 1s in the cells of X x Y. A hierarchical tile ..."
Cited by 15 (0 self)
In this paper we introduce a simple probabilistic model, hierarchical tiles, for 0-1 data. A basic tile (X,Y,p) specifies a subset X of the rows and a subset Y of the columns of the data, i.e., a
rectangle, and gives a probability p for the occurrence of 1s in the cells of X x Y. A hierarchical tile has additionally a set of exception tiles that specify the probabilities for subrectangles of
the original rectangle. If the rows and columns are ordered and X and Y consist of consecutive elements in those orderings, then the tile is geometric; otherwise it is combinatorial. We give a simple
randomized algorithm for finding good geometric tiles. Our main result shows that using spectral ordering techniques one can find good orderings that turn combinatorial tiles into geometric tiles. We
give empirical results on the performance of the methods.
- IEEE Transactions on Visualization and Computer Graphics , 2006
"... Abstract—Current computer architectures employ caching to improve the performance of a wide variety of applications. One of the main characteristics of such cache schemes is the use of block
fetching whenever an uncached data element is accessed. To maximize the benefit of the block fetching mechani ..."
Cited by 14 (5 self)
Abstract—Current computer architectures employ caching to improve the performance of a wide variety of applications. One of the main characteristics of such cache schemes is the use of block fetching
whenever an uncached data element is accessed. To maximize the benefit of the block fetching mechanism, we present novel cache-aware and cache-oblivious layouts of surface and volume meshes that
improve the performance of interactive visualization and geometric processing algorithms. Based on a general I/O model, we derive new cache-aware and cache-oblivious metrics that have high
correlations with the number of cache misses when accessing a mesh. In addition to guiding the layout process, our metrics can be used to quantify the quality of a layout, e.g. for comparing
different layouts of the same mesh and for determining whether a given layout is amenable to significant improvement. We show that layouts of unstructured meshes optimized for our metrics result in
improvements over conventional layouts in the performance of visualization applications such as isosurface extraction and view-dependent rendering. Moreover, we improve upon recent cache-oblivious
mesh layouts in terms of performance, applicability, and accuracy. Index Terms—Mesh and graph layouts, cache-aware and cache-oblivious layouts, metrics for cache coherence, data locality.
, 2003
"... High-dimensional collections of 0-1 data occur in many applications. The attributes in such data sets are typically considered to be unordered. However, in many cases there is a natural total or
partial order # underlying the variables of the data set. Examples of variables for which such orders exi ..."
Cited by 12 (2 self)
High-dimensional collections of 0-1 data occur in many applications. The attributes in such data sets are typically considered to be unordered. However, in many cases there is a natural total or partial order ≤ underlying the variables of the data set. Examples of variables for which such orders exist include terms in documents, courses in enrollment data, and paleontological sites in fossil data collections. The observations in such applications are flat, unordered sets; however, the data sets respect the underlying ordering of the variables. By this we mean that if A ≤ B ≤ C are three variables respecting the underlying ordering ≤, and both of variables A and C appear in an observation, then, up to noise levels, variable B also appears in this observation. Similarly, if A1 ≤ A2 ≤ ... ≤ Al-1 ≤ Al is a longer sequence of variables, we do not expect to see many observations for which there are indices i < j < k such that Ai and Ak occur in the observation but Aj does not.
, 2007
"... Linear ordering problems are combinatorial optimization problems which deal with the minimization of different functionals in which the graph vertices are mapped onto (1, 2,..., n). These
problems are widely used and studied in many practical and theoretical applications. In this paper we present a ..."
Cited by 10 (6 self)
Linear ordering problems are combinatorial optimization problems which deal with the minimization of different functionals in which the graph vertices are mapped onto (1, 2,..., n). These problems
are widely used and studied in many practical and theoretical applications. In this paper we present a variety of linear-time algorithms for these problems inspired by the Algebraic Multigrid
approach which is based on weighted edge contraction. The experimental result for four such problems turned out to be better than every known result in almost all cases, while the short running time
of the algorithms enables testing very large graphs.
- Proc. Graph Drawing 2002, LNCS 2528 , 2001
"... We present an algorithm for drawing directed graphs, which is based on rapidly solving a unique one-dimensional optimization problem for each of the axes. The algorithm results in a clear
description of the hierarchy structure of the graph. Nodes are not restricted to lie on fixed horizontal laye ..."
Cited by 9 (6 self)
We present an algorithm for drawing directed graphs, which is based on rapidly solving a unique one-dimensional optimization problem for each of the axes. The algorithm results in a clear description
of the hierarchy structure of the graph. Nodes are not restricted to lie on fixed horizontal layers, resulting in layouts that convey the symmetries of the graph very naturally. The algorithm can be
applied without change to cyclic or acyclic digraphs, and even to graphs containing both directed and undirected edges. We also derive a hierarchy index from the input digraph, which quantitatively
measures its amount of hierarchy.
, 2003
"... We describe a novel approach to the visualization of hierarchical clustering that superimposes the classical dendrogram over a fully synchronized low-dimensional embedding, thereby gaining the
benefits of both approaches. In a single image one can view all the clusters, examine the relations between ..."
Cited by 9 (3 self)
We describe a novel approach to the visualization of hierarchical clustering that superimposes the classical dendrogram over a fully synchronized low-dimensional embedding, thereby gaining the
benefits of both approaches. In a single image one can view all the clusters, examine the relations between them and study many of their properties. The method is based on an algorithm for
low-dimensional embedding of clustered data, with the property that separation between all clusters is guaranteed, regardless of their nature. In particular, the algorithm was designed to produce
embeddings that strictly adhere to a given hierarchical clustering of the data, so that every two disjoint clusters in the hierarchy are drawn separately.
"... In this paper we introduce a direct motivation for solving the minimum 2-sum problem, for which we present a linear-time algorithm inspired by the Algebraic Multigrid approach which is based on
weighted edge contraction. Our results turned out to be better than previous results, while the short runn ..."
Cited by 6 (3 self)
In this paper we introduce a direct motivation for solving the minimum 2-sum problem, for which we present a linear-time algorithm inspired by the Algebraic Multigrid approach which is based on
weighted edge contraction. Our results turned out to be better than previous results, while the short running time of the algorithm enabled experiments with very large graphs. We thus introduce a new
benchmark for the minimum 2-sum problem which contains 66 graphs of various characteristics. In addition, we propose the straightforward use of a part of our algorithm as a powerful local reordering
method for any other (than multilevel) framework.
, 2006
"... The construction of linear mesh layouts has found various applications, such as implicit mesh filtering and mesh streaming, where a variety of layout quality criteria, e.g., width and span, can
be considered. Similar linear sequencing problems have also been studied in the context of sparse matrix ..."
Cited by 4 (4 self)
The construction of linear mesh layouts has found various applications, such as implicit mesh filtering and mesh streaming, where a variety of layout quality criteria, e.g., width and span, can be
considered. Similar linear sequencing problems have also been studied in the context of sparse matrix reordering and graph labeling, where width and span correspond to vertex separation and
bandwidth, respectively. One of the best-known heuristics for generating width-minimizing orderings is spectral sequencing, which is derived from the Fiedler vector. In terms of span however, other
heuristics, such as the Cuthill-Mckee (CM) scheme, generally outperform spectral sequencing. In this paper, we study the general linear sequence generation as a problem of preserving graph distances
and propose to use for sequencing the subdominant eigenvector of a kernel (affinity) matrix, defined by graph distances and appropriately chosen transfer functions. The use of Laplacians can then be
seen as a special case, where a step transfer function of unit width is applied. Despite the non-sparsity of the kernel matrix we use, the sequences can be computed efficiently for problems of large
size through subsampling and eigenvector extrapolation. When applied to mesh layouts generation, we show experimentally that the sequences obtained using our algorithm outperform those derived from
the Fiedler vector, in terms of spans, and those obtained from CM, in terms of widths and other important quality criteria. Therefore, in applications where several such quality criteria can
influence algorithm performance simultaneously, e.g., mesh streaming and implicit mesh filtering, the new mesh layouts could potentially provide a better trade-off. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=447385","timestamp":"2014-04-21T03:06:10Z","content_type":null,"content_length":"39592","record_id":"<urn:uuid:209cec2f-288f-44b9-820e-683810c0227f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complex Activities
This core theory provides the foundation for representing and reasoning about complex activities and the relationship between occurrences of an activity and occurrences of its subactivities.
Occurrences of complex activities correspond to sets of occurrences of subactivities; in particular, these sets are subtrees of the occurrence tree.
The basic ontological commitments of the Complex Activities Theory are based on the following intuitions:
Intuition 1:
An activity tree consists of all possible sequences of atomic subactivity occurrences beginning from a root subactivity occurrence.
In a sense, activity trees are a microcosm of the occurrence tree, in which we consider all of the ways in which the world unfolds in the context of an occurrence of the complex activity.
Any activity tree is actually isomorphic to multiple copies of a minimal activity tree arising from the fact that other external activities may be occurring during the complex activity.
Intuition 2:
Different subactivities may occur on different branches of the activity tree i.e. different occurrences of an activity may have different subactivity occurrences or different orderings on the same
subactivity occurrences.
In this sense, branches of the activity tree characterize the nondeterminism that arises from different ordering constraints or iteration.
Intuition 3:
An activity will in general have multiple activity trees within an occurrence tree, and not all activity trees for an activity need be isomorphic. Different activity trees for the same activity can
have different subactivity occurrences.
Following this intuition, the Complex Activities Theory does not constrain which subactivities occur. For example, conditional activities are characterized by cases in which the state that holds
prior to the activity occurrence determines which subactivities occur. In fact, an activity may have subactivities that do not occur; the only constraint is that any subactivity occurrence must
correspond to a subtree of the activity tree that characterizes the occurrence of the activity.
Intuition 4:
Not every occurrence of a subactivity is a subactivity occurrence. There may be other external activities that occur during an occurrence of an activity.
This theory does not force the existence of complex activities; there may be subtrees of the occurrence tree that contain occurrences of subactivities, yet not be activity trees. This allows for the
existence of activity attempts, intended effects, and temporal constraints; subtrees that do not satisfy the desired constraints will simply not correspond to activity trees for the activity.
Informal Semantics for Complex Activities
(min_precedes ?s1 ?s2 ?a) is TRUE in an interpretation of the Complex Activity Theory if and only if ?s1 and ?s2 are subactivity occurrences in the activity tree for ?a, and ?s1 precedes ?s2 in the
subtree. Any occurrence of an activity ?a corresponds to an activity tree (which is a subtree of the occurrence tree). The activity occurrences within this subtree are the subactivity occurrences of
the occurrence of ?a.
(root ?s ?a) is TRUE in an interpretation of the Complex Activity Theory if and only if the activity occurrence ?s is the root of an activity tree for ?a.
(subtree ?a1 ?a2) is TRUE in an interpretation of the Complex Activity Theory if and only if every atomic subactivity occurrence in the activity tree
for ?a2.
(do ?a ?s1 ?s2) is TRUE in an interpretation of the Complex Activity Theory if and only if ?s1 is the root of an activity tree and ?s2 is a leaf of the same activity tree such that both activity
occurrences are elements of the same branch of the activity tree.
(leaf ?s ?a) is TRUE in an interpretation of the Complex Activity Theory if and only if the activity occurrence ?s is the leaf of an activity tree for ?a.
(next_subocc ?s1 ?s2 ?a) is TRUE in an interpretation of the Complex Activity Theory if and only if ?s1 precedes ?s2 in the tree and there does not exist a subactivity occurrence that is between them
in the tree.
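These relations can be prototyped directly. The toy model below is a sketch only, with hypothetical occurrence names; it omits the activity argument ?a (the toy has a single activity tree) and simplifies by treating every node of a small tree as a subactivity occurrence, so min_precedes becomes the ancestor relation along a branch and next_subocc the parent-child relation.

# Toy activity tree: each occurrence points to its parent.
parent = {"s1": None, "s2": "s1", "s3": "s2", "s4": "s2"}  # s1 is the root

def min_precedes(s1, s2):
    # s1 strictly precedes s2 on some branch of the tree.
    node = parent[s2]
    while node is not None:
        if node == s1:
            return True
        node = parent[node]
    return False

def root(s):
    return parent[s] is None

def leaf(s):
    # A leaf is an occurrence that is nobody's parent.
    return all(p != s for p in parent.values())

def next_subocc(s1, s2):
    # s2 immediately follows s1: nothing lies between them on the branch.
    return parent[s2] == s1

print(min_precedes("s1", "s3"), root("s1"), leaf("s3"), next_subocc("s2", "s4"))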
The hidden computation steps of turbo Abstract State Machines
- Journal Theoretical Computer Science , 2004
"... We propose a structured mathematical definition of the semantics of C# programs to provide a platform-independent interpreter view of the language for the C# programmer, which can also be used
for a precise analysis of the ECMA [22] standard of the language and as a reference model for teaching. The ..."
Cited by 16 (4 self)
We propose a structured mathematical definition of the semantics of C# programs to provide a platform-independent interpreter view of the language for the C# programmer, which can also be used for a
precise analysis of the ECMA [22] standard of the language and as a reference model for teaching. The definition takes care to reflect directly and faithfully -- as much as possible without becoming
inconsistent or incomplete -- the descriptions in the C# standard to become comparable with the corresponding models for Java in [37] and to provide for implementors the possibility to check their
basic design decisions against an accurate highlevel model. The model sheds light on some of the dark corners of C# and on some critical differences between the ECMA standard and the implementations
of the language.
- Annals of Pure and Applied Logic , 2005
"... We capture the principal models of computation and specification in the literature by a uniform set of transparent mathematical descriptions which—starting from scratch—provide the conceptual
basis for a comparative study. ..."
Cited by 9 (5 self)
We capture the principal models of computation and specification in the literature by a uniform set of transparent mathematical descriptions which—starting from scratch—provide the conceptual basis
for a comparative study.
- PROC. FMCO’03, LNCS , 2004
"... From the models provided in [11] and [4] for the semantics of Java and C♯ programs we abstract the mathematical structure that underlies the semantics of both languages. The resulting model
reveals the kernel of object-oriented programming language constructs and can be used for teaching them withou ..."
Cited by 5 (3 self)
From the models provided in [11] and [4] for the semantics of Java and C♯ programs we abstract the mathematical structure that underlies the semantics of both languages. The resulting model reveals
the kernel of object-oriented programming language constructs and can be used for teaching them without being bound to a particular language. It also allows us to identify precisely some of the major
differences between Java and C♯.
- CNR, Istituto IEI—Dipartimento di Informatica, Università di , 2002
"... Abstract. The question raised in [15] is answered how to naturally model widely used forms of recursion by abstract machines. We show that turbo ASMs as defined in [7] allow one to faithfully
reflect the common intuitive single-agent understanding of recursion. The argument is illustrated by turbo A ..."
Cited by 1 (0 self)
Abstract. The question raised in [15] is answered how to naturally model widely used forms of recursion by abstract machines. We show that turbo ASMs as defined in [7] allow one to faithfully reflect
the common intuitive single-agent understanding of recursion. The argument is illustrated by turbo ASMs for Mergesort and Quicksort. Using turbo ASMs for returning function values allows one to
seamlessly integrate functional description and programming techniques into the high-level ’abstract programming’ by state transforming ASM rules.
"... We modify Gurevich’s notion of abstract machine so as to encompass computational models, that is, sets of machines that share the same domain. We also add an effectiveness requirement. The
resultant class of “Effective Models ” includes all known Turing-complete state-transition models, operating ov ..."
Cited by 1 (1 self)
We modify Gurevich's notion of abstract machine so as to encompass computational models, that is, sets of machines that share the same domain. We also add an effectiveness requirement. The resultant class of "Effective Models" includes all known Turing-complete state-transition models, operating over any countable domain.
1 Sequential Procedures
We first define "sequential procedures", along the lines of the "sequential algorithms" of [3]. These are abstract state transition systems, whose states are algebras.
Definition 1 (States).
• A state is a structure (algebra) s over a (finite-arity) vocabulary F, that is, a domain (nonempty set of elements) D together with interpretations [f]s over D of the function names f ∈ F.
• A location of vocabulary F over a domain D is a pair, denoted f(a), where f is a k-ary function name in F and a ∈ D^k.
• The value of a location f(a) in a state s, denoted [f(a)]s, is the domain element [f]s(a).
• We sometimes use a term f(t1,..., tk) to refer to the location f([t1]s,..., [tk]s).
• Two states s and s′ over vocabulary F with the same domain coincide over a set T of F-terms if [t]s = [t]s′ for all terms t ∈ T.
• An update of location l over domain D is a pair, denoted l := v, where v ∈ D.
• The modification of a state s into another state s′ over the same vocabulary and domain is ∆(s, s′) = {l := v′ | [l]s ≠ [l]s′ = v′}.
• A mapping ρ(s) of state s over vocabulary F and domain D via injection ρ: D → D′ is a state s′ of vocabulary F over D′, such that ρ([f(a)]s) = [f(ρ(a))]s′ for every location f(a) of s.
• Two states s and s′ over the same vocabulary with domains D and D′, respectively, are isomorphic if there is a bijection π: D ↔ D′, such that s′ = π(s).
A "sequential procedure" is like Gurevich's [3] "sequential algorithm", with two modifications for computing a specific function, rather than expressing an abstract algorithm: the procedure vocabulary includes special constants "In" and "Out"; there is a single initial state, up to changes in
188 helpers are online right now
75% of questions are answered within 5 minutes.
Most Active Subjects
Questions Asked
Questions Answered
Medals Received
Questions Asked
Questions Answered
Medals Received
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/aisha...with_every1/asked","timestamp":"2014-04-20T18:55:46Z","content_type":null,"content_length":"48774","record_id":"<urn:uuid:011051f9-d897-4e7e-91c2-2557237d7487>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00113-ip-10-147-4-33.ec2.internal.warc.gz"} |
I hear we signed a new FA
January 5, 2010
I'm not going to re-hash the analysis I've already done on Holliday in other places (one, because I can draw inferences from that work, as the deal isn't too different, and two, because I don't have my simulation in front of me right now). The assumptions in any analysis like this are key, so I'll quickly run through those that are the most important, where I stand on them, and how I handled them in the simulation referenced in the link above.
1. How good is Holliday's defense? He derives a decent amount of value from above-average LF defense. My projections have him worth around 6-7 RAA next year. There are those in the scouting community that disagree with the metrics about him. Tied in here is the rate at which his defense ages. I must admit, I don't have a good feel for this factor. When I did previous simulations I looked at two methods: 1) a linear decrease based on Jeff Z's work, and 2) a step function that had greater decreases as the player got older. I think 2 is more realistic.
2. Health. Clearly a significant injury would greatly impact whether or not Holliday will be productive enough to justify the deal. Within the simulation I handled this using a random injury generator. The generator would generate half seasons missed due to injury (10% of the time) and adjust the playing time accordingly. Is this enough? Should I have had a chance of missing a full season, or had the chances grow with age?
3. Dollar value of a win. This assumption sets the "break even" point for a deal. It determines the amount of production you'd expect for a given contract. Going into this off season the fangraphs value was ~$4.5M per win, but up until this deal the number had been hovering around $3.8M-$3.9M so far this off season. I'll do the math for you: for a ~$17M average annual value (AAV) that works out to about a 0.7 win difference. I assumed 4.5 in the above referenced piece.
Given those assumptions and the parameters of the contract (7 yr ~$120M) how’d the Birds on the Bat do? I think that there’s definitely a chance that Holliday performs up to his contract (especially
when accounting for inflation and the potential rising cost of wins). My final estimate (albeit the most pessimistic) had him somewhere between a 30-50% chance to be worth more than the formerly
rumored 8 yr/$128M deal, and I think that estimate still has value. So slightly below even money to break even on the deal? Not exactly what I would call a win for the team over the long haul.
The “badness” is magnified if you believe (as most, including me do) that the Cardinals were basically bidding against themselves, but still gave in to Holliday/Boras. All that being said, I still
think I’d rather have Holliday and his contract going forward than Bay and his contract (though I MUCH would’ve preferred 5/80).
OK, enough gloom and doom for now, we’ll worry about the coming years after we hang the banners for the next couple of years right :)
What does Holliday do for the Cards this year? I thought the best way to answer that would be a quick playoff probability added exercise. I'm basing it off of the work displayed here. If I use Erik's projection of 87 wins pre-Holliday I get a playoff probability of ~56%, while adding in Holliday yields a playoff probability of ~80%, for an addition of 24%. Note that these values are based on historic numbers, and do not take into account division strength. To look at that I decided to simulate the division again using Erik's projections as a starting point. I used the projected wins and a 9-win standard deviation (fairly wide, but Nick mentioned it in the comments of a BtB post, so I thought I'd run with it) to get to "actual wins". I came up with the Cards winning the division ~33% of the time before the signing and ~51% of the time after it.
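For readers who want to reproduce this kind of estimate, the sketch below is a minimal Monte Carlo in Python. The rival projections and the three added wins for Holliday are placeholders, not the figures used in the post; only the 87-win Cardinals projection and the 9-win standard deviation come from the text.

import random

def division_win_prob(team_mean, rival_means, sd=9.0, sims=100_000):
    # Draw "actual" wins for each club around its projection and count
    # how often the team of interest finishes first.
    wins = 0
    for _ in range(sims):
        team = random.gauss(team_mean, sd)
        if all(team > random.gauss(m, sd) for m in rival_means):
            wins += 1
    return wins / sims

rivals = [85, 82, 78, 74, 70]  # hypothetical NL Central projections
print("pre-Holliday :", division_win_prob(87, rivals))
print("post-Holliday:", division_win_prob(90, rivals))  # +3 wins assumed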
All in all I think I’m much more indifferent towards the deal than most of the saber community. I don’t like the odds that he performs up to his deal being <50%, but I think I can grit my teeth and
deal with it given the playoff probability payoff in the short term. Now, ask me the same question again in 3 years and I may have a very different answer :)
Math Forum Discussions
Topic: Newton's method.
Replies: 1 Last Post: Nov 12, 1996 1:22 PM
Gandalf
Newton's method.
Posted: Nov 9, 1996 1:55 AM
Posts: 1
Registered: 12/7/04

Can someone provide a little help on this problem?:
Suppose I wanted to write a sequence to perform the Newton inversion of
a polynomial which inverts a series 1 + a(1)x + a(2)x^2+... through a
particular degree. The sequence would look like the following:
(* Newton's inversion of a polynomial *)
guess = 1;
precision = 1;
While[precision < degree,
  precision *= 2;
  If[precision > degree, precision = degree];
  temp = iterate[poly, precision - 1];
  guess = guess + guess*(1 - temp*guess); (* The Newton iteration *)
  guess = iterate[guess, precision - 1]
];
(* Now guess is 1/poly through degree 'degree' *)
iterate would be a function, iterate[f, n], that returns the polynomial f truncated through degree n. This sequence effectively doubles the precision on each pass through it. I'm a little stumped on how to implement the iterate function. Any help greatly appreciated. Thanks.
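One way to think about iterate is as truncation of a coefficient list to a given degree (in Mathematica terms, taking the series through that order). The Python sketch below mirrors the whole loop, with the [:prec + 1] slices playing the role of iterate; it is an illustration of the idea, not the original poster's code.

def trunc_mul(p, q, n):
    # Multiply coefficient lists p and q, keeping terms through degree n.
    out = [0.0] * (n + 1)
    for i, pi in enumerate(p[: n + 1]):
        for j, qj in enumerate(q[: n + 1 - i]):
            out[i + j] += pi * qj
    return out

def invert(poly, degree):
    # Newton inversion of a series with constant term 1, through 'degree'.
    guess, prec = [1.0], 1
    while prec < degree:
        prec = min(2 * prec, degree)
        temp = poly[: prec + 1]
        prod = trunc_mul(temp, guess, prec)
        # guess <- guess*(2 - temp*guess), truncated to current precision
        corr = [2.0 - prod[0]] + [-c for c in prod[1:]]
        guess = trunc_mul(guess, corr, prec)
    return guess

print(invert([1.0, 1.0], 5))  # 1/(1+x) = 1 - x + x^2 - x^3 + ...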
Convert kVA to kWh - OnlineConversion Forums
Originally Posted by
Is it possible to convert 20kVA into kWhs?
Not directly. You need to know two things.
1) Power factor: In ac electrical systems, the current and voltage may not be exactly in phase. The product of voltage x current x cos(phase angle) is power. However, the portion of current that is
not in phase still costs the electric company money to transmit. So they may base their rates on kVA, not on kW for commercial contracts, in essence, charging you as though the current was in-phase
and represented power. (The user can install components that improve the power factor to minimize cost, and this is his incentive to do so.) When the power factor is one, kVA and kW are the same.
2) Time: kWh is the product of power and time. It is 1 kW sustained for an hour; use it for two hours and it will be 2 kWh.
It won't be a perfect estimate, but you can assume the power factor is 1, so 20 kVA = 20 kW and multiply by the hours run to get kWh. | {"url":"http://forum.onlineconversion.com/showthread.php?t=13330","timestamp":"2014-04-21T14:46:13Z","content_type":null,"content_length":"63394","record_id":"<urn:uuid:ceb75ac7-010f-471f-b962-55773d72b47a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00060-ip-10-147-4-33.ec2.internal.warc.gz"} |
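In code form the estimate is a one-liner; the function below is a small sketch of the arithmetic described above (kW = kVA x power factor, and kWh = kW x hours):

def kwh(kva, hours, power_factor=1.0):
    # Apparent power (kVA) -> real power (kW) -> energy (kWh).
    return kva * power_factor * hours

print(kwh(20, 8))        # 160.0 kWh at power factor 1
print(kwh(20, 8, 0.85))  # 136.0 kWh at power factor 0.85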
An approach to the design of sparse array system
, 1997
"... Theory for random arrayspredicts a mean sidelobe level given by the inverse of the number of elements. In practice, however, the sidelobe level fluctuates much around this mean. In this paper
two optimization methods for thinned arrays are given: one is for optimizing the weights of each element, an ..."
Cited by 9 (6 self)
Theory for random arrays predicts a mean sidelobe level given by the inverse of the number of elements. In practice, however, the sidelobe level fluctuates much around this mean. In this paper two
optimization methods for thinned arrays are given: one is for optimizing the weights of each element, and the other one optimizes both the layout and the weights. The weight optimization algorithm is
based on linear programming and minimizes the peak sidelobe level for a given beamwidth. It is used to investigate the conditions for finding thinned arrays with peak sidelobe level at or below the
inverse of the number of elements. With optimization of the weights of a randomly thinned array, it is possible to come quite close and even below this value, especially for 1D arrays. Even for 2D
sparse arrays a large reduction in peak sidelobe level is achieved. Even better solutions are found when the thinning pattern is optimized also. This requires an algorithm that uses mixed integer
linear prog...
- Proc. 1995 IEEE Symp. Ultrasonics , 1995
"... ¯ A method is presented which optimizes weights of general planar 1D and 2D symmetric full and sparse arrays. The objective is to ønd a weighting of the array elements which gives the minimum
sidelobe level of the array pattern in a speciøed region - the stopband. The sidelobe level is controlled on ..."
Cited by 6 (5 self)
A method is presented which optimizes weights of general planar 1D and 2D symmetric full and sparse arrays. The objective is to find a weighting of the array elements which gives the minimum sidelobe level of the array pattern in a specified region - the stopband. The sidelobe level is controlled on a discrete set of points from this region. The method minimizes the Chebyshev norm of the sidelobe level. The method is based on linear programming and is solved with the simplex method. The method removes the large fluctuation in sidelobe level which characterizes random sparse arrays. Examples of optimal weighted 1D and 2D planar arrays are presented.
I. Introduction
2D arrays in ultrasound represent a technological challenge not the least because of the high channel count [7]. For this reason sparse array methods, where elements are removed by thinning, are considered to be necessary [8]. However this will result in an often unacceptably high sidelobe level. Two different approaches to...
- Proc. Int. Congress on Acoustics , 1995
"... this paper developments in transducers and beamformers and their relationship will be discussed. COMPOSITE ARRAYS The advent of piezocomposites has been the main recent development in transducer
technology. A piezocomposite is a combination of a piezoelectric ceramic and a polymer which forms a new ..."
Cited by 4 (1 self)
this paper developments in transducers and beamformers and their relationship will be discussed.
COMPOSITE ARRAYS
The advent of piezocomposites has been the main recent development in transducer technology. A piezocomposite is a combination of a piezoelectric ceramic and a polymer which forms a new material with different piezoelectric properties. Piezocomposites have improved the performance of commonly used arrays such as the mechanically scanned annular array and the linear phased array of Fig. 1 upper panels, in the following ways [2]:
1. Acoustic impedance is reduced giving a better impedance match with tissue. This results in a reduction in reverberation level in the near field as the transducer surface to a less extent reflects back incident energy.
2. The composite materials make the radiators closer to the ideal of a vibrating piston. Primarily this is due to the suppression of unwanted surface waves propagating laterally over the transducer.
"... Sparsely sampled irregular arrays and random arrays have been used or proposed in several fields such as radar, sonar, ultrasound imaging, and seismics. We start with an introduction to array
processing and then consider the combinatorial problem of finding the best layout of elements in sparse 1-D ..."
Sparsely sampled irregular arrays and random arrays have been used or proposed in several fields such as radar, sonar, ultrasound imaging, and seismics. We start with an introduction to array
processing and then consider the combinatorial problem of finding the best layout of elements in sparse 1-D and 2-D arrays. The optimization criteria are then reviewed: creation of beampatterns with
low mainlobe width and low sidelobes, or as uniform as possible coarray. The latter case is shown here to be nearly equivalent to finding a beampattern with minimal peak sidelobes. We have applied
several optimization methods to the layout problem, including linear programming, genetic algorithms and simulated annealing. The examples given here are both for 1-D and 2-D arrays. The largest
problem considered is the selection of K = 500 elements in an aperture of 50 by 50 elements. Based on these examples we propose that an estimate of the achievable peak level in an algorithmically
optimized array is inverse proportional to K and is close to the estimate of the average level in a random array. Active array systems use both a transmitter and receiver aperture and they need not
necessarily be the same. This gives additional freedom in design of the thinning patterns, and favorable solutions can be found by using periodic patterns with different periodicity for the two
apertures, or a periodic pattern in combination with an algorithmically optimized pattern with the condition that there be no overlap between transmitter and receiver elements. With the methods given
here one has the freedom to choose a design method for a sparse array system using either the same elements for the receiver and the transmitter, no overlap between the receiver and transmitter or
partial overlap as in periodic arrays.
, 1997
"... Theory for random arrays predicts a mean sidelobe level given by the inverse of the number of elements. In this paper 1 two optimization methods for thinned arrays are given: one is for
optimizing the weights of each element, and the other one optimizes both the layout and the weights. The weight ..."
Theory for random arrays predicts a mean sidelobe level given by the inverse of the number of elements. In this paper two optimization methods for thinned arrays are given: one is for optimizing
the weights of each element, and the other one optimizes both the layout and the weights. The weight optimization algorithm is based on linear programming and minimizes the peak sidelobe level for a
given beamwidth. It is used to investigate the conditions for finding thinned arrays with peak sidelobe level at or below the inverse of the number of elements. With optimization of the weights of a
randomly thinned array it is possible to come quite close and even below this value, especially for 1D arrays. Even for 2D sparse arrays a large reduction in peak sidelobe level is achieved. Even
better solutions are found when the thinning pattern is optimized also. This requires an algorithm that uses mixed integer linear programming. In this case solutions with lower peak sidelobe level
than the i...
, 1996
"... ke to thank my supervisor, Professor Sverre Holm, for his encouragement and assistance through this work. Also thanks to Dr. Geir Dahl for his contribution of ideas to the optimization methods.
Oslo, May 1996 Bjørnar Elgetun Contents 1 Introduction 5 1.1 Medical ultrasound . . . . . . . . . . . . ..."
ke to thank my supervisor, Professor Sverre Holm, for his encouragement and assistance through this work. Also thanks to Dr. Geir Dahl for his contribution of ideas to the optimization methods. Oslo, May 1996, Bjørnar Elgetun.
Contents: 1 Introduction (1.1 Medical ultrasound; 1.2 Objective of this thesis); 2 Ultrasound imaging (2.1 The ultrasound imaging system; 2.1.1 The ultrasound transducer; 2.1.2 Different transducer types; 2.2 3D imaging; 2.2.1 Applications; 2.2.2 Technical considerations); 3 Ultrasound wave propagation (3.1 Ultrasound waves ...)
, 1999
"... Existing 3D ultrasound systems are based on mechanically moving 1D arrays for data collection and post-processing of data to achieve 3D images. To be able to both collect and process 3D data in
real-time, a scaling of the ultrasound system from 1D to 2D arrays is necessary. A typical 2D-system uses ..."
Existing 3D ultrasound systems are based on mechanically moving 1D arrays for data collection and post-processing of data to achieve 3D images. To be able to both collect and process 3D data in
real-time, a scaling of the ultrasound system from 1D to 2D arrays is necessary. A typical 2D-system uses 1D arrays with about 100 receive and transmit channels. Scaling of this system to get a
3Dsystem with as good performance as the 2D-system implies a squaring of the number of channels, i.e. a 10.000 channel system. To reduce cost and complexity of such a system, removal of array
elements or equally channels is possible. Arrays with removed elements are known as sparse arrays. At the University of Oslo, there are two ongoing projects which aim to find optimal 2D sparse
layouts through optimization and simulation. The goal is to minimize the number of channels without compromising image quality. To verify this work and to critically test system performances, an
extensive evaluation program is... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2974406","timestamp":"2014-04-16T11:07:20Z","content_type":null,"content_length":"30640","record_id":"<urn:uuid:d0b5d88b-cd6e-43ea-8570-735af83cbee8>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00311-ip-10-147-4-33.ec2.internal.warc.gz"} |
Public Parapsychology
Super Bowl XLIII Field RNG Demonstration (Part Two)
By Bryan Williams
In this post, I provide a basic summary of the procedures, statistical analysis, and predictions to be used for the planned Super Bowl XLIII field random number generator (RNG) demonstration at
Public Parapsychology. The methods follow those used in my previous Super Bowl field RNG explorations (coming in Part Three), and are closely modeled after those developed by the PEAR Laboratory for
use in their field RNG studies (Nelson et al., 1996, 1998), and by the Global Consciousness Project for individual event analysis (Bancel & Nelson, 2008; Nelson, 2001). For more complete details,
interested readers are referred to these publications, as well as to other field RNG studies that have used these same methods (e.g., Bierman, 1996; Crawford et al., 2003; Hirukawa & Ishikawa, 2004;
Nelson & Radin, 2003; Rowe, 1998). We invite any questions, comments, or concerns from readers regarding these methods.
For each Super Bowl exploration, an Orion RNG [1] is set up to run continuously on a personal computer (PC) one hour before the football game. This PC is located in a room about twelve feet from
where B.W. usually watches the televised Super Bowl broadcast in the living room of his central New Mexico (USA) home. In order to mark the occurrence of notable events (such as kickoff, the scoring
of the first two goals, and the halftime period), a paper time log is kept by B.W. as he watches the game, and the time for each event is noted in Mountain Standard Time using a wristwatch that is
roughly synchronized to the PC’s internal clock beforehand. The PC’s clock is itself synchronized in advance with an Internet-based timeserver to ensure accurate time. Following the game, the RNG is
allowed to run for up to 15 minutes, then it is shut off and the data stored in the PC’s memory is saved to hard disk for analysis.
The PC uses a custom software package [2] developed by researchers at the Institute of Noetic Sciences to collect 200 random bits per second (= 1 test “trial”) from the RNG. Each bit consists of a
binary number (either a “1” or a “0”) that is randomly determined by sampling the electronic noise source. For simplicity, this process can be thought of as being analogous to flipping a coin, with
“heads” representing the “1”-bit, and “tails” representing the “0”-bit. When we flip a coin, each side has a 50/50 chance of turning up, and the same goes for each kind of bit (i.e., the theoretical
probability of occurrence for each kind of bit is 1/2, or p = .5). Thus, the RNG can be seen as flipping 200 electronic “coins” per second. The software then counts the number of “heads” (i.e.,
“1”-bits) that came up in the 200 flips, and stores the number as the trial outcome value. Given the 50/50 probability of occurrence in theory, roughly 100 “heads” and 100 “tails” should be generated
on average by the RNG over a long sequence of trials. In a traditional test of psychokinesis (PK), the goal is to attempt to upset this balance of heads and tails through mental intention on the RNG,
such that more of one outcome is produced over the other. If the mass “group mind” effect is related to PK, then presumably the same should be observed in the field RNG data during moments of focused
group attention and emotional response.
Statistical analysis of the RNG data proceeds using techniques that follow from classical statistical methods (Aron & Aron, 1997; Snedecor & Cochran, 1980). For those readers with a technical mind
who are curious about the details, the following steps are taken in the analysis (those of you unfamiliar with statistics may want to skip ahead to the predictions):
1.) The trial output of the RNG follows a binomial distribution that has a theoretical mean of 100 and a theoretical standard deviation (SD) of 7.071. [3] To represent a basic measure of the
deviation from the mean, each trial outcome value is converted into a z-score using the equation:
z = (x – M) / SD
where x is the outcome value for each trial, M is the mean, and SD is the standard deviation. Initially, the theoretical mean (100) was used for M, and the theoretical SD (7.071) for SD in the
analysis of the Super Bowl data. However, it should be pointed out that, although the Orion RNGs tend to closely match the theoretical values for the binomial distribution overall, it is possible for
an individual RNG to produce a small bias of the mean due to the nature of its random source. In other words, the mean and SD of each RNG should not be expected to exactly equal the theoretical
values each and every time [4]. For that reason, in May of 2007, I made the decision to begin using the mean and SD empirically calculated from all of the RNG trial outcome values for M and SD,
respectively, as a way to account for any potential mean bias in the RNG. This issue becomes relevant for the results of my previous Super Bowl explorations (discussed in Part Three).
2.) Each resulting z-score is squared to form a positive value that is Chi-Square distributed, and that has one degree of freedom (df).
3.) Given that Chi-Square values can be summed together as they are in the standard calculation of the Chi-Square statistic (e.g., Aron & Aron, 1997, p. 235), all of the individual values are added
together across time to represent the overall measure of the deviation from the mean in the RNG data. Their associated degrees of freedom are similarly added together. A probability value can then be
obtained from the total Chi-Square and degrees of freedom.
4.) The values can be cumulatively plotted over time in a graph as Chi-Square – 1 (i.e., the 1 df is subtracted from each of the associated Chi-Square values) to visualize the trends in the RNG data
as time passes.
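As an illustration of Steps 1 through 4 (run on simulated null data rather than the actual field recordings), a minimal Python version might look like this; the normal approximation for the Chi-Square tail at large degrees of freedom is my addition, not part of the published procedure:

import math
import random

M, SD = 100.0, math.sqrt(200 * 0.5 * 0.5)   # theoretical mean and SD

# One trial outcome per second; here an hour of simulated null data.
outcomes = [sum(random.getrandbits(1) for _ in range(200)) for _ in range(3600)]

z = [(x - M) / SD for x in outcomes]        # Step 1
chisq = sum(zi**2 for zi in z)              # Steps 2-3: sum of 1-df Chi-Squares
df = len(z)

# Step 4: the quantity plotted over time is the running sum of (z^2 - 1).
cum, total = [], 0.0
for zi in z:
    total += zi**2 - 1
    cum.append(total)

# Large-df normal approximation to the Chi-Square tail probability.
z_overall = (chisq - df) / math.sqrt(2 * df)
print(f"chi-square = {chisq:.1f} on {df} df, z ~ {z_overall:.2f}")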
With the accumulation of RNG data that I collected from previous Super Bowls, it is also possible to examine a combined result across all Super Bowls using a Stouffer’s Z-score, calculated by adding
together the z-scores for each individual second (Step 1) from each year, then dividing by the square root of the number of scores added (the analysis then proceeds as in Steps 2 – 4). This will be
done with the previous field RNG data, along with the data collected during the planned demonstration, in order to assess the combined result across five consecutive Super Bowls.
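The Stouffer's Z combination can be sketched the same way; this too is a hedged illustration, assuming one second-by-second z-score array per Super Bowl with the seconds aligned across years:

import numpy as np

# One z-score array per Super Bowl (simulated here for illustration).
z_by_year = [np.random.default_rng(seed).standard_normal(3600) for seed in range(5)]

# Stouffer's Z for each second: add the z-scores across years, then divide
# by the square root of the number of scores added.
stacked = np.vstack(z_by_year)
stouffer_z = stacked.sum(axis=0) / np.sqrt(stacked.shape[0])

# The analysis then proceeds as in Steps 2 - 4: square, sum, and cumulate.
combined_chisq = (stouffer_z ** 2).sum()
print("combined Chi-Square = %.1f on %d df" % (combined_chisq, stouffer_z.size))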
To explore a mass group mind effect, two test predictions are annually made for the Super Bowl. The first test prediction is for the football game itself, covering the time spanning from the moment
of kickoff to the end of the televised broadcast (the latter was included to allow for any residual effects that may occur in conjunction with the trophy presentation and crowd response). Throughout
this time period (averaging around 3.5 hours total), it is predicted that a steadily increasing non-random pattern (i.e., a positive deviation from the expected mean) will be observed in the field
RNG data, which overall will be significantly different from chance (based on the resulting probability value for the total Chi-Square and df values).
Considering the excitement and focused crowd attention that is often generated by the halftime concerts, the second test prediction specifically concerns the halftime show, covering the time from the
start of the halftime highlights to the beginning of the 3rd Quarter. During this halftime period (averaging around 30 minutes total), another steadily increasing non-random pattern is predicted to
occur in the RNG data.
To be consistent with my previous Super Bowl explorations, both of these predictions will be further tested for the planned demonstration. In the next post, we will examine the results of my previous explorations.
The rest of the series can be found in Parts One, Three, and Four.
Bryan Williams
Bryan Williams is a Native American student at the University of New Mexico, where his undergraduate studies have focused on physiological psychology and physics. He is a student affiliate of the
Parapsychological Association, a student member of the Society for Scientific Exploration, and a co-moderator of the Psi Society, a Yahoo electronic discussion group for the general public that is
devoted to parapsychology. He has been an active contributor to the Global Consciousness Project since 2001.
[1] In brief, the Orion RNG is a small external hardware circuit that uses electronic noise as its source of randomness. It is manufactured by Orion/ICATT Interactive Media in Amsterdam, the
Netherlands, and detailed specifications of the device can be found on the company’s website.
[2] This is the Microsoft Windows-based “FRED” software package, developed by researchers associated with the Institute of Noetic Sciences in Petaluma, CA.
[3] This value can be obtained by the statistical equation for the standard deviation of a binomial random variable: SD = Sqrt [Npq], where N is the total number of bits per trial (200), p is the
theoretical probability for a bit (.5), and q = 1 – p (Utts & Heckard, 2006, Section 8.4).
[4] Put another way, whenever the mean and standard deviation of all the trial outcome values generated by the RNG are calculated, they should not be expected in every case to be exactly equal 100
and 7.071, respectively. Instead, they tend to fluctuate somewhere around these two values.
Aron, A., & Aron, E. N. (1997). Statistics for the Behavioral and Social Sciences. Upper Saddle River, NJ : Prentice-Hall.
Bancel, P., & Nelson, R. (2008). The GCP event experiment: Design, analytical methods, results. Journal of Scientific Exploration, 22, 309 – 333.
Bierman, D. J. (1996). Exploring correlations between local emotional and global emotional events and the behavior of a random number generator. Journal of Scientific Exploration, 10, 363 – 373.
Crawford, C. C., Jonas, W. B., Nelson, R., Wirkus, M., & Wirkus, M. (2003). Alterations in random event measures associated with a healing practice. Journal of Alternative and Complementary Medicine,
9, 345 – 353.
Hirukawa, T., & Ishikawa, M. (2004). Anomalous fluctuation of RNG data in Nebuta: Summer festival in Northeast Japan. Proceedings of Presented Papers: The Parapsychological Association 47th Annual
Convention (pp. 389 – 397). Cary, NC: Parapsychological Association, Inc.
Nelson, R. D. (2001). Correlation of global events with REG data: An Internet-based, nonlocal anomalies experiment. Journal of Parapsychology, 65, 247 – 271.
Nelson, R. D., Bradish, G. J., Dobyns, Y. H., Dunne, B. J., & Jahn, R. G. (1996). FieldREG anomalies in group situations. Journal of Scientific Exploration, 10, 111 – 141.
Nelson, R. D., Jahn, R. G., Dunne, B. J., Dobyns, Y. H., & Bradish, G. J. (1998). FieldREG II: Consciousness field effects: Replications and explorations. Journal of Scientific Exploration, 12, 425 –
Nelson, R. D., & Radin, D. I. (2003). FieldREG experiments and group consciousness: Extending REG/RNG research to real-world situations. In W. B. Jonas & C. C. Crawford (Eds.) Healing, Intention, and
Energy Medicine: Science, Research Methods and Clinical Implications (pp. 49 – 57). Edinburgh, UK: Churchill Livingstone.
Rowe, W. D. (1998). Physical measurement of episodes of focused group energy. Journal of Scientific Exploration, 12, 569 – 581.
Snedecor, G. W., & Cochran, W. G. (1980). Statistical Methods (7th Ed.). Ames, IA: Iowa State University Press.
Utts, J. M., & Heckard, R. F. (2006). Mind on Statistics (3rd Ed.). Belmont, CA: Duxbury Press.
Super Bowl XLIII Field RNG Demonstration (Part One)
by Bryan Williams
Based on the widespread interest and attention annually given to the NFL Super Bowl, I wish to take the opportunity to present on Public Parapsychology a demonstration of a field Random Number
Generator (RNG) analysis of the upcoming Super Bowl XLIII on February 1, 2009, which I hope will be both interesting and informative for our readers. In this first post of three, I offer a brief
background on field RNG studies of sporting events that provides the foundation for this planned demonstration.
Many Americans would probably agree that Super Bowl Sunday is an event they look forward to every year with anxious anticipation. The big football parties with family and friends, the amusing TV
commercials and halftime concerts, and the general excitement of the football game itself are all things that tend to make this particular Sunday stand out from all the rest in terms of enjoyment.
Given that the Super Bowl is such a social sporting event in the United States, with the excitement stirring the attention and emotions of millions of football fans across the country, it might seem
reasonable to think that it could be conducive to short-lived psi effects, particularly psychokinesis (PK, or “mind over matter”). If millions of fans are cheering in unison – not only those in the
crowd at the stadium, but also those sitting at home watching the live TV broadcast – then one might be able to metaphorically envision a unified cheer, a mass chorus of raised voices that at times
may be as rhythmic as an orchestra. Another metaphor may be that as a large group of fans watch the game together and share the same emotional reactions, they can be seen as sharing the same frame of
mind. Focusing their collective attention on the game, cheering along with family, friends, and other spectators – they are not acting like individual minds. Rather, they are acting, in a sense, like
a single mass “group mind” that is being moved by the excitement. And if such a mass group mind is moved during the game, then perhaps it might subtly move the matter in the surrounding physical
environment along with it.
During the 1990s, as part of an effort to extend and apply their extensive laboratory findings on PK to more natural settings [1], researchers at the Princeton Engineering Anomalies Research (PEAR)
Laboratory began deploying portable random number generators (RNGs)[2] at various group events to explore the plausibility of such a mass “group mind” effect on matter. These events included
ceremonial rituals, stage performances, parties, and healing workshops. Rather than being purely random as expected, the combined streams of data from the RNGs during these events tended to show a
steady non-random pattern that was significantly different from chance by statistical standards (Nelson et al., 1996, 1998), hinting that there could be something to the notion of a mass “group
mind.” From these experiments, one may wonder: Could sporting events like the Super Bowl be conducive to a mass “group mind” effect on matter?
Field RNG Studies of Sporting Events
When they began these “field RNG” studies [3], the PEAR researchers did examine a small number of sporting events, including several Princeton University football games. The RNG data showed little
indication of a group mind effect, although the researchers noticed that most of the games were rather lacking in crowd enthusiasm (Nelson et al., 1998, pp. 442 – 443).
Despite the null results of the PEAR group, other researchers tried looking at other sports in their own field RNG studies. Dick Bierman (1996) of the University of Amsterdam had set up a field RNG
in the home of a Dutch family for a study that coincided with the 1995 European soccer final. While the family (and presumably many other people throughout the Netherlands) watched the soccer match
on TV and cheered the Dutch team to victory, the RNG ran silently in the background. The RNG data during the 90-minute game showed a steadily increasing non-random pattern that was significantly
beyond chance, while the control data collected 90 minutes before the start of the game for comparison were purely random as expected.
In a similar study, German researchers Johannes Hagel and Margot Tschapke (2004) of the Institut für Psycho-Physik in Köln had collected streams of data from three field RNGs during a highly charged home soccer game won by the local Köln team. Analysis revealed increasing non-random patterns in the data from two of the RNGs that persisted for several hours following the game, when the people of Köln had walked through the streets in celebration.
At least two field RNG studies have previously looked at the Super Bowl directly. As part of their examination of sporting events, the PEAR researchers had run two separate field RNGs during Super
Bowl XXX in January of 1996. Although the data from each of the RNGs showed modest increases away from expected randomness, their overall results were insignificant (Nelson et al., 1998, pp. 440,
443). While at the University of Nevada at Las Vegas, Dean Radin (1997, pp. 167 – 168) had made his own independent examination of Super Bowl XXX using six field RNGs. To see how the RNG data might
correlate with audience attention, Radin split the six data streams into periods of “high” and “low” interest, based on ratings given to each period by one or more experimenters watching the
broadcast. Periods of “high” interest might include the game itself and the halftime concert, while periods of “low” interest might include pre-game broadcast commentary and the commercial periods
(of course the latter is debatable now, since the commercials tend to be quite amusing, and the interest they draw often competes with the game itself). While not notable by statistical standards,
there were slight indications that the RNG data during the “high” interest periods were gradually moving away from expected randomness, while the data from “low” interest periods remained random.
Inspired in part by the field RNG studies, the Global Consciousness Project (GCP) was founded in 1998 to further explore "group mind" effects on a global scale when major world events occur. To do
this, the GCP set up and monitors an Internet-based global network of RNGs that continually run 24/7, sending their data to a server in Princeton, NJ, for archiving and analysis (Bancel & Nelson,
2008; Nelson, 2001). In addition to examining formally defined global events, the GCP informally explores local events of interest on occasion. One such event was Super Bowl XXXVII in January of 2003
[4]. Although not statistically significant overall, the data from the 50 active RNGs in the GCP network at that time seemed to show a strong non-random trend during the start of the game that was
consistent with the predicted effect. Despite interesting internal trends in some cases, GCP examinations of other sporting events, including the 2002 World Cup (Event #112) and two World Series
games (2001 & 2008; Event #89 & #279, respectively), have produced insignificant outcomes for reasons that remain unclear.
In all, field RNG studies of the Super Bowl and other sporting events have produced a “mixed bag” of results, making it unclear as to whether such events are conducive to a mass “group mind.” In the
next post, I will provide a summary of additional field RNG explorations of the Super Bowl carried out by myself, which have further motivated Public Parapsychology’s planned field RNG demonstration,
and I will provide an overview of the planned procedures, analysis, and predictions for the demonstration.
The rest of the series can be read in Parts Two, Three, and Four.
Bryan Williams
Bryan Williams is a Native American student at the University of New Mexico, where his undergraduate studies have focused on physiological psychology and physics. He is a student affiliate of the
Parapsychological Association, a student member of the Society for Scientific Exploration, and a co-moderator of the Psi Society, a Yahoo electronic discussion group for the general public that is
devoted to parapsychology. He has been an active contributor to the Global Consciousness Project since 2001.
[1] The details of these findings can be found in a journal article describing the PEAR Lab’s 12-year research database on PK (Jahn et al., 1997). Electronic copies of this and other PEAR
publications cited here have been made available for download at the PEAR Lab’s archival website.
[2] The PEAR Lab regularly uses the term “random event generator” (REG) as another name for RNG. For the most part, the two terms – RNG and REG – are synonymous, and we will use only one term (RNG)
here for convenience.
[3] As first explained by Nelson et al. (1996, p. 112), the name “field RNG” can have a double meaning. Besides reflecting the fact that the RNGs have been taken out of the laboratory and into the
field, the name can also provide a symbolic reference to a concept derived by Nelson et al. to think about the “group mind” effect. To affect the field RNG, the group mind effect might be thought of
as an invisible PK-related “field” that extends out into the surrounding environment to affect matter, analogous to the way a magnetic field seems to extend out from the magnet to affect iron. It
should be kept in mind that while this concept provides a useful way to think about how a group mind effect may work, it is purely metaphorical in nature and not currently supported by any clear empirical evidence.
[4] This informal GCP exploration of Super Bowl XXXVII can be found at the GCP website. Links to GCP examinations of the other sporting events mentioned elsewhere in the text can be found on the
GCP’s formal results page.
Bancel, P., & Nelson, R. (2008). The GCP event experiment: Design, analytical methods, results. Journal of Scientific Exploration, 22, 309 – 333.
Bierman, D. J. (1996). Exploring correlations between local emotional and global emotional events and the behavior of a random number generator. Journal of Scientific Exploration, 10, 363 – 373.
Hagel, J., & Tschapke, M. (2004). The local event detector (LED) – an experimental setup for an exploratory study of correlations between collective emotional events and random number sequences.
Proceedings of Presented Papers: The Parapsychological Association 47th Annual Convention (pp. 379 – 388). Cary, NC: Parapsychological Association, Inc.
Jahn, R. G., Dunne, B. J., Nelson, R. D., Dobyns, Y. H., & Bradish, G. J. (1997). Correlations of random binary sequences with pre-stated operator intention: A review of a 12-year program. Journal of
Scientific Exploration, 11, 345 – 367.
Nelson, R. D. (2001). Correlation of global events with REG data: An Internet-based, nonlocal anomalies experiment. Journal of Parapsychology, 65, 247 – 271.
Nelson, R. D., Bradish, G. J., Dobyns, Y. H., Dunne, B. J., & Jahn, R. G. (1996). FieldREG anomalies in group situations. Journal of Scientific Exploration, 10, 111 – 141.
Nelson, R. D., Jahn, R. G., Dunne, B. J., Dobyns, Y. H., & Bradish, G. J. (1998). FieldREG II: Consciousness field effects: Replications and explorations. Journal of Scientific Exploration, 12, 425 –
Radin, D. I. (1997). The Conscious Universe: The Scientific Truth of Psychic Phenomena. San Francisco: HarperEdge.
Harvey J. Irwin, an Australian psychologist at the University of New England, has written four editions of An Introduction to Parapsychology. Caroline A. Watt, of the University of Edinburgh, has
joined him for an updated fifth edition. Written as a textbook, its 300 pages include an overview not only of extrasensory perception (ESP) and psychokinesis (PK), but relevant aspects of
poltergeist, near-death, out-of-body, apparitional, and reincarnation experiences as well as unique chapters on parapsychology’s history, phenomenology, relevance to other disciplines, belief
systems, and –possibly most important—parapsychology as a scientific enterprise.
All of this material produces a volume that is thick but authoritative; rigorous but approachable. Laypeople beware, though: with such an extensive volume of data, it is not for those with short attention spans. For those who truly feel captivated by parapsychological material, though, it is a treasure trove.
One of the most poignant aspects of the book is, perhaps, the perspective in which it is written. The authors are as transparent about the topic of parapsychological phenomena as is possible. They
make no great claims to the field but do take a well-grounded and a cautious stance on its potential impact. This perspective is well illustrated by the final statement in the book:
If all of the phenomena do prove to be explicable within conventional principles of mainstream psychology surely that is something worth knowing, especially in relation to counseling practice; and if
just one of the phenomena should be found to demand a revision or an expansion of contemporary psychological principles, how enriched behavioral science would be. (p. 261)
This level of transparency is not achieved without a well-positioned grudge or two against the aforementioned 'mainstream psychology' and the science community. Addressing what I generally refer to as 'science dogma,' the authors state, "some scientists reject parapsychology as a science simply because they cannot accept its empirical findings" (p. 251). An exemplary quote
is then given from the prominent psychologist Donald Hebb, who in a 1951 issue of the Journal of Personality wrote “why do we not accept ESP as a psychological fact? Rhine has offered us enough
evidence to have convinced us on almost any other issue…I cannot see what other basis my colleagues have for rejecting it…My own rejection of [Rhine’s] views is – in the literal sense – prejudice.”
More recently the skeptical commentator Ray Hyman admitted he could not find any methodological flaws in a series of psi experiments, yet he still refused to concede their support for the psi
hypothesis in part on the ground that “it is impossible in principle to say that any particular experiment or experimental series is completely free from possible flaws” (Hyman, 1996, p.40).
One can observe that over at least a 40-year time span there has been an unfortunate persistence of such "science dogma," which is why An Introduction to Parapsychology stands as a beacon of truth and impartiality in the scientific method.
The bottom line is that the 5th edition of Irwin's, and now Watt's, work is clearly the most balanced, accurate and current text for anyone interested in a true introduction to parapsychology.
Hebb, D. O. (1951). The role of neurological ideas in psychology. Journal of Personality, 20, 35-55.
Hyman, R. (1996). Evaluation of a program on anomalous mental phenomena. Journal of Scientific Exploration, 10, 31-58
Sidian M.S. Jones
Sidian M.S. Jones is a graphic designer and rock vocalist living in Boise, Idaho who also heads the Redefine God – Religion 2.0 spiritual movement at www.RedefineGod.com.
The Hidden Whisper is an interesting, fast-paced detective story with a paranormal twist. The author, JJ Lumsden, is a professional parapsychologist and full member of the Parapsychological
Association, giving his story the depth, realism and unique perspective from his own experience in the field. The setting and the characters are well established, with a dash of humor in just the
right spots. The story starts off a little uneven, with scenes and characters flashing by as the author sets the stage for all the characters from several different angles and settings, but as the
story progresses, the characters and settings take on a cohesive life of their own.
The main character is a parapsychologist, who is asked to investigate an apparent haunting while visiting his grandmother deep in the Arizona desert. This favor is asked of him right as a close
relative of his has died. The investigation pulls him from his family, creating friction that adds a depth of character and history to the story beyond a mere detective story.
The investigation itself is a clever detective story, with the parapsychologist scrutinizing every angle and following the branching pathways of a true mystery, from strange haunting sounds in the
night to engaging in fisticuffs and frightening encounters in dark parking lots. The overarching feel of the paranormal, combined with an underlying menace that the reader is drawn into, is
complemented nicely by the shoe-leather detective style of the parapsychologist investigator.
Beyond an interesting and captivating story, the author also successfully adds an extra dimension that provides his readers with an excellent education in the basics of real life parapsychology.
Footnotes during the story lead the reader to a broad and satisfying glossary of relevant information from the field of parapsychology. The glossary is full of interesting information, stories, and
details about parapsychology and its critics, as well as supplying a large number of further reading references.
Skeptics and believers in the paranormal alike will enjoy JJ Lumsden’s The Hidden Whisper; it contains elements that will appeal to everyone.
Mark Wilson
Mark Wilson is an avid reader of fiction and science, and is a writer of short stories. He is an information technology professional with an interest in the paranormal.
Are Parapsychologists Living or Are They Dead? A Review of the Utrecht II Conference
by Renaud Evrard
The first International Congress of Parapsychology in Utrecht, Netherlands, was in 1953. That conference helped to advance the field and to professionalize researchers. In October 2008, the
Parapsychology Foundation organized a second Utrecht conference as a tribute to that conference and an assessment of the field. It was titled Utrecht II: Charting the Future of Parapsychology. As
shown in the program, the topics addressed covered many aspects of the field.
The Parapsychology Foundation has published an extensive review of Utrecht II. We can thank the whole team for the great organization during the four-day conference. Attending the conference was a great opportunity, especially for students like me who are newcomers to the field, to be allowed so much time for informal discussions with researchers who, for the most part, were just virtual names or inaccessible celebrities, like the Nobel Prize winner Brian Josephson.
However, this conference was not perfect. Here are my impressions:
- There wasn't enough time for peer debates. There were five-minute discussion periods at the end of each presentation and, after every three presentations, a 30-minute discussion period with the three lecturers on stage. By contrast, the Euro-PA Congress of October 2007 in Paris privileged discussion (2/3) over presentation (1/3). This format allowed everyone to develop their ideas, criticisms and responses (especially non-native English speakers, who need time to overcome their shyness!).
- There was high heterogeneity among the lecturers, maybe because of cultural differences. Methodological requirements, theories, stances on the authenticity of psi phenomena, and personal involvements varied from one researcher to another. Does parapsychology really have a community? The backgrounds of the presenters were very different, and the gap was particularly noticeable between native and non-native English speakers. The PF did marvellous work, however, in bringing all these researchers together for the first time since 1953!
- Another problem is that some of the lectures were too introductory. A few of them could have been given by any of at least ten people present at the conference.
- Most of the presentations were retrospective assessments, rarely asking the burning questions about the future of the discipline. This palette of assessments sketched a fragmented field, with each presenter wanting to pull the cover in his or her own direction. Is it because parapsychology attracts such creative personalities that even this small group of researchers cannot conform to a common orientation?
- Maybe as a consequence of the previous points, I left the congress without any impression that pragmatic decisions had been taken. That's a big difference from Utrecht I, where researchers formed
committees to bring more organization to the field making that earlier conference, as Carlos Alvarado remarked in his review, “a milestone in the 20th-century history of the field, helping to shape
the 50 years that followed.”
Are parapsychologists living or are they dead? That's the question I asked myself after the conference, even though I had just seen, in real life, some well-known personalities in the field. But this
question rose from a specific definition of the parapsychologist as the scientist who challenges the issue of the authenticity of psi phenomena. This was not the case of all the researchers present
at the conference. Some of the current major contributors in experimental parapsychology (Radin, Sheldrake, Bierman, Bem, and Parker) were not there, and their absence induced a strange atmosphere.
The challenge of proof-oriented research is difficult, because of the pressures that researchers face both inside and outside the field. Only a small group of scientists produce the majority of
empirical data. They are psi-conducive experimenters with the time and money to ask the question, “Does psi exist?” The undecidability of the question maintains the interest of both the public and
researchers. But for most parapsychologists, this question is not addressed directly. These researchers cautiously work on surrounding topics such as psychological variables. Some scientists have no
doubt about reality of psi, but at the conference their assertions made me feel some embarrassment, as if they had crossed the line of the current consensus. Is the parapsychologist dead when he or
she stops asking if psi exists?
This problem increases in complexity when we take a sociological perspective. There is the issue of personal experiences and beliefs. Can a parapsychologist bracket aside his or her own life
experiences while doing research? The necessity of personal distance is a strong requirement in the sciences, but it seemed to me stronger in Europe than in America. During the conference, it was my
impression that many of the non-European researchers had a more “psi is proven” attitude, with more self-disclosure on their personal beliefs and experiences. While more academic opportunities in
parapsychology emerge in Europe as American institutions close down, the original stance of a pragmatic pro-psi attitude appears to be breaking up. The living parapsychologist must enter a dissociative state. Two types of research have emerged, as if two trends were living side by side.
As Deborah Delanoy pointed out in her invited address, there are pros and cons of doing research in private institutes versus the university setting. Financing, broadcasting, recognition, constraints,
and perpetuity vary completely from one institution to another. For somebody who wants to make a living while doing research in parapsychology, it is better to be known for something else than
successful proof-oriented psi research. Robert Morris's legacy at the Koestler Parapsychology Unit is an academic success, with 27 PhD students, 18 of whom currently work in universities, but at what price? Psi phenomena are still far from proven. Only surrounding approaches, like studies of altered states of consciousness, paranormal beliefs, anomalous experiences and historical studies, assure
these academic positions. I wonder if certain properties of the paranormal inevitably entail the dissolution of its subversive reach when it penetrates strongly structured and conformist circles
(such as universities) as suggested in George Hansen’s Trickster theory (Hansen, 2001).
But the differences between private and academic institutions do not completely describe the reality. The private Institute of Border Areas of Psychology and Mental Hygiene in Germany is probably the
most active in private parapsychological research. Its stance is, however, both careful and brave. It poses the question of the authenticity of psi phenomena differently, basing itself on theoretical models (such as Generalized Quantum Theory) that go at once farther and less far than the common representations. This model, in which psi is not a physical signal, allows the construction of a new field where psi receives a positive operational definition.
An exchange at Utrecht II was very revealing in this respect: in his lecture, Professor Harald Walach of the University of Northampton went so far as to say that parapsychology was dead, but Mario Varvoglis, president of the private Institut Métapsychique International in France, replied that if this model (GQT) of psi proved to be exact, it would be the death of only one parapsychology, one view of psi accompanying one specific discourse. And it would be for that reason that parapsychologists still have a spark of life in the middle of a hostile world.
Hansen, G. (2001). The Trickster and the Paranormal. Philadelphia: Xlibris.
After receiving some new information, I must clarify a few things about my criticisms:
- The Parapsychology Foundation was not alone in making the decisions about the organization of the Utrecht II conference, in particular about the timing of discussions. Longer time periods were
proposed, but there were many talks on various topics, so some logistical choices were made.
- Many of the people who missed the conference had been invited, but life circumstances prevented them from attending. In fact, the Parapsychology Foundation sent more than 100 invitations!
- My impression was that some of the lectures were too introductory; but, actually, the guidelines were to review the basics in each area and then speculate on the future. Maybe this style of
presentation didn't work well this time because the basics are quite large, and the future quite hard to imagine.
I hope that with this addendum, my criticisms have become less unfair and more what they tried to be: part of charting the future of parapsychology.
Renaud Evrard is a French psychologist, preparing a Ph.D. in clinical and differential aspects of exceptional experiences at the University of Rouen. He has been an active member of the Student Group of the Institut Métapsychique International since 2004, and a student affiliate of the Parapsychological Association since 2007. In 2007 he co-founded the Service for Orientation and Help of People with
Exceptional Experiences (SOS-PSEE) in Paris.
Research Participants Needed for Study on Eye Movements and the Precognition of Emotional Faces
Researchers at Liverpool Hope University in the UK are studying conscious and unconscious measures associated with the precognition of emotional faces. Prior to coming to the scheduled experimental
session, you would be asked to complete a short personality questionnaire. At the experimental session, you would be seated in front of a computer screen and asked to take part in one or two
calibration tests so that the Eyetracker equipment can accurately measure your eye movements during the experiment itself (this is non-invasive and undertaken by simply watching some images appear
on the screen in front of you).
When the experiment begins, you would take part in some practice trials prior to the start of the experimental session. During each trial, you would be asked to watch the screen as a series of seven
randomly ordered emotional faces are presented to you. You do not have to do anything other than watch the screen (the eye tracker will be monitoring the way your eyes are processing each face).
The faces that you will see reflect the emotions happy, neutral, sad, fear, anger, disgust and surprise. After you have seen all seven faces, you would be asked to make a choice as to which of the seven faces will be selected by the computer and appear in the future. You will do this by making a button press on the computer keypad.
Following your choice, you will receive feedback on whether you were correct or not (and see the face that was selected by the computer). You will take part in 35 "trials", after which the computer will give you feedback on your overall ESP performance.
This study takes around 30 minutes to complete!
Please contact Christine Simmonds-Moore on simmonc@hope.ac.uk or (0151) 291 2158 if you would like to take part in this study.
Aesthetic Preference, Subliminal Perception and ESP Study
Researchers at Liverpool Hope University in the UK are studying processes that affect people’s preferences for visual images. In total, the study takes approximately 90 minutes to complete. Prior to
coming to the scheduled experimental session, you will complete a personal information questionnaire which includes a short battery of questions about your experiences and attitudes. This part of the
experiment will take approximately 30 minutes to complete.
At the experimental session, the first part of your participation will be administered by a computer. The second part of your participation will consist of the completion of the NEO-PI personality
inventory. This part of the experiment will take approximately 60 minutes to complete.
At the experimental session, you will be asked to watch the screen as pictures are flashed very briefly, and then asked to rate how well you like a series of pictures.
The pictures range from marginally negative to pleasant, and none contain explicit sexual content. The marginally negative pictures are no different to being exposed to images that one might see on
the television (e.g., in the news).
By signing up and coming to the laboratory at the appointed time, you are only giving your initial consent to participate in the study. You are free to withdraw your participation at any time without
any penalty – even after the study has begun.
The hypotheses of the study will be fully explained to you as soon as you complete your session. The identity of your data will be kept confidential, and only group results will be reported.
To thank you for your time, and to cover refreshment expenses we will pay each participant £5.00.
Please contact Christine Simmonds-Moore at simmonc@hope.ac.uk or (0151) 291 2158 if you would like to take part in this study.
The Parapsychological Association has recently compiled and presented a list of education opportunities in parapsychology around the globe. Here you will find everything from online courses to PhD
programs mentored by professionals in the field. The PDF is available here.
French philosophy student Louis Sagnières recently completed Dr. Caroline Watt's online course, Introduction to Parapsychology. Please welcome Louis as he shares his experiences as a student of the
course in his first guest post for Public Parapsychology.
In September, I started Dr. Caroline Watt’s new online parapsychology course offered by the University of Edinburgh. I already have a bit of non-academic training in parapsychology. I’ve been a
member of the student group of L’Institut Métapsychique International (IMI) in Paris for four years now, so I’ve had time to learn a bit about the field. But this course was a first for me, and I
really enjoyed it. I would definitely recommend it to anyone with an interest in parapsychology, whether they have some previous knowledge of the field or none.
Content of the class
The structure of the course was quite simple. Each week we were assigned one or two articles to read and some chapters of the Introduction to Parapsychology book by Harvey Irwin and Caroline Watt.
And there were interviews by Dr. Watt of some parapsychology figures (pro-psi or skeptics). Everything but the book was downloadable through the site (WebCT of University of Edinburgh). Each week
focused on one theme: "Psi in the laboratory", "Unconscious Psi", etc. The interviews are organized to give the "expert of the week" the opportunity to discuss the week's topic but also broader issues in parapsychology, which is a really good thing. And the mix of parapsychologists and skeptics is also a plus. Students are thus shown the whole picture instead of just the positive side of the psi debate.
Whereas the interviews are fun and easy to listen to, the readings required a lot more work. The book chapters were quite clear, but they were sometimes too quick on certain subjects. The articles,
on the other hand, were sometimes really difficult. I wondered whether everyone, especially those with no scientific background, was following the points made in the articles. I don't think that technical articles are unnecessary in that kind of class, on the contrary, but maybe they could have been better introduced. The interviews did sometimes introduce the material, when the expert of the week was the author of the article. Overall, the content of the class is great, but it may be hard to follow for those with no previous background in parapsychology.
Class setup
The whole class is online; everything takes place on the website of the University of Edinburgh. And this was, for me, a big disappointment. Not that I dislike online classes, I've experienced a few and they are generally good, but an online course website needs to be easy to use, and that wasn't really the case. WebCT manages the environment of the site, and it really wasn't great at all. Things
are sometimes a bit hard to find, and the whole site is slow. Instead of using just one window and some tabs, every time you open something you get a popup window. Then you end up with three or four
windows at the same time and you get lost. But I may be a bit picky since the after-class survey shows most students found the site good or very good. Actually, it appears that I was the only student
that had a bad experience with the website.
Each week, one student gets to write a short essay (500 words) on the week's topic, which is used to start up conversation. The forum used for this wasn't really easy to use, and I think it might have inhibited participation. The window in which one was supposed to write the essay was so small it could barely fit three words on a line. Reading wasn't easy either. Nevertheless, people did participate and the conversations were enjoyable. Dr. Watt often intervened to give relevant information, to answer student questions, or to ask questions to bring better focus to the conversation. Her participation gave the group feedback on their discussions and kept them from being left in the dark.
Class population
For the class to be manageable, groups of ten were created. This enabled people to get to know each other quickly, and each participant had their own weekly essay assigned. It also made conversation easy to follow. Class discussions were nice, but they depended a lot on the group dynamic. On average, people participated once or twice per discussion. Dr. Watt told me that with the new group there were already three times as many messages as there had been with the previous group. So participation may depend a lot on the group dynamic.
People attending the class came from different backgrounds. Most of them had never taken a parapsychology class before, but they all had some knowledge of the field, some through reading and others because they were psychics. I don't recall anyone being a skeptic, although the class survey shows that some had actually been skeptics even though they weren't really vocal about it.
My overall impression is a good one. The class material was good, sometimes a bit hard, but always appropriate. The few lows about the website are surely things that can be fixed over time. This class is a good thing, and definitely somewhere to start for anyone interested in parapsychology. I would not suggest it for people who already have a good background in the field, because it is only an introductory class, but it's a really good one.
Louis Sagnières
Louis is a PhD student in philosophy. He specializes in political philosophy and the impact that the Internet has on society. He has been a member of L’Institut Métapsychique International (IMI)
student group for four years, and did his master's thesis on the impact of parapsychology on philosophy (the text can be found in French here).
Public Parapsychology welcomes guest reviewer, Rosemarie Pilkington in her first contribution to the site. Below is her review of Soul Shift: Finding Where the Dead Go
Mark Ireland is the son of the gifted psychic/medium Dr. Richard Ireland, who amazed and entertained thousands in churches, halls and on television with his prodigious gifts from the 1960s through the 1980s. Although he was an entertainer, Dr. Ireland was also a minister who tried through his psychic demonstrations to spread the message, "there is no death and there are no dead."
Mark, although he learned much from his father, and, I'm sure, absorbed even more of his belief than he knew, didn't discover his own inherited abilities until he had a premonition about the death of
his son, Brandon. This tragic event led him to try to make contact with his son's spirit, and in so doing, Mark became immersed in the world of mental mediumship. Soul Shift: Finding Where the Dead Go is the record of that journey.
As with many others who have lost loved ones, Ireland embarked on a quest for the meaning of life and death. The unexpected demise of his son, and perhaps even more his precognitive sensing of the
impending tragedy, changed his world view. He became more spiritual and desirous of contributing to the universe by developing his own latent powers.
Those who believe in survival after death will find much in Ireland’s interpretation of the phenomena he has experienced to support their belief. Although he says at one point that he still has
doubts and expects readers to form their own conclusions (p. 173), his narrative is designed to convince us that human personality continues after death. In Soul Shift’s 200 pages, he spends merely
half of one paragraph in a cursory nod to any other view (p. 147).
Ireland gives short shrift to those who contend that the information provided by psychics/mediums may be attributed to their own psychic abilities rather than communications from the dead. He
dismisses this theory by stating, “super psi is a very elaborate concept, which appears nearly impossible to test” (p. 147). One might say the same, and many have, about the spirit hypothesis of
course. Neither theory has ever been proven. Serious scholars and experimenters in psychical research have argued both the spirit and psi hypotheses for more than a century and are still no closer to
agreement than they were when Charles Richet and Oliver Lodge argued each side in the 1920s.
By the way, I dislike the term “super-psi.” Psi is super. No one knows the range or limits of psychic ability or indeed if there are any limits.
Ireland's father could read notes while completely blindfolded. He telepathically picked up names and gave accurate clairvoyant and precognitive information to strangers. (Films from some of his TV appearances may be found on YouTube.) If he could pick up names of living friends and relatives, tell when babies would be born and what their sex was, or what moves or business ventures would
profit the person he was ‘reading’, Ireland’s father could just as easily pick up information about their dead loved ones by using his psychic powers. As Richet would say, “there is no reason to
suppose the intervention of the soul of a deceased person.” Because we don’t yet understand the mechanism of psi, how it works, and to what extent, we cannot assume that the information given by
psychics/mediums is obtained from beyond.
I can understand Mr. Ireland’s need to believe his son is still with him and it’s also much simpler to accept at face value that the messages we receive are indeed from our loved ones and that we
will meet them again some day. I would like to think he is right. I too have lost a child and it is a comforting thought.
Having said that, whether or not one subscribes to the survival theory, there is much to ponder in this very readable work. Dr. Ireland's brother was also psychically gifted, as are the author and his
surviving son, which demonstrates that psychic talent may be inherited. There is also evidence that Brandon, whose untimely death prompted his father’s quest, was a spiritual and probably psychically
talented person as well. But I found especially interesting the prodigious talent of Dr. Richard Ireland. His story is alone worth the price of the book and should be of interest to anyone learning
about psychic ability.
Rosemarie Pilkington, Ph.D.
Rosemarie is a writer, musician, and educator who holds a Ph.D. in Psychology from Saybrook Graduate Institute in San Francisco. She is an associate member of the Parapsychological Association. In
addition to writing many articles and book reviews on psychic phenomena, her latest book is The Spirit of Dr. Bindelof: The Enigma of Seance Phenomena | {"url":"http://publicparapsychology.blogspot.com/2009_01_01_archive.html","timestamp":"2014-04-18T08:02:35Z","content_type":null,"content_length":"172839","record_id":"<urn:uuid:83f2a914-1399-49fe-94bb-dc229bdb9166>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00418-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
The Period of Lunar Tides
The period of tides is not exactly one day but a little bit longer, because the Moon orbits the Earth with a different period than the Earth rotates around its axis. We usually think that "the high
tide is coming," but for an observer who is not standing on the Earth (as in the view of this Demonstration), observers on Earth are actually going into the high tide.
Suppose that the whole Earth is covered by water, and the Moon orbits the Earth above the equator.
The period of the tides can be determined as the solution of the equation
1/T = 1/T_E − 1/T_M,
where T_E is the Earth's sidereal rotation period (around its axis), which is 23 h 56 min, and T_M is the Moon's revolution period around the Earth (the sidereal month), which is 27 d 7 h 43 min. The solution of this equation is T = 24 h 50 min, and because there are two water bulges, the high tide or low tide occurs every 12 h 25 min (the so-called semidiurnal tide), and the water level changes every 6 h 12 min.
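As a quick check of this arithmetic, here is a short Python computation (a worked example only; the names T_E and T_M follow the equation above):

# Sidereal periods, converted to hours
T_E = 23 + 56 / 60           # Earth's rotation: 23 h 56 min
T_M = 27 * 24 + 7 + 43 / 60  # Moon's revolution: 27 d 7 h 43 min

# Solve 1/T = 1/T_E - 1/T_M for the lunar (tidal) day T
T = 1 / (1 / T_E - 1 / T_M)

hours, minutes = int(T), round((T - int(T)) * 60)
print("lunar day: %d h %d min" % (hours, minutes))  # about 24 h 50 min
print("semidiurnal tide: every %.2f h" % (T / 2))   # about 12 h 25 min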
Because of the Earth's continents, the high tides do not occur when the Moon is right above us (or right "under" us). The Sun also has an influence on the tides, the solar tidal force being
approximately 2.2 times smaller than the lunar. When we take into account all these effects (and some others not mentioned here), the high tide is slightly ahead of the connecting line between the centers
of the Earth and the Moon (in the animation the angle is shown as 10° to make this fact obvious). | {"url":"http://www.demonstrations.wolfram.com/ThePeriodOfLunarTides/","timestamp":"2014-04-17T06:44:57Z","content_type":null,"content_length":"42417","record_id":"<urn:uuid:cb525af5-0af4-4ef3-be35-090fa41b816f>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00369-ip-10-147-4-33.ec2.internal.warc.gz"} |
Anyone who has stayed in math classes long enough to reach calculus quickly comes to believe that calculus is more advanced and complex than arithmetic. And while that may be true for the most
intuitive aspects of arithmetic that we learn in grade school, the seemingly innocent discipline quickly becomes more mysterious as one advances into it.
Consider the most basic operation in calculus, the derivative. The derivative of a function is said to describe the instantaneous rate of change of the function (how fast it is going up or down in any given direction), or alternatively the slope of the function at any given point. However, there is a related operation on integers called the arithmetic derivative, defined as follows:
1. p’ = 1 for any prime number p
2. (ab)’ = a’b + ab’ for any a,b ∈ ℕ
3. 1′ = 0
4. 0′ = 0
While there is an intuitive and even “visual” nature to the calculus derivative, the arithmetic derivative seems more abstract and opaque. Additionally, the former is about continuous functions,
while the latter is about discrete entities such as integers or rational numbers. The main thing the two concepts have in common is that they obey the Leibniz Rule governing the derivatives of products, as described on line 2 above.
So can the arithmetic derivative tell us anything useful about integers? It is intimately tied to prime numbers and prime factorization, so it is in that sense an additional tool for examining
fundamental properties of numbers as they relate to primes and patterns of primes. The article Deriving the Structure of Numbers quotes Linda Westrick, who has studied the arithmetic derivative and
says that it “provides a different context from which to view many topics of number theory, especially those concerning prime numbers. The complex patterns which arise from its simple definition make
it interesting and worthy of study.” To see such patterns, one can apply the derivative repeatedly, n’, (n’)’, etc., just as one would in calculus, to chart the variations even among the first few
natural numbers.
│ n │1│2│3│4│5│6│7│8 │9│10│11│12│13│14│15│16 │17│18│
│n‘ │0│1│1│4│1│5│1│12│6│7 │1 │16│1 │9 │8 │32 │1 │21│
│n" │0│0│0│4│0│1│0│16│5│1 │0 │32│0 │6 │12│80 │0 │10│
│n"’│0│0│0│4│0│0│0│32│1│0 │0 │80│0 │5 │16│176│0 │7 │
Indeed the interesting patterns include the fact that some numbers quickly go to zero as one repeatedly applies the arithmetic derivative, as in the case of 6′ = 5, 6″ = 1, 6‴ = 0. Other numbers seem to increase without bound. In the above chart, for example, powers of 2 appear to grow quite quickly. Yet others bounce around unpredictably somewhere in between. In many ways, this reminds me a bit of the Collatz Conjecture, which we have discussed on CatSynth in the past.
Perhaps the most intriguing property of the arithmetic derivative is its relation to twin primes, pairs of prime numbers whose difference is 2, like 11 and 13 or 29 and 31. It is conjectured that
there are infinitely many such pairs of twin primes, but no one has ever proven this. However, it turns out that twin primes are related to the second derivative of a number, n″. Specifically, if there are infinitely many numbers n for which n″ = 1, then there are infinitely many twin primes. As described in this paper by Victor Ufnarovski, one can derive this from the following theorem:
Let p be a prime and a = p+2. Then 2p is a solution for the equation n’ = a
The proof is relatively straightforward:
(2p)′ = 2′p + 2p′ = p + 2
So if (2p)′ = p + 2 and p + 2 is also prime, as would be the case for twin primes, then (2p)″ = (p + 2)′ = 1.
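The theorem and its consequence are easy to verify numerically; here is a small sketch (again assuming sympy is available, and reusing the same closed-form derivative):

from fractions import Fraction
from sympy import factorint, isprime

def d(n):
    # Arithmetic derivative via n' = n * sum(e/p).
    if n <= 1:
        return 0
    return int(n * sum(Fraction(e, p) for p, e in factorint(n).items()))

# For every twin-prime pair (p, p + 2) below 100, confirm that
# (2p)' = p + 2 and therefore (2p)'' = (p + 2)' = 1.
for p in range(2, 100):
    if isprime(p) and isprime(p + 2):
        assert d(2 * p) == p + 2 and d(d(2 * p)) == 1
        print("p = %2d: (2p)' = %d, (2p)'' = %d" % (p, d(2 * p), d(d(2 * p))))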
Unfortunately, this does not answer the question of whether there are infinitely many such pairs. So the famous problem remains open.
arithmetic derivative
number theory
prime numbers
twin primes | {"url":"http://www.catsynth.com/tag/arithmetic-derivative/","timestamp":"2014-04-19T05:36:46Z","content_type":null,"content_length":"48055","record_id":"<urn:uuid:1f6eae84-9409-4666-9e6f-3a56f923a210>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00400-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sorting a JavaScript array using array.sort()
Last Updated: June 1st, 2010
Sorting arrays in JavaScript is done via the method array.sort(), a method that's probably as much misunderstood as it is underestimated. While calling sort() by itself simply sorts the array in
lexicographical (aka alphabetical) order, the sky's really the limit once you go beyond the surface.
Sorting an array lexicographically (aka "alphabetically" or in dictionary order) is easy to do. Just call array.sort() without any parameters passed in:
//Sort alphabetically and ascending:
var myarray=["Bob", "Bully", "Amy"]
myarray.sort() //Array now becomes ["Amy", "Bob", "Bully"]
Notice that the order is ascending. To make it descending instead, the simplest way is to enlist the help of another Array method in combination, array.reverse():
//Sort alphabetically and descending:
var myarray=["Bob", "Bully", "Amy"]
myarray.reverse() //Array now becomes ["Bully", "Bob", "Amy"]
Now, before you start feeling comfortable, consider what happens if we call array.sort() on an array consisting of numbers:
var myarray=[7, 40, 300]
myarray.sort() //Array now becomes [300,40,7]
Although 7 is numerically smaller than 40 or 300, lexicographically, it is larger, so 7 appears at the very right of the sorted array. Remember, by default array.sort() sorts its elements in
lexicographical order.
And there you have it with array.sort() in terms of its basic usage. But there's a lot more to this method than meets the eye. Array.sort() accepts an optional parameter in the form of a function
reference that pretty much lets you sort an array based on any custom criteria, such as sorting an array numerically or shuffling it (randomizing the order of its elements).
As touched on already, array.sort() accepts an optional parameter in the form of a function reference (let's call it sortfunction). The format of this function looks like this:
function sortfunction(a, b){
//Compare "a" and "b" in some fashion, and return -1, 0, or 1
When such a function is passed into array.sort(), the array elements are sorted based on the relationship between each pair of elements "a" and "b" and the function's return value. The three possible
return numbers are: <0 (less than 0), 0, or >0 (greater than 0):
• Less than 0: Sort "a" to be a lower index than "b"
• Zero: "a" and "b" should be considered equal, and no sorting performed.
• Greater than 0: Sort "b" to be a lower index than "a".
To sort an array numerically and ascending for example, the body of your function would look like this:
function sortfunction(a, b){
return (a - b) //causes an array to be sorted numerically and ascending
More on this below.
To sort an array in numerical order, simply pass a custom sortfunction into array.sort() that returns the difference between "a" and "b", the two parameters indirectly/ automatically fed into the
//Sort numerically and ascending:
var myarray=[25, 8, 7, 41]
myarray.sort(function(a,b){return a - b}) //Array now becomes [7, 8, 25, 41]
This works the way it does because whenever "a" is less than "b", a negative value is returned, which results in the smaller elements always appearing to the left of the larger ones, in other words,
Sort an array numerically but descending isn't much different, and just requires reversing the two operands "a" and "b":
//Sort numerically and descending:
var myarray=[25, 8, 7, 41]
myarray.sort(function(a,b){return b - a}) //Array now becomes [41, 25, 8, 7]
To randomize the order of the elements within an array, what we need is the body of our sortfunction to return a number that is randomly <0, 0, or >0, irrespective to the relationship between "a" and
"b". The below will do the trick:
//Randomize the order of the array:
var myarray=[25, 8, "George", "John"]
myarray.sort(function() {return 0.5 - Math.random()}) //Array elements now scrambled
As you can see, there is a lot more to array.sort() than many may think. In fact, you can even sort arrays that contain not just primitive values, but objects with properties. Let's see that.
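(The original page is cut off at this point; the following is a minimal sketch of what such a sort looks like. The array and the "age" property are illustrative, not from the original tutorial.)

//Sort an array of objects by a numeric property, ascending:
var people=[{name:"Bob", age:40}, {name:"Amy", age:25}, {name:"Bully", age:31}]
people.sort(function(a, b){return a.age - b.age})
//people is now ordered Amy (25), Bully (31), Bob (40)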
[FOM] From theorems of infinity to axioms of infinity
Monroe Eskew meskew at math.uci.edu
Tue Mar 19 21:18:29 EDT 2013
On Mar 17, 2013, at 9:51 AM, Nik Weaver <nweaver at math.wustl.edu> wrote:
> You seem to be confusing working within ZFC and reasoning about ZFC.
> You cite relative consistency results *which are theorems of Peano
> arithmetic* as evidence that *we should use set theory*.
This goes to the arguments I made in previous threads. As a practical matter and historical fact, it is only by working within ZFC (plus large cardinals) that we come to know meta-mathematical facts about ZFC, which are ultimately theorems of PA. In the course of proving a result such as Con(ZFC+ measurable cardinal) implies Con(ZFC + total probability measure in R), we do for a long time actually "use set theory" as a collection of working hypotheses. Now you can bracket all this work with prefixes and suffixes so that you boil down the content you find metaphysically acceptable, but the real work is done within set theory.
> But how on earth can the fact that various questions *cannot* be
> answered within standard set theory, be a reason for using set theory?
Because it is only through the methods of set theory that these things are known. And by this I mean nontrivial stuff-- getting deep into forcing, infinitary combinatorics, inner models, large cardinals. Using set theory is our only hope for understanding the metamathematics of set theory.
> This phenomenon, where seemingly fundamental questions like the
> continuum hypothesis are independent of ZFC, should rather raise
> suspicions that something is wrong with set theory as a foundational
> system. All the more so when these seemingly fundamental questions
> turn out to have little or no relevance to mainstream mathematical
> concerns.
If you have some idea for replacing set theory with some equally powerful systems for attacking relative consistency problems that also fits with your metaphysical beliefs, please do tell. These days, independence results are the main product manufactured by set theorists. No one else has tools that can manufacture the same products. It is, by objective standards, a successful area of mathematics, not one worthy of "suspicion that something is wrong with it."
It is quite a stretch to say these results are not relevant to mainstream mathematics. Many classical questions of analysis and topology have been answered or shown independent by set theory. Consider the question of total probability measure, or the Borel conjecture, or various questions about the relation between measure and category having to do with "cardinal characteristics of the continuum." The Whitehead problem. The recent work on C^* algebras and the influence of forcing axioms. Classification of ergodic systems. Many of these questions came from mainstream mathematics. You'd have to be a revisionist to say set theory is irrelevant.
The Density of Coins
Printable view
Students will conduct a science experiment where they will learn that the different denominations have characteristic densities that can be used to help identify the type of coin being used.
Students will learn that the different denominations have characteristic densities that can be used to help identify the type of coin being used.
Major Subject Area Connections
Minor/supporting Subject Area Connections
• Sixth grade
• Seventh grade
• Eighth grade
Class Time
Sessions: One
Session Length: 90 minutes
Total Length: 46-90 minutes
Terms and Concepts
• Cent
• Density
• Dime
• Mass
• Nickel
• Penny
• Volume
Materials

• Pennies
• Nickels
• Dimes
• Water
• Graduated cylinders
• Balances
• Calculators
• Pencils
• Paper
1. Divide the students into pairs. Distribute five of each coin (pennies, nickels, and dimes) to each pair.
2. Have each pair of students create a data table where they will include columns for coin type, mass, volume and density.
3. Have each group use a balance to measure and record the mass of each set of five coins in their data table.
4. Have the students fill a graduated cylinder to the 20 mL point.
5. Have the students drop one coin group (pennies, nickels, or dimes) into the water and record the height that the water rises to, then subtract the initial 20 mL from the new height to find the
volume for the coin group and record that volume in their data table.
6. Have the students perform the same measurement for the remaining coin groups separately and record the results.
7. Using a calculator, have the students divide the mass of each set by the volume of that coin group. This calculation will be representative of the density of that coin group. Have them write
these densities in the data table. Have the students share their answers. Note that the mass, volume and therefore density of circulating coins may vary slightly due to wear, but these
discrepancies should be minor.
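Worked example for step 7 (the volume reading here is assumed for illustration; the mass of a United States nickel is specified at 5.000 grams): five nickels have a mass of about 25.0 grams. If the water level rises from 20 mL to about 22.8 mL, the volume of the coin group is 2.8 mL, and density = mass ÷ volume = 25.0 g ÷ 2.8 mL ≈ 8.9 g/mL, consistent with the nickel's copper-nickel alloy.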
• Give students data about “unknown” coins. Have them use the knowledge the students have developed to identify what type of coin the information relates to.
• Have students find the density of coins of larger denominations (half-dollar, dollar) to demonstrate their understanding of the process of finding a coin’s density.
Use the lab notebooks to determine whether the students have met the lesson objectives.
Discipline: Math
Domain: 6.SP Statistics and Probability
Grade(s): Grade 6
Cluster: Summarize and describe distributions
• 6.SP.4. Display numerical data in plots on a number line, including dot plots, histograms, and box plots.
• 6.SP.5. Summarize numerical data sets in relation to their context, such as by:
□ Reporting the number of observations.
□ Describing the nature of the attribute under investigation, including how it was measured and its units of measurement.
□ Giving quantitative measures of center (median and/or mean) and variability (interquartile range and/or mean absolute deviation), as well as describing any overall pattern and any striking
deviations from the overall pattern with reference to the context in which the data were gathered.
□ Relating the choice of measures of center and variability to the shape of the data distribution and the context in which the data were gathered.
Discipline: Mathematics
Domain: 6-8 Number and Operations
Cluster: Compute fluently and make reasonable estimates.
Grade(s): Grades 6–8
In grades 6–8 all students should
• select appropriate methods and tools for computing with fractions and decimals from among mental computation, estimation, calculators or computers, and paper and pencil, depending on the
situation, and apply the selected methods;
• develop and analyze algorithms for computing with fractions, decimals, and integers and develop fluency in their use;
• develop and use strategies to estimate the results of rational-number computations and judge the reasonableness of the results; and
• develop, analyze, and explain methods for solving problems involving proportions, such as scaling and finding equivalent ratios.
Discipline: Mathematics
Domain: 6-8 Number and Operations
Cluster: Understand meanings of operations and how they relate to one another.
Grade(s): Grades 6–8
In grades 6–8 all students should
• understand the meaning and effects of arithmetic operations with fractions, decimals, and integers;
• use the associative and commutative properties of addition and multiplication and the distributive property of multiplication over addition to simplify computations with integers, fractions, and
decimals; and
• understand and use the inverse relationships of addition and subtraction, multiplication and division, and squaring and finding square roots to simplify computations and solve problems.
Discipline: Mathematics
Domain: 6-8 Number and Operations
Cluster: Understand numbers, ways of representing numbers, relationships among numbers, and number systems.
Grade(s): Grades 6–8
In grades 6–8 all students should
• work flexibly with fractions, decimals, and percents to solve problems;
• compare and order fractions, decimals, and percents efficiently and find their approximate locations on a number line;
• develop meaning for percents greater than 100 and less than 1;
• understand and use ratios and proportions to represent quantitative relationships;
• develop an understanding of large numbers and recognize and appropriately use exponential, scientific, and calculator notation;
• use factors, multiples, prime factorization, and relatively prime numbers to solve problems; and
• develop meaning for integers and represent and compare quantities with them.
Discipline: Mathematics
Domain: 6-8 Measurement
Cluster: Apply appropriate techniques, tools, and formulas to determine measurements.
Grade(s): Grades 6–8
In grades 6–8 all students should
• use common benchmarks to select appropriate methods for estimating measurements;
• select and apply techniques and tools to accurately find length, area, volume, and angle measures to appropriate levels of precision;
• develop and use formulas to determine the circumference of circles and the area of triangles, parallelograms, trapezoids, and circles and develop strategies to find the area of more-complex
• develop strategies to determine the surface area and volume of selected prisms, pyramids, and cylinders;
• solve problems involving scale factors, using ratio and proportion; and
• solve simple problems involving rates and derived measurements for such attributes as velocity and density.
Discipline: Mathematics
Domain: 6-8 Measurement
Cluster: Understand measurable attributes of objects and the units, systems, and processes of measurement.
Grade(s): Grades 6–8
In grades 6–8 all students should
• understand both metric and customary systems of measurement;
• understand relationships among units and convert from one unit to another within the same system; and
• understand, select, and use units of appropriate size and type to measure angles, perimeter, area, surface area, and volume.
Discipline: Science
Domain: 5-8 Content Standards
Cluster: Physical Science
Grade(s): Grades 6–8
• Properties and changes of properties in matter
• Motions and forces
• Transfer of energy
Discipline: Science
Domain: 5-8 Content Standards
Cluster: Science as Inquiry
Grade(s): Grades 6–8
• Ability necessary to do scientific inquiry
• Understand scientific inquiry | {"url":"http://www.usmint.gov/kids/teachers/lessonPlans/viewLP.cfm?id=65","timestamp":"2014-04-17T21:44:41Z","content_type":null,"content_length":"38456","record_id":"<urn:uuid:cf2aaa58-8434-480d-ac06-6e25607d42ad>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hastings On Hudson Statistics Tutor
...People just plug numbers in to equations and report them as if they were "statistics." The problem with this is that they lack the analysis part of the equation. Misused statistics are one of
my pet peeves so I love to help people figure out how to use them properly! Throughout the years I have tutored various students in SAT math.
11 Subjects: including statistics, Spanish, geometry, algebra 1
...Several of my students have increased their English/Writing scores on the SAT & ACT dramatically. I graduated with a Bachelor of Arts degree in Philosophy from the University of Southern
California (currently ranked 11th nationally for study in Philosophy). My cumulative GPA in the Philosophy ...
21 Subjects: including statistics, English, writing, ESL/ESOL
...It develops advanced algebra skills such as systems of equations, advanced polynomials, imaginary and complex numbers, quadratics, and concepts and includes the study of trigonometric
functions. Algebra 2 is vital for students’ success on the ACT, SAT 2 Math, and college mathematics entrance exa...
26 Subjects: including statistics, calculus, writing, GRE
...For Regents Algebra: I have a 99% pass for the Regents Algebra including students from Wyzant. I am detail oriented. I provide practice exercises from start to finish with the student,
including topics and questions that are similar to those on a Regents exam.
47 Subjects: including statistics, chemistry, reading, accounting
...I have not been a paid tutor for a while (since high school), but I have worked with students one-on-one through twice weekly office hours in all of the classes I have taught. I am also a
native Japanese speaker--I am interested in being a conversation partner if you would like to practice your spoken Japanese--and I can also teach basic reading and writing. I am also a musician.
17 Subjects: including statistics, chemistry, calculus, biology | {"url":"http://www.purplemath.com/hastings_on_hudson_statistics_tutors.php","timestamp":"2014-04-20T09:13:14Z","content_type":null,"content_length":"24640","record_id":"<urn:uuid:893670c1-2c50-4e1e-8ad6-38421550ef12>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00235-ip-10-147-4-33.ec2.internal.warc.gz"} |
Newest 'random-walk simulation' Questions
In diffusion-limited aggregation on the square lattice, one lets a particle do "random walk from infinity" until it hits the current aggregate, at which point the site occupied by the particle is ...
Consider a random walk on [0, inf) where you start at 0. With probability p = 0.5, you increase by 1. With probability (1-p) = 0.5, you decrease by 1, but not below 0. As time goes to infinity, will | {"url":"http://mathoverflow.net/questions/tagged/random-walk+simulation","timestamp":"2014-04-17T07:34:15Z","content_type":null,"content_length":"32877","record_id":"<urn:uuid:7e79481e-ab50-4b3e-88b9-2a2fd117d2a9>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00355-ip-10-147-4-33.ec2.internal.warc.gz"} |
Los Altos Hills, CA Calculus Tutor
Find a Los Altos Hills, CA Calculus Tutor
I am an excellent tutor for mathematics (from pre-algebra to advanced calculus and statistics) and economics/finance, both for high school and college students. I have a special skill for making
math notation and concepts more easily understood, helping retention and test-taking skills considerably. I am a patient, engaging tutor with an easy-going personal style.
22 Subjects: including calculus, geometry, accounting, statistics
...As I continued working with them, they kept telling me things such as "You should really consider being a teacher," "You are really good at explaining things," "Your explanation is so clear,"
or "I would have aced my calculus and physics classes in high school if my teachers had taught the way yo...
11 Subjects: including calculus, chemistry, physics, geometry
...I usually work through such books with my SAT math students, giving them supplemental information in those areas where they need extra help. I have helped several people with the math section
of the ASVAB test, but do not consider myself qualified to teach the rest of the test, certainly not the...
17 Subjects: including calculus, physics, GRE, ASVAB
...I can also teach Chinese at all levels. I am patient and kind. I care about the student's academic growth as well as their personal growth.
11 Subjects: including calculus, statistics, geometry, Chinese
...I look forward to helping students grow and mature academically. The foundation for all other math courses, I will emphasize all the basics in this course as it helps students with all future courses. If I have a specialty, it would be algebra. I have tutored many students in the past in this subject. Calculus is one of the more interesting subjects that I like to tutor.
9 Subjects: including calculus, geometry, algebra 1, algebra 2
Finding the formula for Bezier curve ratios (hull/point : point/baseline)
Given a cubic Bezier curve defined by points p₁, p₂, p₃, and p₄, a point B on that curve at some t value (where 0 ≤ t ≤ 1), a point A on the line (p₂ — p₃) at distance ratio t from p₂, and a point C
that is the intersection of the line (p₁ — p₄) and the line that goes through A and B, the ratio between distance d1 = (A — B) and d2 = (B — C) is a fixed value, regardless of the values for
coordinates p₁, p₂, p₃, and p₄.
I'd like to find the formula that expresses this ratio as a function of t (all my interactive graphing experiments suggest that, for cubic Bezier curves, this function is the same regardless of the coordinates used for the curve), but I'm having little success coming up with something satisfactory. My math skills are not sufficient...
I initially wrote up a quick data-generator using the "Processing" programming language to see if I could use that data for polynomial regression (based on the fact that the function is symmetrical
around t = 0.5, finding the expression for the interval t=0.5 to t=1), but the fact that the ratio is actually asymptotic at t = 0 and t = 1 (towards positive infinity) means that it's not a
straight-forward power function.
(note: the jsfiddle link doesn't actually log all 5000 step values; normal Processing does)
Would anyone know how to express this ratio function as a proper formula? I don't quite know how to approach this symbolically, as I'm using de Casteljau's algorithm to determine my red and green
lines; since I don't know how to symbolically express the values d1 and d2, expressing the ratio d1/d2 as a function is quite hard.
N.B.: Apologies if the tags don't fit the question. I'll take suggestions on using the right ones instead; first question on MathOverflow.
A cubic bezier defined by $p_1, p_2, p_3, p_4$ has parametric equation $$B(t) = (1-t)^3p_1 + 3(1-t)^2tp_2 + 3(1-t)t^2p_3 + t^3p_4.$$
The setup here also defines $A(t) = (1-t) p_2 + tp_3$.
The way $C$ is defined, there are some real $s(t)$ and $u(t)$, both possibly depending on $p_1,\ldots,p_4$ such that $C = sA + (1-s)B = up_1 + (1-u)p_4$.
So $B - C = B - sA - (1-s)B = s(B-A)$. Hence $\frac{|B - C|}{|A - B|} = |s|$.
On the other hand, we want $sA + (1-s)B - up_1 - (1-u)p_4 = 0$. That comes out to
$$((1-s)(1-t)^3 - u)p_1 + (s(1-t) + 3(1-s)t(1-t)^2)p_2 + (st + 3(1-s)t^2(1-t))p_3 + ((1-s)t^3 - (1-u))p_4 = 0.$$
Set $$s = \frac{t^3+(1-t)^3-1}{t^3 + (1-t)^3}$$ and $$u = \frac{(1-t)^3}{t^3 + (1-t)^3}.$$
Then the coefficents of $p_1,\ldots,p_4$ in the above expression become identically 0. Note that the denominators of these expressions are never 0 for $t \in [0,1]$, so the
divisions are ok.
So your ratio is given by the $|s|$ above (or its reciprocal, depending on how you're taking the ratio).
you are a hero, thanks! – Mike 'Pomax' Kamermans Feb 19 '13 at 18:34
Just in case someone finds this question using google at some point, and is also curious about the solution for the quadratic case, its solution is similar:
$A(t) = p_2$
$B(t) = (1-t)^2p_1 + 2t(1-t)p_2 + t^2p_3$
$C(t) = sA(t) + (1-s)B(t) = up_1 + (1-u)p_3$
This requires solving:
$$sp_2 + (1-s)((1-t)^2p_1 + 2t(1-t)p_2 + t^2p_3) - up_1 - (1-u)p_3 = 0$$
which, expressed in terms of the control points, is:
$$((1-s)(1-t)^2 - u)p_1 + (s+2t(1-s)(1-t))p_2 + ((1-s)t^2 + u - 1)p_3 = 0$$
If we want these coefficients to become identically zero, we can determine s(t):
$$(1-s)(1-t)^2 = u = -((1-s)t^2 - 1)$$
which means solving:
$$(1-s)(1-t)^2 + ((1-s)t^2 - 1) = 0$$
which gives us the following expressions for s and u (after substituting s into either of the identities for u and solving):
$$s(t) = \frac{2t^2 - 2t}{2t^2 - 2t + 1}$$
$$u(t) = \frac{(t-1)^2}{2t^2 - 2t + 1}$$
(Also note that there are no solutions for curves of order 4 and higher; unlike for quadratic and cubic curves, the ratio between the two distances is not a fixed value for higher
order curves, unfortunately)
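For anyone who wants to check the cubic-case formula numerically, here is a small sketch (Python/NumPy; not part of the original thread, and the random control points and the parameter t = 0.3 are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
p1, p2, p3, p4 = rng.random((4, 2)) * 10      # arbitrary control points
t = 0.3

B = (1-t)**3*p1 + 3*(1-t)**2*t*p2 + 3*(1-t)*t**2*p3 + t**3*p4
A = (1-t)*p2 + t*p3

# C = intersection of line(p1, p4) with line(A, B):
# solve p1 + a*(p4 - p1) = A + b*(B - A) for (a, b); generic position assumed
M = np.column_stack([p4 - p1, A - B])
a, b = np.linalg.solve(M, A - p1)
C = p1 + a*(p4 - p1)

s = (t**3 + (1-t)**3 - 1) / (t**3 + (1-t)**3)
print(np.linalg.norm(B - C) / np.linalg.norm(A - B), abs(s))   # the two agree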
Generalized incomplete gamma function: Integration
Indefinite integration
Involving only one direct function
Involving one direct function and elementary functions
Involving power function
Involving exponential function
Involving functions of the direct function and elementary functions
Involving elementary functions of the direct function and elementary functions
Involving powers of the direct function and exponential function
Involving only one direct function with respect to z[1]
Involving one direct function and elementary functions with respect to z[1]
Involving power function
Involving only one direct function with respect to a | {"url":"http://functions.wolfram.com/GammaBetaErf/Gamma3/21/ShowAll.html","timestamp":"2014-04-20T05:53:08Z","content_type":null,"content_length":"39859","record_id":"<urn:uuid:5213faeb-97c7-4781-926b-98a37912d6e6>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
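As a concrete instance of the first category above (derived here from the definition Γ(a, z[1], z[2]) = Γ(a, z[1]) - Γ(a, z[2]); this particular formula is reconstructed, not copied from the site):

$$\int \Gamma(a,z_1,z_2)\,dz_1 = z_1\,\Gamma(a,z_1,z_2) - \Gamma(a+1,z_1,z_2) + C,$$

which can be checked by differentiating, using $\partial_{z_1}\Gamma(a,z_1,z_2) = -z_1^{a-1}e^{-z_1}$.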
Forced Self-Sustained Oscillator
I'm reading about self-sustained oscillators under the influence of harmonic forcing. The topic is introduced by studying the system in a new reference frame: one which rotates in the same direction, with the frequency of the external force.
In this new reference frame, the oscillating force is represented by a constant vector, acting at some angle (see pg 48).
At first glance, I thought the concept was trivial, but now I'm having a very hard time understanding why this would be so. Can anyone shed some light on this topic?
ps. consider this...
--A stationary point in this reference frame is one which is oscillating in the original reference frame, with the same frequency as the force.
--The force on such a point is then said to be constant.
--However, to my understanding, the force in the original reference frame varies with time, independent of the object's position!
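One common way to see it (a sketch of the standard complex-phase-plane argument, not from the thread itself): represent the state of the oscillator as a point in the plane,

$$z(t) = x(t) + \frac{i\,\dot x(t)}{\omega},$$

so that a harmonic force $\varepsilon\cos\omega t$ corresponds to the rotating vector $\varepsilon e^{i\omega t}$. Passing to the frame rotating at $\omega$ amounts to multiplying everything by $e^{-i\omega t}$:

$$A(t) = z(t)\,e^{-i\omega t},\qquad \varepsilon e^{i\omega t} \mapsto \varepsilon.$$

So yes: in the original frame the force varies with time regardless of the object's position, but its components along the rotating axes never change; it keeps a fixed angle relative to those axes. A point that is stationary in the rotating frame (constant $A$) oscillates at exactly $\omega$ in the original frame, and relative to that co-rotating viewpoint the force is the constant vector $\varepsilon$.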
Math Forum Discussions
Topic: How function gradient work?
Replies: 4 Last Post: Jan 18, 2011 1:02 AM
Re: How function gradient work?
Posted: Jan 18, 2011 1:02 AM
Matlab gradient function:

It uses a finite-difference approach: a forward difference at the first data point, a backward difference at the last, and central differences for the data in between. For X = [0 1 2] and F = [3 10 1]:

X    F     dF/dX
0    3     (10 - 3)/(1 - 0) =  7
1    10    (1 - 3)/(2 - 0)  = -1
2    1     (1 - 10)/(2 - 1) = -9
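A quick check (a sketch in Python/NumPy, whose np.gradient uses the same edge scheme by default; not from the original post):

import numpy as np
# one-sided differences at the edges, central differences inside
print(np.gradient(np.array([3.0, 10.0, 1.0])))   # -> [ 7. -1. -9.]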
Brian Beckman: On Analog Computing, Beckman History and Life in the Universe Redux
• Posted: Mar 11, 2010 at 1:04 PM
• By: Charles
• 34,595 Views
Follow the Discussion
• Awesome video
Maybe it has been far too long since Brian was on channel 9... but he sure makes it worth waiting for
• Agreed. Brian is a hero to many of us. Thanks, Brian!
• The analog computer videos are amazing. It was so interesting to see and hear them describe finding the right shape of the surface, to solve a particular problem.
Someone told me that QM and GR were incompatible mathematically, but I didn't understand how that could be (after all, isn't all mathematics, at the root, based on the same underlying axioms and
theorems). He thought I wouldn't understand if he tried to explain; but I think I get the idea. Thanks Brian.
• Brian is definitely a programming hero of mine. The dude is a modern day Feynman, and has been responsible for much pleasant head scratching followed by much reading and some little learning on
my part.
• Every time I hear Brian, he makes me feel I am just a 'carbon based life form'...... Excellent video guys and thanks Charles for these ETI videos
• Ah, great video. A cosmology related question popped up in mind while I was watching this... Maybe Brian or somebody else here can answer it. If Time is not a fixed variable, but it can be
distorted, how can we know if the Universe is 13.7 billion years old... Or even, does it make any sense to talk about its age? If Time is "expanding" or is being distorted, maybe the Universe was
born just before the moment we call Now... (Sorry, if it's a stupid question.)
Something else. Brian, you talked about a truck simulator in another video some years ago, and you said something like maybe you would never see its source code, but that the basic idea behind the simulation of metal pieces was quite obvious... it's now open source. So you can check it out, if you want.
• Brian and Erik really have an interesting past. Thanks for sharing.
• Hi Akopacsi -- The rough idea on time is this: Consider a path -- a 1-dimensional curve -- passing through points in space-time. Every point along that curve has a particular set of 4
coordinates: 3 space coordinates and 1 time coordinate, for any reasonable choice of coordinate systems. Now, parameterize that curve by the incremental distance along the curve: as you move from
one point to another, you go a certain "distance" in 4-space, a distance measured by the "metric tensor," which is a generalization of the Pythagorean or Euclidean distance. Locally, that
incremental distance is sqrt(dx^2 + dy^2 + dz^2 - dt^2) (notice the minus sign!). This distance measure is unique for a choice of metric tensor and is called the "proper time." It's a kind of
cosmological average of proper times over the Hubble motion of galaxies along their curves that measures the age of the Universe backwards 13 or 16 billion years. Very rough idea, but hope that
adds some clarity.
I'll take another look at "Rigs of Rods," one of my all-time favorite pieces of software!
• I was thinking about the analog fire control computer today, and protein folding and neurotransmitters, and how measurement of all electrical activity in the brain as a function of time would map
back to the dynamic surface of the molecular interaction. I haven't ever heard the idea that the brain could be an analog computer in that sense, but after seeing that fire control video it
really is making me wonder. Of course, the chemical interaction is just one part of the entire process, but often I have assumed that the "intelligence" lies in the electrical state of the
system, but that might just be an artifact ... energy transfer and not computation at all. Of course I have no idea if any of this is close.
• I think there is a bi-directional correspondence between computation and energy transfers. If you think of computation as manipulations of symbols in the lambda calculus or in the pi calculus,
then that involves clearing and storing "memory cells," usually represented as states of switches in a network. Ed Fredkin showed that you can't change states of memories without energy transfers
(and the entropy growth that goes along with them, by the second law of thermodynamics!), so it's not possible to do computations without spending energy and growing the heat in the Universe!
• Interesting. That reminds me of photosynthesis as quantum computation. (http://www.scientificamerican.com/article.cfm?id=when-it-comes-to-photosynthesis-plants-perform-quantum-computation, and
for non-quantum computation, there is another paper here http://www.minds.may.ie/~xebedee/papers/MWN2005m.pdf).
• What's your favourite ".Net language" for expressing mathematics? I remember an old video where you showed a pretty generic mathematics library written in VB.NET. I find writing truly generic
abstract libraries for .Net, at least in C# and I would guess VB.NET as well, not very.... easy.
• F# would probably be the best way to go, now.
• I have been wondering if this equation I created a while back is true, I heard of that renormalization technique which I didn't know of formally, but I used it many months ago in a thinking spree
I had to try and show how a black hole both creates and destroys itself at the same time using principles such as time stops at infinite gravity, and at infinite speed there is infinite gravity.
The black arrow shows our current "moment" where there systems are in reasonable balance, also when speed (where I suppose speed could be considered and imply movement in space) and time are
intersected for equalization.
So this picture might look odd but I used it as a way to create the little equation.... newtons equation shows when the green text as a black holes effect is arranged to be negated/renormalized.
So Brian may I get your professional opinion on if anything jumps out at you other than the manually drawn curves, and the lack of Béziers in OneNote?
• I used to find maths interesting. At some level I liked the mechanical process of putting together simple building blocks to get more interesting results. However at some point I lost this
interest and started to find maths pretty boring, as there was a big gap between abstract maths and interesting effects..... and therefore feedback! The result of this was that I focused more on
application of maths (ee-eng) and then I moved into computing.
In computing I completely regained my joy of building bigger systems out of smaller building blocks, and solving things from first principles. However despite the deeply rooted history of
computing coming from maths.... it's almost like the maths is invisible at times. The processes are very similar, but I sometimes wonder how all the maths in computing has been hidden so
successfully by the programming languages we use. In many cases almost by design!(like Cobol or Basic)
It's a shame, really, that making programming accessible seemed incompatible with moving programming closer to maths. I wonder whether that tells us more about programming, or about the state of maths
education... or the fact that ultimately so many people find maths boring even though they are good at it.
Of course building big systems these days is more like a managing a big building construction site, hence the term architect, so even at that level there's a conceptual gap between programming
and systems.
Thanks Brian for another very interesting, and entertaining interview.
• I think it's the problems that we were given to solve, using the maths we were taught, which were boring. I used to work with my friends to solve our own problems that we'd make up, like how high
exactly did Michael Jordan have to jump to dunk a basketball from the foul line. What is the function which describes the relationship between velocity, angle of take off, height etc..., and plot
that curve. That sort of thing was fun. Repetitive problems that go over the same concept 100 times was not fun. You don't understand via repetition, you just memorize, so that's all most people
can do, and then they forget everything very soon afterwards. So I think it's because the teachers are poor at teaching mathematics, in general. Some are good of course, but most not, and the
really good mathematicians are certainly not the ones teaching in elementary and high school. Not that a good mathematician makes a good teacher, but there's too few people really good at both.
• I wonder if I need to trigger a reply by replying; I can't really ask you directly about that graph/equation I spuriously thought up.
But I would agree as well that using F# is the best way to go for almost anything, unless all you are doing is bit-fiddling like crazy.
On a side-note I have recently acquired a PlayStation 3 NOW ALLOWING ME to use the "MASS" Physics engine in the PS3 SDK which is Haskell based and actually faster than the C based physics engine
for allowing inter-core communication thus preventing socket bandwidth saturation and FINALLY I get to program a Cell Broadband engine in Haskell no less!!! Google Tech talk Here For Reference :
I get excited just thinking about 8 cores of power like this!!! Especially creating my own execution VM that changes a loop invariant of the loop making the loop "mapped" +8 per pipeline and
"reduced" by the load computation controller/tracker. Something I have planned to not be limited to the x++ increment by one problem that tends to always limit true work balancing that is making
the many-core problem so unmanageable in line-by-line assembly instruction binaries, and not just course grained groups of lambda's in functional languages.
• Yet another random question... (The question of Charles about the z axis of his garden made me less shy about my questions...
• So is Brian still watching this post??
• Yes, I'm still here, HeavensRevenge. BTW, loved the youtube on COCONUT -- very, very cool!
• Have fun, here is The Flake Equation:
• I would like to contact Brian. I knew his father Henry Beckman very well but lost contact when I went to live in Ireland.
Jessica, www.jessicaetaylor.org
need help to construct UMPT (uniformly most powerful test)
Let $Y_1,\ldots, Y_n$ be a random sample from a Poisson distribution with mean $\mu>0$ unknown. Construct the uniformly most powerful test for testing hypotheses
$H_0: \mu=\mu_0; H_1: \mu>\mu_0$
Suppose n=10, sample mean is 2.42, and $\mu_0$=1.5. Will you reject Ho at a significance level of 5%?
I started doing by the book and got stuck. I know it is a trivial problem, statisticians do this all the time, I need to do it once only, to understand )))
Start by writing down the likelihood function
By Neyman-Pearson lemma,
$P_{\mu_0}(\frac{L_Y(\mu;y)}{L_Y(\mu_0;y)}>k_{\alpha})=\alpha$
(my book uses the likelihood over the total sample space in the numerator)
So I find the ratio (I replaced $\Sigma_{i=1}^nY_i$ with $n\bar{Y}$):
$\frac{L_Y(\mu;y)}{L_Y(\mu_0;y)}=\frac{\mu^{n\bar{Y}}}{\mu_0^{n\bar{Y}}}e^{(\mu_0-\mu)n}$
Now then, I will reject Ho if this ratio is large. This ratio will be larger the larger $\bar{Y}$ is. But how large it should be (for say 95% confidence level?)
Here is where I am stuck. I don't know distribution of this ratio. How do I decide on the critical region? (which will be a most powerful region, I reckon). Do I need to know a distribution?...
Do I use the assymptotic distribution of the 2 log of the likelihood ratio? do I use Poisson table? When it comes to a most powerful test, my head is a mess.
thank you !
Well... you do know how large it should be. That is, according to Neyman-Pearson's lemma, the ratio you have obtained will be greater than a constant 'K' where P(your ratio > K) = $\alpha$, assuming
the null hypothesis to be true. First find the value of this constant 'K'.
To find k, I don't have $\mu$. Can I use sample mean $\bar{Y}$ in place of $\mu$? If so, do I have to prove somewhere that the sample mean is sufficient statistic for $\mu$, or that it is MLE for
You need to plug in the restricted and full parameter space MLE's into the ratio. Depending on the observed Y, these might be the same thing in which case the ratio will be 1 and you obviously
wouldn't reject. If they aren't the same thing then you work with the ratio.
The trick with these is that you usually don't need to know the distribution of the ratio. You can usually write the test as a function of the complete sufficient statistic and (hopefully) the
ratio will be a monotone in it (unimodal isn't the end of the world either, but monotone is best); thus you can base the test just on the complete sufficient statistic.
Right. Let me try this.
MLE (H_1) for Poisson distribution is $\hat{\mu}=\frac{1}{n}\Sigma_{i=1}^nY_i=\bar{Y}, \bar{Y}>\mu_0$
MLE (H_0) is $\mu_0$
then the LR is $r=\frac{\bar{Y}I_{[\mu_0; \infty)}}{\mu_0}$ and
$r=1, \bar{Y}=\mu_0;$
$r>1, \bar{Y}>\mu_0;$
$r=0, \bar{Y}<\mu_0$
Have I thereby constructed the UMPT? I am concerned that I don't have n anywhere in my formula. Does it matter?
Now, what about the given test results and rejecting/not rejecting of the Ho? $\bar{Y}=2.42>\mu_0=1.5$, so the r>1. What about checking within given confidence level, 5%.
Suppose I use $2ln(r)\sim\chi^2(1)$?
$2ln(\frac{2.42*1}{1.5})=0.9566049$ which is between 0.3 and 0.4 in my chi-square table (1 degree of freedom) which is definitely lower than 0.95. Should I reject Ho then?
Last edited by Volga; February 22nd 2011 at 02:13 AM.
the reason I feel I need a distribution because in the second part of the question I am given 5% confidence level, and therefore I think I need to base my decision on a probability distribution.
PS I've just looked up Casella, Berger Statistical Inference. Question 8.31 (page 406) is asking to find UMP for a sample ~ Poisson, and in part (b) it is asking to use Central Limit Theorem to
determine the sample size n to achieve certain confidence levels. So, it is also possible to use N(0,1)??? How do I know when it is OK and when it is not?
THANK YOU
Last edited by Volga; February 22nd 2011 at 02:31 AM.
Of course it will be based on a distribution, but the point is that you can base it on the complete sufficient statistic which will have a distribution that you know, whereas likelihood ratios
don't necessarily have familiar distributions.
I assumed you hadn't learned about MLR yet since it looks like you are taking a Neyman-Pearson Lemma angle at this problem, but from looking at it in the text you should know about MLR. $T = \sum
X_i$ is complete sufficient and is Poisson, so we have MLR in T. Hence, "reject when T is big" is UMP for this hypothesis. The UMP test should be
$\displaystyle \phi(T) = \begin{cases} 1 \qquad &T > K \\ \gamma \qquad &T = K \\ 0 \qquad & T < K \end{cases}$
where $\gamma$ and K are chosen so that $\mbox{E}_{\lambda = \lambda_0} \phi(T) = \alpha$ for the desired alpha; we reject with probability $\phi(T)$. I don't think Casella and Berger get into
randomized tests, so maybe you should ignore the gamma part.
The CLT thing is just asking you to use $T \,\dot\sim\, N(n\lambda, n\lambda)$ to do a power calculation. It's an approximation, of course. I think for the Poisson, a mean of 20 is
probably enough for it to be reasonable (just based on the standard deviation relative to the mean).
by MLR, do you mean M...mum Likelihood Ratio (test)? it is called LRT in Casella/Berger and my study guide. (Maximum or Minimum in the name, I guess, will depend on what you put in the numerator/
denominator, and what you are testing)
I (supposedly) have learnt about LRT, and I am struggling to place Neyman-Pearson, LRT, MLE (and sufficient statistics) together into one coherent picture. My study guide is a collection of
'useful' theorems and their proofs, but it has no one description of the 'method' to which these theorems are relevant.
Monotone Likelihood Ratio. I know Casella and Berger covers it.
Oh, no. My study guide is an undergrad text in Stat Inference and it does not have it in the syllabus (nor in the body of the text). It only refers to Casella and Berger sporadically, and it is
much more shallow than Casella and Berger. I wonder if there is a way to solve this exam style question with 'baby' methods (not using Monotone LR).
Okay, it may be more instructive if I just do it. Also, ignore my initial post in this thread; for some reason I thought we were making a LRT, but everything after that is fine. Consider testing
H: lambda = lambda_0 against H': lambda = lambda_1 for lambda_1 > lambda_0. The MP test of this hypothesis is to reject when
$R(X) = \left(\frac {\lambda_1}{\lambda_0}\right) ^ {\sum X_i} e^{-n(\lambda_1 - \lambda_0)} > C_\alpha$
where we choose C to get the desired size (NP Lemma). Now, R(X) is monotonically increasing in $\sum X_i$ due to the fact that $\lambda_1 > \lambda_0$. Equivalently, we reject H when $\sum X_i >
k$ where k is chosen to give the desired size. Now, this test does not depend on the choice of lambda_1 because in choosing k we only make use of the distribution of $\sum X_i$ under H. Thus,
reject when $\sum X_i > k$ is UMP for the given hypothesis (it is most powerful for all lambda > lambda_0).
It looks to me that this is roughly what I have done in my first post - up to where I said:
"Now then, I will reject Ho if this ratio is large. This ratio will be larger the larger $\bar{Y}$ is. But how large it should be (for say 95% confidence level?)
Here is where I am stuck."
Since I need to answer part 2 of the question (reject or not), I do need to choose k. And I am back again to the same question: how large should k be?
Another thing you said, may I ask you to elaborate, because it is not obvious to me yet:
Now, this test does not depend on the choice of lambda_1 because in choosing k we only make use of the distribution of $\sum X_i$ under H.
(under which H?)
I clearly see $\lambda_1$ in the formula above; why is it that we only make use of the distribution of $\sum X_i$ under H? (And this is where I have difficulty in general with uniformly most powerful tests.)
We got rid of all the lambdas by noticing that the decision to reject was only based on how big $\sum X_i$ is. The "new" test becomes to reject when $\sum X_i > k$ for appropriately chosen k; we
choose k to get the desired size, which only depends on the distribution of $\sum X_i$ under the null hypothesis, so regardless of the value of $\lambda_1$ we are getting the same k.
Oh I see that now, thanks for taking time to explain!
Would you mind showing how one would approach the second part of the question, viz "Suppose n=10, sample mean is 2.42, and =1.5. Will you reject Ho at a significance level of 5%?" I am still
hoping to have this completed as this is a good practice question for the exam. I promise never to attempt constructing a UMPT in real life after I am done with this Stat Inference exam )))
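For the second part, a quick numeric sketch (using SciPy; an illustrative addition, not a post from the original thread):

from scipy.stats import poisson

n, mu0, alpha = 10, 1.5, 0.05
mu = n * mu0                          # under H0, T = sum(Y_i) ~ Poisson(15)

K = poisson.ppf(1 - alpha, mu)        # smallest K with P(T <= K) >= 0.95 -> 22.0
print(K, poisson.sf(K, mu))           # P(T > 22) ~ 0.033 <= 0.05

T_obs = n * 2.42                      # observed T = 24.2 > 22, so reject H0 at 5%
# For an exact size-0.05 (randomized) test, additionally reject with probability
# gamma = (alpha - poisson.sf(K, mu)) / poisson.pmf(K, mu) when T == K.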
Lecture Details :
Stochastic Processes by Dr. S. Dharmaraja, Department of Mathematics, IIT Delhi. For more details on NPTEL visit http://nptel.iitm.ac.in
Course Description :
Probability Theory Refresher: Axiomatic construction of probability spaces, random variables and vectors, probability distributions, functions of random variables; mathematical expectations,
transforms and generating functions, modes of convergence of sequences of random variables, laws of large numbers, central limit theorem.
Introduction to Stochastic Processes (SPs): Definition and examples of SPs, classification of random processes according to state space and parameter space, types of SPs, elementary problems.
Discrete-time Markov Chains (MCs): Definition and examples of MCs, transition probability matrix, Chapman-Kolmogorov equations; calculation of n-step transition probabilities, limiting probabilities,
classification of states, ergodicity, stationary distribution, transient MC; random walk and gambler’s ruin problem, applications.
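To illustrate the Chapman-Kolmogorov computation of n-step transition probabilities from this unit (a minimal sketch; the two-state chain is invented for illustration and is not part of the course materials):

import numpy as np

# Two-state chain: rows = current state, columns = next state.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Chapman-Kolmogorov: the n-step transition matrix is the n-th matrix power.
P10 = np.linalg.matrix_power(P, 10)
print(P10)   # each row approaches the stationary distribution (4/7, 3/7)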
Continuous-time Markov Chains (MCs): Kolmogorov- Feller differential equations, infinitesimal generator, Poisson process, birth-death process, Applications to queueing theory, inventory analysis,
communication networks, finance and biology.
Brownian Motion: Wiener process as a limit of random walk; first -passage time and other problems, applications to finance.
Branching Processes: Definition and examples branching processes, probability generating function, mean and variance, Galton-Watson branching process, probability of extinction.
Renewal Processes: Renewal function and its properties, elementary and key renewal theorems, cost/rewards associated with renewals, Markov renewal and regenerative processes, applications.
Stationary Processes: Weakly stationary and strongly stationary processes, moving average and auto regressive processes.
Martingales: Conditional expectations, definition and examples of martingales, inequality, convergence and smoothing properties, applications in finance.
Other Resources :
The above free video lectures are presented by IIT Delhi under the NPTEL program; over 6000 more IIT video lectures are available.