can someone explain to me what this code does:

public class D {
    static double dp[][] = new double[51][51];
    static {
        dp[1][1] = 1;
        for (int i = 2; i <= 50; i++)
            for (int j = 1; j <= 50; j++)
                dp[i][j] = dp[i-1][j-1] + dp[i-1][j]*j;
    }
    public static void main(String args[]) {
        System.out.println("enterhow many lines:s");
        Scanner sc = new Scanner(System.in);
        while (sc.hasNext()) {
            int n = sc.nextInt();
            if (n == 0) break;
            double ans = 0;
            for (int i = 1; i <= n; i++) ans += dp[n][i];
            System.out.printf("%d %.0f\n", n, ans);
        }
    }
}
Alrighty... where did you see this? Crazy, haha. Let's begin.
static double dp[][]=new double[51][51]; static { dp[1][1]=1; for (int i=2;i<=50;i++) for(int j=1;j<=50;j++) dp[i][j]=dp[i-1][j-1]+dp[i-1][j]*j; } — not too sure why static, but it's basically
creating a two-dimensional array indexed from 1 rather than 0 for some reason. Then it stores dp[i][j] = dp[i-1][j-1] + dp[i-1][j]*j; so the first entry computed is dp[2][1] = dp[1][0] + dp[1][1] * 1;
now the main asks you to enter a number n. if(n==0) break; double ans=0; for(int i=1;i<=n;i++) ans+=dp[n][i]; System.out.printf("%d %.0f\n",n,ans); } — n cannot be 0; the loop starts at 1. Then
it runs through, setting ans (which is 0 at first) to ans + dp[n][i]; so if, say, n = 5, then ans = dp[5][1] + dp[5][2] + ... + dp[5][5]. Then it prints the result.
so if it starts at dp[1][1], what is stored in the zero entries? (or does that not matter, since we don't actually use them?) In addition: if I put in 5, then it does dp[5][2], but what is dp[5][2]? Where did
we initialize any values?
you can start anywhere, which is why n cannot = 0. dp[1][1]=1; is the initialization.
however a 0 shows up at some point in that loop; any entry never assigned stays 0, since a Java double array defaults to 0 (not null).
so when you put in a number, what does it do? what does the dp[][] actually do?
dp[2][1] = dp[1][0] + dp[1][1] * 1;
dp[1][0] = 0 and dp[1][1] = 1, so dp[2][1] = 0 + 1 * 1 = 1. Then dp[2][2] = dp[1][1] + dp[1][2] * 2 = ...
it seems like it's printing out the line numbers.
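An aside that is not in the thread itself: the recurrence dp[i][j] = dp[i-1][j-1] + dp[i-1][j]*j is the standard recurrence for the Stirling numbers of the second kind (the number of ways to partition i items into j non-empty blocks), so the row sum the program prints is the n-th Bell number. A small Python sketch (mine, not from the thread) that mirrors the Java table:

```python
# Rebuild the table the Java static initializer fills, then sum row n;
# dp[i][j] counts partitions of i items into j non-empty blocks.
def bell(n):
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    dp[1][1] = 1
    for i in range(2, n + 1):
        for j in range(1, n + 1):
            dp[i][j] = dp[i - 1][j - 1] + dp[i - 1][j] * j
    return sum(dp[n][1:])

print(bell(5))  # 52 -- the number of ways to partition a 5-element set
```

Running this for n = 1..5 gives 1, 2, 5, 15, 52, matching the Bell numbers.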
News from the Mathematics Department
Myles Baker, Baylor mathematics student, makes headlines!
March 20, 2009
Myles Baker, an undergraduate mathematics major at Baylor, is the subject of the headline article "It's all about the numbers" in the Thursday, March 19 edition of the Baylor Lariat. Myles has
obtained some new results with math Professor Qin (Tim) Sheng in mathematical finance. His work is funded through Baylor's Undergraduate Research and Scholarly Achievement (URSA) program. For further
details, please see the Lariat article. Congratulations to Myles and Tim!
Probability of Getting into Business School
Senior Manager (Current Student), 23 Sep 2010, 09:26

Ahh.. I've been studying for the GMAT way too long and got bored of meaningless word problems, so I came up with my own applicable one and actually feel a little better now (assuming I did the math correctly this time, oh god I hope I can do the math correctly!).

Michmax3 is applying to 5 top business schools with the following acceptance rates: Chicago (22%), Stanford (5%), Berkeley (12%), Kellogg (19%), and Duke (30%). What is the probability that she will be accepted to at least one of these schools?

P(getting in to at least one school) = 1 - P(getting rejected from all schools)
1 - (.78 x .95 x .88 x .81 x .70) = 1 - .369 = .63

So... the probability that I will get in to at least one school of my choice is about 63%. I can live with that! Happy studying everyone!

Disclaimer: Obviously not everyone has an equal probability of being accepted to begin with, so since this is based only on random selection, it will not be accurate. Maybe this question could be revised to include more details, like possibly I get +1% for being female, but maybe lose 5% for an under-700 GMAT... well, only until the retake, hopefully!
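Not part of the original post: the same complement-rule arithmetic can be checked in a few lines of Python. The rates are copied from the post; independence of the five decisions is the post's own simplifying assumption.

```python
# P(at least one acceptance) = 1 - P(rejected everywhere), assuming the
# five admissions decisions are independent (the post's simplification).
rates = {"Chicago": 0.22, "Stanford": 0.05, "Berkeley": 0.12,
         "Kellogg": 0.19, "Duke": 0.30}

p_all_rejected = 1.0
for p in rates.values():
    p_all_rejected *= 1 - p

print(round(1 - p_all_rejected, 3))  # 0.63
```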
Re: Probability of Getting into Business School, 23 Sep 2010, 10:05

That's some good computing skills. Just keep working on it and I'm sure you'll get there.
Ms. Big Fat Panda, Re: Probability of Getting into Business School, 23 Sep 2010, 11:57

Haha, I am sure you'll get into at least one of the schools you like. I definitely hope so.
VP (Jerz), Re: Probability of Getting into Business School, 23 Sep 2010, 12:23

Unfortunately, getting accepted is not a random event (every candidate does not have an equally likely chance of admittance), so probability theory doesn't work.
Senior Manager (Current Student), Re: Probability of Getting into Business School, 23 Sep 2010, 12:28

Jerz wrote: Unfortunately, getting accepted is not a random event (every candidate does not have an equally likely chance of admittance) so probability theory doesn't work.

I know, it's just for fun... it's in the disclaimer. And that's not really unfortunate; otherwise you would have a lot of unqualified people in there.
Current Student, Re: Probability of Getting into Business School, 23 Sep 2010, 12:45

Also, almost all those numbers are old; Duke is down to 24% (the adcom said so at an info session). Fun stuff though, I wish it were that simple! One more school and you'd be a lock!
Current Student, Re: Probability of Getting into Business School, 23 Sep 2010, 15:15

I need this!!!!!! Kudos.
Manager, Re: Probability of Getting into Business School, 24 Sep 2010, 07:55

Ha ha, good one! It echoes my sentiments as well.
Re: Probability of Getting into Business School, 24 Sep 2010, 08:40

dokiyoki wrote: Ha ha, good one! It echoes my sentiments as well.

That's a good thing. It's essential that you walk into the GMAT with a good frame of mind.
Manager, Re: Probability of Getting into Business School, 10 Oct 2010, 01:01

Congrats on your excellent score. I guess the acceptance rates are highly correlated between schools of the same level, so if you get accepted by one school, you have a high probability of also being accepted by the others.
Belfair Math Tutor
...My students have ranged from those who haven't done math in years to those who are shooting for a near perfect score. My own scores are a 170 Verbal and 169 Quantitative. I have a degree in
Linguistics from the University of Washington and have a passion for grammar.
32 Subjects: including prealgebra, LSAT, algebra 1, algebra 2
...I have experience tutoring prealgebra as well as basic mathematics, across grade levels 1st through 12th. Over the summer, I was a tutor in Seattle, tutoring at-risk children in mathematics and prealgebra.
16 Subjects: including algebra 1, prealgebra, chemistry, biology
...It is up to me to find a way of presentation that works with the student's background. I use the Socratic method a lot, asking the student questions and building on the responses. I did some
skin diving from 1966 to 1975, and backpacking from 1969 to 1975, mostly in the Olympics.
17 Subjects: including algebra 1, algebra 2, calculus, chemistry
...I would be happy to tutor in American or US history, but would request access to the textbook or advance knowledge of specific topics to be covered in order to be most effective. I took IB
Biology in high school (the IB program is similar to AP classes), and I especially enjoyed learning about M...
35 Subjects: including statistics, linear algebra, English, algebra 1
...I received a 4.0 in both courses. I have extensive experience in several other C-based languages and experience tutoring this specific subject. I have experience in both the mathematical and
coding aspects of data structures in C.
26 Subjects: including algebra 1, algebra 2, logic, computer science
st: specifying linear mixed-effects covariance structure
From jwegelin <jwegelin@vcu.edu>
To statalist@hsphsun2.harvard.edu
Subject st: specifying linear mixed-effects covariance structure
Date Thu, 18 Oct 2007 17:05:43 -0400
The purpose of this email is to enquire regarding the capabilities of
Stata for specifying the covariance structure in linear mixed-effects
models. The email starts with a fairly detailed description of the
problem and a sketch of how one approaches it in SAS. We end with a set
of questions regarding Stata, marked by asterisks *********.
The bottom line is, "Can I do this all in Stata, or do I need to use SAS
for such analyses?"
Suppose you have a longitudinal outcome (K repeated measures on N units)
and are fitting a linear mixed-effects model. Suppose you have specified
random intercepts and random slopes.
For instance, in Stata this might look like
xi: xtmixed Size i.Tribe*Day || Mouse: Day, cov(un)
where Tribe is dichotomous ("case" or "control"), Day goes from zero to
ten, and each Mouse, belonging to one of the Tribes, is measured each
day. You want to know whether the growth patterns differ between Tribes.
(1) One might consider the possibility of autocorrelation of residuals
within unit (within Mouse) over time, for instance an AR(1)
autoregressive model; or one might want to try compound symmetry as
another alternative to independence of the within-Mouse residuals.
In SAS PROC MIXED it is possible to specify AR(1), exchangeable,
compound symmetry, and other kinds of variances of the within-Mouse
residuals under the REPEATED statement, TYPE=AR(1), etc.
(2) One might suspect---e.g., from initial exploratory graphics---that
the variance of the "case" Tribe exceeds that of the "control" Tribe.
Furthermore, one might be curious whether this difference in variance is
in the intercept and slope random effects only, in the residuals only,
or in both.
In SAS PROC MIXED one can allow different variances of the random slopes
and intercepts in the two Tribes by saying "GROUP=TRIBE" under the
RANDOM statement.
Separately, one can allow different variances of the within-Mouse
residuals by saying the same thing under the REPEATED statement.
(3) Further, one can separately specify the covariance structures of the
between-mouse random effects (the slope and intercept random effects) on
one hand and the within-mouse residuals on the other hand.
When I used SAS, I specified unrestricted ("unstructured" in SAS-speak)
covariance of the slopes and intercepts within each Tribe. This used
three degrees of freedom per Tribe and permitted the random Mouse
intercept to be correlated with the random Mouse slope. But I specified
a much more restricted structure for the within-mouse residuals, since
that matrix is 10 by 10.
Am I correct in believing that there is no procedure or option in Stata
by which one can readily do either of (1) or (2) described above?
If this is correct, are there any plans, either in Stata proper or among
people making well-documented add-ons (see for instance the work of
Rabe-Hesketh), to add these features?
In current xtmixed, we can specify the between-Mouse variance of
the random effects as "independent", "exchangeable", "identity" or
"unstructured". (See http://www.stata.com/help.cgi?xtmixed for lucid
definitions.) Regarding the within-Mouse residual variance, am I correct
in guessing that it is always specified as "identity" when one runs xtmixed?
In Stata, the xtreg procedure allows us to specify the within-group
(within-Mouse) correlation structure as autoregressive, exchangeable, or
compound symmetry, but only with the "pa" (population average) option,
I believe. One does this with the "corr" option. But I think there is no
"corr" option in xtmixed. Furthermore, I think that one can only specify
random intercepts, not other random effects, under xtreg.
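A hedged sketch, not from the original thread: later Stata releases added a residuals() option to xtmixed (since renamed mixed) that covers points (1) and (2). The exact syntax below is from memory and should be verified against the [R] mixed documentation for your Stata version before use.

```stata
* Sketch only -- verify against your version's xtmixed/mixed documentation.
* (1) AR(1) correlation of the within-Mouse residuals:
xi: xtmixed Size i.Tribe*Day || Mouse: Day, cov(un) residuals(ar 1, t(Day))
* (2) A separate residual variance for each Tribe:
xi: xtmixed Size i.Tribe*Day || Mouse: Day, cov(un) residuals(independent, by(Tribe))
```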
Thanks in advance for any information or correction.
Jacob A. Wegelin
Assistant Professor
Department of Biostatistics
Virginia Commonwealth University
730 East Broad Street Room 3006
P. O. Box 980032
Richmond VA 23298-0032
Given the following system, select each of the following statements that is true:

y[n] = (2 x[n-1] + x[n+1]) u[n]

- This system is not linear
- This system is linear
- This system is time/shift invariant
- This system is not time/shift invariant
- None of these statements is true
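Not part of the original problem: one way to probe these properties numerically (my sketch; a numerical check can falsify a property but not prove it) is to feed the system superposed and shifted inputs and compare outputs.

```python
# Numerically probe y[n] = (2 x[n-1] + x[n+1]) u[n], u[n] the unit step.
def system(x):
    # x: dict n -> value (zero elsewhere); evaluate y on a window of n
    def at(n):
        return x.get(n, 0.0)
    return {n: (2 * at(n - 1) + at(n + 1)) * (1 if n >= 0 else 0)
            for n in range(-5, 6)}

x1 = {0: 1.0}            # impulse at n = 0
x2 = {1: 1.0}            # the same impulse shifted to n = 1

# Linearity check: T(2*x1 + 3*x2) versus 2*T(x1) + 3*T(x2)
xsum = {0: 2.0, 1: 3.0}
y_sum = system(xsum)
y_lin = {n: 2 * system(x1)[n] + 3 * system(x2)[n] for n in range(-5, 6)}
print(y_sum == y_lin)    # True -> consistent with linearity

# Shift-invariance check: shifting the input by 1 should shift the output by 1
y1 = system(x1)
y2 = system(x2)
shifted = {n: y1.get(n - 1, 0.0) for n in range(-5, 6)}
print(y2 == shifted)     # False here -> the u[n] factor breaks invariance
```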
Electrical Engineering
When Is C(X)/P a Valuation Ring for Every Prime Ideal P?
A Tychonoff space X is called an SV-space if for every prime ideal P of the ring C(X) of continuous real-valued functions on X, the ordered integral domain C(X)/P is a valuation ring (i.e., of any
two nonzero elements of C(X)/P, one divides the other). It is shown that X is an SV-space iff υX is an SV-space iff βX is an SV-space. If every point of X has a neighborhood that is an F-space, then
X is an SV-space. An example is supplied of an infinite compact SV-space such that any point with an F-space neighborhood is isolated. It is shown that the class of SV-spaces includes those Tychonoff
spaces that are finite unions of C^*-embedded SV-spaces. Some open problems are posed.
Rights Information
© 1992 Elsevier
Recommended Citation
Henriksen, Melvin and Wilson, Richard. 1991. When is C(X)/P a valuation ring for every prime ideal P? Topology and its Applications 44(1-3): 175-180.
Wolfram Demonstrations Project
Iterated Subdivision of a Triangle
This Demonstration shows the repeated subdivision of a triangle.
The barycenter of the triangle is determined using the classical dot product between the masses and the vectors of their positions:

B = (m1 P1 + m2 P2 + m3 P3) / (m1 + m2 + m3)

The division into six triangles is done by using the barycenter as the common vertex and dividing the three edges according to the divider ratios.
A multitude of shapes can be achieved by iterating this process n times and moving the sliders; this creates 6^n triangles. The barycenter of each triangle can be moved by dragging the locators
and varying the weight sliders. The shape of the triangles can be altered with the three divider sliders. Starting with a symmetric initial triangle gives a more balanced end result.
Homogeneous barycentric coordinates are used by assuring that the three masses sum to 1.
A purely barycentric subdivision is given in snapshot 1 by setting all three masses to 0.333 and all three dividers to 0.5. | {"url":"http://demonstrations.wolfram.com/IteratedSubdivisionOfATriangle/","timestamp":"2014-04-18T00:17:17Z","content_type":null,"content_length":"44313","record_id":"<urn:uuid:c681db51-0d20-423a-8aa5-4cb89c168d31>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00375-ip-10-147-4-33.ec2.internal.warc.gz"} |
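The subdivision step described above can be sketched in a few lines of Python. This is an illustrative sketch; the names and the plain 2-D representation are mine, not the Demonstration's.

```python
# Weighted barycenter and one six-way subdivision step.
def barycenter(pts, masses):
    total = sum(masses)
    return tuple(sum(m * p[k] for m, p in zip(masses, pts)) / total
                 for k in range(2))

def split(p, q, t):
    # point dividing edge p -> q at ratio t in [0, 1]
    return tuple(p[k] + t * (q[k] - p[k]) for k in range(2))

def subdivide(tri, masses, dividers):
    a, b, c = tri
    g = barycenter(tri, masses)
    ab = split(a, b, dividers[0])
    bc = split(b, c, dividers[1])
    ca = split(c, a, dividers[2])
    # six triangles sharing the barycenter as the common vertex
    return [(a, ab, g), (ab, b, g), (b, bc, g),
            (bc, c, g), (c, ca, g), (ca, a, g)]

tris = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
for _ in range(2):  # two iterations -> 6**2 = 36 triangles
    tris = [t for tri in tris
            for t in subdivide(tri, (1.0, 1.0, 1.0), (0.5, 0.5, 0.5))]
print(len(tris))  # 36
```

With equal masses and all dividers at 0.5 this reproduces the purely barycentric subdivision mentioned above.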
Consider your advice to an artillery officer who has the following problem. From his current position, he must shoot over a hill of a given height at a target on the other side, which is at the same elevation as his gun. He knows from his accurate map both the bearing and the distance to the target, and also that the hill is halfway to the target. To shoot as accurately as possible, he wants the projectile to just barely pass above the hill.
Part C
Find the angle above the horizontal at which the projectile should be fired. | {"url":"http://www.coursehero.com/tutors-problems/Physics/6756033-Consider-your-advice-to-an-artillery-officer-who-has-the-following-pro/","timestamp":"2014-04-16T04:35:47Z","content_type":null,"content_length":"35325","record_id":"<urn:uuid:6d5791d4-3010-48b0-a36c-a4acce89d39b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00038-ip-10-147-4-33.ec2.internal.warc.gz"} |
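Not part of the original page, but a standard kinematics fact is useful here: for a projectile launched and landing at the same elevation, the apex occurs at half the range and satisfies H/R = tan(theta)/4, so just clearing a hill of height h halfway to a target at distance d suggests theta = arctan(4h/d). A quick numerical sanity check of that relation (the values of h and d are arbitrary):

```python
import math

# Check: with theta = atan(4h/d), a projectile whose range is d peaks at
# height h, exactly halfway to the target.
h, d, g = 100.0, 1000.0, 9.81
theta = math.atan(4 * h / d)

# Choose the speed so the range is exactly d: R = v^2 sin(2 theta) / g
v = math.sqrt(d * g / math.sin(2 * theta))
peak = (v * math.sin(theta)) ** 2 / (2 * g)    # maximum height
x_peak = v ** 2 * math.sin(2 * theta) / (2 * g)  # horizontal position of apex

print(round(peak, 3), round(x_peak, 3))  # 100.0 500.0
```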
Summary: Expected-Case Complexity of Approximate Nearest Neighbor
Sunil Arya
Ho-Yam Addy Fu
Most research in algorithms for geometric query problems has focused on their worst-case performance. However, when information on the query distribution is available, the alternative paradigm of designing and analyzing algorithms from the perspective of expected-case performance appears more attractive. We study the approximate nearest neighbor problem from this perspective.

As a first step in this direction, we assume that the query points are sampled uniformly from a hypercube that encloses all the data points; however, we make no assumption on the distribution of the data points. We show that with a simple partition tree, called the sliding-midpoint tree, it is possible to achieve linear space and logarithmic query time in the expected case; in contrast, the data structures known to achieve linear space and logarithmic query time in the worst case are complex, and algorithms on them run more slowly in practice. Moreover, we prove that the sliding-midpoint tree achieves optimal expected query time in a certain class of algorithms.
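The sliding-midpoint splitting rule mentioned in the summary can be sketched as follows. This is my paraphrase of the idea, not the authors' code: cut at the midpoint of the cell's longest side, and if every point falls on one side, slide the cut to the nearest data point so that neither child is empty.

```python
# One split of the sliding-midpoint rule (sketch).
def sliding_midpoint_split(points, lo, hi):
    # points: list of tuples; (lo, hi): the cell's bounding box
    dim = max(range(len(lo)), key=lambda k: hi[k] - lo[k])  # longest side
    cut = (lo[dim] + hi[dim]) / 2.0
    left = [p for p in points if p[dim] <= cut]
    right = [p for p in points if p[dim] > cut]
    if not left:                 # all points right of the cut: slide it right
        cut = min(p[dim] for p in right)
        left = [p for p in points if p[dim] <= cut]
        right = [p for p in points if p[dim] > cut]
    elif not right:              # all points left of the cut: slide it left
        cut = max(p[dim] for p in left)
        right = [p for p in points if p[dim] >= cut]
        left = [p for p in points if p[dim] < cut]
    return dim, cut, left, right

pts = [(0.1, 0.2), (0.15, 0.9), (0.12, 0.5)]
# The cell is taller than wide, so the split is along dimension 1 (y); the
# midpoint cut at y = 1.0 leaves one side empty, so it slides to y = 0.9.
print(sliding_midpoint_split(pts, (0.0, 0.0), (1.0, 2.0)))
```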
1 Introduction
The main focus in the design of data structures and algorithms for geometric query problems
Need help on TI-89 to solve Complex number
May 14th 2008, 09:42 AM #1
I know how to use "simple" complex numbers on the TI-89, for example: 8+i3. But what about when there are two equations and we want to solve for x and y? For example:

(1+i1.5)x + i2.5y = 20 and 11x + 15y = 0

Solve for x and y. Is it possible to do the calculation using solve(? I tried, but I kept getting a "false" answer.

Any responses would be GREATLY appreciated.

Best regards,
May 14th 2008, 09:52 AM #2
Did you use the special i button?
Are x and y real numbers or complex numbers?
What did you enter exactly?

May 14th 2008, 11:22 AM #3
Yes, I used the special i, and I want x and y to come out in the form a+bi.
Could you please help me? Thank you very much.

May 14th 2008, 11:25 AM #4
Try:
cSolve((1+i*1.5)*x+i*2.5*y=20 and 11x+15y=0, {x,y})
cSolve is for solving in the set of complex numbers.
(I've always liked to put * for multiplying, because omitting it sometimes yields errors, so I find it wiser to include it the first time ;p)

Yay! It works.
And just so you know, you don't need the * here.

Last edited by Moo; May 14th 2008 at 11:43 AM. Reason: typos
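As a cross-check that is not from the thread, the same 2x2 complex system can be solved by substitution in plain Python, where j plays the role of i:

```python
# Solve (1 + 1.5i) x + 2.5i y = 20 and 11 x + 15 y = 0.
# From the second equation, y = -11x/15; substitute into the first.
a, b, c = complex(1, 1.5), complex(0, 2.5), 20
x = c / (a + b * (-11 / 15))
y = -11 * x / 15
print(x, y)  # x is approximately 18+6i, y approximately -13.2-4.4i
```

Plugging these back into both equations recovers 20 and 0, so they agree with what cSolve returns on the calculator.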
FOM: f.o.m. as math?
Harvey Friedman friedman at math.ohio-state.edu
Fri Aug 28 11:13:13 EDT 1998
Here I want to respond to Shoenfield 12:55PM 8/21/98. In a separate
posting, I want to write on the issue of "combinatorial statements," and
try to get to the real issues without getting bogged down in terminology.
> In a recent communication, I stated rather casually that fom is a
>branch of mathematics. Here I would like to explain what I mean by that
>statement and what consequences I think it has for fom.
> In a reply to my statement, Harvey asserted that fom is a
>mathematical subject but not a branch of mathematics.
Yes, like statistics and computer science.
> I do not
>understand the difference, and I do not see why fom and statistics are not
>branches of mathematics and geometry and algebra are.
Because the aims and goals of statistics are very very different from
geometry and algebra.
*Rather than get bogged down in terminology*, instead of saying that "fom
is not a branch of mathematics," I could say "the aims and goals and modes
of evaluation of work in f.o.m. are very different from that of core
mathematics. The differences are so great as to explain the very low level
of interaction between f.o.m. and branches of mathematics, and the people
working in them, as compared to, say, the level of interaction between the
various branches of mathematics." Furthermore, this comparatively low
level of interaction is completely natural and does not reflect in any way
on the intellectual stature and importance of work in f.o.m. However, if
f.o.m. were a branch of mathematics, then this low a level of interaction
might well reflect very badly.
F.o.m. and mathematical logic are treated in the mathematics community as
branches of mathematics with comparatively very low levels of interaction
with other branches of mathematics. This leads to great misunderstandings,
and poor employment opportunities for f.o.m. and logic.
This is why it is so important to emphasize that f.o.m. is not properly
thought of as a branch of mathematics.
This same scenario occurred for statistics, computer science, and, to some
extent, for applied mathematics. They did not stand for this, and broke
into autonomous groups in many places. There is an underlying intellectual
significance to these groupings within Universities. Statisticians and
computer sciences do not like to call themselves mathematicians, and
certainly do not say things like "statistics and computer science are
branches of mathematics the same way as algebra and geometry are."
Of course, f.o.m. cannot really break off autonomously. However, f.o.m. as
a branch of a wider subject called foundational studies can, and if I have
my way, will break off autonomously to form the largest and most
influential autonomous group in Universities.
> He also says that
>fom is a branch of the subject of foundations. This is a truism, but a
>useless one.
It is an essential point with lots of consequences. However, I grant that
foundations, generally, is not so well developed. However, I would like to
make a contribution to this. I have been thinking more seriously about,
e.g., foundations of probability and statisitics.
>There are no significant results on foundations which can
>be used in the various foundational studies.
The whole setup of propositional calculus and predicate calculus, with its
completeness theorems, provide essential background information, as well as
the fundamentals of recursion theory and complexity, and also model theory.
There will be a full blown development of formal systems for science and
engineering, where one establishes the independence of certain scientific
and engineering principles from others, as well as the existence or
nonexistence of decision procedures for problems in science and
engineering, and the definability and nondefinability of certain concepts
from others in science and engineering.
>By contrast, mathematicians
>have thought much about mathematics and reach many agreements on how
>mathematics should be done by consensus.
I said above that foundational studies is not very well developed at this
point. But I expect the situation to be very different by 2050.
> My statement was not intended as a truism, but as a statement that
>all of the significant results in fom are mathematical. I challenge
>anyone to find a significant advance in fom in which the principal
>ingredient is not the formulation or proof (or both) of a clearly
>mathematical theorem.
Frege's setup of predicate calculus. I explicitly agree that f.o.m. is a
mathematical subject, but it is not properly viewed as mathematics. At
most, some sort of applied mathematics.
> If this is correct, it has consequences for the study of fom. I
>suggested one such consequence concerning the study of intuitive ideas
>which arise in the consideration of the nature of mathematics. I said
>that the object in studying such concepts should be to replace them by
>precise concepts which we can agree capture the essential content of the
>intuitive notion.
Of course, some intuitive ideas may not yield to such replacement, but
still may be essential to consider. One doesn't simply pretend that the
concepts don't exist if one has no idea how to replace them. But rather
than "replace" I would use the word "formally analyze."
>When we have done this, we come to the most important
>part. This is to formulate and prove mathematical theorems about the
>precise concepts which increase our understanding of the intuitive notion.
>Sometimes we discover properties of the intuitive notion which we would
>probably not have even thought about in an informal discussion. (I am
>sorry that my earlier communication seemed to suggest that the above is
>all of fom; Harvey was quite right to say that there is much else.)
> Let me use the above to show what I consider to be the achievement of
>reverse mathematics. The original object was to discover what axioms are
>needed to prove the theorems of core mathematics. To make things
>manageable, researchers confined themselves to mathematics expressible in
>the language of analysis (= second order arithmetic); this is certainly a
>reasonable restriction. The main result was that over a very weak system
>of analysis, all of the theorems which they considered were equivalent to
>one of a small number (I believe 5) theorems.
With some caveats that Simpson can discuss more fully; and see his
forthcoming book.
>An additional point
>(emphasized by Harvey) is that these 5 theorems are linearly ordered by
>provable (in the weak system) implication. I take this to mean that the
>intuitive notion of "theorem of core mathematics" gives rise to five
>precise notions which are related in a nice way.
But this account doesn't take into account the fact that *reverse
mathematics II* will be developed with a somewhat weaker base theory,
incorporating more phenomena more sensitively, and where 5 is no longer the
appropriate number. The idea of reverse mathematics transcends the
particular way it is currently executed.
>The next step, I
>believe, should be to prove significant mathematical theorems about these
This intriguing statement needs some amplification. Please elaborate.
>Thus I think that reverse mathematics has contributed
>significantly to fom, but its future progress will decide whether it
>becomes a permanent part of the theory of fom.
It is inconceivable that the idea of reverse mathematics is not a permanent
part of f.o.m., even though the base theory may change in future.
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/1998-August/002016.html","timestamp":"2014-04-16T22:11:09Z","content_type":null,"content_length":"10422","record_id":"<urn:uuid:eae58837-f636-420e-8a7f-5b666dfe47cc>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Haskell-cafe] Re: A question about "monad laws"
Arnar Birgisson arnarbi at gmail.com
Mon Feb 11 10:52:01 EST 2008
Hi all,
On Feb 11, 2008 3:14 PM, apfelmus <apfelmus at quantentunnel.de> wrote:
> I will be mean by asking the following counter question:
> x + (y + z) = (x + y) + z
> is a mathematical identity. If it is a mathematical identity, a
> programmer need not care about this law to implement addition + . Can
> anyone give me an example implementation of addition that violates this law?
Depends on what you mean by "addition". In general, algebraists call
any associative and commutative operation on a set "addition", and
nothing else. From that POV, there is by definition no "addition" that
violates this law.
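For what it's worth, one concrete machine-level counterexample (my own addition to the thread, illustrated in Python rather than Haskell): IEEE-754 floating-point addition is not associative, so `Double` addition in any language already violates the law.

```python
# Floating-point "addition" violates associativity: rounding happens
# after each operation, and the rounding error depends on the grouping.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # rounds 0.1 + 0.2 first
right = a + (b + c)  # rounds 0.2 + 0.3 first

print(left == right)  # False for IEEE-754 doubles
print(left, right)
```

The two groupings differ by one unit in the last place, which is enough to make the equality fail.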
More information about the Haskell-Cafe mailing list | {"url":"http://www.haskell.org/pipermail/haskell-cafe/2008-February/039486.html","timestamp":"2014-04-19T08:34:30Z","content_type":null,"content_length":"3543","record_id":"<urn:uuid:842721c6-3994-45ec-8406-bbe9f60032b9>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00503-ip-10-147-4-33.ec2.internal.warc.gz"} |
analytic hierarchy
The first level can be called $\Delta^{1}_{0}$, $\Delta^{1}_{1}$, $\Sigma^{1}_{0}$, or $\Pi^{1}_{0}$, and consists of the arithmetical formulas or relations.
A formula $\phi$ is $\Sigma^{1}_{n}$ if there is some arithmetical formula $\psi$ such that:
$\phi(\vec{k})=\exists X_{1}\forall X_{2}\cdots QX_{n}\psi(\vec{k},\vec{X}_{n})$
$\text{ where }Q\text{ is either }\forall\text{ or }\exists\text{, whichever maintains the pattern of alternating quantifiers, and each }X_{i}\text{ is a set variable (that is, second order)}$
Similarly, a formula $\phi$ is $\Pi^{1}_{n}$ if there is some arithmetical formula $\psi$ such that:
$\phi(\vec{k})=\forall X_{1}\exists X_{2}\cdots QX_{n}\psi(\vec{k},\vec{X}_{n})$
$\text{ where }Q\text{ is either }\forall\text{ or }\exists\text{, whichever maintains the pattern of alternating quantifiers, and each }X_{i}\text{ is a set variable (that is, second order)}$
Added: 2002-08-18 - 03:16 | {"url":"http://planetmath.org/AnalyticHierarchy","timestamp":"2014-04-19T19:50:58Z","content_type":null,"content_length":"56994","record_id":"<urn:uuid:654f09b4-cd7d-466e-9ec9-e75a6b54a255>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00544-ip-10-147-4-33.ec2.internal.warc.gz"} |
228 projects tagged "Mathematics"
Botan is a crypto library written in C++. It provides a variety of cryptographic algorithms, including common ones such as AES, MD5, SHA, HMAC, RSA, Diffie-Hellman, DSA, and ECDSA, as well as many
others that are more obscure or specialized. It also offers SSL/TLS (client and server), X.509v3 certificates and CRLs, and PKCS #10 certificate requests. A message processing system that uses a
filter/pipeline metaphor allows for many common cryptographic tasks to be completed with just a few lines of code. Assembly and SIMD optimizations for common CPUs offers speedups for critical
algorithms like AES and SHA-1. | {"url":"http://freecode.com/tags/mathematics?page=1&sort=popularity&with=191&without=","timestamp":"2014-04-16T16:39:16Z","content_type":null,"content_length":"117963","record_id":"<urn:uuid:514081dd-ecba-4efd-a07f-c07b068fba92>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00305-ip-10-147-4-33.ec2.internal.warc.gz"} |
Entropy Coding (EC)
Papers:

Modeling for Text Compression - Although from 1988 this paper from Timothy Bell, Ian Witten and John Cleary is one of my favourites. It is easy to read, well structured and explains all important details.
Models are best formed adaptively, based on the text seen so far. This paper surveys successful strategies for adaptive modeling which are suitable for use in practical text compression systems. The strategies fall into three main classes: finite-context modeling, in which the last few characters are used to condition the probability distribution for the next one; finite-state modeling, in which the distribution is conditioned by the current state (and which subsumes finite-context modeling as an important special case); and dictionary modeling, in which strings of characters are replaced by pointers into an evolving dictionary. A comparison of different methods on the same sample texts is included, along with an analysis of future research directions.

Compression: Algorithms: Statistical Coders - A good introduction into entropy coding is this article from Charles Bloom in 1996. The process of statistical coding is explained with many simple examples.

Solving the Problems of Context Modeling - This paper from Charles Bloom in 1998 is about the PPMZ algorithm. It handles local order estimation and secondary escape estimation.

New Techniques in Context Modeling and Arithmetic Coding - Charles Bloom presents in 1996 several new techniques on high order context modeling, low order context modeling, and order-0 arithmetic coding. Emphasis is placed on economy of memory and speed. Performance is found to be significantly better than previous methods.

Arithmetische Kodierung (Proseminar ...) - A well structured description of the ideas, background and implementation of arithmetic coding in German from 2002 by Eric Bodden, Malte Clasen and Joachim Kneis. Good explanation of the renormalisation process, with complete source code. Very recommendable for German readers.

Is Huffman Coding Dead? - A paper from 1993 written by Abraham Bookstein and Shmuel Klein about the advantages of Huffman codes against arithmetic coding, especially the speed and robustness against errors.

Arithmetic Coding by Campos - A short description about arithmetic coding from 1999 written by Arturo Campos with a little example.

Canonical Huffman by Campos - Arturo Campos describes Canonical Huffman Coding in his article from 1999 with some examples.

Inductive Modeling for Data ... - John Cleary and Ian Witten wrote this basic paper about modeling, parsing, prediction, context and state in 1987.

Arithmetic Coding by the Data Compression Reference - A brief description of arithmetic coding from 2000. Easy to read, with figures and examples.

Context Modelling for Text Compression - Several modeling strategies and algorithms are presented in 1992 by the paper of Daniel Hirschberg and Debra Lelewer. It contains a very interesting blending strategy.

The Design and Analysis of Efficient Lossless Data Compression Systems - The thesis of Paul Howard from 1993 about data compression algorithms with emphasis on arithmetic coding, text and image compression.

Arithmetic Coding for Data Compression - Paul Howard and Jeffrey Vitter describe an efficient implementation which uses table lookups in the article from 1994.

Analysis of Arithmetic Coding for Data Compression - In their article from 1992 Paul Howard and Jeffrey Vitter analyse arithmetic coding and introduce the concept of weighted entropy.

Practical Implementations of Arithmetic Coding - A tutorial on arithmetic coding from 1992 by Paul Howard and Jeffrey Vitter with table lookups for higher speed.

Data Compression (Tutorial) - A basic paper from Debra Lelewer and Daniel Hirschberg about fundamental concepts of data compression, intended as a tutorial from 1987. Contains many small examples.

Streamlining Context Models for Data Compression - This paper from 1991 was written by Debra Lelewer and Daniel Hirschberg and is about context modeling using self organizing lists to speed up the compression process.

Lossless Compression Algorithms (Entropy ...) - Several nice and short articles written by Dave Marshall from 2001 about entropy coding with many examples.

Range encoding: an algorithm for removing redundancy from a digitised ... - Range encoding was first proposed by this paper from G. Martin in 1979, which describes the algorithm not very clearly.

Lossless Compression for Text and Images - Again a basic paper about modeling and coding with models for text and image compression, written by Alistair Moffat, Timothy Bell and Ian Witten in 1995.

Arithmetic Coding Revisited - Together with the CACM87 paper this 1998 paper from Alistair Moffat, Radford Neal and Ian Witten is very well known. Improves the CACM87 implementation by using fewer multiplications and a wider range of symbol probabilities.

Arithmetic Coding + Statistical Modeling = Data Compression - Mark Nelson's article about arithmetic coding from 1991. The concepts are easy to understand and accompanied by a simple "BILL GATES" example. Source code for Billyboy is available.

Arithmetic Coding for Data Compression - This ACM paper from 1987, written by Ian Witten, Radford Neal and John Cleary, is the definite front-runner of all arithmetic coding papers. The article is quite short but comes with full source code for the famous CACM87 AC implementation.
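As a concrete illustration of the order-0 statistical modeling these papers discuss, here is a minimal sketch (my own, not from the catalog) that computes the zeroth-order entropy of a message: the lower bound, in bits per symbol, that an ideal entropy coder driven by an order-0 model can approach.

```python
from collections import Counter
from math import log2

def order0_entropy(message):
    """Zeroth-order entropy in bits per symbol: -sum p(s) * log2 p(s)."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A skewed distribution needs fewer bits per symbol than a uniform one.
print(order0_entropy("aaab"))  # ≈ 0.811 bits/symbol
print(order0_entropy("ab"))    # 1.0 bit/symbol (uniform over two symbols)
```

Higher-order context models, as surveyed above, lower this bound further by conditioning each symbol's distribution on the preceding characters.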
People:

Timothy Bell - Timothy Bell works at the University of Canterbury, New Zealand, and is "father" of the Canterbury Corpus. His research interests include compression, computer science for children, and music.

Charles Bloom - Charles Bloom has published many papers about data compression and is author of PPMZ2, a very strong compression algorithm (2.141 bps on the Calgary Corpus).

Eric Bodden - Eric Bodden is a student of the RWTH Aachen, Germany, and currently studying at the University of Kent at Canterbury. He started a small online business called Communic Arts in November 1999.

Abraham Bookstein - Abraham Bookstein works at the University of Chicago, United States of America, and has published several compression papers together with Shmuel Klein.

Arturo Campos - Arturo Campos is a student and programmer, interested in data compression, and has written several articles about data compression.

Malte Clasen - Malte Clasen is a student of the RWTH Aachen, Germany, and is known as "the update" in the demoscene, a community of people whose target is to demonstrate their coding, drawing and composing skills in small programs called demos that have no purpose except posing.

John Cleary - John Cleary works at the University of Waikato, New Zealand, and has published several well known papers together with Ian Witten and Timothy Bell.

Daniel Hirschberg - Daniel Hirschberg is working at the University of California, United States of America. He is interested in the theory of design and analysis of algorithms.

Paul Howard - Paul Howard is working at the Eastern Michigan University, United States of America, and has been engaged in the arithmetic coding field for 10 years.

Shmuel Klein - Shmuel Tomi Klein is working at the Bar-Ilan University, Israel, and has published several compression papers together with Abraham Bookstein.

Joachim Kneis - Joachim Kneis studies Computer Science at the RWTH Aachen, Germany, and likes to play "Unreal Tournament".

Mikael Lundqvist - Mikael is interested in data compression, experimental electronic music and has written a BWT implementation, an improved range coder, a faster sort algorithm and a modified MTF scheme.

Dave Marshall - Dave Marshall works at the Cardiff University, United Kingdom. He is interested in music and has several compression articles on his multimedia internet site.

G. Martin - G. Martin is the author of the first range coder paper presented at the Data Recording Conference in 1979.

Alistair Moffat - Alistair Moffat is working at the University of Melbourne, Australia. Together with Ian Witten and Timothy Bell he is author of the book "Managing Gigabytes".

Radford Neal - Radford Neal works at the University of Toronto, Canada. He is one of the authors of the CACM87 implementation, which sets the standard in arithmetic coding.

Mark Nelson - Mark is the author of the famous compression site www.datacompression.info and has published articles in the data compression field for over ten years. He is an editor of the Dr. Dobb's Journal and author of the book "The Data Compression Book". He lives in the friendly Lone Star State Texas ("All My Ex's"...).

Michael Schindler - Michael Schindler is an independent compression consultant in Austria and the author of szip and a range coder.

Jeffrey Vitter - Jeffrey Vitter works at the Purdue University, United States of America. He published several data compression papers, some of them together with Paul Howard.

Ian Witten - Ian is working at the University of Waikato, New Zealand. Together with John Cleary and Timothy Bell he published "Modeling for Text Compression".
Source code:

Arithmetische Kodierung (Proseminar ...) - The source code from the paper of Eric Bodden, Malte Clasen and Joachim Kneis.

Arithmetic Coding by Campos - A little pseudo source code from Arturo Campos.

Range coder by Campos - A little pseudo source code from Arturo Campos.

CACM87 - The standard CACM 1987 implementation of arithmetic coding in three different versions from John Cleary, Radford Neal and Ian Witten.

Range Coder by Lundqvist - The range coder implementation from Dmitry Subbotin, improved by Mikael Lundqvist. A range coder works similarly to an arithmetic coder but uses fewer renormalisations and a faster byte output.

Arithmetic Coding + Statistical Modeling = Data Compression - The source code for the arithmetic coding article from Mark Nelson.

Range Coder by Schindler - Range coder source code from Michael Schindler, which is one of my favourite range coder implementations. A range coder works similarly to an arithmetic coder but uses fewer renormalisations and a faster byte output. | {"url":"http://www.data-compression.info/Algorithms/EC/","timestamp":"2014-04-16T16:54:33Z","content_type":null,"content_length":"128083","record_id":"<urn:uuid:24dc18b8-8687-40ee-a71d-51ed722c88a0>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
correlation of IC50 to receptors/enzyme molecules
Dr Engelbert Buxbaum engelbert_buxbaum at hotmail.com
Mon Feb 9 10:27:56 EST 2004
interpreneur_org at yahoo.com wrote:
> In a binding or kinetic assay, how does the IC50 of a compound change
> with respect to the number of receptors/enzyme molecules available? Is
> it linear (e.g: 10 times more protein leads to a 10 fold increase in the
> IC50)? Or not? And what is the mathematical equation explaining this?
Maximal binding (or enzymatic activity) depends on the substrate
concentration, Kd (or Km) does not, at least if the concentration of
protein molecules is much smaller than that of the free ligand F:
B = Bmax * F / (Kd + F)
However, if we use the usual approximation of F = T and increase the
protein concentration to more than 0.1*F, a change in _apparent_ Kd
will be observed, because binding of the ligand to the protein will
significantly reduce the concentration of free ligand. If you replace
the free ligand concentration F by (T-B) (total minus bound ligand) and
solve the resulting quadratic equation, the resulting "Langmuir
isotherm" will correctly describe binding under these conditions, and
you will see that the _true_ Kd is still independent of protein concentration:
B = Bmax * F / (K_d + F) = Bmax * (T-B) / (K_d + T - B)
Separation of variables yields:
0 = -B^2 + B*(K_d + T + Bmax) - Bmax * T
which is a quadratic equation in standard form. The solution is:
B = -1/2 * (-(K_d + T + Bmax) + sqrt((K_d + T + Bmax)^2 - 4 * Bmax * T))
Note that of the two solutions of the quadratic equation only the one
given here is physically meaningfull, as there is no such thing as a
negative concentration.
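The closed-form root above is easy to check numerically. The sketch below (my own, with arbitrary illustrative values Bmax = 2, Kd = 1, T = 5) solves the quadratic and confirms that the result also satisfies the original binding equation B = Bmax * F / (Kd + F) with F = T - B:

```python
from math import sqrt

def bound(Bmax, Kd, T):
    """Physically meaningful root of B^2 - B*(Kd + T + Bmax) + Bmax*T = 0."""
    s = Kd + T + Bmax
    return 0.5 * (s - sqrt(s * s - 4.0 * Bmax * T))

Bmax, Kd, T = 2.0, 1.0, 5.0
B = bound(Bmax, Kd, T)
F = T - B                          # free ligand = total minus bound
print(B, Bmax * F / (Kd + F))      # both sides of B = Bmax*F/(Kd+F) agree
```

The other root of the quadratic exceeds both Bmax and T, which is why only this one is physically meaningful.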
This consideration is of course important only in binding studies, in an
enzymatic assay the protein concentration is virtually always much lower
than that of the substrate (otherwise measuring initial velocities would
become technically very demanding).
More information about the Proteins mailing list | {"url":"http://www.bio.net/bionet/mm/proteins/2004-February/011455.html","timestamp":"2014-04-18T17:00:45Z","content_type":null,"content_length":"4289","record_id":"<urn:uuid:7166f95b-8d7d-4e1b-9bad-ce3d86817d82>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00305-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
If you bought a car at $1900 which depreciates 20% each year, what will it be worth after 4 years? A. $578 B. $778 C. $988 D. $1,588
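The arithmetic behind the question can be checked directly (a worked sketch, not part of the original page): depreciating 20% per year means multiplying the value by 0.8 each year.

```python
value = 1900
for year in range(4):
    value *= 0.80        # lose 20% of the current value each year

print(round(value))      # 778, i.e. answer B
```

Equivalently, 1900 * 0.8**4 = 778.24, which rounds to the listed option $778.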
| {"url":"http://openstudy.com/updates/5165f9dce4b066fca6614afd","timestamp":"2014-04-25T08:29:25Z","content_type":null,"content_length":"109613","record_id":"<urn:uuid:37b67cdc-a763-4acb-acb0-ec5c507c93e6>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Mechanics problem
February 25th 2009, 06:32 AM
[SOLVED] Mechanics problem
Can anyone help with solving this problem please:
Two rigid square plates, of different materials and with mass per unit area equal to λ1and λ2 respectively, are joined along one edge. The system is subject to gravity and can rotate freely
around the pivot point A in the figure, which is fixed.
(see attached drawing)
Compute the angle formed by the segment AB with the vertical direction when the system is in equilibrium.
I'm having trouble understanding what exactly is being asked in this question. Can anybody shed any light on this please?
February 25th 2009, 08:19 AM
Equilibrium position
Hello jackiemoon
Can anyone help with solving this problem please:
Two rigid square plates, of different materials and with mass per unit area equal to λ1and λ2 respectively, are joined along one edge. The system is subject to gravity and can rotate freely
around the pivot point A in the figure, which is fixed.
(see attached drawing)
Compute the angle formed by the segment AB with the vertical direction when the system is in equilibrium.
I'm having trouble understanding what exactly is being asked in this question. Can anybody shed any light on this please?
This question is all about finding the position of the centre of mass of the body, and saying that it lies directly beneath A.
So, if the squares have sides of length $2a$, then their masses are $4a^2\lambda_1$ and $4a^2\lambda_2$.
In the attached diagram, then, take moments about G, the centre of mass:
$4a^2\lambda_1PG = 4a^2\lambda_2GQ$, where P and Q are the centres of the two squares
$\Rightarrow PG = \frac{\lambda_2}{\lambda_1}GQ$
$= \frac{\lambda_2}{\lambda_1+\lambda_2}\times PQ$
$= \frac{\lambda_2}{\lambda_1+\lambda_2}\times 2a$
$\Rightarrow GO = a - PG = a - \frac{2\lambda_2}{\lambda_1+\lambda_2}a$
$= \frac{a(\lambda_1 - \lambda_2)}{\lambda_1 + \lambda_2}$
Now when the body is freely suspended from A, the line AG is vertical, and AB makes an angle $\theta$ with this line, where
$\tan\theta = \frac{GO}{AO}=\frac{GO}{a}$
$= \frac{\lambda_1 - \lambda_2}{\lambda_1 + \lambda_2}$
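The result can be sanity-checked numerically by computing the centre of mass directly. The coordinates below are my own choice of setup (not from the thread): the common edge lies on the y-axis with A at its top end, so A = (0, 2a) and O = (0, a).

```python
# Squares of side 2a joined along the y-axis; A = (0, 2a), O = (0, a).
# Square 1 (density lam1) occupies x in [-2a, 0]; square 2 (lam2) x in [0, 2a].
from math import atan, isclose

a, lam1, lam2 = 1.0, 3.0, 1.0
m1, m2 = 4 * a * a * lam1, 4 * a * a * lam2   # masses of the two plates
xP, xQ = -a, a                                 # x-coordinates of the centres P, Q

xG = (m1 * xP + m2 * xQ) / (m1 + m2)           # centre-of-mass offset from the edge
theta_numeric = atan(abs(xG) / a)              # AG vertical => tan(theta) = GO / AO
theta_formula = atan((lam1 - lam2) / (lam1 + lam2))

print(isclose(theta_numeric, theta_formula))   # True
```

With lam1 = 3 and lam2 = 1 both expressions give tan(theta) = 1/2, matching the closed form above.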
February 25th 2009, 09:19 AM
Wow! Thanks for the help and great explanation Grandad. You've been very helpful. | {"url":"http://mathhelpforum.com/advanced-applied-math/75684-solved-mechanics-problem-print.html","timestamp":"2014-04-19T16:04:22Z","content_type":null,"content_length":"8939","record_id":"<urn:uuid:70bc3975-73a6-4918-9c62-36146c7e574b>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00470-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multiplication Rule (Probability "and")
Multiplication Rule (Probability "and") (Jump to: Lecture | Video )
Independent Events
Two events are independent if they do not affect one another.
For example: rolling a five and then rolling a three with a normal six-sided die. These events are independent because rolling a five does not change the probability of rolling a three (it is still 1/6). The same is true the other way around.
What is the probability of rolling a 5 and then a 3 with a normal six-sided die? To answer this, we have the Multiplication Rule for Independent Events:
P(A and B) = P(A) × P(B)
Here P(5 and 3) = 1/6 × 1/6 = 1/36, so there is a 1 in 36 chance of rolling a 5, and then rolling a 3.
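The independent-events rule can be sketched in code (a direct product, plus a brute-force check over all 36 equally likely outcomes):

```python
from fractions import Fraction
from itertools import product

# Multiplication rule: P(5 then 3) = P(5) * P(3)
p_rule = Fraction(1, 6) * Fraction(1, 6)

# Brute force: enumerate every (first roll, second roll) pair.
outcomes = list(product(range(1, 7), repeat=2))
p_count = Fraction(sum(1 for o in outcomes if o == (5, 3)), len(outcomes))

print(p_rule, p_rule == p_count)   # 1/36 True
```

Exactly one of the 36 pairs is (5, 3), so the enumeration agrees with the rule.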
Dependent Events
Two events are dependent if they do affect one another.
For example: drawing a king and then drawing a queen from a deck of cards, without putting the king back. These events are dependent because drawing a king changes the probability of drawing a queen.
Without the king in the deck the probability of drawing a queen changes from 4/52 to 4/51.
What is the probability of drawing a king and then drawing a queen from a deck of cards? To answer this, we have the General Multiplication Rule for Dependent/Conditional Events:
P(A and B) = P(A) × P(B | A)
Here P(king and queen) = 4/52 × 4/51 = 16/2652 ≈ 0.006, so there is roughly a 0.6% chance of drawing a king, and then drawing a queen without replacement from a deck of cards. | {"url":"http://www.statisticslectures.com/topics/multiplicationrule/","timestamp":"2014-04-16T13:05:11Z","content_type":null,"content_length":"8648","record_id":"<urn:uuid:9e75dd1b-b1f5-4a88-a9c7-0addc18d3a6a>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with a line integral (Fundamental Theorem of Line Integrals)
December 11th 2011, 11:28 AM #1
Dec 2011
Help with a line integral (Fundamental Theorem of Line Integrals)
I am trying to use the Fundamental Theorem of Line integrals to solve this problem. Basically, my problem is to find out if the following function is a gradient function:
H = -yi + xj
I tried integrating both terms, but I can't seem to find any function where the gradient is equal to H. The path I am moving along is from the origin to (3,0), and then from there in a circle of
radius 3 around the origin to the point (3/sqrt(2),3/sqrt(2)).
Could anyone help me here?
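One standard first check (not stated in the thread itself) is the mixed-partials / curl test: for H = P i + Q j with P = -y and Q = x, a gradient field would need dQ/dx = dP/dy. A quick numerical version of that test:

```python
def P(x, y): return -y
def Q(x, y): return x

def curl_z(x, y, h=1e-6):
    """dQ/dx - dP/dy via central differences."""
    dQdx = (Q(x + h, y) - Q(x - h, y)) / (2 * h)
    dPdy = (P(x, y + h) - P(x, y - h)) / (2 * h)
    return dQdx - dPdy

print(curl_z(1.3, -0.7))   # ≈ 2, nonzero everywhere
```

Since the curl is 2 everywhere rather than 0, no potential function exists, so the Fundamental Theorem of Line Integrals does not apply and the line integral has to be computed directly along the given path.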
Re: Help with a line integral (Fundamental Theorem of Line Integrals)
December 11th 2011, 11:30 PM #2 | {"url":"http://mathhelpforum.com/calculus/194025-help-line-integral-fundamental-theorem-line-integrals.html","timestamp":"2014-04-17T10:34:40Z","content_type":null,"content_length":"34032","record_id":"<urn:uuid:6abdbabb-50eb-402f-9075-feb73379cc06>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00636-ip-10-147-4-33.ec2.internal.warc.gz"} |
Differentiation - Introduction | Mathematics | Skoola
• andy chuks Differentiation - Introduction
When we talk about differentiation we mean finding the derivative (measure of how a function changes as its input changes).
Differentiation is a method to compute the rate at which a dependent output y changes with respect to the change in the independent input x. This rate of change is called the derivative of y with
respect to x. In more precise language, the dependence of y upon x means that y is a function of x. This functional relationship is often denoted y = f(x), where f denotes the function. If x and
y are real numbers, and if the graph of y is plotted against x, the derivative measures the slope of this graph at each point.
The simplest case is when y is a linear function of x, meaning that the graph of y against x is a straight line. In this case, y = f(x) = m x + b, for real numbers m and b, and the slope m is
given by
m = change in y / change in x = Δy/Δx
where the symbol Δ (the uppercase form of the Greek letter Delta) is an abbreviation for "change in." This formula is true because
y + Δy = f(x + Δx) = m(x + Δx) + b = mx + b + mΔx = y + mΔx.
It follows that Δy = mΔx.
28 October 2010 Comment
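The slope formula above can be illustrated numerically (a sketch of mine, not part of the original article): for a linear function, the difference quotient Δy/Δx returns m no matter which x or Δx you pick.

```python
def f(x):
    return 4 * x + 2          # linear: y = m*x + b with m = 4, b = 2

def slope(f, x, dx):
    return (f(x + dx) - f(x)) / dx   # Δy / Δx

print(slope(f, 10.0, 5.0))    # 4.0
print(slope(f, -3.0, 0.25))   # 4.0 -- same m at every x and every Δx
```

For non-linear functions this quotient depends on Δx, and the derivative is obtained by letting Δx shrink toward zero.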
□ Fatai Adelakun what is the meaning of dy/dx
0 0 13 November 2010
□ andy chuks dy/dx is the differentiation (change in value) of y with respect to x
1 0 13 November 2010
□ Fatai Adelakun gr8
0 0 17 November 2010
| {"url":"http://skoola.com/lecturepage.php?id=1328&cid=22","timestamp":"2014-04-20T20:56:57Z","content_type":null,"content_length":"27654","record_id":"<urn:uuid:642a358d-2a2b-4890-b771-9ff3cd987f54>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
Gospel Traditions: The Spreadsheet
After the discussion in the last post about the authorship of the Gospels, I've created a spreadsheet model, in Open Office format, to illustrate the probability calculus for whether the Gospels were
written by Matthew, Mark, Luke, and John. This post probably won't make much sense if you haven't read the comments on the previous one; if you're not a mathy person just skip it. The spreadsheet
file is at the bottom of this post.
It's just a model: I don't claim to have incorporated every possible effect that could be important. I'm just trying to illustrate how the short chains of oral transmission to Papias and Irenaeus
can provide significant information. This is a dynamic spreadsheet, so if you change one box the rest of the numbers will change accordingly. The idea is that if you don't accept my numbers, you
can change it yourself to see what happens, rather than just griping at me that it should be different.
I've used the following hypothetical likelihoods for each Gospel/early writer to be pseudonymous instead of genuine, from the perspective of a person who isn't sure whether or not to believe
Matthew: .1
Mark: .01
Luke: .001
John: .03
Papias: .05
Irenaeus: .001
Eusebius: negligible
This is based on the names attached to the documents, as well as internal evidence and the fact that they were received as genuine by the Church, but not yet taking into account the testimony of
Papias through John (concerning Matthew and Mark), as quoted by Eusebius, and Irenaeus disciple of Polycarp disciple of John (for all 4 gospels).
The odds for Matthew are higher than the others because that is the only one where there were significant arguments for pseudonymity, instead of just arguments that it could have happened. For me,
the actual names written on the documents, and their acceptance by the church, are better evidence than any of the evidence against, hence the numbers above. Papias is more likely to be pseudonymous
since other than the few fragments preserved by Eusebius, we have to rely on the judgement of the early church about this.
I haven't taken into account possible lack of independence between the pseudonymity of the 4 gospels. Although the Gospels do incorporate text from each other, they were almost certainly written by
different individuals, so they aren't strongly dependent. Nevertheless, there's some probability dependence here in their later cultural acceptance by the Church. If you want to consider different
odds for their dependence, you could put that in by hand in the "scenarios" section at the bottom (where e.g. "13" would mean the probability of the 1st and 3rd Gospels are pseudonymous, relative to
the probability of all of them being genuine.)
In accordance with the arguments here, I've assigned odds of .001 per century until the first branch point for each document. I've assumed Eusebius had only one copy of Papias, and I just guessed 4
centuries for Irenaeus since I couldn't figure out the first branch point for him from what I could find online. I'm pessimistically assuming that if a textual corruption has occurred, it makes the
entire document unreliable. I'm also assuming that any given nongospel writer has a .01 chance of being totally unreliable due to e.g. deliberate deception.
Finally, I'm assuming that for any two people related by an oral testimonial link, there's a .05 chance that a given fact about authorship will be garbled. I'm assuming optimistically there was only
one John and that Papias interviewed him directly, but this is balanced by not including any of the numerous other chains back to the apostles which Papias cites. I'm giving the garbling more odds
than any other form of error, but unlike the other errors I'm assuming that because this is inadvertent, if one piece of data in the document is garbled, the rest are all unaffected.
(In order to make the math easier when considering multiple gospels, I had to break out the total errors, the garbling errors and the nongarbling errors into three separate rows).
I haven't included the possibility that we might be wrong about the chains of testimony themselves, but you're free to play around with inserting extra people or changing their dependence or such.
I got the following probability odds for the Gospels being pseudonymous:
Matthew: .007
Mark: .0007
Luke: .00017
John: .0036
And for different numbers of gospels being genuine, the probability price you pay is about $10^{-2}$ for one Gospel being pseudonymous, $5 \times 10^{-5}$ for two, $3 \times 10^{-7}$ for three, and
$3 \times 10^{-10}$ for all four. This is before considering prior probabilities.
Interdependence between the four Gospels will make these last figures smaller, but I think any reasonable model will have some significant suppression of probability there.
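The sort of calculation involved can be sketched under a pure independence assumption (this is my own simplification; the spreadsheet's model includes more structure than this, and the post notes that interdependence would lower the multi-gospel figures):

```python
from itertools import combinations
from math import prod

# Per-gospel pseudonymity probabilities quoted in the post (Mt, Mk, Lk, Jn).
p = [0.007, 0.0007, 0.00017, 0.0036]

def prob_exactly(k, probs):
    """P(exactly k of the events occur), assuming independence."""
    total = 0.0
    for idx in combinations(range(len(probs)), k):
        total += prod(probs[i] if i in idx else 1 - probs[i]
                      for i in range(len(probs)))
    return total

for k in range(5):
    print(k, prob_exactly(k, p))   # mass falls by orders of magnitude per extra k
```

Each additional pseudonymous gospel multiplies in another small factor, which is why the price for two, three, or four pseudonymous gospels drops so steeply.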
All right then, here it is. Enjoy!
UPDATE: Replaced incorrect "genuine" with "pseudonymous" above.
UPDATE 2: Fixed a bug in the spreadsheet. See the comments section below. This doesn't change things much for the numbers I provided, but might affect things if you change the input assumptions.
See oldspreadsheet to look at the old version.
6 Responses to Gospel Traditions: The Spreadsheet
1. Most of my probability mass is in variations of "stuff got corrupted/exaggerated before it was written down." Conspiracies are less likely than honest mistakes (people misunderstanding what they
saw/heard or failing to verify sources, etc) but still more likely than people rising from the dead.
Incidentally, to see if the math is correct, I tried changing the all sources deceptive/honest fields in the spreadsheet to .99/.01 and it doesn't seem to change the final numbers much, so either
something is wrong or I don't understand the spreadsheet, probably the latter. ^^ (I expected that doing this ought to make the conclusion just as firm in the opposite direction.) (Note: .99 is
not my true belief, nor is .01!)
2. In the upper left hand corner of the spreadsheet, you'll see that I have a space in which I assigned likelihood ratios to each of the 4 Gospels being pseudonymous, equal to (.1, .01, .001, .03),
prior to the testimony of Papias and Irenaeus. These low odds contribute significantly to the final probabilities, however you should notice that the addition of the testimony of P & I does make
a noticable change.
Also, I'm interpreting deceptiveness to imply that P & I provide zero evidence for traditional authorship, not that this would be evidence against traditional authorship. That's another reason
why .99 and .01 aren't symmetric in the way you expected.
I just realized, however, that the spreadsheet does not give correct odds when the Gospel pseudonymity odds are taken to be comparable to 1. I was using the approximation where, if the probability is p, the odds ratio is p:1. This approximation is OK when p is small (as it is for the numbers I provided), but if p isn't small one should use the odds ratio p:(1-p). I've created a revised version of the spreadsheet which fixes this; see the main post above. It also contains a new column which converts the odds ratios that N gospels are pseudonymous into a normalized probability.
In the revised spreadsheet, you could go and change the odds of the Gospels being pseudonymous to (.5, .5, .5, .5) – i.e. no evidence either way until taking into account P & I. You'll see that the chance of all 4 gospels being pseudonymous is about .006, due to the testimony of P & I. On the other hand, if we use (.15, .15, .15, .15), the probability goes down to under $10^{-5}$, with only about .003 odds that at least 2 gospels are pseudonymous. Even if we raise the odds of deliberate deception to .05 per author, you still pay a significant price for pseudonymity. Try it and see.
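The odds-versus-probability correction discussed above is easy to check numerically. The following is an illustrative Python sketch (my own, not taken from the spreadsheet) comparing the small-p approximation "odds = p:1" with the exact conversion "odds = p:(1-p)"; it shows the approximation is fine for the small priors given but breaks down as p approaches .5.

```python
# Compare the small-p odds approximation against the exact conversion.
# This is a toy illustration, not the actual spreadsheet formulas.

def odds_exact(p):
    """Exact odds in favor of an event with probability p, i.e. p : (1 - p)."""
    return p / (1.0 - p)

def odds_approx(p):
    """Small-p approximation: treat the odds as simply p : 1."""
    return p

for p in (0.001, 0.03, 0.15, 0.5):
    exact, approx = odds_exact(p), odds_approx(p)
    rel_err = abs(exact - approx) / exact
    print(f"p={p:<5} exact={exact:.4f} approx={approx:.4f} rel.err={rel_err:.1%}")
```

At p = 0.001 the relative error is about 0.1%, while at p = 0.5 the approximation underestimates the odds by a factor of two, which is why the revision matters most when the pseudonymity probabilities are taken to be comparable to 1.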
So even under assumptions much more cynical than mine, there still seem to be a few orders of magnitude of likelihood ratio which can be extracted from the situation. If we assume that the most efficient route for the skeptic involves taking the "liberal" view that the Gospels are mostly pseudonymous, then these likelihood ratios against pseudonymity convert directly into evidence for Christianity. This would be on top of any evidence for Christianity coming from "minimal facts" about the early Christian claims, which should be accepted even on the liberal view.
Also, I'm interpreting deceptiveness to imply that P & I provide zero evidence for traditional authorship, not that this would be evidence against traditional authorship. That's another
reason why .99 and .01 aren't symmetric in the way you expected.
Hm, I'm just thinking out loud about whether this is the correct thing to do. I can see several possible ways reality could be arranged. Suppose P & I literally knew nothing about the situation
at all; then their answers ought to be completely uncorrelated with the truth, and they'd indeed provide zero evidence. But "deceptiveness" implies to me the situation where P & I do actually
know stuff, but are writing down either exaggerations, falsehoods, or a subset of truth carefully chosen to cause us to believe an untruth. I'm not sure what to expect under this scenario, but
probably anti-correlation (as I was originally thinking) isn't right.
More later when I look at the revised spreadsheet.
4. I looked at the revised spreadsheet. I'm still not 100% clear on what everything means, but it seems that when I plug in some priors that more realistically represent my state of knowledge, it
does indeed change in ways I'd expect. When I plugged in numbers that seemed reasonable to me, it claimed it was most likely that two of the gospels were pseudonymous, which also seems reasonable
to me. Yes, I'm much more cynical with my priors than you. :) However, if one looks here, one can see that I'm usually only slightly underconfident: http://predictionbook.com/users/lavalamp
5. You haven't posted your numbers, so I can only speak generally. But even under these more cynical assumptions, you seem to have concluded that (with some unknown probability ratio) probably about
two of the gospels are genuine.
If two of the gospels are genuine, this makes it more likely that what they say is true. How much more likely, depends on a whole host of other questions, but it seems clear that there's some
additional evidence for Christianity here. How much, I leave to you to decide yourself since I don't have much more time to get into this now.
Ideally, this series would have concluded with a post on this topic, since it needs to be addressed to complete the historical argument. It is true that some people do lie, but I think the
gospels have several literary features which correlate with honest reporting. I might get to posting on this eventually, but I've just had a bunch of wisdom teeth removed, so I'm a bit down for
the count right now.
6. FWIW, my position is that Luke seems likely to have been written by Luke or someone very good at pretending to be Luke. John was likely written by John, but with a decent chance of having been
written on John's behalf, on account of its age. Matthew and Mark I'm more skeptical about. It's mostly a moot point to me, though, because I don't view Matthew, Mark, and Luke as independent,
and John is so late I trust it much less (and it's also not completely independent).
If two of the gospels are genuine, this makes it more likely that what they say is true. How much more likely, depends on a whole host of other questions, but it seems clear that there's some
additional evidence for Christianity here.
I'm not so sure about that. As I think I've said before, most of my probability mass is in things having gotten garbled/exaggerated/misunderstood/misinterpreted (g's scenario being one member of this set) before they got written down, and I'm honestly not sure what genuineness or lack thereof does to this scenario; I can think of arguments in both directions. I guess my numbers have been mostly under the assumption that the books are genuine enough, so positive evidence of forgery would be evidence against Christianity, but additional evidence of genuineness doesn't help much – effectively I've already updated on it. I think the date affects my confidence a lot more than the author, especially the "latest possible date" (I forget the fancy Latin term).
It is true that some people do lie, but I think the gospels have several literary features which correlate with honest reporting. I might get to posting on this eventually, but I've just had
a bunch of wisdom teeth removed, so I'm a bit down for the count right now.
Agree, if the gospels are lies they're well crafted ones; to repeat myself, I think it's more likely that the authors were honestly mistaken. Anyway, I hear wisdom teeth removals are no fun--so
take it easy. I promise to ignore anything you write while on Vicodin. :)
This entry was posted in Theological Method. Bookmark the permalink. | {"url":"http://www.wall.org/~aron/blog/gospel-traditions-the-spreadsheet/","timestamp":"2014-04-19T10:32:46Z","content_type":null,"content_length":"43902","record_id":"<urn:uuid:2822f170-1d90-4074-b415-f802e8013750>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00033-ip-10-147-4-33.ec2.internal.warc.gz"} |
5 miles is how many yards
You asked:
5 miles is how many yards
Say hello to Evi
Evi is our best-selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire. | {"url":"http://www.evi.com/q/5_miles_is_how_many_yards","timestamp":"2014-04-17T04:11:48Z","content_type":null,"content_length":"56933","record_id":"<urn:uuid:6fb83188-a397-420e-a1dc-87b18a7cd949>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Tutors
Woodville, AL 35776
Physics, Physical Science, Algebra, Trigonometry
I am a Senior at UA Huntsville majoring in physics and education with a math minor. I have a passion for learning and teaching. I am a US Navy Veteran and am older than most teacher candidates. I am a member of the physics honor society Sigma Pi Sigma, and the Education...
Offering 5 subjects including physics | {"url":"http://www.wyzant.com/Gurley_AL_physics_tutors.aspx","timestamp":"2014-04-21T12:36:49Z","content_type":null,"content_length":"55035","record_id":"<urn:uuid:1575f305-2ac6-43b3-8d30-de4abef453d1>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00533-ip-10-147-4-33.ec2.internal.warc.gz"} |
• B. A. Murtagh and M. A. Saunders.
MINOS 5.5 User's Guide, Report SOL 83-20R,
Dept of Operations Research, Stanford University (Revised Jul 1998).
• M. A. Saunders.
Cholesky-based methods for sparse least squares: The benefits of regularization,
Report SOL 95-1, Dept of Operations Research, Stanford University (1995). In L. Adams and J. L. Nazareth (eds.), Linear and Nonlinear Conjugate Gradient-Related Methods, SIAM, Philadelphia,
92-100 (1996).
• P. E. Gill, W. Murray, M. A. Saunders, J. A. Tomlin, and M. H. Wright,
George B. Dantzig and systems optimization,
Journal on Discrete Optimization 5(2), 151-158 (2008), in memory of George B. Dantzig. | {"url":"http://www.stanford.edu/~saunders/papers.html","timestamp":"2014-04-20T11:20:49Z","content_type":null,"content_length":"16590","record_id":"<urn:uuid:3314aec4-845f-4c11-8d1d-daec85c2e1d1>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00444-ip-10-147-4-33.ec2.internal.warc.gz"} |
Newtown Square Algebra Tutor
Find a Newtown Square Algebra Tutor
...I have life guarded in a variety of settings including schools, universities, summer camps and private beaches. As a Masters level clinical therapist, I have worked with children and
adolescents with a variety of behavioral and emotional issues. I have done psychological assessment and therapy with individuals with ADD/ADHD.
38 Subjects: including algebra 2, English, writing, reading
...There and since, I have worked with several students with ADD and ADHD, both in their math content areas and with executive skills to help them succeed in all areas of their lives. I have tutored test-taking strategies for many exams, including the Praxis. I received a perfect score on the math section of the Praxis I, and scored in the upper 170s for reading and writing.
58 Subjects: including algebra 1, algebra 2, chemistry, reading
...I can teach you how to proofread your own writing, which is critical to achieving competent writing skills. I tutored elementary math on a daily basis for eight years. I have experience with
the following programs: Developmental Math, Miquon Math, and Teaching Textbooks.
23 Subjects: including algebra 2, algebra 1, reading, writing
...I have a superior knowledge in Organic Chemistry having served as a Teaching assistant in Graduate School, teaching labs, recitations, making exams and grading exams. I have over 8 publications
and presentations in the field of Organic Chemistry from Graduate School and work as a Process Organic...
26 Subjects: including algebra 2, algebra 1, chemistry, geometry
I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because
this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including algebra 2, algebra 1, calculus, physics | {"url":"http://www.purplemath.com/Newtown_Square_Algebra_tutors.php","timestamp":"2014-04-17T08:01:32Z","content_type":null,"content_length":"24205","record_id":"<urn:uuid:39f91a7c-265d-4974-804b-753e37afa904>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00531-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculus: The Product Rule Video | MindBites
Calculus: The Product Rule
About this Lesson
• Type: Video Tutorial
• Length: 20:44
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 224 MB
• Posted: 11/18/2008
This lesson is part of the following series:
Calculus (279 lessons, $198.00)
Calculus Review (48 lessons, $95.04)
Calculus: Final Exam Test Prep and Review (45 lessons, $64.35)
Calculus: Techniques for Finding the Derivative (8 lessons, $15.84)
Calculus: The Product and Quotient Rules (2 lessons, $5.94)
In this lesson, we learn how to take the derivative of a product of functions by applying the product rule of differentiation. The derivative of the product of two differentiable functions is NOT equal to the product of the derivatives of the two functions. Instead, you must learn and apply the product rule: the derivative of f(x)g(x) equals f(x) times the derivative of g(x), plus g(x) times the derivative of f(x).
Taught by Professor Edward Burger, this lesson was selected from a broader, comprehensive course, Calculus. This course and others are available from Thinkwell, Inc. The full course can be found at http://www.thinkwell.com/student/product/calculus. The full course covers limits, derivatives, implicit differentiation, integration or antidifferentiation, L'Hôpital's Rule, functions and
their inverses, improper integrals, integral calculus, differential calculus, sequences, series, differential equations, parametric equations, polar coordinates, vector calculus and a variety of
other AP Calculus, College Calculus and Calculus II topics.
Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut College.
He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger has
won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association of
America. In 2006, Reader's Digest named him in the "100 Best of America".
Prof. Burger is the author of over 50 articles, videos, and books, including the trade book, Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas and of the textbook The Heart
of Mathematics: An Invitation to Effective Thinking. He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math journals,
including The Journal of Number Theory and American Mathematical Monthly. His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of numbers, and the
theory of continued fractions.
Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
Recent Reviews
Quite helpful in going over the product rule
~ travis5818
As a follower of cramster and calcchat.com, I was pleased to come across mindbites while searching for particularly difficult concepts for me to grasp.
Professor Burger looks like he definitely knows what he's doing and it's fairly easy to follow him in what he does. I learned this a little quicker the first time in my regular Cal I class, but
this refresher was nice and straightforward. The guy is definitely passionate about math, that's for sure, lol. It shows too, which is outstanding.
Computational Techniques
The Product and the Quotient Rules
The Product Rule Page [1 of 4]
Now we know how to take derivatives of functions that can actually be pretty complicated, strung together with a lot of plus and minus signs. You can have 2x^3 - 7x^2 + √x, and you can now find the derivative pretty easily by looking at each term separately, taking the derivatives, and then stringing them all together. What I now want to ask you is the following challenge question. What happens if you want to take the derivative of a product? In fact, let me pose that as a sort of meta-question where I won't write down a particular problem. Suppose you want to find the derivative of a product – guess a formula for taking the derivative of a product. For example, suppose you wanted to take the derivative of the product of two functions – what would you do? Remember, to take the derivative of the sum of two functions, you just take the derivatives of each of the functions and then add them up. So right now, let's have you make a guess, and I'll come back and see if you're right.
Well, let's see how you did. I think this actually may turn out to be a trick question, but we'll see. Let's look at a specific example to get warmed up. Take the function f(x) = (4x^3 + 1)(x^2 - 1). Notice, by the way, that I do have a product of two separate functions. Here, I think, is a great guess, and maybe you made this guess: take the derivative of this, take the derivative of that, and multiply them together. So let's do that. In fact, let me do the great guess in this red color, because I think it's such a great guess.

So I'll do just like we did with addition: the derivative of the first factor times the derivative of the second. The derivative of 4x^3 + 1 is not too bad. I bring the 3 out in front to make 12, and x is raised to the power 3 - 1, which is 2; the +1 contributes zero, because 1 is a constant and the derivative of a constant is zero. I could write the plus zero there, but I won't. So that piece is 12x^2. Then I multiply by the derivative of the second factor: bring the 2 out in front, giving 2x^1, and the constant again contributes zero, so that piece is 2x. And so we see the guess is (12x^2)(2x) = 24x^3, since 2 times 12 is 24 and x^2 times x gives x^3.
So that, I think, is a great guess. How can we determine whether it's the right answer? How can we, in general, determine whether the rule we've created – take the derivative of this and multiply by the derivative of that – is right or not? Well, one way is to take the derivative in a manner in which we're certain of the answer and then compare it to this guess. How could I take the derivative of this and be certain of the answer? One thing I could do is multiply it all out – FOIL it – and then take the derivative of each term using the procedure we know to be correct. So let's try that right now. Let me untangle all of that.

By the way, I hope you all know what FOIL means; maybe some people didn't learn it. I learned this FOIL thing as a kid. FOIL stands for "First, Outside, Inside, Last." The idea is that if you have two binomials you want to multiply together, you multiply the first terms (the F), then the outside terms (the O), then the inside terms (the I), then the last terms (the L), and add it all up. That's why I keep saying FOIL. I hope you knew that, but if not, I'm sorry – I should have told you earlier.
But now let me FOIL this out; I'll do it pretty fast. 4x^3 times x^2 is going to be 4x^5 – remember, I add the exponents when I multiply powers of the same base. The outside terms give a -4x^3, the inside terms give a +x^2, and the last terms give a -1. So f(x) = 4x^5 - 4x^3 + x^2 - 1. I just FOILed it out, and now I can take the derivative. Remember, the red was just our guess; now we're going to say what the derivative really is.

I wonder if people would say, "Gee, Professor Burger, you shouldn't write down stuff like that, because someone might take it out of context and think you're actually claiming that to be the answer." Well, that's their problem – they shouldn't be taking it out of context. You're either going to watch this thing or not, that's what I say. Making guesses, by the way, is great.

Let's compute the actual answer. I'm going to take the derivative of 4x^5 - 4x^3 + x^2 - 1. It's just a sum and difference of terms, so I know how to do that: take each one individually, take its derivative, and string them all together. So the actual retail value is: for 4x^5, bring down the 5 to get 20x^(5-1) = 20x^4. Subtract off: bring the 3 down in front of the 4 to get 12x^(3-1) = 12x^2. Then the +x^2 gives a 2 out in front, 2x^(2-1) = 2x. The derivative of the constant is zero. So the answer is 20x^4 - 12x^2 + 2x. It looks very different from my guess. My guess must be wrong.
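The mismatch is easy to confirm numerically. Here's a small Python sketch (my addition, not part of the lecture) that estimates the derivative of f(x) = (4x^3 + 1)(x^2 - 1) with a central finite difference and compares it against both the naive guess 24x^3 and the expanded derivative 20x^4 - 12x^2 + 2x.

```python
# Finite-difference check: the naive "product of derivatives" guess fails,
# while the derivative obtained by expanding first matches.

def f(x):
    return (4 * x**3 + 1) * (x**2 - 1)

def numeric_derivative(g, x, h=1e-6):
    # Central difference: (g(x + h) - g(x - h)) / (2h)
    return (g(x + h) - g(x - h)) / (2 * h)

x = 2.0
estimate = numeric_derivative(f, x)
naive = 24 * x**3                       # (12x^2)(2x), the tempting wrong guess
actual = 20 * x**4 - 12 * x**2 + 2 * x  # from expanding and differentiating

print(estimate, naive, actual)
```

At x = 2 the finite difference gives about 276, matching 20(16) - 12(4) + 2(2) = 276, while the naive guess gives 24(8) = 192.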
Now some people would be saddened by this, but I'm not, because I learned a lesson. I learned that you can't take the derivative of a product by just taking the derivative of the first and the derivative of the second and multiplying them. That doesn't work, because the real answer is much more complicated. Look at it – it doesn't look like it has anything to do with these two factors right here. It's really just a complicated-looking thing.

In fact, I don't see how you could look at each of these factors individually, do something to them, and somehow get this answer. Maybe there is no way of doing it. Now I'm going to show you a little magic trick, and first I'm going to make this disappear. In fact, I'll just put it over there.
So I'll put the question over there – the original function – and the answer over there as well, and let me write them up for reference. The original function was f(x) = (4x^3 + 1)(x^2 - 1), and we computed that the actual derivative is 20x^4 - 12x^2 + 2x. I think we're all caught up now. And my question to you, which I think is a real challenge, is this: how do we take those two factors separately, do something to them, and get the derivative to come out looking like this? We tried the naive thing, which was a natural thing to try, by the way – always try the easy stuff first: the derivative of this multiplied by the derivative of that. It didn't work, so now we have to think of something else. Well, now I'm going to show you a magic trick, so watch me, because this is real magic. I'm searching for a pattern. I'm going to take the 20x^4 and break it into two pieces.

So I write 20x^4 = 8x^4 + 12x^4 – I'm not changing anything, just splitting the term. If I write 8x^4, I owe you some, because I started with 20x^4; I owe you exactly 12 more, so I write those in. Everything else I keep the way it is: 8x^4 + 12x^4 - 12x^2 + 2x.
Now I want to rearrange them. Of course, when you add numbers, 7 + 3 is the same thing as 3 + 7, so you can use any order you want. Let me group 8x^4 with 2x, and group 12x^4 with -12x^2: (8x^4 + 2x) + (12x^4 - 12x^2). And now you'll notice I can factor some things out. From the first group I can factor out a common factor of 2x: taking 2x out of 8x^4 leaves 4x^3, and taking 2x out of 2x leaves just 1, so the first group is 2x(4x^3 + 1). Distribute that back in and you'll see exactly that.

Now what should I do with the second group? Let me factor out a common factor of 12x^2: taking 12x^2 out of 12x^4 leaves x^2, and taking 12x^2 out of -12x^2 leaves -1, so the second group is 12x^2(x^2 - 1). You can check that again. But something magical has happened. Look down: 4x^3 + 1 is exactly the first factor of the original function, and x^2 - 1 is exactly the second. So I've taken this mysterious-looking answer and found, magically, inside of it – voila! – bits of the original thing. Of course, there's all this other stuff left over.
But look at that other stuff – it's familiar too. The 2x is the derivative of x^2 - 1, and the 12x^2 is the derivative of 4x^3 + 1. So what have we discovered? We've discovered that there is, in fact, a formula – a system – for taking derivatives of products. It's not the naive one; it's a more elaborate one. You write down the first factor – not its derivative, the factor itself – and multiply it by the derivative of the second. Then you add to it the second factor multiplied by the derivative of the first. And you know what? That always works.

So we are now in a position to look at fancier methods for taking derivatives, and the first is the one we just discovered, known as the product rule. The product rule says the following: if you want to take the derivative of a product – something made up of the product of two functions – you take the first and multiply it by the derivative of the second, then add the second multiplied by the derivative of the first. That is the product rule.
Let me write that down for you; this is really fun. If you want to take the derivative of the product f times g, it turns out that (f·g)' = f·g' + g·f'. That is called the product rule: the first multiplied by the derivative of the second, plus the second multiplied by the derivative of the first.

By the way, how can you remember this? One way is to memorize the formula: the derivative of f times g is f times the derivative of g plus g times the derivative of f. Honestly, that's not the way I remember it. You may or may not like my method, but I'll tell you: I remember it in terms of first and second. In my mind, whenever I do a product rule problem, I think – this is actually what I say in my mind – the first times the derivative of the second, plus the second times the derivative of the first. That's how I think about the product rule.
Let's do an example. Let's find the derivative of p(x) = (5x^3 + 6x^2 - 1)(3x^9 - x + 7). You could multiply it all out like we did before and get the correct answer. Or, now that we're empowered with the product rule, we can just use it – look at these really intense functions we can now differentiate. I write down the first factor, 5x^3 + 6x^2 - 1, multiplied by the derivative of the second. So now I go off and figure out the derivative of 3x^9 - x + 7. I know how to do that: take each piece separately. Bring down the 9; 9 times 3 is 27, and x is raised to the 9 - 1 = 8 power, so the first piece is 27x^8. Then the -x: bring down the exponent 1, and x^(1-1) = x^0 is just 1, so the derivative of -x is just -1. The derivative of the constant 7 is zero. So the derivative of the second factor is 27x^8 - 1.

Now, this is a place where I always get confused, because it's a multi-step process. I've got to step back and re-chant the product rule to see where I am. And that's a great idea, by the way – whenever you get to this stage, go back, re-chant the product rule, and see where you are. The first times the derivative of the second – oh, okay – plus the second times the derivative of the first. So I write down the second, 3x^9 - x + 7, and multiply it by the derivative of the first. Compute the derivative of 5x^3 + 6x^2 - 1: bring the 3 out in front, 15x^(3-1) = 15x^2; bring the 2 down, 2 times 6 is 12, giving 12x^(2-1) = 12x; and the derivative of the constant is zero. So we get

p'(x) = (5x^3 + 6x^2 - 1)(27x^8 - 1) + (3x^9 - x + 7)(15x^2 + 12x),

and that really long expression is the derivative.
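As a sanity check on this worked example, here's a short Python sketch (my addition, not from the lecture) comparing the product-rule answer against a central-difference estimate of the derivative at a sample point.

```python
# Verify p'(x) = (5x^3 + 6x^2 - 1)(27x^8 - 1) + (3x^9 - x + 7)(15x^2 + 12x)
# against a numerical estimate of the derivative of p.

def p(x):
    return (5 * x**3 + 6 * x**2 - 1) * (3 * x**9 - x + 7)

def p_prime(x):
    # Product rule: first * (second)' + second * (first)'
    return ((5 * x**3 + 6 * x**2 - 1) * (27 * x**8 - 1)
            + (3 * x**9 - x + 7) * (15 * x**2 + 12 * x))

x, h = 1.1, 1e-7
estimate = (p(x + h) - p(x - h)) / (2 * h)
print(abs(estimate - p_prime(x)) / abs(p_prime(x)))  # tiny relative error
```

The relative error between the two is on the order of floating-point noise, confirming the product-rule expansion.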
So look at the functions we can now take the derivative of. Using the product rule, we can differentiate even really complicated functions that would require far too much work to multiply out first. With the product rule, no problem.

Let me do one last example using the product rule and then I'll let you try a whole bunch. In fact, you know what? I'll make up this example right now and have you try it first. Why should I be having all the fun here? You should be having some of the fun too. This is for both of us.

Here we go – have some fun with this one: take the derivative of (2x^4 - 7x - 3)(√x - 1/x + 1). Let me just remind you of the little chant: first times the derivative of the second, plus second times the derivative of the first. And I invite you to take up that chant when you're in the middle of the problem. Whenever you get stuck, or you forget, or you lose your way, don't panic – just chant the little mantra. Okay, see how you do, and then we'll do it together.

Well, how did you make out? Are you getting it? It takes a while, but I think once you get it, you'll feel really good. So we're going to use the product rule – I'll put it way down at the bottom here in case you want a look: the first times the derivative of the second, plus the second times the derivative of the first. And I'm going to keep chanting that throughout this problem, because this is going to be a long one, folks. It's the first, 2x^4 - 7x - 3, times the derivative of the second. Now I've got to take the derivative of √x - 1/x + 1, and that's going to be a bit of work, so I've got to chew it off carefully.
By the way, notice I’m putting parentheses here because I’ve got to multiply those pieces together. Well, I’ve got to take the derivative of √x. So you might actually want to run off and do that
separately here. In fact, maybe I’ll just run off and do that separately as well. Take a little piece of paper here and do that separately really fast for us. So how would I do that? I’d say √x, that
equals x^(1/2). And now I know how to take the derivative. I bring the 1/2 in front and then x to the 1/2 - 1 – and 1/2 - 1 is the -1/2 power. I could rewrite that. The minus sign means it’s underneath. The 1/2 means square
root. That 2 remains and so I could write it as 1/(2√x). So in fact, that’s the derivative of that piece. I had to go off and do that. It’s a little green problem there. So the moral of that part of
the story is that here, I could just write down 1/(2√x). Then I’ve got to subtract off the derivative of 1/x.
Well, that actually might be another little problem you might want to do off on the sidelines here. So in fact, I’ll do that one for you on the sidelines really fast. How would you do the derivative
of 1/x? I’d first write that as x to a power, and that would be x^-1, because it’s underneath. So the derivative of that, I bring the -1 out in front – x – and then the -1 - 1 is -2. And so this equals
-1 times x^-2. And that negative exponent means downstairs: -1/x^2.
So the derivative of 1/x I now see is -1/x^2. If I insert that in right now, I see a -1. So that negative and this negative produce a +1/x^2. And the derivative of a 1 is zero, so I just add zero. And okay, where
am I? Well, I don’t know. I lost track of where I am. I’ve got to go back to my little chant. So I’ve got the first multiplied by the derivative of the second – oh, okay, now I know where I am – plus
the second, (√x - 1/x + 1), multiplied by the derivative of the first. So the derivative of the first, let’s do that. 4 times 2 is 8, x to the 4 - 1, which is 3 – minus, and the derivative of 7x is just 7. The derivative
of a constant is zero. And so there we have it. There is the derivative of that complicated-looking function and I just used the product rule. It’s just this chant. So I want you to think about this
chant and try some on your own. Have fun and I’ll see you soon.
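Putting the pieces of the lesson together in one place (the two factors below are reconstructed from the spoken description, so treat the exact function as an assumption rather than a transcript fact):

```latex
f(x) = \bigl(2x^{4} - 7x - 3\bigr)\Bigl(\sqrt{x} - \frac{1}{x} + 1\Bigr)

% first times the derivative of the second, plus second times the derivative of the first:
f'(x) = \bigl(2x^{4} - 7x - 3\bigr)\Bigl(\frac{1}{2\sqrt{x}} + \frac{1}{x^{2}}\Bigr)
      + \Bigl(\sqrt{x} - \frac{1}{x} + 1\Bigr)\bigl(8x^{3} - 7\bigr)
```

Each piece matches a step in the transcript: the derivative of √x is 1/(2√x), the derivative of -1/x is +1/x², the derivative of the constant 1 is zero, and the derivative of the first factor is 8x³ - 7.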
| {"url":"http://www.mindbites.com/lesson/827-calculus-the-product-rule","timestamp":"2014-04-19T01:48:58Z","content_type":null,"content_length":"74288","record_id":"<urn:uuid:7d1781ba-b1ec-483e-95e9-f297c5389360>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
La Habra Algebra 2 Tutor
...I studied Mandarin Chinese, Japanese and Spanish all at the same time in college and held a 4.0 in each semester that I had them. I studied each language for 2.5 years. I also have lived in and
have been interactive within my local community, which is mostly Chinese native speakers.
76 Subjects: including algebra 2, chemistry, English, calculus
...A good knowledge of both subjects is essential for progress in Math. I have considerable experience of helping students overcome difficulties and succeed in these subjects. Certification test
passed with 100% score.
12 Subjects: including algebra 2, chemistry, algebra 1, trigonometry
...I will never talk down to your student, but be supportive and encouraging. The families of some of the students I am tutoring have requested that I tutor others in their family. I believe this
is a testament to not only how I tutor, but also how I interact with the student.
11 Subjects: including algebra 2, calculus, statistics, differential equations
Hi, my name is Karleigh. I am a graduate of Pratt Institute with a B.F.A. in Communication Design. Currently, I am an AVID tutor at a middle school, where I tutor sixth- through eighth-graders, some of
whom have learning disabilities.
16 Subjects: including algebra 2, English, algebra 1, drawing
...I have several unique training methods aimed at beginners, which I will share. I am a professional developer in ASP.net. I am responsible for training a new hire at work.
11 Subjects: including algebra 2, calculus, geometry, algebra 1 | {"url":"http://www.purplemath.com/La_Habra_Algebra_2_tutors.php","timestamp":"2014-04-19T20:05:26Z","content_type":null,"content_length":"23724","record_id":"<urn:uuid:c00663a5-7286-42d8-a80d-f38b511aa52c>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00124-ip-10-147-4-33.ec2.internal.warc.gz"} |
Interest Rate Word Problem [Archive] - Free Math Help Forum
View Full Version : Interest Rate Word Problem
07-02-2006, 10:14 PM
A principal of $5000 was invested in a savings account for 4 years. If the interest earned for the period was $400, what was the interest rate?
Could someone show me how to work this?
07-02-2006, 10:20 PM
Is this simple or compound interest? What formula did they give you to use? How far have you gotten in applying that formula?
Thank you.
07-02-2006, 10:22 PM
Is this simple or compound interest? What formula did they give you to use? How far have you gotten in applying that formula?
Thank you.
It is simple and I was dividing 400 into 1000 but I drew a blank.
07-02-2006, 10:55 PM
I was dividing 400 into 1000 but I drew a blank.
I'm sorry, but I'm not seeing how "1000" is coming into play...?
Please reply with the formula(s) you are using, and a clear listing of your steps. Thank you.
07-02-2006, 11:03 PM
I was dividing 400 into 1000 but I drew a blank.
I'm sorry, but I'm not seeing how "1000" is coming into play...?
Please reply with the formula(s) you are using, and a clear listing of your steps. Thank you.
Oops, I meant to say 5000, but I think I figured it out. I got:
Interest was $100/year so divide $5000 into $100 to get 2%
07-03-2006, 09:45 AM
Yes you are correct. :D 8-)
Definition of Simple Interest (http://www.freemathhelp.com/forum/posting.php?mode=reply&t=15410): I=Prt
Given: 400=(5000)r(4)
Substitution: 400=20,000r
Division Property: r=400/20,000=.02\Rightarrow2%
So r=2%
Good Job!
07-03-2006, 10:11 AM
...assuming it was a simple interest accumulation. The problem statement should provide better information.
07-03-2006, 12:31 PM
Agree with TK; the way it's worded, answer is:
1.94265~% cpd annually, resulting in a 2% yield after 4 years....
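The two readings of the problem (simple interest vs annual compounding) can be checked side by side; a minimal Python sketch, not from the thread:

```python
principal = 5000.0
interest = 400.0
years = 4

# Simple interest: I = P*r*t  =>  r = I / (P*t)
r_simple = interest / (principal * years)

# Compound interest (annual): P*(1 + r)**t = P + I  =>  r = (1 + I/P)**(1/t) - 1
r_compound = (1 + interest / principal) ** (1 / years) - 1

print(f"simple rate:   {r_simple:.5%}")    # 2.00000%
print(f"compound rate: {r_compound:.5%}")  # about 1.94%, matching the 1.94265~% quoted above
```

Both rates yield the same $400 over 4 years; the simple-interest reading gives the textbook answer of 2%.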
07-06-2006, 08:41 AM
...assuming it was a simple interest accumulation. The problem statement should provide better information.
It is simple and I was dividing 400 into 1000 but I drew a blank.
| {"url":"http://www.freemathhelp.com/forum/archive/index.php/t-44602.html","timestamp":"2014-04-25T07:46:10Z","content_type":null,"content_length":"5840","record_id":"<urn:uuid:0a512abf-b1f0-40f6-bc1b-41c86f8e30c6>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
analyzing the graph
September 12th 2012, 09:53 PM #1
Junior Member
Aug 2012
analyzing the graph
The question is: (data was given for gender and speed)
Produce an appropriate graph to determine if it is reasonable to assume that the speeds, for each gender, can be modelled by a normal distribution. Comment on what your graphs suggest.
So I drew a graph (one I think is appropriate for this question) - a normal distribution plot.
Now, I need to make a comment based on the graph.
Can I say: since most of the data lie within the 95% interval lines, the data are normally distributed?
Please help
From a graph it is usually hard to verify accurately whether a distribution is normal or not. It is better to perform a statistical test. Example: a Kolmogorov-Smirnov test indicates that GLUCF
is not normally distributed at the p=0.05 significance level:
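As this reply suggests, a formal goodness-of-fit test beats eyeballing a plot. Below is a stdlib-only sketch of the one-sample Kolmogorov-Smirnov statistic against a normal fitted to the data (the helper and the sample data are illustrative assumptions, not from the thread; strictly, fitting the mean and SD from the same data calls for Lilliefors critical values):

```python
import math
import random

def ks_statistic_normal(data):
    """KS distance D between the empirical CDF and a normal fitted to the data."""
    n = len(data)
    mu = sum(data) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in data) / (n - 1))
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))
    xs = sorted(data)
    return max(
        max(cdf(x) - i / n, (i + 1) / n - cdf(x)) for i, x in enumerate(xs)
    )

random.seed(1)
normal_speeds = [random.gauss(30, 5) for _ in range(500)]            # plausibly normal
skewed_speeds = [30 + random.expovariate(0.2) for _ in range(500)]   # right-skewed

print(ks_statistic_normal(normal_speeds))  # small D: a normal model is plausible
print(ks_statistic_normal(skewed_speeds))  # much larger D: poor normal fit
```

A small D supports the "looks normal" judgement from the graph; a large D is evidence against it.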
September 12th 2012, 10:14 PM #2 | {"url":"http://mathhelpforum.com/advanced-statistics/203374-analyzing-graph.html","timestamp":"2014-04-20T10:06:55Z","content_type":null,"content_length":"33215","record_id":"<urn:uuid:68dc0d00-31be-4c53-bdd5-2b9b2e004bfe>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00299-ip-10-147-4-33.ec2.internal.warc.gz"} |
Estimating Distance Travelled
Fig 1: Use either the millimetres scale or one of the romers to measure distance on the map.
It's obviously very important to know how far you have travelled from your last known point - this is done by Estimating Distance.
There are two tried and tested ways of estimating how far you have travelled. These are TIMING and PACING. Timing is probably the easiest to carry out but it is often the least accurate. Pacing is
usually the most accurate but it can be laborious, especially over long distances. When the weather and conditions are difficult it is often wise to use both methods concurrently.
Fig 2: Using the 1:50,000 scale romer to measure the distance between two spot heights.
Before using either of these methods you will need to measure the distance on the map between your present location and the target you are walking to. Some people use the millimetres scale which runs
alongside the compass baseplate (Figure 1) while others prefer to use one of the romers (Figure 2). Millimetres can sometimes be hard to distinguish especially in rain or snow. On a 1:50,000 scale
map, one millimetre represents 50 metres on the ground (an easy mistake is to count one millimetre as 100 metres). On a 1:25,000 scale map, one millimetre represents 25 metres on the ground. Using
the compass romer may be clearer although some compasses don't have romers. Some compasses have removable scales for different maps.
This is based on knowing the speed at which you are walking and keeping a note of when you left your last known point. Walking speed varies and is dependent on a range of factors including fitness,
weight of rucksack, length of journey, wind, conditions underfoot, slope angle (and closeness to pub closing time). A formula for estimating the time required for a journey was published in 1892 by
the renowned Scottish mountaineer, W.W. Naismith. There are numerous variations on this formula and enthusiasts will discuss at length the merits of different models. However, useful estimates can be
made without going into great detail and most people manage with just one or two versions of Naismith's original calculations. The simplest formula combines the horizontal distance with the height
gained. Allow 5 kilometres per hour on the flat plus 10 minutes for every 100 metres height gain. Most reasonably fit people can maintain this speed throughout a day in the hills (provided there
aren't any particular difficulties) but remember that it doesn't allow for rests or stops. "Naismith's" is a valuable navigation aid and also a useful way of working out how long your entire route
will take. To use this formula for short navigation legs, break it down to 1.2 minutes per 100 metres horizontal distance and 1 minute for every 10 metres of ascent. You can only travel at the speed
of the slowest person and so you may need to use a slower formula such as 4 kph which is calculated at 1.5 minutes per 100 metres. When going gently downhill, it is best to ignore the height loss and
just use the horizontal component of the formula. When descending steep ground which will slow your rate of travel a rough estimate can be used - allow 1 minute for every 30 metres of descent,
although this is only an approximation.
Distance Travelled Speed in Kilometres per hour
5 kph 4 kph 3 kph 2 kph
1000 metres 12 min 15 min 20 min 30 min
900 metres 11 min 13½ min 18 min 27 min
800 metres 9½ min 12 min 16 min 24 min
700 metres 8½ min 10½ min 14 min 21 min
600 metres 7 min 9 min 12 min 18 min
500 metres 6 min 7½ min 10 min 15 min
400 metres 5 min 6 min 8 min 12 min
300 metres 3½ min 4½ min 6 min 9 min
200 metres 2½ min 3 min 4 min 6 min
100 metres 1 min 1½ min 2 min 3 min
50 metres ½ min ¾ min 1 min 1½ min
Fig 3: Timing Chart. The timings have been rounded to the nearest ½ minute. Remember to add 1 minute for every 10 metres of ascent.
Using a Timing Chart (Figure 3) for the horizontal component makes the calculations easy although many people prefer to work it out mentally. Remember to add 1 minute for every 10 metres of ascent.
Working out timing calculations mentally becomes straightforward with practice:-
1. Measure the distance and allow 1.2 minutes for every 100 metres. An easy way to work this out is to use the 12 times table and move the decimal point one place to the left. For example:-
1. 300 metres
3 x 12 = 36 = 3.6 minutes
Round off to the nearest half minute = 3½ minutes OR
2. 650 metres
6 x 12 = 72 = 7.2 minutes
Round off to the nearest half minute = 7 minutes
Add ½ minute for the extra 50 metres = 7½ minutes
2. On an O.S. 1:50,000 or 1:25,000 scale map, count the number of contours and allow a minute for every contour. Remember that every fifth contour is a thick line and so you can count the thick
contours in multiples of five to work out the total height gain (on a Harvey Superwalker 1:25,000 scale map the contour interval is 15 metres and so you will have to work out the total height
gain and then allow 1 minute for every 10 metres of ascent).
3. Add (a) and (b) together and you have an estimate of how long it will take to cover the ground.
Fig 4: From A to B (1083 metre spot height to the centre of the 1210 metre ring contour).
Image produced from the Ordnance Survey
service. Image reproduced with kind permission of
Ordnance Survey
Figure 4 provides an example:-
From A to B
1. Distance 850 metres
850 metres = 8 x 12 = 96 = 9.6 minutes
Round off to the nearest half minute = 9½ minutes
Add ½ minute for the extra 50 metres = 10 minutes
2. Height gain 130 metres
(13 contours, including the one which encloses the 1083 spot height)
1 minute for every 10 metres (or for every contour if using this map) = 13 minutes
Total time from A to B = 10 + 13 = 23 minutes
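The worked example lends itself to a small helper; a sketch of the simplified Naismith formula used above (the function name is my own):

```python
def naismith_minutes(distance_m, ascent_m, speed_kmh=5):
    """Horizontal time at the given speed, plus 1 minute per 10 m of ascent."""
    horizontal = distance_m / 1000 * (60 / speed_kmh)  # 1.2 min per 100 m at 5 kph
    climb = ascent_m / 10
    return horizontal + climb

# A to B from Figure 4: 850 m of horizontal distance with 130 m of height gain
print(round(naismith_minutes(850, 130)))  # 23 minutes, as in the text
```

Passing `speed_kmh=4` (or 3, or 2) reproduces the slower columns of the Timing Chart for the horizontal component.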
None of this is of any use if you don't have a watch. It is useful to have a stopwatch facility so you don't have to remember the time at the start of each leg. Most high street jewellers sell
inexpensive digital watches which have a stopwatch facility (about £13).
Pacing is often more accurate than timing but it does require concentration. An average walker takes about 60 double paces per hundred metres (a double pace is also known as a Roman Pace - hence the
word "mile" which originated from a thousand Roman paces). You can find out your own individual pacing figure by measuring out 100 metres and then seeing how many double paces you take to cover the
distance or you can do it on the hill between known points on relatively flat terrain. Going up or down hill or walking on rough ground or in deep snow can alter the number of paces you take. You can
estimate how many extra paces you need to take to complete 100 metres at the end of every 60 double paces. It is best to measure the distance in hundreds of metres rather than by working out the
total number of paces needed for a particular navigational leg i.e. if the target is 450 metres away and your personal pacing figure is 62, then count 62 paces for four times (which gives you 400
metres) and then add the final 31 paces. It is useful to have a way of remembering how many hundreds of metres you have paced - it's easy to forget especially if someone asks you a question halfway
through the leg. Silva make a counter which fits on the side of your compass or you can use cord grips as counters on the compass lanyard.
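The counting scheme described above (repeat your personal per-100 m figure, then add a final partial count) can be sketched the same way; the 450 m example with a figure of 62 gives four full counts plus 31 paces:

```python
def pacing_counts(distance_m, paces_per_100m=62):
    """Number of full 100 m pace counts, plus the paces for the final partial leg."""
    full_hundreds = distance_m // 100
    final_paces = round((distance_m % 100) / 100 * paces_per_100m)
    return full_hundreds, final_paces

print(pacing_counts(450))  # (4, 31): four counts of 62 double paces, then 31 more
```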
Don't always expect your timing and pacing calculations to take you right to the spot you are heading for - look at the ground around you and compare it with the contours on the map. Can you find
anything that doesn't fit in? If so, be tenacious about finding out why it doesn't fit. Look for other features which do make sense.
Safety and skills information is provided courtesy of the
Mountaineering Council of Scotland | {"url":"http://www.walkhighlands.co.uk/safety/estimating-distance.shtml","timestamp":"2014-04-17T18:24:03Z","content_type":null,"content_length":"22638","record_id":"<urn:uuid:510444a8-c30f-4418-8210-8a01fa5f6b3a>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00456-ip-10-147-4-33.ec2.internal.warc.gz"} |
Go4Expert - View Single Post - c++ Binomial
I have a problem in solving this C++ program.
The binomial expression (a+b)^n can be expanded as follows:
If n=2: a^2 + 2ab + b^2
The coefficients are obtained using the combination nCm, and this is referred to as Pascal's triangle.
Such that if n=4 then:
4C0 = 4!/((4-0)!0!) = 1
4C1 = 4!/((4-1)!1!) = 4
4C2 = 4!/((4-2)!2!) = 6
1. Write a function to compute the combinations.
2. Implement it in a C++ program to output Pascal's triangle for a number n. | {"url":"http://www.go4expert.com/forums/cpp-binomial-post14949/","timestamp":"2014-04-18T11:07:23Z","content_type":null,"content_length":"5588","record_id":"<urn:uuid:3cbff37a-68df-481c-b87d-97755d3a9e26>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
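A sketch of the two tasks (in Python rather than the requested C++, just to pin down the arithmetic; translating the two functions to C++ is mechanical):

```python
from math import factorial

def comb(n, m):
    """nCm = n! / ((n - m)! * m!), as in the 4C0, 4C1, 4C2 examples above."""
    return factorial(n) // (factorial(n - m) * factorial(m))

def pascal_row(n):
    """Row n of Pascal's triangle: the coefficients of (a + b)**n."""
    return [comb(n, m) for m in range(n + 1)]

for i in range(5):
    print(pascal_row(i))
# Row 4 is [1, 4, 6, 4, 1], matching 4C0=1, 4C1=4, 4C2=6 plus symmetry.
```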
FOM: second-order logic is a myth
Robert Black Robert.Black at nottingham.ac.uk
Tue Mar 9 17:00:22 EST 1999
Charles Silver:
> To me, the major point that separates (intuitive) quantification
>from set theory is that you can say "for all x..." without implying that
>the x's must be *in* something. For example, take: "All Canada geese fly
>south for the winter." I don't think anything in this statement implies
>that in addition to there being some number of geese there is also a *set*
>of geese. Does anyone know of a theory with much of the power of set
>theory that doesn't imply the existence of some sort of container?
I'm not persuaded that 'container' is a useful metaphor here: does 'Some
critics admire only one another' imply some sort of container? But there
is of course something with the power not of set theory but of second-order
logic that, oddly, no-one (including me) has mentioned, namely unrestricted
mereology, as used for example by Hartry Field in _Science Without
Numbers_. I think the reason I haven't mentioned it is that I'm sort of
assuming that resistence to second-order logic comes from what Boolos calls
its 'staggering' undecidability. And this staggering undecidability will
hold for unrestricted mereology as well. But of course if the problem is
commitment to sets qua abstract objects (and assuming Boolos is wrong and
second-order logic does commit us to sets), then mereology deserves a
look-in here.
Robert Black
Dept of Philosophy
University of Nottingham
Nottingham NG7 2RD
tel. 0115-951 5845
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/1999-March/002776.html","timestamp":"2014-04-19T04:24:55Z","content_type":null,"content_length":"3927","record_id":"<urn:uuid:8a12a688-3d5a-4b01-bdae-2497538daf13>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00278-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
April 13th 2008, 10:29 PM #1
Feb 2008
With my exam on Friday, I'm trying to get all these practice probs done since some of them will be on the exam.
If $P$ is the Parity Operator, determine the parity of the functions below:
$P\sin{x} = ?$
$Pe^{-x} = ?$
$P(e^{x} + e^{-x}) = ?$
Now, if $H$ is the Hamiltonian Operator with $V(-x) = V(x)$, find the parity of
$PHe^{-x} = ?$
So for the first func, we know that sin(x) is an odd function, and hence that would be -sin(x) if I'm not mistaken since the parity would take sin(x) and make it sin(-x) and thus it'd have parity
-1? We only dealt with cos/sin in our notes, so I'm not sure what an extra operator does and the role of exponential functions.
With my exam on Friday, I'm trying to get all these practice probs done since some of them will be on the exam.
If $P$ is the Parity Operator, determine the parity of the functions below:
$P\sin{x} = ?$
$Pe^{-x} = ?$
$P(e^{x} + e^{-x}) = ?$
Now, if $H$ is the Hamiltonian Operator with $V(-x) = V(x)$, find the parity of
$PHe^{-x} = ?$
So for the first func, we know that sin(x) is an odd function, and hence that would be -sin(x) if I'm not mistaken since the parity would take sin(x) and make it sin(-x) and thus it'd have parity
-1? We only dealt with cos/sin in our notes, so I'm not sure what an extra operator does and the role of exponential functions.
Is this related to some Physics concepts?
I am guessing that we need to define a function called P that will map like this P(f(x)) = f(-x).
$P\sin{x} = -\sin{x} \Rightarrow P(x) = -x$
$Pe^{-x} =$ $e^{x}\Rightarrow P(x) = \frac1{x}$
$P(e^{x} + e^{-x}) =$ $e^{-x} + e^{x} \Rightarrow P(x) = x$
Hopefully Topsquark, our physics maestro$Hy$ do?
With my exam on Friday, I'm trying to get all these practice probs done since some of them will be on the exam.
If $P$ is the Parity Operator, determine the parity of the functions below:
$P\sin{x} = ?$
$Pe^{-x} = ?$
$P(e^{x} + e^{-x}) = ?$
Now, if $H$ is the Hamiltonian Operator with $V(-x) = V(x)$, find the parity of
$PHe^{-x} = ?$
So for the first func, we know that sin(x) is an odd function, and hence that would be -sin(x) if I'm not mistaken since the parity would take sin(x) and make it sin(-x) and thus it'd have parity
-1? We only dealt with cos/sin in our notes, so I'm not sure what an extra operator does and the role of exponential functions.
Is this related to some Physics concepts?
I am guessing that we need to define a function called P that will map like this P(f(x)) = f(-x).
$P\sin{x} = -\sin{x} \Rightarrow P(x) = -x$
$Pe^{-x} =$ $e^{x}\Rightarrow P(x) = \frac1{x}$
$P(e^{x} + e^{-x}) =$ $e^{-x} + e^{x} \Rightarrow P(x) = x$
Hopefully Topsquark, our physics maestro$Hy$ do?
The parity operator in 1-D, as Isomorphism suggests, maps f(x) to f(-x). In 3-D it maps the function f(x, y, z) to f(-x, -y, -z).
I think all that is being asked for here is what the parity of the function is. So
$Psin(x) = sin(-x) = -sin(x)$, so sin(x) has an odd parity, or -1.
$Pe^{-x} = e^x \neq \pm e^{-x}$ so $e^{-x}$ is not a parity eigenstate.
$P(e^x + e^{-x}) = e^{-x} + e^x = +(e^x + e^{-x})$ so $e^x + e^{-x}$ has an even parity, or +1.
H is the Hamiltonian operator: $H = -\frac{\hbar ^2}{2m} \frac{d^2}{dx^2} + V(x)$
We can do $PHe^{-x}$ in two ways: Apply H to $e^{-x}$ then apply P, or we can apply P to $He^{-x}$ directly as P is a linear operator. I think at this level it is more instructive to take the
former route, so
$PHe^{-x} = P \left ( -\frac{\hbar ^2}{2m}e^{-x} + V(x)e^{-x} \right ) = -\frac{\hbar ^2}{2m}e^x + V(-x)e^x$
We know that V(x) has an even parity so this is
$PHe^{-x} = -\frac{\hbar ^2}{2m}e^x + V(x)e^x = He^x \neq \pm He^{-x}$
so this is not a parity eigenstate.
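The three classifications above are easy to sanity-check numerically; a stdlib sketch that samples a few points and compares f(-x) against ±f(x) (the helper is illustrative, not standard physics-library code):

```python
import math

def parity(f, xs=(0.3, 0.7, 1.1, 2.5)):
    """Return 'even' (+1), 'odd' (-1), or 'none' by sampling f(-x) against ±f(x)."""
    if all(math.isclose(f(-x), f(x)) for x in xs):
        return "even"
    if all(math.isclose(f(-x), -f(x)) for x in xs):
        return "odd"
    return "none"

print(parity(math.sin))                              # odd  -> parity -1
print(parity(lambda x: math.exp(x) + math.exp(-x)))  # even -> parity +1
print(parity(lambda x: math.exp(-x)))                # none -> not a parity eigenstate
```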
April 14th 2008, 12:20 AM #2
April 14th 2008, 04:01 AM #3 | {"url":"http://mathhelpforum.com/calculus/34406-parity.html","timestamp":"2014-04-18T01:48:09Z","content_type":null,"content_length":"51389","record_id":"<urn:uuid:b23fa451-3d1f-47ba-b15a-3870f953521d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00576-ip-10-147-4-33.ec2.internal.warc.gz"} |
South Easton Math Tutor
Find a South Easton Math Tutor
...I have successfully passed many exams. I am a highly regarded product development analyst. In education, I have been successful in getting my education license as a second career, and completed
many graduate level education courses.
90 Subjects: including SAT math, linear algebra, actuarial science, public speaking
...I have worked 4 years as a paraprofessional and close to two years as a special education teacher at an elementary level. I decided to pursue certification in Special Education for k-8 students
and achieved certification in May of 2012. My strengths are in building relationships with students and gaining their trust.
30 Subjects: including prealgebra, geometry, reading, English
...Most of us spend much too much time indoors for our own good. I've never taught chemistry in school though I am licensed to do so, but use it commonly as an important component of biology. To a
very large degree, every organism is an insanely complex web of chemical systems.
15 Subjects: including algebra 1, SAT math, English, geometry
...So, not only was I learning, but I was helping out a friend at the same time. Soon enough I realized I started getting good at explaining things to people. I eventually came across this website
and decided to sign up and see where things lead me.
34 Subjects: including calculus, differential equations, geometry, reading
...In addition, I have been tutoring at a local community college for the past five years in such subjects as physics, algebra, and anatomy and physiology. I have been tutoring high school algebra
for more than twenty years. I like to help student learn strategies and effective studying techniques, this includes effective note taking, writing effective outlines, etc.
15 Subjects: including geometry, algebra 1, algebra 2, biology | {"url":"http://www.purplemath.com/south_easton_ma_math_tutors.php","timestamp":"2014-04-19T09:53:18Z","content_type":null,"content_length":"23894","record_id":"<urn:uuid:ce57b900-99b8-486e-bbb3-04b51ff9241b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00580-ip-10-147-4-33.ec2.internal.warc.gz"} |
Franklin Park, IL Geometry Tutor
Find a Franklin Park, IL Geometry Tutor
...Ph.D. in Hispanic Literature from the U of Texas at Austin. Let me help you get the score you want for the ACT reading test. I have a strong background in teaching, a Ph.D. in Hispanic
literature, and specific knowledge of the ACT reading format. 16 years of working with students has taught me how to perceive students' strengths and weaknesses in a short time.
17 Subjects: including geometry, Spanish, English, reading
...Topics include: functions and graphing (linear, quadratic, logarithmic, exponential), complex numbers, systems of equations and inequalities, and relations. This can also include beginning
trigonometry and probability and statistics. Geometry is unlike many other Math courses in that it is a spatial/visual class and deals minimally with variables and equations.
11 Subjects: including geometry, calculus, algebra 1, algebra 2
...I also minored in Asian Studies. After graduating from Loyola University, I began tutoring in ACT Math/Science at Huntington Learning Center in Elgin. I took pleasure in helping students
understand concepts and succeed.
26 Subjects: including geometry, chemistry, Spanish, reading
...It made my life much simpler. Now, I use Microsoft Outlook every day to send and receive e-mails, plan different events, organize my weekly schedule, etc. One of the best features for me is the
ability to see e-mails and calendar in the same window!
36 Subjects: including geometry, English, ACT English, ACT Reading
...I have helped 4 daughters do trigonometry and precalculus, which also includes complicated trigonometric identities algebraically. I have taught anatomy and physiology at a career college in
New York. I have taught physiology along with anatomy in a career college in New York.
17 Subjects: including geometry, chemistry, statistics, reading | {"url":"http://www.purplemath.com/Franklin_Park_IL_Geometry_tutors.php","timestamp":"2014-04-19T23:44:02Z","content_type":null,"content_length":"24363","record_id":"<urn:uuid:21e005ab-0bc3-4341-a48d-58d858186c63>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00637-ip-10-147-4-33.ec2.internal.warc.gz"} |
Contact with ET using Math? Not so fast. - Keith Devlin (SETI Talks)
Submitted by Exponential Times on Mon, 2012-06-11 08:57
It is often said that mathematics is a universal language that we could use to make contact with another intelligence. But is that really the case? Or is this just a disguised version of
Dr Keith Devlin has written 31 mathematics books and over 80 published research articles. He is the recipient of the Pythagoras Prize, the Peano Prize, the Carl Sagan Award, and the Joint Policy
Board for Mathematics Communications Award. In 2003, he was recognized by the California State Assembly for his "innovative work and longtime service in the field of mathematics and its relation to
logic and linguistics." He is "the Math Guy" on National Public Radio (For more information see http://profkeithdevlin.com). | {"url":"http://www.exponentialtimes.net/videos/contact-et-using-math-not-so-fast-keith-devlin-seti-talks","timestamp":"2014-04-19T04:20:08Z","content_type":null,"content_length":"26535","record_id":"<urn:uuid:e0e3c2df-1848-49e2-8b26-525966167004>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: textual math dtd ?
From: William F Hammond <hammond@csc.albany.edu>
Date: Thu, 07 Oct 2004 11:35:39 -0400
To: Stéphan Sémirat <stephan.semirat@ac-grenoble.fr>
Cc: "www-math" <www-math@w3.org>
Message-ID:
"Stéphan Sémirat" <stephan.semirat@ac-grenoble.fr> writes:
> is there anything standard in writing document that contains math ?
> I mean : if i want to write a math article in XML, how can i tag
> theorems, lemmas, proofs, etc ? <theorem></theorem>,
> <lemma></lemma>, <proof></proof> ? The math working group has
> created mathml for math formulae, but is there something similar for
> a "mathematical text" ? Or any try to a standardized Mathml+XHTML
> math paper (<div class="theorem"></div> ?) ? (i mean something that
> would be used by publishers, referencers,...).
GELLMU "article". CTAN:/support/gellmu
It provides LaTeX-like markup for writing article-level documents.
SGML is an intermediate stage, so you may write an article that
way if you prefer (but then you lose \newcommand with arguments).
Theorems, lemmas, etc are discussed in section 6.2 of the user
manual: http://www.albany.edu/~hammond/gellmu/glman/glman.html
or in an XHTML+MathML capable browser
The example document at
makes use of such things. (.xmh is a suffix used locally for
serving XHTML as "text/xml" while "application/xhtml+xml" lacks
universal recognition.)
Source files and PDF versions are available in parallel using the
suffixes ".glm" and ".pdf", respectively.
-- Bill
Received on Thursday, 7 October 2004 15:35:47 UTC
This archive was generated by hypermail 2.3.1 : Wednesday, 5 February 2014 23:39:49 UTC | {"url":"http://lists.w3.org/Archives/Public/www-math/2004Oct/0006.html","timestamp":"2014-04-19T18:37:50Z","content_type":null,"content_length":"9788","record_id":"<urn:uuid:2ff389e1-bbbc-4d43-a297-58177c255748>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00580-ip-10-147-4-33.ec2.internal.warc.gz"} |
Line Integral
June 29th 2010, 07:48 AM
Line Integral
If $F(x,y)= (3x^2y, 2x-y)$, find
$\int\limits_\gamma F \cdot dx$
where $\gamma$ is the directed path from (0,0) to (1,1) along the graph of the vector equation:
$x = (\sin t, 2t/\pi), (0 \leq t \leq \pi/2)$
__________________________________________________ _____
Here's what I did:
I recognized that the integral is not independent of path... So set up:
$\int M dx + N dy$
which gives me the sum of 4 integral terms... I evaluated these up to the following step so somebody could double check:
$6/\pi [\pi/2 -1] - 6/\pi[\pi/3 - 7/9] + 4/\pi - 1/2$
Now does this answer seem correct to you?
June 29th 2010, 10:22 AM
I do NOT get the sum of four terms, I get the sum of three terms:
On the path x= sin t, $y= 2t/\pi$, dx= cos t dt and $dy= 2/\pi dt$.
$3x^2y = 6t\sin^2(t)/\pi$ and $2x - y = 2\sin(t) - 2t/\pi$
The integral becomes $6 \int_{t= 0}^{\pi/2} t sin^2 t cos(t) \, dt+ 4\pi\int_{t=0}^{\pi/2}sin(t) \, dt- 4\pi\int_{t=0}^{\pi/2} t\, dt$.
Use integration by parts to do the first integral.
July 3rd 2010, 03:54 PM
Thank you HallsofIvy
But I think your integral is incorrect. Did you mean:
$6/\pi \int_{t= 0}^{\pi/2} t sin^2 t cos(t)dt+ 4/\pi\int_{t=0}^{\pi/2}sin(t)dt- 4/\pi^2\int_{t=0}^{\pi/2} t dt$
If this is correct, how would you integrate the first one by parts?
July 3rd 2010, 05:05 PM
mr fantastic
Yes, there were a couple of typos in HoI's reply which you have spotted and fixed.
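For the parts question: take u = t and dv = sin²t·cos t dt, so v = sin³t/3, which leads to the closed form 1/2 + 8/(3π) ≈ 1.3488 (the OP's expression earlier in the thread evaluates to the same number). A quick midpoint-rule check in Python, not from the thread:

```python
import math

def line_integral(n=100_000):
    """Midpoint rule for the integral of M dx + N dy, with M = 3x^2 y, N = 2x - y,
    along x = sin t, y = 2t/pi for 0 <= t <= pi/2."""
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y = math.sin(t), 2 * t / math.pi
        dx, dy = math.cos(t), 2 / math.pi  # dx/dt and dy/dt
        total += (3 * x * x * y * dx + (2 * x - y) * dy) * h
    return total

print(line_integral())          # ≈ 1.34883
print(0.5 + 8 / (3 * math.pi))  # closed form, ≈ 1.34883
```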
See integrate x (Sin[x])^2 Cos[x] - Wolfram|Alpha and click on Show steps. | {"url":"http://mathhelpforum.com/calculus/149693-line-integral-print.html","timestamp":"2014-04-17T19:47:10Z","content_type":null,"content_length":"9747","record_id":"<urn:uuid:739074bf-c043-4d34-97d1-35cb331d163c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00383-ip-10-147-4-33.ec2.internal.warc.gz"} |
This figure has a number line with 0, 2, 4, 8, 16, 32, and 64 marked on it, appropriately spaced. There is another line under the first line with arrows pointing to OLTP at one end and DSS at the
other end.
OLTP is at the end of the line that corresponds to the smaller numbers, and DSS is at the end that corresponds to the larger numbers. | {"url":"http://docs.oracle.com/cd/B10500_01/server.920/a96533/img_text/pfgrf039.htm","timestamp":"2014-04-17T09:08:27Z","content_type":null,"content_length":"821","record_id":"<urn:uuid:25dcf90d-282d-4833-a7fc-31869299bf55>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00344-ip-10-147-4-33.ec2.internal.warc.gz"} |
autonomous categories. Number 752
- Theoretical Computer Science , 2001
"... We present an extension of the lambda-calculus with differential constructions motivated by a model of linear logic discovered by the first author and presented in [Ehr01]. We state and prove
some basic results (confluence, weak normalization in the typed case), and also a theorem relating the usual ..."
Cited by 44 (9 self)
Add to MetaCart
We present an extension of the lambda-calculus with differential constructions motivated by a model of linear logic discovered by the first author and presented in [Ehr01]. We state and prove some
basic results (confluence, weak normalization in the typed case), and also a theorem relating the usual Taylor series of analysis to the linear head reduction of lambda-calculus.
- Mathematical Structures in Computer Science , 2001
"... We present a category of locally convex topological vector spaces which is a model of propositional classical linear logic, based on the standard concept of K othe sequence spaces. In this
setting, the spaces interpreting the exponential have a quite simple structure of commutative Hopf algebra. The ..."
Cited by 31 (9 self)
Add to MetaCart
We present a category of locally convex topological vector spaces which is a model of propositional classical linear logic, based on the standard concept of Köthe sequence spaces. In this setting,
the spaces interpreting the exponential have a quite simple structure of commutative Hopf algebra. The co-Kleisli category of this linear category is a cartesian closed category of entire mappings.
This work provides a simple setting where typed λ-calculus and differential calculus can be combined; we give a few examples of computations.
- In Nineteenth ACM Symposium on Principles of Programming Languages , 1992
"... We present a functional interpretation of classical linear logic based on the concept of linear continuations. Unlike their non-linear counterparts, such continuations lead to a model of control
that does not inherently impose any particular evaluation strategy. Instead, such additional structure i ..."
Cited by 30 (1 self)
Add to MetaCart
We present a functional interpretation of classical linear logic based on the concept of linear continuations. Unlike their non-linear counterparts, such continuations lead to a model of control that
does not inherently impose any particular evaluation strategy. Instead, such additional structure is expressed by admitting closely controlled copying and discarding of continuations. We also
emphasize the importance of classicality in obtaining computationally appealing categorical models of linear logic and propose a simple "coreflective subcategory " interpretation of the modality "!".
1 Introduction In recent years, there has been considerable interest in applications of Girard's Linear Logic (LL) [Gir87] to programming language design and implementation. Over time, various more
or less mutated versions of the original system have been proposed, but they all share the same basic premise: that assumptions made in the course of a formal proof can not necessarily be used an
arbitrary n...
, 1998
"... We generalize the notion of nuclear maps from functional analysis by defining nuclear ideals in tensored -categories. The motivation for this study came from attempts to generalize the structure
of the category of relations to handle what might be called "probabilistic relations". The compact closed ..."
Cited by 28 (10 self)
Add to MetaCart
We generalize the notion of nuclear maps from functional analysis by defining nuclear ideals in tensored *-categories. The motivation for this study came from attempts to generalize the structure of
the category of relations to handle what might be called "probabilistic relations". The compact closed structure associated with the category of relations does not generalize directly; instead one
obtains nuclear ideals. Most tensored *-categories have a large class of morphisms which behave as if they were part of a compact closed category, i.e. they allow one to transfer variables between the
domain and the codomain. We introduce the notion of nuclear ideals to analyze these classes of morphisms. In compact closed tensored *-categories, all morphisms are nuclear, and in the tensored
*-category of Hilbert spaces, the nuclear morphisms are the Hilbert-Schmidt maps. We also introduce two new examples of tensored *-categories, in which integration plays the role of composition. In the
first, mor...
- International Journal of Theoretical Physics , 2003
"... We give a mathematical framework to describe the evolution of an open quantum systems subjected to nitely many interactions with classical apparatuses. The systems in question may be composed of
distinct, spatially separated subsystems which evolve independently but may also interact. This evolut ..."
Cited by 10 (5 self)
Add to MetaCart
We give a mathematical framework to describe the evolution of an open quantum system subjected to finitely many interactions with classical apparatuses. The systems in question may be composed of
distinct, spatially separated subsystems which evolve independently but may also interact. This evolution, driven both by unitary operators and measurements, is coded in a precise mathematical
structure in such a way that the crucial properties of causality, covariance and entanglement are faithfully represented. We show how our framework may be expressed using the language of
(poly)categories and functors. Remarkably, important physical consequences - such as covariance - follow directly from the functoriality of our axioms. We establish strong links between the physical
picture we propose and linear logic. Specifically we show that the refined logical connectives of linear logic can be used to describe the entanglements of subsystems in a precise way. Furthermore, we
show that there is a precise correspondence between the evolution of a given system and deductions in a certain formal logical system based on the rules of linear logic. This framework generalizes
and enriches both causal posets and the histories approach to quantum mechanics.
- School of Computer Science, McGill University, Montreal , 1998
"... The notion of binary relation is fundamental in logic. What is the correct analogue of this concept in the probabilistic case? I will argue that the notion of conditional probability
distribution (Markov kernel, stochastic kernel) is the correct generalization. One can define a category based on sto ..."
Cited by 7 (1 self)
Add to MetaCart
The notion of binary relation is fundamental in logic. What is the correct analogue of this concept in the probabilistic case? I will argue that the notion of conditional probability distribution
(Markov kernel, stochastic kernel) is the correct generalization. One can define a category based on stochastic kernels which has many of the formal properties of the ordinary category of relations.
Using this concept I will show how to define iteration in this category and give a simple treatment of Kozen's language of while loops and probabilistic choice. I will use the concept of stochastic
relation to introduce some of the ongoing joint work with Edalat and Desharnais on Labeled Markov Processes. In my talk I will assume that people do not know what partially additive categories are
but that they do know basic category theory and basic notions like measure and probability. This work is mainly due to Kozen, Giry, Lawvere and others. 1 Introduction The notion of binary relation
and relation...
, 2011
"... We show that the extensional collapse of the relational model of linear logic is the model of prime-algebraic complete lattices, a natural extension to linear logic of the well known Scott
semantics of the lambda-calculus. ..."
Cited by 3 (1 self)
Add to MetaCart
We show that the extensional collapse of the relational model of linear logic is the model of prime-algebraic complete lattices, a natural extension to linear logic of the well known Scott semantics
of the lambda-calculus.
, 2008
"... We introduce a probabilistic version of coherence spaces and show that these objects provide a model of linear logic. We build a model of the pure lambda-calculus in this setting and show how to
interpret a probabilistic version of the functional language PCF. We give a probabilistic interpretation ..."
Cited by 3 (0 self)
Add to MetaCart
We introduce a probabilistic version of coherence spaces and show that these objects provide a model of linear logic. We build a model of the pure lambda-calculus in this setting and show how to
interpret a probabilistic version of the functional language PCF. We give a probabilistic interpretation of the semantics of probabilistic PCF closed terms of ground type.
- Journal of Pure and Applied Algebra
"... theory of polynomials ..."
- In Multiset Processing , 2001
"... This paper is an attempt to summarize most things that are related to multiset theory. We begin by describing multisets and the operations between them. Then we present hybrid sets and their
operations. We continue with a categorical approach to multisets. Next, we present fuzzy multisets and their ..."
Add to MetaCart
This paper is an attempt to summarize most things that are related to multiset theory. We begin by describing multisets and the operations between them. Then we present hybrid sets and their
operations. We continue with a categorical approach to multisets. Next, we present fuzzy multisets and their operations. Finally, we present partially ordered multisets. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1982568","timestamp":"2014-04-19T01:36:40Z","content_type":null,"content_length":"35646","record_id":"<urn:uuid:b49ca417-5535-4456-9487-011cce899bfc>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00254-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
The eggs of a species of bird have an average diameter of 23 mm and an SD of 0.45 mm. The weights of the chicks that hatch from these eggs have an average of 6 grams and an SD of 0.5 grams. The
correlation between the two variables is 0.75 and the scatter diagram is roughly football shaped. The intercept of the regression line for estimating chick weight based on egg diameter is
_______________ grams. The diameter of one of the eggs is 0.5 mm wider than that of another. According to the regression method, the chick that hatches from the bigger egg is estimated to be
____________ grams heavier.
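The arithmetic the question calls for follows the standard regression recipe: slope $= r \cdot SD_y/SD_x$, intercept $= \bar y - \text{slope} \cdot \bar x$. An editorial sketch (not an answer key from the page):

```python
# Regression of chick weight (g) on egg diameter (mm).
r, sd_x, sd_y = 0.75, 0.45, 0.5      # correlation, SD of diameter, SD of weight
mean_x, mean_y = 23.0, 6.0           # average diameter, average weight

slope = r * sd_y / sd_x              # grams of weight per mm of diameter
intercept = mean_y - slope * mean_x  # estimated weight at diameter 0
print(round(intercept, 2))           # -13.17 (grams)
print(round(slope * 0.5, 2))         # 0.42 (grams heavier per extra 0.5 mm)
```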
• one year ago
Sum of two uniform distributions and other questions.
November 27th 2010, 11:45 AM #1
Oct 2009
Sum of two uniform distributions and other questions.
Assume that
X has uniform distribution on the interval [0,1] and Y has uniform
distribution on the interval [0,2]. Find the density of Z = X + Y .
Would this just be 1+1/2 = 3/2?
Suppose X with values (0, 1) has density $f(x) = cx^2(1-x)^2$ for 0 < x < 1. Find:
a) the constant c; b) E[X]; c) Var[X]
I get, a) c = 30; b) 1/2; c) 1/28
Transistors produced by one machine have a lifetime which is exponentially distributed with mean 100 hours. Those produced by a second machine have an exponentially distributed lifetime with mean
200 hours. A package of 12 transistors contains 4 produced by the first machine, and 8 produced by the second machine. Let X be the lifetime of a transistor picked at random from this package.
a) P(X greater or equal to 200 hours); b) E[X]; c) Var[X]
So would E[X] = (1/3)(100) + (2/3)(200) = 500/3? If so, then for a) I get e^(-6/5) = 0.3012, and for c) I get (500/3)^2 = 250000/9
Last edited by mr fantastic; November 27th 2010 at 08:32 PM. Reason: Re-titled.
That's too many questions in a single thread. Try to post only one question per thread. It's easier to read and makes your thread less messy.
1) I don't think so. Look at this link (page No 8), which finds the pdf of the sum of 2 independent uniform RVs.
2) The values look correct.
Edit: No 3: Your expected value is correct, but not the variance. You are doing $V(X)=(E(X))^2$, which is not correct.
You have found $E[X]$; then you need to find $E[X^2]$ (using conditional expectation), and the variance is found using the identity $Var(X) = E[X^2] - (E[X])^2$.
Last edited by harish21; November 27th 2010 at 12:48 PM.
But for exponential r.v.'s isn't E[X]=SD[X]? So then to get variance, couldn't you just square SD[X]?
the sum of two indep uniforms is triangular
the easiest technique is to obtain the CDF of the sum via geometry on the unit square
I got the sum of the RVs, thanks!
Could someone verify the third question? Is the logic correct in my previous post?
what is the third question?
The density $x^2(1-x)^2$ on 0<x<1 is a Beta, so all those questions are obvious.
In general if $\phi_{x} (*)$ is the PDF of a random variable x and $\phi_{y} (*)$ is the PDF of a random variable y, the PDF of the random variable z=x+y is given by...
$\displaystyle \phi_{z} (z) = \phi_{x} (z) * \phi_{y} (z)$ (1)
... where * means 'convolution'. The (1) permits one to use fruitfully L-transform or F-transform techniques to obtain in a comfortable way the $\phi_{z} (*)$...
Kind regards
In general if $\phi_{x} (*)$ is the PDF of a random variable x and $\phi_{y} (*)$ is the PDF of a random variable y, the PDF of the random variable z=x+y is given by...
$\displaystyle \phi_{z} (z) = \phi_{x} (z) * \phi_{y} (z)$ (1)
... where * means 'convolution'. The (1) permits one to use fruitfully L-transform or F-transform techniques to obtain in a comfortable way the $\phi_{z} (*)$...
In the problem proposed by BrownianMan it is...
$\displaystyle \mathcal{L} \{\phi_{x} (*)\} = \Phi_{x} (s) = \frac{1-e^{-s}}{s}$
$\displaystyle \mathcal{L} \{\phi_{y} (*)\} = \Phi_{y} (s) = \frac{1-e^{-2 s}}{2 s}$ (1)
... so that is...
$\displaystyle \phi_{z} (z)= \mathcal{L}^{-1}\{\frac{1-e^{-s} - e^{-2 s} + e^{-3 s}}{2 s^{2}} \}$ (2)
The function $\phi_{z} (z)$ is illustrated here...
Kind regards
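(Editorial check, not from the thread.) The density illustrated above is trapezoidal: $f_Z(z) = z/2$ on $[0,1]$, $1/2$ on $[1,2]$, and $(3-z)/2$ on $[2,3]$. A quick numerical convolution confirms it:

```python
# Riemann-sum approximation of (f_X * f_Y)(z) for X ~ U[0,1], Y ~ U[0,2].
import numpy as np

n = 3001                             # grid over [0, 3], step 0.001
z = np.linspace(0.0, 3.0, n)
dz = z[1] - z[0]
fx = np.where(z <= 1.0, 1.0, 0.0)    # density of X ~ U[0,1] on the grid
fy = np.where(z <= 2.0, 0.5, 0.0)    # density of Y ~ U[0,2] on the grid
fz = np.convolve(fx, fy)[:n] * dz    # discrete convolution times the step
print(fz[500], fz[1500], fz[2500])   # ~0.25, ~0.5, ~0.25 at z = 0.5, 1.5, 2.5
```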
this is the third question:
Transistors produced by one machine have a lifetime which is exponentially distributed with mean 100 hours. Those produced by a second machine have an exponentially distributed lifetime with mean
200 hours. A package of 12 transistors contains 4 produced by the first machine, and 8 produced by the second machine. Let X be the lifetime of a transistor picked at random from this package.
a) P(X greater or equal to 200 hours); b) E[X]; c) Var[X]
So would E[X] = (1/3)(100) + (2/3)(200) = 500/3? If so, then for a) I get e^(-6/5) = 0.3012, and for c) I get (500/3)^2 = 250000/9
It is independent of the others.
As told in post No 2, the expected value is correct, but the variance is wrong.
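(Editorial sketch, not from the thread.) Carrying out the corrected computation for No 3 with $Var(X) = E[X^2] - (E[X])^2$; note that $P(X \geq 200)$ also has to be averaged over the two machines, so it comes out slightly below $e^{-6/5}$:

```python
# X is exponential with mean 100 w.p. 1/3 and exponential with mean 200 w.p. 2/3.
from math import exp

w = [1/3, 2/3]                                        # mixture weights
mu = [100.0, 200.0]                                   # component means

EX = sum(wi * m for wi, m in zip(w, mu))              # 500/3, about 166.67
EX2 = sum(wi * 2 * m**2 for wi, m in zip(w, mu))      # E[X^2] = 2*mu^2 per component
var = EX2 - EX**2                                     # 290000/9, about 32222.22
p200 = sum(wi * exp(-200/m) for wi, m in zip(w, mu))  # P(X >= 200), about 0.2904
print(EX, var, p200)
```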
November 27th 2010, 12:18 PM #2
November 29th 2010, 11:11 AM #3
Oct 2009
November 29th 2010, 02:52 PM #4
November 29th 2010, 04:52 PM #5
Oct 2009
November 29th 2010, 09:57 PM #6
November 30th 2010, 01:07 AM #7
November 30th 2010, 01:42 AM #8
November 30th 2010, 08:12 AM #9
Oct 2009
November 30th 2010, 08:42 PM #10 | {"url":"http://mathhelpforum.com/advanced-statistics/164543-sum-two-uniform-distributions-other-questions.html","timestamp":"2014-04-19T01:07:03Z","content_type":null,"content_length":"66538","record_id":"<urn:uuid:b92fef9d-82a0-4a5c-931d-cdc11c08b293>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00549-ip-10-147-4-33.ec2.internal.warc.gz"} |
Turing machine notation for not on multiple symbols
November 25th 2011, 11:24 AM #1
Sep 2011
Turing machine notation for not on multiple symbols
I am trying to draw a Turing Machine diagram where I have a state and I go to one state if the current symbol pointed to by the head is an a or b, so I have (a, b) on that edge. I need another
transition for when it's not a AND not b to go to another state. I don't want to label the edge as (!a, !b) as I think this makes my machine non-deterministic since technically a would be !b and
could follow this transition, same with b and !a. I am not able to find common notation for something like this. All the examples I find show a single symbol transition of something like a and
then !a, but not multiple symbols like my case. Thanks for any advice.
Re: Turing machine notation for not on multiple symbols
There is no generally accepted format for Turing machine state notation, so you will need to consult your book or notes for the course (if this is for a course).
One typical method is to denote the transition from one state to another with an arrow labeled by one or more conditions which cause that state transition: 0/1,R would mean it only transitions on
zero, and when it does, it changes 0 to 1 and moves right on the tape. I would list them separately on the arrow or draw two arrows, as a/X,Y and b/X,Y, where X,Y are the new symbol and motion of
the scanner.
If you need to limit space and your instructor doesn't care about formatting, I guess you could do it like this:
Arrow 1 label: $(A \vee B)/X,Y$
Arrow 2 label: $\neg(A \vee B)/X,Y$
Alternative arrow 2 label: $(\neg A \wedge \neg B)/X,Y$
These are mutually exclusive so it is still deterministic. (Though non-determinism is not your only problem; you define the wrong machine the way you describe.)
EDIT: Note that if you're not changing the symbol on the tape to the same thing each time you pass it, you'll want to write those transitions separately or come up with a way to express "no
change" as a symbol, maybe using a dash, or something else not in (alphabet $\cup$ blank).
Re: Turing machine notation for not on multiple symbols
I agree that there are no standard notations for Turing Machines. I would denote a transition corresponding to not $a$ and not $b$ as $\overline{\{a,b\}}$ or just $\overline{a,b}$. You could also
use proper set-theoretic notations like $\Sigma-\{a,b\}$. But in any case I would add a note explaining the notation.
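To see that this stays deterministic, here is a sketch in code (editorial; the state names and alphabet are made up, not from the thread). Each symbol matches exactly one branch, with the complement of $\{a,b\}$ handled by a catch-all:

```python
# Transition function delta for a single state q0 with two arrows:
# one for {a, b}, one for everything else in the alphabet.
SIGMA = {'a', 'b', 'c', '_'}             # hypothetical tape alphabet, '_' = blank

def delta(state, symbol):
    assert symbol in SIGMA
    if state == 'q0':
        if symbol in {'a', 'b'}:
            return ('q1', symbol, 'R')   # write the same symbol back, move right
        return ('q2', symbol, 'R')       # complement branch: not a AND not b
    raise KeyError((state, symbol))

print(delta('q0', 'a'))   # ('q1', 'a', 'R')
print(delta('q0', 'c'))   # ('q2', 'c', 'R')
```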
November 25th 2011, 12:10 PM #2
Aug 2011
November 25th 2011, 12:44 PM #3
MHF Contributor
Oct 2009 | {"url":"http://mathhelpforum.com/discrete-math/192667-turing-machine-notation-not-multiple-symbols.html","timestamp":"2014-04-20T17:41:07Z","content_type":null,"content_length":"39436","record_id":"<urn:uuid:2496b925-485f-4f29-9e6b-ff491ad72788>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00336-ip-10-147-4-33.ec2.internal.warc.gz"} |
Date: 05/10/2001 at 10:34:45
From: Rick Wright
Subject: Averages
Dr. Math -
My fourth grader tells me that the average of a set of data is always
computed by adding all the numbers together and dividing by the total
number. Isn't this a misconception? I need to provide some specific
discussion or an activity to correct his thinking. Please help.
Thank you.
Date: 05/10/2001 at 13:17:06
From: Doctor Twe
Subject: Re: Averages
Hi Rick - thanks for writing to Dr. Math.
It depends on how strictly you want to define the term "average."
Merriam-Webster's OnLine Dictionary at:
defines "average" as:
1a: a single value (as a mean, mode, or median) that summarizes
or represents the general significance of a set of unequal
b: MEAN
2a: an estimation of or approximation to an arithmetic mean
b: a level (as of intelligence) typical of a group, class, or
series <above the average>
3: a ratio expressing the average performance especially of an
athletic team or an athlete computed according to the number
of opportunities for successful performance
- on average or on the average: taking the typical example of the
group under consideration <prices have increased on average by
five percent>
Later, the dictionary goes on to say:
"AVERAGE is exactly or approximately the quotient obtained by
dividing the sum total of a set of figures by the number of
figures (scored an average of 85 on tests)."
This is what mathematicians define as the "mean," and it is the most
common meaning of the term "average." (Note that this is exactly
definition 1b, and definition 2a also refers to the mean.)
The mean is one of three common measures of central tendency, the
others being the median and the mode. Medians and modes are also
sometimes referred to as averages, as supported by the dictionary's
definition 1a. (The median is the middle value of the data set when
arranged in ascending order, and the mode is the most frequently
occurring value(s) in the data set.)
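The three measures just described can be computed with Python's statistics
module (an editorial illustration with made-up data):

```python
import statistics

data = [2, 3, 3, 5, 7, 10]
print(statistics.mean(data))     # 5   -> sum of values divided by their count
print(statistics.median(data))   # 4.0 -> middle of the sorted list (here the
                                 #        average of the two middle values)
print(statistics.mode(data))     # 3   -> most frequently occurring value
```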
There are also "weighted averages," where some values are given more
weight (or counted more often) than other values in the data set.
However, for a fourth grader, understanding what an average is and how
to compute it (in the common usage of the term) is more important than
making distinctions among other types of "average" measures. When your
student is a little older, he or she will learn more about statistical
measures and tools, and will learn about these other "averages."
I hope this helps. If you have any more questions, write back.
- Doctor TWE, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/57613.html","timestamp":"2014-04-20T20:56:22Z","content_type":null,"content_length":"7794","record_id":"<urn:uuid:4b7a2da7-fc88-4651-9472-cd2a53e42a00>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rational Expressions Worksheets
Here is a graphic preview for all of the Rational Expressions Worksheets. You can select different variables to customize these Rational Expressions Worksheets for your needs. The Rational
Expressions Worksheets are randomly created and will never repeat so you have an endless supply of quality Rational Expressions Worksheets to use in the classroom or at home. We have graphing
quadratic functions, graphing quadratic inequalities, completing the square. We also have several solving quadratic equations by taking the square roots, factoring, with the quadratic formula, and by
completing the square.
Our Rational Expressions Worksheets are free to download, easy to use, and very flexible.
These Rational Expressions Worksheets are a good resource for students in the 5th Grade through the 8th Grade.
Click here for a Detailed Description of all the Rational Expressions Worksheets.
Quick Link for All Rational Expressions Worksheets
Click the image to be taken to that Rational Expressions Worksheets.
Simplifying Rational Expressions Worksheets
These Rational Expressions Worksheets will produce problems for simplifying rational expressions. You may select what type of rational expression you want to use. These Rational Expressions
Worksheets are a good resource for students in the 5th Grade through the 8th Grade.
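The kind of problem these worksheets generate can be checked symbolically (an illustrative example using sympy; the expression is made up, not taken from the worksheets):

```python
# Simplify a rational expression by factoring and cancelling common factors:
# (x^2 - 9)/(x^2 + 5x + 6) = (x - 3)(x + 3) / ((x + 2)(x + 3)).
from sympy import symbols, cancel

x = symbols('x')
expr = (x**2 - 9) / (x**2 + 5*x + 6)
print(cancel(expr))     # (x - 3)/(x + 2)
```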
Adding and Subtracting Rationals Expressions Worksheets
These Rational Expressions Worksheets will produce problems for adding and subtracting rational expressions. You may select what type of rational expression you want to use. These Rational
Expressions Worksheets are a good resource for students in the 5th Grade through the 8th Grade.
Multiplying Rationals Expressions Worksheets
These Rational Expressions Worksheets will produce problems for multiplying rational expressions. You may select what type of rational expression you want to use. These Rational Expressions
Worksheets are a good resource for students in the 5th Grade through the 8th Grade.
Dividing Rationals Expressions Worksheets
These Rational Expressions Worksheets will produce problems for dividing rational expressions. You may select what type of rational expression you want to use. These Rational Expressions Worksheets
are a good resource for students in the 5th Grade through the 8th Grade.
Dividing Polynomials Worksheets
These Rational Expressions Worksheets will produce problems for dividing polynomials. You may select what type of polynomial you want to use. These Rational Expressions Worksheets are a good resource
for students in the 5th Grade through the 8th Grade.
Solving Rational Expression Worksheets
These Rational Expressions Worksheets will produce problems for solving rational expressions. You may select what type of problem you want to use. These Rational Expressions Worksheets are a good
resource for students in the 5th Grade through the 8th Grade. | {"url":"http://www.math-aids.com/Algebra/Algebra_1/Rational_Expressions/","timestamp":"2014-04-18T09:03:39Z","content_type":null,"content_length":"31540","record_id":"<urn:uuid:a867c0d3-32d3-48a8-ac41-30a0637a7b79>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wednesday Search Challenge (6/6/12): Who solves impossible problems... by accident?
Every so often you hear a story about some unwitting student solving a problem that they thought was part of their homework assignment, but was ACTUALLY an impossible-to-solve problem put on the
whiteboard by their professor as an example.
Usually, the student is running late and wasn’t paying attention—or some such similar background detail that makes the story seem realistic. And the story is usually told about some famous
mathematician--Newton, Einstein and Ramanujan are often mentioned.
But is this story (or some version of it) really true?
Question: Has some student accidentally solved an impossible-problem by not knowing it was impossible?
If so, who, where, when, how... and what was the problem anyway?
The perfect answer won’t just repeat another apocryphal story, but will give a credible reference or two. As Carl Sagan famously said, “extraordinary claims require extraordinary evidence.” [1]
Also, be sure to tell us how long it took you to find the answer AND the queries you used to drill your way towards the solution.
Search on!
[1] Carl Sagan "Encyclopaedia Galactica,” episode 12 of “Cosmos”, original broadcast date, December 14, 1980; 01:24 minutes in. PBS.
32 comments:
1. That was trivial - snopes have done all the work :D
I searched for "Impossible problem solved by accident", and got the snopes article, with a good set of references, a setup for a movie, and further details.
Answer: George B. Dantzig, and the problems were two previously unsolved statistical analyses that he found written on the board, thinking they were part of a homework assignment.
See the article for a full list of sources - further searching seems to back this up quite nicely.
2. George Dantzig, UC Berkeley, 1939, showed up late for a graduate level stats class
Searched for "student solves unsolvable math problem"
Got snopes article: http://www.snopes.com/college/homework/unsolvable.asp
3. George Dantzig, Berkely,1939, During my first year at Berkeley I arrived late one day to one of Neyman's classes. On the blackboard were two problems which I assumed had been assigned for
homework. I copied them down. A few days later I apologized to Neyman for taking so long to do the homework -- the problems seemed to be a little harder to do than usual. I asked him if he still
wanted the work. He told me to throw it on his desk. I did so reluctantly because his desk was covered with such a heap of papers that I feared my homework would be lost there forever."
"About six weeks later, one Sunday morning about eight o'clock, Anne and I were awakened by someone banging on our front door. It was Neyman. He rushed in with papers in hand, all excited: "I've
just written an introduction to one of your papers. Read it so I can send it out right away for publication." For a minute I had no idea what he was talking about. To make a long story short, the
problems on the blackboard which I had solved thinking they were homework were in fact two famous unsolved problems in statistics. That was the first inkling I had that there was anything special
about them."
His studies were interrupted by the War but his dissertation was the solution to the two problems "I. Complete Form Neyman-Pearson Fundamental Lemma. II. On the Non-Existence of Tests of
Student's Hypothesis Having Power Functions Independent of Sigma"
From http://www.umass.edu/wsp/statistics/tales/dantzig.html
There are multiple sources similar to this. Snopes.com led me to it with the search terms unsolvable problem and student. Took about 2-3 minutes to find all the information
4. [famous solved unsolvable problem] Took me to snopes! There I found George Danzig. http://www.snopes.com/college/homework/unsolvable.asp with credible references aplenty. An excellent reminder
that any question beginning with "Have you ever heard about..." should be searched on snopes first! This probably took 30 seconds so I wonder if you were looking for something trickier.
5. found! i started off with good old google "student impossible solve" (elegant) and then added "example" and re-searched after finding this page on the first page of results for my first search.
the re-search had a snopes page as my first result (http://www.snopes.com/college/homework/unsolvable.asp) which made me very happy, since i know they would cite references if true. and success!
george dantzig, with all the gritty details and references. took about 5 minutes.
6. It looks like this was George Dantzig, in 1939 at Berkeley. His professor put two equations on the board, George assumed they were homework and solved them over the next couple of days. His
professor [Neyman] showed up at his house 6 weeks later asking him if he could publish the work. I found details of the story in both his obituary from the washington post (http://
supernet.isenberg.umass.edu/photos/gdobit.html) and an excerpt from the College of Mathematics journal, found on snopes.com (http://www.snopes.com/college/homework/unsolvable.asp).
7. Search string "student solves impossible problem snopes"
8. This took me about 30min to complete.
It seems George Dantzig did this in 1939 while he was a graduate student at UC Berkeley.
First I started with a search on "Solve Impossible Math Problem". Which actually led me to http://www.snopes.com/college/homework/unsolvable.asp
There I learned of George Dantzig.
Google Search "George Dantzig" to find further proof led me to:
D J Albers, G L Alexanderson and C Reid, More mathematical people. Contemporary conversations (Boston, MA, 1990).
D J Albers and C Reid, An interview with George B. Dantzig : the father of linear programming, College Math. J. 17 (4) (1986), 293-314.
All of these give the same story: that while he was at UC Berkeley, he was late to class, copied down two statistical problems from the board, and thought they were homework problems. He handed them
in and 6 weeks later his professor (Jerzy Neyman) said they were ready for publication.
Of course this story was later used in Good Will Hunting and Rushmore.
9. Answer: Yes.
[ unsolved problem accidentally solved by student ]
The second search result leads to a snopes.com article at http://goo.gl/UqaIk about one George Bernard Dantzig (1914-2005) who worked out proofs to two then-unproved statistical theorems that he
mistook to be homework problems after his professor had written them on a chalkboard in 1939.
The Wikipedia entry for George Dantzig points to the following reference:
Cottle, Richard; Johnson, Ellis; Wets, Roger, "George B. Dantzig (1914-2005)", Notices of the American Mathematical Society, v.54, no.3, March 2007. (Available at http://goo.gl/wSf5c )
which states:
"Arriving late to one of Neyman’s classes, Dantzig saw two problems written on the blackboard and mistook them for a homework assignment. He found them more challenging than usual, but managed to
solve them and submitted them directly to Neyman. As it turned out, these problems were actually two open questions in the theory of mathematical statistics."
Time: 15 seconds to find the snopes.com article; another 10 minutes to read it and find the reference on Dantzig's Wikipedia page.
10. Took me around 30 seconds. I googled "impossible math problem solved" and read the third hit, from Snopes:
George Bernard Dantzig is the solver, and the article cites sources such as first-hand interviews with the solver.
11. A quick Google search for "student solves impossible problem" revealed a Snopes.com article (http://www.snopes.com/college/homework/unsolvable.asp). Searching for "george dantzig obituary"
revealed the Washington Post obituary here:http://www.washingtonpost.com/wp-dyn/content/article/2005/05/18/AR2005051802171.html.
12. I just googled "student solves proof"
It came up on Snopes: http://www.snopes.com/college/homework/unsolvable.asp
Albers, Donald J. and Constance Reid.
"An Interview of George B. Dantzig: The Father of Linear Programming."
College Mathematics Journal. Volume 17, Number 4; 1986 (pp. 293-314).
Brunvand, Jan Harold. Curses! Broiled Again!
New York: W. W. Norton, 1989. ISBN 0-393-30711-5 (pp. 278-283).
Dantzig, George B.
"On the Non-Existence of Tests of 'Student's' Hypothesis Having Power Functions
Independent of Sigma."
Annals of Mathematical Statistics. No. 11; 1940 (pp. 186-192).
Dantzig, George B. and Abraham Wald. "On the Fundamental Lemma of Neyman and Pearson."
Annals of Mathematical Statistics. No. 22; 1951 (pp. 87-93).
Pearce, Jeremy. "George B. Dantzig Dies at 90."
The New York Times. 23 May 2005.
13. Found with one search of "student impossible problem" on google. First item was http://www.snopes.com/college/homework/unsolvable.asp
who: George Dantzig
where: University of California, Berkeley
when: 1939
how: statistics class in Neyman's class
14. 4 clicks, 3 minutes:
Google "true story inspiration for good will hunting"
Clicked the second link: http://www.forumgarden.com/forums/science/50269-real-good-will-hunting.html ....nope, different story
Scanned the page, fourth link down:
....snopes mentioned him by name: George Dantzig
Wikipedia from that point, read the "Mathematical statistics" subsection. 2 references listed in the Wikipedia entry.
15. George Dantzig, graduate student at UC Berkeley in 1939. Near the beginning of class, the professor wrote 2 famously unsolved statistical problems on the board. Arriving late, Dantzig believed this
to be homework and, though it was "harder than usual," turned the solutions in a few days later.
Solved linear programming problems with the creation of the simplex algorithm. Later, when struggling for a thesis topic, the teacher just accepted these solutions in a binder.
Time to answer: 3 minutes
Search string (newest to oldest:
started out with teacher/student impossible, problems, etc; moved to "real life good will hunting" but discovered it was a janitor with a high iq - not one that had solved a problem; switched to
"real life good will hunting solved problem" to narrow the results). Snopes was the 3rd result with this and gave me the answer, Wikipedia confirmed.
16. George Dantzig - the theorems are described in these papers:
G. B. Dantzig, On the non-existence of tests of “Student’s” hypothesis having power functions independent of sigma, Ann. Math. Stat. 11 (1940), 186–192.
G. B. Dantzig and A. Wald, On the fundamental lemma of Neyman and Pearson, Ann. Math. Stat. 22 (1951), 87–93.
Not impossible, just not solved at the time. Found the initial story by googling "solved impossible problem homework", then "george danzig unsolved problems", then "theorems george dantzig proved
as homework". Took about 10 mins.
17. George Bernard Dantzig solved statistics problems that he believed to be part of a homework assignment. They are more accurately not unsolvable problems, but unproven statistical theorems for
which he worked out proofs. References to prove the validity of this claim are located at http://www.snopes.com/college/homework/unsolvable.asp, and the search was done in google by searching for
"student solves impossible math problem" and clicking on the first link. It took about 20 seconds.
18. George Dantzig.
I just googled "dantzig unsolved problem student" which led to the dantzig wikipedia entry (because i recalled that it was dantzig :)
but if you just google "unsolved problem student" it leads directly to the snopes entry about dantzig.
19. This was not that difficult (although it looked it). It took about 10 minutes to get the right answer (but 2 minutes or less to get my first try - and a few minutes checking to see if it was
correct). So actual searching - 2-3 minutes.
My first attempts used Google with these search terms:
"solved by accident" "maths problem"
I made an assumption that the type of problem that would match would have to be a maths problem - as those are the ones that seem impossible to solve, but do get solved (e.g. like Fermat's Last Theorem).
That quickly brought up a very recent news story about Shouryya Ray, a schoolboy who reportedly solved a puzzle "posed by Sir Isaac Newton that have baffled mathematicians for 350 years".
The problem with this is that on further searching it seems that the solution he came up with wasn't new, and as Ray was entering a competition I'm not sure that "accidental" is the right
description for what Ray achieved.
So I carried on searching. I reasoned that the chances are that if this had happened, the mathematician would be known and important. So I tried:
"impossible problem" mathematicians solved
The 2nd link was: http://www.techrepublic.com/article/geek-trivia-the-math-behind-the-myth/6040689
The story repeats the idea of whether this was an urban myth and says no - naming George Bernard Dantzig
Looking him up gave lots of references - all mentioning his solving an impossible problem as a student by turning up late and not knowing the problem was unsolved.
His obituary is as good a reference as any as it's from Stanford.
http://news.stanford.edu/news/2006/june7/memldant-060706.html and also http://www.stanford.edu/group/SOL/GBD/cottle-johnson-wets-2007.pdf
So the answer is:
WHO: George Bernard Dantzig
WHERE: University of California, Berkeley
WHEN: 1939 (Source: http://en.wikipedia.org/wiki/George_Dantzig)
HOW: Dantzig arrived late to a lecture by the Statistics professor, Jerzy Neyman. He copied down two problems from the blackboard thinking that they were homework assignments. They were in fact
open, unsolved questions - but Dantzig solved them (although he thought that they were harder than usual).
WHAT: Two statistical problems - one connected to "Student's" Hypothesis and power functions, and the 2nd to do with the Neyman Pearson lemma. (Reference 12 in http://www.stanford.edu/group/SOL/
20. 5 minutes - indian boy who solved math problem -> http://ibnlive.in.com/news/indian-boy-solves-350yearold-math-problem/261730-2.html
1. ...except he didn't. This is media hype
21. This may not be the "impossible" problem mentioned - but it seems that George Bernard Dantzig performed a similar feat in 1939 by creating proofs for as-yet unsolved statistics problems at UC
Berkeley during a graduate-level stats exam.
Query: "Student solve impossible problem" from the lovely Drive-in google intro page.
22. George Bernard Dantzig
took a very short time
5:42 PM
student solves impossible problem - Google Search
5:41 PM
student solved unknown problem - Google Search
5:41 PM
solved unknown problem - Google Search
5:41 PM
solve unknown problem - Google Search
led me to
full story is there
23. http://www.snopes.com/college/homework/unsolvable.asp
24. The student was George Bernard Dantzig, and he 'solved two open problems in statistical theory which he had mistaken for homework after arriving late to a lecture of Jerzy Neyman.' (see his
wikipedia page)
- http://www.snopes.com/college/homework/unsolvable.asp
- http://en.wikipedia.org/wiki/George_Dantzig (yes wikipedia IS credible)
- D J Albers, G L Alexanderson and C Reid, More mathematical people. Contemporary conversations (Boston, MA, 1990).
- Time taken: about 2 mins to find a result, 8 mins for the sources -
25. It took under 1 minute, same way as previous commenters.
An Indian/German student, Shouryya Ray, was recently credited in the press with solving an old problem posed by Newton 350 years ago. I was curious, I found a neat paper written by Prof. Dr.
Ralph Chill and Prof. Dr. Jürgen Voigt disproving the claim.
“Nevertheless all his steps are basically known to experts, and we emphasize that he did not solve an open problem posed by Newton.”
26. I took a slightly different route but also came up with Dantzig:
I went to books.google.com and searched for: "newton unsolvable problem chalkboard"
The first result -- http://books.google.com/books?id=Y-0FeyoO4j0C&pg=PA79&dq=newton+unsolvable+problem+chalkboard&hl=en&sa=X&ei=dVHQT4SKKuXW2AXewqHaDA&ved=0CDUQ6AEwAA#v=onepage&q=
newton%20unsolvable%20problem%20chalkboard&f=false -- from a book about business complexity has the Dantzig story.
The second result -- http://books.google.com/books?id=qCehlw21nwgC&pg=PA279&dq=newton+unsolvable+problem+chalkboard&hl=en&sa=X&ei=dVHQT4SKKuXW2AXewqHaDA&ved=0CDwQ6AEwAQ#v=onepage&q=
newton%20unsolvable%20problem%20chalkboard&f=false -- from a book about urban legends has the Dantzig tale too, however, it also has a related story of 23-year old University of Chicago student
Robert Garisto who discovered an error in Newton's equations which were at the time 350 years old. Interesting.
Total search time < 1 minute.
27. http://goo.gl/B2LZq
Shouryya Ray
Dresden, Germany
28. I started (incorrectly) thinking that this had something to do with your earlier post about The Egg of Columbus or Gordian Knot. After wasting a few minutes on that I started typing in Google
[student solves...] Autocomplete suggested [student solves unsolvable math problem]
That took me to the Snopes article about George Dantzig. I was thinking it would be an urban legend and was surprised to find that it was labeled True. http://www.snopes.com/college/homework/
I tried to fact check their references by trying to find a copy of the article in the College Mathematics Journal
["An Interview of George B. Dantzig: The Father of Linear Programming"]
I tried looking for it at http://www.maa.org/pubs/cmj.html
and saw that they have many of their issues available through JSTOR. After checking every library system I have access to I had to give up. None of my libraries offer JSTOR access.
I was able to find a few obituaries that tell his story from the interview about solving the unsolvable problems.
Source Citation
Rubenstein, Steve. "George Bernard Dantzig -- Stanford math professor." San Francisco Chronicle 16 May 2005: B3. Gale Biography In Context. Web. 6 June 2012.
Document URL
For the question: Has some student accidentally solved an impossible-problem by not knowing it was impossible? YES
about 45 minutes
29. 5 mins or so:
Search "students solving impossible problems", grin past the lifehacker link and read the snopes entry
Correlate their version by searching for "dantzig credible source"
Found a 1999 CNN chatpage with Jan Harold Brunvand who credited an interview with Professor Dantzig
Then searched "jan harold brunvand dantzig" and found an extract from "A Digression on Urban Legends and 'Falsehood'" citing Brunvand's book "The Choking Doberman" on page 282.
I think that is more of the perfect answer. Thanks
30. It is interesting as a new reader to note how many of my fellow researchers chose to verify their answer after finding it on snopes.
I learned to research in the days before full text online. Using a source as authoritative as snopes (15 year track record, touchstone of popular online research) I eyeball the citations to make
sure it doesn't look like the Mikkelsons have lost their minds, but I don't bother to double-check them any more than I would have double-checked the printed citations in a trusted source in print.
Is this old-fashioned of me? Since it only takes five more minutes to verify the citations of a trusted source, I could see adopting the practice of checking them before considering my work done
-- but I also see a whole lot of five minutes spent verifying that yep, trustworthy sources are trustworthy. Thoughts?
1. I tend to double-check everything--even Snopes, even the NYTimes--it's not that I'm distrustful, it's because I've found that everyone (including me!) makes mistakes. Often it's unintentional,
but given that it's pretty quick and usually fairly simple to verify through a second source, I think it's a good practice. So, just as you said, "it only takes five more minutes to verify the
citations..." and maybe another couple of minutes to get another, different source to crosscheck.
I've just found too many errors in "usually trustworthy" sources. So I *always* check again.
[Numpy-discussion] Why is numpy.abs so much slower on complex64 than complex128 under windows 32-bit?
Francesc Alted francesc@continuum...
Tue Apr 10 10:36:56 CDT 2012
On 4/10/12 6:44 AM, Henry Gomersall wrote:
> Here is the body of a post I made on stackoverflow, but it seems to be
> a non-obvious issue. I was hoping someone here might be able to shed
> light on it...
> On my 32-bit Windows Vista machine I notice a significant (5x)
> slowdown when taking the absolute values of a fairly large
> numpy.complex64 array when compared to a numpy.complex128 array.
> >>> import numpy
> >>> a= numpy.random.randn(256,2048) + 1j*numpy.random.randn(256,2048)
> >>> b= numpy.complex64(a)
> >>> timeit c= numpy.float32(numpy.abs(a))
> 10 loops, best of 3: 27.5 ms per loop
> >>> timeit c= numpy.abs(b)
> 1 loops, best of 3: 143 ms per loop
> Obviously, the outputs in both cases are the same (to operating
> precision).
> I do not notice the same effect on my Ubuntu 64-bit machine (indeed,
> as one might expect, the double precision array operation is a bit
> slower).
> Is there a rational explanation for this?
> Is this something that is common to all windows?
I cannot tell for sure, but it looks like the windows version of NumPy
is casting complex64 to complex128 internally. I'm guessing here, but
numexpr lacks the complex64 type, so it has to internally do the upcast,
and I'm seeing kind of the same slowdown:
In [6]: timeit numpy.abs(a)
100 loops, best of 3: 10.7 ms per loop
In [7]: timeit numpy.abs(b)
100 loops, best of 3: 8.51 ms per loop
In [8]: timeit numexpr.evaluate("abs(a)")
100 loops, best of 3: 1.67 ms per loop
In [9]: timeit numexpr.evaluate("abs(b)")
100 loops, best of 3: 4.96 ms per loop
In my case I'm seeing only a 3x slowdown, but this is because numexpr is
not re-casting the outcome to complex64, while windows might be doing
this. Just to make sure, can you run this:
In [10]: timeit c = numpy.complex64(numpy.abs(numpy.complex128(b)))
100 loops, best of 3: 12.3 ms per loop
In [11]: timeit c = numpy.abs(b)
100 loops, best of 3: 8.45 ms per loop
in your Windows box and see if they yield similar results?
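For reference, a self-contained version of this comparison might look like the sketch below. The array shape follows Henry's original post; the `bench` helper is an editor's addition, not something from the thread, and is only a rough stand-in for `%timeit`:

```python
import timeit

import numpy as np

# Same test data as in the original post: a complex128 array and its
# complex64 downcast.
a = np.random.randn(256, 2048) + 1j * np.random.randn(256, 2048)
b = a.astype(np.complex64)

def bench(fn, repeat=5, number=10):
    """Best-of-`repeat` time per call, roughly what %timeit reports."""
    return min(timeit.Timer(fn).repeat(repeat=repeat, number=number)) / number

t128 = bench(lambda: np.abs(a))                              # native complex128
t64 = bench(lambda: np.abs(b))                               # native complex64
tup = bench(lambda: np.complex64(np.abs(np.complex128(b))))  # explicit round trip

print("abs(complex128)     : %.2f ms" % (t128 * 1e3))
print("abs(complex64)      : %.2f ms" % (t64 * 1e3))
print("explicit up/downcast: %.2f ms" % (tup * 1e3))
```

If the plain complex64 call and the explicit up/downcast variant come out close together on the Windows box, that supports the internal-upcast guess; a clearly faster complex64 call argues against it.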
> In a related note of confusion, the times above are notably (and
> consistently) different (shorter) to that I get doing a naive `st =
> time.time(); numpy.abs(a); print time.time()-st`. Is this to be expected?
This happens a lot, yes, especially when your code is memory-bottlenecked
(a very common situation). The explanation is simple: when your
datasets are small enough to fit in CPU cache, the first time the timing
loop runs, it brings all your working set to cache, so the second time
the computation is evaluated, it does not have to fetch data from
memory, and by the time you run the loop 10 times or more, you are
discarding any memory effect. However, when you run the loop only once,
you are considering the memory fetch time too (which is often much more expensive).
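This warm-cache effect is easy to demonstrate directly; the sketch below (an editor's illustration, not part of the original message) compares a single cold call against the best of repeated warm calls:

```python
import time

import numpy as np

a = np.random.randn(256, 2048) + 1j * np.random.randn(256, 2048)

# One-shot timing, as with the naive time.time() approach: this first
# call also pays the cost of pulling the working set toward the CPU.
t0 = time.perf_counter()
np.abs(a)
cold = time.perf_counter() - t0

# Repeated timing: after the first pass the data is as warm as it gets,
# so the best per-call time is typically shorter than the cold one.
times = []
for _ in range(20):
    t0 = time.perf_counter()
    np.abs(a)
    times.append(time.perf_counter() - t0)
warm = min(times)

print("first call: %.3f ms, best of 20: %.3f ms" % (cold * 1e3, warm * 1e3))
```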
Francesc Alted
More information about the NumPy-Discussion mailing list
25% of 18th century science
According to historian Clifford Truesdell,
… in a listing of all of the mathematics, physics, mechanics, astronomy, and navigation work produced in the 18th century, a full 25% would have been written by Leonhard Euler.
Other posts about Euler:
Publish or perish
Even perfect numbers
Platonic solids and Euler’s formula
Mathematical genealogy
He was number -exp(Pi*i)!
Euler is 25%
“Would have been written”?!?! Is that at all like “was written”? You know, I'm pretty sure if no one in history ever contributed to science, I “would have” single-handedly caught this world back up.
I read it as “[If a hypothetical listing were created of] all of the mathematics, physics, mechanics, astronomy, and navigation work produced in the 18th century, a full 25% would have been written
by Leonhard Euler.”
I read it like Johnny, too, but that leads to the question of metric. 25% of what? Books? Pamphlets? Pages? Words? Peer-reviewed journal articles?
Then again, using the 80-20 rule, 80% of the valuable content is in 20% of the work, so maybe Euler is responsible for more than 80% of the valuable content in those fields
But seriously, although those are mighty fields, I think science ought to also include chemistry, metallurgy, biology, and geology at least, and maybe medicine, so perhaps it isn’t fair to credit
Euler with 25% of science as a whole.
According to me:
… in a list of mathematics, physics, mechanics, astronomy, and navigation, a full 20% is the word, “navigation.”
Tagged with: History, Math
Posted in Math
Lincolnwood Math Tutor
Find a Lincolnwood Math Tutor
...Later on, I played with others in high school and took music classes in college which taught me a basic knowledge of guitar. I can teach basic and intermediate level guitar players. I cannot
read music notes, but I can read chord charts.
40 Subjects: including SAT math, ACT Math, algebra 1, algebra 2
...The effectiveness of these methods has already been successfully proved by the great progress of my former students. During the tutoring sessions I always try to demonstrate that actually
trigonometry is one of the simplest mathematical disciplines, which becomes easy for students only if they un...
8 Subjects: including algebra 1, algebra 2, calculus, geometry
...Different learners learn differently so I tailor my tutoring to their learning styles, strengths, and challenges. I make learning fun for my students, injecting humor, focusing on their
interests, and helping them experience the joy of success! As a mother of two sons, who are now in college, I...
15 Subjects: including algebra 1, prealgebra, reading, English
...I have previously tutored biology through a Northwestern University program, where I led a class each week for my peers this past year. I graduated from Northwestern University with a Bachelor's degree in biology, with a specific concentration in physiology.
7 Subjects: including algebra 1, biology, geometry, ACT Math
...I've been tutoring and teaching mathematics, science, English and test prep for over 25 years primarily to middle and lower high school students. Through my current professional job I work with
teachers on curriculum development and lesson planning. Therefore, I know exactly what students need to know throughout the school year.
11 Subjects: including algebra 1, algebra 2, vocabulary, grammar
Transformers explained
Posts: 264
Join date: 2010-02-03
Location: Costa Rica
• Post n°1
Transformers explained
Good article on basic transformer theory.
by Menno van der Veen
President Ir. buro Vanderveen
The article is in two parts: first what everyone should know, then remarks about coupling schemes, including a couple Menno has just invented. Here's Menno:
"This article explains how to select an optimal output transformer (OPT) for your special amplifier. I shall introduce the challenging possibilities of the new "Specialist" toroidal OPTs. The only
restriction Andre gave me for this article was: "please, no formulae--we can't control how they appear on the net". Therefore I will explain with words and point to my book, published lab reports and
AES preprints for those who love doing complex calculations. Here we go."
Part 1: What every audiophile should know
In tube amplifiers you need an OPT because the voltages in the tube amplifier are too large for your loudspeaker, while the current capability of the tubes is too small to drive your speaker
correctly. There exist fabulous Output Transformerless amplifier designs (OTL) but most tube amps use output transformers. The function of an OPT is to lower the high voltage to safe values and to
multiply the weak tube currents into larger values. This action is performed by winding different amounts of turns on the incoming (primary) and outgoing (secondary) side of the OPT. The turns ratio
between primary and secondary is the major tool performing this job.
All output transformers can be divided into two groups. Transformers for Single Ended amplifiers and transformers for push-pull amplifiers. The major difference between these two is that in SE OPTs
the quiescent current of the power triode is not compensated in the transformer core, while in push-pull transformers the quiescent currents of the two push-pull power tubes cancel each other out in
the core of the OPT. This means that an SE transformer must be constructed differently from a push-pull type. In general one can say that an SE-transformer includes a gap in the core to deal with the
quiescent current while the push-pull version has a closed core with almost no gap in it. This means: even when you own a very good push-pull OPT you can't use it in a SE-amplifier!
Suppose you want to select an OPT for a special design. Suppose that we are dealing with a push-pull amplifier. Somewhere in the tube spec or design notes you should find the primary impedance (Zaa)
for optimal loading of the power tubes. Let's imagine a design with a primary impedance of 3300 Ohms. On the secondary side you wish to connect a 4 or an 8 Ohms loudspeaker. I have standardized my
toroidal designs to a 5 Ohms secondary, but very often 4 and 8 Ohms connections are found. Suppose for now that you have found a transformer with a primary impedance of 4000 Ohms and secondaries at 4
and 8 Ohms. Can you use this transformer for your special design where Zaa should equal 3300 Ohms? The answer is yes, you only have to perform a minor calculation to see how.
In your transformer you have an impedance ratio of 4000/4 = 1000. Now suppose that you don't apply a 4 Ohms loudspeaker but a 3.3 Ohms version, then with this impedance ratio of 1000 you get a
primary impedance of 3300 Ohms. When you use the 8 Ohms secondary connection, your impedance ratio is 4000/8 = 500. To get a primary impedance of 3300 Ohm you should apply a loudspeaker with an
impedance of 3300/500 = 6.6 Ohms.
These examples demonstrate the following important rule: "The impedance ratio of the OPT combined with the impedance of the loudspeaker delivers the primary impedance."
Another example: suppose you have an SE-OPT with a primary impedance Za = 2500 Ohms. On the secondary you have a 4 Ohms connection. The impedance ratio is 2500/4 = 625. Now suppose that you wish to
build a 300B SE-amplifier with a primary impedance of 3500 Ohms. What speaker should you connect? The answer is: 3500/625 = 5.6 Ohms.
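Both worked examples reduce to the same two-step computation. As a sketch in Python (an editor's illustration; the article itself deliberately avoids formulae):

```python
def primary_impedance(impedance_ratio, z_speaker):
    """Primary impedance the power tubes see for a given speaker load."""
    return impedance_ratio * z_speaker

def speaker_for_target(impedance_ratio, z_primary_target):
    """Speaker impedance needed to present a target primary impedance."""
    return z_primary_target / impedance_ratio

# Push-pull example: 4000-ohm primary spec, 4-ohm tap -> ratio 1000.
print(round(primary_impedance(4000 / 4, 3.3)))  # 3300-ohm primary with a 3.3-ohm speaker
print(speaker_for_target(4000 / 8, 3300))       # 6.6-ohm speaker on the 8-ohm tap

# SE example: 2500-ohm primary, 4-ohm tap -> ratio 625.
print(speaker_for_target(2500 / 4, 3500))       # 5.6-ohm speaker for a 3500-ohm 300B load
```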
Everyone knows that the impedance of a loudspeaker is frequency dependent, meaning that at each frequency the impedance has a different value. Therefore loudspeaker manufacturers give a mean
impedance value. The consequence of this is that you never can calculate exactly the value of the primary impedance. Just try to be close to the primary and secondary impedances intended in the
design, but don't worry about deviations up to about 20%. This criterion will make your selection of an output transformer much easier.
The output transformer must be able to handle the output power without major losses and distortions, over the intended frequency range. This is a rather difficult topic because not all manufacturers
deliver all the information you need to judge whether the OPT is applicable or not. What you need is the following: "which is the lowest frequency at which the transformer can handle its nominal
power"? Take an example: suppose you select an OPT which can handle 50 Watts at 30 Hz. Then this transformer can handle 50/2 = 25 Watt at 30/1.414 = 21.2 Hz. Or the transformer can handle 50*2 = 100
Watt at 30*1.414 = 42.4 Hz.
The rule behind all this is: "The power capability doubles when the frequency is a factor 1.414 larger. The power capability halves when the lowest frequency is devided by 1.414 (squareroot 2)."
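Equivalently, the power capability scales with the square of the frequency ratio. A small sketch of the rule (an editor's illustration, not from the article):

```python
def power_capability(f, p_ref, f_ref):
    """Nominal power handling at frequency f, given a rating p_ref at f_ref.

    Encodes the rule above: doubling power for every factor-of-sqrt(2)
    rise in frequency is the same as P scaling with f squared.
    """
    return p_ref * (f / f_ref) ** 2

# The 50 Watt at 30 Hz transformer from the example:
print(round(power_capability(21.2, 50, 30), 1))   # about 25 W at 21.2 Hz
print(round(power_capability(42.4, 50, 30), 1))   # about 100 W at 42.4 Hz
```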
But what to do when the manufacturer only tells you that you are buying a 100 Watt transformer without mentioning the lowest nominal power frequency? To be honest: the lack of information makes you
'blind' and you don't know how the behaviour of this transformer will be at low frequencies. The lower the frequency of the input, the more the core of the OPT gets saturated and you only can guess
at which frequency severe distortions will start. The only thing you should count on is the good name of the manufacturer, expecting that he is knowing what he is doing. I plead, however, for OPT
power specifications to be specified with the lowest frequency clearly stated. That would help you in selecting the optimal OPT for your application.
All the magnet wire turns of the OPT have a resistance. The currents of the tubes are partly converted into heat in this internal resistance and therefore you lose power. This is expressed in the
"Insertion Loss" which you find in the specification of the OPT. Let me give an example: suppose an I-loss of 0.3 dB, how much power is lost in heat in the transformer? Now take your calculator and
calculate: 0.3/10 = 0.03, calculate -0.03 with the inverse log-function (10 to the power of x) resulting in 0.933. This results means that 93.3 % of the output power is converted into music while
100-93.3 = 6.7 % is converted into heat. This knowledge does not enable you to fry an egg on the transformer, so a more general rule will give better information: "Insertion losses smaller than 0.3
dB in an OPT indicate an acceptable heat loss without causing major difficulties."
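The same calculator exercise works for any insertion-loss figure; as a sketch (an editor's illustration, not from the article):

```python
def delivered_fraction(insertion_loss_db):
    """Fraction of output power reaching the load for a given insertion
    loss in dB; the remainder is dissipated as heat in the windings."""
    return 10 ** (-insertion_loss_db / 10)

frac = delivered_fraction(0.3)  # the 0.3 dB example above
print("delivered to load: %.1f %%" % (frac * 100))        # 93.3 %
print("lost as heat     : %.1f %%" % ((1 - frac) * 100))  # 6.7 %
```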
LOW FREQUENCY RANGE and DC-IMBALANCE
The calculation of the frequency range of an OPT is very complex. You find all the information and details in my AES preprint 3887: "Theory and Practice of Wide Bandwidth Toroidal Output
Transformers", which can be ordered from the AES.
The most important quantity determining the low frequency range of an OPT is the primary inductance Lp (its value is given in H = Henry). The larger Lp is, the better the low frequency response of
the transformer. To make Lp large you need a lot of magnet wire turns around the core and you need to use a large core. A second factor determing the low frequency range is the primary impedance of
the OPT paralleled with the plate resistances of the output tubes. The smaller the plate resistances and primary impedance the wider the frequency range at the low frequency side. Select power tubes
with a low plate resistance (like triodes) for a good bass response with very little distortion combined with OPT's with a large Lp value. (For a more detailed study see my article in Glass Audio 5/
97: "Measuring Output Transformer Performance", p20ff.)
However, the larger you make Lp, the more sensitive the OPT becomes to an imbalance of the quiescent currents of the power tubes in a push-pull amplifier design. In practice this means: when you use
high quality OPTs with good bass response and a large primary inductance, you should pay special attention to carefully balancing the quiescent currents of the power tubes. Whether you use my
toroidal designs or EI-designs or C-core designs, this is a general rule for large primary inductance OPT designs. If you don't balance your quiescent currents carefully, your maximum power
capability at low frequencies gets less and the distortions become larger.
At the high frequency side, two internal quantities of the transformer limit the high frequencies. These are: the effective internal capacitance between the windings (Cip) and the leakage inductance
of the transformer (Lsp). The leakage is caused by the simple fact that not all the magnetic fieldlines are captured in the core. Some leave the core and are outside the transformer. In this aspect
the toroidal transformers show very good specifications, because the round shaped core captures almost all the fieldlines, resulting in very small leakage inductances. The smaller the leakage, the
wider the frequency range.
The influence of the internal capacitance is the same: the smaller the internal capacitance, the wider the high frequency range. A transformer designer therefore has to find an optimal balance
between the leakage, the capacitance and the tubes and impedances used to create an optimal frequency range. I discuss this in detail in my 3887 AES preprint.
Now some rules:
"The smaller the plate resistances of the tubes, the wider the frequency range."
"When the balance between Lsp and Cip is not correct, square wave overshoot will occur (incorrect Q-factor)."
"When both Lsp and Cip are large, the high frequency range becomes limited and this will result in differential phase distortions" (meaning that the frequency components of a tone, or of the music,
will be time-shifted towards each other, resulting in a distorted tone envelope, detectable by the ear due to its nonlinear behaviour).
Let me summarise this as follows: it is up to the transformer designer not only to create a wide frequency range, but to tune the high frequency behaviour with the correct Q-factor (somewhere between
0.5 and 0.7). In that case no square wave overshoot will occur and the differential phase distortion will be minimal. Look into the specifications of the manufacturer to find more details about the
high frequency tuning of his designs. I have optimized my toroidal designs for a very wide frequency range, in order to be prepared for the new digital developments with sampling rates now up to 192 kHz--and who knows what the near future will bring?
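The Q-factor rule above can be made concrete with a small numerical sketch. This is not from the article: it simply treats the high-frequency side of the OPT as a series resonance formed by the leakage inductance Lsp, the internal capacitance Cip and the driving source (plate) resistance, with hypothetical component values, and checks whether the resulting Q lands in the recommended 0.5-0.7 window.

```python
import math

# Illustrative model (not the author's exact equivalent circuit): a series
# RLC formed by leakage inductance Lsp, internal capacitance Cip and the
# driving source resistance. All component values below are hypothetical.
def hf_resonance(lsp_henry, cip_farad, r_source_ohm):
    """Return (resonant frequency in Hz, quality factor Q) of the HF pole."""
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(lsp_henry * cip_farad))
    q = math.sqrt(lsp_henry / cip_farad) / r_source_ohm
    return f0, q

f0, q = hf_resonance(lsp_henry=10e-3, cip_farad=1e-9, r_source_ohm=5000.0)
print(f"f0 = {f0 / 1000:.1f} kHz, Q = {q:.2f}")
```

With these made-up values Q comes out near 0.63, inside the window; halving the source resistance would double Q and, per the rule above, predict square-wave overshoot.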
Part 2 of the article by Menno van der Veen:
At the leading edge
Recently I finalised a study and research about special coupling techniques between an OPT and power tubes. The results of this research can be found in my AES preprint 4643: "Modelling Power Tubes
and their Interaction with Output Transformers", obtainable from the AES.
My basic question was: "How can I couple power tubes optimally to an output transformer?" In order to answer this simple question I first had to design a mathematical model describing the behaviour
of power tubes. Fortunately many others (like for instance the pioneers Scott Reynolds and Norman Koren) already had studied this subject and I only had to add a very small extension to their models
to be able to model pentode power tubes rather accurately.
My next important step was the understanding that there are many ways to connect power tubes to output transformers. I only mention a few possibilities: pentode push-pull, ultra linear, triode push-pull, cathode feedback, cathode out, and so on.
I discovered that it is possible to bring all these various coupling techniques into one general model by means of the introduction of the screen grid feedback ratio X and the introduction of the
cathode feedback ratio. The figure shows eight different coupling methods between the push-pull power tubes and the OPT. To investigate all these amplifiers, I designed new toroidal output
transformers: the "Specialist Range".
These new transformers contain very special windings for the application of selectable cathode and screen grid feedback. For more information see the web pages of Plitron and Amplimo. The major time
consuming element in the designing of these new toroidal transformers was the demand that the amplifiers should be absolutely stable, not oscillating, and optimized in power, frequency range and damping factor behaviour. During my research and the development of the new OPTs I discovered two brand new circuits (numbers 5 and 7) which are under my registration and copyright. For trade use and/or manufacture please contact me for licensing.
I will not deal now with all the details of this new research. The technically inclined can order the 4643 AES preprint.
The circuits 5 (Super Pentode) and 7 (Super Triode) both show very special qualities not seen before by me in push-pull amps. For instance, circuit 5 delivers amazingly large output powers (80 Watt
with 2 x EL34 at 450-470 V supply), while circuit 7 delivers extremely small distortions (harmonic as well as intermodulation) and a damping factor surpassing triode amplifiers. In all this I noticed
(and calculated) that especially the quiescent current per power tube is a major tool in creating minimal distortions (while hardly decreasing the maximum output power).
The new toroidal "specialist" transformers are available for anyone to perform his/her private tests with these new amplifier possibilities. See Plitron's and Amplimo's web-sites. Their internal research reports can be ordered; they deal with these new technologies, giving background information and important references to the relevant literature.
I did not talk in length about SE-transformers, their selection and optimal application. All this information can be found in my new book which I hope to finish soon. I paid attention to impedances,
powers and losses, the frequency range, distortions and new coupling techniques. For those who wish to study these subjects in depth, I attach a bibliography of only my own writing, which in turn
contain bibliographies of the relevant references.
If comments, reactions and advice should appear in my email, I would be a very happy man.
1) Menno van der Veen; "Transformers and Tubes"; published by Plitron; www.plitron.com
2) Menno van der Veen; "Het Vanderveen Buizenbouwboek"; published by Amplimo; www.amplimo.nl
3) Menno van der Veen; "Theory and Practice of Wide Bandwidth Toroidal Output Transformers"; AES preprint 3887
4) Menno van der Veen; "Modelling Power Tubes and their Interaction with Output Transformers"; AES preprint 4643
5) Menno van der Veen; "Measuring Output Transformer Performance"; Glass Audio 5/97
6) Menno van der Veen; "Lab Report Specialist Range Toroidal Output Transformers"; published by Plitron; www.plitron.com
7) Menno van der Veen; "Specialist Ringkern Uitgangstransformatoren, de Super Pentode Schakeling"; published by Amplimo; www.amplimo.nl
All the above literature contains a wealth of references to other authors.
Super Pentode and Super Triode:
Names and principles are registered by the author and are subject to European Union and International Copyright Laws. Licensing enquires for reproduction of and manufacture for trade sale should be
directed to Menno van der Veen
Menno van der Veen (b1949) graduated in physics and electronics. He taught physics at the physics department of a teachers' college until he founded his research and consultancy company in 1986. He
is a board member of the Dutch Section of the Audio Engineering Society and of the Elpec (Dutch Electronic Press Association). He is a member of the Dutch Acoustic Society (NAG). He is the designer
of special toroidal output transformers for tube amplifiers in close cooperation with Plitron and Amplimo. Results of his output transformer research were published at AES conventions in 1994 and
1998. He wrote over 360 articles for various Dutch high end audio magazines and is the author of the book "Transformers and Tubes". Designing tube amplifiers is his vocation--combined with playing
the Jazz guitar. Currently he is writing two new books on tube amplifiers.
Copyright text and figures (c)1998, 2005 Menno van der Veen
Re: Transformers explained
thanks baddog this is an awesome post, I learned a lot from it. You are the "badest dog"
what is the advantage of galois field
December 22nd 2008, 12:13 AM
t s k r
what is the advantage of galois field
could any one please answer the question
What is the advantage of galois field .
how it is used in reed-solomon decoder and BCH decoder . please explain with example
December 22nd 2008, 08:50 AM
A simple way to see the use of a Field is to observe that in the generic communication design setting, we want to encode a k-bit string into an n-bit (n > k, naturally) string so that the added redundancy can be used to combat the effect of noise. Any set of n-tuples can be thought of as a code. However, a linear code facilitates easy encoding design and simpler decoding structures (though they may not be optimal) and hence we learn them in algebraic coding theory. A linear code is a vector space, and vector spaces are built over Fields. Since the assumed source alphabets are usually finite, we talk about vector spaces over finite fields, which are also termed Galois Fields.
It's not easy to explain the idea behind RS-encoding decoding or BCH decoding in a simple chat. Lots of results from finite fields are used to establish properties of BCH codes/RS codes and to
facilitate their decoding.
So can you be more specific? Which part of BCH decoding have you not understood?
One of the many ways to do BCH decoding is to use the extended Euclidean Division Algorithm to find the error locator polynomial and the error magnitude polynomial. Thus we find and correct errors.
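To make the field arithmetic concrete, here is a minimal sketch of multiplication in the small field GF(2^3). Practical RS decoders work in GF(2^8), usually with log/antilog tables, but the mechanics are identical; the primitive polynomial x^3 + x + 1 is one common (but here arbitrary) choice.

```python
# Minimal GF(2^m) multiplication: elements are integers whose bits are the
# coefficients of a polynomial over GF(2). Primitive polynomial x^3 + x + 1
# is encoded as 0b1011.
def gf8_mul(a, b, prim=0b1011, m=3):
    """Multiply two elements of GF(2^m) represented as integers."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # addition in GF(2^m) is XOR
        b >>= 1
        a <<= 1
        if a & (1 << m):         # reduce modulo the primitive polynomial
            a ^= prim
    return result

# The defining field property: every nonzero element has a unique inverse.
for x in range(1, 8):
    inverses = [y for y in range(1, 8) if gf8_mul(x, y) == 1]
    assert len(inverses) == 1

print(gf8_mul(3, 5))  # (x+1)(x^2+1) mod (x^3+x+1) = x^2  ->  4
```

The inverse check at the end is exactly what a decoder relies on: being able to divide by syndrome quantities when solving the key equation for error locations.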
Game Theory of Mind
This paper introduces a model of ‘theory of mind’, namely, how we represent the intentions and goals of others to optimise our mutual interactions. We draw on ideas from optimum control and game
theory to provide a ‘game theory of mind’. First, we consider the representations of goals in terms of value functions that are prescribed by utility or rewards. Critically, the joint value functions
and ensuing behaviour are optimised recursively, under the assumption that I represent your value function, your representation of mine, your representation of my representation of yours, and so on
ad infinitum. However, if we assume that the degree of recursion is bounded, then players need to estimate the opponent's degree of recursion (i.e., sophistication) to respond optimally. This induces
a problem of inferring the opponent's sophistication, given behavioural exchanges. We show it is possible to deduce whether players make inferences about each other and quantify their sophistication
on the basis of choices in sequential games. This rests on comparing generative models of choices with, and without, inference. Model comparison is demonstrated using simulated and real data from a
‘stag-hunt’. Finally, we note that exactly the same sophisticated behaviour can be achieved by optimising the utility function itself (through prosocial utility), producing unsophisticated but
apparently altruistic agents. This may be relevant ethologically in hierarchal game theory and coevolution.
Author Summary
The ability to work out what other people are thinking is essential for effective social interactions, be they cooperative or competitive. A widely used example is cooperative hunting: large prey is
difficult to catch alone, but we can circumvent this by cooperating with others. However, hunting can pit private goals to catch smaller prey that can be caught alone against mutually beneficial
goals that require cooperation. Understanding how we work out optimal strategies that balance cooperation and competition has remained a central puzzle in game theory. Exploiting insights from
computer science and behavioural economics, we suggest a model of ‘theory of mind’ using ‘recursive sophistication’ in which my model of your goals includes a model of your model of my goals, and so
on ad infinitum. By studying experimental data in which people played a computer-based group hunting game, we show that the model offers a good account of individual decisions in this context,
suggesting that such a formal ‘theory of mind’ model can cast light on how people build internal representations of other people in social interactions.
Citation: Yoshida W, Dolan RJ, Friston KJ (2008) Game Theory of Mind. PLoS Comput Biol 4(12): e1000254. doi:10.1371/journal.pcbi.1000254
Editor: Tim Behrens, John Radcliffe Hospital, United Kingdom
Received: July 2, 2008; Accepted: November 13, 2008; Published: December 26, 2008
Copyright: © 2008 Yoshida et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Funding: This work was supported by Wellcome Trust Programme Grants to RJD and KJF.
Competing interests: The authors have declared that no competing interests exist.
This paper is concerned with modelling the intentions and goals of others in the context of social interactions; in other words, how do we represent the behaviour of others in order to optimise our
own behaviour? Its aim is to elaborate a simple model of ‘theory of mind’ [1],[2] that can be inverted to make inferences about the likely strategies subjects adopt in cooperative games. Critically,
as these strategies entail inference about other players, this means the model itself has to embed inference about others. The model tries to reduce the problem of representing the goals of others to
its bare essentials by drawing from optimum control theory and game theory.
We consider ‘theory of mind’ at two levels. The first concerns how the goals and intentions of another agent or player are represented. We use optimum control theory to reduce the problem to
representing value-functions of the states that players can be in. These value-functions prescribe optimal behaviours and are specified by the utility, payoff or reward associated with navigating
these states. However, the value-function of one player depends on the behaviour of another and, implicitly, their value-function. This induces a second level of theory of mind; namely the problem of
inference on another's value-function. The particular problem that arises here is that inferring on another player who is inferring your value-function leads to an infinite regress. We resolve this
dilemma by invoking the idea of ‘bounded rationality’ [3],[4] to constrain inference through priors. This subverts the pitfall of infinite regress and enables tractable inference about the ‘type’ of
player one is playing with.
Our paper comprises three sections. The first deals with a theoretical formulation of ‘theory of mind’. This section describes the basics of representing goals in terms of high-order value-functions
and policies; it then considers inferring the unknown order of an opponent's value-function (i.e., sophistication or type) and introduces priors on their sophistication that finesse this inference.
In the second section, we apply the model to empirical behavioural data, obtained while subjects played a sequential game, namely a ‘stag-hunt’. We compare different models of behaviour to quantify
the likelihood that players are making inferences about each other and their degree of sophistication. In the final section, we revisit optimisation of behaviour under inferential theory of mind and
note that one can get exactly the same equilibrium behaviour without inference, if the utility or payoff functions are themselves optimised. The ensuing utility functions have interesting properties
that speak to a principled emergence of ‘inequality aversion’ [5] and ‘types’ in social game theory. We discuss the implications of this in the context of evolution and hierarchical game theory.
Here, we describe the optimal value-function from control theory, its evaluation in the context of one agent and then generalise the model for interacting agents. This furnishes models that can be
compared using observed actions in sequential games. These models differ in the degree of recursion used to construct one agent's value-function, as a function of another's. This degree or order is
bounded by the sophistication of agents, which determines their optimum strategy; i.e., the optimum policy given the policy of the opponent. Note that we will refer to the policy on the space of
policies as a strategy and reserve policy for transitions on the space of states. Effectively, we are dealing with a policy hierarchy where we call a second-level policy a strategy. We then address
inference on the policy another agent is using and optimisation under the implicit unobservable states. We explore these schemes using a stag-hunt, a game with two Nash equilibria, one that is
risk-dominant and another that is payoff-dominant. This is important because we show that the transition from one to the other rests on sophisticated, high-order representations of an opponent's
Policies and Value Functions
Let the admissible states of an agent be the set S, where the state at any time or trial is s[t]ЄS. We consider environments under Markov assumptions, where P[ij] = p(s[t+1] = i|s[t] = j) is the probability of going from state j to state i. This transition probability defines the agent's policy as a function of value, P(v). We can summarise this policy in terms of a matrix PЄℜ^n×n, with elements P[ij]. In what follows, we will use P(·) to denote a probability transition matrix that depends on its argument and p(·) for a probability on S. The value of a state is defined as utility or payoff, expected under iterations of the policy, and can be defined recursively:

(1) v = ℓ+ℓP+ℓP^2+…

The notion of value assumes the existence of a state-dependent quantity that the agent optimises by moving from one state to another. In Markov environments with n = |S| states, the value over states, encoded in the row vector vЄℜ^1×n, is simply the payoff at the current state ℓЄℜ^1×n plus the payoff expected on the next move, ℓP, the subsequent move ℓP^2 and so on. In short, value is the reward expected in the future and satisfies the Bellman equation [6] from optimal control theory; this is the standard equation of dynamic programming:

(2) v = ℓ+vP
We will assume a policy is fully specified by value and takes the form:

(3a) P[ij] = P(0)[ij]exp(λv[i]) / Σ[k]P(0)[kj]exp(λv[k])

Under this assumption, value plays the role of an energy function, where λ is an inverse temperature or precision; assumed to take a value of one in the simulations below. Using the formalism of Todorov [7], the matrix P(0) encodes autonomous (uncontrolled) transitions that would occur in the absence of control. These probabilities define admissible transitions and the nature of the state-space the agent operates in, where inadmissible transitions are encoded with P(0)[ij] = 0. The uncontrolled transition probability matrix P(0) plays an important role in the general setting of Markov decision processes (MDP). This is because certain transitions may not be allowed (e.g., going through a wall). Furthermore, there may be transitions, even in the absence of control, which the agent is obliged to make (e.g., getting older). These constraints and obligatory transitions are encoded in P(0). The reader is encouraged to read Ref. [7] for a useful treatment of optimal control problems and related approximation strategies.
Equation 3a is intuitive, in that admissible states with relatively high value will be visited with greater probability. Under some fairly sensible assumptions about the utility function (i.e., assuming a control cost based on the divergence between controlled and uncontrolled transition probabilities), Equation 3 is the optimum policy.
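The policy above can be sketched in a few lines. The following toy computation (the values and uncontrolled probabilities are made up for illustration) reweights uncontrolled transitions by the exponentiated value of the destination state and renormalises over each source state:

```python
import math

# Sketch of the softmax policy: transition probabilities are uncontrolled
# probabilities P(0) reweighted by exp(lambda * value of destination) and
# renormalised over each source state. Convention: p0[i][j] = prob of j -> i.
def softmax_policy(p0, v, lam=1.0):
    n = len(v)
    p = [[p0[i][j] * math.exp(lam * v[i]) for j in range(n)] for i in range(n)]
    for j in range(n):                   # normalise each column (source state)
        z = sum(p[i][j] for i in range(n))
        for i in range(n):
            p[i][j] /= z
    return p

# Two-state example: state 1 is more valuable, so transitions favour it.
p0 = [[0.5, 0.5], [0.5, 0.5]]
p = softmax_policy(p0, v=[0.0, 1.0])
print(round(p[1][0], 3))  # probability of moving 0 -> 1: e/(1+e) -> 0.731
```

Setting lam = 0 recovers the (normalised) uncontrolled transitions, while large lam makes the policy near-deterministic, mirroring the QRE limit discussed below.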
This policy connects our generative model of action to economics and behavioural game theory [8], where the softmax or logit function (Equation 3) is a ubiquitous model of transitions under value or attraction; for example, a logit response rule is used to map attractions A[i] to transition probabilities:

(3b) P[ij] = exp(λA[i]) / Σ[k]exp(λA[k])

In this context, λ is known as response sensitivity; see Camerer [8] for details. Furthermore, a logit mapping is also consistent with stochastic perturbations of value, which leads to quantal response equilibria (QRE). QRE are a game-theoretical formulation [9], which converges to the Nash equilibrium when λ goes to infinity. In most applications, it is assumed that perturbations are drawn from an extreme value distribution, yielding the familiar and convenient logit choice probabilities in Equation 3 (see [10] for details). Here, λ relates to the precision of random fluctuations on value.
Critically, Equation 3 prescribes a probabilistic policy that is necessary to define the likelihood of observed behaviour for model comparison. Under this fixed-form policy, the problem reduces to optimising the value-function (i.e., solving the nonlinear self-consistent Bellman equations). These are solved simply and quickly by using a Robbins-Monro or stochastic iteration algorithm [11]:

(4) v^(τ+1) = ℓ+v^(τ)P(v^(τ))

At convergence, v^(τ) becomes the optimal value-function, which is an analytic function of payoff, v(ℓ). From now on, we will assume v is the solution to the relevant Bellman equation. This provides an optimum value-function for any state-space and associated payoff, encoded in a 'game'.
Clearly, this is not the only way to model behaviour. However, the Todorov formalism greatly simplifies the learning problem and provides closed-form solutions for optimum value: In treatments based
on Markov decision processes, in which the state transition matrix depends on an action, both the value-function and policy are optimised iteratively. However, by assuming that value effectively
prescribes the transition probabilities (Equation 3), we do not have to define ‘action’ and avoid having to optimise the policy per se. Furthermore, as the optimal value is well-defined we do not
have to worry about learning the value-function. In other words, because the value-function can be derived analytically from the loss-function (irrespective of the value-learning scheme employed by
the agent), we do not need to model how the agent comes to acquire it; provided it learns the veridical value-function (which in many games is reasonably straightforward). This learning could use
dynamic programming [12], or Q-learning [13], or any biologically plausible scheme.
A Toy Example
The example in Figure 1 illustrates the nature and role of the quantities described above. We used a one-dimensional state-space with n = 16 states, where an agent can move only to adjacent states (
Figure 1A). This restriction is encoded in the uncontrolled transition probabilities. We assumed the agent is equally likely to move, or not move, when uncontrolled; i.e., the probability of
remaining in a state is equal to the sum of transitions to other states (Figure 1B). To make things interesting, we considered a payoff function that has two maxima; a local maximum at state four and
the global maximum at state twelve (Figure 1C). In effect, this means the optimum policy has to escape the local maximum to reach the global maximum. Figure 1D shows the successive value-function
approximations as Equation 4 is iterated from τ = 1 to 32. Initially, the local maximum captures state-trajectories but as the value-function converges to the optimal value-function, it draws paths
through the local maximum, toward the global maximum. Instead of showing example trajectories under the optimal value-function, we show the density of an ensemble of agents, ρ(s,t), as a function of time, starting with a uniform distribution on state-space, ρ(s,0) = 1/n (Figure 1E). The ensemble density dynamics are given simply by ρ(t+1) = Pρ(t). It can be seen that nearly all agents have found their goal by about t = 18 'moves'.
Figure 1. Toy example using a one-dimensional maze.
(A) The agent (red circle) moves to the adjacent states from any given state to reach a goal. There are two goals, where the agent obtains a small payoff (small square at state 4) or a big payoff
(big square at state 12). (B) The uncontrolled state transition matrix. (C) The payoff-function over the states with a local and global maximum. (D) Iterative approximations to the optimal
value-function. In early iterations, the value-function is relatively flat and shows a high value at the local maximum. With a sufficient number of iterations, τ≥24, the value-function converges to
the optimum (the red line) which induces paths toward the global maximum at state 12. (E) The dynamics of an ensemble density, under the optimal value-function. The density is uniform on state-space
at the beginning, t = 1, and develops a sharp peak at the global maximum over time.
In summary, we can compute an optimal value-function for any game, G(ℓ,P(0)) specified in terms of payoffs and constraints. This function specifies the conditional transition probabilities that
define an agent's policy, in terms of the probability of emitting a sequence of moves or state-transitions. In the next section, we examine how value-functions are elaborated when several agents play
the same game.
Games and Multiple Agents
When dealing with two agents the state-space becomes the Cartesian product of the admissible states of both agents, S = S[1]×S[2] (note that all that follows can be extended easily to m > 2 agents). This means that the payoff and value are defined on a joint-space for each agent k. The payoff for the first agent ℓ[1](i, j) occurs when it is in state i and the second is in state j. This
can induce cooperation or competition, unless the payoff for one agent does not depend on the state of the other: i.e., ∀j,k : ℓ[1](i, j) = ℓ[1](i, k). Furthermore, the uncontrolled probabilities for
one agent now become a function of the other agent's value, because one agent cannot control the other. This presents an interesting issue of how one agent represents the policy of the other.
In what follows, we consider policies that are specified by an order: first-order policies discount the policies of other agents (i.e., I will ignore your goals). Second-order policies are optimised
under the assumption that you are using a first-order policy (i.e., you are ignoring my goals). Third-order policies pertain when I assume that you assume I am using a first-order policy and so on.
This construction is interesting, because it leads to an infinite regress: I model your value-function but your value-function models mine, which includes my model of yours, which includes my model
of your model of mine and so on ad infinitum. We will denote the i-th order value-function for the k-th agent by v[k]^(i). We now consider how to compute these value-functions.
Sequential Games
In a sequential game, each agent takes a turn in a fixed order. Let player one move first. Here, the transition probabilities now cover the Cartesian product of the states of both agents and the
joint transition-matrix factorises into agent-specific terms. These are given by:

(5) Π[1](0) = P[1](0)⊗I, Π[2](0) = I⊗P[2](0)

where Π[k](0) specifies uncontrolled transitions in the joint-space, given the uncontrolled transitions P[k](0) in the space of the k-th agent. Their construction using the Kronecker tensor product ⊗ ensures that the transition of one agent does not change the state of the other. Furthermore, it assumes that the uncontrolled transitions of one agent do not depend on the state of the other; they depend only on the uncontrolled transitions P[k](0) among the k-th agent's states. The row vectors v[k] are the vectorised versions of the two-dimensional value-functions for the k-th agent, covering the joint states. We will use a similar notation for the payoffs, ℓ[k]. Critically, both agents have a value-function on every joint-state but can only change their own state. These value-functions can now be evaluated through recursive solutions of the Bellman equations, in which each agent's i-th order value-function solves Equation 2 under the assumption that the other agent acts on its value-function of order i−1 (Equation 6).
This provides a simple way to evaluate the optimal value-functions for both agents, to any arbitrary order. The optimal value-function for the first agent, when the second is using v[2]^(i−1), is v[1]^(i). Similarly, the optimal value under v[1]^(i) for the second is v[2]^(i+1). It can be seen that under an optimum strategy (i.e., a second-level policy) each agent should increase its order over the other until a QRE obtains when i→∞ for both agents. However, it is interesting to consider equilibria under non-optimal strategies, when both agents use low-order policies in the mistaken belief that the other agent is using an even
lower order. It is easy to construct examples where low-order strategies result in risk-dominant policies, which turn into payoff-dominant policies as high-order strategies are employed; as
illustrated next.
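The lifting in Equation 5 can be sketched directly, here with tiny two-state agents and made-up transition probabilities; the point is only that a Kronecker product with an identity matrix lets one agent move while freezing the other.

```python
# Lift each agent's uncontrolled transitions to the joint state-space with a
# Kronecker product, so a move by one agent leaves the other's state fixed.
# Convention: rows are source states; joint state (i, j) flattens to 2*i + j.
def kron(a, b):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a_ij * b_kl for a_ij in a_row for b_kl in b_row]
            for a_row in a for b_row in b]

def eye(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

p1 = [[0.7, 0.3], [0.3, 0.7]]   # agent 1's uncontrolled transitions (made up)
p2 = [[0.6, 0.4], [0.4, 0.6]]   # agent 2's uncontrolled transitions (made up)

big_p1 = kron(p1, eye(2))        # agent 1 moves, agent 2 frozen
big_p2 = kron(eye(2), p2)        # agent 2 moves, agent 1 frozen

# From joint state (0,0), agent 1's move can change only the first index.
print(big_p1[0])  # -> [0.7, 0.0, 0.3, 0.0]
```

Note that under big_p1 the only reachable states from (0,0) are (0,0) and (1,0): the second agent's state is untouched, exactly the constraint that one agent cannot control the other.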
A Stag-Hunt
In this example, we used a simple two-player stag-hunt game where two hunters can either jointly hunt a stag or pursue a rabbit independently [14]. Table 1 provides the respective payoffs for this
game as a normal form representation. If an agent hunts a stag, he must have the cooperation of his partner in order to succeed. An agent can catch a rabbit by himself, but a rabbit is worth less
than a stag. This furnishes two pure-strategy equilibria: one is risk-dominant with low-payoff states that can be attained without cooperation (i.e., catching a rabbit) and the other is payoff
dominant; high-payoff states that require cooperation (i.e., catching a stag). We assumed the state-space of each agent is one-dimensional with n[1] = n[2] = 16 possible states. This allows us to
depict the value-functions on the joint space as two-dimensional images. The dimensionality of the state-space is not really important; however, a low-dimensional space imposes sparsity on the
transition matrices, because only a small number of neighbouring states can be visited from any given state. These constraints reduce the computational load considerably. The ‘rabbit’ and ‘stag’ do
not move; the rabbit is at state four and the stag at state twelve. The key difference is that the payoff for the ‘stag’ is accessed only when both players occupy that state (or nearby), whereas the
payoff for the ‘rabbit’ does not depend on the other agent's state. Figure 2A shows the characteristic payoff functions for both agents. The ensuing value-functions for the order i = 1,…,4 from
Equation 6 are shown in Figure 2B. It can be seen that first-order strategies, defined by v[k]^(1), regard the ‘stag’ as valuable, but only when the other agent is positioned appropriately. Conversely,
high-order strategies focus exclusively on the stag. As one might intuit, the equilibrium densities of an ensemble of agents acting under first or high-order strategies have qualitatively different
forms. Low-order strategies result in both agents hunting the ‘rabbit’ and high-order schemes lead to a cooperative focus on the ‘stag’. Figure 2C shows the joint and marginal equilibrium ensemble
densities for t = 128 (i.e., after 128 moves) and a uniform starting distribution; for matched strategies, i = 1,…,4.
Figure 2. Stag-hunt game with two agents.
(A) The payoff-functions for the first (the left panel) and the second agent (the right panel) over the joint state-space. The red colour indicates a higher payoff. The payoff of the ‘stag’ (state
12) is higher than the ‘rabbit’ (state 4). (B) Optimal value-functions of first, second, third and fourth order (from the top to the bottom) for both agents. The low-order value-functions focus on
the risk-dominant states, while high-order functions lead to payoff dominant states that require cooperation. (C) The equilibrium densities of an ensemble of agents after 128 moves, when both agents
use matched value-functions in (B). The left and right panels show the joint and marginal equilibrium densities over the joint state-space and the state of the first agent, respectively.
Table 1. Normal-form representation of a stag-hunt in terms of payoffs in which the following relations hold: A>B≥C>D and a>b≥c>d.

                  Player 2: Stag   Player 2: Rabbit
Player 1: Stag       (A, a)            (D, b)
Player 1: Rabbit     (B, d)            (C, c)
Inferring an Agent's Strategy
In contrast to single-player games, polices in multi-player games have an order, where selecting the optimal order depends on the opponent. This means we have to consider how players evaluate the
probability that an opponent is using a particular policy or how we, as experimenters, make inferences about the policies players use during sequential games. This can be done using the evidence for
a particular policy, given the choices made. In the course of a game, the trajectory of choices or states y = s[1],s[2],…,s[T] is observed directly such that, under Markov assumptions:

(7) p(y|m) = Π[t]p(s[t+1]|s[t],m)

where mЄM represents a model of the agents and entails the quantities needed to specify their policies. The probability of a particular model, under flat priors on the models, is simply:

(8) p(m|y) = p(y|m) / Σ[m′ЄM]p(y|m′)
To illustrate inference on strategy, consider the situation in which the strategy (i.e., the policy order k[1]) of the first agent is known. This could be me and I might be trying to infer your
policy, to optimise mine; or the first agent could be a computer and the second a subject, whose policy we are trying to infer experimentally. In this context, the choices are the sequence of
joint-states over trials, y∈S, where there are n[1]×n[2] possible states; note that each joint state subsumes both ‘moves’ of each agent. From Equation 8 we can evaluate the probability of the second agent's strategy, under the assumption it entails a fixed and ‘pure’ policy of order k[2] (Equation 9):

p(k[2]|y,k[1]) = ∏t p(s[t+1]|s[t],k[1],k[2]) / Σk[2]′ ∏t p(s[t+1]|s[t],k[1],k[2]′)
Here, the model is specified by the unknown policy order, m = k[2] of the second agent. Equation 9 uses the joint transition probabilities on the moves of all players; however, one gets exactly the
same result using just the moves and transition matrix from the player in question. This is because, the contributions of the other players cancel, when the evidence is normalised. We use the
redundant form in Equation 9 so that it can be related more easily to inference on the joint strategies of all agents in Equation 8. An example of this inference is provided in Figure 3. In Figure 3A
and 3B, we used unmatched and matched strategies, respectively, to generate samples from the corresponding probability transition matrices, starting in the first state (i.e., both agents in state 1). These
simulated games comprised four consecutive 32-move trials of the stag-hunt game specified in Figure 2. The ensuing state trajectories are shown in the left panels. We then inverted the sequence using
Equation 9 over a model-space of candidate policy orders. The results for T = 1,…,128 are shown in the right panels. For both simulations, the correct strategy discloses itself after about sixty moves, in terms of
conditional inference on the second agent's policy. It takes this many moves because, initially, the path in joint state-space is ambiguous, as it moves towards both the rabbit and the stag.
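The move-by-move sharpening of this posterior can be sketched as a running Bayesian update (an illustrative reconstruction, assuming each candidate order k[2] implies a fixed joint transition matrix; names are ours):

```python
import numpy as np

def running_posterior(trajectory, transition_models):
    """Yield p(k2 | y[1..t], k1) after every move, under flat priors.
    transition_models[k] is the joint transition matrix implied by the
    k-th candidate policy order of the second agent."""
    log_post = np.zeros(len(transition_models))
    for s, s_next in zip(trajectory[:-1], trajectory[1:]):
        # accumulate the log evidence for each candidate order (Equation 9)
        log_post += np.array([np.log(P[s, s_next]) for P in transition_models])
        w = np.exp(log_post - log_post.max())
        yield w / w.sum()
```

While the observed moves are consistent with several candidate orders the posterior stays flat; it sharpens once the trajectory discriminates between them, mirroring the roughly sixty-move delay in Figure 3.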
Figure 3. Inference on agent's strategy in the stag-hunt game.
We assumed agents used unmatched strategies, in which the first agent used a fourth order strategy and the second agent used a first order strategy (A), and matched strategies - both agents used the
fourth order strategy (B). The left panels show four state trajectories of 32 moves simulated using (or generated from) value-functions in Figure 2B. The right panels show the conditional
probabilities of the second agent's strategy over a model-space of candidate orders, as a function of time.
Bounded Rationality
We have seen how an N-player game is specified completely by a set of utility functions and a set of constraints on state-transitions. These two quantities define, recursively, optimal value-functions of increasing order, and their implicit policies. Given these policies, one can infer the strategies employed by agents, in terms of which policies they are using, given a sequence of
transitions. In two-player games, when the opponent uses policy k, the optimum strategy is to use policy k+1. This formulation accounts for the representation of another's goals and optimising both
policies and strategies. However, it induces a problem: to optimise one's own strategy, one has to know the opponent's policy. Under rationality assumptions, this is not really a problem, because rational players will, by induction, use policies of sufficiently high order. This is because each player will use a policy with an order that is greater than the opponent's, and knows a rational opponent will do the same. The interesting issues arise when we consider bounds or constraints on the strategies available to each player, and their prior expectations about these constraints.
Here, we deal with optimisation under bounded rationality [4] that obliges players to make inferences about each other. We consider bounds, or constraints, that lead to inference on the opponent's
strategy. As intimated above, it is these bounds that lead to interesting interactions between players and properly accommodate the fact that real players do not have unbounded computing resources to
attain a QRE. These constraints are formulated in terms of the policy order k[i] of the i-th player, which specifies the corresponding value-function and policy. The constraints we consider are:
• The i-th player uses an approximate conditional density q[i](k[j]) on the strategy of the j-th player that is a point mass at the conditional mode.
• Each player has priors p[i](k[j]), which place an upper bound on the opponent's sophistication; ∀k[j]>K[i] : p[i](k[j]) = 0
These assumptions have a number of important implications. First, because q[i](k[j]) is a point mass at the mode, each player will assume every other player is using a pure strategy, as opposed to a strategy based on a mixture of value-functions. Second, under this assumption, each player will respond optimally with another pure strategy. Third, because there is an upper bound on k[j] imposed by an agent's priors, they will never call upon strategies more sophisticated than k[i] = K[i]+1. In this way, K[i] bounds both the prior assumptions about other players and the sophistication of the
player per se. This defines a ‘type’ of player [15] and is the central feature of the bounded rationality under which this model is developed. Critically, type is an attribute of a player's prior
assumptions about others. The nature of this bound means that any player cannot represent the goals or intentions of another player who is more sophisticated; in other words, it precludes any player
‘knowing the mind of God’ [16].
Representing the Goals of Another
Under flat priors on the bounded support of p[i](k[j]), the conditional mode can be updated with each move using Equation 9. Here, player one would approximate the conditional density on the opponent's strategy with the mode (Equation 10):

k̂[2](t) = arg max[k[2]] p(k[2]|y,k[1])
and optimise its strategy accordingly, by using a policy one order above this mode. This scheme assumes the opponent uses a fixed strategy and consequently accumulates evidence for each strategy over the duration of the game. Figure
4 illustrates the conditional dependencies of the choices and strategies; it tries to highlight the role of the upper bounds in precluding recursive escalation of k[i](t). Note that although each player assumes the other is using a stationary strategy, the player's own policy is updated after every move.
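The update cycle behind Equation 10 and Figure 4 can be sketched as a single inference step per move (a hypothetical reconstruction; the function name, the 0-indexing convention, and the two-model example are ours, not the paper's):

```python
import numpy as np

def tom_step(log_post, s, s_next, opponent_models, K_i):
    """One move of theory-of-mind inference. opponent_models[k] is the joint
    transition matrix expected if the opponent uses order k+1 (orders 1..K_i).
    Returns the updated log posterior and the player's next policy order."""
    for k in range(K_i):                          # priors vanish beyond K_i
        log_post[k] += np.log(opponent_models[k][s, s_next])
    mode_order = int(np.argmax(log_post[:K_i])) + 1   # posterior mode (Equation 10)
    return log_post, mode_order + 1                   # respond one order above it
```

Because the argmax is restricted to orders the prior bound K_i allows, the response never exceeds K_i + 1, which is what precludes runaway escalation of k[i](t).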
Figure 4. Schematic detailing inference on an opponent's strategy.
Figure 5A shows a realization of a simulated stag-hunt using two types of player with asymmetric bounds K[1] = 4 and K[2] = 3 (both starting with k[i](1) = 1). Both players strive for an optimum
strategy using Equation 10. We generated four consecutive 32-move trials (128 moves in total), starting in the first state, with both agents in state one. After 20 moves, the first, more
sophisticated, player has properly inferred the upper bound of the second and plays at one level above it. The second player has also optimised its strategy, which is sufficiently sophisticated to
support cooperative play. The lower panels show the implicit density on the opponent's strategy, p(k[2]|y,k[1]), and similarly for the second player. The mode of this density is the estimate used in Equation 10.
Figure 5. Inference on opponent's types in the stag-hunt game.
Two players with asymmetric types K[1] = 4 and K[2] = 3 used an optimum strategy based on the inferred opponent's strategy. (A) The top panel shows the strategies of both agents over time. The middle
and bottom panels show the implicit densities of the opponent's strategy for the first and the second player, respectively. The densities for both agents converge on the correct opponent's strategies
after around 20 moves. (B) The posterior probabilities over fixed and theory of mind (ToM) models. The left graph shows the likelihood over fixed models using k[1],k[2] = 1,…,5 and the right graph
shows the likelihood of ToM models with K[1],K[2] = 0,…,4. The veridical model (dark blue bar) has the maximum likelihood among the 50 models.
Inferring Theory of Mind
We conclude this section by asking if we, as experimenters, can infer post hoc on the ‘type’ of players, given just their choice behaviours. This is relatively simple and entails accumulating
evidence for different models in exactly the same way that the players do. We will consider fixed-strategy models in which both players use a fixed k[i] or theory of mind models, in which players
infer on each other, to optimise k[i](t) after each move. The motivation for considering fixed models is that they provide a reference model, under which the policy is not updated and therefore there
is no need to infer the opponent's policy. Fixed models also relate to an alternative [prosocial] scheme for optimising behaviour, reviewed in the discussion. The evidence for fixed models is (Equation 11):

p(y|k[1],k[2]) = ∏t p(s[t+1]|s[t],k[1],k[2])

whereas the evidence for theory of mind models is (Equation 12):

p(y|K[1],K[2]) = ∏t p(s[t+1]|s[t],k[1](t),k[2](t))

where the orders k[i](t) are inferred under the appropriate priors specified by K[i]. The key difference between these models is that the policy changes adaptively in the theory of mind model, in contrast to the fixed models.
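The contrast between the two model families in Equations 11 and 12 can be made concrete (an assumed-form sketch, not the paper's code): a fixed model scores every move with one and the same policy order, whereas a theory of mind model scores move t with the orders the players would have held at that time.

```python
import numpy as np

def fixed_evidence(trajectory, P_fixed):
    """log evidence under a fixed model (Equation 11): one matrix throughout."""
    return sum(np.log(P_fixed[s, s2])
               for s, s2 in zip(trajectory[:-1], trajectory[1:]))

def tom_evidence(trajectory, P_per_move):
    """log evidence under a theory of mind model (Equation 12);
    P_per_move[t] is the transition matrix implied by the orders at move t."""
    return sum(np.log(P_per_move[t][s, s2])
               for t, (s, s2) in enumerate(zip(trajectory[:-1], trajectory[1:])))
```

When the inferred orders never change, the two evidences coincide; any advantage of the theory of mind model therefore reflects moves where the adaptive policy fits the data better.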
Under flat model priors, the posterior, p(m[i]|y) (Equation 8) can be used for inference on model-space. We computed the posterior probabilities of fifty models, using Equation 11 and 12. Half of
these models were fixed models using k[1],k[2] = 1,…,5 and the remaining were theory of mind models with K[1],K[2] = 0,…,4. Figure 5B shows the results of this model comparison using the simulated
data shown in Figure 5A. We evaluated the posterior probability of theory of mind by marginalising over the bi-partition of fixed and theory of mind models, and it can be seen that the likelihood of
the theory of mind model is substantially higher than the fixed model. Furthermore, the model with types K[1] = 4 and K[2] = 3 supervenes, yielding a 94.5% confidence that this is the correct model.
The implicit densities used by the players on each other's strategy, p(k[2]|y,k[1]) and p(k[1]|y,k[2]) (see Equation 11), are exactly the same as in Figure 5A because the veridical model was selected.
Because we assumed the model is stationary over trials, the conditional confidence level increases with the number of trials; although this increase depends on the information afforded by the
particular sequence. On the other hand, the posterior distribution over models tends to be flatter as the model-space expands, because the difference between successive value-functions becomes smaller with increasing order. For the stag-hunt game in Figure 2, value-functions with k≥4 are nearly identical. This means that we could only infer with confidence that K[i]≥5 (see Figure S1).
Results

In this section, we apply the modelling and inference procedures of the preceding section to behavioural data obtained while real subjects played a stag-hunt game with a computer. In this experiment,
subjects navigated a grid maze to catch stags or rabbits. When successful, subjects accrued points that were converted into money at the end of the experiment. First, we inferred the model used by
subjects, under the known policies of their computer opponents. This allowed us to establish whether they were using theory of mind or fixed models and, under theory of mind models, how sophisticated
the subjects were. Using Equation 10 we then computed the subjects' conditional densities on the opponent's strategies, under their maximum a posteriori sophistication.
Experimental Procedures
The subject's goal was to negotiate a two-dimensional grid maze in order to catch a stag or rabbit (Figure 6). There was one stag and two rabbits. The rabbits remained at the same grid location and
consequently were easy to catch without help from the opponent. If one hunter moved to the same location as a rabbit, he/she caught the rabbit and received ten points. In contrast, the stag could
move to escape the hunters. The stag could only be caught if both hunters moved to the locations adjacent to the stag (in a co-operative pincer movement), after which they both received twenty
points. Note that as the stag could escape optimally, it was impossible for a hunter to catch the stag alone. The subjects played the game with one of two types of computer agents; A and B. Agent A
adopted a lower-order (competitive) strategy and tried to catch a rabbit by itself, provided both hunters were not close to the stag. On the other hand, agent B used a higher-order (cooperative)
strategy and chased the stag even if it was close to a rabbit. At each trial, both hunters and the stag moved one grid location sequentially; the stag moved first, the subject moved next, and the
computer moved last. The subjects chose to move to one of four adjacent grid locations (up, down, left, or right) by pressing a button; after which they moved to the selected grid. Each move lasted
two seconds and if the subjects did not press a key within this period, they remained at the same location until the next trial.
Figure 6. Stag-hunt game with two hunters: a human subject and a computer agent.
The aim of the hunters (red and green circles) is to catch stag (big square) or rabbit (small squares). The hunters and the stag can move to adjacent states, while the rabbits are stationary. At each
trial, both hunters and the stag move sequentially; the stag moved first, the subject moved next, and the computer moved last. Each round finishes when either of the hunters caught a prey or when a
maximum number of moves had expired.
Subjects lost one point on each trial (even if they did not move). Therefore, to maximise the total number of points, it was worth trying to catch a prey as quickly as possible. The round finished
when either of the hunters caught a prey or when a certain number of trials (15±5) had expired. To prevent subjects changing their behaviour, depending on the inferred number of moves remaining, the
maximum number of moves was randomised for each round. In practice, this manipulation was probably unnecessary because the minimum number of moves required to catch a stag was at most nine (from any
initial state). Furthermore, the number of ‘time out’ rounds was only four out of a total 240 rounds (1.7%). At the beginning of each round the subjects were given fifteen points, which decreased by
one point per trial, continuing below zero beyond fifteen trials. For example, if the subject caught a rabbit on trial five, he/she got the ten points for catching the rabbit, plus the remaining time
points: 10 = 15−5 points, giving 20 points in total, whereas the other player received only their remaining time points; i.e., 10 points. If the hunters caught a stag at trial eight, both received
the remaining 7 = 15−8 time points plus 20 points for catching the stag, giving 27 points in total. The remaining time points for both hunters were displayed on each trial and the total number of
points accrued was displayed at the end of each round.
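The scoring rule above can be restated as a small function (a paraphrase of the described rules, not the experiment's code), which reproduces the worked examples in the text:

```python
def round_score(catch_trial: int, prey: str, is_catcher: bool) -> int:
    """Points for one hunter in one round: 15 starting points minus one per
    trial, plus 20 for a joint stag catch or 10 for catching a rabbit."""
    time_points = 15 - catch_trial      # may go below zero beyond trial 15
    if prey == "stag":
        return time_points + 20         # both hunters receive the stag bonus
    return time_points + (10 if is_catcher else 0)
```

A rabbit caught on trial five yields 20 points for the catcher and 10 for the other hunter; a stag caught on trial eight yields 27 points for each.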
We studied six (normal young) subjects (three males) and each played four blocks with both types of computer agent in alternation. Each block comprised ten rounds; so that they played forty rounds in
total. The start positions of all agents; the hunters and the stag, were randomised on every round, under the constraint that the initial distances between each hunter and the stag were more than
four grids points.
Modelling Value Functions
We applied our theory of mind model to compute the optimal value-functions for the hunters and stag. As hunters should optimise their strategies based not only on the other hunter's behaviour but
also the stag's, we modelled the hunt as a game with three agents; two hunters and a stag. Here state-space became the Cartesian product of the admissible states of all agents, and the payoff was
defined on a joint space for each agent; i.e., on a |S[1]|×|S[2]|×|S[3]| array. The payoff for the stag was minus one when both hunters were at the same location as the stag and zero for the other
states. For the hunters, the payoff of catching a stag was one and accessed only when both the hunters' states were next to the stag. The payoff for catching a rabbit was one half and did not depend
on the other hunter's state. For the uncontrolled transition probabilities, we assumed that all agents would choose allowable actions (including no-move) with equal probability and allowed
co-occupied locations; i.e., two or more agents could be in the same state. Allowable moves were constrained by obstacles in the maze (see Figure 6).
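The uncontrolled dynamics described above can be sketched for a single agent as a uniform choice among allowable moves, including staying put (the grid layout and helper name are hypothetical; walls simply remove moves from the choice set):

```python
import numpy as np

def uncontrolled_transitions(width, height, blocked=frozenset()):
    """Return P[s, s'] for one agent on a width x height grid, choosing
    uniformly among allowable moves (no-move included). Rows for blocked
    cells are left at zero, since those states are unreachable."""
    n = width * height
    P = np.zeros((n, n))
    for x in range(width):
        for y in range(height):
            if (x, y) in blocked:
                continue
            s = y * width + x
            moves = [(x, y)] + [(x + dx, y + dy)
                                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                                if 0 <= x + dx < width and 0 <= y + dy < height
                                and (x + dx, y + dy) not in blocked]
            for nx, ny in moves:
                P[s, ny * width + nx] = 1.0 / len(moves)
    return P
```

The joint uncontrolled transition over hunters and stag would then be built on the Cartesian product of such single-agent dynamics.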
We will refer to the stag, subject, and computer as the 1st, 2nd, and 3rd agent, respectively. The transition probability at each trial combines the sequential moves of all three agents. The i-th order value-function for the j-th agent was evaluated through recursive solutions of the Bellman equations, by generalising Equation 6 to three players (Equation 13).
Notice that the first agent's (stag's) value-function is fixed at first-order. This is because we assumed that the hunters believed, correctly, that the stag was not sophisticated. We used a
convergence criterion to calculate the optimal value-functions, using Equation 4. For simplicity, we assumed the sensitivity λ of each player was one. A maximum likelihood estimation of the
subjects' sensitivities, using the observed choices from all subjects together, showed that the optimal value was λ = 1.6. Critically, the dependency of the likelihood on strategy did not change much
with sensitivity, which means our inferences about strategy are fairly robust to deviations from λ = 1 (see Figure S2). When estimated individually for each subject, the range was 1.5≤λ≤1.8,
suggesting our approximation was reasonable and enabled us to specify the policy for each value-function and solve Equation 13 recursively.
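The role of the sensitivity λ assumed above can be sketched as a softmax that turns next-state values into move probabilities, which in turn feed the Bellman recursion of Equation 13 (a hedged sketch of the assumed policy form; the function name is ours):

```python
import numpy as np

def softmax_policy(values_of_next_states, lam=1.0):
    """p(move) proportional to exp(lam * V(next state)); lam is the
    sensitivity, set to one in the simulations reported above."""
    v = lam * np.asarray(values_of_next_states, dtype=float)
    w = np.exp(v - v.max())      # subtract the max for numerical stability
    return w / w.sum()
```

As λ grows the policy sharpens towards the greedy move, so inferences that are insensitive to λ near one (as reported) indicate the ordering of values matters more than their scale.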
The ensuing optimal value-functions of the subject, for i = 1,…,4, are shown in Figure 7. To depict the three-dimensional value-functions of one agent in two-dimensional state-space, we fixed the
positions of the other two agents for each value-function. Here, we show the value-functions of the subject for three different positions of the computer and the stag (three examples of four
value-functions of increasing order). The locations of the computer and stag are displayed as a red circle and square respectively. One can interpret these functions as encoding the average direction
the subject would choose from any location. This direction is the one that increases value (lighter grey in the figures). It can be seen that the subject's policy (whether to chase a stag or a
rabbit) depends on the order of value-functions and the positions of the other agents. The first-order policy regards the rabbits as valuable because it assumes that other agents move around the maze
in an uncontrolled fashion, without any strategies, and are unlikely to help catch the stag. Conversely, if subjects account for the opponent's value-functions (i.e., using the second or higher order
policies), they behave cooperatively (to catch a stag), provided the opponent is sufficiently close to the stag. Furthermore, with the highest order value-function, even if the other hunter is far
away from the stag, the subject still tries to catch the stag (top right panel in Figure 7). For all orders of value-functions, the stag's value becomes higher than the rabbits', when the other
hunter is sufficiently close to the stag (the middle row). However, interestingly, the policies here are clearly different; in the first-order function, value is higher for the states which are
closer to the stag and the two states next to the stag have about the same value. Thus, if the subject was in the middle of the maze, he/she would move downward to minimize the distance to the stag.
In contrast, in the second and higher-order functions, the states leading to the right of the stag are higher than those to the left, where the other hunter is. This is not because the right-side states
are closer to another payoff, such as a rabbit. In fact, even when the other hunter is on the right side of the stag and very close to the rabbit, the states leading to the other (left) side are
higher in the fourth-order function (bottom right panel). These results suggest that sophisticated subjects will anticipate the behaviour of other agents and use this theory of mind to compute
effective ways to catch the stag, even if this involves circuitous or paradoxical behaviour.
Figure 7. The optimal value-functions of the subjects for four different orders (columns) and for three different positions (rows).
The circles are the computer agent's locations, and the big and small squares are the locations of the stags and the rabbits, respectively. Brighter colours indicate higher values.
Modelling Strategy
Using these optimal value-functions, we applied the model comparison procedures above to infer the types of the subjects. We calculated the evidence for each subject acting under a fixed or theory of
mind model using k[2] = k[sub] = 1,…,8 and K[2] = K[sub] = 1,…,8 and data pooled from all their sessions. We used the true order of the other players' policies for the model comparison; i.e., k[1] =
k[stag] = 1 for the stag, k[3] = k[com] = 1 for agent A, and k[com] = 5 for agent B (Figure S3); as mentioned above, these values do not affect inference on the subject's model. This
entailed optimising k[sub] and K[sub] with respect to the evidence, for fixed models (Equation 14a) and theory of mind models (Equation 14b).
Figure 8A shows the normalized posterior probabilities over the sixteen models. It can be immediately seen that the theory of mind model has a higher likelihood than the fixed model. Under theory of
mind models, we inferred the most likely sophistication level of the subjects was K[sub] = 5. This is reasonable, because the subjects did not have to use policies higher than k[sub] = 6, given the
computer agent policies never exceeded five. Among the fixed models, even though the likelihood was significantly lower, the optimal model, k[sub] = 6, was inferred.
Figure 8. Results of the empirical stag-hunt game.
(A) Model comparison. The posterior probabilities over the 16 models; eight fixed models with k[sub] = 1,…,8 and eight theory of mind (ToM) models with K[sub] = 1,…,8. The marginalized likelihood of
the ToM models is higher than that of the fixed models (the left panel). Within the ToM model-space, the subject level is inferred as K[sub] = 5. (B) The upper panels show the inference on the subject's strategy over time in the sessions when the subjects played with agent A (the left panel) and agent B (the right panel). The lower panels show the subject's densities on the computer's strategies.
Using the inferred sophistication of the subjects, K[sub] = 5, we then examined the implicit conditional density on their opponent's policy using Equation 11. Figure 8B show a typical example from
one subject. The upper panels show the actual policies used when playing agent A (the left panel) and agent B (the right panel) and the lower panels show the subject's densities on the opponent's
strategies. For both computer agents, the subject has properly inferred the strategy of the agent and plays at a level above it; i.e., the subject behaved rationally. This is a pleasing result, in
that we can quantify our confidence that subjects employ theory of mind to optimise their choices and, furthermore, we can be very confident that they do so with a high level of sophistication. In
what follows, we relate our game theory of mind to related treatments in behavioural economics and consider the mechanisms that may underpin sophisticated behaviour.
Models in Behavioural Economics
Games with iterated or repeated play can differ greatly from one-shot games, in the sense that they engender a range of equilibria and can induce the notion of ‘reputation’, when there is uncertainty
about opponents [17]. These games address important issues concerning how people learn to play optimally given recurrent encounters with their opponents. It has been shown that reputation formation
can be formulated as a Bayesian updating of types to explain choices in repeated games with simultaneous moves [18],[19] and non-simultaneous moves [20]. An alternative approach to reputation
formation is teaching [21]. In repeated games, sophisticated players often have an incentive to ‘teach’ their opponents by choosing strategies with poor short-run payoffs that will change what the
opponents do; in a way that benefits the sophisticated player in the long run. Indeed, Camerer et al [22] showed that strategic teaching in their EWA model could select one of many repeated-game
equilibria and give rise to reputation formation without updating of types. The crucial difference between these approaches is that in the type-based model, reputation is the attribute of a
particular player, while in the teaching model, a strategy attains a reputation. In our approach, types are described in terms of bounds on strategy; the sophistication level. This contrasts with
treatments that define types in terms of unobserved payoff functions, which model strategic differences using an attribute of the agent; e.g., normal or honest type.
Recursive or hierarchical approaches to multi-player games have been adopted in behavioural economics [23],[24] and artificial intelligence [25], in which individual decision policies systematically
exploit embedded levels of inference. For instance, some studies have assumed that subject's decisions follow one of a small set of a priori plausible types, which include non-strategic and strategic
forms. Under these assumptions, inference based on decisions in one-shot (non-iterated) games suggests that while policies may be heterogeneous, the level of sophistication may be equivalent to an
approximate value of k; two or three. Camerer and colleagues [26] have suggested a ‘cognitive hierarchy’ model, in which subjects generate a form of cognitive hierarchy over each other's level of
reciprocal thinking. In this model ‘k’ corresponds to the depth of tree-search, and when estimated over a collection of games such as the p-beauty game, yields values of around one and a half to two.
Note that ‘steps of strategic thinking’ are not the same as the levels of sophistication in this paper. The sophistication addressed here pertains to the recursive representation of an opponent's
goals, and can be applied to any iterated extensive form game. Despite this, studies in behavioural economics suggest lower levels of sophistication than ours. One reason for this may be that most
games employed in previous studies have been one-shot games, which place less emphasis on planning for future interactions that rest on accurate models of an opponent's strategy.
In the current treatment, we are not suggesting that players actually compute their optimal strategy explicitly; or indeed are aware of any implicit inference on the opponent's policy. Our model is
phenomenological and is designed to allow model comparison and predictions (under any particular model) of brain states that may encode the quantities necessary to optimize behaviour. It may be that
the mechanisms of this optimization are at a very low level (e.g., at the level of synaptic plasticity) and have been shaped by evolutionary pressure. In other words, we do not suppose that subjects
engage in explicit cognitive operations but are sufficiently tuned to interactions with con-specifics that their choice behaviour is sophisticated. We now pursue this perspective from the point of
view of evolutionary optimization of the policies themselves.
Prosocial Utility
Here, we revisit the emergence of cooperative equilibria and ask whether sophisticated strategies are really necessary. Hitherto, we have assumed that the utility functions ℓ[i] are fixed for any
game. This is fine in an experimental setting but, in an evolutionary setting, the ℓ[i] may themselves be optimised. In this case, there is a fundamental equivalence between different types of agents, in
terms of their choices. This is because exactly the same equilibrium behaviour can result from interaction between sophisticated agents with empathy (i.e., theory of mind) and unsophisticated agents
with altruistic utility-functions. In what follows, we show why this is the case:
The recursive solutions for high-order value-functions in Equation 6 can be regarded as a Robbins-Monro scheme for optimising the joint value-functions over N players. One could regard this as
optimising the behaviour of the group of players collectively, as opposed to optimising the behaviour of any single player. Once the joint value-functions have been optimized, they satisfy the Bellman equations (Equation 15). However, these value-functions also satisfy an equivalent rearrangement (Equation 16).
This rearrangement is quite fundamental, because we can interpret the rearranged terms as optimal utility-functions, under the assumption that neither player represents the goals of the other. In other words, if two
unsophisticated players were endowed with optimal utility-functions, one would observe exactly the same value-functions and behaviour exhibited by two very sophisticated players at equilibrium. These
optimal utility-functions are trivial to compute, given the optimal value-functions from Equation 6; although this inverse reinforcement learning is not trivial in all situations (e.g., [27]). It is immediately
obvious that the optimal utility from Equation 16 has a much richer structure than the payoff ℓ[i] (Figure S4). Critically, states that afford payoff to the opponent now become attractive, as if
‘what is good for you is good for me’. This ‘altruism’ [28] arises because the utility has become context-sensitive and depends on the other player's payoff. An interesting example is when the optimised utility of a state with a local payoff is greater when the opponent occupies states close to their payoff (see Figure S4). In other words, a payoff that does not depend on the opponent has less utility, when
the opponent's payoff is low (c.f., guilt).
Altruism and Inequity Aversion
This sort of phenomenon has been associated with ‘inequity aversion’. Inequity aversion is the preference for fairness [29] or resistance to inequitable outcomes; and has been formulated in terms of
context-sensitive utility functions. For example, Fehr and Schmidt [5] postulate that people make decisions which minimize inequity, and consider N individuals who receive payoffs ℓ[i]. They then model the utility to the j-th player as (Equation 17):

U[j] = ℓ[j] − (α/(N−1)) Σ[i≠j] max(ℓ[i]−ℓ[j],0) − (β/(N−1)) Σ[i≠j] max(ℓ[j]−ℓ[i],0)
where α parameterises distaste for disadvantageous inequality and β parameterises the distaste for advantageous inequality. Although a compelling heuristic, this utility function is an ad hoc
nonlinear mixture of payoffs and has been critiqued for its rhetorical nature [30]. An optimal nonlinear mixture is given by substituting Equation 15 into Equation 16 (Equation 18).
These equalities express the optimal utility functions in terms of payoff and a ‘prosocial’ utility (the second terms), which allow unsophisticated agents to optimise their social exchanges. The
prosocial utility of any state is simply the difference in value expected after the next move with a sophisticated, relative to an unsophisticated, opponent. Equation 18 might provide a principled
and quantitative account of inequity aversion, which holds under rationality assumptions.
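The Fehr-Schmidt utility of Equation 17 can be written out directly (a straightforward transcription; the function name and example payoffs are ours): α parameterises distaste for disadvantageous inequality and β for advantageous inequality.

```python
import numpy as np

def fehr_schmidt(payoffs, j, alpha, beta):
    """Inequity-averse utility for the j-th of N players (Equation 17)."""
    ell = np.asarray(payoffs, dtype=float)
    others = np.delete(ell, j)                       # the other N-1 payoffs
    envy  = np.maximum(others - ell[j], 0).mean()    # disadvantageous inequality
    guilt = np.maximum(ell[j] - others, 0).mean()    # advantageous inequality
    return ell[j] - alpha * envy - beta * guilt
```

With payoffs (10, 6), α = 1 and β = 0.5, the advantaged player's utility drops from 10 to 8 (guilt) and the disadvantaged player's from 6 to 2 (envy), illustrating why such players prefer equitable outcomes.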
One might ask: what is the relevance of an optimised utility function for game theory? The answer lies in the hierarchical co-evolution of agents (e.g., [15],[31]), where the prosocial part of the utility may be
subject to selective pressure. In this context, the unit of selection is not the player but the group of players involved in a game (e.g., a mother and offspring). Here, optimising over a
group of unsophisticated players can achieve exactly the same result (in terms of equilibrium behaviour) as evolving highly sophisticated agents with theory of mind (c.f., [32]). For example, in
ethological terms, it is more likely that the nurturing behaviour of birds is accounted for by selective pressure on prosocial utility than by invoking birds with theory of mind. This speaks to ‘survival of the
nicest’ and related notions of prosocial behaviour (e.g., [33],[34]). Selective pressure on prosocial utility simply means, for example, that the innate reward associated with consummatory behaviour
is supplemented with rewards associated with nursing behaviour. We have exploited the interaction between innate and acquired value previously in an attempt to model the neurobiology of reinforcement
learning [35].
In summary, exactly the same equilibrium behaviour can emerge from sophisticated players with theory of mind, who act entirely out of self-interest and from unsophisticated players who have prosocial
altruism, furnished by hierarchical optimisation of their joint-utility function. It is possible that prosocial utility might produce apparently irrational behaviour, in an experimental setting, if
it is ignored: Gintis [33] reviews the evidence for empirically identifiable forms of prosocial behaviour in humans, (strong reciprocity), that may in part explain human sociality. “A strong
reciprocator is predisposed to cooperate with others and punish non co-operators, even when this behaviour cannot be justified in terms of extended kinship or reciprocal altruism”. In line with this
perspective, provisional fMRI evidence suggests that altruism may not be a cognitive faculty that engages theory of mind but is hard-wired and inherently pleasurable, activating subgenual cortex and
septal regions; structures intimately related to social attachment and bonding in other species [36]. In short, bounds on the sophistication of agents can be circumvented by endowing utility with
prosocial components, in the context of hierarchical optimisation.
Critically, the equivalence between prosocial and sophisticated behaviour is only at equilibrium. This means that prosocially altruistic agents will adapt the same strategy throughout an iterated
game; however, sophisticated agents will optimise their strategy on the basis of the opponent's behaviour, until equilibrium is attained. These strategic changes make it possible to differentiate
between the two sorts of agents empirically, using observed responses. To disambiguate between theory of mind dependent optimisation and prosocial utility it is sufficient to establish that players
infer on each other. This is why we included fixed models without such inference in our model comparisons of the preceding sections. In the context of the stag-hunt game examined here, we can be
fairly confident that subjects employed inference and theory of mind.
Finally, it should be noted that, although a duality in prosocial and sophisticated equilibria may exist for games with strong cooperative equilibria, there may be other games in which this is less
clearly the case; where sophisticated agents and unsophisticated altruistic agents diverge in their behaviour. For example, in some competitive games (such as Cournot duopolys and Stackelberg games),
a (selfish) understanding the other players response to payoff (empathy) produces a very different policy than one in which that payoff is inherently (altruistically) valued.
This paper has introduced a model of ‘theory of mind’ (ToM) based on optimum control and game theory to provide a ‘game theory of mind’. We have considered the representations of goals in terms of
value-functions that are prescribed by utility or rewards. We have shown it is possible to deduce whether players make inferences about each other and quantify their sophistication using choices in
sequential games. This rests on comparing generative models of choices with and without inference. Model comparison was demonstrated using simulated and real data from a ‘stag-hunt’. Finally, we
noted that exactly the same sophisticated equilibrium behaviour can be achieved by optimising the utility-function itself, producing unsophisticated but altruistic agents. This may be relevant
ethologically in hierarchal game theory and co-evolution.
In this paper, we focus on the essentials of the model and its inversion using behavioural data, such as subject choices in a stag-hunt. Future work will try to establish the predictive validity of
the model by showing a subject's type or sophistication is fairly stable across different games. Furthermore, the same model will be used to generate predictions about neuronal responses, as measured
with brain imaging, so that we can characterise the functional anatomy of these implicit processes. In the present model, although players infer the opponent's level of sophistication, they assume
the opponents are rational and that their strategies are pure and fixed. However, the opponent's strategy could be inferred under the assumption the opponent was employing ToM to optimise their
strategy. It would be possible to relax the assumption that the opponent uses a fixed and pure strategy and test the ensuing model against the current model. However, this relaxation entails a
considerable computational expense (which the brain may not be in a position to pay). This is because modeling the opponent's inference induces an infinite recursion; that we resolved by specifying
the bounds on rationality. Having said this, to model things like deception, it will be necessary to model hierarchical representations of not just the goals of another (as in this paper) but the
optimization schemes used to attain those goals by assuming agent's represent the opponent's optimization of a changing and possibly mixed strategy. This would entail specifying different bounds to
finesse the ensuing infinite recursion. Finally, although QRE have become the dominant approach to modelling human behaviour in, e.g., auctions, it remains to be established that convergence is
always guaranteed (c.f., the negative results on convergence of fictitious play to Nash equilibria).
Recent interest in the computational basis of ToM has motivated neuroimaging experiments that test the hypothesis that putative subcomponents of mentalizing might correlate with cortical brain
activity, particularly in regions implicated in ToM by psychological studies [37],[38]. In particular, Hampton and colleagues [39] report compelling data that suggest decision values and update
signals are indeed in represented in putative ToM regions. These parameters were derived from a model based on ‘fictitious play’, which is a simple, non-hierarchical learning model of two-player
inference. This model provided a better account of choice behaviour, relative to error-based reinforcement learning alone; providing support for the notion that apparent ToM behaviour arises from
more than prosocial preferences alone. Clearly, neuroimaging offers a useful method for future exploration of whether key subcomponents of formal ToM models predict brain activity in ToM regions and
may allow one to adjudicate between competing accounts.
Supporting Information
A. Log [Euclidean] distance between the value-functions in Figure 2B. B. Inference of opponent's types using the same simulated data used in Figure 5. Two players with asymmetric types K[1] = 4 and K
[2] = 3. The left graph shows the likelihood over fixed models using k[1],k[2] = 1,…,6 and the right graph shows the likelihood of theory of mind models with K[1],K[2] = 0,…,5. The veridical model
(dark blue bar) showed the maximum likelihood among 72 models.
(0.63 MB TIF)
Maximum likelihood estimation over the subject's type and payoff sensitivity. We used the models using K[sub] = 0,…,5 and λ = 0.5,…,3.0 and data pooled from all subjects.
(0.72 MB TIF)
Inference of computer agent's policy: canonical inference using all subjects' data (A) and mean and standard deviation over six subjects (B). The order of agent A's policy is inferred as k[com] = 1
and the agent B's order is inferred as k[com] = 5.
(0.75 MB TIF)
The left panels show payoff functions for sophisticated agents who have theory of mind. The right panels show optimal utility functions for unsophisticated agents who do not represent opponent's
goal: they assume opponent's policy is naïve.
(4.17 MB TIF)
We are also grateful to Peter Dayan, Debajyori Ray, Jean Daunizeau, and Ben Seymour for useful discussions and suggestions and to Peter Dayan for critical comments on the manuscript. We also
acknowledge the substantial guidance and suggestions of our three reviewers.
Author Contributions
Conceived and designed the experiments: WY RJD KJF. Performed the experiments: WY. Analyzed the data: WY KJF. Contributed reagents/materials/analysis tools: WY RJD KJF. Wrote the paper: WY RJD KJF. | {"url":"http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1000254","timestamp":"2014-04-19T05:56:42Z","content_type":null,"content_length":"221270","record_id":"<urn:uuid:1a771fbb-ce85-4b22-af9e-4ffbf6337ba3>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00124-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hour Glass shape with letters (Beginner)
How would I go about coding this. just need some helpful thing to get me started?
For example, if the user enters 7, then the following hourglass
comprised of 7 rows is printed:
DCBABCD (0 spaces preceding first letter)
_CBABC_ (1 space preceding first letter)
__BAB__ (2 spaces preceding first letter)
___A___ (3 spaces preceding first letter)
__BAB__ (2 spaces preceding first letter)
_CBABC_ (1 space preceding first letter)
DCBABCD (0 spaces preceding first letter)
P.S. I'm just a beginner, I don't know a lot stuff yet.
Last edited on
also the '_' are suppose to be spaces.
Hi there,
You will need a loop, like a for-loop.
The amount of times you loop will be determined by the number the user enters.
With each iteration, the amount of letters decreases, the amount of spaces increases.
When you reach the halfway point, the amount of letters increases and the amount of spaces decreases.
There is a little more to it - the difference between even and odd numbers entered by the user for instance - but this should be a start.
Please do let us know if you require any further help.
All the best,
would I use something like :
for ( int count = N - 2 ; count >= 1 ; count ++ )
for ( int j = 0 ; j < ( N - count ) / 2 ; j++ )
cout << ' ' ;
for ( int j = 0 ; j < count ; j++ )
cout << char ( j + ' A ' ) ;
This is more naturally solved using recursion. Do you know what that is?
I get what your trying but i'm not getting how to print it in the letter format also I don't under stand the spacing for each line.
I know the spaces has to be something like row - 'A' or something like that?
Hi there,
I'm afraid I'm not sure what you're asking - could you please verify, perhaps share your code once more?
All the best,
well right now it prints out this
if I type in a 7 it'll print
I need it to print it in a centered format like this:
Also I need it to print the alphabet.
so it would actually need to be printed like this:
That looks just like your first one did. I'm so lost right now. :(
Hi there,
Could you please copy use your code?
This is what i have right now.
#include <iostream>
using namespace std;
int main ()
int N ;
cout << endl << "Please enter an odd integer, 1 - 51 : " ;
cin >> N ;
if( N < 1 || N > 51 || N % 2 == 0 ) //less than 1 , more than 51 or even
cout << "Invalid number." << endl;
while( N < 1 || N > 51 || N % 2 == 0 ); //less than 1, greater than 51 or even
cout << endl ;
int amount_letters = N;
int amount_spaces = 0;
for (int i=0; i < N; ++i) //as many rows as there are letters
for (int j=0; j < amount_spaces; ++j) //print spaces
std::cout << ' ';
for (int k=0; k < amount_letters; ++k) //print letters
std::cout << 'A';
amount_letters -= 2; //decrease letters
amount_spaces += 2; //increase spaces
std::cout << std::endl; //print newline
amount_spaces += 2; //increase spaces
Should be:
Then you will need to reverse it - but you should be able to figure that out with the code you already have.
Let us know if you need any further help.
All the best,
I'm not figuring out how to reverse it. I'm sorry that I'm so bad at this.
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/beginner/112365/","timestamp":"2014-04-17T18:42:14Z","content_type":null,"content_length":"25911","record_id":"<urn:uuid:2eec067c-521f-4c06-94a1-3ababfe3bd07>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00389-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Origin of Pi
Posted by M ws On Tuesday, March 26, 2013 3 comments
Everybody knows the value of pi is 3.14…er, something, but how many people know where the ratio came from?
Actually, the ratio came from nature—it’s the ratio between the circumference of a circle and its diameter, and it was always there, just waiting to be discovered. But who discovered it? In honor of
Pi Day, here’s a semi-brief history of how pi came to be known as
3.14(1592653589793238462643383279502884197169…and so on).
The history lesson
It's hard to pinpoint who, exactly, first became conscious of the constant ratio between the circumference of a circle and its diameter, though human civilizations seem to have been aware of it as
early as 2550 BC.
The Great Pyramid at Giza, which was built between 2550 and 2500 BC, has a perimeter of 1760 cubits and a height of 280 cubits, which gives it a ratio of 1760/280, or approximately 2 times pi. (One
cubit is about 18 inches, though it was measured by a person's forearm length and thus varied from one person to another.) Egyptologists believe these proportions were chosen for symbolic reasons,
but, of course, we can never be too sure.
The earliest textual evidence of pi dates back to 1900 BC; both the Babylonians and the Egyptians had a rough idea of the value. The Babylonians estimated pi to be about 25/8 (3.125), while the
Egyptians estimated it to be about 256/81 (roughly 3.16).
CLICK HERE for more.
3 comments to The Origin of Pi
1. Proud2bMalaysian Very interesting.
The original article suggested that the Bible had a clear idea of pi.
Here is a suggested read I found.
1. masterwordsmith Hi Proud2bMalaysian!
Lovely to hear from you again. I am happy you enjoyed this post.
Actually, I was unsure if I should post it as it was heavy going (at least for me) but I shared it because it is indeed a very interesting article.
Thanks for sharing the link! That is also another interesting read.
Take care and please keep in touch.
God bless!
1. Proud2bMalaysian I come by regularly to read your postings. Helps me save time looking for them.
Hope you are feeling better now. Take care.
GE13 in May? Looks like it's going to be a very keen fight. | {"url":"http://masterwordsmith-unplugged.blogspot.com/2013/03/the-origin-of-pi.html","timestamp":"2014-04-18T23:15:21Z","content_type":null,"content_length":"283390","record_id":"<urn:uuid:31936091-c3ed-4a3f-9937-66e77cf2256e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00070-ip-10-147-4-33.ec2.internal.warc.gz"} |
Box-Cox transformation
From Encyclopedia of Mathematics
Transformations of data designated to achieve a specified purpose, e.g., stability of variance, additivity of effects and symmetry of the density. If one is successful in finding a suitable
transformation, the ordinary method for analysis will be available. Among the many parametric transformations, the family in [a1] is commonly utilized.
Let random variable on the positive half-line. Then the Box–Cox transformation of
The formula
The power parameter maximum-likelihood method. Unfortunately, a closed form for the estimator [a1] and [a3].
This treatment has, however, some difficulties because Outlier). Further, in certain situations, the usual limiting theory based on knowing Robust statistics; and [a5] and references therein).
In the literature, Box–Cox transformations are applied to basic distributions, e.g., the cubic root transformation of chi-squared variates is used for acceleration to normality (cf. also Normal
distribution), and the square-root transformation stabilizes variances of Poisson distributions (cf. also Poisson distribution). These results are unified by appealing to features of the following
family of distributions.
Consider a collection of densities of the form
satisfying [a2] unless
It is known that both of the normalizing and the variance-stabilizing transformations of the exponential dispersion model with power variance function are given by Box–Cox transformations, see
. If
that the density of
Gram–Charlier series
). This implies that the normalizing transformation which is obtained by reducing the third-order cumulant reduces all higher-order cumulants as a result (cf. also
Distribution index
Normal 2
Inverse Gaussian
Box–Cox transformations are also applied to link functions in generalized linear models. The transformations mainly aim to get the linearity of effects of covariates. See [a3] for further detail.
Generalized Box–Cox transformations for random variables and link functions can be found in [a5].
See also Exponential distribution; Regression.
[a1] G.E.P. Box, D.R. Cox, "An analysis of transformations" J. Roy. Statist. Soc. B , 26 (1964) pp. 211–252
[a2] B. Jørgensen, "Exponential dispersion models" J. Roy. Statist. Soc. B , 49 (1987) pp. 127–162
[a3] P. McCullagh, J.A. Nelder, "Generalized linear models" , Chapman and Hall (1990) (Edition: Second)
[a4] R. Nishii, "Convergence of the Gram–Charlier expansion after the normalizing Box–Cox transformation" Ann. Inst. Statist. Math. , 45 : 1 (1993) pp. 173–186
[a5] G.A.F. Seber, C.J. Wild, "Nonlinear regression" , Wiley (1989)
How to Cite This Entry:
Box–Cox transformation. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Box%E2%80%93Cox_transformation&oldid=22177 | {"url":"http://www.encyclopediaofmath.org/index.php/Box%e2%80%93Cox_transformation","timestamp":"2014-04-18T15:39:45Z","content_type":null,"content_length":"27709","record_id":"<urn:uuid:65c92d61-23ef-48da-abab-a8d08f2b7c42>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00274-ip-10-147-4-33.ec2.internal.warc.gz"} |
1. Introduction
Fault-tolerance is the ability of a system to maintain its functionality, even in the presence of faults. It has been extensively studied in the literature: [ALRL04] and [Lap04] gives an exhaustive
list of the basic concepts and terminology on fault-tolerance, [Pow92] introduces two fundamental notions for fault-tolerance, namely failure mode assumption and assumption coverage, and [Gär99]
formalizes the important underlying notions of fault-tolerance. Concerning more specifically real-time systems, [Rus94] gives a short survey and taxonomy for fault-tolerance and real-time systems,
and [Cri93,Jal94] treat in detail the special case of fault-tolerance in distributed systems.
If you want to be convinced of the impact of faults and failures, numerous catalogues of famous computer failures can be browsed online.
The three basic notions are fault, failure, and error: a fault is a defect or flaw that occurs in some hardware or software component; an error is a manifestation of a fault; a failure is a departure
of a system from the service required. A failure in a sub-system may be seen as a fault in the global system. Hence the following causal relationship:
... --> fault --> error --> failure --> fault --> ...
Consider for instance a system running on a multi-processor architecture: a fault in one processor might cause it to crash (i.e., a failure), which will be seen as a fault of the system. Therefore,
the ability of the system to function even in the presence of the failure of one processor will be regarded as fault-tolerance instead of failure-tolerance.
Not all faults cause immediate failure: faults may be latent (activated but not apparent at the service level), and later become effective. Fault-tolerant systems attempt to detect and correct latent
errors before they become effective. Faults are classified according to the following criteria:
• by their nature: accidental or intentional;
• by their origin: physical, human, internal, external, conception, operational;
• by their persistence: transient or permanent.
Failures are classified according to the following criteria:
• by their domain: value failures and/or timing failures;
• by their perception by the user;
• by their consequences on the environment.
The means for fault-tolerance are either:
• error processing (to remove errors from the system's state), which can be carried out either with recovery (rolling back to a previous correct state) or with compensation (masking errors using
the internal redundancy of the system);
• fault treatment (to prevent faults from being activated again), which is carried out in two steps: diagnosis (determining the cause, location, and nature of the error) and then passivation
(preventing the fault from being activated again).
These means rely on redundancy to treat errors; three forms of redundancy exist: hardware redundancy (e.g., using a spare processor), software redundancy (e.g., using two implementations of the same
module), and time redundancy (e.g., re-executing a module later).
Finally, two things are important when designing fault-tolerant systems: the fault hypothesis (what types of faults the system is required to tolerate) and the fault coverage (the probability that the fault hypothesis is actually respected when a fault occurs in the system).
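To make these two notions concrete, here is a small illustrative computation (hypothetical numbers, not taken from the cited works): the probability that an n-component system designed to tolerate up to k component faults actually delivers its service, when each component fails independently with probability p and each fault respects the fault hypothesis only with probability c (the coverage).

```python
from math import comb

def system_success_prob(n: int, k: int, p: float, c: float) -> float:
    """Probability that a system of n components, designed to
    tolerate up to k component faults, delivers its service.
    Each component fails independently with probability p; each
    fault respects the fault hypothesis with probability c (the
    coverage); an uncovered fault brings the whole system down."""
    total = 0.0
    for i in range(k + 1):
        # exactly i components fail, and all i faults are covered
        total += comb(n, i) * p**i * (1 - p)**(n - i) * c**i
    return total

# Tolerating 1 fault among 3 components, with perfect coverage:
print(system_success_prob(3, 1, 0.01, 1.0))   # ≈ 0.9997
# Imperfect coverage erodes the benefit of redundancy:
print(system_success_prob(3, 1, 0.01, 0.9))
```

With c = 1 this reduces to the usual k-out-of-n reliability formula; the second call shows how an imperfect coverage erodes the gain brought by redundancy.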
2. A fable
The following is a fable by the famous French writer and poet Jean de la Fontaine (1621--1695), titled "Le loup, la chèvre et le chevreau" (in English, "The wolf, the goat, and the goat kid)". I
think that it illustrates rather neatly the concept of fault-tolerance, at least the need of it:
La Bique allant remplir sa traînante mamelle
Et paître l'herbe nouvelle,
Ferma sa porte au loquet,
Non sans dire à son Biquet :
Gardez-vous sur votre vie
D'ouvrir que l'on ne vous die,
Pour enseigne et mot du guet :
Foin du Loup et de sa race !
Comme elle disait ces mots,
Le Loup de fortune passe ;
Il les recueille à propos,
Et les garde en sa mémoire.
La Bique, comme on peut croire,
N'avait pas vu le glouton.
Dès qu'il la voit partie, il contrefait son ton,
Et d'une voix papelarde
Il demande qu'on ouvre, en disant Foin du Loup,
Et croyant entrer tout d'un coup.
Le Biquet soupçonneux par la fente regarde.
Montrez-moi patte blanche, ou je n'ouvrirai point,
S'écria-t-il d'abord. (Patte blanche est un point
Chez les Loups, comme on sait, rarement en usage.)
Celui-ci, fort surpris d'entendre ce langage,
Comme il était venu s'en retourna chez soi.
Où serait le Biquet s'il eût ajouté foi
Au mot du guet, que de fortune
Notre Loup avait entendu ?
Deux sûretés valent mieux qu'une,
Et le trop en cela ne fut jamais perdu.
The important point here is the moral of the fable, the last two verses:
Deux sûretés valent mieux qu'une,
Et le trop en cela ne fut jamais perdu.
In English, it translates more or less into:
Two safeties are better than one,
And too much in this respect was never a loss.
I find it both amusing and amazing that, back in the seventeenth century, Jean de la Fontaine wrote something so much in tune with today's concerns!
3. Our contribution to fault-tolerance
In the past, we have been involved in a French "Action de Recherche Coordonnée" funded by Inria, named Tolère, and in a European research project on embedded electronics for automotive systems, named EAST-EEA, involving several automotive companies and research labs.
3.1. New scheduling/distribution heuristics
Researchers involved: Girault, Sorel, Lavarenne, Sighireanu, Dima, Pinello, Kalla, Assayad, Leignel, Yu, and Leveque.
Our personal contribution to research in fault-tolerant embedded systems consists of several scheduling/distribution heuristics. Their common feature is to take as input two graphs: a
data-flow graph ALG describing the algorithm of the application, and a graph ARC describing the target distributed architecture. Below to the left is an example of an algorithm graph: it has nine
operations (represented by circles) and eleven data-dependences (represented by green arrows). Among the operations, one is a sensor operation (I), one is an actuator operation (O), while the seven
others are computations (A to G). Below to the right is an example of an architecture graph: it has three processors (P1, P2, and P3) and three point-to-point communication links (L1.2, L1.3, and
Also given is a table specifying the Worst-Case Execution Time (WCET) of each operation on each processor, and the worst-case transmission time of each data-dependence on each communication link. The
architecture being a priori heterogeneous, these need not be identical. Below is an example of such a table for the operations of ALG. The infinity sign expresses the fact that the operation I cannot
be executed by the processor P3, for instance to account for the requirement of certain dedicated hardware.
From these three inputs, the heuristic distributes the operations of ALG onto the processors of ARC and schedules them statically, together with the communications induced by these scheduling decisions.
The output of the heuristic is therefore a static schedule, from which embeddable code can be generated.
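The greedy flavour of such a scheduling/distribution heuristic can be sketched as follows. This is a deliberately simplified illustration: it ignores communication scheduling and replication, it is not the actual SynDEx/DSH algorithm, and the toy instance at the end (operation names, WCET values) is hypothetical.

```python
def schedule(ops, deps, wcet):
    """ops: operation names in topological order.
    deps: dict mapping an operation to its predecessors.
    wcet: dict (operation, processor) -> worst-case execution time,
          float('inf') when the operation cannot run there.
    Returns a dict: operation -> (processor, start, end)."""
    procs = sorted({p for (_, p) in wcet})
    free = {p: 0.0 for p in procs}     # earliest free date per processor
    placed = {}
    for op in ops:
        # earliest start date: all predecessors must have finished
        ready = max((placed[d][2] for d in deps.get(op, [])), default=0.0)
        # greedy choice: the processor minimising op's finish date
        best = min(procs, key=lambda p: max(ready, free[p]) + wcet[op, p])
        start = max(ready, free[best])
        placed[op] = (best, start, start + wcet[op, best])
        free[best] = start + wcet[op, best]
    return placed

# Toy chain I -> A -> O on two processors; I cannot run on P2.
wcet = {('I', 'P1'): 1, ('I', 'P2'): float('inf'),
        ('A', 'P1'): 3, ('A', 'P2'): 2,
        ('O', 'P1'): 1, ('O', 'P2'): 2}
print(schedule(['I', 'A', 'O'], {'A': ['I'], 'O': ['A']}, wcet))
```

Real heuristics additionally place the data-dependences on communication links and, for fault-tolerance, replicate operations on several processors.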
For the embeddable code generation, we use SynDEx, a system-level CAD tool based on the "Algorithm-Architecture Adequation" (AAA) methodology, for rapid prototyping and optimizing the
implementation of distributed real-time embedded applications onto "multicomponent" architectures. It has been designed and developed at INRIA by the AOSTE team. Also, our heuristics are implemented
inside SynDEx, as an alternative to its own default heuristics (called DSH: Distribution Scheduling Heuristic [GLS99]).
Our fault hypothesis is that the hardware components are fail silent, meaning that a component is either healthy and works fine, or is faulty and produces no output at all. Recent studies on modern
hardware architectures have shown that a fail-silent behavior can be achieved at a reasonable cost [BFM+03], so our fault hypothesis is reasonable.
Our contribution consists of the definition of several new scheduling/distribution heuristics that generate static schedules which are, in addition, tolerant to a fixed number of hardware component (processor and/or communication link) faults.
3.2. Discrete controller synthesis
Researchers involved: Girault, Rutten, Abdennebi, Dumitrescu, Taha, Marchand, and Sun.
Another of our contributions (not a heuristic this time) was the use of discrete controller synthesis theory [RW87] to automatically generate fault-tolerant software. The principle is to design the software (for instance, to control some plant) by taking into account all possible behaviors, i.e., both the good ones and the bad ones (the faults). Then, we have considered that all the fault events
were uncontrollable. We have added to the system an environment model that specifies what fault events can occur simultaneously. The advantage of discrete controller synthesis is that it is able to
produce automatically a controller that, put in parallel with the system, controls it in such a manner that it satisfies some predefined safety requirements. In our approach, these requirements
express precisely the required fault-tolerance. We have conducted several studies on this approach that demonstrate its feasibility and elegance. From the point of view of fault-tolerance, our approach is
interesting in the sense that, when the controller synthesis actually succeeds in producing a controller, we obtain a system equipped with a dynamic reconfiguration mechanism to handle faults, with a
static guarantee that all specified faults will be tolerated during the execution, and with a known bound on the system's reaction time (thanks to optimal controller synthesis) [DGMR10] [GR09] [
DGMR07b] [DGMR07a] [DGR04] [GR04]. New developments target the efficient implementation of the synthesized fault-tolerant controlled systems, using the LibDGALS library for dynamic GALS systems.
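The core of safety controller synthesis can be illustrated on a toy example: compute, as a fixed point, the greatest set of "winning" states from which no uncontrollable event (here, a fault) can force the system out; the controller then simply disables the controllable transitions that leave this set. The plant below is hypothetical, and the code is only a sketch of the general principle, not of the tool we actually used.

```python
# Transitions are (source, event, target); fault events are
# uncontrollable. W is the greatest set of winning states: no
# uncontrollable transition may leave W (controllable transitions
# that leave W are simply disabled by the controller).

def synthesize(states, trans, uncontrollable, bad):
    W = set(states) - set(bad)
    changed = True
    while changed:
        changed = False
        for s in list(W):
            # s is losing if some uncontrollable event escapes W
            if any(e in uncontrollable and t not in W
                   for (src, e, t) in trans if src == s):
                W.discard(s)
                changed = True
    return W

# 'ok' may suffer an uncontrollable fault 'f'; from 'degraded', the
# controllable 'repair' returns to 'ok', while the controllable
# 'push' would reach the bad state 'crash'.
states = {'ok', 'degraded', 'crash'}
trans = [('ok', 'f', 'degraded'),
         ('degraded', 'repair', 'ok'),
         ('degraded', 'push', 'crash')]
print(sorted(synthesize(states, trans, {'f'}, {'crash'})))
# ['degraded', 'ok']
```

The synthesized controller keeps {ok, degraded} winning by disabling 'push'; if 'push' were uncontrollable as well, the fixed point would empty W, reporting that no controller exists.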
3.3. Aspect oriented programming: fault-tolerant programs and fault-tolerant circuits
Researchers involved: Fradet, Girault, Ayav, and Burlyaev.
We are investigating the use of aspect-oriented programming [KLM+97] [BSL01] to automatically transform a non-fault-tolerant program into a fault-tolerant one. As a first step in this direction, we have proposed several automatic program transformations (i.e., not yet a full aspect language) to automatically insert heartbeats and checkpoints into a real-time distributed program. We have formalized
these transformations as rewriting rules in ML for a simple programming language (with assignment, if-then-else, for loops, and input/output). Our contribution is twofold. First we have formally
proved that our transformations preserve the semantics of the initial program, and we have derived formulas to compute the WCET of the obtained program (this WCET can then be checked against the real-time
constraints). Second, choosing the lengths of checkpointing and heartbeating intervals is delicate. Long intervals lead to long roll-back time, while too frequent checkpointing leads to high
overheads. We have derived formulas for choosing the optimal checkpointing and heartbeating intervals. As a result, the overhead due to adding the fault-tolerance is minimized [AFG08] [AFG06].
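As an illustration of this trade-off (using the classical first-order approximation due to Young, not the specific formulas derived in the papers cited above), the optimal interval between two checkpoints can be estimated as follows:

```python
from math import sqrt

def young_interval(checkpoint_cost: float, mtbf: float) -> float:
    """Young's first-order approximation of the optimal interval
    between two checkpoints: T_opt = sqrt(2 * C * MTBF), where C is
    the cost of one checkpoint and MTBF the mean time between
    failures. Below T_opt, checkpointing overhead dominates; above
    it, the expected roll-back time does."""
    return sqrt(2 * checkpoint_cost * mtbf)

# A 5-second checkpoint on a machine failing once a day on average:
print(young_interval(5.0, 24 * 3600))   # ≈ 929.5 s, i.e., ~15.5 min
```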
New developments concern fault-tolerant circuits for which we want to propose automatic transformation procedures. These procedures will turn an initial non fault-tolerant circuit into a new
fault-tolerant circuit (for instance by replicating portions of the circuit, by adding voters, or by adding error-correction blocks). We will also seek to formally prove the correctness of these procedures, manually or with the help of a theorem prover.
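The best known of the voter-based transformations just mentioned is triple modular redundancy (TMR): the combinational block is triplicated and a majority voter masks the output of any single faulty copy. A purely illustrative, bit-level simulation:

```python
def voter(a: int, b: int, c: int) -> int:
    """2-out-of-3 majority: (a & b) | (a & c) | (b & c)."""
    return (a & b) | (a & c) | (b & c)

def tmr(block, x, faulty_copy=None):
    """Run three copies of `block` on x; optionally flip the output
    bit of one copy to model a single transient fault."""
    outs = [block(x), block(x), block(x)]
    if faulty_copy is not None:
        outs[faulty_copy] ^= 1
    return voter(*outs)

inverter = lambda x: x ^ 1
assert tmr(inverter, 0) == 1                  # fault-free run
assert all(tmr(inverter, x, faulty_copy=i) == inverter(x)
           for x in (0, 1) for i in (0, 1, 2))
print("any single fault is masked")
```

The voter itself then becomes a single point of failure, which is one of the reasons the correctness proofs above are worthwhile.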
3.4. Probabilistic contracts for reliable components
Researchers involved: Xu, Goessler, and Girault.
We are working on a probabilistic contract framework for describing and analysing component-based embedded systems, based on the theory of Interactive Markov Chains (IMC). A contract specifies the
assumptions a component makes on its context and the guarantees it provides. Probabilistic transitions allow for uncertainty in the component behavior, e.g. to model observed black-box behavior
(internal choice) or reliability. An interaction model specifies how components interact. We provide the ingredients for a component-based design flow, including (1) contract satisfaction and
refinement, (2) parallel composition of contracts over disjoint, interacting components, and (3) conjunction of contracts describing different requirements over the same component. By using
parametric probabilities in the contracts, we are able to answer questions such as "what is the most permissive component which satisfies a given contract?" [GXG12] [XGG10].
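As a toy illustration of the kind of quantitative question such probabilistic models support (the chain and numbers below are made up, and this is plain discrete-time Markov chain analysis, not the IMC-based contract framework itself), one can compute the probability that a component has failed within n steps:

```python
def failure_prob(P, init, failed, n):
    """P: dict state -> {successor: probability}; returns the
    probability of being in the absorbing `failed` state after
    n steps when starting from `init`."""
    dist = {s: 0.0 for s in P}
    dist[init] = 1.0
    for _ in range(n):                    # n steps of the chain
        new = {s: 0.0 for s in P}
        for s, ps in dist.items():
            for t, pr in P[s].items():
                new[t] += ps * pr
        dist = new
    return dist[failed]

P = {'up':       {'up': 0.98, 'degraded': 0.02},
     'degraded': {'up': 0.50, 'degraded': 0.40, 'failed': 0.10},
     'failed':   {'failed': 1.0}}
print(round(failure_prob(P, 'up', 'failed', 100), 4))
```

A probabilistic contract would bound such quantities while leaving some transition probabilities as parameters.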
[ALRL04] A. Avizienis, J.-C. Laprie, B. Randell, and C. Landwehr. Basic concepts and taxonomy of dependable and secure computing. IEEE Trans. on Dependable and Secure Computing, 1(1):11-33, January
[BFM+03] M. Baleani, A. Ferrari, L. Mangeruca, M. Peri, S. Pezzini, and A. Sangiovanni-Vincentelli. Fault-tolerant platforms for automotive safety-critical applications. In International Conference on Compilers, Architectures and Synthesis for Embedded Systems, CASES'03, San Jose (CA), USA, November 2003. ACM.
[BSL01] N. Bouraqadi-Saâdani and T. Ledoux. Le point sur la programmation par aspects. Technique et Science Informatique, 20(4):505-528, 2001.
[Cri93] F. Cristian. Understanding fault-tolerant distributed systems. Communications of the ACM, 34(2):56-78, February 1993.
[Gär99] F. Gärtner. Fundamentals of fault-tolerant distributed computing in asynchronous environments. ACM Computing Surveys, 31(1):1-26, March 1999.
[GLS99] T. Grandpierre, C. Lavarenne, and Y. Sorel. Optimized rapid prototyping for real-time embedded heterogeneous multiprocessors. In 7th International Workshop on Hardware/Software Co-Design,
CODES'99, Rome, Italy, May 1999. ACM.
[Jal94] P. Jalote. Fault-Tolerance in Distributed Systems. Prentice-Hall, Englewood Cliffs, New Jersey, 1994.
[KLM+97] G. Kiczales, J. Lamping, A. Mendhekar, C. Maeda, C. Videira Lopes, J.-M. Loingtier, and J. Irwin. Aspect-oriented programming. In European Conference on Object-Oriented Programming, ECOOP'97, volume 1241 of LNCS, pages 220-242, Jyväskylä, Finland, June 1997. Springer-Verlag.
[ bib ]
[Lap04] J.-C. Laprie. Sûreté de fonctionnement informatique : concepts de base et terminologie. Technical report, LAAS-CNRS, Toulouse, France, 2004.
[ bib ]
[Pow92] D. Powell. Failure mode assumption and assumption coverage. In International Symposium on Fault-Tolerant Computing, FTCS-22, pages 386-395, Boston (MA), USA, July 1992. IEEE. Research report
LAAS 91462.
[ bib ]
[RW87] P.J. Ramadge and W.M. Wonham. Supervisory control of a class of discrete event processes. SIAM J. Control Optimization, 25(1):206-230, January 1987.
[ bib ]
[Rus94] J. Rushby. Critical system properties: Survey and taxonomy. Reliability Engineering and Systems Safety, 43(2):189-219, 1994. Research report CSL-93-01.
[ bib ]
[AGK13] I. Assayad, A. Girault, and H. Kalla. Tradeoff exploration between reliability, power consumption, and execution time for embedded systems. Int. J. Software Tools for Technology Transfer,
15(3):229-245, June 2013.
[ bib ]
[BDGR13] A. Benoit, F. Dufossé, A. Girault, and Y. Robert. Reliability and performance optimization of pipelined real-time systems. J. of Parallel and Distributed Computing, 73:851-865, 2013.
[ bib ]
[AGK12] I. Assayad, A. Girault, and H. Kalla. Scheduling of real-time embedded systems under reliability and power constraints. In International Conference on Complex Systems, ICCS'12, Agadir,
Morocco, November 2012. IEEE.
[ bib ]
[GXG12] G. Goessler, D.N. Xu, and A. Girault. Probabilistic contracts for component-based design. Formal Methods in System Design, 41(2):211-231, 2012.
[ bib ]
[AGK11] I. Assayad, A. Girault, and H. Kalla. Tradeoff exploration between reliability, power consumption, and execution time. In International Conference on Computer Safety, Reliability and
Security, SAFECOMP'11, volume 6894 of LNCS, pages 437-451, Napoli, Italy, September 2011. Springer-Verlag.
[ bib ]
[BDGR10] A. Benoit, F. Dufossé, A. Girault, and Y. Robert. Reliability and performance optimization of pipelined real-time systems. In International Conference on Parallel Processing, ICPP'10, pages
150-159, San Diego (CA), USA, September 2010.
[ bib ]
[DGMR10] E. Dumitrescu, A. Girault, H. Marchand, and E. Rutten. Multicriteria optimal reconfiguration of fault-tolerant real-time tasks. In Workshop on Discrete Event Systems, WODES'10, Berlin,
Germany, September 2010. IFAC, New-York.
[ bib ]
[XGG10] D.N. Xu, G. Goessler, and A. Girault. Probabilistic contracts for component-based design. In International Symposium on Automated Technology for Verification and Analysis, ATVA'10, volume
6252 of LNCS, pages 325-340, Singapore, Singapore, September 2010. Springer-Verlag.
[ bib ]
[GK09] A. Girault and H. Kalla. A novel bicriteria scheduling heuristics providing a guaranteed global system failure rate. IEEE Trans. Dependable Secure Comput., 6(4):241-254, December 2009.
[ bib | http ]
[GR09] A. Girault and E. Rutten. Automating the addition of fault tolerance with discrete controller synthesis. Formal Methods in System Design, 35(2):190-225, October 2009.
[ bib | http ]
[GST09] A. Girault, E. Saule, and D. Trystram. Reliability versus performance for critical applications. J. of Parallel and Distributed Computing, 69(3):326-336, March 2009.
[ bib ]
[AFG08] T. Ayav, P. Fradet, and A. Girault. Implementing fault-tolerance by automatic program transformations. ACM Trans. Embedd. Comput. Syst., 7(4), July 2008. Research report INRIA 5919.
[ bib | .ps | .pdf ]
[DGMR07b] E. Dumitrescu, A. Girault, H. Marchand, and E. Rutten. Synthèse optimale de contrôleurs discrets et systèmes répartis tolérants aux fautes. In Modélisation des Systèmes Réactifs, MSR'07,
pages 71-86, Lyon, France, October 2007. Hermes.
[ bib | .ps | .pdf ]
[DGMR07a] E. Dumitrescu, A. Girault, H. Marchand, and E. Rutten. Optimal discrete controller synthesis for modeling fault-tolerant distributed systems. In Workshop on Dependable Control of Discrete
Systems, DCDS'07, pages 23-28, Cachan, France, June 2007. IFAC, New-York.
[ bib | .ps | .pdf ]
[AFG06] T. Ayav, P. Fradet, and A. Girault. Implementing fault-tolerance in real-time systems by automatic program transformations. In S.L. Min and W. Yi, editors, International Conference on
Embedded Software, EMSOFT'06, pages 205-214, Seoul, South Korea, October 2006. ACM, New-York. Research report INRIA 5919.
[ bib | .ps | .pdf ]
[GKS06] A. Girault, H. Kalla, and Y. Sorel. Transient processor/bus fault tolerance for embedded systems. In IFIP Working Conference on Distributed and Parallel Embedded Systems, DIPES'06, pages
135-144, Braga, Portugal, October 2006. Springer-Verlag.
[ bib | http | .ps | .pdf ]
[Gir06] A. Girault. System-level design of fault-tolerant embedded systems. ERCIM News, 67:25-26, October 2006.
[ bib | http | .ps | .pdf ]
[GY06] A. Girault and H. Yu. A flexible method to tolerate value sensor failures. In International Conference on Emerging Technologies and Factory Automation, ETFA'06, pages 86-93, Prague, Czech
Republic, September 2006. IEEE, Los Alamitos.
[ bib | .ps | .pdf ]
[Kal04] H. Kalla. Génération automatique de distributions/ordonnancements temps-réel, fiables et tolérants aux fautes. PhD Thesis, INPG, INRIA Grenoble Rhône-Alpes, projet Pop-Art, December 2004.
[ bib | .ps.gz | .pdf.gz ]
[DGR04] E. Dumitrescu, A. Girault, and E. Rutten. Validating fault-tolerant behaviors of synchronous system specifications by discrete controller synthesis. In Workshop on Discrete Event Systems,
WODES'04, Reims, France, September 2004. IFAC, New-York.
[ bib | .ps | .pdf ]
[Lév04] T. Lévêque. Fault tolerance adequation in SynDEx. Internship report, Inria Rhône-Alpes, Montbonnot, France, September 2004.
[ bib | .ps.gz | .pdf.gz ]
[GR04] A. Girault and E. Rutten. Discrete controller synthesis for fault-tolerant distributed systems. In International Workshop on Formal Methods for Industrial Critical Systems, FMICS'04, volume
133 of ENTCS, pages 81-100, Linz, Austria, September 2004. Elsevier Science, New-York.
[ bib | http | .ps | .pdf ]
[DGS04] C. Dima, A. Girault, and Y. Sorel. Static fault-tolerant scheduling with ``pseudo-topological'' orders. In Joint Conference on Formal Modelling and Analysis of Timed Systems and Formal
Techniques in Real-Time and Fault Tolerant System, FORMATS-FTRTFT'04, volume 3253 of LNCS, Grenoble, France, September 2004. Springer-Verlag.
[ bib | .ps | .pdf ]
[GKS04a] A. Girault, H. Kalla, and Y. Sorel. An active replication scheme that tolerates failures in distributed embedded real-time systems. In IFIP Working Conference on Distributed and Parallel
Embedded Systems, DIPES'04, Toulouse, France, August 2004. Kluwer Academic Pub., Hingham, MA.
[ bib | .ps | .pdf ]
[GKS04b] A. Girault, H. Kalla, and Y. Sorel. A scheduling heuristics for distributed real-time embedded systems tolerant to processor and communication media failures. Int. J. of Production Research
, 42(14):2877-2898, July 2004.
[ bib | .ps | .pdf ]
[AGK04] I. Assayad, A. Girault, and H. Kalla. A bi-criteria scheduling heuristics for distributed embedded systems under reliability and real-time constraints. In International Conference on
Dependable Systems and Networks, DSN'04, pages 347-356, Firenze, Italy, June 2004. IEEE, Los Alamitos.
[ bib | http | .ps | .pdf ]
[GKS03] A. Girault, H. Kalla, and Y. Sorel. Une heuristique d'ordonnancement et de distribution tolérante aux pannes pour systèmes temps-réel embarqués. In Modélisation des Systèmes Réactifs,
MSR'03, pages 145-160, Metz, France, October 2003. Hermes.
[ bib | http | .ps | .pdf ]
[GKSS03] A. Girault, H. Kalla, M. Sighireanu, and Y. Sorel. An algorithm for automatically obtaining distributed and fault-tolerant static schedules. In International Conference on Dependable
Systems and Networks, DSN'03, San-Francisco (CA), USA, June 2003. IEEE, Los Alamitos.
[ bib | http | .ps | .pdf ]
[GLSS01a] A. Girault, C. Lavarenne, M. Sighireanu, and Y. Sorel. Fault-tolerant static scheduling for real-time distributed embedded systems. In 21st International Conference on Distributed Computing
Systems, ICDCS'01, pages 695-698, Phoenix (AZ), USA, April 2001. IEEE, Los Alamitos. Extended abstract.
[ bib | http | .ps | .pdf ]
[GLSS01b] A. Girault, C. Lavarenne, M. Sighireanu, and Y. Sorel. Generation of fault-tolerant static scheduling for real-time distributed embedded systems with multi-point links. In IEEE Workshop on
Fault-Tolerant Parallel and Distributed Systems, FTPDS'01, San Francisco (CA), USA, April 2001. IEEE, Los Alamitos.
[ bib | http | .ps | .pdf ]
[DGLS01] C. Dima, A. Girault, C. Lavarenne, and Y. Sorel. Off-line real-time fault-tolerant scheduling. In 9th Euromicro Workshop on Parallel and Distributed Processing, PDP'01, pages 410-417,
Mantova, Italy, February 2001.
[ bib | http | .ps | .pdf ]
[GLSS00] A. Girault, C. Lavarenne, M. Sighireanu, and Y. Sorel. Fault-tolerant static scheduling for real-time distributed embedded systems. Research report 4006, Inria, September 2000.
[ bib | .ps | .pdf ]
Searching from the end of a list in Mathematica
Many algorithms (like the algorithm for finding the next permutation of a list in lexicographical order) involve finding the index of the last element in a list. However, I haven't been able to find
a way to do this in Mathematica that isn't awkward. The most straightforward approach uses LengthWhile, but it means reversing the whole list, which is likely to be inefficient in cases where you
know the element you want is near the end of the list and reversing the sense of the predicate:
findLastLengthWhile[list_, predicate_] :=
(Length@list - LengthWhile[Reverse@list, ! predicate@# &]) /. (0 -> $Failed)
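For readers coming from other languages, the underlying task — return the 1-based index of the last element satisfying a predicate, or a failure value — can be sketched in Python (a cross-language illustration; the function name is mine, not from the original question):

```python
def find_last(lst, pred):
    """1-based index of the last element satisfying pred, or None
    (playing the role of $Failed) when nothing matches."""
    for i in range(len(lst), 0, -1):   # walk from the end of the list
        if pred(lst[i - 1]):
            return i
    return None
```

Walking from the end directly avoids the full copy that Reverse would make.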
We could do an explicit, imperative loop with Do, but that winds up being a bit clunky, too. It would help if Return would actually return from a function instead of the Do block, but it doesn't, so
you might as well use Break:
findLastDo[list_, pred_] :=
 Module[{k, result = $Failed},
  Do[
   If[pred@list[[k]], result = k; Break[]],
   {k, Length@list, 1, -1}];
  result]
Ultimately, I decided to iterate using tail-recursion, which means early termination is a little easier. Using the weird but useful #0 notation that lets anonymous functions call themselves, this looks like:
findLastRecursive[list_, pred_] :=
 With[{step =
    Which[
     #1 == 0, $Failed,
     pred@list[[#1]], #1,
     True, #0[#1 - 1]] &},
  step[Length@list]]
All of this seems too hard, though. Does anyone see a better way?
EDIT to add: Of course, my preferred solution has a bug which means it's broken on long lists because of $IterationLimit.
In[107]:= findLastRecursive[Range[10000], # > 10000 &]
$IterationLimit::itlim: Iteration limit of 4096 exceeded.
Out[107]= (* gack omitted *)
You can fix this with Block:
findLastRecursive[list_, pred_] :=
 Block[{$IterationLimit = Infinity},
  With[{step =
     Which[
      #1 == 0, $Failed,
      pred@list[[#1]], #1,
      True, #0[#1 - 1]] &},
   step[Length@list]]]
$IterationLimit is not my favorite Mathematica feature.
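The $IterationLimit issue is not specific to Mathematica: any language with a bounded call stack and no tail-call elimination hits the same wall. A Python sketch of the same tail recursion (illustrative only) fails on long lists for the analogous reason:

```python
def find_last_recursive(lst, pred, i=None):
    """Tail-recursive search from the end.  Python does not eliminate
    tail calls, so lists longer than the recursion limit (default ~1000)
    raise RecursionError -- the analogue of hitting $IterationLimit."""
    if i is None:
        i = len(lst)
    if i == 0:
        return None            # nothing matched
    if pred(lst[i - 1]):
        return i               # 1-based index of the match
    return find_last_recursive(lst, pred, i - 1)
```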
If only lists were represented as double linked structures ... – belisarius Sep 16 '11 at 14:22
are you sure your first line of code is correct? Unless I'm not understanding what it is supposed to do shouldn't the second LengthWhile be LengthWhile[list//Reverse, predicate@# &]? So, adding a
Reverse and removing the !? – Sjoerd C. de Vries Sep 16 '11 at 14:40
@Sjoerd C. de Vries Yes, you're right. I edited in a fix. – Pillsy Sep 16 '11 at 15:05
1 @Pillsy Regarding efficiency: generally (and as @belisarius demonstrated in his answer), structural operations are very fast in mma. So, while not reverting a list may be better asymptotically,
you will probably need huge lists to really observe that, given that list non-reversing solutions use mma evaluator which slows them down by a large constant factor w.r.t. those using Reverse. –
Leonid Shifrin Sep 16 '11 at 15:53
@Pillsy sorry to try to reach you like this, but you need to confirm your email address for the Area 51 account before you come up on the commitment list. See area51.stackexchange.com/users/50888/
pillsy – Verbeia Jan 11 '12 at 23:05
6 Answers
Personally, I don't see anything wrong with LengthWhile-based solution. Also, if we want to reuse mma built-in list-traversing functions (as opposed to explicit loops or recursion), I
don't see a way to avoid reverting the list. Here is a version that does that, but does not reverse the predicate:
findLastLengthWhile[{}, _] = 0;
findLastLengthWhile[list_, predicate_] /; predicate[Last[list]] := Length[list];
findLastLengthWhile[list_, predicate_] :=
Module[{l = Length[list]},
Scan[If[predicate[#], Return[], l--] &, Reverse[list]]; l];
Whether or not it is simpler I don't know. It is certainly less efficient than the one based on LengthWhile, particularly for packed arrays. Also, I use the convention of returning 0
when no element satisfying a condition is found, rather than $Failed, but this is just a personal preference.
Here is a recursive version based on linked lists, which is somewhat more efficient:
ClearAll[linkedList, toLinkedList];
SetAttributes[linkedList, HoldAllComplete];
toLinkedList[data_List] := Fold[linkedList, linkedList[], data];
findLastRec[list_, pred_] :=
Block[{$IterationLimit = Infinity},
Module[{ll = toLinkedList[list], findLR},
findLR[linkedList[]] := 0;
findLR[linkedList[_, el_?pred], n_] := n;
findLR[linkedList[ll_, _], n_] := findLR[ll, n - 1];
findLR[ll, Length[list]]]]
Some benchmarks:
In[48]:= findLastRecursive[Range[300000],#<9000&]//Timing
Out[48]= {0.734,8999}
In[49]:= findLastRec[Range[300000],#<9000&]//Timing
Out[49]= {0.547,8999}
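The linked-list trick translates to other languages too; here is a hypothetical Python rendering with explicit cons cells (iterative rather than recursive, since Python lacks tail-call elimination):

```python
class Cons:
    """Minimal cons cell, analogous to the linkedList[...] wrapper."""
    __slots__ = ("rest", "value")
    def __init__(self, rest, value):
        self.rest, self.value = rest, value

def to_linked(lst):
    """Fold the list into a chain whose head holds the LAST element,
    mirroring toLinkedList above."""
    node = None                 # None plays the role of linkedList[]
    for x in lst:
        node = Cons(node, x)
    return node

def find_last_linked(lst, pred):
    node, n = to_linked(lst), len(lst)
    while node is not None:     # head-first traversal == end-first search
        if pred(node.value):
            return n
        node, n = node.rest, n - 1
    return None
```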
EDIT 2
If your list can be made a packed array (of whatever dimensions), then you can exploit compilation to C for loop-based solutions. To avoid the compilation overhead, you can memoize the
compiled function, like so:
Clear[findLastLW];
findLastLW[predicate_, signature_] := findLastLW[predicate, Verbatim[signature]] =
  With[{sig = List@Prepend[signature, list]},
   Compile @@ Hold[
     sig,
     Module[{k, result = 0},
      Do[
       If[predicate@list[[k]], result = k; Break[]],
       {k, Length@list, 1, -1}];
      result],
     CompilationTarget -> "C"]]
The Verbatim part is necessary since in typical signatures like {_Integer,1}, _Integer will otherwise be interpreted as a pattern and the memoized definition won't match. Here is an example of use:
fn = findLastLW[#<9000&,{_Integer,1}];
Out[61]= {0.016,8999}
EDIT 3
Here is a much more compact and faster version of recursive solution based on linked lists:
findLastRecAlt[{}, _] = 0;
findLastRecAlt[list_, pred_] :=
Module[{lls, tag},
Block[{$IterationLimit = Infinity, linkedList},
SetAttributes[linkedList, HoldAllComplete];
lls = Fold[linkedList, linkedList[], list];
ll : linkedList[_, el_?pred] := Throw[Depth[Unevaluated[ll]] - 2, tag];
linkedList[ll_, _] := ll;
Catch[lls, tag]/. linkedList[] :> 0]]
It is as fast as versions based on Do - loops, and twice faster than the original findLastRecursive (the relevant benchmark to be added soon - I can not do consistent (with previous)
benchmarks being on a different machine at the moment). I think this is a good illustration of the fact that tail-recursive solutions in mma can be as efficient as procedural
(uncompiled) ones.
+1. There are advantages to returning 0, especially when dealing with Compile. – Pillsy Sep 16 '11 at 15:07
1 @Pillsy I usually reserve $Failed for functions doing something less algorithmic and predictable, like reading a file from disk, etc. But I think that this depends on the context in
which you use it more than on the function itself. I can easily imagine that in some context returning $Failed for the problem in question will be more appropriate. I just don't
think that general functions like this should do that - so in that case, I'd write a wrapper function converting 0 to $Failed. – Leonid Shifrin Sep 16 '11 at 16:01
@Pillsy I found an even faster recursive solution - please see my latest edit. – Leonid Shifrin Sep 16 '11 at 22:16
Not really an answer, just a couple of variants on findLastDo.
(1) Actually Return can take an undocumented second argument telling what to return from.
In[74]:= findLastDo2[list_, pred_] :=
 Module[{k, result = $Failed},
  Do[If[pred@list[[k]], Return[k, Module]], {k, Length@list, 1, -1}];
  result]
In[75]:= findLastDo2[Range[25], # <= 22 &]
Out[75]= 22
(2) Better is to use Catch[...Throw...]
In[76]:= findLastDo3[list_, pred_] :=
 Catch[Module[{k, result = $Failed},
   Do[If[pred@list[[k]], Throw[k]], {k, Length@list, 1, -1}];
   result]]
In[77]:= findLastDo3[Range[25], # <= 22 &]
Out[77]= 22
Daniel Lichtblau
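The Catch/Throw idiom — escape a nested computation carrying the answer with it — maps onto exceptions in most languages. A Python sketch of the same pattern (my illustration, not part of the answer):

```python
class Found(Exception):
    """Carries the result out of the loop, like Throw[k]."""
    def __init__(self, index):
        self.index = index

def find_last_throw(lst, pred):
    try:
        for k in range(len(lst), 0, -1):
            if pred(lst[k - 1]):
                raise Found(k)      # analogue of Throw[k]
    except Found as f:              # analogue of the surrounding Catch[...]
        return f.index
    return None                     # nothing matched: the $Failed case
```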
You should docuument that second argument of Return. It makes it a lot more useful! :) – Pillsy Sep 16 '11 at 15:26
@Pillsy I filed a suggestion report for this. – Daniel Lichtblau Sep 16 '11 at 15:32
Awesome, thanks! – Pillsy Sep 16 '11 at 15:44
+1 Any idea why this is undocumented? Any reasons to restrain ourselves from using it? – Sjoerd C. de Vries Sep 16 '11 at 15:56
@Sjoerd C. de Vries No idea on that. Should be safe to use, as best I am aware. I use it. – Daniel Lichtblau Sep 16 '11 at 16:04
For the adventurous...
The following definitions define a wrapper expression reversed[...] that masquerades as a list object whose contents appear to be a reversed version of the wrapped list:
reversed[list_][[i_]] ^:= list[[-i]]
Take[reversed[list_], i_] ^:= Take[list, -i]
Length[reversed[list_]] ^:= Length[list]
Head[reversed[list_]] ^:= List
Sample use:
$list = Range[1000000];
Timing[LengthWhile[reversed[$list], # > 499500 &]]
(* {1.248, 500500} *)
Note that this method is slower than actually reversing the list...
Timing[LengthWhile[Reverse[$list], # > 499500 &]]
(* {0.468, 500500} *)
... but of course it uses much less memory.
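The same masquerade is easy to express in Python, where a tiny read-only view class (names hypothetical) presents the list reversed without copying — essentially what the built-in reversed() iterator does:

```python
class ReversedView:
    """Read-only, copy-free reversed view of a list."""
    def __init__(self, data):
        self._data = data
    def __len__(self):
        return len(self._data)
    def __getitem__(self, i):          # view[i] -> original[-(i+1)]
        return self._data[len(self._data) - 1 - i]
    def __iter__(self):
        return (self._data[i] for i in range(len(self._data) - 1, -1, -1))

def length_while(view, pred):
    """Count leading elements of the view satisfying pred (LengthWhile)."""
    n = 0
    for x in view:
        if not pred(x):
            break
        n += 1
    return n
```

As with the Mathematica wrapper, the view is only as trustworthy as the protocol it implements: code that mutates or slices it will see through the disguise.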
I would not recommend this technique for general use as flaws in the masquerade can manifest themselves as subtle bugs. Consider: what other functions need to implemented to make the
simulation perfect? The exhibited wrapper definitions are apparently good enough to fool LengthWhile and TakeWhile for simple cases, but other functions (particularly kernel built-ins) may
not be so easily fooled. Overriding Head seems particularly fraught with peril.
Notwithstanding these drawbacks, this impersonation technique can sometimes be useful in controlled circumstances.
+1 (My eyes! ). – belisarius Sep 16 '11 at 16:26
2 +1 I don't know whether to applaud or hide under my desk! – Pillsy Sep 16 '11 at 16:40
I am not sure this uses less memory as written - the $list is anyway copied first by the system. You probably can fix this by making reversed HoldAll or HoldFirst. – Leonid Shifrin Sep 16
'11 at 17:06
@Leonid I was unable to observe a memory spike that suggested that the array was being copied. I used crude techniques like sprinkling Print@MemoryInUse[] calls in the reversed
definitions and watching the virtual memory size as reported by external system tools. Can you suggest a way to observe this copying? – WReach Sep 16 '11 at 18:12
@WReach It may be my misunderstanding of how mma treats expressions internally. I thought that whenever we use something like a = Range[100];f[a], first a gets copied into some internal
heap space, which is then used by f. Apparently, mma seem to copy lazily :In[66]:= MemoryInUse[] Out[66]= 11106272 In[67]:= a= Range[1000000]; In[68]:= MemoryInUse[] Out[68]= 15107248 In
[69]:= b=a; In[70]:= MemoryInUse[] Out[70]= 15108536 In[71]:= b[[10]]=1; MemoryInUse[] Out[72]= 19110352. The real copying happened only when some destructive operation was performed on
the list that prevented the ... – Leonid Shifrin Sep 16 '11 at 21:37
Here are some alternatives, two of which don't reverse the list:
findLastLengthWhile2[list_, predicate_] :=
Length[list]-(Position[list//Reverse, _?(!predicate[#] &),1,1]/.{}->{{0}})[[1, 1]]+1
findLastLengthWhile3[list_, predicate_] :=
 Module[{lw = 0},
  Scan[If[predicate[#], lw++, lw = 0] &, list];
  Length[list] - lw]
findLastLengthWhile4[list_, predicate_] :=
 Module[{a}, a = Split[list, predicate];
  Length[list] - If[predicate[a[[-1, 1]]], Length[a[[-1]]], 0]]
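What the run-length bookkeeping in variants 3 and 4 computes is the position of the last element failing the predicate. Under that reading, the same answer falls out of an even plainer single forward pass (a Python sketch, names mine):

```python
def last_failing_index(lst, pred):
    """1-based index of the last element that fails pred, 0 if none does.
    One forward pass, no reversal; running time is independent of where
    the failing element sits."""
    last = 0
    for i, x in enumerate(lst, start=1):
        if not pred(x):
            last = i
    return last
```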
Some timings (number 1 is Pillsy's first one) of finding the last run of 1's in an array of 100,000 1's in which a single zero is placed on various positions. Timings are the mean of 10
repeated measurements:
Code used for timings:
Monitor[
 timings = Table[
ri = ConstantArray[1, {100000}];
ri[[daZero]] = 0;
t1 = (a1 = findLastLengthWhile[ri, # == 1 &];) // Timing // First;
t2 = (a2 = findLastLengthWhile2[ri, # == 1 &];) // Timing // First;
t3 = (a3 = findLastLengthWhile3[ri, # == 1 &];) // Timing // First;
t4 = (a4 = findLastLengthWhile4[ri, # == 1 &];) // Timing // First;
{t1, t2, t3, t4},
{daZero, {1000, 10000, 20000, 50000, 80000, 90000, 99000}}, {10}
 ], {daZero}]
ListLinePlot[
 Transpose[{{1000, 10000, 20000, 50000, 80000, 90000, 99000}, #}] & /@
  (Mean /@ timings // Transpose),
 Mesh -> All, Frame -> True, FrameLabel -> {"Zero position", "Time (s)", "", ""},
 BaseStyle -> {FontFamily -> "Arial", FontWeight -> Bold, FontSize -> 14},
 ImageSize -> 500]
The problem with your list-nonreversing functions is that they traverse the list from the start, which (under assumptions that the result is likely to be found at the end) will likely
be far less efficient than reversing the list and traversing that. – Leonid Shifrin Sep 16 '11 at 15:21
@Leonid True, if you happen to know that will be the case. – Sjoerd C. de Vries Sep 16 '11 at 15:27
@Leonid From my timings it looks like if you don't have a clue the fourth method has the best overall performance. – Sjoerd C. de Vries Sep 16 '11 at 15:57
These timings look a bit unexpected to me. Can you post the code you used? – Leonid Shifrin Sep 16 '11 at 16:05
@Leonid Added. I hope I didn't make a mistake. However, the timings don't look unexpected to me. Number 1 and 2 do a reverse then seek. Should do good with the zero at the end. Number
3 and 4 go through the whole list and should take a time roughly independent of the zero's position. – Sjoerd C. de Vries Sep 16 '11 at 16:18
Timing Reverse for Strings and Reals
a = DictionaryLookup[__];
b = RandomReal[1, 10^6];
Timing[Short@Reverse@#] & /@ {a, b}
(*
->
{{0.016, {Zyuganov,Zyrtec,zymurgy,zygotic,zygotes,...}},
I get 0 for both timings. But what lesson should we learn from the above? That Reverse take longer for strings than for reals? Apparently so, as there are 10 times as much numbers as
there are strings and the ByteCount of b is 8000168 and of a is 5639088. – Sjoerd C. de Vries Sep 16 '11 at 16:12
@Sjoerd I learned that Reverse could represent a problem with really large string lists, but probably not for Reals. Besides, congrats for your tachyonic CPU. – belisarius Sep 16 '11
at 16:22
1 @Sjoerd C. de Vries: I think the lesson is that RandomReal returns a packed array, and operations on packed arrays are much faster than operations on normal lists. (And we could learn
that the first call to Reverse takes slightly longer, but you probably thought to repeat the measurement a few times) – nikie Sep 16 '11 at 17:19
An elegant solution would be:
findLastPatternMatching[{Longest[start___], f_, ___}, f_] := Length[{start}]+1
(* match this pattern if item not in list *)
findLastPatternMatching[_, _] := -1
but as it's based on pattern matching, it's way slower than the other solutions suggested.
uniform motion
Definition of Uniform Motion
● A body is said to be in Uniform Motion if it moves in a straight line at constant speed.
More about Uniform Motion
● A body in uniform motion covers equal distance in equal intervals of time.
● If a body moves in uniform motion with a constant speed v for time t, then the distance traveled s is given as s = v × t.
Examples of Uniform Motion
● Planets move around the sun in uniform motion.
● Suppose a bus travels at a constant speed of 73 miles per hour. Then this motion at a constant speed is an example of uniform motion.
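The relation s = v × t is a one-liner in any language; a small Python check of the bus example (illustrative only):

```python
def distance(speed, time):
    """Uniform motion: distance = speed * time (units must match,
    e.g. miles per hour and hours)."""
    return speed * time
```

At 73 miles per hour for 2 hours, the bus covers distance(73, 2) = 146 miles.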
Solved Example on Uniform Motion
Which of the graphs shows uniform motion?
A. both the Graphs
B. Graph 1
C. Graph 2
D. neither
Correct Answer: C
Step 1: Graph 2 shows that the velocity is constant as the time passes on and hence, Graph 2 represents uniform motion.
Related Terms for Uniform Motion
● Constant
● Motion
● Speed
● Straight Line
Guest Post: Black Holes, Quantum Information and Fuzzballs
Today’s guest post is by Weizmann Institute physicist, Prof. Micha Berkooz. Berkooz, a string theorist, recently organized a conference at the Institute on “Black Holes and Quantum Information
Theory.” We asked him about Hawking’s recent proposal, reported in Nature under the headline: “There are no black holes.”
Celebrated theoretical physicist Stephen Hawking has opened a can of worms in his 1976 paper on black holes. In a recent article, he is trying to put the worms back into the can. It may prove a
little trickier than expected.
Black holes are solutions of Einstein's equations of general relativity which have the unique property that they possess a closed surface in space which is the ultimate "point of no return." Even the
fastest things that nature allows us – light rays – will not escape if they cross this surface, dubbed the horizon. Despite their strange properties, black holes are not really exotic objects. In the
theoretical realm, if one takes enough matter, throws it all to a point and fast forwards using Einstein’s equations, one ends up with a black hole. In the observational realm, there is ample
astrophysical evidence that massive stars end their life as black holes, and that there are mega black holes at the center of many galaxies (including our own).
But black holes don’t just gobble things up. Rather they quite effortlessly chew things up – thoroughly and completely – and spit them out in a completely indecipherable form. This is the main point
of Hawking’s work from 1976, in which he showed that a black hole in empty space emits precisely thermal (black body) radiation when quantum mechanics is taken into account. In fact it emits thermal
radiation until it completely evaporates, and any initial state of the system will end up exactly the same, in the form of thermal radiation.
Very loosely we can understand this as follows: Consider creating an electron and positron pair, in a specific quantum state, just outside the horizon. Each of these particles can have either spin up
or spin down so there are a total of 4 states. Suppose the black hole now gobbles up the positron, never to be seen again, and that the electron makes it back to our lab. The electron has only two
states. So we started with a system which had 4 states and ended with a system that has two states – we lost information! Equivalently we can say that quantum mechanics is not unitary (reversible) in
the presence of black holes, or in more technical terms we can say that the emitted electron is in a density matrix and not a pure state, just the same as exactly thermal radiation.
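The step from "pure state" to "density matrix" can be made concrete with a small numerical sketch (my illustration, not from the post): prepare a maximally entangled pair, discard one member, and the survivor is described by the maximally mixed density matrix — a description from which the initial information cannot be read off, just like thermal radiation.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2): a pure, fully known two-qubit state.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)              # density matrix of the pure state

# Discard the second qubit (the particle behind the horizon):
# partial trace over its index pair.
rho_A = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

purity = np.trace(rho_A @ rho_A)      # tr(rho^2): 1 for pure states, 1/2 here
```

rho_A comes out as 0.5 × identity: a two-state system with no preferred state, i.e. maximal entropy for a single qubit.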
Hawking’s computation is extremely elegant and robust – it only uses 1) quantum field theory on 2) curved space. The former is well tested and verified in just about any high energy physics
experiment, and the latter is just Einstein’s general relativity (as a classical theory). Furthermore, a very similar set of computations is successful in the context of generating the structures in
the universe from primordial quantum fluctuations after the big bang. Yet around the black hole, the synthesis of these two sets of ideas leads to a bewildering result, since any high-enough energy
experiment will create a black hole, and end in information-free thermal radiation. The universe just can’t help losing the information of where the keys are. This is unlike any other quantum
mechanical system whose time evolution does not lose any information.
Interestingly, the surprising prediction for a flux of thermal radiation from a black hole fits very nicely with other properties of the black hole. Shortly before Hawking’s article, Jacob Bekenstein
suggested that black holes have entropy. Bekenstein’s entropy, Hawking’s temperature and the mass of the black hole, which is the same as its energy, satisfy the ordinary laws of thermodynamics.
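To put numbers on this, the standard Schwarzschild result (not quoted in the post, so treat the formula as my addition) is T = ħc³/(8πGMk_B): temperature falls as mass grows. A quick Python evaluation for a solar-mass black hole:

```python
import math

# Physical constants in SI units (rounded; the result is approximate).
hbar  = 1.054571817e-34   # J s
c     = 2.99792458e8      # m/s
G     = 6.67430e-11       # m^3 kg^-1 s^-2
k_B   = 1.380649e-23      # J/K
M_sun = 1.989e30          # kg

def hawking_temperature(mass_kg):
    """Hawking temperature of a Schwarzschild black hole of given mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

T = hawking_temperature(M_sun)   # ~6e-8 K: far colder than the CMB
```

That a solar-mass hole is colder than its surroundings is why evaporation is negligible for astrophysical black holes, while microscopic ones would radiate fiercely.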
The synthesis of quantum mechanics and general relativity has been an outstanding problem for quite some time. Using string theory, and more specifically Maldacena’s AdS/CFT correspondence, it was
finally established that evolution of black holes is unitary and that we do not lose any information, since we can embed black holes in standard quantum theories which we know are completely unitary.
In these theories, black holes seem no different than lumps of coal that burn. The issue remains, however: Where exactly does the synthesis of field theory and classical general relativity fail, and
which of their well tested properties are we forced to modify?
There has been a renewed interest in this question in recent years. An elegant argument from Almheiri, Marolf, Polchinski and Sully suggested that one needs to quantum-mechanically modify the horizon
of a black hole into a hot membrane (whose nature is not clear). This solution has been called the “firewall” solution. In this solution, one gives up some aspects of Einstein’s equivalence
principle, as well as parts of the black hole solution in classical general relativity, where it naively seems that quantum effects should be small.
In another solution, suggested a few years ago by Mathur, one replaces the black hole by a large set of horizon-free solutions of string theory – this is the “fuzzball” solution. This solution is
quite attractive, but so far no one has been able to construct enough “fuzzballs” to account for the black hole entropy. Other solutions suggest some non-locality in space-time, which allows
information to be transported from the interior of the black hole to its exterior, or replacing space-time itself by an algebraic construction, keeping only quantum mechanics. Hawking conceded
already 10 years ago that black holes do not really lose information, and his recent paper provides evidence for the “fuzzball” proposal for the description of black holes.
This topic is one of the topics of research of the String theory group at the Weizmann Institute, Profs. Ofer Aharony, Micha Berkooz, Zohar Komargodski and Adam Schwimmer, who hosted a workshop on
“Black Holes and Quantum Information” earlier this month. The workshop explored the role of entanglement entropy and quantum information theory in the resolution of the black hole information
paradox, and in the very emergence of space-time as a derived concept, which seems to appear in a way similar to how thermodynamics is derived from statistical physics.
1. #1 Charles Alexadner Zorn
January 29, 2014
I think you mean indecipherable. Although close terminology, "undecipherable" refers specifically to speech and writing and asserts an impossibility of ever deciphering. Whereas "indecipherable" is
a more general term suggesting a still-present potential for decoding. Scientific method would suggest more patience than denial, with data that is. I could be wrong but that is science.
2. #2 Charles Alexandner Zorn January 29, 2014
3. #4 Tom Cohoe
North Dakota
January 29, 2014
And at what point in time would the information have been lost to the universe in Hawking’s original synthesis? There’s something wrong right there.
4. #6 Orlando Carlo II
Orlando Fla.
January 31, 2014
I believe that these black holes are the subconscious of all things…and inside of them is the dream state always and I mean always preparing itself for the conscious state which is where we are
now…they are a blend of life asleep and life passed on…
5. #7 G February 1, 2014
In layman’s terms, what comes across (by analogy, therefore very likely wrong) is the idea that the event horizon of a black hole isn’t a hard boundary like a shell, but rather a gradient or
gradual boundary, like the atmosphere around a planet that gets more dense as one gets closer to the planet’s surface. At the boundary’s furthest extent from the singularity, it sucks in objects
with larger mass; and at some point much closer to the singularity, it sucks in photons.
I read the linked article about information paradox.
It seems to me that a mechanism involving nonlocality would fulfill a number of criteria.
Information wouldn’t be “lost,” it would exit the black hole in a form that “would be” decipherable (in the cryptanalytic sense) “if” the outside observer also had the corresponding information
from inside the event horizon. The entire system conserves information (“inside” plus “outside”), but a local observer either “inside” or “outside” only has half of what they need to render their
observed bit stream into actual information rather than apparent noise.
If we assume that nonlocal interactions are truly instantaneous, as in, “infinite velocity”, then by definition that velocity overcomes the attraction of the singularity: the “information” (in
“encrypted” form) escapes.
Lastly a question re. gravity as a “consequence of thermodynamics”: what’s the mechanism? How does thermodynamics produce gravity, other than the obvious that successive stellar life cycles are
entropic, and stars (and the planets accreted around them) exhibit gravity?
Since gravity causes objects to clump together and stick, one could speculate that gravity is the meeting-point or intersection between dissipation (entropy) and accretion (negentropy).
OK, feel free to tell me where I thoroughly screwed up on this. At the purely conceptual level it seems to make sense, but that means nothing until there’s math to support or falsify it.
6. #8 Tom Cohoe
North Dakota
February 2, 2014
Here is what I meant in #4. General relativity says that all coordinate reference frames are equally valid. In some of them, it takes infinite time for something to reach the horizon. It is why
black holes used to be called “frozen stars”. If it can be said at any finite coordinate time T_0 that “information has now fallen through the horizon”, that amounts to invalidating all the
coordinate reference frames in which the particle would fall through the horizon at a time later than T_0, which is a contradiction of general relativity. For anything to fall through the
horizon, or for a horizon to even form in coordinate time is a violation of general relativity.
7. […] Guest Post: Black Holes, Quantum Information and Fuzzballs – The Weizmann Wave […]
Artificial Neural Networks in the Outcome Prediction of Adjustable Gastric Banding in Obese Women
Obesity is unanimously regarded as a global epidemic and a major contributing factor to the development of many common illnesses. Laparoscopic Adjustable Gastric Banding (LAGB) is one of the most
popular surgical approaches worldwide. Yet, substantial variability in the results and significant rate of failure can be expected, and it is still debated which categories of patients are better
suited to this type of bariatric procedure. The aim of this study was to build a statistical model based on both psychological and physical data to predict weight loss in obese patients treated by
LAGB, and to provide a valuable instrument for the selection of patients that may benefit from this procedure.
Methodology/Principal Findings
The study population consisted of 172 obese women, with a mean±SD presurgical and postsurgical Body Mass Index (BMI) of 42.5±5.1 and 32.4±4.8 kg/m^2, respectively. Subjects were administered the
Minnesota Multiphasic Personality Inventory-2 (MMPI-2), a comprehensive test of psychopathology. The main goal of the study was to use presurgical data to predict individual therapeutic outcome in terms
of Excess Weight Loss (EWL) after 2 years. Multiple linear regression analysis using the MMPI-2 scores, BMI and age was performed to determine the variables that best predicted the EWL. Based on the
selected variables including age, and 3 psychometric scales, Artificial Neural Networks (ANNs) were employed to improve the goodness of prediction. Linear and non linear models were compared in their
classification and prediction tasks: non linear model resulted to be better at data fitting (36% vs. 10% variance explained, respectively) and provided more reliable parameters for accuracy and
mis-classification rates (70% and 30% vs. 66% and 34%, respectively).
ANN models can be successfully applied for prediction of weight loss in obese women treated by LAGB. This approach may constitute a valuable tool for selection of the best candidates for surgery,
taking advantage of an integrated multidisciplinary approach.
Citation: Piaggi P, Lippi C, Fierabracci P, Maffei M, Calderone A, et al. (2010) Artificial Neural Networks in the Outcome Prediction of Adjustable Gastric Banding in Obese Women. PLoS ONE 5(10):
e13624. doi:10.1371/journal.pone.0013624
Editor: Jeremy Miles, RAND Corporation, United States of America
Received: February 3, 2010; Accepted: October 4, 2010; Published: October 27, 2010
Copyright: © 2010 Piaggi et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Funding: These authors have no support or funding to report.
Competing interests: The authors have declared that no competing interests exist.
Obesity is unanimously regarded as a global epidemic and a major contributing factor to the development of many common illnesses seen in medical practice. Obesity represents a serious public health
concern, reducing life expectancy and raising health care costs. The dramatic increase in the prevalence of obesity is partly related to the fact that conventional therapies have limited efficacy,
and the effective management of obesity has consequently become an important clinical focus [1], [2]. Lifestyle interventions can provide a variable degree of weight loss. The key features are
adherence to a dietary strategy and exercise programs, but high relapse rates are usually reported. Hope for the development of new anti-obesity drugs grows out of progress that is being made in our
understanding of the mechanisms that control body weight and body energy homeostasis. Yet, available pharmacotherapy options are limited and in severely obese subjects their efficacy is usually
inadequate and temporary.
The greatest excitement in obesity treatment has come from increasing evidence of the effectiveness of surgical approaches. Recent studies demonstrate a reduction in mortality, beside dramatic
benefits in comorbidities, in obese patients treated surgically. In addition, after bariatric surgery, most patients report improvement in psychosocial functioning and quality of life. Altogether,
this has led to an exponential increase in the number of procedures performed during the last ten years [3]. Surgery is considered the treatment of choice in extreme or morbid obesity (Body Mass Index
- BMI≥40). It reverses, ameliorates, or eliminates major cardiovascular risk factors, including diabetes, hypertension, and lipid abnormalities, also when obesity is less severe (BMI≥35). Bariatric
surgery should be conducted in centers that are able to assess patients before surgery and to offer a comprehensive approach to diagnosis, treatment, and long-term follow-up. Bariatric surgery
includes restrictive procedures as well as procedures limiting the absorption of nutrients. Each of these procedures has its own set of expected outcomes and potential complications. Laparoscopic
Adjustable Gastric Banding (LAGB) is one of the most popular restrictive bariatric surgical approaches worldwide. Briefly, a flexible silicone band lined with an inflatable balloon is wrapped around
the stomach to create a small upper portion with a narrow opening to a lower large portion. The band is connected to an injection reservoir that is implanted on the abdominal wall underneath the
skin, through which the balloon can be inflated or deflated to increase or decrease the restriction. Inflation of the balloon tightens the band and slows down food progression, eventually making
patients feel full faster and longer, but this may promote nausea and vomiting. Adjustments are made periodically based on the patient's individual needs.
LAGB has documented satisfactory long-term weight loss, has the best record of safety among the bariatric operations, does not compromise nutrient absorption, is reversible, and can be performed at a
relatively low cost. One further advantage lies in long-term adjustability, which can help maximize weight loss while minimizing adverse symptoms [4], [5]. Yet, substantial variability in the results
and significant rate of failure can be expected, and it is still debated which categories of patients are better suited to this type of bariatric procedure. In this regard the psychological profile
of the candidate patient is thought to be of great relevance. Several studies have been performed to identify potential predictors of success of LAGB, but the existing literature on this matter is
far from conclusive. Inconsistent and sometime contradictory results have been reported when BMI, sex, age, physical and psychological factors have been analyzed for their ability to influence the
outcome in patients undergoing LAGB [6]–[15]. The reasons for these discrepancies may be related to the peculiar “behavioral” effects of bariatric surgery on obese subjects who are going to lose
weight as long as they are able to change their habits after surgery [16]–[20]. A recent French nationwide survey shows that the best profile for success after gastric banding is a patient <40
years, with an initial BMI<50, willing to change his or her eating habits and to recover or increase his or her physical activity after surgery and who has been operated by a team usually performing
>2 bariatric procedures per week [21]. Indeed, patient's ability to fulfill postoperative behavioral changes necessary for success is dependent not only on patient's individual characteristics but
also on the experience and skill of the multidisciplinary team that assists the patient during his or her treatment course and that must provide technical, motivational and psychological support. Therefore,
it is not unexpected that predictors of success of LAGB may differ depending on the cultural, social, ethnical or temporal context in which the obesity center is operating.
The effectiveness and the risk-benefit profile of medical intervention require advanced data analysis to classify patient typologies and to predict the effects of therapies in each class. This goal
can be set by joining the experience of a medical team, expert in obesity treatments and researchers in the fields of model identification and data mining.
Artificial Neural Networks (ANNs) [22] are flexible non linear mathematical systems capable of modeling complex functions. ANNs can be applied each time there is a relationship between independent
predictor variables (inputs) and dependent predicted variables (outputs), even when that relationship is composite, multidimensional and non linear. Another advantage is that ANNs learn by example,
and a peculiar outcome (e.g. weight loss) can be associated with an interactive combination of changes on a subset of the variables being monitored (e.g. patients' characteristics) by training
algorithms that automatically take into account also the influence of a peculiar environment (obesity center) that mediates the relationship between predictors and outcome. ANNs appear to be better
at prediction of weight loss after bariatric surgery than do traditional strategies such as logistic regression [23].
The aim of this study was to investigate the performance of ANN models for prediction of weight loss in obese women treated by LAGB, and to provide an instrument of clinical value in the selection of
patients that may benefit from LAGB. Patients' age and BMI were chosen as these parameters have been consistently reported among predictors of LAGB success. Data collected by the Minnesota
Multiphasic Personality Inventory-2 (MMPI-2) were employed since this is one of the most common psychometric tests that provides an objective understanding of the motivational patterns as well as a
broadband measure of patient's personality and psychopathology.
From March 2003 to September 2006, 235 obese females underwent LAGB (Swedish Adjustable Gastric Band by Ethicon Endosurgery, Johnson and Johnson, New Brunswick, NJ, USA) at the Obesity Center of the
University Hospital of Pisa. LAGB, among various surgical procedures, was chosen according to the following selection criteria: BMI 40 to 60 kg/m^2 or BMI 35 to 40 kg/m^2 with serious medical
conditions related to obesity. Patients with psychotic disorders, major mood disorders, personality disorders, alcohol or substance abuse, bulimia nervosa or binge eating disorder were excluded from
LAGB. None of the patients was taking psychotropic drugs at the time of surgery. For each patient presurgical evaluation included a clinical examination, laboratory and instrumental investigation, a
psychological and psychopathological evaluation and an assessment of eating behaviour. Clinical and instrumental examinations of each patient were performed following the Italian guidelines for
obesity and each patient was treated according to appropriate protocols for his/her condition. After surgery patients were periodically seen at the Center, and Excess Weight Loss (EWL) was calculated
at 2 years follow-up.
The psychological/psychiatric assessment consisted of clinical interviews and administration of the MMPI-2. The MMPI-2 is the most widely used questionnaire for determining the presence of psychopathology,
and it has been carefully investigated and normed [24]–[26]. The questionnaire includes 567 statements and subjects have to answer “true” or “false” according to what is predominantly true or false
for them. The test is designed for individuals aged 18 and older. The first 370 items are divided into 10 clinical scales and 3 validity scales. This study also used content scales, which consist of
clusters of items concerning the same psychological dimension and behavioral area. Raw scores from each scale are transformed into standardized T scores: on the clinical and validity scales, a T
score of 50 is the estimated population average with a standard deviation of 10. A T score of 65, corresponding to the 92nd percentile, appears to be an optimal cut-off point for separating the
normative samples from a “clinically interpretable” sample. If the T score of the validity-scales exceeds prefixed thresholds (Lie-scale≥80, Infrequency≥90, and Correction≥80), the possibility exists
that the test is not valid.
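The raw-to-T-score standardization described above can be sketched as follows; the normative mean and SD used here are illustrative placeholders, not actual MMPI-2 norms.

```python
def t_score(raw, norm_mean, norm_sd):
    """Convert a raw scale score to a standardized T score:
    the population mean maps to 50, one SD to 10 points."""
    return 50 + 10 * (raw - norm_mean) / norm_sd

def is_clinically_interpretable(t):
    # T >= 65 (~92nd percentile) is the usual interpretive cut-off
    return t >= 65

# Illustrative normative values (not the actual MMPI-2 norms):
# a raw score 2 SD above the mean yields T = 70
print(t_score(30, norm_mean=20, norm_sd=5))
```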
Among the 235 patients, 8 MMPI-2 tests were considered invalid because more than 30 of the 567 questions remained unanswered. Ten patients did not fill out the tests due to poor command of Italian (4 patients)
or to a low educational level. Twenty-five patients did not return the psychological test with no specific reasons. Among the remaining 192 women, 2 became pregnant within 2 years after surgery and
were not included in the analysis. In 5 patients the band had to be removed because of slippage (1 patient) or uncontrollable vomiting. Six patients did not receive the follow-up visit at 2 years:
one moved to a foreign country and five had a follow up visit after 2 years and 6 months. Seven patients preferred to be followed-up at a hospital closer to their home city.
Overall, the study population consisted of 172 obese women, aged 19 to 67 years (mean age ± SD = 41.7 ± 11.3 years) with a mean ± SD presurgical and postsurgical (24 months after the intervention)
BMI of 42.5 ± 5.1 kg/m^2 and 32.4 ± 4.8 kg/m^2, respectively. Table 1 shows the main phenotype characteristics of the study group before LAGB intervention.
Ethics Committee approval was not required since patients' identity is not disclosed and data were collected during, and according to, routine examination of the patients. Patients did not undergo any
treatment or examination specifically devised to collect data employed in this study, and for which their informed consent was necessary.
Statistical methods
At first, a best-subset algorithm was used to select the most significant predictors of the EWL among the psychological scales, age and BMI before LAGB. Selected variables were used in a standard
multiple linear regression model. An ad-hoc ANN was then employed to perform a nonlinear regression using the same variables and the EWL: a specific cost function provided a nonlinear formula to
achieve the best correlation between EWL and the selected predictors.
Finally, results obtained by the linear and the nonlinear models were applied in standard prediction and classification tasks, by dividing patients according to quartiles of EWL.
Best-subset Algorithm and Multiple Linear Regression Model.
A multiple linear regression model [27] based on a best-subset algorithm [28] was determined. All MMPI-2 psychological scales (validity, clinical, content and supplementary scales), pre-operative BMI
and age were selected as independent variables for the linear model and EWL at 24-month follow-up was chosen as the dependent variable (output). EWL was calculated as follows:
EWL (%) = (presurgical weight − weight at follow-up) / (presurgical weight − ideal weight) × 100
where ideal weight is defined by the Lorentz formula [29] (for female subjects):
ideal weight (kg) = height (cm) − 100 − (height (cm) − 150) / 2
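Assuming the standard Lorentz formula for women (height − 100 − (height − 150)/2, heights in cm and weights in kg) and the usual definition of EWL as the percentage of excess weight lost, the calculation can be sketched as:

```python
def lorentz_ideal_weight_female(height_cm):
    """Lorentz ideal weight for women, in kg."""
    return height_cm - 100 - (height_cm - 150) / 2.0

def excess_weight_loss(presurgical_kg, current_kg, height_cm):
    """EWL (%) = weight lost / excess weight * 100."""
    ideal = lorentz_ideal_weight_female(height_cm)
    return 100.0 * (presurgical_kg - current_kg) / (presurgical_kg - ideal)

# Example: 165 cm tall, 110 kg before surgery, 85 kg at 2-year follow-up.
# Ideal weight = 165 - 100 - 7.5 = 57.5 kg; EWL = 25 / 52.5 * 100 ≈ 47.6%
print(round(excess_weight_loss(110, 85, 165), 1))  # 47.6
```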
In order to obtain a robust model, only subsets with a number of independent variables ranging from 1 to 4 were calculated. This choice reflects the practical constraint that the size of the
data set is limited to 172 patients, together with the usual rule of thumb relating the minimum number of records to n, the number of independent variables included in the regression model. Furthermore, the parsimony
principle, stating that it is preferable to select the model with the smallest number of variables, was adopted.
All possible combinations of explanatory variables (from one to four) were computed in a multiple linear regression with EWL as the dependent variable, and a list of the values of R^2, adjusted R^2,
p-value and the standard deviation for each linear model was extracted. Among all models, the model with the highest R^2, the smallest standard deviation and a p-value less than 0.05 was chosen.
R-squared partial correlations were used to measure the marginal contribution of each explanatory variable when all others were already included in the model. Finally, EWL was predicted through a
linear combination of regression coefficients β.
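A minimal sketch of the best-subset search: enumerate every combination of 1 to 4 predictors, fit an ordinary least-squares model, and keep the best-scoring subset. Adjusted R^2 is used here as the single selection criterion for simplicity (the study additionally screened on standard deviation and p-value); the toy data and predictor names are purely illustrative.

```python
import itertools
import numpy as np

def adjusted_r2(X, y):
    """Adjusted R^2 of an ordinary least-squares fit with intercept."""
    n, k = len(y), X.shape[1]
    A = np.column_stack([np.ones(n), X])
    beta, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def best_subset(X, y, names, max_vars=4):
    """Exhaustively search all predictor subsets of size 1..max_vars."""
    best_names, best_score = None, -np.inf
    for k in range(1, max_vars + 1):
        for cols in itertools.combinations(range(X.shape[1]), k):
            score = adjusted_r2(X[:, cols], y)
            if score > best_score:
                best_names, best_score = [names[c] for c in cols], score
    return best_names, best_score

# Toy data: the outcome depends only on the first two candidate predictors
rng = np.random.default_rng(0)
X = rng.normal(size=(172, 4))
y = 2 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=172)
subset, score = best_subset(X, y, ["age", "Pa", "Asp", "TpA"])
print(subset, round(score, 3))
```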
Neural Network Models: Architecture and Learning Algorithm.
The ANN [22], [30]–[32] model used in this study was a Multi-Layer Perceptron (MLP), a feed-forward neural network for mapping sets of input data onto a set of appropriate outputs. MLP is
characterized by three layers of neurons (input layer, hidden layer and output layer) with nonlinear activation functions at the hidden layer [33].
The basic architecture of MLP (Fig. 1) consists of an input layer passing input data x to a layer of “hidden” neurons with a sigmoid activation function, such as the hyperbolic tangent: h = tanh(W1·x + b1), where
W1 and b1 are the weight matrix (between input layer and hidden layer) and the bias parameters of the hidden layer units, respectively. The outputs y of the network are a linear function of the hidden layer: y = W2·h + b2, where W2 and b2 are the weight matrix (between hidden layer and output layer) and the bias parameters of the output layer units, respectively.
Usually, given observed data (x, y), the optimal values for the weight and bias parameters (W1, b1, W2 and b2) are found by training the MLP, i.e., performing a non linear (due to the use of a non linear function like,
e.g., the hyperbolic tangent) optimization for which the mean square error of the output, J = Σ ‖y − ŷ‖², is minimized. J is called the cost function or objective function of the MLP. When the training algorithm is
stopped, the MLP has found a set of non linear regression relations y = f(x).
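A minimal forward pass for the MLP described above (tanh hidden layer, linear output), with illustrative sizes of four inputs, five hidden neurons and one output:

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Single-hidden-layer perceptron: tanh hidden units, linear output."""
    h = np.tanh(W1 @ x + b1)  # hidden-layer activations
    return W2 @ h + b2        # linear output layer

# Illustrative sizes: 4 inputs -> 5 hidden neurons -> 1 output
rng = np.random.default_rng(1)
W1, b1 = rng.normal(scale=0.5, size=(5, 4)), np.zeros(5)
W2, b2 = rng.normal(scale=0.5, size=(1, 5)), np.zeros(1)
x = np.array([0.2, -1.0, 0.5, 0.3])
y_hat = mlp_forward(x, W1, b1, W2, b2)
print(y_hat.shape)  # (1,)
```

In training, the weights and biases would be adjusted to minimize the mean square error between y_hat and the observed outputs.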
In this study, in order to identify the best correlation between the independent variables and the EWL, two feed-forward MLPs were used, one for non linear mapping of variables (x) into a single
score u, the other one for linear mapping of EWL (y) into a score v.
These two networks independently map from the inputs x and y to the scores u and v, respectively. A particular cost function forces the correlation between u and v to be maximized by finding the
optimal values of weights and bias.
In the first MLP (Fig. 2), the input layer consists of variables considered as statistically significant by the previous best-subset algorithm; the hidden layer is characterized by some hidden
neurons (in Fig. 2, five hidden neurons), and the output layer consists of one output neuron (the non linear score u). For computational issues, input variables were initially standardized by
removing the mean value and dividing them by the standard deviation of each variable.
In the second MLP (Fig. 3), a linear mapping of EWL is performed: this linear mapping (i.e., using a linear activation function) was chosen to simplify the network and to compare the results with
those obtained by the linear model based on standard multiple regression. Furthermore, by having a second MLP, a non linear recombination of multiple dependent variables (beside EWL) may be obtained
by replacing the linear activation function with a non linear one.
For both MLPs, the number of hidden neurons was determined through a trial-and-error process and following a general principle of parsimony, because no commonly accepted theory exists to determine
the optimal number of neurons in the hidden layer: in detail, several runs (i.e., training of MLP) with increasing number of neurons were made. As a result of this step, the number of hidden neurons
was chosen when the correlation between u and v did not improve appreciably by increasing the number of units.
For both MLPs, the input variable vectors x and y are mapped to the neurons in the hidden layers h_x and h_y as follows: h_x = tanh(W_x·x + b_x) and h_y = W_y·y + b_y,
where W_x and W_y are the weight matrices between input layer and hidden layer and b_x and b_y are the bias parameter vectors of the hidden layer units. The scores u and v are obtained from a linear combination of the
hidden neuron vectors h_x and h_y, respectively, with u = w_u·h_x + b_u and v = w_v·h_y + b_v.
To maximize the correlation between u and v, the specific cost function J = −corr(u, v) was minimized by finding the optimal weight and bias values between the different nodes (W_x, b_x, W_y, b_y, w_u, b_u, w_v, b_v) using all the available data. In
addition, we applied the constraints ⟨u⟩ = ⟨v⟩ = 0 and ⟨u²⟩ = ⟨v²⟩ = 1 (zero mean and unit variance for both scores), which were inserted as penalty terms into a modified cost function (J_m):
J_m = −corr(u, v) + (⟨u⟩² + ⟨v⟩²) + (⟨u²⟩ − 1)² + (⟨v²⟩ − 1)²
The nonlinear optimization was carried out by a quasi-Newton algorithm. Because of the well-known problem of multiple local minima in the MLP cost function, there was no guarantee that the
optimization algorithm reached the global minimum: hence a number of runs (i.e., trainings of the MLP) mapping from the inputs to the scores, starting from random initial parameters, were performed. The number of runs was fixed to 200
and the run attaining the lowest value of J[m] was selected as the final solution.
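A sketch of the modified cost function described above: the negative correlation between the scores u and v, plus penalty terms pushing both scores toward zero mean and unit variance. The exact form of the penalty terms is an assumption; only the constraints themselves are stated in the text.

```python
import numpy as np

def modified_cost(u, v):
    """Negative correlation plus penalties enforcing zero-mean,
    unit-variance scores (penalty form is an illustrative assumption)."""
    corr = np.corrcoef(u, v)[0, 1]
    mean_pen = u.mean() ** 2 + v.mean() ** 2
    var_pen = (u.var() - 1) ** 2 + (v.var() - 1) ** 2
    return -corr + mean_pen + var_pen

# Standardized, perfectly correlated scores attain the minimum value -1
u = np.array([-1.5, -0.5, 0.5, 1.5])
u = (u - u.mean()) / u.std()
print(round(modified_cost(u, u.copy()), 6))  # -1.0
```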
MLP might suffer from overfitting, i.e., if the MLP has too many parameters, its output will fit very accurately all training set data (including the noise) but it will provide meaningless responses
with new data that are not present in the training set. To overcome this pitfall, 20% of the data were randomly selected as validation data and withheld from the training set of the MLP: runs where
the correlation between u and v was found lower for the validation data than for training data set were rejected to avoid overfitted solutions.
Classification of Subjects in terms of EWL Outcome.
The predictive performance of both models was evaluated by calculation of the true positive fraction (TPF, or sensitivity) and of the false positive fraction (FPF, the complement of specificity). To this purpose
patients were divided into 2 groups by using the first quartile of actual EWL as a cut-off value: patients with an EWL within the 3 highest quartiles were arbitrarily assigned to the positive group
while patients with an EWL within the lowest quartile were assigned to the negative group. Sensitivity was defined as the rate of patients correctly predicted in the positive group over those
actually belonging to the positive group; specificity was defined as the rate of patients correctly predicted in the negative group over those actually belonging to the negative group.
The sensitivity and specificity of both weight scores (obtained from linear and MLP models) in relation to LAGB outcome were plotted for each possible predictive score cutoff in the so-called
Receiver Operating Characteristic curves (ROC) and the Area Under each ROC Curve (AUC) was estimated. AUC measures the discriminating accuracy of the (linear or non linear) model, i.e., the ability
of the model to correctly classify patients in the positive or in the negative group.
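The threshold sweep and the trapezoidal AUC estimate can be sketched in pure Python; the scores and group labels below are illustrative.

```python
def roc_points(scores, labels):
    """Sweep every score as a cut-off; return sorted (FPF, TPF) pairs.
    labels: 1 = positive group (EWL above first quartile), 0 = negative."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for thr in sorted(set(scores)) + [float("inf")]:
        tp = sum(1 for s, l in zip(scores, labels) if s >= thr and l == 1)
        fp = sum(1 for s, l in zip(scores, labels) if s >= thr and l == 0)
        pts.append((fp / neg, tp / pos))
    return sorted(pts)

def auc(points):
    """Trapezoidal area under the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

# A predictive score that perfectly separates the groups gives AUC = 1
scores = [0.9, 0.8, 0.7, 0.2, 0.1]
labels = [1, 1, 1, 0, 0]
print(auc(roc_points(scores, labels)))  # 1.0
```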
Cross-Validation and Prediction.
Up to this point both linear and non linear models were built by considering all patients of the data set. In other words, models were built from a database where inputs and output were perfectly
known. The following step was to apply the models to new data in order to assess their prediction value by using the cross-validation method and the confusion matrix as analysis tools.
In the cross-validation algorithm, the whole data set is repeatedly split into training and test sets, and data from the test set are classified with the model obtained from the training set. In the
case of the non linear model one group of patients was used as test data in order to make a prediction of the EWL (Test Set), and the others were used for training the MLP (Training set).
The same procedure with the same partitions was conducted in the case of the linear model, calculating linear regression coefficients β from training data set and making a prediction of the output
from the test set.
Therefore in the test phase, each model made a prediction of EWL based only on the test set. If the predicted EWL value belonged to the same quartile of the actual EWL of the patient under test, the
prediction was considered correct.
Confusion matrix [34] was used as a tool for evaluating effectiveness of model prediction; this is a table that allows a comparison of the accuracy of the predicted EWL-quartile membership against
the actual membership. Each predicted quartile was plotted against the actual one and the number of subjects classified within each quartile gave an indication on the effectiveness of the prediction.
In other words, the model tried to classify patients into four possible classes of EWL, considering the selected variables. The elements of the matrix (its dimension was 4×4) represented the
percentage of patients that were correctly classified within each class.
The whole procedure was as follows:
1. The sample was subdivided into three homogeneous random subgroups;
2. Both MLP and linear regression models were trained with two of the three subgroups and the third group was used to test the model: a confusion matrix was calculated from the results of the test
operation (i.e. the number of patients properly classified by the model, expressed as percentage);
3. Step 2 was repeated cyclically, exchanging subgroups for training and for testing. From the confusion matrices that were obtained, the mean value of each element was computed to express the
global model prediction. This allowed the training algorithm to use virtually the entire data set for training;
4. The cross-validation algorithm was repeated 100 times with different subsets of patients for training and test sets, for both the linear and the non linear models.
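Steps 1–3 of the procedure above can be sketched as a k-fold cross-validation that accumulates a confusion matrix over EWL quartiles. The fit/predict callables stand in for either model; the toy predictor below simply echoes the true quartile, as a check on the bookkeeping.

```python
import numpy as np

def quartile_labels(values):
    """Assign each value to its EWL quartile (0..3)."""
    qs = np.quantile(values, [0.25, 0.5, 0.75])
    return np.searchsorted(qs, values, side="right")

def cv_confusion(X, y, fit, predict, n_folds=3, n_classes=4, seed=0):
    """k-fold cross-validation; returns the confusion matrix in percent
    (rows: actual quartile, columns: predicted quartile)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    cm = np.zeros((n_classes, n_classes))
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        model = fit(X[train], y[train])
        for a, p in zip(y[test], predict(model, X[test])):
            cm[a, p] += 1
    return 100 * cm / cm.sum(axis=1, keepdims=True)

# Sanity check: a predictor that returns the true class is 100% on the diagonal
y = quartile_labels(np.arange(40, dtype=float))
X = y.reshape(-1, 1).astype(float)
cm = cv_confusion(X, y, fit=lambda X, y: None,
                  predict=lambda m, X: X[:, 0].astype(int))
print(np.diag(cm))
```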
All statistical comparisons and analysis (best subset algorithm, multiple linear regression and MLP models, ROC curves and cross validation with confusion matrix) were performed using Matlab™, by a
toolbox named “Obefix” [35].
After LAGB, an average EWL of 48.19% (SD = 19.71%) was observed. There was a large difference in EWL among patients, ranging from almost complete weight normalization to absence of weight loss (range
of EWL 0–91.3%). When patients were divided into quartiles based on the EWL achieved by LAGB, the EWL upper thresholds between consecutive quartiles were 35%, 48.9% and 62.8%.
The distribution of MMPI-2 scores obtained in our sample of 172 obese subjects before surgery is reported in Tables 2, 3 and 4. MMPI-2 scale scores were categorized in four classes (<50, 50–64, 65–74
and ≥75).
Table 2. Percent distribution (and cases) of obese subjects based on MMPI-2 T scores of validity scales as compared with the normative population.
Table 3. Percent distribution (and cases) of obese subjects based on MMPI-2 T scores of clinical scales as compared with the normative population.
Table 4. Percent distribution (and cases) of obese subjects based on MMPI-2 T scores of content scales as compared with the normative population.
Chi-square test was used to determine whether the distribution of MMPI-2 scale scores in our cohort of obese females differed significantly from that of the Italian normative population [36]. In this
regard it should be noted that MMPI-2 T scores have been computed to ensure that in the normative population a T score of a given level has the same percentile value for all scales. As compared with
the normative population, in the validity scale “Lie” a significantly lower proportion of obese women scored lower than 50, and a significantly greater proportion scored between 65 and 74 (Table 2).
In addition, a significantly lower proportion of obese women scored lower than 50 in the clinical scales “Hypochondriasis”, “Psychopathic Deviate” and “Schizophrenia”. A significantly higher
proportion of obese women fell within category 50–64 in clinical scales “Psychopathic Deviate” and “Schizophrenia” (Table 3).
Regarding the content scales “Obsessiveness”, “Anger” and “Family Problems“, our cohort significantly differed from the normative population, showing predominantly lower scores (Table 4).
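The comparison with the normative population amounts to a chi-square goodness-of-fit test over the four score classes. A minimal sketch, with made-up counts for illustration (not the paper's data):

```python
def chi_square_stat(observed, expected_prop):
    """Pearson chi-square statistic for observed counts vs. expected
    class proportions (proportions must sum to 1)."""
    n = sum(observed)
    return sum((o - n * p) ** 2 / (n * p)
               for o, p in zip(observed, expected_prop))

# hypothetical example: 100 subjects over 4 score classes, with the
# normative population expected to be evenly split
stat = chi_square_stat([30, 20, 25, 25], [0.25, 0.25, 0.25, 0.25])
```

The statistic would then be compared against a chi-square distribution with (classes − 1) degrees of freedom to obtain the p-value.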
Multiple Linear Regression Model
As a result of best-subset regression algorithm, a model with the independent variables age, “Paranoia” (Pa), “Antisocial Practices” (Asp) and “Type-A Behaviour” (TpA) was selected (Table 5). These
four independent variables accounted for about 10% of the weight loss variance: the Pearson coefficient of correlation r, coefficient of determination R^2 and p-value are shown in Table 6.
Table 5. Multiple linear regression coefficients.
Table 6. Multiple linear regression model summary.
Table 5 also illustrates that the Variance Inflation Factor (VIF) values for this model varied between 1.418 for the Asp scale and 1.007 for age, which are far below the recommended level of VIF = 5 [37]:
therefore, VIF values suggested that independent variables included in this model did not suffer from the problem of multicollinearity.
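VIF for a predictor is 1/(1−R²), where R² comes from regressing that predictor on the remaining ones. A numpy sketch of this check (our own helper, not part of the Obefix toolbox):

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of X (n samples x p predictors)."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # intercept + other predictors
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))                # VIF_j = 1 / (1 - R^2_j)
    return np.array(out)
```

Independent predictors give VIF near 1; a column that is nearly a linear combination of the others inflates its VIF well past the recommended threshold.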
The analysis of residuals confirmed the validity of the model: they had zero mean, Gaussian distribution (confirmed by statistical tests of Jarque-Bera and Lilliefors) and were independent
(hypothesis confirmed by Runs Test).
A predicted EWL score (standardized) was calculated from the fitted regression equation (coefficients in Table 5). A simple regression analysis was then conducted with actual EWL (standardized) as dependent variable and predicted EWL score
(standardized) as the independent variable (Fig. 4). Results are summarized in Table 6.
Figure 4. Linear regression model.
Figure shows predicted EWL on x-axis versus actual EWL on y-axis. Solid line represents best fit line (r = 0.326), green points are subjects with predicted EWL belonging to the first quartile, blue
points to the second quartile, cyan to the third and red to the fourth quartile. Vertical and horizontal dotted lines denote quartiles of predicted EWL and actual EWL, respectively. Black crosses
indicate centroids (mean values) of the first and the last quartiles.
Multi-Layer Perceptron Model
As a result of MLP training with an increasing number of neurons, the correlation between the nonlinear score u and the weight loss score v did not improve appreciably when the number of neurons in the hidden layer was increased over five. Therefore, the number of hidden neurons was fixed to 4 for both MLPs.
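The mapping from the inputs to the nonlinear score is a standard one-hidden-layer MLP forward pass; a schematic version with tanh hidden units is shown below (the weights here are random placeholders; in the study they come from training, and the exact activation functions are not specified in this excerpt):

```python
import numpy as np

def mlp_forward(x, W1, b1, w2, b2):
    """One hidden layer of tanh units, linear output:
    u = w2 . tanh(W1 x + b1) + b2."""
    h = np.tanh(W1 @ x + b1)
    return float(w2 @ h + b2)

# 4 inputs (e.g. standardized age, Pa, Asp, TpA) -> 4 hidden units -> 1 score
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 4)); b1 = rng.normal(size=4)
w2 = rng.normal(size=4); b2 = 0.0
u = mlp_forward(rng.normal(size=4), W1, b1, w2, b2)
```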
When the same input/output data were fed to the MLPs, the Pearson correlation coefficient between the nonlinear score u and the weight loss score v was 0.604, significantly greater than that obtained with the linear model. In addition, R^2 increased from 0.1 to 0.365 and the standard error of estimate decreased from 0.948 to 0.799, which indicated a better fit for the non linear model (Table 7 and
Fig. 5).
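For a simple regression on a single score, R² is the square of the Pearson r, and for standardized variables the standard error of estimate is approximately √(1−R²), up to a degrees-of-freedom correction. The quoted figures are mutually consistent:

```python
import math

def r_squared_from_r(r):
    """Coefficient of determination for a simple regression."""
    return r * r

def see(r2):
    """Standard error of estimate for standardized variables,
    ignoring the degrees-of-freedom correction."""
    return math.sqrt(1.0 - r2)

# linear model:     r = 0.326 -> R^2 ~ 0.106 (reported as ~0.1),  SEE ~ 0.95
# non linear model: r = 0.604 -> R^2 ~ 0.365,                     SEE ~ 0.80
```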
Figure 5. Non linear regression model.
Figure shows non linear score u on x-axis versus EWL score v on y-axis. Solid line represents best fit line (r = 0.604), green points are subjects who have u score belonging to the first quartile,
blue points to the second quartile, cyan to the third and red to the fourth quartile. Vertical and horizontal dotted lines denote quartiles of u and v score, respectively. Black crosses indicate
centroids (mean values) of the first and the last quartiles.
Table 7. Linear (multiple regression), all 2-way interactions model and non linear (MLP) models summary.
Furthermore, in order to validate the performance of the non linear model, a multiple regression model with all two-way interactions between variables significantly correlated with EWL was computed.
Seven variables (main effects) and their interactions accounted for 26.8% of EWL variability, greater than the linear model (R^2 = 10%) but lower than the nonlinear model which explains 36.5%
variability (Table 7).
Comparison between Linear and Non Linear Models
A quartile division of linear and non linear scores was performed in order to identify 4 classes of predicted EWL for both models, on the basis of the values of age and selected psychological
variables (Fig. 4–5).
Centroids were calculated for the first and the fourth quartiles, which resulted in an EWL interval of 20% in the case of the linear model, whereas the interval increased to over 30% with the non linear model.
The nonlinear model allowed a better separation among quartiles and better overlapping between predicted EWL and actual EWL with respect to the linear model (Tables 8–9, Fig. 4–5).
Table 8. Distribution of obese subjects according to the linear regression model, based upon quartile division of predicted (columns) and actual (rows) weight loss.
Table 9. Distribution of obese subjects according to the non linear regression model, based upon quartile division of predicted (columns) and actual (rows) weight loss.
This held true both for subjects in the upper quartile (mean predicted EWL by linear model = 57%, mean predicted EWL by non linear model = 63.8%, mean actual EWL = 72.9%) and for subjects in the
lower quartile (mean predicted EWL by linear model = 40%, mean predicted EWL by non linear model = 34.7%, mean actual EWL = 22.7%).
ROC Curves
Sensitivity and specificity in predicting LAGB outcome were determined from ROC curves based on predicted EWL scores (Fig. 6). ROC curves were built by dividing patients into two groups using the
first quartile as a threshold.
Figure 6. ROC curves for LAGB outcome classification model.
ROC curves for both linear and non linear models (see Methods). Sensitivity, or true positive rate, is plotted on the y-axis, and false positive rate, or 1 minus specificity, on the x-axis. Solid
green, red and black lines represent non linear, linear model and random classifier, respectively. Blue circles represent the best cut-off values for both models calculated as the closest point of
each curve to the upper left corner.
As for the linear model, the best cutoff point (i.e., the closest point to the upper left corner) of predicted EWL (standardized) was −0.0024 (true positive = 83; false positive = 12; true negative
= 31 and false negative = 46). This cut off point corresponds to 50.1% of EWL. Accuracy and mis-classification rate were 66.3% and 33.7%, respectively (Tables 10 and 11).
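The accuracy figures follow directly from the reported confusion counts, and the "closest point to the upper left corner" criterion is a distance computation; a small check (counts from the text):

```python
import math

def roc_point(tp, fp, tn, fn):
    """Sensitivity, false-positive rate, accuracy, and distance of the
    operating point to the ROC upper-left corner (0, 1)."""
    tpr = tp / (tp + fn)                 # sensitivity
    fpr = fp / (fp + tn)                 # 1 - specificity
    acc = (tp + tn) / (tp + fp + tn + fn)
    dist = math.hypot(fpr, 1.0 - tpr)    # the best cutoff minimizes this
    return tpr, fpr, acc, dist

# linear model at its best cutoff: TP=83, FP=12, TN=31, FN=46 -> accuracy 66.3%
# non linear model:                TP=85, FP=8,  TN=35, FN=44 -> accuracy 69.8%
```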
Table 10. Results of ROC analysis.
Table 11. Performance of linear and non linear model classifiers at best cutoff points.
As for the non linear model, the best u cutoff point was −0.09 (true positive = 85; false positive = 8; true negative = 35 and false negative = 44). This cut off point corresponds to 49.9% of EWL.
Accuracy and mis-classification rate were 69.8% and 30.2%, respectively (Tables 10 and 11).
The cross-validation algorithm was used to assess the predictive capability of our models on new data. Results of the cross-validation algorithm are reported as the average of 100 confusion matrices (Tables 12 and 13). By using the linear model, 63% of patients (40% + 23%) with a predicted EWL within the 1^st quartile achieved an actual EWL <48.9% (i.e., the median value of actual EWL). At the same time, 67% of patients (35% + 32%) with a predicted EWL within the 4^th quartile obtained an actual weight loss of >48.9%. In contrast, by using the non linear model, the proportion of patients correctly predicted below or above the median value of actual EWL rose to 70% (29% + 41%) and 78% (45% + 33%) for the 1^st and the 4^th quartiles, respectively. With both models, a poor prediction was obtained when patients fell within the 2^nd or the 3^rd quartiles of predicted EWL.
Table 12. Prediction value (mean ± SEM) of the confusion matrix obtained by the linear model.
Table 13. Prediction value (mean ± SEM) of the confusion matrix obtained by the non linear model.
This study indicates that elaboration of MMPI-2 scores by ANNs can facilitate weight loss prediction in obese candidates for adjustable gastric banding.
Weight loss after bariatric surgery depends on the ability to produce a permanent reduction of daily food intake, as compared with the amount that caused the development of obesity. However, the
expected reduction in caloric intake obtained by restrictive surgery procedures does not invariably lead to predictable long term results. This can be related to adherence to a permanent dietary
restriction and lifestyle modification. Predictive factors of adherence are not established in the literature. In this regard MMPI-2 psychological scales represent a potential tool for predicting the
success of surgical procedures [38].
To investigate this possibility, in this study the MMPI-2 scores obtained before surgery were correlated with the long term results of weight loss after gastric banding. Patients derived from a preselected sample that, based upon current knowledge, had a high probability of success with this surgical procedure. In particular, patients with high levels of psychopathology were preliminarily excluded from LAGB. Indeed, the results of MMPI-2 did not show a prevalence of psychopathology in this obese sample in excess of the population norms. Yet, our population reported higher scores
in validity scale “Lie” that may reflect an unsophisticated defensiveness in which respondents are denying negative characteristics and claiming positive ones because they judge it to be in their
best interest [26]. Higher scores in the clinical scale Hypochondriasis are probably related to real physical problems, and a psychological component to the illness should be suspected. Similarly, the higher prevalence of high scores on the Psychopathic Deviate clinical scale may indicate a search for immediate gratification of impulses and a limited frustration tolerance. Furthermore, the higher-than-expected frequency of elevated scores on the clinical scale Schizophrenia suggests that patients feel insecure, inferior, incompetent and dissatisfied with their life situation. These results
should be interpreted in light of some intrinsic limitations. First, the psychopathologic profile of our sample belongs to individuals seeking bariatric surgery, and cannot be generalized to all
obese subjects dealing with a medical condition. Second, as already mentioned, patients were selected to meet criteria that, based on our own experience and on that derived from the literature, are
associated with the best probability of long-lasting weight loss after gastric banding. This is why our data are not aligned with previous studies that concern either the general population of obese
subjects or unselected obese candidates for bariatric surgery, which show higher level of psychopathology, in particular on scales regarding anxious-depressive symptoms [39], [40].
On average, weight loss observed in our study group at 2-years follow-up was in line with that reported in the literature, to indicate that our selection criteria complied with the international
guidelines for gastric banding. However, as expected, there was a great variability among subjects. The best subset algorithm highlighted the variables “age”, “Pa” (Paranoia), “Asp” (Antisocial
Practices), “TpA” (Type-A Behavior) as significant predictors of EWL. According to Busetto et al. [41] the weight loss achieved by LAGB in older patients is lower (but it is still associated with a
significant improvement in comorbidities). Similarly, Singhal et al. [42] reported a higher, though not significant, EWL in patients with age less than 50 years. The clinical scale 6 of MMPI-2
(Paranoia) consists of 40 items. Some of those items deal with frankly psychotic behavior (suspiciousness, ideas of reference, delusions of persecution and grandiosity). Other items cover such diverse topics as sensitivity, cynicism, asocial behavior, excessive moral virtue and complaints about other people. It is possible to obtain a T score greater than 65 on this scale without endorsing any of the frankly psychotic items. The content scale “Antisocial Practices” (Asp) covers antisocial attitudes and antisocial behavior. The content scale “Type-A Behavior” (TpA) covers impatience and competitive drive [26].
In our study, age, paranoia and antisocial practices showed an inverse correlation with EWL while Type-A Behavior had a positive correlation with it. Overall, these four independent variables
accounted for 10% of the weight loss variance, which is significant but of very limited value in the clinical practice.
When the MLP model was applied, the weight loss variance predicted by the 4 variables rose to 36%, with accuracy and mis-classification rates of 70% and 30%, respectively. As patients were selected to exclude those with high levels of psychopathology, the input variables generated by the MMPI-2 spanned a relatively limited range of scores. We might speculate that if non-selected patients had been included in the study, a greater variability of MMPI-2 scores would have been obtained and the prediction value of our model might have been even greater. At present, we believe
that this model is the best available tool that objectively exploits psychological scores in the selection of candidates for gastric banding.
Our ANN approach extends the predictive range of the linear regression model, by replacing the identity functions with nonlinear activation functions, and it appears more suitable to describe
complicated systems. ANNs may be trained with data gained in various clinical contexts, to take into account local expertise, racial differences as well as other unknown variables that can affect the
clinical outcome. The analysis may not be necessarily limited to psychological parameters and other potentially useful variables could be tested to improve the predictive value of the model.
Furthermore, our ANN architecture using 2 MLPs is potentially able to include more than one dependent variable (in addition to EWL) and operate a non-linear transformation between them. Future
research using biochemical or anthropometric variables may build on these observations.
In conclusion, the results of this study, validated in random samples of the same population, demonstrate that it is possible to establish with over 70% reliability the final outcome of the intervention in those individuals who will either maximally or minimally benefit from LAGB. In practical terms, this innovative and totally non-invasive approach may constitute a valuable tool for identifying the best candidates for the intervention and for sparing cost, suffering and failure to those who would not comply sufficiently with the therapy.
One of the main drawbacks of the ANN approach is the impossibility of discriminating the real contribution of each variable to the final prediction: an ANN is a good technique for performing predictions if a lot of data are available to train the algorithm, but at the cost of a loss of explanatory power.
A further limitation of ANNs is that, due to local minima in the cost function, optimizations starting from different initial parameters often end up at different minima. Therefore, a number of optimization runs starting from different random initial parameters is needed, and the best run is chosen as the solution, even though there is no guarantee that the global minimum of the cost function has been found.
In addition, the number of hidden neurons in the ANNs is determined by a trial-and-error approach. Adopting techniques such as generalized cross validation and information criteria may help in the
future to provide more guidance on the choice of the most appropriate ANN architecture.
Author Contributions
Conceived and designed the experiments: PP CL MM GBC PV AP AL FS. Performed the experiments: PP CL AL FS. Analyzed the data: PP CL MM AL FS. Contributed reagents/materials/analysis tools: PP CL AL
FS. Wrote the paper: PP CL MM AL FS. Contributed to patient selection and recruitment: PF AC. Performed the bariatric surgery: MA.
Table of Contents
Channelflow is a software system for numerical analysis of the incompressible Navier-Stokes flow in channel geometries, written in C++. The core engine of Channelflow is a spectral CFD^1) algorithm
for integrating the Navier-Stokes equations. This engine drives a number of higher-level algorithms that (for example) compute equilibria, traveling waves, and periodic orbits of Navier-Stokes.
Channelflow provides these algorithms in an easy-to-use, flexible, and intelligible form by using relatively modern software design. Channelflow consists of a software library for rapid, high-level
development of spectral CFD codes and a set of predefined executable programs that perform common tasks involving CFD. Channelflow is customized for Fourier x Chebyshev x Fourier expansions
appropriate for rectangular geometries with periodic boundary conditions in two directions and rigid walls in the remaining direction.
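As an aside on the wall-normal discretization: a Chebyshev collocation derivative can be computed with the classic differentiation matrix (after Trefethen's `cheb.m`; Channelflow's own implementation is transform-based, so this is only an illustration of the idea):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and collocation points x
    on [-1, 1] (after Trefethen, Spectral Methods in MATLAB, cheb.m)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)           # Chebyshev points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))    # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                        # diagonal by negative row sum
    return D, x
```

Differentiation is exact (to roundoff) for polynomials of degree up to N, which makes the matrix easy to verify.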
The main goals of Channelflow are
• to lower the barrier to entry to numerical research in fluid dynamics
• to enable creation of short, readable, easily-modified CFD codes
• to provide easy access to advanced algorithms for computing exact solutions of Navier-Stokes
If you use channelflow in your research, please cite it in your publications.
2011-07-18 The channelflow dokuwiki website was recently hacked and defaced with advertising. I've rebuilt it with tighter security measures, specifically registration by request only and tighter
editing permissions. I also removed some questionable registrations. If you find yourself unable to log in, please contact me: johnfgibson at gmail.com.
2011-07-06 The current SVN channelflow distribution now builds with CMake. On linux build with
cmake -DCMAKE_INSTALL_PREFIX=$(pwd) .
make install
make test
2011-07-06 Upgraded channelflow.org website to dokuwiki-2011-05-25a. Please let me know if you encounter any troubles with the channelflow website.
Command-line utilities
Channelflow includes about thirty predefined command-line utility programs that perform the most common calculations in our research. Most utilities read in one or more velocity fields from disk, operate on them according to command-line options, print some output, and save the resulting fields to disk.
Flexible object-oriented programming
Channelflow is written as a C++ class library. The classes act as building blocks for expressing particular channel-flow simulations and associated data analysis, and underneath these, the
mathematical structures needed to perform the calculations. Channelflow provides classes for representing Chebyshev expansions, Fourier x Chebyshev x Fourier expansions, DNS^2), and a number of
differential equations. Each class has automatic memory management and a set of high-level elemental operations, so that auxiliary data fields and computations can be added to a program with a few
lines of code.
In channelflow, even the DNS algorithm is an object. This greatly increases the flexibility of DNS computations. For example, a DNS can be reparameterized and restarted multiple times within a single
program, multiple independent DNS computations can run side-by-side within the same program, and DNS computations can run as small components within a larger, more complex computations. As a result,
comparative calculations that formerly required coordination of several programs through shell scripts and saved data files can be done within a single program.
The following programs illustrate how channelflow's libraries can function as a high-level, Matlab-like language for CFD research. Each defines an initial condition and a base flow, configures a DNS algorithm, runs a time-stepping loop, and produces output, in about a hundred lines of code. Note: these example codes are written for maximum simplicity and are not intended for production use.
Organized, readable library code
Channelflow uses object-oriented programming and data abstraction to maximize the organization and readability of its library code. Channelflow defines about a dozen C++ classes that act as abstract
data types for the major components of spectral channel-flow simulation. Each class forms a level of abstraction in which a set of mathematical operations are performed in terms of lower-level
abstractions, from time-stepping equations at the top to linear algebra at the bottom. The channelflow library code thus naturally reflects the mathematical algorithm, both in overall structure and
line-by-line. One can look at any part of the code and quickly understand what role it plays in the overall algorithm. One can learn the algorithm in stages, either top-down or bottom-up, by focusing
on one level of abstraction at a time.
Other features
Channelflow is also
• Configurable: For example, channelflow's DNS algorithms implement a variety of time-stepping schemes, external constraints, and methods of calculating nonlinear terms.
• Extendable: The library code is structured to take small-scale extensions such as additional time-stepping schemes. Channelflow's object-oriented, modular structure allows channelflow simulations
to be embedded as small components within larger, more complex computations.
• Moderately general: Channelflow provides elemental algebraic and differential operators for its mathematical classes, so that most quantities of interest can be calculated with a few lines of
code. However, Channelflow is not general regarding geometry: it works only with rectangular geometries with two periodic and one inhomogeneous direction.
• Verifiable: The source distribution contains a test suite that verifies the correct behavior of major classes.
• Documented: the Channelflow User's Manual contains annotated program examples, discussion of design, an overview of the main classes from a user's perspective, and a review of the mathematical algorithm. Other documentation is under development.
• Supported: Channelflow is currently supported by its author via this website.
• Fast: Channelflow is as fast as comparable Fortran codes.
• Free: Channelflow is free software. It is licensed under the GNU GPL version 2 and available for download.
Channelflow's CFD core algorithm uses spectral discretization in spatial directions (Fourier x Chebyshev x Fourier), finite-differencing in time, and primitive variables (3d velocity and pressure) to
integrate the incompressible Navier-Stokes equations. The algorithm is based on Kleiser and Schumann's primitive-variables formulation, which uses a Chebyshev tau method for enforcement of the no-slip
conditions and influence-matrix and tau-correction algorithms to determine the pressure. Channelflow generalizes this algorithm in several ways, as described in the Channelflow User's Manual.
Channelflow's generalizations include
• Seven semi-implicit time-stepping algorithms: semi-implicit backwards differentiation of orders 1-4, two 2nd-order Runge-Kutta schemes, and the classic 2nd-order Crank-Nicolson/Adams-Bashforth scheme.
• Computation of the nonlinear term in seven forms: skew-symmetric, rotational, convection, divergence, alternating convection/divergence, or linearized about the base flow.
• Enforcement of pressure-gradient or mean-velocity constraints, either constant or time-varying.
• Integration of total or fluctuating velocity fields.
• Arbitrary base-flow profiles U(y).
• Dealiased or aliased collocation calculations.
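The flavor of these semi-implicit schemes can be seen on a scalar model problem u′ = λu + N(u): the linear (viscous-like) term is treated implicitly with Crank–Nicolson and the nonlinear term explicitly with 2nd-order Adams–Bashforth. This is a toy sketch, not Channelflow's field-level implementation:

```python
def cnab2(u0, lam, N, dt, nsteps):
    """Crank-Nicolson / Adams-Bashforth 2 for u' = lam*u + N(u):
    (1 - dt*lam/2) u_{n+1} = (1 + dt*lam/2) u_n + dt*(3/2 N_n - 1/2 N_{n-1}).
    The first step bootstraps the nonlinear term with forward Euler."""
    u = u0
    Nprev = N(u)
    # bootstrap step: CN for the linear part, explicit Euler for N
    u = ((1 + dt * lam / 2) * u + dt * Nprev) / (1 - dt * lam / 2)
    for _ in range(nsteps - 1):
        Ncur = N(u)
        u = ((1 + dt * lam / 2) * u
             + dt * (1.5 * Ncur - 0.5 * Nprev)) / (1 - dt * lam / 2)
        Nprev = Ncur
    return u
```

For the Navier-Stokes equations, `lam` corresponds to the (stiff) viscous operator, which is why treating it implicitly removes the diffusive time-step restriction.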
The algorithms for computing invariant solutions for plane Couette flow were developed by Divakar Viswanath. These algorithms combine a “hookstep” trust-region modification of the classic Newton
search with Krylov-subspace methods for solving the Newton-hookstep equations, and use Arnoldi iteration for linear stability analysis.
Channelflow uses the elegant and powerful FFTW library for its Fourier transforms. See Channelflow documentation and References for more on the numerical algorithms.
Development status
Channelflow began in 1999 as a part of my Ph.D. research in Theoretical and Applied Mechanics at Cornell University. It has been under active development since; it now serves as the primary platform for numerical research in plane Couette dynamics at the Center for Nonlinear Science in the Georgia Tech School of Physics. I know of about 10-20 other active users of channelflow. Its DNS algorithms
are verified as correct by the test suite: correct integration of Orr-Sommerfeld eigenfunctions, Poiseuille flow, and sinusoidal disturbances to Poiseuille flow. Channelflow has also been verified
against independent codes in the computation of equilibria, eigenvalues, and periodic orbits of plane Couette flow. Channelflow's test suite is not exhaustive, so some inessential utility functions
might still contain errors.
Discussion forums
The channelflow website is hosted on a wiki in order to encourage discussion and collaborative maintenance of documentation. The main channelflow discussion forums are
A few other related discussion forums are also hosted here:
Under development
These sections of the wiki are under development and will remain here until they're ready to replace the handwritten-html versions from the old website.
NSF notices
This material is based upon work supported by the National Science Foundation under Grant No. 0807574.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Here's the question you clicked on:
How would I find the answer to 12*3.14?
• one year ago
12 x 3.14 I can never remember how to multiply decimals
\[\Huge{12 \times \frac{314}{100}}\]
Basically, you just multiply 12 x 314 without the decimals as normal, and then you move the decimal point however many places the original two numbers had in total. Since 3.14 had two decimal places, you move the decimal point in the result of 12 x 314 two places to the left as well. It makes sense because decimals are really fractions over powers of 10: 3.14 is the same as 314/100, so 12 x 3.14 = 12 x 314/100, and you multiply the numerators and keep the denominator.
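The shift-the-decimal rule can be checked directly in a few lines (plain Python, just to verify the arithmetic):

```python
# multiply as whole numbers, then shift the decimal point:
whole = 12 * 314       # 3768
result = whole / 100   # 3.14 had two decimal places, so divide by 10^2
print(result)          # 37.68
```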
can u solve this ?
now there isn't any decimal problem :)
Alternate method: simply multiply 314 by 12 and then put the decimal point before the last 2 digits.
12*3.14= 37.68 Is this correct?
right @mayankdevnani ?
Awesome thank you!
yw :)
You're welcome! :)
Partial Fraction
October 13th 2009, 04:48 AM #1
Junior Member
Apr 2008
Partial Fraction
so I have
x^2 / (x-1)(x-2)
The answer is 1 - 1/(x-1) + 4/(x-2).
Where did the 1 come from. I did the whole solving for A and B thing and got the same answer without the 1.
$\frac {x^2}{(x-1)(x-2)}=\frac {x^2}{x^2-3x+2}= 1+\frac{3x-2}{x^2-3x+2}=1+\frac{3x-2}{(x-1)(x-2)}$
now resolve $\frac{3x-2}{(x-1)(x-2)}$ into partial fraction
$\frac{3x-2}{(x-1)(x-2)}=\frac{- 1}{(x-1)} + \frac{4}{(x-2)}$
$\therefore\quad \frac {x^2}{(x-1)(x-2)}=1-\frac{1}{(x-1)} + \frac{4}{(x-2)}$
Last edited by ramiee2010; October 13th 2009 at 05:17 AM.
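The decomposition can be spot-checked numerically by evaluating both sides at a few points away from the poles x = 1 and x = 2:

```python
def lhs(x):
    return x**2 / ((x - 1) * (x - 2))

def rhs(x):
    return 1 - 1 / (x - 1) + 4 / (x - 2)

for x in (0.5, 3.0, 10.0):
    assert abs(lhs(x) - rhs(x)) < 1e-12  # both sides agree
```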
October 13th 2009, 05:03 AM #2
May 2009
New Delhi
MathGroup Archive: March 2009 [01055]
Option instead of a function argument: is it possible?
• To: mathgroup at smc.vnet.net
• Subject: [mg98037] Option instead of a function argument: is it possible?
• From: Alexei Boulbitch <Alexei.Boulbitch at iee.lu>
• Date: Sat, 28 Mar 2009 05:44:58 -0500 (EST)
Dear Community,
If we need to write functions depending upon several parameters the
latter are usually passed to the function as its arguments. I wonder
however, if you know a way to pass some parameters to a function in the
same way as options are passed to operators in Mathematica. That is, if
the default value of the parameter in question is OK, you do not even
mention such a parameter among the function arguments. If you need to
specify such a parameter, you include an argument having the form OptionName -> optionValue.
Let me explain precisely, within a simple example, what I would like to do:
1. Consider a function that solves a system of 2 ordinary differential
equations and draws a trajectory on the (x, y) plane:
trajectory1[eq1_, eq2_, point_, tmax_] :=
Module[{s, eq3, eq4},
eq3 = x[0] == point[[1]];
eq4 = y[0] == point[[2]];
s = NDSolve[{eq1, eq2, eq3, eq4}, {x, y}, {t, tmax}];
ParametricPlot[Evaluate[{x[t], y[t]} /. s], {t, 0, tmax},
PlotRange -> All]]
Equations can be fixed say, like these:
eq1 = x'[t] == -y[t] - x[t]^2;
eq2 = y'[t] == 2 x[t] - y[t]^3;
and initial conditions are passed by the parameter point. The function
can be called:
trajectory1[eq1, eq2, {1, 1}, 30]
2. Assume now that I need to specify the accuracy goal and MaxSteps
parameters. Then the function will take a slightly different form:
trajectory2[eq1_, eq2_, point_, tmax_, accuracyGoal_, maxSteps_] :=
Module[{s, eq3, eq4},
eq3 = x[0] == point[[1]];
eq4 = y[0] == point[[2]];
s = NDSolve[{eq1, eq2, eq3, eq4}, {x, y}, {t, tmax},
AccuracyGoal -> accuracyGoal, MaxSteps -> maxSteps];
ParametricPlot[Evaluate[{x[t], y[t]} /. s], {t, 0, tmax},
PlotRange -> All]]
and also called:
trajectory2[eq1, eq2, {1, 1}, 30, 10, 1000]
However, I would like to achieve a function
trajectory3[eq1_, eq2_, point_, tmax_]
that can be addressed both as
trajectory3[eq1, eq2, {1, 1}, 30]
(if I agree with the default values of the AccuracyGoal and MaxSteps)
and as
trajectory3[eq1, eq2, {1, 1}, 30, AccuracyGoal->10, MaxSteps->10000],
if a change in these options is necessary. Is it possible?
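What Alexei asks for — trailing, optional, named parameters with defaults — is exactly the keyword-argument pattern in other languages, so here is a minimal Python sketch of the same calling convention (the solver body is a made-up stub, not real NDSolve machinery; in Mathematica itself this is what Options/OptionsPattern provide):

```python
def trajectory(eq1, eq2, point, tmax, accuracy_goal=8, max_steps=1000):
    """Stand-in for trajectory3: the last two parameters act like options."""
    # hypothetical solver call elided; just report the settings that were used
    return {"point": point, "tmax": tmax,
            "accuracy_goal": accuracy_goal, "max_steps": max_steps}

r1 = trajectory("eq1", "eq2", (1, 1), 30)            # defaults apply
r2 = trajectory("eq1", "eq2", (1, 1), 30,
                accuracy_goal=10, max_steps=10000)   # options overridden by name
```

Callers who are happy with the defaults never mention the extra parameters at all, which is the behaviour requested for trajectory3.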
Best regards, Alexei
Alexei Boulbitch, Dr., habil.
Senior Scientist
IEE S.A.
ZAE Weiergewan
11, rue Edmond Reuter
L-5326 Contern
Phone: +352 2454 2566
Fax: +352 2454 3566
Website: www.iee.lu
North Elizabeth, NJ Statistics Tutor
Find a North Elizabeth, NJ Statistics Tutor
...It is an individualized, naturally differentiated program. Through my education and teaching experience, I've gained deep insight into how to best motivate students to create strong habits and
schedules for themselves. I'm trained in a character education/study habit/life skills program entitled Quantum Learning.
22 Subjects: including statistics, English, reading, algebra 1
...Teaching is my passion. I have worked with kids of all ages for the past six years, from one-on-one home tutoring to group tutoring in classrooms and after-school programs. Although I have a bachelor's in biology, I am able to tutor different subjects and help with homework for every grade.
26 Subjects: including statistics, chemistry, reading, geometry
...The ACT English section is about working quickly and efficiently. I like to practice using timed conditions and review grammar rules as needed. The ACT Math section requires speed and
endurance because of its length.
17 Subjects: including statistics, calculus, physics, geometry
...In short, I love mathematics. My motivation is to make you love it as well. My job is to try to make math as lucid and coherent as I can.
12 Subjects: including statistics, calculus, geometry, algebra 2
I have been teaching statistics and tutoring for almost 10 years and in that time have never found a student I could not teach statistics to, no matter how much they hate/fear math, provided they
are willing to put in some work and practice. As a trained psychologist and neuroscientist, I can also ...
5 Subjects: including statistics, algebra 1, psychology, Microsoft Excel
Related North Elizabeth, NJ Tutors
North Elizabeth, NJ Accounting Tutors
North Elizabeth, NJ ACT Tutors
North Elizabeth, NJ Algebra Tutors
North Elizabeth, NJ Algebra 2 Tutors
North Elizabeth, NJ Calculus Tutors
North Elizabeth, NJ Geometry Tutors
North Elizabeth, NJ Math Tutors
North Elizabeth, NJ Prealgebra Tutors
North Elizabeth, NJ Precalculus Tutors
North Elizabeth, NJ SAT Tutors
North Elizabeth, NJ SAT Math Tutors
North Elizabeth, NJ Science Tutors
North Elizabeth, NJ Statistics Tutors
North Elizabeth, NJ Trigonometry Tutors
Nearby Cities With statistics Tutor
Bayway, NJ statistics Tutors
Bergen Point, NJ statistics Tutors
Chestnut, NJ statistics Tutors
Elizabeth, NJ statistics Tutors
Elizabethport, NJ statistics Tutors
Elmora, NJ statistics Tutors
Greenville, NJ statistics Tutors
Midtown, NJ statistics Tutors
Pamrapo, NJ statistics Tutors
Parkandbush, NJ statistics Tutors
Peterstown, NJ statistics Tutors
Roseville, NJ statistics Tutors
Townley, NJ statistics Tutors
Union Square, NJ statistics Tutors
Weequahic, NJ statistics Tutors
Entailment of Type Equalities
Our aim is to derive the entailment judgement
g1, .., gn |- w1, .., wm
i.e., whether we can derive the wanted equalities w1 to wm from the given equalities g1, .., gn under a given set of toplevel equality schemas (i.e., equalities involving universally quantified
variables). We permit unification variables in the wanted equalities, and a derivation may include the instantiation of these variables; i.e., may compute a unifier. However, that unifier must be most general.
The derivation algorithm is complicated by the pragmatic requirement that, even if there is no derivation for the judgement, we would like to compute a unifier. This unifier should be as specific as
possible under some not yet specified (strongest) weakening of the judgement so that it is derivable. (Whatever that means...)
The following is based on ideas for the new, post-ICFP'08 solving algorithm described in CVS papers/type-synonym/new-single.tex. A revised version of new-single.tex that integrates the core ideas
from this wiki page is in papers/type-synonym/normalised_equations_algorithm.tex. Most of the code is in the module TcTyFuns.
Wanted equality
An equality constraint that we need to derive during type checking. Failure to derive it leads to rejection of the checked program.
Local equality, given equality
An equality constraint that -in a certain scope- may be used to derive wanted equalities.
Flexible type variable, unification variable, HM variable
Type variables that may be globally instantiated by unification. We use Greek letters alpha, beta,... as names for these variables.
Rigid type variable, skolem type variable
Type variable that cannot be globally instantiated, but it may be locally refined by a local equality constraint. We use Roman letters a, b,... as names for these variables.
In positions where we can have both flexible and rigid variables, we use x, y, z.
Overall algorithm
The overall structure is as in new-single.tex, namely
1. normalise all constraints (both locals and wanteds),
2. solve the wanteds, and
3. finalise.
However, the three phases differ in important ways. In particular, normalisation includes decompositions & the occurs check, and we don't instantiate any flexible type variables before we finalise
(i.e., solving is purely local).
Normal equalities
Central to the algorithm are normal equalities, which can be regarded as a set of rewrite rules. Normal equalities are carefully oriented and contain synonym families only as the head symbols of
left-hand sides. They assume one of the following two major forms:
1. Family equality: co :: F t1..tn ~ t or
2. Variable equality: co :: x ~ t, where we again distinguish two forms:
1. Variable-term equality: co :: x ~ t, where t is not a variable, or
2. Variable-variable equality: co :: x ~ y, where x > y.
• the types t, t1, ..., tn may not contain any occurrences of synonym families,
• the left-hand side of an equality may not occur in the right-hand side, and
• the relation x > y is a total order on type variables, where alpha > a whenever alpha is a flexible and a a rigid type variable (otherwise, the total order may be arbitrary).
The second bullet of the where clause is trivially true for equalities of Form (1); it also implies that the left- and right-hand sides are different.
Furthermore, we call a variable equality whose left-hand side is a flexible type variable (aka unification variable) a flexible variable equality, and correspondingly, a variable equality whose
left-hand side is a rigid type variable (aka skolem variable) a rigid variable equality.
The following is interesting to note:
• Normal equalities are similar to equalities meeting the Orientation Invariant and Flattening Invariant of new-single, but they are not the same.
• Normal equalities are never self-recursive. They can be mutually recursive. A mutually recursive group will exclusively contain variable equalities.
Coercions co are either wanteds (represented by a flexible type variable) or givens aka locals (represented by a type term of kind CO). In GHC, they are represented by TcRnTypes.EqInstCo, which is
defined as
type EqInstCo = Either
TcTyVar -- case for wanteds (variable to be filled with a witness)
Coercion -- case for locals
Moreover, TcTyFuns.RewriteInst represents normal equalities, emphasising their role as rewrite rules.
SLPJ: I propose that we use a proper data type, not Either for this.
The following function norm turns an arbitrary equality into a set of normal equalities. As in new-single, the evidence equations are differently interpreted depending on whether we handle a wanted
or local equality.
data EqInst -- arbitrary equalities
data FlattenedEqInst -- synonym families may only occur outermost on the lhs
data RewriteInst -- normal equality
norm :: EqInst -> [RewriteInst]
norm [[co :: F s1..sn ~ t]] = [[co :: F s1'..sn' ~ t']] : eqs1++..++eqsn++eqt
(s1', eqs1) = flatten s1
(sn', eqsn) = flatten sn
(t', eqt) = flatten t
norm [[co :: t ~ F s1..sn]] = norm [[co' :: F s1..sn ~ t]] with co = sym co'
norm [[co :: s ~ t]] = check [[co :: s' ~ t']] ++ eqs++eqt
(s', eqs) = flatten s
(t', eqt) = flatten t
check :: FlattenedEqInst -> [FlattenedEqInst]
-- Does OccursCheck + Decomp + Triv + Swap (of new-single)
check [[co :: t ~ t]] = [] with co = id
check [[co :: x ~ y]]
| x > y = [[co :: x ~ y]]
| otherwise = [[co' :: y ~ x]] with co = sym co'
check [[co :: x ~ t]]
| x `occursIn` t = fail
| otherwise = [[co :: x ~ t]]
check [[co :: t ~ x]]
| x `occursIn` t = fail
| otherwise = [[co' :: x ~ t]] with co = sym co'
check [[co :: s1 s2 ~ t1 t2]]
= check [[col :: s1 ~ t1]] ++ check [[cor :: s2 ~ t2]] with co = col cor
check [[co :: T ~ S]] = fail
flatten :: Type -> (Type, [FlattenedEqInst])
-- Result type has no synonym families whatsoever
flatten [[F t1..tn]] = (alpha, [[id :: F t1'..tn' ~ alpha]] : eqt1++..++eqtn)
(t1', eqt1) = flatten t1
(tn', eqtn) = flatten tn
FRESH alpha, such that alpha > x for all x already used
RECORD alpha := F t1'..tn'
flatten [[t1 t2]] = (t1' t2', eqs++eqt)
(t1', eqs) = flatten t1
(t2', eqt) = flatten t2
flatten t = (t, [])
The substitutions RECORDed during flatten need to be (unconditionally) applied during finalisation (i.e., the 3rd phase).
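To make the flatten bookkeeping concrete, here is a toy Python model of it: types are nested tuples, any head in a fixed FAMILIES set counts as a type-family application, and fresh flexibles are numbered. This only illustrates the scheme above; it is not GHC's representation:

```python
FAMILIES = {"F", "G"}      # assumed family names for the toy
_fresh = [0]
recorded = {}              # alpha := F t1'..tn', applied at finalisation

def fresh():
    _fresh[0] += 1
    return ("alpha", _fresh[0])

def flatten(t):
    """Return (t', eqs): t' is family-free, eqs are the new family equalities."""
    if not isinstance(t, tuple):
        return t, []                  # a plain variable or atomic type
    head, args = t[0], t[1:]
    flat_args, eqs = [], []
    for a in args:
        a2, es = flatten(a)           # flatten subterms first
        flat_args.append(a2)
        eqs.extend(es)
    t2 = (head,) + tuple(flat_args)
    if head in FAMILIES:
        alpha = fresh()               # FRESH alpha; RECORD alpha := F t1'..tn'
        recorded[alpha] = t2
        return alpha, [(t2, alpha)] + eqs
    return t2, eqs

# F (G Int) Int  flattens to  alpha2,  with  G Int ~ alpha1  and  F alpha1 Int ~ alpha2
t, eqs = flatten(("F", ("G", "Int"), "Int"))
```

Note how a nested family application yields one equality per family occurrence, and how the result type contains no family heads at all.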
• Perform Rule Triv as part of normalisation.
• Whenever an equality of Form (2) or (3) would be recursive, the program can be rejected on the basis of a failed occurs check. (Immediate rejection is always justified, as right-hand sides do not
contain synonym families; hence, any recursive occurrences of a left-hand side imply that the equality is unsatisfiable.)
• We flatten locals and wanteds in the same manner, using fresh flexible type variables. (We have flexibles in locals anyway and don't use (Unify) during normalisation - this is different to new-single.)
Propagation (aka Solving)
A significant difference to new-single is that solving is a purely local operation. We never instantiate any flexible variables.
(Top)
co :: F t1..tn ~ t
==>
co' :: [s1/x1, .., sm/xm]s ~ t with co = g s1..sm |> co'
where g :: forall x1..xm. F u1..um ~ s and [s1/x1, .., sm/xm]u1 == t1.

(SubstFam)
co1 :: F t1..tn ~ t & co2 :: F t1..tn ~ s
==>
co1 :: F t1..tn ~ t & co2' :: t ~ s with co2 = co1 |> co2'
where co1 is local, or both co1 and co2 are wanted and at least one of the equalities contains a flexible variable.

(SubstVar)
co1 :: x ~ t & co2 :: x ~ s
==>
co1 :: x ~ t & co2' :: t ~ s with co2 = co1 |> co2'
where co1 is local, or both co1 and co2 are wanted and at least one of the equalities contains a flexible variable.
• Rules applying to family equalities:
□ SubstFam (formerly, IdenticalLHS) only applies to family equalities (both local and wanteds)
□ Top only applies to family equalities (both locals and wanteds)
We should apply SubstFam first as it is cheaper and potentially reduces the number of applications of Top. On the other hand, for each family equality, we may want to try to reduce it with Top, and
if that fails, use it with SubstFam. (That strategy should lend itself well to an implementation.) But be careful, we need to apply Top exhaustively, to avoid non-termination. More precisely, if
we interleave Top and SubstFam, we can easily diverge.
• Rules applying to variable equalities:
□ SubstVar (formerly, Local) applies to variable equalities (both locals and wanteds)
• With SubstFam and SubstVar, we always substitute locals into wanteds and never the other way around. We perform substitutions exhaustively. For SubstVar, this is crucial to avoid non-termination.
(It seems we can drop this requirement if we only ever substitute into left-hand sides.)
• We should probably use SubstVar on all variable equalities before using SubstFam, as the former may refine the left-hand sides of family equalities, and hence, lead to Top being applicable where
it wasn't before.
• We use SubstFam and SubstVar to substitute wanted equalities only if their left-hand side contains a flexible type variable (which for variable equalities means that we apply SubstVar only to flexible variable equalities). TODO This is not sufficient while we are inferring a type signature, as SPJ's example shows: |- a ~ [x], a ~ [Int]. Here we want to infer x := Int before yielding a ~ [Int] as an irred. So, we need to use SubstVar and SubstFam also if the rhs of a wanted contains a flexible variable. This unfortunately makes termination more complicated. However, SPJ also observed that we really only need to substitute variables in left-hand sides (not in right-hand sides) as far as enabling other rewrites goes. However, there are tricky problems left, as the following two examples show: |- a~c, a~b, c~a and |- b ~ c, b ~ a, a ~ b, c ~ a. It seems SubstVar with a wanted is also sometimes needed if the wanted contains no flexible type variable (as this can trigger applications of Top, which may lead to more specific unifiers).
• Substitute only into left-hand sides?
• In principle, a variable equality could be discarded after an exhaustive application of SubstVar. However, while the set of class constraints is kept separate, we may always have some occurrences
of the supposedly eliminated variable in a class constraint, and hence, need to keep all local equalities around. That reasoning definitely applies to local equalities, but I think it also
applies to wanteds (and I think that GHC so far never applies wanteds to class dictionaries, which might explain some of the failing tests.) Flexible variable equalities cannot be discarded in
any case as we need them for finalisation.
• SubstVar is the most expensive rule as it needs to traverse all type terms.
• Only SubstVar when replacing a variable in a family equality can lead to recursion with (Top).
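The SubstVar discipline described in these bullets — exhaustively substitute local variable equalities into the wanteds — can be caricatured in a few lines of Python. Types are nested tuples as before; the toy assumes its inputs are already normalised (no left-hand side occurs in its own right-hand side), which is what makes the fixpoint loop terminate:

```python
def subst_var(t, x, s):
    """Rewrite every occurrence of the type variable x in t to s."""
    if t == x:
        return s
    if isinstance(t, tuple):
        return tuple(subst_var(a, x, s) for a in t)
    return t

def apply_locals(locals_, wanteds):
    """Exhaustively apply local variable equalities x ~ s to the wanteds."""
    changed = True
    while changed:
        changed = False
        for x, s in locals_:
            new = [(subst_var(l, x, s), subst_var(r, x, s)) for l, r in wanteds]
            if new != wanteds:
                wanteds, changed = new, True
    return wanteds

# local:  v ~ [a]   (lists encoded as ('List', elem));  wanted:  F v ~ x
result = apply_locals([("v", ("List", "a"))], [(("F", "v"), "x")])
# result is  F [a] ~ x,  i.e.  [(('F', ('List', 'a')), 'x')]
```

After the substitution, the family equality's left-hand side has changed, which is exactly the situation in which a previously inapplicable Top rule may fire.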
If only flexible type equalities remain as wanted equalities, the locals entail the wanteds. We can now instantiate type variables in flexible type equalities where possible to propagate constraints
into the environment. In GHC, we may wrap any remaining equalities (of any form) into an implication constraint to be propagated outwards (where it may be solved under an extended set of local
• (Unify) is an asymmetric rule, and hence, only fires for equalities of the form x ~ c, where c is free of synonym families. Moreover, it only applies to wanted equalities. (Rationale: Local
equality constraints don't justify global instantiation of flexible type variables - just as in new-single.)
• TODO Now that we delay instantiation until after solving, do we still need to prioritise flexible variables equalities over rigid ones? (Probably not.)
Substituting wanted family equalities with SubstFam is crucial if the right-hand side contains a flexible type variable
Top: F Int ~ [Int]
|- F delta ~ [delta], F delta ~ [Int]
|- F delta ~ [delta], norm [[ [delta] ~ [Int] ]]
|- F delta ~ [delta], delta ~ Int
|- norm [[ F Int ~ [Int] ]], delta ~ Int
|- F Int ~ [Int], delta ~ Int
|- norm [[ [Int] ~ [Int] ]], delta ~ Int
|- delta ~ Int
Interaction between local and wanted family equalities
Example 4 of Page 9 of the ICFP'09 paper.
F [Int] ~ F (G Int) |- G Int ~ [Int], H (F [Int]) ~ Bool
F [Int] ~ a, F b ~ a, G Int ~ b
G Int ~ [Int], H x ~ Bool, F [Int] ~ x
(SubstFam w/ F [Int])
F [Int] ~ a, F b ~ a, G Int ~ b
G Int ~ [Int], H x ~ Bool, x ~ a
(SubstFam w/ G Int)
F [Int] ~ a, F b ~ a, G Int ~ b
b ~ [Int], H x ~ Bool, x ~ a
(SubstVar w/ x)
F [Int] ~ a, F b ~ a, G Int ~ b
b ~ [Int], H a ~ Bool, x ~ a
TODO If we use flexible variables for the flattening of the wanteds, too, the equality corresponding to x ~ a above will be oriented the other way around. That can be a problem because of the
asymmetry of the SubstVar and SubstFam rules (i.e., wanted equalities are not substituted into locals).
The Note [skolemOccurs loop] in the old code explains that equalities of the form x ~ t (where x is a flexible type variable) may not be used as rewrite rules, but only be solved by applying Rule
Unify. As Unify carefully avoids cycles, this prevents the use of equalities introduced by the Rule SkolemOccurs as rewrite rules. For this to work, SkolemOccurs also had to apply to equalities of
the form a ~ t[[a]]. This was a somewhat intricate set up that we seek to simplify here. Whether equalities of the form x ~ t are used as rewrite rules or solved by Unify doesn't matter anymore.
Instead, we disallow recursive equalities after normalisation completely (both locals and wanteds). This is possible as right-hand sides are free of synonym families.
To look at this in more detail, let's consider the following notorious example:
E_t: forall x. F [x] ~ [F x]
[F v] ~ v ||- [F v] ~ v
New-single: The following derivation shows how the algorithm in new-single fails to terminate for this example.
[F v] ~ v ||- [F v] ~ v
==> normalise
v ~ [a], F v ~ a ||- v ~ [x], F v ~ x
a := F v
==> (Local) with v
F [a] ~ a ||- [a] ~ [x], F [a] ~ x
==> normalise
F [a] ~ a ||- x ~ a, F[a] ~ x
==> 2x (Top) & Unify
[F a] ~ a ||- [F a] ~ a
..and so on..
New-single using flexible tyvars to flatten locals, but w/o Rule (Local) for flexible type variables: With (SkolemOccurs) it is crucial to avoid using Rule (Local) with flexible type variables. We
can achieve a similar effect with new-single if we (a) use flexible type variables to flatten local equalities and (b) at the same time do not use Rule (Local) for variable equalities with flexible
type variables. NB: Point (b) was necessary for the ICFP'08 algorithm, too.
[F v] ~ v ||- [F v] ~ v
==> normalise
v ~ [x2], F v ~ x2 ||- v ~ [x1], F v ~ x1
** x2 := F v
==> (Local) with v
F [x2] ~ x2 ||- [x2] ~ [x1], F [x2] ~ x1
** x2 := F v
==> normalise
F [x2] ~ x2 ||- x2 ~ x1, F [x2] ~ x1
** x2 := F v
==> 2x (Top) & Unify
[F x1] ~ x1 ||- [F x1] ~ x1
** x1 := F v
==> normalise
x1 ~ [y2], F x1 ~ y2 ||- x1 ~ [y1], F x1 ~ y1
** x1 := F v, y2 := F x1
..we stop here if (Local) doesn't apply to flexible tyvars
A serious disadvantage of this approach is that we do want to use Rule (Local) with flexible type variables as soon as we have rank-n signatures. In fact, the lack of doing so is responsible for a
few failing tests in the testsuite in the GHC implementation of (SkolemOccurs).
De-prioritise Rule (Local): Instead of outright forbidding the use of Rule (Local) with flexible type variables, we can simply require that Local is only used if no other rule is applicable. (That
has the same effect on satisfiable queries, and in particular, the present example.)
[F v] ~ v ||- [F v] ~ v
==> normalise
v ~ [a], F v ~ a ||- v ~ [x], F v ~ x
a := F v
==> (IdenticalLHS) with v & F v
v ~ [a], F v ~ a ||- [a] ~ [x], x ~ a
==> normalise
v ~ [a], F v ~ a ||- x ~ a, x ~ a
==> (Unify)
v ~ [a], F v ~ a ||- a ~ a
==> normalise
v ~ [a], F v ~ a ||-
In fact, it is sufficient to de-prioritise Rule (Local) for variable equalities (if it is used for other equalities at all):
[F v] ~ v ||- [F v] ~ v
==> normalise
v ~ [a], F v ~ a ||- v ~ [x], F v ~ x
a := F v
==> (Local) with F v
v ~ [a], F v ~ a ||- v ~ [x], x ~ a
==> (Unify)
v ~ [a], F v ~ a ||- v ~ [a]
==> (Local) with v
v ~ [a], F [a] ~ a ||- [a] ~ [a]
==> normalise
v ~ [a], F [a] ~ a ||-
One problems remains: The algorithm still fails to terminate for unsatisfiable queries.
[F v] ~ v ||- [G v] ~ v
==> normalise
v ~ [a], F v ~ a ||- v ~ [x], G v ~ x
a := F v
==> (Local) with v
F [a] ~ a ||- [a] ~ [x], G [a] ~ x
==> normalise
F [a] ~ a ||- x ~ a, G [a] ~ x
==> (Unify)
F [a] ~ a ||- G [a] ~ a
==> (Top)
[F a] ~ a ||- G [a] ~ a
==> normalise
a ~ [b], F a ~ b ||- G [a] ~ a
b := F a
..and so on..
My guess is that the algorithm terminates for all satisfiable queries. If that is correct, the entailment problem that the algorithm solves would be semi-decidable.
The functor of continuous functions from compact CW-spaces to the reals
up vote 0 down vote favorite
The contravariant functor $C(-)$ given by $$ \hom_{Top}(-,\mathbb{R}):cCW\to Rng $$ where $cCW$ is the category of compact CW complexes is injective on objects. What is known about surjectivity,
faithfulness and fullness of this functor?
gn.general-topology oa.operator-algebras gelfand-duality
Surjective on objects: Definitely not, how do you get $\mathbb{Z}$ or worse a noncommutative ring? Full: How do you induce the zero map between two rings of continuous functions with a continuous function between the spaces? Faithful: This is the only interesting one. I am guessing that it is faithful. Examining the proof that it is injective on objects (looking at the MaxSpec construction) should point you in the right direction, I think. Maybe your question is more interesting if you restrict your attention to R-modules? – Steven Gubkin Oct 28 '10 at 14:27
Steven: the zero map shouldn't really count: it's not a ring homomorphism here. (I assume rings have identity and the identity is preserved, a standard convention for commutative rings.) – KConrad
Oct 28 '10 at 14:38
Rng is a strange choice of target category. You want at least commutative R-algebras and you actually get a commutative Banach algebra or, even better, a commutative C*-algebra over R with trivial
involution. – Qiaochu Yuan Oct 28 '10 at 15:57
The Gelfand-Naimark-Theorem gives an answer. But it does not tell you how to see whether a space is a CW by looking at its function algebra. – Johannes Ebert Oct 28 '10 at 17:55
add comment
1 Answer
Corollary 4.1.(i) in Johnstone's book Stone Spaces (electronic version: http://gen.lib.rus.ec/get?nametype=orig&md5=C26F62F69C32101307213F1960F85BA3) states that the category of
realcompact spaces is dual to the full subcategory of the category of commutative rings consisting of rings of the form C(X). The functor C implements the duality.
The category of compact CW-complexes embeds into the category of realcompact spaces as a full subcategory, hence the functor C is fully faithful.
For reference: en.wikipedia.org/wiki/Realcompact_space – David Roberts Oct 28 '10 at 20:13
Thank you. Paracompactness does not suffice here, right? – roger123 Nov 1 '10 at 12:48
I think there are non-homeomorphic paracompact spaces with isomorphic algebras of continuous functions. This is plausible because not all paracompact spaces are realcompact. –
Dmitri Pavlov Nov 1 '10 at 15:50
add comment
Unified Theory Replaces Relativity.
Authors: Rati Ram Sharma
A wave exists only in its propagating medium but Einstein erred to discard the physical medium for light wave and to introduce the non-existent 4-D spacetime continuum instead. It denied him the
chance to address the intrinsic wave-quantum Unity of light and predict the new entity of 'basic substance' to compose all forms of E & m so compellingly demanded for the inter-conversions of E & m
by the eqn. E=mc^2, which is now re-derived. Unified Theory gives cogent arguments and experimental support for the existence of a real physical medium in space, the all-composing & all-pervading
'sharmon medium' as Basic Substance. It propagates light as a wave-quantum UNITY, the particle aspect showing up at short wavelength e.g. from ~ 7000 A° downward in photochemical effects and below ~
3000 A° in photoelectric effects. The non-substantive abstract concepts of space & time evolve from our perceptions of successive motions & changes in the surrounding objects and cannot fuse into any
concrete spacetime continuum. If existent it would retard motion of heavenly bodies, which is not actually observed. Any non-composite static spacetime cannot undulate to transmit light. Various
multidimensional spacetime continua are mere mathematical constructs bereft of physical existence and theories based on them unrealistic. Unified Theory explains from sharmon medium the constancy &
invariance to source-observer motion, the two pillar postulates of Special Relativity without validating SR. It explains the Michelson-Morley and Sagnac experiments as also the observed variability
of light velocity and superluminality, which invalidate Relativity Theories. Lorentz transformations do not describe any natural motion since no velocity can vary (like v) with, and be invariant
(like c) to, a source-observer motion at the same time. The actual length of an object, viewed by say, 100 differently moving observers cannot undergo 100 different objective contractions at the same
time, making 'contraction of length' an unrealistic concept. So is 'dilatation of time'. Unified Theory derives from sharmon medium the Maxwell equations and the time containing and time free
equations for the propagation of wave-quantum unity in gravitational and electromagnetic radiation. The Schrodinger wave equation is also derived. It explains the photoelectric effect. Explanation of
the bending of light in a gravitational field shows that photon has mass and gravitation is not a curvature in 4-D spacetime. All particles and energy-quanta have definite mass & size.
Comments: 14 pages
Download: PDF
Submission history
[v1] 2 Aug 2009
[v2] 6 Aug 2009
[v3] 18 Aug 2009
Point and Line Spread Functions
Another concept that may be new to neophyte vision people is that of point spread function.
Most lenses including the human lens are not perfect optical systems. As a result when visual stimuli are passed through the cornea and lens the stimuli undergo a certain degree of degradation. The
question is how can this degradation be represented? Well suppose you have an exceedingly small dot of light, a point, and project it through a lens. The image of this point will not be the same as
the original. The lens will introduce a small amount of blur. Click on point spread function (PSF) to see a graphical representation.
If you clicked on the graphical representation you saw a diagram representing the light distribution of a point after passing through a lens, for example a human lens. But this plot only represented
light distribution along one axis of a plane. Click on spread functions to see a three dimensional representation. The point spread function you saw if you clicked on the above figure was a two
dimensional cross section of this three dimensional solid. In this figure you also saw an image of a line.
The cross section of the line image is called a line spread function (LSF). A LSF is derived by integrating the point solid along sections parallel to the direction of the line. This works because a
line image is the summation of an infinite number of image points along its length.
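As a concrete illustration, assume a circularly symmetric Gaussian PSF (a common idealisation, not a claim about real eyes): summing it along one axis yields a Gaussian LSF, which is exactly the integration just described.

```python
import math

def psf(x, y, sigma=1.0):
    # circularly symmetric Gaussian point spread function (unnormalised)
    return math.exp(-(x * x + y * y) / (2 * sigma * sigma))

def lsf(x, sigma=1.0, span=6.0, n=601):
    # line spread function: sum the PSF along the line's direction (here, y)
    h = 2 * span / (n - 1)
    return h * sum(psf(x, -span + i * h, sigma) for i in range(n))

# the LSF keeps the Gaussian profile: peaked at 0 and symmetric
assert lsf(0.0) > lsf(1.0) > lsf(2.0)
assert abs(lsf(1.5) - lsf(-1.5)) < 1e-12
```

The integration broadens nothing in the cross direction; it just accumulates the light each point along the line contributes to a given distance from it.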
Ok, so PSF represents a tool for describing the visual stimulus. But you are interested in visual perception and want to know how that helps you to understand what you are looking at.
Suppose you have two points of light, and when you plot the energy as a function of space you have two distinct distributions. The question is how would these two distributions be perceived? The quick answer is that it depends on how close the two points are. If you have already clicked on two distinct distributions, you know what is involved. If not, click on it now.
Now you are probably thinking, yes, ok but the world is not made up of just a bunch of points. For example, sometimes I see lines. You can readily imagine that if a point undergoes a certain amount
of degradation so does a line. If one has a series of parallel lines this is called a square wave grating.
Square wave gratings are often used to determine the modulation transfer function of an optical system.
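A small 1-D sketch of that measurement, with a Gaussian LSF standing in for a measured one: blur square-wave gratings of two spatial frequencies and compare the output contrast (Michelson modulation). The finer grating's contrast collapses, which is the falloff an MTF curve records:

```python
import math

def gaussian_lsf(sigma, radius, h):
    # discrete, normalised Gaussian line spread function on a grid of step h
    w = [math.exp(-((k * h) ** 2) / (2 * sigma * sigma))
         for k in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]

def modulation(signal):
    # Michelson contrast: (max - min) / (max + min)
    return (max(signal) - min(signal)) / (max(signal) + min(signal))

def blurred_modulation(freq, sigma=0.5, h=0.01, n=2000, radius=200):
    lsf = gaussian_lsf(sigma, radius, h)
    # square-wave grating of the given spatial frequency, values in [0, 1]
    grating = [0.5 + 0.5 * math.copysign(1.0, math.sin(2 * math.pi * freq * i * h))
               for i in range(n)]
    # convolve, keeping only positions where the kernel fits entirely
    out = [sum(lsf[k + radius] * grating[i + k] for k in range(-radius, radius + 1))
           for i in range(radius, n - radius)]
    return modulation(out)

low = blurred_modulation(0.2)   # coarse grating: contrast largely survives
high = blurred_modulation(1.5)  # fine grating: contrast nearly vanishes
```

Sweeping the frequency and plotting the resulting modulation (strictly, after correcting a square-wave measurement to its sine-wave equivalent) traces out the system's MTF.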
Bolinas Trigonometry Tutor
...We review and learn any missing knowledge in the student’s math background that may be needed to master the current subject. Since repetition is very important in studying math, we work on
extra problems either from the book or that I offer myself. The student should work on as much of the current homework before a session as possible.
12 Subjects: including trigonometry, calculus, statistics, geometry
...She is highly acclaimed for her innovative techniques designed to make even intricate and complex topics fun for teens and pre-teens. She has an innate understanding of what colleges need and
recognizes the value of an early start. Her methods are unique and they work.
29 Subjects: including trigonometry, reading, chemistry, physics
...I have also taught other related Life Sciences, including creating and teaching courses in Genetics, Bioethics, Ornithology and Ecology. I have been teaching a pre-algebra Math Enrichment
course for the past two years, and have been tutoring middle and high school students in pre-algebra and oth...
43 Subjects: including trigonometry, chemistry, Spanish, geometry
Hello my name is Steve and I am a 7th grade math teacher. I have experience tutoring and teaching everything from Pre-Algebra to Calculus. If anyone is interested in studying to prepare for the
upcoming school year, I am your man.
10 Subjects: including trigonometry, geometry, statistics, algebra 1
I have got two degrees: B.Sc. (Hons) from Huddersfield University (UK) and M.Sc., (ENG) from Hong Kong University both in engineering. The areas that I am able to teach and help students to learn
are mathematics and Chinese. My experience from a long education career enables me to better understand students, envisage their difficulties and help them where they need help.
10 Subjects: including trigonometry, geometry, Chinese, algebra 1 | {"url":"http://www.purplemath.com/Bolinas_Trigonometry_tutors.php","timestamp":"2014-04-18T09:02:17Z","content_type":null,"content_length":"24122","record_id":"<urn:uuid:1a9824a9-c881-41d3-9d94-0f3722014f12>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
An Integrated Computational/Experimental Model of Lymphoma Growth
Non-Hodgkin's lymphoma is a disseminated, highly malignant cancer, with resistance to drug treatment based on molecular- and tissue-scale characteristics that are intricately linked. A critical
element of molecular resistance has been traced to the loss of functionality in proteins such as the tumor suppressor p53. We investigate the tissue-scale physiologic effects of this loss by
integrating in vivo and immunohistological data with computational modeling to study the spatiotemporal physical dynamics of lymphoma growth. We compare between drug-sensitive Eμ-myc Arf-/- and
drug-resistant Eμ-myc p53-/- lymphoma cell tumors grown in live mice. Initial values for the model parameters are obtained in part by extracting values from the cellular-scale from whole-tumor
histological staining of the tumor-infiltrated inguinal lymph node in vivo. We compare model-predicted tumor growth with that observed from intravital microscopy and macroscopic imaging in vivo,
finding that the model is able to accurately predict lymphoma growth. A critical physical mechanism underlying drug-resistant phenotypes may be that the Eμ-myc p53-/- cells seem to pack more closely
within the tumor than the Eμ-myc Arf-/- cells, thus possibly exacerbating diffusion gradients of oxygen, leading to cell quiescence and hence resistance to cell-cycle specific drugs. Tighter cell
packing could also maintain steeper gradients of drug and lead to insufficient toxicity. The transport phenomena within the lymphoma may thus contribute in nontrivial, complex ways to the difference
in drug sensitivity between Eμ-myc Arf-/- and Eμ-myc p53-/- tumors, beyond what might be solely expected from loss of functionality at the molecular scale. We conclude that computational modeling
tightly integrated with experimental data gives insight into the dynamics of Non-Hodgkin's lymphoma and provides a platform to generate confirmable predictions of tumor growth.
Author Summary
Non-Hodgkin's lymphoma is a cancer that develops from white blood cells called lymphocytes in the immune system, whose role is to fight disease throughout the body. This cancer can spread throughout
the whole body and be very lethal – in the US, one third of patients will die from this disease within five years of diagnosis. Chemotherapy is a usual treatment for lymphoma, but the cancer can
become highly resistant to it. One reason is that a critical gene called p53 can become mutated and help the cancer to survive. In this work we investigate how cells with this mutation affect the
cancer growth by performing experiments in mice and using a computer model. By inputting the model parameters based on data from the experiments, we are able to accurately predict the growth of the
tumor as compared to tumor measurements in living mice. We conclude that computational modeling integrated with experimental data gives insight into the dynamics of Non-Hodgkin's lymphoma, and
provides a platform to generate confirmable predictions of tumor growth.
Citation: Frieboes HB, Smith BR, Chuang Y-L, Ito K, Roettgers AM, et al. (2013) An Integrated Computational/Experimental Model of Lymphoma Growth. PLoS Comput Biol 9(3): e1003008. doi:10.1371/
Editor: Mark S. Alber, University of Notre Dame, United States of America
Received: July 5, 2012; Accepted: February 13, 2013; Published: March 28, 2013
Copyright: © 2013 Frieboes et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported in part by NCI PSOC MC-START U54CA143907 (VC, SSG, HF), NCI ICBP 1U54CA151668 (VC), NCI ICMIC P50CA114747 (SSG), NCI RO1 CA082214 (SSG), NCI CCNE-TR U54 CA119367
(SSG), CCNE-T U54 U54CA151459 (SSG), and Canary Foundation (SSG), and K99 CA160764 (BRS). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of
the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Monoclonal antibodies and small molecule inhibitors of intracellular targets are being developed alongside a host of anti-non-Hodgkin's lymphoma therapeutic options [1]. Yet the tumor tissue-scale
effects from these molecular-scale manipulations are not well-understood. With the ultimate goal to more rationally optimize lymphoma treatment, we integrate pre-clinical in vivo observations of
lymphoma growth with computational modeling to create a platform that could lead to optimized therapy. As a first step towards this goal, we develop the capability for simulation in order to gain
insight into the tissue-scale effect of molecular-scale mechanisms that drive lymphoma growth. We use the modeling to study these mechanisms and their association to cell proliferation, death, and
physical transport barriers within the tumor tissue.
Tumor growth and treatment response have been modeled using mathematics and numerical simulation for the past several decades (see recent reviews [2]–[9]). Models are usually either discrete or
continuum depending on how the tumor tissue is represented. Discrete models represent individual cells according to a specific set of bio-physical and -chemical rules, which is particularly useful
for studying carcinogenesis, natural selection, genetic instability, and cell-cell and cell-microenvironment interaction (see reviews by [10]–[20]). Continuum models treat tumors as a collection of
tissue, applying principles from continuum mechanics to describe cancer-related variables (e.g., cell volume fractions and concentrations of oxygen and nutrients) as continuous fields by means of
partial differential and integro-differential equations [2]. A third modeling approach employs a hybrid combination of both continuum and discrete representations of tumor cells and microenvironment
components, aiming to develop multiscale models where the discrete scale can be directly fitted to molecular and cell-scale data and then upscaled to inform the phenomenological parameters at the
continuum scale (see recent work by [21]–[23]).
There is a paucity of mathematical oncology work applied to the study of non-Hodgkin's lymphoma, with some notable exceptions providing insight into the role of the tumor microenvironment
heterogeneity in the treatment response [24], [25] and the disease origin [26]. Like many other cancers (solid tumors), two critical tissue-scale effects in lymphoma are hypoxia and angiogenesis, as
observed in our studies and other work [27]. Supporting previous qualitative observations of physiological resistance, mathematical modeling and computational simulation have shown that the diffusion
barrier alone can result in poor tumor response to chemotherapy due to diminished delivery of drug, oxygen, and cell nutrients [28], [29]. Local depletion of oxygen and cell nutrients may further
promote survival to cell cycle-specific drugs through cell quiescence.
In order to study these effects in lymphoma, we implement an integrated computational/experimental approach to quantitatively link the processes from the cell scale to the tumor tissue-scale behavior
in order to gain insight into their cause and progression in time. We extend a version of our 3D continuum model [30]–[32], building upon extensive mathematical oncology work [2], [3], [33]–[35], and
calibrate both parameters and equations, i.e., functional relationships that are not conservation laws, from detailed experimental data to produce a virtual lymphoma. We obtain the experimental data
by very fine sectioning of both drug-sensitive and -resistant lymphomas, thus visualizing molecular, cellular, and tissue-scale parameter information across the whole tumor geometry. We further
develop the protocols for calibration of parameters by building on recent work based on patient histopathology [36], [37]. We also use the data to derive the relationships between model parameters
for apoptosis, proliferation, and vasculature. We verify the model results at the tumor-scale through tissue-scale observations in vivo of tumor size, morphology, and vasculature using intravital
microscopy and macroscopic imaging of the inguinal lymph node. We note that comparison of model results to experimental data has been done to various extents for different cancers (see reviews
above); here, we perform a tissue-scale comparison after extensive calibration of cell-scale parameters in order to validate the model results. We undertake simulations to study how the growth of
drug-resistant Non-Hodgkin's lymphoma may be governed by the cellular phenotype, and use this information to better elucidate the links between physical drug resistance and molecular-scale phenotype
by experimental and computational comparison to drug-sensitive tumors.
This process yields a lymphoma simulator as an initial step to study detailed tumor progression and provide further insight into drug resistance, and, ultimately, may provide a tool to design better
personalized treatments for Non-Hodgkin's lymphoma. Since the cell-scale measurements used for calibration are different from those at the tissue-scale used for verification, this methodology enables
the model to bridge from the cell to the tumor scale to calculate tumor growth and hypothesize associated mechanisms predictively, i.e., without resorting to fitting to the experimental data. This
process quantitatively links the cellular phenotype to the tumor tissue-scale behavior, and may serve to highlight the importance of physical heterogeneity and interactions in the tumor
microenvironment when evaluating chemotherapeutic agents in addition to consideration of chemo-protective effects such as cell-specific phenotypic properties and cell-cell and cell-ECM adhesion [38].
Materials and Methods
Experimental model
We choose an Eμ-myc murine orthotopic lymphoma experimental model because of its similarity to human Non-Hodgkin's Lymphoma [39], and select five parameters to measure based on their importance to
lymphoma progression: viability, hypoxia, vascularization, proliferation, and apoptosis. In order to investigate the role of physical heterogeneity in the development of drug resistance, including
the impediment of transport barriers, we focus on two types of lymphoma cells: Eμ-myc Arf-/- cells (Doxorubicin (DOX) and Cyclophosphamide (CTX) sensitive, with IC[50] = 3.5 nM and 16.0 µM,
respectively; the IC[50] is the amount of drug needed to kill 50% of a cell population), and Eμ-myc p53-/- cells (DOX and CTX resistant: IC[50] = 46.2 nM and 75.8 µM, respectively). The Eμ-myc
transgenic mouse model expresses the Myc oncogene in the B cell compartment, resulting in mice with transplantable B cell lymphomas. We chose this in vivo model because it captures genetic and
pathological features of the human disease and, given the appropriate genetic mutation, drug-resistant and drug-sensitive tumors can be directly compared [39],[40].
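As a rough quantitative check, the IC[50] values above directly give the fold-resistance of the Eμ-myc p53-/- line relative to the Eμ-myc Arf-/- line. A minimal sketch (the dictionary layout and function name are ours; the numbers are those reported above):

```python
# Fold-resistance implied by the reported IC50 values (amount of drug
# needed to kill 50% of cells); a higher ratio means the p53-/- line
# tolerates proportionally more drug than the Arf-/- line.
ic50 = {
    "DOX": {"Arf": 3.5, "p53": 46.2},   # doxorubicin, nM
    "CTX": {"Arf": 16.0, "p53": 75.8},  # cyclophosphamide, µM
}

def fold_resistance(drug):
    """Ratio of resistant-line IC50 to sensitive-line IC50 for one drug."""
    v = ic50[drug]
    return v["p53"] / v["Arf"]

print(round(fold_resistance("DOX"), 1))  # → 13.2
print(round(fold_resistance("CTX"), 1))  # → 4.7
```

The asymmetry (≈13-fold for DOX versus ≈5-fold for CTX) is one reason a tissue-scale model is needed: molecular-scale resistance alone does not scale uniformly across drugs.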
Cell culture
Eμ-myc/Arf-/- and Eμ-myc/p53-/- lymphoma cells, which harbor loss-of-function regions in the Arf and p53 genes respectively, were previously derived by intercrossing Eμ-myc transgenic mice with Arf
-null and p53-null mice, all in the C57BL/6 background as described previously [39]. Eμ-myc/Arf-/- lymphoma cells and Eμ-myc/p53-/- lymphoma cells were cultured in 45% Dulbecco's modified Eagle
medium (DMEM) and 45% Iscove's Modified Dulbecco's Medium (IMDM) with 10% fetal bovine serum (FBS) and 1% penicillin G-streptomycin onto the feeder cells – Mouse Embryonic Fibroblasts (MEFs).
Murine lymphoma model
C57BL/6 mice were obtained from Charles River Laboratories (Wilmington, Massachusetts). All animal studies were approved by The Stanford University Institutional Animal Care and Use Committee.
Lymphoma cells (1×10^6) Eμ-myc/Arf-/- and Eμ-myc/p53-/- were diluted with 200 µl of PBS and injected intravenously via the tail vein as described previously [39]. The intravital microscopy and
macroscopic tumor observations were obtained for at least n = 4 mice per tumor group.
We isolated both Eμ-myc/Arf-/- and Eμ-myc/p53-/- driven tumors at day 21 after tail-vein injection of lymphoma cells. Typical murine lymphomas were observed to range from about 4 to 6 mm in diameter
prior to fixation. Lymph node tissues were fixed and paraffin-embedded. The tissues were used for immunohistochemical (IHC) identification of cell viability (H&E staining), hypoxia (HIF-1α),
vascularization (CD31), proliferation (Ki-67), and apoptosis (Caspase-3). Five 2-µm thick sections were cut 5 µm apart from each other in order to stain for these markers (Figure 1). A total of five
sets (S1 through S5) of five stained sections each was collected every 100 µm along the lymphoma, in order to section and stain the entire tumor for sequential microscopic scanning of the stained
sections. Sections S1 and S5 were at the tumor top and bottom, respectively, while the other sections were towards the center with S3 being in the middle. Note that due to tissue processing and
dehydration, the tumors as cut were smaller than measured when removed from the animal. All the sections were de-paraffinized and rehydrated in PBS. Then the sections in each set were incubated at
4°C with the primary antibody overnight: rabbit anti-mouse HIF-1 antibody (Abcam, Santa Cruz, CA), rabbit anti-mouse Ki-67 antibody (Labvision, Fremont, CA), rabbit anti-mouse Caspase-3 antibody
(Cell Signaling Technology, Beverly, CA), and rat anti-mouse CD31 antibody (BD Pharmingen, San Diego, CA), and incubated for 1 hour at room temperature with a peroxidase-conjugated secondary
antibody. The samples were fully scanned and stitched together using a digital pathology BioImagene instrument (Ventana Medical Systems, Tucson AZ) at ×20 magnification.
Figure 1. Scheme to obtain the cellular-scale experimental data.
Lymphomas (shown as large orange sphere) were grown in vivo by tail vein injection of either drug-sensitive Eμ-myc/Arf-/- or drug-resistant Eμ-myc/p53-/- lymphoma cells. The inguinal lymph node tumor
was excised, fixed, and sliced for histology sections (5 µm apart) every 100 µm along the tumor. A total of five sets (S1 through S5) of histology sections were obtained (for simplicity, the figure
only shows three sets). The sections in each set were stained for cell viability (H&E), hypoxia (HIF-1α), proliferation (Ki-67), apoptosis (Caspase-3), and vascularization (CD-31).
Mathematical model
The model treats tissue as a mixture of various cell species, water, and ECM; each component is subject to physical conservation laws described by diffusion-taxis-reaction equations (see below).
Briefly, the tissue microstructure is modeled through the proper choice of parameter values and through biologically-justified functional relationships between these parameters, e.g., cellular
transitions from quiescence to proliferation depend upon oxygen concentration [41]. The model simulates non-symmetric tumor evolution in 2D and 3D, and dynamically couples heterogeneous growth,
vascularization, and tissue biomechanics (Figure 2). In [36] we calibrated models using cell-scale data to predict tissue scale parameters such as size and growth rate. These models are predictive
because they are not calibrated with the same data used for model validation, which avoids data fitting. While in [36] we focused on the final predicted tumor sizes, here we focus on the growth rate
as an essential first step; in follow-up work, we will evaluate the complex problem of drug response. Our approach to constrain the computational model involves both cell- and tumor-scale approaches
as described in Figure 3.
Figure 2. Algorithm flowchart.
Refer to Materials and Methods and Text S1 for equations. Using the cellular-scale data, we measured values for proliferation and apoptosis for both drug-sensitive and drug-resistant tumors and
calculated corresponding values for the model mitosis and apoptosis parameters λ[M] and λ[A]. We solved Eq. (2) for the local levels of cell substrates n at each time step of simulation of tumor
growth. The parameters were input into Eq. (3) to numerically calculate the source mass terms S[i], which were then used in Eq. (1) to compute the volume fractions ρ[V] and ρ[D] of viable and dead tissue. These fractions were used in Eq. (4) to obtain the tumor tissue growth velocity.
Figure 3. Schematic showing integrated computational/experimental modeling strategy involving both cell- and tumor-scale measurements.
(A) Functional relationships involving cell-scale parameters such as proliferation (Ki-67), apoptosis (Caspase-3), and hypoxia (HIF-1α) are defined based on experimental observations, e.g., from
immunohistochemistry the density of viable tissue as a function of vascularization is shown in the third panel (red: highest density; yellow: lowest; blue: vessels). These functional relationships as
well as parameter values measured experimentally are then used as input to the model to create simulations of lymphoma growth. A sample simulated tumor cross-section showing vascularized viable
tissue (highest density in red, lowest in yellow, with vessel cross-sections as small blue dots) is shown at the far right. (B) Lymphoma observations regarding size, morphology, and vasculature from
macroscopic imaging of an inguinal lymph node in live mice provide part of the tumor-scale information to validate the model simulations. Note the pre-existing vasculature in the lymph node (in the
center of each frame) from which oxygen and nutrients are supplied to the tissue. For comparison, a control group of lymph nodes in animals without tumors is also shown.
We approximate the healthy lymph node as a sphere to represent the experiments in the mouse model (Figure 4). To simulate node expansion and deformation of surrounding tissue to accommodate the
growing tumor, as a first step we delineate the tumor boundary by decreasing the value of the cell mobility parameter beyond the sphere diameter (see below). For the multigrid algorithm, we pick a
computational domain that is a 6.4 mm× 6.4 mm× 6.4 mm box, with finest mesh grid size = 100 microns; this grid size provides adequate resolution to resolve the tumor boundaries without incurring
excessive computational cost.
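The grid just described (a 6.4 mm cube at 100 µm resolution, i.e., 64 points per side) can be sketched numerically. For brevity the sketch below relaxes a 2D slice of a steady diffusion–delivery–uptake problem with Jacobi iteration rather than the paper's full 3D multigrid solver; the vessel placement and the rate constants D, ν, λ are illustrative assumptions, not the paper's values:

```python
import numpy as np

# Domain: 6.4 mm box at 100 µm spacing -> 64 grid points per side (2D slice here).
N, h = 64, 0.1  # points per side, spacing in mm

# Steady state of 0 = D∇²n + ν·δ_vessel·(1−n) − λn, solved by Jacobi iteration
# (the paper uses multigrid; Jacobi is a simpler stand-in with the same fixed point).
D, nu, lam = 1.0, 10.0, 1.0          # illustrative diffusion/delivery/uptake rates
vessel = np.zeros((N, N), dtype=bool)
vessel[N // 2, N // 2] = True        # one vessel cross-section in the node core

n = np.zeros((N, N))
for _ in range(5000):
    avg = (np.roll(n, 1, 0) + np.roll(n, -1, 0) +
           np.roll(n, 1, 1) + np.roll(n, -1, 1)) / 4.0
    # Solve the 5-point stencil balance at each node for n_ij:
    # n_ij = (4D·avg/h² + ν·δ_vessel) / (4D/h² + ν·δ_vessel + λ)
    n = (4 * D * avg / h**2 + nu * vessel) / (4 * D / h**2 + nu * vessel + lam)

print(n.max() <= 1.0, n[N // 2, N // 2] > n[0, 0])  # levels ≤ 1; decay away from vessel
```

The resulting field reproduces the qualitative picture used throughout the paper: substrate levels bounded by the intravascular maximum (non-dimensionalized to 1) and decaying with distance from the vasculature.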
Figure 4. Representation of the lymph node by the computational model.
(A) Diagram highlighting a typical lymph node structure. (B) Simulation output from the model showing an incipient tumor (dark red) forming in the center of the node. Afferent lymphatic vessels are
collectively represented as one incoming tube on the top, and the efferent vessel is at the bottom. (C) The simulated distribution of oxygen (brown color) released by the blood vasculature within the
node remains uniform at this initial stage.
Distribution of cell species
We assume that the tumor is a mixture of cells, interstitial fluid, and extracellular matrix (ECM). The temporal rate of change in viable and dead tumor tissue at any location within the tumor equals
the amount of mass that is pushed, transported, and pulled due to cell motion, adhesion, and tissue pressure, plus the net result of production and destruction of mass due to cell proliferation and death (Eq. 1).
The rate of change in the volume fraction ρ[i] of cell species i (V: viable tumor; D: dead tumor; H: host) is specified throughout the computational domain by balancing net creation (S[i]:
proliferation minus apoptosis and necrosis; see below) with cell advection (∇·(u[i]ρ[i]), where u[i] is the velocity of the cell species) and cell-cell and cell-ECM interactions (adhesion, cell
incompressibility, chemotaxis, and haptotaxis, incorporated in a flux J[i]) [31], [32]. The reticular network within the lymph node contains a variety of extracellular matrix proteins, many of which
are known ligands for integrin cell surface adhesion receptors [42], [43]. Cell-cell and cell-ECM mechanical interactions are modeled through J using a generalized Fick's Law [31].
Tumor angiogenesis is driven by excessive accumulation of cancerous cells, leading to a chronic under-supply of oxygen and cell nutrients (generically here labeled “nutrients”) in tumor regions
farther removed from pre-existing vessels [44]. Hypoxic cells in lymphoma release a net balance of pro-angiogenic factors such as VEGF-A, bFGF, PDGF and VEGF-C, which promote neo-vascularization
mainly through sprouting angiogenesis of mature resident endothelial cells and, to a lesser extent, through vasculogenesis from recruitment of bone marrow-derived progenitor cells [45]. Accordingly,
the model incorporates angiogenesis into the lymphoma by coupling with a multiscale representation of tumor vessel growth, branching, and anastomosis based on earlier work [46]–[48] (further details
in Text S1).
The vasculature releases oxygen and nutrients n that diffuse through the tissue and are uptaken by cells during metabolism, while tumor cells secrete VEGF (n[V]) in response to hypoxia [32]. The
oxygen and nutrients are non-dimensionalized by the maximum level inside vessels, hence their levels are ≤1, and are assumed to be stationary. The transport is described by Eq. (2), where D[n] and D[nV] are the diffusion constants (1×10^−5 cm^2/sec for oxygen [49] and 1×10^−7 cm^2/sec for VEGF [50]), δ[vessel] is the indicator function of the vasculature (1 where a vessel exists and 0 otherwise), ν is the delivery rate (which depends upon a, the capillary vessel cross-sectional area, and u[b], the blood velocity), and the remaining coefficients are the uptake rates, the decay rates (for simplicity, assumed to be zero), and the VEGF secretion rate.
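A hedged reconstruction of the quasi-steady transport equations (Eq. 2), assembled from the definitions above; the λ labels are ours, since the original subscripted symbols were lost in extraction, and the hypoxia-gated secretion term ρ[V]·H(n[N]−n) is our reading of "tumor cells secrete VEGF in response to hypoxia":

```latex
% Quasi-steady diffusion of nutrient n and VEGF n_V
% (lambda labels are ours; decay rates assumed zero as stated in the text)
0 = \nabla \cdot \left( D_n \nabla n \right)
    + \nu\, \delta_{\mathrm{vessel}}\, (1 - n)
    - \lambda_n\, n
\qquad
0 = \nabla \cdot \left( D_{n_V} \nabla n_V \right)
    + \lambda_{\mathrm{prod}}\, \rho_V\, \mathcal{H}(n_N - n)
    - \lambda_{n_V}\, n_V
```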
Proliferation, apoptosis, and necrosis
The tumor species viable (V) volume fraction ρ[V] is assumed to increase through proliferation and decrease through apoptosis and necrosis. We assume that normal host cells (H) do not proliferate,
but may also undergo apoptosis (A) and necrosis (N); the total volume fraction of dead cells (D) is ρ[D]. For simplicity, we assume these primarily affect tumor mass through the transport of water
within the tissue and hence neglect their solid fraction. Under the assumption that a dense viable cell population prevents nutrient saturation, we model the proliferation as directly proportional to
(non-dimensionalized) nutrient substrate n above a threshold level n[N], resulting in the net creation of one cell by removing the equivalent water volume from the interstitium. Cells experiencing a
substrate level below n[N] are considered quiescent (e.g., due to hypoxia). Apoptosis transfers cells from the viable tumor and host cell species to the dead cell species, where cells degrade and
release their water content; this models phagocytosis of apoptotic bodies by neighboring viable cells and the subsequent release of the water of lysed cells. Necrosis occurs when the nutrient
substrate concentration falls below the threshold n[N] and ultimately releases the cellular water content (i.e., we assume that the main mode of cell death due to lack of nutrients is necrosis). The resulting model is given by Eq. (3),
where λ[M,i], λ[A,i], and λ[N,i] are mitosis, apoptosis, and necrosis rates, λ[D] is the cell degradation rate (varies due to the differences between apoptosis and necrosis), and H(x) is the
Heaviside “switch” function.
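From the description of proliferation, apoptosis, and necrosis above, the source terms of Eq. (3) can be sketched as follows. This is our reconstruction, not the paper's typeset equation: proliferation is proportional to n above the threshold n[N], apoptosis and necrosis transfer mass to the dead species, and dead material degrades at rate λ[D]:

```latex
% Net creation terms (our reconstruction from the stated mechanisms)
S_V = \lambda_{M}\, n\, \rho_V\, \mathcal{H}(n - n_N)
      \;-\; \lambda_{A,V}\, \rho_V
      \;-\; \lambda_{N,V}\, \rho_V\, \mathcal{H}(n_N - n)
\qquad
S_D = \sum_{i \in \{V,H\}}
      \left( \lambda_{A,i} + \lambda_{N,i}\, \mathcal{H}(n_N - n) \right) \rho_i
      \;-\; \lambda_{D}\, \rho_D
```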
Velocity of cell species
The movement of a cell species is determined by the balance of proliferation-generated oncotic pressure, cell-cell and cell-ECM adhesion, as well as chemotaxis (due to substrate gradients), and
haptotaxis (due to gradients in the ECM density). We model the motion of cells and interstitial fluid through the ECM as a viscous, inertialess flow through a porous medium. Therefore, no distinction
between interstitial fluid hydrostatic pressure and mechanical pressure due to cell-cell interactions is made. Cell velocity is a function of cell mobility and tissue oncotic (solid) pressure
(Darcy's law); cell-cell adhesion is modeled using an energy approach from continuum thermodynamics (see Text S1). For simplicity, the interstitial fluid is modeled as moving freely through the ECM
(i.e., at a faster time scale than the cells), leading to Eq. (4).
The variational derivative δE/δρ[i] of the cell-cell interaction potential, combined with the remaining contributions to the flux J (due to pressure, haptotaxis, and chemotaxis; see Text S1), yields
a generalized Darcy-type constitutive law for the cell velocity u[i] of a cell species i, determined by the balance of proliferation-generated oncotic pressure p, cell-cell and cell-ECM adhesion, as
well as chemotaxis (due to gradients in the cell substrates n), and haptotaxis (due to gradients in the ECM density f) [32]. Here, k[i] is the cellular mobility, reflecting the response to pressure gradients
and cell-cell interactions, γ[j] is the adhesion force, and χ[n] and χ[h] are the chemotaxis and haptotaxis coefficients, respectively (see Table S1). For the host cells, χ[n] = χ[h] = 0. The
Supplemental Text S1 further describes the ECM density f as well as the effect of the cell velocity on the lymph node geometry.
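The generalized Darcy-type law (Eq. 4) described in this paragraph can be written as below. This is a reconstruction from the terms named in the text (mobility times pressure and adhesion-energy gradients, plus chemotaxis and haptotaxis); the exact grouping of the adhesion term in the paper's typeset equation may differ:

```latex
% Cell velocity: Darcy-type response to oncotic pressure p and adhesion
% energy E, plus chemotaxis (substrates n) and haptotaxis (ECM density f)
\mathbf{u}_i = -\,k_i \left( \nabla p
                \;-\; \gamma_j\, \frac{\delta E}{\delta \rho_i}\, \nabla \rho_i \right)
              \;+\; \chi_n \nabla n \;+\; \chi_h \nabla f
```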
Comparison between Eμ-myc p53-/- and Eμ-myc Arf-/- tumors
We used the IHC staining to estimate the number and spatial localization of cells that were viable (from H&E), proliferating (from Ki-67), apoptotic (from Caspase-3), hypoxic (from HIF-1α), and with
vascular endothelial characteristics (from CD31). These estimates were calculated for both Eμ-myc Arf-/- and Eμ-myc p53-/- cells for each set of five sections obtained every 100 µm across the
lymphoma (Figures 5 and 6).
Figure 5. Lymphoma tumor cell viability.
Viability per area was measured along the five sets (S1 through S5) of histology sections for Eμ-myc Arf-/- (black) and Eμ-myc p53-/- (gray) tumors. All error bars represent standard deviation from
at least n = 3 measurements in each section. Asterisks show level of statistical significance determined by Student's t-test with α = 0.05 (one asterisk, p<0.05; two asterisks, p<0.01).
Figure 6. Lymphoma tumor characteristics.
Histological measurements are shown Eμ-myc Arf-/- (black) and Eμ-myc p53-/- (gray) tumors along the five sets of sections (S1 through S5) of the lymphoma: (A) Endothelial cells per area; (B) hypoxic
cells per area; (C) proliferating cells per area; (D) apoptotic cells per area. Sections S1 and S5 are at the tumor top and bottom, respectively, while the other sections are in the interior with S3
being in the middle. Dashes in panels (A) and (C) indicate that no data was obtained; in panel (C), no proliferation was detected for Eμ-myc p53-/- cells in sets S4 and S5, and none for Eμ-myc Arf-/-
in set S5, probably due to sample defects. All error bars represent standard deviation from at least n = 3 measurements in each section; asterisk indicates statistical significance (p<0.05)
determined by Student's t-test with α = 0.05. The data shows that for Eμ-myc p53-/- there is higher vascularization in the center, higher hypoxic density on the periphery, and higher overall
apoptotic density compared to Eμ-myc Arf-/-.
A comparison of viable Eμ-myc p53-/- to Eμ-myc Arf-/- cells along the lymphoma (Figure 5) indicates that the viability is higher for the drug-resistant tumors in the middle of the tumor (Section S3)
compared to the drug-sensitive tumors, with a corresponding statistically significant increase in cell density (p = 0.024; Student's t-test with α = 0.05). In contrast to the Eμ-myc p53-/- tumors,
the Eμ-myc Arf-/- seemed to be more dense in the peripheral regions (p = 0.002 on one end (Section S1) and p = 0.009 on the other end (Section S5)), whereas they were about the same for both tumor
types in the intermediate sections S2 and S4. Tumors with drug-resistant cells have a 4-fold increase in endothelial cells in the core of the tumor (Section S3) compared to drug-sensitive tumors (
Figure 6A). Hypoxia is higher in the peripheral regions for the Eμ-myc p53-/- (Figure 6B) even though for both tumor types the peripheral regions seem to be equally vascularized (based on the
endothelial cell density). This could be due to the vasculature on the periphery not being fully functional, with a potential difference in vascular function between the two tumor types leading to a
more hypoxic phenotype for the Eμ-myc p53-/-. Although the core proportionally holds almost twice the number of proliferating cells for the drug-resistant tumors as compared to the drug-sensitive
case (Figure 6C), a correlation between proliferation and vascularization/hypoxia is precluded. Interestingly, the number of apoptotic cells is consistently higher for Eμ-myc p53-/- (Figure 6D),
suggesting non-hypoxia driven apoptosis for these tumors.
Model calibration with cellular-scale data
By analyzing each IHC section longitudinally along the tumor, a range of baseline values can be calculated from the experimental data for key model parameters (Table S1), inspired by recent methods
in mathematical pathology [36]: cell viability, necrosis, and spatial distribution pattern (from H&E), cell proliferation (from Ki-67), cell apoptosis (from Caspase-3), oxygen diffusion distance
(from HIF-1α), and blood vessel density (from CD31). These values are obtained for both Eμ-myc Arf-/- and Eμ-myc p53-/- tumors for each of the five sections obtained longitudinally along the tumor,
with values sampled from the middle (core) and the edge (periphery) of each section. The measured values are not resolved in space but averaged over each section, thus yielding information averaged
over space. The periphery was defined as the region approximately within 200 µm of the tumor boundary.
Figure S1 shows an example of this calibration process for proliferation at the periphery and middle from two histology sections in the center of the tumor (Section S3). Taking an average
proliferation cycle of 20 hours that we observed for the lymphoma cells in culture, the proliferation rate in units of day^−1 is λ_M·⟨n⟩ = [(stained/(stained + unstained)) / (20 hours per proliferation cycle)] × 24 hours/day. The average nutrient ⟨n⟩ indicates that this proliferation rate depends on the model diffusion of cell substrates such as glucose and oxygen in the 3D space (Eq. 2). Similarly, since the apoptosis cycle was detectable up to 5 hours, the apoptosis rate in units of day^−1 is λ_A = [(stained/(stained + unstained)) / (5 hours per apoptosis cycle)] × 24 hours/day.
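As a concrete sketch of this conversion, both rates reduce to one helper function. The cell counts below are hypothetical, chosen for illustration only (they are not measurements from the study):

```python
def rate_per_day(stained, unstained, cycle_hours):
    """Convert stained/unstained cell counts into a rate in day^-1,
    given the window (in hours) over which the marker is detectable."""
    fraction = stained / (stained + unstained)
    return fraction / cycle_hours * 24.0

# hypothetical counts from one imagined section, for illustration only
lam_M = rate_per_day(stained=500, unstained=500, cycle_hours=20)  # Ki-67
lam_A = rate_per_day(stained=50, unstained=950, cycle_hours=5)    # Caspase-3
ratio_A = lam_A / lam_M  # apoptosis-to-proliferation ratio
```

With these made-up counts, λ_M comes out at 0.6 day^−1, λ_A at 0.24 day^−1, and the apoptosis-to-proliferation ratio at 0.4, the same order as the average ratio A reported later in the text.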
We calculate the average nutrient from the blood vessel density by assuming a uniform nutrient delivery rate from the blood to the tissue adjacent to the vessels (Eq. 2). Estimating blood vessel area
versus surrounding tissue provides a measure of the magnitude of cell substrates transferred into the tumor. Thus, we calculate the fraction of cells supported per endothelial cell in a unit volume
to be (number unstained/(number stained+unstained))^3/2. When the viable cell fraction in the simulations matches what is directly observed from microscopy, this implies that the vascular and
nutrient distributions have been correctly represented in the model (Figure 3A, middle). Similarly, we calculate the hypoxic cell fraction per unit volume as (number stained/(number stained + unstained)).
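These two fractions can be sketched the same way; the counts are hypothetical, and the reading of the hypoxic fraction as the plain stained fraction of HIF-1α-positive cells is an assumption here:

```python
def supported_cell_fraction(cd31_stained, cd31_unstained):
    """Fraction of cells supported per endothelial cell in a unit volume,
    from CD31 counts: (unstained / (stained + unstained)) ** (3/2)."""
    return (cd31_unstained / (cd31_stained + cd31_unstained)) ** 1.5

def hypoxic_cell_fraction(hif_stained, hif_unstained):
    """Hypoxic cell fraction per unit volume from HIF-1alpha counts
    (assumed here to be the plain stained fraction)."""
    return hif_stained / (hif_stained + hif_unstained)

supported = supported_cell_fraction(19, 81)  # hypothetical counts
hypoxic = hypoxic_cell_fraction(10, 90)      # hypothetical counts
```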
Modeling of the lymph node
The node is represented by the computational model initially as a spherical capsule in 3D with a membrane boundary separating it from the surrounding tissue (Figure 4) (see Text S1). Lymphoma cells
are assumed to enter the lymph node through the afferent lymph vessels. As they accumulate in the node during tumor progression in time, they compete for cell substrates such as oxygen and nutrients
with the normal lymphocytes. These substrates are assumed to diffuse radially outward toward the node periphery from the pre-existing vasculature, situated mainly in the core of the node (see Figure
4A and Figure 3B, left, at the intersection of three large blood vessels). Once a tumor has begun to form in the core of the node, this diffusion process presents a transport barrier for oxygen and
nutrients to the lymphoma cells incoming through the afferent vessels into the node.
Assessment of the model
We investigate the effect of initially available oxygen and cell substrates needed for cell proliferation, since lymphoma growth is hypothesized to depend on access to these through the vasculature.
Preliminary calculations suggested that the initially available nutrient level has a significant effect on the growth phase of the tumor but not on its terminal size, which according to a theoretical
analysis of the model (Text S1) depends mainly on the ratio of apoptosis to proliferation [51]. A further investigation revealed that the initial guess of parameter values results in a mismatch
between the ratio of hypoxic cells and the average apoptosis rate: where the range of hypoxic ratio matches the experiments, the apoptosis rate range in the model is too low.
Accordingly, we calibrated the cell necrosis rate so that the key parameter values remain invariant when the initial nutrient is set to a threshold of 0.5. With this set of parameters, a necrosis
rate from 5 to 7 (non-dimensional units) would satisfy the experimentally observed ranges of both the hypoxic fractions and the average apoptosis rate (Figure S2A–B). We then varied the initial
nutrient threshold while maintaining the necrosis rate invariant to confirm that the fraction of hypoxic cells and average apoptosis rate would remain within the experimentally observed range of
values (Figure S2C–D). This calibration suggests that the initially available nutrient still affects the growth phase of lymphoma. In this model, the lymphoma tumor and the lymph node greatly outgrow
the original lymph node size, which we consistently observed in vivo in addition to the distortion of the lymph node geometry (we are currently implementing the Diffuse Domain Method [52] to better
represent this geometry).
Prediction of lymphoma growth
After using the IHC data to perform a cell-scale calibration of the lymphoma model, we verify the simulated tissue-scale lymphoma size from in vivo macroscopic observations and intravital imaging at
the tissue scale. Recently, it has been discovered with bioluminescence imaging by Gambhir and co-workers that lymphoma cells coming from the spleen and bone marrow seed the inguinal lymph node
around Day 9 in vivo [53]. Using this seeding as the initial condition for the simulations, the model predicts the tumor diameter to be ~5.2±0.5 mm by Day 21 (Figure 7). This figure also shows the
gross tumor size from our caliper measurements in time, indicating that the model-predicted tumor diameter for the maximum possible value of initial nutrient falls within the range of the
measurements in vivo (the experiments show that there is no statistical difference in the tumor growth between the two cell types, Figure 3B, right). The model simulations are based on an oxygen
diffusion distance from the vessels estimated to be directly proportional to the distance at which hypoxia is detected away from blood vessels, measured experimentally from the HIF-1α staining to be
80±20 µm. The variation in this measurement leads to the variation in the simulated diameter. If the lymphoma is begun at sites within the lymph node other than the center (Figure 4), similar growth
curves are computationally obtained as the whole node volume is eventually taken over by the proliferating tumor cells (results not shown). We note that since there is a distributed source of vessels
in the tumor, the proliferation is relatively weakly sensitive to additional outside sources.
Figure 7. Prediction of lymphoma growth based on the calibrated model parameters.
Simulated mean tumor diameter (solid red line) bounded by variation in the measured oxygen diffusion distance (dashed red lines) falls within the range of values measured for the tumor growth
observed in vivo (denoted by the triangles and squares with vertical error bars). Note that the simulated growth is the same for both Eμ-myc Arf-/- and Eμ-myc p53-/- tumors.
The tumor growth from the model calibrated from the cell-scale can be validated through theoretical analysis of the model based on previous mathematical and computational work [51], [54]–[56] (see
Text S1). Assuming that the lymph node geometry is approximated by a 3D sphere, the model can be used to predict the tumor radius in time based on the ratio A of the rates of apoptosis to
proliferation calculated from the experimental IHC data. The average ratio A = λ[A]/λ[M] ~0.4 for both drug-sensitive and drug-resistant cells. In comparison with the simulations based on the
cell-scale calibration, this analysis predicts that the tumor would reach a diameter of ~6 mm. Both the theoretical analysis and the tumor growth obtained through the simulations agree with the
similar diameters observed experimentally in vivo (~5 to 6 mm) (Figure 7).
Simulation of diffusion barriers within the lymph node tumor
In the model, simulations of the vasculature were qualitatively compared to independent intravital microscopy observations in vivo of a Eμ-myc p53-/- tumor in the same animal over time (Figure 8).
The density of simulated viable tumor tissue (Figure 3A, right) as a function of the vascularization at day 21 qualitatively matches the density of the tissue observed experimentally (fraction of
simulated viable cells in the 2D plane, >90% per mm^2 in inset in Figure 8, vs. the average fraction of viable cells measured from H&E staining, 87%±6% per mm^2), indicating that the overall
vasculature function was modeled properly. The density of simulated endothelial tissue is also highest in the tumor core, as observed from histology. The increase in the lymphoma cell population
disturbs the homogeneous distribution of cell substrates (such as oxygen and cell nutrients), leading to diffusion gradients of these substances that in turn affect the lymphoma cell viability. If
the cell viability is established heterogeneously within the tumor, e.g., as observed experimentally in IHC with the Eμ-myc Arf-/- cells near the tumor periphery, the model predicts that the
diffusion gradients would not be as pronounced. If the cell viability is higher near the center of the tumor, which is observed in IHC with the Eμ-myc p53-/- cells (Figure 5), then the gradients are
predicted to be steeper and more uniform [28].
Figure 8. Vasculature and angiogenesis in the lymph node tumor.
Observations in living mice using intravital microscopy (A, B, C: red – functional blood vessels; shown for Eμ-myc p53-/- tumor) provide information to qualitatively compare the vessel formation (D,
E, F: red – highest flow; white – lowest; dots indicate vessel points of origin from pre-existing vasculature (not shown)) in the computational model (calibrated from other data, see Text S1). The
modeling of diffusion of cell substrates (e.g., oxygen and cell nutrients) within the tumor enables prediction of the spatial distribution of lymphoma cells (inset, shown for one vessel
cross-section; brown: highest concentration of cells; white: lowest concentration of cells) as their viability is modulated by access to the oxygen and nutrients diffusing from the vasculature into
the surrounding tissue (Eq. 2).
We integrate in vivo lymphoma data with computational modeling to develop a basic model of Non-Hodgkin's lymphoma. Through this work we seek a deeper quantitative understanding of the dynamics of
lymphoma growth in the inguinal lymph node and associated physical transport barriers to effective treatment. We obtain histology data by very fine sectioning across whole lymph node tumors, thus
providing detailed three-dimensional lymphoma information. We develop a computational model that is calibrated from these cell-scale data and show that the model can independently predict the
tissue-scale tumor size observed in vivo without fitting to the data. We further show that this approach can shed insight into the tumor progression within the node, particularly regarding the
physical reasons why some tumors might be resistant to drug treatment – a critical consideration when attempting to quantify and predict the treatment response. We envision that the modeling and
functional relationships derived in this study could contribute with further development to patient-specific predictors of lymphoma growth and drug response.
Although the number of mice used for the experimental in vivo validation is limited, the model results are consistent with previous work. For example, a well-studied mechanism of physiological
resistance is the dependence of cancer cell sensitivity to many chemotherapeutic agents on the proliferative state of the cell [28]. This physical mechanism is likely important in the difference in
drug-sensitivity between the tumors formed from the two cell lines and will be explored in further studies. We found that the Eμ-myc Arf-/- cells tend to congregate at the periphery of the tumor (
Figure 5), even though there are vessels in the interior of the tumor. This suggests the hypothesis that the more drug-sensitive Eμ-myc Arf-/- cells maintain better oxygenation at the expense of
higher drug sensitivity by growing less compactly in the interior of the tumor – where there would be stronger competition for oxygen and cell nutrients – whereas the Eμ-myc p53-/- lymphoma cells may
enhance their survival by closer packing in the core of the tumor. Cell packing density may present a barrier to effective drug penetration [57], which we have also modeled previously [28]. Closer
packing could further increase the number of cells that would be quiescent due to depletion of oxygen and nutrients, as we specify in the model (Materials and Methods) and as we have simulated in
previous work [28]. However, the proportion of chemoresistance inherent with Eμ-myc p53-/- that can be attributed to resistance at the genetic level compared to what can be attributed to suboptimal
drug delivery and quiescence is unclear. In follow-up work we plan to measure drug amounts near various cells in order to begin answering this question, and to perform sensitivity analyses of the
IC50 of each cell line with the computational model. This would provide a (model-generated) measure of how much of the effect could be attributed to suboptimal delivery as compared to genetic resistance (as measured by IC50).
Lymphoma cells are known to retain cell-cell adhesion, with strength associated with the lymphoma's originating cell type (B- or T-cell) [58]. Mechanisms of cell packing related to drug resistance
may include weaker cell adhesion in Eμ-myc Arf-/- than in Eμ-myc p53-/- leading to higher cell density as well as a denser extra-cellular matrix in the latter [57]. Loss of ARF has been linked to
increased cancer cell migration and invasion, and hence weaker cell-cell adhesion [59], associated with the binding of ARF to the transcriptional corepressor CtBP2 and promoting CtBP2 degradation.
Perhaps surprisingly, the experimental data indicate minimal presence of hypoxia within the tumor (Figure 6B). This may be due to the fact that lymphoma cells may associate with other cells including
stromal cells in the tumor, and the consequent cytokine stimulation (e.g., IL-7) may also trigger proliferation [63]. We note that the oxygen diffusion length estimate is subject to variation, as
calculated to be directly proportional to the hypoxic distances observed from the IHC; this may be improved by directly measuring the diffusing substances, e.g., oxygen. The simulated elastic tumor
boundary may also introduce some variation into the size calculation. Nevertheless, even taking these variations into account, the model-calculated average ratio of apoptosis to proliferation,
established from cell-scale measurements, implies that the tumor sizes fall within the range of the sizes estimated from the diameter measured with calipers in vivo. The hypothesis we test with the
model by successful comparison to the experimental data is that the growth and eventual slowdown of these tumors is the balance of proliferation and death, which we have also previously observed for
ductal carcinoma in situ [38]. Experimental evidence using bioluminescence imaging of living mice [53] demonstrates that lymphoma cells seed the tumor in the inguinal lymph node from other sites
(e.g., spleen and bone marrow) in the mouse body at earlier times during the tumor growth. The model results are robust, however, because the tumor size by Day 21 predicted by the theory is
independent of the earlier times; any influx of cells only provides an initial (transient) condition.
The staining also shows that apoptosis seems highest for drug-sensitive cells at the periphery of the tumor (Sections S1 and S5) compared to the center (Section S3) (both p-values = 0.04 using a
Student's t-test with α = 0.05), and for drug-resistant cells it is highest in the more central regions (Figure 6). In accordance with biological observations [64], [65], [41], the model hypothesizes
that increased hypoxia may lead to higher cell quiescence and hence drug resistance. In the experiments, angiogenesis is higher in the central regions, and is more pronounced for drug-resistant
cells, suggesting that these cells are in a more angiogenic environment as a result of ongoing hypoxic stimulus. Higher tumor cell density around blood vessels suggests a functional relationship of
cell viability as a function of nutrients, as we have implemented in the model (see Materials and Methods). However, apoptosis may not necessarily be driven solely by hypoxia, since lymphoma cells
are known to have a cellular turnover rate that is on the order of days [66], [67]. We further note that angiogenesis is not necessarily triggered only by hypoxia. Lymphoma as well as stromal cells
(such as tumor associated macrophages) may produce factors promoting angiogenesis (e.g., vascular endothelial growth factor or VEGF) under otherwise normoxic conditions.
The present work calibrates a computational model of lymphoma with experimental data from drug-sensitive and drug-resistant tumors. This data was derived from detailed IHC analysis of whole tumors,
and validation of the model was performed via intravital microscopy measurements. The results suggest that differences in spatial localization of cells and vasculature, as well as in the transport
phenomena in the tumor microenvironment may play a nontrivial role in the tumor behavior. This suggests that the genetic differences (Eμ-myc Arf-/- and Eμ-myc p53-/-) may provide a substantial compensation mechanism for these phenomena at the tissue scale, in addition to the molecular scale, as it relates to their drug resistance. We plan to verify this hypothesis in the future by assessing model
predictions for therapeutic response of drug-sensitive and drug-resistant tumors in terms of cellular parameters such as proliferation, apoptosis, and hypoxia via both IHC and intravital microscopy.
Supporting Information
Example of calibration process of model parameters from the Ki-67 IHC data. The proliferation parameter is calculated for both Eμ-myc Arf-/- (drug-sensitive) and Eμ-myc p53-/- (drug-resistant)
lymphoma cells. This sample (from Section S3 in the center of the tumor) shows measurements obtained at the edge (periphery) and middle (center) of the section. Positive staining shown in the panels A–D
is converted to red and negative staining to green in panels E–H to obtain a quantitative measure of proliferative activity, as calculated in the text. Results are shown in bottom right insets in
panels E–H.
Determination of optimal necrotic rate threshold for cell viability. The necrosis rate is varied while the initial nutrient threshold is fixed at 0.5 to determine a range for which both the hypoxic
fractions (A) and average apoptosis rate (B) match what is observed experimentally, finding that this range is from 5 to 7 (non-dimensionalized). We then varied the initial nutrient threshold while
maintaining the necrosis rate invariant to confirm that the fraction of hypoxic cells (C) and average apoptosis rate (D) would remain within the experimentally observed ranges.
Range of key parameter values and corresponding baseline values for the computational model. (M) values were calculated from the cell-scale immunohistochemistry data, (C) values were calibrated using
these data, and (ND) are non-dimensionalized values.
Supplemental material.
We are grateful to John Lowengrub (Mathematics, UCI) for useful discussions and advice, and to Fang Jin (Pathology, UNM) for enhancements to the tumor angiogenesis model. We wish to thank the
reviewers for their valuable contribution.
Author Contributions
Conceived and designed the experiments: HBF BRS YLC KI SSG VC. Performed the experiments: BRS YLC KI. Analyzed the data: HBF BRS KI AMR. Contributed reagents/materials/analysis tools: SSG VC. Wrote
the paper: HBF BRS YLC KI SSG VC.
Posts about abelian on Math Jokes 4 Mathy Folks
Posts tagged ‘abelian’
My son is doing his math homework — he’s in first grade, so it involves writing a certain number, spelling that number, and finding all occurrences of that number in a grid of random numbers called a
“Number Hunt.” Based on today’s number, he came up with the following joke:
What number is mostly even but not even?
Not a great joke, to be sure… but as good as most jokes on his dad’s blog, and he’s only six years old.
The homework was frustrating (for me), because my sons are capable of much more.
When my sons ride their bikes through the parking lot, they solve problems involving parking space numbers, the digits on license plates, and other numerical things. They ask me to create “math
challenges” for them to think about as they ride. Yesterday, they solved the following three challenges:
1. Which license plate has the greatest product if you multiply its four digits together? (The license plate format in Virginia is LLL-DDDD, where L is a letter and D is a digit.)
2. How many different license plates are possible with the format LLL-DDDD?
3. Each of the three rows in our parking lot has a different number of cars. If our parking lot had a fourth row, how many cars would there be in the fourth row?
For Question 1, Eli realized that the license plate with {9, 7, 6, 5} would have a greater product than the license plate with {9, 7, 6, 3}, since 5 > 3. But then he realized that {9, 9, 8, 2} would
be even greater, and he correctly determined that the product is 1,296.
For Question 2, Alex thought it would be 144. His argument was that there would be 6 ways to arrange the letters and 24 ways to arrange the digits, and 6 × 24 = 144. We talked about this, and I
pointed out that his answer would be correct if we knew which three letters and which three digits we were using (and they were all different). He and Eli reconvened and eventually claimed there
would be 26^3 x 10^4 possible license plates… and being the good father that I am, I let them use the calculator on my phone to find the product.
For Question 3, the number of cars in the three rows was 2, 5, and 8. They extended the pattern and concluded that there would be 11 cars in the non-existent fourth row.
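For fun, all three answers can be checked in a few lines; the digit set {9, 9, 8, 2}, the LLL-DDDD plate format, and the row counts 2, 5, 8 are taken from the puzzles above:

```python
import math

# Question 1: product of the digits on the {9, 9, 8, 2} plate
product_of_digits = math.prod([9, 9, 8, 2])

# Question 2: 26 choices for each of 3 letters, 10 for each of 4 digits
total_plates = 26**3 * 10**4

# Question 3: 2, 5, 8 is an arithmetic sequence with common difference 3
rows = [2, 5, 8]
next_row = rows[-1] + (rows[1] - rows[0])
```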
So you can understand why I’d be frustrated that Alex’s homework involved writing the number 11 repeatedly. I thought about telling him not to do it, but then I imagined the following conversation:
Alex: Would you punish me for something I didn’t do?
Teacher: Of course not, Alex.
Alex: Good, because I didn’t do my homework.
Or perhaps he’d just fabricate an excuse:
I thought my homework was abelian, so I figured I could turn it in and then do it.
And finally — should abelian be capitalized?
While playing Scrabble® on my phone today, I had a rack with the following letters: A A B E I L N.
Near the top of the board was TAVERNA, and it was possible to hook above the first six letters or below the first two letters. There were other spaces on the board to place words, but this was
clearly the most fertile. The full board looked like this:
On my rack, the letters weren’t in alphabetical order (as above), so I missed a seven-letter word that would have garnered 78 points. Instead, I played ABLE for a paltry 13 points.
After my turn, the Teacher feature showed me the word I should have played:
Kickin’ myself. I’ll get over not seeing BANAL, LANAI, or even LEV. But how does a math guy miss ABELIAN? I would not put up a fight if someone wanted to rescind my Math Dorkdom membership card.
What loves letters and commutes?
An abelian Scrabble player.
(That’s a joke. Please don’t play Scrabble while driving.) | {"url":"https://mathjokes4mathyfolks.wordpress.com/tag/abelian/","timestamp":"2014-04-16T09:25:27Z","content_type":null,"content_length":"37405","record_id":"<urn:uuid:6681230e-153b-4d13-8881-71126e5b42a9>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00373-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Equation that Couldn't Be Solved: How Mathematical Genius Discovered the Language of Symmetry.
THE EQUATION THAT COULDN'T BE SOLVED: HOW MATHEMATICAL GENIUS DISCOVERED THE LANGUAGE OF SYMMETRY. MARIO LIVIO. Mario Livio (born 1945) is an astrophysicist and an author of works that popularize science and mathematics. He is currently Senior Astrophysicist at the Hubble Space Telescope Science Institute.
Symmetry is easily recognized in art, music, and biology, writes Livio, an astrophysicist. Mathematically, however, symmetry is complex. Many mathematicians have spent their lives attempting to unlock the secrets of symmetry. This book opens with a review of some of these efforts, including the development of algebra and the discovery of the quintic equation (a polynomial equation in which the greatest exponent on the variable is five), which resisted solution for centuries. The equation ultimately yielded to group theory, which Livio calls the "language of symmetry." Group theory was developed by two 19th-century mathematicians, Niels Henrik Abel and Evariste Galois, both of whom managed their achievements during tragically short lives: Abel died of tuberculosis at 26, and Galois was killed in a duel at age 20. Livio devotes special attention to Galois, whose proof would create a new branch of algebra. The author also delves deep into groups and permutations, and describes how symmetry applies to fields as diverse as physics and psychology. Simon & Schuster, 2005, 268 p., b&w illus. and photos, hardcover, $26.95.
Help with maze in C++
I have this for homework for tomorrow and I can't do it.
Please, if there is anyone who can help.
Find a way in a maze
Write a program that finds a way between two points in a certain maze.
The maze is a rectangular matrix which consists of cells. Each cell is one of the following 4 types and is represented by one number.
1. Void (blank space) – This cell is empty and you can go through it. Represented by 0 (zero).
2. Wall – in this cell there is a wall and you can’t go through it. Represented by 100.
3. Key – this cell contains a key with a given number. You can go through it, and when you do, a "door" with the number corresponding to the key opens. Represented by the numbers from 1 to 20: key number 1, 2, …, 20.
4. Door – this cell is a wall. You can walk through it only if you have walked on the key corresponding to this door. Represented by the numbers from 101 to 120, respectively for door number 101, 102, 103, …, 120.
In the maze there can not be more than 20 doors and 20 keys.
Information for the maze can be written in text file according to this format:
On the first line there are 2 positive integer numbers N(3 < N < 10)- the width of the maze and M(3 < M < 10)- length of the maze. Followed by 4 numbers
SX (0 < SX < N) , SY (0 < SY < M) , TX (0 < TX < N) , TY (0 < TY < M),
which are the start position (SX, SY) and the final position (TX, TY) in the maze. These are the two positions between which you have to find a way.
Then there are M lines with N numbers on each line. They describe the contents of each cell of the maze as described above: 1 (void), 2 (wall), 3 (key), 4 (door).
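For example, a small made-up in.txt in this format (a maze with N = 5, M = 4, start (1, 1), target (3, 2), containing key 1 and the door 101 that it opens) might look like:

```text
5 4
1 1 3 2
0 0 100 0 0
0 1 101 0 0
100 100 100 0 0
0 0 0 0 0
```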
And thank you for your help.
I almost have a solution, but there are some places where it only says what to do to finish the program, and I can't do it.
bool LabPath( vector<vector<int>> TempMaze, vector<pair<int,int>> TempPath, int xpos, int ypos )
{
    if ( xpos == xend && ypos == yend )
    {
        // add the final cell to the path and save it as the solution
        TempPath.push_back( { xpos, ypos } );
        solution = TempPath;
        return true;
    }
    if ( TempMaze[xpos][ypos] >= 100 && TempMaze[xpos][ypos] <= 120 )
        return false;               // a wall, or a door that is still closed
    if ( TempMaze[xpos][ypos] >= 1 && TempMaze[xpos][ypos] <= 20 )
    {
        // key: find every cell with value = key + 100 and set it to 0
        // (the door is open)
        for ( size_t i = 0; i < TempMaze.size(); i++ )
            for ( size_t j = 0; j < TempMaze[i].size(); j++ )
                if ( TempMaze[i][j] == TempMaze[xpos][ypos] + 100 )
                    TempMaze[i][j] = 0;
    }
    // the visited cells in the maze are marked with 40
    if ( TempMaze[xpos][ypos] == 40 )
        return false;
    TempMaze[xpos][ypos] = 40;
    TempPath.push_back( { xpos, ypos } );
    // passing TempMaze and TempPath by value gives each branch its own copy
    return LabPath( TempMaze, TempPath, xpos + 1, ypos )
        || LabPath( TempMaze, TempPath, xpos - 1, ypos )
        || LabPath( TempMaze, TempPath, xpos, ypos + 1 )
        || LabPath( TempMaze, TempPath, xpos, ypos - 1 );
}
#include <fstream>
#include <vector>
#include <utility>
using namespace std;

// global variables
vector<vector<int>> lab, lab1;
vector<pair<int,int>> path1, solution;
int xbegin, xend, ybegin, yend;

int main( int argc, char* argv[] )
{
    ifstream in( "in.txt" );   // open the file in.txt for reading
    int N, M;
    in >> N >> M >> xbegin >> ybegin >> xend >> yend;
    lab.assign( M, vector<int>( N ) );   // the data for the maze
    for ( int i = 0; i < M; i++ )
        for ( int j = 0; j < N; j++ )
            in >> lab[i][j];
    // create lab1[M+2][N+2]: copy lab into positions (1,1)..(M,N) and mark
    // the border cells as walls ( = 100 ), so the search cannot step outside
    lab1.assign( M + 2, vector<int>( N + 2, 100 ) );
    for ( int i = 0; i < M; i++ )
        for ( int j = 0; j < N; j++ )
            lab1[i + 1][j + 1] = lab[i][j];
    LabPath( lab1, path1, xbegin, ybegin );
    if ( !solution.empty() )
    {
        ofstream out( "out.txt" );
        // the way found is for the bigger lab1 array, so take out 1 from the
        // coordinates of the solution on both the X and Y axes
        for ( auto& cell : solution )
            out << cell.first - 1 << " " << cell.second - 1 << "\n";
    }
    return 0;
}
Romanesco Broccoli
From Math Images
Romanesco Broccoli
This is the Romanesco Broccoli, a natural vegetable that grows in accordance with the Fibonacci Sequence, is a fractal, and is three-dimensional.
Basic Description
Although the broccoli looks like it grows in accordance with the Fibonacci sequence, does it really? By taking ratios between the distances of the vertices of each iteration of the fractal, taking ratios between the numbers in the Fibonacci sequence, and then plotting both, the growth of the broccoli in accordance with the sequence was demonstrated.
A More Mathematical Explanation
Proof that the Romanesco Broccoli is a natural example of the Fibonacci sequence:
It looks as if the Romanesco Broccoli is a natural example of the Fibonacci sequence. However, appearances can deceive; a mathematical and scientific proof does not. The main image of the broccoli was taken and points were placed on the vertices of each iteration of the fractal. Then the points were connected to form line segments.
The line segments were measured from largest to smallest. Ratios were made between the largest line segment and the next largest until the last line segment was reached. The image used for this was a side view of the broccoli.
The process was repeated with an image of the broccoli from above.
For the first few numbers of the Fibonacci Sequence, ratios were created from the largest to the smallest numbers. This was to ensure a point of comparison to the line segments on the broccoli. Then
a chart was created with all these ratios.
The chart was used to create a line graph with the ratios. Notably, the line made by the ratios of the Fibonacci sequence was very similar to the line made by the third set of broccoli ratios. Both followed the same upward growth but had different intercepts, which does not change the slopes of the lines. The Fibonacci sequence's growth ratio is bounded above at 2, while the broccoli's was bounded at 1.
For a clearer comparison between the broccoli and the Fibonacci sequence, only the Fibonacci line and the line from the third set of broccoli ratios were kept. A trend line was created for both
with their equations displayed. The y intercepts for the equations were different, however the slopes were nearly the same. There was only a 0.0116 difference between them.
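The Fibonacci ratios used in the comparison are easy to reproduce; the sketch below (not the original measurement data) shows that ratios of consecutive Fibonacci numbers start at 2 and settle toward roughly 1.618, consistent with the bound of 2 mentioned above:

```python
# Ratios of consecutive Fibonacci numbers: the largest ratio is
# 2 (= 2/1), and the ratios converge toward the golden ratio
# (about 1.618), so the sequence's growth ratio is bounded by 2.
def fibonacci(n):
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

fib = fibonacci(12)
ratios = [b / a for a, b in zip(fib, fib[1:])]
```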
The following is the graph and proof the Romanesco Broccoli grows in accordance to the Fibonacci Sequence:
Why It's Interesting
This is interesting because it is proof that the Fibonacci sequence occurs naturally, and the numbers are something derived from nature. Nature has math in it.
Critical Section - solving bongard problems
solving bongard problems Monday, 11/22/04 11:52 PM
I found a great site from Harry Foundalis about his research on Bongard problems. What's a Bongard problem? Well, here's one:
These problems were devised by the Russian scientist M.M. Bongard in 1967, as a test for automated pattern recognition systems. Each of the 100 problems consists of two groups of six patterns. The
boxes on the left each conform to some rule, while the boxes on the right are counter-examples to the rule. The problem for the automated pattern recognizer is to determine the rule for each problem.
Can you find the rule for the problem above? Click here for the answer.
Okay so that one was pretty easy - for a human - what about this one?
Do you see the rule? I've worked with these quite a bit so I see it right off, but it might not be obvious. Click here for the answer.
Okay, now for a pretty hard one. What's the defining rule for this one:
Pretty tough, eh? Just when you think you have it, you find one of the patterns on the left doesn't match, or one of the patterns on the right does. Anyway click here for the answer.
And finally, some of these are maniacal; consider this one:
It would be pretty tough for an automated pattern recognizer to figure this one out! If you give up, click here for the answer.
Harry Foundalis actually developed software to parse and analyze these figures. It is a tough problem; first you have to get from pixels to lines, shapes, etc.; just the representation is tough. Then
figuring out the set of all possible rules is really hard - the set is almost infinite - and winnowing down the list to the rules that match on the left and don't on the right is pretty tough. To
date his program can solve about 20 of the hundred, including the top two above. Pretty impressive.
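A toy sketch of the rule-search idea (this is not Foundalis's actual system; the boxes, features, and predicates below are entirely hypothetical): once each box is reduced to a feature representation, a solver can test candidate predicates and keep those that are true of every left box and false of every right box.

```python
# Each box is a feature dict; candidate rules are named predicates.
# A rule solves the problem if it holds for every left box and for
# no right box. Features and rules here are purely illustrative.
left = [{"sides": 3, "convex": True},
        {"sides": 5, "convex": True}]
right = [{"sides": 4, "convex": False},
         {"sides": 6, "convex": False}]

candidates = {
    "is_triangle": lambda b: b["sides"] == 3,
    "is_convex":   lambda b: b["convex"],
    "even_sides":  lambda b: b["sides"] % 2 == 0,
}

def solve(left, right, candidates):
    return [name for name, pred in candidates.items()
            if all(pred(b) for b in left)
            and not any(pred(b) for b in right)]
```

The hard part, of course, is not this search but generating good representations and a rich enough space of candidate rules in the first place.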
The rule is "isosceles triangle". Click to return to problem.
The rule is "convex". Click to return to problem.
The rule is "dots on same side of neck". Click to return to problem.
The rule is "ends are parallel". Click to return to problem.
Positive definite Hermitian matrices of countable rank
Say that an $\omega\times \omega$ Hermitian matrix $A$ is positive semidefinite of rank $n$ if there exists an $\omega\times n$ complex matrix $B$ such that $A=B B^\dagger$, where $^\dagger$ denotes the conjugate transpose.
Let $f$ be a real-analytic function that converges in a neighbourhood of the origin in ${\mathbb{C}}$. Develop $f=\sum_{i,j=0}^\infty c_{ij} z^i\bar{z}^j$ as a power series in $z$ and $\bar{z}$.
Suppose that $f$ is real-valued so that $(c_{ij})$ is a $\omega\times \omega$ Hermitian matrix.
Suppose one shows that for any $(a_k)\in l^2({\mathbb{C}})$, the sum $\sum_{i,j=0}^\infty c_{ij} a_i\bar{a_j}$ is nonnegative. Does this imply that $(c_{ij})$ is positive semidefinite of some rank $n
\le \omega$?
This characterization of positive semidefiniteness is valid for finite-rank Hermitian matrices. But I'm unsure about the convergence conditions in the infinite-rank case.
linear-algebra fa.functional-analysis
I'm puzzled as to why you would expect this ... no, let $(c_{ij})$ be the identity matrix ($c_{ii} = 1$, $c_{ij} = 0$ for $i \neq j$). That is positive semidefinite but it has
infinite rank.
There's a well-developed theory of positive semidefiniteness for operators on $l^2$. If the operator is bounded, then $\langle Av,v\rangle \geq 0$ for all $v \in l^2$ does imply
that $A = B^*B$ for some bounded operator $B$.
Edit: I just noticed that you asked for "rank $n \leq \omega$". If you weren't conjecturing finite rank, the question makes more sense. But if $n$ could equal $\omega$, then
without boundedness assumptions expressions like $B^*B$ don't make sense.
So I think the answer I wrote above that ignores the red herring of "rank" may be what you want. If you're really interested in unbounded operators we can talk about them too.
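As a concrete finite-dimensional illustration of the counterexample (a numpy sketch, not part of the original thread): every $n\times n$ truncation of the identity is positive semidefinite, yet has full rank $n$, so the rank grows without bound.

```python
import numpy as np

# Finite truncations of the infinite identity matrix: each n x n
# truncation is positive semidefinite (all eigenvalues >= 0), yet
# has full rank n, so no finite rank bound exists for the operator.
ranks = []
for n in (2, 5, 10):
    A = np.eye(n)
    assert np.all(np.linalg.eigvalsh(A) >= 0)   # PSD check
    ranks.append(int(np.linalg.matrix_rank(A)))
```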
I think I should be using bounded operators instead. Thanks for pointing that out. – Colin Tan Oct 9 '12 at 5:08
This should be in Reed and Simon, Functional Analysis vol. 1 or in Conway, A Course in Functional Analysis. – Nik Weaver Oct 9 '12 at 5:44
The prime divisor
MathOverflow is a question and answer site for professional mathematicians.
I have another question, on the prime divisors of $n=(p^{2}-1)/2$, where $p$ is prime; please tell me your idea. Let $p\neq 3$ be a Mersenne prime. Is it true that $n$ has a prime divisor $r$ such that $r^{2}$ does not divide $n$?

nt.number-theory
Well, maybe I'm missing the point, but the first Mersenne prime is $3=2^2-1$, and $\frac{3^2-1}{2} = 4 = 2^2$, so... no, I guess.
Considering there are only 47 known Mersenne primes, and finding the factors of $2^p-1$ is a difficult task, I'm not sure this question is fully tractable. But we can certainly show that
all of the known Mersenne primes satisfy your question. First, take $m=2^p-1$ to be a Mersenne number, and rewrite $n= \frac{m^2 - 1}{2} = 2^{2p-1} - 2^p$. This is powerful when $p=2$,
which gives Philip van Reeuwijk's counterexample. For larger $p$, we know $4$ will always divide $n$, so we can ignore powers of $2$ and just consider the odd part $n'= 2^{p-1} - 1$.
We know $p$ is prime, and odd since it is not $2$, so we are looking at $n' = 4^k - 1$, with $k=\frac{p-1}{2}$. Clearly $3$ divides $n'$, and $9$ divides iff $3$ divides $k$. So we require
that $p \equiv 1 \mod6$. This is not enough; plenty of known Mersenne primes have this property.
Since $3$ divides $k$, we can write $n'=64^{k'}-1$. Then $7$ divides $n'$, and $7^2$ divides $n'$ iff $7$ divides $k'$. So now we require that $p \equiv 1 \mod42$. Sadly, again this is not enough.
One step further, we see that when $7$ divides $k'$, $43$ divides $64^{k'}-1$, and $43^2$ divides iff $43$ divides $k'$. Now we're happy (for the time being), because no known Mersenne
prime has $p\equiv 1 \mod 1806 = 43*7*6$. But it seems there's no reason they can't have this property, so you may have to continue your search once such a Mersenne prime is found. Expect
one by the 504th instance: $504 = \phi(1806)$.
Illusions in Regression Analysis by Scott Armstrong
Soyer and Hogarth’s article, “The Illusion of Predictability,” shows that diagnostic statistics that are commonly provided with regression analysis lead to confusion, reduced accuracy, and
overconfidence. Even highly competent researchers are subject to these problems. This overview examines the Soyer-Hogarth findings in light of prior research on illusions associated with regression
analysis. It also summarizes solutions that have been proposed over the past century. These solutions would enhance the value of regression analysis.
Keywords: a priori analysis, decision-making, ex ante testing, forecasting, non-experimental data, statistical significance, uncertainty
The "Illusion of Predictability: How Regression Statistics Mislead Experts," by Emre Soyer and Robin Hogarth, is dedicated to the memory of Arnold Zellner (1927-2010).[Footnote] I am sure that Arnold
would have agreed with me that their paper is a fitting tribute.
Given the widespread use of regression analysis, the implications of the article are important for the life and social sciences. Employing a simple experiment, Soyer and Hogarth (2011, hereafter “S&
H”) show that some of the world’s leading experts in econometrics can be misled by standard statistics provided with regression analyses: t, p, F, R-squared and the like.
S&H follows a rich history of research on the illusions of predictability associated with the use of regression analysis on non-experimental data. A look at the history of regression analysis suggests why illusions of predictability occur and why they have increased over time – to the detriment, as S&H show, of scientific analysis and forecastability.[Footnote]
Historical view of illusions in regression analysis
Regression analysis entered the social sciences in the 1870s with the pioneering work by Francis Galton. But “least squares” goes back at least to the early 1800s and the German mathematician Karl
Gauss, who used the technique to predict astronomical phenomena.
For most of its history, regression analysis was a complex, cumbersome, and expensive undertaking. Consider Milton Friedman’s experience more than forty years prior to user-friendly software and the
personal computer revolution. Around 1944, as part of the war effort, Friedman was asked to analyze data on alloys used in turbine engine blades. He used regression analysis to develop a model that
predicted time to failure as a function of stress, temperature, and some metallurgical variables representing the alloy’s composition. Obtaining estimates for Friedman’s equation by hand and
calculating test statistics would have taken a skilled analyst about three months labor. Fortunately, a large computer, built from many IBM card-sorters and housed in Harvard’s air-conditioned
gymnasium, could do the calculations. Ignoring time required for data input, the computer needed 40 hours to calculate the regression estimates and test statistics. Today, a regression of the size
and complexity of Friedman’s could be executed in about one second.
Friedman was delighted with the results; the model had a high R2 and the variables were “statistically significant” at conventional levels. As a result, Friedman recommended two new improved alloys,
which his model predicted would survive several hundred hours at high temperatures. Tests of the new alloys were carried out by engineers in an MIT laboratory. The result? The first alloy broke in
about two hours and the second one in about three. Friedman concluded one should focus on tests of outputs (forecasts) rather than statistically significant inputs. He also opined that “the more
complex the regression, the more skeptical I am” (Friedman and Schwartz 1991).
Given that doing regressions was expensive, it was sensible to rely heavily on a priori analyses. Decisions about the proper model (e.g., which variables are important) should not be based on opinions or untested ideas, even when they are offered by famous people. (The names Keynes and Samuelson spring to mind.) Instead, the evidence should come from meta-analyses. Meta-analyses produce more accurate and less biased summaries than those provided by traditional reviews, as shown by Cumming (2012).
When possible, meta-analyses should be used in making decisions as to what variables to include; specifying the expected direction of the relationships; and specifying the nature of the functional
form, ranges of magnitudes of relationships, and size of expected magnitudes of those relationships. It is also important to determine relationships that can be measured outside the model based
either on common knowledge (for example, adjusting for inflation or transforming the data to a per capita basis) or on analyses of other data.
Analysts should use simple pre-specified rules to combine a priori estimates with estimates obtained from regression analysis (for example, one might weight each estimate of a relationship equally,
then re-run the regression to estimate the coefficients for the other variables). This approach was recommended by Wold and Jureen (1953), where it was called “conditional regression.” I think of it
as a ‘poor man’s Bayesian analysis.” I prefer it to formal Bayesian forecasting methods because of its clarity about the nature of each causal relationship and the related evidence (and because I
have a strong need for sleep whenever I try to read a paper on Bayesian forecasting.) In this paper, I refer to it as an a priori analysis.
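A sketch of this "conditional regression" idea under simplifying assumptions (one focal variable, equal weighting of the prior and the regression estimate; all numbers below are hypothetical):

```python
import numpy as np

# Hypothetical data: y depends on x1 (true slope 2) and x2 (slope 1).
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
y = 2.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=200)

# Step 1: ordinary least squares estimate of all coefficients.
X = np.column_stack([x1, x2, np.ones_like(x1)])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 2: combine the regression estimate of the x1 slope with a
# hypothetical a priori estimate (1.5), weighting them equally.
prior_b1 = 1.5
b1 = (beta_hat[0] + prior_b1) / 2

# Step 3: fix that coefficient and re-run the regression to
# estimate the remaining coefficients.
resid = y - b1 * x1
X2 = np.column_stack([x2, np.ones_like(x2)])
b2, intercept = np.linalg.lstsq(X2, resid, rcond=None)[0]
```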
In the mid-1960s, I was working on my PhD thesis at MIT. While the cost of regression analysis had plunged, it still involved punch cards and overnight runs. But the most time-consuming part of my
thesis was the a priori analysis. Before doing any regression analyses, I gave John Little, my thesis advisor, a priori estimates of the coefficients for all variables in a demand-forecasting model.
As it turned out, these purely a priori models provided relatively accurate forecasts on their own. I then used regression analyses of time-series, longitudinal, and household data to estimate
parameters. These were used to revise the a priori estimates. This procedure provided forecasts that were substantially more accurate than those from extrapolation methods and from stepwise
regression on the complete set of causal variables that were considered (Armstrong 1968a,b).
Despite warnings over the past half-century or more (Zellner, 2001, traces this back to Sir Harold Jeffreys in the mid-1900s), a priori analysis seems to be giving way among academic researchers to
the belief that with enormous databases they can use complex methods and analytical measures such as R2 and t-statistics to create models. They even try various transformations or different lags of
variables to see which best fit the historical data. Einhorn (1972) concluded, “Just as the alchemists were not successful in turning base metal into gold, the modern researcher cannot rely on the
‘computer’ to turn his data into meaningful and valuable scientific information.” Ord (2012) provides a simple demonstration of how standard regression procedures, applied without a priori analyses,
can lead one astray.
Forecast accuracy and confidence
We have ample evidence that regression analysis often provides useful forecasts (Armstrong 1985; Allen and Fildes 2001). Regression-based prediction is most effective when dealing with a small number
of variables, large amounts of reliable and valid data, where changes are expected to be large and predictable, and when using well-established causal relationships – such as the elasticities for
income, price, and advertising when forecasting demand. However, there are illusions that reduce the forecast accuracy and lead to overconfidence in regression analysis. I discuss five of them here:
Complexity illusion: It seems common sense that complex solutions are needed for complex and uncertain problems. Research findings suggest the opposite. For example, Christ (1960) found that
simultaneous equations provided forecasts that were more accurate than those from simpler regression models when tested on artificial data, but not when tested out of sample using real data. My
summary of the empirical evidence concluded that increased complexity of regression models typically reduced forecast accuracy (Armstrong 1985, pp. 225-232). Zellner (2001) reached the same
conclusion in his review of the research. He also found that many users have become disillusioned with complicated models. For example he reported “the Federal Reserve Bank of Minneapolis decided to
scrap its complicated vector autoregressive (VAR, i.e., Very Awful Regression) models after their poor performance in forecasting turning points, etc.”
Evidence favoring simplicity has continued to appear over the past quarter century. Why then is there such a strong interest in complex regression analyses? Perhaps this is due to academics’
preference for complex solutions, as Hogarth (2012) describes.
Somewhere I encountered the idea that statistics was supposed to aid communication. Complex regression methods and a flock of diagnostic statistics have taken us in the other direction.
The solution is that, when specifying a model, rely upon a priori analysis. Follow Zellner's (2001) advice and use Occam’s Razor.[Footnote] In other words, keep it simple. Start with a very simple
model, such as a no-change model, and then add complexity only if there is experimental evidence to support the complication. And do not try to estimate relationships for more than three variables in
a regression (findings from Goldstein and Gigerenzer, 2009, are consistent with this rule-of-thumb).
Illusion that regression models are sufficient: Forecasts are often derived only from what is thought to be the best model. This belief has a long history in forecasting.
For solutions, I call your attention to two of the most important findings in forecasting. First is that the naïve or no-change model is often quite accurate. It is to forecasting what the placebo is
to medicine. This approach is especially difficult to beat in situations involving complexity and uncertainty. Here, it often helps to shrink each coefficient toward having no effect (but remember to
re-run the regression to calibrate the constant term).
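The shrinkage-and-recalibration step can be sketched as follows (a hypothetical example; the shrinkage factor of one half is arbitrary):

```python
import numpy as np

# Hypothetical series: y depends on x with slope 3.
rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(size=100)

# OLS estimates.
slope, intercept = np.polyfit(x, y, 1)

# Shrink the slope halfway toward "no effect" (zero), then
# recalibrate the constant so the model is unbiased on average.
shrunk_slope = 0.5 * slope
new_intercept = float(np.mean(y - shrunk_slope * x))
```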
Second is the benefit of combining forecasts. That is, find two or more valid forecasting methods and then calculate averages of their forecasts. For example, make forecasts by using different
regression models, and then combine the forecasts. This is especially effective when the methods, models, and data differ substantially. Combining forecasts has reduced errors from about 10% to 58%
(depending on the conditions) compared to the average errors of the uncombined individual forecasts (Graefe, et al 2011).
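Combining is as simple as it sounds; a minimal sketch with hypothetical numbers:

```python
# Unweighted average of forecasts from three different methods
# (all numbers hypothetical).
forecasts = {
    "regression":    105.0,
    "extrapolation":  98.0,
    "naive":         100.0,
}
combined = sum(forecasts.values()) / len(forecasts)
```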
Illusion that regression provides the best linear unbiased estimators: Statisticians have devoted much time to showing that regression leads to the best estimates of relationships. However, studies
have shown that regression estimates produce ex ante forecasts that are often less accurate than forecasts from “unit weights” models. Schmidt (1971) was one of the first to test this idea and he
found that unit weights were superior to regression weights when the regressions were based on many variables and small sample sizes. Einhorn and Hogarth (1975) and Dana and Dawes (2004) show the
conditions under which regression is and is not effective relative to equal weights.
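A sketch of a unit-weights model (the data and the a priori directions below are hypothetical): standardize each predictor, apply a weight of +1 or -1 according to the expected direction of effect, and sum.

```python
import numpy as np

# Unit-weights model: standardize each predictor, apply +1 or -1
# according to the a priori direction of effect, and sum. Nothing
# is estimated from the data, so nothing can be over-fit.
def unit_weights_score(X, directions):
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    return Z @ np.asarray(directions, dtype=float)

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))          # hypothetical predictors
score = unit_weights_score(X, [+1, +1, -1])
```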
One good characteristic of regression estimates is that they become more conservative as uncertainty increases. Unfortunately, some aspects of uncertainty are ignored. For example, the coefficients
can “get credit” for important excluded variables that happen to be correlated with the predictor variables. Adding variables to the regression cannot solve this problem.
In addition, analysts searching for the best fit, and publication practices favoring statistically significant results often defeat conservatism. Thus, regressions typically over-estimate change.
Ioannidis (2005, 2008) provides more reasons why regressions over-estimate changes.
One solution is to combine forecasts, and among the alternative models, include the naïve model. In particular, for problems involving time-series, damp the forecasts more heavily toward the naïve
model forecasts as the forecast horizon increases. This is done to reflect effects of the increasing amount of uncertainty in the more distant future.
Illusion of control: Users of regression assume that by putting variables into the equation they are somehow controlling for these variables. This only occurs for experimental data. Adding variables
does not mean controlling for variables in non-experimental data because many variables typically co-vary with other predictor variables. The problem becomes worse as variables are added to the
regression. Large sample sizes cannot resolve this problem, so statistics on the number of degrees of freedom are misleading.
One solution is to use evidence from experimental studies to estimate effects and then adjust the dependent variable for these effects.
“Fit implies accuracy” illusion: Analysts assume that models with a better fit provide more accurate forecasts. This ignores the research showing that fit bears little relationship to ex ante
forecast accuracy, especially for time series. Typically, fit improves as complexity increases, while ex ante forecast accuracy decreases – a conclusion that Zellner (2001) traced back to Sir Harold
Jeffreys in the 1930s. In addition, analysts use statistics to improve the fit of the model to the data. In one of my Tom Swift studies, Tom used standard procedures when starting with 31
observations and 30 potential variables. He used stepwise regression and included only variables where t was greater than 2.0. Along the way, he dropped three outliers. The final regression had eight
variables and an R-square (adjusted for degrees of freedom) of 0.85. Not bad, considering that the data were from Rand's book of random numbers (Armstrong 1970).
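Tom Swift's exercise is easy to replicate in spirit (a sketch, not the original study): regress 31 random observations on random predictors and watch the fit statistics "succeed".

```python
import numpy as np

# 31 observations, 30 random predictors, random y: chasing fit
# always "succeeds" on noise.
rng = np.random.default_rng(42)
n_obs, n_vars = 31, 30
X = rng.random((n_obs, n_vars))
y = rng.random(n_obs)

def r_squared(X, y):
    A = np.column_stack([X, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Using all 30 predictors (plus intercept) fits the 31 random
# points exactly: R-squared is essentially 1.
full_r2 = r_squared(X, y)

# Screening for the 8 predictors most correlated with y (a
# stepwise-like selection) still yields a deceptively good fit.
corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_vars)]
top8 = np.argsort(corrs)[-8:]
subset_r2 = r_squared(X[:, top8], y)
```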
I traced studies on this illusion back to at least 1956 in an early review of the research on fit and accuracy (Armstrong 1985). Studies have continued to find the fit is not a good way to assess
predictive ability (e.g., Pant and Starbuck 1990).
The obvious solution is to avoid use of t, p, F, R-squared and the like when using regression.
Using regression for decision-making
Regression analysis provides an objective and systematic way to analyze data. As a result, decisions based on regression are less likely to be subject to bias, they are consistent, the basis for the
decisions can be fully explained – and they are generally useful. The gains are especially well documented when compared to judgmental decisions based on the same data (Grove and Meehl 1996;
Armstrong 2001). However, two illusions, statistical significance and correlations, can reduce the value of regression analysis.
Statistical significance illusion: S&H incorporate the illusion due to tests of statistical significance. Meehl (1978) concluded that “reliance on merely refuting the null hypothesis . . . is
basically unsound, poor scientific strategy, and one of the worst things that ever happened in the history of psychology.”
Schmidt (1996) offered the following challenge: “Can you articulate even one legitimate contribution that significance testing has made (or makes) to the research enterprise (i.e., any way in which
it contributes to the development of cumulative scientific knowledge)?” One might also ask if there is a study wherein statistical significance improves decision-making. In contrast, it is easy to
find cases where statistical significance harmed decision-making. Ziliak and McCloskey (2008) document the harm in devastating examples taken from across the sciences. To offer another example, Hauer (2004)
demonstrates harmful decisions related to automobile traffic safety, such as the “Right-turn-on-red decision.” Cumming (2012) describes additional examples of the harm caused by the use of
statistical significance.
The commonly recommended solution is to use confidence intervals and avoid the use of statistical significance. Statisticians argue that statistical significance provides the same information as
confidence intervals. But the issue is how people use the information. Significance levels lead to confusion even among leading researchers. Cumming (2012, pp. 13-14) describes an experiment showing
that when researchers in psychology, behavioral neuroscience, and medicine were presented with a set of results, 40% of 55 participants who used significance levels to guide their interpretation
reached correct conclusions. In stark contrast, 95% of the 57 participants who thought in terms of confidence intervals reached correct conclusions.
Correlation illusion: We all claim to understand that correlation is not causation. A correlation might occur because A causes B, or B causes A, or both are related to C, or it could be spurious. But when presented with sophisticated and complex regressions, people often forget this. Researchers in medicine, economics, psychology, finance, marketing, sociology, and so on, fill journals and newspapers with interesting but erroneous, and even costly, findings.
In one study, we had an opportunity to compare findings from experiments with those from analyses of non-experimental data for 24 causal statements. The directional effects differed for 8 of the 24
comparisons (Armstrong and Patnaik 2009). My conclusion is that analyses of non-experimental data are often misleading.
This illusion has led people to make poor decisions about such things as what to eat (e.g., coffee, once bad, is now good for health), what medical procedures to use (e.g., the frequently recommended
PSA test for prostate cancer has now been shown to be harmful), and what economic policies the government should adopt in recessions (e.g., trusting the government to be more efficient than the market).
According to Zellner (2001), Sir Harold Jeffreys had warned of this illusion, and, in 1961, referred to it as the “most fundamental fallacy of all.”
The solution is to base causality on meta-analyses of experimental studies.
An obvious conclusion from the study by S&H is to de-emphasize descriptive statistics in regression packages. Software developers should provide, as the default option, statistics on the ability of alternative methods to produce accurate forecasts on holdout samples. They could allow users to click a button to access the traditional regression statistics; a warning label should be provided near that button.
S&H, echoing Friedman, emphasize that scientific theories should be tested for their predictive ability relative to other methods. Ord (2012) deplores the fact that few regression packages aid in
such analyses. It would be helpful if software providers would focus on ex ante testing by making it easy to simulate the forecasting situation. For cross-sectional forecasts, use jackknifing—that
is, use all but one data point to estimate the model, then predict for the excluded observation, and repeat until predictions have been made for each observation in the data. For time-series,
withhold data, then use successive updating and report the accuracy for each forecast horizon. These testing procedures are less likely to lead to overconfidence because they include the uncertainty
from errors due to over-fitting and errors in forecasting the predictor variables.
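The jackknife procedure can be sketched as follows (hypothetical data; simple linear regression for concreteness):

```python
import numpy as np

# Jackknife (leave-one-out) ex ante test: fit on all observations
# but one, predict the held-out observation, repeat for each one.
def jackknife_errors(x, y):
    errors = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        errors.append(abs(y[i] - (slope * x[i] + intercept)))
    return np.array(errors)

rng = np.random.default_rng(3)            # hypothetical data
x = rng.normal(size=40)
y = 2.0 * x + rng.normal(scale=0.5, size=40)
errs = jackknife_errors(x, y)             # 40 out-of-sample errors
```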
Software packages should provide statistics that allow for meaningful comparisons among methods (Armstrong and Collopy 1992). The MdRAE (Median Relative Absolute Error) was designed for such
comparisons and many software packages now provide this statistic, so it should be among the default statistics, along with a link to the literature on this topic. Do not provide RMSE (Root Mean
Square Errors) as it is unreliable and uninformative.
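A sketch of the MdRAE calculation (the series below is hypothetical): each forecast's absolute error is divided by the error a no-change benchmark makes for the same period, and the median ratio is reported.

```python
import numpy as np

# Median Relative Absolute Error: each forecast error is divided
# by the no-change benchmark's error for the same period; the
# median of these ratios is reported. Values below 1 beat naive.
def mdrae(actual, forecast, naive_forecast):
    actual = np.asarray(actual, dtype=float)
    rae = np.abs(actual - forecast) / np.abs(actual - naive_forecast)
    return float(np.median(rae))

actual = [100, 104, 103, 108]             # hypothetical series
naive = [98, 100, 104, 103]               # previous-period values
model = [101, 103, 104, 106]              # model forecasts
score = mdrae(actual, model, naive)
```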
Allen and Fildes (2001) note that since 1985 there has been a substantial increase in the attention paid to ex ante forecast comparisons in published papers. This is consistent with the aims stated
in the founding of the Journal of Forecasting in 1982, and subsequently, the International Journal of Forecasting.
Regression analysis is clearly one of the most important tools available to researchers. However, it is not the only game in town. Researchers made scientific discoveries about causality prior to the
availability of regression analysis as shown by Freedman (1991) in his paper aptly titled “Statistical models and shoe leather.” He demonstrates how major gains were made in epidemiology in the
1800s. For example, John Snow’s discovery of the cause of cholera in London in the 1850s came about from “the clarity of the prior reasoning, the bringing together of many different lines of
evidence, and the amount of shoe leather Snow was willing to use to get the data.” These three characteristics of good science, described by Freedman, are missing from most regression analyses that I
see in journals.
We would be wise to recall a method that Ben Franklin used to address the issue of how to make decisions when many variables are involved (Sparks, 1844).[Footnote] He suggested listing the variables related to the choice between two options, identifying which option is better for each variable, weighting the variables, and then adding. Pick the option that has the highest score. Andreas Graefe and I have
built upon Franklin’s advice in developing what we call the index method for forecasting. The method relies only on a priori analysis (preferably experimental findings) to determine which variables
are important and what is the direction of the effect for each variable. Franklin suggested differential weights, but the literature discussed above suggests that unit weights are a good place to
start. Regression analyses can then be used to estimate the effects of an index score. The index model allows analysts to take account of “the knowledge present in a field” as recommended by Zellner
(2001). The few tests to date suggest that the index method provides useful forecasts when there are many important variables and substantial prior knowledge (e.g., see Armstrong and Graefe 2011).
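A minimal sketch of a unit-weight index score in Python. The variable names are invented for illustration; in practice each variable and its direction of effect must come from prior, preferably experimental, evidence:

```python
# Unit-weight index: each variable contributes +1 or -1 according to
# its a priori direction of effect, whenever its condition is present.

def index_score(option, directions):
    # directions: variable -> +1 or -1; option: variable -> bool
    return sum(d for var, d in directions.items() if option.get(var))

directions = {"low_price": +1, "known_brand": +1, "long_commute": -1}
option_a = {"low_price": True, "known_brand": False, "long_commute": True}
option_b = {"low_price": True, "known_brand": True, "long_commute": False}
# Pick the option with the highest score (here, option_b).
```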
It can be expensive to do a priori analyses for complex situations. For example, I am currently involved in a project to forecast the effectiveness of advertisements by using the index method. I have
spent many hours over a 16-year period summarizing the knowledge. This led to 195 principles (causal condition/action statements), each based on meta-analysis when there was more than one source of
evidence. The vast majority of them were sufficiently complex such that neither prior experience nor regression analyses were able to discover them. They were formulated thanks to a century of
experimental and non-experimental studies. None of these evidence-based principles were found in the advertising textbooks and handbooks that I analyzed. Many are counter-intuitive and are often
violated by advertisers (Armstrong 2011). Early findings suggest that the index model provides useful forecasts in this situation. In contrast, regression analyses have met repeated failures in this
area because there may be over 50 principles that were used - or misused - in an ad.
As S&H suggest, further experimentation is needed. We need experiments to assess the ability of alternative techniques to improve accuracy when tested on large samples of forecasts on holdout
samples. Journal editors should commission such studies. Allen and Fildes (2001) provide an obvious starting point for research topics as they developed principles for the effective use of regression
based on the existing knowledge. They also describe the evidence on the principles. For example, on the matter of evidence on one of the diagnostic statistics they state (p. 311), “The mountains of
articles on autocorrelation testing contrast with the near absence of studies on the impact of autocorrelation correction on [ex ante] forecast performance.”
Unfortunately, software developers typically fail to incorporate evidence-based findings about which methods work best, and statisticians who work on new forecasting procedures seldom cite the
literature that tests the relative effectiveness of various methods for forecasting (Fildes and Makridakis 1995). To make the latest findings easily available to forecasters, the International
Institute of Forecasters has supported the ForPrin.com website. The goal is to present forecasting principles in a way that can be understood by those who use forecasting techniques.
S&H recommend visual presentation – and Ziliak (2012) adds support. This seems like a promising avenue. Consider, for example, the effectiveness of the communication provided by Anscombe’s Quartet
(see Wikipedia).
Do not use regression to search for causal relationships. And do not try to predict by using variables that were not specified in the a priori analysis. Thus, avoid data mining, stepwise regression,
and related methods.
Regression analysis can play an important role when analyzing non-experimental data. Various illusions can reduce the accuracy of regression analysis, lead to a false sense of confidence, and harm
decision-making. Over the past century or so, effective solutions have been developed to deal with the illusions. The basic problem is that the solutions are often ignored in practice. S&H show that this holds even among the world's leading researchers. Researchers might benefit by systematically checking their use of regression to ensure that they have taken steps to avoid the illusions.
Reviewers could help to make researchers aware of solutions to the illusions. Software providers should inform their users.
To me, S&H’s key recommendation is to conduct experiments to compare different approaches to developing and using models. It is remarkable that so little experimentation has been done over the past
century to determine which regression methods work best under what conditions.
Arnold Zellner would not have been surprised by these conclusions.
Acknowledgements: Peer review was vital for this paper. Four people reviewed two versions of this paper: P. Geoffrey Allen, Robert Fildes, Kesten C. Green, and Stephen T. Ziliak. Excellent
suggestions were also received from Kay A. Armstrong, Jason Dana, Antonio García-Ferrer, Andreas Graefe, Daniel G. Goldstein, Geoff Cumming, Paul Goodwin, Robin Hogarth, Philippe Jacquart, Keith Ord,
and Christophe Van den Bulte. This is not to imply that the reviewers agree with all of the conclusions in this paper.
Allen, P. G. & Fildes, R. (2001). Econometric forecasting. In J. S. Armstrong (Ed.), Principles of Forecasting. Boston: Kluwer Academic Publishers.
Armstrong, J. S. (2011). Evidence-based advertising: An application to persuasion. International Journal of Advertising, 30 (5), 743-767.
Armstrong, J. S. (2001). Judgmental bootstrapping: Inferring experts’ rules for forecasting. In J. S. Armstrong (Ed.), Principles of Forecasting. Boston: Kluwer Academic Publishers.
Armstrong, J. S. (1985). Long-Range Forecasting. New York: John Wiley.
Armstrong, J. S. (1970). How to avoid exploratory research. Journal of Advertising Research, 10, No. 4, 27-30.
Armstrong, J. S. (1968a). Long-range Forecasting for a Consumer Durable in an International Market. PhD Thesis: MIT. (URL to be added)
Armstrong, J. S. (1968b). Long-range forecasting for international markets: The use of causal models. In R. L. King, Marketing and the New Science of Planning. Chicago: American Marketing
Armstrong, J. S. & Collopy, F. (1992). Error measures for generalizing about forecasting methods: Empirical comparisons, International Journal of Forecasting, 8, 69-80.
Armstrong, J. S. & Graefe, A. (2011). Predicting elections from biographical information about candidates: A test of the index method. Journal of Business Research, 64, 699-706.
Armstrong, J. S. & Patnaik, S. (2009). Using Quasi-experimental data to develop principles for persuasive advertising. Journal of Advertising Research, 49, No. 2, 170-175.
Christ, C. F. (1960). Simultaneous equation estimation: Any verdict yet? Econometrica, 28, 835-845.
Cumming, G. (2012). Understanding the New Statistics: Effect sizes. Confidence Intervals and Meta-Analysis. New York: Routledge.
Dana, J. & Dawes, R. M. (2004). The superiority of simple alternatives to regression for social science predictions. Journal of Educational and Behavioral Statistics, 29 (3), 317-331.
Einhorn, H. J. (1972). Alchemy in the behavioral sciences. Public Opinion Quarterly, 36, 367-378.
Einhorn, H. J. & R. Hogarth (1975). Unit weighting schemes for decision making. Organizational Behavior and Human Performance, 13, 171-192.
Fildes, R. & Makridakis, S. (1995). The impact of empirical accuracy papers on time series analysis and forecasting. International Statistical Review, 63, 289-308.
Freedman, D. A. (1991). Statistical models and shoe leather. Sociological Methodology, 21, 291-313.
Friedman, M.A. & Schwartz, A. J. (1991). Alternative approaches to analyzing economic data. American Economic Review, 81, Appendix 48-49.
Goldstein, D. G. & Gigerenzer, G. (2009). Fast and frugal forecasting. International Journal of Forecasting, 25, 760-772.
Graefe, A., Armstrong, J.S., Cuzán, A. G. & Jones, R.J., Jr. (2011). Combining forecasts: An application to election forecasts, Working Paper.
Grove, W.M. & Meehl, P.E. (1996). Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: the clinical – statistical controversy.
Psychology, Public Policy, and Law, 2, 293-323.
Hauer, E. (2004). The harm done by tests of significance. Accident Analysis and Prevention, 36, 495-500.
Hogarth, R. M. (2012). When simple is hard to accept. In P. M. Todd & Gigerenzer, G. (Eds.), Ecological rationality: Intelligence in the world (in press). Oxford: Oxford University Press.
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2, 696-701.
Ioannidis, J. P. A. (2008). Why most discovered true associations are inflated. Epidemiology, 19,640-648.
Karni, E. & Shapiro, B. K. (1980). Tales of horror from ivory towers. Journal of Political Economy. 88, No. 1, 210-212.
Kennedy, P. (2002). Sinning in the basement: What are the rules? The ten commandments of applied econometrics. Journal of Economic Surveys, 16, 569-589.
Meehl, P.E. (1978), Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806-834.
Ord, K. (2012). The Illusion of predictability: A call to action. International Journal of Forecasting, 28 xxx-xxx.
Pant, P. N. & Starbuck, W. H. (1990). Innocents in the forest: Forecasting and research methods. Journal of Management, 16, 433-446
Schmidt, F. L. (1971). The relative efficiency of regression and simple unit predictor weights in applied differential psychology. Educational and Psychological Measurement, 31, 699-714.
Schmidt, F. L. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for training of researchers. Psychological Methods, 1, 115-129.
Soyer, E. & Hogarth, R. (2012). Illusion of predictability: How regressions statistics mislead experts. International Journal of Forecasting, 28 xxx-xxx.
Sparks, J. (1844). The Works of Benjamin Franklin. Boston: Charles Tappan Publisher.
Wold, H. & Jureen, L. (1953). Demand Analysis. New York: John Wiley.
Zellner, A. (2001). Keep it sophisticatedly simple. In Keuzenkamp, H. & McAleer, M. (Eds.), Simplicity, Inference, and Modelling: Keeping it Sophisticatedly Simple. Cambridge: Cambridge University Press.
Ziliak, S. T. (2012). Visualizing uncertainty: On Soyer’s and Hogarth’s “The illusion of predictability: How regression statistics mislead experts.” International Journal of Forecasting, 28 xxxxx
Ziliak, S. T. & McCloskey, D. N. (2008). The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Ann Arbor: The University of Michigan Press.
November 20, 2011 (R47)
Chaos and the physics of non-equilibrium systems
Seminar Room 1, Newton Institute
After a brief introduction to chaos theory I will summarize some of the methods for relating it to non-equilibrium statistical mechanics. Then I will show how to use kinetic theory methods to
calculate characteristic chaotic properties such as Lyapunov exponents and Kolmogorov-Sinai entropies for dilute interacting particle systems. For the Lorentz gas (a system of light point particles
moving among fixed scatterers) these calculations are especially simple, but they can also be done for systems of moving hard spheres. Finally, I will consider the case of the Brownian motion of one
large sphere in a very dilute gas of small spheres. Under these conditions the largest Lyapunov exponents are due to the Brownian particle. They can be calculated by solving a Fokker-Planck equation.
Composite Solids
What if you built a solid three-dimensional house model consisting of a pyramid on top of a square prism? How could you determine how much two-dimensional and three-dimensional space that model
occupies? After completing this Concept, you'll be able to find the surface area and volume of composite solids like this one.
A composite solid is a solid that is composed, or made up of, two or more solids. The solids that it is made up of are generally prisms, pyramids, cones, cylinders, and spheres. In order to find the
surface area and volume of a composite solid, you need to know how to find the surface area and volume of prisms, pyramids, cones, cylinders, and spheres. For more information on any of those
specific solids, consult the concept that focuses on them. This concept will assume knowledge of those five solids.
Most composite solids problems that you will see will be about volume, so most of the examples and practice problems below are about volume. There is one surface area example as well.
Example A
Find the volume of the solid below.
This solid is a parallelogram-based prism with a cylinder cut out of the middle.
$V_{prism} &= (25 \cdot 25)30=18,750 \ cm^3\\V_{cylinder} &= \pi (4)^2 (30)=480 \pi \ cm^3$
The total volume is $18750 - 480 \pi \approx 17,242.04 \ cm^3$
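The arithmetic in Example A can be checked with a short snippet (Python used purely for illustration):

```python
import math

def prism_minus_cylinder(base_area, height, radius):
    # Prism volume minus the cylinder drilled through its full height.
    return base_area * height - math.pi * radius ** 2 * height

v = prism_minus_cylinder(25 * 25, 30, 4)   # 18750 - 480*pi
```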
Example B
Find the volume of the composite solid. All bases are squares.
This is a square prism with a square pyramid on top. First, we need the height of the pyramid portion. Using the Pythagorean Theorem, we have, $h=\sqrt{25^2-24^2}=7$
$V_{prism} &= (48)(48)(18)=41,472 \ cm^3\\V_{pyramid} &= \frac{1}{3} (48^2)(7)=5376 \ cm^3$
The total volume is $41,472 + 5376 = 46,848 \ cm^3$
Example C
Find the surface area of the following solid.
This solid is a cylinder with a hemisphere on top. It is one solid, so do not include the bottom of the hemisphere or the top of the cylinder.
$SA &=LA_{cylinder}+LA_{hemisphere}+A_{base \ circle}\\&= 2 \pi rh+\frac{1}{2} 4 \pi r^2+\pi r^2\\&= 2 \pi (6)(13)+2 \pi 6^2+\pi 6^2\\&= 156 \pi +72 \pi +36 \pi\\&= 264 \pi \ in^2$
(Here $LA$ stands for lateral area.)
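The same surface-area computation, checked in Python:

```python
import math

def cylinder_with_hemisphere_surface(r, h):
    # Lateral area of the cylinder + curved area of the hemisphere
    # + the bottom circle; the shared internal faces are excluded.
    return 2 * math.pi * r * h + 2 * math.pi * r ** 2 + math.pi * r ** 2

sa = cylinder_with_hemisphere_surface(6, 13)   # 264*pi
```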
Guided Practice
1. Find the volume of the following solid.
2. Find the volume of the base prism. Round your answer to the nearest hundredth.
3. Using your work from #2, find the volume of the pyramid and then of the entire solid.
1. Use what you know about cylinders and spheres. The top of the solid is a hemisphere.
$V_{cylinder} &= \pi 6^2 (13)=468 \pi\\V_{hemisphere} &= \frac{1}{2} \left(\frac{4}{3} \pi 6^3\right)=144 \pi\\V_{total} &= 468 \pi+144 \pi =612 \pi \ in^3$
2. Use what you know about prisms.
$V_{prism}&=B \cdot h \\ V_{prism}&=(4\cdot 4)\cdot 5\\ V_{prism}&=80in^3$
3. Use what you know about pyramids.
$V_{pyramid}&=\frac{1}{3} B \cdot h \\ V_{pyramid}&=\frac{1}{3}(4 \cdot 4)(6)\\ V_{pyramid}&=32in^3$
Now find the total volume by finding the sum of the volumes of each solid.
$V_{total}&=V_{prism}+V_{pyramid}\\ V_{total}&=80in^3+32in^3\\ V_{total}&=112 in^3$
Round your answers to the nearest hundredth. The solid below is a cube with a cone cut out.
1. Find the volume of the cube.
2. Find the volume of the cone.
3. Find the volume of the entire solid.
The solid below is a cylinder with a cone on top.
4. Find the volume of the cylinder.
5. Find the volume of the cone.
6. Find the volume of the entire solid.
9. You may assume the bottom is open.
Find the volume of the following shapes. Round your answers to the nearest hundredth.
13. A sphere has a radius of 5 cm. A right cylinder has the same radius and volume. Find the height of the cylinder.
The bases of the prism are squares and a cylinder is cut out of the center.
14. Find the volume of the prism.
15. Find the volume of the cylinder in the center.
16. Find the volume of the figure.
This is a prism with half a cylinder on the top.
17. Find the volume of the prism.
18. Find the volume of the half-cylinder.
19. Find the volume of the entire figure.
Tennis balls with a 3 inch diameter are sold in cans of three. The can is a cylinder. Round your answers to the nearest hundredth.
20. What is the volume of one tennis ball?
21. What is the volume of the cylinder?
22. Assume the balls touch the can on the sides, top and bottom. What is the volume of the space not occupied by the tennis balls?
Math Forum Discussions
Topic: Re:[ap-calculus] "Rule" for Logarithmic Differentiation??
Replies: 0
Re:[ap-calculus] "Rule" for Logarithmic Differentiation??
Posted: Oct 23, 2012 6:11 PM
This ap-calculus EDG will be closing in the next few weeks. Please sign up for the new AP Calculus
Teacher Community Forum at https://apcommunity.collegeboard.org/getting-started
and post messages there.
Douglas J Kuhlmann wrote:
> Barbara: I think you missed the u' in Kristen's student's
> formula. She posted the same formula that you derived.
> On another note, one can prove this using multi-variable
> calculus techniques, too--not that I am recommending it
> for 1st year calc.
For those interested in what Doug is alluding to, I've
pasted a 2009 post of mine that goes into the details.
AP-Calculus post from 7 December 2009
This observation was recently made by John M. Johnson
in his paper "Derivatives of generalized power functions"
[Mathematics Teacher 102 #7 (March 2009), pp. 554-557],
and I've seen it in print in some other places as well:
Richard Katz and Stewart Venit, "Partial differentiation
of functions of a single variable", Pi Mu Epsilon Journal
7 #6 (Spring 1982), 405-406.
Gerry Myerson, "FFF #47: A natural way to differentiate
an exponential", College Mathematics Journal 22 #5
(November 1991), p. 460.
G. E. Bilodeau, "An exponential rule", College Mathematics
Journal 24 #4 (September 1993), 350-351.
Dane W. Wu, "Miscellany", Pi Mu Epsilon Journal 10 #10
(Spring 1999), 833.
Noah Samuel Brannen and Ben Ford, "Logarithmic differentiation:
Two wrongs make a right", College Mathematics Journal
35 #5 (November 2004), 388-390.
The expanded form of (d/dx)(U^V) can be explained by the
multivariable chain rule. Let y = f(U,V), where U and V
are differentiable functions of x. In this setting the
chain rule takes the form
dy/dx = (del f)/(del U) * (dU/dx)
+ (del f)/(del V) * (dV/dx)
which equals
[V * U^(V-1)] * (dU/dx) + [U^V * ln(U)] * (dV/dx)
when f(U,V) = U^V.
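The expanded rule can be sanity-checked numerically. A quick Python sketch with an arbitrary choice of U and V (assuming U > 0 so that U^V and ln U are defined):

```python
import math

def u(x): return x * x + 1           # U(x), so U'(x) = 2x
def v(x): return math.sin(x)         # V(x), so V'(x) = cos(x)

def rule(x):
    # V*U^(V-1)*U' + U^V*ln(U)*V'
    return (v(x) * u(x) ** (v(x) - 1) * 2 * x
            + u(x) ** v(x) * math.log(u(x)) * math.cos(x))

def central_difference(x, h=1e-6):
    f = lambda t: u(t) ** v(t)
    return (f(x + h) - f(x - h)) / (2 * h)
```

At, say, x = 0.7 the formula and the numerical derivative agree to several decimal places.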
This exponential derivative identity was first published
in 1695 by Leibniz, who also stated at this time that both
he and Johann Bernoulli independently discovered it. See
the following paper (freely available on the internet)
for more historical issues relating to the derivative of
a function to a function power.
Bos, Henk J. M. "Johann Bernoulli on Exponential Curves ...",
Nieuw Archief voor Wiskunde (4) 14 (1996), 1-19.
You can also use the chain rule above to "explain" both the
product rule and the quotient rule.
For instance, if y = f(U,V) = UV, then (del f)/(del U) = V
and (del f)/(del V) = U, so
dy/dx = V * (dU/dx) + U * (dV/dx).
Also, if y = f(U,V) = U/V, then (del f)/(del U) = 1/V
and (del f)/(del V) = -U/(V^2), so
dy/dx = (1/V) * (dU/dx) + [-U/(V^2)] * (dV/dx).
= [V*(dU/dx) - U*(dV/dx)] / V^2
Dave L. Renfro
Ranks of free submodules of free modules
Possible Duplicate:
Atiyah-MacDonald, exercise 2.11
The following question came up during tea today.
Let $R$ be a commutative ring with an identity and let $M \subset R^n$ be a submodule. Assume that $M \cong R^k$ for some $k$. Question : Must $k \leq n$?
If $R$ is a domain, then this is obvious. The obvious approach to proving the general result then is to mod out by the radical of $R$. If the resulting map $M / \text{rad}(R) M \rightarrow (R / \text{rad}(R))^n$ were injective, then we'd be done. However, I can't seem to prove this injectivity (I'm not even totally convinced that it's true).
Thank you for any help!
This is one of the early exercises in Atiyah-Macdonald. The answer is "yes" but the proof is tricky and there's not enough room in this comment box for it ;-) I think that if you google around for
solutions to all the exercises in Atiyah-Macdonald then you will find a document that looks promising but which contains an incorrect proof. I remember when I did this question finding the notion
of Euler characteristic very helpful, which I learnt from one of the later chapters of Matsumura! It has been suggested that A-M might have put the question in in error, not realising how tricky
it was. – Kevin Buzzard Jul 7 '10 at 6:29
Even though Robin has already answered the question I'm still going to point out that this is a duplicate of mathoverflow.net/questions/136/atiyah-macdonald-exercise-2-11/… , something I
discovered after Robin had posted his answer. I'm voting to close, in a nice way. – Kevin Buzzard Jul 7 '10 at 6:50
We have to assume $R \neq 0$. ;-) – Martin Brandenburg Jul 7 '10 at 8:12
If I understand correctly, this question will be deleted. In my opinion, this would be unfortunate because the answer to it are (I think) of at least as high quality as the answers to the
"Atiyah-MacDonald, exercise 2.11" question (mathoverflow.net/questions/136/atiyah-macdonald-exercise-2-11). The best would be of course to append the answers to this question to the answers to the
other question. But if this is too complicated, it would be better to reopen this question. – Pierre-Yves Gaillard Jul 8 '10 at 18:00
marked as duplicate by Kevin Buzzard, Pete L. Clark, Yemon Choi, S. Carnahan♦ Jul 8 '10 at 2:51
3 Answers
This reduces to the question: is there an $R$-module injection from $R^{n+1}$ to $R^n$. This is a matrix question: is there a nonzero nullvector for an $n$-by-$n+1$ matrix $M$.
Clearly $M$ has a nullvector formed by the $n$-by-$n$ minors, the trouble is that it could be zero. In that case we need to show that an $n$-by-$n$ matrix $N$ with zero determinant has a
nonzero nullvector.
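The first step (the signed-minors nullvector) can be checked concretely with a small Python sketch over the integers:

```python
from itertools import permutations

def det(m):
    # Determinant via the Leibniz permutation formula -- fine for the
    # tiny integer matrices used in this illustration.
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if perm[i] > perm[j])
        prod = 1
        for i in range(n):
            prod *= m[i][perm[i]]
        total += (-1) ** inv * prod
    return total

def minors_nullvector(m):
    # For an n x (n+1) matrix over a commutative ring, the vector of
    # signed maximal minors (delete column k, sign (-1)^k) is a
    # nullvector -- though it may be the zero vector, which is exactly
    # the case the rest of the argument has to handle.
    n = len(m)
    return [(-1) ** k * det([row[:k] + row[k + 1:] for row in m])
            for k in range(n + 1)]

M = [[1, 2, 3], [4, 5, 6]]
v = minors_nullvector(M)
checks = [sum(M[i][j] * v[j] for j in range(3)) for i in range(2)]
```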
Let $r$ be the determinantal rank of $N$: the size of the largest nonzero subdeterminant of $N$. Then $r < n$. Let's assume the top left $r$ by $r$ submatrix of $N$ has nonzero
determinant. Let $N'$ be the top left $r+1$-by-$r+1$ submatrix of $N$. Then the adjugate of $N'$ has a nonzero row. Fill this out to a row vector of length $n$ by adding zeros. Then this
is a nullvector of $N$.
Robin, this question is a duplicate ;-) grin but I only just discovered the source. mathoverflow.net/questions/136/atiyah-macdonald-exercise-2-11/… . Having discovered this I'm voting
to close. While you were answering the question I was grepping the database dumps :-/ – Kevin Buzzard Jul 7 '10 at 6:50
Blimey Kevin, you're a hard man :-) Anyway I think my proof is nicer than those in the cited thread. OK, my argument is more-or-less the same as Anton's there, except I don't worry
about annihilators as they seem unnecessary here :-) – Robin Chapman Jul 7 '10 at 8:34
I really like the second proof in Pete Clark's answer. I found it a few days ago by googling for the paper mentioned in mathoverflow.net/questions/30066/… (dealing with a related
question); it's Cor 2.7.4 in Antoine Chambert-Loir, ALGÈBRE COMMUTATIVE, Cours à l’Université de Rennes 1 (2006–2007) – Victor Protsak Jul 7 '10 at 22:18
For a proof using multilinear algebra, see Corollary 5.11 at
http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/extmod.pdf
Beautiful. I liked Chapman's proof a lot and felt the urge to translate it into coordinate-free language. Now I don't have to. – Tom Goodwillie Jul 7 '10 at 20:56
That is, I wanted to see an argument explicitly using exterior powers instead of determinants. – Tom Goodwillie Jul 8 '10 at 0:25
Here's a proof by Karl Dahlke:
Math Reference: A Free Submodule Embeds
The result can be generalized to infinite ranks.
I didn't read the details of the proof carefully (so I might be misinterpreting things), but at least at the beginning he claims to only be considering domains. – Andy Putman Jul
7 '10 at 14:18
Andy, the link I posted does not assume the underlying ring is a domain. – KConrad Jul 8 '10 at 3:09
@KConrad : Yes, the proof at the link you posted is very nice! I was just pointing out that this answer does not appear to answer the question posed by the OP. – Andy Putman Jul
8 '10 at 3:19
@Andy: Read carefully. Karl generalizes step by step. – Martin Brandenburg Jul 8 '10 at 9:55
Parallel partition phase for quick sort
A while ago I was wrapping my head around parallel merge sort. Since it requires additional O(n) space, in practice it is the best choice only if available memory is not constrained. Otherwise it is better to consider quick sort. As in merge sort, in quick sort we have two phases:
• rearrange elements into two partitions such that left one contains elements less than or equal to the selected pivot element and greater or equal to the pivot elements are in the right one
• recursively sort (independent) partitions
The second phase is naturally parallelized using task parallelism, since the partitions are independent (partition elements remain inside partition boundaries when the sort of the whole array is finished). You can find an example of this behavior in parallel quick sort. It is a good start. But the first phase still contributes O(n) work at each recursion level. By parallelizing the partition phase we can further speed up quick sort.
The sequential version of the partition phase is pretty straightforward.
class PartitionHelper<T>
{
    private readonly T[] m_arr;
    private readonly IComparer<T> m_comparer;

    public PartitionHelper(T[] arr, IComparer<T> comparer)
    {
        m_arr = arr;
        m_comparer = comparer;
    }

    // Moves elements within the range around the pivot sequentially
    // and returns the position of the first element that is greater
    // than or equal to the pivot.
    public int SequentialPartition(T pivot, int from, int to)
    {
        var j = from;
        for (var i = from; i < to; i++) {
            if (m_comparer.Compare(m_arr[i], pivot) < 0) {
                SwapElements(i, j++);
            }
        }
        return j;
    }

    private void SwapElements(int from, int to)
    {
        var tmp = m_arr[from];
        m_arr[from] = m_arr[to];
        m_arr[to] = tmp;
    }

    // ... (remaining members are shown below)
An interesting point is that we do not know in advance how the partitioning will turn out, since it is data dependent (the final position of an element depends on the other elements). Still, independent pieces of work can be carved out. Here is the core idea.
Let's assume an array that looks like the one below, where x denotes some element, p denotes the selected pivot, and e, l and g denote elements equal to, less than and greater than the pivot respectively.
e l l (g l g e e l) x x x x x x x x x (l l g g e l) g e g p
left right
Let's assume we selected two blocks of elements within the array: a left block (containing the elements g l g e e l) such that all elements before it are less than or equal to the pivot, and a right block (holding the elements l l g g e l) such that all elements after it are greater than or equal to the pivot. After the partitioning against the pivot is done, the left block must hold only elements less than or equal to the pivot, and the right block only elements greater than or equal to the pivot. In our example the left block contains two g elements that do not belong there, and the right block holds three l elements that must not be there. But this means we can swap two l elements from the right block with the two g elements from the left block, and the left block will then comply with the partitioning against the pivot:
e l l (l l l e e l) x x x x x x x x x (g g g g e l) g e g p
left right
Overall, after the block rearrange operation at least one of the two blocks holds only correct elements (exactly one when the numbers of misplaced elements in the two blocks differ, both when they match).
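The block rearrangement just described can be modeled with a few lines of Python (0, 1 and 2 stand for l, e and g, and the pivot is 1). This is only an illustration of the rearrange logic, not the code from the post:

```python
def arrange_blocks(arr, pivot, lb, rb, size):
    # lb, rb: start indices of the left and right blocks of equal size.
    i, j = lb, rb
    while i < lb + size and j < rb + size:
        while i < lb + size and arr[i] <= pivot:
            i += 1                     # skip elements already in place
        while j < rb + size and arr[j] >= pivot:
            j += 1                     # skip elements already in place
        if i == lb + size or j == rb + size:
            break
        arr[i], arr[j] = arr[j], arr[i]
        i += 1
        j += 1
    return i == lb + size, j == rb + size   # which blocks are done

# The example from the text: left = g l g e e l, right = l l g g e l.
arr = [2, 0, 2, 1, 1, 0, 0, 0, 2, 2, 1, 0]
left_done, right_done = arrange_blocks(arr, 1, 0, 6, 6)
```

Running it reproduces the picture from the text: the left block becomes l l l e e l (fully arranged) while the right block becomes g g g g e l, with one l left over.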
// Enum that indicates which of the blocks are in place
// after the arrangement.
private enum InPlace
{
    Left,
    Right,
    Both
}

// Tries to rearrange elements of the two blocks such that the
// right block contains elements greater than or equal to the
// pivot and/or the left block contains elements less than or
// equal to the pivot. At least one of the blocks is correctly
// rearranged.
private InPlace ArrangeBlocks(T pivot, ref int leftFrom, int leftTo, ref int rightFrom, int rightTo)
{
    while (leftFrom < leftTo && rightFrom < rightTo) {
        while (m_comparer.Compare(m_arr[leftFrom], pivot) <= 0 && ++leftFrom < leftTo) { }
        while (m_comparer.Compare(m_arr[rightFrom], pivot) >= 0 && ++rightFrom < rightTo) { }
        if (leftFrom == leftTo || rightFrom == rightTo) {
            break;
        }
        SwapElements(leftFrom++, rightFrom++);
    }
    if (leftFrom == leftTo && rightFrom == rightTo) {
        return InPlace.Both;
    }
    if (leftFrom == leftTo) {
        return InPlace.Left;
    }
    return InPlace.Right;
}
Then we can select the next left block and do the same. Repeat until the blocks meet, piece by piece arranging the left and right parts of the array as the partitioning requires. The sequential block-based algorithm looks like this:
• select block size and pick block from left and right ends of the array
• rearrange elements of the two blocks
• pick next block from the same end if all elements of the block are in place (as the partitioning wants them to be)
• repeat until all blocks are processed
• do sequential partitioning of the remaining block (since from a pair of blocks at most one block may remain not rearranged) if one exists
The interesting bit is that pairs of blocks can be rearranged independently. Workers can pick blocks concurrently from the corresponding ends of the array and rearrange elements in parallel.
A block once taken by a worker must not be accessible to other workers. When no blocks are left, a worker must stop. Basically we have two counters: the number of blocks taken from the left end and from the right end of the array. In order to take a block we must atomically increment the corresponding counter and check that the sum of the two counters is less than or equal to the total number of blocks; otherwise all blocks are exhausted and the worker must stop. Doing this under a lock is simple and acceptable for large arrays and blocks, but inefficient for small ones.
We will pack two counters into a single 32 bit value where lower 16 bits are for right blocks counter and higher 16 bits are for left blocks. To increment right and left blocks counters 1 and 1<<16
must be added to combined value respectively. Atomically updated combined value allows to extract individual counters and make decision on whether block was successfully taken or not.
Since each worker may attempt to race for the last not taken block care should be taken of overflow. So only 15 bits are used for each counter and so it will require 1<<15 workers to cause overflow
that is not realistic.
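The packing and unpacking arithmetic is easy to check in isolation; a small Python sketch of it (illustrative only, not the post's C# code):

```python
LEFT_BLOCK = 1 << 16   # increment for the high-word (left) counter
RIGHT_BLOCK = 1        # increment for the low-word (right) counter

counter = 0
counter += LEFT_BLOCK   # take a block from the left end
counter += LEFT_BLOCK   # ...and another
counter += RIGHT_BLOCK  # take a block from the right end

# Decode both counters from the single combined value.
left = counter >> 16
right = counter & 0xFFFF
print(left, right)      # 2 1
```

A single atomic add on the combined value thus bumps one counter while leaving the other readable from the same returned word.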
// Class that maintains taken blocks in a thread-safe way.
private class BlockCounter
{
    private const int c_minBlockSize = 1024;
    private readonly int m_blockCount;
    private readonly int m_blockSize;
    private int m_counter;
    private const int c_leftBlock = 1 << 16;
    private const int c_rightBlock = 1;
    private const int c_lowWordMask = 0x0000FFFF;

    public BlockCounter(int size)
    {
        // Compute block size given that we have only 15 bits
        // to hold block count.
        m_blockSize = Math.Max(size/Int16.MaxValue, c_minBlockSize);
        m_blockCount = size/m_blockSize;
    }

    // Gets selected block size based on total number of
    // elements and minimum block size.
    public int BlockSize
    {
        get { return m_blockSize; }
    }

    // Gets total number of blocks that is equal to the
    // total number divided evenly by the block size.
    public int BlockCount
    {
        get { return m_blockCount; }
    }

    // Takes a block from the left end and returns a value which
    // indicates whether the taken block is valid since due to
    // races a block that is beyond the allowed range can be
    // taken.
    public bool TakeLeftBlock(out int left)
    {
        int ignore;
        return TakeBlock(c_leftBlock, out left, out ignore);
    }

    // Takes a block from the right end and returns its validity.
    public bool TakeRightBlock(out int right)
    {
        int ignore;
        return TakeBlock(c_rightBlock, out ignore, out right);
    }

    // Atomically takes a block either from the left or right end
    // by incrementing the higher or lower word of a single
    // double word and checks that the sum of taken blocks
    // so far is still within the allowed limit.
    private bool TakeBlock(int block, out int left, out int right)
    {
        var counter = unchecked((uint) Interlocked.Add(ref m_counter, block));
        // Extract number of taken blocks from left and right
        // ends.
        left = (int) (counter >> 16);
        right = (int) (counter & c_lowWordMask);
        // Check that the sum of taken blocks is within
        // allowed range and decrement them to represent
        // most recently taken blocks indices.
        return left-- + right-- <= m_blockCount;
    }
}
With multiple workers rearranging pairs of blocks we may end up with "holes".
(l l e) (l g l) (l e e) (l l l) (g g e) (e l l) x x x (g g e) (l g e) (e e g)
l0 l1 l2 l3 l4 l5 r0 r1 r2
In the example above blocks l1, l4 and r1 are the holes in the left and right partitions of the array, meaning they were not completely rearranged. We must compact the left and right partitions such that
they contain no holes.
(l l e) (e l l) (l e e) (l l l) (g g e) (l g l) x x x (l g e) (g g e) (e e g)
l0 l5 l2 l3 l4 l1 r1 r0 r2
Now we can do sequential partitioning of the range between the end of the left-most rearranged block (l3) and the beginning of the right-most rearranged block (r0).
// A threshold of range size below which parallel partition
// will switch to sequential implementation, otherwise
// parallelization will not be justified.
private const int c_sequentialThreshold = 8192;

// Moves elements within range around pivot in parallel and
// returns position of the first element equal to the pivot.
public int ParallelPartition(T pivot, int from, int to)
{
    var size = to - from;
    // If range is too narrow resort to sequential
    // partitioning.
    if (size < c_sequentialThreshold) {
        return SequentialPartition(pivot, from, to);
    }
    var counter = new BlockCounter(size);
    var blockCount = counter.BlockCount;
    var blockSize = counter.BlockSize;
    // Workers will process pairs of blocks and so the number
    // of workers should be at most half the number of
    // blocks.
    var workerCount = Math.Min(Environment.ProcessorCount, blockCount / 2);
    // After a worker is done it must report blocks that
    // were not rearranged
    var leftRemaining = AllocateRemainingArray(workerCount);
    var rightRemaining = AllocateRemainingArray(workerCount);
    // and the left most and right most rearranged blocks.
    var leftMostBlocks = AllocateMostArray(workerCount);
    var rightMostBlocks = AllocateMostArray(workerCount);
    Action<int> worker = index =>
    {
        int localLeftMost = -1, localRightMost = -1;
        var leftBlock = localLeftMost;
        var rightBlock = localRightMost;
        int leftFrom = 0, leftTo = 0;
        int rightFrom = 0, rightTo = 0;
        var result = InPlace.Both;
        // Until all blocks are exhausted try to rearrange
        while (true) {
            // Depending on the previous step one or two
            // blocks must be taken.
            if (result == InPlace.Left ||
                result == InPlace.Both) {
                // Left or both blocks were successfully
                // rearranged so we need to update the left
                // most block
                localLeftMost = leftBlock;
                // and try to take a block from the left end.
                if (!counter.TakeLeftBlock(out leftBlock)) {
                    break;
                }
                leftFrom = from + leftBlock*blockSize;
                leftTo = leftFrom + blockSize;
            }
            if (result == InPlace.Right ||
                result == InPlace.Both) {
                // Right or both blocks were successfully
                // rearranged: update the right most and take
                // a new right block.
                localRightMost = rightBlock;
                if (!counter.TakeRightBlock(out rightBlock)) {
                    break;
                }
                rightTo = to - rightBlock*blockSize;
                rightFrom = rightTo - blockSize;
            }
            // Try to rearrange elements of the two blocks
            // such that elements of the right block are
            // greater or equal to the pivot and the left block
            // contains elements less than or equal to the pivot.
            result = ArrangeBlocks(pivot, ref leftFrom, leftTo, ref rightFrom, rightTo);
            // At least one of the blocks is correctly
            // rearranged and if we are lucky - two of them.
        }
        // If the right block was not completely rearranged mark
        // it as remaining to be arranged.
        if (rightFrom != rightTo) {
            rightRemaining[index] = rightBlock;
        }
        // Same for the left block.
        if (leftFrom != leftTo) {
            leftRemaining[index] = leftBlock;
        }
        // Update worker local left most and right most
        // arranged blocks.
        leftMostBlocks[index] = localLeftMost;
        rightMostBlocks[index] = localRightMost;
    };
    Parallel.For(0, workerCount, worker);
    // Compact arranged blocks from both ends so that all non
    // arranged blocks lie consecutively between arranged
    // left and right blocks.
    var leftMostBlock = ArrangeRemainingBlocks(from, blockSize, leftRemaining, leftMostBlocks.Max(), 1);
    var rightMostBlock = ArrangeRemainingBlocks(to - blockSize, blockSize, rightRemaining, rightMostBlocks.Max(), -1);
    // Do sequential partitioning of the inner most area.
    return SequentialPartition(pivot, from + (leftMostBlock + 1)*blockSize, to - (rightMostBlock + 1)*blockSize);
}
// Moves rearranged blocks to cover holes such that all
// rearranged blocks are consecutive. Basically it does
// compaction and returns the most rearranged block.
private int ArrangeRemainingBlocks(int bound, int blockSize, int[] remaining, int mostBlock, int sign)
{
    Array.Sort(remaining);
    var j = Array.FindLastIndex(remaining, b => b < mostBlock);
    for (var i = 0; i < remaining.Length && remaining[i] <= mostBlock;) {
        if (remaining[j] == mostBlock) {
            // The most rearranged block is itself a hole, skip it.
            j--;
            mostBlock--;
        }
        else {
            // Fill the current hole with the most rearranged block.
            SwapBlocks(bound + sign * remaining[i] * blockSize, bound + sign * mostBlock * blockSize, blockSize);
            i++;
            mostBlock--;
        }
    }
    return mostBlock;
}
private static int[] AllocateRemainingArray(int workerCount)
{
    return Enumerable.Repeat(Int32.MaxValue, workerCount).ToArray();
}

private static int[] AllocateMostArray(int workerCount)
{
    return Enumerable.Repeat(-1, workerCount).ToArray();
}

// Swaps two blocks
private void SwapBlocks(int from, int to, int blockSize)
{
    for (var i = 0; i < blockSize; i++) {
        SwapElements(from + i, to + i);
    }
}
Now we have a parallel implementation of the quick sort partition phase. Experiments with randomly generated arrays of integer values show that it speeds up parallel quick sort by approximately
50% on an 8-way machine.
Simplifying a Complex Rational Expression
I need help with learning the steps to solving this problem.
The first thing to do is to get rid of the fractions in the numerator. So what is the LCM of $a + b$ and $a - b$? $(a + b)(a - b)$, of course. So we want to multiply the numerator of the complex
fraction by $(a + b)(a - b)$, and thus we need to multiply the same thing in the denominator: $\frac{ \frac{3}{a + b} - \frac{3}{a - b} }{2ab} = \frac{ \frac{3}{a + b} - \frac{3}{a - b} }{2ab} \cdot \frac{(a + b)(a - b)}{(a + b)(a - b)} = \frac{3(a - b) - 3(a + b) }{2ab(a + b)(a - b)}$, which you can simplify from here. -Dan
So the answer would be $\frac{-6b}{2ab(a+b)(a-b)} = \frac{-3}{a(a+b)(a-b)}$
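A quick exact-arithmetic spot-check of that simplification can be sketched in Python (the value pair is arbitrary, chosen so no denominator vanishes):

```python
from fractions import Fraction

def original(a, b):
    # (3/(a+b) - 3/(a-b)) / (2ab), evaluated exactly.
    return (Fraction(3, a + b) - Fraction(3, a - b)) / (2 * a * b)

def simplified(a, b):
    # -3 / (a(a+b)(a-b))
    return Fraction(-3, a * (a + b) * (a - b))

print(original(5, 2) == simplified(5, 2))  # True
```

Agreement at a couple of sample points does not prove the identity, but it catches sign slips like dropping the minus on $-6b$.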
Replicator (CA)
From LifeWiki
Rulestring 1357/1357
Character Explosive
Replicator is a Life-like cellular automaton where a cell survives or is born if there are an odd number of neighbors. It is one of two Life-like Fredkin replicator rules. Under this ruleset, every
pattern self-replicates; furthermore, every pattern will eventually produce an arbitrary number of copies of itself, all arbitrarily far away from each other.
Replication Property
The replication property follows from a property of Fredkin replicator rules, in which patterns can be modelled as an infinite grid whose entries are elements of the cyclic group Z[n], where n is the
number of states. In this case, n=2, 0 is the off state, and 1 is the on state. The rule can be expressed equivalently as assigning a new value to a cell by summing all neighboring cells. Since Z[n]
is an abelian group, addition is commutative and associative; hence applying the rule to a sum (XOR) of two patterns is the same as summing the two patterns after the rule is applied to each one.
Thus, to find the nth generation of a pattern, it suffices to XOR together the nth generation of each of the single cells which compose the pattern. A single cell is a replicator. More specifically,
an on-cell at (0,0) at time 0 will produce, at time 2^n, 8 on-cells at all positions (b,c) where b and c are any of -2^n, 0, or 2^n, and b and c are not both 0 (this can be proven using induction on
n). When n is large enough, the 8 cells are arbitrarily far away, and thus, for a pattern, the XOR sum of the (2^n)th generation of each of its cells forms the pattern's (2^n)th generation, 8 copies
of the original. Repeating this process produces an arbitrary number of copies, all at arbitrary distance.
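This single-cell behaviour is easy to verify directly. A short Python sketch of the rule (a cell is on in the next generation exactly when its Moore-neighbour count is odd) checks the claim for n = 2:

```python
from collections import Counter

def step(cells):
    # B1357/S1357: a cell is on in the next generation iff it has an
    # odd number of on Moore neighbours; its own state is irrelevant.
    counts = Counter()
    for (x, y) in cells:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    counts[(x + dx, y + dy)] += 1
    return {cell for cell, n in counts.items() if n % 2 == 1}

cells = {(0, 0)}
for _ in range(4):          # 4 = 2^2 generations
    cells = step(cells)

# Eight copies of the single cell at distance 2^2, as claimed.
expected = {(b, c) for b in (-4, 0, 4) for c in (-4, 0, 4)} - {(0, 0)}
print(cells == expected)    # True
```

Working on a sparse set of coordinates rather than a fixed grid sidesteps boundary effects, which matters here because the pattern spreads without bound.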
Replicator 2
Rulestring 02468/1357
Character Explosive
Replicator 2, also known as Fredkin, is a related totalistic Fredkin replicator rule, where a cell survives or is born if the number of neighbors, including itself, is odd. It has the totalistic
rulestring 13579. Like Replicator, every pattern self-replicates.
Wolfram Demonstrations Project
Random Number Generation
A random or pseudorandom number generator (RNG) is a computational or physical device designed to generate a random sequence of numbers. There are many different methods for generating random bits
and testing their quality. This Demonstration shows all the Mathematica algorithms for producing random binary sequences and random real number sequences (between 0 and 1) with other examples (for
binary sequences) that clearly fail as RNGs, such as a repetitive sequence and the Thue-Morse example.
For binary sequences, the Demonstration also includes a sample segment of random bits that was produced by a hardware device whose randomness relies on a quantum physical process and another whose
randomness relies on atmospheric noise. It can be seen that all of them succeed or fail the oversimplified tests to detect the lack of randomness, either by changing the seed or by shifting the
threshold of the statistical tolerance. All comparisons are made over groups of 10000 bits each (one group per seed) even when only 3000 bits are displayed.
The oversimplified tests include two common statistical tests: normality and autocorrelation. A random sequence is normal (but not the other way around). A normal sequence is a sequence whose digits
show a uniform distribution, with all digits being equally likely; the 5-normality test partitions the whole sequence into substrings of length 1 to 5 and tests, for each length, whether the standard
deviation of the substring frequencies stays within an acceptable value (the statistical tolerance).
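A simplified frequency test of this kind can be sketched in Python; the substring length and tolerance below are illustrative choices, not the Demonstration's actual settings:

```python
from itertools import product
import random
import statistics

def normality_ok(bits, k, tol):
    # Count overlapping length-k substrings and test whether the
    # spread of their frequencies stays within the tolerance.
    counts = {p: 0 for p in product('01', repeat=k)}
    for i in range(len(bits) - k + 1):
        counts[tuple(bits[i:i + k])] += 1
    freqs = [c / (len(bits) - k + 1) for c in counts.values()]
    return statistics.pstdev(freqs) < tol

rng = random.Random(0)
random_bits = ''.join(rng.choice('01') for _ in range(10000))
periodic_bits = '01' * 5000

# The periodic sequence concentrates all its mass on two of the
# eight length-3 patterns, so its frequency spread is huge.
print(normality_ok(random_bits, 3, 0.02))    # True
print(normality_ok(periodic_bits, 3, 0.02))  # False
```

As the article notes, passing such a test is necessary but not sufficient: a normal sequence need not be random.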
The autocorrelation test looks for possible "hidden" functions producing the sequence. It is the cross-correlation of the sequence with itself. Autocorrelation is useful for finding repeating
patterns, such as periodic sequences. The oversimplified implementation of this test partitions the entire sequence into groups of 10 and then again into pairs, comparing each first segment with the
second, looking for possible regularities.
The compressibility test is based on algorithmic information theory. Compression algorithms look for regularities. If the compressed version of the sequence is short enough compared to the original
size of the sequence given a threshold (defined by the tolerance control), the sequence is unlikely to be random.
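The idea can be sketched with any general-purpose compressor; here a Python sketch using zlib (the thresholds are illustrative):

```python
import random
import zlib

def compression_ratio(bits):
    # Compressed size relative to original size; a ratio near the
    # entropy floor means no regularity the compressor can exploit.
    data = bits.encode()
    return len(zlib.compress(data, 9)) / len(data)

rng = random.Random(42)
random_bits = ''.join(rng.choice('01') for _ in range(10000))
periodic_bits = '0110' * 2500

# The periodic sequence collapses under compression, flagging it
# as non-random; the pseudorandom one stays near the ~1 bit per
# symbol entropy floor for a binary alphabet stored as characters.
print(compression_ratio(periodic_bits) < compression_ratio(random_bits))  # True
```

Since incompressibility is only a one-sided check, a sequence that compresses well is unlikely to be random, while one that does not compress may still fail other tests.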
These tests may be useful as a first step in determining whether or not a generator obviously fails or not as an RNG. No test can prove definitively whether a nontrivial sequence (and the generator
that produced it) is good enough as an RNG due to possible random or nonrandom local segments.
Tileshift – Updated A* implementation and query size.
August 26th, 2012 7:52 am
So, I’ve been working on A* heuristics.
It is interesting so I wrote a visualisation tool for looking at the costs and paths.
This is a relatively large number of iterations using Manhattan distance. The large search space is quite costly since we use the search algorithm as a fitness function for the genetic selection.
To improve the efficiency of the GA, I reduced the size of the search, and we get a similar short-term prediction which is good enough:
In particular, the shorter search allows for the users location to have more of an effect on the genetic algorithm since the best possible destination will be more local.
Finally, I was interested to compare the results with Euclidean distance. We see a larger diagonal component (which is to be expected) heading towards the goal (in this case towards the lower right).
I actually found that the directionality of this cost function didn't give as good results: the Manhattan distance allows the user to try out both horizontal and vertical paths, which effectively
have the same cost, whereas the Euclidean cost tends to build paths directly towards the goal diagonally.
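The difference between the two heuristics can be sketched in a few lines of Python (the coordinates and the 4-connected-grid assumption are illustrative, not taken from the game):

```python
import math

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# On a 4-connected grid both heuristics are admissible, but
# Manhattan distance is never smaller, so A* expands fewer nodes
# and rates a horizontal-then-vertical detour no worse than the
# straight line; Euclidean distance under-estimates such detours
# and pulls the search diagonally toward the goal.
start, goal = (0, 0), (3, 4)
print(manhattan(start, goal))  # 7
print(euclidean(start, goal))  # 5.0
```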
It has been interesting to play with the parameters of the A* algorithm and visualise the results, and generally it has been very helpful. You can try out some of the visualisations in the first
level here.
6th European Congress of Mathematics
On Monday, July 2, right after the final of the UEFA European Championship, the doors open to the
6th European Congress of Mathematics
in the beautiful historic city of Kraków in Poland. Since 1992, the European Mathematical Society (EMS) has invited mathematicians from all over the world to this important event every four years.
Previous congresses have been held in Paris, Budapest, Barcelona, Stockholm and Amsterdam. This year, the congress is organized by colleagues from the Polish Mathematical Society and the Jagiellonian
University in Krakow, chaired by Prof. Stefan Jackowski (Warsaw). The Polish President, Mr. Bronislaw Komorowski has accepted the honorary patronage for the congress.
Close to 1,000 mathematicians are expected to participate in the congress, which will take place over a whole week at the Auditorium Maximum of the Jagiellonian University in the city center of Kraków.
They are looking forward to the opening ceremony on Monday morning with excitement for a very particular reason: a total of 12 prizes established by the European Mathematical Society will be awarded by
EMS President Prof. Marta Sanz-Solé (Barcelona, Spain) to laureates selected by three prize committees. The monetary value of each prize is 5000 Euro. All prize winners will be invited to deliver
lectures at 6ECM.
Ten EMS prizes
10 EMS prizes will be awarded to young researchers not older than 35 years, of European nationality or working in Europe, in recognition of excellent contributions in mathematics. The prize winners
were selected by a committee of around 15 internationally recognized mathematicians covering a large variety of fields and chaired by Prof. Frances Kirwan (Oxford, UK). Funds for this prize have been
endowed by the Foundation Compositio Mathematica.
Previous prize winners have proved to continue their careers with high success. Several of them have won the most important distinction for young mathematicians, the Fields Medal, of which at most
four are awarded every four years by the International Mathematical Union. Congress participants may thus be able to attend a lecture by a future Fields Medal winner!
European research politicians should be concerned: Among the ten selected extremely talented young mathematicians, five have chosen to pursue their career in the United States!
List of Prize Winners

Simon Brendle
, 31 years old, received his PhD from Tübingen University in Germany under the supervision of Gerhard Huisken. He is now a Professor of mathematics at Stanford University, USA. An EMS-prize is
awarded to him
for his outstanding results on geometric partial differential equations and systems of elliptic, parabolic and hyperbolic types, which have led to breakthroughs in differential geometry including the
differentiable sphere theorem, the general convergence of Yamabe flow, the compactness property for solutions of the Yamabe equation, and the Min-Oo conjecture.

Emmanuel Breuillard
, 35 years old, graduated in mathematics and physics from Ecole Normale Superieure (Paris); then he pursued graduate studies in Cambridge (UK) and Yale (USA) where he obtained a PhD in 2004. He is
currently a professor of mathematics at Universite Paris-Sud, Orsay. He receives an EMS-prize
for his important and deep research in asymptotic group theory, in particular on the Tits alternative for linear groups and on the study of approximate subgroups, using a wealth of methods from very
different areas of mathematics, which has already made a long lasting impact on combinatorics, group theory, number theory and beyond.

Alessio Figalli
, 28 years old, graduated in mathematics from the Scuola Normale Superiore of Pisa (2006) and he received a joint PhD from the Scuola Normale Superiore of Pisa and the Ecole Normale Supérieure of
Lyon (2007). Currently he is a professor at the University of Texas at Austin. An EMS-prize goes to him
for his outstanding contributions to the regularity theory of optimal transport maps, to quantitative geometric and functional inequalities and to partial solutions of the Mather and Mañé conjectures
in the theory of dynamical systems.

Adrian Ioana
, 31 years old, obtained a Bachelor of Science from the University of Bucharest (2003) and received his Ph.D. from UCLA in 2007 under the direction of Sorin Popa. Currently, he is an assistant
professor at the University of California at San Diego. An EMS prize is awarded to him
for his impressive and deep work in the field of operator algebras and their connections to ergodic theory and group theory, and in particular for solving several important open problems in
deformation and rigidity theory, among them a long standing conjecture of Connes concerning von Neumann algebras with no outer automorphisms.

Mathieu Lewin
, 34 years old, studied mathematics at the École Normale Supérieure (Cachan), before he went to the university of Paris–Dauphine where he got his PhD in 2004. He currently occupies a full-time CNRS
research position at the University of Cergy-Pontoise, close to Paris. He receives an EMS-prize
for his ground-breaking work in rigorous aspects of quantum chemistry, mean field approximations to relativistic quantum field theory and statistical mechanics.

Ciprian Manolescu
, 33 years old, studied mathematics at Harvard University; he received his PhD in 2004 under the supervision of Peter B. Kronheimer. He worked for three years at Columbia University, and since 2008
he is an Associate Professor at UC in Los Angeles. An EMS-prize goes to him
for his deep and highly influential work on Floer theory, successfully combining techniques from gauge theory, symplectic geometry, algebraic topology, dynamical systems and algebraic geometry to
study low-dimensional manifolds, and in particular for his key role in the development of combinatorial Floer theory.
Grégory Miermont
received his education at Ecole Normale Supérieure in Paris during 1998–2002. He defended his PhD thesis, which was supervised by Jean Bertoin, in 2003. Since 2009 he is a professor at Université
Paris-Sud 11 (Orsay). During the academic year 2011–2012 he is on leave as a visiting professor at the University of British Columbia (Vancouver). An EMS prize is awarded to him
for his outstanding work on scaling limits of random structures such as trees and random planar maps, and his highly innovative insight in the treatment of random metrics.

Sophie Morel
, 32 years old, studied mathematics at the École Normale Supérieure in Paris, before earning her PhD at Université Paris-Sud, under the direction of Gerard Laumon. Since December 2009, she is a
professor at Harvard University. She receives an EMS-prize
for her deep and original work in arithmetic geometry and automorphic forms, in particular the study of Shimura varieties, bringing new and unexpected ideas to this field.

Tom Sanders
studied mathematics in Cambridge; he received his PhD in 2007 under the supervision of William T. Gowers. Since October 2011, he is a Royal Society University Research Fellow at the University of
Oxford. An EMS-prize goes to him
for his fundamental results in additive combinatorics and harmonic analysis, which combine in a masterful way deep known techniques with the invention of new methods to achieve spectacular
applications.

Corinna Ulcigrai
, 32 years old, obtained her diploma in mathematics from the Scuola Normale Superiore in Pisa (2002) and defended her PhD in mathematics at Princeton University (2007), under the supervision of Ya.
G. Sinai. Since August 2007 she is a Lecturer and a RCUK Fellow at the University of Bristol. An EMS prize is awarded to her
for advancing our understanding of dynamical systems and the mathematical characterizations of chaos, and especially for solving a long-standing fundamental question on the mixing property for
locally Hamiltonian surface flows.

Felix Klein Prize
The Felix Klein prize, endowed by the Institute for Industrial Mathematics in Kaiserslautern, will be awarded to a young scientist (normally under the age of 38) for using sophisticated methods to
give an outstanding solution, which meets with the complete satisfaction of industry, to a concrete and difficult industrial problem. The Prize Committee that selected the winner consisted of six
members, chaired by Prof. Wil H.A. Schilders from Eindhoven in the Netherlands.
Emmanuel Trélat
, 37 years old, obtained his PhD at the University of Bourgogne in 2000. Currently he is a full professor at the University Pierre et Marie Curie (Paris 6), France, and member of the Institut
Universitaire de France, since 2011. He receives the Felix Klein Prize
for combining truly impressive and beautiful contributions in fine fundamental mathematics to understand and solve new problems in control of PDE’s and ODE’s (continuous, discrete and mixed
problems), and above all for his studies on singular trajectories, with remarkable numerical methods and algorithms able to provide solutions to many industrial problems in real time, with
substantial impact especially in the area of astronautics.

Otto Neugebauer Prize
For the first time ever, the newly established Otto Neugebauer Prize in the History of Mathematics will be awarded for a specific highly influential article or book. The prize winner was selected by
a committee of five specialists in the history of mathematics, chaired by Prof. Jeremy Gray (Open University, UK). The funds for this prize have been offered by Springer-Verlag, one of the major
scientific publishing houses.
Jan P. Hogendijk
obtained his Ph.D. at Utrecht University in 1983 with a dissertation on an unpublished Arabic treatise on conic sections by Ibn al-Haytham (ca. 965-1041). He is now a full professor in History of
Mathematics at the Mathematics Department of Utrecht University. He is the first recipient of the Otto Neugebauer Prize
for having illuminated how Greek mathematics was absorbed in the medieval Arabic world, how mathematics developed in medieval Islam, and how it was eventually transmitted to Europe.

Photos
From the prize ceremony, and in particular, photos of all prize winners, will be publicly available around 12 am on the web pages
Translations to several European languages will be added later during the day.
Institute for Mathematics and its Applications (IMA)
- Dynamics of Josephson Junctions Coupled through a Shared LRC-Load
by Don Aronson, University of Minnesota
We consider a system of N identical current-biased Josephson point junctions coupled via a shared LCR-load. The mathematical model involves a system of N "pendulum-type" equations with a load-induced
forcing term together with a second-order load equation which is forced by the mean velocity of the pendula. In addition to the three load parameters, the system involves two additional parameters
describing the intrinsic capacity of the junctions and the common bias current. The system is equivariant with respect to permutations and is consequently amenable to a considerable amount of
analysis. Numerical studies for specific loads have shown that the system has extremely complicated dynamics. We will describe some of these observations, and show how continuation studies using AUTO
together with geometrical and classical analysis have led to a nearly complete picture of the dynamics in certain cases. Much work remains to be done and there are many open problems. One of the main
open problems involves the singular limit in which the individual junction capacities tend to zero.
Manual:Algebra View
This page is part of the official manual for print and pdf.
Using the Input Bar you can directly enter algebraic expressions in GeoGebra. After hitting the Enter key your algebraic input appears in the Algebra View while its graphical representation is
automatically displayed in the Graphics View.
Example: The input f(x) = x^2 gives you the function f in the Algebra View and its function graph in the Graphics View.
In the Algebra View, mathematical objects are organized as free and dependent objects. If you create a new object without using any other existing objects, it is classified as a free object. If your
newly created object was created by using other existing objects, it is classified as a dependent object.
If you want to hide the algebraic representation of an object in the Algebra View, you may specify the object as an auxiliary object:
Right click (Mac OS: Ctrl-click) on the corresponding object in the Algebra View and select Properties from the appearing Context Menu.
On tab Basic of the Properties Dialog you may specify the object as an Auxiliary Object.
By default, auxiliary objects are not shown in the Algebra View, but you can change this setting by selecting "Auxiliary Objects" from the context menu (right-click) or by clicking on the appropriate icon in the Style Bar.
Note that you are able to modify objects in the Algebra View as well: Make sure that you activate the Move Tool before you double click on a free object in the Algebra View. In the appearing text
box you can directly edit the algebraic representation of the object. After hitting the Enter key, the graphical representation of the object will automatically adapt to your changes.
If you double-click on a dependent object in the Algebra View, a dialog window appears allowing you to Redefine the object.
GeoGebra also offers a wide range of commands that can be entered into the Input Bar. You can open the list of commands in the right corner of the Input Bar by clicking on the button Command. After
selecting a command from this list (or typing its name directly into the Input Bar) you can press the F1 key to get information about the syntax and arguments required to apply the corresponding command.
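For instance (an illustration; both command names are taken from the GeoGebra command list, and f is assumed to be the function defined above):

  Root[f]
  Derivative[f]

The first command creates the points where the graph of f intersects the x-axis, the second creates the derivative function; both appear in the Algebra View as dependent objects.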
Style Bar
This Style Bar contains two buttons.
toggling this button shows or hides Auxiliary Objects.
when turned on, objects are sorted by type (e.g. Points, Lines, ...), otherwise they are divided among Free, Dependent and Auxiliary Objects (sorted by Layers or listed in Construction Order).
M 7.0 Hector Mine sequence
Next, we turn our attention to local and regional recordings of the M 7.0 1999 Hector Mine mainshock and 6 aftershocks with magnitudes ranging between 3.7 and 5.4. In this case we consider 6 broadband stations
at distances ranging between 60 and 700 km: GSC, PFO, MNV, CMB, TUC, and ELK. All the events have independent regional seismic moment estimates from full waveform inversion by G. Ichinose (pers. comm., 2006). As observed
for San Francisco Bay Area events, the coda spectral ratios for Hector Mine events were very stable, with average standard deviations of less than 0.1 for all frequencies. Figure 2.13 shows all 6
ratios, assuming both simultaneous source model fits and individual ratio fits. In all cases the high frequency asymptote is significantly above the theoretically predicted value. This is consistent
with a break in self-similarity and is inconsistent with a standard self-similar Brune (1970) style omega-square model. Our preferred interpretation is that the apparent stresses are systematically
lower for the aftershocks than the mainshock. If all events have Brune-style spectra with an f^-2 fall-off at high frequencies, this implies the corner frequency scaling is steeper than f^-3 for
self-similar, constant apparent stress scaling. More in-depth results of this study can be found in Mayeda et al. (2007).
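The Brune omega-square source spectrum and the spectral-ratio asymptotes discussed above can be sketched numerically. The moments and corner frequencies below are illustrative values chosen to satisfy self-similar corner-frequency scaling (fc ∝ M0^(-1/3)), not the actual Hector Mine estimates:

```python
import math

def brune_spectrum(f, M0, fc):
    """Brune (1970) omega-square source spectrum: flat at M0 below the
    corner frequency fc, falling off as f^-2 above it."""
    return M0 / (1.0 + (f / fc) ** 2)

# Illustrative event pair with a moment ratio of 1000 (about 2 magnitude units).
# Corner frequencies follow fc ~ M0^(-1/3), i.e. self-similar scaling.
M0_main, fc_main = 1.0e19, 0.2    # "mainshock" (moment in N·m, fc in Hz)
M0_after, fc_after = 1.0e16, 2.0  # "aftershock"

def spectral_ratio(f):
    """Ratio of the two Brune spectra at frequency f."""
    return brune_spectrum(f, M0_main, fc_main) / brune_spectrum(f, M0_after, fc_after)

low = spectral_ratio(1e-4)   # low-frequency asymptote -> moment ratio (1000)
high = spectral_ratio(1e4)   # high-frequency asymptote -> (M0 ratio)*(fc ratio)^2 (10)
print(low, high)
```

At low frequency the ratio recovers the moment ratio; at high frequency the self-similar prediction is (M0_main/M0_after)^(1/3) = 10. The Hector Mine observations sit above this predicted high-frequency asymptote, which is the break in self-similarity described in the text.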
Figure 2.13: Spectral ratios for the Hector Mine mainshock relative to 6 aftershocks. In each figure, we show the low and high frequency asymptotes assuming constant apparent stress scaling as solid
lines. Dashed lines show the case if the spectral fall-off were 1.5 rather than 2.0. However, observations worldwide are inconsistent with a fall-off of 1.5 and we are left to assume that the
apparent stresses are systematically lower for the aftershocks than the mainshock, breaking similarity.
Berkeley Seismological Laboratory
215 McCone Hall, UC Berkeley, Berkeley, CA 94720-4760
Questions or comments? Send e-mail: www@seismo.berkeley.edu
© 2007, The Regents of the University of California | {"url":"http://seismo.berkeley.edu/annual_report/ar06_07/node41.html","timestamp":"2014-04-17T18:24:58Z","content_type":null,"content_length":"6257","record_id":"<urn:uuid:3bd2daa5-ec6d-4786-aa69-f56be35f42db>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00494-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pre-Algebra: Integers Help and Practice Problems
Find study help on integers for pre-algebra. Use the links below to select the specific area of integers you're looking for help with. Each guide comes complete with an explanation, example problems,
and practice problems with solutions to help you learn integers for pre-algebra.
The most popular articles in this category | {"url":"http://www.education.com/study-help/study-help-pre-algebra-integers/","timestamp":"2014-04-17T19:42:07Z","content_type":null,"content_length":"95779","record_id":"<urn:uuid:fb77914b-4594-4572-a5b9-8b48df5713b2>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00185-ip-10-147-4-33.ec2.internal.warc.gz"} |
Compare-Contrast Essay - Expository Writing
Monday October 4
Students complete a journal entry in their journals for 10 minutes:
Choose someone you know really well (parent, friend, coach, etc.) and write about the similarities and differences between yourself and that person. Try to write about at least 3 different things.
Remind students to upload essay final drafts to Turnitin
Students read through pp. 176-185 of Expository Composition Textbook. They are to write explanations/ definitions of the following terms:
• thesis
• topic
• subject
• point
• comparison
• contrast
• point-by-point
• block-by-block
• repetition
• parallelism
- For homework, students complete the practice exercise #1, p. 186.
Tuesday October 5
- Put students into writing groups and have them share their homework assignments. They will put their Venn diagrams into the “writing process” section of their notebooks.
- Have student groups read “Meet Me at Half-Time” (p. 200) and write answers to the discussion questions on p. 202
- Discuss as whole class the answers to the discussion questions
Wednesday October 6
- Students write in journals for 10 minutes
- Distribute comparison-contrast essay assignment.
- Review notes/terms from Tuesday. (Power Point)
Thursday October 7
- Student work time in lab/library to research topic information and begin drafting.
- Rough Drafts will be due Monday October 11.
Friday October 8
- Students write in journals for 10 minutes
- Minilesson on citations and works cited pages.
- Students complete citation exercise.
Monday October 11
- Rough Drafts due
- Students write in journals for 10 minutes
- Students work in writers’ groups to provide feedback. This will be group feedback. Students read aloud their papers to the group while the group members provide written feedback.
Tuesday October 12
- Students continue writer’s groups
- Students complete an additional citation exercise
Wednesday October 13
- Testing Day, no class
Thursday October 14
- Assign students new vocabulary lists
- Students work on vocabulary exercises
Friday October 15
- Lab Work Day for final drafts
- Final Drafts due Monday October 18 | {"url":"https://sites.google.com/site/lhsiexpwriting/compare-contrast-essay","timestamp":"2014-04-16T07:31:57Z","content_type":null,"content_length":"36491","record_id":"<urn:uuid:44ee0309-0965-45ec-9cb9-ecfdb70fb558>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
Polynomial expected behavior of a pivoting algorithm for linear complementarity and linear programming problems
Results 1 - 10 of 17
- Journal of Algorithms , 1985
"... This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself
in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’ ’ W. H. Freeman & Co ..."
Cited by 188 (0 self)
Add to MetaCart
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our
book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., New York, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by
their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder)
presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.) or open problems they would like publicized, should
, 2003
"... We introduce the smoothed analysis of algorithms, which continuously interpolates between the worst-case and average-case analyses of algorithms. In smoothed analysis, we measure the maximum
over inputs of the expected performance of an algorithm under small random perturbations of that input. We me ..."
Cited by 146 (14 self)
Add to MetaCart
We introduce the smoothed analysis of algorithms, which continuously interpolates between the worst-case and average-case analyses of algorithms. In smoothed analysis, we measure the maximum over
inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations.
We show that the simplex algorithm has smoothed complexity polynomial in the input size and the standard deviation of
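The smoothed-analysis idea — the maximum over inputs of the expected cost under small random perturbations — can be illustrated with a toy experiment. This is a hedged sketch: the algorithm (naive quicksort with a first-element pivot) and the Gaussian perturbation model are illustrative stand-ins, not the simplex analysis of the paper:

```python
import random

def quicksort_comparisons(a):
    """Count comparisons made by naive quicksort with a first-element pivot."""
    if len(a) <= 1:
        return 0
    pivot = a[0]
    less = [x for x in a[1:] if x < pivot]
    geq = [x for x in a[1:] if x >= pivot]
    return (len(a) - 1) + quicksort_comparisons(less) + quicksort_comparisons(geq)

def smoothed_cost(worst_input, sigma, trials=20, seed=0):
    """Expected comparison count under Gaussian perturbation of a fixed input.
    Smoothed analysis takes the max of this quantity over all inputs."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        perturbed = [x + rng.gauss(0.0, sigma) for x in worst_input]
        total += quicksort_comparisons(perturbed)
    return total / trials

n = 200
worst = list(range(n))                  # sorted input: n(n-1)/2 = 19900 comparisons
print(quicksort_comparisons(worst))     # worst-case cost
print(smoothed_cost(worst, sigma=10.0)) # far smaller on average after perturbation
```

As sigma shrinks the smoothed cost approaches the worst case, and as sigma grows it approaches the average case, which is the interpolation the abstract describes.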
, 1992
"... CONTENTS 1 Introduction 1 2 The Basics of Predictor-Corrector Path Following 3 3 Aspects of Implementations 7 4 Applications 15 5 Piecewise-Linear Methods 34 6 Complexity 41 7 Available Software
44 References 48 1. Introduction Continuation, embedding or homotopy methods have long served as useful ..."
Cited by 70 (6 self)
Add to MetaCart
CONTENTS 1 Introduction 1 2 The Basics of Predictor-Corrector Path Following 3 3 Aspects of Implementations 7 4 Applications 15 5 Piecewise-Linear Methods 34 6 Complexity 41 7 Available Software 44
References 48 1. Introduction Continuation, embedding or homotopy methods have long served as useful theoretical tools in modern mathematics. Their use can be traced back at least to such venerated
works as those of Poincaré (1881–1886), Klein (1882–1883) and Bernstein (1910). Leray and Schauder (1934) refined the tool and presented it as a global result in topology, viz., the homotopy
invariance of degree. The use of deformations to solve nonlinear systems of equations may be traced back at least to Lahaye (1934). The classical embedding methods were the
, 2000
"... . We examine the history of linear programming from computational, geometric, and complexity points of view, looking at simplex, ellipsoid, interior-point, and other methods. Key words. linear
programming -- history -- simplex method -- ellipsoid method -- interior-point methods 1. Introduction A ..."
Cited by 25 (1 self)
Add to MetaCart
. We examine the history of linear programming from computational, geometric, and complexity points of view, looking at simplex, ellipsoid, interior-point, and other methods. Key words. linear
programming -- history -- simplex method -- ellipsoid method -- interior-point methods 1. Introduction At the last Mathematical Programming Symposium in Lausanne, we celebrated the 50th anniversary
of the simplex method. Here, we are at or close to several other anniversaries relating to linear programming: the sixtieth of Kantorovich's 1939 paper on "Mathematical Methods in the Organization
and Planning of Production" (and the fortieth of its appearance in the Western literature) [55]; the fiftieth of the historic 0th Mathematical Programming Symposium that took place in Chicago in 1949
on Activity Analysis of Production and Allocation [64]; the forty-fifth of Frisch's suggestion of the logarithmic barrier function for linear programming [37]; the twenty-fifth of the awarding of the
1975 Nobe...
"... We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman
and Teng ..."
Cited by 23 (4 self)
Add to MetaCart
We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman and
- In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science , 2006
"... Abstract. The smoothed analysis of algorithms is concerned with the expected running time of an algorithm under slight random perturbations of arbitrary inputs. Spielman and Teng proved that the
shadow-vertex simplex method has polynomial smoothed complexity. On a slight random perturbation of an ar ..."
Cited by 19 (4 self)
Add to MetaCart
Abstract. The smoothed analysis of algorithms is concerned with the expected running time of an algorithm under slight random perturbations of arbitrary inputs. Spielman and Teng proved that the
shadow-vertex simplex method has polynomial smoothed complexity. On a slight random perturbation of an arbitrary linear program, the simplex method finds the solution after a walk on polytope(s) with
expected length polynomial in the number of constraints n, the number of variables d and the inverse standard deviation of the perturbation 1/σ. We show that the length of walk in the simplex method
is actually polylogarithmic in the number of constraints n. Spielman-Teng’s bound on the walk was O*(n^86 d^55 σ^-30), up to logarithmic factors. We improve this to O(log^7 n (d^9 + d^3 σ^-4)). This
shows that the tight Hirsch conjecture n − d on the length of walk on polytopes is not a limitation for the smoothed Linear Programming. Random perturbations create short paths between vertices. We
propose a randomized phase-I for solving arbitrary linear programs, which is of independent interest. Instead of finding a vertex of a feasible set, we add a vertex at
- ANNALS OF OPERATIONS RESEARCH. (SUBMITTED , 1991
"... The purpose of this paper is to survey the various pivot rules of the simplex method or its variants that have been developed in the last two decades, starting from the appearance of the minimal
index rule of Bland. We are mainly concerned with the finiteness property of simplex type pivot rules. Th ..."
Cited by 9 (1 self)
Add to MetaCart
The purpose of this paper is to survey the various pivot rules of the simplex method or its variants that have been developed in the last two decades, starting from the appearance of the minimal
index rule of Bland. We are mainly concerned with the finiteness property of simplex type pivot rules. There are some other important topics in linear programming, e.g. complexity theory or
implementations, that are not included in the scope of this paper. We do not discuss ellipsoid methods nor interior point methods. Well known classical results concerning the simplex method are also
not particularly discussed in this survey, but the connection between the new methods and the classical ones are discussed if there is any. In this paper we discuss three classes of recently
developed pivot rules for linear programming. The first class (the largest one) of the pivot rules we discuss is the class of essentially combinatorial pivot rules. Namely these rules only use
labeling and signs of the variab...
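Bland's minimal-index rule mentioned above can be made concrete with a toy dense-tableau simplex for LPs of the form max c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0 (so the slack basis is an obvious feasible start). This is an illustrative sketch, not code from the survey:

```python
def simplex_bland(c, A, b):
    """Toy dense simplex for max c.x s.t. A x <= b, x >= 0 (with b >= 0),
    using Bland's minimal-index rule, which guarantees finiteness."""
    m, n = len(A), len(c)
    # Tableau rows: original columns, then slack columns, then the RHS.
    T = [A[i] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]] for i in range(m)]
    obj = [-ci for ci in c] + [0.0] * (m + 1)   # objective row; last entry holds z
    basis = list(range(n, n + m))               # start from the slack basis
    while True:
        # Bland: entering variable = smallest index with a negative obj coefficient.
        enter = next((j for j in range(n + m) if obj[j] < -1e-9), None)
        if enter is None:
            break                               # all reduced costs nonnegative: optimal
        # Leaving variable: minimum ratio test, ties broken by smallest basic index.
        best, leave = None, None
        for i in range(m):
            if T[i][enter] > 1e-9:
                ratio = T[i][-1] / T[i][enter]
                if (best is None or ratio < best - 1e-12
                        or (abs(ratio - best) <= 1e-12 and basis[i] < basis[leave])):
                    best, leave = ratio, i
        if leave is None:
            raise ValueError("unbounded")
        # Pivot on T[leave][enter].
        piv = T[leave][enter]
        T[leave] = [v / piv for v in T[leave]]
        for i in range(m):
            if i != leave and abs(T[i][enter]) > 1e-12:
                f = T[i][enter]
                T[i] = [v - f * w for v, w in zip(T[i], T[leave])]
        f = obj[enter]
        obj = [v - f * w for v, w in zip(obj, T[leave])]
        basis[leave] = enter
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, obj[-1]

# max 3x + 5y  s.t.  x <= 4, 2y <= 12, 3x + 2y <= 18  ->  optimum 36 at (2, 6)
x, z = simplex_bland([3.0, 5.0], [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]], [4.0, 12.0, 18.0])
print(x, z)
```

Choosing the smallest eligible index for both the entering and the leaving variable rules out cycling under degeneracy, which is exactly the finiteness property the survey is concerned with.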
, 2003
"... In this paper we study the distribution tails and the moments of C (A) and log C (A), where C (A) is a condition number for the linear conic system Ax 0, x 6= 0, with A 2 IR . We consider the
case where A is a Gaussian random matrix. For this input model we characterise the exact decay rates of ..."
Cited by 8 (5 self)
Add to MetaCart
In this paper we study the distribution tails and the moments of C(A) and log C(A), where C(A) is a condition number for the linear conic system Ax ≤ 0, x ≠ 0, with A ∈ ℝ^(m×n). We consider the case
where A is a Gaussian random matrix. For this input model we characterise the exact decay rates of the distribution tails, we improve the existing moment estimates, and we prove various limit
theorems for the cases where either n or m and n tend to infinity. Our results are of complexity theoretic interest, because interior-point methods and relaxation methods for the solution of Ax ≤ 0,
x ≠ 0 have running times that are bounded in terms of log C(A) and C(A) respectively. AMS Classification: primary 90C31, 15A52; secondary 90C05, 90C60, 62H10. Key Words: condition number, random
matrices, linear programming, probabilistic analysis, complexity theory.
, 1991
"... We devise a new simplex pivot rule which has interesting theoretical properties. Beginning with a basic feasible solution, and any nonbasic variable having a negative reduced cost, the pivot
rule produces a sequence of pivots such that ultimately the originally chosen nonbasic variable enters the ba ..."
Cited by 4 (1 self)
Add to MetaCart
We devise a new simplex pivot rule which has interesting theoretical properties. Beginning with a basic feasible solution, and any nonbasic variable having a negative reduced cost, the pivot rule
produces a sequence of pivots such that ultimately the originally chosen nonbasic variable enters the basis, and all reduced costs which were originally nonnegative remain nonnegative. The pivot rule
thus monotonically builds up to a dual feasible, and hence optimal, basis. A surprising property of the pivot rule is that the pivot sequence results in intermediate bases which are neither primal
nor dual feasible. We prove correctness of the procedure, give a geometric interpretation, and relate it to other pivoting rules for linear programming. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=452603","timestamp":"2014-04-21T02:13:59Z","content_type":null,"content_length":"36755","record_id":"<urn:uuid:91b810d2-22cc-4902-a1e9-7459bc61adc4>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00639-ip-10-147-4-33.ec2.internal.warc.gz"} |