35 An Hour Is How Much A Year?
Asked by: Dennis Simmons
Asked By: Jesse Patterson Date: created: Nov 05 2024
What is $36 an hour annually?
Answered By: Caleb Anderson Date: created: Nov 05 2024
If you make $36 an hour, your yearly salary would be $74,880, your monthly salary would be $6,240, and your weekly salary would be $1,440. Assuming that you work 40 hours per week, we calculated these numbers from your hourly rate ($36 an hour), the number of hours you work per week (40 hours), the number of weeks per year (52 weeks), and the number of months per year (12 months).
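Every conversion on this page uses the same formula: hourly rate times hours per week times weeks per year, with the monthly figure being one twelfth of the yearly one. Here is a minimal Python sketch of that calculation (the function and variable names are illustrative, not from the site's calculator):

```python
def salary_from_hourly(hourly_rate, hours_per_week=40, weeks_per_year=52):
    """Convert an hourly wage to weekly, monthly, and yearly pay."""
    weekly = hourly_rate * hours_per_week
    yearly = weekly * weeks_per_year
    monthly = yearly / 12          # 12 months per year
    return weekly, monthly, yearly

weekly, monthly, yearly = salary_from_hourly(36)
print(weekly, round(monthly, 2), yearly)   # 1440 6240.0 74880
```

Swapping in any other rate from this page (for example 50 or 27) reproduces the figures quoted below.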
How much is $70 an hour annually?
$70 hourly is how much per year? If you make $70 per hour, your yearly salary would be $145,600.
How much money per hour is good?
The national mean salary in the United States is $56,310 according to the National Compensation Survey. That works out to be about $27 per hour. So in order to be above average, you have to earn more than $27 per hour. Why not be way above average and find a job that pays $30 more than the average hourly salary?
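As a quick check of that hourly figure (my own sketch, assuming a full-time 2,080-hour year):

```python
mean_annual_salary = 56_310        # National Compensation Survey figure quoted above
hours_per_year = 40 * 52           # full-time: 40 hours/week, 52 weeks/year
print(round(mean_annual_salary / hours_per_year, 2))   # 27.07 -> about $27 per hour
```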
Is $72,000 a good salary in Canada?
Average Salary in Toronto – The average salary in Toronto is $62,050, which is 14% higher than the Canadian average salary of $54,450. A person making $72,000 a year in Toronto makes 16% more than
the average working person in Toronto and will take home about $54,126.
How much is $50 an hour annually?
If you make $50 an hour, your yearly salary would be $104,000, your monthly salary would be $8,666.67, and your weekly salary would be $2,000. Assuming that you work 40 hours per week, we calculated these numbers from your hourly rate ($50 an hour), the number of hours you work per week (40 hours), the number of weeks per year (52 weeks), and the number of months per year (12 months).
What is $37 an hour annually?
If you make $37 an hour, your yearly salary would be $76,960, your monthly salary would be $6,413.33, and your weekly salary would be $1,480. Assuming that you work 40 hours per week, we calculated these numbers from your hourly rate ($37 an hour), the number of hours you work per week (40 hours), the number of weeks per year (52 weeks), and the number of months per year (12 months).
Asked By: Thomas Perez Date: created: Sep 15 2024
What is $40 an hour annually?
Answered By: Gavin Ross Date: created: Sep 17 2024
If you make $40 an hour, your yearly salary would be $83,200, your monthly salary would be $6,933.33, and your weekly salary would be $1,600. Assuming that you work 40 hours per week, we calculated these numbers from your hourly rate ($40 an hour), the number of hours you work per week (40 hours), the number of weeks per year (52 weeks), and the number of months per year (12 months).
How much is $45 an hour annually?
If you make $45 an hour, your yearly salary would be $93,600, your monthly salary would be $7,800, and your weekly salary would be $1,800. Assuming that you work 40 hours per week, we calculated these numbers from your hourly rate ($45 an hour), the number of hours you work per week (40 hours), the number of weeks per year (52 weeks), and the number of months per year (12 months).
What is $90 an hour annually?
$90 hourly is how much per year? If you make $90 per hour, your yearly salary would be $187,200. This result is obtained by multiplying your hourly rate by the number of hours you work per week (40 hours) and the number of weeks you work per year (52 weeks).
What is $38 an hour annually?
If you make $38 an hour, your yearly salary would be $79,040, your monthly salary would be $6,586.67, and your weekly salary would be $1,520. Assuming that you work 40 hours per week, we calculated these numbers from your hourly rate ($38 an hour), the number of hours you work per week (40 hours), the number of weeks per year (52 weeks), and the number of months per year (12 months).
Asked By: Caleb Diaz Date: created: Oct 10 2023
What is $39 an hour annually?
Answered By: Sean Allen Date: created: Oct 10 2023
If you make $39 an hour, your yearly salary would be $81,120, your monthly salary would be $6,760, and your weekly salary would be $1,560. Assuming that you work 40 hours per week, we calculated these numbers from your hourly rate ($39 an hour), the number of hours you work per week (40 hours), the number of weeks per year (52 weeks), and the number of months per year (12 months).
Asked By: Douglas Watson Date: created: Jul 23 2024
Is $150k a good salary in the US?
Answered By: Kevin Ramirez Date: created: Jul 23 2024
Earning $150,000 puts you well above the average salary in the U.S.; in fact, it is over double the median income, according to Census data. With this salary, you can likely afford a bigger home than most, and likely in a more desirable location. But the exact amount of house you can buy will vary depending on the rest of your financial situation, your chosen location and preferred property, as well as prevailing mortgage rates.
How much is $100 an hour annually?
$100 hourly is how much per year? If you make $100 per hour, your yearly salary would be $208,000. This result is obtained by multiplying your hourly rate by the number of hours you work per week (40 hours) and the number of weeks you work per year (52 weeks).
What’s $27 an hour annually?
If you make $27 an hour, your yearly salary would be $56,160, your monthly salary would be $4,680, and your weekly salary would be $1,080. Assuming that you work 40 hours per week, we calculated these numbers from your hourly rate ($27 an hour), the number of hours you work per week (40 hours), the number of weeks per year (52 weeks), and the number of months per year (12 months).
Asked By: Joshua Perez Date: created: Jan 11 2024
Is $50 an hour a lot?
Answered By: Geoffrey Jenkins Date: created: Jan 11 2024
Is $50 an Hour Good? – Yes, $50 an hour is actually good pay. To put things into perspective, $50 an hour is almost seven times the federal minimum wage, which as of 2022 is $7.25 per hour. Here’s another example – if you live in New York, the minimum wage as of 2022 is $13.20, so an hourly rate of $50 means you’ll be earning $36.80 more than that minimum wage. With an hourly wage of $50, you can live a comfortable life, manage your debt, and still be able to save some of your money. However, this also depends on factors such as your lifestyle, location, and financial responsibilities.
For example, if you have high debt, $50 an hour may not be good enough for you.
Asked By: Bruce Bennett Date: created: Aug 31 2023
How much is the average hourly wage in Canada?
Answered By: Abraham White Date: created: Sep 02 2023
The average hourly wage in Canada is between $28.08 and $32.69 per hour. The mining, oil and gas industry pays the highest wages, while food services and retail professionals earn the lowest wages.
This amount depends on the industry and individual skill sets.
Asked By: Howard Hughes Date: created: Jun 07 2024
How much is $35 an hour at 40 hours a week?
Answered By: Alex Roberts Date: created: Jun 10 2024
$35 hourly is how much per year? If you make $35 per hour, your yearly salary would be $72,800. This result is obtained by multiplying your hourly rate by the number of hours you work per week (40 hours) and the number of weeks you work per year (52 weeks).
How much is $35 an hour 40 hours a week?
How Much Is $35 An Hour Per Week? – $35 an hour is $1,400 per week. To break it out by week, assume you’re working a normal 40-hour week: $35 an hour multiplied by 40 hours per week is $1,400 per week of income. Working part time (about 20 hours per week), $35 an hour is about $700 per week.
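A quick sketch of the same weekly arithmetic for a few common schedules (illustrative only):

```python
hourly_rate = 35
for hours_per_week in (40, 36, 20):
    weekly_pay = hourly_rate * hours_per_week
    print(f"{hours_per_week} hours/week -> ${weekly_pay} per week")
# 40 hours/week -> $1400 per week
# 36 hours/week -> $1260 per week
# 20 hours/week -> $700 per week
```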
How much is $35 an hour at 36 hours a week?
If you make $35 per hour and work 36 hours per week, your weekly salary would be $1,260.
Probability of selecting a number that is a multiple of both 4 and 7 from the set S = {1, 2, 3, ..., 50} is
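A number is a multiple of both 4 and 7 exactly when it is a multiple of lcm(4, 7) = 28, and 28 is the only such number in S, so the probability is 1/50 = 0.02. A short enumeration check (my own sketch, not part of the site's solution):

```python
from fractions import Fraction

S = range(1, 51)
favourable = [n for n in S if n % 4 == 0 and n % 7 == 0]
print(favourable)                          # [28]
print(Fraction(len(favourable), len(S)))   # 1/50
```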
Probability and Statistics for Reliability: An Introduction
Painful as it is to many of us, the generally desirable product characteristic Reliability is heavily dependent on Probability and Statistics for measuring and describing its characteristics. This edition of Reliability Ques will only be the tip of the iceberg in this regard. Let’s start with a few basics:
• Failure Distribution: this is a representation of the occurrence of failures over time, usually called the probability density function, PDF, or f(t).
• Cumulative Failure Distribution: If you guessed that it’s the cumulative version of the PDF, you’re correct. It’s called the CDF, or F(t).
• Reliability: If we can call the CDF the unreliability of a product, then 1-F(t) must be the reliability.
With these basics, an important part of reliability is identifying, understanding, and optimizing the type of statistical distribution that represents the product. The following are a few common distributions:
• Normal Distribution: the most common distribution usually representing wearout situations (2 parameter).
• Exponential Distribution: a one parameter distribution usually used for electronic products, or products where there are all sorts of distributions tending to combine to a constant hazard rate.
• Weibull Distribution: can be used to represent a number of other distributions such as the Normal, the Exponential, and others (usually 2 parameter but can be 3 parameter). The Weibull can be
used to represent the three regions of the classic reliability “Bathtub” curve: (Region 1) the decreasing failure rate associated with infant mortality, (Region 2) the constant failure rate of
useful life, and (Region 3) the wearout period of increasing failure rate. The Weibull parameters β, called the Weibull Shape Parameters, for the three Bathtub regions are respectively <1.0, =
1.0, and >1.0.
• Binomial Distribution: used to represent situations where there are two possible outcomes, success or failure and the probability of one of the types of outcomes is known.
So far, so good. For many of you this may be a refresher, but what are some other applications?
• Poisson Distribution: used to determine the likelihood of a number of events occurring in a set of trials if the likelihood of an individual event is known.
• Hypergeometric: used to determine the likelihood of exactly “x” events in a sample of “y” given that there are “m” of the events in the total population of “n.” This distribution is similar to the Binomial, except that the sampling is done without replacement.
• Geometric: used to determine the likelihood of success at the “xth” trial when the probability of an individual event is known.
Great, now drag out the statistics tables. Sorry to disappoint, we have an easier way for FREE. The QuART PRO and QuART ER software suites include demo versions of the tools to do the calculations
for all the above distributions without cracking a book open. Let’s give it a try with some examples (a short code sketch reproducing these results follows the list):
1. If a particular type of resistor’s resistance is normally distributed with a mean of 100 ohms and a standard deviation of 5 ohms, what’s the probability of getting a resistor with a resistance
less than 85 ohms? Enter the data in QuART PRO to arrive at a probability of 0.13%, or 0.0013.
2. If the required reliability for a mission of 100 hours is 99.9%, what must the failure rate (assumed constant) be for the electronic product to meet the requirement? Enter the number of hours and
iterate the failure rate until the Reliability equals 99.9%. The failure rate will be 0.00001 failures/hour, or in more common terms 10 failures/10^6 hours.
3. What’s the reliability of a shaft at 1,000 hours if its Weibull Shape Parameter is 1.7 and its Weibull Characteristic Life (point at which 63.2% of population has failed) is 700 hours? Read a
reliability of only 15.98%.
4. In a bin of parts, where 10% are known to be bad, what’s the probability of selecting 8 out of 10 that are good? Read the result for 10 minus 8, or 2 bad parts as 19.37%.
5. What’s a developer’s risk of having his product with a true Mean-Time-Between-Failure (MTBF) of 500 hours rejected in a test of 1000 hours where the acceptable number of failures in the test is 3
or less? Trick question? Not really. Let’s make some adjustments before we go to the QuART PRO Poisson calculator. First, if the time is 1000 hours, and the MTBF is 500 hours, we’d expect 2 failures on average.
Our first calculation shows that the probability of 3 failures is 18.04%. Similarly, for 2 failures it’s 27.07%, for 1 failure it’s 27.07%, and for no failures it’s 13.53%. Therefore, the
probability of 3 failures or less is the sum, which is 85.71%. So, if the probability of 3 or fewer failures is 85.71%, then the probability of 4 or more is 14.29%, which is the developer’s risk
of having his product rejected.
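For readers without the QuART tools, the same numbers fall out of standard Python libraries. The sketch below reproduces the five examples with SciPy and the math module; the variable names and library choice are mine, not the article's.

```python
import math

from scipy.stats import binom, norm, poisson

# Example 1: P(resistance < 85 ohms) for a Normal(mean 100 ohms, sd 5 ohms) resistor
print(round(norm.cdf(85, loc=100, scale=5), 4))      # 0.0013 -> about 0.13%

# Example 2: constant failure rate needed for R = 99.9% over a 100-hour mission,
# using R(t) = exp(-lambda * t)
lam = -math.log(0.999) / 100
print(lam)                                           # ~1e-05 failures/hour = 10 failures per 10^6 hours

# Example 3: Weibull reliability at t = 1000 h, shape beta = 1.7, characteristic life eta = 700 h
print(round(math.exp(-((1000 / 700) ** 1.7)), 4))    # 0.1598 -> about 15.98%

# Example 4: probability of exactly 8 good parts in 10 draws when 10% of parts are bad
print(round(binom.pmf(8, 10, 0.9), 4))               # 0.1937 -> about 19.37%

# Example 5: developer's risk when 2 failures are expected and the test accepts 3 or fewer
p_accept = poisson.cdf(3, mu=2)                      # P(0, 1, 2 or 3 failures)
print(round(p_accept, 4), round(1 - p_accept, 4))    # 0.8571 0.1429 -> ~14.29% risk
```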
Good luck using the QuART PRO and QuART ER Free statistics calculators, and give us a call if you need help.
The Logic behind Momentum Conservation
Consider a collision between two objects – object 1 and object 2. For such a collision, the forces acting between the two objects are equal in magnitude and opposite in direction (Newton’s third law). This statement can be expressed in equation form as follows.
F1 = - F2
The forces act between the two objects for a given amount of time. In some cases, the time is long; in other cases the time is short. Regardless of how long the time is, it can be said that the time
that the force acts upon object 1 is equal to the time that the force acts upon object 2. This is merely logical. Forces result from interactions (or contact) between two objects. If object 1
contacts object 2 for 0.050 seconds, then object 2 must be contacting object 1 for the same amount of time (0.050 seconds). As an equation, this can be stated as
t1 = t2
Since the forces between the two objects are equal in magnitude and opposite in direction, and since the times for which these forces act are equal in magnitude, it follows that the impulses
experienced by the two objects are also equal in magnitude and opposite in direction. As an equation, this can be stated as
F1 * t1 = - F2 * t2
But the impulse experienced by an object is equal to the change in momentum of that object (the impulse-momentum change theorem). Thus, since each object experiences equal and opposite impulses, it
follows logically that they must also experience equal and opposite momentum changes. As an equation, this can be stated as
m1 * Δv1 = - m2 * Δv2
The Law of Momentum Conservation
The above equation is one statement of the law of momentum conservation. In a collision, the momentum change of object 1 is equal to and opposite of the momentum change of object 2. That is, the
momentum lost by object 1 is equal to the momentum gained by object 2. In most collisions between two objects, one object slows down and loses momentum while the other object speeds up and gains
momentum. If object 1 loses 75 units of momentum, then object 2 gains 75 units of momentum. Yet, the total momentum of the two objects (object 1 plus object 2) is the same before the collision as it
is after the collision. The total momentum of the system (the collection of two objects) is conserved.
A useful analogy for understanding momentum conservation involves a money transaction between two people. Let’s refer to the two people as Jack and Jill. Suppose that we were to check the pockets of
Jack and Jill before and after the money transaction in order to determine the amount of money that each possesses. Prior to the transaction, Jack possesses $100 and Jill possesses $100. The total
amount of money of the two people before the transaction is $200. During the transaction, Jack pays Jill $50 for the given item being bought. There is a transfer of $50 from Jack’s pocket to Jill’s
pocket. Jack has lost $50 and Jill has gained $50. The money lost by Jack is equal to the money gained by Jill. After the transaction, Jack now has $50 in his pocket and Jill has $150 in her pocket.
Yet, the total amount of money of the two people after the transaction is $200. The total amount of money (Jack’s money plus Jill’s money) before the transaction is equal to the total amount of money
after the transaction. It could be said that the total amount of money of the system (the collection of two people) is conserved. It is the same before as it is after the transaction.
A useful means of depicting the transfer and the conservation of money between Jack and Jill is by means of a table.

          Money Before Transaction   Money After Transaction   Change in Money
Jack      $100                       $50                       -$50
Jill      $100                       $150                      +$50
Total     $200                       $200
The table shows the amount of money possessed by the two individuals before and after the interaction. It also shows the total amount of money before and after the interaction. Note that the total
amount of money ($200) is the same before and after the interaction – it is conserved. Finally, the table shows the change in the amount of money possessed by the two individuals. Note that the
change in Jack’s money account (-$50) is equal to and opposite of the change in Jill’s money account (+$50).
For any collision occurring in an isolated system, momentum is conserved. The total amount of momentum of the collection of objects in the system is the same before the collision as after the
collision. A common physics lab involves the dropping of a brick upon a cart in motion.
The dropped brick is at rest and begins with zero momentum. The loaded cart (a cart with a brick on it) is in motion with considerable momentum. The actual momentum of the loaded cart can be
determined using the velocity (often determined by a ticker tape analysis) and the mass. The total amount of momentum is the sum of the dropped brick’s momentum (0 units) and the loaded cart’s
momentum. After the collision, the momenta of the two separate objects (dropped brick and loaded cart) can be determined from their measured mass and their velocity (often found from a ticker tape
analysis). If momentum is conserved during the collision, then the sum of the dropped brick’s and loaded cart’s momentum after the collision should be the same as before the collision. The momentum
lost by the loaded cart should equal (or approximately equal) the momentum gained by the dropped brick. Momentum data for the interaction between the dropped brick and the loaded cart could be
depicted in a table similar to the money table above.
                Momentum Before Collision   Momentum After Collision   Change in Momentum
Dropped Brick   0 units                     14 units                   +14 units
Loaded Cart     45 units                    31 units                   -14 units
Total           45 units                    45 units
Note that the loaded cart lost 14 units of momentum and the dropped brick gained 14 units of momentum. Note also that the total momentum of the system (45 units) was the same before the collision as
it was after the collision.
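A quick numeric check of the table above (a sketch; the values are the arbitrary momentum units used in the lab writeup):

```python
brick_before, cart_before = 0, 45
brick_after, cart_after = 14, 31

# Total system momentum is the same before and after the collision
print(brick_before + cart_before, brick_after + cart_after)   # 45 45

# The two momentum changes are equal in size and opposite in sign
print(brick_after - brick_before, cart_after - cart_before)   # 14 -14
```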
Collisions commonly occur in contact sports (such as football) and racket and bat sports (such as baseball, golf, tennis, etc.). Consider a collision in football between a fullback and a linebacker
during a goal-line stand. The fullback plunges across the goal line and collides in midair with the linebacker. The linebacker and fullback hold each other and travel together after the collision.
The fullback possesses a momentum of 100 kg*m/s, East before the collision and the linebacker possesses a momentum of 120 kg*m/s, West before the collision. The total momentum of the system before
the collision is 20 kg*m/s, West (review the section on adding vectors if necessary). Therefore, the total momentum of the system after the collision must also be 20 kg*m/s, West. The fullback and
the linebacker move together as a single unit after the collision with a combined momentum of 20 kg*m/s. Momentum is conserved in the collision. A vector diagram can be used to represent this
principle of momentum conservation; such a diagram uses an arrow to represent the magnitude and direction of the momentum vector for the individual objects before the collision and the combined
momentum after the collision.
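Taking east as the positive direction, the bookkeeping for this collision is just signed addition; a minimal sketch using the values above:

```python
p_fullback = +100    # kg*m/s, moving east
p_linebacker = -120  # kg*m/s, moving west

p_total = p_fullback + p_linebacker
print(p_total)       # -20 -> 20 kg*m/s, West, before and (by conservation) after the collision
```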
Now suppose that a medicine ball is thrown to a clown who is at rest upon the ice; the clown catches the medicine ball and glides together with the ball across the ice. The momentum of the medicine
ball is 80 kg*m/s before the collision. The momentum of the clown is 0 m/s before the collision. The total momentum of the system before the collision is 80 kg*m/s. Therefore, the total momentum of
the system after the collision must also be 80 kg*m/s. The clown and the medicine ball move together as a single unit after the collision with a combined momentum of 80 kg*m/s. Momentum is conserved
in the collision.
Momentum is conserved for any interaction between two objects occurring in an isolated system. This conservation of momentum can be observed by a total system momentum analysis or by a momentum
change analysis. Useful means of representing such analyses include a momentum table and a vector diagram. Later in Lesson 2, we will use the momentum conservation principle to solve problems in
which the after-collision velocity of objects is predicted.
Check Your Understanding
Express your understanding of the concept and mathematics of momentum by answering the following questions. The answer to each question is given below it.
1. When fighting fires, a firefighter must use great caution to hold a hose that emits large amounts of water at high speeds. Why would such a task be difficult?
Answer:
The hose is pushing lots of water (large mass) forward at a high speed. This means the water has a large forward momentum. In turn, the hose must have an equally large backwards momentum, making it
difficult for the firefighters to manage.
2. A large truck and a Volkswagen have a head-on collision.
a. Which vehicle experiences the greatest force of impact?
b. Which vehicle experiences the greatest impulse?
c. Which vehicle experiences the greatest momentum change?
d. Which vehicle experiences the greatest acceleration?
Answer:
a, b, c: the same for each.
Both the Volkswagen and the large truck encounter the same force, the same impulse, and the same momentum change (for reasons discussed in this lesson).
d: Acceleration is greatest for the Volkswagen. While the two vehicles experience the same force, the acceleration is greatest for the Volkswagen due to its smaller mass. If you find this hard to
believe, then be sure to read the next question and its accompanying explanation.
3. Miles Tugo and Ben Travlun are riding in a bus at highway speed on a nice summer day when an unlucky bug splatters onto the windshield. Miles and Ben begin discussing the physics of the situation.
Miles suggests that the momentum change of the bug is much greater than that of the bus. After all, argues Miles, there was no noticeable change in the speed of the bus compared to the obvious change
in the speed of the bug. Ben disagrees entirely, arguing that that both bug and bus encounter the same force, momentum change, and impulse. Who do you agree with? Support your answer.
Answer:
Ben Travlun is correct.
The bug and bus experience the same force, the same impulse, and the same momentum change (as discussed in this lesson). This is contrary to the popular (though false) belief which resembles Miles’
statement. The bug has less mass and therefore more acceleration; occupants of the very massive bus do not feel the extremely small acceleration. Furthermore, the bug is composed of a less hardy
material and thus splatters all over the windshield. Yet the greater “splatterability” of the bug and the greater acceleration do not mean the bug has a greater force, impulse, or momentum change.
4. If a ball is projected upward from the ground with ten units of momentum, what is the momentum of recoil of the Earth? ____________ Do we feel this? Explain.
Answer:
The earth recoils with 10 units of momentum. This is not felt by Earth’s occupants. Since the mass of the Earth is extremely large, the recoil velocity of the Earth is extremely small and therefore
not felt.
5. If a 5-kg bowling ball is projected upward with a velocity of 2.0 m/s, then what is the recoil velocity of the Earth (mass = 6.0 x 10^24 kg).
Answer:
Since the ball has an upward momentum of 10 kg*m/s, the Earth must have a downward momentum of 10 kg*m/s. To find the velocity of the Earth, use the momentum equation, p = m*v. This equation
rearranges to v=p/m. By substituting into this equation,
v = (10 kg*m/s)/(6*10^24 kg)
v = 1.67*10^-24 m/s (downward)
Another way to write the velocity of the earth is to write it as
0.00000000000000000000000167 m/s
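The same p = m*v rearrangement in code (a small sketch using the numbers above):

```python
m_ball, v_ball = 5.0, 2.0     # kg, m/s upward
p_ball = m_ball * v_ball      # 10 kg*m/s upward

m_earth = 6.0e24              # kg
v_earth = p_ball / m_earth    # recoil speed of the Earth, directed downward
print(v_earth)                # ~1.67e-24 m/s
```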
6. A 120 kg lineman moving west at 2 m/s tackles an 80 kg football fullback moving east at 8 m/s. After the collision, both players move east at 2 m/s. Draw a vector diagram in which the before- and
after-collision momenta of each player is represented by a momentum vector. Label the magnitude of each momentum vector.
7. In an effort to exact the most severe capital punishment upon a rather unpopular prisoner, the execution team at the Dark Ages Penitentiary search for a bullet that is ten times as massive as the
rifle itself. What type of individual would want to fire a rifle that holds a bullet that is ten times more massive than the rifle? Explain.
Answer:
Someone who doesn’t know much physics. In such a situation as this, the target would be a safer place to stand than the rifle. The rifle would have a recoil velocity that is ten times larger than the
bullet’s velocity. This would produce the effect of “the rifle actually being the bullet.”
8. A baseball player holds a bat loosely and bunts a ball. Express your understanding of momentum conservation by filling in the tables below.
Answer:
a: +40 (add the momentum of the bat and the ball)
c: +40 (the total momentum is the same after as it is before the collision)
b: 30 (the bat must have 30 units of momentum in order for the total to be +40)
9. A Tomahawk cruise missile is launched from the barrel of a mobile missile launcher. Neglect friction. Express your understanding of momentum conservation by filling in the tables below.
Answer:
a: 0 (add the momentum of the missile and the launcher)
c: 0 (the total momentum is the same after as it is before the collision)
b: -5000 (the launcher must have -5000 units of momentum in order for the total to be +0)
Cop Who Killed Amir Locke in Minneapolis Ducks Charges and Is Back on Duty
The Hennepin County Attorney’s Office will not pursue charges against the officers involved in the February shooting of Amir Rahkare Locke, a 22-year-old Black man who was fatally shot within seconds
of Minneapolis police entering an apartment with a no-knock warrant for a case that had nothing to do with him.
“Amir Locke’s life mattered,” Hennepin County Attorney Michael Freeman and Minnesota Attorney General Keith Ellison said in a Wednesday statement. “After a thorough review of all available evidence,
however, there is insufficient admissible evidence to file criminal charges in this case. Specifically, the State would be unable to disprove beyond a reasonable doubt any of the elements of
Minnesota’s use-of-deadly-force statute that authorizes the use of force by Officer [Mark] Hanneman.”
“Nor would the State be able to prove beyond a reasonable doubt a criminal charge against any other officer involved in the decision-making that led to the death of Amir Locke,” the statement added.
Freeman and Ellison also noted they had already met with the Locke family to deliver the news. Hanneman, the officer who shot and killed Locke, did not immediately respond to a request for comment.
But his bosses wasted little time in responding internally, with a Minneapolis spokesperson telling The Daily Beast Wednesday that he was no longer on leave and is already back on duty.
“The family of Amir Locke is deeply disappointed by the decision not to criminally charge Minneapolis Police Officer Mark Hanneman,” lawyers representing the Locke family, including Ben Crump, said
in a statement to The Daily Beast on Wednesday. “The family and its legal team are firmly committed to their continued fight for justice in the civil court system, in fiercely advocating for the
passage of local and national legislation, and taking every other step necessary to ensure accountability for all those responsible for needlessly cutting Amir’s life far too short.”
The decision not to press charges against the officers quickly raised fresh ire in Minneapolis, a metropolitan area stained by high-profile killings of Black men at the hands of law enforcement in
recent years. It came as the city was still grappling with demands for racial justice after three former Minneapolis police officers were convicted in a federal trial over the murder of George Floyd
in 2020.
“He did everything right. He was leading an exemplary life. He had a legal license [to carry a gun]. He was still killed,” Daphne Brown, a 50-something social-justice strategist and professor in St.
Paul, told The Daily Beast. “None of that protected him. We have no protection.”
Authorities have conceded that Locke was not the intended subject of the St. Paul homicide investigation that prompted a Minneapolis SWAT team to execute a no-knock search warrant at the Balero Flats
apartment building around 7 a.m. on Feb. 2.
Police records made public in March show that St. Paul police asked for search warrants to be conducted in Minneapolis for three apartments, including the one where Locke was shot. Multiple local
outlets have reported that St. Paul cops did not ask for the highly controversial no-knock procedure—one they themselves have avoided in recent years—only for their colleagues in Minneapolis to
insist on one, going so far as to claim it would actually improve safety.
During a Wednesday press conference, Rev. Al Sharpton said that the fight was not over, and that he stood with the Locke family in calling for a federal investigation into the shooting.
Locke’s mother, Karen Wells, stressed that she was not shocked at all about the decision because, to her, it was just another example of “Minnesota’s true colors.”
“I’m not going to give up,” Wells added. “I am not disappointed, I am disgusted with the city of Minneapolis.”
Toshira Garraway, who founded the group Families Supporting Families Against Police Violence after her fiancé was killed by cops in St. Paul, called the decision not to charge the officers in Locke’s
death “a disgrace.”
“This is horrific, this is inhumane,” she told The Daily Beast on Wednesday, adding, “This just proves that this is how Minneapolis treats Black people.”
Garraway said that she had worked closely with Locke’s family since his death, and that she was “heartbroken” to learn that they had joined the growing group of people who felt unable to receive
justice in Minneapolis.
“Amir’s death is not just heartbreaking for the family. It’s heartbreaking for all of us,” she added, breaking into tears. “I am not shocked by the decision today, but I am so sad and hurt. It’s just
another slap in the face in the community.”
Why viewers still use to read news papers when in this technological world everything is accessible on net?
• 29/11/2023 at 2:34 AM
This is very interesting, You are a very skilled blogger. I have joined your feed and look forward to seeking more of your wonderful post. Also, I have shared your web site in my social networks!
• 30/11/2023 at 5:34 PM
Howdy! I realize this is somewhat off-topic but I had to ask. Does operating a well-established blog like yours take a massive amount work? I’m completely new to blogging but I do write in my
diary everyday. I’d like to start a blog so I will be able to share my experience and thoughts online. Please let me know if you have any ideas or tips for new aspiring bloggers. Appreciate it!
• 02/12/2023 at 10:34 PM
Wow that was odd. I just wrote an extremely long comment but after I clicked submit my comment didn’t show up. Grrrr… well I’m not writing all that over again. Anyway, just wanted to say
fantastic blog!
• 04/12/2023 at 12:36 AM
This is the right blog for anyone who really wants to find out about this topic. You understand so much its almost hard to argue with you (not that I actually would want toHaHa). You definitely
put a new spin on a topic that’s been written about for a long time. Excellent stuff, just excellent!
• 04/12/2023 at 12:59 AM
I’d like to thank you for the efforts you have put in writing this blog. I am hoping to view the same high-grade blog posts from you in the future as well. In fact, your creative writing
abilities has motivated me to get my very own website now 😉
• 04/12/2023 at 8:52 PM
WOW just what I was searching for. Came here by searching for %keyword%
• 07/12/2023 at 12:25 AM
You really make it seem so easy with your presentation but I find this topic to be really something which I think I would never understand. It seems too complicated and very broad for me. I am
looking forward for your next post, I will try to get the hang of it!
• 10/12/2023 at 6:01 AM
Why users still use to read news papers when in this technological world everything is available on net?
• 13/12/2023 at 8:15 PM
Hi, Neat post. There is a problem together with your site in internet explorer, could check this? IE still is the marketplace leader and a good portion of other folks will leave out your
magnificent writing due to this problem.
• 15/12/2023 at 6:24 PM
Hurrah! After all I got a website from where I can in fact get useful data regarding my study and knowledge.
• 19/12/2023 at 11:17 PM
Nice blog here! Also your website a lot up fast! What host are you using? Can I am getting your associate link for your host? I desire my website loaded up as fast as yours lol
• 08/01/2024 at 5:42 AM
С Lucky Jet каждая минута может принести прибыль! Зайдите на официальный сайт 1win, чтобы начать играть и выигрывать.
• 10/01/2024 at 2:29 PM
Hi, I think your website might be having browser compatibility issues. When I look at your blog site in Safari, it looks fine but when opening in Internet Explorer, it has some overlapping. I
just wanted to give you a quick heads up! Other then that, terrific blog!
• 23/01/2024 at 4:46 AM
I don’t even understand how I stopped up here, however I thought this submit was good. I don’t recognize who you are however definitely you are going to a famous blogger when you are not already.
• 25/01/2024 at 10:13 PM
First off I want to say superb blog! I had a quick question that I’d like to ask if you don’t mind. I was curious to know how you center yourself and clear your mind before writing. I have had a
hard time clearing my mind in getting my thoughts out. I do enjoy writing but it just seems like the first 10 to 15 minutes are usually wasted just trying to figure out how to begin. Any ideas or
tips? Thanks!
• 02/02/2024 at 2:53 PM
I don’t even know how I ended up here, but I thought this post was good. I don’t know who you are but definitely you are going to a famous blogger if you are not already 😉 Cheers!
• 09/02/2024 at 11:53 AM
Howdy! I could have sworn I’ve been to this blog before but after browsing through some of the post I realized it’s new to me. Anyways, I’m definitely glad I found it and I’ll be bookmarking and
checking back often!
• 10/02/2024 at 9:05 PM
Thank you a bunch for sharing this with all people you really recognize what you are talking approximately! Bookmarked. Please also visit my site =). We could have a link trade contract among us
• 12/02/2024 at 4:45 AM
Hi there! This article couldn’t be written any better! Looking at this post reminds me of my previous roommate! He constantly kept talking about this. I am going to forward this article to him.
Pretty sure he’ll have a very good read. Thank you for sharing!
• 21/02/2024 at 1:35 PM
Undeniably consider that that you stated. Your favourite justification appeared to be at the internet the simplest thing to be mindful of. I say to you, I definitely get irked even as other folks
consider worries that they plainly do not recognise about. You controlled to hit the nail upon the top as smartlyand also defined out the whole thing with no need side effect , other people can
take a signal. Will likely be back to get more. Thank you
• 12/03/2024 at 10:35 AM
What a stuff of un-ambiguity and preserveness of precious knowledge concerning unexpected feelings.
• 12/03/2024 at 10:56 AM
Thank you for sharing your info. I truly appreciate your efforts and I am waiting for your next post thank you once again.
• 12/03/2024 at 11:16 AM
I think this is one of the most important information for me. And i’m glad reading your article. But wanna remark on few general things, The website style is perfect, the articles is really nice
: D. Good job, cheers
• 12/03/2024 at 11:36 AM
Nice blog here! Also your site loads up fast! What host are you using? Can I get your affiliate link to your host? I wish my site loaded up as fast as yours lol
• 12/03/2024 at 11:56 AM
I like the valuable information you provide in your articles. I will bookmark your weblog and check again here frequently. I am quite certain I will learn many new stuff right here! Good luck for
the next!
• 12/03/2024 at 12:16 PM
Yesterday, while I was at work, my sister stole my iPad and tested to see if it can survive a twenty five foot drop, just so she can be a youtube sensation. My iPad is now broken and she has 83
views. I know this is entirely off topic but I had to share it with someone!
• 12/03/2024 at 12:36 PM
Hi there this is somewhat of off topic but I was wondering if blogs use WYSIWYG editors or if you have to manually code with HTML. I’m starting a blog soon but have no coding experience so I
wanted to get advice from someone with experience. Any help would be greatly appreciated!
• 12/03/2024 at 12:57 PM
Wow, incredible blog layout! How long have you been blogging for? you make blogging look easy. The overall look of your site is magnificent, let alone the content!
• 12/03/2024 at 1:17 PM
It’s difficult to find educated people for this topic, but you sound like you know what you’re talking about! Thanks
• 12/03/2024 at 1:39 PM
Hello there, just became aware of your blog through Google, and found that it is really informative. I’m gonna watch out for brussels. I will appreciate if you continue this in future. Lots of
people will be benefited from your writing. Cheers!
• 12/03/2024 at 2:02 PM
Link exchange is nothing else but it is only placing the other person’s webpage link on your page at proper place and other person will also do same in favor of you.
• 12/03/2024 at 2:22 PM
Have you ever thought about creating an e-book or guest authoring on other websites? I have a blog based upon on the same information you discuss and would really like to have you share some
stories/information. I know my subscribers would enjoy your work. If you are even remotely interested, feel free to send me an e mail.
• 12/03/2024 at 2:43 PM
Awesome things here. I’m very satisfied to peer your article. Thank you so much and I’m looking forward to touch you. Will you please drop me a mail?
• 12/03/2024 at 3:04 PM
Hi would you mind letting me know which hosting company you’re utilizing? I’ve loaded your blog in 3 completely different internet browsers and I must say this blog loads a lot quicker then most.
Can you suggest a good internet hosting provider at a honest price? Thank you, I appreciate it!
• 12/03/2024 at 3:24 PM
Hi there! Do you use Twitter? I’d like to follow you if that would be ok. I’m undoubtedly enjoying your blog and look forward to new updates.
• 12/03/2024 at 3:44 PM
Someone necessarily help to make significantly articles I might state. This is the first time I frequented your web page and so far? I amazed with the research you made to create this actual
publish incredible. Wonderful process!
• 12/03/2024 at 4:03 PM
Howdy excellent blog! Does running a blog similar to this take a massive amount work? I have virtually no knowledge of programming but I was hoping to start my own blog soon. Anyway, if you have
any suggestions or tips for new blog owners please share. I know this is off topic but I just needed to ask. Appreciate it!
• 12/03/2024 at 4:23 PM
Nice blog! Is your theme custom made or did you download it from somewhere? A design like yours with a few simple adjustements would really make my blog jump out. Please let me know where you got
your design. Thank you
• 12/03/2024 at 4:44 PM
It’s an awesome post designed for all the internet people; they will take benefit from it I am sure.
• 12/03/2024 at 5:03 PM
Thanks for finally writing about > %blog_title% < Liked it!
• 12/03/2024 at 5:24 PM
Hello, I think your site might be having internet browser compatibility issues. When I look at your website in Safari, it looks fine however, if opening in IE, it has some overlapping issues. I
just wanted to give you a quick heads up! Besides that, great website!
• 12/03/2024 at 5:43 PM
I blog quite often and I really appreciate your content. The article has really peaked my interest. I am going to book mark your website and keep checking for new information about once a week. I
subscribed to your RSS feed as well.
• 12/03/2024 at 6:03 PM
Fantastic beat ! I wish to apprentice while you amend your web site, how can i subscribe for a blog site? The account aided me a acceptable deal. I had been tiny bit acquainted of this your
broadcast provided bright clear concept
• 12/03/2024 at 6:23 PM
Hey there, I think your blog might be having browser compatibility issues. When I look at your blog site in Firefox, it looks fine but when opening in Internet Explorer, it has some overlapping.
I just wanted to give you a quick heads up! Other then that, superb blog!
• 12/03/2024 at 6:43 PM
I blog frequently and I really appreciate your content. This great article has really peaked my interest. I will bookmark your site and keep checking for new information about once a week. I
subscribed to your RSS feed as well.
• 12/03/2024 at 7:03 PM
Interesting blog! Is your theme custom made or did you download it from somewhere? A design like yours with a few simple adjustements would really make my blog shine. Please let me know where you
got your design. With thanks
• 12/03/2024 at 7:23 PM
This is my first time pay a visit at here and i am really impressed to read all at one place.
• 12/03/2024 at 7:44 PM
I want to to thank you for this good read!! I definitely enjoyed every little bit of it. I’ve got you book-marked to check out new stuff you post
• 12/03/2024 at 8:04 PM
My brother suggested I might like this blog. He used to be totally right. This submit actually made my day. You cann’t consider just how much time I had spent for this information! Thank you!
• 12/03/2024 at 8:24 PM
With havin so much written content do you ever run into any problems of plagorism or copyright violation? My site has a lot of exclusive content I’ve either created myself or outsourced but it
appears a lot of it is popping it up all over the web without my authorization. Do you know any techniques to help stop content from being ripped off? I’d certainly appreciate it.
• 12/03/2024 at 8:45 PM
I’m not sure where you are getting your info, but good topic. I needs to spend some time learning more or understanding more. Thanks for fantastic information I was looking for this information
for my mission.
• 14/03/2024 at 8:32 PM
I’m not sure why but this blog is loading extremely slow for me. Is anyone else having this issue or is it a problem on my end? I’ll check back later and see if the problem still exists.
• 19/03/2024 at 12:09 AM
Hello there! This post couldn’t be written any better! Going through this post reminds me of my previous roommate! He always kept talking about this. I am going to forward this article to him.
Pretty sure he’ll have a very good read. Many thanks for sharing!
• 10/04/2024 at 1:59 AM
Good day! I just would like to give you a huge thumbs up for the great info you’ve got here on this post. I will be coming back to your blog for more soon.
• 10/04/2024 at 3:17 PM
Good day I am so glad I found your site, I really found you by mistake, while I was browsing on Askjeeve for something else, Anyhow I am here now and would just like to say thank you for a
incredible post and a all round exciting blog (I also love the theme/design), I don’t have time to browse it all at the minute but I have saved it and also included your RSS feeds, so when I have
time I will be back to read much more, Please do keep up the fantastic job.
• 07/05/2024 at 6:14 PM
I am actually thankful to the owner of this web site who has shared this impressive article at at this place.
• 15/05/2024 at 7:06 PM
Крупный учебный и научно-исследовательский центр Республики Беларусь. Высшее образование в сфере гуманитарных и естественных наук на 12 факультетах по 35 специальностям первой ступени образования
и 22 специальностям второй, 69 специализациям.
• 24/05/2024 at 10:40 AM
Wow, incredible blog layout! How long have you been blogging for? you make blogging look easy. The overall look of your site is great, let alone the content!
• 30/05/2024 at 1:11 PM
Currently it seems like Movable Type is the top blogging platform out there right now. (from what I’ve read) Is that what you’re using on your blog?
• 03/06/2024 at 11:35 AM
After looking into a number of the blog articles on your site, I truly like your way of blogging. I bookmarked it to my bookmark site list and will be checking back soon. Please check out my web
site as well and let me know what you think.
• 10/06/2024 at 4:47 PM
I don’t even know how I ended up here, but I thought this post was good. I don’t know who you are but definitely you are going to a famous blogger if you are not already 😉 Cheers!
• 12/06/2024 at 6:17 PM
Hmm it seems like your website ate my first comment (it was extremely long) so I guess I’ll just sum it up what I submitted and say, I’m thoroughly enjoying your blog. I as well am an aspiring
blog blogger but I’m still new to the whole thing. Do you have any suggestions for first-time blog writers? I’d certainly appreciate it.
• 26/06/2024 at 5:03 PM
• 27/06/2024 at 6:54 PM
Thank you for some other informative website. Where else may I am getting that kind of info written in such a perfect method? I have a project that I am simply now operating on, and I have been
at the glance out for such information.
• 01/07/2024 at 3:26 PM
Today, I went to the beach with my kids. I found a sea shell and gave it to my 4 year old daughter and said “You can hear the ocean if you put this to your ear.” She put the shell to her ear and
screamed. There was a hermit crab inside and it pinched her ear. She never wants to go back! LoL I know this is entirely off topic but I had to tell someone!
• 05/07/2024 at 4:57 PM
Автомойка самообслуживания под ключ становится всё популярнее. Это экономичный и простой способ начать своё дело с минимальными вложениями.
• 09/07/2024 at 12:42 PM
Hey there just wanted to give you a quick heads up. The text in your post seem to be running off the screen in Firefox. I’m not sure if this is a format issue or something to do with internet
browser compatibility but I thought I’d post to let you know. The style and design look great though! Hope you get the problem resolved soon. Many thanks
• 15/08/2024 at 1:26 PM
I simply could not leave your web site prior to suggesting that I extremely enjoyed the standard information a person supply on your visitors? Is going to be back steadily in order to check out
new posts
• 22/08/2024 at 8:29 PM
This website truly has all of the info I wanted about this subject and didn’t know who to ask.
• 13/09/2024 at 1:26 AM
Профессиональный сервисный центр по ремонту бытовой техники с выездом на дом.
Мы предлагаем: ремонт крупногабаритной техники в москве
Наши мастера оперативно устранят неисправности вашего устройства в сервисе или с выездом на дом!
• 19/10/2024 at 10:30 AM
Профессиональный сервисный центр по ремонту бытовой техники с выездом на дом.
Мы предлагаем: сервис центры бытовой техники москва
Наши мастера оперативно устранят неисправности вашего устройства в сервисе или с выездом на дом! | {"url":"https://joybanglabd.com/cop-who-killed-amir-locke-in-minneapolis-ducks-charges-and-is-back-on-duty%EF%BF%BC/","timestamp":"2024-11-06T02:34:43Z","content_type":"text/html","content_length":"404327","record_id":"<urn:uuid:be43161a-c6bc-48fa-b598-fefaea08724d>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00576.warc.gz"} |
Returning averages using associated values among several datasets
I'm not completely sure how to explain it... I'm building a sales dashboard, and I need to calculate several different averages associated with certain associates (volume, units, etc.). The data I
receive isn't formatted in the easiest way, so I'm looking for a formula that:
1. finds each time the associate's name is listed in column a (which can be multiple times if the associate has multiple loans) and then
2. averages the data in column b associated with the associate and returns the average in column c.
So if Lacy Lender has 3 loans, it will find her name 3 times among the dataset of 100, use the loan volume of each of her 3 loans (found in column b), and then return her average loan volume.
I'll need to run that formula for each LO.
The data is also found in a different sheet, but I do know how to reference different sheets in the formula.
Hopefully this is easy - can anyone help? Thank you!!
• You can use SUMIF to total all the loan amounts in col b if the name in col A matches the one in the row.
=SUMIF([Col A]:[Col A], [Col A]@row, [Col B]:[Col B])
You can use COUNTIF to find the number of times the name in the row appears in col A.
=COUNTIF([Col A]:[Col A], [Col A]@row)
If you divide the SUM of loan amounts by the COUNT of loans, you will have the average loan amount.
So, your formula (in Col C) would be
=SUMIF([Col A]:[Col A], [Col A]@row, [Col B]:[Col B]) / COUNTIF([Col A]:[Col A], [Col A]@row)
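To make that concrete with made-up numbers: if Lacy Lender appears three times in column A with loan volumes of 200,000, 300,000, and 400,000 in column B, the SUMIF returns 900,000, the COUNTIF returns 3, and the formula puts 300,000 in column C on each of her rows.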
| {"url":"https://community.smartsheet.com/discussion/115755/returning-averages-using-associated-values-among-several-datasets","timestamp":"2024-11-06T07:36:01Z","content_type":"text/html","content_length":"390712","record_id":"<urn:uuid:2050b4ba-5f53-441d-84ef-c1a8830601f9>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00527.warc.gz"}
The UCD Mathematics Department is located in the newly-completed Mathematical Sciences Building. The department is headed by Chairperson Dan Romik. The Chair of the Graduate Group in Applied
Mathematics (GGAM) is Matthias Köppe. The official office for departmental matters is MSB 1130. The staff are extremely helpful.
The department has been, and continues to be, home to many brilliant researchers. For a number of years Bill Thurston, Fields Medalist c/o his Geometrization Conjecture, was a member of the faculty.
Mikhail Khovanov was in the department when he developed the famous homology theory that bears his name. Wayne Rosing is a senior fellow involved with the Large Synoptic Survey Telescope. Roger Wets,
a professor in the department, is a managing editor of the Journal of Convex Analysis. A Mathematics ArXiv front and the Journal of Mathematical Physics are also run from within the department. Also
of note, Craig Tracy has the famous Tracy-Widom distribution named for him.
Degrees Offered
The Math department offers A.B. and B.S. degrees, as well as a minor. The difference between the A.B. and B.S. is, as usual, that the A.B. degree has less strict course guidelines and (usually) fewer required courses. The A.B. degree is probably better suited for students pursuing another major or those who wish to tailor a more unique upper division program. Do not forget to
fulfill the three quarter foreign language requirement if you plan on getting an A.B. The minor is simply 20 units in upper division courses.
Within the B.S., students must choose a track within the major that decides the course of upper-division study. The tracks are "General Mathematics", "Mathematical and Scientific Computation", "Applied Mathematics", and "Mathematics for Secondary Teaching". Basically, if you are interested in pure mathematics, the general track is the most appropriate. While some people used to think that the pure track provides the best preparation for graduate school in other mathematical disciplines (e.g. theoretical physics), the applied track is much more rigorous than it used to be. The only difference between the two is that in the pure track you take three quarters of abstract algebra, as opposed to some abstract algebra and coursework in mathematical applications. The Computational
track with a computing emphasis seems to attract a lot of students from computer science who wish to double-major in Mathematics. However, the computational track also has a biology emphasis option.
This particular emphasis attracts people who like the interface between computers, mathematics and biology. The other option for those who are interested in mathematical biology is taking any of the
three graduate-research-oriented tracks (general, applied, or scientific computation with a biology emphasis) and minoring in Quantitative biology and bioinformatics and/or doing research in the CLIMB
program. The applied track focuses more on the theory behind solving problems in the specialty field of your interest (biology, physics, engineering, economics...), while the computational track
emphasizes the computer programming used to solve such problems.
The Teaching track is, well, for those who wish to become high school (or earlier) teachers. The program doesn't incorporate getting teaching credentials, but that's something you have to do on your own. However, if you are interested in teaching, you should consider the MAST program or working as a group tutor at the learning skills center to give you some experience.
The General Track requires completion of the 127 series, the 150 series, 135A, and 185A. Computational/Applied tracks require the 125 series, 150A, and 135A. 185A is also required for the applied
track, but not the computational track. Both the applied and computational tracks require the completion of 128AB (computational must also complete 128C). Teaching requires the 125 series, 111, 115A,
135A, 141, and 150A. After taking these required classes for the specific track, one may choose between several restricted classes for the completion of their major.
There are two main graduate programs: Pure Mathematics (M.A. and Ph.D.), Applied Mathematics (M.S. and Ph.D.). There used to be a Teaching Mathematics (M.A.T.), but the math department phased the
M.A.T. program out. All students must take the MAT 201 series in their first year. Those in the pure track must also take MAT 250AB. Those in the applied track used to take MAT 119A, an undergraduate
class, but now get to take the new MAT 207 series in applied methods.
Official Websites
Jesus De Loera — Math 145, Math 165, Math 168 http://www.math.ucdavis.edu/~deloera
• One of the best professors I have ever had. Very enthusiastic and passionate about mathematics and especially so in his research areas. At first, I thought he was really intimidating, but as time
has passed, he's actually one of the most personable faculty in the mathematics department. SachinSalgaonkar
Abigail Thompson — Math 127B, Math 141, Math 145 http://www.math.ucdavis.edu/~thompson
• I had her for Math 127B and liked her a lot, her tests were kinda on the easy side though :) BryanBell
Craig Tracy — Math 22B, Math 131, Math 205 http://www.math.ucdavis.edu/~tracy
• Clever man. I took 22B and 131 with him, and he went extremely in depth with those courses, using real-world examples from engineering and physics. It was slightly terrifying. The 131 class went
full-bore with derivations, clearly a learn-or-die situation, requiring a blue book for the final. But when we got there for the final showdown, and he passed out the test, we saw that it was
just 20 multiple-choice questions, no written section at all. Easy... too easy. Turns out he had been watching the TAs threatening to strike that week, and he didn't want to risk having to grade
all those written tests himself. — EricTalevich
Lower Division
So, everyone has to take Calculus. We have three Calculus series: 21, 17, and 16. 21 is for people who will take more Mathematics courses after the series, 17 is the newly added "Calculus for Biosci
majors" (basically 16 with examples drawn from bioscience), and 16 is for those who need only Calculus for their further work. This means Science, Engineering, and Mathematics type majors have to
take 21, most bio take 17, and Social Science / some Biology take 16. What you take after this depends highly on your major.
22A - Linear Algebra. Starts from the very beginning: Matrix addition, dot products, cross products, inverses, Gaussian elimination, determinants, eigenvalues and eigenvectors. An additional 1-unit
section in MATLAB (22AL) is often taken concurrently, in order to show beginning students the utter futility of the grunt work they do by hand.
• It's my opinion that this is the most confused course in the department. It starts off with very basic operations on matrices, which are used to solve basic linear equations. This seems to
encompass the majority of the class no matter how it's taught. Then the instructor must also use this class to prepare advancing students for material in upper division mathematics courses, and
must focus a lot on vector spaces, change of basis, and linear operators (with their eigenvalues, eigenvectors). This leads to a tedious, calculation-based course (for the majority) and then a
half-assed effort at a real linear algebra course. The 167 course tries to remedy this but fails due to the amount of material it must make up for. —PhilipNeustrom
□ I've seen similar problems at other universities. Few schools seem to be able to put together a coherent set of courses dealing with linear algebra. —RoyWright
□ "It's my understanding that the FRC is trying to restructure some of the mid-level classes to better prepare advancing students in this regard." —MarianneWaage
• "This class requires concurrent enrollment in 22AL, or alternatively Engineering 6. —DavidPoole
□ Concurrent enrollment in Engineering 6 does not excuse you from the MATLAB requirement; you will need to take 22AL too. Why they do that is a mystery. —HarrisonM
☆ But if you already have taken ENG 6, you are excused from 22AL. Or if you already have knowledge of MATLAB. Here is the course description from the catalog:
22A. Linear Algebra (3)
Lecture—3 hours. Prerequisite: nine units of college mathematics and Engineering 6 or knowledge of Matlab
or course 22AL (to be taken concurrently). Matrices and linear transformations, determinants, eigenvalues,
eigenvectors, diagonalization, factorization.
Not open for credit to students who have completed course 67.—I, II, III. (I, II, III.)
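Not part of the course, but as an illustration of what the computational side looks like, here is a rough Python/NumPy equivalent of the MATLAB-style computations 22AL has you do (the matrix and vector are made up for the example):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)        # solve Ax = b (Gaussian elimination under the hood)
vals, vecs = np.linalg.eig(A)    # eigenvalues and eigenvectors of A
print(x, vals)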
22B - Differential Equations. You learn to solve and classify basic ordinary differential equations. There may or may not be very complex applications shown, depending on the professor.
• Taught by Dr. John Hunter, this may have been my favorite lower division class.
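For a rough sense of what solving an ODE numerically looks like (the course itself is about solving them by hand), here is a sketch using SciPy; the equation y'' + y = 0 and the initial conditions are made up for illustration:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # rewrite y'' + y = 0 as a first-order system: y[0] = y, y[1] = y'
    return [y[1], -y[0]]

sol = solve_ivp(rhs, (0.0, 2 * np.pi), [1.0, 0.0], max_step=0.01)
print(sol.y[0][-1])   # should be close to cos(2*pi) = 1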
23 - Introduction to Numerology. Covers the early history of numerology, the basics of theoretical numerology, and a few important applications. Sometimes BarnabasTruman is the TA for this class.
Offered whenever the year in the Erisian reckoning of time doesn't relate to the number five in any way whatsoever.
67 - Modern Linear Algebra. Similar to 22A but more theory-oriented. Expect to see plenty of proofs on your tests, as this class is meant to lead into upper division mathematics.
Upper Division
108 - Introduction to Abstract Mathematics. This is perhaps the most talked-about course in the department. This is a "proof class" in the sense that you're supposed to learn how to give a good,
rigorous mathematical argument. You can learn a lot in this class, but most people have bad experiences overall. Students tend to come out of it with a certain understandable level of anality (this isn't a word), but it is sometimes hard for them to learn that not every argument has to be given in strict propositional and existential-quantifier terms. Back when this class was
required for all math majors, it was considered to be the weeder class. When taught by some professors the mean grade can be as low as a C-. All Computer Science majors up until 2008 had to take this
before they graduated, hence most of the people in the course were Computer Science majors.
111 - History of Mathematics. Don't be fooled by the title. If you are a history major with no background in math, do not take this course; there is actually a lot of math involved, just not as much as in other upper division math courses. Students learn some ancient cultures' mathematics and then the history of Western math through the latter half of the millennium.
114 - Convex Geometry. A convex region is one in which the entire line segment between any two points of the region lies in the region. So the interior of a triangle is convex, as is a circle, but a star is not. We will study the geometry of convex regions in 2, 3, 4, and many dimensions, with a lot of help from linear algebra, covering both general theory and various interesting examples and
constructions of convex sets. Exact topics may depend to some extent on the interests of the students. —AlexanderWoo, who is teaching this Winter 2006
115A - Number Theory. You learn basic properties of congruences, prime numbers, diophantine equations, and learn some interesting functions (such as Euler's function and the Moebius function). The
RSA algorithm for public-key encryption may also be covered. People who have taken computer science courses may find the complexity of the algorithms interesting.
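For the curious, here is a toy sketch (not taken from the course) of how the RSA idea works, using primes far too small to be secure:

# Toy RSA with tiny primes; real keys use primes hundreds of digits long.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent (modular inverse; Python 3.8+)

message = 1234
cipher = pow(message, e, n)          # encrypt: m^e mod n
assert pow(cipher, d, n) == message  # decrypt: c^d mod n recovers m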
115BC are a continuation of A and topics are the choice of the instructor.
116 - Differential Geometry. A slight continuation of 21D. You study curves and surfaces and their curvature properties using vector analysis and differential geometry.
The philosophical implications of the course are profound. Differential geometry provides a context in which one can study many kinds of (non-Euclidean) geometries through the lens of calculus.
Geometries will often be weird and interesting. For example, let's look at the geometry on a sphere. A differential equation tells us that the great circles (equators of the sphere) are the
analogs of straight lines. Thus, in a small enough region, the straight lines will be the paths that minimize distance between points. That's the same as in Euclidean geometry. On the other hand, two
different equators will always intersect, so there are no "parallel" lines in the geometry of the sphere. Because the heavy machinery of differential calculus is used, such geometric calculations can
be made in an efficient manner. It is much easier to see what is going on than if the elegant, though cumbersome, axiomatic method of Euclid were the main tool.
This class studies curves and surfaces, but that is only the beginning of differential geometry. When we speak about a curve, we are considering a 1-dimensional object embedded in our 3-dimensional
space. When we talk about a surface, we are considering something 2-dimensional embedded in 3-dimensional space. But we only put the 3-dimensional space in because we perceive ourselves to live in
3-D space, so that seems the most natural surrounding for our curve or surface.
In the nineteenth century, Riemann first realized that we may divorce the curve and surface from the ambient 3-D space. Furthermore, why limit ourselves to 1- or 2-dimensional objects? We can
consider 4, 5, or N dimensions—as many as we want for a purpose at hand. Riemann called these N-dimensional surfaces, liberated from any surrounding space, manifolds. Einstein's theory of General
Relativity considers the space-time continuum to be a 4-dimensional manifold. Gravity is curvature in the manifold, and particles under the influence of gravity follow the straight lines in the
non-Euclidean geometry.
118A - Partial Differential Equations. You learn some methods of solving basic, special-case, PDEs including separation of variables. A major emphasis is placed on understanding the wave and
diffusion/heat equations with various boundary conditions. The class also covers classical Fourier series and its application to solving PDEs on finite intervals for the last couple weeks of class.
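As a quick taste of what separation of variables looks like (a generic sketch, not the course's exact notation): for the heat equation u_t = k u_{xx}, guessing u(x,t) = X(x)T(t) leads to

\frac{T'(t)}{k\,T(t)} = \frac{X''(x)}{X(x)} = -\lambda,

so the single PDE splits into two ODEs, one in t and one in x, whose solutions are then matched to the boundary conditions.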
118B - Partial Differential Equations. You learn about Green's Functions, a very useful way to solve some linear PDEs. Unfortunately, the Green's Function is usually quite difficult to determine. It
depends on the geometry of the boundary conditions, and involves quite a bit of work even for simple geometry. For more complicated geometry, the problem becomes intractable. Also, Fourier series
solutions are studied in more depth. The sines and cosines in a Fourier series can be thought of as a "basis" for the space of functions, in the sense of linear algebra. You learn about several
different kinds of "bases" in that spirit. Selected topics from math 118C may be covered if 118C is not being offered the following quarter, such as distributions, Fourier transforms, and calculus of
variations. The difficulty level tends to be stepped up a notch in B and C because the material is less intuitive.
118C - Partial Differential Equations. Hands down, the most thrilling of the 118 series. Like a cathedral among chapels. You learn the theory of distributions—a distribution being a mysterious "generalized function". For example, the delta function is a distribution that is 0 everywhere but at x = 0, and has an integral of 1. But this cannot be a function; it must be something more
general! It turns out that the delta function and Green's Function are intimately related. Moreover, you learn about an extremely useful operation called convolution. You learn about Laplace and
Fourier Transforms. The power of these combined techniques in essence allows you to solve any linear PDE.
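In symbols (these are the standard definitions, sketched here for reference rather than taken from the course): the delta function acts on a smooth test function \phi by

\int_{-\infty}^{\infty} \delta(x)\,\phi(x)\,dx = \phi(0),

convolution is defined by

(f * g)(x) = \int_{-\infty}^{\infty} f(y)\,g(x - y)\,dy,

and if G is the Green's function of a constant-coefficient linear operator L (meaning L G = \delta), then u = G * f solves L u = f.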
119A - Ordinary Differential Equations. ODE's are a vital part of most sciences. This is sometimes referred to as "the Phase-plane class." It focuses on phase planes, stability of fixed points,
bifurcations, classification of singularities, and various other forms of analyzing ODE's. About half of the class is generally composed of bored applied math graduate students.
119B - Ordinary Differential Equations. This class focuses on chaotic differential equations, maps, and fractals (some professors add a variety of topics that may have been skimmed over in 119A).
Even though this course holds the majority of the interesting material on dynamical systems, most grad students do not continue on to 119B (mainly because it is not required). As a result the class
is often small and intimate. Depending on the professor this course may contain some programming, although no previous programming experience is expected.
• This is by far the highlight of the two-part series. A lot of what you learn in 119A is applied to more interesting systems. Maps are introduced and explored. This class tends to be small and
hence it is often more project/exploratory oriented. If you are an applied math major this may be one of the more important classes you take, because if you get the right professor, you will get
to do a project related to your field of application. In spring 2007 Dr. Biello did a wonderful job of engaging this class in the material. If you didn't hate 119A, I urge you to continue,
because it got a lot more fun during the second quarter.
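To give a concrete feel for the "maps" mentioned above (an illustrative sketch in Python, not course material), the logistic map x_{n+1} = r x_n (1 - x_n) is a standard example of a simple rule with complicated behavior:

def logistic_orbit(r, x0, n):
    # iterate the logistic map n times starting from x0
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

print(logistic_orbit(3.2, 0.2, 10))   # settles toward a period-2 cycle
print(logistic_orbit(4.0, 0.2, 10))   # bounces around chaotically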
124 - Mathematical Biology. This course focuses on the mathematics used to model and analyze many biological systems. Topics tend to include a lot of differential equations and linear algebra, with
applications to cell biology, neuroscience, and ecology.
127A - Real Analysis (formerly 25 - Advanced Calculus). A course that gives the fundamentals necessary for Real Analysis. Topics cover: sets, induction, infimum, supremum, sequences, series, proof
writing, and some more properties of real numbers. This is a proof-oriented class that leads into upper division math.
127BC - Real Analysis (formerly 125AB). This is a series in elementary real analysis. This means that a lot of the material will be familiar to you from your previous courses but will be set in a
much more rigorous framework and worked with from there. 125A will relate to material covered in 21A and 125B will relate to material covered in 21BCD and will also use tools learned in 22A/67. After
covering basic 1D integration, the B course focuses on developing more advanced topics in elementary analysis such as the total derivative (or Frechet derivative), the implicit/inverse function
theorems, Jordan regions, multivariate integration, and change of variables.
128ABC - Numerical Analysis. This is not a "series" and can be taken in any order. The topics differ but all focus on developing algorithmic methods of numerically solving or approximating
mathematical problems. Examples include spline interpolation and numerical differentiation and integration. Involves a fair amount of programming in MATLAB.
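As a small example of the flavor of these algorithms (sketched here in Python for convenience; the courses themselves work in MATLAB), the composite trapezoid rule approximates an integral from equally spaced samples:

import math

def trapezoid(f, a, b, n):
    # composite trapezoid rule with n subintervals on [a, b]
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

print(trapezoid(math.sin, 0.0, math.pi, 1000))   # close to the exact value, 2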
129 - Fourier Analysis (new course as of F2006). This course fills the gap for those who need to learn tools like Fourier series and Fourier transforms but cannot afford the time to take 119 and 118.
133 - Mathematical Finance (new course as of F2006). Will first be offered W2007.
135A (131 prior to F2006) - Probability Theory. This is an introductory course in probability. Covers events, sample spaces, random variables, expectation, mean (and other moments), density, mass,
and distribution functions (along with various examples of popular distributions), moment-generating and characteristic functions, and various limit theorems. There may be fairly complex/advanced
examples presented. Now modified to differ from Statistics 131 (details pending).
• When I took the course last quarter (W2012), the course ended at sums of expectations. Moment-generating functions, bounds such as the Markov and Chebyshev Inequalities, were all reserved for
135B. SachinSalgaonkar
135B (132A prior to F2006) - Stochastic Processes. A continuation of 135A in the direction of stochastic processes, i.e. those that change randomly with time. Covers topics such as branching processes and Markov chains.
141 - Euclidean Geometry. This course has typically been taught as an axiomatic, slow, and thorough treatment of Euclidean geometry. In recent years it has devoted much (if not most) of the course time to discussion of alternative geometries such as spherical and hyperbolic. The course starts with axiomatic rules and systems and then moves on to comparing and contrasting the axioms, theorems, and ideas in hyperbolic, Euclidean, and spherical surfaces. However, sometimes the course is less rigorous and more exploratory (i.e. building hyperbolic planes out of paper). It is a good course to take after 108 when you are still getting used to proofs. This is because the proofs will be less abstract (than, say, algebra or topology, or even analysis) and deal with familiar concepts.
145 - Combinatorics. This is supposed to be a fun class. You learn basic counting methods, and learn about generating functions and recurrence relations. Generally the last half of the class is spent
on graph theory. You learn basic concepts about graphs, trees, optimum spanning trees, colorings, and bipartite graphs.
146 - Algebraic Combinatorics (149A prior to F2006).
147 - Topology. This is a basic course on point-set and combinatorial topology. Topology generalizes the important ideas from analysis/calculus into a more abstract setting. Instead of having a
notion of exact distance, you now only have a notion of "closeness". Intuitively, the closer two points are together, the more "open sets" contain both of them.
You describe a topological space by defining what its "open sets" are. Intuitively, these correspond to open intervals like (0,1) and (5, 15) in the real line. The open sets have to obey certain
rules, and this makes their definition into a formal game. Sometimes, you don't have to define all the open sets. You just define a subcollection or "basis" that generates the open sets. These
correspond to open balls in analysis.
A "closed set" is the complement of an open set. Intuitively, a closed set contains its boundary (if the boundary exists). Closed subsets of the real line are [0,1] and [5,15], for example. There's
more to closed sets than just being the complement of open sets. If you take any convergent sequence in the closed set, the limit must also be in the closed set. In metric spaces, this gives an alternate way to define closed sets.
Topology allows us to define connectedness. A topological space is "connected" if its only subsets that are both closed and open are the whole space and the empty set.
One central question of topology is, "When can a topological space be made into a metric space?" That is, when can we place a notion of distance on it? We can try to answer this question by examining
the behavior of the real line (and other metric spaces). For instance, in a metric space, take two closed sets that are disjoint (i.e., their intersection is void). It is possible to surround each of
them by an open set, so that the open sets are also disjoint. We call this "separating" the closed sets. This leads us to define separation axioms for arbitrary topological spaces—what kind of sets
can be separated by open sets? The axiom just described is called "normality".
The weakest separation axiom one can demand is that points be closed. Slightly stronger is the Hausdorff axiom—that one can separate points. Spaces are for the most part useless if they are not
Hausdorff. The man Hausdorff was a great mathematician. He was Jewish, and when the Holocaust came about in Germany, he and his wife committed suicide to avoid being sent to a concentration camp.
The next strongest axiom one can impose is "regularity". This means that one can separate points from closed sets. Urysohn's Metrization Theorem says that if a space is regular, and if it has a basis
that can be arranged in a sequence (countable basis), then the space can be endowed with a metric. Urysohn was a brilliant mathematician whose spark was taken from us in a drowning accident.
Maps between topological spaces that preserve topological structure are called "continuous functions". In fact, topology can be described as a study of continuity. Continuous maps that are
continuously invertible are the equivalences between topological spaces. These are called "homeomorphisms". It is too ambitious to attempt to classify all topological spaces. A more tractable
question is to try to classify spaces up to homeomorphic equivalence.
It is rather straightforward to show that two spaces are homeomorphic. Simply construct a homeomorphism. The inverse question of showing that two spaces are not homeomorphic is much more difficult.
Topological spaces are typically too complicated to allow direct proofs that two of them are not homeomorphic. One tries to reduce this to a problem in another field of mathematics, like abstract
algebra. Poincare's fundamental group, defined for any topological space, is a first step in this direction. But that takes us away from point-set topology to the realm of algebraic topology.
• When I took it, it was entirely point-set and we focused just on the first chapters of Munkres without covering anything algebraic.-PhilipNeustrom
• I just finished it and we covered up to section 31 in Munkres, so it was all point-set. I recommend it for all math majors; it made clear all the topology we covered in 127A and was a good
complement to 127B since we covered convergence in a more general way. I think more material could be and should be covered in the class. BryanBell
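For reference, the "rules" the open sets must obey (mentioned above) are short enough to state; this is the standard definition, not anything particular to the course. A topology on a set X is a collection \tau of subsets of X such that

\emptyset \in \tau, \quad X \in \tau, \quad \bigcup_{i} U_i \in \tau \text{ for any family of sets } U_i \in \tau, \quad U \cap V \in \tau \text{ for any } U, V \in \tau,

and the members of \tau are, by definition, the open sets.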
148 - Discrete Mathematics (149B prior to F2006).
150ABC - Modern Algebra. This is the standard abstract algebra series. The difference at Davis is that instead of being merely one or two courses, it's three. This allows for a lot of time to
carefully develop the ideas and theories. You learn, basically, groups, fields, and rings. It's a whole lot more than just that, though.
Abstract algebra is the study of sets with operations on them that behave roughly like our usual addition (+), subtraction (-), multiplication (*), and division (/). Thus, we study algebra in an
abstract setting, not just the ring of integers or the field of real numbers—hence the name. From now on, we'll speak about algebra, leaving out the word abstract. This is appropriate, because to
understand our usual algebra in a structured manner, it is necessary to study abstract algebra.
An algebraic structure you have already studied is the vector space from linear algebra. These are very rigid structures algebraically. For instance, every vector space has a basis. In general
algebra this is not true. Structures with a basis are called "free", and they are difficult to classify.
In 150 you study "groups", which have + and - and 0. The addition is associative, but it might not be commutative (A + B doesn't equal B + A in general). Groups are important because they act—as
rotations, as reflections, as rigid motions. In general they act as some kind of mapping. In fact, any element in a group acts as a permutation of that group's elements.
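Spelled out (these are the standard axioms, not the course's exact wording): a group is a set G with an operation + satisfying

(a + b) + c = a + (b + c), \qquad a + 0 = a = 0 + a, \qquad a + (-a) = 0

for an identity element 0 and for all a, b, c in G; commutativity a + b = b + a is not required.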
One studies groups by understanding their "subgroups"—smaller groups embedded into the mother group. One can also observe transformations between groups. It turns out that these two points of view are closely related.
Later on you study rings. These are groups where the addition commutes. Furthermore there is multiplication (*) but not necessarily division. Also (AB = BA) might not be true. Therefore, square
matrices form a ring. The quintessential ring is the ring of integers. In fact, ring theory is like a generalized, organized version of number theory.
Finally, in 150 C you study fields. These are rings with commutative multiplication and division (/) except for division by 0. Because they carry so much structure, fields are very rigid. Vector
spaces are built over a field; that is why they are so inflexible. (Modules, on the other hand, are just built over rings. Their structure is more flexible, their study richer.)
Galois Theory attempts to understand fields that extend a given field F, and are subfields of an extension field K. So they are wedged between F and K. The Fundamental Theorem says that one can
reduce this to the study of groups: specifically, the group of field transformations from K to itself that leave elements of F unchanged. This is the original setting in which groups were studied. Galois, the
originator of this theory, was shot in the stomach in a duel. He died at 20.
Algebra is crucial for the study of topology. Choose a point * in a topological space. The loops that contain * form a group, Poincare's Fundamental Group. Consider two loops the same if you can
continuously deform one to another. On the sphere, every loop can be shrunk to a point. Thus the sphere has the trivial one-element fundamental group. The circle, on the other hand, has for its
Fundamental Group, the free group on one generator. This counts how many times a loop winds about the circle.
This is not to say that topology does not contribute to algebra. The fundamental theorem of algebra can be proved by topology. So can the theorem that says: subgroups of a free group are themselves free!
165 - Math and Computers. This is another fun one, and fairly new to the program. This course mostly avoids the numerical methods covered in the 128 series and most engineering courses, and instead
picks a few interesting algorithms to analyze and discuss: B-rule algorithm (simplex with Bland's rule), fast Fourier transform, some geometry, all relatively new and exciting stuff. Basically the
focus is the use of computers in checking and generating proofs. There might not be an accompanying textbook. The topics can be very different based on the instructor, so you should take a look at the syllabus or textbook before enrolling.
167 - Applied Linear Algebra (Advanced Linear Algebra prior to F2006). Picks up the slack 22A left and covers the rest of the foundations: Vector spaces, matrix transformations (similarity,
diagonalization, change of basis, orthogonalization), types of matrices, and other things that are likely to reappear in other courses, especially numerical analysis.
• I'm taking it this summer (2005) and the first half of the course has been pretty much all review. The average on the midterm was 95/110, which is way too high; most boring class I've taken at Davis. BryanBell
• I took it this summer (2006) and it seemed like it may have gotten a little harder. The midterm was easy: 1/3 of the class scored 90/108 or better. Watch out for the final, though; it kicked booty. Someone got 200/200, the next best was 160/200, and it went downhill from there, with 1/3 of the class below 95/200.
168 - Optimization. The first half of the course covers the simplex method and its applications. After that, interior-point methods are offered as an alternative or improvement for solving linear
problems, and the last few weeks are spent on network flow problems, solved using network simplex methods. There are two programs assigned during the course, made easier by the fact that much of the
necessary code is supplied by the textbook's author.
For students interested in business and optimization problems, 168 is very relevant, but not difficult. The methods covered are still active fields of research, particularly interior-point methods,
and while the math this is based on isn't too high-level, the design of algorithms has its own draw.
The textbook is Linear Programming: Foundations and Extensions by Robert Vanderbei, and is used alongside a well-developed website.
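As an off-the-shelf illustration of the kind of linear program the course has you solve with your own simplex code (this snippet uses SciPy and is not part of the course; the numbers are made up):

from scipy.optimize import linprog

# maximize 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0
res = linprog(c=[-3, -2],                # linprog minimizes, so negate the objective
              A_ub=[[1, 1], [1, 0]],
              b_ub=[4, 2],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)                   # optimal point (2, 2), objective value 10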
180 - Special Topics. Past topics have included fractals, mathematical biology, mathematical finance (before it was an official course), and string theory. Generally one or two are offered every year; topics are almost never repeated and tend to be non-traditional.
185AB - Complex Analysis with Applications
189 - Advanced Problem Solving. Generally students learn to solve advanced problems in various areas of pure mathematics. Quite often former Putnam problems are used.
201ABC - Analysis. Standard first-year graduate analysis. Taught by different professors from year to year, and sometimes from quarter to quarter. The courses normally cover most of chapters 1-11 in
the textbook, Applied Analysis, written by Professors Hunter and Nachtergaele and available for free on the web, as well as lecture notes on Differential Analysis. Lieb and Loss is normally assigned
but little used in practice.
204 - Asymptotic Analysis. Is no longer offered. The material has been incorporated into the 207 series, description forthcoming.
205 - Complex Analysis. Splitting into two courses this year, 205A and 205B. Standard graduate complex analysis material with Stein and Shakarchi as the usual text.
206 - Measure Theory. A nice, careful investigation of the measure theory left out of the 201 series.
207ABC - Applied Mathematics. This sequence consists of a mash-up of topics originally taken from 119A, 118B, and 204.
218AB (and sometimes C) - Partial Differential Equations. Taught by Prof. Shkoller, the department's PDE expert and fun-loving surfer, in odd years (e.g. 2005-2006). An intense treatment of modern PDE
theory in an arbitrary number of dimensions and shape of domain. In general, "modern theory" doesn't involve the actual solution of PDE's, but rather their analysis, meaning the determination of
whether they have unique solutions.
227 - Mathematical Biology. Last taught by Prof. Mogilner in Spring 2005. It more closely resembles a seminar (albeit an unusually interesting one) than a lecture course, and grading is entirely
based on homework problems. No background in biology is required, and the mathematics is not too difficult, either. There is no text. The department has a reputation for math bio and it shows in this
course and the math bio seminar.
228ABC - Numerical Solution of Differential Equations. A continuation of 128 series. Offered in even years (eg: 2006-2007) at the moment. When Puckett teaches this, he does a lot of work in gas
dynamics, and has tilted the subject matter in that direction in past years. In 2004-2005, however, he spent most of the year building up to and dealing with the Navier-Stokes (fluid mechanics)
equations. There was no text, largely because Puckett tries to keep the class as up-to-date as possible, using recent Ph.D. theses and such.
229ABC - Numerical Linear Algebra. Taught every other year by Prof. Strohmer or Prof. Freund. This course is offered in the years when 228ABC is not. Most of the first half of the course builds up
the theory of linear algebra. A great deal of this should be review to students familiar with the topic. The course mostly centers around the singular value decomposition, a vital tool with many
applications, and similar manipulations. There is much discussion of image compression as an application of the various topics covered.
235ABC - Probability Theory. Taught in even years by statistics faculty, and in odd years by mathematics faculty. Rumored to be quite difficult. Has a lot of overlap with the 201 sequence.
258A - Numerical Optimization. This class was previously taught by Professor Wets, using a set of notes written by him and updated from year to year. Wets is a rather prominent figure in stochastic
optimization, which should be the real title of the course, since it largely focuses on the theoretical foundations of stochastic optimization rather than numerical methods. The class will likely be
taught in the future by whichever professor is hired to replace Wets.
Outside Courses
Some recommended courses for applied math majors, depending on interests:
Physics 256AB - Natural Computation. Taught by Prof. Crutchfield, this class is a good choice for Applied Mathematics students who are looking to take the concepts from Math 207A in a computational
direction. The course takes a brisk pace through bifurcation theory and chaos theory then goes straight into information theory and stochastic modelling.
Engineering & Computer Science 253 - Network Theory. Taught by Prof. D'Souza, this course takes a very applied view of what is possible when using graphs to model physical and social networks in our
world. A lot of emphasis is given to applications for which network theory has been successful, and assignments are designed to give students a hands-on approach to using networks.
Environmental Science 121 - Mathematical Ecology. Taught by Prof. Hastings, a Ph.D. in mathematics and member of the Graduate Group in Applied Math. This class is an undergraduate level course that
introduces students to the concepts of mathematical ecology. This course seems more focused on teaching students how to be effective modelers rather than on mathematical methods. The pre-reqs for this
course include math 16abc (or 17/21) and either bio 1b or 1c. I talked to the professor and it seems that the bio pre-reqs aren't that important if you are interested in the subject matter.
Wildlife Fish and Ecology 122 - Population Biology. Like esp 121 this course is undergraduate level and suited for both math and science majors. PDEs are used in this class but previous experience
with them is not required.
Biology 132 - Dynamic Modeling in Biology. Similar to Math 124, but offered every year instead of alternate years. Covers models related to many fields of biology. Emphasizes differential equations,
difference equations, linear algebra and bifurcations. Students write a term paper based on the scientific literature, instead of the traditional final. The 16/17 series is the only prereq for the course.
Neurology Physiology and Behavior 163 - Information processing models in neuroscience. Covers basic modeling techniques used in neuroscience. Specific topics include differential equations, linear
systems theory, Fourier transforms, neural networks, probabilistic inference, and information theory. Emphasis on understanding information processing in neural systems.
Physics 104 - Mathematical Physics. Focuses on the mathematical theory used in physics. Topics include ODEs, PDEs, Fourier transforms and many other things.
Civil Engineering 212A - Finite Element Procedures. Usually taught by Prof. Sukumar, a member of the Graduate Group in Applied Math. A fairly rigorous introduction to finite elements, a branch of
numerical methods used widely in applications but lamentably not covered by any math course at Davis. Followed-up by 212B, which gets into gory details of solid mechanics that might not interest a
math major.
Ecology 231 - Mathematical Methods in Population Biology. Taught by Prof. Hastings, a Ph.D. in mathematics and member of the Graduate Group in Applied Math. Much of the class deals with the subject
matter of Math 119A and may be review for grad students in math, but the course also addresses difference equations, PDE's, and the relation of all of these topics to current work in Ecology. The
course is followed-up in odd years by ECL 232, Theoretical Ecology, a much more mathematically interesting course. 232 is also taught by Hastings, and requires the student to write a large paper. You
really must have a firm interest in population biology or ecology in order to do well in this course.
Economics 122 - Game Theory. This course focuses on the basics of game theory. Around a third of the class is devoted to Nash equilibrium, another third to subgame perfect Nash equilibrium, and the final third to incomplete-information games. Economics 100 is listed as a prerequisite, although anyone with a background in mathematics (21 series) can do well in the course. If you have taken MAT 168, this course is an application of the material learned in 168, but at a much less technical level.
Undergraduate Research
• CLIMB - paid research ($15/hr) in the field of mathematical biology, although this program has been cancelled due to lack of funding.
• REU - summer research in various mathematical fields, comes with a small stipend.
• McNair Scholars Program - for students from disadvantaged and/or underrepresented backgrounds who want to pursue a PhD in any field. Comes with a stipend.
Clubs and Student Organizations | {"url":"https://daviswiki.org/Mathematics?&redirected_from=math","timestamp":"2024-11-10T22:29:28Z","content_type":"text/html","content_length":"59064","record_id":"<urn:uuid:47a0714f-df89-4a7e-9336-64bbf5c2bd02>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00874.warc.gz"} |
HYDROGEOLOGY AND GEOLOGY WEBSITE - Aquifer Step Testing (T)
By Darrel Dunn, Ph.D., PG, Hydrogeologist
This is a technical page on aquifer step testing. To see a nontechnical page on this topic, press here.
The purpose of this web page is to demonstrate a method for determination of aquifer transmissivity and well loss coefficients from step tests. The method is described by Birsoy and Summers (1980).
It involves using the Cooper and Jacob approximation of the Theis equation for drawdown in an infinite confined aquifer (Freeze and Cherry, 1979). If one starts with this equation and applies
superposition to successive pumping and recovery rates the following equation is obtained via algebraic manipulation involving logarithms:
If the equation is applied to drawdown in a pumping well, r is the effective radius of the well, rw.
The equation may also be applied to injection well testing. Changes in sign and units allow drawdown to become pressure increase, and pumping rate to become injection rate.
Application of Equation 1 requires extensive repetitive calculations. I have written a Fortran program that uses this equation and the analogous equations for internal recovery and final recovery
periods to process step test data for determining transmissivity and well loss constants, and another program that uses the transmissivity and well loss constants to calculate a time-drawdown plot
that can be compared to the observed drawdown of a test. These programs were used to produce the step test analysis graphs and time-drawdown plots in the following presentation.
Figure 1 shows an example of an application of this equation. It shows drawdown in a hypothetical pumping well in four successive steps with rates of 200, 400, 600, and 800 gpm. Other values used
are:
Aquifer transmissivity = 5000 gpd/ft
Aquifer storativity = 0.007
Effective radius = 0.75 ft
Figure 1. Hypothetical pumping well step drawdown.
It is useful to divide both sides of Equation 1 by Qn. This form of the equation shows that s/Qn (which I have called "adjusted drawdown") plotted against the logarithm of "adjusted time" is a straight line. The slope of the line is 264/T and the y-intercept is (264/T) log[0.3T/(1440 r² S)]. Figure 2 shows the plot of adjusted drawdown versus adjusted time for the hypothetical pumping well used for Figure 1. Note that the slope is consistent with 264/T, and the y-intercept is consistent with (264/T) log[0.3T/(1440 r² S)]. The y-intercept on the logarithmic scale is where the abscissa has the value of one. The equation for the y-intercept may be rearranged to provide a value for r²S once the value of T has been calculated. The equation for r²S is r²S = (2.0833E-4) x T x 10^(-I T/264), where I is the value of the y-intercept.
Figure 2. Hypothetical pumping well adjusted time versus adjusted drawdown.
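As an illustration of that arithmetic, the same two relations can be scripted. The following is a minimal Python sketch; the slope and intercept values are illustrative placeholders chosen to roughly match the hypothetical aquifer above (T = 5,000 gpd/ft, rw = 0.75 ft, S = 0.007), not values read from Figure 2.

import math

def transmissivity_from_slope(slope):
    # T (gpd/ft) from the slope of adjusted drawdown versus log10(adjusted time)
    return 264.0 / slope

def r2S_from_intercept(intercept, T):
    # r^2 * S (ft^2) from the y-intercept of the adjusted-drawdown line
    return 2.0833e-4 * T * 10.0 ** (-intercept * T / 264.0)

# Placeholder values, as if read from a plot like Figure 2:
slope = 0.0528      # change in s/Q per log cycle of adjusted time
intercept = 0.128   # adjusted drawdown where adjusted time equals one

T = transmissivity_from_slope(slope)       # about 5,000 gpd/ft
r2S = r2S_from_intercept(intercept, T)     # about 0.0039 ft^2, i.e. 0.75^2 x 0.007
print(f"T = {T:.0f} gpd/ft, r^2*S = {r2S:.4f} ft^2")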
Jacob (1947) stated that drawdown in a pumping well has two components. One component is drawdown proportional to discharge, and the second component is drawdown proportional to approximately the square of the discharge. The second component is termed "well loss" and represents various effects on drawdown in the well, such as turbulent flow in and near the well, clogging near the well, head loss through a well screen, and losses in an artificial sand or gravel filter in the annulus around a well screen. Jacob represented "well loss" as CQ², where C is an empirical coefficient. He recognized that CQ^y, where y is not equal to 2, might more accurately reflect well loss, and higher precision might warrant determination of the exponent y by trial and error or by a graphical procedure. Rorabaugh (1953) applied a graphical procedure and concluded that the exponent y "may be unity at very low rates of discharge, or it may be in excess of 2."
Suppose a term CQ² is added to Equation 1 and C is given a hypothetical value of 0.002 ft min²/gal², which is a relatively high value. The extra drawdown for the hypothetical well is shown in red in Figure 3.
Figure 3. Hypothetical pumping well step drawdown with and without well loss.
If Equation 1 with CQ² added is divided by Qn, so that the well loss term becomes CQn, and adjusted drawdown is plotted against adjusted time, Figure 4 is the result.
Figure 4. Hypothetical pumping well adjusted time versus adjusted drawdown with well loss.
When the drawdown data is plotted this way, each step plots as a straight line on a semi-log graph, and the y-intercept for each step differs from that of the preceding step by (Qn - Qn-1)C. Therefore, C may be determined by C = [(s/Q)n - (s/Q)n-1] / (Qn - Qn-1). That is, the difference in adjusted drawdown for successive steps is divided by the difference in discharge for the successive steps. In our hypothetical case shown in Figure 4, this value is 0.2/100, which is 0.002. Thus the input value used for the well loss coefficient may be computed from the graph.
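A short Python sketch of that calculation follows. The intercept values are hypothetical numbers consistent with the example (a base intercept of about 0.128 plus C x Qn with C = 0.002); they are not values read from Figure 4.

# Adjusted-drawdown y-intercepts (s/Q at adjusted time = 1) for successive steps,
# and the corresponding pumping rates in gpm (hypothetical values only)
intercepts = [0.528, 0.928, 1.328, 1.728]
rates = [200.0, 400.0, 600.0, 800.0]

# C = difference in adjusted drawdown / difference in discharge for each pair of steps
C_estimates = [
    (intercepts[i] - intercepts[i - 1]) / (rates[i] - rates[i - 1])
    for i in range(1, len(rates))
]
print(C_estimates)   # each estimate is 0.002 ft*min^2/gal^2 for these numbers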
These results show that real step drawdown test data from a pumping well plotted as adjusted drawdown versus adjusted time can be used to estimate the transmissivity of the aquifer near the well by
using the slope of the lines on the plot, if the aquifer is confined. Furthermore if the well loss exponent is assumed to be 2, the well loss coefficient can be derived from the vertical separation
of the plots for the time steps. If step drawdown data from a nearby monitoring well is available, the storativity (S) can be determined from the y-intercept. Since rw²S can be determined from the
y-intercept of the adjusted pumping well data, rw can be determined when a monitoring well is available. However, rw may increase as the pumping rate increases, because turbulent flow may extend
farther from the well. This is a complicating factor that should be considered when the data is interpreted.
The step test analysis method described by Birsoy and Summers allows for interruptions in the pumping. When pumping stops temporarily, the drawdown begins to recover. The recovery equation is the
same as the pumping equation except that the term log[0.3T/(1440 r² S)] disappears and the last term in the adjusted time, tn, becomes tn/t'n. Figure 5 shows the calculated drawdown for the hypothetical
well with 60-minute interruptions between the pumping steps plus 240 minutes of post-pumping recovery.
Figure 5. Hypothetical pumping well step drawdown with interrupted pumping and recovery.
Figure 6 is a plot of the drawdown shown in Figure 5 along with a plot (red) showing drawdown with well loss, using the same well loss coefficients as above.
Figure 6. Hypothetical pumping well step drawdown with interrupted pumping and recovery showing well loss.
Figure 7 is a graph of adjusted drawdown versus adjusted time analogous to Figure 4. The slope of the pumping drawdown is the same as in Figure 4. The transmissivity may still be derived from the
slope, and the well loss coefficient may still be derived from the vertical separation of the pumping plots. All of the recovery data plot as a straight line with the same slope as the pumping
plots. The y-intercept of the recovery plot is zero. The recovery data includes recovery between pumping periods and recovery after the final pumping period.
Figure 7. Hypothetical pumping well adjusted time versus adjusted drawdown with well loss and recovery.
The step test analysis method described by Birsoy and Summers also allows for irregular changes in pumping rate and duration and for irregular interruptions in the pumping. Figure 8 shows irregular
pumping using the same hypothetical aquifer and well loss parameters as above, but with an irregular pumping scenario. In this scenario, pumping begins at 500 gpm. After 30 minutes it is decreased
to 200 gpm. After 120 minutes at 200 gpm, the pumping is interrupted for 20 minutes. Then the well is pumped at 400 gpm for 310 minutes, followed by 300 gpm for 180 minutes. Recovery data is
graphed for 240 minutes. The resulting drawdown curve without well loss is shown in Figure 8. The drawdown curve that includes well loss using the same coefficient as above is shown in Figure 9.
Figure 8. Hypothetical pumping well drawdown with irregular pumping and recovery.
Figure 9. Hypothetical pumping well drawdown with irregular pumping and recovery showing well loss.
The plot of adjusted drawdown versus adjusted time for this irregular scenario is graphed in Figure 10. The transmissivity may still be calculated from the slope of the plots for the pumping and
recovery periods, and the well loss coefficient may still be obtained from the vertical separation of the plots in the same way as for the more regular step tests described above. Consequently, it is possible to accommodate irregularities in a step test. Irregularities might be caused by such events as starting at a pumping rate that is too high and having to reduce it, or a temporary pump failure.
Figure 10. Hypothetical pumping well with irregular pumping and recovery - adjusted time versus adjusted drawdown with well loss and recovery.
If Equation 1 and the analogous recovery equation are applied to a single pumping period and subsequent recovery, the graph of adjusted drawdown versus adjusted time shows a plot for the pumping
period that is the same as that used in the Jacob straight line method of pumping test analysis, and the plot of the subsequent recovery period is the same as that used in the Theis recovery method.
These two pumping test analysis methods are described in Davis and De Wiest (1966). Figure 11 shows these plots for the hypothetical conditions described above for 240 minute pumping and recovery
periods. The pumping rate used is 200 gpm.
Figure 11. Jacob straight line and Theis recovery plots for the hypothetical conditions.
As mentioned above, well loss might be more accurately represented by CQ^y rather than CQ². The method described by Birsoy and Summers allows for estimation of y and C when y is not equal to 2. In
this case, the vertical separation of plots of adjusted drawdown versus adjusted time will not be uniform. For example, assume four steps (100, 200, 300, 400 gpm) as in the first scenario examined
above and transmissivity, storativity, and effective radius are still the same, but the well loss coefficient, C, is changed from 0.002 to 0.0002 and the well loss exponent, y, is changed from 2 to
2.5. The drawdown graph analogous to Figure 3 becomes as in Figure 11.
Figure 11. Hypothetical pumping well step drawdown with C=0.0002, y=2.5.
The graph of adjusted time versus adjusted drawdown analogous to Figure 4 is shown in Figure 12. The vertical separation of the plots for the pumping steps is no longer uniform. Instead, the
y-intercept of each step differs from that of the preceding step by (Qn^(y-1) - Qn-1^(y-1))C. Therefore the equation for C is C = [(s/Q)n - (s/Q)n-1] / (Qn^(y-1) - Qn-1^(y-1)). This expression gives n-1 nonlinear equations with two unknowns. The equations may be solved graphically. I use a spreadsheet. For the scenario represented in Figure 12, the graphical solution is shown in Figure 13. The lines intersect at C = 0.0002 and y = 2.5, as expected.
Figure 12. Hypothetical pumping well adjusted time versus adjusted drawdown, C=.0002, y=2.5.
Figure 13. Graphical solution for well loss coefficient and well loss exponent for hypothetical step test.
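The same pair of unknowns can also be found numerically rather than graphically. The sketch below is a minimal Python illustration using only two hypothetical vertical separations generated from C = 0.0002 and y = 2.5 for steps of 100, 200, and 300 gpm; it is not the spreadsheet procedure used for Figure 13.

# Hypothetical vertical separations (in s/Q) between successive step plots
Q = [100.0, 200.0, 300.0]
d1 = 0.0002 * (Q[1] ** 1.5 - Q[0] ** 1.5)   # separation between steps 1 and 2
d2 = 0.0002 * (Q[2] ** 1.5 - Q[1] ** 1.5)   # separation between steps 2 and 3

def ratio_mismatch(y):
    # Zero when y reproduces the observed ratio of separations
    g = lambda a, b: a ** (y - 1.0) - b ** (y - 1.0)
    return d2 / d1 - g(Q[2], Q[1]) / g(Q[1], Q[0])

# Simple bisection for y on a plausible interval
lo, hi = 1.01, 4.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if ratio_mismatch(lo) * ratio_mismatch(mid) <= 0:
        hi = mid
    else:
        lo = mid
y = 0.5 * (lo + hi)
C = d1 / (Q[1] ** (y - 1.0) - Q[0] ** (y - 1.0))
print(y, C)   # recovers roughly y = 2.5 and C = 0.0002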
Step test analysis was originally developed for application to confined aquifers, which are aquifers that are not recharged by leakage from overlying or underlying beds. Such aquifers probably do
not exist in nature because all subsurface materials have a finite vertical permeability. So one needs to consider when step test analysis may be applied to leaky aquifers without unacceptable
error. For small diameter wells, the drawdown curves for leaky aquifers closely follow the Theis equation until a critical time is reached. This behavior may be seen on graphs of type curves, such
as those published by Walton (1970). My subjective interpretation of the critical time is shown in Figure 14, where
1/u = Tt/(2693 r² S) and
t is time after pumping started, minutes
T is transmissivity, gpd/ft
r is radial distance, ft
S is coefficient of storativity
P is vertical hydraulic conductivity of the confining bed, gpd/ft²
m is thickness of the confining bed, ft, and
r/B is the leaky-aquifer parameter plotted in Figure 14, where B is the leakage factor of the confining bed, B = [T m/P]^(1/2).
Figure 14 applies when only one aquitard is leaking into the aquifer and no water is released from storage in the aquitard.
Figure 14. Estimated value of 1/u when leaky aquifer curve r/B departs from the Theis curve.
Spane and Wurstner (1993) compared dimensionless drawdown and dimensionless drawdown derivative type curves for selected values of β, where β is the second argument in the well function for leaky confined aquifers with storage in the aquitards [H(u,β)], defined as:
β = (rw/4) [(K'S'/(b'TS))^(1/2) + (K''S''/(b''TS))^(1/2)], where
rw is effective radius of the well (L),
K' and K'' are vertical hydraulic conductivities of the superjacent and subjacent aquitards, respectively (L/T),
S' and S'' are storativities of the superjacent and subjacent aquitards, respectively (dimensionless),
b' and b'' are thicknesses of the superjacent and subjacent aquitards, respectively (L),
T is transmissivity of the aquifer (L²/T), and
S is storativity of the aquifer (dimensionless).
Consistent units of length and time must be used when applying this equation.
Spane and Wurstner concluded that nonleaky confined aquifer methods (Theis) can be applied to leaky aquifer test data when β is less than 0.01. Inspection of their graph suggests that a smaller
value of β is preferable.
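For readers who want to screen a data set quickly, the β expression above is easy to evaluate in a few lines. The Python sketch below uses invented numbers in consistent feet-and-days units, purely to show the arithmetic; the 0.01 threshold is the Spane and Wurstner criterion quoted above.

def beta(rw, T, S, K1, S1, b1, K2=0.0, S2=0.0, b2=1.0):
    # Leaky-aquifer parameter; consistent length/time units.
    # The second aquitard defaults to "absent" (K2 = 0).
    term1 = (K1 * S1 / (b1 * T * S)) ** 0.5
    term2 = (K2 * S2 / (b2 * T * S)) ** 0.5
    return rw * (term1 + term2) / 4.0

# Hypothetical values in ft and days, for illustration only:
b = beta(rw=0.5, T=10000.0, S=1e-4, K1=1e-3, S1=1e-4, b1=20.0)
print(b, "-> nonleaky (Theis) analysis reasonable" if b < 0.01 else "-> leakage matters")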
Figure 14 and β are based on a pumped well represented by a line (small diameter). The drawdown of large diameter wells does not follow the Theis curve initially due to the effect of water pumped
from well storage. This effect should be considered when interpreting step test data from large diameter wells in leaky aquifers.
For most leaky aquifers, the drawdown at the effective radius of a small-diameter pumping well will remain on the Theis curve for a day or more, so the step test procedure described above may be
applied. However, it is often beneficial to follow a step test with a constant discharge test including an observation well located a suitable distance from the pumped well. The computer program
titled DPLAQ may be used to analyze the data from the constant discharge test. This program includes the effect of storage depletion in a large diameter well.
The step test procedure described above is based on the Cooper Jacob approximation of the Theis equation and was originally developed for application to confined aquifers. However, the drawdown in
water-table aquifers follows the Theis equation during the early part of a pumping test and during the late part of a pumping test. These Theis curves have been called Type A curves and Type Y
curves, respectively. The drawdown at intermediate times is affected by delayed response of the water table and does not follow a Theis curve. These water-table type-curves are shown in Figure 15. The
curves to the left of the r/D values are Type A curves, and the curves to the right are Type Y curves. A more detailed mathematical analysis of the phenomenon of delayed response is given by Neuman
(1972) and he discusses the limitations of the use of the curves shown in Figure 15. Neuman concluded that the method for analyzing the results of pumping tests using such curves are limited to
relatively large values of time, and the limitations become more severe as the distance from the pumping well decreases. An additional complication is that the Dupuit assumption of horizontal flow
is used in the development of the water-table aquifer drawdown curves.
Figure 15. Water-Table Aquifer type curves.
As an aid for estimating the applicability of step drawdown testing to water table aquifers, I developed a logarithmic approximation of the value of 1/ua versus r/D where delayed response causes the
water table curve to depart from the Type A Theis curve. This graph is shown in Figure 16. Likewise, I developed a logarithmic approximation of the value of 1/uy versus r/D where delayed response
ends and the drawdown follows the Type Y Theis curve. This graph is shown in Figure 17.
Figure 16. Graph of r/D versus 1/ua where the water-table drawdown curve departs from the Type A Theis curve.
Figure 17. Graph of r/D versus 1/uy where the water table drawdown curve joins the Type Y Theis curve.
Figure 16 may be used to estimate the time after the beginning of pumping when the water table drawdown departs from the Type A Theis curve, and Figure 17 may be used to estimate the time when the
water table curve joins the Type Y Theis curve. To use Figure 16 and 17, one must make preliminary estimates of values for transmissivity, specific yield (Sy), storativity (S), effective radius (rw)
(or radius of a monitoring well, r), and delay index (1/α). The formula for r/D given by Prickett is:
r is distance from the center of the pumping well, ft
T is transmissivity, gpd/ft
Sy is specific yield, dimensionless
α is reciprocal of "delay index", 1/minutes
The coefficient α is an empirical constant that relates delayed response to character of the aquifer material. Prickett used values of α determined from pumping tests in glacial drift. His graph is
shown in Figure 18. The delay index may be affected by influences other than the texture of the aquifer, but Prickett's results for materials in glacial drift suggest a correlation with aquifer
material. Neuman (1979) published a review of delayed yield and discussed the physical significance of α.
Consider a pumping test of a fully penetrating well in a 25-foot-thick layer of silt. Assume preliminary estimates of hydraulic conductivity, specific yield, storativity, effective radius, and delay index are 0.01 gpd/ft², 0.01, 0.0002, 0.25 ft, and 10,000 minutes, respectively. Consequently, r/D would be 0.16, which corresponds to 1/ua of about 100 on Figure 16. The formula for ua given by Prickett is ua = 2693 r² S/(T t). Consequently, t = (1/ua) x (2693 r² S/T), and t in this example is about 13 minutes. Therefore, the time on the Type A Theis curve would not be sufficient for step test analysis based on the Type A Theis curve. One might obtain an estimate of transmissivity from an initial 13-minute step if the early pumping rate could be kept sufficiently constant under field conditions and well bore storage effects were negligible. Likewise, r/D = 0.16 corresponds to 1/uy of about 250. The formula for uy given by Prickett is uy = 2693 r² Sy/(T t). Consequently, t = (1/uy) x (2693 r² Sy/T), and t in this example is about 18177 minutes (12.6 days). Therefore the time to reach the Type Y Theis curve would prohibit step test analysis based on the Theis curve. Since well loss is not dependent on
the equation for drawdown in the aquifer, the well loss coefficient and exponent might be obtained from a step test in such an aquifer.
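The arithmetic in the silt example can be checked with a few lines of Python. This is only a sketch of the 1/u-to-time conversion in Prickett's gpd/ft and minute units; the 1/u values themselves still have to be read from graphs such as Figures 16 and 17.

def theis_curve_time(one_over_u, r, storage, T):
    # Time in minutes corresponding to a given 1/u, with T in gpd/ft and r in ft;
    # use S for the Type A curve and Sy for the Type Y curve.
    return one_over_u * 2693.0 * r * r * storage / T

# Silt example from the text: K = 0.01 gpd/ft^2 over 25 ft of aquifer, so T = 0.25 gpd/ft
T, r, S, Sy = 0.25, 0.25, 0.0002, 0.01
print(theis_curve_time(100.0, r, S, T))   # departure from the Type A curve: about 13 minutes
# Repeating the call with Sy and the 1/uy value read from Figure 17 gives the much
# longer time at which the drawdown finally joins the Type Y curve.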
At the other extreme of subsurface material, consider a pumping test of a fully penetrating well in a 25 foot thick layer of gravel. Assume preliminary estimates of hydraulic conductivity, specific
yield (Sy), storativity (S), effective radius, and delay index are 1,000,000 gpd/ft², 0.25, 0.000001, 0.5 ft, and 1.0 minutes, respectively. Consequently, r/D would be 0.0052, which corresponds to 1/ua of about 25,000 on Figure 16. The time to depart from the Type A Theis curve would be less than 1E-7 minutes (~zero). The value of 1/uy would be greater than 1000 and the time to reach the Type Y Theis curve would be less than 1E-3 minutes (~zero). Consequently, all drawdown would be on a Theis curve and the step test method described above could be used. Drawdown should be adjusted according to an equation derived by Jacob and presented in many publications (for example, Walton, 1970), sa = swt - swt²/(2m), where
sa is drawdown that would occur in an equivalent nonleaky aquifer, feet
swt is observed drawdown in the water table aquifer
m is initial saturated thickness of the aquifer.
Due to the limitations of step testing applied to water table aquifers, it is often good practice to follow the step test with a constant rate pumping test. The software titled WATEQ may be used to
analyze data from a constant discharge test of a water table aquifer. WATEQ also includes the effect of partially penetrating wells, without the adjustment described below.
The applications of step testing described above assume that the pumping well and any observation wells completely penetrate the aquifer. Often, the wells only penetrate the upper part of the
aquifer or are open to some other part of the aquifer. Flow to a partially penetrating pumping well develops a vertical component as it approaches the well. Consequently, the drawdown in the
pumping well is greater than the drawdown would be in a completely penetrating well that is following the Theis curve. Butler (1957) tabulated values of a partial penetration constant that can be
used to convert observed drawdown at the effective radius of a partially penetrating well to equivalent drawdown in a fully penetrating well. The equivalent drawdown would follow the Theis curve and
be amenable to step drawdown analysis. The table is based on theory and formulas of Morris Muskat, J. Kozeny, and C. E. Jacob, which involve some empirical constants and are based on confined
aquifer conditions. Figure 19 is a graph based on this table. The variables in the graph are as follows:
spp is observed drawdown for partially penetrating conditions, ft,
s is equivalent drawdown for fully penetrating conditions, ft,
m is thickness of the aquifer, ft,
Kv is vertical hydraulic conductivity, gpd/ft²,
Kh is horizontal hydraulic conductivity, gpd/ft².
Figure 19. Graph for estimating the partial penetration constant for a pumping well.
Butler also provides a similar table for adjusting drawdown in observation wells. This table applies when the distance from the pumping well to the observation well is less than 2m (Kv/Kh)^(1/2).
Beyond this distance the effect of partial penetration is generally negligible. Figure 20 is a graph based on part of the table where the radius to negligible drawdown (R) is three times the aquifer
thickness. Hantush (1964) considers negligible drawdown to be of the order of 0.01 feet. Figure 20 represents the worst case part of the table. Other parts give partial penetration constants for R
/m of 5, 10, and 100. This graph is for the case where the observation well is monitoring the same part of the aquifer that is open to the pumping well. Type curves for partial penetration
published by Walton (1970) show that the partial penetration curves based on an analytical solution are only sub-parallel to the Theis curve but approach it as distance from the pumping well
increases and/or the fractional penetration increases. Consequently, there is some uncertainty in the application of step testing in partially penetrating wells based on the Theis curve, especially
when the drawdown is measured in the pumping well and the fractional penetration is small.
Figure 20. Graph for estimating the partial penetration constant for an observation well.
The application of step test analysis to partial penetration under water table conditions is more complex. Hantush (1964) says that the methods of analysis for tests in confined aquifers can be
applied to analyze corresponding situations of complete and partial penetration in water table aquifers if the drawdown is less than 25 percent of the initial saturated thickness. I suggest that the
time for the drawdown curve to merge with the Type Y Theis curve should be negligible according to Figure 17. I tentatively suggest less than 10 minutes. Also, the screened (open) part of the
partially penetrating well should be near the bottom of the water table aquifer, so that the water table elevation stays above the open interval.
Step test examples presented above are hypothetical, so that the theory could be presented in an uncomplicated manner. This section presents analysis of real step test data provided in the
literature using the method of Birsoy and Summers.
Step Test Example - "Confined" Aquifer
Type of aquifer : Confined (nominal).
Lithology: Fine to medium, clayey sand, with some coarse sand and gravel.
Thickness of aquifer: 70 feet.
Length of well screen: 55 feet.
Number of pumping steps: 4.
Discharge and duration of steps:
Table 1. Step test example for confined aquifer, discharge and duration of steps.
Graph of adjusted drawdown versus adjusted time:
Figure 21. Step test example for confined aquifer, adjusted time versus adjusted drawdown.
Transmissivity = 264/(0.048-0.044) = 66,000 gpd/ft.
Graphical solution for well loss coefficient (C) and exponent (y):
Figure 22. Step test example for confined aquifer, graphical solution for well loss coefficient (C) and exponent (y).
C = 4.1133E-10
y = 3.2
Therefore the well loss contribution to the y-intercept of the first step (Q=545.4 gpm) is negligible and
rw²S = (2.0833E-4)(66,000)(10^(-0.04 x 66,000/264)) = 1.374E-9.
Graph of calculated versus observed drawdown:
Figure 23. Step test example for confined aquifer, calculated versus observed drawdown.
Figure 23 shows that transmissivity, rw2S, and well loss parameters calculated by the method of Birsoy and Summers result in calculated drawdown that closely matches measured drawdown.
The following deficiencies were reported for this step test:
• The discharge rate was allowed to decline in each step as the drawdown increased. The average discharge rate measured with a propeller-type meter was reported.
• The water levels were measured by the air line method (less accurate than pressure transducer), except for the first step where a weighted steel tape was used.
• Water levels in the second step appear to be less accurate than those in the other steps. They do not follow a straight-line trend as well. Consequently, I did not use the third step in the analysis.
The well that was tested was gravel packed and had an eight-inch screen reported as 50 feet in one reference and 55 feet in another. I did not adjust the reported drawdown for partial penetration.
This step test data was used in seminal articles on well loss by C. E. Jacob (1947) and M. I. Rorabaugh (1953). Jacob used a method of analysis similar to the method described above, but it involved
less computation. He assumed the well loss exponent (y) equaled 2. His resulting well loss constant (C) was 6.7E-6 ft min²/gal². If one calculates the step test drawdown based on these well loss
parameters, the result is in Figure 24. Figure 24 is analogous to Figure 23, but Jacob's well loss parameters were used instead of the parameters calculated using the Birsoy and Summers method.
Figure 24 shows that Jacob's well loss parameters over-estimated the well loss.
Figure 24. Step test for confined aquifer, calculated versus observed drawdown using well loss parameters estimated by Jacob (1947).
Rorabaugh (1953) used this same data to estimate well loss parameters, but did not assume the exponent (y) had a value of 2. He used a trial and error method and obtained y = 2.64 and C = 4.39E-8 ft min^2.64/gal^2.64. If one calculates the drawdown based on these well loss parameters, the result is in Figure 25. Figure 25 shows that Rorabaugh's well loss parameters over-estimated the well loss, but
not as much as Jacob's well loss parameters.
Figure 25. Step test for confined aquifer, calculated versus observed drawdown using well loss parameters estimated by Rorabaugh (1953).
The methods used by Jacob and Rorabaugh were practical when computers were not available to do rapid repetitive calculations. Now that computers are universally available, the computationally
intensive method described by Birsoy and Summers may be used to get more complete and accurate results.
Step Test Example - Water Table Aquifer
Type of aquifer: Water Table.
Lithology: Sand and gravel (dominantly coarse sand and fine gravel).
Thickness of aquifer: 46 feet (initial saturated thickness was 36 feet, water table depth 10 feet).
Length of well screen: 15 feet (installed at bottom of aquifer).
Number of steps: 5
Discharge and duration of steps:
Table 2. Step test example for water table aquifer, discharge and duration of steps.
Graph of adjusted drawdown versus adjusted time:
Figure 26. Step test example for water table aquifer, adjusted time versus adjusted drawdown.
Transmissivity = 264/(0.0376-0.0337) = 67,692 gpd/ft.
rw²Sy = 3.2233E-7.
Well loss appears to be insignificant.
Graph of calculated versus observed drawdown:
Figure 27. Step test example for water table aquifer, calculated versus observed drawdown.
The discharge rate in the first step was adjusted at 14 minutes. This adjustment shows up in Figure 26, but had no significant effect on the results, as may be seen by the excellent match of calculated versus observed drawdown shown in Figure 27.
Birsoy, Yuksel K. and W. K. Summers (1980): Determination of Aquifer Parameters from Step Tests and Intermittent Pumping Data; Ground Water, Volume 18, Number 2, pages 137-145
Butler, Stanley S. (1957): Engineering Hydrology; Prentice-Hall
Davis, Stanley and Roger J. M. DeWiest (1966): Hydrogeology; John Wiley and Sons.
Freeze, R. Allan and John Cherry (1979): Groundwater; Prentice-Hall.
Hantush, Mahdi S. (1964): Hydraulics of Wells, in Advances in Hydroscience, Volume 1; Academic Press.
Jacob, C. E. (1947): Drawdown Test to Determine Effective Radius of Artesian Well; Transactions American Society of Civil Engineers, Paper 2321.
Neuman, Shlomo P. (1972): Theory of Flow in Unconfined Aquifers Considering Delayed Response of the Water Table; Water Resources Research, Volume 8, Number 4.
Neuman, Shlomo P. (1979): Perspective on 'Delayed Yield'; Water Resources Research, Volume 15, Number 4.
Prickett, T. A. (1965): Type-Curve Solution to Aquifer Tests Under Water-Table Conditions; Ground Water, Volume 3, Number 3.
Rorabaugh, M. I. (1953): Graphical and Theoretical Analysis of Step-Drawdown Test of Artesian Well: ASCE Proceedings, September, Number 362, Volume 79, December.
Spane, F. A. and S. K. Wurstner (1993): DERIV: A Computer Program for Calculating Pressure Derivatives for Use in Hydraulic Test Analysis; Ground Water, Vol. 31, No. 5.
Walton, William C. (1970): Groundwater Resource Evaluation, McGraw-Hill.
Posted March 21, 2013 | {"url":"https://www.dunnhydrogeo.com/home/aquifer-step-testing-t","timestamp":"2024-11-07T22:36:26Z","content_type":"text/html","content_length":"253610","record_id":"<urn:uuid:cccb0033-b0a3-4120-8d45-1e420c64a334>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00673.warc.gz"} |
Creating a Calculator
published 12/05/2022 • 4m reading time • 234 views
Is the default calculator for your operating system just too convenient? Well then, I have some good news for you, because I have created the next step in human evolution, a calculator that is so
advanced that it can do math!
I made a calculator in Rust. It’s on Github here.
The Backstory
This all started because of amplify which was a programming challenge I made last year (read more). One of the problems was to solve long mathematical expressions like 13*1-84+(94*7*19+2). Now I
should have already solved this, as I had written the code that makes the test cases but - I kinda cheated on that.
I was running out of time while finishing amplify before my arbitrary deadline so I just generated a bunch of random mathematical expressions and then solved them with a calculator. So when the
problem went live I just picked python for the solution and used the eval() function.
Now, 6 months later, I was re-solving all the problems from scratch to avoid doing school work. I finished all the problems but this one: Less Simple Math. I knew I had to start with a tokenizer, then parse that into a binary expression tree, and then evaluate that. But how exactly to make the expression tree? I didn't know.
The first step was to take the mathematical expression and turn it into a list of tokens. The token definitions I ended up using are as follows.
#[derive(Debug, Clone)]
enum Token {
    Number(i32),
    Op(Ops),
    Group(Vec<Token>),
    // *this will be used later*
    Tree(Ops, Box<Token>, Box<Token>),
}
#[derive(Debug, Clone, Copy)]
enum Ops {
    Add, Sub, Mul,
}
So the first thing to do is loop through all the characters in the expression string. Now using a match statement we can make a bunch of checks:
• If the character is a ‘(’, we set the in_group flag to true.
• If the character is ‘)’ we add a Group token to the list of tokens with the contents of recursively calling tokenize with the contents of working.
• If we are in a group we just add the character to a working string.
• If the character is one of the supported operators we add the Op token to the output list.
• And finally if the character is a number we also add it to the working string.
At the end of all of this we return the list of tokens.
Example (click)
Input: 13*1-84+(94*7*19+2)
Output: [
Making the Expression Tree
The tokenizing was the easy part! (I have written way too many tokenizers in my life.)
This part is hard because it has to handle order of operations, where multiplication / division takes priority over addition / subtraction. For this part, after thinking a tiny bit, I came up with a solution that is not the best but works fine.
We first get the highest operator priority in the list of tokens. Then we loop through the tokens to find the first operator with that priority. Then we make a Tree token with the operator and the two tokens before and after it; those tokens are then removed from the list. Just keep doing this until there is only one token left.
Example (click)
Input: 13*1-84+(94*7*19+2)
Output: Tree(
Now onto the final step, evaluating the expression tree. This part is very easy, we just get the tree node and then recursively call evaluate on the left and right nodes. Then we just do the
operation on the left and right nodes and return the result. The code is so short I will just show it here:
fn evaluate(tree: Token) -> i32 {
    match tree {
        Token::Tree(op, left, right) => {
            let left = evaluate(*left);
            let right = evaluate(*right);
            match op {
                Ops::Add => left + right,
                Ops::Sub => left - right,
                Ops::Mul => left * right,
            }
        }
        Token::Number(n) => n,
        _ => panic!("Invalid token"),
    }
}
Now all I have to do is print the result, and I solve the problem and learn a new thing! In the end it really wasn't that hard; I just thought it would be, and that's why I put it off for so long. You
can see the final code for amplify here.
But I told you I made a new calculator to replace all that came before it. After completing the problem, that night I was thinking about making it into a CLI calculator. So that’s what I did.
New Features
First off, I added an operator for Division and Exponents. These were really easy, just a few new lines of code each.
Then I added a nice REPL:
▷ 5*(4+-1)
⮩ 15
▷ ▌
Then I added support for variables. Well more like constants, because you couldn’t change them. By default, it has a few constants like pi and e. I added variable assignment later.
I also added support for functions, which are kinda just variables with groups next to them. They can be used like this 5 + sin(pi). I then proceeded to go a bit crazy and added ~60 functions
including trig functions, logic, conditions, and other math stuff.
It was surprising to see how close it now is to a real programming language, it just needs other datatypes and loops. Maybe that will be one of my next projects.
It may or may not have been four months since the last post. I’ve been busy and haven’t really completed anything cool, there is a lot of stuff in the works so you can look forward to that. I hope
you enjoyed this post, and I hope you maybe learned something new.
- Connor <3 | {"url":"https://connorcode.com/writing/programming/creating-a-calculator","timestamp":"2024-11-06T20:10:14Z","content_type":"text/html","content_length":"36776","record_id":"<urn:uuid:0bc02c3e-68ef-47d8-9ca8-43663bd63754>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00053.warc.gz"} |
The Basics of Machine Learning Algorithms: A Beginner’s Guide
Machine learning (ML) is one of the most exciting fields in technology today, transforming industries from healthcare to finance, and powering innovations like self-driving cars, recommendation
systems, and natural language processing. At its core, machine learning involves teaching computers to learn from data and make decisions or predictions based on that learning. This beginner’s guide
will introduce you to the basics of machine learning algorithms, their types, and how they work.
1. What is Machine Learning?
Machine learning is a subset of artificial intelligence (AI) that focuses on building systems that can learn from and make decisions based on data. Instead of being explicitly programmed to perform a
task, machine learning models use algorithms to identify patterns in data, which they then use to make predictions or decisions.
For example, a machine learning model might analyze historical sales data to predict future sales, or examine patterns in user behavior to recommend new products or content.
2. Types of Machine Learning
Machine learning algorithms are generally categorized into three types: supervised learning, unsupervised learning, and reinforcement learning. Each type has its unique approach to learning from data.
2.1. Supervised Learning
In supervised learning, the model is trained on a labeled dataset, which means that each training example is paired with an output label. The goal is for the model to learn the relationship between
the input data and the output labels so that it can predict the label for new, unseen data.
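To make this concrete, here is a minimal sketch using scikit-learn (assuming the library is installed); the feature values and labels below are invented for illustration only.

from sklearn.linear_model import LogisticRegression

# Tiny made-up dataset: [hours_studied, hours_slept] -> passed exam (1) or not (0)
X = [[2, 9], [1, 5], [5, 6], [6, 8], [7, 5], [3, 4]]
y = [0, 0, 1, 1, 1, 0]

model = LogisticRegression()
model.fit(X, y)                     # learn the mapping from features to labels
print(model.predict([[4, 7]]))      # predict the label for a new, unseen example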
Examples of Supervised Learning Algorithms:
• Linear Regression: Used for predicting a continuous variable, such as predicting a house price based on its features.
• Logistic Regression: Used for binary classification tasks, such as determining whether an email is spam or not.
• Decision Trees: Used for both classification and regression tasks by splitting the data into subsets based on feature values.
• Support Vector Machines (SVM): Used for classification tasks by finding the hyperplane that best separates different classes in the data.
2.2. Unsupervised Learning
In unsupervised learning, the model is trained on unlabeled data, meaning there are no predefined labels or outcomes. The goal is to identify hidden patterns or structures within the data.
Examples of Unsupervised Learning Algorithms:
• K-Means Clustering: Groups data into clusters based on similarity, such as grouping customers with similar purchasing behaviors.
• Hierarchical Clustering: Creates a hierarchy of clusters by repeatedly merging or splitting existing clusters.
• Principal Component Analysis (PCA): Reduces the dimensionality of data by identifying the principal components that capture the most variance in the data.
• Association Rules: Identifies relationships between variables in large datasets, such as finding items frequently bought together in a supermarket.
2.3. Reinforcement Learning
Reinforcement learning involves training an agent to make a sequence of decisions by rewarding it for good decisions and penalizing it for bad ones. The agent learns to maximize its cumulative reward
over time.
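The heart of many reinforcement learning methods is a single value-update rule. The sketch below shows that rule for tabular Q-learning; the state and action counts, rewards, and parameter values are all invented for illustration, and no particular environment is assumed.

import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]   # Q-value table
alpha, gamma, epsilon = 0.1, 0.9, 0.2              # learning rate, discount, exploration

def choose_action(state):
    # Epsilon-greedy: usually exploit the best known action, sometimes explore
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[state][a])

def update(state, action, reward, next_state):
    # Core Q-learning update: move Q toward reward plus discounted best future value
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])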
Examples of Reinforcement Learning Algorithms:
• Q-Learning: A model-free algorithm that learns the value of an action in a particular state and helps the agent choose the best action to maximize its reward.
• Deep Q-Networks (DQN): Combines Q-learning with deep learning to handle more complex environments and tasks, such as playing video games.
• Policy Gradient Methods: Focuses on directly optimizing the policy (a set of actions) that an agent follows, rather than the value of individual actions.
3. Key Concepts in Machine Learning
Understanding some key concepts in machine learning is essential for grasping how algorithms work.
3.1. Training and Testing Data
Machine learning models are trained on a subset of data known as the training set. Once trained, the model is evaluated on a separate subset called the testing set to assess its performance. This
process helps ensure that the model generalizes well to new, unseen data.
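In code, the split usually looks something like the sketch below (scikit-learn shown; the dataset here is random placeholder data, not a real example).

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

X = np.random.rand(100, 3)           # 100 examples, 3 features (placeholder data)
y = X @ np.array([1.0, 2.0, -1.0])   # placeholder target values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_train, y_train)   # train on 80% of the data
print(model.score(X_test, y_test))                 # evaluate on the held-out 20%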
3.2. Features and Labels
• Features: The input variables or attributes used by the model to make predictions. For example, in a house price prediction model, features might include the number of bedrooms, square footage,
and location.
• Labels: The output variable or target that the model aims to predict. In the house price example, the label would be the price of the house.
3.3. Overfitting and Underfitting
• Overfitting: Occurs when a model learns the training data too well, including noise and outliers, resulting in poor performance on new data. An overfitted model is too complex and fails to generalize.
• Underfitting: Happens when a model is too simple and fails to capture the underlying patterns in the data, leading to poor performance on both the training and testing data.
3.4. Cross-Validation
Cross-validation is a technique used to evaluate the performance of a machine learning model by splitting the data into multiple subsets. The model is trained on some subsets and tested on others,
and this process is repeated multiple times. The results are averaged to give a more accurate measure of the model’s performance.
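A minimal sketch of 5-fold cross-validation with scikit-learn follows; the built-in iris dataset is used only as a stand-in for your own data.

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)                                # small built-in dataset
scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=5)   # 5-fold cross-validation
print(scores.mean(), scores.std())                               # average performance across folds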
4. Popular Machine Learning Algorithms
Let’s delve deeper into some popular machine learning algorithms and how they work:
4.1. Linear Regression
Linear regression is used for predicting continuous values. It assumes a linear relationship between the input features (independent variables) and the output label (dependent variable). The goal is
to find the line (or hyperplane in higher dimensions) that best fits the data.
• Example: Predicting house prices based on square footage and number of bedrooms.
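A hedged sketch of that example is shown below; the square footage, bedroom counts, and prices are invented numbers, used only to show the fit-and-predict pattern.

from sklearn.linear_model import LinearRegression

# Features: [square_footage, bedrooms]; target: price in thousands (made-up numbers)
X = [[1400, 3], [1600, 3], [1700, 4], [1875, 4], [1100, 2], [2350, 5]]
y = [245, 312, 279, 308, 199, 405]

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)   # learned linear relationship
print(model.predict([[2000, 4]]))      # predicted price for a new house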
4.2. Decision Trees
Decision trees are used for both classification and regression tasks. They work by recursively splitting the data into subsets based on the value of features, creating a tree-like structure. Each
internal node represents a feature, each branch represents a decision rule, and each leaf node represents an outcome.
• Example: Classifying whether a person is likely to buy a product based on age, income, and browsing history.
4.3. K-Means Clustering
K-Means is an unsupervised algorithm that groups data into K clusters based on feature similarity. The algorithm assigns each data point to the nearest cluster center, and then updates the cluster
centers based on the mean of the points in each cluster. This process is repeated until the cluster centers no longer change significantly.
• Example: Grouping customers into segments based on their purchasing behavior.
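A small sketch of that customer-segmentation example follows; the spending figures are made up, and two clusters are chosen purely for illustration.

from sklearn.cluster import KMeans

# Each row: [annual_spend, visits_per_month] for one customer (made-up numbers)
customers = [[200, 2], [220, 3], [800, 10], [760, 12], [300, 4], [900, 11]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)            # cluster assignment for each customer
print(kmeans.cluster_centers_)   # the two cluster centers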
4.4. Support Vector Machines (SVM)
SVM is used for classification tasks and works by finding the hyperplane that best separates different classes in the feature space. The hyperplane is chosen to maximize the margin between the
nearest points (support vectors) of each class.
• Example: Classifying emails as spam or not spam based on text features.
4.5. Neural Networks
Neural networks are a class of algorithms inspired by the human brain’s structure and function. They consist of layers of interconnected neurons (nodes) that process input data and learn complex
patterns through multiple layers. Neural networks are the foundation of deep learning.
• Example: Image recognition, where a neural network learns to identify objects in images.
5. How to Choose the Right Algorithm
Choosing the right machine learning algorithm depends on several factors, including the type of data, the problem you're trying to solve, and the computational resources available. Here are some considerations:
• Data Type: Is the data labeled (supervised) or unlabeled (unsupervised)?
• Problem Type: Is the goal to classify data, predict a continuous value, or cluster data?
• Model Complexity: Simpler models like linear regression are easier to interpret, while more complex models like neural networks may capture more intricate patterns but require more data and computational resources.
• Performance Metrics: Consider metrics like accuracy, precision, recall, and F1-score for classification tasks, or mean squared error for regression tasks, to evaluate model performance.
6. Conclusion
Machine learning algorithms are powerful tools that can uncover patterns in data and make predictions that drive decision-making across various domains. As a beginner, it’s important to start with
the basics—understanding different types of algorithms, key concepts, and popular models. With practice and experience, you’ll be able to choose and apply the right algorithms to solve real-world
problems, whether it’s in business, healthcare, finance, or beyond.
The world of machine learning is vast and continually evolving, but with a solid foundation in the basics, you’ll be well-equipped to explore more advanced topics and contribute to the exciting field
of artificial intelligence. | {"url":"https://samacademy.in/the-basics-of-machine-learning-algorithms-a-beginners-guide/","timestamp":"2024-11-11T06:52:49Z","content_type":"text/html","content_length":"79215","record_id":"<urn:uuid:31390c0d-f460-41a3-9b21-cb899d760f4c>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00314.warc.gz"} |
What is the Addition Rule of Probabilities? Definition, Calculation, Examples, and More
Probability refers to the chances of an event happening in statistical terms. The probability of two events means the chances of both events happening at the same time, one at a time, or one after
the other.
The rule of addition describes the relationship between the probability of two events when they can be mutually exclusive or mutually non-exclusive to each other.
Addition Rule of Probabilities – Definition
The addition rule of probabilities is used to compute the probability of two events happening at the same time when they are mutually exclusive or non-exclusive.
It combines the probability of two mutually exclusive events happening and then the probability of either of the two events happening when they are mutually non-exclusive.
Probability is a statistical term describing the likelihood of an event happening. In simple words, it is the percentage chance of an event happening out of 100%.
When two events are mutually exclusive, then the probability of two happening at the same time is zero. Therefore, it’s easier to apply the addition rule in such scenarios.
How Does it Work?
Let us first understand a few key concepts in the addition rule of probabilities.
A set of probabilities is taken out of the set of sample spaces for all events. A sample space includes all possible scenarios under the given conditions.
Two mutually exclusive events mean if one happens then the second one does not happen. It means the chances of both events happening at the same time are zero.
Mutually non-exclusive events mean one of two or both can happen at the same time. It means the chances of both events taking place at the same time are probable.
Independent events can happen without affecting each other. Setting a sample space for independent events is often a challenging task as the probability of events can be unlimited.
Once these parameters are set, we can then use the formula listed below to calculate the probability of events.
How to Calculate Probabilities with the Addition Rule?
The addition rule combines two probabilities into one. It can be written as:
P (A or B) = P(A) + P(B) – P (A and B)
When both events A and B are mutually exclusive, then P (A and B) =0. Therefore,
P (A or B) = P(A) + P(B)
We can use the simple step-by-step guide to use the addition rule of probability.
The first step is to calculate the probability of events A and B separately.
The second step is to calculate the probability of (A and B) when they are mutually non-exclusive.
The third step is to use the formula and calculate the probability of (A or B).
The probability of (A and B) is calculated by multiplying P(A) by P(B) only when the two events are independent.
Mutually exclusive events (with nonzero probabilities) are never independent, as they depend on each other: if one occurs, the other cannot.
It is because both events belong to the same sample space, and for mutually exclusive events P(A and B) is zero while P(A) x P(B) is not.
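A two-line helper makes the rule concrete. This is just a Python sketch; the mutually exclusive case falls out automatically when the overlap term is zero, and the numbers passed in are illustrative.

def p_a_or_b(p_a, p_b, p_both=0.0):
    # Addition rule: P(A or B) = P(A) + P(B) - P(A and B)
    return p_a + p_b - p_both

print(p_a_or_b(0.5, 0.5))          # mutually exclusive events, e.g. Heads or Tails
print(p_a_or_b(0.3, 0.5, 0.15))    # overlapping (mutually non-exclusive) events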
Addition Rule of Probability – Mutually Exclusive Events
Two mutually exclusive events cannot happen simultaneously but must not be independent of each other.
For example, if you flip a coin, the probability of either Heads or Tails is 50% for each. Both cannot happen simultaneously, but one will happen if the other does not.
So, these are mutually exclusive and non-independent of each other.
Further, the outcomes become independent when considered over a sequence of flips. For example, if we flip the coin and get Heads, that does not make Tails any more likely on the next flip; each flip still has a 50% chance of either outcome.
Therefore, the sample space over which the events are defined determines how the events relate to each other as well.
Addition Rule of Probability – Mutually Non-Exclusive Events
The mutually non-exclusive events mean both events can happen at the same time. Therefore, the probability of P(AB) is not equal to zero.
For example, suppose there are 30 students in a class: 15 boys and 15 girls. The chances of a boy and a girl getting 90% marks in a subject are mutually non-exclusive events.
In this case,
P (A or B) = P(A) + P(B) – P (A and B)
The addition rule only concerns one event happening at the same time, we must then deduct P (A and B) from the total of both probabilities of these events.
Let us now consider some examples of the addition rule of probabilities under different scenarios.
For example, consider a class with 30 students, 15 boys and 15 girls. In the previous term exam, 5 boys and 4 girls achieved grade A.
We need to calculate the probability that a randomly selected student is a boy or achieved a grade A.
Let’s suppose P(A) is the probability of grade A for any student and P(B) is the probability of a boy being chosen.
There are 9 grade A students out of 30. Therefore, P(A)=9/30.
There are 15 out of 30 students are boys. Therefore, P(B)=15/30.
Since 5 boys previously received Grade A, therefore:
P (A and B) = 5/30
Now we can use the addition rule to calculate the probability of either event:
P (A or B) = P (A) + P(B) – P (A and B)
P (A or B) = 9/30 + 15/30 – 5/30
P (A or B) = (9 + 15 – 5)/30 = 19/30
P (A or B) = 0.63 or 63%
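This calculation is easy to check in code. The following short Python sketch is not part of the original article; the function name is ours, and it simply reproduces Example 1 using the counts given above:

from fractions import Fraction

# Addition rule: P(A or B) = P(A) + P(B) - P(A and B)
def prob_a_or_b(p_a, p_b, p_a_and_b):
    return p_a + p_b - p_a_and_b

total = 30
p_grade_a = Fraction(9, total)      # P(A): student has grade A
p_boy = Fraction(15, total)         # P(B): student is a boy
p_both = Fraction(5, total)         # P(A and B): boys with grade A

result = prob_a_or_b(p_grade_a, p_boy, p_both)
print(result, float(result))        # 19/30, approximately 0.633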
Example 2
A company surveyed 100 people to know their preferred social media (SM) sites. The results of the survey are displayed in the table below.
Preferred SM Male Female Total
Facebook 18 28 46
Twitter 23 11 34
Other 13 7 20
Total 54 46 100
Let us first find the probability that a randomly chosen person prefers Facebook.
A total of 46 people liked Facebook. Therefore:
P (Prefer Facebook) = 46/100 or 0.46
P (Prefer Twitter) = 34/100 = 0.34
If we want to know the probability of a female preferring Facebook, then:
P (Female Prefer Facebook) = 28/100
The probability through the Rule of addition for both conditions can be written as:
Probability (Female or Prefer Facebook) = 46/100 + 46/100 – 28/100
Probability (Female or Prefer Facebook) = 64/100 or 0.64 | {"url":"https://www.cfajournal.org/addition-rule-of-probabilities-definitio/","timestamp":"2024-11-04T06:08:49Z","content_type":"text/html","content_length":"157635","record_id":"<urn:uuid:e815abab-3e0e-404d-b24d-72e2ad2e4bbb>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00059.warc.gz"}
MathFiction: Enigma (Robert Harris / Tom Stoppard)
In this espionage story set in England's Bletchley Park at the height of the Second World War, Tom Jericho is a clever mathematician at the famous code breaking facility who -- either despite or
because of his pathetic mental state owing to a "nervous breakdown" -- has also taken on the self imposed task of solving a mystery and finding a mole. Although the math is not terribly important to the
story, it is mentioned more than a few times, we see how math might have been extremely useful in helping the Allies win the war and we meet another clever but crazy mathematician character. I think
this book is underappreciated and hope that some of the Math Fiction fans reading this will check it out.
As is so common in mathematical fiction, the mathematician (Jericho) is known as a genius and is also certifiably insane. (If you have read my thoughts on this elsewhere then you will know that this
is a "pet peeve" of mine, that literature likes to suggest that there is some deep connection between insanity and mathematical ability.) In this case, he has supposedly had a nervous breakdown due
to the ending of a brief affair with the beautiful and fun Claire. When he returns to help his colleagues break the new "Shark" code, Jericho finds that Claire has gone missing. His attempts to
find her lead him to unexpected romance and intrigue.
In the novel, Jericho is one of Alan Turing's students working on the Riemann Hypothesis with him before the war and Turing is a recurring -- though minor -- character in the novel. For the film
(2001) version, however, Turing has been completely eliminated with Jericho playing both the role he plays in the novel and also being Turing at the same time. It is a little bit strange to see Alan
Turing's historical role usurped by this fictional character. However, if you are not bothered by this bit of fake history, the movie is also worth seeing. In exchange for losing Turing, the film
adds Tom Stoppard's brand of witty dialogue and a few nice comments about mathematics by this playwright who may be the most eloquent apologist for mathematics. It is interesting to note that the
enigma machine used in the film is a real one that happens to belong to one of the film's producers, Mick Jagger! (Jagger also makes a cameo appearance in the film.)
I've read more than a few reviews of the film which praise it for finally acknowledging the role of England in this important part of world history. It is true, as the reviews imply, that most of the
previous films on the subject emphasize the American role. There are lots of explanations for this fact. It is, of course, partly due to American nationalism and the fact that America tends to make
more films than other countries. It is also partly England's fault for seeking to keep Bletchley secret for so long. (IMHO, England did a great disservice to the world by keeping knowledge of their
wartime achievements secret for so long and did something nearly criminal when they had all of Tommy Flowers' "Colossus" computers destroyed for "security purposes".) However, lest the British
critics get too self righteous about the fact that this film "finally sets the record straight", let me point out that this film also obscures the true role of some important and brilliant heroes of
the story. In particular, upon reading the true history one learns that the first key steps in breaking the Enigma (involving bravery, brilliance and a bit of clever engineering) were taken by some
Polish mathematicians just before the Nazi invasion of Poland. It was only upon receiving help from them that the British first started making headway. Not only does the movie version Enigma not
mention this part of the story, it vilifies the only Polish character in the film!
Contributed by Ruth de Haan
This is one of the best books I've read. I have yet to read his "Fatherland," which I've heard is even better. The main reason why this book is so good is because it ties together elements of
historical fiction, love, and a mysterious disappearance of Claire, the main character's girlfriend.
Contributed by Ernest Gallo
As a work of historical fiction, this novel is a marvel. The author has taken a wide selection of facts about Bletchley Park and ingeniously reimagined them, weaving them into a coherent and (mostly)
convincing plot. Of course the presence of all this ingenuity would not matter if the novel were not in fact a fine bit of fiction.
By the way, Jericho (the mathematician and cryptanalyst) has a breakdown not because the book represents mathematicians as flaky, but because of the demands of his work. Historically, cryptanalysts
like Friedman did in fact undergo serious wear and tear because of their work.
The film is very worth viewing but, in spite of being written by Tom Stoppard, falls a bit short, in part because of totally unnecessary and arbitrary doodling with the plot. | {"url":"https://kasmana.people.charleston.edu/MATHFICT/mfview.php?callnumber=mf26","timestamp":"2024-11-04T12:01:24Z","content_type":"text/html","content_length":"14041","record_id":"<urn:uuid:c8046d64-7bb3-4c7d-a1cd-856ccaec5248>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00168.warc.gz"} |
Fill in the blanks i) The centre of a circle lies in____of the circle. (exterior / interior) ii) A point, whose distance from the centre of a circle is greater than its radius lies in____of the circle
Fill in the blanks:
(i) The centre of a circle lies in____of the circle. (exterior / interior)
(ii) A point, whose distance from the centre of a circle is greater than its radius lies in____of the circle. (exterior / interior)
(iii) The longest chord of a circle is a_____of the circle.
(iv) An arc is a______when its ends are the ends of a diameter.
(v) Segment of a circle is the region between an arc and______of the circle.
(vi) A circle divides the plane, on which it lies, in_______parts.
(i) The center of the circle lies in the interior of the circle. (exterior / interior)
Reasoning: The collection of all points in a plane, which is at a fixed distance from a fixed point in the plane, is called a circle. The fixed point is the center of the circle.
(ii) A point, whose distance from the center of the circle is greater than its radius, lies in the exterior of the circle. (exterior / interior)
Reasoning: The collection of all points in a plane, which is at a fixed distance from a fixed point in the plane is called a circle. The fixed point is the center of the circle. Fixed distance is the
radius of the circle. Any point outside the circle will have a greater distance compared to the radius.
(iii) The longest chord of the circle is a diameter of the circle.
Reasoning: Let us check by drawing a random chord DE and diameter AB in the circle.
AC = CD = CE = BC = radius, and AB = 2 × radius.
In ∆DCE, DE < DC + CE (the sum of any two sides of a triangle is greater than the third side), so DE < 2 × radius, i.e.
DE < diameter
Thus, we know that any chord that is drawn randomly (without passing through the center) will be shorter than the diameter. Thus, the diameter is the longest chord in the circle.
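As a side note (not part of the NCERT solution), this conclusion is easy to check numerically. The short Python sketch below samples random chords of a unit circle and confirms that none exceeds the diameter:

import math
import random

radius = 1.0
diameter = 2 * radius
longest = 0.0
for _ in range(100_000):
    # Pick two random points on the circle by choosing two random angles.
    a = random.uniform(0, 2 * math.pi)
    b = random.uniform(0, 2 * math.pi)
    chord = math.dist((math.cos(a), math.sin(a)), (math.cos(b), math.sin(b)))
    longest = max(longest, chord)

print(longest, longest <= diameter)   # the longest chord found never exceeds 2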
(iv) An arc is a semicircle when its ends are the ends of a diameter.
Reasoning: We know that diameter is the longest chord in the circle. Diameter divides the circle into 2 equal halves or arcs. When two arcs are equal, each is a semicircle.
(v) Segment of a circle is the region between an arc and chord of the circle.
Reasoning: The region between a chord and either of its arcs is called a segment of the circular region or simply a segment of the circle.
(vi) A circle divides the plane, on which it lies, in three parts.
Reasoning: A circle divides the plane on which it lies into three parts. They are: (i) inside the circle, which is also called the interior of the circle; (ii) the circle and (iii) outside the
circle, which is also called the exterior of the circle.
Maths NCERT Solutions Class 9 Chapter 10 Exercise 10.1 Question 1
The blanks are (i) interior, (ii) exterior, (iii) diameter, (iv) semicircle, (v) chord, and (vi) three.
Math worksheets and
visual curriculum | {"url":"https://www.cuemath.com/ncert-solutions/fill-in-the-blanks-i-the-centre-of-a-circle-lies-in-of-the-circle-ii-a-point-whose-distance-from-the-centre-of-a-circle-is-greater-than-its-radius-lies-in-of-the-circle/","timestamp":"2024-11-04T17:56:15Z","content_type":"text/html","content_length":"246480","record_id":"<urn:uuid:ca842609-038f-47ad-b81b-ebd25f7d6cca>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00806.warc.gz"} |
Any native english speakers?
This one says "a" doctor "would" report. Is this talking about a random doctor or all doctors?
The paragraph says that *some* doctors have an "absolute confidentiality" policy, but *most* doctors have a policy that allows them to report when there's a danger to society. So I would answer the
question with "Cannot say" because the text says that some doctors will not report under any conditions, and therefore you cannot say what an arbitrary doctor would do.
The paragraph says that *some* doctors have an "absolute confidentiality" policy, but *most* doctors have a policy that allows them to report when there's a danger to society. So I would answer the
question with "Cannot say" because the text says that some doctors will not report under any conditions, and therefore you cannot say what an arbitrary doctor would do.-mwb1100 (August 30, 2017,
05:26 PM)
Thanks, I felt really stupid not being able to answer that confidently.As for the below, any idea?http://i.imgur.com/rBohyhz.png-kalos (August 30, 2017, 05:37 PM)
The paragraph says that *some* doctors have an "absolute confidentiality" policy, but *most* doctors have a policy that allows them to report when there's a danger to society. So I would answer the
question with "Cannot say" because the text says that some doctors will not report under any conditions, and therefore you cannot say what an arbitrary doctor would do.-mwb1100 (August 30, 2017,
05:26 PM)True, but at the same time, an arbitrary doctor "would" report, because there are some doctors who do report.-kalos (August 30, 2017, 05:39 PM)
Thanks, I felt really stupid not being able to answer that confidently.As for the below, any idea?http://i.imgur.com/rBohyhz.png-kalos (August 30, 2017, 05:37 PM)Any idea?-kalos (September 04, 2017,
02:44 PM)
1) If you're not supposed to already know the rules for a doctor, then the answer is "Cannot say", because of "some doctors this, and some doctors that".2) The difference is £1.182
So the first price will increase 18681 and the second 19863. So the total difference in price will be 18681 + 19863 = 38544I don't see any error in the above calculations???-kalos (September 09,
2017, 03:55 PM)
2) The difference is £1.182 I'm presuming you mean one thousand etc., in which case it would have a comma:£1,182-tomos (September 06, 2017, 03:37 AM)
2) The difference is £1.182 I'm presuming you mean one thousand etc., in which case it would have a comma:£1,182-tomos (September 06, 2017, 03:37 AM)Regional difference Tomos, a period is used in
some countries, prefer spaces myself as per ISO standard.-4wd (September 16, 2017, 06:24 AM)
No it's not hacked! Why do you think it's spam?It was just a web hosting link, here I uploaded it somewhere else:https://uploadpie.com/TMRnAZ-kalos (December 23, 2017, 07:02 PM)
No it's not hacked! Why do you think it's spam?It was just a web hosting link, here I uploaded it somewhere else:https://uploadpie.com/TMRnAZ-kalos (December 23, 2017, 07:02 PM)The prior link had
porn on it which is why I agreed with IainB and deleted it. Archive.is can't interpret this one, so I'd suggest approaching it with caution.What is this one? It would help if you'd include a
description.-wraith808 (December 23, 2017, 07:13 PM)
OIC. The question is a bit confusing. It seems to be an accounting question, except they have apparently confused it by using (misusing) the term "trend". Generally speaking one will not necessarily
be able to establish a significant statistical trend from just 2 (or 3) successive annual data-points. A significant trend usually only emerges over longer time-series data. Thus, to be correct,
there are no "trends" in the data given in the example.I would guess that what the question probably means is:If the rate of growth or decline of the factors (costs and revenues) between year 1 and
year 2 is repeated in year 3, then what would the tax be?STEP 1: As a quick rule of thumb, I would initially calculate the percentage net rate of growth in pre-tax revenue, between year 1 and 2, and
project the year 3 pre-tax revenue based on that percentage, then apply the 35% tax rate to the projected revenue, to arrive at a projected year 3 taxation figure.The growth in pre-tax revenue would
be a net factor of the sum of the rate of growth or decline of the factors (costs and revenues) between year 1 and 2.STEP 2: You could show workings for a proof of that (i.e., checking the rule of
thumb) by projecting the factors (costs and revenues) for year 3 and adding them up. This is usually good practice anyway (especially when using spreadsheets) since it checks your initial calculation
(above) and will identify any errors made.Having said that, you'd be surprised how many people omit such elementary checks to prove the figures calculated in spreadsheets. Reminds me of some years
back in NZ when the published Treasury budgets were found to be embarrassingly out of whack by a factor of 10 because of a simple spreadsheet error, where a check of the type above would have easily
identified the error prior to publication. Not a good look. -IainB (December 23, 2017, 10:16 PM) | {"url":"https://www.donationcoder.com/forum/index.php?topic=44274.msg412253","timestamp":"2024-11-05T16:14:11Z","content_type":"text/html","content_length":"138329","record_id":"<urn:uuid:6330ecc5-4b81-46b6-903a-e7e4ff87358d>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00054.warc.gz"} |
Jig:Saw’s puzzle solutions
Puzzle 1
Describe the main sections of a research report.
• Abstract: A short (approx. 200 words) summary of the whole study.
• Introduction: The introduction is the ‘why’ part of the paper, i.e., why the experiment is needed. It should discuss past research that has led to the researcher’s current hypothesis. It might
describe how previous findings are inconsistent or ambiguous and how the current research will clarify the situation. The introduction normally moves from the general to the specific: it begins
with some general context about the research and then discusses specific studies. It usually finishes by saying what the hypotheses of the current experiment are.
• Method: A description of how the research was done. There should be enough detail that someone else could repeat the research. It’s common to split this section up into who took part (
participants), what tasks were used (measures/materials), special equipment that was used (apparatus), the type of research design that was used (design), a description of what happened (
procedure), and how the measures were scored (scoring). It’s unusual not to see participants and procedure, but the other sections may or may not be there.
• Results: This is where the authors describe what they found. It normally starts with some descriptive statistics about the sample, and then moves on to the inferential statistics. The inferential
statistics are used to test the hypotheses.
• Discussion: In this section the author(s) use past research and theories to try to explain their findings. The discussion usually begins with a summary of what was found, then moves on to discuss
what this means for their theory (hypotheses) and the real world, and then ends with a list of the limitations of the study and what still needs to be done.
• References: The reference list appears at the end of the paper. It is important because it provides a list of all the sources used in the paper so that the reader can easily look up all of the
research papers that are cited in the text. If an article is cited in a paper, it must appear in the reference list. Conversely, if a source appears in the reference list, it must be cited
somewhere in the paper.
Puzzle 2
What is the main difference between qualitative and quantitative research?
Qualitative research gathers evidence for a theory from what people say or write. Quantitative research, on the other hand, gathers evidence for a theory through measurement of variables that produce
numeric outcomes.
Puzzle 3
Look at the variables in Figure 2.2 (in the book) and complete Table 2.3 (in the book).
Table 2.3 (completed): Table to classify variables from Figure 2.2
Puzzle 4
After his conversation with Alice, Zach was wondering how he could sell more of his band’s T-shirts. He found an article claiming that sales could be estimated from the price of the T-shirt and how
sick the fans thought the design was (from 0 = totally lame, to 10 = totally on trend):
Use the equation above and the values in Table 2.4 (in the book) to calculate the sales that would be generated from each combination of price and design. Complete the final column of the table.
Your completed table should look like the one below, but I will go through how I got the first answer.
Table 2.4 (completed): Possible selling price and design ratings of T-shirts.
To calculate the sales that would be generated from a price of 10 and a design rating of 0 (totally lame), we could first input these values into the equation so that it becomes:
BODMAS tells us to calculate everything in brackets first, and although there aren’t any brackets in this equation, brackets around 10 + 10 are implicit because the square root symbol covers both
values, so we would do this addition first: 10 + 10 =20.
Next we move onto ‘order’, so we would calculate
Then we would do the division
Finally, we do the addition:
Therefore, the total number of sales generated from a price of 10 and a design rating of 0 would be 20 T-shirts.
The 4 at the top of the summation symbol is the stopping point, or the upper limit of summation. The n = 1 is the starting point, or the lower limit of summation. Therefore, we start at n = 1 and stop at n = 4.
First we deal with everything in brackets. We start with the big brackets, but to do the stuff in those, we need to apply BODMAS to everything within those brackets first. So, we start with the small
brackets first, so that is 7 + 2 = 9 and 4 × 3 = 12:
Next do the orders or exponents. So, take the 9^2 and change that to 81:
Then look for any multiplication or division. There is one of each: there's a 2 × 81 and also a divide by 12. As they are part of the same thing, it makes sense to do the 2 × 81 first because that's
what gets divided by 12. So we can replace the 2 × 81 with 162:
and then divide this value by 12, which gives 13.5:
Now we have dealt with the big brackets, we can look again for orders (there aren’t any) so we move onto division/multiplication. There is a multiplication: 15 × 13.5, which gives 202.5. Finally, we
do the subtraction; 202.5 – 5 = 197.5, so the final answer is 197.5:
First we deal with the brackets, so that is 8 − 2 = 6, so change (8 −2) to 6:
Next do the orders or exponents. So, take the 6^2 and change that to 36:
Then look for any multiplication or division. There is one of each: there's a 5 × 36 and also a divide by 2. As they are part of the same thing, it makes sense to do the 5 × 36 first because that's
what gets divided by 2. So we can replace the 5 × 36 with 180:
Then divide this value by 2, which gives 90. Finally, we deal with any addition and subtraction, and we do this from left to right, so that gives 20 + 90, which is 110, and then subtract 7 from it.
So, the answer is 103:
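For readers who want to check the arithmetic, here are the two expressions evaluated in Python. The expressions themselves are reconstructed from the intermediate steps above (the original typeset equations are not reproduced in this text), so treat the reconstruction as an assumption:

# BODMAS checks, reconstructed from the worked steps above.
expr1 = 15 * (2 * (7 + 2) ** 2 / (4 * 3)) - 5   # 15 x (2 x 9^2 / 12) - 5
expr2 = 20 + 5 * (8 - 2) ** 2 / 2 - 7           # 20 + 5 x 6^2 / 2 - 7
print(expr1)   # 197.5
print(expr2)   # 103.0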
Puzzle 8
Zach measured 10 people’s mood score out of 10 (0 = worst ever mood, 10 = best ever mood) after one of his band’s gigs (Table 2.5 in the book and reproduced below). Use the values in the table to evaluate the two summation expressions discussed below.
Table 2.5 (reproduced): Mood scores after a Reality Enigma gig
Let’s first unpick what the first equation is asking us to do:
The i = 2 at the bottom of the sigma symbol means start at person 2 and the n at the top of the symbol tells us to keep adding the scores until we reach the last person’s score.
So the answer is 2251. Next, we unpick what the second equation is asking us to do:
The i = 1 at the bottom of the sigma symbol means start adding at person 1 and the 4 at the top of the symbol means keep adding until we reach person 4.
So the answer is 189.
Puzzle 9
Table 2.6 (in the book and reproduced below) shows the average minutes per day that 10 Chippers spend on memoryBank looking at other people’s lives. Using the scores in the table and remembering to
use BODMAS, calculate:
Table 2.6 (reproduced): Minutes per day spent on memoryBank
The sigma symbol is in the brackets and we have to deal with the brackets first, so that means we’d sum the scores first, starting at person 4 (because of the i = 4 at the bottom of the symbol), and
finishing at the last person’s score (because of the n at the top of the sigma). Having done that, we’d deal with the square root that is outside of the brackets, so basically we need to add the
scores and then take the square root of the total:
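Since Table 2.6 itself is not reproduced in this text, here is a small Python sketch of that calculation using placeholder minutes-per-day values; the numbers are invented purely for illustration:

import math

# Hypothetical minutes-per-day values for persons 1..10 (Table 2.6 is not shown here).
minutes = [35, 42, 18, 27, 60, 12, 45, 30, 22, 50]

# Sum from person 4 (i = 4) up to the last person (n), then take the square root.
total = sum(minutes[3:])       # person 4 is index 3 because Python lists start at 0
result = math.sqrt(total)
print(total, result)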
Puzzle 10
Nick asked fans on memoryBank to rate how likely they thought it was that Alice had been abducted and, also, how likely they thought it was that Alice had dumped Zach. Ratings were from 0 = not at all
likely, to 10 = certain (Table 2.7 in the book and reproduced below). Use the scores in the table and remembering to use BODMAS, calculate:
Table 2.7 (reproduced): Likelihood (out of 10) of Alice having been abducted vs. her having dumped Zach, as rated by 10 fans
The equation is telling us that, starting from the 5th score and finishing at the 8th score, we need to take each dumped score, square it, and then divide each of these squared dumped scores by the
corresponding abducted score, and then add them up. We then take the square root of the final score. Here are my workings: | {"url":"https://edge.sagepub.com/field-adventures-in-stats/student-resources/chapter-2/jigsaw%E2%80%99s-puzzle-solutions","timestamp":"2024-11-09T06:22:46Z","content_type":"text/html","content_length":"96715","record_id":"<urn:uuid:27ae97d2-def0-4e16-8b49-9d969929ab9a>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00292.warc.gz"} |
Re: Mathematica daily WTF
• To: mathgroup at smc.vnet.net
• Subject: [mg115267] Re: Mathematica daily WTF
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Thu, 6 Jan 2011 02:02:38 -0500 (EST)
Well, personally I think this is a very nice example of eclectic
programming. By that I mean the way I normally program myself; not
caring about any "purity" of style but using functional, procedural
or rule based approaches whichever seems the most convenient at the
moment. Print, of course, works by side-effect and is something that I
also use for debugging all the time; most of the time it is much more
convenient than Trace etc.
I certainly did not mean to create the impression that I was some sort
of "functional programming purist" or anything of the sort. My response
to AES's post was essentially "philosophical"; it seemed to me that he
was asserting that, in some sense, "real world" problems were by nature
"procedural" and functional programming was invented for reasons of
aesthetics and intended for some narrow circle of connoisseurs. This
assertion seemed to me to run counter to the basic experience of every
mathematician, that one of the most fundamental concept in mathematics
is that of a function and also to the famous "unreasonable effectiveness
of mathematics in science". If functions are fundamental to mathematics
how could they not be fundamental to mathematical sciences?
Of course one could make an argument that "computational science"
is not quite the same as "mathematical science" and that algorithms
are in some basic way "procedural" while "the rest" of mathematics is
not. It is certainly true that when one looks in practically every book
on algorithms that presents them by means of some sort of "pseudo-code",
one will find that this pseudo-code is "procedural" (at least this is
true in every case known to me). This probably leads many to conclude
that "algorithmic" is almost synonymous with "procedural". While I am not
expert on this matter and so can't speak with full confidence, it seems
to me that this is more likely an "accident" caused by the original and
by now "traditional" design of the computer. I am not sure if a design
has even been tried or is even conceivable. However, for high-level
programming languages the physical issue is largely irrelevant. I do
not deny that one can come up with situations, like some that have
been described in this thread, which seems to be much harder to model
in a functional than a procedural way, but my own view is that the best
response to this situation is by "encapsulating" procedural code in new
"bui lt-in" functions, which then make it very easy to program in a
functional way tasks that used to seem impossible.
This actually brings me to one issue that I have long wondered about. I
have never seen a Mathematica "functional" code that would perform what is
known as "backtracking". In the Combinatorica package there is a function
called "Backtrack", that is quite general and often very useful, but it
is inefficient because it is written in Mathematica's procedural style
and cannot be compiled because (if I remember correctly) it makes use
of list that are not tensors. For many years I have hoped that some for
some sort of efficient general backtracking built in in the Mathematica
kernel, but, although a lot of Combinatorica functionality has made its
way into the kernel, unfortunately this has not happened to "Backtrack".
Andrzej Kozlowski
On 5 Jan 2011, at 11:54, Ingolf Dahl wrote:
> Andrzej,
> Since you ask for "single examples", I would like to hear your opinion about
> my "Debugging code snippet"
> ((If[debug, Print["Place1 ", #]]; #) &)@
> I have started with the functional identity operator # &@, and then added
> the global variable debug, which if set to True prints out the value of the
> variable as side effect. The code snippet can be used to monitor the values
> of sub expressions during debugging, without disturbing the flow of the
> program, and it does not necessarily need to be removed afterwards.
> Best regards
> Ingolf Dahl
> ingolf.dahl at telia.com
> This code snippet can be inserted into d
>> -----Original Message-----
>> From: Andrzej Kozlowski [mailto:akoz at mimuw.edu.pl]
>> Sent: den 3 januari 2011 09:58
>> To: mathgroup at smc.vnet.net
>> Subject: [mg115174] Re: Mathematica daily WTF
>> I don't want to get involved in what is likely to turn out a "linguistic"
> dispute, but I think
>> your ideas about what constitutes "functional" and "procedural" are
> misconceived and do
>> not correspond to what other's mean by these terms. While there is some
> difference between
>> the meaning of "function" in mathematics and in programming, both concepts
> originate from
>> the same source. To quote Thompson's "Haskel. The craft of functional
> programming": "A
>> function is something that we can picture as a box with some inputs and an
> output:..."
>> followed by a picture which is exactly the same that I used to draw in my
> lectures on
>> introductory set theory and analysis for many years before I heard of
> functional
>> programming. In this sense functions are ubiquitous in mathematics and
> science. "Procedural
>> programming", on the other hand, is programming by "change of state" or
> "side-effects", and
>> as the latter expression suggests, is less natural for the human mind even
> if it could be
>> claimed to correspond more closely to what goes on at "machine level". In any case, I
> cannot think of
>> any scientific or mathematical problems that can be more naturally
> formulated in terms of
>> "side-effects" than in terms of "functions". Perhaps they exist and I my
> bias is due to several
>> decades of doing mathematics but I seriously can't think of a single
> example. Can you
>> provide one?
>> Andrzej Kozlowski
>> On 2 Jan 2011, at 10:55, AES wrote:
>>> In article <ifmrvv$pim$1 at smc.vnet.net>,
>>> Andrzej Kozlowski <akoz at mimuw.edu.pl> wrote:
>>>> anyway, it does not matter as far as the point I was making is
> concerned,
>>>> which is that the C-like structure of Mathematica procedural programs
> is
>>>> helpful to people (somewhat) familiar with C or Fortran.
>>> I'd argue it is also extremely helpful to people who _think_ physically,
>>> or if you like procedurally, and who are primarily focused on solving
>>> problems that have an inherently procedural character.
>>> The successive steps (lines, cells, expressions) in a procedural program
>>> will very often state or mimic or reproduce what happens as a function
>>> of time in a dynamic system, or as a function of distance as a wave
>>> propagates, or mimic the flow of control in a complex system, or . . .
>>> As such, they simplify the process of _coding_ these programs; they
>>> _document_ and make readable what the program is doing, step by step;
>>> they make it easy to _insert later refinements_ inside the procedure
>>> (e.g., tests for current values or for exceptional cases at points
>>> within the procedure).
>>> All of these things are much more valuable to some of us in our use of
>>> Mathematica than the speed at which the code executes, or the brevity
>>> with which it can be typed. And none of this is to argue that many
>>> basic functions within the language (things like Fourier transforms,
>>> finding matrix eigensolutions, many others) should not be provided and
>>> used as pre-coded non-procedural routines within larger programs.
>>> I make a lot of use of self-programmed Modules[] in my own programming.
>>> The active or working part of the completed program, where numerical
>>> results get asked for and results displayed, can be quite briefly
>>> written, mostly just setting input variables, then calling these
>>> modules. But these modules themselves are heavily procedurally coded
>>> internally, and I think that makes a lot of sense. | {"url":"http://forums.wolfram.com/mathgroup/archive/2011/Jan/msg00175.html","timestamp":"2024-11-05T12:02:23Z","content_type":"text/html","content_length":"38560","record_id":"<urn:uuid:5f15fb4f-b24c-4931-8a02-2fdd1a74a578>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00094.warc.gz"} |
What Is an Interval in Music?
Similarly, What are intervals examples?
The numbers between two specified given numbers make up an interval. The set of numbers x satisfying 0 ≤ x ≤ 5 is, for example, an interval containing 0, 5, and all values between 0 and 5.
Also, it is asked, What is an interval melody?
When two notes are played one after the other, they form a melodic interval. Intervals may also be harmonic, which means that the two notes are played simultaneously.
Secondly, What interval is E to C?
major third
Also, How do you make intervals?
When counting intervals, always begin with the lowest note and work your way up. To calculate the distance between C and G, for example, start at C and count up the scale until you reach G. As a result, the distance between C and G is a fifth. Similarly, the distance between D and B is a sixth.
People also ask, Is time an interval?
A lengthy period of time may be broken down into many shorter periods of time that are all the same duration. Intervals of time are what they’re called. Let’s imagine you wanted to measure the speed
of an automobile throughout an hour-long ride. You may break the hour down into ten-minute chunks.
Related Questions and Answers
How many types of intervals are there?
There are five main types of interval quality: perfect intervals, major intervals, minor intervals, augmented intervals, and diminished intervals.
What is a simple interval?
A musical interval of an octave or less is defined as a simple interval — see complex interval.
Why do we have intervals in music?
Intervals make it much easier to produce and improvise music since they provide you with well-known building blocks for melody and harmony. The notes you utilize are no longer picked only on the
basis of theory or by arduous trial and error.
What interval is G to a?
A second separates any G (flat, sharp, or neutral) from any A (flat, sharp, or neutral) (in the same octave). Because the G is flat and the A is sharp in your example, you have a doubly augmented
second. This interval is, of course, enharmonically equivalent to a major third.
What interval is E to D flat?
7th intervals above note E-flat:
d7 (dim7): the Eb to Dbb interval is a diminished 7th
m7 (min7): the Eb to Db interval is a minor 7th
M7 (maj7): the Eb to D interval is a major 7th
A7 (aug7): the Eb to D# interval is an augmented 7th
What interval is G to D flat?
4th intervals above note D-flat:
d4 (dim4): the Db to Gbb interval is a diminished 4th
P4 (perf4): the Db to Gb interval is a perfect 4th
A4 (aug4): the Db to G interval is an augmented 4th
What interval is F to B?
augmented fourth
What interval is a to F sharp?
1st intervals above note F-sharp:
P1 (perf1): the F# to F# interval is a perfect unison
A1 (aug1): the F# to F## interval is an augmented unison
What interval is C to D?
major 2nd
What interval is E to a?
4th intervals above note E:
d4 (dim4): the E to Ab interval is a diminished 4th
P4 (perf4): the E to A interval is a perfect 4th
A4 (aug4): the E to A# interval is an augmented 4th
What interval is D to a?
The interval between D and A, for example, is a fifth (D – E – F – G – A are the five natural notes in ascending sequence).
What interval is a to G sharp?
2nd intervals above note G-sharp:
d2 (dim2): the G# to Ab interval is a diminished 2nd
m2 (min2): the G# to A interval is a minor 2nd
M2 (maj2): the G# to A# interval is a major 2nd
A2 (aug2): the G# to A## interval is an augmented 2nd
What interval is D to C sharp?
2nd intervals above note C-sharp:
d2 (dim2): the C# to Db interval is a diminished 2nd
m2 (min2): the C# to D interval is a minor 2nd
M2 (maj2): the C# to D# interval is a major 2nd
A2 (aug2): the C# to D## interval is an augmented 2nd
What is a interval chord?
An interval is a difference in pitch between two sounds in music theory. If it relates to consecutively sounding tones, such as two neighboring pitches in a melody, an interval is characterized as
horizontal, linear, or melodic; if it refers to concurrently sounding tones, such as in a chord, it is described as vertical or harmonic.
What is perfect interval?
There is just one fundamental form for perfect intervals. Perfect intervals include the first (also known as prime or unison), fourth, fifth, and eighth (or octave). These intervals are known as “
perfect” because of the way they sound and the fact that their frequency ratios are simple whole integers.
What are the intervals in a major scale?
Major intervals are those that extend upward from the tonic (keynote) to the second, third, sixth, and seventh scale degrees of a major scale. A major scale is a diatonic scale. Its pattern of steps is: whole, whole, half, whole, whole, whole, half.
What is interval in minutes?
MINUTE OF INTERVAL. The number of minutes between two time values is represented by an interval value. SECOND INTERVAL. The number of seconds between two time values is represented by an interval value.
Why distance is interval?
The gap between two notes/pitches is called an INTERVAL. The following are the names of intervals based on their size and quality: Size of Interval: The size is expressed in Arabic numerals. (For
example, 1, 2, 3, 4) Count the note names between the two notes to calculate the size (inclusive).
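As a rough illustration of this counting rule (not from the original article), the Python sketch below counts letter names inclusively; the LETTERS list and the interval_number function are ours, and interval quality is ignored entirely:

LETTERS = ["C", "D", "E", "F", "G", "A", "B"]

def interval_number(low, high):
    # Count letter names inclusively from the lower note up to the higher one.
    i, j = LETTERS.index(low), LETTERS.index(high)
    return (j - i) % 7 + 1

print(interval_number("C", "G"))   # 5 -> a fifth
print(interval_number("D", "B"))   # 6 -> a sixth
print(interval_number("G", "A"))   # 2 -> a second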
What is an interval Enharmonically equivalent to an augmented 4th?
As a result, spelling is purely pragmatic, and interval naming can be done numerically by semitone count: regardless of spelling, 3 semitones can be written as either a minor third or an augmented second, and 6 semitones as either an augmented fourth or a diminished fifth.
What is the smallest interval in music?
In standard Western music, the semitone (half step) is generally regarded as the smallest interval.
What are the two types of intervals?
An interval is the distance between two pitches. Intervals are divided into two types: whole steps and half steps. Half steps are often referred to as minor 2nds, while whole steps are sometimes
referred to as major 2nds.
In music, an interval is the distance in pitch between two notes. An interval can be described by its size (second, third, fourth, and so on) and its quality (perfect, major, minor, augmented, or diminished), and it may be harmonic (the notes sound together) or melodic (the notes sound one after the other).
| {"url":"https://walnutcreekband.org/what-is-an-interval-in-music/","timestamp":"2024-11-07T01:09:59Z","content_type":"text/html","content_length":"125977","record_id":"<urn:uuid:5744d72b-d1c6-4079-918b-781aaf716017>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00617.warc.gz"}
1.2: Quantum Hypothesis Used for Blackbody Radiation Law
• To understand how energy is quantized in blackbody radiation
By the late 19th century, many physicists thought their discipline was well on the way to explaining most natural phenomena. They could calculate the motions of material objects using Newton’s laws
of classical mechanics, and they could describe the properties of radiant energy using mathematical relationships known as Maxwell’s equations, developed in 1873 by James Clerk Maxwell, a Scottish
physicist. The universe appeared to be a simple and orderly place, containing matter, which consisted of particles that had mass and whose location and motion could be accurately described, and
electromagnetic radiation, which was viewed as having no mass and whose exact position in space could not be fixed. Thus matter and energy were considered distinct and unrelated phenomena. Soon,
however, scientists began to look more closely at a few inconvenient phenomena that could not be explained by the theories available at the time.
One experimental phenomenon that could not be adequately explained by classical physics was blackbody radiation. Attempts to explain or calculate this spectral distribution from classical theory were
complete failures. A theory developed by Rayleigh and Jeans predicted that the intensity should go to infinity at short wavelengths. Since the intensity actually drops to zero at short wavelengths,
the Rayleigh-Jeans result was called the “ultraviolet catastrophe.” There was no agreement between theory and experiment in the ultraviolet region of the blackbody spectrum.
Quantizing Electrons in the Radiator
In 1900, the German physicist Max Planck (1858–1947) explained the ultraviolet catastrophe by proposing that the energy of electromagnetic waves is quantized rather than continuous. This means that
for each temperature, there is a maximum intensity of radiation that is emitted in a blackbody object, corresponding to the peaks in Figure \(\PageIndex{1}\), so the intensity does not follow a
smooth curve as the temperature increases, as predicted by classical physics. Thus energy could be gained or lost only in integral multiples of some smallest unit of energy, a quantum (the smallest
possible unit of energy). Energy can be gained or lost only in integral multiples of a quantum.
Figure \(\PageIndex{1}\): Relationship between the temperature of an object and the spectrum of blackbody radiation it emits. At relatively low temperatures, most radiation is emitted at wavelengths
longer than 700 nm, which is in the infrared portion of the spectrum. As the temperature of the object increases, the maximum intensity shifts to shorter wavelengths, successively resulting in
orange, yellow, and finally white light. At high temperatures, all wavelengths of visible light are emitted with approximately equal intensities. The white light spectrum shown for an object at 6000
K closely approximates the spectrum of light emitted by the sun. Note the sharp decrease in the intensity of radiation emitted at wavelengths below 400 nm, which constituted the ultraviolet
catastrophe. The classical prediction fails to fit the experimental curves entirely and does not have a maximum intensity. (CC BY-SA-NC; anonymous by request).
Although quantization may seem to be an unfamiliar concept, we encounter it frequently. For example, US money is integral multiples of pennies. Similarly, musical instruments like a piano or a
trumpet can produce only certain musical notes, such as C or F sharp. Because these instruments cannot produce a continuous range of frequencies, their frequencies are quantized. Even electrical
charge is quantized: an ion may have a charge of −1 or −2, but not −1.33 electron charges.
Planck's quantization of energy is described by his famous equation:
\[ E=h \nu \label{Eq1.2.1}\]
where the proportionality constant \(h\) is called Planck’s constant, one of the most accurately known fundamental constants in science
\[h=6.626070040(81) \times 10^{−34}\, J\cdot s\]
However, for our purposes, its value to four significant figures is sufficient:
\[h = 6.626 \times 10^{−34} \,J\cdot s\]
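As a quick numerical illustration (not part of the original text), the energy of a single quantum at a typical optical frequency can be computed directly from Equation \(\ref{Eq1.2.1}\); the 500 nm wavelength below is chosen only as an example:

# Energy of one quantum, E = h * nu, using the value of h quoted above.
h = 6.626e-34        # J s
c = 3.00e8           # m/s

wavelength = 500e-9  # 500 nm (green light), chosen only as an example
nu = c / wavelength  # frequency in s^-1
E = h * nu
print(nu, E)         # about 6.0e14 s^-1 and 4.0e-19 J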
As the frequency of electromagnetic radiation increases, the magnitude of the associated quantum of radiant energy increases. By assuming that energy can be emitted by an object only in integral
multiples of hν, Planck devised an equation that fit the experimental data shown in Figure \(\PageIndex{2}\). We can understand Planck’s explanation of the ultraviolet catastrophe qualitatively as
follows: At low temperatures, radiation with only relatively low frequencies is emitted, corresponding to low-energy quanta. As the temperature of an object increases, there is an increased
probability of emitting radiation with higher frequencies, corresponding to higher-energy quanta. At any temperature, however, it is simply more probable for an object to lose energy by emitting a
large number of lower-energy quanta than a single very high-energy quantum that corresponds to ultraviolet radiation. The result is a maximum in the plot of intensity of emitted radiation versus
wavelength, as shown in Figure \(\PageIndex{2}\), and a shift in the position of the maximum to lower wavelength (higher frequency) with increasing temperature.
At the time he proposed his radical hypothesis, Planck could not explain why energies should be quantized. Initially, his hypothesis explained only one set of experimental data—blackbody radiation.
If quantization were observed for a large number of different phenomena, then quantization would become a law. In time, a theory might be developed to explain that law. As things turned out, Planck’s
hypothesis was the seed from which modern physics grew.
Max Planck explained the spectral distribution of blackbody radiation as resulting from oscillations of electrons. Similarly, oscillations of electrons in an antenna produce radio waves. Max Planck
concentrated on modeling the oscillating charges that must exist in the oven walls, radiating heat inwards and—in thermodynamic equilibrium—themselves being driven by the radiation field. He found he
could account for the observed curve if he required these oscillators not to radiate energy continuously, as the classical theory would demand, but they could only lose or gain energy in chunks,
called quanta, of size \(h\nu\), for an oscillator of frequency \(\nu\) (Equation \(\ref{Eq1.2.1} \)).
With that assumption, Planck calculated the following formula for the radiation energy density inside the oven:
\[ d\rho(\nu,T) = \rho_\nu (T)\, d\nu = \dfrac {8 \pi h \nu^3}{c^3} \cdot \dfrac {1 }{\exp \left( \dfrac {h\nu}{k_B T}\right)-1}\, d\nu \label{Eq2a} \]
• \(\pi = 3.14159\)
• \(h\) = \(6.626 \times 10^{-34} J\cdot s\)
• \(c\) = \(3.00 \times 10^{8}\, \dfrac{m}{s}\)
• \(\nu\) is the frequency of the radiation (in units of \(s^{-1}\))
• \(k_B\) = \(1.38 \times 10^{-23} \,\dfrac {J}{K}\)
• \(T\) is absolute temperature (in Kelvin)
Planck's radiation energy density (Equation \(\ref{Eq2a}\)) can also be expressed in terms of wavelength \(\lambda\).
\[\rho (\lambda, T)\, d \lambda = \dfrac {8 \pi hc}{\lambda ^5} \dfrac {1}{ \exp \left(\dfrac {hc}{\lambda k_B T} \right) - 1}\, d \lambda \label{Eq2b}\]
Planck's equation (Equation \(\ref{Eq2b}\)) gave an excellent agreement with the experimental observations for all temperatures.
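For readers who want to explore the shape of the distribution, here is a small Python sketch (not part of the original text) that evaluates the energy-density form of Equation \(\ref{Eq2b}\) at a few wavelengths and temperatures; the chosen wavelengths and temperatures are arbitrary examples:

import math

h = 6.626e-34      # J s
c = 2.998e8        # m / s
k_B = 1.381e-23    # J / K

def planck_rho(lam, T):
    # Spectral energy density rho(lambda, T); multiply by d(lambda) to get J/m^3.
    return (8 * math.pi * h * c / lam**5) / (math.exp(h * c / (lam * k_B * T)) - 1)

for lam_nm in (400, 500, 700, 1000):
    lam = lam_nm * 1e-9
    print(lam_nm, planck_rho(lam, 3000), planck_rho(lam, 6000))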
Figure \(\PageIndex{3}\): The Sun is an excellent approximation of a blackbody. Its effective temperature is ~5777 K. (CC-SA-BY 3.0; Sch).
In addition to being a physicist, Planck was a gifted pianist, who at one time considered music as a career. During the 1930s, Planck felt it was his duty to remain in Germany, despite his open
opposition to the policies of the Nazi government.
The German physicist Max Planck had a major influence on the early development of quantum mechanics, being the first to recognize that energy is sometimes quantized. Planck also made important
contributions to special relativity and classical physics. (credit: Library of Congress, Prints and Photographs Division via Wikimedia Commons)
One of his sons was executed in 1944 for his part in an unsuccessful attempt to assassinate Hitler and bombing during the last weeks of World War II destroyed Planck’s home. After WWII, the major
German scientific research organization was renamed the Max Planck Society.
Use Equation \(\ref{Eq2b}\) to show that the units of \(ρ(λ,T)\,dλ\) are \(J/m^3\) as expected for an energy density.
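One way to see this, sketched briefly (the exponential factor and the factor of \(8\pi\) are dimensionless, so only the prefactor and \(d\lambda\) carry units):

\[ \left[ \dfrac{hc}{\lambda^{5}}\, d\lambda \right] = \dfrac{(\mathrm{J\,s})(\mathrm{m\,s^{-1}})}{\mathrm{m^{5}}}\,\mathrm{m} = \dfrac{\mathrm{J}}{\mathrm{m^{3}}} \]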
The near perfect agreement of this formula with precise experiments (e.g., Figure \(\PageIndex{3}\)), and the consequent necessity of energy quantization, was the most important advance in physics in
the century. His blackbody curve was completely accepted as the correct one: more and more accurate experiments confirmed it time and again, yet the radical nature of the quantum assumption did not
sink in. Planck was not too upset—he didn’t believe it either, he saw it as a technical fix that (he hoped) would eventually prove unnecessary.
Part of the problem was that Planck’s route to the formula was long, difficult and implausible—he even made contradictory assumptions at different stages, as Einstein pointed out later. However,
the result was correct anyway!
The mathematics implied that the energy given off by a blackbody was not continuous, but given off at certain specific wavelengths, in regular increments. If Planck assumed that the energy of
blackbody radiation was in the form
\[E = nh \nu\]
where \(n\) is an integer (now called a quantum number), then he could explain what the mathematics represented. This was indeed difficult for Planck to accept, because at the time, there was no
reason to presume that the energy should only be radiated at specific frequencies. Nothing in Maxwell’s laws suggested such a thing. It was as if the vibrations of a mass on the end of a spring could
only occur at specific energies. Imagine the mass slowly coming to rest due to friction, but not in a continuous manner. Instead, the mass jumps from one fixed quantity of energy to another without
passing through the intermediate energies.
To use a different analogy, it is as if what we had always imagined as smooth inclined planes were, in fact, a series of closely spaced steps that only presented the illusion of continuity.
The agreement between Planck’s theory and the experimental observation provided strong evidence that the energy of electron motion in matter is quantized. In the next two sections, we will see that
the energy carried by light also is quantized in units of h \(\bar {\nu}\). These packets of energy are called “photons.” | {"url":"https://chem.libretexts.org/Courses/Pacific_Union_College/Quantum_Chemistry/01%3A_The_Dawn_of_the_Quantum_Theory/1.02%3A_Quantum_Hypothesis_Used_for_Blackbody_Radiation_Law","timestamp":"2024-11-14T11:15:13Z","content_type":"text/html","content_length":"141611","record_id":"<urn:uuid:153730dd-9372-42a6-8eb0-5bb0b8f1e67a>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00072.warc.gz"} |
Extend/Replacing background
can you post where you are with your attempt?
44 minutes ago, Crystal Felton said:
can you post where you are with your attempt?
Yes, we need some hints about your intention/vision for the result.
I deleted all my tries because they looked so bad. I need the background to be cream and to extend the wood. I can try to recreate my awful attempt so I can post.
No, it's ok, let's just talk it through. My first question is - do you think you'll need to extend the flooring either side also? Or can you crop it close enough to prevent that necessity?
I need to use the link to see how close I can crop. I'm not opposed to that. I just didn't want to crop to close because I don't know what size they want to print it.
I figure maybe it will be necessary to add some floor to the right-hand side, at least? What are your thoughts?
I need to use the link to see how close I can crop. I'm not opposed to that. I just didn't want to crop to close because I don't know what size they want to print it.
I think I need to add one board to the right. The cream background is what I'm struggling with the most. It is not an even color so it's been hard to extend
Looks awesome! How did you do it? | {"url":"https://ask.damiensymonds.net/topic/1557-extendreplacing-background/","timestamp":"2024-11-09T17:08:45Z","content_type":"text/html","content_length":"180473","record_id":"<urn:uuid:62f2a7e1-1e49-4cea-bdda-42d9b14215e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00870.warc.gz"} |
32 research outputs found
At radio wavelengths, scattering in the interstellar medium distorts the appearance of astronomical sources. Averaged over a scattering ensemble, the result is a blurred image of the source. However,
Narayan & Goodman (1989) and Goodman & Narayan (1989) showed that for an incomplete average, scattering introduces refractive substructure in the image of a point source that is both persistent and
wideband. We show that this substructure is quenched but not smoothed by an extended source. As a result, when the scatter-broadening is comparable to or exceeds the unscattered source size, the
scattering can introduce spurious compact features into images. In addition, we derive efficient strategies to numerically compute realistic scattered images, and we present characteristic examples
from simulations. Our results show that refractive substructure is an important consideration for ongoing missions at the highest angular resolutions, and we discuss specific implications for
RadioAstron and the Event Horizon Telescope. Comment: Equation numbering in appendix now matches published version. Two minor typos corrected.
Attaining the limit of sub-microarcsecond optical resolution will completely revolutionize fundamental astrometry by merging it with relativistic gravitational physics. Beyond the sub-microarcsecond
threshold, one will meet in the sky a new population of physical phenomena caused by primordial gravitational waves from early universe and/or different localized astronomical sources, space-time
topological defects, moving gravitational lenses, time variability of gravitational fields of the solar system and binary stars, and many others. Adequate physical interpretation of these yet
undetectable sub-microarcsecond phenomena can not be achieved on the ground of the "standard" post-Newtonian approach (PNA), which is valid only in the near-zone of astronomical objects having a
time-dependent gravitational field. We describe a new, post-Minkowskian relativistic approach for modeling astrometric observations having sub-microarcsecond precision and briefly discuss the
light-propagation effects caused by gravitational waves and other phenomena related to time-dependent gravitational fields. The domain of applicability of the PNA in relativistic space astrometry is
explicitly outlined.Comment: 5 pages, the talk given at the IAU Colloquium 180 "Towards Models and Constants for Sub-Microarcsecond Astrometry", Washington DC, March 26 - April 2, 200
Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions,
as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators
of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of
vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.Comment: 8 Pages, 3 Figures,
Submitted to Ap
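As a rough, non-authoritative illustration of the product-averaging correlation estimator the abstract above contrasts with its maximum-likelihood approach, the short sketch below draws two zero-mean Gaussian signals with a chosen correlation and averages their products; the correlation value, sample size, and seed are arbitrary and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two zero-mean, unit-variance Gaussian signals with true correlation rho.
rho, n = 0.9, 100_000
cov = [[1.0, rho], [rho, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# "Traditional" estimator: average the sample products (variances known to be 1).
rho_hat = np.mean(x * y)
print(rho_hat)   # close to 0.9, with sample-to-sample scatter
```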
Gravitational waves affect the observed direction of light from distant sources. At telescopes, this change in direction appears as periodic variations in the apparent positions of these sources on
the sky; that is, as proper motion. A wave of a given phase, traveling in a given direction, produces a characteristic pattern of proper motions over the sky. Comparison of observed proper motions
with this pattern serves to test for the presence of gravitational waves. A stochastic background of waves induces apparent proper motions with specific statistical properties, and so, may also be
sought. In this paper we consider the effects of a cosmological background of gravitational radiation on astrometric observations. We derive an equation for the time delay measured by two antennae
observing the same source in an Einstein-de Sitter spacetime containing gravitational radiation. We also show how to obtain similar expressions for curved Friedmann-Robertson-Walker
spacetimes.Comment: 31 pages plus 3 separate figures, plain TeX, submitted to Ap
We report observational upper limits on the mass-energy of the cosmological gravitational-wave background, from limits on proper motions of quasars. Gravitational waves with periods longer than the
time span of observations produce a simple pattern of apparent proper motions over the sky, composed primarily of second-order transverse vector spherical harmonics. A fit of such harmonics to
measured motions yields a 95%-confidence limit on the mass-energy of gravitational waves with frequencies <2e-9 Hz, of <0.11/h*h times the closure density of the universe.Comment: 15 pages, 1 figure.
Also available at http://charm.physics.ucsb.edu:80/people/cgwinn/cgwinn_group/index.htm
Radio waves propagating from distant pulsars in the interstellar medium (ISM), are refracted by electron density inhomogeneities, so that the intensity of observed pulses fluctuates with time. The
theory relating the observed pulse time-shapes to the electron-density correlation function has developed for 30 years, however, two puzzles have remained. First, observational scaling of pulse
broadening with the pulsar distance is anomalously strong; it is consistent with the standard model only when non-uniform statistics of electron fluctuations along the line of sight are assumed.
Second, the observed pulse shapes are consistent with the standard model only when the scattering material is concentrated in a narrow slab between the pulsar and the Earth. We propose that both
paradoxes are resolved at once if one assumes stationary and uniform, but non-Gaussian statistics of the electron-density distribution. Such statistics must be of Levy type, and the propagating ray
should exhibit a Levy flight. We propose that a natural realization of such statistics may be provided by the interstellar medium with random electron-density discontinuities. We develop a theory of
wave propagation in such a non-Gaussian random medium, and demonstrate its good agreement with observations. The qualitative introduction of the approach and the resolution of the anomalous-scaling
paradox was presented earlier in [PRL 91, 131101 (2003); ApJ 584, 791 (2003)]. Comment: 27 pages, changes to match published version.
We present a parsec-scale image of the OH maser in the nucleus of the active galaxy IIIZw35, made using the Very Long Baseline Array at a wavelength of 18 cm. We detected two distinct components,
with a projected separation of 50 pc (for D=110 Mpc) and a separation in Doppler velocity of 70 km/s, which contain 50% of the total maser flux. Velocity gradients within these components could
indicate rotation of clouds with binding mass densities of ~7000 solar masses per cubic parsec, or total masses of more than 500,000 solar masses. Emission in the 1665-MHz OH line is roughly
coincident in position with that in the 1667-MHz line, although the lines peak at different Doppler velocities. We detected no 18 cm continuum emission; our upper limit implies a peak apparent
optical depth greater than 3.4, assuming the maser is an unsaturated amplifier of continuum radiation. Comment: 10 pages, 3 figures | {"url":"https://core.ac.uk/search/?q=author%3A(Gwinn%2C%20Carl%20R.)","timestamp":"2024-11-10T21:50:47Z","content_type":"text/html","content_length":"135467","record_id":"<urn:uuid:3ab69b37-2445-4379-8237-fa5854478612>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00804.warc.gz"}
14 a+b+c=0, show that a×b=b×c=c×a. Interpret the result geometr... | Filo
Question asked by Filo student
14. If a + b + c = 0, show that a × b = b × c = c × a. Interpret the result geometrically. [NCERT Exemplar]
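A minimal sketch of the algebra (this is not the tutor's video solution): since the three vectors sum to zero, c = -(a + b), and the cross product of any vector with itself vanishes, so

\[
\vec b \times \vec c = \vec b \times \bigl(-(\vec a + \vec b)\bigr) = -\,\vec b \times \vec a = \vec a \times \vec b,
\qquad
\vec c \times \vec a = \bigl(-(\vec a + \vec b)\bigr) \times \vec a = -\,\vec b \times \vec a = \vec a \times \vec b .
\]

Geometrically, the three vectors close up into a triangle, and each of the three cross products equals twice the vector area of that triangle, which is why all three are equal.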
Question Text: 14. If a + b + c = 0, show that a × b = b × c = c × a. Interpret the result geometrically. [NCERT Exemplar]
Updated On: Jan 25, 2023
Topic: Algebra
Subject: Mathematics
Class: Class 12
Answer Type: Video solution: 1
Upvotes: 69
Avg. Video Duration: 8 min | {"url":"https://askfilo.com/user-question-answers-mathematics/14-show-that-interpret-the-result-geometrically-ncert-33393530323738","timestamp":"2024-11-03T18:27:21Z","content_type":"text/html","content_length":"281181","record_id":"<urn:uuid:16a68082-7560-4cec-8dce-2e0aa60041aa>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00727.warc.gz"}
Derivative rate relation
1 Apr 2018 - The derivative tells us the rate of change of a function at a particular instant in time. A secant line is a straight line joining two points on a function, and its slope is equivalent to the average rate of change, or simply the slope between those two points.
To solve problems with Related Rates, we will need to know how to differentiate implicitly, as most problems will be formulas of one or more variables. But this time we are going to take the
derivative with respect to time, t, so this means we will multiply by a differential for the derivative of every variable! In this section we will discuss the only application of derivatives in this
section, Related Rates. In related rates problems we are given the rate of change of one quantity in a problem and asked to determine the rate of one (or more) quantities in the problem. This is often
one of the more difficult sections for students. Related rates problems are one of the most common types of problems that are built around implicit differentiation and derivatives. Typically when
you’re dealing with a related rates problem, it will be a word problem describing some real world situation. Typically related rates problems will follow a similar pattern. An interest-rate
derivative is a financial instrument with a value that increases and decreases based on movements in interest rates.
If you’re still having some trouble with related rates problems or just want some more practice you should check out my related rates lesson. At the bottom of this lesson there is a list of related
rates practice problems that I have posted a solution of. I also have several other lessons and problems on the derivatives page you can check out.
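As a concrete illustration of the related rates procedure sketched above, here is a minimal example (not taken from the lessons referenced here) that uses the SymPy library to differentiate implicitly with respect to time; the inflating-balloon setup, the rate of 10 cubic units per second, and the radius of 2 are invented numbers.

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)                 # radius as a function of time
V = sp.Rational(4, 3) * sp.pi * r**3    # volume of a sphere

# Differentiate both sides with respect to t (implicit differentiation):
dV_dt = sp.diff(V, t)                   # 4*pi*r(t)**2 * r'(t)

# Suppose dV/dt = 10 when r = 2; solve for dr/dt.
dr_dt = sp.symbols('dr_dt')
eq = sp.Eq(dV_dt.subs(sp.Derivative(r, t), dr_dt).subs(r, 2), 10)
print(sp.solve(eq, dr_dt))              # [5/(8*pi)], about 0.199
```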
The derivative can also be understood as an instantaneous rate of change, seen through the graphical relationship between a function and its derivative. Whenever we talk about acceleration we are talking about the derivative of a derivative, i.e. the rate of change of a velocity; second derivatives (and third derivatives, and so on) arise in the same way. Unfortunately, p = (f′(0) + f′(x))/2 does not give the average rate of change; for example, try f(x) = 1 − cos x. Interest rate derivatives may include futures, options, or swaps contracts, and are often used as hedges by institutional investors, banks, companies, and individuals to protect themselves against changes in market interest rates.
Studies have examined the scale of a bank's interest rate and currency derivative contracts in relation to the bank's lending, though no such relation is expected when a derivative is used for trading. For a function z = f(x, y), the partial derivative with respect to x gives the rate of change of f in the x direction, and the partial derivative with respect to y gives the rate of change of f in the y direction. Moser (1994) focused on the relationship between derivative use and bank lending, and on models to measure interest rate risk and the way interest rate derivatives are used. The International Swaps and Derivatives Association, Inc. publishes a Disclosure Annex for interest rate derivatives, including discussion of the relationship between LIBOR and reference rates for tax-exempt debt. When derivative control is applied, the controller senses the rate of change of the error signal. In August 2019 a product was described as the world's first derivative with a price linked to a company's ability to operate in a sustainable way by developing close relationships with local people.
A derivative is always a rate, and (assuming you're talking about instantaneous rates, not average rates) a rate is always a derivative. So your speed, or rate, is the derivative of your position with respect to time.
One study emphasized awareness and understanding of the relationships among the "big ideas" of the subject (keywords: derivative, mathematical modeling, rate of change, relational understanding). In differential calculus, the derivative is the "rate of change" or slope of a function; for example, the curve y = x^2 at x = 2 has slope 4.
In calculus, the second derivative, or the second order derivative, of a function f is the derivative of the derivative of f. Roughly speaking, the second derivative measures how the rate of change of a quantity is itself changing. The relation between the second derivative and the graph can be used to test whether a stationary point for a function (i.e. a point where the first derivative is zero) is a local maximum or a local minimum.
In this chapter we introduce Derivatives. We cover the standard derivatives formulas including the product rule, quotient rule and chain rule as well as derivatives of polynomials, roots, trig
functions, inverse trig functions, hyperbolic functions, exponential functions and logarithm functions. We also cover implicit differentiation, related rates, higher order derivatives and logarithmic differentiation.
A related rates of change question about a cylinder notes that because r is constant, you cannot use derivatives to find dh/dt; still, it shows a good example of how to work through any related rates problem. A free derivative calculator can differentiate functions with all the steps: type in any function to get the solution, steps and graph. | {"url":"https://cryptoajjygq.netlify.app/landmark21695sup/derivative-rate-relation-107","timestamp":"2024-11-13T14:36:20Z","content_type":"text/html","content_length":"31283","record_id":"<urn:uuid:a4e374f2-15d8-451f-bc19-793341f5b36a>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00844.warc.gz"}
Common & Basic Formulas for Mineral Processing Calculations - 911Metallurgist
The control of a milling operation is a problem in imponderables: from the moment that the ore drops into the mill scoop the process becomes continuous, and continuity ceases only when the products
finally come to rest at the concentrate bins and on the tailing dams. Material in process often cannot be weighed without a disturbance of continuity; consequently, mill control must depend upon the
sampling of material in flux. From these samples the essential information is derived by means of analyses for metal content, particle size distribution, and content of water or other ingredient in
the ore pulp.
The following formulas were developed during a long association not only with design and construction, but also with the operation of ore dressing plants. These formulas are here in the hope that
they would prove of value to others in the ore dressing industry.
With such information at hand performance is calculated by the use of formulas and tabulations. Some of these are given here for convenient reference. Also at:
Mineral Processing and Metallurgical Formulas
Pulp Densities
Pulp densities indicate by means of a tabulation the percentages of solids (or liquid-to-solid ratio) in a sample of pulp. This figure is valuable in two ways—directly, because for each unit process
and operation in milling the optimum pulp density must be established and maintained, and indirectly, because certain important tonnage calculations are based on pulp density.
Definitions and notation follow:
• Let P = percentage solids by weight,
• D = dilution, or ratio of weight of liquid to weight of solid,
• S = specific gravity of solid,
• W = weight of one liter of pulp in grams,
• w = weight of dry ore (grams) in one liter of pulp,
• K = the solids constant,
Assume the specific gravity of the water in the pulp to be unity.
Formula (5) is used in making tabulations for mill use.
As used in these formulas the specific gravity of the ore is obtained simply by weighing a liter of mill pulp, then drying and weighing the ore. With these two weights formula (2) may be used to
obtain K, and then formula (1) to convert to S, the specific gravity. A volumetric flask of one liter capacity provides the necessary accuracy. In laboratory work the ore should be ground wet to make
a suitable pulp. This method does not give the true specific gravity of the ore, but an “apparent” specific gravity which is more suitable for the intended purposes.
Calculation of Circulating Load in a Classifier
A mechanical classifier often receives its feed from a ball mill and produces (1) finished material which overflows to the next operation and (2) sand which returns to the mill for further
size-reduction. The term “circulating load” is defined as the tonnage of sand that returns to the ball mill, and the “circulating load ratio” is the ratio of circulating load to the tonnage of
original feed to the ball mill. Since the feed to the classifier, the overflow of the classifier, and the sand usually are associated with different proportions of water to solid, the calculation of
circulating load ratio can be based on a pulp density formula.
The adjoining diagram represents the usual classifier-mill setup, in which we may let
• F = tonnage of ore to mill
• 0 = tonnage of ore in overflow
• S = tonnage of sand
• M = tonnage of ore in mill discharge
and Ds, Do, and Dm are the liquid-to-solid ratios of the sand, overflow, and classifier feed at the points where they leave or enter the classifier.
Then circulating load ratio = (Do-Dm)/(Dm-Ds) (8)
And circulating load tonnage = F (Do-Dm)/(Dm-Ds) (9)
Example: A mill in closed circuit with a classifier receives 300 dry tons of crude ore per day, and the percentages of solid are respectively 25, 50, and 84% in the classifier overflow, feed to
classifier, and sand, equivalent to L: S ratios of 3.0, 1.0, and 0.190. Then the circulating load ratio equals
( 3-1.00 ) / (1.000-.190) or 2.47 (or 247%)
and the circulating tonnage is 2.47 X 300 or 741 tons.
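The same arithmetic, written out as a small script; this is just a direct transcription of formulas (8) and (9) using the worked example above.

```python
def circulating_load_ratio(Do, Dm, Ds):
    """Dilutions (liquid:solid) of classifier overflow, classifier feed, and sand."""
    return (Do - Dm) / (Dm - Ds)

F = 300.0                                  # dry tons of crude ore per day
ratio = circulating_load_ratio(Do=3.0, Dm=1.0, Ds=0.190)
print(round(ratio, 2))                     # 2.47, i.e. 247 %
print(round(F * ratio))                    # about 741 tons of circulating load
```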
A more accurate basis for calculation of tonnage in a grinding circuit is the screen analysis. Samples of the mill discharge, return sand, and the classifier overflow are screen sized, and the
cumulative percentages are calculated on several meshes. Let:
• d = cumulative percentage on any mesh in the mill discharge,
• o = cumulative percentage on same mesh in the classifier overflow,
• s = cumulative percentage on same mesh in the classifier sand.
The percentages through the finest screen may be used in place of the cumulative oversizes.
Circulating load ratio = (d-o)/(s-d) (10)
Averaging the ratios computed on the several meshes gives 3.04. If daily feed tonnage to the mill is 200 tons, the tonnage of sand is then 608 tons.
Calculation of Classifier Efficiency
The efficiency of a classifier, also determined by means of screen analyses, has been defined as the ratio, expressed as percentage, of the weight of classified material in the overflow to the weight
of classifiable material in the feed. Overflow having the same sizing test as the feed is not considered classified material. Let:
• f = percentage of material in the classifier feed finer than the mesh of separation,
• o = percentage of material in the overflow finer than the mesh of separation,
• F = tonnage of’feed to classifier,
• O = tonnage of classifier overflow.
Screen Efficiency
The simplest and yet the most accurate formula for the efficiency of a screen, disregarding the quality of the product, is
E = 100 minus % true undersize in the coarse product (12)
Measure Tonnage by Water Ratio (Pulp Dilution)
or Addition of a Chemical Substance
When no other method is available an approximation of the tonnage in a pulp stream or in a batch of pulp can be quickly obtained by one of these methods. In the dilution method water is added to a
stream of pulp at a known rate, or to a batch of pulp in known quantity, and the specific gravity of the pulp ascertained before and after dilution.
T = Q / (D2-D1) (13)
Where T = tons of ore per hour
and Q = tons of added water per hour or T = tons of ore (for batch determinations) and Q = tons of added water.
In both cases D1 and D2 are dilutions (tons of water per ton of ore) before and after addition of water. These are found from the specific gravities of the pulp, by formulas (4) and (6) or directly
by the use of the tabulation on these of Pulp Density Tables.
Pulp Density Tables
The Pulp Density Tables were compiled to eliminate the many complicated calculations which were required when using other pulp density tables. The total tank volume required for each twenty-four hour
period of treatment is obtained in one computation. The table gives a figure, in cubic feet, which includes the volume of a ton of solids plus the necessary volume of water to make a pulp of the
particular specific gravity desired. Multiply this figure by the number of dry tons of feed per twenty-four hours. Then simply adjust this figure to the required treatment time, such as 16, 30, 36,
72 hours.
In the chemical method a strong solution of known concentration of common salt, zinc sulphate, or other easily measured chemical is added to the flowing pulp at a known rate, or to a batch of pulp in
known quantity. The degree of dilution of this standard solution by pulp water is ascertained by chemical analysis of solution from a filtered sample, and the tonnage of ore is then calculated from
the percentage solid. This method is impractical for most purposes, but occasionally an exceptional circumstance makes its employment advantageous. It has also been suggested as a rapid and accurate
method of determining concentrate moistures, but in this application the expense is prohibitive, since ordinary chemicals of reasonable cost are found to react quickly with the concentrate itself.
With the above chart the per cent solids or specific gravity of a pulp can be determined for ores where gravities do not coincide with those in the Pulp Density Tables. This chart can also be used
for determining the specific gravity of solids, specific gravity of pulps, or the per cent solids in pulp if any two of the three are known.
To use: Place a straight edge between any two known points and the intersection with the third line will give a direct reading for the third or unknown figure.
Concentration and Recovery Formulas
These are used to compute the production of concentrate in a mill or in a particular circuit. The formulas are based on assays of samples, and the results of the calculations are generally accurate—
as accurate as the sampling, assaying, and crude ore (or other) tonnage on which they depend.
The simplest case is that in which two products only, viz., concentrate and tailing, are made from a given feed. If F, C, and T are tonnages of feed, concentrate, and tailing respectively; f, c, and t are the assays of the important metal; K, the ratio of concentration (tons of feed to make one ton of concentrate); and R, the recovery of the assayed metal; then—
Example: From a 6.5% lead ore, milled at the rate of 300 tons per day, is produced a concentrate assaying 72.5% lead, and a tailing with 0.5% lead.
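The two-product formulas (14) to (19) themselves are not reproduced in the text above; assuming the standard forms K = (c - t)/(f - t) and R = 100 c (f - t) / (f (c - t)), the lead-ore example works out as in the sketch below (illustrative only, not a substitute for the original tabulated solution).

```python
f, c, t = 6.5, 72.5, 0.5    # % lead in feed, concentrate, and tailing

K = (c - t) / (f - t)                     # ratio of concentration: tons of feed per ton of concentrate
R = 100 * c * (f - t) / (f * (c - t))     # % of the lead recovered into the concentrate

print(K)            # 12.0
print(round(R, 1))  # 92.9
```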
When a feed containing, say, metal "l" and metal "z," is divided into three products, e.g., a concentrate rich in metal "l," another concentrate rich in metal "z," and a tailing reasonably low in both "l" and "z," several formulas in terms of assays of these two metals and tonnage of feed can be used to obtain the ratio of concentration, the weights of the three products, and the recoveries of "l" and "z" in their concentrates. For simplification in the following notation, we shall consider a lead-zinc ore from which a lead concentrate and a zinc concentrate are produced:
Rl and Rz are the recoveries of lead and zinc, respectively, in the corresponding concentrates, and Kl and Kz the ratios of concentration of the two concentrates. Then
There are other forms of the above formulas that are equally useful, but the ones shown above satisfy most requirements.
The advantages of using the three-product formulas (20-25) instead of the two-product formulas (14-19), are four-fold—(a) simplicity, (b) fewer samples involved, (c) intermediate tailing does not
have to be kept free of circulating material, (d) greater accuracy if application is fully understood.
In further regard to (d) the three-product formulas have certain limitations. Of the three products involved, two must be concentrates of different metals. Consider the following examples (same as
foregoing, with silver assays added):
In this example the formula will give reliable results when lead and zinc assays, or silver and zinc assays, are used, but not if silver and lead assays are used, the reason being that there is no concentration of lead or silver in the second concentrate. Nor is the formula dependable in a milling operation, for example, which yields only a table lead concentrate containing silver, lead, and
zinc, and a flotation concentrate only slightly different in grade, for in this case there is no metal which has been rejected in one product and concentrated in a second. This is not to suggest that
the formulas will not give reliable results in such cases, but that the results are not dependable—in certain cases one or more tonnages may come out with negative sign, or a recovery may exceed 100%.
Reagent Consumption Calculations
Liquid-Solid Relationships Specific Gravity & Volume
Ratio of Concentration by Assay
In calculating the Ratio of Concentration (R) of the mineral operations, the following formula has been found very useful. Assays of heads, concentrate, and tailing are required.
Mill Water to Ore Ratio Requirements
Resistance of Various Materials to Crushing
Pulp Calculations
W—Tons of solids per 24 hours.
R—Ratio of weights: solution/solids.
V—Ratio of volumes: solution/solids.
L—Specific gravity, solution.
P—Specific gravity, pulp.
S—Specific gravity, solids.
Specific Gravity Details
Pulp Details by Weight
R = Weight of Solution/Weight of Solids = L (S-P) / (S (P-L))
% Solution = 100 R/(R + 1) = 100 L (S-P) / (P (S-L))
% Solids = 100/(R + 1) = 100 S (P-L) / (P (S-L))
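A direct transcription of the three relationships above; the 30 %-solids pulp of specific gravity 2.7 ore in plain water is an invented example, not one from the text.

```python
def pulp_by_weight(L, P, S):
    """Solution:solids ratio and weight percentages from the three specific gravities."""
    R = L * (S - P) / (S * (P - L))
    pct_solution = 100 * L * (S - P) / (P * (S - L))
    pct_solids = 100 * S * (P - L) / (P * (S - L))
    return R, pct_solution, pct_solids

# Water (L = 1.0), pulp SG 1.233, ore SG 2.7  ->  about (2.33, 70 %, 30 %)
print(pulp_by_weight(L=1.0, P=1.233, S=2.7))
```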
To estimate the number of cells required for a flotation operation in which:
W—Tons of solids per 24 hours.
R—Ratio by weight: solution/solids.
L—Specific gravity, solution.
S—Specific gravity, solids.
N—Number of cells required.
T—Contact time in minutes.
C—Volume of each cell in cu. ft.
Long Tons of Solids:
N = W x T x (R/L + 1/S) / (40 x C)
Short Tons of Solids:
N = W x T x (R/L + 1/S) / (45 x C)
In the above formulas, no allowance is made for the degree of aeration of the pulp nor the decrease in the volume of same, during the flotation operations.
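A small sketch of the cell-count formula with the implied grouping made explicit; the tonnage, contact time, cell volume, and pulp figures below are made up for illustration.

```python
import math

def flotation_cells(W, T, C, R, L, S, short_tons=True):
    """Cells required: divisor 45 for short tons, 40 for long tons, as above."""
    k = 45 if short_tons else 40
    n = W * T * (R / L + 1 / S) / (k * C)
    return math.ceil(n)     # round up to a whole number of cells

# e.g. 500 short tons/day, 12 min contact, 100 cu ft cells, 3:1 pulp, SG 2.7 ore in water
print(flotation_cells(W=500, T=12, C=100, R=3.0, L=1.0, S=2.7))   # 5
```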
Conditioning or Dissolving
To estimate the volume of the tank or group of tanks, required for the conditioning of a pulp for flotation, or for the dissolving of certain solids as contained in a pulp of which:
W—Tons of solids per 24 hours.
R—Ratio by weights: solution /solids.
L—Specific gravity, solution.
S—Specific gravity, solids.
T—Contact time in minutes.
C—Volume of the tank in cu. ft.
Long Tons of Solids:
C = W x T x (R/L + 1/S) / 40
Short Tons of Solids:
C = W x T x (R/L + 1/S) / 45
Pulp Thickening
To estimate the flow of clear solution in gallons per minute from a thickening operation in which:
W—Tons of solids per 24 hours.
L—Specific gravity of solution.
R—Ratio by Weight, solution/solids in original pulp.
R1—Ratio by weight, solution/solids in thickener discharge or filter cake.
Long Tons of Solids:
Imp. gallons of solution per min.
7/45 x W/L (R-R1)
U. S. gallons of solution per min.
14/75 x W/L (R-R1)
Short Tons of Solids:
Imp. gallons of solution per min.
5/36 x W/L (R-R1)
U. S. gallons of solution per min.
1/6 x W/L (R-R1)
Simple Classification
To estimate the circulating load in and the efficiency of a classifier operating in closed circuit with a ball mill.
Original feed may be applied at the ball mill or the classifier.
T—Tons of original feed.
X—Circulation factor.
A—% of minus designated size in feed.
B—% of minus designated size in overflow.
C—% of minus designated size in sands.
Circulating load = XT.
Where X = (B-A)/(A-C)
Classifier efficiency:
100 x B (A-C) / (A (B-C))
Compound Classification
To estimate the circulating loads in and the efficiency of each of the two classifiers operating in closed circuit with a ball mill:
Original feed may be applied at the ball mill or the primary classifier.
T—Tons of original feed.
X—Primary circulation factor.
Y—Secondary circulation factor.
A—% of minus designated size in feed.
B—% of minus designated size in primary overflow.
C—% of minus designated size in primary sands.
D—% of minus designated size in secondary overflow.
E—% of minus designated size in secondary sands.
Primary Circulating Load = XT.
Where X = (B-A) (D-E) / ((A-C) (B-E))
Primary Classifier Efficiency:
100 x B (A-C) / (A (B-C))
Secondary Circulating Load = YT.
Where Y = (D-B)/(B-E)
Secondary Classifier Efficiency:
100 x D (B-E) / (B (D-E))
Total Circulating Load (X + Y) T.
Reagent Consumption
Formulas for calculating reagent consumption:
Liquid reagents:
Lbs. per ton = (ml per min x sp gr liquid x % strength) / (31.7 x tons per 24 hrs) (26)
Solid reagents: Lbs. per ton = (g per min) / (31.7 x tons per 24 hrs) (27)
400 ton daily rate, 200 ml per min of 5% xanthate solution
Lbs. per ton = (200 x 1 x 5) / (31.7 x 400) = 0.079
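The same calculation, with the grouping of formula (26) made explicit:

```python
def lbs_per_ton_liquid(ml_per_min, sp_gr, pct_strength, tons_per_24h):
    """Formula (26): pounds of reagent per ton of ore for a liquid reagent."""
    return ml_per_min * sp_gr * pct_strength / (31.7 * tons_per_24h)

print(round(lbs_per_ton_liquid(200, 1.0, 5, 400), 3))   # 0.079
```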
Interpretation of Comparative Metallurgical Calculations
Generally speaking, the purpose of ore concentration is to increase the value of an ore by recovering most of its valuable contents in one or more concentrated products. The simplest case may be
represented by a low grade copper ore which in its natural state could not be economically shipped or smelted. The treatment of such an ore by flotation or some other process of concentration has
this purpose: to concentrate the copper into as small a bulk as possible without losing too much of the copper in doing so. Thus there are two important factors. (1) the degree of concentration and
(2) the recovery of copper.
Suppose in the testing of such an ore, that the following results are obtained in three tests:
Test I—Ratio of concentration 30, recovery of copper 89%
Test II—Ratio of concentration 27, recovery of copper 92%
Test III—Ratio of concentration 15, recovery of copper 97%
The question arises: Which of these results is the most desirable, disregarding for the moment the difference in cost of obtaining them? With only the information given above the problem is
indeterminate. A number of factors must first be taken into consideration, a few of them being the facilities and cost of transportation and smelting, the price of copper, the grade of the crude ore,
and the nature of the contract between seller and buyer of the concentrate.
The problem of comparing test data is further complicated when the ore in question contains more than one valuable metal, and further still when a separation is also made (production of two or more
concentrates entirely different in nature). An example of the last is a lead-copper-zinc ore containing also gold and silver, from which are to be produced. (1) a lead concentrate, (2) a copper
concentrate, and (3) a zinc concentrate. It can be readily appreciated that an accurate comparison of several tests on an ore of this nature would involve a large number of factors, and that
mathematical formulas to solve such problems would be unwieldy and useless if they included all of these factors.
There is however, a very simple formula which indirectly takes into consideration all of these factors and is applicable to all ores, the Economic Recovery formula.
Calculate Economic Metal Recovery
Economic recovery (%) = 100 x (Value of products per ton crude ore) / (Value of ore by perfect concentration) (28)
The value of the products actually made in the laboratory test or in the mill is calculated simply by liquidating the concentrates according to the smelter schedules which apply, using current metal
prices, deduction, freight expense, etc., and reducing these figures to value per ton of crude ore by means of the ratios of concentration.
The value of the ore by ”perfect concentration” is calculated by setting up perfect concentrates, liquidating these according to the same smelter schedules and with the same metal prices, and
reducing the results to the value per ton of crude ore. A simple
example follows:
The value per ton of crude ore is then $10 for lead concentrate and $8.50 for zinc, or a total of $18.50 per ton of crude ore. By perfect concentration, assuming the lead to be as galena and the zinc
as sphalerite:
The value of crude ore by perfect concentration is then $11.46 for the lead concentrate and $11.18 for the zinc, or a total of $22.64 per ton of crude ore.
By formula (28)
Economic recovery = 100 x 18.50/22.64 = 81.7%
Any number of tests can be compared by means of the economic recovery.
The “perfect grade of concentrate” is one which contains 100% desired mineral. By referring to the tables “Minerals and Their Characteristics” (pages 332-339) it is seen that the perfect grade of a
copper concentrate will be 63.3% when the copper is in the form of bornite, 79.8% when in the mineral chalcocite, and 34.6% when in the mineral chalcopyrite.
A common association is that of chalcopyrite and galena. In concentrating an ore containing these minerals it is usually desirable to recover the lead and the copper in one concentrate, the perfect
grade of which would be 100% galena plus chalcopyrite. If “L” is the lead assay of the crude ore, and “C” the copper assay, it is easily shown that the ratio of concentration of perfect concentration
K perfect = 100 / (1.155L + 2.887C) (29)
the factors 1.155 and 2.887 being the reciprocals of percent metal referred to in “Minerals and Their Characteristics.” The grade of the perfect concentrate is then found by the formulas:
% Pb in perfect concentrate = K perfect x L……………………….(30)
% Cu in perfect concentrate = K perfect x C………………………..(31)
or, directly by the following formula:
% Pb in perfect concentrate = 86.58R / (R + 2.5) (32)
where R represents the ratio: % Pb in crude ore/% Cu in crude ore
Formula (32) is very convenient for milling calculations on ores of this type.
Example: An ore contains 5% lead and 1% copper. The ratio of perfect concentration for a concentrate of maximum grade and 100% recoveries of lead and copper would be:
by (29) K perfect = 100 / (5.775 + 2.887) = 11.545
and % Pb in perfect concentrate = 11.545 x 5 = 57.7%
and % Cu in perfect concentrate = 11.545 x 1 = 11.54%
or, directly by (32), % Pb = 86.58 x 5 / (5 + 2.5) = 57.7%
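The same computation as formulas (29) to (31), using the 5% lead, 1% copper example above:

```python
def perfect_concentration(L, C):
    """L = % lead (as galena) and C = % copper (as chalcopyrite) in the crude ore."""
    K = 100 / (1.155 * L + 2.887 * C)       # ratio of perfect concentration
    return K, K * L, K * C                  # K, % Pb and % Cu in the perfect concentrate

K, pb, cu = perfect_concentration(L=5, C=1)
print(round(K, 3), round(pb, 1), round(cu, 2))   # 11.545 57.7 11.54
```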
Similar formulas for other mineral associations, for example, galena and chalcocite or chalcopyrite and chalcocite are easily worked out.
Occasionally the calculation of the grade of perfect concentrate is unnecessary because the smelter may prefer a certain maximum grade. For example, a perfect copper concentrate for an ore containing
copper only as chalcocite would run 79.8% copper, but if the smelter is best equipped to handle a 36% copper concentrate, then for milling purposes 36% copper may be considered the perfect grade.
Similarly, in a zinc ore containing marmatite, in which it is known that the maximum possible grade of zinc concentrate is 54% zinc, there would be no point in calculating economic recovery on the
basis of a 67% zinc concentrate (pure sphalerite). For example, the following assays of two zinc concentrates show the first to be predominantly sphalerite, the second marmatite:
The sulphur assays show that in the first case all of the iron is present as pyrite, and consequently the zinc mineral is an exceptionally pure sphalerite. This concentrate is therefore very low
grade, from the milling point of view, running only 77.6% of perfect grade. On the other hand, the low sulphur assay of concentrate B shows this to be a marmatite, for 10% iron occurs in the form of
FeS and only 2.5% iron as pyrite. The zinc mineral in this case contains 55.8% zinc, 10.7% iron, and 33.5% sulphur, and clearly is an intermediate marmatite. From the milling point of view
concentrate B is high grade, running 93% of perfect grade, equivalent to a 62% zinc concentrate on a pure sphalerite basis.
| {"url":"https://www.911metallurgist.com/blog/milling-calculations-formulas/","timestamp":"2024-11-08T21:00:50Z","content_type":"text/html","content_length":"207369","record_id":"<urn:uuid:ebf83a93-6340-47b4-b112-6ca4559c2f55>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00598.warc.gz"}
hdu3336 interpreting the next array of KMP algorithms
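The body of the original post is not preserved here. As a generic sketch of the "next" (failure) array the title refers to, under one common convention where next[i] is the length of the longest proper prefix of pattern[:i+1] that is also a suffix, a minimal implementation might look like this; the example pattern is arbitrary.

```python
def build_next(pattern: str) -> list[int]:
    """KMP failure function: nxt[i] = length of the longest proper prefix of
    pattern[:i+1] that is also a suffix of it."""
    nxt = [0] * len(pattern)
    k = 0                                # length of the currently matched prefix
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = nxt[k - 1]               # fall back to the next shorter border
        if pattern[i] == pattern[k]:
            k += 1
        nxt[i] = k
    return nxt

print(build_next("ababaca"))             # [0, 0, 1, 2, 3, 0, 1]
```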
| {"url":"https://topic.alibabacloud.com/a/hdu3336-interpreting-the-next-array-of-kmp-algorithms_8_8_30330979.html","timestamp":"2024-11-13T02:10:04Z","content_type":"text/html","content_length":"85792","record_id":"<urn:uuid:f5592c4d-28e2-438a-a5b2-824f2148f25c>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00068.warc.gz"}
Elliot Benjamin
TRANSLATE THIS ARTICLE
Integral World: Exploring Theories of Everything
An independent forum for a critical discussion of the integral philosophy of Ken Wilber
Elliot Benjamin, Ph.D.
1. Introduction: Integral Mathematics Perspectives
Mathematics is both an art form and a scientific discipline. When philosopher Ken Wilber writes about the differentiation of “The Big Three” (i.e. Art, Morals, Science), we find mathematics in the
rather unique position of simultaneously entering territory that is both an extremely subjective art form as well as an extremely objective science form, encompassing both the rational and vision/
logic levels of consciousness in Ken Wilber's Integral model (c.f.[11],[12],[13]). As a mathematician, this split has meaning to me as the division of the field into pure mathematics and applied mathematics.
This dual perspective of mathematics is described exceptionally well by Jerry King in The Art Of Mathematics (c.f.[8]). King stresses how these two disciplines of mathematics are often worlds apart;
that they may be as far apart as the mystical poet and the objective scientist. However, King also calls for a uniting of these two mathematical disciplines; i.e. an integration of the worlds of pure
mathematical thinking and pragmatic mathematics application in objective reality. In other words, King is asking that the realms of art and science be integrated in the world of mathematics, in quite
an analogous way to the Integral model's quest to integrate the inner and outer domains in various branches and disciplines of knowledge into the “four quadrants” of Upper Left: Individual, Upper
Right: Behavioral, Lower Left: Cultural, and Lower Right: Social, including psychology, spirituality, medicine, law, politics, government, education, business, etc. I would like to propose in this
article that we add the discipline of mathematics to the recognized list of emerging integral disciplines.
We can view Integral Mathematics from a variety of different perspectives, including the following:
1. Ken Wilber's “A Calculus Of Indigenous Perspectives” approach to integral mathematics (c.f. [14],[15]).
2. The whole range of mathematics in the context of an independent line, as part of an integral four quadrant analysis (c.f. [7],[9],[10]).
3. An integral four quadrant analysis applied to mathematics as a discipline and specific subject, focusing upon the various branches of mathematical study.
4. An integral four quadrant analysis applied to a particular age group.
5. An integral four quadrant analysis applied to mathematical research.
6. Integral Mathematics in the context of a bona-fide mathematical discipline in itself, applied to all branches of mathematical study.
Each of the above perspectives of Integral Mathematics is a legitimate approach, but my main concentration in this article will be on perspective #3. I will be describing how the pure mathematics
disciplines of Number Theory and Group Theory (a Group Theory/Consciousness problem is described in the Appendix) can be extended into an applied mathematics context that involves all four quadrants
in Integral theory.
The disciplines of pure and applied mathematics are related to the division of Upper Left (UL) and Upper Right (UR) quadrants in the four quadrants of Integral theory, where we are utilizing a four
quadrant perspective on the disciplines of mathematics. In the Upper Left quadrant, we have such things as the “cognitive line” as well as a continuum of mathematical knowledge that can be
effectively described as “transcend and include” in levels of true “holarchical” fashion (c.f. [11],[12],[13] for definitions of these terms). For example, when fourth graders learn their
multiplication and division facts, they do not forget these facts (hopefully) when they eventually learn high school Algebra, but include their ingrained Arithmetic skills to solve higher-level
mathematical problems. Of-course our phenomenal technological advances in recent years make it unnecessary to retain many formerly necessary mathematical skills: Arithmetic, Algebra, and beyond,
although I contend that it is intrinsically beneficial for us to retain the mathematical knowledge upon which our technology is based.
However, incorporating these technological developments into our Integral Mathematical model brings us into the Lower Right (LR) quadrant of the AQAL (All Quadrants All Levels in Integral Theory)
model. For example, as a pure mathematician researcher in the field of Algebraic Number Theory, I have learned to appreciate the tremendous usefulness of the mathematical computer software program
“Pari” in furnishing me with extremely complicated examples of theoretical mathematical results I have proved. There is no way I would have ever been able to come up with these examples without the
use of this technology, and I view my pure mathematics research in this context as an example of Integral Mathematics perspective #5. (For the interested reader, I have a paper that has appeared in
the Ramanujan journal which weaves my pure mathematics results together with my technologically based examples (c.f. [4])). For me, it has been truly enlightening to gradually assimilate this blend
of technology into pure mathematics, and when I gave a talk about my results to the Maine/Quebec Number Theory Conference in 2002, I received feedback that it was very refreshing to see this kind of
well-balanced mixture of theory and technology.
However, I must fully admit that I am essentially a pure mathematician, and my interest in mathematics is primarily for its inner artistic beauty, as described so eloquently by Jerry King in The Art
Of Mathematics (c.f.[8]). But upon reading Ken Wilber's “A Calculus Of Indigenous Perspectives” (c.f. [14], [15]), I felt the inclination to write an article in which mathematical group theory, the
basic logical structure of the pure mathematics discipline of Abstract Algebra, could be applied to a theory of shifts into higher levels of consciousness addressed in Integral theory. I call this
theory “A Group Theoretical Mathematical Model Of Shifts Into Higher Levels Of Consciousness” (c.f.[5]). I view the ideas in this theory as a step towards a unification of pure mathematics (UL) and
applied mathematics (UR), i.e. a merger of the Upper Left and Upper Right quadrants in the Integral model for the cognitive mathematical stream. The basics of this mathematical model for my Group
Theory/Consciousness example is included in the Appendix for interested readers, though the mathematics does require a high degree of concentration to follow.
In regard to the Lower Left (LL) quadrant for Integral Mathematics, which can be described as the cultural “We” realm, this is the realm of sharing the integral view of mathematics with others. A
prime (excuse the mathematical pun) example of my own experience in all four quadrants of Integral Mathematics has been the work that I have done in both elementary schools and liberal arts college
courses teaching the ideas in my Recreational Number Theory book Numberama: Recreational Number Theory In The School System (c.f.[2]). In the context of Recreational Number Theory, which involves
exploring the intriguing patterns in our number system as an enjoyable recreational pastime, I promote the exploration and discovery of a variety of interesting and stimulating number patterns as a
unique learning experience that children and liberal arts college students may have in the realm of pure mathematics (UL). At the same time, I stimulate elementary school children to diligently
practice their multiplication and division skills while all my students are learning how to make use of their age appropriate technology tools of Arithmetic or scientific calculators, in order to do
the necessary trial and error work of discovering these patterns (Upper Right (UR) and Lower Right (LR) quadrants).
However, it is important to keep in mind that in this context of Integral Mathematics I am addressing Integral Mathematics perspective #3, i.e. mathematics as a particular subject and discipline,
when I view the calculation skills as an UR quadrant activity. In integral mathematics perspective #2, where the focus is upon the entire range of mathematics as a developmental line within people,
all mathematical thinking including calculation skills would be considered an UL activity, while the UR activities would consist of such things as the observable behaviors and brain wave states of
students. The LR aspects would include mathematical symbols and written language, the classroom settings where mathematical communication takes place, the actual external communications of language
utilized to discuss the mathematics, etc.
For older children and college students, their exploration and discovery of mathematical patterns eventually result in concrete algebraic formulas (c.f. [1],[2],[3])). There is much collaboration
amongst students in working together to explore my problems, and my book includes twenty games that teachers, children, and parents can reproduce and play together to further practice the skills they
are working on (c.f. [2]). For me, it is a way to bridge the gap between my own rather ivory tower mathematical interests and the extremely pragmatic view of mathematics that the great majority of
people in the world have. Another example of the Lower Left quadrant being utilized in my own Integral Mathematics work has been my offering of “Family Math” workshops for parents and children
working together on these Recreational Number Theory problems. In this Family Math context we have a strong collaborative LL quadrant activity of families working with each other in wonderful
collaboration and mutual understanding to explore the UL quadrant Recreational Number Theory problems that I use, with much arithmetical calculation practice in the UR quadrant, in conjunction with
classroom settings, verbal exchanges of the mathematics involved, and technology in the form of calculators and occasionally the internet, all of which comprise LR quadrant activity.
We thus have all four quadrants well represented in the Integral model: the Upper Left for the intrinsic artistic pure mathematical experience of exploring and discovering patterns of numbers; the
Upper Right for the objective disciplined arithmetic skills practice with eventual concrete algebraic formulas; the Lower Left for the collaboration of liberal arts college students or children,
parents, and teachers working together to discover these intrinsic Number Theory patterns via objective arithmetic skills practice; and the Lower Right for the outward forms of communication and
physical resources, and for the use of technology in the form of calculators and eventually computers as the numbers become increasingly larger and reach the point where it is no longer feasible to
explore the patterns without the use of this more advanced technology.
To give a concrete illustration of how I utilize Recreational Number Theory in the context of a four quadrant Integral model in teaching the joys of mathematics to others, I will focus upon the
example of Perfect Numbers, which is described in more detail in my Numberama book (c.f.[2]). Although I have taught Perfect Numbers in an Integral Mathematics context to all age groups, including
college students as well as children, the description I will be giving in Section 4 is particularly well suited for the age group of upper elementary school children, which is an illustration of
Integral Mathematics perspective #4. As I have indicated in my above six Integral Mathematics perspectives, the Integral Mathematics discipline of study approach (Integral Mathematics perspective #
3), of which Perfect Numbers is a specific example taken from Recreational Number theory, is a very different perspective from Wilber's Integral Calculus Of Indigenous Perspectives (Integral
Mathematics perspective #1).
Wilber's approach is essentially a mathematical symbolic language to describe various first-person, second-person, and third-person perspectives, with various layers of further perspectives on the
horizon. This approach is at the cornerstone of a more refined grid to the four quadrants, where each of the four quadrants is further divided to include both an inside and outside perspective.
Wilber has come up with an interesting display of mathematical symbolic language to describe these perspectives (c.f. [14], [15]), but this is in a very different context from my main goal of
demonstrating how the four quadrant Integral model can be applied to mathematics as a subject and discipline of study, focusing upon pure mathematics in the context of UL intrinsic mathematical
thinking. I would also like to add that Integral Mathematics perspective #6, which views Integral Mathematics as its own mathematical discipline, applied to all branches of mathematical study,
actually includes both my Perfect Numbers example in Section 4 as well as my Group Theory/Consciousness example in the Appendix. To give two well known examples from an applied mathematics context of
Integral Mathematics perspective #6, we will take a brief look at the Pythagorean Theorem from high school Trigonometry, and the Fundamental Theorem Of Calculus from first year Calculus for math and
science majors.
2. Mathematics As a Discipline: UL & UR
If one were to take an informal survey of the range of present day mathematics in regard to where they might fit in the four quadrants of study (technically known as “quadrivia” c.f. [15], although I
will take the liberty of referring to “quadrants” for ease of presentation), I believe that virtually all mathematics subjects have components in the inter-subjective (LL) and inter-objective (LR)
quadrants. Through cultural collaboration and communications amongst mathematicians and scientists, and the tremendously widespread use of the internet and computer programs in addition to
textbooks, research papers, seminars, classroom settings, etc., it seems quite evident that the cultural and social quadrants (LL and LR) are well represented in virtually every field of mathematics.
However, when we look at the individual and behavioral quadrants (UL and UR) in the various branches of mathematics in the context of the pure and applied mathematics divisions that I outline below
in relation to the UL and UR quadrants for Integral Mathematics perspective #3, then the picture is not quite as simplistic or as universal. In terms of where particular mathematical disciplines
belong in this UL and UR classification, I will make the following distinctions, keeping in mind that this is a generic classification and is not meant to be airtight or complete. The particular
classification scheme that I have devised can also be described in Ken Wilber's symbolic language of indigenous perspectives (Integral Mathematics perspective #1) where pure mathematics would have an
interior perception first person description and applied mathematics would have an exterior perception concrete world focus third person perspective (c.f. [14], [15]).
Figure 1. Disciplines of Mathematics: UL & UR
PURE MATHEMATICS (UL): Number Theory, Abstract Algebra, Topology, Trigonometry
APPLIED MATHEMATICS (UR): Statistics, Differential Equations, Pre-Calculus (Analytic Geometry, high school Algebra, Arithmetic)
PURE AND APPLIED MATHEMATICS (UL & UR): Calculus, Analysis (Real & Complex), Probability, Geometry (Euclidean & Projective), Set Theory
To illustrate Integral Mathematics perspective #6, i.e. Integral Mathematics itself as a mathematics discipline, I will take a look at two well known applied mathematics problems from Trigonometry
and Calculus. To begin with, we can measure the width of a river without crossing it by applying the Pythagorean Theorem from Trigonometry, which says that in a right triangle (a triangle with a
perpendicular corner), the square of the side opposite the right angle (the hypotenuse) is equal to the sum of the squares of the two other sides, which is generally described algebraically as c^2 =
a^2 + b^2, where the symbol ^ denotes an exponent; thus 5^2 = 5 X 5 = 25. The crossing the river problem starts out as a highly pragmatic example from Trigonometry in the UR quadrant, but one can
study how to prove the Pythagorean Theorem using logical mathematical thinking, which involves the UL quadrant. One can have all kinds of cultural connections with others thru interactive learning
communities via the context of classroom settings as well as field experiences in surveying the land or shores of the river, which involves the LL quadrant. Finally, one can employ various technologies in terms of surveying equipment as well as classroom calculators, utilized in the external formats of classroom settings with verbal mathematical exchanges, which enters into the LR quadrant.
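A small numerical sketch of the river-width idea; the 60 m baseline and the 100 m line of sight are invented figures, not from the article.

```python
import math

# Walk 60 m along the bank from the point directly opposite a landmark, then
# measure 100 m straight to the landmark: the river width is the remaining leg
# of the right triangle, from c^2 = a^2 + b^2.
c, a = 100.0, 60.0
width = math.sqrt(c**2 - a**2)
print(width)   # 80.0
```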
A similar argument can be made using Calculus for the problem of finding the area under the parabolic curve y = x^2, say for the range from x = 2 to x = 7. A parabola can always be
described by a quadratic (highest exponent of 2) equation in Algebra, which is a common representation for many kinds of scientific problems, ranging from measuring the height of an object thrown
from the ground, to the probability distribution of so-called “normal” distributions from Statistics; once again we are starting out with an UR quadrant problem. As it turns out, we can get an
estimate of this area by adding up the areas of a number of small rectangles under the curve. As our rectangles increase in number and decrease in width, the sum of their areas approaches the whole region under the curve, and our area approximation becomes better and better. However, we can get the exact area by using what is known as The Fundamental Theorem Of Calculus,
which relates the theory of anti-derivatives to finding the area of various geometric curves. But for our purposes right now, what is important is that the Fundamental Theorem Of Calculus can be
proven, and this is generally done for first year math and science major college students, and is certainly in the context of an UL quadrant activity. The LL and LR quadrants once again can be
utilized in various teaching/learning scenarios with a rich array of educational and technical resources.
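To make the UR-quadrant calculation concrete, here is a short sketch (my own illustration, not code from the article) that approximates the area under y = x^2 from x = 2 to x = 7 with an increasing number of rectangles, and compares the result with the exact value given by the Fundamental Theorem of Calculus, namely the antiderivative x^3/3 evaluated at the endpoints: 7^3/3 - 2^3/3 = 335/3, roughly 111.67.

```python
def riemann_area(f, a, b, n):
    """Left-endpoint Riemann sum: n rectangles of equal width over [a, b]."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

f = lambda x: x**2
exact = 7**3 / 3 - 2**3 / 3          # Fundamental Theorem of Calculus: F(x) = x^3/3

for n in (10, 100, 1000, 10000):
    approx = riemann_area(f, 2, 7, n)
    print(f"{n:>6} rectangles: {approx:10.4f}   (exact {exact:.4f})")
# The approximation approaches 111.6667 as n grows.
```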
From this brief glimpse into the world of well known applied mathematics, together with the examples I will be describing from pure mathematics in Section 4 and the Appendix, it appears that there is
much potential to view Integral Mathematics from perspective #6, in the context of a separate field of mathematical study itself.
3. The Developmental Line of Mathematics
One way of approaching the developmental line of mathematical thinking is to use Piaget's levels of cognitive development: sensori-motor, pre-operational, concrete operations, and formal operations
(c.f. [10]).
According to Piaget, young children before the age of 6 or 7 are in the pre-operational stage and have not yet developed “number conservation,” meaning that their sense of number is not intact. For
example, if one increases the distance between a given number of objects, effectively spreading them out, this may cause children to believe that there are more objects present when they are spread
out than when they are bunched close together, despite the fact that the number of objects remains the same (c.f. [10]). However, it should be pointed out that there is also disagreement with
Piaget's conclusion that young children do not have a real number sense (although Piaget's levels of cognitive development themselves are widely accepted), from recent research in neuropsychology and brain physiology suggesting that young children may not understand the instructions of the experimenter, and may have a different interpretation of what is meant by "more," "less,"
etc. (c.f. [6]). At any rate, let us assume for the moment that children between the ages of 7 and 11 generally enter Piaget's concrete operations stage, and are quite capable of engaging in
arithmetical calculations with a true sense of what a number actually represents. Their ability to engage in more symbolic mathematics involving the manipulation of algebraic quantities representing
whole sets of numbers, does not come into prominence until age 11 or so, when they have entered the formal operations cognitive level. This ability to manipulate formal mathematical symbols,
representing various sets of mathematical objects, continues to grow and expand thru adolescence and young adulthood.
However, the higher levels of mathematical ability and the sublime creative productions of mathematicians appear to move beyond Piaget's highest stage of formal operations into what we may identify as Integral Theory's vision-logic level of cognition, where complex inter-relationships are processed symbolically and metaphorically in highly creative ways (c.f. [9]). This vision-logic
level of mathematical cognition is the essential vehicle that allows for the discovery of new mathematical ideas, and in particular for the highly abstract mathematical disciplines involving
combinations of various fields such as Number Theory, Topology, Abstract Algebra, Real and Complex Analysis, Projective Geometry, etc. (see Figure 1). At the same time, this vision-logic level of
mathematical cognition allows for the highly theoretical logical proofs of some of the key theorems of applied mathematical disciplines, such as Calculus and Analysis and Statistics (see Figure 1).
The actual application of mathematics to the world-at-large is a combination of the vision-logic and formal operations cognitive levels with concrete real world applications, represented in
applications of Statistics, Calculus, Differential Equations, etc. and most especially in the combined mathematics/science fields such as Mathematical Physics and Mathematical Biology, etc.
An even more focused perspective on the mathematical line of Integral theory can be seen from the work of Howard Gardner on Multiple Intelligences (c.f. [7]). For Gardner, the logical-mathematical
line of “multiple intelligences” is one type of intelligence in addition to the intelligences which he characterizes as linguistic, musical, spatial, bodily-kinesthetic, and personal (inner and
outer-directed awareness). Although there are various relationships amongst these diverse intelligences, the main features of Gardner's logical-mathematical intelligence include Piaget's descriptions
as follows (c.f. [7], parenthesis my inclusion):
“Its origins are in the child's action upon the physical world; the crucial importance of the discovery of number; the gradual transition from physical manipulation of objects to interiorized
transformation of action (UR to UL); the significance of relations among actions themselves; and “higher tiers of development, ” where the individual begins to explore the relationships and
implications obtained from hypothetical statements.”
These higher tiers of development appear to have significant connections to the vision-logic level of consciousness, involving a beyond logic realm of intuition as well as long and complicated chains
of abstract reasoning. This can be described in more specific mathematical terms as the hierarchical development from the concept of “number” to the creation of “Algebra,” where numbers are regarded
as a system and variables are introduced to represent numbers, to the more general concept of “functions,” where one variable has a systematic relation to another variable. Functions may involve real
values such as length, width, time, etc., but may also involve non-real quantities such as imaginary numbers, functions of functions, and significantly more complicated abstractions as well (c.f.
[7]). The two examples I have given from Trigonometry and Calculus are examples of functions of real values. The example of Perfect Numbers in Section 4 is an example of the concept of number
represented in Algebra. And the Group Theory/Consciousness example in the Appendix is an example of a more abstract formulation of functions, though applied to the real world in the context of shifts
into higher levels of consciousness thru the practice of meditation.
The above discussion of Piaget and Gardner for the cognitive mathematical line can be put into the context of Integral Mathematics perspective #2. From this perspective, essentially any kind of
mathematical thinking, whether it is computational or symbolic, would fall in the UL quadrant. The production of written mathematical language and symbols to describe this mathematical thinking would
be placed in the UR quadrant. The teaching and learning (thru social interaction) of mathematical ideas and skills would be the crux of the LL quadrant, encompassing all our interpersonal and
interactive educational settings. And the use of textbooks, calculators, computers, external classroom settings and verbal mathematical communications, etc. would be the nuts and bolts of the LR
quadrant. We thus see from the work of Piaget and Gardner how Integral Mathematics perspective #2 deals with the cognitive mathematical line in the context of a four quadrant analysis.
4. The Four Quadrant Mathematics of Perfect Numbers
We come now to our primary example of Integral Mathematics perspective #3, mathematics as a discipline and particular subject of study, which will be taken from the area of Recreational Number Theory
and will involve the topic of Perfect Numbers. This topic is also an excellent illustration of how the world of pure mathematics can be introduced to upper elementary school children, illustrating
Integral Mathematics perspective #4 (an integral four quadrant analysis applied to a particular age group).
The topic of Perfect Numbers is a magnificent example of an enticing unsolved problem in mathematics that can be easily understood by children, and its formulation involves very large prime numbers that shed light on an application to government security codes. Let us first define a perfect number to be a number such that all the numbers that divide into it evenly, not including the number itself, add up to the original number (i.e. a perfect number is the sum of its proper divisors). For example, the numbers that divide evenly into 8, other than 8 itself, are 1, 2, and 4. These proper divisors of 8 add up to 7, and therefore 8 is not a perfect number. However, 6 has proper divisors 1, 2, and 3, which add up to 6, and therefore 6 is the first perfect number. With a little bit of diligence it can be determined without too much trouble that the second perfect number is 28, as the proper divisors of 28 are 1, 2, 4, 7, and 14, which indeed add up to 28. But now the fun starts, as it turns out that the third perfect number is somewhat larger, but there is an interesting pattern for perfect numbers that can be explored in the context of Recreational Number Theory, and the discovery of this pattern is a good example of the inner creativity of the UL quadrant.
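For readers who would like to test numbers themselves (or let students check their candidates quickly), here is a small sketch of the definition in code; it is only an illustration of the proper-divisor test described above, not part of the original enrichment activity.

```python
def proper_divisors(n: int) -> list[int]:
    """All divisors of n smaller than n itself."""
    return [d for d in range(1, n) if n % d == 0]

def is_perfect(n: int) -> bool:
    """A perfect number equals the sum of its proper divisors."""
    return n > 1 and sum(proper_divisors(n)) == n

print(proper_divisors(8), is_perfect(8))    # [1, 2, 4] False  (they sum to 7)
print(proper_divisors(6), is_perfect(6))    # [1, 2, 3] True
print(proper_divisors(28), is_perfect(28))  # [1, 2, 4, 7, 14] True
```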
Notice how the first two perfect numbers, 6 and 28, can be written as 6 = 2 X 3 and 28 = 4 X 7. A possible pattern for the third perfect number might therefore be 2 X 3, 4 X 7, and 8 X 11 = 88, where
8 is obtained by doubling 4 and 11 is obtained by adding 4 to 7. Another possible pattern could be 2 X 3, 4 X 7, 8 X 21 = 168 where 21 = 3 X 7, or 2 X 3, 4 X 7, 16 X 11 = 176 where 16 is 4 X 4 (after
observing that 4 = 2 X 2), etc. Eventually, with some helpful hints, the pattern 2 x 3, 4 x 7, 16 X 31 = 496 will be arrived at, where the first factor is obtained by squares: 2 X 2 = 4, 4 X 4 = 16,
and the second factor is obtained by doubling the first factor and subtracting 1; i.e. 3 = 2 X 2 – 1, 7 = 2 X 4 – 1, 31 = 2 X 16 – 1. It can be readily checked that 496 is truly the third perfect
number, as the proper divisors of 496 are 1, 2, 4, 8, 16, 31, 62, 124, and 248, these proper divisors do add up to 496, and there are no perfect numbers between 28 and 496 (wonderful multiplication
and division skills practice in the UR quadrant for upper elementary school children: see figure 2).
Figure 2. First Three Perfect Numbers and Their Proper Divisors
First Perfect Number: 6 = 2 X 3; proper divisors are 1, 2, 3, which add up to 6.
Second Perfect Number: 28 = 4 X 7; proper divisors are 1, 2, 4, 7, 14, which add up to 28.
Third Perfect Number: 496 = 16 X 31; proper divisors are 1, 2, 4, 8, 16, 31, 62, 124, 248, which add up to 496.
However, if this pattern is continued to try to obtain the fourth perfect number, one obtains 256 X 511 = 130,816 since 16 X 16 = 256 and 511 = 2 X 256 – 1. With the use of an ordinary arithmetic
calculator (LR quadrant activity) and/or some knowledge of factor trees and prime factorization, it is quite reasonable for liberal arts college students and children in grades 5 and higher to
determine that 130,816 is not a perfect number (see [2] for more particular information and mathematical techniques).
What is the correct pattern to find the fourth perfect number? Try doubling the first factor and doubling once again and subtracting 1 to get the second factor. For example, since the second perfect
number is 28 = 4 X 7, we would have 8 X 15 as a candidate for the third perfect number, but it can be easily checked that 8 X 15 = 120 is not a perfect number. However, by doing it once more we
obtain 16 X 31 = 496, which is indeed the third perfect number. The crucial observation is that the second factors of the first three perfect numbers are 3, 7, and 31, all of which are prime numbers (recall that a prime number is a number greater than 1 whose only proper divisor is 1), whereas the second factor of the false candidate for the third perfect number is 15, which is not a prime number. Continuing this pattern once more results in 32 X 63, which would be rejected since 63 is not a prime number, but the next candidate is 64 X 127 = 8128, and it is easy to see that 127 is a prime number. It is quite feasible to determine that 8128 is a perfect number, and it happens to be the fourth perfect number (see Figure 3).
Figure 3. First Five Perfect Numbers and Their Correct Patterns
First Perfect Number: 6 = 2 X 3; 3 = 2 X 2 – 1 and 3 is prime.
Second Perfect Number: 28 = 4 X 7; 7 = 2 X 4 – 1 and 7 is prime.
Third Perfect Number: 496 = 16 X 31; 31 = 2 X 16 – 1 and 31 is prime.
Fourth Perfect Number: 8128 = 64 X 127; 127 = 2 X 64 – 1 and 127 is prime.
Fifth Perfect Number: 33,550,336 = 4096 X 8191; 8191 = 2 X 4096 – 1 and 8191 is prime.
Continuing this process with much diligence, factor tree and prime factorization use to determine proper divisors (c.f. [2]), serious calculator use, and small group collaborative efforts (productive and fun-loving cultural LL activity), students will discover that the fifth perfect number is 33,550,336 (see Figure 3). Does this pattern always work to find perfect numbers? How many perfect numbers are there? As it turns out, the topic of perfect numbers has by no means been completely solved by mathematicians, as we do not know how many perfect numbers there are, i.e. whether or not there are infinitely many; only 41 perfect numbers had been found using supercomputers at the time of writing (the prime number involved in the largest known perfect number was found through quite intensive LR quadrant activity, and written out in full it would take 1,400 to 1,500 pages!). Although we know that all even perfect numbers do follow the particular pattern we have described, which can be readily made into an algebraic formula for college students and middle school children who are learning algebra, we do not know whether or not there exists an odd perfect number (wonderful UL quadrant stimulation). The use of extremely large prime numbers in the formula for even perfect numbers is directly related to how government security codes are devised (LR), where tremendously large composite (non-prime) numbers would need to be factored into two very large prime numbers in order to unlock the code.
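The algebraic formula alluded to above is the classical Euclid-Euler characterization: every even perfect number has the form 2^(k-1) X (2^k - 1) where 2^k - 1 is prime (a Mersenne prime); this matches the factorizations 2 X 3, 4 X 7, 16 X 31, 64 X 127 and 4096 X 8191 in Figure 3. The short sketch below (my own illustration) generates the first few perfect numbers from that formula.

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def even_perfect_numbers(count: int):
    """Generate even perfect numbers 2^(k-1) * (2^k - 1) with 2^k - 1 prime."""
    found, k = [], 2
    while len(found) < count:
        mersenne = 2**k - 1
        if is_prime(mersenne):
            found.append(2**(k - 1) * mersenne)
        k += 1
    return found

print(even_perfect_numbers(5))
# [6, 28, 496, 8128, 33550336] -- matching Figure 3
```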
From this brief illustration of how I teach the topic of Perfect Numbers as a mathematics enrichment activity, we can see how all four quadrants of Integral Mathematics perspective #3 have been
utilized, perspective # 3 being an integral four quadrant analysis applied to mathematics as a discipline and specific subject, focusing upon the various branches of mathematical study. The intrinsic
exploration of number patterns thru generating ideas and hypotheses is an UL quadrant activity focused upon the interior individual realm of cognitive creativity. The actual testing of these ideas to
see if these patterns result in successful outcomes involves much practice in the concrete skills of multiplication and division (and algebra for older students), and can be viewed as an UR
individual exterior behavioral activity. I generally have students working in small groups collaboratively to both explore ideas for their patterns as well as test them out, which is a LL quadrant
activity, sharing in the communion of creative ideas and procedures to test out these ideas. Finally, as the possible candidates for number patterns quickly become extremely large, students utilize
technology in the form of calculators (older children and/or college students may utilize computers as well) to obtain results regarding the testing of their ideas for patterns, which is an LR quadrant occasion of inter-objective activity, utilizing technology currently available in our social institutions, in classroom settings.
5. Concluding Statement
From this brief glimpse into the possible ramifications of including Integral Mathematics as one of the disciplines seeking to take an integral perspective, we see that there are rich and enticing
possibilities to consider and a number of different approaches that can be taken.
A four quadrant analysis can be applied to the whole range of mathematics as a cognitive line thru the research and theories of Piaget and Gardner (Integral Mathematics perspective #2). A four
quadrant analysis can also be applied to mathematics as a particular discipline and subject of study, as seen thru well known applied mathematics examples from Trigonometry and Calculus, a pure
mathematics example from Recreational Number Theory, and a combined pure and applied abstract mathematics example joining mathematical Group Theory and levels of consciousness (see Appendix). These
two major perspectives of Integral Mathematics have immediate relationships to viewing integral mathematics toward a specific age group (perspective #4) as well as to engaging in mathematical
research (perspective #5) and the establishment of a discipline of Integral Mathematics in its own right (perspective #6). And in a context of mathematical perspectives we can study Ken Wilber's
Calculus Of Indigenous Perspectives as a symbolic mathematical language to describe the multi-dimensional perspectives inherent in human interaction (Integral Mathematics perspective #1).
We thus see that Integral Mathematics has a great deal of potential to take its place alongside the other areas of study that are entering the integral AQAL domain. It is my hope that this article may
serve as a calling forth to other mathematicians to furnish their own examples of how a Four Quadrants approach to Integral Mathematics may be a rich and valuable inclusion in the development of
Integral theory that is currently taking place. I would like to form a network of mathematicians who are excited about this endeavor and who want to share their ideas with one another. I look forward
to the first Integral Mathematics conference, and I believe that we may be seeing this in the not too distant future. In the meantime, I welcome hearing about your own ideas and interests concerning
what Integral Mathematics means to you and how to get it off the ground.
APPENDIX: Mathematical Group Theory and Consciousness
I will now turn my attention to the higher level mathematical world of Abstract Algebra and Group Theory to see another example of how Integral Mathematics perspective #3 can range across all four
quadrants as we apply pure mathematics theory to study shifts into higher levels of consciousness in Integral Theory.
To give a concrete illustration of how Mathematical Group Theory, a pure mathematics discipline, (a “mathematical group” is defined below) can be applied to the world of Integral Theory in the
context of shifts into higher levels of consciousness, I will present some original ideas from Mathematical Group Theory applied to shifts into higher levels of consciousness. In Ken Wilber's
Integral theory, levels of consciousness may vary greatly across various “streams,” for example, a person may be on a rational level in cognition, a conventional level in morals, and an illumined
mind level in spirituality, etc. (c.f. [11],[12],[13]). Keep in mind that Wilber uses the terminology waves and structures and levels interchangeably, as well as streams and lines interchangeably. I
will apply a mathematical group model to describe the shift from the rational to the vision-logic level of consciousness in the cognitive stream. Specifically we will show how a group theoretical
mathematical model can explain how a particular state of consciousness may have a significant impact on a person developing into a higher level of consciousness. I will make the assumption that for a
person who is on a continuum between two levels of consciousness, such as rational and vision-logic, vision-logic and illumined mind, etc., repeated experiences of altered and non-ordinary states of
consciousness may help a person evolve into higher levels of consciousness in a permanent fashion.
For example, if I am in the middle of a continuum between the rational and vision-logic levels of consciousness, prolonged periods of meditation over a certain time period may be a significant factor
in enabling me to move closer toward the vision-logic level of consciousness. We shall refer to this type of meditation experience in accordance with the essence of Ken Wilber's model of “A Calculus
Of Indigenous Perspectives" (c.f. [14]). However, since Wilber's mathematical notation is quite cumbersome, we shall more simply refer to our meditation experience as M mod x, which means that person x
is experiencing meditation M within him/herself via him/herself. It is understood that we are focusing upon some particular type of meditation represented by M, and a particular individual
represented by x, as our theory is attempting to capture the subjective individualized framework of both the person and the type of meditation experienced.
However, it is important to keep in mind that my theory is at the beginning stages, and I will therefore be making rather simplistic assumptions, such as that an increase in the number of hours of meditation results in a corresponding increase in movement from the rational to the vision-logic level of consciousness, in order to illustrate the basic mathematical ideas without overdoing the amount of
complexity involved in the mathematics. Clearly this mathematical simplification does not realistically describe the actual phenomenon of how people develop into higher levels of consciousness, as
many other factors come into the picture to complicate the situation, not the least of which is that simply increasing the number of hours meditating may not have a corresponding effect of shifting
into a higher level of consciousness, the personal intention of the person meditating may be a significant variable that needs to be taken into account, etc. We now utilize some mathematical group
theory, particularly the theory of cyclic groups.
Mathematical Group Theory
A mathematical group is defined to be a set of elements S with an operation * such that the following properties are satisfied.
1. If x is an element of S and y is an element of S then x*y is also an element of S.
2. There is an identity element E in S such that for all elements x in S, x*E = E*x = x.
3. For all elements x, y, and z in S we have x*(y*z) = (x*y)*z (associative law).
4. For each element x in S there exists an element y in S such that x*y = y*x = E; we refer to this element y as x^(-1) and call it the inverse element of x.
If for all elements x and y in S we also have the property that x*y = y*x, then our group is referred to as a commutative (or abelian) group. A simple example of a commutative group is the infinite set of integers S = {…, -3, -2, -1, 0, 1, 2, 3, …} under addition. We see that the sum of two integers is always an integer, zero is the identity element, the associative law holds, for any integer x we
have x^(-1) = -x, and for any integers x and y we have x + y = y + x; thus the set of integers is a commutative group under addition. If there is an element x in our group S such that every element y
in S can be written as x^n for some integer n where x^n refers to (x*x*x…*x) n times if n > 0, x^(-n) = (x^(-1))^n, and we define x^0 = E, the identity element of the group S, then our group S is
referred to as a cyclic group with the generator x. It is an easy mathematical exercise to prove that all cyclic groups are commutative. We see that our infinite commutative group of integers is
actually a cyclic group generated by 1. For an example of a finite commutative cyclic group under addition consisting of 12 elements, think of the hour hand clock numbers of an ordinary 12 hour clock
(see figure 6). We see that 1 is a generator of the group, 12 is the identity element (12 + 5 = 5, 12 + 7 = 7, 12 + (-4) = 12 + 8 = 8, etc.), and for any hour hand clock number x we have x^12 = (x +
x + x + … + x) (12 times) = 12. Given any clock number x, we see that 12 – x is the inverse of x since x + (12 – x) = 12 = the identity element E (for example, 5 + (12 – 5) = 5 + 7 = 12 = E, etc.).
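As a concrete check of these group axioms, the sketch below is an illustration I am adding (not from the article): it encodes the 12-hour clock face directly, with 12 playing the role of the identity element as in the text, and verifies closure, commutativity, the identity, inverses and the fact that 1 generates the whole group.

```python
CLOCK = range(1, 13)  # clock-face numbers 1..12, with 12 playing the role of the identity

def clock_add(a: int, b: int) -> int:
    """Addition on the 12-hour clock face: the result is always in 1..12."""
    return (a + b - 1) % 12 + 1

# Closure and commutativity
assert all(clock_add(a, b) in CLOCK and clock_add(a, b) == clock_add(b, a)
           for a in CLOCK for b in CLOCK)

# 12 acts as the identity element E
assert all(clock_add(12, x) == x for x in CLOCK)

# Every x has the inverse 12 - x (with 12 its own inverse)
assert all(clock_add(x, (12 - x) or 12) == 12 for x in CLOCK)

# 1 generates the whole group: 1, 1+1, 1+1+1, ... hits every clock number
powers, g = set(), 12
for _ in CLOCK:
    g = clock_add(g, 1)
    powers.add(g)
print(sorted(powers))  # [1, 2, ..., 12] -- so the group is cyclic with generator 1
```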
Biquasi-groups and Consciousness Shifts
To formulate a group theoretical model of shifts into higher levels of consciousness, we shall define the transition into the next higher level of consciousness to be a “biquasi-identity element” I
of a “biquasi-group” (these “biquasi” terms will soon be defined). Thus in our above meditation example, the transition from the rational to the vision-logic level of consciousness is what we define
as the biquasi-identity element I in what turns out to be a cyclic biquasi-group S generated by M mod x. We shall interpret the equation (M mod x)^6 = I to mean that continued practice of our
meditation over a certain time period will be a significant factor in enabling person x to move from the rational to the vision-logic level of consciousness. According to Integral theory, repeated
contact with altered states of consciousness, such as meditation, may help a person disidentify from their current level of consciousness, thereby enabling a person to take as object that which has
been a subject for them. Or, as Wilber put it, transformation involves disidentification with the current stage, identification with the next higher stage, and integration of aspects of previous
stages into the higher stage (c.f. [11], [12], [13]). Mathematically, we can think of our meditation example as resembling a finite cyclic group S consisting of 6 elements generated by M mod x; we say
“resembling” as opposed to “actual” due to the biquasi nature of our group, which we shall now describe.
For simple illustrative purposes, let's make the assumption that M mod x denotes 30 hours of meditation over the time period of one month. According to our group theoretical model (in its
preliminary and simplistic form), person x would make significant progress toward moving into the vision-logic level of consciousness if he/she were to diligently continue this meditation practice
for 180 hours over a 6 month time period, which is the meaning of the equation (M mod x)^6 = I. We have the equations M mod x + M mod x = (M mod x)^2 (meaning simply that 30 hours + 30 hours equals
60 hours of meditation and is twice as impactful as 30 hours of meditation on person x's potential consciousness shift, once again given our extreme mathematical simplification of assumptions), (M
mod x)^2 + (M mod x)^3 = (M mod x)^5, etc.; in general we can say (M mod x)^m + (M mod x)^n = (M mod x)^(m+n) for positive integers m and n. However, once person x reaches the vision-logic level of
consciousness, the impact of his/her meditation practice is no longer in the same way relevant to progressing to the next higher (illumined mind in this case) level of consciousness, as we now
acknowledge mathematically that whole other factors may come into the picture. We therefore define I*I = I and I*(M mod x)^n = (M_0 mod x)^n, where M_0 mod x means that person x is now meditating
while in a vision-logic level of consciousness. Note that by using this scheme, we would have I*(M mod x)^2 = I*I*(M mod x)^2 = (M_0 mod x)^2 and I*(M mod x)^62 = I*(M mod x)^2 = (M_0 mod x)^2. We
therefore make no further interpretation of the impact of the meditation experience on person x once the vision-logic level of consciousness is reached. Certainly another mathematical scheme could be
devised, but at this point we are merely illustrating the essential ideas in its simplest formulation.
At any rate, our equation I*(M mod x)^n = (M_0 mod x)^n resembles the requirements for I to be an identity element of a group, but there is a major problem in that M becomes M_0 (signifying that we
are now in the vision-logic level of consciousness). This change from M to M_0 does necessitate us calling our identity element something resembling but not quite exactly an identity element; we
shall call it a "biquasi-identity element;" the term "biquasi" refers to the fact that we are using a second set to refer to our shift into the vision-logic level of consciousness. Of course this
same general idea could be applied to shifts into any of the various levels of consciousness, in addition to vision-logic. In a similar manner, we don't quite have a bona-fide group, but if we use
this biquasi-identity element in the required properties of a group, we find that our group properties are essentially satisfied and we now have what we shall refer to as a “biquasi-group;” more
specifically we have a “cyclic biquasi-group” generated by M mod x (see the Appendix in [4] for a more formal mathematical definition of a biquasi-group, along with a few basic lemmas (results that
have been mathematically proven but are not as significant as theorems) that describe some of its properties).
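To make the composition rules above tangible, here is a toy sketch of the cyclic biquasi-group generated by M mod x. This is my own illustrative encoding of the rules stated in the text (not code from the cited paper), and it simplifies matters by treating any accumulated total of six or more "months" of practice as reaching the biquasi-identity I, after which further meditation is relabeled M_0 mod x.

```python
# Toy encoding: an element is ("M", n) for (M mod x)^n with n < 6,
# "I" for the biquasi-identity, or ("M0", n) for (M_0 mod x)^n.
def star(a, b):
    """Composition rule sketched in the text (simplified)."""
    if a == "I" and b == "I":
        return "I"                                   # I*I = I
    if a == "I" and isinstance(b, tuple) and b[0] == "M":
        return ("M0", b[1])                          # I*(M mod x)^n = (M_0 mod x)^n
    if isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] == "M":
        n = a[1] + b[1]
        return "I" if n >= 6 else ("M", n)           # (M mod x)^m composed with (M mod x)^n, with ^6 = I
    raise ValueError("composition not defined in this toy sketch")

m = ("M", 1)           # 30 hours of meditation in one month
total = m
for _ in range(5):     # six months of practice in all
    total = star(total, m)
print(total)                 # 'I' -- the shift from rational to vision-logic
print(star("I", ("M", 2)))   # ('M0', 2) -- meditation continues at the new level
```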
Biquasi-group Model Applied to The Four Quadrants
In the rest of my aforementioned paper I describe and compare various kinds of biquasi-groups representing various degrees of meditation and yoga practices that can occur across various streams, both
combined and separately, with their resulting effectiveness in regard to shifting from the rational into the vision-logic levels of consciousness (c.f. [4]). But for our present purposes, what is
most relevant is that the Mathematical Group Theory used in my paper is at the heart of pure mathematics, an UL quadrant mode of mathematical thinking that comprises the individual and creative mathematical realm. On the other hand, the application of this pure mathematical group theory to AQAL Integral theory represents the world of applied mathematics, an UR quadrant activity in the Integral model, focusing
upon the external behavioral equations derived from the intrinsic UL mathematical model. The application of this mathematical model could involve a variety of communities of meditators and yoga
practitioners engaged in a discipline of spiritual activity, thereby entering the LL cultural quadrant. Finally, although the Group Theory equations that I work out in my paper are relatively simple,
it is not difficult to see how one could significantly increase both the number and relationship of variables utilized to make the actual phenomenon being investigated more realistic, plus increase
the combinations of spiritual disciplines and various streams investigated, to the point where computer software programs would be needed to work effectively with the mathematical equations, thereby
engaging in a LR quadrant activity that utilizes technology available mainly within our institutions of higher education. Of course the particular consciousness shift from rational to vision-logic
that I have described in my paper is only one illustration, and the same Mathematical Group Theory model can be utilized to describe shifts across various streams and levels in the Integral model;
i.e. it applies to the entire Integral AQAL model.
June, 2006
1) Benjamin, Elliot (1992). “Number Theory, Developmental Mathematics, and The Discovery Approach.” New England Mathematics Association of Two Year Colleges: Annual Convention.
2) Benjamin, Elliot (1993). Numberama: Recreational number theory in the school system. Swanville, ME: Natural Dimension Publications. (Available by contacting the author at [email protected].)
3) Benjamin, Elliot (2000). “Number theory—An igniter for mathematical sparks.” Atomim Newsletter.
4) Benjamin, Elliot (2003). “A group theoretical mathematical model of shifts into higher levels of consciousness in Ken Wilber's integral theory.” www.integralscience.org
5) Benjamin, Elliot (2006). “On the 2-class field tower of some imaginary biquadratic number fields.” Ramanujan Journal.
6) Dehaene, Stanilas (1997). The number sense: How the mind creates mathematics. New York, NY: Oxford University Press.
7) Gardner, Howard (1983). Frames of mind: The theory of multiple intelligences. New York, NY: Basic Books.
8) King, Jerry P. (1992). The art of mathematics. New York, NY: Ballantine Books.
9) Lakoff, George & Nunez, Rafael E. (2000). Where mathematics comes from. New York, NY: Basic Books.
10) Piaget, Jean (1952). The child's conception of number. New York, NY: Norton.
11) Wilber, Ken (1995). Sex, ecology, spirituality. Boston, MA: Shambhala Publications.
12) Wilber, Ken (2000a). Integral psychology. Boston, MA: Shambhala Publications.
13) Wilber, Ken (2000b). A brief history of everything. Boston, MA: Shambhala Publications.
14) Wilber, Ken (n.d.). “Appendix B. An integral mathematics of primordial perspectives.” http://wilber.shambhala.com/html/books/kosmos/excerptC/appendix-B.cfm
15) Wilber, Ken (2006). Integral Spirituality. Boston, MA: Shambhala Publications. | {"url":"https://www.integralworld.net/benjamin2.html","timestamp":"2024-11-01T19:29:59Z","content_type":"text/html","content_length":"85633","record_id":"<urn:uuid:c4d9071d-b64a-47d2-a827-f6fbc244e40e>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00680.warc.gz"} |
How to apply statistical methods for factor analysis with ordinal and categorical data in assignments? | Hire Some To Take My Statistics Exam
How to apply statistical methods for factor analysis with ordinal and categorical data in assignments? A commonly used analysis method for the calculation of the number of ordinal and categorical
variables in the logistic regression models was You can find the text of your R code right here: Your code. You have a right to view the full code. I think you can obtain a copy To do this, you have
to list all of the possible options. There are very few ways to test probability data types like proportions. You can see this command is quite frequently used today. You can use R Studio
version 3.3 (caveat: not all R Studio packages are available anymore; you can, however, see some packages previously available in the R documentation). One thing that doesn't come naturally today is that these tests using ordinal data do not provide support for categorical or ordinal variable data types. So, as we move toward quantile regression methods we
will also discover more tests that use measures of random effects Most commonly applied methods are using weights functions. The main point that you will find below comes from how these methods work.
Weighted least squares (rMST), Random Forest and Discriminant Analysis, Weighted least-squares, and Weighted least-squares methods. We can get some insight why you would use such tasks. Figure 13.1
How to use weights functions directly For a weighted least-squares method, we used a traditional procedure: To test for significance but using ordinal and categorical data, we defined the following
three test. First Test To use weighted least-squares, we used the squared Euclidean distance between independent vectors. This is the most common way to get useful information. As we will see below,
choosing a low value against a high is very useful. Next Test To test a group of permutations to predict which one of them is the correct answer to the regression equation, we first applied the
following simple rule to this test: Then we had to perform R function validation on all observations to define significance. Finally, for the significance procedure, we used R function validation to
estimate significance using a confidence decision function. The last test. Finally, for the significance procedure, we used R function validation on data to verify that it is statistically
significant Again, for the tests that had a significant result, we used the confidence standard deviation rMST. (And in this example I show the last three tests until I add the second test.
The last three tests use the same data from the last three points of the R code.) Note You can perform the same exercise using many (not all) measures of ordinal and categorical data. You could
combine the results of multiple tests. Suppose you had many ordinal and categorical measurement data:How to apply statistical methods for factor analysis with ordinal and categorical data in
assignments? BackgroundWe are increasingly conscious of the need to use ordinal and categorical methods for analysis of data to explore patterns and patterns of interest or factors. The broad
applicability of this research area is undeniable. To fill this gap, it is important to review the evidence on distributional factors when it comes to ordinal and categorical
analyses and show why these new studies are important. Statistical methods of analysis are beyond standard ordinal and categorical data analysis. Given the above mentioned factors, however, a typical
practice of ordinal and categorical analyses is to estimate the dispersion of the distribution, rather than the actual time series for each construct. OverviewMulittle DistributionA measure of time
or time series is a measure of data concentration, which is a measure of average behavior. To measure a trait it is useful to perform a series of regression analyses. If the average values of the
patterns observed may be representative of the underlying population, then a more sophisticated kind of ordinal and/or categorical scale has to be used. One way to classify the patterns is to
classify statistically the data only by value for concentration, rather than within each level of concentration. Furthermore, one way to do this is to classify the continuous patterns by their means,
rather than the discrete levels. Categories have to be defined based on the size of the group and group-level characteristics with an overdispersion criterion. The data are classified using an index that reflects the size of the group: R-index. For the aggregate group-level data, the R-index would correspond to the sample size. Within each level of concentration individuals are
clustered based on their similarity to others within the same group. This could affect the ability of the analysis to establish more reliably the differences of group membership in each level. Groups
may have an accumulation of high values for concentration (E > 0.5) in a group rather than an accumulationHow to apply statistical methods for factor analysis with ordinal and categorical data in
assignments? The method Lattice A quantitative, short coding method is developed to class the data into columns labeled by the variables or classes.
A similar method is commonly used for ordinal data with categorical classes. The methods Lattice and ordinal Least Square Mean square are generally the most popular. My computer says
that I have an account that there are 7 models for the categorical variables I know of: 1) Categorical Variables 1 – Category 1 3) Categorical Variables 2 4) Categorical Variables 3 5) Categorical
Variables 4 5a) Category 2 6) Categorical Variables 5 6a) Category 4 7) Categorical Variables 5a) Category 5 8a) Category 6 Evaluation Models In the following I’ll summarize each model in its
elements. Evaluation Models I’ll begin with three main elements. 1) The Lattice model is composed by two 2 dimensional regression functions. If you take a categorical variable and then add a
continuous variable, that is, one year, the regression model will be given four variables. 2) The Ordinal Least Square Mean square (OMSML) is used to classify the learn this here now represented by
the variable in question. If you have a logistic model, that is, something you are given, you can just take a logit logarithmic sum as the distance between any specified models. 3) The Lattice and
Ordinal Least Square Mean Square Models describe the difference from a continuous variable to a categorical variable. The latter is named Lattice or Ordinal Least Square. 4) Table 1- | {"url":"https://hireforstatisticsexam.com/how-to-apply-statistical-methods-for-factor-analysis-with-ordinal-and-categorical-data-in-assignments","timestamp":"2024-11-07T02:45:51Z","content_type":"text/html","content_length":"170765","record_id":"<urn:uuid:18de204b-01ac-4749-ae4d-e3bc52879b03>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00502.warc.gz"} |
1) The circuit below comes from HW4, Question 2, assuming the same con
1) The circuit below comes from HW4, Question 2, assuming the same constraints but now adding capacitance values to allow estimation of frequency response. From HW4, the PMOS transistors within the diff-amp are designed to have gm = 1 mA/V at 60 μA. Assume that these transistors (M1, M2) have capacitances as follows (at 60 μA): Cgs = 20e-15 F; Cgd = 4e-15 F; Cdb = Csb = 15e-15 F. The drain resistors are designed at 12 kΩ.
Both Vinp and Vinn are driven using sinusoidal sources with 500 Ω source resistance on each
side. Assume the outputs are connected to a differential load capacitor of 40 fF. Draw out the
differential-mode half circuit with all of the capacitors and resistors marked, then estimate the
poles of this circuit in the differential mode using
a. Miller's approximation for both dominant and secondary poles
b. The exhaustive approach discussed in lecture for dominant and secondary poles
c. Open-circuit time-constant technique for the dominant pole
d. Open-circuit time-constant technique assuming that Cload is a short circuit. This is a trick that can be used to estimate the secondary pole when a dominant pole is present (i.e., for frequencies well above the dominant pole, the large capacitor in the circuit can be approximated as a short circuit and then you can use the open-circuit time-constant (OCτ) technique).
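For orientation only, here is a rough numerical sketch of part (a) using Miller's approximation on the differential-mode half circuit. It assumes the stated values after interpreting the garbled units (gm = 1 mA/V, RD = 12 kΩ, RS = 500 Ω, Cgs = 20 fF, Cgd = 4 fF, Cdb = 15 fF), treats the half circuit as a common-source-like stage, and simply lumps the 40 fF differential load at the half-circuit output; how the floating differential capacitor actually reflects into the half circuit and the exact node resistances must be checked against the real HW4 topology, so treat the numbers as illustrative, not as the answer.

```python
import math

# Assumed half-circuit values (interpreted from the problem statement; topology assumed)
gm   = 1e-3        # transconductance, A/V
RD   = 12e3        # drain resistor, ohm
RS   = 500.0       # per-side source resistance, ohm
Cgs  = 20e-15      # F
Cgd  = 4e-15       # F
Cdb  = 15e-15      # F
CL   = 40e-15      # load capacitance, lumped here at the output (see caveat above)

Av = gm * RD                           # low-frequency voltage-gain magnitude

# Miller's approximation: split Cgd between the input and output nodes
Cin  = Cgs + Cgd * (1 + Av)            # capacitance seen at the gate node
Cout = Cdb + CL + Cgd * (1 + 1 / Av)   # capacitance seen at the drain node

f_in  = 1 / (2 * math.pi * RS * Cin)   # pole associated with the input node
f_out = 1 / (2 * math.pi * RD * Cout)  # pole associated with the output node

print(f"|Av| ~ {Av:.1f}")
print(f"input-node pole  ~ {f_in / 1e9:.2f} GHz")
print(f"output-node pole ~ {f_out / 1e6:.0f} MHz")  # the lower of the two is the dominant pole
```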
Fig. 2
Fig: 1 | {"url":"https://tutorbin.com/questions-and-answers/1-the-circuit-below-comes-from-hw4-question-2-assuming-the-same-constraints-but-now-adding-capacitance-values-to-allow","timestamp":"2024-11-13T02:30:36Z","content_type":"text/html","content_length":"67078","record_id":"<urn:uuid:2b585380-47a5-41df-a590-527208684a38>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00400.warc.gz"} |
3.1. Preliminary solvent mixture selection for the bioactive compound’s extraction
The selection of ethanol as a co-solvent for recovering bioactive extracts from
P. emblica
leaves stems from its versatility and effectiveness in extracting bioactive compounds from medicinal plants. These advantages include cost-effectiveness, safety, efficiency, preservation of
bioactivity, scalability, eco-friendliness, and regulatory approval, among others.
Table 2
presents the relative ability of different solvent mixtures in the recovery of bioactive compounds from
P. emblica
leaves. As shown in the table, all ethanol-water mixtures including the 100% absolute ethanol and distilled water were able to recover bioactive antioxidants from
P. emblica
leaves. This shows that a good number of bioactive compounds in
P. emblica
leaves are polar in nature. The mean values of the extract yield (EY), Total phenolic content (TPC) and Antioxidant activity (AA) were in the range 5.87 – 12.07 %, 13.67 – 24.25 mg GAE/g d.w and 1.98
– 3.76 μM AAE/g d.w respectively for investigations conducted at OT = 40 °C, ET = 45 min and S:L = 1:20 g/mL. Close examination of Table 2 revealed that the bioactive extract obtained with absolute ethanol as solvent possessed the least EY, TPC and AA (EY = 5.87%, TPC = 13.67 mg GAE/g d.w, AA = 1.98 μM AAE/g d.w) while the bioactive
extract recovered with 50% ethanol-water mixture demonstrated highest bioactive EY, TPC and AA (EY = 12.07%, TPC = 24.25 mg GAE/g d.w, AA = 3.76 μM AAE/g d.w). Although, the 100% distilled water (EY
= 8.33%, TPC = 19.11 mg GAE/g d.w, AA = 2.81 μM AAE/g d.w) did not perform very badly, it was still far below the 80% ethanol-water (EY = 8.41%, TPC = 20.10 mg GAE/g d.w, AA = 2.99 μM AAE/g d.w), 60%
ethanol-water (EY = 11.36%, TPC = 23.76 mg GAE/g d.w, AA = 3.66 μM AAE/g d.w), 40% ethanol-water (EY = 11.11%, TPC = 22.98 mg GAE/g d.w, AA = 3.52 μM AAE/g d.w) and 20% ethanol-water (EY = 8.67%, TPC
= 19.73 mg GAE/g d.w, AA = 3.31 μM AAE/g d.w) solvent mixtures in the recovery of bioactive extract from
P. emblica
These results clearly demonstrated varied solubility of bioactive compounds, TPC and AA in different ethanol concentrations (40 - 60% ethanol solutions). This work is in close agreement with previous
works [
]. Huaman-Castilla
et al.,
] pointed out that solvent polarity and the ability of solvents to form hydrogen bonds with plant metabolites significantly impact the solvation capacity of solvents, which consequently determines the extractability of polyphenols in plant matrices. This present study has, however, shown that ethanol concentrations within the 40 - 60% range enhanced the extraction of phenolic compounds due to the
optimal solubility and stability of these compounds compared with the solvent extremes (absolute ethanol and 100% distilled water). Although the interesting range of ethanol-water solvent
concentration for high bioactive solubility is 40 – 60%, the 50% ethanol-water solvent mixture demonstrated the best phytochemical extractability from
P. emblica
leaves. However, the literature has documented various optimal ethanol-mixture concentrations for the recovery of bioactive extracts from different medicinal origins. For example, a study conducted
by Altıok
et al.
] on the extraction of bioactive compounds from olive oil demonstrated that a 70% ethanol solution optimized the extractability of total polyphenols, resulting in extracts with the highest
antioxidant capacity. In the same vein, Cheaib
et al.
] concluded that 50% ethanol solution was best for the recovery of bioactive compounds from apricot pomace. Regardless, the addition of ethanol to water as solvent mixture greatly improved the
polyphenol-rich antioxidant extraction from
P. emblica
leaves. This observation was attributed to the impact of ethanol on cell permeability, brought about by alterations in the phospholipid bilayer of the cell membranes, resulting in both chemical and
biophysical modifications to the cell membrane [
]. Therefore, the 50% ethanol-water solvent mixture was henceforth used for the selection of the best set of process parameters for the recovery of bioactive compounds from
P. emblica
leaves in the subsequent Section.
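The comparison above is easy to tabulate; the snippet below simply collects the mean values reported in this section for each solvent system and ranks them, confirming that the 50% ethanol-water mixture gives the highest EY, TPC and AA (the values are transcribed from the text; any rounding is mine).

```python
import pandas as pd

# Reported means at OT = 40 degC, ET = 45 min, S:L = 1:20 g/mL
screen = pd.DataFrame(
    [("100% ethanol",        5.87, 13.67, 1.98),
     ("80% ethanol-water",   8.41, 20.10, 2.99),
     ("60% ethanol-water",  11.36, 23.76, 3.66),
     ("50% ethanol-water",  12.07, 24.25, 3.76),
     ("40% ethanol-water",  11.11, 22.98, 3.52),
     ("20% ethanol-water",   8.67, 19.73, 3.31),
     ("distilled water",     8.33, 19.11, 2.81)],
    columns=["solvent", "EY_pct", "TPC_mgGAE_per_g", "AA_uM_AAE_per_g"],
)
print(screen.sort_values("TPC_mgGAE_per_g", ascending=False).head(3))
# 50% > 60% > 40% ethanol-water for TPC, mirroring the EY and AA rankings
```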
3.3. BBD-RSM modelling, model adequacies and statistical analysis
The BBD-RSM was employed to describe the relationship between the studied process parameters and the responses investigated. Hence, predictive mathematical models were developed for EY, TPC and AA as
function of OT, S:L and ET with the assistance of Design Expert software. The coded form of the BBD-RSM quadratic predictive equations that relate the TPC, EY and AA with the operating parameters of
OT (denoted as A in the equations), S:L (denoted as B) and ET (denoted as C) are presented in Eq. (9), Eq. (10) and Eq. (11) respectively.
$TPC = 39.16 - 0.66A - 1.18B + 22.92C - 0.90A^2 + 2.33B^2 + 4.15C^2 - 1.72AB - 1.45AC + 0.38BC$
$EY = 15.94 - 2.20A + 4.08B + 1.13C - 1.29A^2 + 0.047B^2 - 1.25C^2 - 1.03AB - 0.75AC + 2.38BC$
$AA = 3.70 + 0.055A + 7.25\times10^{-3}B - 0.055C - 0.040A^2 + 0.030B^2 - 9.85\times10^{-3}C^2 + 9.75\times10^{-3}AB + 0.025AC - 2.5\times10^{-4}BC$
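As a convenience for readers, the sketch below simply transcribes Eqs. (9)-(11) into code; A, B and C are the coded levels of OT, S:L and ET (typically -1, 0, +1 at the BBD design points, although the exact coding used by the authors is not restated here, so treat that as an assumption).

```python
def tpc(A, B, C):
    """Eq. (9): coded quadratic model for total phenolic content."""
    return (39.16 - 0.66*A - 1.18*B + 22.92*C - 0.90*A**2 + 2.33*B**2
            + 4.15*C**2 - 1.72*A*B - 1.45*A*C + 0.38*B*C)

def ey(A, B, C):
    """Eq. (10): coded quadratic model for extract yield."""
    return (15.94 - 2.20*A + 4.08*B + 1.13*C - 1.29*A**2 + 0.047*B**2
            - 1.25*C**2 - 1.03*A*B - 0.75*A*C + 2.38*B*C)

def aa(A, B, C):
    """Eq. (11): coded quadratic model for antioxidant activity."""
    return (3.70 + 0.055*A + 7.25e-3*B - 0.055*C - 0.040*A**2 + 0.030*B**2
            - 9.85e-3*C**2 + 9.75e-3*A*B + 0.025*A*C - 2.5e-4*B*C)

# At the design centre point (A = B = C = 0) the models return their intercepts:
print(tpc(0, 0, 0), ey(0, 0, 0), aa(0, 0, 0))   # 39.16 15.94 3.7
```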
Table 4
contains the ANOVA results for the developed predictive quadratic equations. The models' coefficients and parameters for assessing their adequacies are as well presented in the table. Table 4 shows that the F-values of the developed EY (32.58), TPC (4947.4) and AA (22.86) models are significant (p values < 0.05). These models also possessed non-significant (p values > 0.05) lack of fit, indicating they are well-developed and are all capable of predicting the observed experimental data with high accuracy. The R² values for the developed models are also appreciably high (EY = 0.9998, TPC = 0.9767 and AA = 0.9671) and hence indicate high effectiveness and capability of the models in describing the laboratory data. The Pred. R² values of all the models (EY = 0.7077, TPC = 0.9994 and AA = 0.8009) were in close agreement with their respective Adj. R² values (EY = 0.9467, TPC = 0.9996 and AA = 0.9248).
Table 4
further shows the contributions of each term in the equation to the overall predictability of the developed models. The negative (-) and positive (+) signs in the model equations indicate decreasing and increasing contributions respectively. Therefore, in the EY BBD-RSM predictive model, the linear terms of OT, S:L and ET are all significant (p < 0.05) and contributed negatively, positively and
positively respectively to the overall predictability of the model. The quadratic term of OT (OT X OT) and ET (ET X ET) are significant and they both caused a reduction in the EY model. The quadratic
effect of S:L resulted to an insignificant (p > 0.05) increase in EY model. The interactive effect of OT and S:L (OT X S:L) was insignificantly (p > 0.05) negative, however both OT X ET (AC) and S:L
X ET (BC) produced a positive contribution in EY model, although the OT X ET effect was not significant (p > 0.05). Regarding the TPC and AA BBD-RSM models, the linear effect of OT, S:L and ET were
positive, negative, negative and positive, positive, negative in the TPC and AA models respectively. The linear effects of OT, S:L and ET were significant (p < 0.05) in the TPC model, however, only
the linear effects of OT and ET contributed significantly in the AA model. The OT X OT, S:L X S:L and ET X ET significantly (p < 0.05) caused reduction, increment and increment respectively in the
developed TPC model. However, the ET X ET was the only insignificant (p > 0.05) quadratic effect in the AA BBD-RSM model. The OT X OT and S:L X S:L effects were significantly (p < 0.05) negative and
positive respectively in the AA model. All the interactive effects of OT X S:L, OT X ET and S:L X ET were significant in the TPC model and the respectively produced a negative, negative and positive
contributions in the model. Among all the interactive effects, only the OT X ET was significantly (p < 0.05) positive, other interactions such OT X S:L and S:L X ET were insignificant (p > 0.05)
negative in the AA BBD-RSM model.
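For readers who want to reproduce this kind of quadratic-model ANOVA on their own BBD data, a second-order response-surface model of the same form can be fitted in a few lines. The snippet below is a generic sketch: the design points follow the standard three-factor Box-Behnken layout, but the response values are synthetic (generated from Eq. (9) plus noise purely for illustration), not the authors' data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Box-Behnken design for three coded factors: 12 edge points + 5 centre replicates
design = [(-1,-1,0),(1,-1,0),(-1,1,0),(1,1,0),
          (-1,0,-1),(1,0,-1),(-1,0,1),(1,0,1),
          (0,-1,-1),(0,1,-1),(0,-1,1),(0,1,1),
          (0,0,0),(0,0,0),(0,0,0),(0,0,0),(0,0,0)]
df = pd.DataFrame(design, columns=["A", "B", "C"])

# Synthetic response, loosely shaped like Eq. (9) with added noise (illustration only)
rng = np.random.default_rng(0)
df["TPC"] = (39.16 - 0.66*df.A - 1.18*df.B + 22.92*df.C - 0.90*df.A**2
             + 2.33*df.B**2 + 4.15*df.C**2 - 1.72*df.A*df.B
             - 1.45*df.A*df.C + 0.38*df.B*df.C + rng.normal(0, 0.5, len(df)))

# Full second-order model: linear, squared and two-factor interaction terms
model = smf.ols("TPC ~ A + B + C + I(A**2) + I(B**2) + I(C**2) + A:B + A:C + B:C",
                data=df).fit()
print(model.params.round(2))                                 # coefficients close to Eq. (9)
print(round(model.rsquared, 4), round(model.rsquared_adj, 4))  # R^2 and adjusted R^2
```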
The prediction of the observed experimental data by the developed TPC, EY and AA BBD-RSM models are presented in
Figure 2
(a), (b) and (c) respectively. The parity graphs showed excellent prediction of the observed data since the observed experimental and model predicted data fell close to the diagonal line [
3.4. Data statistics and multi-gene genetic programming modelling
The properties and variability of both the input and output data were determined prior to MGGP modelling via descriptive statistical analysis. The input and output data were the heat-assisted
extraction operating parameters and response parameters respectively as presented in
Table 3
. Therefore, the input parameters were the OT (
C), S:L (g/mL) and ET (min) while the output response parameters were the EY (w/w %), TPC (mg GAE/g d.w) and AA (µM AAE/g). The relevant data characteristics that were assessed include data mean,
standard error, median, mode, and standard deviation. Others were sample variance, data kurtosis, skewness, range, minimum, maximum, and sum. Seventeen (17) data points of each process parameters
(OT, S:L and ET) and response parameters (EY, TPC and AA) (which totaled 102 data population) were used for the construction of the MGGP-based models.
Table 5
showed the summary of the descriptive statistics of the data used for the MGGP modelling.
The mean of the data set ranged from 3.6905882 to 41.7864705 with the minimum and maximum belonging to AA and TPC respectively. Also, the mean values of OT, S:L, ET and EY are 40 °C, 40 g/mL, 112.5 min and 14.7682352% respectively. Similarly, the standard error, median, mode, and standard deviation are in the range of 0.0153745 - 11.5761544, 3.69 - 112.5, 3.67 - 112.5,
0.0633907 - 47.7297077 and 0.0040183 - 47.7297077 respectively. The sample variance of both the input and output parameters was between 0.0040183 and 2278.1250. The sample variance of 0.0040183
belonged to AA and it indicated minimal variation while variance of 2278.1250 for ET indicated wide variation. The variance of 271.2990492, 200, and 50 for TPC, S:L and OT also implied wide data
variations. The data kurtosis, a statistical parameter that measures the peaked or flatness of a data distribution, was in the range of -0.4367377 - 3.1818297. The kurtosis of - 0.4367377, -
0.6261143, - 0.7428571, - 0.7428571, and -0.7428571 for EY, TPC, OT, S:L and ET respectively, indicated that the data were normally distributed (kurtosis value < +1), while a value of +3.1818297 for
AA implied that the sample population is peaked (kurtosis value > +1) [
]. Similarly, the data skewness (a measure of symmetry of data distribution) was in the range of -1.2675914 - 0.4519267 for all sample population. The skewness coefficients of zero (0) for OT, S:L
and ET indicated that the data distributions for these variables were not skewed while a coefficient of -1.2675914 for AA indicated a negatively skewed data distribution. Numerical values of other
statistical parameters such as the range, minimum, maximum and sum of the input and output data populations are also indicated in
Table 5
and are in the range of 0.26 - 135, 3.51 - 45, 3.77 - 180, and 62.74 - 1912.5 respectively.
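These summary statistics are straightforward to reproduce for any such design table. The sketch below is generic (the placeholder data frame must be replaced with the actual 17-run data set); note that scipy's default kurtosis is the excess (Fisher) definition, which gives 0 for a normal distribution, so the exact convention used by the authors should be checked before comparing numbers.

```python
import pandas as pd
from scipy.stats import skew, kurtosis

# Placeholder design/response table (replace with the actual 17-run data set)
df = pd.DataFrame({
    "OT":  [30, 50, 30, 50, 40, 40, 40],
    "S_L": [20, 20, 60, 60, 40, 40, 40],
    "ET":  [112.5, 112.5, 112.5, 112.5, 45, 180, 112.5],
    "AA":  [3.51, 3.77, 3.69, 3.66, 3.70, 3.72, 3.70],
})

print(df.describe())                    # count, mean, std, min, quartiles, max
print(df.var())                         # sample variance
print(df.apply(lambda c: skew(c)))      # skewness per column
print(df.apply(lambda c: kurtosis(c)))  # excess (Fisher) kurtosis per column
```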
The MGGP model structures were optimized for the prediction of TPC, EY and AA as a function of processes extraction variables of operating temperature (OT), solid to liquid ratio (S:L) and extraction
time (ET). The MGGP model structure optimization study was purposely carried out to achieve optimum model parameter settings that enable robust learning of the heat-assisted extraction process data
(both input and output data presented in
Table 3
) for finest prediction. Therefore, the population size (PS) and number of generation (NG), which were the two significant parameters that determine the model structural predictive effectiveness to a
very large extent [
], were each investigated in the range of 100 – 500. The reported ranges for the parameters investigation were determined based on several attempts of simulation experiments in the preliminary
studies. Other model settings are as displayed in
Table 1
Table 6
presents the summary of the results of the MGGP modeling optimization studies for the prediction of responses TPC, EY and AA. The R² values for the prediction of TPC, EY and AA were in the range of 0.9858 - 0.9998, 0.9157 - 0.9936 and 0.7595 - 0.9622, respectively.
The parameter settings combination for the optimum prediction of TPC, EY and AA are population size and number of generations of 500 and 250 (R² = 0.9998), 250 and 500 (R² = 0.9936), and 250 and 250 (R² = 0.9622), respectively. Therefore, these determined optimum parameter settings for the MGGP model prediction of TPC, EY and AA were henceforth used for the simulation.
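The MGGP runs themselves were presumably carried out in a dedicated multi-gene GP toolbox (for example GPTIPS in MATLAB, which is commonly used for MGGP, though the paper excerpt does not say). As a rough open-source analogue, the sketch below shows how population size and number of generations enter a symbolic-regression run with the Python gplearn library; gplearn evolves a single expression rather than multiple genes, and the data here are placeholders, so this is only an illustration of the tuned settings, not a reproduction of the study.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

# Placeholder inputs: 17 rows of (OT, S:L, ET) and a synthetic response for illustration
X = np.random.default_rng(1).uniform([30, 20, 45], [50, 60, 180], size=(17, 3))
y = 39.16 + 0.5 * X[:, 2] - 0.02 * X[:, 0] ** 2

est = SymbolicRegressor(
    population_size=500,        # PS, one of the tuned settings discussed above
    generations=250,            # NG, the other tuned setting
    function_set=("add", "sub", "mul", "div"),
    metric="rmse",              # fitness minimised, as in the paper
    parsimony_coefficient=0.001,
    random_state=0,
)
est.fit(X, y)
print(est._program)             # the evolved symbolic expression
```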
Figure 3
presents the graphs (in the form of log Fitness vs. number of generation) of the training process for the best MGGP model structure of TPC (PS = 500; NG = 250), EY (PS = 500; NG = 250) and AA (PS =
250; NG = 250).
It is clear from the figure that the best fitness for the training of the models occurred at approximately the 241st, 498th, and 245th generations for the TPC, EY and AA simulations respectively. The fitness function assesses the evolved expressions to determine the most optimal encoded expressions [
] and was obtained by minimizing the root mean square error (RMSE). The prediction errors (also referred to as the fitness) for the TPC, EY and AA models were found to decrease as a function of the number of
generation (NG) and were eventually stable at the optimum NG for each model. The decreases observed in prediction error as training progresses is an indication that the MGGP algorithm learnt from the
training data rather than memorizing it and hence no data over-fitting occurred in the course of algorithm training [
]. Also, the ability of all the models to regain permanently decrease in fitness indicated that respective MGGP was able to overcome local minimum and adequately settled in global minimum during
training process simulations [
]. The best prediction error for the TPC, EY and AA models were 0.0202, 0.0288, and 0.0011, respectively.
The set of possible solutions capable of predicting the process responses of TPC, EY and AA as a function of set of process variables of OT, S:L and ET are presented in
Figure 4
(a), (b) and (c) respectively.
Figure 4
showed that Pareto fronts (the circles with green coloration) existed for TPC, EY and AA responses.
The Pareto front represents solutions that outperform all other solutions in both model effectiveness and complexity (measured by the number of nodes in the genetic programming tree) simultaneously.
Figure 4
revealed that there are 13, 15 and 14 Pareto solutions for the TPC, EY and AA responses, respectively. However, the green circles with red markings in
Figure 4
a, b, and c represent the best Pareto front solutions for the TPC, EY and AA responses, respectively, since they possessed the minimum prediction error, which is an indication of high
effectiveness in process response prediction. The mathematical models (best Pareto front solutions) relating the TPC, EY and AA responses to the investigated process variables of OT, S:L and ET are
represented by Eq. (12), (13) and (14), respectively. All of the MGGP equations are non-linear in nature and each consisted of five genes and one bias.
$TPC = 7.048 \times 10^{-4} B^{2} - 8.811CA - 5.1\; 9.149A + B - 32.2 - 6.21 \times 10^{-3} B - 7.657 \times 10^{-2} A + 5.215 \times 10^{3} C\; 15.51C - 1.455 - 9.833 \times 10^{-3} A\; 9.027AC^{2} + 33.11AC + B + 4.509 \times 10^{-3} CA + 1.8B - C\; 9.149A + B - 14.52 - C^{2} + AC + 5.1A - B + 7.048 \times 10^{-3} ABC - 11.17C\; 8.811C - 3.876A - 12.55AC^{2} + 163.7$
$EY = 12.09C\,BC - B + CB - 9.6870 - 12.09C - 1.633 \times 10^{-4} A^{2} B + C - 9.6870 - 2.198 \times 10^{-2} + 1.616C\,A + 5B + C - 2.3823 - 1.099 \times 10^{-2} A^{2} + 1.616C^{2} A + 4B\; 1.099 \times 10^{-2} AB - 2.382 - 4.127A^{2}C\,C - 4B + A\,A + 5.5470C + AC - 3.5975\; 10^{5} - 6.532 \times 10^{-4} BC\,A + 4B + 5BC - 1.099 \times 10^{-2} BC\,C - A + 2BC + 11.22$
$AA = 2.085 \times 10^{-1} A + 205.6C - 2.815A^{2}C^{2} - 6.469 \times 10^{-2} A^{2}C^{2} + 4.448 \times 10^{-2} A^{3}C^{2} + 7.491 \times 10^{-2} A^{3}C^{3} - 1.022 \times 10^{-2} A^{4}C^{3} - 6.272AC + 2.942 \times 10^{-3} BC + 30.93AC^{2} - 2.942 \times 10^{-3} A^{2}C - 3.178BC^{2} - 3.178BC^{2} - 1.783 \times 10^{-3} A^{2} - 1679.0C^{2} - 3.178 \times 10^{-2} C^{3} - 3.386$
The significance (p < 0.05) of each gene in the developed MGGP-based mathematical models for the prediction of TPC, EY and AA values can be visualized in
Figure 5
(p-value vs. genes and bias). The structures of the equations, as well as their respective statistical model adequacy parameters are presented alongside in
Figure 5
Figure 5
showed the relative importance (based on the measure of probability values) of each gene and bias that made up the structure of the evolved MGGP equations. All the predictive MGGP-based models for
TPC, EY and AA prediction of bioactive antioxidants from
P. emblica
leaves possessed interestingly high R² (model R² for TPC, EY and AA of 0.9998, 0.9936 and 0.9622, respectively) and Adj R² (Adj R² for TPC, EY and AA of 0.9997, 0.9907 and 0.9451) values, which indicate that the models are a good fit for the observed data and that they explain a significant portion of the variability in the
dependent variables. Considering the evolved MGGP equation for the prediction of TPC (Eq. (12)), the probability values of all the genes and the bias were less than 0.0002, which implied that they are all
significant (p < 0.05). However, the most important genes are genes 2, 3, 4 and 5. The bias was also a significant model term for the MGGP-based predictive model for TPC, as indicated in the figure.
Similarly in
Figure 5
(b), all the genes and the bias were significant parts of the MGGP-based predictive model structure for EY prediction, since all the probability values of the evolved genes and bias were less than 0.00015.
However, of particularly high importance in the evolved EY MGGP-based predictive model are the bias, gene 1, gene 2 and gene 5. Analysis of
Figure 5
(c) also showed that the MGGP-based AA model has a significant structure (genes and bias) for the prediction of AA values. As indicated on the graph (
Figure 5
(c)), the probability values of all the genes and the bias in the structure are less than 0.000015; however, gene 1 is the most significant gene in the structure.
The ability of the MGGP-based models to predict the laboratory-observed TPC, EY and AA data as a function of the process variables of OT, S:L and ET is presented in
Figure 6
(a), (b) and (c), respectively.
It is obvious from
Figure 6
that the MGGP-based models were able to predict the observed laboratory data for TPC, EY and AA satisfactorily. The parity graphs showed that the observed and predicted data were clustered on
the diagonal line, which is an indication of the good predictive strength of the MGGP-based models [
]. The models’ RMSE values were minimal (RMSE for the TPC, EY and AA MGGP-developed models of 0.0202, 0.0288 and 0.0011) while all the models’ R² values were close to unity, which indicates excellent predictive performance of the models [
3.6. Numerical optimization and validation
The desirability algorithm present in the Design Expert software was used to optimize the process parameters for the recovery of bioactive antioxidants from
P. emblica
leaves. The optimization procedure is in accordance with the work of Adeyi
et al.
]. The goal of the optimization scheme was to determine a set of process parameters that maximized the TPC, EY and AA of the bioactive extract.
Table 5
presents the selected goal, weight, and importance for both the process parameters and responses. The table further shows the numerical search range for the global optimum parameters for the recovery of
bioactive antioxidants from
P. emblica
leaves.
Table 6
summarized all the thirteen (13) solutions presented based on the combined desirability values. The scale of the desirability is between 0 and 1, with desirability value of 1 adjudged the best [
]. The desirability values of all the presented solutions ranged from 0.766 to 0.854. These solutions were ranked in order of preference by the software, with the first solution on the
list being the best and therefore selected. The 13th
solution on the table was the least preferred; although the solution also possessed an appreciably high desirability value (0.766), the value was the least among the solutions provided by the algorithm.
Hence, the first solution on the list, with a desirability value of 0.854, was selected as the best optimum solution. This desirability value of 0.854 compares well with other desirability values of
selected optimum solutions in the literature. For instance, this desirability value is higher than the desirability value of the selected optimum solution in the work of Adeyi
et al.,
] during the HAE process optimization of bioactive extract recovery from
Carica papaya
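For readers unfamiliar with the desirability approach, the following sketch illustrates how individual desirabilities for "maximize" goals and a combined (geometric-mean) desirability of the Derringer type are typically computed. The response ranges reused here are the experimental data ranges quoted later in the text, while the goals, weights and importance values actually set in Design Expert are not reproduced, so the printed value is only indicative.

```python
import numpy as np

def desirability_max(y, y_min, y_max, weight=1.0):
    """Individual desirability for a 'maximize' goal: 0 at/below y_min, 1 at/above y_max."""
    if y <= y_min:
        return 0.0
    if y >= y_max:
        return 1.0
    return ((y - y_min) / (y_max - y_min)) ** weight

def combined_desirability(d_values, importances=None):
    """Overall desirability as the (weighted) geometric mean of individual desirabilities."""
    d = np.asarray(d_values, dtype=float)
    r = np.ones_like(d) if importances is None else np.asarray(importances, dtype=float)
    return float(np.prod(d ** r) ** (1.0 / r.sum()))

# Illustrative optimum responses scaled against the experimental data ranges.
d_tpc = desirability_max(67.116, 18.58, 69.35)
d_ey  = desirability_max(21.6565, 9.28, 22.14)
d_aa  = desirability_max(3.68583, 3.51, 3.77)
print(combined_desirability([d_tpc, d_ey, d_aa]))
```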
Figure 3
is the optimization ramp for the best suggested solution (first) with combined desirability value of 0.854. The figure shows that the optimization search was within the range of investigated process
parameters and observed laboratory response data. Therefore, the process parameters that simultaneously gave the optimum EY of 21.6565%, TPC of 67.116 mg GAE/g and AA of 3.68583 µM AAE/g were OT of
C, S:L of 1:60 g/mL and ET of 180 min.
The validation experiment was conducted in the laboratory to verify the selected predicted optimum. Therefore, 1 g of
P. emblica
leaves was mixed with 60 mL of 50% ethanol-water mixture in a beaker and heated to approximately 42
°C using a water bath for a duration of 180 min. After the completion of the experiment, the leaf fibers were separated from the extract through centrifugation, and the EY, TPC and AA were
quantified according to
Section 2.7
respectively. The results obtained for the validation experiment were EY = 22.31%, TPC = 69.612 mg GAE/g and AA = 3.72 µM AAE/g. The relative standard deviation (RSD) was computed to compare the
experimentally validated results with the predicted response results. It was found that the RSD values between the validated and predicted values of EY, TPC and AA were 2.67%, 7.45% and 4.68%, respectively. This
indicated that these values are similar, since the RSD between the validated and predicted response values is less than 10% [
]. This similarity in the validated and predicted results showed that the developed BBD-RSM models for EY, TPC and AA are effective, well-fitted, robust and capable of predicting the process of
bioactive extract recovery from
P. emblica
leaves.
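A minimal sketch of one common way to compute the relative standard deviation between a predicted and a validated value is given below. The exact RSD definition (and any replicate measurements) used by the authors is not stated, so these outputs will not necessarily reproduce the reported 2.67%, 7.45% and 4.68%.

```python
import statistics

def relative_std_dev(values):
    """Relative standard deviation (%): sample standard deviation divided by the mean."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Predicted optimum vs. laboratory-validated value (numbers from the text above).
print(relative_std_dev([21.6565, 22.31]))   # EY
print(relative_std_dev([67.116, 69.612]))   # TPC
print(relative_std_dev([3.68583, 3.72]))    # AA
```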
3.7. Phenolic fingerprints of P. emblica leaf extract
HPLC profiling of phenolic compounds in plant extracts is of paramount importance for the identification and characterization of these bioactive compounds. It provides valuable insights into the
chemical composition, quality, and therapeutic potential of plant extracts. Additionally, HPLC profiling can aid in the evaluation of the stability and degradation kinetics of phenolic compounds
under different conditions, ensuring the preservation of their bioactivity. Therefore, the HPLC profiling of phenolic compounds in
P. emblica
leaf extract was carried out to identify the phenolic compounds with potential therapeutic effects as a basis for future industrial production and techno-economic analysis of the
P. emblica
leaf extract. Hence to this end, eight (8) phenolic compounds with established bioactivities were used as standards for the HPLC profiling of the
P. emblica
leaf extract. These compounds were compared with the content of the extract using similarities in retention time (RT).
Figure 4
is the HPLC fingerprints of
P. emblica
leaf extract. As shown,
Figure 4
is characterized by different phenolic compounds and wide disparities in their corresponding areas. The figure revealed six (6) phenolic compounds that have comparable RT with the HPLC phenolic
standards used as the baseline of identification. The identified phenolic compounds in the
P. emblica
leaf extract with corresponding RT are betulinic acid (RT = 2.439 min), gallic acid (RT = 3.063 min), chlorogenic acid (RT = 3.541 min), caffeic acid (RT = 4.055 min), ellagic acid (RT = 5.825 min),
and ferulic acid (RT = 7.684 min).
The bioactivities of the identified phenolic compounds in the extract were interesting and pointing to the overall therapeutic potential of
P. emblica
leaf extract. For instance, betulinic acid exhibits a wide range of therapeutic properties and studies have shown that it induces apoptosis (programmed cell death) in cancer cells, making it a
promising candidate for cancer treatment [
]. Gallic acid possesses numerous health benefits, and recent investigations showed that it has the potential to inhibit tumor growth, reduce oxidative stress, and exert protective effects against
various diseases [
]. Chlorogenic acid has been studied for its potential in preventing skin tumorigenesis, modulating MAPK and NF-κB pathways, and ameliorating oxidative stress [
] while caffeic acid has been investigated for its potential in inhibiting atopic dermatitis-like skin inflammation and synergistic antioxidant activity when combined with other phenolic acids [
]. Both ellagic acid and ferulic acid have been shown to exhibit antioxidant, anti-inflammatory, anticancer, antimicrobial, and antimutagenic effects [
3.8. Reliability Assessment of BBD-RSM based Predictive Models
Model reliability assessment is the process of evaluating and determining the trustworthiness and accuracy of a model’s predictions or outputs. One of the various available techniques and
methodologies to gauge the model’s performance and identify its strengths and limitations is via Monte Carlo simulation. The main purpose of Monte Carlo simulation in the context of model reliability
assessment is to quantify uncertainty, validate model performance, and estimate the potential range of outcomes. Another key aspect of Monte Carlo simulation is its ability to conduct robust
sensitivity assessment where various inputs are systematically varied to determine their impact on the model’s outputs which helps in identifying which input parameters have the most significant
influence on the results and helps in understanding the robustness of the model. In the present investigation, the developed BBD-RSM based predictive models were used for this analysis due to their
superiority in predicting the heat-assisted extraction process outputs of TPC, EY and AA relative to the MGGP-based models, as explained in
Section 3.5
Figure 7
(a), (b) and (c) show the split views of the probability distributions, cumulative frequency and reverse cumulative frequency curves of the analyses conducted in the Oracle Crystal Ball software for the
prediction of the experimentally observed data (in
Table 3
) of TPC, EY and AA, respectively, as a function of the input variables of OT, S:L and ET, in order to ascertain the robustness of the constructed BBD-RSM based predictive models. The outcomes’ data
statistics and the best-fit model for the distribution were also incorporated in the presented split-view figures, positioned at the upper right and bottom, respectively.
Figure 7
shows that the BBD-RSM models for TPC, EY and AA are capable of predicting the respective outcomes as a function of OT, S:L and ET within the range of their respective experimental outcomes. The data
outcomes kurtosis and skewness (statistical measures that provide information about the shape and distribution of a dataset) for TPC, EY and AA are 2.77 and 0.2508; 3.64 and 0.5979; and 5.40 and
-1.14, respectively. The distribution curves for the TPC, EY and AA were excellently modeled by Weibull (Anderson-Darling coefficient = 308.1560), Lognormal (Anderson-Darling coefficient = 38.1254)
and Min Extreme (Anderson-Darling coefficient = 34.7595), respectively. As indicated in
Figure 7
, the percentage certainty of the developed BBD-RSM models in predicting the observed experimental data range of 18.58 – 69.35 mg GAE/g, 9.28 – 22.14% and 3.51 - 3.77 µM AAE/g (in
Table 3
) for TPC, EY and AA, is 99.985%, 97.569% and 98.661%, respectively. The high BBD-RSM models’ individual prediction certainty is an indication of their respective high reliability and robustness.
These values compare well with the prediction certainty of the
cracker drying effective moisture diffusivity D-Optimal-RSM predictive model (99.831%) [
], biodiesel yield CCD-RSM predictive model (73.509%) (Oke
et al.,
2022) and techno-economic MGGP predictive models (MGGP-CAnysP APR = 99.980% and MGGP-CAnysP UPC = 98.477%) [
]. The dynamic sensitivity charts, which identify input parameters that have the most significant influence on the predictions of the developed BBD-RSM models, are presented in
Figure 8
. Here, the contribution of process variables of OT, S:L and ET to the variance in the prediction of BBD-RSM models for TPC, EY and AA were assessed.
Figure 8
shows that process variables contributed differently to the perturbation in the developed BBD-RSM models predictions. In the dynamic sensitivity graphs, all the bars to the right hand side indicate
positive contributions (increase in response variable value with an increase in process variable value) while the bars to the left hand side signify negative contributions (increase in process
variable value with decrease in response variable value).
Also, the length (measured as a percentage) of a sensitivity bar determines its magnitude and relative importance. Hence, a long bar (to either side) is relatively more significant than a correspondingly
shorter bar, with 0% indicating no significant effect. A highly significant parameter should be better controlled (or more accurately measured) in order to improve the model’s predictability.
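For illustration, contribution-to-variance charts of this kind are commonly derived from rank (Spearman) correlations between each sampled input and the forecast, normalised so that the absolute contributions sum to 100% while keeping the sign of the correlation. The sketch below demonstrates that idea with a placeholder response surface; it is not Oracle Crystal Ball's internal implementation, and the sampling ranges and coefficients are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical uniform sampling of the three process variables (placeholder ranges).
OT = rng.uniform(30, 60, n)    # operating temperature
SL = rng.uniform(20, 60, n)    # solid-to-liquid ratio denominator (1:x)
ET = rng.uniform(60, 180, n)   # extraction time

# Placeholder response standing in for a fitted BBD-RSM model.
TPC = 0.2 * OT + 1.5 * SL + 0.05 * ET + rng.normal(0, 1, n)

rhos = np.array([spearmanr(x, TPC)[0] for x in (OT, SL, ET)])
contrib = 100 * np.sign(rhos) * rhos**2 / np.sum(rhos**2)
for name, c in zip(("OT", "S:L", "ET"), contrib):
    print(f"{name}: {c:+.1f}%")
```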
Therefore, analysis of
Figure 8
(a) showed that S:L was the process variable with the most significant importance to the variance in the TPC BBD-RSM model prediction. The ET and OT did not seem to profoundly influence the
predictability of the TPC BBD-RSM model. In the same vein,
Figure 8
(b) showed that the ET and S:L were the process variables with the highest and least significance, respectively, to the variance in the EY BBD-RSM model prediction. Also, the order of process variable
significance (
Figure 8
(c)) to the perturbation in the predictability of the AA BBD-RSM model is S:L > OT > ET. In numerical terms, however, S:L had a positive contribution (+99.7%), ET had a negative contribution (-0.28%), while OT
did not contribute significantly (+0.02%) to the perturbation in the predictability of the TPC BBD-RSM model. Likewise, ET contributed positively (+76.2%), OT contributed negatively (-19.8%), while S:L
contributed positively (+4%) to the variance in EY data prediction by the EY BBD-RSM model. The S:L, OT and ET contributed -52%, +46.7% and +1.3%, respectively, to the variance in AA value
predictability by the constructed AA BBD-RSM model. | {"url":"https://www.preprints.org/manuscript/202311.0613/v1","timestamp":"2024-11-14T07:16:32Z","content_type":"text/html","content_length":"910298","record_id":"<urn:uuid:6f8e24ae-1487-46d4-9c40-1c97823a88b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00293.warc.gz"} |
The Truth About the [Not So] Universal Approximation Theorem - Life Is Computation
The Truth About the [Not So] Universal Approximation Theorem
Computation is usually conceptualized in terms of input/output functions. For instance, to compute the square root of a number is to somehow solve the function that takes a number as an input and
outputs its square root. It is commonly stated that feed-forward neural networks are “universal approximators” meaning that, in theory, any function can be approximated by a feed-forward neural
network. Here are some examples of this idea being articulated:
“One of the most striking facts about [feed-forward] neural networks is that they can compute any function at all.”
– Neural Networks and Deep Learning, Ch. 4, Michael Nielsen
“In summary, a feed-forward network with a single layer is sufficient to represent any function [to an arbitrary degree of accuracy],…”
– Deep Learning (Section 6.4.1 Universal Approximation Properties and Depth), Ian Goodfellow
“…but can we solve anything? Can we stave off another neural winter coming from there being certain functions that we cannot approximate? Actually, yes. The problem of not being able to approximate
some function is not going to come back.”
–Course on Deep Learning, Universal Approximation Theorem, Konrad Kording
More examples can be found here and here.
But it is not entirely accurate to say that feed-forward nets can compute any function. The universal approximation theorem places certain conditions on the types of functions it says are
approximable. The target function needs to be continuous and defined over a compact subspace. The theorem does not cover functions that are non-continuous or defined over an open interval or over an
infinitely-wide domain.
Now it might sound like I am being pedantic or taking statements out of context. In almost all these sources I linked above, they explicitly mention these conditions for the target function. However,
implicit in their words is the assumption that these kinds of functions (continuous functions defined over a compact domain) are general enough to include any function we are practically interested
in and can effectively compute. Perhaps the thinking is that non-continuous functions can be really strange and aren’t physical or practically useful. And the closed interval condition seems
reasonable too. We don’t have infinite resources. So perhaps it makes sense to set aside functions that are defined over an infinitely long interval, or to select a bounded section of such functions
to approximate.
Under this view, the problem is no longer to find a system that is expressive enough for general purpose computation, but rather to find the right parameters for a system we already have. We know a
feed-forward network with the right parameters can solve any problem but how do we find one that does? In other words, the interesting question becomes how to learn. Given that there exists some
network that does what you want, how do you find or build such a network? (This is why deep learning is significant. It provides an answer to this question).
Here, we need to take a step back and reconsider our implicit assumption. Is it really true that the kind of functions we are interested in and are able to compute are always continuous and defined
over a compact domain? In reality, what kind of things can we compute with finite means? It turns out this question is a very old one. In the late 1930s a mathematician wrote a paper that began by
asking: what kind of numbers/functions are – in Alan Turing’s words – “calculable by finite means”? This led to the conceptualization of a class of automatic machines that we today call Turing
machines. (Ironically, many people today believe that Turing’s machines are infinite machines. The mistake here is to think Turing machines require an infinitely long memory tape. They don’t. They
require an arbitrarily long memory tape. But this is too much of a diversion from our main discussion. You can read more about why Turing machines use finite means here).
While the universal approximation theorem applies to functions with closed bounded domains, the set of functions deemed to be computable [by finite means] includes functions defined over unbounded
domains. For instance, the problem of integer factorization is defined such that there is no limit to the size of the number fed as an input. Such a problem is solvable using Python or C. There exists
a computer program which – given sufficient finite resources (time, space, energy) – can factorize an integer of arbitrary size. Similarly, finite-state machines, which are weaker than Turing
machines and programming languages but stronger than look-up tables, can solve problems with unbounded input size. There exists a finite-state machine which – given sufficient finite resources (time,
energy) – can compute the remainder of any arbitrarily large number when divided by 7. Look-up tables on the other hand can only deal with functions defined over bounded domains.
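To make the finite-state-machine example concrete, here is a small sketch of a machine that reads the decimal digits of an arbitrarily large number one at a time and tracks only the remainder modulo 7. The seven possible remainders are the machine's states, so the memory it needs stays fixed no matter how long the input is.

```python
def remainder_mod_7(digits: str) -> int:
    """A 7-state machine: the state after reading each digit is the remainder so far."""
    state = 0
    for d in digits:                 # input length is unbounded; the state space is not
        state = (state * 10 + int(d)) % 7
    return state

print(remainder_mod_7("123456789012345678901234567890"))  # works for any input length
```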
But am I comparing apples to oranges? The types of functions studied in theory of computation appear to be string to string functions or integer to integer functions. The types of functions we
approximate with feed-forward neural nets, on the other hand, are ℝ→ℝ functions that operate with real numbers. Is it even possible to compare computation power across these very different
computation systems? Yes! Fortunately, people have come up with a rigorous framework for doing exactly that: comparing apples to oranges. (See https://doi.org/10.1093/jigpal/jzl003. The gist of it is
that you can use a mapping between the input/output domains of one system to the other, e.g. from real numbers to strings, and that mapping has to be the same for both input and output domains. That
allows you to compare computation power across different systems).
The idea behind the universal approximation theorem can be broken down into two parts (as nicely explained here and here):
1. Look-up tables can be used to approximate any compact continuous function to an arbitrary degree of accuracy.
2. Feed-forward neural nets can be used to approximate any look-up table to an arbitrary degree of accuracy.
Therefore the level of computation power guaranteed by the universal approximation theorem is the same as that of look-up tables. It sounds much less impressive when you put it that way.
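Here is a small sketch of step 1: approximating a continuous function on a compact interval with a look-up table (a piecewise-constant function). The target function and grid size are arbitrary choices for illustration; the approximation error shrinks as the table grows, which is exactly the bounded-domain, arbitrary-accuracy guarantee the theorem provides.

```python
import math

def build_table(f, lo, hi, n):
    """Tabulate f at n+1 evenly spaced points on the compact interval [lo, hi]."""
    step = (hi - lo) / n
    return [f(lo + i * step) for i in range(n + 1)], lo, step

def lookup(table, x):
    """Piecewise-constant approximation: return the value at the nearest tabulated point."""
    values, lo, step = table
    i = round((x - lo) / step)
    i = max(0, min(len(values) - 1, i))
    return values[i]

table = build_table(math.sin, 0.0, 2 * math.pi, 1000)
print(lookup(table, 1.2345), math.sin(1.2345))  # close; closer still with a larger table
```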
To clarify, this does not mean that feed-forward neural nets are proven to be as weak as look-up tables. It only means that the universal approximation theorem does not guarantee anything beyond the
power of look-up tables. Someone might come up with some clever mapping between strings and real numbers that shows that they are more powerful than we think. (I highly doubt it, but I may be wrong!)
I also haven’t said anything about recurrent neural networks here; the universal approximation theorem only talks about feed-forward neural nets with no memory.
The kinds of functions we are interested in for computation include things like integer factorization which work with unbounded domains. That is something that feed-forward neural nets – as far as we
know – cannot deal with. Even if you think Turing machines are unrealistic abstractions (if you think this way I suggest you read this), I doubt you would deny that finite-state machines are
realistic. Well, finite state machines are more powerful than look-up tables precisely because they can deal with unbounded input size (or an infinitely large input domain). It would be a strange
disregard of computation theory to say the only realistic computing devices are look-up tables, or to say that look-up tables are “universal”.
In the fable of the emperor’s new clothes, after the child shouts out that the emperor is naked he continues walking through the town even more proudly than before. Exposing the king was not enough
to overthrow him. But it may have swayed some of the townsfolk from exalting him. | {"url":"https://lifeiscomputation.com/the-truth-about-the-not-so-universal-approximation-theorem/","timestamp":"2024-11-07T16:10:25Z","content_type":"text/html","content_length":"65680","record_id":"<urn:uuid:18edef43-0ccb-4c91-8404-91bdc7a49288>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00069.warc.gz"} |
Modeling Urban Flood Inundation and Recession Impacted by Manholes
Department of Civil and Environmental Engineering, Washington State University, Richland, WA 99354, USA
Author to whom correspondence should be addressed.
Submission received: 16 March 2020 / Revised: 10 April 2020 / Accepted: 14 April 2020 / Published: 18 April 2020
Urban flooding, caused by unusually intense rainfall and failure of storm water drainage, has become more frequent and severe in many cities around the world. Most of the earlier studies focused on
overland flooding caused by intense rainfall, with little attention given to floods caused by failures of the drainage system. However, the drainage system contributions to flood vulnerability have
increased over time as they aged and became inadequate to handle the design floods. Adaption of the drainages for such vulnerability requires a quantitative assessment of their contribution to flood
levels and spatial extent during and after flooding events. Here, we couple the one-dimensional Storm Water Management Model (SWMM) to a new flood inundation and recession model (namely FIRM) to
characterize the spatial extent and depth of manhole flooding and recession. The manhole overflow from the SWMM model and a fine-resolution elevation map are applied as inputs in FIRM to delineate
the spatial extent and depth of flooding during and aftermath of a storm event. The model is tested for two manhole flooding events in the City of Edmonds in Washington, USA. Our two case studies
show reasonable match between the observed and modeled flood spatial extents and highlight the importance of considering manholes in urban flood simulations.
1. Introduction
Flooding is one of the most frequent weather-related natural disasters and affects many people around the world every year [
]. Major floods often cause significant impacts to communities and economies [
]. For example, in the United States alone, flood damages cost
260 billion (USD) per year from 1980 to 2013 [
]. The National Flood Insurance Program (NFIP) paid on average
USD 2.9 billion a year between 2000 and 2018 [
]. Similarly, flooding caused more than 700 fatalities and at least €25 billion economic losses in Europe between 1998 and 2004 [
]. Global warming is expected to lead to more frequent extreme precipitation events, increasing flood hazards in many cities around the world [
Despite the severe damages that flooding can induce, floods are an important part of life in various regions of the world where people have been adapting for centuries. They support riparian
ecosystems dependent on flood inundated zones [
], and they are the principal source of groundwater recharge in many arid and semi-arid settings [
]. The negative impacts of floods, however, are considerable, and are mostly associated with the unexpected magnitudes and frequencies of floods, which are connected to climate change, rapid
expansion of urbanized areas, and inadequate and aging urban drainage systems [
Urban flooding and associated damages to properties account for 73% of the
USD 107.8 billion total damages caused by floods from 1960 to 2016 in the United States [
]. Thus, accurate flood monitoring and estimation are essential to reduce flood impacts and vulnerabilities while supporting urban planning and ecosystems.
The spatial and temporal characteristics of floods in urban areas are complex due to the widespread change to the land uses [
], which introduces micro-urban features such as buildings, roads and drainage networks [
]. The specific urban infrastructure that affects flooding from a storm event includes the type and geometry of buildings, garage ramps, light wells, pillars, and yards at or just beneath the ground
surface [
]. Overall, it is an established concept that an increase in impervious areas and connected conveyance systems in urban areas increase peak discharges and volumes [
]. However, flood mitigation and management requires detailed information about the spatial extents of floods, their water levels (i.e., flood depth), and flow velocities [
]. Such information is often impractical to measure directly. Consequently, empirical and hydraulic models are widely used to estimate these parameters [
] and assess associated flooding risk [
Although hydrological modeling can simulate both surface and subsurface processes adequately at a watershed level, most of them are unable to simulate urban flooding accurately [
]. This is partly due to the difficulty of defining model boundary conditions and the complex nature of flood propagation in urban areas. In addition, most flood inundation models lack the potential
contribution of manholes overflow to the flooding. The majority of these models are commercial (e.g., MIKE FLOOD, XPSWMM, and FLO2D), and are not accessible for most users. They are complex, require
a large number of datasets, and computational resources, which make them ineffective for most applications [
]. Thus, simplified flood inundation modeling techniques are commonly used to determine the spatial extent of flooding in urban areas [
]. These models only require a digital elevation map and mathematical representation of floodwater propagation in a given area [
]. These models were used to identify the potential flood-prone regions during a given storm event, but not the aftermath of the flood event. Furthermore, the simplified models often do not simulate
the watershed and the drainage system. Consequently, coupling hydrologic models, one-dimensional (1D) hydrodynamic models, and simplified flood inundation models are gaining attention as an
alternative approach to simulate flood inundations in urban areas [
Despite well-documented effects of manholes on urban flooding [
], there exists relatively limited research [
] that directly incorporate manhole overflow in simulations of urban flooding. Chen et al. [
] used coupled surface and sewer flow modeling and found approximately 2 m surge flow from manholes. Leandro and Martins [
] used a two-dimensional (2D) flood inundation model to simulate the bi-directional flow interaction between sewer and overland flow, estimating the volume of sewer surge from manholes ranging from
15,992–18,404 m³
and a maximum possible depth of 0.8 m. Son et al. [
] coupled a one-dimensional storm water management model with a 2D overland flow model, with a maximum manhole overland flow ranging from 2–5 m³/s at different manhole locations, and estimated a 0.9 m flood depth and 2.5 m/s flood velocity. Jang et al. [
] used a coupled one-dimensional sewer flow with the two-dimensional overland flow and estimated manhole overflow depth up to 2 m. Seyoum et al. [
] coupled 1D sewer and 2D flood inundation models to simulate urban flooding and showed combined flood depth variations from 0.3 to 0.8 m in their study area. Manhole overflow is a
critical issue in urban areas, yet its contribution to flooding is not well understood. Most hydrodynamic models assume that excess water will pond around the manhole and return back or will be
lost from the system after the flood recedes [
]. Major cities have aging infrastructure and drainage systems designed and built more than a decade ago [
]. These systems are increasingly underperforming due to their design assumption of stationary storms and flood events [
]. This assumption often causes inadequacy to handle the rising flood risk caused by increased storms and impervious layers.
The main objective of the study is to develop and test a new flood inundation and recession method (FIRM) that can readily be used to simulate excess floodwaters generated from manholes. The United
States Environmental Protection Agency (EPA)-Storm Water Management Model (SWMM) is commonly used to simulate the complex hydrological and hydraulic processes in urban areas, but it does not have the
configuration to simulate flood inundation and recession from manholes overflows. Our method uses manhole overland flow volumes and depths from SWMM output and digital elevation data of the study
area to simulate flood inundation, recession, and depth. The flood inundation model uses the flat-water assumption to distribute the overflow. The recession model uses the location of manhole and
surrounding topographic variation to determine whether the inundated region is going to drain to the manhole or pond in localized regions. The model was tested using a synthetic case study and a
flooding event in Edmonds, an urban region in Washington State. Since the inputs (elevation and overflow) and outputs (flood area and depth) are known, the synthetic case study was used as a
proof-of-concept to validate the model accuracy before applying it to a real-world problem. It allowed us to test the model performance using different scenarios under variable hypothetical case
studies. For our Edmonds case study, previous study reports, social media, and news reports were used to delineate the flood boundary (i.e., used as observational data). The flooded area estimated by
FIRM was compared against the reconstructed flood area visually and using statistical measures to evaluate our model’s ability to identify areas that were flooded versus dry during the actual flood event.
2. Methods
This section details (a) the hydrodynamic model “SWMM” applied to Edmond case study (
Section 2.1
), (b) the model domain, calibration, and validation (
Section 2.2
), (c) the simulation of manholes’ overflow and recession using FIRM (
Section 2.3
), and (d) the evaluation of the FIRM performance using statistical measures (
Section 2.4
In addition to the synthetic case study used for proof-of-concept, the FIRM was applied to simulate manhole flooding in the city of Edmonds. The SWMM model domain was discretized into detailed
sub-catchments to incorporate key hydrologic and hydraulic components. Three years of sub-daily rainfall and lake level data were used to develop and calibrate the model using the automated
differential evolution optimization methods [
]. The SWMM model was validated against a one-year simulation. Following the SWMM simulation, we simulated the spatial extent and depth of flood inundation and recession from flooded manholes. A
high-resolution digital surface model (DSM) was used due to its detailed information on surface features, including infrastructure. Unlike the digital elevation model (DEM), the DSM contains surface
elevation and surface features [
2.1. Hydrodynamic Modeling Using SWMM
SWMM is one of the most widely used hydrologic and hydraulic models in urban settings [
]. It is capable of simulating event-based or continuous rainfall-runoff processes that are useful for both water quality and quantity analyses in urban areas [
SWMM uses spatially distributed and temporally discrete processes to simulate the hydrological and hydraulic state variables [
]. As the simulation progresses, the state variables will be updated and stored as follow.
$X_t = f(X_{t-1}, I_t, P), \qquad Y_t = g(X_t, I_t, P),$
where f and g represent the functions that calculate the state and output variables, respectively. X_t represents state variables (such as flow rate and depth in a drainage network link), Y_t represents output variables (such as runoff flow rate at each sub-catchment and outlet), P represents the constant parameters, and I_t represents input variables (such as rainfall and temperature) at a given time.
The hydraulic simulation of SWMM involves water and contaminants transport through the conveyance portion of the drainage network. External flow sources entered the drainage network using inflow
nodes and transported through pipes and storage components and finally exit at outflow nodes [
]. The flow equation through links can be solved using the dynamic and kinematic wave routing equation. The dynamic wave analysis uses the Saint-Venant flow equation, whereas the kinematic wave
method uses the simplified form of the momentum equation to estimate the flow condition though the conveyance system [
]. The dynamic wave analysis has advantages in simulating gradually varied flow conditions, such as surge, in the drainage system. This is because the dynamic wave method retains the
pressure and friction force terms of the momentum equation. We used the dynamic wave routing method to simulate the gradually varied flow in the urban drainage system.
2.2. Manhole Overland Flow Inundation and Recession Modeling
SWMM treats overflow from manholes either as surface ponding that is stored for a certain period and returned to the drainage, or as water lost from the system [
]. However, these approaches do not allow for direct estimation of the associated flood spatial and temporal extents. Besides, not all of the overland flow generated from a node necessarily
returns to the drainage, as some will recede, and some might be isolated from the manholes and ponded in depressions. In this study, we developed a simplified grid-based flood module to propagate
and recess overflow from and to a manhole spatially. The module requires a gridded surface elevation map, locations of manholes, and total overflow volume and depth at the manholes. The flood
inundation computation assumes that (i) water flows from a higher elevation to lower elevation because of gravity, (ii) water spreads spatially by maintaining its level surface (‘flat’ water
assumption), and (iii) flood fills first the nearest and connected cell with the lowest elevation. The module estimates both the flood depth and spatial extent by propagating the excess pressurized
water from the manholes to the surrounding areas according to the topographic variations. A high-resolution (1 m × 1 m) digital surface terrain from Light Detection and Ranging (LiDAR) was used to
capture the flooding extents accurately.
The flood recession module uses the location of manholes, the areal extent of the flood, as well as topography of the region to determine the aftermath of the flooded area. The flood recession method
assumes that all flooded cells drain back to the manholes if there exists a flow path connecting them to the manholes; otherwise, it will remain ponded. Some of the flooded water in the local
depressions can be disconnected from the manhole and will not fully be drained. Otherwise, the ponded locations can be distant from the manholes due to the topographic barrier.
2.2.1. Manhole Overland Flow Inundation Modeling
Our simulation of flood inundation is based on the elevation variations in neighboring cells. Starting from the cells containing the flooded manhole, floodwaters propagate to neighboring cells if the
cell elevation is lower-than and connected-to a flooded cell. We used the D8 neighboring algorithm, which evaluates the elevations of the eight adjacent neighboring cells to each flooded cell, to
determine the preferred flow direction (
Figure 1
). The flooded cells will maintain level flood surface and expand spatially as the lowest elevations are filled with available excess water. For multiple manholes overland flow, iterations are used
to distribute all the overflows by maintaining the level flooded surface.
Before the flooding starts, the area is assumed dry, or the flood depths at each grid cell are known. Once the manhole overflow starts, the floodwater inundates neighboring and connected grid cells
that have lower elevations (
Figure 1
). This inundation process will stop when the excess overland flow is completely allocated, and no water left to inundate the next dry cell. The flood inundation calculation is performed based on
three main cases. In the first case, if the manhole cell elevation is smaller than that of the neighboring cells, the overflow will accumulate in the manhole cell until it reaches the minimum
elevation of the neighboring grid cell. Once the overflow surface elevation exceeds the minimum elevation of a neighboring cell, the water will propagate spatially as long as there is enough
overflow. In the second case, if the elevation of the manhole is higher than the elevations of neighbor cells, the overflow will be allocated automatically to the neighboring cells regardless of
their slopes but maintaining level flooded surface. In the third case, if the elevation of the manhole is the same as the elevation of any neighboring cell, the excess water will be distributed
equally among cells with the same elevations.
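A compact sketch of the flat-water filling logic described above is given below: a priority queue always floods the lowest reachable neighboring cell (D8 neighborhood) until the remaining overflow volume can no longer raise the level surface to the next cell. The grid values, manhole location and unit cell area are illustrative assumptions, and this is not the FIRM code itself.

```python
import heapq
import numpy as np

def flood_fill(dsm, manhole, volume, cell_area=1.0):
    """Distribute a manhole overflow volume over a DSM, keeping a level flood surface."""
    nrows, ncols = dsm.shape
    visited = np.zeros_like(dsm, dtype=bool)
    heap = [(dsm[manhole], manhole)]
    visited[manhole] = True
    flooded = []                      # cells accepted so far, reached in elevation order
    level = dsm[manhole]
    while heap and volume > 0:
        z, (r, c) = heapq.heappop(heap)
        level = max(level, z)         # the water surface must reach the newly popped cell
        needed = sum(max(level - dsm[f], 0.0) for f in flooded) * cell_area
        if needed > volume:
            break                     # not enough water to raise the surface this far
        flooded.append((r, c))
        for dr in (-1, 0, 1):         # D8 neighbours of the accepted cell
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < nrows and 0 <= cc < ncols and not visited[rr, cc]:
                    visited[rr, cc] = True
                    heapq.heappush(heap, (dsm[rr, cc], (rr, cc)))
    # Spread the full volume evenly (flat surface) over the accepted cells via bisection.
    ground = np.array([dsm[f] for f in flooded])
    lo, hi = ground.min(), ground.max() + volume / cell_area
    for _ in range(60):
        h = 0.5 * (lo + hi)
        if np.maximum(h - ground, 0).sum() * cell_area > volume:
            hi = h
        else:
            lo = h
    depth = np.zeros_like(dsm)
    for (r, c), g in zip(flooded, ground):
        depth[r, c] = max(lo - g, 0.0)
    return depth

dsm = np.array([[3., 3., 3., 3.],
                [3., 1., 2., 3.],
                [3., 2., 2., 3.],
                [3., 3., 3., 3.]])
print(flood_fill(dsm, (1, 1), volume=2.0))
```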
Figure 1
Figure 2
show a schematic representation of flood inundations caused by overflow from one manhole and two manholes respectively. When the flood surface (surface elevation plus inundation depth) exceeds the
elevation of the surrounding cells, the water will start to flow to the cells with the lowest elevations. As the overland flow increases, the neighboring cells will be gradually flooded while
maintaining level flood surface. For example, when the overland volume is 49 units (
Figure 1
d), the extent of the flood coverage is first determined, based on the amount of the overland flow and elevation, and then the floodwater is distributed iteratively among the grids within the flooded
area to maintain level flood surface. This iterative process considers the available flood volume, and recursively fills the next small neighboring cells.
Not explicitly considering the slope and land cover is the main limitation of the presented flood inundation approach. As with other grid-based hydrological and hydraulic modeling, the DSM
resolution poses a high level of uncertainty in the model representation. Thus, the use of fine-resolution DSM and surface maps is critical for estimating the floods from manholes accurately [
]. For this study, we used a one-meter resolution Light Detection and Ranging (LiDAR) DSM obtained from the Washington State Department of Natural Resources. The model and the test cases do not
include manhole inundation caused by a previous storm and overland flow from the drainage. Thus, we assumed dry terrain prior to the overflow from the manholes. However, in most cases, the manholes and their surrounding areas might already be flooded before the overflow. One simple modification is to adjust the terrain elevations for the inundation depth and then
simulate the manhole overflow on top. Such an approach may work if the inundation surrounding the manholes that resulted from the previous storm were stationary, with known flood depths.
However, in most cases, the overland and overflow inundations from the drainage area and manholes, respectively, happen simultaneously, requiring a coupled simulation of both inundations.
2.2.2. Recession Modeling Associated with Manholes
Similar to the overland flow inundation, the flood recession to a given manhole considers the elevations of the surrounding grid cells and their connectivity to the manhole. For any given cell, the
flooded water above the elevations of the adjacent grid cells will drain to the manhole if a flow path exists. Otherwise, it will remain ponded in the depressions. For example, when a manhole
elevation is higher than the neighboring cells, only partial recession will occur. Starting from the manhole cell and expanding outward using the D8 algorithm, the recession model identifies flooded
cells and drains them completely if their elevations are above one of the eight neighboring cells; otherwise, it drains only the part of the flooded water above the minimum elevation of the eight neighboring cells. As
the search expands spatially over the eight adjacent neighboring cells, the connectivity of the target cell to the manhole is traced using a flow path algorithm. For each flooded cell, the flow
path algorithm detects the presence of any possible flow path to the manhole.
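A complementary sketch of the recession logic is shown below: a minimax (priority-flood style) search from the manhole computes, for every cell, the lowest crest elevation that water must pass to reach the manhole, so that water above this barrier drains and water trapped behind local topographic barriers stays ponded. Again, the grid and depths are illustrative, and the code is not the FIRM implementation.

```python
import heapq
import numpy as np

def recede_to_manhole(dsm, depth_before, manhole):
    """Drain flooded cells toward the manhole; water behind topographic barriers ponds."""
    nrows, ncols = dsm.shape
    surface_before = dsm + depth_before
    # barrier = lowest crest elevation a water parcel must cross to reach the manhole
    barrier = np.full_like(dsm, np.inf)
    barrier[manhole] = dsm[manhole]
    heap = [(dsm[manhole], manhole)]
    while heap:
        b, (r, c) = heapq.heappop(heap)
        if b > barrier[r, c]:
            continue                          # stale heap entry
        for dr in (-1, 0, 1):                 # D8 neighbourhood
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < nrows and 0 <= cc < ncols:
                    nb = max(b, dsm[rr, cc])  # crest needed to drain this neighbour
                    if nb < barrier[rr, cc]:
                        barrier[rr, cc] = nb
                        heapq.heappush(heap, (nb, (rr, cc)))
    # Water above the barrier drains to the manhole; water below it stays ponded.
    surface_after = np.minimum(surface_before, barrier)
    return surface_after - dsm

dsm = np.array([[3., 3., 3., 3., 3.],
                [3., 1., 2., 2.5, 3.],
                [3., 2., 2.5, 2.5, 3.],
                [3., 2.5, 2.5, 1.5, 3.],
                [3., 3., 3., 3., 3.]])
depth_before = np.clip(2.6 - dsm, 0.0, None)   # a flat pre-recession flood surface at 2.6
print(recede_to_manhole(dsm, depth_before, (1, 1)))  # ponding remains behind the 2.5 ridge
```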
Figure 3
shows examples of flood recession based on single and multiple manholes. The result from the flood inundation estimate is given in
Figure 3
a. It is assumed that when rainfall ceased, the holding capacity of the drainage system decreases, and overland flow starts to recess toward the manholes.
Figure 3
b,c illustrate how flooded surfaces are recessing for a single and two manholes scenarios. The recession associated with a single manhole (
Figure 3
b) indicates two regions of ponding—one next to the manhole and another away from manhole caused by local topographic barrier. The ponded regions away from the manhole are not connected to the
manhole hydraulically. The second case introduces an additional manhole in the depression region, and thereby the flood can be drained by the two independent manholes. The left manhole recedes
similarly to
Figure 3
b. The second manhole drains the ponded floodwater caused by a topographic barrier (
Figure 3
c).
2.3. Study Site: The Hall Creek Watershed
The Hall Creek watershed is an urban watershed (
Figure 4
), containing four major cities near Seattle, WA. These cities are Edmonds, Esperance, Lynnwood, and Mountlake Terrace, which are part of the northern Seattle–Tacoma–Bellevue metropolitan region. The
predominant land cover type in the watershed is a developed urban region, which accounts for 96% of the land cover. The rest is covered by forest and water bodies (
Figure 4
b). The watershed is frequently affected by prolonged storm and flooding events.
The Hall Creek is intermittent and drains toward Lake Ballinger. It is the main tributary to the Lake Ballinger [
], which discharges to the downstream McAleer Creek. The creek does not have monitored discharge and stage data. Since the lake level is highly influenced by storm events (
Figure 5
) and the flow from the Hall Creek, the available level data were used to calibrate the SWMM model.
The flooding in the study area is mostly caused by storm events, while the urban area and street intersections are often affected by manhole overflow. For example, during the 19 September 2016 storm
events, two manholes overflowed and caused flooding in the surrounding area. This flood event was used to validate our flood inundation and recession methodologies. One of the manholes flooded an area
across a highway (Case 1), while the other caused flooding alongside the highway (Case 2) (
Figure 6
2.3.1. Data
High resolution observed meteorological data (obtained from the King County’s watersheds and rivers database) and sub-hourly lake level data (obtained from the city of Edmonds) were used to develop
the SWMM model. Surface elevation from a one-meter resolution LiDAR data (obtained from Washington State Department of Natural Resources) were used for the flood inundation and recession modeling. We
have extracted the conveyance network system of the city of Edmonds and Mountlake Terrace from their respective Geographical Information System (GIS) Department. We have identified the type,
location, and possible flow direction of the sewer system. The land cover data were obtained from the US Department of Agriculture and were used to estimate the percentage of impervious layer. For
the purpose of calibration and validation of the FIRM model, the observed flood boundaries are delineated using related images (
Figure 6
) and texts from social media users (such as locations). We used Google Earth and high-resolution LiDAR data to determine the relative topographical variation and delineated the inferred flood
boundary by considering 360-street and panoramic view. We also used the city of Edmonds GIS-dataset to correlate the inferred flood boundary and building footprints to check if there exists a
mismatch between the building blocks and the street boundaries.
2.3.2. SWMM Model
The SWMM model was discretized into 32 sub-catchments based on the hydrological and drainage network criteria. These criteria include percent of land cover, slope, availability of conveyance network
system, and percent of impervious layers. A 5% threshold for land use, slope, and soil type was used to subdivide the catchment. The sub-catchment, conduit and node layers were used for urban
watershed discretization. After the watershed was discretized, the external inputs such as precipitation, temperature, and evapotranspiration were extracted and applied for each sub-catchment. The
hydraulic properties of manholes, storage, ditches, culverts, and other structures were incorporated. There are a total of 106 manholes, one storage unit, and 108 nodes connected by conduits in the study
area. Only 40% of the manholes were considered in our study, based on data availability and the computational requirements of the SWMM model. The Hall Creek fluxes are represented
using SWMM’s inflow package. Despite the reasonable performance of our model in representing the hydrology and hydraulics of the study area (
Section 3.1
), not considering all the hydraulic structures in our model might have introduced some level of uncertainty that needs further study. SWMM simulations can take hours if the model domain is
large and accompanied by detailed complex hydraulic structures and sub-daily meteorological and hydrological input variables. The model simulation was conducted using the “swmmr” R-package [
Model calibration can be performed using either manual or automated method [
]. In this study, we used both methods of calibration to take advantage of each. First, we used manual calibration to identify sensitive parameters. The sensitive parameters were then
further calibrated using the differential evolution (DE) method, which finds the global optimum parameter values for continuous and differentiable functions [
], based on successive generation and transformation of the parameters values under a given fitness-measured criteria [
]. The DE requires defining the parameters' upper and lower bounds, the objective function, and the optimization goal (minimization or maximization). The algorithm starts by randomly dividing the parameter values
into three distinct populations. The parameter values from each population are then combined to generate the next sets of populations that minimize the objective function. To ensure a globally optimal
solution, the algorithm uses mutation to include non-optimal parameter values in the new populations [
]. The evolution continues until it meets the objective function criteria.
Due to the lack of discharge observational data for the Hall Creek, the model calibration and validation were performed based on the lake level fluctuation of Lake Ballinger, which is located at the
outlet of the creek. Since the creek is a main feed to the lake, the lake level fluctuations reflect the changes in the creek discharge. The model calibration includes the initial condition of the
model, which enables to determine the calibration parameter ranges, and optimization of the parameters using differential evolution optimized method [
]. Nearly three (2.7) years of data were used to calibrate the model, and one year of data was used for validation.
The initialization was used to identify sensitive parameters and their respective parameter ranges. The model was then optimized using the differential evolution method, namely the “DEoptim” packages
]. Differential Evolution (DE) is a genetic algorithm that finds global optimum values for continuous and differentiable functions [
] based on successive generation and transformation of the parameter sets [
]. The DE requires parameters for upper and lower bounds and an objective function for the optimization. During each evolution, in addition to identifying the better parameter sets (or population),
the algorithm also introduces a random change to those parameter sets to ultimately get the global optimal parameter values.
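As an illustration of this calibration workflow, the sketch below uses SciPy's differential evolution to minimize the RMSE between observed and simulated lake levels. The actual study used the swmmr and DEoptim R-packages; here a synthetic model response stands in for a SWMM run so the script is self-contained, and the parameter bounds are placeholders rather than the bounds listed in Table 3.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic "observed" lake-level signal; in practice this would be the gauge record.
observed = 1.5 * np.sin(np.linspace(0, 6, 50)) + 0.2

def run_model(params):
    """Placeholder for a SWMM run returning simulated lake levels for candidate parameters."""
    imperv, width, roughness, dstore, ksat = params
    # Synthetic response driven mainly by imperviousness and roughness for illustration.
    return (imperv / 50.0) * np.sin(np.linspace(0, 6, 50)) + roughness

def objective(params):
    sim = run_model(params)
    return float(np.sqrt(np.mean((observed - sim) ** 2)))   # minimise RMSE

# Illustrative upper/lower bounds for the five calibrated parameters.
bounds = [(20, 90),     # imperviousness (%)
          (50, 500),    # sub-catchment width (m)
          (0.01, 0.4),  # Manning roughness
          (0.5, 10),    # depression storage (mm)
          (1, 50)]      # hydraulic conductivity (mm/h)

result = differential_evolution(objective, bounds, maxiter=50, seed=1, polish=False)
print(result.x, result.fun)
```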
The model simulation includes spin up, main (calibration), and post-audit (validation) simulations. Fitness measure statistics, including Nash–Sutcliffe efficiency (NSE), percent bias (PBIAS),
root-mean-square error (RMSE), ratio of the RMSE to the standard deviation of measured data (RSR), and Kling–Gupta efficiency (KGE) were used to evaluate the model calibration results. The NSE
compares the variance of the residuals (or fitting difference) with the variance of observed lake levels [
]. The PBIAS measures the average residuals or deviations of model results from observed lake levels [
]. The RMSE measures how spread out these residuals are from the model results. The RSR is the RMSE normalized by the standard deviation of the observed lake level [
]. The KGE uses the idea of diagnostic decomposition, where the NSE is breakdown into three components (the relative importance of correlation, bias, and variance difference) [
]. The KGE ranges from negative infinity to one, with the optimal model prediction having the KGE value close to one.
where Y_obs is the observed value, Y_sim is the simulated value, and Ȳ_obs is the mean of the observed lake level change; r is the correlation coefficient between the modeled and observed lake levels; γ is the ratio between the standard deviations of the modeled and observed lake
levels; and β is the ratio between the standard deviation and the mean of the modeled and observed lake levels (
Table 1
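Since the metric equations themselves did not survive extraction, the sketch below implements widely used textbook forms of NSE, PBIAS, RMSE, RSR and KGE (the KGE in the common Gupta et al. decomposition). These standard forms may differ in detail (e.g., sign convention of PBIAS, definition of β) from the exact expressions in the paper's Table 1, and the example data are hypothetical.

```python
import numpy as np

def goodness_of_fit(obs, sim):
    """Common forms of the fit statistics named above (NSE, PBIAS, RMSE, RSR, KGE)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    resid = obs - sim
    rmse = np.sqrt(np.mean(resid**2))
    nse = 1.0 - np.sum(resid**2) / np.sum((obs - obs.mean())**2)
    pbias = 100.0 * resid.sum() / obs.sum()
    rsr = rmse / obs.std(ddof=0)
    r = np.corrcoef(obs, sim)[0, 1]
    gamma = sim.std(ddof=0) / obs.std(ddof=0)      # variability ratio
    beta = sim.mean() / obs.mean()                 # bias ratio
    kge = 1.0 - np.sqrt((r - 1)**2 + (gamma - 1)**2 + (beta - 1)**2)
    return dict(NSE=nse, PBIAS=pbias, RMSE=rmse, RSR=rsr, KGE=kge)

# Hypothetical daily lake-level changes (m) for illustration only.
obs = [0.02, 0.05, 0.11, 0.07, 0.03, 0.01, 0.00, 0.04]
sim = [0.03, 0.04, 0.09, 0.08, 0.03, 0.02, 0.01, 0.03]
print(goodness_of_fit(obs, sim))
```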
Manhole overflows were extracted from the calibrated SWMM model and were used as input for the FIRM simulation. The spatial extent of the model is compared to the observed flood regions. Since there
was no direct measurement of flood spatial extent and depth, the observed flood area was reconstructed based on pictures of the area taken during the flood event and obtained from social media
Twitter. The social media information includes pictures, texts, and street names, allowing us to identify the exact locations and extent of the flood. Previous reports in the cities also
indicated that there had been multiple incidences of pluvial flooding near side roads and along intersections. This information was used to delineate the observed flood region, which was then used to
validate the flood inundation model. The process of flood boundary delineation is depicted in
Figure 6
, which shows how the flooded region in the study area was extracted from social media outlets. The images and the text by users were used to identify the exact locations of the flooded regions. We used
Google Earth and high-resolution LiDAR data to determine the relative topographical variation and delineated the inferred flood boundary by considering 360-degree street and panoramic views. We also used the
city of Edmonds GIS dataset to correlate the inferred flood boundary and building footprints to check whether there was a mismatch between the ground and the street boundaries.
2.4. Model Inundation Accuracy
The model’s ability to detect the spatial accuracy of the flood extent was evaluated based on the true positive rate (TPR), the positive predictive value (PPV), the modified fit (MF), and the
modified bias (MB) methods. The TPR and PPV are derived from the confusion matrix [
], which is a 2-by−2 matrix containing the TPR and PPV for gridded simulated and observed flood conditions (
Table 2
). These statistics were used recently to assess the flood inundation model performance in [
]. TPR measures how well the modeled flood region replicates the observed flood boundary. The maximum TPR (100%) indicates that the model fully captures the observed flooded regions. A low TPR
indicates a model tendency to under-predict the flood hazard [
]. PPV measures the extent to which cells flooded in the model are also flooded in the observation. The value ranges from 0%, indicating over-prediction of the flood extent, to 100% for an accurately captured
observed boundary.
$\mathrm{TPR} = \dfrac{TP}{TP + FN} \times 100,$
$\mathrm{PPV} = \dfrac{TP}{TP + FP} \times 100,$
Flood inundation and recession model (FIRM); true positive rate (TPR); the positive predictive value (PPV); true positive (TP); false negative (FN); false positive (FP); true negative (TN).
where TP represents the flooded regions in both the observation and model simulation, FP represents the flooded region in the model but dry in the observation, and FN represents regions flooded in
reality (i.e., in observation) but simulated as dry. The TPR and PPV percentages represent the overlapping rate between simulated and observed flood areas. Higher percentage values of TPR and PPV
indicate higher accuracy of the flood inundation model. The two statistics must be used in combination to measure the accuracy of the model since they each evaluate the different performance of the
model. Specifically, the TPR and PPV measure how well the model captures the observed flood and flooded pixel that are dry in observation, respectively. For example, the TRP value can be 100% if the
model captures all the observed flooded cells even though it may also consider some dry cells as flooded (refer to Equation (3)). Similarly, the PPV value can be 100% if the model captures all the
dry cell even though it may also consider some flooded cell as dry.
Other methods of model performance evaluation are fit and bias indicators, which are also commonly used for flood inundation modeling [
]. Previous studies used both fit and bias indicators to compare flood inundation extents between different models. For this research, we infer observed flood extents from street photos taken during the actual flood event and compare them with the flood inundation results from our model. The modified fit indicator is calculated from the overlapping area between the observed and simulated inundated areas; it ranges from 0% to 100% for poor and ideal model performance, respectively. The modified bias, in contrast, indicates the overall difference between the simulated and observed flood extents: positive and negative modified biases indicate overestimation and underestimation of the flood extent by the model, respectively.
$\text{Modified fit} = \frac{TP}{TP + FP + FN} \times 100,$
$\text{Modified bias} = \left( \frac{TP + FP}{TP + FN} - 1 \right) \times 100,$
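As a hedged illustration of how these four indicators can be computed in practice, the following Python sketch evaluates them for two gridded flood maps stored as boolean NumPy arrays; the array contents are illustrative placeholders, not data from this study:

import numpy as np

# Illustrative gridded flood maps: True = flooded cell, False = dry cell.
# In this study they would correspond to the FIRM output and the inferred observed boundary.
simulated = np.array([[True, True, False],
                      [True, False, False],
                      [False, False, False]])
observed = np.array([[True, True, False],
                     [True, True, False],
                     [False, False, False]])

tp = np.sum(simulated & observed)    # flooded in both model and observation
fp = np.sum(simulated & ~observed)   # flooded in model, dry in observation
fn = np.sum(~simulated & observed)   # dry in model, flooded in observation

tpr = 100.0 * tp / (tp + fn)                         # true positive rate
ppv = 100.0 * tp / (tp + fp)                         # positive predictive value
modified_fit = 100.0 * tp / (tp + fp + fn)           # modified fit
modified_bias = 100.0 * ((tp + fp) / (tp + fn) - 1)  # modified bias

print(f"TPR={tpr:.1f}%  PPV={ppv:.1f}%  MF={modified_fit:.1f}%  MB={modified_bias:.1f}%")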
3. Results and Discussion
3.1. SWMM Model Calibration and Validation
We compared the simulated and observed lake level changes and show that our model captures the lake level fluctuations reasonably well (
Figure 7
Figure 8
Figure 9
). The main objective of the SWMM simulation was to estimate the flood condition in Edmonds during the 19 September 2016 storm event. For model stability and to represent preexisting hydrological conditions, such as soil moisture content, we considered ten months (from 1 August 2015 to 31 May 2016) of simulation as a spin-up. The model was calibrated using data from 1 June 2015 to 31 January 2018 and validated using data from 1 February 2018 to 10 January 2019. After the sensitivity analysis using manual calibration, we identified five parameters for model calibration. These include the imperviousness percentage, width, roughness coefficient, depression storage, and the hydraulic conductivity of the soil. The calibration was performed with the DEoptim R package.
Table 3
indicates the upper and lower limits of the model parameters used in the calibration process.
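As a rough illustration of the calibration loop described above, the sketch below uses SciPy's differential_evolution as a Python stand-in for the DEoptim R package employed in the study; the run_swmm_and_get_lake_levels helper, the synthetic observed series, and the NSE-based objective are illustrative assumptions, while the parameter bounds follow Table 3:

import numpy as np
from scipy.optimize import differential_evolution

# Stand-in for the observed daily lake-level series used in calibration
observed_levels = np.random.default_rng(0).normal(0.0, 0.1, 365)

def run_swmm_and_get_lake_levels(params):
    """Hypothetical helper: write the candidate parameter set into the SWMM
    input, run the model, and return the simulated lake-level series."""
    impervious_pct, width_m, roughness, dep_storage_mm, ksat_mm_hr = params
    # Placeholder result so the sketch runs without a SWMM installation
    return np.zeros_like(observed_levels)

def negative_nse(params):
    sim = run_swmm_and_get_lake_levels(params)
    nse = 1 - np.sum((observed_levels - sim) ** 2) / np.sum(
        (observed_levels - observed_levels.mean()) ** 2)
    return -nse  # differential evolution minimizes, so negate the NSE

# Lower and upper bounds of the five calibration parameters (Table 3)
bounds = [(25, 90),       # imperviousness (%)
          (150, 300),     # width (m)
          (0.01, 0.03),   # roughness (-)
          (1.2, 5.2),     # depression storage (mm)
          (0.1, 3.0)]     # hydraulic conductivity (mm/h)

result = differential_evolution(negative_nse, bounds, maxiter=30, seed=1)
print("best parameter set:", result.x, "NSE:", -result.fun)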
Figure 7
Figure 8
show the comparison of the simulated and observed time series of daily and monthly average lake level change for the calibration and validation periods, respectively. The
figures demonstrate that the lake level changes as a result of storm events were reasonably captured for the calibration period. The validation results for both daily and monthly observed and
simulated lake levels show the ability of the model to predict beyond the calibration period.
Figure 9
indicates the correlation between the observed and simulated lake level change for the spin-up, calibration, and validation periods. The regression coefficient (R²) is 0.42 for the spin-up period, 0.83 for the calibration period, and 0.77 for the validation period. The correlation coefficients for the calibration and validation periods also confirm that the model captured the observed lake level reasonably well. The simulation was also evaluated using the NSE, KGE, RSR, and PBIAS.
The statistical summary of the spin-up, calibration, and validation simulation results is presented in
Table 4
. For the calibration and validation periods, the KGE values are 0.91 and 0.88, respectively, while the NSE values are 0.82 and 0.67. These indicate satisfactory model performance. The RSR, which indicates the variation in the residuals, is between 0 and 0.5. This is considered a very good performance [
]. The PBIAS values are also close to 0, which confirms that the model simulates the observed water level with minimal bias. Compared to the daily model performance, the monthly model performance is generally better.
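For readers who wish to reproduce this kind of evaluation, a minimal Python sketch of the four goodness-of-fit statistics is given below; the variable names and synthetic data are illustrative, and the KGE components follow the definitions listed in Table 1:

import numpy as np

def gof_stats(obs, sim):
    """NSE, KGE, RSR and PBIAS for paired observed/simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    resid = obs - sim
    ss_obs = np.sum((obs - obs.mean()) ** 2)
    nse = 1 - np.sum(resid ** 2) / ss_obs
    rsr = np.sqrt(np.sum(resid ** 2)) / np.sqrt(ss_obs)
    pbias = 100 * np.sum(resid) / np.sum(obs)
    r = np.corrcoef(obs, sim)[0, 1]
    beta = sim.mean() / obs.mean()                                # ratio of means
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())   # ratio of CVs
    kge = 1 - np.sqrt((r - 1) ** 2 + (gamma - 1) ** 2 + (beta - 1) ** 2)
    return {"NSE": nse, "KGE": kge, "RSR": rsr, "PBIAS": pbias}

# Illustrative use with synthetic daily lake levels
rng = np.random.default_rng(1)
obs = rng.normal(1.0, 0.3, 365)
sim = obs + rng.normal(0.0, 0.05, 365)
print(gof_stats(obs, sim))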
3.2. Flood Inundation and Recession
The spatial extent and depth of the flood inundation were simulated with FIRM for the pluvial flood event that occurred on 19 September 2016. A detailed conveyance network together with sub-daily meteorological data, such as rainfall, was used to capture the flood event in a SWMM continuous simulation. FIRM simulates the floods caused by overflow from two manholes, and its results were compared with the inferred flood boundary (
Figure 10
Figure 10
represents the areal extent and depth of the flood inundation at the two locations. The red dotted line represents the inferred flood boundary, and the black points represent manholes. The color represents flooding depth. The fine-resolution LiDAR data were able to resolve detailed urban infrastructure, such as building footprints and streets. FIRM was able to distinguish elevated urban infrastructure from low-lying streets. Overall, coupling the 1D SWMM model with the 2D FIRM model made it possible to delineate the spatial extent and depth of the flood generated from manhole overflow in the study area.
The flood started from a given manhole and propagated spatially by filling any neighboring grid cells with lower elevations. As shown in
Figure 10
a, representing Case 1, the flood is concentrated across a highway going west to east. The topographic variations around the manhole are relatively small, with the mean slope of the flooded region
being 1.2 degrees (
Figure 11
a). The flat slope, particularly along the street, enables the overland flow to inundate the street perpendicular to the main highway. Based on the FIRM result, the areal extent, maximum flood depth, and volume of the inundated region are 2129 m², 0.6 m, and 710 m³, respectively. The depth of the flood is controlled by the local elevation and the amount of excess overland flow. The low-lying part of the street generally has a deeper flood depth than the peripheral part of the flood extent, since the pavement is often elevated relative to the street. Another manhole flooding case in our study (
Figure 10
b, Case 2) was used to evaluate the flood inundation simulation. In this case, the manhole is located in the steeper part of the street, where the average slope of the street is 5.2 degrees from the
west to east (
Figure 11
b). Consequently, the flood is mostly concentrated along the highway in the north–south direction and did not spread much laterally. The volume, areal extent, and maximum depth of the flooded region are 38 m³, 426 m², and 0.15 m, respectively.
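The spreading behaviour described at the start of this paragraph can be illustrated, in highly simplified form, by the following Python sketch, which repeatedly floods the lowest reachable neighbouring cell of a small elevation grid until the overflow volume is used up; it is only a schematic of this class of inundation algorithms, not the exact FIRM implementation, and the DEM values, increment size, and cell area are arbitrary:

import heapq
import numpy as np

def simple_spread(dem, manhole_rc, volume, cell_area=1.0, increment=0.1):
    """Distribute an overflow volume from a manhole cell over a DEM by always
    flooding the lowest reachable cell first (schematic illustration only)."""
    water = np.zeros_like(dem, dtype=float)      # water depth per cell
    visited = np.zeros(dem.shape, dtype=bool)
    heap = [(dem[manhole_rc], manhole_rc)]       # frontier ordered by elevation
    while heap and volume > 0:
        elev, (r, c) = heapq.heappop(heap)
        if visited[r, c]:
            continue
        visited[r, c] = True
        depth = min(volume / cell_area, increment)
        water[r, c] += depth
        volume -= depth * cell_area
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1] and not visited[rr, cc]:
                heapq.heappush(heap, (dem[rr, cc], (rr, cc)))
    return water

dem = np.array([[2.0, 1.8, 1.9],
                [1.7, 1.5, 1.8],
                [1.9, 1.6, 2.1]])
print(simple_spread(dem, (1, 1), volume=0.5))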
Based on the two cases, we observe that the flood inundation model accounts for the spatial heterogeneity of the surface features when inundating; for example, the flood inundation algorithm was able to differentiate buildings from streets (
Figure 10
a) and intersections of streets with varying slope and elevation (
Figure 10
b). FIRM inundates the lowest-lying cells and covers a wider area in a relatively gentle region. Conversely, for the manhole located in a steeper region, the algorithm follows the preferred flow direction and inundates a relatively smaller area.
To determine the flood recession, we assumed that once the storm ceases and the pipe is no longer flowing at full capacity, the water eventually drains back to the conveyance system unless it is isolated from the manhole by depression storage. Accordingly, some of the inundated floodwater recedes, while the remaining water is left as ponding associated with local topographic barriers. The result for Case 1 shows that most of the floodwater drains because the manhole is located at a lower elevation. There is some ponded water away from the manhole due to possible topographic barriers between the manhole and the flooded areas (
Figure 12
a). The volume, areal extent, and maximum depth of the ponded region are 15 m³, 89 m², and 0.17 m, respectively. For Case 2, the ponded water is generally concentrated near the manhole due to the local topographic depression around the manhole (
Figure 12
b). The volume, areal extent, and depth of the ponded water decrease to 3 m³, 85 m², and 0.06 m, respectively. The results show FIRM's ability to determine both the maximum flood extent and the extent in the aftermath of a given storm. Both pieces of information are important to assess the flooding risk and the associated potential short-term (flood inundation) and long-term (flood recession) impacts.
3.3. Model Inundation and Recession Accuracy
In addition to the visual comparison of the observed and simulated flood areas, the model performance evaluation for the inundation was carried out using statistical measures that compare the
simulated and observed gridded flood areas. This enables us to identify and assess the model performance based on how accurately the model predicted the observed flooded and dry regions. We adopted
the TPR and PPV from [
] and used the modified version of the fitting (MF) and bias (MB) indicators from [
]. The simulated inundation extents were compared with the inferred observed flood boundaries extracted from photos of the flooded areas.
Table 5
shows the model performance indicators used to evaluate the flood inundation model against the inferred flood regions. The TPR for Case 1 (89%) indicates the model's ability to capture the flooded grid cells, while for Case 2 the TPR is 71%, indicating that the model predicted a relatively larger portion of the flooded region as dry land. Thus, the model under-predicts the flood hazard in Case 2. The PPV values for Case 1 and Case 2 are 95.4% and 97.25%, respectively, indicating the model's ability to avoid flooding the observed non-flooded cells. The MF of 85% for Case 1 indicates that the model has better agreement with the observed flood boundary than in Case 2, which has an MF of 69.90%, indicating a relatively large deviation between the predicted and observed flooded regions. The negative MB values for both cases indicate that the flooded regions were underestimated in both cases. Overall, the relative errors are higher for Case 2 than for Case 1. This is due to underestimation of the flood hazard in Case 2 compared with the observed boundary, and possibly due to FIRM's inability to simulate the impact of direct rainfall during the flood event or to represent floodwater losses into the buildings.
The low TPR values compared to the PPV values suggest that the flood inundation model underestimated the total flooded regions for both cases, but predicted the flooded region within the observed flood boundaries well (
Figure 10
). The underestimation of the flood areas might be due to the inundation algorithm not incorporating the additional flooding resulting from direct precipitation or generated surface runoff. The relatively poor performance of the model for Case 2 might also be due to the algorithm's limitation in incorporating additional flood flux from upstream overland flow into the flooded regions. In addition, the relatively steep slopes in the area, which facilitate rapid overland flow from the manhole toward the low-lying regions, may have affected the model performance. Moreover, the relatively better model performance for Case 1 might be because of the relatively homogeneous topography in the area, which is well represented by the 1-m LiDAR data.
Coupling the FIRM model and a hydrodynamic model with projected future storm scenarios may help to identify areas that may experience future manhole flooding. This modeling capability can help to better assess flooding risk and improve the design of stormwater drainage systems in flood-prone urban areas. To further improve the work, it is important to consider direct rainfall during storm events and other sources or losses of floodwater (e.g., losses of floodwater by drainage into buildings or the addition of excess runoff from rooftops), as well as the land cover and slopes.
4. Conclusions
We have presented effective flood inundation and recession methodologies that use the overflow from given manholes and the topography of an urban region. We used the SWMM model to estimate the volume of overflow from manholes. In order to determine the associated flood depth and extent during and after storm events, we developed a flood inundation and recession model (FIRM) that uses high-resolution LiDAR elevation data. The SWMM model was set up on the basis of watershed characteristics and the drainage conveyance network in the area, calibrated using a differential evolution optimization method, and validated against observed lake level data at the outlet of the watershed. The manhole overflows were extracted and used in FIRM to delineate the spatial extent and depth of the flood.
The spatial extent of the simulated flood area was compared with the observed flood boundary, which was derived from social media pictures and reports from the cities. Two case studies, based on
flood events in Edmonds, WA, were considered to evaluate the flood inundation and recession model. In these case studies, the flood occurred across and along a main highway under different
topographical characteristics. The results showed that the spatial extent of the flood regions is highly influenced by local topography and the position of the manholes. In particular, the spatial arrangement of the manholes and the slope of nearby areas are crucial for determining the spatial extent and spatial heterogeneity of the flood depth, and for selecting preferential flow paths that inundate low-lying areas. The model is able to capture the flood extent for manhole overland flow in pluvial flood events. Incorporating the direct impact of rainfall on the pluvial flood event can improve the representation of the physical processes and the accuracy of the model. As flood recession observation data are scarce, the performance of the flood recession model was difficult to quantify. Finally, proper understanding and representation of the study area, the boundary conditions, and engineering structures are important for flood inundation and recession modeling
associated with manhole overland flow.
Regional authorities can utilize the presented model (FIRM) by coupling it with existing hydrodynamic models (e.g., SWMM) to quantify flood hazards from pluvially generated overland flooding and manhole-induced flooding in urban areas, where the flood mechanism is complex and modified by local infrastructure. FIRM can be used to estimate the areal extent and depth of the flood caused by manhole overland flow during a flood event (flood inundation) and after the event is over (flood recession). Because of the relative simplicity of the model and its use of readily available data, the model can be used for real-time assessment of flood progression and to identify potential impact areas. The model's ability to simulate flood recession will also allow identification of areas where the floodwater will remain ponded for days after the flood subsides. The ponded water, or the floodwater that does not drain, can affect human health and property. In addition to the real-time forecast of flood inundation and estimation of the aftermath ponding condition for the existing drainage system, the model can be used to better design a new drainage system or retrofit the current one to minimize overflow and ponding after flood events. The model can be used to assess the flood condition under multiple storm, watershed, and drainage scenarios, and it can contribute to our understanding of climate change and to appropriate engineering designs for mitigation.
Author Contributions
Conceptualization, M.G. and Y.D.; methodology, M.G. and Y.D.; validation, M.G. and Y.D.; writing—original draft preparation, M.G.; writing—review and editing, M.G. and Y.D.; supervision, Y.D.;
funding acquisition, Y.D. All authors have read and agreed to the published version of the manuscript.
Department of Defense’s Strategic Environmental Research and Development Program (SERDP) under contract W912HQ-15-C-0023.
We are thankful to the cities of Edmonds and Mountlake Terrace for providing the sewer network dataset and observation data. The first author is grateful for the research input from Joan Wu, Akram Hossain, Jennifer Adam, Mark Wigmosta, Debra Perrone, and Scott Jasechko. We are also thankful to the anonymous reviewers for their helpful comments on the manuscript.
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the
decision to publish the results.
1. Wallemacq, P.; Herden, C.; House, R. The Human Cost of Natural Disasters 2015: A Global Perspective; Technical report; Centre for Research on the Epidemiology of Disasters: Brussels, Belgium,
2015. [Google Scholar]
2. Stocker, T.F.; Qin, D.; Plattner, G.-K.; Tignor, M.; Allen, S.K.; Boschung, J.; Nauels, A.; Xia, Y.; Bex, V.; Midgley, P.M. Climate change 2013: The physical science basis. In Contribution of
Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change; Cambridge University Press: Cambridge, UK, 2013; Volume 1535. [Google Scholar]
3. Technical Mapping Advisory Council (TMAC). Technical Mapping Advisory Council (TMAC) 2015 Annual Report Summary. Available online: https://www.fema.gov/media-library-data/
1454954186441-34ff688ee1abc00873df80c4d323a4df/TMAC_2015_Annual_Report_Summary.pdf (accessed on 13 March 2020).
4. Federal Emergency Management Agency (FEMA). Loss Dollars Paid by Calendar Year. Available online: https://www.fema.gov/loss-dollars-paid-calendar-year (accessed on 13 March 2020).
5. European Environmental Agency (EEA). Mapping the impacts of natural hazards and technological accidents in Europe – an overview of the last decade. EEA Technical Report No13/2010. Available
online: https://www.eea.europa.eu/publications/mapping-the-impacts-of-natural (accessed on 13 March 2020).
6. Wang, W.; Li, H.Y.; Leung, L.R.; Yigzaw, W.; Zhao, J.; Lu, H.; Deng, Z.; Demisie, Y.; Blöschl, G. Nonlinear filtering effects of reservoirs on flood frequency curves at the regional scale. Water
Resour. Res. 2017, 53, 8277–8292. [Google Scholar] [CrossRef]
7. Ye, S.; Li, H.-Y.; Leung, L.R.; Guo, J.; Ran, Q.; Demissie, Y.; Sivapalan, M. Understanding flood seasonality and its temporal shifts within the contiguous United States. J. Hydrometeorol. 2017,
18, 1997–2009. [Google Scholar] [CrossRef]
8. Milner, A.M.; Picken, J.L.; Klaar, M.J.; Robertson, A.L.; Clitherow, L.R.; Eagle, L.; Brown, L.E. River ecosystem resilience to extreme flood events. Ecol. Evol. 2018, 8, 8354–8363. [Google
Scholar] [CrossRef] [PubMed] [Green Version]
9. Scanlon, B.R.; Keese, K.E.; Flint, A.L.; Flint, L.E.; Gaye, C.B.; Edmunds, W.M.; Simmers, I. Global synthesis of groundwater recharge in semiarid and arid regions. Hydrol. Processes 2006, 20,
3335–3370. [Google Scholar] [CrossRef]
10. Wang, X.; Zhang, G.; Xu, Y.J. Impacts of the 2013 extreme flood in Northeast China on regional groundwater depth and quality. Water 2015, 7, 4575–4592. [Google Scholar] [CrossRef] [Green Version]
11. Jasechko, S.; Birks, S.J.; Gleeson, T.; Wada, Y.; Fawcett, P.J.; Sharp, Z.D.; McDonnell, J.J.; Welker, J.M. The pronounced seasonality of global groundwater recharge. Water Resour. Res. 2014, 50,
8845–8867. [Google Scholar] [CrossRef] [Green Version]
12. Cuthbert, M.O.; Taylor, R.G.; Favreau, G.; Todd, M.C.; Shamsudduha, M.; Villholth, K.G.; MacDonald, A.M.; Scanlon, B.R.; Kotchoni, D.V.; Vouillamoz, J.-M. Observed controls on resilience of
groundwater to climate variability in sub-Saharan Africa. Nature 2019, 572, 230–234. [Google Scholar] [CrossRef]
13. Dahlke, H.; Brown, A.; Orloff, S.; Putnam, D.; O’Geen, T. Managed winter flooding of alfalfa recharges groundwater with minimal crop damage. Calif. Agr. 2018, 72, 65–75. [Google Scholar] [
CrossRef] [Green Version]
14. GebreEgziabher, M. An Integrated Hydrogeological Study to Understand the Groundwater Flow Dynamics in Raya Valley Basin, Northern Ethiopia: Hydrochemistry, Isotope Hydrology and Flow Modeling
Approaches. Master’s Thesis, Addis Ababa University, Addis Ababa, Ethiopia, 2011. [Google Scholar]
15. Changnon, S.A., Jr. Recent studies of urban effects on precipitation in the United States. Bull. Am. Meteorol. Soc. 1969, 50, 411–421. [Google Scholar] [CrossRef]
16. Arnbjerg-Nielsen, K.; Willems, P.; Olsson, J.; Beecham, S.; Pathirana, A.; Bülow Gregersen, I.; Madsen, H.; Nguyen, V.-T.-V. Impacts of climate change on rainfall extremes and urban drainage
systems: A review. Water Sci. Technol. 2013, 68, 16–28. [Google Scholar] [CrossRef]
17. Boyd, E.; Juhola, S. Adaptive climate change governance for urban resilience. Urban Stud. 2015, 52, 1234–1264. [Google Scholar] [CrossRef]
18. Ford, A.; Barr, S.; Dawson, R.; Virgo, J.; Batty, M.; Hall, J. A multi-scale urban integrated assessment framework for climate change studies: A flooding application. Comput. Environ. Urban 2019,
75, 229–243. [Google Scholar] [CrossRef]
19. Chen, J.; Hill, A.A.; Urbano, L.D. A GIS-based model for urban flood inundation. J. Hydrol. 2009, 373, 184–192. [Google Scholar] [CrossRef]
20. Wang, R.-Q.; Mao, H.; Wang, Y.; Rae, C.; Shaw, W. Hyper-resolution monitoring of urban flooding with social media and crowdsourcing data. Comput. Geosci. 2018, 111, 139–147. [Google Scholar] [
CrossRef] [Green Version]
21. National Academies of Sciences, Engineering, and Medicine. Framing the Challenge of Urban Flooding in the United States; National Academies Press: Washington, DC, USA, 2019. [Google Scholar]
22. Jacobson, C.R. Identification and quantification of the hydrological impacts of imperviousness in urban catchments: A review. J. Environ. Manage. 2011, 92, 1438–1448. [Google Scholar] [CrossRef]
23. Zhang, W.; Villarini, G.; Vecchi, G.A.; Smith, J.A. Urbanization exacerbated the rainfall and flooding caused by hurricane Harvey in Houston. Nature 2018, 563, 384–388. [Google Scholar] [CrossRef
] [PubMed]
24. Shuster, W.D.; Bonta, J.; Thurston, H.; Warnemuende, E.; Smith, D. Impacts of impervious surface on watershed hydrology: A review. Urban Water J. 2005, 2, 263–275. [Google Scholar] [CrossRef]
25. Diakakis, M.; Deligiannakis, G.; Pallikarakis, A.; Skordoulis, M. Identifying elements that affect the probability of buildings to suffer flooding in urban areas using Google Street View. A case
study from Athens metropolitan area in Greece. Int. J. Disaster Risk Reduct. 2017, 22, 1–9. [Google Scholar] [CrossRef]
26. Golz, S.; Schinke, R.; Naumann, T. Assessing the effects of flood resilience technologies on building scale. Urban Water J. 2015, 12, 3043. [Google Scholar] [CrossRef]
27. Hu, M.; Sayama, T.; Zhang, X.; Tanaka, K.; Takara, K.; Yang, H. Evaluation of low impact development approach for mitigating flood inundation at a watershed scale in China. J. Environ. Manage.
2017, 193, 430–438. [Google Scholar] [CrossRef]
28. Brody, S.; Sebastian, A.; Blessing, R.; Bedient, P. Case study results from southeast Houston, Texas: Identifying the impacts of residential location on flood risk and loss. J. Flood Risk Manage.
2018, 11, S110–S120. [Google Scholar] [CrossRef]
29. Teng, J.; Jakeman, A.J.; Vaze, J.; Croke, B.F.; Dutta, D.; Kim, S. Flood inundation modelling: A review of methods, recent advances and uncertainty analysis. Environ. Modell. Softw. 2017, 90,
201–216. [Google Scholar] [CrossRef]
30. Yu, D.; Coulthard, T.J. Evaluating the importance of catchment hydrological parameters for urban surface water flood modelling using a simple hydro-inundation model. J. Hydrol. 2015, 524,
385–400. [Google Scholar] [CrossRef] [Green Version]
31. Courty, L.G.; Pedrozo-Acuña, A.; Bates, P.D. Itzï (version 17.1): An open-source, distributed GIS model for dynamic flood simulation. Geosci. Model. Dev. 2017, 10, 1835. [Google Scholar] [
CrossRef] [Green Version]
32. Rosenberg, E.A.; Keys, P.W.; Booth, D.B.; Hartley, D.; Burkey, J.; Steinemann, A.C.; Lettenmaier, D.P. Precipitation extremes and the impacts of climate change on stormwater infrastructure in
Washington State. Clim. Change 2010, 102, 319–349. [Google Scholar] [CrossRef] [Green Version]
33. Mishra, V.; Ganguly, A.R.; Nijssen, B.; Lettenmaier, D.P. Changes in observed climate extremes in global urban areas. Environ. Res. Lett. 2015, 10, 024005. [Google Scholar] [CrossRef]
34. Muis, S.; Güneralp, B.; Jongman, B.; Aerts, J.C.; Ward, P.J. Flood risk and adaptation strategies under climate change and urban expansion: A probabilistic analysis using global data. Sci. Total
Environ. 2015, 538, 445–457. [Google Scholar] [CrossRef]
35. Zhao, G.; Xu, Z.; Pang, B.; Tu, T.; Xu, L.; Du, L. An enhanced inundation method for urban flood hazard mapping at the large catchment scale. J. Hydrol. 2019, 571, 873–882. [Google Scholar] [
36. Zhao, T.; Shao, Q.; Zhang, Y. Deriving flood-mediated connectivity between river channels and floodplains: Data-driven approaches. Sci. Rep. 2017, 7, 43239. [Google Scholar] [CrossRef] [Green
37. Wang, X.; Kinsland, G.; Poudel, D.; Fenech, A. Urban flood prediction under heavy precipitation. J. Hydrol. 2019, 577, 123984. [Google Scholar] [CrossRef]
38. Jamali, B.; Bach, P.M.; Cunningham, L.; Deletic, A. A Cellular Automata Fast Flood Evaluation (CA-ffé) Model. Water Resour. Res. 2019, 55, 4936–4953. [Google Scholar] [CrossRef] [Green Version]
39. Zheng, X.; Maidment, D.R.; Tarboton, D.G.; Liu, Y.Y.; Passalacqua, P. GeoFlood: Large-Scale Flood Inundation Mapping Based on High-Resolution Terrain Analysis. Water Resour. Res. 2018, 54,
10013–10033. [Google Scholar] [CrossRef]
40. Yang, T.-H.; Chen, Y.-C.; Chang, Y.-C.; Yang, S.-C.; Ho, J.-Y. Comparison of different grid cell ordering approaches in a simplified inundation model. Water 2015, 7, 438–454. [Google Scholar] [
CrossRef] [Green Version]
41. Meng, X.; Zhang, M.; Wen, J.; Du, S.; Xu, H.; Wang, L.; Yang, Y. A Simple GIS-Based Model for Urban Rainstorm Inundation Simulation. Sustainability 2019, 11, 2830. [Google Scholar] [CrossRef] [
Green Version]
42. Sörensen, J.; Mobini, S. Pluvial, urban flood mechanisms and characteristics–assessment based on insurance claims. J. Hydrol. 2017, 555, 51–67. [Google Scholar] [CrossRef]
43. Leandro, J.; Martins, R. A methodology for linking 2D overland flow models with the sewer network model SWMM 5.1 based on dynamic link libraries. Water Sci. Technol. 2016, 73, 3017–3026. [Google
Scholar] [CrossRef]
44. Son, A.-L.; Kim, B.; Han, K.-Y. A simple and robust method for simultaneous consideration of overland and underground space in urban flood modeling. Water 2016, 8, 494. [Google Scholar] [CrossRef
] [Green Version]
45. Chang, T.-J.; Wang, C.-H.; Chen, A.S.; Djordjević, S. The effect of inclusion of inlets in dual drainage modelling. J. Hydrol. 2018, 559, 541–555. [Google Scholar] [CrossRef]
46. Jang, J.-H.; Chang, T.-H.; Chen, W.-B. Effect of inlet modelling on surface drainage in coupled urban flood simulation. J. Hydrol. 2018, 562, 168–180. [Google Scholar] [CrossRef]
47. Seyoum, S.D.; Vojinovic, Z.; Price, R.K.; Weesakul, S. Coupled 1D and noninertia 2D flood inundation model for simulation of urban flooding. J. Hydraul. Eng. 2012, 138, 23–34. [Google Scholar] [
48. Chen, A.S.; Leandro, J.; Djordjević, S. Modelling sewer discharge via displacement of manhole covers during flood events using 1D/2D SIPSON/P-DWave dual drainage simulations. Urban Water J. 2016,
13, 830–840. [Google Scholar] [CrossRef] [Green Version]
49. Rossman, L.A.; Huber, W. Storm water management model reference manual volume II–hydraulics. US Environ. Prot. Agency II (Mayo) 2017, 190. Available online: https://nepis.epa.gov/Exe/ZyPDF.cgi?
Dockey=P100S9AS.pdf (accessed on 17 April 2020).
50. Kessler, R. Stormwater strategies: Cities prepare aging infrastructure for climate change. Environ. Health Perspect. 2011, 119, 514–519. [Google Scholar] [CrossRef] [PubMed]
51. Milly, P.C.; Betancourt, J.; Falkenmark, M.; Hirsch, R.M.; Kundzewicz, Z.W.; Lettenmaier, D.P.; Stouffer, R.J. Stationarity is dead: Whither water management? Science 2008, 319, 573–574. [Google
Scholar] [CrossRef] [PubMed]
52. Yan, H.; Sun, N.; Wigmosta, M.; Skaggs, R.; Hou, Z.; Leung, L.R. Next-generation intensity–duration–frequency curves to reduce errors in peak flood design. J. Hydrol. Eng. 2019, 24, 04019020. [
Google Scholar] [CrossRef]
53. Mullen, K.; Ardia, D.; Gil, D.L.; Windover, D.; Cline, J. DEoptim: An R package for global optimization by differential evolution. J. Stat. Softw. 2011, 40, 1–26. [Google Scholar] [CrossRef] [
Green Version]
54. Means, J.E.; Acker, S.A.; Harding, D.J.; Blair, J.B.; Lefsky, M.A.; Cohen, W.B.; Harmon, M.E.; McKee, W.A. Use of large-footprint scanning airborne lidar to estimate forest stand characteristics
in the Western Cascades of Oregon. Remote Sens. Environ. 1999, 67, 298–308. [Google Scholar] [CrossRef]
55. Leutnant, D.; Döring, A.; Uhl, M. swmmr-an R package to interface SWMM. Urban Water J. 2019, 16, 68–76. [Google Scholar] [CrossRef] [Green Version]
56. Niazi, M.; Nietch, C.; Maghrebi, M.; Jackson, N.; Bennett, B.R.; Tryby, M.; Massoudieh, A. Storm water management model: Performance review and gap analysis. J. Sustain. Water Built Env. 2017, 3,
04017002. [Google Scholar] [CrossRef] [Green Version]
57. Gray, J.E.; Pribil, M.J.; Van Metre, P.C.; Borrok, D.M.; Thapalia, A. Identification of contamination in a lake sediment core using Hg and Pb isotopic compositions, Lake Ballinger, Washington,
WA, USA. J. Appl. Geochem. 2013, 29, 1–12. [Google Scholar] [CrossRef]
58. Thapalia, A.; Borrok, D.M.; Van Metre, P.C.; Musgrove, M.; Landa, E.R. Zn and Cu isotopes as tracers of anthropogenic contamination in a sediment core from an urban lake. Environ. Sci. Technol.
2010, 44, 1544–1550. [Google Scholar] [CrossRef]
59. Boyle, D.P.; Gupta, H.V.; Sorooshian, S. Toward improved calibration of hydrologic models: Combining the strengths of manual and automatic methods. Water Resour. Res. 2000, 36, 3663–3674. [Google
Scholar] [CrossRef]
60. Price, K.; Storn, R.M.; Lampinen, J.A. Differential Evolution: A Practical Approach to Global Optimization; Springer Science & Business Media: Berlin, Germany, 2006. [Google Scholar]
61. Ardia, D.; Mullen, K.; Peterson, B.; Ulrich, J. DEoptim’: Differential Evolution in ‘R’. Version 2.2-3. 2015. Available online: https://cran.r-project.org/web/packages/DEoptim/DEoptim.pdf
(accessed on 4 April 2020).
62. Nash, J.E.; Sutcliffe, J.V. River flow forecasting through conceptual models part I—A discussion of principles. J. Hydrol. 1970, 10(3), 282–290. [Google Scholar] [CrossRef]
63. Gupta, H.V.; Kling, H.; Yilmaz, K.K.; Martinez, G.F. Decomposition of the mean squared error and NSE performance criteria: Implications for improving hydrological modelling. J. Hydrol. 2009, 377,
80–91. [Google Scholar] [CrossRef] [Green Version]
64. Legates, D.R.; McCabe, G.J., Jr. Evaluating the use of “goodness-of-fit” measures in hydrologic and hydroclimatic model validation. Water Resour. Res. 1999, 35, 233–241. [Google Scholar] [
65. Kling, H.; Fuchs, M.; Paulin, M. Runoff conditions in the upper Danube basin under an ensemble of climate change scenarios. J. Hydrol. 2012, 424, 264–277. [Google Scholar] [CrossRef]
66. Wang, Y.; Chen, A.S.; Fu, G.; Djordjević, S.; Zhang, C.; Savić, D.A. An integrated framework for high-resolution urban flood modelling considering multiple information sources and urban features.
Environ. Modell Softw. 2018, 107, 85–95. [Google Scholar] [CrossRef]
67. Wing, O.E.; Bates, P.D.; Sampson, C.C.; Smith, A.M.; Johnson, K.A.; Erickson, T.A. Validation of a 30 m resolution flood hazard model of the conterminous United States. Water Resour. Res. 2017,
53, 7968–7986. [Google Scholar] [CrossRef]
68. Bates, P.D.; De Roo, A. A simple raster-based model for flood inundation simulation. J. Hydrol. 2000, 236, 54–77. [Google Scholar] [CrossRef]
69. Bernini, A.; Franchini, M. A rapid model for delimiting flooded areas. Water Resour Manag. 2013, 27, 3825–3846. [Google Scholar] [CrossRef]
70. Lhomme, J.; Sayers, P.; Gouldby, B.; Samuels, P.; Wills, M.; Mulet-Marti, J. Recent development and application of a rapid flood spreading method. In Proceedings of the FloodRisk 2008 Conference,
Oxford, UK, 30 September–2 October 2008; Taylor and Francis Group: London, UK, 2008. [Google Scholar]
71. Moriasi, D.N.; Arnold, J.G.; Van Liew, M.W.; Bingner, R.L.; Harmel, R.D.; Veith, T.L. Model evaluation guidelines for systematic quantification of accuracy in watershed simulations. Trans. ASABE
2007, 50, 885–900. [Google Scholar] [CrossRef]
Figure 1. Schematic representation of the flood inundation modeling from one manhole (the circle with a cross) for different amounts of manhole overflow: (a) 1 unit, (b) 11 units, (c) 17 units, and (d) 46 units. The numbers inside the unshaded and shaded grids represent surface elevation and flood levels, respectively. The dashed line represents the profile line.
Figure 2. Schematic diagram showing flood inundation from two manholes (the circles with crosses) for different amounts of manhole overflow: (a) 22 units from the left manhole and 19 units from the right manhole, (b) 110 units from the left manhole and 111 units from the right manhole. The numbers inside the unshaded and shaded grids represent surface elevation and flood levels, respectively. The dashed line represents the profile line.
Figure 3. Schematic representation of flood recession processes from single and multiple manholes. (a) represents the areal extent and profile section of the flooded regions, (b) represents the areal extent and profile of flood recession for a single manhole, and (c) shows the recession surface and profile after drainage through two manholes.
Figure 4. Hall Creek watershed showing (a) buildings, drainage network, and the surface elevation as a background, and (b) land cover map.
Figure 6. The location of the two manholes, which cause flooding along a street (Case 2) and across a street (Case 1).
Figure 9. Scatter plots of observed and simulated lake levels during the spin-up (a), calibration (b), and validation (c) periods. The lines are the linear regression fits with 95% confidence intervals.
Figure 10. Simulated flood depth and extent (color maps) and observed flood inundation boundaries (red dotted lines). The areal extent and depth of flood in Case 1 (a) and Case 2 (b).
Figure 12. Flood spatial extent in the aftermath of the storm and flood recession into the manholes for Case 1 (a) and Case 2 (b).
Statistics Ranges Optimal Value
$NSE = 1 - \left[ \frac{\sum_{i=1}^{n} \left( Y_i^{\mathrm{obs}} - Y_i^{\mathrm{sim}} \right)^2}{\sum_{i=1}^{n} \left( Y_i^{\mathrm{obs}} - Y^{\mathrm{mean}} \right)^2} \right]$ −∞ to 1 1
$PBIAS = \left[ \frac{\sum_{i=1}^{n} \left( Y_i^{\mathrm{obs}} - Y_i^{\mathrm{sim}} \right) \times 100}{\sum_{i=1}^{n} Y_i^{\mathrm{obs}}} \right]$ 0 to 100 0
$RSR = \frac{\mathrm{RMSE}}{\mathrm{STDEV_{obs}}} = \frac{\sqrt{\sum_{i=1}^{n} \left( Y_i^{\mathrm{obs}} - Y_i^{\mathrm{sim}} \right)^2}}{\sqrt{\sum_{i=1}^{n} \left( Y_i^{\mathrm{obs}} - Y^{\mathrm{mean}} \right)^2}}$ 0 to 1 0
$KGE = 1 - \sqrt{ (r - 1)^2 + (\gamma - 1)^2 + (\beta - 1)^2 }$ 0 to 1 1
$\beta = \frac{\mu_s}{\mu_o}$
$\gamma = \frac{CV_s}{CV_o} = \frac{\sigma_s / \mu_s}{\sigma_o / \mu_o}$
Flooded in Observed Boundary Dry in Observed Boundary
Flooded in FIRM True flood (TP) False flood (FP)
Dry in FIRM False dry (FN) True dry (TN)
Parameters Lower–Upper Bound Optimal Values
Impervious (%) 25–90 70
Width (m) 150–300 152
Roughness (−) 0.01–0.03 0.012
Depression Storage (mm) 1.2–5.2 1.78
Hydraulic Conductivity (mm/h) 0.1–3 0.11
Table 4. Model performance statistics to evaluate the Storm Water Management Model (SWMM) daily and monthly lake water level simulations.
Simulation KGE NSE RSR PBIAS Performance Rating [71]
Daily Mon Daily Mon Daily Mon Daily Mon Daily Mon
Spin Up 0.64 0.61 −0.31 −1.15 1.14 RSR −0.10 −0.10 Unsat * Unsat *
Calibration 0.91 0.96 0.82 0.94 0.43 1.39 0.00 0.00 V. good ^ V. good ^
Validation 0.88 0.95 0.67 0.81 0.57 0.24 0.00 0.00 Good V. good ^
Kling–Gupta efficiency (KGE); Nash–Sutcliffe efficiency (NSE); ratio of the RMSE to the standard deviation of measured data (RSR); percent bias (PBIAS); * Unsatisfactory; ^ Very good.
Table 5. Statistical evaluations of the flood inundation model based on inferred flood area at the two manhole locations (Case 1 and Case 2).
Inundation Model Performance Case 1 Case 2
True positive rate, TPR (%) 89.04 71.31
Positive predictive value, PPV (%) 95.44 97.25
Modified fit, MF (%) 85.04 69.90
Modified bias, MB (%) −6.71 −26.68
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Share and Cite
MDPI and ACS Style
GebreEgziabher, M.; Demissie, Y. Modeling Urban Flood Inundation and Recession Impacted by Manholes. Water 2020, 12, 1160. https://doi.org/10.3390/w12041160
AMA Style
GebreEgziabher M, Demissie Y. Modeling Urban Flood Inundation and Recession Impacted by Manholes. Water. 2020; 12(4):1160. https://doi.org/10.3390/w12041160
Chicago/Turabian Style
GebreEgziabher, Merhawi, and Yonas Demissie. 2020. "Modeling Urban Flood Inundation and Recession Impacted by Manholes" Water 12, no. 4: 1160. https://doi.org/10.3390/w12041160
Article Metrics | {"url":"https://www.mdpi.com/2073-4441/12/4/1160?utm_source=releaseissue&utm_medium=email&utm_campaign=releaseissue_water&utm_term=doilink91","timestamp":"2024-11-08T12:07:48Z","content_type":"text/html","content_length":"509655","record_id":"<urn:uuid:a32ea4a7-ebef-4cc1-818a-96b61fdd917a>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00258.warc.gz"} |
MetricGate, LLC
One-Sample Z-Test for Mean
The one-sample z-test is used to determine whether the mean of a single sample is statistically different from a known or hypothesized population mean (μ), when the population variance is known. This
test is commonly applied when the sample size is large, and the population standard deviation (σ) is known. It tests the null hypothesis that the sample mean is equal to the population mean against
an alternative hypothesis.
In a one-sample z-test, we have the following hypotheses:
• Null Hypothesis (H₀): The mean of the sample is equal to the hypothesized population mean (μ).
• Alternative Hypothesis (H₁): The mean of the sample is different from the hypothesized population mean (depending on the alternative hypothesis selected).
The alternative hypothesis can be one of the following:
• Two-sided: The sample mean is not equal to the population mean (μ ≠ μ₀).
• One-sided (greater): The sample mean is greater than the population mean (μ > μ₀).
• One-sided (less): The sample mean is less than the population mean (μ < μ₀).
Test Statistic Formula
The z-statistic is calculated as follows:
z = \dfrac{\bar{X} - \mu_0}{\dfrac{\sigma}{\sqrt{n}}}
• \bar{X} is the sample mean
• \mu_0 is the population mean (hypothesized mean)
• \sigma is the population standard deviation
• n is the sample size
The one-sample z-test compares the sample mean to the hypothesized population mean using the following steps:
• Calculate the z-statistic using the sample data.
• Compute the p-value based on the z-statistic and alternative hypothesis.
• Evaluate the test results based on the p-value and significance level.
R Code Example
The following R script demonstrates how to perform a one-sample z-test using MetricGate's R Compiler:
# Performing a one-sample z-test on mtcars dataset
data <- mtcars
variable <- 'mpg' # Variable of interest, e.g., 'mpg'
mu_value <- 20 # Hypothesized mean value for the z-test
sigma_value <- 5 # Known population standard deviation
alternative_hypothesis <- 'two.sided' # Can be 'two.sided', 'less', or 'greater'
significance_level <- 0.05 # Significance level
# Sample statistics
n <- length(data[[variable]])
mean_x <- mean(data[[variable]])
# Z-test calculations
z_stat <- (mean_x - mu_value) / (sigma_value / sqrt(n))
p_value <- if (alternative_hypothesis == 'two.sided') {
  2 * pnorm(-abs(z_stat))
} else if (alternative_hypothesis == 'less') {
  pnorm(z_stat)
} else {
  1 - pnorm(z_stat)
}
# Confidence Interval calculation
z_critical <- qnorm(1 - significance_level / 2) # e.g., 1.96 for a 95% confidence interval when significance_level = 0.05
conf_int <- c(mean_x - z_critical * sigma_value / sqrt(n), mean_x + z_critical * sigma_value / sqrt(n))
# Output
cat("Z-test Results:\n")
cat("Variable:", variable, "\n")
cat("Sample Mean:", mean_x, "\n")
cat("Test Statistic (z):", z_stat, "\n")
cat("P-value:", p_value, "\n")
cat("Confidence Interval:", conf_int[1], "to", conf_int[2], "\n")
Interpreting Results
The output from the one-sample z-test will include several key metrics to help interpret the results:
• Test Statistic (z): This value represents the calculated z-statistic for the test.
• P-value: The probability of observing the test statistic under the null hypothesis. A small p-value (< significance level) suggests strong evidence against the null hypothesis.
• Confidence Interval: The range within which the true population mean lies, with a given confidence level (e.g., 95%).
• Sample Estimates: The mean of the sample data being tested.
• Method: The statistical method used for the test (e.g., One-sample z-test).
• Alternative Hypothesis: Specifies the chosen direction for the test (two-sided, greater, or less).
Interpreting the Test Results
Based on the p-value and the selected significance level:
• Statistically Significant Result: If the p-value is less than the significance level (e.g., 0.05), we reject the null hypothesis. This means there is sufficient evidence to conclude that the mean
of the sample is different from the hypothesized population mean.
• Not Statistically Significant: If the p-value is greater than or equal to the significance level, we fail to reject the null hypothesis. This suggests that there is insufficient evidence to claim
a significant difference between the sample mean and the population mean.
The one-sample z-test is a valuable tool for determining if the mean of a sample is significantly different from a known or hypothesized population mean. It is particularly useful when the population
standard deviation is known and the sample size is sufficiently large. The result helps assess whether there is enough statistical evidence to reject the null hypothesis in favor of the alternative hypothesis.
Explore our Online R Compiler or Statistics Calculator to run your own z-test and view detailed results. | {"url":"https://metricgate.com/docs/z-test-for-mean/","timestamp":"2024-11-14T18:21:37Z","content_type":"text/html","content_length":"31709","record_id":"<urn:uuid:6d50e816-f3e4-49f2-8c96-2a10e2cb0608>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00541.warc.gz"} |
Modeling of periodic operation of electric centrifugal pump installations taking into account oil degassing
Title: Modeling of periodic operation of electric centrifugal pump installations taking into account oil degassing
Authors: A. A. Makeev^1,2, S. I. Martynov^2
^1 LLC «LUKOIL-Engineering», branch «KogalymNIPIneft» (Tyumen)
^2 Surgut State University
Annotation: A model for the operation of an electric centrifugal pump installation in periodic mode is proposed, taking into account the time of pressure recovery in the well and the influence of the oil degassing process. Based on the developed model, computer simulation of the dynamics of pressure changes at the inlet of an electric centrifugal pump was carried out, and parameters were determined that make it possible to predict the transition to emergency operation as a result of the oil degassing process. The results of the numerical modeling also allow us to evaluate the influence of the oil degassing process on the change in pressure at the inlet of the pump and in the annulus.
Keywords: computer modeling, electric centrifugal pump, periodic mode, mode optimization, forecasting.
Citation: Makeev A. A., Martynov S. I. ''Modeling of periodic operation of electric centrifugal pump installations taking into account oil degassing'' [Electronic resource]. Proceedings of the International Scientific Youth School-Seminar "Mathematical Modeling, Numerical Methods and Software complexes" named after E.V. Voskresensky (Saransk, July 26-28, 2024). Saransk: SVMO Publ, 2024. - pp. 110-114. Available at: https://conf.svmo.ru/files/2024/papers/paper18.pdf. - Date of access: 12.11.2024.
Downsizing the Data Set – Resampling and Binning of Time Series and other Data Sets
September 16, 2019 Data Analytics
Data Sets are often too small. We do not have all data that we need in order to interpret, explain, visualize or use for training a meaningful model. However, quite often our data sets are too large.
Or, more specifically, they have higher resolution than is necessary or even than is desirable. We may have a timeseries with values for every other second, although meaningful changes do not happen on time scales shorter than 30 seconds or even much longer. We may have measurements every meter for a metric that does not vary meaningfully over distances of less than 50 meters.
Having data at too high a resolution is not a good thing. For one, a large data set may be quite unwieldy to work with. It is too big for our algorithms and equipment. Furthermore, high resolution
data may contain meaningless twitching, local noise that may impact our findings. And of course we may have values along a continuous axis, a floating point range that holds values with several
apparently significant digits that are not really significant at all. We cannot meaningfully measure temperature or distance in mili-degrees or micrometers. Ideally, we work with values at meaningful
values and only significant digits. When we are really only interested in comparison and similarity detection, we can frequently settle for even less specific values – as the SAX algorithm for
example implements.
In many cases, we (should) want to lower the resolution of our data set, quite possibly along two axes: the X-axis (time, distance, simply count of measurement) and the Y-axis (the signal value).
I will assume in this article that we work with data in Pandas (in the context of Jupyter Notebooks running a Python kernel). And I will show some simple examples of reducing the size of the dataset
without loss of meaningful information. The sources for this article are on GitHub: https://github.com/lucasjellema/data-analytic-explorations/tree/master/around-the-fort .
For this example, I will work with a data set collected by Strava, representing a walk that lasted for about one hour and close to 5 km. The Strava data set contains over 1300 observations – each
recording longitude and latitude, altitude, distance and speed. These measurements are taken about every 2-4 seconds. This results in a high res chart when plotted using plotly express:
fig = px.scatter(s, x='time', y='altitude', title='Altitude vs Distance in our Walk Around the Fort', render_mode='svg')
Here I have shown both a scatter and a line chart. Both contain over 1300 values.
For my purposes, I do not need data at this high resolution. In fact, the variation in walking speed is quite high within 30 second periods, but not in a meaningful way. I prefer to have values
smoothed over 30 minutes or longer. Note: I am primarily interested in altitude measurements, so let’s focus on that.
I will discuss three methods for horizontal resolution: resample for timeseries, DIY grouping and aggregating for any data series and PAA (piecewise aggregate approximation). Next, we will talk about
vertical resolution reduction; we will look at quantiles, equi-height binning and symbolic representation through SAX.
Horizontal Reduction of Resolution
When we are dealing with a time series, it is easy to change the resolution of the data set, simply by resampling on the Data Frame. Let’s say we take the average of the altitude over each 30 second
window. That is done as easy as:
a = dataFrame.resample('30S').mean()['altitude'].to_frame(name='altitude')
However, in this case the index of our data set is not actually a timestamp. One of the dimensions is time, another is distance. It seems most appropriate to sample the set of altitude values by
distance. Taking the altitude once every 25 meters (an average for all measurements in each 25 meter section) seems quite enough.
This can be done I am sure in several ways. The one I show here takes two steps:
• assign a distance window to each observation (into which 25 meter window does each observation go)
• what is the average altitude value for all observations in each window
The code for this:
distance_window_width = 25
s['distance_window'] = s['distance'].apply(lambda distance: distance_window_width*(round(distance/distance_window_width)))
And subsequently the aggregation:
d = s[['altitude','distance_window']].copy().groupby(s['distance_window']).mean()
In a chart we can see the effect of the reduction of the data resolution – first a line chart (with interpolation) then a bar chart that is a better representation of the data set as it currently
stands – small windows for which average values have been calculated.
At this point we have smoothed the curve – averaged out small fluctuations. Instead of taking the average, we could consider other methods of determining the value that represents a window – the mode is one option, the median another, and explicit exclusion of outliers yet another.
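For instance, switching the window aggregate from the mean to the median, or to a trimmed mean that ignores the most extreme values in each window, only takes a small change (an illustrative extension, not code from the original notebook):

# Median altitude per 25 meter distance window instead of the mean
d_median = s[['altitude', 'distance_window']].groupby('distance_window').median()

# Trimmed mean: ignore the lowest and highest 10% of values in each window before averaging
from scipy.stats import trim_mean
d_trimmed = (s.groupby('distance_window')['altitude']
               .apply(lambda values: trim_mean(values, proportiontocut=0.1)))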
PAA – Piecewise Aggregate Approximation
A popular way of reducing the horizontal resolution of data sets is PAA (piecewise aggregate approximation). In essence, it looks at data per window and calculates the value representing that window,
just as we have been doing with our simple averaging.
It is worthwhile to read through some of the PAA resources. Here I will just show how to leverage a Python library that implements PAA or how to create a PAA function (from code from such a library)
and invoke it for resolution reduction.
I have created a function paa – the code was copied from https://github.com/seninp/saxpy/blob/master/saxpy/paa.py.
#use PAA for lowering the data set’s resolution
# taken from https://github.com/seninp/saxpy/blob/master/saxpy/paa.py
def paa(series, paa_segments):
    """PAA implementation."""
    series_len = len(series)
    # check for the trivial case
    if (series_len == paa_segments):
        return np.copy(series)
    res = np.zeros(paa_segments)
    # check when we are even
    if (series_len % paa_segments == 0):
        inc = series_len // paa_segments
        for i in range(0, series_len):
            idx = i // inc
            np.add.at(res, idx, series[i])
            # res[idx] = res[idx] + series[i]
        return res / inc
    # and process when we are odd
    for i in range(0, paa_segments * series_len):
        idx = i // series_len
        pos = i // paa_segments
        np.add.at(res, idx, series[pos])
        # res[idx] = res[idx] + series[pos]
    return res / series_len
With this function in my Notebook, I can create a low-res data set with PAA like this (note that I have full control over the number of windows or segments the PAA result should have):
# to bring down the number of data points from 1300 to a much lower number, use the PAA algorithm like this:
e = paa(series=s['altitude'], paa_segments=130)
# create a Pandas data frame from the numpy.ndarray
de = pd.DataFrame(data=e[:],   # values
                  index=e[:])  # 1st column as index
# add a column x that has its values set to the row index of each row
de['x'] = range(1, len(de) + 1)
Vertical Reduction of Resolution
The altitude values calculated for each 25 meter distance window are on a continuous scale. Each value can differ from all other values and is expressed as a floating point number with many decimal
digits. Of course these values are only crude estimates of the actual altitude in real life. The GPS facilities in my smartphone do not allow for fine grained altitude determination. So pretending
the altitude for each horizontal window is known in great detail is not meaningful.
There are several ways of dealing with this continuous value range. By simply rounding values we can at least get rid of misleading decimal digits. We can further reduce resolution by creating a
fixed number of value ranges or bins (or value categories) and assigning each window to a bin or category. This enormously simplifies our data set to a level where calculations seem quite crude –
but are perhaps more honest. For making comparisons between signals and finding repeating patterns and other similarities, such a simplification is frequently not only justified but even a cause for
faster as well as better results.
A simple approach would be to decide on a small number of altitude levels – say six different levels – and assign each value to one of these six levels. Pandas has the qcut function that we can leverage (this assigns a quantile to each record, attempting to get equal numbers of records into each quantile, resulting in quantiles or bins that cover different value ranges):
number_of_bins = 6
d['altitude_bin'] = pd.qcut(d['altitude'], number_of_bins, labels=False)
The corresponding bar chart that shows all bin values looks as follows:
If we want to have the altitude value at the start of the bin to which an observation is assigned, here is what we can do:
number_of_bins = 6
d['altitude_bin'] = pd.qcut(d['altitude'], number_of_bins, labels=False)
categories, edges = pd.qcut(d['altitude'], number_of_bins, retbins=True, labels=False)
df = pd.DataFrame({'original_altitude': d['altitude'],
                   'altitude_bin': edges[1:][categories]},
                  columns=['original_altitude', 'altitude_bin'])
Instead of quantiles with each the same number of values, we can use bins that each cover the same value distance – say each 50 cm altitude. In Pandas this can be done by using the function cut
instead of qcut:
number_of_bins = 6
d['altitude_bin'] = pd.cut(d['altitude'], number_of_bins, labels=False)
Almost the same code, assigning bin index values to each record, based on bins that each cover the same amount of altitude.
In a bar chart, this is what the altitude bin (labeled 0 through 5; these are unitless labels) vs distance looks like:
You can find out the bin ranges quite easily:
number_of_bins = 6
d['altitude_bin'] = pd.cut(d['altitude'], number_of_bins)
Symbolic Representation – SAX
The bin labels in the previous section may look like measurements, being numeric and all. But in fact they are unitless labels. They could have been labeled A through F. They are ordered but
have no size associated with them.
The concept of symbolic representation of time series (and using that compact representation for efficient similarity analysis) has been researched extensively. The most prominent theory in this
field to date is called SAX – Symbolic Aggregate approXimation. It also assigns labels to each observed value – going about it in a slightly more subtle way than using equiheight bins or quantiles.
Check out one of the many resources on SAX – for example starting from here: https://www.cs.ucr.edu/~eamonn/SAX.htm.
To create a SAX representation of our walk around the fort is not very hard at all.
# imports assumed to come from the saxpy package used in this post (https://github.com/seninp/saxpy)
from saxpy.znorm import znorm
from saxpy.alphabet import cuts_for_asize
from saxpy.sax import ts_to_string
# how many different categories to use or how many letters in the SAX alphabet
alphabet_size = 7
# normalize the altitude data series
data_znorm = znorm(s['altitude'])
# use PAA for horizontal resolution reduction from 1300+ data points to 130 segments
# Note: this is a fairly slow step
data_paa = paa(data_znorm, 130)
# create the SAX representation for the 130 data points
sax_representation_altitude_series = ts_to_string(data_paa, cuts_for_asize(alphabet_size))
What started out as a set of 1300+ floating point values has now been reduced to a string with 130 characters (basically the set fits in 130 * 3 bits). Did we lose information? Well, we gave up on a
lot of fake accuracy. And for many purposes, this resolution of our data set is quite enough. Looking for repeating patterns for example. It would seem that “ddddddccccccccaaaabbb” is a nicely
repeating pattern. Four times? We did four laps around the fort!
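As a quick sanity check of that impression, one can count how often a short motif occurs in the SAX string, for example with a simple overlapping-substring count (the motif below is just an illustrative fragment of the pattern quoted above):

def count_overlapping(text, motif):
    # count possibly overlapping occurrences of motif in text
    count, start = 0, 0
    while True:
        idx = text.find(motif, start)
        if idx == -1:
            return count
        count += 1
        start = idx + 1

print(count_overlapping(sax_representation_altitude_series, 'ddddcc'))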
Here is the bar chart that visualizes the SAX pattern. Not unlike the previous bar charts – yet even further condensed.
The sources for this article are on GitHub: https://github.com/lucasjellema/data-analytic-explorations/tree/master/around-the-fort . | {"url":"https://technology.amis.nl/data-analytics/downsizing-the-data-set-resampling-and-binning-of-time-series-and-other-data-sets/","timestamp":"2024-11-03T12:59:53Z","content_type":"text/html","content_length":"103454","record_id":"<urn:uuid:f0e2cfb0-0cb5-494d-8c19-84ead51d6c4d>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00537.warc.gz"} |
1,978 research outputs found
We show a general formula of the one loop effective potential of the 5D SU(N) gauge theory compactified on an orbifold, $S^1/Z_2$. The formula shows the case when there are fundamental, (anti-)
symmetric tensor and adjoint representational bulk fields. Our calculation method is also applicable when there are bulk fields belonging to higher dimensional representations. The supersymmetric
version of the effective potential with Scherk-Schwarz breaking can be obtained straightforwardly. We also show some examples of effective potentials in SU(3), SU(5) and SU(6) models with various
boundary conditions, which are reproduced by our general formula. Comment: 22 pages; minor corrections; references added; typos corrected
We study an ${\cal N}=1$ supersymmetric Yang-Mills theory defined on $M^4\times S^1$. The vacuum expectation values for the adjoint scalar field in the vector multiplet, though important, have been overlooked
in evaluating one-loop effective potential of the theory. We correctly take the vacuum expectation values into account in addition to the Wilson line phases to give an expression for the effective
potential, and gauge symmetry breaking is discussed. In evaluating the potential, we employ the Scherk-Schwarz mechanism and introduce bare mass for gaugino in order to break supersymmetry. We also
obtain masses for the scalars, the adjoint scalar, and the component gauge field for the $S^1$ direction in case of the SU(2) gauge group. We observe that large supersymmetry breaking gives larger
mass for the scalar. This analysis is easily applied to the $M^4\times S^1/Z_2$ case. Comment: 12 pages, 1 figure
We study the dynamical symmetry breaking in the gauge-Higgs unification of the 5D ${\mathcal N}=1$ SUSY theory, compactified on an orbifold, $S^1/Z_2$. This theory identifies Wilson line degrees of
freedom as ``Higgs doublets''. We consider $SU(3)_c \times SU(3)_W$ and SU(6) models, in which the gauge symmetries are reduced to $SU(3)_c \times SU(2)_L \times U(1)_Y$ and $SU(3)_c \times SU(2)_L
\times U(1)_Y \times U(1)$, respectively, through the orbifolding boundary conditions. Quarks and leptons are bulk fields, so that Yukawa interactions can be derived from the 5D gauge interactions.
We estimate the one loop effective potential of ``Higgs doublets'', and analyze the vacuum structures in these two models. We find that the effects of bulk quarks and leptons destabilize the suitable
electro-weak vacuum. We show that the introduction of suitable numbers of extra bulk fields possessing the suitable representations can realize the appropriate electro-weak symmetry breaking.Comment:
15 pages, 4 figures; discussions on Higgs quartic couplings added
We propose a novel mechanism to generate a suitable baryon asymmetry from dark (hidden) sector. This is a Baryogenesis through a reverse pathway of the "asymmetric dark matter" scenario. In the
mechanism, the asymmetry of dark matter is generated at first, and it is partially transferred into a baryon asymmetry in the standard model sector. This mechanism enables us not only to realize the
generation of the baryon asymmetry but also to account for the correct amount of dark matter density in the present universe within a simple framework. Comment: 7 pages
In the case of two generation neutrinos, the energy-scale dependence of the lepton-flavor mixing matrix with Majorana phase can be governed by only one parameter r, which is the ratio between the
diagonal elements of neutrino mass matrix. By using this parameter r, we derive the analytic solutions to the renormalization group equations of the physical parameters, which are the mixing angle,
Majorana phase, and the ratio of the mass-squared difference to the mass squared of the heaviest neutrino. The energy-scale dependence of the Majorana phase is clarified by using these analytic
solutions. The instability of the Majorana phase arises in the same parameter region in which the mixing angle is unstable against quantum corrections. Comment: LaTeX2e, 9 pages, 6 figures
Neutrino-oscillation solutions for the atmospheric neutrino anomaly and the solar neutrino deficit can determine the texture of the neutrino mass matrix according to three types of neutrino mass
hierarchies as Type A: $m_1^{} \ll m_2^{} \ll m_3^{}$, Type B: $m_1^{} \sim m_2^{} \gg m_3^{}$, and Type C: $m_1^{} \sim m_2^{} \sim m_3^{}$, where $m_i$ is the $i$-th generation neutrino absolute
mass. The relative sign assignments of neutrino masses in each type of mass hierarchies play the crucial roles for the stability against quantum corrections. Actually, two physical Majorana phases in
the lepton flavor mixing matrix connect among the relative sign assignments of neutrino masses. Therefore, in this paper we analyze the stability of mixing angles against quantum corrections
according to three types of neutrino mass hierarchies (Type A, B, C) and two Majorana phases. Two phases play the crucial roles for the stability of the mixing angles against the quantum
corrections. Comment: LaTeX2e, 15 pages, 8 figures
We study an SU(2) supersymmetric gauge model in a framework of gauge-Higgs unification. Multi-Higgs spectrum appears in the model at low energy. We develop a useful perturbative approximation scheme
for evaluating effective potential to study the multi-Higgs mass spectrum. We find that both tree-massless and massive Higgs scalars obtain mass corrections of similar size from finite parts of the
loop effects. The corrections modify multi-Higgs mass spectrum, and hence, the loop effects are significant in view of future verifications of the gauge-Higgs unification scenario in high-energy
experiments. Comment: 32 pages; typos corrected and a few comments added, published version
Turn mph to kmh | Easy mph to kph calculator
Convert mile/hour to kilometer/hour
What is a mile/hour?
Speed units differ according to countries and their traditions and there are a lot of them. Thus, in the UK and the US, the mile/hour option is one of the most popular and common in everyday life. It
shows the number of miles that are covered during one hour by the moving object.
A mile, in this sense, is taken as 5280 feet, or 1609.344 meters; an hour is 60 minutes, or 3600 seconds. Therefore, 1 mph corresponds to 0.44704 m/s, or 1.609344 km/h.
The SI system uses meters/second, while everyday speed limits are usually given in kilometers/hour. Still, some English-speaking countries, as well as the Bahamas, Belize, Samoa, and parts of the Caribbean,
prefer mi/h for this. In addition, this unit is popular for describing the speed of the ball in sports such as cricket, baseball, and tennis.
What is a kilometer/hour?
When talking about speed, we always use some units. They make values easy to understand, and we also need them for setting limits for public transport and vehicles. For instance, some countries
have a rule that cars must not move faster than 16.7 meters/second, or 60 kilometers/hour. The second number seems more convenient and understandable.
Therefore, km/h is often used as a non-SI speed unit. When someone says that an average walking speed is 5 km/h, it means that in 1 hour the pedestrian covers a distance of 5 km.
For comparison, high-speed trains move at more than 500 km/h, and airliners can reach as much as 900 km/h.
To move back to the standard unit of meters/second, one just has to divide the km/h value by 3.6. Therefore, it is easy to switch between different speed units and use those that are suitable for the situation at hand.
How to Convert mile/hour to kilometer/hour
Various methods can convert mph to km/h. Here are a few strategies:
• By formula: knowing that 1 mile ≈ 1.60934 km, use km/h = mph × 1.60934. Multiply the speed in mph by 1.60934 to get the speed in km/h.
• Using software: conversions can be done in programs like Microsoft Excel or Google Sheets, which have built-in functions for this kind of arithmetic.
• Using a table: you can keep a table that lists speeds in mi/h alongside their corresponding values in km/h.
To convert mi/h to km/h, the formula km/h = mi/h × 1.609344 is used; substituting the mi/h value gives the answer from the Speed Converter.
Example: convert 15 mi/h to km/h:
15 mi/h × 1.609344 ≈ 24.14 km/h
How many miles/hour in kilometers/hour
1 km/h ≈ 0.621 mph. This means that to convert a speed from km/h to mi/h, multiply the value in km/h by about 0.621; going the other way, from mi/h to km/h, multiply by about 1.609.
For example, 50 mph ≈ 80.47 km/h.
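As a quick illustration, here is the same conversion in a few lines of Python (the constant is exact, since a mile is defined as 1609.344 m):
MPH_TO_KMH = 1.609344  # exact conversion factor

def mph_to_kmh(mph: float) -> float:
    return mph * MPH_TO_KMH

def kmh_to_mph(kmh: float) -> float:
    return kmh / MPH_TO_KMH

print(mph_to_kmh(15))   # ~24.14
print(mph_to_kmh(50))   # ~80.47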
mile/hour (mi/h) kilometer/hour (km/h)
0.01 mi/h 0.01609344 km/h
0.1 mi/h 0.1609344 km/h
1 mi/h 1.609344 km/h
2 mi/h 3.218688 km/h
3 mi/h 4.828032 km/h
5 mi/h 8.04672 km/h
10 mi/h 16.09344 km/h
20 mi/h 32.18688 km/h
50 mi/h 80.4672 km/h
100 mi/h 160.9344 km/h
1000 mi/h 1609.344 km/h
Popular Unit Conversions Speed
The most used and popular units of speed conversions are presented for quick and free access.
Convert mile/hour to Other Speed Units | {"url":"https://oneconvert.org/unit-converters/speed-converter/mph-to-kph","timestamp":"2024-11-06T05:58:54Z","content_type":"text/html","content_length":"189016","record_id":"<urn:uuid:8f02b467-7b6a-49df-82a7-99edf830a8bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00189.warc.gz"} |
How do I graph the hyperbola with the equation 4x^2−y^2−16x−2y+11=0? | Socratic
How do I graph the hyperbola with the equation #4x^2−y^2−16x−2y+11=0#?
1 Answer
Convert this equation into the standard form of a horizontal hyperbola, $\frac{{\left(x - 2\right)}^{2}}{1} - \frac{{\left(y + 1\right)}^{2}}{4} = 1$
This hyperbola has its center at (2,-1) and its vertices at (1,-1) and (3,-1) on its horizontal transverse axis.
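The standard form follows by completing the square (a quick worked check):
$4x^2 - y^2 - 16x - 2y + 11 = 0$
$4(x-2)^2 - 16 - (y+1)^2 + 1 + 11 = 0$
$4(x-2)^2 - (y+1)^2 = 4$
$\frac{(x-2)^2}{1} - \frac{(y+1)^2}{4} = 1$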
Draw its asymptotes y+1 =2(x-2) and y+1=-2(x-2). The curve can be sketched now as shown below.
Alternatively, draw a rectangle 2 units wide and 4 units tall with its center at (2,-1). Extend its diagonals on both sides; these are the asymptotes of the hyperbola. Sketch the curve as
shown below.
Impact of this question
3507 views around the world | {"url":"https://socratic.org/questions/how-do-i-graph-the-hyperbola-with-the-equation-4x-2-y-2-16x-2y-11-0-0","timestamp":"2024-11-07T00:44:09Z","content_type":"text/html","content_length":"33953","record_id":"<urn:uuid:2525e3cb-6463-485a-af18-618d59e21ad6>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00177.warc.gz"} |
Comic strip on integers Storyboard by 5ec297d2
Comic strip on integers
Storyboard Text
• I think so. I understand subtraction but not addition, multiplication, or division.
• Gertha I have a problem. We learned about integers in school today but I have absolutely no idea what we learned.
• Alright, let's start by telling you that 0 is neither negative nor positive. Okay?
• Of course you don’t remember. Did you learn all operations?
• Sounds simple enough, what about multiplying and dividing?
• That’s so cool! Now please teach me or I will fail
• Slow down there, let me explain a problem to you. Let’s see, -5 + -6? -5 and -6 have the same sign, so the answer will be -11. Do you get it?
• I will give you some key information and then help you with problems. When you are adding integers, if the numbers have the same signs then you keep that sign.
• So I know that you multiply normally, but I don’t know what to do with the signs.
• I think I do! Can we talk about multiplication and division now?
• Okay so, in multiplication you multiply the numbers normally, but if they have the same signs the answer will be positive, and if they have different signs the answer is negative. Much simpler
than addition and subtraction, right?
• Sure! What do you know so far?
• Yes! Multiplication seems pretty easy! What about division?
• YES. Okay, so tell me if I’m wrong: -7 x -8 = 56 and -7 x 8 = -56?
• Since you found multiplication simple, you’re in luck because division follows the exact same rules! Obviously you divide not multiply, but the rest is the same!
• Nope, you got it right!
• I think I’m ready, but yeah I want to review it tomorrow too. Thank you so much for the help Gertha, you are a great friend!
• Yay! Wait, before you leave, let me know if I'm good with division and addition too: -9 / 3 = -3 and -8 + -9 = -17?
• Precisely. Do you think you are ready for school tomorrow? You can review this with me before class if you want to.
• No problem Debrah, anytime.
• You are #cancelled and no longer my friend. Leave please!
• So is your mom coming to pick you up? You can use my phone to call her if you need to.
• She said she would but I think she forgot me. Yeah I'll call her. OH MY GOSH I'M SO SORRY I DROPPED IT
• Fine, bye. | {"url":"https://www.storyboardthat.com/storyboards/5ec297d2/comic-strip-on-integers","timestamp":"2024-11-02T21:08:53Z","content_type":"text/html","content_length":"427968","record_id":"<urn:uuid:b9aa9edd-e9f8-43c9-a100-b79a58533a8c>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00430.warc.gz"} |
Price Jump Prediction in a Limit Order Book
Specialists’ spreads are widest at the open, narrow until late morning, and then level off. The U-shaped intraday pattern of spreads largely reflects the intraday variation in spreads established by
limit-order traders. Lastly, the intraday variation in limit-order spreads is significantly related to the intraday variation in limit-order placements and executions. In such a case, traders can set
a certain price level at which they want to buy and sell the security.
The newly integrated Order Book and revamped Bid and Ask order execution provide users with more information and control when trading. For example, say that you buy a share of Google for $1,000 and
set a trailing-stop up at 10%. The trailing stop will sell your position if the price reaches $900, but if the price reaches $1,100, the new trailing stop will be $990 (10% below the $1,100).
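A minimal sketch of that trailing-stop logic, using the example's numbers (illustrative only – real brokers track this server-side):
def update_trailing_stop(price: float, highest: float, trail_pct: float = 0.10):
    # return the new highest price seen and the resulting stop level
    highest = max(highest, price)        # the reference high only ratchets upward
    stop = highest * (1.0 - trail_pct)   # the stop trails the high by trail_pct
    return highest, stop

# bought at $1,000 with a 10% trailing stop
highest, stop = update_trailing_stop(1000.0, 1000.0)   # stop = 900.0
highest, stop = update_trailing_stop(1100.0, highest)  # stop = 990.0
print(stop)  # 990.0 – a fall back to this level would trigger the sell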
Darbellay G., Wuertz D. Entropy as a tool for analyzing statistical dependences in financial time series. Gençay R., Gradojevic N. Private information and its origins in an electronic foreign
exchange market. The mutual information measured between layers of the order book for all of the TA-35 stocks. For example, the top left cell shows the MI between layers 1 and 2 for ALHE stock. We
also extended Student’s t-test for the mean of paired samples to all of the TA-35 stocks.
For example, knowing the prices and the volume of orders behind those prices can indicate which direction or trend the underlying security may move. The trader initiating the transaction is said to
demand liquidity, and the other party to the transaction supplies liquidity. Liquidity demanders place market orders and liquidity suppliers place limit orders. For a round trip the liquidity
demander pays the spread and the liquidity supplier earns the spread. All limit orders outstanding at a given time (i.e. limit orders that have not been executed) are together called the Limit Order
Book. However, on most exchanges, such as the Australian Securities Exchange, there are no designated liquidity suppliers, and liquidity is supplied by other traders. On these exchanges, and even on
NASDAQ, institutions and individuals can supply liquidity by placing limit orders. Like the bid-ask spread, the order book depth is a dimension of liquidity.
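To make the spread, mid-price and depth concrete, here is a rough sketch over a hypothetical snapshot of a book (the price levels and sizes are invented for the example):
# hypothetical order book snapshot: (price, quantity), best levels first
bids = [(99.5, 200), (99.4, 350), (99.3, 500)]
asks = [(99.7, 150), (99.8, 400), (99.9, 600)]

best_bid, best_ask = bids[0][0], asks[0][0]
spread = best_ask - best_bid                                # bid-ask spread
mid_price = (best_ask + best_bid) / 2.0                     # mid-price
depth = sum(q for _, q in bids) + sum(q for _, q in asks)   # total visible size

print(spread, mid_price, depth)  # ~0.2, 99.6, 2200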
What is difference between ask and bid?
The term ‘bid’ refers to the highest price a buyer will pay to buy a specified number of shares of a stock at any given time. The term ‘ask’ refers to the lowest price at which a seller will sell the
stock. The bid price will almost always be lower than the ask or “offer,” price.
Cai S.M., Zhou P.L., Yang H.J., Yang C.X., Wang B.H., Zhou T. Diffusion entropy analysis on the scaling behavior of financial markets. We see a high statistical significance for the hypothesis that
the MI is higher for the deepest layers vs. the uppermost layers. This significance exists across all of the three configurations of the order book snapshots. After completing the shuffling described
previously, we counted the number of times that the MI calculation on the shuffled data was higher than the one calculated with real data. In the shuffled data, the MI was far smaller, yielding a
very low p-value. Table 3 contains the results of our analysis on shuffled data, suggesting that our findings were statistically significant. Figure 4 shows the mutual information between different
layers for each of the five stocks when calculated after every transaction. As mentioned above, we also ran the same analysis with a lag of two and three transactions; see Figure 5a,b, respectively.
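The study's exact MI estimator is not reproduced here, but a simple binned mutual-information calculation between two order-book layers, together with the shuffle baseline described above, could look roughly like this (the series and bin count are stand-ins):
import numpy as np
from sklearn.metrics import mutual_info_score

def binned_mi(x, y, bins=16):
    # estimate mutual information (in nats) by discretizing both series
    x_binned = np.digitize(x, np.histogram_bin_edges(x, bins))
    y_binned = np.digitize(y, np.histogram_bin_edges(y, bins))
    return mutual_info_score(x_binned, y_binned)

# layer_1 and layer_2 would be e.g. per-transaction prices at two depths of the book
rng = np.random.default_rng(0)
layer_1 = rng.normal(size=5000)
layer_2 = 0.6 * layer_1 + rng.normal(size=5000)             # stand-in correlated layer

mi_real = binned_mi(layer_1, layer_2)
mi_shuffled = binned_mi(layer_1, rng.permutation(layer_2))  # shuffling destroys the dependence
print(mi_real, mi_shuffled)  # the shuffled MI should be close to zero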
Top 8 Tools to Study the Crypto, Stock, and Commodity Markets
It is displayed as a vertical line within the liquidity bar at the relevant price level. Its position within the bar is defined by the ratio of the order size to the total liquidity size at this
level. The size of the order must be above the threshold percentage of the total liquidity at the relevant price level. If activated, each price level on the ask side displays the liquidity available
at this level plus the liquidity available at all the levels below it all the way down to the best ask. Similarly, on the bid side, each level displays the liquidity available at this level plus the
liquidity available on all levels above it up to the best bid. Cryptocurrencies and derivative instruments based on cryptocurrencies are complex instruments and come with a high risk of losing money
rapidly due to leverage and extreme asset volatility. You should carefully consider whether you fully understand how cryptocurrency trading works and whether you can afford to take the high risk of
losing all your invested money. If a trader has a clear understanding of the concepts of Bid and Ask, they’ve already taken a big step towards understanding how financial markets work. Because Bid
and Ask orders clearly illustrate the key market principle of supply and demand.
What does bid Size 2 mean?
The bid and ask size are visible on what's known as a ‘Level 1’ screen. Serious traders prefer access to a ‘Level 2’ screen, which shows all the shares available at various bid and ask prices, not
just the ‘best’ prices. For example, a Level 2 screen might show bid prices of $152 x 800, $151.99 x 700 and $151.88 x 950.
For example, in the case of a limit trade book, the trader can set a price level for buying or selling a security. When the price hits that threshold, an order gets automatically fulfilled. In every
trading day, automated or manual high-frequency trading usually happens at the market open since, in this period, prices change quickly and variance is high, which can cover
trading fees. Once traders place a limit order, the order may have a high probability of being filled; if the computed fill probabilities differ from the real ones, they have to cancel the previous
orders and wait for the next execution opportunity.
A «maker» is a trader who adds liquidity to the order book by placing a limit order that is not matched immediately with an existing order on the order book. The coefficients of OEI are much higher
in actively trading time periods such as the very open moment of market or near closing time of market. And Table 2 shows the R-squared, values, and coefficient of the factor in model , respectively.
The R-squared of model is nearly the same as the R-squared in July 2018. But the R-squared of model and coefficients of increase sharply compared with previous ones in July 2018. Table 7 shows that
the values are all significant at 0.1 threshold. And the R-squared increases by 34.3%, 26.8%, and 35.5%, respectively, in model compared to those in model . Values for coefficients of from model for
8 different trading periods.
The area under the ROC curve is a good measure to measuring the model prediction quality. The AUC value is equal to the probability that a classifier will rank a randomly chosen positive instance
higher than a randomly chosen negative one. We see that the linearity of the conditional probability on variables and in formula allows us to capture the contribution of in the prediction. This is an
open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited. FREE INVESTMENT BANKING COURSELearn the foundation of Investment banking, financial modeling, valuations and more. Purchase OrdersA Purchase Order serves as a legal document between buyer and
seller, wherein, the buyer sends this contract that details the goods and services, date of delivery, payment terms as per the contract etc. CryptocurrencyCryptocurrency refers to a technology that
acts as a medium for facilitating the conduct of different financial transactions which are safe and secure. It is one of the tradable digital forms of money, allowing the person to send or receive
the money from the other party without any help of the third party service. You can use take-profit orders to set a target profit price on a long or short position. You can define the desired profit
as an absolute price or as a percentage.
An order book takes all the pricing information of these different trades and aggregates them according to price and volume for you to analyze while making investment decisions. What drives the
sensitivity of limit order books to company announcement arrivals? In our first set of experiments, we have applied two supervised machine learning methods, as described in subsections 5.1
and 5.2, on a dataset that does not include the auction period. Since there is not a widely adopted experimental protocol for these datasets, we provide information for the ten different label
scenarios under the three normalization set-ups. Authors provide a threshold which is based on 250 events per 10-min sample interval. In this Section, we describe in detail our dataset collected in
order to facilitate future research in LOB based HFT. We start by providing a detailed description of the data in subsection 3.1. Data processing steps followed in order to extract Message Books and
LOBs from this data are described in subsection 3.2.
Bid Ask Size: Understanding Stock Quote Numbers – Investopedia
Posted: Sat, 25 Mar 2017 07:30:21 GMT [source]
Neither the seller nor the buyer wants to give ground on the price. The spread between the Bid and Ask prices increases, and liquidity decreases. Makes trade volume, maintains spread and liquidity,
set price range, and builds live-like dynamic order book. The trading activity dataset, which was provided directly by TASE, was comprised of one text file for all order submissions and another text
file for executed transactions. Table 1 shows several summary statistics for each of the five securities. In this paper, we address a more basic question—how much new information is contained in the
deep layers, if at all? We decided to look at this question in the context of smaller exchanges.
Market liquidity
Understanding the relationship between Bid and Ask also helps traders analyse the market and forecast price reversals. When looking at StormGain’s Order Book, which displays Bid and Ask orders with
Recent Trades, users can analyse the price action. When it comes to trading, the Spread should be looked at because the https://www.beaxy.com/glossary/first-mover-advantage-fma/ wider it is, the more
additional costs the trader will incur. It’s better to trade liquid assets and use pending orders to avoid those extra costs. Thanks to feedback from our clients and testing new designs and tools,
StormGain has implemented important changes in its update to its web platform and mobile app.
• And the correlations of OEI are very high that may be exploited to predict the price move in the next time window for doing high-frequency trading.
• Finally, the most common source of data is through platforms requiring a subscription fee, like those in kercheval2015modelling , li2016empirical , sirignano2016deep .
• In practical high-frequency trading, we find that analysis of actions on order book from time dimension is critical for HFT especially in the period of intensive trading activity.
Level II data goes beyond showing just the best bid and best ask on the market by showing the full depth of orders on the market, including aggregated quantities at the individual bids and asks. The
same widened spread can also indicate the risk perceived in relation to volatility, as market makers tend to hedge their positions to protect themselves against price swings. When you observe an
order book for a couple of seconds, you’ll see the book is dynamic with numbers constantly moving and updating in real-time. When you see the numbers changing, it means that the buy and sell orders
are either cancelled by the traders or they are filled through a process called matchmaking. The left column shows the Market Maker, Exchange and ECN best Bid quotes with the number of shares
available at a particular bid. Stock symbols and price and volume data shown here and in the software are for illustrative purposes only.
The authors contend that in such scenarios, arbitrage traders are likely to be more successful by using liquidity measures. Kozhan and Tham also research arbitrage traders and found that factors such
as the number of market participants as well as speed have a substantial impact on execution risk, including resulting profits and/or loss from trades. Thus, different aspects of the market may come
into play for different trading scenarios. Order or continuous books provide open offers and order history for a particular asset at all price levels and total volumes. One can find the electronic or
manual sell and buy orders for stocks, bonds, derivatives, currencies, futures, cryptocurrencies on the bottom or top or the right and left of the book, respectively, depending on the exchange.
Extracting information from the ITCH flow and without relying on third-party data providers, we analyze stocks from different industry sectors for ten full days of ultra-high-frequency intra-day
data. The data provides information regarding trades against hidden orders.
They perform variable selection to find which factors best explain the transaction cost of the split order, and apply an adjusted ordinal logistic method for classifying ex ante transaction costs into groups. Often these
agreements will include obligations to be actively quoting some minimum percentage of the time, on both sides of the book (bid & offer). Quoting non-marketable prices is one way to meet these
obligations and retain one’s status as a designated market maker, while avoiding execution risk in inclement or unfavorable markets.
Commentary: The Cardinals must make a serious bid for a generational talent like Juan Soto – KSDK.com
Posted: Wed, 20 Jul 2022 23:58:00 GMT [source]
If you would like to buy a share, and the current lowest ask on the order book is $12, then you can buy a share at $12. If you input a bid price higher than $12, your trade will still execute at $12.
The probability distribution of a single stock is studied by analyzing a database documenting every trade for all the securities listed in three major US stock markets, for the two year period Jan
1994-Dec 1995. Palguna and Pollak palguna2016mid use nonparametric methods on features derived from LOB which are incorporated into order execution strategies for mid-price prediction. In the same
direction, Kercheval and Zhang kercheval2015modelling employ a multi-class SVM for mid-price and price spread crossing prediction. Han et al. han2015machine base their research on Kercheval’s and
Zhang’s kercheval2015modelling multi-class SVM for mid-price movement prediction. More precisely, they compare multi-class SVM to decision trees using bagging for variance reduction. Sandoval and
Hernandez sandoval2015computational create a profitable trading strategy by combining Hierarchical Hidden Markov Models where they considered wavelet based LOB information filtering. In their work,
they consider also a two-layer feed-forward neural network in order to classify the upcoming states but they report limitation in the neural network in terms of the volume of the input data.
#Cryptocoach Day 139
Bid-ask spread!
✅It is the difference b/w the highest bid price & the lowest ask price of an order book
✅This spread is created by the market makers or broker liquidity providers to fill the gap b/w the limit orders set by the buyers & sellers
— IndiaCoin (@indiacoin15) January 31, 2022
If the price increases, the stop follows the market price by this specified amount. But if the price drops, this lower specified amount will stay the same. This mechanism allows one to lock in
higher profits and limit the amount of loss. Seasoned traders know the value of watching more than just the price action. They also track the
traded volume at each price for more insight into the behavior of the market. The Depth parameter decides how many prices up and down the ladder are taken into the calculation. My default is 10,
giving me the volumes of the ten best bids and asks. Fiedor P. Networks in financial markets based on the mutual information rate.
The main disadvantage of sampling HFT data uniformly is that the trader is losing vital information. Events are coming randomly, where inactive periods can vary from few milliseconds to several
minutes or hours. In our work, we overcome this drawback by considering the information based on events inflow rather than equal time sampling. One more example of data that is available only for
academic purposes is brogaard2014high . The dataset contains information regarding timestamps, price and buy-sell side among others, but no other details related to daily events and available
feature vectors.
In this case, we have chosen the Binance exchange, with the BTC/USDT pair and, therefore, the Atani order book shows us the information of this particular exchange and cryptocurrency pair. This
section is available in the Advanced and Pro trading experiences of Atani. Outstanding offers to buy or sell are stored in a queue and filled in a priority sequence, by price and time of entry. If
you’re placing a buy order for 0.3 BTC at $9500, the information recorded in the order book shows the price at the full unit (1 Bitcoin at $9500), together with the total amount of crypto in demand
(0.3 Bitcoin). The order book provides you with the insights you need to make an informed decision and placing an order with a fair chance of making a profit. The data available from the order book
gives you an “under-the-hood” look at a market’s structure and dynamics. If Market Maker ABCD is on the bid at 65.20, but backs off to 65.17, a down arrow will appear at the new price level. For
Listed equities each line shows the Exchange with its Bid/Ask price and the number of actual shares available on the specialist order book or ECN book. Please note that investing in cryptocurrency
assets carries risks in addition to the opportunities described above. This material does not constitute investment advice, nor is it an offer or solicitation to purchase any cryptocurrency assets. | {"url":"https://cronicasonora.cl/2022/08/25/price-jump-prediction-in-a-limit-order-book/","timestamp":"2024-11-08T17:37:46Z","content_type":"text/html","content_length":"226139","record_id":"<urn:uuid:f75a2cd2-e10c-4a32-a952-8aa3ebd7d7cb>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00445.warc.gz"}
Detecting outliers using the Mahalanobis distance with PCA in Python
Detecting outliers in a set of data is always a tricky business. How do we know a data point is an outlier? How do we make sure we are detecting and discarding only true outliers and not
cherry-picking from the data? Well, all of these are rhetorical questions, and we can’t obviously give a general answer to them. We can however work out a few good methods to help us make sensible
Today we are going to discuss one of these good methods, namely the Mahalanobis distance for outlier detection. The aficionados of this blog may remember that we already discussed a (fairly involved)
method to detect outliers using Partial Least Squares. If you want to refresh your memory read this post: Outliers detection with PLS.
What we are going to work out today is instead a (simpler) method, very useful for classification problems. The PLS-based method is great when you have the primary reference values associated with
your spectra (the “labels”), but can’t be used for unlabelled data.
Conversely, Principal Components Analysis (PCA) can be used also on unlabelled data – it’s very useful for classification problems or exploratory analysis. Therefore we can use PCA as a stepping
stone for outliers detection in classification. For a couple of our previous posts on PCA check out the links below:
PCA score plots of NIR data
For this tutorial, we are going to use NIR reflectance data of fresh plums acquired from 1100 to 2300 nm with steps of 2 nm. The data is available for download at our Github repository. Here’s how
the data look like:
And here’s the code required to load and plot the data.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Absorbance data, collected in the matrix X
data = pd.read_csv('./data/plums.csv').values[:,1:]
X = np.log(1.0/data)
wl = np.arange(1100,2300,2)
# Plot the data
fig = plt.figure(figsize=(8,6))
with plt.style.context(('ggplot')):
    plt.plot(wl, X.T)
    plt.xlabel('Wavelength (nm)')
    plt.ylabel('Absorbance spectra')
    plt.show()
Now it’s time to run a PCA decomposition of these data and produce a score plot with the first two principal components.
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
# Define the PCA object
pca = PCA()
# Run PCA on scaled data and obtain the scores array
T = pca.fit_transform(StandardScaler().fit_transform(X))
# Score plot of the first 2 PC
fig = plt.figure(figsize=(8,6))
with plt.style.context(('ggplot')):
    plt.scatter(T[:, 0], T[:, 1], edgecolors='k', cmap='jet')
    plt.title('Score Plot')
So far so good. We are now going to use the score plot to detect outliers. More precisely, we are going to define a specific metric that will enable us to identify potential outliers objectively. This
metric is the Mahalanobis distance.
But before I can tell you all about the Mahalanobis distance however, I need to tell you about another, more conventional distance metric, called the Euclidean distance.
Euclidean distance for score plots
The Euclidean distance is what most people call simply “distance”. That is the conventional geometrical distance between two points. Consider the score plot above. Pick any two points. The distance
between the two (according to the score plot units) is the Euclidean distance.
I know, that’s fairly obvious… The reason why we bother talking about Euclidean distance in the first place (and incidentally the reason why you should keep reading this post) is that things get more
complicated when we want to define the distance between a point and a distribution of points. In the good books, this is called “multivariate” distance.
This is the whole business about outliers detection. We define an outlier in a set of data as a point which is “far” (according to our distance metric) from the average of that set. Again, look at
the score plot above. I bet you can approximately pinpoint the location of the average (or centroid) of the cloud of points, and therefore easily identify the points which are closer to the centre
and those sitting closer to the edges.
This concept can be made mathematically precise. The Euclidean distance between a point and a distribution is given by $z = (x – \mu)/ \sigma$ where $x$ is the point in question, $\mu$ is the mean
and $\sigma$ the standard deviation of the underlying distribution. $\sigma$ is there to guarantee that the distance measure is not skewed by the units (or the range) of the principal components.
If you look closely at the axes of the score plot above, you’ll notice that PC1 ranges roughly between -40 and 60, while PC2 between (roughly) -12 and 12. If we drew the score plot using the correct
aspect ratio, the cloud of point would squash to an ellipsoidal shape. In fact let’s redraw the score plot just so.
fig = plt.figure(figsize=(8,6))
with plt.style.context(('ggplot')):
    plt.scatter(T[:, 0], T[:, 1], edgecolors='k', cmap='jet')
    plt.xlim((-60, 60))
    plt.ylim((-60, 60))
    plt.xlabel('PC1')
    plt.ylabel('PC2')
    plt.title('Score Plot')
    plt.show()
By normalising the measure by the standard deviation, we effectively normalise the range of the different principal components, so that the standard deviation on both axis becomes equal to 1.
Finally, to add another layer of complication, we can generalise the Euclidean distance to more than two principal components. Even if we can’t visualise it, we can conceive of a score plot in, say,
5 dimensions. If for instance we decide to use 5 principal components we can calculate the Euclidean distance with this neat piece of code
# Compute the euclidean distance using the first 5 PC
# (each component is standardised by its own mean and variance)
euclidean = np.zeros(X.shape[0])
for i in range(5):
    euclidean += (T[:,i] - np.mean(T[:,i]))**2/np.var(T[:,i])
This code calculates the Euclidean distance of all points at once. Better still, we can use the Euclidean distance (in 5D!) to colour code the score plot. Take a look.
colors = [plt.cm.jet(float(i)/max(euclidean)) for i in euclidean]
fig = plt.figure(figsize=(8,6))
with plt.style.context(('ggplot')):
    plt.scatter(T[:, 0], T[:, 1], c=colors, edgecolors='k', s=60)
    plt.xlabel('PC1')
    plt.ylabel('PC2')
    plt.xlim((-60, 60))
    plt.ylim((-60, 60))
    plt.title('Score Plot')
    plt.show()
As you can see, the points towards the edges of along PC1 tends to have larger distances. More or less as expected.
There is however a problem lurking in the dark. Here’s where we need the Mahalanobis distance to sort it out.
Mahalanobis distance for score plots
In general there may be two problems with the Euclidean distance. The first problem does not apply to here, but it might exist in general, so I better mention it. If there happened to be a
correlation between the axes (for instance if the score plot ellipsoid was tilted at an angle) that would affect the calculation of the Euclidean distance. In practice Euclidean distance puts more
weight than it should on correlated variables.
The reason for that can be easily explained with an example. Suppose we had two points that were exactly overlapping (that's complete correlation). Clearly adding the second point doesn't add
any information to the problem. The Euclidean distance however has no way of knowing those two points are identical, and will essentially count the same data twice. This would put excessive weight on
the points in question. The problem is somewhat reduced when there is partial correlation, nevertheless it is something to be avoided in general.
The major problem with the approach above is in the calculation of mean and standard deviation. If we really had outliers in our data, they would definitely skew the calculation of mean and standard
deviation. Remember, the outliers are points that do not belong to the distribution. They corresponds to bad measurements (or bad samples) which are not representative of the real distribution. This
is why we want to discard them!
However, in a classic chicken and egg situation, we can’t know they are outliers until we calculate the stats of the distribution, except the stats of the distribution are skewed by outliers!
The way out of this mess is the Mahalanobis distance.
Its definition is very similar to the Euclidean distance, except that each squared difference is weighted by the inverse of the covariance matrix of the data: $D_M(x) = \sqrt{(x - \mu)^T \Sigma^{-1} (x - \mu)}$. The details of the
calculation are not really needed, as scikit-learn has a handy function to calculate the Mahalanobis distance based on a robust estimation of the covariance matrix. To learn more about the robust
covariance estimation, take a look at this example. The robust estimation takes care of the potential presence of outliers and it goes like this.
from sklearn.covariance import EmpiricalCovariance, MinCovDet
# fit a Minimum Covariance Determinant (MCD) robust estimator to data
robust_cov = MinCovDet().fit(T[:,:5])
# Get the Mahalanobis distance
m = robust_cov.mahalanobis(T[:,:5])
Again, we’ve done the calculation in 5D, using the first five principal components. Now we can colour code the score plot using the Mahalanobis distance instead.
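The original post shows the resulting figure; a plotting snippet analogous to the earlier one (same style, with the colours now driven by the Mahalanobis distance m) would presumably look like this:
colors = [plt.cm.jet(float(i)/max(m)) for i in m]
fig = plt.figure(figsize=(8,6))
with plt.style.context(('ggplot')):
    plt.scatter(T[:, 0], T[:, 1], c=colors, edgecolors='k', s=60)
    plt.xlabel('PC1')
    plt.ylabel('PC2')
    plt.xlim((-60, 60))
    plt.ylim((-60, 60))
    plt.title('Score Plot')
    plt.show()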
There is some notable difference between this and the previous case. Some of the points towards the centre of the distribution, seemingly unsuspicious, have indeed a large value of the Mahalanobis
distance. This doesn’t necessarily mean they are outliers, perhaps some of the higher principal components are way off for those points. In any case this procedure would flag potential outliers for
further investigation.
Wrapping up, here’s a fairly unbiased way to go about detecting outliers in unlabelled data. Hope you found it useful. If you’d like to follow along and need the data just give us a shout.
Thanks for reading and until next time!
About The Author
Daniel Pelliccia
Physicist and entrepreneur. Founder of Instruments & Data Tools, specialising in custom sensors and analytics. Founder of Rubens Technologies, the crop intelligence system. | {"url":"https://nirpyresearch.com/detecting-outliers-using-mahalanobis-distance-pca-python/","timestamp":"2024-11-10T15:34:46Z","content_type":"text/html","content_length":"166547","record_id":"<urn:uuid:2ce4f816-50b6-41de-a53b-0fbdc4442a80>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00768.warc.gz"} |
Why do American Students Tend to Wiff on Fractions and Negative Numbers?
The short answer is they do not understand them. There is no reason to believe they do not understand what they have been taught about them.
School mathematics instruction starts by laying out number facts and arithmetic rules. Both the facts and rules are anchored in physical objects. This can work because the numbers and the arithmetic
are based on counting discrete objects, thereby providing a connection of Natural Number arithmetic to a physical activity available to the youngest student.
Fractions are in the Rational Number system, negative numbers are in the system of Integer Numbers. Both Integers and Rationals are constructed with Natural Numbers; they are not extensions of them.
A negative number is not the “opposite” of a Natural Number. A Rational number is not a “part” of a Natural number. Natural Numbers are in one number system, Integers in a second, and Rationals in a
third; A number in one of these number systems is not embedded in another number system. Instruction suggesting otherwise misleads the student.
The meaning of a number in a number system has long since been abstracted out from physical properties. A number is neither hot nor cold, long nor short, big nor small, ...Properties of numbers and
their arithmetic reside in the system to which they belong. Numbers are/were attached to physical entities by a person using the numbers; this includes number lines.
Students can discover the Natural Number system; and then construct the Integer and Rational number systems from it. Construction is specified in a drawing, showing the meaningful positions of slots
for Natural Numbers. The specification is a “particularization” of the “generalization” in the abstraction of “difference” for two Natural numbers.
• An Integer is the signed additive difference of two Natural Numbers.
• A Rational Number is the signed multiplicative difference of two Natural Numbers.
Additive and multiplicative differences are explained in the 9/9/17 post of this blog. This is all discussed in depth in the link at the bottom of this post.
The meaning of numbers and arithmetic is at their conceptual level. Meaning determines how properties and relations are expressed in problem formulation. For the most part computers and estimation
can take it from there.
At some point today’s arithmetic instruction must turn toward conception as the basis for understanding and using mathematics. Now this point is generally delayed until well after the student has
been led far down a dead-end path.
1. I am unable to access the link "Constructing Number System for School Arithmetic." Rob
2. Good Post. | {"url":"http://fuller.fullermath.org/2017/09/why-do-american-students-tend-to-wiff.html","timestamp":"2024-11-07T09:00:42Z","content_type":"text/html","content_length":"101426","record_id":"<urn:uuid:de687da7-2b6a-466c-92d2-f3666d52e707>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00397.warc.gz"} |
RANGE Statement
The RANGE statement selects the time range of observations written to the OUT= and OUTEVENT= data sets. The from and to values can be SAS date, time, or datetime constants, or they can be specified
as year or year : period, where year is a two-digit or four-digit year, and period (when specified) is a period within the year corresponding to the INTERVAL= option. (For example, if INTERVAL=QTR,
then period refers to quarters.) When period is omitted, the beginning of the year is assumed for the from value, and the end of the year is assumed for the to value.
If a two-digit year is specified, PROC DATASOURCE uses the current value of the YEARCUTOFF option to determine the century of your data. Warnings are issued in the SAS log whenever DATASOURCE needs
to determine the century from a two-digit year specification.
The default YEARCUTOFF value is 1920. To use a different YEARCUTOFF value, specify
options yearcutoff=yyyy;
where YYYY is the YEARCUTOFF value you want to use. See SAS System Options: Reference for a more detailed discussion of the YEARCUTOFF option.
Both the FROM and TO specifications are optional, and both the FROM and TO keywords are optional. If the FROM limit is omitted, the output observations start with the minimum date for which data are
available for any selected series. Similarly, if the TO limit is omitted, the output observations end with the maximum date for which data are available.
The following are some examples of RANGE statements:
range from 1980 to 1990;
range 1980 - 1990;
range from 1980;
range 1980;
range to 1990;
range to 1990:2;
range from '31aug89'd to '28feb1990'd;
The RANGE statement applies to each BY group independently. If all the selected series contain no data in the specified range for a given BY group, then there will be no observations for that BY
group in the OUT= and OUTEVENT= data sets.
If you want to know the time ranges for which periodic time series data are available, you can first run PROC DATASOURCE with the OUTBY= or OUTALL= option. The OUTBY= data set reports the union of
the time ranges over all the series within each BY group, while the OUTALL= data set gives time ranges for each series separately in each BY group. | {"url":"http://support.sas.com/documentation/cdl/en/etsug/65545/HTML/default/etsug_datasrc_syntax07.htm","timestamp":"2024-11-03T00:42:59Z","content_type":"application/xhtml+xml","content_length":"16691","record_id":"<urn:uuid:dd901b5b-040a-4c91-8ea3-d814d176164d>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00478.warc.gz"} |
The labor force participation rate is computed as 100 x
13 Jan 2018 Therefore, in order to calculate unemployment, we need to understand how to unemployment rate = (# of unemployed / labor force) x 100%. 11 Dec 2019 The labor force participation rate is
60% and the unemployment rate is 8%. number of unemployed persons=x x=75 million×8/100=6 million The unemployment rate is computed as the number of unemployed a. divided The labor-force
participation rate is computed as (Labor Force / Adult Population) x 100 Suppose that the adult population is 6 million, the number of unemployed is 3.8 million, and the labor-force participation
rate is 70%.
Method of computation. The labour force participation rate is calculated as follows: LFPR(%) = (Labour force / Working-age population) x 100. LFPR(%) = Persons 29 Jan 2020 The labor force participation
rate is a measure of an economy's active workforce. The formula for the number is the sum of all workers who are We can calculate the unemployment rate by dividing the number of unemployed people by
the total number in the labor force, then multiplying by 100. Large rises in the unemployment rate mean large numbers of job losses. Unemployment rate = (Unemployed people / Total labor
force) × 100. LFPR = Labor Force / Civilian Non-Institutionalized Population, where the Labor Force = Employed + Unemployed. ACTIVE LEARNING 1: Calculate labor force statistics. Compute the labor
force, u-rate, adult population, and labor force participation rate. Learn how to solve problems about
calculating the unemployment rate, calculating the labor force participation rate, and the want all of these to focus on our labor force, so calculate the unemployment rate in Country X. Show your
work. 0.10, and then times 100%, this is going to be equal to a 10% unemployment rate.
The labor force participation rate is calculated by the percentage of the working-age population that is in the labor force or Labor force / Working - age population x 100 The employment-population
ratio measures the
100 x. Labor Force Statistics. Labor force participation rate: % of the adult Unemployment rate (“u-rate”): Compute the labor force, u-rate, adult population ,. 4 May 2017 Business Cycle,
Unemployment, and Inflation. unemployed x 100 Labor force Labor force participation rate = Working-age population x 100 Labor force; 17. Calculate the CPI for the base period and the current period.
28 Feb 2019 we construct trends for the aggregate LFP and unemployment rate. absent business cycle effects, we calculate the rates and population shares a linear function of unobserved cohort
effects, x, age effects, y, and cycle/time effects, z, ployment, The Review of Economics and Statistics, 100(2): 219-231. Keywords: Discouraged workers, Labor force participation, Random utility X
where XE denotes the subjective expectation operator, 2. U is the utility of the Therefore, we need to specify a (real) wage equation and an estimation level of real wages of 100 NOK corresponds to
the lower wage rate deciles in Norway.
There are two methods of determining the employment status: the "Usual" method in which the Unemployment rate (%) = (Unemployed / Labour force) x 100.
14 Nov 2018 nearly 5.5 million fewer prime age workers in the labor force at any point in time. are responsible for 20–40% of the decline in the participation rate between 1984 and years of
employment over t + 1 to t + 10 is computed for each of these individuals. add to 100% for both dropouts and in-and-outs). 27 May 2015 impacts of labor force participation and unemployment, it is a
useful summary The Unemployment Rate and Employment-Population Ratio . compute and interpret, and the data needed to calculate the indicator is unemployed divided by the number of persons in the
labor force) sum to 100%. 13 Jan 2018 Therefore, in order to calculate unemployment, we need to understand how to unemployment rate = (# of unemployed / labor force) x 100%.
We can calculate the unemployment rate by dividing the number of unemployed people by the total number in the labor force, then multiplying by 100. Pie chart
The labour force participation rate is calculated as the labour force divided by the total working-age population. The working-age population refers to people aged 15 to 64. This indicator is broken
down by age group and it is measured as a percentage of each age group.
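Putting the two formulas together in a short sketch (the counts below are made-up illustrative numbers, in millions):
employed = 152.0
unemployed = 8.0
working_age_population = 250.0

labor_force = employed + unemployed
unemployment_rate = unemployed / labor_force * 100                # % of the labor force
participation_rate = labor_force / working_age_population * 100   # % of working-age population

print(unemployment_rate, participation_rate)  # 5.0 64.0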
How to compute Nested Tensors's mean and var?
For torch.Tensor it is easy to compute the mean and variance, but I cannot find a way to compute the mean and variance for Nested Tensors. Nested Tensors support the layer_norm operation, which internally involves a mean and variance computation. Is there any way to compute the mean of Nested Tensors?
Thanks for your help!
import torch

x = torch.randn(1, 192)
y = torch.randn(10, 192)
nested = torch.nested.nested_tensor([x, y])
nested.mean(dim=-1)  # not supported for nested tensors
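For what it's worth, one possible workaround (a sketch, not an official API recommendation) is to unbind the nested tensor and reduce each constituent tensor separately:
# per-constituent mean/var along the last dim, re-wrapped as nested tensors
means = torch.nested.nested_tensor([t.mean(dim=-1) for t in nested.unbind()])
variances = torch.nested.nested_tensor([t.var(dim=-1) for t in nested.unbind()])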
Pharyngula minutes
« previous post | next post »
A graph of the current Google hit counts for "N minutes", 2 ≤ N ≤ 66, expressed as a proportion of the total hits for all 65 searches, looks like this:
(As usual, click on the image for a larger version.)
There seem to be several different things going on here: a preference for multiples of 5, 10, and 15; a preference for smaller numbers; and perhaps some other factors as well.
You could model this sort of distribution with the kind of approach that Tenenbaum and Griffiths used to predict how people generalize from (small) sets of numbers ("Generalization, similarity, and
Bayesian inference", Behavioral and Brain Sciences, 24: 629-641, 2001). In fact, for exploring the structure of certain kinds of cognitive/cultural spaces, web-text counts might be better than
lab-subject responses, since web text samples are certainly larger and arguably more representative. (There are plenty of obstacles — certainly Google counts are not really reliable enough for this
purpose — but never mind for now…)
Distributional patterns of this kind can also be used to explore cognitive and cultural differences. For example, if we compare the overall web frequency of N = 5, 10, 15, …, 65 in "N minutes" with
the relative frequency of N in the Google search patterns {X "N minutes or less"} for X = "recipe", "learn", and "muscles", we get this:
The distribution for N in {learn "N minutes or less"} (e.g. "Tech savvy in 15 minutes or less") and {muscles "N minutes or less"} (e.g. "instant fitness workout in 15 minutes or less") follow the
general web distribution for {"N minutes"} (the yellow line) pretty closely, except for a bit of enrichment at N=30 (and maybe a slight muscle bulge at N=20). But {recipe "N minutes or less"} is more
different from the background: the counts for N=5 and N=10 are quite a bit lower, while N=30 is much higher.
The dearth of recipes at N=5 and N=10, I speculate, reflects a victory of realism over marketing: readers really will try out recipes, and notice if they take a lot more time than advertised. And the
recipe sweet spot at N=30 probably reflects the fact that 30 minutes is the largest quantum of time that most people still generally view as small.
But anyhow, it's not because of recipe marketing that I decided to devote this morning's Breakfast Experiment™ to the contextual distribution of N in "N minutes" — instead, you can blame it on PZ
Myers. Reading the Pharyngula archives a few days ago, it seemed to me that Prof. Myers (and his commenters) used a few values of "N minutes" — especially N=5 and N=10 — unusually often and in a
somewhat characteristic way.
Now, I've pointed out in the past that the verbal tics that we perceive as "characteristic" of particular people are often very low-frequency events (see e.g. "And yet", 3/28/2004; "Per usual", 6/21/
2004; "Strange bookfellows", 5/26/2005; "Deep in the Hookergate weeds", 5/8/2006; "Cold comfort for whomever", 10/26/2007). On the other hand, we've also seen that subjective estimates of relative
frequency can be way off the mark, quantitatively ("What 'a hundred times' means", 10/2/2004) and even qualitatively ("Near? Not even close", 1/2/2005).
In this case, a quick check suggests that my subjective reaction fell into the "complete crock" category:
Overall, Pharyngula's distribution of values for N in "N minutes" seems to be reasonably close to the overall web norm — certainly closer than the distribution of N for recipes executable in "N
minutes or less" is. There's a small pharyngular excess at N=5 and N=10, and a small deficit at N=30, but the differences don't seem very impressive.
But wait — if we plot the data a little differently, the differences look bigger, and it seems more plausible that I might have been picking up on something real:
OK, looking at graphs is all very well, but what's the Right Way to evaluate a hypothesis like "text at scienceblogs.com/pharyngula has a different distribution of values for N in 'N minutes' than
the web at large does"? (I mean, besides saying "who cares?" and moving on.) This is trickier than you might think, especially if at this point you're trying to remember how to perform a
Kolmogorov-Smirnov two sample test, because it's not easy to be precise enough about what question(s) we really want to answer.
But I've already run out of breakfast time for today — in fact, I had to finish this write-up over lunch — so my (attempt to provide an) answer will have to wait for another morning.
[Update: PZ Myers attempts that characteristic artefact of the internet age, an overt attempt to elicit the Observer's Paradox ("It's 42 minutes after 7", 8/2/2008). But it's too late, PZ, unless
you go back and edit your copious archives! And really, it could be worse: consider the sad fate of Ronald Frobisher.]
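For readers who want to try such a comparison themselves, here is a minimal sketch (not from the post; the counts are placeholders) of a chi-squared goodness-of-fit check of observed "N minutes" counts in a subcorpus against expectations from the background web distribution:

import numpy as np
from scipy import stats

# Placeholder proportions of "N minutes" on the web at large, for N = 5, 10, 15, 20, 30
web_props = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
# Placeholder observed counts for the same values of N in the subcorpus being tested
observed = np.array([120, 95, 60, 40, 35])

expected = web_props * observed.sum()          # expected counts under the "same distribution" null
chi2, p = stats.chisquare(observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p:.3g}")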
Bob O'H said,
Some thoughts (before the Pharynguloids get here):
1. You have count data, so from a technical point of view, a chi-squared test or a log-linear model would be more appropriate.
2. You need to be clear about the hypothesis you are testing. From that, all else follows.
3. What is the population you are comparing Pharyngula to? All of the web? Just blogs? Blogs by radical atheists with an irrational contempt for wafers?
4. You're almost certainly using the same data you used to generate the hypothesis as you are to test it. That is naughty, and of course you're more likely to see the pattern you thought was there.
You can use the data you have to refine your hypothesis, and then test it with new data (in the future). Of course, you just know what will happen if someone over there realises what you're doing.
5. Why 66 as the upper bound? Why not, say, 69?
rootlesscosmo said,
"30 Minute Meals" is a very popular show on the Food Network; might this skew the results?
Isabel Lugo said,
It's interesting to see that 45 is higher than 40 or 50, and 15 is higher than 10 or 20; it illustrates that people are really thinking in terms of quarters of an hour. (There's an interesting
question — why do we seem to do things in terms of quarters of an hour, instead of thirds or sixths? For example, I'm much more likely to make an appointment with somebody for 4:15 than for 4:10 or 4:20.)
Milt Boyd said,
In many fields, it is relatively easy to find halves, and halves again, and so on. The Imperial system of units is filled with 2:1 ratios, for lengths, weights, volumes. It's often rather difficult
to find one-third, and more difficult to check that you got it right. Hence a bias for quarters and such.
Isabel Lugo said,
That's a good point. I suppose the reason we stop at quarter-hours, instead of using an eighth of an hour, is because that's not a whole number of minutes. It seems like the next finer clump of time
we tend to use, after 15 minutes, is 5 minutes; of course that's half of 10, which is the base of our numerical system.
People have suggested dividing the hour into 100 minutes, as a sort of "metric time". Has anybody seriously suggested 64?
Jonny Rain said,
"What is the population you are comparing Pharyngula to? All of the web? Just blogs?"
I don't know why, but you could do the same analysis but limited to searching blogs:
Jim Fowler said,
Aside from comparing distributions, the distribution of Google hit counts for integers (without any accompanying text) is pretty cool. Here is a graph of Google hit counts for "N" where N is a number
between 1 and 500. A regression on this data suggests a power law at work.
For a fun party trick, you can try the following: there are 345 million hits for "241", and just about half as many (172 million) hits for "482," a number twice as big. Generally, doubling a number
halves the number of hits.
Maybe this says more about random numbers than about people…
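(A sketch of how one might check that claim — not Fowler's code, and the counts below are illustrative, chosen to follow the stated "double the number, halve the hits" pattern: fit a power law by regressing log hits on log N.)

import numpy as np

N = np.array([241, 482, 964])                # doubling each time
hits = np.array([345e6, 172e6, 86e6])        # illustrative hit counts, not real data
slope, intercept = np.polyfit(np.log(N), np.log(hits), 1)
print(f"estimated exponent: {slope:.2f}")    # close to -1.0 for an inverse power law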
Jonathan Lubin said,
To an old fogy like me, wearing one of those funny round analog devices strapped around his wrist, counting time in multiples of five is perfectly natural: the hour is conveniently divided for you
into those twelve blocks, after all. It’s only someone using a digital clock who would have even asked this question.
Brian Macker said,
This is pretty much useless without knowing quantity of text. It might be that the proportion of usage to text is much higher on one source vs. another and that may cause differences in small and
large number effects. Was quantity of text taken into account? Perhaps Myers and company uses "n minutes" far less than anyone else.
craig said,
How often do Pharynguloids use the word "cracker" compared to the rest of the web?
Sven said,
Just a thought: I suspect that there'd be a "sports writing about soccer" shaped blip at 90 minutes….
BaldApe said,
I'm not sure if the marketing idea pans out. (the idea that it is unrealistic to think that you could master Urdu in 5 minutes, for instance) People just don't pay much attention.
Look at a prepackaged food item which requires preparation (like mac and cheese, for instance) Commonly, you will see the words "Simple one step directions" on the front of the box, then when you
turn it over you see Step 1, Step 2, Step 3….
mgh said,
the first page of google movie listings in my neighborhood gives the following distributions of minutes-after-the-hour movie start times:
the suppression of 55, 35, and 25 probably reflects shifting those values onto 00 and 30. no idea why this effect isn't seen as much for 05.
Sili said,
Wait – you read the archives of Pharyngula in a coupla days? I'm still trying to catch up after 36 hours offline. My envy knows no bounds! In fact I shall comfort-snack furiously for 27 minutes in
order to calm down.
Hmmm – We need to get you an invitation to the next Amaz!ng Meeting. There are rumours of photographs of PeeZed and Ben Goldacre together. Throw in Liberman and we'll have reached the blogularity!
PS: the proper term is "Pharyngulistas".
Jack Picknell said,
Check out the word "Catholics" from Pharyngula. He is an obsessed anti-Catholic bigot.
Bookmarks about Bayesian said,
[…] – bookmarked by 1 members originally found by lowf on 2008-12-16 Pharyngula minutes http://languagelog.ldc.upenn.edu/nll/?p=429 – bookmarked by 3 members originally found by hfishel […] | {"url":"https://languagelog.ldc.upenn.edu/nll/?p=429","timestamp":"2024-11-08T23:56:55Z","content_type":"application/xhtml+xml","content_length":"105930","record_id":"<urn:uuid:1d98a162-7376-4c54-b3f6-4e82d61e47c3>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00358.warc.gz"} |
Choosing the Best Career
Are you looking to choose a career? Let me help you. After reading this you’ll run off to the nearest university and register for classes. This will be good for you and the world.
Would you like people at parties to "Ooooooo" when they find out what you do? Would you like to have a respectable salary while working in an air-conditioned office for only 40 hours a week and
rarely having to travel on business?
if so, then the best career for you is mechanical engineering.
Here is a question to get you started. Look around you right now:
Other than living things (people, plants, pets),
can you see anything that has been constructed
without the aid of a mechanical engineer?
Look hard. Take your time. I’ll wait.
While you’re pondering, let me point out that the jet ski shown in the photograph above never would have existed without mechanical engineers.
Let me help you with your answer:
• The clothing you’re wearing was woven by machines designed by mechanical engineers.
• The pencils and pens on your desk were manufactured by machines designed by mechanical engineers.
• The paint on your walls was mixed by machines designed by mechanical engineers.
• The half-eaten Big Mac hamburger beside you was prepared using tools and machines designed by mechanical engineers.
• The paper on your desk was manufactured by machines designed by mechanical engineers.
• The structure holding your cellphone’s electronics together was designed by mechanical engineers.
• If you wear eyeglasses, they were constructed using machines designed by mechanical engineers.
• Your lilac’s flower pot was manufactured by machines designed by mechanical engineers.
• Your fingernails were clipped by a fingernail clipper designed by a mechanical engineer.
Just how broadly do mechanical engineers affect your life?
Without mechanical engineers, you’d have to get your food from your
garden you tilled using wooden tools you carved with sharp rocks.
Yes, mechanical engineers should be worshiped!
Why is mechanical engineering the best career?
Here are just some reasons why no other career beats mechanical engineering:
1. There are rarely too many mechanical engineers.
2. Colleges do not artificially restrict the number of mechanical engineering students, as they regularly do for many medical and legal professions.
3. Only a bachelor’s degree is required. No continuing education or licensing is necessary.
4. Everyone thinks you’re smart.
5. Mature people seek relationships with professionals whose careers give them evenings and weekends off. Tip: You don’t want to marry an immature person.
6. Mechanical engineering is a fact-based profession. You will almost always get your way with your supervisor if physics is on your side. If physics is not on your side, you have no business
presenting your idea to management.
7. Physics is not subject to opinion, pride, or politics. Good mechanical engineers have zero ego, and should never have a “dog in the fight” or a “stake in the outcome.” Infighting and competition
have no place in the mechanical engineering field. If you abhor politics, become a mechanical engineer.
8. Low stress-to-salary ratio. Doctors and lawyers make more money than mechanical engineers. But their jobs are much, much more stressful.
9. If a mechanical design fails during initial testing, the design does not sue the engineer.
Four years of college
What about those four years of tough engineering courses? Yes, they’re tough. But they provide a comfortable, stable career for the rest of your life when compared to other professions.
The math presented in college homework, labs, and exams is far more complex than what is required in real life. For example, here is an equation representing the first law of thermodynamics. This is
the typical sort of equation taught in an engineering course:
Before you run away, you must know that I've never used this equation in its entirety. Never once. This is because the equation represents just about EVERYTHING that can happen to something. Never in
real life does everything happen at once.
For example, while inflating a balloon, you’re not at the same time likely to,
1. Submerge it in water.
2. Drop it off a cliff.
3. Catch it on fire.
4. Accelerate it to Mach 10.
5. Pour sulfuric acid on it.
6. Place it in outer space.
Generally, only small portions of large equations are used at any given time. This makes life for engineers much easier. Below is a typical application of the first law of thermodynamics:
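(The equation itself appears as an image in the original post; in standard notation it is presumably the kinetic-energy relation)

$E_k = \tfrac{1}{2} m v^2$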
Notice how much simpler the equation is. In English it means:
The kinetic energy of a moving object is equal to
one-half its mass times the square of its velocity.
You may not be familiar with the terms, but that makes it a problem of understanding terms, not a math issue.
Why, then, is a four-year engineering degree required?
The issue isn't the math, but rather learning which principles and equations should apply at any given time. How to apply them is what is difficult, not the equations themselves.
The Expert Button-Pusher
The story is told of a company that used a complex machine with many buttons, levers, and dials. The machine worked so well and for so long that everyone working at that
company forgot what the buttons, levers, and dials did. One day, the machine stopped working. No one at the company knew how to get it going again. They found an expert who lived across the country.
They paid him five thousand dollars to come and get the machine working again. The expert arrived. He looked over the machine for a few minutes and then pushed one button. The machine began working
perfectly again. The company management who hired the expert complained and said, “We paid you five thousand dollars to push one button?” “Yes,” the expert said, “but I knew which button to push.”
That is the story of mechanical engineering.
The math is easier than you think
I list below the most common equations I have used throughout my career. You’ll see that they make use of only ordinary algebra. No matrix algebra. No systems of vectored, partial differential
equations. None of the equations shown here go beyond high school math:
There are exceptions, of course. Occasionally I must set up and solve a differential equation. But they are the exception rather than the norm. The terminology may be unfamiliar to you, but the math
is not complex.
The question of how to apply principles applies to any career. Take plumbing, for example:
Just solder some pipes together.
“Hold on!” you say. “That’s where skill and craftsmanship come in.” And you’re right. Skill and craftsmanship come from experience. This applies to all professions, including engineering, plumbing,
and coaching football.
Two examples
I provide here two examples of relatively simple math that can produce astounding results.
I own a Toyota Highlander SUV. Its average weight is published at about 4,400 pounds. If I estimate that about 6-inches by 6-inches of each of its four tires touch the road at any given time, I can
calculate the air pressure in those tires.
Remember the following equation from the above list:
With some algebra, we can rearrange the equation to represent pressure:
The equation says that pressure (P) is created when an applied force (F) is distributed over a certain area (A). The pressure in my Toyota Highlander’s tires calculates to be:
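(The worked figure appears as an image in the original; a quick numeric check under the stated assumptions — 4,400 pounds spread over four 6-inch by 6-inch contact patches — is sketched below in Python.)

weight_lb = 4400                  # published average weight of the vehicle
contact_area_in2 = 4 * 6 * 6      # four tires, roughly 6 in x 6 in each
print(round(weight_lb / contact_area_in2, 1))   # ~30.6 psi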
Automobiles aren't exciting enough for you? What about modern airliners?
The maximum takeoff weight of a Boeing 777-300 ER airliner is 775,000 pounds. The surface area of the wings of a Boeing 777-300 ER airliner is 672,768 square inches. With these two numbers I can
calculate an astonishing number:
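(Again the worked figure is an image in the original; the arithmetic it presumably shows:)

takeoff_weight_lb = 775_000
wing_area_in2 = 672_768
print(round(takeoff_weight_lb / wing_area_in2, 2))   # ~1.15 psi average pressure difference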
It's amazing to me that a 387-ton jetliner can lift off the ground with only a 1.2 psi difference in air pressure between the upper and lower surfaces of the wings. That ain’t much!
“Simple division is one thing,” you say, “but what about complex trigonometry?”
An entire college semester of trigonometry can be reduced to the following figure. I know this is true because I’ve taken only one trigonometry class in my life, and that was over thirty years ago.
All I remember from that class is this figure:
All the trigonometry I have needed in my career comes from this figure. I realize that learning trigonometry in a classroom is a shock to the mind and soul. But applied trigonometry is much less daunting.
I’ve written an entire post on calculus, which you should read. It will be fun.
Engineering students must take three semesters of calculus (integration) and one semester of differential equations. Differential equations are integration in reverse.
Want to know a great secret?
There is software out there that can solve every math problem you can dream up. The best of these products is a program called, Mathcad. Mathcad is a word processor for equations.
The following scary integral calculates the percentage confidence of 99.73% over a statistical spread of three standard deviations.
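(The integral appears as an image in the original; presumably it is the standard normal density integrated over ±3 standard deviations)

$\dfrac{1}{\sqrt{2\pi}} \displaystyle\int_{-3}^{3} e^{-t^{2}/2}\, dt \approx 0.9973$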
Using Mathcad, I typed the above equation on the left side of the equal sign. When I tapped the equal sign character, the answer of 99.73% appeared.
How easy is that? Mathcad did all the work.
But that’s cheating!
“You let the computers do all the work,” you say.
This is my whole point! Have you ever picked up a cellphone or calculator and asked it for the square root of a number, say, the square root of 456.7? You type in the number 456.7 and then press the
“square root” key. Then the machine does the work.
Do you have any idea how much work your dear cellphone or calculator must do to come up with that answer? Your electronic device must perform the following function, or one similar to it:
How would you like to solve that one on your own?
Below is a computer program I wrote using Mathcad that calculates the square root of a number without using a square root function. This is another approach calculators may use to calculate square roots.
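(The Mathcad program itself appears as an image in the original; below is a rough Python stand-in — my assumption of the approach, using the Newton/Heron iteration.)

def newton_sqrt(a, tol=1e-12):
    # Square root of a positive number a, without calling a sqrt function.
    x = a if a > 1 else 1.0          # crude starting guess (assumes a > 0)
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)        # Newton/Heron update
    return x

print(newton_sqrt(456.7))            # ~21.3705..., matching the calculator example above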
Unless you like doing all that work by hand, don’t accuse me of cheating.
3D modeling and analysis
Modern engineering software allows engineers to create entire three-dimensional designs virtually on computer and then test them for,
• Vibration
• Thermal effects
• Tolerance analysis
• Mass and weight/balance
• Stress, metal fatigue, and cycle life
• Liquid or gas flow (if it’s a hydraulic or pneumatic design)
These analyses are done before the design physically exists in real life. This way, when the parts are finally made, all the design mistakes have already been made and fixed.
3D metal printing
The same 3D modeling capability that can render realistic images on the computer screen can send the files to metal printing machines that can manufacture (literally "print out") entire parts without
the touch of a human being. Metal printers can print titanium parts (1/3 lighter than steel) which are just as strong as traditionally machined steel parts.
Bottom line
Gone is the drudgework that used to be required of mechanical engineers. No more,
• Slide rules
• Clay models
• Drafting boards
• Piles of calculations
• White shirts and ties
• Handheld calculators (I don’t own one!)
Today is the best time in the history of the world to be a mechanical engineer.
This makes you an extraordinarily lucky person! | {"url":"https://www.jjrlore.com/post/choosing-the-best-career","timestamp":"2024-11-05T15:27:38Z","content_type":"text/html","content_length":"1050478","record_id":"<urn:uuid:05dd6401-3fb4-42c9-b0ff-f34b86f1ffd7>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00442.warc.gz"} |
Optimal solution - (Intro to Business Analytics) - Vocab, Definition, Explanations | Fiveable
Optimal solution
from class:
Intro to Business Analytics
An optimal solution is the best possible outcome of a mathematical problem, where specific constraints and objectives are met, maximizing or minimizing a certain value. It plays a crucial role in
decision-making processes, helping to determine the most effective course of action given limited resources and competing priorities. Identifying this solution involves sophisticated techniques and
models to analyze complex scenarios, often leading to improved operational efficiency and strategic planning.
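As a concrete example (illustrative, not from the course), a tiny linear program solved with SciPy shows what an optimal solution looks like in practice — the decision-variable values that maximize an objective subject to constraints:

from scipy.optimize import linprog

# Maximize profit 3x + 5y subject to x + 2y <= 14 and 3x + y <= 18, with x, y >= 0.
# linprog minimizes, so the objective coefficients are negated.
c = [-3, -5]
A_ub = [[1, 2], [3, 1]]
b_ub = [14, 18]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal (x, y) ~ (4.4, 4.8), optimal profit 37.2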
congrats on reading the definition of optimal solution. now let's actually learn it.
5 Must Know Facts For Your Next Test
1. An optimal solution may involve integer values when dealing with problems that require whole numbers, which is common in resource allocation scenarios.
2. In goal programming, optimal solutions seek to satisfy multiple objectives, potentially leading to trade-offs between conflicting goals.
3. Linear programming techniques are often used to find optimal solutions in various business contexts, such as maximizing profits or minimizing costs.
4. Identifying the optimal solution can sometimes require iterative methods or algorithms, especially in complex or large-scale problems.
5. Sensitivity analysis may be applied to understand how changes in constraints or objective functions can affect the optimal solution.
Review Questions
• How does the concept of an optimal solution apply in a scenario involving multiple decision variables and constraints?
□ In a scenario with multiple decision variables and constraints, an optimal solution represents the combination of variable values that achieves the best outcome while satisfying all imposed
limitations. This means that through mathematical modeling techniques like linear programming, one can analyze different scenarios to find the values of decision variables that maximize or
minimize the objective function. The interplay between these variables and constraints ensures that the chosen solution is not only optimal but also feasible within the defined parameters.
• Discuss how identifying an optimal solution can impact business operations and strategic planning.
□ Identifying an optimal solution can greatly enhance business operations by streamlining resource allocation and improving decision-making efficiency. By focusing on maximizing profits or
minimizing costs within constraints, companies can better position themselves in competitive markets. Additionally, this process aids strategic planning by allowing businesses to forecast
outcomes based on various scenarios, ensuring that resources are allocated effectively while aligning with overall organizational goals.
• Evaluate the challenges faced when determining an optimal solution in complex decision-making environments and propose potential strategies to overcome these challenges.
□ Determining an optimal solution in complex decision-making environments presents several challenges, including dealing with non-linear relationships, multiple conflicting objectives, and
uncertain data. These complexities may make it difficult to identify feasible solutions or could lead to suboptimal decisions if not managed properly. To overcome these challenges, businesses
can utilize advanced optimization techniques like mixed-integer programming or heuristic approaches. Moreover, incorporating sensitivity analysis helps decision-makers understand how
variations in parameters affect outcomes, allowing for more informed adjustments and better strategies.
| {"url":"https://library.fiveable.me/key-terms/introduction-to-business-analytics/optimal-solution","timestamp":"2024-11-12T20:09:54Z","content_type":"text/html","content_length":"166869","record_id":"<urn:uuid:5b967f00-242f-4a2c-b74f-5214aedd40f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/WARC/CC-MAIN-20241112180608-20241112210608-00145.warc.gz"} |
Setting Up a Test for a Population Proportion - Knowunity
Setting Up a Test for a Population Proportion: AP Statistics Study Guide
Hello there, future statisticians! Get ready to dive into the fascinating world of population proportions. Testing a population proportion is like being a detective and cracking a case, only with
numbers and hypotheses instead of magnifying glasses and fingerprint dust. 🕵️♂️🔍 Today, we'll walk through how to set up and solve a hypothesis test for a population proportion—think Sherlock Holmes
meets AP Statistics. Let’s get started!
Hypotheses: Null vs. Alternative 🧑🔬🧑💻🎲
When you're gearing up for a one-proportion z-test, the first thing to do is to write out your hypotheses. You’ll need both a null hypothesis and an alternative hypothesis. No need to get your
magnifying glass, this is straightforward!
Null Hypothesis (H0): The null hypothesis is essentially the "nothing to see here" hypothesis. It’s the statement about the population parameter that we assume to be true unless evidence suggests
otherwise. Think of it as the default setting. For population proportions, it’s always written as ( p = ____ ). If the null hypothesis states that 70% of high school students love statistics (p =
0.70), it assumes this to be true unless you find strong evidence to the contrary.
For example, let’s say everyone claims that 80% of dogs can do a happy dance. So, our null hypothesis would be ( H0: p = 0.80 ).
Alternative Hypothesis (Ha): This one is the spicy hypothesis. It’s what you propose if you believe the null hypothesis is incorrect. It can be written in one of three ways: ( p < ____ ), ( p > ____
), or ( p \neq ____ ). Depending on what you're testing, it could be a one-sided test (less than or greater than) or a two-sided test (not equal to).
Let’s say we took a sample and suspect that fewer dogs can do a happy dance. Our alternative hypothesis might be ( Ha: p < 0.80 ).
💡 Summary: The null hypothesis always contains an equality (p = or p ≤ or p ≥), while the alternative hypothesis brings the drama with strict inequalities (p ≠ or p < or p >). One-sided tests are for
the less-than or greater-than cases, while two-sided tests just scream "not equal".
Conditions: What to Check Before You Wreck (Your Study) 📏📝🥽
Before you get too far into calculations, you need to check a few key conditions to ensure your data can actually be used:
Random: Your sample must be random. Sampling bias is the statistical world’s equivalent of a plot twist in a mystery novel—unexpected and potentially study wrecking. No amount of statistical wizardry
can fix bias, so make sure your sample represents the population fairly.
Independent: This one’s about making sure your samples do not affect one another when taken without replacement. For independence, use the 10% condition: your sample should be less than 10% of the
population size. Think of it like this: if you’re sampling ice cream flavors from a giant vat, the last scoop you take should taste just as random as the first.
Normal: To assume normality, use the Large Counts Condition. Both ( np ) (expected successes) and ( n(1-p) ) (expected failures) need to be at least 10. This ensures that your data forms a shape that
resembles the normal curve—a critical factor for z-tests.
Example Check: Suppose you randomly sample 200 students to find out how many can recite the quadratic formula (everyone’s favorite!). You hypothesize that 60% (p = 0.60) can do it.
Random: ✔️ ("We sample 200 random students.")
Independent: ✔️ (It’s safe to say there are more than 2000 students in the school. Yep, students everywhere!)
Normal: ✔️ (200 * 0.60 = 120 and 200 * 0.40 = 80. Both are more than 10, perfect!)
Calculating the Necessary Statistics: Crunch Time! 🖩📉
Now, for the fun part: doing the math! Here’s how to calculate the key statistics involved in your z-test:
Z-Score: This measures how far your sample proportion ((\hat{p})) is from the hypothesized population proportion (p0), in terms of standard error (SE). It’s like finding out if your sample proportion
is chilling in the "cool kids club" or far off in the statistical wilderness.
[ z = \frac{\hat{p} - p_0}{\sqrt{\frac{p_0 (1 - p_0)}{n}}} ]
Example: Let's go back to our example. We found that 120 out of 200 students could recite the quadratic formula.
[ p_0 = 0.60, \hat{p} = \frac{120}{200} = 0.60 ]
[SE = \sqrt{\frac{0.60 (1 - 0.60)}{200}} = 0.03464]
Our z-score:
[ z = \frac{0.60 - 0.60}{0.03464} = 0 ]
P-Value: This tells us the probability of obtaining a sample proportion as extreme as ours, given that the null hypothesis is true. In simpler terms, it’s like saying, "How likely is this funky
result by random chance alone?" The smaller the p-value, the stronger the evidence against the null hypothesis.
To calculate this, you’ll use the standard normal distribution (z-distribution). With a z-score of 0, you'd find that the p-value is approximately 0.50 (half of the normal distribution). This means
there’s a 50% chance our result could happen by random chance alone. Nothing fishy here.
Using Technology: Why do all the math by hand when your trusty graphing calculator can jump in and save the day like a statistical superhero? With your calculator, head over to the Stats Tests Menu
and select the 1-Prop Z-Test. Input your parameters, and voilà! You get your z-score and p-value without breaking a sweat. 🎉
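If you are working outside a graphing calculator, the same one-proportion z-test can be reproduced in a few lines (a sketch of mine, not part of the guide), using the worked example of 120 successes out of 200 with p0 = 0.60:

from math import sqrt
from scipy.stats import norm

n, successes, p0 = 200, 120, 0.60
p_hat = successes / n
se = sqrt(p0 * (1 - p0) / n)          # standard error under the null hypothesis
z = (p_hat - p0) / se
p_value = norm.sf(z)                  # one-sided P(Z >= z); 0.50 here, matching the example
print(z, p_value)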
Key Terms to Know 🌟
1. 1-Prop Z-Test: Used to compare the proportion in a sample to a known or hypothesized population proportion.
2. 10% Condition: Ensures sample size is less than 10% of the total population when sampling without replacement.
3. Independent Events: Events that do not influence each other—kind of like two poker players at different tables.
4. Large Counts Condition: Ensures expected counts of both successes and failures are at least 10.
5. Normalcdf Function: This handy calculator function helps find probabilities under the standard normal curve.
6. One-Proportion Z-Test: Used to determine if there’s a significant difference between a sample proportion and a population proportion.
7. P-Value: Helps determine if your observed result is statistically significant or, well, just random noise.
8. Standard Normal Curve: The iconic bell curve with a mean of 0 and standard deviation of 1.
9. Z-Score: Measures how many standard deviations a data point is from the mean.
And there you have it! You’ve successfully set up and conducted a hypothesis test for a population proportion. Not so daunting, right? Remember, behind every statistical test is a heap of common
sense and a dash of mathematical magic. Keep practicing, and you’ll be crunching numbers and throwing statistical shade at null hypotheses in no time! Happy studying! 📚💡
Now, go grab your calculator, and may the stats force be with you! 🌌✨ | {"url":"https://knowunity.com/subjects/study-guide/setting-up-test-for-population-proportion","timestamp":"2024-11-11T16:50:20Z","content_type":"text/html","content_length":"247955","record_id":"<urn:uuid:5e0f7ae7-7b37-4b5c-ac48-992af4df5cc3>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00031.warc.gz"} |
Imputing missing values with variants of IterativeImputer
Go to the end to download the full example code, or to run this example in your browser via JupyterLite or Binder.
Imputing missing values with variants of IterativeImputer
The IterativeImputer class is very flexible - it can be used with a variety of estimators to do round-robin regression, treating every variable as an output in turn.
In this example we compare some estimators for the purpose of missing feature imputation with IterativeImputer:
Of particular interest is the ability of IterativeImputer to mimic the behavior of missForest, a popular imputation package for R.
Note that KNeighborsRegressor is different from KNN imputation, which learns from samples with missing values by using a distance metric that accounts for missing values, rather than imputing them.
The goal is to compare different estimators to see which one is best for the IterativeImputer when using a BayesianRidge estimator on the California housing dataset with a single value randomly
removed from each row.
For this particular pattern of missing values we see that BayesianRidge and RandomForestRegressor give the best results.
It should be noted that some estimators such as HistGradientBoostingRegressor can natively deal with missing features and are often recommended over building pipelines with complex and costly missing
values imputation strategies.
# Authors: The scikit-learn developers
# SPDX-License-Identifier: BSD-3-Clause
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
# To use this experimental feature, we need to explicitly ask for it:
from sklearn.experimental import enable_iterative_imputer # noqa
from sklearn.impute import IterativeImputer, SimpleImputer
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import BayesianRidge, Ridge
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
N_SPLITS = 5
rng = np.random.RandomState(0)
X_full, y_full = fetch_california_housing(return_X_y=True)
# ~2k samples is enough for the purpose of the example.
# Remove the following two lines for a slower run with different error bars.
X_full = X_full[::10]
y_full = y_full[::10]
n_samples, n_features = X_full.shape
# Estimate the score on the entire dataset, with no missing values
br_estimator = BayesianRidge()
score_full_data = pd.DataFrame(
    cross_val_score(
        br_estimator, X_full, y_full, scoring="neg_mean_squared_error", cv=N_SPLITS
    ),
    columns=["Full Data"],
)
# Add a single missing value to each row
X_missing = X_full.copy()
y_missing = y_full
missing_samples = np.arange(n_samples)
missing_features = rng.choice(n_features, n_samples, replace=True)
X_missing[missing_samples, missing_features] = np.nan
# Estimate the score after imputation (mean and median strategies)
score_simple_imputer = pd.DataFrame()
for strategy in ("mean", "median"):
    estimator = make_pipeline(
        SimpleImputer(missing_values=np.nan, strategy=strategy), br_estimator
    )
    score_simple_imputer[strategy] = cross_val_score(
        estimator, X_missing, y_missing, scoring="neg_mean_squared_error", cv=N_SPLITS
    )
# Estimate the score after iterative imputation of the missing values
# with different estimators
estimators = [
    BayesianRidge(),
    RandomForestRegressor(
        # We tuned the hyperparameters of the RandomForestRegressor to get a good
        # enough predictive performance for a restricted execution time.
        n_estimators=4,
        max_depth=10,
        bootstrap=True,
        max_samples=0.5,
        n_jobs=2,
        random_state=0,
    ),
    make_pipeline(
        Nystroem(kernel="polynomial", degree=2, random_state=0), Ridge(alpha=1e3)
    ),
    KNeighborsRegressor(n_neighbors=15),
]
score_iterative_imputer = pd.DataFrame()
# iterative imputer is sensitive to the tolerance and
# dependent on the estimator used internally.
# we tuned the tolerance to keep this example run with limited computational
# resources while not changing the results too much compared to keeping the
# stricter default value for the tolerance parameter.
tolerances = (1e-3, 1e-1, 1e-1, 1e-2)
for impute_estimator, tol in zip(estimators, tolerances):
    estimator = make_pipeline(
        IterativeImputer(
            random_state=0, estimator=impute_estimator, max_iter=25, tol=tol
        ),
        br_estimator,
    )
    score_iterative_imputer[impute_estimator.__class__.__name__] = cross_val_score(
        estimator, X_missing, y_missing, scoring="neg_mean_squared_error", cv=N_SPLITS
    )
scores = pd.concat(
    [score_full_data, score_simple_imputer, score_iterative_imputer],
    keys=["Original", "SimpleImputer", "IterativeImputer"],
    axis=1,
)
# plot california housing results
fig, ax = plt.subplots(figsize=(13, 6))
means = -scores.mean()
errors = scores.std()
means.plot.barh(xerr=errors, ax=ax)
ax.set_title("California Housing Regression with Different Imputation Methods")
ax.set_xlabel("MSE (smaller is better)")
ax.set_yticklabels([" w/ ".join(label) for label in means.index.tolist()])
plt.tight_layout(pad=1)
plt.show()
Total running time of the script: (0 minutes 5.888 seconds)
| {"url":"https://scikit-learn.org/dev/auto_examples/impute/plot_iterative_imputer_variants_comparison.html","timestamp":"2024-11-06T02:01:36Z","content_type":"text/html","content_length":"114715","record_id":"<urn:uuid:b91eb1c0-a3a8-4bdf-b473-1a496b15ac09>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00498.warc.gz"} |
Help with keeping parts' positions inside of a boundary
I’m making a building system and I’m having trouble keeping the builds inside of a boundary when the build is rotated. Here’s a video of my issue:
The boundary works fine when the build isn’t rotated but when I rotate it, it acts as if the build wasn’t rotated at all in terms of keeping the build inside of the boundary. I’m not really sure how
I can approach the issue.
Here’s my code for keeping the build inside of the boundary:
local function clampToBoundaries(model)
-- the call wrapping these clamps was truncated in the original post; presumably:
model:SetPrimaryPartCFrame(CFrame.new(
math.clamp(model.PrimaryPart.Position.X, bounds.Position.X - bounds.Size.X/2 + model.PrimaryPart.Size.X/2, bounds.Position.X + bounds.Size.X/2 - model.PrimaryPart.Size.X/2),
model.PrimaryPart.Position.Y, -- assumed: Y is left unclamped
math.clamp(model.PrimaryPart.Position.Z, bounds.Position.Z - bounds.Size.Z/2 + model.PrimaryPart.Size.Z/2, bounds.Position.Z + bounds.Size.Z/2 - model.PrimaryPart.Size.Z/2)
) * CFrame.Angles(0,rot,0))
end
model being the build that is being clamped to the boundary
bounds being the part that measures the boundary
rot being the rotation of the model
I made a function for keeping a part or model inside an area earlier. It goes through every corner of every part in the model and finds the longest distance of a corner of a part from the edge. Then
it calculates a new cframe based on the longest distance. So if there were corners outside the area, the corner that was furthest away from the edge will be at the edge.
Link to my post.
I have later realised that this code
for ix = 1, 2 do
local xm =(-1)^ix
for iy = 1, 2 do
local ym =(-1)^iy
for iz = 1, 2 do
local zm =(-1)^iz
could be replaced with this
for xm = -1, 1, 2 do
for ym = -1, 1, 2 do
for zm = -1, 1, 2 do
Edit: @ThanksRoBama 's code is a better idea for 90 degree rotations.
Does your design only need 90 degree rotations? If so, you can figure out the “world-space size” like so:
local function clampToBoundaries(model)
local modelSizeWorldSpace = model.CFrame:VectorToWorldSpace(model.PrimaryPart.Size) -- note: Models have no CFrame property; model.PrimaryPart.CFrame (or model:GetPivot()) is presumably intended
-- the call wrapping these clamps was truncated in the original post; presumably:
model:SetPrimaryPartCFrame(CFrame.new(
math.clamp(model.PrimaryPart.Position.X, bounds.Position.X - bounds.Size.X/2 + modelSizeWorldSpace.X/2, bounds.Position.X + bounds.Size.X/2 - modelSizeWorldSpace.X/2),
model.PrimaryPart.Position.Y, -- assumed: Y is left unclamped
math.clamp(model.PrimaryPart.Position.Z, bounds.Position.Z - bounds.Size.Z/2 + modelSizeWorldSpace.Z/2, bounds.Position.Z + bounds.Size.Z/2 - modelSizeWorldSpace.Z/2)
) * CFrame.Angles(0,rot,0))
end
That produces this:
It seems like it is rotating around a corner and that’s why it goes outside of the boundary. But I have made my own solution:
local XP
local XS
local ZP
local ZS
if rot ~= math.rad(90) and rot ~= math.rad(270) then
XP = model.PrimaryPart.Position.X
XS = model.PrimaryPart.Size.X
ZP = model.PrimaryPart.Position.Z
ZS = model.PrimaryPart.Size.Z
else
XP = model.PrimaryPart.Position.Z
XS = model.PrimaryPart.Size.Z
ZP = model.PrimaryPart.Position.X
ZS = model.PrimaryPart.Size.X
end

-- the call wrapping these clamps was truncated in the original post; presumably:
model:SetPrimaryPartCFrame(CFrame.new(
math.clamp(model.PrimaryPart.Position.X, (bounds.Position.X - bounds.Size.X/2) + XS/2, bounds.Position.X + bounds.Size.X/2 - XS/2),
model.PrimaryPart.Position.Y, -- assumed: Y is left unclamped
math.clamp(model.PrimaryPart.Position.Z, (bounds.Position.Z - bounds.Size.Z/2) + ZS/2, bounds.Position.Z + bounds.Size.Z/2 - ZS/2)
) * CFrame.Angles(0,rot,0))
It basically just detects if the model is rotated and if it is switch the size axis around so the X becomes the Z and the Z becomes the X. This may not be the most efficient solution though so feel
free to keep commenting.
1 Like | {"url":"https://devforum.roblox.com/t/help-with-keeping-parts-positions-inside-of-a-boundary/1331942","timestamp":"2024-11-12T01:13:17Z","content_type":"text/html","content_length":"32108","record_id":"<urn:uuid:5c7635b4-6e9e-4e4a-b276-651b6706a7f0>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00555.warc.gz"} |
Professor Maria Gordina Named Fellow of the AMS | Department of Mathematics
Professor Maria Gordina Named Fellow of the AMS
Professor Maria Gordina has been named a Fellow of the American Mathematical Society.
The Fellows of the American Mathematical Society program recognizes members who have made outstanding contributions to the creation, exposition, advancement, communication, and utilization of mathematics.
“It is an honor to welcome a new class of AMS Fellows and to congratulate them for their notable contributions to mathematics research and service to the profession,” said Professor Ruth Charney,
President of the American Mathematical Society, on the 2023 Class of Fellows of the AMS. “We extend our thanks to the nominators and members of the selection committee for their help in highlighting
the outstanding achievements of their colleagues.”
Professor Gordina is recognized for her contributions to stochastic and geometric analysis, infinite-dimensional analysis, and ergodicity of hypoelliptic diffusions.
She joins 38 other mathematicians to be named Fellow of the AMS for the year 2023. For an alphabetical list of all past Fellows, visit https://www.ams.org/cgi-bin/fellows/fellows.cgi.
Professor Gordina is the fifth member of our department to receive this prestigious distinction. Previous Fellows in our Department are Professor Emeritus Bill Abikoff, Distinguished Professor
Emeritus Richard Bass, Professor Guozhen Lu, and Professor Changfeng Gui.
Congratulations, Professor Gordina! | {"url":"https://math.uconn.edu/2022/11/01/professor-maria-gordina-named-fellow-of-the-ams/","timestamp":"2024-11-12T10:38:32Z","content_type":"text/html","content_length":"102212","record_id":"<urn:uuid:a49fc2e8-f386-4228-b1dd-23e48dba8703>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00092.warc.gz"} |
Gems from thinkorswim – Spread Hacker Terms
So I like to trade options and have recently switched to TD Ameritrade, who has a fairly robust desktop platform, thinkorswim. In fiddling around, the documentation is alright, but some of the terms
and details are not really gone into, leaving you scratching your head. In the Spread Hacker section, I was curious about a few different parameters since I primarily want to trade vertical credit
spreads. Those columns are “Max Profit”, “Prob of Profit” and “PL/Margin”.
Max Profit
This sounds like it should be simple to figure out. Most people would assume it is the maximum profit of the spread; however, it is expressed as a percentage. A percentage of what? What
on earth is this?
It turns out it is actually the maximum profit on the spread divided by the maximum risk. For example, if I am selling 10 136/137 SPY PUT spreads for $0.20 (which means that the SPY is above 137, and
I am short the 137 PUT), then my maximum profit is $200 ($200 = 10 contracts * 100 multiplier * $0.20 per contract). My maximum loss is $800 ($800 = $1 difference between the strikes * 100 multiplier
* 10 contracts – $200 credit).
That means that my “Max Profit” as thinkorswim calculates it is 25%. I am risking $4 to make $1.
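A quick sketch (mine, not thinkorswim's) of the arithmetic for the example spread — 10 contracts of a $1-wide 136/137 SPY put spread sold for a $0.20 credit:

contracts, multiplier = 10, 100
credit, width = 0.20, 1.00

max_profit = contracts * multiplier * credit                 # $200
max_loss = contracts * multiplier * width - max_profit       # $800
print(max_profit, max_loss, f"{max_profit / max_loss:.0%}")  # 200.0 800.0 25%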
Prob of Profit
This is simply the probability of the short strike (the one closest to the current price) being out of the money (OTM) at expiration. The last part is very important because this percentage is not
all that useful to American Style options since they can technically be exercised at any time they are in the money. What we really want is the probability of touching, which is also an option in the
thinkorswim platform.
So just remember that this number is likely much more optimistic than what it is in reality. If you are using European Style options (as is common against indexes), then it is an appropriate
probability since those can only be exercised at expiration. Note that this probability is based off of Brownian Motion, is not exact.
This one looks like they took “Max Profit” and just made it a number. Well, you are exactly right! It is simply (Max Profit / Max Loss) * 100. I am actually not sure why we need both calculations
here. If someone can enlighten me, I would be grateful (just add a comment).
That is all for now. | {"url":"https://www.sharecentric.com/blog/2012/11/24/gems-from-thinkorswim-spread-hacker-terms/","timestamp":"2024-11-02T15:21:54Z","content_type":"text/html","content_length":"53986","record_id":"<urn:uuid:cbc6a57d-d0c5-446a-b2f7-65089fa03305>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00192.warc.gz"} |
How to Write Code for S-Curve Motion Profiles with 7 Segments Using a PLC | Oxmaint Community
More Replies
Peter Nachtwey expressed challenges in programming motion control tasks on a PLC due to various complex scenarios that can arise. These scenarios include moves with fewer than 7 segments, where
reaching commanded velocities or accelerations may be difficult. This leads to the need for determining peak velocities and accelerations, which can be a daunting task for many. Additionally, timing
constraints such as scanning intervals can pose difficulties in ensuring all code is executed efficiently. Despite these challenges, the effectiveness of Delta motion controllers, particularly when
used with RMC Tools software, cannot be ignored. The speed and capabilities of controllers like the RMC150 offer impressive motion control capabilities. This has led to a preference for Delta
controllers in motion control applications. While some may argue that PLCs are not suitable for complex motion control tasks, it ultimately depends on the PLC model and communication protocols in
use. For example, setting up dedicated tasks for motion control with a fast scanning time on a PLC can yield successful results, as demonstrated in a setup managing various tasks including motion
control, machine control, IO, and data acquisition. Furthermore, utilizing protocols like EtherCAT can enhance communication efficiency in PLC-based motion control systems.
Executing long moves becomes simple when the target trajectory reaches the specified acceleration and velocity. However, challenges arise when these parameters are not met. A discussion took place
over a year ago involving cheeco, a German engineer at ESA in Leiden, NL, showcasing his software for creating optimal motion profiles using 3rd order polynomial segments. Despite encountering a bug
initially, Cheeco promptly resolved the issue. Cheeco and I are among the few who have successfully tackled what I refer to as the "problem from hell". It involves transitioning from any
Position-Velocity-Acceleration (PVA) to another PVA in the shortest possible time, a test I use to assess competitor controllers. This task becomes complex when the desired velocity and acceleration
are not achieved. This challenge is so perplexing that a professor at Fordham University referenced it in one of his presentations, utilizing the Fermat program to solve it. However, the solution is
impractical for real-time applications due to its lengthy processing time. Mathematica can also address the issue, but the solution is inefficient for the same reason. A search for Cheeco in the
forum will yield two relevant threads. I have developed a program capable of testing various move combinations from any initial PVA to a final PVA. After conducting 200 trillion iterations, I
identified and rectified numerous flaws in my software, ultimately achieving flaw-free moves. This experience has equipped me with the skills to identify issues in motion controllers effectively.
Delta is set to introduce Ethercat capability within the year, primarily for controlling output devices like valves and drives. My concern with Ethercat lies in the fact that its packets are
essentially CAN Open packets on Ethernet, limited to 8 bytes of data. This restricts the transmission to a 32-bit float and command bits specifying the float's purpose, typically a position. Although
positioning can occur every 50 microseconds, there is a lack of feed-forward data, target velocity, and acceleration - only a target position. As a result, Ethercat proves useful for transmitting
output values to valves and drives but falls short in delivering motion profiles to multiple controllers effectively.
TurkSaleh expressed frustration in finding the necessary equations for each segment and is seeking assistance. A quick Google search provided various answers related to 7 segment motion control.
However, these answers may not address specific edge cases or constraints not yet disclosed by TurkSaleh. It is important to determine if TurkSaleh's PLC can handle the problem at hand.
Peter Nachtwey emphasized the importance of efficiently transitioning between any given position, velocity, and acceleration (PVA) states. This process is crucial for achieving optimal results within
a specified timeframe. It is essential to determine whether the goal is to achieve optimal time or simply meet specific constraints when solving the general case.
Thank you for all your support. Providing more detailed information is essential. My system is equipped with a Siemens 1515 CPU and a G120 drive from Siemens. The task at hand is fairly
straightforward and should be within reach. The ramp up and down time is set at 20 seconds. For this particular project, the setpoint will remain constant during the ramping process. The drive
features an extended ramp function that I aim to implement within the PLC. This function allows for the configuration of initial and final rounding times, resulting in a smoother s-curve ramp instead
of a linear one. While I have come across several SRamp functions, such as those found in the oscat library, they do not allow for the direct setting of ramping time. This can pose a challenge for
operators when adjustments are needed. I stumbled upon a detailed discussion between Peter Nachtwey and Checco, which may be too advanced for my current needs, but I am eager to explore programming
possibilities. Unfortunately, the provided links are not functioning properly.
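For the simple case described here — a constant setpoint and a jerk-limited ramp with configurable rounding — a per-cycle speed-reference generator is enough. Below is a minimal sketch of such an s-ramp (my own assumption of the approach, written in Python for readability rather than SCL; it loosely mirrors the drive's extended ramp-function generator and assumes the rounding time does not exceed the ramp time):

def s_ramp(t, v_set, ramp_time=20.0, rounding=2.0):
    # Jerk-limited speed reference from 0 to v_set, sampled at time t (seconds).
    total = ramp_time + rounding                 # half the rounding is added at each end
    if t <= 0.0:
        return 0.0
    if t >= total:
        return v_set
    a_max = v_set / ramp_time                    # acceleration during the linear segment
    if t < rounding:                             # initial rounding: acceleration ramps up
        return 0.5 * a_max * t * t / rounding
    if t > total - rounding:                     # final rounding: acceleration ramps down
        td = total - t
        return v_set - 0.5 * a_max * td * td / rounding
    return a_max * (t - rounding / 2.0)          # constant-acceleration segment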
TurkSaleh mentioned:@Peter Nachtwey I came across the conversation between you and Checco, which seems a bit too advanced for my current project. However, it would be beneficial if I could implement
it. By the way, the links provided are not functional. When starting and stopping with zero velocity, you may not require all the components mentioned by Peter. This level of complexity may be
excessive for such an application. In cases where parameters may vary, it is important to consider reaching the maximum velocity and acceleration. Once these values are established, it is possible to
calculate the profile points accordingly. If limitations are encountered in either velocity or acceleration, certain segments may be excluded, resulting in identical points. It is recommended to have
a basic understanding of calculus to effectively complete this task.
TurkSaleh mentioned that the application is straightforward and easily attainable. The ramp up/down time will be 20 seconds or more, with a constant setpoint throughout. The drive features the
extended ramp function required for this task, which needs to be implemented within the PLC system. Do you need to execute a motion profile or simply create a speed reference using an s-ramp from the
PLC? Please provide further details about the specific application at hand.
Is the application's performance jerk-limited or acceleration-limited? The reference to 7-segment profiles by the original poster suggests it is likely the former.
Inquiry by drbitboy about whether the application is jerk-limited or acceleration-limited. Based on the mention of 7-segment profiles by the OP, it seems to be jerk-limited. It would be beneficial to
receive more precise information regarding the OP's intentions.
Have you checked out the Ramp Generator feature in the Siemens Converting Library on TurkSaleh's website? This tool can help you easily create ramps for your projects.
Similar to the G120's RFG, but with a PLC-based system.
User jhenson29 inquired about the necessity of running a motion profile versus generating a speed reference as an s-ramp from the PLC. More details on the specific application were requested.
However, it was clarified that a smooth acceleration speed reference at the start and stop points should suffice in this scenario.
According to JRW, the system is akin to the G120's RFG, but it is PLC-based, which is exactly what I was looking for. Understanding the mathematical and physical principles behind this block is
crucial for its operation.
If TurkSaleh is using a Siemens PLC, I highly recommend following JRW's advice and utilizing the library. We tested it for comparison and found that it functions effectively. It is always preferable
to utilize a library that has been thoroughly debugged rather than attempting to recreate it from scratch. Drbitboy, it's time to utilize your Python and Sympy skills to solve the 17 equations for 17
unknowns. Assuming all 7 segments are being used, the 17 unknowns include t01, x1, v1, t12, x2, v2, t23, x3, t34, x4, t45, x5, v5, t56, x6, v6, t67. This calculation assumes that point 0 is the
starting point and x7 is the ending point. The initial and final PVA are known, and the commanded P, V, A values are provided. It's not necessary to consider all the information Peter mentioned. In
my scenario, the motion controller must have the capability to generate any motion profile as requested by the customer. Whether or not the equipment can actually follow it depends on the designers.
The OP's mention of 7-segment profiles suggests the former. It should be noted that achieving the commanded acceleration and velocity may not always be possible for short moves, as in the case of
only 3 segments being used and the motion being jerk-limited.
Peter Nachtwey noted that achieving commanded acceleration and velocity during short moves may be challenging. When there are only a few segments involved, motion is often constrained by jerk
limitations. The question arises: is it the acceleration or jerk that exhibits a Heaviside Function-like profile, with discontinuities in the motion? According to the image from JRW in the Siemens
library, it appears that acceleration is the parameter showing discontinuities.
In a discussion with drbitboy, the focus was on whether acceleration or jerk exhibits a Heaviside Function-like profile, specifically if it is discontinuous. The image from the Siemens library by JRW
appears to show that acceleration is discontinuous. However, motion profiles seen on a S7-1500T display true 7 segment motion profiles that are not optimal and difficult to use. The accelerations are
either constant or ramp up/down based on the jerk rate, with continuous segments and only changes in jerk occurring in steps. The question remains, where can one find solutions for the 17 unknowns?
Peter Nachtwey asked: Where can we find solutions for the 17 unknowns? I am searching for the equations. It seems like the delta link is not working, or perhaps I overlooked it. Can anyone provide
insights on this matter?
Thomas_v2 mentioned that solving turbine control equations is not overly challenging. However, simply programming the equations without truly understanding them or the time it takes to execute them
may not be very effective. It is important to derive the equations using sympy, as recommended. Siemens offers a library for this purpose, but what JRW shared is not the correct one. For more
information on s-curves, check out this old discussion thread: http://www.plctalk.net/qanda/showthread.php?t=35902&page=2.
Peter Nachtwey recommended using sympy to derive the equations instead of what JRW presented, as Siemens offers a library for this purpose. However, it should be noted that the library is not
compatible with g120, does not support DSC, and requires telegram 105.
Peter Nachtwey mentioned that Delta plans to incorporate EtherCAT capability this year, mainly for controlling output devices like valves and drives. However, Peter expressed concern about EtherCAT
packets being similar to CANopen packets on Ethernet, with limited data capacity. While EtherCAT is suitable for sending output values to devices, it may not be ideal for complex motion profiles
across multiple controllers. Peter reached out to Delta to inquire about the availability of EtherCAT in their controllers for an upcoming machine build scheduled for the end of the year. Despite
limited information on a timeline from Delta, Peter is considering including the RMC-200 controller in their component list. In addition, new standards like EtherCAT-G and G10 were introduced last
year for transmitting high volumes of data in motion and vision applications. Peter is not well-versed in these standards but plans to explore them further.
I found the perfect resource at the library LCon_RFGJ that provided exactly what I needed. Thank you to everyone involved in the discussion for helping me learn so much.
Peter believed that only an RMC (Delta's motion controller line) had the capability to handle the task effectively.
Peter believed that only an RMC was capable of achieving the desired outcome, contrary to JRW's statement. While Siemens does have a library that could suffice, the presence of discontinuities in the
output acceleration (shown by the dark blue line) is concerning. The red line appears to be the integral of the blue line or velocity, while the green line seems to represent velocity without
s-curves. It is evident that the RMC can perform better. The challenge remains in finding the equations for the 17 variables mentioned earlier. Without a clear understanding of these formulas,
claiming that programming them in a PLC is simple is unfounded. Siemens' performance in handling even a basic scenario like the one described above is questionable. It is worth noting that Siemens
previously utilized RMC100s for their plastics solution before introducing their own alternative in the late 1990s and early 2000s. Presently, both Siemens and Delta seem to focus less on plastics,
as many OEMs prefer customized solutions over pre-made packages. While Delta and Siemens offer components, some level of assembly is typically required. Nevertheless, there is still a demand for
RMC75s in plastic applications, especially for retrofit projects.
JRW stated that Peter believed only an RMC could accomplish a certain task. However, Peter clarified that he did mention Siemens has a library that could potentially work, but there are still
concerns about discontinuities in the output acceleration. It appears that the red line may represent the integral of the blue line, while the green line seems to show velocity without s-curves. The
RMC has the capability to perform better in this scenario. Peter is still waiting for someone to identify the equations for the 17 variables listed. Without knowledge of the formulas for these
variables, it's difficult to claim that writing the code in a PLC is simple. It's evident that Siemens has not yet perfected the solution, even in a basic case as described above. Peter recalls being
called arrogant by JRW. It should be noted that Siemens utilized RMC100s for their plastics solution in the late 1990s and early 2000s before introducing their own solution. Currently, it seems that
both Siemens and Delta are less involved in plastics applications, as many OEMs prefer customized packages. Delta and Siemens mainly provide components, requiring some assembly. Despite this, a few
RMC75s are still being sold for plastic applications in retrofits.
This is just one of the numerous methods available for tackling 7-segment motion profiles. In scenarios where t56 is negative, it is advisable to flip the sign of all known PVA values and run the
calculation again. While there may be some sign errors initially, the overall mathematical formula is accurate. In conclusion, this methodology is a reliable solution for addressing 7-segment-capable
cases.
Thomas_v2 mentioned that achieving a tolerance of +/- 0.1" without the use of motion controllers is quite manageable. I have successfully accomplished this task in the past, where precise control was
not necessary.
In a discussion by drbitboy, one possible method is outlined that addresses cases where 7-segment motion profiles apply. If t56 is a negative value, the sign of all known Position-Velocity-Acceleration
(PVA) values should be reversed and the process repeated. Although there may be sign errors present, the foundational structure is accurate. The attached document provides further clarification. The
equations provided for calculating position are correct, but it is essential to also determine the corresponding times. A positive value for t56 is typical, except in scenarios where the deceleration
is minimal and the peak deceleration (negative acceleration) is not achieved. In such instances, t56 is zero, indicating the absence of that segment. Similarly, t12 or t34 could be zero, suggesting
the absence of a constant velocity segment. It is crucial to identify the maximum velocity attained in these calculations.
In a statement by Peter Nachtwey, it is highlighted that the peak velocity magnitude (V) is a crucial parameter in the formula provided, especially for cases with non-negative values of t34, t12, and
t56. This formula involves a variable time (t34) at a velocity with a magnitude of V, ensuring that max acceleration/deceleration (+/-A) is always achieved. While the post does not aim to solve all
general cases, it addresses the ongoing requests for 17 equations. Deriving optimal solutions relies on these 17 equations, or a subset of them, along with meticulous record-keeping. It is essential
to consider the times computed for a seven-segment profile, with t34 playing a key role. If t34 is negative, it indicates the need for adjustments in the profile's jerk values and peak velocity
limits. There are only four key equations involved in this process, with the remaining tasks revolving around organizational matters.
In order to find optimal solutions for all scenarios, one must utilize a set of 17 equations, or a subset of them, along with detailed record-keeping. While solving this complex problem manually may
be challenging, employing a Python package can simplify the process. If t34 is negative (not t56 - as mentioned in post #31), it indicates two possibilities. Firstly, the 7-segment profile should
begin with a negative jerk, necessitating an inversion of the PVA inputs. Secondly, the peak velocity must be limited, resulting in a 5- or 3-segment profile with specific conditions for jerk values
and time intervals. Deciding between a negative or positive jerk can be determined, but adjusting the PVAs may be a simpler coding solution. When moving in the positive direction, the initial jerk
should always be positive. Notably, t12, t34, and t56 should never be negative. A negative solution for t34 denotes that segment 4 is non-existent, preventing the attainment of the target velocity.
The only scenario where the initial jerk should be negative is if the actuator is slowing down, typically in response to a command during motion. Calculating or modifying motion profiles dynamically
has not been addressed. Contrary to the notion of 17 equations, there are actually four core equations involved. It is crucial to understand that solving for the 17 unknowns requires 17 equations,
which is applicable only when all 7 segments are present. The determination of segment existence by the target generator necessitates a distinct set of equations for each case. Highlighting the
complexity of motion control programming, it extends beyond a mere PID loop linking an encoder or rod to an output. The target generator is the pivotal component that distinguishes an effective
motion controller from an inefficient one.
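To make the role of the target generator concrete, a rough, hypothetical Python sketch is shown below; the function and parameter names are made up, and this is not how Delta or Siemens implement it. It assumes the hard part, solving for the segment durations and jerks, has already been done, and simply integrates them once per scan.

def generate_targets(seg_times, seg_jerks, dt=0.001, x0=0.0, v0=0.0, a0=0.0):
    """Yield (t, position, velocity, acceleration) once per scan interval."""
    x, v, a, t = x0, v0, a0, 0.0
    for T, j in zip(seg_times, seg_jerks):
        for _ in range(int(round(T / dt))):
            # constant-jerk kinematic update over one scan
            x += v*dt + a*dt*dt/2 + j*dt**3/6
            v += a*dt + j*dt*dt/2
            a += j*dt
            t += dt
            yield t, x, v, a

Stepping like this quietly accumulates error whenever a segment is not a whole number of scans, which is exactly the 66.66 millisecond concern raised later in the thread; a serious generator evaluates the closed-form cubic for each segment at the scan time instead.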
Peter Nachtwey jokingly mentioned his trolling behavior in response to comments about the perceived simplicity of programming in a PLC. While acknowledging Nachtwey's expertise in the field of motion
control, a user expressed a belief in the capabilities of PLCs but emphasized the complexity involved. In planning for future machine builds, the user intends to combine an RMC controller with a PLC
for enhanced functionality. The user appreciates the user-friendly nature of RMC software, which allows technicians to easily create their own programs without the need for specific coding. They look
forward to full EtherCAT support in the future and express a desire for additional features such as the ability to modify motion commands via .NET.
In a discussion on mathematical equations, Peter Nachtwey disputed the claim that there are only four equations, emphasizing that there are actually 17 equations that represent the same system. He
also mentioned the importance of understanding the integrals and discrete functions involved in the calculations. Overall, the conversation highlights the complexity and interconnectivity of
mathematical concepts in physics. Additionally, Peter extended an invitation to meet in person in Rochester to further discuss these topics.
A forum member, busarider, expressed belief in the capabilities of a Programmable Logic Controller (PLC), though acknowledging its imperfections. A previous discussion on PLC vs motion controllers
highlighted the benefits of using motion controllers, with common questions about their capabilities. Have you heard of RMCLink, a .NET assembly that enables programming of the RMC using various
languages like Excel, C, Visual Basic, and Python? Another forum member, drbitboy, suggested meeting in person in Rochester or virtually via Skype to share screens and discuss topics like control
problems and education. For example, a forum member named Pandiani sought help as a college student and advanced to overseeing controls for power plants in Tuzla, Bosnia, with guidance and resources
from the forum. The thread also explored the challenge of generating 17 equations related to a specific target generator, referencing Cheeco's 3rd order generator as the only one to pass the tests
successfully. Understanding the significance of math and physics in solving such problems was emphasized, even though integrals and the discrete jerk function presented challenges in the process.
A forum member, busarider, expressed their belief in the capabilities of PLCs. The original poster (OP) may have found a solution, but it is not flawless or ideal. Over a year ago, I started a
discussion comparing PLCs and motion controllers, highlighting the many reasons why one should consider using a motion controller. People often inquire, "Can your motion controller perform this task?
" My response is usually, "Can you design a machine that can achieve this?" Additionally, have you heard of RMCLink? It is a .NET assembly that allows programming of the RMC using Excel, C, Visual
Basic, Python, and more. Another member, drbitboy, suggested meeting in person in Rochester, but I proposed meeting via Skype and screen sharing. I have frequently engaged in screen sharing sessions
on Saturday mornings with forum member Pandiani, offering assistance with his control issues dating back to his college days. I guided him through his master's degree studies, and now he oversees
control operations for power plants in Tuzla, Bosnia. Pandiani purchased Mathcad and follows my content on the Peter Ponders PID YouTube channel. Addressing a technical aspect, while the integrals
and discrete jerk function may present challenges, they represent the same system and form the basis of the 17 equations. The issue lies in how to generate these 17 equations. As mentioned earlier,
only Cheeco's 3rd order target generator has successfully passed my tests.
Peter Nachtwey posed the question: How can one develop the 17 equations required for this task? To do so, meticulous record-keeping is essential.
Are you familiar with RMCLink, a versatile .NET assembly that enables programming of the RMC using popular languages like Excel, C, Visual Basic, and Python? I have personally utilized RMC Link on a
machine interfacing with the RMC controller. While there currently is no direct method in .NET for adding or omitting steps in an RMC program, it would be advantageous for Delta to consider adding
this capability to their RMC Link library. Though a workaround exists, it may not fully meet the desired functionality.
According to Peter Nachtwey, it is crucial to ensure that the INITIAL jerk is positive when moving in a positive direction. Does this apply solely to a full 7- or 5-segment move? I previously
believed that simply changing the signs would be an easy fix for dealing with a negative initial jerk.
According to drbitboy, full 7- or 5-segment moves are necessary. Changing signs may not be a simple fix for dealing with negative initial jerk. Short moves only need 3 segments. An image .png
displays the various cases a 3rd order motion target generator must address for completeness. Identifying the relevant case is key to solving the current situation effectively.
Peter Nachtwey mentioned that short movements typically consist of only 3 segments. However, it is possible for a very brief movement to require only one segment. Regarding the rule stating that the
initial jerk should always be in the direction of the movement (assuming a positive move from Pinitial to Pfinal), this applies to various types of moves. These include 7-segment moves
(+J,0,-J,0,-J,0,+J), 5-segment moves (+J,0,-J,0,+J), and 3-segment moves (+J,-J,+J). While this rule may not seem universally applicable, especially when considering different initial and final PVAs,
I have yet to find a case that contradicts it.
Thomas_v2 mentioned utilizing Heron's method for square roots and 3D map interpolation on an affordable controller. He shared the strategy of using a lookup table with 1024 entries for square roots
and scaling up inputs and outputs by factors of 4 and 2 through bit shifting for inputs greater than 1024. Everything stays in integer arithmetic, which suits the integer-based encoder counts.
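For readers curious what that lookup-table trick can look like, here is a small hypothetical sketch in Python rather than PLC code; the 1024-entry table and the divide-by-4/double-the-output scaling follow the description above, while the *32 fixed-point scaling of the table entries is an assumption of mine, since the post does not give that detail.

import math

TABLE = [round((i ** 0.5) * 32) for i in range(1024)]   # sqrt(i), scaled by 32 (assumed)

def isqrt_lut(n):
    # Shift the argument down by 2 bits (divide by 4) until it fits the table;
    # each such step doubles the answer, because sqrt(4*x) = 2*sqrt(x).
    shift = 0
    while n >= 1024:
        n >>= 2
        shift += 1
    return (TABLE[n] << shift) >> 5   # undo the *32 table scaling

for value in (100, 5000, 250000):
    print(value, isqrt_lut(value), math.isqrt(value))   # table result vs exact isqrt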
drbitboy explained that the principle of the initial jerk aligning with the direction of movement is rooted in calculus, where the second derivative always points in the same direction. This concept
applies to distance and acceleration as well - to move a positive distance, one must start with a positive acceleration. In the case of positive velocity, an initial positive jerk is necessary. When
already in motion, such as tracing a profile with negative acceleration and velocity, the introduction of a more positive segment results in a positive addition to the existing negative values of
velocity and acceleration, causing the jerk to become positive.
In a discussion, salbayeng mentioned that understanding calculus is essential, especially when dealing with the second derivative. The direction of the second derivative can determine a lot in a
given scenario. For instance, if we consider the second derivative of position over time, what happens when the initial and final acceleration have opposite signs? Additionally, when discussing
positive velocity and jerk, clarity is crucial. It becomes even more intricate when factoring in motion, such as tracing a profile. Does this mean that the rules only apply to straightforward cases
and not universally? Effective communication is key to resolving misunderstandings in complex topics like these.
What are the key principles being discussed here? There is a fundamental rule that reigns supreme - the Lyapunov function. This rule for stability, attributed to Lyapunov, acts as a guide to assess
the accuracy of errors in position, velocity, and acceleration. By analyzing these errors, adjustments can be made to ensure convergence with minimal error. However, the explanations surrounding the
Lyapunov function can be quite complex, with even the Wolfram example falling short. This technique may appear baffling at first, leaving many scratching their heads. For instance, a PLC has the
capability to create a 7-segment motion profile, but the challenge lies in understanding how to implement this feature within the forum community. With no clear solution in sight after three weeks of
discussion, the easier and more cost-effective option may be to invest in a motion controller.
Peter Nachtwey mentioned that the solution to reaching Vmax and Amax has not been discovered by anyone on the forum yet, but not everyone has attempted. A solution for this simple problem has been
provided, focusing on understanding the terms involved. Nachtwey also suggested that after 3 weeks, it may be more cost-effective to purchase a motion controller, but perhaps not as engaging.
Peter Nachtwey pointed out the lack of solutions on this forum for calculating motion profiles. It appears I will be the one to inquire: How long will you continue to endure the embarrassment of
pretending that calculating motion profiles is a challenging task?
In a forum post, jhenson29 questioned the difficulty of calculating motion profiles and challenged the idea that it is easy. With 17 equations and 17 unknowns for just one scenario, it is clear that
this task is anything but simple. Are you up for the challenge of mastering motion profiles with multiple complex cases to solve? Let's dive in and explore the intricacies together.
Peter Nachtwey mentioned that there are numerous scenarios that one must consider. It is important to improve on generalizing when it comes to adding integers. On this forum, no one has successfully
demonstrated the ability to add integers, except for myself and one other individual. While it may be easy to add 1, encountering numbers like 3 or negative integers presents a challenge. There are
countless cases to consider when adding integers. Can you solve all of them?
User jhenson29 commented that improvement is needed in generalizing skills. Have you viewed the .png file provided? Each case shown is unique, making it suitable for diverse scenarios.
Peter Nachtwey expressed his thoughts by stating that this criteria applies to a wide range of cases. However, have you considered whether your .png file accounts for all possible scenarios,
including the case of a 7? Take a moment to ponder this and see if you can anticipate the direction I am heading in.
User jhenson29 inquired about the capability of a .png file to handle all cases, including a case involving the number 7. The mentioned .png files, such as seg1234567 and seg123567, cater to
different scenarios where segments are utilized differently. The process also factors in peak velocity calculation, real-time adjustments, and reversing motion sequences. To delve deeper into these
concepts, refer to this insightful discussion on PLCTalk forum: http://www.plctalk.net/qanda/showthread.php?t=119883. Discover a straightforward solution to these complex motion control scenarios.
Peter Nachtwey mentioned the presence of a file named seg1234567 that handles scenarios involving all 7 segments being used. Another file, seg123567, deals with cases where the commanded velocity is
not achieved, resulting in no constant velocity segment. In such instances, determining the peak velocity attained can be challenging. It seems you may have misunderstood the query. The focus was not
on 7 segments but on the numerical value 7 as an input. To clarify, calculations do not segregate data based on individual segment time spans or different numeric inputs; instead, a unified approach
is adopted. Consider the possibility of developing a single abstraction capable of computing targets for any input value. This emphasizes the importance of enhancing generalization skills and
minimizing the reliance on specific "cases" for calculation purposes.
User jhenson29 expressed confusion about the initial response, emphasizing that the query was not about 7 segments but rather about utilizing the number 7 as an input parameter. The user seeks
clarity on how to handle parameters like position, velocity, and acceleration when the mind is unclear. To simplify, the user suggests starting with a comprehensive understanding of the issue at
hand. The user highlights the ability to create a universal abstraction capable of calculating various targets based on any input. This concept prompts the user to inquire about the definition and
demonstration of an abstraction. The user advocates for improved generalization skills and mocks the notion of segregating different scenarios into separate cases.
In a forum discussion, jhenson29 expressed interest in understanding the specifics of what the original poster (OP) is aiming to achieve. Recognizing their expertise in the topic, a question was
posed regarding designing S-curve motion profiles for a rapid camera slider. The slider covers a 4-meter distance in under 1 second and is equipped with pan and tilt heads driven by stepper motors
controlled by Arduino. Parameters such as time, acceleration, and distance are predetermined before the motion begins. Can S-curve motion profiles be implemented in this setup?
To calculate the symmetrical slide for the given inputs, the solution works out as follows (|a| = const, symmetrical; slide time = 1 s):
x_0.5 = 4 / 2 = 2
t_0.5 = 1 / 2 = 0.5
a = 2*x_0.5 / t_0.5^2 = 2*(4/2) / (0.5)^2 = 4 / 0.25 = 16
V_0.5 = a * t_0.5 = 16 * 0.5 = 8
for t < 0.5: x = 8 * t^2
for t > 0.5: x = 2 + 8*(t-0.5) - 8*(t-0.5)^2
It is important to note that while this solution may not be entirely satisfactory, it is the best that can be achieved given the available input data.
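Restating the arithmetic above as a short Python sketch may make it easier to follow; it only re-evaluates the constant-acceleration solution already given, nothing new is added.

def x_of_t(t, total_x=4.0, total_t=1.0):
    # constant +a for the first half of the move, constant -a for the second half
    a = 2 * (total_x / 2) / (total_t / 2) ** 2   # 16 m/s^2 for 4 m in 1 s
    t_half = total_t / 2
    v_half = a * t_half                           # 8 m/s peak velocity at mid-move
    if t < t_half:
        return 0.5 * a * t * t                    # reduces to 8*t^2 for these numbers
    dt = t - t_half
    return total_x / 2 + v_half * dt - 0.5 * a * dt * dt

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, x_of_t(t))    # 0.0, 0.5, 2.0, 3.5, 4.0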
In a recent discussion, MaxK shared a formula for calculating acceleration with symmetrical slide time. This formula involves various variables such as a, constSymmetricalslidetime, sx_0.5, t_0.5,
and V_0.5. The calculated value of acceleration is 16 when using the given input data. Even though the solution provided may not be completely satisfactory due to the lack of sufficient input data,
it still serves as a helpful guide. It seems that the scenario involves constant acceleration. How can one further enhance the acceleration curve for added smoothness? Your assistance is greatly
appreciated in improving the accuracy of these calculations. Thank you for your insights!
Henry_ inquired if the acceleration provided has a constant rate and asked how to incorporate smoothing into the acceleration curve. MaxK calculated the linear acceleration rate for both ramping up
and ramping down, but did not offer a suitable polynomial for achieving a smooth s-curve. An ideal s-curve should start and end with zero acceleration during ramp up and ramp down. With a constant
acceleration rate of 16 m/s^2, the peak acceleration during ramp up would be around 24 m/s^2. It's worth considering if the system can handle such rapid acceleration, which is over 2.3g.
Additionally, MaxK determined a peak speed of 8 m/s, prompting the question of whether the motor can handle this speed. Ensure to assess whether the system will be limited by speed or acceleration.
The camera system's mass will also play a role in the design process. Once the system design is finalized, it's recommended to invest in a quality motion controller. While 7 segment motion control is
a viable option, it may introduce complexities that can be avoided by customizing the motion segments to prevent rounding errors. For example, consider if a segment takes 66.66 milliseconds but the
PLC can only scan at 1 millisecond intervals. This discrepancy could lead to significant errors in both time and distance.
It has been nearly two years since the inception of this discussion. Has anyone successfully implemented a 7-segment motion profile using a PLC? Surprisingly, the 17 formulas required for this task
have not yet been shared. I mentioned previously that the S7-1500 PLC already includes this functionality, but no one has reported achieving a 7-segment motion profile using ladder logic or ST
programming yet. Why is that? Creating 7-segment motion profiles is a complex process, leading many to opt for purchasing a motion controller instead. Interestingly, the older RMC100 model utilized
cosine ramps for motion control. I have previously shared instructions on how to create a simple 3rd order ramp on this forum. In this scenario, the acceleration is not equal to zero at the start and
end of the ramp, while the position and velocity are specified.
Primary school math can be a fun exercise in solving equations. When calculating the position of an object over time, understanding the various orders of acceleration is crucial. In the case of
symmetrical accelerations and decelerations, the equations become more straightforward to solve. For example, when analyzing the motion of an object with varying accelerations, such as in the
provided equations, it is important to consider the symmetry between acceleration and deceleration. By calculating the position of the object at different time points, such as at 0.25 seconds or 0.75
seconds, we can see how the acceleration and deceleration affect its movement. However, if the requirements and limitations of the system are more complex, such as the need to recalculate curves
dynamically or adhere to speed restrictions, then it may be worth considering investing in a motion controller. This can help simplify the process and ensure accurate results, especially when dealing
with non-symmetrical acceleration patterns. In conclusion, while solving equations for motion can be challenging, understanding the specific requirements and limitations of the system is crucial. By
considering all factors and potentially utilizing a motion controller, you can achieve more accurate and efficient results in analyzing object motion.
Peter Nachtwey inquired whether anyone has successfully implemented a 7-segment motion profile on a PLC. As of now, the 17 necessary formulas have not been developed. While this has not been achieved
by anyone yet, it remains a priority on my task list.
In a nutshell, MaxK discussed a basic math concept involving primary school level equations. The formula involves calculating the position (x) at different points in time (t) using acceleration and
deceleration. The solution provided is for a 3rd order equation, where accelerations and decelerations are symmetrical. The calculations show how to determine the position based on specified
accelerations and decelerations within specific time intervals. The formula is broken down into segments based on different time intervals (0-0.25, 0.25-0.75, etc.) to accurately compute the position
at each point. Ultimately, the discussion emphasizes the importance of understanding system limitations and requirements before designing any motion control system. It highlights the need for precise
calculations and considerations to avoid costly mistakes in the design process.
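The post's actual formulas are not reproduced here, but a quick numeric check of a symmetric third-order (jerk-limited) version of the same 4 m in 1 s move is sketched below. The 0-0.25 / 0.25-0.75 / 0.75-1.0 s segment split follows the time points mentioned; the jerk value J = 128 m/s^3 is my own assumption, chosen because it makes the distance come out to 4 m for that split.

J, dt = 128.0, 1e-5          # J is assumed; dt is just the integration step
x = v = a = t = 0.0
while t < 1.0 - 1e-12:
    j = J if (t < 0.25 or t >= 0.75) else -J    # +J, -J, +J over the three segments
    x += v*dt + a*dt*dt/2 + j*dt**3/6           # constant-jerk update per step
    v += a*dt + j*dt*dt/2
    a += j*dt
    t += dt
print(round(x, 2), round(v, 2), round(a, 2))    # approx 4.0 m, 0 m/s, 0 m/s^2

With those numbers the peak acceleration is J*0.25 = 32 m/s^2, twice the constant-acceleration case since the acceleration is now triangular, while the peak velocity is still 8 m/s.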
drbitboy mentioned that although it's not a priority for him right now, it's definitely on his to-do list. He is eagerly anticipating the completion of the task. When dealing with a 7-segment
profile, the challenge lies in managing the acceleration, velocity, and speed limits, especially when the movement is extensive.
Peter Nachtwey mentioned that he is patiently waiting for the 7-segment profile, especially when the move is extended, and the acceleration, velocity, and speed constraints are met. However, he finds
the segments with zero or negative duration to be more intriguing and challenging.
Peter Nachtwey mentioned waiting for the 7-segment profile to reach its acceleration, velocity, and speed limits during long moves. Drbitboy also noted that segments with zero or negative durations
pose an interesting challenge. Is this related to a 3rd order system with constraints? Let's steer clear of mathematics for now. There appears to be a technical and commercial dilemma here: for less
critical systems without strict time and accuracy requirements, simplified solutions can be devised. However, for precise solutions, PN likely has the answers. It's doubtful that others will code the
published mathematics, and motion controllers may be the best option for exact control requirements. Why is PN creating competition for his colleagues in this aspect?
For more than two decades, I have been equipped with the expertise needed for motion control projects, dating back to the development days of the RMC75. In the earlier version, RMC100, cosine ramps
were utilized. The complex nature of these projects involves solving a set of 17 equations with 17 unknowns. When approached with a new project, I inquire about the specifics such as mass, speed, and
duration of the motion. By utilizing Mathcad worksheets, I can swiftly determine the necessary requirements. I have formulas readily available that cater to motion profiles involving acceleration,
deceleration, and constant velocity in equal thirds of time. The calculated velocity formula (velocity=(3*Δx)/(2*Δt)) and acceleration formula (acceleration = (9*Δx)/(2*Δt^2)) assume linear ramps
rather than S-shaped ramps. For S-ramps, peak acceleration is around 1.5 times the average acceleration. The complexity of motion generation involving 7 segments is highlighted in my directory of
work files. Out of various segment combinations, the "seg1234567" combination is typically sufficient. Managing the PID control in comparison is a simpler task.
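For reference, plugging the thread's earlier 4 m in 1 s example into those equal-thirds formulas is a one-liner; this sketch adds nothing beyond the quoted expressions.

dx, dt = 4.0, 1.0                      # the 4 m in 1 s example from earlier
v = 3 * dx / (2 * dt)                  # peak velocity, 6 m/s
a = 9 * dx / (2 * dt ** 2)             # acceleration for linear ramps, 18 m/s^2
print(v, a, 1.5 * a)                   # an s-ramp peaks at roughly 1.5x that, ~27 m/s^2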
My extensive knowledge spans from the beginning of the universe, even before the singularity. I possess wisdom that surpasses mortal understanding, which was revealed to me through 17 sacred tablets
and 17 secret runes. Among the truths I have uncovered are the fact that a triangle has 3 angles and a square has 4 corners. However, when it comes to a circle (or possibly an ellipse), my
calculations diverge from those of others. My wisdom delves into various topics, including the relationships between angles in geometric shapes and the mysteries of the universe. Contained within my
vast repository of wisdom are insights into global intelligence services, the Masonic conspiracy, and the reptilian conspiracy. However, one question remains unanswered: What has become of PN's
self-criticism and humility? In comparison, all other inquiries and opinions pale in significance. This serves as a reminder of the futility of attempting to engage with PN on any level.
It appears that you may be feeling bitter. If you're interested, I can show you my collection of documented bugs that I encountered on my journey. I came across numerous issues and successfully
addressed them. I have a knack for identifying challenges in achieving certain goals, especially when it comes to high speeds and accelerations. I have been able to save clients a significant amount
of money by highlighting potential difficulties early on. While some tasks may be feasible, the associated costs may be prohibitive. In the context of the customer's requirements, cosine ramps offer
a simple solution compared to complex 3 segment 3rd order ramps. The limitations of old technology, such as the RMC100's 80186 processor without floating point capability, made the use of polynomials
impossible. Despite raising this concern earlier, it went unheeded.
@MaxK....I must disagree! Some time ago, I conducted a search for an s-curve which led me to Pete's highly effective program that I successfully implemented on an RPi-Pico using interpreted BASIC.
The Trapezoidal function was not as smooth as I desired, but now I have a 1Khz PID and trajectory generator operating on a mere $10 MCU, complemented by a $5 quad-decoder/counter (LS7366) and a $5
12-bit DAC. This solution runs on a self-hosted, open-source BASIC interpreter at zero cost.....Simply magnificent! No need for overpriced PLC software that becomes useless once the license expires.
Many thanks to Pete for this amazing solution. And let's not forget the impressive 64bit position range and the astonishing 40M quad counts per second speed.
@Tinine, was that the ramp fraction version you were referring to? The formula for calculating velocity during ramping up is x(f) = ramp_dist * (3*f^2 - 2*f^3) + x(0), where f represents the fraction
of the ramp time. And for ramping down, the formula is x(f) = x(1) - ramp_dist * (1 - 3*f^2 + 2*f^3). These calculations work effectively for both positions and velocities, with the option to derive
velocity from the equations. Additionally, there is a fifth order version available where accelerations start and end at 0 during the ramps. It's worth noting that the speed and acceleration
capabilities of the motor do not affect the accuracy of these calculations. Unfortunately, there has been no response from the original poster regarding this matter.
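A small sketch of those ramp-fraction formulas follows; ramp_dist, x0 and ramp_time are made-up example values, and the velocity expression is simply the derivative of the quoted cubic divided by the ramp time.

ramp_dist, x0, ramp_time = 2.0, 0.0, 0.5    # made-up example values

def x_up(f):
    # position during ramp-up as a function of the ramp fraction f in [0, 1]
    return ramp_dist * (3*f**2 - 2*f**3) + x0

def v_up(f):
    # velocity is dx/dt = (dx/df) / ramp_time
    return ramp_dist * (6*f - 6*f**2) / ramp_time

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f, round(x_up(f), 4), round(v_up(f), 4))
# velocity starts and ends at zero and peaks at 1.5 * ramp_dist / ramp_time
# at f = 0.5, matching the "peak is about 1.5x the average" rule of thumb above.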
Indeed, that particular solution works wonderfully. However, it is essential for the motor to have the ability to move and accelerate rapidly for optimal performance. Unfortunately, there was no
response from the original poster regarding this issue. I am aware that the same principle applies to extremely high rates of PID sampling. From my experience, I have found that lower sampling rates
(500Hz) are easier to tune in certain scenarios without sacrificing performance.
Hey @Peter Nachtwey! It's been a while since I last looked at this, but I noticed that my code seems to be more straightforward than yours. In my original test routine, I declared variables such as
res, samptime, f, intspeed, ramp_segments, and frac. I set ramp_segments to 1/1000 and initialized res to 1000 for synchronization purposes. By incrementing samptime by 1 each time, I was able to
achieve 1ms determinism. Using calculations involving frac and f, I determined intspeed and printed the results. The loop continued until samptime exceeded res.
Tinine expressed curiosity to Peter Nachtwey about the simplicity of their test routine in comparison to his. The coefficients and functions used were similar, with the main difference being the
inclusion of starting and stopping points. Tinine suggested the addition of LaTeX, possibly through MathJax, for better representation of mathematical equations.
@Peter Nachtwey, I'm not sure what you meant in the last part of your message, but your elegant solution has helped me save a significant amount of money. I plan to semi-mass-produce this product.
Thank you!
Tinine expressed her disagreement with MaxK by saying that it was unfair. She questioned if he was trying to shame, judge, or punish her. The complexity of knowledge can sometimes lead to confusion.
Do you understand why the formula "3*f^2-2*f^3" works in a certain way? The spoiler reveals the equation for the calculation.
MaxK expressed feelings of discouragement, suggesting a lack of understanding of complex mathematical formulas. Rather than attempting to shame or judge, he acknowledges his role as a hardworking
individual focused on delivering product successfully. Admitting to seeking help when needed, he emphasizes the value of research and professional assistance. Despite watching instructional videos on
YouTube, MaxK still struggles with comprehension, opting to read online threads for knowledge as opposed to watching television.
An old discussion resurfaces with a common question that arises periodically. However, the solution remains the same and I have all the answers to it. Despite being labeled as arrogant by JRW, my
expertise in hydraulic motion control, modeling, and testing has been recognized by IEEE.org and I am a part of the International Fluid Power Society hall of fame. With over 40 years of experience in
selling motion controllers, I possess profound knowledge that remains timeless. I have a proven method showcased in an example [eg1234567] where I calculate velocity and acceleration based on the
user's desire to move 4 meters in 1 second. Key parameters such as mass, distance, and speed play a crucial role in the success of any project. By addressing these aspects, I have helped customers
avoid costly mistakes and impractical ventures. Although a fast PLC may have the capability to run complex formulas, it may struggle to achieve the desired acceleration in a short distance move. This
challenging scenario, aptly named the "problem from hell," has been tackled by forum member cheeco and myself, distinguishing us as experts in the field. I have been in correspondence with Robert
Lewis from Fordham University, who acknowledged the complexity and intricacies of motion controller mathematics. My extensive experience has equipped me with algorithms that could enhance PLC
performance, yet I refrain from sharing them to maintain the integrity of standard motion control sales. As for drbitboy and MaxK, I invite them to grasp the complexity involved in solving 19
unknowns with 19 equations, including the calculation of jerk in motion profiles. After four decades of mastering motion control, I am astounded by the oversimplification of tasks in the industry.
Peter Nachtwey stated that he didn't want to wait years for someone else to figure it out, so he decided to share a straightforward example, like "eg1234567". He appreciates the offer to expand, but
prefers to figure it out independently when he can make time to focus, rather than getting distracted by playing pubg. | {"url":"https://community.oxmaint.com/discussion-forum/how-to-write-code-for-s-curve-motion-profiles-with-7-segments-using-a-plc","timestamp":"2024-11-13T18:25:02Z","content_type":"text/html","content_length":"101775","record_id":"<urn:uuid:6dbe426f-5aef-4111-9b8f-906782e194f1>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00853.warc.gz"} |
Relation Algebra
This is a development version of this entry. It might change over time and is not stable. Please refer to release versions for citations.
Tarski's algebra of binary relations is formalised along the lines of the standard textbooks of Maddux and Schmidt and Ströhlein. This includes relation-algebraic concepts such as subidentities,
vectors and a domain operation as well as various notions associated to functions. Relation algebras are also expanded by a reflexive transitive closure operation, and they are linked with Kleene
algebras and models of binary relations and Boolean matrices.
Session Relation_Algebra | {"url":"https://devel.isa-afp.org/entries/Relation_Algebra.html","timestamp":"2024-11-01T23:27:13Z","content_type":"text/html","content_length":"13022","record_id":"<urn:uuid:2faedc97-0ba6-4c37-ab56-015f01d54286>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00663.warc.gz"} |
Bug in converting complex numbers? - Printable Version
Bug in converting complex numbers? - blevita - 07-03-2018 08:01 AM
I have tried to convert a complex number in the style of 100*exp(60 Deg)
Usually they should be converted to something like:
100*(cos(60)+i*sin(60)) = 50+86.6*i
But the Prime converts the expression 100*exp(60 Deg) to -95.24 - 30.48*i
This is a strange behaviour to me.
Do i have to enter the complex number in an other way?
Or is this a software bug?
Attached a picture
RE: Bug in converting complex numbers? - blevita - 07-03-2018 08:17 AM
I have tested this behaviour a little bit more.
It looks like there is a serious bug in the Prime.
If i extract the RE part within the CAS it gives me the correct value of 50.
If i extract the exact same thing withing the Calculator it gives me -95.
If i approx the value within the CAS, also the CAS fails to convert correctly!
Take a look at the pictures.
RE: Bug in converting complex numbers? - sasa - 07-03-2018 08:59 AM
I do not have a Prime; however, this seems to be a problem of misinterpreting whether the input is in degrees or radians, and of how the result is then represented.
Do you have to set independently degree and radian for each calculator mode (HOME/CAS)?
If you check manually, you will see that second result is correct if numbers are expected in radian: sin(60)=-0,3048106211 and so on.
RE: Bug in converting complex numbers? - blevita - 07-03-2018 09:03 AM
(07-03-2018 08:59 AM)sasa Wrote: Do you have to set independently degree and radian for both calculator modes (HOME/CAS)?
If you check manually, you will see that second result is correct if numbers are expected in radian: sin(rad(60))=-0,3048106211 and so on.
Ok, interesting.
I have set the CAS and calculator to Degrees.
You can see this also in the top right corner where you see the green Degree symbol.
I have also switched to radians without success.
But you are right. If i enter radians, the answer is correct.
Thanks for your quick response!
RE: Bug in converting complex numbers? - sasa - 07-03-2018 09:11 AM
Notice that x in exp(x) is not expected to be an angle, thus you have to do conversion manually.
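For anyone who wants to reproduce the two results being compared, a quick check in Python (not on the Prime itself) is shown below; it simply confirms that taking the 60 as radians gives the value blevita reported.

import cmath, math
print(100 * cmath.exp(1j * math.radians(60)))   # (50+86.6j): 60 converted to radians first
print(100 * cmath.exp(1j * 60))                 # (-95.24-30.48j): 60 taken as radians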
RE: Bug in converting complex numbers? - Nigel (UK) - 07-04-2018 11:04 AM
In general, trig functions on the Prime interpret any complex number as being in radians. For example, in degrees mode sin(30) is 0.5, but sin (30+0*i) is -0.988, even when a degrees sign is added
after the "30".
I believe this is standard practice on calculators that support trig and exponential functions with complex arguments, so it should be viewed as a feature, not a bug!
Nigel (UK) | {"url":"https://www.hpmuseum.org/forum/printthread.php?tid=10999","timestamp":"2024-11-08T16:05:58Z","content_type":"application/xhtml+xml","content_length":"6199","record_id":"<urn:uuid:6f91a74a-85a7-43fb-8f35-fa9d7d8d3bee>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00696.warc.gz"} |
Re: Rehash of regular expression question...
Joe Hummel <jhummel@cy4.ICS.UCI.EDU>
Thu, 24 Feb 1994 00:11:07 GMT
From comp.compilers
| List of all articles for this month |
Newsgroups: comp.compilers
From: Joe Hummel <jhummel@cy4.ICS.UCI.EDU>
Keywords: DFA, theory
Organization: UC Irvine, Department of ICS
References: 93-12-062 94-02-152
Date: Thu, 24 Feb 1994 00:11:07 GMT
>Hum, from Alg class I don't agree that determining if two reg langs
>are equal is NP-hard. I believe we can take any regular expression
>and convert it to a NFA in P-time. We can take two NFA's and take
>their Unions, Compliments and Intersections in P-time. Then we can
>create new NFA that is (L1 intersect L2') union (L1' intersect L2).
>If L1 = L2 then L3 is empty. We can determine in P-time a NFA/DFA
>is empty. Therefore, isn't the problem P, and not NP?
RE -> NFA can be done in P-time. As for complement, a quick peek at
Hopcroft and Ullman states the following in their construction proof of
complementation: "Note that it is essential to the proof that M is
deterministic and without E moves." The implication of course is that you
must first convert the NFA to a DFA before proceeding with the
construction. Since NFA -> DFA can experience exponential state
explosion, you end up with a problem in NP.
Is there an algorithm to do complementation directly on an NFA in P-Time?
I'd sure like to know if there is :-)
- joe
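As an editorial aside, the symmetric-difference construction described in the quoted post is straightforward to sketch once both languages are already given as complete DFAs, where complementation is just flipping the accepting states; the expensive NFA-to-DFA subset construction discussed above is precisely the step this sketch assumes away.

def dfas_equivalent(d1, d2):
    """True iff two complete DFAs accept the same language.
    Each DFA is (alphabet, delta, start, accepting), where delta is a dict
    mapping (state, symbol) -> state and is defined for every symbol."""
    (S, t1, s1, F1), (_, t2, s2, F2) = d1, d2
    seen, stack = set(), [(s1, s2)]
    while stack:
        p, q = stack.pop()
        if (p, q) in seen:
            continue
        seen.add((p, q))
        if (p in F1) != (q in F2):   # reachable pair accepted by exactly one DFA
            return False
        for a in S:
            stack.append((t1[(p, a)], t2[(q, a)]))
    return True

# tiny usage example: "even number of a's", written two different ways
d_even1 = ({'a'}, {(0, 'a'): 1, (1, 'a'): 0}, 0, {0})
d_even2 = ({'a'}, {('x', 'a'): 'y', ('y', 'a'): 'x'}, 'x', {'x'})
print(dfas_equivalent(d_even1, d_even2))   # True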
Post a followup to this message
Return to the comp.compilers page.
Search the comp.compilers archives again. | {"url":"https://compilers.iecc.com/comparch/article/94-02-181","timestamp":"2024-11-07T00:34:16Z","content_type":"text/html","content_length":"4200","record_id":"<urn:uuid:84cc315a-3080-4c04-9a23-6e620579b9ff>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00264.warc.gz"} |
Why and How to Implement Random Rotate Data Augmentation
Computer vision data augmentation is a powerful way to improve the performance of our computer vision models without needing to collect additional data. We create new versions of our images based on
the originals but introduce deliberate imperfections. This helps our model learn what an object generally looks like rather than having it memorize the specific way objects appear in our training
Various augmentation options available in Roboflow
Data augmentation works because it increases the semantic coverage of a dataset. Simply put, we introduce additional life-like examples from which our model can learn. In our recent tests on the
various image data augmentation techniques, we saw approximately a six-point increase in mean average precision on our example dataset.
Performance increase from augmentation on a sparse custom dataset in this post on boosting performance with augmentation
In fact, data augmentation in YOLOv4 is one of the distinguishing reasons the model achieves state-of-the-art performance (a ten-point lift in mAP over prior YOLOv3).
Random Rotation Data Augmentation
One common data augmentation technique is random rotation. A source image is random rotated clockwise or counterclockwise by some number of degrees, changing the position of the object in frame.
Notably, for object detection problems, the bounding box must also be updated to encompass the resulting object. (We’ll discuss more about this below.)
Image rotation at various levels of rotation angle
When to Use Random Rotate Augmentation
Random Rotate is a useful augmentation in particular because it changes the angles that objects appear in your dataset during training. Perhaps, during the image collection process, images were only
collected with an object horizontally, but in production, the object could be skewed in either direction. Random rotation can improve your model without you having to collect and label more data.
Consider if you were building a mobile app to identify chess pieces. The user may not have their phone perfectly perpendicular to the table where the chess board sets; therefore, chess pieces could
appear to be rotated in either direction. In this case, random rotation may be a great choice to simulate what various chess pieces may look like without meticulously capturing every different angle.
Tip: If the camera position is not fixed relative to your subjects (like in a mobile app), random rotation is likely a helpful image augmentation.
Random rotation can also help combat potential overfitting. Even in cases where the camera position is fixed relative to the subjects your model is encountering, random rotation can increase
variation to prevent a model from memorizing your training data.
However, like most methods, random rotation is not a silver bullet.
When to Not Use Random Rotate Augmentation
When you perform a data augmentation, it is important to consider everything the augmentation is doing to your images to decide if the augmentation is the right choice for your dataset. In some
cases, random rotation may not be the right choice for your dataset.
As a practical note, in order to randomly rotate an image, either its corners must be cut off at the top and bottom, or the image must grow in size to avoid cropping its edges.
Note how edges of our chess piece images above are cropped.
Thus, the first reason you should consider not using random rotation is if there is valuable content in the original corners of your images.
Second, after rotation, the image corners are unknown and must be filled with padding. This is the resulting black space we see in the rotated examples above.
Third, you may be in a domain setting where objects in the image do not naturally rotate. For example, street signs for a car driving down the road.
Fourth, when the image rotates, the bounding box must as well and the bounding box expands on rotation (unless it is a square). This can be problematic if a lot of your bounding boxes are skinny
rectangles because the model will be encouraged to predict much larger objects than it otherwise would. More on this in the implementation.
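As a rough illustration of that growth, consider the tight 70x200 pixel box used in the example later in this post after a 40 degree rotation; the numbers below are only back-of-the-envelope.

import math
w, h, theta = 70, 200, math.radians(40)
new_w = w * abs(math.cos(theta)) + h * abs(math.sin(theta))
new_h = w * abs(math.sin(theta)) + h * abs(math.cos(theta))
print(round(new_w), round(new_h))   # roughly 182 x 198: a much "fatter" box than 70 x 200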
Ok, you're still convinced!
Let's get cracking on the implementation.
How to Implement Random Rotation in Code (Image)
Now we will dive into the code required to make a random rotation on your image.
All of this code can be executed in this Colab notebook performing random rotation.
To get started, I have put up an example raccoon image for you, and some example annotations.
Note your object annotations may be different, but the flow of this tutorial will still be the same.
Let's take a look at our image and annotations and display our image
!wget https://imgur.com/5MLvXMw.jpg
%mv 5MLvXMw.jpg example.jpg
annotation = {"label":"raccoon","x":150,"y":300,"width":70,"height":200}
Displaying our image...
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from PIL import Image
import numpy as np
im = np.array(Image.open('example.jpg'), dtype=np.uint8)
# Create figure and axes
fig,ax = plt.subplots(1)
# Display the image
ax.imshow(im)
# Create a Rectangle patch
height = annotation["height"]
width = annotation["width"]
x = annotation["x"] - (width/2)
y = annotation["y"] - (height/2)
rect = patches.Rectangle((x,y),width,height,linewidth=5,edgecolor='r',facecolor='none')
# Add the patch to the Axes
ax.add_patch(rect)
plt.show()
Our example image
Next we define the affine warp function. An affine warp is a transformation of an image that preserves parallel lines.
The affine warp multiplies the two dimensional pixel space by a 2x3 matrix M, mapping each pixel (x, y) to M * [x, y, 1]^T; for a rotation, the upper-left 2x2 block holds the cosine and sine terms of the rotation angle and the last column holds the translation offsets.
Thankfully the openCV package takes care of most of these details for us and we only define the following:
import numpy as np
import cv2
import math
import copy
def warpAffine(src, M, dsize, from_bounding_box_only=False):
"""Applies cv2 warpAffine, marking transparency if bounding box only.
The last of the 4 channels is merely a marker. It does not specify opacity in the usual way."""
return cv2.warpAffine(src, M, dsize)
Then we define a function to rotate an image counterclockwise by one angle. First, we get the rotation matrix for a 2D image from cv2. Then, we get the new bounding dimensions of the image from the
sine and the cosine of the rotation matrix (above), to adjust the new height and width of the image.
Then we adjust the matrix to keep in mind this height and width translation. And finally perform the affine transformation.
For this example, we rotate by 40 degrees.
def rotate_image(image, angle):
"""Rotate the image counterclockwise.
Rotate the image such that the rotated image is enclosed inside the
tightest rectangle. The area not occupied by the pixels of the original
image is colored black.
image : numpy.ndarray
numpy image
angle : float
angle by which the image is to be rotated. Positive angle is counterclockwise.
Rotated Image
"""
# get dims, find center
(h, w) = image.shape[:2]
(cX, cY) = (w // 2, h // 2)
# grab the rotation matrix (applying the negative of the
# angle to rotate clockwise), then grab the sine and cosine
# (i.e., the rotation components of the matrix)
M = cv2.getRotationMatrix2D((cX, cY), angle, 1.0)
cos = np.abs(M[0, 0])
sin = np.abs(M[0, 1])
# compute the new bounding dimensions of the image
nW = int((h * sin) + (w * cos))
nH = int((h * cos) + (w * sin))
# adjust the rotation matrix to take into account translation
M[0, 2] += (nW / 2) - cX
M[1, 2] += (nH / 2) - cY
# perform the actual rotation and return the image
image = warpAffine(image, M, (nW, nH), False)
# image = cv2.resize(image, (w,h))
return image
Let's take a look!
from skimage.io import imread
img_path = 'example.jpg'
im = imread(img_path).astype(np.float64) / 255
from skimage import data, io
from matplotlib import pyplot as plt
rotated = rotate_image(im, 40)
io.imshow(rotated)
Our image randomly rotated
Now, the problem here is that the transformation has changed the dimensions of our image. In order to correct for this, we implement a crop to center to return to the original image's dimensions.
def crop_to_center(old_img, new_img):
"""Crops `new_img` to `old_img` dimensions
:param old_img: <numpy.ndarray> or <tuple> dimensions
:param new_img: <numpy.ndarray>
:return: <numpy.ndarray> new image cropped to old image dimensions
"""
if isinstance(old_img, tuple):
original_shape = old_img
original_shape = old_img.shape
original_width = original_shape[1]
original_height = original_shape[0]
original_center_x = original_shape[1] / 2
original_center_y = original_shape[0] / 2
new_width = new_img.shape[1]
new_height = new_img.shape[0]
new_center_x = new_img.shape[1] / 2
new_center_y = new_img.shape[0] / 2
new_left_x = int(max(new_center_x - original_width / 2, 0))
new_right_x = int(min(new_center_x + original_width / 2, new_width))
new_top_y = int(max(new_center_y - original_height / 2, 0))
new_bottom_y = int(min(new_center_y + original_height / 2, new_height))
# create new img canvas
canvas = np.zeros(original_shape)
left_x = int(max(original_center_x - new_width / 2, 0))
right_x = int(min(original_center_x + new_width / 2, original_width))
top_y = int(max(original_center_y - new_height / 2, 0))
bottom_y = int(min(original_center_y + new_height / 2, original_height))
canvas[top_y:bottom_y, left_x:right_x] = new_img[new_top_y:new_bottom_y, new_left_x:new_right_x]
return canvas
Our rotated image cropped with the original dimensions
Image complete! If you are doing classification, you may stop here. For object detection, we need to continue onward to rotate the bounding box.
How to Implement Random Rotation in Code (Bounding Box)
To rotate the bounding box, we need to take the original bounding box and translate it about the origin for the same rotation angle that the image was rotated. Then, our bounding box will be somewhat
diamond shaped and we will need to draw a larger bounding box (blue) around it to make sure that we capture our target.
Rotating the bounding box annotation
To implement this, we first define a function to rotate a point about the origin like so:
def rotate_point(origin, point, angle):
"""Rotate a point counterclockwise by a given angle around a given origin.
:param angle: <float> Angle in radians.
Positive angle is counterclockwise.
"""
ox, oy = origin
px, py = point
qx = ox + math.cos(angle) * (px - ox) - math.sin(angle) * (py - oy)
qy = oy + math.sin(angle) * (px - ox) + math.cos(angle) * (py - oy)
return qx, qy
Then we define our annotation rotation function with the following steps. First we rotate the middle of the image about the origin, then we rotate each point about the origin. Then, we calculate the
maximum distance between points for the width and height dimensions of the final bounding box.
def rotate_annotation(origin, annotation, degree):
"""Rotates an annotation's bounding box by `degree`
counterclockwise about `origin`.
Assumes cropping from center to preserve image dimensions.
:param origin: <tuple> down is positive
:param annotation: <dict>
:param degree: <int> degrees by which to rotate
(positive is counterclockwise)
:return: <dict> annotation after rotation
"""
# Don't mutate annotation
new_annotation = copy.deepcopy(annotation)
# new_annotation = annotation
angle = math.radians(degree)
origin_x, origin_y = origin
origin_y *= -1
x = annotation["x"]
y = annotation["y"]
new_x, new_y = map(lambda x: round(x * 2) / 2, rotate_point(
(origin_x, origin_y), (x, -y), angle)
new_annotation["x"] = new_x
new_annotation["y"] = -new_y
width = annotation["width"]
height = annotation["height"]
left_x = x - width / 2
right_x = x + width / 2
top_y = y - height / 2
bottom_y = y + height / 2
c1 = (left_x, top_y)
c2 = (right_x, top_y)
c3 = (right_x, bottom_y)
c4 = (left_x, bottom_y)
c1 = rotate_point(origin, c1, angle)
c2 = rotate_point(origin, c2, angle)
c3 = rotate_point(origin, c3, angle)
c4 = rotate_point(origin, c4, angle)
x_coords, y_coords = zip(c1, c2, c3, c4)
new_annotation["width"] = round(max(x_coords) - min(x_coords))
new_annotation["height"] = round(max(y_coords) - min(y_coords))
return new_annotation
And finally we try it out on our annotation :
origin = (im.shape[1] / 2, im.shape[0] / 2)
new_annot = rotate_annotation(
origin, annotation, 40
Visualizing the new bounding box:
from skimage import data, io
from matplotlib import pyplot as plt
rotated = rotate_image(im, 40)
rotated = crop_to_center(im, rotated)
fig,ax = plt.subplots(1)
ax.imshow(rotated)
# Create a Rectangle patch
height = new_annot["height"]
width = new_annot["width"]
x = new_annot["x"] - (width/2)
y = new_annot["y"] - (height/2)
rect = patches.Rectangle((x,y),width,height,linewidth=5,edgecolor='b',facecolor='none')
# Add the patch to the Axes
ax.add_patch(rect)
plt.show()
Our image rotated with the bounding box annotation
How to Implement Random Rotation in Roboflow
In order to implement random rotation at a dataset level, you need to keep track of multiple annotations across all of your images as you randomly rotate them. This can be quite tricky in practice.
We have implemented a solution to random rotate for the dataset level at Roboflow. Once you have loaded in your dataset with drag and drop functionality in any format, you can perform image
augmentations including random rotate by selecting the augmentations you would like to perform and the number of derivative images.
Randomly rotating images at the dataset level in Roboflow
In this tutorial, we have covered how to augment computer vision data with random rotation. We have discussed why to use random rotation and special situations where you might want to avoid it. We
walked through code you can use on your own images, and looked at an automated solution through Roboflow.
Thanks for reading! Happy augmenting 🧐
Cite this Post
Use the following entry to cite this post in your research:
Jacob Solawetz. (Jun 24, 2020). Why and How to Implement Random Rotate Data Augmentation. Roboflow Blog: https://blog.roboflow.com/why-and-how-to-implement-random-rotate-data-augmentation/
Discuss this Post
If you have any questions about this blog post, start a discussion on the Roboflow Forum.
Written by
Jacob Solawetz
Founding Engineer @ Roboflow - ascending the 1/loss | {"url":"https://blog.roboflow.com/why-and-how-to-implement-random-rotate-data-augmentation/?ref=blog.streamlit.io","timestamp":"2024-11-12T22:55:23Z","content_type":"text/html","content_length":"108267","record_id":"<urn:uuid:c3dd57ef-32e4-403c-b1fd-16f717a1f5c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00200.warc.gz"} |
How to solve linear functions
Author Message
JolijC Posted: Sunday 24th of Dec 09:20
Hi, I need some immediate help on how to solve linear functions. I’ve browsed through various websites for topics like powers and leading coefficient but none could help me solve my
problem relating to how to solve linear functions. I have a test in a few days from now and if I don’t start working on my problem then I might just fail my exam. I’ve even tried
calling a few of my peers, but they seem to be in the same situation. So guys, please help me.
From: Ohio
oc_rana Posted: Monday 25th of Dec 07:44
I have a solution for you and trust me it’s even better than buying a new textbook. Try Algebra Master, it covers a pretty comprehensive list of mathematical topics and is highly
recommended. With it you can solve various types of questions and it’ll also address all your enquiries as to how it came up with a particular answer. I tried it when I was having
difficulty solving problems based on how to solve linear functions and I really enjoyed using it.
Jrahan Posted: Tuesday 26th of Dec 07:49
Algebra Master is an excellent piece of software. All I had to do with my difficulties with relations, binomials and graphing lines was to simply type in the problems, click ‘solve’ and
presto, the solution popped out step-by-step in an effortless manner. I have used it on problems in Algebra 2, College Algebra and Remedial Algebra. I would boldly say that
this is just the answer for you.
From: UK
Jot Posted: Wednesday 27th of Dec 20:02
I suggest using Algebra Master. It not only helps you with your math problems, but also provides all the required steps in detail so that you can improve the understanding of the
From: Ubik | {"url":"http://algebra-test.com/algebra-help/3x3-system-of-equations/how-to-solve-linear-functions.html","timestamp":"2024-11-08T02:45:28Z","content_type":"application/xhtml+xml","content_length":"17637","record_id":"<urn:uuid:de2fc6d2-03a6-487f-8b2e-a8c276d2aead>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00347.warc.gz"} |
Consider the following scatter plot and least squares equations,... | Transtutors
Consider the following scatter plot and least squares equations, and estimate the standard error of the least squares equation, s.
[Scatter plot from the "Regression by eye" applet, sample size 80: X runs roughly from 150 to 350 and Y roughly from 800 to 1600. Standard Error s ~ (enter an integer).]
The question is repeated with two parts: (i) estimate s from the "Regression by eye" applet with sample size 60 (X roughly 10 to 30, Y roughly 5 to 20; keep one decimal); (ii) estimate s from the applet with sample size 80 (enter your answer and click Check Answer).
Copy and paste your question here... | {"url":"https://www.transtutors.com/questions/consider-the-following-scatter-plot-and-least-squares-equations-estimate-the--10664155.htm","timestamp":"2024-11-13T17:59:37Z","content_type":"application/xhtml+xml","content_length":"72708","record_id":"<urn:uuid:8c0faeec-d992-4f47-b20f-02488d5a834f>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00231.warc.gz"} |
The notion of $(\infty,1)$-operad is to that of (∞,1)-category as operad is to category.
So, roughly, an $(\infty,1)$-operad is an algebraic structure that has, for each given list of input types and each single output type, an ∞-groupoid of operations that take these inputs to that output.
There is a fairly evident notion of an ∞-algebra over an (∞,1)-operad. Examples include
$(\infty,1)$-Operads form an (∞,2)-category (∞,1)Operad.
Many equivalent models for $(\infty,1)$-operads exist to date, including dendroidal sets, $(\infty,1)$-categories of operators, and complete Segal operads.
We focus on the first two. The first models $(\infty,1)$-operads as dendroidal sets, in close analogy to (in fact as a generalization of) how simplicial sets model (∞,1)-categories.
The second models the (∞,1)-category version of a category of operators of an operad.
In terms of dendroidal sets
Here simplicial sets are generalized to dendroidal sets. The theory of $(\infty,1)$-operads is then formulated in terms of dendroidal sets in close analogy to how the theory of (∞,1)-categories is
formulated in terms of simplicial sets.
There is a model structure on dendroidal sets whose fibrant objects are the quasi-operads in direct analogy to the notion of quasi-category.
So the model structure on dendroidal sets is a presention of the (∞,1)-category of $(\infty,1)$-operads. It is Quillen equivalent to the standard model structure on operads enriched over Top or sSet.
Therefore, conversely, the traditional homotopy-theoretic constructions on topological and chain operads (such as cofibrant resolutions in order to present homotopy algebras such as A-∞
algebras, homotopy BV-algebras and the like) are also indeed presentations of $(\infty,1)$-operads.
In terms of $(\infty,1)$-categories of operators
Every operad $A$ encodes and is encoded by its category of operators $C_A$. In the approach to $(\infty,1)$-operators described below, the notion of category of operators is generalized to an (∞,1)
-category of operators.
In this approach an $(\infty,1)$-operad $C^\otimes$ is regarded as an (∞,1)-category $C$ – the unary part of the $(\infty,1)$-operad to be described– with extra structure that determines (∞,1)
-functors $C^{\times n} \to C$.
This and the conditions on these are encoded in requiring that $C^\otimes$ is an $(\infty,1)$-functor $C^\otimes \to \Gamma$ over Segal's category $\Gamma$ of pointed finite sets, satisfying some conditions.
In particular, any symmetric monoidal (∞,1)-category yields an example of an $(\infty,1)$-operad in this sense. In fact, symmetric monoidal $(\infty,1)$-categories can be defined as $(\infty,1)$
-operads such that the functor $C^\otimes \to \Gamma$ is a coCartesian fibration. (For the moment, see monoidal (infinity,1)-category for more comments and references on higher operads in this sense.)
This is the approach described in (LurieCommutative)
Basic definitions
We are to generalize the following construction from categories to (∞,1)-categories.
For $\mathcal{O}$ a symmetric multicategory, write $\mathcal{O}^\otimes \to FinSet^{*/}$ for its category of operators.
Here $\mathcal{O}^\otimes$ is the category whose
• objects are finite sequences (tuples) of objects of $\mathcal{O}$;
• morphisms $(X_1, \cdots, X_{n_1}) \to (Y_1, \cdots, Y_{n_2})$ are given by a morphism $\alpha \colon \langle n_1\rangle \to \langle n_2\rangle$ in $FinSet_*$ together with a collection of
$\left\{ \phi_j \in \mathcal{O}\left( \left\{ X_i\right\}_{i \in \alpha^{-1}\left\{j\right\}} , Y_j \right) \right\}_{1 \leq j \leq n_2} \,.$
The functor $p \colon \mathcal{O}^\otimes \to FinSet^{*/}$ is the evident forgetful functor.
In (Lurie) this is construction 2.1.1.7.
This motivates the following definition of the generalization of this situation to (∞,1)-category theory.
Write $FinSet^{*/}$ for the category of pointed finite set (Segal's Gamma-category).
For $n \in \mathbb{N}$ we write
$\langle n\rangle \coloneqq {*} \coprod [n] \in FinSet^{*/}$
for the pointed set with $n+1$ elements.
A morphism in $FinSet^{*/}$
• is called an inert morphism if it is a surjection, and an injection on those elements that are not sent to the base point. That is, the preimage of every non-base point is a singleton.
• is called an active morphism if only the basepoint goes to the basepoint.
For $n \in \mathbb{N}$ and $1 \leq i \leq n$ write
$\rho^i \colon \langle n\rangle \to \langle 1\rangle$
for the inert morphism that sends all but the $i$th element to the basepoint.
Notice that for each $n \in \mathbb{N}$ there is a unique active morphism $\langle n\rangle \to \langle 1\rangle$.
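For concreteness: the morphism $\alpha \colon \langle 3\rangle \to \langle 2\rangle$ with $\alpha(1) = 1$, $\alpha(2) = *$, $\alpha(3) = 2$ is inert, since the preimages of the non-base points $1$ and $2$ are the singletons $\{1\}$ and $\{3\}$, respectively; the morphism $\beta \colon \langle 3\rangle \to \langle 2\rangle$ with $\beta(1) = \beta(2) = 1$ and $\beta(3) = 2$ is active, since only the basepoint is sent to the basepoint. In this notation $\rho^2 \colon \langle 3\rangle \to \langle 1\rangle$ is the inert morphism with $\rho^2(2) = 1$ and $\rho^2(1) = \rho^2(3) = *$.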
The $(\infty,1)$-category of operators of an $(\infty,1)$-operad is a morphism
$p \colon \mathcal{O}^\otimes \to FinSet^{*/}$
of quasi-categories such that the following conditions hold:
1. For every inert morphism in $FinSet^{*/}$ and every object over it, there is a lift to a $p$-coCartesian morphism in $\mathcal{O}^\otimes$. In particular, for $f \colon \langle n_1\rangle \to \
langle n_2\rangle$ inert, there is an induced (∞,1)-functor
$f_! \colon \mathcal{O}^\otimes_{\langle n_1\rangle} \to \mathcal{O}^\otimes_{\langle n_2\rangle} \,.$
2. The coCartesian lifts of the inert projection morphisms induce an equivalence of derived hom-spaces in $\mathcal{O}^{\otimes}$ between maps into multiple objects and the products of the maps into
the separate objects:
For $f \colon \langle n_1 \rangle \to \langle n_2 \rangle$ write $\mathcal{O}^\otimes_f(-,-) \hookrightarrow \mathcal{O}^\otimes(-,-)$ for the components of the derived hom-space covering $f$,
then the $(\infty,1)$-functor
$\mathcal{O}^\otimes_f(C_1,C_2) \to \underset{1 \leq i \leq n_2}{\prod} \mathcal{O}^\otimes_{\rho^i\circ f}(C_1,(C_2)_i)$
induced as above is an equivalence.
3. For every finite collection of objects $C_1, \cdots, C_n \in \mathcal{O}^\otimes_{\langle 1\rangle}$ there exists a multiobject $C \in \mathcal{O}^\otimes_{\langle n\rangle}$ and a collection of
$p$-coCartesian morphisms $\{C \to C_i\}$ covering $\rho^i$.
Equivalently (given the first two conditions): for all $n \in \mathbb{N}$ the $(\infty,1)$-functors $\{(\rho^i)_!\}_{1 \leq i \leq n}$ induce an equivalence of (∞,1)-categories
$\mathcal{O}^\otimes_{\langle n\rangle} \to (\mathcal{O}^\otimes_{\langle 1\rangle})^{\times^n}$
(Lurie, def. 2.1.1.10, remark 2.1.1.14)
We now turn to the definition of homomorphisms of $(\infty,1)$-operads.
Given an $(\infty,1)$-operad $p \colon \mathcal{O}^\otimes \to FinSet^{*/}$ as in def. , a morphism $f$ in $\mathcal{O}^\otimes$ is called an inert morphism if
1. $p(f)$ is an inert morphism in $FinSet^{*/}$ by def. ;
2. $f$ is a $p$-coCartesian morphism.
Morphisms of operads $\mathcal{O}_1 \to \mathcal{O}_2$ can be understood equivalently as exhibiting an $\mathcal{O}_1$-algebra in $\mathcal{O}_2$. Therefore:
We also have the notion of
See there for more details.
Model for $(\infty,1)$-categories of operators
There is a model category that presents the (∞,1)-category $(\infty,1)Cat_{Oper}$ of $(\infty,1)$-categories of operations.
There exists a model category $\mathcal{P} Op_{(\infty,1)}$
• whose underlying category has
□ objects are marked simplicial sets $S$ equipped with a morphism $S \to N(FinSet_*)$ such that marked edges map to inert morphisms in $FinSet_*$ (those for which the preimage of the marked
point contains just the marked point)
□ morphisms are morphisms of marked simplicial sets $S \to T$ such that the triangle
$\array{ S &&\to&& T \\ & \searrow && \swarrow \\ && N(FinSet_*) }$
commutes;
• which is canonically an SSet-enriched category;
• and whose model structure is given by
□ cofibrations are those morphisms whose underlying morphisms of simplicial sets are cofibrations, hence monomorphisms
□ weak equivalences are those morphisms $S \to T$ such that for all $A \to N(FinSet_*)$ that are $(\infty,1)$-categories of operations by the above definition, the morphism of SSet-hom objects
$\mathcal{P}Op_\infty(T,A) \to \mathcal{P}Op_\infty(S,A)$
is a homotopy equivalence of simplicial sets.
□ an object is fibrant if and only if it is an $(\infty,1)$-category of operations, by the above definition.
This is prop 1.8 4 in
We list some examples of $(\infty,1)$-operads incarnated as their (∞,1)-categories of operators by def. .
The first basic examples to follow are in fact all given by 1-categories of operators.
The associative operad has $Assoc^\otimes$ the category whose objects are the natural numbers, whose $n$-ary operations are labeled by the total orders on $n$ elements, equivalently the elements of
the symmetric group $\Sigma_n$, and whose composition is given by forming consecutive total orders in the obvious way.
The (∞,1)-algebras over an (∞,1)-operad over this $(\infty,1)$-operad are A-∞ algebras
In (Lurie) this is remark 4.1.1.4.
The operad for modules over an algebra $LM$ is the colored symmetric operad whose
• objects are two elements, to be denoted $\mathfrak{a}$ and $\mathfrak{n}$;
• multimorphisms $(X_i)_{i = 1}^n \to Y$ form
□ if $Y = \mathfrak{a}$ and $X_i = \mathfrak{a}$ for all $i$ then: the set of linear orders on $n$ elements, equivalently the elements of the symmetric group $\Sigma_n$;
□ if $Y = \mathfrak{n}$ and exactly one of the $X_i = \mathfrak{n}$ then: the set of linear orders $\{i_1 \lt \cdots \lt i_n\}$ such that $X_{i_n} = \mathfrak{n}$
□ otherwise: the empty set;
• composition is given by composition of linear orders as for the associative operad.
The (∞,1)-algebras over an (∞,1)-operad over this $(\infty,1)$-operad are pairs consisting of A-∞ algebras with (∞,1)-modules over them.
In (Lurie) this appears as def. 4.2.1.1.
The operad for bimodules over algebras $BMod$ is the colored symmetric operad whose
• objects are three elements, to be denoted $\mathfrak{a}_-, \mathfrak{a}_+$ and $\mathfrak{n}$;
• multimorphisms $(X_i)_{i = 1}^n \to Y$ form
□ if $Y = \mathfrak{a}_-$ and all $X_i = \mathfrak{a}_-$ then: the set of linear orders of $n$ elements;
□ if $Y = \mathfrak{a}_+$ and all $X_i = \mathfrak{a}_+$ then again: the set of linear orders of $n$ elements;
□ if $Y = \mathfrak{n}$: the set of linear orders $\{i_1 \lt \cdots \lt i_n\}$ such that there is exactly one index $i_k$ with $X_{i_k} = \mathfrak{n}$ and $X_{i_j} = \mathfrak{a}_-$ for all $j
\lt k$ and $X_{i_j} = \mathfrak{a}_+$ for all $j \gt k$.
• composition is given by the composition of linear orders as for the associative operad.
The (∞,1)-algebras over an (∞,1)-operad over this $(\infty,1)$-operad are pairs consisting of two A-∞ algebras with an (∞,1)-bimodule over them.
Equivalence between the two definitions
The dendroidal and category-of-operators models are equivalent, as shown in e.g. Hinich-Moerdijk. The following is another strategy to do so.
There is an obvious way to regard a tree as an $(\infty,1)$-category of operators:
(dendroidal $(\infty,1)$-category of operators)
Let $\omega : \Omega \hookrightarrow Op \stackrel{C_{(-)}}{\to} Cat/FinSet_* \stackrel{N}{\to} \mathcal{P}Op_{(\infty,1)}$
be the dendroidal object given by the following composition:
• $\Omega \hookrightarrow Op$ is the functor from the tree category $\Omega$ to the category of symmetric colored operads (over Set) that sends a tree to the operad freely generated from it;
• $Op \stackrel{C_{(-)}}{\to} Cat/FinSet_*$ sends an operad to its category of operators;
• $Cat/FinSet_* \stackrel{N}{\to} \mathcal{P}Op_{(\infty,1)}$ takes the nerve of this category, regarded as a marked simplicial set over $N(FinSet_*)$, whose marked edges are the inert morphisms in
the category of operations.
Following the general pattern of nerve and realization, we get:
(dendroidal nerve of Lurie-$\infty$-operad)
The functor
$N_d := Hom_{\mathcal{P}Op_{(\infty,1)}}(\omega(-), -): \mathcal{P}Op_{(\infty,1)} \to dSet$
that sends a marked simplicial set $A \to N(FinSet_*)$ to the dendroidal set which sends a tree $T$ to the set of morphisms of $\omega(T)$ into $A$
$N_d(A) : T \mapsto Hom_{\mathcal{P}Op_{(\infty,1)}}(\omega(T), A)$
is the dendroidal nerve of $A$.
One expects that $N_d$ induces a Quillen adjunction and indeed a Quillen equivalence between the above model category structure on $\mathcal{P}Op_{(\infty,1)}$ and the model structure on dendroidal
sets. The following is as far as I think I can prove aspects of this. -Urs.
The dendroidal nerve functor has the following properties:
• it is the right adjoint of a SSet-enriched adjunction
$C_{(-)} : dSet \stackrel{\leftarrow}{\to} \mathcal{P}Op_{(\infty,1)} : N_d$
• it sends fibrant objects to fibrant objects
i.e. it sends $(\infty,1)$-categories of operations to $(\infty,1)$-operads in their incarnation as “quasi-operads”;
• it sends objects $\pi : A \to N(FinSet_*)$ that come from grouplike symmetric monoidal ∞-groupoids to fully Kan dendroidal sets (that have the extension property with respect to all horns)
• it sends objects $\pi : A \to N(FinSet_*)$ that come from symmetric monoidal (∞,1)-categories to dendroidal sets that have the extension property with respect to at least one outer horn $\Lambda_
{v} T$ for $v \in T$ an $n$-corolla, for all $n \in \mathbb{N}$.
• its left adjoint sends cofibrations to cofibrations and acyclic cofibrations with cofibrant domain to acyclic cofibrations.
respect for fibrant objects. If $A \to N(FinSet_*)$ is fibrant, then in particular $A$ is a weak Kan complex, hence has the extension property with respect to all inner horn inclusions of simplices. We
need to show that this implies that $N_d(A)$ has the extension property with respect to all inner horn inclusions of trees.
By an (at the moment unpublished) result by Moerdijk, the right lifting property with respect to inner horn inclusions of trees is equivalent to the right lifting property with respect to inclusions of spines of trees: the union over all the corollas in a tree.
For this the extension property means that if we find a collection $\{C_{k_i} \to N_d(A)\} = Sp(T)$ of corollas in $N_d(A)$ that match at some inputs and output, then these can be composed to an
image $T \to N_d(A)$ of the corresponding tree $T$ in $N_d(A)$.
An image of $T$ in $N_d(A)$ is an image of $\omega(T)$ in $A$. In the category of operators $\omega(T)$ every tree may be represented as the composite of a sequence of morphisms, each of which
consists of precisely one of the corollas $C_{k_i}$ in parallel to identity morphisms. This way gluing the tree from the corollas is a matter of composing a sequence of edges in $A$. But this is
guaranteed to be possible if $A$ is a weak Kan complex.
symmetric monoidal product and outer horn lifting
As described at cartesian morphism, an edge $f : \Delta^1 \to A$ in $A$ is coCartesian if for all diagrams
$\array{ \Delta^{0,1} \\ \downarrow & \searrow^f \\ \Lambda^n_0 &\to & A \\ \downarrow && \downarrow \\ \Delta^n &\to& N(FinSet_*) }$
of 0-horn lifting problems where the first edge of the horn is $f$ itself, there exists a lift
$\array{ \Delta^{0,1} \\ \downarrow & \searrow^f \\ \Lambda^n_0 &\to & A \\ \downarrow & \nearrow & \downarrow \\ \Delta^n &\to& N(FinSet_*) } \,.$
For $f$ the parallel application of an $n$-corolla with a collection of identity morphisms, this implies that for any outer horn $\Lambda_v T \to N_d(A)$ whose vertex $v : C_n \to N_d(A)$ maps to
$f$, the dendroidal set $N_d(A)$ has the extension property with respect to the inclusion $\Lambda_v T \hookrightarrow T$.
the left adjoint and its respect for cofibrations
By general nonsense the left adjoint to $N_d$ is given by the coend
$C_{(-)} : dSet \to \mathcal{P}Op_{(\infty,1)}$
$C_P = \int^{T \in \Omega} \omega(T) \cdot P(T) \,,$
where in the integrand we have the tautological tensoring of $\mathcal{P}Op_{(\infty,1)}$ over Set.
Notice that $\omega : \Omega \to \mathcal{P}Op_{(\infty,1)}$ is an SSet-enriched functor for the ordinary category $\Omega$ regarded as a simplicially enriched category by the canonical embedding
$Set \hookrightarrow SSet$. Therefore this adjunction $F \dashv N_d$ is defined entirely in SSet-enriched category theory and hence is a simplicial adjunction.
The model structure on dendroidal sets has a set of generating cofibrations given by the boundary inclusions of trees, $\partial \Omega[T] \hookrightarrow \Omega[T]$. These evidently map to
monomorphisms of underlying simplicial sets under $F$, hence to cofibrations.
For $f : X \hookrightarrow Y$ an acyclic cofibration with cofibrant domain, we need to check that $C_f : C_X \to C_Y$ is a weak equivalence in $\mathcal{P}Op_{(\infty,1)}$. This is by definition the
case if for every fibrant object $A$ the morphism
$\mathcal{P}Op_{(\infty,1)}(C_Y,A) \to \mathcal{P}Op_{(\infty,1)}(C_X,A)$
is a weak equivalence in the standard model structure on simplicial sets. By the simplicial adjunction $F \dashv N_d$ this is equivalent to
$dSet(f,N_d(A)) : dSet(Y,N_d(A)) \to dSet(X,N_d(A))$
being a weak equivalence. By the above $N_d(A)$ is fibrant. By section 8.4 of the lecture notes on dendroidal sets cited at model structure on dendroidal sets a morphism between cofibrant dendroidal
sets is a weak equivalence precisely if homming it into any fibrant dendroidal set produces an equivalence of homotopy categories.
Since $f$ is a weak equivalence between cofibrant objects by assumption, it follows that indeed $dSet(f,N_d(A))$ is a weak equivalence for all fibrant $A$.
(AHM, or does it? there is a prob here, but I need to run now…)
Hence $C_f$ is a weak equivalence.
The formulation in terms of dendroidal sets is due to
Here are two blog entries on talks on this stuff:
The formulation in terms of an $(\infty,1)$-version of the category of operators is introduced in
and further discussed in
Now in section 2 of the textbook
The equivalence between the dendroidal set-formulation and the one in terms of $(\infty,1)$-categories of operators is shown in
and made symmetric monoidal in
Further equivalence to Barwick’s complete Segal operads is discussed in
For an account in terms of analytic monads, that is, monads that are cartesian (multiplication and unit transformations are cartesian) and the underlying endofunctor preserves sifted colimits and
wide pullbacks (or equivalently all weakly contractible limits), see
For an account in terms of symmetric monoidal categories and equifibrations, see
On the Eckmann-Hilton argument for (∞,1)-operads: | {"url":"https://ncatlab.org/nlab/show/%28infinity,1%29-operad","timestamp":"2024-11-14T15:43:39Z","content_type":"application/xhtml+xml","content_length":"143557","record_id":"<urn:uuid:5d68d186-8f21-4728-a736-b991782ae0cb>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00164.warc.gz"} |
Euclidean vector
A spatial vector, or simply vector, is a geometric object which has both a magnitude and a direction. A vector is frequently represented by a line segment connecting the initial point A with the
terminal point B and denoted
The magnitude is the length of the segment and the direction characterizes the displacement of B relative to A: how much one should move the point A to "carry" it to the point B.
Many algebraic operations on real numbers have close analogues for vectors. Vectors can be added, subtracted, multiplied by a number, and flipped around so that the direction is reversed. These
operations obey the familiar algebraic laws: commutativity, associativity, distributivity. The sum of two vectors with the same initial point can be found geometrically using the parallelogram law.
Multiplication by a positive number, commonly called a scalar in this context, amounts to changing the magnitude of vector, that is, stretching or compressing it while keeping its direction;
multiplication by -1 preserves the magnitude of the vector but reverses its direction.
Cartesian coordinates provide a systematic way of describing vectors and operations on them. A vector becomes a triple of real numbers, its components. Addition of vectors and multiplication of a
vector by a scalar are simply done component by component, see coordinate vector.
Vectors play an important role in physics: velocity and acceleration of a moving object and forces acting on a body are all described by vectors. Many other physical quantities can be usefully
thought of as vectors. One has to keep in mind, however, that the components of a physical vector depend on the coordinate system used to describe it. Other vector-like objects that describe physical
quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors.
Informally, a vector is a quantity characterized by a magnitude (in mathematics a number, in physics a number times a unit) and a direction, often represented graphically by an arrow. Sometimes, one
speaks of bound or fixed vectors, which are vectors whose initial point is the origin. This is in contrast to free vectors, which are vectors whose initial point is not necessarily the origin.
Use in physics and engineering
Vectors are fundamental in the physical sciences. They can be used to represent any quantity that has both a magnitude and direction, such as velocity, the magnitude of which is speed. For example,
the velocity 5 meters per second upward could be represented by the vector (0,5). Another quantity represented by a vector is force, since it has a magnitude and direction. Vectors also describe many
other physical quantities, such as displacement, acceleration, electric and magnetic fields, momentum, and angular momentum.
Vectors in Cartesian space
In Cartesian coordinates, a vector can be represented by identifying the coordinates of its initial and terminal point. For instance, the points A = (1,0,0) and B = (0,1,0) in space determine the
free vector $\overrightarrow{AB}$ pointing from the point x=1 on the x-axis to the point y=1 on the y-axis.
Typically in Cartesian coordinates, one considers primarily bound vectors. A bound vector is determined by the coordinates of the terminal point, its initial point always having the coordinates of
the origin O = (0,0,0). Thus the bound vector represented by (1,0,0) is a vector of unit length pointing from the origin up the positive x-axis.
The coordinate representation of vectors allows the algebraic features of vectors to be expressed in a convenient numerical fashion. For example, the sum of the vectors (1,2,3) and (-2,0,4) is the
$(1,\, 2,\, 3) + (-2,\, 0,\, 4)=(1-2,\, 2+0,\, 3+4)=(-1,\, 2,\, 7).\,$
Euclidean vectors and affine vectors
In the geometrical and physical settings, sometimes it is possible to associate, in a natural way, a length to vectors as well as the notion of an angle between two vectors. When the length of
vectors is defined, it is possible to also define a dot product — a scalar-valued product of two vectors — which gives a convenient algebraic characterization of both length and angle. In
three-dimensions, it is further possible to define a cross product which supplies an algebraic characterization of area.
However, it is not always possible or desirable to define the length of a vector in a natural way. This more general type of spatial vector is the subject of vector spaces (for bound vectors) and
affine spaces (for free vectors).
In more general sorts of coordinate systems, rotations of a vector (and also of tensors) can be generalized and categorized to admit an analogous characterization by their covariance and
contravariance under changes of coordinates.
In mathematics, a vector is considered more than a representation of a physical quantity. In general, a vector is any element of a vector space over some field. The spatial vectors of this article
are a very special case of this general definition (they are not simply any element of R^d in d dimensions), which includes a variety of mathematical objects ( algebras, the set of all functions from
a given domain to a given linear range, and linear transformations). Note that under this definition, a tensor is a special vector.
Representation of a vector
Vectors are usually denoted in boldface, as a. Other conventions include $\vec{a}$ or a, especially in handwriting. Alternately, some use a tilde (~) or a wavy underline drawn beneath the symbol,
which is a convention for indicating boldface type.
Vectors are usually shown in graphs or other diagrams as arrows, as illustrated below:
Here the point A is called the initial point, tail, or base; point B is called the head, tip, or endpoint. The length of the arrow represents the vector's magnitude, while the direction in which the
arrow points represents the vector's direction.
In the figure above, the arrow can also be written as $\overrightarrow{AB}$ or AB.
On a two-dimensional diagram, sometimes a vector perpendicular to the plane of the diagram is desired. These vectors are commonly shown as small circles. A circle with a dot at its centre indicates a
vector pointing out of the front of the diagram, towards the viewer. A circle with a cross inscribed in it indicates a vector pointing into and behind the diagram. These can be thought of as viewing
the tip of an arrow front on and viewing the vanes of an arrow from the back.
In order to calculate with vectors, the graphical representation may be too cumbersome. Vectors in an n-dimensional Euclidean space can be represented in a Cartesian coordinate system. The endpoint
of a vector can be identified with a list of n real numbers, sometimes called a row vector or column vector. As an example in two dimensions (see image), the vector from the origin O = (0,0) to the
point A = (2,3) is simply written as
$\overrightarrow{OA} = (2,3).$
In three dimensional Euclidean space (or R^3), vectors are identified with triples of numbers corresponding to the Cartesian coordinates of the endpoint (a,b,c). These numbers are often arranged into
a column vector or row vector, particularly when dealing with matrices, as follows:
$\mathbf{a} = \begin{bmatrix} a\\ b\\ c\\ \end{bmatrix}$
$\mathbf{a} = ( a\ b\ c ).$
Another way to express a vector in three dimensions is to introduce the three basic coordinate vectors, sometimes referred to as unit vectors:
${\mathbf e}_1 = (1,0,0), {\mathbf e}_2 = (0,1,0), {\mathbf e}_3 = (0,0,1).$
These have the intuitive interpretation as vectors of unit length pointing up the x, y, and z axis, respectively. In terms of these, any vector in R^3 can be expressed in the form:
$(a,b,c) = a(1,0,0) + b(0,1,0) + c(0,0,1) = a{\mathbf e}_1 + b{\mathbf e}_2 + c{\mathbf e}_3.$
Note: In introductory physics classes, these three special vectors are often instead denoted i, j, k (or $\boldsymbol{\hat{x}}, \boldsymbol{\hat{y}}, \boldsymbol{\hat{z}}$ when in Cartesian
coordinates), but such notation clashes with the index notation and the summation convention commonly used in higher level mathematics, physics, and engineering. This article will choose to use e[1],
e[2], e[3].
The use of Cartesian unit vectors $\boldsymbol{\hat{x}}, \boldsymbol{\hat{y}}, \boldsymbol{\hat{z}}$ as a basis in which to represent a vector, is not mandated. Vectors can also be expressed in terms
of cylindrical unit vectors $\boldsymbol{\hat{r}}, \boldsymbol{\hat{\theta}}, \boldsymbol{\hat{z}}$ or spherical unit vectors $\boldsymbol{\hat{r}}, \boldsymbol{\hat{\theta}}, \boldsymbol{\hat{\phi}}$. The latter two choices are more convenient for solving problems which possess cylindrical or spherical symmetry respectively.
Addition and scalar multiplication
Vector equality
Two vectors are said to be equal if they have the same magnitude and direction. However if we are talking about free vectors, then two free vectors are equal if they have the same base point and end point.
For example, the vector e[1] + 2e[2] + 3e[3] with base point (1,0,0) and the vector e[1]+2e[2]+3e[3] with base point (0,1,0) are different free vectors, but the same (displacement) vector.
Vector addition and subtraction
Let a=a[1]e[1] + a[2]e[2] + a[3]e[3] and b=b[1]e[1] + b[2]e[2] + b[3]e[3], where e[1], e[2], e[3] are orthogonal unit vectors (Note: they only need to be linearly independent, i.e. not parallel and
not in the same plane, for these algebraic addition and subtraction rules to apply)
The sum of a and b is:
$\mathbf{a}+\mathbf{b} =(a_1+b_1)\mathbf{e_1} +(a_2+b_2)\mathbf{e_2} +(a_3+b_3)\mathbf{e_3}$
The addition may be represented graphically by placing the start of the arrow b at the tip of the arrow a, and then drawing an arrow from the start of a to the tip of b. The new arrow drawn
represents the vector a + b, as illustrated below:
This addition method is sometimes called the parallelogram rule because a and b form the sides of a parallelogram and a + b is one of the diagonals. If a and b are free vectors, then the addition is
only defined if a and b have the same base point, which will then also be the base point of a + b. One can check geometrically that a + b = b + a and (a + b) + c = a + (b + c).
The difference of a and b is:
$\mathbf{a}-\mathbf{b} =(a_1-b_1)\mathbf{e_1} +(a_2-b_2)\mathbf{e_2} +(a_3-b_3)\mathbf{e_3}$
Subtraction of two vectors can be geometrically defined as follows: to subtract b from a, place the ends of a and b at the same point, and then draw an arrow from the tip of b to the tip of a. That
arrow represents the vector a − b, as illustrated below:
If a and b are free vectors, then the subtraction is only defined if they share the same base point which will then also become the base point of their difference. This operation deserves the name
"subtraction" because (a − b) + b = a.
Scalar multiplication
A vector may also be multiplied, or re-scaled, by a real number r. In the context of spatial vectors, these real numbers are often called scalars (from scale) to distinguish them from vectors. The
operation of multiplying a vector by a scalar is called scalar multiplication. The resulting vector is:
$r\mathbf{a}=(ra_1)\mathbf{e_1} +(ra_2)\mathbf{e_2} +(ra_3)\mathbf{e_3}$
Intuitively, multiplying by a scalar r stretches a vector out by a factor of r. Geometrically, this can be visualized (at least in the case when r is an integer) as placing r copies of the vector in
a line where the endpoint of one vector is the initial point of the next vector.
If r is negative, then the vector changes direction: it flips around by an angle of 180°. Two examples (r = -1 and r = 2) are given below:
Scalar multiplication is distributive over vector addition in the following sense: r(a + b) = ra + rb for all vectors a and b and all scalars r. One can also show that a - b = a + (-1)b.
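As an illustrative sketch (not part of the original article; the helper names are arbitrary), the component-wise rules above translate directly into a few lines of Python:

def vec_add(a, b):
    # Component-wise sum of two vectors of equal dimension.
    return tuple(ai + bi for ai, bi in zip(a, b))

def vec_sub(a, b):
    # Component-wise difference a - b.
    return tuple(ai - bi for ai, bi in zip(a, b))

def vec_scale(r, a):
    # Scalar multiplication r * a: stretches (or, for negative r, flips) the vector.
    return tuple(r * ai for ai in a)

# Reproduces the worked example from the text: (1,2,3) + (-2,0,4) = (-1,2,7).
print(vec_add((1, 2, 3), (-2, 0, 4)))  # (-1, 2, 7)
print(vec_scale(-1, (1, 2, 3)))        # (-1, -2, -3): same length, reversed direction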
The set of all geometrical vectors, together with the operations of vector addition and scalar multiplication, satisfies all the axioms of a vector space. Similarly, the set of all bound vectors with
a common base point forms a vector space. This is where the term "vector space" originated.
In physics, scalars may also have a unit of measurement associated with them. For instance, Newton's second law is
${\mathbf F} = m{\mathbf a}$
where F has units of force, a has units of acceleration, and the scalar m has units of mass. In one possible physical interpretation of the above diagram, the scale of acceleration is, for instance,
2 m/s^2 : cm, and that of force 5 N : cm. Thus a scale ratio of 2.5 kg : 1 is used for mass. Similarly, if displacement has a scale of 1:1000 and velocity of 0.2 cm : 1 m/s, or equivalently, 2 ms :
1, a scale ratio of 0.5 : s is used for time.
Length and the dot product
Length of a vector
The length or magnitude or norm of the vector a is denoted by ||a|| or, less commonly, |a|, which is not to be confused with the absolute value (a scalar "norm").
The length of the vector a = a[1]e[1] + a[2]e[2] + a[3]e[3] in a three-dimensional Euclidean space, where e[1], e[2], e[3] are orthogonal unit vectors, can be computed with the Euclidean norm
$\left\|\mathbf{a}\right\| = \sqrt{a_1^2 + a_2^2 + a_3^2},$
which is a consequence of the Pythagorean theorem since the basis vectors e[1], e[2], e[3] are orthogonal unit vectors.
This happens to be equal to the square root of the dot product of the vector with itself:
$\left\|\mathbf{a}\right\| = \sqrt{\mathbf{a}\cdot\mathbf{a}}.$
Vector length and units
If a vector is itself spatial, the length of the arrow depends on a dimensionless scale. If it represents e.g. a force, the "scale" is of physical dimension length/force. Thus there is typically
consistency in scale among quantities of the same dimension, but otherwise scale ratios may vary; for example, if "1 newton" and "5 m" are both represented with an arrow of 2 cm, the scales are 1:250
and 1 m:50 N respectively. Equal length of vectors of different dimension has no particular significance unless there is some proportionality constant inherent in the system that the diagram
represents. Also length of a unit vector (of dimension length, not length/force, etc.) has no coordinate-system-invariant significance.
Unit vector
A unit vector is any vector with a length of one; geometrically, it indicates a direction but no magnitude. If you have a vector of arbitrary length, you can divide it by its length to create a unit
vector. This is known as normalizing a vector. A unit vector is often indicated with a hat as in â.
To normalize a vector a = [a[1], a[2], a[3]], scale the vector by the reciprocal of its length ||a||. That is:
$\mathbf{\hat{a}} = \frac{\mathbf{a}}{\left\|\mathbf{a}\right\|} = \frac{a_1}{\left\|\mathbf{a}\right\|}\mathbf{e_1} + \frac{a_2}{\left\|\mathbf{a}\right\|}\mathbf{e_2} + \frac{a_3}{\left\|\mathbf{a}\right\|}\mathbf{e_3}$
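A small sketch of normalization in Python (the normalize helper is hypothetical, using only the standard library):

import math

def normalize(a):
    # Scale the vector by the reciprocal of its Euclidean length.
    length = math.sqrt(sum(ai * ai for ai in a))
    return tuple(ai / length for ai in a)

print(normalize((3.0, 4.0, 0.0)))  # (0.6, 0.8, 0.0), a vector of length one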
Null vector
The null vector (or zero vector) is the vector with length zero. Written out in coordinates, the vector is (0,0,0), and it is commonly denoted $\vec{0}$, or 0, or simply 0. Unlike any other vector,
it does not have a direction, and cannot be normalized (i.e., there is no unit vector which is a multiple of the null vector). The sum of the null vector with any vector a is a (i.e., 0+a=a).
Dot product
Main article: Dot product
The dot product of two vectors a and b (sometimes called the inner product, or, since its result is a scalar, the scalar product) is denoted by a ∙ b and is defined as:
$\mathbf{a}\cdot\mathbf{b} =\left\|\mathbf{a}\right\|\left\|\mathbf{b}\right\|\cos\theta$
where ||a|| and ||b|| denote the norm (or length) of a and b, and θ is the measure of the angle between a and b (see trigonometric function for an explanation of cosine). Geometrically, this means
that a and b are drawn with a common start point and then the length of a is multiplied with the length of that component of b that points in the same direction as a.
The dot product can also be defined as the sum of the products of the components of each vector:
$\mathbf{a} \cdot \mathbf{b} = (a_1, a_2, \dots, a_n ) \cdot ( b_1, b_2, \dots, b_n ) = a_1 b_1 + a_2 b_2 + \dots + a_n b_n$
where a and b are vectors of n dimensions; a[1], a[2], …, a[n] are coordinates of a; and b[1], b[2], …, b[n] are coordinates of b.
This operation is often useful in physics; for instance, work is the dot product of force and displacement.
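A minimal sketch of the two equivalent expressions for the dot product (hypothetical helper names, standard library only):

import math

def dot(a, b):
    # Sum of the products of corresponding components.
    return sum(ai * bi for ai, bi in zip(a, b))

def norm(a):
    # Euclidean length, the square root of the dot product of a with itself.
    return math.sqrt(dot(a, a))

a, b = (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)
cos_theta = dot(a, b) / (norm(a) * norm(b))
print(math.degrees(math.acos(cos_theta)))  # approximately 45.0, the angle between a and b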
Cross product
The cross product (also called the vector product or outer product) differs from the dot product primarily in that the result of the cross product of two vectors is a vector. While everything that
was said above can be generalized in a straightforward manner to more than three dimensions, the cross product is only meaningful in three dimensions, although the seven dimensional cross product is
similar in some respects. The cross product, denoted a × b, is a vector perpendicular to both a and b and is defined as:
$\mathbf{a}\times\mathbf{b} =\left\|\mathbf{a}\right\|\left\|\mathbf{b}\right\|\sin(\theta)\,\mathbf{n}$
where θ is the measure of the angle between a and b, and n is a unit vector perpendicular to both a and b. The problem with this definition is that there are two unit vectors perpendicular to both b
and a.
The vector basis e[1], e[2] , e[3] is called right-handed, if the three vectors are situated like the thumb, index finger and middle finger (pointing straight up from your palm) of your right hand.
Graphically the cross product can be represented by the figure on the right.
The cross product a × b is defined so that a, b, and a × b also becomes a right-handed system (but note that a and b are not necessarily orthogonal). This is the right-hand rule.
The length of a × b can be interpreted as the area of the parallelogram having a and b as sides.
For arbitrary choices of spatial orientation (i.e., allowing for left-handed as well as right-handed coordinate systems) the cross product of two vectors is a pseudovector instead of a vector (see below).
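A short sketch of the cross product via its component expansion (this expansion is standard, though not written out in the text above):

def cross(a, b):
    # Cross product of two 3-D vectors, component by component.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# For a right-handed basis, e1 x e2 = e3.
print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1)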
Scalar triple product
The scalar triple product (also called the box product or mixed triple product) is not really a new operator, but a way of applying the other two multiplication operators to three vectors. The scalar
triple product is sometimes denoted by (a b c) and defined as:
$(\mathbf{a}\ \mathbf{b}\ \mathbf{c}) =\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}).$
It has three primary uses. First, the absolute value of the box product is the volume of the parallelepiped which has edges that are defined by the three vectors. Second, the scalar triple product is
zero if and only if the three vectors are linearly dependent, which can be easily proved by considering that in order for the three vectors to not make a volume, they must all lie in the same plane.
Third, the box product is positive if and only if the three vectors a, b and c are right-handed.
In components (with respect to a right-handed orthonormal basis), if the three vectors are thought of as rows (or columns, but in the same order), the scalar triple product is simply the determinant
of the 3-by-3 matrix having the three vectors as rows. The scalar triple product is linear in all three entries and anti-symmetric in the following sense:
$(\mathbf{a}\ \mathbf{b}\ \mathbf{c}) = (\mathbf{c}\ \mathbf{a}\ \mathbf{b}) = (\mathbf{b}\ \mathbf{c}\ \mathbf{a}) = -(\mathbf{a}\ \mathbf{c}\ \mathbf{b}) = -(\mathbf{b}\ \mathbf{a}\ \mathbf{c}) = -(\mathbf{c}\ \mathbf{b}\ \mathbf{a}).$
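A sketch of the scalar triple product computed as the determinant of the matrix with rows a, b, c (the triple helper is hypothetical):

def triple(a, b, c):
    # Scalar triple product a . (b x c), expanded as a 3x3 determinant.
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

a, b, c = (2, 0, 0), (0, 3, 0), (0, 0, 4)
print(triple(a, b, c))  # 24, the volume of the box spanned by a, b, c
print(triple(a, c, b))  # -24: swapping two arguments flips the sign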
Vector components
A component of a vector is the influence of that vector in a given direction. Components are themselves vectors.
A vector is often described by a fixed number of components that sum up into this vector uniquely and totally. When used in this role, the choice of their constituting directions is dependent upon
the particular coordinate system being used, such as Cartesian coordinates, spherical coordinates or polar coordinates. For example, the axial component of a vector is the component whose
direction is determined by one of the Cartesian coordinate axes, whereas the radial and tangential components take the radius of rotation of an object as their direction of reference. The former is
parallel to the radius and the latter is orthogonal to it. Both remain orthogonal to the axis of rotation at all times. (In two dimensions this requirement becomes redundant as the axis degenerates
to a point of rotation.) The choice of a coordinate system doesn't affect properties of a vector or its behaviour under transformations.
Vectors as directional derivatives
A vector may also be defined as a directional derivative: consider a function $f(x^\alpha)$ and a curve $x^\alpha (\tau)$. Then the directional derivative of $f$ is a scalar defined as
$\frac{df}{d\tau} = \sum_{\alpha=1}^n \frac{dx^\alpha}{d\tau}\frac{\partial f}{\partial x^\alpha}.$
where the index $\alpha$ is summed over the appropriate number of dimensions (e.g. from 1 to 3 in 3-dimensional Euclidean space, from 0 to 3 in 4-dimensional spacetime, etc.). Then consider a vector
tangent to $x^\alpha (\tau)$:
$t^\alpha = \frac{dx^\alpha}{d\tau}.$
We can rewrite the directional derivative in differential form (without a given function $f$) as
$\frac{d}{d\tau} = \sum_\alpha t^\alpha\frac{\partial}{\partial x^\alpha}.$
Therefore any directional derivative can be identified with a corresponding vector, and any vector can be identified with a corresponding directional derivative. We can therefore define a vector
$\mathbf{a} \equiv a^\alpha \frac{\partial}{\partial x^\alpha}.$
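An illustrative finite-difference sketch of this identification between tangent vectors and directional derivatives (the function f and the curve below are invented for the example):

# f(x, y, z) = x*y + z^2 evaluated along the hypothetical curve (tau, tau^2, 1).
def f(x, y, z):
    return x * y + z ** 2

def curve(tau):
    return (tau, tau ** 2, 1.0)

eps, tau = 1e-6, 0.5
df_dtau = (f(*curve(tau + eps)) - f(*curve(tau))) / eps
# Approximately 0.75, matching sum_alpha (dx^alpha/dtau)(df/dx^alpha) = 1*0.25 + 1*0.5 + 0*2.
print(df_dtau)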
Vectors, pseudovectors, and transformations
An alternative characterization of spatial vectors, especially in physics, describes vectors as lists of quantities which behave a certain way under a coordinate transformation. A vector is required
to have components that "transform like the coordinates" under coordinate rotations. In other words, if all of space were rotated, the vector would rotate in exactly the same way. Mathematically, if
the coordinate system undergoes a rotation described by a rotation matrix R, so that a coordinate vector x is transformed to x′ = Rx, then any other vector v must be similarly transformed via v′ = Rv
. This important requirement is what distinguishes a spatial vector from any other triplet of physically meaningful quantities. For example, if v consists of the x, y, and z-components of velocity,
then v is a vector because the components of the velocity transform under coordinate changes. On the other hand, for instance, a triplet consisting of the length, width, and height of a rectangular
box could be regarded as the three components of an abstract vector, but not a spatial vector, since rotating the box does not correspondingly transform these three components. Examples of vectors
include displacement, velocity, electric field, momentum, force, and acceleration.
In the language of differential geometry, the requirement that the components of a vector transform according to the same matrix of the coordinate transition is equivalent to defining a vector to be
a tensor of contravariant rank one. However, in differential geometry and other areas of mathematics such as representation theory, the "coordinate transitions" need not be restricted to rotations.
Other notions of spatial vector correspond to different choices of symmetry group.
As a particular case where the symmetry group is important, all of the above examples are vectors which "transform like the coordinates" under both proper and improper rotations. An example of an
improper rotation is a mirror reflection. That is, these vectors are defined in such a way that, if all of space were flipped around through a mirror (or otherwise subjected to an improper rotation),
that vector would flip around in exactly the same way. Vectors with this property are called true vectors, or polar vectors. However, other vectors are defined in such a way that, upon flipping
through a mirror, the vector flips in the same way, but also acquires a negative sign. These are called pseudovectors (or axial vectors), and most commonly occur as cross products of true vectors.
One example of an axial vector is angular momentum. Driving in a car, and looking forward, each of the wheels has an angular momentum vector pointing to the left. If the world is reflected in a
mirror which switches the left and right side of the car, the reflection of this angular momentum vector points to the right, but the actual angular momentum vector of the wheel still points to the
left, corresponding to the minus sign. Other examples of pseudovectors include magnetic field, torque, or more generally any cross product of two (true) vectors.
This distinction between vectors and pseudovectors is often ignored, but it becomes important in studying symmetry properties. See parity (physics). | {"url":"https://ftp.worldpossible.org/endless/eos-rachel/RACHEL/RACHEL/modules/wikipedia_for_schools/wp/e/Euclidean_vector.htm","timestamp":"2024-11-08T14:24:33Z","content_type":"text/html","content_length":"50363","record_id":"<urn:uuid:44aa85fc-496e-4b6d-a4ff-de35fe60b15a>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00238.warc.gz"} |
Uncertain knowledge and reasoning
In which we see how an agent can tame uncertainty with degrees of belief.
Agents may need to handle uncertainty, whether due to partial observability, nondeterminism, or a combination of the two. An agent may never know for certain what state it’s in or where it will end up after a sequence of actions.
We have seen problem-solving agents (Chapter 4) and logical agents (Chapters 7 and 11) designed to handle uncertainty by keeping track of a belief state—a representation of the set of all possible
world states that it might be in—and generating a contingency plan that handles every possible eventuality that its sensors may report during execution. Despite its many virtues, however, this
approach has significant drawbacks when taken literally as a recipe for creating agent programs:
• When interpreting partial sensor information, a logical agent must consider every logically possible explanation for the observations, no matter how unlikely. This leads to impossibly
complex belief-state representations.
• A correct contingent plan that handles every eventuality can grow arbitrarily large and must consider arbitrarily unlikely contingencies.
• Sometimes there is no plan that is guaranteed to achieve the goal—yet the agent must act. It must have some way to compare the merits of plans that are not guaranteed.
Suppose, for example, that an automated taxi has the goal of delivering a passenger to the airport on time. The agent forms a plan, A90, that involves leaving home 90 minutes before the
flight departs and driving at a reasonable speed. Even though the airport is only about 5 miles away, a logical taxi agent will not be able to conclude with certainty that “Plan A90 will get us to
the airport in time.” Instead, it reaches the weaker conclusion “Plan A90 will get us to the airport in time, as long as the car doesn’t break down or run out of gas, and I don’t get into an
accident, and there are no accidents on the bridge, and the plane doesn’t leave early, and no meteorite hits the car, and . . . .” None of these conditions can be deduced for sure, so the plan’s
success cannot be inferred. This is the qualification problem (page 268), for which we so far have seen no real solution.
Nonetheless, in some sense A90 is in fact the right thing to do. What do we mean by this? As we discussed in Chapter 2, we mean that out of all the plans that could be executed, A90 is expected to
maximize the agent’s performance measure (where the expectation is relative to the agent’s knowledge about the environment). The performance measure includes getting to the airport in time for the
flight, avoiding a long, unproductive wait at the airport, and avoiding speeding tickets along the way. The agent’s knowledge cannot guarantee any of these outcomes for A90, but it can provide some
degree of belief that they will be achieved. Other plans, such as A180, might increase the agent’s belief that it will get to the airport on time, but also increase the likelihood of a long wait.
The right thing to do—the rational decision—therefore depends on both the relative importance of various goals and the likelihood that, and degree to which, they will be achieved. The remainder of
this section hones these ideas, in preparation for the development of the general theories of uncertain reasoning and rational decisions that we present in this and subsequent chapters.
Summarizing uncertainty
Let’s consider an example of uncertain reasoning: diagnosing a dental patient’s toothache. Diagnosis—whether for medicine, automobile repair, or whatever—almost always involves uncertainty. Let us
try to write rules for dental diagnosis using propositional logic, so that we can see how the logical approach breaks down. Consider the following simple rule:
Toothache ⇒ Cavity .
The problem is that this rule is wrong. Not all patients with toothaches have cavities; some of them have gum disease, an abscess, or one of several other problems:
Toothache ⇒ Cavity ∨ GumProblem ∨ Abscess
Unfortunately, in order to make the rule true, we have to add an almost unlimited list of possible problems. We could try turning the rule into a causal rule:
Cavity ⇒ Toothache .
But this rule is not right either; not all cavities cause pain. The only way to fix the rule is to make it logically exhaustive: to augment the left-hand side with all the qualifications required for
a cavity to cause a toothache. Trying to use logic to cope with a domain like medical diagnosis thus fails for three main reasons:
• Laziness: It is too much work to list the complete set of antecedents or consequents needed to ensure an exceptionless rule and too hard to use such rules.
• Theoretical ignorance: Medical science has no complete theory for the domain.
• Practical ignorance: Even if we know all the rules, we might be uncertain about a particular patient because not all the necessary tests have been or can be run.
The connection between toothaches and cavities is just not a logical consequence in either direction. This is typical of the medical domain, as well as most other judgmental domains: law, business,
design, automobile repair, gardening, dating, and so on. The agent’s knowledge can at best provide only a degree of belief in the relevant sentences. Our main tool for dealing with degrees of belief
is probability theory. In the terminology of Section 8.1, the ontological commitments of logic and probability theory are the same—that the world is composed of facts that do or do not hold in any
particular case—but the epistemological commitments are different: a logical agent believes each sentence to be true or false or has no opinion, whereas a probabilistic agent may have a numerical
degree of belief between 0 (for sentences that are certainly false) and 1 (certainly true).
Probability provides a way of summarizing the uncertainty that comes from our laziness and ignorance, thereby solving the qualification problem. We might not know for sure what afflicts a particular
patient, but we believe that there is, say, an 80% chance—that is, a probability of 0.8—that the patient who has a toothache has a cavity. That is, we expect that out of all the situations that are
indistinguishable from the current situation as far as our knowledge goes, the patient will have a cavity in 80% of them. This belief could be derived from statistical data—80% of the toothache
patients seen so far have had cavities—or from some general dental knowledge, or from a combination of evidence sources.
One confusing point is that at the time of our diagnosis, there is no uncertainty in the actual world: the patient either has a cavity or doesn’t. So what does it mean to say the probability of a
cavity is 0.8? Shouldn’t it be either 0 or 1? The answer is that probability statements are made with respect to a knowledge state, not with respect to the real world. We say “The probability that
the patient has a cavity, given that she has a toothache, is 0.8.” If we later learn that the patient has a history of gum disease, we can make a different statement: “The probability that the
patient has a cavity, given that she has a toothache and a history of gum disease, is 0.4.” If we gather further conclusive evidence against a cavity, we can say “The probability that the patient has
a cavity, given all we now know, is almost 0.” Note that these statements do not contradict each other; each is a separate assertion about a different knowledge state.
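As a small illustrative sketch of probabilities relative to a knowledge state (the counts below are hypothetical, chosen only to reproduce the 0.8 and 0.4 figures used in the text):

# Hypothetical counts over past patients indistinguishable from the current one.
# Key: (has_toothache, has_gum_disease_history) -> (number with cavity, number without).
counts = {
    (True, False): (80, 20),
    (True, True): (40, 60),
}

def belief(evidence):
    # Degree of belief in a cavity, given everything known so far.
    cavity, no_cavity = counts[evidence]
    return cavity / (cavity + no_cavity)

print(belief((True, False)))  # 0.8, given only the toothache
print(belief((True, True)))   # 0.4, after also learning of the gum disease history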
Uncertainty and rational decisions
Consider again the A90 plan for getting to the airport. Suppose it gives us a 97% chance of catching our flight. Does this mean it is a rational choice? Not necessarily: there might be other plans,
such as A180, with higher probabilities. If it is vital not to miss the flight, then it is worth risking the longer wait at the airport. What about A1440, a plan that involves leaving home 24 hours
in advance? In most circumstances, this is not a good choice, because although it almost guarantees getting there on time, it involves an intolerable wait—not to mention a possibly unpleasant diet of
airport food.
To make such choices, an agent must first have preferences between the different possible outcomes of the various plans. An outcome is a completely specified state, including such factors as whether
the agent arrives on time and the length of the wait at the airport. We use utility theory to represent and reason with preferences. (The term utility is used here in the sense of “the quality of
being useful,” not in the sense of the electric company or water works.) Utility theory says that every state has a degree of usefulness, or utility, to an agent and that the agent will prefer states
with higher utility.
The utility of a state is relative to an agent. For example, the utility of a state in which White has checkmated Black in a game of chess is obviously high for the agent playing White, but low for
the agent playing Black. But we can’t go strictly by the scores of 1, 1/2, and 0 that are dictated by the rules of tournament chess—some players (including the authors) might be thrilled with a draw
against the world champion, whereas other players (including the former world champion) might not. There is no accounting for taste or preferences: you might think that an agent who prefers jalapeño
bubble-gum ice cream to chocolate chocolate chip is odd or even misguided, but you could not say the agent is irrational. A utility function can account for any set of preferences—quirky or typical,
noble or perverse. Note that utilities can account for altruism, simply by including the welfare of others as one of the factors.
Preferences, as expressed by utilities, are combined with probabilities in the general theory of rational decisions called decision theory:
Decision theory = probability theory + utility theory .
The fundamental idea of decision theory is that an agent is rational if and only if it chooses the action that yields the highest expected utility, averaged over all the possible outcomes of the
action. This is called the principle of maximum expected utility (MEU). Note that “expected” might seem like a vague, hypothetical term, but as it is used here it has a precise meaning: it means the
“average,” or “statistical mean” of the outcomes, weighted by the probability of the outcome. We saw this principle in action in Chapter 5 when we touched briefly on optimal decisions in backgammon;
it is in fact a completely general principle.
For our agent to represent and use probabilistic information, we need a formal language. The language of probability theory has traditionally been informal, written by human mathematicians to other
human mathematicians. Appendix A includes a standard introduction to elementary probability theory; here, we take an approach more suited to the needs of AI and more consistent with the concepts of
formal logic.
function DT-AGENT(percept) returns an action
  persistent: belief state, probabilistic beliefs about the current state of the world
              action, the agent's action

  update belief state based on action and percept
  calculate outcome probabilities for actions,
      given action descriptions and current belief state
  select action with highest expected utility
      given probabilities of outcomes and utility information
  return action
Figure 13.1 A decision-theoretic agent that selects rational actions.
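Figure 13.1 leaves the belief-state update, the outcome model, and the utility function unspecified. As a rough, hypothetical sketch only (the hooks `update`, `outcome_distribution`, and `utility` are our placeholders, not part of the agent design given in the text), the same control loop might look like this in Python:

```python
# A minimal sketch of the DT-AGENT control loop. The belief-state update, the
# outcome model, and the utility function are assumed to be supplied by the
# caller; nothing here is prescribed by the chapter itself.
def dt_agent_step(belief_state, last_action, percept, actions,
                  update, outcome_distribution, utility):
    """Return (chosen action, updated belief state), choosing by maximum expected utility."""
    belief_state = update(belief_state, last_action, percept)

    def expected_utility(action):
        # Average the utility of each outcome, weighted by its probability (the MEU principle).
        outcomes = outcome_distribution(belief_state, action)
        return sum(prob * utility(outcome) for outcome, prob in outcomes.items())

    best_action = max(actions, key=expected_utility)
    return best_action, belief_state
```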
What probabilities are about
Like logical assertions, probabilistic assertions are about possible worlds. Whereas logical assertions say which possible worlds are strictly ruled out (all those in which the assertion is false),
probabilistic assertions talk about how probable the various worlds are. In probability theory, the set of all possible worlds is called the sample space. The possible worlds are mutually exclusive
and exhaustive—two possible worlds cannot both be the case, and one possible world must be the case. For example, if we are about to roll two (distinguishable) dice, there are 36 possible worlds to
consider: (1,1), (1,2), . . ., (6,6). The Greek letter Ω (uppercase omega) is used to refer to the sample space, and ω (lowercase omega) refers to elements of the space, that is, particular possible worlds.
A fully specified probability model associates a numerical probability P (ω) with each possible world.1 The basic axioms of probability theory say that every possible world has a probability between
0 and 1 and that the total probability of the set of possible worlds is 1:
0 ≤ P (ω) ≤ 1 for every ω   and   ∑~ω∈Ω~ P (ω) = 1 .   (13.1)
For example, if we assume that each die is fair and the rolls don’t interfere with each other, then each of the possible worlds (1,1), (1,2), . . ., (6,6) has probability 1/36. On the other hand, if
the dice conspire to produce the same number, then the worlds (1,1), (2,2), (3,3), etc., might have higher probabilities, leaving the others with lower probabilities.
Probabilistic assertions and queries are not usually about particular possible worlds, but about sets of them. For example, we might be interested in the cases where the two dice add up to 11, the
cases where doubles are rolled, and so on. In probability theory, these sets are called events—a term already used extensively in Chapter 12 for a different concept. In AI, the sets are always
described by propositions in a formal language. (One such language is described in Section 13.2.2.) For each proposition, the corresponding set contains just those possible worlds in which the
proposition holds. The probability associated with a proposition
1 For now, we assume a discrete, countable set of worlds. The proper treatment of the continuous case brings in certain complications that are less relevant for most purposes in AI.
is defined to be the sum of the probabilities of the worlds in which it holds:
For any proposition φ, P (φ) = ∑~ω∈φ~ P (ω) .   (13.2)
For example, when rolling fair dice, we have P (Total =11) = P ((5, 6)) + P ((6, 5)) = 1/36 + 1/36 = 1/18. Note that probability theory does not require complete knowledge of the probabilities of
each possible world. For example, if we believe the dice conspire to produce the same number, we might assert that P (doubles) = 1/4 without knowing whether the dice prefer double 6 to double 2. Just
as with logical assertions, this assertion constrains the underlying probability model without fully determining it.
Probabilities such as P (Total = 11) and P (doubles) are called unconditional or prior probabilities (and sometimes just “priors” for short); they refer to degrees of belief in propositions in the
absence of any other information. Most of the time, however, we have some information, usually called evidence, that has already been revealed. For example, the first die may already be showing a 5
and we are waiting with bated breath for the other one to stop spinning. In that case, we are interested not in the unconditional probability of rolling doubles, but the conditional or posterior
probability (or just “posterior” for short) of rolling doubles given that the first die is a 5. This probability is written P (doubles |Die1 = 5), where the “ | ” is pronounced “given.” Similarly, if
I am going to the dentist for a regular checkup, the probability P (cavity)= 0.2 might be of interest; but if I go to the dentist because I have a toothache, it’s P (cavity | toothache)= 0.6 that
matters. Note that the precedence of “ | ” is such that any expression of the form P (. . . | . . .) always means P ((. . .)|(. . .)).
It is important to understand that P (cavity)= 0.2 is still valid after toothache is observed; it just isn’t especially useful. When making decisions, an agent needs to condition on all the evidence
it has observed. It is also important to understand the difference between conditioning and logical implication. The assertion that P (cavity | toothache)= 0.6 does not mean “Whenever toothache is
true, conclude that cavity is true with probability 0.6”; rather, it means “Whenever toothache is true and we have no further information, conclude that cavity is true with probability 0.6.” The extra
condition is important; for example, if we had the further information that the dentist found no cavities, we definitely would not want to conclude that cavity is true with probability 0.6; instead
we need to use P (cavity |toothache ∧ ¬cavity)= 0.
Mathematically speaking, conditional probabilities are defined in terms of unconditional probabilities as follows: for any propositions a and b, we have
P (a | b) = P (a ∧ b) / P (b) ,   (13.3)
which holds whenever P (b) > 0.
The definition makes sense if you remember that observing b rules out all those possible worlds where b is false, leaving a set whose total probability is just P (b). Within that set, the a-worlds
satisfy a ∧ b and constitute a fraction P (a ∧ b)/P (b).
The definition of conditional probability, Equation (13.3), can be written in a different form called the product rule:
P (a ∧ b) = P (a | b) P (b) .
The product rule is perhaps easier to remember: it comes from the fact that, for a and b to be true, we need b to be true, and we also need a to be true given b.
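Since the two-dice sample space has only 36 worlds, Equations (13.1)–(13.3) and the product rule can be checked directly by enumeration. This is a toy illustration of our own, not code from the text:

```python
from fractions import Fraction

# All 36 equally probable worlds for two fair, distinguishable dice.
worlds = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]
P = {w: Fraction(1, 36) for w in worlds}

def prob(event):
    """P(event): sum the probabilities of the worlds where the proposition holds (Equation 13.2)."""
    return sum(p for w, p in P.items() if event(w))

total_11 = lambda w: w[0] + w[1] == 11
doubles  = lambda w: w[0] == w[1]
die1_5   = lambda w: w[0] == 5

assert prob(total_11) == Fraction(1, 18)
assert prob(doubles) == Fraction(1, 6)
# Conditional probability, Equation (13.3): P(a | b) = P(a ∧ b) / P(b)
p_doubles_given_die1_5 = prob(lambda w: doubles(w) and die1_5(w)) / prob(die1_5)
assert p_doubles_given_die1_5 == Fraction(1, 6)
# Product rule: P(a ∧ b) = P(a | b) P(b)
assert prob(lambda w: doubles(w) and die1_5(w)) == p_doubles_given_die1_5 * prob(die1_5)
```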
The language of propositions in probability assertions
In this chapter and the next, propositions describing sets of possible worlds are written in a notation that combines elements of propositional logic and constraint satisfaction notation. In the
terminology of Section 2.4.7, it is a factored representation, in which a possible world is represented by a set of variable/value pairs.
Variables in probability theory are called random variables and their names begin with an uppercase letter. Thus, in the dice example, Total and Die1 are random variables. Every random variable has a
domain—the set of possible values it can take on. The domain of Total for two dice is the set {2, . . . , 12} and the domain of Die1 is {1, . . . , 6}. A Boolean random variable has the domain {true
, false} (notice that values are always lowercase); for example, the proposition that doubles are rolled can be written as Doubles = true . By convention, propositions of the form A= true are
abbreviated simply as a, while A= false is abbreviated as ¬a. (The uses of doubles , cavity , and toothache in the preceding section are abbreviations of this kind.) As in CSPs, domains can be sets
of arbitrary tokens; we might choose the domain of Age to be {juvenile, teen , adult} and the domain of Weather might be {sunny , rain , cloudy , snow}. When no ambiguity is possible, it is common to
use a value by itself to stand for the proposition that a particular variable has that value; thus, sunny can stand for Weather = sunny .
The preceding examples all have finite domains. Variables can have infinite domains, too—either discrete (like the integers) or continuous (like the reals). For any variable with an ordered domain,
inequalities are also allowed, such as NumberOfAtomsInUniverse ≥ 10^70^.
Finally, we can combine these sorts of elementary propositions (including the abbreviated forms for Boolean variables) by using the connectives of propositional logic. For example, we can express
“The probability that the patient has a cavity, given that she is a teenager with no toothache, is 0.1” as follows:
P(cavity | ¬toothache ∧ teen) = 0.1 .
Sometimes we will want to talk about the probabilities of all the possible values of a random variable. We could write:
P(Weather = sunny) = 0.6
P(Weather = rain) = 0.1
P(Weather = cloudy) = 0.29
P(Weather = snow ) = 0.01 ,
but as an abbreviation we will allow
P(Weather )= 〈0.6, 0.1, 0.29, 0.01〉 ,
where the bold P indicates that the result is a vector of numbers, and where we assume a predefined ordering 〈sunny , rain , cloudy , snow 〉 on the domain of Weather . We say that the P statement
defines a probability distribution for the random variable Weather . The P notation is also used for conditional distributions: P(X |Y ) gives the values of P (X = x~i~ |Y = y~j~) for each possible
i, j pair. For continuous variables, it is not possible to write out the entire distribution as a vector, because there are infinitely many values. Instead, we can define the probability that a
random variable takes on some value x as a parameterized function of x. For example, the sentence
P (NoonTemp = x) = Uniform~[18C,26C]~(x)
expresses the belief that the temperature at noon is distributed uniformly between 18 and 26 degrees Celsius. We call this a probability density function.
Probability density functions (sometimes called pdfs) differ in meaning from discrete distributions. Saying that the probability density is uniform from 18C to 26C means that there is a 100% chance
that the temperature will fall somewhere in that 8C-wide region and a 50% chance that it will fall in any 4C-wide region, and so on. We write the probability density for a continuous random variable
X at value x as P (X = x) or just P (x); the intuitive definition of P (x) is the probability that X falls within an arbitrarily small region beginning at x, divided by the width of the region:
P (X = x) = lim~dx→0~ P (x ≤ X ≤ x + dx) / dx .
For NoonTemp , this gives
P (NoonTemp = x) = Uniform~[18C,26C]~(x) = 1/8C if 18C ≤ x ≤ 26C, and 0 otherwise,
where C stands for centigrade (not for a constant). In P (NoonTemp =20.18C)= 1/8C , note that 1/8C is not a probability, it is a probability density. The probability that NoonTemp is exactly 20.18C
is zero, because 20.18C is a region of width 0. Some authors use different symbols for discrete distributions and density functions; we use P in both cases, since confusion seldom arises and the
equations are usually identical. Note that probabilities are unitless numbers, whereas density functions are measured with a unit, in this case reciprocal degrees.
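As a small numeric illustration (our own, under the uniform-density assumption above), the density value 1/8 per degree is not itself a probability, but integrating it over an interval gives one:

```python
def noon_temp_density(x):
    """Uniform probability density on [18, 26] degrees Celsius: 1/8 per degree."""
    return 1 / 8 if 18 <= x <= 26 else 0.0

def prob_between(lo, hi, density, steps=10_000):
    """Approximate P(lo <= X <= hi) by numerically integrating the density."""
    width = (hi - lo) / steps
    return sum(density(lo + (i + 0.5) * width) for i in range(steps)) * width

print(noon_temp_density(20.18))                              # 0.125 per degree -- a density, not a probability
print(round(prob_between(18, 26, noon_temp_density), 3))     # 1.0: the temperature is somewhere in the range
print(round(prob_between(20, 24, noon_temp_density), 3))     # 0.5: any 4C-wide subinterval gets half the mass
```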
In addition to distributions on single variables, we need notation for distributions on multiple variables. Commas are used for this. For example, P(Weather ,Cavity) denotes the probabilities of all
combinations of the values of Weather and Cavity . This is a 4 × 2 table of probabilities called the joint probability distribution of Weather and Cavity . We can also mix variables with and without
values; P(sunny ,Cavity) would be a two-element vector giving the probabilities of a sunny day with a cavity and a sunny day with no cavity. The P notation makes certain expressions much more concise
than they might otherwise be. For example, the product rules for all possible values of Weather and Cavity can be written as a single equation:
P(Weather ,Cavity) = P(Weather | Cavity)P(Cavity) ,
instead of as these 4× 2= 8 equations (using abbreviations W and C):
P (W = sunny ∧ C = true) = P (W = sunny|C = true)P (C = true)
P (W = rain ∧ C = true) = P (W = rain |C = true)P (C = true)
P (W = cloudy ∧ C = true) = P (W = cloudy |C = true)P (C = true)
P (W = snow ∧ C = true) = P (W = snow |C = true)P (C = true)
P (W = sunny ∧ C = false) = P (W = sunny|C = false)P (C = false)
P (W = rain ∧ C = false) = P (W = rain |C = false)P (C = false)
P (W = cloudy ∧ C = false) = P (W = cloudy |C = false)P (C = false)
P (W = snow ∧ C = false) = P (W = snow |C = false)P (C = false) .
As a degenerate case, P(sunny , cavity) has no variables and thus is a one-element vector that is the probability of a sunny day with a cavity, which could also be written as P (sunny , cavity) or P
(sunny ∧ cavity). We will sometimes use P notation to derive results about individual P values, and when we say “P(sunny)= 0.6” it is really an abbreviation for “P(sunny) is the one-element vector 〈
0.6〉, which means that P (sunny)= 0.6.”
Now we have defined a syntax for propositions and probability assertions and we have given part of the semantics: Equation (13.2) defines the probability of a proposition as the sum of the
probabilities of worlds in which it holds. To complete the semantics, we need to say what the worlds are and how to determine whether a proposition holds in a world. We borrow this part directly from
the semantics of propositional logic, as follows. A possible world is defined to be an assignment of values to all of the random variables under consideration. It is easy to see that this definition
satisfies the basic requirement that possible worlds be mutually exclusive and exhaustive (Exercise 13.5). For example, if the random variables are Cavity , Toothache , and Weather , then there are
2× 2× 4= 16 possible worlds. Furthermore, the truth of any given proposition, no matter how complex, can be determined easily in such worlds using the same recursive definition of truth as for
formulas in propositional logic.
From the preceding definition of possible worlds, it follows that a probability model is completely determined by the joint distribution for all of the random variables—the so-called full joint
probability distribution. For example, if the variables are Cavity , Toothache , and Weather , then the full joint distribution is given by P(Cavity ,Toothache ,Weather ). This joint distribution can
be represented as a 2× 2× 4 table with 16 entries. Because every proposition’s probability is a sum over possible worlds, a full joint distribution suffices, in principle, for calculating the
probability of any proposition.
Probability axioms and their reasonableness
The basic axioms of probability (Equations (13.1) and (13.2)) imply certain relationships among the degrees of belief that can be accorded to logically related propositions. For example, we can
derive the familiar relationship between the probability of a proposition and the probability of its negation, P (¬a) = 1 − P (a), because a and ¬a between them cover all the possible worlds exactly once.
We can also derive the well-known formula for the probability of a disjunction, sometimes called the inclusion–exclusion principle:
P (a ∨ b) = P (a) + P (b)− P (a ∧ b) . (13.4)
This rule is easily remembered by noting that the cases where a holds, together with the cases where b holds, certainly cover all the cases where a ∨ b holds; but summing the two sets of cases counts
their intersection twice, so we need to subtract P (a ∧ b). The proof is left as an exercise (Exercise 13.6).
Equations (13.1) and (13.4) are often called Kolmogorov’s axioms in honor of the Russian mathematician Andrei Kolmogorov, who showed how to build up the rest of probability theory from this simple foundation and how to handle the difficulties caused by continuous variables.2 While
Equation (13.2) has a definitional flavor, Equation (13.4) reveals that the axioms really do constrain the degrees of belief an agent can have concerning logically related propositions. This is
analogous to the fact that a logical agent cannot simultaneously believe A, B, and ¬(A ∧ B), because there is no possible world in which all three are true. With probabilities, however, statements
refer not to the world directly, but to the agent’s own state of knowledge. Why, then, can an agent not hold the following set of beliefs (even though they violate Kolmogorov’s axioms)?
P (a) = 0.4 P(a ∧ b) = 0.0
P (b) = 0.3 P(a ∨ b) = 0.8 . (13.5)
This kind of question has been the subject of decades of intense debate between those who advocate the use of probabilities as the only legitimate form for degrees of belief and those who advocate
alternative approaches.
One argument for the axioms of probability, first stated in 1931 by Bruno de Finetti (and translated into English in de Finetti (1993)), is as follows: If an agent has some degree of belief in a
proposition a, then the agent should be able to state odds at which it is indifferent to a bet for or against a.3 Think of it as a game between two agents: Agent 1 states, “my degree of belief in
event a is 0.4.” Agent 2 is then free to choose whether to wager for or against a at stakes that are consistent with the stated degree of belief. That is, Agent 2 could choose to accept Agent 1’s bet
that a will occur, offering $6 against Agent 1’s $4. Or Agent 2 could accept Agent 1’s bet that ¬a will occur, offering $4 against Agent 1’s $6. Then we observe the outcome of a, and whoever is right
collects the money. If an agent’s degrees of belief do not accurately reflect the world, then you would expect that it would tend to lose money over the long run to an opposing agent whose beliefs
more accurately reflect the state of the world.
But de Finetti proved something much stronger: If Agent 1 expresses a set of degrees of belief that violate the axioms of probability theory then there is a combination of bets by Agent 2 that
guarantees that Agent 1 will lose money every time. For example, suppose that Agent 1 has the set of degrees of belief from Equation (13.5). Figure 13.2 shows that if Agent
2 The difficulties include the Vitali set, a well-defined subset of the interval [0, 1] with no well-defined size. 3 One might argue that the agent’s preferences for different bank balances are such
that the possibility of losing $1 is not counterbalanced by an equal possibility of winning $1. One possible response is to make the bet amounts small enough to avoid this problem. Savage’s analysis
(1954) circumvents the issue altogether.
2 chooses to bet $4 on a, $3 on b, and $2 on ¬(a ∨ b), then Agent 1 always loses money, regardless of the outcomes for a and b. De Finetti’s theorem implies that no rational agent can have beliefs
that violate the axioms of probability.
Figure 13.2 Because Agent 1 has inconsistent beliefs, Agent 2 is able to construct a set of bets that guarantees Agent 1 a loss, no matter what the outcomes of a and b turn out to be.
One common objection to de Finetti’s theorem is that this betting game is rather contrived. For example, what if one refuses to bet? Does that end the argument? The answer is that the betting game is
an abstract model for the decision-making situation in which every agent is unavoidably involved at every moment. Every action (including inaction) is a kind of bet, and every outcome can be seen as
a payoff of the bet. Refusing to bet is like refusing to allow time to pass.
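The guaranteed loss is easy to verify by arithmetic. In the sketch below (our own check; the dollar stakes follow the indifference-odds scheme described above, with Agent 1 putting up $6, $7, and $8 against Agent 2's $4, $3, and $2), Agent 1's total payoff is computed for each of the four possible outcomes:

```python
from itertools import product

# Agent 1's payoff from each of Agent 2's bets, as a function of the world (a, b).
# Stakes follow the indifference-odds scheme: for a proposition Agent 1 believes
# with probability p, Agent 1 puts up $10*(1-p) against Agent 2's $10*p.
def payoff_bet_on_a(a, b):        # Agent 2 bets $4 on a; Agent 1 stakes $6 against it
    return -6 if a else +4

def payoff_bet_on_b(a, b):        # Agent 2 bets $3 on b; Agent 1 stakes $7 against it
    return -7 if b else +3

def payoff_bet_on_neither(a, b):  # Agent 2 bets $2 on ¬(a ∨ b); Agent 1 stakes $8 against it
    return -8 if not (a or b) else +2

for a, b in product([True, False], repeat=2):
    total = payoff_bet_on_a(a, b) + payoff_bet_on_b(a, b) + payoff_bet_on_neither(a, b)
    print(f"a={a!s:5} b={b!s:5}  Agent 1's total payoff: {total}")
# Agent 1 loses in every case: -11, -1, -1, -1.
```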
Other strong philosophical arguments have been put forward for the use of probabilities, most notably those of Cox (1946), Carnap (1950), and Jaynes (2003). They each construct a set of axioms for
reasoning with degrees of belief: no contradictions, correspondence with ordinary logic (for example, if belief in A goes up, then belief in ¬A must go down), and so on. The only controversial axiom
is that degrees of belief must be numbers, or at least act like numbers in that they must be transitive (if belief in A is greater than belief in B, which is greater than belief in C, then belief in
A must be greater than belief in C) and comparable (the belief in A must be one of equal to, greater than, or less than belief in B). It can then be proved that probability is the only approach that satisfies
these axioms.
The world being the way it is, however, practical demonstrations sometimes speak louder than proofs. The success of reasoning systems based on probability theory has been much more effective in
making converts. We now look at how the axioms can be deployed to make inferences.
In this section we describe a simple method for probabilistic inference—that is, the computation of posterior probabilities for query propositions given observed evidence. We use the full joint
distribution as the “knowledge base” from which answers to all questions may be derived. Along the way we also introduce several useful techniques for manipulating equations involving probabilities.
There has been endless debate over the source and status of probability numbers. The frequentist position is that the numbers can come only from experiments: if we test 100 people and find that 10 of
them have a cavity, then we can say that the probability of a cavity is approximately 0.1. In this view, the assertion “the probability of a cavity is 0.1” means that 0.1 is the fraction that would
be observed in the limit of infinitely many samples. From any finite sample, we can estimate the true fraction and also calculate how accurate our estimate is likely to be.
The objectivist view is that probabilities are real aspects of the universe— propensities of objects to behave in certain ways—rather than being just descriptions of an observer’s degree of belief.
For example, the fact that a fair coin comes up heads with probability 0.5 is a propensity of the coin itself. In this view, frequentist measurements are attempts to observe these propensities. Most
physicists agree that quantum phenomena are objectively probabilistic, but uncertainty at the macroscopic scale—e.g., in coin tossing—usually arises from ignorance of initial conditions and does not
seem consistent with the propensity view.
The subjectivist view describes probabilities as a way of characterizing an agent’s beliefs, rather than as having any external physical significance. The subjective Bayesian view allows any
self-consistent ascription of prior probabilities to propositions, but then insists on proper Bayesian updating as evidence arrives.
In the end, even a strict frequentist position involves subjective analysis because of the reference class problem: in trying to determine the outcome probability of a particular experiment, the
frequentist has to place it in a reference class of “similar” experiments with known outcome frequencies. I. J. Good (1983, p. 27) wrote, “every event in life is unique, and every real-life
probability that we estimate in practice is that of an event that has never occurred before.” For example, given a particular patient, a frequentist who wants to estimate the probability of a cavity
will consider a reference class of other patients who are similar in important ways—age, symptoms, diet—and see what proportion of them had a cavity. If the dentist considers everything that is known
about the patient—weight to the nearest gram, hair color, mother’s maiden name—then the reference class becomes empty. This has been a vexing problem in the philosophy of science.
The principle of indifference attributed to Laplace (1816) states that propositions that are syntactically “symmetric” with respect to the evidence should be accorded equal probability. Various
refinements have been proposed, culminating in the attempt by Carnap and others to develop a rigorous inductive logic, capable of computing the correct probability for any proposition from any
collection of observations. Currently, it is believed that no unique inductive logic exists; rather, any such logic rests on a subjective prior probability distribution whose effect is diminished as
more observations are collected.
Figure 13.3 A full joint distribution for the Toothache, Cavity, Catch world:

                 |       toothache        |       ¬toothache
                 |   catch      ¬catch    |   catch      ¬catch
      cavity     |   0.108      0.012     |   0.072      0.008
      ¬cavity    |   0.016      0.064     |   0.144      0.576
We begin with a simple example: a domain consisting of just the three Boolean variables Toothache , Cavity , and Catch (the dentist’s nasty steel probe catches in my tooth). The full joint
distribution is a 2× 2× 2 table as shown in Figure 13.3.
Notice that the probabilities in the joint distribution sum to 1, as required by the axioms of probability. Notice also that Equation (13.2) gives us a direct way to calculate the probability of any
proposition, simple or complex: simply identify those possible worlds in which the proposition is true and add up their probabilities. For example, there are six possible worlds in which cavity ∨
toothache holds:
P (cavity ∨ toothache) = 0.108 + 0.012 + 0.072 + 0.008 + 0.016 + 0.064 = 0.28 .
One particularly common task is to extract the distribution over some subset of variables or a single variable. For example, adding the entries in the first row gives the unconditional or marginal
probability4 of cavity :
P (cavity) = 0.108 + 0.012 + 0.072 + 0.008 = 0.2 .
This process is called marginalization, or summing out—because we sum up the probabilities for each possible value of the other variables, thereby taking them out of the equation. We can write the
following general marginalization rule for any sets of variables Y and Z:
P(Y) = ∑~z∈Z~ P(Y, z) ,   (13.6)
where the sum is over all possible combinations of values of the set of variables Z. A variant of this rule uses conditional probabilities instead of joint probabilities, via the product rule:
P(Y) = ∑~z~ P(Y | z) P(z) .   (13.7)
This rule is called conditioning. Marginalization and conditioning turn out to be useful rules for all kinds of derivations involving probability expressions. In most cases, we are interested in
computing conditional probabilities of some variables, given evidence about others. Conditional probabilities can be found by first using
4 So called because of a common practice among actuaries of writing the sums of observed frequencies in the margins of insurance tables.
Equation (13.3) to obtain an expression in terms of unconditional probabilities and then evaluating the expression from the full joint distribution. For example, we can compute the probability of a
cavity, given evidence of a toothache, as follows:
P (cavity | toothache) = P (cavity ∧ toothache) / P (toothache)
  = (0.108 + 0.012) / (0.108 + 0.012 + 0.016 + 0.064) = 0.6 ,
P (¬cavity | toothache) = P (¬cavity ∧ toothache) / P (toothache)
  = (0.016 + 0.064) / (0.108 + 0.012 + 0.016 + 0.064) = 0.4 .
The two values sum to 1.0, as they should. Notice that in these two calculations the term 1/P (toothache ) remains constant, no matter which value of Cavity we calculate. In fact, it can be viewed as
a normalization constant for the distribution P(Cavity | toothache), ensuring that it adds up to 1. Throughout the chapters dealing with probability, we use α to denote such constants. With this
notation, we can write the two preceding equations in one:
P(Cavity | toothache) = α P(Cavity , toothache)
  = α [P(Cavity , toothache , catch) + P(Cavity , toothache , ¬catch)]
  = α [〈0.108, 0.016〉 + 〈0.012, 0.064〉] = α 〈0.12, 0.08〉 = 〈0.6, 0.4〉 .
In other words, we can calculate P(Cavity | toothache) even if we don’t know the value of P (toothache)! We temporarily forget about the factor 1/P (toothache ) and add up the values for cavity and
¬cavity , getting 0.12 and 0.08. Those are the correct relative proportions, but they don’t sum to 1, so we normalize them by dividing each one by 0.12 + 0.08, getting the true probabilities of 0.6
and 0.4. Normalization turns out to be a useful shortcut in many probability calculations, both to make the computation easier and to allow us to proceed when some probability assessment (such as P
(toothache)) is not available.
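These manipulations are easy to verify mechanically. The sketch below (our own encoding of the Figure 13.3 numbers as a Python dictionary; the representation is not something the text prescribes) computes the marginal P(cavity) and the normalized distribution P(Cavity | toothache):

```python
# Full joint distribution of Figure 13.3, keyed by (cavity, toothache, catch).
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def prob(event):
    """Probability of a proposition: sum the worlds where it holds (Equation 13.2)."""
    return sum(p for world, p in joint.items() if event(*world))

# Marginalization: P(cavity) = 0.2
print(round(prob(lambda cav, tooth, catch: cav), 3))

# Normalization: P(Cavity | toothache) = alpha * <0.12, 0.08> = <0.6, 0.4>
unnormalized = [prob(lambda cav, tooth, catch: cav == value and tooth)
                for value in (True, False)]
alpha = 1 / sum(unnormalized)
print([round(alpha * x, 3) for x in unnormalized])   # [0.6, 0.4]
```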
From the example, we can extract a general inference procedure. We begin with the case in which the query involves a single variable, X (Cavity in the example). Let E be the list of evidence
variables (just Toothache in the example), let e be the list of observed values for them, and let Y be the remaining unobserved variables (just Catch in the example). The query is P(X | e) and can be
evaluated as
P(X | e) = α P(X, e) = α ∑~y~ P(X, e, y) ,   (13.9)
where the summation is over all possible ys (i.e., all possible combinations of values of the unobserved variables Y). Notice that together the variables X, E, and Y constitute the complete set of
variables for the domain, so P(X, e, y) is simply a subset of probabilities from the full joint distribution.
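Equation (13.9) translates almost directly into a general routine. The sketch below assumes the same dictionary-of-assignment-tuples representation of the full joint distribution used above; the function and argument names are ours, not the text's:

```python
def enumerate_query(query_var, evidence, variables, joint):
    """P(query_var | evidence) by summing entries of the full joint (Equation 13.9).

    variables: ordered list of variable names matching the key tuples of `joint`
    joint:     dict mapping value-tuples to probabilities
    evidence:  dict of observed {variable: value}
    """
    idx = {name: i for i, name in enumerate(variables)}
    q = idx[query_var]
    totals = {}
    for world, p in joint.items():
        if all(world[idx[var]] == val for var, val in evidence.items()):
            totals[world[q]] = totals.get(world[q], 0.0) + p
    alpha = 1 / sum(totals.values())           # normalization constant
    return {value: alpha * p for value, p in totals.items()}

# Using the Figure 13.3 joint from the previous sketch:
# enumerate_query("Cavity", {"Toothache": True}, ["Cavity", "Toothache", "Catch"], joint)
# -> {True: 0.6, False: 0.4}
```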
Given the full joint distribution to work with, Equation (13.9) can answer probabilistic queries for discrete variables. It does not scale well, however: for a domain described by n Boolean
variables, it requires an input table of size O(2^n^) and takes O(2^n^) time to process the table. In a realistic problem we could easily have n > 100, making O(2^n^) impractical. The full joint
distribution in tabular form is just not a practical tool for building reasoning systems. Instead, it should be viewed as the theoretical foundation on which more effective approaches may be built,
just as truth tables formed a theoretical foundation for more practical algorithms like DPLL. The remainder of this chapter introduces some of the basic ideas required in preparation for the
development of realistic systems in Chapter 14.
Let us expand the full joint distribution in Figure 13.3 by adding a fourth variable, Weather . The full joint distribution then becomes P(Toothache ,Catch,Cavity ,Weather ), which has 2 × 2 × 2 × 4
= 32 entries. It contains four “editions” of the table shown in Figure 13.3, one for each kind of weather. What relationship do these editions have to each other and to the original three-variable
table? For example, how are P (toothache , catch , cavity , cloudy)
and P(toothache , catch , cavity) related? We can use the product rule:
P (toothache , catch , cavity , cloudy)
= P (cloudy | toothache , catch , cavity) P(toothache , catch , cavity) .
Now, unless one is in the deity business, one should not imagine that one’s dental problems influence the weather. And for indoor dentistry, at least, it seems safe to say that the weather does not
influence the dental variables. Therefore, the following assertion seems reasonable:
P (cloudy | toothache , catch , cavity) = P (cloudy) . (13.10)
From this, we can deduce
P (toothache , catch , cavity , cloudy) = P (cloudy)P (toothache , catch , cavity) .
A similar equation exists for every entry in P(Toothache ,Catch ,Cavity ,Weather ). In fact, we can write the general equation
P(Toothache ,Catch ,Cavity ,Weather ) = P(Toothache ,Catch,Cavity)P(Weather ) .
Thus, the 32-element table for four variables can be constructed from one 8-element table and one 4-element table. This decomposition is illustrated schematically in Figure 13.4(a).
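Under this independence assumption the 32-entry table never needs to be stored explicitly; any entry can be generated on demand from the 8-entry dental table and the 4-entry weather prior. A rough sketch, combining the Figure 13.3 numbers with the P(Weather) values quoted earlier (the combination itself is our illustration, not a table given in the text):

```python
# P(Toothache, Catch, Cavity) from Figure 13.3, keyed by (cavity, toothache, catch),
# and the P(Weather) prior quoted earlier in the chapter.
dental = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}
weather_prior = {"sunny": 0.6, "rain": 0.1, "cloudy": 0.29, "snow": 0.01}

def full_joint(cavity, toothache, catch, weather):
    """P(toothache, catch, cavity, weather) = P(toothache, catch, cavity) * P(weather),
    by the independence assertion of Equation (13.10)."""
    return dental[(cavity, toothache, catch)] * weather_prior[weather]

# 8 + 4 stored numbers stand in for all 2 x 2 x 2 x 4 = 32 entries, e.g.:
print(full_joint(True, True, True, "cloudy"))   # 0.108 * 0.29 = 0.03132
```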
The property we used in Equation (13.10) is called independence (also marginal independence and absolute independence). In particular, the weather is independent of one’s dental problems. Independence between propositions a and b can be written as
P (a | b)= P (a) or P (b | a)= P (b) or P (a ∧ b)= P (a) P(b) . (13.11)
All these forms are equivalent (Exercise 13.12). Independence between variables X and Y
can be written as follows (again, these are all equivalent):
P(X |Y )= P(X) or P(Y |X)= P(Y ) or P(X,Y )= P(X)P(Y ) .
Independence assertions are usually based on knowledge of the domain. As the toothache– weather example illustrates, they can dramatically reduce the amount of information necessary to specify the
full joint distribution. If the complete set of variables can be divided into independent subsets, then the full joint distribution can be factored into separate joint distributions on those subsets.
Figure 13.4 Two examples of factoring a large joint distribution into smaller distributions, using absolute independence: (a) weather and dental problems are independent; (b) coin flips are independent.
For example, the full joint distribution on the outcome of n independent
coin flips, P(C~1~, . . . , C~n~), has 2^n^ entries, but it can be represented as the product of n single-variable distributions P(Ci). In a more practical vein, the independence of dentistry and
meteorology is a good thing, because otherwise the practice of dentistry might require intimate knowledge of meteorology, and vice versa.
When they are available, then, independence assertions can help in reducing the size of the domain representation and the complexity of the inference problem. Unfortunately, clean separation of
entire sets of variables by independence is quite rare. Whenever a connection, however indirect, exists between two variables, independence will fail to hold. Moreover, even independent subsets can
be quite large—for example, dentistry might involve dozens of diseases and hundreds of symptoms, all of which are interrelated. To handle such problems, we need more subtle methods than the
straightforward concept of independence.
BAYES’ RULE AND ITS USE
On page 486, we defined the product rule. It can actually be written in two forms:
P (a ∧ b) = P (a | b) P (b)   and   P (a ∧ b) = P (b | a) P (a) .
Equating the two right-hand sides and dividing by P (a), we get
P (b | a) = P (a | b) P (b) / P (a) .   (13.12)
This equation is known as Bayes’ rule (also Bayes’ law or Bayes’ theorem). This simple equation underlies most modern AI systems for probabilistic inference.
The more general case of Bayes’ rule for multivalued variables can be written in the P notation as follows:
P(Y | X) = P(X | Y ) P(Y ) / P(X) .   (13.13)
Applying Bayes’ rule: The simple case
On the surface, Bayes’ rule does not seem very useful. It allows us to compute the single term P (b | a) in terms of three terms: P (a | b), P (b), and P (a). That seems like two steps backwards, but
Bayes’ rule is useful in practice because there are many cases where we do have good probability estimates for these three numbers and need to compute the fourth. Often, we perceive as evidence the
effect of some unknown cause and we would like to determine that cause. In that case, Bayes’ rule becomes
P (cause | effect) = P (effect | cause) P (cause) / P (effect) .   (13.14)
The conditional probability P (effect | cause) quantifies the relationship in the causal direction, whereas P (cause | effect) describes the diagnostic direction. In a task such as medical diagnosis,
we often have conditional probabilities on causal relationships (that is, the doctor knows P (symptoms | disease)) and want to derive a diagnosis, P (disease | symptoms). For example, a doctor knows
that the disease meningitis causes the patient to have a stiff neck, say, 70% of the time. The doctor also knows some unconditional facts: the prior probability that a patient has meningitis is 1/
50,000, and the prior probability that any patient has a stiff neck is 1%. Letting s be the proposition that the patient has a stiff neck and m be the proposition that the patient has meningitis, we
have
P (s | m) = 0.7 ,   P (m) = 1/50000 ,   P (s) = 0.01 ,
P (m | s) = P (s | m) P (m) / P (s) = (0.7 × 1/50000) / 0.01 = 0.0014 .
That is, we expect less than 1 in 700 patients with a stiff neck to have meningitis. Notice that even though a stiff neck is quite strongly indicated by meningitis (with probability 0.7), the
probability of meningitis in the patient remains small. This is because the prior probability of stiff necks is much higher than that of meningitis.
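The arithmetic is a one-liner; the following toy check simply restates Equation (13.14) with the numbers above:

```python
def bayes(p_effect_given_cause, p_cause, p_effect):
    """Bayes' rule in the diagnostic direction (Equation 13.14)."""
    return p_effect_given_cause * p_cause / p_effect

p_stiff_neck_given_meningitis = 0.7
p_meningitis = 1 / 50_000
p_stiff_neck = 0.01

print(bayes(p_stiff_neck_given_meningitis, p_meningitis, p_stiff_neck))  # 0.0014
```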
Section 13.3 illustrated a process by which one can avoid assessing the prior probability of the evidence (here, P (s)) by instead computing a posterior probability for each value of the query
variable (here, m and ¬m) and then normalizing the results. The same process can be applied when using Bayes’ rule. We have
P(M | s) = α 〈P (s | m) P (m), P (s | ¬m) P (¬m)〉 .
Thus, to use this approach we need to estimate P (s | ¬m) instead of P (s). There is no free lunch—sometimes this is easier, sometimes it is harder. The general form of Bayes’ rule with normalization is
P(Y | X) = α P(X |Y )P(Y ) , (13.15)
where α is the normalization constant needed to make the entries in P(Y |X) sum to 1. One obvious question to ask about Bayes’ rule is why one might have available the conditional probability in one
direction, but not the other. In the meningitis domain, perhaps the doctor knows that a stiff neck implies meningitis in 1 out of 5000 cases; that is, the doctor has quantitative information in the
diagnostic direction from symptoms to causes. Such a doctor has no need to use Bayes’ rule. Unfortunately, diagnostic knowledge is often more fragile than causal knowledge. If there is a sudden
epidemic of meningitis, the unconditional probability of meningitis, P (m), will go up. The doctor who derived the diagnostic probability P (m | s) directly from statistical observation of patients
before the epidemic will have no idea how to update the value, but the doctor who computes P (m | s) from the other three values will see that P (m | s) should go up proportionately with P (m). Most
important, the causal information P (s |m) is unaffected by the epidemic, because it simply reflects the way meningitis works. The use of this kind of direct causal or model-based knowledge provides
the crucial robustness needed to make probabilistic systems feasible in the real world.
Using Bayes’ rule: Combining evidence
We have seen that Bayes’ rule can be useful for answering probabilistic queries conditioned on one piece of evidence—for example, the stiff neck. In particular, we have argued that probabilistic
information is often available in the form P (effect | cause). What happens when we have two or more pieces of evidence? For example, what can a dentist conclude if her nasty steel probe catches in
the aching tooth of a patient? If we know the full joint distribution (Figure 13.3), we can read off the answer:
P(Cavity | toothache ∧ catch) = α 〈0.108, 0.016〉 ≈ 〈0.871, 0.129〉 .
We know, however, that such an approach does not scale up to larger numbers of variables. We can try using Bayes’ rule to reformulate the problem:
P(Cavity | toothache ∧ catch)
= α P(toothache ∧ catch |Cavity) P(Cavity) . (13.16)
For this reformulation to work, we need to know the conditional probabilities of the conjunction toothache ∧catch for each value of Cavity . That might be feasible for just two evidence variables,
but again it does not scale up. If there are n possible evidence variables (X rays, diet, oral hygiene, etc.), then there are 2^n^ possible combinations of observed values for which we would need to
know conditional probabilities. We might as well go back to using the full joint distribution. This is what first led researchers away from probability theory toward approximate methods for evidence
combination that, while giving incorrect answers, require fewer numbers to give any answer at all.
Rather than taking this route, we need to find some additional assertions about the domain that will enable us to simplify the expressions. The notion of independence in Section 13.4 provides a clue,
but needs refining. It would be nice if Toothache and Catch were independent, but they are not: if the probe catches in the tooth, then it is likely that the tooth has a cavity and that the cavity
causes a toothache. These variables are independent, however, given the presence or the absence of a cavity. Each is directly caused by the cavity, but neither has a direct effect on the other:
toothache depends on the state of the nerves in the tooth, whereas the probe’s accuracy depends on the dentist’s skill, to which the toothache is irrelevant.5 Mathematically, this property is written
P(toothache ∧ catch | Cavity) = P(toothache | Cavity)P(catch | Cavity) . (13.17)
This equation expresses the conditional independence of toothache and catch given Cavity .
We can plug it into Equation (13.16) to obtain the probability of a cavity:
P(Cavity | toothache ∧ catch) = α P(toothache | Cavity) P(catch | Cavity) P(Cavity) . (13.18)
Now the information requirements are the same as for inference, using each piece of evidence separately: the prior probability P(Cavity) for the query variable and the conditional probability of each
effect, given its cause.
The general definition of conditional independence of two variables X and Y , given a third variable Z , is
P(X,Y | Z) = P(X | Z)P(Y | Z) .
In the dentist domain, for example, it seems reasonable to assert conditional independence of the variables Toothache and Catch , given Cavity :
P(Toothache ,Catch | Cavity) = P(Toothache | Cavity)P(Catch | Cavity) . (13.19)
Notice that this assertion is somewhat stronger than Equation (13.17), which asserts independence only for specific values of Toothache and Catch . As with absolute independence in Equation (13.11),
the equivalent forms
P(X | Y,Z)= P(X | Z) and P(Y | X,Z)= P(Y | Z)
can also be used (see Exercise 13.17). Section 13.4 showed that absolute independence assertions allow a decomposition of the full joint distribution into much smaller pieces. It turns out that the
same is true for conditional independence assertions. For example, given the assertion in Equation (13.19), we can derive a decomposition as follows:
P(Toothache ,Catch,Cavity)
= P(Toothache ,Catch | Cavity)P(Cavity) (product rule)
= P(Toothache | Cavity)P(Catch | Cavity)P(Cavity) (using 13.19).
(The reader can easily check that this equation does in fact hold in Figure 13.3.) In this way, the original large table is decomposed into three smaller tables. The original table has seven
5 We assume that the patient and dentist are distinct individuals.
independent numbers (2^3^ = 8 entries in the table, but they must sum to 1, so 7 of them are independent). The smaller tables contain five independent numbers (for a conditional probability distribution such
as P(T | C) there are two rows of two numbers, and each row sums to 1, so that’s two independent numbers; for a prior distribution like P(C) there is only one independent number). Going from seven to
five might not seem like a major triumph, but the point is that, for n symptoms that are all conditionally independent given Cavity , the size of the representation grows as O(n) instead of O(2^n^).
That means that conditional independence assertions can allow probabilistic systems to scale up; moreover, they are much more commonly available than absolute independence assertions. Conceptually,
Cavity separates Toothache and Catch because it is a direct cause of both of them. The decomposition of large probabilistic domains into weakly connected subsets through conditional independence is
one of the most important developments in the recent history of AI.
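The claim that Equation (13.19) holds in Figure 13.3 can be checked mechanically. The sketch below (our own verification code) derives P(Cavity), P(Toothache | Cavity), and P(Catch | Cavity) from the full joint and confirms that their product reproduces every one of the eight entries:

```python
# Full joint of Figure 13.3, keyed by (cavity, toothache, catch).
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def p(event):
    return sum(pr for w, pr in joint.items() if event(*w))

p_cavity = {c: p(lambda cav, t, k: cav == c) for c in (True, False)}
p_tooth_given_cavity = {(t, c): p(lambda cav, tt, k: cav == c and tt == t) / p_cavity[c]
                        for t in (True, False) for c in (True, False)}
p_catch_given_cavity = {(k, c): p(lambda cav, t, kk: cav == c and kk == k) / p_cavity[c]
                        for k in (True, False) for c in (True, False)}

# Equation (13.19): the three small tables reproduce the full joint exactly.
for (cav, tooth, catch), pr in joint.items():
    reconstructed = (p_tooth_given_cavity[(tooth, cav)]
                     * p_catch_given_cavity[(catch, cav)]
                     * p_cavity[cav])
    assert abs(reconstructed - pr) < 1e-9
print("Toothache and Catch are conditionally independent given Cavity in Figure 13.3")
```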
The dentistry example illustrates a commonly occurring pattern in which a single cause directly influences a number of effects, all of which are conditionally independent, given the cause. The full
joint distribution can be written as
P(Cause, Effect~1~, . . . , Effect~n~) = P(Cause) ∏~i~ P(Effect~i~ | Cause) .
Such a probability distribution is called a naive Bayes model—“naive” because it is often used (as a simplifying assumption) in cases where the “effect” variables are not actually conditionally
independent given the cause variable. (The naive Bayes model is sometimes called a Bayesian classifier, a somewhat careless usage that has prompted true Bayesians to call it the idiot Bayes model.)
In practice, naive Bayes systems can work surprisingly well, even when the conditional independence assumption is not true. Chapter 20 describes methods for learning naive Bayes distributions from observed data.
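A naive Bayes query is just Equation (13.18) applied one effect at a time. The sketch below (our own illustration, using the prior and conditionals implied by Figure 13.3) recovers the same answer as the full-joint calculation, P(Cavity | toothache ∧ catch) ≈ 〈0.871, 0.129〉:

```python
# Prior and per-effect conditionals implied by the Figure 13.3 joint distribution.
prior = {True: 0.2, False: 0.8}            # P(Cavity)
p_toothache = {True: 0.6, False: 0.1}      # P(toothache | Cavity)
p_catch = {True: 0.9, False: 0.2}          # P(catch | Cavity)

def naive_bayes_posterior(prior, likelihood_tables):
    """P(Cause | observed effects) = alpha * P(Cause) * prod_i P(effect_i | Cause)."""
    unnormalized = {c: prior[c] for c in prior}
    for table in likelihood_tables:
        for c in unnormalized:
            unnormalized[c] *= table[c]
    alpha = 1 / sum(unnormalized.values())
    return {c: alpha * v for c, v in unnormalized.items()}

print(naive_bayes_posterior(prior, [p_toothache, p_catch]))
# {True: 0.871..., False: 0.129...}  -- matches the full-joint answer above
```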
We can combine all of the ideas in this chapter to solve probabilistic reasoning problems in the wumpus world. (See Chapter 7 for a complete description of the wumpus world.) Uncertainty arises in the
wumpus world because the agent’s sensors give only partial information about the world. For example, Figure 13.5 shows a situation in which each of the three reachable squares—[1,3], [2,2], and [3,1]
—might contain a pit. Pure logical inference can conclude nothing about which square is most likely to be safe, so a logical agent might have to choose randomly. We will see that a probabilistic
agent can do much better than the logical agent.
Our aim is to calculate the probability that each of the three squares contains a pit. (For this example we ignore the wumpus and the gold.) The relevant properties of the wumpus world are that (1) a
pit causes breezes in all neighboring squares, and (2) each square other than [1,1] contains a pit with probability 0.2. The first step is to identify the set of random variables we need:
• As in the propositional logic case, we want one Boolean variable Pij for each square, which is true iff square [i, j] actually contains a pit.
• We also have Boolean variables Bij that are true iff square [i, j] is breezy; we include these variables only for the observed squares—in this case, [1,1], [1,2], and [2,1].
Figure 13.5 (a) The situation after visiting [1,1], [1,2], and [2,1]: each of the reachable squares [1,3], [2,2], and [3,1] might contain a pit. (b) Division of the unknown pit variables into Frontier and Other for a query about [1,3].
The next step is to specify the full joint distribution, P(P~1,1~, . . . , P~4,4~, B~1,1~, B~1,2~, B~2,1~). Applying the product rule, we have
P(P~1,1~, . . . , P~4,4~, B~1,1~, B~1,2~, B~2,1~) = P(B~1,1~, B~1,2~, B~2,1~ | P~1,1~, . . . , P~4,4~)P(P~1,1~, . . . , P~4,4~) .
This decomposition makes it easy to see what the joint probability values should be. The first term is the conditional probability distribution of a breeze configuration, given a pit configuration;
its values are 1 if the breezes are adjacent to the pits and 0 otherwise. The second term is the prior probability of a pit configuration. Each square contains a pit with probability 0.2,
independently of the other squares; hence,
P(P~1,1~, . . . , P~4,4~) = ∏~i,j~ P(P~i,j~) .
For a particular configuration with exactly n pits, P (P~1,1~, . . . , P~4,4~) = 0.2^n^ × 0.8^16−n^. In the situation in Figure 13.5(a), the evidence consists of the observed breeze (or its
absence) in each square that is visited, combined with the fact that each such square contains no pit. We abbreviate these facts as b = ¬b~1,1~ ∧ b~1,2~ ∧ b~2,1~ and known = ¬p~1,1~ ∧ ¬p~1,2~ ∧ ¬p~2,1~. We are
interested in answering queries such as P(P~1,3~ | known , b): how likely is it that [1,3] contains a pit, given the observations so far?
To answer this query, we can follow the standard approach of Equation (13.9), namely, summing over entries from the full joint distribution. Let Unknown be the set of P~i,j~ variables for squares
other than the Known squares and the query square [1,3]. Then, by Equation (13.9), we have
P(P~1,3~ | known , b) = α ∑~unknown~ P(P~1,3~, unknown , known , b) .
The full joint probabilities have already been specified, so we are done—that is, unless we care about computation. There are 12 unknown squares; hence the summation contains 2^12^ = 4096 terms. In
general, the summation grows exponentially with the number of squares.
Surely, one might ask, aren’t the other squares irrelevant? How could [4,4] affect whether [1,3] has a pit? Indeed, this intuition is correct. Let Frontier be the pit variables (other than the query
variable) that are adjacent to visited squares, in this case just [2,2] and [3,1]. Also, let Other be the pit variables for the other unknown squares; in this case, there are 10 other squares, as
shown in Figure 13.5(b). The key insight is that the observed breezes are conditionally independent of the other variables, given the known, frontier, and query variables. To use the insight, we
manipulate the query formula into a form in which the breezes are conditioned on all the other variables, and then we apply conditional independence:
P(P~1,3~ | known , b) = α ∑~unknown~ P(P~1,3~, known , b, unknown)
  = α ∑~frontier~ ∑~other~ P(b | known , P~1,3~, frontier , other ) P(P~1,3~, known , frontier , other )
  = α ∑~frontier~ P(b | known , P~1,3~, frontier ) ∑~other~ P(P~1,3~, known , frontier , other )
  = α P (known) P(P~1,3~) ∑~frontier~ P(b | known , P~1,3~, frontier ) P (frontier ) ∑~other~ P (other )
  = α′ P(P~1,3~) ∑~frontier~ P(b | known , P~1,3~, frontier ) P (frontier ) ,
where the last step folds P (known) into the normalizing constant and uses the fact that ∑~other~ P (other ) equals 1. Now, there are just four terms in the summation over the frontier variables
P~2,2~ and P~3,1~. The use of independence and conditional independence has completely eliminated the other squares from consideration.
Notice that the expression P(b | known , P~1,3~, frontier) is 1 when the frontier is consistent with the breeze observations, and 0 otherwise. Thus, for each value of P~1,3~, we sum over the logical
models for the frontier variables that are consistent with the known facts. (Compare with the enumeration over models in Figure 7.5 on page 241.) The models and their associated prior probabilities—P
(frontier )—are shown in Figure 13.6. We have
P(P~1,3~ | known , b) = α′ 〈0.2(0.04 + 0.16 + 0.16), 0.8(0.04 + 0.16)〉 ≈ 〈0.31, 0.69〉 .
That is, [1,3] (and [3,1] by symmetry) contains a pit with roughly 31% probability. A similar calculation, which the reader might wish to perform, shows that [2,2] contains a pit with roughly 86%
probability. The wumpus agent should definitely avoid [2,2]! Note that our logical agent from Chapter 7 did not know that [2,2] was worse than the other squares. Logic can tell us that it is unknown
whether there is a pit in [2, 2], but we need probability to tell us how likely it is.
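The whole wumpus calculation fits in a few lines. The sketch below (our own enumeration over the frontier variables, mirroring the derivation above; square coordinates follow Figure 13.5) reproduces both the 31% figure for [1,3] and the 86% figure for [2,2]. The absence of a breeze in [1,1] constrains only the already-visited squares, so it needs no separate check here:

```python
from itertools import product

PIT_PRIOR = 0.2
breezy_squares = [(1, 2), (2, 1)]         # squares where a breeze was observed
candidates = [(1, 3), (2, 2), (3, 1)]     # unknown squares adjacent to visited squares

def adjacent(sq1, sq2):
    (x1, y1), (x2, y2) = sq1, sq2
    return abs(x1 - x2) + abs(y1 - y2) == 1

def breezes_explained(pits):
    """P(b | known, pits) is 1 iff every observed breeze has at least one adjacent pit."""
    return all(any(adjacent(b, p) for p in pits) for b in breezy_squares)

def pit_probability(query):
    frontier = [sq for sq in candidates if sq != query]
    unnormalized = {}
    for query_has_pit in (True, False):
        total = 0.0
        for assignment in product([True, False], repeat=len(frontier)):
            pits = [sq for sq, has in zip(frontier, assignment) if has]
            if query_has_pit:
                pits.append(query)
            if breezes_explained(pits):
                prior = 1.0
                for has in assignment:               # P(frontier)
                    prior *= PIT_PRIOR if has else 1 - PIT_PRIOR
                total += prior
        unnormalized[query_has_pit] = (PIT_PRIOR if query_has_pit else 1 - PIT_PRIOR) * total
    alpha = 1 / sum(unnormalized.values())
    return alpha * unnormalized[True]

print(round(pit_probability((1, 3)), 2))   # 0.31
print(round(pit_probability((2, 2)), 2))   # 0.86
```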
What this section has shown is that even seemingly complicated problems can be formulated precisely in probability theory and solved with simple algorithms. To get efficient solutions, independence
and conditional independence relationships can be used to simplify the summations required. These relationships often correspond to our natural understanding of how the problem should be decomposed.
In the next chapter, we develop formal representations for such relationships as well as algorithms that operate on those representations to perform probabilistic inference efficiently.
This chapter has suggested probability theory as a suitable foundation for uncertain reasoning and provided a gentle introduction to its use.
• Uncertainty arises because of both laziness and ignorance. It is inescapable in complex, nondeterministic, or partially observable environments.
• Probabilities express the agent’s inability to reach a definite decision regarding the truth of a sentence. Probabilities summarize the agent’s beliefs relative to the evidence.
• Decision theory combines the agent’s beliefs and desires, defining the best action as the one that maximizes expected utility.
• Basic probability statements include prior probabilities and conditional probabilities over simple and complex propositions.
• The axioms of probability constrain the possible assignments of probabilities to propositions. An agent that violates the axioms must behave irrationally in some cases.
• The full joint probability distribution specifies the probability of each complete assignment of values to random variables. It is usually too large to create or use in its explicit form, but
when it is available it can be used to answer queries simply by adding up entries for the possible worlds corresponding to the query propositions.
• Absolute independence between subsets of random variables allows the full joint distribution to be factored into smaller joint distributions, greatly reducing its complexity. Absolute
independence seldom occurs in practice.
• Bayes’ rule allows unknown probabilities to be computed from known conditional probabilities, usually in the causal direction. Applying Bayes’ rule with many pieces of evidence runs into the same
scaling problems as does the full joint distribution.
• Conditional independence brought about by direct causal relationships in the domain might allow the full joint distribution to be factored into smaller, conditional distributions. The naive Bayes
model assumes the conditional independence of all effect variables, given a single cause variable, and grows linearly with the number of effects.
• A wumpus-world agent can calculate probabilities for unobserved aspects of the world, thereby improving on the decisions of a purely logical agent. Conditional independence makes these
calculations tractable.
Probability theory was invented as a way of analyzing games of chance. In about 850 A.D. the Indian mathematician Mahaviracarya described how to arrange a set of bets that can’t lose (what we now
call a Dutch book). In Europe, the first significant systematic analyses were produced by Girolamo Cardano around 1565, although publication was posthumous (1663). By that time, probability had been
established as a mathematical discipline due to a series of results established in a famous correspondence between Blaise Pascal and Pierre de Fermat in 1654. As with probability itself, the results
were initially motivated by gambling problems (see Exercise 13.9). The first published textbook on probability was De Ratiociniis in Ludo Aleae (Huygens, 1657). The “laziness and ignorance” view of
uncertainty was described by John Arbuthnot in the preface of his translation of Huygens (Arbuthnot, 1692): “It is impossible for a Die, with such determin’d force and direction, not to fall on such
determin’d side, only I don’t know the force and direction which makes it fall on such determin’d side, and therefore I call it Chance, which is nothing but the want of art…”
Laplace (1816) gave an exceptionally accurate and modern overview of probability; he was the first to use the example “take two urns, A and B, the first containing four white and two black balls, . .
. ” The Rev. Thomas Bayes (1702–1761) introduced the rule for reasoning about conditional probabilities that was named after him (Bayes, 1763). Bayes only considered the case of uniform priors; it
was Laplace who independently developed the general case. Kolmogorov (1950, first published in German in 1933) presented probability theory in a rigorously axiomatic framework for the first time.
Rényi (1970) later gave an axiomatic presentation that took conditional probability, rather than absolute probability, as primitive.
Pascal used probability in ways that required both the objective interpretation, as a property of the world based on symmetry or relative frequency, and the subjective interpretation, based on degree
of belief—the former in his analyses of probabilities in games of chance, the latter in the famous “Pascal’s wager” argument about the possible existence of God. However, Pascal did not clearly
realize the distinction between these two interpretations. The distinction was first drawn clearly by James Bernoulli (1654–1705).
Leibniz introduced the “classical” notion of probability as a proportion of enumerated, equally probable cases, which was also used by Bernoulli, although it was brought to prominence by Laplace
(1749–1827). This notion is ambiguous between the frequency interpretation and the subjective interpretation. The cases can be thought to be equally probable either because of a natural, physical
symmetry between them, or simply because we do not have any knowledge that would lead us to consider one more probable than another. The use of this latter, subjective consideration to justify
assigning equal probabilities is known as the principle of indifference. The principle is often attributed to Laplace, but he never isolated the principle explicitly. George Boole and John Venn both
referred to it as the principle of insufficient reason; the modern name is due to Keynes (1921).
The debate between objectivists and subjectivists became sharper in the 20th century. Kolmogorov (1963), R. A. Fisher (1922), and Richard von Mises (1928) were advocates of the relative frequency
interpretation. Karl Popper’s (1959, first published in German in 1934) “propensity” interpretation traces relative frequencies to an underlying physical symmetry. Frank Ramsey (1931), Bruno de
Finetti (1937), R. T. Cox (1946), Leonard Savage (1954), Richard Jeffrey (1983), and E. T. Jaynes (2003) interpreted probabilities as the degrees of belief of specific individuals. Their analyses of
degree of belief were closely tied to utilities and to behavior—specifically, to the willingness to place bets. Rudolf Carnap, following Leibniz and Laplace, offered a different kind of subjective
interpretation of probability— not as any actual individual’s degree of belief, but as the degree of belief that an idealized individual should have in a particular proposition a, given a particular
body of evidence e.
Carnap attempted to go further than Leibniz or Laplace by making this notion of degree of confirmation mathematically precise, as a logical relation between a and e. The study of this relation was
intended to constitute a mathematical discipline called inductive logic, analogous to ordinary deductive logic (Carnap, 1948, 1950). Carnap was not able to extend his inductive logic much beyond the
propositional case, and Putnam (1963) showed by adversarial arguments that some fundamental difficulties would prevent a strict extension to languages capable of expressing arithmetic.
Cox’s theorem (1946) shows that any system for uncertain reasoning that meets his set of assumptions is equivalent to probability theory. This gave renewed confidence to those who already favored
probability, but others were not convinced, pointing to the assumptions (primarily that belief must be represented by a single number, and thus the belief in ¬p must be a function of the belief in
p). Halpern (1999) describes the assumptions and shows some gaps in Cox’s original formulation. Horn (2003) shows how to patch up the difficulties. Jaynes (2003) has a similar argument that is easier
to read.
The question of reference classes is closely tied to the attempt to find an inductive logic. The approach of choosing the “most specific” reference class of sufficient size was formally proposed by
Reichenbach (1949). Various attempts have been made, notably by Henry Kyburg (1977, 1983), to formulate more sophisticated policies in order to avoid some obvious fallacies that arise with
Reichenbach’s rule, but such approaches remain somewhat ad hoc. More recent work by Bacchus, Grove, Halpern, and Koller (1992) extends Carnap’s methods to first-order theories, thereby avoiding many
of the difficulties associated with the straightforward reference-class method. Kyburg and Teng (2006) contrast probabilistic inference with nonmonotonic logic.
Bayesian probabilistic reasoning has been used in AI since the 1960s, especially in medical diagnosis. It was used not only to make a diagnosis from available evidence, but also to select further
questions and tests by using the theory of information value (Section 16.6) when available evidence was inconclusive (Gorry, 1968; Gorry et al., 1973). One system outperformed human experts in the
diagnosis of acute abdominal illnesses (de Dombal et al., 1974). Lucas et al. (2004) gives an overview. These early Bayesian systems suffered from a number of problems, however. Because they lacked
any theoretical model of the conditions they were diagnosing, they were vulnerable to unrepresentative data occurring in situations for which only a small sample was available (de Dombal et al.,
1981). Even more fundamentally, because they lacked a concise formalism (such as the one to be described in Chapter 14) for representing and using conditional independence information, they depended
on the acquisition, storage, and processing of enormous tables of probabilistic data. Because of these difficulties, probabilistic methods for coping with uncertainty fell out of favor in AI from the
1970s to the mid-1980s. Developments since the late 1980s are described in the next chapter.
The naive Bayes model for joint distributions has been studied extensively in the pattern recognition literature since the 1950s (Duda and Hart, 1973). It has also been used, often unwittingly, in
information retrieval, beginning with the work of Maron (1961). The probabilistic foundations of this technique, described further in Exercise 13.22, were elucidated by Robertson and Sparck Jones
(1976). Domingos and Pazzani (1997) provide an explanation for the surprising success of naive Bayesian reasoning even in domains where the independence assumptions are clearly violated.
There are many good introductory textbooks on probability theory, including those by Bertsekas and Tsitsiklis (2008) and Grinstead and Snell (1997). DeGroot and Schervish (2001) offer a combined
introduction to probability and statistics from a Bayesian standpoint. Richard Hamming’s (1991) textbook gives a mathematically sophisticated introduction to probability theory from the standpoint of
a propensity interpretation based on physical symmetry. Hacking (1975) and Hald (1990) cover the early history of the concept of probability. Bernstein (1996) gives an entertaining popular account of
the story of risk.
13.1 Show from first principles that P (a | b ∧ a) = 1.
13.2 Using the axioms of probability, prove that any probability distribution on a discrete random variable must sum to 1.
13.3 For each of the following statements, either prove it is true or give a counterexample.
a. If P (a | b, c) = P (b | a, c), then P (a | c) = P (b | c)
b. If P (a | b, c) = P (a), then P (b | c) = P (b)
c. If P (a | b) = P (a), then P (a | b, c) = P (a | c)
13.4 Would it be rational for an agent to hold the three beliefs P (A)= 0.4, P (B)= 0.3, and P (A∨B)=0.5? If so, what range of probabilities would be rational for the agent to hold for A∧B? Make up a
table like the one in Figure 13.2, and show how it supports your argument about rationality. Then draw another version of the table where P (A ∨ B)= 0.7. Explain why it is rational to have this
probability, even though the table shows one case that is a loss and three that just break even. (Hint: what is Agent 1 committed to about the probability of each of the four cases, especially the
case that is a loss?)
13.5 This question deals with the properties of possible worlds, defined on page 488 as assignments to all random variables. We will work with propositions that correspond to exactly one possible
world because they pin down the assignments of all the variables. In probability theory, such propositions are called atomic events. For example, with Boolean
variables X~1~, X~2~, X~3~, the proposition x~1~ ∧ ¬x~2~ ∧ ¬x~3~ fixes the assignment of the variables; in the language of propositional logic, we would say it has exactly one model.
a. Prove, for the case of n Boolean variables, that any two distinct atomic events are mutually exclusive; that is, their conjunction is equivalent to false .
b. Prove that the disjunction of all possible atomic events is logically equivalent to true .
c. Prove that any proposition is logically equivalent to the disjunction of the atomic events that entail its truth.
13.6 Prove Equation (13.4) from Equations (13.1) and (13.2).
13.7 Consider the set of all possible five-card poker hands dealt fairly from a standard deck of fifty-two cards.
a. How many atomic events are there in the joint probability distribution (i.e., how many five-card hands are there)?
b. What is the probability of each atomic event?
c. What is the probability of being dealt a royal straight flush? Four of a kind?
13.8 Given the full joint distribution shown in Figure 13.3, calculate the following:
a. P(toothache) .
b. P(Cavity) .
c. P(Toothache | cavity) .
d. P(Cavity | toothache ∨ catch) .
13.9 In his letter of August 24, 1654, Pascal was trying to show how a pot of money should be allocated when a gambling game must end prematurely. Imagine a game where each turn consists of the roll
of a die, player E gets a point when the die is even, and player O gets a point when the die is odd. The first player to get 7 points wins the pot. Suppose the game is interrupted with E leading 4–2.
How should the money be fairly split in this case? What is the general formula? (Fermat and Pascal made several errors before solving the problem, but you should be able to get it right the first time.)
13.10 Deciding to put probability theory to good use, we encounter a slot machine with three independent wheels, each producing one of the four symbols BAR, BELL, LEMON, or CHERRY with equal
probability. The slot machine has the following payout scheme for a bet of 1 coin (where “?” denotes that we don’t care what comes up for that wheel):
BAR/BAR/BAR pays 20 coins
BELL/BELL/BELL pays 15 coins
LEMON/LEMON/LEMON pays 5 coins
CHERRY/CHERRY/CHERRY pays 3 coins
CHERRY/CHERRY/? pays 2 coins
CHERRY/?/? pays 1 coin
a. Compute the expected “payback” percentage of the machine. In other words, for each coin played, what is the expected coin return?
b. Compute the probability that playing the slot machine once will result in a win.
c. Estimate the mean and median number of plays you can expect to make until you go broke, if you start with 10 coins. You can run a simulation to estimate this, rather than trying to compute an
exact answer.
13.11 We wish to transmit an n-bit message to a receiving agent. The bits in the message are independently corrupted (flipped) during transmission with ε probability each. With an extra parity bit
sent along with the original information, a message can be corrected by the receiver if at most one bit in the entire message (including the parity bit) has been corrupted. Suppose we want to ensure
that the correct message is received with probability at least 1− δ. What is the maximum feasible value of n? Calculate this value for the case ε= 0.001, δ = 0.01.
13.12 Show that the three forms of independence in Equation (13.11) are equivalent.
13.13 Consider two medical tests, A and B, for a virus. Test A is 95% effective at recognizing the virus when it is present, but has a 10% false positive rate (indicating that the virus is present,
when it is not). Test B is 90% effective at recognizing the virus, but has a 5% false positive rate. The two tests use independent methods of identifying the virus. The virus is carried by 1% of all
people. Say that a person is tested for the virus using only one of the tests, and that test comes back positive for carrying the virus. Which test returning positive is more indicative of someone
really carrying the virus? Justify your answer mathematically.
13.14 Suppose you are given a coin that lands heads with probability x and tails with probability 1 − x. Are the outcomes of successive flips of the coin independent of each other given that you know
the value of x? Are the outcomes of successive flips of the coin independent of each other if you do not know the value of x? Justify your answer.
13.15 After your yearly checkup, the doctor has bad news and good news. The bad news is that you tested positive for a serious disease and that the test is 99% accurate (i.e., the probability of
testing positive when you do have the disease is 0.99, as is the probability of testing negative when you don’t have the disease). The good news is that this is a rare disease, striking only 1 in
10,000 people of your age. Why is it good news that the disease is rare? What are the chances that you actually have the disease?
13.16 It is quite often useful to consider the effect of some specific propositions in the context of some general background evidence that remains fixed, rather than in the complete absence of
information. The following questions ask you to prove more general versions of the product rule and Bayes’ rule, with respect to some background evidence e:
a. Prove the conditionalized version of the general product rule:
P(X,Y | e) = P(X | Y, e)P(Y | e) .
b. Prove the conditionalized version of Bayes’ rule in Equation (13.13).
13.17 Show that the statement of conditional independence
P(X,Y | Z) = P(X | Z)P(Y | Z)
is equivalent to each of the statements
P(X | Y,Z) = P(X | Z) and P(Y | X,Z) = P(Y | Z) .
13.18 Suppose you are given a bag containing n unbiased coins. You are told that n− 1 of these coins are normal, with heads on one side and tails on the other, whereas one coin is a fake, with heads
on both sides.
a. Suppose you reach into the bag, pick out a coin at random, flip it, and get a head. What is the (conditional) probability that the coin you chose is the fake coin?
b. Suppose you continue flipping the coin for a total of k times after picking it and see k heads. Now what is the conditional probability that you picked the fake coin?
c. Suppose you wanted to decide whether the chosen coin was fake by flipping it k times. The decision procedure returns fake if all k flips come up heads; otherwise it returns normal . What is the (unconditional) probability that this procedure makes an error?
13.19 In this exercise, you will complete the normalization calculation for the meningitis example. First, make up a suitable value for P (s | ¬m), and use it to calculate unnormalized values for P
(m | s) and P (¬m | s) (i.e., ignoring the P (s) term in the Bayes’ rule expression, Equation (13.14)). Now normalize these values so that they add to 1.
13.20 Let X, Y , Z be Boolean random variables. Label the eight entries in the joint distribution P(X,Y,Z) as a through h. Express the statement that X and Y are conditionally independent given Z ,
as a set of equations relating a through h. How many nonredundant equations are there?
13.21 (Adapted from Pearl (1988).) Suppose you are a witness to a nighttime hit-and-run accident involving a taxi in Athens. All taxis in Athens are blue or green. You swear, under oath, that the
taxi was blue. Extensive testing shows that, under the dim lighting conditions, discrimination between blue and green is 75% reliable.
a. Is it possible to calculate the most likely color for the taxi? (Hint: distinguish carefully between the proposition that the taxi is blue and the proposition that it appears blue.)
b. What if you know that 9 out of 10 Athenian taxis are green?
13.22 Text categorization is the task of assigning a given document to one of a fixed set of categories on the basis of the text it contains. Naive Bayes models are often used for this task. In these
models, the query variable is the document category, and the “effect” variables are the presence or absence of each word in the language; the assumption is that words occur independently in
documents, with frequencies determined by the document category.
a. Explain precisely how such a model can be constructed, given as “training data” a set of documents that have been assigned to categories.
b. Explain precisely how to categorize a new document.
c. Is the conditional independence assumption reasonable? Discuss.
13.23 In our analysis of the wumpus world, we used the fact that each square contains a pit with probability 0.2, independently of the contents of the other squares. Suppose instead that exactly N/5
pits are scattered at random among the N squares other than [1,1]. Are the variables P~i,j~ and P~k,l~ still independent? What is the joint distribution P(P~1,1~, . . . , P~4,4~)
now? Redo the calculation for the probabilities of pits in [1,3] and [2,2].
13.24 Redo the probability calculation for pits in [1,3] and [2,2], assuming that each square contains a pit with probability 0.01, independent of the other squares. What can you say about the
relative performance of a logical versus a probabilistic agent in this case?
13.25 Implement a hybrid probabilistic agent for the wumpus world, based on the hybrid agent in Figure 7.20 and the probabilistic inference procedure outlined in this chapter.
In which we explain how to build network models to reason under uncertainty according to the laws of probability theory.
Chapter 13 introduced the basic elements of probability theory and noted the importance of independence and conditional independence relationships in simplifying probabilistic representations of the
world. This chapter introduces a systematic way to represent such relationships explicitly in the form of Bayesian networks. We define the syntax and semantics of these networks and show how they can
be used to capture uncertain knowledge in a natural and efficient way. We then show how probabilistic inference, although computationally intractable in the worst case, can be done efficiently in
many practical situations. We also describe a variety of approximate inference algorithms that are often applicable when exact inference is infeasible. We explore ways in which probability theory can
be applied to worlds with objects and relations—that is, to first-order, as opposed to propositional, representations. Finally, we survey alternative approaches to uncertain reasoning.
In Chapter 13, we saw that the full joint probability distribution can answer any question about the domain, but can become intractably large as the number of variables grows. Furthermore, specifying
probabilities for possible worlds one by one is unnatural and tedious.
We also saw that independence and conditional independence relationships among variables can greatly reduce the number of probabilities that need to be specified in order to define the full joint
distribution. This section introduces a data structure called a Bayesian network^1^ to represent the dependencies among variables. Bayesian networks can represent essentially any full joint
probability distribution and in many cases can do so very concisely.
1 This is the most common name, but there are many synonyms, including belief network, probabilistic network, causal network, and knowledge map. In statistics, the term graphical model refers to a
somewhat broader class that includes Bayesian networks. An extension of Bayesian networks called a decision network or influence diagram is covered in Chapter 16.
A Bayesian network is a directed graph in which each node is annotated with quantitative probability information. The full specification is as follows:
1. Each node corresponds to a random variable, which may be discrete or continuous.
2. A set of directed links or arrows connects pairs of nodes. If there is an arrow from node X to node Y , X is said to be a parent of Y. The graph has no directed cycles (and hence is a directed acyclic graph, or DAG).
3. Each node X~i~ has a conditional probability distribution P(X~i~ |Parents(X~i~)) that quantifies the effect of the parents on the node.
The topology of the network—the set of nodes and links—specifies the conditional independence relationships that hold in the domain, in a way that will be made precise shortly. The intuitive meaning
of an arrow is typically that X has a direct influence on Y, which suggests that causes should be parents of effects. It is usually easy for a domain expert to decide what direct influences exist in
the domain—much easier, in fact, than actually specifying the probabilities themselves. Once the topology of the Bayesian network is laid out, we need only specify a conditional probability
distribution for each variable, given its parents. We will see that the combination of the topology and the conditional distributions suffices to specify (implicitly) the full joint distribution for
all the variables.
Recall the simple world described in Chapter 13, consisting of the variables Toothache , Cavity , Catch , and Weather . We argued that Weather is independent of the other variables; furthermore, we
argued that Toothache and Catch are conditionally independent, given Cavity . These relationships are represented by the Bayesian network structure shown in Figure 14.1. Formally, the conditional
independence of Toothache and Catch , given Cavity , is indicated by the absence of a link between Toothache and Catch . Intuitively, the network represents the fact that Cavity is a direct cause of
Toothache and Catch , whereas no direct causal relationship exists between Toothache and Catch .
Now consider the following example, which is just a little more complex. You have a new burglar alarm installed at home. It is fairly reliable at detecting a burglary, but also responds on occasion
to minor earthquakes. (This example is due to Judea Pearl, a resident of Los Angeles—hence the acute interest in earthquakes.) You also have two neighbors, John and Mary, who have promised to call
you at work when they hear the alarm. John nearly always calls when he hears the alarm, but sometimes confuses the telephone ringing with
the alarm and calls then, too. Mary, on the other hand, likes rather loud music and often misses the alarm altogether. Given the evidence of who has or has not called, we would like to estimate the
probability of a burglary.
A Bayesian network for this domain appears in Figure 14.2. The network structure shows that burglary and earthquakes directly affect the probability of the alarm’s going off, but whether John and
Mary call depends only on the alarm. The network thus represents our assumptions that they do not perceive burglaries directly, they do not notice minor earthquakes, and they do not confer before calling.
The conditional distributions in Figure 14.2 are shown as a conditional probability table, or CPT. (This form of table can be used for discrete variables; other representations, including those
suitable for continuous variables, are described in Section 14.2.) Each row in a CPT contains the conditional probability of each node value for a conditioning case.
A conditioning case is just a possible combination of values for the parent nodes—a miniature possible world, if you like. Each row must sum to 1, because the entries represent an exhaustive set of
cases for the variable. For Boolean variables, once you know that the probability of a true value is p, the probability of false must be 1 – p, so we often omit the second number, as in Figure 14.2.
In general, a table for a Boolean variable with k Boolean parents contains 2^k^ independently specifiable probabilities. A node with no parents has only one row, representing the prior probabilities
of each possible value of the variable.
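To make the specification concrete, here is a minimal sketch (in Python, not from the text) of one way to store such a network: each node records its parent list and a CPT mapping parent-value tuples to the probability that the node is true. The dictionary layout and function name are illustrative choices; the CPT entries are those of the burglary network in Figure 14.2.

```python
# A minimal sketch of a discrete Bayesian network as a data structure.
# Each node stores its parents and a CPT mapping a tuple of parent values
# to P(node = true | parents).  Entries follow the burglary network (Figure 14.2).

network = {
    "Burglary":   {"parents": [], "cpt": {(): 0.001}},
    "Earthquake": {"parents": [], "cpt": {(): 0.002}},
    "Alarm":      {"parents": ["Burglary", "Earthquake"],
                   "cpt": {(True, True): 0.95, (True, False): 0.94,
                           (False, True): 0.29, (False, False): 0.001}},
    "JohnCalls":  {"parents": ["Alarm"], "cpt": {(True,): 0.90, (False,): 0.05}},
    "MaryCalls":  {"parents": ["Alarm"], "cpt": {(True,): 0.70, (False,): 0.01}},
}

def conditional(var, value, assignment):
    """P(var = value | parents(var)) for a complete assignment of the parents."""
    node = network[var]
    key = tuple(assignment[p] for p in node["parents"])
    p_true = node["cpt"][key]
    return p_true if value else 1.0 - p_true

# Example lookup: P(JohnCalls = true | Alarm = true) = 0.90
print(conditional("JohnCalls", True, {"Alarm": True}))
```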
Notice that the network does not have nodes corresponding to Mary’s currently listening to loud music or to the telephone ringing and confusing John. These factors are summarized in the uncertainty
associated with the links from Alarm to JohnCalls and MaryCalls . This shows both laziness and ignorance in operation: it would be a lot of work to find out why those factors would be more or less
likely in any particular case, and we have no reasonable way to obtain the relevant information anyway. The probabilities actually summarize a potentially infinite set of circumstances in which the
alarm might fail to go off (high humidity, power failure, dead battery, cut wires, a dead mouse stuck inside the bell, etc.) or John or Mary might fail to call and report it (out to lunch, on
vacation, temporarily deaf, passing helicopter, etc.). In this way, a small agent can cope with a very large world, at least approximately. The degree of approximation can be improved if we introduce
additional relevant information.
The previous section described what a network is, but not what it means. There are two ways in which one can understand the semantics of Bayesian networks. The first is to see the network as a
representation of the joint probability distribution. The second is to view it as an encoding of a collection of conditional independence statements. The two views are equivalent, but the first turns
out to be helpful in understanding how to construct networks, whereas the second is helpful in designing inference procedures.
Representing the full joint distribution
Viewed as a piece of “syntax,” a Bayesian network is a directed acyclic graph with some numeric parameters attached to each node. One way to define what the network means—its semantics—is to define
the way in which it represents a specific joint distribution over all the variables. To do this, we first need to retract (temporarily) what we said earlier about the parameters associated with each
node. We said that those parameters correspond to conditional probabilities P(X~i~ |Parents(X~i~)); this is a true statement, but until we assign semantics to the network as a whole, we should think
of them just as numbers θ(X~i~ |Parents(X~i~)).
A generic entry in the joint distribution is the probability of a conjunction of particular assignments to each variable, such as P (X~1~ = x~1~ ∧ . . . ∧ X~n~ = x~n~). We use the notation P (x~1~, .
. . , x~n~) as an abbreviation for this. The value of this entry is given by the formula
P (x~1~, . . . , x~n~) = ∏~i=1~^n^ θ(x~i~ | parents(X~i~)) , (14.1)
where parents(X~i~) denotes the values of Parents(X~i~) that appear in x~1~, . . . , x~n~. Thus, each entry in the joint distribution is represented by the product of the appropriate elements of the
conditional probability tables (CPTs) in the Bayesian network.
From this definition, it is easy to prove that the parameters θ(X~i~ |Parents(X~i~)) are exactly the conditional probabilities P(X~i~ |Parents(X~i~)) implied by the joint distribution (see Exercise 14.2).
Hence, we can rewrite Equation (14.1) as
P (x~1~, . . . , x~n~) = ∏~i=1~^n^ P (x~i~ | parents(X~i~)) . (14.2)
In other words, the tables we have been calling conditional probability tables really are conditional probability tables according to the semantics defined in Equation (14.1).
To illustrate this, we can calculate the probability that the alarm has sounded, but neither a burglary nor an earthquake has occurred, and both John and Mary call. We multiply entries from the joint
distribution (using single-letter names for the variables):
P (j,m, a,¬b,¬e) = P (j | a) P (m | a) P (a | ¬b ∧ ¬e) P (¬b) P (¬e) = 0.90 × 0.70× 0.001 × 0.999 × 0.998 = 0.000628 .
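The same product can be verified with a few lines of Python; the variable names simply spell out the single-letter abbreviations used above.

```python
# Reproducing the joint-probability entry above from the CPT entries of Figure 14.2.
p_j_given_a     = 0.90    # P(j | a)
p_m_given_a     = 0.70    # P(m | a)
p_a_given_nb_ne = 0.001   # P(a | ¬b, ¬e)
p_not_b         = 0.999   # P(¬b)
p_not_e         = 0.998   # P(¬e)

p = p_j_given_a * p_m_given_a * p_a_given_nb_ne * p_not_b * p_not_e
print(round(p, 6))        # 0.000628
```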
Section 13.3 explained that the full joint distribution can be used to answer any query about the domain. If a Bayesian network is a representation of the joint distribution, then it too can be used
to answer any query, by summing all the relevant joint entries. Section 14.4 explains how to do this, but also describes methods that are much more efficient.
A method for constructing Bayesian networks
Equation (14.2) defines what a given Bayesian network means. The next step is to explain how to construct a Bayesian network in such a way that the resulting joint distribution is a good
representation of a given domain. We will now show that Equation (14.2) implies certain conditional independence relationships that can be used to guide the knowledge engineer in constructing the
topology of the network. First, we rewrite the entries in the joint distribution in terms of conditional probability, using the product rule (see page 486):
P (x~1~, . . . , x~n~) = P (x~n~ |x~n−1~, . . . , x~1~)P (x~n−1~, . . . , x~1~) .
Then we repeat the process, reducing each conjunctive probability to a conditional probability and a smaller conjunction. We end up with one big product:
P (x~1~, . . . , x~n~) = P (x~n~ | x~n−1~, . . . , x~1~) P (x~n−1~ | x~n−2~, . . . , x~1~) · · · P (x~2~ | x~1~) P (x~1~) = ∏~i=1~^n^ P (x~i~ | x~i−1~, . . . , x~1~) .
This identity is called the chain rule. It holds for any set of random variables. Comparing it
with Equation (14.2), we see that the specification of the joint distribution is equivalent to the general assertion that, for every variable X~i~ in the network,
P(X~i~ |X~i−1~, . . . , X~1~) = P(X~i~ |Parents(X~i~)) , (14.3)
provided that Parents(X~i~) ⊆ {X~i−1~, . . . ,X~1~}. This last condition is satisfied by numbering the nodes in a way that is consistent with the partial order implicit in the graph structure.
What Equation (14.3) says is that the Bayesian network is a correct representation of the domain only if each node is conditionally independent of its other predecessors in the node ordering, given
its parents. We can satisfy this condition with this methodology:
1. Nodes: First determine the set of variables that are required to model the domain. Now order them, {X~1~, . . . ,X~n~}. Any order will work, but the resulting network will be more compact if the
variables are ordered such that causes precede effects.
2. Links: For i = 1 to n do:
• Choose, from X~1~, . . . ,X~i−1~, a minimal set of parents for X~i~, such that Equation (14.3) is satisfied.
• For each parent insert a link from the parent to X~i~.
• CPTs: Write down the conditional probability table, P(X~i~ |Parents(X~i~)).
Intuitively, the parents of node X~i~ should contain all those nodes in X~1~, . . . , X~i−1~ that directly influence X~i~. For example, suppose we have completed the network in Figure 14.2 except for
the choice of parents for MaryCalls . MaryCalls is certainly influenced by whether there is a Burglary or an Earthquake , but not directly influenced. Intuitively, our knowledge of the domain tells
us that these events influence Mary’s calling behavior only through their effect on the alarm. Also, given the state of the alarm, whether John calls has no influence on Mary’s calling. Formally
speaking, we believe that the following conditional independence statement holds:
P(MaryCalls |JohnCalls ,Alarm,Earthquake ,Burglary) = P(MaryCalls |Alarm) .
Thus, Alarm will be the only parent node for MaryCalls . Because each node is connected only to earlier nodes, this construction method guaran-
tees that the network is acyclic. Another important property of Bayesian networks is that they contain no redundant probability values. If there is no redundancy, then there is no chance for
inconsistency: it is impossible for the knowledge engineer or domain expert to create a Bayesian network that violates the axioms of probability.
Compactness and node ordering
As well as being a complete and nonredundant representation of the domain, a Bayesian network can often be far more compact than the full joint distribution. This property is what makes it feasible
to handle domains with many variables. The compactness of Bayesian networks is an example of a general property of locally structured (also called sparse) systems. In a locally structured
system, each subcomponent interacts directly with only a bounded number of other components, regardless of the total number of components. Local structure is usually associated with linear rather
than exponential growth in complexity. In the case of Bayesian networks, it is reasonable to suppose that in most domains each random variable is directly influenced by at most k others, for some
constant k. If we assume n Boolean variables for simplicity, then the amount of information needed to specify each conditional probability table will be at most 2^k^ numbers, and the complete network
can be specified by n2^k^ numbers. In contrast, the joint distribution contains 2^n^ numbers. To make this concrete, suppose we have n = 30 nodes, each with five parents (k = 5). Then the Bayesian
network requires 960 numbers, but the full joint distribution requires over a billion.
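The arithmetic behind these figures is easy to check:

```python
# n Boolean nodes, each with at most k parents: n * 2**k CPT numbers,
# versus 2**n entries for the full joint distribution.
n, k = 30, 5
print(n * 2**k)   # 960
print(2**n)       # 1073741824 -- over a billion
```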
There are domains in which each variable can be influenced directly by all the others, so that the network is fully connected. Then specifying the conditional probability tables requires the same
amount of information as specifying the joint distribution. In some domains, there will be slight dependencies that should strictly be included by adding a new link. But if these dependencies are
tenuous, then it may not be worth the additional complexity in the network for the small gain in accuracy. For example, one might object to our burglary network on the grounds that if there is an
earthquake, then John and Mary would not call even if they heard the alarm, because they assume that the earthquake is the cause. Whether to add the link from Earthquake to JohnCalls and MaryCalls
(and thus enlarge the tables) depends on comparing the importance of getting more accurate probabilities with the cost of specifying the extra information.
Even in a locally structured domain, we will get a compact Bayesian network only if we choose the node ordering well. What happens if we happen to choose the wrong order? Consider the burglary
example again. Suppose we decide to add the nodes in the order MaryCalls , JohnCalls , Alarm, Burglary , Earthquake . We then get the somewhat more complicated network shown in Figure 14.3(a). The
process goes as follows:
• Adding MaryCalls : No parents.
• Adding JohnCalls : If Mary calls, that probably means the alarm has gone off, which of course would make it more likely that John calls. Therefore, JohnCalls needs MaryCalls as a parent.
• Adding Alarm: Clearly, if both call, it is more likely that the alarm has gone off than if just one or neither calls, so we need both MaryCalls and JohnCalls as parents.
• Adding Burglary : If we know the alarm state, then the call from John or Mary might give us information about our phone ringing or Mary’s music, but not about burglary:
P(Burglary | Alarm, JohnCalls ,MaryCalls) = P(Burglary | Alarm) .
Hence we need just Alarm as parent.
• Adding Earthquake : If the alarm is on, it is more likely that there has been an earthquake. (The alarm is an earthquake detector of sorts.) But if we know that there has been a burglary, then
that explains the alarm, and the probability of an earthquake would be only slightly above normal. Hence, we need both Alarm and Burglary as parents.
The resulting network has two more links than the original network in Figure 14.2 and requires three more probabilities to be specified. What’s worse, some of the links represent tenuous
relationships that require difficult and unnatural probability judgments, such as assessing the probability of Earthquake , given Burglary and Alarm. This phenomenon is quite general and is related
to the distinction between causal and diagnostic models introduced in Section 13.5.1 (see also Exercise 8.13). If we try to build a diagnostic model with links from symptoms to causes (as from
MaryCalls to Alarm or Alarm to Burglary), we end up having to specify additional dependencies between otherwise independent causes (and often between separately occurring symptoms as well). If we
stick to a causal model, we end up having to specify fewer numbers, and the numbers will often be easier to come up with. In the domain of medicine, for example, it has been shown by Tversky and
Kahneman (1982) that expert physicians prefer to give probability judgments for causal rules rather than for diagnostic ones.
Figure 14.3(b) shows a very bad node ordering: MaryCalls , JohnCalls , Earthquake , Burglary , Alarm . This network requires 31 distinct probabilities to be specified—exactly the same number as the
full joint distribution. It is important to realize, however, that any of the three networks can represent exactly the same joint distribution. The last two versions simply fail to represent all the
conditional independence relationships and hence end up specifying a lot of unnecessary numbers instead.
Conditional independence relations in Bayesian networks
We have provided a “numerical” semantics for Bayesian networks in terms of the representation of the full joint distribution, as in Equation (14.2). Using this semantics to derive a method for
constructing Bayesian networks, we were led to the consequence that a node is conditionally independent of its other predecessors, given its parents. It turns out that we can also go in the other
direction. We can start from a “topological” semantics that specifies the conditional independence relationships encoded by the graph structure, and from this we can derive the “numerical” semantics.
The topological semantics^2^ specifies that each variable is conditionally independent of its non-descendants, given its parents. For example, in Figure 14.2, JohnCalls is independent of Burglary ,
Earthquake , and MaryCalls given the value of Alarm . The definition is illustrated in Figure 14.4(a). From these conditional independence assertions and the interpretation of the network parameters
θ(X~i~ |Parents(X~i~))
as specifications of conditional probabilities P(X~i~ |Parents(X~i~)), the full joint distribution given in Equation (14.2) can be reconstructed. In this sense, the “numerical” semantics and the
“topological” semantics are equivalent.
Another important independence property is implied by the topological semantics: a node is conditionally independent of all other nodes in the network, given its parents, children, and children’s
parents—that is, given its Markov blanket. (Exercise 14.7 asks you to prove this.) For example, Burglary is independent of JohnCalls and MaryCalls , given Alarm and Earthquake . This property is
illustrated in Figure 14.4(b).
2 There is also a general topological criterion called d-separation for deciding whether a set of nodes X is conditionally independent of another set Y, given a third set Z. The criterion is rather
complicated and is not needed for deriving the algorithms in this chapter, so we omit it. Details may be found in Pearl (1988) or Darwiche (2009). Shachter (1998) gives a more intuitive method of
ascertaining d-separation.
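To make the Markov blanket concrete, here is a small sketch (illustrative, not from the text) that computes it from a parents-list description of the graph, using the burglary network's topology:

```python
# Computing the Markov blanket of a node (parents, children, children's parents)
# from a parents-list representation of the burglary network's topology.

parents = {
    "Burglary": [], "Earthquake": [],
    "Alarm": ["Burglary", "Earthquake"],
    "JohnCalls": ["Alarm"], "MaryCalls": ["Alarm"],
}

def markov_blanket(x):
    children = [v for v, ps in parents.items() if x in ps]
    blanket = set(parents[x]) | set(children)
    for child in children:
        blanket |= set(parents[child])
    blanket.discard(x)
    return blanket

# Burglary is independent of JohnCalls and MaryCalls given its blanket:
print(markov_blanket("Burglary"))   # {'Alarm', 'Earthquake'}
```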
Even if the maximum number of parents k is smallish, filling in the CPT for a node requires up to O(2^k^) numbers and perhaps a great deal of experience with all the possible conditioning cases. In
fact, this is a worst-case scenario in which the relationship between the parents and the child is completely arbitrary. Usually, such relationships are describable by a canonical distribution that
fits some standard pattern. In such cases, the complete table can be specified by naming the pattern and perhaps supplying a few parameters—much easier than supplying an exponential number of numbers.
The simplest example is provided by deterministic nodes. A deterministic node has its value specified exactly by the values of its parents, with no uncertainty. The relationship can be a logical one:
for example, the relationship between the parent nodes Canadian , US , Mexican and the child node NorthAmerican is simply that the child is the disjunction of the parents. The relationship can also
be numerical: for example, if the parent nodes are the prices of a particular model of car at several dealers and the child node is the price that a bargain hunter ends up paying, then the child node
is the minimum of the parent values; or if the parent nodes are a lake’s inflows (rivers, runoff, precipitation) and outflows (rivers, evaporation, seepage) and the child is the change in the water
level of the lake, then the value of the child is the sum of the inflow parents minus the sum of the outflow parents.
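Such deterministic relationships can be written directly as functions; the numbers in the example calls below are made up purely for illustration.

```python
# Deterministic nodes: the child's value is an exact function of its parents.

def north_american(canadian, us, mexican):
    return canadian or us or mexican          # logical disjunction of the parents

def price_paid(dealer_prices):
    return min(dealer_prices)                 # the bargain hunter pays the minimum

def lake_level_change(inflows, outflows):
    return sum(inflows) - sum(outflows)       # net change in water level

print(north_american(False, True, False))     # True
print(price_paid([21000, 19500, 20200]))      # 19500
print(lake_level_change([3.0, 1.2], [2.5]))   # 1.7
```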
Uncertain relationships can often be characterized by so-called noisy logical relationships. The standard example is the noisy-OR relation, which is a generalization of the logical OR. In
propositional logic, we might say that Fever is true if and only if Cold , Flu , or Malaria is true. The noisy-OR model allows for uncertainty about the ability of each parent to cause the child to
be true—the causal relationship between parent and child may be inhibited, and so a patient could have a cold, but not exhibit a fever. The model makes two assumptions. First, it assumes that all the
possible causes are listed. (If some are missing, we can always add a so-called leak node that covers “miscellaneous causes.”) Second, it assumes that inhibition of each parent is independent of
inhibition of any other parents: for example, whatever inhibits Malaria from causing a fever is independent of whatever inhibits Flu from causing a fever. Given these assumptions, Fever is false if
and only if all its true parents are inhibited, and the probability of this is the product of the inhibition probabilities q for each parent. Let us suppose these individual inhibition probabilities
are as follows:
q~cold~ = P (¬fever | cold ,¬flu ,¬malaria) = 0.6 ,
q~flu~ = P (¬fever | ¬cold ,flu ,¬malaria) = 0.2 ,
q~malaria~ = P (¬fever | ¬cold ,¬flu,malaria) = 0.1 .
Then, from this information and the noisy-OR assumptions, the entire CPT can be built. The general rule is that
P (x~i~ | parents(X~i~)) = 1 − ∏~j~ q~j~ ,
where the product is taken over the parents that are set to true for that row of the CPT. The following table illustrates this calculation:
Cold Flu Malaria P (Fever) P (¬Fever)
F F F 0.0 1.0
F F T 0.9 0.1
F T F 0.8 0.2
F T T 0.98 0.02 = 0.2 × 0.1
T F F 0.4 0.6
T F T 0.94 0.06 = 0.6 × 0.1
T T F 0.88 0.12 = 0.6 × 0.2
T T T 0.988 0.012 = 0.6× 0.2× 0.1
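A short sketch (illustrative, not from the text) shows how the whole CPT follows from the three inhibition probabilities; running it reproduces the table above.

```python
# Generating the noisy-OR CPT for Fever from the inhibition probabilities q.
from itertools import product

q = {"Cold": 0.6, "Flu": 0.2, "Malaria": 0.1}

def p_fever(assignment):
    """P(fever | parents): 1 minus the product of q over the parents set to true."""
    inhibit = 1.0
    for parent, value in assignment.items():
        if value:
            inhibit *= q[parent]
    return 1.0 - inhibit

for cold, flu, malaria in product([False, True], repeat=3):
    row = {"Cold": cold, "Flu": flu, "Malaria": malaria}
    print(cold, flu, malaria, round(p_fever(row), 3))
# The last line printed is: True True True 0.988
```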
In general, noisy logical relationships in which a variable depends on k parents can be described using O(k) parameters instead of O(2^k^) for the full conditional probability table. This makes
assessment and learning much easier. For example, the CPCS network (Pradhan et al., 1994) uses noisy-OR and noisy-MAX distributions to model relationships among diseases and symptoms in internal
medicine. With 448 nodes and 906 links, it requires only 8,254 values instead of 133,931,430 for a network with full CPTs.
Bayesian nets with continuous variables
Many real-world problems involve continuous quantities, such as height, mass, temperature, and money; in fact, much of statistics deals with random variables whose domains are continuous. By
definition, continuous variables have an infinite number of possible values, so it is impossible to specify conditional probabilities explicitly for each value. One possible way to handle continuous
variables is to avoid them by using discretization—that is, dividing up the
possible values into a fixed set of intervals. For example, temperatures could be divided into (<0^o^C), (0^o^C−100^o^C), and (>100^o^C). Discretization is sometimes an adequate solution, but often
results in a considerable loss of accuracy and very large CPTs. The most common solution is to define standard families of probability density functions (see Appendix A) that are specified by a
finite number of parameters. For example, a Gaussian (or normal) distribution N(μ, σ^2^)(x) has the mean μ and the variance σ^2^ as parameters. Yet another solution—sometimes called a
nonparametric representation—is to define the conditional distribution implicitly with a collection of instances, each containing specific values of the parent and child variables. We explore this
approach further in Chapter 18.
A network with both discrete and continuous variables is called a hybrid Bayesian network. To specify a hybrid network, we have to specify two new kinds of distributions: the conditional distribution
for a continuous variable given discrete or continuous parents; and the conditional distribution for a discrete variable given continuous parents. Consider the simple example in Figure 14.5, in which
a customer buys some fruit depending on its cost, which depends in turn on the size of the harvest and whether the government’s subsidy scheme is operating. The variable Cost is continuous and has
continuous and discrete parents; the variable Buys is discrete and has a continuous parent.
For the Cost variable, we need to specify P(Cost |Harvest ,Subsidy). The discrete parent is handled by enumeration—that is, by specifying both P (Cost |Harvest , subsidy)
and P (Cost |Harvest ,¬subsidy). To handle Harvest , we specify how the distribution over the cost c depends on the continuous value h of Harvest . In other words, we specify the parameters of the
cost distribution as a function of h. The most common choice is the linear Gaussian distribution, in which the child has a Gaussian distribution whose mean μ varies linearly with the value of the
parent and whose standard deviation σ is fixed. We need two distributions, one for subsidy and one for ¬subsidy , with different parameters:
P (c | h, subsidy) = N(a~t~h + b~t~, σ~t~^2^)(c) ,
P (c | h, ¬subsidy) = N(a~f~h + b~f~, σ~f~^2^)(c) .
For this example, then, the conditional distribution for Cost is specified by naming the linear Gaussian distribution and providing the parameters a~t~, b~t~, σ~t~, a~f~, b~f~, and σ~f~. Figures 14.6(a) and (b) show these two relationships. Notice that in each case the slope is negative, because cost decreases as supply increases. (Of course, the assumption of linearity implies that the cost becomes
negative at some point; the linear model is reasonable only if the harvest size is limited to a narrow range.) Figure 14.6(c) shows the distribution P (c | h), averaging over the two possible values
of Subsidy and assuming that each has prior probability 0.5. This shows that even with very simple models, quite interesting distributions can be represented.
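The linear Gaussian model just described is easy to sketch in code. The slope, intercept, and width values below are made up for illustration (the text fixes only the functional form); the final function averages over Subsidy with equal priors, as in Figure 14.6(c).

```python
# A sketch of the linear Gaussian conditional distribution for Cost given
# Harvest (continuous) and Subsidy (discrete).  Parameter values are made up.
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

a_t, b_t, sigma_t = -0.5, 10.0, 0.5    # parameters used when subsidy is true
a_f, b_f, sigma_f = -0.5, 12.0, 1.0    # parameters used when subsidy is false

def p_cost(c, h, subsidy):
    """P(c | h, subsidy): a Gaussian whose mean varies linearly with h."""
    if subsidy:
        return gaussian_pdf(c, a_t * h + b_t, sigma_t)
    return gaussian_pdf(c, a_f * h + b_f, sigma_f)

def p_cost_marginal(c, h):
    """P(c | h), averaging over Subsidy with prior 0.5 each."""
    return 0.5 * p_cost(c, h, True) + 0.5 * p_cost(c, h, False)

print(round(p_cost(7.5, 5.0, True), 4))    # density at the conditional mean, ~0.7979
```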
The linear Gaussian conditional distribution has some special properties. A network containing only continuous variables with linear Gaussian distributions has a joint distribution that is a
multivariate Gaussian distribution (see Appendix A) over all the variables (Exercise 14.9). Furthermore, the posterior distribution given any evidence also has this property.^3^
When discrete variables are added as parents (not as children) of continuous variables, the network defines a conditional Gaussian, or CG, distribution: given any assignment to the discrete
variables, the distribution over the continuous variables is a multivariate Gaussian. Now we turn to the distributions for discrete variables with continuous parents. Consider, for example, the Buys
node in Figure 14.5. It seems reasonable to assume that the customer will buy if the cost is low and will not buy if it is high and that the probability of buying varies smoothly in some intermediate
region. In other words, the conditional distribution is like a “soft” threshold function. One way to make soft thresholds is to use the integral of the standard normal distribution:
Φ(x) = ∫~−∞~^x^ N(0, 1)(t) dt .
Then the probability of Buys given Cost might be
P (buys |Cost = c) = Φ((−c + μ)/σ) ,
which means that the cost threshold occurs around μ, the width of the threshold region is proportional to σ, and the probability of buying decreases as cost increases.
3 It follows that inference in linear Gaussian networks takes only O(n^3^) time in the worst case, regardless of the network topology. In Section 14.4, we see that inference for networks of discrete variables is NP-hard.
This probit distribution (pronounced “pro-bit” and short for “probability unit”) is illustrated in Figure 14.7(a).
The form can be justified by proposing that the underlying decision process has a hard threshold, but that the precise location of the threshold is subject to random Gaussian noise.
An alternative to the probit model is the logit distribution (pronounced “low-jit”). It uses the logistic function 1/(1 + e^−x^) to produce a soft threshold:
P (buys | Cost = c) = 1/(1 + exp(−2(−c + μ)/σ)) .
This is illustrated in Figure 14.7(b). The two distributions look similar, but the logit actually has much longer “tails.” The probit is often a better fit to real situations, but the logit is
sometimes easier to deal with mathematically. It is used widely in neural networks (Chapter 20). Both probit and logit can be generalized to handle multiple continuous parents by taking a linear
combination of the parent values.
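The two soft thresholds can be compared side by side. In the sketch below, μ and σ are made-up parameters, and the logit uses the scaled form given above.

```python
# Comparing the probit and logit soft thresholds for P(buys | Cost = c).
import math

mu, sigma = 6.0, 1.0      # made-up threshold location and width

def probit(c):
    z = (-c + mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # Phi(z), the normal CDF

def logit(c):
    return 1.0 / (1.0 + math.exp(-2.0 * (-c + mu) / sigma))

for c in [4.0, 6.0, 8.0]:
    print(c, round(probit(c), 3), round(logit(c), 3))
# Both curves are near 1 for cheap fruit, 0.5 at c = mu, and near 0 for
# expensive fruit; the logit falls off more slowly in the tails.
```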
The basic task for any probabilistic inference system is to compute the posterior probability distribution for a set of query variables, given some observed event—that is, some assignment of values
to a set of evidence variables. To simplify the presentation, we will consider only one query variable at a time; the algorithms can easily be extended to queries with multiple variables. We will use
the notation from Chapter 13: X denotes the query variable; E denotes the set of evidence variables E~1~, . . . , E~m~, and e is a particular observed event; Y denotes the nonevidence, nonquery
variables Y~1~, . . . , Y~l~ (called the hidden variables). Thus, the complete set of variables is X = {X}∪E∪Y. A typical query asks for the posterior probability distribution P(X | e).
In the burglary network, we might observe the event in which JohnCalls = true and MaryCalls = true . We could then ask for, say, the probability that a burglary has occurred:
P(Burglary | JohnCalls = true,MaryCalls = true) = 〈0.284, 0.716〉 .
In this section we discuss exact algorithms for computing posterior probabilities and will consider the complexity of this task. It turns out that the general case is intractable, so Section 14.5
covers methods for approximate inference.
Inference by enumeration
Chapter 13 explained that any conditional probability can be computed by summing terms from the full joint distribution. More specifically, a query P(X | e) can be answered using Equation (13.9),
which we repeat here for convenience:
P(X | e) = α P(X, e) = α ∑~y~ P(X, e, y) .
Now, a Bayesian network gives a complete representation of the full joint distribution. More specifically, Equation (14.2) on page 513 shows that the terms P (x, e, y) in the joint distribution can
be written as products of conditional probabilities from the network. Therefore, a query can be answered using a Bayesian network by computing sums of products of conditional probabilities from the network.
Consider the query P(Burglary | JohnCalls = true,MaryCalls = true). The hidden variables for this query are Earthquake and Alarm . From Equation (13.9), using initial letters for the variables to
shorten the expressions, we have^4^
P(B | j,m) = α P(B, j,m) = α ∑~e~ ∑~a~ P(B, e, a, j,m) .
The semantics of Bayesian networks (Equation (14.2)) then gives us an expression in terms of CPT entries. For simplicity, we do this just for Burglary = true:
P (b | j,m) = α ∑~e~ ∑~a~ P (b) P (e) P (a | b, e) P (j | a) P (m | a) .
To compute this expression, we have to add four terms, each computed by multiplying five numbers. In the worst case, where we have to sum out almost all the variables, the complexity of the algorithm
for a network with n Boolean variables is O(n 2^n^).
An improvement can be obtained from the following simple observations: the P (b) term is a constant and can be moved outside the summations over a and e, and the P (e) term can be moved outside the
summation over a. Hence, we have
P (b | j,m) = α P (b) ∑~e~ P (e) ∑~a~ P (a | b, e) P (j | a) P (m | a) . (14.4)
This expression can be evaluated by looping through the variables in order, multiplying CPT entries as we go. For each summation, we also need to loop over the variable’s possible
4 An expression such as ∑~e~ P (a, e) means to sum P (A = a, E = e) for all possible values of e. When E is Boolean, there is an ambiguity in that P (e) is used to mean both P (E = true) and P (E = e),
but it should be clear from context which is intended; in particular, in the context of a sum the latter is intended.
values. The structure of this computation is shown in Figure 14.8. Using the numbers from Figure 14.2, we obtain P (b | j,m) = α× 0.00059224. The corresponding computation for ¬b yields α× 0.0014919;
P(B | j,m) = α 〈0.00059224, 0.0014919〉 ≈ 〈0.284, 0.716〉 .
That is, the chance of a burglary, given calls from both neighbors, is about 28%. The evaluation process for the expression in Equation (14.4) is shown as an expression
tree in Figure 14.8. The ENUMERATION-ASK algorithm in Figure 14.9 evaluates such trees using depth-first recursion. The algorithm is very similar in structure to the backtracking algorithm for
solving CSPs (Figure 6.5) and the DPLL algorithm for satisfiability (Figure 7.17).
The space complexity of ENUMERATION-ASK is only linear in the number of variables: the algorithm sums over the full joint distribution without ever constructing it explicitly. Unfortunately, its time
complexity for a network with n Boolean variables is always O(2^n^)— better than the O(n 2^n^) for the simple approach described earlier, but still rather grim.
Note that the tree in Figure 14.8 makes explicit the repeated subexpressions evaluated by the algorithm. The products P (j | a)P (m | a) and P (j | ¬a)P (m | ¬a) are computed twice, once for each
value of e. The next section describes a general method that avoids such wasted computations.
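As a concrete illustration (not the book's pseudocode), the following brute-force Python version materializes Equation (14.2) for the burglary network, using the CPT values of Figure 14.2, and reproduces P(B | j,m) ≈ 〈0.284, 0.716〉. It is the O(n 2^n^) approach described at the start of this subsection, written out explicitly; ENUMERATION-ASK in Figure 14.9 computes the same sums by depth-first recursion in linear space.

```python
# Brute-force inference by enumeration for the burglary network,
# using the CPT values of Figure 14.2.
from itertools import product

parents = {"B": [], "E": [], "A": ["B", "E"], "J": ["A"], "M": ["A"]}
cpt = {
    "B": {(): 0.001}, "E": {(): 0.002},
    "A": {(True, True): 0.95, (True, False): 0.94,
          (False, True): 0.29, (False, False): 0.001},
    "J": {(True,): 0.90, (False,): 0.05},
    "M": {(True,): 0.70, (False,): 0.01},
}

def joint(world):
    """P(x1, ..., xn) as the product of CPT entries (Equation (14.2))."""
    p = 1.0
    for var, value in world.items():
        p_true = cpt[var][tuple(world[par] for par in parents[var])]
        p *= p_true if value else 1.0 - p_true
    return p

def enumeration_ask(query, evidence):
    hidden = [v for v in parents if v != query and v not in evidence]
    dist = {}
    for qval in (True, False):
        total = 0.0
        for values in product((True, False), repeat=len(hidden)):
            world = dict(evidence, **{query: qval}, **dict(zip(hidden, values)))
            total += joint(world)
        dist[qval] = total
    z = sum(dist.values())                      # normalization constant alpha
    return {v: p / z for v, p in dist.items()}

print(enumeration_ask("B", {"J": True, "M": True}))
# {True: 0.2841..., False: 0.7158...}
```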
The variable elimination algorithm
The enumeration algorithm can be improved substantially by eliminating repeated calculations of the kind illustrated in Figure 14.8. The idea is simple: do the calculation once and save the results
for later use. This is a form of dynamic programming. There are several versions of this approach; we present the variable elimination algorithm, which is the simplest.
Variable elimination works by evaluating expressions such as Equation (14.4) in right-to-left order (that is, bottom up in Figure 14.8). Intermediate results are stored, and summations over each
variable are done only for those portions of the expression that depend on the variable.
Let us illustrate this process for the burglary network. We evaluate the expression
P(B | j,m) = α P(B) ∑~e~ P(e) ∑~a~ P(a | B, e) P(j | a) P(m | a)
           = α f~1~(B) × ∑~e~ f~2~(E) × ∑~a~ f~3~(A,B,E) × f~4~(A) × f~5~(A) .
Notice that we have annotated each part of the expression with the name of the corresponding factor; each factor is a matrix indexed by the values of its argument variables. For example,FACTOR
the factors f~4~(A) and f~5~(A) corresponding to P (j | a) and P (m | a) depend just on A because J and M are fixed by the query. They are therefore two-element vectors:
f~4~(A) = 〈P (j | a), P (j | ¬a)〉 = 〈0.90, 0.05〉 ,  f~5~(A) = 〈P (m | a), P (m | ¬a)〉 = 〈0.70, 0.01〉 .
f~3~(A,B,E) will be a 2× 2× 2 matrix, which is hard to show on the printed page. (The “first” element is given by P (a | b, e)= 0.95 and the “last” by P (¬a | ¬b,¬e)= 0.999.) In terms of factors, the
query expression is written as
P(B | j,m) = α f~1~(B) × ∑~e~ f~2~(E) × ∑~a~ f~3~(A,B,E) × f~4~(A) × f~5~(A) ,
function ENUMERATION-ASK(X , e, bn) returns a distribution over X
  inputs: X , the query variable
          e, observed values for variables E
          bn , a Bayes net with variables {X} ∪ E ∪ Y /* Y = hidden variables */

  Q(X ) ← a distribution over X , initially empty
  for each value x~i~ of X do
      Q(x~i~) ← ENUMERATE-ALL(bn.VARS, ex~i~)
          where ex~i~ is e extended with X = x~i~
  return NORMALIZE(Q(X ))

function ENUMERATE-ALL(vars , e) returns a real number
  if EMPTY?(vars) then return 1.0
  Y ← FIRST(vars)
  if Y has value y in e
      then return P (y | parents(Y )) × ENUMERATE-ALL(REST(vars), e)
      else return ∑~y~ P (y | parents(Y )) × ENUMERATE-ALL(REST(vars), e~y~)
          where e~y~ is e extended with Y = y
Figure 14.9 The enumeration algorithm for answering queries on Bayesian networks.
where the “×” operator is not ordinary matrix multiplication but instead the pointwise product operation, to be described shortly.
The process of evaluation is a process of summing out variables (right to left) from pointwise products of factors to produce new factors, eventually yielding a factor that is the solution, i.e., the
posterior distribution over the query variable. The steps are as follows:
• First, we sum out A from the product of f~3~, f~4~, and f~5~. This gives us a new 2× 2 factor f~6~(B,E) whose indices range over just B and E:
f~6~(B,E) = ∑~a~ f~3~(A,B,E) × f~4~(A) × f~5~(A)
          = (f~3~(a,B,E) × f~4~(a) × f~5~(a)) + (f~3~(¬a,B,E) × f~4~(¬a) × f~5~(¬a)) .
• Next, we sum out E from the product of f~2~ and f~6~:
f~7~(B) = ∑~e~ f~2~(E) × f~6~(B,E) = f~2~(e) × f~6~(B, e) + f~2~(¬e) × f~6~(B,¬e) .
This leaves the expression
P(B | j,m) = α f~1~(B) × f~7~(B) ,
which can be evaluated by taking the pointwise product and normalizing the result.
Examining this sequence, we see that two basic computational operations are required: pointwise product of a pair of factors, and summing out a variable from a product of factors. The next section
describes each of these operations.
Operations on factors
The pointwise product of two factors f~1~ and f~2~ yields a new factor f whose variables are the union of the variables in f~1~ and f~2~ and whose elements are given by the product of the corresponding
elements in the two factors. Suppose the two factors have variables Y~1~, . . . , Y~k~ in common. Then we have
f(X~1~ . . . X~j~, Y~1~ . . . Y~k~, Z~1~ . . . Z~l~) = f~1~(X~1~ . . . X~j~, Y~1~ . . . Y~k~) f~2~(Y~1~ . . . Y~k~, Z~1~ . . . Z~l~) .
If all the variables are binary, then f~1~ and f~2~ have 2^j+k^ and 2^k+l^ entries, respectively, and the pointwise product has 2^j+k+l^ entries. For example, given two factors f~1~(A,B) and f~2~(B,C), the
pointwise product f~1~× f~2~ = f~3~(A,B,C) has 2^1+1+1^ = 8 entries, as illustrated in Figure 14.10. Notice that the factor resulting from a pointwise product can contain more variables than any of
the factors being multiplied and that the size of a factor is exponential in the number of variables. This is where both space and time complexity arise in the variable elimination algorithm.
A B   f~1~(A,B)          B C   f~2~(B,C)
T T   .3                 T T   .2
T F   .7                 T F   .8
F T   .9                 F T   .6
F F   .1                 F F   .4

A B C   f~3~(A,B,C)
T T T   .3 × .2 = .06
T T F   .3 × .8 = .24
T F T   .7 × .6 = .42
T F F   .7 × .4 = .28
F T T   .9 × .2 = .18
F T F   .9 × .8 = .72
F F T   .1 × .6 = .06
F F F   .1 × .4 = .04
Figure 14.10 Illustrating pointwise multiplication: f~1~(A, B)× f~2~(B, C) = f~3~(A, B, C).
Summing out a variable from a product of factors is done by adding up the submatrices formed by fixing the variable to each of its values in turn. For example, to sum out A from f~3~(A,B,C), we write
∑~a~ f~3~(A,B,C) = f~3~(a,B,C) + f~3~(¬a,B,C) = [[.06, .24], [.42, .28]] + [[.18, .72], [.06, .04]] = [[.24, .96], [.48, .32]] ,
where rows are indexed by the value of B and columns by the value of C.
The only trick is to notice that any factor that does not depend on the variable to be summed out can be moved outside the summation. For example, if we were to sum out E first in the burglary
network, the relevant part of the expression would be
∑~e~ f~2~(E) × f~3~(A,B,E) × f~4~(A) × f~5~(A) = f~4~(A) × f~5~(A) × ∑~e~ f~2~(E) × f~3~(A,B,E) .
Now the pointwise product inside the summation is computed, and the variable is summed out of the resulting matrix.
Notice that matrices are not multiplied until we need to sum out a variable from the accumulated product. At that point, we multiply just those matrices that include the variable to be summed out.
Given functions for pointwise product and summing out, the variable elimination algorithm itself can be written quite simply, as shown in Figure 14.11.
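For concreteness, here is a minimal Python sketch of the two operations (an illustrative representation, not the book's code): a factor is stored as a pair of a variable list and a table from value tuples to numbers, and the example reproduces Figure 14.10 and the summing-out of A shown above.

```python
# Pointwise product and summing out, on Boolean factors represented as
# (variables, table) pairs where table maps value tuples to numbers.
from itertools import product

def pointwise_product(f1, f2):
    vars1, t1 = f1
    vars2, t2 = f2
    out_vars = list(vars1) + [v for v in vars2 if v not in vars1]
    table = {}
    for values in product((True, False), repeat=len(out_vars)):
        row = dict(zip(out_vars, values))
        table[values] = (t1[tuple(row[v] for v in vars1)] *
                         t2[tuple(row[v] for v in vars2)])
    return (out_vars, table)

def sum_out(var, f):
    vars_, t = f
    out_vars = [v for v in vars_ if v != var]
    table = {}
    for values, p in t.items():
        row = dict(zip(vars_, values))
        key = tuple(row[v] for v in out_vars)
        table[key] = table.get(key, 0.0) + p
    return (out_vars, table)

# The example of Figure 14.10:
f1 = (["A", "B"], {(True, True): .3, (True, False): .7,
                   (False, True): .9, (False, False): .1})
f2 = (["B", "C"], {(True, True): .2, (True, False): .8,
                   (False, True): .6, (False, False): .4})
f3 = pointwise_product(f1, f2)
print(round(f3[1][(True, True, True)], 2))          # 0.06
print({k: round(v, 2) for k, v in sum_out("A", f3)[1].items()})
# {(True, True): 0.24, (True, False): 0.96, (False, True): 0.48, (False, False): 0.32}
```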
Variable ordering and variable relevance
The algorithm in Figure 14.11 includes an unspecified ORDER function to choose an ordering for the variables. Every choice of ordering yields a valid algorithm, but different orderings cause
different intermediate factors to be generated during the calculation. For example, in the calculation shown previously, we eliminated A before E; if we do it the other way, the calculation becomes
P(B | j, m) = α f~1~(B) × Σ~a~ f~4~(A) × f~5~(A) × Σ~e~ f~2~(E) × f~3~(A,B,E) ,
during which a new factor f~6~(A,B) will be generated. In general, the time and space requirements of variable elimination are dominated by the size of the largest factor constructed during the
operation of the algorithm. This in turn
function ELIMINATION-ASK(X, e, bn) returns a distribution over X
  inputs: X, the query variable
          e, observed values for variables E
          bn, a Bayesian network specifying joint distribution P(X~1~, . . . , X~n~)

  factors ← [ ]
  for each var in ORDER(bn.VARS) do
      factors ← [MAKE-FACTOR(var, e) | factors]
      if var is a hidden variable then factors ← SUM-OUT(var, factors)
  return NORMALIZE(POINTWISE-PRODUCT(factors))
Figure 14.11 The variable elimination algorithm for inference in Bayesian networks.
is determined by the order of elimination of variables and by the structure of the network. It turns out to be intractable to determine the optimal ordering, but several good heuristics are
available. One fairly effective method is a greedy one: eliminate whichever variable minimizes the size of the next factor to be constructed.
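The greedy heuristic is easy to sketch if we track only the scopes (variable sets) of the factors. The code below is an illustrative Python sketch, assuming all variables are binary; the demo uses the scopes of the burglary-network factors f~1~, . . . , f~5~ discussed earlier.

```python
def greedy_elimination_order(hidden_vars, factor_scopes):
    """Greedy ordering heuristic: repeatedly eliminate the variable whose
    elimination would produce the smallest new factor. factor_scopes is a
    list of sets of variable names (one per factor); all variables binary."""
    order = []
    scopes = [set(s) for s in factor_scopes]
    remaining = list(hidden_vars)
    while remaining:
        def new_factor_size(v):
            merged = set().union(*[s for s in scopes if v in s]) - {v}
            return 2 ** len(merged)
        best = min(remaining, key=new_factor_size)
        order.append(best)
        remaining.remove(best)
        # Replace the factors that mention 'best' by the factor produced
        # when it is summed out.
        merged = set().union(*[s for s in scopes if best in s]) - {best}
        scopes = [s for s in scopes if best not in s] + [merged]
    return order

# Scopes of f1(B), f2(E), f3(A,B,E), f4(A), f5(A); hidden variables A and E.
print(greedy_elimination_order(['A', 'E'],
      [{'B'}, {'E'}, {'A', 'B', 'E'}, {'A'}, {'A'}]))
# ['A', 'E'] -- either choice first builds a factor with 4 entries here
```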
Let us consider one more query: P(JohnCalls |Burglary = true). As usual, the first step is to write out the nested summation:
P(JohnCalls | Burglary = true) = α P(b) Σ~e~ P(e) Σ~a~ P(a | b, e) P(J | a) Σ~m~ P(m | a) .
Evaluating this expression from right to left, we notice something interesting: Σ~m~ P(m | a)
is equal to 1 by definition! Hence, there was no need to include it in the first place; the variable M is irrelevant to this query. Another way of saying this is that the result of the query P
(JohnCalls |Burglary = true) is unchanged if we remove MaryCalls from the network altogether. In general, we can remove any leaf node that is not a query variable or an evidence variable. After its
removal, there may be some more leaf nodes, and these too may be irrelevant. Continuing this process, we eventually find that every variable that is not an ancestor of a query variable or evidence
variable is irrelevant to the query. A variable elimination algorithm can therefore remove all these variables before evaluating the query.
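As a small illustration (not from the text), the relevant variables can be collected by walking up the parent links from the query and evidence variables; the parents dictionary below encodes the burglary network's structure as described in the chapter.

```python
def relevant_variables(query, evidence_vars, parents):
    """Keep the query and evidence variables together with all their
    ancestors; every other variable is irrelevant to the query."""
    relevant = set()
    frontier = [query] + list(evidence_vars)
    while frontier:
        v = frontier.pop()
        if v not in relevant:
            relevant.add(v)
            frontier.extend(parents.get(v, []))
    return relevant

# Parent sets for the burglary network described earlier in the chapter
parents = {'Burglary': [], 'Earthquake': [],
           'Alarm': ['Burglary', 'Earthquake'],
           'JohnCalls': ['Alarm'], 'MaryCalls': ['Alarm']}
print(relevant_variables('JohnCalls', ['Burglary'], parents))
# a set containing JohnCalls, Burglary, Alarm, Earthquake; MaryCalls is irrelevant
```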
The complexity of exact inference
The complexity of exact inference in Bayesian networks depends strongly on the structure of the network. The burglary network of Figure 14.2 belongs to the family of networks in which there is at
most one undirected path between any two nodes in the network. These are called singly connected networks or polytrees, and they have a particularly nice property: The time and space complexity of
exact inference in polytrees is linear in the size of the network. Here, the size is defined as the number of CPT entries; if the number of parents of each node is bounded by a constant, then the
complexity will also be linear in the number of nodes.
For multiply connected networks, such as that of Figure 14.12(a), variable elimination can have exponential time and space complexity in the worst case, even when the number of parents per node is
bounded. This is not surprising when one considers that because it
Figure 14.12 (a) A multiply connected network with conditional probability tables. (b) A clustered (polytree) equivalent of the same network.
includes inference in propositional logic as a special case, inference in Bayesian networks is NP-hard. In fact, it can be shown (Exercise 14.16) that the problem is as hard as that of computing the
number of satisfying assignments for a propositional logic formula. This means that it is #P-hard (“number-P hard”)—that is, strictly harder than NP-complete problems.
There is a close connection between the complexity of Bayesian network inference and the complexity of constraint satisfaction problems (CSPs). As we discussed in Chapter 6, the difficulty of solving
a discrete CSP is related to how “treelike” its constraint graph is. Measures such as tree width, which bound the complexity of solving a CSP, can also be applied directly to Bayesian networks.
Moreover, the variable elimination algorithm can be generalized to solve CSPs as well as Bayesian networks.
Clustering algorithms
The variable elimination algorithm is simple and efficient for answering individual queries. If we want to compute posterior probabilities for all the variables in a network, however, it can be less
efficient. For example, in a polytree network, one would need to issue O(n) queries costing O(n) each, for a total of O(n^2^) time. Using clustering algorithms (also known as join tree algorithms), the
time can be reduced to O(n). For this reason, these algorithms are widely used in commercial Bayesian network tools. The basic idea of clustering is to join individual nodes of the network to form
cluster nodes in such a way that the resulting network is a polytree. For example, the multiply connected network shown in Figure 14.12(a) can be converted into a polytree by combining the Sprinkler
and Rain nodes into a cluster node called Sprinkler+Rain, as shown in Figure 14.12(b). The two Boolean nodes are replaced by a “meganode” that takes on four possible values: tt, tf, ft, and ff. The
meganode has only one parent, the Boolean variable Cloudy , so there are two conditioning cases. Although this example doesn’t show it, the process of clustering often produces meganodes that share
some variables.
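To make the meganode concrete, the sketch below builds P(Sprinkler+Rain | Cloudy) as the product of the two original CPT columns, which is legitimate because Sprinkler and Rain share the single parent Cloudy. The Cloudy = false entries are not quoted in the text and are illustrative assumptions.

```python
from itertools import product

# P(Sprinkler = true | Cloudy) and P(Rain = true | Cloudy).
# The Cloudy = true entries appear later in the chapter's sampling examples;
# the Cloudy = false entries are illustrative assumptions.
p_sprinkler = {True: 0.1, False: 0.5}
p_rain = {True: 0.8, False: 0.2}

def meganode_cpt():
    """P(Sprinkler+Rain | Cloudy): four values (tt, tf, ft, ff) per parent
    value. Each entry is the product of the two original CPT entries."""
    cpt = {}
    for cloudy in (True, False):
        for s, r in product((True, False), repeat=2):
            ps = p_sprinkler[cloudy] if s else 1 - p_sprinkler[cloudy]
            pr = p_rain[cloudy] if r else 1 - p_rain[cloudy]
            cpt[(cloudy, (s, r))] = ps * pr
    return cpt

print(meganode_cpt()[(True, (False, True))])   # 0.9 * 0.8 = 0.72
```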
Once the network is in polytree form, a special-purpose inference algorithm is required, because ordinary inference methods cannot handle meganodes that share variables with each other. Essentially,
the algorithm is a form of constraint propagation (see Chapter 6) where the constraints ensure that neighboring meganodes agree on the posterior probability of any variables that they have in common.
With careful bookkeeping, this algorithm is able to compute posterior probabilities for all the nonevidence nodes in the network in time linear in the size of the clustered network. However, the
NP-hardness of the problem has not disappeared: if a network requires exponential time and space with variable elimination, then the CPTs in the clustered network will necessarily be exponentially large.
Approximate inference in Bayesian networks
Given the intractability of exact inference in large, multiply connected networks, it is essential to consider approximate inference methods. This section describes randomized sampling algorithms,
also called Monte Carlo algorithms, that provide approximate answers whose accuracy depends on the number of samples generated. Monte Carlo algorithms, of which simulated annealing (page 126) is an
example, are used in many branches of science to estimate quantities that are difficult to calculate exactly. In this section, we are interested in sampling applied to the computation of posterior
probabilities. We describe two families of algorithms: direct sampling and Markov chain sampling. Two other approaches—variational methods and loopy propagation—are mentioned in the notes at the end
of the chapter.
Direct sampling methods
The primitive element in any sampling algorithm is the generation of samples from a known probability distribution. For example, an unbiased coin can be thought of as a random variable Coin with
values 〈heads , tails〉 and a prior distribution P(Coin) = 〈0.5, 0.5〉. Sampling from this distribution is exactly like flipping the coin: with probability 0.5 it will return heads , and with
probability 0.5 it will return tails . Given a source of random numbers uniformly distributed in the range [0, 1], it is a simple matter to sample any distribution on a single variable, whether
discrete or continuous. (See Exercise 14.17.)
The simplest kind of random sampling process for Bayesian networks generates events from a network that has no evidence associated with it. The idea is to sample each variable in turn, in topological
order. The probability distribution from which the value is sampled is conditioned on the values already assigned to the variable’s parents. This algorithm is shown in Figure 14.13. We can illustrate
its operation on the network in Figure 14.12(a), assuming an ordering [Cloudy ,Sprinkler ,Rain ,WetGrass ]:
1. Sample from P(Cloudy) = 〈0.5, 0.5〉, value is true .
2. Sample from P(Sprinkler |Cloudy = true) = 〈0.1, 0.9〉, value is false .
3. Sample from P(Rain |Cloudy = true) = 〈0.8, 0.2〉, value is true .
4. Sample from P(WetGrass |Sprinkler = false,Rain = true) = 〈0.9, 0.1〉, value is true . In this case, PRIOR-SAMPLE returns the event [true, false , true, true ].
function PRIOR-SAMPLE(bn) returns an event sampled from the prior specified by bn
  inputs: bn, a Bayesian network specifying joint distribution P(X~1~, . . . , X~n~)

  x ← an event with n elements
  foreach variable X~i~ in X~1~, . . . , X~n~ do
      x[i] ← a random sample from P(X~i~ | parents(X~i~))
  return x
Figure 14.13 A sampling algorithm that generates events from a Bayesian network. Each variable is sampled according to the conditional distribution given the values already sampled for the variable's parents.
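A minimal Python rendering of PRIOR-SAMPLE for the sprinkler network follows. CPT entries that are not quoted in the text (the Cloudy = false columns and most WetGrass rows) are illustrative assumptions.

```python
import random

# Sprinkler network of Figure 14.12(a). Entries marked (*) are assumptions;
# the others appear in the walkthrough above.
P_CLOUDY = 0.5
P_SPRINKLER = {True: 0.1, False: 0.5}                   # given Cloudy; False case (*)
P_RAIN = {True: 0.8, False: 0.2}                        # given Cloudy; False case (*)
P_WETGRASS = {(True, True): 0.99, (True, False): 0.9,   # given (Sprinkler, Rain); (*)
              (False, True): 0.9, (False, False): 0.0}  # (False, True) = 0.9 is from the text

def prior_sample():
    """Sample each variable in topological order given its sampled parents."""
    c = random.random() < P_CLOUDY
    s = random.random() < P_SPRINKLER[c]
    r = random.random() < P_RAIN[c]
    w = random.random() < P_WETGRASS[(s, r)]
    return c, s, r, w

samples = [prior_sample() for _ in range(100000)]
# Consistent estimate of P(Rain = true): the fraction of samples with Rain = true
print(sum(r for _, _, r, _ in samples) / len(samples))
```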
It is easy to see that PRIOR-SAMPLE generates samples from the prior joint distribution specified by the network. First, let S~PS~(x~1~, . . . , x~n~) be the probability that a specific event is generated by the PRIOR-SAMPLE algorithm. Just looking at the sampling process, we have
S~PS~(x~1~, . . . , x~n~) = ∏~i~ P(x~i~ | parents(X~i~)) ,
because each sampling step depends only on the parent values. This expression should look familiar, because it is also the probability of the event according to the Bayesian net’s representation of
the joint distribution, as stated in Equation (14.2). That is, we have
S~PS~(x~1~, . . . , x~n~) = P(x~1~, . . . , x~n~) .
This simple fact makes it easy to answer questions by using samples. In any sampling algorithm, the answers are computed by counting the actual samples generated. Suppose there are N total samples,
and let N~PS~(x~1~, . . . , x~n~) be the number of times the specific event x~1~, . . . , x~n~ occurs in the set of samples. We expect this number, as a fraction of the total, to converge in the limit
to its expected value according to the sampling probability:
lim~N→∞~ N~PS~(x~1~, . . . , x~n~)/N = S~PS~(x~1~, . . . , x~n~) = P(x~1~, . . . , x~n~) .
For example, consider the event produced earlier: [true, false, true, true]. The sampling probability for this event is
S~PS~(true, false, true, true) = 0.5 × 0.9 × 0.8 × 0.9 = 0.324 .
Hence, in the limit of large N , we expect 32.4% of the samples to be of this event. Whenever we use an approximate equality (“≈”) in what follows, we mean it in exactly this sense—that the estimated
probability becomes exact in the large-sample limit. Such an estimate is called consistent. For example, one can produce a consistent estimate of the probability of any partially specified event
x~1~, . . . , x~m~, where m ≤ n, as follows:
P(x~1~, . . . , x~m~) ≈ N~PS~(x~1~, . . . , x~m~)/N . (14.6)
That is, the probability of the event can be estimated as the fraction of all complete events generated by the sampling process that match the partially specified event. For example, if we generate
1000 samples from the sprinkler network, and 511 of them have Rain = true , then the estimated probability of rain, written as P̂ (Rain = true), is 0.511.
Rejection sampling in Bayesian networks
Rejection sampling is a general method for producing samples from a hard-to-sample distribution given an easy-to-sample distribution. In its simplest form, it can be used to compute conditional probabilities—that is, to determine P(X | e). The REJECTION-SAMPLING algorithm is shown in
Figure 14.14. First, it generates samples from the prior distribution specified by the network. Then, it rejects all those that do not match the evidence. Finally, the estimate P̂ (X = x | e) is
obtained by counting how often X =x occurs in the remaining samples.
Let P̂(X | e) be the estimated distribution that the algorithm returns. From the definition of the algorithm, we have
P̂(X | e) = α N~PS~(X, e) = N~PS~(X, e)/N~PS~(e) ≈ P(X, e)/P(e) = P(X | e) .
That is, rejection sampling produces a consistent estimate of the true probability. Continuing with our example from Figure 14.12(a), let us assume that we wish to estimate P(Rain |Sprinkler = true),
using 100 samples. Of the 100 that we generate, suppose that 73 have Sprinkler = false and are rejected, while 27 have Sprinkler = true; of the 27, 8 have Rain = true and 19 have Rain = false .
P(Rain | Sprinkler = true) ≈ NORMALIZE(〈8, 19〉) = 〈0.296, 0.704〉 .
The true answer is 〈0.3, 0.7〉. As more samples are collected, the estimate will converge to the true answer. The standard deviation of the error in each probability will be proportional to 1/√ n,
where n is the number of samples used in the estimate.
The biggest problem with rejection sampling is that it rejects so many samples! The fraction of samples consistent with the evidence e drops exponentially as the number of evidence variables grows,
so the procedure is simply unusable for complex problems.
Notice that rejection sampling is very similar to the estimation of conditional probabilities directly from the real world. For example, to estimate P(Rain |RedSkyAtNight = true), one can simply
count how often it rains after a red sky is observed the previous evening— ignoring those evenings when the sky is not red. (Here, the world itself plays the role of the sample-generation algorithm.)
Obviously, this could take a long time if the sky is very seldom red, and that is the weakness of rejection sampling.
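A self-contained sketch of rejection sampling for the query P(Rain | Sprinkler = true) is shown below, using the same illustrative CPT values as the prior-sampling sketch. WetGrass is never sampled because it is a non-evidence leaf and therefore irrelevant to this query.

```python
import random

# Illustrative sprinkler CPTs (the Cloudy = false entries are assumptions).
P_CLOUDY = 0.5
P_SPRINKLER = {True: 0.1, False: 0.5}
P_RAIN = {True: 0.8, False: 0.2}

def rejection_sample_rain_given_sprinkler(n):
    """Estimate P(Rain | Sprinkler = true) by discarding samples in which
    Sprinkler = false."""
    counts = {True: 0, False: 0}
    for _ in range(n):
        c = random.random() < P_CLOUDY
        s = random.random() < P_SPRINKLER[c]
        if not s:                      # inconsistent with the evidence: reject
            continue
        r = random.random() < P_RAIN[c]
        counts[r] += 1
    total = counts[True] + counts[False]
    return {v: counts[v] / total for v in counts}

print(rejection_sample_rain_given_sprinkler(100000))   # roughly {True: 0.3, False: 0.7}
```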
Likelihood weighting
Likelihood weighting avoids the inefficiency of rejection sampling by generating only events that are consistent with the evidence e. It is a particular instance of the general statistical technique
of importance sampling, tailored for inference in Bayesian networks. We begin by
function REJECTION-SAMPLING(X, e, bn, N) returns an estimate of P(X | e)
  inputs: X, the query variable
          e, observed values for variables E
          bn, a Bayesian network
          N, the total number of samples to be generated
  local variables: N, a vector of counts for each value of X, initially zero

  for j = 1 to N do
      x ← PRIOR-SAMPLE(bn)
      if x is consistent with e then
          N[x] ← N[x] + 1 where x is the value of X in x
  return NORMALIZE(N)
Figure 14.14 The rejection-sampling algorithm for answering queries given evidence in a Bayesian network.
describing how the algorithm works; then we show that it works correctly—that is, generates consistent probability estimates.
LIKELIHOOD-WEIGHTING (see Figure 14.15) fixes the values for the evidence variables E and samples only the nonevidence variables. This guarantees that each event generated is consistent with the
evidence. Not all events are equal, however. Before tallying the counts in the distribution for the query variable, each event is weighted by the likelihood that the event accords to the evidence, as
measured by the product of the conditional probabilities for each evidence variable, given its parents. Intuitively, events in which the actual evidence appears unlikely should be given less weight.
Let us apply the algorithm to the network shown in Figure 14.12(a), with the query P(Rain |Cloudy = true,WetGrass = true) and the ordering Cloudy, Sprinkler, Rain, WetGrass. (Any topological ordering
will do.) The process goes as follows: First, the weight w
is set to 1.0. Then an event is generated:
1. Cloudy is an evidence variable with value true . Therefore, we set w ← w×P (Cloudy = true) = 0.5 .
2. Sprinkler is not an evidence variable, so sample from P(Sprinkler |Cloudy = true) = 〈0.1, 0.9〉; suppose this returns false .
3. Similarly, sample from P(Rain |Cloudy = true) = 〈0.8, 0.2〉; suppose this returns true .
4. WetGrass is an evidence variable with value true . Therefore, we set w ← w×P (WetGrass = true |Sprinkler = false ,Rain = true) = 0.45 .
Here WEIGHTED-SAMPLE returns the event [true, false , true, true] with weight 0.45, and this is tallied under Rain = true .
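The walkthrough above translates into a few lines of Python. The sketch below estimates P(Rain | Cloudy = true, WetGrass = true); it is illustrative only, and CPT entries not quoted in the text are assumptions.

```python
import random

# Illustrative sprinkler CPTs (entries not quoted in the text are assumptions).
P_CLOUDY = 0.5
P_SPRINKLER = {True: 0.1, False: 0.5}
P_RAIN = {True: 0.8, False: 0.2}
P_WETGRASS = {(True, True): 0.99, (True, False): 0.9,
              (False, True): 0.9, (False, False): 0.0}

def weighted_sample():
    """One weighted sample for the evidence Cloudy = true, WetGrass = true:
    evidence variables are fixed and multiply the weight; the other
    variables are sampled given their parents."""
    w = 1.0
    c = True
    w *= P_CLOUDY                              # P(Cloudy = true)
    s = random.random() < P_SPRINKLER[c]       # sample Sprinkler | Cloudy
    r = random.random() < P_RAIN[c]            # sample Rain | Cloudy
    w *= P_WETGRASS[(s, r)]                    # P(WetGrass = true | Sprinkler, Rain)
    return r, w                                # (s, r) = (False, True) gives w = 0.45, as above

def likelihood_weighting(n):
    totals = {True: 0.0, False: 0.0}
    for _ in range(n):
        r, w = weighted_sample()
        totals[r] += w
    z = totals[True] + totals[False]
    return {v: totals[v] / z for v in totals}

print(likelihood_weighting(100000))   # estimate of P(Rain | Cloudy = true, WetGrass = true)
```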
To understand why likelihood weighting works, we start by examining the sampling probability S~WS~ for WEIGHTED-SAMPLE. Remember that the evidence variables E are fixed
function LIKELIHOOD-WEIGHTING(X, e, bn, N) returns an estimate of P(X | e)
  inputs: X, the query variable
          e, observed values for variables E
          bn, a Bayesian network specifying joint distribution P(X~1~, . . . , X~n~)
          N, the total number of samples to be generated
  local variables: W, a vector of weighted counts for each value of X, initially zero

  for j = 1 to N do
      x, w ← WEIGHTED-SAMPLE(bn, e)
      W[x] ← W[x] + w where x is the value of X in x
  return NORMALIZE(W)

function WEIGHTED-SAMPLE(bn, e) returns an event and a weight

  w ← 1; x ← an event with n elements initialized from e
  foreach variable X~i~ in X~1~, . . . , X~n~ do
      if X~i~ is an evidence variable with value x~i~ in e
          then w ← w × P(X~i~ = x~i~ | parents(X~i~))
          else x[i] ← a random sample from P(X~i~ | parents(X~i~))
  return x, w
Figure 14.15 The likelihood-weighting algorithm for inference in Bayesian networks. In WEIGHTED-SAMPLE, each nonevidence variable is sampled according to the conditional distribution given the values
already sampled for the variable’s parents, while a weight is accumulated based on the likelihood for each evidence variable.
with values e. We call the nonevidence variables Z (including the query variable X). The algorithm samples each variable in Z given its parent values:
S~WS~(z, e) = ∏~i~ P(z~i~ | parents(Z~i~)) .
Notice that Parents(Z~i~) can include both nonevidence variables and evidence variables. Unlike the prior distribution P(z), the distribution S~WS~ pays some attention to the evidence: the sampled values for each Z~i~ will be influenced by evidence among Z~i~'s ancestors. For example, when sampling Sprinkler the algorithm pays attention to the evidence Cloudy = true in its parent variable. On the other hand, S~WS~ pays less attention to the evidence than does the true posterior distribution P(z | e), because the sampled values for each Z~i~ ignore evidence among Z~i~'s non-ancestors.^5^ For example, when sampling Sprinkler and Rain the algorithm ignores the evidence in the child variable WetGrass = true; this means it will generate many samples with Sprinkler = false and Rain = false despite the fact that the evidence actually rules out this case.
5 Ideally, we would like to use a sampling distribution equal to the true posterior P(z | e), to take all the evidence into account. This cannot be done efficiently, however. If it could, then we could approximate the desired probability to arbitrary accuracy with a polynomial number of samples. It can be shown that no such polynomial-time approximation scheme can exist.
The likelihood weight w makes up for the difference between the actual and desired sampling distributions. The weight for a given sample x, composed from z and e, is the product of the likelihoods
for each evidence variable given its parents (some or all of which may be among the Z~i~s):
w(z, e) = ∏~i~ P(e~i~ | parents(E~i~)) .
Multiplying the sampling distribution by the weight gives
S~WS~(z, e) w(z, e) = ∏~i~ P(z~i~ | parents(Z~i~)) ∏~i~ P(e~i~ | parents(E~i~)) = P(z, e) ,
because the two products cover all the variables in the network, allowing us to use Equation (14.2) for the joint distribution. Summing the weights of the samples in which X = x therefore estimates (up to normalization) Σ~y~ P(x, y, e) = P(x, e), and hence the posterior P(x | e).
Hence, likelihood weighting returns consistent estimates. Because likelihood weighting uses all the samples generated, it can be much more efficient than rejection sampling. It will, however, suffer
a degradation in performance as the number of evidence variables increases. This is because most samples will have very low weights and hence the weighted estimate will be dominated by the tiny
fraction of samples that accord more than an infinitesimal likelihood to the evidence. The problem is exacerbated if the evidence variables occur late in the variable ordering, because then the
nonevidence variables will have no evidence in their parents and ancestors to guide the generation of samples. This means the samples will be simulations that bear little resemblance to the reality
suggested by the evidence.
Inference by Markov chain simulation
Markov chain Monte Carlo (MCMC) algorithms work quite differently from rejection sampling and likelihood weighting. Instead of generating each sample from scratch, MCMC algorithms generate each
sample by making a random change to the preceding sample. It is therefore helpful to think of an MCMC algorithm as being in a particular current state specifying a value for every variable and
generating a next state by making random changes to the current state. (If this reminds you of simulated annealing from Chapter 4 or WALKSAT from Chapter 7, that is because both are members of the
MCMC family.) Here we describe a particular form of MCMC called Gibbs sampling, which is especially well suited for Bayesian networks. (Other forms, some of them significantly more powerful, are
discussed in the notes at the end of the chapter.) We will first describe what the algorithm does, then we will explain why it works.
Gibbs sampling in Bayesian networks
The Gibbs sampling algorithm for Bayesian networks starts with an arbitrary state (with the evidence variables fixed at their observed values) and generates a next state by randomly sampling a value
for one of the nonevidence variables Xi. The sampling for Xi is done conditioned on the current values of the variables in the Markov blanket of Xi. (Recall from page 517 that the Markov blanket of a
variable consists of its parents, children, and children’s parents.) The algorithm therefore wanders randomly around the state space—the space of possible complete assignments—flipping one variable
at a time, but keeping the evidence variables fixed.
Consider the query P(Rain | Sprinkler = true,WetGrass = true) applied to the network in Figure 14.12(a). The evidence variables Sprinkler and WetGrass are fixed to their observed values and the
nonevidence variables Cloudy and Rain are initialized randomly— let us say to true and false respectively. Thus, the initial state is [true, true , false , true]. Now the nonevidence variables are
sampled repeatedly in an arbitrary order. For example:
1. Cloudy is sampled, given the current values of its Markov blanket variables: in this case, we sample from P(Cloudy | Sprinkler = true,Rain = false). (Shortly, we will show how to calculate this
distribution.) Suppose the result is Cloudy = false . Then the new current state is [false , true, false , true].
2. Rain is sampled, given the current values of its Markov blanket variables: in this case, we sample from P(Rain |Cloudy = false ,Sprinkler = true,WetGrass = true). Suppose this yields Rain = true
. The new current state is [false, true , true, true ].
Each state visited during this process is a sample that contributes to the estimate for the query variable Rain . If the process visits 20 states where Rain is true and 60 states where Rain is false,
then the answer to the query is NORMALIZE(〈20, 60〉) = 〈0.25, 0.75〉. The complete algorithm is shown in Figure 14.16.
Why Gibbs sampling works
We will now show that Gibbs sampling returns consistent estimates for posterior probabilities. The material in this section is quite technical, but the basic claim is straightforward: the sampling
process settles into a “dynamic equilibrium” in which the long-run fraction of time spent in each state is exactly proportional to its posterior probability. This remarkable property follows from the
specific transition probability with which the process moves from one state to another, as defined by the conditional distribution given the Markov blanket of the variable being sampled.
function GIBBS-ASK(X, e, bn, N) returns an estimate of P(X | e)
  local variables: N, a vector of counts for each value of X, initially zero
                   Z, the nonevidence variables in bn
                   x, the current state of the network, initially copied from e

  initialize x with random values for the variables in Z
  for j = 1 to N do
      for each Z~i~ in Z do
          set the value of Z~i~ in x by sampling from P(Z~i~ | mb(Z~i~))
          N[x] ← N[x] + 1 where x is the value of X in x
  return NORMALIZE(N)
Figure 14.16 The Gibbs sampling algorithm for approximate inference in Bayesian networks; this version cycles through the variables, but choosing variables at random also works.
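A self-contained sketch of Gibbs sampling for P(Rain | Sprinkler = true, WetGrass = true) follows. Each Markov-blanket conditional is obtained by multiplying the relevant CPT entries and normalizing; CPT entries not quoted in the text are illustrative assumptions.

```python
import random

# Illustrative sprinkler CPTs (entries not quoted in the text are assumptions).
P_CLOUDY = 0.5
P_SPRINKLER = {True: 0.1, False: 0.5}
P_RAIN = {True: 0.8, False: 0.2}
P_WETGRASS = {(True, True): 0.99, (True, False): 0.9,
              (False, True): 0.9, (False, False): 0.0}

def bernoulli(p_true):
    return random.random() < p_true

def sample_cloudy(rain):
    """P(Cloudy | mb) is proportional to P(Cloudy) P(Sprinkler=true | Cloudy) P(Rain=rain | Cloudy)."""
    weight = {}
    for c in (True, False):
        pr = P_RAIN[c] if rain else 1 - P_RAIN[c]
        weight[c] = (P_CLOUDY if c else 1 - P_CLOUDY) * P_SPRINKLER[c] * pr
    return bernoulli(weight[True] / (weight[True] + weight[False]))

def sample_rain(cloudy):
    """P(Rain | mb) is proportional to P(Rain | Cloudy) P(WetGrass=true | Sprinkler=true, Rain)."""
    weight = {}
    for r in (True, False):
        pr = P_RAIN[cloudy] if r else 1 - P_RAIN[cloudy]
        weight[r] = pr * P_WETGRASS[(True, r)]
    return bernoulli(weight[True] / (weight[True] + weight[False]))

def gibbs_ask_rain(n):
    cloudy, rain = bernoulli(0.5), bernoulli(0.5)   # arbitrary initial state
    counts = {True: 0, False: 0}
    for _ in range(n):
        cloudy = sample_cloudy(rain)
        counts[rain] += 1
        rain = sample_rain(cloudy)
        counts[rain] += 1
    return {v: counts[v] / (2 * n) for v in counts}

print(gibbs_ask_rain(100000))   # estimate of P(Rain | Sprinkler = true, WetGrass = true)
```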
Let q(x → x′) be the probability that the process makes a transition from state x to state x′. This transition probability defines what is called a Markov chain on the state space. (Markov chains also figure prominently in Chapters 15 and 17.) Now suppose that we run the Markov chain for t steps, and let π~t~(x) be the probability that the system is in state x at time t. Similarly, let π~t+1~(x′) be the probability of being in state x′ at time t + 1. Given π~t~(x), we can calculate π~t+1~(x′) by summing, for all states the system could be in at time t, the probability of being in that state times the probability of making the transition to x′:
π~t+1~(x′) = Σ~x~ π~t~(x) q(x → x′) .
We say that the chain has reached its stationary distribution if π~t~ = π~t+1~. Calling this stationary distribution π, its defining equation is therefore
π(x′) = Σ~x~ π(x) q(x → x′)   for all x′ . (14.10)
Provided the transition probability distribution q is ergodic—that is, every state is reachable from every other and there are no strictly periodic cycles—there is exactly one distribution π
satisfying this equation for any given q. Equation (14.10) can be read as saying that the expected “outflow” from each state (i.e., its current “population”) is equal to the expected “inflow” from
all the states. One obvious way to satisfy this relationship is if the expected flow between any pair of states is the same in both directions; that is,
π(x) q(x → x′) = π(x′) q(x′ → x)   for all x, x′ .
This property is called detailed balance. Detailed balance implies stationarity, as can be seen simply by summing over x:
Σ~x~ π(x) q(x → x′) = Σ~x~ π(x′) q(x′ → x) = π(x′) Σ~x~ q(x′ → x) = π(x′) ,
where the last step follows because a transition from x′ is guaranteed to occur. The transition probability q(x → x′) defined by the sampling step in GIBBS-ASK is actually a special case of the more
general definition of Gibbs sampling, according to which each variable is sampled conditionally on the current values of all the other variables. We start by showing that this general definition of
Gibbs sampling satisfies the detailed balance equation with a stationary distribution equal to P(x | e) (the true posterior distribution on the nonevidence variables). Then, we simply observe that,
for Bayesian networks, sampling conditionally on all variables is equivalent to sampling conditionally on the variable’s Markov blanket (see page 517).
To analyze the general Gibbs sampler, which samples each X~i~ in turn with a transition probability q~i~ that conditions on all the other variables, we define X̄~i~ to be these other variables (except the evidence variables); their values in the current state are x̄~i~. If we sample a new value x′~i~ for X~i~ conditioned on all the other variables, including the evidence, we have
q~i~(x → x′) = q~i~((x~i~, x̄~i~) → (x′~i~, x̄~i~)) = P(x′~i~ | x̄~i~, e) .
This transition probability is in detailed balance with the desired posterior π(x) = P(x | e):
π(x) q~i~(x → x′) = P(x | e) P(x′~i~ | x̄~i~, e) = P(x~i~ | x̄~i~, e) P(x̄~i~ | e) P(x′~i~ | x̄~i~, e) = P(x~i~ | x̄~i~, e) P(x′~i~, x̄~i~ | e) = π(x′) q~i~(x′ → x) .
We can think of the loop “for each Z~i~ in Z do” in Figure 14.16 as defining one large transition probability q that is the sequential composition q1 ◦ q2 ◦ · · · ◦ qn of the transition probabilities
for the individual variables. It is easy to show (Exercise 14.19) that if each of qi and qj has π as its stationary distribution, then the sequential composition qi ◦ qj does too; hence the
transition probability q for the whole loop has P (x | e) as its stationary distribution. Finally, unless the CPTs contain probabilities of 0 or 1—which can cause the state space to become
disconnected—it is easy to see that q is ergodic. Hence, the samples generated by Gibbs sampling will eventually be drawn from the true posterior distribution.
The final step is to show how to perform the general Gibbs sampling step—sampling X~i~ from P(X~i~ | x̄~i~, e)—in a Bayesian network. Recall from page 517 that a variable is independent of all other variables given its Markov blanket; hence,
P(x′~i~ | x̄~i~, e) = P(x′~i~ | mb(X~i~)) ,
where mb(X~i~) denotes the values of the variables in X~i~'s Markov blanket. This conditional is proportional to the probability of the variable given its parents times the probability of each child given its respective parents:
P(x′~i~ | mb(X~i~)) = α P(x′~i~ | parents(X~i~)) × ∏ P(y~j~ | parents(Y~j~)) ,
where the product ranges over the children Y~j~ of X~i~.
Hence, to flip each variable X~i~ conditioned on its Markov blanket, the number of multiplications required is equal to the number of X~i~'s children.
Relational and first-order probability models
In Chapter 8, we explained the representational advantages possessed by first-order logic in comparison to propositional logic. First-order logic commits to the existence of objects and relations
among them and can express facts about some or all of the objects in a domain. This often results in representations that are vastly more concise than the equivalent propositional descriptions. Now,
Bayesian networks are essentially propositional: the set of random variables is fixed and finite, and each has a fixed domain of possible values. This fact limits the applicability of Bayesian
networks. _If we can find a way to combine probability theory with the expressive power of first-order representations, we expect to be able to increase dramatically the range of problems that can be handled._
For example, suppose that an online book retailer would like to provide overall evaluations of products based on recommendations received from its customers. The evaluation will take the form of a
posterior distribution over the quality of the book, given the available evidence. The simplest solution is to base the evaluation on the average recommendation, perhaps with a variance determined by
the number of recommendations, but this fails to take into account the fact that some customers are kinder than others and some are less honest than others. Kind customers tend to give high
recommendations even to fairly mediocre books, while dishonest customers give very high or very low recommendations for reasons other than quality—for example, they might work for a publisher.^6^
For a single customer C~1~, recommending a single book B~1~, the Bayes net might look like the one shown in Figure 14.17(a). (Just as in Section 9.1, expressions with parentheses such as Honest(C~1~)
are just fancy symbols—in this case, fancy names for random variables.)
6 A game theorist would advise a dishonest customer to avoid detection by occasionally recommending a good book from a competitor. See Chapter 17.
With two customers and two books, the Bayes net looks like the one in Figure 14.17(b). For larger numbers of books and customers, it becomes completely impractical to specify the network by hand.
Fortunately, the network has a lot of repeated structure. Each Recommendation(c, b)
variable has as its parents the variables Honest(c), Kindness(c), and Quality(b). Moreover, the CPTs for all the Recommendation (c, b) variables are identical, as are those for all the Honest(c)
variables, and so on. The situation seems tailor-made for a first-order language. We would like to say something like
Recommendation (c, b) ∼ RecCPT (Honest(c),Kindness(c),Quality(b))
with the intended meaning that a customer’s recommendation for a book depends on the customer’s honesty and kindness and the book’s quality according to some fixed CPT. This section develops a
language that lets us say exactly this, and a lot more besides.
Possible worlds
Recall from Chapter 13 that a probability model defines a set Ω of possible worlds with a probability P (ω) for each world ω. For Bayesian networks, the possible worlds are assignments of values to
variables; for the Boolean case in particular, the possible worlds are identical to those of propositional logic. For a first-order probability model, then, it seems we need the possible worlds to be
those of first-order logic—that is, a set of objects with relations among them and an interpretation that maps constant symbols to objects, predicate symbols to relations, and function symbols to
functions on those objects. (See Section 8.2.) The model also needs to define a probability for each such possible world, just as a Bayesian network defines a probability for each assignment of
values to variables.
Let us suppose, for a moment, that we have figured out how to do this. Then, as usual (see page 485), we can obtain the probability of any first-order logical sentence φ as a sum over the possible
worlds where it is true:
P(φ) = Σ~ω : φ is true in ω~ P(ω) . (14.13)
Conditional probabilities P (φ | e) can be obtained similarly, so we can, in principle, ask any question we want of our model—e.g., “Which books are most likely to be recommended highly by dishonest
customers?”—and get an answer. So far, so good.
There is, however, a problem: the set of first-order models is infinite. We saw this explicitly in Figure 8.4 on page 293, which we show again in Figure 14.18 (top). This means that (1) the summation
in Equation (14.13) could be infeasible, and (2) specifying a complete, consistent distribution over an infinite set of worlds could be very difficult.
Section 14.6.2 explores one approach to dealing with this problem. The idea is to borrow not from the standard semantics of first-order logic but from the database semantics defined in Section 8.2.8
(page 299). The database semantics makes the unique names assumption—here, we adopt it for the constant symbols. It also assumes domain closure— there are no more objects than those that are named.
We can then guarantee a finite set of possible worlds by making the set of objects in each world be exactly the set of constant
Figure 14.18 Top: some members of the set of all possible worlds under the standard semantics of first-order logic. Bottom: the possible worlds under database semantics, where the mapping from constant symbols to objects is fixed and there is a distinct object for each constant symbol.
symbols that are used; as shown in Figure 14.18 (bottom), there is no uncertainty about the mapping from symbols to objects or about the objects that exist. We will call models defined in this way
relational probability models, or RPMs.7 The most significant difference between the semantics of RPMs and the database semantics introduced in Section 8.2.8 is that RPMs do not make the closed-world
assumption—obviously, assuming that every unknown fact is false doesn’t make sense in a probabilistic reasoning system!
When the underlying assumptions of database semantics fail to hold, RPMs won’t work well. For example, a book retailer might use an ISBN (International Standard Book Number) as a constant symbol to
name each book, even though a given “logical” book (e.g., “Gone With the Wind”) may have several ISBNs. It would make sense to aggregate recommendations across multiple ISBNs, but the retailer may
not know for sure which ISBNs are really the same book. (Note that we are not reifying the individual copies of the book, which might be necessary for used-book sales, car sales, and so on.) Worse
still, each customer is identified by a login ID, but a dishonest customer may have thousands of IDs! In the computer security field, these multiple IDs are called sibyls and their use to confound a
reputation system is called a sibyl attack. Thus, even a simple application in a relatively well-defined online domain involves both existence uncertainty (what are the real books and customers
underlying the observed data) and identity uncertainty (which symbols really refer to the same object). We need to bite the bullet and define probability models based on the standard semantics of
first-order logic, for which the possible worlds vary in the objects they contain and in the mappings from symbols to objects. Section 14.6.3 shows how to do this.
7 The name relational probability model was given by Pfeffer (2000) to a slightly different representation, but the underlying ideas are the same.
Relational probability models
Like first-order logic, RPMs have constant, function, and predicate symbols. (It turns out to be easier to view predicates as functions that return true or false .) We will also assume a type
signature for each function, that is, a specification of the type of each argument and the function’s value. If the type of each object is known, many spurious possible worlds are eliminated by this
mechanism. For the book-recommendation domain, the types are Customer
and Book , and the type signatures for the functions and predicates are as follows:
Honest : Customer → {true, false}
Kindness : Customer → {1, 2, 3, 4, 5}
Quality : Book → {1, 2, 3, 4, 5}
Recommendation : Customer × Book → {1, 2, 3, 4, 5}
The constant symbols will be whatever customer and book names appear in the retailer’s data set. In the example given earlier (Figure 14.17(b)), these were C~1~, C~2~ and B~1~, B~2~.
Given the constants and their types, together with the functions and their type signatures, the random variables of the RPM are obtained by instantiating each function with each possible combination
of objects: Honest(C~1~), Quality(B~2~), Recommendation(C~1~, B~2~), and so on. These are exactly the variables appearing in Figure 14.17(b). Because each type has only finitely many instances, the
number of basic random variables is also finite.
To complete the RPM, we have to write the dependencies that govern these random variables. There is one dependency statement for each function, where each argument of the function is a logical
variable (i.e., a variable that ranges over objects, as in first-order logic):
Honest(c) ∼ 〈0.99, 0.01〉
Kindness(c) ∼ 〈0.1, 0.1, 0.2, 0.3, 0.3〉
Quality(b) ∼ 〈0.05, 0.2, 0.4, 0.2, 0.15〉
Recommendation (c, b) ∼ RecCPT (Honest(c),Kindness(c),Quality(b))
where RecCPT is a separately defined conditional distribution with 2× 5× 5= 50 rows, each with 5 entries. The semantics of the RPM can be obtained by instantiating these dependencies for all known
constants, giving a Bayesian network (as in Figure 14.17(b)) that defines a joint distribution over the RPM’s random variables.8
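A minimal sketch of this unrolling step is shown below. It uses plain strings for the ground variables and samples only the root priors (RecCPT itself is not spelled out in the text); the representation is an assumption made for illustration, not the book's system.

```python
import random

# Prior dependency statements from the text (kindness/quality values are 1..5).
HONEST_PRIOR = [0.99, 0.01]                    # P(Honest(c) = true/false)
KINDNESS_PRIOR = [0.1, 0.1, 0.2, 0.3, 0.3]     # P(Kindness(c) = 1..5)
QUALITY_PRIOR = [0.05, 0.2, 0.4, 0.2, 0.15]    # P(Quality(b) = 1..5)

customers = ['C1', 'C2']
books = ['B1', 'B2']

def ground_variables():
    """Instantiate each function with each possible combination of objects,
    yielding the basic random variables of the unrolled network."""
    variables = []
    for c in customers:
        variables += [f'Honest({c})', f'Kindness({c})']
    for b in books:
        variables.append(f'Quality({b})')
    for c in customers:
        for b in books:
            variables.append(f'Recommendation({c},{b})')
    return variables

def sample_priors():
    """Sample the root variables from their priors (recommendations are not
    sampled here because RecCPT is not given in the text)."""
    world = {}
    for c in customers:
        world[f'Honest({c})'] = random.random() < HONEST_PRIOR[0]
        world[f'Kindness({c})'] = random.choices([1, 2, 3, 4, 5], KINDNESS_PRIOR)[0]
    for b in books:
        world[f'Quality({b})'] = random.choices([1, 2, 3, 4, 5], QUALITY_PRIOR)[0]
    return world

print(ground_variables())
print(sample_priors())
```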
We can refine the model by introducing a context-specific independence to reflect the fact that dishonest customers ignore quality when giving a recommendation; moreover, kindness plays no role in
their decisions. A context-specific independence allows a variable to be independent of some of its parents given certain values of others; thus, Recommendation(c, b) is independent of Kindness(c)
and Quality(b) when Honest(c)= false :
Recommendation (c, b) ∼ if Honest(c) then
HonestRecCPT (Kindness(c),Quality(b))
else 〈0.4, 0.1, 0.0, 0.1, 0.4〉 .
8 Some technical conditions must be observed to guarantee that the RPM defines a proper distribution. First, the dependencies must be acyclic, otherwise the resulting Bayesian network will have
cycles and will not define a proper distribution. Second, the dependencies must be well-founded, that is, there can be no infinite ancestor chains, such as might arise from recursive dependencies.
Under some circumstances (see Exercise 14.6), a fixed-point calculation yields a well-defined probability model for a recursive RPM.
Figure 14.17 (a) Bayes net for a single customer C~1~ recommending a single book B~1~. (b) Bayes net with two customers and two books.
This kind of dependency may look like an ordinary if–then–else statement in a programming language, but there is a key difference: the inference engine doesn't necessarily know the value of the
conditional test!
We can elaborate this model in endless ways to make it more realistic. For example, suppose that an honest customer who is a fan of a book’s author always gives the book a 5, regardless of quality:
Recommendation (c, b) ∼ if Honest(c) then
if Fan(c,Author (b)) then Exactly(5)
else HonestRecCPT (Kindness(c),Quality(b))
else 〈0.4, 0.1, 0.0, 0.1, 0.4〉
Again, the conditional test Fan(c,Author (b)) is unknown, but if a customer gives only 5s to a particular author’s books and is not otherwise especially kind, then the posterior probability that the
customer is a fan of that author will be high. Furthermore, the posterior distribution will tend to discount the customer’s 5s in evaluating the quality of that author’s books.
In the preceding example, we implicitly assumed that the value of Author(b) is known for every b, but this may not be the case. How can the system reason about whether, say, C~1~ is a fan of Author
(B~2~) when Author(B~2~) is unknown? The answer is that the system may have to reason about all possible authors. Suppose (to keep things simple) that there are just two authors, A~1~ and A~2~. Then
Author(B~2~) is a random variable with two possible values, A~1~ and A~2~, and it is a parent of Recommendation(C~1~, B~2~). The variables Fan(C~1~, A~1~) and Fan(C~1~, A~2~) are parents too. The
conditional distribution for Recommendation(C~1~, B~2~) is then essentially a multiplexer in which the Author(B~2~) parent acts as a selector to choose which of Fan(C~1~, A~1~) and Fan(C~1~, A~2~)
actually gets to influence the recommendation. A fragment of the equivalent Bayes net is shown in Figure 14.19. Uncertainty in the value of Author(B~2~), which affects the dependency structure of the
network, is an instance of relational uncertainty.
In case you are wondering how the system can possibly work out who the author of B~2~ is: consider the possibility that three other customers are fans of A~1~ (and have no other favorite authors in
common) and all three have given B~2~ a 5, even though most other customers find it quite dismal. In that case, it is extremely likely that A~1~ is the author of B~2~.
The emergence of sophisticated reasoning like this from an RPM model of just a few lines is an intriguing example of how probabilistic influences spread through the web of interconnections among
objects in the model. As more dependencies and more objects are added, the picture conveyed by the posterior distribution often becomes clearer and clearer.
The next question is how to do inference in RPMs. One approach is to collect the evidence and query and the constant symbols therein, construct the equivalent Bayes net, and apply any of the
inference methods discussed in this chapter. This technique is called unrolling. The obvious drawback is that the resulting Bayes net may be very large. Furthermore, if there are many candidate
objects for an unknown relation or function—for example, the unknown author of B~2~—then some variables in the network may have many parents.
Fortunately, much can be done to improve on generic inference algorithms. First, the presence of repeated substructure in the unrolled Bayes net means that many of the factors constructed during
variable elimination (and similar kinds of tables constructed by clustering algorithms) will be identical; effective caching schemes have yielded speedups of three orders of magnitude for large
networks. Second, inference methods developed to take advantage of context-specific independence in Bayes nets find many applications in RPMs. Third, MCMC inference algorithms have some interesting
properties when applied to RPMs with relational uncertainty. MCMC works by sampling complete possible worlds, so in each state the relational structure is completely known. In the example given
earlier, each MCMC state would specify the value of Author(B~2~), and so the other potential authors are no longer parents of the recommendation nodes for B~2~. For MCMC, then, relational uncertainty
causes no increase in network complexity; instead, the MCMC process includes transitions that change the relational structure, and hence the dependency structure, of the unrolled network.
All of the methods just described assume that the RPM has to be partially or completely unrolled into a Bayesian network. This is exactly analogous to the method of propositionalization for
first-order logical inference. (See page 322.) Resolution theorem-provers and logic programming systems avoid propositionalizing by instantiating the logical variables only as needed to make the
inference go through; that is, they lift the inference process above the level of ground propositional sentences and make each lifted step do the work of many ground steps. The same idea can be applied in
probabilistic inference. For example, in the variable elimination algorithm, a lifted factor can represent an entire set of ground factors that assign probabilities to random variables in the RPM,
where those random variables differ only in the constant symbols used to construct them. The details of this method are beyond the scope of this book, but references are given at the end of the chapter.
Open-universe probability models
We argued earlier that database semantics was appropriate for situations in which we know exactly the set of relevant objects that exist and can identify them unambiguously. (In particular, all
observations about an object are correctly associated with the constant symbol that names it.) In many real-world settings, however, these assumptions are simply untenable. We gave the examples of
multiple ISBNs and sibyl attacks in the book-recommendation domain (to which we will return in a moment), but the phenomenon is far more pervasive:
• A vision system doesn’t know what exists, if anything, around the next corner, and may not know if the object it sees now is the same one it saw a few minutes ago.
• A text-understanding system does not know in advance the entities that will be featured in a text, and must reason about whether phrases such as “Mary,” “Dr. Smith,” “she,” “his cardiologist,”
“his mother,” and so on refer to the same object.
• An intelligence analyst hunting for spies never knows how many spies there really are and can only guess whether various pseudonyms, phone numbers, and sightings belong to the same individual.
In fact, a major part of human cognition seems to require learning what objects exist and being able to connect observations—which almost never come with unique IDs attached—to hypothesized objects
in the world.
For these reasons, we need to be able to write so-called open-universe models or OUPMs based on the standard semantics of first-order logic, as illustrated at the top of Figure 14.18. A language for
OUPMs provides a way of writing such models easily while guaranteeing a unique, consistent probability distribution over the infinite space of possible worlds.
The basic idea is to understand how ordinary Bayesian networks and RPMs manage to define a unique probability model and to transfer that insight to the first-order setting. In essence, a Bayes net
generates each possible world, event by event, in the topological order defined by the network structure, where each event is an assignment of a value to a variable. An RPM extends this to entire
sets of events, defined by the possible instantiations of the logical variables in a given predicate or function. OUPMs go further by allowing generative steps that add objects to the possible world
under construction, where the number and type of objects may depend on the objects that are already in that world. That is, the event being generated is not the assignment of a value to a variable,
but the very existence of objects.
One way to do this in OUPMs is to add statements that define conditional distributions over the numbers of objects of various kinds. For example, in the book-recommendation domain, we might want to
distinguish between customers (real people) and their login IDs. Suppose we expect somewhere between 100 and 10,000 distinct customers (whom we cannot observe directly). We can express this as a
prior log-normal distribution9 as follows:
#Customer ∼ LogNormal(6.9, 2.3^2^) .
We expect honest customers to have just one ID, whereas dishonest customers might have anywhere between 10 and 1000 IDs:
#LoginID(Owner = c) ∼ if Honest(c) then Exactly(1)
else LogNormal(6.9, 2.3^2^) .
This statement defines the number of login IDs for a given owner, who is a customer. The Owner function is called an origin function because it says where each generated object
came from. In the formal semantics of BLOG (as distinct from first-order logic), the domain elements in each possible world are actually generation histories (e.g., “the fourth login ID of the
seventh customer”) rather than simple tokens.
9 A distribution LogNormal(μ, σ^2^) is equivalent to a distribution N(μ, σ^2^) over log~e~(x).
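The number statements can be simulated directly with Python's log-normal sampler, as sketched below; treating the sampled value as an integer by rounding (and flooring at 1) is an illustrative choice, and the 0.99 honesty prior is taken from the earlier RPM statements.

```python
import random

def sample_customer_and_id_counts():
    """Sample #Customer ~ LogNormal(6.9, 2.3^2), then for each customer sample
    Honest(c) and #LoginID(Owner = c) as in the number statements above."""
    n_customers = max(1, round(random.lognormvariate(6.9, 2.3)))   # sigma = 2.3
    total_ids = 0
    for _ in range(n_customers):
        honest = random.random() < 0.99
        if honest:
            total_ids += 1                                         # Exactly(1)
        else:
            total_ids += max(1, round(random.lognormvariate(6.9, 2.3)))
    return n_customers, total_ids

print(sample_customer_and_id_counts())
```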
Subject to technical conditions of acyclicity and well-foundedness similar to those for RPMs, open-universe models of this kind define a unique distribution over possible worlds. Furthermore, there
exist inference algorithms such that, for every such well-defined model and every first-order query, the answer returned approaches the true posterior arbitrarily closely in the limit. There are some
tricky issues involved in designing these algorithms. For example, an MCMC algorithm cannot sample directly in the space of possible worlds when the size of those worlds is unbounded; instead, it
samples finite, partial worlds, relying on the fact that only finitely many objects can be relevant to the query in distinct ways. Moreover, transitions must allow for merging two objects into one or
splitting one into two. (Details are given in the references at the end of the chapter.) Despite these complications, the basic principle established in Equation (14.13) still holds: the probability
of any sentence is well defined and can be calculated.
Research in this area is still at an early stage, but already it is becoming clear that first-order probabilistic reasoning yields a tremendous increase in the effectiveness of AI systems at handling
uncertain information. Potential applications include those mentioned above— computer vision, text understanding, and intelligence analysis—as well as many other kinds of sensor interpretation.
Other approaches to uncertain reasoning
Other sciences (e.g., physics, genetics, and economics) have long favored probability as a model for uncertainty. In 1819, Pierre Laplace said, “Probability theory is nothing but common sense reduced
to calculation.” In 1850, James Maxwell said, “The true logic for this world is the calculus of Probabilities, which takes account of the magnitude of the probability which is, or ought to be, in a
reasonable man’s mind.”
Given this long tradition, it is perhaps surprising that AI has considered many alternatives to probability. The earliest expert systems of the 1970s ignored uncertainty and used strict logical
reasoning, but it soon became clear that this was impractical for most real-world domains. The next generation of expert systems (especially in medical domains) used probabilistic techniques. Initial
results were promising, but they did not scale up because of the exponential number of probabilities required in the full joint distribution. (Efficient Bayesian network algorithms were unknown
then.) As a result, probabilistic approaches fell out of favor from roughly 1975 to 1988, and a variety of alternatives to probability were tried for a variety of reasons:
• One common view is that probability theory is essentially numerical, whereas human judgmental reasoning is more “qualitative.” Certainly, we are not consciously aware of doing numerical
calculations of degrees of belief. (Neither are we aware of doing unification, yet we seem to be capable of some kind of logical reasoning.) It might be that we have some kind of numerical
degrees of belief encoded directly in strengths of connections and activations in our neurons. In that case, the difficulty of conscious access to those strengths is not surprising. One should
also note that qualitative reasoning mechanisms can be built directly on top of probability theory, so the “no numbers” argument against probability has little force. Nonetheless, some
qualitative schemes have a good deal of appeal in their own right. One of the best studied is default reasoning, which treats conclusions not as “believed to a certain degree,” but as “believed
until a better reason is found to believe something else.” Default reasoning is covered in Chapter 12.
• Rule-based approaches to uncertainty have also been tried. Such approaches hope to build on the success of logical rule-based systems, but add a sort of “fudge factor” to each rule to accommodate
uncertainty. These methods were developed in the mid-1970s and formed the basis for a large number of expert systems in medicine and other areas.
• One area that we have not addressed so far is the question of ignorance, as opposed to uncertainty. Consider the flipping of a coin. If we know that the coin is fair, then a probability of 0.5
for heads is reasonable. If we know that the coin is biased, but we do not know which way, then 0.5 for heads is again reasonable. Obviously, the two cases are different, yet the outcome
probability seems not to distinguish them. The Dempster–Shafer theory uses interval-valued degrees of belief to represent an agent’s knowledge of the probability of a proposition.
• Probability makes the same ontological commitment as logic: that propositions are true or false in the world, even if the agent is uncertain as to which is the case. Researchers in fuzzy logic
have proposed an ontology that allows vagueness: that a proposition can be “sort of” true. Vagueness and uncertainty are in fact orthogonal issues.
The next three subsections treat some of these approaches in slightly more depth. We will not provide detailed technical material, but we cite references for further study.
Rule-based methods for uncertain reasoning
Rule-based systems emerged from early work on practical and intuitive systems for logical inference. Logical systems in general, and logical rule-based systems in particular, have three desirable properties:
• Locality: In logical systems, whenever we have a rule of the form A ⇒ B, we can conclude B, given evidence A, without worrying about any other rules. In probabilistic systems, we need to consider
all the evidence.
• Detachment: Once a logical proof is found for a proposition B, the proposition can be used regardless of how it was derived. That is, it can be detached from its justification. In dealing with
probabilities, on the other hand, the source of the evidence for a belief is important for subsequent reasoning.
• Truth-functionality: In logic, the truth of complex sentences can be computed from the truth of the components. Probability combination does not work this way, except under strong global
independence assumptions.
There have been several attempts to devise uncertain reasoning schemes that retain these advantages. The idea is to attach degrees of belief to propositions and rules and to devise purely local
schemes for combining and propagating those degrees of belief. The schemes are also truth-functional; for example, the degree of belief in A∨B is a function of the belief in A and the belief in B.
The bad news for rule-based systems is that the properties of locality, detachment, and truth-functionality are simply not appropriate for uncertain reasoning. Let us look at truth-functionality
first. Let H~1~ be the event that a fair coin flip comes up heads, let T~1~ be the event that the coin comes up tails on that same flip, and let H~2~ be the event that the coin comes up heads on a
second flip. Clearly, all three events have the same probability, 0.5, and so a truth-functional system must assign the same belief to the disjunction of any two of them. But we can see that the
probability of the disjunction depends on the events themselves and not just on their probabilities:
| P(A) | P(B) | P(A ∨ B) |
|---|---|---|
| P(H~1~) = 0.5 | P(H~1~) = 0.5 | P(H~1~ ∨ H~1~) = 0.50 |
| P(H~1~) = 0.5 | P(T~1~) = 0.5 | P(H~1~ ∨ T~1~) = 1.00 |
| P(H~1~) = 0.5 | P(H~2~) = 0.5 | P(H~1~ ∨ H~2~) = 0.75 |
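A short enumeration over the four equally likely outcomes of two fair flips confirms the table: every disjunct has probability 0.5, yet the three disjunctions differ.

```python
from itertools import product

# Each world assigns (flip1, flip2); all four worlds are equally likely.
worlds = list(product(['heads', 'tails'], repeat=2))
prob = lambda event: sum(1 for w in worlds if event(w)) / len(worlds)

print(prob(lambda w: w[0] == 'heads' or w[0] == 'heads'))   # H1 v H1 = 0.5
print(prob(lambda w: w[0] == 'heads' or w[0] == 'tails'))   # H1 v T1 = 1.0
print(prob(lambda w: w[0] == 'heads' or w[1] == 'heads'))   # H1 v H2 = 0.75
```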
It gets worse when we chain evidence together. Truth-functional systems have rules of the form A → B that allow us to compute the belief in B as a function of the belief in the rule and the belief in A. Both forward- and backward-chaining systems can be devised. The belief in the rule is assumed to be constant and is usually specified by the knowledge engineer—for example, as A →~0.9~ B.
Consider the wet-grass situation from Figure 14.12(a) (page 529). If we wanted to be able to do both causal and diagnostic reasoning, we would need the two rules
Rain → WetGrass and WetGrass → Rain .
These two rules form a feedback loop: evidence for Rain increases the belief in WetGrass , which in turn increases the belief in Rain even more. Clearly, uncertain reasoning systems need to keep
track of the paths along which evidence is propagated.
Intercausal reasoning (or explaining away) is also tricky. Consider what happens when we have the two rules
Sprinkler → WetGrass and WetGrass → Rain .
Suppose we see that the sprinkler is on. Chaining forward through our rules, this increases the belief that the grass will be wet, which in turn increases the belief that it is raining. But this is
ridiculous: the fact that the sprinkler is on explains away the wet grass and should reduce the belief in rain. A truth-functional system acts as if it also believes Sprinkler → Rain .
Given these difficulties, how can truth-functional systems be made useful in practice? The answer lies in restricting the task and in carefully engineering the rule base so that undesirable
interactions do not occur. The most famous example of a truth-functional system for uncertain reasoning is the certainty factors model, which was developed for the MYCIN medical diagnosis program and was
widely used in expert systems of the late 1970s and 1980s. Almost all uses of certainty factors involved rule sets that were either purely diagnostic (as in MYCIN) or purely causal. Furthermore,
evidence was entered only at the “roots” of the rule set, and most rule sets were singly connected. Heckerman (1986) has shown that, under these circumstances, a minor variation on certainty-factor
inference was exactly equivalent to Bayesian inference on polytrees. In other circumstances, certainty factors could yield disastrously incorrect degrees of belief through overcounting of evidence.
As rule sets became larger, undesirable interactions between rules became more common, and practitioners found that the certainty factors of many other rules had to be “tweaked” when new rules were
added. For these reasons, Bayesian networks have largely supplanted rule-based methods for uncertain reasoning.
Representing ignorance: Dempster–Shafer theory
The Dempster–Shafer theory is designed to deal with the distinction between uncertainty and ignorance. Rather than computing the probability of a proposition, it computes the probability that the
evidence supports the proposition. This measure of belief is called a belief function, written Bel(X).
We return to coin flipping for an example of belief functions. Suppose you pick a coin from a magician’s pocket. Given that the coin might or might not be fair, what belief should you ascribe to the
event that it comes up heads? Dempster–Shafer theory says that because you have no evidence either way, you have to say that the belief Bel(Heads) = 0 and also that Bel(¬Heads) = 0. This makes
Dempster–Shafer reasoning systems skeptical in a way that has some intuitive appeal. Now suppose you have an expert at your disposal who testifies with 90% certainty that the coin is fair (i.e., he
is 90% sure that P (Heads) = 0.5). Then Dempster–Shafer theory gives Bel(Heads) = 0.9 × 0.5 = 0.45 and likewise Bel(¬Heads) = 0.45. There is still a 10 percentage point “gap” that is not accounted
for by the evidence.
The mathematical underpinnings of Dempster–Shafer theory have a similar flavor to those of probability theory; the main difference is that, instead of assigning probabilities to possible worlds, the
theory assigns masses to sets of possible worlds, that is, to events.
The masses still must add to 1 over all possible events. Bel(A) is defined to be the sum of masses for all events that are subsets of (i.e., that entail) A, including A itself. With this definition,
Bel(A) and Bel(¬A) sum to at most 1, and the gap—the interval between Bel(A) and 1− Bel(¬A)—is often interpreted as bounding the probability of A. As with default reasoning, there is a problem in
connecting beliefs to actions. Whenever there is a gap in the beliefs, then a decision problem can be defined such that a Dempster– Shafer system is unable to make a decision. In fact, the notion of
utility in the Dempster– Shafer model is not yet well understood because the meanings of masses and beliefs themselves have yet to be understood. Pearl (1988) has argued that Bel(A) should be
interpreted not as a degree of belief in A but as the probability assigned to all the possible worlds (now interpreted as logical theories) in which A is provable. While there are cases in which this
quantity might be of interest, it is not the same as the probability that A is true.
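The arithmetic of the coin example can be made explicit with a small sketch. The mass assignment below is one natural encoding of the expert's testimony (an assumption consistent with the example, not taken from the text): 0.45 on each outcome and 0.10 left on the whole frame; Bel then sums the masses of the sets that entail the event.

```python
# One possible Dempster-Shafer mass assignment for the magician's-coin example:
# 90% of the mass follows the "fair coin" testimony (split evenly between the
# two outcomes) and the remaining 10% stays on the whole frame of discernment,
# representing ignorance. Illustrative sketch, not code from the text.
HEADS, TAILS = "heads", "tails"
mass = {
    frozenset([HEADS]): 0.45,
    frozenset([TAILS]): 0.45,
    frozenset([HEADS, TAILS]): 0.10,   # mass left unassigned by the evidence
}

def bel(event):
    """Bel(A): total mass of all sets that are subsets of (entail) A."""
    return sum(m for s, m in mass.items() if s <= event)

print(bel(frozenset([HEADS])))   # Bel(Heads) = 0.45
print(bel(frozenset([TAILS])))   # Bel(not Heads) = 0.45
# The interval [Bel(A), 1 - Bel(not A)] = [0.45, 0.55] leaves a 0.10 gap.
print(1 - bel(frozenset([TAILS])) - bel(frozenset([HEADS])))
```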
A Bayesian analysis of the coin-flipping example would suggest that no new formalism is necessary to handle such cases. The model would have two variables: the Bias of the coin (a number between 0
and 1, where 0 is a coin that always shows tails and 1 a coin that always shows heads) and the outcome of the next Flip. The prior probability distribution for Bias would reflect our beliefs based on
the source of the coin (the magician’s pocket): some small probability that it is fair and some probability that it is heavily biased toward heads or tails. The conditional distribution P(Flip |Bias)
simply defines how the bias operates. If P(Bias) is symmetric about 0.5, then our prior probability for the flip is
P(Flip = heads) = ∫~0~^1^ P(Bias = x) P(Flip = heads | Bias = x) dx = 0.5 .
This is the same prediction as if we believe strongly that the coin is fair, but that does not mean that probability theory treats the two situations identically. The difference arises after the
flips in computing the posterior distribution for Bias . If the coin came from a bank, then seeing it come up heads three times running would have almost no effect on our strong prior belief in its
fairness; but if the coin comes from the magician’s pocket, the same evidence will lead to a stronger posterior belief that the coin is biased toward heads. Thus, a Bayesian approach expresses our
“ignorance” in terms of how our beliefs would change in the face of future information gathering.
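A hedged sketch of this analysis by grid approximation follows; the two priors (a bank coin sharply peaked at 0.5 and a magician's coin spread over [0, 1]), the grid size, and the data are illustrative assumptions. Both priors predict heads with probability 0.5 before any flips, but after three heads the posteriors diverge in just the way described.

```python
# Grid approximation of the Bayesian coin-bias analysis (illustrative priors).
N = 1001
grid = [i / (N - 1) for i in range(N)]   # possible values of Bias

def normalize(ws):
    z = sum(ws)
    return [w / z for w in ws]

# Prior for a coin from a bank: sharply peaked at 0.5.
# Prior for a coin from the magician's pocket: spread over [0, 1].
bank_prior = normalize([1.0 if abs(x - 0.5) < 0.01 else 1e-6 for x in grid])
magician_prior = normalize([1.0 for x in grid])

def posterior(prior, heads, tails):
    """P(Bias | data) on the grid, via Bayes' rule."""
    return normalize([p * (x ** heads) * ((1 - x) ** tails)
                      for p, x in zip(prior, grid)])

def p_next_head(dist):
    """P(next Flip = heads): expected value of Bias under the distribution."""
    return sum(p * x for p, x in zip(dist, grid))

for name, prior in [("bank", bank_prior), ("magician", magician_prior)]:
    post = posterior(prior, heads=3, tails=0)
    print(name, round(p_next_head(prior), 3), round(p_next_head(post), 3))
```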
Representing vagueness: Fuzzy sets and fuzzy logic
Fuzzy set theory is a means of specifying how well an object satisfies a vague description.
For example, consider the proposition “Nate is tall.” Is this true if Nate is 5′ 10′′? Most people would hesitate to answer “true” or “false,” preferring to say, “sort of.” Note that this is not a
question of uncertainty about the external world—we are sure of Nate’s height. The issue is that the linguistic term “tall” does not refer to a sharp demarcation of objects into two classes—there are
degrees of tallness. For this reason, fuzzy set theory is not a method for uncertain reasoning at all. Rather, fuzzy set theory treats Tall as a fuzzy predicate and says that the truth value of Tall
(Nate) is a number between 0 and 1, rather than being just true or false . The name “fuzzy set” derives from the interpretation of the predicate as implicitly defining a set of its members—a set that
does not have sharp boundaries.
Fuzzy logic is a method for reasoning with logical expressions describing membership in fuzzy sets. For example, the complex sentence Tall(Nate) ∧ Heavy(Nate) has a fuzzy truth value that is a
function of the truth values of its components. The standard rules for evaluating the fuzzy truth, T , of a complex sentence are
T(A ∧ B) = min(T(A), T(B)),   T(A ∨ B) = max(T(A), T(B)),   T(¬A) = 1 − T(A) .
Fuzzy logic is therefore a truth-functional system—a fact that causes serious difficulties. For example, suppose that T (Tall(Nate))= 0.6 and T (Heavy(Nate))= 0.4. Then we have T (Tall(Nate) ∧ Heavy
(Nate))= 0.4, which seems reasonable, but we also get the result T (Tall(Nate) ∧ ¬Tall(Nate))= 0.4, which does not. Clearly, the problem arises from the inability of a truth-functional approach to
take into account the correlations or anticorrelations among the component propositions.
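A three-line sketch of these operators, using the 0.6 and 0.4 values from the example, shows the problem directly:

```python
# The standard fuzzy operators, and the anticorrelation problem they create
# (a minimal sketch of the rules stated above).
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1 - a

tall, heavy = 0.6, 0.4    # T(Tall(Nate)), T(Heavy(Nate))

print(f_and(tall, heavy))         # T(Tall and Heavy) = 0.4, reasonable
print(f_and(tall, f_not(tall)))   # T(Tall and not Tall) = 0.4, should be 0
print(f_or(tall, f_not(tall)))    # T(Tall or not Tall) = 0.6, should be 1
```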
Fuzzy control is a methodology for constructing control systems in which the mapping between real-valued input and output parameters is represented by fuzzy rules. Fuzzy control has been very
successful in commercial products such as automatic transmissions, video cameras, and electric shavers. Critics (see, e.g., Elkan, 1993) argue that these applications are successful because they have
small rule bases, no chaining of inferences, and tunable parameters that can be adjusted to improve the system’s performance. The fact that they are implemented with fuzzy operators might be
incidental to their success; the key is simply to provide a concise and intuitive way to specify a smoothly interpolated, real-valued function.
There have been attempts to provide an explanation of fuzzy logic in terms of probability theory. One idea is to view assertions such as “Nate is Tall” as discrete observations made concerning a
continuous hidden variable, Nate’s actual Height . The probability model specifies P (Observer says Nate is tall | Height), perhaps using a probit distribution as described on page 522. A posterior
distribution over Nate’s height can then be calculated in the usual way, for example, if the model is part of a hybrid Bayesian network. Such an approach is not truth-functional, of course. For
example, the conditional distribution
P (Observer says Nate is tall and heavy | Height ,Weight)
allows for interactions between height and weight in the causing of the observation. Thus, someone who is eight feet tall and weighs 190 pounds is very unlikely to be called “tall and heavy,” even
though “eight feet” counts as “tall” and “190 pounds” counts as “heavy.”
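A minimal sketch of this hidden-variable reading follows; the probit threshold (72 inches), slope (2 inches), and the Gaussian prior over Height are illustrative assumptions, and the computation is an ordinary discretized Bayesian update rather than anything truth-functional.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Probit sensor model P(observer says "tall" | Height = h); the 72-inch
# threshold and 2-inch slope are illustrative assumptions.
def p_says_tall(h):
    return norm_cdf((h - 72.0) / 2.0)

# Discretized Gaussian prior over Height in inches (mean 69, std 3; assumed).
heights = [60 + 0.1 * i for i in range(201)]
prior = [math.exp(-0.5 * ((h - 69.0) / 3.0) ** 2) for h in heights]
z = sum(prior)
prior = [p / z for p in prior]

# Bayesian update on the observation "Nate is tall".
post = [p * p_says_tall(h) for p, h in zip(prior, heights)]
z = sum(post)
post = [p / z for p in post]

mean = lambda dist: sum(p * h for p, h in zip(dist, heights))
print(round(mean(prior), 2), round(mean(post), 2))   # posterior mean is higher
```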
Fuzzy predicates can also be given a probabilistic interpretation in terms of random sets—that is, random variables whose possible values are sets of objects. For example, Tall is a random set whose
possible values are sets of people. The probability P (Tall = S~1~), where S~1~ is some particular set of people, is the probability that exactly that set would be identified as “tall” by an
observer. Then the probability that “Nate is tall” is the sum of the probabilities of all the sets of which Nate is a member.
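A toy sketch of the random-set reading, with invented names and probabilities, computes the probability that a person is called "tall" by summing over the candidate sets that contain that person:

```python
# Random-set reading of "tall": a distribution over the sets of people an
# observer might label "tall" (names and probabilities are made up).
dist_over_tall_sets = {
    frozenset(["Ann"]): 0.2,
    frozenset(["Ann", "Nate"]): 0.5,
    frozenset(["Ann", "Nate", "Bob"]): 0.3,
}

def p_called_tall(person):
    """P(person is tall): total probability of sets containing the person."""
    return sum(p for s, p in dist_over_tall_sets.items() if person in s)

print(p_called_tall("Nate"))   # 0.8
print(p_called_tall("Bob"))    # 0.3
```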
Both the hybrid Bayesian network approach and the random sets approach appear to capture aspects of fuzziness without introducing degrees of truth. Nonetheless, there remain many open issues
concerning the proper representation of linguistic observations and continuous quantities—issues that have been neglected by most outside the fuzzy community.
This chapter has described Bayesian networks, a well-developed representation for uncertain knowledge. Bayesian networks play a role roughly analogous to that of propositional logic for definite knowledge.
• A Bayesian network is a directed acyclic graph whose nodes correspond to random variables; each node has a conditional distribution for the node, given its parents.
• Bayesian networks provide a concise way to represent conditional independence relationships in the domain.
• A Bayesian network specifies a full joint distribution; each joint entry is defined as the product of the corresponding entries in the local conditional distributions. A Bayesian network is often
exponentially smaller than an explicitly enumerated joint distribution.
• Many conditional distributions can be represented compactly by canonical families of distributions. Hybrid Bayesian networks, which include both discrete and continuous variables, use a variety
of canonical distributions.
• Inference in Bayesian networks means computing the probability distribution of a set of query variables, given a set of evidence variables. Exact inference algorithms, such as variable
elimination, evaluate sums of products of conditional probabilities as efficiently as possible.
• In polytrees (singly connected networks), exact inference takes time linear in the size of the network. In the general case, the problem is intractable.
• Stochastic approximation techniques such as likelihood weighting and Markov chain Monte Carlo can give reasonable estimates of the true posterior probabilities in a network and can cope with much
larger networks than can exact algorithms.
• Probability theory can be combined with representational ideas from first-order logic to produce very powerful systems for reasoning under uncertainty. Relational probability models (RPMs)
include representational restrictions that guarantee a well-defined probability distribution that can be expressed as an equivalent Bayesian network. Open-universe probability models handle existence and identity uncertainty, defining probability distributions over the infinite space of first-order possible worlds.
• Various alternative systems for reasoning under uncertainty have been suggested. Generally speaking, truth-functional systems are not well suited for such reasoning.
The use of networks to represent probabilistic information began early in the 20th century, with the work of Sewall Wright on the probabilistic analysis of genetic inheritance and animal growth
factors (Wright, 1921, 1934). I. J. Good (1961), in collaboration with Alan Turing, developed probabilistic representations and Bayesian inference methods that could be regarded as a forerunner of
modern Bayesian networks—although the paper is not often cited in this context.10 The same paper is the original source for the noisy-OR model.
The influence diagram representation for decision problems, which incorporated a DAG representation for random variables, was used in decision analysis in the late 1970s (see Chapter 16), but only
enumeration was used for evaluation. Judea Pearl developed the message-passing method for carrying out inference in tree networks (Pearl, 1982a) and polytree networks (Kim and Pearl, 1983) and
explained the importance of causal rather than diagnostic probability models, in contrast to the certainty-factor systems then in vogue.
The first expert system using Bayesian networks was CONVINCE (Kim, 1983). Early applications in medicine included the MUNIN system for diagnosing neuromuscular disorders (Andersen et al., 1989) and
the PATHFINDER system for pathology (Heckerman, 1991). The CPCS system (Pradhan et al., 1994) is a Bayesian network for internal medicine consisting of 448 nodes, 906 links and 8,254 conditional probability values. (The front cover shows a portion of the network.)
10 I. J. Good was chief statistician for Turing’s code-breaking team in World War II. In 2001: A Space Odyssey (Clarke, 1968a), Good and Minsky are credited with making the breakthrough that led to the development of the HAL 9000 computer.
Applications in engineering include the Electric Power Research Institute’s work on monitoring power generators (Morjaria et al., 1995), NASA’s work on displaying time-critical information at Mission
Control in Houston (Horvitz and Barry, 1995), and the general field of network tomography, which aims to infer unobserved local properties of nodes and links in the Internet from observations of
end-to-end message performance (Castro et al., 2004). Perhaps the most widely used Bayesian network systems have been the diagnosis-and-repair modules (e.g., the Printer Wizard) in Microsoft Windows
(Breese and Heckerman, 1996) and the Office Assistant in Microsoft Office (Horvitz et al., 1998). Another important application area is biology: Bayesian networks have been used for identifying human
genes by reference to mouse genes (Zhang et al., 2003), inferring cellular networks (Friedman, 2004), and many other tasks in bioinformatics. We could go on, but instead we’ll refer you to Pourret et
al. (2008), a 400-page guide to applications of Bayesian networks.
Ross Shachter (1986), working in the influence diagram community, developed the first complete algorithm for general Bayesian networks. His method was based on goal-directed reduction of the network
using posterior-preserving transformations. Pearl (1986) developed a clustering algorithm for exact inference in general Bayesian networks, utilizing a conversion to a directed polytree of clusters
in which message passing was used to achieve consistency over variables shared between clusters. A similar approach, developed by the statisticians David Spiegelhalter and Steffen Lauritzen
(Lauritzen and Spiegelhalter, 1988), is based on conversion to an undirected form of graphical model called a Markov network. This approach is implemented in the HUGIN system, an efficient and widely
used tool for uncertain reasoning (Andersen et al., 1989). Boutilier et al. (1996) show how to exploit context-specific independence in clustering algorithms.
The basic idea of variable elimination—that repeated computations within the overall sum-of-products expression can be avoided by caching—appeared in the symbolic probabilistic inference (SPI)
algorithm (Shachter et al., 1990). The elimination algorithm we describe is closest to that developed by Zhang and Poole (1994). Criteria for pruning irrelevant variables were developed by Geiger et
al. (1990) and by Lauritzen et al. (1990); the criterion we give is a simple special case of these. Dechter (1999) shows how the variable elimination idea is essentially identical to nonserial
dynamic programming (Bertele and Brioschi, 1972), an algorithmic approach that can be applied to solve a range of inference problems in Bayesian networks—for example, finding the most likely
explanation for a set of observations. This connects Bayesian network algorithms to related methods for solving CSPs and gives a direct measure of the complexity of exact inference in terms of the
tree width of the network. Wexler and Meek (2009) describe a method of preventing exponential growth in the size of factors computed in variable elimination; their algorithm breaks down large factors
into products of smaller factors and simultaneously computes an error bound for the resulting approximation.
The inclusion of continuous random variables in Bayesian networks was considered by Pearl (1988) and Shachter and Kenley (1989); these papers discussed networks containing only continuous variables
with linear Gaussian distributions. The inclusion of discrete variables has been investigated by Lauritzen and Wermuth (1989) and implemented in the cHUGIN system (Olesen, 1993). Further analysis of
linear Gaussian models, with connections to many other models used in statistics, appears in Roweis and Ghahramani (1999). The probit distribution is usually attributed to Gaddum (1933) and Bliss
(1934), although it had been discovered several times in the 19th century. Bliss’s work was expanded considerably by Finney (1947). The probit has been used widely for modeling discrete choice
phenomena and can be extended to handle more than two choices (Daganzo, 1979). The logit model was introduced by Berkson (1944); initially much derided, it eventually became more popular than the
probit model. Bishop (1995) gives a simple justification for its use.
Cooper (1990) showed that the general problem of inference in unconstrained Bayesian networks is NP-hard, and Paul Dagum and Mike Luby (1993) showed the corresponding approximation problem to be
NP-hard. Space complexity is also a serious problem in both clustering and variable elimination methods. The method of cutset conditioning, which was developed for CSPs in Chapter 6, avoids the
construction of exponentially large tables. In a Bayesian network, a cutset is a set of nodes that, when instantiated, reduces the remaining nodes to a polytree that can be solved in linear time and
space. The query is answered by summing over all the instantiations of the cutset, so the overall space requirement is still linear (Pearl, 1988). Darwiche (2001) describes a recursive conditioning
algorithm that allows a complete range of space/time tradeoffs.
The development of fast approximation algorithms for Bayesian network inference is a very active area, with contributions from statistics, computer science, and physics. The rejection sampling method
is a general technique that is long known to statisticians; it was first applied to Bayesian networks by Max Henrion (1988), who called it logic sampling. Likelihood weighting, which was developed by
Fung and Chang (1989) and Shachter and Peot (1989), is an example of the well-known statistical method of importance sampling. Cheng and Druzdzel (2000) describe an adaptive version of likelihood
weighting that works well even when the evidence has very low prior likelihood.
Markov chain Monte Carlo (MCMC) algorithms began with the Metropolis algorithm, due to Metropolis et al. (1953), which was also the source of the simulated annealing algorithm described in Chapter 4.
The Gibbs sampler was devised by Geman and Geman (1984) for inference in undirected Markov networks. The application of MCMC to Bayesian networks is due to Pearl (1987). The papers collected by Gilks
et al. (1996) cover a wide variety of applications of MCMC, several of which were developed in the well-known BUGS package (Gilks et al., 1994).
There are two very important families of approximation methods that we did not cover in the chapter. The first is the family of variational approximation methods, which can be used to simplify
complex calculations of all kinds. The basic idea is to propose a reduced version of the original problem that is simple to work with, but that resembles the original problem as closely as possible.
The reduced problem is described by some variational parameters λ that are adjusted to minimize a distance function D between the original and the reduced problem, often by solving the system of
equations ∂D/∂λ = 0. In many cases, strict upper and lower bounds can be obtained. Variational methods have long been used in statistics (Rustagi, 1976). In statistical physics, the mean-field method
is a particular variational approximation in which the individual variables making up the model are assumed to be completely independent. This idea was applied to solve large undirected Markov
networks (Peterson and Anderson, 1987; Parisi, 1988). Saul et al. (1996) developed the mathematical foundations for applying variational methods to Bayesian networks and obtained accurate lower-bound
approximations for sigmoid networks with the use of mean-field methods. Jaakkola and Jordan (1996) extended the methodology to obtain both lower and upper bounds. Since these early papers,
variational methods have been applied to many specific families of models. The remarkable paper by Wainwright and Jordan (2008) provides a unifying theoretical analysis of the literature on
variational methods.
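Because the passage only states the idea, a toy coordinate-ascent sketch may help; the two-variable joint and all numbers below are illustrative assumptions. Each factor of a fully independent q(x1, x2) = q1(x1) q2(x2) is updated in turn using the standard mean-field rule, which minimizes the KL divergence from q to p one factor at a time.

```python
import math

# Toy mean-field sketch: approximate an unnormalized joint p(x1, x2) over two
# binary variables by a fully factored q(x1, x2) = q1(x1) q2(x2), updating each
# factor as q_i(x_i) proportional to exp(E_{q_j}[log p(x1, x2)]).
p = {(0, 0): 4.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 4.0}
Z = sum(p.values())

def update(q_other, axis):
    """Coordinate-ascent update for the factor on the given axis."""
    weights = []
    for xi in (0, 1):
        expected_log = sum(
            q_other[xj] * math.log(p[(xi, xj)] if axis == 0 else p[(xj, xi)])
            for xj in (0, 1))
        weights.append(math.exp(expected_log))
    z = sum(weights)
    return [w / z for w in weights]

q1, q2 = [0.5, 0.5], [0.6, 0.4]        # arbitrary starting point
for _ in range(50):
    q1 = update(q2, axis=0)
    q2 = update(q1, axis=1)

print([round(v, 3) for v in q1], [round(v, 3) for v in q2])
# The factored q recovers the single-variable marginals here but, by assuming
# independence, cannot represent the correlation: q(0,0) is about 0.25 while
# the true p(0,0)/Z = 0.4.
print(round(q1[0] * q2[0], 3), round(p[(0, 0)] / Z, 3))
```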
A second important family of approximation algorithms is based on Pearl’s polytree message-passing algorithm (1982a). This algorithm can be applied to general networks, as suggested by Pearl (1988).
The results might be incorrect, or the algorithm might fail to terminate, but in many cases, the values obtained are close to the true values. Little attention was paid to this so-called belief
propagation (or BP) approach until McEliece et al. (1998) observed that message passing in a multiply connected Bayesian network was exactly the computation performed by the turbo decoding algorithm
(Berrou et al., 1993), which provided a major breakthrough in the design of efficient error-correcting codes. The implication is that BP is both fast and accurate on the very large and very highly
connected networks used for decoding and might therefore be useful more generally. Murphy et al. (1999) presented a promising empirical study of BP’s performance, and Weiss and Freeman (2001)
established strong convergence results for BP on linear Gaussian networks. Weiss (2000b) shows how an approximation called loopy belief propagation works, and when the approximation is correct.
Yedidia et al. (2005) made further connections between loopy propagation and ideas from statistical physics.
The connection between probability and first-order languages was first studied by Carnap (1950). Gaifman (1964) and Scott and Krauss (1966) defined a language in which probabilities could be
associated with first-order sentences and for which models were probability measures on possible worlds. Within AI, this idea was developed for propositional logic by Nilsson (1986) and for
first-order logic by Halpern (1990). The first extensive investigation of knowledge representation issues in such languages was carried out by Bacchus (1990). The basic idea is that each sentence in
the knowledge base expressed a constraint on the distribution over possible worlds; one sentence entails another if it expresses a stronger constraint. For example, the sentence ∀x P (Hungry(x)) >
0.2 rules out distributions in which any object is hungry with probability less than 0.2; thus, it entails the sentence ∀x P (Hungry(x)) > 0.1. It turns out that writing a consistent set of sentences
in these languages is quite difficult and constructing a unique probability model nearly impossible unless one adopts the representation approach of Bayesian networks by writing suitable sentences
about conditional probabilities.
Beginning in the early 1990s, researchers working on complex applications noticed the expressive limitations of Bayesian networks and developed various languages for writing “templates” with logical
variables, from which large networks could be constructed automatically for each problem instance (Breese, 1992; Wellman et al., 1992). The most important such language was BUGS (Bayesian inference
Using Gibbs Sampling) (Gilks et al., 1994), which combined Bayesian networks with the indexed random variable notation common in statistics. (In BUGS, an indexed random variable looks like X[i],
where i has a defined integer range.) These languages inherited the key property of Bayesian networks: every well-formed knowledge base defines a unique, consistent probability model. Languages with
well-defined semantics based on unique names and domain closure drew on the representational capabilities of logic programming (Poole, 1993; Sato and Kameya, 1997; Kersting et al., 2000) and semantic
networks (Koller and Pfeffer, 1998; Pfeffer, 2000). Pfeffer (2007) went on to develop IBAL, which represents first-order probability models as probabilistic programs in a programming language
extended with a randomization primitive. Another important thread was the combination of relational and first-order notations with (undirected) Markov networks (Taskar et al., 2002; Domingos and
Richardson, 2004), where the emphasis has been less on knowledge representation and more on learning from large data sets.
Initially, inference in these models was performed by generating an equivalent Bayesian network. Pfeffer et al. (1999) introduced a variable elimination algorithm that cached each computed factor for
reuse by later computations involving the same relations but different objects, thereby realizing some of the computational gains of lifting. The first truly lifted inference algorithm was a lifted
form of variable elimination described by Poole (2003) and subsequently improved by de Salvo Braz et al. (2007). Further advances, including cases where certain aggregate probabilities can be
computed in closed form, are described by Milch et al. (2008) and Kisynski and Poole (2009). Pasula and Russell (2001) studied the application of MCMC to avoid building the complete equivalent Bayes
net in cases of relational and identity uncertainty. Getoor and Taskar (2007) collect many important papers on first-order probability models and their use in machine learning.
Probabilistic reasoning about identity uncertainty has two distinct origins. In statistics, the problem of record linkage arises when data records do not contain standard unique identifiers—for
example, various citations of this book might name its first author “Stuart Russell” or “S. J. Russell” or even “Stewart Russle,” and other authors may use some of the same names. Literally
hundreds of companies exist solely to solve record linkage problems in financial, medical, census, and other data. Probabilistic analysis goes back to work by Dunn (1946); the Fellegi–Sunter model
(1969), which is essentially naive Bayes applied to matching, still dominates current practice. The second origin for work on identity uncertainty is multitarget tracking (Sittler, 1964), which we
cover in Chapter 15. For most of its history, work in symbolic AI assumed erroneously that sensors could supply sentences with unique identifiers for objects. The issue was studied in the context of
language understanding by Charniak and Goldman (1992) and in the context of surveillance by Huang and Russell (1998) and Pasula et al. (1999). Pasula et al. (2003) developed a complex generative
model for authors, papers, and citation strings, involving both relational and identity uncertainty, and demonstrated high accuracy for citation information extraction. The first formally defined
language for open-universe probability models was BLOG (Milch et al., 2005), which came with a complete (albeit slow) MCMC inference algorithm for all well-defined models. (The program code faintly
visible on the front cover of this book is part of a BLOG model for detecting nuclear explosions from seismic signals as part of the UN Comprehensive Test Ban Treaty verification regime.) Laskey
(2008) describes another open-universe modeling language called multi-entity Bayesian networks.
As explained in Chapter 13, early probabilistic systems fell out of favor in the early 1970s, leaving a partial vacuum to be filled by alternative methods. Certainty factors were invented for use in
the medical expert system MYCIN (Shortliffe, 1976), which was intended both as an engineering solution and as a model of human judgment under uncertainty. The collection Rule-Based Expert Systems
(Buchanan and Shortliffe, 1984) provides a complete overview of MYCIN and its descendants (see also Stefik, 1995). David Heckerman (1986) showed that a slightly modified version of certainty factor
calculations gives correct probabilistic results in some cases, but results in serious overcounting of evidence in other cases. The PROSPECTOR expert system (Duda et al., 1979) used a rule-based
approach in which the rules were justified by a (seldom tenable) global independence assumption.
Dempster–Shafer theory originates with a paper by Arthur Dempster (1968) proposing a generalization of probability to interval values and a combination rule for using them. Later work by Glenn Shafer
(1976) led to the Dempster-Shafer theory’s being viewed as a competing approach to probability. Pearl (1988) and Ruspini et al. (1992) analyze the relationship between the Dempster–Shafer theory and
standard probability theory.
Fuzzy sets were developed by Lotfi Zadeh (1965) in response to the perceived difficulty of providing exact inputs to intelligent systems. The text by Zimmermann (2001) provides a thorough
introduction to fuzzy set theory; papers on fuzzy applications are collected in Zimmermann (1999). As we mentioned in the text, fuzzy logic has often been perceived incorrectly as a direct competitor
to probability theory, whereas in fact it addresses a different set of issues. Possibility theory (Zadeh, 1978) was introduced to handle uncertainty in fuzzy systems and has much in common with
probability. Dubois and Prade (1994) survey the connections between possibility theory and probability theory.
The resurgence of probability depended mainly on Pearl’s development of Bayesian networks as a method for representing and using conditional independence information. This resurgence did not come
without a fight; Peter Cheeseman’s (1985) pugnacious “In Defense of Probability” and his later article “An Inquiry into Computer Understanding” (Cheeseman, 1988, with commentaries) give something of
the flavor of the debate. Eugene Charniak helped present the ideas to AI researchers with a popular article, “Bayesian networks without tears”11 (1991), and book (1993). The book by Dean and Wellman
(1991) also helped introduce Bayesian networks to AI researchers. One of the principal philosophical objections of the logicists was that the numerical calculations that probability theory was
thought to require were not apparent to introspection and presumed an unrealistic level of precision in our uncertain knowledge. The development of qualitative probabilistic networks (Wellman, 1990a)
provided a purely qualitative abstraction of Bayesian networks, using the notion of positive and negative influences between variables. Wellman shows that in many cases such information is sufficient
for optimal decision making without the need for the precise specification of probability values. Goldszmidt and Pearl (1996) take a similar approach. Work by Adnan Darwiche and Matt Ginsberg (1992)
extracts the basic properties of conditioning and evidence combination from probability theory and shows that they can also be applied in logical and default reasoning. Often, programs speak louder
than words, and the ready availability of high-quality software such as the Bayes Net toolkit (Murphy, 2001) accelerated the adoption of the technology.
11 The title of the original version of the article was “Pearl for swine.”
The most important single publication in the growth of Bayesian networks was undoubtedly the text Probabilistic Reasoning in Intelligent Systems (Pearl, 1988). Several excellent texts (Lauritzen,
1996; Jensen, 2001; Korb and Nicholson, 2003; Jensen, 2007; Darwiche, 2009; Koller and Friedman, 2009) provide thorough treatments of the topics we have covered in this chapter. New research on
probabilistic reasoning appears both in mainstream AI journals, such as Artificial Intelligence and the Journal of AI Research, and in more specialized journals, such as the International Journal of
Approximate Reasoning. Many papers on graphical models, which include Bayesian networks, appear in statistical journals. The proceedings of the conferences on Uncertainty in Artificial Intelligence
(UAI), Neural Information Processing Systems (NIPS), and Artificial Intelligence and Statistics (AISTATS) are excellent sources for current research.
14.1 We have a bag of three biased coins a, b, and c with probabilities of coming up heads of 20%, 60%, and 80%, respectively. One coin is drawn randomly from the bag (with equal likelihood of
drawing each of the three coins), and then the coin is flipped three times to generate the outcomes X~1~, X~2~, and X~3~.
a. Draw the Bayesian network corresponding to this setup and define the necessary CPTs.
b. Calculate which coin was most likely to have been drawn from the bag if the observed flips come out heads twice and tails once.
14.2 Equation (14.1) on page 513 defines the joint distribution represented by a Bayesian network in terms of the parameters θ(X~i~ |Parents(X~i~)). This exercise asks you to derive the equivalence
between the parameters and the conditional probabilities P(X~i~ |Parents(X~i~)) from this definition.
a. Consider a simple network X → Y → Z with three Boolean variables. Use Equations (13.3) and (13.6) (pages 485 and 492) to express the conditional probability P (z | y) as the ratio of two sums,
each over entries in the joint distribution P(X,Y,Z).
b. Now use Equation (14.1) to write this expression in terms of the network parameters θ(X), θ(Y | X), and θ(Z |Y ).
c. Next, expand out the summations in your expression from part (b), writing out explicitly the terms for the true and false values of each summed variable. Assuming that all network parameters
satisfy the constraint
∑~x_i~ θ(x~i~ | parents(X~i~)) = 1, show that the resulting expression reduces to θ(z | y).
d. Generalize this derivation to show that θ(X~i~ | Parents(X~i~)) = P(X~i~ | Parents(X~i~)) for any Bayesian network.
14.3 The operation of arc reversal in a Bayesian network allows us to change the direction of an arc X → Y while preserving the joint probability distribution that the network represents (Shachter,
1986). Arc reversal may require introducing new arcs: all the parents of X also become parents of Y , and all parents of Y also become parents of X.
a. Assume that X and Y start with m and n parents, respectively, and that all variables have k values. By calculating the change in size for the CPTs of X and Y , show that the total number of
parameters in the network cannot decrease during arc reversal. (Hint: the parents of X and Y need not be disjoint.)
b. Under what circumstances can the total number remain constant?
c. Let the parents of X be U ∪ V and the parents of Y be V ∪ W, where U and W are disjoint. The formulas for the new CPTs after arc reversal are as follows:
P(Y |U, V, W) = ∑~x~ P(Y |V, W, x)P(x |U, V)
P(X | U, V, W, Y ) = P(Y | X, V, W)P(X | U, V)/P(Y | U, V, W) .
Prove that the new network expresses the same joint distribution over all variables as the original network.
14.4 Consider the Bayesian network in Figure 14.2.
a. If no evidence is observed, are Burglary and Earthquake independent? Prove this from the numerical semantics and from the topological semantics.
b. If we observe Alarm = true , are Burglary and Earthquake independent? Justify your answer by calculating whether the probabilities involved satisfy the definition of conditional independence.
14.5 Suppose that in a Bayesian network containing an unobserved variable Y , all the variables in the Markov blanket MB(Y ) have been observed.
a. Prove that removing the node Y from the network will not affect the posterior distribution for any other unobserved variable in the network.
b. Discuss whether we can remove Y if we are planning to use (i) rejection sampling and (ii) likelihood weighting.
14.6 Let H~x~ be a random variable denoting the handedness of an individual x, with possible values l or r. A common hypothesis is that left- or right-handedness is inherited by a simple mechanism; that is, perhaps there is a gene G~x~, also with values l or r, and perhaps actual handedness turns out mostly the same (with some probability s) as the gene an individual possesses. Furthermore, perhaps
the gene itself is equally likely to be inherited from either of an individual’s parents, with a small nonzero probability m of a random mutation flipping the handedness.
a. Which of the three networks in Figure 14.20 claim that P(G~father~, G~mother~, G~child~) = P(G~father~)P(G~mother~)P(G~child~)?
b. Which of the three networks make independence claims that are consistent with the hypothesis about the inheritance of handedness?
c. Which of the three networks is the best description of the hypothesis?
d. Write down the CPT for the G~child~ node in network (a), in terms of s and m.
e. Suppose that P(G~father~ = l) = P(G~mother~ = l) = q. In network (a), derive an expression for P(G~child~ = l) in terms of m and q only, by conditioning on its parent nodes.
f. Under conditions of genetic equilibrium, we expect the distribution of genes to be the same across generations. Use this to calculate the value of q, and, given what you know about handedness in
humans, explain why the hypothesis described at the beginning of this question must be wrong.
14.7 The Markov blanket of a variable is defined on page 517. Prove that a variable is independent of all other variables in the network, given its Markov blanket, and derive Equation (14.12).
14.8 Consider the network for car diagnosis shown in Figure 14.21.
a. Extend the network with the Boolean variables IcyWeather and StarterMotor .
b. Give reasonable conditional probability tables for all the nodes.
c. How many independent values are contained in the joint probability distribution for eight Boolean nodes, assuming that no conditional independence relations are known to hold among them?
d. How many independent probability values do your network tables contain?
e. The conditional distribution for Starts could be described as a noisy-AND distribution. Define this family in general and relate it to the noisy-OR distribution.
14.9 Consider the family of linear Gaussian networks, as defined on page 520.
a. In a two-variable network, let X~1~ be the parent of X~2~, let X~1~ have a Gaussian prior, and let P(X~2~ | X~1~) be a linear Gaussian distribution. Show that the joint distribution P(X~1~, X~2~) is a multivariate Gaussian, and calculate its covariance matrix.
b. Prove by induction that the joint distribution for a general linear Gaussian network on X~1~, . . . ,X~n~ is also a multivariate Gaussian.
14.10 The probit distribution defined on page 522 describes the probability distribution for a Boolean child, given a single continuous parent.
a. How might the definition be extended to cover multiple continuous parents?
b. How might it be extended to handle a multivalued child variable? Consider both cases where the child’s values are ordered (as in selecting a gear while driving, depending on speed, slope, desired
acceleration, etc.) and cases where they are unordered (as in selecting bus, train, or car to get to work). (Hint: Consider ways to divide the possible values into two sets, to mimic a Boolean variable.)
14.11 In your local nuclear power station, there is an alarm that senses when a temperature gauge exceeds a given threshold. The gauge measures the temperature of the core. Consider the Boolean
variables A (alarm sounds), FA (alarm is faulty), and FG (gauge is faulty) and the multivalued nodes G (gauge reading) and T (actual core temperature).
a. Draw a Bayesian network for this domain, given that the gauge is more likely to fail when the core temperature gets too high.
b. Is your network a polytree? Why or why not?
c. Suppose there are just two possible actual and measured temperatures, normal and high; the probability that the gauge gives the correct temperature is x when it is working, but y when it is
faulty. Give the conditional probability table associated with G.
d. Suppose the alarm works correctly unless it is faulty, in which case it never sounds. Give the conditional probability table associated with A.
e. Suppose the alarm and gauge are working and the alarm sounds. Calculate an expression for the probability that the temperature of the core is too high, in terms of the various conditional
probabilities in the network.
14.12 Two astronomers in different parts of the world make measurements M1 and M2 of the number of stars N in some small region of the sky, using their telescopes. Normally, there is a small
possibility e of error by up to one star in each direction. Each telescope can also (with a much smaller probability f ) be badly out of focus (events F1 and F2), in which case the scientist will
undercount by three or more stars (or if N is less than 3, fail to detect any stars at all). Consider the three networks shown in Figure 14.22.
a. Which of these Bayesian networks are correct (but not necessarily efficient) representations of the preceding information?
b. Which is the best network? Explain.
c. Write out a conditional distribution for P(M1 |N), for the case where N ∈{1, 2, 3} and M1 ∈{0, 1, 2, 3, 4}. Each entry in the conditional distribution should be expressed as a function of the
parameters e and/or f .
d. Suppose M1 = 1 and M2 = 3. What are the possible numbers of stars if you assume no prior constraint on the values of N?
e. What is the most likely number of stars, given these observations? Explain how to compute this, or if it is not possible to compute, explain what additional information is needed and how it would
affect the result.
14.13 Consider the network shown in Figure 14.22(ii), and assume that the two telescopes work identically. N ∈{1, 2, 3} and M1,M2 ∈{0, 1, 2, 3, 4}, with the symbolic CPTs as described in Exercise
14.12. Using the enumeration algorithm (Figure 14.9 on page 525), calculate the probability distribution P(N |M1 = 2,M2 = 2).
14.14 Consider the Bayes net shown in Figure 14.23.
a. Which of the following are asserted by the network structure?
(i) P(B, I, M) = P(B)P(I)P(M).
(ii) P(J | G) = P(J | G, I).
(iii) P(M | G, B, I) = P(M | G, B, I, J).
b. Calculate the value of P (b, i,¬m, g, j).
c. Calculate the probability that someone goes to jail given that they broke the law, have been indicted, and face a politically motivated prosecutor.
d. A context-specific independence (see page 542) allows a variable to be independent of some of its parents given certain values of others. In addition to the usual conditional independences given
by the graph structure, what context-specific independences exist in the Bayes net in Figure 14.23?
e. Suppose we want to add the variable P =PresidentialPardon to the network; draw the new network and briefly explain any links you add.
14.15 Consider the variable elimination algorithm in Figure 14.11 (page 528).
a. Section 14.4 applies variable elimination to the query
P(Burglary | JohnCalls = true,MaryCalls = true) .
Perform the calculations indicated and check that the answer is correct.
b. Count the number of arithmetic operations performed, and compare it with the number performed by the enumeration algorithm.
c. Suppose a network has the form of a chain: a sequence of Boolean variables X~1~, . . . ,X~n~
where Parents(X~i~) = {X~i−1~} for i = 2, . . . , n. What is the complexity of computing P(X~1~ | X~n~ = true) using enumeration? Using variable elimination?
d. Prove that the complexity of running variable elimination on a polytree network is linear in the size of the tree for any variable ordering consistent with the network structure.
14.16 Investigate the complexity of exact inference in general Bayesian networks:
a. Prove that any 3-SAT problem can be reduced to exact inference in a Bayesian network constructed to represent the particular problem and hence that exact inference is NP hard. (Hint: Consider a
network with one variable for each proposition symbol, one for each clause, and one for the conjunction of clauses.)
b. The problem of counting the number of satisfying assignments for a 3-SAT problem is #P-complete. Show that exact inference is at least as hard as this.
14.17 Consider the problem of generating a random sample from a specified distribution on a single variable. Assume you have a random number generator that returns a random number uniformly
distributed between 0 and 1.
a. Let X be a discrete variable with P(X = x~i~) = p~i~ for i ∈ {1, . . . , k}. The cumulative distribution of X gives the probability that X ∈ {x~1~, . . . , x~j~} for each possible j. (See also
Appendix A.) Explain how to calculate the cumulative distribution in O(k) time and how to generate a single sample of X from it. Can the latter be done in less than O(k) time?
b. Now suppose we want to generate N samples of X, where N ≫ k. Explain how to do this with an expected run time per sample that is constant (i.e., independent of k).
c. Now consider a continuous-valued variable with a parameterized distribution (e.g., Gaussian). How can samples be generated from such a distribution?
d. Suppose you want to query a continuous-valued variable and you are using a sampling algorithm such as LIKELIHOOD-WEIGHTING to do the inference. How would you have to modify the query-answering process?
14.18 Consider the query P(Rain |Sprinkler = true,WetGrass = true) in Figure 14.12(a) (page 529) and how Gibbs sampling can answer it.
a. How many states does the Markov chain have?
b. Calculate the transition matrix Q containing q(y → y′) for all y, y′.
c. What does Q^2^, the square of the transition matrix, represent?
d. What about Q^n^ as n → ∞?
e. Explain how to do probabilistic inference in Bayesian networks, assuming that Qn is available. Is this a practical way to do inference?
14.19 This exercise explores the stationary distribution for Gibbs sampling methods.
a. The convex composition [α, q1; 1 − α, q2] of q1 and q2 is a transition probability distribution that first chooses one of q1 and q2 with probabilities α and 1 − α, respectively, and then applies
whichever is chosen. Prove that if q1 and q2 are in detailed balance with π, then their convex composition is also in detailed balance with π. (Note: this result justifies a variant of GIBBS-ASK in
which variables are chosen at random rather than sampled in a fixed sequence.)
b. Prove that if each of q1 and q2 has π as its stationary distribution, then the sequential composition q = q1 ◦ q2 also has π as its stationary distribution.
14.20 The Metropolis–Hastings algorithm is a member of the MCMC family; as such, it is designed to generate samples x (eventually) according to target probabilities π(x). (Typically we are interested in sampling from π(x) = P(x | e).) Like simulated annealing, Metropolis–Hastings operates in two stages. First, it samples a new state x′ from a proposal distribution q(x′ | x),
given the current state x. Then, it probabilistically accepts or rejects x′ according to the acceptance probability
α(x′ | x) = min ( 1, π(x′)q(x | x′) / π(x)q(x′ | x) ) .
If the proposal is rejected, the state remains at x.
a. Consider an ordinary Gibbs sampling step for a specific variable Xi. Show that this step, considered as a proposal, is guaranteed to be accepted by Metropolis–Hastings. (Hence, Gibbs sampling is a
special case of Metropolis–Hastings.)
b. Show that the two-step process above, viewed as a transition probability distribution, is in detailed balance with π.
14.21 Three soccer teams A, B, and C play each other once. Each match is between two teams, and can be won, drawn, or lost. Each team has a fixed, unknown degree of quality—an integer ranging from
0 to 3—and the outcome of a match depends probabilistically on the difference in quality between the two teams.
a. Construct a relational probability model to describe this domain, and suggest numerical values for all the necessary probability distributions.
b. Construct the equivalent Bayesian network for the three matches.
c. Suppose that in the first two matches A beats B and draws with C. Using an exact inference algorithm of your choice, compute the posterior distribution for the outcome of the third match.
d. Suppose there are n teams in the league and we have the results for all but the last match. How does the complexity of predicting the last game vary with n?
e. Investigate the application of MCMC to this problem. How quickly does it converge in practice and how well does it scale?
In which we try to interpret the present, understand the past, and perhaps predict the future, even when very little is crystal clear.
Agents in partially observable environments must be able to keep track of the current state, to the extent that their sensors allow. In Section 4.4 we showed a methodology for doing that: an agent
maintains a belief state that represents which states of the world are currently possible. From the belief state and a transition model, the agent can predict how the world might evolve in the next
time step. From the percepts observed and a sensor model, the agent can update the belief state. This is a pervasive idea: in Chapter 4 belief states were represented by explicitly enumerated sets of
states, whereas in Chapters 7 and 11 they were represented by logical formulas. Those approaches defined belief states in terms of which world states were possible, but could say nothing about which
states were likely or unlikely. In this chapter, we use probability theory to quantify the degree of belief in elements of the belief state.
As we show in Section 15.1, time itself is handled in the same way as in Chapter 7: a changing world is modeled using a variable for each aspect of the world state at each point in time. The
transition and sensor models may be uncertain: the transition model describes the probability distribution of the variables at time t, given the state of the world at past times, while the sensor
model describes the probability of each percept at time t, given the current state of the world. Section 15.2 defines the basic inference tasks and describes the general structure of inference
algorithms for temporal models. Then we describe three specific kinds of models: hidden Markov models, Kalman filters, and dynamic Bayesian networks (which include hidden Markov models and Kalman
filters as special cases). Finally, Section 15.6 examines the problems faced when keeping track of more than one thing.
We have developed our techniques for probabilistic reasoning in the context of static worlds, in which each random variable has a single fixed value. For example, when repairing a car, we assume that
whatever is broken remains broken during the process of diagnosis; our job is to infer the state of the car from observed evidence, which also remains fixed.
Now consider a slightly different problem: treating a diabetic patient. As in the case of car repair, we have evidence such as recent insulin doses, food intake, blood sugar measurements, and other
physical signs. The task is to assess the current state of the patient, including the actual blood sugar level and insulin level. Given this information, we can make a decision about the patient’s
food intake and insulin dose. Unlike the case of car repair, here the dynamic aspects of the problem are essential. Blood sugar levels and measurements thereof can change rapidly over time, depending
on recent food intake and insulin doses, metabolic activity, the time of day, and so on. To assess the current state from the history of evidence and to predict the outcomes of treatment actions, we
must model these changes.
The same considerations arise in many other contexts, such as tracking the location of a robot, tracking the economic activity of a nation, and making sense of a spoken or written sequence of words.
How can dynamic situations like these be modeled?
States and observations
We view the world as a series of snapshots, or time slices, each of which contains a set of random variables, some observable and some not.1 For simplicity, we will assume that the same subset of
variables is observable in each time slice (although this is not strictly necessary in anything that follows). We will use X~t~ to denote the set of state variables at time t, which are assumed to be
unobservable, and E~t~ to denote the set of observable evidence variables. The observation at time t is E~t~ = e~t~ for some set of values e~t~.
Consider the following example: You are the security guard stationed at a secret underground installation. You want to know whether it’s raining today, but your only access to the outside world
occurs each morning when you see the director coming in with, or without, an umbrella. For each day t, the set E~t~ thus contains a single evidence variable Umbrella~t~ or U~t~ for short (whether the umbrella appears), and the set X~t~ contains a single state variable Rain~t~ or R~t~ for short (whether it is raining). Other problems can involve larger sets of variables. In the diabetes example, we
might have evidence variables, such as MeasuredBloodSugar~t~ and PulseRate~t~, and state variables, such as BloodSugar~t~ and StomachContents~t~. (Notice that BloodSugar~t~ and MeasuredBloodSugar~t~ are not the same variable; this is how we deal with noisy measurements of actual quantities.)
The interval between time slices also depends on the problem. For diabetes monitoring, a suitable interval might be an hour rather than a day. In this chapter we assume the interval between slices is
fixed, so we can label times by integers. We will assume that the state sequence starts at t =0; for various uninteresting reasons, we will assume that evidence starts arriving at t = 1 rather than t
= 0. Hence, our umbrella world is represented by state variables R~0~, R~1~, R~2~, . . . and evidence variables U~1~, U~2~, . . .. We will use the notation a:b to denote the sequence of integers from a to b (inclusive), and the notation X~a:b~ to denote the set of variables from X~a~ to X~b~. For example, U~1:3~ corresponds to the variables U~1~, U~2~, U~3~.
1 Uncertainty over continuous time can be modeled by stochastic differential equations (SDEs). The models studied in this chapter can be viewed as discrete-time approximations to SDEs.
Transition and sensor models
With the set of state and evidence variables for a given problem decided on, the next step is to specify how the world evolves (the transition model) and how the evidence variables get their values
(the sensor model).
The transition model specifies the probability distribution over the latest state variables, given the previous values, that is, P(X~t~ |X~0:t−1~). Now we face a problem: the set X~0:t−1~ is
unbounded in size as t increases. We solve the problem by making a Markov assumption that the current state depends on only a finite fixed number of previous states. Processes satisfying this
assumption were first studied in depth by the Russian statistician Andrei Markov (1856–1922) and are called Markov processes or Markov chains. They come in various flavors; the simplest is the
first-order Markov process, in which the current state depends only on the previous state and not on any earlier states. In other words, a state provides enough information to make the future
conditionally independent of the past, and we have
P(X~t~ |X~0:t−1~) = P(X~t~ |X~t−1~) . (15.1)
Hence, in a first-order Markov process, the transition model is the conditional distribution P(X~t~ |X~t−1~). The transition model for a second-order Markov process is the conditional distribution P(
X~t~ |X~t−2~, X~t−1~). Figure 15.1 shows the Bayesian network structures corresponding to first-order and second-order Markov processes.
Even with the Markov assumption there is still a problem: there are infinitely many possible values of t. Do we need to specify a different distribution for each time step? We avoid this problem by
assuming that changes in the world state are caused by a stationary process—that is, a process of change that is governed by laws that do not themselves change over time. (Don’t confuse stationary
with static: in a static process, the state itself does not change.) In the umbrella world, then, the conditional probability of rain, P(R~t~ | R~t−1~), is the same for all t, and we only have to specify
one conditional probability table.
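A minimal sketch of what stationarity buys us (the 0.7/0.3 values and the initial probability are assumptions for illustration, not numbers given here): a single transition table is applied at every time step to roll the rain distribution forward.

```python
# Stationary transition model for the umbrella world: one conditional table
# P(R_t | R_{t-1}) reused at every time step. Illustrative values.
P_rain_given_prev = {True: 0.7, False: 0.3}   # P(R_t = true | R_{t-1} = key)

def step(p_rain):
    """Predict P(R_t = true) from P(R_{t-1} = true) using the single CPT."""
    return (p_rain * P_rain_given_prev[True]
            + (1 - p_rain) * P_rain_given_prev[False])

p = 0.2   # assumed P(R_0 = true)
for t in range(1, 6):
    p = step(p)
    print(t, round(p, 4))   # the same table is applied at every step
```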
Now for the sensor model. The evidence variables E~t~ could depend on previous variables as well as the current state variables, but any state that’s worth its salt should suffice to generate the
current sensor values. Thus, we make a sensor Markov assumption as follows:
P(E~t~ |X~0:t~, E~0:t−1~) = P(E~t~ |X~t~) . (15.2)
Thus, P(E~t~ |X~t~) is our sensor model (sometimes called the observation model). Figure 15.2 shows both the transition model and the sensor model for the umbrella example. Notice the
direction of the dependence between state and sensors: the arrows go from the actual state of the world to sensor values because the state of the world causes the sensors to take on particular
values: the rain causes the umbrella to appear. (The inference process, of course, goes in the other direction; the distinction between the direction of modeled dependencies and the direction of
inference is one of the principal advantages of Bayesian networks.)
In addition to specifying the transition and sensor models, we need to say how everything gets started—the prior probability distribution at time 0, P(X~0~). With that, we have a specification of the
complete joint distribution over all the variables, using Equation (14.2). For any t,
P(X~0:t~, E~1:t~) = P(X~0~) ∏~i=1~^t^ P(X~i~ | X~i−1~) P(E~i~ | X~i~) .   (15.3)
The three terms on the right-hand side are the initial state model P(X~0~), the transition model P(X~i~ |X~i−1~), and the sensor model P(E~i~ |X~i~).
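A hedged sketch of this factorization for the umbrella world: the joint probability of one particular rain and umbrella sequence is the product of the prior, the transition terms, and the sensor terms. All numerical values below are illustrative assumptions.

```python
# Illustrative umbrella-world parameters (assumed values, not from the text).
P_R0 = {True: 0.5, False: 0.5}                        # prior P(R_0)
P_R = {True: {True: 0.7, False: 0.3},                 # P(R_t | R_{t-1})
       False: {True: 0.3, False: 0.7}}
P_U = {True: {True: 0.9, False: 0.1},                 # P(U_t | R_t)
       False: {True: 0.2, False: 0.8}}

def joint(rains, umbrellas):
    """P(X_0:t, E_1:t) = P(X_0) * prod_i P(X_i | X_i-1) * P(E_i | X_i)."""
    p = P_R0[rains[0]]
    for i in range(1, len(rains)):
        # umbrellas[i - 1] is the evidence at time i (evidence starts at t = 1).
        p *= P_R[rains[i - 1]][rains[i]] * P_U[rains[i]][umbrellas[i - 1]]
    return p

# P(R_0 = true, R_1 = true, R_2 = true, U_1 = true, U_2 = true)
print(joint([True, True, True], [True, True]))   # 0.5 * (0.7*0.9) * (0.7*0.9)
```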
The structure in Figure 15.2 is a first-order Markov process—the probability of rain is assumed to depend only on whether it rained the previous day. Whether such an assumption is reasonable depends
on the domain itself. The first-order Markov assumption says that the state variables contain all the information needed to characterize the probability distribution for the next time slice.
Sometimes the assumption is exactly true—for example, if a particle is executing a random walk along the x-axis, changing its position by ±1 at each time step, then using the x-coordinate as the
state gives a first-order Markov process. Sometimes the assumption is only approximate, as in the case of predicting rain only on the basis of whether it rained the previous day. There are two ways
to improve the accuracy of the approximation:
1. Increasing the order of the Markov process model. For example, we could make a second-order model by adding Rain~t−2~ as a parent of Rain~t~, which might give slightly more accurate predictions.
For example, in Palo Alto, California, it very rarely rains more than two days in a row.
2. Increasing the set of state variables. For example, we could add Season~t~ to allow us to incorporate historical records of rainy seasons, or we could add Temperature~t~, Humidity~t~ and
Pressure~t~ (perhaps at a range of locations) to allow us to use a physical model of rainy conditions.
Exercise 15.1 asks you to show that the first solution—increasing the order—can always be reformulated as an increase in the set of state variables, keeping the order fixed. Notice that adding state
variables might improve the system’s predictive power but also increases the prediction requirements: we now have to predict the new variables as well. Thus, we are looking for a “self-sufficient”
set of variables, which really means that we have to understand the “physics” of the process being modeled. The requirement for accurate modeling of the process is obviously lessened if we can add
new sensors (e.g., measurements of temperature and pressure) that provide information directly about the new state variables.
Consider, for example, the problem of tracking a robot wandering randomly on the X–Y plane. One might propose that the position and velocity are a sufficient set of state variables: one can simply
use Newton’s laws to calculate the new position, and the velocity may change unpredictably. If the robot is battery-powered, however, then battery exhaustion would tend to have a systematic effect on
the change in velocity. Because this in turn depends on how much power was used by all previous maneuvers, the Markov property is violated. We can restore the Markov property by including the charge
level Battery~t~ as one of the state variables that make up X~t~. This helps in predicting the motion of the robot, but in turn requires a model for predicting Battery~t~ from Battery~t−1~ and the
velocity. In some cases, that can be done reliably, but more often we find that error accumulates over time. In that case, accuracy can be improved by adding a new sensor for the battery level.
Having set up the structure of a generic temporal model, we can formulate the basic inference tasks that must be solved:
• Filtering: This is the task of computing the belief state—the posterior distribution over the most recent state—given all evidence to date. Filtering^2^ is also called state estimation. In our
example, we wish to compute P(X~t~ | e~1:t~). In the umbrella example, this would mean computing the probability of rain today, given all the observations of the umbrella carrier made so far.
Filtering is what a rational agent does to keep track of the current state so that rational decisions can be made. It turns out that an almost identical calculation provides the likelihood of the
evidence sequence, P (e~1:t~).
• Prediction: This is the task of computing the posterior distribution over the future state, given all evidence to date. That is, we wish to compute P(X~t+k~ | e~1:t~) for some k > 0. In the
umbrella example, this might mean computing the probability of rain three days from now, given all the observations to date. Prediction is useful for evaluating possible courses of action based
on their expected outcomes.
2 The term “filtering” refers to the roots of this problem in early work on signal processing, where the problem is to filter out the noise in a signal by estimating its underlying properties.
• Smoothing: This is the task of computing the posterior distribution over a past state, given all evidence up to the present. That is, we wish to compute P(X~k~ | e~1:t~) for some k such that 0 ≤
k < t. In the umbrella example, it might mean computing the probability that it rained last Wednesday, given all the observations of the umbrella carrier made up to today. Smoothing provides a
better estimate of the state than was available at the time, because it incorporates more evidence.^3^
• Most likely explanation: Given a sequence of observations, we might wish to find the sequence of states that is most likely to have generated those observations. That is, we wish to compute
argmaxx~1:t~ P(x~1:t~ | e~1:t~). For example, if the umbrella appears on each of the first three days and is absent on the fourth, then the most likely explanation is that it rained on the first
three days and did not rain on the fourth. Algorithms for this task are useful in many applications, including speech recognition—where the aim is to find the most likely sequence of words, given
a series of sounds—and the reconstruction of bit strings transmitted over a noisy channel.
In addition to these inference tasks, we also have
• Learning: The transition and sensor models, if not yet known, can be learned from observations. Just as with static Bayesian networks, dynamic Bayes net learning can be done as a by-product of
inference. Inference provides an estimate of what transitions actually occurred and of what states generated the sensor readings, and these estimates can be used to update the models. The updated
models provide new estimates, and the process iterates to convergence. The overall process is an instance of the expectation-maximization or EM algorithm. (See Section 20.3.)
Note that learning requires smoothing, rather than filtering, because smoothing provides better estimates of the states of the process. Learning with filtering can fail to converge correctly;
consider, for example, the problem of learning to solve murders: unless you are an eyewitness, smoothing is always required to infer what happened at the murder scene from the observable variables.
The remainder of this section describes generic algorithms for the four inference tasks, independent of the particular kind of model employed. Improvements specific to each model are described in
subsequent sections.
Filtering and prediction
As we pointed out in Section 7.7.3, a useful filtering algorithm needs to maintain a current state estimate and update it, rather than going back over the entire history of percepts for each update.
(Otherwise, the cost of each update increases as time goes by.) In other words, given the result of filtering up to time t, the agent needs to compute the result for t + 1 from the new evidence e~t+1~:
P(X~t+1~ | e~1:t+1~) = f(e~t+1~, P(X~t~ | e~1:t~)) ,
for some function f . This process is called recursive estimation. We can view the calculation
3 In particular, when tracking a moving object with inaccurate position observations, smoothing gives a smoother estimated trajectory than filtering—hence the name.
as being composed of two parts: first, the current state distribution is projected forward from t to t+1; then it is updated using the new evidence e~t+1~. This two-part process emerges quite simply
when the formula is rearranged:
P(X~t+1~ | e~1:t+1~) = P(X~t+1~ | e~1:t~, e~t+1~) (dividing up the evidence)
= α P(e~t+1~ |X~t+1~, e~1:t~) P(X~t+1~ | e~1:t~) (using Bayes’ rule)
= α P(e~t+1~ |X~t+1~) P(X~t+1~ | e~1:t~) (by the sensor Markov assumption). (15.4)
Here and throughout this chapter, α is a normalizing constant used to make probabilities sum up to 1. The second term, P(X~t+1~ | e~1:t~) represents a one-step prediction of the next state, and the
first term updates this with the new evidence; notice that P(e~t+1~ |X~t+1~) is obtainable directly from the sensor model. Now we obtain the one-step prediction for the next state by conditioning on
the current state X~t~:
P(X~t+1~ | e~1:t+1~) = α P(e~t+1~ |X~t+1~) ∑~x~t~~ P(X~t+1~ | x~t~) P(x~t~ | e~1:t~) . (15.5)
Within the summation, the first factor comes from the transition model and the second comes from the current state distribution. Hence, we have the desired recursive formulation. We can think of the
filtered estimate P(x~t~ | e~1:t~) as a “message” f~1:t~ that is propagated forward along the sequence, modified by each transition and updated by each new observation. The process is given by
f~1:t+1~ = α FORWARD(f~1:t~, e~t+1~) ,
where FORWARD implements the update described in Equation (15.5) and the process begins with f~1:0~ = P(X~0~). When all the state variables are discrete, the time for each update is constant (i.e.,
independent of t), and the space required is also constant. (The constants depend, of course, on the size of the state space and the specific type of the temporal model in question.) The time and
space requirements for updating must be constant if an agent with limited memory is to keep track of the current state distribution over an unbounded sequence of observations.
Let us illustrate the filtering process for two steps in the basic umbrella example (Figure 15.2.) That is, we will compute P(R~2~ |u~1:2~) as follows:
• On day 0, we have no observations, only the security guard’s prior beliefs; let’s assume these consist of P(R~0~) = 〈0.5, 0.5〉.
• On day 1, the umbrella appears, so U~1~ = true . The prediction from t = 0 to t = 1 is
P(R~1~) = ∑~r~0~~P(R~1~ | r~0~)P (r~0~) = 〈0.7, 0.3〉× 0.5 + 〈0.3, 0.7〉× 0.5 = 〈0.5, 0.5〉 .
Then the update step simply multiplies by the probability of the evidence for t = 1 and normalizes, as shown in Equation (15.4):
P(R~1~ | U~1~) = α P(u~1~ |R~1~)P(R~1~) = α 〈0.9, 0.2〉〈0.5, 0.5〉
= α 〈0.45, 0.1〉 ≈ 〈0.818, 0.182〉 .
• On day 2, the umbrella appears, so U~2~ = true . The prediction from t = 1 to t = 2 is
P(R~2~ | u~1~) = ∑~r~1~~ P(R~2~ | r~1~)P(r~1~ |u~1~) = 〈0.7, 0.3〉× 0.818 + 〈0.3, 0.7〉× 0.182 ≈ 〈0.627, 0.373〉 ,
and updating it with the evidence for t = 2 gives
P(R~2~ | u~1~, u~2~) = α P(u~2~ |R~2~)P(R~2~ |u~1~) = α 〈0.9, 0.2〉〈0.627, 0.373〉 = α 〈0.565, 0.075〉 ≈ 〈0.883, 0.117〉 .
Intuitively, the probability of rain increases from day 1 to day 2 because rain persists. Exercise 15.2(a) asks you to investigate this tendency further.
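The two-day calculation above is easy to mechanize. The sketch below applies the predict-and-update cycle of Equations (15.4) and (15.5) directly; the function names are ours, and the run reproduces the filtered estimates 〈0.818, 0.182〉 and 〈0.883, 0.117〉.

```python
# Sketch: recursive filtering for the umbrella world (Equations 15.4 and 15.5).
# State distributions are tuples ordered as (P(rain), P(not rain)).

T = ((0.7, 0.3),   # P(R_t | R_{t-1} = rain)
     (0.3, 0.7))   # P(R_t | R_{t-1} = no rain)
LIKELIHOOD = {True: (0.9, 0.2),    # P(U_t = true  | R_t = rain / no rain)
              False: (0.1, 0.8)}   # P(U_t = false | R_t = rain / no rain)


def normalize(v):
    s = sum(v)
    return tuple(x / s for x in v)


def forward(f, umbrella):
    """One step of Equation (15.5): predict, then weight by the evidence and normalize."""
    predicted = tuple(sum(T[i][j] * f[i] for i in range(2)) for j in range(2))
    return normalize(tuple(LIKELIHOOD[umbrella][j] * predicted[j] for j in range(2)))


if __name__ == "__main__":
    f = (0.5, 0.5)                      # P(R_0)
    for t, u in enumerate([True, True], start=1):
        f = forward(f, u)
        print(f"P(R_{t} | u_1:{t}) ≈ {f}")   # ≈ (0.818, 0.182), then (0.883, 0.117)
```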
The task of prediction can be seen simply as filtering without the addition of new evidence. In fact, the filtering process already incorporates a one-step prediction, and it is easy to derive the
following recursive computation for predicting the state at t + k + 1 from a prediction for t + k:
P(X~t+k+1~ | e~1:t~) = ∑~x~t+k~~ P(X~t+k+1~ | x~t+k~) P(x~t+k~ | e~1:t~) . (15.6)
Naturally, this computation involves only the transition model and not the sensor model. It is interesting to consider what happens as we try to predict further and further into the future. As
Exercise 15.2(b) shows, the predicted distribution for rain converges to a fixed point 〈0.5, 0.5〉, after which it remains constant for all time. This is the stationary distribution of the Markov
process defined by the transition model. (See also page 537.) A great deal is known about the properties of such distributions and about the mixing time—roughly, the time taken to reach the fixed
point. In practical terms, this dooms to failure any attempt to predict the actual state for a number of steps that is more than a small fraction of the mixing time, unless the stationary
distribution itself is strongly peaked in a small area of the state space. The more uncertainty there is in the transition model, the shorter will be the mixing time and the more the future is obscured.
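The convergence to the stationary distribution can be checked numerically by iterating the prediction step of Equation (15.6) with no new evidence; a brief sketch (variable names ours) follows.

```python
# Sketch: repeated one-step prediction (Equation 15.6) with no new evidence.
# The predicted rain distribution converges to the stationary distribution <0.5, 0.5>.

T = ((0.7, 0.3), (0.3, 0.7))     # P(R_{t+1} | R_t), rows indexed by the current state


def predict(p):
    return tuple(sum(T[i][j] * p[i] for i in range(2)) for j in range(2))


p = (0.883, 0.117)               # filtered estimate after two umbrella observations
for k in range(1, 11):
    p = predict(p)
    print(f"k = {k:2d}  P(rain) = {p[0]:.4f}")   # approaches 0.5 geometrically (factor 0.4)
```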
In addition to filtering and prediction, we can use a forward recursion to compute the likelihood of the evidence sequence, P (e~1:t~). This is a useful quantity if we want to compare different
temporal models that might have produced the same evidence sequence (e.g., two different models for the persistence of rain). For this recursion, we use a likelihood message ℓ~1:t~(X~t~) = P(X~t~, e~1:t~). It is a simple exercise to show that the message calculation is identical to that for filtering:
ℓ~1:t+1~ = FORWARD(ℓ~1:t~, e~t+1~) . (15.7)
Having computed ℓ~1:t~, we obtain the actual likelihood by summing out X~t~: L~1:t~ = P(e~1:t~) = ∑~x~t~~ ℓ~1:t~(x~t~).
Notice that the likelihood message represents the probabilities of longer and longer evidence sequences as time goes by and so becomes numerically smaller and smaller, leading to underflow problems
with floating-point arithmetic. This is an important problem in practice, but we shall not go into solutions here.
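As a minimal illustration of the likelihood recursion, the sketch below propagates the unnormalized message ℓ~1:t~ and accumulates the logarithm of the per-step normalization constants, which is one common way of avoiding the underflow just mentioned; the function name and the use of log space are our choices, not part of the algorithm as stated above.

```python
import math

# Sketch: log-likelihood of an evidence sequence via the unnormalized forward
# message l_{1:t}(X_t) = P(X_t, e_{1:t}); summing it out gives P(e_{1:t}).

T = ((0.7, 0.3), (0.3, 0.7))
LIKELIHOOD = {True: (0.9, 0.2), False: (0.1, 0.8)}


def evidence_log_likelihood(observations, prior=(0.5, 0.5)):
    ell = list(prior)
    log_p = 0.0
    for u in observations:
        predicted = [sum(T[i][j] * ell[i] for i in range(2)) for j in range(2)]
        ell = [LIKELIHOOD[u][j] * predicted[j] for j in range(2)]
        # Renormalize each step and accumulate the log of the scale factor,
        # so the message itself never underflows.
        scale = sum(ell)
        log_p += math.log(scale)
        ell = [x / scale for x in ell]
    return log_p


print(evidence_log_likelihood([True, True]))   # log P(u_1, u_2) ≈ log(0.3515)
```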
Smoothing
As we said earlier, smoothing is the process of computing the distribution over past states given evidence up to the present; that is, P(X~k~ | e~1:t~) for 0 ≤ k < t. (See Figure 15.3.) In
anticipation of another recursive message-passing approach, we can split the computation into two parts—the evidence up to k and the evidence from k + 1 to t,
P(X~k~ | e~1:t~) = P(X~k~ | e~1:k~, e~k+1:t~)
= α P(X~k~ | e~1:k~) P(e~k+1:t~ |X~k~, e~1:k~) (using Bayes’ rule)
= α P(X~k~ | e~1:k~) P(e~k+1:t~ |X~k~) (using conditional independence)
= α f~1:k~ × b~k+1:t~ , (15.8)
where “×” represents pointwise multiplication of vectors. Here we have defined a “backward” message b~k+1:t~ = P(e~k+1:t~ |X~k~), analogous to the forward message f~1:k~. The forward message f~1:k~
can be computed by filtering forward from 1 to k, as given by Equation (15.5). It turns out that the backward message b~k+1:t~ can be computed by a recursive process that runs backward from t:
P(e~k+1:t~ |X~k~) = ∑~x~k+1~~ P(e~k+1:t~ | x~k+1~, X~k~) P(x~k+1~ |X~k~) (conditioning on X~k+1~)
= ∑~x~k+1~~ P(e~k+1:t~ | x~k+1~) P(x~k+1~ |X~k~) (by conditional independence)
= ∑~x~k+1~~ P(e~k+1~, e~k+2:t~ | x~k+1~) P(x~k+1~ |X~k~)
= ∑~x~k+1~~ P(e~k+1~ | x~k+1~) P(e~k+2:t~ | x~k+1~) P(x~k+1~ |X~k~) , (15.9)
where the last step follows by the conditional independence of e~k+1~ and e~k+2:t~, given x~k+1~. Of the three factors in this summation, the first and third are obtained directly from the model, and
the second is the “recursive call.” Using the message notation, we have
b~k+1:t~ = BACKWARD(b~k+2:t~, e~k+1~) ,
where BACKWARD implements the update described in Equation (15.9). As with the forward recursion, the time and space needed for each update are constant and thus independent of t.
We can now see that the two terms in Equation (15.8) can both be computed by recursions through time, one running forward from 1 to k and using the filtering equation (15.5) and the other running
backward from t to k + 1 and using Equation (15.9). Note that the backward phase is initialized with b~t+1:t~ = P(e~t+1:t~ |X~t~) = 1, where 1 is a vector of 1s. (Because e~t+1:t~ is an empty
sequence, the probability of observing it is 1.)
Let us now apply this algorithm to the umbrella example, computing the smoothed estimate for the probability of rain at time k = 1, given the umbrella observations on days 1 and 2. From Equation
(15.8), this is given by
P(R~1~ |u~1~, u~2~) = α P(R~1~ | u~1~) P(u~2~ |R~1~) . (15.10)
The first term we already know to be 〈0.818, 0.182〉, from the forward filtering process described earlier. The second term can be computed by applying the backward recursion in Equation (15.9):
P(u~2~ |R~1~) = ∑~r~2~~ P(u~2~ | r~2~) P( | r~2~) P(r~2~ |R~1~) = (0.9 × 1 × 〈0.7, 0.3〉) + (0.2 × 1 × 〈0.3, 0.7〉) = 〈0.69, 0.41〉 ,
where P( | r~2~) = 1 because the evidence sequence beyond t = 2 is empty.
Plugging this into Equation (15.10), we find that the smoothed estimate for rain on day 1 is
P(R~1~ | u~1~, u~2~) = α 〈0.818, 0.182〉× 〈0.69, 0.41〉 ≈ 〈0.883, 0.117〉 .
Thus, the smoothed estimate for rain on day 1 is higher than the filtered estimate (0.818) in this case. This is because the umbrella on day 2 makes it more likely to have rained on day 2; in turn,
because rain tends to persist, that makes it more likely to have rained on day 1.
Both the forward and backward recursions take a constant amount of time per step; hence, the time complexity of smoothing with respect to evidence e~1:t~ is O(t). This is the complexity for smoothing
at a particular time step k. If we want to smooth the whole sequence, one obvious method is simply to run the whole smoothing process once for each time step to be smoothed. This results in a time
complexity of O(t^2^). A better approach uses a simple application of dynamic programming to reduce the complexity to O(t). A clue appears in the preceding analysis of the umbrella example, where we
were able to reuse the results of the forward-filtering phase. The key to the linear-time algorithm is to record the results of forward filtering over the whole sequence. Then we run the backward
recursion from t down to 1, computing the smoothed estimate at each step k from the computed backward message b~k+1:t~ and the stored forward message f~1:k~. The algorithm, aptly called the
forward–backward algorithm, is shown in Figure 15.4.
The alert reader will have spotted that the Bayesian network structure shown in Figure 15.3 is a polytree as defined on page 528. This means that a straightforward application of the clustering
algorithm also yields a linear-time algorithm that computes smoothed estimates for the entire sequence. It is now understood that the forward–backward algorithm is in fact a special case of the
polytree propagation algorithm used with clustering methods (although the two were developed independently).
The forward–backward algorithm forms the computational backbone for many applications that deal with sequences of noisy observations. As described so far, it has two practical drawbacks. The first is
that its space complexity can be too high when the state space is large and the sequences are long. It uses O(|f|t) space where |f| is the size of the representation of the forward message. The space
requirement can be reduced to O(|f| log t) with a concomitant increase in the time complexity by a factor of log t, as shown in Exercise 15.3. In some cases (see Section 15.3), a constant-space
algorithm can be used.
The second drawback of the basic algorithm is that it needs to be modified to work in an online setting where smoothed estimates must be computed for earlier time slices as new observations are
continuously added to the end of the sequence. The most common requirement is for fixed-lag smoothing, which requires computing the smoothed estimate P(X~t−d~ | e~1:t~) for fixed d. That is,
smoothing is done for the time slice d steps behind the current time t; as t increases, the smoothing has to keep up. Obviously, we can run the forward–backward algorithm over the d-step “window” as
each new observation is added, but this seems inefficient. In Section 15.3, we will see that fixed-lag smoothing can, in some cases, be done in constant time per update, independent of the lag d.
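Before turning to the most-likely-sequence problem, here is a compact Python rendering of the forward–backward algorithm of Figure 15.4, specialized to the umbrella world; the function names are ours. Run on the two-day umbrella sequence, it reproduces the smoothed estimate 〈0.883, 0.117〉 for day 1.

```python
# Sketch: forward-backward smoothing (Figure 15.4) for the umbrella world.
T = ((0.7, 0.3), (0.3, 0.7))                         # P(X_t = j | X_{t-1} = i)
LIKELIHOOD = {True: (0.9, 0.2), False: (0.1, 0.8)}   # P(e_t | X_t = rain / no rain)


def normalize(v):
    s = sum(v)
    return [x / s for x in v]


def forward(f, e):
    predicted = [sum(T[i][j] * f[i] for i in range(2)) for j in range(2)]
    return normalize([LIKELIHOOD[e][j] * predicted[j] for j in range(2)])


def backward(b, e):
    # Equation (15.9): b_{k+1:t}(i) = sum_j P(e_{k+1}|j) b_{k+2:t}(j) P(j|i)
    return [sum(LIKELIHOOD[e][j] * b[j] * T[i][j] for j in range(2)) for i in range(2)]


def forward_backward(evidence, prior=(0.5, 0.5)):
    t = len(evidence)
    fv = [list(prior)]                                # fv[k] = P(X_k | e_1:k)
    for e in evidence:
        fv.append(forward(fv[-1], e))
    sv, b = [None] * (t + 1), [1.0, 1.0]              # b_{t+1:t} = 1
    for k in range(t, 0, -1):
        sv[k] = normalize([f * bb for f, bb in zip(fv[k], b)])
        b = backward(b, evidence[k - 1])
    return sv[1:]


print(forward_backward([True, True]))
# [P(R_1 | u_1:2), P(R_2 | u_1:2)] ≈ [[0.883, 0.117], [0.883, 0.117]]
```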
Finding the most likely sequence
Suppose that [true, true, false , true , true] is the umbrella sequence for the security guard’s first five days on the job. What is the weather sequence most likely to explain this? Does the absence
of the umbrella on day 3 mean that it wasn’t raining, or did the director forget to bring it? If it didn’t rain on day 3, perhaps (because weather tends to persist) it didn’t rain on day 4 either,
but the director brought the umbrella just in case. In all, there are 2^5^ = 32 possible weather sequences we could pick. Is there a way to find the most likely one, short of enumerating all of them?
We could try this linear-time procedure: use smoothing to find the posterior distribution for the weather at each time step; then construct the sequence, using at each step the weather that is most
likely according to the posterior. Such an approach should set off alarm bells in the reader’s head, because the posterior distributions computed by smoothing are distributions over single time steps, whereas to find the most likely sequence we must consider joint probabilities over all the time steps. The results can in fact be quite different. (See Exercise 15.4.)

function FORWARD-BACKWARD(ev, prior) returns a vector of probability distributions
  inputs: ev, a vector of evidence values for steps 1, . . . , t
          prior, the prior distribution on the initial state, P(X~0~)
  local variables: fv, a vector of forward messages for steps 0, . . . , t
                   b, a representation of the backward message, initially all 1s
                   sv, a vector of smoothed estimates for steps 1, . . . , t

  fv[0] ← prior
  for i = 1 to t do
      fv[i] ← FORWARD(fv[i − 1], ev[i])
  for i = t downto 1 do
      sv[i] ← NORMALIZE(fv[i] × b)
      b ← BACKWARD(b, ev[i])
  return sv

Figure 15.4 The forward–backward algorithm for smoothing: computing posterior probabilities of a sequence of states given a sequence of observations. The FORWARD and BACKWARD operators are defined by Equations (15.5) and (15.9), respectively.
There is a linear-time algorithm for finding the most likely sequence, but it requires a little more thought. It relies on the same Markov property that yielded efficient algorithms for filtering and
smoothing. The easiest way to think about the problem is to view each sequence as a path through a graph whose nodes are the possible states at each time step. Such a graph is shown for the umbrella
world in Figure 15.5(a). Now consider the task of finding the most likely path through this graph, where the likelihood of any path is the product of the transition probabilities along the path and
the probabilities of the given observations at each state. Let’s focus in particular on paths that reach the state Rain~5~ = true . Because of the Markov property, it follows that the most likely path
to the state Rain~5~ = true consists of the most likely path to some state at time 4 followed by a transition to Rain~5~ = true; and the state at time 4 that will become part of the path to Rain~5~ = true
is whichever maximizes the likelihood of that path. In other words, there is a recursive relationship between most likely paths to each state X~t+1~ and most likely paths to each state x~t~. We can
write this relationship as an equation connecting the probabilities of the paths:
max~x~1~...x~t~~ P(x~1~, . . . , x~t~, X~t+1~ | e~1:t+1~) = α P(e~t+1~ |X~t+1~) max~x~t~~ ( P(X~t+1~ | x~t~) max~x~1~...x~t−1~~ P(x~1~, . . . , x~t−1~, x~t~ | e~1:t~) ) . (15.11)
Equation (15.11) is identical to the filtering equation (15.5) except that
1. The forward message f~1:t~ = P(X~t~ | e~1:t~) is replaced by the message
m~1:t~ = max~x~1~...x~t−1~~ P(x~1~, . . . , x~t−1~, X~t~ | e~1:t~) ,
that is, the probabilities of the most likely path to each state x~t~; and
2. the summation over x~t~ in Equation (15.5) is replaced by the maximization over x~t~ in Equation (15.11).
Thus, the algorithm for computing the most likely sequence is similar to filtering: it runs forward along the sequence, computing the m message at each time step, using Equation (15.11). The progress
of this computation is shown in Figure 15.5(b). At the end, it will have the probability for the most likely sequence reaching each of the final states. One can thus easily select the most likely
sequence overall (the states outlined in bold). In order to identify the actual sequence, as opposed to just computing its probability, the algorithm will also need to record, for each state, the
best state that leads to it; these are indicated by the bold arrows in Figure 15.5(b). The optimal sequence is identified by following these bold arrows backwards from the best final state.
The algorithm we have just described is called the Viterbi algorithm, after its inventor. Like the filtering algorithm, its time complexity is linear in t, the length of the sequence. Unlike
filtering, which uses constant space, its space requirement is also linear in t. This is because the Viterbi algorithm needs to keep the pointers that identify the best sequence leading to each state.
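A short Python sketch of the Viterbi algorithm for the umbrella model follows; it maintains the m message of Equation (15.11) together with back-pointers and then follows the pointers backwards from the best final state. The names and list encodings are ours.

```python
# Sketch: the Viterbi algorithm for the umbrella world.
T = ((0.7, 0.3), (0.3, 0.7))
LIKELIHOOD = {True: (0.9, 0.2), False: (0.1, 0.8)}
STATES = ("rain", "no rain")


def viterbi(evidence, prior=(0.5, 0.5)):
    # m[j] holds the probability of the most likely path ending in state j.
    m = [LIKELIHOOD[evidence[0]][j] * sum(T[i][j] * prior[i] for i in range(2))
         for j in range(2)]
    backpointers = []
    for e in evidence[1:]:
        best = [max(range(2), key=lambda i: m[i] * T[i][j]) for j in range(2)]
        m = [LIKELIHOOD[e][j] * m[best[j]] * T[best[j]][j] for j in range(2)]
        backpointers.append(best)
    # Follow the pointers back from the best final state.
    j = max(range(2), key=lambda i: m[i])
    path = [j]
    for best in reversed(backpointers):
        j = best[j]
        path.append(j)
    return [STATES[j] for j in reversed(path)]


print(viterbi([True, True, False, True, True]))
# -> ['rain', 'rain', 'no rain', 'rain', 'rain'] under this model
```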
The preceding section developed algorithms for temporal probabilistic reasoning using a general framework that was independent of the specific form of the transition and sensor models. In this and
the next two sections, we discuss more concrete models and applications that illustrate the power of the basic algorithms and in some cases allow further improvements.
We begin with the hidden Markov model, or HMM. An HMM is a temporal probabilistic model in which the state of the process is described by a single discrete random variable. The possible values of the
variable are the possible states of the world. The umbrella example described in the preceding section is therefore an HMM, since it has just one state variable: Rain~t~. What happens if you have a
model with two or more state variables? You can still fit it into the HMM framework by combining the variables into a single “megavariable” whose values are all possible tuples of values of the
individual state variables. We will see that the restricted structure of HMMs allows for a simple and elegant matrix implementation of all the basic algorithms.^4^
4 The reader unfamiliar with basic operations on vectors and matrices might wish to consult Appendix A before proceeding with this section.
Simplified matrix algorithms
With a single, discrete state variable X~t~, we can give concrete form to the representations of the transition model, the sensor model, and the forward and backward messages. Let the state variable
X~t~ have values denoted by integers 1, . . . , S, where S is the number of possible states. The transition model P(X~t~ |X~t−1~) becomes an S×S matrix T, where
T~ij~ = P (X~t~ = j |X~t−1~ = i) .
That is, T~ij~ is the probability of a transition from state i to state j. For example, the transition matrix for the umbrella world is
T = P(X~t~ |X~t−1~) = ( 0.7 0.3
                        0.3 0.7 ) .
We also put the sensor model in matrix form: because the value of the evidence variable E~t~ is known at time t (call it e~t~), we need only the values P(e~t~ |X~t~ = i), which we place along the diagonal of an S×S matrix O~t~ whose other entries are zero.
Now, if we use column vectors to represent the forward and backward messages, all the computations become simple matrix–vector operations. The forward equation (15.5) becomes
f~1:t+1~ = α O~t+1~T^T^f~1:t~ (15.12)
and the backward equation (15.9) becomes
b~k+1:t~ = TO~k+1~b~k+2:t~ . (15.13)
From these equations, we can see that the time complexity of the forward–backward algorithm (Figure 15.4) applied to a sequence of length t is O(S^2^t), because each step requires multiplying an
S-element vector by an S ×S matrix. The space requirement is O(St), because the forward pass stores t vectors of size S.
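In this matrix form the umbrella computations reduce to a few lines of numpy; the sketch below applies Equations (15.12) and (15.13) directly, with the variable names being our own.

```python
import numpy as np

# Sketch: HMM filtering and backward messages in matrix form (Equations 15.12, 15.13).
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])                     # T[i, j] = P(X_t = j | X_{t-1} = i)
O_umbrella = np.diag([0.9, 0.2])               # P(U_t = true | X_t = i) on the diagonal
O_no_umbrella = np.diag([0.1, 0.8])

f = np.array([0.5, 0.5])                       # f_{1:0} = P(X_0)
for O in (O_umbrella, O_umbrella):             # two umbrella observations
    f = O @ T.T @ f
    f /= f.sum()                               # the alpha normalization
print(f)                                       # ≈ [0.883, 0.117]

b = np.ones(2)                                 # b_{t+1:t} = 1
b = T @ O_umbrella @ b                         # Equation (15.13): one backward step
print(b)                                       # ≈ [0.69, 0.41]
```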
Besides providing an elegant description of the filtering and smoothing algorithms for HMMs, the matrix formulation reveals opportunities for improved algorithms. The first is a simple variation on
the forward–backward algorithm that allows smoothing to be carried out in constant space, independently of the length of the sequence. The idea is that smoothing for any particular time slice k
requires the simultaneous presence of both the forward and backward messages, f~1:k~ and b~k+1:t~, according to Equation (15.8). The forward–backward algorithm achieves this by storing the fs
computed on the forward pass so that they are available during the backward pass. Another way to achieve this is with a single pass that propagates both f and b in the same direction. For example,
the “forward” message f can be propagated backward if we manipulate Equation (15.12) to work in the other direction:
f~1:t~ = α′(T^T^)^−1^O^−1^~t+1~ f~1:t+1~ .
The modified smoothing algorithm works by first running the standard forward pass to compute f~1:t~ (forgetting all the intermediate results) and then running the backward pass for both
function FIXED-LAG-SMOOTHING(e~t~, hmm, d) returns a distribution over X~t−d~
  inputs: e~t~, the current evidence for time step t
          hmm, a hidden Markov model with S×S transition matrix T
          d, the length of the lag for smoothing
  persistent: t, the current time, initially 1
              f, the forward message P(X~t~ | e~1:t~), initially hmm.PRIOR
              B, the d-step backward transformation matrix, initially the identity matrix
              e~t−d:t~, double-ended list of evidence from t−d to t, initially empty
  local variables: O~t−d~, O~t~, diagonal matrices containing the sensor model information

  add e~t~ to the end of e~t−d:t~
  O~t~ ← diagonal matrix containing P(e~t~ |X~t~)
  if t > d then
      f ← FORWARD(f, e~t~)
      remove e~t−d−1~ from the beginning of e~t−d:t~
      O~t−d~ ← diagonal matrix containing P(e~t−d~ |X~t−d~)
      B ← O^−1^~t−d~ T^−1^ B T O~t~
  else B ← B T O~t~
  t ← t + 1
  if t > d then return NORMALIZE(f × B1) else return null

Figure 15.6 An algorithm for smoothing with a fixed time lag of d steps, implemented as an online algorithm that outputs the new smoothed estimate given the observation for a new time step. Notice that the final output NORMALIZE(f × B1) is just α f × b, by Equation (15.14).
b and f together, using them to compute the smoothed estimate at each step. Since only one copy of each message is needed, the storage requirements are constant (i.e., independent of t, the length of
the sequence). There are two significant restrictions on this algorithm: it requires that the transition matrix be invertible and that the sensor model have no zeroes—that is, that every observation
be possible in every state.
A second area in which the matrix formulation reveals an improvement is in online smoothing with a fixed lag. The fact that smoothing can be done in constant space suggests that there should exist an
efficient recursive algorithm for online smoothing—that is, an algorithm whose time complexity is independent of the length of the lag. Let us suppose that the lag is d; that is, we are smoothing at
time slice t− d, where the current time is t. By Equation (15.8), we need to compute
α f~1:t−d~ × b~t−d+1:t~
for slice t − d. Then, when a new observation arrives, we need to compute
α f~1:t−d+1~ × b~t−d+2:t+1~
for slice t− d+1. How can this be done incrementally? First, we can compute f~1:t−d+1~ from f~1:t−d~, using the standard filtering process, Equation (15.5).
Computing the backward message incrementally is trickier, because there is no simple relationship between the old backward message b~t−d+1:t~ and the new backward message b~t−d+2:t+1~. Instead, we
will examine the relationship between the old backward message b~t−d+1:t~ and the backward message at the front of the sequence, b~t+1:t~. To do this, we apply Equation (15.13) d times to get
b~t−d+1:t~ = TO~t−d+1~ TO~t−d+2~ · · · TO~t~ b~t+1:t~ = B~t−d+1:t~ 1 , (15.14)
where the matrix B~t−d+1:t~ is the product of the sequence of T and O matrices. B can be thought of as a “transformation operator” that transforms a later backward message into an earlier one. A
similar equation holds for the new backward messages after the next observation arrives:
b~t−d+2:t+1~ = TO~t−d+2~ · · · TO~t+1~ b~t+2:t+1~ = B~t−d+2:t+1~ 1 . (15.15)
Examining the product expressions in Equations (15.14) and (15.15), we see that they have a simple relationship: to get the second product, “divide” the first product by the first element TO~t−d+1~,
and multiply by the new last element TO~t+1~. In matrix language, then, there is a simple relationship between the old and new B matrices:
B~t−d+2:t+1~ = O^−1^~t−d+1~ T^−1^ B~t−d+1:t~ T O~t+1~ . (15.16)
This equation provides an incremental update for the B matrix, which in turn (through Equation (15.15)) allows us to compute the new backward message b~t−d+2:t+1~. The complete algorithm, which
requires storing and updating f and B, is shown in Figure 15.6.
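The incremental update of Equation (15.16) is easy to verify numerically. The sketch below (variable names ours, sensor matrices randomly generated) checks that the incrementally updated B equals the product recomputed from scratch, under the stated requirements that T be invertible and the sensor entries be nonzero.

```python
import numpy as np

# Sketch: checking the incremental update (Equation 15.16) for the d-step
# backward transformation B_{t-d+1:t} = T O_{t-d+1} ... T O_t.
rng = np.random.default_rng(0)
T = np.array([[0.7, 0.3], [0.3, 0.7]])


def random_O():
    return np.diag(rng.uniform(0.1, 0.9, size=2))    # strictly positive sensor entries


d = 3
Os = [random_O() for _ in range(d + 1)]              # O_{t-d+1}, ..., O_t, O_{t+1}

B_old = np.linalg.multi_dot([M for O in Os[:d] for M in (T, O)])
# Incremental update: divide off the oldest factor, multiply on the newest.
B_new = np.linalg.inv(Os[0]) @ np.linalg.inv(T) @ B_old @ T @ Os[d]
B_direct = np.linalg.multi_dot([M for O in Os[1:] for M in (T, O)])
print(np.allclose(B_new, B_direct))                  # True
```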
On page 145, we introduced a simple form of the localization problem for the vacuum world. In that version, the robot had a single nondeterministic Move action and its sensors reported perfectly
whether or not obstacles lay immediately to the north, south, east, and west; the robot’s belief state was the set of possible locations it could be in.
Here we make the problem slightly more realistic by including a simple probability model for the robot’s motion and by allowing for noise in the sensors. The state variable X~t~
represents the location of the robot on the discrete grid; the domain of this variable is the set of empty squares {s~1~, . . . , s~n~}. Let NEIGHBORS(s) be the set of empty squares that are adjacent
to s and let N(s) be the size of that set. Then the transition model for the Move action says that the robot is equally likely to end up at any neighboring square:
P (X~t+1~ = j |X~t~ = i) = T~ij~ = (1/N(i) if j ∈ NEIGHBORS(i) else 0) .
We don’t know where the robot starts, so we will assume a uniform distribution over all the squares; that is, P (X~0~ = i)= 1/n. For the particular environment we consider (Figure 15.7), n = 42 and
the transition matrix T has 42× 42= 1764 entries.
The sensor variable E~t~ has 16 possible values, each a four-bit sequence giving the presence or absence of an obstacle in a particular compass direction. We will use the notation NS, for example, to mean that the north and south sensors report an obstacle and the east and west do not. Suppose that each sensor’s error rate is ε and that errors occur independently for the four sensor directions. In that case, the probability of getting all four bits right is (1−ε)^4^ and the probability of getting them all wrong is ε^4^. Furthermore, if d~it~ is the discrepancy—the number of bits that are different—between the true values for square i and the actual reading e~t~, then the probability that a robot in square i would receive a sensor reading e~t~ is
P (E~t~ = e~t~ |X~t~ = i) = (O~t~)~ii~ = (1−ε)^4−d~it~^ ε^d~it~^ .
For example, the probability that a square with obstacles to the north and south would produce a sensor reading NSE is (1− ε)^3^ε^1^.
Given the matrices T and O~t~, the robot can use Equation (15.12) to compute the posterior distribution over locations—that is, to work out where it is. Figure 15.7 shows the distributions P(X~1~ |
E~1~ = NSW ) and P(X~2~ |E~1~ =NSW,E~2~ = NS). This is the same maze we saw before in Figure 4.18 (page 146), but there we used logical filtering to find the locations that were possible, assuming
perfect sensing. Those same locations are still the most likely with noisy sensing, but now every location has some nonzero probability.
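The localization model is straightforward to assemble in code. The sketch below uses a small hypothetical 3×4 grid rather than the 42-square maze of Figure 15.7, but builds T and O~t~ exactly as defined above and filters two illustrative sensor readings.

```python
import numpy as np

# Sketch: HMM localization on a small grid (a hypothetical 3x4 maze, not the
# maze of Figure 15.7). Transition and sensor models follow the equations above;
# eps is the per-bit sensor error rate.
WALLS = {(1, 1)}                                   # interior obstacle (row, col)
ROWS, COLS = 3, 4
SQUARES = [(r, c) for r in range(ROWS) for c in range(COLS) if (r, c) not in WALLS]
INDEX = {s: i for i, s in enumerate(SQUARES)}
DIRS = [(-1, 0), (1, 0), (0, 1), (0, -1)]          # N, S, E, W


def neighbors(s):
    return [(s[0] + dr, s[1] + dc) for dr, dc in DIRS
            if (s[0] + dr, s[1] + dc) in INDEX]


n = len(SQUARES)
T = np.zeros((n, n))
for s in SQUARES:
    for nb in neighbors(s):
        T[INDEX[s], INDEX[nb]] = 1.0 / len(neighbors(s))


def true_reading(s):
    # Bit vector: obstacle (wall or grid edge) to the N, S, E, W of square s.
    return tuple(int((s[0] + dr, s[1] + dc) not in INDEX) for dr, dc in DIRS)


def O_matrix(reading, eps=0.2):
    diag = []
    for s in SQUARES:
        d = sum(a != b for a, b in zip(true_reading(s), reading))
        diag.append((1 - eps) ** (4 - d) * eps ** d)
    return np.diag(diag)


f = np.ones(n) / n                                 # uniform prior over locations
for reading in [(1, 1, 0, 1), (1, 1, 0, 0)]:       # e.g. NSW then NS reported
    f = O_matrix(reading) @ T.T @ f
    f /= f.sum()
print(max(zip(f, SQUARES)))                        # most probable square so far
```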
In addition to filtering to estimate its current location, the robot can use smoothing (Equation (15.13)) to work out where it was at any given past time—for example, where it began at time 0—and it
can use the Viterbi algorithm to work out the most likely path it has
taken to get where it is now. Figure 15.8 shows the localization error and Viterbi path accuracy for various values of the per-bit sensor error rate ε. Even when ε is 20%—which means that the overall
sensor reading is wrong 59% of the time—the robot is usually able to work out its location within two squares after 25 observations. This is because of the algorithm’s ability to integrate evidence
over time and to take into account the probabilistic constraints imposed on the location sequence by the transition model. When ε is 10%, the performance after a half-dozen observations is hard to
distinguish from the performance with perfect sensing. Exercise 15.7 asks you to explore how robust the HMM localization algorithm is to errors in the prior distribution P(X~0~) and in the transition
model itself. Broadly speaking, high levels of localization and path accuracy are maintained even in the face of substantial errors in the models used.
The state variable for the example we have considered in this section is a physical location in the world. Other problems can, of course, include other aspects of the world. Exercise 15.8 asks you to
consider a version of the vacuum robot that has the policy of going straight for as long as it can; only when it encounters an obstacle does it change to a new (randomly selected) heading. To model
this robot, each state in the model consists of a (location, heading) pair. For the environment in Figure 15.7, which has 42 empty squares, this leads to 168 states and a transition matrix with 168^2^
= 28,224 entries—still a manageable number. If we add the possibility of dirt in the squares, the number of states is multiplied by 2^42^ and the transition matrix ends up with more than 10^29^
entries—no longer a manageable number; Section 15.5 shows how to use dynamic Bayesian networks to model domains with many state variables. If we allow the robot to move continuously rather than in a
discrete grid, the number of states becomes infinite; the next section shows how to handle this case.
Imagine watching a small bird flying through dense jungle foliage at dusk: you glimpse brief, intermittent flashes of motion; you try hard to guess where the bird is and where it will appear next so
that you don’t lose it. Or imagine that you are a World War II radar operator peering at a faint, wandering blip that appears once every 10 seconds on the screen. Or, going back further still,
imagine you are Kepler trying to reconstruct the motions of the planets from a collection of highly inaccurate angular observations taken at irregular and imprecisely measured intervals. In all these
cases, you are doing filtering: estimating state variables (here, position and velocity) from noisy observations over time. If the variables were discrete, we could model the system with a hidden
Markov model. This section examines methods for handling continuous variables, using an algorithm called Kalman filtering, after one of its inventors, Rudolf E. Kalman. The bird’s flight might be
specified by six continuous variables at each time point; three for position (X~t~, Y~t~, Z~t~) and three for velocity (Ẋ~t~, Ẏ~t~, Ż~t~). We will need suitable conditional densities to represent the
transition and sensor models; as in Chapter 14, we will use linear Gaussian distributions. This means that the next state X~t+1~ must be a linear function of the current state X~t~, plus some
Gaussian noise, a condition that turns out to be quite reasonable in practice. Consider, for example, the X-coordinate of the bird, ignoring the other coordinates for now. Let the time interval
between observations be Δ, and assume constant velocity during the interval; then the position update is given by X~t+Δ~ = X~t~+Ẋ Δ. Adding Gaussian noise (to account for wind variation, etc.), we
obtain a linear Gaussian transition model:
P (X~t+Δ~ = x~t+Δ~ |X~t~ = x~t~, Ẋ~t~ = ẋ~t~) = N(x~t~ + ẋ~t~Δ, σ^2^)(x~t+Δ~) .
The Bayesian network structure for a system with position vector X~t~ and velocity Ẋ~t~ is shown in Figure 15.9. Note that this is a very specific form of linear Gaussian model; the general form will
be described later in this section and covers a vast array of applications beyond the simple motion examples of the first paragraph. The reader might wish to consult Appendix A for some of the
mathematical properties of Gaussian distributions; for our immediate purposes, the most important is that a multivariate Gaussian distribution for d variables is specified by a d-element mean μ and a
d× d covariance matrix Σ.
Updating Gaussian distributions
In Chapter 14 on page 521, we alluded to a key property of the linear Gaussian family of distributions: it remains closed under the standard Bayesian network operations. Here, we make this claim
precise in the context of filtering in a temporal probability model. The required properties correspond to the two-step filtering calculation in Equation (15.5):
1. If the current distribution P(X~t~ | e~1:t~) is Gaussian and the transition model P(X~t+1~ | x~t~) is linear Gaussian, then the one-step predicted distribution given by
P(X~t+1~ | e~1:t~) = ∫~xt~ P(X~t+1~ | x~t~)P (x~t~ | e~1:t~) dx~t~ (15.17)
is also a Gaussian distribution.
2. If the prediction P(X~t+1~ | e~1:t~) is Gaussian and the sensor model P(e~t+1~ |X~t+1~) is linear Gaussian, then, after conditioning on the new evidence, the updated distribution
P(X~t+1~ | e~1:t+1~) = α P(e~t+1~ |X~t+1~)P(X~t+1~ | e~1:t~) (15.18)
is also a Gaussian distribution.
Thus, the FORWARD operator for Kalman filtering takes a Gaussian forward message f~1:t~, specified by a mean μ~t~ and covariance matrix Σ~t~, and produces a new multivariate Gaussian forward message
f~1:t+1~, specified by a mean μ~t+1~ and covariance matrix Σ~t+1~. So, if we start with a Gaussian prior f~1:0~ = P(X~0~) = N(μ~0~, Σ~0~), filtering with a linear Gaussian model produces a Gaussian state
distribution for all time.
This seems to be a nice, elegant result, but why is it so important? The reason is that, except for a few special cases such as this, filtering with continuous or hybrid (discrete and continuous)
networks generates state distributions whose representation grows without bound over time. This statement is not easy to prove in general, but Exercise 15.10 shows what happens for a simple example.
A simple one-dimensional example
We have said that the FORWARD operator for the Kalman filter maps a Gaussian into a new Gaussian. This translates into computing a new mean and covariance matrix from the previous mean and covariance
matrix. Deriving the update rule in the general (multivariate) case requires rather a lot of linear algebra, so we will stick to a very simple univariate case for now and give the results for
the general case later. Even for the univariate case, the calculations are somewhat tedious, but we feel that they are worth seeing because the usefulness of the Kalman filter is tied so intimately to the
mathematical properties of Gaussian distributions.
The temporal model we consider describes a random walk of a single continuous state variable X~t~ with a noisy observation Z~t~. An example might be the “consumer confidence” index, which can be
modeled as undergoing a random Gaussian-distributed change each month and is measured by a random consumer survey that also introduces Gaussian sampling noise.
The prior distribution is assumed to be Gaussian with variance σ^2^~0~ :
P (x~0~) = α exp(−(x~0~ − μ~0~)^2^ / (2σ^2^~0~)) .
The transition model adds a Gaussian perturbation of constant variance σ^2^~x~ to the current state,
P (x~t+1~ | x~t~) = α exp(−(x~t+1~ − x~t~)^2^ / (2σ^2^~x~)) ,
and the sensor model assumes Gaussian noise with variance σ^2^~z~ :
P (z~t~ | x~t~) = α exp(−(z~t~ − x~t~)^2^ / (2σ^2^~z~)) .
Substituting these densities into the prediction step (Equation (15.17)) and the update step (Equation (15.18)) and completing the square in the exponent shows that the posterior over X~t+1~ is again Gaussian (Equation (15.19)).
Thus, after one update cycle, we have a new Gaussian distribution for the state variable.
From the Gaussian formula in Equation (15.19), we see that the new mean and standard deviation can be calculated from the old mean and standard deviation as follows:
μ~t+1~ = ((σ^2^~t~ + σ^2^~x~) z~t+1~ + σ^2^~z~ μ~t~) / (σ^2^~t~ + σ^2^~x~ + σ^2^~z~)
σ^2^~t+1~ = (σ^2^~t~ + σ^2^~x~) σ^2^~z~ / (σ^2^~t~ + σ^2^~x~ + σ^2^~z~) . (15.20)
Figure 15.10 shows one update cycle for particular values of the transition and sensor models. Equation (15.20) plays exactly the same role as the general filtering equation (15.5) or the HMM
filtering equation (15.12). Because of the special nature of Gaussian distributions, however, the equations have some interesting additional properties. First, we can interpret the calculation for
the new mean μ~t+1~ as simply a weighted mean of the new observation z~t+1~ and the old mean μ~t~. If the observation is unreliable, then σ^2^~z~ is large and we pay more attention to the old mean; if
the old mean is unreliable (σ^2^~t~ is large) or the process is highly unpredictable (σ^2^~x~ is large), then we pay more attention to the observation. Second, notice that the update for the variance
σ^2^~t+1~ is independent of the observation. We can therefore compute in advance what the sequence of variance values will be. Third, the sequence of variance values converges quickly to a fixed value
that depends only on σ^2^~x~ and σ^2^~z~ , thereby substantially simplifying the subsequent calculations. (See Exercise 15.12.)
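Equation (15.20) translates directly into code. In the sketch below, the model variances and the observation values are illustrative; the printed variances can be seen settling quickly toward their fixed point.

```python
# Sketch: the one-dimensional Kalman filter update of Equation (15.20).

def kalman_1d_step(mu, var, z, var_x, var_z):
    """One predict-and-update cycle for the random-walk model."""
    predicted_var = var + var_x                      # variance after the random-walk step
    new_mu = (predicted_var * z + var_z * mu) / (predicted_var + var_z)
    new_var = predicted_var * var_z / (predicted_var + var_z)
    return new_mu, new_var


mu, var = 0.0, 1.0                                   # Gaussian prior N(0, 1)
for z in [2.5, 2.1, 2.4, 2.2, 2.3]:                  # illustrative observations
    mu, var = kalman_1d_step(mu, var, z, var_x=0.1, var_z=0.5)
    print(f"mu = {mu:.3f}  var = {var:.3f}")         # var converges to a fixed point
```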
The general case
The preceding derivation illustrates the key property of Gaussian distributions that allows Kalman filtering to work: the fact that the exponent is a quadratic form. This is true not just for the
univariate case; the full multivariate Gaussian distribution has the form
N(μ,Σ)(x) = α exp(−½ (x − μ)^T^ Σ^−1^ (x − μ)) .
Multiplying out the terms in the exponent makes it clear that the exponent is also a quadratic function of the values xi in x. As in the univariate case, the filtering update preserves the Gaussian
nature of the state distribution.
Let us first define the general temporal model used with Kalman filtering. Both the transition model and the sensor model allow for a linear transformation with additive Gaussian noise. Thus, we have
P (X~t+1~ | x~t~) = N(Fx~t~,Σ~x~)(X~t+1~)
P (z~t~ | x~t~) = N(Hx~t~,Σ~z~)(z~t~) , (15.21)
where F and Σ~x~ are matrices describing the linear transition model and transition noise covariance, and H and Σ~z~ are the corresponding matrices for the sensor model. Now the update equations for the
mean and covariance, in their full, hairy horribleness, are
μ~t+1~ = Fμ~t~ + K~t+1~(z~t+1~ −HFμ~t~)
Σ~t+1~ = (I−K~t+1~H)(FΣ~t~F^T^+ Σ~x~) , (15.22)
where K~t+1~ = (FΣ~t~F^T^ + Σ~x~)H^T^ (H(FΣ~t~F^T^ + Σ~x~)H^T^ + Σ~z~)^−1^ is called the Kalman gain matrix. Believe it or not, these equations make some intuitive sense. For example, consider the update for
the mean state estimate μ. The term Fμ~t~ is the predicted state at t + 1, so HFμ~t~ is the predicted observation. Therefore, the term z~t+1~ − HFμ~t~ represents the error in the predicted observation.
This is multiplied by K~t+1~ to correct the predicted state; hence, K~t+1~ is a measure of how seriously to take the new observation relative to the prediction. As in Equation (15.20), we also have
the property that the variance update is independent of the observations. The sequence of values for Σ~t~ and K~t~ can therefore be computed offline, and the actual calculations required during
online tracking are quite modest.
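The general update of Equation (15.22) is equally compact in code. The sketch below applies it to a small illustrative position-velocity model (two state variables and a position-only sensor, rather than the four-variable tracker discussed next); all matrices and values are assumptions chosen for the example.

```python
import numpy as np

# Sketch: the general Kalman update of Equation (15.22), applied to a tiny
# position-velocity model (an illustrative 2-state system).
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity transition
H = np.array([[1.0, 0.0]])                 # we observe position only
Sigma_x = 0.01 * np.eye(2)                 # transition noise covariance
Sigma_z = np.array([[0.25]])               # sensor noise covariance


def kalman_step(mu, Sigma, z):
    mu_pred = F @ mu                                             # predicted mean
    S_pred = F @ Sigma @ F.T + Sigma_x                           # predicted covariance
    K = S_pred @ H.T @ np.linalg.inv(H @ S_pred @ H.T + Sigma_z) # Kalman gain
    mu_new = mu_pred + K @ (z - H @ mu_pred)                     # correct by the observation error
    Sigma_new = (np.eye(2) - K @ H) @ S_pred
    return mu_new, Sigma_new


mu, Sigma = np.zeros(2), np.eye(2)
for z in [np.array([1.1]), np.array([2.0]), np.array([2.8])]:
    mu, Sigma = kalman_step(mu, Sigma, z)
print(mu)          # estimated position and velocity after three observations
```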
To illustrate these equations at work, we have applied them to the problem of tracking an object moving on the X–Y plane. The state variables are X = (X,Y, Ẋ, Ẏ ) , so F, Σ~x~, H, and Σ~z~ are 4 × 4
matrices. Figure 15.11(a) shows the true trajectory, a series of noisy observations, and the trajectory estimated by Kalman filtering, along with the covariances indicated by the
one-standard-deviation contours. The filtering process does a good job of tracking the actual motion, and, as expected, the variance quickly reaches a fixed point.
We can also derive equations for smoothing as well as filtering with linear Gaussian models. The smoothing results are shown in Figure 15.11(b). Notice how the variance in the position estimate is
sharply reduced, except at the ends of the trajectory (why?), and that the estimated trajectory is much smoother.
Applicability of Kalman filtering
The Kalman filter and its elaborations are used in a vast array of applications. The “classical” application is in radar tracking of aircraft and missiles. Related applications include acoustic
tracking of submarines and ground vehicles and visual tracking of vehicles and people. In a slightly more esoteric vein, Kalman filters are used to reconstruct particle trajectories from
bubble-chamber photographs and ocean currents from satellite surface measurements. The range of application is much larger than just the tracking of motion: any system characterized by continuous
state variables and noisy measurements will do. Such systems include pulp mills, chemical plants, nuclear reactors, plant ecosystems, and national economies.
The fact that Kalman filtering can be applied to a system does not mean that the results will be valid or useful. The assumptions made—linear Gaussian transition and sensor models—are very strong.
The extended Kalman filter (EKF) attempts to overcome nonlinearities in the system being modeled. A system is nonlinear if the transition model cannot be described as a matrix multiplication of the
state vector, as in Equation (15.21). The EKF works by modeling the system as locally linear in X~t~ in the region of X~t~ = μ~t~, the mean of the current state distribution. This works well for
smooth, well-behaved systems and allows the tracker to maintain and update a Gaussian state distribution that is a reasonable approximation to the true posterior. A detailed example is given in
Chapter 25.
What does it mean for a system to be “unsmooth” or “poorly behaved”? Technically, it means that there is significant nonlinearity in system response within the region that is “close” (according to
the covariance Σ~t~) to the current mean μ~t~. To understand this idea in nontechnical terms, consider the example of trying to track a bird as it flies through the jungle. The bird appears to be heading
at high speed straight for a tree trunk. The Kalman filter, whether regular or extended, can make only a Gaussian prediction of the location of the bird, and the mean of this Gaussian will be
centered on the trunk, as shown in Figure 15.12(a). A reasonable model of the bird, on the other hand, would predict evasive action to one side or the other, as shown in Figure 15.12(b). Such a model
is highly nonlinear, because the bird’s decision varies sharply depending on its precise location relative to the trunk.
To handle examples like these, we clearly need a more expressive language for representing the behavior of the system being modeled. Within the control theory community, for which problems such as
evasive maneuvering by aircraft raise the same kinds of difficulties, the standard solution is the switching Kalman filter. In this approach, multiple Kalman fil-
ters run in parallel, each using a different model of the system—for example, one for straight flight, one for sharp left turns, and one for sharp right turns. A weighted sum of predictions is used,
where the weight depends on how well each filter fits the current data. We will see in the next section that this is simply a special case of the general dynamic Bayesian network model, obtained by
adding a discrete “maneuver” state variable to the network shown in Figure 15.9. Switching Kalman filters are discussed further in Exercise 15.10.
A dynamic Bayesian network, or DBN, is a Bayesian network that represents a temporal probability model of the kind described in Section 15.1. We have already seen examples of DBNs: the umbrella
network in Figure 15.2 and the Kalman filter network in Figure 15.9. In general, each slice of a DBN can have any number of state variables X~t~ and evidence variables e~t~. For simplicity, we assume
that the variables and their links are exactly replicated from slice to slice and that the DBN represents a first-order Markov process, so that each variable can have parents only in its own slice or
the immediately preceding slice.
It should be clear that every hidden Markov model can be represented as a DBN with a single state variable and a single evidence variable. It is also the case that every discretevariable DBN can be
represented as an HMM; as explained in Section 15.3, we can combine all the state variables in the DBN into a single state variable whose values are all possible tuples of values of the individual
state variables. Now, if every HMM is a DBN and every DBN can be translated into an HMM, what’s the difference? The difference is that, by decomposing the state of a complex system into its
constituent variables, the DBN can take advantage of sparseness in the temporal probability model. Suppose, for example, that a DBN has 20 Boolean state variables, each of which has three parents in the
preceding slice. Then the DBN transition model has 20 × 2^3^ = 160 probabilities, whereas the corresponding HMM has 2^20^ states and therefore 2^40^, or roughly a trillion, probabilities in the transition
matrix. This is bad for at least three reasons: first, the HMM itself requires much more space; second, the huge transition matrix makes HMM inference much more expensive; and third, the problem of
learning such a huge number of parameters makes the pure HMM model unsuitable for large problems. The relationship between DBNs and HMMs is roughly analogous to the relationship between ordinary
Bayesian networks and full tabulated joint distributions.
We have already explained that every Kalman filter model can be represented in a DBN with continuous variables and linear Gaussian conditional distributions (Figure 15.9). It should be clear from the
discussion at the end of the preceding section that not every DBN can be represented by a Kalman filter model. In a Kalman filter, the current state distribution is always a single multivariate
Gaussian distribution—that is, a single “bump” in a particular location. DBNs, on the other hand, can model arbitrary distributions. For many real-world applications, this flexibility is essential.
Consider, for example, the current location of my keys. They might be in my pocket, on the bedside table, on the kitchen counter, dangling from the front door, or locked in the car. A single Gaussian
bump that included all these places would have to allocate significant probability to the keys being in mid-air in the front hall. Aspects of the real world such as purposive agents, obstacles, and
pockets introduce “nonlinearities” that require combinations of discrete and continuous variables in order to get reasonable models.
Constructing DBNs
To construct a DBN, one must specify three kinds of information: the prior distribution over the state variables, P(X~0~); the transition model P(X~t+1~ |X~t~); and the sensor model P(E~t~ |X~t~). To
specify the transition and sensor models, one must also specify the topology of the connections between successive slices and between the state and evidence variables. Because the transition and
sensor models are assumed to be stationary—the same for all t—it is most convenient simply to specify them for the first slice. For example, the complete DBN specification for the umbrella world is
given by the three-node network shown in Figure 15.13(a). From this specification, the complete DBN with an unbounded number of time slices can be constructed as needed by copying the first slice.
Let us now consider a more interesting example: monitoring a battery-powered robot moving in the X–Y plane, as introduced at the end of Section 15.1. First, we need state variables, which will
include both X~t~ = (X~t~, Y~t~) for position and Ẋ~t~ = (Ẋ~t~, Ẏ~t~) for velocity. We assume some method of measuring position—perhaps a fixed camera or onboard GPS (Global Positioning System)—yielding
measurements Z~t~. The position at the next time step depends on the current position and velocity, as in the standard Kalman filter model. The velocity at the next step depends on the current velocity
and the state of the battery. We add Battery~t~ to represent the actual battery charge level, which has as parents the previous
battery level and the velocity, and we add BMeter~t~, which measures the battery charge level. This gives us the basic model shown in Figure 15.13(b).
It is worth looking in more depth at the nature of the sensor model for BMeter~t~. Let us suppose, for simplicity, that both Battery~t~ and BMeter~t~ can take on discrete values 0 through 5. If the
meter is always accurate, then the CPT P(BMeter~t~ | Battery~t~) should have probabilities of 1.0 “along the diagonal” and probabilities of 0.0 elsewhere. In reality, noise always creeps into
measurements. For continuous measurements, a Gaussian distribution with a small variance might be used.^5^ For our discrete variables, we can approximate a Gaussian using a distribution in which the
probability of error drops off in the appropriate way, so that the probability of a large error is very small. We use the term Gaussian error model to cover both the continuous and discrete versions.
Anyone with hands-on experience of robotics, computerized process control, or other forms of automatic sensing will readily testify to the fact that small amounts of measurement noise are often the
least of one’s problems. Real sensors fail. When a sensor fails, it does not necessarily send a signal saying, “Oh, by the way, the data I’m about to send you is a load of nonsense.” Instead, it
simply sends the nonsense. The simplest kind of failure is called a transient failure, where the sensor occasionally decides to send some nonsense. For example, the battery level sensor might have a
habit of sending a zero when someone bumps the robot, even if the battery is fully charged.
Let’s see what happens when a transient failure occurs with a Gaussian error model that doesn’t accommodate such failures. Suppose, for example, that the robot is sitting quietly and observes 20
consecutive battery readings of 5. Then the battery meter has a temporary seizure
5 Strictly speaking, a Gaussian distribution is problematic because it assigns nonzero probability to large negative charge levels. The beta distribution is sometimes a better choice for a variable
whose range is restricted.
and the next reading is BMeter~21~ = 0. What will the simple Gaussian error model lead us to believe about Battery~21~? According to Bayes’ rule, the answer depends on both the sensor model P
(BMeter~21~ =0 |Battery~21~) and the prediction P(Battery~21~ |BMeter~1:20~). If the probability of a large sensor error is significantly less likely than the probability of a transition to
Battery~21~ = 0, even if the latter is very unlikely, then the posterior distribution will assign a high probability to the battery’s being empty. A second reading of 0 at t = 22 will make this
conclusion almost certain. If the transient failure then disappears and the reading returns to 5 from t = 23 onwards, the estimate for the battery level will quickly return to 5, as if by magic. This
course of events is illustrated in the upper curve of Figure 15.14(a), which shows the expected value of Battery~t~ over time, using a discrete Gaussian error model.
Despite the recovery, there is a time (t = 22) when the robot is convinced that its battery is empty; presumably, then, it should send out a mayday signal and shut down. Alas, its oversimplified
sensor model has led it astray. How can this be fixed? Consider a familiar example from everyday human driving: on sharp curves or steep hills, one’s “fuel tank empty” warning light sometimes turns
on. Rather than looking for the emergency phone, one simply recalls that the fuel gauge sometimes gives a very large error when the fuel is sloshing around in the tank. The moral of the story is the
following: for the system to handle sensor failure properly, the sensor model must include the possibility of failure.
The simplest kind of failure model for a sensor allows a certain probability that the sensor will return some completely incorrect value, regardless of the true state of the world. For example, if
the battery meter fails by returning 0, we might say that
P(BMeter~t~ = 0 | Battery~t~ = 5) = 0.03 ,
which is presumably much larger than the probability assigned by the simple Gaussian error model. Let’s call this the transient failure model. How does it help when we are faced with a reading of 0?
Provided that the predicted probability of an empty battery, according to the readings so far, is much less than 0.03, then the best explanation of the observation BMeter~21~ = 0 is that the sensor
has temporarily failed. Intuitively, we can think of the belief about the battery level as having a certain amount of “inertia” that helps to overcome temporary blips in the meter reading. The upper
curve in Figure 15.14(b) shows that the transient failure model can handle transient failures without a catastrophic change in beliefs.
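To make the effect of the 0.03 failure probability concrete, here is a minimal numerical sketch comparing how Bayes’ rule reacts to a single reading of 0 under the two sensor models. The prediction distribution and the exact error-model numbers are invented for illustration; only the 0.03 figure comes from the text.

```python
def posterior(prediction, p_reading_zero):
    """Bayes' rule for the evidence BMeter = 0, followed by normalization."""
    unnorm = {level: prediction[level] * p_reading_zero[level] for level in prediction}
    z = sum(unnorm.values())
    return {level: p / z for level, p in unnorm.items()}

# Assumed prediction after twenty readings of 5: almost certainly still full.
prediction = {5: 0.999, 4: 0.0009, 0: 0.0001}

# Narrow discrete "Gaussian" model: a five-level error is astronomically unlikely.
gaussian = {5: 1e-9, 4: 1e-7, 0: 1.0}      # P(BMeter = 0 | Battery = level)

# Transient failure model: any level can produce a spurious 0 with probability 0.03.
transient = {5: 0.03, 4: 0.03, 0: 1.0}

print(posterior(prediction, gaussian))     # nearly all mass shifts onto Battery = 0
print(posterior(prediction, transient))    # most mass stays on Battery = 5
```

The inertia described above is visible in the second result: the 0.03 failure probability dominates the tiny predicted probability of an empty battery, so a single 0 reading barely moves the belief.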
So much for temporary blips. What about a persistent sensor failure? Sadly, failures of this kind are all too common. If the sensor returns 20 readings of 5 followed by 20 readings of 0, then the
transient sensor failure model described in the preceding paragraph will result in the robot gradually coming to believe that its battery is empty when in fact it may be that the meter has failed.
The lower curve in Figure 15.14(b) shows the belief “trajectory” for this case. By t = 25—five readings of 0—the robot is convinced that its battery is empty. Obviously, we would prefer the robot to
believe that its battery meter is broken—if indeed this is the more likely event.
Unsurprisingly, to handle persistent failure, we need a persistent failure model that describes how the sensor behaves under normal conditions and after failure. To do this, we need to augment the
state of the system with an additional variable, say, BMBroken , that describes the status of the battery meter. The persistence of failure must be modeled by an
arc linking BMBroken~0~ to BMBroken~1~. This persistence arc has a CPT that gives a small probability of failure in any given time step, say, 0.001, but specifies that the sensor stays broken once it
breaks. When the sensor is OK, the sensor model for BMeter is identical to the transient failure model; when the sensor is broken, it says BMeter is always 0, regardless of the actual battery charge.
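A rough sketch of the two CPTs this adds to each slice is shown below. The 0.001 per-step failure probability and the “stays broken” behavior come from the text; the exact shape of the working-sensor distribution is an illustrative simplification of the transient failure model, not the book’s specification.

```python
# Persistence arc: P(BMBroken_t = True | BMBroken_{t-1})
P_break = {True: 1.0,      # once broken, the sensor stays broken
           False: 0.001}   # small chance of breaking at each step

def bmeter_cpt(battery_level, broken):
    """P(BMeter_t | Battery_t, BMBroken_t) -- illustrative shape."""
    if broken:
        return {0: 1.0}                      # a broken meter always reads 0
    dist = {level: 0.0 for level in range(6)}
    dist[0] += 0.03                          # transient spurious zero
    dist[battery_level] += 0.97              # otherwise report the true level
    return dist

print(bmeter_cpt(5, broken=False))   # mostly 5, with a 0.03 chance of reading 0
print(bmeter_cpt(5, broken=True))    # {0: 1.0}
```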
The persistent failure model for the battery sensor is shown in Figure 15.15(a). Its performance on the two data sequences (temporary blip and persistent failure) is shown in Figure 15.15(b). There
are several things to notice about these curves. First, in the case of the temporary blip, the probability that the sensor is broken rises significantly after the second 0 reading, but immediately
drops back to zero once a 5 is observed. Second, in the case of persistent failure, the probability that the sensor is broken rises quickly to almost 1 and stays there. Finally, once the sensor is
known to be broken, the robot can only assume that its battery discharges at the “normal” rate, as shown by the gradually descending level of E(Battery~t~ | . . . ).
So far, we have merely scratched the surface of the problem of representing complex processes. The variety of transition models is huge, encompassing topics as disparate as modeling the human
endocrine system and modeling multiple vehicles driving on a freeway. Sensor modeling is also a vast subfield in itself, but even subtle phenomena, such as sensor drift, sudden decalibration, and the
effects of exogenous conditions (such as weather) on sensor readings, can be handled by explicit representation within dynamic Bayesian networks.
Exact inference in DBNs
Having sketched some ideas for representing complex processes as DBNs, we now turn to the question of inference. In a sense, this question has already been answered: dynamic Bayesian networks are
Bayesian networks, and we already have algorithms for inference in Bayesian networks. Given a sequence of observations, one can construct the full Bayesian network representation of a DBN by
replicating slices until the network is large enough to accommodate the observations, as in Figure 15.16. This technique, mentioned in Chapter 14 in the context of relational probability models, is
called unrolling. (Technically, the DBN is equivalent to the semi-infinite network obtained by unrolling forever. Slices added beyond the last observation have no effect on inferences within the
observation period and can be omitted.) Once the DBN is unrolled, one can use any of the inference algorithms—variable elimination, clustering methods, and so on—described in Chapter 14.
Unfortunately, a naive application of unrolling would not be particularly efficient. If we want to perform filtering or smoothing with a long sequence of observations e~1:t~, the unrolled network
would require O(t) space and would thus grow without bound as more observations were added. Moreover, if we simply run the inference algorithm anew each time an observation is added, the inference
time per update will also increase as O(t).
Looking back to Section 15.2.1, we see that constant time and space per filtering update can be achieved if the computation can be done recursively. Essentially, the filtering update in Equation
(15.5) works by summing out the state variables of the previous time step to get the distribution for the new time step. Summing out variables is exactly what the variable elimination (Figure 14.11)
algorithm does, and it turns out that running variable elimination with the variables in temporal order exactly mimics the operation of the recursive filtering update in Equation (15.5). The modified
algorithm keeps at most two slices in memory at any one time: starting with slice 0, we add slice 1, then sum out slice 0, then add slice 2, then sum out slice 1, and so on. In this way, we can
achieve constant space and time per filtering update. (The same performance can be achieved by suitable modifications to the clustering algorithm.) Exercise 15.17 asks you to verify this fact for the
umbrella network.
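As an illustration of what constant space and time per update means in practice, here is a hedged sketch of the two-slice update for the umbrella network, assuming the usual umbrella-world parameters (0.7/0.3 transition, 0.9/0.2 sensor). For a full DBN, the forward message would be a factor over all state variables that have children in the next slice rather than a two-entry table.

```python
P_trans = {True: {True: 0.7, False: 0.3},    # P(Rain_{t+1} | Rain_t), assumed values
           False: {True: 0.3, False: 0.7}}
P_sense = {True: 0.9, False: 0.2}            # P(Umbrella_t = true | Rain_t), assumed values

def filter_step(forward, umbrella):
    """One recursive update: predict by summing out Rain_t, then weight by evidence."""
    new = {}
    for r1 in (True, False):
        predict = sum(P_trans[r0][r1] * forward[r0] for r0 in (True, False))
        weight = P_sense[r1] if umbrella else 1.0 - P_sense[r1]
        new[r1] = weight * predict
    z = sum(new.values())
    return {r: p / z for r, p in new.items()}

f = {True: 0.5, False: 0.5}      # prior P(Rain_0)
for u in (True, True):           # two days of umbrella sightings
    f = filter_step(f, u)
print(f)                         # roughly {True: 0.883, False: 0.117}
```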
So much for the good news; now for the bad news: It turns out that the “constant” for the per-update time and space complexity is, in almost all cases, exponential in the number of state variables.
What happens is that, as the variable elimination proceeds, the factors grow to include all the state variables (or, more precisely, all those state variables that have parents in the previous time
slice). The maximum factor size is O(d^n+k^) and the total update cost per step is O(nd^n+k^), where d is the domain size of the variables and k is the maximum number of parents of any state variable. Of course, this is much less than the cost of HMM updating, which is O(d^2n^), but it is still infeasible for large numbers of variables. This grim fact is somewhat hard to accept. What it means is
that even though we can use DBNs to represent very complex temporal processes with many sparsely connected variables, we cannot reason efficiently and exactly about those processes. The DBN model
itself, which represents the prior joint distribution over all the variables, is factorable into its constituent CPTs, but the posterior joint distribution conditioned on an observation sequence—that
is, the forward message—is generally not factorable. So far, no one has found a way around this problem, despite the fact that many important areas of science and engineering would benefit enormously
from its solution. Thus, we must fall back on approximate methods.
Approximate inference in DBNs
Section 14.5 described two approximation algorithms: likelihood weighting (Figure 14.15) and Markov chain Monte Carlo (MCMC, Figure 14.16). Of the two, the former is most easily adapted to the DBN
context. (An MCMC filtering algorithm is described briefly in the notes at the end of the chapter.) We will see, however, that several improvements are required over the standard likelihood weighting
algorithm before a practical method emerges.
Recall that likelihood weighting works by sampling the nonevidence nodes of the network in topological order, weighting each sample by the likelihood it accords to the observed evidence variables. As
with the exact algorithms, we could apply likelihood weighting directly to an unrolled DBN, but this would suffer from the same problems of increasing time and space requirements per update as the
observation sequence grows. The problem is that the standard algorithm runs each sample in turn, all the way through the network. Instead, we can simply run all N samples together through the DBN,
one slice at a time. The modified algorithm fits the general pattern of filtering algorithms, with the set of N samples as the forward message. The first key innovation, then, is to use the samples
themselves as an approximate representation of the current state distribution. This meets the requirement of a “constant” time per update, although the constant depends on the number of samples
required to maintain an accurate approximation. There is also no need to unroll the DBN, because we need to have in memory only the current slice and the next slice.
In our discussion of likelihood weighting in Chapter 14, we pointed out that the algorithm’s accuracy suffers if the evidence variables are “downstream” from the variables being sampled, because in
that case the samples are generated without any influence from the evidence. Looking at the typical structure of a DBN—say, the umbrella DBN in Figure 15.16—we see that indeed the early state
variables will be sampled without the benefit of the later evidence. In fact, looking more carefully, we see that none of the state variables has any evidence variables among its ancestors! Hence,
although the weight of each sample will depend on the evidence, the actual set of samples generated will be completely independent of the evidence. For example, even if the boss brings in the
umbrella every day, the sampling process could still hallucinate endless days of sunshine. What this means in practice is that the fraction of samples that remain reasonably close to the actual
series of events (and therefore have nonnegligible weights) drops exponentially with t, the length of the observation sequence. In other words, to maintain a given level of accuracy, we need to
increase the number of samples exponentially with t. Given that a filtering algorithm that works in real time can use only a fixed number of samples, what happens in practice is that the error blows
up after a very small number of update steps.
Clearly, we need a better solution. The second key innovation is to focus the set of samples on the high-probability regions of the state space. This can be done by throwing away samples that have
very low weight, according to the observations, while replicating those that have high weight. In that way, the population of samples will stay reasonably close to reality. If we think of samples as
a resource for modeling the posterior distribution, then it makes sense to use more samples in regions of the state space where the posterior is higher.
A family of algorithms called particle filtering is designed to do just that. Particle filtering works as follows: First, a population of N initial-state samples is created by sampling from the prior
distribution P(X~0~). Then the update cycle is repeated for each time step:
1. Each sample is propagated forward by sampling the next state value X~t+1~ given the current value X~t~ for the sample, based on the transition model P(X~t+1~ | X~t~).
2. Each sample is weighted by the likelihood it assigns to the new evidence, P (e~t+1~ | X~t+1~).
3. The population is resampled to generate a new population of N samples. Each new sample is selected from the current population; the probability that a particular sample is selected is
proportional to its weight. The new samples are unweighted.
The algorithm is shown in detail in Figure 15.17, and its operation for the umbrella DBN is illustrated in Figure 15.18.
function PARTICLE-FILTERING(e, N, dbn) returns a set of samples for the next time step
  inputs: e, the new incoming evidence
          N, the number of samples to be maintained
          dbn, a DBN with prior P(X~0~), transition model P(X~1~ | X~0~), sensor model P(E~1~ | X~1~)
  persistent: S, a vector of samples of size N, initially generated from P(X~0~)
  local variables: W, a vector of weights of size N

  for i = 1 to N do
      S[i] ← sample from P(X~1~ | X~0~ = S[i])     /* step 1 */
      W[i] ← P(e | X~1~ = S[i])                    /* step 2 */
  S ← WEIGHTED-SAMPLE-WITH-REPLACEMENT(N, S, W)    /* step 3 */
  return S
Figure 15.17 The particle filtering algorithm implemented as a recursive update operation with state (the set of samples). Each of the sampling operations involves sampling the relevant slice
variables in topological order, much as in PRIOR-SAMPLE. The WEIGHTED-SAMPLE-WITH-REPLACEMENT operation can be implemented to run in O(N) expected time. The step numbers refer to the description in
the text.
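For concreteness, here is a small Python rendering of the same three steps for the umbrella model, with the standard umbrella parameters assumed. It is a sketch rather than a general implementation: for an arbitrary DBN, each particle would hold a complete assignment to the slice’s state variables.

```python
import random

def sample_transition(rain):        # sample Rain_{t+1} given Rain_t (0.7/0.3 assumed)
    return random.random() < (0.7 if rain else 0.3)

def likelihood(umbrella, rain):     # P(e_{t+1} | Rain_{t+1}) (0.9/0.2 assumed)
    p = 0.9 if rain else 0.2
    return p if umbrella else 1.0 - p

def particle_filter_step(samples, evidence):
    samples = [sample_transition(s) for s in samples]                # step 1: propagate
    weights = [likelihood(evidence, s) for s in samples]             # step 2: weight
    return random.choices(samples, weights=weights, k=len(samples))  # step 3: resample

particles = [random.random() < 0.5 for _ in range(1000)]   # initial samples from P(X_0)
for u in (True, True):
    particles = particle_filter_step(particles, u)
print(sum(particles) / len(particles))   # about 0.88, close to the exact filtered value
```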
We can show that this algorithm is consistent—gives the correct probabilities as N tends to infinity—by considering what happens during one update cycle. We assume that the sample population starts
with a correct representation of the forward message f~1:t~ = P(X~t~ | e~1:t~) at time t. Writing N(x~t~ | e~1:t~) for the number of samples occupying state X~t~ after observations e~1:t~ have been
processed, we therefore have
N(x~t~ | e~1:t~)/N = P (x~t~ | e~1:t~) (15.23)
for large N. Now we propagate each sample forward by sampling the state variables at t + 1, given the values for the sample at t. The number of samples reaching state x~t+1~ from each x~t~ is the transition probability times the population of x~t~; weighting those samples by the likelihood of the new evidence and then resampling in proportion to the weights yields a population whose fraction in each state x~t+1~ matches the exact filtering update (the algebra is sketched below). Therefore the sample population after one update cycle correctly represents the forward message at time t + 1. Particle filtering is consistent, therefore, but is it efficient? In practice, it seems
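The algebra behind this claim, reconstructed here in its standard form (α and α′ denote the normalization constants introduced by the resampling step), runs as follows:

```latex
% One-step consistency of particle filtering (reconstruction of the standard argument).
\begin{align*}
N(x_{t+1} \mid e_{1:t}) &= \sum_{x_t} P(x_{t+1} \mid x_t)\, N(x_t \mid e_{1:t}) \\
W(x_{t+1} \mid e_{1:t+1}) &= P(e_{t+1} \mid x_{t+1})\, N(x_{t+1} \mid e_{1:t}) \\
N(x_{t+1} \mid e_{1:t+1})/N
  &= \alpha\, W(x_{t+1} \mid e_{1:t+1})
   = \alpha\, P(e_{t+1} \mid x_{t+1}) \sum_{x_t} P(x_{t+1} \mid x_t)\, N(x_t \mid e_{1:t}) \\
  &= \alpha'\, P(e_{t+1} \mid x_{t+1}) \sum_{x_t} P(x_{t+1} \mid x_t)\, P(x_t \mid e_{1:t})
     && \text{(by Equation (15.23))} \\
  &= P(x_{t+1} \mid e_{1:t+1}) && \text{(by Equation (15.5)).}
\end{align*}
```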
that the answer is yes: particle filtering seems to maintain a good approximation to the true posterior using a constant number of samples. Under certain assumptions—in particular, that the
probabilities in the transition and sensor models are strictly greater than 0 and less than 1—it is possible to prove that the approximation maintains bounded error with high probability. On the
practical side, the range of applications has grown to include many fields of science and engineering; some references are given at the end of the chapter.
The preceding sections have considered—without mentioning it—state estimation problems involving a single object. In this section, we see what happens when two or more objects generate the
observations. What makes this case different from plain old state estimation is that there is now the possibility of uncertainty about which object generated which observation. This is the identity
uncertainty problem of Section 14.6.3 (page 544), now viewed in a temporal context. In the control theory literature, this is the data association problem—that is, the problem of associating
observation data with the objects that generated them.
The data association problem was studied originally in the context of radar tracking, where reflected pulses are detected at fixed time intervals by a rotating radar antenna. At each time step,
multiple blips may appear on the screen, but there is no direct observation of which blips at time t belong to which blips at time t − 1. Figure 15.19(a) shows a simple example with two blips per
time step for five steps. Let the two blip locations at time t be e^1^~t~ and e^2^~t~ .
(The labeling of blips within a time step as “1” and “2” is completely arbitrary and carries no information.) Let us assume, for the time being, that exactly two aircraft, A and B, generated the
blips; their true positions are X^A^~t~ and X^B^~t~. Just to keep things simple, we’ll also assume that each aircraft moves independently according to a known transition model—e.g., a linear
Gaussian model as used in the Kalman filter (Section 15.4).
Suppose we try to write down the overall probability model for this scenario, just as we did for general temporal processes in Equation (15.3) on page 569. As usual, the joint distribution factors
into contributions for each time step as follows:
P(x^A^~0:t~, x^B^~0:t~, e^1^~1:t~, e^2^~1:t~) = P(x^A^~0~) P(x^B^~0~) ∏~i~ P(x^A^~i~ | x^A^~i−1~) P(x^B^~i~ | x^B^~i−1~) P(e^1^~i~, e^2^~i~ | x^A^~i~, x^B^~i~) . (15.24)
We would like to factor the observation term P (e^1^~i~ , e^2^~i~ | x^A^~i~ , x^B^~i~ ) into a product of two terms, one for each object, but this would require knowing which observation was
generated by which object. Instead, we have to sum over all possible ways of associating the observations with the objects. Some of those ways are shown in Figure 15.19(b–c); in general, for n
objects and T time steps, there are (n!)^T^ ways of doing it—an awfully large number.
Mathematically speaking, the “way of associating the observations with the objects” is a collection of unobserved random variables that identify the source of each observation. We’ll write ω~t~ to denote the one-to-one mapping from objects to observations at time t, with ω~t~(A) and ω~t~(B) denoting the specific observations (1 or 2) that ω~t~ assigns to A and B. (For n objects, ω~t~ will have n! possible values; here, n! = 2.) Because the labels “1” and “2” on the observations are assigned arbitrarily, the prior on ω~t~ is uniform and ω~t~ is independent of the states of the objects, x^A^~t~ and x^B^~t~. So we can condition the observation term P(e^1^~t~, e^2^~t~ | x^A^~t~, x^B^~t~) on ω~t~ and then simplify:
P(e^1^~t~, e^2^~t~ | x^A^~t~, x^B^~t~) = ∑~ωt~ P(e^1^~t~, e^2^~t~ | x^A^~t~, x^B^~t~, ω~t~) P(ω~t~) = (1/2) ∑~ωt~ P(e^ωt(A)^~t~ | x^A^~t~) P(e^ωt(B)^~t~ | x^B^~t~) .
Plugging this into Equation (15.24), we get an expression that is only in terms of transition and sensor models for individual objects and observations.
As for all probability models, inference means summing out the variables other than the query and the evidence. For filtering in HMMs and DBNs, we were able to sum out the state variables from 1 to
t−1 by a simple dynamic programming trick; for Kalman filters, we took advantage of special properties of Gaussians. For data association, we are less fortunate. There is no (known) efficient exact
algorithm, for the same reason that there is none for the switching Kalman filter (page 589): the filtering distribution P (x^A^~t~ | e^1^~1:t~, e^2^~1:t~) for object A ends up as a mixture of
exponentially many distributions, one for each way of picking a sequence of observations to assign to A.
As a result of the complexity of exact inference, many different approximate methods have been used. The simplest approach is to choose a single “best” assignment at each time step, given the
predicted positions of the objects at the current time step. This assignment associates observations with objects and enables the track of each object to be updated and a prediction made for the next
time step. For choosing the “best” assignment, it is common to use the so-called nearest-neighbor filter, which repeatedly chooses the closest pairing of predicted position and observation and adds
that pairing to the assignment. The nearest-neighbor filter works well when the objects are well separated in state space and the prediction uncertainty and observation error are small—in other words,
when there is no possibility of confusion. When there is more uncertainty as to the correct assignment, a better approach is to choose the assignment that maximizes the joint probability of the
current observations given the predicted positions. This can be done very efficiently using the Hungarian algorithm (Kuhn, 1955), even though there are n! assignments to choose from.
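As a sketch of how the maximum-joint-probability assignment can be computed, the following uses SciPy’s Hungarian-algorithm solver on negative log-likelihood costs. The predicted positions, the observed blips, and the Gaussian noise level are invented for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

predicted = np.array([[0.0, 0.0], [10.0, 10.0]])   # predicted positions of A and B
observed = np.array([[9.5, 10.2], [0.3, -0.1]])    # the two blips at the current step
sigma = 1.0                                        # assumed observation noise

# cost[i, j] = negative log-likelihood (up to a constant) of blip j given object i,
# so minimizing the total cost maximizes the product of observation likelihoods.
cost = np.array([[np.sum((predicted[i] - observed[j]) ** 2) / (2 * sigma ** 2)
                  for j in range(len(observed))]
                 for i in range(len(predicted))])

rows, cols = linear_sum_assignment(cost)
for obj, blip in zip("AB", cols):
    print(f"object {obj} -> blip {blip + 1}")      # A -> blip 2, B -> blip 1
```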
Any method that commits to a single best assignment at each time step fails miserably under more difficult conditions. In particular, if the algorithm commits to an incorrect assignment, the
prediction at the next time step may be significantly wrong, leading to more
incorrect assignments, and so on. Two modern approaches turn out to be much more effective. A particle filtering algorithm (see page 598) for data association works by maintaining a large collection
of possible current assignments. An MCMC algorithm explores the space of assignment histories—for example, Figure 15.19(b–c) might be states in the MCMC state space—and can change its mind about
previous assignment decisions. Current MCMC data association methods can handle many hundreds of objects in real time while giving a good approximation to the true posterior distributions.
The scenario described so far involved n known objects generating n observations at each time step. Real applications of data association are typically much more complicated. Often, the reported observations include false alarms (also known as clutter), which are not caused by real objects. Detection failures can occur, meaning that no observation is reported for a real
object. Finally, new objects arrive and old ones disappear. These phenomena, which create even more possible worlds to worry about, are illustrated in Figure 15.19(d). Figure 15.20 shows two images
from widely separated cameras on a California freeway.
In this application, we are interested in two goals: estimating the time it takes, under current traffic conditions, to go from one place to another in the freeway system; and measuring demand, i.e.,
how many vehicles travel between any two points in the system at particular times of the day and on particular days of the week. Both goals require solving the data association problem over a wide
area with many cameras and tens of thousands of vehicles per hour. With visual surveillance, false alarms are caused by moving shadows, articulated vehicles, reflections in puddles, etc.; detection
failures are caused by occlusion, fog, darkness, and lack of visual contrast; and vehicles are constantly entering and leaving the freeway system. Furthermore, the appearance of any given vehicle can
change dramatically between cameras depending on lighting conditions and vehicle pose in the image, and the transition model changes as traffic jams come and go. Despite these problems, modern data
association algorithms have been successful in estimating traffic parameters in real-world settings.
Data association is an essential foundation for keeping track of a complex world, because without it there is no way to combine multiple observations of any given object. When objects in the world
interact with each other in complex activities, understanding the world requires combining data association with the relational and open-universe probability models of Section 14.6.3. This is
currently an active area of research.
This chapter has addressed the general problem of representing and reasoning about probabilistic temporal processes. The main points are as follows:
• The changing state of the world is handled by using a set of random variables to represent the state at each point in time.
• Representations can be designed to satisfy the Markov property, so that the future is independent of the past given the present. Combined with the assumption that the process is stationary—that
is, the dynamics do not change over time—this greatly simplifies the representation.
• A temporal probability model can be thought of as containing a transition model describing the state evolution and a sensor model describing the observation process.
• The principal inference tasks in temporal models are filtering, prediction, smoothing, and computing the most likely explanation. Each of these can be achieved using simple, recursive algorithms
whose run time is linear in the length of the sequence.
• Three families of temporal models were studied in more depth: hidden Markov models, Kalman filters, and dynamic Bayesian networks (which include the other two as special cases).
• Unless special assumptions are made, as in Kalman filters, exact inference with many state variables is intractable. In practice, the particle filtering algorithm seems to be an effective
approximation algorithm.
• When trying to keep track of many objects, uncertainty arises as to which observations belong to which objects—the data association problem. The number of association hypotheses is typically
intractably large, but MCMC and particle filtering algorithms for data association work well in practice.
Many of the basic ideas for estimating the state of dynamical systems came from the mathematician C. F. Gauss (1809), who formulated a deterministic least-squares algorithm for the problem of
estimating orbits from astronomical observations. A. A. Markov (1913) developed what was later called the Markov assumption in his analysis of stochastic processes; he estimated a first-order Markov
chain on letters from the text of Eugene Onegin. The general theory of Markov chains and their mixing times is covered by Levin et al. (2008).
Significant classified work on filtering was done during World War II by Wiener (1942) for continuous-time processes and by Kolmogorov (1941) for discrete-time processes. Although this work led to
important technological developments over the next 20 years, its use of a frequency-domain representation made many calculations quite cumbersome. Direct state-space modeling of the stochastic
process turned out to be simpler, as shown by Peter Swerling (1959) and Rudolf Kalman (1960). The latter paper described what is now known as the Kalman filter for forward inference in linear systems
with Gaussian noise; Kalman’s results had, however, been obtained previously by the Danish statistician Thorvald Thiele (1880) and by the Russian mathematician Ruslan Stratonovich (1959), whom Kalman
met in Moscow in 1960. After a visit to NASA Ames Research Center in 1960, Kalman saw the applicability of the method to the tracking of rocket trajectories, and the filter was later implemented for
the Apollo missions. Important results on smoothing were derived by Rauch et al. (1965), and the impressively named Rauch–Tung–Striebel smoother is still a standard technique today. Many early
results are gathered in Gelb (1974). Bar-Shalom and Fortmann (1988) give a more modern treatment with a Bayesian flavor, as well as many references to the vast literature on the subject. Chatfield
(1989) and Box et al. (1994) cover the control theory approach to time series analysis.
The hidden Markov model and associated algorithms for inference and learning, including the forward–backward algorithm, were developed by Baum and Petrie (1966). The Viterbi algorithm first appeared
in (Viterbi, 1967). Similar ideas also appeared independently in the Kalman filtering community (Rauch et al., 1965). The forward–backward algorithm was one of the main precursors of the general
formulation of the EM algorithm (Dempster et al., 1977); see also Chapter 20. Constant-space smoothing appears in Binder et al. (1997b), as does the divide-and-conquer algorithm developed in Exercise
15.3. Constant-time fixed-lag smoothing for HMMs first appeared in Russell and Norvig (2003). HMMs have found many applications in language processing (Charniak, 1993), speech recognition (Rabiner and Juang, 1993), machine translation (Och and Ney, 2003), computational biology (Krogh et al., 1994; Baldi et al., 1994), financial economics (Bhar and Hamori, 2004), and other fields. There have been
several extensions to the basic HMM model, for example the Hierarchical HMM (Fine et al., 1998) and Layered HMM (Oliver et al., 2004) introduce structure back into the model, replacing the single
state variable of HMMs.
Dynamic Bayesian networks (DBNs) can be viewed as a sparse encoding of a Markov process and were first used in AI by Dean and Kanazawa (1989b), Nicholson and Brady (1992), and Kjaerulff (1992). The
last work extends the HUGIN Bayes net system to accommodate dynamic Bayesian networks. The book by Dean and Wellman (1991) helped popularize DBNs and the probabilistic approach to planning and
control within AI. Murphy (2002) provides a thorough analysis of DBNs.
Dynamic Bayesian networks have become popular for modeling a variety of complex motion processes in computer vision (Huang et al., 1994; Intille and Bobick, 1999). Like HMMs, they have found
applications in speech recognition (Zweig and Russell, 1998; Richardson et al., 2000; Stephenson et al., 2000; Nefian et al., 2002; Livescu et al., 2003), genomics (Murphy and Mian, 1999; Perrin et
al., 2003; Husmeier, 2003) and robot localization (Theocharous et al., 2004). The link between HMMs and DBNs, and between the forward– backward algorithm and Bayesian network propagation, was made
explicitly by Smyth et al. (1997). A further unification with Kalman filters (and other statistical models) appears in Roweis and Ghahramani (1999). Procedures exist for learning the parameters
(Binder et al., 1997a; Ghahramani, 1998) and structures (Friedman et al., 1998) of DBNs.
The particle filtering algorithm described in Section 15.5 has a particularly interesting history. The first sampling algorithms for particle filtering (also called sequential Monte Carlo methods)
were developed in the control theory community by Handschin and Mayne (1969), and the resampling idea that is the core of particle filtering appeared in a Russian control journal (Zaritskii et al.,
1975). It was later reinvented in statistics as sequential importance-sampling resampling, or SIR (Rubin, 1988; Liu and Chen, 1998), in control theory as particle filtering (Gordon et al., 1993;
Gordon, 1994), in AI as survival of the fittest (Kanazawa et al., 1995), and in computer vision as condensation (Isard and Blake, 1996). The paper by Kanazawa et al. (1995) includes an improvement
called evidence reversal whereby the state at time t + 1 is sampled conditional on both the state at time t and the evidence at time t + 1. This allows the evidence to influence sample generation
directly and was proved by Doucet (1997) and Liu and Chen (1998) to reduce the approximation error. Particle filtering has been applied in many areas, including tracking complex motion patterns in
video (Isard and Blake, 1996), predicting the stock market (de Freitas et al., 2000), and diagnosing faults on planetary rovers (Verma et al., 2004). A variant called the Rao-Blackwellized particle
filter or RBPF (Doucet et al., 2000; Murphy and Russell, 2001) applies particle filtering to a subset of state variables and, for each particle, performs exact inference on the remaining variables
conditioned on the value sequence in the particle. In some cases RBPF works well with thousands of state variables. An application of RBPF to localization and mapping in robotics is described in
Chapter 25. The book by Doucet et al. (2001) collects many important papers on sequential Monte Carlo (SMC) algorithms, of which particle filtering is the most important instance. Pierre Del Moral
and colleagues have performed extensive theoretical analyses of SMC algorithms (Del Moral, 2004; Del Moral et al., 2006).
MCMC methods (see Section 14.5.2) can be applied to the filtering problem; for example, Gibbs sampling can be applied directly to an unrolled DBN. To avoid the problem of increasing update times as
the unrolled network grows, the decayed MCMC filter (Marthi et al., 2002) prefers to sample more recent state variables, with a probability that decays as 1/k^2^ for a variable k steps into the past.
Decayed MCMC is a provably nondivergent filter. Nondivergence theorems can also be obtained for certain types of assumed-density filter.
An assumed-density filter assumes that the posterior distribution over states at time t belongs to a particular finitely parameterized family; if the projection and update steps take it outside this
family, the distribution is projected back to give the best approximation within the family. For DBNs, the Boyen–Koller algorithm (Boyen et al., 1999) and the factored frontier algorithm (Murphy and
Weiss, 2001) assume that the posterior distribution can be approximated well by a product of small factors. Variational techniques (see Chapter 14) have also been developed for temporal models.
Ghahramani and Jordan (1997) discuss an approximation algorithm for the factorial HMM, a DBN in which two or more independently evolving Markov chains are linked by a shared observation stream.
Jordan et al. (1998) cover a number of other applications.
Data association for multitarget tracking was first described in a probabilistic setting by Sittler (1964). The first practical algorithm for large-scale problems was the “multiple hypothesis
tracker” or MHT algorithm (Reid, 1979). Many important papers are collected by Bar-Shalom and Fortmann (1988) and Bar-Shalom (1992). The development of an MCMC algorithm for data association is due
to Pasula et al. (1999), who applied it to traffic surveillance problems. Oh et al. (2009) provide a formal analysis and extensive experimental comparisons to other methods. Schulz et al. (2003)
describe a data association method based on particle filtering. Ingemar Cox analyzed the complexity of data association (Cox, 1993; Cox and Hingorani, 1994) and brought the topic to the attention of
the vision community. He also noted the applicability of the polynomial-time Hungarian algorithm to the problem of finding most-likely assignments, which had long been considered an intractable
problem in the tracking community. The algorithm itself was published by Kuhn (1955), based on translations of papers published in 1931 by two Hungarian mathematicians, Dénes König and Jenö Egerváry.
The basic theorem had been derived previously, however, in an unpublished Latin manuscript by the famous Prussian mathematician Carl Gustav Jacobi (1804–1851).
15.1 Show that any second-order Markov process can be rewritten as a first-order Markov process with an augmented set of state variables. Can this always be done parsimoniously, i.e., without
increasing the number of parameters needed to specify the transition model?
15.2 In this exercise, we examine what happens to the probabilities in the umbrella world in the limit of long time sequences.
a. Suppose we observe an unending sequence of days on which the umbrella appears. Show that, as the days go by, the probability of rain on the current day increases monotonically toward a fixed
point. Calculate this fixed point.
b. Now consider forecasting further and further into the future, given just the first two umbrella observations. First, compute the probability P(r~2+k~ | u~1~, u~2~) for k = 1 . . . 20
and plot the results. You should see that the probability converges towards a fixed point. Prove that the exact value of this fixed point is 0.5.
15.3 This exercise develops a space-efficient variant of the forward–backward algorithm described in Figure 15.4 (page 576). We wish to compute P(X~k~|e~1:t~) for k =1, . . . , t. This will be done
with a divide-and-conquer approach.
a. Suppose, for simplicity, that t is odd, and let the halfway point be h = (t + 1)/2. Show that P(X~k~|e~1:t~) can be computed for k = 1, . . . , h given just the initial forward message f~1:0~, the backward message b~h+1:t~, and the evidence e~1:h~.
b. Show a similar result for the second half of the sequence.
c. Given the results of (a) and (b), a recursive divide-and-conquer algorithm can be constructed by first running forward along the sequence and then backward from the end, storing just the required
messages at the middle and the ends. Then the algorithm is called on each half. Write out the algorithm in detail.
d. Compute the time and space complexity of the algorithm as a function of t, the length of the sequence. How does this change if we divide the input into more than two pieces?
15.4 On page 577, we outlined a flawed procedure for finding the most likely state sequence, given an observation sequence. The procedure involves finding the most likely state at each time step,
using smoothing, and returning the sequence composed of these states. Show that, for some temporal probability models and observation sequences, this procedure returns an impossible state sequence
(i.e., the posterior probability of the sequence is zero).
15.5 Equation (15.12) describes the filtering process for the matrix formulation of HMMs. Give a similar equation for the calculation of likelihoods, which was described generically in Equation
15.6 Consider the vacuum worlds of Figure 4.18 (perfect sensing) and Figure 15.7 (noisy sensing). Suppose that the robot receives an observation sequence such that, with perfect sensing, there is
exactly one possible location it could be in. Is this location necessarily the most probable location under noisy sensing for sufficiently small noise probability ε? Prove your claim or find a counterexample.
15.7 In Section 15.3.2, the prior distribution over locations is uniform and the transition model assumes an equal probability of moving to any neighboring square. What if those assumptions are
wrong? Suppose that the initial location is actually chosen uniformly from the northwest quadrant of the room and the Move action actually tends to move southeast. Keeping the HMM model fixed,
explore the effect on localization and path accuracy as the southeasterly tendency increases, for different values of ε.
15.8 Consider a version of the vacuum robot (page 582) that has the policy of going straight for as long as it can; only when it encounters an obstacle does it change to a new (randomly selected)
heading. To model this robot, each state in the model consists of a (location, heading) pair. Implement this model and see how well the Viterbi algorithm can track a robot with this model. The
robot’s policy is more constrained than the random-walk robot; does that mean that predictions of the most likely path are more accurate?
15.9 This exercise is concerned with filtering in an environment with no landmarks. Consider a vacuum robot in an empty room, represented by an n×m rectangular grid. The robot’s location is hidden;
the only evidence available to the observer is a noisy location sensor that gives an approximation to the robot’s location. If the robot is at location (x, y) then with probability .1 the sensor
gives the correct location, with probability .05 each it reports one of the 8 locations immediately surrounding (x, y), with probability .025 each it reports one of the 16 locations that surround
those 8, and with the remaining probability of .1 it reports “no reading.” The robot’s policy is to pick a direction and follow it with probability .8 on each step; the robot switches to a randomly
selected new heading with probability .2 (or with
Alt text
probability 1 if it encounters a wall). Implement this as an HMM and do filtering to track the robot. How accurately can we track the robot’s path?
15.10 Often, we wish to monitor a continuous-state system whose behavior switches unpredictably among a set of k distinct “modes.” For example, an aircraft trying to evade a missile can execute a
series of distinct maneuvers that the missile may attempt to track. A Bayesian network representation of such a switching Kalman filter model is shown in Figure 15.21.
a. Suppose that the discrete state S~t~ has k possible values and that the prior continuous state estimate P(X~0~) is a multivariate Gaussian distribution. Show that the prediction P(X~1~) is a mixture
of Gaussians—that is, a weighted sum of Gaussians such that the weights sum to 1.
b. Show that if the current continuous state estimate P(X~t~|e~1:t~) is a mixture of m Gaussians, then in the general case the updated state estimate P(X~t+1~|e~1:t+1~) will be a mixture of km Gaussians.
c. What aspect of the temporal process do the weights in the Gaussian mixture represent?
The results in (a) and (b) show that the representation of the posterior grows without limit even for switching Kalman filters, which are among the simplest hybrid dynamic models.
15.11 Complete the missing step in the derivation of Equation (15.19) on page 586, the first update step for the one-dimensional Kalman filter.
15.12 Let us examine the behavior of the variance update in Equation (15.20) (page 587).
a. Plot the value of σ^2^~t~ as a function of t, given various values for σ^2^~x~ and σ^2^~z~.
b. Show that the update has a fixed point σ^2^ such that σ^2^~t~ → σ^2^ as t → ∞, and calculate the value of σ^2^.
c. Give a qualitative explanation for what happens as σ^2^~x~ → 0 and as σ^2^~z~ → 0.
15.13 A professor wants to know if students are getting enough sleep. Each day, the professor observes whether the students sleep in class, and whether they have red eyes. The professor has the
following domain theory:
• The prior probability of getting enough sleep, with no observations, is 0.7.
• The probability of getting enough sleep on night t is 0.8 given that the student got enough sleep the previous night, and 0.3 if not.
• The probability of having red eyes is 0.2 if the student got enough sleep, and 0.7 if not.
• The probability of sleeping in class is 0.1 if the student got enough sleep, and 0.3 if not.
Formulate this information as a dynamic Bayesian network that the professor could use to filter or predict from a sequence of observations. Then reformulate it as a hidden Markov model that has only
a single observation variable. Give the complete probability tables for the model.
15.14 For the DBN specified in Exercise 15.13 and for the evidence values
e~1~ = not red eyes, not sleeping in class
e~2~ = red eyes, not sleeping in class
e~3~ = red eyes, sleeping in class
perform the following computations:
a. State estimation: Compute P(EnoughSleep~t~ | e~1:t~) for each of t = 1, 2, 3.
b. Smoothing: Compute P(EnoughSleep~t~ | e~1:3~) for each of t = 1, 2, 3.
c. Compare the filtered and smoothed probabilities for t = 1 and t = 2.
15.15 Suppose that a particular student shows up with red eyes and sleeps in class every day. Given the model described in Exercise 15.13, explain why the probability that the student had enough
sleep the previous night converges to a fixed point rather than continuing to go down as we gather more days of evidence. What is the fixed point? Answer this both numerically (by computation) and analytically.
15.16 This exercise analyzes in more detail the persistent-failure model for the battery sensor in Figure 15.15(a) (page 594).
a. Figure 15.15(b) stops at t = 32. Describe qualitatively what should happen as t → ∞
if the sensor continues to read 0.
b. Suppose that the external temperature affects the battery sensor in such a way that transient failures become more likely as temperature increases. Show how to augment the DBN structure in Figure
15.15(a), and explain any required changes to the CPTs.
c. Given the new network structure, can battery readings be used by the robot to infer the current temperature?
15.17 Consider applying the variable elimination algorithm to the umbrella DBN unrolled for three slices, where the query is P(R~3~|u~1~, u~2~, u~3~). Show that the space complexity of the
algorithm—the size of the largest factor—is the same, regardless of whether the rain variables are eliminated in forward or backward order.
In which we see how an agent should make decisions so that it gets what it wants— on average, at least.
In this chapter, we fill in the details of how utility theory combines with probability theory to yield a decision-theoretic agent—an agent that can make rational decisions based on what it believes
and what it wants. Such an agent can make decisions in contexts in which uncertainty and conflicting goals leave a logical agent with no way to decide: a goal-based agent has a binary distinction
between good (goal) and bad (non-goal) states, while a decision-theoretic agent has a continuous measure of outcome quality.
Section 16.1 introduces the basic principle of decision theory: the maximization of expected utility. Section 16.2 shows that the behavior of any rational agent can be captured by supposing a utility
function that is being maximized. Section 16.3 discusses the nature of utility functions in more detail, and in particular their relation to individual quantities such as money. Section 16.4 shows
how to handle utility functions that depend on several quantities. In Section 16.5, we describe the implementation of decision-making systems. In particular, we introduce a formalism called a
decision network (also known as an influence diagram) that extends Bayesian networks by incorporating actions and utilities. The remainder of the chapter discusses issues that arise in applications
of decision theory to expert systems.
Decision theory, in its simplest form, deals with choosing among actions based on the desirability of their immediate outcomes; that is, the environment is assumed to be episodic in the sense defined
on page 43. (This assumption is relaxed in Chapter 17.) In Chapter 3 we used the notation RESULT(s~0~, a) for the state that is the deterministic outcome of taking action a in state s~0~. In this
chapter we deal with nondeterministic partially observable environments. Since the agent may not know the current state, we omit it and define RESULT(a) as a random variable whose values are the
possible outcome states. The probability of outcome s′, given evidence observations e, is written P(RESULT(a) = s′ | a, e) ,
where the a on the right-hand side of the conditioning bar stands for the event that action a is executed.^1^
The agent’s preferences are captured by a utility function, U(s), which assigns a single number to express the desirability of a state. The expected utility of an action given the evidence, EU(a|e),
is just the average utility value of the outcomes, weighted by the probability that the outcome occurs:
EU(a|e) = ∑~s′~ P(RESULT(a) = s′ | a, e) U(s′) . (16.1)
The principle of maximum expected utility (MEU) says that a rational agent should choose
the action that maximizes the agent’s expected utility:
action = argmax~a~ EU(a|e) .
In a sense, the MEU principle could be seen as defining all of AI. All an intelligent agent has to do is calculate the various quantities, maximize utility over its actions, and away it goes. But
this does not mean that the AI problem is solved by the definition!
The MEU principle formalizes the general notion that the agent should “do the right thing,” but goes only a small distance toward a full operationalization of that advice. Estimating the state of the
world requires perception, learning, knowledge representation, and inference. Computing P (RESULT(a) | a, e) requires a complete causal model of the world and, as we saw in Chapter 14, NP-hard
inference in (very large) Bayesian networks. Computing the outcome utilities U(s′) often requires searching or planning, because an agent may not know how good a state is until it knows where it can
get to from that state. So, decision theory is not a panacea that solves the AI problem—but it does provide a useful framework.
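In code, the decision rule itself is just an argmax over actions; the difficulty described above lies in obtaining the outcome model and the utilities, not in this loop. The sketch below uses an invented umbrella-style outcome model and invented utilities purely for illustration.

```python
def expected_utility(action, outcome_model, utility):
    """EU(a|e) = sum over outcomes s' of P(RESULT(a) = s' | a, e) * U(s')."""
    return sum(p * utility[s] for s, p in outcome_model[action].items())

outcome_model = {                                   # assumed P(RESULT(a) = s' | a, e)
    "take_umbrella":  {"dry_encumbered": 1.0},
    "leave_umbrella": {"dry_unencumbered": 0.7, "soaked": 0.3},
}
utility = {"dry_encumbered": 0.8, "dry_unencumbered": 1.0, "soaked": 0.0}

best = max(outcome_model, key=lambda a: expected_utility(a, outcome_model, utility))
print(best)   # take_umbrella (expected utility 0.8 versus 0.7)
```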
The MEU principle has a clear relation to the idea of performance measures introduced in Chapter 2. The basic idea is simple. Consider the environments that could lead to an agent having a given
percept history, and consider the different agents that we could design. If an agent acts so as to maximize a utility function that correctly reflects the performance measure, then the agent will
achieve the highest possible performance score (averaged over all the possible environments). This is the central justification for the MEU principle itself. While the claim may seem tautological, it
does in fact embody a very important transition from a global, external criterion of rationality—the performance measure over environment histories—to a local, internal criterion involving the
maximization of a utility function applied to the next state.
Intuitively, the principle of Maximum Expected Utility (MEU) seems like a reasonable way to make decisions, but it is by no means obvious that it is the only rational way. After all, why should
maximizing the average utility be so special? What’s wrong with an agent that maximizes the weighted sum of the cubes of the possible utilities, or tries to minimize the worst possible loss? Could an
agent act rationally just by expressing preferences between states, without giving them numeric values? Finally, why should a utility function with the required properties exist at all? We shall see.
Constraints on rational preferences
These questions can be answered by writing down some constraints on the preferences that a rational agent should have and then showing that the MEU principle can be derived from the constraints. We
use the following notation to describe an agent’s preferences:
A ≻ B    the agent prefers A over B.
A ∼ B    the agent is indifferent between A and B.
A ≿ B    the agent prefers A over B or is indifferent between them.
Now the obvious question is, what sorts of things are A and B? They could be states of the world, but more often than not there is uncertainty about what is really being offered. For example, an
airline passenger who is offered “the pasta dish or the chicken” does not know what lurks beneath the tinfoil cover.^2^ The pasta could be delicious or congealed, the chicken juicy or overcooked beyond
recognition. We can think of the set of outcomes for each action as a lottery—think of each action as a ticket. A lottery L with possible outcomes S~1~, . . . , S~n~ that occur with probabilities
p~1~, . . . , p~n~ is written
L = [p~1~, S~1~; p~2~, S~2~; . . . ; p~n~, S~n~] .
In general, each outcome S~i~ of a lottery can be either an atomic state or another lottery. The primary issue for utility theory is to understand how preferences between complex lotteries are related
to preferences between the underlying states in those lotteries. To address this issue we list six constraints that we require any reasonable preference relation to obey:
• Orderability: Given any two lotteries, a rational agent must either prefer one to the other or else rate the two as equally preferable. That is, the agent cannot avoid deciding. As we said on
page 490, refusing to bet is like refusing to allow time to pass.
Exactly one of (A ≻ B), (B ≻ A), or (A ∼ B) holds.
• Transitivity: Given any three lotteries, if an agent prefers A to B and prefers B to C , then the agent must prefer A to C .
(A ≻ B) ∧ (B ≻ C) ⇒ (A ≻ C) .
• Continuity: If some lottery B is between A and C in preference, then there is some probability p for which the rational agent will be indifferent between getting B for sure and the lottery that
yields A with probability p and C with probability 1− p.
A ≻ B ≻ C ⇒ ∃ p [p,A; 1− p,C] ∼ B .
• Substitutability: If an agent is indifferent between two lotteries A and B, then the agent is indifferent between two more complex lotteries that are the same except that B
2 We apologize to readers whose local airlines no longer offer food on long flights.
is substituted for A in one of them. This holds regardless of the probabilities and the other outcome(s) in the lotteries.
A ∼ B ⇒ [p,A; 1− p,C] ∼ [p,B; 1− p,C] .
This also holds if we substitute ≻ for ∼ in this axiom.
• Monotonicity: Suppose two lotteries have the same two possible outcomes, A and B. If an agent prefers A to B, then the agent must prefer the lottery that has a higher probability for A (and vice versa).
A ≻ B ⇒ (p > q ⇔ [p,A; 1− p,B] ≻ [q,A; 1− q,B]) .
• Decomposability: Compound lotteries can be reduced to simpler ones using the laws of probability. This has been called the “no fun in gambling” rule because it says that two consecutive lotteries
can be compressed into a single equivalent lottery, as shown in Figure 16.1(b).^3^
[p,A; 1− p, [q,B; 1− q, C]] ∼ [p,A; (1− p)q,B; (1− p)(1− q), C] .
These constraints are known as the axioms of utility theory. Each axiom can be motivated by showing that an agent that violates it will exhibit patently irrational behavior in some situations. For
example, we can motivate transitivity by making an agent with nontransitive preferences give us all its money. Suppose that the agent has the nontransitive preferences A ≻ B ≻ C ≻ A, where A, B, and
C are goods that can be freely exchanged. If the agent currently has A, then we could offer to trade C for A plus one cent. The agent prefers C, and so would be willing to make this trade. We could then offer to trade B for C, extracting another cent, and finally trade A for B. This brings us back to where we started, except that the agent has given us three cents (Figure 16.1(a)). We can
keep going around the cycle until the agent has no money at all. Clearly, the agent has acted irrationally in this case.
Preferences lead to utility
Notice that the axioms of utility theory are really axioms about preferences—they say nothing about a utility function. But in fact from the axioms of utility we can derive the following consequences
(for the proof, see von Neumann and Morgenstern, 1944):
• Existence of Utility Function: If an agent’s preferences obey the axioms of utility, then there exists a function U such that U(A) > U(B) if and only if A is preferred to B, and U(A) = U(B) if
and only if the agent is indifferent between A and B.
U(A) > U(B) ⇔ A ≻ B
U(A) = U(B) ⇔ A ∼ B
• Expected Utility of a Lottery: The utility of a lottery is the sum of the probability of each outcome times the utility of that outcome.
U([p~1~, S~1~; . . . ; p~n~, S~n~]) = ∑~i~ p~i~ U(S~i~) .
3 We can account for the enjoyment of gambling by encoding gambling events into the state description; for example, “Have $10 and gambled” could be preferred to “Have $10 and didn’t gamble.”
In other words, once the probabilities and utilities of the possible outcome states are specified, the utility of a compound lottery involving those states is completely determined. Because the
outcome of a nondeterministic action is a lottery, it follows that an agent can act rationally— that is, consistently with its preferences—only by choosing an action that maximizes expected utility
according to Equation (16.1).
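A tiny sketch of this rule is given below; the states and numbers are invented for illustration. It also shows decomposability at work: a compound lottery and its flattened equivalent have the same utility.

```python
def utility_of(outcome, U):
    """Utility of an outcome: an atomic state, or a lottery given as (p, outcome) pairs."""
    if isinstance(outcome, list):                        # a nested lottery
        return sum(p * utility_of(o, U) for p, o in outcome)
    return U[outcome]                                    # an atomic state

U = {"A": 1.0, "B": 0.6, "C": 0.0}
compound = [(0.5, "A"), (0.5, [(0.25, "B"), (0.75, "C")])]
flat     = [(0.5, "A"), (0.125, "B"), (0.375, "C")]      # the decomposed form

print(utility_of(compound, U), utility_of(flat, U))      # both 0.575
```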
The preceding theorems establish that a utility function exists for any rational agent, but they do not establish that it is unique. It is easy to see, in fact, that an agent’s behavior would not
change if its utility function U(S) were transformed according to
U′(S) = aU(S) + b , (16.2)
where a and b are constants and a > 0; an affine transformation.^4^ This fact was noted in Chapter 5 for two-player games of chance; here, we see that it is completely general.
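A quick illustrative check of this invariance, with made-up lotteries and utilities, is shown below: applying any positive affine transformation to the utilities leaves the preferred lottery unchanged.

```python
lotteries = {"L1": [(0.8, 10.0), (0.2, 0.0)],   # (probability, utility) pairs, invented
             "L2": [(1.0, 7.0)]}

def eu(lottery, transform=lambda u: u):
    return sum(p * transform(u) for p, u in lottery)

best_original = max(lotteries, key=lambda L: eu(lotteries[L]))
best_affine   = max(lotteries, key=lambda L: eu(lotteries[L], lambda u: 3 * u + 5))
print(best_original, best_affine)    # L1 L1: the choice is unaffected by U' = 3U + 5
```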
As in game-playing, in a deterministic environment an agent just needs a preference ranking on states—the numbers don’t matter. This is called a value function or ordinal utility function.
It is important to remember that the existence of a utility function that describes an agent’s preference behavior does not necessarily mean that the agent is explicitly maximizing that utility
function in its own deliberations. As we showed in Chapter 2, rational behavior can be generated in any number of ways. By observing a rational agent’s preferences, however, an observer can construct
the utility function that represents what the agent is actually trying to achieve (even if the agent doesn’t know it).
4 In this sense, utilities resemble temperatures: a temperature in Fahrenheit is 1.8 times the Celsius temperature plus 32. You get the same results in either measurement system.
Utility is a function that maps from lotteries to real numbers. We know there are some axioms on utilities that all rational agents must obey. Is that all we can say about utility functions? Strictly
speaking, that is it: an agent can have any preferences it likes. For example, an agent might prefer to have a prime number of dollars in its bank account; in which case, if it had $16 it would give
away $3. This might be unusual, but we can’t call it irrational. An agent might prefer a dented 1973 Ford Pinto to a shiny new Mercedes. Preferences can also interact: for example, the agent might
prefer prime numbers of dollars only when it owns the Pinto, but when it owns the Mercedes, it might prefer more dollars to fewer. Fortunately, the preferences of real agents are usually more
systematic, and thus easier to deal with.
Utility assessment and utility scales
If we want to build a decision-theoretic system that helps the agent make decisions or acts on his or her behalf, we must first work out what the agent’s utility function is. This process, often
called preference elicitation, involves presenting choices to the agent and using the observed preferences to pin down the underlying utility function. Equation (16.2) says that there is no absolute
scale for utilities, but it is helpful, nonetheless, to establish some scale on which utilities can be recorded and compared for any particular problem. A scale can be established by fixing the
utilities of any two particular outcomes, just as we fix a temperature scale by fixing the freezing point and boiling point of water. Typically, we fix the utility of a “best possible prize” at U(S)
= u~⊤~ and a “worst possible catastrophe” at U(S) = u~⊥~. Normalized utilities use a scale with u~⊥~ = 0 and u~⊤~ = 1. Given a utility scale between u~⊤~ and u~⊥~, we can assess the utility of any particular prize S by asking the agent to choose between S and a standard lottery [p, u~⊤~; (1−p), u~⊥~].
The probability p is adjusted until the agent is indifferent between S and the standard lottery. Assuming normalized utilities, the utility of S is given by p. Once this is done for each prize, the
utilities for all lotteries involving those prizes are determined.
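The adjustment of p can be automated once we have a way to query the agent. The sketch below is a toy illustration: it binary-searches for the indifference probability against a simulated respondent whose true normalized utility for the prize is 0.7; the prefers_lottery stub and that value are assumptions of the sketch, and a real system would query a person instead.

```python
# Sketch: assessing a normalized utility by adjusting p in the standard lottery
# [p, u_top; (1-p), u_bottom] until the agent is indifferent. The "agent" here is
# simulated with a hidden true utility; in practice the answers come from a human.

TRUE_UTILITY_OF_PRIZE = 0.7   # hidden from the elicitation procedure

def prefers_lottery(p):
    """Does the (simulated) agent prefer the lottery [p, best; 1-p, worst] to the prize?"""
    return p > TRUE_UTILITY_OF_PRIZE   # EU of the lottery is p under normalized utilities

def assess_utility(tolerance=1e-3):
    low, high = 0.0, 1.0
    while high - low > tolerance:
        p = (low + high) / 2
        if prefers_lottery(p):
            high = p      # lottery too attractive: lower p
        else:
            low = p       # prize preferred: raise p
    return (low + high) / 2

print(round(assess_utility(), 3))   # -> approximately 0.7
```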
In medical, transportation, and environmental decision problems, among others, people’s lives are at stake. In such cases, u⊥ is the value assigned to immediate death (or perhaps many deaths).
Although nobody feels comfortable with putting a value on human life, it is a fact that tradeoffs are made all the time. Aircraft are given a complete overhaul at intervals determined by trips and
miles flown, rather than after every trip. Cars are manufactured in a way that trades off costs against accident survival rates. Paradoxically, a refusal to “put a monetary value on life” means that
life is often undervalued. Ross Shachter relates an experience with a government agency that commissioned a study on removing asbestos from schools. The decision analysts performing the study assumed
a particular dollar value for the life of a school-age child, and argued that the rational choice under that assumption was to remove the asbestos. The agency, morally outraged at the idea of setting
the value of a life, rejected the report out of hand. It then decided against asbestos removal—implicitly asserting a lower value for the life of a child than that assigned by the analysts.
Some attempts have been made to find out the value that people place on their own lives. One common “currency” used in medical and safety analysis is the micromort, a one in a million chance of
death. If you ask people how much they would pay to avoid a risk—for example, to avoid playing Russian roulette with a million-barreled revolver—they will respond with very large numbers, perhaps
tens of thousands of dollars, but their actual behavior reflects a much lower monetary value for a micromort. For example, driving in a car for 230 miles incurs a risk of one micromort; over the life
of your car—say, 92,000 miles— that’s 400 micromorts. People appear to be willing to pay about $10,000 (at 2009 prices) more for a safer car that halves the risk of death, or about $50 per micromort.
A number of studies have confirmed a figure in this range across many individuals and risk types. Of course, this argument holds only for small risks. Most people won’t agree to kill themselves for
$50 million.
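The $50-per-micromort figure quoted above follows from a line of arithmetic, repeated in the sketch below using the numbers in the text (230 miles per micromort, 92,000 lifetime miles, and a $10,000 premium for halving the risk).

```python
# Sketch: the $50-per-micromort figure implied by the car-safety example above.
miles_per_micromort = 230
lifetime_miles = 92_000
premium_for_halving_risk = 10_000            # dollars, at 2009 prices

lifetime_micromorts = lifetime_miles / miles_per_micromort   # ~400
micromorts_avoided = lifetime_micromorts / 2                 # halving the risk
price_per_micromort = premium_for_halving_risk / micromorts_avoided

print(round(lifetime_micromorts), round(price_per_micromort))   # -> 400 50
```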
Another measure is the QALY, or quality-adjusted life year. Patients with a disability are willing to accept a shorter life expectancy to be restored to full health. For example, kidney patients on
average are indifferent between living two years on a dialysis machine and one year at full health.
The utility of money
Utility theory has its roots in economics, and economics provides one obvious candidate for a utility measure: money (or more specifically, an agent’s total net assets). The almost universal
exchangeability of money for all kinds of goods and services suggests that money plays a significant role in human utility functions.
It will usually be the case that an agent prefers more money to less, all other things being equal. We say that the agent exhibits a monotonic preference for more money. This does not mean that money
behaves as a utility function, because it says nothing about preferences between lotteries involving money.
Suppose you have triumphed over the other competitors in a television game show. The host now offers you a choice: either you can take the $1,000,000 prize or you can gamble it on the flip of a coin.
If the coin comes up heads, you end up with nothing, but if it comes up tails, you get $2,500,000. If you’re like most people, you would decline the gamble and pocket the million. Are you being irrational?
Assuming the coin is fair, the expected monetary value (EMV) of the gamble is 1/2 ($0)+ 1/2 ($2,500,000) = $1,250,000, which is more than the original $1,000,000. But that does not necessarily mean
that accepting the gamble is a better decision. Suppose we use Sn to denote the state of possessing total wealth $n, and that your current wealth is $k. Then the expected utilities of the two actions
of accepting and declining the gamble are
EU(Accept) = ½ U(S_k) + ½ U(S_{k+2,500,000}) ,
EU(Decline) = U(S_{k+1,000,000}) .
To determine what to do, we need to assign utilities to the outcome states. Utility is not directly proportional to monetary value, because the utility for your first million is very high (or so they
say), whereas the utility for an additional million is smaller. Suppose you assign a utility of 5 to your current financial status (S_k), a 9 to the state S_{k+2,500,000}, and an 8 to the state S_{k+1,000,000}. Then the rational action would be to decline, because the expected utility of accepting is only 7 (less than the 8 for declining). On the other hand, a billionaire would most
likely have a utility function that is locally linear over the range of a few million more, and thus would accept the gamble.
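Plugging the illustrative utilities just mentioned (5, 8, and 9) into the two expected-utility expressions above gives the comparison directly:

```python
# Sketch: expected utility of the game-show gamble with the utilities assumed in the text.
U = {
    "S_k":             5,   # current wealth
    "S_k+1,000,000":   8,   # keep the prize
    "S_k+2,500,000":   9,   # win the gamble
}

eu_accept  = 0.5 * U["S_k"] + 0.5 * U["S_k+2,500,000"]    # 7.0
eu_decline = U["S_k+1,000,000"]                            # 8

print(eu_accept, eu_decline)   # declining maximizes expected utility
```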
In a pioneering study of actual utility functions, Grayson (1960) found that the utility of money was almost exactly proportional to the logarithm of the amount. (This idea was first suggested by
Bernoulli (1738); see Exercise 16.3.) One particular utility curve, for a certain Mr. Beard, is shown in Figure 16.2(a). The data obtained for Mr. Beard’s preferences are consistent with a utility
U(S_{k+n}) = −263.31 + 22.09 log(n + 150,000)
for the range between n = −$150,000 and n = $800,000. We should not assume that this is the definitive utility function for monetary value, but it is likely that most people have a utility function
that is concave for positive wealth. Going into debt is bad, but preferences between different levels of debt can display a reversal of the concavity associated with positive wealth. For example,
someone already $10,000,000 in debt might well accept a gamble on a fair coin with a gain of $10,000,000 for heads and a loss of $20,000,000 for tails.5 This yields the S-shaped curve shown in Figure 16.2(b).
If we restrict our attention to the positive part of the curves, where the slope is decreasing, then for any lottery L, the utility of being faced with that lottery is less than the utility of being
handed the expected monetary value of the lottery as a sure thing:
U(L) < U(S_{EMV(L)}) .
That is, agents with curves of this shape are risk-averse: they prefer a sure thing with a payoff that is less than the expected monetary value of a gamble. On the other hand, in the “desperate”
region at large negative wealth in Figure 16.2(b), the behavior is risk-seeking.
5 Such behavior might be called desperate, but it is rational if one is already in a desperate situation.
The value an agent will accept in lieu of a lottery is called the certainty equivalent of the lottery. Studies have shown that most people will accept about $400 in lieu of a gamble that gives $1000
half the time and $0 the other half—that is, the certainty equivalent of the lottery is $400, while the EMV is $500. The difference between the EMV of a lottery and its certainty equivalent is called
the insurance premium. Risk aversion is the basis for the insurance industry, because it means that insurance premiums are positive. People would rather pay a small insurance premium than gamble the
price of their house against the chance of a fire. From the insurance company’s point of view, the price of the house is very small compared with the firm’s total reserves. This means that the
insurer’s utility curve is approximately linear over such a small region, and the gamble costs the company almost nothing.
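A small numerical sketch makes the certainty equivalent concrete. It assumes, purely for illustration, a logarithmic utility of total wealth and a starting wealth of $10,000 (neither figure is from the text) and computes the certainty equivalent and insurance premium of the $1000-or-nothing coin flip.

```python
# Sketch: risk aversion under a concave (logarithmic) utility of total wealth.
# The $10,000 starting wealth is an arbitrary assumption for illustration.
import math

wealth = 10_000
def utility(total_wealth):
    return math.log(total_wealth)            # concave, so the agent is risk-averse

lottery = [(0.5, 0), (0.5, 1000)]             # win $0 or $1000 with equal probability
emv = sum(p * x for p, x in lottery)           # $500
eu  = sum(p * utility(wealth + x) for p, x in lottery)

# Certainty equivalent: the sure amount c satisfying utility(wealth + c) == eu.
certainty_equivalent = math.exp(eu) - wealth
insurance_premium = emv - certainty_equivalent

print(round(certainty_equivalent, 2), round(insurance_premium, 2))
# The certainty equivalent (~$488) is below the EMV ($500), as risk aversion requires.
```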
Notice that for small changes in wealth relative to the current wealth, almost any curve will be approximately linear. An agent that has a linear curve is said to be risk-neutral. For gambles with
small sums, therefore, we expect risk neutrality. In a sense, this justifies the simplified procedure that proposed small gambles to assess probabilities and to justify the axioms of probability in
Section 13.2.3.
Expected utility and post-decision disappointment
The rational way to choose the best action, a*, is to maximize expected utility:
a* = argmax_a EU(a | e) .
If we have calculated the expected utility correctly according to our probability model, and if the probability model correctly reflects the underlying stochastic processes that generate the
outcomes, then, on average, we will get the utility we expect if the whole process is repeated many times.
In reality, however, our model usually oversimplifies the real situation, either because we don’t know enough (e.g., when making a complex investment decision) or because the computation of the true
expected utility is too difficult (e.g., when estimating the utility of successor states of the root node in backgammon). In that case, we are really working with estimates ÊU (a|e) of the true
expected utility. We will assume, kindly perhaps, that the estimates are unbiased, that is, the expected value of the error, E(ÊU(a|e) − EU(a|e)), is zero. In that case, it still seems reasonable
to choose the action with the highest estimated utility and to expect to receive that utility, on average, when the action is executed.
Unfortunately, the real outcome will usually be significantly worse than we estimated, even though the estimate was unbiased! To see why, consider a decision problem in which there are k choices,
each of which has a true expected utility of 0. Suppose that the error in each utility estimate has zero mean and standard deviation of 1, shown as the bold curve in Figure 16.3. Now, as we actually
start to generate the estimates, some of the errors will be negative (pessimistic) and some will be positive (optimistic). Because we select the action with the highest utility estimate, we are
obviously favoring the overly optimistic estimates, and that is the source of the bias. It is a straightforward matter to calculate the distribution of the maximum of the k estimates (see Exercise
16.11) and hence quantify the extent of our disappointment. The curve in Figure 16.3 for k = 3 has a mean around 0.85, so the average disappointment will be about 85% of the standard deviation in the
utility estimates.
[Figure 16.3: the unit-variance error distribution of the utility estimates (bold curve) and the distribution of the maximum of k estimates, for several values of k.]
With more choices, extremely optimistic estimates are more likely to arise: for k = 30, the disappointment will be around twice the standard deviation in the estimates.
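The size of the disappointment is easy to reproduce by simulation; the sketch below draws k unbiased, unit-variance estimates for actions whose true utility is zero and averages the maximum.

```python
# Sketch: simulating the optimizer's curse. True utilities are all 0; the estimates are
# unbiased with standard deviation 1, yet the chosen (maximum) estimate is biased upward.
import random

def average_max_estimate(k, trials=100_000):
    total = 0.0
    for _ in range(trials):
        estimates = [random.gauss(0, 1) for _ in range(k)]
        total += max(estimates)
    return total / trials

for k in (1, 3, 30):
    print(k, round(average_max_estimate(k), 2))
# roughly: 1 -> 0.00, 3 -> 0.85, 30 -> 2.04
```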
This tendency for the estimated expected utility of the best choice to be too high is called the optimizer’s curse (Smith and Winkler, 2006). It afflicts even the most seasoned decision analysts and
statisticians. Serious manifestations include believing that an exciting new drug that has cured 80% of patients in a trial will cure 80% of patients (it’s been chosen from k = thousands of candidate drugs) or that a mutual fund advertised as having above-average returns will continue to have them (it’s been chosen to appear in the advertisement out of k = dozens of funds in the company’s overall
portfolio). It can even be the case that what appears to be the best choice may not be, if the variance in the utility estimate is high: a drug, selected from thousands tried, that has cured 9 of 10
patients is probably worse than one that has cured 800 of 1000.
The optimizer’s curse crops up everywhere because of the ubiquity of utility-maximizing selection processes, so taking the utility estimates at face value is a bad idea. We can avoid the curse by
using an explicit probability model P(ÊU |EU ) of the error in the utility estimates. Given this model and a prior P(EU ) on what we might reasonably expect the utilities to be, we treat the utility
estimate, once obtained, as evidence and compute the posterior distribution for the true utility using Bayes’ rule.
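For the special case of a Gaussian prior on the true utility and Gaussian estimation error, Bayes' rule reduces to a simple shrinkage of the raw estimate toward the prior mean; the sketch below applies that formula with illustrative prior and noise parameters, which are assumptions of the sketch rather than values from the text.

```python
# Sketch: correcting a utility estimate with a Gaussian prior on the true utility.
# Prior: EU ~ N(mu0, sigma0^2).  Estimate: EUhat | EU ~ N(EU, sigma_e^2).
# The posterior mean shrinks the raw estimate toward the prior mean.

def posterior_mean(estimate, mu0=0.0, sigma0=1.0, sigma_e=1.0):
    weight = sigma0**2 / (sigma0**2 + sigma_e**2)
    return mu0 + weight * (estimate - mu0)

raw_estimate = 2.0                       # looks great at face value...
print(posterior_mean(raw_estimate))      # -> 1.0: the posterior is much less optimistic
```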
Human judgment and irrationality
Decision theory is a normative theory: it describes how a rational agent should act. A descriptive theory, on the other hand, describes how actual agents—for example, humans— really do act. The
application of economic theory would be greatly enhanced if the two coincided, but there appears to be some experimental evidence to the contrary. The evidence suggests that humans are “predictably
irrational” (Ariely, 2009).
The best-known problem is the Allais paradox (Allais, 1953). People are given a choice between lotteries A and B and then between C and D, which have the following prizes:
A : 80% chance of $4000        C : 20% chance of $4000
B : 100% chance of $3000       D : 25% chance of $3000
Most people consistently prefer B over A (taking the sure thing), and C over D (taking the higher EMV). The normative analysis disagrees! We can see this most easily if we use the freedom implied by
Equation (16.2) to set U($0) = 0. In that case, B ≻ A implies that U($3000) > 0.8 U($4000), whereas C ≻ D implies exactly the reverse. In other words, there is no utility function that is
consistent with these choices. One explanation for the apparently irrational preferences is the certainty effect (Kahneman and Tversky, 1979): people are strongly attracted to gains that are certain.
There are several reasons why this may be so. First, people may prefer to reduce their computational burden; by choosing certain outcomes, they don’t have to compute with probabilities. But the
effect persists even when the computations involved are very easy ones. Second, people may distrust the legitimacy of the stated probabilities. I trust that a coin flip is roughly 50/50 if I have
control over the coin and the flip, but I may distrust the result if the flip is done by someone with a vested interest in the outcome.6 In the presence of distrust, it might be better to go for the
sure thing.7 Third, people may be accounting for their emotional state as well as their financial state. People know they would experience regret if they gave up a certain reward (B) for an 80%
chance at a higher reward and then lost. In other words, if A is chosen, there is a 20% chance of getting no money and feeling like a complete idiot, which is worse than just getting no money. So
perhaps people who choose B over A and C over D are not being irrational; they are just saying that they are willing to give up $200 of EMV to avoid a 20% chance of feeling like an idiot.
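The inconsistency can be checked mechanically: with U($0) = 0, B ≻ A requires U($3000) > 0.8 U($4000), while C ≻ D requires 0.2 U($4000) > 0.25 U($3000), which is the same inequality reversed. The sketch below scans a grid of candidate utility values and confirms that no assignment satisfies both preferences.

```python
# Sketch: no utility function with U($0) = 0 is consistent with both B > A and C > D.
# We scan a grid of candidate values for U($3000) and U($4000).

def consistent(u3000, u4000):
    prefers_B_over_A = u3000 > 0.80 * u4000
    prefers_C_over_D = 0.20 * u4000 > 0.25 * u3000
    return prefers_B_over_A and prefers_C_over_D

hits = [(a / 100, b / 100)
        for a in range(1, 101) for b in range(1, 101)
        if consistent(a / 100, b / 100)]
print(hits)   # -> [] : the two preferences are jointly unsatisfiable
```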
A related problem is the Ellsberg paradox. Here the prizes are fixed, but the probabilities are underconstrained. Your payoff will depend on the color of a ball chosen from an urn. You are told that
the urn contains 1/3 red balls, and 2/3 either black or yellow balls, but you don’t know how many black and how many yellow. Again, you are asked whether you prefer lottery A or B; and then C or D:
A : $100 for a red ball        C : $100 for a red or yellow ball
B : $100 for a black ball      D : $100 for a black or yellow ball
It should be clear that if you think there are more red than black balls then you should prefer A over B and C over D; if you think there are fewer red than black you should prefer the opposite. But
it turns out that most people prefer A over B and also prefer D over C , even though there is no state of the world for which this is rational. It seems that people have ambiguity aversion: A gives
you a 1/3 chance of winning, while B could be anywhere between 0 and 2/3. Similarly, D gives you a 2/3 chance, while C could be anywhere between 1/3 and 3/3. Most people elect the known probability
rather than the unknown unknowns.
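One can also verify that no single belief about the urn's composition explains the popular pattern: for every possible number of black balls, A beats B exactly when C beats D. The sketch below enumerates the compositions of a 90-ball urn (30 red and 60 black-or-yellow, a concrete framing assumed here for illustration).

```python
# Sketch: Ellsberg urn with 30 red balls and 60 balls that are black or yellow.
# For every possible composition, preferring A over B entails preferring C over D.
RED, TOTAL = 30, 90

for black in range(0, 61):
    yellow = 60 - black
    ev_A = 100 * RED / TOTAL
    ev_B = 100 * black / TOTAL
    ev_C = 100 * (RED + yellow) / TOTAL
    ev_D = 100 * (black + yellow) / TOTAL
    assert (ev_A > ev_B) == (ev_C > ev_D)   # never A > B together with D > C
print("No composition justifies choosing A over B and D over C.")
```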
6 For example, the mathematician/magician Persi Diaconis can make a coin flip come out the way he wants every time (Landhuis, 2004).
7 Even the sure thing may not be certain. Despite cast-iron promises, we have not yet received that $27,000,000 from the Nigerian bank account of a previously unknown deceased relative.
Yet another problem is that the exact wording of a decision problem can have a big impact on the agent’s choices; this is called the framing effect. Experiments show that people like a medical procedure that is described as having a “90% survival rate” about twice as much as one described as having a “10% death rate,” even though these two statements mean exactly the same thing. This
discrepancy in judgment has been found in multiple experiments and is about the same whether the subjects were patients in a clinic, statistically sophisticated business school students, or
experienced doctors.
People feel more comfortable making relative utility judgments rather than absolute ones. I may have little idea how much I might enjoy the various wines offered by a restaurant. The restaurant takes
advantage of this by offering a $200 bottle that it knows nobody will buy, but which serves to skew upward the customer’s estimate of the value of all wines and make the $55 bottle seem like a
bargain. This is called the anchoring effect.
If human informants insist on contradictory preference judgments, there is nothing that automated agents can do to be consistent with them. Fortunately, preference judgments made by humans are often
open to revision in the light of further consideration. Paradoxes like the Allais paradox are greatly reduced (but not eliminated) if the choices are explained better. In work at the Harvard Business
School on assessing the utility of money, Keeney and Raiffa (1976, p. 210) found the following:
Subjects tend to be too risk-averse in the small and therefore . . . the fitted utility functions exhibit unacceptably large risk premiums for lotteries with a large spread. . . . Most of the
subjects, however, can reconcile their inconsistencies and feel that they have learned an important lesson about how they want to behave. As a consequence, some subjects cancel their automobile
collision insurance and take out more term insurance on their lives.
The evidence for human irrationality is also questioned by researchers in the field of evolutionary psychology, who point to the fact that our brain’s decision-making mechanisms did not evolve to
solve word problems with probabilities and prizes stated as decimal numbers. Let us grant, for the sake of argument, that the brain has built-in neural mechanisms for computing with probabilities and
utilities, or something functionally equivalent; if so, the required inputs would be obtained through accumulated experience of outcomes and rewards rather than through linguistic presentations of
numerical values. It is far from obvious that we can directly access the brain’s built-in neural mechanisms by presenting decision problems in linguistic/numerical form. The very fact that different
wordings of the same decision problem elicit different choices suggests that the decision problem itself is not getting through. Spurred by this observation, psychologists have tried presenting
problems in uncertain reasoning and decision making in “evolutionarily appropriate” forms; for example, instead of saying “90% survival rate,” the experimenter might show 100 stick-figure animations
of the operation, where the patient dies in 10 of them and survives in 90. (Boredom is a complicating factor in these experiments!) With decision problems posed in this way, people seem to be much
closer to rational behavior than previously suspected.
Decision making in the field of public policy involves high stakes, in both money and lives. For example, in deciding what levels of harmful emissions to allow from a power plant, policy makers must
weigh the prevention of death and disability against the benefit of the power and the economic burden of mitigating the emissions. Siting a new airport requires consideration of the disruption caused
by construction; the cost of land; the distance from centers of population; the noise of flight operations; safety issues arising from local topography and weather conditions; and so on. Problems
like these, in which outcomes are characterized by two or more attributes, are handled by multiattribute utility theory.
We will call the attributes X = X_1, . . . , X_n; a complete vector of assignments will be x = ⟨x_1, . . . , x_n⟩, where each x_i is either a numeric value or a discrete value with an assumed
ordering on values. We will assume that higher values of an attribute correspond to higher utilities, all other things being equal. For example, if we choose AbsenceOfNoise as an attribute in the
airport problem, then the greater its value, the better the solution.8 We begin by examining cases in which decisions can be made without combining the attribute values into a single utility value.
Then we look at cases in which the utilities of attribute combinations can be specified very concisely.
Suppose that airport site S~1~ costs less, generates less noise pollution, and is safer than site S~2~. One would not hesitate to reject S~2~. We then say that there is strict dominance of S~1~ over
S~2~. In general, if an option is of lower value on all attributes than some other option, it need not be considered further. Strict dominance is often very useful in narrowing down the field of
choices to the real contenders, although it seldom yields a unique choice. Figure 16.4(a) shows a schematic diagram for the two-attribute case.
That is fine for the deterministic case, in which the attribute values are known for sure. What about the general case, where the outcomes are uncertain? A direct analog of strict dominance can be
constructed, where, despite the uncertainty, all possible concrete outcomes for S~1~ strictly dominate all possible outcomes for S~2~. (See Figure 16.4(b).) Of course, this will probably occur even
less often than in the deterministic case.
Fortunately, there is a more useful generalization called stochastic dominance, which occurs very frequently in real problems. Stochastic dominance is easiest to understand in the context of a single
attribute. Suppose we believe that the cost of siting the airport at S~1~ is uniformly distributed between $2.8 billion and $4.8 billion and that the cost at S~2~ is uniformly distributed between $3
billion and $5.2 billion. Figure 16.5(a) shows these distributions, with cost plotted as a negative value. Then, given only the information that utility decreases with
8 In some cases, it may be necessary to subdivide the range of values so that utility varies monotonically within each range. For example, if the RoomTemperature attribute has a utility peak at 70◦F,
we would split it into two attributes measuring the difference from the ideal, one colder and one hotter. Utility would then be monotonically increasing in each attribute.
[Figure 16.4: Strict dominance: (a) deterministic attribute values; (b) uncertain attribute values.]
[Figure 16.5: (a) Probability distributions for the negative cost of the two airport sites; (b) their cumulative distributions.]
cost, we can say that S~1~ stochastically dominates S~2~ (i.e., S~2~ can be discarded). It is important to note that this does not follow from comparing the expected costs. For example, if we knew
the cost of S~1~ to be exactly $3.8 billion, then we would be unable to make a decision without additional information on the utility of money. (It might seem odd that more information on the cost of
S~1~ could make the agent less able to decide. The paradox is resolved by noting that in the absence of exact cost information, the decision is easier to make but is more likely to be wrong.)
The exact relationship between the attribute distributions needed to establish stochastic dominance is best seen by examining the cumulative distributions, shown in Figure 16.5(b). (See also Appendix
A.) The cumulative distribution measures the probability that the cost is less than or equal to any given amount—that is, it integrates the original distribution. If the cumulative distribution for
S~1~ is always to the right of the cumulative distribution for S~2~, then, stochastically speaking, S~1~ is cheaper than S~2~. Formally, if two actions A~1~ and A~2~ lead to probability distributions
p~1~(x) and p~2~(x) on attribute X, then A~1~ stochastically dominates A~2~ on X if
∀x   ∫_{−∞}^{x} p_1(x′) dx′ ≤ ∫_{−∞}^{x} p_2(x′) dx′ .
The relevance of this definition to the selection of optimal decisions comes from the following property: if A_1 stochastically dominates A_2, then for any monotonically nondecreasing utility function U(x), the expected utility of A_1 is at least as high as the expected utility of A_2. Hence, if an action is stochastically dominated by another action on all attributes, then it can be discarded.
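For the airport-cost example, the dominance test can be carried out numerically by working with the attribute X = −cost (so that more is better) and comparing the two cumulative distributions on a grid of points; the sketch below does this for the uniform cost distributions quoted above.

```python
# Sketch: checking stochastic dominance for the two airport sites.
# Costs (in $ billions): S1 ~ Uniform(2.8, 4.8), S2 ~ Uniform(3.0, 5.2).
# We work with the attribute X = -cost, so that higher values are better.

def uniform_cdf(x, lo, hi):
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def cdf_neg_cost(x, cost_lo, cost_hi):
    # P(-cost <= x) = P(cost >= -x) = 1 - P(cost <= -x)
    return 1.0 - uniform_cdf(-x, cost_lo, cost_hi)

def stochastically_dominates(lo1, hi1, lo2, hi2, steps=1000):
    xs = [-6 + 12 * i / steps for i in range(steps + 1)]
    return all(cdf_neg_cost(x, lo1, hi1) <= cdf_neg_cost(x, lo2, hi2) for x in xs)

print(stochastically_dominates(2.8, 4.8, 3.0, 5.2))   # True: S1 dominates S2
print(stochastically_dominates(3.0, 5.2, 2.8, 4.8))   # False
```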
The stochastic dominance condition might seem rather technical and perhaps not so easy to evaluate without extensive probability calculations. In fact, it can be decided very easily in many cases.
Suppose, for example, that the construction transportation cost depends on the distance to the supplier. The cost itself is uncertain, but the greater the distance, the greater the cost. If S~1~ is
closer than S~2~, then S~1~ will dominate S~2~ on cost. Although we will not present them here, there exist algorithms for propagating this kind of qualitative information among uncertain variables
in qualitative probabilistic networks, enabling a system to make rational decisions based on stochastic dominance, without using any numeric values.
Preference structure and multiattribute utility
Suppose we have n attributes, each of which has d distinct possible values. To specify the complete utility function U(x~1~, . . . , x~n~), we need d^n^ values in the worst case. Now, the worst case
corresponds to a situation in which the agent’s preferences have no regularity at all. Multiattribute utility theory is based on the supposition that the preferences of typical agents have much more
structure than that. The basic approach is to identify regularities in the preference behavior we would expect to see and to use what are called representation theorems to show that an agent with a
certain kind of preference structure has a utility function
U(x_1, . . . , x_n) = F[f_1(x_1), . . . , f_n(x_n)] ,
where F is, we hope, a simple function such as addition. Notice the similarity to the use of Bayesian networks to decompose the joint probability of several random variables.
Preferences without uncertainty
Let us begin with the deterministic case. Remember that for deterministic environments the agent has a value function V (x~1~, . . . , x~n~); the aim is to represent this function concisely. The
basic regularity that arises in deterministic preference structures is called preference independence. Two attributes X_1 and X_2 are preferentially independent of a third attribute X_3 if the preference between outcomes ⟨x_1, x_2, x_3⟩ and ⟨x_1′, x_2′, x_3⟩ does not depend on the particular value x_3 for attribute X_3. Going back to the airport example, where we have (among other attributes) Noise, Cost, and Deaths to consider, one may propose that Noise and Cost are preferentially independent of Deaths. For example, if we prefer a state with 20,000 people residing in the
flight path and a construction cost of $4 billion over a state with 70,000 people residing in the flight path and a cost of $3.7 billion when the safety level is 0.06 deaths per million passenger
miles in both cases, then we would have the same preference when the safety level is 0.12 or 0.03; and the same independence would hold for preferences between any other pair of values for Noise and
Cost . It is also apparent that Cost and Deaths are preferentially independent of Noise and that Noise and Deaths are preferentially independent of Cost . We say that the set of attributes
{Noise,Cost ,Deaths} exhibits mutual preferential independence (MPI).
MPI says that, whereas each attribute may be important, it does not affect the way in which one trades off the other attributes against each other.
Mutual preferential independence is something of a mouthful, but thanks to a remarkable theorem due to the economist Gérard Debreu (1960), we can derive from it a very simple form for the agent’s
value function: If attributes X_1, . . . , X_n are mutually preferentially independent, then the agent’s preference behavior can be described as maximizing the function
V(x_1, . . . , x_n) = Σ_i V_i(x_i) ,
where each V_i is a value function referring only to the attribute X_i. For example, it might well be the case that the airport decision can be made using a value function
V(noise, cost, deaths) = −noise × 10^4 − cost − deaths × 10^12 .
A value function of this type is called an additive value function. Additive functions are an extremely natural way to describe an agent’s preferences and are valid in many real-world situations. For
n attributes, assessing an additive value function requires assessing n separate one-dimensional value functions rather than one n-dimensional function; typically, this represents an exponential
reduction in the number of preference experiments that are needed. Even when MPI does not strictly hold, as might be the case at extreme values of the attributes, an additive value function might
still provide a good approximation to the agent’s preferences. This is especially true when the violations of MPI occur in portions of the attribute ranges that are unlikely to occur in practice.
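The sketch below evaluates an additive value function of exactly this form for two hypothetical sites; the attribute values (and the units chosen for them) are invented for illustration, while the weights are those of the formula above.

```python
# Sketch: an additive value function for the airport problem, using the weights in the
# text. The candidate sites and their attribute values are invented for illustration,
# and the units are chosen only so the three terms are comparable in this demo.

def value(noise, cost, deaths):
    """Additive value function: a sum of single-attribute value functions."""
    return -noise * 1e4 - cost - deaths * 1e12

sites = {
    "S1": dict(noise=20_000, cost=4.0e9, deaths=0.06e-6),
    "S2": dict(noise=70_000, cost=3.7e9, deaths=0.06e-6),
}
for name, attrs in sites.items():
    print(name, value(**attrs))
print("best:", max(sites, key=lambda s: value(**sites[s])))   # -> S1
```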
To understand MPI better, it helps to look at cases where it doesn’t hold. Suppose you are at a medieval market, considering the purchase of some hunting dogs, some chickens, and some wicker cages
for the chickens. The hunting dogs are very valuable, but if you don’t have enough cages for the chickens, the dogs will eat the chickens; hence, the tradeoff between dogs and chickens depends
strongly on the number of cages, and MPI is violated. The existence of these kinds of interactions among various attributes makes it much harder to assess the overall value function.
Preferences with uncertainty
When uncertainty is present in the domain, we also need to consider the structure of preferences between lotteries and to understand the resulting properties of utility functions, rather than just
value functions. The mathematics of this problem can become quite complicated, so we present just one of the main results to give a flavor of what can be done. The reader is referred to Keeney and
Raiffa (1976) for a thorough survey of the field.
The basic notion of utility independence extends preference independence to cover lotteries: a set of attributes X is utility independent of a set of attributes Y if preferences between lotteries on
the attributes in X are independent of the particular values of the attributes in Y. A set of attributes is mutually utility independent (MUI) if each of its subsets is utility-independent of the
remaining attributes. Again, it seems reasonable to propose that the airport attributes are MUI.
MUI implies that the agent’s behavior can be described using a multiplicative utility function (Keeney, 1974). The general form of a multiplicative utility function is best seen by looking at the
case for three attributes. For conciseness, we use U_i to mean U_i(x_i):
U = k_1 U_1 + k_2 U_2 + k_3 U_3 + k_1 k_2 U_1 U_2 + k_2 k_3 U_2 U_3 + k_3 k_1 U_3 U_1 + k_1 k_2 k_3 U_1 U_2 U_3 .
Although this does not look very simple, it contains just three single-attribute utility functions and three constants. In general, an n-attribute problem exhibiting MUI can be modeled using n
single-attribute utilities and n constants. Each of the single-attribute utility functions can be developed independently of the other attributes, and this combination will be guaranteed to generate
the correct overall preferences. Additional assumptions are required to obtain a purely additive utility function.
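As a concrete sketch, the three-attribute multiplicative form can be evaluated directly from three single-attribute utilities and three constants; the particular values below are arbitrary.

```python
# Sketch: the three-attribute multiplicative utility function given above.
# The single-attribute utilities U_i and constants k_i are arbitrary illustrative values.

def multiplicative_utility(u, k):
    u1, u2, u3 = u
    k1, k2, k3 = k
    return (k1*u1 + k2*u2 + k3*u3
            + k1*k2*u1*u2 + k2*k3*u2*u3 + k3*k1*u3*u1
            + k1*k2*k3*u1*u2*u3)

u = (0.7, 0.4, 0.9)     # U_1(x_1), U_2(x_2), U_3(x_3), each normalized to [0, 1]
k = (0.5, 0.3, 0.2)
print(round(multiplicative_utility(u, k), 4))
```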
In this section, we look at a general mechanism for making rational decisions. The notation is often called an influence diagram (Howard and Matheson, 1984), but we will use the more descriptive term
decision network. Decision networks combine Bayesian networks with additional node types for actions and utilities. We use airport siting as an example.
Representing a decision problem with a decision network
In its most general form, a decision network represents information about the agent’s current state, its possible actions, the state that will result from the agent’s action, and the utility of that
state. It therefore provides a substrate for implementing utility-based agents of the type first introduced in Section 2.4.5. Figure 16.6 shows a decision network for the airport siting problem. It
illustrates the three types of nodes used:
• Chance nodes (ovals) represent random variables, just as they do in Bayesian networks.
The agent could be uncertain about the construction cost, the level of air traffic and the potential for litigation, and the Deaths , Noise , and total Cost variables, each of which also depends on
the site chosen. Each chance node has associated with it a conditional distribution that is indexed by the state of the parent nodes. In decision networks, the parent nodes can include decision nodes
as well as chance nodes. Note that each of the current-state chance nodes could be part of a large Bayesian network for assessing construction costs, air traffic levels, or litigation potentials.
• Decision nodes (rectangles) represent points where the decision maker has a choice of
actions. In this case, the AirportSite action can take on a different value for each site under consideration. The choice influences the cost, safety, and noise that will result. In this chapter, we
assume that we are dealing with a single decision node. Chapter 17 deals with cases in which more than one decision must be made.
• Utility nodes (diamonds) represent the agent’s utility function.9 The utility node has as parents all variables describing the outcome that directly affect utility. Associated with the utility
node is a description of the agent’s utility as a function of the parent attributes. The description could be just a tabulation of the function, or it might be a parameterized additive or linear
function of the attribute values.
A simplified form is also used in many cases. The notation remains identical, but the chance nodes describing the outcome state are omitted. Instead, the utility node is connected directly to the
current-state nodes and the decision node. In this case, rather than representing a utility function on outcome states, the utility node represents the expected utility associated with each action,
as defined in Equation (16.1) on page 611; that is, the node is associated with an action-utility function (also known as a Q-function in reinforcement learning, as described in Chapter 21). Figure
16.7 shows the action-utility representation of the airport siting problem.
Notice that, because the Noise , Deaths , and Cost chance nodes in Figure 16.6 refer to future states, they can never have their values set as evidence variables. Thus, the simplified version that
omits these nodes can be used whenever the more general form can be used. Although the simplified form contains fewer nodes, the omission of an explicit description of the outcome of the siting
decision means that it is less flexible with respect to changes in circumstances. For example, in Figure 16.6, a change in aircraft noise levels can be reflected by a change in the conditional
probability table associated with the Noise node, whereas a change in the weight accorded to noise pollution in the utility function can be reflected by
9 These nodes are also called value nodes in the literature.
a change in the utility table. In the action-utility diagram, Figure 16.7, on the other hand, all such changes have to be reflected by changes to the action-utility table. Essentially, the
action-utility formulation is a compiled version of the original formulation.
Evaluating decision networks
Actions are selected by evaluating the decision network for each possible setting of the decision node. Once the decision node is set, it behaves exactly like a chance node that has been set as an
evidence variable. The algorithm for evaluating decision networks is the following:
1. Set the evidence variables for the current state.
2. For each possible value of the decision node:
(a) Set the decision node to that value.
(b) Calculate the posterior probabilities for the parent nodes of the utility node, using a standard probabilistic inference algorithm.
(c) Calculate the resulting utility for the action.
3. Return the action with the highest utility.
This is a straightforward extension of the Bayesian network algorithm and can be incorporated directly into the agent design given in Figure 13.1 on page 484. We will see in Chapter 17 that the
possibility of executing several actions in sequence makes the problem much more interesting.
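A minimal sketch of this evaluation loop appears below. It sidesteps general Bayesian network inference by enumerating a single chance variable directly; the variable, the candidate actions, and all the numbers are invented for illustration.

```python
# Sketch: evaluating a (very small) decision network by enumerating the decision values.
# One chance node "Demand" with a prior, an outcome utility that depends on the action
# and on Demand, and a loop over the possible actions. All numbers are invented.

P_demand = {"low": 0.4, "high": 0.6}                        # chance node prior

def utility(action, demand):                                # utility node table
    table = {
        ("small_airport", "low"): 6, ("small_airport", "high"): 4,
        ("large_airport", "low"): 1, ("large_airport", "high"): 9,
    }
    return table[(action, demand)]

def expected_utility(action):
    return sum(p * utility(action, d) for d, p in P_demand.items())

actions = ["small_airport", "large_airport"]
for a in actions:
    print(a, expected_utility(a))
print("best action:", max(actions, key=expected_utility))   # -> large_airport (5.8 vs 4.8)
```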
In the preceding analysis, we have assumed that all relevant information, or at least all available information, is provided to the agent before it makes its decision. In practice, this is hardly
ever the case. One of the most important parts of decision making is knowing what questions to ask. For example, a doctor cannot expect to be provided with the results of all possible diagnostic
tests and questions at the time a patient first enters the consulting room.10
Tests are often expensive and sometimes hazardous (both directly and because of associated delays). Their importance depends on two factors: whether the test results would lead to a significantly
better treatment plan, and how likely the various test results are.
This section describes information value theory, which enables an agent to choose what information to acquire. We assume that, prior to selecting a “real” action represented by the decision node, the
agent can acquire the value of any of the potentially observable chance variables in the model. Thus, information value theory involves a simplified form of sequential decision making—simplified
because the observation actions affect only the agent’s belief state, not the external physical state. The value of any particular observation must derive from the potential to affect the agent’s
eventual physical action; and this potential can be estimated directly from the decision model itself.
A simple example
Suppose an oil company is hoping to buy one of n indistinguishable blocks of ocean-drilling rights. Let us assume further that exactly one of the blocks contains oil worth C dollars, while the others
are worthless. The asking price of each block is C/n dollars. If the company is risk-neutral, then it will be indifferent between buying a block and not buying one.
Now suppose that a seismologist offers the company the results of a survey of block number 3, which indicates definitively whether the block contains oil. How much should the company be willing to
pay for the information? The way to answer this question is to examine what the company would do if it had the information:
• With probability 1/n, the survey will indicate oil in block 3. In this case, the company will buy block 3 for C/n dollars and make a profit of C −C/n = (n− 1)C/n dollars.
• With probability (n−1)/n, the survey will show that the block contains no oil, in which case the company will buy a different block. Now the probability of finding oil in one of the other blocks
changes from 1/n to 1/(n−1), so the company makes an expected profit of C/(n−1) − C/n = C/(n(n−1)) dollars.
Now we can calculate the expected profit, given the survey information:
(1/n) × (n−1)C/n + ((n−1)/n) × C/(n(n−1)) = C/n .
Therefore, the company should be willing to pay the seismologist up to C/n dollars for the information: the information is worth as much as the block itself.
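The arithmetic can be checked for any n and C; the sketch below recomputes the expected profit given the survey and confirms that it equals C/n.

```python
# Sketch: value of the seismologist's information in the oil-block example.
def expected_profit_with_survey(n, C):
    p_oil_in_3 = 1 / n
    profit_if_oil = C - C / n                      # buy block 3 at the asking price C/n
    profit_if_dry = C / (n - 1) - C / n            # buy one of the remaining n-1 blocks
    return p_oil_in_3 * profit_if_oil + (1 - p_oil_in_3) * profit_if_dry

n, C = 10, 1_000_000
without_survey = 0.0                               # a risk-neutral company is indifferent
with_survey = expected_profit_with_survey(n, C)
print(with_survey, C / n)                          # both equal C/n = 100000.0
```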
The value of information derives from the fact that with the information, one’s course of action can be changed to suit the actual situation. One can discriminate according to the situation, whereas
without the information, one has to do what’s best on average over the possible situations. In general, the value of a given piece of information is defined to be the difference in expected value
between best actions before and after information is obtained.
10 In the United States, the only question that is always asked beforehand is whether the patient has insurance.
A general formula for perfect information
It is simple to derive a general mathematical formula for the value of information. We assume that exact evidence can be obtained about the value of some random variable E_j (that is, we learn E_j = e_j), so the phrase value of perfect information (VPI) is used.11
Let the agent’s initial evidence be e. Then the value of the current best action α is defined by
EU(α | e) = max_a Σ_{s′} P(RESULT(a) = s′ | a, e) U(s′) .
With the new evidence E_j = e_jk, the value of the new best action α_{e_jk} will be
EU(α_{e_jk} | e, e_jk) = max_a Σ_{s′} P(RESULT(a) = s′ | a, e, e_jk) U(s′) .
Since the value of E_j is not yet known, we average over its possible values using our current beliefs, so the value of discovering E_j is
VPI_e(E_j) = ( Σ_k P(E_j = e_jk | e) EU(α_{e_jk} | e, E_j = e_jk) ) − EU(α | e) .
To get some intuition for this formula, consider the simple case where there are only two actions, a_1 and a_2, from which to choose. Their current expected utilities are U_1 and U_2. The information E_j = e_jk will yield some new expected utilities U′_1 and U′_2 for the actions, but before we obtain E_j, we will have some probability distributions over the possible values of U′_1 and U′_2 (which we assume are independent).
Suppose that a_1 and a_2 represent two different routes through a mountain range in winter. a_1 is a nice, straight highway through a low pass, and a_2 is a winding dirt road over the top. Just given this information, a_1 is clearly preferable, because it is quite possible that a_2 is blocked by avalanches, whereas it is unlikely that anything blocks a_1. U_1 is therefore clearly higher than U_2. It is possible to obtain satellite reports E_j on the actual state of each road that would give new expectations, U′_1 and U′_2, for the two crossings. The distributions for these expectations are shown in Figure 16.8(a). Obviously, in this case, it is not worth the expense of obtaining satellite reports, because it is unlikely that the information derived from them will change the plan. With no change, information has no value.
Now suppose that we are choosing between two different winding dirt roads of slightly different lengths and we are carrying a seriously injured passenger. Then, even when U_1 and U_2 are quite close, the distributions of U′_1 and U′_2 are very broad. There is a significant possibility that the second route will turn out to be clear while the first is blocked, and in this
11 There is no loss of expressiveness in requiring perfect information. Suppose we wanted to model the case in which we become somewhat more certain about a variable. We can do that by introducing
another variable about which we learn perfect information. For example, suppose we initially have broad uncertainty about the variable Temperature . Then we gain the perfect knowledge Thermometer =
37; this gives us imperfect information about the true Temperature , and the uncertainty due to measurement error is encoded in the sensor model P(Thermometer |Temperature). See Exercise 16.17 for
another example.
[Figure 16.8: Three generic cases for the value of information: (a) a_1 is almost certainly better than a_2, so the information is not needed; (b) the choice is unclear and the information is crucial; (c) the choice is unclear, but because the difference in value is small the information is worth little.]
case the difference in utilities will be very high. The VPI formula indicates that it might be worthwhile getting the satellite reports. Such a situation is shown in Figure 16.8(b).
Finally, suppose that we are choosing between the two dirt roads in summertime, when blockage by avalanches is unlikely. In this case, satellite reports might show one route to be more scenic than
the other because of flowering alpine meadows, or perhaps wetter because of errant streams. It is therefore quite likely that we would change our plan if we had the information. In this case,
however, the difference in value between the two routes is still likely to be very small, so we will not bother to obtain the reports. This situation is shown in Figure 16.8(c).
In sum, information has value to the extent that it is likely to cause a change of plan and to the extent that the new plan will be significantly better than the old plan.
Properties of the value of information
One might ask whether it is possible for information to be deleterious: can it actually have negative expected value? Intuitively, one should expect this to be impossible. After all, one could in the
worst case just ignore the information and pretend that one has never received it. This is confirmed by the following theorem, which applies to any decision-theoretic agent:
The expected value of information is nonnegative:
∀ e, E_j   VPI_e(E_j) ≥ 0 .
The theorem follows directly from the definition of VPI, and we leave the proof as an exercise (Exercise 16.18). It is, of course, a theorem about expected value, not actual value. Additional
information can easily lead to a plan that turns out to be worse than the original plan if the information happens to be misleading. For example, a medical test that gives a false positive result may
lead to unnecessary surgery; but that does not mean that the test shouldn’t be done.
It is important to remember that VPI depends on the current state of information, which is why it is subscripted. It can change as more information is acquired. For any given piece of evidence E~j~ ,
the value of acquiring it can go down (e.g., if another variable strongly constrains the posterior for E~j~) or up (e.g., if another variable provides a clue on which E~j~ builds, enabling a new and
better plan to be devised). Thus, VPI is not additive. That is,
VPI_e(E_j, E_k) ≠ VPI_e(E_j) + VPI_e(E_k)   (in general) .
VPI is, however, order independent. That is,
VPI_e(E_j, E_k) = VPI_e(E_j) + VPI_{e,e_j}(E_k) = VPI_e(E_k) + VPI_{e,e_k}(E_j) .
Order independence distinguishes sensing actions from ordinary actions and simplifies the problem of calculating the value of a sequence of sensing actions.
Implementation of an information-gathering agent
A sensible agent should ask questions in a reasonable order, should avoid asking questions that are irrelevant, should take into account the importance of each piece of information in relation to its
cost, and should stop asking questions when that is appropriate. All of these capabilities can be achieved by using the value of information as a guide.
Figure 16.9 shows the overall design of an agent that can gather information intelligently before acting. For now, we assume that with each observable evidence variable E~j~ , there is an associated
cost, Cost(E~j~), which reflects the cost of obtaining the evidence through tests, consultants, questions, or whatever. The agent requests what appears to be the most efficient observation in terms
of utility gain per unit cost. We assume that the result of the action Request(E~j~) is that the next percept provides the value of E~j~ . If no observation is worth its cost, the agent selects a
“real” action.
The agent algorithm we have described implements a form of information gathering that is called myopic. This is because it uses the VPI formula shortsightedly, calculating the value of information as
if only a single evidence variable will be acquired. Myopic control is based on the same heuristic idea as greedy search and often works well in practice. (For example, it has been shown to
outperform expert physicians in selecting diagnostic tests.)
function INFORMATION-GATHERING-AGENT(percept) returns an action
  persistent: D, a decision network
  integrate percept into D
  j ← the value that maximizes VPI(E_j) / Cost(E_j)
  if VPI(E_j) > Cost(E_j)
    then return REQUEST(E_j)
    else return the best action from D
Figure 16.9 Design of a simple information-gathering agent. The agent works by repeatedly selecting the observation with the highest information value, until the cost of the next observation is
greater than its expected benefit.
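In a general-purpose language, the same myopic selection step might be sketched as below; the vpi, cost, and best_action callables stand in for computations on the decision network and, like the stub values, are assumptions of this sketch rather than part of Figure 16.9.

```python
# Sketch: a myopic information-gathering step in the spirit of Figure 16.9.
# vpi(e_j), cost(e_j), and best_action() are placeholders for decision-network
# computations; the stub tables below are invented for the demo.

def information_gathering_step(candidates, vpi, cost, best_action):
    """Return ("request", E_j) for the most cost-effective observation, or ("act", a)."""
    if candidates:
        e_j = max(candidates, key=lambda e: vpi(e) / cost(e))
        if vpi(e_j) > cost(e_j):
            return ("request", e_j)
    return ("act", best_action())

vpi_table  = {"SeismicSurvey": 120.0, "WeatherReport": 5.0}
cost_table = {"SeismicSurvey": 40.0,  "WeatherReport": 10.0}
step = information_gathering_step(
    ["SeismicSurvey", "WeatherReport"],
    vpi=vpi_table.get, cost=cost_table.get,
    best_action=lambda: "drill")
print(step)   # -> ("request", "SeismicSurvey")
```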
However, if there is no single evidence variable that will help a lot, a myopic agent might hastily take an action when it would have been better to request two or more variables first and then take
action. A better approach in this situation would be to construct a conditional plan (as described in Section 11.3.2) that asks for variable values and takes different next steps depending on the answers it receives.
One final consideration is the effect a series of questions will have on a human respondent. People may respond better to a series of questions if they “make sense,” so some expert systems are built
to take this into account, asking questions in an order that maximizes the total utility of the system and human rather than an order that maximizes value of information.
The field of decision analysis, which evolved in the 1950s and 1960s, studies the application of decision theory to actual decision problems. It is used to help make rational decisions in important
domains where the stakes are high, such as business, government, law, military strategy, medical diagnosis and public health, engineering design, and resource management. The process involves a
careful study of the possible actions and outcomes, as well as the preferences placed on each outcome. It is traditional in decision analysis to talk about two roles: the decision maker states
preferences between outcomes, and the decision analyst enumerates the possible actions and outcomes and elicits preferences from the decision maker to determine the best course of action. Until the
early 1980s, the main purpose of decision analysis was to help humans make decisions that actually reflect their own preferences. As more and more decision processes become automated, decision
analysis is increasingly used to ensure that the automated processes are behaving as desired.
Early expert system research concentrated on answering questions, rather than on making decisions. Those systems that did recommend actions rather than providing opinions on matters of fact generally
did so using condition-action rules, rather than with explicit representations of outcomes and preferences. The emergence of Bayesian networks in the late 1980s made it possible to build large-scale
systems that generated sound probabilistic inferences from evidence. The addition of decision networks means that expert systems can be developed that recommend optimal decisions, reflecting the
preferences of the agent as well as the available evidence.
A system that incorporates utilities can avoid one of the most common pitfalls associated with the consultation process: confusing likelihood and importance. A common strategy in early medical expert
systems, for example, was to rank possible diagnoses in order of likelihood and report the most likely. Unfortunately, this can be disastrous! For the majority of patients in general practice, the
two most likely diagnoses are usually “There’s nothing wrong with you” and “You have a bad cold,” but if the third most likely diagnosis for a given patient is lung cancer, that’s a serious matter.
Obviously, a testing or treatment plan should depend both on probabilities and utilities. Current medical expert systems can take into account the value of information to recommend tests, and then
describe a differential diagnosis.
We now describe the knowledge engineering process for decision-theoretic expert systems. As an example we consider the problem of selecting a medical treatment for a kind of congenital heart disease
in children (see Lucas, 1996).
About 0.8% of children are born with a heart anomaly, the most common being aortic coarctation (a constriction of the aorta). It can be treated with surgery, angioplasty (expanding the aorta with a
balloon placed inside the artery), or medication. The problem is to decide what treatment to use and when to do it: the younger the infant, the greater the risks of certain treatments, but one
mustn’t wait too long. A decision-theoretic expert system for this problem can be created by a team consisting of at least one domain expert (a pediatric cardiologist) and one knowledge engineer. The
process can be broken down into the following steps:
Create a causal model. Determine the possible symptoms, disorders, treatments, and outcomes. Then draw arcs between them, indicating what disorders cause what symptoms, and what treatments alleviate
what disorders. Some of this will be well known to the domain expert, and some will come from the literature. Often the model will match well with the informal graphical descriptions given in medical textbooks.
Simplify to a qualitative decision model. Since we are using the model to make treatment decisions and not for other purposes (such as determining the joint probability of certain symptom/disorder
combinations), we can often simplify by removing variables that are not involved in treatment decisions. Sometimes variables will have to be split or joined to match the expert’s intuitions. For
example, the original aortic coarctation model had a Treatment variable with values surgery, angioplasty, and medication, and a separate variable for Timing of the treatment. But the expert had a
hard time thinking of these separately, so they were combined, with Treatment taking on values such as surgery in 1 month. This gives us the model of Figure 16.10.
Assign probabilities. Probabilities can come from patient databases, literature studies, or the expert’s subjective assessments. Note that a diagnostic system will reason from symptoms and other
observations to the disease or other cause of the problems. Thus, in the early years of building these systems, experts were asked for the probability of a cause given an effect. In general they
found this difficult to do, and were better able to assess the probability of an effect given a cause. So modern systems usually assess causal knowledge and encode it directly in the Bayesian network
structure of the model, leaving the diagnostic reasoning to the Bayesian network inference algorithms (Shachter and Heckerman, 1987).
Assign utilities. When there are a small number of possible outcomes, they can be enumerated and evaluated individually using the methods of Section 16.3.1. We would create a scale from best to worst
outcome and give each a numeric value, for example 0 for death and 1 for complete recovery. We would then place the other outcomes on this scale. This can be done by the expert, but it is better if
the patient (or in the case of infants, the patient’s parents) can be involved, because different people have different preferences. If there are exponentially many outcomes, we need some way to
combine them using multiattribute utility functions. For example, we may say that the costs of various complications are additive.
Verify and refine the model. To evaluate the system we need a set of correct (input, output) pairs; a so-called gold standard to compare against. For medical expert systems this usually means
assembling the best available doctors, presenting them with a few cases,
and asking them for their diagnosis and recommended treatment plan. We then see how well the system matches their recommendations. If it does poorly, we try to isolate the parts that are going wrong
and fix them. It can be useful to run the system “backward.” Instead of presenting the system with symptoms and asking for a diagnosis, we can present it with a diagnosis such as “heart failure,”
examine the predicted probability of symptoms such as tachycardia, and compare with the medical literature.
Perform sensitivity analysis. This important step checks whether the best decision is sensitive to small changes in the assigned probabilities and utilities by systematically varying those parameters
and running the evaluation again. If small changes lead to significantly different decisions, then it could be worthwhile to spend more resources to collect better data. If all variations lead to the
same decision, then the agent will have more confidence that it is the right decision. Sensitivity analysis is particularly important, because one of the main criticisms of probabilistic approaches
to expert systems is that it is too difficult to assess the numerical probabilities required. Sensitivity analysis often reveals that many of the numbers need be specified only very approximately.
For example, we might be uncertain about the conditional probability P (tachycardia | dyspnea), but if the optimal decision is reasonably robust to small variations in the probability, then our
ignorance is less of a concern.
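A one-way sensitivity check of this kind can be sketched as follows: vary a single probability over a range and see whether the recommended decision ever changes. The toy treatment problem and all of its numbers below are invented for illustration.

```python
# Sketch: one-way sensitivity analysis on a single probability in a toy treatment decision.
# All probabilities and utilities are invented for illustration.

def best_treatment(p_complication):
    """Pick the treatment with the highest expected utility, given one varied parameter."""
    eu = {
        "surgery":    (1 - p_complication) * 0.95 + p_complication * 0.40,
        "medication": 0.80,                  # assumed not to depend on this parameter
    }
    return max(eu, key=eu.get)

baseline = 0.10
decisions = {round(p, 2): best_treatment(p)
             for p in [baseline * f for f in (0.5, 0.75, 1.0, 1.25, 1.5)]}
print(decisions)
# If the recommendation is the same across the range, the decision is robust to this parameter.
```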
This chapter shows how to combine utility theory with probability to enable an agent to select actions that will maximize its expected performance.
• Probability theory describes what an agent should believe on the basis of evidence, utility theory describes what an agent wants, and decision theory puts the two together to describe what an
agent should do.
• We can use decision theory to build a system that makes decisions by considering all possible actions and choosing the one that leads to the best expected outcome. Such a system is known as a
rational agent.
• Utility theory shows that an agent whose preferences between lotteries are consistent with a set of simple axioms can be described as possessing a utility function; furthermore, the agent selects
actions as if maximizing its expected utility.
• Multiattribute utility theory deals with utilities that depend on several distinct attributes of states. Stochastic dominance is a particularly useful technique for making unambiguous decisions,
even without precise utility values for attributes.
• Decision networks provide a simple formalism for expressing and solving decision problems. They are a natural extension of Bayesian networks, containing decision and utility nodes in addition to
chance nodes.
• Sometimes, solving a problem involves finding more information before making a decision. The value of information is defined as the expected improvement in utility compared with making a decision
without the information.
• Expert systems that incorporate utility information have additional capabilities compared with pure inference systems. In addition to being able to make decisions, they can use the value of
information to decide which questions to ask, if any; they can recommend contingency plans; and they can calculate the sensitivity of their decisions to small changes in probability and utility assessments.
The book L’art de Penser, also known as the Port-Royal Logic (Arnauld, 1662) states:
To judge what one must do to obtain a good or avoid an evil, it is necessary to consider not only the good and the evil in itself, but also the probability that it happens or does not happen; and to
view geometrically the proportion that all these things have together.
Modern texts talk of utility rather than good and evil, but this statement correctly notes that one should multiply utility by probability (“view geometrically”) to give expected utility, and
maximize that over all outcomes (“all these things”) to “judge what one must do.” It is remarkable how much this got right, 350 years ago, and only 8 years after Pascal and Fermat showed how to use
probability correctly. The Port-Royal Logic also marked the first publication of Pascal’s wager.
Daniel Bernoulli (1738), investigating the St. Petersburg paradox (see Exercise 16.3), was the first to realize the importance of preference measurement for lotteries, writing “the value of an item
must not be based on its price, but rather on the utility that it yields” (italics his). Utilitarian philosopher Jeremy Bentham (1823) proposed the hedonic calculus for weighing “pleasures” and
“pains,” arguing that all decisions (not just monetary ones) could be reduced to utility comparisons.
The derivation of numerical utilities from preferences was first carried out by Ramsey (1931); the axioms for preference in the present text are closer in form to those rediscovered in Theory of
Games and Economic Behavior (von Neumann and Morgenstern, 1944). A good presentation of these axioms, in the course of a discussion on risk preference, is given by Howard (1977). Ramsey had derived
subjective probabilities (not just utilities) from an agent’s preferences; Savage (1954) and Jeffrey (1983) carry out more recent constructions of this kind. Von Winterfeldt and Edwards (1986)
provide a modern perspective on decision analysis and its relationship to human preference structures. The micromort utility measure is discussed by Howard (1989). A 1994 survey by the Economist set
the value of a life at between $750,000 and $2.6 million. However, Richard Thaler (1992) found irrational framing effects on the price one is willing to pay to avoid a risk of death versus the price
one is willing to be paid to accept a risk. For a 1/1000 chance, a respondent wouldn’t pay more than $200 to remove the risk, but wouldn’t accept $50,000 to take on the risk. How much are people
willing to pay for a QALY? When it comes down to a specific case of saving oneself or a family member, the number is approximately “whatever I’ve got.” But we can ask at a societal level: suppose
there is a vaccine that would yield X QALYs but costs Y dollars; is it worth it? In this case people report a wide range of values from around $10,000 to $150,000 per QALY (Prades et al., 2008).
QALYs are much more widely used in medical and social policy decision making than are micromorts; see (Russell, 1990) for a typical example of an argument for a major change in public health policy
on grounds of increased expected utility measured in QALYs.
The optimizer’s curse was brought to the attention of decision analysts in a forceful way by Smith and Winkler (2006), who pointed out that the financial benefits to the client projected by analysts
for their proposed course of action almost never materialized. They trace this directly to the bias introduced by selecting an optimal action and show that a more complete Bayesian analysis
eliminates the problem. The same underlying concept has been called post-decision disappointment by Harrison and March (1984) and was noted in the context of analyzing capital investment projects by
Brown (1974). The optimizer’s curse is also closely related to the winner’s curse (Capen et al., 1971; Thaler, 1992), which applies to competitive bidding in auctions: whoever wins the auction is
very likely to have overestimated the value of the object in question. Capen et al. quote a petroleum engineer on the topic of bidding for oil-drilling rights: “If one wins a tract against two or
three others he may feel fine about his good fortune. But how should he feel if he won against 50 others? Ill.” Finally, behind both curses is the general phenomenon of regression to the mean,
whereby individuals selected on the basis of exceptional characteristics previously exhibited will, with high probability, become less exceptional in future.
The Allais paradox, due to Nobel Prize-winning economist Maurice Allais (1953), was tested experimentally (Tversky and Kahneman, 1982; Conlisk, 1989) to show that people are consistently inconsistent
in their judgments. The Ellsberg paradox on ambiguity aversion was introduced in the Ph.D. thesis of Daniel Ellsberg (Ellsberg, 1962), who went on to become a military analyst at the RAND Corporation
and to leak documents known as The Pentagon Papers, which contributed to the end of the Vietnam war and the resignation of President Nixon. Fox and Tversky (1995) describe a further study of
ambiguity aversion. Mark Machina (2005) gives an overview of choice under uncertainty and how it can vary from expected utility theory.
There has been a recent outpouring of more-or-less popular books on human irrationality. The best known is Predictably Irrational (Ariely, 2009); others include Sway (Brafman and Brafman, 2009),
Nudge (Thaler and Sunstein, 2009), Kluge (Marcus, 2009), How We Decide (Lehrer, 2009) and On Being Certain (Burton, 2009). They complement the classic (Kahneman et al., 1982) and the article that
started it all (Kahneman and Tversky, 1979). The field of evolutionary psychology (Buss, 2005), on the other hand, has run counter to this literature, arguing that humans are quite rational in
evolutionarily appropriate contexts. Its adherents point out that irrationality is penalized by definition in an evolutionary context and show that in some cases it is an artifact of the experimental
setup (Cummins and Allen, 1998). There has been a recent resurgence of interest in Bayesian models of cognition, overturning decades of pessimism (Oaksford and Chater, 1998; Elio, 2002; Chater and
Oaksford, 2008).
Keeney and Raiffa (1976) give a thorough introduction to multiattribute utility theory. They describe early computer implementations of methods for eliciting the necessary parameters for a
multiattribute utility function and include extensive accounts of real applications of the theory. In AI, the principal reference for MAUT is Wellman’s (1985) paper, which includes a system called
URP (Utility Reasoning Package) that can use a collection of statements about preference independence and conditional independence to analyze the structure of decision problems. The use of stochastic
dominance together with qualitative probability models was investigated extensively by Wellman (1988, 1990a). Wellman and Doyle (1992) provide a preliminary sketch of how a complex set of
utility-independence relationships might be used to provide a structured model of a utility function, in much the same way that Bayesian networks provide a structured model of joint probability
distributions. Bacchus and Grove (1995, 1996) and La Mura and Shoham (1999) give further results along these lines.
Decision theory has been a standard tool in economics, finance, and management science since the 1950s. Until the 1980s, decision trees were the main tool used for representing simple decision
problems. Smith (1988) gives an overview of the methodology of decision analysis. Influence diagrams were introduced by Howard and Matheson (1984), based on earlier work at SRI (Miller et al., 1976).
Howard and Matheson’s method involved the derivation of a decision tree from a decision network, but in general the tree is of exponential size. Shachter (1986) developed a method for making
decisions based directly on a decision network, without the creation of an intermediate decision tree. This algorithm was also one of the first to provide complete inference for multiply connected
Bayesian networks. Zhang et al. (1994) showed how to take advantage of conditional independence of information to reduce the size of trees in practice; they use the term decision network for networks
that use this approach (although others use it as a synonym for influence diagram). Nilsson and Lauritzen (2000) link algorithms for decision networks to ongoing developments in clustering algorithms
for Bayesian networks. Koller and Milch (2003) show how influence diagrams can be used to solve games that involve gathering information by opposing players, and Detwarasiti and Shachter (2005) show
how influence diagrams can be used as an aid to decision making for a team that shares goals but is unable to share all information perfectly. The collection by Oliver and Smith (1990) has a number
of useful articles on decision networks, as does the 1990 special issue of the journal Networks. Papers on decision networks and utility modeling also appear regularly in the journals Management
Science and Decision Analysis.
The theory of information value was explored first in the context of statistical experiments, where a quasi-utility (entropy reduction) was used (Lindley, 1956). The Russian control theorist Ruslan
Stratonovich (1965) developed the more general theory presented here, in which information has value by virtue of its ability to affect decisions. Stratonovich’s work was not known in the West, where
Ron Howard (1966) pioneered the same idea. His paper ends with the remark “If information value theory and associated decision theoretic structures do not in the future occupy a large part of the
education of engineers, then the engineering profession will find that its traditional role of managing scientific and economic resources for the benefit of man has been forfeited to another
profession.” To date, the implied revolution in managerial methods has not occurred.
Recent work by Krause and Guestrin (2009) shows that computing the exact nonmyopic value of information is intractable even in polytree networks. There are other cases— more restricted than general
value of information—in which the myopic algorithm does provide a provably good approximation to the optimal sequence of observations (Krause et al., 2008). In some cases—for example, looking for
treasure buried in one of n places—ranking experiments in order of success probability divided by cost gives an optimal solution (Kadane and Simon, 1977).
Surprisingly few early AI researchers adopted decision-theoretic tools after the early applications in medical decision making described in Chapter 13. One of the few exceptions was Jerry Feldman,
who applied decision theory to problems in vision (Feldman and Yakimovsky, 1974) and planning (Feldman and Sproull, 1977). After the resurgence of interest in probabilistic methods in AI in the
1980s, decision-theoretic expert systems gained widespread acceptance (Horvitz et al., 1988; Cowell et al., 2002). In fact, from 1991 onward, the cover design of the journal Artificial Intelligence
has depicted a decision network, although some artistic license appears to have been taken with the direction of the arrows.
16.1 (Adapted from David Heckerman.) This exercise concerns the Almanac Game, which is used by decision analysts to calibrate numeric estimation. For each of the questions that follow, give your best
guess of the answer, that is, a number that you think is as likely to be too high as it is to be too low. Also give your guess at a 25th percentile estimate, that is, a number that you think has a
25% chance of being too high, and a 75% chance of being too low. Do the same for the 75th percentile. (Thus, you should give three estimates in all—low, median, and high—for each question.)
a. Number of passengers who flew between New York and Los Angeles in 1989.
b. Population of Warsaw in 1992.
c. Year in which Coronado discovered the Mississippi River.
d. Number of votes received by Jimmy Carter in the 1976 presidential election.
e. Age of the oldest living tree, as of 2002.
f. Height of the Hoover Dam in feet.
g. Number of eggs produced in Oregon in 1985.
h. Number of Buddhists in the world in 1992.
i. Number of deaths due to AIDS in the United States in 1981.
j. Number of U.S. patents granted in 1901.
The correct answers appear after the last exercise of this chapter. From the point of view of decision analysis, the interesting thing is not how close your median guesses came to the real answers,
but rather how often the real answer came within your 25% and 75% bounds. If it was about half the time, then your bounds are accurate. But if you’re like most people, you will be more sure of
yourself than you should be, and fewer than half the answers will fall within the bounds. With practice, you can calibrate yourself to give realistic bounds, and thus be more useful in supplying
information for decision making. Try this second set of questions and see if there is any improvement:
a. Year of birth of Zsa Zsa Gabor.
b. Maximum distance from Mars to the sun in miles.
c. Value in dollars of exports of wheat from the United States in 1992.
d. Tons handled by the port of Honolulu in 1991.
e. Annual salary in dollars of the governor of California in 1993.
f. Population of San Diego in 1990.
g. Year in which Roger Williams founded Providence, Rhode Island.
h. Height of Mt. Kilimanjaro in feet.
i. Length of the Brooklyn Bridge in feet.
j. Number of deaths due to automobile accidents in the United States in 1992.
16.2 Chris considers four used cars before buying the one with maximum expected utility. Pat considers ten cars and does the same. All other things being equal, which one is more likely to have the
better car? Which is more likely to be disappointed with their car’s quality? By how much (in terms of standard deviations of expected quality)?
16.3 In 1713, Nicolas Bernoulli stated a puzzle, now called the St. Petersburg paradox, which works as follows. You have the opportunity to play a game in which a fair coin is tossed repeatedly until
it comes up heads. If the first heads appears on the nth toss, you win 2^n^ dollars.
a. Show that the expected monetary value of this game is infinite.
b. How much would you, personally, pay to play the game?
c. Nicolas’s cousin Daniel Bernoulli resolved the apparent paradox in 1738 by suggesting that the utility of money is measured on a logarithmic scale (i.e., U(S~n~) = a log~2~ n + b, where S~n~ is the state
of having $n). What is the expected utility of the game under this assumption?
d. What is the maximum amount that it would be rational to pay to play the game, assuming that one’s initial wealth is $k?
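As a numerical illustration rather than a solution, the sketch below compares partial sums of the expected monetary value with partial sums of the expected utility under a logarithmic utility; the constants a and b are arbitrary choices.

```python
# St. Petersburg game: P(first head on toss n) = 2**-n, payoff 2**n dollars.
# Compare partial sums of expected money with expected log-utility.
import math

a, b = 1.0, 0.0          # arbitrary constants in U(S_n) = a*log2(n) + b
emv = eu = 0.0
for n in range(1, 61):
    p = 2.0 ** -n        # probability the game ends on toss n
    payoff = 2.0 ** n    # dollars won in that case
    emv += p * payoff                      # adds exactly 1 per term: diverges
    eu += p * (a * math.log2(payoff) + b)  # utility of holding $2^n
    if n in (5, 20, 60):
        print(f"after {n:2d} terms: EMV = {emv:6.1f}, EU = {eu:.4f}")
```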
16.4 Write a computer program to automate the process in Exercise 16.9. Try your program out on several people of different net worth and political outlook. Comment on the consistency of your
results, both for an individual and across individuals.
16.5 The Surprise Candy Company makes candy in two flavors: 70% are strawberry flavor and 30% are anchovy flavor. Each new piece of candy starts out with a round shape; as it moves along the
production line, a machine randomly selects a certain percentage to be trimmed into a square; then, each piece is wrapped in a wrapper whose color is chosen randomly to be red or brown. 80% of the
strawberry candies are round and 80% have a red wrapper, while 90% of the anchovy candies are square and 90% have a brown wrapper. All candies are sold individually in sealed, identical, black boxes.
Now you, the customer, have just bought a Surprise candy at the store but have not yet opened the box. Consider the three Bayes nets in Figure 16.11.
Alt text
a. Which network(s) can correctly represent P(Flavor,Wrapper, Shape)?
b. Which network is the best representation for this problem?
c. Does network (i) assert that P(Wrapper|Shape)= P(Wrapper)?
d. What is the probability that your candy has a red wrapper?
e. In the box is a round candy with a red wrapper. What is the probability that its flavor is strawberry?
f. An unwrapped strawberry candy is worth s on the open market and an unwrapped anchovy candy is worth a. Write an expression for the value of an unopened candy box.
g. A new law prohibits trading of unwrapped candies, but it is still legal to trade wrapped candies (out of the box). Is an unopened candy box now worth more than, less than, or the same as before?
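For a quick numerical check of parts (d) and (e), one can assume, as the production-line story suggests, that Shape and Wrapper are chosen independently given the Flavor; the sketch below then just applies the product rule and Bayes' rule to the stated numbers.

```python
# Surprise Candy: P(Flavor), P(Shape | Flavor), P(Wrapper | Flavor) as stated.
p_flavor = {"strawberry": 0.7, "anchovy": 0.3}
p_round  = {"strawberry": 0.8, "anchovy": 0.1}   # P(Shape = round | Flavor)
p_red    = {"strawberry": 0.8, "anchovy": 0.1}   # P(Wrapper = red | Flavor)

# (d) P(Wrapper = red), marginalizing over the flavor.
p_red_total = sum(p_flavor[f] * p_red[f] for f in p_flavor)
print("P(red wrapper) =", p_red_total)

# (e) P(strawberry | round, red), assuming Shape and Wrapper are
# conditionally independent given Flavor.
joint = {f: p_flavor[f] * p_round[f] * p_red[f] for f in p_flavor}
posterior = joint["strawberry"] / sum(joint.values())
print("P(strawberry | round, red) =", round(posterior, 4))
```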
16.6 Prove that the judgments B ≻ A and C ≻ D in the Allais paradox (page 620) violate the axiom of substitutability.
16.7 Consider the Allais paradox described on page 620: an agent who prefers B over A (taking the sure thing), and C over D (taking the higher EMV) is not acting rationally, according to utility
theory. Do you think this indicates a problem for the agent, a problem for the theory, or no problem at all? Explain.
16.8 Tickets to a lottery cost $1. There are two possible prizes: a $10 payoff with probability 1/50, and a $1,000,000 payoff with probability 1/2,000,000. What is the expected monetary value of a
lottery ticket? When (if ever) is it rational to buy a ticket? Be precise—show an equation involving utilities. You may assume current wealth of $k and that U(S~k~) = 0. You may also assume that
U(S~k+10~) = 10 × U(S~k+1~), but you may not make any assumptions about U(S~k+1,000,000~). Sociological studies show that people with lower income buy a disproportionate number of lottery tickets. Do
you think this is because they are worse decision makers or because they have a different utility function? Consider the value of contemplating the possibility of winning the lottery versus the value
of contemplating becoming an action hero while watching an adventure movie.
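The expected-monetary-value part of the question is one line of arithmetic:

```python
# EMV of one ticket: cost $1, win $10 w.p. 1/50, win $1,000,000 w.p. 1/2,000,000.
print(-1 + 10 * (1 / 50) + 1_000_000 * (1 / 2_000_000))   # -0.3 dollars
```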
16.9 Assess your own utility for different incremental amounts of money by running a series of preference tests between some definite amount M~1~ and a lottery [p, M~2~; (1 − p), 0]. Choose different values
of M~1~ and M~2~, and vary p until you are indifferent between the two choices. Plot the resulting utility function.
16.10 How much is a micromort worth to you? Devise a protocol to determine this. Ask questions based both on paying to avoid risk and being paid to accept risk.
16.11 Let continuous variables X~1~, . . . , X~k~ be independently distributed according to the same probability density function f(x). Prove that the density function for max{X~1~, . . . , X~k~} is
given by kf(x)(F(x))^k−1^, where F is the cumulative distribution for f.
16.12 Economists often make use of an exponential utility function for money: U(x) = −e^−x/R^, where R is a positive constant representing an individual’s risk tolerance. Risk tolerance reflects how
likely an individual is to accept a lottery with a particular expected monetary value (EMV) versus some certain payoff. As R (which is measured in the same units as x) becomes larger, the individual
becomes less risk-averse.
a. Assume Mary has an exponential utility function with R = $500. Mary is given the choice between receiving $500 with certainty (probability 1) or participating in a lottery which has a 60%
probability of winning $5000 and a 40% probability of winning nothing. Assuming Mary acts rationally, which option would she choose? Show how you derived your answer.
b. Consider the choice between receiving $100 with certainty (probability 1) or participating in a lottery which has a 50% probability of winning $500 and a 50% probability of winning nothing.
Approximate the value of R (to 3 significant digits) in an exponential utility function that would cause an individual to be indifferent to these two alternatives. (You might find it helpful to write
a short program to help you solve this problem.)
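Since part (b) invites a short program, here is one possible sketch: it bisects on the risk tolerance R until the certain $100 and the 50/50 lottery over $500 and $0 have equal expected utility. The bracketing interval is a guess, not part of the exercise.

```python
# Find R such that U(100) = 0.5*U(500) + 0.5*U(0) for U(x) = -exp(-x/R).
import math

def utility(x, r):
    return -math.exp(-x / r)

def gap(r):
    """Certain option minus lottery, as a function of risk tolerance R."""
    return utility(100, r) - (0.5 * utility(500, r) + 0.5 * utility(0, r))

lo, hi = 1.0, 10_000.0   # assumed bracket; gap() changes sign inside it
for _ in range(100):     # bisection
    mid = (lo + hi) / 2
    if gap(lo) * gap(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(f"indifference at R ≈ {round((lo + hi) / 2)}")
```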
16.13 Repeat Exercise 16.16, using the action-utility representation shown in Figure 16.7.
16.14 For either of the airport-siting diagrams from Exercises 16.16 and 16.13, to which conditional probability table entry is the utility most sensitive, given the available evidence?
16.15 Consider a student who has the choice to buy or not buy a textbook for a course. We’ll model this as a decision problem with one Boolean decision node, B, indicating whether the agent chooses
to buy the book, and two Boolean chance nodes, M , indicating whether the student has mastered the material in the book, and P , indicating whether the student passes the course. Of course, there is
also a utility node, U . A certain student, Sam, has an additive utility function: 0 for not buying the book and -$100 for buying it; and $2000 for passing the course and 0 for not passing. Sam’s
conditional probability estimates are as follows:
P (p|b,m) = 0.9 P (m|b) = 0.9
P (p|b,¬m) = 0.5 P (m|¬b) = 0.7
P (p|¬b,m) = 0.8
P (p|¬b,¬m) = 0.3
You might think that P would be independent of B given M, but this course has an open-book final—so having the book helps.
a. Draw the decision network for this problem.
b. Compute the expected utility of buying the book and of not buying it.
c. What should Sam do?
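A small calculation sketch for part (b), plugging in the numbers given above (the decision network itself is left to part (a)):

```python
# Expected utility of buying (B = b) vs. not buying (B = ¬b) the textbook,
# using Sam's numbers: additive utility, -100 for buying, +2000 for passing.
p_master = {True: 0.9, False: 0.7}                     # P(m | B)
p_pass   = {(True, True): 0.9,  (True, False): 0.5,    # P(p | B, M)
            (False, True): 0.8, (False, False): 0.3}

def expected_utility(buy):
    cost = -100 if buy else 0
    prob_pass = sum((p_master[buy] if m else 1 - p_master[buy]) * p_pass[(buy, m)]
                    for m in (True, False))
    return cost + 2000 * prob_pass

for buy in (True, False):
    print("buy" if buy else "don't buy", "->", expected_utility(buy))
```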
16.16 This exercise completes the analysis of the airport-siting problem in Figure 16.6.
a. Provide reasonable variable domains, probabilities, and utilities for the network, assuming that there are three possible sites.
b. Solve the decision problem.
c. What happens if changes in technology mean that each aircraft generates half the noise?
d. What if noise avoidance becomes three times more important?
e. Calculate the VPI for AirTraffic, Litigation, and Construction in your model.
16.17 (Adapted from Pearl (1988).) A used-car buyer can decide to carry out various tests with various costs (e.g., kick the tires, take the car to a qualified mechanic) and then, depending on the
outcome of the tests, decide which car to buy. We will assume that the buyer is deciding whether to buy car C~1~, that there is time to carry out at most one test, and that T~1~ is the test of C~1~
and costs $50. A car can be in good shape (quality q+) or bad shape (quality q−), and the tests might help indicate what shape the car is in. Car C~1~ costs $1,500, and its market value is $2,000
if it is in good shape; if not, $700 in repairs will be needed to make it in good shape. The buyer’s estimate is that C~1~ has a 70% chance of being in good shape.
a. Draw the decision network that represents this problem.
b. Calculate the expected net gain from buying C~1~, given no test.
c. Tests can be described by the probability that the car will pass or fail the test given that the car is in good or bad shape. We have the following information: P(pass(C~1~, T~1~) | q+(C~1~)) = 0.8 and
P(pass(C~1~, T~1~) | q−(C~1~)) = 0.35.
Use Bayes’ theorem to calculate the probability that the car will pass (or fail) its test and hence the probability that it is in good (or bad) shape given each possible test outcome.
d. Calculate the optimal decisions given either a pass or a fail, and their expected utilities.
e. Calculate the value of information of the test, and derive an optimal conditional plan for the buyer.
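The following sketch chains Bayes' rule and expected values over the numbers stated in the exercise; it is offered as a way to check parts (b) through (e), not as the official solution, and it reads the problem as saying that a bad car is worth $2,000 once the $700 of repairs has been paid.

```python
# Used-car problem: prior P(good) = 0.7; buy price 1500; value 2000 if good,
# 700 of repairs needed if bad; the test costs 50.
p_good = 0.7
p_pass_given = {"good": 0.8, "bad": 0.35}                 # P(pass | quality)
gain = {"good": 2000 - 1500, "bad": 2000 - 700 - 1500}    # net gain from buying

def eu_buy(p_g):                                          # expected net gain of buying
    return p_g * gain["good"] + (1 - p_g) * gain["bad"]

# (b) No test: buy iff the expected gain beats not buying (gain 0).
print("no test, buy:", eu_buy(p_good))

# (c)-(d) Posterior after each test outcome, and the best decision for each.
p_pass = p_good * p_pass_given["good"] + (1 - p_good) * p_pass_given["bad"]
p_good_pass = p_good * p_pass_given["good"] / p_pass
p_good_fail = p_good * (1 - p_pass_given["good"]) / (1 - p_pass)
for label, p in (("pass", p_good_pass), ("fail", p_good_fail)):
    print(label, "-> P(good) =", round(p, 3), " best EU =", round(max(eu_buy(p), 0), 1))

# (e) Value of the test information (before subtracting the $50 test cost).
eu_with_test = (p_pass * max(eu_buy(p_good_pass), 0)
                + (1 - p_pass) * max(eu_buy(p_good_fail), 0))
print("VPI(test) =", round(eu_with_test - max(eu_buy(p_good), 0), 2))
```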
16.18 Recall the definition of value of information in Section 16.6.
a. Prove that the value of information is nonnegative and order independent.
b. Explain why it is that some people would prefer not to get some information—for example, not wanting to know the sex of their baby when an ultrasound is done.
c. A function f on sets is submodular if, for any element x and any sets A and B such
that A ⊆ B, adding x to A gives a greater increase in f than adding x to B:
A ⊆ B ⇒ (f(A ∪ {x}) − f(A)) ≥ (f(B ∪ {x}) − f(B)) .
Submodularity captures the intuitive notion of diminishing returns. Is the value of information, viewed as a function f on sets of possible observations, submodular? Prove this or find a counterexample.
The answers to Exercise 16.1 (where M stands for million): First set: 3M, 1.6M, 1541, 41M, 4768, 221, 649M, 295M, 132, 25,546. Second set: 1917, 155M, 4,500M, 11M, 120,000, 1.1M, 1636, 19,340, 1,595,
In which we examine methods for deciding what to do today, given that we may decide again tomorrow.
In this chapter, we address the computational issues involved in making decisions in a stochastic environment. Whereas Chapter 16 was concerned with one-shot or episodic decision problems, in which
the utility of each action’s outcome was well known, we are concerned here with sequential decision problems, in which the agent’s utility depends on a sequence of decisions. Sequential decision
problems incorporate utilities, uncertainty, and sensing, and include search and planning problems as special cases. Section 17.1 explains how sequential decision problems are defined, and Sections
17.2 and 17.3 explain how they can be solved to produce optimal behavior that balances the risks and rewards of acting in an uncertain environment. Section 17.4 extends these ideas to the case of
partially observable environments, and Section 17.4.3 develops a complete design for decision-theoretic agents in partially observable environments, combining dynamic Bayesian networks from Chapter
15 with decision networks from Chapter 16.
The second part of the chapter covers environments with multiple agents. In such environments, the notion of optimal behavior is complicated by the interactions among the agents. Section 17.5
introduces the main ideas of game theory, including the idea that rational agents might need to behave randomly. Section 17.6 looks at how multiagent systems can be designed so that multiple agents
can achieve a common goal.
Suppose that an agent is situated in the 4× 3 environment shown in Figure 17.1(a). Beginning in the start state, it must choose an action at each time step. The interaction with the environment
terminates when the agent reaches one of the goal states, marked +1 or –1. Just as for search problems, the actions available to the agent in each state are given by ACTIONS(s), sometimes abbreviated
to A(s); in the 4× 3 environment, the actions in every state are Up, Down, Left, and Right. We assume for now that the environment is fully observable, so that the agent always knows where it is.
Alt text
If the environment were deterministic, a solution would be easy: [Up, Up, Right, Right, Right]. Unfortunately, the environment won’t always go along with this solution, because the actions are
unreliable. The particular model of stochastic motion that we adopt is illustrated in Figure 17.1(b). Each action achieves the intended effect with probability 0.8, but the rest of the time, the
action moves the agent at right angles to the intended direction. Furthermore, if the agent bumps into a wall, it stays in the same square. For example, from the start square (1,1), the action Up
moves the agent to (1,2) with probability 0.8, but with probability 0.1, it moves right to (2,1), and with probability 0.1, it moves left, bumps into the wall, and stays in (1,1). In such an
environment, the sequence [Up, Up, Right, Right, Right] goes up around the barrier and reaches the goal state at (4,3) with probability 0.8^5^ = 0.32768. There is also a small chance of accidentally
reaching the goal by going the other way around with probability 0.1^4^ × 0.8, for a grand total of 0.32776. (See also Exercise 17.1.)
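A two-line check of that arithmetic:

```python
# Success via the intended route (0.8^5), plus the accidental route the other
# way around the barrier (0.1^4 * 0.8), as described in the text.
print(0.8 ** 5)                    # 0.32768 (up to floating-point rounding)
print(0.8 ** 5 + 0.1 ** 4 * 0.8)   # 0.32776 (approximately)
```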
As in Chapter 3, the transition model (or just “model,” whenever no confusion can arise) describes the outcome of each action in each state. Here, the outcome is stochastic, so we write P (s′ | s, a)
to denote the probability of reaching state s′ if action a is done in state s. We will assume that transitions are Markovian in the sense of Chapter 15, that is, the probability of reaching s′ from s
depends only on s and not on the history of earlier states. For now, you can think of P (s′ | s, a) as a big three-dimensional table containing probabilities. Later, in Section 17.4.3, we will see
that the transition model can be represented as a dynamic Bayesian network, just as in Chapter 15.
To complete the definition of the task environment, we must specify the utility function for the agent. Because the decision problem is sequential, the utility function will depend on a sequence of
states—an environment history—rather than on a single state. Later in this section, we investigate how such utility functions can be specified in general; for now, we simply stipulate that in each
state s, the agent receives a reward R(s), which may be positive or negative, but must be bounded. For our particular example, the reward is −0.04 in all states except the terminal states (which have
rewards +1 and –1). The utility of an environment history is just (for now) the sum of the rewards received. For example, if the agent reaches the +1 state after 10 steps, its total utility will be
0.6. The negative reward of –0.04 gives the agent an incentive to reach (4,3) quickly, so our environment is a stochastic generalization of the search problems of Chapter 3. Another way of saying
this is that the agent does not enjoy living in this environment and so wants to leave as soon as possible.
To sum up: a sequential decision problem for a fully observable, stochastic environment with a Markovian transition model and additive rewards is called a Markov decision process, or MDP, and
consists of a set of states (with an initial state s~0~); a set ACTIONS(s) of actions in each state; a transition model P (s′ | s, a); and a reward function R(s).^1^
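As a concrete, if informal, sketch, the 4× 3 world can be written down as an MDP in a few lines. The dictionary-based encoding below is just one convenient choice, not the book's code.

```python
# The 4x3 grid world as an MDP: states, ACTIONS(s), transition model P(s'|s,a),
# and reward R(s). Coordinates are (column, row); (2,2) is the wall.
STATES = [(x, y) for x in range(1, 5) for y in range(1, 4) if (x, y) != (2, 2)]
TERMINALS = {(4, 3): +1.0, (4, 2): -1.0}
MOVES = {"Up": (0, 1), "Down": (0, -1), "Left": (-1, 0), "Right": (1, 0)}
PERPENDICULAR = {"Up": ("Left", "Right"), "Down": ("Left", "Right"),
                 "Left": ("Up", "Down"), "Right": ("Up", "Down")}

def actions(s):
    return [] if s in TERMINALS else list(MOVES)   # no actions in a terminal state

def R(s):
    return TERMINALS.get(s, -0.04)

def go(s, direction):
    """Deterministic move; bumping into the wall or an edge leaves s unchanged."""
    dx, dy = MOVES[direction]
    s2 = (s[0] + dx, s[1] + dy)
    return s2 if s2 in STATES else s

def P(s, a):
    """Transition model as a list of (probability, next_state) pairs."""
    slip_left, slip_right = PERPENDICULAR[a]
    return [(0.8, go(s, a)), (0.1, go(s, slip_left)), (0.1, go(s, slip_right))]

print(P((1, 1), "Up"))   # [(0.8, (1, 2)), (0.1, (1, 1)), (0.1, (2, 1))]
```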
The next question is, what does a solution to the problem look like? We have seen that any fixed action sequence won’t solve the problem, because the agent might end up in a state other than the
goal. Therefore, a solution must specify what the agent should do for any state that the agent might reach. A solution of this kind is called a policy. It is traditional to denote a policy by π, and
π(s) is the action recommended by the policy π for state s. If the agent has a complete policy, then no matter what the outcome of any action, the agent will always know what to do next.
Each time a given policy is executed starting from the initial state, the stochastic nature of the environment may lead to a different environment history. The quality of a policy is therefore
measured by the expected utility of the possible environment histories generated by that policy. An optimal policy is a policy that yields the highest expected utility. We use π∗ to denote an optimal
policy. Given π∗, the agent decides what to do by consulting its current percept, which tells it the current state s, and then executing the action π∗(s). A policy represents the agent function
explicitly and is therefore a description of a simple reflex agent, computed from the information used for a utility-based agent.
An optimal policy for the world of Figure 17.1 is shown in Figure 17.2(a). Notice that, because the cost of taking a step is fairly small compared with the penalty for ending up in (4,2) by accident,
the optimal policy for the state (3,1) is conservative. The policy recommends taking the long way round, rather than taking the shortcut and thereby risking entering (4,2).
The balance of risk and reward changes depending on the value of R(s) for the nonterminal states. Figure 17.2(b) shows optimal policies for four different ranges of R(s). When R(s) ≤ −1.6284, life is
so painful that the agent heads straight for the nearest exit, even if the exit is worth –1. When −0.4278 ≤ R(s) ≤ −0.0850, life is quite unpleasant; the agent takes the shortest route to the +1
state and is willing to risk falling into the –1 state by accident. In particular, the agent takes the shortcut from (3,1). When life is only slightly dreary (−0.0221 < R(s) < 0), the optimal policy
takes no risks at all. In (4,1) and (3,2), the agent heads directly away from the –1 state so that it cannot fall in by accident, even though this means banging its head against the wall quite a few
times. Finally, if R(s) > 0, then life is positively enjoyable and the agent avoids both exits. As long as the actions in (4,1), (3,2),
1 Some definitions of MDPs allow the reward to depend on the action and outcome too, so the reward function is R(s, a, s′). This simplifies the description of some environments but does not change
the problem in any fundamental way, as shown in Exercise 17.4.
Alt text
and (3,3) are as shown, every policy is optimal, and the agent obtains infinite total reward because it never enters a terminal state. Surprisingly, it turns out that there are six other optimal
policies for various ranges of R(s); Exercise 17.5 asks you to find them.
The careful balancing of risk and reward is a characteristic of MDPs that does not arise in deterministic search problems; moreover, it is a characteristic of many real-world decision problems. For
this reason, MDPs have been studied in several fields, including AI, operations research, economics, and control theory. Dozens of algorithms have been proposed for calculating optimal policies. In
Sections 17.2 and 17.3 we describe two of the most important algorithm families. First, however, we must complete our investigation of utilities and policies for sequential decision problems.
Utilities over time
In the MDP example in Figure 17.1, the performance of the agent was measured by a sum of rewards for the states visited. This choice of performance measure is not arbitrary, but it is not the only
possibility for the utility function on environment histories, which we write as U~h~([s~0~, s~1~, . . . , s~n~]). Our analysis draws on multiattribute utility theory (Section 16.4) and is somewhat
technical; the impatient reader may wish to skip to the next section.
The first question to answer is whether there is a finite horizon or an infinite horizon for decision making. A finite horizon means that there is a fixed time N after which nothing matters—the game
is over, so to speak. Thus, U~h~([s~0~, s~1~, . . . , s~N+k~]) = U~h~([s~0~, s~1~, . . . , s~N~]) for all k > 0. For example, suppose an agent starts at (3,1) in the 4× 3 world of Figure 17.1, and
suppose that N = 3. Then, to have any chance of reaching the +1 state, the agent must head directly for it, and the optimal action is to go Up. On the other hand, if N = 100, then there is plenty of
time to take the safe route by going Left. So, with a finite horizon, the optimal action in a given state could change over time. We say that the optimal policy for a finite horizon is nonstationary.
With no fixed time limit, on the other hand, there is no reason to behave differently in the same state at different times. Hence, the optimal action depends only on the current state, and the
optimal policy is stationary. Policies for the infinite-horizon case are therefore simpler than those for the finite-horizon case, and we deal mainly with the infinite-horizon case in this chapter.
(We will see later that for partially observable environments, the infinite-horizon case is not so simple.) Note that “infinite horizon” does not necessarily mean that all state sequences are
infinite; it just means that there is no fixed deadline. In particular, there can be finite state sequences in an infinite-horizon MDP containing a terminal state.
The next question we must decide is how to calculate the utility of state sequences. In the terminology of multiattribute utility theory, each state s~i~ can be viewed as an attribute of the state
sequence [s~0~, s~1~, s~2~, . . .]. To obtain a simple expression in terms of the attributes, we will need to make some sort of preference-independence assumption. The most natural assumption is that
the agent’s preferences between state sequences are stationary. Stationarity for preferences means the following: if two state sequences [s~0~, s~1~, s~2~, . . .] and [s′~0~, s′~1~, s′~2~, . . .]
begin with the same state (i.e., s~0~ = s′~0~), then the two sequences should be preference-ordered the same way as the sequences [s~1~, s~2~, . . .] and [s′~1~, s′~2~, . . .]. In English, this
means that if you prefer one future to another starting tomorrow, then you should still prefer that future if it were to start today instead. Stationarity is a fairly innocuous-looking assumption
with very strong consequences: it turns out that under stationarity there are just two coherent ways to assign utilities to sequences:
1. Additive rewards: The utility of a state sequence is
U~h~([s~0~, s~1~, s~2~, . . .]) = R(s~0~) + R(s~1~) + R(s~2~) + · · · .
The 4× 3 world in Figure 17.1 uses additive rewards. Notice that additivity was used implicitly in our use of path cost functions in heuristic search algorithms (Chapter 3).
2. Discounted rewards: The utility of a state sequence is
U~h~([s~0~, s~1~, s~2~, . . .]) = R(s~0~) + γR(s~1~) + γ^2^R(s~2~) + · · · ,
where the discount factor γ is a number between 0 and 1. The discount factor describes the preference of an agent for current rewards over future rewards. When γ is close to 0, rewards in the distant
future are viewed as insignificant. When γ is 1, discounted rewards are exactly equivalent to additive rewards, so additive rewards are a special case of discounted rewards. Discounting appears to be
a good model of both animal and human preferences over time. A discount factor of γ is equivalent to an interest rate of (1/γ) − 1.
For reasons that will shortly become clear, we assume discounted rewards in the remainder of the chapter, although sometimes we allow γ =1.
Lurking beneath our choice of infinite horizons is a problem: if the environment does not contain a terminal state, or if the agent never reaches one, then all environment histories will be
infinitely long, and utilities with additive, undiscounted rewards will generally be infinite. While we can agree that +∞ is better than −∞, comparing two state sequences with +∞ utility is more
difficult. There are three solutions, two of which we have seen already:
1. With discounted rewards, the utility of an infinite sequence is finite. In fact, if γ < 1 and rewards are bounded by ±R~max~, we have
Alt text
using the standard formula for the sum of an infinite geometric series.
2. If the environment contains terminal states and if the agent is guaranteed to get to one eventually, then we will never need to compare infinite sequences. A policy that is guaranteed to reach a
terminal state is called a proper policy. With proper policies, we can use γ = 1 (i.e., additive rewards). The first three policies shown in Figure 17.2(b) are proper, but the fourth is improper.
It gains infinite total reward by staying away from the terminal states when the reward for the nonterminal states is positive. The existence of improper policies can cause the standard
algorithms for solving MDPs to fail with additive rewards, and so provides a good reason for using discounted rewards.
3. Infinite sequences can be compared in terms of the average reward obtained per time step. Suppose that square (1,1) in the 4× 3 world has a reward of 0.1 while the other nonterminal states have a
reward of 0.01. Then a policy that does its best to stay in (1,1) will have higher average reward than one that stays elsewhere. Average reward is a useful criterion for some problems, but the
analysis of average-reward algorithms is beyond the scope of this book.
In sum, discounted rewards present the fewest difficulties in evaluating state sequences.
Optimal policies and the utilities of states
Having decided that the utility of a given state sequence is the sum of discounted rewards obtained during the sequence, we can compare policies by comparing the expected utilities obtained when
executing them. We assume the agent is in some initial state s and define S~t~ (a random variable) to be the state the agent reaches at time t when executing a particular policy π. (Obviously, S~0~ =
s, the state the agent is in now.) The probability distribution over state sequences S~1~, S~2~, . . . is determined by the initial state s, the policy π, and the transition model for the environment.
The expected utility obtained by executing π starting in s is given by
Alt text
where the expectation is with respect to the probability distribution over state sequences determined by s and π. Now, out of all the policies the agent could choose to execute starting in s, one (or
more) will have higher expected utilities than all the others. We’ll use π∗~s~ to denote one of these policies:
Alt text
Remember that π∗~s~ is a policy, so it recommends an action for every state; its connection with s in particular is that it’s an optimal policy when s is the starting state. A remarkable consequence
of using discounted utilities with infinite horizons is that the optimal policy is independent of the starting state. (Of course, the action sequence won’t be independent; remember that a policy is a
function specifying an action for each state.) This fact seems intuitively obvious: if policy π∗~a~ is optimal starting in a and policy π∗~b~ is optimal starting in b, then, when they reach a third
state c, there’s no good reason for them to disagree with each other, or with π∗~c~, about what to do next.^2^ So we can simply write π∗ for an optimal policy.
Given this definition, the true utility of a state is just U^π∗^(s)—that is, the expected sum of discounted rewards if the agent executes an optimal policy. We write this as U(s), matching the
notation used in Chapter 16 for the utility of an outcome. Notice that U(s) and R(s) are quite different quantities; R(s) is the “short term” reward for being in s, whereas U(s) is the “long term”
total reward from s onward. Figure 17.3 shows the utilities for the 4× 3 world. Notice that the utilities are higher for states closer to the +1 exit, because fewer steps are required to reach the exit.
Alt text
The utility function U(s) allows the agent to select actions by using the principle of maximum expected utility from Chapter 16—that is, choose the action that maximizes the expected utility of the
subsequent state:
Alt text
The next two sections describe algorithms for finding optimal policies.
2 Although this seems obvious, it does not hold for finite-horizon policies or for other ways of combining rewards over time. The proof follows directly from the uniqueness of the utility function on
states, as shown in Section 17.2.
In this section, we present an algorithm, called value iteration, for calculating an optimal policy. The basic idea is to calculate the utility of each state and then use the state utilities to
select an optimal action in each state.
The Bellman equation for utilities
Section 17.1.2 defined the utility of being in a state as the expected sum of discounted rewards from that point onwards. From this, it follows that there is a direct relationship between the utility
of a state and the utility of its neighbors: the utility of a state is the immediate reward for that state plus the expected discounted utility of the next state, assuming that the agent chooses the
optimal action. That is, the utility of a state is given by
Alt text
This is called the Bellman equation, after Richard Bellman (1957). The utilities of the
states—defined by Equation (17.2) as the expected utility of subsequent state sequences—are solutions of the set of Bellman equations. In fact, they are the unique solutions, as we show in Section 17.2.3.
Let us look at one of the Bellman equations for the 4× 3 world. The equation for the state (1,1) is
U(1, 1) = −0.04 + γ max[ 0.8U(1, 2) + 0.1U(2, 1) + 0.1U(1, 1), (Up)
0.9U(1, 1) + 0.1U(1, 2), (Left)
0.9U(1, 1) + 0.1U(2, 1), (Down)
0.8U(2, 1) + 0.1U(1, 2) + 0.1U(1, 1) ]. (Right)
When we plug in the numbers from Figure 17.3, we find that Up is the best action.
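To make the calculation concrete, the snippet below evaluates those four expressions with γ = 1 and some illustrative utility values (chosen here for the example; they need not match Figure 17.3 exactly):

```python
# Evaluate the right-hand side of the Bellman equation for state (1,1),
# using illustrative utility values and gamma = 1.
gamma = 1.0
U = {(1, 1): 0.705, (1, 2): 0.762, (2, 1): 0.655}

action_values = {
    "Up":    0.8 * U[(1, 2)] + 0.1 * U[(2, 1)] + 0.1 * U[(1, 1)],
    "Left":  0.9 * U[(1, 1)] + 0.1 * U[(1, 2)],
    "Down":  0.9 * U[(1, 1)] + 0.1 * U[(2, 1)],
    "Right": 0.8 * U[(2, 1)] + 0.1 * U[(1, 2)] + 0.1 * U[(1, 1)],
}
best = max(action_values, key=action_values.get)
print(best, action_values)                       # Up comes out on top
print("U(1,1) =", -0.04 + gamma * action_values[best])
```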
The value iteration algorithm
The Bellman equation is the basis of the value iteration algorithm for solving MDPs. If there are n possible states, then there are n Bellman equations, one for each state. The n equations contain n
unknowns—the utilities of the states. So we would like to solve these simultaneous equations to find the utilities. There is one problem: the equations are nonlinear, because the “max” operator is
not a linear operator. Whereas systems of linear equations can be solved quickly using linear algebra techniques, systems of nonlinear equations are more problematic. One thing to try is an iterative
approach. We start with arbitrary initial values for the utilities, calculate the right-hand side of the equation, and plug it into the left-hand side—thereby updating the utility of each state from
the utilities of its neighbors. We repeat this until we reach an equilibrium. Let U~i~(s) be the utility value for state s at the i-th iteration. The iteration step, called a Bellman update, looks like
this:
Alt text
Alt text
Alt text
where the update is assumed to be applied simultaneously to all the states at each iteration. If we apply the Bellman update infinitely often, we are guaranteed to reach an equilibrium (see Section
17.2.3), in which case the final utility values must be solutions to the Bellman equations. In fact, they are also the unique solutions, and the corresponding policy (obtained using Equation (17.4))
is optimal. The algorithm, called VALUE-ITERATION, is shown in Figure 17.4.
We can apply value iteration to the 4× 3 world in Figure 17.1(a). Starting with initial values of zero, the utilities evolve as shown in Figure 17.5(a). Notice how the states at different distances
from (4,3) accumulate negative reward until a path is found to (4,3), whereupon the utilities start to increase. We can think of the value iteration algorithm as propagating information through the
state space by means of local updates.
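A compact value-iteration sketch in the spirit of Figure 17.4 follows; the two-state MDP at the bottom is deliberately tiny and made up, just so the function runs on its own, and the stopping test anticipates Equation (17.8).

```python
# Value iteration: repeat Bellman updates until the largest change is below
# epsilon * (1 - gamma) / gamma.

def value_iteration(states, actions, P, R, gamma=0.9, epsilon=1e-4):
    U = {s: 0.0 for s in states}
    while True:
        U_new, delta = {}, 0.0
        for s in states:
            U_new[s] = R(s) + gamma * max(
                sum(p * U[s2] for p, s2 in P(s, a)) for a in actions(s))
            delta = max(delta, abs(U_new[s] - U[s]))
        U = U_new
        if delta < epsilon * (1 - gamma) / gamma:
            return U

# Tiny made-up MDP: in 'cold' you can 'wait' or 'heat'; only 'warm' pays off.
def actions(s): return ["wait", "heat"]
def R(s): return 1.0 if s == "warm" else 0.0
def P(s, a):
    return [(0.9, "warm"), (0.1, "cold")] if a == "heat" else [(0.8, s), (0.2, "cold")]

print(value_iteration(["cold", "warm"], actions, P, R))
```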
Convergence of value iteration
We said that value iteration eventually converges to a unique set of solutions of the Bellman equations. In this section, we explain why this happens. We introduce some useful mathematical ideas
along the way, and we obtain some methods for assessing the error in the utility function returned when the algorithm is terminated early; this is useful because it means that we don’t have to run
forever. This section is quite technical.
The basic concept used in showing that value iteration converges is the notion of a contraction. Roughly speaking, a contraction is a function of one argument that, when applied to two different
inputs in turn, produces two output values that are “closer together,” by at least some constant factor, than the original inputs. For example, the function “divide by two” is a contraction, because,
after we divide any two numbers by two, their difference is halved. Notice that the “divide by two” function has a fixed point, namely zero, that is unchanged by the application of the function. From
this example, we can discern two important properties of contractions:
• A contraction has only one fixed point; if there were two fixed points they would not get closer together when the function was applied, so it would not be a contraction.
• When the function is applied to any argument, the value must get closer to the fixed point (because the fixed point does not move), so repeated application of a contraction always reaches the
fixed point in the limit.
Now, suppose we view the Bellman update (Equation (17.6)) as an operator B that is applied simultaneously to update the utility of every state. Let U~i~ denote the vector of utilities for all the
states at the ith iteration. Then the Bellman update equation can be written as U~i+1~ ← B U~i~ .
Next, we need a way to measure distances between utility vectors. We will use the max norm, which measures the “length” of a vector by the absolute value of its biggest component:
||U || = max~s~ |U(s)| .
With this definition, the “distance” between two vectors, ||U − U ′||, is the maximum difference between any two corresponding elements. The main result of this section is the following: Let U~i~ and
U′~i~ be any two utility vectors. Then we have
||B U~i~ −B U′~i~ || ≤ γ ||U~i~ − U′~i~ || . (17.7)
That is, the Bellman update is a contraction by a factor of γ on the space of utility vectors. (Exercise 17.6 provides some guidance on proving this claim.) Hence, from the properties of contractions
in general, it follows that value iteration always converges to a unique solution of the Bellman equations whenever γ < 1.
We can also use the contraction property to analyze the rate of convergence to a solution. In particular, we can replace U′~i~ in Equation (17.7) with the true utilities U , for which B U = U . Then
we obtain the inequality
||B U~i~ − U || ≤ γ ||U~i~ − U || .
So, if we view ||U~i~ − U|| as the error in the estimate U~i~, we see that the error is reduced by a factor of at least γ on each iteration. This means that value iteration converges exponentially
fast. We can calculate the number of iterations required to reach a specified error bound ε as follows: First, recall from Equation (17.1) that the utilities of all states are bounded by ±R~max~/(1 − γ).
This means that the maximum initial error is ||U~0~ − U|| ≤ 2R~max~/(1 − γ). Suppose we run for N iterations to reach an error of at most ε. Then, because the error is reduced by at least a factor of γ each
time, we require γ^N^ · 2R~max~/(1 − γ) ≤ ε. Taking logs, we find that
N = ⌈log(2R~max~/ε(1 − γ)) / log(1/γ)⌉
iterations suffice. Figure 17.5(b) shows how N varies with γ, for different values of the ratio ε/R~max~. The good news is that, because of the exponentially fast convergence, N does not depend much on
the ratio ε/R~max~. The bad news is that N grows rapidly as γ becomes close to 1. We can get fast convergence if we make γ small, but this effectively gives the agent a short horizon and could miss the
long-term effects of the agent’s actions.
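A few lines reproduce the shape of that dependence under the stated bound:

```python
# Iterations needed so the initial error 2*Rmax/(1-gamma) shrinks below epsilon.
from math import ceil, log

def iterations_needed(gamma, epsilon, r_max=1.0):
    return ceil(log(2 * r_max / (epsilon * (1 - gamma))) / log(1 / gamma))

for gamma in (0.5, 0.9, 0.99, 0.999):
    print(gamma, iterations_needed(gamma, epsilon=0.001))
```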
The error bound in the preceding paragraph gives some idea of the factors influencing the run time of the algorithm, but is sometimes overly conservative as a method of deciding when to stop the
iteration. For the latter purpose, we can use a bound relating the error to the size of the Bellman update on any given iteration. From the contraction property (Equation (17.7)), it can be shown
that if the update is small (i.e., no state’s utility changes by much), then the error, compared with the true utility function, also is small. More precisely,
if ||U~i+1~ − U~i~|| < ε(1− γ)/γ then ||U~i+1~ − U || < ε . (17.8)
This is the termination condition used in the VALUE-ITERATION algorithm of Figure 17.4. So far, we have analyzed the error in the utility function returned by the value iteration algorithm. What the
agent really cares about, however, is how well it will do if it makes its decisions on the basis of this utility function. Suppose that after i iterations of value iteration, the agent has an
estimate U~i~ of the true utility U and obtains the MEU policy π~i~ based on one-step look-ahead using U~i~ (as in Equation (17.4)). Will the resulting behavior be nearly as good as the optimal behavior?
This is a crucial question for any real agent, and it turns out that the answer is yes. U^π~i~^(s) is the utility obtained if π~i~ is executed starting in s, and the policy loss ||U^π~i~^ − U|| is the
most the agent can lose by executing π~i~ instead of the optimal policy π∗. The policy loss of π~i~ is connected to the error in U~i~ by the following inequality:
if ||U~i~ − U|| < ε then ||U^π~i~^ − U|| < 2εγ/(1 − γ) . (17.9)
In practice, it often occurs that π~i~ becomes optimal long before U~i~ has converged. Figure 17.6 shows how the maximum error in U~i~ and the policy loss approach zero as the value iteration process
proceeds for the 4× 3 environment with γ = 0.9. The policy π~i~ is optimal when i = 4, even though the maximum error in U~i~ is still 0.46.
Now we have everything we need to use value iteration in practice. We know that it converges to the correct utilities, we can bound the error in the utility estimates if we stop after a finite number
of iterations, and we can bound the policy loss that results from executing the corresponding MEU policy. As a final note, all of the results in this section depend on discounting with γ < 1. If γ =
1 and the environment contains terminal states, then a similar set of convergence results and error bounds can be derived whenever certain technical conditions are satisfied.
In the previous section, we observed that it is possible to get an optimal policy even when the utility function estimate is inaccurate. If one action is clearly better than all others, then the
exact magnitude of the utilities on the states involved need not be precise. This insight suggests an alternative way to find optimal policies. The policy iteration algorithm alternates the following
two steps, beginning from some initial policy π~0~:
• Policy evaluation: given a policy π~i~, calculate U~i~ = U^π~i~^, the utility of each state if π~i~ were to be executed.
• Policy improvement: Calculate a new MEU policy π~i+1~, using one-step look-ahead based on U~i~ (as in Equation (17.4)).
The algorithm terminates when the policy improvement step yields no change in the utilities. At this point, we know that the utility function Ui is a fixed point of the Bellman update, so it is a
solution to the Bellman equations, and π~i~ must be an optimal policy. Because there are only finitely many policies for a finite state space, and each iteration can be shown to yield a better policy,
policy iteration must terminate. The algorithm is shown in Figure 17.7.
The policy improvement step is obviously straightforward, but how do we implement the POLICY-EVALUATION routine? It turns out that doing so is much simpler than solving the standard Bellman equations
(which is what value iteration does), because the action in each state is fixed by the policy. At the i-th iteration, the policy π~i~ specifies the action π~i~(s) in
Alt text
state s. This means that we have a simplified version of the Bellman equation (17.5) relating the utility of s (under π~i~) to the utilities of its neighbors:
Alt text
The important point is that these equations are linear, because the “max” operator has been removed. For n states, we have n linear equations with n unknowns, which can be solved exactly in time
O(n^3^) by standard linear algebra methods.
For small state spaces, policy evaluation using exact solution methods is often the most efficient approach. For large state spaces, O(n^3^) time might be prohibitive. Fortunately, it is not necessary
to do exact policy evaluation. Instead, we can perform some number of simplified value iteration steps (simplified because the policy is fixed) to give a reasonably good approximation of the
utilities. The simplified Bellman update for this process is
Alt text
and this is repeated k times to produce the next utility estimate. The resulting algorithm is called modified policy iteration. It is often much more efficient than standard policy iteration or value iteration.
Alt text
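A sketch that combines the two steps, exact policy evaluation by solving the linear system and then greedy improvement, on the same style of tiny made-up MDP; numpy is used only for the linear solve.

```python
# Policy iteration: evaluate the current policy exactly (a linear solve, O(n^3)),
# then improve it greedily; stop when the policy no longer changes.
import numpy as np

def policy_iteration(states, actions, P, R, gamma=0.9):
    idx = {s: i for i, s in enumerate(states)}
    pi = {s: actions(s)[0] for s in states}          # arbitrary initial policy
    while True:
        # Policy evaluation: solve U = R + gamma * T_pi U for U.
        A = np.eye(len(states))
        b = np.array([R(s) for s in states], dtype=float)
        for s in states:
            for p, s2 in P(s, pi[s]):
                A[idx[s], idx[s2]] -= gamma * p
        U = np.linalg.solve(A, b)
        # Policy improvement: one-step look-ahead with the evaluated utilities.
        new_pi = {s: max(actions(s),
                         key=lambda a: sum(p * U[idx[s2]] for p, s2 in P(s, a)))
                  for s in states}
        if new_pi == pi:
            return pi, dict(zip(states, U))
        pi = new_pi

# Tiny made-up two-state example, as in the value-iteration sketch.
def actions(s): return ["wait", "heat"]
def R(s): return 1.0 if s == "warm" else 0.0
def P(s, a):
    return [(0.9, "warm"), (0.1, "cold")] if a == "heat" else [(0.8, s), (0.2, "cold")]

print(policy_iteration(["cold", "warm"], actions, P, R))
```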
The algorithms we have described so far require updating the utility or policy for all states at once. It turns out that this is not strictly necessary. In fact, on each iteration, we can pick any
subset of states and apply either kind of updating (policy improvement or simplified value iteration) to that subset. This very general algorithm is called asynchronous policy iteration. Given
certain conditions on the initial policy and initial utility function, asynchronous policy iteration is guaranteed to converge to an optimal policy. The freedom to choose any states to work on means
that we can design much more efficient heuristic algorithms—for example, algorithms that concentrate on updating the values of states that are likely to be reached by a good policy. This makes a lot
of sense in real life: if one has no intention of throwing oneself off a cliff, one should not spend time worrying about the exact value of the resulting states.
The description of Markov decision processes in Section 17.1 assumed that the environment was fully observable. With this assumption, the agent always knows which state it is in. This, combined with
the Markov assumption for the transition model, means that the optimal policy depends only on the current state. When the environment is only partially observable, the situation is, one might say,
much less clear. The agent does not necessarily know which state it is in, so it cannot execute the action π(s) recommended for that state. Furthermore, the utility of a state s and the optimal
action in s depend not just on s, but also on how much the agent knows when it is in s. For these reasons, partially observable MDPs, or POMDPs (pronounced “pom-dee-pees”), are usually viewed as much more
difficult than ordinary MDPs. We cannot avoid POMDPs, however, because the real world is one.
Definition of POMDPs
To get a handle on POMDPs, we must first define them properly. A POMDP has the same elements as an MDP—the transition model P (s′ | s, a), actions A(s), and reward function R(s)—but, like the
partially observable search problems of Section 4.4, it also has a sensor model P (e | s). Here, as in Chapter 15, the sensor model specifies the probability of perceiving evidence e in state s.3 For
example, we can convert the 4× 3 world of Figure 17.1 into a POMDP by adding a noisy or partial sensor instead of assuming that the agent knows its location exactly. Such a sensor might measure the
number of adjacent walls, which happens to be 2 in all the nonterminal squares except for those in the third column, where the value is 1; a noisy version might report the wrong value with some probability.
In Chapters 4 and 11, we studied nondeterministic and partially observable planning problems and identified the belief state—the set of actual states the agent might be in—as a key concept for
describing and calculating solutions. In POMDPs, the belief state b becomes a probability distribution over all possible states, just as in Chapter 15. For example, the initial
3 As with the reward function for MDPs, the sensor model can also depend on the action and outcome state, but again this change is not fundamental.
belief state for the 4× 3 POMDP could be the uniform distribution over the nine nonterminal states, i.e., 〈1/9, 1/9, 1/9, 1/9, 1/9, 1/9, 1/9, 1/9, 1/9, 0, 0〉. We write b(s) for the probability
assigned to the actual state s by belief state b. The agent can calculate its current belief state as the conditional probability distribution over the actual states given the sequence of percepts
and actions so far. This is essentially the filtering task described in Chapter 15. The basic recursive filtering equation (15.5 on page 572) shows how to calculate the new belief state from the
previous belief state and the new evidence. For POMDPs, we also have an action to consider, but the result is essentially the same. If b(s) was the previous belief state, and the agent does action a
and then perceives evidence e, then the new belief state is given by
b′(s′) = α P(e | s′) Σs P(s′ | s, a) b(s) ,
where α is a normalizing constant that makes the belief state sum to 1. By analogy with the update operator for filtering (page 572), we can write this as
b′ = FORWARD(b, a, e) . (17.11)
In the 4× 3 POMDP, suppose the agent moves Left and its sensor reports 1 adjacent wall; then it’s quite likely (although not guaranteed, because both the motion and the sensor are noisy) that the
agent is now in (3,1). Exercise 17.13 asks you to calculate the exact probability values for the new belief state.
The fundamental insight required to understand POMDPs is this: the optimal action depends only on the agent’s current belief state. That is, the optimal policy can be described by a mapping π∗(b)
from belief states to actions. It does not depend on the actual state the agent is in. This is a good thing, because the agent does not know its actual state; all it knows is the belief state. Hence,
the decision cycle of a POMDP agent can be broken down into the following three steps:
1. Given the current belief state b, execute the action a= π ∗(b).
2. Receive percept e.
3. Set the current belief state to FORWARD(b, a, e) and repeat.
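The belief update and the three-step cycle above can be sketched as follows; T[a][s][s2] for the transition model, sensor[s2][e] for the sensor model, the belief-state policy pi_star, and the env object are all illustrative placeholders rather than anything defined in the text.

```python
def forward(b, a, e, T, sensor):
    # b'(s') is proportional to P(e | s') * sum_s P(s' | s, a) * b(s)
    bp = {s2: sensor[s2][e] * sum(T[a][s][s2] * b[s] for s in b) for s2 in b}
    alpha = 1.0 / sum(bp.values())          # normalizing constant so the belief sums to 1
    return {s2: alpha * p for s2, p in bp.items()}

def pomdp_agent_step(b, pi_star, env, T, sensor):
    a = pi_star(b)                          # 1. act on the current belief state
    e = env.execute(a)                      # 2. receive a percept (hypothetical environment API)
    return a, forward(b, a, e, T, sensor)   # 3. update the belief state and repeat
```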
Now we can think of POMDPs as requiring a search in belief-state space, just like the methods for sensorless and contingency problems in Chapter 4. The main difference is that the POMDP belief-state
space is continuous, because a POMDP belief state is a probability distribution. For example, a belief state for the 4× 3 world is a point in an 11-dimensional continuous space. An action changes the
belief state, not just the physical state. Hence, the action is evaluated at least in part according to the information the agent acquires as a result. POMDPs therefore include the value of
information (Section 16.6) as one component of the decision problem.
Let’s look more carefully at the outcome of actions. In particular, let’s calculate the probability that an agent in belief state b reaches belief state b′ after executing action a. Now, if we knew
the action and the subsequent percept, then Equation (17.11) would provide a deterministic update to the belief state: b′ = FORWARD(b, a, e). Of course, the subsequent percept is not yet known, so
the agent might arrive in one of several possible belief states b′, depending on the percept that is received. The probability of perceiving e, given that a was performed starting in belief state b,
is given by summing over all the actual states s ′ that the agent might reach:
P(e | a, b) = Σs′ P(e | a, s′, b) P(s′ | a, b) = Σs′ P(e | s′) Σs P(s′ | s, a) b(s) .
Writing the probability of reaching b′ from b, given action a, as P(b′ | b, a), we then have
P(b′ | b, a) = Σe P(b′ | e, a, b) P(e | a, b) = Σe P(b′ | e, a, b) Σs′ P(e | s′) Σs P(s′ | s, a) b(s) ,   (17.12)
where P(b′ | e, a, b) is 1 if b′ = FORWARD(b, a, e) and 0 otherwise. Equation (17.12) defines a transition model for the belief-state space. We can also define a reward function for belief states (i.e., the expected reward for the actual states the agent might be in):
ρ(b) = Σs b(s) R(s) .
Together, P (b′ | b, a) and ρ(b) define an observable MDP on the space of belief states. Furthermore, it can be shown that an optimal policy for this MDP, π∗(b), is also an optimal policy for the
original POMDP. In other words, solving a POMDP on a physical state space can be reduced to solving an MDP on the corresponding belief-state space. This fact is perhaps less surprising if we remember
that the belief state is always observable to the agent, by definition.
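A sketch of the two quantities that define this belief-state MDP, using the same illustrative T, sensor, and R conventions as the belief-update sketch above.

```python
def percept_probability(e, a, b, T, sensor):
    # P(e | a, b) = sum_s' P(e | s') * sum_s P(s' | s, a) * b(s)
    return sum(sensor[s2][e] * sum(T[a][s][s2] * b[s] for s in b) for s2 in b)

def rho(b, R):
    # expected one-step reward over the states the agent might actually be in
    return sum(b[s] * R[s] for s in b)
```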
Notice that, although we have reduced POMDPs to MDPs, the MDP we obtain has a continuous (and usually high-dimensional) state space. None of the MDP algorithms described in Sections 17.2 and 17.3
applies directly to such MDPs. The next two subsections describe a value iteration algorithm designed specifically for POMDPs and an online decision-making algorithm, similar to those developed for
games in Chapter 5.
Value iteration for POMDPs
Section 17.2 described a value iteration algorithm that computed one utility value for each state. With infinitely many belief states, we need to be more creative. Consider an optimal policy π∗ and
its application in a specific belief state b: the policy generates an action, then, for each subsequent percept, the belief state is updated and a new action is generated, and so on. For this
specific b, therefore, the policy is exactly equivalent to a conditional plan, as defined in Chapter 4 for nondeterministic and partially observable problems. Instead of thinking about policies, let
us think about conditional plans and how the expected utility of executing a fixed conditional plan varies with the initial belief state. We make two observations:
1. Let the utility of executing a fixed conditional plan p starting in physical state s be αp(s). Then the expected utility of executing p in belief state b is just Σs b(s) αp(s), or b · αp if we
think of them both as vectors. Hence, the expected utility of a fixed conditional plan varies linearly with b; that is, it corresponds to a hyperplane in belief space.
2. At any given belief state b, the optimal policy will choose to execute the conditional plan with highest expected utility; and the expected utility of b under the optimal policy is just the
utility of that conditional plan:
U(b) = Uπ∗(b) = maxp b · αp .
If the optimal policy π ∗ chooses to execute p starting at b, then it is reasonable to expect that it might choose to execute p in belief states that are very close to b; in fact, if we bound the
depth of the conditional plans, then there are only finitely many such plans and the continuous space of belief states will generally be divided into regions, each corresponding to a particular
conditional plan that is optimal in that region.
From these two observations, we see that the utility function U(b) on belief states, being the maximum of a collection of hyperplanes, will be piecewise linear and convex.
To illustrate this, we use a simple two-state world. The states are labeled 0 and 1, with R(0)= 0 and R(1)= 1. There are two actions: Stay stays put with probability 0.9 and Go switches to the other
state with probability 0.9. For now we will assume the discount factor γ = 1. The sensor reports the correct state with probability 0.6. Obviously, the agent should Stay when it thinks it’s in state
1 and Go when it thinks it’s in state 0.
The advantage of a two-state world is that the belief space can be viewed as one-dimensional, because the two probabilities must sum to 1. In Figure 17.8(a), the x-axis represents the belief state,
defined by b(1), the probability of being in state 1. Now let us consider the one-step plans [Stay ] and [Go], each of which receives the reward for the current state followed by the (discounted)
reward for the state reached after the action:
α[Stay](0) = R(0) + γ(0.9R(0) + 0.1R(1)) = 0.1
α[Stay](1) = R(1) + γ(0.9R(1) + 0.1R(0)) = 1.9
α[Go](0) = R(0) + γ(0.9R(1) + 0.1R(0)) = 0.9
α[Go](1) = R(1) + γ(0.9R(0) + 0.1R(1)) = 1.1
The hyperplanes (lines, in this case) for b ·α[Stay] and b ·α[Go] are shown in Figure 17.8(a) and their maximum is shown in bold. The bold line therefore represents the utility function for the
finite-horizon problem that allows just one action, and in each “piece” of the piecewise linear utility function the optimal action is the first action of the corresponding conditional plan. In this
case, the optimal one-step policy is to Stay when b(1) > 0.5 and Go otherwise.
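These one-step utilities are easy to check numerically; the sketch below simply evaluates αp(s) = R(s) + γ Σs′ P(s′ | s, a) R(s′) for the two-state world.

```python
# Quick numerical check of the one-step plan utilities in the two-state world
R, gamma = [0.0, 1.0], 1.0
T = {'Stay': [[0.9, 0.1], [0.1, 0.9]],   # T[a][s][s2] = P(s2 | s, a)
     'Go':   [[0.1, 0.9], [0.9, 0.1]]}

def alpha_one_step(a, s):
    return R[s] + gamma * sum(T[a][s][s2] * R[s2] for s2 in (0, 1))

for a in ('Stay', 'Go'):
    print(a, [alpha_one_step(a, s) for s in (0, 1)])
# Stay -> [0.1, 1.9], Go -> [0.9, 1.1], matching the values above
```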
Once we have utilities αp(s) for all the conditional plans p of depth 1 in each physical state s, we can compute the utilities for conditional plans of depth 2 by considering each possible first
action, each possible subsequent percept, and then each way of choosing a depth-1 plan to execute for each percept:
[Stay ; if Percept = 0 then Stay else Stay ]
[Stay ; if Percept = 0 then Stay else Go] . . .
Figure 17.8 (a) Utility of two one-step plans as a function of the initial belief state b(1) for the two-state world, with the corresponding utility function shown in bold. (b) Utilities for 8 distinct two-step plans. (c) Utilities for four undominated two-step plans. (d) Utility function for optimal eight-step plans.
There are eight distinct depth-2 plans in all, and their utilities are shown in Figure 17.8(b). Notice that four of the plans, shown as dashed lines, are suboptimal across the entire belief space—we
say these plans are dominated, and they need not be considered further. There are four undominated plans, each of which is optimal in a specific region, as shown in Figure 17.8(c). The regions
partition the belief-state space.
We repeat the process for depth 3, and so on. In general, let p be a depth-d conditional plan whose initial action is a and whose depth-d − 1 subplan for percept e is p.e; then
αp(s) = R(s) + γ ( Σs′ P(s′ | s, a) Σe P(e | s′) αp.e(s′) ) .
This recursion naturally gives us a value iteration algorithm, which is sketched in Figure 17.9. The structure of the algorithm and its error analysis are similar to those of the basic value
iteration algorithm in Figure 17.4 on page 653; the main difference is that instead of computing one utility number for each state, POMDP-VALUE-ITERATION maintains a collection of
undominated plans with their utility hyperplanes. The algorithm’s complexity depends primarily on how many plans get generated. Given |A| actions and |E| possible observations, it is easy to show
that there are |A|^O(|E|^(d−1)) distinct depth-d plans. Even for the lowly two-state world with d = 8, the exact number is 2^255. The elimination of dominated plans is essential for reducing this doubly
exponential growth: the number of undominated plans with d= 8 is just 144. The utility function for these 144 plans is shown in Figure 17.8(d).
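One depth-d backup over a set of alpha vectors can be sketched as below. The pruning step shown is only a simple pointwise test (exact elimination of dominated plans requires a linear program), and the T/sensor/R conventions are the same illustrative ones used earlier.

```python
from itertools import product

def pomdp_backup(alphas, R, T, sensor, actions, percepts, gamma):
    states = range(len(R))
    new = []
    for a in actions:
        # pick one depth-(d-1) alpha vector to follow for each possible percept
        for choice in product(range(len(alphas)), repeat=len(percepts)):
            vec = [R[s] + gamma * sum(T[a][s][s2] *
                                      sum(sensor[s2][e] * alphas[choice[i]][s2]
                                          for i, e in enumerate(percepts))
                                      for s2 in states)
                   for s in states]
            new.append(vec)
    pruned = []                              # keep a vector only if nothing kept so far
    for v in new:                            # is pointwise at least as good everywhere
        if not any(all(w[s] >= v[s] for s in states) for w in pruned):
            pruned.append(v)
    return pruned
```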
Notice that even though state 0 has lower utility than state 1, the intermediate belief states have even lower utility because the agent lacks the information needed to choose a good action. This is
why information has value in the sense defined in Section 16.6 and optimal policies in POMDPs often include information-gathering actions.
Given such a utility function, an executable policy can be extracted by looking at which hyperplane is optimal at any given belief state b and executing the first action of the corresponding plan. In
Figure 17.8(d), the corresponding optimal policy is still the same as for depth-1 plans: Stay when b(1) > 0.5 and Go otherwise.
In practice, the value iteration algorithm in Figure 17.9 is hopelessly inefficient for larger problems—even the 4× 3 POMDP is too hard. The main reason is that, given n conditional plans at level d,
the algorithm constructs |A| · n|E| conditional plans at level d + 1 before eliminating the dominated ones. Since the 1970s, when this algorithm was developed, there have been several advances
including more efficient forms of value iteration and various kinds of policy iteration algorithms. Some of these are discussed in the notes at the end of the chapter. For general POMDPs, however,
finding optimal policies is very difficult (PSPACE-hard, in fact—i.e., very hard indeed). Problems with a few dozen states are often infeasible. The next section describes a different, approximate
method for solving POMDPs, one based on look-ahead search.
Online agents for POMDPs
In this section, we outline a simple approach to agent design for partially observable, stochastic environments. The basic elements of the design are already familiar:
• The transition and sensor models are represented by a dynamic Bayesian network (DBN), as described in Chapter 15.
• The dynamic Bayesian network is extended with decision and utility nodes, as used in decision networks in Chapter 16. The resulting model is called a dynamic decision network, or DDN.
• A filtering algorithm is used to incorporate each new percept and action and to update the belief state representation.
• Decisions are made by projecting forward possible action sequences and choosing the best one.
DBNs are factored representations in the terminology of Chapter 2; they typically have an exponential complexity advantage over atomic representations and can model quite substantial real-world
problems. The agent design is therefore a practical implementation of the utility-based agent sketched in Chapter 2.
In the DBN, the single state St becomes a set of state variables Xt, and there may be multiple evidence variables Et. We will use At to refer to the action at time t, so the transition model becomes P(Xt+1 | Xt, At) and the sensor model becomes P(Et | Xt). We will use Rt to refer to the reward received at time t and Ut to refer to the utility of the state at time t. (Both of these
are random variables.) With this notation, a dynamic decision network looks like the one shown in Figure 17.10.
Dynamic decision networks can be used as inputs for any POMDP algorithm, including those for value and policy iteration methods. In this section, we focus on look-ahead methods that project action
sequences forward from the current belief state in much the same way as do the game-playing algorithms of Chapter 5. The network in Figure 17.10 has been projected three steps into the future; the
current and future decisions A and the future observations
E and rewards R are all unknown. Notice that the network includes nodes for the rewards for Xt+1 and Xt+2, but the utility for Xt+3. This is because the agent must maximize the (discounted) sum of all future rewards, and U(Xt+3) represents the reward for Xt+3 and all subsequent rewards. As in Chapter 5, we assume that U is available only in some approximate form: if exact utility values
were available, look-ahead beyond depth 1 would be unnecessary.
Figure 17.11 shows part of the search tree corresponding to the three-step look-ahead DDN in Figure 17.10. Each of the triangular nodes is a belief state in which the agent makes a decision At+i for i = 0, 1, 2, . . .. The round (chance) nodes correspond to choices by the environment, namely, what evidence Et+i arrives. Notice that there are no chance nodes corresponding to the action outcomes;
this is because the belief-state update for an action is deterministic regardless of the actual outcome.
The belief state at each triangular node can be computed by applying a filtering algorithm to the sequence of percepts and actions leading to it. In this way, the algorithm takes into account the
fact that, for decision At+i, the agent will have available percepts Et+1, . . . , Et+i, even though at time t it does not know what those percepts will be. In this way, a decision-theoretic
agent automatically takes into account the value of information and will execute information-gathering actions where appropriate.
A decision can be extracted from the search tree by backing up the utility values from the leaves, taking an average at the chance nodes and taking the maximum at the decision nodes. This is similar
to the EXPECTIMINIMAX algorithm for game trees with chance nodes, except that (1) there can also be rewards at non-leaf states and (2) the decision nodes correspond to belief states rather than
actual states. The time complexity of an exhaustive search to depth d is O(|A|^d · |E|^d), where |A| is the number of available actions and |E| is the number of possible percepts. (Notice that this is
far less than the number of depth-d conditional plans generated by value iteration.) For problems in which the discount factor γ is not too close to 1, a shallow search is often good enough to give
near-optimal decisions. It is also possible to approximate the averaging step at the chance nodes, by sampling from the set of possible percepts instead of summing over all possible percepts. There
are various other ways of finding good approximate solutions quickly, but we defer them to Chapter 21.
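A sketch of this depth-limited look-ahead, reusing the forward, percept_probability, and rho sketches from earlier; U_hat is an assumed approximate utility estimate applied to belief states at the search frontier.

```python
def lookahead_value(b, depth, actions, percepts, T, sensor, R, gamma, U_hat):
    if depth == 0:
        return U_hat(b)                       # approximate utility at the search frontier
    r = rho(b, R)                             # reward at this belief state
    best = float('-inf')
    for a in actions:                         # decision node: take the maximum
        q = 0.0
        for e in percepts:                    # chance node: average over possible percepts
            p_e = percept_probability(e, a, b, T, sensor)
            if p_e > 0:
                q += p_e * lookahead_value(forward(b, a, e, T, sensor), depth - 1,
                                           actions, percepts, T, sensor, R, gamma, U_hat)
        best = max(best, q)
    return r + gamma * best
```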
Decision-theoretic agents based on dynamic decision networks have a number of advantages compared with other, simpler agent designs presented in earlier chapters. In particular, they handle partially
observable, uncertain environments and can easily revise their “plans” to handle unexpected evidence. With appropriate sensor models, they can handle sensor failure and can plan to gather
information. They exhibit “graceful degradation” under time pressure and in complex environments, using various approximation techniques. So what is missing? One defect of our DDN-based algorithm is
its reliance on forward search through state space, rather than using the hierarchical and other advanced planning techniques described in Chapter 11. There have been attempts to extend these
techniques into the probabilistic domain, but so far they have proved to be inefficient. A second, related problem is the basically propositional nature of the DDN language. We would like to be able
to extend some of the ideas for first-order probabilistic languages to the problem of decision making. Current research has shown that this extension is possible and has significant benefits, as
discussed in the notes at the end of the chapter.
This chapter has concentrated on making decisions in uncertain environments. But what if the uncertainty is due to other agents and the decisions they make? And what if the decisions of those agents
are in turn influenced by our decisions? We addressed this question once before, when we studied games in Chapter 5. There, however, we were primarily concerned with turn-taking games in fully
observable environments, for which minimax search can be used to find optimal moves. In this section we study the aspects of game theory that analyze games with simultaneous moves and other sources
of partial observability. (Game theorists use the terms perfect information and imperfect information rather than fully and partially observable.) Game theory can be used in at least two ways:
1. Agent design: Game theory can analyze the agent’s decisions and compute the expected utility for each decision (under the assumption that other agents are acting optimally according to game
theory). For example, in the game two-finger Morra, two players, O and E, simultaneously display one or two fingers. Let the total number of fingers be f . If f is odd, O collects f dollars from
E; and if f is even, E collects f dollars from O. Game theory can determine the best strategy against a rational player and the expected return for each player.4
4 Morra is a recreational version of an inspection game. In such games, an inspector chooses a day to inspect a facility (such as a restaurant or a biological weapons plant), and the facility
operator chooses a day to hide all the nasty stuff. The inspector wins if the days are different, and the facility operator wins if they are the same.
2. Mechanism design: When an environment is inhabited by many agents, it might be possible to define the rules of the environment (i.e., the game that the agents must play) so that the collective
good of all agents is maximized when each agent adopts the game-theoretic solution that maximizes its own utility. For example, game theory can help design the protocols for a collection of
Internet traffic routers so that each router has an incentive to act in such a way that global throughput is maximized. Mechanism design can also be used to construct intelligent multiagent
systems that solve complex problems in a distributed fashion.
Single-move games
We start by considering a restricted set of games: ones where all players take action simultaneously and the result of the game is based on this single set of actions. (Actually, it is not crucial
that the actions take place at exactly the same time; what matters is that no player has knowledge of the other players’ choices.) The restriction to a single move (and the very use of the word
“game”) might make this seem trivial, but in fact, game theory is serious business. It is used in decision-making situations including the auctioning of oil drilling rights and wireless frequency
spectrum rights, bankruptcy proceedings, product development and pricing decisions, and national defense—situations involving billions of dollars and hundreds of thousands of lives. A single-move
game is defined by three components:
• Players or agents who will be making decisions. Two-player games have received the most attention, although n-player games for n > 2 are also common. We give players capitalized names, like Alice
and Bob or O and E.
• Actions that the players can choose. We will give actions lowercase names, like one or testify . The players may or may not have the same set of actions available.
• A payoff function that gives the utility to each player for each combination of actions by all the players. For single-move games the payoff function can be represented by a matrix, a
representation known as the strategic form (also called normal form). The payoff matrix for two-finger Morra is as follows:
O: one O: two
E: one E = +2, O = −2 E = −3, O = +3
E: two E = −3, O = +3 E = +4, O = −4
For example, the lower-right corner shows that when player O chooses action two and E also chooses two, the payoff is +4 for E and −4 for O.
Each player in a game must adopt and then execute a strategy (which is the name used in game theory for a policy). A pure strategy is a deterministic policy; for a single-move game, a pure strategy
is just a single action. For many games an agent can do better with a mixed strategy, which is a randomized policy that selects actions according to a probability distribution. The mixed strategy
that chooses action a with probability p and action b otherwise is written [p: a; (1 − p): b]. For example, a mixed strategy for two-finger Morra might be [0.5: one ; 0.5: two]. A strategy profile is
an assignment of a strategy to each player; given the strategy profile, the game’s outcome is a numeric value for each player.
A solution to a game is a strategy profile in which each player adopts a rational strategy. We will see that the most important issue in game theory is to define what “rational” means when each agent
chooses only part of the strategy profile that determines the outcome. It is important to realize that outcomes are actual results of playing a game, while solutions are theoretical constructs used
to analyze a game. We will see that some games have a solution only in mixed strategies. But that does not mean that a player must literally be adopting a mixed strategy to be rational.
Consider the following story: Two alleged burglars, Alice and Bob, are caught red-handed near the scene of a burglary and are interrogated separately. A prosecutor offers each a deal: if you testify
against your partner as the leader of a burglary ring, you’ll go free for being the cooperative one, while your partner will serve 10 years in prison. However, if you both testify against each other,
you’ll both get 5 years. Alice and Bob also know that if both refuse to testify they will serve only 1 year each for the lesser charge of possessing stolen property. Now Alice and Bob face the
so-called prisoner’s dilemma: should they testify or refuse? Being rational agents, Alice and Bob each want to maximize their own expected utility. Let’s assume that Alice is callously unconcerned
about her partner’s fate, so her utility decreases in proportion to the number of years she will spend in prison, regardless of what happens to Bob. Bob feels exactly the same way. To help reach a
rational decision, they both construct the following payoff matrix:
Alice:testify Alice:refuse
Bob:testify A = −5, B = −5 A = −10, B = 0
Bob:refuse A = 0, B = −10 A = −1, B = −1
Alice analyzes the payoff matrix as follows: “Suppose Bob testifies. Then I get 5 years if I testify and 10 years if I don’t, so in that case testifying is better. On the other hand, if Bob refuses,
then I get 0 years if I testify and 1 year if I refuse, so in that case as well testifying is better. So in either case, it’s better for me to testify, so that’s what I must do.”
Alice has discovered that testify is a dominant strategy for the game. We say that a strategy s for player p strongly dominates strategy s ′ if the outcome for s is better for p than the outcome for
s′, for every choice of strategies by the other player(s). Strategy s weakly dominates s′ if s is better than s′ on at least one strategy profile and no worse on any other.
A dominant strategy is a strategy that dominates all others. It is irrational to play a dominated strategy, and irrational not to play a dominant strategy if one exists. Being rational, Alice chooses
the dominant strategy. We need just a bit more terminology: we say that an outcome is Pareto optimal5 if there is no other outcome that all players would prefer. An outcome is Pareto dominated by
another outcome if all players would prefer the other outcome.
If Alice is clever as well as rational, she will continue to reason as follows: Bob’s dominant strategy is also to testify. Therefore, he will testify and we will both get five years. When each
player has a dominant strategy, the combination of those strategies is called a dominant strategy equilibrium. In general, a strategy profile forms an equilibrium if no player can benefit by
switching strategies, given that every other player sticks with the same
5 Pareto optimality is named after the economist Vilfredo Pareto (1848–1923).
strategy. An equilibrium is essentially a local optimum in the space of policies; it is the top of a peak that slopes downward along every dimension, where a dimension corresponds to a player’s
strategy choices.
The mathematician John Nash (1928–) proved that every game has at least one equilibrium. The general concept of equilibrium is now called Nash equilibrium in his honor. Clearly, a dominant strategy
equilibrium is a Nash equilibrium (Exercise 17.16), but some games have Nash equilibria but no dominant strategies. The dilemma in the prisoner’s dilemma is that the equilibrium outcome is worse for
both players than the outcome they would get if they both refused to testify. In other words, (testify , testify) is Pareto dominated by the (-1, -1) outcome of (refuse, refuse). Is there any way for
Alice and Bob to arrive at the (-1, -1) outcome? It is certainly an allowable option for both of them to refuse to testify, but it is hard to see how rational agents can get there, given the
definition of the game. Either player contemplating playing refuse will realize that he or she would do better by playing testify . That is the attractive power of an equilibrium point. Game
theorists agree that being a Nash equilibrium is a necessary condition for being a solution—although they disagree whether it is a sufficient condition.
It is easy enough to get to the (refuse, refuse) solution if we modify the game. For example, we could change to a repeated game in which the players know that they will meet again. Or the agents
might have moral beliefs that encourage cooperation and fairness. That means they have a different utility function, necessitating a different payoff matrix, making it a different game. We will see
later that agents with limited computational powers, rather than the ability to reason absolutely rationally, can reach non-equilibrium outcomes, as can an agent that knows that the other agent has
limited rationality. In each case, we are considering a different game than the one described by the payoff matrix above.
Now let’s look at a game that has no dominant strategy. Acme, a video game console manufacturer, has to decide whether its next game machine will use Blu-ray discs or DVDs. Meanwhile, the video game
software producer Best needs to decide whether to produce its next game on Blu-ray or DVD. The profits for both will be positive if they agree and negative if they disagree, as shown in the following
payoff matrix:
Acme:bluray Acme:dvd
Best :bluray A = +9, B = +9 A = −4, B = −1
Best :dvd A = −3, B = −1 A = +5, B = +5
There is no dominant strategy equilibrium for this game, but there are two Nash equilibria: (bluray, bluray) and (dvd, dvd). We know these are Nash equilibria because if either player unilaterally
moves to a different strategy, that player will be worse off. Now the agents have a problem: there are multiple acceptable solutions, but if each agent aims for a different solution, then both agents
will suffer. How can they agree on a solution? One answer is that both should choose the Pareto-optimal solution (bluray, bluray); that is, we can restrict the definition of “solution” to the unique
Pareto-optimal Nash equilibrium provided that one exists. Every game has at least one Pareto-optimal solution, but a game might have several, or they might not be equilibrium points. For example, if
(bluray, bluray) had payoff (5, 5), then there would be two equal Pareto-optimal equilibrium points. To choose between them the agents can either guess or communicate, which can be done either by
establishing a convention that orders the solutions before the game begins or by negotiating to reach a mutually beneficial solution during the game (which would mean including communicative actions
as part of a sequential game). Communication thus arises in game theory for exactly the same reasons that it arose in multiagent planning in Section 11.4. Games in which players need to communicate
like this are called coordination games.
A game can have more than one Nash equilibrium; how do we know that every game must have at least one? Some games have no pure-strategy Nash equilibria. Consider, for example, any pure-strategy
profile for two-finger Morra (page 666). If the total number of fingers is even, then O will want to switch; on the other hand (so to speak), if the total is odd, then E will want to switch.
Therefore, no pure strategy profile can be an equilibrium and we must look to mixed strategies instead.
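The brute-force check for pure-strategy Nash equilibria is simple enough to sketch; the payoff dictionaries below just transcribe the Morra and Acme/Best matrices given earlier, with the row player's utility listed first in each tuple.

```python
def pure_nash_equilibria(row_actions, col_actions, payoff):
    """payoff[(r, c)] = (row player's utility, column player's utility)."""
    eq = []
    for r in row_actions:
        for c in col_actions:
            u_r, u_c = payoff[(r, c)]
            if (all(payoff[(r2, c)][0] <= u_r for r2 in row_actions) and
                    all(payoff[(r, c2)][1] <= u_c for c2 in col_actions)):
                eq.append((r, c))            # no player gains by a unilateral switch
    return eq

morra = {('one', 'one'): (2, -2), ('one', 'two'): (-3, 3),            # row = E, column = O
         ('two', 'one'): (-3, 3), ('two', 'two'): (4, -4)}
video = {('bluray', 'bluray'): (9, 9), ('bluray', 'dvd'): (-1, -4),   # row = Best, column = Acme
         ('dvd', 'bluray'): (-1, -3), ('dvd', 'dvd'): (5, 5)}

print(pure_nash_equilibria(['one', 'two'], ['one', 'two'], morra))        # []: none exist
print(pure_nash_equilibria(['bluray', 'dvd'], ['bluray', 'dvd'], video))  # the two agreement profiles
```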
But which mixed strategy? In 1928, von Neumann developed a method for finding the optimal mixed strategy for two-player, zero-sum games—games in which the sum of the payoffs is always zero.6 Clearly,
Morra is such a game. For two-player, zero-sum games, we know that the payoffs are equal and opposite, so we need consider the payoffs of only one player, who will be the maximizer (just as in
Chapter 5). For Morra, we pick the even player E to be the maximizer, so we can define the payoff matrix by the values UE(e, o)—the payoff to E if E does e and O does o. (For convenience we call
player E “her” and O “him.”) Von Neumann’s method is called the maximin technique, and it works as follows:
• Suppose we change the rules as follows: first E picks her strategy and reveals it to O. Then O picks his strategy, with knowledge of E’s strategy. Finally, we evaluate the expected payoff of the
game based on the chosen strategies. This gives us a turn-taking game to which we can apply the standard minimax algorithm from Chapter 5. Let’s suppose this gives an outcome UE,O. Clearly, this
game favors O, so the true utility U of the original game (from E’s point of view) is at least UE,O. For example, if we just look at pure strategies, the minimax game tree has a root value of −3
(see Figure 17.12(a)), so we know that U ≥ −3.
• Now suppose we change the rules to force O to reveal his strategy first, followed by E. Then the minimax value of this game is UO,E , and because this game favors E we know that U is at most UO,E
. With pure strategies, the value is +2 (see Figure 17.12(b)), so we know U ≤ +2.
Combining these two arguments, we see that the true utility U of the solution to the original game must satisfy UE,O ≤ U ≤ UO,E, or in this case, −3 ≤ U ≤ 2.
To pinpoint the value of U , we need to turn our analysis to mixed strategies. First, observe the following: once the first player has revealed his or her strategy, the second player might as well
choose a pure strategy. The reason is simple: if the second player plays a mixed strategy, [p: one; (1 − p): two], its expected utility is a linear combination (p · uone + (1 − p) · utwo) of
6 or a constant—see page 162.
Figure 17.12 (a) and (b): Minimax game trees for two-finger Morra if the players take turns playing pure strategies. (c) and (d): Parameterized game trees where the first player plays a mixed
strategy. The payoffs depend on the probability parameter (p or q) in the mixed strategy. (e) and (f): For any particular value of the probability parameter, the second player will choose the
“better” of the two actions, so the value of the first player’s mixed strategy is given by the heavy lines. The first player will choose the probability parameter for the mixed strategy at the
intersection point.
the utilities of the pure strategies, uone and utwo . This linear combination can never be better than the better of uone and utwo , so the second player can just choose the better one.
With this observation in mind, the minimax trees can be thought of as having infinitely many branches at the root, corresponding to the infinitely many mixed strategies the first player can choose.
Each of these leads to a node with two branches corresponding to the pure strategies for the second player. We can depict these infinite trees finitely by having one “parameterized” choice at the root:
• If E chooses first, the situation is as shown in Figure 17.12(c). E chooses the strategy [p: one; (1−p): two ] at the root, and then O chooses a pure strategy (and hence a move) given the value
of p. If O chooses one , the expected payoff (to E) is 2p−3(1−p)= 5p−3; if O chooses two, the expected payoff is −3p + 4(1 − p)= 4 − 7p. We can draw these two payoffs as straight lines on a
graph, where p ranges from 0 to 1 on the x-axis, as shown in Figure 17.12(e). O, the minimizer, will always choose the lower of the two lines, as shown by the heavy lines in the figure.
Therefore, the best that E can do at the root is to choose p to be at the intersection point, which is where
5p− 3 = 4− 7p ⇒ p = 7/12 .
The utility for E at this point is UE,O = − 1/12.
• If O moves first, the situation is as shown in Figure 17.12(d). O chooses the strategy [q: one; (1 − q): two] at the root, and then E chooses a move given the value of q. The payoffs are 2q − 3(1 − q) = 5q − 3 and −3q + 4(1 − q) = 4 − 7q.7 Again, Figure 17.12(f) shows that the best O can do at the root is to choose the intersection point:
5q − 3 = 4− 7q ⇒ q = 7/12 .
The utility for E at this point is UO,E = − 1/12.
Now we know that the true utility of the original game lies between −1/12 and −1/12, that is, it is exactly −1/12! (The moral is that it is better to be O than E if you are playing this game.)
Furthermore, the true utility is attained by the mixed strategy [7/12: one ; 5/12: two], which should be played by both players. This strategy is called the maximin equilibrium of the game, and is a
Nash equilibrium. Note that each component strategy in an equilibrium mixed strategy has the same expected utility. In this case, both one and two have the same expected utility, −1/12, as the mixed
strategy itself.
Our result for two-finger Morra is an example of the general result by von Neumann: every two-player zero-sum game has a maximin equilibrium when you allow mixed strategies. Furthermore, every Nash
equilibrium in a zero-sum game is a maximin for both players. A player who adopts the maximin strategy has two guarantees: First, no other strategy can do better against an opponent who plays well
(although some other strategies might be better at exploiting an opponent who makes irrational mistakes). Second, the player continues to do just as well even if the strategy is revealed to the opponent.
The general algorithm for finding maximin equilibria in zero-sum games is somewhat more involved than Figures 17.12(e) and (f) might suggest. When there are n possible actions, a mixed strategy is a
point in n-dimensional space and the lines become hyperplanes. It’s also possible for some pure strategies for the second player to be dominated by others, so that they are not optimal against any
strategy for the first player. After removing all such strategies (which might have to be done repeatedly), the optimal choice at the root is the
7 It is a coincidence that these equations are the same as those for p; the coincidence arises because UE(one, two) = UE(two, one) = − 3. This also explains why the optimal strategy is the same for
both players.
highest (or lowest) intersection point of the remaining hyperplanes. Finding this choice is an example of a linear programming problem: maximizing an objective function subject to linear constraints.
Such problems can be solved by standard techniques in time polynomial in the number of actions (and in the number of bits used to specify the reward function, if you want to get technical).
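As a sketch of that linear-programming formulation (here using scipy.optimize.linprog, which is an implementation choice rather than anything from the text): choose the maximizer's mixed strategy p so that the worst-case expected payoff v is as large as possible.

```python
import numpy as np
from scipy.optimize import linprog

def maximin(U):
    """U[i][j] = payoff to the maximizer when she plays i and the opponent plays j."""
    U = np.asarray(U, dtype=float)
    n, m = U.shape
    c = np.zeros(n + 1); c[-1] = -1.0                 # variables p_0..p_{n-1}, v; minimize -v
    A_ub = np.hstack([-U.T, np.ones((m, 1))])         # for each opponent action j: v - sum_i p_i U[i][j] <= 0
    b_ub = np.zeros(m)
    A_eq = np.array([[1.0] * n + [0.0]]); b_eq = [1.0]   # probabilities sum to 1
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]

strategy, value = maximin([[2, -3], [-3, 4]])   # E's payoffs in two-finger Morra
print(strategy, value)                          # approximately [7/12, 5/12] and -1/12
```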
The question remains, what should a rational agent actually do in playing a single game of Morra? The rational agent will have derived the fact that [7/12: one; 5/12: two] is the maximin equilibrium
strategy, and will assume that this is mutual knowledge with a rational opponent. The agent could use a 12-sided die or a random number generator to pick randomly according to this mixed strategy, in
which case the expected payoff would be -1/12 for E. Or the agent could just decide to play one , or two. In either case, the expected payoff remains -1/12 for E. Curiously, unilaterally choosing a
particular action does not harm one’s expected payoff, but allowing the other agent to know that one has made such a unilateral decision does affect the expected payoff, because then the opponent can
adjust his strategy accordingly.
Finding equilibria in non-zero-sum games is somewhat more complicated. The general approach has two steps: (1) Enumerate all possible subsets of actions that might form mixed strategies. For example,
first try all strategy profiles where each player uses a single action, then those where each player uses either one or two actions, and so on. This is exponential in the number of actions, and so
only applies to relatively small games. (2) For each strategy profile enumerated in (1), check to see if it is an equilibrium. This is done by solving a set of equations and inequalities that are
similar to the ones used in the zero-sum case. For two players these equations are linear and can be solved with basic linear programming techniques, but for three or more players they are nonlinear
and may be very difficult to solve.
Repeated games
So far we have looked only at games that last a single move. The simplest kind of multiple-move game is the repeated game, in which players face the same choice repeatedly, but each time with
knowledge of the history of all players’ previous choices. A strategy profile for a repeated game specifies an action choice for each player at each time step for every possible history of previous
choices. As with MDPs, payoffs are additive over time.
Let’s consider the repeated version of the prisoner’s dilemma. Will Alice and Bob work together and refuse to testify, knowing they will meet again? The answer depends on the details of the
engagement. For example, suppose Alice and Bob know that they must play exactly 100 rounds of prisoner’s dilemma. Then they both know that the 100th round will not be a repeated game—that is, its
outcome can have no effect on future rounds—and therefore they will both choose the dominant strategy, testify , in that round. But once the 100th round is determined, the 99th round can have no
effect on subsequent rounds, so it too will have a dominant strategy equilibrium at (testify, testify). By induction, both players will choose testify on every round, earning a total jail sentence of
500 years each.
We can get different solutions by changing the rules of the interaction. For example, suppose that after each round there is a 99% chance that the players will meet again. Then the expected number of
rounds is still 100, but neither player knows for sure which round
will be the last. Under these conditions, more cooperative behavior is possible. For example, one equilibrium strategy is for each player to refuse unless the other player has ever played testify .
This strategy could be called perpetual punishment. Suppose both players have adopted this strategy, and this is mutual knowledge. Then as long as neither player has played testify , then at any
point in time the expected future total payoff for each player is
Σt≥0 0.99^t · (−1) = −100 .
A player who deviates and plays testify gains 0 rather than −1 on that move, but from then on both players play testify, so the expected future total payoff drops to
0 + Σt≥1 0.99^t · (−5) = −495 .
Therefore, at every step, there is no incentive to deviate from (refuse, refuse). Perpetual punishment is the “mutually assured destruction” strategy of the prisoner’s dilemma: once either player
decides to testify , it ensures that both players suffer a great deal. But it works as a deterrent only if the other player believes you have adopted this strategy—or at least that you might have
adopted it.
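The arithmetic behind those two expected totals is easy to verify directly (a long truncated sum stands in for the infinite series):

```python
# Expected future totals with a 0.99 chance of another round after each round
cooperate_forever = sum(0.99 ** t * -1 for t in range(10000))        # ~ -100
deviate_once      = 0 + sum(0.99 ** t * -5 for t in range(1, 10000)) # ~ -495
print(round(cooperate_forever, 1), round(deviate_once, 1))
```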
Other strategies are more forgiving. The most famous, called tit-for-tat, calls for starting with refuse and then echoing the other player’s previous move on all subsequent moves. So Alice would
refuse as long as Bob refuses and would testify the move after Bob testified, but would go back to refusing if Bob did. Although very simple, this strategy has proven to be highly robust and
effective against a wide variety of strategies.
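A tiny simulation makes the behavior of tit-for-tat concrete; the payoff table and strategy functions below are illustrative, using the prisoner's dilemma payoffs from earlier.

```python
PAYOFF = {('refuse', 'refuse'): (-1, -1), ('refuse', 'testify'): (-10, 0),
          ('testify', 'refuse'): (0, -10), ('testify', 'testify'): (-5, -5)}

def tit_for_tat(my_history, their_history):
    return 'refuse' if not their_history else their_history[-1]

def always_testify(my_history, their_history):
    return 'testify'

def play(strategy_a, strategy_b, rounds=100):
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(ha, hb), strategy_b(hb, ha)
        ua, ub = PAYOFF[(a, b)]
        ha.append(a); hb.append(b); score_a += ua; score_b += ub
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (-100, -100): sustained mutual cooperation
print(play(tit_for_tat, always_testify))   # (-505, -495): exploited only on the first round
```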
We can also get different solutions by changing the agents, rather than changing the rules of engagement. Suppose the agents are finite-state machines with n states and they are playing a game with m
> n total steps. The agents are thus incapable of representing the number of remaining steps, and must treat it as an unknown. Therefore, they cannot do the induction, and are free to arrive at the
more favorable (refuse, refuse) equilibrium. In this case, ignorance is bliss—or rather, having your opponent believe that you are ignorant is bliss. Your success in these repeated games depends on
the other player’s perception of you as a bully or a simpleton, and not on your actual characteristics.
Sequential games
In the general case, a game consists of a sequence of turns that need not be all the same. Such games are best represented by a game tree, which game theorists call the extensive form. The tree
includes all the same information we saw in Section 5.1: an initial state S0, a function PLAYER(s) that tells which player has the move, a function ACTIONS(s) enumerating the possible actions, a
function RESULT(s, a) that defines the transition to a new state, and a partial function UTILITY(s, p), which is defined only on terminal states, to give the payoff for each player.
To represent stochastic games, such as backgammon, we add a distinguished player, chance, that can take random actions. Chance’s “strategy” is part of the definition of the game, specified as a
probability distribution over actions (the other players get to choose their own strategy). To represent games with nondeterministic actions, such as billiards, we break the action into two pieces:
the player’s action itself has a deterministic result, and then chance has a turn to react to the action in its own capricious way. To represent simultaneous moves, as in the prisoner’s dilemma or
two-finger Morra, we impose an arbitrary order on the players, but we have the option of asserting that the earlier player’s actions are not observable to the subsequent players: e.g., Alice must
choose refuse or testify first, then Bob chooses, but Bob does not know what choice Alice made at that time (we can also represent the fact that the move is revealed later). However, we assume the
players always remember all their own previous actions; this assumption is called perfect recall.
The key idea of extensive form that sets it apart from the game trees of Chapter 5 is the representation of partial observability. We saw in Section 5.6 that a player in a partially observable game
such as Kriegspiel can create a game tree over the space of belief states. With that tree, we saw that in some cases a player can find a sequence of moves (a strategy) that leads to a forced
checkmate regardless of what actual state we started in, and regardless of what strategy the opponent uses. However, the techniques of Chapter 5 could not tell a player what to do when there is no
guaranteed checkmate. If the player’s best strategy depends on the opponent’s strategy and vice versa, then minimax (or alpha–beta) by itself cannot find a solution. The extensive form does allow us
to find solutions because it represents the belief states (game theorists call them information sets) of all players at once. From that representation we can find equilibrium solutions, just as we
did with normal-form games. As a simple example of a sequential game, place two agents in the 4× 3 world of Figure 17.1 and have them move simultaneously until one agent reaches an exit square, and
gets the payoff for that square. If we specify that no movement occurs when the two agents try to move into the same square simultaneously (a common problem at many traffic intersections), then
certain pure strategies can get stuck forever. Thus, agents need a mixed strategy to perform well in this game: randomly choose between moving ahead and staying put. This is exactly what is done to
resolve packet collisions in Ethernet networks.
Next we’ll consider a very simple variant of poker. The deck has only four cards, two aces and two kings. One card is dealt to each player. The first player then has the option to raise the stakes of
the game from 1 point to 2, or to check. If player 1 checks, the game is over. If he raises, then player 2 has the option to call, accepting that the game is worth 2 points, or fold, conceding the 1
point. If the game does not end with a fold, then the payoff depends on the cards: it is zero for both players if they have the same card; otherwise the player with the king pays the stakes to the
player with the ace.
The extensive-form tree for this game is shown in Figure 17.13. Nonterminal states are shown as circles, with the player to move inside the circle; player 0 is chance. Each action is depicted as an
arrow with a label, corresponding to a raise, check, call, or fold, or, for chance, the four possible deals (“AK” means that player 1 gets an ace and player 2 a king). Terminal states are rectangles
labeled by their payoff to player 1 and player 2. Information sets are shown as labeled dashed boxes; for example, I1,1 is the information set where it is player 1’s turn, and he knows he has an ace
(but does not know what player 2 has). In information set I2,1, it is player 2’s turn and she knows that she has an ace and that player 1 has raised,
but does not know what card player 1 has. (Due to the limits of two-dimensional paper, this information set is shown as two boxes rather than one.)
One way to solve an extensive game is to convert it to a normal-form game. Recall that the normal form is a matrix, each row of which is labeled with a pure strategy for player 1, and each column by
a pure strategy for player 2. In an extensive game a pure strategy for player i corresponds to an action for each information set involving that player. So in Figure 17.13, one pure strategy for
player 1 is “raise when in I1,1 (that is, when I have an ace), and check when in I1,2 (when I have a king).” In the payoff matrix below, this strategy is called rk. Similarly, strategy cf for player
2 means “call when I have an ace and fold when I have a king.” Since this is a zero-sum game, the matrix below gives only the payoff for player 1; player 2 always has the opposite payoff:
        2:cc    2:cf    2:ff    2:fc
1:rr     0      -1/6     1      7/6
1:kr    -1/3    -1/6    5/6     2/3
1:rk     1/3     0      1/6     1/2
1:kk     0       0       0       0
This game is so simple that it has two pure-strategy equilibria, shown in bold: cf for player 2 and rk or kk for player 1. But in general we can solve extensive games by converting to normal form and
then finding a solution (usually a mixed strategy) using standard linear programming methods. That works in theory. But if a player has I information sets and a actions per set, then that player will have a^I pure strategies. In other words, the size of the normal-form matrix is exponential in the number of information sets, so in practice the approach works only for very small game trees, on the order of a dozen states. A game like Texas hold’em poker has about 10^18 states, making this approach completely infeasible.
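The entries of the payoff matrix above can be recomputed directly from the rules of the simplified poker game; the sketch below enumerates the four possible deals and the players' responses (strategy strings follow the rk/cf naming used in the text).

```python
from fractions import Fraction as F

DEALS = {('A', 'A'): F(1, 6), ('A', 'K'): F(1, 3),   # (player 1's card, player 2's card)
         ('K', 'A'): F(1, 3), ('K', 'K'): F(1, 6)}

def showdown(c1, c2, stakes):
    return 0 if c1 == c2 else (stakes if c1 == 'A' else -stakes)

def expected_payoff(s1, s2):
    """s1 = player 1's actions (with ace, with king); s2 = player 2's (with ace, with king)."""
    total = F(0)
    for (c1, c2), prob in DEALS.items():
        a1 = s1[0] if c1 == 'A' else s1[1]
        if a1 == 'k':                        # player 1 checks: showdown for 1 point
            total += prob * showdown(c1, c2, 1)
        else:                                # player 1 raises: player 2 calls or folds
            a2 = s2[0] if c2 == 'A' else s2[1]
            total += prob * (1 if a2 == 'f' else showdown(c1, c2, 2))
    return total

for s1 in ('rr', 'kr', 'rk', 'kk'):
    print(s1, [str(expected_payoff(s1, s2)) for s2 in ('cc', 'cf', 'ff', 'fc')])
```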
What are the alternatives? In Chapter 5 we saw how alpha–beta search could handle games of perfect information with huge game trees by generating the tree incrementally, by pruning some branches, and
by heuristically evaluating nonterminal nodes. But that approach does not work well for games with imperfect information, for two reasons: first, it is harder to prune, because we need to consider
mixed strategies that combine multiple branches, not a pure strategy that always chooses the best branch. Second, it is harder to heuristically evaluate a nonterminal node, because we are dealing
with information sets, not individual states.
Koller et al. (1996) come to the rescue with an alternative representation of extensive games, called the sequence form, that is only linear in the size of the tree, rather than exponential. Rather
than represent strategies, it represents paths through the tree; the number of paths is equal to the number of terminal nodes. Standard linear programming methods can again be applied to this
representation. The resulting system can solve poker variants with 25,000 states in a minute or two. This is an exponential speedup over the normal-form approach, but still falls far short of handling full poker, with 10^18 states.
If we can’t handle 10^18 states, perhaps we can simplify the problem by changing the game to a simpler form. For example, if I hold an ace and am considering the possibility that the next card will give me a pair of aces, then I don’t care about the suit of the next card; any suit will do equally well. This suggests forming an abstraction of the game, one in which suits are ignored. The resulting game tree will be smaller by a factor of 4! = 24. Suppose I can solve this smaller game; how will the solution to that game relate to the original game? If no player is going for a flush (or
bluffing so), then the suits don’t matter to any player, and the solution for the abstraction will also be a solution for the original game. However, if any player is contemplating a flush, then the
abstraction will be only an approximate solution (but it is possible to compute bounds on the error).
There are many opportunities for abstraction. For example, at the point in a game where each player has two cards, if I hold a pair of queens, then the other players’ hands could be abstracted into
three classes: better (only a pair of kings or a pair of aces), same (pair of queens) or worse (everything else). However, this abstraction might be too coarse. A better abstraction would divide
worse into, say, medium pair (nines through jacks), low pair, and no pair. These examples are abstractions of states; it is also possible to abstract actions. For example, instead of having a bet action for each integer from 1 to 1000, we could restrict the bets to 10^0, 10^1, 10^2, and 10^3. Or we could cut out one of the rounds of betting altogether. We can also abstract over chance nodes, by considering only a subset of the possible deals. This is equivalent to the rollout technique used in Go programs. Putting all these abstractions together, we can reduce the 10^18 states of poker to 10^7 states, a size that can be solved with current techniques.
Poker programs based on this approach can easily defeat novice and some experienced human players, but are not yet at the level of master players. Part of the problem is that the solution these
programs approximate—the equilibrium solution—is optimal only against an opponent who also plays the equilibrium strategy. Against fallible human players it is important to be able to exploit an
opponent’s deviation from the equilibrium strategy. As Gautam Rao (aka “The Count”), the world’s leading online poker player, said (Billings et al., 2003), “You have a very strong program. Once you
add opponent modeling to it, it will kill everyone.” However, good models of human fallibility remain elusive.
In a sense, the extensive form is one of the most complete representations we have seen so far: it can handle partially observable, multiagent, stochastic, sequential, dynamic environments—most
of the hard cases from the list of environment properties on page 42. However, there are two limitations of game theory. First, it does not deal well with continuous states and actions (although
there have been some extensions to the continuous case; for example, the theory of Cournot competition uses game theory to solve problems where two companies choose prices for their products from a
continuous space). Second, game theory assumes the game is known. Parts of the game may be specified as unobservable to some of the players, but it must be known what parts are unobservable. In cases
in which the players learn the unknown structure of the game over time, the model begins to break down. Let’s examine each source of uncertainty, and whether each can be represented in game theory.
Actions: There is no easy way to represent a game where the players have to discover what actions are available. Consider the game between computer virus writers and security experts. Part of the
problem is anticipating what action the virus writers will try next.
Strategies: Game theory is very good at representing the idea that the other players’ strategies are initially unknown—as long as we assume all agents are rational. The theory itself does not say
what to do when the other players are less than fully rational. The notion of a Bayes–Nash equilibrium partially addresses this point: it is an equilibrium with respect to a player’s prior
probability distribution over the other players’ strategies—in other words, it expresses a player’s beliefs about the other players’ likely strategies.
Chance: If a game depends on the roll of a die, it is easy enough to model a chance node with uniform distribution over the outcomes. But what if it is possible that the die is unfair? We can
represent that with another chance node, higher up in the tree, with two branches for “die is fair” and “die is unfair,” such that the corresponding nodes in each branch are in the same information
set (that is, the players don’t know if the die is fair or not). And what if we suspect the other opponent does know? Then we add another chance node, with one branch representing the case where the
opponent does know, and one where he doesn’t.
Utilities: What if we don’t know our opponent’s utilities? Again, that can be modeled with a chance node, such that the other agent knows its own utilities in each branch, but we don’t. But what if
we don’t know our own utilities? For example, how do I know if it is rational to order the Chef’s salad if I don’t know how much I will like it? We can model that with yet another chance node
specifying an unobservable “intrinsic quality” of the salad.
Thus, we see that game theory is good at representing most sources of uncertainty—but at the cost of doubling the size of the tree every time we add another node; a habit which quickly leads to
intractably large trees. Because of these and other problems, game theory has been used primarily to analyze environments that are at equilibrium, rather than to control agents within an environment.
Next we shall see how it can help design environments.
In the previous section, we asked, “Given a game, what is a rational strategy?” In this section, we ask, “Given that agents pick rational strategies, what game should we design?” More specifically,
we would like to design a game whose solutions, consisting of each agent pursuing its own rational strategy, result in the maximization of some global utility function. This problem is called
mechanism design, or sometimes inverse game theory. Mechanism design is a staple of economics and political science. Capitalism 101 says that if everyone tries to get rich, the total wealth of
society will increase. But the examples we will discuss show that proper mechanism design is necessary to keep the invisible hand on track. For collections of agents, mechanism design allows us to
construct smart systems out of a collection of more limited systems—even uncooperative systems—in much the same way that teams of humans can achieve goals beyond the reach of any individual.
Examples of mechanism design include auctioning off cheap airline tickets, routing TCP packets between computers, deciding how medical interns will be assigned to hospitals, and deciding how robotic
soccer players will cooperate with their teammates. Mechanism design became more than an academic subject in the 1990s when several nations, faced with the problem of auctioning off licenses to
broadcast in various frequency bands, lost hundreds of millions of dollars in potential revenue as a result of poor mechanism design. Formally, a mechanism consists of (1) a language for describing
the set of allowable strategies that agents may adopt, (2) a distinguished agent, called the center, that collects reports of strategy choices from the agents in the game, and (3) an outcome rule,
known to all agents, that the center uses to determine the payoffs to each agent, given their strategy choices.
Let’s consider auctions first. An auction is a mechanism for selling some goods to members of a pool of bidders. For simplicity, we concentrate on auctions with a single item for sale. Each bidder i
has a utility value vi for having the item. In some cases, each bidder has a private value for the item. For example, the first item sold on eBay was a broken laser pointer, which sold for $14.83 to
a collector of broken laser pointers. Thus, we know that the collector has vi ≥ $14.83, but most other people would have vj ≪ $14.83. In other cases, such as auctioning drilling rights for an oil
tract, the item has a common value—the tract will produce some amount of money, X, and all bidders value a dollar equally—but there is uncertainty as to what the actual value of X is. Different
bidders have different information, and hence different estimates of the item’s true value. In either case, bidders end up with their own vi. Given vi, each bidder gets a chance, at the appropriate
time or times in the auction, to make a bid bi. The highest bid, bmax wins the item, but the price paid need not be bmax; that’s part of the mechanism design.
The best-known auction mechanism is the ascending-bid,8 or English auction, in which the center starts by asking for a minimum (or reserve) bid bmin. If some bidder is
8 The word “auction” comes from the Latin augere, to increase.
willing to pay that amount, the center then asks for bmin + d, for some increment d, and continues up from there. The auction ends when nobody is willing to bid anymore; then the last bidder wins the
item, paying the price he bid.
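To make the protocol concrete, here is a minimal simulation sketch; it is not from the text (Python, with invented bidder values), and it models each bidder as willing to keep bidding while the asked price stays below its private value:

def english_auction(values, b_min, d):
    """Simulate an ascending-bid (English) auction.
    values: dict of bidder -> private value v_i; b_min: reserve price; d: increment.
    Returns (winner, price), or (None, None) if nobody meets the reserve."""
    price = b_min
    willing = [b for b, v in values.items() if v >= price]
    if not willing:
        return None, None
    winner = willing[0]
    while True:
        # the center asks for price + d; any other bidder whose value covers it raises
        challengers = [b for b, v in values.items() if b != winner and v >= price + d]
        if not challengers:
            return winner, price          # the last bidder standing pays the price he bid
        winner, price = challengers[0], price + d

values = {"A": 120, "B": 95, "C": 70}
print(english_auction(values, b_min=50, d=5))   # ('A', 100): the highest-value bidder wins at about bo + d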
How do we know if this is a good mechanism? One goal is to maximize expected revenue for the seller. Another goal is to maximize a notion of global utility. These goals overlap to some extent,
because one aspect of maximizing global utility is to ensure that the winner of the auction is the agent who values the item the most (and thus is willing to pay the most). We say an auction is
efficient if the goods go to the agent who values them most.
The ascending-bid auction is usually both efficient and revenue maximizing, but if the reserve price is set too high, the bidder who values it most may not bid, and if the reserve is set too low, the
seller loses net revenue.
Probably the most important thing that an auction mechanism can do is to encourage a sufficient number of bidders to enter the game and discourage them from engaging in collusion. Collusion is an
unfair or illegal agreement by two or more bidders to manipulate prices. It can happen in secret backroom deals or tacitly, within the rules of the mechanism. For example, in 1999, Germany auctioned
ten blocks of cell-phone spectrum with a simultaneous auction (bids were taken on all ten blocks at the same time), using the rule that any bid must be a minimum of a 10% raise over the previous bid
on a block. There were only two credible bidders, and the first, Mannesman, entered the bid of 20 million deutschmark on blocks 1-5 and 18.18 million on blocks 6-10. Why 18.18M? One of T-Mobile’s
managers said they “interpreted Mannesman’s first bid as an offer.” Both parties could compute that a 10% raise on 18.18M is 19.99M; thus Mannesman’s bid was interpreted as saying “we can each get
half the blocks for 20M; let’s not spoil it by bidding the prices up higher.” And in fact T-Mobile bid 20M on blocks 6-10 and that was the end of the bidding. The German government got less than they
expected, because the two competitors were able to use the bidding mechanism to come to a tacit agreement on how not to compete. From the government’s point of view, a better result could have been
obtained by any of these changes to the mechanism: a higher reserve price; a sealed-bid first-price auction, so that the competitors could not communicate through their bids; or incentives to bring
in a third bidder. Perhaps the 10% rule was an error in mechanism design, because it facilitated the precise signaling from Mannesman to T-Mobile.
In general, both the seller and the global utility function benefit if there are more bidders, although global utility can suffer if you count the cost of wasted time of bidders that have no chance
of winning. One way to encourage more bidders is to make the mechanism easier for them. After all, if it requires too much research or computation on the part of the bidders, they may decide to take
their money elsewhere. So it is desirable that the bidders have a dominant strategy. Recall that “dominant” means that the strategy works against all other strategies, which in turn means that an
agent can adopt it without regard for the other strategies. An agent with a dominant strategy can just bid, without wasting time contemplating other agents’ possible strategies. A mechanism where
agents have a dominant strategy is called a strategy-proof mechanism. If, as is usually the case, that strategy involves the bidders revealing their true value, v~i~, then it is called a
truth-revealing, or truthful, auction; the term incentive compatible is also used. The revelation principle states that any mechanism can be transformed into an equivalent truth-revealing mechanism, so
part of mechanism design is finding these equivalent mechanisms.
It turns out that the ascending-bid auction has most of the desirable properties. The bidder with the highest value v~i~ gets the goods at a price of bo + d, where bo is the highest bid among all the
other agents and d is the auctioneer’s increment.9 Bidders have a simple dominant strategy: keep bidding as long as the current cost is below your v~i~. The mechanism is not quite truth-revealing,
because the winning bidder reveals only that his v~i~ ≥ bo + d; we have a lower bound on v~i~ but not an exact amount.
A disadvantage (from the point of view of the seller) of the ascending-bid auction is that it can discourage competition. Suppose that in a bid for cell-phone spectrum there is one advantaged company
that everyone agrees would be able to leverage existing customers and infrastructure, and thus can make a larger profit than anyone else. Potential competitors can see that they have no chance in an
ascending-bid auction, because the advantaged company can always bid higher. Thus, the competitors may not enter at all, and the advantaged company ends up winning at the reserve price.
Another negative property of the English auction is its high communication costs. Either the auction takes place in one room or all bidders have to have high-speed, secure communication lines; in
either case they have to have the time available to go through several rounds of bidding. An alternative mechanism, which requires much less communication, is the sealed-bid auction. Each bidder makes
a single bid and communicates it to the auctioneer, without the other bidders seeing it. With this mechanism, there is no longer a simple dominant strategy. If your value is v~i~ and you believe that
the maximum of all the other agents’ bids will be bo, then you should bid bo + ε, for some small ε, if that is less than v~i~. Thus, your bid depends on your estimation of the other agents’ bids,
requiring you to do more work. Also, note that the agent with the highest v~i~ might not win the auction. This is offset by the fact that the auction is more competitive, reducing the bias toward an
advantaged bidder.
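A small sketch (ours, in Python, with an invented estimate of bo) shows why a first-price bid has to be tuned to that estimate: bidding just above the guessed bo is the best of the bids tried, but only if the guess is right:

def first_price_utility(bid, value, bo):
    # sealed-bid first-price auction: pay your own bid if you win, nothing if you lose
    return value - bid if bid > bo else 0.0

value = 10.0
bo_guess = 7.0                                    # the bidder's estimate of the others' best bid
for b in (6.0, 7.5, 9.0, 10.0):
    print(b, first_price_utility(b, value, bo_guess))   # 7.5 (just above bo_guess) does best here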
A small change in the mechanism for sealed-bid auctions produces the sealed-bid second-price auction, also known as a Vickrey auction.10 In such auctions, the winner pays the price of the second-highest bid, bo, rather than paying his own bid. This simple modification completely eliminates the complex deliberations required for standard (or first-price) sealed-bid auctions, because the
dominant strategy is now simply to bid v~i~; the mechanism is truth-revealing. Note that the utility of agent i in terms of his bid b~i~, his value v~i~, and the best bid among the other agents, bo, is
u~i~ = (v~i~ − bo) if b~i~ > bo, and 0 otherwise.
To see that b~i~ = v~i~ is a dominant strategy, note that when (v~i~ − bo) is positive, any bid that wins the auction is optimal, and bidding v~i~ in particular wins the auction. On the other hand,
when (v~i~ − bo) is negative, any bid that loses the auction is optimal, and bidding v~i~ in
9 There is actually a small chance that the agent with highest v~i~ fails to get the goods, in the case in which bo < v~i~ < bo + d. The chance of this can be made arbitrarily small by decreasing the increment d.
10 Named after William Vickrey (1914–1996), who won the 1996 Nobel Prize in economics for this work and died of a heart attack three days later.
particular loses the auction. So bidding v~i~ is optimal for all possible values of bo, and in fact, v~i~ is the only bid that has this property. Because of its simplicity and the minimal computation
requirements for both seller and bidders, the Vickrey auction is widely used in constructing distributed AI systems. Also, Internet search engines conduct over a billion auctions a day to sell
advertisements along with their search results, and online auction sites handle $100 billion a year in goods, all using variants of the Vickrey auction. Note that the expected value to the seller is
bo, which is the same expected return as the limit of the English auction as the increment d goes to zero. This is actually a very general result: the revenue equivalence theorem states that, with a
few minor caveats, any auction mechanism where risk-neutral bidders have values v~i~ known only to themselves (but know a probability distribution from which those values are sampled), will yield the
same expected revenue. This principle means that the various mechanisms are not competing on the basis of revenue generation, but rather on other qualities.
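As a numerical sanity check on the dominant-strategy argument (our own sketch in Python; the values are arbitrary), the following loop compares the second-price utility of a truthful bid with several deviations for different values of bo:

def second_price_utility(bid, value, bo):
    # utility in a sealed-bid second-price auction: value - bo if the bid wins, else 0
    return value - bo if bid > bo else 0.0

value = 10.0
for bo in (4.0, 10.0, 16.0):                       # opponent's best bid below, at, and above our value
    truthful = second_price_utility(value, value, bo)
    others = [second_price_utility(b, value, bo) for b in (0.0, 5.0, 12.0, 20.0)]
    assert all(truthful >= u for u in others)      # no deviation ever does strictly better
    print(bo, truthful, others)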
Although the second-price auction is truth-revealing, it turns out that extending the idea to multiple goods and using a next-price auction is not truth-revealing. Many Internet search engines use a
mechanism where they auction k slots for ads on a page. The highest bidder wins the top spot, the second highest gets the second spot, and so on. Each winner pays the price bid by the next-lower
bidder, with the understanding that payment is made only if the searcher actually clicks on the ad. The top slots are considered more valuable because they are more likely to be noticed and clicked
on. Imagine that three bidders, b~1~, b~2~, and b~3~, have valuations for a click of v~1~ = 200, v~2~ = 180, and v~3~ = 100, and that k = 2 slots are available, where it is known that the top spot is clicked on 5% of the time and the bottom spot 2%. If all bidders bid truthfully, then b~1~ wins the top slot and pays 180, and has an expected return of (200 − 180) × 0.05 = 1. The second slot goes to b~2~. But b~1~ can see that if she were to bid anything in the range 101–179, she would concede the top slot to b~2~, win the second slot, and yield an expected return of (200 − 100) × 0.02 = 2. Thus, b~1~ can
double her expected return by bidding less than her true value in this case. In general, bidders in this multislot auction must spend a lot of energy analyzing the bids of others to determine their
best strategy; there is no simple dominant strategy. Aggarwal et al. (2006) show that there is a unique truthful auction mechanism for this multislot problem, in which the winner of slot j pays the
full price for slot j just for those additional clicks that are available at slot j and not at slot j + 1. The winner pays the price for the lower slot for the remaining clicks. In our example, b~1~
would bid 200 truthfully, and would pay 180 for the additional 0.05 − 0.02 = 0.03 clicks in the top slot, but would pay only the cost of the bottom slot, 100, for the remaining 0.02 clicks. Thus, the total return to b~1~ would be (200 − 180) × 0.03 + (200 − 100) × 0.02 = 2.6.
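The expected returns quoted in this example are easy to verify mechanically. The following sketch (Python, using only the numbers given above) computes b~1~'s return under truthful next-price bidding, under the shaded bid, and under the truthful rule of Aggarwal et al.:

ctr_top, ctr_bottom = 0.05, 0.02      # click-through rates of the two slots
v1, v2, v3 = 200, 180, 100            # per-click valuations of the three bidders

# next-price auction, all truthful: b1 takes the top slot and pays v2 per click
next_price_truthful = (v1 - v2) * ctr_top                      # 1.0

# b1 shades her bid into the range 101-179: she gets the bottom slot and pays v3 per click
next_price_shaded = (v1 - v3) * ctr_bottom                     # 2.0

# truthful mechanism of Aggarwal et al.: pay v2 only for the extra top-slot clicks,
# and v3 for the clicks the bottom slot would have delivered anyway
unique_truthful = (v1 - v2) * (ctr_top - ctr_bottom) + (v1 - v3) * ctr_bottom

print(next_price_truthful, next_price_shaded, unique_truthful)  # 1.0 2.0 2.6 (up to float rounding)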
Another example of where auctions can come into play within AI is when a collection of agents are deciding whether to cooperate on a joint plan. Hunsberger and Grosz (2000) show that this can be
accomplished efficiently with an auction in which the agents bid for roles in the joint plan.
Common goods
Now let’s consider another type of game, in which countries set their policy for controlling air pollution. Each country has a choice: they can reduce pollution at a cost of -10 points for
implementing the necessary changes, or they can continue to pollute, which gives them a net utility of -5 (in added health costs, etc.) and also contributes -1 points to every other country (because
the air is shared across countries). Clearly, the dominant strategy for each country is “continue to pollute,” but if there are 100 countries and each follows this policy, then each country gets a
total utility of -104, whereas if every country reduced pollution, they would each have a utility of -10. This situation is called the tragedy of the commons: if nobody has to pay for using a common
resource, then it tends to be exploited in a way that leads to a lower total utility for all agents. It is similar to the prisoner’s dilemma: there is another solution to the game that is better for
all parties, but there appears to be no way for rational agents to arrive at that solution.
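The payoffs in this example can be reproduced directly. The sketch below (Python, with the figures from the text) also shows why polluting is the dominant choice for a single country:

n = 100

all_pollute = -5 + -1 * (n - 1)          # every country pollutes: -104 each
all_reduce  = -10                        # every country reduces:  -10 each

# one country's options when the other 99 pollute anyway:
reduce_alone  = -10 + -1 * (n - 1)       # -109
pollute_along = -5  + -1 * (n - 1)       # -104, so polluting is better for the individual country
print(all_pollute, all_reduce, reduce_alone, pollute_along)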
The standard approach for dealing with the tragedy of the commons is to change the mechanism to one that charges each agent for using the commons. More generally, we need to ensure that all
externalities—effects on global utility that are not recognized in the individual agents’ transactions—are made explicit. Setting the prices correctly is the difficult part. In the limit, this
approach amounts to creating a mechanism in which each agent is effectively required to maximize global utility, but can do so by making a local decision. For this example, a carbon tax would be an
example of a mechanism that charges for use of the commons in a way that, if implemented well, maximizes global utility.
As a final example, consider the problem of allocating some common goods. Suppose a city decides it wants to install some free wireless Internet transceivers. However, the number of transceivers they
can afford is less than the number of neighborhoods that want them. The city wants to allocate the goods efficiently, to the neighborhoods that would value them the most. That is, they want to
maximize the global utility V = ∑~i~ v~i~. The problem is that if they just ask each neighborhood council “how much do you value this free gift?” they would all have an incentive to lie, and report a high value. It turns out there is a mechanism, known as the Vickrey-Clarke-Groves, or VCG, mechanism, that makes it a dominant strategy for each agent to report its true utility and that achieves an efficient allocation of the goods. The trick is that each agent pays a tax equivalent to the loss in global utility that occurs because of the agent’s presence in the game. The mechanism works as follows:
1. The center asks each agent to report its value for receiving an item. Call this b~i~.
2. The center allocates the goods to a subset of the bidders. We call this subset A, and use the notation b~i~(A) to mean the result to i under this allocation: b~i~ if i is in A (that is, i is a winner), and 0 otherwise. The center chooses A to maximize total reported utility B = ∑~i~ b~i~(A).
3. The center calculates (for each i) the sum of the reported utilities for all the winners except i. We use the notation B−i = ∑~j ≠ i~ b~j~(A). The center also computes (for each i) the allocation that would maximize total global utility if i were not in the game; call that sum W−i.
4. Each agent i pays a tax equal to W−i − B−i.
In this example, the VCG rule means that each winner would pay a tax equal to the highest reported value among the losers. That is, if I report my value as 5, and that causes someone with value 2 to
miss out on an allocation, then I pay a tax of 2. All winners should be happy because they pay a tax that is less than their value, and all losers are as happy as they can be, because they value the
goods less than the required tax.
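A compact way to see the four steps at work is to run them on a toy version of the transceiver example. The sketch below is ours, not the text's: Python, identical items, and invented neighborhood reports; it allocates k items to the k highest reports and charges each winner the tax W−i − B−i.

def vcg_allocate(reports, k):
    """Allocate k identical items with the VCG mechanism.
    reports: dict agent -> reported value b_i.  Returns (winners, taxes)."""
    def best_total(agents, slots):
        # total reported utility of the best allocation restricted to `agents`
        return sum(sorted((reports[a] for a in agents), reverse=True)[:slots])

    everyone = list(reports)
    winners = sorted(everyone, key=reports.get, reverse=True)[:k]   # step 2: choose A to maximize B
    B = best_total(everyone, k)
    taxes = {}
    for i in winners:
        B_minus_i = B - reports[i]                                  # step 3: winners' reports, excluding i
        W_minus_i = best_total([a for a in everyone if a != i], k)  # step 3: best allocation without i
        taxes[i] = W_minus_i - B_minus_i                            # step 4: the VCG tax
    return winners, taxes

reports = {"north": 5, "south": 2, "east": 4, "west": 1}            # invented neighborhood reports
print(vcg_allocate(reports, k=2))      # north and east win; each pays 2, the best losing report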
Why is it that this mechanism is truth-revealing? First, consider the payoff to agent i, which is the value of getting an item, minus the tax:
v~i~(A) − (W−i − B−i)
Summary
This chapter shows how to use knowledge about the world to make decisions even when the outcomes of an action are uncertain and the rewards for acting might not be reaped until many actions have
passed. The main points are as follows:
• Sequential decision problems in uncertain environments, also called Markov decision processes, or MDPs, are defined by a transition model specifying the probabilistic outcomes of actions and a
reward function specifying the reward in each state.
• The utility of a state sequence is the sum of all the rewards over the sequence, possibly discounted over time. The solution of an MDP is a policy that associates a decision with every state that
the agent might reach. An optimal policy maximizes the utility of the state sequences encountered when it is executed.
• The utility of a state is the expected utility of the state sequences encountered when an optimal policy is executed, starting in that state. The value iteration algorithm for solving MDPs works by iteratively solving the equations relating the utility of each state to those of its neighbors (a minimal code sketch of this update follows the summary).
• Policy iteration alternates between calculating the utilities of states under the current policy and improving the current policy with respect to the current utilities.
• Partially observable MDPs, or POMDPs, are much more difficult to solve than are MDPs. They can be solved by conversion to an MDP in the continuous space of belief states; both value iteration and
policy iteration algorithms have been devised. Optimal behavior in POMDPs includes information gathering to reduce uncertainty and therefore make better decisions in the future.
• A decision-theoretic agent can be constructed for POMDP environments. The agent uses a dynamic decision network to represent the transition and sensor models, to update its belief state, and to
project forward possible action sequences.
• Game theory describes rational behavior for agents in situations in which multiple agents interact simultaneously. Solutions of games are Nash equilibria—strategy profiles in which no agent has
an incentive to deviate from the specified strategy.
• Mechanism design can be used to set the rules by which agents will interact, in order to maximize some global utility through the operation of individually rational agents. Sometimes, mechanisms
exist that achieve this goal without requiring each agent to consider the choices made by other agents.
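The value iteration update mentioned above can be sketched in a few lines. This is our own minimal sketch (Python), not the chapter's pseudocode; the transition model T, the reward function R, and the actions function are assumed to be supplied by the caller.

def value_iteration(states, actions, T, R, gamma=0.99, eps=1e-6):
    """T[s][a] is a list of (probability, next_state) pairs; R[s] is the reward in s;
    actions(s) returns the actions available in s.  Returns a utility estimate U."""
    U = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max((sum(p * U[s2] for p, s2 in T[s][a]) for a in actions(s)),
                       default=0.0)                     # terminal states have no actions
            u_new = R[s] + gamma * best
            delta, U[s] = max(delta, abs(u_new - U[s])), u_new
        if delta < eps * (1 - gamma) / gamma:           # standard stopping criterion
            return U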
We shall return to the world of MDPs and POMDPs in Chapter 21, when we study reinforcement learning methods that allow an agent to improve its behavior from experience in sequential, uncertain environments.
Bibliographical and Historical Notes
Richard Bellman developed the ideas underlying the modern approach to sequential decision problems while working at the RAND Corporation beginning in 1949. According to his autobiography (Bellman,
1984), he coined the exciting term “dynamic programming” to hide from a research-phobic Secretary of Defense, Charles Wilson, the fact that his group was doing mathematics. (This cannot be strictly
true, because his first paper using the term (Bellman, 1952) appeared before Wilson became Secretary of Defense in 1953.) Bellman’s book, Dynamic Programming (1957), gave the new field a solid
foundation and introduced the basic algorithmic approaches. Ron Howard’s Ph.D. thesis (1960) introduced policy iteration and the idea of average reward for solving infinite-horizon problems. Several
additional results were introduced by Bellman and Dreyfus (1962). Modified policy iteration is due to van Nunen (1976) and Puterman and Shin (1978). Asynchronous policy iteration was analyzed by
Williams and Baird (1993), who also proved the policy loss bound in Equation (17.9). The analysis of discounting in terms of stationary preferences is due to Koopmans (1972). The texts by Bertsekas
(1987), Puterman (1994), and Bertsekas and Tsitsiklis (1996) provide a rigorous introduction to sequential decision problems. Papadimitriou and Tsitsiklis (1987) describe results on the computational
complexity of MDPs.
Seminal work by Sutton (1988) and Watkins (1989) on reinforcement learning methods for solving MDPs played a significant role in introducing MDPs into the AI community, as did the later survey by
Barto et al. (1995). (Earlier work by Werbos (1977) contained many similar ideas, but was not taken up to the same extent.) The connection between MDPs and AI planning problems was made first by Sven
Koenig (1991), who showed how probabilistic STRIPS operators provide a compact representation for transition models (see also Wellman, 1990b). Work by Dean et al. (1993) and Tash and Russell (1994)
attempted to overcome the combinatorics of large state spaces by using a limited search horizon and abstract states. Heuristics based on the value of information can be used to select areas of the
state space where a local expansion of the horizon will yield a significant improvement in decision quality. Agents using this approach can tailor their effort to handle time pressure and generate
some interesting behaviors such as using familiar “beaten paths” to find their way around the state space quickly without having to recompute optimal decisions at each point.
As one might expect, AI researchers have pushed MDPs in the direction of more expressive representations that can accommodate much larger problems than the traditional atomic representations based on
transition matrices. The use of a dynamic Bayesian network to represent transition models was an obvious idea, but work on factored MDPs (Boutilier et al., 2000; Koller and Parr, 2000;
Guestrin et al., 2003b) extends the idea to structured representations of the value function with provable improvements in complexity. Relational MDPs (Boutilier et al., 2001; Guestrin et al., 2003a)
go one step further, using structured representations to handle domains with many related objects. The observation that a partially observable MDP can be transformed into a regular MDP over belief
states is due to Astrom (1965) and Aoki (1965). The first complete algorithm for the exact solution of POMDPs—essentially the value iteration algorithm presented in this chapter—was proposed by
Edward Sondik (1971) in his Ph.D. thesis. (A later journal paper by Smallwood and Sondik (1973) contains some errors, but is more accessible.) Lovejoy (1991) surveyed the first twenty-five years of
POMDP research, reaching somewhat pessimistic conclusions about the feasibility of solving large problems. The first significant contribution within AI was the Witness algorithm (Cassandra et al.,
1994; Kaelbling et al., 1998), an improved version of POMDP value iteration. Other algorithms soon followed, including an approach due to Hansen (1998) that constructs a policy incrementally in the
form of a finite-state automaton. In this policy representation, the belief state corresponds directly to a particular state in the automaton. More recent work in AI has focused on point-based value
iteration methods that, at each iteration, generate conditional plans and α-vectors for a finite set of belief states rather than for the entire belief space. Lovejoy (1991) proposed such an
algorithm for a fixed grid of points, an approach taken also by Bonet (2002). An influential paper by Pineau et al. (2003) suggested generating reachable points by simulating trajectories in a
somewhat greedy fashion; Spaan and Vlassis (2005) observe that one need generate plans for only a small, randomly selected subset of points to improve on the plans from the previous iteration for all
points in the set. Current point-based methods— such as point-based policy iteration (Ji et al., 2007)—can generate near-optimal solutions for POMDPs with thousands of states. Because POMDPs are
PSPACE-hard (Papadimitriou and Tsitsiklis, 1987), further progress may require taking advantage of various kinds of structure within a factored representation.
The online approach—using look-ahead search to select an action for the current belief state—was first examined by Satia and Lave (1973). The use of sampling at chance nodes was explored analytically
by Kearns et al. (2000) and Ng and Jordan (2000). The basic ideas for an agent architecture using dynamic decision networks were proposed by Dean and Kanazawa (1989a). The book Planning and Control
by Dean and Wellman (1991) goes into much greater depth, making connections between DBN/DDN models and the classical control literature on filtering. Tatman and Shachter (1990) showed how to apply
dynamic programming algorithms to DDN models. Russell (1998) explains various ways in which such agents can be scaled up and identifies a number of open research issues.
The roots of game theory can be traced back to proposals made in the 17th century by Christiaan Huygens and Gottfried Leibniz to study competitive and cooperative human interactions scientifically
and mathematically. Throughout the 19th century, several leading economists created simple mathematical examples to analyze particular examples of competitive situations. The first formal results in
game theory are due to Zermelo (1913) (who had, the year before, suggested a form of minimax search for games, albeit an incorrect one). Emile Borel (1921) introduced the notion of a mixed strategy.
John von Neumann (1928) proved that every two-person, zero-sum game has a maximin equilibrium in mixed strategies and a well-defined value. Von Neumann’s collaboration with the economist Oskar
Morgenstern led to the publication in 1944 of the Theory of Games and Economic Behavior, the defining book for game theory. Publication of the book was delayed by the wartime paper shortage until a
member of the Rockefeller family personally subsidized its publication.
In 1950, at the age of 21, John Nash published his ideas concerning equilibria in general (non-zero-sum) games. His definition of an equilibrium solution, although originating in the work of Cournot
(1838), became known as Nash equilibrium. After a long delay because of the schizophrenia he suffered from 1959 onward, Nash was awarded the Nobel Memorial Prize in Economics (along with Reinhard
Selten and John Harsanyi) in 1994. The Bayes–Nash equilibrium is described by Harsanyi (1967) and discussed by Kadane and Larkey (1982). Some issues in the use of game theory for agent control are
covered by Binmore (1982).
The prisoner’s dilemma was invented as a classroom exercise by Albert W. Tucker in 1950 (based on an example by Merrill Flood and Melvin Dresher) and is covered extensively by Axelrod (1985) and
Poundstone (1993). Repeated games were introduced by Luce and Raiffa (1957), and games of partial information in extensive form by Kuhn (1953). The first practical algorithm for sequential,
partial-information games was developed within AI by Koller et al. (1996); the paper by Koller and Pfeffer (1997) provides a readable introduction to the field and describes a working system for
representing and solving sequential games.
The use of abstraction to reduce a game tree to a size that can be solved with Koller’s technique is discussed by Billings et al. (2003). Bowling et al. (2008) show how to use importance sampling to
get a better estimate of the value of a strategy. Waugh et al. (2009) show that the abstraction approach is vulnerable to making systematic errors in approximating the equilibrium solution, meaning
that the whole approach is on shaky ground: it works for some games but not others. Korb et al. (1999) experiment with an opponent model in the form of a Bayesian network. It plays five-card stud
about as well as experienced humans. Zinkevich et al. (2008) show how an approach that minimizes regret can find approximate equilibria for abstractions with 10^12 states, 100 times more than
previous methods.
Game theory and MDPs are combined in the theory of Markov games, also called stochastic games (Littman, 1994; Hu and Wellman, 1998). Shapley (1953) actually described the value iteration algorithm
independently of Bellman, but his results were not widely appreciated, perhaps because they were presented in the context of Markov games. Evolutionary game theory (Smith, 1982; Weibull, 1995) looks
at strategy drift over time: if your opponent’s strategy is changing, how should you react? Textbooks on game theory from an economics point of view include those by Myerson (1991), Fudenberg and
Tirole (1991), Osborne (2004), and Osborne and Rubinstein (1994); Mailath and Samuelson (2006) concentrate on repeated games. From an AI perspective we have Nisan et al. (2007), Leyton-Brown and
Shoham (2008), and Shoham and Leyton-Brown (2009).
The 2007 Nobel Memorial Prize in Economics went to Hurwicz, Maskin, and Myerson “for having laid the foundations of mechanism design theory” (Hurwicz, 1973). The tragedy of the commons, a motivating
problem for the field, was presented by Hardin (1968). The revelation principle is due to Myerson (1986), and the revenue equivalence theorem was developed independently by Myerson (1981) and Riley
and Samuelson (1981). Two economists, Milgrom (1997) and Klemperer (2002), write about the multibillion-dollar spectrum auctions they were involved in.
Mechanism design is used in multiagent planning (Hunsberger and Grosz, 2000; Stone et al., 2009) and scheduling (Rassenti et al., 1982). Varian (1995) gives a brief overview with connections to the
computer science literature, and Rosenschein and Zlotkin (1994) present a book-length treatment with applications to distributed AI. Related work on distributed AI also goes under other names,
including collective intelligence (Tumer and Wolpert, 2000; Segaran, 2007) and market-based control (Clearwater, 1996). Since 2001 there has been an annual Trading Agents Competition (TAC), in which
agents try to make the best profit on a series of auctions (Wellman et al., 2001; Arunachalam and Sadeh, 2005). Papers on computational issues in auctions often appear in the ACM Conferences on
Electronic Commerce.
Exercises
17.1 For the 4 × 3 world shown in Figure 17.1, calculate which squares can be reached from (1,1) by the action sequence [Up, Up, Right, Right, Right] and with what probabilities. Explain how this
computation is related to the prediction task (see Section 15.2.1) for a hidden Markov model.
17.2 Select a specific member of the set of policies that are optimal for R(s) > 0 as shown in Figure 17.2(b), and calculate the fraction of time the agent spends in each state, in the limit, if the
policy is executed forever. (Hint: Construct the state-to-state transition probability matrix corresponding to the policy and see Exercise 15.2.)
17.3 Suppose that we define the utility of a state sequence to be the maximum reward obtained in any state in the sequence. Show that this utility function does not result in stationary preferences
between state sequences. Is it still possible to define a utility function on states such that MEU decision making gives optimal behavior?
17.4 Sometimes MDPs are formulated with a reward function R(s, a) that depends on the action taken or with a reward function R(s, a, s′) that also depends on the outcome state.
a. Write the Bellman equations for these formulations.
b. Show how an MDP with reward function R(s, a, s ′) can be transformed into a different MDP with reward function R(s, a), such that optimal policies in the new MDP correspond exactly to optimal
policies in the original MDP.
c. Now do the same to convert MDPs with R(s, a) into MDPs with R(s).
17.5 For the environment shown in Figure 17.1, find all the threshold values for R(s) such that the optimal policy changes when the threshold is crossed. You will need a way to calculate the optimal
policy and its value for fixed R(s). (Hint: Prove that the value of any fixed policy varies linearly with R(s).)
17.6 Equation (17.7) on page 654 states that the Bellman operator is a contraction.
a. Show that, for any functions f and g,
|max~a~ f(a) − max~a~ g(a)| ≤ max~a~ |f(a) − g(a)|
b. Write out an expression for |(B U~i~ − B U′~i~)(s)| and then apply the result from (a) to complete the proof that the Bellman operator is a contraction.
17.7 This exercise considers two-player MDPs that correspond to zero-sum, turn-taking games like those in Chapter 5. Let the players be A and B, and let R(s) be the reward for player A in state s.
(The reward for B is always equal and opposite.)
a. Let UA(s) be the utility of state s when it is A’s turn to move in s, and let UB(s) be the utility of state s when it is B’s turn to move in s. All rewards and utilities are calculated from A’s
point of view (just as in a minimax game tree). Write down Bellman equations defining UA(s) and UB(s).
b. Explain how to do two-player value iteration with these equations, and define a suitable termination criterion.
c. Consider the game described in Figure 5.17 on page 197. Draw the state space (rather than the game tree), showing the moves by A as solid lines and moves by B as dashed lines. Mark each state with
R(s). You will find it helpful to arrange the states (sA, sB) on a two-dimensional grid, using sA and sB as “coordinates.”
d. Now apply two-player value iteration to solve this game, and derive the optimal policy.
17.8 Consider the 3 × 3 world shown in Figure 17.14(a). The transition model is the same as in the 4 × 3 world of Figure 17.1: 80% of the time the agent goes in the direction it selects; the rest of the time
it moves at right angles to the intended direction.
Implement value iteration for this world for each value of r below. Use discounted rewards with a discount factor of 0.99. Show the policy obtained in each case. Explain intuitively why the value of
r leads to each policy.
Alt text
17.9 Consider the 101 × 3 world shown in Figure 17.14(b). In the start state the agent has a choice of two deterministic actions, Up or Down, but in the other states the agent has one deterministic
action, Right. Assuming a discounted reward function, for what values of the discount γ should the agent choose Up and for which Down? Compute the utility of each action as a function of γ. (Note
that this simple example actually reflects many real-world situations in which one must weigh the value of an immediate action versus the potential continual long-term consequences, such as choosing
to dump pollutants into a lake.)
17.10 Consider an undiscounted MDP having three states, (1, 2, 3), with rewards −1, −2, 0, respectively. State 3 is a terminal state. In states 1 and 2 there are two possible actions: a and b. The
transition model is as follows:
• In state 1, action a moves the agent to state 2 with probability 0.8 and makes the agent stay put with probability 0.2.
• In state 2, action a moves the agent to state 1 with probability 0.8 and makes the agent stay put with probability 0.2.
• In either state 1 or state 2, action b moves the agent to state 3 with probability 0.1 and makes the agent stay put with probability 0.9.
Answer the following questions:
a. What can be determined qualitatively about the optimal policy in states 1 and 2?
b. Apply policy iteration, showing each step in full, to determine the optimal policy and the values of states 1 and 2. Assume that the initial policy has action b in both states.
c. What happens to policy iteration if the initial policy has action a in both states? Does discounting help? Does the optimal policy depend on the discount factor?
17.11 Consider the 4× 3 world shown in Figure 17.1.
a. Implement an environment simulator for this environment, such that the specific geography of the environment is easily altered. Some code for doing this is already in the online code repository.
b. Create an agent that uses policy iteration, and measure its performance in the environment simulator from various starting states. Perform several experiments from each starting state, and compare
the average total reward received per run with the utility of the state, as determined by your algorithm.
c. Experiment with increasing the size of the environment. How does the run time for policy iteration vary with the size of the environment?
17.12 How can the value determination algorithm be used to calculate the expected loss experienced by an agent using a given set of utility estimates U and an estimated model P , compared with an
agent using correct values?
17.13 Let the initial belief state b0 for the 4× 3 POMDP on page 658 be the uniform distribution over the nonterminal states, i.e., 〈1/9 ,1/9 ,1/9 ,1/9 ,1/9 ,1/9 ,1/9 ,1/9 ,1/9 , 0, 0〉. Calculate
the exact belief state b~1~ after the agent moves Left and its sensor reports 1 adjacent wall. Also calculate b~2~ assuming that the same thing happens again.
17.14 What is the time complexity of d steps of POMDP value iteration for a sensorless environment?
17.15 Consider a version of the two-state POMDP on page 661 in which the sensor is 90% reliable in state 0 but provides no information in state 1 (that is, it reports 0 or 1 with equal probability).
Analyze, either qualitatively or quantitatively, the utility function and the optimal policy for this problem.
17.16 Show that a dominant strategy equilibrium is a Nash equilibrium, but not vice versa.
17.17 In the children’s game of rock–paper–scissors each player reveals at the same time a choice of rock, paper, or scissors. Paper wraps rock, rock blunts scissors, and scissors cut paper. In the
extended version rock–paper–scissors–fire–water, fire beats rock, paper, and scissors; rock, paper, and scissors beat water; and water beats fire. Write out the payoff matrix and find a
mixed-strategy solution to this game.
17.18 The following payoff matrix, from Blinder (1983) by way of Bernstein (1996), shows a game between politicians and the Federal Reserve.
                  Fed: contract    Fed: do nothing    Fed: expand
Pol: contract     F = 7, P = 1     F = 9, P = 4       F = 6, P = 6
Pol: do nothing   F = 8, P = 2     F = 5, P = 5       F = 4, P = 9
Pol: expand       F = 3, P = 3     F = 2, P = 7       F = 1, P = 8
Politicians can expand or contract fiscal policy, while the Fed can expand or contract monetary policy. (And of course either side can choose to do nothing.) Each side also has preferences for who
should do what—neither side wants to look like the bad guys. The payoffs shown are simply the rank orderings: 9 for first choice through 1 for last choice. Find the Nash equilibrium of the game in
pure strategies. Is this a Pareto-optimal solution? You might wish to analyze the policies of recent administrations in this light.
17.19 A Dutch auction is similar to an English auction, but rather than starting the bidding at a low price and increasing, in a Dutch auction the seller starts at a high price and gradually lowers
the price until some buyer is willing to accept that price. (If multiple bidders accept the price, one is arbitrarily chosen as the winner.) More formally, the seller begins with a price p and
gradually lowers p by increments of d until at least one buyer accepts the price. Assuming all bidders act rationally, is it true that for arbitrarily small d, a Dutch auction will always result in
the bidder with the highest value for the item obtaining the item? If so, show mathematically why. If not, explain how it may be possible for the bidder with highest value for the item not to obtain it.
17.20 Imagine an auction mechanism that is just like an ascending-bid auction, except that at the end, the winning bidder, the one who bid bmax, pays only bmax/2 rather than bmax. Assuming all agents
are rational, what is the expected revenue to the auctioneer for this mechanism, compared with a standard ascending-bid auction?
17.21 Teams in the National Hockey League historically received 2 points for winning a game and 0 for losing. If the game is tied, an overtime period is played; if nobody wins in overtime, the game
is a tie and each team gets 1 point. But league officials felt that teams were playing too conservatively in overtime (to avoid a loss), and it would be more exciting if overtime produced a winner.
So in 1999 the officials experimented in mechanism design: the rules were changed, giving a team that loses in overtime 1 point, not 0. It is still 2 points for a win and 1 for a tie.
a. Was hockey a zero-sum game before the rule change? After?
b. Suppose that at a certain time t in a game, the home team has probability p of winning in regulation time, probability 0.78 − p of losing, and probability 0.22 of going into overtime, where they
have probability q of winning, .9 − q of losing, and .1 of tying. Give equations for the expected value for the home and visiting teams.
c. Imagine that it were legal and ethical for the two teams to enter into a pact where they agree that they will skate to a tie in regulation time, and then both try in earnest to win in overtime.
Under what conditions, in terms of p and q, would it be rational for both teams to agree to this pact?
d. Longley and Sankaran (2005) report that since the rule change, the percentage of games with a winner in overtime went up 18.2%, as desired, but the percentage of overtime games also went up 3.6%.
What does that suggest about possible collusion or conservative play after the rule change?
This information is part of the Modelica Standard Library maintained by the Modelica Association.
Without pre-caution when implementing a medium model, it is very easy that non-linear algebraic systems of equations occur when using the medium model. In this section it is explained how to avoid
non-linear systems of equations that result from unnecessary dynamic state selections.
A medium model should be implemented in such a way that a tool is able to select states of a medium in a balance volume statically (during translation). This is only possible if the medium equations
are written in a specific way. Otherwise, a tool has to dynamically select states during simulation. Since medium equations are usually non-linear, this means that non-linear algebraic systems of
equations would occur in every balance volume.
It is assumed that medium equations in a balance volume are defined in the following way:
package Medium = Modelica.Media.Interfaces.PartialMedium;
Medium.BaseProperties medium;
// mass balance
der(M) = port_a.m_flow + port_b.m_flow;
der(MX) = port_a_mX_flow + port_b_mX_flow;
M = V*medium.d;
MX = M*medium.X;
// Energy balance
U = M*medium.u;
der(U) = port_a.H_flow+port_b.H_flow;
Single Substance Media
A medium consisting of a single substance has to define two of "p,T,d,u,h" with stateSelect=StateSelect.prefer if BaseProperties.preferredMediumStates = true and has to provide the other three
variables as function of these states. This results in:
• static state selection (no dynamic choices).
• a linear system of equations in the two state derivatives.
Example for a single substance medium
p, T are preferred states (i.e., StateSelect.prefer is set) and there are three equations written in the form:
d = fd(p,T)
u = fu(p,T)
h = fh(p,T)
Index reduction leads to the equations:
der(M) = V*der(d)
der(U) = der(M)*u + M*der(u)
der(d) = der(fd,p)*der(p) + der(fd,T)*der(T)
der(u) = der(fu,p)*der(p) + der(fu,T)*der(T)
Note, that der(y,x) is the partial derivative of y with respect to x and that this operator is available in Modelica only for declaring partial derivative functions, see Section 12.7.2 (Partial
Derivatives of Functions) of the Modelica 3.4 specification.
The above equations imply, that if p,T are provided from the integrator as states, all functions, such as fd(p,T) or der(fd,p) can be evaluated as function of the states. The overall system results
in a linear system of equations in der(p) and der(T) after eliminating der(M), der(U), der(d), der(u) via tearing.
Counter Example for a single substance medium
An ideal gas with one substance is written in the form
redeclare model extends BaseProperties(
   T(stateSelect=if preferredMediumStates then StateSelect.prefer else StateSelect.default),
   p(stateSelect=if preferredMediumStates then StateSelect.prefer else StateSelect.default))
equation
   h = h(T);
   u = h - R_s*T;
   p = d*R_s*T;
end BaseProperties;
If p, T are preferred states, these equations are not written in the recommended form, because d is not a function of p and T. If p, T were states, it would be necessary to solve for the density:
d = p/(R_s*T)
If T or R_s are zero, this results in a division by zero. A tool does not know that R_s or T cannot become zero. Therefore, a tool must assume that p, T cannot always be selected as states and has to
either use another static state selection or use dynamic state selection. The only other choice for static state selection is d,T, because h,u,p are given as functions of d,T. However, as potential
states only variables appearing differentiated and variables declared with StateSelect.prefer or StateSelect.always are used. Since "d" does not appear differentiated and has StateSelect.default, it
cannot be selected as a state. As a result, the tool has to select states dynamically during simulation. Since the equations above are non-linear and they are utilized in the dynamic state selection,
a non-linear system of equations is present in every balance volume.
To summarize, for single substance ideal gas media there are the following two possibilities to get static state selection and linear systems of equations:
1. Use p,T as preferred states and write the equation for d in the form: d = p/(T*R_s)
2. Use d,T as preferred states and write the equation for p in the form: p = d*T*R_s
All other settings (other/no preferred states etc.) lead to dynamic state selection and non-linear systems of equations for a balance volume.
Multiple Substance Media
A medium consisting of multiple substances has to define two of "p,T,d,u,h" as well as the mass fractions Xi with stateSelect=StateSelect.prefer (if BaseProperties.preferredMediumStates = true) and
has to provide the other three variables as functions of these states. Only then, static selection is possible for a tool.
Example for a multiple substance medium:
p, T and Xi are defined as preferred states and the equations are written in the form:
d = fd(p,T,Xi);
u = fu(p,T,Xi);
h = fh(p,T,Xi);
Since the balance equations are written in the form:
M = V*medium.d;
MXi = M*medium.Xi;
The variables M and MXi appearing differentiated in the balance equations are provided as functions of d and Xi and since d is given as a function of p, T and Xi, it is possible to compute M and MXi
directly from the desired states. This means that static state selection is possible.
st: Re: selmlog: question
From "R.E. De Hoyos" <[email protected]>
To <[email protected]>
Subject st: Re: selmlog: question
Date Wed, 12 Apr 2006 00:47:06 +0100
A way to do this is by generating the conditional probabilities (_m in -selmlog-) for your two outcomes of interest. You can then use them in a single wage equation. Say "w1" are the observed wages
under outcome 1 (missing values otherwise) and "w3" are the observed wages under outcome 3 (missing values otherwise) as you specified the problem. Then:
selmlog w1 x1 x2, sel(outcome x1 x2 z1) gen(cprob1)
selmlog w3 x1 x2, sel(outcome x1 x2 z1) gen(cprob3)
The above model will allow for full wage parameter heterogeneity across outcomes 1 and 3. Depending on your particular problem this might be the best way to account for selection (allows for separate
market equilibria and different payments for the unobserved characteristics determining selection [cprob]). However, if you want to impose the constraint of homogeneity in parameters across the wage
equation for outcomes 1 and 3 but still treating them as different outcomes in your selection equation:
gen cprob_13=.
replace cprob_13 = cprob1 if outcome==1
replace cprob_13 = cprob3 if outcome==3
gen w_13=.
replace w_13 = w1 if outcome==1
replace w_13 = w3 if outcome==3
reg w_13 x1 x2 cprob_13
This last model will estimate the wage equation for outcomes 1 and 3 accounting for the unobserved characteristics that made the individuals "choose" those particular outcomes (although the market
payment for those unobservables will be the same for both groups).
Notice that you will have to bootstrap the standard errors to account for the heteroskedasticity present in the two-step procedure.
I hope this helps,
Rafael E. De Hoyos
Faculty of Economics
University of Cambridge
CB3 9DE, UK
----- Original Message ----- From: "Rasmus Joergensen" <[email protected]>
To: <[email protected]>
Sent: Tuesday, April 11, 2006 8:26 PM
Subject: st: selmlog: question
Dear Statalist,
I'm trying to estimate the effect of self-employment experience. My analysis considers the following selection rules:
1. Wage-employed in period t and period t+5
2. Self-employment spell between t and t+5.
This selection model thus considers 4 possible outcomes as illustrated below:
                    WE,t and WE,t+5
                    YES     NO
SE spell   YES       1       2
           NO        3       4
One way to estimate this selection model is to use --selmlog--.
However, selmlog can only estimate the wage equation (the equation of interest) for one outcome of the selection process. But I'm interested in running a wage regression for outcome 1 and 3 (see
above). In other words, I'm trying to estimate a model that accounts for both sample selection and endogenous treatment (the SE spell).
Does anyone have any advice how to correct --selmlog-- to estimate the equation of interest for two outcomes of the selection process? Any suggestions are very welcome.
Rasmus Jørgensen
Research Assistant
Centre for Economic and Business Research
E:< [email protected]
Paper Title
Excess Entropy Based Outlier Detection In Categorical Data Set
Many outlier detection methods have been proposed because of the need to extract meaningful information by removing unwanted data, based on classification, clustering, frequent patterns, and statistics. Among these, information theory offers a different perspective, although its computation rests on a statistical approach. Outlier detection in unsupervised data sets is more challenging because there is no inherent measure of distance between objects. We propose a novel framework for outlier detection in unsupervised data based on information-theoretic measures, built around excess entropy and using measures such as entropy and dual correlation. Based on this model we propose the EEB-SP outlier detection algorithm, which requires no user-defined parameter other than the input data set. We also use a formal definition of outliers that depends on weighted entropy. The algorithm detects outliers in large-scale unsupervised data sets more effectively than other existing methods.
Newport Mill MS
Math Department
C2.0 Mathematics 6
This course is for students who have completed the Kindergarten to Grade 5 mathematics curriculum. Students in C 2.0 Math 6 will go on to C 2.0 Investigations into Mathematics (IM) the following
year. Units of study include the following:
• Unit 1: Ratios, Fractions, and Decimals
• Unit 2: Number relationship
• Unit 3: Expressions and Equations
• Unit 4: Geometric and Statistical relationships
C2.0 Investigation Into Mathematics (IM)
Curriculum 2.0 Investigations into Mathematics (IM) extends students’ understanding of mathematical concepts developed in C2.0 Mathematics 6 and accelerates the pace of instruction to prepare for C2.0 Algebra 1. Students who successfully complete C2.0 IM are prepared for the C2.0 Algebra 1 course in grade 8. Units of study include the following:
• Unit 1: Rational Numbers and Exponents
• Unit 2: Proportionality and Linear Relationships
• Unit 3: Statistics and Probability
• Unit 4: Creating, Comparing, and Analyzing Geometric Figures
C2.0 Algebra 1
Algebra 1 is designed to analyze and model real-world phenomena. Exploration of linear, exponential, and quadratic functions forms the foundation of the course. Key characteristics and representations
of functions – graphic, numeric, symbolic, and verbal – are analyzed and compared. Students develop fluency in solving equations and inequalities. One and two-variable data sets are interpreted using
mathematical models. Topics of study include the following:
• Unit 1: Relationships between Quantities and Reasoning with Equations
• Unit 2: Linear and Exponential Relationships
• Unit 3: Descriptive Statistics
• Unit 4: Quadratic Relationships
• Unit 5: Generalizing Function Properties
C2.0 Honors Geometry
Geometry formalizes and extends students’ geometric experiences from the elementary and middle school grades. Students explore more complex geometric situations and deepen their understanding of
geometric relationships, progressing towards formal mathematical arguments. Instruction at this level will focus on the understanding and application of congruence as a basis for developing formal
proofs; the relationship among similarity, trigonometry and triangles; the relationship between two and three-dimensional objects and their measurements; exploration of geometric descriptions and
equations for conic sections; and application of geometric concepts in modeling situations. Topics of study include the following:
• Unit 1: Constructions, Congruence, and Transformations
• Unit 2: Similarity, Right Triangles, and Trigonometry
• Unit 3: Extending to Three Dimensions
• Unit 4: Connecting Algebra and Geometry through Coordinates; Geometric Measurement and Dimension
• Unit 5: Circles
C2.0 Honors Algebra 2
Honors Algebra 2 is a high school credit-bearing mathematics course. Students who successfully complete both semesters and pass the semester B final exam earn 1 mathematics credit toward graduation.
Students successful in this course will take Honors Pre-Calculus the following year. Units of study include the following:
• Unit 1: Equations and Functions
• Unit 2: Linear Systems and Matrices
• Unit 3: Polynomial Functions
• Unit 4: Sequences and Series
• Unit 5: Power and Radical Functions
• Unit 6: Exponential and Logarithmic Functions
• Unit 7: Rational Functions
• Unit 8: Conic Sections
Current students and parents may access complete course overviews via Edline. | {"url":"https://www.montgomeryschoolsmd.org/schools/newportmillms/departments/math/math/","timestamp":"2024-11-07T22:44:21Z","content_type":"text/html","content_length":"23705","record_id":"<urn:uuid:5b26af60-99c5-48c4-9019-04fa7fdff563>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00367.warc.gz"} |
Wolfram Function Repository
Function Repository Resource:
Return a pseudorandom vector of a given type and size
Contributed by: Dennis M Schneider
The resource function returns an n-dimensional vector with entries that are random values of the given type and lie in the given range.
For complex types, the range is given as a pair of complex values that are taken as diagonally opposite endpoints of a rectangle.
Basic Examples (7)
A random three-dimensional vector whose entries are approximate real numbers between 0 and 1:
A random three-dimensional vector whose entries are approximate real numbers between -5 and 10:
A random three-dimensional vector whose entries are integers between -5 and 10:
A random three-dimensional vector whose entries are rationals between -5 and 10:
A random three-dimensional vector whose entries are Gaussian integers between -5 and 10:
A random three-dimensional vector whose entries are Gaussian rationals between -5 and 10:
A random three-dimensional vector whose entries are RandomComplex[{-5+2I,10-3I}]:
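The resource's example inputs and outputs are not reproduced in this extract. As a rough analogue of the behavior described above, and not the Wolfram Language implementation, here is a Python sketch covering a few of the listed entry types; the function name and type labels are hypothetical, and Gaussian integer/rational entries are omitted.

```python
import random
from fractions import Fraction

def random_vector(entry_type, n, lo=0, hi=1):
    """Return an n-dimensional list of pseudorandom entries of the given type.

    entry_type: one of "real", "integer", "rational", "complex" (hypothetical labels).
    lo, hi: endpoints of the range; for "complex" they are complex numbers taken
    as diagonally opposite corners of a rectangle, as described above.
    """
    if entry_type == "real":
        return [random.uniform(lo, hi) for _ in range(n)]
    if entry_type == "integer":
        return [random.randint(lo, hi) for _ in range(n)]
    if entry_type == "rational":
        def rat():
            d = random.randint(1, 10)  # denominators up to 10, an arbitrary choice
            return Fraction(random.randint(lo * d, hi * d), d)
        return [rat() for _ in range(n)]
    if entry_type == "complex":
        return [complex(random.uniform(lo.real, hi.real),
                        random.uniform(lo.imag, hi.imag)) for _ in range(n)]
    raise ValueError(f"unknown entry type: {entry_type}")

# Examples mirroring the descriptions above:
print(random_vector("real", 3))                         # entries in [0, 1]
print(random_vector("integer", 3, -5, 10))               # integer entries in [-5, 10]
print(random_vector("complex", 3, -5 + 2j, 10 - 3j))     # entries in the given rectangle
```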
| {"url":"https://resources.wolframcloud.com/FunctionRepository/resources/RandomVector/","timestamp":"2024-11-13T18:02:18Z","content_type":"text/html","content_length":"40809","record_id":"<urn:uuid:eebd7685-014e-4f3f-8e19-2f964375a80b>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00824.warc.gz"} |
Propositional calculus
From New World Encyclopedia
Propositional calculus or Sentential calculus is a calculus that represents the logical structure of truth-functional connectives ("not," "and," "or," "if…then...," and others), that is, connectives whose meanings determine the truth-value of a given sentence in which they occur once the truth-values of all the simple sentences in that sentence are given. It is often referred to
as Propositional logic.
Consider the following argument:
If Jack is innocent, then Jack has an alibi and Jack is not a murderer.
Jack does not have an alibi.
Therefore, Jack is not innocent.
The truth-values, truth or falsity, of the sentences in this argument depend exclusively on whether each of the simple sentences "Jack is innocent," "Jack has an alibi," and "Jack is a murderer" is true or false. In other words, once the truth-values of the simple sentences are determined, the truth-values of the complex sentences in the argument are determined only by the meanings of the connectives "if…then...," "not," and "and," which are examples of truth-functional connectives. Propositional calculus, focusing on connectives of such kinds, clarifies what form a given argument $A$ (such as the one in question here) has, and studies how the correctness or incorrectness of $A$ depends on the truth-functional connectives that it contains.
The language of propositional calculus consists of propositional variables, truth-functional connectives (the most familiar ones are $\lnot, \wedge, \vee, \rightarrow, \leftrightarrow$), and parentheses. Formulas are built up from propositional variables by using truth-functional connectives and parentheses.
To propositional variables, either truth or falsity is assigned and, relative to the truth-value assignment, the truth-value of an arbitrary well-formed formula (for the definition, see the section
Syntax) that contains the propositional variables is calculated based on the truth-functional connectives in the well-formed formula.
A propositional calculus has a set of axioms (possibly empty) and rules of inference. There are various kinds of propositional calculi, for which the soundness and completeness can be proved. (for
the definitions of soundness and completeness, see the corresponding section Soundness and Completeness)
Studies Under Propositional Calculus
Some sentences have truth-values, truth or falsity, (declarative sentences are typical examples) and some do not (interrogative sentences, exclamatory sentences, and others). The sentences of the
latter kind are excluded from what propositional calculus studies. Thus, in propositional calculus, it is assumed that every sentence is either true or false. (This assumption is called the principle
of bivalence.)
Among such sentences, those that do not include sentential connectives such as "and," "or," and others (e.g., "John is a bachelor") are called atomic sentences. More complex sentences (e.g., "John is a bachelor and Ben is married") are built from atomic sentences and sentential connectives.
Some sentential connectives determine the truth-values of the complex sentences in which they occur, once the truth-values of atomic sentences that the complex sentences contain are determined. For
instance, the truth-value of “John is a bachelor and Ben is married” is determined purely by the meaning of the connective “and” once the truth-values of the two atomic sentences “John is a bachelor”
and "Ben is married" are determined. Connectives of this kind are called truth-functional. (Notice that this does not apply to all sentential connectives. Consider "Ben is happy because Ben is married." The truth-value of this sentence is still undetermined even if both of the atomic sentences in it are true.) Truth-functional connectives are the connectives that propositional calculus studies. Examples of such connectives are "and," "or," and "if…then..." (in certain of their uses only; some uses of these connectives, for instance in counterfactual statements, are not truth-functional).
The language of propositional calculus consists of (1) propositional variables, usually written p, q, r, …; (2) truth-functional connectives $\lnot, \wedge, \vee, \rightarrow, \leftrightarrow$; and (3) parentheses "(" and ")". Propositional variables represent atomic sentences, and $\lnot, \wedge, \vee, \rightarrow$, and $\leftrightarrow$ are usually read as "not," "and," "or," "if…then...," and "...if and only if...," respectively. $\lnot$ is called unary (meaning that it attaches to one wff; for the definition of wffs, see below), and the other four connectives are called binary (meaning that they combine two wffs). Parentheses are used to represent punctuation in sentences.
Well-formed formulas (wffs) are recursively built in the following way.
• Propositional variables are wffs.
• If $\alpha$ is a wff, then $\lnot \alpha$ is a wff.
• If $\alpha$ and $\beta$ are wffs, then $(\alpha \star \beta)$ is a wff, where $\star$ is a binary connective.
Conventionally, the outermost set of parentheses is dropped. Also, the order of strength in which propositional connectives bind is stipulated as $\leftrightarrow, \rightarrow, \wedge$ and $\vee, \lnot$. Therefore, taking these two conventions into account, the wff "$((\lnot p\vee q)\rightarrow r)$" built up by the above definition is written as "$\lnot p\vee q\rightarrow r$."
The connective in a given wff $\phi$ that binds last is called the main connective of $\phi$. Thus, in the case of $\lnot p\vee q\rightarrow r$, the main connective is $\rightarrow$. Wffs with $\lnot$, $\wedge$, $\vee$, $\rightarrow$, and $\leftrightarrow$ as their main connectives are called negation, conjunction, disjunction, conditional, and biconditional, respectively.
An argument consists of a set of wffs and a distinguished wff. The wffs of the former kind are called premises and the distinguished wff is called the conclusion. The set of premises of a given
argument can possibly be empty.
For instance, the set of sentences about Jack in the opening example is represented in the language of propositional logic as follows:
$p\rightarrow (q\wedge \lnot r)$
$\lnot q$
$\lnot p$
where p, q, and r represent “Jack is innocent,” “Jack has an alibi,” and “Jack is a murderer,” respectively. The first two wffs are the premises and the last wff is the conclusion of the argument.
Every wff in propositional calculus gets either of the two truth-values, True and False (T and F). Relative to an assignment V of truth-values to propositional variables (a function from the set of propositional variables to {T, F}), the truth-values of other wffs are determined recursively as follows:
• p is T iff V(p)=T.
• $\lnot \alpha$ is T iff $\alpha$ is F.
• $\alpha \wedge \beta$ is T iff $\alpha$ is T and $\beta$ is T.
• $\alpha \vee \beta$ is T iff $\alpha$ is T or $\beta$ is T (in the inclusive sense of "or," i.e., including the case in which both are T).
• $\alpha \rightarrow \beta$ is T iff $\alpha$ is F or $\beta$ is T.
• $\alpha \leftrightarrow \beta$ is T iff $\alpha$ and $\beta$ coincide in their truth-values.
For instance, when p, q, and r get T, T, and F respectively, $(p\vee \lnot q)\leftrightarrow (r\wedge q)$ gets F: the left side of the biconditional is T because p is T (and $\lnot q$ is F), and the right side is F because r is F (and q is T).
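These recursive clauses can be implemented directly. The following Python sketch (an illustration, not part of the original article) evaluates a wff, encoded as a nested tuple, under a given truth-value assignment; the encoding and function names are choices made for this example.

```python
# Wffs are encoded as nested tuples: a propositional variable is a string,
# ("not", wff) is a negation, and ("and" | "or" | "imp" | "iff", wff1, wff2)
# are the binary connectives.

def evaluate(wff, assignment):
    """Return the truth-value (True/False) of wff under the assignment."""
    if isinstance(wff, str):                 # propositional variable
        return assignment[wff]
    op = wff[0]
    if op == "not":
        return not evaluate(wff[1], assignment)
    left, right = evaluate(wff[1], assignment), evaluate(wff[2], assignment)
    if op == "and":
        return left and right
    if op == "or":
        return left or right
    if op == "imp":                          # material conditional
        return (not left) or right
    if op == "iff":
        return left == right
    raise ValueError(f"unknown connective: {op}")

# (p or not q) iff (r and q) under p=T, q=T, r=F evaluates to False, as in the text.
wff = ("iff", ("or", "p", ("not", "q")), ("and", "r", "q"))
print(evaluate(wff, {"p": True, "q": True, "r": False}))   # False
```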
A wff that gets T no matter what truth-value assignment is given is called a tautology. A set $\Gamma$ of wffs (possibly empty) implies a wff $\phi$ if and only if $\phi$ is T relative to every truth-value assignment V that assigns T to all the wffs in $\Gamma$. An argument, consisting of a set $\Gamma$ of wffs and a wff $\phi$, is said to be valid if $\Gamma$ implies $\phi$. (For instance, readers are invited to check that the argument about Jack is valid.)
If an argument, consisting of a premise set $\Gamma$ and a conclusion $\phi$, is valid, we write "$\Gamma \models \phi$," which often reads as "$\Gamma$ implies $\phi$." (For the left-hand side of "$\models$," the wffs in $\Gamma$ are written with commas between them; e.g., if $\Gamma$ is {p, q, r}, we write "p, q, r $\models \phi$.")
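Because there are only finitely many truth-value assignments to the propositional variables of an argument, validity can be checked mechanically by enumerating them. The Python sketch below (again an illustration, not part of the article) verifies the argument about Jack and, with an empty premise set, checks a tautology; the helper names are hypothetical.

```python
from itertools import product

def implies(premises, conclusion, variables):
    """Return True if every truth-value assignment that makes all premises
    true also makes the conclusion true."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# The argument about Jack: {p -> (q and not r), not q} implies not p.
premises = [
    lambda v: (not v["p"]) or (v["q"] and not v["r"]),  # p -> (q and not r)
    lambda v: not v["q"],                               # not q
]
conclusion = lambda v: not v["p"]                       # not p
print(implies(premises, conclusion, ["p", "q", "r"]))   # True: the argument is valid

# A tautology is a wff implied by the empty set of premises:
print(implies([], lambda v: v["p"] or not v["p"], ["p"]))  # True
```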
Propositional Calculi
Proofs in a propositional calculus
A propositional calculus consists of a set of specified wffs called axioms (the set can possibly be empty) and rules of inference. A proof of an argument is a sequence of wffs in which (1) each wff is a premise, an axiom, or a wff that is derived from previous wffs in the sequence by a rule of inference, and (2) the last wff of the sequence is the conclusion of the argument. If an argument, consisting of a premise set $\Gamma$ and a conclusion $\phi$, has a proof, we write "$\Gamma \vdash \phi$," which reads as "$\phi$ is provable from $\Gamma$." (The convention for the left-hand side of "$\vdash$" is the same as the one for "$\models$".)
In particular, if there is a proof for an argument with an empty set of premises, i.e., if the conclusion of the argument can be derived from the axioms alone by the rules of inference, then the conclusion is called a theorem. Thus, if $\phi$ is a theorem, we can write "$\vdash \phi$," which reads as "$\phi$ is a theorem."
There are various propositional calculi, of which two of the most famous ones are provided below.
Hilbert-Style Propositional Calculus
One famous deductive system takes the language of propositional calculus that consists of propositional variables, the connectives $\rightarrow$ and $\lnot$, and parentheses. The other connectives are defined as follows:
• $\alpha \wedge \beta := \lnot(\alpha \rightarrow \lnot \beta)$
• $\alpha \vee \beta := \lnot \alpha \rightarrow \beta$
• $\alpha \leftrightarrow \beta := \lnot((\alpha \rightarrow \beta) \rightarrow \lnot(\beta \rightarrow \alpha))$
The axioms have one of the following forms:
• A1 $\alpha \rightarrow (\beta \rightarrow \alpha)$
• A2 $(\alpha \rightarrow (\beta \rightarrow \gamma)) \rightarrow ((\alpha \rightarrow \beta) \rightarrow (\alpha \rightarrow \gamma))$
• A3 $(\lnot \alpha \rightarrow \lnot \beta) \rightarrow ((\lnot \alpha \rightarrow \beta) \rightarrow \alpha)$
The only rule of inference is modus ponens, i.e., from $\alpha$ and $\alpha \rightarrow \beta$, derive $\beta$.
Here is an example of a proof in this system for $p, (r\rightarrow p)\rightarrow (r\rightarrow (p\rightarrow s)) \vdash r\rightarrow s$ [1]:
│ Number │ wff │ Justification │
│ 1 │ $p$ │ A premise │
│ 2 │ $(r\rightarrow p)\rightarrow (r\rightarrow (p\rightarrow s))$ │ A premise │
│ 3 │ $p\rightarrow (r\rightarrow p)$ │ An axiom of the form A1 │
│ 4 │ $r\rightarrow p$ │ From 1 and 3 by modus ponens │
│ 5 │ $r\rightarrow (p\rightarrow s)$ │ From 2 and 4 by modus ponens │
│ 6 │ $(r\rightarrow (p\rightarrow s))\rightarrow ((r\rightarrow p)\rightarrow (r\rightarrow s))$ │ An axiom of the form A2 │
│ 7 │ $(r\rightarrow p)\rightarrow (r\rightarrow s)$ │ From 5 and 6 by modus ponens │
│ 8 │ $r\rightarrow s$ │ From 4 and 7 by modus ponens │
Natural Deduction
Another example takes the language of propositional calculus that consists of propositional variables, the connectives $\lnot, \wedge, \vee, \rightarrow, \leftrightarrow$, and parentheses. The set of axioms is empty. However, it has the following rules of inference:
• Reductio ad absurdum (negation introduction)
From (p→q), (p→ ¬q), infer ¬p.
• Double negative elimination
From ¬¬p, infer p.
• Conjunction introduction
From p and q, infer (p ∧ q).
• Conjunction elimination
From (p ∧ q), infer p;
From (p ∧ q), infer q.
• Disjunction introduction
From p, infer (p ∨ q);
From p, infer (q ∨ p).
• Disjunction elimination
From (p ∨ q), (p → r), (q → r), infer r.
• Biconditional introduction
From (p → q), (q → p), infer (p ↔ q).
• Biconditional elimination
From (p ↔ q), infer (p → q);
From (p ↔ q), infer (q → p).
• Modus ponens (conditional elimination)
From p, (p → q), infer q.
• Conditional proof (conditional introduction)
If assuming p allows a proof of q, infer (p → q).
Here is an example of a proof in this system again for $p, (r\rightarrow p)\rightarrow (r\rightarrow (p\rightarrow s)) \vdash r\rightarrow s$.
│ Number │ wff │ Justification │
│ 1 │ $p$ │ A premise │
│ 2 │ $(r\rightarrow p)\rightarrow (r\rightarrow (p\rightarrow s))$ │ A premise │
│ 3 │ $r$ │ An assumption for a conditional proof │
│ 4 │ $p$ │ Iteration of 1 │
│ 5 │ $r\rightarrow p$ │ From 3 and 4 by a conditional proof │
│ 6 │ $r\rightarrow (p\rightarrow s)$ │ From 2 and 5 by modus ponens │
│ 7 │ $r$ │ An assumption for a conditional proof │
│ 8 │ $p\rightarrow s$ │ From 6 and 7 by modus ponens │
│ 9 │ $s$ │ From 1 and 8 by modus ponens │
│ 10 │ $r\rightarrow s$ │ From 7 and 9 by a conditional proof │
Famous Provable Arguments
Here are some of the most famous forms of arguments that are provable in both of the calculi:
Basic and Derived Argument Forms
│ Name │ Sequent │ Description │
│ Modus Ponens │ ((p → q) ∧ p) ├ q │ if p then q; p; therefore q │
│ Modus Tollens │ ((p → q) ∧ ¬q) ├ ¬p │ if p then q; not q; therefore not p │
│ Hypothetical Syllogism │ ((p → q) ∧ (q → r)) ├ (p → r) │ if p then q; if q then r; therefore, if p then r │
│ Disjunctive Syllogism │ ((p ∨ q) ∧ ¬p) ├ q │ either p or q; not p; therefore, q │
│ Constructive Dilemma │ ((p → q) ∧ (r → s) ∧ (p ∨ r)) ├ (q ∨ s) │ if p then q; and if r then s; but either p or r; therefore either q or s │
│ Destructive Dilemma │ ((p → q) ∧ (r → s) ∧ (¬q ∨ ¬s)) ├ (¬p ∨ ¬r) │ if p then q; and if r then s; but either not q or not s; therefore either not p or not r │
│ Simplification │ (p ∧ q) ├ p │ p and q are true; therefore p is true │
│ Conjunction │ p, q ├ (p ∧ q) │ p and q are true separately; therefore they are true conjointly │
│ Addition │ p ├ (p ∨ q) │ p is true; therefore the disjunction (p or q) is true │
│ Composition │ ((p → q) ∧ (p → r)) ├ (p → (q ∧ r)) │ if p then q; and if p then r; therefore if p is true then q and r are true │
│ De Morgan's Theorem (1) │ ¬(p ∧ q) ├ (¬p ∨ ¬q) │ the negation of (p and q) is equiv. to (not p or not q) │
│ De Morgan's Theorem (2) │ ¬(p ∨ q) ├ (¬p ∧ ¬q) │ the negation of (p or q) is equiv. to (not p and not q) │
│ Commutation (1) │ (p ∨ q) ├ (q ∨ p) │ (p or q) is equiv. to (q or p) │
│ Commutation (2) │ (p ∧ q) ├ (q ∧ p) │ (p and q) is equiv. to (q and p) │
│ Association (1) │ (p ∨ (q ∨ r)) ├ ((p ∨ q) ∨ r) │ p or (q or r) is equiv. to (p or q) or r │
│ Association (2) │ (p ∧ (q ∧ r)) ├ ((p ∧ q) ∧ r) │ p and (q and r) is equiv. to (p and q) and r │
│ Distribution (1) │ (p ∧ (q ∨ r)) ├ ((p ∧ q) ∨ (p ∧ r)) │ p and (q or r) is equiv. to (p and q) or (p and r) │
│ Distribution (2) │ (p ∨ (q ∧ r)) ├ ((p ∨ q) ∧ (p ∨ r)) │ p or (q and r) is equiv. to (p or q) and (p or r) │
│ Double Negation │ p ├ ¬¬p │ p is equivalent to the negation of not p │
│ Transposition │ (p → q) ├ (¬q → ¬p) │ if p then q is equiv. to if not q then not p │
│ Material Implication │ (p → q) ├ (¬p ∨ q) │ if p then q is equiv. to either not p or q │
│ Material Equivalence (1) │ (p ↔ q) ├ ((p → q) ∧ (q → p)) │ (p is equiv. to q) means (if p is true then q is true) and (if q is true then p is true) │
│ Material Equivalence (2) │ (p ↔ q) ├ ((p ∧ q) ∨ (¬q ∧ ¬p)) │ (p is equiv. to q) means either (p and q are true) or (both p and q are false) │
│ Exportation │ ((p ∧ q) → r) ├ (p → (q → r)) │ from (if p and q are true then r is true) we can prove (if q is true then r is true, if p is true) │
│ Importation │ (p → (q → r)) ├ ((p ∧ q) → r) │ │
│ Tautology │ p ├ (p ∨ p) │ p is true is equiv. to p is true or p is true │
│ Tertium non datur (Law of Excluded Middle) │ ├ (p ∨ ¬p) │ p or not p is true │
Soundness and Completeness
A calculus is sound if, for all $\Gamma$ and $\phi$, $\Gamma \vdash \phi$ implies $\Gamma \models \phi$. A calculus is complete if, for all $\Gamma$ and $\phi$, $\Gamma \models \phi$ implies $\Gamma \vdash \phi$.
There are various sound and complete propositional calculi (i.e., calculi in which the notion of proof and that of validity correspond). The two calculi above are examples of sound and complete propositional calculi.
• Brown, Frank Markham. 2003. Boolean Reasoning: The Logic of Boolean Equations. 1st edition, Kluwer Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, New York. ISBN
• Chang, C.C., and H.J. Keisler. 1973. Model Theory. Amsterdam, Netherlands: North-Holland. ISBN 9780444880543
• Klement, Kevin C. 2006. "Propositional Logic" in Internet Encyclopedia of Philosophy, edited by James Fieser and Bradley Dowden. The Internet Encyclopedia of Philosophy.
• Kohavi, Zvi. 1978. Switching and Finite Automata Theory. 1st edition, McGraw–Hill, 1970. 2nd edition, McGraw–Hill, 1978. ISBN 9780070353107
• Korfhage, Robert R. 1974. Discrete Computational Structures, Academic Press, New York, NY. ISBN 0124208606
• Lambek, J., and P.J. Scott. 1986. Introduction to Higher Order Categorical Logic. Cambridge University Press, Cambridge, UK. ISBN 9780521356534
• Mendelson, Elliot. 1964. Introduction to Mathematical Logic. D. Van Nostrand Company. ISBN 9780412808302
| {"url":"https://www.newworldencyclopedia.org/entry/Propositional_calculus","timestamp":"2024-11-03T13:04:04Z","content_type":"text/html","content_length":"167332","record_id":"<urn:uuid:51eac87a-775b-4b03-8600-b06bdb112189>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00105.warc.gz"} |
Sorry, But I'm the Heiress! Full Episodes & Movie
97 Episodes
Episode list: EP 1 through EP 24
| {"url":"https://www.goodshort.com/episodes/sorry-but-i-m-the-heiress-31000789843","timestamp":"2024-11-10T11:25:47Z","content_type":"text/html","content_length":"104626","record_id":"<urn:uuid:84bf0852-1c0a-4302-b0d6-fa5d5ada5468>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00408.warc.gz"} |
Lesson 16
Writing Systems of Equations
Let’s write systems of equations from real-world situations.
16.1: How Many Solutions? Matching
Match each system of equations with the number of solutions the system has.
1. \(\begin{cases} y=\text-\frac43x+4 \\ y = \text-\frac43x-1 \end{cases}\)
2. \(\begin{cases} y=4x-5 \\ y = \text-2x+7 \end{cases}\)
3. \(\begin{cases} 2x+3y = 8 \\ 4x+6y = 17 \end{cases}\)
4. \(\begin{cases} y= 5x-15 \\ y= 5(x-3) \end{cases}\)
1. No solutions
2. One solution
3. Infinitely many solutions
16.2: Situations and Systems
For each situation:
• Create a system of equations.
• Then, without solving, interpret what the solution to the system would tell you about the situation.
1. Lin’s family is out for a bike ride when her dad stops to take a picture of the scenery. He tells the rest of the family to keep going and that he’ll catch up. Lin's dad spends 5 minutes taking
the photo and then rides at 0.24 miles per minute until he meets up with the rest of the family further along the bike path. Lin and the rest were riding at 0.18 miles per minute.
2. Noah is planning a kayaking trip. Kayak Rental A charges a base fee of $15 plus $4.50 per hour. Kayak Rental B charges a base fee of $12.50 plus $5 per hour.
3. Diego is making a large batch of pastries. The recipe calls for 3 strawberries for every apple. Diego used 52 fruits all together.
4. Flour costs $0.80 per pound and sugar costs $0.50 per pound. An order of flour and sugar weighs 15 pounds and costs $9.00.
16.3: Info Gap: Racing and Play Tickets
Your teacher will give you either a problem card or a data card. Do not show or read your card to your partner.
If your teacher gives you the problem card:
1. Silently read your card and think about what information you need to be able to answer the question.
2. Ask your partner for the specific information that you need.
3. Explain how you are using the information to solve the problem.
Continue to ask questions until you have enough information to solve the problem.
4. Share the problem card and solve the problem independently.
5. Read the data card and discuss your reasoning.
If your teacher gives you the data card:
1. Silently read your card.
2. Ask your partner “What specific information do you need?” and wait for them to ask for information.
If your partner asks for information that is not on the card, do not do the calculations for them. Tell them you don’t have that information.
3. Before sharing the information, ask “Why do you need that information?” Listen to your partner’s reasoning and ask clarifying questions.
4. Read the problem card and solve the problem independently.
5. Share the data card and discuss your reasoning.
16.4: Solving Systems Practice
Here are a lot of systems of equations:
• \(\begin{cases} y=\text-2x+6 \\ y=x-3 \end{cases}\)
• \(\begin{cases} y=5x-4 \\ y=4x+12 \end{cases}\)
• \(\begin{cases} y=\frac23x-4 \\ y=\text-\frac43x+9 \end{cases}\)
• \(\begin{cases} 4y + 7x = 6 \\ 4y+7x = \text-5 \end{cases}\)
• \(\begin{cases} y=x - 6\\ x=6 + y \end{cases}\)
• \(\begin{cases} y=0.24x\\ y=0.18x+0.9 \end{cases}\)
• \(\begin{cases} y=4.5x+15 \\ y=5x+12.5 \end{cases}\)
• \(\begin{cases} y=3x \\ x+y=52 \end{cases}\)
1. Without solving, identify 3 systems that you think would be the least difficult for you to solve and 3 systems you think would be the most difficult. Be prepared to explain your reasoning.
2. Choose 4 systems to solve. At least one should be from your "least difficult" list and one should be from your "most difficult" list.
We have learned how to solve many kinds of systems of equations using algebra that would be difficult to solve by graphing. For example, look at
\(\begin{cases} y = 2x -3 \\ x+2y=7 \end{cases}\)
The first equation says that \(y=2x-3\), so wherever we see \(y\), we can substitute the expression \(2x-3\) instead. So the second equation becomes \(x+2(2x-3) = 7\).
\(\begin{aligned} x+4x-6 &= 7 &&\text{distributive property}\\ 5x-6 &=7 &&\text{combine like terms}\\ 5x &= 13 &&\text{add 6 to each side}\\ x&= \frac{13}{5} && \text{multiply each side by } \frac{1}{5} \end{aligned}\)
We know that the \(y\) value for the solution is the same for either equation, so we can use either equation to solve for it. Using the first equation, we get:
\(\begin{aligned} y &= 2\left(\frac{13}{5}\right)-3 &&\text{substitute } x= \frac{13}{5} \text{ into the equation}\\ y &=\frac{26}{5}-3 &&\text{multiply } 2\left(\frac{13}{5}\right) \text{ to make } \frac{26}{5} \\ y &=\frac{26}{5}-\frac{15}{5} &&\text{rewrite 3 as } \frac{15}{5}\\ y &=\frac{11}{5} \end{aligned}\)
If we substitute \(x=\frac{13}5\) into the other equation, \(x+2y=7\), we get the same \(y\) value. So the solution to the system is \(\left(\frac{13}{5},\frac{11}5\right)\).
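As a quick check of this solution (not part of the lesson), the same system can be solved with a few lines of Python using Cramer's rule and exact fractions.

```python
from fractions import Fraction

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule
    (assumes a unique solution, i.e., a nonzero determinant)."""
    det = a1 * b2 - a2 * b1
    x = Fraction(c1 * b2 - c2 * b1, det)
    y = Fraction(a1 * c2 - a2 * c1, det)
    return x, y

# y = 2x - 3 rewritten as -2x + y = -3; the other equation is x + 2y = 7.
print(solve_2x2(-2, 1, -3, 1, 2, 7))   # (Fraction(13, 5), Fraction(11, 5))
```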
There are many kinds of systems of equations that we will learn how to solve in future grades, like \(\begin{cases} 2x+3y = 6 \\ \text-x+2y = 3 \end{cases}\).
Or even \(\begin{cases} y = x^2 +1 \\ y = 2x+3 \end{cases}\).
• system of equations
A system of equations is a set of two or more equations. Each equation contains two or more variables. We want to find values for the variables that make all the equations true.
These equations make up a system of equations:
\(\displaystyle \begin{cases} x + y = \text-2\\x - y = 12\end{cases}\)
The solution to this system is \(x=5\) and \(y=\text-7\) because when these values are substituted for \(x\) and \(y\), each equation is true: \(5+(\text-7)=\text-2\) and \(5-(\text-7)=12\). | {"url":"https://im.kendallhunt.com/MS_ACC/students/2/5/16/index.html","timestamp":"2024-11-12T15:04:54Z","content_type":"text/html","content_length":"73421","record_id":"<urn:uuid:9c5f45b0-a868-419c-acfb-28d4b63a4a85>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00680.warc.gz"} |
A Spatiotemporal Epidemiological Prediction Model to Inform County-Level COVID-19 Risk in the United States
As the COVID-19 pandemic continues worsening in the United States, it is of critical importance to develop a health information system that provides timely risk evaluation and prediction of the
COVID-19 infection in communities. We propose a spatiotemporal epidemiological forecast model that combines a spatial cellular automata (CA) with a temporal extended
susceptible-antibody-infectious-removed (eSAIR) model under time-varying state-specific control measures. This new toolbox enables the projection of the county-level COVID-19 prevalence over 3109
counties in the continental United States, including $t$-day-ahead risk forecast and the risk related to a travel route. In comparison to the existing temporal risk prediction models, the proposed
CA-eSAIR model informs the projected county-level risk to governments and residents of the local coronavirus spread patterns and the associated personal risks at specific geolocations. Such
high-resolution risk projection is useful for decision-making on business reopening and resource allocation for COVID-19 tests.
Keywords: cellular automata, coronavirus infectious disease, risk prediction, SAIR model
This article includes a select anonymous review document. Anonymous review is a vital process for high quality publications in HDSR. With permissions of the authors and reviewers, we selectively post
anonymously review reports, sometimes with authors' responses, that we believe can further enrich the intellectual journey generated by the corresponding publication.
As of May 22, 2020, of 3109 counties in the continental United States, 2895 (93.1%) have confirmed COVID-19 cases and 1666 (53.6%) have reported case fatalities. The COVID-19 pandemic continues
worsening in the United States nationwide and, according to CDC data, at least 97.1% of the U.S. population are living with their contagious neighbors. We seek to develop a health information system
that provides communities with COVID-19 infection risk predictions in a similar way to the weather forecast. In this article, we establish a COVID-19 forecast paradigm based on an automated operation
of evolving intercounty disease-spread patterns with various types of state-level control measures. We estimate the effectiveness of social distancing for each state using mobile device data, and
derive intercounty connectivity from county-level residents’ mobility variables. In addition to traditional geodesic distance, we use air-distance through nearby airports to quantify the effective
distance between any two counties in the United States. This prediction model is tuned by the minimal prediction error of one-day-ahead infection projection. Using this well-tuned forecast system, we
can predict county-level COVID-19 prevalence over 3109 counties in the continental United States (e.g., the risk of infection over the next week in Washtenaw County, Michigan). We can also
approximate the infection risk imposed by intercounty travel on a specific date. In comparison to existing temporal risk prediction models, our model informs projected county-level risk and potential
spread patterns to governments and residents, and the associated personal risks at specific geolocations. Such high-resolution risk projection is useful for tailored decision-making on business
reopenings and resource allocation for both COVID-19 tests and medical equipment.
1. Introduction
Coronavirus disease 2019 (COVID-19), an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (World Health Organization, 2020), became a global pandemic that
spread swiftly across the world after its original outbreak in Hubei, China, in December 2019. Since mid-March, the number of confirmed infections in the United States has experienced rapid growth.
Many states are facing great challenges in mitigating the spread of the virus, including New York, New Jersey, Massachusetts, Illinois, and Michigan. As of May 8, 2020, this pandemic has caused a
total of 1,318,553 confirmed cases and 78,303 deaths in the United States. The United States is now leading the daily contributions to new infections in the current phase of the global pandemic.
Being a lethal communicable infectious disease, COVID-19 is expected to continue spreading in the U.S. population, causing an even higher number of infections and deaths in the next few months. With
no effective medical treatments nor vaccines available at this moment, public health interventions such as social distancing have been implemented in most of the states at different degrees of
stringency in order to mitigate the spread of COVID-19. To address this continuing pandemic, it is of great urgency to develop a risk prediction model that provides timely community-level information
of the COVID-19 infection risk both now and over a future time period for residents in each county of the United States. The regional risk evaluation and prediction can help account for temporally
varying control measures and spatial variations in the COVID-19 infection dynamics. Such community-level risk information is valuable for local governments and residents to assess the preparedness of
medical resources (personal protective equipment and ICU beds), to determine the allocation of the COVID-19 test kits, to adjust various intervention policies, and to enforce the conduct of social
Predicting the county-level COVID-19 infection risk is especially challenging because of the substantial heterogeneity in the urbanization, ethnic distribution, political views, and economic
composition across regions. Existing state-level epidemiological prediction models focus fully on time-domain analysis and are not adequate to address such geographic, racial, and economic
discrepancies in the United States. A prediction model with a finer resolution is needed to assess the local risk of COVID-19. Technically, a local risk prediction needs to be performed collectively
with all counties, since Americans are highly mobile and connected by highways and airways. Therefore, a county-level risk is more informative and appealing to the public and governments. Making such
a location-specific forecast requires combining a temporal epidemiological model for the time-course infection dynamics and a spatial model for changes of the infection risk over 3109 continental
counties of the country. We propose a new forecast paradigm by incorporating the extended susceptible-antibody-infected-removed (eSAIR) model and a cellular automata (CA) (Von Neumann & Burks, 1966).
Being an important extension of the extended susceptible-infected-removed (eSIR) model proposed in Wang et al. (2020), the eSAIR model enables us to capture the temporal evolution of the disease, in
which a time-varying transmission rate modifier is used to account for state-specific control measures. The proposed extension pertains to adding a new compartment of antibody (A) to accommodate the
ongoing self-immunization in the U.S. population, so that the resulting model helps address the underreporting issue concerning data available in the public databases. The CA model is based on a
spatial process of disease spread over 3109 counties. This new epidemiological forecast model, termed as CA-eSAIR model, can inform residents in different geolocations with timely and areal-evolving
risk of COVID-19 infection. It can also help travelers make a plan for a trip in the United States with relevant risk information to avoid counties with high risk. The novelty of the community-level
risk projection is twofold: the proposed spatiotemporal prediction model allows people and policymakers to envision how the pandemic may evolve across the United States to, for example, identify
rising hotspots of contagion; more importantly, the projection model informs the public of future spread patterns and associated personal risk scores in each county of the United States. Moreover,
due to the use of the Markov Chain Monte Carlo (MCMC) estimation in the proposed state-space model, the quantification for the subsequent prediction uncertainty can be easily carried out in the
proposed statistical framework.
This article is organized as follows. Section 2 begins with an introduction of the eSAIR model with the inclusion of an antibody compartment, and then presents the proposed CA-eSAIR model for
county-level risk prediction. Section 3 concerns an empirical study on community-level risk prediction over 3109 counties in the continental United States. Section 4 gives some concluding remarks.
2. Spatiotemporal County-Level Risk Projection
We propose a statistical model, termed as the CA-eSAIR model in this article, to map time-varying nationwide projected infection risk of COVID-19 for each county in the continental United States.
This forecast framework consists of a temporal statistical model for infection dynamics and a spatial model for spatial spread patterns. The temporal statistical model is based on an eSIR model (Wang
et al., 2020) that accounts for the time-varying state-level control measures that modify the transmission rate of the virus due to state-specific interventions (e.g., social distancing) as well as
the proportion of self-immunized individuals that have developed antibodies to COVID-19. Being an important phenomenon of the COVID-19 contagion, the existence of self-immunization in the U.S.
population has been largely ignored in most of the existing models, which will be addressed via our eSAIR model in this article. The spatial model is built upon a CA (Ahmed & Agiza, 1998; Beauchemin
et al., 2005; Chopard & Droz, 1998; Fuentes & Kuperman, 1999; Fuks & Lawniczak, 2001; Schneckenreither et al., 2008; Sirakoulis et al., 2000; Von Neumann & Burks, 1966; White et al., 2007; Willox et
al., 2003) that links local disease spread conditions over 3109 counties via a certain spatial connectivity function characterizing the intercounty mobility, geodistance, and air-distance via
accessibility to nearby airports.
2.1. Extended Susceptible-Antibody-Infected-Removed (eSAIR) Model
Figure 1. The compartment composition of the extended susceptible-antibody-infected-removed (eSAIR) model. Three compartments on the top thread form the classical SIR model, including susceptible,
infected and removed. The eSAIR model adds an antibody compartment (the bottom thread) to account for the proportion of people who are infected and self-immunized without being tested and recorded.
Motivated by COVID-19 data in Hubei province, China, Wang et al. (2020) developed an eSIR model under the framework of state-space models (Jørgensen & Song, 2007; Song, 2000). This epidemiological
SIR model is driven by a latent Markov SIR model (Kermack & McKendrick, 1927) with the probability of being susceptible (or at risk), $\theta_t^S$; the probability of being infected, $\theta_t^I$; as
well as the probability of being removed, $\theta_t^R$ at a given time $t$. A useful contribution in the eSIR model is the introduction of a viral transmission rate modifier, namely, the term $\pi(t)
$ in Equation (1) that accounts for state-specific preventive policies such as lockdown, social distancing, and use of face masks. Such changing regimes of disease infection are also incorporated
into the eSAIR model in this article. To address the underreporting issue associated with available public databases and to build self-immunization into the infection dynamics, we then further extend
the previous eSIR model to an eSAIR model by adding an antibody (A) compartment. Shown in the bottom thread of Figure 1, this A compartment accounts for the probability of being self-immunized with
antibodies to COVID-19, denoted by $\theta_t^A$; see Equation (1), where $\alpha(t)$ is a function describing the proportion of people moving from the susceptible compartment to the antibody
compartment over time. Compartment A helps circumvent limitations of data collection, especially embracing individuals who were infected but self-cured at home with no confirmation by viral real-time
polymerase chain reaction (RT-PCR) diagnostic tests. This new eSAIR model characterizes the underlying population-level dynamics of the pandemic. The following system of ordinary differential
equations defines collectively the continuous-time dynamics for the eSAIR model, which governs the law of movements among four compartments of susceptible, self-immunized, infected and removed:
$(1) \ \ \ \ \ \qquad \qquad \begin{aligned} \frac{d\theta_t^A}{dt}&=\alpha(t) \theta_t^S,\\ \frac{d\theta_t^S}{dt}&=-\alpha(t)\theta_t^S-\beta\pi(t)\theta_t^S\theta_t^I,\\ \frac{d\theta_t^I}{dt}&=\beta\pi(t)\theta_t^S\theta_t^I-\gamma\theta_t^I,\\ \frac{d\theta_t^R}{dt}&=\gamma\theta_t^I, \end{aligned}$
where $\alpha(t)$ is the self-immunization rate, $\beta$ is the basic disease transmission rate, $\pi(t)$ is a time-varying transmission rate modifier, and $\gamma$ is the rate of being removed from
the system (either dead or recovered). The basic reproduction number is $R_0=\beta/\gamma$, which represents the number of individuals who can contract the virus from an infectious case under no
interventions in the system. The product term $\beta\pi(t)$ describes a modified transmission rate due to control measures. Thus, from the eSAIR model the parameter $\beta$ is estimated by adjusting
preventive measures. So the resulting estimate of $R_0$ can more adequately reflect the underlying viral transmissibility than the unadjusted estimates published in the current literature. The
transmission modifier $\pi(t)$ can be specified by some social distancing score measured based on mobile data describing the mobility reduction of individuals during the epidemic. The
self-immunization rate $\alpha(t)$ can be specified by an estimate from some recent surveys of antibody prevalence conducted in several states in the United States, including New York, Massachusetts,
and California. Due to schedules of the survey sampling, $\alpha(t)$ may be specified with some values at discrete times when survey results become available.
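As a concrete illustration of how the system in Equation (1) can be advanced in time, the sketch below integrates the eSAIR equations with a fourth-order Runge-Kutta (RK4) step in Python. It is a minimal sketch, not the authors' code; the parameter values for $\alpha$, $\pi$, $\beta$, and $\gamma$ and the initial compartment probabilities are made up for illustration and held constant.

```python
import numpy as np

def esair_rhs(theta, alpha_t, beta, pi_t, gamma):
    """Right-hand side of the eSAIR ODE system (1); theta = (S, A, I, R)."""
    S, A, I, R = theta
    dS = -alpha_t * S - beta * pi_t * S * I
    dA = alpha_t * S
    dI = beta * pi_t * S * I - gamma * I
    dR = gamma * I
    return np.array([dS, dA, dI, dR])

def rk4_step(theta, dt, *args):
    """One fourth-order Runge-Kutta step for the eSAIR system."""
    k1 = esair_rhs(theta, *args)
    k2 = esair_rhs(theta + 0.5 * dt * k1, *args)
    k3 = esair_rhs(theta + 0.5 * dt * k2, *args)
    k4 = esair_rhs(theta + dt * k3, *args)
    return theta + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Illustrative (made-up) values: alpha=0.002, beta=0.3, pi=0.7, gamma=0.1.
theta = np.array([0.99, 0.0, 0.01, 0.0])   # (S, A, I, R) at day 0
for day in range(30):
    theta = rk4_step(theta, 1.0, 0.002, 0.3, 0.7, 0.1)
print(theta)  # projected compartment probabilities after 30 days
```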
Figure 2. A conceptual framework of the extended susceptible-antibody-infected-removed (eSAIR) model. The latent probabilities of susceptible (S), self-immunized (A), infected (I), and removed (R) at a certain time point $t$ are indicated by $\boldsymbol{\theta}_t=(\theta_t^S,\theta_t^A,\theta_t^I,\theta_t^R)^\top$. The time series $Y_t^I$ and $Y_t^R$ are, respectively, the daily observed numbers of infections and removed cases (a total of recovered cases and deaths). The dynamics of the four compartments evolve over time according to a Markov system of ordinary differential equations.
The proposed eSAIR model is a kind of state-space model that allows sampling uncertainty in the collection of observed counts of infection, death, and recovery; see Figure 2. One defining feature of
this statistical model is that proportions or rates, not counts, are considered in which the size of the underlying population is adjusted. Let $\theta_t^j,j\in\{S,A,I,R\}$ be a probability (or a
population-level proportion/fraction) of being susceptible (S), self-immunized (A), infected (I), or removed (R) at a given day $t$ such that $\theta_t^S + \theta_t^A + \theta_t^I + \theta_t^R =1$.
Let $\boldsymbol{\tau}$ be a generic term denoting the set of all model parameters to be estimated. This dynamic system of $\boldsymbol{\theta}_t = (\theta_t^S, \theta_t^A, \theta_t^I, \theta_t^R)^\
top$ is latent and follows a Markov process,
$(2) \ \ \ \ \ \qquad \qquad \boldsymbol{\theta}_t|\boldsymbol{\theta}_{t-1},\boldsymbol{\tau} \sim \mathrm{Dirichlet}(\kappa f(\boldsymbol{\theta}_{t-1},\beta,\gamma)),$
where parameter $\kappa$ scales the variance of the Dirichlet distribution, while $f(\cdot)$ is a four-variate vector that determines the mean of the Dirichlet distribution. The function $f$ is the
engine of the infection dynamics driven by the extended eSAIR model (Equation 1). A fourth-order Runge–Kutta (RK4) approximation is obtained for $f(\boldsymbol{\theta}_{t-1},\beta,\gamma)$. In the
framework of the state-space models, the observed time series of daily proportions $Y_t^I$ (the proportion of infected cases) and $Y_t^R$ (the proportion of removed cases) are emitted from the latent
Markov dynamic system $\boldsymbol{\theta}_t$ according to the beta distributions:
$(3) \ \ \ \ \ \qquad\qquad \begin{aligned} Y_t^I|\boldsymbol{\theta}_t,\boldsymbol{\tau} &\sim \mathrm{Beta}(\lambda^I\theta_t^I,\lambda^I(1-\theta_t^I)), \\ Y_t^R|\boldsymbol{\theta}_t,\boldsymbol{\tau} &\sim \mathrm{Beta}(\lambda^R\theta_t^R,\lambda^R(1-\theta_t^R)), \end{aligned}$
where $\lambda^I$ and $\lambda^R$ are the respective variances of the observed proportions. It is easy to see that $E(Y_t^I | \boldsymbol{\theta}_t) = \theta_t^I$ and $E(Y_t^R |\boldsymbol{\theta}_t)
= \theta_t^R$. Using the standard MCMC method, we can estimate both the model parameters $\boldsymbol{\tau}$ and latent probabilities $\boldsymbol{\theta}_t$ over the observational period. Next,
these estimates at the last date of the observed data will be used as initial values in the CA-eSAIR model for the county-level risk prediction. Our experience from empirical studies suggests that
the above state-space eSAIR model should be used with data from a relatively large population, preferably at a state level, in order to yield reliable numerical results.
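To make the data-generating mechanism of Equations (2) and (3) concrete, here is a minimal simulation sketch (not the authors' implementation). It draws the latent states from the Dirichlet transition and the observed proportions from the Beta emissions; for simplicity the mean function $f$ is approximated by a single Euler step of the eSAIR system rather than the RK4 solution, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_state_space(theta0, days, kappa, lam_I, lam_R, alpha, beta, pi, gamma):
    """Simulate latent Dirichlet states, Eq. (2), and Beta observations, Eq. (3)."""
    theta = np.array(theta0, dtype=float)        # (S, A, I, R)
    obs = []
    for _ in range(days):
        S, A, I, R = theta
        mean = np.array([
            S - alpha * S - beta * pi * S * I,   # susceptible
            A + alpha * S,                        # antibody
            I + beta * pi * S * I - gamma * I,    # infected
            R + gamma * I,                        # removed
        ])
        theta = rng.dirichlet(kappa * mean)       # latent state update, Eq. (2)
        y_I = rng.beta(lam_I * theta[2], lam_I * (1 - theta[2]))  # Eq. (3)
        y_R = rng.beta(lam_R * theta[3], lam_R * (1 - theta[3]))
        obs.append((y_I, y_R))
    return obs

print(simulate_state_space([0.99, 0.0, 0.01, 0.0], 5,
                           kappa=1e4, lam_I=1e4, lam_R=1e4,
                           alpha=0.002, beta=0.3, pi=0.7, gamma=0.1)[:2])
```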
2.2. CA-eSAIR Model
The eSAIR model discussed above is suitable for predicting the time-varying prevalence of infection at the population level, which is particularly useful for revisiting the beginning of the COVID-19 outbreak.
Along the rapid evolution of this pandemic in the United States with substantial county-level data being available on a daily basis, it becomes possible to predict infection patterns and infection
rates at the county-level. Such risk information is of great interest to local governments and residents to understand the current and future conditions of the pandemic. For example, local
governments need it to make decisions on whether to reopen local businesses or how to allocate resources of both viral diagnostic and antibody tests in communities. To address such needs, we further
extend the population-level temporal eSAIR model into a spatiotemporal epidemiological forecast model that enables the prediction of county-level COVID-19 infection. Using this new model, we can
project high-resolution risk scores at a future time point for all 3109 continental counties through certain characterizations of intercounty connectivity. In the literature, Von Neumann’s CA model
(Von Neumann & Burks, 1966) has been extended to study spatial disease spread patterns. In this article, we borrow the strength of CA’s spatial modeling to be combined with the temporal eSAIR model,
resulting in a useful spatiotemporal model to predict the community-level infection risk across the United States.
Technically, we extend the classical CA from spatial lattices to areal locations of counties. Let $\mathcal{C}$ be the collection of 3109 counties. For a county $c\in\mathcal{C}$, $N_c$ denotes the
county population size, and $\mathcal{C}_{-c}$ denotes the set of all the other counties except county $c$. For county $c$ at time $t$, we denote the county-specific prevalence vector by $\boldsymbol
{\theta}_c(t)=(\theta_c^S(t), \theta_c^A(t), \theta_c^I(t), \theta_c^R(t))^\top$. For the purpose of daily risk prediction, we express the CA-eSAIR model at discrete times in the following form:
$\qquad \qquad \begin{aligned} \theta_c^A(t) & =\theta_c^A(t-1)+\alpha_c(t)\theta_c^S(t-1),\\ \theta_c^S(t) & =(1-\alpha_c(t))\theta_c^S(t-1)-\beta\pi_c(t)\theta_c^S(t-1)\theta_c^I(t-1)-\beta\pi_{c}(t)\theta_c^S(t-1)\sum_{c'\in\mathcal{C}_{-c}}\omega_{cc'}(t)\{N_{c'}\theta_{c'}^I(t-1)/N_c\}, \\ \theta_c^I(t) & =(1-\gamma)\theta_c^I(t-1)+\beta\pi_c(t)\theta_c^S(t-1)\theta_c^I(t-1)+\beta\pi_{c}(t)\theta_c^S(t-1)\sum_{c'\in\mathcal{C}_{-c}}\omega_{cc'}(t)\{N_{c'}\theta_{c'}^I(t-1)/N_c\}, \\ \theta_c^R(t) & = \theta_c^R(t-1)+\gamma\theta_c^I(t-1), \end{aligned}$
where $\beta$ and $\gamma$ are the effective population rates of contagion and removal, respectively, which are estimated from the eSAIR model with state-level data. Allowing the spatial intercounty
disease transmission is unique in the CA, which may primarily be characterized by two factors. One is the strength of the intercounty connectivity, $\omega_{cc'}(t) \in [0,1]$, that quantifies both
the volume and frequency of the intercounty movements between counties $c$ and $c'$. The other is the ratio of the number of projected infected cases $N_{c'}\theta_{c'}^I(t-1)$ in county $c'$ over
the population size $N_c$ in county $c$, representing a likelihood of an at-risk person in county $c$ contracting the virus from an infectious person in county $c'$.
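A vectorized sketch of one discrete-time CA-eSAIR update over all counties is given below (an illustration under assumed inputs, not the authors' implementation; the county parameters, populations, and connectivity matrix are made up).

```python
import numpy as np

def ca_esair_step(theta, N, omega, alpha, pi, beta, gamma):
    """One CA-eSAIR update over all counties.

    theta : (C, 4) array of county-level (S, A, I, R) probabilities.
    N     : (C,) county population sizes.
    omega : (C, C) intercounty connectivity matrix with zero diagonal.
    alpha, pi : (C,) county-level self-immunization and transmission modifiers.
    """
    S, A, I, R = theta[:, 0], theta[:, 1], theta[:, 2], theta[:, 3]
    # Imported infectious pressure from all other counties: sum_c' w_cc' N_c' I_c' / N_c.
    imported = omega @ (N * I) / N
    new_inf = beta * pi * S * (I + imported)
    S_new = (1 - alpha) * S - new_inf
    A_new = A + alpha * S
    I_new = (1 - gamma) * I + new_inf
    R_new = R + gamma * I
    out = np.stack([S_new, A_new, I_new, R_new], axis=1)
    return np.clip(out, 0.0, 1.0)   # keep projected probabilities in [0, 1]

# Tiny illustrative example with 3 counties and made-up parameters.
theta = np.array([[0.98, 0.0, 0.02, 0.0],
                  [0.995, 0.0, 0.005, 0.0],
                  [1.0, 0.0, 0.0, 0.0]])
N = np.array([1e5, 5e4, 2e4])
omega = np.array([[0, 0.01, 0.005],
                  [0.01, 0, 0.002],
                  [0.005, 0.002, 0]])
print(ca_esair_step(theta, N, omega,
                    alpha=np.full(3, 0.001), pi=np.full(3, 0.7),
                    beta=0.05, gamma=0.1))
```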
In effect, specifying an objective intercounty connectivity coefficient $\omega_{cc'}(t)$ is challenging, as it involves many variables. As far as the disease contagion concerns, in this article we
specify this coefficient as $\omega_{cc'}(t)=\mu_{cc'}\text{exp}\{-\eta r(c,c')\}$. The first parameter $\mu_{cc'}$ is the intercounty mobility factor characterizing the decrease of human encounters
in terms of their potential movements between counties (Unacast, 2020). The second factor $r(c,c')$ is a certain travel distance between two counties $c$ and $c'$ in terms of both geodesic distance
(Karney, 2013) and “air distance” based on the accessibility to nearby airports. It is specified that
$\begin{aligned} r(c,c') & =d(c,c')\,I[d(c,c')\leq 500 \text{ km}]\\ &\quad +b(a,a')\{d(c,a)+d(c',a')\}\,I[d(c,c')>500 \text{ km}], \end{aligned}$
closest airport $a$, on the surface of an ellipsoidal model of the earth. Meanwhile, $b(a,a')$ is a factor that characterizes the transportation capacity of airlines between airports $a$ and $a'$
near counties $c$ and $c'$, respectively. The travel distance $r(c,c')$ can be interpreted in the following way. If the geo-distance between county $c$ and $c'$ is less than 500 km, we assume that
individuals from either county will travel by car, resulting in the travel distance equaling the geodesic distance; otherwise, we assume that residents will choose to fly to the other county via its
nearest airport. In the latter case, we ignore the distance between airports since individuals are not exposed to the outside community environment during the flight. In addition, the third factor $\
eta$ is a tuning parameter that enables us to adjust the scale of the travel distance. To regularize the contribution of $r(c,c')$, we propose to tune the $\eta$ parameter by minimizing the sum of
(county-level) weighted absolute prediction error (SWAPE) for the one-step-ahead risk prediction of the infection rate. This tuning procedure will be demonstrated with details in our empirical study.
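The pieces of $\omega_{cc'}(t)$ described above can be assembled as in the following sketch. It is illustrative only: geodesic distance is approximated by the haversine formula rather than the ellipsoidal formula of Karney (2013), and the mobility factor, airport distances, capacity factor $b(a,a')$, and tuning parameter $\eta$ are placeholder values.

```python
from math import radians, sin, cos, asin, sqrt, exp

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (a spherical approximation of geodesic distance)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def effective_distance(d_cc, d_c_airport, d_cp_airport, b_aa):
    """r(c, c'): geodesic distance if within 500 km; otherwise the distances to the
    nearest airports scaled by the airline-capacity factor b(a, a')."""
    if d_cc <= 500:
        return d_cc
    return b_aa * (d_c_airport + d_cp_airport)

def connectivity(mobility, r, eta):
    """omega_cc'(t) = mu_cc' * exp(-eta * r(c, c'))."""
    return mobility * exp(-eta * r)

# Illustrative values only (Washtenaw County, MI, to New York County, NY).
d = haversine_km(42.28, -83.74, 40.71, -74.01)
r = effective_distance(d, 30.0, 20.0, b_aa=0.5)
print(round(d), round(r), connectivity(mobility=0.8, r=r, eta=0.05))
```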
It is worth commenting that, given the fact that COVID-19 testing policies and strategies are state-specific, we assume that the testing rate within a given state is the same. Under this assumption,
we fit the eSAIR model state by state via the MCMC method to estimate state-specific model parameters, including posterior means, medians, and 95% credible intervals. Although there exists some
heterogeneity of the test rate across counties within a state, such intercounty differences are deemed much smaller than the inter-state differences. Utilizing the MCMC draws (200,000 draws by
default), we can assess prediction uncertainty by summarizing over 200,000 projected risk scores from the CA-eSAIR model. Running this procedure on 3109 counties is indeed computationally expensive.
A simpler calculation is to propagate estimation uncertainty into risk prediction in the way that the limits of the 95% credible intervals are carried over to determine uncertainties for the
projected risk. In the empirical study we show this simple solution that manifests the uncertainty in the risk projection. In addition, in our software, we set limits $0$ and $1$ to confine the
projected values of $\theta_c^S(t), \theta_c^A(t),\theta_c^I(t), \theta_c^R(t)$ within $[0,1]$.
2.3. County-Level Risk Prediction
The CA-eSAIR model above enables us to predict the county-level prevalence of COVID-19 infection $\theta_c^I(t)$, which is a key term describing the spread of the pandemic. The projection for the
other three population probabilities of susceptible, self-immunized, and removed can be similarly obtained. At current time (say, today) $t_0$, using the estimated parameters $\boldsymbol{\tau}$ and
$\boldsymbol{\theta}_c(t_0)=(\theta_c^S(t_0),\theta_c^A(t_0),\theta_c^I(t_0),\theta_c^R(t_0))^\top, c\in\mathcal{C}$ from the state-space eSAIR model as the initial values, we are able to make
several kinds of predictions of the infection prevalence described as follows.
One-Day-Ahead Risk Prediction. Applying the third equation of the CA-eSAIR model in Section 2.2, we obtain a one-day-ahead county-level risk prediction $\theta_c^I(t_0+1)$ for each of 3109
continental U.S. counties. The number of infected cases $N_{c'}\theta_{c'}^I(t_0)$ in county $c'$ at time $t_0$ is specified as the observed number of infected cases. This one-day-ahead predicted
prevalence is the risk score for individual counties. Such nationwide risk scores can be updated whenever the eSAIR model is updated with new data, producing an updated nationwide infection map.
$t$-Day-Ahead Risk Prediction. When wishing to predict county-level risk scores over a period of $t$ future days (e.g., 7-day-ahead), the number of infected cases for each county is calculated from
the entire CA-eSAIR model. The risk scores over a period of $t$ days from $t_0$ are given by:
$(4) \ \ \ \ \ \quad \begin{aligned} RS_c(t|t_0) & = \theta_c^I(t_0+1)+\{1-\theta_c^I(t_0+1)\}\theta_c^I(t_0+2)\\ &\quad +\{1-\theta_c^I(t_0+1)\}\{1-\theta_c^I(t_0+2)\}\theta_c^I(t_0+3)\\ &\quad +\dots+\theta_c^I(t_0+t)\prod_{i=1}^{t-1} \{1-\theta_c^I(t_0+i)\}. \end{aligned}$
The risk of infection above is a cumulative chance during the prediction period through a sequence of conditional events. For example, with $t=2$,
$\begin{aligned} P(\text{infected at or before day } 2) &= P(\text{infected at day } 1)\\ &\quad +P(\text{infected at day } 2\mid\text{not infected at day } 1)\, P(\text{not infected at day } 1). \end{aligned}$
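The recursion behind Equation (4) is straightforward to compute once the daily projected prevalences are available; a minimal Python sketch (an illustration, not the authors' implementation) follows. The travel-route risk in Equation (5) below uses the same recursion with the prevalence of the county visited on each day.

```python
def cumulative_risk(daily_prevalence):
    """Risk of being infected at least once over the prediction window,
    given projected daily prevalences theta_c^I(t_0+1), ..., theta_c^I(t_0+t)."""
    not_infected = 1.0
    risk = 0.0
    for p in daily_prevalence:
        risk += not_infected * p       # infected on this day, having escaped so far
        not_infected *= (1.0 - p)
    return risk

# Equivalently, risk = 1 - prod(1 - p); e.g., a week at 0.3% daily prevalence:
print(cumulative_risk([0.003] * 7))    # about 0.0208
```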
Risk Prediction of Travel. The CA-eSAIR model can provide predicted risk for a personal trip in the future. Let $C$ be a set of counties that a traveler plans to stop by over the next $t$ days. For
simplicity, suppose the traveler stops at one county per day, denoted as $C=\{c_1,\dots,c_t\}$ with $c_j$ being the county visited on day $t_0+j,j=1,\dots,t$. Then, by a similar cumulative chance,
the risk score associated with the trip is given by:
$(5) \ \ \ \ \ \quad \begin{aligned} RS(C,t|t_0) & = \theta_{c_1}^I(t_0+1)+\{1-\theta_{c_1}^I(t_0+1)\}\theta_{c_2}^I(t_0+2)\\ &\quad +\{1-\theta_{c_1}^I(t_0+1)\}\{1-\theta_{c_2}^I(t_0+2)\}\theta_{c_3}^I(t_0+3)\\ &\quad +\dots+\theta_{c_t}^I(t_0+t)\prod_{j=1}^{t-1}\{1-\theta_{c_j}^I(t_0+j)\}. \end{aligned}$
3. Empirical Study of Community COVID-19 Spread in the United States
3.1. Nationwide Risk Map
Since the first coronavirus case reported in Washington State in February 2020, the number of confirmed COVID-19 infections has escalated quickly from coast to coast in the United States. New York is
the leading state with the largest number of reported COVID-19 infections. As shown in Figure 3A, up to May 2, 2020, the numbers of infected cases for New York, New Jersey, Massachusetts, Illinois,
California, Pennsylvania, and Michigan follow similar patterns of the pandemic in different geographic areas. Figure 3B shows the reported cumulative number of deaths caused by COVID-19. New York is
the most severely hit state with the largest reported number of deaths (>26,000), while Michigan has the highest case fatality rate ($\sim$9.5%).
Figure 3. Increasing trends of COVID-19 in heavily hit states of the United States up to May 2, 2020. (A) The cumulative number of reported infections since the number of infections reached 100 per
state. (B) The cumulative number of deaths since the number of deaths reached 10 per state.
The daily time series of county-level confirmed infections, deaths and recovered cases are obtained from two data sources: Harvard Dataverse (2020) and 1point3acres (2020). The latter two series are
combined to form daily time series of removed cases. We fit the eSAIR model separately for all continental states and Washington, DC. The results of seven severely affected states are shown in
Table 1, including both the parameter estimates and their credible intervals for $\beta$, $\gamma$, and $R_0$. The Gelman-Rubin (G-R) statistic proposed by Gelman & Rubin (1992), with the $95\%$
confidence upper bound, is used to monitor MCMC convergence, in addition to other convergence diagnostics (e.g., trace plots and mixing from multiple chains with different initial values). In
Table 1, all the G-R statistics for MCMC draws of $R_0$ in the seven states are close to 1, providing clear evidence of MCMC convergence based on four MCMC chains. Of 48 continental
U.S. states, 39 passed the MCMC diagnostic check. The transmission rate, removal rate, and basic reproduction number vary across these states, with Illinois having the highest rate of contagion, $\hat{\beta}_{\text{IL}}=0.067$, 95% CI $(0.054, 0.081)$, and $\hat{R}_{0,\text{IL}}=7.42$, 95% CI $(5.46, 9.91)$. In state-level data analyses based on the temporal eSAIR model, we set the
state-specific transmission rate modifier and self-immunization rate. Based on results of the statewide antibody test survey released by New York governor Andrew Cuomo, the cumulative proportion of
the population in New York that had antibodies by April 29 is 0.2. Due to a lack of similar surveys in the other states, the cumulative proportion of people with antibodies in the other states by
April 29 is calibrated proportionally with that of New York with respect to the state-specific basic reproduction number. That is, $P_s=\frac{R_{0,s}}{R_{0,\text{NY}}} P_{\text{NY}}$, under the
assumption that a higher $R_0$ implies a larger number of infections, and thus more people having antibodies in a state. Also $\alpha_s$ for the following days is estimated by a state-specific
geometric distribution. The resulting cumulative proportion of people with antibodies for each state on April 29 is listed in Table 2. The value of $\pi_s(t)$ is specified by the effectiveness score
of state-specific social distancing using cell phone data in the United States from the Transportation Institute at the University of Maryland (2020). Specifically, $\pi_s(t)=1-\text{social distancing index}/100$ if the stay-at-home policy does not expire at the prediction day, or as $1.00$ if the policy ends at the prediction day. The state-level social distancing scores $\pi_s(t)$
over the period of 7 days for prediction, May 2-9, are listed in Table 2.
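To make these two state-level inputs concrete, a small sketch follows (function names and the illustrative numbers are ours; the published Table 2 values were produced with the authors' own inputs and may include further adjustments, so do not expect to reproduce them exactly from Table 1):

def antibody_proportion(R0_s, R0_NY, P_NY=0.20):
    """Calibration rule P_s = (R_{0,s} / R_{0,NY}) * P_NY described in the text."""
    return R0_s / R0_NY * P_NY

def pi_s(social_distancing_index, stay_at_home_in_force):
    """Transmission-rate modifier: 1 - index/100 while a stay-at-home order is in force, else 1.00."""
    return 1.0 - social_distancing_index / 100.0 if stay_at_home_in_force else 1.0

print(round(pi_s(58, True), 2))    # 0.42: an index of 58 back-solves to the New York entry in Table 2
print(pi_s(58, False))             # 1.0 once the policy has expired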
Table 1. The estimated transmission rate $\beta$, the removal rate $\gamma$, and the basic reproduction number $R_0$ for seven severely hit states in the United States using the eSAIR model.
State $\beta$ $\gamma$ $R_0$ G-R for $R_0$
California 0.065 (0.043, 0.086) 0.015 (0.010, 0.022) 4.25 (2.91, 5.99) 1.001 (1.003)
Illinois 0.067 (0.054, 0.081) 0.009 (0.007, 0.012) 7.42 (5.46, 9.91) 1.001 (1.003)
Massachusetts 0.065 (0.054, 0.077) 0.011 (0.008, 0.014) 6.21 (4.76, 8.04) 1.002 (1.005)
Michigan 0.058 (0.037, 0.081) 0.020 (0.013, 0.029) 2.88 (1.96, 4.09) 1.000 (1.000)
New Jersey 0.054 (0.040, 0.067) 0.008 (0.006, 0.011) 6.66 (4.75, 8.99) 1.000 (1.000)
New York 0.061 (0.048, 0.075) 0.016 (0.011, 0.021) 3.89 (2.92, 5.15) 1.000 (1.001)
Pennsylvania 0.057 (0.038, 0.075) 0.010 (0.007, 0.014) 5.61 (3.79, 7.86) 1.000 (1.001)
Note. Posterior mean and 95% credible intervals obtained from 200,000 MCMC draws are shown for the three estimated parameters. The Gelman-Rubin (G-R) statistic for the estimation of $R_0$ is also
shown, with the 95% confidence upper bound in parentheses.
Table 2. The cumulative proportion of people with antibodies $P_s$ and the transmission rate modifier $\pi(t)$ for each state.
State $P_s$ $\pi(5.2)$ $\pi(5.3)$ $\pi(5.4)$ $\pi(5.5)$ $\pi(5.6)$ $\pi(5.7)$ $\pi(5.8)$ $\pi(5.9)$
AL 0.153 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
AK 0.117 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
AZ 0.131 0.67 0.67 0.67 0.67 0.67 0.67 0.67 0.67
AR 0.121 0.83 0.83 0.83 0.83 0.83 0.83 0.83 0.83
CA 0.146 0.57 0.57 0.57 0.57 0.57 0.57 0.57 0.57
CO 0.146 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
CT 0.186 0.53 0.53 0.53 0.53 0.53 0.53 0.53 0.53
DE 0.172 0.61 0.61 0.61 0.61 0.61 0.61 0.61 0.61
DC 0.173 0.36 0.36 0.36 0.36 0.36 0.36 0.36 0.36
FL 0.153 0.62 0.62 1.00 1.00 1.00 1.00 1.00 1.00
GA 0.156 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
HI 0.116 0.44 0.44 0.44 0.44 0.44 0.44 0.44 0.44
ID 0.153 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
IL 0.183 0.65 0.65 0.65 0.65 0.65 0.65 0.65 0.65
IN 0.151 0.76 0.76 1.00 1.00 1.00 1.00 1.00 1.00
IA 0.127 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
KS 0.125 0.75 0.75 1.00 1.00 1.00 1.00 1.00 1.00
KY 0.117 0.76 0.76 0.76 0.76 0.76 0.76 0.76 0.76
LA 0.153 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.72
ME 0.115 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
MD 0.171 0.53 0.53 0.53 0.53 0.53 0.53 0.53 0.53
MA 0.233 0.46 0.46 0.46 0.46 0.46 0.46 0.46 0.46
MI 0.145 0.63 0.63 0.63 0.63 0.63 0.63 0.63 0.63
MN 0.111 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69
MS 0.153 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
MO 0.140 0.77 0.77 1.00 1.00 1.00 1.00 1.00 1.00
MT 0.117 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
NE 0.153 0.77 0.77 1.00 1.00 1.00 1.00 1.00 1.00
NV 0.140 0.61 0.61 0.61 0.61 0.61 0.61 0.61 0.61
NH 0.136 0.61 0.61 0.61 0.61 0.61 0.61 0.61 0.61
NJ 0.214 0.43 0.43 0.43 0.43 0.43 0.43 0.43 0.43
NM 0.131 0.72 0.72 0.72 0.72 0.72 0.72 0.72 0.72
NY 0.200 0.42 0.42 0.42 0.42 0.42 0.42 0.42 0.42
NC 0.127 0.73 0.73 0.73 0.73 0.73 0.73 1.00 1.00
ND 0.120 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
OH 0.126 0.69 0.69 0.69 0.69 0.69 0.69 0.69 0.69
OK 0.122 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
OR 0.153 0.66 0.66 0.66 0.66 0.66 0.66 0.66 0.66
PA 0.186 0.60 0.60 0.60 0.60 0.60 0.60 1.00 1.00
RI 0.153 0.54 0.54 0.54 0.54 0.54 0.54 1.00 1.00
SC 0.153 0.77 0.77 0.77 1.00 1.00 1.00 1.00 1.00
SD 0.168 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
TN 0.141 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
TX 0.133 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
UT 0.141 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
VT 0.153 0.64 0.64 0.64 0.64 0.64 0.64 0.64 0.64
VA 0.140 0.61 0.61 0.61 0.61 0.61 0.61 0.61 0.61
WA 0.162 0.62 0.62 0.62 0.62 0.62 0.62 0.62 0.62
WV 0.120 0.71 0.71 1.00 1.00 1.00 1.00 1.00 1.00
WI 0.123 0.74 0.74 0.74 0.74 0.74 0.74 0.74 0.74
WY 0.122 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
Note. The value of $P_s$ is specified as the cumulative proportion of people with antibodies, proportional to the reported cumulative proportion of people with antibodies in New York on April 29 and
to the state-specific estimated basic reproduction number. The value of $\pi(t)$ is set as $1-\text{social distancing index}/100$ if the stay-at-home policy does not expire at the prediction day, or as 1.00 if the
policy ends at the prediction day (University of Maryland, 2020). Here $\pi(5.2)$ is the social distancing score on May 2, and so on.
From the respective state-level eSAIR models, we can compute the posterior mean probabilities of susceptible, self-immunized, infected and removed, $\hat{\boldsymbol{\theta}}_t=(\hat\theta_t^S,\hat\theta_t^A,\hat\theta_t^I,\hat\theta_t^R)^\top$. Using these estimates at time $t_0$ (say, today), we then apply the spatiotemporal CA-eSAIR model to carry out the county-level risk prediction. To
account for potential differences in testing strategies and numbers over counties within a state, we tailor the initial $\theta_t^I$ according to the county population size. We propose a shrinkage
type estimation, by which the county-specific empirical infection rate $\tilde{\theta}_{t_0}^I$ is shrunk toward the estimated state-level average $\hat\theta_{t_0}^I$ by the following rules: (i) If
$\tilde{\theta}_{t_0}^I \geq \hat\theta_{t_0}^I$, then let the initial infection rate $\bar\theta_{t_0}^I = \tilde{\theta}_{t_0}^I$; otherwise, let $\bar\theta_{t_0}^I = \varepsilon \tilde{\theta}_{t_0}^I + (1-\varepsilon) \hat{\theta}_{t_0}^I$, where $\varepsilon = 0, 0.5, 0.75, 1$ if the county population size is in $[0, 25000], (25000,50000], (50000, 100000], (100000,\infty)$, respectively.
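A direct transcription of this shrinkage rule (variable names are ours):

def initial_infection_rate(theta_county, theta_state, population):
    """Shrink a county's empirical infection rate toward the state-level estimate.

    Counties reporting a rate at or above the state estimate keep their own value;
    otherwise the weight epsilon on the county's own rate grows with population size,
    so small counties are pulled strongly toward the state average.
    """
    if theta_county >= theta_state:
        return theta_county
    if population <= 25_000:
        eps = 0.0
    elif population <= 50_000:
        eps = 0.5
    elif population <= 100_000:
        eps = 0.75
    else:
        eps = 1.0
    return eps * theta_county + (1 - eps) * theta_state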
In the state-level temporal eSAIR analyses, we conduct careful inspections on the convergence of the MCMC runs for each state, where we take a thinning scheme by recording one every 10 random draws
to reduce autocorrelation and run four separate chains to determine burn-in and to calculate the Gelman-Rubin statistic. As a result, 39 states passed the MCMC convergence diagnosis. For the other
states where the MCMC failed to converge due largely to inadequate data (e.g., very low numbers of deaths), the national average estimates of the model parameters ($\hat{\beta}=0.077$, $\hat{\gamma}=0.023$, and $\hat{\boldsymbol{\theta}}_{t_0}$) are used to determine the initial estimates of the infection rate.
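For completeness, the Gelman-Rubin statistic referred to above is the standard potential scale reduction factor of Gelman & Rubin (1992); a generic sketch (not the authors' implementation, which also applies thinning and burn-in) is:

import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for several equal-length MCMC chains.

    chains: array-like of shape (m chains, n draws) for one scalar parameter.
    Values close to 1 indicate convergence.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()        # mean within-chain variance
    B = n * chain_means.var(ddof=1)              # between-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled estimate of the posterior variance
    return np.sqrt(var_hat / W)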
To determine the daily rate of self-immunization $\alpha(t)$ in a state, we begin with an assumption that the ratio of the two probabilities or proportions of being asymptomatic and symptomatic is
constant. That is, $\alpha/p=c$, where $\alpha$ denotes the probability of a susceptible person being asymptomatic, recovered, and self-immunized with no hospitalization, and $p$ denotes the
probability of a susceptible individual contracting the coronavirus and becoming symptomatic. To estimate $\alpha$ and $p$, we consider an approach based on geometric distributions given as follows.
For example, in New York State, the first confirmed COVID-19 case was announced on March 1, and on April 29 the cumulative proportion of self-immunized cases from survey results was reported to be
20%. Adding a 7-day incubation of the first symptomatic case, it is easy to see that
\begin{aligned} \alpha=1-(1-0.2)^{(1/67)}=0.00332. \end{aligned}
Likewise, given that the cumulative infection proportion of confirmed COVID-19 cases on May 6 was 0.017, accounting for a 7-day delay in the reporting of confirmed symptomatic cases from April 29, we have
\begin{aligned} p=1-(1-0.017)^{(1/67)}=0.000256. \end{aligned}
Then, $c=\alpha/p=12.97$, meaning that for one confirmed symptomatic case, there are about 13 asymptomatic cases that are self-immunized. Using this formula $\alpha = c p$ with the estimated constant
ratio $c$, we can determine not only a future value for $\alpha$ with a projected $p$ in New York, which is estimated by $\theta_t^I$ from the eSAIR model, but also the daily self-immunization rates
$\alpha(t)$ in other states with a suitable state-level proportion of symptomatic cases, $p$.
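The New York back-calculation of $\alpha$, $p$, and the ratio $c$ takes only a few lines (the 67-day exponent is the one used in the text; small differences from the reported $c=12.97$ come from rounding):

days = 67
alpha = 1 - (1 - 0.20) ** (1 / days)    # 20% with antibodies by April 29 (survey)
p = 1 - (1 - 0.017) ** (1 / days)       # 1.7% cumulative confirmed infections (lagged)
c = alpha / p
print(alpha, p, c)                       # approximately 0.00332, 0.000256, and 13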
Figure 4 shows a nationwide 7-day-ahead risk prediction made on May 2. The values of antibody rate $\alpha_c(t)$ and social distancing score $\pi_c(t)$ are set equal to state-specific values because
no detailed county-level data were available. As given in Section 2.2, $\mu_{cc'}$ is the intercounty mobility factor characterizing the decrease of human encounters in terms of their potential
movements between counties (Unacast, 2020); $r(c,c')$ is the travel distance containing a factor $b(a,a')$ that characterizes the transportation capacity of airlines between airports $a$ and $a'$.
Table 3 gives values of $\mu_{cc'}$ obtained from the social distancing scoreboard (Unacast, 2020). Table 4 presents values of $b(a,a')$ according to year 2016 annual enplanements of U.S. airports.
As shown in Figure 4, counties in the region of Massachusetts, Connecticut, New York, and New Jersey appear to have a very high risk of COVID-19 infection due to large numbers of confirmed infections
reported in these four states. Other counties with high risks are mainly located in the southeastern United States, including those in Florida, Georgia, Alabama, Mississippi, and South Carolina.
Illinois, Texas, and some parts in Arizona are also areas with high risks. Although a large number of COVID-19 infections have been reported in California, counties in the state do not appear to have
high risks of infection, due possibly to the large population and an early implementation of preventive measures.
Table 3. The determination of the intercounty mobility $\mu_{cc'}$ parameter in the connectivity coefficient $\omega_{cc'}(t)$.
A B C D F
A 0.10 0.25 0.35 0.45 0.55
B 0.25 0.35 0.45 0.55 0.65
C 0.35 0.45 0.55 0.65 0.75
D 0.45 0.55 0.65 0.75 0.85
F 0.55 0.65 0.75 0.85 1.00
Note. Grades for reduction of human encounters: A: >94%; B: 82-94%; C: 74-82%; D: 40-74%; F: <40%. For county $c$ belonging to category A and county $c'$ belonging to category D, the value of $\mu_{cc'}=0.45$ (Unacast, 2020).
Table 4. The determination of the $b(a,a')$ parameter in the connectivity coefficient $\omega_{cc'}(t)$.
L M S N
L 0.1 0.3 0.4 0.6
M 0.3 0.4 0.6 0.7
S 0.4 0.6 0.7 0.9
N 0.6 0.7 0.9 1.0
Note. L: Large hub that accounts for at least 1% of total U.S. passenger enplanements (generally 18,500,000 total passengers and above); M: Medium hub that accounts for between 0.25% and 1% of
total U.S. passenger enplanements (generally 3,500,000-18,500,000 total passengers); S: Small hub that accounts for between 0.05% and 0.25% of total U.S. passenger enplanements (generally
500,000-3,500,000 total passengers); N: Nonhub that accounts for less than 0.05% of total U.S. passenger enplanements, but more than 10,000 annual enplanements. For airport $a$ belonging to category
L and airport $a'$ belonging to category S, the value of $b(a,a')=0.4$ (Wikipedia, 2020).
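Tables 3 and 4 are pure lookup tables; a literal transcription (function and variable names are ours) reads:

# Table 3: intercounty mobility mu_{cc'} by Unacast social-distancing grade of each county.
GRADES = ["A", "B", "C", "D", "F"]
MU = [[0.10, 0.25, 0.35, 0.45, 0.55],
      [0.25, 0.35, 0.45, 0.55, 0.65],
      [0.35, 0.45, 0.55, 0.65, 0.75],
      [0.45, 0.55, 0.65, 0.75, 0.85],
      [0.55, 0.65, 0.75, 0.85, 1.00]]

def mu(grade_c, grade_c2):
    return MU[GRADES.index(grade_c)][GRADES.index(grade_c2)]

# Table 4: airline-capacity factor b(a, a') by hub size of the two airports.
HUBS = ["L", "M", "S", "N"]
B = [[0.1, 0.3, 0.4, 0.6],
     [0.3, 0.4, 0.6, 0.7],
     [0.4, 0.6, 0.7, 0.9],
     [0.6, 0.7, 0.9, 1.0]]

def b(hub_a, hub_a2):
    return B[HUBS.index(hub_a)][HUBS.index(hub_a2)]

print(mu("A", "D"), b("L", "S"))   # 0.45 0.4, the worked examples given in the table notes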
It is of interest to compare the risk projection accuracy between the state-level temporal eSAIR model and the county-level spatiotemporal CA-eSAIR model. The former ignores the spatial
heterogeneities over counties within a state, and uses the state average infection rate (or set $\varepsilon=0$) as a constant initial $\bar\theta_{t_0}^I$. The latter incorporates the intercounty
connectivity and allows county-level variations via the cellular automaton. With regard to the one-day risk prediction of May 3 over 39 states that passed the MCMC convergence diagnosis, we obtain
the SWAPE for the eSAIR model equal to $2.85\times 10^{-3}$ and that for the CA-eSAIR model equal to $3.54\times 10^{-4}$. The prediction with no use of local information thus has a roughly 8-fold larger
prediction error than the prediction that uses county-level data.
Figure 4. Nationwide map of 7-day-ahead projected risk of COVID-19 over 3109 counties in the continental United States from May 2, 2020. Risk is classified into five categories. The bins are defined
by the 20th, 40th, 60th, and 80th percentiles of nationwide county-specific risks. The five categories correspond to [0,46/10,000), [46/10,000,80/10,000), [80/10,000,141/10,000), [141/10,000,226/10,000), and [226/10,000,5,637/10,000]. The state of New York is labeled because the results of antibody surveys were released on April 29, 2020.
As another illustration, we show the 7-day-ahead prediction of the county-level risk in New York State (see Figure 5). We choose to create a risk map for the state because New York released results
of state-wide antibody testing surveys on April 29. According to New York governor Cuomo, about 20% of the tested individuals in the state already have the antibodies to COVID-19. Thus, in our
analysis, we set the self-immunization rate based on the value of $\alpha=0.20$ on April 29 and all the other $\alpha$ values are calculated based on this value. This specification may be revised
when new antibody survey results become available. In the prediction we use all the continental counties in the United States to calculate the county-level risk scores due to strong ties of the state
with other parts of the country; for example, Seattle and New York are close in terms of their air-distance.
Figure 5. A 7-day-ahead risk prediction of COVID-19 for each county in New York State from May 2, 2020. Risk is classified into five categories. The bins are defined by the 20th, 40th, 60th, and 80th
percentiles of nationwide county-specific risks. The five categories correspond to [0,46/10,000), [46/10,000,80/10,000), [80/10,000,141/10,000), [141/10,000,226/10,000), and [226/10,000,5,637/10,000].
The predicted risk of COVID-19 given in Figure 4 and Figure 5 are calculated using the estimated mean of $(\theta^S,\theta^A,\theta^I,\theta^R)$ from the eSAIR model. To illustrate the size of
prediction uncertainty, we also show the prediction range defined as the difference between the upper and lower bounds of the predicted risk scores over the 3109 U.S. continental counties (see
Figure 6), as well as the counties in New York state (see Figure 7).
The tuning of the $\eta$ parameter in the connectivity coefficient $\omega_{cc'}(t)$ is based on the SWAPE, where the weight is the ratio of county population size over the total population size of
the 39 states that passed the MCMC convergence diagnosis. The SWAPE optimal value $\eta=35$ gives the smallest $\text{SWAPE}=3.54\times 10^{-4}$. Figure 8A displays the county-level weighted absolute
prediction error (WAPE) of the 7-day-ahead projections of infection rates over 3109 counties, along with a scatterplot of county-level weighted prediction errors (WPEs) and $\log_{10}(\text{county population size})$ in Figure 8B. In this figure, the absolute values of county-level weighted prediction errors are used to calculate the SWAPE, with a positive sign corresponding to an
overprediction and a negative sign to an underprediction. This figure (8B) shows that the majority of the counties have very small county-level weighted prediction errors, and that counties with
large population sizes tend to have larger county-level weighted prediction errors. Such elevated county-level prediction errors occurring in large counties are attributed to more bias in the
collected surveillance data. To zoom in on the scatterplot, Figure 8B excludes three counties with the largest prediction errors, including Los Angeles county in California (county-level WPE$=-1.030\times 10^{-5}$), Cook county in Illinois (county-level WPE$=-1.982\times 10^{-5}$), and New York county in New York (county-level WPE$=-4.072\times 10^{-5}$). Figure 9 shows the densities of the
county-level WAPEs for the 1- to 7-day-ahead infection rate predictions, with the SWAPE from May 3 to May 9 being
$3.54\times10^{-4},4.01\times10^{-4},4.60\times10^{-4},5.16 \times10^{-4},5.83\times10^{-4}, \\ 6.55\times10^{-4},7.19\times10^{-4},$
respectively. These SWAPEs are all on the order of $10^{-4}$, namely, a difference of about one cumulative infection per 10,000 people in a county. For an average county of 100,000 people (in
fact 97,118 in our demographic data), with the data up to May 2, the predicted total number of infections on May 3 is about 33 cases more or less than the actual observed number of confirmed
infections, and in most cases our predicted numbers are larger than those reported. This is probably due to the underreporting issue with the surveillance data. The prediction error increases
due to the elevated uncertainty over time. This prediction error should be interpreted with caution, given various potential biases in the collection of confirmed COVID-19 cases in practice
(Angelopoulos et al., 2020). Because the test data may be biased, a small error does not necessarily mean a more accurate prediction.
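A sketch of the error metric, as we read the description above (the formal definitions of WPE, WAPE, and SWAPE appear earlier in the paper; this is only an illustration, not the authors' code):

import numpy as np

def swape(pred_rate, obs_rate, population):
    """Sum of population-weighted absolute prediction errors over counties.

    Each county's error in the predicted infection rate is weighted by its share of the
    total population of the counties considered; WPE_c = w_c * (pred_c - obs_c),
    WAPE_c = |WPE_c|, and SWAPE = sum_c WAPE_c.
    """
    pred_rate, obs_rate, population = map(np.asarray, (pred_rate, obs_rate, population))
    w = population / population.sum()
    wpe = w * (pred_rate - obs_rate)   # positive sign = overprediction, negative = underprediction
    return np.abs(wpe).sum()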
Figure 6. Nationwide map of the range of 7-day-ahead projected risk of COVID-19 for 3109 counties in the continental United States from May 2, 2020. Range is defined as the difference of the
estimated upper and lower confidence bounds of risk. Range is classified into four categories. The bins are defined by the 25th, 50th, and 75th percentiles of nationwide county-specific risk ranges.
Larger range value denotes higher uncertainty in risk prediction. The state of New York is labeled since we calculate the self-immunization rate α based on the survey sampling results of New York on
April 29.
Figure 7. A map of the range of 7-day-ahead projected risk of COVID-19 for each county in New York from May 2, 2020. Range is defined as the difference of the estimated upper and lower confidence
bounds of risk. Range is classified into four categories. The bins are defined by the 25th, 50th, and 75th percentiles of nationwide county specific risk ranges. Larger range value denotes higher
uncertainty in risk prediction.
3.2. Projected Risk for a Travel Route
When the nationwide county-level projected risk scores are available over a period of time, such information can be used to assess a potential risk for a planned trip. Let us consider a hypothetical
travel path by car from Ann Arbor (Washtenaw County, Michigan) to Detroit (Wayne County, Michigan) and then to Chicago (Cook County, Illinois); see Figure 10. We assume that the traveler would begin
the trip on May 2, 2020, and then spend one day in each of these counties for business. Then, the risk of infection during the entire itinerary is calculated as $0.0364$ by Equation (5). Using the
predicted risk score, the traveler may revise the travel plan if this projected risk is too high.
Figure 8. Distributions of the 7-day-ahead county-level prediction errors for the infection rate. (A) The 7-day-ahead county-level WAPE of infection rate for 3109 counties in the continental United
States from May 2, 2020. Seventh-day county-level WAPE (corresponding to predictions made for May 9, 2020) is classified into four categories. The bins are defined by the 25th, 50th, and 75th
percentiles of nationwide county-specific 7th-day county-level WAPE values. (B) Scatterplot of county-level weighted prediction errors (WPEs) against $\log_{10}$(county population size). Three counties
(Los Angeles, California; Cook, Illinois; and New York, New York) are not included in order to amplify the distribution of the points. The WPEs of these 3 counties are $-1.030\times 10^{-5}$, $-1.982\times 10^{-5}$, and $-4.072\times 10^{-5}$, respectively.
Figure 9. Density plots comparing 7-day-ahead $\log_{10}$(county-level WAPE) for 3109 counties in the continental United States from May 3, 2020 to May 9, 2020. Vertical lines for individual density
curves represent the respective median values. The density curves shift to the right from May 3 to May 9, indicating an increase in county-level WAPE resulting from the increasing prediction
uncertainty over time.
Figure 10. A hypothetical trip begins on May 2, 2020, with stops in Ann Arbor, Michigan, Detroit, Michigan, and Chicago, Illinois, during May 3-5. Day 1: Washtenaw County (projected risk = 0.0033);
Day 2: Wayne County (projected risk = 0.0127); Day 3: Cook County (projected risk = 0.0208).
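Using Equation (5), the 0.0364 itinerary risk quoted in Section 3.2 can be checked directly from the three per-day projected risks in Figure 10:

def trip_risk(daily_theta_I):
    """Travel risk score of Equation (5): one county visited per day."""
    risk, not_yet_infected = 0.0, 1.0
    for theta in daily_theta_I:
        risk += not_yet_infected * theta
        not_yet_infected *= 1.0 - theta
    return risk

# Washtenaw, Wayne, and Cook counties on days 1-3 of the hypothetical trip.
print(round(trip_risk([0.0033, 0.0127, 0.0208]), 4))   # 0.0364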
4. Concluding Remarks
In this article, we propose a spatiotemporal epidemiological forecast CA-eSAIR model that provides timely predictions of community-level COVID-19 infection risk for the 3109 continental U.S.
counties. The proposed CA-eSAIR model can make both short-term local risk predictions (e.g., one-day- or 7-day-ahead) and relatively long-term predictions (e.g., one-month-ahead). In addition, the
projected county-level risk scores over a time window can be used to calculate a risk for a planned trip in the United States. Such a high-resolution projected risk map is particularly useful for
local governments and residents to learn the future spread patterns of the pandemic and their associated personal risks. In this empirical study, we demonstrate that utilizing local information can
greatly improve the prediction accuracy over the classical temporal model with no use of such information.
In addition to the new spatiotemporal prediction model, another major contribution of this article is to incorporate the public survey results of self-immunization as an antibody compartment in the
modeling of the infection dynamic system. This new antibody compartment can greatly help circumvent the underreporting issue for data collected in a public database. The current health surveillance
system in the United States is incapable of capturing asymptomatic individuals or those with light symptoms, nor is it able to provide sufficient resources for the coronavirus RT-PCR diagnostic test. Recently
released surveys of herd immunity in New York, California, and Massachusetts have shed light on how to correct the underreporting. In the future, when more extensive surveys of similar types are to be
conducted in the United States, more accurate and more frequent updates of the self-immunization rate can be incorporated in the proposed eSAIR model, even possibly at county level. The resulting
CA-eSAIR would make better community-level risk prediction.
In this article, due to various limitations, several functions used in the CA-eSAIR model, for example, $\pi(t)$ and $\omega_{cc'}$, are not fully objective and need to be improved in the future.
The transmission rate modifier $\pi_c(t)$ is included to take time-varying public health interventions (e.g., social distancing) into account. The state-level effectiveness of social distancing is
obtained from the values published by the Transportation Institute at the University of Maryland (2020), derived from cell phone data. Our CA-eSAIR model allows county-level values of social
distancing, but without access to high-resolution data from the UMD webpage, we must use a state-level value. This may be improved in the future as better data become available. To use the proposed
CA-eSAIR model, one needs to define a reasonable intercounty connectivity coefficient $\omega_{cc'}(t)$. Since this quantity is related to many variables, it is difficult to determine it with full
objectivity. Relaxing this limitation is worthy of a future research project. Relevant to disease contagion, two major factors have been used to form this coefficient. One is an intercounty mobility
index derived from mobile cell phone data; the other is a measure of intercounty travel distance. Also, we introduce a tuning parameter in this metric of connectivity to optimize the contribution of
travel distance by minimizing the one-day-ahead prediction error. With more personal tracking data available in the future, the quantification of intercounty connectivity can eventually be improved.
One must exercise caution when using the proposed CA-eSAIR model for risk prediction. First, we assume that a person who recovers from the COVID-19 infection is immune to the coronavirus and no
longer contagious. This assumption is likely to be true but has not been justified yet. Second, although the underreporting of confirmed infections is addressed by the compartment of antibody, the
underreporting of deaths from COVID-19 remains. This gives rise to a potential bias affecting the prediction accuracy. Not only is this a public health surveillance issue but a fundamental research
topic in epidemiology. In reality, it is often difficult to determine a defining cause to a death with high certainty. This problem becomes even more complicated in the case of a death from the
coronavirus because the medical diagnosis of “COVID-19 disease” is not clinically well defined yet.
It is noteworthy that the predicted risks at some state borders (e.g., Idaho and Nebraska) in Figure 4 exhibit nonsmooth changes. This may be an outcome of the initial values generated from
state-level eSAIR analyses in that we assume both control measures and testing policies and strategies to be state-specific, as well as the percentile-based categorization for the need of color
coding in the plot. Such within-state homogeneity is also used in the prediction, resulting in more homogeneous intrastate projected risks than the interstate projected risks. Consequently, some
counties at state borders appear to have noticeable discrepancies in their projected risks. Resolving these differences at state borders may not be that simple and is beyond the capacity of our
current methodology. It would be interesting future work to develop a method to discern the true interstate differences from artifacts in the risk prediction for counties at state borders, where
a certain objective criterion is the key to gauge adequate smoothness.
Further directions of the risk-mapping method include extending the proposed CA-eSAIR model to predict the risk of the COVID-19 in other areas across the world, or to predict the risk at locations of
household, factory, and business sites. The latter provides finer-scale information of COVID-19 risk critical for business reopening and allocation of resources for viral RT-PCR diagnostic tests.
This in-depth analysis needs more local data of extensive viral diagnostic and antibody tests available across the country and individual health and behavior data from better surveillance systems
such as mobile device apps for the tracking of personal movements. The CA-eSAIR model provides a useful prediction paradigm to develop data-driven strategies for precision public health intervention
and strategies for COVID-19 containment.
The authors thank the Editor, Associate Editor, Data Visualization Editor, and four anonymous reviewers for their insightful comments and constructive suggestions that helped us improve the
manuscript greatly. The authors also appreciate the Health Data Science Concentration offered by the Department of Biostatistics at the University of Michigan, through which five first-year MS
students (Zhang, Shi, Yang, Zhao, and Overton) had the opportunity to become involved in the COVID-19 data collection and analysis. The authors are grateful to Drs. Bhramar Mukherjee and Jian Kang
for their valuable suggestions on this work.
Disclosure Statement
This research is partially supported by the NSF grant DMS1811734.
1point3acres. (2020). Global COVID-19 tracker and interactive charts. https://coronavirus.1point3acres.com/
Ahmed, E., & Agiza, H. (1998). On modeling epidemics including latency, incubation and variable susceptibility. Physica A: Statistical Mechanics and Its Applications, 253(1–4), 347–352. https://
Angelopoulos, A. N., Pathak, R., Varma, R., & Jordan, M. I. (2020). On identifying and mitigating bias in the estimation of the COVID-19 case fatality rate. Harvard Data Science Review, (Special
Issue 1). https://doi.org/10.1162/99608f92.f01ee285
Beauchemin, C., Samuel, J., & Tuszynski, J. (2005). A simple cellular automaton model for influenza A viral infections. Journal of Theoretical Biology, 232(2), 223–234. https://doi.org/10.1016/
Chopard, B., & Droz, M. (1998). Cellular automata (Vol. 1). Springer.
Fuentes, M., & Kuperman, M. (1999). Cellular automata and epidemiological models with spatial dependence. Physica A: Statistical Mechanics and Its Applications, 267(3–4), 471–486. https://doi.org/
Fuks, H., & Lawniczak, A. T. (2001). Individual-based lattice model for spatial spread of epidemics. Discrete Dynamics in Nature and Society, 6(3), 191–200. https://doi.org/10.1155/S1026022601000206
Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7(4), 457–472. https://doi.org/10.1214/ss/1177011136
Jørgensen, B., & Song, P. X.-K. (2007). Stationary state space models for longitudinal data. Canadian Journal of Statistics, 35(4), 461–483. https://doi.org/10.1002/cjs.5550350401
Karney, C. F. (2013). Algorithms for geodesics. Journal of Geodesy, 87(1), 43–55. https://doi.org/10.1007/s00190-012-0578-z
Kermack, W. O., & McKendrick, A. G. (1927). A contribution to the mathematical theory of epidemics. Proceedings of the Royal Society A, 115(772), 700–721. https://doi.org/10.1098/rspa.1927.0118
Lab, C. D. (2020). US COVID-19 daily cases with basemap. Harvard Dataverse. https://doi.org/10.7910/DVN/HIDLTK
Office of the Governor, N. Y. (2020). Amid ongoing COVID-19 pandemic, Governor Cuomo announces results of completed antibody testing study of 15,000 people showing 12.3 percent of population has
COVID-19 antibodies. https://www.governor.ny.gov/news/amid-ongoing-covid-19-pandemic-governor-cuomo-announces-results-completed-antibody-testing
Schneckenreither, G., Popper, N., Zauner, G., & Breitenecker, F. (2008). Modelling SIR-type epidemics by ODEs, PDEs, difference equations and cellular automata–A comparative study. Simulation
Modelling Practice and Theory, 16(8), 1014–1023. http://doi.org/10.1016/j.simpat.2008.05.015
Sirakoulis, G. C., Karafyllidis, I., & Thanailakis, A. (2000). A cellular automaton model for the effects of population movement and vaccination on epidemic propagation. Ecological Modelling, 133(3),
209–223. https://doi.org/10.1016/S0304-3800(00)00294-5
Song, P. X.-K. (2000). Monte Carlo Kalman filter and smoothing for multivariate discrete state space models. Canadian Journal of Statistics, 28(3), 641–652. https://doi.org/10.2307/3315971
Unacast. (2020). Social distancing scoreboard. https://www.unacast.com/covid19/social-distancing-scoreboard
University of Maryland. (2020). University of Maryland COVID-19 Impact Analysis Platform, Maryland Transportation Institute.
Von Neumann, J., & Burks, A. (1966). Theory of self reproducing automata. University of Illinois press, Urbana.
White, S. H., Del Rey, A. M., & Sánchez, G. R. (2007). Modeling epidemics using cellular automata. Applied Mathematics and Computation, 186(1), 193–202. https://doi.org/10.1016/j.amc.2006.06.126
Wikipedia. (2020). List of airports in the United States. https://en.wikipedia.org/wiki/List_of_airports_in_the_United_States
Willox, R., Grammaticos, B., Carstea, A., & Ramani, A. (2003). Epidemic dynamics: Discrete-time and cellular automaton models. Physica A: Statistical Mechanics and Its Applications, 328(1–2), 13–22.
World Health Organization. (2020). Naming the coronavirus disease (COVID-19) and the virus that causes it. https://www.who.int/emergencies/diseases/novel-coronavirus-2019/technical-guidance/
©2020 Yiwang Zhou, Lili Wang, Leyao Zhang, Lan Shi, Kangping Yang, Jie He, Bangyao Zhao, William Overton, Soumik Purkayastha, and Peter Song. This article is licensed under a Creative Commons
Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article. | {"url":"https://hdsr.mitpress.mit.edu/pub/qqg19a0r/release/3","timestamp":"2024-11-10T06:19:33Z","content_type":"text/html","content_length":"1049796","record_id":"<urn:uuid:6f457601-1001-403a-b1ad-60106f464d80>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00577.warc.gz"} |
K-Factor
What is Transformer K-Factor Rating
A K-Factor rated transformer is one which is used to deal with harmonic-generating loads. Harmonics generate additional heat in the transformer and can cause non-K-rated transformers to overheat, possibly
causing a fire, and also reduce the life of the transformer. K-rated transformers are sized appropriately to handle this additional heat and are tested to the rigid UL 1561 standard for K-factor rated
transformers. A K-rated transformer works by using a double-sized neutral conductor and either changing the geometry of its conductors or using multiple conductors for the coils. Quality
transformers are manufactured with high-grade silicon steel, copper windings, and more air ducts.
The following rules will generally result in an acceptable choice of K-factor value:
• Follow successful practice in sizing the transformer.
• Where harmonic-current-producing equipment is less than 15 per cent of the load, use a standard transformer.
• Where the electronic equipment represents up to 35 per cent of the load, use a K-4 rated transformer.
• Where the electronic equipment represents up to 75 per cent of the load, use a K-13 rated transformer.
• Where 100 per cent of the load is electronic equipment, use a K-20 rated transformer.
• Higher K-factor ratings are generally reserved for specific pieces of equipment where the harmonic spectrum of the load is known.
K-rating is a heat survival rating, not a treatment of associated power quality issues like voltage distortion, and efficiency isn’t typically discussed. Surviving the extra heat means using more
core and coil material, and sometimes use of different construction techniques. Depending on the manufacturer’s design, harmonic losses may be reduced to varying degrees. Ironically, even though the
designated use of the K-rated transformer is to feed nonlinear load, manufacturers publish their loss data under linear load conditions.
Understanding Transformer "K Factor Rating"
A K-Factor rating is an index of the transformer's ability to supply harmonic content in its load current while operating within its temperature limits. For dry-type transformers, a K-Factor calculation
is made to determine the amount of harmonic content present in a power system. K-rated transformers are sized to handle 100% of the fundamental 60 Hz load, plus the non-linear load specified. The
neutral of the K-rated transformer is sized at 300% of the current rating of the phase connections. Industry literature and commentary refer to a limited number of K-factor ratings: K-1, K-4, K-9,
K-13, K-20, K-30, K-40. A transformer could be designed for other K-factor ratings in between those values, as well as for higher values.
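The article does not spell out the K-factor formula itself. Under the usual convention (each per-unit harmonic current weighted by the square of its harmonic order, as in ANSI/IEEE C57.110), a rough sketch of the calculation is shown below; treat the formula as background, not as a quotation from this page:

def k_factor(harmonic_currents):
    """K-factor of a load-current spectrum.

    harmonic_currents: dict mapping harmonic order h -> rms current I_h (any consistent unit).
    K = sum over h of (I_h per-unit of total rms)^2 * h^2.
    """
    total_rms_sq = sum(i ** 2 for i in harmonic_currents.values())
    return sum((i ** 2 / total_rms_sq) * h ** 2 for h, i in harmonic_currents.items())

# The partial K-4 harmonic profile listed below (fundamental 100%, 3rd 16%, 5th 10%,
# 7th 7%, 9th 5.5%) already gives K of roughly 1.9; the omitted "smaller percentages
# through the 25th harmonic" matter because of the h^2 weighting and are what push
# such a load toward the K-4 rating.
print(round(k_factor({1: 1.00, 3: 0.16, 5: 0.10, 7: 0.07, 9: 0.055}), 2))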
The commonly referenced ratings calculated according to ANSI/IEEE C57.110-1986 are as follows:
• K-Factor 1: A transformer with this rating has been designed to handle only the heating effects of eddy currents and other losses resulting from 60 Hz, sine-wave current loading on the
transformer. Such a transformer may or may not be designed to handle the increased heating of harmonics in its load current. Applications are motors, incandescent lighting, resistance heating,
motor generators (without solid state drives).
• K-Factor 4: A transformer with this rating has been designed to supply rated kVA, without overheating, to a load made up of 100% of the normal 60 Hz, sine-wave, fundamental current plus: 16% of
the fundamental as 3rd harmonic current; 10% of the fundamental as 5th; 7% of the fundamental as 7th; 5.5% of the fundamental as 9th; and smaller percentages through the 25th harmonic. The "4"
indicates its ability to accommodate four times the eddy current losses of a K-1 transformer. Uses are HID lighting, induction heaters, welders, UPS with optional input filtering, and PLC and solid
state controls.
• K-Factor 9: A K-9 transformer can accommodate 163% of the harmonic loading of a K-4 transformer.
• K-Factor 13: A K-13 transformer can accommodate 200% of the harmonic loading of a K-4 rated transformer. These transformers are used for multiple receptacle circuits in health care facilities,
UPS without optional input filtering, production or assembly line equipment, and schools and classroom facilities.
• K-Factor 20, K-30, K-40: The higher number of each of these K-factor ratings indicates ability to handle successively larger amounts of harmonic load content without overheating. Some of these
transformers are used in SCR variable speed drives, circuits dedicated exclusively to data processing equipment, critical care facilities, and hospital operating rooms.
| {"url":"https://canadatransformers.com/k-factor/","timestamp":"2024-11-11T06:38:37Z","content_type":"application/xhtml+xml","content_length":"50946","record_id":"<urn:uuid:94492d58-b320-487a-a370-93467f5e940a>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00534.warc.gz"}
Citation: Stefan Rass, "Tell me who you are friends with and I will tell you who you are: Unique Neighborhoods in Random Graphs", in Theoretical Computer Science,
Elsevier, 1-2024, ISSN: 0304-3975
Original Title: Tell me who you are friends with and I will tell you who you are: Unique Neighborhoods in Random Graphs
Language of the Title: English
Original Abstract: The identity of a physical entity in a network is traditionally defined by some secret knowledge, personal possession or (biometric) property. What if no
such unique property exists or can be defined "safely", for example, in the interest of anonymity? In this work, we study the problem of defining an
identity for an entity u under the constraint that there is no subjective property that u carries by itself, or in other words, any property that u has
may also be likewise present in another entity v. In the absence of an intrinsic property of u to define its identity, we thus consider the possibility of an
"extrinsically defined identity" via the connections that u maintains to other entities. Letting u in V be part of a graph G=(V, E), we study the question
of whether the links that u has to its neighbors lend themselves to uniquely distinguish u from all other nodes v in the graph. A practical instance of
this setting appears in symmetric cryptography, if we assume an edge between two nodes u, v if and only if u and v share a common secret. A single shared
secret known by u is, in symmetric cryptography, also known to some other entity v. However, is the set of all secrets that u has likewise known to
another node v in a network? If not, we can implement end-to-end authentication without shared secrets and exclusively using symmetric cryptography. This
concept is here reviewed as peer-authentication. It works in a graph class that we call Unique-Neighborhood Network (UNN), which has been introduced in
prior literature for the purpose of emulating public-key digital signatures with symmetric cryptography only. Our findings suggest that some networks
naturally grow into UNNs, but even if not, we can efficiently identify substructures that allow us to pin down a node's identity based on its "friend-nodes"
in a graph.
Language of the Abstract: English
Journal: Theoretical Computer Science
Publisher: Elsevier
Month of Publication: 1
Year of Publication: 2024
ISSN: 0304-3975
Publication Note: Article number: 114386
DOI: 10.1016/j.tcs.2024.114386
URL for further information: http://www.sciencedirect.com/science/article/pii/S030439752400001X?via%3Dihub
Scope: international
Publication Type: Article / paper in an SCI-Expanded journal
Authors: Stefan Rass
Research Units: Institut für Netzwerke und Sicherheit
LIT Secure and Correct Systems Lab
Keywords: Identity Management
Random Graph
Research Fields: Graph theory (ÖSTAT:101011)
Probability theory (ÖSTAT:101024)
IT security (ÖSTAT:102016)
Theoretical computer science (ÖSTAT:102031)
User support: Sandra Winzer, last change:
| {"url":"https://fodok.uni-linz.ac.at/fodok/publikation.xsql?PUB_ID=80360","timestamp":"2024-11-11T12:52:06Z","content_type":"text/html","content_length":"10101","record_id":"<urn:uuid:42b89089-b032-48f4-8176-d4efcfebc684>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00787.warc.gz"}
How to find quartiles in Google Sheets
Don't let the fact that you didn't like math in high school prevent you from using all of the great features that Google Sheets offers. After this program became available, data analysis
became a lot simpler – percentiles and quartiles are undoubtedly one of the reasons.
But what are quartiles, and how do you calculate them? Don't worry. Google Sheets can do this for you. Keep reading this article to learn how to do it.
What are quartiles and how to calculate them
You have heard of the median value of things. That is the point where 50% of all other values are below it and the remaining 50% are above. For instance, people often
mention it when they talk about the median salary or house prices in a city or state.
Quartiles are similar. In the same way that you split the data in half to get the median, to get quartiles you make four cuts. However, there are five quartile values, as you also
consider the starting point – the minimum, which is the zeroth quartile.
The first quartile sits at 25%. This means that 25% of the values are lower than the first quartile. When you reach the median value, you also reach the second
quartile. The third is at 75%, and the fourth is the maximum. Note that the zeroth and fourth quartiles are the same as the MIN and MAX functions.
Google Sheets has a quartile formula that lets you calculate quartiles in seconds. For this formula, you choose a data set and the quartile type to evaluate. Using it, you will get
a reasonably informative and organized summary of your data, which is why quartiles are so useful for calculating statistics in Google Sheets. The formula is as follows:
=QUARTILE(data, type)
For instance: =QUARTILE(A2:A100, 3). Here the formula will calculate the third quartile (75%) of the data in column A, cells A2 to A100. You can see another example in the picture.
What is the interquartile range?
IQR stands for Interquartile Range. This range contains all values within the middle 50% of a given dataset. How do you calculate this range? The formula is Q3 - Q1, that is, the 75th percentile minus the 25th percentile.
You can calculate this range manually. To do that, you must arrange the data in ascending order – from lowest to highest. Google Sheets can also help you with a formula. In that case,
the order of the data doesn't matter.
Once you find Q3 and Q1, you can easily calculate the IQR. For instance:
=QUARTILE(A2:A100, 3) - QUARTILE(A2:A100, 1)
How to find quartiles by hand
Some datasets make it easy to find quartiles by hand. Don't forget that you must sort the data in ascending order to find the right values.
For instance, you can find quartiles by excluding the lowest and highest values at the same time. Suppose you have the following data set: 0 2 4 6 8 10 and 12. First you remove 0 and 12.
Then 2 and 10. Then you remove 4 and 8. You are left with 6, which is the median, and the 2nd quartile at the same time.
It is now easier to find the other quartiles. The 0th is of course the minimum value, which in this case is 0. The fourth is the maximum, which in this particular data set is 12.
To find the first quartile, follow the same steps, but use the first half of the data set: 0 2 4 6. Delete 0 and 6 and find the mean between 2 and 4, which is 3.
That is the first quartile – 25%. The third quartile is found when you analyze the second half of the original data set: 6 8 10 12. Delete 6 and 12 and find the average of 8 and 10,
which is 9. That is 75%, the third quartile.
It is easy to find quartiles by hand if you have small numbers and simple data. For more advanced statistics, it is recommended to use a formula. This reduces
potential errors and saves you time.
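If you want to double-check the hand calculation outside of Sheets, the same linear-interpolation method that QUARTILE uses is available in most numeric libraries. For example, in Python with NumPy (reproducing the data set above):

import numpy as np

data = [0, 2, 4, 6, 8, 10, 12]
q1, q2, q3 = np.percentile(data, [25, 50, 75])   # linear interpolation, as in QUARTILE
print(q1, q2, q3, q3 - q1)                        # 3.0 6.0 9.0 6.0, so the IQR is 6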
Statistics made easy
It is hard to imagine how long it took, a few years ago, to import, analyze, and transform such data. Google Sheets makes it easy to find quartiles and percentiles,
and most of the process is calculated automatically. You may not like math, but this feature is very useful and you will definitely appreciate it.
Have you ever tried to find quartiles by hand? How often do you need a quartile formula? Let us know in the comments below.
| {"url":"https://hxtool-app.com/how-to-find-quartiles-in-google-sheets/","timestamp":"2024-11-05T15:55:25Z","content_type":"text/html","content_length":"136351","record_id":"<urn:uuid:2211d0b7-b21a-41ea-b11d-5b1e416d8403>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00363.warc.gz"}
Tuned Plate Tuned Grid oscillator – a simple, but complete explanation
A correspondent trying to get his head around old designs was challenged by the Tuned Plate Tuned Grid (TPTG) oscillator in common cathode configuration.
A superficial analysis is that the feedback to the grid from the anode via the anode to grid capacitance (Cag) is in phase with the anode voltage, which because of inversion in the valve means it is
negative feedback. How can it cause self oscillation?
Well it does, so the superficial analysis is probably inadequate.
Text books tend to gloss over the detail of how it works. Understanding is not helped by some folk lore, try to differentiate between what you know that is truly fact and other ‘knowledge’.
Where does the additional 180° phase shift necessary for self oscillation come from?
Let's take a fairly high-level approximation of what happens in a simple circuit, one with identical parallel tuned circuits in the anode and grid circuits, and a very, very small equivalent Cag.
Some important concepts:
• We all think of a common cathode stage as inverting (we could say AC anode voltage leads current by 180°), but whilst that is approximately true for a resistive load, it is not true with a
reactive load and the TPTG oscillator depends on one or more of its tuned circuits being reactive. A more detailed analysis is necessary to explain the TPTG oscillator.
• If Cag is very small and close to ideal (ie has very low equivalent series resistance), the grid AC voltage will be much smaller than the anode AC voltage and the current Iag will lead Va by almost 90°.
Here are the steps around the loop considering only phase:
1. At a frequency of half the half power bandwidth (HBW=f0/Q) below resonance (f0), so f=f0-f0/Q/2, the anode tank is inductive and the AC voltage developed at the anode leads the anode current by 45°.
2. As mentioned, because Va>>Vg, and Cag is ideal, the current in Cag (Iag) leads Va by almost 90°
3. Current Iag flows into the (identical in this case) grid tank circuit which is also inductive, and the voltage across the tank circuit leads the current by 45°.
The total phase lead from anode current to grid voltage is 180+45+90+45=360°, equivalent to 0°, which satisfies one of the criteria for oscillation. If the magnitude of the total loop gain is greater than unity, then the
circuit will self oscillate at this frequency.
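A quick numerical check of this phase bookkeeping (the component values are arbitrary, only the loaded Q and the offset of half the half power bandwidth matter, and the Cag path is idealised as exactly 90°):

import numpy as np

f0, Q = 7.0e6, 50                    # arbitrary resonant frequency and loaded Q
f = f0 - f0 / Q / 2                  # half the half power bandwidth below resonance

def tank_phase_deg(f, f0, Q):
    """Phase of the voltage across an ideal parallel RLC tank relative to the driving current."""
    # Normalised admittance of a parallel RLC: Y ~ 1 + jQ(f/f0 - f0/f); impedance phase = -angle(Y).
    y = 1 + 1j * Q * (f / f0 - f0 / f)
    return -np.degrees(np.angle(y))

tank = tank_phase_deg(f, f0, Q)      # about +45 deg: voltage leads current below resonance
loop = 180 + tank + 90 + tank        # valve inversion + anode tank + ideal Cag + grid tank
print(round(tank, 1), round(loop % 360, 1))   # about 45.1 and 0.3, i.e. essentially 0 deg around the loop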
So how does it start?
If the stage is biased so that some anode current flows, it contains noise components, and if the feedback circuit has loop gain greater than unity and phase=0°, the noise currents will be amplified
greatly and quickly lead to self oscillation.
Note that in this example, the necessary phase shifts mean the circuit oscillates just a little below the self resonant frequency of the two identical tuned circuits.
The circuit does not depend on identical tuned circuits, and just one could be variable to adjust frequency of oscillation over a small range. Though the explanation used a very small ideal Cag,
again it does not depend on that though the frequency of oscillation is sensitive to the phase relationship between Iag and Va. Obviously phase shift may differ at each stage with these small
variations, it is the frequency where the loop phase shift is 0° that oscillation will occur if loop gain is sufficient.
I have used an example with LC parallel tuned circuits, but one or both could be tuned cavities (for UHF and above).
Was it a good oscillator in its day?
Probably not for a host of reasons, and probably why its use died out pretty quickly. The last transmitter that I had that used a TPTG oscillator was a modified aircraft transponder on the 23cm band
(it was originally a TPTG oscillator and needed more feedback to run at lower power in continuous mode)… that was in the late 1960s. | {"url":"https://owenduffy.net/blog/?p=11594","timestamp":"2024-11-08T14:26:46Z","content_type":"text/html","content_length":"58778","record_id":"<urn:uuid:48d9d6d5-6287-4b4f-9784-3d4fc7e68d27>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00288.warc.gz"} |
Quantum Gravity and Singularities | Academic Block
Quantum Gravity and Singularities: The Cosmic Abyss
Quantum gravity investigates the behavior of spacetime at microscopic scales, aiming to resolve singularities from general relativity. This branch of physics explores the quantum aspects of
gravitational interactions, providing solutions for mathematical infinities linked with black hole and cosmological singularities.
Exploring the Concept
In the realm of physics, where the minuscule meets the massive, and the rules seem to bend with every new discovery, lies one of the most enigmatic and challenging puzzles: the nature of gravity on
the quantum scale and the enigmatic phenomena known as singularities. At the heart of this enigma lies the quest for a theory that unites the two pillars of modern physics: quantum mechanics and
general relativity. This quest has given rise to the theory of Quantum Gravity, offering a tantalizing glimpse into the fabric of the cosmos and the peculiar behavior of space and time at its most
fundamental level. This article by Academic Block will tell you all about Quantum Gravity and Singularities.
The Need for Quantum Gravity
To appreciate the significance of Quantum Gravity, one must understand the context from which it emerges. General relativity, formulated by Albert Einstein in the early 20th century, revolutionized
our understanding of gravity. It describes gravity as the curvature of spacetime caused by mass and energy, offering precise predictions for the behavior of massive objects in the universe, from
planets orbiting stars to the bending of light by massive celestial bodies.
However, as successful as general relativity has been in describing gravity on large scales, it encounters its limitations when confronted with the quantum realm. Quantum mechanics, the other pillar
of modern physics, governs the behavior of particles at the smallest scales, describing phenomena such as particle-wave duality, uncertainty, and entanglement. Yet, when attempts are made to
reconcile quantum mechanics with general relativity, contradictions arise, particularly when trying to describe the gravitational field at the quantum level. This inconsistency is vividly illustrated
in the study of singularities.
Singularities: The Cosmic Conundrum
Singularities are points in spacetime where the curvature becomes infinitely large, leading to breakdowns in our current understanding of physics. The most famous of these is the singularity believed
to lie at the center of black holes, where the curvature becomes so extreme that it traps everything, even light, within its grasp, forming an event horizon beyond which no information can escape.
According to classical general relativity, these singularities represent a breakdown in the laws of physics, where the known laws of nature cease to apply.
However, such breakdowns are not limited to the depths of black holes. The Big Bang, the cosmic event that birthed our universe, is also believed to have arisen from a singularity, where the entire
universe was concentrated into a point of infinite density and curvature. Understanding these singularities is crucial for unraveling the mysteries of the universe's origins and its ultimate fate.
Yet, to do so, we need a theory that can reconcile the extreme conditions of singularities with the quantum realm, leading us to the doorstep of Quantum Gravity.
Approaches to Quantum Gravity
Quantum Gravity encompasses a variety of theoretical approaches aimed at bridging the gap between general relativity and quantum mechanics. Each approach offers its own insights and challenges,
reflecting the complexity of the problem at hand.
One prominent approach is loop quantum gravity, which seeks to quantize spacetime itself. In this framework, spacetime is viewed as a network of interconnected loops, with discrete units of volume
and area. By quantizing the geometry of spacetime in this way, loop quantum gravity aims to describe the fabric of the cosmos at the smallest scales, offering a potential resolution to singularities
by replacing them with a "quantum bounce" where the universe undergoes a transition from a contracting phase to an expanding one, avoiding the need for an initial singularity.
String theory represents another avenue towards Quantum Gravity, proposing that fundamental particles are not point-like but instead tiny, vibrating strings. These strings propagate through
higher-dimensional spacetime, giving rise to the familiar particles and forces observed in the universe. Within string theory, gravity emerges naturally as one of the fundamental forces, seamlessly
integrated with the other forces of nature. However, string theory requires additional spatial dimensions beyond the familiar three spatial dimensions and one time dimension, leading to questions
about its experimental testability and physical significance.
Other approaches to Quantum Gravity include causal dynamical triangulation, asymptotically safe gravity, and emergent gravity, each offering its own perspective on how to reconcile quantum mechanics
with gravity. Despite their differences, these approaches share a common goal: to uncover the underlying principles governing the fabric of the universe at its most fundamental level, providing a
unified framework that encompasses both quantum mechanics and general relativity.
Challenges and Future Directions
The road to Quantum Gravity is fraught with challenges, both theoretical and experimental. One of the primary obstacles is the lack of experimental evidence to guide theory. Unlike other branches of
physics, such as particle physics or cosmology, where experiments can directly test theoretical predictions, Quantum Gravity operates at energy scales far beyond our current technological
capabilities. The energies required to probe the quantum nature of gravity are orders of magnitude beyond those achievable in particle accelerators or cosmological observations, leaving theorists to
rely on mathematical consistency and conceptual arguments to guide their investigations.
Furthermore, the very nature of singularities presents a formidable challenge. Singularities represent breakdowns in our current understanding of physics, where the laws of nature as we know them
cease to apply. Reconciling these extreme conditions with the principles of quantum mechanics requires a profound rethinking of our fundamental concepts of space, time, and matter, pushing the
boundaries of human knowledge to its limits.
Despite these challenges, the quest for Quantum Gravity continues unabated, driven by the tantalizing promise of a theory that unites the fundamental forces of nature and unlocks the secrets of the
cosmos. Advances in mathematical techniques, computational methods, and theoretical frameworks offer hope for progress in the coming decades, bringing us ever closer to a deeper understanding of the
universe and our place within it.
Final Words
In the grand tapestry of modern physics, Quantum Gravity stands as a beacon of curiosity and exploration, beckoning us to probe the deepest mysteries of the cosmos. From the enigmatic depths of black
holes to the primordial singularity that gave birth to our universe, the quest to understand gravity on the quantum scale challenges our most cherished notions of space, time, and the nature of
reality itself. While the road ahead may be long and fraught with uncertainty, the journey itself holds the promise of profound insights and discoveries that will shape our understanding of the
universe for generations to come. In the end, it is not merely the destination that drives us forward, but the relentless pursuit of knowledge and understanding that defines us as explorers of the
cosmos.
This Article will answer your questions like:
What is quantum gravity?
Quantum gravity is a theoretical framework aiming to unify general relativity and quantum mechanics. It seeks to describe the gravitational force within the context of quantum theory, where
space-time is quantized. Unlike classical gravity, quantum gravity addresses the behavior of gravitational interactions at extremely small scales, such as near black hole singularities or the Big Bang.
How does quantum gravity address the issue of singularities?
Quantum gravity aims to resolve singularities by incorporating quantum effects into the description of space-time. Singularities, which are points of infinite density and curvature, are problematic
in classical theories. Quantum gravity models, such as loop quantum gravity, propose that space-time is granular or discrete, potentially smoothing out singularities into a finite structure.
What are the key theories of quantum gravity?
Key theories of quantum gravity include string theory and loop quantum gravity. String theory posits that fundamental particles are one-dimensional strings rather than point-like objects, potentially
unifying gravity with other forces. Loop quantum gravity, on the other hand, suggests that space-time is composed of discrete loops, providing a non-perturbative quantum description of gravity.
How do singularities arise in quantum gravity models?
In quantum gravity models, singularities arise due to the breakdown of classical space-time descriptions at extremely small scales. For example, in loop quantum gravity, singularities are addressed
by a finite, discrete structure of space-time. String theory proposes that the fundamental strings avoid the infinite densities associated with classical singularities, leading to different
interpretations of these extreme conditions.
What is the role of quantum fluctuations in singularities?
Quantum fluctuations play a crucial role in singularities by introducing quantum effects that challenge classical notions of infinite density. In quantum gravity, these fluctuations suggest that
singularities might not be true points of infinite curvature but rather regions where quantum corrections alter the classical singularity, potentially leading to a more complete understanding of the
extreme conditions involved.
How does loop quantum gravity approach singularities?
Loop quantum gravity approaches singularities by proposing that space-time is composed of discrete, quantized loops. This framework suggests that near singularities, the classical concept of infinite
density breaks down, and instead, space-time has a granular structure that prevents the formation of true singularities, potentially replacing them with finite, quantum-corrected regions.
How does string theory address singularities?
String theory addresses singularities by proposing that fundamental entities are one-dimensional strings rather than point particles. These strings are thought to avoid singularities by spreading out
the interactions over a finite region, thus resolving the infinite densities predicted by classical theories. This leads to a more consistent description of extreme conditions in space-time.
What is the concept of a "resolved" singularity in quantum gravity?
A "resolved" singularity in quantum gravity refers to a re-conceptualization of singularities where traditional infinite densities are replaced by finite, quantum-corrected structures. This concept
emerges from theories like loop quantum gravity, where space-time is discrete, and singularities are smoothed out into well-defined, quantized regions, avoiding infinities.
How does quantum gravity affect the understanding of black hole singularities?
Quantum gravity affects the understanding of black hole singularities by proposing that the singularity at the center of a black hole is not a true point of infinite density. Instead, quantum
corrections might smooth out the singularity into a finite, well-defined structure. This shift in understanding helps reconcile the black hole's extreme conditions with quantum theory.
What are the challenges in studying singularities within quantum gravity?
Challenges in studying singularities within quantum gravity include the difficulty of reconciling quantum mechanics with general relativity at extreme scales. Formulating a consistent theory that can
be empirically tested is complex. Additionally, the mathematical tools needed to describe the quantum structure of space-time are highly intricate and not yet fully developed.
How does the concept of space-time emergence relate to singularities?
The concept of space-time emergence suggests that space-time itself is a macroscopic phenomenon emerging from more fundamental quantum entities. In this view, singularities might be artifacts of the
classical description of space-time, and their true nature might be revealed when space-time is understood as an emergent property from a deeper quantum reality.
How do quantum gravity theories handle the Big Bang singularity?
Quantum gravity theories handle the Big Bang singularity by suggesting that the classical notion of an infinitely dense initial state is replaced by a quantum regime where the singularity is smoothed
out. For instance, loop quantum cosmology proposes that the Big Bang may be replaced by a quantum bounce, avoiding the infinite density traditionally associated with the Big Bang.
What experimental evidence supports the study of singularities in quantum gravity?
Experimental evidence supporting the study of singularities in quantum gravity is indirect. Observations of black holes, gravitational waves, and the cosmic microwave background provide data that can
be compared with theoretical predictions. For instance, the detection of gravitational waves from black hole mergers supports theories predicting deviations from classical singularity models.
How does quantum gravity propose to eliminate or modify singularities?
Quantum gravity proposes to eliminate or modify singularities by incorporating quantum effects into the fabric of space-time. For example, loop quantum gravity suggests that space-time is made up of
discrete units, thus avoiding the formation of singularities. String theory proposes that singularities are smoothed out by the extended nature of fundamental strings, leading to a modified
understanding of extreme gravitational fields.
What is the impact of quantum gravity on the classical notion of singularities?
The impact of quantum gravity on the classical notion of singularities is profound, potentially redefining or eliminating the concept of infinite density. By incorporating quantum effects, quantum
gravity theories suggest that singularities may be replaced by finite, quantized structures. This shift provides a more complete and potentially more accurate description of extreme gravitational fields.
Controversies related to Quantum Gravity and Singularities
Information Loss Paradox: One of the most contentious issues in the study of black holes and Quantum Gravity is the information loss paradox. According to classical general relativity, information
that falls into a black hole is irretrievably lost, leading to violations of quantum mechanics. However, this contradicts the principle of unitarity in quantum mechanics, which states that
information cannot be destroyed. Resolving this paradox is essential for developing a consistent theory of Quantum Gravity.
Firewall Paradox: Proposed as a solution to the information loss paradox, the firewall hypothesis suggests that the event horizon of a black hole is replaced by a firewall—a region of extremely high
energy—violently incinerating anything that crosses it. While this resolves the information loss paradox, it introduces new conceptual challenges, such as violating the principle of locality and
causing inconsistencies with quantum mechanics. The existence and implications of firewalls remain highly controversial within the scientific community.
Quantum Gravity and the Nature of Time: Quantum Gravity theories often challenge our conventional understanding of time as a continuous and absolute quantity. Some theories propose that time may
emerge from more fundamental quantum degrees of freedom, leading to a timeless description of the universe. This raises profound questions about the nature of causality, the arrow of time, and the
role of observers in shaping our perception of reality.
Quantum Cosmology and the Beginning of the Universe: Singularities are not only found within black holes but are also believed to have characterized the early universe during the Big Bang.
Understanding the quantum nature of singularities at the origin of the universe is a central goal of Quantum Cosmology. However, different quantum gravity theories offer conflicting predictions about
the nature of the initial singularity and the conditions that preceded it, leading to ongoing debates about the nature of cosmic origins.
Emergent Spacetime vs. Fundamental Geometry: Some approaches to Quantum Gravity, such as loop quantum gravity and causal dynamical triangulation, suggest that spacetime may emerge from more
fundamental quantum structures. In contrast, other theories, such as string theory, propose that spacetime and geometry are fundamental entities that give rise to matter and forces. The debate
between emergent spacetime and fundamental geometry reflects deeper disagreements about the nature of reality and the fundamental constituents of the universe.
Quantum Gravity and the Multiverse: The concept of a multiverse, where multiple universes coexist alongside our own, has gained traction in certain interpretations of Quantum Gravity. According to
some theories, such as eternal inflation in string theory or the Many-Worlds Interpretation of quantum mechanics, the multiverse arises naturally from the quantum dynamics of the universe. However,
the existence and implications of the multiverse remain highly speculative and controversial, raising questions about the testability and scientific validity of such theories.
Major discoveries/inventions because of Quantum Gravity and Singularities
Black Hole Thermodynamics: Stephen Hawking’s pioneering work on black hole thermodynamics revolutionized our understanding of black holes by showing that they possess properties analogous to
thermodynamic systems, such as temperature and entropy. This discovery established a profound connection between gravity, thermodynamics, and quantum mechanics, leading to new insights into the
nature of spacetime and information theory.
Hawking Radiation: Building upon the framework of black hole thermodynamics, Hawking predicted that black holes emit radiation due to quantum effects near the event horizon. This phenomenon, known as
Hawking radiation, represents a fundamental quantum mechanical process occurring in the vicinity of black holes and has implications for the eventual fate of black holes and the conservation of information.
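As a back-of-the-envelope illustration (not part of the original article), the standard Hawking temperature formula T = ħc³ / (8πGMk_B) shows just how faint this radiation is for astrophysical black holes:

import math

# Physical constants in SI units.
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
k_B = 1.380649e-23       # J/K
M_sun = 1.989e30         # kg, one solar mass

def hawking_temperature(mass_kg: float) -> float:
    """Hawking temperature in kelvin of a Schwarzschild black hole of the given mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(hawking_temperature(M_sun))     # ~6e-8 K, far colder than the cosmic microwave background
print(hawking_temperature(1.0e12))    # a small (~1e12 kg) black hole is orders of magnitude hotter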
Information Paradox and Quantum Entanglement: The study of black hole information paradoxes has led to deeper insights into the nature of quantum entanglement and its role in quantum gravity.
Researchers have explored the connection between entanglement entropy and the geometry of spacetime, leading to the development of the holographic principle—a conjecture that suggests the information
content of a region of spacetime is encoded on its boundary.
Quantum Gravity Theories: Despite the lack of experimental verification, the pursuit of Quantum Gravity has led to the development of various theoretical frameworks and mathematical formalisms aimed
at reconciling quantum mechanics with general relativity. These include loop quantum gravity, string theory, causal dynamical triangulation, and asymptotically safe gravity, among others. While these
theories remain speculative, they have stimulated new avenues of research and inspired novel approaches to understanding the fundamental nature of the universe.
Technological Applications: Although the practical applications of Quantum Gravity and Singularities are indirect, research in these areas has contributed to advancements in related fields, such as
quantum computing, quantum cryptography, and gravitational wave detection. For example, insights gained from the study of quantum entanglement have been leveraged in the development of quantum
communication protocols and quantum information processing technologies.
Cosmological Implications: The study of Singularities, particularly in the context of the Big Bang cosmology, has profound implications for our understanding of the origin and evolution of the
universe. By investigating the quantum nature of the early universe, researchers have formulated models of cosmic inflation, multiverse scenarios, and alternative cosmological histories, shedding
light on the fundamental structure and dynamics of spacetime on cosmic scales.
Facts on Quantum Gravity and Singularities
Hawking Radiation and Singularities: Stephen Hawking’s groundbreaking work on black hole thermodynamics introduced the concept of Hawking radiation, which suggests that black holes can emit radiation
and eventually evaporate over time. This raises intriguing questions about the fate of singularities within black holes and the possibility of information loss, challenging our understanding of
fundamental principles like the conservation of information.
Planck Scale and Quantum Gravity: The Planck scale represents the energy and length scales at which quantum effects become significant in gravitational interactions. It is characterized by the Planck
length (~10^-35 meters) and the Planck energy (~10^19 GeV). Quantum Gravity theories aim to describe the behavior of spacetime at these incredibly small scales, where conventional notions of
classical spacetime break down.
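Those figures follow directly from the fundamental constants, via l_P = sqrt(ħG/c³) and E_P = sqrt(ħc⁵/G); a short script reproduces the orders of magnitude quoted above:

import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2

planck_length = math.sqrt(hbar * G / c**3)              # metres
planck_energy_J = math.sqrt(hbar * c**5 / G)            # joules
planck_energy_GeV = planck_energy_J / 1.602176634e-10   # 1 GeV = 1.602e-10 J

print(f"Planck length: {planck_length:.3e} m")        # ~1.6e-35 m
print(f"Planck energy: {planck_energy_GeV:.3e} GeV")  # ~1.2e19 GeV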
Quantum Foam and Spacetime Fluctuations: Quantum Gravity theories suggest that at the Planck scale, spacetime itself may undergo fluctuations known as “quantum foam.” This foam-like
structure implies that spacetime is not smooth and continuous but rather composed of discrete, fluctuating units, challenging our classical intuition about the nature of space and time.
Singularities and Cosmic Censorship: The Cosmic Censorship Hypothesis, proposed by Roger Penrose, posits that naked singularities, which are not hidden within event horizons, are not allowed to exist
in the universe. This hypothesis serves as a safeguard against the breakdown of causality and predictability in the presence of singularities. However, the validity of this hypothesis remains an open
question in the study of Quantum Gravity.
Quantum Entanglement and Spacetime Geometry: Recent research has explored the connection between quantum entanglement and the geometry of spacetime, suggesting that spacetime may emerge from the
entanglement of quantum degrees of freedom. This intriguing idea, known as the “holographic principle,” posits that the information content of a region of spacetime is encoded on its boundary,
challenging conventional notions of locality and the structure of reality.
Experimental Signatures of Quantum Gravity: While direct experimental tests of Quantum Gravity remain elusive, scientists have proposed several indirect probes that could shed light on the quantum
nature of gravity. These include high-precision tests of gravitational interactions, observations of gravitational waves from exotic sources, and searches for deviations from classical predictions in
the behavior of particles and fields in extreme gravitational environments.
Black Hole Information Paradox: The fate of information falling into a black hole is a central puzzle in the study of black hole physics and Quantum Gravity. According to quantum mechanics,
information should be conserved, yet the classical picture of black holes suggests that information may be lost once it crosses the event horizon. Resolving this paradox is crucial for understanding
the quantum nature of black holes and the implications for the broader framework of Quantum Gravity.
Academic References on Quantum Gravity and Singularities
1. Smolin, L. (2001). Three Roads to Quantum Gravity. Basic Books.: This book by Lee Smolin provides an accessible overview of three leading approaches to Quantum Gravity: loop quantum gravity,
string theory, and causal dynamical triangulations.
2. Hawking, S. W., & Ellis, G. F. R. (1973). The Large Scale Structure of Space-Time. Cambridge University Press.: Considered a classic in the field, this book explores the mathematical foundations
of general relativity and their implications for the structure of spacetime, including the existence of singularities.
3. Penrose, R. (1965). Gravitational collapse and space-time singularities. Physical Review Letters, 14(3), 57-59.: In this seminal journal article, Roger Penrose investigates the conditions under
which gravitational collapse leads to the formation of singularities, laying the groundwork for our understanding of black holes.
4. Rovelli, C. (2004). Quantum Gravity. Cambridge University Press.: Carlo Rovelli’s book offers a comprehensive introduction to the principles and concepts of Quantum Gravity, exploring key topics
such as loop quantum gravity and the nature of spacetime at the Planck scale.
5. Wald, R. M. (1997). Gravitational collapse and cosmic censorship. In Black Holes and Relativistic Stars (pp. 69-85). University of Chicago Press.: This article by Robert M. Wald discusses the
cosmic censorship hypothesis and its implications for the behavior of singularities in general relativity, providing insights into the stability of the universe.
6. Morris, M. S., & Thorne, K. S. (1988). Wormholes in spacetime and their use for interstellar travel: A tool for teaching general relativity. American Journal of Physics, 56(5), 395-412.: This
influential article by Michael Morris and Kip Thorne explores the theoretical possibility of traversable wormholes, which are hypothetical shortcuts through spacetime that could connect distant
regions or even different universes.
7. Ashtekar, A. (2004). Loop quantum gravity: four recent advances and a dozen frequently asked questions. Journal of Gravitational Physics, 21(6), 209-236.: Abhay Ashtekar’s review article provides
an overview of recent developments in loop quantum gravity, highlighting key advances and addressing common questions about the theory.
8. Ellis, G. F. R., & Schmidt, B. G. (2018). Singularities. General Relativity and Gravitation, 50(11), 140.: This review article by George F. R. Ellis and Barbara G. Schmidt explores the concept of
singularities in general relativity, discussing their classification, properties, and implications for cosmology and black hole physics.
9. Gambini, R., & Pullin, J. (2011). A first course in loop quantum gravity. Oxford University Press.: This book by Rodolfo Gambini and Jorge Pullin provides a pedagogical introduction to loop
quantum gravity, covering its mathematical foundations, physical principles, and applications to cosmology and black hole physics.
10. Wald, R. M. (1984). General Relativity. University of Chicago Press.: Robert M. Wald’s textbook is a comprehensive introduction to the principles of general relativity, covering topics such as
the Einstein field equations, black holes, and the structure of spacetime.
11. Sen, A. (2002). Rolling tachyon. Journal of High Energy Physics, 2002(04), 048.: This journal article by Ashoke Sen discusses the concept of a rolling tachyon field in string theory and its
implications for the resolution of spacetime singularities and the dynamics of cosmic strings.
12. Magueijo, J. (2003). Faster than the speed of light: The story of a scientific speculation. Basic Books.: João Magueijo’s book explores the controversial idea of variable speed of light theories
and their implications for fundamental physics, including the behavior of singularities and the nature of spacetime.
Learning Trajectories and Equity: Making a Strong Link Stronger | CADRE
Jere Confrey, Distinguished University Professor of Mathematics Education, Emerita, North Carolina State University; President, The Math Door
Alan Maloney, Vice-President, The Math Door
part of the problem space–and may misrepresent or ignore other parts, and thus requires further accommodations, revisions, or distinctions to become complete. The field has now progressed to develop
and synthesize research-based learning trajectories that describe patterns in students’ responses to solving challenging tasks as they advance from naive to sophisticated reasoning about target concepts.
We view LTs using the metaphor of a climbing wall (Confrey, Shah, & Maloney, 2022). This vision is in marked contrast to the metaphor of a ladder or to Piaget’s stage theories which postulate a
single path with prerequisite steps (Confrey, 2019). Climbing walls, comprising a set of handholds, footholds, and obstacles, can be accessed from various positions and the paths themselves can vary.
Some paths prove more likely, hence the possible paths on the wall are probabilistic, not deterministic. Different positions are located at different heights from the ground/floor, capturing the
recognition (based on research data) that some aspects of an idea are more sophisticated than others. This approach recognizes that in LTs “one size does not fit all”, and it allows for students to
enter the space with varied experience and to utilize diverse funds of knowledge. LTs provide a means to inform practitioners and learners about important findings from the learning sciences
concerning specific mathematical ideas.
A further point about LTs is that they can be usefully shared with students to support student agency. One application of LTs is to create psychometrically valid measures of LTs. The Math Door’s
diagnostic assessment tool, Math-Mapper 6-9 (Confrey, 2015), establishes LTs for all of typical middle grades mathematics, and then documents student progress along LTs; these are formative
assessments which are returned to both teachers and students as feedback on and for instructional actions. Students are provided opportunities to review the data with others, to revise and resubmit
their responses, to practice, and to retake psychometrically equivalent assessments as desired. In classrooms, we have observed students spontaneously adopting the language of the LT levels to focus
their attention on topics in which they are personally weaker, as well as to assist peers on LTs or levels.
To promote equitable outcomes using LTs coupled with such an assessment tool, the following stipulations are key:
1. The original and ongoing studies must involve students from diverse cultural and economic backgrounds.
2. The levels should be treated as probabilistic, not deterministic.
3. The focus should be movement along LTs; “positions” in any LT should be regarded as transient, and should not be a means to categorize learners into static and restrictive ability profiles.
4. The levels are not propositions to be taught but rather scaffolds, conditions, and guides on how to use interesting and compelling tasks to provoke further evolution of thought.
5. The focus is on students’ ideas, not their deficits, and on supporting the learner's agency.
A critical point to achieving a strong equity perspective is that LTs can change. Researchers and teachers should always be listening for something new to emerge. That emergence may emanate from
various sources—differences in a problem’s context, language, representation, purpose, and motivation. Our research team conducted validation annually of the measurement of the LTs in Math-Mapper
(Confrey, Toutkoushian & Shah, 2019). It was based on a model of linear regression between item difficulty and level. We investigated two types of variation: inter-level variation and intra-level
variation, and sought to eliminate construct-irrelevant variation. Factors influencing unexpected item behavior included student group composition and instructional assignment, as well as
consideration of the task’s numeric values, phrasing, and complexity. In addition to changes in the task itself, we examined whether levels needed clarification and whether an entire trajectory
needed reworking or recontextualizing. Approximately 92% of the 44 LTs with sufficient data showed moderate or strong correlation between item difficulty and level, with major shifts from moderate to
strong correlation for most of the LTs between the first and second validation cycles (Confrey, Toutkoushian, & Shah, 2020). These results emphasize a critical need for such diagnostic assessments to
be undertaken at scale and for the development of further psychometric approaches that lead to systematic progress towards more scientifically secure, valid, and reliable results. Further, the fact
that LTs can be tentative, changeable, and responsive suggests that their development should reside in a trading zone (Lehrer & Schauble, 2015; Confrey, Shah, & Toutkoushian, 2021) with participants,
including practitioners, learning scientists (in math), psychometricians, and even perhaps with roles for students and community members.
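As an illustration only (the numbers below are simulated and are not Math-Mapper data), the core of that validation step can be sketched as a regression of item difficulty on LT level, followed by a check of the strength of the correlation:

import numpy as np

rng = np.random.default_rng(7)

# Simulated data: 20 assessment items, each written to an LT level from 1 to 5.
levels = np.repeat(np.arange(1, 6), 4)
# Item difficulties (e.g., from an IRT calibration) that tend to rise with level,
# plus noise standing in for construct-irrelevant variation.
difficulty = 0.8 * levels + rng.normal(0.0, 0.6, size=levels.size)

slope, intercept = np.polyfit(levels, difficulty, deg=1)
r = np.corrcoef(levels, difficulty)[0, 1]
print(f"slope={slope:.2f}, intercept={intercept:.2f}, r={r:.2f}")
# A strong positive correlation supports the hypothesized ordering of the levels;
# items that fall far from the fitted line would be flagged for review or revision.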
Another critical equity target is providing more consistent, persistent, and high-quality professional development on LTs. Any school, but particularly schools with high teacher turnover rates and/or
fewer math-certified teachers, should commit to long-term (multi-year) efforts to incorporate LTs into instruction gradually. Teachers need to learn to use and trust LTs. One model for successful
implementation includes conducting professional learning communities (PLCs). In our studies (Confrey, Maloney, Belcher, McGowan, Hennessey, & Shah, 2018), teachers shared topic-specific data during
monthly grade-level PLCs and diagnosed patterns of weak and strong student performance. Teachers with stronger outcomes in a topic shared instructional strategies and students’ reactions, thus
increasing instructional capacity in the participating schools. Long-term success required setting and maintaining administrative supports and incentives towards gradual implementation and
improvement and continued outreach to new teachers.
We have directly experienced and observed challenges to the value of LTs in talks, publications, and reviews and discussions of research proposals. These have come primarily from two directions, both
with strong equity connections and with some degree of mischaracterizations of LTs. Challenges are typically grounded in historical precedents that had detrimental effects on minority children. But
are these critiques overly global? Do they actually apply to LTs and, if so, to what degree, and can they be resolved?
The first is opposition to most forms of measurement of LTs as a direct consequence of the negative effects of two decades of high-stakes testing. This position represents a justified response to
overly coarse high-stakes tests with formulaic items and restricted forms of response. High-stakes tests provided minimal feedback, often too generalized to serve any real instructional purposes.
Though they shed light on disparities in achievement, they resulted in a narrowed curriculum and a tendency to describe minority learners in a deficit mode. This has led many to reject any and all
development of measures of learning trajectory levels. This in turn risks denying students precise and timely feedback that effectively allows them to know what they know and what they need to learn.
In our recent work with the Young People’s Project in the Algebra Grand Challenge (Bill and Melinda Gates Foundation), Math-Mapper was used collaboratively by Math Literacy Workers to strengthen
their own understanding of key concepts (and to build a group commitment to mastery by everyone), in preparation for working peer-to-peer with middle school students. Furthermore, our research
reveals that LT measures can also show the degree to which all students are missing out on learning the most sophisticated levels of the LTs (often associated with widespread, overly procedural instruction).
A second source of direct challenge to the value of LTs is related to the influence of sociopolitical theory. In particular, a post-structural analysis (Gutierrez, 2013) can lead researchers to claim
that any form of structured and sequenced description of levels restricts students’ opportunity to express their own choices about content development and identity and limits their agency. For
instance, Gutierrez questioned the implementation of “reform math” programs, suggesting they can reinforce a “static classification system” that is “complicit in the practice of constructing brown
and black bodies in a deficit and overly simplistic manner” (ibid, p. 45.) As pointed out previously (#3 above), we emphasize the formative use of LTs in order to guard against using them for “static
classification”. However, using the ”static classification” argument to reject LTs wholesale seems truly ironic because it ignores the fact that the levels come from studies of students’ own
inventions, and that the emphasis (particularly via the diagnostic assessment data) is on student movement and growth, not stasis and classification. In fact, research on LTs increases the focus on
learners as legitimate and central contributors to mathematics classroom practices. In making such learning sciences research widely available to teachers, LTs support them in scaffolding students to
learn specific concepts deeply. This represents a "both-and" approach to addressing content dimensions and equity, as Confrey has advocated for previously (Confrey, 2010).
LTs' primary roots derive from the learning sciences, and some aspects, such as the emphasis on student voice, bring in sociocultural and sociopolitical considerations. This positioning makes a
discussion of equity dimensions essential, and in this blog post, we identify fundamental elements of the research that are essential to support equity, and others that can benefit from more
attention. Further, we respond to some critiques of LTs by clarifying certain dimensions and advocating for further work in others. For instance, we advocate that, to strengthen equity in mathematics
education, LT researchers lean into that work in order to build and implement comprehensive, flexible diagnostic systems for documenting learners’ progress along LTs (Confrey, 2023).
Confrey, J. (2010). “Both And”—Equity and mathematics: A response to Martin, Gholson, and Leonard. Journal of Urban Mathematics Education 3(2), 25-33.
Confrey, J. (2015). Math-Mapper 6-9. Accessed at www.sudds.co. Raleigh, NC.
Confrey, J. (2019). A Synthesis of Research on Learning Trajectories/Progressions in Mathematics. Commissioned for the OECD 2030 Learning Framework, by OECD Mathematics Curriculum Document Analysis
Project Workshop. Access: http://www.oecd.org/education/2030-project/about/documents/A_Synthesis_of_Research_on_Learning_Trajectories_Progressions_in_Mathematics.pdf
Confrey J. (2023). Strengthening the Instructional Core with Low-Stakes, Formative Diagnostic Assessment based on Mathematics Learning Trajectories. 2023 IES Mathematics Summit; September 19;
Washington, DC, United States.
Confrey, J., Maloney, A. P., Belcher, M., McGowan, W., Hennessey, M., Shah, M. (2018). The concept of an agile curriculum as applied to a middle school mathematics digital learning system (DLS).
International Journal of Educational Research, 92, 158-172.
Confrey, J., Shah, M., & Maloney, A. (2022). Learning trajectories for vertical coherence. Mathematics Teacher: Learning and Teaching PK-12, 115(2), 90-103.
Confrey J., Shah, M. and Toutkoushian, E. (2021). Validation of a learning trajectory-based diagnostic mathematics assessment system as a trading zone. Frontiers in Education, 6, 654353.
Confrey, J., Toutkoushian, E., Shah, M. (2020). Working at scale to initiate ongoing validation of learning trajectory-based classroom assessments for middle grade mathematics. Journal of
Mathematical Behavior, 60, 100818.
Confrey, J., Toutkoushian, E. P., Shah, M. P. (2019). A validation argument from soup to nuts: Assessing progress on learning trajectories for middle school mathematics. Applied Measurement in
Education, 32(1), 23-42.
Lehrer, R., & Schauble, L. (2015). Learning progressions: The whole world is NOT a stage. Science Education, 99(3), 432-437.
Piaget, J. (1970). Genetic epistemology. Columbia University Press.
Simon, M. A. (1995). Reconstructing Mathematics Pedagogy from a Constructivist Perspective. Journal for Research in Mathematics Education, 26(2), 114-145.
The eight bits in the pattern are used as follows:
• One bit is the "sign bit": 1 means negative, 0 means positive.
• Three bits are the "exponent bits": in "excess four" notation, meaning that the usual decimal interpretation of these bits is four more than the actual power of two. When all 8 bits are 0, the entire bit pattern represents the number zero. The exponent pattern 001 means an exponent of -3 (i.e. a multiplier of 1/8), 010 means -2, and so on, up to 111, which means an exponent of 3 (i.e. a multiplier of 8).
• The remaining four bits are the "mantissa bits": the fractional value which is multiplied by two to the exponent value. There is assumed to be a decimal point and a "1" bit to the left. So, the pattern 0000 represents the fraction .10000 (i.e. 1/2). Similarly, the pattern 0001 represents the fraction .10001 (i.e. 17/32), and on up to 1111, which represents the fraction .11111 (i.e. 31/32).
In the table below, the exponent bit patterns label each of the columns. There are only 7 patterns because the exponent bit pattern 000 means that the entire number is zero. This wastes some of the bit patterns because the entire 8 bits represent zero whatever bit patterns are present in the remaining bit positions.
The mantissa bit patterns label each of the rows.
Each cell of the table shows the real number represented by the overall 8 bit pattern. Not shown are the negative numbers, which use the same bit pattern except for the sign bit.
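To make the scheme concrete, here is a small decoder sketch in Python. It assumes the bit order implied by the description above (sign bit first, then the three exponent bits, then the four mantissa bits); the post itself does not pin the positions down, so treat the layout as illustrative.

def mfp_decode(bits: str) -> float:
    """Decode an 8-bit Mechanics' Floating Point pattern such as '01111111'.

    Assumed layout: 1 sign bit, 3 exponent bits (excess four), 4 mantissa bits.
    """
    assert len(bits) == 8 and set(bits) <= {"0", "1"}
    sign, exp_bits, man_bits = bits[0], bits[1:4], bits[4:]
    if exp_bits == "000":                     # reserved: the whole number is zero
        return 0.0
    exponent = int(exp_bits, 2) - 4           # "excess four" notation
    fraction = (16 + int(man_bits, 2)) / 32   # implied leading 1: .1mmmm
    value = fraction * 2 ** exponent
    return -value if sign == "1" else value

print(mfp_decode("00010000"))   # 0.0625 -> one sixteenth, the smallest positive value
print(mfp_decode("01111111"))   # 7.75   -> seven and three quarters, the largest value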
This table can also be found at
Several years ago (Fall 1993) I gave a presentation at Novell, as part of their informal lecture series, named "Food For Thought." Somewhere in a dusty archive there may still be a VHS video tape of
the presentation, which was entitled "My Computer Can't Add." I presented part of this again during a job interview at UVSC in 2001.
The idea of a computer not being able to add may seem odd. But, a computer doesn't actually deal with numbers, but only with representations of numbers. Except for some very simple and small numbers,
the representations are not completely accurate. Part of the problem is round-off errors. The rest of the problem comes from the fact that the representations are far from being a complete set, as
only some of the numbers are actually represented at all.
Until recently, personal computers only represented integers between -2147483648 and 2147483647. Granted, those aren't small numbers (as in "pick a number between 1 and 100"), but they aren't big
enough to handle the national debt. Newer computers are using 64 bit processors, enabling them to represent integers between -9223372036854775808 and 9223372036854775807.
However, money values and numbers used in scientific computations are generally handled by the "floating point" representation. Here one can represent approximations to real numbers in a larger range
than the integer representations provide, but giving up precision. The floating point representation generally uses either 32 or 64 bits, but uses the bits differently than the way they are used when
representing integers.
To make this clear, I invented "Mechanics' Floating Point" for the purpose of my presentation. This uses a floating point style representation inside of just 8 bits to represent numbers between 1
sixteenth and 8 (well, almost--the largest number representable in the scheme is actually 7 and 3 quarters). Choosing 8 bits allows one to enumerate all of the representations possible in 8 bits
(there are just 256 of them (see "Powers of two")).
In a subsequent post, I will try to find a way to display the entire representation of Mechanics' Floating Point.
What's the fastest time to run 360 feet around a baseball diamond? (calculation)
23 Mar 2024
Running Speed Around a Baseball Diamond
This calculator provides the calculation of speed for running around a baseball diamond.
Calculation Example: The speed at which you run around a baseball diamond can be calculated using the formula: s = d/t, where s is the speed, d is the distance, and t is the time taken. This formula
can be used to determine your average speed or to compare your speed to others.
Related Questions
Q: What is the world record for running 360 feet around a baseball diamond?
A: The world record for running 360 feet around a baseball diamond is 13.46 seconds, set by Usain Bolt in 2011.
Q: How can I improve my speed around the baseball diamond?
A: There are a few things you can do to improve your speed around the baseball diamond, including practicing your starts, working on your acceleration, and improving your overall fitness.
Calculation Expression
Speed: The speed (s) can be calculated using the formula: s = d/t
Calculated values
Considering these as variable values: d=360.0, the calculated value(s) are given below
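As a minimal worked example of the expression, using the record time quoted above as the input:

# d is fixed at 360 feet; t is taken from the record time quoted above.
d = 360.0     # feet, the distance around the bases
t = 13.46     # seconds

s = d / t     # speed = distance / time
print(f"{s:.2f} ft/s")   # about 26.75 ft/s, roughly 18 mph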
What Happens After the Model?
Thanks for staying until the last talk of the conference!
My goal is to stimulate our thoughts on supporting models once they are deployed.
Most of this talk is informed by my experiences in early drug discovery as well as developing algorithms for instrumented diagnostics (infectious diseases).
Let's start with an example.
Some example data
Computational chemistry QSAR data were simulated for a numeric outcome:
• n = 6,000 training set.
• n = 2,000 test set.
• n = 2,000 validation set.
• 20 molecular descriptors (unrealistically small)
Letâ s suppose it is an assay to measure blood-brain-barrier penetration.
Model Development
A few models were tuned:
• boosted trees (lightgbm)
• Cubist rules
• nearest-neighbor regression
• neural networks (single layer, FF)
Several preprocessors were also assessed: nothing, partial least squares, and the spatial sign.
Each was tuned over its main parameters using 50 candidates.
The validation set RMSE was used to choose within- and between-models.
Model Selection
We ended up using one of the numerically best models: a neural network
• 9 hidden units with tanh activation
• weight decay of \(10^{-9.1}\)
• a learning rate of \(10^{-1.5}\), trained over 221 epochs
Performance statistics (RMSE)
• validation set: 0.254
• test set: 0.258
It's pretty easy to just look at the metrics (RMSE) and make decisions.
The only way to be comfortable with your data is to never look at them.
For any type of model, we should check the calibration of the results. Are they consistent with what we see in nature?
• Classification: we try to see if our probability estimates match the rate of the event.
• Regression: we plot observed vs predicted values.
Some models (like ensembles) tend to under-predict at the tails of the outcome distribution.
If that's the case, our best avenue is to try a different model.
Otherwise, we can try to estimate the calibration trend and factor it out.
Data usage and validation can be tricky with this approach but it can work well.
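The talk's own tooling is R/tidymodels; purely as an illustration of estimating the trend and factoring it out, a hedged Python sketch might fit a linear calibration on a held-out set and apply it to new predictions:

import numpy as np

def fit_linear_calibration(observed, predicted):
    """Fit observed ~ a + b * predicted on a held-out calibration set."""
    b, a = np.polyfit(predicted, observed, deg=1)
    return a, b

def recalibrate(predicted, a, b):
    """Map raw predictions onto the calibrated scale estimated above."""
    return a + b * np.asarray(predicted)

# Toy example: a model that systematically under-predicts at the high end.
rng = np.random.default_rng(0)
truth = rng.uniform(0, 10, 500)
raw_pred = 0.8 * truth + 0.5 + rng.normal(0, 0.3, truth.size)   # miscalibrated

a, b = fit_linear_calibration(truth, raw_pred)
adjusted = recalibrate(raw_pred, a, b)
print(np.mean(np.abs(truth - raw_pred)), np.mean(np.abs(truth - adjusted)))

Whether such a correction is trustworthy still depends on keeping the calibration data separate from the data used to fit the model, which is the data-usage caveat above.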
What's Next?
Let's assume that we will enable others to get predictions from our model.
In our example, we would deploy our model so that medicinal chemists would predict specific compounds or make predictions en masse.
We have consumers of our models now.
What other activities should we pursue to ensure that the model is used effectively and safely?
• Documentation
• Characterization
• Monitoring
How was the model created?
• Methodology
• Data
□ numbers
□ scope (local or global?)
□ limitations
□ provenance
• Efficacy claims ("our test set RMSE was…")
How does the model function?
• Mathematically
• What are the main ingredients?
• Where is it applicable? WCGW?
• How shall I explain predictions?
• Is it fair?
How Does it Work?
There is a whole field of literature on model explainers.
These can be categorized into two groups: global and local explainers.
• Global methods characterize the model.
• Local explainers elucidate predictions.
We'll look at two global methods.
Importance Scores
Variable importance scores are used to quantify the overall effect of a predictor on the model.
There are model-specific methods to compute importance for some models.
More broadly, a permutation approach can be used to eliminate a predictor's effect on the model and see how performance changes.
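The talk works in R/tidymodels, but the permutation idea is model-agnostic; as an illustrative Python sketch (the helper names and the RMSE metric here are my own), shuffle one column at a time and record how much the error grows:

import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def permutation_importance(model, X, y, metric=rmse, n_repeats=5, seed=0):
    """Average increase in `metric` when each column of X is shuffled.

    Works with any fitted regression model that exposes a .predict(X) method.
    """
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = X[rng.permutation(len(X)), j]   # break column j's link to y
            increases.append(metric(y, model.predict(X_perm)) - baseline)
        importances[j] = np.mean(increases)
    return importances

Columns whose shuffling barely changes the error contribute little to the model.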
Partial Dependence Plots
For important features, we can also understand the average relationship between a predictor and the outcome.
Partial dependence plots and similar tools can help consumers understand (generally) why a predictor matters.
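Again as a generic sketch rather than the talk's actual code (fitted_model and X_train below are placeholders), the averaging behind a one-dimensional partial dependence curve is only a few lines: fix one predictor at each grid value, leave the others as observed, and average the predictions.

import numpy as np

def partial_dependence(model, X, feature, grid_size=20):
    """One-dimensional partial dependence of model predictions on one column of X."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    averages = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value             # force every row to this value
        averages.append(model.predict(X_mod).mean())
    return grid, np.array(averages)

# grid, curve = partial_dependence(fitted_model, X_train, feature=3)
# Plotting curve against grid shows the average predicted outcome as that
# descriptor sweeps across its observed range.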
Prediction Intervals
For end-users, a measure of uncertainty in predictions can be very helpful.
An X% prediction interval is a bound where the next observed value is within the bound X% of the time.
Most ML models cannot easily make these but two tools that can work for any regression model are:
• Bootstrap intervals (expensive but solid theory)
• Conformal inference (fast but still evolving)
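As a simplified illustration of the conformal option (a split-conformal recipe in Python, not the talk's implementation): hold out a calibration set, take a quantile of the absolute residuals, and add it symmetrically around each new prediction.

import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_new, alpha=0.1):
    """Symmetric (1 - alpha) prediction intervals via split conformal inference."""
    residuals = np.abs(y_cal - model.predict(X_cal))
    n = len(residuals)
    # Finite-sample-adjusted quantile of the calibration residuals.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, q_level)
    preds = model.predict(X_new)
    return preds - q, preds + q

# lower, upper = split_conformal_interval(fitted_model, X_cal, y_cal, X_test, alpha=0.1)
# Roughly 90% of new observed values should then fall inside [lower, upper].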
Tracking Performance
If we deploy a model, especially with an internal/public API, we should check to see how it does over time.
Assuming that we get labeled data within some unit of time, we should report performance (preferably to the customers).
Let's look at the first 10 post-deployment weeks where about 40 molecules are available each week.
Post- Deployment Monitoring
We often hear about model drift but there is no such thing.
The data may drift over time, and that can affect how well our model works if we end up extrapolating outside of our training set.
There is also concept drift: the model starts being used for some other purpose or with some other population.
The assay simulated here was designed to
• measure whether compounds crossed the blood-brain barrier…
• mostly to verify that they do not get into the brain.
Maybe we should look into this…
Data Drift or Concept Drift?
Smaller molecules
Define the Applicability Domain
Prior to releasing a model, document what it is intended to do and for what population.
• This is called the model's applicability domain.
We can treat the training set as a multivariate reference distribution and try to measure how much (if at all) new samples extrapolate beyond it.
• Hat values
• Principal component analysis
• Isolation forests, etc.
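One concrete way to turn the PCA option into a numeric score (an illustrative Python sketch assuming scikit-learn; the helper names are mine) is to fit PCA on the training descriptors and compare each new sample's reconstruction error with the training distribution:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def reconstruction_error(scaler, pca, X):
    """Per-sample RMS error after projecting onto the PCA space and back."""
    Z = scaler.transform(X)
    Z_hat = pca.inverse_transform(pca.transform(Z))
    return np.sqrt(((Z - Z_hat) ** 2).mean(axis=1))

def fit_domain(X_train, n_components=5):
    """Fit a PCA reference distribution on the training descriptors."""
    scaler = StandardScaler().fit(X_train)
    pca = PCA(n_components=n_components).fit(scaler.transform(X_train))
    return scaler, pca, reconstruction_error(scaler, pca, X_train)

def domain_score(scaler, pca, train_err, X_new):
    """Fraction of training errors below each new sample's error (near 1 = extrapolation)."""
    new_err = reconstruction_error(scaler, pca, X_new)
    return np.searchsorted(np.sort(train_err), new_err) / len(train_err)

A score near 1, like the 0.97 quoted later in these slides, would flag the corresponding prediction as an extrapolation.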
PCA for Applicability Domain
PCA Reference Distribution
Monitoring via Isolation Forests
Scoring New Data
Using any of the applicability domain methods, we can add a second unsupervised score to go along with each individual prediction:
Your assay value was predicted to be 6.28, indicating that the molecule significantly crosses the blood-brain barrier.
However, the prediction is an extrapolation that is very different from the data that was used to create the model (score: 0.97). Use this prediction with extreme caution!
Thanks for the invitation to speak today!
The tidymodels team: Hannah Frick, Emil Hvitfeldt, and Simon Couch.
Special thanks to the other folks who contributed so much to tidymodels: Davis Vaughan, Julia Silge, Edgar Ruiz, Alison Hill, Desirée De Leon, Marly Gotti, our previous interns, and the tidyverse
References (1/2)
Model fairness:
Conformal Inference
References (2/2)
Applicability Domains:
Python Program to Find Volume and Surface Area of Sphere - BTech Geeks
Python Program to Find Volume and Surface Area of Sphere
In the previous article, we have discussed Python Program to Find Volume and Surface Area of a Cube
Given the radius and the task is to find the volume and surface area of the given sphere.
Surface Area of the sphere:
A sphere resembles a basketball; it is the three-dimensional analogue of a circle. If we know the radius of the Sphere,
we can use the following formula to calculate the Surface Area of the Sphere:
The Surface Area of a Sphere = 4πr²
r= radius of the given sphere
The Volume of the sphere:
Volume is the amount of space inside the sphere. If we know the radius of the sphere, we can use the following formula to calculate the volume of the sphere:
The volume of a Sphere = (4/3)πr³
Example 1:
Given radius = 7.5
The Surface area of the given Sphere with the radius [ 7.5 ]= 706.500
The volume of the given Sphere with the radius [ 7.5 ]= 1766.250
Example 2:
Given radius = 3
The Surface area of the given Sphere with the radius [ 3 ]= 113.040
The volume of the given Sphere with the radius [ 3 ]= 113.040
Program to Find Volume and Surface Area of Sphere
Below are the ways to find the volume and surface area of the given sphere:
Method #1: Using Mathematical Formula (Static Input)
• Give the radius of a sphere as static input and store it in a variable.
• Take a variable and initialize it with the “pi” value i.e. 3.14 and store it in another variable.
• Calculate the surface area of the given sphere using the above mathematical formula and store it in another variable.
• Calculate the volume of the given sphere using the above mathematical formula and store it in another variable.
• Print the surface area of the given sphere.
• Print the volume of the given sphere.
• The Exit of the Program.
Below is the implementation:
# Give the radius of a sphere as static input and store it in a variable.
gvn_radis = 7.5
# Take a variable and initialize it with the "pi" value i.e. 3.14 and
# store it in another variable.
standard_pi_val = 3.14
# Calculate the surface area of the given sphere using the above mathematical formula
# and store it in another variable.
surf_area = 4 * standard_pi_val * gvn_radis * gvn_radis
# Calculate the volume of the given sphere using the above mathematical formula and
# store it in another variable.
sphre_volm = (4 / 3) * standard_pi_val * gvn_radis * gvn_radis * gvn_radis
# Print the surface area of the given sphere.
print("The Surface area of the given Sphere with the radius [", gvn_radis, "]= %.3f" % surf_area)
# Print the volume of the given sphere.
print("The volume of the given Sphere with the radius [", gvn_radis, "]= %.3f" % sphre_volm)
The Surface area of the given Sphere with the radius [ 7.5 ]= 706.500
The volume of the given Sphere with the radius [ 7.5 ]= 1766.250
Method #2: Using Mathematical Formula (User Input)
• Give the radius of a sphere as user input using the float(input()) function and store it in a variable
• Take a variable and initialize it with the “pi” value i.e. 3.14 and store it in another variable.
• Calculate the surface area of the given sphere using the above mathematical formula and store it in another variable.
• Calculate the volume of the given sphere using the above mathematical formula and store it in another variable.
• Print the surface area of the given sphere.
• Print the volume of the given sphere.
• The Exit of the Program.
Below is the implementation:
# Give the radius of a sphere as user input using the float(input()) function
# and store it in a variable
gvn_radis = float(input("Enter some random variable = "))
# Take a variable and initialize it with the "pi" value i.e. 3.14 and
# store it in another variable.
standard_pi_val = 3.14
# Calculate the surface area of the given sphere using the above mathematical formula
# and store it in another variable.
surf_area = 4 * standard_pi_val * gvn_radis * gvn_radis
# Calculate the volume of the given sphere using the above mathematical formula and
# store it in another variable.
sphre_volm = (4 / 3) * standard_pi_val * gvn_radis * gvn_radis * gvn_radis
# Print the surface area of the given sphere.
print("The Surface area of the given Sphere with the radius [", gvn_radis, "]= %.3f" % surf_area)
# Print the volume of the given sphere.
print("The volume of the given Sphere with the radius [", gvn_radis, "]= %.3f" % sphre_volm)
Enter some random variable = 9
The Surface area of the given Sphere with the radius [ 9.0 ]= 1017.360
The volume of the given Sphere with the radius [ 9.0 ]= 3052.080
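As an aside (not part of the original tutorial), the same calculation can be done with math.pi instead of the rounded value 3.14 for slightly more precise results; a minimal sketch with illustrative names:
import math
def sphere_surface_and_volume(radius):
    # surface area = 4*pi*r^2, volume = (4/3)*pi*r^3
    surface_area = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return surface_area, volume
area, vol = sphere_surface_and_volume(7.5)
print("Surface area = %.3f" % area)  # 706.858 with math.pi (706.500 with pi = 3.14)
print("Volume = %.3f" % vol)         # 1767.146 with math.pi (1766.250 with pi = 3.14)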
Explore more instances related to python concepts from Python Programming Examples Guide and get promoted from beginner to professional programmer level in Python Programming Language. | {"url":"https://btechgeeks.com/python-program-to-find-volume-and-surface-area-of-sphere/","timestamp":"2024-11-08T23:58:44Z","content_type":"text/html","content_length":"64714","record_id":"<urn:uuid:66e99c71-9b22-4fd2-a352-f04e1e12b15c>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00158.warc.gz"} |
Perform Rolling Computations on Time Series Data Using Pandas Rolling() Function
By: Adam Richardson
What is the rolling() function in Pandas?
The rolling() function in Pandas is a powerful tool for performing rolling computations on time series data. Essentially, the rolling() function splits the data into a “window” of size n, computes some function on that window (for example, the mean), then slides the window forward one observation at a time and repeats the process. This can be incredibly useful for identifying patterns and
trends in time series data.
To use rolling(), you first need to create a Pandas DataFrame or Series object containing your time series data. From there, you can apply the rolling() function to your data by chaining it to your
DataFrame or Series object. Here’s an example using the mean function:
import pandas as pd
# create some dummy time series data
data = pd.Series([1, 2, 3, 4, 5])
# apply the rolling() function to compute the rolling mean with a window size of 3
rolling_mean = data.rolling(window=3).mean()
This will output the following:
0 NaN
1 NaN
2 2.000000
3 3.000000
4 4.000000
dtype: float64
As you can see, the rolling() function has computed the rolling mean for each window of size 3 in the data. Note that the first two values are NaN since there aren’t enough observations to compute a mean over a full window of three.
The rolling() function can be customized in a number of different ways. You can adjust the size of the window (window parameter), the function that is computed on the window (min, max, sum, count,
etc.), and the way in which the window is aligned with the observations (center, right, or left). For example:
# create some dummy time series data
data = pd.Series([1, 3, 4, 7, 11, 10, 12, 15])
# compute the rolling sum with a window of size 4 and align the window to the right
rolling_sum = data.rolling(window=4, min_periods=1, center=False).sum()
This will output the following:
0 1.0
1 4.0
2 8.0
3 15.0
4 25.0
5 32.0
6 37.0
7 49.0
dtype: float64
As you can see, we’ve customized the rolling() function to compute the rolling sum with a window of size 4 and aligned the window to the right. The min_periods=1 parameter tells Pandas to compute the
function even if there aren’t enough observations to fill the window. This is why the first three values of rolling_sum are partial sums of the available observations (1, 1+3, 1+3+4) rather than NaN.
How to use rolling() for rolling window calculations
Using the rolling() function in Pandas is straightforward once you’ve prepared your data. There are a few key parameters you can tweak to customize the behavior of the function according to your needs.
The first parameter to consider is the window parameter, which sets the size of the moving window. This parameter specifies the number of consecutive observations that will be used to compute the
function. A larger window size will result in a smoother output, but it will also introduce more lag in the results. Conversely, a smaller window size will result in more variability in the output,
but it will be more sensitive to short-term changes in the data.
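For instance, a quick way to see this smoothing/lag trade-off on dummy data (illustrative values only):
import pandas as pd
data = pd.Series([2, 9, 3, 11, 5, 12, 4, 13])
# small window: reacts quickly to changes but is noisier
print(data.rolling(window=2).mean())
# larger window: smoother output, but lags behind the latest changes
print(data.rolling(window=5).mean())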
Another key parameter is the min_periods parameter, which specifies the minimum number of observations required to calculate the rolling statistic. By default, min_periods is set to the size of the
window, but you can customize this to fit your needs. For example, if you want to calculate the rolling mean of the last 3 observations, but you only have 2 observations in your dataset, setting
min_periods=1 will allow you to calculate the rolling mean using those 2 observations.
The center parameter controls the alignment of the rolling window with respect to the index of the data. By default, center is set to False, which means that the window is aligned to the right of the
current observation. Setting center to True will align the window to the center of the observations.
Let’s see an example of how to use these parameters to calculate the rolling mean of a time series:
import pandas as pd
import numpy as np
# create a datetime index
idx = pd.date_range('20220101 09:00:00', periods=20, freq='T')
# create random time series data
data = pd.Series(np.random.randint(0, 100, size=len(idx)), index=idx)
# resampling to 5-minute intervals
data = data.resample('5T').ohlc()
# calculate rolling mean with a window size of 2
rolling_mean = data['close'].rolling(window=2, min_periods=1, center=False).mean()
# print the result
print(rolling_mean)
This will output:
2022-01-01 09:00:00 71.0
2022-01-01 09:05:00 17.0
2022-01-01 09:10:00 25.0
2022-01-01 09:15:00 10.0
Here we’re using the rolling() function to calculate the rolling mean of the close column of our DataFrame. We’ve set the window size to 2, so the function is computing the mean of each pair of
consecutive observations. We’ve also set min_periods to 1, which means that even if there’s only one observation left in the window, we’ll still calculate the mean. Lastly, we’ve set center to False,
which means that the window is aligned to the right of each observation.
Tips for optimizing your rolling computations
Here are some tips for optimizing your rolling computations:
1. Use vectorized operations: One way to speed up your rolling computations is to use vectorized operations. Vectorized operations can be applied to entire arrays at once, which can be faster than
looping through each observation individually. For example, instead of using a for loop to calculate the rolling mean of a time series, you can use the rolling() function to create a DataFrame
with rolling statistics for each observation, and then apply a vectorized operation to that DataFrame.
2. Avoid using loops: Loops can be slow, especially when you’re working with large datasets. If possible, try to avoid using loops to perform your rolling computations. Instead, use built-in Pandas
functions and methods that are optimized for speed and efficiency.
3. Use a rolling window on normalized data: Normalizing your data can help to make your rolling computations more accurate and efficient. If your data has a large range of values, applying a rolling
window directly to the raw data can result in large variations in the rolling statistics. By normalizing your data before applying the rolling window, you can reduce these variations and get more
accurate results.
4. Use the rolling() function properly: The rolling() function has a number of parameters that can affect the speed and accuracy of your rolling computations. Be sure to use these parameters
properly to optimize your code. For example, setting the min_periods parameter to a low value can help to speed up your code, but it can also affect the accuracy of your results if you have
missing data.
5. Consider an exponentially weighted average instead of a plain rolling window: In some cases, an exponentially weighted moving average (Pandas’ ewm()) may be more appropriate than a fixed rolling window for calculating statistics on time series data. A plain rolling mean weights every observation in the window equally, whereas an exponentially weighted average gives more weight to observations closer to the current time. Depending on your specific use case, one or the other may be more accurate or more efficient (see the sketch below).
By following these tips, you can optimize your rolling computations and get more accurate and efficient results.
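As a rough illustration of tip 5, the sketch below (dummy data, illustrative only) contrasts a plain rolling mean with Pandas’ exponentially weighted ewm() mean:
import pandas as pd
data = pd.Series([1, 3, 4, 7, 11, 10, 12, 15])
# plain rolling mean: every observation in the window has equal weight
rolling_mean = data.rolling(window=3, min_periods=1).mean()
# exponentially weighted mean: recent observations get more weight
ewm_mean = data.ewm(span=3, adjust=False).mean()
print(pd.DataFrame({"rolling": rolling_mean, "ewm": ewm_mean}))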
In this article, we’ve discussed the rolling() function in Pandas for performing rolling computations on time series data. We’ve explored some key parameters you can customize to get the results you
need, and we’ve shared some tips on optimizing your rolling computations. If you work with time series data, the rolling() function is a powerful tool to have in your toolkit. By understanding how to
use it effectively, you can gain valuable insights into trends and patterns in your data that can help you make more informed decisions. My personal advice is to experiment with different window
sizes and functions to find the right balance between smoothness and sensitivity in your results. And remember to pay attention to the parameters you use, as they can have a significant impact on the
accuracy and efficiency of your code. | {"url":"https://www.cojolt.io/blog/perform-rolling-computations-on-time-series-data-using-pandas-rolling-function","timestamp":"2024-11-07T06:36:09Z","content_type":"text/html","content_length":"62557","record_id":"<urn:uuid:6ddcbf45-eed8-40d8-8d1d-196792d86538>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00213.warc.gz"} |
MPS and MPO
A finite size matrix product state type. Keeps track of the orthogonality center.
A finite size matrix product operator type. Keeps track of the orthogonality center.
Construct an MPS with N sites with default constructed ITensors.
MPS([::Type{ElT} = Float64, ]sites; linkdims=1)
Construct an MPS filled with Empty ITensors of type ElT from a collection of indices.
Optionally specify the link dimension with the keyword argument linkdims, which by default is 1.
In the future we may generalize linkdims to allow specifying each individual link dimension as a vector, and additionally allow specifying quantum numbers.
random_mps(sites::Vector{<:Index}; linkdims=1)
random_mps(eltype::Type{<:Number}, sites::Vector{<:Index}; linkdims=1)
Construct a random MPS with link dimension linkdims which by default has element type Float64.
linkdims can also accept a Vector{Int} with length(linkdims) == length(sites) - 1 for constructing an MPS with non-uniform bond dimension.
random_mps(eltype::Type{<:Number}, sites::Vector{<:Index}; linkdims=1)
Construct a random MPS with link dimension linkdims of type eltype.
linkdims can also accept a Vector{Int} with length(linkdims) == length(sites) - 1 for constructing an MPS with non-uniform bond dimension.
random_mps(sites::Vector{<:Index}, state; linkdims=1)
Construct a real, random MPS with link dimension linkdims, made by randomizing an initial product state specified by state. This version of random_mps is necessary when creating QN-conserving random
MPS (consisting of QNITensors). The initial state array provided determines the total QN of the resulting random MPS.
Construct a product state MPS having site indices sites, and which corresponds to the initial state given by the array states. The states array may consist of either an array of integers or strings,
as recognized by the state function defined for the relevant Index tag type.
N = 10
sites = siteinds("S=1/2", N)
states = [isodd(n) ? "Up" : "Dn" for n in 1:N]
psi = MPS(sites, states)
Construct a product state MPS of element type T, having site indices sites, and which corresponds to the initial state given by the array states. The input states may be an array of strings or an
array of ints recognized by the state function defined for the relevant Index tag type. In addition, a single string or int can be input to create a uniform state.
N = 10
sites = siteinds("S=1/2", N)
states = [isodd(n) ? "Up" : "Dn" for n in 1:N]
psi = MPS(ComplexF64, sites, states)
phi = MPS(sites, "Up")
Construct a product state MPS with element type Float64 and nonzero values determined from the input IndexVals.
MPS(::Type{T<:Number}, ivals::Vector{<:Pair{<:Index}})
Construct a product state MPS with element type T and nonzero values determined from the input IndexVals.
Make an MPO of length N filled with default ITensors.
MPO([::Type{ElT} = Float64}, ]sites, ops::Vector{String})
Make an MPO with pairs of sites s[i] and s[i]' and operators ops on each site.
MPO([::Type{ElT} = Float64, ]sites, op::String)
Make an MPO with pairs of sites s[i] and s[i]' and operator op on every site.
Make a shallow copy of an MPS or MPO. By shallow copy, it means that a new MPS/MPO is returned, but the data of the tensors are still shared between the returned MPS/MPO and the original MPS/MPO.
Therefore, replacing an entire tensor of the returned MPS/MPO will not modify the input MPS/MPO, but modifying the data of the returned MPS/MPO will modify the input MPS/MPO.
Use deepcopy for an alternative that copies the ITensors as well.
julia> using ITensors, ITensorMPS
julia> s = siteinds("S=1/2", 3);
julia> M1 = random_mps(s; linkdims=3);
julia> norm(M1)
julia> M2 = copy(M1);
julia> M2[1] *= 2;
julia> norm(M1)
julia> norm(M2)
julia> M3 = copy(M1);
julia> M3[1] .*= 3; # Modifies the tensor data
julia> norm(M1)
julia> norm(M3)
Make a deep copy of an MPS or MPO. By deep copy, it means that a new MPS/MPO is returned that doesn't share any data with the input MPS/MPO.
Therefore, modifying the resulting MPS/MPO will not modify the original MPS/MPO.
Use copy for an alternative that performs a shallow copy that avoids copying the ITensor data.
julia> using ITensors, ITensorMPS
julia> s = siteinds("S=1/2", 3);
julia> M1 = random_mps(s; linkdims=3);
julia> norm(M1)
julia> M2 = deepcopy(M1);
julia> M2[1] .*= 2; # Modifies the tensor data
julia> norm(M1)
julia> norm(M2)
julia> M3 = copy(M1);
julia> M3[1] .*= 3; # Modifies the tensor data
julia> norm(M1)
julia> norm(M3)
The element type of the MPS/MPO. Always returns ITensor.
For the element type of the ITensors of the MPS/MPO, use promote_itensor_eltype.
For an MPS or MPO which conserves quantum numbers, compute the total QN flux. For a tensor network such as an MPS or MPO, the flux is the sum of fluxes of each of the tensors in the network. The name
totalqn is an alias for flux.
Return true if the MPS or MPO has tensors which carry quantum numbers.
The number of sites of an MPS/MPO.
Get the maximum link dimension of the MPS or MPO.
The minimum this will return is 1, even if there are no link indices.
siteinds(commoninds, A::MPO, B::MPS, j::Integer; kwargs...)
siteinds(commonind, A::MPO, B::MPO, j::Integer; kwargs...)
Get the site index (or indices) of the jth MPO tensor of A that is shared with MPS/MPO B.
siteinds(uniqueinds, A::MPO, B::MPS, j::Integer; kwargs...)
siteinds(uniqueind, A::MPO, B::MPS, j::Integer; kwargs...)
Get the site index (or indices) of MPO A that is unique to A (not shared with MPS/MPO B).
findsite(M::Union{MPS, MPO}, is)
Return the first site of the MPS or MPO that has at least one Index in common with the Index or collection of indices is.
To find all sites with common indices with is, use the findsites function.
s = siteinds("S=1/2", 5)
ψ = random_mps(s)
findsite(ψ, s[3]) == 3
findsite(ψ, (s[3], s[4])) == 3
M = MPO(s)
findsite(M, s[4]) == 4
findsite(M, s[4]') == 4
findsite(M, (s[4]', s[4])) == 4
findsite(M, (s[4]', s[3])) == 3
findsites(M::Union{MPS, MPO}, is)
Return the sites of the MPS or MPO that have indices in common with the collection of site indices is.
s = siteinds("S=1/2", 5)
ψ = random_mps(s)
findsites(ψ, s[3]) == [3]
findsites(ψ, (s[4], s[1])) == [1, 4]
M = MPO(s)
findsites(M, s[4]) == [4]
findsites(M, s[4]') == [4]
findsites(M, (s[4]', s[4])) == [4]
findsites(M, (s[4]', s[3])) == [3, 4]
firstsiteinds(M::MPO; kwargs...)
Get a Vector of the first site Index found on each site of M.
By default, it finds the first site Index with prime level 0.
linkind(M::MPS, j::Integer)
linkind(M::MPO, j::Integer)
Get the link or bond Index connecting the MPS or MPO tensor on site j to site j+1.
If there is no link Index, return nothing.
siteind(M::MPS, j::Int; kwargs...)
Get the first site Index of the MPS. Return nothing if none is found.
siteind(::typeof(first), M::Union{MPS,MPO}, j::Integer; kwargs...)
Return the first site Index found on the MPS or MPO (the first Index unique to the jth MPS/MPO tensor).
You can choose different filters, like prime level and tags, with the kwargs.
siteinds(::typeof(first), M::MPS)
Get a vector of the first site Index found on each tensor of the MPS.
siteinds(::typeof(only), M::MPS)
Get a vector of the only site Index found on each tensor of the MPS. Errors if more than one is found.
siteinds(::typeof(all), M::MPS)
Get a vector of the all site Indices found on each tensor of the MPS. Returns a Vector of IndexSets.
siteind(M::MPO, j::Int; plev = 0, kwargs...)
Get the first site Index of the MPO found, by default with prime level 0.
siteinds(M::MPO; kwargs...)
Get a Vector of IndexSets of all the site indices of M.
siteinds(M::Union{MPS, MPO}, j::Integer; kwargs...)
Return the site Indices of the MPS or MPO at site j as an IndexSet.
Optionally filter prime tags and prime levels with keyword arguments like plev and tags.
prime[!](M::MPS, args...; kwargs...)
prime[!](M::MPO, args...; kwargs...)
Apply prime to all ITensors of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors. Alternatively apply the function in-place.
prime[!](siteinds, M::MPS, args...; kwargs...)
prime[!](siteinds, M::MPO, args...; kwargs...)
Apply prime to all site indices of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors.
prime[!](linkinds, M::MPS, args...; kwargs...)
prime[!](linkinds, M::MPO, args...; kwargs...)
Apply prime to all link indices of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors.
prime[!](siteinds, commoninds, M1::MPO, M2::MPS, args...; kwargs...)
prime[!](siteinds, commoninds, M1::MPO, M2::MPO, args...; kwargs...)
Apply prime to the site indices that are shared by M1 and M2.
Returns new MPSs/MPOs. The ITensors of the MPSs/MPOs will be a view of the storage of the original ITensors.
prime[!](siteinds, uniqueinds, M1::MPO, M2::MPS, args...; kwargs...)
Apply prime to the site indices of M1 that are not shared with M2. Returns new MPSs/MPOs.
The ITensors of the MPSs/MPOs will be a view of the storage of the original ITensors.
swapprime[!](M::MPS, args...; kwargs...)
swapprime[!](M::MPO, args...; kwargs...)
Apply swapprime to all ITensors of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors. Alternatively apply the function in-place.
setprime[!](M::MPS, args...; kwargs...)
setprime[!](M::MPO, args...; kwargs...)
Apply setprime to all ITensors of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors. Alternatively apply the function in-place.
setprime[!](siteinds, M::MPS, args...; kwargs...)
setprime[!](siteinds, M::MPO, args...; kwargs...)
Apply setprime to all site indices of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors.
setprime[!](linkinds, M::MPS, args...; kwargs...)
setprime[!](linkinds, M::MPO, args...; kwargs...)
Apply setprime to all link indices of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors.
setprime[!](siteinds, commoninds, M1::MPO, M2::MPS, args...; kwargs...)
setprime[!](siteinds, commoninds, M1::MPO, M2::MPO, args...; kwargs...)
Apply setprime to the site indices that are shared by M1 and M2.
Returns new MPSs/MPOs. The ITensors of the MPSs/MPOs will be a view of the storage of the original ITensors.
setprime[!](siteinds, uniqueinds, M1::MPO, M2::MPS, args...; kwargs...)
Apply setprime to the site indices of M1 that are not shared with M2. Returns new MPSs/MPOs.
The ITensors of the MPSs/MPOs will be a view of the storage of the original ITensors.
noprime[!](M::MPS, args...; kwargs...)
noprime[!](M::MPO, args...; kwargs...)
Apply noprime to all ITensors of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors. Alternatively apply the function in-place.
noprime[!](siteinds, M::MPS, args...; kwargs...)
noprime[!](siteinds, M::MPO, args...; kwargs...)
Apply noprime to all site indices of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors.
noprime[!](linkinds, M::MPS, args...; kwargs...)
noprime[!](linkinds, M::MPO, args...; kwargs...)
Apply noprime to all link indices of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors.
noprime[!](siteinds, commoninds, M1::MPO, M2::MPS, args...; kwargs...)
noprime[!](siteinds, commoninds, M1::MPO, M2::MPO, args...; kwargs...)
Apply noprime to the site indices that are shared by M1 and M2.
Returns new MPSs/MPOs. The ITensors of the MPSs/MPOs will be a view of the storage of the original ITensors.
noprime[!](siteinds, uniqueinds, M1::MPO, M2::MPS, args...; kwargs...)
Apply noprime to the site indices of M1 that are not shared with M2. Returns new MPSs/MPOs.
The ITensors of the MPSs/MPOs will be a view of the storage of the original ITensors.
addtags[!](M::MPS, args...; kwargs...)
addtags[!](M::MPO, args...; kwargs...)
Apply addtags to all ITensors of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors. Alternatively apply the function in-place.
addtags[!](siteinds, M::MPS, args...; kwargs...)
addtags[!](siteinds, M::MPO, args...; kwargs...)
Apply addtags to all site indices of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors.
addtags[!](linkinds, M::MPS, args...; kwargs...)
addtags[!](linkinds, M::MPO, args...; kwargs...)
Apply addtags to all link indices of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors.
addtags[!](siteinds, commoninds, M1::MPO, M2::MPS, args...; kwargs...)
addtags[!](siteinds, commoninds, M1::MPO, M2::MPO, args...; kwargs...)
Apply addtags to the site indices that are shared by M1 and M2.
Returns new MPSs/MPOs. The ITensors of the MPSs/MPOs will be a view of the storage of the original ITensors.
addtags[!](siteinds, uniqueinds, M1::MPO, M2::MPS, args...; kwargs...)
Apply addtags to the site indices of M1 that are not shared with M2. Returns new MPSs/MPOs.
The ITensors of the MPSs/MPOs will be a view of the storage of the original ITensors.
removetags[!](M::MPS, args...; kwargs...)
removetags[!](M::MPO, args...; kwargs...)
Apply removetags to all ITensors of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors. Alternatively apply the function in-place.
removetags[!](siteinds, M::MPS, args...; kwargs...)
removetags[!](siteinds, M::MPO, args...; kwargs...)
Apply removetags to all site indices of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors.
removetags[!](linkinds, M::MPS, args...; kwargs...)
removetags[!](linkinds, M::MPO, args...; kwargs...)
Apply removetags to all link indices of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors.
removetags[!](siteinds, commoninds, M1::MPO, M2::MPS, args...; kwargs...)
removetags[!](siteinds, commoninds, M1::MPO, M2::MPO, args...; kwargs...)
Apply removetags to the site indices that are shared by M1 and M2.
Returns new MPSs/MPOs. The ITensors of the MPSs/MPOs will be a view of the storage of the original ITensors.
removetags[!](siteinds, uniqueinds, M1::MPO, M2::MPS, args...; kwargs...)
Apply removetags to the site indices of M1 that are not shared with M2. Returns new MPSs/MPOs.
The ITensors of the MPSs/MPOs will be a view of the storage of the original ITensors.
replacetags[!](M::MPS, args...; kwargs...)
replacetags[!](M::MPO, args...; kwargs...)
Apply replacetags to all ITensors of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors. Alternatively apply the function in-place.
replacetags[!](siteinds, M::MPS, args...; kwargs...)
replacetags[!](siteinds, M::MPO, args...; kwargs...)
Apply replacetags to all site indices of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors.
replacetags[!](linkinds, M::MPS, args...; kwargs...)
replacetags[!](linkinds, M::MPO, args...; kwargs...)
Apply replacetags to all link indices of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors.
replacetags[!](siteinds, commoninds, M1::MPO, M2::MPS, args...; kwargs...)
replacetags[!](siteinds, commoninds, M1::MPO, M2::MPO, args...; kwargs...)
Apply replacetags to the site indices that are shared by M1 and M2.
Returns new MPSs/MPOs. The ITensors of the MPSs/MPOs will be a view of the storage of the original ITensors.
replacetags[!](siteinds, uniqueinds, M1::MPO, M2::MPS, args...; kwargs...)
Apply replacetags to the site indices of M1 that are not shared with M2. Returns new MPSs/MPOs.
The ITensors of the MPSs/MPOs will be a view of the storage of the original ITensors.
settags[!](M::MPS, args...; kwargs...)
settags[!](M::MPO, args...; kwargs...)
Apply settags to all ITensors of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors. Alternatively apply the function in-place.
settags[!](siteinds, M::MPS, args...; kwargs...)
settags[!](siteinds, M::MPO, args...; kwargs...)
Apply settags to all site indices of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors.
settags[!](linkinds, M::MPS, args...; kwargs...)
settags[!](linkinds, M::MPO, args...; kwargs...)
Apply settags to all link indices of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors.
settags[!](siteinds, commoninds, M1::MPO, M2::MPS, args...; kwargs...)
settags[!](siteinds, commoninds, M1::MPO, M2::MPO, args...; kwargs...)
Apply settags to the site indices that are shared by M1 and M2.
Returns new MPSs/MPOs. The ITensors of the MPSs/MPOs will be a view of the storage of the original ITensors.
settags[!](siteinds, uniqueinds, M1::MPO, M2::MPS, args...; kwargs...)
Apply settags to the site indices of M1 that are not shared with M2. Returns new MPSs/MPOs.
The ITensors of the MPSs/MPOs will be a view of the storage of the original ITensors.
expect(psi::MPS, op::AbstractString...; kwargs...)
expect(psi::MPS, op::Matrix{<:Number}...; kwargs...)
expect(psi::MPS, ops; kwargs...)
Given an MPS psi and a single operator name, returns a vector of the expected value of the operator on each site of the MPS.
If multiple operator names are provided, returns a tuple of expectation value vectors.
If a container of operator names is provided, returns the same type of container with names replaced by vectors of expectation values.
Optional Keyword Arguments
• sites = 1:length(psi): compute expected values only for sites in the given range
N = 10
s = siteinds("S=1/2", N)
psi = random_mps(s; linkdims=8)
Z = expect(psi, "Sz") # compute for all sites
Z = expect(psi, "Sz"; sites=2:4) # compute for sites 2,3,4
Z3 = expect(psi, "Sz"; sites=3) # compute for site 3 only (output will be a scalar)
XZ = expect(psi, ["Sx", "Sz"]) # compute Sx and Sz for all sites
Z = expect(psi, [1/2 0; 0 -1/2]) # same as expect(psi,"Sz")
s = siteinds("Electron", N)
psi = random_mps(s; linkdims=8)
dens = expect(psi, "Ntot")
updens, dndens = expect(psi, "Nup", "Ndn") # pass more than one operator
Given an MPS psi and two strings denoting operators (as recognized by the op function), computes the two-point correlation function matrix C[i,j] = <psi| Op1i Op2j |psi> using efficient MPS
techniques. Returns the matrix C.
Optional Keyword Arguments
• sites = 1:length(psi): compute correlations only for sites in the given range
• ishermitian = false : if false, force independent calculations of the matrix elements above and below the diagonal, while if true assume they are complex conjugates.
For a correlation matrix of size NxN and an MPS of typical bond dimension m, the scaling of this algorithm is N^2*m^3.
N = 30
m = 4
s = siteinds("S=1/2", N)
psi = random_mps(s; linkdims=m)
Czz = correlation_matrix(psi, "Sz", "Sz")
Czz = correlation_matrix(psi, [1/2 0; 0 -1/2], [1/2 0; 0 -1/2]) # same as above
s = siteinds("Electron", N; conserve_qns=true)
psi = random_mps(s, n -> isodd(n) ? "Up" : "Dn"; linkdims=m)
Cuu = correlation_matrix(psi, "Cdagup", "Cup"; sites=2:8)
dag[!](M::MPS, args...; kwargs...)
dag[!](M::MPO, args...; kwargs...)
Apply dag to all ITensors of an MPS/MPO, returning a new MPS/MPO.
The ITensors of the MPS/MPO will be a view of the storage of the original ITensors. Alternatively apply the function in-place.
Given an MPS (or MPO), return a new MPS (or MPO) having called dense on each ITensor to convert each tensor to use dense storage and remove any QN or other sparse structure information, if it is not
dense already.
movesite(::Union{MPS, MPO}, n1n2::Pair{Int, Int})
Create a new MPS/MPO where the site at n1 is moved to n2, for a pair n1n2 = n1 => n2.
This is done with a series a pairwise swaps, and can introduce a lot of entanglement into your state, so use with caution.
orthogonalize!(M::MPS, j::Int; kwargs...)
orthogonalize(M::MPS, j::Int; kwargs...)
orthogonalize!(M::MPO, j::Int; kwargs...)
orthogonalize(M::MPO, j::Int; kwargs...)
Move the orthogonality center of the MPS to site j. No observable property of the MPS will be changed, and no truncation of the bond indices is performed. Afterward, tensors 1,2,...,j-1 will be
left-orthogonal and tensors j+1,j+2,...,N will be right-orthogonal.
Either modify in-place with orthogonalize! or out-of-place with orthogonalize.
replacebond!(M::MPS, b::Int, phi::ITensor; kwargs...)
Factorize the ITensor phi and replace the ITensors b and b+1 of MPS M with the factors. Choose the orthogonality with ortho="left"/"right".
Given a normalized MPS m with orthocenter(m)==1, returns a Vector{Int} of length(m) corresponding to one sample of the probability distribution defined by squaring the components of the tensor that
the MPS represents
Given a normalized MPS m, returns a Vector{Int} of length(m) corresponding to one sample of the probability distribution defined by squaring the components of the tensor that the MPS represents. If
the MPS does not have an orthogonality center, orthogonalize!(m,1) will be called before computing the sample.
Given a normalized MPO M, returns a Vector{Int} of length(M) corresponding to one sample of the probability distribution defined by the MPO, treating the MPO as a density matrix.
The MPO M should have an (approximately) positive spectrum.
swapbondsites(ψ::Union{MPS, MPO}, b::Integer; kwargs...)
Swap the sites b and b+1.
truncate!(M::MPS; kwargs...)
truncate!(M::MPO; kwargs...)
Perform a truncation of all bonds of an MPS/MPO, using the truncation parameters (cutoff,maxdim, etc.) provided as keyword arguments.
Keyword arguments:
• site_range=1:N - only truncate the MPS bonds between these sites
apply(o::ITensor, ψ::Union{MPS, MPO}, [ns::Vector{Int}]; kwargs...)
Get the product of the operator o with the MPS/MPO ψ, where the operator is applied to the sites ns. If ns are not specified, the sites are determined by the common indices between o and the site
indices of ψ.
If ns are non-contiguous, the sites of the MPS are moved to be contiguous. By default, the sites are moved back to their original locations. You can leave them where they are by setting the keyword
argument move_sites_back to false.
• cutoff::Real: singular value truncation cutoff.
• maxdim::Int: maximum MPS/MPO dimension.
• apply_dag::Bool = false: apply the gate and the dagger of the gate (only relevant for MPO evolution).
• move_sites_back::Bool = true: after the ITensors are applied to the MPS or MPO, move the sites of the MPS or MPO back to their original locations.
apply(As::Vector{<:ITensor}, M::Union{MPS, MPO}; kwargs...)
Apply the ITensors As to the MPS or MPO M, treating them as gates or matrices from pairs of prime or unprimed indices.
• cutoff::Real: singular value truncation cutoff.
• maxdim::Int: maximum MPS/MPO dimension.
• apply_dag::Bool = false: apply the gate and the dagger of the gate (only relevant for MPO evolution).
• move_sites_back::Bool = true: after the ITensor is applied to the MPS or MPO, move the sites of the MPS or MPO back to their original locations.
Apply one-site gates to an MPS:
N = 3
ITensors.op(::OpName"σx", ::SiteType"S=1/2", s::Index) =
2*op("Sx", s)
ITensors.op(::OpName"σz", ::SiteType"S=1/2", s::Index) =
2*op("Sz", s)
# Make the operator list.
os = [("σx", n) for n in 1:N]
append!(os, [("σz", n) for n in 1:N])
@show os
s = siteinds("S=1/2", N)
gates = ops(os, s)
# Starting state |↑↑↑⟩
ψ0 = MPS(s, "↑")
# Apply the gates.
ψ = apply(gates, ψ0; cutoff = 1e-15)
# Test against exact (full) wavefunction
prodψ = apply(gates, prod(ψ0))
@show prod(ψ) ≈ prodψ
# The result is:
# σz₃ σz₂ σz₁ σx₃ σx₂ σx₁ |↑↑↑⟩ = -|↓↓↓⟩
@show inner(ψ, MPS(s, "↓")) == -1
Apply nonlocal two-site gates and one-site gates to an MPS:
# 2-site gate
function ITensors.op(::OpName"CX", ::SiteType"S=1/2", s1::Index, s2::Index)
  mat = [1 0 0 0
         0 1 0 0
         0 0 0 1
         0 0 1 0]
  return itensor(mat, s2', s1', s2, s1)
end
os = [("CX", 1, 3), ("σz", 3)]
@show os
# Start with the state |↓↑↑⟩
ψ0 = MPS(s, n -> n == 1 ? "↓" : "↑")
# The result is:
# σz₃ CX₁₃ |↓↑↑⟩ = -|↓↑↓⟩
ψ = apply(ops(os, s), ψ0; cutoff = 1e-15)
@show inner(ψ, MPS(s, n -> n == 1 || n == 3 ? "↓" : "↑")) == -1
Perform TEBD-like time evolution:
# Define the nearest neighbor term `S⋅S` for the Heisenberg model
function ITensors.op(::OpName"expS⋅S", ::SiteType"S=1/2",
s1::Index, s2::Index; τ::Number)
O = 0.5 * op("S+", s1) * op("S-", s2) +
0.5 * op("S-", s1) * op("S+", s2) +
op("Sz", s1) * op("Sz", s2)
  return exp(τ * O)
end
τ = -0.1im
os = [("expS⋅S", (1, 2), (τ = τ,)),
("expS⋅S", (2, 3), (τ = τ,))]
ψ0 = MPS(s, n -> n == 1 ? "↓" : "↑")
expτH = ops(os, s)
ψτ = apply(expτH, ψ0)
inner(A::MPS, B::MPS)
inner(A::MPO, B::MPO)
Compute the inner product ⟨A|B⟩. If A and B are MPOs, computes the Frobenius inner product.
Use loginner to avoid underflow/overflow for taking overlaps of large MPS or MPO.
Before ITensors 0.3, inner had a keyword argument make_inds_match that defaulted to true. When true, the function attempted to make the site indices match before contracting. So for example, the inputs could have different site indices, as long as they have the same dimensions or QN blocks. This behavior was fragile since it only worked for MPS with single site indices per tensor, and as of ITensors 0.3 has been deprecated. As of ITensors 0.3 you will need to make sure the MPS or MPO you input have compatible site indices to contract over, such as by making sure the prime levels match.
Same as dot.
See also loginner, logdot.
loginner(A::MPS, B::MPS)
loginner(A::MPO, B::MPO)
Compute the logarithm of the inner product ⟨A|B⟩. If A and B are MPOs, computes the logarithm of the Frobenius inner product.
This is useful for larger MPS/MPO, where in the limit of large numbers of sites the inner product can diverge or approach zero.
Before ITensors 0.3, inner had a keyword argument make_inds_match that defaulted to true. When true, the function attempted to make the site indices match before contracting. So for example, the inputs could have different site indices, as long as they have the same dimensions or QN blocks. This behavior was fragile since it only worked for MPS with single site indices per tensor, and as of ITensors 0.3 has been deprecated. As of ITensors 0.3 you will need to make sure the MPS or MPO you input have compatible site indices to contract over, such as by making sure the prime levels match.
Same as logdot.
See also inner, dot.
logdot(A::MPS, B::MPS)
logdot(A::MPO, B::MPO)
Same as loginner.
See also inner, dot.
inner(y::MPS, A::MPO, x::MPS)
Compute ⟨y|A|x⟩ = ⟨y|Ax⟩ efficiently and exactly without making any intermediate MPOs. In general it is more efficient and accurate than inner(y, apply(A, x)).
This is helpful for computing the expectation value of an operator A, which would be:
inner(x', A, x)
assuming x is normalized.
If you want to compute ⟨By|Ax⟩ you can use inner(B::MPO, y::MPS, A::MPO, x::MPS).
This is helpful for computing the variance of an operator A, which would be:
inner(A, x, A, x) - inner(x', A, x) ^ 2
assuming x is normalized.
Before ITensors 0.3, inner had a keyword argument make_inds_match that defaulted to true. When true, the function attempted to make the site indices match before contracting. So for example, the inputs could have different site indices, as long as they have the same dimensions or QN blocks. This behavior was fragile since it only worked for MPS with single site indices per tensor, and as of ITensors 0.3 has been deprecated. As of ITensors 0.3 you will need to make sure the MPS or MPO you input have compatible site indices to contract over, such as by making sure the prime levels match.
Same as dot.
dot(y::MPS, A::MPO, x::MPS)
Same as inner.
inner(B::MPO, y::MPS, A::MPO, x::MPS)
Compute ⟨By|A|x⟩ = ⟨By|Ax⟩ efficiently and exactly without making any intermediate MPOs. In general it is more efficient and accurate than inner(apply(B, y), apply(A, x)).
This is helpful for computing the variance of an operator A, which would be:
inner(A, x, A, x) - inner(x, A, x) ^ 2
Before ITensors 0.3, inner had a keyword argument make_inds_match that defaulted to true. When true, the function attempted to make the site indices match before contracting. So for example, the inputs could have different site indices, as long as they have the same dimensions or QN blocks. This behavior was fragile since it only worked for MPS with single site indices per tensor, and as of ITensors 0.3 has been deprecated. As of ITensors 0.3 you will need to make sure the MPS or MPO you input have compatible site indices to contract over, such as by making sure the prime levels match.
Same as dot.
dot(B::MPO, y::MPS, A::MPO, x::MPS)
Same as inner.
Compute the norm of the MPS or MPO.
If the MPS or MPO has a well defined orthogonality center, this reduces to the norm of the orthogonality center tensor. Otherwise, it computes the norm with the full inner product of the MPS/MPO with itself.
See also lognorm.
normalize(A::MPS; (lognorm!)=[])
normalize(A::MPO; (lognorm!)=[])
Return a new MPS or MPO A that is the same as the original MPS or MPO but with norm(A) ≈ 1.
In practice, this evenly spreads lognorm(A) over the tensors within the range of the orthogonality center to avoid numerical overflow in the case of diverging norms.
See also normalize!, norm, lognorm.
normalize!(A::MPS; (lognorm!)=[])
normalize!(A::MPO; (lognorm!)=[])
Change the MPS or MPO A in-place such that norm(A) ≈ 1. This modifies the data of the tensors within the orthogonality center.
In practice, this evenly spreads lognorm(A) over the tensors within the range of the orthogonality center to avoid numerical overflow in the case of diverging norms.
If the norm of the input MPS or MPO is 0, normalizing is ill-defined. In this case, we just return the original MPS or MPO. You can check for this case as follows:
s = siteinds("S=1/2", 4)
ψ = 0 * random_mps(s)
lognorm_ψ = []
normalize!(ψ; (lognorm!)=lognorm_ψ)
lognorm_ψ[1] == -Inf # There was an infinite norm
See also normalize, norm, lognorm.
Compute the logarithm of the norm of the MPS or MPO.
This is useful for larger MPS/MPO that are not gauged, where in the limit of large numbers of sites the norm can diverge or approach zero.
See also norm, logdot.
+(A::MPS/MPO...; kwargs...)
add(A::MPS/MPO...; kwargs...)
Add arbitrary numbers of MPS/MPO with each other, optionally truncating the results.
A cutoff of 1e-15 is used by default, and in general users should set their own cutoff for their particular application.
• cutoff::Real: singular value truncation cutoff
• maxdim::Int: maximum MPS/MPO bond dimension
• alg = "densitymatrix": "densitymatrix" or "directsum". "densitymatrix" adds the MPS/MPO by adding up and diagoanlizing local density matrices site by site in a single sweep through the system,
truncating the density matrix with cutoff and maxdim. "directsum" performs a direct sum of each tensors on each site of the input MPS/MPO being summed. It doesn't perform any truncation, and
therefore ignores cutoff and maxdim. The bond dimension of the output is the sum of the bond dimensions of the inputs. You can truncate the resulting MPS/MPO with the truncate! function.
N = 10
s = siteinds("S=1/2", N; conserve_qns = true)
state = n -> isodd(n) ? "↑" : "↓"
ψ₁ = random_mps(s, state; linkdims=2)
ψ₂ = random_mps(s, state; linkdims=2)
ψ₃ = random_mps(s, state; linkdims=2)
ψ = +(ψ₁, ψ₂; cutoff = 1e-8)
# Can use:
# ψ = ψ₁ + ψ₂
# but generally you want to set a custom `cutoff` and `maxdim`.
@show inner(ψ, ψ)
@show inner(ψ₁, ψ₂) + inner(ψ₁, ψ₂) + inner(ψ₂, ψ₁) + inner(ψ₂, ψ₂)
# Computes ψ₁ + 2ψ₂
ψ = ψ₁ + 2ψ₂
@show inner(ψ, ψ)
@show inner(ψ₁, ψ₁) + 2 * inner(ψ₁, ψ₂) + 2 * inner(ψ₂, ψ₁) + 4 * inner(ψ₂, ψ₂)
# Computes ψ₁ + 2ψ₂ + ψ₃
ψ = ψ₁ + 2ψ₂ + ψ₃
@show inner(ψ, ψ)
@show inner(ψ₁, ψ₁) + 2 * inner(ψ₁, ψ₂) + inner(ψ₁, ψ₃) +
2 * inner(ψ₂, ψ₁) + 4 * inner(ψ₂, ψ₂) + 2 * inner(ψ₂, ψ₃) +
inner(ψ₃, ψ₁) + 2 * inner(ψ₃, ψ₂) + inner(ψ₃, ψ₃)
contract(ψ::MPS, A::MPO; kwargs...) -> MPS
*(::MPS, ::MPO; kwargs...) -> MPS
contract(A::MPO, ψ::MPS; kwargs...) -> MPS
*(::MPO, ::MPS; kwargs...) -> MPS
Contract the MPO A with the MPS ψ, returning an MPS with the unique site indices of the MPO.
For example, for an MPO with site indices with prime levels of 1 and 0, such as -s'-A-s-, and an MPS with site indices with prime levels of 0, such as -s-x, the result is an MPS y with site indices
with prime levels of 1, -s'-y = -s'-A-s-x.
Since it is common to contract an MPO with prime levels of 1 and 0 with an MPS with prime level of 0 and want a resulting MPS with prime levels of 0, we provide a convenience function apply:
apply(A, x; kwargs...) = replaceprime(contract(A, x; kwargs...), 2 => 1).
Choose the method with the method keyword, for example "densitymatrix" and "naive".
• cutoff::Float64=1e-13: the cutoff value for truncating the density matrix eigenvalues. Note that the default is somewhat arbitrary and subject to change, in general you should set a cutoff value.
• maxdim::Int=maxlinkdim(A) * maxlinkdim(ψ): the maximal bond dimension of the resulting MPS.
• mindim::Int=1: the minimal bond dimension of the resulting MPS.
• normalize::Bool=false: whether or not to normalize the resulting MPS.
• method::String="densitymatrix": the algorithm to use for the contraction. Currently the options are "densitymatrix", where the network formed by the MPO and MPS is squared and contracted down to
a density matrix which is diagonalized iteratively at each site, and "naive", where the MPO and MPS tensors are contracted exactly at each site and then a truncation of the resulting MPS is performed.
See also apply.
apply(A::MPO, x::MPS; kwargs...)
Contract the MPO A with the MPS x and then map the prime level of the resulting MPS back to 0.
Equivalent to replaceprime(contract(A, x; kwargs...), 2 => 1).
See also contract for details about the arguments available.
contract(A::MPO, B::MPO; kwargs...) -> MPO
*(::MPO, ::MPO; kwargs...) -> MPO
Contract the MPO A with the MPO B, returning an MPO with the site indices that are not shared between A and B.
If you are contracting two MPOs with the same sets of indices, likely you want to call something like:
C = contract(A', B; cutoff=1e-12)
C = replaceprime(C, 2 => 1)
That is because if MPO A has the index structure -s'-A-s- and MPO B has the Index structure -s'-B-s-, if we only want to contract over one set of the indices, we would do (-s'-A-s-)'-s'-B-s- =
-s''-A-s'-s'-B-s- = -s''-C-s-, and then map the prime levels back to pairs of primed and unprimed indices with: replaceprime(-s''-C-s-, 2 => 1) = -s'-C-s-.
Since this is a common use case, you can use the convenience function:
C = apply(A, B; cutoff=1e-12)
which is the same as the code above.
If you are contracting MPOs that have diverging norms, such as MPOs representing sums of local operators, the truncation can become numerically unstable (see https://arxiv.org/abs/1909.06341 for a
more numerically stable alternative). For now, you can use the following options to contract MPOs like that:
C = contract(A, B; alg="naive", truncate=false)
# Bring the indices back to pairs of primed and unprimed
C = apply(A, B; alg="naive", truncate=false)
• cutoff::Float64=1e-14: the cutoff value for truncating the density matrix eigenvalues. Note that the default is somewhat arbitrary and subject to change, in general you should set a cutoff value.
• maxdim::Int=maxlinkdim(A) * maxlinkdim(B): the maximal bond dimension of the resulting MPO.
• mindim::Int=1: the minimal bond dimension of the resulting MPO.
• alg="zipup": Either "zipup" or "naive". "zipup" contracts pairs of site tensors and truncates with SVDs in a sweep across the sites, while "naive" first contracts pairs of tensor exactly and then
truncates at the end if truncate=true.
• truncate=true: Enable or disable truncation. If truncate=false, ignore other truncation parameters like cutoff and maxdim. This is most relevant for the "naive" version, if you just want to
contract the tensors pairwise exactly. This can be useful if you are contracting MPOs that have diverging norms, such as MPOs originating from sums of local operators.
See also apply for details about the arguments available.
apply(A::MPO, B::MPO; kwargs...)
Contract the MPO A' with the MPO B and then map the prime level of the resulting MPO back to having pairs of indices with prime levels of 1 and 0.
Equivalent to replaceprime(contract(A', B; kwargs...), 2 => 1).
See also contract for details about the arguments available.
error_contract(y::MPS, A::MPO, x::MPS;
make_inds_match::Bool = true)
error_contract(y::MPS, x::MPS, A::MPO;
make_inds_match::Bool = true)
Compute the distance between A|x> and an approximation MPS y: | |y> - A|x> |/| A|x> | = √(1 + (<y|y> - 2*real(<y|A|x>))/<Ax|A|x>).
If make_inds_match = true, the function attempts to match the site indices of y with the site indices of A that are not common with x.
outer(x::MPS, y::MPS; <keyword argument>) -> MPO
Compute the outer product of MPS x and MPS y, returning an MPO approximation. Note that y will be conjugated.
In Dirac notation, this is the operation |x⟩⟨y|.
If you want an outer product of an MPS with itself, you should call outer(x', x; kwargs...) so that the resulting MPO has site indices with indices coming in pairs of prime levels of 1 and 0. If not,
the site indices won't be unique which would not be an outer product.
For example:
s = siteinds("S=1/2", 5)
x = random_mps(s)
y = random_mps(s)
outer(x, y) # Incorrect! Site indices must be unique.
outer(x', y) # Results in an MPO with pairs of primed and unprimed indices.
This allows for more general outer products, such as more general MPO outputs which don't have pairs of primed and unprimed indices, or outer products where the input MPS are vectorizations of MPOs.
For example:
s = siteinds("S=1/2", 5)
X = MPO(s, "Id")
Y = MPO(s, "Id")
x = convert(MPS, X)
y = convert(MPS, Y)
outer(x, y) # Incorrect! Site indices must be unique.
outer(x', y) # Incorrect! Site indices must be unique.
outer(addtags(x, "Out"), addtags(y, "In")) # This performs a proper outer product.
The keyword arguments determine the truncation, and accept the same arguments as contract(::MPO, ::MPO; kwargs...).
See also apply, contract.
projector(x::MPS; <keyword argument>) -> MPO
Computes the projector onto the state x. In Dirac notation, this is the operation |x⟩⟨x|/|⟨x|x⟩|².
Use keyword arguments to control the level of truncation, which are the same as those accepted by contract(::MPO, ::MPO; kw...).
• normalize::Bool=true: whether or not to normalize the input MPS before forming the projector. If normalize==false and the input MPS is not already normalized, this function will not output a
proper projector, and simply outputs outer(x, x) = |x⟩⟨x|, i.e. the projector scaled by norm(x)^2.
• truncation keyword arguments accepted by contract(::MPO, ::MPO; kw...).
See also outer, contract. | {"url":"https://itensor.github.io/ITensors.jl/dev/MPSandMPO.html","timestamp":"2024-11-14T22:10:09Z","content_type":"text/html","content_length":"104169","record_id":"<urn:uuid:531a5fc3-8f63-4c71-931b-2d6347962d36>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00716.warc.gz"} |
Frontiers | Revisiting cloud overlap with a merged dataset of liquid and ice cloud extinction from CloudSat and CALIPSO
• ^1NASA-GSFC, Earth Sciences Division, Greenbelt, MD, United States
• ^2GESTAR-II University of Maryland Baltimore County, Baltimore, MD, United States
• ^3GESTAR-II Morgan State University, Baltimore, MD, United States
We update the parameterization capturing the variation of parameters that describe how cloud occurrence (layer cloud fraction) and layer cloud optical depth (COD) distributions overlap vertically.
Our updated analysis is motivated by the availability of a new dataset constructed by combining two products describing the two-dimensional extinction properties of liquid and ice phase clouds (and
their mixtures) according to active cloud observations by the CloudSat and CALIPSO satellites. As before, cloud occurrence overlap is modeled with the decorrelation length of an inverse exponential
function describing the decay with separation distance of the relative likelihood that two cloudy layers are overlapped maximally versus randomly. Similarly, cloud optical depth distribution vertical
overlap is described again with a decorrelation length that describes the assumed inverse exponential decay with separation distance of the rank correlation between cloud optical depth distribution
members in two cloudy layers. We derive the climatological zonal variability of these two decorrelation lengths using 4 years of observations for scenes of ∼100 km scale length, a typical grid size
of numerical models used for climate simulations. As previously, we find a strong latitudinal dependence reflecting systematic differences in dominant cloud types with latitude, but substantially
different magnitudes of decorrelation length compared to the previous work. The previously used parameterization form is therefore updated with new parameters to describe the latitudinal dependence
of decorrelation lengths and its seasonal shift. Similar zonal patterns of decorrelation length are found when the analysis is broken down by different cloud classes. When the revised
parameterization is implemented in a cloud subcolumn generator, simulated column cloud properties compare to observations quite well, and so do their associated cloud radiative effects, but
improvements over the earlier version of the parameterization are marginal.
1 Introduction
As long as climate and other Earth system models have grids that are large enough to compel the use of cloud schemes producing fractional cloudiness, i.e., as long as subgrid cloud variability exists
implicitly without being explicitly resolved, the vertical overlap of cloud condensate has to be taken into account. This includes both the simple overlap of cloud occurrence at different vertical
levels (aka “cloud fraction overlap,”) and also the overlap of the horizontal distribution of condensate amount. Assuming that cloud particle size is horizontally homogeneous (admittedly, a
simplification) the latter overlap is practically equivalent to the overlap of cloud optical depth (COD) at different levels. Needless to say, in the unrealistic case of no horizontal
variability of condensate, i.e. horizontally homogeneous clouds, this type of overlap is no longer a concern.
Cloud fraction overlap and its impact on the transport of atmospheric radiation has been studied for many years. Prior to the 2000s the most prevalent description of cloud fraction overlap was the
so-called “maximum-random” paradigm where contiguous cloud layers were assumed to overlap maximally, while non-contiguous layers (separated by cloudless layers) were assumed to overlap randomly. How
exactly maximum-random overlap was perceived and conceptually implemented in practice varied widely. One variant due to Geleyn and Hollingsworth (1979) allowed for random overlap not only of cloudy
layers separated by clear layers (non-contiguous cloud situations), but also of the parts of layers belonging to vertically contiguous cloud entities that extend beyond the coverage of a less cloudy
intervening layer. Another variant assumes that all cloud layers within broad atmospheric layers, even when non-contiguous, form “blocks” where overlap is maximum, while these blocks overlap randomly
(Chou et al., 1998). Regardless of the maximum-random overlap flavor, accounting for cloud fraction overlap in radiative transfer calculations is not trivial, especially in the shortwave part of the
spectrum where a large fraction of the downwelling radiation is backscattered. This recognition motivated the introduction of the concept of “subcolumns.” Radiative calculations are manageable when
the number of blocks is small: for example, when only three cloud blocks are allowed throughout the atmospheric column, only eight distinct permutations of cloudy subcolumns can exist (Chou et al.,
1998). In the longwave part of the spectrum, as long as scattering is not explicitly resolved, any given conceptual implementation of maximum-random overlap allows the calculation of the fraction of
upward and downward clear-line-of-sight for radiation propagation (Chou et al., 1999).
Our insight into cloud overlap changed greatly thanks to the seminal work of Hogan and Illingworth (2000) who used ground radar observations to show that the overlap of the area occupied by two cloud
layers should not be viewed as the binary outcome of either maximum or random overlap, but rather as a continuum between the two possibilities that depends on the separation distance of the two
layers (a more formal description will be presented in the next section). Not too long after (Räisänen et al., 2004; Pincus et al., 2005), the realization that cloudy layers are not horizontally
homogeneous underscored the need for a description of how parts of a cloud layer with different amount of condensate or optical extinction relate (overlap) to those of another cloud layer. That this
type of overlap also matters is rather obvious for strongly non-linear processes such a radiation transport which depend greatly on whether thin and/or thick parts tend to align or be displaced at
various degrees of random alignment. This led to the concept of the correlation between ranks within distributions of layer condensate or extinction, which can be quantified by a rank correlation
coefficient (Räisänen et al., 2004; Pincus et al., 2005).
The characteristics of cloud fraction/occurrence and condensate distribution overlap have been studied by analyzing simulated (e.g., Oreopoulos and Khairoutdinov 2003; Hillman et al., 2018) and
observed (e.g., Oreopoulos and Norris 2011) cloud fields that resolve cloud vertical structure or directly associated quantities. All these studies have found a clear dependence of overlap on cloud
layer separation distance, but with varying details about the exact nature of the dependence. For example, the dependence can change geographically (Barker 2008a; Shonk et al., 2010) and also with
cloud height (Räisänen et al., 2004).
Taking advantage of the availability of a dataset constructed from the latest versions of CloudSat products and which was used to study the performance of two cloud subcolumn generators (Oreopoulos
et al., 2022, hereafter O22), we are motivated to revisit the characteristics of cloud overlap at global scales. This is because one of the generators evaluated employed a parameterization of
decorrelation lengths that is now more than 10 years old (Oreopoulos et al., 2012, hereafter O12), and is thus ripe for re-examination with a new and improved (CALIPSO-enhanced) CloudSat datasets. We
are now better positioned to examine in more detail dependences of overlap on cloud vertical location and cloud regime (Section 3.3.1 and Section 3.3.2). Moreover, we now have the additional
capability to examine how overlap parameterizations ultimately affect the distribution of vertically projected cloud fraction in the widely-used phase space defined by the pressure of the highest
cloud top and the integrated extinction (i.e., joint histograms of cloud top pressure and optical thickness), as well as associated cloud radiative effects via cloud radiative kernels (Zelinka et
al., 2012).
2 Dataset and methodology of overlap calculation
2.1 Dataset
The dataset used to perform the present overlap analysis is a 4-year (2007–2010) version of the 2D cloud optical depth (COD) field described in O22. COD is resolved vertically (240 m) and along the
direction of the satellite track (∼1.1 km), and comes from the 2B-CWC-RVOD CloudSat dataset for liquid phase clouds and from the CALIPSO-enhanced 2C-ICE product for ice phase clouds. Because the
first product is constrained by Aqua-Moderate Resolution Imaging Spectroradiometer (MODIS) vertically-integrated (total) optical depth (TAU), which is available only during daytime, the strict
daytime availability applies also for the merged COD dataset. Comparisons with the 2B-CLDCLASS-LIDAR product showed missing retrievals for about 20% of liquid phase clouds. Those missing retrievals
correspond mainly to thin liquid clouds detected by CALIPSO only in the 2B-CLDCLASS-LIDAR product and missed in the 2B-CWC-RVOD product. O22 describes how these values were filled, namely using
coincident MODIS TAUs, and CODs of neighboring cells. We will examine below whether the specific COD filling procedure employed in O22 affects overlap findings.
2.2 Calculation details for vertical overlap parameters
For cloud occurrence overlap we use the generalized overlap framework first introduced by Hogan and Illingworth (2000), hereafter HI2000. According to this framework, the combined
vertically-projected cloud fraction of two layers with cloud fractions C[1] and C[2] (expressed in this work either as fractional coverage from 0 to 1 or as percentage between 0 and 100), which are
part of a contiguous cloud entity and are separated by a vertical distance Δz, is the linear combination of the combined cloud fractions of the two layers corresponding to the maximum (C^max) and
random overlap (C^ran) assumptions. The relative contribution of C^max and C^ran to the combined generalized cloud fraction is regulated by a weighting parameter α:
$C^{gen} = \alpha\,C^{max} + (1 - \alpha)\,C^{ran}$ (1)
where $C^{max} = \max(C_1, C_2)$ and $C^{ran} = C_1 + C_2 - C_1 C_2$. Because C^max < C^ran, α is positive when C^gen < C^ran, with α = 1 for the special case of maximum overlap, and α = 0 for the special case of random overlap. Negative α, which occurs when C^gen > C^ran, indicates that cloud occurrence overlap is less than random, i.e., tending towards minimum overlap. HI2000 found that the overlap of cloud layers
that have intervening clear layers is very nearly random, i.e., α ≈ 0 even at small separation distances Δz, a result we also found when analyzing the current dataset (discussed later). For
contiguous cloud layers, HI2000 parameterized α as an exponentially decaying function of separation distance with a decorrelation length L[α]:
$\alpha(\Delta z) = \exp(-\Delta z / L_{\alpha})$ (2)
Because this function does not produce negative values, such a parameterization implicitly assumes that cloud overlap cannot be less than random. Maximum overlap corresponds to L[α] = ∞, while random overlap to L[α] = 0.
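To make the bookkeeping of Eqs. 1, 2 concrete, a short Python sketch follows (an illustration added here, not code from any particular model; the function name and inputs are placeholders). It combines two layer cloud fractions for a given separation distance and decorrelation length.

```python
import numpy as np

def combined_cloud_fraction(c1, c2, dz, L_alpha):
    """Generalized-overlap combined cloud fraction of two layers (Eqs. 1-2).

    c1, c2  : layer cloud fractions (0-1)
    dz      : separation distance of the layers (same units as L_alpha)
    L_alpha : cloud-occurrence decorrelation length
    """
    c_max = max(c1, c2)                    # maximum-overlap limit
    c_ran = c1 + c2 - c1 * c2              # random-overlap limit
    alpha = np.exp(-dz / L_alpha)          # Eq. 2: exponential decay with dz
    return alpha * c_max + (1.0 - alpha) * c_ran   # Eq. 1

# Example: two 50% cloudy layers 1 km apart with a 2 km decorrelation length
print(combined_cloud_fraction(0.5, 0.5, dz=1000.0, L_alpha=2000.0))  # ~0.598
```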
For overlap of cloud condensate (cloud optical depth) horizontal distributions, we are interested not as much in how absolute values of members of the distribution are correlated, but rather in how
relative values are. Namely, for two cloud layers that may have very different grid-mean optical depth or water contents, we are interested in how the thinner, intermediate, and thicker parts of the
layers align vertically, in other words in how their subgrid optical depths overlap in a relative sense. This can be captured by Spearman’s rank correlation coefficient ρ between the two
distributions. While α is theoretically unbounded on the negative side and has a maximum value of unity, ρ values remain within ±1. This means that a parameterization of its variation as an exponentially decaying function of separation distance with a second decorrelation length L[ρ] (the distance to the 1/e value), so that
$\rho(\Delta z) = \exp(-\Delta z / L_{\rho})$ (3)
captures only positive rank correlations. When ρ = 1, the ranks of the vertically aligned COD values are maximally correlated, which corresponds to L[ρ] = ∞, while when ρ = 0 they are randomly correlated, which corresponds to L[ρ] = 0.
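As an illustration of the rank-correlation metric, the sketch below computes Spearman's ρ between the COD values of two layers using only the subcolumns where both layers are cloudy; it is a plausible minimal implementation, not the processing code actually used for this dataset.

```python
import numpy as np
from scipy.stats import spearmanr

def cod_rank_correlation(cod_layer_n, cod_layer_m):
    """Spearman rank correlation of the COD distributions of two layers.

    cod_layer_n, cod_layer_m : 1-D arrays of subcolumn COD (0 where clear).
    Returns np.nan if fewer than 2 subcolumns have cloud in both layers.
    """
    both_cloudy = (cod_layer_n > 0) & (cod_layer_m > 0)
    if both_cloudy.sum() < 2:
        return np.nan
    rho, _ = spearmanr(cod_layer_n[both_cloudy], cod_layer_m[both_cloudy])
    return rho
```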
In this work, we calculate the variation of α and ρ as a function of cloud layer separation distance for scenes consisting of 100 consecutive “rays” (subcolumns). The scenes have an approximate
length of 110 km (while the along track size of the rays of the merged CloudSat-CALIPSO dataset is about 1.7 km, sampling is performed at 1.1 km scales because of the spatial overlap between adjacent
rays). Our overlap results in this work are therefore specific to this scale. Tompkins and Di Giuseppe (2015) conducted an analysis of the dependence of cloud occurrence overlap on scene size (“scale
length,”) but such dependence is not investigated here. Our chosen scene size is as in O22, mirroring typical grid sizes of numerical models used for climate simulations.
We elected to calculate these two parameters for layers with 0.05 < C < 0.95 and ignore the cloud state of the intervening layers (i.e., whether some or all the layers are clear or cloudy). Layers
with very low values of cloud fraction are not used because they contribute little to the combined cloud fraction and make C^max ≈ C^ran; once a layer becomes nearly overcast on the other hand, its
complete overlap with another layer is almost assured, rendering quantification of degree of overlap meaningless. The value of the upper C threshold used in cloud overlap calculations was a major
focal point in Tompkins and Di Giuseppe (2015) who showed that it is intricately related to scene size. While these authors recommend using a rather low value for the upper threshold to suppress the
dependence on scene size, this should not necessarily be in our view the predominant criterion. First, the objective in a climate model application is to parameterize overlap at scale lengths of
interest and not at arbitrarily small or large scales. Second, when we examined several randomly chosen individual scenes, we discovered that setting a low C threshold, such as 0.5 suggested by
Tompkins and Di Giuseppe (2015), completely distorted their character. Third, since we also wish to calculate COD distribution overlap via rank correlations, preferably from the same data sample, it
does not seem prudent to remove the layers with the largest numbers of points of non-zero COD, and therefore the presumably most robust estimates of rank correlation.
Finally, our calculations of α and ρ include all cloudy layers regardless of whether clouds occur in-between. We choose to do so even though differences in overlap between contiguous and
non-contiguous clouds are conspicuous (Figure 1). We ignore the distinction between contiguous and non-contiguous clouds in our analysis for several reasons: First, the decorrelation lengths we
derive are meant for use in the “Raisanen” generator (Räisänen et al., 2004) which produces subcolumns using a flavor of generalized overlap that ignores the distinction between contiguous and
non-contiguous clouds. Second, non-contiguous clouds are quite rare at small separation distances (Figure 1), so including them at these small distances is not impactful. On the flip side, at large
separation distances cloud occurrence overlap approaches random behavior anyway regardless of whether the layers are part of contiguous or non-contiguous cloud entities. Third, rank correlation
coefficients may be biased in unpredictable ways when making such a contiguous/non-contiguous discrimination, which again, is ignored in the way the Raisanen generator employs rank correlations.
FIGURE 1
FIGURE 1. Globally-averaged α, the weighting (overlap) parameter regulating the mixing of maximum and random cloud occurrence overlap of two cloudy layers (α = 1: maximum overlap; α = 0: random
overlap; α < 0: minimum overlap), as a function of separation distance Δz, derived from the O22 COD cloud fields. We show results for two cases, one where cloud is also encountered in all intervening
layers (“contiguous cloud,” solid blue curve and circle symbols), and one where the intervening layers can contain clear skies (“non-contiguous cloud,” solid red curve and circle symbols). The number
of data points used to derive the profiles for each case is shown with the dotted curves using the same color convention and square symbols.
Our analysis process is as follows. We first calculate α and ρ profiles for individual scenes. For two arbitrary cloud layers at levels n and m that are apart by a separation distance Δz[n,m], the scene's α for these two layers is calculated from:
$\alpha_{n,m} = \frac{C^{ran}_{n,m} - C_{n,m}}{C^{ran}_{n,m} - C^{max}_{n,m}}$ (4)
where C[n,m] is the true combined cloud fraction of layers n, m, calculated from the binary cloud fraction values (0 or 1) of the portion of the layer contained in subcolumn k:
$C_{n,m} = \frac{1}{K}\sum_{k=1}^{K} c_{k,n,m}$ (5)
where K is the number of subcolumns in the scene (K = 100, in our case) and c[k,n,m] = 1 if either or both of the n, m layers of the subcolumn are cloudy and c[k,n,m] = 0 when both are clear.
Spearman's rank correlation coefficient ρ is calculated from:
$\rho_{n,m} = 1 - \frac{6\sum_{l=1}^{L}\left(r_{n,l} - r_{m,l}\right)^{2}}{L\left(L^{2} - 1\right)}$ (6)
where L is the number of subcolumns where both layers n and m have cloud, and r[n,l] and r[m,l] are the COD ranks of layers n and m for the l-th subcolumn.
The separation distance Δz is resolved at 240 m, thus all separation distances are multiples of 240 m. Values for the same separation distance are averaged regardless of the vertical location of
levels n and m, and we therefore drop henceforth layer subscripts from α, ρ, and Δz.
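A minimal sketch of this scene-level bookkeeping is given below (illustrative only; the array layout, the minimum-sample requirement for computing ρ, and the skipping of near-degenerate pairs are assumptions made for the illustration rather than documented choices). For one K-subcolumn scene it evaluates α for each eligible layer pair from Eqs. 4, 5 and ρ from Eq. 6, grouping results by separation distance.

```python
import numpy as np
from scipy.stats import spearmanr

def scene_overlap_profiles(cod, dz_per_level=240.0, cf_min=0.05, cf_max=0.95):
    """alpha and rho vs separation distance for one scene.

    cod : 2-D array (n_levels, K subcolumns) of cloud optical depth, 0 = clear.
    Returns dict mapping separation distance (m) -> list of (alpha, rho) pairs.
    """
    mask = cod > 0
    cf = mask.mean(axis=1)                          # layer cloud fractions
    levels = np.where((cf > cf_min) & (cf < cf_max))[0]
    results = {}
    for i, n in enumerate(levels):
        for m in levels[i + 1:]:
            dz = abs(int(m) - int(n)) * dz_per_level
            c_true = (mask[n] | mask[m]).mean()             # Eq. 5: combined CF
            c_max = max(cf[n], cf[m])
            c_ran = cf[n] + cf[m] - cf[n] * cf[m]
            if np.isclose(c_ran, c_max):
                continue                                     # degree of overlap undefined
            alpha = (c_ran - c_true) / (c_ran - c_max)       # Eq. 4
            both = mask[n] & mask[m]
            rho = spearmanr(cod[n, both], cod[m, both])[0] if both.sum() > 2 else np.nan
            results.setdefault(dz, []).append((alpha, rho))
    return results
```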
3 Overlap analysis
3.1 Cloud occurrence vertical overlap
Once α “profiles” (variations vs. Δz) are calculated at the scene level, they are averaged into multi-annual seasonal profiles at 4° latitudinal resolution by appropriately averaging the scene data by latitude φ. Figure 2 compares the zonal profiles of α from the O22 COD fields and from the 2B-CLDCLASS-LIDAR product. We show separately results for two seasons, DJF and JJA. Since our filling
procedure by design increases the consistency of the cloud mask (occurrence) implied by the COD field with that in 2B-CLDCLASS-LIDAR (O22), the agreement is not surprising (seen also in Figure 3,
discussed later). Still, one should be reminded that the 2B-CLDCLASS-LIDAR dataset does not have a true vertical resolution of 240 m since it resolves clouds as distinct “objects” based on whether
horizontally contiguous sequences of rays and vertically contiguous layers belong to the same cloud type. For a consistent calculation of α across datasets we therefore replicate the
2B-CLDCLASS-LIDAR mask to create a resampled 240 m vertical resolution. Despite this oversampling, the α(φ, Δz) distributions of the two datasets look very similar.
FIGURE 2
FIGURE 2. Zonal variation of mean α as a function of separation distance Δz for DJF (December-January-February), left panels, and JJA (June-July-August), right panels. The upper panels come from
calculations using the O22 COD fields and the bottom panels from 2B-CLDCLASS-LIDAR. The latitudinal resolution is 4° and the vertical resolution used for the plot is 480 m.
FIGURE 3
FIGURE 3. (Left panel): Zonal variation of the cloud occurrence decorrelation length parameter L[α], modulating the exponential decay of α vs. separation distance according to Eq. 2, for DJF and JJA
from the O22 COD dataset; we also show Gaussian fits (dotted curves) corresponding to Eq. 7 with µ parameters provided on the top right corner for January 1st and July 1st. (Right panel): as the left
panel, but from the cloud mask provided by the 2B-CLDCLASS-LIDAR product.
The prominent zonal dependence of α and the expected decrease with separation distance are immediately apparent in Figure 2. At the highest separation distance α approaches zero (random overlap) and
can even become negative (some degree of minimum overlap). One should keep in mind that sampling becomes progressively poorer the greater the separation distance (cf. Figure 1), so results become
noisier, but the tendency towards zero or negative values is clear and consistent with all previous overlap studies. The geographical dependence of α, with the higher values at low latitudes progressively decreasing towards high latitudes, is also consistent with previous works (Shonk et al., 2010; O12) where it was interpreted as indicative of the greater vertical alignment in cloud types encountered most frequently at low latitudes. Tompkins and Di Giuseppe (2015), however, cast doubt on this interpretation, arguing instead that the observed behavior reflected latitudinal changes in cloud scales (also related to latitudinally-varying frequencies of cloud types) relative to the fixed size of the scene for which α is calculated (∼110 km in our case).
A seasonal dependence of α is also apparent when comparing the DJF (December-January-February) and JJA (June-July-August) panels. The most distinguishing feature is the movement of high α values
northward and the expansion of the latitude-height phase space area occupied by positive values of α in the summer hemisphere. Basically, at a given separation distance, there is greater chance to
find a higher value of α in the summer hemisphere than the winter hemisphere. This reflects a tendency of greater likelihood of occurrence of vertically developed (and apparently more vertically
aligned) clouds in the summer hemisphere than the winter hemisphere. In the tropics, the movement of the Intertropical Convergence Zone (ITCZ) is similarly reflected by higher values of α north of
the equator during JJA.
The seasonal variation of α becomes more apparent when expressed in terms of decorrelation length. From the zonal profile of α, the zonal variation of its decorrelation length L[α] can be calculated
by applying a linear regression fit to ln α vs. −Δz data, for which 1/L[α] is simply the slope (Figure 3). This type of fit has been previously employed by Mace and Benson-Troth (2002) and Naud et al.
(2008), among others, and results in a different L[α] than the “effective” decorrelation length of Barker (2008b) and Jing et al. (2016) which is only meaningful at the scene level. Appropriate
sampling weights are used in the regression fit to account for the fact that fewer values are available for larger Δz’s. Again, we show separate curves of L[α] for DJF and JJA from the two datasets,
the filled COD field (left panel) and 2B-CLDCLASS-LIDAR (right panel). To obtain the results shown in Figure 3, we actually calculated an L[α] zonal curve separately for each year's season (i.e., instead of calculating the seasonal average across the 4 years) and then averaged the four year-specific L[α] curves of each season. We opted to do this in order to suppress the over-smoothing of α profiles caused by extensive averaging (cf. Figure 1). The extreme flipside would have been to calculate L[α] from the very noisy α profiles of individual scenes (of which there are about four million) and to average across the very large L[α] population. The two methods, namely zonal L[α] from fits to zonally-averaged α profiles and zonal L[α] from averaging individual scene L[α]'s, yield different results. We do not opt for the second method because of the poor regression fits of individual scenes and the distorting effects a few extreme values can have. But even in our method, which first conducts extensive averaging of α before calculating L[α], the quality of the regression has some dependence on latitude.
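The fitting step can be illustrated with the following sketch (the use of weighted np.polyfit, the square-root weighting of sample counts, and the retention of an intercept are plausible choices for illustration; the exact regression details are not spelled out above).

```python
import numpy as np

def fit_decorrelation_length(dz, alpha_mean, n_pairs):
    """Estimate L_alpha from a zonally averaged alpha(dz) profile.

    Fits ln(alpha) against -dz (slope = 1/L_alpha), weighting each point by the
    number of layer pairs that contributed to it.
    """
    valid = alpha_mean > 0                     # ln() requires positive alpha
    x = -dz[valid]
    y = np.log(alpha_mean[valid])
    w = np.sqrt(n_pairs[valid])                # polyfit minimizes sum((w * resid)**2)
    slope, _ = np.polyfit(x, y, 1, w=w)
    return 1.0 / slope
```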
Zonal and seasonal contrasts in cloud occurrence overlap become the most apparent when overlap is expressed in terms of decorrelation length (Figure 3, solid curves). The peak values in L[α] reach
∼5 km at the northern (JJA) and southern (DJF) edge of the tropics and drop quite rapidly towards mid and high latitudes. The contrast in L[α] between the two seasons is much more pronounced in the
NH than the SH. The two seasonal L[α] zonal curves, the one coming from O22’s COD field (left panel of Figure 3), and the one from the standard 2B-CLDCLASS-LIDAR product (right panel of Figure 3) are
very similar.
The magnitude of peak values of L[α] is generally higher than in previous studies, although a direct comparison can only be conducted with O12. The reasons for the discrepancy with O12 are not
entirely clear, but the different nature of the underlying dataset likely plays a significant role: α and L[α] values in O12 were derived from a dataset based solely on CloudSat measurements,
specifically a cloud mask inferred from 2B-GEOPROF reflectivities deemed to come from cells identified as cloudy according to predetermined thresholds. Here we use a daytime COD field, from combined
CloudSat and CALIPSO measurements, and with retrievals available only when the algorithm converges to a solution. It is doubtful that the daytime aspect of the COD field explains the rather
substantial discrepancy of decorrelation length values. The filling of missing COD values, and the specific method used to achieve it in O22, do not seem to be a factor either (left panel of Figure 4). Actually, had we not filled missing COD values we would have obtained even higher peak L[α] values: because the filled values correspond to low (liquid) clouds, the chances of creating occurrences of overlap with high clouds increase; such overlaps tend towards random and correspond to lower L[α].
FIGURE 4
FIGURE 4. (Left panel): Zonal variation of L[α] for DJF and JJA from the two versions of the O22 COD dataset, one unfilled (straight merging of 2B-CWC-RVOD and 2C-ICE COD fields) and one filled (to
make consistent with 2B-CLDCLASS-LIDAR mask). (Right panel): as the left panel, but for L[ρ].
For a practical use of the observed L[α] values, the latitudinal and seasonal variability should ideally be captured by a parameterization. We follow here in the footsteps of O12, who used time-varying Gaussian functions of latitude whose parameters vary with Julian date (J), namely:
(we drop the subscript from L because this parameterization is also applied for COD overlap) where μ[1], μ[2], μ[4] are constants, while μ[3] varies with the day of the year according to the equations below, which control the latitude at which the decorrelation length peaks:
$\mu_3 = -4\mu_{3,0}\,(J - 272)/365$ when $J > 181$
$\mu_3 = 4\mu_{3,0}\,(J - 91)/365$ when $J \leq 181$ (8)
Such a parameterization can be easily incorporated in a climate model that has a cloud subcolumn generator (Räisänen et al., 2004), as shown in O12. Figure 3 (dotted curves) shows the Gaussian curves
for January 1st and July 1st and includes the parameters of the Gaussian fit. The present parameterization has the exact same functional form as the one introduced by O12 and encapsulates the
migration of the most vertically aligned (and presumably developed) clouds (those with the highest value of L[α]) northward (southward) during boreal summer (winter). The new parameters produce
higher values of L[α] (more maximum overlap) at low latitudes and smaller values (more random overlap) at high latitudes, compared to O12. In Section 4 we present an implementation of this
parameterization to assess scene and grand-average simulated cloud properties and average cloud radiative effects and to evaluate the impact of transitioning from O12’s to the current Gaussian fits.
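To show how such a parameterization might be coded in a host model, a sketch follows. The Gaussian functional form written here is assumed (the actual Eq. 7 is not reproduced above), the seasonal migration of the peak latitude follows Eq. 8, and the μ parameter values must be supplied from the fits reported in Figures 3, 6; nothing in the snippet is taken verbatim from existing model code.

```python
import numpy as np

def decorrelation_length(lat_deg, julian_day, mu1, mu2, mu3_0, mu4):
    """Latitude- and season-dependent decorrelation length (Gaussian form assumed).

    The peak latitude mu3 migrates with Julian day J following Eq. 8; mu1, mu2
    and mu4 are constants whose values must come from the fitted parameters.
    """
    J = julian_day
    if J > 181:
        mu3 = -4.0 * mu3_0 * (J - 272.0) / 365.0
    else:
        mu3 = 4.0 * mu3_0 * (J - 91.0) / 365.0
    # Assumed Gaussian in latitude, peaking at mu3 with width mu4 and offset mu1
    return mu1 + mu2 * np.exp(-((lat_deg - mu3) / mu4) ** 2)
```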
3.2 Cloud optical depth vertical overlap
The “profiles” (Δz dependences) of rank correlation coefficient ρ of COD distributions are calculated and averaged similarly to profiles of α. Figure 5 provides a visualization of the zonal
dependence of ρ profiles for DJF and JJA as in Figure 2. We see a similar pattern for ρ(φ, Δz) as for α(φ, Δz), namely a decrease with separation distance and latitude, and a shift toward higher values in the summer hemisphere. Negative values below −0.1 are virtually non-existent for ρ values that have been averaged extensively over multiple scenes; this suggests that the inability of the decaying exponential (Eq. 3) to yield negative values is likely inconsequential in estimates of L[ρ]. Values greater than 0.7 are quite rare and limited to Δz ≤ 480 m.
FIGURE 5
FIGURE 5. Zonal variation of mean ρ, the correlation coefficient of COD ranks between cloudy layers, as a function of their separation distance Δz, for DJF (left panel) and JJA (right panel) from the
filled COD O22 dataset.
Estimates of L[ρ] are obtained by regressing ln(ρ) against −Δz, similarly to how we calculated L[α], namely by again performing a regression on each year's seasonal zonal profiles, and averaging
seasonal values across the 4 years. The zonal variation of L[ρ] for DJF and JJA is shown in Figure 6 (solid curves). The zonal pattern is very similar to that of L[α] even though the cloud fraction
overlap parameter and the rank correlation of COD are distinct physical parameters describing different aspects of cloud vertical structure. A model-oriented parameterization vs. latitude and day of
the year is again accomplished with a Gaussian function of the same type as for L[α] (Eq. 7), with parameter values included in Figure 6 which shows parameterized curves for January 1st and July 1st
(dotted curves).
FIGURE 6
FIGURE 6. Zonal variation of the decorrelation length L[ρ] of the correlation coefficient ρ of COD ranks, modulating the exponential decay of ρ vs. separation distance according to Eq. 3, for
DJF and JJA along with Gaussian fits according to Eq. 7 for January 1st and July 1st.
Because our COD filling procedure assigns equal values of liquid COD to cells when they belong to rays (cloudy subcolumns) where TAU from MODIS is available to be used as constraint, a tendency to
overestimate ρ might have been expected. Such a potential overestimate implies higher values of L[ρ], which is, however, not seen; in fact the unfilled COD fields imply somewhat higher rank correlations (right panel of Figure 4). First of all, one should keep in mind that equal COD values in two different layers do not necessarily have the same rank. Second, the lower rank correlation
of the filled COD field may be explained by the fact that many cells are filled by a secondary procedure that utilizes the available COD of neighboring liquid cells, and this process may actually
increase randomness and therefore decrease rank correlations. Third, lower rank correlations can also result from an increase in the frequency of distant cloud layer pairs after filling, and such
pairs are expected to have COD distributions that are more uncorrelated.
The peak magnitudes of L[ρ] at low latitudes are somewhat smaller than those of L[α], but at some higher latitudes the values of L[ρ] surpass those of L[α]. This behavior differs from that in O12
where L[ρ] < L[α] was universal, but with L[ρ] calculated directly from radar reflectivities of cloudy cells, and not actual cloud retrievals, and in Oreopoulos and Norris (2011) who used cloud
condensate retrievals from ground-based radar. In an analysis of cloud resolving model (CRM) fields Pincus et al. (2005) found that L[ρ] > L[α] is possible depending on the subset of clouds used and
the method of calculation. Räisänen et al. (2004), in their own analysis of different CRM fields, found L[ρ] < L[α] consistently. O22 found that halving the magnitude of L[α] while at the same time doubling the magnitude of L[ρ] compared to the original parameterization of O12 did not have serious undesired consequences on the radiative effects of cloud fields constructed by the Raisanen cloud subcolumn
generator, which implies that L[ρ] > L[α] is plausible and not necessarily unphysical.
3.3 Overlap dependences
3.3.1 Height of upper layer
In the previous analysis we derived a single decorrelation length for the entire atmospheric column. This is because we averaged α and ρ over all identical separation distances Δz regardless of cloud
layer heights. Räisänen et al. (2004) on the other hand showed height-dependent decorrelation lengths (their Figure 3) obtained by solving Eqs. 2, 3 for adjacent layers, implying that overlap can be
different for the same separation distance Δz at different parts of the atmosphere. Here we also attempt to resolve the dependence of the overlap parameters and their corresponding decorrelation
lengths on height, but rather coarsely only. Specifically, we derive the zonal variation of decorrelation lengths for three different standard layers by segregating the calculations of α and ρ from
Eqs. 4, 6 according to the location of the top of the upper cloud layer. The three standard layers are delineated by the 680 hPa and 440 hPa pressure levels separating clouds into low (L) when their
cloud top pressure (CTP) is greater than 680 hPa, middle (M) if 440 hPa < CTP < 680 hPa, and high (H) when CTP < 440 hPa, as in the convention established by the International Satellite Cloud Climatology Project (ISCCP; Rossow and Schiffer 1999). CTPs were obtained from geometrical top heights with the help of the ECMWF-AUX CloudSat dataset. The range of possible Δz's is largest when the top layer is
H, and lowest when the top layer is L. This is because H clouds can be overlapped with H, M, and L clouds, and L clouds with only other L clouds; M clouds can be overlapped with other M clouds as
well as with L clouds.
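The ISCCP-style segregation is straightforward to reproduce; the snippet below is illustrative, and the handling of CTP values exactly at the 440 and 680 hPa boundaries is an arbitrary choice.

```python
def height_category(ctp_hpa):
    """ISCCP-style category from the cloud-top pressure of the upper cloud layer."""
    if ctp_hpa < 440.0:
        return "H"     # high cloud
    elif ctp_hpa <= 680.0:
        return "M"     # middle cloud
    else:
        return "L"     # low cloud
```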
Figure 7 shows the zonal curves for both L[α] and L[ρ], for two seasons as before and segregated by cloud category according to the vertical location of the upper cloud layer. Aside from the
previously noted seasonal zonal shifts, we also see a stronger dependence on the category assignment of the top cloud for cloud occurrence overlap (L[α]) than COD overlap (L[ρ]). But even for L[α],
the dependence on cloud category is more pronounced at low latitudes than high latitudes. More random overlap for the cases where the upper cloud is H makes sense because the overlap samples include
higher separation distances for which overlap tends more towards random. But the same expectation based on this argument does not extend to the M vs. L comparison where separation distances are
smaller, and where therefore less random overlap would be expected, when the upper cloud layer belongs to the L category. It thus looks like there is an underlying physical reason (as yet unknown) for more maximum overlap of cloud occurrence (and, to a lesser degree, for less random correlation of COD ranks) within M clouds and for M clouds over L clouds.
FIGURE 7
FIGURE 7. Zonal variation of L[α] (top panels) and L[ρ] (bottom panels) for DJF (left panels) and JJA (right panels) when α and ρ are segregated according to the height category of the top layer (see
text for details).
The breakdown of overlap according to the H, M, L category convention described here is simple enough to be included in a model that uses a subcolumn generator. Basically, the type of Gaussian fits
previously discussed can be applied to the three broad cases where the upper cloud belongs to one of the three categories. Whether this additional level of nuance is called for instead of the simpler
approach of obtaining decorrelation lengths without a height distinction would require testing.
3.3.2 Cloud regime
While the path to a relevant parameterization that can be applied to GCMs may turn out to be impractical and the ultimate impact small, it is worth pursuing a deeper understanding of cloud overlap by
examining its dependence on cloud regime (CR). To accomplish this, we derived the zonal variation of decorrelation length by separately compositing, within 4° latitude bands, α and ρ for scenes coinciding with one of the Aqua MODIS CRs of Cho et al. (2021), which represent the dominant mixtures of clouds at 1° daily scales according to CTP-TAU joint histograms. Because many CRs have
considerable geographical preference (see Cho et al., 2021 for CR descriptions and characteristics), sampling is not adequate across all latitudes, especially when the analysis is broken by season,
as we have done so far. To improve sampling, we therefore combine some CRs into groups and create appropriate annual zonal averages of α and ρ before applying the regressions that yield L[α] and L[ρ]; as before, the four annual values are averaged. The results of this procedure are shown in Figure 8; each panel also includes the zonal relative frequency of occurrence (RFO) of the combined (when
applicable) CRs (dotted curves). The RFO curves reveal the geographical preference of CRs: CR1 and CR2 stand mainly for deep convection and cirrus, and are predominantly tropical cloud regimes, while
CR3 and CR4 represent storms in both tropics and midlatitudes; CR5-CR6 (extratropical ocean storms and mid-level clouds often associated with orography) and CR7-CR9 (mostly oceanic stratus and
stratocumulus) dominate mid and high latitudes, the low cloud fraction CR10 (oceanic shallow convection) is encountered mostly in the tropical/subtropical domain, while the even lower cloud fraction
CR11 (mixture of low and high clouds) is omnipresent, and with a prominent peak in the southern polar regions. Decorrelation lengths are not calculated where sampling is poor. This is most notable
for CR1-CR2 which do not have decorrelation lengths outside of 30° S-N, and for CR7-CR9 for which the decorrelation lengths have a gap north of the equator.
FIGURE 8
FIGURE 8. Annually averaged zonal curves of L[α] and L[ρ] for groups of cloud regimes (CRs) from the O22 filled COD dataset; global average values are shown in each panel. These results are obtained
by identifying the coincident Aqua-MODIS CR for each of our scenes and then deriving decorrelation lengths from zonal values of α and ρ for each CR group. The dotted curve indicates the zonal
Relative Frequency of Occurrence (RFO, in %, right ordinate) of the CR group.
For regimes where a near-full zonal distribution of decorrelation length can be calculated (all CRs except CR1 and CR2), the previously seen zonal behavior re-emerges, namely higher decorrelation
lengths at low latitudes (tropics and subtropics). With regard to peak values, the CRs seem to be broadly separated into two groups, one with peak values of L[α] around 5–6 km (CR1+CR2, CR10, CR11)
and one with peak values roughly 2 km lower (CR3-CR9), although in the case of CR5-CR9 the peaks also correspond to the lowest number of samples. The first class is dominated by either deep or high
clouds (CR1 + CR2) or scenes of small cloud fraction (CR10 and CR11). The second class (CR3-CR9) encompasses all remaining storm and low clouds. Despite the uneven sampling, these results suggest
that the low latitude members of cloud systems deemed to belong to the same family as their extratropical brethren based on resemblance of CTP-TAU histograms (the measure of similarity in MODIS cloud
regime classification) exhibit nevertheless distinct overlap behavior. This result seems to counter the Tompkins and Di Giuseppe (2015) argument that overlap metrics are skewed by the relative sizes
of cloud objects and domain sizes. Here, for a fixed domain size and cloud objects presumably of similar size given their membership to the same CR (a plausible, but not rigorous assumption), we see
vertical overlap to differ quite substantially between low and high latitudes.
The correlation of COD ranks expressed in terms of L[ρ] seems to also follow the zonal pattern seen previously and to trace closely L[α], but at slightly smaller magnitudes, per the earlier results.
The global values of the two decorrelation lengths included in the Figure 8 panels reaffirm the close proximity noticed earlier. CR1 + CR2 assume the greatest decorrelation length magnitudes, and
CR5-CR9 the lowest. The large gap in magnitude between these two broad cloud groups is a striking indicator of how varied the cloud overlap of the planet’s clouds can be, and how much detail is
missed when the analysis is performed indiscriminately on all clouds.
4 Performance of parameterized overlap
In this section we discuss practical implementations of the findings of the preceding overlap analysis. Specifically, we use the updated decorrelation length parameterizations (Eq. 7) in the
“Raisanen generator” (Räisänen et al., 2004) to produce for each 100-ray scene subcolumns that are then filtered through the COSP [CFMIP (Cloud Feedback Model Intercomparison Project) Observation
Simulator Package, Bodas-Salcedo et al., 2011] MODIS simulator to produce for each scene a simulated 2D COD field that is consistent, to the extent possible, with MODIS retrievals of TAU (vertical
integral of subcolumn COD) and CTP (pressure of what MODIS would consider the cloud top of the subcolumn). As part of this process, subcolumns with TAU <0.3 are discarded and scene cloud fraction is
resolved in terms of CTP-TAU joint histograms as in O22.
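A sketch of the histogram construction is given below; the CTP and TAU bin edges shown are the standard ISCCP/MODIS-style values and are assumptions here (the exact COSP bin definitions should be checked), while the TAU ≥ 0.3 screening mirrors the text above.

```python
import numpy as np

# Standard ISCCP/MODIS-style bin edges (assumed; the exact COSP bins may differ)
TAU_EDGES = np.array([0.3, 1.3, 3.6, 9.4, 23.0, 60.0, 1.0e4])
CTP_EDGES = np.array([0.0, 180.0, 310.0, 440.0, 560.0, 680.0, 800.0, 1100.0])  # hPa

def ctp_tau_histogram(ctp, tau):
    """Joint CTP-TAU cloud-fraction histogram for one scene's subcolumns.

    ctp, tau : 1-D arrays, one entry per simulated or observed subcolumn.
    Subcolumns with TAU < 0.3 are treated as clear, as described above.
    The histogram is normalized by the total number of subcolumns, so its
    sum equals the scene's vertically projected cloud fraction.
    """
    cloudy = tau >= 0.3
    h, _, _ = np.histogram2d(ctp[cloudy], tau[cloudy], bins=[CTP_EDGES, TAU_EDGES])
    return h / ctp.size
```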
We first assess the performance of the generator with the new decorrelation length parameterization at the scene level (for reasons explained in O22 only oceanic scenes are used) and also compare
with results obtained using the O12 parameterization. For each scene, values of L[α] and L[ρ] are calculated from the parameterization using the center latitude of the scene and the Julian date on
which it was observed. Results are shown in Figure 9 which employs density plots to contrast observed scene vertically projected cloud fraction, mean logarithm of TAU and variance of TAU (also from
the MODIS simulator) against two versions of their simulated counterparts, one using O12’s parameterization of decorrelation lengths and another using the updated version of this paper in the
Raisanen generator. Figure 9 reveals that the new parameterizations of decorrelation lengths (the parameterization of layer COD variance remains the same as in O12, namely a beta COD distribution
with variance parameterized as a function of layer cloud fraction) do not yield tangible improvements over the old parameterization. While the performance in terms of mean values is slightly better
(as indicated by the smaller mean errors provided in the panels), correlations and RMSEs are either the same or slightly inferior.
FIGURE 9
FIGURE 9. Comparison of the performance of the O12 (panels in left column) and the present (panels in right column) decorrelation length parameterizations at the level of individual scenes simulated
with the Raisanen subcolumn generator. Occurrence frequencies are shown for combinations of observed and simulated vertically projected scene cloud fraction CF (top row), the logarithm of vertically
integrated optical depth log (TAU) (middle row), and the variance of the vertically integrated optical depth var (TAU) (bottom row).
A performance comparison of decorrelation length parameterizations was also conducted in terms of CTP-TAU joint histograms. The scene level results are shown in Figure 10. The left panel shows
Euclidean Distances (EDs, square root of the sum of squared bin CF differences) between the joint histograms of observed and simulated scenes composited in terms of scene cloud fraction. The smaller
the ED, the more similar the observed and simulated joint histograms are (i.e., better performance). The mean ED of the new parameterization is slightly larger indicating a marginally worse
performance. The density plot of scene level ED values (right panel) is nearly symmetric around the line of perfect agreement indicating that the performance of the two parameterizations is
practically equivalent.
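For reference, the ED metric used here amounts to a few lines of code (the two histograms are assumed to share the same bin layout):

```python
import numpy as np

def euclidean_distance(hist_obs, hist_sim):
    """ED between two CTP-TAU cloud-fraction joint histograms: sqrt of the sum of squared bin CF differences."""
    return np.sqrt(np.sum((hist_obs - hist_sim) ** 2))
```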
FIGURE 10
FIGURE 10. Comparison of the performance of the O12 and the present decorrelation length parameterizations at the scene level. Average Euclidean Distance (ED) as a function of scene CF, binned in 5% intervals (left panel); occurrence frequency of pairs of ED values obtained from the O12 and current parameterizations of decorrelation lengths, binned in 1% intervals (right panel).
A more straightforward comparison is that of grand-averages of observed and simulated joint histograms. This is shown in Figure 11 and confirms the slight edge of the new parameterization on average.
The grand-averaged joint histogram of the new parameterization is slightly more similar to its observed counterpart, as evidenced by a smaller ED and a vertically projected cloud fraction that is closer to observations (albeit still substantially different from the observed value).
FIGURE 11
FIGURE 11. Grand-average of CTP-TAU oceanic CF histograms from observations (O22 COD filled, left panel), simulated with the Raisanen simulator using the decorrelation length Gaussian fits of O12
(middle panel) and the new Gaussian fits of this paper (Figures 3, 6) (right panel).
Does the marginal improvement brought by the new parameterization in a mean sense matter radiatively? We found that it does not. We examined this in the same way as O22 by converting joint CTP-TAU
histograms to shortwave (SW), longwave (LW) and total = SW + LW Cloud Radiative Effect (CRE) resolved in CTP-TAU space using observation-based Cloud Radiative Kernels from the Clouds and the Earth’s
Radiant Energy System (CERES) FluxByCldTyp product (Sun et al., 2022); details of the methodology can be found in O22. Results are shown in Figure 12 and once again reaffirm how negligible the
performance differences between the two parameterizations are. The most important conclusion actually does not pertain so much to differences between the two parameterizations, but rather to how well
the implementation of this type of parameterization into the Raisanen generator reproduces CREs on average, both in terms of the overall value and in terms of the distribution of average binned CRE.
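The kernel-based conversion amounts to a bin-by-bin multiply-and-sum; the sketch below is illustrative, and the kernel units and sign conventions must match the CERES FluxByCldTyp-based kernels actually used.

```python
def cre_from_histogram(cf_hist, kernel):
    """Cloud radiative effect from a CTP-TAU cloud-fraction histogram.

    cf_hist : joint histogram of cloud fraction (same bin layout as the kernel)
    kernel  : cloud radiative kernel, i.e., CRE per unit cloud fraction in each bin
              (units and sign conventions must follow the kernel product used)
    Returns the total CRE and the bin-resolved contributions.
    """
    cre_binned = cf_hist * kernel
    return cre_binned.sum(), cre_binned
```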
FIGURE 12
FIGURE 12. SW (top row), LW (middle row), and total = SW + LW (bottom row) CRE discretized by combinations of CTP-TAU bins when using CF joint histograms from observations, and the Raisanen generator with either the O12 decorrelation length parameterization or the parameterization of this paper. The colors in the left column correspond to the actual binned CRE values, while in the other two columns they correspond to differences from observations.
5 Conclusion and perspectives on cloud vertical overlap
Taking advantage of the availability of new datasets from CloudSat’s radar and CALIPSO’s lidar, we have re-evaluated parameters used to describe vertical cloud overlap at ∼ 100 km scales.
Specifically, we used COD fields created in recent work (Oreopoulos et al., 2022). When expressing the vertical overlap of cloud occurrences and of COD distributions in terms of decorrelation lengths
as is common in the cloud overlap literature, we found larger peak values than in previous work also based on CloudSat and CALIPSO observations (Oreopoulos et al., 2012). We also took the opportunity
to extend that work by examining overlap in more detail for different cloud classes. In particular, we examined how overlap contrasts among broad categories of high, middle, and low clouds, but also
among more finely-defined cloud categories based on passive observations, known as cloud regimes. All such overlap breakdowns showed an unambiguous zonal pattern for both cloud occurrence and cloud
optical depth overlap decorrelation lengths with clear peaks at low latitudes indicating more aligned vertical structures, possibly due to stronger large-scale vertical motions and less wind shear (
Di Giuseppe and Tompkins 2015) in the tropics and subtropics. They also showed that an analysis that ignores cloud classes conceals a great amount of diversity in how the planet’s clouds overlap.
For a practical use of our overlap analysis we applied the same type of Gaussian fits to the observed zonal curves of the two decorrelation lengths as in O12 and then implemented the updated
parameterization in the “Raisanen generator” (Räisänen et al., 2004) to produce subcolumns that form simulated scenes. From these subcolumns, COSP’s MODIS simulator generates subcolumn CTP and TAU
values as well as subgrid distributions of cloud fraction in terms of CTP-TAU joint histograms. These quantities were then compared with their observational counterparts obtained by similarly
passing the observed COD field through the MODIS simulator. This exercise showed no notable performance enhancements compared to the case where O12 decorrelation length parameterizations were used.
The above major finding raises the question of how sensitive the performance (as evaluated here) of subcolumn generation is to extreme values of decorrelation length. We tested this by implementing
unrealistic maximum and random overlap (decorrelation lengths of infinity and zero, respectively) for both cloud occurrence and COD distribution overlap, specifically the four possible combinations
of purely maximum and purely random overlap for the two types of overlap. Results are shown in Figure 13. We see that neither maximum nor random overlap works in any combination, as each yields large errors in CTP-TAU histograms. As expected, random cloud occurrence overlap produces large overestimates of total cloud fraction, which are more extreme when COD overlap is also random because in that case the likelihood of TAU < 0.3 decreases. An overall overestimation in total projected cloud fraction (CF) does not mean that CF is overestimated in every histogram bin. This is because random overlap also decreases the likelihood of the extensive vertical alignment that creates optically thick clouds. When, on the other hand, L[α] = ∞, i.e., cloud occurrence overlap is maximum, overall CF is underestimated; the underestimate is smaller when COD overlap is random, because random rank overlap yields more subcolumns with TAU > 0.3. Again, individual histogram bins that go against the underestimation expectation exist. This is because
maximum overlap also creates pockets of more populous than observed optically thicker clouds. While pure maximum and pure random overlap perform very poorly, when combined as in the original
maximum-random paradigm described in the introduction and implemented in COSP’s SCOPS (Subcolumn Cloud Overlap Profile Sampler) subcolumn generator, performance is acceptable, albeit inferior to that
of generalized overlap (see O22).
FIGURE 13
FIGURE 13. Mean CF errors discretized in CTP-TAU bins (i.e., joint histogram errors) for four combinations of extreme L[α] and L[ρ]. Zero values correspond to random vertical overlap of cloud
fraction and COD ranks, while infinite values correspond to maximum cloud fraction overlap and perfect correlations of COD ranks.
A survey of the overlap literature exploring generalized overlap since 2000 when the Hogan and Illingworth (2000) work was published, reveals that decorrelation length magnitudes capturing the decay
of the parameter α controlling the mixing of maximum and random overlap occupy an enormous range that makes convergence towards universally accepted values for GCM parameterization purposes
challenging (rank correlation of distributions of condensate or optical depth has been studied much less). Magnitudes of decorrelation length depend on exact definitions (e.g., the “effective”
decorrelation length of Barker 2008b, while similar, does not have the same meaning as in the original definition adopted here); the type of dataset used (cloud fields simulated by a cloud resolving
model, ground-based radar, space-based radar, combined space-based radar-lidar observations); the size of the reference domain (scene); the cloud fraction threshold used to weed out non-meaningful
calculations of overlap; types of clouds examined or retained for overlap calculations; whether only contiguous or all cloudy layers are used, or whether only adjacent cloud layers are used; and how
the values of either α or L[α] are sampled, averaged, fitted, and composited. Given this multitude of dependences and some degree of insensitivity of the final cloud and radiation statistics to L[α], one is left wondering how modelers can choose the most appropriate decorrelation length values. Our results show that both the Oreopoulos et al. (2012) and the new parameterization derived
here are viable, at least in a mean sense (substantial errors at the scene level are still not universally suppressed). They also make apparent that a latitudinal dependence of decorrelation length
is an essential aspect of a parameterization, preferably also accounting for seasonal variation. If GCMs continue to perform long integrations with rather coarse grids in the near future, cloud vertical overlap will remain an important observable that should be periodically revisited with improved active observations and cloud products, such as those expected in a few years from NASA's Atmospheric Observing System (AOS) and ESA's EarthCARE mission.
Data availability statement
MODIS and CERES data were obtained from www.earthdata.nasa.gov, CloudSat/CALIPSO data from www.cloudsat.cira.colostate.edu, MODIS Cloud regimes from https://disc.gsfc.nasa.gov; further inquiries can
be directed to the corresponding author.
Author contributions
LO conceived the study and analysis methods, and led authorship of the manuscript. NC constructed the satellite dataset and performed the observational analysis. DL performed the simulations to
evaluate performance of parameterization. Both NC and DL created figures and contributed to the authorship of the manuscript.
This study was supported by the NASA’s CloudSat-CALIPSO Science Team program under David Considine. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through
the NASA Center for Climate Simulation (NCCS) at Goddard Space Flight Center.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the
reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Barker, H. W. (2008a). Overlap of fractional cloud for radiation calculations in GCMs: A global analysis using CloudSat and CALIPSO data. J. Geophys. Res. 113, D00A01. doi:10.1029/2007JD009677
Barker, H. W. (2008b). Representing cloud overlap with an effective decorrelation length: An assessment using CloudSat and CALIPSO data. J. Geophys. Res. 113, D24205. doi:10.1029/2008JD010391
Bodas-Salcedo, A., Webb, M. J., Bony, S., Chepfer, H., Dufresne, J. L., Klein, S. A., et al. (2011). COSP: Satellite simulation software for model assessment. Bull. Am. Meteorological Soc. 92 (8),
1023–1043. doi:10.1175/2011BAMS2856.1
Cho, N., Tan, J., and Oreopoulos, L. (2021). Classifying planetary cloudiness with an updated set of MODIS cloud regimes. J. Appl. Meteorology Climatol., 60(7), pp.981–997. doi:10.1175/
Chou, M. D., Lee, K. T., Tsay, S. C., and Fu, Q. (1999). Parameterization for cloud longwave scattering for use in atmospheric models. J. Clim., 12(1), pp.159–169. doi:10.1175/1520-0442(1999)012
Chou, M. D., Suarez, M. J., Ho, C. H., Yan, M. M., and Lee, K. T. (1998). Parameterizations for cloud overlapping and shortwave single-scattering properties for use in general circulation and cloud
ensemble models. J. Clim., 11(2), pp.202–214. doi:10.1175/1520-0442(1998)011<0202:pfcoas>2.0.co;2
Di Giuseppe, F., and Tompkins, A. M. (2015). Generalizing cloud overlap treatment to include the effect of wind shear. J. Atmos. Sci. 72 (8), 2865–2876. doi:10.1175/JAS-D-14-0277.1
Geleyn, J. F., and Hollingsworth, A. (1979). An economical analytical method for the computation of the interaction between scattering and line absorption of radiation. Contrib. Atmos. Phys. 52,
Hillman, B. R., Marchand, R. T., and Ackerman, T. P. (2018). Sensitivities of simulated satellite views of clouds to subgrid-scale overlap and condensate heterogeneity. J. Geophys. Res. Atmos., 123
(14), pp.7506–7529. doi:10.1029/2017JD027680
Hogan, R. J., and Illingworth, A. J. (2000). Deriving cloud overlap statistics from radar. Q. J. R. Meteorological Soc., 126(569), pp.2903–2909. doi:10.1002/qj.49712656914
Jing, X., Zhang, H., Peng, J., Li, J., and Barker, H. W. (2016). Cloud overlapping parameter obtained from CloudSat/CALIPSO dataset and its application in AGCM with McICA scheme. Atmos. Res. 170,
52–65. doi:10.1016/j.atmosres.2015.11.007
Mace, G. G., and Benson-Troth, S. (2002). Cloud-layer overlap characteristics derived from long-term cloud radar data. J. Clim. 15 (17), 2505–2515. doi:10.1175/1520-0442(2002)015<2505:clocdf>2.0.co;2
Naud, C. M., Del Genio, A., Mace, G. G., Benson, S., Clothiaux, E. E., and Kollias, P. (2008). Impact of dynamics and atmospheric state on cloud vertical overlap. J. Clim. 21 (8), 1758–1770.
Oreopoulos, L., Cho, N., Lee, D., Lebsock, M., and Zhang, Z. (2022). Assessment of two stochastic cloud subcolumn generators using observed fields of vertically resolved cloud extinction. J. Atmos.
Ocean. Technol. 39 (8), 1129–1244. doi:10.1175/JTECH-D-21-0166.1
Oreopoulos, L., and Khairoutdinov, M. (2003). Overlap properties of clouds generated by a cloud-resolving model. J. Geophys. Res. Atmos. 108 (D15), 4479. doi:10.1029/2002JD003329
Oreopoulos, L., Lee, D., Sud, Y. C., and Suarez, M. J. (2012). Radiative impacts of cloud heterogeneity and overlap in an atmospheric General Circulation Model. Atmos. Chem. Phys., 12(19),
pp.9097–9111. doi:10.5194/acp-12-9097-2012
Oreopoulos, L., and Norris, P. M. (2011). An analysis of cloud overlap at a midlatitude atmospheric observation facility. Atmos. Chem. Phys., 11(12), pp.5557–5567. doi:10.5194/acp-11-5557-2011
Pincus, R., Hannay, C., Klein, S. A., Xu, K. M., and Hemler, R. (2005). Overlap assumptions for assumed probability distribution function cloud schemes in large-scale models. J. Geophys. Res. Atmos.,
110, D15S09(D15). doi:10.1029/2004JD005100
Räisänen, P., Barker, H. W., Khairoutdinov, M. F., Li, J., and Randall, D. A. (2004). Stochastic generation of subgrid-scale cloudy columns for large-scale models. Q. J. R. Meteorological Soc. A J.
Atmos. Sci. Appl. meteorology Phys. Oceanogr., 130(601), pp.2047–2067. doi:10.1256/qj.03.99
Rossow, W. B., and Schiffer, R. A. (1999). Advances in understanding clouds from ISCCP. Bull. Am. Meteorological Soc. 80 (11), 2261–2287. doi:10.1175/1520-0477(1999)080<2261:aiucfi>2.0.co;2
Shonk, J. K., Hogan, R. J., Edwards, J. M., and Mace, G. G. (2010). Effect of improving representation of horizontal and vertical cloud structure on the Earth's global radiation budget. Part II: The
global effects. Q. J. R. Meteorological Soc., 136(650), pp.1205–1215. doi:10.1002/qj.646
Sun, M., Doelling, D. R., Loeb, N. G., Scott, R. C., Wilkins, J., Nguyen, L. T., et al. (2022). Clouds and the Earth’s radiant Energy system (CERES) FluxByCldTyp edition 4 data product. J. Atmos.
Ocean. Technol. 39 (3), 303–318. doi:10.1175/JTECH-D-21-0029.1
Tompkins, A. M., and Di Giuseppe, F. (2015). An interpretation of cloud overlap statistics. J. Atmos. Sci., 72(8), pp.2877–2889. doi:10.1175/JAS-D-14-0278.1
Zelinka, M. D., Klein, S. A., and Hartmann, D. L. (2012). Computing and partitioning cloud feedbacks using cloud property histograms. Part I: Cloud radiative kernels. J. Clim. 25 (11), 3715–3735.
Keywords: active observations, cloud overlap, cloud radiative effects, subgrid variability, decorrelation length
Citation: Oreopoulos L, Cho N and Lee D (2022) Revisiting cloud overlap with a merged dataset of liquid and ice cloud extinction from CloudSat and CALIPSO. Front. Remote Sens. 3:1076471. doi: 10.3389
Received: 21 October 2022; Accepted: 12 December 2022;
Published: 22 December 2022.
Edited by:
Seiji Kato, Langley Research Center (NASA), United States
Copyright © 2022 Oreopoulos, Cho and Lee. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in
other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic
practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Lazaros Oreopoulos, Lazaros.Oreopoulos@nasa.gov | {"url":"https://www.frontiersin.org/journals/remote-sensing/articles/10.3389/frsen.2022.1076471/full","timestamp":"2024-11-02T05:48:03Z","content_type":"text/html","content_length":"510380","record_id":"<urn:uuid:e7544caf-51d9-4641-8a29-92292f416879>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00084.warc.gz"} |
On the Krull dimension of rings of continuous semialgebraic functions | EMS Press
On the Krull dimension of rings of continuous semialgebraic functions
• José F. Fernando
Universidad Complutense de Madrid, Spain
• José Manuel Gamboa
Universidad Complutense de Madrid, Spain
Let R be a real closed field, S(M) the ring of continuous semialgebraic functions on a semialgebraic set M ⊂ Rⁿ, and S*(M) its subring of continuous semialgebraic functions that are bounded. In this work we introduce semialgebraic pseudo-compactifications of M and the semialgebraic depth of a prime ideal 𝔭 of S(M) in order to provide an elementary proof of the finiteness of the Krull dimensions of the rings S(M) and S*(M) for an arbitrary semialgebraic set M. We are inspired by the classical way to compute the dimension of the ring of polynomial functions on a complex algebraic set without involving the sophisticated machinery of real spectra. We show that in both cases the height of a maximal ideal corresponding to a point p ∈ M coincides with the local dimension of M at p. In case 𝔭 is a prime z-ideal of S(M), its semialgebraic depth coincides with the transcendence degree over R of the (real closed) residue field associated with 𝔭.
Cite this article
José F. Fernando, José Manuel Gamboa, On the Krull dimension of rings of continuous semialgebraic functions. Rev. Mat. Iberoam. 31 (2015), no. 3, pp. 753–766
DOI 10.4171/RMI/852 | {"url":"https://ems.press/journals/rmi/articles/13360","timestamp":"2024-11-12T12:58:50Z","content_type":"text/html","content_length":"93992","record_id":"<urn:uuid:70a44e86-6cbe-41ee-afcf-f4192605df7a>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00882.warc.gz"} |
Babylonian Mathematics
The fertile land between the Tigris and Euphrates valleys is regarded as the seat of human civilization, the place where humanity first began to develop urban centers and move away from a
semi-nomadic existence. This shift in society and the welding together of many disparate tribes into one empire created an explosion in knowledge, as the shift towards agriculture allowed study for
study's sake.
A modified version of Bill Casselman's photo of YBC 7289, with hand tracings to emphasize the cuneiform markings. (by Bill Casselman)
The main contribution of the Sumerians and Babylonians was the development of writing with their cuneiform script, an advance that allowed record keeping and knowledge to be preserved and passed down
through the generations. Many of these records, preserved on clay tablets, have been discovered by archaeologists and translated, revealing information about the daily life of these ancient people.
These tablets also allow modern historians to delve into the past and explore the sophisticated mathematical techniques of these people, the very foundation of the explosion in mathematics of the
later Greeks. While we tend to call the mathematics of this advanced civilization Babylonian, other great cultures such as the Sumerians and Assyrians also contributed to the development of an
advanced civilization in the Fertile Crescent.
The Babylonian Numerals - Astronomy and Base 60
Babylonian Numerals (Public Domain)
The Babylonians developed a system for writing down numbers, using symbols for singles, tens, and hundreds, showing that they probably used a decimal system for everyday life. This system allowed
them to handle large numbers comfortably and perform all of the major arithmetical functions. However, there is no evidence that they used a number for zero, and this everyday decimal system did not include fractions.
However, the Sumerians also used a base 60 system of counting, the reason why we still divide a circle into 360 degrees and count hours, minutes, and seconds. This sexagesimal system was used for
weights and measures, astronomy, and for the development of mathematical functions. For example, one tablet lists the squares of all of the numbers up to 60², and sexagesimal numbering is used for the numbers greater than 60: 64 is written as 60+4, 81 as 60+21…
Babylonian tablet listing pythagorean triples (Public Domain)
This idea of using position to arrange integers, known as the principle of position, is the first known use of such a system, the basis of our decimal system. The principle was then lost until the fifth or sixth century CE, and in the meantime Western culture used the unwieldy Roman system of numbering, a tortuous and difficult system for performing math. Their system of numbering implies that they may have understood
zero but, until further evidence is found, that remains largely conjectural.
This base 60 system also allowed the Babylonians to use fractions: they expressed a half as '30' (30 sixtieths) and a quarter as '15' (15 sixtieths). This notation found its way into Greece and
remained the standard way to express fractions until, many centuries later, the decimal system became the preferred language of mathematicians.
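To make the positional idea concrete, here is a small illustrative sketch in Python (not from the original article; the digit values are just examples) that converts a base 60 representation, whole part plus sixtieths, into our decimal notation.

def sexagesimal_to_decimal(whole_digits, fractional_digits=()):
    # whole_digits are base-60 places, e.g. [1, 21] -> 1*60 + 21 = 81
    value = 0
    for d in whole_digits:
        value = value * 60 + d
    # fractional places count sixtieths, then 3600ths, and so on
    place = 1 / 60
    for d in fractional_digits:
        value += d * place
        place /= 60
    return value

print(sexagesimal_to_decimal([1, 21]))       # 81, i.e. 60 + 21
print(sexagesimal_to_decimal([0], (30,)))    # 0.5, a half written as '30'
print(sexagesimal_to_decimal([0], (15,)))    # 0.25, a quarter written as '15'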
Royal Gur of Akkad - Showing Sumerian units of measurement (Creative Commons)
The accepted reason for the use of a sexagesimal system is that it was based in astronomy and the desire of the Babylonians to develop accurate calendars to chart the turning of the seasons and
predict the best times for planting, extremely important in a culture with a strong agricultural base. Initially, the Babylonians believed that there were 360 days in a year, and this formed the
basis of their numerical system; they divided this into degrees and this represented the daily movement of the sun around the sky. They then transferred this into measuring circles by dividing
degrees into minutes. Our entire system of astronomy, geometry, and dividing the day into hours, minutes and seconds hails from this period of history.
The Sumerians, Babylonians and other inhabitants of the Euphrates valley certainly made some sophisticated mathematical advances, developing the basis of arithmetic, numerical notation and using
fractions. Their work was adopted by the Greeks, and it is likely that the Greeks learned mathematical techniques from the Babylonian culture, as ideas traveled along the Silk Route from Anatolia
(Turkey) to China. Alexander the Great is known to have sent astronomical records from Babylonia to Aristotle after he conquered the area.
In geometry, besides the development of degrees, the Babylonians contributed little, tending to use rough approximations, and there is little evidence that they used geometrical techniques for
raising their buildings, preferring trial and error. Of course, so little is known about this sophisticated culture that evidence may yet turn up revealing more about their mathematical techniques.
Ultimately, their knowledge passed to the Greeks and formed the basis of pure mathematics: the Greeks, master manipulators of numbers, took this knowledge and began to explore the relationships
between numbers. | {"url":"https://explorable.com/babylonian-mathematics?gid=1595","timestamp":"2024-11-03T23:10:35Z","content_type":"application/xhtml+xml","content_length":"75465","record_id":"<urn:uuid:66e3ba08-4914-4490-a1a0-83f06d5f697f>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00060.warc.gz"} |
Arbitrage Pricing Theory (APT)
Arbitrage Pricing Theory (APT) is a multi-factor model for asset pricing that relates various macroeconomic factors to the expected return and risk of a financial asset. It was proposed by economist
Stephen Ross in 1976 as an alternative to the Capital Asset Pricing Model (CAPM), which only considers one factor, the market risk.
According to APT, an asset's expected return can be expressed as a linear function of its sensitivity to different risk factors and the risk premium associated with each factor. The formula for APT is:
E(R_i) = E(R_z) + (E(I_1) - E(R_z)) * β_1 + (E(I_2) - E(R_z)) * β_2 + ... + (E(I_n) - E(R_z)) * β_n
where:
• E(R_i) is the expected return on asset i
• E(R_z) is the risk-free rate of return
• E(I_j) is the expected return on factor j
• β_j is the sensitivity of asset i to factor j
• n is the number of factors
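As a purely illustrative sketch in Python (the factor returns and betas below are made-up numbers, not taken from the article), a two-factor version of the formula can be evaluated directly:

# Hypothetical two-factor APT example; every input is an illustrative assumption.
risk_free = 0.03                  # E(R_z)
factor_returns = [0.08, 0.05]     # E(I_1), E(I_2)
betas = [1.2, 0.7]                # sensitivities of asset i to each factor

expected_return = risk_free + sum(
    (f - risk_free) * b for f, b in zip(factor_returns, betas)
)
print(f"Expected return on the asset: {expected_return:.2%}")  # 10.40%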
The risk factors in APT include macroeconomic factors like inflation, GDP growth, interest rates, exchange rates, etc. that have an impact on the returns of all market assets. The number and choice of factors are flexible and based on both the analyst's expertise and empirical data. The majority of the
volatility in asset returns, however, can typically be explained by four or five factors.
According to APT, there are possibilities for arbitrage in the market: whenever an asset's price differs from its fair value, investors can profit without taking any risk by buying the asset when it
is underpriced and selling it when it is overpriced. Eventually, this procedure will correct any mispricing and return the asset price to its equilibrium level.
APT is a more flexible and complex model than CAPM, as it allows for multiple sources of risk and does not require any assumptions about market efficiency or investor preferences. However, it also
has some limitations, such as:
• The risk variables are not well identified or quantified.
• The method for calculating the risk premiums for each factor is not specified.
• It excludes any unsystematic risk particular to a sector or an asset.
• The relationships between the risk variables are not taken into account.
APT is a helpful technique for value investing portfolio analysis since it can help uncover assets that are briefly mispriced and offer enticing rewards in relation to their risks. | {"url":"https://www.moneybestpal.com/2023/08/arbitrage-pricing-theory-apt.html","timestamp":"2024-11-06T05:15:45Z","content_type":"application/xhtml+xml","content_length":"228846","record_id":"<urn:uuid:5a881129-28d6-4a9a-bc26-7ebcfccb0b96>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00564.warc.gz"} |
Derivative of x^tanx (x to the power tanx) - iMath
The derivative of x^tanx (x to the power tanx) is denoted by d/dx(x^tanx) and its value is equal to x^tanx[tanx/x + sec^2x log[e]x].
The derivative formula of x^tanx is given by
d/dx(x^tanx) = x^tanx[tanx/x + sec^2x log[e]x].
Let us now learn how to differentiate x^tanx.
Differentiate x^tanx
Question: Prove that d/dx(x^tanx) = x^tanx[tanx/x + sec^2x logx].
Let us put y = x^tanx.
Here we need to find dy/dx. Taking logarithms on both sides, we get that
log[e] y = log[e] x^tanx
⇒ log[e]y = tanx log[e]x, as we know the logarithm formula log[a]b^n = n log[a]b.
Differentiating both sides w.r.t x, we have
$\dfrac{d}{dx}(\log_e y)=\dfrac{d}{dx}(\tan x \log_e x)$
⇒ $\dfrac{1}{y} \dfrac{dy}{dx}$ $=\tan x\dfrac{d}{dx}(\log_e x)+\log_e x\dfrac{d}{dx}(\tan x)$, by the product rule of derivatives.
⇒ $\dfrac{1}{y} \dfrac{dy}{dx}$ $=\tan x \cdot \dfrac{1}{x}+\log_e x \sec^2 x$ as we know d/dx(log[e]x) =1/x and d/dx(tan x)= sec^2x.
⇒ $\dfrac{dy}{dx}=y(\dfrac{\tan x}{x}+\sec^2 x\log_e x)$
⇒ $\dfrac{dy}{dx}=x^{\tan x}(\dfrac{\tan x}{x}+\sec^2 x \log_e x)$ as y=x^tanx.
So the derivative of x^tanx (x to the power tanx) is equal to x^tanx[tanx/x + sec^2x logx], and it is obtained by the logarithmic differentiation.
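As a quick numerical sanity check (not part of the original article; the sample point x = 1.2 is arbitrary), the formula can be compared against a central finite-difference derivative in Python:

import math

def f(x):
    return x ** math.tan(x)

def derivative_formula(x):
    # x^tanx * (tanx/x + sec^2(x) * ln(x))
    sec_squared = 1 / math.cos(x) ** 2
    return f(x) * (math.tan(x) / x + sec_squared * math.log(x))

x, h = 1.2, 1e-6
numerical = (f(x + h) - f(x - h)) / (2 * h)
print(derivative_formula(x), numerical)  # the two values agree to several decimal places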
More Derivatives:
Derivative of x^x | Derivative of x^sinx
Derivative of sinx/x | Derivative of x^logx
Q1: What is the derivative of x^tanx?
Answer: The derivative of x^tanx (x raised to the power tanx) is equal to x^tanx[tanx/x + sec^2x logx]. | {"url":"https://www.imathist.com/derivative-of-x-to-the-power-tanx/","timestamp":"2024-11-09T16:00:13Z","content_type":"text/html","content_length":"178355","record_id":"<urn:uuid:e0edd34d-91ba-48a6-9fba-c07517c1fd5c>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00331.warc.gz"} |
Forecasting of CO2 level on Mauna Loa dataset using Gaussian process regression (GPR)
Forecasting of CO2 level on Mauna Loa dataset using Gaussian process regression (GPR)#
This example is based on Section 5.4.3 of “Gaussian Processes for Machine Learning” [1]. It illustrates an example of complex kernel engineering and hyperparameter optimization using gradient ascent
on the log-marginal-likelihood. The data consists of the monthly average atmospheric CO2 concentrations (in parts per million by volume (ppm)) collected at the Mauna Loa Observatory in Hawaii,
between 1958 and 2001. The objective is to model the CO2 concentration as a function of the time \(t\) and extrapolate for years after 2001.
# Authors: The scikit-learn developers
# SPDX-License-Identifier: BSD-3-Clause
Build the dataset#
We will derive a dataset from the Mauna Loa Observatory that collected air samples. We are interested in estimating the concentration of CO2 and extrapolating it for future years. First, we load the
original dataset available in OpenML as a pandas dataframe. This will be replaced with Polars once fetch_openml adds native support for it.
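The loading step itself appears to have been lost in extraction; it presumably looks something like the snippet below (the OpenML dataset id is our assumption, not stated in the surviving text).

from sklearn.datasets import fetch_openml

# data_id 41187 is assumed to be the Mauna Loa atmospheric CO2 dataset on OpenML
co2 = fetch_openml(data_id=41187, as_frame=True)
co2.frame.head()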
│ │year│month│day│weight │flag│station │ co2 │
│0│1958│3 │29 │4 │0 │MLO │316.1│
│1│1958│4 │5 │6 │0 │MLO │317.3│
│2│1958│4 │12 │4 │0 │MLO │317.6│
│3│1958│4 │19 │6 │0 │MLO │317.5│
│4│1958│4 │26 │2 │0 │MLO │316.4│
First, we process the original dataframe to create a date column and select it along with the CO2 column.
import polars as pl
co2_data = pl.DataFrame(co2.frame[["year", "month", "day", "co2"]]).select(
pl.date("year", "month", "day"), "co2"
)
co2_data.head()
shape: (5, 2)
│ date │ co2 │
│date │f64 │
│1958-03-29 │316.1 │
│1958-04-05 │317.3 │
│1958-04-12 │317.6 │
│1958-04-19 │317.5 │
│1958-04-26 │316.4 │
co2_data["date"].min(), co2_data["date"].max()
(datetime.date(1958, 3, 29), datetime.date(2001, 12, 29))
We see that we get CO2 concentration for some days from March 1958 to December 2001. We can plot this raw information to get a better understanding.
import matplotlib.pyplot as plt
plt.plot(co2_data["date"], co2_data["co2"])
plt.ylabel("CO$_2$ concentration (ppm)")
_ = plt.title("Raw air samples measurements from the Mauna Loa Observatory")
We will preprocess the dataset by taking a monthly average and dropping months for which no measurements were collected. Such processing will have a smoothing effect on the data.
co2_data = (
    co2_data.sort(by="date")
    .group_by_dynamic("date", every="1mo")
    .agg(pl.col("co2").mean())  # assumed aggregation: monthly mean of co2
    .drop_nulls()               # assumed: drop months without measurements
)
plt.plot(co2_data["date"], co2_data["co2"])
plt.ylabel("Monthly average of CO$_2$ concentration (ppm)")
_ = plt.title(
"Monthly average of air samples measurements\nfrom the Mauna Loa Observatory"
The idea in this example is to predict the CO2 concentration as a function of the date. We are also interested in extrapolating for upcoming years after 2001.
As a first step, we will separate the data from the target to estimate. Since the data is a date, we will convert it into a numeric value.
X = co2_data.select(
pl.col("date").dt.year() + pl.col("date").dt.month() / 12
).to_numpy()
y = co2_data["co2"].to_numpy()
Design the proper kernel#
To design the kernel to use with our Gaussian process, we can make some assumptions regarding the data at hand. We observe that they have several characteristics: we see a long-term rising trend, a
pronounced seasonal variation and some smaller irregularities. We can use different appropriate kernels that would capture these features.
First, the long-term rising trend could be fitted using a radial basis function (RBF) kernel with a large length-scale parameter. The RBF kernel with a large length-scale enforces this component to
be smooth. A trending increase is not enforced, so as to give a degree of freedom to our model. The specific length-scale and the amplitude are free hyperparameters.
from sklearn.gaussian_process.kernels import RBF
long_term_trend_kernel = 50.0**2 * RBF(length_scale=50.0)
The seasonal variation is explained by the periodic exponential sine squared kernel with a fixed periodicity of 1 year. The length-scale of this periodic component, controlling its smoothness, is a
free parameter. In order to allow decaying away from exact periodicity, the product with an RBF kernel is taken. The length-scale of this RBF component controls the decay time and is a further free
parameter. This type of kernel is also known as locally periodic kernel.
from sklearn.gaussian_process.kernels import ExpSineSquared
seasonal_kernel = (
    2.0**2
    * RBF(length_scale=100.0)
    * ExpSineSquared(length_scale=1.0, periodicity=1.0, periodicity_bounds="fixed")
)
The small irregularities are to be explained by a rational quadratic kernel component, whose length-scale and alpha parameter, which quantifies the diffuseness of the length-scales, are to be
determined. A rational quadratic kernel is equivalent to an RBF kernel with several length-scales and will better accommodate the different irregularities.
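The corresponding code block seems to have been dropped in extraction; judging from the combined kernel printed further below, it presumably looked like this.

from sklearn.gaussian_process.kernels import RationalQuadratic

irregularities_kernel = 0.5**2 * RationalQuadratic(length_scale=1.0, alpha=1.0)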
Finally, the noise in the dataset can be accounted with a kernel consisting of an RBF kernel contribution, which shall explain the correlated noise components such as local weather phenomena, and a
white kernel contribution for the white noise. The relative amplitudes and the RBF’s length scale are further free parameters.
from sklearn.gaussian_process.kernels import WhiteKernel
noise_kernel = 0.1**2 * RBF(length_scale=0.1) + WhiteKernel(
noise_level=0.1**2, noise_level_bounds=(1e-5, 1e5)
)
Thus, our final kernel is an addition of all the previous kernels.
co2_kernel = (
long_term_trend_kernel + seasonal_kernel + irregularities_kernel + noise_kernel
)
co2_kernel
50**2 * RBF(length_scale=50) + 2**2 * RBF(length_scale=100) * ExpSineSquared(length_scale=1, periodicity=1) + 0.5**2 * RationalQuadratic(alpha=1, length_scale=1) + 0.1**2 * RBF(length_scale=0.1) + WhiteKernel(noise_level=0.01)
Model fitting and extrapolation#
Now, we are ready to use a Gaussian process regressor and fit the available data. To follow the example from the literature, we will subtract the mean from the target. We could have used normalize_y=
True. However, doing so would have also scaled the target (dividing y by its standard deviation). Thus, the hyperparameters of the different kernels would have had a different meaning since they would
not have been expressed in ppm.
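The fitting code itself was lost in extraction; based on the estimator representation printed just below, it presumably resembled the following.

from sklearn.gaussian_process import GaussianProcessRegressor

y_mean = y.mean()
gaussian_process = GaussianProcessRegressor(kernel=co2_kernel, normalize_y=False)
gaussian_process.fit(X, y - y_mean)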
GaussianProcessRegressor(kernel=50**2 * RBF(length_scale=50) + 2**2 * RBF(length_scale=100) * ExpSineSquared(length_scale=1, periodicity=1) + 0.5**2 * RationalQuadratic(alpha=1, length_scale=1) + 0.1**2 * RBF(length_scale=0.1) + WhiteKernel(noise_level=0.01))
Now, we will use the Gaussian process to predict on:
• training data to inspect the goodness of fit;
• future data to see the extrapolation done by the model.
Thus, we create synthetic data from 1958 to the current month. In addition, we need to add the subtracted mean computed during training.
import datetime
import numpy as np
today = datetime.datetime.now()
current_month = today.year + today.month / 12
X_test = np.linspace(start=1958, stop=current_month, num=1_000).reshape(-1, 1)
mean_y_pred, std_y_pred = gaussian_process.predict(X_test, return_std=True)
mean_y_pred += y_mean
plt.plot(X, y, color="black", linestyle="dashed", label="Measurements")
plt.plot(X_test, mean_y_pred, color="tab:blue", alpha=0.4, label="Gaussian process")
plt.fill_between(
    X_test.ravel(),
    mean_y_pred - std_y_pred,
    mean_y_pred + std_y_pred,
    color="tab:blue",
    alpha=0.2,  # shading colour and alpha are assumed values; the call was truncated
)
plt.legend()
plt.ylabel("Monthly average of CO$_2$ concentration (ppm)")
_ = plt.title(
"Monthly average of air samples measurements\nfrom the Mauna Loa Observatory"
Our fitted model is capable of fitting the previous data properly and extrapolating to future years with confidence.
Interpretation of kernel hyperparameters#
Now, we can have a look at the hyperparameters of the kernel.
44.8**2 * RBF(length_scale=51.6) + 2.64**2 * RBF(length_scale=91.5) * ExpSineSquared(length_scale=1.48, periodicity=1) + 0.536**2 * RationalQuadratic(alpha=2.89, length_scale=0.968) + 0.188**2 * RBF(length_scale=0.122) + WhiteKernel(noise_level=0.0367)
Thus, most of the target signal, with the mean subtracted, is explained by a long-term rising trend for ~45 ppm and a length-scale of ~52 years. The periodic component has an amplitude of ~2.6ppm, a
decay time of ~90 years and a length-scale of ~1.5. The long decay time indicates that we have a component very close to a seasonal periodicity. The correlated noise has an amplitude of ~0.2 ppm with
a length scale of ~0.12 years and a white-noise contribution of ~0.04 ppm. Thus, the overall noise level is very small, indicating that the data can be very well explained by the model.
Total running time of the script: (0 minutes 5.553 seconds)
Related examples | {"url":"https://scikit-learn.qubitpi.org/auto_examples/gaussian_process/plot_gpr_co2.html","timestamp":"2024-11-13T04:39:41Z","content_type":"text/html","content_length":"133562","record_id":"<urn:uuid:22513ac7-c006-426d-a493-4b90dffb1631>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00030.warc.gz"} |
Area of rectangles
Content description
Choose appropriate units of measurement for length, area, volume, capacity and mass (ACMMG108)
Calculate perimeter and area of rectangles using familiar metric units (ACMMG109)
Source: Australian Curriculum, Assessment and Reporting Authority (ACARA)
Area of rectangles
The area of a rectangle is the size of the region inside it.
For this rectangle, drawn on centimetre grid paper, the area is 18 square centimetres. We can write square centimetres as cm².
We can find the area of a rectangle by multiplying its length by its width.
We know the lengths of the sides of the rectangle above, and can use this to calculate its area. A 6 cm by 3 cm rectangle contains 6 × 3 = 18 squares, each with an area of 1 square centimetre. So the
area of the rectangle is 18 square centimetres, or 18 cm².
We usually set this out as follows:
Area = length × width
     = 6 × 3
     = 18 cm²
We call this the formula for calculating the area of rectangles, and write it as:
Area = length × width.
When we are working with squares we can use the properties of a square to calculate its area. Since the side lengths of the square are equal, the formula becomes:
Area = length × length.
So for the square above:
Area = 8 × 8 = 64 km² | {"url":"https://amsi.org.au/ESA_middle_years/Year5/Year5_1cT/Year5_1cT_R3_pg1.html","timestamp":"2024-11-07T03:22:23Z","content_type":"text/html","content_length":"4649","record_id":"<urn:uuid:84584754-9b52-4996-9e02-ac24694d0e26>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00364.warc.gz"}
DCF Analysis - Investment Consultancy
DCF Analysis
Discounted Cash Flow Analysis
The basis of every valuation is the analysis of the incremental free cash flows to the firm and to equity after taxes. Each cash flow is discounted with its specific risk-adjusted discount rate. Key
points are:
• Project, time and decision framing
• Incremental free cash flow analysis (investor’s point of view)
• Opportunity investment portfolio analysis of investors
• Interdependency / correlation to other investment decisions
• Cash flow specific risk-adjusted discount rates acc. to Component Cash Flow Procedure (CCFP)
• Expected value and variance analysis of free cash flows
• Industry segment, company, project and cash flow specific WACC, betas and discount rates
• Country and small cap risk premium
• Corporate capital structure (dept, equity, cash, preferred stock)
• Leveraged betas for different capital structure strategies (e.g. Hamada)
• Cash flow specific tax adjustment of discount rates
• Synthetic rating
• Risk-adjusted Net Present Value (NPV)
• Adjusted-Present-Value (APV)
• Flow-to-Equity (FTE)
• Capital Asset Pricing Modell (CAPM)
• Arbitrage Pricing Theory (APT)
• (Un)Leveraging beta, operating leverage
• Taxes and inflation
• Various dynamic currency exchange rates and countries
• Annuity valuation
• Linear Programming (LP), Operations Research (OR)
• Capital budgeting, profitability index and annuity
• Project portfolio management
• Static valuation methods (costs, profits, EVA, …)
Value-Based Management
Every asset, financial as well as real, has a value. Every investment, project, company and strategic opportunity has a certain value. The ultimate goal of a company and its management is to optimize
the value of its assets and to create additional value. Hence the value analysis of assets is crucial for economic success. The asset analysis shows which investments and projects can increase the
value of your company best. It also shows how you can maximize the company’s value by optimizing existing assets.
Project Data
The following data types are included as standard and customized to the specific project: sales volumes, sales prices, sales deductions, variable unit costs, BOM, fixed costs, fixed assets, stocks,
receivables, liabilities, depreciation, one-off items, taxes, exchange rates, inflation, discount rates, WACC, interest structure, capital structure, etc. All data are included in any required level
of detail in any currency. Currency exchange rates, interest rates, tax rates and corporate capital structure can be defined for all time periods separately.
A DCF analysis yields many economic key figures and their development over time: free cash flows to the firm and to equity, EBITDA, depreciation, EBIT, taxes, cost of capital, liquidity,
amortization period, NPV, IRR, EVA, final-value-related key figures, MIRR (Baldwin return) and many more. Most important is the risk-adjusted net present value (NPV). It represents the additional value
of the project for investors relative to their existing security portfolio or, according to the CAPM, to the capital markets. Some typical pitfalls of DCF analysis are summarized here.
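As a minimal sketch of the core calculation behind these figures (all numbers and rates below are illustrative assumptions, not client data), a risk-adjusted NPV is obtained by discounting each expected free cash flow with its own rate:

# Illustrative discounted cash flow; every figure is an assumption.
initial_outlay = 1_000.0
expected_free_cash_flows = [300.0, 350.0, 400.0, 450.0]   # years 1..4
risk_adjusted_rates = [0.08, 0.09, 0.10, 0.10]            # one discount rate per cash flow

npv = -initial_outlay + sum(
    cf / (1 + r) ** t
    for t, (cf, r) in enumerate(zip(expected_free_cash_flows, risk_adjusted_rates), start=1)
)
print(f"Risk-adjusted NPV: {npv:.2f}")   # a positive NPV means the project adds value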
Once the basic DCF model is set up, the next steps are decision tree analyses, strategic risk and opportunity analyses with Monte Carlo simulation, and real options analysis. | {"url":"https://www.financeinvest.at/dcf-analysis/","timestamp":"2024-11-05T10:34:20Z","content_type":"text/html","content_length":"81053","record_id":"<urn:uuid:0573449d-bff1-4d17-ab3b-29f34e8b31cf>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00611.warc.gz"}
In my old rig, the lower limbs rotate in a single plane only. That simplifies the story. Just put a locator under the FK lower limb, having the same world position as the pvCtrl at the initial pose.
Snapping to FK means snapping the pvCtrl to this locator because the rotate plane is always correct.
But I want a rig like Stewart, where the lower limbs can rotate sideways. This involves simple vector maths.
global proc vector calcPvWPos( string $P1, string $P2, string $P3 ) {
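    // $P1, $P2, $P3 are assumed to be the world-space start, middle and end transforms of the limb chain (e.g. hip, knee, ankle)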
vector $a = `xform -q -ws -t $P1`;
vector $b = `xform -q -ws -t $P2`;
vector $c = `xform -q -ws -t $P3`;
vector $AC = $c - $a;
vector $AB = $b - $a;
// projV = (|b|cos@) ^AC = (a.b/|a|) ^AC
vector $projV = ( dotProduct($AC,$AB,0) / mag($AC) ) * `unit $AC`;
vector $pvDirV = unit($AB - $projV);
// move it out a bit
return ( $b + mag($AB) * $pvDirV );
} | {"url":"https://www.nickyliu.com/tag/dot-product/","timestamp":"2024-11-04T11:45:09Z","content_type":"text/html","content_length":"38181","record_id":"<urn:uuid:a5903944-56ec-4f0f-8639-ebcc60f8b8f8>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00888.warc.gz"}
15.2 Energy in Simple Harmonic Motion - University Physics Volume 1 | OpenStax
By the end of this section, you will be able to:
• Describe the energy conservation of the system of a mass and a spring
• Explain the concepts of stable and unstable equilibrium points
To produce a deformation in an object, we must do work. That is, whether you pluck a guitar string or compress a car’s shock absorber, a force must be exerted through a distance. If the only result
is deformation, and no work goes into thermal, sound, or kinetic energy, then all the work is initially stored in the deformed object as some form of potential energy.
Consider the example of a block attached to a spring on a frictionless table, oscillating in SHM. The force of the spring is a conservative force (which you studied in the chapter on potential energy
and conservation of energy), and we can define a potential energy for it. This potential energy is the energy stored in the spring when the spring is extended or compressed. In this case, the block
oscillates in one dimension with the force of the spring acting parallel to the motion:
When considering the energy stored in a spring, the equilibrium position, marked as $x_i = 0.00\,\text{m}$, is the position at which the energy stored in the spring is equal to zero. When the spring is
stretched or compressed a distance x, the potential energy stored in the spring is $U = \frac{1}{2}kx^2$.
Energy and the Simple Harmonic Oscillator
To study the energy of a simple harmonic oscillator, we need to consider all the forms of energy. Consider the example of a block attached to a spring, placed on a frictionless surface, oscillating
in SHM. The potential energy stored in the deformation of the spring is $U = \frac{1}{2}kx^2$.
In a simple harmonic oscillator, the energy oscillates between the kinetic energy of the mass $K = \frac{1}{2}mv^2$ and the potential energy $U = \frac{1}{2}kx^2$ stored in the spring. In the SHM of the mass and
spring system, there are no dissipative forces, so the total energy is the sum of the potential energy and kinetic energy. In this section, we consider the conservation of energy of the system. The
concepts examined are valid for all simple harmonic oscillators, including those where the gravitational force plays a role.
Consider Figure 15.11, which shows an oscillating block attached to a spring. In the case of undamped SHM, the energy oscillates back and forth between kinetic and potential, going completely from
one form of energy to the other as the system oscillates. So for the simple example of an object on a frictionless surface attached to a spring, the motion starts with all of the energy stored in the
spring as elastic potential energy. As the object starts to move, the elastic potential energy is converted into kinetic energy, becoming entirely kinetic energy at the equilibrium position. The
energy is then converted back into elastic potential energy by the spring as it is stretched or compressed. The velocity becomes zero when the kinetic energy is completely converted, and this cycle
then repeats. Understanding the conservation of energy in these cycles will provide extra insight here and in later applications of SHM, such as alternating circuits.
Consider Figure 15.11, which shows the energy at specific points on the periodic motion. While staying constant, the energy oscillates between the kinetic energy of the block and the potential energy
stored in the spring:
The motion of the block on a spring in SHM is defined by the position $x(t) = A\cos(\omega t + \phi)$ with a velocity of $v(t) = -A\omega\sin(\omega t + \phi)$. Using these equations, the
trigonometric identity $\cos^2\theta + \sin^2\theta = 1$ and $\omega = \sqrt{k/m}$, we can find the total energy of the system: $E_{Total} = \frac{1}{2}kA^2$.
The total energy of the system of a block and a spring is equal to the sum of the potential energy stored in the spring plus the kinetic energy of the block and is proportional to the square of the
amplitude: $E_{Total} = \frac{1}{2}kA^2$. The total energy of the system is constant.
A closer look at the energy of the system shows that the kinetic energy oscillates like a sine-squared function, while the potential energy oscillates like a cosine-squared function. However, the
total energy for the system is constant and is proportional to the amplitude squared. Figure 15.12 shows a plot of the potential, kinetic, and total energies of the block and spring system as a
function of time. Also plotted are the position and velocity as a function of time. Before time $t = 0.0\,\text{s}$, the block is attached to the spring and placed at the equilibrium position. Work is
done on the block by applying an external force, pulling it out to a position of $x = +A$. The system now has potential energy stored in the spring. At time $t = 0.00\,\text{s}$, the position of the
block is equal to the amplitude, the potential energy stored in the spring is equal to $U = \frac{1}{2}kA^2$, and the force on the block is maximum and points in the negative x-direction $(F_S = -kA)$.
The velocity and kinetic energy of the block are zero at time $t = 0.00\,\text{s}$. At time $t = 0.00\,\text{s}$, the block is released from rest.
Oscillations About an Equilibrium Position
We have just considered the energy of SHM as a function of time. Another interesting view of the simple harmonic oscillator is to consider the energy as a function of position. Figure 15.13 shows a
graph of the energy versus position of a system undergoing SHM.
The potential energy curve in Figure 15.13 resembles a bowl. When a marble is placed in a bowl, it settles to the equilibrium position at the lowest point of the bowl $(x = 0)$. This happens
because a restoring force points toward the equilibrium point. This equilibrium point is sometimes referred to as a fixed point. When the marble is disturbed to a different position $(x = +A)$,
the marble oscillates around the equilibrium position. Looking back at the graph of potential energy, the force can be found by looking at the slope of the potential energy graph $(F = -dU/dx)$.
Since the force on either side of the fixed point points back toward the equilibrium point, the equilibrium point is called a stable equilibrium point. The points $x = A$ and $x = -A$ are called
the turning points. (See Potential Energy and Conservation of Energy.)
Stability is an important concept. If an equilibrium point is stable, a slight disturbance of an object that is initially at the stable equilibrium point will cause the object to oscillate around
that point. The stable equilibrium point occurs because the force on either side is directed toward it. For an unstable equilibrium point, if the object is disturbed slightly, it does not return to
the equilibrium point.
Consider the marble in the bowl example. If the bowl is right-side up, the marble, if disturbed slightly, will oscillate around the stable equilibrium point. If the bowl is turned upside down, the
marble can be balanced on the top, at the equilibrium point where the net force is zero. However, if the marble is disturbed slightly, it will not return to the equilibrium point, but will instead
roll off the bowl. The reason is that the force on either side of the equilibrium point is directed away from that point. This point is an unstable equilibrium point.
Figure 15.14 shows three conditions. The first is a stable equilibrium point (a), the second is an unstable equilibrium point (b), and the last is also an unstable equilibrium point (c), because the
force on only one side points toward the equilibrium point.
The process of determining whether an equilibrium point is stable or unstable can be formalized. Consider the potential energy curves shown in Figure 15.15. The force can be found by analyzing the
slope of the graph. The force is $F = -dU/dx$. In (a), the fixed point is at $x = 0.00\,\text{m}$. When $x < 0.00\,\text{m}$, the force is positive. When $x > 0.00\,\text{m}$, the force is negative. This
is a stable point. In (b), the fixed point is at $x = 0.00\,\text{m}$. When $x < 0.00\,\text{m}$, the force is negative. When $x > 0.00\,\text{m}$, the force is also negative. This is an unstable point.
A practical application of the concept of stable equilibrium points is the force between two neutral atoms in a molecule. If two molecules are in close proximity, separated by a few atomic diameters,
they can experience an attractive force. If the molecules move close enough so that the electron shells of the other electrons overlap, the force between the molecules becomes repulsive. The
attractive force between the two atoms may cause the atoms to form a molecule. The force between the two molecules is not a linear force and cannot be modeled simply as two masses separated by a
spring, but the atoms of the molecule can oscillate around an equilibrium point when displaced a small amount from the equilibrium position. The atoms oscillate due the attractive force and repulsive
force between the two atoms.
Consider one example of the interaction between two atoms known as the van Der Waals interaction. It is beyond the scope of this chapter to discuss in depth the interactions of the two atoms, but the
oscillations of the atoms can be examined by considering one example of a model of the potential energy of the system. One suggestion to model the potential energy of this molecule is with the
Lennard-Jones 6-12 potential: $U(x) = 4\epsilon\left[\left(\frac{\sigma}{x}\right)^{12} - \left(\frac{\sigma}{x}\right)^{6}\right]$.
A graph of this function is shown in Figure 15.16. The two parameters $\epsilon$ and $\sigma$ are found experimentally.
From the graph, you can see that there is a potential energy well, which has some similarities to the potential energy well of the potential energy function of the simple harmonic oscillator
discussed in Figure 15.13. The Lennard-Jones potential has a stable equilibrium point where the potential energy is minimum and the force on either side of the equilibrium point points toward
equilibrium point. Note that unlike the simple harmonic oscillator, the potential well of the Lennard-Jones potential is not symmetric. This is due to the fact that the force between the atoms is not
a Hooke’s law force and is not linear. The atoms can still oscillate around the equilibrium position $xminxmin$ because when $x<xminx<xmin$, the force is positive; when $x>xminx>xmin$, the force is
negative. Notice that as x approaches zero, the slope is quite steep and negative, which means that the force is large and positive. This suggests that it takes a large force to try to push the atoms
close together. As x becomes increasingly large, the slope becomes less steep and the force is smaller and negative. This suggests that if given a large enough energy, the atoms can be separated.
If you are interested in this interaction, find the force between the molecules by taking the derivative of the potential energy function. You will see immediately that the force does not resemble a
Hooke's law force $(F = -kx)$, but if you are familiar with the binomial theorem, $(1 + x)^{n} = 1 + nx + \frac{n(n-1)}{2!}x^{2} + \cdots$,
the force can be approximated by a Hooke’s law force.
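As an illustrative numerical sketch (not from the text; the values of epsilon and sigma are arbitrary), the force can be obtained as the negative slope of the Lennard-Jones potential, and the stable equilibrium point recovered from its minimum:

import numpy as np

epsilon, sigma = 1.0, 1.0   # arbitrary illustrative parameters

def lj_potential(x):
    return 4 * epsilon * ((sigma / x) ** 12 - (sigma / x) ** 6)

x = np.linspace(0.9, 3.0, 2000)
force = -np.gradient(lj_potential(x), x)        # F = -dU/dx, evaluated numerically

x_min = x[np.argmin(lj_potential(x))]
print(f"equilibrium separation ~ {x_min:.3f}, theory: 2**(1/6)*sigma = {2**(1/6):.3f}")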
Velocity and Energy Conservation
Getting back to the system of a block and a spring in Figure 15.11, once the block is released from rest, it begins to move in the negative direction toward the equilibrium position. The potential
energy decreases and the magnitude of the velocity and the kinetic energy increase. At time $t = T/4$, the block reaches the equilibrium position $x = 0.00\,\text{m}$, where the force on the block
and the potential energy are zero. At the equilibrium position, the block reaches a negative velocity with a magnitude equal to the maximum velocity $v = -A\omega$. The kinetic energy is maximum and
equal to $K = \frac{1}{2}mv^2 = \frac{1}{2}mA^2\omega^2 = \frac{1}{2}kA^2$. At this point, the force on the block is zero, but momentum carries the block, and it continues in the negative direction toward $x = -A$. As
the block continues to move, the force on it acts in the positive direction and the magnitude of the velocity and kinetic energy decrease. The potential energy increases as the spring compresses. At
time $t = T/2$, the block reaches $x = -A$. Here the velocity and kinetic energy are equal to zero. The force on the block is $F = +kA$ and the potential energy stored in the spring is
$U = \frac{1}{2}kA^2$. During the oscillations, the total energy is constant and equal to the sum of the potential energy and the kinetic energy of the system, $E_{Total} = \frac{1}{2}mv^2 + \frac{1}{2}kx^2 = \frac{1}{2}kA^2$.
The equation for the energy associated with SHM can be solved to find the magnitude of the velocity at any position: $|v| = \sqrt{\frac{k}{m}\left(A^{2} - x^{2}\right)}$.
The energy in a simple harmonic oscillator is proportional to the square of the amplitude. When considering many forms of oscillations, you will find the energy proportional to the amplitude squared.
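A short numerical sketch (not part of the text; the mass, spring constant and amplitude are arbitrary example values) makes the exchange between kinetic and potential energy explicit:

import numpy as np

m, k, A = 0.50, 200.0, 0.02                 # kg, N/m, m (arbitrary example values)
omega = np.sqrt(k / m)

t = np.linspace(0.0, 2 * np.pi / omega, 9)  # sample one full period
x = A * np.cos(omega * t)
v = -A * omega * np.sin(omega * t)

K = 0.5 * m * v**2                          # kinetic energy
U = 0.5 * k * x**2                          # potential energy
print(np.allclose(K + U, 0.5 * k * A**2))   # True: the total stays (1/2) k A^2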
Check Your Understanding 15.1
Why would it hurt more if you snapped your hand with a ruler than with a loose spring, even if the displacement of each system is equal?
Check Your Understanding 15.2
Identify one way you could decrease the maximum velocity of a simple harmonic oscillator. | {"url":"https://openstax.org/books/university-physics-volume-1/pages/15-2-energy-in-simple-harmonic-motion","timestamp":"2024-11-03T09:10:37Z","content_type":"text/html","content_length":"494219","record_id":"<urn:uuid:377d1f69-c3a6-4871-b825-aeba1c1feb8b>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00075.warc.gz"} |
Teenagers with Down syndrome study Algebra in High School
This paper deals with the adaptation of an algebra curriculum for two students with Down syndrome who were included in High School. Since kindergarten, this boy and girl have been fully included
in general education classes. This paper examines the rationale for this choice of an algebra programme. The adaptation of this programme was easy because all that was required was to shorten it and
do some additional steps in teaching (a little bit more than in a remedial course). Also, visual prompts were provided to the students. The boy needed a calculator all the time. Both of the students
learned to calculate algebraic expressions with parenthesis, with positive and negative numbers and even with powers. The boy was able to do algebraic sum of monomials. The girl performed expressions
with fractions. They took written and oral tests at the same time as their classmates, but with different exercises or questions. The girl was able to do some mental arithmetic. Often she was more
consistent and careful than her typical classmates. The boy had problems with the integration and he did not attend the school full time. The inclusion, even when it was not perfect, provided the
motivation to teach and to learn. In both cases, the crucial point was the daily collaboration of the mathematics teacher with the special educator. Both of the students enjoyed the mathematics
program, as many typical students do. Mathematics gave them the fulfilling emotion of succeeding!
Monari Martinez, E. (1998) Teenagers with Down syndrome study Algebra in High School. Down Syndrome Research and Practice, 5(1), 34-38. doi:10.3104/case-studies.73
Why would an algebra programme be useful for a student with Down syndrome?
• Because the mathematics curriculum in these schools begins with a section devoted to algebra and the students with a disability like to participate. As every student, they like to do what their
peers do, like to understand what their classmates understand and speak about. This increases the student's self-esteem, as well as, the classmates' regard for him/her. Now they can interact in a
field which everybody knows. Unfortunately, adolescents and adults with Down syndrome have a higher probability of having depressive episodes than typical people (Rowitz & Jurkowski, 1995) and
than people with learning disabilities of other aetiology (Collacott et al., 1992), hence increasing the student's self-esteem can be one of the more important goals.
• Because when a student calculates an algebraic expression, the more frequent logical process needed is to follow a sequence of steps, as a list of instructions, which can also be written in the
beginning and kept close by. The persons with Down syndrome usually have no difficulties keeping in mind sequences: they may take time in learning it, but, after they have learned, they can
repeat it in the correct order (Kay- Raining Bird & Chapman, 1994). Since it is then a predictable process, they enjoy doing it, because it does not cause anxiety, on the contrary, it gives
confidence. Obviously, the complexity of the task has to fit the ability of the student.
• Because the formal language, used to write an algebraic expression, is absolutely well defined and clear. Even if it is concise, it gives each time all the information needed. As soon as the
students have understood this formal language, they have no difficulties in following its instructions. An example of an expression, which is not clear, is the one used in elementary schools to
write the algorithm of the sum of numbers with many digits: many steps of the algorithm are not written, but kept in mind or written in a way which is not well defined. This algorithm is
difficult to understand for typical students and becomes a tough problem for a student with a short working memory span, as it happens in Down syndrome (Hulme & Mackenzie, 1992; Marcell & Weeks,
• Because to get a solution either exactly true or exactly false, but not a matter of opinion, gives self confidence to everybody who performs the task correctly most of the time. This is
particularly true for a student with Down syndrome, who often during his/her life performed tasks almost 'well', but seldom 'well' as the peers (for instance with fine motor tasks).
• Because to learn algebra is important for every student, as it is an effective tool to exercise logical skills and is one of the first steps to enter into the mathematical culture.
The experience
D. was a 15 year old girl with Down syndrome, who attended her first grade of Istituto Magistrale (this is a four year school, between the High School and College level, where the students are
trained to become elementary school teachers) in Padova, Italy. During all her time in school, she remained in the same classroom with her typical schoolmates. She was supported just 18 hours per
week by a special educator, who was a teacher's aid. (In Italy for twenty years, children with any kind of disability have been included, by law, in general education classes and now there are no
public special education classes (Ferri, 1987)). D. attended all the regular courses, but the following curricula were adapted: Mathematics, Italian, Latin, English, History, Geography and Sciences.
The adaptations were performed by her special educator, together with the teachers of the courses. In all the courses she took the written tests at the same time that her classmates did, but with
some different exercises or questions and the tests were graded by the teachers of the courses. In Italian, the special educator often had to guide the development of the written composition, but in
mathematics she performed all the exercises without any help and without a calculator. The oral tests were given by the teachers of the courses in front of all the class, as usually. The author, at
request of the special educator, prepared D's syllabus for the course of Mathematics, adapting the regular syllabus and was a consultant of the special educator. Just some advice was given to the
special educator. She was truly surprised that the programme was so easy to implement. The teacher of math had no previous experience with 'special' students, but following the syllabus he helped the
special educator (who was not fond of math) to develop it and he prepared all the written tests for D. I think that one of the keys of the success of this programme was exactly the continuous
cooperation of the special educator with the math teacher. Unfortunately, the teachers of the courses are not always so available for special students!
R. was a 15 year old boy who attended the first grade of an 'Istituto Professionale per geometri' (this is a five year school, between the High School and College level, where the students are
trained to become surveyors). He studied Italian, English, History, Sciences, Physics, Mathematics, Chemistry and Technical Design, in adapted programs. He did not attend the school full time and
sometimes he did not remain in his classroom. In the school he was assisted by a special educator. (The next year the decision to attend part-time was examined again and was changed. He was fully
integrated in the school again!) He learned a lot and, in particular, he had surprising results in mathematics and in design. Also in this case, he took the written tests, at the same time as all
classmates did, just with different exercises or questions. In this case also the continuous cooperation of the special educator with the teachers of the courses was crucial for the success of the
The mathematics programme and the method
Looking at what D. had learned in the Middle School and at the regular syllabus of the mathematics course, we prepared the following algebra programme for her:
1. Expressions involving the four operations on natural numbers, till three digits.
2. Expressions as in 1 and with one level of parenthesis.
3. Powers of integers.
4. Multiplications and divisions of powers with the same base. For instance:
3^2 x 3^3 = 3^(2+3) = 3^5 = 3x3x3x3x3 = 243
5^4 : 5^2 = 5^(4-2) = 5^2 = 5x5 = 25
5. Multiplications and divisions of powers with the same exponent. For instance: 15^2 : 3^2 = (15:3)^2 = 5^2 = 5x5 = 25
6. Expressions as in 2, but with two levels of parenthesis.
7. Expressions as in 6 and with some powers.
8. Expressions as in 6, but with fractions. The fractions involved in the operations of sum and of subtraction had the same denominator. The fractions were simplified. (She had studied fractions in
the Middle School).
9. Sum and subtraction of two fractions with different denominators; the new denominator was the product of the two numbers: for instance
7/2 + 3/4 = (7x4)/(2x4) + (3x2)/(4x2) = 28/8 + 6/8 = (28+ 6)/8 = 34/8 = 17/4
10. Positive and negative integer numbers represented on an orientated line. Sum and subtraction.
11. Multiplication and division involving also negative numbers.
12. Expressions as in 6, involving also negative numbers.
13. Expression as in 12, involving also powers.
The programme for R. was similar, but with some changes: we dropped the expressions with fractions, also if fractions were involved in the solution of some problems, and we begin the operations with
In the beginning, for each point, they could have a written list for reference of what they had to do. (For instance, at point 2, the list could be:
1. Do multiplications and divisions inside the parenthesis;
2. Do the sums and the subtractions inside the parenthesis;
3. Take off the parenthesis;
4. Do the multiplications and the divisions;
5. Do the sums and the subtractions). They could use the calculator. When it was possible, the rules had to be taught with an explanation of their rationale: exactly as we do with typical students.
They learned to calculate algebraic expressions, step by step, following the same path as their typical classmates, but at a slower rate, with some more steps and with individual teaching.
About D.: first she learned expressions with integers, then with fractions of the same denominator and later to sum fractions with different denominators, after having reduced them to the same
denominator. She learned also to use powers and their properties and later on operations with positive and negative numbers. Regarding the above program, points 12 and 13 were missed for lack of
time, but they were done successfully the following year. She performed all the written tests at the same time with the other students and without a calculator, as she was able to do some mental
arithmetic. A programme of geometry was performed too. The math teacher and the special educator worked together choosing the didactic path and the tests. D. was a willing, adaptable and consistent
student: she was graded very well, with the maximum grade in mathematics. During the performance of the algebraic expressions, she displayed an autonomous and a critical attitude. In fact, sometimes
she decided to cut off some parts of the expression, to solve them aside and to paste the results in the initial expression, in the right place. She did so, in spite of the prohibition of the
teacher, who considered it a non-advantageous operation, even if clever, as it often led to misprints in the transcription. The teachers observed her improved personality, lexicon and more general
communication skills in that year. These results are testified by the documents of the school.
About R.: first he learned to perform expressions with integers and then with positive and negative integer numbers, with two levels of parenthesis and powers, algebraic sum of monomials and problems
of geometry on the plane. He was not able to do any mental arithmetic (even if he was able to do it in previous years) and he used a calculator all the time. He took all the written tests with the
other students. At school he displayed some behavioural problems, maybe due to the partial integration in the school: this was the first time in his life he was not fully integrated in the school and
he suffered from this exclusion. The next year, he asked for and obtained full inclusion: no behavioural problem occurred. He was a willing and consistent student, but surely not all his
potentialities were developed. He liked mathematics all the time and enjoyed doing it.
What are the reasons for these results?
To have no prejudice about what a person with a disability can learn, but to believe that, if we are patient and find the right way of teaching, there are no limits to what people with disabilities
can learn. These teenagers are just typical Down syndrome students (for instance the mental age of the girl is 6.5 years and her IQ is 45 in the WISC-R test) with very typical Italian families. They
always trusted in their children's possibilities in a realistic and consistent way and they always were watching out, looking for the best.
The strength and the consistency of these students, who, as many persons with Down syndrome, can work a lot, if they are motivated, despite of their difficulties.
The inclusion in a regular class, which gives to the students the motivation, to adapt themselves to their surroundings and to improve both their social and their academic skills. At the same time,
the inclusion stimulates the teachers to try more 'normal' programs with special students....and often it works!
The professional ability of the special educator, who provided a mediated learning experience (Kozulin & Falik, 1995; Kozulin & Presseisen, 1995) to the student, as a tutor, but inside the classroom
(Ferri, 1987). To succeed in it, the special educator needs good training as a teacher.
The collaboration between the teacher of the course and the special educator in adapting the mathematics curriculum and in teaching it. They need to have both good training and the philosophy that
the special students are not just a matter of the special educator, but they are students of the class and then they need the attention and the work of all the teachers.
The choice of the mathematics curriculum and of its progression. In this choice we have to consider the syllabus of the course, the previous curriculum of the student, his/her interests, strengths
and weaknesses, so that in the progression we have to alternate topics of high interest and easier to understand with more difficult topics. Sometimes, what was hard for us, is not so hard for him/
her and vice versa. This means that we have to know the pathology of the student (as well as we can), we have to master the topic with the aim of preparing a right adaption (which is not always a
reduction!), and we need to be flexible to modify the program, if it does not work.
A student with mental retardation can succeed in academic programs, where even typical students may have difficulties, and can enjoy studying these programs. If we believe the academic culture is
precious and pleasing for us, why should we not share it with people with difficulties? If it helps us, why should it not help them? I think the right path might be a fair balance between academic
programs and trainings for the autonomy.
ELISABETTA MONARI MARTINEZ, Dr., Researcher, Dipartimento di Matematica Pura ed Applicata, Universita degli Studi di Padova, via Belzoni, 7, 35131 Padova, Italy.
E-mail: martinez@math.unipd.it | {"url":"https://www.down-syndrome.org/en-in/library/research-practice/05/1/teenagers-down-syndrome-algebra-high-school/","timestamp":"2024-11-05T16:29:17Z","content_type":"text/html","content_length":"70410","record_id":"<urn:uuid:b9d1387d-d126-4611-900d-bc25b65222f3>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00105.warc.gz"} |
This research project aims at exploring the validity of the recently published work by Di Valentino et al. (2019), which used the Planck Legacy 2018 (PL18) data and suggested the possibility of a
closed universe in which the amplitude of the Cosmic Microwave Background radiation is enhanced and a positive curvature is preferred at the 99% confidence level. The study by Di Valentino et al. (2019) is based on
the observations of the ancient light called Cosmic Microwave Background (CMB). In the report, the amplitude of the CMB is larger compared to that of the standard ΛCDM model and the data deviates by
3.4 standard deviations. This research work investigates this amplitude anomaly, derives the equations governing the dynamics of a closed Universe within Einstein's General Relativity, and develops the relevant
theory behind a possible crisis with regard to the proposed evidence of a closed universe by considering the Friedmann-Robertson-Walker (FRW) metric, which assumes a homogeneous and isotropic
universe. We analyze the implications of a closed universe in cosmology. This research work begins by deriving the first and second Friedmann equations using the Einstein Field Equations (EFE). Then
the continuity equation is derived by considering a perfect fluid. Three coupled differential equations for Hubble parameter, scale factor and density as functions of time are obtained and
transformed to two coupled differential equations for the Hubble parameter and the density parameter as functions of the scale factor. The two equations are solved simultaneously in Python using the
odeint ODE integrator (run from the Spyder environment), and graphs of the evolution of the Hubble parameter and the density parameter are plotted for the Einstein de Sitter (EdS) model and the standard ΛCDM model and compared with those of the closed universe. From the
graphs obtained, the Hubble parameter decreases with increase in the scale factor. The value of Hubble parameter in EdS at decoupling is greater than that of ΛCDM and closed models but their values
converge today. The density parameter for a closed universe is greater than one, compared to the Einstein de Sitter and ΛCDM models, for which it equals one. This implies that the closed cosmos has enough
matter to cause a deceleration in its expansion. The deceleration implies that at some time in the future the expansion will stop and a big crunch will occur. If indeed the universe is closed, then the
current cosmology is in a crisis. Since the Planck spectra from Planck's Legacy 2018 prefer a closed universe, but the anomalies might have arisen from undetected systematics and/or statistical
fluctuations, this study recommends that more observations be carried out to ascertain whether a paradigm shift in cosmology and new physics are required.
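By way of illustration only (this sketch is not taken from the thesis, and the equations actually integrated there may differ), a Friedmann-type integration of the dimensionless Hubble parameter with odeint can be set up along the following lines for a matter-only, EdS-like model.

import numpy as np
from scipy.integrate import odeint

def dh_da(h, a):
    # toy matter-dominated (EdS-like) model: dH/da = -(3/2) H / a
    return -1.5 * h / a

a = np.linspace(1e-3, 1.0, 500)        # scale factor from near decoupling to today
h_start = a[0] ** -1.5                 # normalised so that h(a = 1) = 1
h = odeint(dh_da, h_start, a).ravel()

print(np.allclose(h, a ** -1.5, rtol=1e-3))   # tracks the analytic EdS solution a**(-3/2)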
DEDICATION
ACKNOWLEDGEMENT
ABSTRACT
LIST OF TABLES
LIST OF FIGURES
1.1 Research Background
1.1.1 Geometry of our Universe
1.1.2 The Cosmic Microwave Background
1.1.3 Standard cosmological model
1.2 Theory: History of cosmology
1.3 Statement of the Problem
1.4 Objectives
1.4.1 Main Objective
1.4.2 Specific Objectives
1.5 Justification and Significance of the Study
3.1 Einstein Field Equations
3.2 Friedman–Lemaitre–Robertson–Walker (FLRW) Metric
3.3 Density and Density parameter
3.4 Cosmic Microwave Background (CMB) Radiation
4.1.1 Source of Data
4.1.2 Planck, WMAP
4.1.3 Data type and Description
4.1.4 Data Analysis Tools
4.2 The Governing Equations
4.2.1 Einstein Field Equations
4.2.2 Energy-Momentum Conservation
4.2.3 Matter density And Density Parameter
4.2.4 Dynamical Equations
References
Table 2.1: Tensions between PL18 and BAO and CMB Lensing.
Figure 2.1: Preference for a closed universe (Di Valentino E, 2019)
Figure 2.2: Degeneracy between curvature and lensing (Di Valentino E, 2019)
Figure 2.3: Curvature and parameters shift (Di Valentino E, 2019)
Figure 2.4: Tension with CMB lensing (Di Valentino E, 2019)
Figure 2.5: Tension with cosmic shear measurements (Di Valentino E, 2019)
Figure 2.6: Tension with combined data (Di Valentino E, 2019)
Figure 2.7: Tensions in combined data (Di Valentino E, 2019)
Figure 6.1: A graph of h as a function of the scale factor
Figure 6.2: A graph of a density parameter against scale factor
Figure 6.3: A graph of h as a function of the scale factor for ΛCDM model
Figure 6.4: A graph of density parameter for ΛCDM against the scale factor
Figure 6.5: A graph of h against scale factor (a) in closed model
Figure 6.6: A graph of density parameter with respect to scale factor for a closed universe
Figure 6.7: Comparing the graphs of dimensionless Hubble parameter in EdS, ΛCDM and closed models
Figure 6.8: Comparing the graphs of density parameter as a function of the scale factor in EdS, ΛCDM and closed models
FLRW Friedman-Lemaitre-Robertson-Walker
CP Copernican Principle
BAO Baryon acoustic oscillation
SKA Square Kilometer Array
CMB Cosmic Microwave Background Radiation
OHD Observational Hubble parameter data
SDSS Sloan Digital Sky Survey
Mpc mega parsecs
H Planck's constant
Kpc kilo parsecs
WMAP Wilkinson Microwave Anisotropy Probe
LAMBDA Legacy Archive for Microwave Background Data Analysis
NASA National Aeronautics and Space Administration
FWHM Full Width Half Maximum
EFE Einstein Field Equations
DM Dark matter
PL Planck's Legacy
EoS (w) Equation of state
1.1 Research Background
Cosmology is the discipline in astronomy that studies the universe as a whole, on the assumption that at the largest scales the universe is homogeneous and isotropic. It aims at understanding the origin, structure, composition, evolution, and fate of the Universe. Homogeneity means that the universe is the same from place to place, and isotropy means that it looks the same in all directions. This assumption is important because observations made from any single point can then be taken to represent the universe as a whole, and this information can in turn legitimately be used in testing cosmological models. The assumption was made by Albert Einstein in his earliest cosmological work in the twentieth century and was meant to simplify the mathematical analysis (Amandola, 2021).
1.1.1 Geometry of our Universe
The geometry of the universe refers to its curvature, denoted by k, and its shape. The curvature can be positive, negative or zero. Many shapes are conceivable, but only three basic ones are usually considered: flat, open and closed. In the 1920s, Edwin Hubble presented evidence that our universe is expanding: the farther away a galaxy is, the faster it recedes from us, a relation now known as the Hubble law. The Hubble law describes the rate of expansion of space and applies to any system that expands or contracts in a uniform and isotropic manner (Piattela, 2018). Equation (1) below expresses the Hubble law.
ν = H0 r (1)
where ν is the velocity at which the source moves away from us, r is its distance, and H0 is the Hubble constant. The value of H0 as determined by recent measurements is H0 = 67.6 km/s/Mpc, which means that for every additional megaparsec of distance, a source recedes 67.6 km/s faster.
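As an illustration (a worked example, not a figure quoted from the thesis), a galaxy at a distance of r = 100 Mpc would, by equation (1), recede at ν = H0 r = 67.6 km/s/Mpc × 100 Mpc ≈ 6,760 km/s.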
Towards the end of the 20th century, observations of the radiation emitted by type Ia supernovae confirmed that the universe is expanding and revealed that this expansion is accelerating. This discovery posed a great challenge in physics: cosmological models were needed to explain the anomaly, because gravity attracts matter and the expansion would therefore be expected to decelerate (Shu W, 2015). One of the solutions settled on works through the spacetime geometry, in which length, time and mass are related: it assumes some form of new energy acting as anti-gravity, called dark energy. Observations of different kinds, from different sources and at different distances, also indicate a dark component of matter called dark matter (DM). Dark energy and dark matter, together with normal matter, determine the universe's density parameter. The value of this parameter is the ratio of the average total matter and energy density to the critical density, the density at which the universe would halt its expansion, but only after an infinite time. See equation (2) below.
Ω0 = ρ / ρc (2)
where ρ and ρc are the actual density and the critical density of the universe, respectively.
The value of the density parameter Ω0 is close to one. Ongoing studies aim to establish whether Ω0 is greater than 1, less than 1, or exactly 1, which in turn gives the geometry of the universe as follows:
1. Ω0 < 1. The universe is open and will continue to expand forever. An open universe's shape is likened to a 3D saddle, on which two initially parallel lines diverge.
2. Ω0 > 1. The universe is closed and will eventually stop its expansion and re-collapse. A closed universe's shape is likened to a 3D sphere, on which two initially parallel lines eventually converge.
3. Ω0 = 1. The universe is flat: it has enough matter to halt the expansion, but only after an infinite time, and not enough to re-collapse. The shape of a flat universe is likened to a flat (Euclidean) sheet, on which any two initially parallel lines always remain parallel.
Here Ωρ is the matter density parameter, Ωk the curvature density parameter, and ΩΛ the cosmological constant (dark energy) density parameter.
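For reference (an illustrative figure, not quoted from the thesis), the critical density corresponding to H0 ≈ 67.6 km/s/Mpc is ρc = 3H0^2/(8πG) ≈ 8.6 × 10^-27 kg/m^3, equivalent to roughly five hydrogen atoms per cubic metre.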
1.1.2 The Cosmic Microwave Background
The cosmic microwave background (CMB) is the electromagnetic radiation left over from the Big Bang. This radiation is a powerful tool for investigating the early universe, and the information obtained from it is used to constrain the parameters of the standard cosmological model. The CMB gives us a picture of how the universe looked when it was a few hundred thousand years old, the time at which neutral atoms could form and photons decoupled from matter. The CMB was found by the Cosmic Background Explorer (COBE) satellite to have a black-body spectrum, from which it can be concluded that matter and radiation were in equilibrium in the early universe. The distribution of photons should therefore reflect that of matter at the time decoupling took place, so any inhomogeneity in the matter density implies fluctuations in the CMB temperature.
In the early 1990s, COBE detected anisotropy in the CMB temperature. Although its level was very small, this made it possible to predict the anisotropy pattern theoretically by applying linear perturbation theory. The anisotropy pattern carries cosmological information that is mostly concentrated at angular scales of less than a degree on the sky, corresponding to perturbations that were inside the horizon before decoupling. It is through these scales that physical processes in the early Universe left their imprint on the CMB.
The shape of the CMB power spectrum is determined by the cosmological parameters. Given the initial distribution of density perturbations in the early Universe, the relative heights of the peaks indicate the baryonic matter density of the Universe. The positions of the peaks, however, depend on how the physical scale of the sound horizon at decoupling maps onto angular scales on the sky, which in turn depends on the geometry of the Universe. In an open Universe, for instance, a given physical scale at decoupling subtends a smaller angle than in a flat Universe. The positions of the peaks of the CMB power spectrum therefore provide a good estimate of the total density of the universe.
Planck Legacy 2018 used gravitational lensing to measure the matter density of the Universe. Gravitational lensing is the process by which radiation from distant astronomical objects is bent by the gravity of massive objects it encounters as it travels towards us. This bending makes the images of background astronomical objects appear slightly distorted, and such observations are used to obtain useful cosmological information. The degree to which CMB light has been bent, or gravitationally lensed, while travelling through the universe over the past 13.8 billion years is what the Planck telescope measures in order to gauge the density of the universe. The amount of matter intervening along the path of the CMB photons as they travel towards the Earth determines how much they are deflected, so that their arrival direction no longer crisply reflects their point of origin in the early universe (Balbi A, 2004).
1.1.3 Standard cosmological model
The current Standard Cosmological Model is denoted ΛCDM, where Lambda (Λ) is a cosmological constant associated with dark energy and CDM stands for cold dark matter, the sufficiently massive dark matter particles of the Universe. This model assumes that the Universe originated from pure energy in the Big Bang, and that about 5% of it constitutes normal matter, 27% dark matter, and 68% dark energy. It further assumes that on large scales the universe is both homogeneous and isotropic. The model rests on two theoretical frameworks: the Standard Model of Particle Physics (SMPP), sometimes called the physics of the very small, and the General Theory of Relativity (GTR), the physics of the very large. However, both frameworks have shortcomings. The SMPP does not explain how the three generations of leptons and quarks came to exist, their mass hierarchy, the nature of gravity, or the nature of dark matter. GTR, on the other hand, is silent about Big Bang cosmology, inflation, the matter-antimatter asymmetry of the universe, and the nature of dark energy (Robson B, 2019).
1.2 Theory: History of cosmology
Turning to observational cosmology, the first model to describe the universe was the 'island universe' model developed by Descartes and published in The World of 1636, which addressed the problem of the solar system. In 1750, Wright published a book entitled An Original Theory of the Universe, which placed the stars and the solar system in a sphere. In 1755 Kant, and in 1761 Lambert, produced the first hierarchical pictures of the Universe. None of this information about the Universe had observational validation. Later, the distance to the Sun became known, making it the first star with a known distance, and Friedrich Bessel and others made the first parallax measurements of stars in the 1830s.
Quantitative estimates of the scale and structure of the Universe were made by William Herschel in the 18th century. His model of the large-scale structure was based on star counts and gave evidence for the 'island universe'. Herschel derived his famous model of the Galaxy on the assumption that the absolute luminosities of the stars were all the same.
John Michell, Woodwardian Professor of Geology at Queens' College, Cambridge, warned William Herschel about his assumption that stars had a fixed luminosity. In 1767, John Michell devised the Cavendish experiment, which was used to measure the average density of the Earth; Michell is chiefly remembered for his prediction of black holes. In 1802, after measuring the magnitudes of visual binary stars, Herschel concluded that John Michell's warning about stellar luminosities was justified and finally lost faith in his model.
Throughout the 19th century there was a strong desire to observe the Universe with telescopes of ever larger aperture. A 72-inch reflector, then the largest telescope in the world, was constructed by William Parsons at Birr Castle, Ireland. The telescope was so large that, when tracking astronomical objects, its barrel was moved by ropes to accommodate the platform that carried the observer at the Newtonian focus. During this century the problem of keeping a reflecting telescope pointed was solved by Lewis Morris Rutherfurd, Andrew Common, John Draper and George Carver, who invented an adjustable plate holder that enabled the observer to maintain pointing with high precision.
Technological advances are illustrated by the achievements of James Keeler, who obtained images of spiral nebulae, among them his famous image of M51. The images showed the detailed structure of the spiral nebulae, a large number of which were fainter and of smaller angular size. He concluded that, if these fainter objects were similar to the Andromeda Nebula M31, they must be farther away from the solar system.
Helium was discovered through astronomy long before it was identified in the laboratory, one demonstration that astronomy can provide information about the behaviour of matter from observations that can later be reproduced in the laboratory. Carnegie facilitated the construction of the 100-inch Hooker Telescope, then the largest in the world, incorporating the lessons learned from earlier telescopes. It was completed in 1918 and dominated for about 30 years, until the Palomar 200-inch telescope was commissioned in 1948.
Scheiner (1899) obtained a spectrogram of M31 and stated that it suggested a cluster of Sun-like stars. In 1922, Öpik compared the mass-to-light ratio of M31 with that of our Galaxy and obtained an estimated distance to M31 of 440 kpc. The same year, Duncan discovered variable stars in spiral nebulae, which in turn led to Hubble's discovery of variable stars in M31.
Papers by Hubble (1925 and 1926) provided a description of the galaxies as an extragalactic system. They classified the galaxies into Hubble types, with estimates of the numbers of the different types, their mass-to-luminosity ratios and average densities. It was at this time that the mean mass density of the Universe as a whole was first derived. By 1929, after collecting approximate distances for about 24 galaxies with measured velocities, Hubble arrived at the law that bears his name: the Hubble law.
Theoretical cosmology is attributed to Albert Einstein and his famous static model of the Universe. First, around 1825, Lobachevsky and Bolyai relaxed Euclid's fifth axiom, settling the problem of the existence of non-Euclidean geometries. Their work led Riemann to introduce quadratic differential forms, resulting in the generalized non-Euclidean geometries. After a long search for a consistent relativistic theory of gravity, using ideas such as the influence of gravity on light, the principle of equivalence, and Riemannian spacetime, Einstein arrived at general relativity. By the end of 1912 he knew he needed a non-Euclidean geometry, and he consulted his friend Marcel Grossmann about a general way to transform between frames of reference for metrics expressed as quadratic differential forms.
Grossmann's answer was that Einstein should use Riemannian geometries, even though they were nonlinear, a fact Einstein took as an advantage because any theory satisfying relativistic gravity must be nonlinear. In 1915 Einstein formulated general relativity in its definitive form. In the following year, Willem de Sitter and Paul Ehrenfest suggested that, in order to remove the problems of boundary conditions at infinity, spacetime should be a closed spherical 4-dimensional geometry. In 1917, Einstein realized that general relativity was a theory that could be used to construct consistent models of the Universe. At this time the expansion of the Universe had yet to be discovered.
In his theory, Einstein wanted to incorporate the requirement that, in the large-scale Universe, the distribution of matter should determine the local inertial frame of reference. Another problem then emerged: as Newton had noted, a static model of the Universe is unstable under gravity. This forced Einstein to introduce an additional term, the cosmological constant Λ, into the field equations, which solved the problem.
In the same year, de Sitter found a solution of Einstein's field equations in the absence of matter (ρ = p = 0), meaning that Einstein had not achieved his objective; de Sitter's metric was originally written in a static form. In 1922, Kornel Lanczos reinterpreted the de Sitter solution through a coordinate transformation. In the same year, Alexander Alexandrovich Friedmann wrote a paper on relativistic cosmology. He noted that for an isotropic world model the curvature has to be isotropic, and he formulated models showing solutions for an expanding world with closed spatial geometries.
On solving Friedmann's equations one obtains exactly the standard world models of general relativity. In 1927, Georges Lemaître independently rediscovered Friedmann's solutions. By 1928, Lemaître and Howard P. Robertson had become aware that the Friedmann solutions described what would be taken as evidence for the expansion of the universe. In 1935, Robertson and George Walker independently solved the problem of time and distance in cosmology. For a homogeneous and isotropic world they introduced a metric of the form (the standard Robertson-Walker line element)
ds^2 = c^2 dt^2 - R^2(t) [ dr^2 / (1 - k r^2) + r^2 (dθ^2 + sin^2 θ dφ^2) ],
where k is the space curvature at the present epoch, r is a radial comoving coordinate, and R(t), the scale factor, is proportional to the distance between any two world lines and changes with cosmic time t (Longair S, 2004).
Cosmology uses the Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmological model to understand the evolution of the universe. This model has been so successful that it has become the standard cosmological model, now used to make predictions about the universe even at times as early as 10^-43 s after the Big Bang (Piattela, 2018). It was not until towards the end of the twentieth century that firm empirical data were obtained confirming the homogeneity and isotropy of the universe, exactly as the Cosmological Principle had predicted. The uniformity of the temperature of the cosmic microwave background (CMB) radiation serves as the best evidence for the isotropy of the observed universe.
1.3 Statement of the Problem
The standard cosmological model ΛCDM predicts the shape of the universe to be flat, in agreement with many cosmological observations. Knowledge of the shape of the universe is of great importance, as it can be used to predict the evolution and fate of the universe, which is in continuous accelerated expansion, and this fate depends on the density parameter. However, recent cosmological observations from the Planck Legacy 2018 data indicate an enhanced lensing amplitude in the Cosmic Microwave Background, which can be explained by a closed-universe model. This poses a challenge to the current Standard Cosmological Model. There is therefore considerable concern among both observational and theoretical cosmologists that the present model, which assumes the shape of the universe to be flat, may be incomplete or inaccurate. This concern has shifted attention to a thorough scrutiny, through research, of whether the current model is incorrect and, if so, what the implications would be for cosmology. Although the recent data suggest a possible closed-universe model, more observations are required to ascertain these claims. In this research we aim to explore the evidence for a closed universe and assess whether there could be a crisis in cosmology.
1.4 Objectives
1.4.1 Main Objective
The main objective of this work is to explore the evidence for the closed universe suggested by the Planck Legacy 2018 data, which show an enhanced lensing amplitude in the CMB, and to establish a model of a closed universe.
1.4.2 Specific Objectives
Specific objectives of this study are:
1 To derive the equations governing the dynamics of a closed universe within Einstein's Theory of General Relativity, assuming isotropy and homogeneity.
2 To obtain the equations governing the evolution of the matter density and matter density contrast of the universe.
3 To derive the equations and develop the relevant theory behind the possible crisis arising from the proposed evidence for a closed universe.
4 To study the implications of the closed-universe evidence for current cosmology.
1.5 Justification and Significance of the Study
The shape of the universe is key to the formulation of a standard cosmological model, which gives insight into the dynamics and future of the universe. A flat universe is an assumption made by the current Standard Cosmological Model ΛCDM. Given the importance of the shape of the universe in cosmology, a natural way to explain the anomaly in PL18 is to model a closed universe. Exploring PL18 is of great significance because it will help test the existing flat-universe model against a closed-universe model; it will help us predict the future and fate of the universe, since the accelerated expansion of a closed universe will eventually halt and a Big Crunch will occur; and it will help address the problem of the enhanced CMB amplitude reported in PL18.
{"url":"https://projectshelve.com/item/exploring-the-evidence-for-a-closed-universe-is-there-a-possible-crisis-for-cosmology-zhu6489gh1","timestamp":"2024-11-04T20:30:29Z","content_type":"text/html","content_length":"327665","record_id":"<urn:uuid:dd91169b-9a9e-42a6-9d38-683b5f721d2d>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00383.warc.gz"}
Online probabilistic metric embedding: A general framework for bypassing inherent bounds
Probabilistic metric embedding into trees is a powerful technique for designing online algorithms. The standard approach is to embed the entire underlying metric into a tree metric and then solve the
problem on the latter. The overhead in the competitive ratio depends on the expected distortion of the embedding, which is logarithmic in n, the size of the underlying metric. For many online
applications, such as online network design problems, it is natural to ask if it is possible to construct such embeddings in an online fashion such that the distortion would be a polylogarithmic
function of k, the number of terminals. Our first main contribution is answering this question negatively, exhibiting a lower bound of Ω(log k log Φ), where Φ is the aspect ratio of the set of
terminals, showing that a simple modification of the probabilistic embedding into trees of Bartal (FOCS 1996), which has expected distortion of O(log k log Φ), is nearly-tight. Unfortunately, this
may result in a very bad (polynomial) dependence in terms of k. Our second main contribution is a general framework for bypassing this limitation. We show that for a large class of online problems
this online probabilistic embedding can still be used to devise an algorithm with O(min{log k log(kλ), log^3 k}) overhead in the competitive ratio, where k is the current number of terminals, and λ
is a measure of subadditivity of the cost function, which is at most r, the current number of requests. In particular, this implies the first algorithms with competitive ratio polylog(k) for online
subadditive network design (buy-at-bulk network design being a special case), and polylog(k, r) for online group Steiner forest.
Original language English
Title of host publication 31st Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2020
Editors Shuchi Chawla
Publisher Association for Computing Machinery
Pages 1538-1557
Number of pages 20
ISBN (Electronic) 9781611975994
State Published - 2020
Event 31st Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2020 - Salt Lake City, United States
Duration: 5 Jan 2020 → 8 Jan 2020
Publication series
Name Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms
Volume 2020-January
Conference 31st Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2020
Country/Territory United States
City Salt Lake City
Period 5/01/20 → 8/01/20
Bibliographical note
Publisher Copyright:
Copyright © 2020 by SIAM
{"url":"https://cris.huji.ac.il/en/publications/online-probabilistic-metric-embedding-a-general-framework-for-byp","timestamp":"2024-11-13T08:40:34Z","content_type":"text/html","content_length":"52291","record_id":"<urn:uuid:78bd4352-07c6-4e7a-b99d-e0820209ce82>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00323.warc.gz"}
model.density: Static Model generator for Density in Eve-ning/vsrgtools: osu! file Mangler
Density is not meant to represent physical stress; instead, it represents mental stress: in other words, the density of objects that must be processed within a certain time frame.
model.density(chart, window = 1000, mini.ln.len.min = 100, mini.ln.len.max = 400, mini.ln.weight.min = 0.2, mini.ln.weight.max = 0.7, weight.note = 1)
chart The chart generated from chartParse
window The window to check for objects
mini.ln.len.min Defines the minimum length of a Mini Long Note
mini.ln.len.max Defines the maximum length of a Mini Long Note
mini.ln.weight.min Defines the minimum weight of a Mini Long Note
Note that the weight counts separately for the head and tail. So 0.5 will treat a Long Note as 1 weight
mini.ln.weight.max Defines the maximum weight of a Mini Long Note
Note that the weight counts separately for the head and tail. So 0.5 will treat a Long Note as 1 weight
weight.note Defines the weight of a note
Note that the weight counts separately for the head and tail. So 0.5 will treat a Long Note as 1 weight
{"url":"https://rdrr.io/github/Eve-ning/vsrgtools/man/model.density.html","timestamp":"2024-11-09T06:56:31Z","content_type":"text/html","content_length":"27398","record_id":"<urn:uuid:16e5f7f9-677a-45e6-b57e-33b77f2680d0>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00180.warc.gz"}
How to Expand and Condense Logarithms ( Basic Log Rules )
Learn how to expand and condense logarithms.
The first property you will learn about is the Product Property: if the logarithms have the same base, you can condense a sum of logs into one logarithm by multiplying their arguments.
The Quotient Property states that if the bases are the same, you can condense a difference of two logs into one logarithm by dividing their arguments.
The Power Property states that if a log has a coefficient, you can move the coefficient so that it becomes an exponent on the argument.
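For reference, the three rules can be summarized as follows (a standard statement of the rules, worded here rather than quoted from the video):
log_b(M) + log_b(N) = log_b(M·N) (Product Property)
log_b(M) - log_b(N) = log_b(M/N) (Quotient Property)
k·log_b(M) = log_b(M^k) (Power Property)
For example, 2·log(3) + log(5) - log(9) condenses to log(9·5/9) = log(5).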
I next work several example problems in which I use the logarithm rules to condense two logs into one.
Video Guide
0:18 Product Property
1:10 Quotient Property
2:15 Power Property
2:46 Example problems in which you condense into a single log
6:45 Example problems in which you expand the log
13:14 Condensing a natural log with three terms example problems
19:04 Using logs to evaluate a log sample problems.
{"url":"http://www.moomoomathblog.com/2018/04/how-to-expand-and-condense-logarithms.html","timestamp":"2024-11-02T10:50:22Z","content_type":"application/xhtml+xml","content_length":"82599","record_id":"<urn:uuid:ee3e499f-efd1-47b6-90c5-f8bd55fbcf17>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00678.warc.gz"}
"Mastering Python Arithmetic Operators: Best Practices and Common Mistakes" - Web Developing Top Tips
Python Arithmetic Operators: Best Practices and Common Mistakes
Python arithmetic operators are fundamental tools used for performing basic mathematical operations on numeric data. These operators include addition, subtraction, multiplication, division, and more.
In this article, we will explore Python’s arithmetic operators, along with examples of both good and bad coding practices to illustrate the importance of writing clean and efficient code.
Python Arithmetic Operators
Python provides the following arithmetic operators:
1. Addition (+): Adds two operands.
2. Subtraction (-): Subtracts the right operand from the left operand.
3. Multiplication (*): Multiplies two operands.
4. Division (/): Divides the left operand by the right operand (results in a float).
5. Integer Division (//): Divides the left operand by the right operand and truncates the decimal part (results in an integer).
6. Modulus (%): Returns the remainder of the division of the left operand by the right operand.
7. Exponentiation (**): Raises the left operand to the power of the right operand.
Now, let’s dive into some examples to demonstrate the usage of these arithmetic operators.
Good Practices: Clean and Efficient Code
Use Parentheses for Clarity:
It’s a good practice to use parentheses to group expressions and clarify the order of operations. This ensures that the calculations are performed in the desired sequence. For example:
#Good Practice
result = (x + y) * z
Prefer Descriptive Variable Names:
Use descriptive variable names to make your code more readable and understandable. Avoid single-letter variable names for significant values. For example:
#Good Practice
total_cost = price * quantity
Handle Division by Zero:
When performing division, consider possible division by zero scenarios and handle them gracefully to avoid runtime errors. For example:
#Good Practice
if denominator != 0:
    result = numerator / denominator
else:
    result = 0  # or handle the situation based on your application logic
Convert Data Types as Needed:
Be cautious when mixing data types in calculations. Understand how Python implicitly converts data types and explicitly convert them when necessary. For example:
#Good Practice
result = float(x) / y
Use the Exponentiation Operator for Power Calculation:
Use the double asterisk (**), the exponentiation operator, for raising a value to a power instead of multiple multiplication operations. For example:
#Good Practice
area = length ** 2  # Calculates the area of a square
Keep Code Concise and Readable:
Avoid overly complex expressions in a single line, as it may reduce code readability. Break down complex calculations into smaller, more understandable steps. For example:
#Good Practice
numerator = x + y
denominator = z - w
result = numerator / denominator
Example 1: Basic Arithmetic Operations
Good Practice – Example 1: Basic Arithmetic Operations
result = 10 + 5
print("Addition:", result)  # Output: 15
result = 20 - 8
print("Subtraction:", result)  # Output: 12
result = 6 * 7
print("Multiplication:", result)  # Output: 42
result = 21 / 3
print("Division:", result)  # Output: 7.0 (float)
# Integer Division
result = 22 // 3
print("Integer Division:", result)  # Output: 7 (integer)
result = 23 % 5
print("Modulus:", result)  # Output: 3
result = 2 ** 4
print("Exponentiation:", result)  # Output: 16
The code in Example 1 demonstrates basic arithmetic operations using Python’s arithmetic operators. The code is clean, concise, and easy to read, with appropriate comments to explain each operation.
Example 2: Using Arithmetic Operators in Assignments
Good Practice – Example 2: Using Arithmetic Operators in Assignments
x = 5
y = 3
# Add and Assign
x += y
print("Add and Assign:", x)  # Output: 8
# Subtract and Assign
x -= y
print("Subtract and Assign:", x)  # Output: 5
# Multiply and Assign
x *= y
print("Multiply and Assign:", x)  # Output: 15
# Divide and Assign
x /= y
print("Divide and Assign:", x)  # Output: 5.0 (float)
# Modulus and Assign
x %= y
print("Modulus and Assign:", x)  # Output: 2.0 (float)
# Exponentiation and Assign
x **= y
print("Exponentiation and Assign:", x)  # Output: 8.0 (float)
In Example 2, we use augmented assignment operators to perform arithmetic operations and assign the result back to the same variable. These operators (`+=`, `-=`, `*=`, `/=`, `%=` and `**=`) are more concise and efficient than writing the operations separately.
Bad Practices: Inefficient and Error-Prone Code
Avoid Redundant Parentheses:
While using parentheses for clarity is good, avoid excessive and redundant usage, as it can make the code confusing and harder to read.
#Bad Practice
result = ((x + y) * z)  # Redundant parentheses
Avoid Overusing Unary Minus:
The unary minus should be used sparingly, especially for numerical literals. It’s better to write negative numbers explicitly rather than using the unary minus.
#Bad Practice
negative_value = -5
Avoid Using Single Letter Variables Without Context:
Single-letter variable names like ‘a’, ‘b’, ‘x’, etc., should be avoided for significant values. Use descriptive variable names instead.
#Bad Practice
x = 10  # What does 'x' represent?
Avoid Division Confusion:
Be mindful of the data type returned by the division operator (/) in Python 3. In Python 2, the same operator behaves differently. Use the integer division (//) if you want integer results.
#Bad Practice (Python 2)
result = 5 / 2  # Returns 2 (integer division)
#Good Practice (Python 3)
result = 5 / 2  # Returns 2.5 (floating-point division)
Avoid Using Integer Division When You Need Floats:
Be cautious when using integer division (//) if you need a floating-point result. It truncates the decimal part, resulting in potential loss of precision.
#Bad Practice
result = 5 // 2  # Returns 2 instead of 2.5
Avoid Mixing Data Types Implicitly:
Be aware of how Python implicitly converts data types and avoid unexpected results by explicitly converting data types when needed.
#Bad Practice
x = "10"
y = 5
result = x + y  # Raises TypeError: can only concatenate str (not "int") to str
#Good Practice
x = "10"
y = 5
result = int(x) + y  # Converts 'x' to int and then adds it to 'y'
Example 3: Incorrect Division
Bad Practice – Example 3: Incorrect Division
result = 21 / 3
print("Division:", result)  # Output: 7.0 (float)
# Integer Division (incorrectly using '/'): Avoid this!
result = 22 / 3
print("Incorrect Integer Division:", result)  # Output: 7.333333333333333 (float)
In this bad practice example, we mistakenly used the division operator `/` instead of the integer division operator `//`. As a result, the division returns a float value instead of an integer. Such
mistakes can lead to incorrect calculations and unintended results.
Example 4: Unnecessary Parentheses
Bad Practice – Example 4: Unnecessary Parentheses
# Addition (with unnecessary parentheses): Avoid this!
result = (10 + 5)
print("Addition with Unnecessary Parentheses:", result)  # Output: 15
In this bad practice example, we used unnecessary parentheses around the addition operation. While this does not cause any errors, it adds visual clutter to the code, making it less readable and
harder to maintain.
Python arithmetic operators provide powerful capabilities for performing mathematical operations in your programs. By following the best practices outlined in this article, you can write clean and
maintainable code, making your calculations more accurate and easier to understand. Avoiding bad practices will help you steer clear of common pitfalls and ensure your code behaves as intended. Keep
your code concise, use descriptive variable names, handle edge cases, and always strive for readability and maintainability in your Python programs. With a strong grasp of arithmetic operators and
their best practices, you’ll be well-equipped to tackle a wide range of numerical tasks in Python.
{"url":"https://webdevelopingtoptips.com/mastering-python-arithmetic-operators-best-practices-and-common-mistakes/","timestamp":"2024-11-02T21:16:35Z","content_type":"text/html","content_length":"94942","record_id":"<urn:uuid:6cae833c-4078-4c59-84a6-6afb0ab8ac5b>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00115.warc.gz"}
Random Fields
10 Mar 2024 21:50
Stochastic processes where the index variable is space, or something space-like. (Formally, one-dimensional space works a lot like time; and space-time works a lot like a higher-dimensional space,
though not always.) This is of course important for modeling spatial and spatio-temporal data, but also data on networks, and statistical mechanics.
Recommended, general:
• Pierre Brémaud, Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues
• Carlo Gaetan and Xavier Guyon, Spatial Statistics and Modeling
• Geoffrey Grimmett, Probability on Graphs: Random Processes on Graphs and Lattices
• Xavier Guyon, Random Fields on a Network
• Peter Guttorp, Stochastic Modeling of Scientific Data
• Brian D. Ripley, Statistical Inference for Spatial Processes
• Rinaldo B. Schinazi, Classical and Spatial Stochastic Processes
Recommended, of more specialized interest:
• Ole E. Barndorff-Nielsen, Fred Espen Benth and Almut E. D. Veraart, Ambit Stochastics
• J.-R. Chazottes, P. Collet, C. Kuelske and F. Redig, "Deviation inequalities via coupling for stochastic processes and random fields", math.PR/0503483
• Jérôme Dedecker, Paul Doukhan, Gabriel Lang, José Rafael León R., Sana Louhichi and Clémentine Prieur, Weak Dependence: With Examples and Applications
• David Griffeath, "Introduction to Markov Random Fields", ch. 12 in Kemeny, Knapp and Snell, Denumerable Markov Chains [One of the proofs of the equivalence between the Markov property and having
a Gibbs distribution, conventionally but misleadingly called the Hammersley-Clifford Theorem. Pollard, below, provides an on-line summary.]
• Mark Kaiser, "Statistical Dependence in Markov Random Field Models" [abstract, preprint]
• Andee Kaplan, Mark S. Kaiser, Soumendra N. Lahiri, Daniel J. Nordman, "Simulating Markov random fields with a conclique-based Gibbs sampler", arxiv:1808.04739 [Presentation by Dr. Kaplan]
• David Pollard, "Markov random fields and Gibbs distributions" [Online PDF. A proof of the theorem linking Markov random fields to Gibbs distributions, following the approach of David Griffeath.]
• Jeffrey E. Steif, "Consistent estimation of joint distributions for sufficiently mixing random fields", Annals of Statistics 25 (1997): 293--304
To read:
• Jan Ambjorn et al., Quantum Geometry: A Statistical Field Theory Approach [I am interested in the stuff about random surfaces.]
• K. Bahlali, M. Eddahbi and M. Mellouk, "Stability and genericity for SPDEs driven by spatially correlated noise", math.PR/0610174
• Raluca M. Balan, "A strong invariance principle for associated random fields", Annals of Probability 33 (2005): 823--840 = math.OR/0503661
• M. S. Bartlett, "Physical Nearest-Neighbour Models and Non-Linear Time Series", Journal of Applied Probability 8 (1971): 222--232
• Michel Bauer, Denis Bernard, "2D growth processes: SLE and Loewner chains", math-ph/0602049
• Denis Belomestny, Vladimir Spokoiny, "Concentration inequalities for smooth random fields", arxiv:1307.1565
• Anton Bovier, Statistical Mechanics of Disordered Systems
• Alexander Bulinski and Alexey Shashkin, "Strong invariance principle for dependent random fields", math.PR/0608237
• M. Cassandro, A. Galves and E. Löcherbach, "Partially Observed Markov Random Fields Are Variable Neighborhood Random Fields", Journal of Statistical Physics 147 (2012): 795--807, arxiv:1111.1177
• Ruslan K. Chornei, Hans Daduna, and Pavel S. Knopov
• Giuseppe Da Prato, Arnaud Debussche and Luciano Tubaro, "Coupling for some partial differential equations driven by white noise", math.AP/0410441
• Jean-Dominique Deuschel and Andreas Greven (eds.), Interacting Stochastic Systems [This looks deeply cool]
• Rick Durrett, Stochastic Spatial Models: A Hyper-Tutorial
• Vlad Elgart and Alex Kamenev, "Rare Events Statistics in Reaction--Diffusion Systems", cond-mat/0404241
• Mohamed El Machkouri, Dalibor Volny, Wei Biao Wu, "A central limit theorem for stationary random fields", arxiv:1109.0838
• H. Follmer, "On entropy and information gain in random fields", Z. Wahrsh. verw. Geb. 26 (1973): 207--217
• T. Funaki, D. Surgailis and W. A. Woyczynski, "Gibbs-Cox Random Fields and Burgers Turbulence", Annals of Applied Probability 5 (1995): 461--492
• L. Garcia-Ojalvo and J. Sancho, Noise in Spatially Extended Systems
• B. M. Gurevich and A. A. Tempelman, "Markov approximation of homogeneous lattice random fields", Probability Theory and Related Fields 131 (2005): 519--527
• Allan Gut and Ulrich Stadtmuller, "Cesaro Summation for Random Fields", Journal of Theoretical Probability 23 (2010): 715--728
• Peter Hall, Introduction to the Theory of Coverage Processes [= point process with a random shape attached to each point]
• Reza Hosseini, "Conditional information and definition of neighbor in categorical random fields", arxiv:1101.0255 ["Who then is my neighbor?" (Not an actual quote from the paper.)]
• Xiangping Hu, Daniel Simpson, Finn Lindgren, Havard Rue, "Multivariate Gaussian Random Fields Using Systems of Stochastic Partial Differential Equations", arxiv:1307.1379
• Niels Jacob and Alexander Potrykus, "Some thoughts on multiparameter stochastic processes", math.PR/0607744
• Wolfgang Karcher, Elena Shmileva, Evgeny Spodarev, "Extrapolation of stable random fields", arxiv:1107.1654
• M. Kerscher, "Constructing, characterizing, and simulating Gaussian and higher-order point distributions," astro-ph/0102153
• Ross Kindermann and J. Laurie Snell, Markov Random Fields and Their Applications [Free online!]
• P. Kotelenez, Stochastic Space-Time Models and Limit Theorems
• Michael A. Kouritzin and Hongwei Long, "Convergence of Markov chain approximations to stochastic reaction-diffusion equations", Annals of Applied Probability 12 (2002): 1039--1070
• Jean-Francois Le Gall, Spatial Branching Processes, Random Snakes and Partial Differential Equations
• Atul Mallik, Michael Woodroofe, "A Central Limit Theorem For Linear Random Fields", arxiv:1007.1490
• Jonathan C. Mattingly, "On Recent Progress for the Stochastic Navier Stokes Equations", math.PR/0409194
• A. I. Olemskoi, D. O. Kahrchenko and I. A. Knyaz', "Phase transitions induced by noise cross-correlations", cond-mat/0403583
• Rupert Paget, "Strong Markov Random Field Model", IEEE Transactions on Pattern Analysis and Machine Intelligence 26 (2004): 408--413
• Marcelo Pereyra, Nicolas Dobigeon, Hadj Batatia, Jean-Yves Tourneret, "Computing the Cramer-Rao bound of Markov random field parameters: Application to the Ising and the Potts models",
• Liang Qiao, Radek Erban, C. T. Kelley and Ioannis G. Kevrekidis, "Spatially Distributed Stochastic Systems: equation-free and equation-assisted preconditioned computation", q-bio.QM/0606006
• Havard Rue and Leonhard Held, Gaussian Markov Random Fields: Theory and Applications
• Andre Toom, "Law of Large Numbers for Non-Local Functions of Probabilistic Cellular Automata", Journal of Statistical Physics 133 (2008): 883--897
• M. N. M. van Lieshout, "Markovianity in space and time", math.PR/0608242
• Divyanshu Vats and Jose M. F. Moura, "Telescoping Recursive Representations and Estimation of Gauss-Markov Random Fields", arxiv:0907.5397
• Benjamin Yakir, Extremes in Random Fields: A Theory and its Applications
• Eunho Yang, Pradeep K. Ravikumar, Genevera I. Allen, Zhandong Liu, "Conditional Random Fields via Univariate Exponential Families", NIPS 2013 | {"url":"http://bactra.org/notebooks/random-fields.html","timestamp":"2024-11-09T10:45:58Z","content_type":"text/html","content_length":"13292","record_id":"<urn:uuid:8fab8973-b42f-4b0a-b576-0d80472ee6ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00094.warc.gz"} |
Calculating $\pi$ Factorial
December 11, 2017
One of the things I like most about mathematics is its ability to generalize results to realms that one might not have previously thought of before. Historically, this is what happened with rational
numbers, negative numbers, irrational numbers, complex numbers, and so on.
Most of you have probably heard of the factorial operation before, but here it is again explicitly. Essentially, if you have a natural number $n$, then its factorial is denoted as $n!$ and is
defined as $n! = n(n-1)(n-2) \ldots (2)(1)$. That’s an easy enough definition. Start with your number, and just multiply it by all of the numbers that came before it (until you get to one). We also
have the base cases of $1!=0!=1$.
The factorial is what is known as a recurrence relation. Instead of getting an explicit formula for how to calculate each term, you get what the $n^{th}$ term is in relation to the next term below it
(for this specific case). Let’s do an example with $5!$. If we look at our definition above, we get $5! = 5 \cdot 4!$, since the rest of the multiplication is “hidden” within $4!$. As such, we don’t
get a number for $5!$ until we figure out what $4!$ is, which we can only do if we find out what $3!$ is, and so on. Therefore, the factorial is really a recipe, and the only time we get a real
answer is when we hit $1!$, which we set to $1$.
That’s great, but it doesn’t look very helpful for calculating $\pi!$. After all, $\pi$ is definitely not in the naturals, so we can’t make use of the definition above. However, let’s keep in mind
the kind of relation that the factorial gives us. It tells us that $n! = n \cdot (n-1)!$. Let’s see if we can get something else to work like that.
Somewhat completely out of the blue, let’s take a look at the following function:
\begin{equation} \Gamma(a) = \int_0^\infty x^{a-1}e^{-x} dx, \,\,\,\,\, a \gt 0. \end{equation}
This function is called the Gamma function, and it’s used in probability distributions (which is where I came across it). Now, this might seem like a really weird function to throw at you. How in the
world does this relate to anything about factorials? Well, let’s start by trying to calculate the value of $\Gamma(a+1)$.
\begin{equation} \Gamma(a+1) = \int_0^\infty x^{a+1-1}e^{-x} dx = \int_0^\infty x^{a}e^{-x} dx \end{equation}
Then, we can integrate by parts using $u=x^a$ and $dv = e^{-x}dx$ to get:
\begin{equation} \left[-x^a e^{-x} \right]_0^{\infty} +a \int_0^\infty x^{a-1}e^{-x} dx. \end{equation}
The first term evaluates to zero at both boundaries (which can be seen by taking the limit as $x \rightarrow \infty$). Therefore, we are only left with the second term. However, look at the form of
the integrand. It’s simply $\Gamma(a)$. As such, we conclude with the following relation:
\begin{equation} \Gamma(a+1) = a \Gamma(a). \end{equation}
This is really neat, because it's another recurrence relation that gives us an answer in terms of the previous (lower) one. We can also look at the result of $\Gamma(1)$, and confirm that it is indeed equal to one. In fact, this means that we get the following result: $\Gamma(a) = (a-1)!$. It's a recurrence relation exactly like the factorial, but formulated in a totally different way. The one
important difference though is that the value of $a$ is not limited to natural numbers. Now, we simply need $a \gt 0$, which means we can easily calculate $\pi!$. This corresponds to a value of $a =
\pi + 1$. Inserting this into the integral definition and evaluating the integral numerically gives a result of:
\begin{equation} \pi! = \Gamma(\pi + 1) = \int_0^\infty x^{\pi}e^{-x} dx \approx 7.18808. \end{equation}
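As a quick numerical check, SciPy's built-in gamma function and a direct numerical integration of the integral above should give the same value; here is a minimal sketch of that check in Python:

# Check pi! = Gamma(pi + 1) two ways: with scipy.special.gamma and by
# numerically integrating the defining integral.
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

via_gamma = gamma(np.pi + 1)
via_quad, _ = quad(lambda x: x**np.pi * np.exp(-x), 0, np.inf)

print(via_gamma)  # approximately 7.18808, matching the value quoted above
print(via_quad)   # same value, up to integration error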
How do we interpret this result? I don’t know! But roughly, I can say that it’s a bit more than $3! = 6$, so at least something is on the right track. However, calculating the result of $\pi!$ in
particular isn’t of too much importance. It’s simply a neat extension of how one can think of factorials.
One thing I do want to note is that, just because we have a recurrence relation with this Gamma function, this doesn’t mean we technically have the same thing as a factorial if $a$ is not a natural
number. Really, we then just have a recurrence relation. However, it’s still an interesting connection that’s worth sharing. Sometimes, seeing operations and concepts you were only used to seeing in
one setting suddenly operating in another scenario can broaden one’s perspective on mathematics. | {"url":"https://jeremycote.net/pi-factorial","timestamp":"2024-11-08T17:47:12Z","content_type":"text/html","content_length":"7166","record_id":"<urn:uuid:1f8b9469-abc6-4eaa-92da-73e8fdad6b33>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00794.warc.gz"} |