content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Sway Sensitivity & Second-Order Effects
Automatic Assessment of Sway Sensitivity
Sway sensitivity is automatically determined in ProtaStructure when the design code is set to EC2 and ACI.
For other codes as discussed below, an assessment of sway sensitivity can also be made, however it should be noted this is based on analytical results and the recommendations of the ACI code.
Whenever sway sensitivity is assessed automatically you are advised to be aware of the limitations that apply.
These can be viewed by clicking the ‘Limitations’ button on the Building Analysis menu > Reports > next to Slenderness Calculation / Sway Classification Report.
The Lateral Drift & Bracing can be assessed via the Settings Center > Project Settings.
In both the BS8110 & CP65 code, there are no provisions for assessing sway sensitivity by analytical methods.
Therefore, when using BS8110 or CP65, we recommend you check the option for "User Defined Bracing for Columns and Walls" and then apply the condition (braced or unbraced) that you deem appropriate
for the building.
If you do not check this option, ProtaStructure will make an assessment of sway sensitivity based on analytical results and the recommendations of the ACI code.
This assessment is lateral deflection dependent. ACI code gives guidance on appropriate adjustments to section/material properties to be used in the building analysis for the purposes of this
assessment. Such adjustments will increase the deflection value used in the checks. ACI code method is discussed in detail in sections below.
Warning! We advise this ACI assessment is used cautiously if BS8110 code is chosen due to the following reasons :
1. A mixture of two different code of practices used in a single project is not ideal or sound from engineering perspective. Further, it may be explicitly forbidden by country specific practice or
2. This can result in different classifications for different storeys, which is not a condition that is recognized by BS8110.
3. ACI classification may result in many columns being classified as slender. Hence, additional slenderness moment may result in unexpectedly high number of column design failures or significantly
higher steel than conventional BS8110 method.
For more information about direction of braced and unbraced for wall, you may refer to Wall Braced Direction.
Classification Requirements of each code
BS8110 (similarly CP65 and HK-2004)
Bracing Classification — In BS8110 columns (and walls in minor axis direction) are considered as braced if lateral stability is provided (predominantly) by walls or other stiffer elements. This
classification remains a matter of engineering judgement.
Global P-Delta Effects — it is an inherent assumption in the above that walls provide sufficient lateral stiffness that global sway of the building is small and hence "Big" P-delta effects can be
ignored in braced structures. For un-braced structures, there is no clear statement on whether or not global P-Delta is also considered ignorable or is simply considered to be adequately catered for
in the amplification of design moments noted below.
Slenderness Classification — this is based on the effective length.
1. In braced structures effective lengths are < 1 and
2. in un-braced structures effective lengths are > 1.
It is considerably more likely that a member gets classified as slender when it has been classified as un-braced.
Short (Non-Slender) Members will see no amplification of moment at all, even if they are un-braced.
Slender Members (Members susceptible to P-Delta effects) :
• Braced-Slender Elements - additional moments are calculated based on effective length and are considered to be a maximum at around mid-height. These moments are not added to the highest end
moment so this may or may not end up being a critical design condition. This additional moment is clearly intended to cater for "little" P-delta effects (strut buckling)
• UnBraced-Slender Elements - additional moments are calculated based on effective length (which is longer and hence additional moments will be greater), and are considered to be a maximum at the
member ends. The additional moment is added to the highest end moment so this will always end up being a critical design condition.
It is assumed that this amplification of the critical design condition is intended to cater for both big and little P-delta effects.
The advantage of the above procedure is that moment amplification in each column is related only to the classification and slenderness of that column.
ACI 318-02
When the design code is set to BS8110, CP65 or HK-2004; if you uncheck “User Defined Bracing for Columns and Walls”, a facility is made available for assessing the susceptibility of individual
storeys to P-Delta effects. This uses the ACI method of classification during the building analysis.
Bracing Classification — using the ACI approach each storey level within a building is classified as sway or non-sway. The code also provides a method allowing analytical assessment of this
classification based on deflections arising from a linear analysis of the structure.
Global P-Delta Effects — when a storey is classified as "non-sway" then it can be assumed that global P-Delta effects are small enough to be ignored at that level. When a storey is classified as
"sway" then the frame analysis results need to be amplified in some way, options given are:
• A second order analysis (which would inevitably affect all members in the structure)
• Approximate moment magnification methods; cl.10.13.2 appears to indicate that this moment amplification only needs to be applied to the slender members at each floor level (similar to BS8110) is
this logical? Or should this amplify the sway moments in all columns and walls on a level by level basis?
Slenderness Classification — this is based on the effective length. At "Non-sway" levels effective lengths are < 1 and at "sway" levels effective lengths are > 1. It is considerably more likely that
a member gets classified as slender when it exists at a "sway" level.
Short (Non-Slender) Members will see no amplification of moment at all even if they are at "Sway" levels.
Slender Members (Members susceptible to P-Delta effects) :
• Slender Elements at Non-Sway Levels - additional moments are calculated based on effective length and are considered to be a maximum at around mid height. These moments are not added to the
highest end moment so this may or may not end up being a critical design condition.
In essence the approach here is identical to that used for braced slender members in BS8110.
• Elements at Sway Levels - as noted above the end moments of all members may be amplified to account for Global P-Delta effects. If a member at such a level is classified as slender, the
calculation of the magnified moment is not based on the effective length of each individual member, moment magnifiers are based either on the stability index for the floor (cl.10.13.4.2) or an
assessment of the average buckling capacity of all members at the floor (cl.10.13.4.3 - similar to the optional method in BS8110).
The additional moment is added to the highest end moment so this will always end up being a critical design condition. Additional check (cl.10.13.5) - having amplified the end moments there is a
requirement to check that intermediate slenderness effects (using effective length = 1.0L) are not more critical
While the method of moment amplification is different for slender members at sway levels, the general principles of moment amplification are the same in BS8110 and ACI and the terms used for
classification are interchangeable:
• BS8110 Braced = ACI Non-Sway
• BS8110 Un-Braced = ACI Sway
The ACI has the advantage that the classification is not a matter of engineering judgement and also that it introduces the flexibility to mix both braced and un-braced classifications within one
The ACI amplifications are applied only to lateral load cases - this does not address the fact that sway will occur as a result of vertical loads applied to any unsymmetrical structure and hence
ignores the possibility that significant P-delta effects could accrue due to this aspect of sway. However, for the majority of "building" type structures this simplification/assumption is likely to
be acceptable.
There does seem to be a question mark relating to the ACI approach for slender columns. If the sway moment amplification is made using the stability index then should the column be taken into design
as a braced column using an effective length = 1.0 (because the unbraced (global P-Delta) aspect of slenderness has already been allowed for?). This seems much less conservative than the suggested
implementation procedure for EC2 discussed below.
Eurocode EC2
In EC2 similar terminologies are used but the meanings are different:
• Cl 5.8.1 - Introduces concept of braced and bracing members.
• Cl 5.8.2 - Second Order Effects - this clause distinguishes between global effects (applying to the whole structure) and isolated member effects (slenderness).
Bracing Classification — Bracing members are the members which are assumed to provide the lateral stability of the structure. Columns and walls that are not “bracing members” are classified as
“braced”. Unfortunately there is an element of engineering discretion involved in this classification which will be discussed later.
Global P-Delta Effects — there is some guidance on determining if these effects can be ignored (For the purposes of this discussion we will classify structures in which global P-Delta effects cannot
be ignored as "sway sensitive"). Cl 5.8.3.3 (1) gives a simple equation that is only applicable in limited circumstances and is actually also difficult to apply. Initial calculations using this
equation have suggested that it would be too conservative resulting in too many structures being classified as sway sensitive.
Annex H provides slightly more general guidance. In order to automate the Annex H classification in ProtaStructure, the approach has been modified to become similar in principal to the ACI
classification method. It is noted that a single classification gets applied to the entire sway resisting structure (the bracing members). If it is determined that global P-Delta effects cannot be
ignored (the structure is sway sensitive) then the approach becomes a user driven procedure, in which the sway loads are amplified in accordance with Annex H. This is a relatively simple procedure
applied as follows:
1. View the sway sensitivity report to obtain the suggested load amplification factors.
2. Apply this amplification to the existing load combination factors.
3. Re-analyze using the option to over-ride further sway sensitivity assessment and design the structure as if it is not sway sensitive (because the global P-Delta effects are now catered for).
Tests have indicated that the sway sensitivity assessment procedure described above results in a non-sway classification for the vast majority of structures .
Although the classification applies to the bracing members, it is impossible to isolate these when analyzing the structure, so P-delta forces (introduced by load amplification or P-delta analysis)
will accrue in all members (braced or bracing, short or slender).
Slenderness Classification — this is based on the effective length. For braced members effective lengths are < 1 and for bracing members effective lengths are > 1. It is considerably more likely that
a member gets classified as slender when it has been classified as a bracing member.
Short (Non-Slender) Members:
• As noted above, if these members exist in a sway sensitive frame then there may have been some amplification of the design forces introduced during the general analysis procedure.
• no other amplification of moments is then applied.
Slender Members (Members susceptible to P-Delta effects) :
In essence the approach here is identical to that used for braced slender members in BS8110 and ACI.
• Slender Bracing Members - as in BS8110 - additional moments are calculated based on effective length (which is longer and hence additional moments will be greater). Un-like BS8110 the additional
moment does not have to be added to the highest end moment (because the end moment is already amplified if the structure is sway sensitive). In EC2 additional moments in slender members are
introduced in the same way regardless of whether or not the member exists in a sway sensitive frame.
In summary - it seems EC2 maintains a distinction between global P-delta effects and local slenderness effects which potentially results in a 2 stage amplification of moments. Once the sway
sensitivity is assessed the global P-Delta effects are introduced in the analysis results as necessary. For the local slenderness effects the general principles of moment amplification in EC2 are
very similar to those applied in BS8110:
• EC2 Braced = BS8110 Braced
• EC2 Bracing = BS8110 Un-Braced (but we would expect that the EC2 amplification might be lower since the BS8110 amplification at this point mixes both global and local effects whilst in EC2 any
global effects would already have been introduced).
Implementation of EC2 Classification in ProtaStructure
Setting the Braced/Bracing Members
EC2 requires the user to distinguish between the braced and the bracing members of a structure. This can be specified on the Lateral Drift tab of Building Parameters.
This setting has nothing to do with assessing sway sensitivity which is dealt with separately.
The purpose is to identify Bracing Members in each Global Direction (The member types that contribute to lateral stability of the building). The default setting is as shown above, (columns considered
to be braced; walls considered to be braced about their minor axis, but to provide bracing to the structure about their major axis).
Assessment of Sway Sensitivity
Most of the guidance surrounding EC2 suggests/assumes that most buildings will be classified as non-sway. Essentially the expectation is that the assumption made in BS8110 design, that any building
stabilised by shear/ core walls is non-sway, will prove to be correct.
Whether this proves to be true is somewhat irrelevant, the fact is that sway sensitivity classification has to be made and the Eurocode provides three options for doing this:
1. Use cl 5.8.3.3 (eq 5.18)
2. Use guidance from Annex H
3. Do a P-Delta analysis and check that the change in results is less than 10% (cl 5.8.2), if true than you can revert to linear elastic analysis.
If the structure is classified as sway sensitive then there are two options for dealing with this:
1. Annex H - Application of increased horizontal forces.
2. Do a P-Delta Analysis by checking the option "Apply P-Delta Analysis" in the Load Combination Editor
In fact there is a third option which might be applied when an engineer discovers a building is sway-sensitive - they may find a way to add more shear walls and change the classification!
Initially the P-delta option may seem attractive but it must be recognized that EC2 is very clear on the fact that realistic member properties accounting for creep and cracking must be used and the
calculation of these properties becomes a unique procedure for every member.
For sway sensitive structure, the Annex H guidance has been adopted for ProtaStructure.
Worked Example for a Sway Sensitive EC2 Structure
A model is constructed as shown above with two 3m wall panels providing stability in each direction.
Floor to floor ht= 3.0 m
Wall Length / Width= 3m / 0.2m
Concrete Grade= C30/37
G= 7 kN/m2 (total including walls)
Q= 2.5 kN/m2
Beams are provided for load collection only - they are pinned at both ends in order that lateral loads are focused in the shear walls.
The notes with eq H.8 indicate that cl 5.8.7.2 should be referred to - the stiffness of the members used in the analysis leading to the classification must be adjusted and Cl 5.8.7.2 is referred to
for the adjustment.
Cl 5.8.7.2 gives a procedure for calculation of Nominal Stiffness of compression members. Rigorous use of this clause would require iteration since the adjusted properties are member specific (load
and reinforcement and even direction dependent).
Simplified alternatives are given, the simplest of which still involves the use of theta-ef (the "Effective Creep Ratio") which remains a member specific calculation.
Stiffness factors can be set in Model Options > Materials and Section Effective Stiffness Factors
Referring to eq 5.26, if we assume theta-ef is around 1.5 then the suggested approximate stiffness adjustment can be calculated:
• Kc = 0.3 / (1 + 0.5*theta-ef) = approx 0.175
For the beams adjustments must be made to allow for creep and cracking - assume:
• I-cracked = 0.5 I-conc
• (eqn5.27) Ecd-eff = Ecd / (1 + theta-ef) = Ecd / 2.5
• Therefore total adjustment to EI = 0.5/2.5 = 0.2.
Overall it seems that initial adjustments might be as low as 0.15 to 0.2 EI for all members. To put this in perspective consider the slightly more concise advice given in the ACI. ACI suggests
reducing stiffness (EI) to a factor of 0.35 (or 0.7 if the members can be shown to be uncracked). It is also noted that the 0.35 factor should be further reduced if sustained lateral loads are
applied, it seems logical that notional loads should be regarded as sustained lateral loads. Therefore, a 0.2 adjustment factor may prove to be a little over conservative, but it is not wildly
different to the ACI advice.
Consider also that ACI classifies a building as sway sensitive when Q > 5% while EC2 allows this to increase to 10% - therefore, if the EC2 adjustment factor is around 0.175 compared to ACI factor of
0.35, then the classifications of the buildings would be almost identical.
ACI Classification (for comparison)
In Effective Material and Section Stiffness Factors, the Bending Stiffness of all members are adjusted to 0.35 before analysis as discussed above.
The report shows the structure is classified as sway-sensitive at all but the lowest floor level.
In the ACI only 5% second order effects are assumed to be ignorable. Q is the measure of this and at this point it is interesting to note that although Q is only marginally smaller than 0.05 at the
lowest level, it becomes quite significantly greater at the top level.
In fact, if we reduce this to a 4 storey building then the report below shows that the structure is still classified as sway-sensitive at the upper levels.
As shown above, P-Delta effects can be proportionally higher at upper levels.
For ACI code, Sway Classification Report is named as "Slenderness Calculation Report"
EC2 Classification to Annex H
Based on the discussion in Model Analysis Properties, In Effective Material and Section Stiffness Factors, the Bending Stiffness of all members are adjusted to 0.17. Note that although we are using
0.17, you may decide on a higher or lower value based on your engineering judgement.
The report shows that the 5 storey structure is classified as sway-sensitive at all floor levels.
In EC2 10% second order effects are assumed to be ignorable. Q is the measure of this and so the actual check is that if Q > 0.1 then the classification is sway-sensitive. For the figures above we
can see this is true at all levels.
It is noted that although Q is only marginally greater than 0.1 at the lowest level, it becomes quite significantly greater at the top level.In fact, if we reduce this to a 4 storey building then the
report below shows that although Q becomes less that 0.1 at the lowest level, the structure is still classified as sway-sensitive at the upper levels.
Although the reduced section properties together with the increased ignorable P-Delta amplification limit means that the threshold for sway-sensitive/non-sway classification is very similar for the
two codes, the amplification factors that apply to buildings that are classed sway sensitive are bigger (double) for EC2.
EC2 does not seem to recognise the concept that a building can have different sway sensitivity at different levels, a single classification and amplification factor is applied to the whole building.
This requirement is catered for in the report by including an extra line for ‘All‘ storeys. In the above 4 storey example the Q value calculated for “All” storeys is 0.1497 (therefore sway
Total deflection = 5.99mm
Total Axial Load (F-V.Ed)= 30349
Total Shear Load (F-H.Ed)= 101.2
Total height = 12m
Q = 1.5 (30349 * 0.00599) / (101.2 *12) = 0.1497 > 0.1 (therefore sway sensitive).
Application of Load Amplification Factors
Provided the model is classified as non-sway no further adjustments are required - the member design is performed using the existing load combinations and factors.
If (as in this case) the model is classified as sway-sensitive, the second-order effects must be accounted for in the design. As previously stated, the code provides two options for achieving this:
• Annex H - Application of increased horizontal force (automatically adopted in ProtaStructure)
• Do a P-Delta Analysis (option available in ProtaStructure).
In ProtaStructure the former approach is adopted - when the model is classified as sway-sensitive a load amplification factor is automatically applied to the existing design load combinations.
The option to perform P-Delta Analysis is also available in the Load Combination Editor
The amplification factor, Delta-s, is calculated from the Q value for “All” storeys as follows:
Delta-s = FH,Ed / FH,0Ed = 1 / (1-Q)
In the original 5 storey example Q = 0.271. Hence the amplification factor displayed on the Horizontal Drift Classification Report is
1/(1-0.271) = 1.372.
It is possible to over-ride this value if required and enter an amplification factor based on your own engineering judgement. To do this, re-display the Building Parameters, then from the Lateral
Drift tab check the box to apply the ‘User-defined’ Sway Amplification Coefficient. You can then over-ride the automatically calculated value in one or both directions.
If you have applied user-defined sway amplification coefficients, it is not necessary to re-analyse the building before the members are designed.
Should additional slenderness moment (British/EC) be ignored if ProtaStructure P-Delta Analysis is performed? | {"url":"https://support.protasoftware.com/portal/en/kb/articles/sway-sensitivity-second-order-effects","timestamp":"2024-11-02T21:21:14Z","content_type":"text/html","content_length":"108045","record_id":"<urn:uuid:7a225191-4c26-4dfb-ae81-43932e0017f5>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00162.warc.gz"} |
Re: [tlaplus] Time-outs
I'm assuming that you are not trying to verify any real-time properties of your protocol but that you are only interested in qualitative properties of a protocol that includes timeout actions
intended to recover possible faults of the underlying network, such as by resending messages that may have been lost.
In that case, simply include the timeout action as a disjunct in the definition of the next-state relation. This means that the action can occur at any point in the protocol: timeout is modeled by
non-determinism. You will probably want to include a precondition that implies that some action whose effect has been "lost" has indeed occurred previously. As a concrete example, have a look at the
standard specification of the alternating bit protocol (which has timeout actions for resending lost messages).
The resulting specification is of course an over-approximation of the behavior of the actual protocol: if you can verify the properties that you are interested in, then the actual protocol (where the
timeout action is more restricted) will be correct as well. If you find that the over-approximation is too coarse, you may have to add more preconditions. (Remember that as the writer of a
specification you have access to the entire system state even if the implementation of a node in a distributed system only sees the local state of that node.) But I recommend to start simple and see
how far you get.
Is there a general way/guideline to model Timeouts in TLA+. e.g. in a message-passing network model, how can time-outs be modelled. In other words, should I consider time-out a fault? in fact, a
more general question is that are Time-outs considered types of faults in a distributed system.
You received this message because you are subscribed to the Google Groups "tlaplus" group.
To unsubscribe from this group and stop receiving emails from it, send an email to
To view this discussion on the web visit
You received this message because you are subscribed to the Google Groups "tlaplus" group.
To unsubscribe from this group and stop receiving emails from it, send an email to tlaplus+unsubscribe@xxxxxxxxxxxxxxxx.
To view this discussion on the web visit https://groups.google.com/d/msgid/tlaplus/124C5A5A-B777-46FB-B9DA-21D5076044A4%40gmail.com. | {"url":"https://discuss.tlapl.us/msg03156.html","timestamp":"2024-11-04T08:02:07Z","content_type":"text/html","content_length":"6761","record_id":"<urn:uuid:98fb1804-279e-4d99-9ef5-16302a4df75e>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00369.warc.gz"} |
Voltmeter Ammeter Method for Measurement of Resistance - Electrical Concepts
Voltmeter Ammeter Method for Measurement of Resistance
Resistance is classified into three categories for the sake of Measurement. Different categories of Resistance are measured by different technique. That’s why they are classified. They are classified
Low Resistance: Resistance having value 1Ω or below are kept under this category.
Medium Resistance: This category includes Resistance from 1Ω to 0.1 MΩ.
High Resistance: Resistance of the order of 0.1 MΩ and above is classified as High resistance.
In this section, we will discuss the method of measurement of Medium Resistance. The different methods used for Medium resistance are as follows:
· Ammeter Voltmeter method
· Substitution Method
· Wheatstone Bridge Method
· Ohmmeter Method
Ammeter Voltmeter Method:
There are two possible connections for the measurement of Medium Resistance using Ammeter Voltmeter Method as shown in figure below:
In both the cases, the reading of Voltmeter and Ammeter is taken. If the Voltmeter reading is V and Ammeter reading is I then the measured Resistance will be
Rm = V/I
This measured Resistance Rm will be the true value of the Resistance if and only if the Resistance of Ammeter is zero and that of Voltmeter is infinite. But actually this is not possible to achieve
zero resistance Ammeter and infinite Resistance Voltmeter. Therefore measured value of resistance Rm will deviate from the true value R (Say).
So we will discuss both the circuit individually and will calculate the percentage error in the measurement.
We consider first kind of connection as shown in figure 1 above. It is clear from the figure that Voltmeter is measuring the Voltage drop across the Ammeter as well as resistor. So V = Va + Vr
Let current measured by Ammeter = I
Therefore, measured Resistance Rm = V/I
So, Rm = (Va+Vr) / I =(IRa+IR) / I = Ra+R
Therefore, the measured Resistance is the sum of Resistance of Ammeter and true Resistance. Therefore measured value will only represent true value if Ammeter Resistance Ra is Zero.
True value of Resistance R = Rm –Ra
= Rm(1-Ra/Rm)
Relative Error = (Rm-R)/R = Ra/R
Therefore, Relative Error will be less if the true value of Resistance to be measured is high as compared to the internal Resistance of Ammeter. That’s why this method should be adopted when
measuring high Resistance but it should be under Medium Resistance category.
We will consider second connection in which Voltmeter is connected in which Voltmeter is connected toward Resistance R whose value is to be measured.
It is obvious from figure that Ammeter will read the current flowing through the Voltmeter and Resistance R. Therefore current measured by Ammeter Ia = Iv+Ir
So, Ia = Iv+Ir
= V/Rv+V/R where Rv is Resistance of Voltmeter and V is Voltmeter reading.
Measured Resistance Rm = V/Ia
= V/(V/Rv+V/R)
= RvR/(R+Rv)
= R/(1+R/Rv) ….Dividing Numerator and Denominator by Rv
Therefore, true value of Resistance R = RmRv/(Rv-Rm)
= Rm(1-Rm/Rv)
Therefore, true value of Resistance will only be equal to measured value if the value of Voltmeter Resistance Rv is infinite.
If we assume that the value of Voltmeter Resistance Rv is large as compared to the Resistance to be measured R, then Rv>>>Rm
So, True value R = Rm(1+Rm/Rv)
Thus from the above equation it is clear that the measured value of Resistance is smaller than the true value.
Relative Error = (Rm-R)/R
= -R/Rv
Therefore, it is clear from the expression of Relative Error that, error in measurement will be low if the value of Resistance under measurement is very less as compared to the internal Resistance of
This is the reason; this method is used for the Contact Resistance Measurement. As the value of Contact Resistance is of the order of 20 micro Ohm which is very less as compared to the internal
Resistance of Voltmeter.
The Voltmeter Ammeter Method for Cases1 and Case2 are simple method but it is not accurate method. The error in the value of Resistance depends on the accuracy of Ammeter as well as Voltmeter. If the
accuracy of both the instrument are supposed 0.5% then when both the instrument read near full scale, the error in measurement of Resistance may vary from 0 to 1% while if both the instrument read
near half scale then error may double and so on.
However this method is very useful where high accuracy is not required. The suitability of Case1 or Case2 depends on the value of Resistance to be measured. The division point between the two methods
is at the Resistance for which both the method give same Relative Error.
So, Ra/R = R/Rv
For the Resistance greater than the value given above,Case1 is used while for the value of Resistance lower than R given above Case2 is used.
Check this Book for Electrical Measurement and Instrumentation. It is really awesome and concepts are dealt with very nicely.
2 thoughts on “Voltmeter Ammeter Method for Measurement of Resistance”
1. Give some applicarion for the voltmeter and ammeter method….and give the difference between the ammeter voltmeter method and voltmeter ammater method…
2. Thanks, that was really useful.
Leave a Comment | {"url":"https://electricalbaba.com/voltmeter-ammeter-method-for-measurement-of-resistance/","timestamp":"2024-11-03T22:40:16Z","content_type":"text/html","content_length":"76118","record_id":"<urn:uuid:cd07c563-2026-4342-b5c7-b5f0890095db>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00110.warc.gz"} |
SciPost Submission
SciPost Submission Page
Charge order and antiferromagnetism in twisted bilayer graphene from the variational cluster approximation
by B. Pahlevanzadeh, P. Sahebsara, D. Sénéchal
This Submission thread is now published as
Submission summary
Authors (as registered SciPost users): Peyman Sahebsara · David Sénéchal
Submission information
Preprint Link: scipost_202112_00030v2 (pdf)
Date accepted: 2022-06-02
Date submitted: 2022-04-22 00:14
Submitted by: Sénéchal, David
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties: • Condensed Matter Physics - Computational
Approach: Computational
We study the possibility of charge order at quarter filling and antiferromagnetism at half-filling in a tight-binding model of magic angle twisted bilayer graphene. We build on the model proposed by
Kang and Vafek, relevant to a twist angle of $1.30^\circ$, and add on-site and extended density-density interactions. Applying the variational cluster approximation with an exact-diagonalization
impurity solver, we find that the system is indeed a correlated (Mott) insulator at fillings $\frac14$, $\frac12$ and $\frac34$. At quarter filling, we check that the most probable charge orders do
not arise, for all values of the interaction tested. At half-filling, antiferromagnetism only arises if the local repulsion $U$ is sufficiently large compared to the extended interactions, beyond
what is expected from the simplest model of extended interactions.
Author comments upon resubmission
The referees all bring excellent points and suggestions, some of them overlapping. In the space below we quote the various change requests made by the referees, with our response immediately below
(following the RESPONSE keyword). Reference numbers between brackets [...] correspond to references at the end of the new version of the paper (references cited in the referee reports have been
renumbered to take this into account).
Changes demanded by Report # 3
1) Update the manuscript by taking into account remarks of section “Weaknesses”(detailed below):
A) This topic has been the subject of many publications since its appearance in 2018. I understand that the authors can not quote all the articles that have been published on it. However, a state of
the art ismissing and the comparison of the results with the recent articles dealing with the subject seems necessary. B) The results need to be a little more detailed. C) These results are obtained
with a particular tight binding model developed for systems without interaction. It is not at all obvious that this model is valid with interactions. Of course this is often the case,which is why it
is important to discuss the validity of the model used.
RESPONSE Various changes to the manuscript and many responses below address this remark. The literature on the treatment of interactions in TBG has been better cited.
2) The authors have recently published an article in SciPost Physics with the same model (Ref. [24] in the present manuscript). The 2 articles are not redundant. However, the authors should justify
explicitly the need for a new article and explain the overall coherence of their work based on the model proposed by Kang and Vafek (Ref. [1]).
RESPONSE In the introduction, We augmented the precise justification for this paper, compared to Ref.[24] : I boils down to the need for a larger cluster, in order to have a more dynamical (less
mean-field) contributions from the extended interactions within the cluster in order to study the normal phase. This in turns prevents us from using the method used in Ref.[24] (CDMFT) and to use
instead the variational cluster approximation.
3) The figures 1 and table 1 seem to be exactly the same as the figure 1 and table 1 of the Ref. [24], is it necessary to show them again ?
RESPONSE Fig. 1 and Table 1 are indeed borrowed from our previous work on SciPost and are reproduced here to facilitate reading. This is stated in the caption. We felt that an in electronic medium
such as SciPost this would not incurr extra cost but would benefit the reader. We can replace these figures by mere references is the Editors prefer it that way.
4) In this work, the authors use a 12-site cluster containing 3 unit cells. Is it possible to justify this choice?
RESPONSE The 12 site cluster is the largest one we can treat that has the symmetry of the model. It allows us to treat a fair fraction of the extended interactions within the cluster (and thus
capture the dynamical correlations) and at the same time all sites on it are equivalent by symmetry, which simplifies the Hartree approximation for the inter-cluster interactions. This comment has
been added to the introduction.
5) The strong-coupling limit is presented in section 2.1. Although this limit is interesting for itself, I do not think it is applicable to the case of magic-angle twisted bilayer graphene. Indeed in
the strong coupling limit, a 4 bands model is not sufficient because the interactions will have also a strong effect on many more bands.
RESPONSE The referee is correct. We use this limit not as applying to TBG itself, but as a useful prelude to the interacting model we use, as a guide for its solution. We added a comment to this
effect in the manuscript. The strong-coupling limit has been covered in a few references [23, 13, 10], albeit not in the way we do, but I believe in the same spirit.
6) Page 10, it is written: “This Mott transition is essentially caused by extended interactions”. The authors should elaborate a bit more on this point and complete it.
RESPONSE We have added a few sentences to elaborate on this point.
7) Section 5, antiferromagnetism is shown at half filling for not too strong interaction. Can the authors specify the spatial magnetization state? Is the antiferromagnetism found for the interlayer
order, the interlayer order or both?
RESPONSE It is Néel antiferromagnetism, sublattice based and intra-layer only. Eq. (28) states it thus, but we agree this is not clear enough and we have added sentences to clarify. We have also
added the possibility of both inter-layer and intra-layer antiferromagnetism. It turns out that this makes no difference, owing to the small value of the inter-layer hopping.
Changes demanded by Report # 2
1) A proper comparison to the existing literature on TBG both from theory and experiment is missing. In particular, the results need to be compared to other theory papers that investigate the
importance of non-local interactions for the ground state properties at half- and quarter filling. Even more studies exist for the Hubbard model with on-site interactions applied to TBG. By
mean-field-decoupling the non-local interaction terms, the authors could introduce an effective U to compare to those papers, too.
RESPONSE The literature on the treatment of interactions in TBG has been better cited (this is also in the introduction). Also, comparing critical interaction values with other works is a challenge,
because of the differences in the models used and/or twist angle. In Ref. [12], the critical U for the Mott insulator at quarter filling is 14.7t. But their non-interacting model is quite different
(2 bands, with nearest-neighbor hopping t = 2meV only), as well as their method of solution (slave bosons). This puts their critical U at about 29 meV, much larger than our critical U of 1.5-2 meV!
They do state, however, that extended interactions, which they do not take into account, would lower that value. Critical interaction strengths are also discussed in [14], but in the context of a
microscopic model, not an effective model, making any comparison near impossible (the energy scales are in eV, not meV !)
2) One of the issues of cluster techniques is always the analysis of finize-size effects. Since the 12-site cluster is close to the limit of numerically exploitable cluster sizes, a full finite-size
scaling is not feasible. Still, the comparison to at least one additional cluster size/geometry would be helpful. One such candidate could be a supercluster of 8-site clusters (4 sites x 2 layers),
which was already employed within VCA in similar contexts.
RESPONSE Performing the same computation on a smaller cluster such as the 8-site cluster suggested by the referee is in fact more difficult than on the 12-site cluster, because the 8-site cluster is
based on the 4-site star-shape cluster that contains a center site that is not equivalent to the edge sites. Thus additional care must be taken in the Hartree approximation: More mean-field terms
must be added and the simple charge-density wave patterns studied on the 12-site cluster correpond to complicated mixtures of inter-cluster Hartree fields and intra-cluster Weiss fields. This is why
we do not carry out these computations.
3) Another point concerns one of the strengths of VCA, which is not used to full capacity here, namely the possibility to check for the competition of different symmetry-breaking fields on equal
footing. For instance, at quarter filling it would be important to check for breaking of spin- and charge-order, as it has been discussed in literature for TBG at different magic angles.
RESPONSE Some authors expect a ferromagnetic state at quarter or three-quarter filling (e.g. Ref [15]). Ferromagnetism has been likely detected in TBG with a twist angle of about 1.20 degree at 3/4
filling [38]. It is difficult to use VCA to look for ferromagnetism at 1/4 filling, on top of the insulating state found. We explain why in the revised manuscript. However, an analysis of the
low-lying states of the cluster shows that it is nearly ferromagnetic. In addition, a state that is ferromagnetic within layers and antiferromagnetic between layers was not found with VCA (the
minimum of the Potthoff functional is at zero). A paragraph and a figure to that effect were added to the manuscript.
4) When explaining the Hartree decoupling of the inter-cluster interactions, the authors cite Refs. 16 & 18. However, a reference to the first paper that introduced this type of mean-field decoupling
in context of VCA is missing and needs to be added: PRB 70, 235107 (2004).
RESPONSE An obvious omission, for which we apologize. We added this reference [36].
5) The authors explain how to determine the mean-fields in the dynamical Hartree approximation. In the present case, is there a specific reason why the authors decided to choose the variational
determination of these fields over a self-consistent determination?
RESPONSE In ordinary mean-field theory, the mean-field can be determined either by minimizing the free energy or by applying self-consistency; the result is the same. In the dynamical Hartree
approximation, this is not obviously the case. We feel the variational approach to be superior because of its presumed stability. Self-consistent procedure may sometimes diverge even if a solution
exists. In the case of the charge order at quarter filling, we checked that the self-consistent approach also converged to zero (no charge order) and added a remark to that effect in the manuscript.
6) A central question for the applicability of the studied model to TBG concerns the values and structure of the interaction terms. The choice of the interactions (e.g. the relations (5) and the
considered values) needs to be discussed properly. In their previous study, Ref. 7, the authors devoted a small paragraph to explaining their choice of values of U. Here, such an analysis is even
more important. In particular, it should be discussed in how far the considered interactions V0-V3 agree with ab initio calculations of the screened interactions (cRPA, e.g. Refs. cited in Ref.7, or
self-consistent atomistic Hartree theory, PRB 103, 195127 (2021)).
RESPONSE In Fig. 10 of Ref. [29], the ratios of 1st and 2nd neighbor interactions to the local interaction are 12/28 and 9/28, instead of 2/3 and 1/3. This is in close agreement for the 2nd neighbor
interactions, but 33% off for the first-neighbor interactions. In principle this depends on the twist angle (see also Table II of [29]). We commented on this in the revised version of the paper.
Klebl et al, [10.1103/PhysRevB.103.195127] deals with a microscopic model, not a moiré tight-binding effective model, so the comparison is difficult.
7) How robust is the absence of AF order at half-filling with respect to the choice of U when deviating from the case a=1? Is the 'critical' value of a changing with U?
RESPONSE A plot for a=1 and various values of U has been added. The critical value of 'a' may depend on U, but antiferromagnetism at half filling is absent for the range of U studied in that plot.
8) The authors studied AF order at half-filling, but other magnetic orders are discussed in context of TBG for different fillings. Can the authors exclude other magnetic orders (e.g. FM order or
on-site AF ordering between the two layers) at half- or quarter filling within their VCA setup?
RESPONSE In VCA, like in mean-field theory, one cannot exclude orders that are not explicity probed. At half-filling, there is no significant difference between antiferromagnetism and ferromagnetism
accross computational layers, owing to the small value of the interlayer hopping term (in both case we probe Néel antiferromagnetism within each layer). See response to comment 3.7 above. We have
also made remarks about ferromagnetism at quarter filling. See response to comment 2.3 above.
9) Throughout the manuscript it should be specified in which units U is measured (in meV and not in units of the largest hopping?).
RESPONSE All parameters are in meV. This is not clear enough and the manuscript has been modified accordingly, especially in the figures.
10) The matrix of differences between the one-body terms of the lattice and the reference system is called V, see e.g. eq(17). Although being standard nomenclature in context of VCA, this naming is
slightly inept here since it can be easily confused with the interaction terms V, see e.g. eq(18).
RESPONSE We agree. We have changed the notation for the inter-cluster one-body matrix.
Changes demanded by Report # 1
1) Although there are many publications about correlation effects in twisted bilayer Graphene in experiment and theory, the authors cite only 18 publications. This must be expanded. A short and not
extensive list would be PhysRevX.8.031089 PhysRevB.98.081102 J. Phys. Commun. 3 035024 PhysRevLett.124.097601 PhysRevB.102.035136 PhysRevB.102.045107 PhysRevB.102.085109 SciPostPhys.11.4.083
As several of these papers also discuss Mott states and antiferromagnetic states, the current results should be compared to these previous results.
RESPONSE The literature on the treatment of interactions in TBG has been better cited (this is also in the introduction).
2) During the explanation of the model, I am missing the spin degrees of freedom, which suddenly appear in equation 4. This should be expanded. Furthermore, the condition on the spin degrees should
be stated in the strong-coupling section and the calculations for quarter-filling.
RESPONSE The presence of spin is insisted upon from the beginning of Sect. 2 in the new version of the manuscript.
3) Maybe I misunderstood something in the calculation of the strong-coupling limit. When the largest eigenvalue corresponds to the uniform state, then the lowest eigenvalues correspond to nonuniform
(charge-ordered) states. Why is the ground state in the strong-coupling limit (when neglecting the kinetic energy) not long-range ordered?
RESPONSE Good question! In fact the largest eigenvalue is related to the total charge, which is conserved and cannot be changed unless the chemical potential is changed. This uniform mode being
constrained by particle number conservation, it can only serve as a background with respect to which the other (non uniform) charge modes are playing out. The ground state would be a charge density
wave if one of the modes had a negative eigenvalue. For instance, refereing to Eqs (14-16), in the eventuality where $V_0=V_2=V_3=0$ and $V_1$ is nonzero, there would be an inter-orbital charge
density wave at $q=0$ if $V1 > U/2$, as the eigenvalue $\lambda^{(2)}$ would then turn negative. But, as stated in the paper, under the conditions (5), this is not possible. We have added additional
explanations in the last paragraph of Sect. 2 to clarity this.
4) In the section about VCA, the details about how the finite cluster has been solved are missing. This should be expanded.
RESPONSE A few sentences have been added about the impurity solver used.
5) The authors analyze at quarter-filling only charge order and at half-filling only antiferromagnetism. Why is there no analysis of combinations of charge and spin order? Furthermore, other
long-range spin orders other than antiferromagnetism should be discussed. If possible, the authors should include combined spin and charge order calculations in this manuscript.
RESPONSE See our response to comments 2.3 and 2.8 above. Regarding orders mixing spin and charge, a complete analysis of this question would go beyond the scope of this paper. The possibility of
stripe magnetic order is interesting, but is naturally expected outside of the special fillings studied here.
6) Before equation 28, the authors write D=-2. Do they mean D=-2U? Furthermore, it would be better to explicitly name the long-range order as m3 and m4.
RESPONSE Indeed, $D=-2U$. Thank you for picking this up.
7) (Minor) This might be only my feeling, but my first thought when reading equation 18 in a section called DYNAMICAL Hartree approximation was that t is the time. However, t seems to be the hopping.
It would be good to clarify this.
RESPONSE Indeed. It is called dynamical because it is coupled to a variational principle involving the Green function. This has been clarified in the revised version.
List of changes
The list of changes appears together with the response to the referee reports. A latexdiff file is also provided immediately after the new version of the manuscript (blue passages are additions, red
passages were removed).
Published as SciPost Phys. 13, 040 (2022)
Reports on this Submission
The authors have submitted a new version with a significant number of additions and updates. They have satisfactorily answered my questions and, I believe, those of the other referees. Following my
first report, I therefore consider that this manuscript deserves to be published in SciPost Physics.
Report #2 by Anonymous (Referee 5) on 2022-5-21 (Invited Report)
• Cite as: Anonymous, Report on arXiv:scipost_202112_00030v2, delivered 2022-05-21, doi: 10.21468/SciPost.Report.5109
The authors replied thoroughly to all remarks and answered all pertinent questions of the referees. The current version of the manuscript is much improved, in particular since it now includes a
discussion of possible FM order and the problems related to the discontinuous behavior of the self-energy functional.
I therefore recommend the publication of this article in SciPost Physics.
Requested changes
Below, I list a few typos that the authors might want to correct for the final version.
- p.2, 1st line of section 2: "[...] tight-binding Hamiltonians proposed [...]"
- p.13, caption of Fig.7 : "Right panel: same as left panel, but [...]"
- p.14 : "[...] absence of antiferromagnetism extends to [...]"
- References: Please check for upper-case letters in the titles ("Mott", "Coulomb" etc.) and use a consistent nomenclature for the journals (e.g. either "Phys. Rev. Lett." or "Physical Review
Letters"); typo in Ref.24 ("[...] mean-field theory").
The authors have replied to all questions of the referees.
Although I think that the manuscript would have benefitted from a finite-size scaling, as one of the referees proposed, I accept the author's answer.
I recommend this manuscript for publication in SciPost Physics. | {"url":"https://www.scipost.org/submissions/scipost_202112_00030v2/","timestamp":"2024-11-13T14:04:16Z","content_type":"text/html","content_length":"54448","record_id":"<urn:uuid:2ec2668d-5de3-41f5-92ee-803afa4f6348>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00578.warc.gz"} |
Outcomes and Teaching Strategies
Learning 5th Grade Math: Outcomes and Teaching Strategies
Most states have adopted the Common Core State Standards for math and English. The 5th grade math standards are the outcomes presented here. However, as common as the standards may be, teachers are
generally free to determine their own teaching strategies. Some of the possibilities are shown here.
5th Grade Math Instruction
Outcomes for 5th grade math are divided into six logical areas in the Common Core State Standards, with two or more outcomes listed for each area. An outcome may be defined as a student's measurable
knowledge or skill in a given subject. Here are the areas, along with some desired outcomes:
Algebra - Logic and Procedures
1. Use parentheses and brackets in algebraic sentences and know how to solve problems that have these marks.
2. When given two rules, write an algebraic sentence, noticing patterns and comparisons and explaining them.
Base Ten Procedures
1. Use and explain the system of place values.
2. Multiply whole numbers with multiple digits, divide 4-digit dividends by 2-digit divisors and work with decimals in addition, subtraction, multiplication and division.
1. Add and subtract fractions with different denominators by using equivalent fractions.
2. Multiply and divide fractions and relate that to multiplication and division of whole numbers.
Measurements and Interpretation of Data
1. Convert measurements within a single system to measurement, such as inches to yards.
2. Use given fractional measurements to make a line plot. Conversely, solve problems using a line plot and its information.
3. Understand what volume is and how to figure the volume of cubes and right rectangular prisms using addition and multiplication.
1. Solve problems by graphing points on a coordinate plane.
2. Categorize 2-dimensional figures based on their number of sides and types of angles.
Math Skills and Problem-Solving Practices
1. Develop capacity for logical, critical, abstract and quantitative thinking.
2. Learn perseverance and precision in solving problems.
3. Know how to model what is learned, strategically using suitable tools.
4. Watch for structure in problems, and be able to use it, along with regular procedures, in problem solving and mathematical reasoning.
Teaching Strategies
When teaching math in any grade, it can be beneficial to provide frequent review to give students a chance to internalize the material. In addition, consider balancing individual work with
cooperative group work to build a learning environment in the classroom. Lecture, discussions, flash cards and worksheets may be enhanced by the additional use of the following:
• Manipulative objects (such as pattern blocks or tiles)
• Pictures and videos
• Stories and poems
• Songs and chants
• Games
When it comes to games, as the teacher you can use games that are played either by individual students, groups of students or the whole class. These games can include paper and pencil games such as
puzzles, board games, interactive computer games or physically active math games that involve students using math skills along with gross motor skills such as kicking, running and throwing.
Other Articles You May Be Interested In
MIND Games Lead to Math Gains
Imagine a math teaching tool so effective that it need only be employed twice per week for less than an hour to result in huge proficiency gains. Impossible, you say? Not so...and MIND Research
Institute has the virtual penguin to prove it.
Should Math Be a Main Focus in Kindergarten?
Should kindergartners put away the building blocks and open the math books? According to recent research, earlier is better when it comes to learning mathematical concepts. But that could put
undue pressure on kids, parents and even teachers.
We Found 7 Tutors You Might Be Interested In
Huntington Learning
• What Huntington Learning offers:
• Online and in-center tutoring
• One on one tutoring
• Every Huntington tutor is certified and trained extensively on the most effective teaching methods
• What K12 offers:
• Online tutoring
• Has a strong and effective partnership with public and private schools
• AdvancED-accredited corporation meeting the highest standards of educational management
Kaplan Kids
• What Kaplan Kids offers:
• Online tutoring
• Customized learning plans
• Real-Time Progress Reports track your child's progress
• What Kumon offers:
• In-center tutoring
• Individualized programs for your child
• Helps your child develop the skills and study habits needed to improve their academic performance
Sylvan Learning
• What Sylvan Learning offers:
• Online and in-center tutoring
• Sylvan tutors are certified teachers who provide personalized instruction
• Regular assessment and progress reports
In-Home, In-Center and Online
Tutor Doctor
• What Tutor Doctor offers:
• In-Home tutoring
• One on one attention by the tutor
• Develops personlized programs by working with your child's existing homework
• What TutorVista offers:
• Online tutoring
• Student works one-on-one with a professional tutor
• Using the virtual whiteboard workspace to share problems, solutions and explanations | {"url":"http://mathandreadinghelp.org/learning_5th_grade_math.html","timestamp":"2024-11-01T21:18:02Z","content_type":"application/xhtml+xml","content_length":"27275","record_id":"<urn:uuid:1263357b-821a-4816-bcd4-5cbecce8e31e>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00416.warc.gz"} |
Fall 2024 Courses
For each of the courses, I plan to placce syllabi on Brightspace. Please note that information here is subject to change.
Mathematics 403
Class number 9529. MW 10:10-11:30 AM in Massry B014. The textbook is Actuarial Mathematics for Life Contigent Risks, 3rd ed., by Dickson, Hardy, and Waters.
Mathematics 467/554
Class numbers 1732 (for 467) and 1737 (for 554). MW 3:00-4:20 PM in ES 140. The textbook is Introduction to Mathematical Statistics, 8th ed., by Hogg, McKean, and Craig.
Mathematics 469
Class number 9433. This is a 1-credit asynchronous online course to help prepare students for Actuarial Exam P. No textbook is required.
Questions: Send me e-mail. The e-mail address is mhildebrand AT albany.edu (where you should replace AT with @). | {"url":"https://www.albany.edu/~martinhi/fall2024courses.html","timestamp":"2024-11-06T07:50:11Z","content_type":"text/html","content_length":"1249","record_id":"<urn:uuid:df189a0d-4291-4c4f-a1d1-f3dc8e284212>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00508.warc.gz"} |
Curve selection for predicting breast cancer metastasis from prospective gene expression in blood
In this article we use gene expression measurements from blood samples to predict breast cancer metastasis. We compare several predictive models and propose a biologically motivated variable
selection scheme. Curve selection is based on the assumption that gene expression intensity as a function of time should diverge between cases and controls: there should be a larger difference
between case and control closer to diagnosis than years before. We obtain better predictions and more stable predictive signatures by using curve selection and show some evidence that metastasis can
be detected in blood samples.
1 Introduction
About one in ten women will at some point develop breast cancer. About 25% have an aggressive cancer at the time of diagnosis, with spread to axillary lymph nodes.^1 The tool to detect this spread is
a surgical procedure known as a sentinel node biopsy. According to the Norwegian Cancer Registry, out of 1000 women who attend all ten screenings they’re normally invited to, 200 will experience at
least one false positive. Out of these 200, 40 will have to do a biopsy. This biopsy is an invasive procedure. If we could use blood samples to predict metastasis, we could reduce the number of
unnecessary biopsies. Several recent articles develop this idea of liquid biopsies [3]. Different relevant signals appear in blood for already-diagnosed breast cancer. For instance: circulating tumor
cells [10], serum microRNA [22], or tumor-educated platelets [1]. A recent review in Cancer and Metastasis Reviews [16] lists liquid biopsies and large data analysis tools as important challenges in
metastatic breast cancer research.
Norwegian Women and Cancer (NOWAC) [4] is a prospective study containing blood samples. Prospective blood samples provide gene expression trajectories over time. The hope is that such trajectories
diverge between cases and controls as the tumor grows. Lund et al. [18] show a significant difference in trajectories for groups of genes. In this paper we aim to show that we can go one step further
and find information even about sentinel node status.
The main difficulty here is high dimensionality. There are about ten to fifteen thousand potential predictor genes. It’s very easy to over-fit such data. The number of observations needed to fill
some region of p-dimensional space grows more than exponentially fast with p. But there are often lower-dimensional structures in the data. For instance, we expect genes to work together in pathways.
We don’t expect all genes to be relevant in all processes. The analysis of high-dimensional data is an active research area of statistics and machine learning [8]. Usually we try to discover these
low-dimensional structures by projection approaches like PLS-methods [17], or by variable selection.
Variable selection approaches highlight the most discriminative variables, which has a straight-forward interpretation. There is a variety of variable selection schemes, for a review of which see [7
]. If we are working with gene expression, we can rank genes for example based on genewise t-tests for differential expression. The top k of these provide a lower-dimensional space where we can apply
any classifier. Haury et al [14] show that such a ranking coupled with a simple classification method compares favorably to more sophisticated methods. There are also integrated methods that do
simultaneous selection and statistical learning. A popular choice is the penalized maximum likelihood family of generalized linear models. They optimize the likelihood plus a penalty term that
encourages sparse solutions. These include the popular lasso and elastic net methods [13].
Regardless of variable selection method, the chosen predictor set can be unstable. Ein-Dor et al [6] examined the effect of using different subsets of the same data to choose a predictive gene set.
They show that predictor gene sets depend strongly on the subset of patients used for analysis. So stability criteria is a complimentary feature to predictive power. These can be integrated in the
model selection, as the stability selection for penalized regression [19]. Or they can be used as an a posteriori evaluation criterion [14].
In this paper we compare several learning methods to predict metastasis in breast cancer. We use blood gene expression data from NOWAC taken no more than one year before diagnosis. If we take all
genes into account in a desultory manner, we do no better than random guess. In fact, we tend to do worse than random if we don’t account for stratification in the data. Hence we propose variable
selection based on a gene’s prediagnostic trajectory. We call this biologically motivated approach curve selection. Curve selection improves both predictive power and signature stability. We see some
evidence that there is a signal of sentinel node status already present before diagnosis. This gives some hope for the pursuit of liquid sentinel node biopsies as a cheaper and less invasive option
to surgery.
2 Material and methods
2.1 Data
Our dataset is 88 pairs of breast cancer cases and age-matched controls from the NOWAC Post-genome cohort. The cohort profile by Dumeaux et al. describes the details [4]. In brief, women were
recruited by random draw from the Central Person Register by Statistics Norway. They were invited to fill out a questionnaire and provide a blood sample. The Cancer Registry of Norway provided
followup information on cancer diagnoses and lymph node status. The women received a diagnosis at most one year after providing a blood sample.
The NTNU genomics core facility processed the blood samples on Illumina microarray chips of either the HumanWG-6 v. 3 or the HumanHT-12 v. 4 type. A case and its matching control are together for the
entire processing pipeline. Eg. they lie next to one another on chip, and so on. Afterwards we checked the data for technical outliers. These are observations that get distorted in the lab. We have
removed low-signal probes, ie probes that lie below a certain detection threshold. We quantile-normalize the data before analysis. The preprocessing for these particular data is described in detail
in Lund et al. [18]. Günther, Holden, and Holden’s report from the Norwegian Computing Centre [11] provides more technical details.
In practice we have a 88 × 12404 gene expression fold change matrix X on the log[2] scale. For each gene, g, and each case–control pair, i, we have the measurement log[2] x[ig] − log[2] x′[ig]. Here
x[ig] is the g expression level for the ith case, and x′[ig] is the corresponding control. For each case we have the number of days between the blood sample and the cancer diagnosis. Call this the
followup time. Note that although there is a time component to this, we don’t have time series data. Each observation is a different woman. We also have a detection stratum variable. This takes one
of the following values:
• Screening denotes a cancer that was detected in the regular screening program.
• Interval denotes a cancer that was detected between two screening sessions. The interval between screenings is two years.
• Clinical denotes a cancer that was detected outside of the screening program. These women either never took part in the screening program, or did not attended a screening in at least two years.
Finally, our response variable, metastasis (∈ {0, 1}), indicates whether a sentinel node biopsy showed evidence of metastasis.
Table 1 shows the incidence of metastasis in the different strata. We see a certain heterogeneity between strata.
2.2 Predictive models
2.2.1 Penalized regression
We fit penalized logistic regression models. These models take the form where ε is iid mean-zero noise, and c(·) < t is a constraint on the magnitude c(·) of the coefficients β[i]. The parameter t
controls how severe this constraint is.
We investigate ridge penalty, , lasso penalty, , and the elastic net [23] penalty, which is a linear combination of ridge and lasso, , with α ∈ [0, 1]. Ridge and lasso penalties are special cases of
the elastic net. Lasso is well-known to encourage sparse solutions, where many coefficients are set to exactly zero. It’s expected to be the better model if there are few relevant predictors. Ridge
on the other hand, never shrinks coefficients to exactly zero and as such lets all predictors contribute to some extent. Hence ridge can be expected to do better of most predictors are relevant.
Logistic regression allows us to correct for strata by adding interactions between gene expression and stratum. In the case of genome-wide association studies, it’s been shown to be one of the best
methods to take stratification into account [2].
2.2.2 Stability selection
The set of predictors picked by regularization are often unstable with correlated predictors. It is also hard to choose the correct amount of regularization, the result is often over-regularized
models. Stability selection [19] is a method to deal with this. Basically: i) make a bootstrap estimate of the probability of a sample’s being chosen by your regularized method, ii) define some
probability threshold above which you you use all predictors, iii) fit your favorite model using these predictors.
We examine stability selection for both the lasso and the elastic net. Instead of setting a threshold, we simply pick the top 50 predictors in each case (this because we actually don’t see much
stability for the lasso). We then use Bayesian logistic regression with a weakly informative prior as described by Gelman et al. [9].
2.2.3 Nearest centroids
We also consider the purely geometrical algorithm of nearest centroids (NC). A class C[i] is represented by it’s centroid point c[i] in p-dimensional space, eg. the class mean c[i] = μ(x|x ∈ C[i]).
Then p(C[i]|x) ∝ d(x, c[i]), which is to say the probability x belonging to C[i] is proportional to the distance between x and the centroid c[i]. We normalize all features for this model. Being a
distance-based classification algorithm, NC shouldn’t be expected to do very well in thousands of dimensions. Hence we use the top 50 genes ranked by simple genewise t-tests.
2.2.4 Stratification
We account for stratification in the regression models by adding an interaction between all genes and the stratum variable. In simplified notation this is the model logit(spread) = β(expression +
expression × stratum). In practice this leads to a three times as large design matrix of roughly 88 by 36 000 entries. In the nearest centroids model we include the stratum indicators as extra
2.3 Curve selection
We would like bring some biology into this model and to take a cancer’s potential evolution over time into account in our modeling. Our idea is that, for the relevant genes, cases and controls either
have constant differential expression over time, or that they diverge in expression levels over time as a cancer grows and spreads. To detect this we propose to do genewise regression of fold change,
e, on time, t, and metastasis, M ∈ {0, 1}, in the following model: where ϵ is iid noise. For models with stratification we add the stratum variable as another interaction: with S the stratum
variable. For a ranking score on the genes use the largest Wald statistic of any coefficient corresponding to a term with the metastasis variable M as a factor. Ie in equation 1, this is β[2] or β
[3]. In equation 2 it’s one of β[2], β[3], β[6], or β[7]. This ranking restricts the predictive models to a smaller predictor space. The ranking should favor genes for which metastasized cases
diverge from their controls as time progresses. We call this filtering method curve selection.
Figure 1 shows the curve selection model. The top row contains the top three genes in our data as ranked by curve selection, the bottom row contains three random genes for comparison.
2.3.1 Application to models
We use curve selection to filter out uninformative genes with all the models above. In all cases but one we do curve selection as a preselection step to narrow down our predictor space to the 200
best genes. We then apply the models in the usual way. The exception is nearest centroids, where we replace the t-test ranking with curve selection to obtain the 50 genes to compute centroids for.
When we account for stratification in the predictive models, we also account for it in the curve selection as in equation 2.
2.4 Cross-validation
We estimate performance generalization by repeated cross-validation. We have found that simple cross-validation in our setting produces point estimates and confidence bands variable enough to not be
of any use. A possible fix to this is to use the bootstrap [5], but there are situations where the bootstrap estimates are biased [15, 20]. Repeated cross-validation puts cross-validation on an equal
footing with bootstrapping in terms of computation. It also has comparatively low bias and variance in the 2009 study of Kim [15]. The process is simply to do regular cross-validation, compute the
average error statistic, , and repeat as many times as feasible to get a set of error estimates . We can use these to construct quantile intervals in the same way that we would have with the
bootstrap. We do 1500 cross-validations for each experiment.
We do any parameter tuning by cross-validation nested in the repeated procedure. This is not repeated, but simply done once per model fitting.
2.5 Metrics: AUC and stability
We’re doing two-class prediction: metastasis vs. no metastasis. We measure the predictive performance of our models with the area under the receiver operating characteristic curve (AUC) [13]. AUC
measures the probability that two randomly chosen samples are ranked correctly, ie a positive sample has higher predicted probability of being positive than a negative sample. It is an equivalent
statistic to the Mann-Whitney-Wilcoxon U [12]. Hence a simpler interpretation of AUC is that it’s the probability of ranking a randomly chosen metastatic sample higher than a randomly chosen
non-metastatic sample.
All the models we evaluate do some sort of feature selection to find the set of genes that best predict the outcome. The question is whether the predictors selected by each model change substantially
on different data sets. We measure gene set stability as Haury et al. [14]. We use the Jaccard index, to measure stability.
In cross-validation, the degree of overlap of observations in two of k folds is . We use k = 5, which leads to 0.75 overlap. To get as many stability measures AUC measures, ie one statistic per fold,
we calculate stability between fold one and fold two, fold two and fold three, and so on, wrapping around when we come to the kth fold.
3 Results
3.1 Danger of missing stratum
In figure 2 we see that fitting the models without regard for stratification can lead to worse-than-random predictive performance. Stratifying ameliorates this, and predictions from the stratified
elastic net stability selection look promising.
We see that detection method is an important factor in predicting node status, or at the very least calibrating predictions so that they aren’t outright wrong. It makes sense for the stratification
to be important. The cancers are likely to have different character in different strata. You can expect the clinical cancers to be older, as they are large enough that the women suspected something
on their own. Hence they have had a lot of time to metastasize. The screening and interval cancers haven’t had much time to grow. The interval cancers are likely to be more aggressive as they were
not detectable at the last screening, which was at most two years ago.
The AUC < .5 problem looks a lot like a Simpson’s paradox [21], we suspect that there is some contradictory information between strata. A toy example of such an effect:
• Let x[i] = μ[i] + e[i], where e[i] is iid, mean-zero noise.
• Draw an outcome y[i] and a stratum, s[i], both ∈ {0, 1}
• Let μ[i] = 1 if s[i] = y[i], 0 otherwise
In this example, ignoring strata there is basically no information. Whether the outcome is 0 or 1 the predictor is distributed as a mixture two normals with modes at 0 and 1. Taking strata into
account, you have in stratum 1: E[X|y = 0] < E[X|y = 1] In stratum 0, the opposite: E[X|y = 0] > E[X|y = 1] If the proportions of stratum 0 and stratum 1 in training and test data are sufficiently
different and the stratum variable is missing, the estimated effect is opposite of what’s happening in test data.
3.2 Stratification vs. curve filtering
Accounting for stratification fixes the AUC < .5 problem, and for the stratified elastic net model yields better-than-random predictions. But it does require us to actually know the detection method
of a cancer at the time of modeling. Such a model would not work in for eg a screening setting.
In figure 3 we see that the use of curve selection avoids the need for explicitly modeling detection method. This suggests that the followup variable contains some compensating information and that
metastatic cancers behave differently to non-metastatic ones over time. This is not something that simple gene-wise t-tests pick up, as we can see by comparing the performance of nearest centroids
between figure 2 and figure 3. To confirm that it’s not simple differential expression that’s picked up by curve selection, we have investigated how often a gene gets selected based on the time
interaction. If it’s always the constant terms that contribute to selection, t-tests are good enough. By choosing gene sets of size 50 for 1500 bootstrap samples of our data, we get a distribution
over selection frequency for the two candidate coefficients in the non-stratified model:
Curve selection improves predictions for all models, which is not the case for stratification alone. Using both together is in most cases better. But the nearest centroids with curve selection alone
does as well as the best combined model, and does so with less variance.
3.3 Stability
Figure 4 shows that the selected gene sets are quite unstable. In the best case a 75% overlap in data yields a stability of about .2. This means that when we pick a 50 gene signature for the
centroids model twice on mostly overlapping data, we can expect an overlap of ten genes between the two signatures. Interestingly stability selection doesn’t seem to improve neither predictions nor
stability for the lasso penalized models in our data.
4 Conclusion
Curve selection is biologically motivated. We see that using the biology to select likely predictor genes and then fitting a very simple predictive model can outperform very clever, mathematically
motivated models.
By doing curve selection we improve predictions and obtain more stable predictive signatures. As it is the dataset is quite small, so there remains a question of statistical power. There also seems
to be very low signal to noise, something that is probably made worse by the fact that we don’t have repeated measurements for any of the women.
However, there is some promise to these data. Further work is needed, but it does look as there is some predictive signal of breast cancer metastasis in prospective blood samples. It is a small step
toward liquid biopsies for lymph node metastasis. | {"url":"https://www.biorxiv.org/content/10.1101/141325v1.full","timestamp":"2024-11-01T21:07:52Z","content_type":"application/xhtml+xml","content_length":"172878","record_id":"<urn:uuid:8bef3c0b-6a12-466c-b7f7-dd4bf9b50731>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00268.warc.gz"} |
Creation of model outcomes
Table of contents
1. Introduction: >
2. Advanced examples:>
Creation of model outcomes
We have presented the tools for creating the structure of a DGLM model, specifically, we have shown how to define the relationship between the latent vector \(\vec{\theta}_t\) and the linear
predictors \(\vec{\lambda}_t\), along with the temporal dynamic of \(\vec{\theta}_t\). Now we proceed to define the observational model for \(\vec{Y}_t\) and the relationship between \(\vec{\lambda}
_t\) and \(\vec{\eta}_t\), i.e., the highlighted part of the following equations:
\[ \require{color} \begin{aligned} \color{red}{Y_t|\eta_t }&{\color{red}\sim \mathcal{F}\left(\eta_t\right),}\\ {\color{red}g(\eta_t) }&{\color{red}= \lambda_{t}}=F_t'\theta_t,\\ \theta_t &=G_t\
theta_{t-1}+\omega_t,\\ \omega_t &\sim \mathcal{N}_n(h_t,W_t), \end{aligned} \]
In each subsection, we will assume that the linear predictors are already defined, along with all the structure that comes along with them (i.e., we will take for granted the part of the model that
is not highlighted), moreover, we also assume that the user has created the necessary amount of linear predictors for each type of outcome and that those linear predictors were named as \(\lambda_1\)
Currently, we offer support for the following observational distributions:
• Normal distribution with unknown mean and unknown variance (with dynamic predictive structure for both parameters). As a particular case, we also have support for Normal distribution with known
• Bivariate Normal distribution with unknown means, unknown variances and unknown correlation (with dynamic predictive structure for all parameters). As a particular case, we also have support for
Multivariate Normal distribution with known covariance matrix.
• Poisson distribution with unknown rate parameter with dynamic predictive structure.
• Multinomial distribution with an known number of trials, arbitrary number of categories, but unknown event probabilities with dynamic predictive structure for the probability of each category. As
particular cases, we support the Binomial and Bernoulli distributions.
• Gamma distribution with known shape parameter, but unknown mean with dynamic predictive structure.
We are currently working to include several distributions. In particular, the following distributions shall be supported very soon: Dirichlet; Geometric; Negative Binomial; Rayleigh; Pareto;
Asymmetric Laplace with known mean.
Normal case
In some sense, we can think of this as the most basic case, at least in a theoretical point of view, since the Kalman Filter was first developed for this specific scenario (Kalman, 1960). Indeed, if
we have a static observational variance/covariance matrix (even if unknown), we fall within the DLM class, which has an exact analytical solution for the posterior of the latent states. With some
adaptations, one can also have some degree of temporal dynamic for the variance/covariance matrix (see Ameen and Harrison, 1985; West and Harrison, 1997, sec. 10.8). Yet, the kDGLM package goes a
step further, offering the possibility for predictive structure for both the mean and the observational variance/covariance matrix, allowing the inclusion of dynamic regressions, seasonal trends,
autoregressive components, etc., for both parameters.
We will present this case in two contexts: the first, which is a simple implementation of the Kalman Filter and Smoother, deals with data coming from an Normal distribution (possibly multivariate)
with unknown mean and known variance/covariance matrix; the second deals with data coming from a univariate Normal distribution with unknown mean and unknown variance.
Also, at the end of the second subsection, we present an extension to the bivariated Normal distribution with unknown mean and unknown covariance matrix. A study is being conducted to expand this
approach to the \(k\)-variated case, for any arbitrary \(k\).
Normal outcome with known variance
Suppose that we have a sequence of \(k\)-dimensional vectors \(\vec{Y}_t\), such that \(\vec{Y}_t=(Y_{1t},...,Y_{kt})'\). We assume that:
\[ \begin{aligned} \vec{Y}_t|\mu_t,V &\sim \mathcal{N}_k\left(\vec{\mu}_t,V\right),\\ \mu_{it}&=\lambda_{it}, i=1,...,k,\\ \end{aligned} \] where \(\vec{\mu}_t=(\mu_{1,t},...,\mu_{k,t})'\) and \(V\)
is a known symmetric, definite positive \(k\times k\) matrix. Also, for this model, we assume that the link function \(g\) is the identity function.
To create the outcome for this model, we can make use of the Normal function:
Intuitively, the mu argument must be a character vector of size \(k\) containing the names of the linear predictors associated with each \(\mu_{i.}\). The user must also specify one (and only one) of
V, Tau or Sd. If the user provides V, \(V\) is assumed to be that value; if the user provides Tau, \(V\) is assumed to be the inverse of the given matrix (i.e., Tau is the precision matrix); if the
user provides Sd, \(V\) is assumed to be such that the standard deviation of the observations is equal to the main diagonal of Sd and the correlation between observations is assumed the be equal to
the off-diagonal elements of Sd.
The data argument must be a \(T \times k\) matrix containing the values of \(\vec{Y}_t\) for each observation. Notice that each line \(t\) must have the values of all categories in time \(t\) and
each column \(i\) must represent the values of a category \(i\) through time. If a value of the argument data is not available (NA) for a specific time, it is assumed that there was no observation at
that time, thus the update step of the filtering algorithm will be skipped at that time. Note that the evolution step will still be performed, such that the predictive distribution for the missing
data and the updated distribution for the latent states at that time will still be provided.
Next, we present a brief example for the usage of Normal function for a univariate outcome (the multivariate case works similarly). We use some functions described in the previous sections, as well
as some functions that will be presented later on. For now, let us focus on the usage of the Normal function.
level <- polynomial_block(mu = 1, D = 0.95, order = 2)
season <- harmonic_block(mu = 1, period = 12, D = 0.975)
outcome <- Normal(
mu = "mu", V = 6e-3,
data = c(log(AirPassengers))
fitted.model <- fit_model(level, season, outcome)
plot(fitted.model, plot.pkg = "base")
Notice that, since this is the univariate case, the data argument can be a vector.
Univariated Normal outcome with unknown variance
For this type of outcome, we assume that:
\[ \begin{aligned} Y_t|\mu_t,\tau_t &\sim \mathcal{N}\left(\mu_t,\tau_t^{-1}\right),\\ \mu_{t}&=\lambda_{1t},\\ \ln\{\tau_{t}\}&=\lambda_{2t}.\\ \end{aligned} \]
To create an outcome for this model, we also make use of the Normal function:
Just as before, the mu argument must be a character representing the label of the linear predictor associated with \(\mu_t\). The user must also specify one (and only one) of V, Tau or Sd, which must
be a character string representing the label of the associated linear predictor.
Similar to the known variance case, we allow multiple parametrizations of the observational variance. Specifically, if the user provides V, we assume that \(\lambda_{2t}=\ln\{\sigma^2_{t}\}=-\ln\{\
tau_t\}\); if the user provides Sd, we assume that \(\lambda_{2t}=\ln\{\sigma_{t}\}=-\ln\{\tau_t\}/2\); if the user provides Tau, then the default parametrization is used, i.e., \(\lambda_{2t}=\ln\{\
The data argument usually is a \(T \times 1\) matrix containing the values of \(Y_t\) for each observation. In cases where \(\vec{Y}_t\) is univariated, we also accept data as a line vector, in which
case we assume that each coordinate of data represents the observed value at each time. If a value of data is not available (NA) for a specific time, it is assumed that there was no observation at
that time, thus the update step of the filtering algorithm will be skipped at that time. Note that the evolution step will still be performed, such that the predictive distribution for the missing
data and the updated distribution for the latent states at that time will still be provided.
Next, we present a brief example for the usage of this outcome. We use some functions described in the previous sections, as well as some functions that will be presented later on. For now, let us
focus on the usage of the Normal function.
structure <- polynomial_block(mu = 1, D = 0.95) +
polynomial_block(V = 1, D = 0.95)
outcome <- Normal(mu = "mu", V = "V", data = cornWheat$corn.log.return[1:500])
fitted.model <- fit_model(structure, outcome)
plot(fitted.model, plot.pkg = "base")
Currently, we also support models with bivariate Normal outcomes. In this scenario we assume the following model:
\[ \begin{aligned} Y_t|\mu_{t},V_t &\sim \mathcal{N}_2\left(\mu_t,V_t\right),\\ \mu_t&=\begin{bmatrix}\mu_{1,t}\\ \mu_{2t}\end{bmatrix},\\ V_t&=\begin{bmatrix}\tau_{1,t}^{-1} & (\tau_{1,t}\tau_{2,t})
^{-1/2}\rho_t\\ (\tau_{1,t}\tau_{2,t})^{-1/2}\rho_t & \tau_2^{-1}\end{bmatrix},\\ \mu_{i,t}&=\lambda_{i,t}, i=1,2\\ \tau_{i,t}&=\ln\{\lambda_{(i+2),t}\}, i=1,2\\ \rho_{t}&=\tanh\{\lambda_{5,t}\}.\\ \
end{aligned} \]
Notice that \(\rho_t\) represents the (and the covariance) between the series at time \(t\). To guarantee that \(\rho_t \in (-1,1)\), we use the Inverse Fisher transformation (also known as the
hyperbolic tangent function) as link function.
For those models, `mu must be a character vector, similarly to the case where \(V\) is known, and V, Tau and Sd must be a \(2 \times 2\) character matrix. The main diagonal elements are interpreted
as the linear predictors associated with the precisions, variances or standard deviations, depending if the user used Tau, V or Sd, respectively. The off diagonal elements must be equals (one of them
can be NA) and will be interpreted as the linear predictor associated with \(\rho_t\).
Bellow we present an example for the bivariate case:
# Bivariate Normal case
structure <- (polynomial_block(mu = 1, D = 0.95) +
polynomial_block(log.V = 1, D = 0.95)) * 2 +
polynomial_block(atanh.rho = 1, D = 0.95)
outcome <- Normal(
mu = c("mu.1", "mu.2"),
V = matrix(c("log.V.1", "atanh.rho", "atanh.rho", "log.V.2"), 2, 2),
data = cornWheat[1:500, c(4, 5)]
fitted.model <- fit_model(structure, outcome)
Notice that, by the second plot, the correlation between the series (represented by atanh.rho, i.e., the plot shows \(\tanh^{-1}(\rho)\)) is significant and changes over time, making the proposed
model much more adequate than two independent Normal models (one for each outcome).
Poisson case
In this case, we assume the following observational model:
\[ \begin{aligned} Y_t|\eta_t &\sim Poisson\left(\eta_t\right),\\ \ln(\eta_t) &=\lambda_{t}. \end{aligned} \]
In the notation introduced before, we have that our link function \(g\) is the (natural) logarithm function.
To define such observational model, we offer the Poisson function, whose usage is presented bellow:
As usual in the literature, we refer to the rate parameter of the Poisson distribution as lambda (although, in the context of this document, this might seem confusing) and the user must provide for
this argument the name of the linear predictor associated with this parameter.
For the argument data the user must provide a sequence of numerical values consisting of the observed values of \(Y_t\) at each time. Since the \(Y_t\) is a scalar for all \(t\), the user can pass
the outcome as a vector or as a matrix with a single column. If a value of data is not available (NA) for a specific time, it is assumed that there was no observation at that time, thus the update
step of the filtering algorithm will be skipped at that time. Note that the evolution step will still be performed, such that the predictive distribution for the missing data and the updated
distribution for the latent states at that time will still be provided.
Lastly, the offset argument is optional and can be used to provide a measure of the scale of the data. If the offset is provided and is equal to \(E_t\), then we will fit a model assuming that:
\[ \begin{aligned} Y_t|\theta_t &\sim Poisson\left(\eta_tE_t\right),\\ \ln(\eta_t) &=\lambda_{t}. \end{aligned} \]
Bellow we present an example of the usage of this outcome. We use some functions described in the previous section, as well as some functions that will present later on, for now, let us focus only on
the usage of the Poisson function.
data <- c(AirPassengers)
level <- polynomial_block(rate = 1, order = 2, D = 0.95)
season <- harmonic_block(rate = 1, period = 12, order = 2, D = 0.975)
outcome <- Poisson(lambda = "rate", data = data)
fitted.data <- fit_model(level, season,
AirPassengers = outcome
plot(fitted.data, plot.pkg = "base")
Notice that, while creating the structure, we defined a linear predictor named rate, whose behavior is being explained by a second order polynomial trend and seasonal component defined by a second
order harmonic block. Since the value passed to rate equals \(1\) in both blocks, we have that these components have a constant effect (and equal to \(1\)) on the linear predictor on all times,
although the components themselves change their values over time such as to capture the behavior of the series.
Later on, when creating the outcome, we pass the name 'rate' as the linear predictor associated with lambda, the rate (or mean) parameter of the Poisson distribution.
This is a particularly simply usage of the package, the Poisson kernel being the one with the smallest amount of parameters. Moving forward, we will present outcomes whose specification can be a bit
more complex.
Gamma case
In this subsection we will present the Gamma case, in which we assume the following observational model:
\[ \begin{aligned} Y_t|\alpha_t,\beta_t &\sim \mathcal{G}\left(\alpha_t,\beta_t\right),\\ \ln\{\alpha_t\}&=\lambda_{1t},\\ \ln\{\beta_t\}&=\lambda_{2t} \end{aligned} \]
For this outcome we have a few variations. First, there’s a matter of parametrization. We allow the user to define the model by any non redundant pair of:
\[ \begin{aligned} \alpha_t&,\\ \beta_t&,\\ \phi_t&=\alpha_t,\\ \mu_t&=\frac{\alpha_t}{\beta_t},\\ \sigma_t&=\frac{1}{\beta_t}. \end{aligned} \]
Naturally, the user CANNOT specify both \(\alpha_t\) AND \(\phi_t\) or \(\beta_t\) AND \(\sigma_t\), as such specification is redundant at best, and incoherent at worst. Outside of those cases, in
which the package will raise an error, any combination can be used by the user, allowing for the structure of the model to be defined within the variables that are most convenient (it may be easier
or more intuitive to specify the structure in the mean \(\mu_t\) and the scale \(\sigma_t\), than on the shape \(\alpha_t\) and rate \(\beta_t\)).
Another particularity of the Gamma outcome is that the user may set the shape parameter \(\phi_t\) to a known constant. In that case, the user must specify the structure to the mean parameter \(\mu_t
\) (he is not allowed to specify neither \(\beta_t\) nor \(\sigma_t\)). In general, we do not expect the shape parameter to be known, still, there are some important applications where it is common
the use some particular cases of the Gamma distribution, such as the Exponential Model (\(\phi_t=1\)) or the \(\chi^2\) model (\(\phi_t=0.5\)). The estimation of the shape parameter \(\phi_t\) is
still under development, as such, the current version of the package does not have support for a unknown \(\phi_t\) (a version of the package with a proper estimation for \(\phi_t\) will be released
very soon).
No matter the parametrization, the link function \(g\) will always be the logarithm function, as such, given a certain parametrization, we can write the linear predictor of any other parametrization
as a linear transformation of the original.
In the examples of this section, we will always use the parameters \(\phi_t\) (when applicable) and \(\mu_t\), but the code used can be trivially adapted to other parametrizations.
Similar to the Poisson case, the argument data must provide a set of numerical values consisting of the observed values of \(Y_t\) at each time. Since the \(Y_t\) is a scalar for all \(t\), the user
can pass the outcome either as a vector or as a matrix with a single column. If a value of the argument data is not available (NA) for a specific time, it is assumed that there was no observation at
that time, thus the update step of the filtering algorithm will be skipped at that time. Note that the evolution step will still be performed, such that the predictive distribution for the missing
data and the updated distribution for the latent states at that time will still be provided.
The offset argument is optional and can be used to provide a measure of the scale of the data. If the offset is provided and is equal to \(E_t\), then we will fit a model assuming that:
\[ \begin{aligned} Y_t|\theta_t &\sim \mathcal{G}\left(\alpha_t,\beta_t E_t^{-1}\right). \end{aligned} \]
Note that the above model implies that:
\[ \mathbb{E}[Y_t|\theta_t]=\frac{\alpha_t}{\beta_t}E_t. \]
The arguments phi, mu, alpha, beta and sigma should be character strings indicating the name of the linear predictor associated with their respective linear predictor. The user may opt to pass phi as
a positive numerical value, it that case, the shape parameter \(\phi_t\) is considered known and equal to phi for all \(t\).
Multinomial case
Let us assume that we have a sequence of \(k\)-dimensional non-negative integer vectors \(Y_t\), such that \(Y_t=(Y_{1t},...,Y_{kt})'\) and:
\[ \begin{aligned} Y_t|N_t,\vec{p}_t &\sim Multinom\left(N_t,\vec{p}_t\right),\\ \ln\left\{\frac{p_{it}}{p_{kt}}\right\}&=\lambda_{it}, i=1,...,k-1,\\ N_t&=\sum_{i=1}^{k}Y_{it}, \end{aligned} \]
where \(\vec{p}_t=(p_{1t},...,p_{kt})'\), with \(p_{it} > 0, \forall i\) and \(\sum_{i=1}^k p_{it}=1\).
Notice that \(N_t\) is automatically defined by the values of \(Y_t\), such that \(N_t\) is always considered a known parameter. Also, it is important to point out that this model has only \(k-1\)
free parameters (instead of \(k\)), since the restriction \(\sum_{i=1}^k p_{it}=1\) implies that defining \(k-1\) entries of \(\vec{p}_t\) defines the remaining value. Specifically, we will always
take the last entry (or category) of \(Y_t\) as the reference value, such that \(p_{kt}\) can be considered as the baseline probability of observing data from a category (i.e., we will model how each
\(p_{it}\) relates to the baseline probability \(p_{kt}\)).
To create an outcome for this model, we can make use of the Multinom function:
For the Multinomial case, p must be a character vector of size \(k-1\) containing the names of the linear predictors associated with \(\ln\left\{\frac{p_{it}}{p_{kt}}\right\}\) for each \(i=1,...,k-1
The data argument must be a \(T \times k\) matrix containing the values of \(Y_t\) for each observation. Notice that each line \(i\) must represent the values of all categories in time \(i\) and each
column \(j\) must represent the values of a category \(j\) through time. If a value of the argument data is not available (NA) for a specific time, it is assumed that there was no observation at that
time, thus the update step of the filtering algorithm will be skipped at that time. Note that the evolution step will still be performed, such that the predictive distribution for the missing data
and the updated distribution for the latent states at that time will still be provided.
The offset argument is optional and must have the same dimensions of data (its dimensions are interpreted in the same manner). The argument can be used to provide a measure of the scale of the data
and, if the offset is provided, such that, at each time \(t\), the offset is equal to \(E_t=(E_{1t},...,E_{kt})'\), then we will fit a model assuming that:
\[ \begin{aligned} Y_t|\theta_t &\sim Multinom\left(N_t,\vec{p}^*_t\right),\\ \ln\left\{\frac{p^*_{it}}{p^*_{kt}}\right\}&=\ln\left\{\frac{p_{it}}{p_{kt}}\right\}+\ln\left\{\frac{E_{it}}{E_{kt}}\
right\}, i=1,...,k-1. \end{aligned} \]
At the end of this subsection we present a brief discussion about the implications of the inclusion of the offset and how to interpret it, as well as a explanation for the way we chose to include it.
Again, we present a brief example for the usage of this outcome:
# Multinomial case
structure <- (
polynomial_block(p = 1, order = 2, D = 0.95) +
harmonic_block(p = 1, period = 12, D = 0.975) +
noise_block(p = 1, R1 = 0.1) +
regression_block(p = chickenPox$date >= as.Date("2013-09-01"))
# Vaccine was introduced in September of 2013
) * 4
outcome <- Multinom(p = structure$pred.names, data = chickenPox[, c(2, 3, 4, 6, 5)])
fitted.data <- fit_model(structure, chickenPox = outcome)
plot(fitted.data, plot.pkg = "base")
Some comments on the usage of an offset
The model presented in this section is intend to describe a phenomena such that we have \(N_t\) subjects that were distributed randomly (but not necessarily uniformly randomly) among \(k\)
categories. In this scenario, \(p_{it}\) represent the probability of one observation to fall within the category \(i\), such that:
\[ p_{it}=\mathbb{P}(Y_{it}=1|N_t=1). \]
In some applications, it might be the case that \(N_t\) represents the counting of some event of interest and we want to model the probability of this event occurring in each category. In this
scenario, it is not clear how to use the multinomial model, since we will have that:
\[ p_{it}=\mathbb{P}(\text{Observation belong to category }i|\text{Event occured}), \] but we actually want to known:
\[ p^*_{it}=\mathbb{P}(\text{Event occured}|\text{Observation belong to category }i). \]
Notice that we can write:
\[ \begin{aligned} p^*_{it}&=\mathbb{P}(\text{Event occured}|\text{Observation belong to category }i)\\ &=\frac{\mathbb{P}(\text{Observation belong to category }i|\text{Event occured})\mathbb{P}(\
text{Event occured})}{\mathbb{P}(\text{Observation belong to category }i)}\\ &=\frac{p_{it}\mathbb{P}(\text{Event occured})}{\mathbb{P}(\text{Observation belong to category }i)}. \end{aligned} \]
The above relation implies that:
\[ \begin{aligned} \ln\left\{\frac{p^*_{it}}{p^*_{kt}}\right\} &=\ln\left\{\frac{p_{it}}{p_{kt}}\right\}-\ln\left\{\frac{\mathbb{P}(\text{Observation belong to category }i)}{\mathbb{P}(\text
{Observation belong to category }k)}\right\}. \end{aligned} \]
If we pass to the offset argument of the Multinom function a set of values \(E_t\), such that \(E_{t} \propto (\mathbb{P}(\text{Observation belong to category }1),...,\mathbb{P}(\text{Observation
belong to category }k))'\), then, by the specification provided in this section, we have that:
\[ \ln\left\{\frac{p^*_{it}}{p^*_{kt}}\right\}=\lambda_{it}, \] in other words, the linear predictors (and consequently, the model structure) will describe the probability that an event occurs in a
specific class (instead of the probability that an observation belongs to that class, given the occurrence of the event).
To obtain \(p^*_{it}\) itself (i.e. the probability of the event occurring given that the observation belongs to the category \(i\)), one can use Bayes formula, as long \(\mathbb{P}(\text{Event
occured})\) is known. Indeed, one can write:
\[ \begin{aligned} p^*_{it}&=p_{it}\frac{\mathbb{P}(\text{Event occured})}{\mathbb{P}(\text{Observation belong to category }i)}\\ &=\frac{\exp\{\lambda_i\}}{1+\sum_j \exp\{\lambda_j\}}\frac{\mathbb
{P}(\text{Event occured})}{\mathbb{P}(\text{Observation belong to category }i)} \end{aligned} \]
Handling multiple outcomes
Lastly, the kDGLM package also allows for the user to jointly fit multiple time series, as long as the marginal distribution of each series is one of the supported distributions AND the series are
independent given the latent state vector \(\vec{\theta}_t\). In other words, let \(\{\vec{Y}_{i,t}\}_{t=1}^{T}, i =1,...,r\), be a set of time series such that:
\[ \begin{aligned} \vec{Y}_{i,t}|\vec{\eta}_{i,t} &\sim \mathcal{F}_{i}\left(\vec{\eta}_{i,t}\right),\\ g_i(\vec{\eta}_{i,t})&=\vec{\lambda}_{i,t}=F_{i,t}'\vec{\theta}_{t}, \end{aligned} \] and \(\
vec{Y}_{1,t}, ...,\vec{Y}_{r,t}\) are mutually independent given \(\vec{\eta}_{1,t}, ...,\vec{\eta}_{r,t}\). Note that the observational distributions \(\mathcal{F}_i\) does not need to be the same
for each outcome, as long as each \(\mathcal{F}_i\) is within the supported marginal distributions. For example, we could have three time series (\(r=3\)), such that \(\mathcal{F}_1\) is a Poisson
distribution, \(\mathcal{F}_2\) is Normal distribution with unknown mean and precision and \(\mathcal{F}_3\) is a Gamma distribution with known shape. Also, this specification does not impose any
restriction on the model structure, such that each outcome can have its own component, with polynomial, regression and harmonic blocks, besides having shared components with each other. See (dos
Santos et al., 2024) for a detailed discussion of the approach used to model multiple time series using kDGLMs.
To fit such model, one must only pass the outcomes to the fit_model function. As an example, we present the code for fitting two Poisson series:
structure <- polynomial_block(mu.1 = 1, mu.2 = 1, order = 2, D = 0.95) + # Common factor
harmonic_block(mu.2 = 1, period = 12, order = 2, D = 0.975) + # Seasonality for Series 2
polynomial_block(mu.2 = 1, order = 1, D = 0.95) + # Local level for Series 2
noise_block(mu = 1) * 2 # Overdispersion for both Series
fitted.model <- fit_model(structure,
Adults = Poisson(lambda = "mu.1", data = chickenPox[, 5]),
Infants = Poisson(lambda = "mu.2", data = chickenPox[, 2])
It is important to note that the Multivariate Normal and the Multinomial cases are multivariated outcomes and are not considered multiple outcomes on their own, but instead, they are treated as one
outcome each, such that the outcome itself is a vector (note that we made no restrictions on the dimension of each \(\vec{Y}_{i,t}\)). As such, in those cases, the components of the vector \(\vec{Y}_
{i,t}\) do not have to be mutually independent given \(\vec{\eta}_{i,t}\).
Also important to note is that our general approach for modeling multiple time series can not, on its own, be considered a generalization of the Multivariate Normal or Multinomial models.
Specifically, if we treat each coordinate of the outcome as a outcome of its own, they would not satisfy the hypotheses of independence given the latent states \(\vec{\theta}_t\). This can be
compensated with changes to the model structure, but, in general, it is better to model data using a known joint distribution than to assume conditional independence and model the outcomes dependence
by shared structure.
Special case: Conditional modelling
There is a special type of specification for a model with multiple outcomes that does not require the outcomes to be independent given the latent states. Indeed, if the user specifies the conditional
distribution of each outcome given the previous ones, then no hypotheses is needed for fitting the data.
For instance, lets say that there are three time series \(Y_{1,t},Y_{2,t}\) and \(Y_{3,t}\), such that each series follows a Poisson distribution with parameter \(\eta_{i,t}, i=1,2,3\). Then, \(Z_t=
Y_{1,t}+Y_{2,t}+Y_{3,t}\) follows a Poisson distribution with parameter \(\eta_{1,t}+\eta_{2,t}+\eta_{3,t}\) and \(Y_{1,t},Y_{2,t},Y_{3,t}|Z_t\) jointly follows a Multinomial distribution with
parameters \(N_t=Z_t\) and \(\vec{p}_t=\left(\frac{\eta_{1,t}}{\eta_{1,t}+\eta_{2,t}+\eta_{3,t}},\frac{\eta_{2,t}}{\eta_{1,t}+\eta_{2,t}+\eta_{3,t}},\frac{\eta_{3,t}}{\eta_{1,t}+\eta_{2,t}+\eta_
{3,t}}\right)'\). Then the user may model \(Z_t\) and \(Y_{1,t},Y_{2,t},Y_{3,t}|Z_t\):
structure <- polynomial_block(mu = 1, order = 2, D = 0.95) +
harmonic_block(mu = 1, period = 12, order = 2, D = 0.975) +
noise_block(mu = 1) + polynomial_block(p = 1, D = 0.95) * 2
outcome1 <- Poisson(lambda = "mu", data = rowSums(chickenPox[, c(2, 3, 5)]))
outcome2 <- Multinom(p = c("p.1", "p.2"), data = chickenPox[, c(2, 3, 5)])
fitted.model <- fit_model(structure, Total = outcome1, Proportions = outcome2)
plot(fitted.model, plot.pkg = "base")
See Schmidt et al. (2022) for a discussion of Multinomial-Poisson models. More applications are presented in the advanced examples section of the vignette.
Ameen, J. R. M., and Harrison, P. J. (1985). Discount bayesian multiprocess modelling with cusums. In O. D. Anderson, editor, Time series analysis: Theory and practice 5. North-Holland, Amsterdam.
dos Santos, S. V., Junior, Alves, M. B., and Migon, H. S. (2024). An efficient sequential approach for joint modelling of multiple time series.
Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. Transactions of the ASME–Journal of Basic Engineering, 82(Series D), 35–45.
Schmidt, A. M., Freitas, L. P., Cruz, O. G., and Carvalho, M. S. (2022).
A poisson-multinomial spatial model for simultaneous outbreaks with application to arboviral diseases
Statistical Methods in Medical Research
(8), 1590–1602.
West, M., and Harrison, J. (1997). Bayesian forecasting and dynamic models (springer series in statistics). Hardcover; Springer-Verlag. | {"url":"https://cran.dcc.uchile.cl/web/packages/kDGLM/vignettes/outcomes.html","timestamp":"2024-11-03T23:26:41Z","content_type":"text/html","content_length":"219531","record_id":"<urn:uuid:53df29c0-f747-493c-aeac-528fa2437a1b>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00091.warc.gz"} |
class lsst.ts.observatory.control.ClosedLoopMode(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)¶
Bases: IntEnum
Defines the different mode to run closed loop.
CWFS: Using only the corner wavefront sensors with focal plane in focus.
FAM: Full Array Mode.
Attributes Summary
denominator the denominator of a rational number in lowest terms
imag the imaginary part of a complex number
numerator the numerator of a rational number in lowest terms
real the real part of a complex number
Methods Summary
as_integer_ratio(/) Return integer ratio.
bit_count(/) Number of ones in the binary representation of the absolute value of self.
bit_length(/) Number of bits necessary to represent self in binary.
conjugate Returns self, the complex conjugate of any int.
from_bytes(/, bytes[, byteorder, signed]) Return the integer represented by the given array of bytes.
to_bytes(/[, length, byteorder, signed]) Return an array of bytes representing an integer.
Attributes Documentation
CWFS = 0¶
FAM = 1¶
the denominator of a rational number in lowest terms
the imaginary part of a complex number
the numerator of a rational number in lowest terms
the real part of a complex number
Methods Documentation
Return integer ratio.
Return a pair of integers, whose ratio is exactly equal to the original int and with a positive denominator.
>>> (10).as_integer_ratio()
(10, 1)
>>> (-10).as_integer_ratio()
(-10, 1)
>>> (0).as_integer_ratio()
(0, 1)
Number of ones in the binary representation of the absolute value of self.
Also known as the population count.
>>> bin(13)
>>> (13).bit_count()
Number of bits necessary to represent self in binary.
>>> bin(37)
>>> (37).bit_length()
Returns self, the complex conjugate of any int.
from_bytes(/, bytes, byteorder='big', *, signed=False)¶
Return the integer represented by the given array of bytes.
Holds the array of bytes to convert. The argument must either support the buffer protocol or be an iterable object producing bytes. Bytes and bytearray are examples of built-in objects
that support the buffer protocol.
The byte order used to represent the integer. If byteorder is ‘big’, the most significant byte is at the beginning of the byte array. If byteorder is ‘little’, the most significant byte
is at the end of the byte array. To request the native byte order of the host system, use `sys.byteorder’ as the byte order value. Default is to use ‘big’.
Indicates whether two’s complement is used to represent the integer.
to_bytes(/, length=1, byteorder='big', *, signed=False)¶
Return an array of bytes representing an integer.
Length of bytes object to use. An OverflowError is raised if the integer is not representable with the given number of bytes. Default is length 1.
The byte order used to represent the integer. If byteorder is ‘big’, the most significant byte is at the beginning of the byte array. If byteorder is ‘little’, the most significant byte
is at the end of the byte array. To request the native byte order of the host system, use `sys.byteorder’ as the byte order value. Default is to use ‘big’.
Determines whether two’s complement is used to represent the integer. If signed is False and a negative integer is given, an OverflowError is raised. | {"url":"https://ts-observatory-control.lsst.io/py-api/lsst.ts.observatory.control.ClosedLoopMode.html","timestamp":"2024-11-07T13:07:27Z","content_type":"text/html","content_length":"25145","record_id":"<urn:uuid:5ec91a6a-7975-42de-893c-fa2d6a384aaa>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00036.warc.gz"} |
al M
Journal of Water Resource and Protection
Vol. 3 No. 8 (2011) , Article ID: 6991 , 21 pages DOI:10.4236/jwarp.2011.38066
Geostatistical Modeling of Uncertainty for the Risk Analysis of a Contaminated Site
CGT Center for GeoTechnologies, University of Siena, Arezzo, Italy
E-mail: guastaldi@unisi.it
Received June 12, 2011; revised July 12, 2011; accepted August 14, 2011
Keywords: Uncertainty Modeling, Multivariate Geostatistical Simulations, Risk Analysis, Environmental Pollution, Remediation Project
This work is a study of multivariate simulations of pollutants to assess the sampling uncertainty for the risk analysis of a contaminated site. The study started from data collected for a remediation
project of a steelworks in northern Italy. The soil samples were taken from boreholes excavated a few years ago and analyzed by a chemical laboratory. The data set comprises concentrations of several
pollutants, from which a subset of ten organic and inorganic compounds were selected. The first part of study is a univariate and bivariate statistical analysis of the data. All data were spatially
analyzed and transformed to the Gaussian space so as to reduce the effects of extreme high values due to contaminant hot spots and the requirements of Gaussian simulation procedures. The variography
analysis quantified spatial correlation and cross-correlations, which led to a hypothesized linear model of coregionalization for all variables. Geostatistical simulation methods were applied to
assess the uncertainty. Two types of simulations were performed: correlation correction of univariate sequential Gaussian simulations (SGS), and sequential Gaussian co-simulations (SGCOS). The
outputs from the correlation correction simulations and SGCOS were analyzed and grade-tonnage curves were produced to assess basic environmental risk.
1. Introduction
The assessment of the risks associated with contamination by elevated levels of pollutants is a major issue in most parts of the world. Risk is generally taken to mean the probability of the
occurrence of an adverse event, in this case contamination above legally and/or socially acceptable levels. Risk arises from the presence of a pollutant and from the uncertainty associated with
estimateing its concentration, extent and trajectory. The uncertainty arises from the difficulty of measuring the pollutant concentration accurately at any given location and the impossibility of
measuring it at all study. Estimations tend to give smoothed versions of reality (i.e. estimates are less variable than real values) with the smoothing effect being inversely proportional to the
amount of data (i.e. directly proportional to the uncertainty). If risk is a measure of the probability of pollutant concentrations exceeding specified thresholds then variability, or variance, is
the key characteristic in risk assessment and risk analysis. For this reason, geostatistical simulation provides an appropriate way of quantifying risk by simulateing possible “realities” and
determining how many of these realities exceed the contamination thresholds [1].
Since the publication of the first applications of geostatistics to soil data in the early 1980s ([2-6]), geostatistical methods have become popular in soil science, as illustrated by the increasing
number of studies reported in the literature.
Geostatistics involves the analysis and prediction of spatial or temporal phenomena, such as metal grades, porosities, pollutant concentrations, price of oil in time, and so forth. Nowadays,
geostatistics is simply a name associated with a class of techniques utilized to analyze and predict values of a variable distributed in space or time. Such values are implicitly assumed to be
spatially or/ and temporally correlated with each other, and the study of such a correlation is usually called a “structural analysis” or “variogram modeling”. Following structural analysis,
predictions at unsampled locations are made using any of the various forms of “kriging” or they can be simulated using “conditional simulations”.
The main geostatistical tools are used to model the local uncertainty of environmental attributes (e.g. pollutant concentrations), which prevail at any unsampled site, in particular by means of
stochastic simulation. These models of uncertainty can be used in decision-making processes such as delineation of areas targeted for remediation or design of sampling schemes.
Methods of uncertainty propagation ([7,8]), such as Monte-Carlo simulation analysis, sequential Gaussian simulation and sequential indicator simulation are critical for estimating uncertainties
associated with spatiallybased policies in environmental problems and in dealing effectively with risks [9].
The study area was of a contaminated site in northern Italy, previously occupied by a steelworks. This work is based on a data set belonging exclusively to Studio Geotecnico Italiano S.r.l. (Milan,
Italy), which has withheld permission to publish the exact geographical location of the site, but this is irrelevant to the purposes of this study. Therefore, the steelworks factory’s site is just
roughly located in northwestern Italy in Figure 1, the study area is intentionally enormous compared to the actual site’s area, and the real location is somewhere inside the circle. Moreover,
conventional geographical directions are used only in the sections covering descriptions of geomorphology and geology. In the quantitative work reported here easting, northing and depth co-ordinates
are given in local units keeping the scale ratio, to protect the confidentiality of the site. In addition, the actual co-ordinate system has been rotated by approximately 90˚ to provide a more
convenient data layout. These assumptions, of course, do not affect any results.
The author has conducted a geostatistical study, based on the preliminary reclamation study, to assess the contamination risk associated with the most important heavy metals and hydrocarbons measured
at the site. The preliminary reclamation study outlines the reclamation proposed to mitigate the contamination risk based on groundwater analysis. The study here reported is based on soil samples
taken from the surface down to different depths up to 20 m and provides a risk analysis method that could be extended to other variables together with results that could form part of a future,
definitive reclamation project. The significant number of variables provides the basis for an extensive multivariate study. However, multivariate analysis has been restricted to the subset of these
variables with concentration values higher than limits imposed by law.
A univariate geostatistical simulation of several variables is performed independently for each variable and any spatial cross-correlation among the variables is ignored. However, in environmental
applications it is common to find pollutants positively or negatively correlated, as is the case with the site used for this project. One way of taking account of this correlation and improving the
simulation results is by introducing a correlation correction between pairs of the independently simulated variables [10], another way is utilizing multivariate simulations. Finally, following the
validation analysis, the corrected variables were analyzed and compared with those of the co-simulation technique.
The main objective of this study is to assess the uncertainty of the spatial variability of contamination by heavy metals and heavy hydrocarbons in order to perform a risk analysis suitable for the
definitive reclamation project.
2. Outlines of Geology and Hydrogeology
The study area is located in the western part of the Po river basin. The quaternary geological cover lies directly over the Tertiary bedrock by means of an erosional contact, the surface of which
slopes gently toward northwest. In the study area, however, the sand-silt Pleistocene deposits lie between the quaternary deposits and the Tertiary bedrock [11].
Essentially, the quaternary sediments constituting the shallow quaternary geology of the plain are composed (from the bottom), by heterometric gravel deposits in a sand matrix laid down in the Middle
Pleistocene by tributary streams of the Po (fluvial “Riass” period), and gravel fluvial deposits more recent than the present alluvial sediments. In particular, a sub-division can be made on the
basis of groundwater reservoirs, which are very well known in this zone [12].
The stratigraphic logs of this area show: 1) a Superficial Complex, with fluvial alluvial and glacial deposits principally composed of coarse gravels characterized by a high permeability thickness of
10 - 30 m (Middle Pleistocene), housing an unconfined aquifer, directly connected to the superficial stream network. Its potentiality is proportional to the thickness of the saturation zone and it
is, therefore, highly variable; 2) a very low permeability complex comprising silt-clay sediments lays down in the fluvial environment of the Upper Pliocene-Lower Pleistocene; 3) a Pliocene Complex
comprising a series of fairly permeable sand, sometimes with silt-clay intercalations, laid down in a marine environment; 4) a succession of silt, silt-clay and clay levels starting from at least 300
m below the surface [13].
An accurate stratigraphic reconstruction of the study area from the surface to the depth of interest was produced, by analyzing the results of field surveys conducted in the summer 2000 and borehole
data collected in the summer 2002. In the earlier survey, the piezometers showed that the geological structure of the area was heterogeneous, with a succession of layers of different and non-constant
thicknesses. The more recent survey confirmed the initial geological model. Vertical geological cross-sections were produced and utilized as the geological context for the geostatistical analysis.
They show the studied volume completely lays in the above mentioned Superficial Complex.
These cross-sections, combined with the geotechnical laboratory analyses, support the conceptualization of geological model of the study area (Figure 1), characterized as follows starting form the
surface (Table 1): Unit 0, mainly constituted by organic terrain (OS) and backfill material (BF), with a significantly variable thickness ranging from centimeters to meters; Unit 1, essentially
composed by a unique gravel layer 10 m thick (GS), with a light silty sand fraction (SS), at times with polygenic pebbles with a maximum diameter of 15 cm (PG), and an average hydraulic conductivity
of 4.85E-03 cm/s, so with a moderately high permeability in comparison with the other units; Unit 2, polygenic pebbles (PG) and gravel with silty sand (SS) in succession with levels of polygenic
gravel and pebbles in an abundant silty-sandy matrix (SG), with a maximum thickness of 10 m, an average hydraulic conductivity of 8.14E-03 cm/s; Unit 3, constituted by sandy silt (SS) moderately
coherent, represents the base layer of the aquifer, its bed was not detected, because the borehole stops 24 m below the surface, however granulometric analysis shows an equal percentage of sand, silt
and clay, an Atterberg’s liquidity limit less than 30%. These data allow the material to be classified as falling in a range between inorganic silt with low compressibility and inorganic clay with
low plasticity. The Lefranc Permeability Tests performed on samples of Unit 3 provided an average hydraulic conductiveity coefficient equal to 0.027E-03 cm/s, denoting a low permeability grade.
3. Method
3.1. Exploratory Raw Data Analysis
There have been two sampling surveys since the recla-
Figure 1. Study area, boreholes, and an example of two lithostratigraphic cross-sections performed (units: BF, Backfilling material mainly constituted by foundry scum; OS, Organic soil and/or
backfilling material constituted mainly by sand with pebbles, bricks, rubble, concrete fragments; SG, Sandy gravel; GS, Gravely sand with sparse silt levels; PG, Pebbly gravel; SS, Sandy silt).
Table 1. Composition and hydraulic conductivity coefficient of principal lithostratigraphic units.
mation project started in 2000. The first took place in the summer of 2000, and comprised five vertical boreholes of 20 m length, which were used as piezometers. Water table measurements were made
and ground water analyses were done. The second survey was done between 17 June 2002 and 5 August 2002 and comprised 59 vertical boreholes (percussion type boreholes, continuous type boreholes, deep
piezometers, superficial piezometers), together with three surface samples and two samples taken from the bottoms of the wells. The area measured by 276 samples, covers almost the entire zone
occupied by the former industrial complex, within an area of 600 m × 250 m.
All data measurements are on core samples from the boreholes, the most frequent nearest neighbor distance between them is around 40 m. However, the number of samples per borehole is not constant, as
the length of the wells is not constant. In fact, this number varies between a minimum of two and a maximum of seven samples, and on average there are five cores per borehole. There is no obvious
pattern in the locations of the boreholes that have the most samples and they are fairly uniformly spread over the study area; moreover, the reason for the differing numbers of sample in boreholes is
not known. The length of samples in each borehole varies from a minimum of 0.1 m to a maximum of 1m, as shown by the statistics listed in Figure 2(a) together with the histogram of core lengths.
The lengths of the data gaps caused by these discontinuities are summarized in the histogram and statistics shown in Figure 2(b) which indicate that the lengths of most gaps (i.e. missing data) are
either between 0.5 m and 1.5 m, or between 2.5 m and 3.0 m. Figure 2(b) does not give any indication of the locations of the gaps. Most of the 1.5 m gaps are located at a depth of 4.5 m (22.9% of the
total number of gaps), and most of the 1.0 m gaps are located at a depth of 2.5 m (20.8%). These observations informed the choice of parameter values in the recomposition procedure.
As the raw data values are measured on unequal sample lengths, the grades must be composited over equal lengths to ensure that all data have the same support.
Figure 2. Distribution and descriptive statistics of: (a) core lengths; (b) length gaps between sampled cores.
From the histogram in Figure 2(a) a composite length of 0.5 m was chosen, since it appears to be the most representative length, it is an appropriate scale of measurements and minimizes the number of
actual sample lengths that are subdivided in the compositing procedure. As additional check on the compositing procedures parameters, the lengths of the raw samples were analyzed in terms of the
number with grades exceeding the terrain acceptable concentration limits (TACL) fixed by environmental Italian law [14]. These values may be spatial outliers that could mask the underlying variogram
of each variable. As all such lengths are very close to the composite length their influence will not be diluted by compositing to 0.5 m.
Completely, Table 2 summarizes the statistics of the grades of the 50 small-scale (10 cm and 20 cm) samples below 7.4 m. For all variables but the nickel, mean and variance are less than those of the
remaining heavy metals and hydrocarbons (HY), and these values could be eliminated from the compositing procedure with negligible effect on any subsequent estimations or simulations. Mean grade and
variance of nickel are higher than those of the remaining variables and eliminating these values from the compositing procedure is likely to yield biased estimates and simulations. Introducing the
artificial 40 cm and 30 cm samples, as described above, allows these isolated grades of all variables to be included in the compositing procedure.
Totally, the data compositing process yielded 1007 composites from 507 raw splits.
Beside constructing the usual scatter plot maps, the spatial distributions of the data have been analyzed through summary graphs, i.e. histograms of concentration and relative statistics of each
variable under study. The histograms of concentration values are based on slicing the three-dimensional soil volume that contains the data and then conducting a statistical analysis of each slice.
These slices have been imposed different along the three axes of the space, because the geometry of the volume of contaminated site, in fact the slicing along x-axis from West to East is each 50 m,
from North to South along the yaxis the slices are 25 m wide, and from the surface toward the deepest data value the slices are 1 m thick. For each “slice histogram”, one for each main direction of
space, some basic statistics are plotted, such as number of samples (bars), minimum, maximum and mean value of concentration, and coefficient of variation (an example is shown in Figure 3).
In general, all the variables show that concentration values in the first depth interval tend to be higher in the eastern part of the study area. Below 2 m the concentration values decrease by
increasing depth and higher values tend to be distributed in the central-eastern part. The vertical slice histogram shown in Figure 3(c), Cr concentration decreases quite quickly from the surface and
at 10 m it is almost zero mg/kg. This is a logical conesquence of the scarce presence of samples below 10m of depth.
Summary statistics describe numerically the frequency histogram of a variable and provide an initial assessment of the data [15]. The statistics of the 0.5 m composited data are given in Table 3.
The univariate statistical analysis of concentration values shows that all frequency distributions have high positive values of skewness and the mean, the median and the mode are never coincident.
These parameters, together with the high value of kurtosis, suggest that the data are not from a Normal (Gaussian) distribution, thus a transform into Gaussian space for sequential Gaussian
simulation is necessary. Note that both Co and Ni show less skewed behavior than the other variables. However, even if they are not so skewed, transform in Gaussian space is required. It is assumed
that the outliers in this data set are legitimate values. Discarding outliers in environmental applications is not an advisable procedure,
Table 2. Descriptive statistics of data to be eliminated before the recomposition because the short length (unit: mg/kg).
Figure 3. Variation of number of samples (bars) and principal descriptive statistics of Cr concentration slicing the site volume: (a) East-West direction; (b) North-South direction; (c) vertical
as these values are of prime concern and interest. Thus the only valid option is data transformation. There are many types of transformation that are used in statistical and geostatistical analyses,
including square root of data values, logarithms of the data, a relative transform to the local mean of samples and the Normal scores transform. A normal transform (by normal scores or by some type
of a functional form) is also required in this work for the application of the sequential Gaussian simulation method. The data were first assessed for lognormality using the 3-paramenter lognormal
distribution [16]. However, assessments for all variables failed and the log-transform option was discarded.
The Normal score data transform is a non-parametric method which is used to transform the data into the Normal space, and to back-transform the data after the estimation and/or simulation
calculations [17]. This method does not require the strong mathematical assumptions needed for the log-normal transform. The results, in terms of statistics and normal-probability plots were
satisfactory and this method was used for data transformation.
The Normal distribution can be completely defined by mean and standard deviation, which for a standard cumulative density function are zero and one respectively. The transform of experimental z(x)
values will generally produce results which approach the Gaussian theoretical ones, but it is unlikely to match exactly the theoretical zero mean and unit standard deviation. Generally, the closer
these values are to the theoretical ones, the closer the Gaussian transformed distribution is to the standard Gaussian cumulative density function.
Six on ten studied variables (for instance, Cobalt in Figure 4) have means relatively close to zero and standard deviations close to one. However, four variables (Cr, Cd, Sn and HY) have not so good
results. For Cd, Sn and HY this can be explained by the use of a default minimum value equal to the instrumental detection limit. Chromium (Figure 5) is quite well sampled and there are no apparent
reasons for these differences, and two or more populations are possible. In fact, the normal-probability curve shown in Figure 5(c) could be approximated by two straight lines, one from cumulative
frequencies 0% to 10%, and the other from 50% and 95%. Anyway, this conjecture was not explored any further by fieldwork measures and for all subsequent work it was assumed that the data come from a
single population and that the Normal transform is acceptable.
This multivariate data set provides an opportunity to conduct a complete multivariate analysis and further multivariate co-simulations, which can form the basis of a much more realistic risk
assessment than a sequence of independent univariate analyses. This observation is, of course, only valid if the two or more of the variables are correlated in situ and/or spatially correlated.
Calculating the correlation coefficient matrix, it is possible to assess the correlations between all pairs of variables. From Table 4 the correlations among Cr, Cu, Zn, As, Pb, Cd and Sn are greater
than 0.6 and are statistically significant.
Anyway, the correlation coefficient measures only the in situ linear relationship between a pair of variables, but it does not quantify spatial correlation among random variables.
Table 3. Descriptive statistics of 0.5 m composited data (unit: mg/kg).
3.2. Coregionalization
There are two aspects in the study of environmental variables, one is related to the factors determining the values of those variables at any location, and the other is the study of the relatively
sample values of these variables. Sample values are viewed as realizations of a random variable drawn from a probability distribution, so at each sample location x[i] there is a variation from the
mean value of that variable. Probability distributions at neighboring locations are, more or less, related with the relationship decreasing with distance between any two locations. Such a variable is
generally termed a regionalized variable ([18,19]), which is composed of a random unpredictable component and a structured predictable component. The values of regionalized variables tend to be
correlated, and this relation leads to a structure. The spatial correlation among samples generally decreases by increasing the separation distance between them, and may vary in different directions
(Figure 6).
An important tool for the analysis of spatial correlation is the experimental semi-variogram (ESV), which measures the average spatial variability between values separated by a vector h. It can be
expressed by the average square difference of each pair of values as follows:
The objective of the structural analysis is to generate ESVs, fit models on them and interpret the models in the context of the local geology and other possible factors conditioning the spatial
distribution of pollutants. These models of the variability are then used to simulate realizations of the random variables at unsampled locations. ESVs have been calculated in the horizontal plane
for the four main geographical directions and in vertical direction. In addition, omnidirectional ESVs have been calculated. The sparse data provided are characterized by very large variability
making it difficult to detect underlying spatial correlation.
The parameters required for calculating the ESV are related to the search pair criteria: lag distance, conical search angle and maximum distance limit [17]. These parameters are used to define
orientations for irregular grids. In this study, as the cores are discontinuously sampled in irregularly spaced boreholes, in a 600 m × 250 m area, the lag distance chosen was 40 m, and, a conical
search angle of 15˚ was necessary. If this cone extends too far the ESV calculation will average sample values in significantly different directions; so, to prevent this, the cone was bounded by a
maximum distance limit of 5 - 15 m.
For vertical variograms the lag distance used varied between 1 m and 2 m, because the most frequent sample distance along a borehole is 0.5 m (in a range of 0.1 m - 9.8 m) and the maximum length of a
borehole is 20 m. A lag interval of 1 - 2 m assures a sufficient number of lags in the ESV. As the boreholes are vertical a conical search is meaningless in this direction.
The ESV summarizes the spatial relationships among the data, but it does not describe properly the variance of the regionalized variable, because each particular lag is exclusively an evaluation of
the mean semi-variance for that lag and is subject to sampling variability, which in-
Figure 4. (a) Frequency distribution of Co raw values; (b) Frequency distribution of Normal Scores transformed values of Co; (c) normal-probability plot of normal scores transformed values of Co.
creases as the number of samples decreases, and it is interpolated by a model, i.e. a mathematical function defined for all real distance h, which is used in both estimation and simulation
The variogram model is completely defined by its parameters: the type of mathematical function adopted (the model γ(h)); the behavior near the origin and the positive
Figure 5. (a) Frequency distribution of Cr raw values; (b) Frequency distribution of Normal Scores transformed values of Cr; (c) normal-probability plot of normal scores transformed values of Cr.
intercept on the ordinate (the nugget variance C[0]); the range of the variogram (or autocorrelation distance) that changes with direction if there is anisotropy, and from which the curve reaches a
constant maximum or an asymptotic value (the sill C[i]) [21].
The ESVs were not modeled independently of each other and several attempts were made to find a valid linear
Table 4. Correlation coefficients matrix of raw recomposited data (highlighted cells in this table represent the absolute values of correlation coefficient greater than 0.5).
Figure 6. Spatial auto-correlation: (a) example sampling map and relationships between a sample with the other sampled localizations in three distance classes; (b) empirical relationships between
spatial correlation vs distance.
model of coregionalization, which requires the same range for each directional ESV for each variable. This was an iterative procedure that started from a rough fitting of the ESV by a simple model
for each variable. The models were validated by means of cross-validation and then the parameters of each model were adjusted to achieve a balance between the requirements of a positive-definite
linear coregionalization model and the need for the variogram models to reflect the behavior of the ESVs.
By definition, the linear model of coregionalization is a sum of proportional covariance models [21]. Proportional covariance models are models in which all covariances (or all variograms) are
proportional to the same covariance (or variogram) function [20]. The variables
The variables
or, either in terms of variograms or in matrix form:
where [b[ik]] is the positive definite matrix of symmetric coefficients b[ik]. The positive definiteness of these matrices is achieved by checking that the eigenvalues of each matrix are real and
positive. The structural part of the variogram (i.e. V(h)) remains the same for every coefficient. This is the reason for which all variograms and cross-variograms have the same range in a particular
There are automatic procedures that can fit linear models of coregionalization among whatsoever number of variograms and cross-variograms. However, like other automatic procedures, there is no
guarantee that they will work properly in all applications. In such cases manual adjustments of the fitted models are needed. Two problems were encountered in the manual adjustment of parameters in
this project:
• Fixed range of anisotropy in any direction;
• Assumption of positive definite variance-covariance matrix.
Often, if the ESV was to be respected, the range of possible values for adjusting parameters was very narrow, and even small changes in values contravened the positive definiteness requirements. A
linear model of coregionalization was fitted (or at least refined) by trial and error.
The models comprise two structures: a nugget variance (C[0]) and one nested spherical structure (C[1]) with three different range of anisotropy. The ESV identifies the anisotropic behavior of the
variables, even though it is difficult to detect this from the raw data. The axes of the anisotropy ellipsoid are: E-W direction (a[NS]) = 130 m; N-S direction (a[EW]) = 80 m; Vertical direction (a
[Vert]) = 6 m. The vertical range might be limited by the maximum length (20 m) of the boreholes. It is unlikely, however, that the high levels of contamination in the first few meters are related to
the low values of concentration at depths below 10 m (see for instance); it is also unlikely that very high values of concentration will be found in deeper samples if and when they are collected.
Thus, the vertical range found in this study for the first nested spherical structure C[1 ]would probably not change with additional deeper sampling but it is possible that such sampling might reveal
an additional longer-range structure. This is because deeper samples are likely to reflect the natural background concentrations of the variables.
As there are nine heavy metals variables above the TCAL, the next step is to assess spatial covariability among the variables. One variable can be spatially related with another in the sense that its
values are spatially correlated with the values of the other variable. This inter-correlation among variables should be included in any realistic simulation of pollutant concentrations, especially
when they are genetically similar, like in this case study (all variables are metals). This concept forms part of the general principle of coregionalization [22].
A tool for studying coregionalizations between two regionalized variables
X-ESV is symmetric and can be calculated only for the locations where there are values of both variables. In this project, the variables under study have not been measured in the same way everywhere,
so crossvariograms can be calculated just in some locations. X-ESV can be modeled in the same way as variograms using the same types of models. The only inconvenient is that cross-variograms must be
calculated for every possible pair of variables: in particular for this project, nine ESVs and 39 cross-variograms must be calculated, modeled and adjusted to conform to a linear coregionalization
model together with direct variograms (ESVs) defined by the anisotropic structures (same range for all ESVs and X-ESVs). The two structures, C[0] and C[1], fitted to each experimental variograms and
cross-variogram are listed in Tables 5(a) and (b). In these tables the variance components of each direct variogram are highlighted. The respective eigenvalues are also listed in both of tables,
because they define the real positive condition of this matrix, mandatory constriction for establishing a linear coregionalization model. For most X-ESVs the two structures are positive and only al
pairs where Ni is present and some variables pairs where C[0] is present have negative X-ESVs’ trend. This agrees with the correlation coefficients shown in Table 4.
Note that C[0] generally contributes significantly more than C[1] to the total covariance (upper triangle in Table 5(a)). This randomized background noise may indicate that there is a very
short-range structure, which is not detected by the data because of the sampling configuretion and sampling grid.
Upper triangle in Table 5(a) provides a quicker way of understanding how the random component affects the spatial correlation modeled between each pair of variables. On average in this case study,
the C[0] component contributes around 50% to the total variability (C[0] + C[1]) and sometimes it is the only significant component, e.g. for the cross-variograms for Ni-Co, Ni-Zn or Ni-Pb. This
implies that Ni is not significantly spatially correlated with the other variables in the study area, because there is almost a complete absence of structure in most of the cross-variograms in which
it appears. On the other hand, Cr, Co, Zn, Pb, which are highly correlated each other, and highly spatially correlated too in terms of crossvariograms.
Given the large number of cross-variograms produced, only a few examples are given here in Figure 7. Considering the pair Zn-Pb, horizontal cross-variograms show good structure (Figure 7(b)), even
though the anisotropy is not so evident. The vertical cross-variogram for the same pair shows some structure up to the range of 6m (Figure 7(d)).
Cross-Validation is a back estimation technique for testing different variogram models fitted to an ESV and X-ESV, providing comparisons between the actual value z at each sampled location
In this project, none of the above conditions were theoretically respected. However, the variogram models were considered validated after several iterations of crossvalidation and refining the model
parameters. So, the variogram parameters entered for the cross-validation satisfy the requirements of a linear coregionalization model.
Factors affecting the coregionalization can be related to the general geological and hydrological sketches. The vertical range of 6m may represent the average thickness of Unit 0, i.e. the backfill
material. Generally, this mate-
Table 5. Matrices of linear coregionalization model fitted and respective eigenvalues: (a) structure C[0], or nugget effect (lower triangle and first value in diagonal’s cells), and percentage
contribution of C[0] to the total variance (C[0] + C[1]) for ESV (second value in diagonal’s cells) and X-ESV model of each variables pairs (upper triangle; (b) structure C[1], sill of spherical
rial comprises industrial waste and is spread almost everywhere over the study area to a depth of approximately 4 m. It is, however, unlikely that the metals do not penetrate at least a further two
meters below this unit. Furthermore, lenses of different materials can be detected over the entire area at depths of 5 - 6 m. These lenses are composed of gravely sand and/or silty sand, inside a
thicker volume of pebbly gravel. Vertical cross-sections in Figure 1) show a succession of three different materials in the first 6 m and three more types of material between depths of 6 m and 12 m.
This should explain the generally well-defined 6 m range on the vertical variograms and cross-variograms.
The range of variograms is 130m in the E-W direction and 80 m in the N-S direction, implying some type of geological structure, or succession of levels, which is more continuous in the E-W direction
than in the N-S direction. One possible explanation for the major axis of the anisotropy ellipsoid in the E-W direction is related to the several lenses of different material included in a larger and
more homogeneous volume. These lenses could be the remainder of meanders of some small tributary stream of the Po River, or the residual of a small, local alluvial event in the quaternary period
rather than a stream. There are, however, no indications of a meander or channel in the cross-sections provided for this study.
The major range of anisotropy (E-W) can be related to the main flow direction of the stream in this area. Slopes tend to decrease from East to West, toward the major stream, the Po River. Thus, the
tributary streams, the meander and the alluvial event tend to be more extended in the E-W direction than in the N-S direction. Thus, a plausible interpretation of the E-W and N-S ranges is that they
reflect a quasi-random dispersion of long and relatively narrow lenses of different material.
Another explanation could be based on geochemistry considerations. The slice histograms (example in Figure 3) show that there are almost always concentrations are extremely high in the first 2 - 3 m
and then decrease very quickly down to a depth of 5 - 6 m, below which there is a relatively low, homogeneous concentration. The vertical variogram should reflect this aspect. Indeed the first lags
Figure 7. Examples of experimental cross-variograms and models for variables pairs: (a) Ni-Sn; (b) Pb-Zn; (c) Cr-Co. Example of vertical experimental cross-variograms and models for Pb-Zn variables
show good correlation up to 6 m, after which spatial correlation decreases and becomes erratic, implying that almost everywhere there are strong relationships between the closer samples.
Moreover, as the coefficient of permeability is quite high in the first geological units, the contaminants tend to be dispersed along the main groundwater flow direction, which is gently sloping from
East to West. This could explain why the narrow dimension of the anisotropy ellipse is oriented in the East-West direction.
3.3. Simulations
The objective of geostatistical simulation is to provide alternate realizations of regionalized variables on any specified scale ([17,19,20]). It does not create data but provides a possible reality
at unsampled locations. Estimation algorithms tend to smooth the spatial variation of a variable; in particular they overestimate small values and underestimate large values. This complicates to
detect patterns of extreme high values, for instance metal concentrations above TACL. Moreover, the estimation smoothing effect is not the same everywhere as it depends on the data configuration and
it will be low for dense samples. A smooth interpolator should not be used for applications in which the pattern of continuity of extremely high values is critical [17]. Geostatistical simulation can
be applied to the assessment of the variability of a regionalized variable and the quantification of the uncertainty associated with the value of a regionalized variable sampled at specified
Once the contaminants of the volume have been simulated, the volume can be subjected to any number of simulated operational activities and it can be used to assess the likely concentration of a metal
above the law limit [1]. Moreover, remediation projects can be designed on the basis of the sampled data and the results compared with the simulated “reality” of the variable.
Following the initial Turning Bands Method many other geostatistical simulation techniques have been developed [23]. The choice of one method over another is based essentially on the application. For
this project sequential conditional methods were chosen. These techniques are easy to implement and provide a simple, robust and manageable implementation. The do, however, requires a few assumptions
[24]. A simulation in which the simulated values coincide with the actual data (or conditioning) values at the sampled locations is termed a conditional simulation and meets the following criteria:
• They coincide with the actual values at all data locations;
• They have the same spatial correlations, i.e. the same variogram, as the data values;
• They have the same distribution as the data values;
• They are coregionalized with other variables in the same way as the data.
• Two different methods of geostatistical simulation have been implemented in this project:
• Correlation correction of univariate Sequential Gaussian Simulations;
• Sequential Gaussian Co-Simulations.
3.3.1. Correlation Correction of Univariate Sequential Gaussian Simulations
Sequential Gaussian Simulation is a direct conditioning method of simulation ([23,25]). It is principally a data driven technique and is only valid for multi-Gaussian random variables. The main steps
of the procedure are:
1) Transform data to standard Gaussian values (conditioning data);
2) Calculate the ESV of conditioning normal transformed data, then fit and validate a model;
3) Define a random path through all grid points (grid nodes) to be simulated;
4) Choose a simulation grid node and krige the value at that point using conditioning data and all previously simulated values;
5) Draw a value for the grid node from a normal distribution having the estimated kriging value as mean and the kriging variance as variance;
6) Add this point to the conditioning data as a realizetion of the random variable and include it in kriging the value at all subsequent grid nodes;
7) Repeat steps 4 - 7 until all grid nodes have been visited;
8) Take the inverse transform of the Gaussian conditionally simulated values;
9) Correlation correction of univariate simulations.
The basic assumption and the only apparent limitation of SGS is that it works with intermediary Gaussian distribution. So, the normal transformed variables
A test for a bi-variate Gaussian distribution can be performed by means of h-scatter plots, which plot all the pairs of measurements of the same attribute z at locations separated by a constant
A bi-Gaussian distribution will plot as an approximate spherical cloud of points. So, h-scatter plots were plotted for three different distances (30 m, 50 m and 80 m) for each variable (an example is
given in Figure 8 for Zn). This visual checking was performed for all variables, and in practice all of them satisfy the bivariate normal requirement, from which it could be conjectured the
multivariate Gaussian condition. For each variable, 500 simulations were generated. SGS was used to simulate realizations of regionalized variables at each node of a regular three-dimensional grid
(voxel size: 5 m × 5 m × 1 m; No. voxel: 130 × 75 × 22) that included all conditioning points, i.e. all samples The composited data were used for SGS. However this introduces more uncertainty into
the simulation, because of the isolated shortest samples, adjacent unsampled lengths were added to the dataset with grades equal to mean grades of each variable. Both kriging and variogram model
parameters are the same as those used to define the linear coregionalization model and to do the cross-validation, and they will not berepeated here. The only difference is the number of previously
simulated points that must be used in kriging
Figure 8. H-scatter plots for Zn at (a): 30 m, (b): 50 m, (c): 80 m distances.
each new simulation grid node. This number was set equal to the maximum number of conditioning data to be used in kriging within the imposed search radius.
After simulating, a back transform to the original sample space was automatically performed. The linear extrapolation method was used to deal with the upper and lower tails of the Gaussian
distribution, i.e. it was assumed that the values in the lower and upper tails follow a uniform distribution [10]. As the simulation was done for blocks, the volume has been sliced for visualizeing
the simulation before validating it. For instance, comparing the map in Figure 9 with the row sampled As data, it is obvious that the higher data and simulated values are in roughly the same
However, the common way of validating simulation results is via descriptive statistics and the spatial variation of the simulated values in comparison to the conditioning data. Descriptive
statistics, frequency distribution, normal-probability plot, and variograms have been calculated for all variables. The frequency distributions of the back-transformed simulated values for all
variables reproduce the raw data histograms, although the former are slightly less variable than the latter. The normalprobability plots of the Gaussian simulated values reproduces that of the normal
transformed conditioning data. As the variogram modeling was done in the Gaussian space, the variograms were validated by comparing the ESV of the simulated normal scores values with the (input)
variogram models fitted to the normal transformed conditioning data. By way of example, Figure 10 shows the validation for As variable, three different univariate simulations and the spherical model
deriving from the linear model of coregionalization for the EW, NS and vertical directions are shown. The reproduction of the variograms is satisfactory in the three major directions of the space. In
general the ESV for all simulated variables tends to show a higher nugget variance and a slightly longer range than the corresponding parameters of the specified model. This could be caused by the
presence of the extremely high values in the shallower
Figure 9. Example of three-dimensional SGS result for AS (legend unit: mg/kg): horizontal cross-section at 3 m below the surface.
Figure 10. Simulation results of As ESV and imposed variogram model for: (a) EW direction; (b) NS direction; (c) vertical direction.
levels, surrounded by low values. SGS is a data driven method and any structure implicit in the data will tend to enforce precedence over the specified model. In terms of the variograms the
simulations are deemed acceptable.
Correlation correction method simplifies the co-simulation by using directly the univariate SGS results to introduce correlation among the simulated values. In a multivariate Gaussian context, if all
covariances are proportional, the co-simulation by parallel simulations combines the variables linearly to impose specified correlations among them [20].
Multivariate datasets are common in environmental applications and generally these variables are negatively or positively correlated. The “Variable Correlation Correction” module of GeostatWinÔ
software [10] allowed to impose the specified correlations on the independently simulated variables. The only limitation of this software is that it can handle a maximum of five variables and so the
most highly correlated variables were chosen, i.e. Cr, Zn, As, Pb, and Sn.
The variable correlation correction procedure can be applied to both generic simulated variables and correlated simulated variables. However, univariate simulations performed by SGS are not
correlated before imposing this correlation correction. The correlation coefficients of the back-transformed simulated values are less than 0.1, so there is not “a priori” correlation (Table 6).
The correction was made by imposing the correlation coefficients of the raw conditioning data and performed in the Gaussian space, and than back-transformed to the sample data space.
The correlation correction simulations reproduce the conditioning distribution although the former is slightly smoother than the latter. From the variography these corrected simulations display
higher spatial variability than the uncorrected simulations. By way of example, variograms for the corrected simulations of lead and zinc are shown in Figure 11, from which it can be seen that the
directional variogram models are consistently lower in all directions, especially for the vertical direction. Furthermore, the ranges of corrected simulated values are generally shorter than those of
the linear coregion-alization model.
Comparing these variograms with those of the uncorkrected SGS (Figure 10) it is evident that the latter produce better results in terms of spatial variability (better
Table 6. Correlation coefficient for back-transformed simulated values of five variables to be corrected.
Figure 11. ESVs of simulation managed by correlation correction: (a) lead; (b) zinc.
fitting) so the correction increased the simulation efficiency.
3.3.2. Sequential Gaussian Co-Simulation (SGCOS)
SGCOS is essentially a direct conditioning method of simulation which takes into account the spatial correlation among a set of regionalized variables by using the parameters of a linear
coregionalization model [21]. It is essentially a data driven technique valid for multi-Gaussian random variables and is very similar to SGS, except that kriging is replaced by cokriging for
estimating any single node to be simulated. The drawbacks for this method is the difficult in finding the same range of anisotropies, the significant increasing in computing time over SGS, and it is
more effective for two or three variables and a relatively small dataset [10]. This step of the project was conducted in the Gaussian space using the Geovariance ISATIS software [26].
For each variable, five co-simulations were generated. As performed for SGS simulations, also the co-simuations were validated by comparing the frequency distributions of simulated values with those
of the normal transformed conditioning data for all variables together with the normal-probability plots. The simulated values respect the distributions of the actual values, although the former are
slightly smoother than the latter.
Spatial structures were validated by comparing the variograms and cross-variograms of the co-simulated values with those of the conditioning data. By way of example, Figure 12 shows two direct
variograms and two cross-variograms. These are just an example of the total of nine direct variograms and 36 cross-variograms produced for the linear coregionalization model established among the
heavy metals variables. Note also that these variograms relate to only one simulation. Five hundreds simulations were generated and a complete validation would include variogram and cross-variogram
synoptic comparisons for all simulations.
The horizontal direct variograms (Figures 12(a) and (b)) reproduce the linear coregionalization model quite well. The ESVs of co-simulated values have generally higher nugget variance, but they have
the same ranges and sill values of the linear coregionalization model of the conditioning data. For the vertical direction the direct ESVs for each variable are differ from the theoretical
coregionalization model, but tend to have the same range.
Almost every cross-variogram between tin and any other variable does not match the coregionalization model. Low cross-correlation is also shown for nickel and arsenic. On the other hand the
cross-variograms for cobalt and all other variables reproduce the coregionalization model.
In conclusion, SGCOS (an example is shown in Figure 13) can improve the simulation of high statistically and spatially correlated regionalized variables but for moderate to low correlations the
co-simulation procedure does not improve the results regarding to the correlation
Figure 12. Example of direct variograms and cross-variograms (thin lines) and respective models (thick lines) for co-simulated values, together with the boundary of positive definition of the model
(dashed lines).
correction of univariate simulations.
4. Risk Analysis
Risk is a measure of the probability and severity of adverse or unexpected effects, whereas safety is the degree to which risks are judged acceptable. Risks can be distributed over part of a
population or over geographical areas and these distributions may be more important than the magnitude of the risk itself [1]. Diffuse risks of contamination by inorganic compounds slightly above
acceptable limits, but over large areas, may affect the population more than rare high concentrations in localized zones. The acceptability of risk is determined for a particular area by technical
considerations and/or political law on the basis of quantified risk.
A percentage of risk is always present and cannot be removed. So, the best way of dealing with the risk is to reduce it to the minimum by identifying, assessing and identifying the risk, determining
the minimum acceptable level of risk, reducing the risk to the minimum and, finally, managing the residual risk [1].
In general, realistic quantification of risk requires adequate models of the processes causing risk. The acceptability of risk is determined by risk-benefit analyses, i.e. by analyzing whether the
benefit is worth the residual risk. In the case of a contaminated site, this benefit could be measured by the cost of a remediation project in relation to the further industrial use of the site. The
benefit depends on whether the site is used as a new comercial/ industrial site, rather than new urbanization area. For the latter case the decision-maker must take into account more restrictive
parameters in risk-benefit analysis.
Ultimately, quantified risk analysis requires an estimate of the likelihood of an event occurring. This quantification can be done by geostatistical simulation for a given variable over a studied
volume. Risk can be asessed by repeating simulations and generating additional images of possible realities. The greater the number of simulations is, the more accurate the risk quantification will
be. A curve showing the concentration (grade) and tonnage of a particular contaminant is a simple way of assessing the probability that the contaminant will exceed acceptable limits. A realistic risk
analysis must be based on block tonnages and grades estimated from the ‘sample’ obtained by ‘drilling’ the simulated volume to
Figure 13. Three-dimensional representation of the mean of 500 SGCOS results for Cd variable (color ramp) and standard deviation for simulation (isolines).
be subjected to remediation [1].
Risk quantification in the form of contaminant concentration/grade-tonnage curves (G-T curves) is critical for capital investment in mining and environmental projects and can be obtained through
geostatistical simulations of the studied volume [27].
These curves display simultaneously the tonnage of terrain above a particular threshold grade and the average concentration of the contaminant above that threshold (or cut-off). In practice, G-T
curves provide a means of determining how much of the population is likely to lie above or below a threshold value, i.e. the acceptable concentration limit. In addition it provides the average grade
of the material above the threshold value [28]. For instance, if soil below the legal limit is ignored, the average value of the remaining soil volume will be higher than the original average of the
G-T curves are normally derived from estimated values, i.e. by some type of kriging interpolator. In this project these curves are based on simulated correlation correction simulations as well as on
the co-simulated realization of inorganic compounds (Figure 14). For each graph, a given cut-off grade is drawn (example in), equal to the TACL fixed by environmental Italian Authorities [14].
SGCOS results for chromium do not agree with those generated by the correlation correction simulation. In the former there are less 0.01 million tons of contaminated soil with mean grade of around
1400 mg/kg at a TACL of 800 mg/kg. Moreover these values are confirmed for every co-simulation performed for cobalt more than 99% of the blocks have grades less than a TACL of 250 mg/kg. There are
about 0.002 - 0.003 million tons of contaminated soil with mean grades of lead of around 17000 mg/kg above its cut-off grade (D). For arsenic, there are less than 0.001 million tons of contaminated
soil with a mean concentration of 80 mg/kg at the TACL threshold.
5. Conclusions
Environmental risk in this contaminated site arises from the presence of several pollutants and from the uncertainty in estimating their concentrations, extents and trajectories. This uncertainty
arises essentially from the impossibility of measuring all possible values of pollutants within a volume, but only a few samples are available. The preliminary reclamation project proposed both
remediation procedures and recommendations on the basis of a groundwater study. This work provides an alternative way of studying pollutant concentrations in terrain by assessing the spatial
uncertainty and using it as the basis for a risk analysis.
Assessment of spatial uncertainty was performed by means of geostatistical simulation of realizations of the random variables conditioned by the data values. Given the significant number of
variables, an extensive multivariate study was performed to assess spatial correlation and cross-correlation (linear model of coregionalization). The most demanding task in terms of time spent was in
establishing a valid linear coregionalization model. The erratic nature of many of the experimental variograms and cross-variograms resulted in a range of possible models.
Geostatistical simulation is particularly useful when data are sparse and variability is erratic. This project is an example of this condition. However, because of their sparse and discontinuous
nature, the data were processed and partially modified to make them suitable for geostatistical calculations.
Different conditional simulation methods, both univariate and multivariate, were used. Univariate sequential Gaussian simulation provides good results for simulating some of the regionalized
variables (nearly half the total number of variables). Variograms of the simulated realizations of those variables demonstrate the same spatial variability as that of the original data. For the other
variables, however, the variograms of the simulated values do not reproduce the characteristics of spatial variability found for the original samples.
Thus, these univariate simulations were considered together and simulated values were corrected on the basis of the correlation coefficients among the variables. This correction was applied to a
subset of the simulated values to provide the basis for a comparison with the output from a complete co-simulation. This affected the simulation output especially by increasing variability in the
vertical direction, probably because the range was comparable with the vertical size of the simulated grid. However, horizontal variograms of the simulated values adequately reproduce the variograms
of the conditioning data for most variables. The correlation correction of the simulated variables thus improves the simulation output for relatively highly correlated variables. The great advantages
of this method are its simplicity and rapid computation.
Finally a complete co-simulation was performed by using SGCOS. The results are similar to those obtained by correcting the correlation between univariate simulations. In fact, all direct variograms
of the original data are reproduced and the cross-variograms between simulated values are adequate reflections of those of the original data.
Finally, this study can be taken as the basis for a complete risk assessment for further complete remediation projects, in parallel with a numerical model of contaminant transport in groundwater.
Figure 14. G-T curves for five different SGCOS of: (a) Cr; (b) Co; (c) As; (d) Pb.
The main issues in this project were related to the nature of the data set. The boreholes are on an irregular sampling grid and there are very few samples in each borehole. Moreover, the lengths of
the boreholes vary from 5 m to 20 m, the lengths of cores vary from 0.1m to 1.0 m) and sampling within a borehole is often discontinuous. The usual approach is to recomposite the samples to
approximately the same length but the discontinuous nature of much of the sampling made this difficult.
The systematic nature of this discontinuous, smallscale sampling towards the bottom of the wells at, more or less, the same depth, indicates a logical reason for sampling in this way. In most
applications the choice of a significantly smaller sample size indicates a significant increase in the variability of the variable being measured, which in turn implies an increase in the mean value
of the variable (a proportional effect is present). This interpretation is borne out by the nickel grades for which there is a significant increase in the mean grade of the small (10 - 20 cm) samples
collected at depths below 7.4 m. The mean grades of all other variables measured in these small-scale samples below 7.4 m are, however, lower than the mean grades of samples above 7.4 m. There are at
least two possible reasons that could be advanced for the small scale sampling. The simpler, operational, reason is budgetary restrictions. This seems unlikely because the holes were sampled at
different times and the small-scale samples are always below the 7.4 m deep. The more complex, and more interesting, reason relates directly to the variable (s) and the geology of the subsurface.
There may be a change in structure, or simply a change in porosity and/or permeability at, or around, 7.4 m that causes some of the pollutants (in particular, nickel) to accumulate in small
intervals. Alternatively, there may be a natural nickel anomaly below a depth of 7.4 m. Whilst the reasons for this small-scale, discontinuous sampling are, at present, unknown, the nickel grades of
the samples are significant and must be included in the study.
Finally, it would be advisable to perform further sampling based on the simulation results obtained. The sparse data, especially in the vertical direction, affected the results of the simulations, in
particular for the correlation correction simulations. More continuous and denser sampling undertaken in a few new boreholes could significantly improve the simulation results.
6. Acknowledgements
Enrico Guastaldi is indebted to Professors P. A. Dowd and C. Xu now at University of Adealaide, for admitting him to the MSc course at the Department of Mining and Mineral Engineering, University of
Leeds in 2003-2004, and for their valuable explanations and suggestions about both geostatistics theory and this project.
1. P. A. Dowd, “Risk in Mineral Projects: Analysis, Perception and Management” Transactions of the Institution of Mining and Metallurgy (Section A: Min. industry), Vol. 106, 1997, pp. A9-A18.
2. T. M. Burgess and R. Webster, “Optimal Interpolation and Isarithmic Mapping of Soil Properties. I. The Variogram and Punctual Kriging,” Journal of Soil Sciences, Vol. 31, No. 2, 1980a, pp.
315-331. doi:10.1111/j.1365-2389.1980.tb02084.x
3. T. M. Burgess and R. Webster, “Optimal Interpolation and Isarithmic Mapping of Soil Properties. II. Block Kriging,” Journal of Soil Sciences, Vol. 31, No. 2, 1980b, pp. 333- 341. doi:10.1111/
4. T. M. Burgess and R. Webster, “Optimal Interpolation and Isarithmic Mapping of Soil Properties. III. Changing Drift and Universal Kriging,” Journal of Soil Sciences, Vol. 31, No. 3, 1980c, pp.
5. T. M. Burgess, R. Webster and A. B. McBratney, “Optimal Interpolation and Isarithmic Mapping of Soil Properties. VI. Sampling Strategy,” Journal of Soil Sciences, Vol. 32, No. 4, 1981, pp.
6. P. Goovaerts, “Geostatistical Tools for Characterizeing the Spatial Variability of Microbiological and PhysicoChemical Soil Properties,” Journal of Biological Chemistry, Vol, 27, No. 4, 1998, pp.
315-334. doi:10.1007/s003740050439
7. G. B. M. Heuvelink, “Error Propagation in Environmental Modeling with GIS,” Taylor and Francis, London, 1998.
8. P. Goovaerts, “Geostatistical Modelling of Uncertainty in Soil Science,” Geoderma, Vol, 103, No. 1-2, 2001, pp. 3- 26. doi:10.1016/S0016-7061(01)00067-2
9. P. Goovaerts, G. Avruskin, J. Meliker, M. Slotnick, G. Jacquez and J. Nriagu, “Modeling Uncertainty about Pollutant Concentration and Human Exposure using Geostatistics and a Space-Time
Information System: Application to Arsenic in Groundwater of Southeast Michigan,” Proceedings of the 6th International Symposium on Spatial Accuracy Assessment in Natural Resources and
Environmental Sciences, Portland, Maine, 28 June-1 July 2004.
10. P. A. Dowd and C. Xu, “GeostatWinTM—User’s Manual,” University of Leeds, Leeds, 2004.
11. Gc. Bortolami, G. Crema, R. Malaroda, F. Petrucci, R. Sacchi, C. Sturani and S. Venzo, “Carta Geologica d’Italia, Foglio 56,” 2nd Edition, Servizio Geologico Italiano (Italian Geological Survey),
Roma, Torino, 1969.
12. G. Braga, E. Carabelli, A. Cerro, A. Colombetti, S. D’ Offizi, F. Francavilla, G. Gasperi, M. Pellegrini, M. Zauli and G. M. Zuppi, “Indagini Idrogeologiche Nella Pianura Padana: Le aree del
Piemonte (P01-P02) e della Lombardia (Viadana e San Benedetto Po),” ENEL, Torino, 1988.
13. C. W. Fetter, “Applied Hydrology,” 3rd Edition, Prentice Hall, Upper Saddle River, New Jersey, 1994.
14. M.A.T.T. Ministero dell’Ambiente e Tutela del Territorio, Repubblica Italiana, “Environmental Minister Law DM471: Rules on Criteria, Procedures and Way for Environmental Remediation of
Contaminated Sites,” 5th February 1997, Ordinary Supplement to Official Gazette of Republic of Italy, Rome, No. 293, 15 December 1999.
15. F. Owen and R. Jones, “Statistics,” Pitman Publishing, London, 1994, p. 529.
16. P. A. Dowd, “MINE5280: Non-Linear Geostatistics. MSc in Mineral Resource and Environmental Geostatistics,” University of Leeds, Leeds. 1996, p. 170.
17. P. Goovaerts, “Geostatistics for Natural Resources Evaluation. Applied Geostatistics Series,” Oxford University Press, Oxford, New York, Vol. 14, 1997, p. 483.
18. G. Matheron, “Les Variables Régionalisées et leur estimation: Une Application de la Théorie des Fonctions Aléatoires aux Sciences de la Nature,” Masson, Paris, 1965.
19. A. G. Journel and C. J. Huijbregts, “Mining Geostatistics,” Academic Press Inc., London, 1978, p. 600.,
20. J. P. Chiles and P. Delfiner, “Geostatistics: Modeling Spatial Uncertainty,” Wiley, New York, Chichester, 1999.
21. H. Wackernagel, “Multivariate Geostatistics: An Introduction with Applications,” Springer, Berlin, 2003.
22. R. Webster and M. A. Oliver, “Geostatistics for EnvironMental Scientists (Statistics in Practice),” John Wiley & Sons, New York, 2001.
23. C. Lantuejoul, “Geostatistical Simulation: Models and Algorithms,” Springer Verlag, Berlin, 2002, p. 256.
24. P. J. Ravenscroft, “Conditional Simulation for Mining: Practical Implementation in an Industrial Environment,” In: M. Armstrong and P. A. Dowd, Eds., Geostatistical Simulations, Kluwer Academic
Publishers, Dordrecht, 1994, pp. 79-87.
25. A. G. Journel and F. Alabert, “Non-Gaussian Data Expansion in the Earth Sciences,” Terra Nova, Vol. 1, No. 2, 1989, pp. 123-134. doi:10.1111/j.1365-3121.1989.tb00344.x
26. C. J. Bleines, F. Deraisme, N. Geffory, S. Jeannee, F. Perseval, D. Rambert, O. Renard, Torres and Y. Touffait, “Isatis Software Manual,” Geovariances & Ecole des Mines De Paris, Paris, 2004.
27. R. Dimitrakopoulos and M. B. Fonseca, “Assessing Risk in Grade-Tonnage Curves in a Complex Copper Deposit, Northern Brazil, Based on an Efficient Joint Simulation of Multiple Correlated
Variables,” Application of Computers and Operations Research in the Minerals Industries, South African Institute of Mining and Metallurgy, 2003.
28. I. Clark and W. V. Harper, “Practical Geostatistics,” Ecosse North America Llc., Columbus, Ohio, Vol. 1, 2000, p. 342. | {"url":"https://file.scirp.org/Html/1-9401355_6991.htm","timestamp":"2024-11-06T15:26:09Z","content_type":"application/xhtml+xml","content_length":"106806","record_id":"<urn:uuid:aeb1ab88-a717-4f54-9de1-c107662ac0a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00334.warc.gz"} |
Minecraft: Algebra Architecture
Explore Math Models About Arithmetic Patterns To Build Architecture Designs!
In this lesson, students will begin to identify arithmetic patterns, including patterns in the addition table or multiplication table, and explain them using properties of operations.
• Identify arithmetic patterns, including patterns in the addition table or multiplication table, and explain them using properties of operations.
• Observe that 4 times a number is always even and explain why 4 times a number can be decomposed into two equal addends.
Curriculum Connections Summary
• Ontario - Mathematics
• Quebec - Mathematics
• New Brunswick - Mathematics
• Nova Scotia - Mathematics
• Alberta - Mathematics
• British Columbia - ADST & Mathematics
• Manitoba - Mathematics
• Prince Edward Island - Mathematics
• Saskatchewan - Mathematics
• Newfoundland & Labrador - Mathematics
• Yukon Territories - Follows B.C.'s Curriculum
• Northwest Territories - Follows Alberta's Curriculum
• Nunavut - Follows Alberta's Curriculum
Find Out More
A game-based learning platform that promotes creativity, collaboration, and problem-solving in an immersive digital environment. | {"url":"https://learn.logicsacademy.com/p/algebra-architecture","timestamp":"2024-11-09T15:27:43Z","content_type":"text/html","content_length":"86585","record_id":"<urn:uuid:f0ef0182-59a3-41a4-a49b-16a4f67130c3>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00402.warc.gz"} |
When Exactly Will the Eclipse Happen? A Multimillenium Tale of Computation—Stephen Wolfram Writings
Preparing for August 21, 2017
On August 21, 2017, there’s going to be a total eclipse of the Sun visible on a line across the US. But when exactly will the eclipse occur at a given location? Being able to predict astronomical
events has historically been one of the great triumphs of exact science. But in 2017, how well can it actually be done?
The answer, I think, is well enough that even though the edge of totality moves at just over 1000 miles per hour it should be possible to predict when it will arrive at a given location to within
perhaps a second. And as a demonstration of this, we’ve created a website to let anyone enter their geo location (or address) and then immediately compute when the eclipse will reach them—as well as
generate many pages of other information.
This essay is also in WIRED »
It’s an Old Business
These days it’s easy to find out when the next solar eclipse will be; indeed built right into the Wolfram Language there’s just a function that tells you (in this form the output is the “time of
greatest eclipse”):
It’s also easy to find out, and plot, where the region of totality will be:
Or to determine that the whole area of totality will be about 16% of the area of the US:
GeoArea[SolarEclipse["TotalPhasePolygon"]]/GeoArea[Entity["Country", "UnitedStates"]]
But computing eclipses is not exactly a new business. In fact, the Antikythera device from 2000 years ago even tried to do it—using 37 metal gears to approximate the motion of the Sun and Moon (yes,
with the Earth at the center). To me there’s something unsettling—and cautionary—about the fact that the Antikythera device stands as such a solitary piece of technology, forgotten but not surpassed
for more than 1600 years.
But right there on the bottom of the device there’s an arm that moves around, and when it points to an Η or Σ marking, it indicates a possible Sun or Moon eclipse. The way of setting dates on the
device is a bit funky (after all, the modern calendar wouldn’t be invented for another 1500 years), but if one takes the simulation on the Wolfram Demonstrations Project (which was calibrated back in
2012 when the Demonstration was created), and turns the crank to set the device for August 21, 2017, here’s what one gets:
And, yes, all those gears move so as to line the Moon indicator up with the Sun—and to make the arm on the bottom point right at an Η—just as it should for a solar eclipse. It’s amazing to see this
computation successfully happen on a device designed 2000 years ago.
Of course the results are a lot more accurate today. Though, strangely, despite all the theoretical science that’s been done, the way we actually compute the position of the Sun and Moon is
conceptually very much like the gears—and effectively epicycles—of the Antikythera device. It’s just that now we have the digital equivalent of hundreds of thousands of gears.
Why Do Eclipses Happen?
A total solar eclipse occurs when the Moon gets in front of the Sun from the point of view of a particular location on the Earth. And it so happens that at this point in the Earth’s history the Moon
can just block the Sun because it has almost exactly the same angular diameter in the sky as the Sun (about 0.5° or 30 arc-minutes).
So when does the Moon get between the Sun and the Earth? Well, basically every time there’s a new moon (i.e. once every lunar month). But we know there isn’t an eclipse every month. So how come?
Graphics[{Style[Disk[{0, 0}, .3/5], Yellow],
Style[Disk[{.8, 0}, .1/5], Gray], Style[Disk[{1, 0}, .15/5], Blue]}]
Well, actually, in the analogous situation of Ganymede and Jupiter, there is an eclipse every time Ganymede goes around Jupiter (which happens to be about once per week). Like the Earth, Jupiter’s
orbit around the Sun lies in a particular plane (the “Plane of the Ecliptic”). And it turns out that Ganymede’s orbit around Jupiter also lies in essentially the same plane. So every time Ganymede
reaches the “new moon” position (or, in official astronomy parlance, when it’s aligned “in syzygy”—pronounced sizz-ee-gee), it’s in the right place to cast its shadow onto Jupiter, and to eclipse the
Sun wherever that shadow lands. (From Jupiter, Ganymede appears about 3 times the size of the Sun.)
But our moon is different. Its orbit doesn’t lie in the plane of the ecliptic. Instead, it’s inclined at about 5°. (How it got that way is unknown, but it’s presumably related to how the Moon was
formed.) But that 5° is what makes eclipses so comparatively rare: they can only happen when there’s a “new moon configuration” (syzygy) right at a time when the Moon’s orbit passes through the Plane
of the Ecliptic.
To show what’s going on, let’s draw an exaggerated version of everything. Here’s the Moon going around the Earth, colored red whenever it’s close to the Plane of the Ecliptic:
Graphics3D[{With[{dt = 0, \[Theta] = 20 Degree},
Table[{With[{p = {Sin[2 Pi (t + dt)/27.3] Cos[\[Theta]],
Cos[2 Pi (t + dt)/27.3] Cos[\[Theta]],
Cos[2 Pi (t + dt)/27.3] Sin[\[Theta]]}}, {Style[
Line[{{0, 0, 0}, p}], Opacity[.1]],
Style[Sphere[p, .05],
Blend[{Red, GrayLevel[.8, .02]},
Sqrt[Abs[Cos[2 Pi t/27.2]]]]]}],
Style[Sphere[{0, 0, 0}, .1], Blue]}, {t, 0, 26}]], EdgeForm[Red],
Style[InfinitePlane[{0, 0, 0}, {{1, 0, 0}, {0, 1, 0}}],
Directive[Red, Opacity[.02]]]}, Lighting -> "Neutral",
Boxed -> False]
Now let’s look at what happens over the course of about a year. We’re showing a dot for where the Moon is each day. And the dot is redder if the Moon is closer to the Plane of the Ecliptic that day.
(Note that if this was drawn to scale, you’d barely be able to see the Moon’s orbit, and it wouldn’t ever seem to go backwards like it does here.)
With[{dt = 1},
Graphics[{Style[Disk[{0, 0}, .1], Darker[Yellow]],
Table[{With[{p = .2 {Sin[2 Pi t/27.3], Cos[2 Pi t/27.3]} + {Sin[
2 Pi t/365.25], Cos[2 Pi t/365.25]}}, {Style[
Line[{{Sin[2 Pi t/365.25], Cos[2 Pi t/365.25]}, p}],
Style[Disk[p, .01],
Blend[{Red, GrayLevel[.8]},
Sqrt[Abs[Cos[2 Pi (t + dt)/27.2]]]]]}],
Style[Disk[{Sin[2 Pi t/365.25], Cos[2 Pi t/365.25]}, .005],
Blue]}, {t, 360}]}]]
Now we can start to see how eclipses work. The basic point is that there’s a solar eclipse whenever the Moon is both positioned between the Earth and the Sun, and it’s in the Plane of the Ecliptic.
In the picture, those two conditions correspond to the Moon being as far as possible towards the center, and as red as possible. So far we’re only showing the position of the (exaggerated) moon once
per day. But to make things clearer, let’s show it four times a day—and now prune out cases where the Moon isn’t at least roughly lined up with the Sun:
With[{dt=1},Graphics[{Style[Disk[{0,0},.1],Darker[Yellow]],Table[{With[{p=.2 {Sin[2 Pi t/27.3],Cos[2 Pi t/27.3]}+{Sin[2 Pi t/365.25],Cos[2 Pi t/365.25]}},If[Norm[p]>.81,{},{Style[Line[{{Sin[2 Pi t/365.25],Cos[2 Pi t/365.25]},p}],Opacity[.3]],Style[Disk[p,.01],Blend[{Red,GrayLevel[.8]},Sqrt[Abs[Cos[2 Pi (t+dt)/27.2]]]]]}]],Style[Disk[{Sin[2 Pi t/365.25],Cos[2 Pi t/365.25]},.005],Blue]},{t,1,360,.25}]}]]
And now we can see that at least in this particular case, there are two points (indicated by arrows) where the Moon is lined up and in the plane of the ecliptic (so shown red)—and these points will
then correspond to solar eclipses.
In different years, the picture will look slightly different, essentially because the Moon is starting at a different place in its orbit at the beginning of the year. Here are schematic pictures for
a few successive years:
Table[With[{dt = 1},
Graphics[{Table[{With[{p = .2 {Sin[2 Pi t/27.3],
Cos[2 Pi t/27.3]} + {Sin[2 Pi t/365.25],
Cos[2 Pi t/365.25]}},
If[Norm[p] > .81, {}, {Style[Line[{{0, 0}, p}],
Blend[{Red, GrayLevel[.8]},
Sqrt[Abs[Cos[2 Pi (t + dt)/27.2]]]]],
Style[Line[{{Sin[2 Pi t/365.25], Cos[2 Pi t/365.25]}, p}],
Style[Disk[p, .01],
Blend[{Red, GrayLevel[.8]},
Sqrt[Abs[Cos[2 Pi (t + dt)/27.2]]]]]}]],
Style[Disk[{Sin[2 Pi t/365.25], Cos[2 Pi t/365.25]}, .005],
Blue]}, {t, 1 + n *365.25, 360 + n*365.25, .25}],
Style[Disk[{0, 0}, .1], Darker[Yellow]]}]], {n, 0, 5}], 3]]
It’s not so easy to see exactly when eclipses occur here—and it’s also not possible to tell which are total eclipses where the Moon is exactly lined up, and which are only partial eclipses. But
there’s at least an indication, for example, that there are “eclipse seasons” in different parts of the year where eclipses happen.
OK, so what does the real data look like? Here’s a plot for 20 years in the past and 20 years in the future, showing the actual days in each year when total and partial solar eclipses occur (the
small dots everywhere indicate new moons):
coord[date_] := {DateValue[date, "Year"],
date - NextDate[DateObject[{DateValue[date, "Year"]}], "Instant"]}
ListPlot[coord /@ SolarEclipse[{Now - Quantity[20, "Years"], Now + Quantity[20, "Years"], All}], AspectRatio -> 1/3, Frame -> True]
The reason for the “drift” between successive years is just that the lunar month (29.53 days) doesn’t line up with the year, so the Moon doesn’t go through a whole number of orbits in the course of a
year, with the result that at the beginning of a new year, the Moon is in a different phase. But as the picture makes clear, there’s quite a lot of regularity in the general times at which eclipses
occur—and for example there are usually 2 eclipses in a given year—though there can be more (and in 0.2% of years there can be as many as 5, as there last were in 1935 ).
To see more detail about eclipses, let’s plot the time differences (in fractional years) between all successive solar eclipses for 100 years in the past and 100 years in the future:
ListLinePlot[Differences[SolarEclipse[{Now - Quantity[100, "Years"], Now + Quantity[100, "Years"], All}]]/Quantity[1, "Years"], Mesh -> All, PlotRange -> All, Frame -> True, AspectRatio -> 1/3, FrameTicks -> {None, Automatic}]
And now let’s plot the same time differences, but just for total solar eclipses:
ListLinePlot[Differences[SolarEclipse[{Now - Quantity[100, "Years"], Now + Quantity[100, "Years"], All}, EclipseType -> "Total"]]/Quantity[1, "Years"], Mesh -> All, PlotRange -> All, Frame -> True, AspectRatio -> 1/3, FrameTicks -> {None, Automatic}]
There’s obviously a fair amount of overall regularity here, but there are also lots of little fine structure and irregularities. And being able to correctly predict all these details has basically
taken science the better part of a few thousand years.
Ancient History
It’s hard not to notice an eclipse, and presumably even from the earliest times people did. But were eclipses just reflections—or omens—associated with random goings-on in the heavens, perhaps in
some kind of soap opera among the gods? Or were they things that could somehow be predicted?
A few thousand years ago, it wouldn’t have been clear what people like astrologers could conceivably predict. When will the Moon be at a certain place in the sky? Will it rain tomorrow? What will the
price of barley be? Who will win a battle? Even now, we’re not sure how predictable all of these are. But the one clear case where prediction and exact science have triumphed is astronomy.
At least as far as the Western tradition is concerned, it all seems to have started in ancient Babylon—where for many hundreds of years, careful observations were made, and, in keeping with the ways
of that civilization, detailed records were kept. And even today we still have thousands of daily official diary entries written in what look like tiny chicken scratches preserved on little clay
tablets (particularly from Ninevah, which happens now to be in Mosul, Iraq). “Night of the 14th: Cold north wind. Moon was in front of α Leonis. From 15th to 20th river rose 1/2 cubit. Barley was 1
kur 5 siit. 25th, last part of night, moon was 1 cubit 8 fingers behind ε Leonis. 28th, 74° after sunrise, solar eclipse…”
ephemeris—a systematic table that said where a particular heavenly body such as the Moon was expected to be at any particular time.
(Needless to say, reconstructing Babylonian astronomy is a complicated exercise in decoding what’s by now basically an alien culture. A key figure in this effort was a certain Otto Neugebauer, who
happened to work down the hall from me at the Institute for Advanced Study in Princeton in the early 1980s. I would see him almost every day—a quiet white-haired chap, with a twinkle in his eye—and
just sometimes I’d glimpse his huge filing system of index cards which I now realize was at the center of understanding Babylonian astronomy.)
One thing the Babylonians did was to measure surprisingly accurately the repetition period for the phases of the Moon—the so-called synodic month (or “lunation period”) of about 29.53 days. And they
noticed that 235 synodic months was very close to 19 years—so that about every 19 years dates and phases of the Moon repeat their alignment, forming a so-called Metonic cycle (named after Meton of
Athens, who described it in 432 BC).
It probably helps that the random constellations in the sky form a good pattern against which to measure the precise position of the Moon (it reminds me of the modern fashion of wearing fractals to
make motion capture for movies easier). But the Babylonians noticed all sorts of details of the motion of the Moon. They knew about its “anomaly”: its periodic speeding up and slowing down in the sky
(now known to be a consequence of its slightly elliptical orbit). And they measured the average period of this—the so-called anomalistic month—to be about 27.55 days. They also noticed that the Moon
went above and below the Plane of the Ecliptic (now known to be because of the inclination of its orbit)—with an average period (the so-called draconic month) that they measured as about 27.21 days.
And by 400 BC they’d noticed that every so-called saros of about 18 years 11 days all these different periods essentially line up (223 synodic months, 239 anomalistic months and 242 draconic months)
—with the result that the Moon ends up at about the same position relative to the Sun. And this means that if there was an eclipse at one saros, then one can make the prediction that there’s going to
be an eclipse at the next saros too.
When one’s absolutely precise about it, there are all sorts of effects that prevent precise repetition at each saros. But over timescales of more than 1300 years, there are in fact still strings of
eclipses separated from each other by one saros. (Over the course of such a saros series, the locations of the eclipses effectively scan across the Earth; the upcoming eclipse is number 22 in a
series of 77 that began in 1639 AD with an eclipse near the North Pole and will end in 3009 AD with an eclipse near the South Pole.)
Any given moment in time will be in the middle of quite a few saros series (right now it’s 40)—and successive eclipses will always come from different series. But knowing about the saros cycle is a
great first step in predicting eclipses—and it’s for example what the Antikythera device uses. In a sense, it’s a quintessential piece of science: take many observations, then synthesize a theory
from them, or a least a scheme for computation.
It’s not clear what the Babylonians thought about abstract, formal systems. But the Greeks were definitely into them. And by 300 BC Euclid had defined his abstract system for geometry. So when
someone like Ptolemy did astronomy, they did it a bit like Euclid—effectively taking things like the saros cycle as axioms, and then proving from them often surprisingly elaborate geometrical
theorems, such as that there must be at least two solar eclipses in a given year.
Ptolemy’s Almagest from around 150 AD is an impressive piece of work, containing among many other things some quite elaborate procedures—and explicit tables—for predicting eclipses. (Yes, even in the
later printed version, numbers are still represented confusingly by letters, as they always were in ancient Greek.)
In Ptolemy’s astronomy, Earth was assumed to be at the center of everything. But in modern terms that just meant he was choosing to use a different coordinate system—which didn’t affect most of the
things he wanted to do, like working out the geometry of eclipses. And unlike the mainline Greek philosophers he wasn’t so much trying to make a fundamental theory of the world, but just wanted
whatever epicycles and so on he needed to explain what he observed.
The Dawn of Modern Science
For more than a thousand years Ptolemy’s theory of the Moon defined the state of the art. In the 1300s Ibn al-Shatir revised Ptolemy’s models, achieving somewhat better accuracy. In 1472
Regiomontanus (Johannes Müller), systematizer of trigonometry, published more complete tables as part of his launch of what was essentially the first-ever scientific publishing company. But even in
1543 when Nicolaus Copernicus introduced his Sun-centered model of the solar system, the results he got were basically the same as Ptolemy’s, even though his underlying description of what was going
on was quite different.
It’s said that Tycho Brahe got interested in astronomy in 1560 at age 13 when he saw a solar eclipse that had been predicted—and over the next several decades his careful observations uncovered
several effects in the motion of the Moon (such as speeding up just before a full moon)—that eventually resulted in perhaps a factor 5 improvement in the prediction of its position. To Tycho eclipses
were key tests, and he measured them carefully, and worked hard to be able to predict their timing more accurately than to within a few hours. (He himself never saw a total solar eclipse, only
partial ones.)
Armed with Tycho’s observations, Johannes Kepler developed his description of orbits as ellipses—introducing concepts like inclination and eccentric anomaly—and in 1627 finally produced his
Rudolphine Tables, which got right a lot of things that had been got wrong before, and included all sorts of detailed tables of lunar positions, as well as vastly better predictions for eclipses.
Using Kepler’s Rudolphine Tables (and a couple of pages of calculations)—the first known actual map of a solar eclipse was published in 1654. And while there are some charming inaccuracies in overall
geography, the geometry of the eclipse isn’t too bad.
Whether it was Ptolemy’s epicycles or Kepler’s ellipses, there were plenty of calculations to do in determining the motions of heavenly bodies (and indeed the first known mechanical
calculator—excepting the Antikythera device—was developed by a friend of Kepler’s, presumably for the purpose). But there wasn’t really a coherent underlying theory; it was more a matter of
describing effects in ways that could be used to make predictions.
So it was a big step forward in 1687 when Isaac Newton published his Principia, and claimed that with his laws for motion and gravity it should be possible—essentially from first principles—to
calculate everything about the motion of the Moon. (Charmingly, in his “Theory of the World” section he simply asserts as his Proposition XXII “That all the motions of the Moon… follow from the
principles which we have laid down.”)
Newton was proud of the fact that he could explain all sorts of known effects on the basis of his new theory. But when it came to actually calculating the detailed motion of the Moon he had a
frustrating time. And even after several years he still couldn’t get the right answer—in later editions of the Principia adding the admission that actually “The apse of the Moon is about twice as
swift” (i.e. his answer was wrong by a factor of 2).
Still, in 1702 Newton was happy enough with his results that he allowed them to be published, in the form of a 20-page booklet on the “Theory of the Moon”, which proclaimed that “By this Theory, what
by all Astronomers was thought most difficult and almost impossible to be done, the Excellent Mr. Newton hath now effected, viz. to determine the Moon’s Place even in her Quadratures, and all other
Parts of her Orbit, besides the Syzygys, so accurately by Calculation, that the Difference between that and her true Place in the Heavens shall scarce be two Minutes…”
Newton didn’t explain his methods (and actually it’s still not clear exactly what he did, or how mathematically rigorous it was or wasn’t). But his booklet effectively gave a step-by-step algorithm
to compute the position of the Moon. He didn’t claim it worked “at the syzygys” (i.e. when the Sun, Moon and Earth are lined up for an eclipse)—though his advertised error of two arc-minutes was
still much smaller than the angular size of the Moon in the sky.
But it wasn’t eclipses that were the focus then; it was a very practical problem of the day: knowing the location of a ship out in the open ocean. It’s possible to determine what latitude you’re at
just by measuring how high the Sun gets in the sky. But to determine longitude you have to correct for the rotation of the Earth—and to do that you have to accurately keep track of time. But back in
Newton’s day, the clocks that existed simply weren’t accurate enough, especially when they were being tossed around on a ship.
But particularly after various naval accidents, the problem of longitude was deemed important enough that the British government in 1714 established a “Board of Longitude” to offer prizes to help get
it solved. One early suggestion was to use the regularity of the moons of Jupiter discovered by Galileo as a way to tell time. But it seemed that a simpler solution (not requiring a powerful
telescope) might just be to measure the position of our moon, say relative to certain fixed stars—and then to back-compute the time from this.
But to do this one had to have an accurate way to predict the motion of the Moon—which is what Newton was trying to provide. In reality, though, it took until the 1760s before tables were produced
that were accurate enough to be able to determine time to within a minute (and thus distance to within 15 miles or so). And it so happens that right around the same time a marine chronometer was
invented that was directly able to keep good time.
The Three-Body Problem
One of Newton’s great achievements in the Principia was to solve the so-called two-body problem, and to show that with an inverse square law of gravity the orbit of one body around another must
always be what Kepler had said: an ellipse.
In a first approximation, one can think of the Moon as just orbiting the Earth in a simple elliptical orbit. But what makes everything difficult is that that’s just an approximation, because in
reality there’s also a gravitational pull on the Moon from the Sun. And because of this, the Moon’s orbit is no longer a simple fixed ellipse—and in fact it ends up being much more complicated. There
are a few definite effects one can describe and reason about. The ellipse gets stretched when the Earth is closer to the Sun in its own orbit. The orientation of the ellipse precesses like a top as a
result of the influence of the Sun. But there’s no way in the end to work out the orbit by pure reasoning—so there’s no choice but to go into the mathematics and start solving the equations of the
three-body problem.
In many ways this represented a new situation for science. In the past, one hadn’t ever been able to go far without having to figure out new laws of nature. But here the underlying laws were
supposedly known, courtesy of Newton. Yet even given these laws, there was difficult mathematics involved in working out the behavior they implied.
Over the course of the 1700s and 1800s the effort to try to solve the three-body problem and determine the orbit of the Moon was at the center of mathematical physics—and attracted a veritable who’s
who of mathematicians and physicists.
An early entrant was Leonhard Euler, who developed methods based on trigonometric series (including much of our current notation for such things), and whose works contain many immediately
recognizable formulas:
In the mid-1740s there was a brief flap—also involving Euler’s “competitors” Clairaut and d’Alembert—about the possibility that the inverse-square law for gravity might be wrong. But the problem
turned out to be with the calculations, and by 1748 Euler was using sums of about 20 trigonometric terms and proudly proclaiming that the tables he’d produced for the three-body problem had predicted
the time of a total solar eclipse to within minutes. (Actually, he had said there’d be 5 minutes of totality, whereas in reality there was only 1—but he blamed this error on incorrect coordinates
he’d been given for Berlin.)
Mathematical physics moved rapidly over the next few decades, with all sorts of now-famous methods being developed, notably by people like Lagrange. And by the 1770s, for example, Lagrange’s work was
looking just like it could have come from a modern calculus book (or from a Wolfram|Alpha step-by-step solution):
Particularly in the hands of Laplace there was increasingly obvious success in deriving the observed phenomena of what he called “celestial mechanics” from mathematics—and in establishing the idea
that mathematics alone could indeed generate new results in science.
At a practical level, measurements of things like the position of the Moon had always been much more accurate than calculations. But now they were becoming more comparable—driving advances in both.
Meanwhile, there was increasing systematization in the production of ephemeris tables. And in 1767 the annual publication began of what was for many years the standard: the British Nautical Almanac.
The almanac quoted the position of the Moon to the arc-second, and systematically achieved at least arc-minute accuracy. The primary use of the almanac was for navigation (and it was what started the
convention of using Greenwich as the “prime meridian” for measuring time). But right at the front of each year’s edition were the predicted times of the eclipses for that year—in 1767 just two solar
The Math Gets More Serious
At a mathematical level, the three-body problem is about solving a system of three ordinary differential equations that give the positions of the three bodies as a function of time. If the positions
are represented in standard 3D Cartesian coordinates
r[i]={x[i], y[i], z[i]}, the equations can be stated in the form:
The {x,y,z} coordinates here aren’t, however, what traditionally show up in astronomy. For example, in describing the position of the Moon one might use longitude and latitude on a sphere around the
Earth. Or, given that one knows the Moon has a roughly elliptical orbit, one might instead choose to describe its motions by variables that are based on deviations from such an orbit. In principle
it’s just a matter of algebraic manipulation to restate the equations with any given choice of variables. But in practice what comes out is often long and complex—and can lead to formulas that fill
many pages.
But, OK, so what are the best kind of variables to use for the three-body problem? Maybe they should involve relative positions of pairs of bodies. Or relative angles. Or maybe positions in various
kinds of rotating coordinate systems. Or maybe quantities that would be constant in a pure two-body problem. Over the course of the 1700s and 1800s many treatises were written exploring different
But in essentially all cases the ultimate approach to the three-body problem was the same. Set up the problem with the chosen variables. Identify parameters that, if set to zero, would make the
problem collapse to some easy-to-solve form. Then do a series expansion in powers of these parameters, keeping just some number of terms.
By the 1860s Charles Delaunay had spent 20 years developing the most extensive theory of the Moon in this way. He’d identified five parameters with respect to which to do his series expansions
(eccentricities, inclinations, and ratios of orbit sizes)—and in the end he generated about 1800 pages like this (yes, he really needed Mathematica!):
But the sad fact was that despite all this effort, he didn’t get terribly good answers. And eventually it became clear why. The basic problem was that Delaunay wanted to represent his results in
terms of functions like sin and cos. But in his computations, he often wanted to do series expansions with respect to the frequencies of those functions. Here’s a minimal case:
Series[Sin[(ω + δ)*t], {δ, 0, 3}]
And here’s the problem. Take a look even at the second term. Yes, the δ parameter may be small. But how about the t parameter, standing for time? If you don’t want to make predictions very far out,
that’ll stay small. But what if you want to figure out what will happen further in the future?
Well eventually that term will get big. And higher-order terms will get even bigger. But unless the Moon is going to escape its orbit or something, the final mathematical expressions that represent
its position can’t have values that are too big. So in these expressions the so-called secular terms that increase with t must somehow cancel out.
But the problem is that at any given order in the series expansion, there’s no guarantee that will happen in a numerically useful way. And in Delaunay’s case—even though with immense effort he often
went to 7th order or beyond—it didn’t.
One nice feature of Delaunay’s computation was that it was in a sense entirely algebraic: everything was done symbolically, and only at the very end were actual numerical values of parameters
substituted in.
But even before Delaunay, Peter Hansen had taken a different approach—substituting numbers as soon as he could, and dropping terms based on their numerical size rather than their symbolic form. His
presentations look less pure (notice things like all those t−1800, where t is the time in years), and it’s more difficult to tell what’s going on. But as a practical matter, his results were much
better, and in fact were used for many national almanacs from about 1862 to 1922, achieving errors as small as 1 or 2 arc-seconds at least over periods of a decade or so. (Over longer periods, the
errors could rapidly increase because of the lack of terms that had been dropped as a result of what amounted to numerical accidents.)
Both Delaunay and Hansen tried to represent orbits as series of powers and trigonometric functions (so-called Poisson series). But in the 1870s, George Hill in the US Nautical Almanac Office proposed
instead using as a basis numerically computed functions that came from solving an equation for two-body motion with a periodic driving force of roughly the kind the Sun exerts on the Moon’s orbit. A
large-scale effort was mounted, and starting in 1892 Ernest W. Brown (who had moved to the US, but had been a student of George Darwin, Charles Darwin’s physicist son) took charge of the project and
in 1918 produced what would stand for many years as the definitive “Tables of the Motion of the Moon”.
Brown’s tables consist of hundreds of pages like this—ultimately representing the position of the Moon as a combination of about 1400 terms with very precise coefficients:
He says right at the beginning that the tables aren’t particularly intended for unique events like eclipses, but then goes ahead and does a “worked example” of computing an eclipse from 381 BC,
reported by Ptolemy:
It was an impressive indication of how far things had come. But ironically enough the final presentation of Brown’s tables had the same sum-of-trigonometric-functions form that one would get from
having lots of epicycles. At some level it’s not surprising, because any function can ultimately be represented by epicycles, just as it can be represented by a Fourier or other series. But it’s a
strange quirk of history that such similar forms were used.
Can the Three-Body Problem Be Solved?
It’s all well and good that one can find approximations to the three-body problem, but what about just finding an outright solution—like as a mathematical formula? Even in the 1700s, there’d been
some specific solutions found—like Euler’s collinear configuration, and Lagrange’s equilateral triangle. But a century later, no further solutions had been found—and finding a complete solution to
the three-body problem was beginning to seem as hopeless as trisecting an angle, solving the quintic, or making a perpetual motion machine. (That sentiment was reflected for example in a letter
Charles Babbage wrote Ada Lovelace in 1843 mentioning the “horrible problem [of] the three bodies”—even though this letter was later misinterpreted by Ada’s biographers to be about a romantic
triangle, not the three-body problem of celestial mechanics.)
In contrast to the three-body problem, what seemed to make the two-body problem tractable was that its solutions could be completely characterized by “constants of the motion”—quantities that stay
constant with time (in this case notably the direction of the axis of the ellipse). So for many years one of the big goals with the three-body problem was to find constants of the motion.
In 1887, though, Heinrich Bruns showed that there couldn’t be any such constants of the motion, at least expressible as algebraic functions of the standard {x,y,z} position and velocity coordinates
of the three bodies. Then in the mid-1890s Henri Poincaré showed that actually there couldn’t be any constants of the motion that were expressible as any analytic functions of the positions,
velocities and mass ratios.
One reason that was particularly disappointing at the time was that it had been hoped that somehow constants of the motion would be found in n-body problems that would lead to a mathematical proof of
the long-term stability of the solar system. And as part of his work, Poincaré also saw something else: that at least in particular cases of the three-body problem, there was arbitrarily sensitive
dependence on initial conditions—implying that even tiny errors in measurement could be amplified to arbitrarily large changes in predicted behavior (the classic “chaos theory” phenomenon).
But having discovered that particular solutions to the three-body problem could have this kind of instability, Poincaré took a different approach that would actually be characteristic of much of pure
mathematics going forward: he decided to look not at individual solutions, but at the space of all possible solutions. And needless to say, he found that for the three-body problem, this was very
complicated—though in his efforts to analyze it he invented the field of topology.
Poincaré’s work all but ended efforts to find complete solutions to the three-body problem. It also seemed to some to explain why the series expansions of Delaunay and others hadn’t worked out—though
in 1912 Karl Sundman did show that at least in principle the three-body problem could be solved in terms of an infinite series, albeit one that converges outrageously slowly.
But what does it mean to say that there can’t be a solution to the three-body problem? Galois had shown that there couldn’t be a solution to the generic quintic equation, at least in terms of
radicals. But actually it’s still perfectly possible to express the solution in terms of elliptic or hypergeometric functions. So why can’t there be some more sophisticated class of functions that
can be used to just “solve the three-body problem”?
Here are some pictures of what can actually happen in the three-body problem, with various initial conditions:
eqns = {Subscript[m, 1]*
Derivative[2][Subscript[r, 1]][
t] == -((Subscript[m, 1]*
Subscript[m, 2]*(Subscript[r, 1][t] - Subscript[r, 2][t]))/
Norm[Subscript[r, 1][t] - Subscript[r, 2][t]]^3) - (Subscript[
m, 1]*Subscript[m,
3]*(Subscript[r, 1][t] - Subscript[r, 3][t]))/
Norm[Subscript[r, 1][t] - Subscript[r, 3][t]]^3,
Subscript[m, 2]*
Derivative[2][Subscript[r, 2]][
t] == -((Subscript[m, 1]*
Subscript[m, 2]*(Subscript[r, 2][t] - Subscript[r, 1][t]))/
Norm[Subscript[r, 2][t] - Subscript[r, 1][t]]^3) - (Subscript[
m, 2]*Subscript[m,
3]*(Subscript[r, 2][t] - Subscript[r, 3][t]))/
Norm[Subscript[r, 2][t] - Subscript[r, 3][t]]^3,
Subscript[m, 3]*
Derivative[2][Subscript[r, 3]][
t] == -((Subscript[m, 1]*
Subscript[m, 3]*(Subscript[r, 3][t] - Subscript[r, 1][t]))/
Norm[Subscript[r, 3][t] - Subscript[r, 1][t]]^3) - (Subscript[
m, 2]*Subscript[m,
3]*(Subscript[r, 3][t] - Subscript[r, 2][t]))/
Norm[Subscript[r, 3][t] - Subscript[r, 2][t]]^3};
(SeedRandom[#]; {Subscript[m, 1], Subscript[m, 2], Subscript[m, 3]} =
RandomReal[{0, 1}, 3];
inits = Table[{Subscript[r, i][0] == RandomReal[{-1, 1}, 3],
Subscript[r, i]'[0] == RandomReal[{-1, 1}, 3]}, {i, 3}];
sols = NDSolve[{eqns, inits}, {Subscript[r, 1], Subscript[r, 2],
Subscript[r, 3]}, {t, 0, 100}];
ParametricPlot3D[{Subscript[r, 1][t], Subscript[r, 2][t],
Subscript[r, 3][t]} /. sols, {t, 0, 100},
Ticks -> None]) & /@ {776, 5742, 6711, 2300, 5281, 9225}
And looking at these immediately gives some indication of why it’s not easy to just “solve the three-body problem”. Yes, there are cases where what happens is fairly simple. But there are also cases
where it’s not, and where the trajectories of the three bodies continue to be complicated and tangled for a long time.
So what’s fundamentally going on here? I don’t think traditional mathematics is the place to look. But I think what we’re seeing is actually an example of a general phenomenon I call computational
irreducibility that I discovered in the 1980s in studying the computational universe of possible programs.
Many programs, like many instances of the three-body problem, behave in quite simple ways. But if you just start looking at all possible simple programs, it doesn’t take long before you start seeing
behavior like this:
ArrayPlot[ CellularAutomaton[{#, 3, 1}, {{2}, 0}, 100],
ImageSize -> {Automatic, 100}] & /@ {5803305107286, 2119737824118,
5802718895085, 4023376322994, 6252890585925}
How can one tell what’s going to happen? Well, one can just keep explicitly running each program and seeing what it does. But the question is: is there some systematic way to jump ahead, and to
predict what will happen without tracing through all the steps?
The answer is that in general there isn’t. And what I call the Principle of Computational Equivalence suggests that pretty much whenever one sees complex behavior, there won’t be.
Here’s the way to think about this. The system one’s studying is effectively doing a computation to work out what its behavior will be. So to jump ahead we’d in a sense have to do a more
sophisticated computation. But what the Principle of Computational Equivalence says is that actually we can’t—and that whether we’re using our brains or our mathematics or a Turing machine or
anything else, we’re always stuck with computations of the same sophistication.
So what about the three-body problem? Well, I strongly suspect that it’s an example of computational irreducibility: that in effect the computations it’s doing are as sophisticated as any
computations that we can do, so there’s no way we can ever expect to systematically jump ahead and solve the problem. (We also can’t expect to just define some new finite class of functions that can
just be evaluated to give the solution.)
I’m hoping that one day someone will rigorously prove this. There’s some technical difficulty, because the three-body problem is usually formulated in terms of real numbers that immediately have an
infinite number of digits—but to compare with ordinary computation one has to require finite processes to set up initial conditions. (Ultimately one wants to show for example that there’s a
“compiler” that can go from any program, say for a Turing machine, and can generate instructions to set up initial conditions for a three-body problem so that the evolution of the three-body problem
will give the same results as running that program—implying that the three-body problem is capable of universal computation.)
I have to say that I consider Newton in a sense very lucky. It could have been that it wouldn’t have been possible to work out anything interesting from his theory without encountering the kind of
difficulties he had with the motion of the Moon—because one would always be running into computational irreducibility. But in fact, there was enough computational reducibility and enough that could
be computed easily that one could see that the theory was useful in predicting features of the world (and not getting wrong answers, like with the apse of the Moon)—even if there were some parts that
might take two centuries to work out, or never be possible at all.
Newton himself was certainly aware of the potential issue, saying that at least if one was dealing with gravitational interactions between many planets then “to define these motions by exact laws
admitting of easy calculation exceeds, if I am not mistaken, the force of any human mind”. And even today it’s extremely difficult to know what the long-term evolution of the solar system will be.
It’s not particularly that there’s sensitive dependence on initial conditions: we actually have measurements that should be precise enough to determine what will happen for a long time. The problem
is that we just have to do the computation—a bit like computing the digits of π—to work out the behavior of the n-body problem that is our solar system.
Existing simulations show that for perhaps a few tens of millions of years, nothing too dramatic can happen. But after that we don’t know. Planets could change their order. Maybe they could even
collide, or be ejected from the solar system. Computational irreducibility implies that at least after an infinite time it’s actually formally undecidable (in the sense of Gödel’s Theorem or the
Halting Problem) what can happen.
One of my children, when they were very young, asked me whether when dinosaurs existed the Earth could have had two moons. For years when I ran into celestial mechanics experts I would ask them that
question—and it was notable how difficult they found it. Most now say that at least at the time of the dinosaurs we couldn’t have had an extra moon—though a billion years earlier it’s not clear.
We used to only have one system of planets to study. And the fact that there were (then) 9 of them used to be a classic philosopher’s example of a truth about the world that just happens to be the
way it is, and isn’t “necessarily true” (like 2+2=4). But now of course we know about lots of exoplanets. And it’s beginning to look as if there might be a theory for things like how many planets a
solar system is likely to have.
At some level there’s presumably a process like natural selection: some configurations of planets aren’t “fit enough” to be stable—and only those that are survive. In biology it’s traditionally been
assumed that natural selection and adaptation is somehow what’s led to the complexity we see. But actually I suspect much of it is instead just a reflection of what generally happens in the
computational universe—both in biology and in celestial mechanics. Now in celestial mechanics, we haven’t yet seen in the wild any particularly complex forms (beyond a few complicated gap structures
in rings, and tumbling moons and asteroids). But perhaps elsewhere we’ll see things like those obviously tangled solutions to the three-body problem—that come closer to what we’re used to in biology.
It’s remarkable how similar the issues are across so many different fields. For example, the whole idea of using “perturbation theory” and series expansions that has existed since the 1700s in
celestial mechanics is now also core to quantum field theory. But just like in celestial mechanics there’s trouble with convergence (maybe one should try renormalization or resummation in celestial
mechanics). And in the end one begins to realize that there are phenomena—no doubt like turbulence or the three-body problem—that inevitably involve more sophisticated computations, and that need to
be studied not with traditional mathematics of the kind that was so successful for Newton and his followers but with the kind of science that comes from exploring the computational universe.
Approaching Modern Times
But let’s get back to the story of the motion of the Moon. Between Brown’s tables, and Poincaré’s theoretical work, by the beginning of the 1900s the general impression was that whatever could
reasonably be computed about the motion of the Moon had been computed.
Occasionally there were tests. Like for example in 1925, when there was a total solar eclipse visible in New York City, and the New York Times perhaps overdramatically said that “scientists [were]
tense… wondering whether they or moon is wrong as eclipse lags five seconds behind”. The fact is that a prediction accurate to 5 seconds was remarkably good, and we can’t do all that much better even
today. (By the way, the actual article talks extensively about “Professor Brown”—as well as about how the eclipse might “disprove Einstein” and corroborate the existence of “coronium”—but doesn’t
elaborate on the supposed prediction error.)
As a practical matter, Brown’s tables were not exactly easy to use: to find the position of the Moon from them required lots of mechanical desk calculator work, as well as careful transcription of
numbers. And this led Leslie Comrie in 1932 to propose using a punch-card-based IBM Hollerith automatic tabulator—and with the help of Thomas Watson, CEO of IBM, what was probably the first
“scientific computing laboratory” was established—to automate computations from Brown’s tables.
(When I was in elementary school in England in the late 1960s—before electronic calculators—I always carried around, along with my slide rule, a little book of “4-figure mathematical tables”. I think
I found it odd that such a book would have an author—and perhaps for that reason I still remember the name: “L. J. Comrie”.)
By the 1950s, the calculations in Brown’s tables were slowly being rearranged and improved to make them more suitable for computers. But then with John F. Kennedy’s 1962 “We choose to go the Moon”,
there was suddenly urgent interest in getting the most accurate computations of the Moon’s position. As it turned out, though, it was basically just a tweaked version of Brown’s tables, running on a
mainframe computer, that did the computations for the Apollo program.
At first, computers were used in celestial mechanics purely for numerical computation. But by the mid-1960s there were also experiments in using them for algebraic computation, and particularly to
automate the generation of series expansions. Wallace Eckert at IBM started using FORMAC to redo Brown’s tables, while in Cambridge David Barton and Steve Bourne (later the creator of the “Bourne
shell” (sh) in Unix) built their own CAMAL computer algebra system to try extending the kind of thing Delaunay had done. (And by 1970, Delaunay’s 7th-order calculations had been extended to 20th
When I myself started to work on computer algebra in 1976 (primarily for computations in particle physics), I’d certainly heard about CAMAL, but I didn’t know what it had been used for (beyond
vaguely “celestial mechanics”). And as a practicing theoretical physicist in the late 1970s, I have to say that the “problem of the Moon” that had been so prominent in the 1700s and 1800s had by then
fallen into complete obscurity.
I remember for example in 1984 asking a certain Martin Gutzwiller, who was talking about quantum chaos, what his main interest actually was. And when he said “the problem of the Moon”, I was floored;
I didn’t know there still was any problem with the Moon. As it turns, in writing this post I found out that Gutzwiller was actually the person who took over from Eckert and spent nearly two decades
working on trying to improve the computations of the position of the Moon.
Why Not Just Solve It?
Traditional approaches to the three-body problem come very much from a mathematical way of thinking. But modern computational thinking immediately suggests a different approach. Given the
differential equations for the three-body problem, why not just directly solve them? And indeed in the Wolfram Language there’s a built-in function NDSolve for numerically solving systems of
differential equations.
So what happens if one just feeds in equations for a three-body problem? Well, here are the equations:
eqns = {Subscript[m,
1] (Subscript[r, 1]^\[Prime]\[Prime])[t] == -((
Subscript[m, 1] Subscript[m,
2] (Subscript[r, 1][t] - Subscript[r, 2][t]))/
Norm[Subscript[r, 1][t] - Subscript[r, 2][t]]^3) - (
Subscript[m, 1] Subscript[m,
3] (Subscript[r, 1][t] - Subscript[r, 3][t]))/
Norm[Subscript[r, 1][t] - Subscript[r, 3][t]]^3,
2] (Subscript[r, 2]^\[Prime]\[Prime])[t] == -((
Subscript[m, 1] Subscript[m,
2] (Subscript[r, 2][t] - Subscript[r, 1][t]))/
Norm[Subscript[r, 2][t] - Subscript[r, 1][t]]^3) - (
Subscript[m, 2] Subscript[m,
3] (Subscript[r, 2][t] - Subscript[r, 3][t]))/
Norm[Subscript[r, 2][t] - Subscript[r, 3][t]]^3,
3] (Subscript[r, 3]^\[Prime]\[Prime])[t] == -((
Subscript[m, 1] Subscript[m,
3] (Subscript[r, 3][t] - Subscript[r, 1][t]))/
Norm[Subscript[r, 3][t] - Subscript[r, 1][t]]^3) - (
Subscript[m, 2] Subscript[m,
3] (Subscript[r, 3][t] - Subscript[r, 2][t]))/
Norm[Subscript[r, 3][t] - Subscript[r, 2][t]]^3};
Now as an example let’s set the masses to random values:
{Subscript[m, 1], Subscript[m, 2], Subscript[m, 3]} = RandomReal[{0, 1}, 3]
And let’s define the initial position and velocity for each body to be random as well:
inits = Table[{Subscript[r, i][0] == RandomReal[{-1, 1}, 3], Derivative[1][Subscript[r, i]][0] == RandomReal[{-1, 1}, 3]}, {i, 3}]
Now we can just use NDSolve to get the solutions (it gives them as implicit approximate numerical functions of t):
sols = NDSolve[{eqns, inits}, {Subscript[r, 1], Subscript[r, 2], Subscript[r, 3]}, {t, 0, 100}]
And now we can plot them. And now we’ve got a solution to a three-body problem, just like that!
ParametricPlot3D[Evaluate[{Subscript[r, 1][t], Subscript[r, 2][t], Subscript[r, 3][t]} /. First[sols]], {t, 0, 100}]
Well, obviously this is using the Wolfram Language and a huge tower of modern technology. But would it have been possible even right from the beginning for people to generate direct numerical
solutions to the three-body problem, rather than doing all that algebra? Back in the 1700s, Euler already knew what’s now called Euler’s method for finding approximate numerical solutions to
differential equations. So what if he’d just used that method to calculate the motion of the Moon?
The method relies on taking a sequence of discrete steps in time. And if he’d used, say, a step size of a minute, then he’d have had to take 40,000 steps to get results for a month, but he should
have been able to successfully reproduce the position of the Moon to about a percent. If he’d tried to extend to 3 months, however, then he would already have had at least a 10% error.
Any numerical scheme for solving differential equations in practice eventually builds up some kind of error—but the more one knows about the equations one’s solving, and their expected solutions, the
more one’s able to preprocess and adapt things to minimize the error. NDSolve has enough automatic adaptivity built into it that it’ll do pretty well for a surprisingly long time on a typical
three-body problem. (It helps that the Wolfram Language and NDSolve can handle numbers with arbitrary precision, not just machine precision.)
But if one looks, say, at the total energy of the three-body system—which one can prove from the equations should stay constant—then one will typically see an error slowly build up in it. One can
avoid this if one effectively does a change of variables in the equations to “factor out” energy. And one can imagine doing a whole hierarchy of algebraic transformations that in a sense give the
numerical scheme as much help as possible.
And indeed since at least the 1980s that’s exactly what’s been done in practical work on the three-body problem, and the Earth-Moon-Sun system. So in effect it’s a mixture of the traditional
algebraic approach from the 1700s and 1800s, together with modern numerical computation.
The Real Earth-Moon-Sun Problem
OK, so what’s involved in solving the real problem of the Earth-Moon-Sun system? The standard three-body problem gives a remarkably good approximation to the physics of what’s happening. But it’s
obviously not the whole story.
For a start, the Earth isn’t the only planet in the solar system. And if one’s trying to get sufficiently accurate answers, one’s going to have to take into account the gravitational effect of other
planets. The most important is Jupiter, and its typical effect on the orbit of the Moon is at about the 10^-5 level—sufficiently large that for example Brown had to take it into account in his
The next effect is that the Earth isn’t just a point mass, or even a precise sphere. Its rotation makes it bulge at the equator, and that affects the orbit of the Moon at the 10^-6 level.
Orbits around the Earth ultimately depend on the full mass distribution and gravitational field of the Earth (which is what Sputnik-1 was nominally launched to map)—and both this, and the reverse
effect from the Moon, come in at the 10^-8 level. At the 10^-9 level there are then effects from tidal deformations (“solid tides”) on the Earth and moon, as well as from gravitational redshift and
other general relativistic phenomena.
To predict the position of the Moon as accurately as possible one ultimately has to have at least some model for these various effects.
But there’s a much more immediate issue to deal with: one has to know the initial conditions for the Earth, Sun and Moon, or in other words, one has to know as accurately as possible what their
positions and velocities were at some particular time.
And conveniently enough, there’s now a really good way to do that, because Apollo 11, 14 and 15 all left laser retroreflectors on the Moon. And by precisely timing how long it takes a laser pulse
from the Earth to round-trip to these retroreflectors, it’s now possible in effect to measure the position of the Moon to millimeter accuracy.
OK, so how do modern analogs of the Babylonian ephemerides actually work? Internally they’re dealing with the equations for all the significant bodies in the solar system. They do symbolic
preprocessing to make their numerical work as easy as possible. And then they directly solve the differential equations for the system, appropriately inserting models for things like the mass
distribution in the Earth.
They start from particular measured initial conditions, but then they repeatedly insert new measurements, trying to correct the parameters of the model so as to optimally reproduce all the
measurements they have. It’s very much like a typical machine learning task—with the training data here being observations of the solar system (and typically fitting just being least squares).
But, OK, so there’s a model one can run to figure out something like the position of the Moon. But one doesn’t want to have to explicitly do that every time one needs to get a result; instead one
wants in effect just to store a big table of pre-computed results, and then to do something like interpolation to get any particular result one needs. And indeed that’s how it’s done today.
How It’s Really Done
Back in the 1960s NASA started directly solving differential equations for the motion of planets. The Moon was more difficult to deal with, but by the 1980s that too was being handled in a similar
way. Ongoing data from things like the lunar retroreflectors was added, and all available historical data was inserted as well.
The result of all this was the JPL Development Ephemeris (JPL DE). In addition to new observations being used, the underlying system gets updated every few years, typically to get what’s needed for
some spacecraft going to some new place in the solar system. (The latest is DE432, built for going to Pluto.)
But how is the actual ephemeris delivered? Well, for every thousand years covered, the ephemeris has about 100 megabytes of results, given as coefficients for Chebyshev polynomials, which are
convenient for interpolation. And for any given quantity in any given coordinate system over a particular period of time, one accesses the appropriate parts of these results.
OK, but so how does one find an eclipse? Well, it’s an iterative process. Start with an approximation, perhaps from the saros cycle. Then interpolate the ephemeris and look at the result. Then keep
iterating until one finds out just when the Moon will be in the appropriate position.
But actually there’s some more to do. Because what’s originally computed are the positions of the barycenters (centers of mass) of the various bodies. But now one has to figure out how the bodies are
The Earth rotates, and we know its rate quite precisely. But the Moon is basically locked with the same face pointing to the Earth, except that in practice there are small “librations” where the Moon
wobbles a little back and forth—and these turn out to be particularly troublesome to predict.
Computing the Eclipse
OK, so let’s say one knows where the Earth, Moon and Sun are. How does one then figure out where on the Earth the eclipse will actually hit? Well, there’s some further geometry to do. Basically, the
Moon generates a cone of shadow in a direction defined by the location of the Sun, and what’s then needed is to figure out how the surface of the Earth intersects that cone.
In 1824 Friedrich Bessel suggested in effect inverting the problem by using the shadow cone to define a coordinate system in which to specify the positions of the Sun and Moon. The resulting
so-called Besselian elements provide a convenient summary of the local geometry of an eclipse—with respect to which its path can be defined.
OK, but so how does one figure out at what time an eclipse will actually reach a given point on Earth? Well, first one has to be clear on one’s definition of time. And there’s an immediate issue with
the speed of light and special relativity. What does it mean to say that the positions of the Earth and Sun are such-and-such at such-and-such a time? Because it takes light about 8 minutes to get to
the Earth from the Sun, we only get to see where the Sun was 8 minutes ago, not where it is now.
And what we need is really a classic special relativity setup. We essentially imagine that the solar system is filled with a grid of clocks that have been synchronized by light pulses. And what a
modern ephemeris does is to quote the results for positions of bodies in the solar system relative to the times on those clocks. (General relativity implies that in different gravitational fields the
clocks will run at different rates, but for our purposes this is a tiny effect. But what isn’t a tiny effect is including retardation in the equations for the n-body problem—making them become delay
differential equations.)
But now there’s another issue. If one’s observing the eclipse, one’s going to be using some timepiece (phone?) to figure out what time it is. And if it’s working properly that timepiece should show
official “civil time” that’s based on UTC—which is what NTP internet time is synchronized to. But the issue is that UTC has a complicated relationship to the time used in the astronomical ephemeris.
The starting point is what’s called UT1: a definition of time in which one day is the average time it takes the Earth to rotate once relative to the Sun. But the point is that this average time isn’t
constant, because the rotation of the Earth is gradually slowing down, primarily as a result of interactions with the Moon. But meanwhile, UTC is defined by an atomic clock whose timekeeping is
independent of any issues about the rotation of the Earth.
There’s a convention for keeping UT1 aligned with UTC: if UT1 is going to get more than 0.9 seconds away from UTC, then a leap second is added to UTC. One might think this would be a tiny effect, but
actually, since 1972, a total of 27 leap seconds have been added. Exactly when a new leap second will be needed is unpredictable; it depends on things like what earthquakes have occurred. But we need
to account for leap seconds if we’re going to get the time of the eclipse correct to the second relative to UTC or internet time.
There are a few other effects that are also important in the precise observed timing of the eclipse. The most obvious is geo elevation. In doing astronomical computations, the Earth is assumed to be
an ellipsoid. (There are many different definitions, corresponding to different geodetic “datums”—and that’s an issue in defining things like “sea level”, but it’s not relevant here.) But if you’re
at a different height above the ellipsoid, the cone of shadow from the eclipse will reach you at a different time. And the size of this effect can be as much as 0.3 seconds for every 1000 feet of
All of the effects we’ve talked about we’re readily able to account for. But there is one remaining effect that’s a bit more difficult. Right at the beginning or end of totality one typically sees
points of light on the rim of the Moon. Known as Baily’s beads, these are the result of rays of light that make it to us between mountains on the Moon. Figuring out exactly when all these rays are
extinguished requires taking geo elevation data for the Moon, and effectively doing full 3D ray tracing. The effect can last as long as a second, and can cause the precise edge of totality to move by
as much as a mile. (One can also imagine effects having to do with the corona of the Sun, which is constantly changing.)
But in the end, even though the shadow of the Moon on the Earth moves at more than 1000 mph, modern science successfully makes it possible to compute when the shadow will reach a particular point on
Earth to an accuracy of perhaps a second. And that’s what our precisioneclipse.com website is set up to do.
Eclipse Experiences
I saw my first partial solar eclipse more than 50 years ago. And I’ve seen one total solar eclipse before in my life—in 1991. It was the longest eclipse (6 minutes 53 seconds) that’ll happen for more
than a century.
There was a certain irony to my experience, though, especially in view of our efforts now to predict the exact arrival time of next week’s eclipse. I’d chartered a plane and flown to a small airport
in Mexico (yes, that’s me on the left with the silly hat)—and my friends and I had walked to a beautiful deserted beach, and were waiting under a cloudless sky for the total eclipse to begin.
I felt proud of how prepared I was—with maps marking to the minute when the eclipse should arrive. But then I realized: there we were, out on a beach with no obvious signs of modern civilization—and
nobody had brought any properly set timekeeping device (and in those days my cellphone was just a phone, and didn’t even have signal there).
And so it was that I missed seeing a demonstration of an impressive achievement of science. And instead I got to experience the eclipse pretty much the way people throughout history have experienced
eclipses—even if I did know that the Moon would continue gradually eating into the Sun and eventually cover it, and that it wouldn’t make the world end.
There’s always something sobering about astronomical events, and about realizing just how tiny human scales are compared to them. Billions of eclipses have happened over the course of the Earth’s
history. Recorded history has covered only a few thousand of them. On average, there’s an eclipse at any given place on Earth roughly every 400 years; in Jackson, WY, where I’m planning to see next
week’s eclipse, it turns out the next total eclipse will be 727 years from now—in 2744.
In earlier times, civilizations built giant monuments to celebrate the motions of the Sun and moon. For the eclipse next week what we’re making is a website. But that website builds on one of the
great epics of human intellectual history—stretching back to the earliest times of systematic science, and encompassing contributions from a remarkable cross-section of the most celebrated scientists
and mathematicians from past centuries.
It’ll be about 9538 days since the eclipse I saw in 1991. The Moon will have traveled some 500 million miles around the Earth, and the Earth some 15 billion miles around the Sun. But now—in a
remarkable triumph of science—we’re computing to the second when they’ll be lined up again.
Stephen Wolfram (2017), "When Exactly Will the Eclipse Happen? A Multimillennium Tale of Computation," Stephen Wolfram Writings. writings.stephenwolfram.com/2017/08/
Stephen Wolfram (2017), "When Exactly Will the Eclipse Happen? A Multimillennium Tale of Computation," Stephen Wolfram Writings. writings.stephenwolfram.com/2017/08/
Wolfram, Stephen. "When Exactly Will the Eclipse Happen? A Multimillennium Tale of Computation." Stephen Wolfram Writings. August 15, 2017. writings.stephenwolfram.com/2017/08/
Wolfram, S. (2017, August 15). When Exactly Will the Eclipse Happen? A Multimillennium Tale of Computation. Stephen Wolfram Writings. writings.stephenwolfram.com/2017/08/
Posted in: Historical Perspectives, Mathematics, Physics
10 comments
1. Wonderful article, thank you for illustrating such fascinating ancient and modern history through the vision and practical experience of a person who knows the subject from the point of view of
making calculations work easily and effectively.
Thank you too for dedicating what must have been a considerable chunk of your valuable time to this subject with such obvious delight in the power of modern methods!
2. what clock does your calculation program reference for times so I can synchronize my watch?
3. You need to check something with your Totality Calculator. There’s a bug, which likely has to do with the database you’re using for deriving long-lats for the geographic location of interest. I
live in British Columbia between the two small towns of Gibsons, BC and Sechelt, BC. I’ve tried using both in the Calculator, and even tho they’re about 15 KM apart distance-wise, the difference
in Start Time for the eclipse is an hour! This suggests possible that your DB has one of them in a different time zone than the other, which is incorrect.
I hope that’s the only bug in your program!
Howard Katz
□ Sorry for the confusion Howard; One of those two locations is calculated in GMT-8, the other is in PDT. The different time zones should account for the irregularities!
4. Magisterial, simply brillant! This one and the one from Ramanujan are my favorites! Thanks for sharing
5. Calculator is awesome. Now it needs to be updated for when the next eclipse is due.
6. In 1984 the French astronomer couple Jean Chapront and Michelle Chapront-Touzë, working at the Bureau des Longitudes in Paris, published their new analytic lunar theory. It reaches a ashtonishing
accuracy by taking into account more than 36000 periodic terms to deal with perturbations. Are you able to position this analytical theory in your paper?
The Belgian eclipse specialist Jean Meeus analyzed Chapront’s mathematics and used the largest 5000 periodic terms in his preparation to derive Besselian eclipse-elements for the 5Milliennium
Canon, computed and written in cooperation with NASA’s eclipse specialist Fred Espenak. | {"url":"https://writings.stephenwolfram.com/2017/08/when-exactly-will-the-eclipse-happen-a-multimillenium-tale-of-computation/","timestamp":"2024-11-05T12:47:48Z","content_type":"text/html","content_length":"202194","record_id":"<urn:uuid:d8edf908-1158-46c2-84e2-86d68c9ab9a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00657.warc.gz"} |
Accurate Laguerre collocation solutions to a class of Emden–Fowler type BVPAccurate Laguerre collocation solutions to a class of Emden–Fowler type BVP
We solve numerically the nonlinear and double singular boundary value problem formed by the well known Emden-Fowler equation \(u^{\prime\prime}=u^{s}x^{-1/2},s>1\) along with the boundary conditions
\(u(0)=1\) and \(u(\infty)=0\). In order to capture the exponential decrease of its solution we use the Laguerre-Gauss-Radau collocation method and infer its convergence. We show that the value of \
(u^{\prime}\) at origin, which plays a fundamental role in these problems, definitely satisfies some rigorous accepted bounds. A particular attention is paid to the Thomas-Fermi case, i.e. \(s:=3/2
\). We treat the problems as boundary values ones without any involvement of ones with initial values. The method is robust with respect to scaling and order of approximation.
C.I. Gheorghiu
“Tiberiu Popoviciu” Institute of Numerical Analysis, Romanian Acdemy
Emden-Fowler problem; Laguerre collocation; slope in origin; bounds
C.I. Gheorghiu, Accurate Laguerre collocation solutions to a class of Emden–Fowler type BVP, ,
Journal of Physics A: Mathematical and Theoretical
Paper (preprint) in HTML form
Accurate Laguerre Collocation Solutions to a Class of Emden-Fowler Type BVP
We solve numerically the nonlinear and double singular boundary value problem (BVP) formed by the well known Emden-Fowler (EF) equation $u^{\prime\prime}=u^{s}x^{-1/2}$, $s>1$ along with the boundary
conditions $u\left(0\right)=1$ and $u\left(\infty\right)=0.$ In order to capture the exponential decrease of its solution we use the Laguerre-Gauss-Radau collocation (LGRC) method and infer its
convergence. We show that the value of $u^{\prime}$ at origin, which plays a fundamental role in these problems, definitely satisfies some rigorous accepted bounds. A particular attention is paid to
the Thomas-Fermi (TF) case, i.e., $s:=3/2.$ We treat the problems as boundary values ones without any involvement of ones with initial values. The method is robust with respect to scaling and order
of approximation.
^†^†: \jpa
Keywords: Emden-Fowler problem, Laguerre collocation, slope in origin, bounds.
1 Introduction
Various boundary value problems associated to EF equation have been treated along time by transforming them into initial value problems. Furthermore, they have been solved with different classical
numerical methods of various accuracy.
The IVP in physics as well as in other branches of science, have been solved by various shooting methods. These methods have become obsolete over time for many reasons.
We appreciate that solving boundary problems with (their) specific numerical methods is important in order to obtain better outgoings.
Consider the Emden-Fowler equation
$u^{\prime\prime}=u^{s}x^{-1/2},$ (1)
supplied with the same boundary condition as above, i.e.,
$u\left(0\right)=1,u\left(\infty\right)=0.$ (2)
When $s>1,$ the solution of this equation decays as $x^{-q}$, $q=3/\left(2\left(s-1\right)\right)$, as $x$ approaches $\infty$ (see [4] and the papers quoted there).
The existence of the solution to such a problem is substantiated in the paper [1]. In [4] the author finds very accurate a priori bounds for the slope $u^{\prime}_{0}:=u^{\prime}\left(0\right).$ He
exploits the integral properties of EF equation and produces the double inequality
$-\left(\frac{12}{1+s}\frac{1+4s/5}{1+s}\right)^{1/3}<u_{0}^{\prime}<-\left(% \frac{60}{5+7s}\right)^{1/3}.$ (3)
Thus, the collocation spectral methods, in conventional form or more recent Chebfun form, more elegant, easier to implement and more accurate are well suited for solving the problems at hand.
In the case of the problems examined in this article, the Laguerre polynomials, multiplied by the weight function $exp(-bx)$ correctly represent the continuous functions on the real semi-axis and
satisfy the boundary condition at infinity (the second condition in (2)).
Conventional spectral methods have been exploited almost to exhaustion solving physics problems by J P Boyd [6].
The main aim of this study is to show that the LGRC method solves problems of type (1)–(2) fairly accurate such that the slope in origin satisfies bounds like (3). Moreover we infer on the
convergence of this algorithm.
Delkhosh and Parand have recently published several papers on the Emden-Fowler problem from which we only quote [9]. They confirm the efficiency and accuracy of the spectral methods using the
generalized fractional order of the Chebyshev orthogonal functions. The purpose of our work is only to eliminate the numerical noise from the value of the derivative of solution in origin. The
extension of our study to non-integer values of $s$ is in progress.
For the particular case $s:=3/2$ in [4] the author deduces the following important and useful integral identity
$\left(u_{0}^{\prime}\right)^{n-1}=-\frac{6n-5}{n}\int_{0}^{\infty}\left(u^{% \prime}\right)^{n}+2\left(n-1\right)\left(n-2\right)\int_{0}^{\infty}u^{4}% \left(u^{\prime}\right)^{n-3},$ (4)
for any natural $n.$
Some clarifications are necessary for the convergence of the improper integral [5].
The solutions of problems (1)–(2) are continuous indefinitely differentiable functions with compact support on the real semi-axis, the integrand of integral I has the same property.
Under these conditions, the general theory of improper integrals ensures convergence.
For $n:=2$ and $n:=4$ we will show that the solutions obtained by LGRC satisfy the above integral identities with a quite good approximation. The final bounds for $u_{0}^{\prime}$ are in this case
$-1.5973459<u_{0}^{\prime}<-1.5833095.$ (5)
In the paper [6] the author gives very interesting historical aspects related to the numerical solution of the TF problem, mentioning several Nobel Prize laureates who considered it. He lists a set
of values for $u_{0}^{\prime}$, not all of which belong to the range (3). Its solution is based on rational Chebyshev series.
2 LGRC in solving Thomas-Fermi b.v.p.
With respect to the LGRC method we refer to the monograph [5] which offers all the ingredients to implement the method.
Actually we are looking for a solution of the form
$u_{N-1}\left(x\right):=\sum\limits_{j=1}^{N}\frac{e^{-x/2}}{e^{-x_{j}/2}}\phi_% {j}\left(x\right)u_{j},$ (6)
$\phi_{j}\left(x\right)=\frac{xL_{N-1}\left(x\right)}{\left(xL_{N-1}\left(x% \right)\right)^{\prime}\left(x_{j}\right)\left(x-x_{j}\right)},$
and $x_{1}=0$ and $x_{2},\ldots,x_{N}$ are the roots of $L_{N-1}\left(x\right)$, the Laguerre polynomial of degree $N-1,$ indexed in increasing order of magnitude. The interval $\left[0,\infty\right)
$ can be mapped to itself by the change of variables $x:=b\widetilde{x},$ where $b$ is a positive real number called scaling factor. Thus LGRC method contains a free parameter which can be exploited
to optimize the accuracy of differencing process and to tune the numerical outcomes.
In order to introduce the boundary conditions (2) we operate the change of variables
If we denote by $\mathcal{D}_{N-2}^{\left(2\right)}$ the second order Laguerre differentiation matrix in physical space, LGRC cast the problem (1)–(2) with $s:=3/2$ into the nonlinear algebraic
$\mathcal{D}_{N-2}^{\left(2\right)}\left(U-\exp\left(-X_{N-2}\right)\right)=% \left\{diag\left(X_{N-2}\right)\right\}.^{-1/2}\left(\left(U-\exp\left(-X_{N-2% }\right)\right).^{3/2}\right),$ (7)
where the vector $X_{N-2}$ contains the Gauss-Radau nodes $x_{2},\ldots,x_{N-1}$ and the vector $U$ the corresponding unknowns (see Weidemann and Reddy [3]). We have to observe that in formulating
the system (7) we use the MATLAB notations (the under-dot in (7) signifies the pointwise product of the two areas in MATLAB).
We have solved the system (7) by MATLAB routine fsolve and have found the following value
which strictly belong to the interval (5). In order to find the slope at origin we simply multiply the vector solution with the first order derivative matrix.
Figure 1: The solution to TF problem (1)–(2) (left panel) and the Laguerre coefficients of solution to this problem (right panel). The order of approximation of LGRC is $N:=315$ and the scaling
factor $b$ equals $100.$
In the left panel of Figure 1 we display the solution to our problem and in the right panel we show how the coefficients of LGRC solution decrease. Roughly speaking we cannot hope to an accuracy much
better than $10^{-4}.$ (see Boyd [2], Convergence of Spectral Methods). Actually, the way in which the coefficients decrease is a correct indication of convergence of solution.
We have to mention at this point that in order to get the coefficients of solution we have relied on the Laguerre polynomial transform between physical and coefficient spaces from [5].
With respect to the right panel in Figure 1, we have to observe that this technique was introduced by Boyd in his huge monograph [2]. It is not a precise proof but a very valuable trick. We have
successfully and frequently used this argument.
The Newton’s method in solving the system (7) is convergent as it is visible from Figure 2.
Figure 2: The first optimality in solving TF problem (1)–(2) with Newton’s method implemented by MATLAB routine fsolve. The order of approximation of LGRC is $N:=315$ and the scaling factor $b$
equals $100.$
With the value of $u_{0}^{\prime}$ displayed above we have validated numerically the integral relations (4). Thus, for $n:=2$ the integral equality is correct to order $10^{-3}$ and when $n:=4$ the
accuracy decreases to order $10^{-2}.$
3 LGRC in solving Emden-Fowler BVP
We now consider the EF problem with $s:=4$ in the equation (1) and a perfect similar treatment as above has been applied. For this value of exponent $s$ the bounds in (3) are the following
$-1.263271919531276<u_{0}^{\prime}<-1.220522444470285.$ (8)
Our numerical experiments with scaling factor $b$ in interval $[25,100]$ and resolution $N$ around $300$ provided numerical values of the derivative in origin strictly in the interval (8).
The only cons of the method is the arbitrary choosing of the resolution $N$.
4 Concluding remarks
In order to solve nonlinear BVP we can choose between Chebyshev or Fourier collocation methods, when the domain is finite, or Laguerre, sinc or Hermite polinomials based collocation, whenever the
domain is unbounded. The choosing depends also on the shape of nonliniarity as well as on the packages of implementation at hand. We mention that the literature about these methods is now well
established so we consider that more details are not necessary.
In order to validate problems (1)–(2) we have alternatively tried to use Chebyshev collocation in its conventional form (see [7]) or in form of Chebfun (see [8]) with two ingredients, namely domain
truncation or the mapping of half line into the canonical interval $[-1,1].$ None of these methods gave acceptable results.
It is clear that the success of the used LGRC method is due to the existence of the weighting factor $exp(-x/2)$ in the definition of the LGR interpolant (6). The method handles correctly both
singularities, at origin and at infinity, is robust with respect to the resolution and scaling, and is easy implementable. Up to our knowledge we consider that this method is, so far, the most
accurate in solving such nonlinear and double singular problems as boundary value ones.
The author is indebted to both referees for opportune observations that led to the improvement of the work.
• [1] O’Regan D 1996 Can. J. Math. 48 143
• [2] Boyd J P 2000 Chebyshev and Fourier Spectral Methods, Dover Publications, Inc. 31 East 2nd Street, Mineola, New York 11501
• [3] Weidemann J A C and Reddy S C 2000 ACM Trans. Math. Softw. 26 465
• [4] Iacono R 2008 J. Phys. A: Math. Theor. 41 455204
• [5] Shen J, Tang T and Wang L - L 2011 Spectral Methods. Algorithms, Analysis and Applications (Springer-Verlag Berlin) Chapter 7
• [6] Boyd J P 2013 J. Comput. Appl. Math. 244 90
• [7] Gheorghiu C I 2014 Spectral Methods for Non-Standard Eigenvalue Problems. Fluid and Structural Mechanics and Beyoud, Springer Chapters 3 and 4
• [8] Tretehten L N, Birkisson A and Driscoll T A 2018 Exploring ODEs, SIAM Philadelphia, Appendix A
• [9] Delkhosh M and Parand K 2019 Hacet. J. Math. Stat. 48 1601
[1] Iacono R 2008 J. Phys. A: Math. Theor. 41 455204
[2] O’Regan D 1996 Can. J. Math. 48 143
[3] Boyd J P 2013 J. Comput. Appl. Math. 244 90
[4] Delkhosh M and Parand K 2019 Hacettepe J. Math. Stat. 48 1601
[5] Shen J, Tang T and Wang L – L 2011 Spectral Methods. Algorithms, Analysis and Applications (Berlin: Springer) ch 7
[6] Weidemann J A C and Reddy S C 2000 ACM Trans. Math. Softw. 26 465
[7] Boyd J P 2000 Chebyshev and Fourier Spectral Methods (New York: Dover) p 11501
[8] Gheorghiu C I 2014 Spectral Methods for Non-Standard Eigenvalue Problems. Fluid and Structural Mechanics and Beyoud (Berlin: Springer) ch 3 and 4
[9] Tretehten L N, Birkisson A and Driscoll T A 2018 Exploring ODEs (Philadelphia, PA:SIAM) | {"url":"https://ictp.acad.ro/accurate-laguerre-collocation-solutions-to-a-class-of-emden-fowler-type-bvp/","timestamp":"2024-11-02T18:33:01Z","content_type":"text/html","content_length":"184691","record_id":"<urn:uuid:70cefbdd-b7ef-42a1-af15-a81dbef63105>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00520.warc.gz"} |
Dec2Hex: Excel Formulae Explained - ExcelAdept
Key Takeaways:
• DEC2HEX is an Excel formula that converts a decimal number to a hexadecimal value. This is useful for programmers and anyone who needs to work with computer memory addresses or other hexadecimal
• The DEC2HEX formula has a specific syntax and requires two arguments: the decimal number to be converted and the number of characters expected in the output. The formula can also be customized to
include a specific prefix or ignore leading zeros.
• To use the DEC2HEX formula effectively, it’s important to ensure that the inputs are correct and that the output is formatted correctly. It’s also useful to experiment with the various optional
arguments to customize the output to your needs.
Do you need to convert decimal numbers to hexadecimal numbers in Excel? This article explains the easy-to-use DEC2HEX Excel formula, making the process simple and efficient.
Understanding the DEC2HEX formula
Comprehend the DEC2HEX formula in Excel? Know its syntax and arguments. Greatly benefit from learning how to use the DEC2HEX function. Here are certain examples that show how to use the DEC2HEX
formula in varying ways.
Syntax and arguments of the DEC2HEX function
DEC2HEX function takes a decimal number as its argument and converts it to the hexadecimal (base 16) equivalent. The syntax follows the form of =DEC2HEX(number, [places]). Number is the decimal value
between -549,755,813,888 and 549,755,813,887 to be converted. Places (optional) is an integer specifying how many characters should be returned.
To use DEC2HEX in Excel, enter =DEC2HEX(number,[places]) in a cell where you would like the hexadecimal value displayed. For example, =DEC2HEX(255) will result in ‘FF’. If you’d like to return more
than two digits or if leading zeros are required in your answer, then add a value for places by entering an integer between 1-10 as the second argument.
It is important to note that this formula only works with decimal values; non-decimal number systems will not work with this formula. Additionally, any negative numbers passed as an argument will
result in a #NUM! error message.
I once had to convert hundreds of employee IDs from decimal to hexadecimal format before importing them into our new HR system. Thanks to the DEC2HEX function, I was able to streamline the process
and save hours of manual data entry.
DEC2HEX in Excel: because turning decimal into hexadecimal is just as satisfying as turning water into wine.
Examples of using DEC2HEX function in Excel
The DEC2HEX formula in Excel is a powerful tool that can be used to convert decimal numbers into hexadecimal format. Employing this functional process helps you streamline your excel applications,
enabling you to perform calculations quickly and efficiently without errors.
Step 1: Locate the cell for the formula.
Step 2: Enter ‘DEC2HEX’ function followed by the number you want to convert.
Step 3: Press enter for conversion instantly.
Step 4: Use formatting options to customize output based on your needs.
• Integrate colors or shading options.
• Align content in specific columns and rows.
• Incorporate borders for tables and cells.
Step 5: Review your results, verify accuracy, and use the output as appropriate.
It’s important to note that using the DEC2HEX formula may not always produce perfect results when handling decimal numbers. Nevertheless, this formula is a reliable option that can reliably provide
accurate results for most purposes.
Experts in finance have found great utility in using DEC2HEX as it offers clear reports which are easily accessible. For example, during an audit of financial records, one examiner captivated by
using DEC2HEX in Excel commented that its ability enabled them to accomplish early schedule targets whilst maintaining strict compliance standards and high scrutiny of all documents presented to
their office.
Using DEC2HEX can make you want to HEX it all and start over, but fear not as we’ve got some solutions up our sleeve.
Potential errors and solutions when using DEC2HEX function
To stop #VALUE! error or #NUM! error, when using the DEC2HEX function, here are some ideas. Consider them!
#VALUE! error
When using the DEC2HEX function in Excel Formulae, you may encounter an error similar to “Invalid Input” or “The value entered is not valid“. This error occurs when the input argument provided to the
DEC2HEX function is out of range or contains non-numeric characters.
To resolve this issue, ensure that your input argument is a valid number within the range of -549,755,813,888 to 549,755,813,887. Additionally, ensure that there are no special non-numeric characters
in your input argument.
Furthermore, if you are copying and pasting data into Excel from an external source, ensure that the formatting is consistent and matches the format expected by Excel before applying any formulae.
Pro Tip: Before using any formulae in Excel, make sure to double-check your input arguments for correct values and formatting. Don’t let the #NUM! error throw you off, just remember, Excel doesn’t
like to divide by zero (or infinity, for that matter).
#NUM! error
The DEC2HEX function in Excel can result in an error code #NUM!. This error occurs when the input value of the function is either too large or too small. The output range should be between 0 and
4,294,967,295. If the input value is outside of this range, then the #NUM! error will appear.
To solve the #NUM! error, first ensure that the input value falls within the correct range. If it does not, then consider breaking down the data into smaller chunks before converting it to
hexadecimal form. You could also try using different functions like HEX2DEC or BIN2HEX instead of DEC2HEX depending on your requirements.
In addition to ensuring that the input value is within the correct range, you can double-check that all formulas and cell references are accurate. Often times a simple mistake like an incorrect cell
reference can lead to errors like #NUM!.
A colleague once encountered a problematic issue with their spreadsheet’s formulae which resulted in several cells displaying #NUM!. After examining each formula in detail, they discovered that they
had mistakenly referenced a blank cell which was itself resulting in one of those dreaded N/A errors – correcting this led to their issue being resolved.
Five Facts About DEC2HEX: Excel Formulae Explained:
• ✅ DEC2HEX is an Excel formula that converts decimal numbers to hexadecimal format. (Source: Exceljet)
• ✅ The DEC2HEX formula takes two arguments – the decimal number to be converted, and the number of characters in the hexadecimal number. (Source: Spreadsheeto)
• ✅ The DEC2HEX formula can be used in conjunction with other Excel functions, such as CONCATENATE, to create more complex calculations. (Source: Ablebits)
• ✅ Hexadecimal format is often used in computer programming, especially when working with color codes and memory addresses. (Source: Lifewire)
• ✅ Excel also offers other conversion functions, such as HEX2DEC and BIN2DEC, for converting hexadecimal and binary numbers to decimal format, respectively. (Source: Excel Easy)
FAQs about Dec2Hex: Excel Formulae Explained
What is DEC2HEX: Excel Formulae Explained?
DEC2HEX: Excel Formulae Explained is a tutorial on how to use the DEC2HEX function in Excel. DEC2HEX is a function that allows you to convert decimal numbers to hexadecimal numbers in Excel. This
tutorial explains how to use the function and provides examples of it in action.
How do I use the DEC2HEX function in Excel?
To use the DEC2HEX function in Excel, you first need to enter the decimal number you want to convert into a cell. Then, you can use the DEC2HEX formula to convert that number to a hexadecimal number.
The formula should look like this: =DEC2HEX(number, [places])
What is the syntax for the DEC2HEX function?
The syntax for the DEC2HEX function is as follows: =DEC2HEX(number, [places]). The “number” is the decimal number you want to convert, and the “places” (optional) indicate the number of characters
you want the hexadecimal number to have, including any leading zeros.
Can I use the DEC2HEX function to convert multiple decimal numbers at once?
Yes, you can use the DEC2HEX function to convert multiple decimal numbers at once by entering the function into a range of cells. Simply select the range of cells where you want the hexadecimal
numbers to appear, then enter the DEC2HEX formula with the corresponding decimal numbers in the adjacent cells.
How do I ensure the DEC2HEX function returns the correct result?
To ensure that the DEC2HEX function returns the correct result, it is important to enter the decimal number you want to convert correctly and to use the correct syntax for the function. Additionally,
you should check that the hexadecimal number returned by the function is correct by using a calculator or another method to verify the result.
What are some common errors when using the DEC2HEX function?
Common errors when using the DEC2HEX function include entering the decimal number incorrectly, using the wrong syntax for the function, and not specifying the correct number of places in the formula.
It is also possible to encounter errors if the decimal number being converted is too large or too small for the function to handle. | {"url":"https://exceladept.com/dec2hex-excel-formulae-explained/","timestamp":"2024-11-12T15:23:22Z","content_type":"text/html","content_length":"63468","record_id":"<urn:uuid:c0c29105-f3f6-45ea-b95a-dab714d4cdb3>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00818.warc.gz"} |
Surprising results found in the swimming mechanism of microorganism-related model - AIP Publishing LLC
New calculations improve upon popular explanations of swimming and flying mechanisms, describing how movement arises despite average thrust and drag that vanishes over full periodic motion
From the Journal: Physics of Fluids
WASHINGTON, D.C., Tuesday, January 24, 2017 — For years, B. Ubbo Felderhof, a professor at the Institute for Theoretical Physics at Germany’s RWTH Aachen University, has explored the mechanisms that
fish and microorganisms rely on to propel themselves. Flying birds and insects face similar challenges propelling themselves, but without the luxury of buoyancy these creatures also contend with
overcoming gravity to stay aloft.
Over 20 years ago, Felderhof, who was working with Bob Jones, now an emeritus reader in theoretical physics at Queen Mary University of London, was studying the theory behind the “swimming” of
microorganisms, described by the friction interactions between the microbodies and their surrounding fluid. Because of the small size of many such microorganisms like bacteria, such inertial forces
could be neglected in the description. For slightly larger organisms, however, this was not the case.
Felderhof has since created mechanical models to more fully develop the theory, consisting of linear chains of spheres connected by springs and immersed in fluid. Here he took into account that the
interaction with the fluid involves both friction and inertia, since the effect of mass can’t be neglected for these larger structures.
As Felderhof now reports in Physics of Fluids, from AIP Publishing, he’s just pushed this work even further by addressing what happens in the case of adding one sphere to the chain that’s much larger
than the other spheres.
Felderhof studies structures of spheres because the effect of friction and fluid inertia on the motion of a single sphere is fairly well known. With multiple spheres, however, the picture is more
complex and has to take into account positions and orientations. “For several spheres, there is the complication of hydrodynamic interactions due to interference of flow patterns,” he said. “These
hydrodynamic interactions depend on the relative positions of sphere centers.”
If the relative positions of the spheres are varied periodically by applying an oscillating force on each of them, with the constraint that the total net force vanishes at any time, the system still
sees movement. “In spite of the latter constraint, the set of spheres in general performs a net motion, which is called ‘swimming,’” Felderhof said.
A mathematical formulation allows finding the optimum stroke — the combined applied forces — that yields the maximum average speed for a given power.
His work provides an important conceptual clarification of flow theory. “In popular explanations of swimming and flying, we’re told that speed is achieved by a balance of thrust and drag,” Felderhof
said. “My model calculations, however, show that the mean thrust and drag both vanish when averaged over a period. The effect is more subtle. Interactions of body and fluid are such that periodic
shape deformations of the body lead to a net motion relative to the fluid, even though the net thrust vanishes.”
Much of the previous work on swimming has concentrated on either the friction-dominated limit, valid for microorganisms, or on the inertia-dominated limit, valid for large animals. “In my model, both
friction and inertia play a role so that swimming can be studied in the intermediate regime, where both effects are important,” he said.
In terms of applications, the swimming linear chain model is particularly useful because of its slender structure and ability to travel through narrow tubes, such human veins.
“Biologists have already considered the possibility of drug transport via such means,” Felderhof said. “And now we’ve developed a mathematical model that allows optimization of deformations of the
body, which leads to maximum speed for given power. This method isn’t limited to linear chains, so we can envision applying it to more complicated structures in future work.”
First, Felderhof points out that it is important to validate the model by comparison with computer simulations and subsequent experiments, which is beyond his focus, so he hopes other researchers
will pursue it.
“Friction and inertia aren’t the only effects that can lead to swimming,” Felderhof said. “Flapping leads to vortex shedding and possibly a ‘street’ of vortices. This effect is absent from my model,
but may be essential for the swimming of some fish and for flying birds. It will be of value to establish the relative importance of friction, inertia, and vortex shedding, but at present I don’t see
how this can be accomplished in analytical theory. Again, computer simulation would be helpful.”
For More Information:
AIP Media Line
Article Title
Swimming of a linear chain with a cargo in an incompressible viscous fluid with inertia
B.U. Felderhof
Author Affiliations
RWTH Aachen University
Physics of Fluids
Physics of Fluids is devoted to the publication of original theoretical, computational, and experimental contributions to the dynamics of gases, liquids, and complex or multiphase fluids. | {"url":"https://publishing.aip.org/publications/latest-content/surprising-results-found-in-the-swimming-mechanism-of-microorganism-related-model/","timestamp":"2024-11-03T02:42:56Z","content_type":"text/html","content_length":"68695","record_id":"<urn:uuid:08bceae8-0a06-4044-a84c-b172ce8c9a34>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00696.warc.gz"} |
Rendiconti dell'Istituto di Matematica dell'Università di Trieste: an International Journal of Mathematics vol.32 (2001) s1
Permanent URI
Editorial policy The journal Rendiconti dell’Istituto di Matematica dell’università di Trieste publishes original articles in all areas of mathematics. Special regard is given to research papers, but
attractive expository papers may also be considered for publication. The journal usually appears in one issue per year. Additional issues may however be published. In particular, the Managing Editors
may consider the publication of supplementary volumes related to some special events, like conferences, workshops, and advanced schools. All submitted papers will be refereed. Manuscripts are
accepted for review with the understanding that the work has not been published before and is not under consideration for publication elsewhere. Our journal can be obtained by exchange agreements
with other similar journals.
Instructions for Authors Authors are invited to submit their papers by e-mail directly to one of the Managing Editors in PDF format. All the correspondence regarding the submission and the editorial
process of the paper are done by e-mail. Papers have to be written in one of the following languages: English, French, German, or Italian. Abstracts should not exceed ten printed lines, except for
papers written in French, German, or Italian, for which an extended English summary is required. After acceptance, manuscripts have to be prepared in LaTeX using the style rendiconti.cls which can be
downloaded from the web page. Any figure should be recorded in a single PDF, PS (PostScript), or EPS (Encapsulated PostScript) file. | {"url":"https://www.openstarts.units.it/collections/43b92505-145e-4b95-993b-a40d11b19e40","timestamp":"2024-11-02T01:58:14Z","content_type":"text/html","content_length":"852918","record_id":"<urn:uuid:7608d873-a83f-4dac-8220-1a09d6b3ce38>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00607.warc.gz"} |
In Exercises \(37-42,\) the diagonall of rhombus \(\mathrm{ABCD}\) intersect at E. Given that \(\mathrm{m} \angle \mathrm{BAC}\) \(53^{\circ}, \mathrm{DE}\) \(8,\) and \(\mathrm{EC}\) 6 Find the
indicated measure. $$ \mathrm{m} \angle \mathrm{DAC} $$
Short Answer
Expert verified
The measure of angle DAC is \(53^{\circ}\).
Step by step solution
Identify Relevant Properties of the Rhombus
A key property of a rhombus to keep in mind for this exercise is that the diagonals of a rhombus bisect its angles. This means that they split the angles of the rhombus into two equal parts. In this
case, the diagonal AC bisects angle BAC and DAC, making them equal.
Apply the Property to Given Values
Knowing that diagonals bisect the angles, and given that m angle BAC is \(53^{\circ}\), angle DAC will be equal to angle BAC, as they are bisected by the same diagonal AC.
Declare the Required Measure
So it is determined that the measure of angle DAC is \(53^{\circ}\), which is the measure we had to find.
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
Rhombus Properties
Understanding the unique characteristics of a rhombus can help solve various geometric problems involving this shape. A rhombus, by definition, is a type of quadrilateral with four equal sides,
making it a special type of parallelogram. The equal side lengths imply that opposite angles within a rhombus are also equal to each other.
Moreover, another defining trait of a rhombus is that its diagonals are perpendicular to each other and each diagonal bisects the opposite angles. This property of the diagonals is what lends itself
to solving problems that require finding unknown angle measures within the rhombus. For example, if one angle is known, we can easily figure out the measure of its opposite angle since they are
The properties of a rhombus are immensely useful in geometry because they provide a set of predictable behaviors and relationships. This makes it easier to deduce solutions with minimal information,
which was the case in the exercise we're discussing.
Diagonal Bisects Angles
A key feature of the rhombus is that its diagonals play an essential role in its geometry. Diagonals in a rhombus are not only equal in length but also bisect the angles from which they originate.
Bisecting an angle means to divide the angle into two congruent, or equal, angles. This is critical to understanding how to find unknown angle measures.
In the given exercise, diagonal AC bisects \(\text{m} \angle \mathrm{BAC}\), which means \(\text{m} \angle \mathrm{BAC}\) is split into two equal parts. Knowing that one of those parts measures \(\
(53^\circ\), we can affirm that the other part, \(\text{m} \angle \mathrm{DAC}\), must also measure \(\)53^\circ\). This concept is instrumental in solving for angles when one angle measure is given,
as the bisected angles will be identical.
Interior Angles in Polygons
Exploring the interior angles of polygons allows us to navigate through various geometrical shapes, including the rhombus. Any polygon's interior angles add up to a specific sum, which can be
calculated using \((n - 2) \times 180^\circ\), where \('n'\) represents the number of sides of the polygon. For a quadrilateral, such as a rhombus, the sum of the interior angles is \((4 - 2) \times
180^\circ = 360^\circ\).
This knowledge can be applied in the context of rhombuses, with the understanding that the sum of the angles around any given vertex must be \($360^\circ\). These concepts are interconnected; the
properties of diagonals in rhombuses confirming their angles, coupled with the summation laws of interior angles, give us the foundation to solve complex geometric problems. | {"url":"https://www.vaia.com/en-us/textbooks/math/geometry-a-common-core-curriculum-2015-edition/chapter-7/problem-37-in-exercises-37-42-the-diagonall-of-rhombus-mathr/","timestamp":"2024-11-03T17:06:11Z","content_type":"text/html","content_length":"248147","record_id":"<urn:uuid:442c1ed7-102c-41d5-a46e-b3be893e8b62>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00802.warc.gz"} |
Adversarially Robust Coloring for Graph Streams (Journal Article) | NSF PAGES
Graph coloring is a fundamental problem with wide reaching applications in various areas including ata mining and databases, e.g., in parallel query optimization. In recent years, there has been a
growing interest in solving various graph coloring problems in the streaming model. The initial algorithms in this line of work are all crucially randomized, raising natural questions about how
important a role randomization plays in streaming graph coloring. A couple of very recent works prove that deterministic or even adversarially robust coloring algorithms (that work on streams whose
updates may depend on the algorithm's past outputs) are considerably weaker than standard randomized ones. However, there is still a significant gap between the upper and lower bounds for the number
of colors needed (as a function of the maximum degree Δ) for robust coloring and multipass deterministic coloring. We contribute to this line of work by proving the following results. In the
deterministic semi-streaming (i.e., O(n · polylog n) space) regime, we present an algorithm that achieves a combinatorially optimal (Δ+1)-coloring using O(logΔ log logΔ) passes. This improves upon
the prior O(Δ)-coloring algorithm of Assadi, Chen, and Sun (STOC 2022) at the cost of only an O(log logΔ) factor in the number of passes. In the adversarially robust semi-streaming regime, we design
an O(Δ5/2)-coloring algorithm that improves upon the previously best O(Δ3)-coloring algorithm of Chakrabarti, Ghosh, and Stoeckl (ITCS 2022). Further, we obtain a smooth colors/space tradeoff that
improves upon another algorithm of the said work: whereas their algorithm uses O(Δ2) colors and O(nΔ1/2) space, ours, in particular, achieves (i)~O(Δ2) colors in O(nΔ1/3) space, and (ii)~O(Δ7/4)
colors in O(nΔ1/2) space.
more » « less | {"url":"https://par.nsf.gov/biblio/10380032","timestamp":"2024-11-03T20:07:15Z","content_type":"text/html","content_length":"246795","record_id":"<urn:uuid:ea6240c1-fe0a-4bcb-8eb0-8f86479d218d>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00356.warc.gz"} |
Kenneth Craik
COMMENTS ON
Kenneth Craik,
The Nature of Explanation,
Cambridge University Press, 1943, London, New York,
Aaron Sloman
School of Computer Science, University of Birmingham.
(This is an incomplete set of notes.)
25 Sep 2012
Last updated:
25 Sep 2012;7 Feb 2016; 27 Sep 2020
(DRAFT: Liable to change)
This paper is a "place-holder" here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/kenneth-craik.html http://www.cs.bham.ac.uk/research/projects/cogaff/misc/kenneth-craik.pdf
This is part of the Meta-Morphogenesis Project:
(Also PDF)
A partial index of discussion notes is in
A colleague once wrote to me:
I think your view of the scientific method is very reasonable, and describes achievements in certain fields like physics very well. But in other fields in which there is no generally-accepted
overarching framework for how the system works -- fields like medicine, all the social sciences, and cognitive science -- a different approach is usually taken. That approach is to manipulate a
couple of independent variables to look for evidence of one or two causal relations.
I responded:
It may be what is usually done, but it's not likely to lead to any major advance.
Contrast what Kenneth Craik did in his little book The Nature of Explanation, published 1943. (He tragically died very young a few years later.)
He is best known for reflecting on various features of the competences of (some) animals and asking 'How is that possible', and coming up with speculative answers whose complexity is derived from the
complexity of what needs to be explained. The best known example is his suggestion that some animals can build models of portions of the environment and use them to predict events in the environment
"My Hypothesis then is that thought models, or parallels, reality -- that its essential feature is not 'the mind', 'the self', 'sense-data', nor propositions but symbolism, and that this symbolism is
largely of the same kind as that which is familiar to us in mechanical devices which aid thought and calculation." (p. 57)
"...a process which saves time, expense, and even life".(page 82).
Craik was writing before the development of rectangular grids of photoreceptors made it (relatively) trivial to discover co-linear features in an image, using arithmetical operations on coordinates.
The problems are very different for brains, where receptors, processing mechanisms, and storage mechanisms have a far less mathematically simple organisation, which may be why Craik also wrote:
"The hardest part of the process is the act of representation itself -- the representation of something variable in size and location by a definite neural process." (Page 73)
I suspect he, like many others, missed some subtleties that led to the evolution of mathematical reasoning capabilities, but that's a long story.
The book also includes a less well known extended discussions of how various kinds of abstract information about structures, processes and relationships in the environment might be represented in
known types of physical brain mechanisms, e.g. proposing that not absolute magnitudes but changes and orderings are mostly used.
That's an idea I have been exploring for the last few years, having completely forgotten that I must have read it in Craik 40-50 years ago.
He was writing long before AI vision researchers got their scientific vision distorted by the availability of electronic cameras with rectangular grids for retinas (frame-grabbers).
Reasoning about possibilities and impossibilities
One of the important uses of internal models that goes beyond what Craik and others have argued they are good for is discovering sets of
, or, equivalently,
. (Since if not-P is impossible then P is necessarily the case.) Running a model with particular parameters will provide information about what will happen in the corresponding situation, if the
model is accurate. But it will say nothing about what other processes the model supports, what the necessary consequences of certain
of process are, and what sorts of things are
for the model. So, normally, simply running a model or simulation does not normally provide a basis for claiming that something is impossible or necessarily true.
In contrast, reasoning about the possibility of a square in a circle moving from inside the circle to outside the circle may lead to various generalisations, e.g. about numbers of possible
intersection points between the boundary of the square and the circumference of the circle.
Even an "amateur" mathematician can consider cases and summarise the possibilities, without having to explore all possible sets of coordinates for the corners of the triangle or location and radius
of the circle. Making discoveries such as the possibility of at most eight points of intersection between a square and circle requires more than the ability to "run" the model with particular values.
It needs a mechanism that can inspect a whole "space" of configurations, including infinitely many possibilities.
Modern mathematicians can prove such results using formal mechanisms developed in the last century and a half, but ancient mathematicians knew nothing about those ways of doing mathematics. Yet they
made an amazing collection of discoveries, leading to what is probably the single most important book ever published, on this planet -- Euclid's Elements.
Making progress with these ideas will require a new sort of education for young psychologists, biologists, etc.
Several more examples are discussed in this paper, and other papers it references on this web site:
(Also PDF)
Some (Possibly) New Considerations Regarding Impossible Objects
Their significance for mathematical cognition,
and current serious limitations of AI vision systems.
An analysis of the role of investigation of what is possible and how it is possible in the advance of science was presented in Chapter 2 of my 1978 book:
The Computer Revolution in Philosophy
CHAPTER 2: WHAT ARE THE AIMS OF SCIENCE?
And in this (currently unfinished) draft paper on explanations of possibilities, extending Chapter 2:
Using construction kits to explain possibilities
(Construction kits generate possibilities)
Some of the ideas were developed further in this 1996 paper:
Actual Possibilities, in Principles of Knowledge Representation and Reasoning: Proc. 5th Int. Conf. (KR `96),
Eds. L.C. Aiello and S.C. Shapiro, 1996, pp. 627--638,
Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham | {"url":"https://www.cs.bham.ac.uk/research/projects/cogaff/misc/kenneth-craik.html","timestamp":"2024-11-11T16:19:18Z","content_type":"text/html","content_length":"10235","record_id":"<urn:uuid:a24e22d6-6931-44b5-99a5-572df113694f>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00452.warc.gz"} |
1. Signals
A signal is a function that conveys information about the behavior or attributes of some phenomenon. In digital electronics, we mainly deal with two types of signals: analog and digital signals.
1.1 Analog Signal
An analog signal is a continuous signal that changes over time. It can have an infinite number of values in a range. For example, the sound produced by human speech, temperature variations, etc.
1.2 Digital Signal
A digital signal is a discrete-time signal which only has two states: HIGH (usually representing a binary 1) and LOW (representing a binary 0). Digital signals are more resilient to noise, and hence,
more reliable for transmitting information. They are fundamental to digital electronics and computer processing.
1.3 Discrete Signal
A discrete signal is a physical quantity that is not continuous, i.e., it changes only at specific times and has only a certain set of possible values. This is in contrast to analog signals which are
time-variant and continuous.
1.4 Clock Signal and Clock Pulse
A clock signal is a type of signal that oscillates between a high and a low state and is used to coordinate the actions of two or more circuits. A clock pulse is a specific type of digital signal
that periodically transitions between two levels, often used as a timing base for synchronous digital circuits.
2. Introduction to Digital Systems
A digital system is a system that handles digital signals. It includes devices like computers, calculators, and digital watches that perform calculations and operations in binary form. This form
consists of data in the format of zeros and ones, where each of these digits is considered a bit. These systems have digital logic gates to perform logical and arithmetic operations.
3. Number System
A number system is a way to represent numbers. In digital electronics, we primarily work with the Binary, Decimal, Octal, and Hexadecimal number systems. Conversion between these number systems is
essential in digital electronics.
3.1 All Conversions
It's important to know how to convert between binary, decimal, octal, and hexadecimal number systems. While conversion between decimal and binary/octal/hexadecimal forms involves division and
multiplication by 2, 8, and 16 respectively, conversion between binary and octal/hexadecimal is more straightforward as they are all powers of two.
4. Binary Arithmetic
Binary arithmetic is similar to decimal arithmetic, the only difference being that only two digits, 0 and 1, are used in binary arithmetic. We have four main operations in binary arithmetic.
4.1 Addition
Binary addition follows these rules:
• 0 + 0 = 0
• 0 + 1 = 1
• 1 + 0 = 1
• 1 + 1 = 0 (with a carry of 1)
4.2 Subtraction
Binary subtraction follows these rules:
• 1 - 0 = 1
• 0 - 0 = 0
• 0 - 1 = 1 (borrow 1)
• 1 - 1 = 0
4.3 Multiplication
Binary multiplication is like logical AND operation. The rules are:
• 0 * 0 = 0
• 0 * 1 = 0
• 1 * 0 = 0
• 1 * 1 = 1
4.4 Division
Binary division is the repeated process of subtraction. The dividend is subtracted from the divisor until the dividend is less than the divisor, at which point the remainder and quotient are noted.
4.5 1's Complement and 2's Complement
The 1's complement of a binary number is the number that results from flipping all bits in the original binary number. The 2's complement is the binary number obtained by adding 1 to the 1's
complement of a binary number. The 2's complement is often used to represent negative numbers in binary.
4.6 Subtraction using 2's Complement
The subtraction of binary numbers can be done by the addition of 2's complement. The minuend remains the same, but the subtrahend is replaced by its 2's complement, and then addition is performed. If
carry is generated, it is discarded.
5. Logic Gates
Logic gates are the basic building blocks of any digital system. They are used to implement boolean logic, which is the backbone of digital electronics. There are seven basic logic gates: AND, OR,
XOR (Exclusive OR), NOT, NAND (NOT AND), NOR (NOT OR), and XNOR (Exclusive NOR).
5.1 Conversion, IC number in Proteus, Truth Table
Every logic gate has a corresponding Integrated Circuit (IC) number which can be used to simulate the gate in Proteus software. For example, the IC number for the AND gate is 7408. The truth table
represents the relationship between the input and the output of a gate.
6. Adders and Subtractors
Adders and subtractors are key components of digital computers. They are used to perform addition and subtraction operations, respectively.
6.1 Half Adder and Half Subtractor
A half adder is a combinational circuit that performs the addition of two bits and outputs a SUM and a CARRY. A half subtractor performs subtraction of two bits and outputs a DIFFERENCE and a BORROW.
6.2 Full Adder and Full Subtractor
A full adder is a combinational circuit that performs the addition of three bits and outputs a SUM and a CARRY. It is used when there is a carry from the previous bit in a multi-bit addition
operation. A full subtractor performs subtraction of three bits (minuend, subtrahend, and borrow) and outputs a DIFFERENCE and a BORROW. It is used when there is a borrow from the previous bit in a
multi-bit subtraction operation.
7. Multiplexer and Demultiplexer
A multiplexer (MUX) is a combinational circuit that selects binary information from one of many input lines and directs it to a single output line. Selection of a particular input line is controlled
by a set of selection lines. Conversely, a demultiplexer (DEMUX) is a combinational circuit that performs the reverse operation of multiplexing. It takes a single input and routes it to one of
several outputs.
8. Encoder and Decoder
An encoder is a combinational circuit that converts binary information from a set of inputs to a unique binary code at the output. A decoder is a combinational circuit that converts the binary
information from a single input to multiple outputs.
9. Combinational and Sequential Circuits
Combinational circuits are defined as the time-independent circuits which do not depend upon previous inputs to generate any output are termed as combinational circuits. Sequential circuits are those
which are dependent on clock cycles and depends on present as well as past inputs to generate any output.
Did not understand? Try this version instead!
Make sure to understand the concepts and definitions well. Try to draw out the circuits and truth tables for a clearer understanding. All the best for your VIVA! | {"url":"https://dmj.one/edu/su/course/csu1289/class/revision","timestamp":"2024-11-08T20:15:58Z","content_type":"text/html","content_length":"14852","record_id":"<urn:uuid:f57443d8-5ce3-47e8-9c28-130a366c57c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00604.warc.gz"} |
Transport by truck Archives - Deni Internacional
One-way pricing is now always easy for trucking companies, especially when there are customer that have different needs to be met starting from roll containers up to kitchen appliances.
Hence, trucking companies still prefer to work with LDM and they most commonly do it manually.
So let’s find out on what a loading meter is and how it is being calculated.
Let’s take a read!
Loading Meter Definition
A loading meter is the standard unit of measurement for transport by truck, and are being used as a calculation unit for the goods that have to be transported but cannot be stacked.
In other words, a LDM is equal to one meter of loading space of the truck’s length.
If we take a look that the approximate width of a truck is 2.4 meters we can determine that one loading meter is approximately 2.4 m².
The loading meter conversion factors include:
• 1 loading meter often corresponds to 1850 kgs.
• 1 euro pallet = 0.4 loading meter
• 1 block pallet = 0.5 loading meter.
Thereupon, the LDM conversion factor can vary from carrier to carrier but as well it can vary from country to country.
General Size of Trucks in Europe
The general length for trucks in Europe is 13.6 metres.
In the general length category can be classified:
• Box trucks;
• Refrigerated trucks;
• Flatbed trucks;
On the other hand, the general height for trucks in Europe varies from 2.55 – 2.70 meter and the width is usually 2.45 meter.
Is There a Maximum Weight Per Loading Meter?
Different trucking companies have different conversion factors/rates, accordingly, those rates are being calculated based on the chargeable weight and in a large hand it depends on the final
Yet, regardless of how trucking companies calculate their conversion rates, the weight should not exceed:
Size: 0.80 x 1.20 metres (0,4 loading metres)
Max. height: 2.20 metres
Weight: Trailer with 2 axles 18.0 t
• Trailer with 3 axles 24.0 t
• An articulated vehicle with 4 axles (2+2), with a distance between the axles of the semi-trailer of 1.30m to 1.80m 36.0t
How Do I Calculate a Load Meter?
The formula for calculating the load meters for a load carrier is as follows:
As previously mentioned, trucking companies can calculate their prices differently, but the general LDM is being calculated by the following formula:
• Load carrier length x load carrier width / 2.4 = load meter_ (e.g. for a Euro pallet: 1.2 m x 0.8 m / 2.4 m = 0.4 ldm)
• Length x width / 2.4 / stacking factor = loading meter (for example, for a Euro pallet: 1.2 m x 0.8 m / 2.4 m / 2 = 0.2 ldm)
• Length x width / 2.4 / stacking factor * number of load carriers = loading meter (e.g. for a Euro pallet: 1.2 m x 0.8 m / 2.4 m / 2 * 16 = 3.2 ldm)
Final Thoughts
As you had the chance to read, the loading meter is used by trucking companies as a unit for measuring the goods that have to be transported (more precisely the goods that cannot be stacked).
It turns out that this method of measuring the goods actually compensates the lost volume in the trailer. | {"url":"https://www.deniint.com.mk/tag/transport-by-truck/","timestamp":"2024-11-12T06:23:09Z","content_type":"text/html","content_length":"78927","record_id":"<urn:uuid:bc573493-1cf2-4173-b44c-7cdac623675a>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00515.warc.gz"} |
Intuitive physical reasoning about objects’ masses transfers to a visuomotor decision task consistent with Newtonian physics
While interacting with objects during every-day activities, e.g. when sliding a glass on a counter top, people obtain constant feedback whether they are acting in accordance with physical laws.
However, classical research on intuitive physics has revealed that people’s judgements systematically deviate from predictions of Newtonian physics. Recent research has explained at least some of
these deviations not as consequence of misconceptions about physics but instead as the consequence of the probabilistic interaction between inevitable perceptual uncertainties and prior beliefs. How
intuitive physical reasoning relates to visuomotor actions is much less known. Here, we present an experiment in which participants had to slide pucks under the influence of naturalistic friction in
a simulated virtual environment. The puck was controlled by the duration of a button press, which needed to be scaled linearly with the puck’s mass and with the square-root of initial distance to
reach a target. Over four phases of the experiment, uncertainties were manipulated by altering the availability of sensory feedback and providing different degrees of knowledge about the physical
properties of pucks. A hierarchical Bayesian model of the visuomotor interaction task incorporating perceptual uncertainty and press-time variability found substantial evidence that subjects adjusted
their button-presses so that the sliding was in accordance with Newtonian physics. After observing collisions between pucks, which were analyzed with a hierarchical Bayesian model of the perceptual
observation task, subjects transferred the relative masses inferred perceptually to adjust subsequent sliding actions. Crucial in the modeling was the inclusion of a cost function, which
quantitatively captures participants’ implicit sensitivity to errors due to their motor variability. Taken together, in the present experiment we find evidence that our participants transferred their
intuitive physical reasoning to a subsequent visuomotor control task consistent with Newtonian physics and weighed potential outcomes with a cost functions based on their knowledge about their own
Author Summary
During our daily lives we interact with objects around us governed by Newtonian physics. While people are known to show multiple systematic errors when reasoning about Newtonian physics, recent
research has provided evidence that some of these failures can be attributed to perceptual uncertainties and partial knowledge about object properties. Here, we carried out an experiment to
investigate whether people transfer their intuitive physical reasoning to how they interact with objects. Using a simulated virtual environment in which participants had to slide different pucks into
a target region by the length of a button press, we found evidence that they could do so in accordance with the underlying physical laws. Moreover, our participants watched movies of colliding pucks
and subsequently transferred their beliefs about the relative masses of the observed pucks to the sliding task. Remarkably, this transfer was consistent with Newtonian physics and could well be
explained by a computational model that takes participants’ perceptual uncertainty, action variability, and preferences into account.
Citation: Neupärtl N, Tatai F, Rothkopf CA (2020) Intuitive physical reasoning about objects’ masses transfers to a visuomotor decision task consistent with Newtonian physics. PLoS Comput Biol 16
(10): e1007730. https://doi.org/10.1371/journal.pcbi.1007730
Editor: Ulrik R. Beierholm, Durham University, UNITED KINGDOM
Received: February 10, 2020; Accepted: August 24, 2020; Published: October 19, 2020
Copyright: © 2020 Neupärtl et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original author and source are credited.
Data Availability: All human data are available on Github: https://github.com/RothkopfLab/ploscompbio_pucks/.
Funding: The authors received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Whether sliding a glass containing a beverage on a counter top in your kitchen or shooting a stone on a sheet of ice in curling, acting successfully in the world needs to take physical relationships
into account. While humans intuitively sense an understanding of the lawful relationships governing our surroundings, research has disputed that this is indeed the case [1, 2]. Instead, human
judgements and predictions about the dynamics of objects deviate systematically from the laws of Newtonian mechanics. Past research has interpreted these misjudgments as evidence that human
judgements violate the laws of physics and that they instead use context specific rules of thumb, so called heuristics [2–4]. E.g., when judging relative masses of objects such as billiard balls
based on observed collisions, people seem to use different features of motion in different contexts and end up with erroneous predictions [2].
But recent research has provided a different explanation of human misjudgments on the basis of the fact that inferences in general involve sensory uncertainties and ambiguities, both in perceptual
judgements [5, 6] as well as in reasoning and decision making [7, 8]. Therefore, physical reasoning needs to combine uncertain sensory evidence with prior beliefs about physical relationships to
reach predictions or judgements [9–13]. By probabilistically combining prior beliefs and uncertain observations, a posterior probability about the unobserved physical quantities is obtained.
Judgements and predictions are then modeled as based on these probabilistic inferences. Thus, deviations from the predictions of Newtonian physics in this framework are attributed to perceptual and
model uncertainties.
This framework of explaining reasoning about physical systems on the basis of Newtonian mechanics and perceptual uncertainties has been referred to as the noisy Newton framework (see e.g. [14] for a
review). It has been quite successful at explaining a range of discrepancies between predictions of Newtonian physics and human predictions for various perceptual inference tasks, including subjects’
biases in judgements of mass ratios when observing simulated collisions of objects, if perceptual uncertainties are taken into account [9, 15]. Additionally, the noisy Newton framework can also
explain why human judgements depend on experimental paradigms, because tasks differ in the availability of knowledge about objects’ properties [16]. As an example, this suggests an explanation for
the fact that judgements about physical situations based on a static image representing a situation at a single timepoint have usually been reported to deviate more from physical ground truth
compared to richly animated stimuli [17], which additionally allow to estimate objects’ velocities. Nevertheless, some persistent failures of intuitive physical reasoning have been suggested to be
caused by distinct systems of reasoning compared to the more calibrated physical reasoning underlying visuomotor tasks [16].
While physical reasoning has been studied predominantly using tasks in which subjects needed to judge physical quantities or predict how objects continue to move, much less is known about how
intuitive physical reasoning guides actions. Commonly, experimental paradigms have asked subjects to judge physical properties in forced choice paradigms such as relative masses in two-body
collisions [3, 9, 11, 12], predict the future trajectory of an object when no action is taken based on an image of a situation at a single timepoint, such as a pendulum [16], a falling object [18],
or whether an arrangement of blocks is stable [12]. Other experiments have asked subjects to predict a trajectory of objects [19] or their landing position [10] after seeing an image sequence, but
again without subjects interacting with the objects in the scene. Recent studies have also investigated more complex inference problems in which subjects needed to learn multiple physical quantities
by observing objects’ dynamics [13] or quantified how much entropy reduction for forced choice questions about physical properties of objects was achieved by interactions with objects in a scene [20
]. By contrast, the literature on visuomotor decisions and control [21–24] has seldom investigated the relationship between visuomotor decisions, actions, and control and physical reasoning. Notable
exceptions are studies which have investigated how humans use internal models of gravity in the interception of moving targets [25] and how exposure to 0-gravity environments [26] changes this
internal model. Nevertheless, these studies did not investigate the inference and reasoning of unobservable physical quantities. Other studies have investigated how perceptual judgements and
visuomotor control in picking up and holding objects in the size-weight and material-weight illusions can be dissociated [27, 28]. Nevertheless, these studies did not investigate the relationship of
intuitive physical reasoning and visuomotor actions.
Here we investigate how human subjects guide their actions based on their beliefs about physical quantities given prior assumptions and perceptual observations. Thus, we combine work on intuitive
physics [9, 11, 12] and visuomotor control [21, 23, 25, 27]. First, do humans use the functional relationships between physical quantities as prescribed by Newtonian mechanics in new task situations?
Specifically, when sliding an object on a surface the velocity with which the object needs to be released needs to scale linearly with the object’s mass but with the square-root of the distance the
object needs to travel. Second, when interacting with simulated physical objects, do humans interpret differences in objects’ behavior in accordance with physical laws? Specifically, when two objects
slide according to two different non-linear relationships, subjects may attribute these differences to the lawful influences of unobserved physical quantities such as mass. Third, after having
observed collisions between objects do humans adjust their actions to be consistent with the inferred relative masses of those objects? Specifically, while it is known that subjects can judge mass
ratios of two objects when observing their collisions, it is unclear whether they subsequently use this knowledge when sliding those objects. To address these questions, subjects were asked to shoot
objects gliding on a surface under the influence of friction to hit a target’s bullseye in a simulated virtual environment. The simulated puck was accelerated by subjects’ button presses such that
the duration of a button press was proportional to the puck’s release velocity. A succession of four phases investigated what prior assumptions subjects had about the relationships between their
actions and physical quantities, whether they could learn to adjust their actions to different objects when visual feedback about their actions was available, whether they would interpret the
differences in objects’ behavior in accordance with physical laws, and whether they could transfer mass ratios inferred from observing collisions to adjust their actions accordingly.
Analysis of the data shows that subjects adjusted their press-times depending on the distance the pucks had to travel. Furthermore, subjects adjusted the button press-times to get closer to the
target within a few trials when visual feedback about the puck’s motion was available. Because perceptual uncertainties and motor variability can vary substantially across subjects and to take
Weber-Fechner scaling into account, we subsequently analyzed the data with a hierarchical Bayesian interaction model under the assumption that subjects used a Newtonian physics based model. We
compared this model to the prediction of a linear heuristics model. Importantly, because subjects needed to adjust their button press-times, the model needs to account for perceptual judgements and
the selection of appropriate actions. We include a comparison of three cost functions to investigate subjects’ selection of press-times. Based on this model of the sliding task, we find evidence that
subjects used the functional relationship between mass and distance of pucks as prescribed by Newtonian physics and readily interpreted differences between two pucks’ dynamics as stemming from their
unobserved mass. Moreover, biases in subjects’ press-times can be explained as stemming from costs for not hitting the target, which grow quadratically with the distance of the puck to the target’s
bullseye. After observing 24 collisions between an unknown puck and two pucks with which subjects had previously interacted, we found evidence that participants transferred the inferred relative
masses to subsequent sliding actions. The mass beliefs from observing the collisions were inferred by a hierarchical Bayesian observation model. Thus, intuitive physical reasoning transfers from
perceptual judgements to control tasks and deviations from the predictions of Newtonian physics are not only attributable to perceptual and model uncertainties but also to subjects’ implicit costs
for behavioral errors.
Materials and methods
Twenty subjects took part in the experiment. All participants were undergraduate or graduate students recruited at the Technical University of Darmstadt, who received course credit for participation.
All experimental procedures were carried out in accordance with the guidelines of the German Psychological Society and approved by the ethics committee of the Technical University of Darmstadt.
Informed consent was obtained from all participants prior to carrying out the experiment. All subjects had normal or corrected to normal vision and were seated so that their eyes were approximately
40 cm away from the display and the monitor subtended 66 degrees of visual angle horizontally and 41 degrees vertically. In the vertical direction the monitor had a resolution of 1080 pixels, which
corresponded to a distance of approximately 11.5m in the simulation. Four participants have been excluded from the analysis (three due to incorrect task execution and one due to incomplete data; f =
9, m = 11, age = [18, 27], median = 22.5, mean = 22.25).
Experimental design and data
Participants were instructed to shoot a puck in a virtual environment into the bullseye of a target, similar to an athlete in curling. The shot was controlled by the duration of pressing a button on
a keyboard. Participants were told that they were able to adjust the force, which initially was going to accelerate the puck and thus the initial velocity of the puck, by the duration of their press.
However, they were not explicitly told about the linear relationship between the press time and the initial velocity. Additionally, participants were told that realistic friction was going to slow
down the puck while sliding on the simulated surface. The general objective of the experimental design was to investigate whether subjects adjusted their shooting of the pucks in a way that was in
line with the physical laws governing motion under friction. Specifically, the magnitude of the initial impulse exerted on the puck determines how far the puck slides on the surface. Thus, subjects
needed to adjust the duration of a button press according to the distance between the randomly chosen initial position of the puck and the target on each trial. The different experimental phases
allowed investigating subjects’ prior beliefs about the puck’s dynamics, their adjustments of button presses when these beliefs were updated given visual feedback of the puck’s motion, and the
potential transfer of knowledge about relevant object properties to the control of the puck from perceiving object collisions. Therefore we designed a task with two conditions and four consecutive
experimental phases, which differed in the availability of previous knowledge and feedback.
Laws of motion governing the puck’s motion.
At the beginning of each trial, subjects saw the fixed target and a puck resting at a distance chosen uniformly at random between one and five meters from the target’s bullseye. To propel the puck
toward the target, subjects needed to press a button. To model the relationship between the button press and the puck’s motion, we reasoned as follows. Human subjects have been shown to be able to
reason accurately about the mass ratio of two objects when observing elastic collisions between them [9]. In elastic collisions, according to Newtonian laws, the impulse transferred by the collision
is proportional to the interaction duration with a constant force. In other words, the duration of the interaction with a constant force leads to a linearly scaled impulse. Given a constant mass m of
a puck and assuming a constant surface friction coefficient μ, Newtonian physics allows deriving the button press-time T[press] required to propel the puck to the target at a distance Δx: (1) with
gravitational acceleration g and a constant force F. Here, the constant force F is being applied by the interaction, i.e. the button press of duration T[press], which is physically equivalent to an
elastic collision with an object. Note that this formulation of the interaction has the additionally intuitive consequence that the release velocity of the puck scales linearly with the duration of
the button press (see S1 Appendix “Puck Movement”). The second expression clarifies that the press-time scales linearly with the mass of the puck, while it scales with the square-root of the distance
to the target. Obviously, this relationship assumes perfect knowledge of all involved quantities. The movement of the puck was implemented by simulating the equivalent difference equations for each
frame given the friction and the velocity of the preceding frame (detailed derivations are provided in the S1 Appendix, “Puck Movement”).
Phase 1: Prior beliefs.
In the first phase, we wanted to investigate, which functional relationship subjects would use a priori to select the duration of button presses depending on the perceived distance between the puck
and the target. A black puck with unknown mass m was placed at a distance to the target drawn uniformly at random. Participants received no further information about the puck or the environment.
Participants were instructed to press the button in a way so as to bring the puck into the target area, but after pressing the button for a duration t^pre and releasing it the screen turned black to
mask the resulting movement of the puck. This screen lasted for at least half a second until the participant started the next trial by button press. All participants carried out fifty trials. Thus,
the collected data allowed relating different initial puck distances to the press-times subjects selected based on their prior beliefs.
Phase 2: Visual feedback.
The second phase was designed to investigate how participants adjusted their button press-times in relation to the simulated masses of pucks and their initial distances to the target when visual
feedback about the pucks’ motion was available. To this end, participants carried out the same puck-shooting task but with two different pucks, as indicated by distinct surface textures (yellow
diamond versus five red dots, see Fig 1b, Feedback). The two pucks were alternating every four trials with a total number of two-hundred trials. The two different pucks were simulated with having
differing masses, resulting in different gliding dynamics. In this condition, participants received visual feedback about their actions as the pucks were shown gliding on the surface from the initial
position to the final position depending on the exerted impulse. Thus, because the distances traveled by the two pucks for different initial positions as a function of the button press-times t^pre
could be observed, participants could potentially use this feedback to adjust their press-times on subsequent trials. Note that the two pucks were only distinguished by a color cue and no cue about
mass was given apart from the different dynamics. Half the participants were randomly assigned to the ‘light-to-heavy’ condition, in which the two pucks had masses of 1.5 kg and 2.0 kg, and the other
half of the participants were assigned to the ‘heavy-to-light’ condition, in which the pucks had masses of 2.0 kg and 2.5 kg.
(A) Single trial illustration. Target area and puck are presented on a monitor from bird’s-eye perspective. Releasing the pressed button accelerates the puck by applying a force, which is
proportional to the press-time. In trials without feedback the screen turned black after button release, while in feedback trials participants were able to see the puck moving according to simulated
physics. (B) Four phases of the experiment. In the ‘prior’ phase, no feedback about puck motion was available, whereas in the ‘feedback’ phase subjects obtained visual feedback about the pucks’
motion. Two pucks with different colors and correspondingly different masses were simulated. In the ‘no feedback’ phase subjects obtained a new puck as indicated by a new color and obtained no
feedback. In the last phase, subjects first watched 24 collisions between the new puck and the pucks they had interacted with in the ‘feedback’ phase before interacting again with the puck. Note that
the puck of the ‘no feedback’ and ‘collisions + no feedback’ phase are identical.
Phase 3: No feedback.
In phase three, we wanted to investigate how having observed the sliding of the pucks in phase two influenced participants’ press-times with an unknown puck. Subjects were asked to shoot a new puck
they had not seen before to the target without visual feedback, as in the first experimental phase, for one-hundred trials (Fig 1B, No Feedback). The texture of the puck consisted of five concentric
rings. For participants in the ‘light-to-heavy’ condition, the new puck had a mass of 2.5 kg whereas for participants in the ‘heavy-to-light’ condition the new puck had a mass of 1.5 kg. However,
different from phase one, in which subjects had not obtained feedback about the pucks’ motion, by phase three participants had already interacted with three pucks and obtained visual feedback about
the motion of two pucks. Importantly, participants had received feedback about the non-linear nature of gliding under friction in phase two, albeit scaled differently for the two pucks. Thus, this
experimental phase allowed investigating, whether subjects use the functional mapping from puck distances to press-times prescribed by Newtonian physics and what assumptions about the mass of an
unknown puck they used.
Phase 4: Collisions & no feedback.
With the final experimental phase we wanted to investigate, whether participants can use the relative mass ratios inferred from observing collisions between two pucks to adjust their subsequent
actions with one of those pucks. At the beginning of phase four, participants watched a movie of twenty-four collisions between two pucks. One was always the puck with unknown mass used in phase
three (without feedback; five rings) (see Fig 1B, Collisions No Feedback), while the second puck was one of the two pucks presented in phase two (see Fig 1B, Feedback). Each collision thus showed one
of the two previously seen pucks from phase two selected at random colliding with the puck from phase three with a total of twelve collision with each of the two known puck. By observing these
elastic collisions participants were expected to learn the mass ratios between pucks, as shown in previous research [9, 15]. Note that the pucks were simulated without the influence of friction in
these collisions, ensuring that participants only obtained a cue about relative masses and not about the dynamics under friction for the puck from phase 3. After watching these collisions, subjects
were asked to shoot the puck from phase three again without obtaining visual feedback, as in phases one and three, for one-hundred trials. Thus, subjects interacted with the same puck as in phase
three but had now seen the collisions of this puck with the two pucks they had interacted with. This experimental phase therefore allowed investigating, whether subjects used the learned mass ratios
and transferred them to the control task to adjust their press-times. Importantly, having learned the mass ratios between pucks needs to be transferred to the press-times, which differ in a
physically lawful way depending on the initial distance of the pucks to the target. As the two pucks from phase two of the experiment were only distinguished by color, such a transfer indicates that
subjects had attributed the different dynamics to their masses consistent with Newtonian physics. Thus, if subjects used an internal model of physical relationships, they should be able to adjust
their press-times for the new puck without ever having seen it glide.
Behavioral results
As subjects did not receive visual feedback about the consequences of their button presses in the first phase of the experiment, the button press-times reflect the prior assumptions they brought to
the experiment. Indeed, subjects’ press-times t^pre grew with the initial distance between the puck and the target. The button press times for all phases of the experiment are shown in Fig 2. The
correlation between t^pre and the initial distance was 0.482 (p < 0.001). However, the functional relationship according to Newtonian physics prescribes a scaling of the press-time according to the
square-root of the distance as specified in Eq 1. The correlation between press-times t^pre and the square-root of the initial distance was 0.478 (p < 0.001). We expected the standard deviation of
press-times to scale with the the mean of press-times in accordance with the Weber-Fechner scaling. This was confirmed by subdividing the range of distances into three intervals of the same size,
i.e. [1, 2.33]m, (2.33, 3.66]m, and (3.66, 5]m and computing the standard deviation of press-times within these three intervals resulting 2.97 × 10^−1 s, 4.19 × 10^−1 s, and 5.69 × 10^−1 s.
Press-times for all participants by condition and experimental phase are shown with data points in black and Newtonian relationship with perfect knowledge about the involved parameters in blue. The
top row shows the data of subjects in the light-to-heavy condition and the bottom row shows the data of subjects in the heavy-to-light condition. (A) Press-times of participants in the first phase
(“prior”), (B) second phase (“feedback”) for the yellow puck, (C) second phase (“feedback”) for the red puck, (D) third phase (“no feedback”), and (E) last phase (”collisions and no feedback”) after
having seen 24 collisions.
In phase two, participants adjusted their press-times based on observing the gliding of the pucks after button presses. Performance was evaluated by calculating the mean absolute distance of pucks to
the target after sliding. The mean absolute error over the entire phase was 0.928m (0.0177m SEM), see Fig 3. Accordingly, the correlation between t^pre and the initial distance was 0.644 (p < 0.001)
and with the square-root of distance 0.646 (p < 0.001). The performance improved between the first eight trials at the beginning of the phase (mean absolute error 1.76m) and the last eight trials at
the end of the phase (mean absolute error 0.89m). The adjustment of pressing times was achieved on average after only a few trials, as revealed by a change-point analysis [29], which showed that
after six trials the average endpoint error of the puck was stable (see S1 Appendix, “Change point detection”). Note that this includes four trials with one puck of the same mass and two trials of
the second puck with a different mass.
(A) Participants’ performance by experimental phase as quantified by pucks’ average absolute error in final position. The number of the ring at which the center of the puck stopped was used for
coding performance, e.g. 1 and 3 in the shown cases. (B) Aggregated final positions of pucks versus initial distance of pucks to target. Phases of the experiment are separated by columns and
conditions are separated by rows. The line of equality representing final positions prescribed by the Newtonian model with perfect knowledge of all parameters is shown in blue.
Phase three involved shooting a new puck, which subjects had previously not interacted with, without visual feedback. Note that the puck was identical to the puck subjects later interacted with in
phase four after seeing the collisions. This phase therefore allowed testing whether subjects used the non-linear scaling of the press-times depending on initial distance of the puck after having
observed the pucks’ motion in phase two. As expected, performance was significantly lower with the new puck without obtaining visual feedback. Mean absolute error was 2.87m (0.104m SEM), see Fig 3.
The correlation between t^pre and the initial distance was 0.599 (p < 0.001) while the correlation between t^pre and the square-root of the initial distance was 0.603 (p < 0.001). Given that subjects
had already obtained feedback about two pucks in phase two but did not obtain feedback in this phase, their press-time distribution could potentially be the mixture of the two press-time
distributions of the two previous pucks, which were different in the conditions ‘light-to-heavy’ and ‘heavy-to-light’. We compared the combined press-time distributions of phase two with the
press-time distribution of phase three for each condition with the Kolmogorov-Smirnov test. Press-times in phase three reflected the behavior of both previous pucks combined for condition
‘heavy-to-light’ (Kolmogorov-Smirnov, D = 0.0538, p = 0.092, see S1 Appendix, “Kolmogorov tests—press-times in phase two & phase three”) and approximately for condition ‘light-to-heavy’
(Kolmogorov-Smirnov, D = 0.156, p < 0.001, see S1 Appendix, “Kolmogorov tests—press-times in phase two & phase three”).
At the beginning of phase four subjects watched a movie showing 24 collisions between the pucks from phase two, for which visual feedback of the gliding had been available, and the unknown puck from
phase three. Thus, this condition allowed testing whether observation of the collisions was used to infer the mass ratios of pucks and to subsequently adjust the pressing times for that puck from
phase three. Performance was significantly higher than in phase three with a mean absolute error of 1.63m (0.0440m SEM), although the puck was the same as in phase three and although subjects did not
obtain visual feedback, see Fig 3. This effect was significant for both conditions as tested with Wilcoxon Signed Rank test for the absolute error (light-to-heavy: W = 339300, p = 0.018;
heavy-to-light: W = 441330, p < 0.001). This shift towards longer and shorter press-times in the light-to-heavy and heavy-to-light condition respectively is depicted in S1 Appendix, “Press-time
distributions”. The shift was statistically significant by testing with a Wilcoxon Signed Rank test for shorter and longer press-times for both conditions respectively (light-to-heavy: W = 158580, p
< 0.001; heavy-to-light: W = 490620, p < 0.001). For more detail of the error distributions across phases two to four see S1 Appendix, “Distance error distributions”.
Taken together, these analyses suggest, that subjects adjusted their press-times both depending on the distance of the pucks to the target and depending on the pucks’ masses used in the simulation.
Furthermore, the analyses provide a very weak initial hint that subjects may have scaled their press-times with respect to mass and with a non-linear function of initial distance after having
obtained visual feedback about the pucks’ motion. Finally, observing collisions between pucks lead subjects to adjust their press-times even without obtaining visual feedback. In the following
section we provide two computational generative models, one for the sliding task and one for the collision observation task to quantitatively analyze participants’ press-times in terms of perceptual,
physical, and behavioral quantities.
Interaction model results
The above analyses give only a weak indication that our participants were able to adjust their press-times consistent with Newtonian physics and that they transferred the inferences about relative
mass ratios from observing collisions to the press-times, and are limited in several ways. First, perceptual variables such as the initial distance of the puck to the target were uncertain for our
subjects, which is not quantitatively entering the correlation analyses of press times with physical predictions under the assumption of perfect knowledge of all parameters. Secondly, our
participants had to press a button to propel the puck. For longer press-times, subjects are known to demonstrate variability in pressing times, which scales linearly with its mean and which may vary
considerably between subjects. Thirdly, while subjects pressed a button and observed the simulated motion of the pucks from a bird’s eye view on a monitor, it would be desirable to be able to
estimate subjects’ belief about the masses of the different pucks implicit in their press-times. Therefore, we devised a hierarchical Bayesian model of the full visuomotor decision task to provide a
computational account of our subject’s behavior.
The Bayesian network model in Fig 4 expresses the relationship between variables on a subject-by-subject and trial-by-trial basis. While as experimenters we have access to the true initial distance x
used in the simulation of the puck and displayed on the monitor as well as the measured press-time t^pre chosen by the subject on a particular trial i, subjects themselves do not know these values.
Instead, each participant j has some uncertain percept of the puck’s distance and, potentially, some belief about the mass m[j,k] of the puck, which depends on its color and the phase of the
experiment k. This structure of the graphical model from the experimenter’s view leads to the following joint distribution p(d, l) with observed data d = {x, t^pre} and latent variables l = {x^per, σ
^x, m, σ^t}, where trial, puck and participant subscripts were omitted for clarity: (2) Here, p(x) is known to the experimenter as the actual distribution of distances to target used in the
simulations. By contrast, the distribution of perceived distances p(x^per|x, σ^x) is the noisy perceptual measurement by our participants described as a log-normal distributed variable, ensuring that
samples are strictly positive and including uncertainty scaling according to Weber-Fechner [30]. p(σ[x]) describes the prior distribution over possible values of this perceptual uncertainty.
Participants’ prior beliefs about the masses of the different pucks p(m) are described by gamma distributions, which entail the constraint that masses have to be strictly positive. The log-normal
distribution of actually measured press-times p(t^pre|x^per, m, σ^t) depends on the noisy perception of the distance to target x^per, the belief about the mass of the object and the variability in
acting, which is the press-time variability σ^t with its gamma distribution p(σ^t). We additionally summarize all constant factors, i.e the surface friction coefficient, the gravitational
acceleration, the constant interaction force in the parameter θ.
The model expresses the generative process of observed press-times across trials i, participants j, and pucks k including Weber-Fechner scaling given perceptual uncertainties of distance x[i,j] and
mass m[j,k] of the pucks and subjects’ press-time variability. The parameter values refer to the prior probability distributions. See the text for details.
The potential functional relationship between the perceived distance of the puck to the target and the required press-time is expressed in the deterministic node representing t^int in the Bayesian
network. We consider two possible functional relationships between the press-time and the distance to be covered: subjects may use a linear relationship between press-time and initial distance as a
simple heuristic approach: (3) or may use the square-root relationship as prescribed by Newtonian physics according to Eq 1: (4) As experimenters, we only have access to the observed data d, i.e. the
actual distances given the experimental setup and the measured press-times. We use Bayesian inference employing Markov-Chain Monte-Carlo to invert the generative model and infer the latent variables
describing subjects’ internal beliefs given the observed data d: (5)
However, modeling perception as inference may not be sufficient to describe our participants’ behavior and their selection of actions. Given a posterior over mass and distance describing the
perceptual belief of a subject on a particular trial, a specific press-time needs to be selected. In order to model this selection process we take action variability and potential cost functions into
account. Cost functions govern which action, here the press-time, should be chosen given a posterior belief. Specifically, the cost function quantifies how the decision process penalizes errors on
the task. This means that it is assumed that participants select an action that minimizes potential costs associated with missing the target. Loss functions, describing the rewards or costs for every
action in the action space, can have any arbitrary form, nonetheless we chose a set of three standard loss functions and compare their predictions: 0-1, absolute and quadratic loss functions. These
three canonical loss functions express subjects’ implicit preferences for reaching a decision about press-times based on a putative perceptual posterior: the 0-1 loss corresponds to penalizing
equally all deviations between the chosen value and the correct value, the absolute loss corresponds to penalizing deviations from the true value linearly, and the quadratic loss penalizes the
deviations quadratically. It can be shown that these loss functions lead to different decisions for a continuous variable with a non-symmetric distribution [31]. Applying these three cost functions
to a log-normal posterior results in the optimal decision being the MAP in case of the 0-1 loss function, the median for the absolute loss function, and the mean in case of the quadratic loss
function. Thus, assuming that humans do have costs for missing the target and associated policies to minimize these costs, leads to three different model versions for each model class (see S1
Appendix, “Implementation of cost functions”).
In order to evaluate participants’ behavior computationally we first utilized subjects’ data from phase two of the experiment to estimate their perceptual uncertainty and behavioral variability. We
chose to start with analyzing phase two for two reasons: first, if participants are able to use visual feedback about the pucks’ dynamics to adjust their press-times, predictions of the model with
the correct physical relationships should capture the behavior better than the linear heuristics model. Secondly, inferred values for latent variables describing visual uncertainty in distance
estimation and variability in press-times are less prone to be assigned additional uncertainty. Additional uncertainty arising in all other phases of the experiment due to the lack of visual feedback
should be assigned to the uncertainty about the mass or the linear scaling rather than to the variability of press-times in general. Therefore, by evaluating data from phase two “feedback” first,
values for the press-time variability and uncertainty in the perception of distances can be estimated for each participant.
First, we used the data of phase two “feedback” to investigate, which of the three loss functions best describes our participants’ data. In order to choose the appropriate cost function explaining
participants’ actions most accurately, we computed the press-times predicted by the linear heuristics and the Newtonian model and applied the three cost functions to both models. This was achieved by
using the inferred maximum a posteriori (MAP) values for the latent variables in both model classes, i.e. the mass m in the Newtonian and a linear factor in the heuristic linear model class. This
allowed calculating the residuals, i.e. the difference between subjects’ actual press-times and the predicted press-times for all six combinations of two models and three cost functions. The
residuals are shown as a function of the distance to the target in Fig 5. The strong correlation of residuals and distance to target indicates a systematic bias of the linear heuristics model,
whereas the weak correlation of the Newtonian model demonstrates its superiority in explaining the measured data. These relationships were tested with Spearman correlation tests for each model and
cost function. The data show highly significant correlations for all models (p < 0.001 in all cases; 0-1 loss function: ρ[New] = 0.167, ρ[lin] = −0.550; abs. loss function: ρ[New] = 0.124, ρ[lin] =
−0.643; quadratic loss function: ρ[New] = 0.0976, ρ[lin] = −0.686) and higher correlation in the linear model for each cost function (p < 0.001 in each case, with Bonferroni corrected α[crit] =
(A) Residuals were calculated for each participant and each puck in phase two (”feedback”) given the actual press-times and the best fits for the linear heuristics and the Newtonian model. Residuals
for both models were calculated for all three cost functions. (B) MAP estimates of the masses used by individual subjects inferred according to the Newtonian model for the the three cost functions.
Red and yellow pucks had different masses for subjects in the two conditions “heavy-to-light” and “light-to-heavy”.
Secondly, the posterior predictive distributions for press-times estimated from data in phase two (see S1 Appendix, “Posterior predictive checks for press-times”) match the actual behavior of the
participants more closely compared to the linear heuristics model. Kullback-Leibler divergence for each pair support this with divergence values at 0.0558 and 0.0851 for the Newtonian and linear
model, respectively. Not only did the Newtonian model capture participants’ press-times in phase two better than the linear heuristics model, but this also affected the inferred variabilities. While
perceptual uncertainty only varied marginally (see Fig 6(A)), the posterior distributions of the press-time variability show higher values for the linear model (see Fig 6(B)) compared to the
Newtonian model. This was confirmed by calculating a repeated measure ANOVA on the posterior distributions of press-time variability for both models, showing that the difference was highly
significant (F = 39.2, p < 0.001). This elevated level of uncertainty is necessary for the linear heuristics model to compensate for the diminished ability to capture the relationship of initial
distances and participants’ press-times. Therefore, in the following analysis we used the Newtonian model with quadratic cost, because it shows the lowest residual correlation, smallest divergence in
posterior predictive distributions of press-times, and smallest press-time variability.
(A) Inferred posterior distributions of perceptual uncertainty for the linear heuristics model and the Newtonian physics model. Dark green distributions display posterior distributions for the
Newtonian model class, dark blue ones for the linear model class. A separation into cost functions is not included since the different cost functions did not lead to significant differences. (B)
Inferred posteriors for individual press-time variability varied significantly between subjects between the two models. All but one participant show lower or equal values of variability regarding the
press-time for the Newtonian model class.
A consequence of selecting the quadratic cost function on the basis of the analyses of press-time residuals and posterior predictive distribution of press-times allows comparing the masses inferred
on the basis of participants’ behavior. Remarkably, posterior distributions inferred with data aggregated over participants only from phase two match actual masses implemented in the physical
simulations better for the quadratic cost function (see Fig 5(B) and S1 Appendix, “Latent masses by cost function: aggregated data from phase ‘feedback’”). In both conditions inferred beliefs about
the masses are closer to the actual masses implemented in the simulations when presuming that participants use a quadratic loss function. This was confirmed by testing for the absolute differences
between the posterior belief and the actual mass for each condition, puck and cost function. An ANOVA revealed highly significant differences (F = 486, p < 0.001) and post-hoc tests showed that the
posterior belief when using the quadratic cost function is the closest fit for all pucks (p < 0.001 condition light-to-heavy, yellow diamond puck; p = 0.002 red dots puck; p < 0.001 condition
heavy-to-light, yellow diamond puck; p < 0.001 red dots puck). This result also held at the individual participant levels as illustrated in Fig 5(B)). Thus, the quadratic cost function, which best
described participants’ press times, revealed that participants’ mass beliefs were more accurate compared to assuming other cost functions.
Subsequently, we used the MAP values of the inferred press-time variabilities for each subject as fixed values for the analyses of data of all experimental phases. The same applied for the MAP values
of the inferred perceptual uncertainties which did not differ across subjects or models (see Fig 6(A)) and therefore were set to one fixed value for all subjects. Note that the mean was 0.05m in
simulation space, which, given the current setup corresponded to approximately 4.7 pixels on the monitor. Using the hierarchical Bayesian interaction model, samples of the posterior predictive
distributions of press-times and of the perceptual uncertainty are used to infer latent variables for both the linear and the Newtonian models. The posterior predictive distributions of press-times
are shown in the S1 Appendix, “Posterior predictive checks for press-times in both models”. Evidence was in favor of the Newtonian model compared to the heuristics model across all phases of the
experiment with the exception of the Prior phase. The largest differences in prediction power appears in the Feedback phase with the Newtonian model being the considerably better choice to describe
the actual press-times. This superiority of the Newtonian model over the linear heuristic one remains in the subsequent phases even without any visual feedback. This was again tested by running
two-sample Kolmogorov-Smirnov tests for posterior predictive distributions of phase three of both models and the actual data, as well as calculating the Kullback-Leibler divergence for each pair,
resulting in lower K-S statistic values for the Newtonian model (D = 0.0436, p = 0.00521) compared to the linear one (D = 0.0851, p < 0.001). KL divergence values are 0.0582 and 0.0599 for the
Newtonian and linear model, respectively.
Finally, to confirm that the behavioral data of our subjects was best described by the Newtonian model with quadratic cost function we carried out model selection using the product space method [32].
In this approach, a mixture model combines both the linear and the Newtonian model to account for the data. An index variable indicates, which of the two models is selected at each iteration to
explain the data. Given that both models have the same a priori probability to be chosen, the Bayes factor equates to the posterior odds of the index variable. Resulting Bayes factors are shown in
Fig 7. Given the complete data set from all phases there is small support for the Newtonian model (Bayes factor K of 2.33). When only considering data from the Prior phase there is weak support for
the linear model (K = 1.88). Instead, when considering all phases but the first phase there is substantial support for the Newtonian model (K = 3.71) and strong evidence for the square-root model in
the feedback phase (K = 9.71).
Bayes factors are displayed for different phases and combinations of phases. Blue line at 1 marks the point where neither model is stronger supported by evidence. Red line at 3.2 marks the transition
from Bayes factors being only worth mentioning to substantial evidence in favor of one the models. Colors of bars indicate the model favored by the Bayes factors.
The hierarchical Bayesian interaction model also allows inferring the masses best describing our subjects’ internal beliefs given the Newtonian model and the measured press-times. Not surprisingly,
mean mass beliefs vary strongly across subjects in the Prior phase, where participants had to make decisions without any observations of the pucks, only relying on their prior beliefs about the
potentially underlying dynamics and environmental conditions (see S1 Appendix “Latent masses: phase ‘prior’ and ‘feedback’” for gray posterior distributions). Nevertheless, the variances of mass
beliefs within the first phase were surprisingly small for individual subjects with a mean of 0.0023 kg, potentially indicating that each subject consistently used a belief about the mass of the
puck. Inferred values for these prior mass beliefs are displayed in the S1 Appendix “Latent masses: phase prior and feedback” for each participant. When obtaining visual feedback in the Feedback
phase of the experiment, subjects only needed on average six trials to adjust their press-times so that mass beliefs were stable thereafter. Implicit mass beliefs were quite accurate with the mean of
inferred MAP values at 1.5218 and 1.8818 kg in the condition light-to-heavy (1.5 and 2.0 kg) and 1.9415 and 2.3068 kg in condition heavy-to-light (2.0 and 2.5 kg). Fig 8 shows the MAP estimates of
the masses for both conditions and phases two to four for all subjects.
In phase three No Feedback participants faced an unknown puck without any visual feedback but with the acquired knowledge about the relationship of press-time and distance. Note however, that
participants had learned two different mappings from distances to press-times in phase two, one for the red puck and one for the yellow puck. Thus, participants had to select press-times without
knowing the mass of the unknown puck. As reported above, the press-time distributions in this phase of the experiment were close to the combined press-times that subjects had used for the two pucks
in the previous phase two of the experiment. The corresponding MAP mass beliefs were accordingly approximately the average of the two previous pucks’ masses with 1.87 and 2.19kg and corresponding
mass distributions differed significantly for the two conditions light-to-heavy and heavy-to-light (ANOVA: F = 1060, p < 0.001; see also S1 Appendix, “Latent masses: phase “no feedback” and
“collision and no feedback””). But after observing the 24 collisions in phase Collisions + No Feedback of the two known pucks with the unknown puck participants were able to adjust their press-times
so that the estimated mass beliefs were significantly closer to the true values used in the simulations than in the previous phase. This was quantified by running a repeated measures ANOVA of the
deviations from the actual mass (F = 7.103, p = 0.0176). Thus, the mass beliefs implicit in our participants’ press-times reflected the inferred mass ratios and transferred from having observed the
pucks’ collisions to the subsequent visuomotor control task. Note that this implies that subjects must have interpreted the dynamics of the red and yellow pucks in the second phase as stemming from
objects’ masses, as otherwise a physically consistent transfer to a new puck would be very difficult to explain.
Observation model result
Participants in our experiment were apparently able to make appropriate inferences in phases with feedback, altering their beliefs about unknown objects based on previous inferences and new
observations, and to transfer this knowledge to an action-control task. But how were they able to make these adjustments after observing collisions and perform well with a continuous range of
responses? Here, we want to look at another Bayesian model capturing the learning process through observations. To this end, we adapted a hierarchical Bayesian observation model similar to [9, 11],
which describes how subjects could infer the relative mass ratios of two pucks from observing their elastic collisions. But here we used the mass beliefs inferred from phase two of the experiment
with the interaction model as initial prior mass beliefs in the observation model for phase four of the experiment on-a-subject-by-subject basis. This allows comparing how subjects’ uncertainty
decreases on the basis of perceptual observations compared to visuomotor interaction.
The Bayesian network model for the observation task in Fig 9 expresses the relationship between variables on a subject-by-subject basis for observing 12 collisions for each of the two pucks. The
model incorporates the generative physical relationship of velocities and masses in elastic collisions as shown in [9]. The grey nodes are known to the experimenter: the initial velocities v[F] and
the mass m[F] of the known feedback puck and v[NF] of the unknown no-feedback puck, the resulting velocities u[F] and u[NF]. Individual subjects’ posterior mass beliefs at the end of phase two
inferred with the interaction model, shown on the left panel of Fig 9, were used as prior mass beliefs of the yellow and red pucks in the observation model for each participant. Unknown parameters
are depicted as white nodes and were inferred with MCMC. Subjects’ uncertain beliefs about the pucks’ velocities are incorporated for the initial velocities v[F] and v[NF] as well as for the
resulting velocities u[F] and u[NF] after the elastic collision. To describe the perceptual uncertainty of velocities we used a log-normal distribution with σ[vel] fixed at 0.2 and its mode at the
actual velocity (see Fig. 6 in [9] or section “Subject Performance” in [11] for comparison). Inferred posterior mass beliefs for the new puck are shown in the right panel. This structure leads to the
following joint distribution p(d, l) with observed data d = {v[F], v[NF], u[F], u[NF], m[F]} and latent variables , where actual and perceived velocities are summarized for both pucks using an index
i to v[i] and u[i] for abbreviation purposes: (6)
The left panel shows inferred posterior mass beliefs for the pucks from feedback phase 2 for each participant. All 100 trials were used to infer the mass beliefs. These posteriors were used as priors
for the inference from observations. The graphical model for learning by observing collision is shown in the middle panel. Uncertainty about the pucks’ velocities is introduced for the initial
velocities v[F] and v[NF] as well as for the resulting velocities u[F] and u[NF] after the elastic collision. Utilizing the physical relationship of velocities and masses in an elastic collision
enables inferring beliefs about the unknown puck based on previous mass beliefs of pucks in phase 2. Resulting posterior mass beliefs are shown in the right panel for inferences based on 6 and 24
observations of collisions.
The observation model allows inferring participant’s mass beliefs for the puck, which they had first interacted with in phase three of the experiment. Importantly, the two Bayesian models allow
inferring the uncertainty in participants’ mass beliefs after only six and after 24 trials, both for the interaction phase two and the observation of the collision movies, see S1 Appendix, “Learning
progress of mass beliefs during interaction and observation”. These results quantify, how uncertainty in mass beliefs decreased over trials and the difference in uncertainty reduction due to
interactions versus observations. More specifically, as expected, subjects’ variance in inferred posterior mass beliefs for each puck decreased with the progression of trials when using the
interaction model with data from phase 2 (Friedman chi-squared = 62.06, p-value < 0.001 & Conover’s PostHoc p < 0.001 for all comparisons) and, as well, when using the observation model with mass
beliefs from phase 2 with the highest precision after 100 trials (Wilcoxon signed rank test, V = 136, p < 0.001). Additionally, the variance in resulting inferences about the mass in the observation
model is significantly higher than the variance of the mass beliefs used as input, as we compared variances on subject basis for columns three, four and five (Kruskal-Wallis chi-squared = 37.43,
p-value < 0.001 & Dunn PostHoc for grey compared to red and green, each p < 0.001, see S13 Fig). Thus, the larger variance in participant’s mass estimates after observing the pucks’ collisions
compared to interacting with them, see e.g. Fig 8, stems from the fact that subjects needed to use the uncertain mass beliefs of the red and yellow pucks when observing the collisions and had
additional uncertainty stemming from inferring pucks’ velocities. Furthermore, the predictions of the idealized observation model deviate quantitatively from mass beliefs inferred using the
interaction model for two reasons: First, participants would need to remember their belief about the mass of both feedback pucks perfectly while performing in phase 3 and 4. However, these beliefs
may suffer from memory effects and thus potentially introduce biases and additional variability. Second, initial and uninformed guesses in phase 3 before seeing any collisions may generate biases,
too, that potentially could lead to recency effects (see e.g. participant 7 & 8 in S10 Fig).
Although people are able to interact with the physical world successfully in every-day activities, classic research has contended that human physical reasoning is fundamentally flawed [1–4]. Recent
studies instead have shown that biased human behavior in a range of perceptual judgement tasks involving physical scenarios can be well described when taking prior beliefs and perceptual
uncertainties into account [9–12]. The reason is that, inferences in general need to integrate uncertain and ambiguous sensory data and partial information about object properties with prior beliefs
[5–8]. Much less is known about how intuitive physical reasoning guides actions. Here, we used a perceptual inference task involving reasoning about relative masses of objects from the intuitive
physics literature and integrated it with a visuomotor task. Subjects had to propel a simulated puck into a target area with a button press whose duration was proportional to the puck’s release
velocity. The goal was to investigate how people utilize relative masses inferred from watching object collisions to guide subsequent actions.
Specifically, we devised an experiment consisting of four phases, which differed in the available sensory feedback and prior knowledge about objects’ masses available to participants. The physical
relationship underlying the task requires subjects to press a button for a duration that is proportional to the mass of the puck and proportional to the square-root of the initial distance. This
allowed examining peoples’ prior assumptions about the underlying dynamics of pucks’ gliding, their ability to adjust to the pucks’ initial distances to the target and to the varying masses of pucks,
and the transfer of knowledge about relevant properties gained by observing collisions between pucks. A hierarchical Bayesian generative model of the control task and one of the collision observation
task accommodating individual differences between subjects and trial by trial variability allowed analyzing subjects’ press-times quantitatively. Importantly, we also tested which of three cost
functions best describe our subjects’ choices of press-times.
In the prior phase without visual feedback, subjects adjusted their press-times with the initial distance of the puck to the target. Not surprisingly, because subjects did not obtain any feedback
about their actions and therefore the degree of friction, the magnitude of the applied force, and the scale of the visual scene, could only hit the target by chance. Nevertheless, model selection
slightly favored the linear heuristics model compared to the square-root model, i.e. subjects approximately scaled the press-times linearly with the initial distance to target. Thus, subjects came to
the experiment with the prior belief that longer press-times would result in longer sliding distances but did not scale their press-times according to the square-root of the initial distance of the
pucks as prescribed by Newtonian physics. As subjects did not sense the weight of the pucks and did not obtain any visual feedback about the pucks’ motion, the observed behavior in this phase of the
experiment may be dominated by the uncertainty about the underlying mapping between the duration of button presses and the pucks’ release velocities, the effects of friction, and the visual scale of
the simulation. Remarkably, while no feedback was available, each participants’ scaling of press-times was consistent as indicated by individuals’ variance in posterior mass estimates being of the
same order of magnitude as in feedback trials, see S1 Appendix, “Latent masses: phase ‘prior’ and ‘feedback’”.
When visual feedback about the pucks’ motion during the feedback phase was available, subjects needed on average only six trials to reach stable performance. This is particularly remarkable, because
it corresponds to adjusting the press-times to a single puck’s mass over the four initial trials and then adjusting the press-times within only two subsequent trials to a new puck with a different
mass. Thus, the observation of the pucks’ dynamics over six trials was sufficient to adjust the press-times with the square-root of initial distance, but differently for the two pucks, see Fig 2.
Note that in phase two, subjects only had a contextual color cue distinguishing the two pucks. Therefore, subjects needed to learn two different functions relating the pucks’ initial distances to the
required press-times, one for each puck, without any explicit reference to mass. Data from this phase of the experiment were utilized to infer parameters describing individual subjects’ perceptual
uncertainty and motor variability. Perceptual variability was consistent across subjects and varied only marginally so that a constant value of σ^x = 0.05m was used across subjects and models for all
other phases of the experiment. Remarkably, this corresponds to a distance of 4.7 pixels in the vertical direction on the display monitor with a resolution of 1080 pixels. By contrast, the
variability of press-times σ^t varied substantially across subjects with almost all subjects lying between 0.15s and 0.33s, so that individuals’ parameters were used in all subsequent models.
Given that the variability of peoples’ press-times scales with the mean of the duration, longer press-times can lead to larger deviations from the targeted press-time. This can result in larger
errors by overshooting the target. To reduce possible overshoots, participants may implicitly aim at a shorter distance, which can be quantified through a cost function incorporating the relative
desirability of the pucks’ final distance to the target. Therefore, we tested which of three commonly used cost functions best described subjects’ press-times: the 0-1 cost function, the quadratic
cost function, and the absolute value cost function. Model selection using the product space method showed that the press-times were best explained by the Newtonian physics model when taking into
account perceptual uncertainty, motor variability and the quadratic cost function. Similarly, this was confirmed through posterior predictive checks of press-times and the analysis of the correlation
of the residuals between predicted and observed press-times with the initial distance to target.
Thus, participants adjusted the press-times with the square-root of the initial distance to the target and used the contextual color cue of the pucks to adjust the press-times. Subjects only had the
contextual cue of different colors between the two pucks but adjusted the press-times in such a way that this was interpretable in terms of the two different masses used in the puck’s simulations.
Therefore, just on the basis of these adjustments alone, one might argue that subjects may have adjusted their press-times based on the available visual feedback about the pucks’ motion without any
recurrence to a the concept of physical mass. That this is unlikely, is due to the following two phases of the experiment.
Previous research has demonstrated, that people can infer the mass ratios of objects from observing their collisions [9, 11–13]. Here, subjects were asked to propel one particular puck before and
after seeing 24 collision between this puck and the two pucks for which they had previously obtained visual feedback. Note that the two pucks in phase two were only distinguished by a color cue and
that subjects might have only learned two different mapping from initial distances to press-times, as no explicit cues about mass were available. But subjects readily utilized the inferred mass
ratios to adjust their press-times to reach the target more accurately in phase four of the experiment. That the different dynamics were to attribute to different masses and that relative masses from
observing the collisions could be transferred to press-times entirely relied on subjects intuitive physical reasoning. This is strong evidence that participants in our experiment interpreted the
dynamics of the red and yellow pucks from phase two to be caused by their respective masses. Model selection provided evidence, that subjects continued to use the square-root relationship of initial
distance and scaled their press-times consistent with Newtonian physics to successfully propel the puck to the target.
Different from tasks requiring a forced choice response [1–4, 9, 11–13], participants in the current experiments provided a continuous action by pressing a button for variable durations. Therefore,
it is not sufficient to model our participants’ actions as in an inference task, e.g. by assuming that subjects choose a press-time on the basis of the mass belief with highest probability, i.e. the
MAP. Instead, modeling continuous actions requires a cost function, which additionally incorporates people’s variability in press-times. This is evident when comparing the press-times according to
the different models considered here, see S1 Appendix, “Deviations from fully-observed Newtonian physics and model predictions”. Remarkably, posterior means of masses best explaining our
participants’ press-times were closer to the true masses used in the pucks’ simulations for the quadratic cost function compared to the other cost functions. Thus, the current study establishes that
people’s deviations from the predictions of Newtonian physics are not only attributable to prior beliefs and perceptual uncertainties but also to implicit cost functions, which quantify internal
costs for errors due to participants’ action variability.
Taken together, the present study is in accordance with previous studies on intuitive physics within the noisy Newton framework [14]. The systematic deviations in our subjects’ press-times from the
those prescribed by Newtonian physics under full knowledge of all parameters were explained quantitatively as stemming from perceptual uncertainties interacting with prior beliefs according to
probabilistic reasoning. Previous studies had also shown, that people are able to infer relative masses of objects from their collisions [9, 11, 12]. The present study additionally shows, that
subjects can utilize such inferences and transfer them to a subsequent visuomotor task. This establishes a connection between reasoning in intuitive physics [9–12] and visuomotor tasks [21, 23, 25,
27]. Crucial in the quantitative description of participants’ behavior was the inclusion of a cost function. Commonly, cost functions in visuomotor behavior are employed to account for explicit
external rewards imposed by the experimental design, e.g. through monetary rewards [21, 33] or account for costs associated with the biomechanics or accuracy of movements [23, 24]. The present model
used a cost function to account for the costs and benefits implicit in our participants visuomotor behavior and may encompass external and internal cost related to different task components,
perceptual, cognitive, biomechanical costs and preferences. Inferring such costs and benefits has been shown to be crucial for the understanding of visuomotor behavior [34–36].
The results of the present study furthermore support the notion of structured internal causal models comprising physical object representations and their dynamics. Although our participants never
sensed the weight of pucks, they readily transferred their visual experiences by interpreting them in terms of the physical quantity of mass. A recent study [37] found support at the implementational
level for representations of mass in parietal and frontal brain regions that generalized across variations in scenario, material, and friction. While our results do not provide direct evidence for
the notion of internal simulations of a physics engine [38], they also do not contradict them. While it could be argued that structured recognition models may be sufficient for the inference of
object properties such as mass, in our experiment subjects had to act upon such inferences, which strongly suggest the availability of representations of mass.
Finally, the present study also shows the importance of using structured probabilistic generative models that contain interpretable variables when attempting to quantitatively reverse engineer human
cognition [39]. Previous research has demonstrated pervasive and systematic deviations of human reasoning from probabilistic accounts [40]. Similarly, systematic deviations in physical reasoning [1–4
] have been interpreted as failures of physical reasoning. It is only more recently, that a number of these deviations have been explained through computational models [9–12, 38] involving structured
generative models relating observed and latent variables probabilistically. These models involve the explicit modeling of prior beliefs and perceptual uncertainties [5, 6] as well as uncertainties in
visuomotor behavior [21–23], which have been modeled successfully in a probabilistic framework. As such, the present study is in line with efforts of understanding perception and action under
uncertainty through computational models, which use structured probabilistic generative models and external as well as internal costs [8].
Supporting information
S1 Fig. Distance error distributions.
Final discrepancy between target and puck pooled for all participants. Pucks being shot too short are shown with negative values, pucks with a positive deviation were shot too far. Columns showing
the the data for both conditions and rows divide into puck and phase combinations. The first two rows (in gold and red) showing the error distributions for both pucks with feedback in phase 2. The
error distribution for the unknown puck in phase 3 before seeing the collisions is shown in the second last row (in purple) with greater deviation, with a clear bias and bigger spread. In the last
row the error distributions are depicted for the unknown puck after having seen the collisions with the previous learned pucks, showing a reduced bias.
S2 Fig. Press-time distributions.
Pooled press-time distributions for all participants. Columns showing the the data for both conditions and rows divide into puck and phase combinations. First two rows showing the press-times for the
pucks with feedback. Press-time distributions in phase 3 without feedback are shown in row three in blue. Without further information participants’ behavior in phase 3 is strongly influenced by the
previous phase and its press-time distribution: press-time distributions for the unknown puck in phase 3 reflect roughly the combined distributions of press-times of the previous pucks in phase 2
(Kolmogorov D = 0.0538; p = 0.092 for heavy-to-light, D = 0.156; p = 9.8 × 10^−12 for light-to-heavy).
S3 Fig. Kolmogorov tests—Press-times in phase 2 & phase 3.
In the light-to-heavy condition both distributions of press times when seeing pucks and without feedback in phase 3 differ significantly. However, considering the asymmetry within the task
response—press-times and potential masses are only constrained single-sided towards lower values with a minimum at zero—this difference in press-time distributions is surprisingly small. (B) In the
heavy-to-light condition there was no significant difference between the distribution of press-times of both combined feedback pucks and the unknown puck before observing the collisions as revealed
by the Kolmogorov-Smirnov test. This suggests that participants adhere to their previous adjusted strategies when facing decisions in great uncertainty.
S4 Fig. Implementation of cost functions.
Derivation of the three cost function models based on the expressions for the measures of the central tendency of the log-normal distribution with its mode exp(μ − σ^2), median exp(μ) and mean .
Setting the intended press-time to one of these measures for the press-time distribution is equivalent with choosing the 0-1, absolute or quadratic loss function. Transformation with the intended
press-time t^int leads to expressions in S4 Fig.
S5 Fig. Posterior predictive checks of cost functions in phase 2.
Posterior predictive distributions for both model classes and all cost functions with data from phase 2 with feedback. Posterior predictive distributions of press-times given data from feedback
trials. Fifty distributions were drawn from each model after being fitted to the data. Dark green distributions arise from models of the Newtonian model class, dark blue ones from the linear model
class. Separation into rows is made on basis of the implemented cost function. For each cost function the Newtonian model predicts values that match the actual data shown as red curve obviously
better than the model from the linear model class.
S6 Fig. Posterior predictive checks for press-times in both models.
Posterior press-time predictions for both, the linear and the Newtonian model with quadratic cost function, and separately for every phase. Actual data is shown as red line. Model predictions in dark
green (50 iterations) of the fitted Newtonian model match the data closely and surpass the fitted linear model in dark blue for the complete data set and in almost every phase individually.
S7 Fig. Latent masses by cost function: Aggregated data from phase 2.
Inferred latent mass beliefs with aggregated data from phase ‘feedback’ for each cost function. Posterior distributions for mass belief aggregated over all participants for each cost function.
Colored, vertical lines indicate actual mass of pucks. In comparison the quadratic loss function leads to posterior distributions that fit closest to the actual masses in the experiment.
S8 Fig. Change point detection.
Average absolute error as function of trials and posterior of mean average error derived using the change point detection model. (A) Average absolute error over participants as function of trial
number. (B) Posterior over change point τ. Red dotted line marks trial six. (C) Posterior of mean error before and after change point.
S9 Fig. Latent masses: Phase ‘prior’ and ‘feedback’.
Inferred latent mass in Newtonian model class with quadratic loss function for each participant and with data from Prior and Feedback phase. Posterior mass distributions for each participant in Prior
and Feedback phase. Gray distributions show the inferred mass distribution for an unknown puck before participants have encountered the task dynamics. Resulting mass distributions for both pucks in
feedback trials in red (light puck) and yellow (heavy puck). Dotted lines indicate actually implemented mass for each of the feedback pucks.
S10 Fig. Latent masses: Phase ‘no feedback’ and ‘collision and no feedback’.
Inferred latent mass in Newtonian model class with quadratic loss function for each participant with data from Prior and both No Feedback phases. Posterior mass distributions for each participant in
Prior and Feedback phase. Gray distributions show again the inferred mass distribution for an unknown puck before participants have encountered the task dynamics. Distributions in violet and green
are the posterior mass distributions of the unknown puck without feedback before and after the participants saw collision with known pucks. Dotted line marks the actual mass of the unknown puck.
S11 Fig. Deviations from fully-observed Newtonian physics and model predictions (light to heavy).
Posterior predictive for press times, actual press times and ideal responses for phases two to four and condition light-to-heavy. Black distributions show the actual data, red and blue ones display
samples from posterior predictive distributions of both, the linear and Newtonian model, and green ones show the correct responses given perfect knowledge about the underlying physics and all
parameters. Visualizing the enhanced suitability of this noisy Newtonian model framework compared to Newtonian models excluding prior preferences and uncertainties in describing human behavior.
S12 Fig. Deviations from fully-observed Newtonian physics and model predictions (heavy to light).
Posterior predictive for press times, actual press times and ideal responses for phases two to four and condition heavy-to-light. Black distributions show the actual data, red and blue ones display
samples from posterior predictive distributions of both, the linear and Newtonian model, and green ones show the correct responses given perfect knowledge about the underlying physics and all
parameters. Visualizing the enhanced suitability of this noisy Newtonian model framework compared to Newtonian models excluding prior preferences and uncertainties in describing human behavior.
S13 Fig. Learning progress of mass beliefs during interaction and observation.
Barplot of averaged variance for both models and a given number of observations. First three columns show the average variance in posterior mass beliefs for inferences with 6, 24 and 100 trials per
puck and participant. Two last columns show the average variance of mass beliefs of the unknown puck resulting from inference using the collision model for 6 and 24 trials, while using the posterior
mass belief of the known pucks from the interaction model with 100 trials each.
We acknowledge support by the German Research Foundation and the Open Access Publishing Fund of Technische Universität Darmstadt. | {"url":"https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007730","timestamp":"2024-11-04T04:24:49Z","content_type":"text/html","content_length":"260591","record_id":"<urn:uuid:6b13ec86-da41-4ee1-9279-6c717af58ccd>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00569.warc.gz"} |
Model Creation and Modification
Model Creation and Modification#
Use the functions in this section to create models, populate or modify them with variables and constraints, and finally free them again.
int GRBloadmodel(GRBenv *env, GRBmodel **modelP, const char *Pname, int numvars, int numconstrs, int objsense, double objcon, double *obj, char *sense, double *rhs, int *vbeg, int *vlen, int *vind,
double *vval, double *lb, double *ub, char *vtype, const char **varnames, const char **constrnames)#
Create a new optimization model, using the provided arguments to initialize the model data (objective function, variable bounds, constraint matrix, etc.). The model is then ready for
optimization, or for modification (e.g., addition of variables or constraints, changes to variable types or bounds, etc.).
If your constraint matrix may contain more than 2 billion non-zero values, you should consider using the GRBXloadmodel variant of this routine.
Return value:
A non-zero return value indicates that a problem occurred while creating the model. Refer to the Error Codes table for a list of possible return values. Details on the error can be obtained
by calling GRBgeterrormsg.
☆ env – The environment in which the new model should be created. Note that the new model gets a copy of this environment, so subsequent modifications to the original environment (e.g.,
parameter changes) won’t affect the new model. Use GRBgetenv to modify the environment associated with a model.
☆ modelP – The location in which the pointer to the newly created model should be placed.
☆ Pname – The name of the model.
☆ numvars – The number of variables in the model.
☆ numconstrs – The number of constraints in the model.
☆ objsense – The sense of the objective function. Allowed values are 1 (minimization) or -1 (maximization).
☆ objcon – Constant objective offset.
☆ obj – Objective coefficients for the new variables. This argument can be NULL, in which case the objective coefficients are set to 0.0.
☆ sense – The senses of the new constraints. Options are '=' (equal), '<' (less-than-or-equal), or '>' (greater-than-or-equal). You can also use constants GRB_EQUAL, GRB_LESS_EQUAL, or
☆ rhs – Right-hand side values for the new constraints. This argument can be NULL if you are not adding any constraint.
☆ vbeg – Constraint matrix non-zero values are passed into this routine in Compressed Sparse Column (CSC) format. Each column in the constraint matrix is represented as a list of
index-value pairs, where each index entry provides the constraint index for a non-zero coefficient, and each value entry provides the corresponding non-zero value. Each variable in the
model has a vbeg and vlen value, indicating the start position of the non-zeros for that variable in the vind and vval arrays, and the number of non-zero values for that variable,
respectively. Thus, for example, if vbeg[2] = 10 and vlen[2] = 2, that would indicate that variable 2 has two non-zero values associated with it. Their constraint indices can be found in
vind[10] and vind[11], and the numerical values for those non-zeros can be found in vval[10] and vval[11]. Note that the columns of the matrix must be ordered from first to last, implying
that the values in vbeg must be non-decreasing.
☆ vlen – Number of constraint matrix non-zero values associated with each variable. See the description of the vbeg argument for more information.
☆ vind – Constraint indices associated with non-zero values. See the description of the vbeg argument for more information.
☆ vval – Numerical values associated with constraint matrix non-zeros. See the description of the vbeg argument for more information.
☆ lb – Lower bounds for the new variables. This argument can be NULL, in which case all variables get lower bounds of 0.0.
☆ ub – Upper bounds for the new variables. This argument can be NULL, in which case all variables get infinite upper bounds.
☆ vtype – Types for the variables. Options are GRB_CONTINUOUS, GRB_BINARY, GRB_INTEGER, GRB_SEMICONT, or GRB_SEMIINT. This argument can be NULL, in which case all variables are assumed to
be continuous.
☆ varnames – Names for the new variables. This argument can be NULL, in which case all variables are given default names.
☆ constrnames – Names for the new constraints. This argument can be NULL, in which case all constraints are given default names.
We recommend that you build a model one constraint or one variable at a time, using GRBaddconstr or GRBaddvar, rather than using this routine to load the entire constraint matrix at once. It is
much simpler, less error prone, and it introduces no significant overhead.
/* maximize x + y + 2 z
subject to x + 2 y + 3 z <= 4
x + y >= 1
x, y, z binary */
int vars = 3;
int constrs = 2;
int vbeg[] = {0, 2, 4};
int vlen[] = {2, 2, 1};
int vind[] = {0, 1, 0, 1, 0};
double vval[] = {1.0, 1.0, 2.0, 1.0, 3.0};
double obj[] = {1.0, 1.0, 2.0};
char sense[] = {GRB_LESS_EQUAL, GRB_GREATER_EQUAL};
double rhs[] = {4.0, 1.0};
char vtype[] = {GRB_BINARY, GRB_BINARY, GRB_BINARY};
error = GRBloadmodel(env, &model, "example", vars, constrs, -1, 0.0,
obj, sense, rhs, vbeg, vlen, vind, vval,
NULL, NULL, vtype, NULL, NULL);
int GRBnewmodel(GRBenv *env, GRBmodel **modelP, const char *Pname, int numvars, double *obj, double *lb, double *ub, char *vtype, const char **varnames)#
Create a new optimization model. This routine allows you to specify an initial set of variables (with objective coefficients, bounds, types, and names), but the initial model will have no
constraints. Constraints can be added later with GRBaddconstr or GRBaddconstrs.
Return value:
A non-zero return value indicates that a problem occurred while creating the new model. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ env – The environment in which the new model should be created. Note that the new model will get a copy of this environment, so subsequent modifications to the original environment (e.g.,
parameter changes) won’t affect the new model. Use GRBgetenv to modify the environment associated with a model.
☆ modelP – The location in which the pointer to the new model should be placed.
☆ Pname – The name of the model.
☆ numvars – The number of variables in the model.
☆ obj – Objective coefficients for the new variables. This argument can be NULL, in which case the objective coefficients are set to 0.0.
☆ lb – Lower bounds for the new variables. This argument can be NULL, in which case all variables get lower bounds of 0.0.
☆ ub – Upper bounds for the new variables. This argument can be NULL, in which case all variables get infinite upper bounds.
☆ vtype – Types for the variables. Options are GRB_CONTINUOUS, GRB_BINARY, GRB_INTEGER, GRB_SEMICONT, or GRB_SEMIINT. This argument can be NULL, in which case all variables are assumed to
be continuous.
☆ varnames – Names for the new variables. This argument can be NULL, in which case all variables are given default names.
double obj[] = {1.0, 1.0};
char *names[] = {"var1", "var2"};
error = GRBnewmodel(env, &model, "New", 2, obj, NULL, NULL, NULL, names);
GRBmodel *GRBcopymodel(GRBmodel *model)#
Create a copy of an existing model.
Note that pending updates will not be applied to the model, so you should call GRBupdatemodel before copying if you would like those to be included in the copy.
Return value:
A copy of the input model. A NULL return value indicates that a problem was encountered.
☆ model – The model to copy.
GRBupdatemodel(orig); /* if you have unstaged changes in orig */
GRBmodel *copy = GRBcopymodel(orig);
int GRBcopymodeltoenv(GRBmodel *model, GRBenv *targetenv, GRBmodel **resultP)#
Copy an existing model to a different environment. Multiple threads can not work simultaneously within the same environment. Copies of models must therefore reside in different environments for
multiple threads to operate on them simultaneously.
Note that this method itself is not thread safe, so you should either call it from the main thread or protect access to it with a lock.
Note that pending updates will not be applied to the model, so you should call GRBupdatemodel before copying if you would like those to be included in the copy.
For Compute Server users, note that you can copy a model from a client to a Compute Server environment, but it is not possible to copy models from a Compute Server environment to another (client
or Compute Server) environment.
Return value:
A non-zero return value indicates that a problem occurred while copying the model. Refer to the Error Codes table for a list of possible return values. Details on the error can be obtained by
calling GRBgeterrormsg.
☆ model – The model to copy.
☆ targetenv – The environment to copy the model into.
☆ resultP – The resulting model copy.
error = GRBcopymodeltoenv(orig, env2, ©);
int GRBaddconstr(GRBmodel *model, int numnz, int *cind, double *cval, char sense, double rhs, const char *constrname)#
Add a new linear constraint to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
Return value:
A non-zero return value indicates that a problem occurred while adding the constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new constraint should be added.
☆ numnz – The number of non-zero coefficients in the new constraint.
☆ cind – Variable indices for non-zero values in the new constraint.
☆ cval – Numerical values for non-zero values in the new constraint.
☆ sense – Sense for the new constraint. Options are GRB_LESS_EQUAL, GRB_EQUAL, or GRB_GREATER_EQUAL.
☆ rhs – Right-hand side value for the new constraint.
☆ constrname – Name for the new constraint. This argument can be NULL, in which case the constraint is given a default name.
int ind[] = {1, 3, 4};
double val[] = {1.0, 2.0, 1.0};
/* x1 + 2 x3 + x4 = 1 */
error = GRBaddconstr(model, 3, ind, val, GRB_EQUAL, 1.0, "New");
int GRBaddconstrs(GRBmodel *model, int numconstrs, int numnz, int *cbeg, int *cind, double *cval, char *sense, double *rhs, const char **constrnames)#
Add new linear constraints to a model.
Note that, due to our lazy update approach, the new constraints won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
We recommend that you build your model one constraint at a time (using GRBaddconstr), since it introduces no significant overhead and we find that it produces simpler code. Feel free to use this
routine if you disagree, though.
If your constraint matrix may contain more than 2 billion non-zero values, you should consider using the GRBXaddconstrs variant of this routine.
Return value:
A non-zero return value indicates that a problem occurred while adding the constraints. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new constraints should be added.
☆ numconstrs – The number of new constraints to add.
☆ numnz – The total number of non-zero coefficients in the new constraints.
☆ cbeg – Constraint matrix non-zero values are passed into this routine in Compressed Sparse Row (CSR) format by this routine. Each constraint in the constraint matrix is represented as a
list of index-value pairs, where each index entry provides the variable index for a non-zero coefficient, and each value entry provides the corresponding non-zero value. Each new
constraint has an associated cbeg value, indicating the start position of the non-zeros for that constraint in the cind and cval arrays. This routine requires that the non-zeros for
constraint i immediately follow those for constraint i-1 in cind and cval. Thus, cbeg[i] indicates both the index of the first non-zero in constraint i and the end of the non-zeros for
constraint i-1. To give an example of how this representation is used, consider a case where cbeg[2] = 10 and cbeg[3] = 12. This would indicate that constraint 2 has two non-zero values
associated with it. Their variable indices can be found in cind[10] and cind[11], and the numerical values for those non-zeros can be found in cval[10] and cval[11].
☆ cind – Variable indices associated with non-zero values. See the description of the cbeg argument for more information.
☆ cval – Numerical values associated with constraint matrix non-zeros. See the description of the cbeg argument for more information.
☆ sense – Sense for the new constraints. Options are GRB_LESS_EQUAL, GRB_EQUAL, or GRB_GREATER_EQUAL.
☆ rhs – Right-hand side values for the new constraints. This argument can be NULL, in which case the right-hand side values are set to 0.0.
☆ constrnames – Names for the new constraints. This argument can be NULL, in which case all constraints are given default names.
int GRBaddgenconstrMax(GRBmodel *model, const char *name, int resvar, int nvars, const int *vars, double constant)#
Add a new general constraint of type GRB_GENCONSTR_MAX to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
A MAX constraint \(r = \max\{x_1,\ldots,x_n,c\}\) states that the resultant variable \(r\) should be equal to the maximum of the operand variables \(x_1,\ldots,x_n\) and the constant \(c\).
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ resvar – The index of the resultant variable \(r\) whose value will be equal to the max of the other variables.
☆ nvars – The number \(n\) of operand variables over which the max will be taken.
☆ vars – An array containing the indices of the operand variables \(x_j\) over which the max will be taken.
☆ constant – An additional operand that allows you to include a constant \(c\) among the arguments of the max operation.
/* x5 = max(x1, x3, x4, 2.0) */
int ind[] = {1, 3, 4};
error = GRBaddgenconstrMax(model, "maxconstr", 5,
3, ind, 2.0);
int GRBaddgenconstrMin(GRBmodel *model, const char *name, int resvar, int nvars, const int *vars, double constant)#
Add a new general constraint of type GRB_GENCONSTR_MIN to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
A MIN constraint \(r = \min\{x_1,\ldots,x_n,c\}\) states that the resultant variable \(r\) should be equal to the minimum of the operand variables \(x_1,\ldots,x_n\) and the constant \(c\).
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ resvar – The index of the resultant variable \(r\) whose value will be equal to the min of the other variables.
☆ nvars – The number \(n\) of operand variables over which the min will be taken.
☆ vars – An array containing the indices of the operand variables \(x_j\) over which the min will be taken.
☆ constant – An additional operand that allows you to include a constant \(c\) among the arguments of the min operation.
/* x5 = min(x1, x3, x4, 2.0) */
int ind[] = {1, 3, 4};
error = GRBaddgenconstrMin(model, "minconstr", 5,
3, ind, 2.0);
int GRBaddgenconstrAbs(GRBmodel *model, const char *name, int resvar, int argvar)#
Add a new general constraint of type GRB_GENCONSTR_ABS to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
An ABS constraint \(r = \mbox{abs}\{x\}\) states that the resultant variable \(r\) should be equal to the absolute value of the argument variable \(x\).
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ resvar – The index of the resultant variable \(r\) whose value will be to equal the absolute value of the argument variable.
☆ argvar – The index of the argument variable \(x\) for which the absolute value will be taken.
/* x5 = abs(x1) */
error = GRBaddgenconstrAbs(model, "absconstr", 5, 1);
int GRBaddgenconstrAnd(GRBmodel *model, const char *name, int resvar, int nvars, const int *vars)#
Add a new general constraint of type GRB_GENCONSTR_AND to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
An AND constraint \(r = \mbox{and}\{x_1,\ldots,x_n\}\) states that the binary resultant variable \(r\) should be \(1\) if and only if all of the operand variables \(x_1,\ldots,x_n\) are equal to
\(1\). If any of the operand variables is \(0\), then the resultant should be \(0\) as well.
Note that all variables participating in such a constraint will be forced to be binary, independent of how they were created.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ resvar – The index of the binary resultant variable \(r\) whose value will be equal to the AND concatenation of the other variables.
☆ nvars – The number \(n\) of binary operand variables over which the AND will be taken.
☆ vars – An array containing the indices of the binary operand variables \(x_j\) over which the AND concatenation will be taken.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
/* x5 = and(x1, x3, x4) */
int ind[] = {1, 3, 4};
error = GRBaddgenconstrAnd(model, "andconstr", 5, 3, ind);
int GRBaddgenconstrOr(GRBmodel *model, const char *name, int resvar, int nvars, const int *vars)#
Add a new general constraint of type GRB_GENCONSTR_OR to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
An OR constraint \(r = \mbox{or}\{x_1,\ldots,x_n\}\) states that the binary resultant variable \(r\) should be \(1\) if and only if any of the operand variables \(x_1,\ldots,x_n\) is equal to \(1
\). If all operand variables are \(0\), then the resultant should be \(0\) as well.
Note that all variables participating in such a constraint will be forced to be binary, independent of how they were created.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ resvar – The index of the binary resultant variable \(r\) whose value will be equal to the OR concatenation of the other variables.
☆ nvars – The number \(n\) of binary operand variables over which the OR will be taken.
☆ vars – An array containing the indices of the binary operand variables \(x_j\) over which the OR concatenation will be taken.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
/* x5 = or(x1, x3, x4) */
int ind[] = {1, 3, 4};
error = GRBaddgenconstrOr(model, "orconstr", 5, 3, ind);
int GRBaddgenconstrNorm(GRBmodel *model, const char *name, int resvar, int nvars, const int *vars, double which)#
Add a new general constraint of type GRB_GENCONSTR_NORM to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
A NORM constraint \(r = \mbox{norm}\{x_1,\ldots,x_n\}\) states that the resultant variable \(r\) should be equal to the vector norm of the argument vector \(x_1,\ldots,x_n\).
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ resvar – The index of the resultant variable \(r\) whose value will be equal to the NORM of the other variables.
☆ nvars – The number \(n\) of operand variables over which the NORM will be taken.
☆ vars – An array containing the indices of the operand variables \(x_j\) over which the NORM will be taken. Note that this array may not contain duplicates.
☆ which – Which norm to use. Options are 0, 1, 2, and GRB_INFINITY.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
/* x5 = 2-norm(x1, x3, x4) */
int ind[] = {1, 3, 4};
error = GRBaddgenconstrNorm(model, "orconstr", 5, 3, ind, 2.0);
int GRBaddgenconstrNL(GRBmodel *model, const char *name, int resvar, int nnodes, const int *opcode, const double *data, const int *parent)#
Add a new general constraint of type GRB_GENCONSTR_NL to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
A NL constraint \(r = f(x)\) states that the resultant variable \(r\) should be equal to the function value \(f(x)\) of the given function \(f\), provided as an expression tree as described in
Nonlinear Constraints.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ resvar – The index of the resultant variable \(r\) whose value will be equal to the function value of the given function \(f\).
☆ nnodes – The number of nodes in the expression tree, that is, the length of the input arrays opcode, data and parent.
☆ opcode – An array containing the operation codes for the nodes.
☆ data – An array containing the auxiliary data for each node.
☆ parent – An array providing the parent index of the nodes.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
/* Add nonlinear constraint x0 = sin(2.5 * x1) + x2 to the model */
int opcode[6] = {GRB_OPCODE_PLUS, GRB_OPCODE_SIN, GRB_OPCODE_MULTIPLY,
GRB_OPCODE_CONSTANT, GRB_OPCODE_VARIABLE, GRB_OPCODE_VARIABLE};
double data[6] = {-1.0, -1.0, -1.0, 2.5, 1.0, 2.0};
int parent[6] = {-1, 0, 1, 2, 2, 0};
error = GRBaddgenconstrNL(model, "nlconstr", 0, 6, opcode, data, parent);
int GRBaddgenconstrIndicator(GRBmodel *model, const char *name, int binvar, int binval, int nvars, const int *ind, const double *val, char sense, double rhs)#
Add a new general constraint of type GRB_GENCONSTR_INDICATOR to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
An INDICATOR constraint \(z = f \rightarrow a^Tx \leq b\) states that if the binary indicator variable \(z\) is equal to \(f\), where \(f \in \{0,1\}\), then the linear constraint \(a^Tx \leq b\)
should hold. On the other hand, if \(z = 1-f\), the linear constraint may be violated. The sense of the linear constraint can also be specified to be “\(=\)” or “\(\geq\)”.
Note that the indicator variable \(z\) of a constraint will be forced to be binary, independent of how it was created.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ binvar – The index of the binary indicator variable \(z\).
☆ binval – The value \(f\) for the binary indicator variable that would force the linear constraint to be satisfied (\(0\) or \(1\)).
☆ nvars – The number \(n\) of non-zero coefficients in the linear constraint triggered by the indicator.
☆ ind – Indices for the variables \(x_j\) with non-zero values in the linear constraint.
☆ val – Numerical values for non-zero values \(a_j\) in the linear constraint.
☆ sense – Sense for the linear constraint. Options are GRB_LESS_EQUAL, GRB_EQUAL, or GRB_GREATER_EQUAL.
☆ rhs – Right-hand side value for the linear constraint.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
/* x7 = 1 -> x1 + 2 x3 + x4 = 1 */
int ind[] = {1, 3, 4};
double val[] = {1.0, 2.0, 1.0};
error = GRBaddgenconstrIndicator(model, NULL, 7, 1,
3, ind, val, GRB_EQUAL, 1.0);
int GRBaddgenconstrPWL(GRBmodel *model, const char *name, int xvar, int yvar, int npts, double *xpts, double *ypts)#
Add a new general constraint of type GRB_GENCONSTR_PWL to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
A piecewise-linear (PWL) constraint states that the relationship \(y = f(x)\) must hold between variables \(x\) and \(y\), where \(f\) is a piecewise-linear function. The breakpoints for \(f\)
are provided as arguments. Refer to the description of piecewise-linear objectives for details of how piecewise-linear functions are defined.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ xvar – The index of variable \(x\).
☆ yvar – The index of variable \(y\).
☆ npts – The number of points that define the piecewise-linear function.
☆ xpts – The \(x\) values for the points that define the piecewise-linear function. Must be in non-decreasing order.
☆ ypts – The \(y\) values for the points that define the piecewise-linear function.
double xpts[] = {1, 3, 5};
double ypts[] = {1, 2, 4};
error = GRBaddgenconstr(model, "pwl", xvar, yvar, 3, x, y);
int GRBaddgenconstrPoly(GRBmodel *model, const char *name, int xvar, int yvar, int plen, double *p, const char *options)#
Add a new general constraint of type GRB_GENCONSTR_POLY to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
A polynomial function constraint states that the relationship \(y = p_0 x^d + p_1 x^{d-1} + ... + p_{d-1} x + p_{d}\) should hold between variables \(x\) and \(y\).
A piecewise-linear approximation of the function is added to the model. The details of the approximation are controlled using the following four attributes (or using the parameters with the same
names): FuncPieces, FuncPieceError, FuncPieceLength, and FuncPieceRatio. Alternatively, the function can be treated as a nonlinear constraint by setting the attribute FuncNonlinear. For details,
consult the General Constraint discussion.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ xvar – The index of variable \(x\).
☆ yvar – The index of variable \(y\).
☆ plen – The length of coefficient array p. If \(x^d\) is the highest power term, then plen should be \(d+1\).
☆ p – The coefficients for the polynomial function (starting with the coefficient for the highest power).
☆ options – A string that can be used to set the attributes that control the piecewise-linear approximation of this function constraint. To assign a value to an attribute, follow the
attribute name with an equal sign and the desired value (with no spaces). Assignments for different attributes should be separated by spaces (e.g. “FuncPieces=-1 FuncPieceError=0.001”).
/* y = 3 x^4 + 7 x + 3 = 3 x^4 + 0 x^3 + 0 x^2 + 7 x + 3 */
int plen = 5;
double p[] = {3, 0, 0, 7, 3};
error = GRBaddgenconstrPoly(model, "poly", xvar, yvar, 5, p, "");
int GRBaddgenconstrExp(GRBmodel *model, const char *name, int xvar, int yvar, const char *options)#
Add a new general constraint of type GRB_GENCONSTR_EXP to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
A natural exponential function constraint states that the relationship \(y = \exp(x)\) should hold for variables \(x\) and \(y\).
A piecewise-linear approximation of the function is added to the model. The details of the approximation are controlled using the following four attributes (or using the parameters with the same
names): FuncPieces, FuncPieceError, FuncPieceLength, and FuncPieceRatio. Alternatively, the function can be treated as a nonlinear constraint by setting the attribute FuncNonlinear. For details,
consult the General Constraint discussion.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ xvar – The index of variable \(x\).
☆ yvar – The index of variable \(y\).
☆ options – A string that can be used to set the attributes that control the piecewise-linear approximation of this function constraint. To assign a value to an attribute, follow the
attribute name with an equal sign and the desired value (with no spaces). Assignments for different attributes should be separated by spaces (e.g. “FuncPieces=-1 FuncPieceError=0.001”).
/* y = exp(x) */
error = GRBaddgenconstrExp(model, "exp", xvar, yvar, "");
int GRBaddgenconstrExpA(GRBmodel *model, const char *name, int xvar, int yvar, double a, const char *options)#
Add a new general constraint of type GRB_GENCONSTR_EXPA to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
An exponential function constraint states that the relationship \(y = a^x\) should hold for variables \(x\) and \(y\), where \(a > 0\) is the (constant) base.
A piecewise-linear approximation of the function is added to the model. The details of the approximation are controlled using the following four attributes (or using the parameters with the same
names): FuncPieces, FuncPieceError, FuncPieceLength, and FuncPieceRatio. Alternatively, the function can be treated as a nonlinear constraint by setting the attribute FuncNonlinear. For details,
consult the General Constraint discussion.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ xvar – The index of variable \(x\).
☆ yvar – The index of variable \(y\).
☆ a – The base of the function, \(a > 0\).
☆ options – A string that can be used to set the attributes that control the piecewise-linear approximation of this function constraint. To assign a value to an attribute, follow the
attribute name with an equal sign and the desired value (with no spaces). Assignments for different attributes should be separated by spaces (e.g. “FuncPieces=-1 FuncPieceError=0.001”).
/* y = 3^x */
error = GRBaddgenconstrExpA(model, "expa", xvar, yvar, 3.0, "");
int GRBaddgenconstrLog(GRBmodel *model, const char *name, int xvar, int yvar, const char *options)#
Add a new general constraint of type GRB_GENCONSTR_LOG to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
A natural logarithmic function constraint states that the relationship \(y = log(x)\) should hold for variables \(x\) and \(y\).
A piecewise-linear approximation of the function is added to the model. The details of the approximation are controlled using the following four attributes (or using the parameters with the same
names): FuncPieces, FuncPieceError, FuncPieceLength, and FuncPieceRatio. Alternatively, the function can be treated as a nonlinear constraint by setting the attribute FuncNonlinear. For details,
consult the General Constraint discussion.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ xvar – The index of variable \(x\).
☆ yvar – The index of variable \(y\).
☆ options – A string that can be used to set the attributes that control the piecewise-linear approximation of this function constraint. To assign a value to an attribute, follow the
attribute name with an equal sign and the desired value (with no spaces). Assignments for different attributes should be separated by spaces (e.g. “FuncPieces=-1 FuncPieceError=0.001”).
/* y = log(x) */
error = GRBaddgenconstrLog(model, "log", xvar, yvar, "FuncPieces=-1 FuncPieceError=0.001");
int GRBaddgenconstrLogA(GRBmodel *model, const char *name, int xvar, int yvar, double a, const char *options)#
Add a new general constraint of type GRB_GENCONSTR_LOGA to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
A logarithmic function constraint states that the relationship \(y = log_a(x)\) should hold for variables \(x\) and \(y\), where \(a > 0\) is the (constant) base.
A piecewise-linear approximation of the function is added to the model. The details of the approximation are controlled using the following four attributes (or using the parameters with the same
names): FuncPieces, FuncPieceError, FuncPieceLength, and FuncPieceRatio. Alternatively, the function can be treated as a nonlinear constraint by setting the attribute FuncNonlinear. For details,
consult the General Constraint discussion.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ xvar – The index of variable \(x\).
☆ yvar – The index of variable \(y\).
☆ a – The base of the function, \(a > 0\).
☆ options – A string that can be used to set the attributes that control the piecewise-linear approximation of this function constraint. To assign a value to an attribute, follow the
attribute name with an equal sign and the desired value (with no spaces). Assignments for different attributes should be separated by spaces (e.g. “FuncPieces=-1 FuncPieceError=0.001”).
/* y = log_10(x) */
error = GRBaddgenconstrLogA(model, "loga", xvar, yvar, 10.0, "");
int GRBaddgenconstrLogistic(GRBmodel *model, const char *name, int xvar, int yvar, const char *options)#
Add a new general constraint of type GRB_GENCONSTR_LOGISTIC to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
A logistic function constraint states that the relationship \(y = \frac{1}{1 + e^{-x}}\) should hold for variables \(x\) and \(y\).
A piecewise-linear approximation of the function is added to the model. The details of the approximation are controlled using the following four attributes (or using the parameters with the same
names): FuncPieces, FuncPieceError, FuncPieceLength, and FuncPieceRatio. Alternatively, the function can be treated as a nonlinear constraint by setting the attribute FuncNonlinear. For details,
consult the General Constraint discussion.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ xvar – The index of variable \(x\).
☆ yvar – The index of variable \(y\).
☆ options – A string that can be used to set the attributes that control the piecewise-linear approximation of this function constraint. To assign a value to an attribute, follow the
attribute name with an equal sign and the desired value (with no spaces). Assignments for different attributes should be separated by spaces (e.g. “FuncPieces=-1 FuncPieceError=0.001”).
/* y = 1 / (1 + exp(-x)) */
error = GRBaddgenconstrLogistic(model, "logistic", xvar, yvar, "");
int GRBaddgenconstrPow(GRBmodel *model, const char *name, int xvar, int yvar, double a, const char *options)#
Add a new general constraint of type GRB_GENCONSTR_POW to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
A power function constraint states that the relationship \(y = x^a\) should hold for variables \(x\) and \(y\), where \(a\) is the (constant) exponent.
If the exponent \(a\) is negative, the lower bound on \(x\) must be strictly positive. If the exponent isn’t an integer, the lower bound on \(x\) must be non-negative.
A piecewise-linear approximation of the function is added to the model. The details of the approximation are controlled using the following four attributes (or using the parameters with the same
names): FuncPieces, FuncPieceError, FuncPieceLength, and FuncPieceRatio. Alternatively, the function can be treated as a nonlinear constraint by setting the attribute FuncNonlinear. For details,
consult the General Constraint discussion.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ xvar – The index of variable \(x\).
☆ yvar – The index of variable \(y\).
☆ a – The exponent of the function.
☆ options – A string that can be used to set the attributes that control the piecewise-linear approximation of this function constraint. To assign a value to an attribute, follow the
attribute name with an equal sign and the desired value (with no spaces). Assignments for different attributes should be separated by spaces (e.g. “FuncPieces=-1 FuncPieceError=0.001”).
/* y = sqrt(x) */
error = GRBaddgenconstrPow(model, "pow", xvar, yvar, 0.5, "");
int GRBaddgenconstrSin(GRBmodel *model, const char *name, int xvar, int yvar, const char *options)#
Add a new general constraint of type GRB_GENCONSTR_SIN to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
A sine function constraint states that the relationship \(y = sin(x)\) should hold for variables \(x\) and \(y\).
A piecewise-linear approximation of the function is added to the model. The details of the approximation are controlled using the following four attributes (or using the parameters with the same
names): FuncPieces, FuncPieceError, FuncPieceLength, and FuncPieceRatio. Alternatively, the function can be treated as a nonlinear constraint by setting the attribute FuncNonlinear. For details,
consult the General Constraint discussion.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ xvar – The index of variable \(x\).
☆ yvar – The index of variable \(y\).
☆ options – A string that can be used to set the attributes that control the piecewise-linear approximation of this function constraint. To assign a value to an attribute, follow the
attribute name with an equal sign and the desired value (with no spaces). Assignments for different attributes should be separated by spaces (e.g. “FuncPieces=-1 FuncPieceError=0.001”).
/* y = sin(x) */
error = GRBaddgenconstrSin(model, "sin", xvar, yvar, "");
int GRBaddgenconstrCos(GRBmodel *model, const char *name, int xvar, int yvar, const char *options)#
Add a new general constraint of type GRB_GENCONSTR_COS to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
A cosine function constraint states that the relationship \(y = cos(x)\) should hold for variables \(x\) and \(y\).
A piecewise-linear approximation of the function is added to the model. The details of the approximation are controlled using the following four attributes (or using the parameters with the same
names): FuncPieces, FuncPieceError, FuncPieceLength, and FuncPieceRatio. Alternatively, the function can be treated as a nonlinear constraint by setting the attribute FuncNonlinear. For details,
consult the General Constraint discussion.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ xvar – The index of variable \(x\).
☆ yvar – The index of variable \(y\).
☆ options – A string that can be used to set the attributes that control the piecewise-linear approximation of this function constraint. To assign a value to an attribute, follow the
attribute name with an equal sign and the desired value (with no spaces). Assignments for different attributes should be separated by spaces (e.g. “FuncPieces=-1 FuncPieceError=0.001”).
/* y = cos(x) */
error = GRBaddgenconstrCos(model, "cos", xvar, yvar, "FuncPieces=-2");
int GRBaddgenconstrTan(GRBmodel *model, const char *name, int xvar, int yvar, const char *options)#
Add a new general constraint of type GRB_GENCONSTR_TAN to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
A tangent function constraint states that the relationship \(y = tan(x)\) should hold for variables \(x\) and \(y\).
A piecewise-linear approximation of the function is added to the model. The details of the approximation are controlled using the following four attributes (or using the parameters with the same
names): FuncPieces, FuncPieceError, FuncPieceLength, and FuncPieceRatio. Alternatively, the function can be treated as a nonlinear constraint by setting the attribute FuncNonlinear. For details,
consult the General Constraint discussion.
Return value:
A non-zero return value indicates that a problem occurred while adding the general constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new general constraint should be added.
☆ name – Name for the new general constraint. This argument can be NULL, in which case the constraint is given a default name.
☆ xvar – The index of variable \(x\).
☆ yvar – The index of variable \(y\).
☆ options – A string that can be used to set the attributes that control the piecewise-linear approximation of this function constraint. To assign a value to an attribute, follow the
attribute name with an equal sign and the desired value (with no spaces). Assignments for different attributes should be separated by spaces (e.g. “FuncPieces=-1 FuncPieceError=0.001”).
/* y = tan(x) */
error = GRBaddgenconstrTan(model, "tan", xvar, yvar, "");
int GRBdelgenconstrs(GRBmodel *model, int numdel, int *ind)#
Delete a list of general constraints from an existing model.
Note that, due to our lazy update approach, the general constraints won’t actually be removed until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write
the model to disk (using GRBwrite).
Return value:
A non-zero return value indicates that a problem occurred while deleting the constraints. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to modify.
☆ numdel – The number of general constraints to remove.
☆ ind – The indices of the general constraints to remove.
int first_four[] = {0, 1, 2, 3};
error = GRBdelgenconstrs(model, 4, first_four);
int GRBaddqconstr(GRBmodel *model, int numlnz, int *lind, double *lval, int numqnz, int *qrow, int *qcol, double *qval, char sense, double rhs, const char *constrname)#
Add a new quadratic constraint to a model.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
A quadratic constraint consists of a set of quadratic terms, a set of linear terms, a sense, and a right-hand side value: \(x^TQx + q^Tx \le b\). The quadratic terms are input through the numqnz,
qrow, qcol, and qval arguments, and the linear terms are input through the numlnz, lind, and lval arguments.
Gurobi can handle both convex and non-convex quadratic constraints. The differences between them can be both important and subtle. Refer to this discussion for additional information.
Return value:
A non-zero return value indicates that a problem occurred while adding the quadratic constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can
be obtained by calling GRBgeterrormsg.
☆ model – The model to which the new constraint should be added.
☆ numlnz – The number of linear terms in the new quadratic constraint.
☆ lind – Variable indices associated with linear terms.
☆ lval – Numerical values associated with linear terms.
☆ numqlnz – The number of quadratic terms in the new quadratic constraint.
☆ qrow – Row indices associated with quadratic terms. A quadratic term is represented using three values: a pair of indices (stored in qrow and qcol), and a coefficient (stored in qval).
The associated arguments arrays provide the corresponding values for each quadratic term. To give an example, if you wish to input quadratic terms \(2 x_0^2 + x_0 x_1 + x_1^2\), you would
call this routine with numqnz=3, qrow[] = {0, 0, 1}, qcol[] = {0, 1, 1}, and qval[] = {2.0, 1.0, 1.0}.
☆ qcol – Column indices associated with quadratic terms. See the description of the qrow argument for more information.
☆ qval – Numerical values associated with quadratic terms. See the description of the qrow argument for more information.
☆ sense – Sense for the new quadratic constraint. Options are GRB_LESS_EQUAL, GRB_EQUAL, or GRB_GREATER_EQUAL.
☆ rhs – Right-hand side value for the new quadratic constraint.
☆ constrname – Name for the new quadratic constraint. This argument can be NULL, in which case the constraint is given a default name.
int lind[] = {1, 2};
double lval[] = {2.0, 1.0};
int qrow[] = {0, 0, 1};
int qcol[] = {0, 1, 1};
double qval[] = {2.0, 1.0, 1.0};
/* 2 x0^2 + x0 x1 + x1^2 + 2 x1 + x2 <= 1 */
error = GRBaddqconstr(model, 2, lind, lval, 3, qrow, qcol, qval,
GRB_LESS_EQUAL, 1.0, "New");
int GRBaddqpterms(GRBmodel *model, int numqnz, int *qrow, int *qcol, double *qval)#
Add new quadratic objective terms into an existing model. Note that new terms are (numerically) added into existing terms, and that adding a term in row i and column j is equivalent to adding a
term in row j and column i. You can add all quadratic objective terms in a single call, or you can add them incrementally in multiple calls.
Note that, due to our lazy update approach, the new quadratic terms won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
To build an objective that contains both linear and quadratic terms, use this routine to add the quadratic terms and use the Obj attribute to add the linear terms.
If you wish to change a quadratic term, you can either add the difference between the current term and the desired term using this routine, or you can call GRBdelq to delete all quadratic terms,
and then rebuild your new quadratic objective from scratch.
Return value:
A non-zero return value indicates that a problem occurred while adding the quadratic terms. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new quadratic objective terms should be added.
☆ numqnz – The number of new quadratic objective terms to add.
☆ qrow – Row indices associated with quadratic terms. A quadratic term is represented using three values: a pair of indices (stored in qrow and qcol), and a coefficient (stored in qval).
The three argument arrays provide the corresponding values for each quadratic term. To give an example, to represent \(2 x_0^2 + x_0 x_1 + x_1^2\), you would have numqnz=3, qrow[] = {0,
0, 1}, qcol[] = {0, 1, 1}, and qval[] = {2.0, 1.0, 1.0}.
☆ qcol – Column indices associated with quadratic terms. See the description of the qrow argument for more information.
☆ qval – Numerical values associated with quadratic terms. See the description of the qrow argument for more information.
Note that building quadratic objectives requires some care, particularly if you are migrating an application from another solver. Some solvers require you to specify the entire \(Q\) matrix,
while others only accept the lower triangle. In addition, some solvers include an implicit 0.5 multiplier on \(Q\), while others do not. The Gurobi interface is built around quadratic terms,
rather than a \(Q\) matrix. If your quadratic objective contains a term 2 x y, you can enter it as a single term, 2 x y, or as a pair of terms, x y and y x.
int qrow[] = {0, 0, 1};
int qcol[] = {0, 1, 1};
double qval[] = {2.0, 1.0, 3.0};
/* minimize 2 x^2 + x*y + 3 y^2 */
error = GRBaddqpterms(model, 3, qrow, qcol, qval);
int GRBaddrangeconstr(GRBmodel *model, int numnz, int *cind, double *cval, double lower, double upper, const char *constrname)#
Add a new range constraint to a model. A range constraint states that the value of the input expression must be between the specified lower and upper bounds in any solution.
Note that, due to our lazy update approach, the new constraint won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
Return value:
A non-zero return value indicates that a problem occurred while adding the constraint. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new constraint should be added.
☆ numnz – The number of non-zero coefficients in the linear expression.
☆ cind – Variable indices for non-zero values in the linear expression.
☆ cval – Numerical values for non-zero values in the linear expression.
☆ lower – Lower bound on linear expression.
☆ upper – Upper bound on linear expression.
☆ constrname – Name for the new constraint. This argument can be NULL, in which case the constraint is given a default name.
Note that adding a range constraint to the model adds both a new constraint and a new variable. If you are keeping a count of the variables in the model, remember to add one whenever you add a
Note also that range constraints are stored internally as equality constraints. We use the extra variable that is added with a range constraint to capture the range information. Thus, the Sense
attribute on a range constraint will always be GRB_EQUAL. In particular introducing a range constraint
is equivalent to adding a slack variable \(s\) and the following constraints
\[\begin{split}\begin{array}{rl} a^T x - s & = L \\ 0 \leq s & \leq U - L. \end{array}\end{split}\]
int ind[] = {1, 3, 4};
double val[] = {1.0, 2.0, 3.0};
/* 1 <= x1 + 2 x3 + 3 x4 <= 2 */
error = GRBaddrangeconstr(model, 3, ind, val, 1.0, 2.0, "NewRange");
int GRBaddrangeconstrs(GRBmodel *model, int numconstrs, int numnz, int *cbeg, int *cind, double *cval, double *lower, double *upper, const char **constrnames)#
Add new range constraints to a model. A range constraint states that the value of the input expression must be between the specified lower and upper bounds in any solution.
Note that, due to our lazy update approach, the new constraints won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
If your constraint matrix may contain more than 2 billion non-zero values, you should consider using the GRBXaddrangeconstrs variant of this routine.
Return value:
A non-zero return value indicates that a problem occurred while adding the constraints. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new constraints should be added.
☆ numconstrs – The number of new constraints to add.
☆ numnz – The total number of non-zero coefficients in the new constraints.
☆ cbeg – Constraint matrix non-zero values are passed into this routine in Compressed Sparse Row (CSR) format by this routine. Each constraint in the constraint matrix is represented as a
list of index-value pairs, where each index entry provides the variable index for a non-zero coefficient, and each value entry provides the corresponding non-zero value. Each new
constraint has an associated cbeg value, indicating the start position of the non-zeros for that constraint in the cind and cval arrays. This routine requires that the non-zeros for
constraint i immediately follow those for constraint i-1 in cind and cval. Thus, cbeg[i] indicates both the index of the first non-zero in constraint i and the end of the non-zeros for
constraint i-1. To give an example of how this representation is used, consider a case where cbeg[2] = 10 and cbeg[3] = 12. This would indicate that constraint 2 has two non-zero values
associated with it. Their variable indices can be found in cind[10] and cind[11], and the numerical values for those non-zeros can be found in cval[10] and cval[11].
☆ cind – Variable indices associated with non-zero values. See the description of the cbeg argument for more information.
☆ cval – Numerical values associated with constraint matrix non-zeros. See the description of the cbeg argument for more information.
☆ lower – Lower bounds for the linear expressions.
☆ upper – Upper bounds for the linear expressions.
☆ constrnames – Names for the new constraints. This argument can be NULL, in which case all constraints are given default names.
Note that adding a range constraint to the model adds both a new constraint and a new variable. If you are keeping a count of the variables in the model, remember to add one for each range
Note also that range constraints are stored internally as equality constraints. We use the extra variable that is added with a range constraint to capture the range information. Thus, the Sense
attribute on a range constraint will always be GRB_EQUAL.
int GRBaddsos(GRBmodel *model, int numsos, int nummembers, int *types, int *beg, int *ind, double *weight)#
Add new Special Ordered Set (SOS) constraints to a model.
Note that, due to our lazy update approach, the new SOS constraints won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
Please refer to this section for details on SOS constraints.
Return value:
A non-zero return value indicates that a problem occurred while adding the SOS constraints. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new SOSs should be added.
☆ numsos – The number of new SOSs to add.
☆ nummembers – The total number of SOS members in the new SOSs.
☆ types – The types of the SOS sets. SOS sets can be of type GRB_SOS_TYPE1 or GRB_SOS_TYPE2.
☆ beg – The members of the added SOS sets are passed into this routine in Compressed Sparse Row (CSR) format. Each SOS is represented as a list of index-value pairs, where each index entry
provides the variable index for an SOS member, and each value entry provides the weight of that variable in the corresponding SOS set. Each new SOS has an associated beg value, indicating
the start position of the SOS member list in the ind and weight arrays. This routine requires that the members for SOS i immediately follow those for SOS i-1 in ind and weight. Thus, beg
[i] indicates both the index of the first non-zero in constraint i and the end of the non-zeros for constraint i-1. To give an example of how this representation is used, consider a case
where beg[2] = 10 and beg[3] = 12. This would indicate that SOS number 2 has two members. Their variable indices can be found in ind[10] and ind[11], and the associated weights can be
found in weight[10] and weight[11].
☆ ind – Variable indices associated with SOS members. See the description of the beg argument for more information.
☆ weight – Weights associated with SOS members. See the description of the beg argument for more information.
int types[] = {GRB_SOS_TYPE1, GRB_SOS_TYPE1};
int beg[] = {0, 2};
int ind[] = {1, 2, 1, 3};
double weight[] = {1, 2, 1, 2};
error = GRBaddsos(model, 2, 4, types, beg, ind, weight);
int GRBaddvar(GRBmodel *model, int numnz, int *vind, double *vval, double obj, double lb, double ub, char vtype, const char *varname)#
Add a new variable to a model.
Note that, due to our lazy update approach, the new variable won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the model
to disk (using GRBwrite).
Return value:
A non-zero return value indicates that a problem occurred while adding the variable. Refer to the Error Codes table for a list of possible return values. Details on the error can be obtained
by calling GRBgeterrormsg.
☆ model – The model to which the new variable should be added.
☆ numnz – The number of non-zero coefficients in the new column.
☆ vind – Constraint indices associated with non-zero values for the new variable.
☆ vval – Numerical values associated with non-zero values for the new variable.
☆ obj – Objective coefficient for the new variable.
☆ lb – Lower bound for the new variable.
☆ ub – Upper bound for the new variable.
☆ vtype – Type for the new variable. Options are GRB_CONTINUOUS, GRB_BINARY, GRB_INTEGER, GRB_SEMICONT, or GRB_SEMIINT.
☆ varname – Name for the new variable. This argument can be NULL, in which case the variable is given a default name.
int ind[] = {1, 3, 4};
double val[] = {1.0, 1.0, 1.0};
error = GRBaddvar(model, 3, ind, val, 1.0, 0.0, GRB_INFINITY,
GRB_CONTINUOUS, "New");
int GRBaddvars(GRBmodel *model, int numvars, int numnz, int *vbeg, int *vind, double *vval, double *obj, double *lb, double *ub, char *vtype, const char **varnames)#
Add new variables to a model.
Note that, due to our lazy update approach, the new variables won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the model
to disk (using GRBwrite).
If your constraint matrix may contain more than 2 billion non-zero values, you should consider using the GRBXaddvars variant of this routine.
Return value:
A non-zero return value indicates that a problem occurred while adding the variables. Refer to the Error Codes table for a list of possible return values. Details on the error can be obtained
by calling GRBgeterrormsg.
☆ model – The model to which the new variables should be added.
☆ numvars – The number of new variables to add.
☆ numnz – The total number of non-zero coefficients in the new columns.
☆ vbeg – Constraint matrix non-zero values are passed into this routine in Compressed Sparse Column (CSC) format. Each column in the constraint matrix is represented as a list of
index-value pairs, where each index entry provides the constraint index for a non-zero coefficient, and each value entry provides the corresponding non-zero value. Each variable in the
model has a vbeg, indicating the start position of the non-zeros for that variable in the vind and vval arrays. This routine requires columns to be stored contiguously, so the start
position for a variable is the end position for the previous variable. To give an example, if vbeg[2] = 10 and vbeg[3] = 12, that would indicate that variable 2 has two non-zero values
associated with it. Their constraint indices can be found in vind[10] and vind[11], and the numerical values for those non-zeros can be found in vval[10] and vval[11].
☆ vind – Constraint indices associated with non-zero values. See the description of the vbeg argument for more information.
☆ vval – Numerical values associated with constraint matrix non-zeros. See the description of the vbeg argument for more information.
☆ obj – Objective coefficients for the new variables. This argument can be NULL, in which case the objective coefficients are set to 0.0.
☆ lb – Lower bounds for the new variables. This argument can be NULL, in which case all variables get lower bounds of 0.0.
☆ ub – Upper bounds for the new variables. This argument can be NULL, in which case all variables get infinite upper bounds.
☆ vtype – Types for the variables. Options are GRB_CONTINUOUS, GRB_BINARY, GRB_INTEGER, GRB_SEMICONT, or GRB_SEMIINT. This argument can be NULL, in which case all variables are assumed to
be continuous.
☆ varnames – Names for the new variables. This argument can be NULL, in which case all variables are given default names.
int GRBchgcoeffs(GRBmodel *model, int numchgs, int *cind, int *vind, double *val)#
Change a set of constraint matrix coefficients. This routine can be used to set a non-zero coefficient to zero, to create a non-zero coefficient where the coefficient is currently zero, or to
change an existing non-zero coefficient to a new non-zero value. If you make multiple changes to the same coefficient, the last one will be applied.
Note that, due to our lazy update approach, the changes won’t actually be integrated into the model until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or
write the model to disk (using GRBwrite).
If your constraint matrix may contain more than 2 billion non-zero values, you should consider using the GRBXchgcoeffs variant of this routine.
Return value:
A non-zero return value indicates that a problem occurred while performing the modification. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to modify.
☆ numchgs – The number of coefficients to modify.
☆ cind – Constraint indices for the coefficients to modify.
☆ vind – Variable indices for the coefficients to modify.
☆ val – The new values for the coefficients. For example, if cind[0] = 1, vind[0] = 3, and val[0] = 2.0, then the coefficient in constraint 1 associated with variable 3 would be changed to
int cind[] = {0, 1};
int vind[] = {0, 0};
double val[] = {1.0, 1.0};
error = GRBchgcoeffs(model, 2, cind, vind, val);
int GRBdelconstrs(GRBmodel *model, int numdel, int *ind)#
Delete a list of constraints from an existing model.
Note that, due to our lazy update approach, the constraints won’t actually be removed until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the model
to disk (using GRBwrite).
Return value:
A non-zero return value indicates that a problem occurred while deleting the constraints. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to modify.
☆ numdel – The number of constraints to remove.
☆ ind – The indices of the constraints to remove.
int first_four[] = {0, 1, 2, 3};
error = GRBdelconstrs(model, 4, first_four);
int GRBdelq(GRBmodel *model)#
Delete all quadratic objective terms from an existing model.
Note that, due to our lazy update approach, the quadratic terms won’t actually be removed until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
Return value:
A non-zero return value indicates that a problem occurred while deleting the quadratic objective terms. Refer to the Error Codes table for a list of possible return values. Details on the
error can be obtained by calling GRBgeterrormsg.
☆ model – The model to modify.
int GRBdelqconstrs(GRBmodel *model, int numdel, int *ind)#
Delete a list of quadratic constraints from an existing model.
Note that, due to our lazy update approach, the quadratic constraints won’t actually be removed until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write
the model to disk (using GRBwrite).
Return value:
A non-zero return value indicates that a problem occurred while deleting the quadratic constraints. Refer to the Error Codes table for a list of possible return values. Details on the error
can be obtained by calling GRBgeterrormsg.
☆ model – The model to modify.
☆ numdel – The number of quadratic constraints to remove.
☆ ind – The indices of the quadratic constraints remove.
int first_four[] = {0, 1, 2, 3};
error = GRBdelqconstrs(model, 4, first_four);
int GRBdelsos(GRBmodel *model, int numdel, int *ind)#
Delete a list of Special Ordered Set (SOS) constraints from an existing model.
Note that, due to our lazy update approach, the SOS constraints won’t actually be removed until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
Return value:
A non-zero return value indicates that a problem occurred while deleting the constraints. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to modify.
☆ numdel – The number of SOSs to remove.
☆ ind – The indices of the SOSs to remove.
int first_four[] = {0, 1, 2, 3};
error = GRBdelsos(model, 4, first_four);
int GRBdelvars(GRBmodel *model, int numdel, int *ind)#
Delete a list of variables from an existing model.
Note that, due to our lazy update approach, the variables won’t actually be removed until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the model
to disk (using GRBwrite).
Return value:
A non-zero return value indicates that a problem occurred while deleting the variables. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to modify.
☆ numdel – The number of variables to remove.
☆ ind – The indices of the variables to remove.
int first_two[] = {0, 1};
error = GRBdelvars(model, 2, first_two);
int GRBsetobjectiven(GRBmodel *model, int index, int priority, double weight, double abstol, double reltol, const char *name, double constant, int lnz, int *lind, double *lval)#
Set an alternative optimization objective equal to a linear expression.
Please refer to the discussion of Multiple Objectives for information on how to specify multiple objective functions and control the trade-off between them.
Note that you can also modify an alternative objective using the ObjN variable attribute. If you wish to mix and match these two approaches, please note that this method replaces the entire
existing objective, while the ObjN attribute can be used to modify individual terms.
Note that, due to our lazy update approach, the new alternative objective won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or
write the model to disk (using GRBwrite).
Return value:
A non-zero return value indicates that a problem occurred while setting the alternative objective. Refer to the Error Codes table for a list of possible return values. Details on the error
can be obtained by calling GRBgeterrormsg.
☆ model – The model in which the new alternative objective should be set.
☆ index – Index for new objective. If you use an index of 0, this routine will change the primary optimization objective.
☆ priority – Priority for the alternative objective. This initializes the ObjNPriority attribute for this objective.
☆ weight – Weight for the alternative objective. This initializes the ObjNWeight attribute for this objective.
☆ abstol – Absolute tolerance for the alternative objective. This initializes the ObjNAbsTol attribute for this objective.
☆ reltol – Relative tolerance for the alternative objective. This initializes the ObjNRelTol attribute for this objective.
☆ name – Name of the alternative objective. This initializes the ObjNName attribute for this objective.
☆ constant – Constant part of the linear expression for the new alternative objective.
☆ lnz – Number of non-zero coefficients in new alternative objective.
☆ lind – Variable indices for non-zero values in new alternative objective.
☆ lval – Numerical values for non-zero values in new alternative objective.
int ind[] = {0, 1, 2};
double val[] = {1.0, 1.0, 1.0};
/* Objective expression: x0 + x1 + x2 */
error = GRBsetobjectiven(model, 0, 1, 0.0, 0.0, 0.0, "primary",
0.0, 3, ind, val);
int GRBsetpwlobj(GRBmodel *model, int var, int npoints, double *x, double *y)#
Set a piecewise-linear objective function for a variable.
The arguments to this method specify a list of points that define a piecewise-linear objective function for a single variable. Specifically, the \(x\) and \(y\) arguments give coordinates for the
vertices of the function.
For additional details on piecewise-linear objective functions, refer to this discussion.
Note that, due to our lazy update approach, the new piecewise-linear objective won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize),
or write the model to disk (using GRBwrite).
Return value:
A non-zero return value indicates that a problem occurred while setting the piecewise-linear objective. Refer to the Error Codes table for a list of possible return values. Details on the
error can be obtained by calling GRBgeterrormsg.
☆ model – The model to modify.
☆ var – The variable whose objective function is being changed.
☆ npoints – The number of points that define the piecewise-linear function.
☆ x – The \(x\) values for the points that define the piecewise-linear function. Must be in non-decreasing order.
☆ y – The \(y\) values for the points that define the piecewise-linear function.
double x[] = {1, 3, 5};
double y[] = {1, 2, 4};
error = GRBsetpwlobj(model, var, 3, x, y);
int GRBupdatemodel(GRBmodel *model)#
Process any pending model modifications.
Return value:
A non-zero return value indicates that a problem occurred while updating the model. Refer to the Error Codes table for a list of possible return values. Details on the error can be obtained
by calling GRBgeterrormsg.
☆ model – The model to update.
error = GRBupdatemodel(model);
int GRBfreemodel(GRBmodel *model)#
Free a model and release the associated memory.
Return value:
A non-zero return value indicates that a problem occurred while freeing the model. Refer to the Error Codes table for a list of possible return values. Details on the error can be obtained by
calling GRBgeterrormsg.
☆ model – The model to be freed.
error = GRBfreemodel(model);
int GRBXaddconstrs(GRBmodel *model, int numconstrs, size_t numnz, size_t *cbeg, int *cind, double *cval, char *sense, double *rhs, const char **constrnames)#
The size_t version of GRBaddconstrs. The two arguments that count non-zero values are of type size_t in this version to support models with more than 2 billion non-zero values.
Add new linear constraints to a model.
Note that, due to our lazy update approach, the new constraints won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
We recommend that you build your model one constraint at a time (using GRBaddconstr), since it introduces no significant overhead and we find that it produces simpler code. Feel free to use this
routine if you disagree, though.
Return value:
A non-zero return value indicates that a problem occurred while adding the constraints. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new constraints should be added.
☆ numconstrs – The number of new constraints to add.
☆ numnz – The total number of non-zero coefficients in the new constraints.
☆ cbeg – Constraint matrix non-zero values are passed into this routine in Compressed Sparse Row (CSR) format by this routine. Each constraint in the constraint matrix is represented as a
list of index-value pairs, where each index entry provides the variable index for a non-zero coefficient, and each value entry provides the corresponding non-zero value. Each new
constraint has an associated cbeg value, indicating the start position of the non-zeros for that constraint in the cind and cval arrays. This routine requires that the non-zeros for
constraint i immediately follow those for constraint i-1 in cind and cval. Thus, cbeg[i] indicates both the index of the first non-zero in constraint i and the end of the non-zeros for
constraint i-1. To give an example of how this representation is used, consider a case where cbeg[2] = 10 and cbeg[3] = 12. This would indicate that constraint 2 has two non-zero values
associated with it. Their variable indices can be found in cind[10] and cind[11], and the numerical values for those non-zeros can be found in cval[10] and cval[11].
☆ cind – Variable indices associated with non-zero values. See the description of the cbeg argument for more information.
☆ cval – Numerical values associated with constraint matrix non-zeros. See the description of the cbeg argument for more information.
☆ sense – Sense for the new constraints. Options are GRB_LESS_EQUAL, GRB_EQUAL, or GRB_GREATER_EQUAL.
☆ rhs – Right-hand side values for the new constraints. This argument can be NULL, in which case the right-hand side values are set to 0.0.
☆ constrnames – Names for the new constraints. This argument can be NULL, in which case all constraints are given default names.
int GRBXaddrangeconstrs(GRBmodel *model, int numconstrs, size_t numnz, size_t *cbeg, int *cind, double *cval, double *lower, double *upper, const char **constrnames)#
The size_t version of GRBaddrangeconstrs. The argument that counts non-zero values is of type size_t in this version to support models with more than 2 billion non-zero values.
Add new range constraints to a model. A range constraint states that the value of the input expression must be between the specified lower and upper bounds in any solution.
Note that, due to our lazy update approach, the new constraints won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the
model to disk (using GRBwrite).
Return value:
A non-zero return value indicates that a problem occurred while adding the constraints. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to which the new constraints should be added.
☆ numconstrs – The number of new constraints to add.
☆ numnz – The total number of non-zero coefficients in the new constraints.
☆ cbeg – Constraint matrix non-zero values are passed into this routine in Compressed Sparse Row (CSR) format by this routine. Each constraint in the constraint matrix is represented as a
list of index-value pairs, where each index entry provides the variable index for a non-zero coefficient, and each value entry provides the corresponding non-zero value. Each new
constraint has an associated cbeg value, indicating the start position of the non-zeros for that constraint in the cind and cval arrays. This routine requires that the non-zeros for
constraint i immediately follow those for constraint i-1 in cind and cval. Thus, cbeg[i] indicates both the index of the first non-zero in constraint i and the end of the non-zeros for
constraint i-1. To give an example of how this representation is used, consider a case where cbeg[2] = 10 and cbeg[3] = 12. This would indicate that constraint 2 has two non-zero values
associated with it. Their variable indices can be found in cind[10] and cind[11], and the numerical values for those non-zeros can be found in cval[10] and cval[11].
☆ cind – Variable indices associated with non-zero values. See the description of the cbeg argument for more information.
☆ cval – Numerical values associated with constraint matrix non-zeros. See the description of the cbeg argument for more information.
☆ lower – Lower bounds for the linear expressions.
☆ upper – Upper bounds for the linear expressions.
☆ constrnames – Names for the new constraints. This argument can be NULL, in which case all constraints are given default names.
Note that adding a range constraint to the model adds both a new constraint and a new variable. If you are keeping a count of the variables in the model, remember to add one for each range
Note also that range constraints are stored internally as equality constraints. We use the extra variable that is added with a range constraint to capture the range information. Thus, the Sense
attribute on a range constraint will always be GRB_EQUAL.
int GRBXaddvars(GRBmodel *model, int numvars, size_t numnz, size_t *vbeg, int *vind, double *vval, double *obj, double *lb, double *ub, char *vtype, const char **varnames)#
The size_t version of GRBaddvars. The two arguments that count non-zero values are of type size_t in this version to support models with more than 2 billion non-zero values.
Add new variables to a model.
Note that, due to our lazy update approach, the new variables won’t actually be added until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or write the model
to disk (using GRBwrite).
Return value:
A non-zero return value indicates that a problem occurred while adding the variables. Refer to the Error Codes table for a list of possible return values. Details on the error can be obtained
by calling GRBgeterrormsg.
☆ model – The model to which the new variables should be added.
☆ numvars – The number of new variables to add.
☆ numnz – The total number of non-zero coefficients in the new columns.
☆ vbeg – Constraint matrix non-zero values are passed into this routine in Compressed Sparse Column (CSC) format. Each column in the constraint matrix is represented as a list of
index-value pairs, where each index entry provides the constraint index for a non-zero coefficient, and each value entry provides the corresponding non-zero value. Each variable in the
model has a vbeg, indicating the start position of the non-zeros for that variable in the vind and vval arrays. This routine requires columns to be stored contiguously, so the start
position for a variable is the end position for the previous variable. To give an example, if vbeg[2] = 10 and vbeg[3] = 12, that would indicate that variable 2 has two non-zero values
associated with it. Their constraint indices can be found in vind[10] and vind[11], and the numerical values for those non-zeros can be found in vval[10] and vval[11].
☆ vind – Constraint indices associated with non-zero values. See the description of the vbeg argument for more information.
☆ vval – Numerical values associated with constraint matrix non-zeros. See the description of the vbeg argument for more information.
☆ obj – Objective coefficients for the new variables. This argument can be NULL, in which case the objective coefficients are set to 0.0.
☆ lb – Lower bounds for the new variables. This argument can be NULL, in which case all variables get lower bounds of 0.0.
☆ ub – Upper bounds for the new variables. This argument can be NULL, in which case all variables get infinite upper bounds.
☆ vtype – Types for the variables. Options are GRB_CONTINUOUS, GRB_BINARY, GRB_INTEGER, GRB_SEMICONT, or GRB_SEMIINT. This argument can be NULL, in which case all variables are assumed to
be continuous.
☆ varnames – Names for the new variables. This argument can be NULL, in which case all variables are given default names.
int GRBXchgcoeffs(GRBmodel *model, size_t numchgs, int *cind, int *vind, double *val)#
The size_t version of GRBchgcoeffs. The argument that counts non-zero values is of type size_t in this version to support models with more than 2 billion non-zero values.
Change a set of constraint matrix coefficients. This routine can be used to set a non-zero coefficient to zero, to create a non-zero coefficient where the coefficient is currently zero, or to
change an existing non-zero coefficient to a new non-zero value. If you make multiple changes to the same coefficient, the last one will be applied.
Note that, due to our lazy update approach, the changes won’t actually be integrated into the model until you update the model (using GRBupdatemodel), optimize the model (using GRBoptimize), or
write the model to disk (using GRBwrite).
Return value:
A non-zero return value indicates that a problem occurred while performing the modification. Refer to the Error Codes table for a list of possible return values. Details on the error can be
obtained by calling GRBgeterrormsg.
☆ model – The model to modify.
☆ numchgs – The number of coefficients to modify.
☆ cind – Constraint indices for the coefficients to modify.
☆ vind – Variable indices for the coefficients to modify.
☆ val – The new values for the coefficients. For example, if cind[0] = 1, vind[0] = 3, and val[0] = 2.0, then the coefficient in constraint 1 associated with variable 3 would be changed to
int cind[] = {0, 1};
int vind[] = {0, 0};
double val[] = {1.0, 1.0};
error = GRBXchgcoeffs(model, 2, cind, vind, val);
int GRBXloadmodel(GRBenv *env, GRBmodel **modelP, const char *Pname, int numvars, int numconstrs, int objsense, double objcon, double *obj, char *sense, double *rhs, size_t *vbeg, int *vlen, int *
vind, double *vval, double *lb, double *ub, char *vtype, const char **varnames, const char **constrnames)#
The size_t version of GRBloadmodel. The argument that counts non-zero values is of type size_t in this version to support models with more than 2 billion non-zero values.
Create a new optimization model, using the provided arguments to initialize the model data (objective function, variable bounds, constraint matrix, etc.). The model is then ready for
optimization, or for modification (e.g., addition of variables or constraints, changes to variable types or bounds, etc.).
Return value:
A non-zero return value indicates that a problem occurred while creating the model. Refer to the Error Codes table for a list of possible return values. Details on the error can be obtained
by calling GRBgeterrormsg.
☆ env – The environment in which the new model should be created. Note that the new model gets a copy of this environment, so subsequent modifications to the original environment (e.g.,
parameter changes) won’t affect the new model. Use GRBgetenv to modify the environment associated with a model.
☆ modelP – The location in which the pointer to the newly created model should be placed.
☆ Pname – The name of the model.
☆ numvars – The number of variables in the model.
☆ numconstrs – The number of constraints in the model.
☆ objsense – The sense of the objective function. Allowed values are 1 (minimization) or -1 (maximization).
☆ objcon – Constant objective offset.
☆ obj – Objective coefficients for the new variables. This argument can be NULL, in which case the objective coefficients are set to 0.0.
☆ sense – The senses of the new constraints. Options are '=' (equal), '<' (less-than-or-equal), or '>' (greater-than-or-equal). You can also use constants GRB_EQUAL, GRB_LESS_EQUAL, or
☆ rhs – Right-hand side values for the new constraints. This argument can be NULL, in which case the right-hand side values are set to 0.0.
☆ vbeg – Constraint matrix non-zero values are passed into this routine in Compressed Sparse Column (CSC) format. Each column in the constraint matrix is represented as a list of
index-value pairs, where each index entry provides the constraint index for a non-zero coefficient, and each value entry provides the corresponding non-zero value. Each variable in the
model has a vbeg and vlen value, indicating the start position of the non-zeros for that variable in the vind and vval arrays, and the number of non-zero values for that variable,
respectively. Thus, for example, if vbeg[2] = 10 and vlen[2] = 2, that would indicate that variable 2 has two non-zero values associated with it. Their constraint indices can be found in
vind[10] and vind[11], and the numerical values for those non-zeros can be found in vval[10] and vval[11]. Note that the columns of the matrix must be ordered from first to last, implying
that the values in vbeg must be non-decreasing.
☆ vlen – Number of constraint matrix non-zero values associated with each variable. See the description of the vbeg argument for more information.
☆ vind – Constraint indices associated with non-zero values. See the description of the vbeg argument for more information.
☆ vval – Numerical values associated with constraint matrix non-zeros. See the description of the vbeg argument for more information.
☆ lb – Lower bounds for the new variables. This argument can be NULL, in which case all variables get lower bounds of 0.0.
☆ ub – Upper bounds for the new variables. This argument can be NULL, in which case all variables get infinite upper bounds.
☆ vtype – Types for the variables. Options are GRB_CONTINUOUS, GRB_BINARY, GRB_INTEGER, GRB_SEMICONT, or GRB_SEMIINT. This argument can be NULL, in which case all variables are assumed to
be continuous.
☆ varnames – Names for the new variables. This argument can be NULL, in which case all variables are given default names.
☆ constrnames – Names for the new constraints. This argument can be NULL, in which case all constraints are given default names.
We recommend that you build a model one constraint or one variable at a time, using GRBaddconstr or GRBaddvar, rather than using this routine to load the entire constraint matrix at once. It is
much simpler, less error prone, and it introduces no significant overhead.
/* maximize x + y + 2 z
subject to x + 2 y + 3 z <= 4
x + y >= 1
x, y, z binary */
int vars = 3;
int constrs = 2;
size_t vbeg[] = {0, 2, 4};
int vlen[] = {2, 2, 1};
int vind[] = {0, 1, 0, 1, 0};
double vval[] = {1.0, 1.0, 2.0, 1.0, 3.0};
double obj[] = {1.0, 1.0, 2.0};
char sense[] = {GRB_LESS_EQUAL, GRB_GREATER_EQUAL};
double rhs[] = {4.0, 1.0};
char vtype[] = {GRB_BINARY, GRB_BINARY, GRB_BINARY};
error = GRBXloadmodel(env, &model, "example", vars, constrs, -1, 0.0,
obj, sense, rhs, vbeg, vlen, vind, vval,
NULL, NULL, vtype, NULL, NULL); | {"url":"https://docs.gurobi.com/projects/optimizer/en/current/reference/c/model.html#c.GRBaddqpterms","timestamp":"2024-11-13T03:02:33Z","content_type":"text/html","content_length":"390379","record_id":"<urn:uuid:5c5a40e8-466d-4040-8e4c-1bf7180a651a>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00148.warc.gz"} |
47 research outputs found
We have used Resonant Inelastic X-ray Scattering (RIXS) and dynamical susceptibility calculations to study the magnetic excitations in NaFe$_{1-x}$Co$_x$As (x = 0, 0.03, and 0.08). Despite a
relatively low ordered magnetic moment, collective magnetic modes are observed in parent compounds (x = 0) and persist in optimally (x = 0.03) and overdoped (x = 0.08) samples. Their magnetic
bandwidths are unaffected by doping within the range investigated. High energy magnetic excitations in iron pnictides are robust against doping, and present irrespectively of the ordered magnetic
moment. Nevertheless, Co doping slightly reduces the overall magnetic spectral weight, differently from previous studies on hole-doped BaFe$_{2}$As$_{2}$, where it was observed constant. Finally, we
demonstrate that the doping evolution of magnetic modes is different for the dopants being inside or outside the Fe-As layer.Comment: 19 pages, 7 figure
In existing theoretical approaches to core-level excitations of transition-metal ions in solids relaxation and polarization effects due to the inner core hole are often ignored or described
phenomenologically. Here we set up an ab initio computational scheme that explicitly accounts for such physics in the calculation of x-ray absorption and resonant inelastic x-ray scattering spectra.
Good agreement is found with experimental transition-metal $L$-edge data for the strongly correlated $d^9$ cuprate Li$_2$CuO$_2$, for which we determine the absolute scattering intensities. The newly
developed methodology opens the way for the investigation of even more complex $d^n$ electronic structures of group VI B to VIII B correlated oxide compounds
Strongly correlated insulators are broadly divided into two classes: Mott-Hubbard insulators, where the insulating gap is driven by the Coulomb repulsion $U$ on the transition-metal cation, and
charge-transfer insulators, where the gap is driven by the charge transfer energy $\Delta$ between the cation and the ligand anions. The relative magnitudes of $U$ and $\Delta$ determine which class
a material belongs to, and subsequently the nature of its low-energy excitations. These energy scales are typically understood through the local chemistry of the active ions. Here we show that the
situation is more complex in the low-dimensional charge transfer insulator Li$_\mathrm{2}$CuO$_\mathrm{2}$, where $\Delta$ has a large non-electronic component. Combining resonant inelastic x-ray
scattering with detailed modeling, we determine how the elementary lattice, charge, spin, and orbital excitations are entangled in this material. This results in a large lattice-driven
renormalization of $\Delta$, which significantly reshapes the fundamental electronic properties of Li$_\mathrm{2}$CuO$_\mathrm{2}$.Comment: Nature Communications, in pres
Understanding the spin dynamics in antiferromagnetic (AFM) thin films is fundamental for designing novel devices based on AFM magnon transport. Here, we study the magnon dynamics in thin films of AFM
$S=5/2$ $\alpha$-Fe$_2$O$_3$ by combining resonant inelastic x-ray scattering, Anderson impurity model plus dynamical mean-field theory, and Heisenberg spin model. Below 100 meV, we observe the
thickness-independent (down to 15 nm) acoustic single-magnon mode. At higher energies (100-500 meV), an unexpected sequence of equally spaced, optical modes is resolved and ascribed to $\Delta S_z =
1$, 2, 3, 4, and 5 magnetic excitations corresponding to multiple, noninteracting magnons. Our study unveils the energy, character, and momentum-dependence of single and multimagnons in $\alpha$-Fe
$_2$O$_3$ thin films, with impact on AFM magnon transport and its related phenomena. From a broader perspective, we generalize the use of L-edge resonant inelastic x-ray scattering as a
multispin-excitation probe up to $\Delta S_z = 2S$. Our analysis identifies the spin-orbital mixing in the valence shell as the key element for accessing excitations beyond $\Delta S_z = 1$, and up
to, e.g., $\Delta S_z = 5$. At the same time, we elucidate the novel origin of the spin excitations beyond the $\Delta S_z = 2$, emphasizing the key role played by the crystal lattice as a reservoir
of angular momentum that complements the quanta carried by the absorbed and emitted photons.Comment: Accepted in Physical Review
We report a high-resolution resonant inelastic soft x-ray scattering study of the quantum magnetic spin-chain materials Li2CuO2 and CuGeO3. By tuning the incoming photon energy to the oxygen K-edge,
a strong excitation around 3.5 eV energy loss is clearly resolved for both materials. Comparing the experimental data to many-body calculations, we identify this excitation as a Zhang-Rice singlet
exciton on neighboring CuO4-plaquettes. We demonstrate that the strong temperature dependence of the inelastic scattering related to this high-energy exciton enables to probe short-range spin
correlations on the 1 meV scale with outstanding sensitivity.Comment: 5 pages, 4 figure | {"url":"https://core.ac.uk/search/?q=author%3A(Bisogni%2C%20Valentina)","timestamp":"2024-11-12T19:01:54Z","content_type":"text/html","content_length":"148242","record_id":"<urn:uuid:f221b6f9-bd17-49f0-86e9-6c29cd1b877d>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00387.warc.gz"} |
High-Low Method: Learn How to Estimate Fixed & Variable Costs
av idaspark | dec 13, 2021 | Bookkeeping
The company wants to know the rate at which its electricity cost changes when the number of machine bom acct meaning hours change. The part of the electric bill that does not change with the number
of machine hours is known as the fixed cost. The hi low method now takes the highest and lowest activity cost values and looks at the change in total cost compared to the change in units between
these two values. Assuming the fixed cost is actually fixed, the change in cost must be due to the variable cost.
Which activity is most important to you during retirement?
Such a cost function may be used in budgeting to estimate the total cost at any given level of activity, assuming that past performance can reasonably be projected into future. Calculating the
outcome for the high-low method requires a few formula steps. First, you must calculate the variable-cost component and then the fixed-cost component, and then plug the results into the cost model
formula. The high-low method is a straightforward, if not slightly lengthy, way to figure out your total costs. While the high-low method is an easy one to use, it also has its disadvantages.
1. But this is only if the variable cost is a fixed charge per unit of product and the fixed costs remain the same.
2. If the variable cost is a fixed charge per unit and fixed costs remain the same, it is possible to determine the fixed and variable costs by solving the system of equations.
3. Due to its unreliability, high low method should be carefully used, usually in cases where the data is simple and not too scattered.
4. You need to know what the expected amount of overheads that your production line will incur in the next month.
5. While the high-low method is an easy one to use, it also has its disadvantages.
Step 3: Calculate the Fixed Cost
All of our content is based on objective analysis, and the opinions are our own. It’s also possible to draw incorrect conclusions by assuming that just because two sets of data correlate with each
other, one must cause changes in the other. Regression analysis is also best performed using a spreadsheet program or statistics program. However, the formula does not take inflation into
consideration and provides a very rough estimation because it only considers the extreme high and low values, and excludes the influence of any outliers. Let’s say that you are running a business
producing high end technology products.
The change in the total costs is thus the variable cost rate times the change in the number of units of activity. Continuing with this example, if the total electricity cost was $18,000 when there
were 120,000 MHs, the variable portion is assumed to have been $12,000 (120,000 MHs times $0.10). Since the total electricity cost was $18,000 and the variable cost was calculated to be $12,000, the
fixed cost of electricity for the month must have been the $6,000.
Do you own a business?
You need to know what the expected amount of overheads that your production line will incur in the next month. There are a number of accounting techniques used throughout the business world. For the
past 52 years, Harold Averkamp (CPA, MBA) hasworked as an accounting supervisor, manager, consultant, university instructor, and innovator in teaching accounting online. For the past 52 years, Harold
Averkamp (CPA, MBA) has worked as an accounting supervisor, manager, consultant, university instructor, and innovator in teaching accounting online.
The cost amounts adjacent to these activity levels will be used in the high-low method, even though these cost amounts are not necessarily the highest and lowest costs for the year. The high low
method is used in cost accounting as a method of separating a total cost into fixed and variable costs components. Although easy to understand, high low method may be unreliable because it ignores
all the data except for the two extremes. It can be argued that activity-cost pairs (i.e. activity level and the corresponding total cost) which are not representative of the set of data is purchase
return a debit or credit should be excluded before using high-low method. The high or low points used for the calculation may not represent the costs normally incurred at those volume levels due to
outlier costs that are higher or lower than would normally be incurred. The method works on the basis that the variable cost per unit and the fixed costs are assumed not to change throughout the
range of the two values used.
Let’s assume that the company is billed monthly for its electricity usage. The cost of electricity was $18,000 in the month when its highest activity was 120,000 machine hours (MHs). (Be sure to use
the MHs that occurred between the meter reading dates appearing on the bill.) The cost of electricity was $16,000 in the month when its lowest activity was 100,000 MHs. This shows that the total
monthly cost of electricity changed by $2,000 ($18,000 vs. $16,000) when the number of MHs changed by 20,000 (120,000 vs. 100,000).
Related AccountingTools Course
The high-low method is used to calculate the variable and fixed costs of a product or entity with mixed costs. It considers the total dollars of the mixed costs at the highest volume of activity and
the total dollars of the mixed costs at the lowest volume of activity. The total amount of fixed costs is assumed to be the same at both points of activity.
This method can only be used if the scattergram that you used for your initial testing shows a linear correlation between the costs and the quantity! Also note that although this method is simple to
apply it only uses the two points of data. Having only two points of data might produce results that are not accurate.
Senaste kommentarer | {"url":"https://torpaparkloppis.se/2021/12/13/high-low-method-learn-how-to-estimate-fixed/","timestamp":"2024-11-11T20:21:23Z","content_type":"text/html","content_length":"339089","record_id":"<urn:uuid:c6e3f0c8-becf-4050-84c4-c3902b878b99>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00367.warc.gz"} |
Today's Video
January 20, 2007
Today's Video
Update (Feb 7th): A second math video has been released. Check it out
Posted by KDeRosa at 9:47AM
35 comments:
Oh no. Maybe this is part of the reason I have students arrive in my Geometry class in 10th grade without knowing how to multiply.
Total crap.
Pretends to be for the general public (which doesn't have the advantage of the daily reading of your site [and others]), yet doesn't explain anything.
It's all shock value. (Look at this! Is this what your Grandpappy taught you?!)
Here's a good question for you (and that stupid shill meteorologist): If everyone were taught the "partial products" method TO MASTERY--that is, students could do it EVERY TIME without error,
would that make it less objectionable?
Seriously. If one were to answer "Well, the standard algorithm is more efficient," I would have to ask, Why? Because it's been practiced more? Or because it REALLY IS more efficient.
I have to admit that the partial products method isn't all that bad.
I taught my son (3rd grade) how to use it first, then after he has mastered it, I moved him on to the standard algorithm.
The lattice method reminds me of a bar trick. It works, but you can't easily figure out why.
I think the biggest points she scored in her video was the quote from the book about not needing to master anything since students could use calculators, and showing the umpteen page atlas' in
the math books.
Yeah, for what it sets out to do, it's very powerful. I just wish groups like hers could do more than exchange sound bites in front of the public.
I hope to see a lecture series soon, in which mr. person explains it all to the clueless masses.
Oddly enough, I agree with all the points made in the comments.
This is a powerful video.
It, however, does not make the best arguments for why these programs are bad, i.e., lack of practice to mastery and unclear instruction.
But I'm not so sure that those points are amenable to effective presentation in a video, although I could be wrong. Someone needs to make a video showing why the "spiral" fails so badly with
respect to teaching to mastery.
I don't so much mind that these non-standard algorithms are being taught, in fact, some I do like (partial products, even though the reason why it is taught is to mask nonmastery ("quick recall")
of the multiplication tables). What I do find horrifying is that the standard algorithms are getting such short shrift (even though JD is right that kids could perform just fine if these
algorithms were taught to mastery).
I posted a comment on KTM II from the DI listserv where Bob Dixon makes many of the same points that JD made -- the problem is much deeper than this video suggests qnd some of the points made in
the video weak.
I see a budding idea: A collaborative effort to hammer out a script that addresses serious problems with progressive/constructivist math ed not covered or insufficiently covered by the video.
Then find a convincing personality to sit in front of a camera, hopefully with props, and put it on YouTube. It can be a low-budget affair.
The Penfield group already lists many good points of these major deficiencies.
One I can already add is the enormous time wasted with silly activities and projects like lining up a million objects for weeks on end to get a sense of big numbers.
There is no dearth of other good points.
I think that would be EXCELLENT.
I think I understand the rationale for the partial-products algorithm. It makes the kids better at mental math and estimating. Both of those skills are important.
But for many kids it's hard to do, for the reasons McDermott explained. So in our school system, which uses Everyday Math, most kids are using lattice multiplication, which really offers no
advantages over the traditional algorithm.
Is the traditional algorithm more efficient than partial products? Well, if efficient means the fewest possible steps, then I guess not.
But when it comes multiplying very large numbers, the traditional algorithm is much easier. In a way, that makes it more efficient because, for most people, it's easier and faster.
For very large numbers, the traditional algorithm is certainly more efficient than drawing a big lattice.
The authors of Everyday Math would say get a calculator for very large numbers.
To tell the truth, I seldom whip out a pen and paper to do double-digit multiplication or division. Truth be told, without any TERC books, I do something that would be called partial products.
The lattices I defend for spatial learners. Being spatially "slow" myself, it would not have helped me as much as partial products or the "more efficient" traditional algorithm.
But for the lack of time to develop all 3, I would encourage all 3 to be presented. I think some will get the traditional way and some will get the other 2, and really, in school we're providing
ways for students to find their own answers when necessary.
What concerns me about the algorithm issue is that math professors seem to favor the traditional ways. It sounds like they want kids to be very proficient with paper-and-pencil math. They also
want the kids to know highly efficient, commonly used methods.
They say the traditional methods of multiplication and division are prepatory for more advanced math.
If the kid isn't going to take a lot of college math, maybe it doesn't matter which method he or she uses. But it may matter for the kid who needs lots of college math.
I thought the best thing about the video was that she actually named the offending textbooks. Most of the time, critics discuss the theory and philosophy which means nothing to most people. When
an actual textbook gets mentioned, parents have a concrete reason to say, hey, my kid uses that!
"I just wish groups like hers could do more than exchange sound bites in front of the public."
They do.
The video was an introduction to the problem, not THE problem. She alludes to other problems, but, unfortunately, does not go into details, so the debate gets stuck on the not-so-important issue
of which basic math algorithm to use. The only benefit might be to get parents to pay attention and do their homework. Something has to get parents out of their school-induced math haze.
It's about mastery, true mathematical understanding, and getting to algebra by 8th grade. I don't really care which algorithm is chosen. Mastery is where EM and TERC fail the most. This has been
well documented. That's why most smart schools supplement these programs. But supplementation with more practice is not enough, they just don't get from point A to B.
"I think I understand the rationale for the partial-products algorithm. It makes the kids better at mental math and estimating. Both of those skills are important."
My take is different. I think they use simpler algorithms because they are easier to understand (they don't need to be efficient since everyone will be using calculators) and require less mastery
of adds and subtracts to 20 and the times table. Forgiving division is all about not having to find the largest multiplier in your head. Partial products is about not having to do carrying (as
much). They think that these simpler algorithms teach understanding better. They might at a superficial level, but students need much more than superficial understanding when they get to algebra.
I think they just don't want math to be a filter. They want to make it simpler and they want to eliminate the drill and kill (i.e. mastery). Oops, there's the rub. One can argue about the need
for doing hundreds of long division problems by hand, but their lack of mastery and rigor philosophy carries over to topics that cannot be done by calculator.
I was in engineering at school when calculators became available for average students. Calculators were great and they really improved education (in college). Homework assignments became more
complex because less time was needed for calculations. We could work on calculations that could never be done by hand. Calculators made college more difficult.
Unfortunately, K-8 schools want calculators to make teaching and learning easier. That's not what they are for. They should be used to attempt much more complicated and advanced problems. That's
not what is happening. They are being used as avoidance tools.
I agree with Steve. Have you even tried to have a conversation with another parent about all of this? Even smart, college educated, super-involved parents start to get a glazed look on their face
very quickly. This is why I just Save My Own.
I think the average parent is a lot closer to me in math ability than to many of you. This is important. They can't be reached through a lot of the nuances you speak about, but showing a
long-winded algorithm that they've never seen before might get their attention. Finally.
Another thing that gets their attention is dropping test scores. I had a friend whose child's scores dropped about 30 points in one year. I mentioned to her that it was probably fractions. She
said no, that he was great in fractions, but she did give him one of the online placements tests. Later she called me and told me that, in fact, he shut down at fractions and that she had no idea
that he was so behind even though she kept up with his homework and he made all A's.
But this was a college educated, ex-teacher parent. She knows where he is headed. What about the other parents?
The rest will just resign themselves to the non-fact that little Johnny just isn't good at math.
"They say the traditional methods of multiplication and division are prepatory for more advanced math."
Let's put it this way. Students who have mastered the traditional algorithms have better basic math skills than those who (may or may not) have mastered algorithms like partial products. Is it a
necessary condition for advanced math? No, but it's an indicator. It probably means that these students are going to be able to spend less time on basic math and more time on the the advanced
topics. Is this the main argument against EM and TERC? No.
"If the kid isn't going to take a lot of college math, maybe it doesn't matter which method he or she uses. But it may matter for the kid who needs lots of college math."
Yes, but how do you know if a child is going to need a lot of math in college when they are in third grade?
"The rest will just resign themselves to the non-fact that little Johnny just isn't good at math."
"non-fact". Exactly. All school problems look external if you wait long enough.
Our schools say that "our kids hold their own", which means that they really don't want to know how much of that "holding" is done by parents and tutors.
"I mentioned to her that it was probably fractions."
It would have been better if the video focused on fractions, but that would have been more complex and many would just not understand.
I commented on KTM about the fraction work that my fifth grade son is getting in EM. Rote. Superficial understanding. Very little practice. Then quick, go on to the next topic.
The main sense I get from EM is low expectations and a slower pace. They cover a lot of topics, so it might seem advanced, but the explanations are simplistic and the expectations low. Slow and
low masquerading as understanding and discovery.
When I went to enginerring school, some-what powerful calculators were readily available. But the thing was that they did you almost no good at all until the very last step when the problem got
simplified to show the answer. Ideally, you wouldn't even substitute values in for the variables until the last step either which made the whole calculator issue moot anyway.
"To tell the truth, I seldom whip out a pen and paper to do double-digit multiplication or division. Truth be told, without any TERC books, I do something that would be called partial products."
When I do calculations in my head (exact or estimate), I use different techniques than the traditional algorithms. I often attack a multiplication problem left to right. I keep going depending on
how many significant digits I need. I think my mastery of the traditional algorithms helped this mental math ability.
TERC and EM try to teach basic understanding, but without mastery, there is no ability to estimate, accurately or otherwise.
"But the thing was that they did you almost no good at all until the very last step when the problem got simplified to show the answer."
Moot. yes. That's what happened. Some students had calculators and some didn't. The tests turned to all symbolic. Calculators were not even necessary. In the early days, our department had an HP
programmable desktop calculator that we could use for homework.
Eventually, when the TI calculators got to be less than $100, professors started to assign many more homework problems that required calculators. Some homework required the calculator for
numerical (like Simpson's Rule) techniques, rather than analytic techniques, but the tests all evolved to make the calculator unnecessary.
That was me.
I agree with you that the MAIN problem with Everyday Math is the lack of practice. I bought some fifth grade materials on Ebay to see what's coming next year.
At this point, I shouldn't be shocked, but I am. As Wayne Bishop said, the fifth grade material "has the flavor of a survey."
I don't know how anyone can learn math this way.
I think one of the main issues here is the unwillingness to recognize that different approaches work for different individuals. It's not (or shouldn't really be) about which one works for me,
therefore which one works for everyone or is generally more logical - rather, it should be about what teachers find most useful in their classrooms, with the range of children they're working
with, and what the parents can latch on to. Instead of approach-bashing, can't we be a bit more constructive about how we address these growing chasms?
(As an aside, I substitute taught in a school system and had no idea what was going on with the math curriculum _and_ math is one of my strong points. Perhaps there *is* something to be said
about creating a culture of transferrable math skills.)
"I think one of the main issues here is the unwillingness to recognize that different approaches work for different individuals."
This is not one of the main issues. The video doesn't get into it enough, but the main issue is mastery of basic skills. This includes much more than basic arithmetic. My son has Everyday math in
5th grade and the main problem is low expectations and lack of practice. This is done on purpose! EM's coverage of fractions is very poor and "rote" in many cases. It doesn't matter what "style"
of learner you are, the coverage is poor.
"Instead of approach-bashing, can't we be a bit more constructive about how we address these growing chasms?"
Schools are not "constructive" or pragmatic about the process. They are in charge and do not like anyone having any input into the curriculum or teaching methods. They are the masters of bashing.
It seems that they can't take what they dish out.
The purpose of the video was to get everyone's attention - perhaps provoke parents into finding out more. And there is a lot more to find out. This is not a matter of finding a balance between
skills and understanding.
Ken says:
"When I went to engineering school, some-what powerful calculators were readily available."
When I graduated from high school, my grandfather -- a math teacher -- got me one of the first calculators on the market (very expensive, bulky, and heavy), a HP with reverse polish logic (no
equals key). I found it extremely useful as an undergrad.
Even back then, when calculators were first coming out, people were predicting that they would degrade math skills. I poo-pooed them.
I'd like to apologize for that. I was wrong, and they were right. But it never occurred to me that anyone would even think of substituting calculators for math knowledge.
Hi, Ken.
Thanks for posting the video.
What bugs me about all this conversation is the entrenched diatribe. People stake out positions on this, and then try to defend that they're right and the other guy is wrong.
Point one: kids should learn that math is more than a single "algorithm". There is always more than one way to skin a cat, and higher order mathematics recognizes multiple paths to the right
Point two: Mr. Person is right that this video only fuels the math wars, and moves us (i.e., the general public) away from any reasonable discussion of mathematics instruction.
Point three (from personal experience): My sixth grader is taking Algebra 1. He is being taught by an "old school" teacher, age 62, who is teaching only formal math. My son is doing all these
problems, but he has no idea what any of them mean. So he gets confused. The abstractions mean nothing to him, so he gets frustrated and hates memorizing the "formal algorithms" used to solve the
abstract problems. My wife has begun tutoring him using Connected Math, only because he then learns a context or story in which all these formal math problems actually become tools to solving
interesting puzzles. To sum up, both sides of the math wars are right. And both are wrong.
So why are we still at war? Why are we fomenting discord in math education when actually, we all seem to agree that a combination of "formal algorithms" and context are important features of
instruction? Why do we insist on talking trash, implying it's "my way or the highway"?
In order to grow up like Mr. Person (a guy who clearly LOVES math), we have to instill in our kids at least two things: knowledge (of algorithms, for example) and a rich sense of how those cool
algorithms can be used to solve real puzzles...like the ones Mr. Person posts on his site.
Let's spend less time on the diatribe and more time helping our teachers (and parents) to understand that formal math without the context is self-defeating, and at the same time, "discovery" of
mathematical concepts is inefficient when handy "algorithms" will serve kids better when they understand where and when to apply them.
Which of these methods permits you to do math quickly your head? In terms of practical application of arithmetic (in work, travel, shopping, figuring a tip) , my brain relies on memorized
multiplication tables and something more akin to an algorithm. Fewer steps, fast answers.
Mark, I'm not a big fan of the consensus/balanced approach. It isn't working well for Reading and I can't imagine it'll work well for math. We need to find what works and use it, disgarding all
the remaining crap.
The problem isn't as cut and dry as the books are bad. If we get rid of the books everything will be okay. For an articulate video response by a math professor. Check out these YouTube links.
Math Education: A response Part 1
Math Education: A response Part 2
As an educator in Japan, a country that does exceptionally well on international tests, I can tell you that the reason that the students do well on international tests is because the education
system is geared directly to test taking (for enterance exams starting with middle school.) Whether or not that is translating into better pratical understanding is an open question. I can tell
you that there certainly doesn't seem to be any shortage of bad math and reasoning skills with the students and people I meet when compared with my native country of Canada.
xander, The good professor is attacking a strawman in this rebuttal. No one believes that math should be taught without understanding. Teaching algoithms does not preclude teaching them with
understanding. The issue is how to best teach that understanding. The primary criticsm of the curricula mentioned in the video is that the lack a coherent sequence of instruction and do not have
sufficient distributed practice to achieve automaticity in students.The result is that students do not gain the requireed mathematical understanding. Memorizing one's multiplication tables and
learning the common algorithms to mastery IS a necessity because our working memory is extremely limited (magic number 7 +/- 2). If students do not have this basic knowledgecommited to long term
memory the cognitive load required to peform higher math, such as algebra, willbe too much for their working memory constraints regardless of their understanding of math. Moreover, our brain is
predisposed to learn new informationin concrete terms first and only later is the brain able to structure the concrete examples around a dep abstract structure. It is at this point that the
student acquires understanding. A further mistake the professor's makes is not realizing that novice students do not learn like expert mathemeticions and do not have the deep well of domain
specific math content knowledge that is necessary to learn math in a way that can tolerate the haphazard sequence and minimal practice/rehearsals employed in these curricula.
I suggest that the professor read the "ask a cognitive scientist" articles written by Daniel Willlingham in the AFT journals (online) and reevaluate his position.
how would you go on about multibplying or dividing using those stupid partial product or quotient.
SORRY, BUT THE CREDIBILITY OF THE PRESENTER OF THIS VIDEO (LADY AGAINST REASONING IN MATH) IS SHATTERED WHEN SHE WRITES THAT 133 DIVIDED BY 6 IS 22 AND 1/6 !!!!!!i GUESS SHE IS A PRIME CANDIDATE
FOR MORE REASONING IN MATH. SOMEHOW SHE BECAME A QUANTITATIVE PROFESSIONAL (?), AND SHE DOES NOT SEEM TO UNDERSTAND SIMPLE, BASIC FRACTIONS. PRETTY SAD SHE IS TRYING TO INFLUENCE OTHERS WITH HER
THE CREDIBILITY OF THE PRESENTER OF THIS VIDEO (LADY AGAINST REASONING IN MATH) IS SHATTERED WHEN SHE WRITES THAT 133 DIVIDED BY 6 IS 22 AND 1/6
Er, that's what it is. | {"url":"https://d-edreckoning.blogspot.com/2007/01/todays-video.html?showComment=1177892100000","timestamp":"2024-11-07T07:38:44Z","content_type":"text/html","content_length":"130444","record_id":"<urn:uuid:f495f232-6fe4-43fd-81f5-04ecc0c638a6>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00325.warc.gz"} |
Sounding Off About Trig
This web site lists several different places a student could go to learn more about physics. It includes more than just sound options.
This web sites gives some history to who helped with discoveries about sound waves and some applications of sound. It is not very detailed.
A student’s understanding and explanation of the physics of a sub woofer is given at this web site.
This site talks about the following sound concepts: vibrations, sound waves, frequency, pitch, resonance and overtones.
This is a great site and covers: pitch, amplification, timbre, how sound travels, how the ear works, and how sound is conveyed to the brain.
Students are able to explore the physics of different things from sound to skating by clicking on different letters. The whole site is created by students and is therefore easy to understand and
Serway, R.A. and Faughn, J.S. Holt Physics. Austin: Holt, Rinehart and Winston, 1999. Pg. 481.
Rossing, T. D. The Science of Sound. Massachusetts: Addison-Wesley Publishing Company, 1990. Pg. 22.
Rossing, T. D. The Science of Sound. Massachusetts: Addison-Wesley Publishing Company, 1990. Pg. 60.
Rossing, T. D. The Science of Sound. Massachusetts: Addison-Wesley Publishing Company, 1990. Pg. 22.
A source where more practice can be found to give to students is in the Advanced Mathematics: A Precalculus Approach book. Beginning on page 126 there is text that students can read to deepen their
understanding. At the end of the text for this section there are practice problems that can be assigned to give students further practice.
If you are working with a younger less experienced group of students I recommend the Advanced Algebra test listed in the student resources. The reading level of this book is a little lower than the
book mentioned above and students can easily understand. The practice can be found on pages 418 and 419.
In the Advanced Algebra text you will find all of chapter 9 deals with trigonometric functions and I recommend this for more practice or as a reading assignment for the students depending on what you
are working on at the time.
For the precalculus students I recommend looking at chapter 4, section 2 in the Advanced Functions: A Precalculus Approach book.
Again, for more practice for your students I recommend using pages 196 and 197 from the Advanced Functions: A Precalculus Approach book. I also recommend using practice 4.2 from the teachers
supplement book.
I wrote to the TI-web site to obtain permission to include the program in this paper. They said they did not have a copy and that permission would have to be obtained from the author. I received the
program from a colleague who is not sure where it originated. For that reason, the program is not included here. I am still trying to track down the author and when I have done so will be happy to
send a copy to anyone who would like it. You may contact me at HYPERLINK mailto:AndreaSorrellsaol.com AndreaSorrellsaol.com to see if I am able to send you a copy and all I will be happy to respond.
See note number 6 for problems that you could assign as homework.
Rossing, T. D. The Science of Sound. Massachusetts: Addison-Wesley Publishing Company, 1990. Pg.179.
Y = sin 3x Y = sin x Y = sin x + sin 3x | {"url":"https://teachersinstitute.yale.edu/curriculum/units/2000/5/00.05.09/6","timestamp":"2024-11-09T13:13:57Z","content_type":"text/html","content_length":"41117","record_id":"<urn:uuid:f7e1a0f0-7d7c-4014-b1ae-4c1c722f6ab3>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00470.warc.gz"} |
Functional Programming Techniques for Philosophy and Linguistics
Meets July 11--15 in Murray Hall 212 or 211 (alternating on different days)
• JP's Tuesday Handout (updated Tuesday at 7:30pm)
□ A few people asked me afterwards, why did you call it "egal"? Here is the source that introduces that term. It's also an interesting read, if you have some programming background, and is
often cited amongst programmers. But I'm not expecting that most members of this class will have the background that paper presumes.
□ The four(!) typos in the handout distributed in class were: in item (a) in the first operational semantics, in the result for expression (26), in expression (37) should be let xs … not let ys
…, and in the derivation at the end of the handout (the assignment that x is evaluated wrt in the top right should be g {x ↦ len m}, not just g). These are all fixed in the pdf linked above.
• JP's Thursday Handout
• Friday Handout
□ We have working Haskell code for all the semantics we walked through. But it needs to be cleaned up and made self-standing before we should post it. We'll do that when we can. Feel free to
prod us.
Other sources
• wiki from NYU grad seminar. This has expanded presentations of much of this material.
• Github repository for ESSLLI 2015 course taught by CB and Dylan Bumford
• On Wednesday, we'll be discussing Groenendijk, Stokhof, and Veltman, "Coreference and Modality" (1996), which is a canonical paper in the dynamic semantics tradition, unifying G&S's 1991 DPL
treatment of pronominal anaphora with Veltman's 1990/1996 treatment of epistemic modals.
• On Thursday, we'll be introducing you to monads, and giving some examples of using them in semantics for natural language. | {"url":"http://www.jimpryor.net/teaching/nasslli/","timestamp":"2024-11-08T16:54:57Z","content_type":"application/xhtml+xml","content_length":"6131","record_id":"<urn:uuid:362e786e-566a-43a1-b007-d71ae5020c00>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00055.warc.gz"} |
What’s that in the sky? Build a MATLAB Planetarium
I look up at the sky just after sunset and I see an especially bright star. It's probably a planet. But which one?
This question gives me a good opportunity to play around with MATLAB. Let's do a visualization that shows where the planets are relative to the earth and the sun. In the process, we'll use JSON
services, the File Exchange, MATLAB graphics, and 3-D vector mathematics cribbed from Wikipedia.
Here is the basic grade-school illustration of the solar system, the one that shows the planets rolling around the sun like peas on a plate. For simplicity, we're just showing the sun, the earth, the
moon, Venus, and Mars.
But we never see anything like this with our own eyes. Instead, we see bright spots on a dark background somewhere "up there." So let's simplify our problem to determining what direction each
naked-eye planet is in. This leads to an image like this.
Our goal is to make an accurate up-to-date version of this diagram. Specifically, relative to the sun, where should we look to find the moon and the naked-eye planets (Mercury, Venus, Mars, Jupiter,
and Saturn)? To do this, we need to solve a few problems.
1. Find the planets
2. Find the unit vector pointing from earth to each planet
3. Squash all these vectors onto a single plane
4. Visualize the resulting disk
Where Are the Planets?
First of all, where are the planets? There's a free JSON service for just about everything these days. I found planetary data hosted on Davy Wybiral's site here:
url = 'http://www.astro-phys.com/api/de406/states?bodies=sun,moon,mercury,venus,earth,mars,jupiter,saturn';
json = urlread(url);
Parse the data
Now we can use Joe Hicklin's JSON parser from the File Exchange. It returns a well-behaved MATLAB structure.
data = JSON.parse(json)
data =
date: 2.4568e+06
results: [1x1 struct]
unit: 'km'
The payload is in the "results" field. Each entry has three position components and three velocity components.
ans =
mercury: {{1x3 cell} {1x3 cell}}
sun: {{1x3 cell} {1x3 cell}}
moon: {{1x3 cell} {1x3 cell}}
jupiter: {{1x3 cell} {1x3 cell}}
mars: {{1x3 cell} {1x3 cell}}
earth: {{1x3 cell} {1x3 cell}}
venus: {{1x3 cell} {1x3 cell}}
saturn: {{1x3 cell} {1x3 cell}}
The distances are in kilometers, and I'm not even sure how this representation is oriented relative to the surrounding galaxy. But it doesn't really matter, because all I care about is the relative
positions of the bodies in question.
Aerospace Toolbox Ephemeris
Side note: We could also have used the Aerospace Toolbox to get the same information.
[pos,vel] = planetEphemeris(juliandate(now),'Sun','Earth')
Build the Solar System Structure
% List of bodies we care about
ssList = {'sun','moon','mercury','venus','earth','mars','jupiter','saturn'};
ss = [];
for i = 1:length(ssList)
ssObjectName = ssList{i};
ss(i).name = ssObjectName;
ssData = data.results.(ssObjectName);
ss(i).position = [ssData{1}{:}];
ss(i).velocity = [ssData{2}{:}];
Plot the planets
% Plot in astronomical units
au = 149597871;
k = 5;
for i = 1:length(ss)
p = ss(i).position/au;
line(p(1),p(2),p(3), ...
text(p(1),p(2),p(3),[' ' ss(i).name]);
grid on
axis equal
This is accurate, but not yet very helpful. Let's now calculate the geocentric position vectors of each planet. To do this, we'll put the earth at the center of the system. Copernicus won't mind,
because A) he's dead, and B) we admit this reference frame orbits the sun.
We're also going to use another file from the File Exchange. Georg Stillfried's mArrow3 will help us make nice 3-D arrows in space.
pEarth = ss(5).position;
for i = 1:length(ss)
% pe = position relative to earth
% (i.e. a vector pointing from earth to body X)
pe = ss(i).position - pEarth;
% pne = normalized position relative to earth
pne = pe/norm(pe);
ss(i).pne = pne;
mArrow3([0 0 0],pne, ...
'stemWidth',0.01,'FaceColor',[1 0 0]);
text(pne(1),pne(2),pne(3),ss(i).name, ...
hold on
hold off
axis equal
axis off
axis([-1.2 1.2 -0.8 1.1 -0.2 0.8])
These are unit vectors pointing out from the center of the earth towards each of the other objects. It's a little hard to see, but these vectors are all lying in approximately the same plane.
If we change our view point to look at the system edge-on, we can see the objects are not quite co-planar. For simplicity, let's squash them all into the same plane. For this, we'll use the plane
defined by the earth's velocity vector crossed with its position relative to the sun. This defines "north" for the solar system.
pEarth = ss(5).position;
pSun = ss(1).position;
vEarth = ss(5).velocity;
earthPlaneNormal = cross(vEarth,pSun - pEarth);
% Normalize this vector
earthPlaneNormalUnit = earthPlaneNormal/norm(earthPlaneNormal);
mArrow3([0 0 0],earthPlaneNormalUnit, ...
'stemWidth',0.01,'FaceColor',[0 0 0]);
axis([-1.2 1.2 -0.8 1.1 -0.2 0.8])
Now we project the vectors onto the plane defined by earth's motion around the sun. I learned what I needed from Wikipedia here: Vector Projection.
Since we are working with the normal, we are technically doing a vector rejection. Using Wikipedia's notation, this is given by
$$ \mathbf{a_2} = \mathbf{a} - \frac{\mathbf{a} \cdot \mathbf{b}}{\mathbf{b} \cdot \mathbf{b}} \mathbf{b} $$
hold on
for i = 1:length(ss)
pne = ss(i).pne;
pneProj = pne - dot(pne,earthPlaneNormalUnit)/dot(earthPlaneNormalUnit,earthPlaneNormalUnit)*earthPlaneNormalUnit;
ss(i).pneProj = pneProj;
mArrow3([0 0 0],pneProj, ...
'stemWidth',0.01,'FaceColor',[0 0 1]);
hold off
axis equal
We're close to the end now. Let's just calculate the angle between the sun and each element. Then we'll place the sun at the 12:00 position of our planar visualization and all the other planets will
fall into place around it.
Calculate the angle between the sun and each of the bodies. Again, from the Wikipedia article, we have
$$ cos \theta = \frac{\mathbf{a} \cdot \mathbf{b}}{|\mathbf{a}||\mathbf{b}|} $$
sun = ss(1).pneProj;
ss(1).theta = 0;
for i = 1:length(ss)
pneProj = ss(i).pneProj;
cosTheta = dot(sun,pneProj)/(norm(sun)*norm(pneProj));
theta = acos(cosTheta);
% The earth-plane normal vector sticks out of the plane. We can use it
% to determine the orientation of theta
x = cross(sun,pneProj);
theta = theta*sign(dot(earthPlaneNormalUnit,x));
ss(i).theta = theta;
Plot the result
k1 = 1.5;
k2 = 1.2;
for i = 1:length(ss)
beta = ss(i).theta + pi/2;
x = cos(beta);
y = sin(beta);
mArrow3([0 0 0],[x y 0], ...
'stemWidth',0.01, ...
'FaceColor',[0 0 1]);
line([0 k1*x],[0 k1*y],'Color',0.8*[1 1 1])
t = linspace(0,2*pi,100);
line(k2*cos(t),k2*sin(t),'Color',0.8*[1 1 1])
line(0,0,1,'Marker','.','MarkerSize',40,'Color',[0 0 1])
axis equal
axis(2*[-1 1 -1 1])
And there you have it: an accurate map of where the planets are in the sky for today's date. In this orientation, planets "following" the sun through the sky are on the left side of the circle. So
Jupiter will be high in the sky as the sun sets.
And that is a very satisfying answer to my question, by way of vector math, JSON feeds, MATLAB graphics, and the File Exchange.
To leave a comment, please click here to sign in to your MathWorks Account or create a new one. | {"url":"https://blogs.mathworks.com/community/2014/05/14/whats-that-in-the-sky-build-a-matlab-planetarium/?s_tid=blogs_rc_3&from=en","timestamp":"2024-11-06T05:06:48Z","content_type":"text/html","content_length":"169074","record_id":"<urn:uuid:739429c8-18d7-4a71-8e43-c3fa7ff47ffc>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00461.warc.gz"} |
[Solved] [తెలుగు] Approaches to AI MCQ [Free Telugu PDF] - Objective Question Answer for Approaches to AI Quiz - Download Now!
Approaches to AI MCQ Quiz in తెలుగు - Objective Question with Answer for Approaches to AI - ముఫ్త్ [PDF] డౌన్లోడ్ కరెన్
Last updated on Aug 7, 2024
పొందండి Approaches to AI సమాధానాలు మరియు వివరణాత్మక పరిష్కారాలతో బహుళ ఎంపిక ప్రశ్నలు (MCQ క్విజ్). వీటిని ఉచితంగా డౌన్లోడ్ చేసుకోండి Approaches to AI MCQ క్విజ్ Pdf మరియు బ్యాంకింగ్, SSC, రైల్వే, UPSC, స్టేట్ PSC వంటి మీ రాబోయే పరీక్షల కోసం సిద్ధం చేయండి.
Latest Approaches to AI MCQ Objective Questions
Top Approaches to AI MCQ Objective Questions
Approaches to AI Question 1:
In heuristic search algorithms in Artificial Intelligence (AI), if a collection of admissible heuristics h[1].......h[m] is available for a problem and none of them dominates any of the others, which
should we choose ?
Answer (Detailed Solution Below)
Option 1 : h(n)=max{h[1](n),....,h[m](n)}
Approaches to AI Question 1 Detailed Solution
Heuristic search refers to a search strategy that attempts to optimize a problem by iteratively improving the solution based on a given heuristic function or cost measure. Several commonly used
heuristic search methods include hill climbing, best first search, A* algorithm, simulated annealing.
Real valued hash function are used as a means for constraining search in combinatorial large problem spaces. A strategy is considered to be a function which for a given state in some problem domain
returns sequences of states over the problem domain.
A heuristic function h(n) finds the cost of cheapest path from a node to the goal node.
• The function h(n) is an underestimate if h(n) is less than or equal to the actual cost of a lowest cost path from a node to the goal node
• The heuristic value of a path is the heuristic value of the node at the end of the path.
• Two ways are there to use the heuristic function: one is for heuristic depth first search and another for best first search
• If heuristic function h(n) = max{h[1](n),....,h[m](n)}, then a collection of admissible heuristics h[1].......h[m] is available for a problem and none of them dominates any of the others.
• Composite heuristic function dominates all of its component functions and is consistent if none of the components overestimates.
Approaches to AI Question 2:
Which of the following statements is/are true for the Backtracking search algorithm?
A) Backtracking search is complete if the variable ordering is optimal.
B) Backtracking search can be improved using forward checking.
C) The search space explored by backtracking search is exponential in the worst case.
D) Heuristic variable ordering can reduce the size of the search tree.
Answer (Detailed Solution Below)
Option 4 : C and D only
Approaches to AI Question 2 Detailed Solution
The correct answer is C and D only
Key Points
• Statement A is false because backtracking search is complete irrespective of the variable ordering. Optimal variable ordering can improve performance but does not affect completeness.
• Statement B is true because forward checking can indeed improve backtracking search by reducing the number of future variable assignments that need to be considered.
• Statement C is true because the search space explored by backtracking search can indeed be exponential in the worst case.
• Statement D is true because heuristic variable ordering can reduce the size of the search tree, making the search process more efficient.
Additional Information
• Backtracking Search Algorithm:
□ It is a depth-first search algorithm for solving constraint satisfaction problems (CSPs).
□ It incrementally builds candidates to the solutions and abandons a candidate as soon as it determines that the candidate cannot lead to a valid solution.
□ The algorithm explores the search space by making decisions at each step and backtracks when a decision leads to a dead end.
• Forward Checking:
□ A technique used to improve the efficiency of backtracking by looking ahead and ruling out certain variable assignments that would lead to a conflict.
□ It helps in reducing the number of future variable assignments that need to be considered, thus pruning the search space.
• Heuristic Variable Ordering:
□ The process of selecting the next variable to assign a value to, based on some heuristic.
□ Heuristics such as the Minimum Remaining Values (MRV) can help in reducing the search tree size by selecting the variable that is most likely to cause a failure, thus allowing the algorithm
to backtrack earlier.
Approaches to AI Question 3:
In the content of Alpha Beta pruning in game trees which of the following statements are correct regarding cut off procedures ? Identify the incorrect statement from the options given below:
Answer (Detailed Solution Below)
Option 1 : Alpha Beta pruning guarantees the optimal solution in all cases by exploring the entire game tree.
Approaches to AI Question 3 Detailed Solution
The correct answer is Alpha Beta pruning guarantees the optimal solution in all cases by exploring the entire game tree.
• (1) Alpha Beta pruning guarantees the optimal solution in all cases by exploring the entire game tree. This statement is incorrect. While alpha-beta pruning does find the optimal solution, it
does not do so by exploring the whole tree. The whole point of alpha-beta pruning is to avoid having to explore the entire tree.
• (2) Alpha Beta pruning can eliminate subtrees with certainty when the value of a node exceeds both the alpha and beta bounds. This is correct. If a node's value falls outside the range defined by
alpha and beta, it can be pruned.
• (3) Alpha and Beta bounds are initialized to negative and positive infinity respectively at the root node. This one is correct. At the start of search, alpha (the value of the best option we have
found so far for max) is initialized to negative infinity, and beta (the value of the best option we have found so far for min) is initialized to positive infinity.
• (4) The primary purpose of Alpha-Beta pruning is to save computation time by searching fewer nodes in the same tree. This is correct. Alpha-Beta pruning reduces the number of nodes that need to
be examined in the search tree, thereby speeding up the computation.
So the correct answer is option 1.
Approaches to AI Question 4:
In the content of Alpha Beta pruning in game trees which of the following statements are correct regarding cut off procedures ?
(A) Alpha Beta pruning can eliminate subtrees with certainly when the value of a node exceeds both the alpha and beta bonds.
(B) The primarily purpose of Alpha-Beta proning is to save computation time by searching fewer nodes in the same tree.
(C) Alpha Beta pruning guarantees the optimal solution in all cases by exploring the entire game tree.
(D) Alpha and Beta bonds are initialized to negative and positive infinity respectively at the root note.
Choose the correct answer from the options given below:
Answer (Detailed Solution Below)
Option 3 : (A), (B), (D) Only
Approaches to AI Question 4 Detailed Solution
The correct answer is (A), (B), (D) Only
• (A) Alpha Beta pruning can eliminate subtrees with certainty when the value of a node exceeds both the alpha and beta bounds. This is correct. If a node's value falls outside the range defined by
alpha and beta, it can be pruned.
• (B) The primary purpose of Alpha-Beta pruning is to save computation time by searching fewer nodes in the same tree. This is correct. Alpha-Beta pruning reduces the number of nodes that need to
be examined in the search tree, thereby speeding up the computation.
• (C) Alpha Beta pruning guarantees the optimal solution in all cases by exploring the entire game tree. This statement is incorrect. While alpha-beta pruning does find the optimal solution, it
does not do so by exploring the whole tree. The whole point of alpha-beta pruning is to avoid having to explore the entire tree.
• (D) Alpha and Beta bounds are initialized to negative and positive infinity respectively at the root node. This one is correct. At the start of search, alpha (the value of the best option we have
found so far for max) is initialized to negative infinity, and beta (the value of the best option we have found so far for min) is initialized to positive infinity.
So the correct answer is (A), (B), (D) Only.
Approaches to AI Question 5:
Which of the following statements is/are true for the Constraint satisfaction problem?
A) The tree-structured Constraint satisfaction problem can not be solved in polynomial time.
B) Forward checking is the same as Arc consistency.
C) Computation time depends on a number of constraints and variable.
D) Iterative min-conflicts algorithms can solve 10000-Queens.
Answer (Detailed Solution Below)
Option 4 : C and D only
Approaches to AI Question 5 Detailed Solution
The correct answer is option 4.
Key Points
A) The tree-structured Constraint satisfaction problem can not be solved in polynomial time.
False, Can solve them in polynomial time. CSP can be solved in O(nd^2) time as compared to general CSPs, where the worst-case time is O(d^n).
B) Forward checking is the same as Arc consistency.
False, The difference between forwarding checking and arc consistency is that the former only checks a single unassigned variable at a time for consistency.
C) Computation time depends on a number of constraints and variables.
True, In CSP Computation time depends on a number of constraints and variables.
D) Iterative min-conflicts algorithms can solve 10000-Queens.
True, Iterative min-conflicts algorithms can solve 10000-Queens.
Hence the correct answer is C and D only.
Approaches to AI Question 6:
Find the utility(max value) of a root node value of the game-tree after applying the min-max algorithm?
Answer (Detailed Solution Below)
Option 2 : 2
Approaches to AI Question 6 Detailed Solution
The correct answer is option 2.
Key Points
Min-max algorithm works like a backtracking algorithm. Max will try to maximize its utility(best move). Min will try to minimize its utility(worst move).
Hence the correct answer is 2.
Approaches to AI Question 7:
Consider the following minimax game tree search
What will be the value propagated at the root?
Answer (Detailed Solution Below)
Option 3 : 5
Approaches to AI Question 7 Detailed Solution
From the above game tree, it is clear the value propagated at the root is 5.
Approaches to AI Question 8:
Consider the following statements related to AND-OR Search algorithm.
S1: A solution is a subtree that has a goal node at every leaf.
S2: OR nodes are analogous to the branching in a deterministic environment
S3: AND nodes are analogous to the branching in a non-deterministic environment.
Which one of the following is true referencing the above statements?
Choose the correct answer from the code given below:
Answer (Detailed Solution Below)
Option 2 : S1- True, S2- True, S3- True
Approaches to AI Question 8 Detailed Solution
AND- OR search algorithm:
AND OR search algorithm is different from other algorithms in the way that it finds goal node at each state in order to reach the final goal state. It works in non-deterministic environment.
And OR search algorithm works in the following way:
1) It has to find a solution tree of subtrees.
2) Solution tree contains the root of the subtrees.
3) If a non-terminal AND node is in subtree and solution tree then all its children are in solution tree.
4) If a non-terminal OR node is in subtree and solution tree then exactly one of its children are in solution tree.
AND OR search tree may contain nodes that root identical subtrees (sub problems with identical optimal solutions) which can be unified. When unifiable nodes are merged, search tree becomes a graph
and its size becomes smaller. | {"url":"https://testbook.com/objective-questions/te/mcq-on-approaches-to-ai--5eea6a0e39140f30f369e526","timestamp":"2024-11-07T22:08:56Z","content_type":"text/html","content_length":"403220","record_id":"<urn:uuid:9785ab6e-d796-47f3-91b2-2a4565bb3263>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00140.warc.gz"} |
Algorithms and
Algorithms and Datastructures - Conditional Course
Winter Term 2024/25
Fabian Kuhn, TA Gustav Schmid
Course Description
This lecture revolves around the design and analysis of algorithms. We will discuss the concepts and principles of a selection of the very basic but most commonly used algorithms and datastructures.
Topics will include for example: Sorting, searching, hashing, search-trees, (priority-)queues and graphalgorithms (like shortest paths, spanning trees, breadth-first- and depth-first-searches).
There will not be a session on the 16th of October!
There will be an introductory session on Wednesday, 23th of October 12:15 - 14:00. The session will take place in presence in Room SR 00-010/14 (G.-Koehler-Allee 101, ground floor).
The lecture will be in the flipped classroom format, meaning that there will be a pre-recorded lecture videos combined with an interactive exercise lesson. The interactive session takes places on
Wednesdays 12:15 - 14:00 in presence in room SR 00-010/14 (G.-Koehler-Allee 101, ground floor).
The recorded lectures and corresponding slides are available on a separate page. Slides & Recordings
Forum: Zulip
We offer an instant messaging platform (Zulip) for all students to discuss all topics related to this lecture, where you are free to discuss among yourself or pose questions to us.
Most of the communication will happen over Zulip so it is highly recommended you sign up for Zulip and regularly check for updates.
You must be either inside the eduroam network or be connected to the university network via a VPN to access the Link!
For any additional questions or troubleshooting please feel free to contact the Teaching Assistant of the course schmidg@informatik.uni-freiburg.de.
In the exam we expect students to go beyond the algorithms that they have seen in the lecture. Instead we will ask students to come up with algorithms for new problems and have the students give
mathematical arguments why their algorithm is both correct and efficient. The only real way to learn this skill is to participate in the lectures and do the exercises.
We will update exact information about the exam here in this website once we know it. We will have a written exam!
• Type of exam: Written Exam (changed from last term!)
• Date: 12.03.2025
• Time and Room: 14:00, room still to be determined
• Duration: 180 Minutes
• Allowed Material: The exam will be written on paper and so you will need a non erasable pen. Additionally we allow 6 A4-pages of hand written notes. (either 3 sheets of A4 paper with text on both
sides, or 6 sheets of A4 paper with text only on a single side). We do not allow any form of electronic devices (so no calculator).
The above info is subject to change, so please check back here a few weeks before the actual exam.
There will be theoretical and programming exercises, designed to teach you the algorithms and methods discussed in the lecture. The goal of this lecture is for you to be able to design algorithms
yourself and prove guarrantees about these algorithms. This skill is not easy to come by and just reading a book will not get you there. Just implementing a few of these algorithms in the programming
language of the month will not do. Instead the goal is to think ahead of time how an algorithm is supposed to behave, irrespective of the actual implementation in a certain language on a certain
system. To learn this one has to actively sit down and invest some time to practice this! Actively participating in the exercises and working through the provided feedback is the best way to prepare
for the exam!
Topic Due (12 pm) Exercise Solution
Introduction 30.10. exercise-00, solution 00
Sorting 06.11. exercise-01, QuickSort.py
Big O Notation 06.11. exercise-02
Submission of Exercises
Handing in exercises is voluntary, but we highly recommend doing it. If you want feedback to your solutions, you must submit them by Wednesday, 12 pm.
The submission is simply via Zulip: create a .zip archive containing all of your solution files and send that archive as a private Message to the TA.
Programming exercises should be solved using Python and handed in as .py files (inside the .zip file).
Solutions to theoretical exercises can be written in Latex (preferred), Word (or similar text programs) or handwritten scans which must be well readable. Its also possible to hand in written exercise
sheets at the beginning of the lecture. For hand written solutions, either scanned or handed in, we expect that solutions are written up clearly on a separate piece of paper with proper structure.
• Introduction to Algorithms (3rd edition); T. Cormen, C. Leiserson, R. Rivest, C. Stein; MIT Press, 2009
• Algorithms and Data Structures; K. Mehlhorn und P. Sanders; Springer, 2008, available online
• Lectures on MIT Courseware:
Introduction to Algorithms 2005 und Introduction to Algorithms 2011 | {"url":"https://ac.informatik.uni-freiburg.de/teaching/ws24_25/ad-conditional.php","timestamp":"2024-11-08T17:30:51Z","content_type":"application/xhtml+xml","content_length":"12632","record_id":"<urn:uuid:e4d0c9ea-491a-4d01-a1be-826c17860f7b>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00786.warc.gz"} |
#25Fundamentals_Navier-Stokes equation
• Get link
• Facebook
• Twitter
• Pinterest
• Email
• Other Apps
(It seems a little bit awkward for me to consider Navier-Stokes equation as fundamentals.....XD")
Navier-Stokes equation is one of the most important equations in fluid mechanics. From the simulation of weather to the current in blood vessels, every complex problem related to fluid requires it to
be solved or approximated. However, because the solution of Navier-Stokes equations are usually very complex or even unsolvable currently, most of the time we will rely on numerical simulation to
finish the task. Today we will focus on the physical meaning of Navier-Stokes equation, which is basically the Newtonian second law used to describe the change in flow field.
Navier-Stokes equation可說是流體力學中最重要的方程式之一,舉凡天氣、洋流、飛機感受到的氣流、血管中的血流等,要對他們進行模擬與計算,最基本的統馭方程大概都是Navier-Stokes equation或他的化簡變形版本。
但因為這個方程式太過複雜,只有非常化簡的情形可以人工計算,大部分的時候都是仰賴數值模擬。克雷數學研究所(Clay Mathematics Institute,CMI)於2000年公告的七個千禧年大獎難題中,就有一個是關於Navier-Stokes
equation。今天我們要來和大家解釋Navier-Stokes equation中每個項所代表的物理意義,本質上他就是牛頓第二運動定律用來描述流場的變化而已。 Firstly, we shall consider the acceleration of a fluid element.
Since the location of the fluid element would also change with time, we have to account both the changes with time and the changes with space to fully consider its acceleration. For simplicity, we
should consider motion in 1 dimension first. Assume that the fluid element move Δy in time Δt, then its change in velocity could be written as: 首先我們要先考慮一個單位流體(fluid element)的加速度。由
So we can write down the acceleration as:
And the motion in 3 dimensions would then be:
This could be simplified by some vector calculus:
The former term in the RHS is the change in flow field with respect to time while the latter term is the change in flow field with respect to space.
Recall that the Newtonian second law told us "
" If we could write down the acceleration of a mass, we could know how much forces the fluid element is enduring. Here we shall consider body force, pressure force and viscous force.
還記得牛頓第二運動定律告訴我們F=ma,一旦我們知道加速度應該如何表達之後,我們就要來看看單位流體到底受了哪些力。在這裡我們要考慮實體力(body force)、壓力(pressure force)和黏滯力(viscous force)。
The simplest term of the three is the body force, which means forces only related to the volume of the fluid element, rather than to the surface area of it. Classic examples include gravitational
forces and electrostatic forces. Assume that the fluid has a density of
ρ, we could write down the body force as:
首先最簡單的是body force,他是只和單位流體的體積有關的力,而和流體的表面無關。例如重力、靜電力等。假設流體的密度為
ρ,我們可以簡單把body force表示成:
Sometimes this term could be ignored all at once if we are discuss a system which limits the motion of fluid in a horizontal plane.
Let's move on to the effect of pressure force. We shall focus on y direction as we just did previously. From the figure shown above, the net pressure force in y direction could be written as:
So this term would become something like this if we consider all 3 dimensions:
The last one would be the viscous force. According to the hypothesis proposed by Newton, viscous force could be calculated as viscosity times the gradient of fluid speed. Therefore, let's first
consider the viscous force generated from the change of
z with respect to y direction:
If we consider the changes in both side of the fluid element, the net viscous force would be:
Then we further consider the changes of
z with respect to 3 directions, x, y, and z, and we could write down the net viscous force as:
Finally we consider the changes of
y and
z, then we will get:
最後同時考慮x, y, z三個方向的話,我們就可以寫出
Add all these stuffs together and we will get our Navier-Stokes equation:
所以把所有東西兜在一起,我們就可以得到Navier-Stokes equation:
We may use these terms to perform dimensional analysis, or we may compare the effect of these terms to determine which one we could ignore. Please make sure you fully understand the meaning of each
terms before you move on. Stay tuned!
• Get link
• Facebook
• Twitter
• Pinterest
• Email
• Other Apps | {"url":"http://www.threeminutebiophysics.com/2016/03/25navier-stokes-equation.html","timestamp":"2024-11-07T23:44:21Z","content_type":"application/xhtml+xml","content_length":"134838","record_id":"<urn:uuid:6c839bab-82b4-4a6f-8081-18574b3baf73>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00119.warc.gz"} |
Hermite polynomials
In mathematics, the Hermite polynomials are a classical orthogonal polynomial sequence that arise in probability, such as the Edgeworth series; in combinatorics, as an example of an Appell
sequence, obeying the umbral calculus; in numerical analysis as Gaussian quadrature; and in physics, where they give rise to the eigenstates of the quantum harmonic oscillator. They are also used
in systems theory in connection with nonlinear operations on Gaussian noise. They are named after Charles Hermite (1864)^[1] although they were studied earlier by Laplace (1810) and Chebyshev
There are two different standard ways of normalizing Hermite polynomials:
$(1)\ \ {\mathit{He}}_n(x)=(-1)^n e^{x^2/2}\frac{d^n}{dx^n}e^{-x^2/2}\,\!$
(the "probabilists' Hermite polynomials"), and
$(2)\ \ H_n(x)=(-1)^n e^{x^2}\frac{d^n}{dx^n}e^{-x^2}=e^{x^2/2}\bigg (x-\frac{d}{dx} \bigg )^n e^{-x^2/2}\,\!$
(the "physicists' Hermite polynomials"). These two definitions are not exactly equivalent; either is a rescaling of the other, to wit
$H_n(x) = 2^{n/2}{\mathit{He}}_n(\sqrt{2}\,x).\,\!$
These are Hermite polynomial sequences of different variances; see the material on variances below.
The notation He and H is that used in the standard references Tom H. Koornwinder, Roderick S. C. Wong, and Roelof Koekoek et al. (2010) and Abramowitz & Stegun. The polynomials He[n] are
sometimes denoted by H[n], especially in probability theory, because
is the probability density function for the normal distribution with expected value 0 and standard deviation 1.
The first eleven probabilists' Hermite polynomials are:
and the first eleven physicists' Hermite polynomials are:
H[n] is a polynomial of degree n. The probabilists' version He has leading coefficient 1, while the physicists' version H has leading coefficient 2^n.
H[n](x) and He[n](x) are nth-degree polynomials for n = 0, 1, 2, 3, .... These polynomials are orthogonal with respect to the weight function (measure)
$w(x) = \mathrm{e}^{-x^2/2}\,\!$ (He)
$w(x) = \mathrm{e}^{-x^2}\,\!$ (H)
i.e., we have
$\int_{-\infty}^\infty H_m(x) H_n(x)\, w(x) \, \mathrm{d}x = 0$
when m ≠ n. Furthermore,
$\int_{-\infty}^\infty {\mathit{He}}_m(x) {\mathit{He}}_n(x)\, \mathrm{e}^{-x^2/2} \, \mathrm{d}x = \sqrt{2 \pi} n! \delta_{nm}$ (probabilist)
$\int_{-\infty}^\infty H_m(x) H_n(x)\, \mathrm{e}^{-x^2}\, \mathrm{d}x = \sqrt{ \pi} 2^n n! \delta_{nm}$ (physicist).
The probabilist polynomials are thus orthogonal with respect to the standard normal probability density function.
The Hermite polynomials (probabilist or physicist) form an orthogonal basis of the Hilbert space of functions satisfying
$\int_{-\infty}^\infty\left|f(x)\right|^2\, w(x) \, \mathrm{d}x <\infty,$
in which the inner product is given by the integral including the Gaussian weight function w(x) defined in the preceding section,
$\langle f,g\rangle=\int_{-\infty}^\infty f(x)\overline{g(x)}\, w(x) \, \mathrm{d}x.$
An orthogonal basis for L^2(R, w(x) dx) is a complete orthogonal system. For an orthogonal system, completeness is equivalent to the fact that the 0 function is the only function ƒ ∈ L^2(R, w(x)
dx) orthogonal to all functions in the system. Since the linear span of Hermite polynomials is the space of all polynomials, one has to show (in physicist case) that if ƒ satisfies
$\int_{-\infty}^\infty f(x) x^n \mathrm{e}^{- x^2} \, \mathrm{d}x = 0$
for every n ≥ 0, then ƒ = 0. One possible way to do it is to see that the entire function
$F(z) = \int_{-\infty}^\infty f(x) \, \mathrm{e}^{z x - x^2} \, \mathrm{d}x = \sum_{n=0}^\infty \frac{z^n}{n!}\int f(x) x^n \mathrm{e}^{- x^2} \, \mathrm{d}x = 0$
vanishes identically. The fact that F(it) = 0 for every t real means that the Fourier transform of ƒ(x) exp(−x^2) is 0, hence ƒ is 0 almost everywhere. Variants of the above completeness proof
apply to other weights with exponential decay. In the Hermite case, it is also possible to prove an explicit identity that implies completeness (see "Completeness relation" below).
An equivalent formulation of the fact that Hermite polynomials are an orthogonal basis for L^2(R, w(x) dx) consists in introducing Hermite functions (see below), and in saying that the Hermite
functions are an orthonormal basis for L^2(R).
Hermite's differential equation
The probabilists' Hermite polynomials are solutions of the differential equation
$(e^{-x^2/2}u')' + \lambda e^{-x^2/2}u = 0$
where λ is a constant, with the boundary conditions that u should be polynomially bounded at infinity. With these boundary conditions, the equation has solutions only if λ is a non-negative
integer, and up to an overall scaling, the solution is uniquely given by u(x) = H[λ](x). Rewriting the differential equation as an eigenvalue problem
L[u] = u'' − xu' = − λu
solutions are the eigenfunctions of the differential operator L. This eigenvalue problem is called the Hermite equation, although the term is also used for the closely related equation
u'' − 2xu' = − 2λu
whose solutions are the physicists' Hermite polynomials.
With more general boundary conditions, the Hermite polynomials can be generalized to obtain more general analytic functions H[λ](z) for λ a complex index. An explicit formula can be given in
terms of a contour integral (Courant & Hilbert 1953).
Recursion relation
The sequence of Hermite polynomials also satisfies the recursion
${\mathit{He}}_{n+1}(x)=x{\mathit{He}}_n(x)-{\mathit{He}}_n'(x).\,\!$ (probabilist)
$H_{n+1}(x)=2 xH_n(x)-H_n'(x).\,\!$ (physicist)
The Hermite polynomials constitute an Appell sequence, i.e., they are a polynomial sequence satisfying the identity
${\mathit{He}}_n'(x)=n{\mathit{He}}_{n-1}(x),\,\!$ (probabilist)
$H_n'(x)=2nH_{n-1}(x),\,\!$ (physicist)
or equivalently,
${\mathit{He}}_n(x+y)=\sum_{k=0}^n{n \choose k}x^{n-k} {\mathit{He}}_{k}(y)$ (probabilist)
$H_n(x+y)=\sum_{k=0}^n{n \choose k}H_{k}(x) (2y)^{(n-k)}= 2^{-\frac n 2}\cdot\sum_{k=0}^n {n \choose k} H_{n-k}\left(x\sqrt 2\right) H_k\left(y\sqrt 2\right).$ (physicist)
(the equivalence of these last two identities may not be obvious, but its proof is a routine exercise).
It follows that the Hermite polynomials also satisfy the recurrence relation
${\mathit{He}}_{n+1}(x)=x{\mathit{He}}_n(x)-n{\mathit{He}}_{n-1}(x),\,\!$ (probabilist)
$H_{n+1}(x)=2xH_n(x)-2nH_{n-1}(x).\,\!$ (physicist)
These last relations, together with the initial polynomials H[0](x) and H[1](x), can be used in practice to compute the polynomials quickly.
${\mathit{He}}_n(x)^2 - {\mathit{He}}_{n-1}(x){\mathit{He}}_{n+1}(x)= (n-1)!\cdot \sum_{i=0}^{n-1}\frac{2^{n-i}}{i!}{\mathit{He}}_i(x)^2>0.$
Moreover, the following multiplication theorem holds:
${\mathit{H}}_n(\gamma x)=\sum_{i=0}^{\lfloor n/2 \rfloor} \gamma^{n-2i}(\gamma^2-1)^i {n \choose 2i} \frac{(2i)!}{i!}{\mathit{H}}_{n-2i}(x).$
Explicit expression
The physicists' Hermite polynomials can be written explicitly as
$H_n(x) = n! \sum_{\ell = 0}^{n/2} \frac{(-1)^{n/2 - \ell}}{(2\ell)! (n/2 - \ell)!} (2x)^{2\ell}$
for even values of n and
$H_n(x) = n! \sum_{\ell = 0}^{(n-1)/2} \frac{(-1)^{(n-1)/2 - \ell}}{(2\ell + 1)! ((n-1)/2 - \ell)!} (2x)^{2\ell + 1}$
for odd values of n. These two equations may be combined into one using the floor function:
$H_n(x) = n! \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{(-1)^m}{m!(n - 2m)!} (2x)^{n - 2m}.$
The probabilists' Hermite polynomials He have similar formulas, which may be obtained from these by replacing the power of 2x with the corresponding power of (√2)x, and multiplying the entire sum
by 2^-n/2.
Generating function
The Hermite polynomials are given by the exponential generating function
$\exp (xt-t^2/2) = \sum_{n=0}^\infty {\mathit{He}}_n(x) \frac {t^n}{n!}\,\!$ (probabilist)
$\exp (2xt-t^2) = \sum_{n=0}^\infty H_n(x) \frac {t^n}{n!}\,\!$ (physicist).
This equality is valid for all x, t complex, and can be obtained by writing the Taylor expansion at x of the entire function z → exp(−z^2) (in physicist's case). One can also derive the
(physicist's) generating function by using Cauchy's Integral Formula to write the Hermite polynomials as
$H_n(x)=(-1)^n e^{x^2}\frac{d^n}{dx^n}e^{-x^2}= (-1)^n e^{x^2}{n! \over 2\pi i} \oint_\gamma {e^{-z^2} \over (z-x)^{n+1}}\, dz.\,\!$
Using this in the sum $\sum_{n=0}^\infty H_n(x) \frac {t^n}{n!}\,\!$, one can evaluate the remaining integral using the calculus of residues and arrive at the desired generating function.
Expected value
If X is a random variable with a normal distribution with standard deviation 1 and expected value μ then
$E({\mathit{He}}_n(X))=\mu^n.\,\!$ (probabilist)
Asymptotic expansion
Asymptotically, as n tends to infinity, the expansion
$e^{-\frac{x^2}{2}}\cdot H_n(x) \sim \frac{2^n}{\sqrt \pi}\Gamma\left(\frac{n+1}2\right) \cos \left(x \sqrt{2 n}- n\frac \pi 2 \right)$ (physicist^[3])
holds true. For certain cases concerning a wider range of evaluation, it is necessary to include a factor for changing amplitude
$e^{-\frac{x^2}{2}}\cdot H_n(x) \sim \frac{2^n}{\sqrt \pi}\Gamma\left(\frac{n+1}2\right) \cos \left(x \sqrt{2 n}- n\frac \pi 2 \right)\left(1-\frac{x^2}{2n}\right)^{-\frac{1}{4}}=\frac{2 \
Gamma\left(n\right)}{\Gamma\left(\frac{n}2\right)} \cos \left(x \sqrt{2 n}- n\frac \pi 2 \right)\left(1-\frac{x^2}{2n}\right)^{-\frac{1}{4}}$
Which, using Stirling's approximation, can be further simplified, in the limit, to
$e^{-\frac{x^2}{2}}\cdot H_n(x) \sim \left(\frac{2 n}{e}\right)^{\frac{n}{2}} {\sqrt 2} \cos \left(x \sqrt{2 n}- n\frac \pi 2 \right)\left(1-\frac{x^2}{2n}\right)^{-\frac{1}{4}}$
This expansion is needed to resolve the wave-function of a quantum harmonic oscillator such that it agrees with the classical approximation in the limit of the correspondence principle.
A finer approximation^[4], which takes into account the uneven spacing of the zeros near the edges, makes use of the substitution $x=\sqrt{ 2n+1}\cos(\phi)$, for $0<\epsilon\leq\phi\leq\pi-\
epsilon$, with which one has the uniform approximation
$e^{-\frac{x^2}{2}}\cdot H_n(x) = 2^{n/2+\frac{1}{4}}\sqrt{n!}(\pi n)^{-1/4}(\sin \phi)^{-1/2} \cdot \left[\sin\left(\left(\frac{n}{2}+\frac{1}{4}\right)\left(\sin(2\phi)-2\phi\right) +\frac
{3\pi}{4}\right)+O(n^{-1}) \right].$
Similar approximations hold for the monotonic and transition regions. Specifically, if $x=\sqrt{2n+1} \cosh(\phi)$ for $0<\epsilon\leq\phi\leq\omega<\infty$ then
$e^{-\frac{x^2}{2}}\cdot H_n(x) = 2^{n/2-\frac{3}{4}}\sqrt{n!}(\pi n)^{-1/4}(\sinh \phi)^{-1/2} \cdot \exp\left(\left(\frac{n}{2}+\frac{1}{4}\right)\left(2\phi-\sinh(2\phi)\right)\right)\left
[1+O(n^{-1}) \right],$
while for $x=\sqrt{2n+1}-2^{-1/2}3^{-1/3}n^{-1/6}t$ with t complex and bounded then
$e^{-\frac{x^2}{2}}\cdot H_n(x) =\pi^{1/4}2^{n/2+\frac{1}{4}}\sqrt{n!} n^{-1/12}\left[ \mathrm{Ai}(-3^{-1/3}t)+ O(n^{-2/3}) \right]$
where Ai(t) is the Airy function of the first kind.
Relations to other functions
Laguerre polynomials
The Hermite polynomials can be expressed as a special case of the Laguerre polynomials.
$H_{2n}(x) = (-4)^{n}\,n!\,L_{n}^{(-1/2)}(x^2)=4^n\, n! \sum_{i=0}^n (-1)^{n-i} {n-\frac{1}{2} \choose n-i} \frac{x^{2i}}{i!}\,\!$ (physicist)
$H_{2n+1}(x) = 2(-4)^{n}\,n!\,x\,L_{n}^{(1/2)}(x^2)=2\cdot 4^n\, n! \sum_{i=0}^n (-1)^{n-i} {n+\frac{1}{2} \choose n-i} \frac{x^{2i+1}}{i!}\,\!$ (physicist)
Relation to confluent hypergeometric functions
The Hermite polynomials can be expressed as a special case of the parabolic cylinder functions.
$H_{n}(x) = 2^n\,U\left(-\frac{n}{2},\frac{1}{2};x^2\right)$ (physicist)
where U(a,b;z) is Whittaker's confluent hypergeometric function. Similarly,
$H_{2n}(x) = (-1)^{n}\,\frac{(2n)!}{n!} \,_1F_1\left(-n,\frac{1}{2};x^2\right)$ (physicist)
$H_{2n+1}(x) = (-1)^{n}\,\frac{(2n+1)!}{n!}\,2x \,_1F_1\left(-n,\frac{3}{2};x^2\right)$ (physicist)
where $\,_1F_1(a,b;z)=M(a,b;z)$ is Kummer's confluent hypergeometric function.
Differential operator representation
The probabilists' Hermite polynomials satisfy the identity
where D represents differentiation with respect to x, and the exponential is interpreted by expanding it as a power series. There are no delicate questions of convergence of this series when it
operates on polynomials, since all but finitely many terms vanish.
Since the power series coefficients of the exponential are well known, and higher order derivatives of the monomial x^n can be written down explicitly, this differential operator representation
gives rise to a concrete formula for the coefficients of H[n] that can be used to quickly compute these polynomials.
Since the formal expression for the Weierstrass transform W is e^D^2, we see that the Weierstrass transform of (√2)^nHe[n](x/√2) is x^n. Essentially the Weierstrass transform thus turns a series
of Hermite polynomials into a corresponding Maclaurin series.
The existence of some formal power series g(D), with nonzero constant coefficient, such that He[n](x) = g(D)x^n, is another equivalent to the statement that these polynomials form an Appell
sequence. Since they are an Appell sequence they are a fortiori a Sheffer sequence.
Contour integral representation
The Hermite polynomials have a representation in terms of a contour integral, as
${\mathit{He}}_n(x)=\frac{n!}{2\pi i}\oint\frac{e^{tx-t^2/2}}{t^{n+1}}\,dt$ (probabilist)
$H_n(x)=\frac{n!}{2\pi i}\oint\frac{e^{2tx-t^2}}{t^{n+1}}\,dt$ (physicist)
with the contour encircling the origin.
The (probabilists') Hermite polynomials defined above are orthogonal with respect to the standard normal probability distribution, whose density function is
which has expected value 0 and variance 1. One may speak of Hermite polynomials
of variance α, where α is any positive number. These are orthogonal with respect to the normal probability distribution whose density function is
They are given by
${\mathit{He}}_n^{[\alpha]}(x) = \alpha^{-n/2}He_n^{[1]}\left(\frac{x}{\sqrt{\alpha}}\right)= (2 \alpha)^{-n/2} H_n( \frac{x}{\sqrt{2 \alpha}}) = e^{-\alpha D^2/2}x^n.\,\!$
In particular, the physicists' Hermite polynomials are
${\mathit{He}}_n^{[\alpha]}(x)=\sum_{k=0}^n h^{[\alpha]}_{n,k}x^k\,\!$
then the polynomial sequence whose nth term is
$\left({\mathit{He}}_n^{[\alpha]}\circ {\mathit{He}}^{[\beta]}\right)(x)=\sum_{k=0}^n h^{[\alpha]}_{n,k}\,{\mathit{He}}_k^{[\beta]}(x)\,\!$
is the umbral composition of the two polynomial sequences, and it can be shown to satisfy the identities
$\left({\mathit{He}}_n^{[\alpha]}\circ {\mathit{He}}^{[\beta]}\right)(x)={\mathit{He}}_n^{[\alpha+\beta]}(x)\,\!$
${\mathit{He}}_n^{[\alpha+\beta]}(x+y)=\sum_{k=0}^n{n\choose k}{\mathit{He}}_k^{[\alpha]}(x) {\mathit{He}}_{n-k}^{[\beta]}(y).\,\!$
The last identity is expressed by saying that this parameterized family of polynomial sequences is a cross-sequence.
"Negative variance"
Since polynomial sequences form a group under the operation of umbral composition, one may denote by
the sequence that is inverse to the one similarly denoted but without the minus sign, and thus speak of Hermite polynomials of negative variance. For α > 0, the coefficients of He[n]^[−α](x) are
just the absolute values of the corresponding coefficients of He[n]^[α](x).
These arise as moments of normal probability distributions: The nth moment of the normal distribution with expected value μ and variance σ^2 is
where X is a random variable with the specified normal distribution. A special case of the cross-sequence identity then says that
$\sum_{k=0}^n {n\choose k}{\mathit{He}}_k^{[\alpha]}(x) {\mathit{He}}_{n-k}^{[-\alpha]}(y)={\mathit{He}}_n^{[0]}(x+y)=(x+y)^n.\,\!$
Hermite functions
One can define the Hermite functions from the physicists' polynomials:
$\psi_n(x) = (2^n n! \sqrt{\pi})^{-1/2} \mathrm{e}^{-x^2/2} H_n(x) = (-1)^n(2^n n! \sqrt{\pi})^{-1/2} \mathrm{e}^{x^2/2} \frac{d^n}{dx^n} \mathrm{e}^{-x^2}$
Since these functions contain the square root of the weight function, and have been scaled appropriately, they are orthonormal:
$\int_{-\infty}^\infty \psi_n(x)\psi_m(x)\, \mathrm{d}x = \delta_{n\,m}\,$
and form an orthonormal basis of L^2(R). This fact is equivalent to the corresponding statement for Hermite polynomials (see above).
The Hermite functions are closely related to the Whittaker function (Whittaker and Watson, 1962) $D_n(z)\,$:
$D_n(z) = (n! \sqrt{\pi})^{1/2} \psi_n(z/\sqrt{2}) = \pi^{-1/4} \sqrt{2} \mathrm{e}^{z^2/4} \frac{d^n}{dz^n} \mathrm{e}^{-z^2}$
and thereby to other parabolic cylinder functions. The Hermite functions satisfy the differential equation:
$\psi_n''(x) + (2n + 1 - x^2) \psi_n(x) = 0\,.$
This equation is equivalent to the Schrödinger equation for a harmonic oscillator in quantum mechanics, so these functions are the eigenfunctions.
Recursion relation
Following recursion relations of Hermite polynomials, the Hermite functions obey
$\psi_n'(x) = \sqrt{\frac{n}{2}}\psi_{n-1}(x) - \sqrt{\frac{n+1}{2}}\psi_{n+1}(x)$
Cramér's inequality
The Hermite functions satisfy the following bound due to Harald Cramér^[5]^[6]
$|\psi_n(x)| \le K$
for x real, where the constant K is less than 1.086435.
Hermite functions as eigenfunctions of the Fourier transform
The Hermite functions ψ[n](x) are a set of eigenfunctions of the continuous Fourier transform. To see this, take the physicist's version of the generating function and multiply by exp(−x^ 2/2).
This gives
$\exp (-x^2/2 + 2xt-t^2) = \sum_{n=0}^\infty \exp (-x^2/2) H_n(x) \frac {t^n}{n!}.\,\!$
Choosing the unitary representation of the Fourier transform, the Fourier transform of the left hand side is given by
\begin{align} \mathcal{F} \{ \exp (-x^2/2 + 2xt-t^2)\}(k) & {} = \frac{1}{\sqrt{2 \pi}}\int_{-\infty}^\infty \exp (-ixk)\exp (-x^2/2 + 2xt-t^2)\, \mathrm{d}x \\ & {} = \exp (-k^2/2 - 2kit+t^
2) \\ & {} = \sum_{n=0}^\infty \exp (-k^2/2) H_n(k) \frac {(-it)^n}{n!}. \end{align}
The Fourier transform of the right hand side is given by
$\mathcal{F} \left\{ \sum_{n=0}^\infty \exp (-x^2/2) H_n(x) \frac {t^n}{n!} \right\} = \sum_{n=0}^\infty \mathcal{F} \left \{ \exp(-x^2/2) H_n(x) \right\} \frac{t^n}{n!}. \,$
Equating like powers of t in the transformed versions of the left- and right-hand sides gives
$\mathcal{F} \left\{ \exp (-x^2/2) H_n(x) \right\} = (-i)^n \exp (-k^2/2) H_n(k). \,\!$
The Hermite functions ψ[n](x) are therefore an orthonormal basis of L^2(R) which diagonalizes the Fourier transform operator. In this case, we chose the unitary version of the Fourier transform,
so the eigenvalues are (−i)^ n.
Combinatorial interpretation of coefficients
In the Hermite polynomial H[n](x) of variance 1, the absolute value of the coefficient of x^k is the number of (unordered) partitions of an n-member set into k singletons and (n − k)/2
(unordered) pairs.
Completeness relation
The Christoffel–Darboux formula for Hermite polynomials reads
$\sum_{i=0}^n \frac{H_i(x) H_i(y)}{i!2^i}= \frac{1}{n!2^{n+1}}\frac{H_n(y)H_{n+1}(x)- H_n(x)H_{n+1}(y)}{x-y}.$
Moreover, the following identity holds in the sense of distributions^[7]
$\sum_{n=0}^\infty \psi_n (x) \psi_n (y)= \delta(x-y),$
where δ is the Dirac delta function, (ψ[n]) the Hermite functions, and δ(x − y) represents the Lebesgue measure on the line y = x in R^2, normalized so that its projection on the horizontal axis
is the usual Lebesgue measure. This distributional identity follows by letting u → 1 in Mehler's formula, valid when −1 < u < 1:
$E(x, y; u) := \sum_{n=0}^\infty u^n \, \psi_n (x) \, \psi_n (y) = \frac 1 {\sqrt{\pi (1 - u^2)}} \, \mathrm{exp} \left( - \frac{1 - u}{1 + u} \, \frac{(x + y)^2}{4} \,-\, \frac{1 + u}{1 - u}
\, \frac{(x - y)^2}{4}\right),$
which is often stated equivalently as
$\sum_{n=0}^\infty \frac{H_n(x)H_n(y)}{n!}\left(\frac u 2\right)^n= \frac 1 {\sqrt{1-u^2}} \mathrm{e}^{\frac{2u}{1+u}x y-\frac{u^2}{1-u^2}(x-y)^2}.$
The function (x, y) → E(x, y; u) is the density for a Gaussian measure on R^2 which is, when u is close to 1, very concentrated around the line y = x, and very spread out on that line. It follows
$\left\langle \left( \sum_{n=0}^\infty u^n \langle f, \psi_n \rangle \psi_n\right), g \right\rangle = \int\!\!\int E(x, y; u) f(x) \overline{g(y)} \, \mathrm{d}x \, \mathrm{d}y \rightarrow \
int f(x) \overline{g(x)} \, \mathrm{d} x = \langle f, g \rangle,$
when ƒ, g are continuous and compactly supported. This yields that ƒ can be expressed from the Hermite functions, as sum of a series of vectors in L^2(R), namely
$f = \sum_{n=0}^\infty \langle f, \psi_n \rangle \psi_n.$
In order to prove the equality above for E(x, y; u), the Fourier transform of Gaussian functions will be used several times,
$\rho \sqrt{\pi} \, \mathrm{e}^{-\rho^2 x^2 / 4} = \int \mathrm{e}^{isx- s^2/\rho^2}\, \mathrm{d}s, \quad \rho > 0.$
The Hermite polynomial is then represented as
$H_n(x) = (-1)^{n} \mathrm{e}^{x^2} \frac {\mathrm{d}^n}{\mathrm{d}x^n} \Bigl( \frac {1}{2\sqrt{\pi}} \int \mathrm{e}^{isx - s^2/4}\, \mathrm{d}s \Bigr) = (-1)^n \mathrm{e}^{x^2}\frac {1}{2\
sqrt{\pi}}\int (is)^n \, \mathrm{e}^{isx- s^2/4}\, \mathrm{d}s.$
With this representation for H[n](x) and H[n](y), one sees that
\begin{align}E(x, y; u) &= \sum_{n=0}^\infty \frac{u^n}{2^n n! \sqrt{\pi}} \, H_n(x) H_n(y) \, \mathrm{e}^{ - (x^2+y^2)/2} \\ & =\frac{\mathrm{e}^{(x^2+y^2)/2}}{4\pi\sqrt{\pi}}\int \!\! \int
\Bigl( \sum_{n=0}^\infty \frac{1}{2^n n! } (-ust)^n \Bigr) \, \mathrm{e}^{isx+ity - s^2/4 - t^2/4}\, \mathrm{d}s\,\mathrm{d}t \\ & =\frac{\mathrm{e}^{(x^2+y^2)/2} }{4\pi\sqrt{\pi}}\int \!\! \
int \mathrm{e}^{-ust/2} \, \mathrm{e}^{isx+ity - s^2/4 - t^2/4}\, \mathrm{d}s\,\mathrm{d}t,\end{align}
and this implies the desired result, using again the Fourier transform of Gaussian kernels after performing the substitution
$s = \frac{\sigma + \tau}{\sqrt 2},\qquad\qquad t = \frac{\sigma - \tau}{\sqrt 2}.$
See also
1. ^ C. Hermite: Sur un nouveau développment en série de fonctions C. R Acad. Sci. Paris 58 1864 93-100; Oeuvres II 293-303
2. ^ P.L.Chebyshev: Sur le développment des fonctions a une seule variable Bull. Acad. Sci. St. Petersb. I 1859 193-200;Oeuvres I 501-508
3. ^ Abramowitz, p. 508-510, 13.6.38 and 13.5.16
4. ^ Szegő 1939, 1955, p. 201
5. ^ Erdélyi et al. 1955, p. 207
□ Abramowitz, Milton; Stegun, Irene A., eds. (1965), "Chapter 22", Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, New York: Dover, pp. 773, ISBN
978-0486612720, MR0167642, http://www.math.sfu.ca/~cbm/aands/page_773.htm .
□ Courant, Richard; Hilbert, David (1953), Methods of Mathematical Physics, Volume I, Wiley-Interscience .
□ Erdélyi, Arthur; Magnus, Wilhelm; Oberhettinger, Fritz; Tricomi, Francesco G. (1955), Higher transcendental functions. Vol. II, McGraw-Hill (scan)
□ Fedoryuk, M.V. (2001), "Hermite functions", in Hazewinkel, Michiel, Encyclopaedia of Mathematics, Springer, ISBN 978-1556080104, http://eom.springer.de/H/h046980.htm .
□ Koornwinder, Tom H.; Wong, Roderick S. C.; Koekoek, Roelof; Swarttouw, René F. (2010), "Orthogonal Polynomials", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F. et al., NIST
Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0521192255, MR2723248, http://dlmf.nist.gov/18
□ Laplace, P.S. (1810), Mém. Cl. Sci. Math. Phys. Inst. France 58: 279–347
□ Suetin, P. K. (2001), "Hermite polynomials", in Hazewinkel, Michiel, Encyclopaedia of Mathematics, Springer, ISBN 978-1556080104, http://eom.springer.de/H/h047010.htm .
□ Szegő, Gábor (1939, 1955), Orthogonal Polynomials, American Mathematical Society
□ Wiener, Norbert (1958), The Fourier Integral and Certain of its Applications, New York: Dover Publications, ISBN 0-486-60272-9
□ Whittaker, E. T.; Watson, G. N. (1962), 4th Edition, ed., A Course of Modern Analysis, London: Cambridge University Press
□ Temme, Nico, Special Functions: An Introduction to the Classical Functions of Mathematical Physics, Wiley, New York, 1996
External links
□ Special hypergeometric functions
□ Polynomials
Wikimedia Foundation. 2010. | {"url":"https://en-academic.com/dic.nsf/enwiki/129262","timestamp":"2024-11-11T00:53:37Z","content_type":"text/html","content_length":"98614","record_id":"<urn:uuid:b785db6f-7ca4-485d-8590-3a0f1a0af1d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00621.warc.gz"} |
Physics and Astronomy Prelim Presentation - Nolan Miller - Department of Physics and Astronomy
UNC-CH Physics and Astronomy Thesis Proposal Presentation
Nolan Miller
“Ab initio calculations of hadronic quantities using lattice quantum chromodynamics”
Quantum chromodynamics (QCD) is the quantum field theory describing hadronic matter, which comprises the majority of the non-dark energy-density of our universe. But unlike quantum field theories
such as quantum electrodynamics, the coupling constant of QCD increases at low temperatures, making perturbative calculations using Feynman diagrams impossible. Instead we must resort to lattice QCD,
in which the infinite dimensional path integral over all of spacetime becomes a high dimensional integral restricted to a discretized, periodic box.
In my talk, I will summarize the properties of the lattice action used by my collaboration and present preliminary results obtained from using it. First I will present my results on scale setting,
which describes how we can convert a dimensionless (lattice) value into something dimensionful (physical). Next I will explain my calculation of the ratio of the kaon and pion pseudoscalar decay
constants, from which we can glean insight into the V_us matrix element of the CKM matrix. Then I will describe my fits of the hyperon mass spectrum, and in the process, I will explain how to
determine various low energy constants of the hyperon effective field theories from which the hyperon mass formulae were derived. In principle, such effective field theories could be used to make
predictions about physical processes (eg, hypeon-nucleon interactions in neutron stars) that are difficult or impossible to observe experimentally. Finally I will present some work on fitting the
nucleon axial charge, a fundamental property of nucleons which determines their rate of beta decay, and show how I intend expand on this work to calculate the hyperon axial charges. | {"url":"https://physics.unc.edu/event/physics-and-astronomy-prelim-presentation-nolan-miller/","timestamp":"2024-11-07T13:47:00Z","content_type":"text/html","content_length":"97116","record_id":"<urn:uuid:2c9dd2c4-7697-49f9-88d1-a8870eb548c4>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00196.warc.gz"} |
FMOD(3) Linux Programmer's Manual FMOD(3)
fmod, fmodf, fmodl - floating-point remainder function
#include <math.h>
double fmod(double x, double y);
float fmodf(float x, float y);
long double fmodl(long double x, long double y);
Link with -lm.
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
fmodf(), fmodl():
_ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L
|| /* Since glibc 2.19: */ _DEFAULT_SOURCE
|| /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE
These functions compute the floating-point remainder of dividing x by
y. The return value is x - n * y, where n is the quotient of x / y,
rounded toward zero to an integer.
On success, these functions return the value x - n*y, for some integer
n, such that the returned value has the same sign as x and a magnitude
less than the magnitude of y.
If x or y is a NaN, a NaN is returned.
If x is an infinity, a domain error occurs, and a NaN is returned.
If y is zero, a domain error occurs, and a NaN is returned.
If x is +0 (-0), and y is not zero, +0 (-0) is returned.
See math_error(7) for information on how to determine whether an error
has occurred when calling these functions.
The following errors can occur:
Domain error: x is an infinity
errno is set to EDOM (but see BUGS). An invalid floating-point
exception (FE_INVALID) is raised.
Domain error: y is zero
errno is set to EDOM. An invalid floating-point exception
(FE_INVALID) is raised.
For an explanation of the terms used in this section, see at-
|Interface | Attribute | Value |
|fmod(), fmodf(), fmodl() | Thread safety | MT-Safe |
C99, POSIX.1-2001, POSIX.1-2008.
The variant returning double also conforms to SVr4, 4.3BSD, C89.
Before version 2.10, the glibc implementation did not set errno to EDOM
when a domain error occurred for an infinite x.
This page is part of release 5.05 of the Linux man-pages project. A
description of the project, information about reporting bugs, and the
latest version of this page, can be found at
2017-09-15 FMOD(3)
Man Pages Copyright Respective Owners. Site Copyright (C) 1994 - 2024 Hurricane Electric. All Rights Reserved. | {"url":"http://man.he.net/man3/fmodf","timestamp":"2024-11-12T10:42:45Z","content_type":"text/html","content_length":"3813","record_id":"<urn:uuid:f0061c53-94f3-42a6-b2e2-1382f2cbc3c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00458.warc.gz"} |
uk spec auto prop shaft
uk auto prop shaft in very good condition for sale.
it was bought for my car, but after having the problem correctly diagnosed it wasnt the prop. thus the sale
someone make me an offer.
cheers baz
What was your problem ?
bloody rear wheels bearings.........the garage it went to first said that it wasnt them and that it could have been a prop problem. it was bought just in case.
thank god there are proper supra engineers out there.
anyone want it?
Were you getting a vibration is that what was the problem was ?
I may be intrested in your propshaft
she's all packed up and ready to go......just need a buyer
bump again. ;-)
Terry S was after one.
hi cj, he's pm'd me.......just waiting for a reply.
cheers baz
baz, check your PMs/emails mate
tried to contact you a couple of times but had no replies
still got this, anyone want it???
• 2 weeks later...
• 2 weeks later... | {"url":"https://www.mkivsupra.net/topic/47125-uk-spec-auto-prop-shaft/","timestamp":"2024-11-14T21:41:19Z","content_type":"text/html","content_length":"195526","record_id":"<urn:uuid:08b3bbec-4a15-4e4b-a82e-872f232d0ee9>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00857.warc.gz"} |
Researchers question AI’s ‘reasoning’ ability as models stumble on math problems with trivial changes
How do machine learning models do what they do? And are they really “thinking” or “reasoning” the way we understand those things? This is a philosophical question as much as a practical one, but a
new paper making the rounds Friday suggests that the answer is, at least for now, a pretty clear “no.”
A group of AI research scientists at Apple released their paper, “Understanding the limitations of mathematical reasoning in large language models,” to general commentary Thursday. While the deeper
concepts of symbolic learning and pattern reproduction are a bit in the weeds, the basic concept of their research is very easy to grasp.
Let’s say I asked you to solve a simple math problem like this one:
Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday. How many kiwis does Oliver have?
Obviously, the answer is 44 + 58 + (44 * 2) = 190. Though large language models are actually spotty on arithmetic, they can pretty reliably solve something like this. But what if I threw in a little
random extra info, like this:
Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many
kiwis does Oliver have?
It’s the same math problem, right? And of course even a grade-schooler would know that even a small kiwi is still a kiwi. But as it turns out, this extra data point confuses even state-of-the-art
LLMs. Here’s GPT-o1-mini’s take:
… on Sunday, 5 of these kiwis were smaller than average. We need to subtract them from the Sunday total: 88 (Sunday’s kiwis) – 5 (smaller kiwis) = 83 kiwis
This is just a simple example out of hundreds of questions that the researchers lightly modified, but nearly all of which led to enormous drops in success rates for the models attempting them.
Image Credits:Mirzadeh et al
Now, why should this be? Why would a model that understands the problem be thrown off so easily by a random, irrelevant detail? The researchers propose that this reliable mode of failure means the
models don’t really understand the problem at all. Their training data does allow them to respond with the correct answer in some situations, but as soon as the slightest actual “reasoning” is
required, such as whether to count small kiwis, they start producing weird, unintuitive results.
As the researchers put it in their paper:
[W]e investigate the fragility of mathematical reasoning in these models and demonstrate that their performance significantly deteriorates as the number of clauses in a question increases. We
hypothesize that this decline is due to the fact that current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training
This observation is consistent with the other qualities often attributed to LLMs due to their facility with language. When, statistically, the phrase “I love you” is followed by “I love you, too,”
the LLM can easily repeat that — but it doesn’t mean it loves you. And although it can follow complex chains of reasoning it has been exposed to before, the fact that this chain can be broken by even
superficial deviations suggests that it doesn’t actually reason so much as replicate patterns it has observed in its training data.
Mehrdad Farajtabar, one of the co-authors, breaks down the paper very nicely in this thread on X.
An OpenAI researcher, while commending Mirzadeh et al’s work, objected to their conclusions, saying that correct results could likely be achieved in all these failure cases with a bit of prompt
engineering. Farajtabar (responding with the typical yet admirable friendliness researchers tend to employ) noted that while better prompting may work for simple deviations, the model may require
exponentially more contextual data in order to counter complex distractions — ones that, again, a child could trivially point out.
Does this mean that LLMs don’t reason? Maybe. That they can’t reason? No one knows. These are not well-defined concepts, and the questions tend to appear at the bleeding edge of AI research, where
the state of the art changes on a daily basis. Perhaps LLMs “reason,” but in a way we don’t yet recognize or know how to control.
It makes for a fascinating frontier in research, but it’s also a cautionary tale when it comes to how AI is being sold. Can it really do the things they claim, and if it does, how? As AI becomes an
everyday software tool, this kind of question is no longer academic. | {"url":"https://wired24.co.za/2024/10/12/researchers-question-ais-reasoning-ability-as-models-stumble-on-math-problems-with-trivial-changes/","timestamp":"2024-11-14T18:27:39Z","content_type":"text/html","content_length":"67278","record_id":"<urn:uuid:afdf4b1c-fc6f-46a7-bd6c-ca464e10d6f9>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00131.warc.gz"} |
polls – Cosmic Reflections
The Great Courses offers a number of excellent courses on DVD (also streaming and audio only). Here are my favorite episodes. (Note: This is a work in progress and more entries will be added in the
Course No. 153
Einstein’s Relativity and the Quantum Revolution: Modern Physics for Non-Scientists, 2nd Edition – Richard Wolfson
Lecture 8 – Uncommon Sense—Stretching Time
“Why does the simple statement of relativity—that the laws of physics are the same for all observers in uniform motion—lead directly to absurd-seeming situations that violate our commonsense notions
of space and time?”
Lecture 9 – Muons and Time-Traveling Twins
“As a dramatic example of what relativity implies, you will consider a thought experiment involving a pair of twins, one of whom goes on a journey to the stars and returns to Earth younger than her
Lecture 12 – What about E=mc^2 and is Everything Relative?
“Shortly after publishing his 1905 paper on special relativity, Einstein realized that his theory required a fundamental equivalence between mass and energy, which he expressed in the equation E=mc^
2. Among other things, this famous formula means that the energy contained in a single raisin could power a large city for an entire day.”
Lecture 16 – Into the Heart of Matter
“With this lecture, you turn from relativity to explore the universe at the smallest scales. By the early 1900s, Ernest Rutherford and colleagues showed that atoms consist of a positively charged
nucleus surrounded by negatively charged electrons whirling around it. But Rutherford’s model could not explain all the observed phenomena.”
Lecture 19 – Quantum Uncertainty—Farewell to Determinism
“Quantization places severe limits on our ability to observe nature at the atomic scale because it implies that the act of observation disturbs that which is being observed. The result is Werner
Heisenberg’s famous Uncertainty Principle. What exactly does this principle say, and what are the philosophical implications?”
Lecture 21 – Quantum Weirdness and Schrödinger’s Cat
“Wave-particle duality gives rise to strange phenomena, some of which are explored in Schrödinger’s famous ‘cat in the box’ example. Philosophical debate on Schrödinger’s cat still rages.”
Course No. 158
My Favorite Universe – Neil deGrasse Tyson
Lecture 8 – In Defense of the Big Bang
“We now know without doubt how the universe began, how it evolved, and how it will end. This lecture explains and defends a “theory” far too often misunderstood.”
Course No. 415
The Will to Power: The Philosophy of Friedrich Nietzsche
Robert C. Solomon & Kathleen M. Higgins
Lecture 7 – Nietzsche and Schopenhauer on Pessimism
“Schopenhauer, the severe pessimist, is a looming presence in Nietzsche’s thought. Nietzsche felt the weight of Schopenhauer’s pessimism, and struggled to counter it by embracing “cheerfulness,”
creative passion, and an aesthetic viewpoint.”
Lecture 19 – The Ranking of Values – Morality and Modernity
“Why did Nietzsche refuse to think of values as being either objective or subjective? Why did he hold that values are earthly and culture- and species-specific? Why did he argue that, in the final
analysis, there are only healthy and unhealthy values, and that modern values are unhealthy?”
Lecture 22 – Resentment, Revenge, and Justice
“We continue our discussion of Nietzsche’s idea of resentment, adding to it his ideas about revenge and justice. We revisit his condemnation of asceticism, the self-denial that is often a part of
extreme religious practice, in light of these new ideas.”
Course No. 443
Power over People: Classical and Modern Political Theory – Dennis Dalton
Lecture 10 – Marx’s Critique of Capitalism and the Solution of Communism
“Karl Marx’s communism provided what is probably the best known ideal society. He blamed not only private property, but the entire institution of capitalism for the inequality and injustice in
society. His program has never been implemented, certainly not in the Soviet Union. Marx never advocated totalitarian or despotic rule. Although his historical determinism has been discredited, his
social criticism remains relevant. The democratic dilemma boils down to this: the more liberty, the less equality; and the more equality, the less liberty.”
Special Note: I will eventually be adding more of the episodes from this excellent course as I rewatch them. (I watched this series before I began keeping track of “best” episodes.)
Course No. 700
How to Listen to and Understand Great Music, 3rd Edition – Robert Greenberg
Lecture 23 – Classical-era Form—Sonata Form, Part 1
“In Lectures 23 and 24 we examine sonata-allegro form, but first, we observe the life and personality of the extraordinary Wolfgang Mozart. We discuss the many meanings and uses of the word “sonata.”
The fourth movement of Mozart’s Symphony in G Minor, K. 550, is analyzed and discussed in depth as an example.”
Special Note: I will eventually be adding more of the episodes from this excellent course as I rewatch them. (I watched this series before I began keeping track of “best” episodes.)
Course No. 730
Symphonies of Beethoven – Robert Greenberg
Lecture 11 – Symphony No. 3—The “New Path”—Heroism and Self-Expression, III
“Lectures 9 through 12 focus on Symphony No. 3, the Eroica Symphony. This key work in Beethoven’s compositional revolution resulted from his crisis of going deaf. Beethoven’s struggle with his
disability raised him to a new level of creativity. Symphony No. 3 parallels his heroic battle with and ultimate triumph over adversity. The symphony’s debt to Napoleon is discussed before an
Lecture 13 – Symphony No. 4—Consolidation of the New Aesthetic, I
“Lectures 13 through 16 examine Symphony No. 4 in historical context and in its relationship to opera buffa. Symphony No. 4 is the most infrequently heard of his symphonies. We see how it represents
a return to a Classical structure. Its framework is filled with iconoclastic rhythms, harmonies, and characteristic motivic developments that mark it as a product of Beethoven’s post-Eroica period.”
Lecture 23 – Symphony No. 7—The Symphony as Dance, I
Lecture 24 – Symphony No. 7—The Symphony as Dance, II
“Lectures 23 and 24 discuss Beethoven’s Symphony No. 7 with references to the historical and personal events surrounding its composition. The essence of the symphony is seen to be the power of
rhythm, and originality is seen to be an important artistic goal for Beethoven.”
Lecture 31 – Symphony No. 9—The Symphony as the World, IV
“The last five lectures are devoted to Symphony No. 9, the most influential Western musical composition of the 19th century and the most influential symphony ever written. We see how this work
obliterated distinctions between the instrumental symphony and dramatic vocal works such as opera. Also discussed are Beethoven’s fall from public favor in 1815, his disastrous relationship with his
nephew Karl, his artistic rebirth around 1820, his late compositions, and his death in 1827.”
Course No. 753
Great Masters: Tchaikovsky-His Life and Music – Robert Greenberg
Lecture 1 – Introduction and Early Life
“Tchaikovsky was an extremely sensitive child, obsessive about music and his mother. His private life was reflected to a rare degree in his music. His mother’s death when he was 14 years old was a
shattering experience for him—one that found poignant expression in his music.”
Lecture 6 – “My Great Friend“
“With the generous financial support of Nadezhda von Meck, Tchaikovsky lived abroad, and in 1878 resigned from the Moscow Conservatory to compose full time. His Fourth Symphony was premiered in
Moscow and was quickly followed by the brilliant Violin Concerto in D Major, which became a pillar of the repertoire within a few years.”
Course No. 754
Great Masters: Stravinsky-His Life and Music – Robert Greenberg
Lecture 2 – From Student to Professional
“Rimsky-Korsakov was so impressed with Stravinsky’s Piano Sonata in F♯ minor (1904) he agreed to take Stravinsky as a private student. In 1909, Stravinsky met the impresario Serge Diaghilev, who
commissioned Stravinsky to write a ballet on the folk tale The Firebird, which was followed by the ballet Petrushka, a great success. Stravinsky’s next score, The Rite of Spring, would become
arguably the most influential work of its time.”
Course No. 756
Great Masters: Mahler-His Life and Music – Robert Greenberg
Lecture 7 – Symphony No. 6, and Das Lied von der Erde
“Three events shattered the Mahlers’ lives in 1907: his resignation from the Royal Vienna Opera, the death of their elder daughter, and the diagnosis of his heart disease. In 1908, Mahler threw
himself into composing Das Lied von der Erde as an attempt to find solace from the grief of his daughter’s death. The work is a symphonic song cycle about loss, grief, memory, disintegration, and
Course No. 758
Great Masters: Liszt-His Life and Music – Robert Greenberg
Lecture 2 – A Born Pianist
“Liszt was surrounded by music from infancy and began to reveal his musical gifts at about age five. He stunned his teachers and, at his first performance at age 11, astonished reviewers and his
audience. When Liszt was 15, his father died, sending Franz into depression and apathy for three years. He was finally blasted out of his lethargy by the July Revolution of 1830.”
Lecture 7 – Rome
“By the 1850s, Liszt became the focal point of a debate concerning program music versus absolute music and expression versus structure. Twenty years before, Liszt and his fellow young Romantic
musicians had a common goal: to create a new music based on individual expression. As they grew older, many became conservative, but Liszt never lost his revolutionary spirit. But brokenhearted by
the death of his daughter, he turned to the Catholic Church to find solace.”
Course No. 759
Great Masters: Robert and Clara Schumann-Their Lives and Music – Robert Greenberg
Lecture 8 – Madness
“In Düsseldorf, Robert was inspired to write the Symphony No. 3 in E-flat Major, along with trios, sonatas, orchestral works, and pieces for chorus and voice and piano. Robert and Clara also met
Johannes Brahms there; he became a lifelong friend and source of strength for Clara. In 1854, Robert attempted to drown himself in the Rhine and was taken to an asylum. He died there two years later.
Clara managed to sustain the family through her concerts but was dealt even more pain by the early deaths of several of her children.”
Course No. 1012
Chemistry, 2nd Edition – Frank Cardulla
Lecture 5 – The SI (Metric) System of Measurement
“Next, we continue to lay a strong foundation for our understanding of chemistry by learning about one of the key tools we’ll be using: the International System of Units (SI), or the metric system.
This lecture explains why this system is so useful to scientists and lays out the prefixes and units of measurement that make up the metric system.”
Lecture 10 – The Mole
“One of the most important concepts to master in an introductory chemistry course is the concept of the mole, which provides chemists with a way to ‘count’ atoms and molecules. Learn how scientists
use the mole and explore the quantitative definition of this basic unit.”
Lecture 28 – The Self-Ionization of Water
“After examining how different substances may behave when dissolved in water, we learn about the self-ionization of water and use this knowledge to solve problems. The lecture ends with a brief
introduction to the pH of solutions.”
Lecture 29 – Strong Acids and Bases – General Properties
“We return to the topic of pH and learn about how pH relates to two kinds of compounds: acids and bases. Through an introductory problem, we explore the relationship of various ions within these
Course No. 1257
Mysteries of Modern Physics: Time – Sean Carroll
Lecture 10 – Playing with Entropy
“Sharpen your understanding of entropy by examining different macroscopic systems and asking, which has higher entropy and which has lower entropy? Also evaluate James Clerk Maxwell’s famous thought
experiment about a demon who seemingly defies the principle that entropy always increases.”
Lecture 15 – The Perception of Time
“Turn to the way humans perceive time, which can vary greatly from clock time. In particular, focus on experiments that shed light on our time sense. For example, tests show that even though we think
we perceive the present moment, we actually live 80 milliseconds in the past.”
Lecture 16 – Memory and Consciousness
“Remembering the past and projecting into the future are crucial for human consciousness, as shown by cases where these faculties are impaired. Investigate what happens in the brain when we remember,
exploring different kinds of memory and the phenomena of false memories and false forgetting.”
Lecture 20 – Black Hole Entropy
“Stephen Hawking showed that black holes emit radiation and therefore have entropy. Since the entropy in the universe today is overwhelmingly in the form of black holes and there were no black holes
in the early universe, entropy must have been much lower in the deep past.”
Lecture 21 – Evolution of the Universe
“Follow the history of the universe from just after the big bang to the far future, when the universe will consist of virtually empty space at maximum entropy. Learn what is well founded and what is
less certain about this picture of a universe winding down.”
Course No. 1280
Physics and Our Universe: How It All Works – Richard Wolfson
Lecture 1 – The Fundamental Science
“Take a quick trip from the subatomic to the galactic realm as an introduction to physics, the science that explains physical reality at all scales. Professor Wolfson shows how physics is the
fundamental science that underlies all the natural sciences. He also describes phenomena that are still beyond its explanatory power.”
Lecture 24 – The Ideal Gas
“Delve into the deep link between thermodynamics, which looks at heat on the macroscopic scale, and statistical mechanics, which views it on the molecular level. Your starting point is the ideal gas
law, which approximates the behavior of many gases, showing how temperature, pressure, and volume are connected by a simple formula.”
Lecture 44 – Cracks in the Classical Picture
“Embark on the final section of the course, which covers the revolutionary theories that superseded classical physics. Why did classical physics need to be replaced? Discover that by the late 19th
century, inexplicable cracks were beginning to appear in its explanatory power.”
Lecture 48 – Space-Time and Mass-Energy
“In relativity theory, contrary to popular views, reality is what’s not relative—that is, what doesn’t depend on one’s frame of reference. See how space and time constitute one such pair, merging
into a four-dimensional space-time. Mass and energy similarly join, related by Einstein’s famous E = mc^2.”
Special Note: This entire series is outstanding! I will eventually be adding many of the episodes of this course as I rewatch them. (I watched this series before I began keeping track of “best”
Course No. 1360
Introduction to Astrophysics – Joshua Winn
Lecture 5 – Newton’s Hardest Problem
“Continue your exploration of motion by discovering the law of gravity just as Newton might have—by analyzing Kepler’s laws with the aid of calculus (which Newton invented for the purpose). Look at a
graphical method for understanding orbits, and consider the conservation laws of angular momentum and energy in light of Emmy Noether’s theory that links conservation laws and symmetry.”
Lecture 10 – Optical Telescopes
“Consider the problem of gleaning information from the severely limited number of optical photons originating from astronomical sources. Our eyes can only do it so well, and telescopes have several
major advantages: increased light-gathering power, greater sensitivity of telescopic cameras and sensors such as charge-coupled devices (CCDs), and enhanced angular and spectral resolution.”
Lecture 11 – Radio and X-Ray Telescopes
“Non-visible wavelengths compose by far the largest part of the electromagnetic spectrum. Even so, many astronomers assumed there was nothing to see in these bands. The invention of radio and X-ray
telescopes proved them spectacularly wrong. Examine the challenges of detecting and focusing radio and X-ray light, and the dazzling astronomical phenomena that radiate in these wavelengths.”
Lecture 12 – The Message in a Spectrum
“Starting with the spectrum of sunlight, notice that thin dark lines are present at certain wavelengths. These absorption lines reveal the composition and temperature of the Sun’s outer atmosphere,
and similar lines characterize other stars. More diffuse phenomena such as nebulae produce bright emission lines against a dark spectrum. Probe the quantum and thermodynamic events implied by these
Lecture 13 – The Properties of Stars
“Take stock of the wide range of stellar luminosities, temperatures, masses, and radii using spectra and other data. In the process, construct the celebrated Hertzsprung–Russell diagram, with its
main sequence of stars in the prime of life, including the Sun. Note that two out of three stars have companions. Investigate the orbital dynamics of these binary systems.”
Lecture 15 – Why Stars Shine
“Get a crash course in nuclear physics as you explore what makes stars shine. Zero in on the Sun, working out the mass it has consumed through nuclear fusion during its 4.5-billion-year history.
While it’s natural to picture the Sun as a giant furnace of nuclear bombs going off non-stop, calculations show it’s more like a collection of toasters; the Sun is luminous simply because it’s so
Lecture 16 – Simple Stellar Models
“Learn how stars work by delving into stellar structure, using the Sun as a model. Relying on several physical principles and sticking to order-of-magnitude calculations, determine the pressure and
temperature at the center of the Sun, and the time it takes for energy generated in the interior to reach the surface, which amounts to thousands of years. Apply your conclusions to other stars.”
Lecture 17 – White Dwarfs
“Discover the fate of solar mass stars after they exhaust their nuclear fuel. The galaxies are teeming with these dim “white dwarfs” that pack the mass of the Sun into a sphere roughly the size of
Earth. Venture into quantum theory to understand what keeps these exotic stars from collapsing into black holes, and learn about the Chandrasekhar limit, which determines a white dwarf’s maximum
Lecture 18 – When Stars Grow Old
“Trace stellar evolution from two points of view. First, dive into a protostar and witness events unfold as the star begins to contract and fuse hydrogen. Exhausting that, it fuses heavier elements
and eventually collapses into a white dwarf—or something even denser. Next, view this story from the outside, seeing how stellar evolution looks to observers studying stars with telescopes.”
Lecture 19 – Supernovas and Neutron Stars
“Look inside a star that weighs several solar masses to chart its demise after fusing all possible nuclear fuel. Such stars end in a gigantic explosion called a supernova, blowing off outer material
and producing a super-compact neutron star, a billion times denser than a white dwarf. Study the rapid spin of neutron stars and the energy they send beaming across the cosmos.”
Lecture 20 – Gravitational Waves
“Investigate the physics of gravitational waves, a phenomenon predicted by Einstein and long thought to be undetectable. It took one of the most violent events in the universe—colliding black
holes—to generate gravitational waves that could be picked up by an experiment called LIGO on Earth, a billion light years away. This remarkable achievement won LIGO scientists the 2017 Nobel Prize
in Physics.”
Course No. 1434
The Queen of the Sciences: A History of Mathematics – David M. Bressoud
Lecture 2 – Babylonian and Egyptian Mathematics
“Egyptian and Mesopotamian mathematics were well developed by the time of the earliest records from the 2nd millennium B.C. Both knew how to find areas and volumes. The Babylonians solved quadratic
equations using geometric methods and knew the Pythagorean theorem.”
Lecture 5 – Astronomy and the Origins of Trigonometry
“The origins of trigonometry lie in astronomy, especially in finding the length of the chord that connects the endpoints of an arc of a circle. Hipparchus discovered a solution to this problem, that
was later refined by Ptolemy who authored the great astronomical work the Almagest.”
Lecture 6 – Indian Mathematics – Trigonometry Blossoms
“We journey through the Gupta Empire and the great period of Indian mathematics that lasted from A.D. 320 to 1200. Along the way, we explore the significant advances that occurred in trigonometry and
other mathematical fields.”
Lecture 14 – Leibniz and the Emergence of Calculus
“Independently of Newton, Gottfried Wilhelm Leibniz discovered the techniques of calculus in the 1670s, developing the notational system still used today.”
Lecture 15 – Euler – Calculus Proves Its Promise
“Leonhard Euler dominated 18th-century mathematics so thoroughly that his contemporaries believed he had solved all important problems.”
Lecture 19 – Modern Analysis – Fourier to Carleson
“By 1800, calculus was well established as a powerful tool for solving practical problems, but its logical underpinnings were shaky. We explore the creative mathematics that addressed this problem in
work from Joseph Fourier in the 19th century to Lennart Carleson in the 20th.”
Lecture 21 – Sylvester and Ramanujan – Different Worlds
“This lecture explores the contrasting careers of James Joseph Sylvester, who was instrumental in developing an American mathematical tradition, and Srinivasa Ramanujan, a poor college dropout from
India who produced a rich range of new mathematics during his short life.”
Lecture 22 – Fermat’s Last Theorem – The Final Triumph
“Pierre de Fermat’s enigmatic note regarding a proof that he didn’t have space to write down sparked the most celebrated search in mathematics, lasting more than 350 years. This lecture follows the
route to a proof, finally achieved in the 1990s.”
Lecture 23 – Mathematics – The Ultimate Physical Reality
“Mathematics is the key to realms outside our intuition. We begin with Maxwell’s equations and continue through general relativity, quantum mechanics, and string theory to see how mathematics enables
us to work with physical realities for which our experience fails us.”
Lecture 24 – Problems and Prospects for the 21st Century
“This last lecture introduces some of the most promising and important questions in the field and examines mathematical challenges from other disciplines, especially genetics.”
Course No. 1456
Discrete Mathematics – Arthur T. Benjamin
Lecture 8 – Linear Recurrences and Fibonacci Numbers
“Investigate some interesting properties of Fibonacci numbers, which are defined using the concept of linear recurrence. In the 13th century, the Italian mathematician Leonardo of Pisa, called
Fibonacci, used this sequence to solve a problem of idealized reproduction in rabbits.”
Lecture 15 – Open Secrets—Public Key Cryptography
“The idea behind public key cryptography sounds impossible: The key for encoding a secret message is publicized for all to know, yet only the recipient can reverse the procedure. Learn how this
approach, widely used over the Internet, relies on Euler’s theorem in number theory.”
Lecture 16 – The Birth of Graph Theory
“This lecture introduces the last major section of the course, graph theory, covering the basic definitions, notations, and theorems. The first theorem of graph theory is yet another contribution by
Euler, and you see how it applies to the popular puzzle of drawing a given shape without lifting the pencil or retracing any edge.”
Lecture 18 – Social Networks and Stable Marriages
“Apply graph theory to social networks, investigating such issues as the handshake theorem, Ramsey’s theorem, and the stable marriage theorem, which proves that in any equal collection of eligible
men and women, at least one pairing exists for each person so that no extramarital affairs will take place.”
Lecture 20 – Weighted Graphs and Minimum Spanning Trees
“When you call someone on a cell phone, you can think of yourself as a leaf on a giant ‘tree’—a connected graph with no cycles. Trees have a very simple yet powerful structure that make them useful
for organizing all sorts of information.”
Lecture 22 – Coloring Graphs and Maps
“According to the four-color theorem, any map can be colored in such a way that no adjacent regions are assigned the same color and, at most, four colors suffice. Learn how this problem went unsolved
for centuries and has only been proved recently with computer assistance.”
Course No. 1471
Great Thinkers, Great Theorems – William Dunham
Lecture 5 – Number Theory in Euclid
“In addition to being a geometer, Euclid was a pioneering number theorist, a subject he took up in books VII, VIII, and IX of the Elements. Focus on his proof that there are infinitely many prime
numbers, which Professor Dunham considers one of the greatest proofs in all of mathematics.”
Lecture 6 – The Life and Work of Archimedes
“Even more distinguished than Euclid was Archimedes, whose brilliant ideas took centuries to fully absorb. Probe the life and famous death of this absent-minded thinker, who once ran unclothed
through the streets, shouting ‘Eureka!’ (‘I have found it!’) on solving a problem in his bath.”
Lecture 7 – Archimedes’ Determination of Circular Area
“See Archimedes in action by following his solution to the problem of determining circular area—a question that seems trivial today but only because he solved it so simply and decisively. His unusual
strategy relied on a pair of indirect proofs.”
Lecture 8 – Heron’s Formula for Triangular Area
“Heron of Alexandria (also called Hero) is known as the inventor of a proto-steam engine many centuries before the Industrial Revolution. Discover that he was also a great mathematician who devised a
curious method for determining the area of a triangle from the lengths of its three sides.”
Lecture 9 – Al-Khwarizmi and Islamic Mathematics
“With the decline of classical civilization in the West, the focus of mathematical activity shifted to the Islamic world. Investigate the proofs of the mathematician whose name gives us our term
‘algorithm’: al-Khwarizmi. His great book on equation solving also led to the term ‘algebra.'”
Lecture 10 – A Horatio Algebra Story
“Visit the ruthless world of 16th-century Italian universities, where mathematicians kept their discoveries to themselves so they could win public competitions against their rivals. Meet one of the
most colorful of these figures: Gerolamo Cardano, who solved several key problems. In secret, of course.”
Lecture 11 – To the Cubic and Beyond
“Trace Cardano’s path to his greatest triumph: the solution to the cubic equation, widely considered impossible at the time. His protégé, Ludovico Ferrari, then solved the quartic equation. Norwegian
mathematician Niels Abel later showed that no general solutions are possible for fifth- or higher-degree equations.”
Lecture 12 – The Heroic Century
“The 17th century saw the pace of mathematical innovations accelerate, not least in the introduction of more streamlined notation. Survey the revolutionary thinkers of this period, including John
Napier, Henry Briggs, René Descartes, Blaise Pascal, and Pierre de Fermat, whose famous ‘last theorem’ would not be proved until 1995.”
Lecture 13 – The Legacy of Newton
“Explore the eventful life of Isaac Newton, one of the greatest geniuses of all time. Obsessive in his search for answers to questions from optics to alchemy to theology, he made his biggest mark in
mathematics and science, inventing calculus and discovering the law of universal gravitation.”
Lecture 14 – Newton’s Infinite Series
“Start with the binomial expansion, then turn to Newton’s innovation of using fractional and negative exponents to calculate roots—an example of his creative use of infinite series. Also see how
infinite series allowed Newton to approximate sine values with extraordinary accuracy.”
Lecture 16 – The Legacy of Leibniz
“Probe the career of Newton’s great rival, Gottfried Wilhelm Leibniz, who came relatively late to mathematics, plunging in during a diplomatic assignment to Paris. In short order, he discovered the
‘Leibniz series’ to represent π, and within a few years he invented calculus independently of Newton.”
Lecture 17 – The Bernoullis and the Calculus Wars
“Follow the bitter dispute between Newton and Leibniz over priority in the development of calculus. Also encounter the Swiss brothers Jakob and Johann Bernoulli, enthusiastic supporters of Leibniz.
Their fierce sibling rivalry extended to their competition to outdo each other in mathematical discoveries.”
Lecture 18 – Euler, the Master
“Meet history’s most prolific mathematician, Leonhard Euler, who went blind in his sixties but kept turning out brilliant papers. A sampling of his achievements: the number e, crucial in calculus;
Euler’s identity, responsible for the most beautiful theorem ever; Euler’s polyhedral formula; and Euler’s path.”
Lecture 19 – Euler‘s Extraordinary Sum
“Euler won his spurs as a great mathematician by finding the value of a converging infinite series that had stumped the Bernoulli brothers and everyone else who tried it. Pursue Euler’s analysis
through the twists and turns that led to a brilliantly simple answer.”
Lecture 20 – Euler and the Partitioning of Numbers
“Investigate Euler’s contribution to number theory by first warming up with the concept of amicable numbers—a truly rare breed of integers until Euler vastly increased the supply. Then move on to
Euler’s daring proof of a partitioning property of whole numbers.”
Lecture 21 – Gauss – the Prince of Mathematicians
“Dubbed the Prince of Mathematicians by the end of his career, Carl Friedrich Gauss was already making major contributions by his teen years. Survey his many achievements in mathematics and other
fields, focusing on his proof that a regular 17-sided polygon can be constructed with compass and straightedge alone.”
Lecture 22 – The 19th Century – Rigor and Liberation
“Delve into some of the important trends of 19th-century mathematics: a quest for rigor in securing the foundations of calculus; the liberation from the physical sciences, embodied by non-Euclidean
geometry; and the first significant steps toward opening the field to women.”
Lecture 23 – Cantor and the Infinite
“Another turning point of 19th-century mathematics was an increasing level of abstraction, notably in the approach to the infinite taken by Georg Cantor. Explore the paradoxes of the ‘completed’
infinite, and how Cantor resolved this mystery with transfinite numbers, exemplified by the transfinite cardinal aleph-naught.”
Lecture 24 – Beyond the Infinite
“See how it’s possible to build an infinite set that’s bigger than the set of all whole numbers, which is itself infinite. Conclude the course with Cantor’s theorem that the transcendental numbers
greatly outnumber the seemingly more abundant algebraic numbers—a final example of the elegance, economy, and surprise of a mathematical masterpiece.”
Course No. 1495
Introduction to Number Theory – Edward B. Burger
Lecture 12 – The RSA Encryption Scheme
“We continue our consideration of cryptography and examine how Fermat’s 350-year-old theorem about primes applies to the modern technological world, as seen in modern banking and credit card
Lecture 22 – Writing Real Numbers as Continued Fractions
“Real numbers are often expressed as endless decimals. Here we study an algorithm for writing real numbers as an intriguing repeated fraction-within-a-fraction expansion. Along the way, we encounter
new insights about the hidden structure within the real numbers.”
Lecture 24 – A Journey’s End and the Journey Ahead
“In this final lecture, we take a step back to view the entire panorama of number theory and celebrate some of the synergistic moments when seemingly unrelated ideas came together to tell a unified
story of number.”
Course No. 1499
Zero to Infinity: A History of Numbers – Edward B. Burger
Lecture 2 – The Dawn of Numbers
“One of the earliest questions was “How many?” Humans have been answering this question for thousands of years—since Sumerian shepherds used pebbles to keep track of their sheep, Mesopotamian
merchants kept their accounts on clay tablets, and Darius of Persia used a knotted cord as a calendar.”
Lecture 3 – Speaking the Language of Numbers
“As numbers became useful to count and record as well as calculate and predict, many societies, including the Sumerians, Egyptians, Mayans, and Chinese, invented sophisticated numeral systems;
arithmetic developed. Negative numbers, Arabic numerals, multiplication, and division made number an area for abstract, imaginative study as well as for everyday use.”
Lecture 4 – The Dramatic Digits – The Power of Zero
“When calculation became more important, zero—a crucial breakthrough—was born. Unwieldy additive number systems, like Babylonian nails and dovetails, or Roman numerals, gave way to compact
place-based systems. These systems, which include the modern base-10 system we use today, made modern mathematics possible.”
Lecture 6 – Nature’s Numbers – Patterns Without People
“Those who studied them found numbers captivating and soon realized that numerical structure, pattern, and beauty existed long before our ancestors named the numbers. In this lecture, our studies of
pattern and structure in nature lead us to Fibonacci numbers and to connect them in turn to the golden ratio studied by the Pythagoreans centuries earlier.”
Lecture 7 – Numbers of Prime Importance
“Now we study prime numbers, the building blocks of all natural (counting) numbers larger than 1. This area of inquiry dates to ancient Greece, where, using one of the most elegant arguments in all
of mathematics, Euclid proved that there are infinitely many primes. Some of the great questions about primes still remain unanswered; the study of primes is an active area of research known as
analytic number theory.”
Lecture 8 – Challenging the Rationality of Numbers
“Babylonians and Egyptians used rational numbers, better known as fractions, perhaps as early as 2000 B.C. Pythagoreans believed rational and natural numbers made it possible to measure all possible
lengths. When the Pythagoreans encountered lengths not measurable in this way, irrational numbers were born, and the world of number expanded.”
Lecture 9 – Walk the (Number) Line
“We have learned about natural numbers, integers, rational numbers, and irrationals. In this lecture, we’ll encounter real numbers, an extended notion of number. We’ll learn what distinguishes
rational numbers within real numbers, and we’ll also prove that the endless decimal 0.9999… exactly equals 1.”
Lecture 10 – The Commonplace Chaos Among Real Numbers
“Rational and irrational numbers have a defining difference that leads us to an intuitive and correct conclusion, and to a new understanding about how common rationals and irrationals really are.
Examining random base-10 real numbers introduces us to “normal” numbers and shows that “almost all” real numbers are normal and “almost all” real numbers are, in fact, irrational.”
Lecture 11 – A Beautiful Dusting of Zeroes and Twos
“In base-3, real numbers reveal an even deeper and more amazing structure, and we can detect and visualize a famous, and famously vexing, collection of real numbers—the Cantor Set first described by
German mathematician Georg Cantor in 1883.”
Lecture 12 – An Intuitive Sojourn Into Arithmetic
“We begin with a historical overview of addition, subtraction, multiplication, division, and exponentiation, in the course of which we’ll prove why a negative number times a negative number equals a
positive number. We’ll revisit Euclid’s Five Common Notions (having learned in Lecture 11 that one of these notions is not always true), and we’ll see what happens when we raise a number to a
fractional or irrational power.”
Lecture 13 – The Story of π
“Pi is one of the most famous numbers in history. The Babylonians had approximated it by 1800 B.C., and computers have calculated it to the trillions of digits, but we’ll see that major questions
about this amazing number remain unanswered.”
Lecture 14 – The Story of Euler’s e
“Compared to π, e is a newcomer, but it quickly became another important number in mathematics and science. Now known as Euler’s number, it is fundamental to understanding growth. This lecture traces
the evolution of e.”
Lecture 15 – Transcendental Numbers
“π and e take us into the mysterious world of transcendental numbers, where we’ll learn the difference between algebraic numbers, known since the Babylonians, and the new—and teeming—realm of
Lecture 16 – An Algebraic Approach to Numbers
“This part of the course invites us to take two views of number, the algebraic and the analytical. The algebraic perspective takes us to imaginary numbers, while the analytical perspective challenges
our sense of what number even means.”
Lecture 17 – The Five Most Important Numbers
“Looking at complex numbers geometrically shows a way to connect the five most important numbers in mathematics: 0, 1, π, e, and i, through the most beautiful equation in mathematics, Euler’s
Lecture 19 – A New Breed of Numbers
“Pythagoreans found irrational numbers not only counterintuitive but threatening to their world-view. In this lecture, we’ll get acquainted with—and use—some numbers that we may find equally bizarre:
p-adic numbers. We’ll learn a new way of looking at number, and about a lens through which all triangles become isosceles.”
Lecture 20 – The Notion of Transfinite Numbers
“Although it seems that we’ve looked at all possible worlds of number, we soon find that these worlds open onto a universe of number—and further still. In this lecture, we’ll learn not only how
humans arrived at the notion of infinity but how to compare infinities.”
Lecture 21 – Collections Too Infinite to Count
“Now that we are comfortable thinking about the infinite, we’ll look more closely at various collections of numbers, thereby discovering that infinity comes in at least two sizes.”
Lecture 22 – In and Out – The Road to a Third Infinity
“If infinity comes in two sizes, does it come in three? We’ll use set theory to understand how it might. Then we’ll apply this insight to infinite sets as well, a process that leads us to a third
kind of infinity.”
Lecture 23 – Infinity – What We Know and What We Don’t
“If there are several sizes of infinity, are there infinitely many sizes of it? Is there a largest infinity? And is there a size of infinity between the infinity of natural numbers and real numbers?
We’ll answer two of these questions and learn why the answer to the other is neither provable nor disprovable mathematically.”
Lecture 24 – The Endless Frontier of Number
“Now that we’ve traversed the universe of number, we can look back and understand how the idea of number has changed and evolved. In this lecture, we’ll get a sense of how mathematicians expand the
frontiers of number, and we’ll look at a couple of questions occupying today’s number theorists—the Riemann Hypothesis and prime factorization.”
Course No. 1802
The Search for Exoplanets: What Astronomers Know – Joshua Winn
Lecture 4 – Pioneers of Planet Searching
“Chart the history of exoplanet hunting – from a famous false signal in the 1960s, through ambiguous discoveries in the 1980s, to the big breakthrough in the 1990s, when dozens of exoplanets turned
up. Astronomers were stunned to find planets unlike anything in the solar system.”
Special Note: This entire series is outstanding! I will eventually be adding most of the episodes of this course as I rewatch them. (I watched this series before I began keeping track of “best”
Course No. 1816
The Inexplicable Universe: Unsolved Mysteries – Neil deGrasse Tyson
Lecture 4 – Inexplicable Physics
“Among the many topics you’ll learn about in this lecture are the discovery of more elements on the periodic table; muon neutrinos, tao particles, and the three regimes of matter; the secrets of
string theory (which offers the hope of unifying all the particles and forces of physics); and even the hypothetical experience of traveling through a black hole.”
Special Note: This entire series is outstanding! I will eventually be adding most of the episodes of this course as I rewatch them. (I watched this series before I began keeping track of “best”
Course No. 1830
Cosmology: The History and Nature of Our Universe – Mark Whittle
Lecture 3 – Overall Cosmic Properties
“The universe is lumpy at the scale of galaxies and galaxy clusters. But at larger scales it seems to be smooth and similar in all directions. This property of homogeneity and isotropy is called the
cosmological principle.”
Lecture 4 – The Stuff of the Universe
“The most familiar constituents of the universe are atomic matter and light. Neutrinos make up another component. But by far the bulk of the universe—96%—is dark energy and dark matter. The relative
amounts of these constituents have changed as the universe has expanded.”
Lecture 6 – Measuring Distances
“Astronomers use a ‘distance ladder’ of overlapping techniques to determine distances in the universe. Triangulation works for nearby stars. For progressively farther objects, observers use pulsating
stars, the rotation of galaxies, and a special class of supernova explosions.”
Lecture 8 – Distances, Appearances, and Horizons
“Defining distances in cosmology is tricky, since an object’s distance continually increases with cosmic expansion. There are three important distances to consider: the emission distance, when the
light set out; the current distance, when the light arrives; and the distance the light has traveled.”
Lecture 10 – Cosmic Geometry – Triangles in the Sky
“Einstein’s theory of gravity suggests that space could be positively or negatively curved, so that giant billion-light-year triangles might have angles that don’t add up to 180°. This lecture
discusses the success at measuring the curvature of the universe in 1998.”
Lecture 11 – Cosmic Expansion – Keeping Track of Energy
“Has the universe’s rate of expansion always been the same? You answer this question by applying Newton’s law of gravity to an expanding sphere of matter, finding that the expansion was faster in the
past and slows down over time.”
Lecture 12 – Cosmic Acceleration – Falling Outward
“We investigate why the three great eras of cosmic history—radiation, matter, and dark energy—have three characteristic kinds of expansion. These are rapid deceleration, modest deceleration, and
exponential acceleration. The last is propelled by dark energy, which makes the universe fall outward.”
Lecture 13 – The Cosmic Microwave Background
“By looking sufficiently far away, and hence back in time, we can witness the ‘flash’ from the big bang itself. This arrives from all directions as a feeble glow of microwave radiation called the
cosmic microwave background (CMB), discovered by chance in 1964.”
Lecture 22 – The Galaxy Web – A Relic of Primordial Sound
“A simulated intergalactic trip shows you the three-dimensional distribution of galaxies in our region of the universe. On the largest scale, galaxies form a weblike pattern that matches the peaks
and troughs of the primordial sound in the early universe.”
Lecture 24 – Understanding Element Abundances
“The theory of atom genesis in the interiors of stars is confirmed by the proportions of each element throughout the cosmos. The relative abundances hardly vary from place to place, so that gold
isn’t rare just on earth, it’s rare everywhere.”
Lecture 27 – Physics at Ultrahigh Temperatures
“This lecture begins your investigation of the universe during its first second, which is an immense tract of time in nature. To understand what happened, you need to know how nature behaves at
ultrahigh energy and density. Fortunately, the physics is much simpler than you might think.”
Lecture 29 – Back to the GUT – Matter and Forces Emerge
“You venture into the bizarre world of the opening nanosecond. There are two primary themes: the birth of matter and the birth of forces. Near one nanosecond, the universe was filled with a dense
broth of the most elementary particles. As temperatures dropped, particles began to form.”
Lecture 30 – Puzzling Problems Remain
“Although the standard big bang theory was amazingly successful, it couldn’t explain several fundamental properties of the universe: Its geometry is Euclidean, it’s smooth on the largest scales, and
it was born slightly lumpy on smaller scales. The theory of cosmic inflation offers a comprehensive solution.”
Lecture 31 – Inflation Provides the Solution
“This lecture shows how the early universe might enter a brief phase of exponentially accelerating expansion, or inflation, providing a mechanism to launch the standard hot big bang universe. This
picture also solves the flatness, horizon, and monopole problems that plagued the standard big-bang theory.”
Lecture 33 – Inflation’s Stunning Creativity
“All the matter and energy in stars and galaxies is exactly balanced by all the negative energy stored in the gravitational fields between the galaxies. Inflation is the mechanism that takes nothing
and makes a universe—not just our universe, but potentially many.”
Lecture 34 – Fine Tuning and Anthropic Arguments
“Why does the universe have the properties it does and not some different set of laws? One approach is to see the laws as inevitable if life ever evolves to ask such questions. This position is
called the anthropic argument, and its validity is hotly debated.”
Course No. 1866
The Remarkable Science of Ancient Astronomy – Bradley E. Schaefer
Lecture 10 – Origins of Western Constellations
“The human propensity for pattern recognition and storytelling has led every culture to invent constellations. Trace the birth of the star groups known in the West, many of which originated in
ancient Mesopotamia. At least one constellation is almost certainly more than 14,000 years old and may be humanity’s oldest surviving creative work.”
Course No. 1872
The Life and Death of Stars – Keivan G. Stassun
Lecture 10 – Eclipses of Stars—Truth in the Shadows
“Investigate the remarkable usefulness of eclipses. When our moon passes in front of a star or one star eclipses another, astronomers can gather a treasure trove of data, such as stellar diameters.
Eclipses also allow astronomers to identify planets orbiting other stars.”
Lecture 13 – E = mc^2—Energy for a Star’s Life
“Probe the physics of nuclear fusion, which is the process that powers stars by turning mass into energy, according to Einstein’s famous equation. Then examine two lines of evidence that show what’s
happening inside the sun, proving that nuclear reactions must indeed be taking place.”
Lecture 14 – Stars in Middle Age
“Delve deeper into the lessons of the Hertzsprung-Russell diagram, introduced in Lecture 9. One of its most important features is the main sequence curve, along which most stars are found for most of
their lives. Focus on the nuclear reactions occurring inside stars during this stable period.”
Lecture 19 – Stillborn Stars
“Follow the search for brown dwarfs—objects that are larger than planets but too small to ignite stellar fires. Hear about Professor Stassun’s work that identified the mass of these elusive objects,
showing the crucial role of magnetism in setting the basic properties of all stars.”
Lecture 20 – The Dark Mystery of the First Stars
“Join the hunt for the first stars in the universe, focusing on the nearby “Methuselah” star. Explore evidence that the earliest stars were giants, even by stellar standards. They may even have
included mammoth dark stars composed of mysterious dark matter.”
Lecture 21 – Stars as Magnets
“Because stars spin like dynamos, they generate magnetic fields—a phenomenon that explains many features of stars. See how the slowing rate of rotation of stars like the sun allows astronomers to
infer their ages. Also investigate the clock-like magnetic pulses of pulsars, which were originally thought to be signals from extraterrestrials.”
Lecture 22 – Solar Storms—The Perils of Life with a Star
“The sun and stars produce more than just light and heat. Their periodic blasts of charged particles constitute space weather. Examine this phenomenon—from beautiful aurorae to damaging bursts of
high-energy particles that disrupt electronics, the climate, and even life.”
Course No. 1878
Radio Astronomy: Observing the Invisible Universe – Felix J. Lockman
Lecture 5 – Radio Telescopes and How They Work
“Radio telescopes are so large because radio waves contain such a small amount of energy. For example, the signal from a standard cell phone measured one kilometer away is five million billion times
stronger than the radio signals received from a bright quasar. Learn how each of these fascinating instruments is designed to meet a specific scientific goal—accounting for their wide variation in
form and size.”
Lecture 7 – Tour of the Green Bank Observatory
“The Green Bank Observatory is located within the 13,000-acre National Radio Quiet Zone straddling the border of Virginia and West Virginia. Come tour this fascinating facility where astronomers
discovered radiation belts around Jupiter, the black hole at the center of our galaxy, and the first known interstellar organic molecule, and began the search for extra-terrestrial life.”
Lecture 8 – Tour of the Green Bank Telescope
“At 17 million pounds, and with more than 2,000 surface panels that can be repositioned in real time, this telescope is one of the largest moveable, land-based objects ever built. The dish could
contain two side-by-side football fields, but when its panels are brought into focus, the surface has errors no larger than the thickness of a business card. Welcome to this rare insider’s view.”
Lecture 9 – Hydrogen and the Structure of Galaxies
“Using the laws of physics and electromagnetic radiation, astronomers can ‘weigh’ a galaxy by studying the distribution of its rotating hydrogen. But when they do this, it soon becomes clear
something is very wrong: A huge proportion of the galaxy’s mass has simply gone missing. Welcome to the topsy-turvy world of dark matter, which we now believe accounts for a whopping 90 percent of
our own Milky Way.”
Lecture 10 – Pulsars: Clocks in Space
“In the mid-1960s, astronomers discovered signals with predictable periodicity but no known source. In case these signals indicated extraterrestrial life, they were initially labeled LGM, Little
Green Men. But research revealed the source of the pulsing radiation to be neutron stars. Learn how a star with a diameter of only a few kilometers and a mass similar to that of our Sun can spin
around hundreds of times per second.”
Lecture 11 – Pulsars and Gravity
“A pulsar’s spin begins with its birth in a supernova and can be altered by transfer of mass from a companion star. Learn how pulsars, these precise interstellar clocks, are used to confirm
Einstein’s prediction of gravitational waves by observations of a double-neutron-star system, and how we pull the pulsar signal out of the noise.”
Lecture 12 – Pulsars and the 300-Foot Telescope
“Humans constantly use radio transmission these days, for everything from military communications to garage-door openers. How can scientists determine which signals come from Earth and which come
from space? Learn how the 300-foot telescope, located in two radio quiet zones, was built quickly and cheaply. It ended up studying pulsars and hydrogen in distant galaxies, and made the case for
dark matter.”
Lecture 16 – Radio Stars and Early Interferometers
“When radio astronomers discovered a sky full of small radio sources of unknown origin, they built telescopes using multiple antennas to try to understand them. Learn how and why interferometers were
developed and how they have helped astronomers study quasars—those massively bright, star-like objects that scientists now know only occur in galaxies whose gas is falling into a supermassive black
Lecture 18 – Active Galactic Nuclei and the VLA
“The need for a new generation of radio interferometers to untangle extragalactic radio sources led to the development of the Very Large Array (VLA) in New Mexico. With its twenty-seven radio
antennas in a Y-shaped configuration, it gives both high sensitivity and high angular resolution. The VLA provided a deeper and clearer look at galaxies than ever before, and the results were
Lecture 19 – A Telescope as Big as the Earth
“Learn how astronomers use very-long-baseline interferometry (VLBI) with telescopes thousands of miles apart to essentially create a radio telescope as big as the Earth. With VLBI, scientists not
only look deep into galactic centers, study cosmic radio sources, and weigh black holes, but also more accurately tell time, study plate tectonics, and more—right here on planet Earth.”
Lecture 20 – Galaxies and Their Gas
“In visible light, scientists had described galaxies as ‘island universes’. But since the advent of radio astronomy, we’ve seen galaxies connected by streams of neutral hydrogen, interacting with and
ripping the gases from each other. Now astronomers have come to understand that these strong environmental interactions are not a secondary feature—they are key to a galaxy’s basic structure and
Lecture 21 – Interstellar Molecular Clouds
“In the late 1960s, interstellar ammonia and water vapor were detected. Soon came formaldehyde, carbon monoxide, and the discovery of giant molecular clouds where we now know stars and planets are
formed. With improvements in radio astronomy technology, today’s scientists can watch the process of star formation in other systems. The initial results are stunning.”
Lecture 22 – Star Formation and ALMA
“With an array of 66 radio antennas located in the high Chilean desert above much of the earth’s atmosphere, the Atacama Large Millimeter/submillimeter Array (ALMA) is a radio telescope tuned to the
higher frequencies of radio waves. Designed to examine some of the most distant and ancient galaxies ever seen, ALMA has not only revealed new stars in the making, but planetary systems as well.”
Lecture 23 – Interstellar Chemistry and Life
“Interstellar clouds favor formation of carbon-based molecules over any other kind—not at all what statistical models predicted. In fact, interstellar clouds contain a profusion of chemicals similar
to those that occur naturally on Earth. If planets are formed in this rich soup of organic molecules, is it possible life does not have to start from scratch on each planet?”
Lecture 24 – The Future of Radio Astronomy
“Learn about the newest radio telescopes and the exhilarating questions they plan to address: Did life begin in space? What is dark matter? And a new question that has just arisen in the past few
years: What are fast radio bursts? No matter how powerful these new telescopes are, radio astronomers will continue pushing the limits to tell us more and more about the universe that is our home.”
Course No. 1884
Experiencing Hubble: Understanding the Greatest Images of the Universe – David M. Meyer
Lecture 5 – The Cat’s Eye Nebula – A Stellar Demise
“Turning from star birth to star death, get a preview of the sun’s distant future by examining the Cat’s Eye Nebula. Such planetary nebulae (which have nothing to do with planets) are the exposed
debris of dying stars and are among the most beautiful objects in the Hubble gallery.”
Lecture 7 – The Sombrero Galaxy – An Island Universe
“In the 1920s, astronomer Edwin Hubble discovered the true nature of galaxies as ‘island universes’. Some 80 years later, the telescope named in his honor has made thousands of breathtaking pictures
of galaxies. Focus on one in particular—an edge-on view of the striking Sombrero galaxy.”
Lecture 8 – Hubble’s View of Galaxies Near and Far
“Hubble’s image of the nearby galaxy NGC 3370 includes many faint galaxies in the background, exemplifying the telescope’s mission to establish an accurate distance scale to galaxies near and
far—along with the related expansion rate of the universe. Discover how Hubble’s success has led to the concept of dark energy.”
Lecture 10 – Abell 2218 – A Massive Gravitational Lens
“One of the consequences of Einstein’s general theory of relativity is evident in Hubble’s picture of the galaxy cluster Abell 2218. Investigate the physics of this phenomenon, called gravitational
lensing, and discover how Hubble has used it to study extremely distant galaxies as well as dark matter.”
Course No. 3130
Origin of Civilization – Scott MacEachern
Lecture 36 – Great Zimbabwe and Its Successors
“Few archaeological sites have been subjected to the degree of abuse and misrepresentation sustained by Great Zimbabwe in southeastern Africa. Nevertheless, this lecture unpacks the intriguing
history of this urban center and the insights it can provide into the development of the elite.”
Course No. 3900
Ancient Civilizations of North America – Edwin Barnhart
Lecture 12 – The Wider Mississippian World
“After the fall of Cahokia, witness how Mississippian civilization flourished across eastern North America with tens of thousands of pyramid-building communities and a population in the millions.
Look at the ways they were connected through their commonly held belief in a three-tiered world, as reflected in their artwork. Major sites like Spiro, Moundville, and Etowah all faded out just
around 100 years before European contact, obscuring our understanding.”
Lecture 13 – De Soto Versus the Mississippians
“In 1539, Hernando de Soto of Spain landed seven ships with 600 men and hundreds of animals in present-day Florida. Follow his fruitless search for another Inca or Aztec Empire, as he instead
encounters hundreds of Mississippian cities through which he led a three-year reign of terror across the land-looting, raping, disfiguring, murdering, and enslaving native peoples by the thousands.”
Lecture 19 – The Chaco Phenomenon
“Chaco Canyon contains the most sophisticated architecture ever built in ancient North America—14 Great Houses, four Great Kivas, hundreds of smaller settlements, an extensive road system, and a
massive trade network. But who led these great building projects? And why do we find so little evidence of human habitation in what seems to be a major center of culture? Answer these questions and
Lecture 24 – The Iroquois and Algonquians before Contact
“At the time of European contact, two main groups existed in the northeast—the hunter-gatherer Algonquian and the agrarian Iroquois. Delve into how the Iroquois created the first North American
democracy as a solution to their increasing internal conflicts. Today, we know much of the U.S. Constitution is modeled on the Iroquois’ “Great League of Peace” and its 117 articles of confederation,
as formally acknowledged by the U.S. in 1988.”
Course No. 4215
An Introduction to Formal Logic – Steven Gimbel
Lecture 8 – Induction in Polls and Science
“Probe two activities that could not exist without induction: polling and scientific reasoning. Neither provides absolute proof in its field of analysis, but if faults such as those in Lecture 7 are
avoided, the conclusions can be impressively reliable.”
Course No. 5006
Capitalism vs. Socialism: Comparing Economic System – Edward F. Stuart
Lecture 13 – French Indicative Planning and Jean Monnet
“Discover why France, a latecomer to industrial capitalism, was vital in shaping influential socialist theories, and how centuries of political upheaval can leave distinct impressions on a nation’s
economic history. From the French Revolution to World War II and beyond, France is a strong example of the ways economies are shaped by both internal and external forces.”
Special Note: This entire series is outstanding! I will eventually be adding many of the episodes of this course as I rewatch them. (I watched this series before I began keeping track of “best”
Course No. 7210
The Symphony – Robert Greenberg
Lecture 24 – Dmitri Shostakovich and His Tenth Symphony
“Dmitri Shostakovich was used and abused by the Soviet powers during much of his life. Somehow, he survived—even as his Tenth Symphony made dangerously implicit criticisms of the Soviet government.”
Course No. 7250
Beethoven’s Piano Sonatas – Robert Greenberg
Lecture 4 – The Grand Sonata, Part 2
“Continuing our study of Beethoven’s grand sonatas, we examine Sonata no. 3 in C, no. 3, op. 2, and Sonata no. 4 in E flat, op. 7. In both these works, we see Beethoven’s early artistic declaration
that he was not interested in slavishly following the Classical tradition.”
Lecture 15 – The Waldstein and the Heroic Style
“Piano Sonata no. 21 in C, op. 53 (Waldstein) is like no other music written by Beethoven or anyone else. We study this remarkable piece—from its unrelenting opening theme to its breathtaking
prestissimo (“as fast as possible”) conclusion.”
Lecture 23 – In a World of His Own
“Beethoven’s last three piano sonatas owe much to his epic Missa Solemnis (“Solemn Mass”) which was also composed in the period 1820–1822. We explore the spiritual and compositional links to the
Missa Solemnis, particularly as they relate to sonatas no. 30 in E, op. 109, and no. 31 in A flat, op. 110.”
Lecture 24 – Reconciliation
“Beethoven completed his final piano sonata, no. 32 in C Minor, op. 111, in 1822—five years before his death. Opus 111 seems obviously Beethoven’s valedictory statement for the genre; it ties up
loose ends, yet it is so stunningly original that it caps, rather than continues, the composer’s run of 32 sonatas for piano.”
Course No. 7261
Understanding the Fundamentals of Music – Robert Greenberg
Lecture 9 – Intervals and Tunings
“Resuming our focus on pitch, we will turn once more to Pythagoras, and his investigation into what is now known as the overtone series. This paves the way for an examination of intervals, the
evolution of tuning systems, and an introduction to the major pitch collections.”
Course No. 7270
The Concerto – Robert Greenberg
Lecture 13 – Tchaikovsky
“Excoriated by colleagues and critics alike, Tchaikovsky’s concerti ultimately triumphed to become cornerstones of the repertoire. This lecture explores his Piano Concerto no. 1 in B flat Minor, op.
23; Piano Concerto no. 2 in G Major, op. 44; and Violin Concerto in D Major, op. 35, arguably his single greatest work and one of the greatest concerti of the 19th century.”
Lecture 14 – Brahms and the Symphonic Concerto
“Johannes Brahms’s compositional style is a synthesis of the clear and concise musical forms and genres of the Classical and Baroque eras, and the melodic, harmonic, and expressive palette of the
Romantic era in which he lived. This lecture examines in depth his monumental Piano Concerto no. 2 in B flat Major, op. 83.”
Course No. 8122
Albert Einstein: Physicist, Philosopher, Humanitarian – Don Howard
Lecture 1 – The Precocious Young Einstein
“The aim of these lectures is to explore Einstein the whole person and the whole thinker. You begin with an overview of the course. Then you look at important events in Einstein’s life up to the
beginning of his university studies in 1896.”
Lecture 7 – Background to General Relativity
“Special relativity is ‘special’ in the sense that it is restricted to observers moving with constant relative velocity. Einstein wanted to extend the theory to include accelerated motion. His great
insight was that such a ‘general’ theory would incorporate the phenomenon of gravity.”
Lecture 19 – Einstein and the Bomb – Science Politicized
“In 1939, Einstein signed a letter to President Roosevelt that launched the Manhattan Project to build the first atomic bomb. Scientists had long advised governments, but this effort represented a
fundamental shift in the relationship between science and the state.”
Special Note: This entire series is outstanding! I will eventually be adding many of the episodes of this course as I rewatch them. (I watched this series before I began keeping track of “best”
Course No. 8374
Understanding Russia: A Cultural History – Lynne Ann Hartnett
Lecture 10 – Alexander II, Nihilists, and Assassins
“Focus is on the reign of Alexander II, who ruled Russia from 1855 to 1881. Central to this lecture are three questions: Why did this promising reign end so violently? Did Alexander II shape
developments in literature and culture? How could Russia’s last great tsar inaugurate a violent confrontation between the state and its people?”
Lecture 14 – The Rise and Fall of the Romanovs
“Here is the real story behind the Romanov dynasty, from its rise to power in 1613 to its bloody end in 1917—a tale filled with adventure, intrigue, romance, and heartbreak. It was this period that
saw the Decembrist revolution, the assassination of Tsar Alexander II, and the machinations of the notorious Grigori Rasputin.”
Lecture 17 – Lenin and the Soviet Cultural Invasion
“Professor Hartnett reveals how Lenin and the Communist Party aimed to win the hearts and minds of the Soviet people through a cultural battle fought on every possible front. See how this battle was
won through a militarized economy, propaganda radio, the renaming of streets, and the ‘secular sainthood’ of Lenin.”
Lecture 19 – The Tyrant is a Movie Buff: Stalinism
“Stalin and his cadre aspired to transform everyday Russian life (byt) in ways that brought forth such horrors as collectivization and the gulags. But, as you’ll learn, this was also a period where
the creative work and cultural influence of writers, composers, and painters were suppressed by the terrifying mandates of Socialist Realism.”
Lecture 20 – The Soviets’ Great Patriotic War
“By the time World War II ended, the Soviets would lose 27 million men, women, and children from a total population of 200 million. In this lecture, we examine Soviet life during the Great Patriotic
War and investigate how culture (including poetry and film) was used in service of the war effort.”
Lecture 21 – With Khrushchev, the Cultural Thaw
“Nikita Khrushchev emerged from the power struggles after Stalin’s death with a daring denunciation of the dictator’s cult of terror and personality. As we examine Khrushchev’s liberalization of
culture, we’ll also explore its limits, including the continuation of anti-Semitism from the Stalin era, embraced under the guise of ‘anti-cosmopolitanism’.”
Lecture 22 – Soviet Byt: Shared Kitchen, Stove, and Bath
“What was everyday Soviet life like during the Khrushchev and Brezhnev periods? How and where did people live? How did they spend their leisure time? Answers to these and other questions reveal the
degree to which politics affected even seemingly apolitical areas of life.”
Lecture 24 – Soviet Chaos and Russian Revenge
“On December 25, 1991, the Soviet Union came to an end. We follow the road that led to this moment under the policies of perestroika (restructuring the centrally-planned economy) and glasnost
(removing rigid state censorship). Then, we conclude with a look at the rise of a new popular leader: Vladimir Putin.”
Course No. 8535
America in the Gilded Age and Progressive Era – Edward T. O’Donnell
Lecture 23 – Over There: A World Safe for Democracy
“As the Progressive Era ends, follow the complex events that led the United States into World War I. Learn how an initial federal policy of neutrality changed to one of “preparedness” and then
intervention, amid conflicting public sentiments and government pro-war propaganda. Also trace the after-effects of the war on U.S. foreign policy.”
Special Note: This entire series is outstanding! I will eventually be adding many of the episodes of this course as I rewatch them. (I watched this series before I began keeping track of “best”
Course No. 8580
Turning Points in American History – Edward T. O’Donnell
Lecture 10 – 1786 Toward a Constitution – Shays’s Rebellion
“Who was Daniel Shays? What political and economic dilemmas led to this famous farmer’s rebellion of 1786? Most important: How did this event pave the way for a reconsideration of the Articles of
Confederation and the creation of the U. S. Constitution? Find out here.”
Lecture 23 – 1868 Equal Protection—The 14th Amendment
“Many legal scholars and historians have argued that the 14th Amendment, which promises equal protection under the laws, is the most important addition to the Constitution after the Bill of Rights.
Here, Professor O’Donnell retells the fascinating story of how this amendment was ratified in 1868—and its turbulent history in the 20th and 21st centuries.”
Special Note: This entire series is outstanding! I will eventually be adding many of the episodes of this course as I rewatch them. (I watched this series before I began keeping track of “best”
Course No. 30110
England, the 1960s, and the Triumph of the Beatles – Michael Shelden
Lecture 8 – The Englishness of A Hard Day’s Night
“In summer 1964, the cinematic Beatles vehicle A Hard Day’s Night broke almost every rule in Hollywood at the time. Professor Shelden reveals what lies underneath the film’s surface charm and musical
numbers: an overall attitude of irreverence and defiance in the face of authority, and a challenge for audiences to think for themselves.”
Lecture 12 – Hello, Goodbye: The End of the 1960s
“In their last years together, all four of the Beatles seemed headed in new directions as they grew up—and apart. Nevertheless, witness how these final years brought a range of sounds, including
protest songs, mystic melodies, anthems of friendship, and an iconic double album called simply, The Beatles, but better known as the ‘White Album.'”
Course No. 60000
The Great Questions of Philosophy and Physics – Steven Gimbel
Lecture 3 – Can Physics Explain Reality?
“If the point of physics is to explain reality, then what counts as an explanation? Starting here, Professor Gimbel goes deeper to probe what makes some explanations scientific and whether physics
actually explains anything. Along the way, he explores Bertrand Russell’s rejection of the notion of cause, Carl Hempel’s account of explanation, and Nancy Cartwright’s skepticism about scientific
Lecture 4 – The Reality of Einstein’s Space
“What’s left when you take all the matter and energy out of space? Either something or nothing. Newton believed the former; his rival, Leibniz, believed the latter. Assess arguments for both views,
and then see how Einstein was influenced by Leibniz’s relational picture of space to invent his special theory of relativity. Einstein’s further work on relativity led him to a startlingly new
conception of space.”
Lecture 5 – The Nature of Einstein’s Time
“Consider the weirdness of time: The laws of physics are time reversable, but we never see time running backwards. Theorists have proposed that the direction of time is connected to the order of the
early universe and even that time is an illusion. See how Einstein deepened the mystery with his theory of relativity, which predicts time dilation and the surprising possibility of time travel.”
Lecture 8 – Quantum States: Neither True nor False?
“Enter the quantum world, where traditional philosophical logic breaks down. First, explore the roots of quantum theory and how scientists gradually uncovered its surpassing strangeness. Clear up the
meaning of the Heisenberg uncertainty principle, which is a metaphysical claim, not an epistemological one. Finally, delve into John von Neumann’s revolutionary quantum logic, working out an
Lecture 10 – Wanted Dead and Alive: Schrödinger’s Cat
“The most famous paradox of quantum theory is the thought experiment showing that a cat under certain experimental conditions must be both dead and alive. Explore four proposed solutions to this
conundrum, known as the measurement problem: the hidden-variable view, the Copenhagen interpretation, the idea that the human mind “collapses” a quantum state, and the many-worlds interpretation.”
Lecture 11 – The Dream of Grand Unification
“After the dust settled from the quantum revolution, physics was left with two fundamental theories: the standard model of particle physics for quantum phenomena and general relativity for
gravitational interactions. Follow the quest for a grand unified theory that incorporates both. Armed with Karl Popper’s demarcation criteria, see how unifying ideas such as string theory fall
Lecture 12 – The Physics of God
“The laws of physics have been invoked on both sides of the debate over the existence of God. Professor Gimbel closes the course by tracing the history of this dispute, from Newton’s belief in a
Creator to today’s discussion of the “fine-tuning” of nature’s constants and whether God is responsible. Such big questions in physics inevitably bring us back to the roots of physics: philosophy.”
Course No. 80060
Music Theory: The Foundation of Great Music – Sean Atkinson
Lecture 5 – The Circle of Fifths
“Begin by defining the key of a piece of music, which is simply the musical scale that is used the most in the piece. Also discover key signatures in written music, symbols at the beginning of the
musical score that indicate the key of the piece. Then grasp how the major keys all relate to each other in an orderly way, when arranged schematically according to the interval of a fifth.”
Lecture 16 – Hypermeter and Larger Musical Structures
“In listening to music, we sometimes hear the meter differently than the way it’s written on the page. Learn how the concept of hypermeter helps explain this, by showing that when measures of music
are grouped into phrases, we often hear a pulse for each measure in the phrase, rather than the pulses within the measure. Explore examples of hypermeter, and how we perceive music as listeners.” | {"url":"https://cosmicreflections.skythisweek.info/tag/polls/","timestamp":"2024-11-11T20:48:28Z","content_type":"text/html","content_length":"143555","record_id":"<urn:uuid:a90f5408-cc69-4058-8fc3-8407e045afdf>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00772.warc.gz"} |
Drawing Book | HackerRank
A teacher asks the class to open their books to a page number. A student can either start turning pages from the front of the book or from the back of the book. They always turn pages one at a time.
When they open the book, page is always on the right side:
When they flip page , they see pages and . Each page except the last page will always be printed on both sides. The last page may only be printed on the front, given the length of the book. If the
book is pages long, and a student wants to turn to page , what is the minimum number of pages to turn? They can start at the beginning or the end of the book.
Given and , find and print the minimum number of pages that must be turned in order to arrive at page .
Using the diagram above, if the student wants to get to page , they open the book to page , flip page and they are on the correct page. If they open the book to the last page, page , they turn page
and are at the correct page. Return .
Function Description
Complete the pageCount function in the editor below.
pageCount has the following parameter(s):
• int n: the number of pages in the book
• int p: the page number to turn to
• int: the minimum number of pages to turn
The first line contains an integer , the number of pages in the book.
The second line contains an integer, , the page to turn to.
If the student starts turning from page , they only need to turn page:
If a student starts turning from page , they need to turn pages:
Return the minimum value, .
If the student starts turning from page , they need to turn pages:
If they start turning from page , they do not need to turn any pages:
Return the minimum value, . | {"url":"https://www.hackerrank.com/challenges/drawing-book/problem","timestamp":"2024-11-13T16:01:20Z","content_type":"text/html","content_length":"867851","record_id":"<urn:uuid:e18a5279-95f4-464b-996e-306b84ca95dd>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00196.warc.gz"} |
Partitioned Coupling for Structural Acoustics
We expand the second-order fluid–structure coupling scheme of Farhat et al. (1998, “Load and Motion Transfer Algorithms for 19 Fluid/Structure Interaction Problems With Non-Matching Discrete
Interfaces: Momentum and Energy Conservation, Optimal Discretization and Application to Aeroelasticity,” Comput. Methods Appl. Mech. Eng., 157(1–2), pp. 95–114; 2006, “Provably Second-Order
Time-Accurate Loosely-Coupled Solution Algorithms for Transient Nonlinear Computational Aeroelasticity,” Comput. Methods Appl. Mech. Eng., 195(17), pp. 1973–2001) to structural acoustics. The
staggered structural acoustics solution method is demonstrated to be second-order accurate in time, and numerical results are compared to a monolithically coupled system. The partitioned coupling
method is implemented in the Sierra Mechanics software suite, allowing for the loose coupling of time domain acoustics in sierra/sd to structural dynamics (sierra/sd) or solid mechanics (sierra/sm).
The coupling is demonstrated to work for nonconforming meshes. Results are verified for a one-dimensional piston, and the staggered and monolithic results are compared to an exact solution. Huang, H.
(1969, “Transient Interaction of Plane Acoustic Waves With a Spherical Elastic Shell,” J. Acoust. Soc. Am., 45(3), pp. 661–670) sphere scattering problem with a spherically spreading acoustic load
demonstrates parallel capability on a complex problem. Our numerical results compare well for a bronze plate submerged in water and sinusoidally excited (Fahnline and Shepherd, 2017, “Transient
Finite Element/Equivalent Sources Using Direct Coupling and Treating the Acoustic Coupling Matrix as Sparse,” J. Acoust. Soc. Am., 142(2), pp. 1011–1024).
Issue Section:
Research Papers
Acoustics, Bronze, Structural acoustics, Fluids, Algorithms, Pistons, Structural dynamics, Stress, Finite element analysis, Computer software, Sound pressure, Aeroelasticity
1 Introduction
Structural acoustics involves the interaction between a vibrating structure and the pressure fluctuations in an acoustic field, and it can be useful in identifying the noise radiated from a structure
or how a structure responds to acoustic waves. Applications of coupled structural acoustics include shock loads on ships [1,2], tire–road interaction [3–5], and aero-structures [6–9]. Computational
structural acoustics is used at Sandia National Laboratories to predict structural responses in various acoustic environments. Figure 1 shows a weapons system in an acoustic loading environment.
There are also a multitude of well-studied academic problems, including plates, spheres, cylinders, and other basic structures suspended in acoustic media [10–12]. Many applications include unbounded
domains where special treatments are required to solve a finite sized problem: Infinite elements [13] and perfectly matched layers [14] are two possible ways to calculate the acoustic response in a
unbounded or non-reflecting region of interest.
Various computational schemes can be used to calculate structural acoustic responses [15]. Typically, finite elements are used for the structural domain, and either finite elements or boundary
elements are used for the acoustic domain [12]. Structural acoustics problems can be solved in the time domain, the frequency domain, or with modal superposition which requires solving a quadratic
eigenvalue problem. This work focuses on the solution of the structural acoustic problem in the time domain.
When the behavior of the fluid is minimally affected by the structure, the fluid problem can be solved a priori, and the solution to the fluid problem can be used to load the structure in a one-way
coupling scheme. Complex fluid–structure interactions necessitate two-way coupling, where the fluid and the structure mutually influence each other and must march forward in time together. For
two-way coupling, the governing equations can be solved in a monolithic approach or a loosely coupled approach [16,17]. Structural acoustic implementations using monolithic coupling methods solve the
discretized governing equations with all the degrees-of-freedom in both the structural and acoustic models [15,18,19]. These fully coupled methods provide second-order accuracy, are well studied, and
are simple to implement. However, fully coupled systems can induce poorly conditioned linear systems due to vast differences in the material properties between a solid such as steel and a fluid such
as air. While a direct linear algebra solver can handle these differences, parallel iterative solvers struggle (relative residual order of magnitude). Many parallel solvers are optimized for
symmetric dynamic matrices; however, the coupling terms between the two domains break the symmetry of the system. Issues of solve time and solver convergence become significant obstacles when solving
large, massively parallel, real-world structural acoustic software. An alternative to solving the fully coupled system is to use an iterative loose-coupling approach.
While loosely coupled time integration implementations are relatively uncommon in structural acoustics applications [18–22], they are common in fluid–structure problems that solve the full
Navier–Stokes system of equations [16,23]. This is often because fluid applications and structural applications are developed independently and then coupled in an multiple program-multiple data
(MPMD) method that prohibits solving the governing equations monolithically. Historically, second-order accuracy has been achieved in these applications using iterative methods at each time step of
the coupled system. Farhat et al. have developed the generalized serial staggered coupling algorithm and show that it is second-order accurate in time [16,17]. Rather than iterating to solution
convergence, the loosely coupled fluid–structure interaction problem is solved with exactly one prediction and one correction step. In this work, the general serialized staggered (GSS) algorithm is
extended to present a loose structure–acoustic coupling algorithm that is second-order accurate in time. This decouples the structural and acoustic equations and alleviates the scaling and condition
problems associated with the strongly coupled problem. Other advantages of this method include optimization of solver parameters on each subdomain and faster computation time, while maintaining the
accuracy of the strongly coupled solution.
An added benefit of this approach is that in massively parallel applications with legacy software, this approach can be used to couple separate finite element programs without needing to mix source
code. Furthermore, the flexibility offered by loosely coupling allows both the fluid and the structure codes to be interchanged depending on the needs of the particular application.
The monolithic coupling was implemented in the massively parallel structural dynamics finite element code sierra/sd [24–27]. The loosely coupled algorithm was implemented first by launching two
instances of sierra/sd and communicating through an MPI-based MPMD coupler [28]. The coupler necessitated communication only on the structural-acoustic interface. That is, processors that contained
only structural elements and no interface with acoustic elements needed no MPI communication, and vice versa. Processors that contained the structural acoustic interface need only to communicate the
nodal locations on the interface, as well as the pressures, velocities, and accelerations at those nodal locations.
In addition to the coupling method discussed above, there are a number of application specific algorithmic decisions that are made when solving a structural acoustic problem. These include the choice
of finite elements versus boundary elements for the acoustic domain, and conforming of mis-matched meshes at the domain boundary, and one-way or two-way coupling. While not the primary subject of
this work, some context is provided to show the decisions made in this implementation. As such, it is understood that boundary elements, conforming meshes, and one-way coupling are all valid
algorithmic approaches that are not the focus of this work.
One-way coupling can be used in an application such as aero-structure analysis, where a fluid flow is used to load a structure, and the displacements, accelerations, or stresses of the structure are
the quantity of interest, and it is assumed that the movement of the structure has minimal effect on the fluid flow. However, if the quantity of interest is in the fluid domain, and the movement of
the structure is expected to have an effect on that quantity, two-way coupling is necessary to capture that behavior.
Having the same mesh density in the acoustic fluid and solid may be very inefficient, since the two domains typically require significantly different mesh densities to achieve a given level of
discretization accuracy. Perhaps, more importantly, it is also impractical in many applications since the mesh generation process may be performed separately for the two domains. Generating
conforming meshes on the wet interface may be very difficult, if not impossible, even given the most sophisticated mesh generation software. Illustrative examples include the hull of a ship [29] or
the skin of an aircraft. In these cases, the structural and fluid meshes are typically created independently and have very different mesh density requirements. Joining them into a single, monolithic
mesh is often impractical. The node-on-face structural acoustic coupling method is used. For a detailed description of the procedure, see Ref. [17].
2 Theory
2.1 Governing Equations.
We consider a simply connected three-dimensional domain Ω that consists of disjoint subsets Ω[s] and Ω[a] such that $Ω=Ωs∪Ωa$ and $Ωs∩Ωa=∅$. We refer to Ω[s] as the structural domain and Ω[a] as the
acoustic domain. The boundary of each of these domains is divided into disjoint Dirichlet, Neumann, and structure–acoustic coupling partitions. As such, $∂Ωk=ΓkD∪ΓkN∪Γsa$ for k = {s, a}. The boundary
of the complete domain of interest is the union of the Dirichlet and Neumann components of each subdomain.
2.1.1 Structural Dynamics.
The equation of motion for (possibly damped) structural dynamics is the standard linearized elastodynamics equation
is the mass density,
is the displacement field,
is the damping coefficient,
is the first Piola–Kirchhoff stress tensor, and
is a mass-specific body force. The subscript “s” is used to denote a structural quantity when the same symbol is used for an acoustic quantity. The constitutive equation is
is the elasticity tensor and
are the Lamé parameters. Initial conditions are
Dirichlet and Neumann boundary conditions are specified as
is the outward normal vector on the boundary and
is the traction.
2.1.2 Acoustics.
The linearized Euler equations are combined into a single equation for velocity potential as
is the velocity potential such that
, and the acoustic wavespeed is
. The damping coefficient is
. The subscript “a” is used to denote an acoustic quantity when the same symbol is used for a structural quantity. The acoustic pressure is obtained via
, which is slightly different from the typical velocity potential equation for linearized acoustics. This choice was made based on linear solver considerations of our monolithic coupling formulation,
described in Sec.
. Initial and boundary conditions are given as
2.1.3 Coupling Conditions.
The structure and the acoustic are coupled (in the strong sense) through velocity (
) and traction (
) continuity:
where Γ
is the structure–acoustic boundary.
2.2 Semi-Discrete Form.
The standard Galerkin finite element method is used to discretize
in space. The resulting semi-discrete system is given as
, and
are mass, damping, and stiffness matrices. The matrices
enforce the coupling conditions on the interface; expressions for the other sub-matrices of the system can be found in finite element textbooks, e.g., Refs. [
]. The structure–acoustic coupling matrix is determined by integrating the stress term of
by parts, and substituting
on the resulting term defined over the structure–acoustic boundary. Similarly, the acoustic–structure matrix is defined when
is integrated by parts and
is enforced on Γ
2.3 Time Integration.
Newmark beta time integration
is used to advance the solution from time
with a constant time step size Δ
. θ is used to denote either
. The implicit, second-order time accurate and unconditionally stable version of the scheme is used, which requires β = 1/4 and γ = 1/2. The system in
is solved for displacement rather than acceleration.
2.4 Monolithic Coupling.
Monolithic coupling of the structural and acoustic response is achieved by solving (8) combined with (9) as a single algebraic system for each time step. This results in a tightly coupled
structural–acoustic response and is second-order accurate in time. Matrix ill-conditioning can occur when the densities of the acoustic and structural materials are severely disparate.
2.5 Interfield-Parallel Strategies.
Interfield-parallel [
] strategies solve the acoustic and structural equations simultaneously. This requires predictions of both the structure velocity and acoustic pressure. One simple predictor is
which results in first-order accuracy. Loose coupling with interfield-parallel solutions results in first-order accuracy: the loading is never corrected with updated solution data. As such, results
are presented using serial solution strategies.
2.6 Serial Solution Strategies.
Serial solution strategies solve the acoustic and structural equations sequentially. This requires predictions of either structure acceleration or acoustic pressure. The load on the second solve is
corrected with information from the first solve. Numerous combinations of predictors/correctors have been proposed for the fluid–structure interaction. However, there is a gap in the literature for
loosely coupled structural acoustics. The predictions currently available in the structural-acoustic literature result in first-order accuracy.
The GSS procedure from Ref. [16] was designed as a coupling algorithm to preserve accuracy of the time integrator for moving meshes. Figure 2 shows the logic flowchart for the GSS algorithm.
Even though our meshes are not moving, we have found the GSS algorithm essential for maintaining time accuracy.
To solve
in a partitioned manner, we apply a predictor–corrector algorithm as in Ref. [
]. We apply the second-order accurate (Adams–Bashforth) predictor for the structural velocities as suggested in Refs. [
The corrected pressure is taken to be the end-of-step value, i.e.,
With the predictor
and corrector
is rewritten as
Equation (14) can now be solved for ψ independently of u; hence, it is a partitioned algorithm. Since the predictor (12) and the Newmark beta time integrator (9) are second-order time accurate, the
resulting partitioned algorithm is second-order accurate in time. Numerical results in Sec. 3.1 demonstrate the temporal accuracy of the method.
3 Numerical Examples
3.1 Verification in One Dimension: A Piston Example.
illustrates a one-dimensional piston in an acoustic fluid. The verification problem consists of exactly one degree-of-freedom on the structural mesh and exactly one degree-of-freedom on the acoustic
mesh. It can also be produced by using a three-dimensional stiff plate on the structural mesh, and hex elements on the acoustics mesh. The exact solution for this problem is specified as
Figure 4 shows the convergence plot for the strong coupling solution. As expected and discussed in the literature, strong coupling gives second-order convergence with time.
Figure 5 shows the convergence plot for the loosely coupled solution. Following the GSS algorithm, we are able to show that the convergence plot for the loosely coupled system is also second-order
accurate in time.
3.2 Huang Sphere.
3.3 Bronze Plate Example.
The bronze plate example is detailed in Ref. [11]. It consists of a solid bronze plate submerged in water, with a drive point near one corner and an accelerometer in the middle of the same edge. The
setup is depicted in Fig. 9.
The plate is 0.3048 m by 0.762 m by 0.04445 m. Density of bronze is 7468 kg/m
. Young’s modulus and Poisson’s ratio are 117 GPa and 0.3, respectively. The force applied at the drive point is
= 5.5e − 4 s. The plate is meshed with 1920 linear hexahedral elements.
The plate is immersed in an acoustic fluid of radius r = 1.6 m. The acoustic mesh is composed of 188,664 linear tetrahedral elements. Tenth-order infinite elements are used on the outer boundary of
the acoustic domain. Density of the water is 1000 kg/m^3. Acoustic sound speed is c[0] = 1500 m/s.
Figure 10 shows the GSS and monolithic numerical schemes compared to the experimental results from Ref. [11].
Both numerical schemes under-predict the amplitude of the experiment, but the period is captured well. The GSS scheme is seen to attenuate the structural vibrations faster than the monolithic
4 Conclusions
A partitioned-coupling approach for structural acoustics based on the GSS algorithm was proposed. This method utilizes separate solves for the acoustic and structural domains, making it amenable to
use with legacy software packages. Partitioning the system removes ill-conditioning concerns that arise in more traditional structural-acoustics solvers, where differences in material properties
cause numerical problems for monolithic solution methods.
We demonstrated second-order time accuracy of the partitioned method on a quasi-one-dimensional piston problem. The Huang sphere and bronze plate examples show that the GSS approach yields solutions
that are comparable to the monolithic solve for the same problem. Additionally, the bronze plate results match experimental data within reasonable tolerances.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International
Inc. for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525 (Funder ID: 10.13039/100000015).
The sierra/sd software package is the collective effort of many individuals and teams. The core Sandia National Laboratories-based sierra/sd development team responsible for the maintenance of
documentation and support of code capabilities includes Gregory Bunting, Nathan Crane, David Day, Clark Dohrmann, Brian Ferri, Robert Flicek, Sean Hardesty, Payton Lindsay, Scott Miller, Lynn Munday,
Brian Stevens, and Tim Walsh. The sierra/sd team also works closely with external collaborators in academia including Wilkins Aquino and Murthy Guddati.
Dozens of student interns have provided extensive support to the sierra/sd team in the fields of capability development and code testing. Additionally, the sierra/sd team is part of the larger Sierra
Mechanics code team, receives extensive support from the Sierra/DevOps and Sierra/Toolkit teams, and maintains close collaborations with the Sierra Solid Mechanics and Thermal Fluid teams.
The authors would like to thank John Fahnline of Penn State’s Applied Research Lab for consulting on the bronze plate example.
Y. S.
, “
Ship Shock Modeling and Simulation for Far-Field Underwater Explosion
Comput. Struct.
), pp.
, and
, “
Transient Response of Floating Composite Ship Section Subjected to Underwater Shock
Compos. Struct.
), pp.
L. R.
R. A.
, and
, “
A Coupled Tire Structure/Acoustic Cavity Model
Int. J. Solids Struct.
), pp.
C. G.
, and
, “
Dynamic Behaviour of a Rolling Tyre: Experimental and Numerical Analyses
J. Sound Vib.
, pp.
P. A.
, and
, “
A Methodology Based on Structural Finite Element Method-Boundary Element Method and Acoustic Boundary Element Method Models in 2.5 D for the Prediction of Reradiated Noise in Railway-Induced
Ground-Borne Vibration Problems
ASME J. Vib. Acoust.
), p.
, and
Rotating Machinery, Hybrid Test Methods, Vibro-Acoustics & Laser Vibrometry
De Clerck
, eds., Vol.
New York
, pp.
E. C.
R. A.
, and
M. R.
, “Performing Direct-Field Acoustic Test Environments on a Sandia Flight System to Provide Data for Finite Element Simulation,”
Rotating Machinery, Hybrid Test Methods, Vibro-Acoustics & Laser Vibrometry
De Clerck
, eds., Vol.
New York
, pp.
E. C.
, and
, “
Finite Element Simulation of a Direct-Field Acoustic Test of a Flight System Using Acoustic Source Inversion
,” Tech. Rep.,
Sandia National Laboratories (SNL-NM)
Albuquerque, NM
, and
, “
Analytical Study of Coupling Effects for Vibrations of Cable-Harnessed Beam Structures
ASME J. Vib. Acoust.
), p.
, “
Transient Interaction of Plane Acoustic Waves With a Spherical Elastic Shell
J. Acoust. Soc. Am.
), pp.
J. B.
, and
M. R.
, “
Transient Finite Element/Equivalent Sources Using Direct Coupling and Treating the Acoustic Coupling Matrix as Sparse
J. Acoust. Soc. Am.
), pp.
, and
Finite Element and Boundary Methods in Structural Acoustics and Vibration
CRC Press
Boca Raton, FL
, and
, “
A Comparison of Transient Infinite Elements and Transient Kirchhoff Integral Methods for Far Field Acoustic Analysis
J. Comput. Acoust.
), p.
, and
, “
Parallel Ellipsoidal Perfectly Matched Layers for Acoustic Helmholtz Problems on Exterior Domains
J. Comput. Acoust.
), p.
, “
Finite Element Formulations of Structural Acoustics Problems
Comput. Struct.
), pp.
Van der Zee
K. G.
, and
, “
Provably Second-Order Time-Accurate Loosely-Coupled Solution Algorithms for Transient Nonlinear Computational Aeroelasticity
Comput. Methods Appl. Mech. Eng.
), pp.
, and
Le Tallec
, “
Load and Motion Transfer Algorithms for Fluid/Structure Interaction Problems With Non-Matching Discrete Interfaces: Momentum and Energy Conservation, Optimal Discretization and Application to
Comput. Methods Appl. Mech. Eng.
), pp.
S. A.
S. H.
, and
D. J.
Engineering Vibroacoustic Analysis: Methods and Applications
John Wiley & Sons
New York
L. L.
, and
A. G.
Noise and Vibration Control Engineering: Principles and Applications.
John Wiley & Sons, Inc.
Hoboken, NJ
, and
, “
Parallel Bdd-based Monolithic Approach for Acoustic Fluid–Structure Interaction
Comput. Mech.
), pp.
M. R.
C. A.
, and
M. A.
, “
Treatment of Acoustic Fluid–Structure Interaction by Localized Lagrange Multipliers: Formulation
Comput. Methods Appl. Mech. Eng.
), pp.
M. R.
M. A.
C. A.
, and
, “
Treatment of Acoustic Fluid–Structure Interaction by Localized Lagrange Multipliers and Comparison to Alternative Interface-Coupling Methods
Comput. Methods Appl. Mech. Eng.
), pp.
, and
, “
Partitioned Procedures for the Transient Solution of Coupled Aeroelastic Problems–Part II: Energy Transfer Analysis and Three-Dimensional Applications
Comput. Methods Appl. Mech. Eng.
), pp.
N. K.
D. M.
C. R.
B. A.
R. C.
S. T.
, and
Sierra Structural Dynamics-Users Notes 4.50
Sandia National Laboratories
Albuquerque, NM
N. K.
D. M.
C. R.
B. A.
R. C.
S. T.
, and
Sierra SD Theory Manual 4.50
Sandia National Laboratories
Albuquerque, NM
, “
Strong and Weak Scaling of the Sierra/SD Eigenvector Problem to a Billion Degrees of Freedom
,” SAND 2019–1217,
Sandia National Lab. (SNL-NM)
Albuquerque, NM
S. T.
, and
T. F.
, “
A Gradient-Based Optimization Approach for the Detection of Partially Connected Surfaces Using Vibration Tests
Comput. Methods Appl. Mech. Eng.
, pp.
, and
, “
Navy Enhanced Sierra Mechanics (NESM): Toolbox for Predicting Navy Shock and Damage
Comput. Sci. Eng.
), pp.
R. J.
, and
H. C.
Naval Shock Analysis and Design
Shock and Vibration Information Analysis Center, Booz-Allen and Hamilton, Incorporated
McLean, VA
T. J.
The Finite Element Method: Linear Static and Dynamic Finite Element Analysis
Courier Corporation
North Chelmsford, MA
R. D.
D. S.
M. E.
, and
R. J.
Concepts and Applications of Finite Element Analysis
John Wiley & Sons
New York
C. A.
, and
, “
Partitioned Analysis of Coupled Mechanical Systems
Comput. Methods Appl. Mech. Eng.
), pp. | {"url":"https://micronanomanufacturing.asmedigitalcollection.asme.org/vibrationacoustics/article/142/1/011012/1065699/Partitioned-Coupling-for-Structural-Acoustics","timestamp":"2024-11-14T04:00:08Z","content_type":"text/html","content_length":"316902","record_id":"<urn:uuid:723b6c1e-579d-4dab-a695-dd4eb32627bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00058.warc.gz"} |
Quickest detection of a minimum of disorder times
A multi-source quickest detection problem is considered. Assume there are two independent Poisson processes X^1 and X^2 with disorder times θ[1] and θ[2], respectively: that is the intensities of X^1
and X^2 change at random unobservable times θ[1] and θ[2], respectively. θ[1] and θ[2] are independent of each other and are exponentially distributed. Define θ ^△ θ[1] Λ θ[2] = min{θ[1], θ[2]}. For
any stopping time τ that is measurable with respect to the filtration generated by the observations define a penalty function of the form R[τ] = ℙ(τ < θ) + c double-struck E sign [(τ - θ)^+], where c
> 0 and (τ - θ)^+ is the positive part of τ - θ. It is of interest to find a stopping time τ that minimizes the above performance index. Since both observations X^1 and X^2 reveal information about
the disorder time τ, even this simple problem is more involved than solving the disorder problems for X^1 and X^2 separately. This problem is formulated in terms of a two dimensional sufficient
statistic, and the corresponding optimal stopping problem is examined. Using a suitable single jump operator, this problem is solved explicitly.
Publication series
Name Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference, CDC-ECC '05
Volume 2005
Other 44th IEEE Conference on Decision and Control, and the European Control Conference, CDC-ECC '05
Country/Territory Spain
City Seville
Period 12/12/05 → 12/15/05
All Science Journal Classification (ASJC) codes
Dive into the research topics of 'Quickest detection of a minimum of disorder times'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/quickest-detection-of-a-minimum-of-disorder-times","timestamp":"2024-11-02T03:36:22Z","content_type":"text/html","content_length":"49534","record_id":"<urn:uuid:1dce6ede-6489-4898-a82c-18294173182b>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00335.warc.gz"} |
Third Grade – Mathematics
Number Sense
1.2 Identify and name numbers and parts of numbers using place value (ones through 999,999)
1.3/1.4 in 86,500, the 6 is in the thousands place.
1.6 in 86,500, the 6 has a value of 6 thousand
1.7 ADJ. Generate multiple equivalent representations of numbers (ones through 10,000).
• 1.6 1,243 = 12 hundreds + 43 ones
• 1.7 800 + 2 + 802
• 3,246 = 3,000 + 200 + 40 + 6
2.1 Compare and order positive rational numbers.
2.2 Read, write and use symbols to show relationships (=, ≠, >, <) between and among numbers (ones to thousands).
2.3 Round positive rational numbers.
2.4 Round numbers to any given place (tens to thousands).
4.1 Add positive rational numbers.
4.2 Apply Commutative, Associative and Zero Properties of Addition.
4.3 Estimate sums.
4.4 Use a number line.
Add whole numbers (through thousands) with regrouping.
5.1 Subtract positive rational numbers.
5.2 Recognize that subtraction is the inverse of addition.
5.3 Estimate differences.
5.4 Find a difference by adding up on a number line.
5.5 Subtract whole numbers (through thousands) with regrouping.
8.1 Demonstrate meaning of multiplication and division with whole numbers.
8.2 Use mental math to multiply with 2, 4, 5, 10, 1, and 0
8.3 Model multiplication (combining groups of equal size, repeated addition, array/area model)
8.4 Count by multiples of 5 to 200
8.5 Count by multiples of 10 to 400
8.6 Understand the Commutative Property of Multiplication
8.8 Understand the Identity Property of Multiplication and the Zero Property of Multiplication
9.2 Demonstrate the meaning of multiplication with whole numbers.
9.3 Use mental math to multiply with 3, 6, 7, 8, and 9
9.4 Model multiplication (combining groups of equal size, repeated addition, array/area model)
9.5 Apply the Distributive Property
9.6 Apply the Commutative Property of Multiplication
9.8 Understand and apply the Associative Property of Multiplication
9.9 Select and apply appropriate strategies to solve problems.
18.1 Generate multiple equivalent representations of numbers.
18.2 Read and write fractions.
18.3 Find fractional parts of a whole.
18.4 Find fractional parts of a set.
Basic Facts:
Mixed Addition and subtraction facts through 20.
Multiplication facts 0 – 10.
Division facts 0 – 10.
19.1 Compare and order positive rational numbers.
19.2 Read, write, and use symbols to show relationships (>, <, =) between and among fractions
19.4 Add positive rational numbers.
20.1 Read and write positive rational numbers.
20.2 Generate multiple equivalent representations of numbers o read and write decimals through the tenths place
20.3 Write fractions with denominators of 100 as decimals
21.1 Estimate products
21.2 Model multiplication (combining groups of equal size, repeated addition, array/area model)
21.3 Compose and decompose numbers to facilitate mental math strategies (see p. 591) [e.g. 12 × 4 = (10 × 4) + (2 × 4)]
21.4 Apply the Distributive Property
22.1 Estimate quotients
22.2 Use mental math, patterns, and basic facts to divide
22.3 Model multiplication (combining groups of equal size, repeated addition, array/area model)
22.4 Recognize that division is the inverse of multiplication (make connections among fact families) understand remainders
10.1/11.2 Demonstrate the meaning of division with whole numbers.
10.2/11.3 Use mental math to divide with 2, 5, 10, 1, and 0
10.3/11.4 Model division (separating groups of equal size, repeated subtraction, array/area model)
10.4/11.5 Recognize that division is the inverse of multiplication (make connections among fact families)
10.5 – 10.9/11.7 – 11.9 Understand rules for dividing with 1 and 0
10.8 Select and apply appropriate strategies to solve problems.
13.4 Use logical reasoning to solve problems.
14.4 Draw a model to solve problems.
15.5 Identify, describe and extend numeric and non-numeric patterns.
12.1 Read and write measurements involving time.
12.2 Identify time of day (e.g. noon, a.m.).
12.3 State multiple ways for the same time using 15-minute intervals (e.g. quarter past two, 2:15 p.m.).
12.4 Use a number line to find elapsed time.
13.1 Customary units of measure in the real world
13.3/13.6/13.7 Identify the appropriate unit to measure length, weight and capacity.
14.2 ADJ Compare and Order objects according to length using centimeters and meters.
15.1 Draw, identify, classify and label characteristics of two-dimensional figures (e.g. parallel, ray).
15.2 Identify two-dimensional figures and their attributes (e.g. triangle, side).
15.5 Identify, describe and extend numeric and non-numeric patterns. [
16.1 Identify congruent figures.
16.2 Identify lines of symmetry.
17.2 Measure two-dimensional shapes and solids.
17.4 Find the perimeter of a figure.
find the area of a figure (don’t use a formula)
Data Analysis & Probability
6.4 Represent and interpret data.
6.5 Represent data using tables and bar graphs.
http://nces.ed.gov/nceskids/createagraph/ http://illuminations.nctm.org/ActivityDetail.aspx?ID=204 http://www.amblesideprimary.com/ambleweb/mentalmaths/grapher.html
6.6 Use comparative language to describe data.
6.7 Generate questions and answers from data represented in bar graphs.
6.7/EXT Graph ordered pairs in the first quadrant of the coordinate plane. | {"url":"https://home.lps.org/lpsobjectives/third-grade-mathematics/","timestamp":"2024-11-04T12:19:37Z","content_type":"text/html","content_length":"57806","record_id":"<urn:uuid:577e3019-7d5f-4b5f-9eb1-020941b6e03f>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00856.warc.gz"} |
┃ Electroglottogram: First central difference... ┃
Calculates an approximation of the derivative of the Electroglottogram.
New absolute peak
defines the absolute peak of the approximate derivative. A value of 0.0 prevents scaling.
We take the first central difference, (dx(t)/dt)[i] = (x[i+1] - x[i-1])/(2Δt).
The real derivative can be found by using the Derivative... method.
© djmw 20230323 | {"url":"https://www.fon.hum.uva.nl/praat/manual/Electroglottogram__First_central_difference___.html","timestamp":"2024-11-02T23:58:56Z","content_type":"text/html","content_length":"1653","record_id":"<urn:uuid:0a1f6cba-251e-491a-9873-5fe714ff69f7>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00517.warc.gz"} |
Personalization by website transformation: Theory and... (PDF)
University of Dayton eCommons Computer Science Faculty Publications Department of Computer Science 5-2010 Personalization by website transformation: Teory and practice Saverio Perugini University of
Dayton, [email protected] Follow this and additional works at: htps://ecommons.udayton.edu/cps_fac_pub Part of the Databases and Information Systems Commons, Graphics and Human Computer Interfaces
Commons, Information Security Commons, OS and Networks Commons, Other Computer Sciences Commons, Sofware Engineering Commons, Systems Architecture Commons, and theTeory and Algorithms Commons
eCommons Citation Perugini, Saverio, "Personalization by website transformation: Teory and practice" (2010). Computer Science Faculty Publications. 19. htps://ecommons.udayton.edu/cps_fac_pub/19 Tis
Article is brought to you for free and open access by the Department of Computer Science at eCommons. It has been accepted for inclusion in Computer Science Faculty Publications by an authorized
administrator of eCommons. For more information, please contact [email protected], [email protected].
Click here to view linked References Personalization by Website Transformation: Theory and Practice Saverio Perugini Department of Computer Science University of Dayton 300 College Park, Dayton, OH
45469ā 2160, USA Abstract We present an analysis of a progressive series of out-of-turn transformations on a hierarchical website to personalize a userā s interaction with the site. We formalize
the transformation in graph-theoretic terms and describe a toolkit we built which enumerates all of the traversals enabled by every possible complete series of these transformations in any site and
computes a variety of metrics while simulating each traversal therein to qualify the relationship between a siteā s structure and the cumulative eļ¬ ect of support for the trans- formation in a
site. We employed this toolkit in two websites. The results indicate that the transformation enables users to experience a vast number of paths through a site not traversable through browsing and
demonstrate that it supports traversals with multiple steps, where the semblance of a hierarchy is preserved, as well as shortcuts directly to the desired information. Key words: hierarchical
hypermedia, information personalization, navigation, out-of-turn interaction, website transformation 1. Introduction Personalization refers to automatically customizing interactive informa- tion
systems based on user preferences. Personalization technologies are now widely utilized on the web. While most approaches to personalization are Email addresses: [email protected] (Saverio Perugini)
URL: http://academic.udayton.edu/SaverioPerugini (Saverio Perugini) Accepted for publication inI nformation Processing and Management December 18, 2009
either template-based (i.e., slot ļ¬ llers such as those found at My Yahoo!, Man- ber et al., 2000) or artiļ¬ cial intelligence-oriented, the central theme of our approach is to personalize a userā
s interaction with a website by progres- sively transforming its structure in response to every user interaction in a session with the site to help the user experience paths through the site not
traversable through browsing. For instance, consider a user shopping for a book by Aldous Huxley at a website which only presents books by genre. Such a user unsure in which genres Huxley published
is forced to browse through all genres to manually ļ¬ nd books of interest. While this user is unable to respond to the current solicitation for input (i.e., genre), she does have information (i.e.,
author) relevant to the information-seeking task even though that information is not required until the user is nested deeper into the catalog. Our approach to this problem is a technique called
out-of-turn interaction. The idea is to permit a user navigating a hierarchical website to postpone clicking on any of the hyperlinks presented on the current page (e.g., when unable or unwilling to
respond to the current prompt for input) and, instead, communicate the label of a hyperlink nested deeper in the hierarchy. When the user supplies such out-of-turn input we transform the hierarchy to
re- ļ¬ ect the userā s informational need. In the example above, when unsure in which genres Huxley published, the user may communicate ā Aldous Huxleyā to the site out-of-turn. In response, we
would transform the hierarchical or- ganization of the catalog so that all hyperlinks leading to books not written by Huxley are purged and re-present the hierarchy to the user. As a result of the
transformation, the user would see a page of hyperlinks representing genres. However, each hyperlink remaining would eventually lead to a book by Huxley. Thus, out-of-turn interaction permits the
user to circumvent any intended ļ¬ ows of navigation hardwired into the hyperlink structure by the designer and, in this manner, helps reconcile any mismatch between the siteā s one-size-ļ¬ ts-all
organization and the userā s model of information seeking. We built a transformation engine as a web service based on this idea which prunes a hierarchical site when given out-of-turn input. We also
built two interfaces to communicate the input to the engine: a voice interface, imple- mented with VoiceXML and X+V, which permits the user to supply out-of- turn inputs through speech and enables
multimodal interaction when used in conjunction with hyperlinks, and Extempore, implemented with XUL, which is a cross-platform toolbar plugin embedded into the Mozilla Firefox web browser. The
transformation engine, interfaces, and a coordinating interac- 2
tion manager constitute a customizable software framework for creating web personalization systems with support for out-of-turn interaction (Narayan et al., 2004). We have applied this technique to
various websites, including the Open Directory Project, a large web directory. We have studied out-of-turn interaction from software implementation (Narayan et al., 2004) and human-computer
interaction (HCI) (Perugini et al., 2007) perspectives. The goal of this paper is to study the transforma- tion which supports this technique from a graph transformation perspective and analyze the
traversals of the site it enables. This is an intermediate approach between the implementation and HCI complementary approaches. Speciļ¬ cally, we i) formalize the transformation in graph-theoretic
terms, ii) describe a toolkit we built which computes and simulates all of the traversals enabled by all possible complete series of out-of-turn transformations in any site to qualify the
relationship between how terms are distributed through the siteā s structure and the eļ¬ ect of support for the transformation in a site, and iii) report the results of employing this toolkit in two
websites. The central mantra of this paper is that a series of website transformations on a site supports a set of traversals through the site we called an interaction paradigm: Transformation(Ā· Ā·
Ā· Transformation(Website, Hyperlink label), Ā· Ā· Ā·, Hyperlink label) ā Interaction paradigm. Only a small subset of all possible traversals made possible by a series of out-of-turn
transformations on a site can be experienced through browsing. 2. Related Research Traditionally, there are two main approaches to web personalization: template- and AI-oriented approaches. The
template-based approach (Pe- rugini and Ramakrishnan, 2003) (also called checkbox personalization) is predominately employed in the my sites (e.g., My Yahoo!, Manber et al., 2000, or My eBay). Most
all e-commerce sites now provide such a facil- ity. The onus is on the user to explicitly specify her preferences and, as a result, the content, structure, or presentation of the website is tailored
ac- cordingly. Such an approach involves explicit user modeling (Konstan et al., 1997). While template-based approaches to personalization do not suļ¬ er from privacy concerns, the level of
personalization delivered is bounded by 3
the investment of the user in communicating his interests, and often higher- order connections or serendipitous recommendations are not possible. On the other hand, AI-based approaches to web
personalization involve covertly monitoring user behavior and activity, often through web usage mining (i.e., web log analysis) (Mobasher et al., 2000), to implicitly glean user preference and,
ultimately, build a user model which is used as a basis from which to personalize the site. One popular example of such an approach is adaptive websites (Perkowitz and Etzioni, 2000). Unlike
template-based personaliza- tion, the success of AI-oriented approaches is not predicated on the coop- eration of the user. However, these methods are perceived as invasive and raise privacy concerns
(Riedl, 2001). The primary enabling technology for these approaches is web mining (Eirinaki and Vazirgiannis, 2003; Kosala and Blockeel, 2000), and speciļ¬ cally web usage mining (Srivastava et al.,
2000). This user-model through access monitoring approach is seen in the adaptive hypermedia (Brusilovsky, 2001) and interactive information retrieval (White et al., 2006) communities. The
out-of-turn website transformation approach to personalized inter- action does not ļ¬ t into either of these categories. Rather, out-of-turn inter- action can be broadly characterized as a faceted
browsing and search tech- nique (Hearst et al., 2002), and is particularly related to the zoom operation in dynamic taxonomies (Sacco, 2000). Faceted browsing and search (Sacco and Tzitzkas, 2009)
seeks to marry navigational (e.g., Yahoo!) and direct (free form) search (e.g., Google), and has received an increased level of at- tention from the interactive information retrieval community
recently as an approach between template- and AI-based techniques. Faceted browsing and search permits a user to explore a multi-dimensional dataset in a manner which matches the userā s mental
model of information- seeking, thereby personalizing the userā s interaction with site (e.g., ā You pre- fer to browse recipes using a by main ingredient, dish type, preparation method motif while
I prefer to browse by dish type, preparation method, and main ingredient). The multi-faceted index of recipes at http://epicurious. com is perhaps the most illustrative example of a faceted classiļ¬
cation on the web (Hearst, 2000). 4
1 1 1 news shopping news shopping shopping 2 3 news 3 2 3 coupons@ electronics international advertising coupons@ electronics apparel international advertising coupons@ electronics 5 6 4 5 6 7 4 5 6
international holidays apple computers@ europe china international@holidays apple computers@ cameras winter 4 international@ 10 11 china international@holidays apple computers@ china 8 9 10 11 12 13
9 10 11 9 Figure 1: Website transformations simpliļ¬ ed for purposes of presentation: illustration of forward-propagation (FP) followed by back-propagation (BP) on the DAG on left. (left) A sample
DAG model of a hierarchical website. Vertices 9, 10, and 11 (i.e., those dotted) represent the result of forward-propagation wrt the term ā advertisingā : FP(D, advertising). (center) Result of
back-propagation wrt leaf vertices 9, 10, and 11 on left: BP(D, FP(D, advertising)). (right) Result of out-of-turn interaction with the DAG D shown on left wrt the term ā advertisingā : OOT1(D,
advertising). Alternatively, we can ā ² think of this DAG as the result of consolidating edges with the DAG D in center (i.e., ā ² CE(D , advertising)). 1 3. Theory: Out-of-turn Transformation
Formalism Fundamentally, the out-of-turn transformation is a closed transformation over a graph modeling the hyperlink structure of a website. In this sec- tion we discuss how websites can be
represented as graphs, how interacting out-of-turn transforms a graph, and the implications a series of those trans- formations have on web interaction. 3.1. Websites as Graphs It is instructive to
think of websites as graphs. For instance, Fig. 1 (left) illustrates a directed acyclic graph (DAG) model of a hierarchical website with characteristics similar to web directories such as the Open
Directory Project (ODP) at http://dmoz.org. Edges help model paths through a website a user follows to access leaf vertices, which model leaf webpages 1 Some terms and deļ¬ nitions in this section
have been reported by the author in (Perug- ini and Ramakrishnan, 2010) and appear here for purposes of clarity and comprehension. 5
containing content. We refer to a leaf content page as terminal information and the terms therein as units of terminal information. Edge-labels, which we refer to as structural information, model
hyperlink labels or, in other words, choices made by a navigator en route to a leaf. An edge-label, a unit of structural information, is therefore a term of information-seeking (simply a term
hereafter) which a user may bring to bear upon information seeking. Structural information thus helps make distinctions among terminal information. A set of terms is complete when it determines a
particular terminal web- page; otherwise it is partial. An interaction set of a DAG D is the complete set of the terms along a path from the root of D to a leaf vertex of D. An interaction set
constitutes complete information; any proper subset of it is partial information. An interaction set of D classiļ¬ es a leaf vertex of D, but does not capture any order of the terms therein. On the
other hand, a sequence is a total order of an interaction set wrt the parenthood relation of the site. In other words, a sequence represents a path from the root to a leaf in a site. The sequence ā
ŗshopping, apparel, winterā » is in the DAG shown in Fig. 1 (left). A term is in-turn information if it appears as a hyperlink label on the userā s current webpage and is, thus, currently solicited
by the system. On the other hand, a term is out-of-turn information if it represents a hyperlink label nested somewhere deeper in the site and is, thus, currently unsolicited from the system, but
relevant to information seeking. In any DAG, in-turn and out-of-turn information is mutually-exclusive. 3.2. Transformations We now present some website transformations. Term extraction is a total
function TE : D ā P(T ) which given D returns the set of all unique terms in D, where D represents the universal set of DAGs, T represents the universal set of terms, and P(Ā·) denotes the power
set function. A term-co-occurrence set of D is a set T ā TE(D). Let the level of an edge-label in D be the depth of the source vertex of the edge it labels. If a given edge-label occurs multiple
times in D, a level is associated with every occurrence. A term-level set of D then is a term-co-occurrence set comprising all unique terms in D with the same level. Term-level extraction is a total
function TLE : (D Ć N) ā P(TE(D)) which given D and a level l (⩾ 1) ā N = {1, 2, . . . ,M} returns the set of all unique terms in D with level l (i.e., a term-level set), where M represents
the maximum depth of D. If D represents the DAG in Fig. 1 (left), TLE(D, 2) = {international, advertising, coupons, electronics, apparel}. 6
In any DAG, TLE(D, 1) returns the set of terms available to supply through browsing or, in other words, in-turn information. Browse is a partial function B : (D Ć T ) ā Dā „ which given D and a
term t ā TLE(D, 1) returns the sub-DAG rooted at the target vertex of the edge in D labeled with t whose source vertex is the root of D. If D is the DAG in Fig. 1 (left), B(D, shopping) returns the
sub-DAG rooted at vertex 3, which represents the result of a user clicking on the hyperlink labeled ā shoppingā . The symbol ā „ denotes the partial nature of the function (i.e., the value of B is
undeļ¬ ned for some inputs). If t ā / TLE(D, 1), B returns ā „. Out-of-turn transformation is a partial function OOT1 : (D Ć T ) ā Dā „ ā ² which given D and a term t ā TE(D) returns D : Fig. 1
(right) ļø· ļøøļøø ļø· Fig. 1 (left) ļø· ļøøļøø ļø· OOT1(D, t) = CE(BP(D, FP(D, t) ), t) , (1) ļøø ļø·ļø· ļøø Fig. 1 (center) where ā ¢ FP (forward propagate): (D Ć T) ā P(L) is a total function
which given D and a term t ā T = TE(D) returns a set of leaf vertices L of D, where L contains each leaf vertex reachable from all paths of D containing an edge labeled t, and L denotes the
universal set of leaf webpages, ā ¢ BP (back propagate): (D Ć P(L)) ā Dā „ is a partial function which ā ² ā ² given D and L returns a DAG D , where D contains only paths from the root of D to the
leaves of D which classify the leaf vertices in L, and ā ¢ CE (consolidate edges): (D Ć T ) ā Dā „ is a partial function which ā ² given D and a term t ā TE(D) returns D , where any edge e in D
ā ² labeled with t is removed in D , the source vs of e is replaced with ā ² ā ² its target vt in D , and vt becomes the new target of any edge e with ā ² target vs in D . Fig. 1 illustrates the
out-of-turn transformation (i.e., forward-propagation (left) followed by back-propagation (center) followed by consolidation (right)). 7
Intuitively, this transformation retains all sequences of D which contain the out-of-turn input (FP followed by BP), and then removes the out-of-turn input from those remaining sequences (CE). The
result of FP is the set of all leaf vertices classiļ¬ ed by the out-of-turn input. We back-propagate from this set of leaves up to the root of the DAG with BP . Note that when no term in the DAG
represented by the ļ¬ rst argument to OOT1 resides at more than one level, and the second argument to OOT1 is in-turn information, the transformation is functionally equivalent to B. Thus, OOT1
subsumes B. To marry the out-of-turn transformation with standard techniques from information retrieval we can replace FP with any total function SL (select leaves): (D Ć T ) ā L which given D and
a term t ā TE(D) returns a set of leaf vertices of D (FP is an instance of SL). This generalization leads to the possibility of bringing units of terminal information (i.e., terms modeled in the
leaf pages and not explicitly used in the classiļ¬ cation), in replacement of or in addition to structural information, to bear upon the transformation and resulting interaction. For instance, we
might perform a query (e.g., ā laptopā ) in a vector-space model over the set of leaf webpages (i.e., documents) using cosine similarity to arrive at a target set of leaves from which to
back-propagate. Notice that D also can be represented as a |TE(D)|Ć |CR(D)| term-document matrix, where rows correspond to terms (i.e., structural information, or edge-labels) and the columns
correspond to webpages (i.e., terminal information, or leaf vertices). Collect results is a total function CR : D ā P(L) which given D returns a set of all the leaf vertices in D. For instance, CR
(D) returns the {9, 10, 11} set of vertices, where D is the DAG in Fig. 1 (center). 3.3. Commutativity Lemma: The out-of-turn transformation is commutative, assuming both sides are deļ¬ ned, i.e.,
OOT1(OOT1(D, x), y) = OOT1(OOT1(D, y), x), where x and y represent terms. A sketch of the proof of this lemma is given in (Perugini, 2004, Ch.4) Armed with this lemma, we can consider the possibility
of communicating multiple terms per utterance, where an utterance is a set of terms with the same arrival time ā the time at which the user communicates a term or terms to the system. To
accommodate multiple terms per utterance, we re-deļ¬ ne the out-of-turn transformation: 8
OOT(D, u) = OOT1(Ā· Ā· Ā·OOT1(OOT1(D, t1), t2) Ā· Ā· Ā· , tn), where u denotes an utterance consisting of only the {t1, t2, . . . , tn} set of terms and each OOT1 on the rhs refers to (1). If OOT(D,
u) returns a DAG containing only one vertex v (and, therefore, no edges), then the utterance u is complete information (and v is terminal information). Otherwise, u is partial information. 3.4. Web
Interaction We now present concepts which relate to a userā s interaction with a web- site to help describe the cumulative eļ¬ ect of the out-of-turn transformation on a site. Several partial orders
can be deļ¬ ned over an interaction set wrt arrival time. When a user clicks on a hyperlink, she implicitly communicates the hyperlinkā s label to the underlying system. For instance, when a user
clicks on a hyperlink labeled ā newsā followed by that labeled ā internationalā , she communicates the ā ŗnews, internationalā » terms to the system, in that order. Similarly, when the user
supplies out-of-turn input, he is communicat- ing terms to the system. These partial orders can be summarized as partially ordered sets or posets. Each linear extension of such a poset is a total
order called an interaction episode. A browsing interaction episode of D is a total order on any interaction set of D wrt the parenthood relation of D. Notice that a browsing episode is the same as a
sequence as deļ¬ ned above. An out-of-turn interaction episode is a total order over the set of all set parti- tions of an interaction set wrt the arrival time relation implied by out-of-turn
interaction. The arrival time relation implied by out-of-turn interaction is a partial order containing only the reļ¬ exive tuples of all set partitions from any interaction set. In other words,
out-of-turn interaction requires none of the term set partitions from each interaction set are required to be ordered. The linear extensions of the posets associated with these partial orders are
out-of-turn interaction episodes. An interaction paradigm P for D is the union of all linear extensions of posets deļ¬ ned over all interaction sets of D. In other words, an interaction paradigm is a
complete set of realizable interaction episodes from D wrt a transformation (e.g., Browse or OOT). The browsing paradigm PB of D in Fig. 1 (left) is: 9 | {"url":"https://pdfroom.com/books/personalization-by-website-transformation-theory-and-practice/3jN2RBDQ2vW","timestamp":"2024-11-05T06:08:06Z","content_type":"text/html","content_length":"122275","record_id":"<urn:uuid:2b10615f-c912-40f8-8310-8cc53c08ca4b>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00224.warc.gz"} |
Transactions Online
Yuji HASHIMOTO, Koji NUIDA, Kazumasa SHINAGAWA, Masaki INAMURA, Goichiro HANAOKA, "Toward Finite-Runtime Card-Based Protocol for Generating a Hidden Random Permutation without Fixed Points" in IEICE
TRANSACTIONS on Fundamentals, vol. E101-A, no. 9, pp. 1503-1511, September 2018, doi: 10.1587/transfun.E101.A.1503.
Abstract: In the research area of card-based secure computation, one of the long-standing open problems is a problem proposed by Crépeau and Kilian at CRYPTO 1993. This is to develop an efficient
protocol using a deck of physical cards that generates uniformly at random a permutation with no fixed points (called a derangement), where the resulting permutation must be secret against the
parties in the protocol. All the existing protocols for the problem have a common issue of lacking a guarantee to halt within a finite number of steps. In this paper, we investigate feasibility and
infeasibility for the problem where both a uniformly random output and a finite runtime is required. First, we propose a way of reducing the original problem, which is to sample a uniform
distribution over an inefficiently large set of the derangements, to another problem of sampling a non-uniform distribution but with a significantly smaller underlying set. This result will be a base
of a new approach to the problem. On the other hand, we also give (assuming the abc conjecture), under a certain formal model, an asymptotic lower bound of the number of cards for protocols solving
the problem using uniform shuffles only. This result would give a supporting evidence for the necessity of dealing with non-uniform distributions such as in the aforementioned first part of our
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E101.A.1503/_p
author={Yuji HASHIMOTO, Koji NUIDA, Kazumasa SHINAGAWA, Masaki INAMURA, Goichiro HANAOKA, },
journal={IEICE TRANSACTIONS on Fundamentals},
title={Toward Finite-Runtime Card-Based Protocol for Generating a Hidden Random Permutation without Fixed Points},
abstract={In the research area of card-based secure computation, one of the long-standing open problems is a problem proposed by Crépeau and Kilian at CRYPTO 1993. This is to develop an efficient
protocol using a deck of physical cards that generates uniformly at random a permutation with no fixed points (called a derangement), where the resulting permutation must be secret against the
parties in the protocol. All the existing protocols for the problem have a common issue of lacking a guarantee to halt within a finite number of steps. In this paper, we investigate feasibility and
infeasibility for the problem where both a uniformly random output and a finite runtime is required. First, we propose a way of reducing the original problem, which is to sample a uniform
distribution over an inefficiently large set of the derangements, to another problem of sampling a non-uniform distribution but with a significantly smaller underlying set. This result will be a base
of a new approach to the problem. On the other hand, we also give (assuming the abc conjecture), under a certain formal model, an asymptotic lower bound of the number of cards for protocols solving
the problem using uniform shuffles only. This result would give a supporting evidence for the necessity of dealing with non-uniform distributions such as in the aforementioned first part of our
TY - JOUR
TI - Toward Finite-Runtime Card-Based Protocol for Generating a Hidden Random Permutation without Fixed Points
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1503
EP - 1511
AU - Yuji HASHIMOTO
AU - Koji NUIDA
AU - Kazumasa SHINAGAWA
AU - Masaki INAMURA
AU - Goichiro HANAOKA
PY - 2018
DO - 10.1587/transfun.E101.A.1503
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E101-A
IS - 9
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - September 2018
AB - In the research area of card-based secure computation, one of the long-standing open problems is a problem proposed by Crépeau and Kilian at CRYPTO 1993. This is to develop an efficient protocol
using a deck of physical cards that generates uniformly at random a permutation with no fixed points (called a derangement), where the resulting permutation must be secret against the parties in the
protocol. All the existing protocols for the problem have a common issue of lacking a guarantee to halt within a finite number of steps. In this paper, we investigate feasibility and infeasibility
for the problem where both a uniformly random output and a finite runtime is required. First, we propose a way of reducing the original problem, which is to sample a uniform distribution over an
inefficiently large set of the derangements, to another problem of sampling a non-uniform distribution but with a significantly smaller underlying set. This result will be a base of a new approach to
the problem. On the other hand, we also give (assuming the abc conjecture), under a certain formal model, an asymptotic lower bound of the number of cards for protocols solving the problem using
uniform shuffles only. This result would give a supporting evidence for the necessity of dealing with non-uniform distributions such as in the aforementioned first part of our result.
ER - | {"url":"https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E101.A.1503/_p","timestamp":"2024-11-06T13:42:05Z","content_type":"text/html","content_length":"65758","record_id":"<urn:uuid:12b0602b-a49f-4ee4-aff1-0c15dce87c97>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00613.warc.gz"} |
Hydrostatic Balance
Next: Problem 1 Up: Local Star Formation Process Previous: Local Star Formation Process   Contents
Consider a hydrostatic balance of isothermal cloud. By the gas density,
and the gravity is calculated from a density distribution as
for a spherical symmetric cloud, where
For the spherical symmetric case, the equation becomes the Lane-Emden equation with the polytropic index of C.1). This has no analytic solutions. However, the numerical integration gives us a
solution shown in Figure 4.1 (left). Only in a limiting case with the infinite central density, the solution is expressed as
Increasing the central density, the solution reaches the above Singular Isothermal Sphere (SIS) solution.
On the other hand, a cylindrical cloud has an analytic solution (Ostriker 1964) as
Far from the cloud symmetric axis, the distribution of equation (4.5) gives
while the spherical symmetric cloud has
Next: Problem 1 Up: Local Star Formation Process Previous: Local Star Formation Process   Contents Kohji Tomisaka 2007-07-08 | {"url":"http://th.nao.ac.jp/MEMBER/tomisaka/Lecture_Notes/StarFormation/3/node65.html","timestamp":"2024-11-11T01:55:23Z","content_type":"text/html","content_length":"9480","record_id":"<urn:uuid:14d2a2dc-8a9e-492f-81e3-d6c45a88d7e2>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00386.warc.gz"} |
F# - some Project Euler patterns
To help me get used to F# and relearn the ways of functional programming I've been working through Project Euler in a Jupyter IfSharp notebook and keeping my solutions on GitHub at https://github.com
After around 50 problems so far I've spotted a handful of patterns which had either a number of possible solutions or were a pain to type out (or copy/paste) each time I used them. I explored them
each in a little more detail to find the most optimal implementation of each pattern. The reason I wrote this up is that even though the problems are pretty simple, some of the results were pretty
For each of the patterns I've got a simple definition, some solutions and a set of benchmark results in a table. In each results table I've highlighted the most optimal solution that fully expands
the result set (so the lazily evaluated solutions that "complete" in a millisecond don't count) so that we can have a realistic idea of what the best solution is.
Combinations of two lists
The first item is the simplest of the four problems - if we have two lists foo and bar, produce a list of pairs featuring all combinations of elements of foo with bar. So given something like ...
let foo = [ 'a'; 'b'; 'c' ]
let bar = [ 1; 2; 3 ]
We expect to see a list like this ...
('a', 1); ('a', 2); ('a', 3)
('b', 1); ('b', 2); ('b', 3)
('c', 1); ('c', 2); ('c', 3)
I came up with only three solutions - I don't feel like I have an ideal solution to this, just the least hacky variant of the first solution that popped up in my mind.
pair_list: The first solution, call List.map for every member of one list, then in the function argument call List.map again for every member of the second - flatten the resulting list using
let pair_list l1 l2 =
List.map (fun x -> l2 |> List.map (fun y -> (x,y))) l1
|> List.concat
pair_seq: as above, but assume we can have sequences as input, so we can produce the (fairly large) output array lazily:
let pair_seq s1 s2 =
Seq.map (fun x -> s2 |> Seq.map (fun y -> (x,y))) s1
|> Seq.concat
pair_seq_expanded: as above, expand fully to a List for an idea of how long it takes to operate on the whole output:
let pair_seq_expanded s1 s2 =
Seq.map (fun x -> s2 |> Seq.map (fun y -> (x,y))) s1
|> Seq.concat
|> Seq.toList
pair_seq_for: it's pretty clear that using a sequence is preferable here, especially if you need to work with 1000 x 1000 collections, so the final variant is a slight rewrite of the second, using a
for loop and yield-ing the resulting tuple.
let pair_seq_for s1 s2 =
[ for x in s1 do
for y in s2 do
yield (x,y) ]
To compare the performance of these three I've defined 100 and 1000 element lists/sequences and measured how long it takes to iterate through each sequence pair performing a simple operation
(accumulating the difference between the pairs).
time to create n-element collection of pairs in milliseconds
method n=10000 n=100000 n=1000000 n=10000000
pair_list 1.5096 14.7937 226.0501 2927.2477
pair_seq 0.8690 0.8690 0.8846 0.9028
pair_seq_expanded 3.3952 21.5028 184.3353 2264.2805
pair_seq_for 3.2361 12.5183 180.1352 1997.3700
So thankfully the cleanest looking pair_seq_for solution is actually the fastest when we get to larger data sets. This isn't quite where the story ends though. There's a nice discussion here on Stack
Overflow about a similar but slightly different problem - finding n element combinations of a single list - so for
let someList = [ 1; 2; 3 ]
... we wanted a function combs (n:'a) (lst:'a list) which would produce something like the below for combs 2 someList
( 1, 2 ); ( 1, 3 )
( 2, 1 ); ( 2, 3 )
( 3, 1 ); ( 3, 2 )
This is different from the problem I posed, but I've got a GitHub gist here where I've turned them all loose on the same set of data, and performed some simple measurements.
Pairing elements of Collections with their indexes
In a couple of places I found myself wondering if F# collections had an equivalent of python's enumerate - which is a function which wraps a list and returns an index/element pair for each loop
letters = [ "a", "b", "c", "d" ]
for i, c in enumerate(letters):
print "%d: %s" % (i, c)
# output:
# 0: a
# 1: b
# 2: c
# 3: d
It took a little while before I spotted Array.mapi so I ended up working through and measuring a handful of different ways first - some are obviously pretty poor (particularly those using recursion)
but I left them in nonetheless:
enumerate_by_for_seq - using a Seq to generate the index and return a pair
let enumerate_by_for_seq (a:string []) =
seq { for i in 0 .. (a.Length-1) -> (i, a.[i]) }
enumerate_by_for_seq_expanded - as previous, but returning a List to fully expand the sequence
let enumerate_by_for_seq_expanded (a:string []) =
seq { for i in 0 .. (a.Length-1) -> (i, a.[i]) }
|> Seq.toList
enumerate_by_for_list - iterating over each index using a for loop, returning a (int * string) list
let enumerate_by_for_list (a:string []) =
[ for i in 0 .. (a.Length-1) -> (i, a.[i]) ]
enumerate_by_for_array - as above but returning (int * string) [], note that this seems ridiculously similar, but differs surprisingly in performance (I discovered this by accident and included it in
this experiment because of the difference!)
let enumerate_by_for_array (a:string []) =
[| for i in 0 .. (a.Length-1) -> (i, a.[i]) |]
enumerate_by_map - generating a list of indexes and then using |> and List.map to create the index/element pair (i.e. the same as the first approach, but using List)
let enumerate_by_map (a:string []) =
|> List.map (fun i -> (i, a.[i]))
enumerate_by_recursion_array - bonkers approach, abusing Array.append and recursing. Just don't do this...
let rec enumerate_by_recursion_array' i (a:string[]) =
match i with
| 0 -> [||]
| _ -> Array.append [| (i, a.[i]) |] (enumerate_by_recursion_array' (i-1) (a.[1..]))
let enumerate_by_recursion_array (a:string[]) =
enumerate_by_recursion_array' (a.Length-1) a
enumerate_by_recursion_list - List variant of the above. Don't do this either
let rec enumerate_by_recursion_list' i (a:string[]) =
match i with
| 0 -> []
| _ -> [ (i, a.[i]) ] @ (enumerate_by_recursion_list' (i-1) (a.[1..]))
let enumerate_by_recursion_list (a:string[]) =
enumerate_by_recursion_list' (a.Length-1) a
enumerate_by_for_zip - Using Array.zip - shortest solution, the best until I spotted Array.mapi
let enumerate_by_zip (a:string[]) =
Array.zip a [|0..(a.Length-1)|]
enumerate_by_for_mapi - Probably the most "correct" solution, using Array.mapi
let enumerate_by_mapi (a:string[]) =
Array.mapi (fun i x -> (i,x)) a
enumerate_by_for_parallel_mapi - As above but naively switching in Array.Parallel.mapi without any other changes
let enumerate_by_parallel_mapi (a:string[]) =
Array.Parallel.mapi (fun i x -> (i,x)) a
time taken to enumerate n element collection (milliseconds)
method n=10000 n=100000 n=1000000 n=10000000
enumerate_by_for_seq 0.3385 0.3496 0.3471 0.3540
enumerate_by_for_seq_expanded 2.6177 18.8341 205.4403 3610.3913
enumerate_by_for_list 1.3487 22.1703 248.5039 4200.8530
enumerate_by_for_array 2.1619 12.8186 192.3148 3178.5893
enumerate_by_map 2.0391 26.2468 287.2852 4179.3407
enumerate_by_recursion_array 7760.3141 n/a* n/a* n/a*
enumerate_by_recursion_list 5368.5472 n/a* n/a* n/a*
enumerate_by_zip 7.1136 9.4388 170.0941 1917.8617
enumerate_by_mapi 2.6911 13.0303 116.5348 1268.8625
enumerate_by_parallel_mapi 8.1293 17.7548 102.2350 1379.0431
* = this took way too long so I killed it
Obviously Array.mapi was the fastest overall. However it wasn't as much faster than Array.zip as I would have imagined, and I suspect that I'm doing something wrong with Array.Parallel.mapi. Also
interesting is that while the super-fast performance of the enumerate_by_for_seq method dissipates somewhat when fully evaluated, it is still faster than the equivalent enumerate_by_for_list version.
"Pandigital" numbers feature relatively frequently in Project Euler. An n-digit pandigital number will contain all digits from 0..n or 1..(n-1) once in some order. For example 41523 is a 1-5
pandigital, and 43210 is 0-4 pandigital. These numbers are mentioned in 32, 38, 41, 104, 118, 170 (and perhaps more) so a relatively efficient way to recognise them is pretty useful to have at hand.
Again there's a few ways we can do this - in each case I can think of we start with taking the string representation of the input number and splitting it up using ToCharArray() and with this we can
do a number of different things.
pandigital_strcmp - sort array, map each element to string, sort, create string + compare to "123..n"
let pandigital_strcmp (n:int) =
let sorted = new string (string(n).ToCharArray() |> Array.sort)
sorted = pandigitalString
pandigital_intcmp - sort array, map each element to string, sort, create string, cast to int + compare to 123..n
let pandigital_intcmp (n:int) =
let sorted = new string (string(n).ToCharArray() |> Array.sort)
int(sorted) = pandigitalInt
pandigital_arrcmp - sort array, string, cast to int + compare to existing array [| '1'; '2'; .. n |]
let pandigital_arrcmp (n:int) =
pandigitalArray = (string(n).ToCharArray() |> Array.sort)
pandigital_set_difference - convert to Set and compute difference from precalc'd set, pandigital if empty
let pandigital_set_difference (n:int) =
|> Set.ofArray
|> Set.difference pandigitalSet
|> Set.isEmpty
pandigital_array_contains - for each element in precalculated pandigital array, check it's present in array and use List.fold to ensure all true
let pandigital_array_contains (n:int) =
let a = string(n).ToCharArray()
|> Array.map (fun c -> Array.contains c a)
|> Array.fold (fun e acc -> e && acc) true
So I tested these against using the code to measure how quickly each method was in applying
// where panDigitalInt is the upper limit ("n" in the table)
let testNumbers = [ 0 .. pandigitalInt ]
let bench name f =
let sw = Stopwatch.StartNew()
let res = testNumbers |> List.filter f |> List.length
printfn "%s: %f ms" name sw.Elapsed.TotalMilliseconds
time taken to filter pandigitals in [0..n] in milliseconds
method n=1234 n=12345 n=123456 n=1234567
pandigital_strcmp 2.1081 11.2639 113.2086 1356.1985
pandigital_intcmp 0.9716 9.7646 89.3238 947.0513
pandigital_arrcmp 2.4441 6.1932 59.7014 618.0665
pandigital_set_difference 2.5024 17.2115 199.2863 1986.9592
pandigital_array_contains 0.9790 4.8161 50.447 565.6698
So it seems Array.contains wins overall. The Set.difference approach was pretty dismal which was disappointing - it came to me when I was out walking my dog and I rushed back to write it and
benchmark it. I think Set.ofArray is perhaps a little slow, but I haven't done any investigation into it.
It's worth noting that you probably shouldn't do something like [0..bigNumber] |> List.filter pandigital_array_contains to start with - maybe it's worth approaching the problem from a different angle
in some cases.
Sorting a 3-element tuple
OK this only came up once and was part of a pretty poor solution I had for problem 39 (
) - I generated thousands of tuple featuring 3 of integers and then tested whether they could represent the sides of right-angled triangles using Pythagoras' theorem. However since they were in no
particular order I thought I needed identify the hypotenuse. I wrote this out long-form since there's only a handful of cases and solved the problem relatively quickly.
Regardless of whether this was a suitable solution for the problem, I was curious as to what approach works best for sorting these tiny collections of 3 elements.
I had only three approaches:
sort_nested_if - use nested if statements to reduce the number of comparisons needed while introducing branches
let sort_nested_if (a,b,c) =
if a >= b then
if b >= c then (a,b,c) else (a,c,b)
elif b >= a then
if a >= c then (b,a,c) else (b,c,a)
if a >= b then (c,a,b) else (c,b,a)
sort_flat_if - have a separate if for each result at the top level
let sort_flat_if (a,b,c) =
if a >= b && b >= c then (a,b,c)
elif a >= b && b >= c then (a,c,b)
elif b >= a && a >= c then (b,a,c)
elif b >= a && c >= a then (b,c,a)
elif (*c >= b &&*) a >= b then (c,a,b)
else (*c >= b && b >= a then*) (c,b,a)
sort_array - create an array, use Array.sort and map the results back into a tuple when returning the result
let sort_array (a,b,c) =
let sorted = Array.sort [| a;b;c |]
(sorted.[0], sorted.[1], sorted.[2])
To test these I generated large arrays of size 4000, 40000 and 400000 3-element tuples and timed how long each method took to sort them.
time taken to sort n 3-element tuples, in milliseconds
method n=4000 40000 400000
sort_nested_if 1.2626 13.9014 193.3619
sort_flat_if 1.7864 23.4633 258.2538
sort_array 1.2424 11.9907 132.4312
OK now it's probably obvious why I didn't just bin this little experiment - sort_array is surprisingly the clear winner. I would have guessed that the overhead of building an array and calling
Array.sort function on a list way smaller than you'd normally need to sort would be insane. But apparently it's not!
It's surprising how many different ways you can write some relatively simple algorithms. Some of them are obviously pretty awful, like the recursive enumerate function (though I'm sure I can
rearrange it to take advantage of tail call elimination) - and some were surprisingly performant, like the sort_array function in the final example. I've noticed that some other Project Euler people
maintain a sort of library of functions they can re-use. Eventually I'd like to do something like this, but until it becomes unmanageable I'll just keep a Gist on GitHub: | {"url":"https://blog.mclemon.org/f-number-some-project-euler-patterns","timestamp":"2024-11-03T03:42:12Z","content_type":"text/html","content_length":"31411","record_id":"<urn:uuid:018ab831-dc3c-42ec-a522-99f505dea6b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00735.warc.gz"} |
An STL-compliant mathematical vector class
The header file attached to this article implements the EVector class. In order to remove confusion from the start, this class is not a data structure, or an STL container. It does, however, resemble
the std::vector class that's provided as part of STL even though their purposes are quite different.
std::vector is basically a container for objects of type T. This class, however, is an implementation of a vector in the mathematical sense. Therefore, it contains operators like +, -, *, etc.. as
well as other functions that are used to manipulate vectors.
This class was written as part of a library I've been writing. I wanted to create some mathematical infrastructure that will be used by other parts of the library. Up till now, I've used this class
to write a matrix class and to specialize the case of 2D and 3D vectors for graphics and computational geometry algorithms. Even though I haven't provided those classes (I might in the future,
though...) I think it's very easy to use/extend/specialize this class for such purposes by any user.
While writing this class, I followed these guidelines:
• This class is really low-level and will be used almost everywhere. This is why it has to be well written and conform to widely-used notation.
• Future changes will be addition of functionality and not changes in already existing behavior.
• Portability.
• This class should conform to standard behavior in order to be used by other 3^rd-party libraries or replace similar classes.
• Other people might use this class and therefore it must be well documented. This will also facilitate changes in the future.
To adhere to the above, the code is documented using Doxygen. In addition, it is STL-compliant. This means that it has an allocator object and appropriate iterators that can be used to traverse the
elements of the vector. The elements of the vector are allocated and deallocated using the allocator.
One of the intentions of this class is to be used for graphics and code that is run-time critical. Any of you who have used OpenGL or Direct3D are familiar with vertex arrays. A vertex array is just
what its name implies - an array of (usually) 3D vectors. Such arrays can be cached, or optimized, and for such reasons there's no need to impose the use of an allocator instance for each instance of
EVector. In this example, an array of EVectors can be easily used as a vertex array that can be passed to OpenGL and the programmer doesn't even need to calculate the offset between consecutive
instances of EVectors (even though it's easy...). Such an array will definitely occupy less memory if there's no allocator instance for each EVector instance. This is why I make use of the
E_VECTOR_USE_ALLOCATOR flag.
If you take a look at the code, you'll see the E_VECTOR_USE_ALLOCATOR flag. If this flag is defined, EVector has a template parameter A that determines the type of the allocator (which is by default
std::allocator). In addition, each instance of an EVector object will hold an instance of an allocator - just like std::vector (or any other STL container). If, however, it isn't defined, all
instances of EVector of the same type (meaning, having the same T and n) will share the same allocator instance. To make a long story short - E_VECTOR_USE_ALLOCATOR determines whether the allocator
is a static member of EVector or not. This still allows the flexibility of providing different ways of allocating objects without the overhead of complexity and memory usage that might arise in most
cases. It is important to emphasis that whether E_VECTOR_USE_ALLOCATOR is defined or not, EVector can be used identically. This is because if E_VECTOR_USE_ALLOCATOR isn't defined, A becomes a typedef
of the class, and get_allocator() simply returns the static member.
Unfortunately, I can't get into the skinny math details, so I assume that most users of this class know some linear algebra. It really isn't complicated. As mentioned above, the class contains the
standard functions that manipulate vectors. If anything is missing and should be in - let me know. :-)
As I mentioned above, the class is STL-compliant - which allows it to be used by STL algorithms. I've demonstrated this in the small test program for the case of std::reverse but, of course, it can
be used with other algorithms as well. It also includes simple serialization to and from a stream.
An important thing to notice. The size of the vector is static!! It can't be changed at run-time and is determined completely by the template parameter n. This is in contrast to some other classes
I've seen. There are several reasons for this. The main two are simplicity and performance. Like I said, this class is intended to be used for performance-critical tasks. If the size was changeable
at run-time, simple operations like addition, subtraction, dot product, and so forth would have to be validated at run-time by checking that vectors have the same size and otherwise throwing an
exception. Keeping the size static and part of the template parameters removes all such problems because these are simply verified during compilation. If one tries to define:
EVector< float, 5 > v;
EVector< float, 3 > u;
cout << u+v << endl; // compilation error!
one would get a compilation error because the operators mentioned above are defined for vectors of the same size.
Using the code
This is simple. All you have to do is to include EVector.h in your source code. Then you can define:
EVector< double, 4 > v; // creates an invalid vector of 4 doubles
EVector< int, 2 > p; // creates an invalid vector of 2 integers
v.fill(0.1); // v = (0.1, 0.1, 0.1, 0.1)
p[0] = 1; p[1] = 5; // p = (1, 5)
v.normalize(); // |v| = v.length() = 1
std::reverse(p.begin(), p.end()); // p = (5, 1)
And start having fun with them...
Please look at the (Doxygen) comments in the code to find out what are the operators and functions that are defined for the class.
Points of Interest
Please notice that the default constructor creates an invalid vector. An invalid vector is a vector one of whose arguments is NaN (Not a Number). To check for the validity of a vector one should use
the isValid() member function.
You'll see that for some of the functions there's an optional epsilon argument. This is provided because some computations (actually, most computations...) result in numerical errors. For example,
two vectors are considered orthogonal if their dot product is 0. Since the computer has only finite accuracy, two vectors which would be practically orthogonal might not yield an exact 0 when
calculating their dot product. The epsilon argument solves this problem by considering vectors whose dot product is less or equal to epsilon orthogonal, and vectors whose dot product is greater than
epsilon - not orthogonal. The same principle applies for other kinds of tests.
No history yet... | {"url":"https://codeproject.global.ssl.fastly.net/Articles/10372/An-STL-compliant-mathematical-vector-class?display=Print","timestamp":"2024-11-04T17:17:05Z","content_type":"text/html","content_length":"26655","record_id":"<urn:uuid:b6c4ea3a-43e8-4b3f-8698-40509a46b1c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00048.warc.gz"} |
Finding the Power of an Electrical Component
Question Video: Finding the Power of an Electrical Component Physics • Third Year of Secondary School
A 4 A current passes through a 10 Ω resistor. How much is the power of the resistor?
Video Transcript
A four-ampere current passes through a 10-ohm resistor. How much is the power of the resistor?
Okay, so we know that we’ve got a resistor with a resistance of 10 ohms, which we’ve labeled as 𝑅. We’re also told that there’s a current of four amperes through the resistor, and we’ve labeled this
current as 𝐼. In order for there to be a current through it, then this resistor must be part of a complete circuit. So, for example, we could imagine the simplest possible case, which is the resistor
connected in series with a cell.
We’re being asked to work out the power of the resistor. And we can recall that the electrical power 𝑃 of a component or the power that it dissipates or transfers to its surrounding environment is
equal to the current 𝐼 through that component multiplied by the potential difference 𝑉 across it. In this case, we already know the value of the current 𝐼 through the resistor.
The potential difference 𝑉 is the value that will be measured by a voltmeter connected in parallel across the resistor like this. But we don’t know what this value is. However, we do know the
resistance of the resistor. And we can recall that Ohm’s law links a component’s resistance, the current through it, and the potential difference across it. Specifically, Ohm’s law tells us the
potential difference 𝑉 is equal to current 𝐼 multiplied by resistance 𝑅.
We can then use this Ohm’s law equation to replace the 𝑉 in the power equation by 𝐼 times 𝑅. So we’d be replacing the quantity 𝑉 that we don’t know the value of by two quantities 𝐼 and 𝑅 that we do
know values for. If we take our equation for the power 𝑃 and we use Ohm’s law to replace the quantity 𝑉 with 𝐼 times 𝑅, then we have that the power 𝑃 is equal to 𝐼 times 𝐼 times 𝑅, which we can write
more simply as 𝑃 is equal to 𝐼 squared times 𝑅.
This equation tells us that the power 𝑃 of the resistor is equal to the square of the current 𝐼 through the resistor multiplied by its resistance 𝑅. We can now go ahead and substitute our values for
𝐼 and 𝑅 into this equation to calculate the value of the power 𝑃. When we do this, we find that 𝑃 is equal to the square of four amperes, that’s the current 𝐼, multiplied by 10 ohms, the resistance
Our current with units of amperes and a resistance in units of ohms will mean that we calculate a power in units of watts. Evaluating the expression, the square of four is 16, so we’ve got 16
multiplied by 10 watts, which works out as a power of 160 watts. So our answer to this question is that the power of the resistor is 160 watts. | {"url":"https://www.nagwa.com/en/videos/759172184608/","timestamp":"2024-11-12T09:38:56Z","content_type":"text/html","content_length":"251512","record_id":"<urn:uuid:9dcbb85b-219a-41b8-a00b-92cf95136707>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00643.warc.gz"} |
New Experiment Translates Quantum Information Between Technologies In Important Step For Quantum Internet - MessageToEagle.com
New Experiment Translates Quantum Information Between Technologies In Important Step For Quantum Internet
Eddie Gonzales Jr. – MessageToEagle.com – Researchers have discovered a way to “translate” quantum information between different kinds of quantum technologies, with significant implications for
quantum computing, communication, and networking.
A niobium superconducting cavity. The holes lead to tunnels which intersect to trap light and atoms. Credit: Aishwarya Kumar
The research was published in the journal Nature on Wednesday. It represents a new way to convert quantum information from the format used by quantum computers to the format needed for quantum
Photons—particles of light—are essential for quantum information technologies, but different technologies use them at different frequencies. For example, some of the most common quantum computing
technology is based on superconducting qubits, such as those used by tech giants Google and IBM; these qubits store quantum information in photons that move at microwave frequencies.
But if you want to build a quantum network, or connect quantum computers, you can’t send around microwave photons because their grip on their quantum information is too weak to survive the trip.
“A lot of the technologies that we use for classical communication—cell phones, Wi-Fi, GPS and things like that—all use microwave frequencies of light,” said Aishwarya Kumar, a postdoc at the James
Franck Institute at University of Chicago and lead author on the paper. “But you can’t do that for quantum communication because the quantum information you need is in a single photon. And at
microwave frequencies, that information will get buried in thermal noise.”
The solution is to transfer the quantum information to a higher-frequency photon, called an optical photon, which is much more resilient against ambient noise. But the information can’t be
transferred directly from photon to photon; instead, we need intermediary matter. Some experiments design solid state devices for this purpose, but Kumar’s experiment aimed for something more
fundamental: atoms.
The electrons in atoms are only ever allowed to have certain specific amounts of energy, called energy levels. If an electron is sitting at a lower energy level, it can be excited to a higher energy
level by hitting it with a photon whose energy exactly matches the difference between the higher and lower level. Similarly, when an electron is forced to drop to a lower energy level, the atom then
emits a photon with an energy that matches the energy difference between levels.
Rubidium atoms happen to have two gaps in their levels that Kumar’s technology exploits: one that exactly equals the energy of a microwave photon, and one that exactly equals the energy of an optical
photon. By using lasers to shift the atom’s electron energies up and down, the technology allows the atom to absorb a microwave photon with quantum information and then emit an optical photon with
that quantum information. This translation between different modes of quantum information is called “transduction.”
Effectively using atoms for this purpose is made possible by the significant progress scientists have made in manipulating such small objects. “We as a community have built remarkable technology in
the last 20 or 30 years that lets us control essentially everything about the atoms,” Kumar said. “So the experiment is very controlled and efficient.”
He says the other secret to their success is the field’s progress in cavity quantum electrodynamics, where a photon is trapped in a superconducting, reflective chamber. Forcing the photon to bounce
around in an enclosed space, the superconducting cavity strengthens the interaction between the photon and whatever matter is placed inside it.
Their chamber doesn’t look very enclosed—in fact, it more closely resembles a block of Swiss cheese. But what look like holes are actually tunnels that intersect in a very specific geometry, so that
photons or atoms can be trapped at an intersection. It’s a clever design that also allows researchers access to the chamber so they can inject the atoms and the photons.
The technology works both ways: it can transfer quantum information from microwave photons to optical photons, and vice versa. So it can be on either side of a long-distance connection between two
superconducting qubit quantum computers, and serve as a fundamental building block to a quantum internet.
But Kumar thinks there may be a lot more applications for this technology than just quantum networking. Its core ability is to strongly entangle atoms and photons—an essential, and difficult task in
many different quantum technologies across the field.
“One of the things that we’re really excited about is the ability of this platform to generate really efficient entanglement,” he said. “Entanglement is central to almost everything quantum that we
care about, from computing to simulations to metrology and atomic clocks. I’m excited to see what else we can do.”
Written by Eddie Gonzales Jr. – MessageToEagle.com Staff | {"url":"https://www.messagetoeagle.com/quantum-info-step/","timestamp":"2024-11-12T10:17:32Z","content_type":"text/html","content_length":"109643","record_id":"<urn:uuid:146a599b-c858-4efd-900e-34431ca6c846>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00569.warc.gz"} |
Pro Tip: cuBLAS Strided Batched Matrix Multiply | NVIDIA Technical Blog
There’s a new computational workhorse in town. For decades, general matrix-matrix multiply—known as GEMM in Basic Linear Algebra Subroutines (BLAS) libraries—has been a standard benchmark for
computational performance. GEMM is possibly the most optimized and widely used routine in scientific computing. Expert implementations are available for every architecture and quickly achieve the
peak performance of the machine. Recently, however, the performance of computing many small GEMMs has been a concern on some architectures. In this post, I detail solutions now available in cuBLAS
8.0 for batched matrix multiply and show how it can be applied to efficient tensor contractions, an interesting application that users can now be confident will execute out-of-the-box with the full
performance of a GPU.
Batched GEMM
The ability to compute many (typically small) matrix-matrix multiplies at once, known as batched matrix multiply, is currently supported by both MKL’s cblas_<T>gemm_batch and cuBLAS’s cublas<T>
gemmBatched. (<T> in this context represents a type identifier, such as S for single precision, or D for double precision.)
Both of these interfaces support operations of the form
$C[p] = \alpha \, \text{op}(A[p]) \, \text{op}(B[p]) + \beta \, C[p],$
where A[p], B[p], and C[p] are matrices and op represents a Transpose, Conjugate Transpose, or No Transpose. In cuBLAS, the interface is:
<T>gemmBatched(cublasHandle_t handle,
cublasOperation_t transA, cublasOperation_t transB,
int M, int N, int K,
const T* alpha,
const T** A, int ldA,
const T** B, int ldB,
const T* beta,
T** C, int ldC,
int batchCount)
For reference, <T>gemmBatched implements the following computation (along with all of the variants associated with the transpose arguments).
for (int p = 0; p < batchCount; ++p) {
for (int m = 0; m < M; ++m) {
for (int n = 0; n < N; ++n) {
T c_mnp = 0;
for (int k = 0; k < K, ++k)
c_mnp += A[p][m + k*ldA] * B[p][k + n*ldb];
C[p][m + n*ldc] = (*alpha)*c_mnp + (*beta)*C[p][m + n*ldc];
This is extremely versatile and finds applications in LAPACK, tensor computations, machine learning (RNNs, CNNs, etc), and more. Both the MKL and cuBLAS implementations are optimized for small matrix
sizes as well.
Figure 1: Performance of four strategies for computing N matrix-matrix multiplications of size NxN.
In Figure 1, I’ve plotted the achieved performance on an NVIDIA Tesla P100 GPU of four evaluation strategies that use some form of cuBLAS SGEMM. The blue line shows the performance of a single large
SGEMM. But, if many smaller SGEMMs are needed instead, you might simply launch each smaller SGEMM separately, one after another. This is plotted in red and the achieved performance is quite poor:
there are many kernels launched in sequence and the small matrix size prevents the GPU from being fully utilized. This can be improved significantly by using CUDA streams to overlap some or all of
the kernels—this is plotted in green—but it is still very costly when the matrices are small. The same computation can be performed as a batched matrix multiply with a single call to
cublasSgemmBatched, plotted in black, where parity with the original large SGEMM is achieved!
One issue with the pointer-to-pointer interface, in which the user must provide a pointer to an array of pointers to matrix data, is the construction and computation of this data structure. Code,
memory, and time has to be invested to precompute the array of pointers. In a common case, we end up with something like the following code.
T* A_array[batchCount];
T* B_array[batchCount];
T* C_array[batchCount];
for (int p = 0; p < batchCount; ++p) {
A_array[p] = A + p*strideA;
B_array[p] = B + p*strideB;
C_array[p] = C + p*strideC;
Which clutters code and costs performance, especially when the pointers can’t be precomputed and reused many times. Even worse, in cuBLAS the matrix pointers must exist on the GPU and point into GPU
memory. This means that the above precomputation translates into (1) GPU memory allocation, (2) Pointer offset computations, (3) GPU memory transfers/writes, and (4) GPU memory deallocation. Many of
these steps are expensive, optional, could imply synchronization, and are generally frustrating.
Fortunately, as of cuBLAS 8.0, there is a new powerful solution.
Strided Batched GEMM
For the common case shown above—a constant stride between matrices—cuBLAS 8.0 now provides cublas<T>gemmStridedBatched, which avoids the auxiliary steps above. The interface is:
<T>gemmStridedBatched(cublasHandle_t handle,
cublasOperation_t transA, cublasOperation_t transB,
int M, int N, int K,
const T* alpha,
const T* A, int ldA, int strideA,
const T* B, int ldB, int strideB,
const T* beta,
T* C, int ldC, int strideC,
int batchCount)
For reference, <T>gemmStridedBatched implements the following computation (along with all of the variants associated with the transpose arguments).
for (int p = 0; p < batchCount; ++p) {
for (int m = 0; m < M; ++m) {
for (int n = 0; n < N; ++n) {
T c_mnp = 0;
for (int k = 0; k < K, ++k)
c_mnp += A[m + k*ldA + p*strideA] * B[k + n*ldB + p*strideB];
C[m + n*ldC + p*strideC] =
(*alpha)*c_mnp + (*beta)*C[m + n*ldC + p*strideC];
The interface is slightly less versatile, but the performance packs the same punch as cublas<T>gemmBatched, while avoiding any overhead from precomputation.
Figure 2: Performance of three strategies for computing N matrix-matrix multiplications of size NxN.
Figure 2 plots the achieved performance of three strategies that use some form of cuBLAS SGEMM. In blue, I’ve again plotted the performance of a single large SGEMM. In gray, the performance of
cublasSgemmBatched, but this time I’ve included the overhead—the time it takes to allocate, compute, and transfer the pointer-to-pointer data structure to the GPU. The overall performance is greatly
impacted, especially when we have only a few small matrices. In many cases, the same computation can be performed with a single call to cublasSgemmStridedBatched, drawn in black, without any required
precomputation and the original batched GEMM performance is achieved again!
It’s interesting to note that the interface of cublas<T>gemmStridedBatched allows only a strict subset of the operations available in the pointer-to-pointer interface of cublas<T>gemmBatched, but
offers a number of benefits:
• You don’t have to precompute pointer offsets and/or allocate/deallocate auxiliary memory to use the interface. This is especially beneficial in GPU computing where allocation and transfer can be
relatively more expensive and could cause undesired synchronization.
• Regular blocked matrices such as block-diagonal matrices, block-tridiagonal, or block-Toeplitz can be applied straightforwardly. Moreover, operations of this form appear frequently in tensor
computations [Shi 2016], physics computations such as FEM and BEM [Abdelfattah 2016], many numerical linear algebra computations [Dong 2014, Haidar 2015], and machine learning.
• The door is opened for even more optimizations in our implementation. For example, if a matrix stride is zero, then the batched GEMM is multiplying many matrices by a single matrix. In principle,
the single matrix could be read once and reused many times. Other clever reorderings of how the computation is actually performed are now available to implementers.
Application: Tensor Contractions
A very simple application of this new primitive is the efficient evaluation of a wide range of tensor contractions. Table 1 lists all possible single-index contractions between an order-2 tensor (a
matrix) and an order-3 tensor to produce an order-3 tensor.
Table 1: All possible single-index contractions between an order-2 tensor (a matrix) and an order-3 tensor to produce an order-3 tensor.
Elements are stored in “generalized column-major” order:
$C_{mnp} \equiv C[m + n \cdot \text{ldC1} + p \cdot \text{ldC2}].$
A single call to GEMM can perform 8 out of the 36 contractions, but only if the data storage is compact! On the other hand, a single call to stridedBatchedGEMM can perform 28 out of the 36 cases,
even when the storage is non-compact! In the table, the batched index is written in brackets [.] and the transposition arguments are written over their “batched matrix”.
For example, to compute Case 6.1, you can write the following.
CUBLAS_OP_T, CUBLAS_OP_N,
M, P, K,
B, ldB1, ldB2,
A, ldA, 0,
C, ldC2, ldC1,
This computes the following, which is exactly what we wanted!
for (int n = 0; n < N; ++n) {
for (int m = 0; m < M; ++m) {
for (int p = 0; p < P; ++p) {
T c_mnp = 0;
for (int k = 0; k < K, ++k)
c_mnp += B[k + m*ldB1 + n*ldB2] * A[k + p*ldA];
C[m + n*ldC1 + p*ldC2] = c_mnp;
Notice the clever use of a matrix stride of zero, applying a transpose to each B-matrix in the batch, flipping the input arguments A and B in order to get the right expression, and swapping the ldC1
and ldC2 parameters to get a banded output rather than a blocked output from each GEMM. For more information see [Shi 2016], which details the need for this primitive, more performance profiles, and
additional applications such as in unsupervised machine learning.
Calling cublas<T>gemmStridedBatched avoids having to manually reshape (e.g. using copy or geam) the tensors into matrices in order to use GEMM, saves an enormous amount of time (especially for small
tensors), and executes just as fast as GEMM does! This is beautiful.
Getting Started with Batched Matrix Multiply
Batched and strided batched matrix multiply (GEMM) functions are now available in cuBLAS 8.0 and perform best on the latest NVIDIA Tesla P100 GPUs. You can find documentation on the batched GEMM
methods in the cuBLAS Documentation to get started at peak performance right away!
For more information about the motivation, performance profiles, and applications of batched GEMMs in machine learning and tensor computations, see Yang Shi’s HiPC 2016 paper. To hear about why these
ideas are critically important in my ongoing work with tensors and structured dense matrix factorizations, see my upcoming talk at GTC 2017, “Low-Communication FFT with Fast Multipole Method”.
[Shi 2016] Y. Shi, U. N. Niranjan, A. Anandkumar, and C. Cecka. Tensor Contractions with Extended BLAS Kernels on CPU and GPU. In 2016 IEEE 23rd International Conference on High Performance Computing
(HiPC), pages 193–202, Dec 2016. ieee.org/document/7839684/
[Abdelfattah 2016] Abdelfattah, A., Baboulin, M., Dobrev, V., Dongarra, J., Earl, C., Falcou, J., Haidar, A., Karlin, I., Kolev, T., Masliah, I., Tomov, S.: High-performance tensor contractions for
GPUs. In: International Conference on Computational Science (ICCS 2016). Elsevier, Procedia Computer Science, San Diego, CA, USA, June 2016 hgpu.org/?p=15361
[Haidar 2015] A. Haidar, T. Dong, S. Tomov, P. Luszczek, and J. Dongarra. A Framework for Batched and GPU-resident Factorization Algorithms Applied to Block Householder Transformations International
Supercomputing Conference IEEE-ISC 2015, Frankfurt, Germany.
[Dong 2014] T. Dong, A. Haidar, S. Tomov, and J. Dongarra. A Fast Batched Cholesky Factorization on a GPU ICPP 2014, The 43rd International Conference on Parallel Processing 2014.
[Relton 2016] Samuel D Relton, Pedro Valero-Lara, and Mawussi Zounon. “A Comparison of Potential Interfaces for Batched BLAS Computations,” hgpu.org/?p=16401 | {"url":"https://developer.nvidia.com/blog/cublas-strided-batched-matrix-multiply/","timestamp":"2024-11-07T13:02:35Z","content_type":"text/html","content_length":"203243","record_id":"<urn:uuid:4e1c6952-a567-43c9-9e6f-772043d222d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00784.warc.gz"} |
Multifractal Characterization of Seismic Activity in the Provinces of Esmeraldas and Manabí, Ecuador
Multifractal Characterization of Seismic Activity in the Provinces of Esmeraldas and Manabí, Ecuador^ †
Global Change Master Area, University of Cordoba, 14071 Córdoba, Spain
Projects Engineering Area, University of Córdoba, 14071 Córdoba, Spain
Author to whom correspondence should be addressed.
Presented at the 2nd International Electronic Conference on Geosciences, 8–15 June 2019.
Published: 4 June 2019
Due to the enormous impact of seismic activity and the need to deepen knowledge of its behavior, this research work carries out an analysis of the multifractal nature of the magnitude, inter-distance
and interevent time series of earthquakes that occurred in Ecuador during the years 2011–2017 in the provinces of Manabí and Esmeraldas, two areas with high seismic activity. For this study we use
multifractal detrended fluctuation analysis (MF-DFA), which allows the detection of multifractality in a non-stationary series as well as in a series of parameters of non-linear characterization. The
obtained results revealed that an interevent time series presents a higher degree of multifractality than the two previously mentioned. In addition, the Hurst exponent values were in a
non-proportional function to (q), which is a weight value indicating the multifractal behavior of the dynamics of the earthquakes analyzed in this work. Finally, several multifractal parameters were
calculated, and as a result all series were skewed to the right. This reveals that small variations in the analyzed series were more dominant than large fluctuations.
1. Introduction
Subduction is activated as a product of the interaction between continental and oceanic plates. This occurs when a low-density plate enters below the plate that produces the highest density in a
subduction zone. As a result, the produced friction plane gives rise to earthquakes, volcanism and magmatism, causing failures and sutures [
For this reason, Ecuador is considered seismically active. During the past 110 years there have been earthquakes that have been studied for their magnitudes and origins, a clear example being the
earthquake of Esmeraldas in 1906, whose magnitude was 8.8 on the Richter scale and which was one of the largest earthquakes in recorded history [
]. Earthquakes are natural catastrophes that cannot be accurately predicted or avoided at present. Thus, this work focuses on previous data that serve as a basis for a map of susceptibility, and a
demonstration of seismic scenarios, all of which depends on land, geographical location, load on the ground, and so on. [
Compaction, subsidence, liquefaction, landslides, settlements, cracks, balance, faults, cracks, and so on are some of the effects of an earthquake that are related to shaking or vibrations [
]. All of these damages can be estimated using probabilistic methods to reduce their damage by reinforcing structures in common seismic zones or implementing safe zones for the people affected.
This research work focuses on a characterization of the seismic hazards of the areas indicated above, determining the sizes of forces and the sets of the actions that could affect soil in specific
places during future earthquakes. Because earthquakes affect the constructions, which implies potential damages or collateral effects, a very important factor to take into account is seismic risk,
which is the estimate of expected damages or losses.
2. Art State: Fractals
The concept of a “fractal” was introduced by Mandelbrot in 1975, and it refers to the geometry of a basic structure that is fragmented and repeated at different self-similar scales. Due to its
irregular nature, a fractal cannot be described in traditional geometrical terms. Taking this initial definition into account and using the theoretical development of fractality, we can analyze many
manifestations that present the characteristics of chaos and order at the same time. Fractals present a set of new rules to know and describe nature [
]. There are two types of fractals, the “ideal”, which is a geometrical figure that mathematicians create by means of an iterative algorithm or repetitive rule with a shape (irregular, interrupted or
fragmented) that remains the same at any scale at which the observation occurs, fulfilling the property of exact self-similarity [
]. In addition to the “ideal” fractals, there are also “natural” fractals, which are an element of nature that can be described through fractal geometry [
]. Earthquakes, mountains, the circulatory system, coastal lines or snowflakes are natural fractals. This representation is approximate, because the properties attributed to ideal fractal objects,
such as infinite detail, have limits in the physical world.
With the development of computers, it has been possible to produce fractal graphics that require highly complex calculations. Thus, mathematicians and artists have found a new means of research and
3. Data Source
Data collection is the first step in performing Multifractal Detrended Fluctuations Analysis (MF-DFA). All data were provided by the Military Geographic Institute of Ecuador [
] after requesting the information [
]. The two provinces used in this work (Esmeraldas and Manabí) were selected due to their high seismic activity [
From a total of 2190 earthquakes (
Figure 1
) produced between 2011 and 2017, 1020 were used in this work, according to their coordinates and importance [
The variables provided by the Military Geographic Institute of Ecuador were: Date and hour, Deep, Duration time, Coordinates and Magnitude.
The interevent series is determined by comparing the exact times of each earthquake occurrence (that is, it is the time difference between each earthquake). In order to obtain the inter-distance
series, a similar analysis is carried out, determining the coordinates at which each earthquake occurs, from which a distance difference is calculated. Subsequently, each of the series is divided
into sets (q) that takes positive and negative values, in this case 5 and −5.
For an appropriate characterization of the selected provinces, multifractal analysis has been carried out using rescaled range analysis, which is defined below along with other concepts.
4. Methodology
Multifractal Detrended Fluctuation Analysis (MF-DFA)
This method is described in detail by [
] and is a general procedure consisting of five steps. At the beginning we assume a series x
with a length = N, which is a set of k indexes with x
non-zero values. The series is compact if the values x
= 0; that is, it is a negligible fraction compared to the total length of the series. For the null components of the series, values are not assigned to the index k.
Step 1: The next profile to be analyzed is determined using Equation (1).
$Y i ≡ ∑ k = 1 i x k − x , i = 1 , … , N .$
where <x> represents the arithmetic mean:
$x = 1 N ∑ k = 1 N x ( k )$
The subtraction of the mean <x> is not mandatory since it will be eliminated later using Equation (2).
Step 2: The obtained profile using Equation (1) is divided into $N s ≡ i n t ( N s )$ segments that are not overlapping with length s. Because the length N of the series is not always a multiple of
s, the same procedure is repeated from the opposite side of the series in order not to neglect the remaining interval of the end. Thus, an amount of $2 N s$ segments are obtained.
Step 3: By adjusting the least squares, the local trend of each of the segments is determined for each of
$2 N s$
segments. Subsequently, the covariance is calculated using Equation (2)
$F 2 v , s ≡ 1 s ∑ i = 1 s Y v − 1 s + i − y v ( i ) 2$
for each of the segments
$N s$
and also for
$v = N s + 1 , … , 2 N s$
. The
$y v ( i )$
value is a polynomial fit for the segment
-th. Depending on the order of polynomial adjustment (you can use linear, quadratic, cubic or higher-order polynomials in the fitting procedure), the order m of the detrended fluctuation analysis
(DFA) is defined (DFA1, DFA2, . . . ), respectively.
$F 2 v , s ≡ 1 s ∑ i = 1 s Y N − ( v − 1 s + i − y v ( i ) 2$
Step 4: To obtain the function of the q-th order fluctuation, the average of all the segments is determined using Equation (5).
$F q ( s ) ≡ 1 2 N s ∑ v = 1 2 N s F 2 ( v , s ) q 2 1 q$
where (q) can take any real value except zero. For different values such as q = 2, the scalar exponent h(2) provides information on the fluctuations of the data series. By repeating the process
described above, the s time scales vary if the function
$F q ( s )$
increases as s increases. It should also be noted that
$F q ( s )$
depends on the order m in the DFA, which is defined as s ≥ m+2.
Step 5: Through the graphical representation in the logarithmic scale of
$F q ( s )$
vs. s, the scalar behavior of the fluctuation function is determined for each value of q. The function
$F q ( s )$
increases for large values of s when the
$x i$
series presents a long-range correlation. This as a power law, represented by Equation (6).
In order to quantify the multifractal character in a time series, the multifractal spectrum ƒ(
) is used as a relationship between the generalized exponent of Hurst H(q) and the classical exponent
(q) [
], which is a scalar exponent of Renyi. When it depends linearly on q, the set is monofractal, otherwise the set is multifractal. To calculate this exponent, Equation (7) is used:
By the Legendre transformation, you get
$α = τ ( q ) ƒ α = q α − τ ( q )$
is the Hölder exponent, and
$ƒ α$
determines the dimension of the subsets of the series, which depend on
(this is the same as the exponent that measures the strength of the multifractal structure) [
]. Broadly speaking, a small value of
indicates the process of a fine structure being lost, and its appearance becomes more regular; however, if the value is large, a complex structure is ensured. In that aspect, it is possible to use
the Hurst exponent as
in Equation (9).
$α = H q + q H ′ ( q ) y ƒ α = q α − H ( q ) + 1$
Next, a polynomial fit of the second order is calculated around the position of
max or
$α 0$
(Equation (10)).
$ƒ α = A ( α − α 0 ) 2 + B α − α 0 + C$
where C is a constant equal to 1, and the coefficient B refers to the asymmetry of the multifractal spectrum, being either zero or equal to zero for a symmetrical spectrum. When it is higher than
zero, the multifractal structure is quite solid; on the other hand, when it is lower than zero, the multifractal structure is more regular and smoother, indicating fewer fractal exponents.
5. Results and Discussion
The variables analyzed were magnitude (
Figure 2
), inter-distance (
Figure 3
) and interevents (
Figure 4
). The scaling function Fq is represented in
Figure 1
A for different q values and estimates the slope Hq. Except for some fluctuations, it can be seen that the fluctuation function Fq follows a linear trend in logarithmic coordinates. Thus, we can
identify the generalized exponent of Hurst H(q) in
Figure 2
B, with the slope for each order of q according to the potential law. The values given to (q) are –5 and 5, considering a set of events ranging from a scale of 5 to a total of N/4 (N is the total
value of our samples). According to [
], the different slopes of the fluctuation curves indicate that the fluctuations of small and large events are scaled differently.
Likewise, if we compare each of
Figure 2
Figure 3
A and
Figure 4
A, we can determine that the results of the variable magnitude and inter-distance are similar (almost parallel to each other) in comparison with the results of the interevent variable with steeper
slopes, resulting in different and strongly dependent Hurst exponents of (q).
Figure 2
Figure 3
B and
Figure 4
B relate each of the q values to its corresponding Hurst exponent for magnitude, inter-distance and interevent variables, respectively. As can be observed, the lower the value of (q), the greater the
Hurst exponent, and vice versa. A clarification regarding the curves of the interevent series it is necessary, where multifractal behavior was not detected in scales less than 32 due to a non-linear
Figure 2
Figure 3
D and
Figure 4
D the multifractal spectrum obtained by applying the Legendre transformation (Equation (8)) is represented for each variable. The multifractal spectrum allows the multifractality of a time series to
be described qualitatively and quantitatively considering its width (W), which determines the richness of the multifractal structure.
Regarding Hurst exponents (Hq) in the magnitude series, a value for
2 = 0.78 was obtained. This indicates a persistence or the presence of a long-range correlation in the series, such that large values are preceded by the same type of values. This was also detected
for the inter-distance series (
2 = 0.66). However, the interevent series presented
2 = 1.16, indicating that the dynamics of this variable corresponds to fluctuating noise, which is common in critical self-similar systems. In
Table 1
, the characteristic parameters of the multifractal spectrum are summarized.
Based on the data reported in
Table 1
, and by comparing the three multifractional spectra (
Figure 5
), it is observed that the interevent series has a greater degree of multifractality than the other two series, and that the inter-distance series presents a multifractality slightly greater than the
magnitude series. All series appear biased to the right, which is consistent with r > 1. This indicates that small variations in the series are more dominant than large fluctuations. It seems, also,
that such dominance is more intense for the interevent series than the other two series.
6. Conclusions
Due to its enormous importance and the need to expand knowledge, multifractal analysis of the seismic activity of the provinces of Esmeraldas and Manabí in Ecuador was carried out by obtaining
their multifractal spectra.
The studied seismic phenomena displayed a dynamic change of heterogeneity towards homogeneity, and from variance to becoming constant during the activation of the replica. This was revealed by a
loss of multifractality after a main event.
The results of this study revealed a persistent behavior of the magnitude and inter-distance series, while the interevent series showed a behavior of fluctuating noise.
When determining a relationship between the resulting curves, all series appeared biased to the right, which was consistent with r > 1. This indicates that small variations in the series are more
dominant than large fluctuations. It seems, also, that such dominance is more intense for the interevent series than the other two, due to the resulting values for each of the series and their
respective exponents of Hurst.
This work was partially funded by the Spanish Ministry of Science, Innovation and Universities with Project No. AGL2017-87658-R.
The authors acknowledge the valuable help of the researcher Luciano Telesca.
Conflicts of Interest
The authors declare no conflict of interest.
1. Paladines, A.; Soto, J. Geología y Yacimientos Minerales del Ecuador; Universidad Técnica Particular de Loja: Loja, Ecuador, 2010; 223p. [Google Scholar]
2. Cruz Atienza, V. Los sismos, una amenaza cotidiana; La caja de cerillos ediciones: Ciudad de México, México, 2013; 111p. [Google Scholar]
3. Sánchez, F.V. Los Terremotos y sus Causas; España: Instituto Andaluz de Geofísica y Prevención de Desastres Sísmicos: Dialnet, Spain, 2000; 123p. [Google Scholar]
4. Guartan, J.A.; Tamay, J.V. Optimización del proceso de recuperación de oro contenido en los relaves de molienda de la planta “Vivanco” por el método de flotación-cianuración; Investigation; UTPL:
Loja, Ecuador, 2003. [Google Scholar]
5. Mandelbrot, B.B. The Fractal Geometry of Nature, Updated and augm. ed.; W.H. Freeman: New York, NY, USA, 1983; 460p. [Google Scholar]
6. Senesi, N.; Wilkinson, K.J. Biophysical chemistry of fractal structures and processes in environmental systems; Wiley: Chichester, West Sussex, UK; Hoboken, NJ, USA, 2008; 340p. [Google Scholar]
7. Spalla, M.I.; Marotta, A.M.; Gosso, G. Advances in Interpretation of Geological Processes: Refinement of Multi-Scale Data and Integration in Numerical Modelling; Geological Societ: London, UK;
Williston, VT, USA, 2010; 240p. [Google Scholar]
8. Instituto Geográfico Militar, I. Cartographer; Carta Topográfica: Esmeraldas, Ecuador, 2008. [Google Scholar]
9. Brito, S. Instituto Nacional de Investigación Geológico Minero Metalúrgico, E. Atlas geológico minero del Ecuador; INIGEMM, Gobierno de Ecuador: Quito, Ecuador, 2017; 245p. [Google Scholar]
10. Morejón, J. Es Provincia Verde; Fundación Naturaleza Viva Fundación Naturaleza Viva: Esmeraldas, Ecuador, 2012; 179p. [Google Scholar]
11. Prefectura de Manabí. Plan de desarrollo y ordenamiento territorial de la provincia de Manabí, diagnostico estratégico; Prefectura de Manabí, Gobierno de Ecuador: Manabí, Ecuador, 2014; 335p. [
Google Scholar]
12. Kantelhradt, J.; Zschiegner, S.; Koscielny-Bunde, E. Multifractal Detrended Fluctuation Analysis of Noostationary Time Series. Phys. A Stat. Mech. Appl. 2002, 316, 87–114. [Google Scholar] [
13. Mandelbrot, B.B.; Novak, M.M. Thinking in Patterns: Fractals and Related Phenomena in Nature, World Scientific: River Edge, NJ, USA, 2004; 323p.
14. Telesca, L.; Lapenna, V. Measuring multifractality in seismic sequences. Tectonophysics 2006, 423, 115–123. [Google Scholar] [CrossRef]
Width Asymmetry r Average h(q) St. Dev. h(q) Max. Delta.
0.55 1.23 0.8283 0.1106 0.2396
Inter-distance Km
Width Asymmetry r Average h(q) St. Dev. h(q) Max. Delta.
0.56 1.94 0.7649 0.1464 0.3579
Interevent (min)
Width Asymmetry r Average h(q) St. Dev. h(q) Max. Delta.
1.44 2.04 1.5562 0.5527 0.4584
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Share and Cite
MDPI and ACS Style
Cuenca, D.I.; Estévez, J.; García-Marín, A.P. Multifractal Characterization of Seismic Activity in the Provinces of Esmeraldas and Manabí, Ecuador. Proceedings 2019, 24, 27. https://doi.org/10.3390/
AMA Style
Cuenca DI, Estévez J, García-Marín AP. Multifractal Characterization of Seismic Activity in the Provinces of Esmeraldas and Manabí, Ecuador. Proceedings. 2019; 24(1):27. https://doi.org/10.3390/
Chicago/Turabian Style
Cuenca, David I., Javier Estévez, and Amanda P. García-Marín. 2019. "Multifractal Characterization of Seismic Activity in the Provinces of Esmeraldas and Manabí, Ecuador" Proceedings 24, no. 1: 27.
Article Metrics | {"url":"https://www.mdpi.com/2504-3900/24/1/27","timestamp":"2024-11-14T21:14:09Z","content_type":"text/html","content_length":"385047","record_id":"<urn:uuid:c009d79a-fe41-4707-9e05-93d73fc565dd>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00796.warc.gz"} |
Movie Minutes
[Transum: Thanks Sandra. Wonderful comment. This calculator skill is part of the Calculator Workout.]
1. If you watched all of these films, one after the other, how long would it take?
2h 13mins 3h 21mins 1h 32mins 2h 9mins 1h 58mins
2. If you started watching at 7:15pm, what time would you finish?
3. Which three films have a total time of 5 hours and 39 minutes?
Sign in to your Transum subscription account to see the answers
Teacher, do your students have access to computers such as tablets, iPads or Laptops? This page was really designed for projection on a whiteboard but if you really want the students to have access
to it here is a concise URL for a version of this page without the comments:
However it would be better to assign one of the student interactive activities below.
Here is the URL which will take them to a related student activity.
Curriculum Reference | {"url":"https://transum.org/Software/SW/Starter_of_the_day/starter_September9.ASP","timestamp":"2024-11-13T18:44:12Z","content_type":"text/html","content_length":"29352","record_id":"<urn:uuid:dbaee60a-eeca-48d9-9018-dbab0c20e4ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00525.warc.gz"} |
Utility of polygenic embryo screening for disease depends on the selection strategy
Polygenic risk scores (PRSs) have been offered since 2019 to screen in vitro fertilization embryos for genetic liability to adult diseases, despite a lack of comprehensive modeling of expected
outcomes. Here we predict, based on the liability threshold model, the expected reduction in complex disease risk following polygenic embryo screening for a single disease. Our main finding is that a
strong determinant of the potential utility of such screening is the selection strategy, a factor that has not been previously studied. Specifically, when only embryos with a very high PRS are
excluded, the achieved risk reduction is minimal. In contrast, selecting the embryo with the lowest PRS can lead to substantial relative risk reductions, given a sufficient number of viable embryos.
For example, a relative risk reduction of ≈50% for schizophrenia can be achieved by selecting the embryo with the lowest PRS out of five viable embryos. We systematically examine the impact of
several factors on the utility of screening, including the variance explained by the PRS, the number of embryos, the disease prevalence, the parental PRSs, and the parental disease status. When
quantifying the utility, we consider both relative and absolute risk reductions, as well as population-averaged and per-couple risk reductions. We also examine the risk of pleiotropic effects.
Finally, we confirm our theoretical predictions by simulating “virtual” couples and offspring based on real genomes from schizophrenia and Crohn’s disease case-control studies. We discuss the
assumptions and limitations of our model, as well as the potential emerging ethical concerns.
Polygenic risk scores (PRSs) have become increasingly well-powered, relying on findings from large-scale genome-wide association studies for numerous diseases (Visscher et al., 2017; Wray et al.,
2013). Consequently, a growing body of research has examined the potential clinical utility of applying PRSs in the treatment of adult patients in order to identify those at heightened risk for
common late-onset diseases such as coronary artery disease or breast cancer (Britt et al., 2020; Khera et al., 2018; Torkamani et al., 2018). Another potential application of PRSs is preimplantation
screening of in vitro fertilization (IVF) embryos, or polygenic embryo screening (PES). Polygenic embryo screening has been offered since 2019 (Treff, Eccles, et al., 2019), but has been the focus of
comparatively little empirical research, despite debate over ethical and social concerns surrounding the practice (Anomaly, 2020; Lázaro-Muñoz et al., 2020; Munday & Savulescu, 2021).
We have recently demonstrated that screening embryos on the basis of polygenic scores for quantitative traits (such as height or intelligence) has limited utility in most realistic scenarios (
Karavani et al., 2019), and that the accuracy of the score is a more significant determinant of PES utility for quantitative traits compared with the number of available embryos. On the other hand, a
series of four studies (Lello et al., 2020; Treff, Eccles, et al., 2019; Treff et al., 2020; Treff, Zimmerman, et al., 2019) conducted by a private company providing PES services has suggested that
PES for dichotomous disease risk may have significant clinical utility. However, these studies examined a relatively limited range of scenarios, primarily focusing on distinctions between sibling
pairs discordant for illness, and did not provide a comprehensive examination of various potential PES settings. Filling this gap is an urgent need, as understanding the statistical properties of PES
forms a critical foundation to any ethical consideration (Lázaro-Muñoz et al., 2020).
Here, we use statistical modeling to examine the potential utility of PES for reducing disease risk, with an aim toward informing future ethical deliberations. We focus on screening for a single
complex disease, and study a range of realistic scenarios, quantifying the role of parameters such as the variance explained by the score, the number of available embryos, and the disease prevalence.
We show that a major determinant of the outcome of PES is the selection strategy, namely the way in which an embryo is selected for implantation given the distribution of PRSs across embryos. We also
study the risk reduction conditional on parental PRSs or disease status, and consider the risk of developing diseases not screened. Finally, we validate some of our predictions based on real genomes
of cases and controls for two common complex diseases.
Model and selection strategies
For each analysis presented below, we assume that a couple has generated, by IVF, n viable embryos such that each embryo, if implanted, would have led to a live birth. We focus on a single complex
disease, and assume that the corresponding PRS has been computed for each embryo. Given the PRSs of the n embryos, a single embryo is selected for implantation based on a selection strategy.
The first strategy we consider is aimed only at avoiding high-risk embryos, consistent with studies of the potential clinical utility of PRSs in adults (Chatterjee et al., 2016; Dai et al., 2019;
Gibson, 2019; Khera et al., 2018; Mars et al., 2020; Mavaddat et al., 2019; Torkamani et al., 2018). For example, the first case report presented on PES described the identification and exclusion of
embryos with extremely high (top 2-percentiles) PRS (Treff, Eccles, et al., 2019). We term this strategy “high-risk exclusion” (HRE: Figure 1A, upper panel). Under HRE, after high-risk embryos are
set aside, an embryo is randomly selected for implantation among the remaining available embryos. (In the case that all embryos are high-risk, we assume a random embryo is selected among them.)
An alternative selection strategy is to use the embryo with the lowest PRS. Ranking and prioritizing embryos for implantation based on morphology is common in current IVF practice (Bormann et al.,
2020; Montag et al., 2013; Rhenman et al., 2015). If ranking is instead based on a disease PRS, the embryo with the lowest PRS could be selected, without any recourse to high-risk PRS thresholds.
Such an approach was suggested by another recent publication from the same company (based on a multi-disease index), but statistical comparisons were only examined in the context of sibling pairs (
Treff et al., 2020). We term the implantation of the embryo with the lowest PRS as “lowest-risk-prioritization” (LRP; Figure 1A, lower panel).
In the following, we describe the theoretical risk reduction that can be achieved under these selection strategies. Our statistical approach is based on the liability threshold model (LTM; (Falconer,
1967)). The LTM represents disease risk as a continuous liability, comprising genetic and environmental risk factors, under the assumption that individuals with liability exceeding a threshold are
affected. The liability threshold model has been shown to be consistent with data from family-based transmission studies (Wray & Goddard, 2010) and GWAS data (Visscher & Wray, 2015). Consequently, we
define the disease risk of a given embryo probabilistically, as the chance that, given its PRS, its liability will cross the threshold at any point after birth (Figure 1B).
We use the following notation. We define the predictive power of a PRS as the proportion of variance in the liability of the disease explained by the score (Dudbridge, 2013), and denote it as . We
quantify the outcome of PES in two ways: the relative risk reduction (RRR) is defined as , where K is the disease prevalence and P(disease) is the probability of the selected embryo to be affected;
the absolute risk reduction (ARR) is defined as K − P(disease). For example, if a disease has prevalence of 5% and the selected embryo has a probability of 3% to be affected, the RRR is 40%, and the
ARR is 2% points. We computed the RRR and ARR analytically under each selection strategy, and for various values for the disease prevalence, the strength of the PRS, embryo exclusion thresholds, and
other parameters. The mathematical details of the calculations are provided in the Materials and Methods.
The risk reduction under the high-risk exclusion strategy
In Figure 2 (upper row), we show the relative risk reduction achievable under the HRE strategy with n = 5 embryos. Under the 2-percentile threshold (straight black lines), the reduction in risk is
limited: the RRR is <10% in all scenarios where ≤ 0.1. Currently, ≈ 0.1 (on the liability scale) is the upper limit of the predictive power of PRSs for most complex diseases (Lambert et al., 2021),
with the exception of a few disorders with large-effect common variants (such as Alzheimer’s disease or type 1 diabetes) (Sharp et al., 2019; Q. Zhang et al., 2020). In the future, more accurate PRSs
are expected. However, the common-variant SNP heritability is at most ≈30% even for the most heritable diseases such as schizophrenia and celiac disease (Holland et al., 2020; Y. Zhang et al., 2018),
and it was recently suggested that = 0.3 is the maximal realistic value for the foreseeable future (Wray et al., 2020). At this value, relative risk reduction would be 20% for K = 0.01, 9% for K =
0.05, and 3% for K = 0.2. These gains achieved with HRE are small because the overwhelming majority of affected individuals do not have extreme scores (Murray et al., 2020; Wald & Old, 2019).
Risk reduction increases as the threshold for exclusion is expanded to include the top quartile of scores, and then reaches a maximum at ≈25-50% under a range of prevalence and values. For all of
these simulations, we set the number of available (testable) embryos to n = 5 (Dahdouh, 2021; Sunkara et al., 2011), although we acknowledge that the number of viable embryos may be much lower for
many couples seeking IVF services for infertility (Smith et al., 2015). Simulations show that these estimates do not change much with increasing the number of embryos (see Figure 2 - Figure
Supplement 1). This holds especially at more extreme threshold values, since most batches of n embryos will not contain any embryos with a PRS within, e.g., the top 2-percentiles.
It should be noted that the relative risk reduction does not increase monotonically under HRE. Under our definition, whenever all embryos are high risk, an embryo is selected at random. Thus, at the
extreme case when all embryos (i.e, top 100%) are designated as high risk, an embryo is selected at random at all times, and the relative risk reduction reduces to zero. We chose this definition of
the HRE strategy because it does not involve ranking of the embryos. However, we can also consider an alternative strategy: if all embryos are high risk, the embryo with the lowest PRS is selected.
Here, the RRR is expected to increase when increasing the threshold and designating more embryos as high risk, which we confirm in Figure 2 – Figure Supplement 2. When the threshold is at 100% (all
embryos are high risk), this alternative strategy (which we do not further consider) reduces to the low-risk prioritization strategy, which we study next.
The risk reduction under the low-risk prioritization strategy
The HRE strategy treats all non-high-risk embryos equally. In practice, we expect most, or even all, embryos to be designated as non-high-risk, given the recent focus on the top PRS percentiles in
the literature (e.g., Khera et al. 2018). However, as we have seen, this strategy leads to very little risk reduction. In Figure 2 (lower panels), we show the expected RRR for the low-risk
prioritization strategy, under which we prioritize for implantation the embryo with the lowest PRS, regardless of any PRS cutoff. Indeed, under the LRP strategy, risk reductions are substantially
greater than in HRE. For example, with n = 5 available embryos, RRR>20% across the entire range of prevalence and parameters considered, and can reach ≈50% for K ≤ 5% and = 0.1, and even ≈80% for K =
1% and = 0.3. While RRR continues to increase as the number of available embryos increases, the gains are quickly diminishing after n = 5. On the other hand, Figure 2 also demonstrates that RRR drops
steeply if the number of embryos falls below n = 5, although the lower bound for RRR when just two embryos are available (≈20% for many scenarios) is still comparable to the upper bound of the HRE
strategy for a greater number of embryos.
Effects of PES on dichotomous vs quantitative traits
Our results demonstrate that, contrary to our previous study reporting only small effects of PES for quantitative traits (Karavani et al., 2019), PES can generate substantial relative risk reductions
for disease under the LRP strategy. To understand the relation between continuous and binary traits, consider an example involving IQ. Our estimate for the mean gain in IQ that could be achieved by
selecting the embryo with the highest IQ polygenic score is approximately ≈2.5 IQ points (Karavani et al. 2019). Now assume that individuals with IQ<70 (2 SDs below the mean) are considered
“affected” according to a dichotomized trait of “cognitive impairment.” Among individuals with IQ<70, the proportion of individuals with IQ in the range [67.5,70] is 33.5% (assuming a normal
distribution). A gain of 2.5 points would shift such offspring beyond the threshold for “cognitive impairment,” resulting in a corresponding 33.5% reduction in risk of being “affected”. (Note that
the above explanation is intended to provide an intuition and ignores any variability in the gain.) Figure 2 – Figure Supplement 3 utilizes statistical modeling (with derived from recent GWAS for
intelligence (Savage et al., 2018)) to demonstrate that substantial risk reductions can be achieved for a dichotomized trait, including when selecting out of just three embryos (panel (A)). Panel (B)
extends these results to data for LDL cholesterol (with derived from (Weissbrod et al., 2021)); given n = 5 embryos and the currently available PRS for LDL-C levels, risk reductions for “high
cholesterol” range from 40-60%, depending on the LDL level used to define the categorical trait. Thus, while implanting the embryo with the most favorable PRS is expected to result in very modest
gains in an underlying quantitative trait, it is at the same time effective in avoiding embryos at the unfavorable tail of the trait.
Effects of parental PRS and disease status
We next examined the effects of parental PRSs on the achievable risk reduction (Materials and Methods), given that families with high genetic risk for a given disease may be more likely to seek PES.
Figure 3 demonstrates that, as expected, the HRE strategy shows greater relative risk reduction as parental PRS increases, in particular when excluding only very high-scoring embryos. This result
follows directly from the fact that, on average, offspring will tend to have PRS scores near the mid-parental PRS value. In contrast, the relative RR (though not the absolute RR; see next section)
for the LRP strategy somewhat declines as parental PRSs increase. Nevertheless, the RRR for the LRP strategy remains greater than that for the HRE strategy across all parameters (as expected by the
definitions of these strategies).
It is also conceivable that families may be more likely to seek PES when one or both prospective parents is affected by a given disease. In Figure 3 - Figure Supplement 1, we plot the RRR under the
HRE and LRP strategies given that the parents are both healthy, both affected, or one of each (where we fixed the prevalence K = 5% and the heritability to ℎ^2 = 40%). The figure illustrates that
parental disease status has relatively little impact on the expected RRR (especially in comparison to the changes under HRE when conditioning on the actual parental PRSs). This is because, as long as
≪ 1, parental disease does not necessarily provide much information about parental PRS, and thus does not strongly constrain the number of risk alleles available to each embryo.
Absolute vs relative risk
The above results were presented in terms of relative risk reductions. However, Figure 3 - Figure Supplement 1 also shows the baseline risk of an embryo of parents with a given disease status. For
example, when one of the parents is affected, selecting the lowest risk embryo out of n = 5 (for a realistic = 0.1) reduces the risk from 10.0% to only 5.8%, thus nearly restoring the risk of the
future child to the population prevalence (5%). More generally, we plot the absolute risk reduction (ARR) under the HRE and LRP strategies in Figure 3 - Figure Supplement 2 for a few values of the
parental PRSs. Notably, while RRRs under the LRP strategy somewhat decrease with increasing parental PRSs, the ARRs substantially increase, in accordance with an expectation that PES in higher-risk
parents should eliminate more disease cases.
The clinical interpretation of these absolute risk changes will vary based on the population prevalence of the disorder (or the baseline risk of specific parents), and can offer a very different
perspective on the magnitude of the effects (Gordis, 2014; Lázaro-Muñoz et al., 2020; Murray et al., 2020). In particular, for a rare disease, large relative risk reductions may result in very small
changes in absolute risk. As an example, schizophrenia is a highly heritable (Sullivan et al., 2003) serious mental illness with prevalence of at most 1% (Perälä et al., 2007). The most recent
large-scale GWAS meta-analysis for schizophrenia (Schizophrenia Working Group of the Psychiatric Genomics Consortium, 2020) has reported that a PRS accounts for approximately 8% of the variance on
the liability scale. Our model shows that a 52% RRR is attainable using the LRP strategy with n = 5 embryos. However, this translates to only ≈0.5 percentage points reduction on the absolute scale: a
randomly-selected embryo would have a 99% chance of not developing schizophrenia, compared to a 99.5% chance for an embryo selected according to LRP. In the case of a more common disease such as type
2 diabetes, with a lifetime prevalence in excess of 20% in the United States (Geiss et al., 2014), the RRR with n = 5 embryos (if the full SNP heritability of 17% (Y. Zhang et al., 2018) were
achieved) is 43%, which would correspond to >8 percentage points reduction in absolute risk.
Variability of the risk reduction across couples
The results depicted in Figure 2 describe the average risk reduction across the population, whereas the results in Figure 3 demonstrate results for specific combinations of parental risk scores.
However, it remains unclear whether the large average risk reductions observed under the LRP strategy are driven by only a small proportion of couples. More generally, we would like to fully
characterize the dependence of the risk reduction on parental PRSs, which could be of interest to physicians and couples in real-world settings.
To address these questions, we define a new risk reduction index, which we term the per-couple relative risk reduction, or pcRRR. Informally, the pcRRR is the relative risk reduction conditional of
the PRSs of the couple. Mathematically, pcRRR(couple) . Here, P[s] (disease | couple) is the probability that the (PRS-based) selected embryo is affected given the PRSs of the couple, and P[r]
(disease | couple) is similarly defined for a randomly selected embryo. Conveniently, the pcRRR depends only on the average of the maternal and paternal PRSs, which we denote as c. We calculated
pcRRR(c) analytically under the LRP strategy (Materials and Methods), as well as computed the distribution of pcRRR(c) across all couples in the population.
We show the distribution of pcRRR(c) in Figure 4, panels (A)-(C). The results demonstrate that the pcRRR is relatively narrowly distributed around its mean, for all values of the prevalence (K)
considered. The distribution becomes somewhat wider (and left-tailed) for the most extreme (0.3). Thus, the population-averaged RRRs are not driven by a small proportion of the couples. In agreement,
the pcRRR depends only weakly on the average parental PRS, as can be seen in panels (D)-(F).
We note that the per-couple relative risk reduction is itself also an average, over all possible batches of n embryos of the couple. One may thus ask what is the distribution of possible RRRs across
these batches. We provide a short discussion in Materials and Methods (Appendix Section 5.3).
Pleiotropic effects of selection on genetically negatively correlated diseases
Polygenic risk scores are often correlated across diseases (Watanabe et al., 2019; Zheng et al., 2017). Therefore, selecting based on the PRS of one disease may increase or decrease risk for other
diseases. While a full analysis of screening for multiple diseases is left for future work, our simulation framework allows us to investigate the potential harmful effects of prioritizing embryos for
one disease, in case that disease is negatively correlated with another disease (Materials and Methods). We considered genetic correlations between diseases taking the values ρ = (−0.05, −0.1, −0.15,
−0.2, −0.3). [The most negative correlation between two diseases reported in LDHub (http://ldsc.broadinstitute.org/ldhub/) is −0.3, occurring between ulcerative colitis and chronic kidney disease (
Zheng et al., 2017).] In general, negative correlations between diseases are uncommon, and when they occur, typical correlations are about −0.1.
Figure 5 shows the simulated risk reduction for the target disease and the risk increase for the correlated disease, across different values of ρ and for three values of the prevalence K (panels (A)-
(C); assumed equal for the two diseases), all under the LRP strategy. In all panels, we used = 0.1 for both diseases. The relative risk reduction for the target disease is, as expected, always higher
in absolute value than the risk increase of the correlated disease. For typical values of ρ = −0.1 and n = 5, the relative increase in risk of the correlated disease is relatively small, at ≈6% for K
≤ 0.05 and ≈3.5% for K = 0.2. However, for strong negative correlation (ρ = −0.3) the increase in risk can reach 22%, 16%, or 11% for K = 0.01, 0.05, and 0.2, respectively. Thus, care must be taken
in the unique setting when the target disease is strongly negatively correlated with another disease.
Simulations based on real genomes from case-control studies
Our analysis so far has been limited to mathematical analysis and simulations based on a statistical model. In principle, it would be desirable to compare our predictions to results based on real
data. However, clearly, no real genomic and phenotypic data exist that would correspond to our setting, nor could such data be ethically or practically generated. Thus, we resort to a “hybrid”
approach, in which we simulate the genomes of embryos based on real genomic data from case-control studies. This approach is similar to the one we have previously used for studying polygenic embryo
screening for traits (Karavani et al., 2019).
Briefly, our approach is as follows. We consider separately two diseases with somewhat differing genetic architecture: schizophrenia, which is amongst the most polygenic complex diseases, with no
common loci of high effect size, and Crohn’s disease, which is estimated to be less polygenic, and has several common loci with much larger effects than those found in schizophrenia (O’Connor et al.,
2019). For each disease, we used genomes of unrelated individuals drawn from case-control studies. For schizophrenia, we used ≈900 cases and ≈1600 controls of Ashkenazi Jewish ancestry, while for
Crohn’s, we used ≈150 cases and ≈100 controls of European ancestry. We then generated “virtual couples” by randomly mating pairs of individuals, regardless of sex or disease status. For each couple,
we simulate the genomes of n hypothetical embryos, based on the laws of Mendelian inheritance and by randomly placing crossovers according to genetic map distances. In parallel, we used the
“parental” genomes to learn a logistic regression model that predicts the disease risk given a PRS computed based on existing summary statistics. We then computed the PRS of each simulated embryo,
and predicted the risk that embryo to be affected. Finally, we compared the risk of disease between a population in which one embryo per couple is selected at random, vs. a population in which one
embryo is selected based on its PRS. For complete details, see Materials and Methods.
In Figure 6, we plot the results for the relative risk reduction for schizophrenia (panels (A) and (B)) and Crohn’s disease (panels (C) and (D)). For each disease, we consider both the HRE and LRP
strategies. The analytical predictions closely match the empirical risk reductions generated in the simulations, except for a slight overestimation of the RRR under the LRP strategy. Nevertheless,
for both schizophrenia and Crohn’s disease, we empirically observe that RRRs as high as ≈45% are achievable with n = 5 embryos. In contrast, under the HRE strategy and when excluding embryos at the
top 2% risk percentiles, risk reductions are very small, in agreement with the theoretical predictions. These results thus provide support to the robustness of our statistical model.
To further investigate the assumptions of our model, we test in Figure 6 – Figure Supplement 1 two intermediate predictions. The first is that the variance of the PRSs of embryos of a given couple
should not depend on the average parental PRS. This is indeed the case (panels (A) and (C)), with the only exception of an uptick of the variance at very low parental PRSs for schizophrenia. The
second prediction is that the variance across embryos is half of the variance in the parental population. The empirical results again show reasonable agreement with the theoretical prediction (panels
(B) and (D)). The empirical variance (averaged across couples) was slightly lower than expected (by ≈4% for schizophrenia and ≈14% for Crohn’s), which may explain our slight overestimation of the
expected RRR under the LRP strategy.
In this paper, we used statistical modeling to evaluate the expected outcomes of screening embryos based on polygenic risk scores for a single disease. We predicted the relative and absolute risk
reductions, either at the population level or at the level of individual couples. Our model is flexible, allowing us to provide predictions across various values of, e.g., the PRS strength, the
disease prevalence, the parental PRS or disease status, and the number of available embryos. We presented a comprehensive analysis of the expected outcomes across various settings, including when
there is a concern about a second disease negatively correlated with the target disease. We finally validated our modeling assumptions using genomes from case-control studies. Our publicly available
code could help researchers and other stakeholders estimate the expected outcomes for settings we did not cover.
Our most notable result was that a crucial determinant of risk reduction is the selection strategy. The use of PRS in adults has focused on those at highest risk (Chatterjee et al., 2016; Dai et al.,
2019; Gibson, 2019; Khera et al., 2018; Mars et al., 2020; Mavaddat et al., 2019; Torkamani et al., 2018), for whom there may be maximal clinical benefit of screening and intervention. However, as
PRSs have relatively low sensitivity, such a strategy is relatively ineffective in reducing the overall population disease burden (Ala-Korpela & Holmes, 2020; Wald & Old, 2019). Similarly, in the
context of PES, exclusion of high-risk embryos will result in relatively modest risk reductions. By contrast, selecting the embryo with the lowest PRS may result in large reductions in relative risk.
While our prior work (Karavani et al. 2019) demonstrated that PES would have a small effect on quantitative traits, here we show that a small reduction in the liability can lead to a large reduction
in the proportion of affected individuals. This is fundamentally a property of a threshold character with an underlying normally distributed continuous liability. For such traits, most of the
individuals in the extreme of the liability distribution (i.e., the ones affected) are concentrated very near the threshold. Thus, even slightly reducing their liability can move a large proportion
of affected individuals below the disease threshold. However, it should be noted that conventional thresholds for defining presence of disease may contain some degree of arbitrariness if the
underlying distribution of pathophysiology is truly continuous. Consequently, the effects on ultimate morbidity may depend on the validity of the threshold itself (Davidson & Kahn, 2016).
We investigated how the range of potential PES outcomes varies with the PRSs of the parents or with their disease status. Under the HRE strategy, if only excluding embryos at the few topmost risk
percentiles, the RRR is very small when the parents have low PRSs, and vice versa (Figure 3, panels (A)- (D)). This is expected, as excluding high PRS embryos will be effective only for couples who
are likely to have many such embryos. Under the LRP strategy, the RRR depends only weakly on the parental PRSs (Figure 3, panels (E)-(H), and Figure 4). Under both strategies, the relative risk
reduction depends only weakly on the parental disease status, as parental disease status is a weak signal for the underlying PRS. However, the absolute risk reduction increases substantially with
increasing parental PRSs (Figure 3 – Figure Supplement 2) and when one or more parents are affected.
Our study has several limitations. First, our results assume an infinitesimal genetic architecture for the disease, which may not be appropriate for oligogenic diseases and is not relevant for
monogenic disorders. However, it has been repeatedly demonstrated that common, complex traits and diseases are highly polygenic (Gazal et al., 2017; Holland et al., 2020; O’Connor et al., 2019; Shi
et al., 2016; Zeng et al., 2018, 2021). For example, it was recently estimated that for almost all traits and diseases examined, the number of independently associated loci was at least ≈350,
reaching ≈10,000 or more for cognitive and psychiatric phenotypes (O’Connor et al., 2019). This provides more than sufficient variability for the PRS to attain a normal distribution in the population
and for our modeling assumptions to hold. Indeed, our empirical results for schizophrenia and Crohn’s disease, two diseases with somewhat different genetic architectures, agreed reasonably well with
the theoretical predictions. However, our models would need to be substantially adjusted in the presence of variants of very large effect, such as inherited or de novo coding variants or copy number
variants, e.g., as in autism (Satterstrom et al., 2020; Takumi & Tamada, 2018).
Additionally, our model relies on several simplifying statistical assumptions. For example, we did not explicitly model assortative mating, although this seems reasonable given that for genetic
disease risk, correlation between parents is weak (Rawlik et al., 2019), and given that our previous study of traits showed no difference in the results between real and random couples (Karavani et
al., 2019). This deficiency is also partly ameliorated by our modeling of the risk reduction when explicitly given the parental PRSs or disease status. Another assumption we made is that
environmental influences on the child’s phenotype are independent of those that have influenced the parents (when conditioning on the parental disease status). However, this is reasonable given that
family-specific environmental effect have been shown to be weak for complex diseases (Wang et al., 2017). For a discussion of additional model assumptions, see Materials and Methods, Appendix section
Perhaps more importantly, we assumed throughout that represents the realistic accuracy of the PRS achievable, within-family, in a real-world setting in the target population. However, the
realistically achievable may be lower than reported in the original publications that have generated the scores. For example, the accuracy of PRSs is sub-optimal when applied in non-European
populations and across different socio-economic groups (Duncan et al., 2019; Mostafavi et al., 2020). A PRS that was tested on adults may be less accurate in the next generation. Additionally, the
variance explained by the score, as estimated in samples of unrelated individuals, is inflated due to population stratification, assortative mating, and indirect parental effects (Kong et al., 2018;
Young et al., 2019; Morris et al., 2020; Mostafavi et al., 2020). The latter, also called “genetic nurture”, refers to trait-modifying environmental effects induced by the parents based on their
genotypes. These effects do not contribute to prediction accuracy when comparing polygenic scores between siblings (as when screening IVF embryos), and thus, the variance explained by polygenic
scores in this setting can be substantially reduced, in particular for cognitive and behavioral traits (Howe et al., 2021; Selzam et al., 2019). Our risk reduction estimates thus represent an upper
bound relative to real-world scenarios. On the other hand, recent empirical work on within-family disease risk prediction showed that the reduction in accuracy is at most modest (Lello et al., 2020),
and within-siblings-GWAS yielded similar results to unrelated-GWAS for most physiological traits (Howe et al., 2021). Additionally, accuracy in non-European populations is rapidly improving due to
the establishment of national biobanks in non-European countries (Koyama et al., 2020; Vujkovic et al., 2020) and improvement in methods for transferring scores into non-European populations (
Amariuta et al., 2020; Cai et al., 2021). Either way, the analytical results presented in this paper are formulated generally as a function of the achievable accuracy , and as such, users can
substitute values relevant to their specific target population and disease.
Another major limitation of this work is that we have only considered screening for a single disease. In reality, couples may seek to profile an embryo on the basis of multiple disease PRSs
simultaneously, or based a global measure of lifespan or healthspan (Sakaue et al., 2020; Timmers et al., 2020; Zenin et al., 2019). This is likely to reduce the per-disease risk reduction, as we
have previously observed for quantitative traits (Karavani et al., 2019), but will also likely be more cost effective (Treff et al., 2020). PES for multiple diseases requires the formulation and
analysis of new selection strategies, and is substantially more mathematically complex; we therefore leave it for future studies.
As our approach was statistical in nature, it is important to place our results in the context of real-world clinical practice of assisted reproductive technology. The number of embryos utilized in
the calculations in the present study refers to viable embryos that could lead to live birth, which can be substantially smaller than the raw number of fertilized oocytes or even the number of
implantable embryos at day 5. This consideration is especially important given the steep drop in risk reduction when the number of available embryos drops below 5 (Figure 2). In fact, many IVF cycles
do not achieve any live birth. Rates of live birth decline with maternal age, in particular after age 40 (Smith et al., 2015); for women age >42, fewer than 4% of IVF cycles result in live births,
making PES impractical. On the other hand, success rates will likely be higher for young prospective parents who seek PES to reduce disease risk but do not suffer from infertility. However, the
prospect of elective IVF for the purpose of PES in such couples must be weighed against the potential risks of these invasive procedures to the mother and child (Dayan et al., 2019; Luke, 2017).
A different concern is whether the embryo biopsy (which is required for genotyping) may cause risk to the viability and future health of the embryo. Several recent studies have demonstrated no
evidence for potential adverse effects of trophectoderm biopsy on rates of successful implantation, fetal anomalies, and live birth (Awadalla et al., 2021; He et al., 2019; Riestenberg et al., 2021;
Tiegs et al., 2021). Moreover, no significant adverse effects have been detected for postnatal child development in a recent meta-analysis (Natsuaki & Dimler, 2018). On the other hand, a number of
studies have reported that trophectoderm biopsy was associated with pregnancy complications, including preterm birth, pre-eclampsia, and hypertensive disorders of pregnancy (Li et al., 2021)(W. Y.
Zhang et al., 2019)(Makhijani et al., 2021). Specific variations in biopsy protocols may account for differences in outcomes across studies (Rubino et al., 2020). Newly developed techniques may allow
in the future to genotype an embryo non-invasively based on DNA present in spent culture medium, although the accuracy of these methods is still being debated (Leaver & Wells, 2020). It should also
be noted that, throughout this manuscript, we assumed the use of single embryo transfer.
Finally, the results of our study invite a debate regarding ethical and social implications. For example, the differential performance of PES across selection strategies and risk reduction metrics
may be difficult to communicate to couples seeking assisted reproductive technologies (Cunningham et al., 2015; Wilkinson et al., 2019). Indeed, in the first PES case report, the couple elected to
forego any implantation despite the availability of embryos that were designated as normal risk (Treff et al. 2019). These difficulties are expected to exacerbate the already profound ethical issues
raised by PES (as we have recently reviewed (Lázaro-Muñoz et al., 2020)), which include stigmatization (McCabe & McCabe, 2011), autonomy (including “choice overload” (Hadar & Sood, 2014)), and equity
(Sueoka, 2016). In addition, the ever-present specter of eugenics (Lombardo, 2018) may be especially salient in the context of the LRP strategy. How to juxtapose these difficulties with the potential
public health benefits of PES is an open question. We thus call for urgent deliberations amongst key stakeholders (including researchers, clinicians, and patients) to address governance of PES and
for the development of policy statements by professional societies. We hope that our statistical framework can provide an empirical foundation for these critical ethical and policy deliberations.
Materials and Methods
Summary of the modeling results
In this section, we provide a brief overview of our model and derivations, with complete details appearing in the Appendix.
Our model is follows. We write the polygenic risk scores of a batch of n IVF embryos as (s[1], …, s[n]), and generate the scores as s[i] = x[i] + c. The (x[1], …, x[n]) are embryo-specific
independent random variables with distribution is the proportion of variance in liability explained by the score, and c is a shared component with distribution , also representing the average of the
maternal and paternal scores.
In each batch, an embryo is selected according to the selection strategy. Under high-risk exclusion, we select a random embryo with score s < z[q]r[ps], where z[q] is the (1 − q)-quantile of the
standard normal distribution. If no such embryo exists, we select a random embryo, but we also studied the rule when in such a case, the lowest scoring embryo is selected. Under lowest-risk
prioritization, we select the embryo with the lowest value of s. We computed the liability of the selected embryo as y = s + e, where . We designate the embryo as affected if y > z[K], where z[K] is
the (1 − K)-quantile of the standard normal distribution and K is the disease prevalence. In the simulations, we computed the disease probability (for each parameter setting) as the fraction of
batches (out of 10^6 repeats) in which the selected embryo was affected. We also simulated the score and disease status of a second disease, which is not used for selecting the embryo, but may be
negatively correlated with the target disease.
We computed the disease probability analytically using the following approaches. We first computed the distribution of the score of the selected embryo. For lowest-risk prioritization, we used the
theory of order statistics. For high-risk exclusion, we first conditioned on the shared component c, and then studied separately the case when all embryos are high-risk (i.e., have score s > z[q]r[ps
]), in which the distribution of the unique component of the selected embryo (x) is a normal variable truncated from below at z[q]r[ps], and the case when at least one embryo has score s < z[q]r[ps],
in which x is a normal variable truncated from above. We then integrated over the non-score liability components (and over c in some of the settings) in order to obtain the probability of being
affected. We solved the integrals in the final expressions numerically in R.
We computed the risk reduction based on the ratio between the risk of a child of a random couple when the embryo was selected by PRS and the population prevalence. We also provide explicit results
for the case when the average parental PRS c is known. These expressions allowed us to compute the distribution of risk reductions per-couple. Finally, when conditioning on the parental disease
status, we integrated the disease probability of the selected embryo over the posterior distribution of the parental score and non-score genetic components. For full details and for an additional
discussion of previous work and limitations, see the Appendix. R code is available at: https://github.com/scarmi/embryo_selection.
Simulations based on genomes from case-control studies
Our main analysis has been limited to mathematical modeling of polygenic scores and their relation to disease risk. For obvious ethical and practical reasons, we could not validate our modeling
predictions with actual experiments. Nevertheless, we could perform realistic simulations based on genomes from case-control studies, similarly to our previous work (Karavani et al., 2019). Our
approach is generally as follows. We consider, separately, two diseases: schizophrenia and Crohn’s. For schizophrenia, we use ≈900 cases and ≈1600 controls of Ashkenazi Jewish ancestry, while for
Crohn’s, we use ≈150 cases and ≈100 controls from the New York area. For each disease, we use these individuals, who are unrelated, to generate “virtual couples” by randomly mating pairs of
individuals. For each such “couple”, we simulate the genomes of n hypothetical embryos, based on the laws of Mendelian inheritance and by randomly placing crossovers according to genetic map
distances. In parallel, we use the same genomes to learn a logistic regression model that predicts the risk of disease given a PRS computed from the most recently available summary statistics
(excluding the samples in our test cohorts). We then compute the PRS of each simulated embryo, and predict the risk of disease of that embryo. We finally compare the risk of disease between one
randomly selected embryo per couple vs one embryo selected based on PRS. In the paragraphs below, we provide additional details.
The Ashkenazi schizophrenia cohort
The samples and the genotyping process were previously described (Lencz et al., 2013). Patients were recruited from hospitalized inpatients at seven medical centres in Israel diagnosed with
schizophrenia or schizoaffective disorder and samples from healthy Ashkenazi individuals were collected from volunteers at the Israeli Blood Bank. All subjects provided written informed consent, and
corresponding institutional review boards and the National Genetic Committee of the Israeli Ministry of Health approved the studies. DNA was extracted from whole blood and genotyped for ∼1 million
genome-wide SNPs using Illumina HumanOmni1-Quad arrays. We performed the following quality control steps. First, we removed samples with (1) genotyping call rate <95%; (2) one of each pair of related
individuals (total shared identical-by-descent (IBD) segments >700cM); and (3) sharing of less than 15cM on average with the rest of the cohort (indicating non-Ashkenazi ancestry). We removed SNPs
with (1) call rate <97%; (2) minor allele frequency <1%; (3) significantly different allele frequencies between males and females (P-value threshold = 0.05/#SNPs); (4) differential missingness
between males and females (P<10^-7) based on a χ^2 test; (5) deviations from Hardy-Weinberg equilibrium in females (P-value threshold = 0.05/#SNPs); (6) SNPs in the HLA region (chr6:24-37M); and (7)
(after phasing) SNPs having A/T or C/G polymorphism, as we could not unambiguously link them to corresponding effect sizes in the summary statistics. We finally used autosomal SNPs only. The
remaining number of individuals was 2,526 (897 cases and 1629 controls), and the number of SNPs was 728,505. We phased the genomes using SHAPEIT v2 (Delaneau et al., 2013).
The Mt Sinai Crohn’s disease cohort
Samples from subjects with Crohn’s disease were recruited from clinics by Mt Sinai providers. All subjects provided written, informed consent in studies approved by the Mt Sinai Institutional Review
Board. Genotyping was performed at the Broad Institute using the Illumina Global Screening Array (GSA) chip, as previously described (Gettler et al., 2021). We phased the genomes using Eagle v2.4.1 (
Loh et al., 2016). We then removed SNPs having A/T or C/G polymorphism. The remaining number of individuals was 257 (154 cases and 103 controls) and the number of SNPs was 560,612.
Simulating couples and embryos
For each disease, we generated 5,000 unique couples by randomly pairing individuals (regardless of their sex) according to the population prevalence of the disease. For example, for schizophrenia,
assuming a prevalence of 1%, a proportion 0.99^2 of the couples were both controls. Given a pair of parents, we simulated 20 offspring (embryos) by specifying the locations of crossovers in each
parent. Recombination was modeled as a Poisson process along the genome, with distances measured in cM using sex-averaged genetic maps (Bhérer et al., 2017). Specifically, for each parent and embryo,
we drew the number of crossovers in each chromosome from a Poisson distribution with mean equal to the chromosome length in Morgan. We then determined the locations of the crossovers by randomly
drawing positions along the chromosome (in Morgan). We mixed the phased paternal and maternal chromosomes of the parent according to the crossover locations, and randomly chose one of the resulting
sequences as the chromosome to be transmitted to the embryo. We repeated for the other parent, in order to form the diploid genome of the embryo.
Developing a polygenic risk score for schizophrenia
We used summary statistics from the most recent schizophrenia GWAS of the Psychiatric Genomics Consortium (PGC) (Schizophrenia Working Group of the Psychiatric Genomics et al., 2020). Note that we
specifically used summary statistics that excluded our Ashkenazi cohort. We used the entire cohort (2526 individuals) to estimate linkage disequilibrium (LD) between SNPs, and performed LD-clumping
on the summary statistics in PLINK (Chang et al., 2015), with a window size of 250kb, a minimum r^2 threshold for clumping of 0.1, a minimum minor allele frequency threshold of 0.01, and a maximum
P-value threshold of 0.05. The P-value threshold was chosen based on results from the PGC study. After clumping, the final score included 23,036 SNPs. To construct the score, we used the effect sizes
reported in the GWAS summary statistics, without additional processing.
Developing a polygenic risk score for Crohn’s disease
We used summary statistics derived from European samples available from https://www.ibdgenetics.org/downloads.html (Liu et al., 2015), which did not include our cohort. We estimated LD using the
entire Crohn’s disease cohort, and performed LD-clumping and P-value thresholding using the same parameters as for the schizophrenia cohort, as described above. The final score included 9,403 SNPs.
Calculating the PRS and the risk of an embryo
For each disease, we calculated polygenic scores for each parent and simulated embryo in PLINK, using the --score command with default parameters. Using the polygenic scores of the parents, we fitted
a logistic regression model for the case/control status as a function of the polygenic scores. We did not adjust for additional covariates: for schizophrenia, genetic ancestry is homogeneous in our
Ashkenazi cohort, and age and sex contributed very little to predictive power (increased AUC from 0.695 only to 0.717). For Crohn’s, age was not available, and sex did not contribute to predictive
power (increased AUC from to 0.693 to 0.695). We adjusted the intercept of the logistic regression models to account for the case-control sampling (Rose & van der Laan, 2008). We then used the model
to predict the probability that a simulated embryo would develop the disease.
To determine the percentiles of the PRS for each disease, we derived an approximation to the distribution of the PRS in the population by fitting a normal distribution to the scores in our dataset.
To take into account the case/control ascertainment, we weighted the case and control samples according to the population prevalence of the disease (1% for schizophrenia (Perälä et al., 2007) and
0.5% for Crohn’s (GBD 2017 Inflammatory Bowel Disease Collaborators, 2020). We calculated the weighted mean and variance of the scores using the wtd.mean and wtd.var functions in the HMisc package in
R. A normal distribution with the resulting mean and variance was used to calculate percentiles of the scores. The percentiles were then used to select (simulated) embryos under the high-risk
exclusion strategy (see below).
Calculating the risk reduction
For each disease, we performed the following simulations. For each selection strategy (either high-risk exclusion or lowest-risk prioritization), we selected one embryo for each couple according to
the strategy, and computed the probability of disease for the selected embryo. We then averaged the risk over all couples. We similarly computed the risk under selection of a random embryo for each
couple. We computed the relative risk reduction based on the ratio between the risk under PRS-based selection and the risk under random selection. To compare to the theoretical expectations, we
estimated the variance explained by the score on the liability scale using the method of Lee et al. (Lee et al., 2012). Specifically, we first computed the correlation between the observed case/
control status (coded as 1 and 0, respectively) and the PRS, and then used Eq. (15) in Lee et al to convert the squared correlation to the variance explained. We obtained = 6.8% for schizophrenia,
which is close to the 7.7% reported in the original GWAS paper (Schizophrenia Working Group of the Psychiatric Genomics et al., 2020), and = 5.6% for Crohn’s disease. We then substituted this value
and prevalence of K = 0.01 for schizophrenia and K = 0.005 for Crohn’s in our formulas for the relative risk reduction.
We thank Gabriel Lázaro-Muñoz, Stacey Pereira, Chaim Jalas, and David A. Zeevi for helpful discussions.
• We have added new analyses examining simulated offspring from "virtual" couples derived from real GWAS data for patients with schizophrenia and patients with Crohn's disease.
in-vitro fertilization
polygenic risk score
polygenic embryo screening
relative risk reduction
absolute risk reduction
high-risk exclusion
lowest-risk prioritization
liability threshold model
per-couple relative risk reduction. | {"url":"https://www.biorxiv.org/content/10.1101/2020.11.05.370478v3.full","timestamp":"2024-11-06T02:06:30Z","content_type":"application/xhtml+xml","content_length":"541703","record_id":"<urn:uuid:973bc630-ae4c-428a-8186-b51d51323477>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00156.warc.gz"} |
Reply To: Experiments Using A Wind Tunnel | Ansys Learning Forum
December 6, 2023 at 10:09 am
To compare the velocity of a miniature sports car in 1:10 scale with a full-scale car, you can use the Reynolds number as a benchmark. The Reynolds number is a dimensionless quantity that describes
the ratio of inertial forces to viscous forces in a fluid flow. It is defined as:
where rho is the density of the fluid, v is the velocity of the fluid, L is a characteristic length scale, and mu is the dynamic viscosity of the fluid.
To maintain the same Reynolds number between the miniature car and the full-scale car, you need to scale the velocity of the miniature car by the ratio of the characteristic length scales. In your
case, the characteristic length scale of the miniature car is 1/10 of the full-scale car. Therefore, you need to scale the velocity of the miniature car by a factor of 10 to maintain the same
Reynolds number.
Reference: https://www.usna.edu/NAOE/_files/documents/Faculty/schultz/uno onlineSchultz,%20Flack%20-%20Reynolds%20Number%20Scaling,%202013.pdf | {"url":"https://innovationspace.ansys.com/forum/forums/reply/326603/","timestamp":"2024-11-02T09:19:04Z","content_type":"text/html","content_length":"192283","record_id":"<urn:uuid:ef69de39-b9a7-440d-82dd-bc26e49ea718>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00646.warc.gz"} |
Convert 2.44 inches to p
2.44″ (Inches, in) - English inch is a value for measuring lengths and distances, heights and widths and etc. One inch is equal to 72.0 points.
On this page we consider in detail all variants for convert 2.44 inches to points and all opportunities how to convert inches with usage comprehensive examples, related charts and conversion tables
for inches. Here you will find all the ways for calculating and converting inches in pt and back. If you want to know how many points are in 2.44 inches you can obtain the answer in several ways:
• convert 2.44 inches using the online conversion on this page;
• calculate 2.44 inches using calculator InchPro from our software collection for offline converting units;
• apply arithmetic calculations and conversions for 2.44 inches outlined in this article.
To indicate inches, we will use the abbreviation "in" to indicate points we will use the abbreviation "pt". All options to convert "in" to "pt" we will look more detail in individual topics below.
So, we're starting explore all avenues of transformation two point four four inches and conversions between inches and points.
Convert 2.44 inches to pt by online conversion
To convert 2.44 inches into points we consider using the online converter on the web page. The online converter has very simple interface and will help us quickly convert our inches. The online inch
converter has an adaptive shape for different devices and therefore for monitors it looks like the left and right input fields but on tablets and mobile phones it looks like the top and bottom input
fields. If you want convert any inch values, you must only enter required value in the left (or top) input field and automatically you get the result in the right (or bottom) field. Under each field
you see a more detailed result of the calculation and the coefficient of 72.0 which is used in the calculations. The big green string, under the input fields - "2.44 Inches = 175.68 Points" further
enhances and shows final result of the conversion. Calculator for converting units of measurement works symmetrically in both directions. If you enter any value in any field, you will get the result
in the opposite field. Clicking on the arrow icons between the input fields you can swap the fields and perform other calculations. We are all made for easily converting any values between inches and
If you came to this page, you already see the result work of the online calculator. In the left (or top) field you see the value of 2.44 "in" on the right (or bottom) box you see the value of result
is equal to 175.68 "pt". Write briefly: 2.44 "in" = 175.68 "pt"
Convert 2.44 inches in pt by conversion tables
We have briefly reviewed how to use the unit Converter on this page, but this is only part of the features of the page service. We made an interesting possibility to compute all possible values for
units of measure in the lower tables. These tables are used to convert basic units of measurement: Metric conversion chart, US Survey conversion chart, International conversion chart, Astronomical
conversion chart. Please, find these 4 tables at the bottom of this page they have the headers:
• All conversions of 2.44 inches in the Metric System Units
• All conversions of 2.44 inches in the US Survey Units
• All conversions of 2.44 inches in the International Units
• All conversions of 2.44 inches in the Astronomical Units
If you enter a test number in any field of web calculator (field of inches or points it doesn't matter), for example 2.44 as it is now, you not only get the result in 175.68 points but also a huge
list of computed values for all unit types in the lower tables. Without doing your own search and making the transition to other pages of the website, you can use our conversion tables to calculate
all the possible results for main units. Try delete and again entering into the calculator a value of 2.44 inches and you will see that all the conversion results in the lower tables will are
recalculated for 2.44 (in). The calculated data in the conversions tables change dynamically and all transformations are performed synchronously with converting inches in the page calculator.
How many points are in 2.44 inches?
To answer this question, we start with a brief definition of inch and point, and their purpose. The inch and point units of length which can be converted one to another using a conversion factor
which is equal to 72.0. This coefficient answers the question how many points are equivalent to one inch. The value of this multiplier determines the basic value to calculate all other lengths, sizes
and other transformations for these units (inch and point), it is enough to know the value, i.e. to remember that 1 inch = 72.0 (pt). Knowing the number of points in one inch by simple multiplication
we can calculate any values. Let's do a simple calculation using the multiplication:
2.44″ × 72.0 = 175.68 (pt)
Thus it is seen that after multiplying by the coefficient we get the following relationship:
2.44 Inches = 175.68 Points
How much is 2.44 inches in points?
We have already seen how to convert these two values and how change inches to points. So in summary, you can write all possible results that have the same meaning.
2.44 inches to pt = 175.68 pt
2.44 inches in pt = 175.68 pt
2.44 inches into pt = 175.68 pt
2.44 in = 175.68 pt
2.44″ = 175.68 pt
2.44″ is 175.68 pt
two point four four inches = 175.68 pt
For a detailed reviewing of similar numbers, visit next pages:
How to convert 2.44 inches into points? All rules and methods.
To convert 2.44 inches into points we can use many ways:
• calculation using the formula;
• calculation using the proportions;
• calculation using the online converter of the current page;
• calculation using the offline calculator "InchPro Decimal".
Calculating 2.44 inches to pt formula for lengths and values.
In the calculations for inches and points, we will use the formula presented below that would quickly get the desired result.
Y (in) × 72.0 = X (pt)
Y - value of inches
X - result in points
That is, you need to remember that 1 inch is equal 72.0 points, and when converting inches just multiply the number of inches (in this case 2.44 inches) by a factor of 72.0. For example, we transform
the set of values 2.44″, 3.44″, 4.44″, 5.44″, 6.44″ into points and get the result in the following examples:
2.44 (in) × 72.0 = 175.68 (pt)
3.44 (in) × 72.0 = 247.68 (pt)
4.44 (in) × 72.0 = 319.68 (pt)
5.44 (in) × 72.0 = 391.68 (pt)
6.44 (in) × 72.0 = 463.68 (pt)
In all variants we multiplied the all inches in range from 2.44″ to 6.44″ with the same ratio of 72.0 and got the correct results in calculations.
The calculation using mathematical proportions to convert 2.44 inches into points
To calculate the proportions you need to know the reference value in points for 1 inch and according to the rules of arithmetic we can calculate any value in points for any length in inches. See the
next examples. We form the proportion for 3 values our inches 2.44″, 3.44″, 4.44″ and calculate results values in points:
1 (in) — 72.0 (pt)
2.44 (in) — X (pt)
Solve the above proportion for X to obtain:
X = 2.44(in) × 72.0(pt) ÷
1(in) = 175.68(pt)
1 (in) — 72.0 (pt)
3.44 (in) — X (pt)
Solve the above proportion for X to obtain:
X = 3.44(in) × 72.0(pt) ÷
1(in) = 247.68(pt)
1 (in) — 72.0 (pt)
4.44 (in) — X (pt)
Solve the above proportion for X to obtain:
X = 4.44(in) × 72.0(pt) ÷
1(in) = 319.68(pt)
All proportions used reference value 1 inch = 72.0 pt
Calculation of values using inch online calculator on the page
You can use our basic universal online converter on the current web page and convert any your length dimensions and distances between inches and points in any directions free and fast.
Currently, the field for inches contains the number 2.44 (in) you can change it. Just enter any number into field for inches (for example any value from our set: 3.44, 4.44, 5.44, 6.44 inches or any
other value) and get the fast result into field for points. How to use the inch online calculator you can more detail read at this link manual for the calculator.
For example, we take the 14 values into inches and will try to calculate the result values in points. Also, we will use the web calculator (you can find it at the top of this page). In the set up a
table in the left margin we write the value in inches in the right margin you see the values that you should obtain after calculation. You can check it right now, without leaving the site and make
sure that the calculator works correctly and quickly. In all calculations, we used the ratio 72.0 which helps us to get the desired values of computation results in points. Please, see the results in
the next table:
Example of Work Inch Online Calculator with Calculation Results
Inches Table Factor Points
5257696 (in) × 72.0 = 378554112.0 (pt)
5257697 (in) × 72.0 = 378554184.0 (pt)
5257698 (in) × 72.0 = 378554256.0 (pt)
5257699 (in) × 72.0 = 378554328.0 (pt)
5257700 (in) × 72.0 = 378554400.0 (pt)
5257701 (in) × 72.0 = 378554472.0 (pt)
5257702 (in) × 72.0 = 378554544.0 (pt)
5257703 (in) × 72.0 = 378554616.0 (pt)
5257704 (in) × 72.0 = 378554688.0 (pt)
5257705 (in) × 72.0 = 378554760.0 (pt)
5257706 (in) × 72.0 = 378554832.0 (pt)
5257707 (in) × 72.0 = 378554904.0 (pt)
5257708 (in) × 72.0 = 378554976.0 (pt)
5257695 (in) × 72.0 = 378554040.0 (pt)
Convert 2.44 inches with the use of calculator "InchPro Decimal"
We're briefly describe the possibility for using our calculator for converting 2.44 inches. The calculator allows you to convert any value for lengths and distances not only in inches but also for
all other units. Our conversion tables which we mentioned earlier are also included in the logic operation of the calculator and all these calculations you can get in one application if you download
and install the software on your computer. Converter easily converts 2.44 "in" for you in offline mode. All the details of the work of this application for conversion of the heights and widths,
lengths, sizes and distances described in inches or other units of measurement you will find in menu "Software" of this site or by the link: InchPro Decimal. Please, also see the screenshots for
Visual charts conversion of 2.44 inches.
Many people can hardly imagine the relationship between inch and point. In this picture, you can clearly see the ratio of these quantities to understand them in real life. The ratio of the lengths of
the segments is retained on screens with any resolution as for large monitors as well as for small mobile devices.
The graphical representation of scales for comparing values.
The graph shows the relative values the inches in the form of rectangular segments of different lengths and colors. As well as the visual representation of 2.44 (in) with the reference value in
The graphs of the relationship between inches and points are expressed in the following colours:
• Green is the original length or distance in inches;
• Blue color is the scale in inches;
• Yellow color is the scale in points.
The scale may increase or decrease depending on the current number value on the page. The diagram shows the ratio between inches and pt for the same lengths and magnitude (see charts of the blue and
yellow colors).
Thu 14 Nov 2024 | {"url":"http://inchpro.com/conversion/2.44-inches-to-points/","timestamp":"2024-11-14T12:17:55Z","content_type":"text/html","content_length":"172667","record_id":"<urn:uuid:e69e2491-8cbf-46c5-8a89-8ceec2750f37>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00417.warc.gz"} |
The time value of money refers to the fact that there is normally a greater benefit to receiving a sum of money now rather than an identical sum later. It may be seen as an implication of the
later-developed concept of time preference.
The present value of $1,000, 100 years into the future. Curves represent constant discount rates of 2%, 3%, 5%, and 7%.
The time value of money refers to the observation that it is better to receive money sooner than later. Money you have today can be invested to earn a positive rate of return, producing more money
tomorrow. Therefore, a dollar today is worth more than a dollar in the future.^[1]
The time value of money is among the factors considered when weighing the opportunity costs of spending rather than saving or investing money. As such, it is among the reasons why interest is paid or
earned: interest, whether it is on a bank deposit or debt, compensates the depositor or lender for the loss of their use of their money. Investors are willing to forgo spending their money now only
if they expect a favorable net return on their investment in the future, such that the increased value to be available later is sufficiently high to offset both the preference to spending money now
and inflation (if present); see required rate of return.
The Talmud (~500 CE) recognizes the time value of money. In Tractate Makkos page 3a the Talmud discusses a case where witnesses falsely claimed that the term of a loan was 30 days when it was
actually 10 years. The false witnesses must pay the difference of the value of the loan "in a situation where he would be required to give the money back (within) thirty days..., and that same sum in
a situation where he would be required to give the money back (within) 10 years...The difference is the sum that the testimony of the (false) witnesses sought to have the borrower lose; therefore, it
is the sum that they must pay."^[2]
The notion was later described by Martín de Azpilcueta (1491–1586) of the School of Salamanca.
Time value of money problems involve the net value of cash flows at different points in time.
In a typical case, the variables might be: a balance (the real or nominal value of a debt or a financial asset in terms of monetary units), a periodic rate of interest, the number of periods, and a
series of cash flows. (In the case of a debt, cash flows are payments against principal and interest; in the case of a financial asset, these are contributions to or withdrawals from the balance.)
More generally, the cash flows may not be periodic but may be specified individually. Any of these variables may be the independent variable (the sought-for answer) in a given problem. For example,
one may know that: the interest is 0.5% per period (per month, say); the number of periods is 60 (months); the initial balance (of the debt, in this case) is 25,000 units; and the final balance is 0
units. The unknown variable may be the monthly payment that the borrower must pay.
For example, £100 invested for one year, earning 5% interest, will be worth £105 after one year; therefore, £100 paid now and £105 paid exactly one year later both have the same value to a recipient
who expects 5% interest assuming that inflation would be zero percent. That is, £100 invested for one year at 5% interest has a future value of £105 under the assumption that inflation would be zero
This principle allows for the valuation of a likely stream of income in the future, in such a way that annual incomes are discounted and then added together, thus providing a lump-sum "present value"
of the entire income stream; all of the standard calculations for time value of money derive from the most basic algebraic expression for the present value of a future sum, "discounted" to the
present by an amount equal to the time value of money. For example, the future value sum ${\displaystyle FV}$ to be received in one year is discounted at the rate of interest ${\displaystyle r}$ to
give the present value sum ${\displaystyle PV}$ :
${\displaystyle PV={\frac {FV}{(1+r)}}}$
Some standard calculations based on the time value of money are:
• Present value: The current worth of a future sum of money or stream of cash flows, given a specified rate of return. Future cash flows are "discounted" at the discount rate; the higher the
discount rate, the lower the present value of the future cash flows. Determining the appropriate discount rate is the key to valuing future cash flows properly, whether they be earnings or
• Present value of an annuity: An annuity is a series of equal payments or receipts that occur at evenly spaced intervals. Leases and rental payments are examples. The payments or receipts occur at
the end of each period for an ordinary annuity while they occur at the beginning of each period for an annuity due.
Present value of a perpetuity is an infinite and constant stream of identical cash flows.^[5]
• Future value: The value of an asset or cash at a specified date in the future, based on the value of that asset in the present.^[6]
• Future value of an annuity (FVA): The future value of a stream of payments (annuity), assuming the payments are invested at a given rate of interest.
There are several basic equations that represent the equalities listed above. The solutions may be found using (in most cases) the formulas, a financial calculator or a spreadsheet. The formulas are
programmed into most financial calculators and several spreadsheet functions (such as PV, FV, RATE, NPER, and PMT).^[7]
For any of the equations below, the formula may also be rearranged to determine one of the other unknowns. In the case of the standard annuity formula, there is no closed-form algebraic solution for
the interest rate (although financial calculators and spreadsheet programs can readily determine solutions through rapid trial and error algorithms).
These equations are frequently combined for particular uses. For example, bonds can be readily priced using these equations. A typical coupon bond is composed of two types of payments: a stream of
coupon payments similar to an annuity, and a lump-sum return of capital at the end of the bond's maturity—that is, a future payment. The two formulas can be combined to determine the present value of
the bond.
An important note is that the interest rate i is the interest rate for the relevant period. For an annuity that makes one payment per year, i will be the annual interest rate. For an income or
payment stream with a different payment schedule, the interest rate must be converted into the relevant periodic interest rate. For example, a monthly rate for a mortgage with monthly payments
requires that the interest rate be divided by 12 (see the example below). See compound interest for details on converting between different periodic interest rates.
The rate of return in the calculations can be either the variable solved for, or a predefined variable that measures a discount rate, interest, inflation, rate of return, cost of equity, cost of debt
or any number of other analogous concepts. The choice of the appropriate rate is critical to the exercise, and the use of an incorrect discount rate will make the results meaningless.
For calculations involving annuities, it must be decided whether the payments are made at the end of each period (known as an ordinary annuity), or at the beginning of each period (known as an
annuity due). When using a financial calculator or a spreadsheet, it can usually be set for either calculation. The following formulas are for an ordinary annuity. For the answer for the present
value of an annuity due, the PV of an ordinary annuity can be multiplied by (1 + i).
The following formula use these common variables:
• PV is the value at time zero (present value)
• FV is the value at time n (future value)
• A is the value of the individual payments in each compounding period
• n is the number of periods (not necessarily an integer)
• i is the interest rate at which the amount compounds each period
• g is the growing rate of payments over each time period
Future value of a present sum
The future value (FV) formula is similar and uses the same variables.
${\displaystyle FV\ =\ PV\cdot (1+i)^{n}}$
Present value of a future sum
The present value formula is the core formula for the time value of money; each of the other formulas is derived from this formula. For example, the annuity formula is the sum of a series of present
value calculations.
The present value (PV) formula has four variables, each of which can be solved for by numerical methods:
${\displaystyle PV\ =\ {\frac {FV}{(1+i)^{n}}}}$
The cumulative present value of future cash flows can be calculated by summing the contributions of FV[t], the value of cash flow at time t:
${\displaystyle PV\ =\ \sum _{t=1}^{n}{\frac {FV_{t}}{(1+i)^{t}}}}$
Note that this series can be summed for a given value of n, or when n is ∞.^[8] This is a very general formula, which leads to several important special cases given below.
Present value of an annuity for n payment periods
In this case the cash flow values remain the same throughout the n periods. The present value of an annuity (PVA) formula has four variables, each of which can be solved for by numerical methods:
${\displaystyle PV(A)\,=\,{\frac {A}{i}}\cdot \left[{1-{\frac {1}{\left(1+i\right)^{n}}}}\right]}$
To get the PV of an annuity due, multiply the above equation by (1 + i).
Present value of a growing annuity
In this case each cash flow grows by a factor of (1+g). Similar to the formula for an annuity, the present value of a growing annuity (PVGA) uses the same variables with the addition of g as the rate
of growth of the annuity (A is the annuity payment in the first period). This is a calculation that is rarely provided for on financial calculators.
Where i ≠ g :
${\displaystyle PV(A)\,=\,{A \over (i-g)}\left[1-\left({1+g \over 1+i}\right)^{n}\right]}$
Where i = g :
${\displaystyle PV(A)\,=\,{A\times n \over 1+i}}$
To get the PV of a growing annuity due, multiply the above equation by (1 + i).
Present value of a perpetuity
A perpetuity is payments of a set amount of money that occur on a routine basis and continue forever. When n → ∞, the PV of a perpetuity (a perpetual annuity) formula becomes a simple division.
${\displaystyle PV(P)\ =\ {A \over i}}$
Present value of a growing perpetuity
When the perpetual annuity payment grows at a fixed rate (g, with g < i) the value is determined according to the following formula, obtained by setting n to infinity in the earlier formula for a
growing perpetuity:
${\displaystyle PV(A)\,=\,{A \over i-g}}$
In practice, there are few securities with precise characteristics, and the application of this valuation approach is subject to various qualifications and modifications. Most importantly, it is rare
to find a growing perpetual annuity with fixed rates of growth and true perpetual cash flow generation. Despite these qualifications, the general approach may be used in valuations of real estate,
equities, and other assets.
This is the well known Gordon growth model used for stock valuation.
Future value of an annuity
The future value (after n periods) of an annuity (FVA) formula has four variables, each of which can be solved for by numerical methods:
${\displaystyle FV(A)\,=\,A\cdot {\frac {\left(1+i\right)^{n}-1}{i}}}$
To get the FV of an annuity due, multiply the above equation by (1 + i).
Future value of a growing annuity
The future value (after n periods) of a growing annuity (FVA) formula has five variables, each of which can be solved for by numerical methods:
Where i ≠ g :
${\displaystyle FV(A)\,=\,A\cdot {\frac {\left(1+i\right)^{n}-\left(1+g\right)^{n}}{i-g}}}$
Where i = g :
${\displaystyle FV(A)\,=\,A\cdot n(1+i)^{n-1}}$
Formula table
The following table summarizes the different formulas commonly used in calculating the time value of money.^[9] These values are often displayed in tables where the interest rate and time are
Find Given Formula
Future value (F) Present value (P) ${\displaystyle F=P\cdot (1+i)^{n}}$
Present value (P) Future value (F) ${\displaystyle P=F\cdot (1+i)^{-n}}$
Repeating payment (A) Future value (F) ${\displaystyle A=F\cdot {\frac {i}{(1+i)^{n}-1}}}$
Repeating payment (A) Present value (P) ${\displaystyle A=P\cdot {\frac {i(1+i)^{n}}{(1+i)^{n}-1}}}$
Future value (F) Repeating payment (A) ${\displaystyle F=A\cdot {\frac {(1+i)^{n}-1}{i}}}$
Present value (P) Repeating payment (A) ${\displaystyle P=A\cdot {\frac {(1+i)^{n}-1}{i(1+i)^{n}}}}$
Future value (F) Initial gradient payment (G) ${\displaystyle F=G\cdot {\frac {(1+i)^{n}-in-1}{i^{2}}}}$
Present value (P) Initial gradient payment (G) ${\displaystyle P=G\cdot {\frac {(1+i)^{n}-in-1}{i^{2}(1+i)^{n}}}}$
Fixed payment (A) Initial gradient payment (G) ${\displaystyle A=G\cdot \left[{\frac {1}{i}}-{\frac {n}{(1+i)^{n}-1}}\right]}$
Initial exponentially increasing payment (D) ${\displaystyle F=D\cdot {\frac {(1+g)^{n}-(1+i)^{n}}{g-i}}}$ (for i ≠ g)
Future value (F)
Increasing percentage (g) ${\displaystyle F=D\cdot {\frac {n(1+i)^{n}}{1+g}}}$ (for i = g)
Initial exponentially increasing payment (D) ${\displaystyle P=D\cdot {\frac {\left({1+g \over 1+i}\right)^{n}-1}{g-i}}}$ (for i ≠ g)
Present value (P)
Increasing percentage (g) ${\displaystyle P=D\cdot {\frac {n}{1+g}}}$ (for i = g)
• A is a fixed payment amount, every period
• G is the initial payment amount of an increasing payment amount, that starts at G and increases by G for each subsequent period.
• D is the initial payment amount of an exponentially (geometrically) increasing payment amount, that starts at D and increases by a factor of (1+g) each subsequent period.
Annuity derivation
The formula for the present value of a regular stream of future payments (an annuity) is derived from a sum of the formula for future value of a single future payment, as below, where C is the
payment amount and n the period.
A single payment C at future time m has the following future value at future time n:
${\displaystyle FV\ =C(1+i)^{n-m}}$
Summing over all payments from time 1 to time n, then reversing t
${\displaystyle FVA\ =\sum _{m=1}^{n}C(1+i)^{n-m}\ =\sum _{k=0}^{n-1}C(1+i)^{k}}$
Note that this is a geometric series, with the initial value being a = C, the multiplicative factor being 1 + i, with n terms. Applying the formula for geometric series, we get
${\displaystyle FVA\ ={\frac {C(1-(1+i)^{n})}{1-(1+i)}}\ ={\frac {C(1-(1+i)^{n})}{-i}}}$
The present value of the annuity (PVA) is obtained by simply dividing by ${\displaystyle (1+i)^{n}}$ :
${\displaystyle PVA\ ={\frac {FVA}{(1+i)^{n}}}={\frac {C}{i}}\left(1-{\frac {1}{(1+i)^{n}}}\right)}$
Another simple and intuitive way to derive the future value of an annuity is to consider an endowment, whose interest is paid as the annuity, and whose principal remains constant. The principal of
this hypothetical endowment can be computed as that whose interest equals the annuity payment amount:
${\displaystyle {\text{Principal}}\times i=C}$
${\displaystyle {\text{Principal}}={\frac {C}{i}}+{\text{goal}}}$
Note that no money enters or leaves the combined system of endowment principal + accumulated annuity payments, and thus the future value of this system can be computed simply via the future value
${\displaystyle FV=PV(1+i)^{n}}$
Initially, before any payments, the present value of the system is just the endowment principal, ${\displaystyle PV={\frac {C}{i}}}$ . At the end, the future value is the endowment principal (which
is the same) plus the future value of the total annuity payments (${\displaystyle FV={\frac {C}{i}}+FVA}$ ). Plugging this back into the equation:
${\displaystyle {\frac {C}{i}}+FVA={\frac {C}{i}}(1+i)^{n}}$
${\displaystyle FVA={\frac {C}{i}}\left[\left(1+i\right)^{n}-1\right]}$
Perpetuity derivation
Without showing the formal derivation here, the perpetuity formula is derived from the annuity formula. Specifically, the term:
${\displaystyle \left({1-{1 \over {(1+i)^{n}}}}\right)}$
can be seen to approach the value of 1 as n grows larger. At infinity, it is equal to 1, leaving ${\displaystyle {C \over i}}$ as the only term remaining.
Continuous compounding
Rates are sometimes converted into the continuous compound interest rate equivalent because the continuous equivalent is more convenient (for example, more easily differentiated). Each of the
formulas above may be restated in their continuous equivalents. For example, the present value at time 0 of a future payment at time t can be restated in the following way, where e is the base of the
natural logarithm and r is the continuously compounded rate:
${\displaystyle {\text{PV}}={\text{FV}}\cdot e^{-rt}}$
This can be generalized to discount rates that vary over time: instead of a constant discount rate r, one uses a function of time r(t). In that case the discount factor, and thus the present value,
of a cash flow at time T is given by the integral of the continuously compounded rate r(t):
${\displaystyle {\text{PV}}={\text{FV}}\cdot \exp \left(-\int _{0}^{T}r(t)\,dt\right)}$
Indeed, a key reason for using continuous compounding is to simplify the analysis of varying discount rates and to allow one to use the tools of calculus. Further, for interest accrued and
capitalized overnight (hence compounded daily), continuous compounding is a close approximation for the actual daily compounding. More sophisticated analysis includes the use of differential
equations, as detailed below.
Using continuous compounding yields the following formulas for various instruments:
${\displaystyle \ PV\ =\ {A(1-e^{-rt}) \over e^{r}-1}}$
${\displaystyle \ PV\ =\ {A \over e^{r}-1}}$
Growing annuity
${\displaystyle \ PV\ =\ {Ae^{-g}(1-e^{-(r-g)t}) \over e^{(r-g)}-1}}$
Growing perpetuity
${\displaystyle \ PV\ =\ {Ae^{-g} \over e^{(r-g)}-1}}$
Annuity with continuous payments
${\displaystyle \ PV\ =\ {1-e^{(-rt)} \over r}}$
These formulas assume that payment A is made in the first payment period and annuity ends at time t.^[10]
Differential equations
Ordinary and partial differential equations (ODEs and PDEs)—equations involving derivatives and one (respectively, multiple) variables are ubiquitous in more advanced treatments of financial
mathematics. While time value of money can be understood without using the framework of differential equations, the added sophistication sheds additional light on time value, and provides a simple
introduction before considering more complicated and less familiar situations. This exposition follows (Carr & Flesaker 2006, pp. 6–7).
The fundamental change that the differential equation perspective brings is that, rather than computing a number (the present value now), one computes a function (the present value now or at any
point in future). This function may then be analyzed—how does its value change over time?—or compared with other functions.
Formally, the statement that "value decreases over time" is given by defining the linear differential operator ${\displaystyle {\mathcal {L}}}$ as:
${\displaystyle {\mathcal {L}}:=-\partial _{t}+r(t).}$
This states that value decreases (−) over time (∂[t]) at the discount rate (r(t)). Applied to a function it yields:
${\displaystyle {\mathcal {L}}f=-\partial _{t}f(t)+r(t)f(t).}$
For an instrument whose payment stream is described by f(t), the value V(t) satisfies the inhomogeneous first-order ODE ${\displaystyle {\mathcal {L}}V=f}$ ("inhomogeneous" is because one has f
rather than 0, and "first-order" is because one has first derivatives but no higher derivatives)—this encodes the fact that when any cash flow occurs, the value of the instrument changes by the
value of the cash flow (if you receive a £10 coupon, the remaining value decreases by exactly £10).
The standard technique tool in the analysis of ODEs is Green's functions, from which other solutions can be built. In terms of time value of money, the Green's function (for the time value ODE) is
the value of a bond paying £1 at a single point in time u—the value of any other stream of cash flows can then be obtained by taking combinations of this basic cash flow. In mathematical terms,
this instantaneous cash flow is modeled as a Dirac delta function ${\displaystyle \delta _{u}(t):=\delta (t-u).}$
The Green's function for the value at time t of a £1 cash flow at time u is
${\displaystyle b(t;u):=H(u-t)\cdot \exp \left(-\int _{t}^{u}r(v)\,dv\right)}$
where H is the Heaviside step function – the notation "${\displaystyle ;u}$ " is to emphasize that u is a parameter (fixed in any instance—the time when the cash flow will occur), while t is a
variable (time). In other words, future cash flows are exponentially discounted (exp) by the sum (integral, ${\displaystyle \textstyle {\int }}$ ) of the future discount rates (${\displaystyle \
textstyle {\int _{t}^{u}}}$ for future, r(v) for discount rates), while past cash flows are worth 0 (${\displaystyle H(u-t)=1{\text{ if }}t<u,0{\text{ if }}t>u}$ ), because they have already
occurred. Note that the value at the moment of a cash flow is not well-defined—there is a discontinuity at that point, and one can use a convention (assume cash flows have already occurred, or not
already occurred), or simply not define the value at that point.
In case the discount rate is constant, ${\displaystyle r(v)\equiv r,}$ this simplifies to
${\displaystyle b(t;u)=H(u-t)\cdot e^{-(u-t)r}={\begin{cases}e^{-(u-t)r}&t<u\\0&t>u,\end{cases}}}$
where ${\displaystyle (u-t)}$ is "time remaining until cash flow".
Thus for a stream of cash flows f(u) ending by time T (which can be set to ${\displaystyle T=+\infty }$ for no time horizon) the value at time t, ${\displaystyle V(t;T)}$ is given by combining the
values of these individual cash flows:
${\displaystyle V(t;T)=\int _{t}^{T}f(u)b(t;u)\,du.}$
This formalizes time value of money to future values of cash flows with varying discount rates, and is the basis of many formulas in financial mathematics, such as the Black–Scholes formula with
varying interest rates.
See also
• Carr, Peter; Flesaker, Bjorn (2006), Robust Replication of Default Contingent Claims (presentation slides) (PDF), Bloomberg LP, archived from the original (PDF) on 2009-02-27. See also Audio
Presentation and paper.{{citation}}: External link in |postscript= (help)CS1 maint: postscript (link)
• Crosson, S.V., and Needles, B.E.(2008). Managerial Accounting (8th Ed). Boston: Houghton Mifflin Company.
External links
• Time Value of Money ebook | {"url":"https://www.knowpia.com/knowpedia/Time_value_of_money","timestamp":"2024-11-11T14:23:39Z","content_type":"text/html","content_length":"233666","record_id":"<urn:uuid:ebfd9a6c-fd05-4055-a629-5e3249a2f4f7>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00343.warc.gz"} |
Dropping the Independence: Singular Values for Products of Two Coupled Random Matrices
We study the singular values of the product of two coupled rectangular random matrices as a determinantal point process. Each of the two factors is given by a parameter dependent linear combination
of two independent, complex Gaussian random matrices, which is equivalent to a coupling of the two factors via an Itzykson-Zuber term. We prove that the squared singular values of such a product form
a biorthogonal ensemble and establish its exact solvability. The parameter dependence allows us to interpolate between the singular value statistics of the Laguerre ensemble and that of the product
of two independent complex Ginibre ensembles which are both known. We give exact formulae for the correlation kernel in terms of a complex double contour integral, suitable for the subsequent
asymptotic analysis. In particular, we derive a Christoffel–Darboux type formula for the correlation kernel, based on a five term recurrence relation for our biorthogonal functions. It enables us to
find its scaling limit at the origin representing a hard edge. The resulting limiting kernel coincides with the universal Meijer G-kernel found by several authors in different ensembles. We show that
the central limit theorem holds for the linear statistics of the singular values and give the limiting variance explicitly.
Bibliographical note
Publisher Copyright:
© 2016, Springer-Verlag Berlin Heidelberg.
Dive into the research topics of 'Dropping the Independence: Singular Values for Products of Two Coupled Random Matrices'. Together they form a unique fingerprint. | {"url":"https://cris.huji.ac.il/en/publications/dropping-the-independence-singular-values-for-products-of-two-cou","timestamp":"2024-11-12T02:29:15Z","content_type":"text/html","content_length":"49847","record_id":"<urn:uuid:3f8b8742-30d9-4457-bf59-81059844e271>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00390.warc.gz"} |
Time series with filled area and custom facetting in Matplotlib
This page showcases the work of Georgios Karamanis, built for the TidyTuesday initiative. You can find the original code on his github repository here, written in R.
Thanks to him for accepting sharing his work here! Thanks also to Tomás Capretto who translated this work from R to Python! 🙏🙏
As a teaser, here is the plot we’re gonna try building:
Load libraries
Let's load libraries and utilities that are going to be used today.
import matplotlib.patches as patches # for the legend
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from matplotlib.lines import Line2D # for the legend
The following sets the default font to "Fira Sans Compressed". This post also makes use of the font "KyivType Sans" later. For a step-by-step guide on how to install and load custom fonts in
Matplotlib, have a look at this post.
plt.rcParams.update({"font.family": "Fira Sans Compressed"})
Load the dataset
This guide shows how to create a highly customized and beautiful multi-panel lineplot to visualize the evolution of animal rescues by the London fire brigade for the different boroughs in the city.
The data for this post originally comes from London.gov by way of Data is Plural and Georgios Karamanis. This guide uses the dataset released for the TidyTuesday initiative on the week of 2021-06-29.
You can find the original announcement and more information about the data here. Thank you all for making this work possible!
# Read data
animal_rescues = pd.read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2021/2021-06-29/animal_rescues.csv")
# Capitalize the type of animal
animal_rescues["animal_group_parent"] = animal_rescues["animal_group_parent"].str.capitalize()
# Explore first observations
│ │incident_number│date_time_of_call│cal_year│fin_year│type_of_incident│pump_count│pump_hours_total│hourly_notional_cost│incident_notional_cost│final_description│...│ uprn │ street │ usrn │postcode_district│easting_m│northing_m│easting_rounded│northing_rounded│latitude │longitude│
│0│139091.0 │01/01/2009 03:01 │2009 │2008/09 │Special Service │1.0 │2.0 │255 │510.0 │Redacted │...│NaN │Waddington │20500146.0│SE19 │NaN │NaN │532350 │170050 │NaN │NaN │
│ │ │ │ │ │ │ │ │ │ │ │ │ │Way │ │ │ │ │ │ │ │ │
│1│275091.0 │01/01/2009 08:51 │2009 │2008/09 │Special Service │1.0 │1.0 │255 │255.0 │Redacted │...│NaN │Grasmere Road│NaN │SE25 │534785.0 │167546.0 │534750 │167550 │51.390954│-0.064167│
│2│2075091.0 │04/01/2009 10:07 │2009 │2008/09 │Special Service │1.0 │1.0 │255 │255.0 │Redacted │...│NaN │Mill Lane │NaN │SM5 │528041.0 │164923.0 │528050 │164950 │51.368941│-0.161985│
│3│2872091.0 │05/01/2009 12:27 │2009 │2008/09 │Special Service │1.0 │1.0 │255 │255.0 │Redacted │...│1.000210e+11│Park Lane │21401484.0│UB9 │504689.0 │190685.0 │504650 │190650 │51.605283│-0.489684│
│4│3553091.0 │06/01/2009 15:23 │2009 │2008/09 │Special Service │1.0 │1.0 │255 │255.0 │Redacted │...│NaN │Swindon Lane │21300122.0│RM3 │NaN │NaN │554650 │192350 │NaN │NaN │
5 rows × 31 columns
The original work, made in R, uses a library called geofacet that imports grid layouts from the grid-designer repository. This repository has a lot of different grid layouts that represent the actual
geographical layout of a large variety of neighborhoods within cities, cities within states, or even states within countries. Given that each borough in London represents a panel in today's viz, we
import the file named gb_london_boroughs_grid which represents the layout of the boroughs in London.
gb_london_boroughs_grid = pd.read_csv("https://raw.githubusercontent.com/hafen/grid-designer/master/grids/gb_london_boroughs_grid.csv")
borough_names = gb_london_boroughs_grid.rename(columns={"code_ons": "borough_code"})
│ │row│col│borough_code │ name │
│0│4 │5 │E09000001 │City of London │
│1│4 │8 │E09000002 │Barking and Dagenham │
│2│2 │4 │E09000003 │Barnet │
│3│5 │8 │E09000004 │Bexley │
│4│3 │3 │E09000005 │Brent │
Let's process the data a little:
# Keep rescues that happeend before 2021
rescues_borough = animal_rescues.query("cal_year < 2021").reset_index()
# We're interested on whether it is a Cat or another type of animal.
rescues_borough["animal_group_parent"] = np.where(
rescues_borough["animal_group_parent"] == "Cat", "Cat", "Not_Cat"
# Count the number of rescues per year, borough, and type fo animal
rescues_borough = (
rescues_borough.groupby(["cal_year", "borough_code", "animal_group_parent"])
# Make the dataset wider.
# There is one column for the number of cat rescues, and
# another column for the number of other animal rescues
rescues_borough = rescues_borough.pivot(
index=["cal_year", "borough_code"],
# Merge the data with the info about the grid layout
rescues_borough = pd.merge(rescues_borough, borough_names, how="left", on="borough_code")
rescues_borough = rescues_borough.dropna(subset=["name"])
The next step is to subtract 1 from the "row" and "col" columns. This is needed because the grid imported is 1-base indexed, while Python is 0-base indexed.
rescues_borough["row"] -= 1
rescues_borough["col"] -= 1
Now, let's create three arrays of values. The first is going to represent the name of the borough, the second represents the row position for that borough, and the last one represents the column
df_idxs = rescues_borough[["row", "col", "name"]].drop_duplicates()
NAMES = df_idxs["name"].values
ROWS = df_idxs["row"].values.astype(int)
COLS = df_idxs["col"].values.astype(int)
It's going to be clearer with an example:
print(f"Borough: {NAMES[0]}, row: {ROWS[0]}, col: {COLS[0]}")
Borough: Barking and Dagenham, row: 3, col: 7
which means the borough named "Barking and Dagenham" is going to be located in the panel given by the intersection of the fourth row and eighth column.
As usual, let's get started by defining some colors that are going to be used throughout the whole chart.
BLUE = "#3D85F7"
BLUE_LIGHT = "#5490FF"
PINK = "#C32E5A"
PINK_LIGHT = "#D34068"
GREY40 = "#666666"
GREY25 = "#404040"
GREY20 = "#333333"
BACKGROUND = "#F5F4EF"
As you may recall from above, today's chart consists of several lineplots that are set in a very custom layout. Let's start by trying to create only one of the subplots in the figure. This will be
very helpful to understand all the tricks and details behind this wonderful chart.
# Initialize figure and axis
fig, ax = plt.subplots(figsize=(8, 5))
# Let's say we select the borough named "Enfield"
df = rescues_borough[rescues_borough["name"] == "Enfield"]
# YEAR represents the x-axis
YEAR = df["cal_year"].values
# There are two variables for the y-axis:
# the count for the cat rescues, and the count for non-cat rescues.
CAT = df["Cat"].values
NOT_CAT = df["Not_Cat"].values
# Add lines
ax.plot(YEAR, CAT, color=BLUE)
ax.plot(YEAR, NOT_CAT, color=PINK)
# Add fill between the two lines.
# Two `fill_between` calls are needed to have two different colors.
# First, a fill when CAT is larger than NOT_CAT
YEAR, CAT, NOT_CAT, where=(CAT > NOT_CAT),
interpolate=True, color=BLUE_LIGHT, alpha=0.3
# Then, a fill when CAT is not larger than NOT_CAT
YEAR, CAT, NOT_CAT, where=(CAT <= NOT_CAT),
interpolate=True, color=PINK_LIGHT, alpha=0.3
# Note:
# Setting `interpolate` to `True` calculates the intersection point
# between the two lines and extends the filled region up to this point.
Customize layout
The chart above is a good start. Not too hard, not too impressive. The subplots in the original chart look much better. Let's improve this one too!
This step consists of tweaking many details in the layout. Have a look at the comments to follow along step-by-step!
# Change the background color of both the axis and the figure
# Customize x-axis ticks
# Note there are both major and minor ticks.
xticks = [2010, 2015, 2020]
ax.set_xticks(xticks) # major ticks
ax.set_xticks([2012.5, 2017.5], minor=True)
# Set a grey color for the labels
ax.set_xticklabels(xticks, color=GREY40)
# Customize y-axis ticks.
# Also uses both minor and major ticks
yticks = [0, 10, 20]
ax.set_yticks([5, 15, 25], minor=True)
ax.set_yticklabels(yticks, color=GREY40)
# Also set a slightly larger range for the y-axis limit.
ax.set_ylim((-1, 26))
# Add grid lines.
# Note minor and major lines have different styles applied.
ax.grid(which="minor", lw=0.4, alpha=0.4)
ax.grid(which="major", lw=0.8, alpha=0.4)
# Remove tick marks by setting their length to zero on both axis.
ax.yaxis.set_tick_params(which="both", length=0)
ax.xaxis.set_tick_params(which="both", length=0)
# Remove all the spines by setting their color to "none"
# And finally add the title
ax.set_title("Enfield", weight="bold", color=GREY20)
Multi panel plot
The original plot is made of many of plots like the one above. So far, we've successfully replicated only a single panel (or subplot in Matplotlib's jargon). All of that can be reused in this
Let's start by defining a function that encapsulates all the steps performed above. Some comments are added to explain little changes.
## Here's a summary of the meaning of the arguments in the function
# x: array of values for the year
# y1: array of values for the number of cat rescues
# y2: array of values for the number of non-cat rescues
# name: name of the borough
# ax: the Matplotlib axis where to plot
def single_plot(x, y1, y2, name, ax):
ax.plot(x, y1, color=BLUE)
ax.plot(x, y2, color=PINK)
x, y1, y2, where=(y1 > y2),
interpolate=True, color=BLUE_LIGHT, alpha=0.3
x, y1, y2, where=(y1 <= y2),
interpolate=True, color=PINK_LIGHT, alpha=0.3
xticks = [2010, 2015, 2020]
ax.set_xticks([2012.5, 2017.5], minor=True)
# added a 'size' argument
ax.set_xticklabels(xticks, color=GREY40, size=10)
yticks = [0, 10, 20]
ax.set_yticks([5, 15, 25], minor=True)
# added a 'size' argument
ax.set_yticklabels(yticks, color=GREY40, size=10)
ax.set_ylim((-1, 26))
ax.grid(which="minor", lw=0.4, alpha=0.4)
ax.grid(which="major", lw=0.8, alpha=0.4)
ax.yaxis.set_tick_params(which="both", length=0)
ax.xaxis.set_tick_params(which="both", length=0)
# added a 'size' argument
ax.set_title(name, weight="bold", size=9, color=GREY20)
Before starting, it's important to determine the number of rows and columns in the multipanel layout. This can be determined by the length of unique values for "row" and "col" in rescues_borough.
NROW = len(rescues_borough["row"].unique())
NCOL = len(rescues_borough["col"].unique())
fig, axes = plt.subplots(NROW, NCOL, figsize=(12, 10), sharex=True, sharey=True)
for i, name in enumerate(NAMES):
# Select data for the borough in 'name'
df = rescues_borough[rescues_borough["name"] == name]
# Take axis out of the axes array
ax = axes[ROWS[i], COLS[i]]
# Take values for x, y1 and y2.
YEAR = df["cal_year"].values
CAT = df["Cat"].values
NOT_CAT = df["Not_Cat"].values
# Plot it!
single_plot(YEAR, CAT, NOT_CAT, name, ax)
What a change! There's so much going on here. The first thing to note is that not only the plots were added in the right place, but they also look exactly like the ones we're trying to replicate.
That's a great start!
Remove empty panels
On the other hand, there are many empty panels that shouldn't stay there. Fortunately, there's a method called .remove() that does exactly what its name says: call ax.remove() and the axis selected
will be removed from the plot. Let's do it!
# Itereate trough rows
for i in range(7):
# Iterate through columns
for j in range(8):
# Since we've added lines, we can check whether the plot is not empty by
# checking whether the axis contains lines.
# If it contains lines, we continue looping without removing the axis.
if axes[i, j].lines:
# If it does not contain lines, remove it!
axes[i, j].remove()
Nice move! Removing the empty axes had a noticeably positive effect on the plot. However, if you compare the chart obtained so far with the one introduced above, you may notice that tick labels are
missing in many panels.
By default, when sharex=True and sharey=True are set, Matplotlib only uses y-tick labels for the panels on the first column and x-tick labels for the panel on the bottom row. This actually makes a
lot of sense in a normal rectangular layout since repeating the same labels on every panel would only result in unnecessary clutter.
But this isn't just a rectangular layout and we want tick labels back in very custom locations. In this case, they are not going to be placed in the panels on the first column or last row. They are
going to be located in the panels on the first column or last row that isn't empty.
# Go through panels in a rowwise manner, from left to right.
for i in range(7):
first_in_row = True
for j in range(8):
# Enable tick labels in the first panel in the row that is not empty.
if first_in_row and axes[i, j].lines:
axes[i, j].yaxis.set_tick_params(labelleft=True)
first_in_row = False
# Go through panels in a columnwise manner, from bottom to top.
for j in range(8):
first_in_col = True
for i in reversed(range(7)): # note the 'reversed()'
# Enable tick labels in the first panel in the column that is not empty.
if first_in_col and axes[i, j].lines:
axes[i, j].xaxis.set_tick_params(labelbottom=True)
first_in_col = False
Isn't it amazing to see all the details one can customize with Matplotlib?
Add legends and annotations
The chart above is just one step away from being finished. This last step is about adding a good-looking title, a legend to tell the readers how to read the lines and the filled areas, and adjusting
the margin and space between subplots.
# Create handles for lines.
handles = [
Line2D([], [], c=color, lw=1.2, label=label)
for label, color in zip(["cats", "other"], [BLUE, PINK])
# Add legend for the lines
loc=(0.75, 0.94), # This coord is bottom-left corner
ncol=2, # 1 row, 2 columns layout
columnspacing=1, # Space between columns
handlelength=1.2, # Line length
frameon=False # No frame
# Create handles for the area fill with `patches.Patch()`
cats = patches.Patch(facecolor=BLUE_LIGHT, alpha=0.3, label="more cats")
other = patches.Patch(facecolor=PINK_LIGHT, alpha=0.3, label="more other")
handles=[cats, other],
loc=(0.75, 0.9), # This coord is bottom-left corner
ncol=2, # 1 row, 2 columns layout
columnspacing=1, # Space between columns
handlelength=2, # Area length
handleheight=2, # Area height
frameon=False, # No frame
# Title
# Note the horizontal alignment, vertical alignment and multiline alignment values.
# They are not casual!
x=0.05, y=0.975, s="Rescues of\ncats vs other animals by\nthe London fire brigade\n2009-2020",
color=GREY25, fontsize=26, fontfamily="KyivType Sans", fontweight="bold",
ha="left", # 'x' is the left limit of the title
va="top", # 'y' is the top limit of the title
ma="left" # multiple lines are aligned to the left
# Add caption
x=0.95, y=0.025, s="Source: London.gov · Graphic: Georgios Karamanis",
ha="right", # 'x' is the right location of the caption
va="baseline" # 'y' is the base location of the caption
# Last but not least, customize the margin and space within subplots
fig.subplots_adjust(left=0.05, right=0.95, bottom=0.05, top=0.95, hspace=0.3, wspace=0.08)
#fig.savefig("plot.png", dpi=320) # to save it in high quality | {"url":"https://python-graph-gallery.com/web-time-series-and-facetting-with-matplotlib/","timestamp":"2024-11-02T02:40:13Z","content_type":"text/html","content_length":"1049280","record_id":"<urn:uuid:4a84cb57-642a-499e-8a4d-005985f08508>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00467.warc.gz"} |
| What is Piping
Snow and Ice Loading on Piping Systems
Snow and Ice loading is an environmental load and must be considered in pipe stress analysis where the climatic condition specifies the possibility of snow or ice formation. Piping projects in
various cold geographical locations like Canada, Russia, the USA, Europe, etc where winter temperature falls below a certain limit that causes snow or ice formation must be designed considering the
impact of snow loading. Both snow and ice loads are considered as live loads on piping.
Codes and standards stipulate that the system must be designed against snow and ice loading. They provide a lot of guidance for snow loading calculation and application for system design to safeguard
them from failure for building structures. But for non-building structures like piping systems, the calculations for snow loading are too limited. In this article, we will try to explore the
philosophy of snow loading in piping and pipeline systems and its application in the pipe stress analysis process.
What is Snow and Ice Loading?
Snow and ice loading is a type of sustained loading. The possibility of snow loading will be mentioned in the project environment conditions of the region. Ice and snow loading for piping systems is
basically additional downward-acting forces exerted by the accumulated snow and ice. The increase in support load and system stress due to the weight of the accumulated snow and ice must be
considered in the structural design.
Snow and ice loading is only significant for outdoor piping installations and is treated similar to other deadweight loads.
In general, snow and ice loading is considered in pipe stress analysis as a uniform load placed over the exterior part of the pipe and fittings. The entire system can fail if the snow loads exceed
the allowable of the piping systems. Snow loads are being applied to all pipe elements above ground, and fittings are in a vertical direction downward. The loads depend on the slope angle of each
element. The snow loads are given to the elements by using the snow factor.
Though both snow and ice are made up of water there is a slight difference. Snow falls as precipitation of frozen water whereas ice is simply frozen water. In general snow in the piping system is
usual for climatic regions. Whereas ice loading generally refers to ice storm deposits over pipes that are decided based on region-specific weather reports.
Snow Load Calculation Philosophy
Snow load calculation philosophy is based on the consideration that the snow accumulated on top of the piping system will take the shape of an equilateral triangle with its base equalling the pipe
outside diameter.
The common equation used for calculating the snow loads is as follows:
• W[s]=design snow load to be added to other distributed loads acting on pipe, lb/ft
• D[o]=outside diameter of the pipe (for bare pipe) or insulation (for pipe with insulation), in
• S[o]=Snow Factor that considers the probable snow loading for the region where the piping system is installed, lb/ft^2
Ice Loading Calculation Philosophy
Ice loading is also applied in the system as uniform loading and calculated using the following formula as specified in the piping handbook:
• W[ice]=unit loading on the pipe, lb/ft
• D[o]=outside diameter of the pipe (for bare pipe) or insulation (for pipe with insulation), in
• t=ice covering thickness, in
Application of Snow Load in Caesar II
Once the uniform load for snow or ice loading is known or calculated, the same is applied in Caesar II and proper load cases are created to account for it. The steps follow are provided below:
Step 1: Snow load is usually not considered along with wind or seismic load. So, to add snow load in the piping system, simply click on uniform load and add the calculated uniform load value (Note
this will vary depending on pipe size) in the input screen as shown below:
Fig. 1: Input screen of Caesar II for inputting Snow load
In the above example, the calculated snow load per unit length is 0.346 in N/mm for the 18-inch pipe size. The negative sign for the force will act downwards.
Step 2: Create load cases to account for the above input value as shown in Fig. 2 below:
Fig. 2: Load cases for Snow loading in Caesar II
Step 3: Now you can run the analysis and check the output results. Support loads considering snow loads must be transferred to the Civil team for structural consideration.
Ice loading on the piping system can be applied in a similar fashion if the ice loading is because of the ice storm. However, if the ice generation is because of the cryogenic fluid inside the pipe
and moisture condensation, then the same can be considered similar to pipe insulation. Simply, the thickness of ice generation needs to be calculated and that thickness along with ice density can be
added as insulation. This means you are roughly considering the pipe is insulated with ice.
6 thoughts on “Snow and Ice Loading on Piping Systems”
1. Quite complicated with Caesar II, to appply snow load is so easy with Start-prof !
Contact me if you want to know more.
2. Please recheck the load case. Snow Load should be considered as an Occassional Load Case in areas, where snow falls only in a particular season, similar to Seimic Load or Wind Load cases
1. Yes, You are right. Please create load cases similar to Seismic/Wind load cases.
1. I think there is a simple way to consider this in Caesar, i.e, using the wind load (pressure vs elevation) as a downward vertical load. The snow load is equivalent to a pressure load (N/
m2, lb/sq inch). Since Caesar know how to apply this to the real diameter (pipe size + insulation), using the shape factor 1 it will result in the same load. Plus you do not need to
select the horizontal elements to ally since Caesar “know” how to apply only to the right elements
3. May I know where the equations are referred from.
Need to check unit point of view on diameter.
4. The snow loads determined using ANSI A58.1 methods assume horizontal or sloping flat surfaces rather than rounded pipe. Assuming that snow laying on a pipe will take the approximate shape of an
equilateral triangle with the base equal to the pipe diameter, the snow load is calculated with the following formula.
Ws=1/2 *D*S
Ws= design snow load acting on the piping, N/m (lb/ft)
D = pipe (and insulation) outside diameter, m (ft)
S = snow load, N/m2 (lb/ft2 ) | {"url":"https://whatispiping.com/snow-ice-loading/","timestamp":"2024-11-13T14:35:28Z","content_type":"text/html","content_length":"91762","record_id":"<urn:uuid:04fa5a37-e6c2-41c4-a17c-4419a8372fc2>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00680.warc.gz"} |
Armature Current given Electrical Efficiency of DC Motor Calculator | Calculate Armature Current given Electrical Efficiency of DC Motor
What is electrical energy efficiency?
Electrical energy efficiency is understood as the reduction in power and energy demands from the electrical system without affecting the normal activities carried out in buildings, industrial plants,
or any other transformation process. Additionally, an energy-efficient electrical installation allows the economical and technical optimization. That is the reduction of technical and economical
costs of operation.
How to Calculate Armature Current given Electrical Efficiency of DC Motor?
Armature Current given Electrical Efficiency of DC Motor calculator uses Armature Current = (Angular Speed*Armature Torque)/(Supply Voltage*Electrical Efficiency) to calculate the Armature Current,
The Armature Current given Electrical Efficiency of DC Motor formula is defined as the current that flows into the armature winding of the DC motor. Armature Current is denoted by I[a] symbol.
How to calculate Armature Current given Electrical Efficiency of DC Motor using this online calculator? To use this online calculator for Armature Current given Electrical Efficiency of DC Motor,
enter Angular Speed (ω[s]), Armature Torque (τ[a]), Supply Voltage (V[s]) & Electrical Efficiency (η[e]) and hit the calculate button. Here is how the Armature Current given Electrical Efficiency of
DC Motor calculation can be explained with given input values -> 0.723994 = (327.844042941322*0.424)/(240*0.8). | {"url":"https://www.calculatoratoz.com/en/armature-current-given-electrical-efficiency-of-dc-motor-calculator/Calc-3798","timestamp":"2024-11-02T22:11:37Z","content_type":"application/xhtml+xml","content_length":"127142","record_id":"<urn:uuid:e1f60137-2166-4ddc-8b10-ff61c27b7458>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00640.warc.gz"} |
Doctoral Thesis Defense: Aggregate Simulation for Efficient Planning and Inference
July 9, 2019
10:00 AM
Halligan 209
Speaker: Hao Cui
Host: Roni Khardon and Liping Liu
Many algorithms for decision making and machine learning problems are centered around the ideas of sampling and optimization. In this thesis, we introduce a new technique, aggregate simulation, and
show how it can be used for decision making in Markov decision process (MDP), for decision making in partially observable MDP (POMDP), and for inference in Bayesian networks. The original idea of
aggregate simulation is motivated in the context of MDP planning, where such simulation approximates the results of many sampled trajectories with a simple algebraic calculation. This provides a
symbolic representation of the estimated long term reward which is then optimized with gradient ascent. The resulting algorithm, SOGBOFA, is a state-of-the-art planner for large MDPs. In POMDPs,
observations provide partial information on the state of the world and the agent must act using only this partial information. We introduce a second technique, sampling networks, that enables
aggregate simulation of both the state-action trajectories and the observations. The resulting algorithm, SNAP, has excellent performance on the benchmark POMDP problems. Our final contribution,
builds on the connections between aggregate simulation and approximate inference in Bayesian networks. We introduce a new reduction and show how aggregate simulation can be used to solve difficult
Marginal MAP inference problems. The resulting algorithm, AGS, is competitive with the state-of-the-art, and it is especially strong in problems with hard summation sub-problems. In all these
problems, aggregate simulation provides a computation which is only approximate, but is efficient to compute. As our experimental evidence shows, despite the approximation, this enables effective and
high quality solutions of large planning and inference problems, across many problem domains. | {"url":"http://www.cs.tufts.edu/t/colloquia/current/?event=1272","timestamp":"2024-11-09T09:58:55Z","content_type":"text/html","content_length":"3013","record_id":"<urn:uuid:ef5db97d-3d92-45d6-84ab-4e188a3441fb>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00372.warc.gz"} |
implementing XGBoost from scratch on python
Building XGBoost from Scratch in Python: A Journey into Gradient Boosting
XGBoost, short for Extreme Gradient Boosting, is a powerful algorithm widely used in machine learning for its exceptional performance. While readily available libraries like XGBoost make it easy to
implement, understanding the core mechanics of this algorithm can significantly enhance your understanding and provide greater control. In this article, we'll embark on a journey to implement XGBoost
from scratch in Python, revealing the intricacies behind its workings.
The Core Concepts: Gradient Boosting and Decision Trees
At its heart, XGBoost leverages the power of gradient boosting, a technique that sequentially builds an ensemble of weak learners (typically decision trees) to achieve a strong predictor.
Imagine we have a dataset with features and corresponding labels. We start by training a simple decision tree, which might be a poor predictor but still captures some underlying patterns in the data.
The next step is to calculate the residuals (errors) between the predicted and actual values. These residuals become the target variable for the next decision tree, which aims to improve the model's
predictions by focusing on the areas where the first tree performed poorly. This iterative process continues, adding new trees that learn from the previous residuals, ultimately building a robust and
accurate ensemble.
Implementing XGBoost from Scratch: A Python Guide
Let's dive into the Python implementation. The following code snippet illustrates the fundamental structure of XGBoost. Keep in mind that this is a simplified version for illustrative purposes; a
full implementation would require more intricate code and handling for complex scenarios.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
class XGBoostRegressor:
def __init__(self, n_estimators=100, learning_rate=0.1, max_depth=3):
self.n_estimators = n_estimators
self.learning_rate = learning_rate
self.max_depth = max_depth
self.trees = []
def fit(self, X, y):
# Initialize predictions with average target value
y_pred = np.mean(y)
for i in range(self.n_estimators):
# Calculate residuals
residuals = y - y_pred
# Train a new tree on the residuals
self.trees[-1].fit(X, residuals)
# Update predictions
y_pred += self.learning_rate * self.trees[-1].predict(X)
# Add a new tree to the ensemble
def predict(self, X):
y_pred = np.zeros_like(X[:, 0])
for tree in self.trees:
y_pred += self.learning_rate * tree.predict(X)
return y_pred
This simplified code defines a XGBoostRegressor class that accepts the number of estimators, learning rate, and maximum depth as hyperparameters.
Explaining the Code:
1. Initialization: The constructor initializes the parameters and creates an empty list to store the decision trees.
2. Fitting: The fit method starts with the average target value as the initial prediction. It then iteratively trains new decision trees on the residuals, updates the predictions, and adds the new
tree to the ensemble. The learning rate controls the influence of each individual tree.
3. Prediction: The predict method simply sums the predictions from all the trees, weighted by the learning rate.
The Importance of Regularization
A crucial aspect of XGBoost that distinguishes it from basic gradient boosting is the incorporation of regularization. Regularization techniques, such as L1 and L2 penalties, help prevent overfitting
by penalizing complex models with too many features or overly deep trees. This is particularly important when working with high-dimensional data.
Further Considerations
While this simplified implementation provides a basic understanding, building a production-ready XGBoost requires incorporating several enhancements:
• Handling Missing Values: Implement strategies for dealing with missing values in the data.
• Tree Pruning: Implement pruning techniques to control the complexity of individual trees.
• Early Stopping: Employ early stopping to prevent overfitting by monitoring performance on a validation set.
• Parallel Processing: Leverage parallel computing techniques to speed up training.
• Advanced Regularization: Explore advanced regularization techniques like tree regularization.
Building XGBoost from scratch provides valuable insights into the algorithm's mechanics and empowers you to customize its behavior. While the process might seem daunting at first, the journey is
rewarding, leading to a deeper understanding of this powerful machine learning technique. Remember, this article provides a foundational understanding; for comprehensive implementations and detailed
explanations, refer to official XGBoost documentation and relevant online resources. | {"url":"https://laganvalleydup.co.uk/post/implementing-xg-boost-from-scratch-on-python","timestamp":"2024-11-03T01:34:13Z","content_type":"text/html","content_length":"84478","record_id":"<urn:uuid:ec2f8438-9040-4d39-a82a-7e6c0e23e0cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00301.warc.gz"} |
To find a moment generating function of a discrete random variable - Basic Simulation Lab
To find a moment generating function of a discrete random variable
AIM: To find a moment generating function of a discrete random variable
PC with windows (95/98/XP/NT/2000).
MATLAB Software
clear all;
close all;
x=[1 2 3 4 5 6];
p=[1/6 1/6 1/6 1/6 1/6 1/6];
for i=1:6
s= sum(exp(t*x(i)).*p(i))
s = 0.4530
s = 1.2315
1. What is Aliasing Effect.
Ans.If the frequency of reproduction is less than sampling frequency samples will overlap and cause an error called Aliasing Effect.
2.what is Under sampling.
Ans:If fs<2fm then sampling called under sampling.
3. What is Over sampling.
Ans: If fs>2fm then sampling called under sampling. | {"url":"https://vikramlearning.com/jntuh/notes/basic-simulation-lab/to-find-a-moment-generating-function-of-a-discrete-random-variable/366","timestamp":"2024-11-09T03:09:26Z","content_type":"text/html","content_length":"33074","record_id":"<urn:uuid:dde449b8-a199-4f06-99b6-df717dd9783c>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00342.warc.gz"} |
2. Solving Linear Equations and Inequalities
If we carefully placed more rocks of equal weight on both sides of this formation, it would still balance. Similarly, the expressions in an equation remain balanced when we add the same quantity to
both sides of the equation. In this chapter, we will solve equations, remembering that what we do to one side of the equation, we must also do to the other side.
This chapter has been adapted from the “Introduction” in Chapter 2 of Elementary Algebra (OpenStax) by Lynn Marecek and MaryAnne Anthony-Smith, which is under a CC BY 4.0 Licence. Adapted by Izabela
Mazur. See the Adaptation Statement for more information. | {"url":"https://opentextbc.ca/businesstechnicalmath/part/part-2/","timestamp":"2024-11-08T01:18:32Z","content_type":"text/html","content_length":"70426","record_id":"<urn:uuid:f87c2b75-be51-4503-ad98-e6028db6011b>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00360.warc.gz"} |
Parking problem (sequential packing) simulations in two and three dimensions for Journal of Colloid And Interface Science
Journal of Colloid And Interface Science
Parking problem (sequential packing) simulations in two and three dimensions
View publication
The goal of the parking limit problem is to determine the mean fraction of a space that would be occupied by fixed objects, each of the same size, that are placed or created randomly in that space
until no more can fit. The goal of the two-dimensional parking limit problem is to determine the fraction of an area that would be occupied by disks of one size; a three-dimensional parking limit
problem involves creating spheres in a volume. We tried a simple numerical simulation algorithm combined with regression analysis to do the two-dimensional problem and found the parking limit to be
0.51 to 0.56 area fraction for disks on a plane, which is consistent with the results of others. We then used the three-dimensional version of this algorithm to obtain the parking fraction in a
cubical region with penetrable walls. Our results can be multiplied by a geometrical factor to give the parking fraction for a cube with impenetrable walls and can be extrapolated to give the parking
fraction for an infinite region. Regression equations were obtained for the effects of the ratio of the sphere radius to the side of the cubical volume and of the dimensionless number of attempts to
park. We found the parking limit for an infinite volume to be 0.37 to 0.40 for spheres. © 1987. | {"url":"https://research.ibm.com/publications/parking-problem-sequential-packing-simulations-in-two-and-three-dimensions","timestamp":"2024-11-15T00:36:19Z","content_type":"text/html","content_length":"70517","record_id":"<urn:uuid:83bd17e3-313f-4735-bd2d-85f163cc293d>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00352.warc.gz"} |
Hs level math (problem solving) *der
A potter makes five pots a day. Each pot requires one unit of clay. In addition to the material cost, she has the following costs for the clay: - A one-time cost of M dollar for each delivery. - A
warehousing cost of K dollar/ day for each unit of clay in the warehouse.
a) Let M = 3000 and K = 10. How often should She order to minimize the cost of the Clay?
b) How does M and K affect how often she should order? Vary M and K and try to find connections.
Answers can only be viewed under the following conditions:
1. The questioner was satisfied with and accepted the answer, or
2. The answer was evaluated as being 100% correct by the judge.
View the answer
Join Matchmaticians
Affiliate Marketing Program
to earn up to a 50% commission on every question that your affiliated users ask or answer. | {"url":"https://matchmaticians.com/questions/df10cc/hs-level-math-problem-solving-der-derivatives-calculus","timestamp":"2024-11-09T17:41:59Z","content_type":"text/html","content_length":"79065","record_id":"<urn:uuid:cacc814b-aaf9-45b7-beff-ebd899662efb>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00310.warc.gz"} |
Interesting Groups - Annenberg Learner
Brick Playbook: Parent Edition
Interesting Groups
Explore the properties of addition using playing cards and bricks.
Child will understand the directionality of addition.
Essential Question(s):
How can we use grouping to better understand addition?
Special Materials:
A deck of cards without the kings, queens, and jacks; pencil and paper
Bricks Required:
9 1×2 bricks of two different colors, 16×16 plate
Project Structure
1. Prepare two kinds of small objects to count, perhaps pencils and crayons, and ask child how many objects there are all together.
1. Lead the child in counting pencils and crayons, recording the results, and writing an addition sentence.
2. Ask if the result would be different if the crayons were counted first; do so, writing a new addition sentence.
3. Explains how addition does not change, even if done in a different order.
1. Hand out cards to child (at least one each of denominations 1 through 9).
2. Child draws two cards from a deck, then uses two different colors of bricks to create groupings on the plate and record the equation on a piece of paper.
1. Child then takes the same number of bricks in the same colors, but swaps them to create a new equation that yields the same result, recording the new equation. | {"url":"https://www.learner.org/series/brick-playbook-parent-edition/project-14-interesting-groups/","timestamp":"2024-11-06T22:16:29Z","content_type":"text/html","content_length":"110416","record_id":"<urn:uuid:9d8d02c6-515b-4be6-bf92-c9c1aa9308b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00489.warc.gz"} |
MATH 140: Foundations of Calculus
Note: If this course is being taught this semester, more information can be found at the course home page.
Cross Listed
This course is a prerequisite for
This course covers pre-calculus material and is intended for students lacking the algebra and trigonometry background necessary to perform successfully in MATH 141. After completing this course
students are ready to take MATH 141. MTH140 cannot be taken after completing MTH141 or MTH161 or higher.
Topics covered
MATH 140 covers algebra and properties of polynomial, root, rational functions, exponential, logarithmic, and trigonometric functions.
Related courses
See Comparing the Calculus Sequences.
The calculus courses which assume a firm foundation in high school trigonometry and algebra are | {"url":"https://courses.math.rochester.edu/catalog/140/","timestamp":"2024-11-03T10:05:55Z","content_type":"text/html","content_length":"4614","record_id":"<urn:uuid:fe0b4f21-5447-43ad-ae4b-770293550422>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00054.warc.gz"} |
Oct 2017 challenge
Each month, a new set of puzzles will be posted. Come back next month for the solutions and a new set of puzzles, or subscribe to have them sent directly to you.
MIND-Xpander Maths Problems
1. The second of five consecutive multiples of 11 is removed. The other four are then added together to give 715. What is the lowest multiple of the remaining four? There are two possible
2. Simplify the following expression: √ 6 x √ 15 x √ 10
PRIME Numbers Puzzle
Throwing 3 darts at the following dartboard and all 3 throws must hit a number on the dartboard, what are your chances that their total will equal a Prime Number? Numbers can be used more than once.
HEXAGON-numeric puzzle (Level 1)
Fit the numbers 1 – 6 in each hexagon and where the hexagon segments touch each other, the numbers in these segments will be the same. No number can be repeated in a hexagon. Find the remaining
missing numbers to complete the following puzzle.
There are more than one way of doing these puzzles and may well be more than one answer. Please let me and others know what alternatives you find by commenting below. We also welcome general
comments on the subject and any feedback you'd like to give.
If you have a question that needs a response from me or you would like to contact me privately, please use the contact form.
Get more puzzles!
If you've enjoyed doing the puzzles, consider ordering the books;
• Book One - 150+ of the best puzzles
• Book Two - 200+ with new originals and more of your favourites
Both in a handy pocket sized format. Click here for full details.
Last month's solutions
MIND-Xpander Logic Problem
Four holes are drilled in a straight line in a rectangular steel plate. The distance between hole 1 and 4 is 35 mm. The distance between hole 2 and hole 3 is twice the distance between hole 1 and
hole 2. The distance between hole 3 and hole 4 is the same as the distance between hole 2 and hole 3. What is the distance in millimetres, between hole 1 and hole 3?
Given: Distance between holes 1 and 4 = 35mm Let x = distance between holes 1 and 2 Then, 2x = distance between holes 2 and 3, and 3 and 4. Therefore: x + 2x + 2x = 5x = 35 and then x = 7. x + 2x =
3x = 3*7 = 21mm
Can-U-Figure-It-Out Puzzles
How many squares, of any size, are there in the following diagram?
EQUATE+0 Puzzle
Each row, column & diagonal is an equation and you use the numbers 1 to 9 to complete the equations. Each number can be used only once. ‘No’ numbers have been provided to get you started. Find the
remaining seven numbers that satisfies all the resulting equations. Note – multiplication (x) & division (/) are performed before addition (+) and subtraction (-).
Submit a Comment Cancel reply | {"url":"https://gordonburgin.com/2017/10/oct-2017-challenge/","timestamp":"2024-11-08T20:49:26Z","content_type":"text/html","content_length":"261238","record_id":"<urn:uuid:26d77ac3-6a9d-4e49-bf84-2dcf8d6740dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00893.warc.gz"} |
Georg Feigl - Biography
Quick Info
13 October 1890
Hamburg, Germany
25 April 1945
Wechselburg, Sachsen, Germany
Georg Feigl was a German mathematician who worked in the foundations of geometry and topology.
Georg Feigl's father, who earned his living importing goods, was also named Georg Feigl. Georg's mother, Maria Pinl was from Bohemia. When Georg was born his father was 41 years old and his
mother was 27. He attended school (the Johanneum) in Hamburg then, in 1909, he began his studies at the University of Jena. Due to ill health his study of mathematics and physics took longer than
expected and a severe chronic stomach problem forced him to interrupt his work on several occasions. He obtained a degree from Jena taking much longer than was normal because of his health
problems, then in 1919 he obtained a doctorate from Jena working under Koebe. His doctoral dissertation was on conformal mappings.
In 1919 Feigl became a budgeted assistant of Schmidt at the Mathematical Institute of the University of Berlin. Feigl's later research and teaching were both greatly influenced by Schmidt.
Rohrbach writes in [1]:-
Generations of students in mathematics at the University of Berlin took the introductory course "Einführung in die höhere Mathematik" Ⓣ which Feigl created and which after his death was
published, in enlarged form, as a textbook (1953) by Hans Rohrbach.
In 1925 Feigl became managing editor of the journal Jahrbuchs über die Fortschritte der Mathematik Ⓣ, the only reviewing journal at that time, which was produced in Berlin by the Prussian Academy
of Sciences. In the same year Feigl married Maria Fleischer, the daughter of Paul Fleischer who was an economist. Feigl's mother died two years after his marriage, and his father died four years
Promotion at Berlin came steadily for Feigl who was promoted in 1927 and then again to extraordinary professor in 1933. Two years later, in 1935, Feigl was appointed to the Chair of Mathematics
at the University of Breslau where he was head of the Department. In [3] Pinl explains that in Breslau:-
... his aim was the building up and heading the Mathematical Institute through lectures, supervision of doctorates and habilitation, cooperation with the Reich Association of German
Mathematical Societies and Clubs and participation in managerial and scientific tasks for the German Experimental Institution for Aeronautics.
Feigl wrote to several of his colleagues on 30 May 1935 asking for their cooperation in setting up a new mathematics journal, the Neue Deutsche Forschungen. Feigl wanted Süss from Freiburg, Hamel
from Berlin, Koebe from Leipzig, Kowalewski from Dresden and Tornier from Göttingen to cooperate within the Mathematics Department of the Reich Research Council and help him identify doctoral
dissertations which would be suitable for publication. Feigl explained in his letter to Süss, written on 30 May, that he was embarrassed by the fact that Compositio Mathematica was publishing so
many works by non-Aryans.
In 1941 Feigl was elected onto the Executive Committee of the German Mathematical Society. He worked with colleagues such as Behnke, Süss and Hamel on the Instruction Commission of the German
Mathematical Society, becoming head of the Commission. Mostly due to his health problems he had been forced to decide between concentrating on research or teaching, and he had decided to
concentrate on the latter. He was therefore a natural choice to head the Instruction Commission both because of his interests and also because of his expertise.
Although he had accepted the direction that the Nazis had taken Germany, Feigl was strongly opposed to Hitler and his regime so became increasingly alarmed when fellow mathematicians were
arrested by the Gestapo. He wrote to Süss expressing his worries when Ernst Mohr was arrested in 1944 but Süss reassured him that Mohr had been careless (Mohr's crime seemed to have been that he
had listened to the BBC).
In February 1944 Feigl had been ordered to report for active duty in the German army as an anti-aircraft gunner. With the help of Süss, who supported him on behalf of the Reich Research Council,
he successfully won exemption. Later that year he offered to cooperate on work with the military, by using the resources of his Mathematical Institute. The centre of Breslau was bombed on 7
October 1944 but the Mathematical Institute was essentially undamaged (only 4 panes of glass were broken). At this time Feigl was working with Schmidt on a book on differential and integral
calculus. Feigl had been lecturing on this topic and his lectures were used as the basis for the book on which they worked. The manuscript of the book was, sadly, lost during the final stages of
the war. Feigl's wife Maria said that it had been stolen by Poles in the county of Glatz (today Klodzko). By January 1945 the Russian army was advancing towards Breslau and a decision was taken
to move the mathematicians from the city. In February Feigl and his colleagues moved the Mathematical Institute from Breslau to Schönburg Castle at Wechselburg, not far from Leipzig. However,
Feigl had required constant medication all through his life for his stomach condition and, not being able to obtain his medication at Wechselburg led to his death within a couple of months.
Feigl worked on geometry, in particular the foundations of geometry and topology. He was mainly interested in teaching, however, and he introduced many teaching reforms. Through him the modern
approach of Hilbert and Klein was introduced into universities and even into secondary schools.
1. H Rohrbach, Biography in Dictionary of Scientific Biography (New York 1970-1990). See THIS LINK.
2. Georg Feigl, Neue deutsche Biographie V (1961), 57.
3. M Pinl, Georg Feigl zum Gedächtnis, Jber. Deutsch. Math.-Verein. 79 (1969), 53-60.
Additional Resources (show)
Other websites about Georg Feigl:
Written by J J O'Connor and E F Robertson
Last Update November 2004 | {"url":"https://mathshistory.st-andrews.ac.uk/Biographies/Feigl/","timestamp":"2024-11-11T06:51:36Z","content_type":"text/html","content_length":"21730","record_id":"<urn:uuid:c2248192-6efb-421e-b228-67e9cd59c658>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00574.warc.gz"} |
seminars - KAM theory in active scalar equations I
Quasi-periodic property is a commonly observed property in many Hamiltonian systems. While such a property is easily observed in a linear Hamiltonian system, it is much more complicated to prove
whether such solutions can exist in a nonlinear Hamiltonian system. The KAM theory is a classical method used to construct quasi-periodic solutions in a nonlinear/perturbed system. In this lecture, I
will outline a proof of an application of the KAM theory to the generalized-surface quasi-geostrophic equations, constructing quasi-periodic solutions near a Rankine vortex.
Lecture notes on nonlinear oscillations of Hamiltonian PDEs (Chapter: A tutorial in Nash-Moser theory) by M. Berti
KAM for quasi-linear and fully nonlinear forced perturbations of Airy equation by P. baldi, M. Berti and R. Montalto
KAM for autonomous quasi-linear perturbations of KdV by P. baldi, M. Berti and R. Montalto.
Quasiperiodic solutions of the generalized SQG equation by J. Gomez-Serrano, A. Ionescu, J. Park | {"url":"https://www.math.snu.ac.kr/board/index.php?mid=seminars&sort_index=room&order_type=desc&page=85&l=en&document_srl=1112451","timestamp":"2024-11-08T02:01:23Z","content_type":"text/html","content_length":"49782","record_id":"<urn:uuid:0725d4fd-61c5-4548-89db-094813bc2bbf>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00258.warc.gz"} |
Statistics Tutors A
We provide free access to qualified tutors across the U.S.
Find Tucson, AZ Statistics Tutors For Lessons, Instruction or Help
A Tutor in the Spotlight!
Michael F.
• Tucson, AZ 85710 ( 5.1 mi )
• Experienced Statistics Tutor
• Male
• Member since: 01/2016
• Will travel up to 20 miles
• Rates from $20 to $35 /hr
• On-line In-Home In-Group
• Also Tutors: Algebra III, Math, Trigonometry, Pre-Calculus, Pre-Algebra, College Algebra, Calculus I, Algebra II, Algebra I
I like math, and I can help you understand it!
* Five years experience tutoring mathematics to high school and college students including; Direct hire at Pensacola State College Math Lab Pensacola State College TRIO EOC Faulkner State College
TRIO SSS Independent tutor: Wyzant Tutoring
Laura S.
• Tucson, AZ 85719 ( 1.2 mi )
• Experienced Statistics Tutor
• Female Age 45
• Member Since: 07/2020
• Will travel up to 20 miles
• Rates from $20.00 to $40.00 /hr
• On-line
• Also Tutors: Math, Trigonometry, Pre-Calculus, Pre-Algebra, Geometry, College Algebra, Calculus I, Algebra II, Algebra I
MIT Graduate who is a Full Time Tutor at Incredible Rates
Expert Tutor of FL Contractor's License. Professional Instructor of Spanish (former teacher at Boston Language Institute) as well as DISTAR reading method to elementary students. MIT Graduate and
Expert Tutor of all subjects K-12. I also tutor Finance, Accounting, Biology, Genetics, Anatomy, English, Writing & Reading and most major test preps.
Organized and Customized to a unique student's needs. Practice Tests and Games.
Amitt k.
• Tucson, AZ 85719 ( 1.2 mi )
• Statistics Tutor
• Male Age 47
• Member Since: 02/2015
• Will travel up to 5 miles
• Rates from $40.00 to $50.00 /hr
• On-line In-Home In-Group
• Also Tutors: Algebra III, Math, Trigonometry, Pre-Algebra, Geometry, College Algebra, Algebra II, Algebra I
Read More About Amitt k...
have teaching experience of 13 years. teaching b.tech, m.tech phd students. i am trained in softwares likefuzzy logic and response surface methodology. i hav ewritten two books engineering drawing
and elements of mechanical engg. i feel i can teach online to the students who need help. i can also teach basics of maths if needed by the students.
Francisco M.
• Tucson, AZ 85746 ( 7.1 mi )
• Statistics Tutor
• Male Age 60
• Member Since: 01/2016
• Will travel up to 20 miles
• Rates from $10.00 to $20.00 /hr
• Also Tutors: Algebra III, Trigonometry, Pre-Calculus, Pre-Algebra, College Algebra, Calculus II, Calculus I, Algebra II, Algebra I
Read More About Francisco M...
Tutor in basic and intermediate algebra, trigonometry, calculus, differential equations, statistics, mechanics, electricity and magnetism, and engineering statics. Bilingual, fluent in Spanish and
English. Assist students in the use of graphing calculators and in assignments that require Excel. Fifteen years of experience at Pima Community College.
Douglas N.
• Tucson, AZ 85711 ( 1 mi )
• Statistics Tutor
• Male
• Member Since: 11/2007
• Will travel up to 20 miles
• Rates from $37.00 to $41.00 /hr
• Also Tutors: Trigonometry, Pre-Calculus, Pre-Algebra, Logic, Geometry, Algebra II, Algebra I
More About This Tutor...
An understanding of the student and his/her background, strengths and weaknesses, and motivation is key to successful learning. This student-centered approach has gained favor in recent years, and
using this approach in an interactive way allows the student to develop critical-thinking skills at an optimal pace.
As a graduate student at the University of Chicago, I was responsible for leading introductory core Biology labs, which covered an array of concepts in biology, statistics, computer usage, and
experimental design. Because lab reports were frequent requirements, I also was able to improve skills as a writing tutor I had already developed as a writing tutor (paid student position) in the
Writing Lab at Towson University. I also have taught a remedial study- and life-skills course for students on Academic Probation at the University of Arizona. Finally, in my capacity as Acade ...
Benjamin M.
• Tucson, AZ 85719 ( 1.1 mi )
• Statistics Tutor
• Male
• Member Since: 05/2010
• Will travel up to 20 miles
• Rates from $39.00 to $42.00 /hr
• Also Tutors: Trigonometry, Pre-Calculus, Pre-Algebra, Logic, Geometry, College Algebra, Calculus II, Calculus I, Algebra II, Algebra I
A Little Bit About Me...
I graduated from the University of Arizona with a B.S. in Mathematics and a minor in Philosophy. Currently, I am finishing my M.A. degree in Mathematics Education. I have over 6 years of math
tutoring experience, and about 3 years of formal math teaching experience. I have tutored students various levels of math such as algebra, geometry, trigonometry, calculus, differential equations,
linear algebra, and introduction to proof writing. I have also tutored beginning physics.
My teaching style is very simple. I want to make sure that students understand the concept, so they can succeed in their math classes. I have the ability to model concepts by providing students
specific examples. I believe that everyone has the ability to do mathematics. But in order to learn mathematics, they need to get their hands dirty and work on many problems. I am also a very
Sharon K.
• Tucson, AZ 85719 ( 1.7 mi )
• Statistics Tutor
• Female
• Member Since: 02/2010
• Will travel up to 15 miles
• Rates from $38.00 to $42.00 /hr
More About This Tutor...
As a PhD student in sociology at the University of Arizona, I work as a teaching assistant, helping students understand upper division course material in sociology. I also write and grade exams for
these courses. I have taken four courses (three at the graduate level) in statistics, so I am also qualified to tutor in this area. I have experience with students of all ages; I taught science at
the elementary school level, worked as a Residential Advisor at an English language institute for teenagers, and served as a camp counselor at numerous summer camps.
Every student learns differently, but anyone who is open to learning is capable. I enjoy teaching children and teenagers for the spark that they get when something clicks, and they suddenly see the
world differently. My experience working with different age levels, from all economic backgrounds, has taught me that all students, no ... read more
Sandra A.
• Tucson, AZ 85711 ( 1.9 mi )
• Statistics Tutor
• Female
• Member Since: 04/2011
• Will travel up to 10 miles
• Rates from $62.00 to $67.00 /hr
• I am a Certified Teacher
• Also Tutors: Pre-Algebra, Algebra I
More About This Tutor...
Every student can learn with motivation. Most of my classes were self-paced, and I had great success in motivating students of all types. My most recent position was teaching science to at-risk high
school students who averaged 4th grade levels in reading and math. Every participating student earned credits in science, a first for most of them. My interest in them as individuals and willingness
to give each one of them my attention, then let them take charge of their own learning, paid off in results of which I'm very proud. I encourage an environment conducive to learning; I am open to any
approach that works for my students; and I believe patience trumps time-tables.
After many years as a Wyoming rancher and three years as an ESL volunteer, I returned to school to become an ESL professional. With my Master of Education degree in hand, I taught a year of ESL
Guillermo Z.
• Tucson, AZ 85711 ( 2.3 mi )
• Statistics Tutor
• Male
• Member Since: 01/2010
• Will travel up to 50 miles
• Rates from $40.00 to $48.00 /hr
• I am a Certified Teacher
• Also Tutors: Trigonometry, Geometry, Algebra I
More About This Tutor...
My teaching style is Socratic in method. I have learned a large variety of techniques to help teach some of the most basic to the most advanced concepts to a large audience of students. I like to
mold my teaching style to fit with each individual student. As a teacher of special-needs and emotionally handicapped children, I have learned techniques that make me a formidable teacher in any
environment, so long as I am familiar with the material. I work hard for my students, and am not willing to have them do anything that I would not do myself.
I attended Northern Arizona University, whereupon I fell in love with sociology. I have a firm grasp of basic and advanced sociological concepts, specializing in historical analysis and research
methods. After school, I spent my time working temporary jobs, and found one that I thoroughly enjoyed. At Wyko I aquired a set of technical sk ... read more
Kyle R.
• Tucson, AZ 85718 ( 4 mi )
• Statistics Tutor
• Male
• Member Since: 02/2010
• Will travel up to 10 miles
• Rates from $40.00 to $44.00 /hr
• I am a Certified Teacher
• Also Tutors: Trigonometry, Pre-Calculus, Pre-Algebra, Logic, Geometry, College Algebra, Calculus II, Calculus I, Algebra II, Algebra I
Read More About Kyle R...
As a high school student, my first job was at Kumon Math and Reading Center where I helped tutor Math students for 2 years. Once in college, my tutoring experiences were limited to summers but was
able to help one student pass her summer school math class so that she could graduate. In the last 5 years I have helped many students improve their grades as well as helping a student with cancer
complete all of her coursework while she was out during treatment. I am proud to say all of my students have enjoyed great success with my help.
Teaching is an art form that involves love, patience and understanding. Each student learns differently and requires different teaching styles. I have had students that need help being focused and I
have had students who need to relax to open up the brain waves to learning. My teaching style includes evaluating each students strengths and weakness ... read more
Jay L.
• Tucson, AZ 85710 ( 4.5 mi )
• Statistics Tutor
• Male
• Member Since: 03/2009
• Will travel up to 20 miles
• Rates from $41.00 to $50.00 /hr
A Little Bit About Me...
I have taught 4 upper division business courses at the University of Arizona: Management and Organizational Behavior, strategy (Eller capstone course Management Policies), business environments and
employment law (The Legal, Political and Social Environments of Business), and statistics (Statistical Inferences in Management). I left a Ph.D. program in Organizational Behavior (think Management)
with a MS to care for family. I have been a substitute teacher for K-8th grade. I have followed a successful 12+ year career in Human Resource Management and now own my own HR Consulting firm.
What is needed? I like an inclusive interactive style that directly involves the learner. I like to evaluate performance in a varety of methods to determine actual learning. As a University
Instructor I was known for being fun, approachable and honest and a hard grader.
Andrea G.
• Tucson, AZ 85710 ( 4.7 mi )
• Statistics Tutor
• Female
• Member Since: 03/2009
• Will travel up to 20 miles
• Rates from $40.00 to $52.00 /hr
• Also Tutors: Trigonometry, Pre-Calculus, Pre-Algebra, Logic, Geometry, College Algebra, Algebra II, Algebra I
Read More About Andrea G...
One of my many passions in life is working with kids, specifically tutoring them. I have found that it is easier to tutor a student when they know that they are surrounded by positivity. With this I
am referring to the feeling of comfort and knowing that it's ok to be wrong. I notice more confidence in my students when I establish a friendship with them. In addition to this, letting my students
know that even at different ages, we can still relate to one another. I love incorporating visual images, tangible objects, color, laughter, and fun in my tutoring sessions. I strongly believe that
repetition is important when learning something new. I have found that every students learning style is different and I have been able to easily adjust to this.
Working as a tutor has given me the opportunity to demonstrate great leadership among my peers and students while creating a positive env ... read more
Jaimee T.
• tucson, AZ 85741 ( 6.1 mi )
• Statistics Tutor
• Female
• Member Since: 05/2010
• Will travel up to 10 miles
• Rates from $38.00 to $49.00 /hr
• Also Tutors: Trigonometry, Pre-Calculus, Pre-Algebra, Logic, Geometry, College Algebra, Calculus I, Algebra II, Algebra I
More About This Tutor...
I have had experience teaching that is inquiry-based but have also taught in a more factual style with lecture. I believe that each student has a different style of learning, and I am more than
willing to adapt to the child. It is crucial for students to understand all the fundamentals of education, so that higher levels (college) of education will be obtained. I always make sure the
student understands the material every step of the way so that they will not fall behind. I also believe that the student must take what they learn and apply it. I also believe that by excelling in
certain areas of school and improving, this will give a higher confidence to the students that will be apparent in all areas of school.
I graduated with a BS in Biology and a minor in Chemistry. During my college experience at the University of Arizona, I have participated in two programs that allowed me to ... read more
Mary H.
• Tucson, AZ 85748 ( 8.7 mi )
• Statistics Tutor
• Female
• Member Since: 12/2007
• Will travel up to 20 miles
• Rates from $38.00 to $46.00 /hr
• Also Tutors: Trigonometry, Pre-Calculus, Pre-Algebra, Logic, Geometry, College Algebra, Calculus II, Calculus I, Algebra II, Algebra I
More About This Tutor...
I have a BA in Psychology, MS in Zoology and BS in Computer Science. I have lifetime Arizona Community College certification in psychology, biology, ecology and remedial math. My experience with
special populations includes: three years full-time on the Navajo Reservation; three years part-time at the SALT Center at the University of Arizona working with learning disabled students; and a
hands-on nature camp for deaf middle-schoolers. My favorite experience has been the opportunity to design the curriculum, code the programs, and teach the student counselors for the U of A's Computer
Science week-long summer enrichment program for teens.
My forte is science and math, which are often students' least favorite subjects, due to earlier difficulty. A solid basic understanding of these subjects is critical to being an independent, engaged
adult; and it's just plain cool to understand ... read more
Wayne H.
• Tucson, AZ 85742 ( 10.1 mi )
• Statistics Tutor
• Male
• Member Since: 04/2009
• Will travel up to 20 miles
• Rates from $37.00 to $50.00 /hr
• Also Tutors: Trigonometry, Pre-Calculus, Pre-Algebra, Logic, Geometry, College Algebra, Algebra II, Algebra I
Read More About Wayne H...
I've found that I really enjoy teaching students, especially those who are struggling, because of the satisfaction that comes with seeing the light come on. I've had many students explain to me that
my class is the first one that they've done well and actually understand the concepts. I've taught Project Management for the University of Phoenix in the past, but I especially enjoy teaching math
at Pima College... Statistics and the various levels of Algebra.
On the survey sheets that I've seen, I'm often praised for being very patient with students and very respectful of the students, no matter what their level of understanding. I typically spend very
little time in verbal explanation, but a lot of time if running through examples with students so they can see the patterns for themselves... practice makes perfect.
Here is Some Good Information About Statistics Tutoring
Worried about passing your statistics class? Even if you think you will never get it, we know we have the right tutor to help you! Whether you need a statistics tutor to prepare for a final exam or
you need help through an entire course, we have tutors for that. Our statistics tutors are the best in the area, take a look at what we have to offer. Take the steps to improve your grade by
contacting a statistics tutor today! Check out our selection of experts.
Cool Facts About Tutoring Near Tucson, Arizona
Roughly 150 Tucson companies are involved in the design and manufacture of optics and optoelectronics systems, earning Tucson the nickname Optics Valley. See a better future. Select a tutor today. In
2009, Tucson ranked as the 32nd largest city and 52nd largest metropolitan area in the United States. Tutors are available in your neighborhood. The Tucson Gem & Mineral Show is held every year in
February for two weeks. It is one of the largest gem and mineral shows in the world.
Find Tucson, AZ Tutoring Subjects Related to Statistics
Tucson, AZ Statistics tutoring may only be the beginning, so searching for other tutoring subjects related to Statistics will expand your search options. This will ensure having access to exceptional
tutors near Tucson, AZ Or online to help with the skills necessary to succeed.
Consider Other Statistics Tutoring Neighborhoods Near Tucson, AZ
Our highly qualified Statistics Tutoring experts in and around Tucson, AZ are ready to get started. Let's move forward and find your perfect Statistics tutor today.
Search for Tucson, AZ Statistics Tutors By Academic Level
Academic Levels for Tucson, AZ Statistics Tutoring are varied. Whether you are working below, on, or above grade level TutorSelect has the largest selection of Tucson, AZ Statistics tutors for you.
Catch up or get ahead with our talented tutors online or near you.
Find Local Tutoring Help Near Tucson, AZ
Looking for a tutor near Tucson, AZ? Quickly get matched with expert tutors close to you. Scout out tutors in your community to make learning conveniently fit your availability. Having many choices
nearby makes tutoring sessions easier to schedule.
Explore All Tutoring Locations Within Arizona
Find Available Online Statistics Tutors Across the Nation
Do you need homework help tonight or weekly sessions throughout the school year to keep you on track? Find an Online Statistics tutor to be there when you need them the most.
Search for Online Statistics Tutors | {"url":"https://www.tutorselect.com/find/tucson_az/statistics/tutors","timestamp":"2024-11-11T10:33:55Z","content_type":"text/html","content_length":"106234","record_id":"<urn:uuid:fe9ef393-99a5-4c13-afc2-ec14d93f1ac4>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00001.warc.gz"} |
A-level Mathematics/MEI/C1/Algebra - Wikibooks, open books for an open world
Algebra is the branch of mathematics that deals with the relation of quantities. In an equation, both sides are equal, and in an inequality, one side is usually greater than another.
An equation consists of two expressions joined by the equals sign (${\displaystyle =}$). Everything on the left-hand side is equal to everything on the right-hand side, for example ${\displaystyle
2+3=4+1}$. Some equations contain a variable, usually denoted by ${\displaystyle x}$, ${\displaystyle y}$ or ${\displaystyle z}$. An equation with a variable will only hold true for certain values of
that variable. For example ${\displaystyle 2+x=5}$ is only true for ${\displaystyle x=3}$. The values that the variables have when the equation is true are called the solutions of the equation.
Therefore ${\displaystyle x=3}$ is the solution of the equation ${\displaystyle 2+x=5}$.
The language of algebra
There are several terms that you need to be familiar with before you begin to work with algebra.
A variable is a quantity, whose value is usually unknown. Usually, a variable is given the symbol ${\displaystyle x}$ , although any letter can be used.
A constant is usually a known quantity, which does not involve the variable. Later you will come across unknown constants, which are usually given the symbol ${\displaystyle c}$ .
Generally, an index is anything that is written superscript to (slightly above) a symbol. Often indices are used to indicate something raised to a power, such as ${\displaystyle x^{3}}$ (read as
x-cubed), which is the same as ${\displaystyle x\times x\times x}$ .
An expression is a group of symbols which form a mathematical statement. ${\displaystyle 2+2}$ is an example of an expression.
A term is any variable, constant or a product of variables or constants which are separated by a ${\displaystyle +}$ or a ${\displaystyle -}$ sign. In the expression ${\displaystyle 3x+4xy-2y}$ , the
separate terms are ${\displaystyle 3x}$ , ${\displaystyle 4xy}$ , and ${\displaystyle 2y}$ .
A coefficient is the constant part of a term which multiplies the variable part of the term. For example, in ${\displaystyle 2x^{3}}$ , the coefficient of ${\displaystyle x^{3}}$ is ${\displaystyle
2}$ .
An equation is a mathematical statement that two things are equal. There is an equals sign (${\displaystyle =}$ ) in between two expressions. ${\displaystyle 2+2=4}$ is an example of an equation, as
well as ${\displaystyle 3+x=7}$ .
An identity is an equation with an unknown, and is true for all values of that unknown. The identity symbol ( ≡ ) is used in place of the equals sign. As an example, ${\displaystyle x-x=0}$ is always
correct no matter what the value of ${\displaystyle x}$ is. It is usual to write ${\displaystyle x-x}$ ≡ ${\displaystyle 0}$ when you want to emphasize the fact that it is an identity and not just an
A function relates one input value to one output value. A function of ${\displaystyle x}$ is usually noted as ${\displaystyle f(x)}$ . ${\displaystyle f(x)=x^{2}}$ is an example of a function. It is
then possible to write ${\displaystyle f(3)=3^{2}=9}$ once the function has been defined. ${\displaystyle f(x)={\sqrt {x}}}$ is not a function, because there are two output values (positive and
negative) for each input.
Manipulating expressions
Sometimes, expressions will be messier than they need to be, and they can be represented in an easier to understand form. The skills here are essential to the rest of the A-level course, although it
is very likely that you have already covered them at GCSE.
When collecting like terms, you simply add all the terms in ${\displaystyle x}$ together, all the terms in ${\displaystyle y}$ together, and all the terms in ${\displaystyle z}$ together. The same
applies for any other letter that represents a variable.
For example, ${\displaystyle 2x+4y+8z-3x-7y-2z+4x}$ becomes:
${\displaystyle 2x-3x+4x=3x}$
${\displaystyle 4y-7y=-3y}$
${\displaystyle 8z-2z=6z}$
So, by adding all the answers, ${\displaystyle 2x+4y+8z-3x-7y-2z+4x}$ simplified is ${\displaystyle 3x-3y+6z}$ .
Multiplication of different variables such as ${\displaystyle a\times b}$ becomes ${\displaystyle ab}$ . Single variables become indices, so ${\displaystyle x\times x}$ is ${\displaystyle x^{2}}$ .
Like addition and subtraction, you keep like terms together. So, for example:
${\displaystyle 2x^{2}z\times 3yz^{2}\times 4xy^{3}}$ becomes:
${\displaystyle 2\times 3\times 4\times x^{2}\times x\times y\times y^{3}\times z\times z^{2}}$
which can finally be simplified as:
${\displaystyle 24{x^{3}}{y^{4}}{z^{3}}}$
The skill of expanding brackets is illustrated by the following example.
${\displaystyle (8x+5y)(3x-6y)}$
${\displaystyle =(8x+5y)3x-(8x+5y)6y=8x(3x-6y)+5y(3x-6y)}$ use FOIL
F=> first term in each bracket 8x . 3x =24x^2 O=>outside terms(1st in 1st x 2nd in 2nd) 8x . -6y = -48xy I=> inside terms (2nd in 1st x 1st in 2nd) 5y . 3x =15xy L=> last term in each bracket 5y .
-6y = -30 y^2
and terms together(usually only the middle 2) 24x^2 -48xy +15xy -30y^2 answer 24x^2-33xy-30y^2
Sometimes, expressions can be re-written as the product of their factors. You divide an expression by a factor common to all of the terms in the expression. This is essentially the opposite of
expanding brackets, since most of the time the common factor is placed outside of brackets. For example:
To factorise ${\displaystyle 10x+15y}$ you must first find a common factor of ${\displaystyle 10x}$ and ${\displaystyle 15y}$ . ${\displaystyle 5}$ is easily spotted as a factor. Now you divide the
whole expression by ${\displaystyle 5}$ leaving you with ${\displaystyle 2x+3y}$ . Place ${\displaystyle 2x+3y}$ within brackets, and then put the ${\displaystyle 5}$ outside the bracket. The
factorised expression is now ${\displaystyle 5(2x+3y)}$ , and you can multiply out the expression to make sure you get the original expression.
Another expression, ${\displaystyle x^{2}-xy^{2}+3xz}$ has a common factor ${\displaystyle x}$ . The factorised form of this expression is ${\displaystyle x(x-y^{2}+3z)}$ .
When working with fractions, the rule is to make all of the denominators equal, and then write the expression as one fraction. You need to multiply both the top and bottom by the same amount to keep
the meaning of the fraction the same.
For example, for ${\displaystyle {\frac {3x}{2}}+{\frac {2y}{5}}-{\frac {z}{10}}}$ , the common denominator is ${\displaystyle 10}$ .
Multiply both parts by ${\displaystyle 5}$ : ${\displaystyle {\frac {15x}{10}}}$
Multiply both parts by ${\displaystyle 2}$ : ${\displaystyle {\frac {4y}{10}}}$
Leave this as it is: ${\displaystyle {\frac {z}{10}}}$
You now have ${\displaystyle {\frac {15x}{10}}+{\frac {4y}{10}}-{\frac {z}{10}}}$ , which becomes ${\displaystyle {\frac {15x+4y-z}{10}}}$ .
Often, to solve an equation you must rearrange it so that the unknown term is on its own side of the equals sign. By rearranging ${\displaystyle 2+x=5}$ to ${\displaystyle x=5-2}$ , ${\displaystyle
x}$ has been made the subject of the equation. Now by simplifying the equation, you can find that the solution is ${\displaystyle x=3}$ .
Changing the subject of an equation
You will usually be given equations that are more complex than the example above. To move a term from one side of the equals sign to the other, you have to do the same thing on both sides of the
equals sign. For example, to make ${\displaystyle x}$ the subject of ${\displaystyle y={\frac {4a(x^{2}+b)}{3}}}$ :
Multiply both sides by ${\displaystyle 3}$ ${\displaystyle 3y=4a(x^{2}+b)}$
Divide both sides by ${\displaystyle 4a}$ ${\displaystyle {\frac {3y}{4a}}=x^{2}+b}$
Subtract ${\displaystyle b}$ from both sides ${\displaystyle {\frac {3y}{4a}}-b=x^{2}}$
Square root both sides ${\displaystyle \pm {\sqrt {{\frac {3y}{4a}}-b}}=x}$
Solving quadratic equations
Quadratic equations are equations where the variable is raised to the power of 2 and, unlike linear equations, there are a maximum of two roots. A root is one value of the variable where the equation
is true, and to fully solve an equation, you must find all of the roots. For a quadratic equation you can factorise it and then easily find which values make the equation valid. The example above is
quite a simple case. You will usually be given a more complicated equation such as ${\displaystyle 2x^{2}+5x+3=0}$ . If the equation isn't already in the form ${\displaystyle ax^{2}+bx+c=0}$ ,
rearrange it so that it is. The steps needed to factorise ${\displaystyle 2x^{2}+5x+3=0}$ are:
Multiply ${\displaystyle 2}$ by ${\displaystyle 3}$ (coefficient of ${\displaystyle x^{2}}$ multiplied by the constant term) ${\displaystyle 2\times 3=6}$
Find two numbers that add to give ${\displaystyle 5}$ (coefficient of ${\displaystyle x}$ ) and multiply to give ${\displaystyle 6}$ (answer from ${\displaystyle 2\times 3=6}$ , ${\displaystyle
previous step) 2+3=5}$
Split ${\displaystyle 5x}$ to ${\displaystyle 2x+3x}$ (from the results of the previous step) ${\displaystyle 2x^{2}+2x+3x+3=0}$
Simplify ${\displaystyle 2x(x+1)+3(x+1)=0}$
Simplify further ${\displaystyle (2x+3)(x+1)=0}$
So ${\displaystyle (2x+3)(x+1)=0}$ is ${\displaystyle 2x^{2}+5x+3=0}$ in factorised form. You can now use the fact that any number multiplied by ${\displaystyle 0}$ is ${\displaystyle 0}$ to find the
roots of the equation. The numbers that make one bracket equal to ${\displaystyle 0}$ are the roots of the equation. In the example, the roots are ${\displaystyle -1.5}$ and ${\displaystyle -1}$ .
It is also possible to solve a quadratic equation using the quadratic formula or by completing the square.
Simultaneous equations
Simultaneous equations are useful in solving two or more variables at once. Basic simultaneous equations consist of two linear expressions and can be solved by three different methods: elimination,
substitution or by plotting the graph.
The basic principle of the elimination method is to manipulate one or more of the expressions in order to cancel out one of the variables, and then solve for the correct solution.
An example of this:
${\displaystyle 2x+3y=10}$ (1) (Assigning the number (1) to this equation)
${\displaystyle 2x+6y=6}$ (2) (Assigning the number (2) to this equation)
From this, we can see that by multiplying equation (1) by a factor of 2 and then subtracting this new equation from (2), the ${\displaystyle y}$ -variable will be eliminated.
(1) ${\displaystyle \times 2\rightarrow 4x+6y=20}$ (1a) (Assigning the number (1a) to this equation)
Now subtracting (2) from (1a):
${\displaystyle 4x+6y=20}$ (1a)
${\displaystyle -}$ ${\displaystyle 2x+6y=6}$ (2)
${\displaystyle =}$ ${\displaystyle 2x+0y=14}$
Now that we have ${\displaystyle 2x=14}$ , we can solve for ${\displaystyle x}$ , which in this case is ${\displaystyle 7}$ .
${\displaystyle x=7}$ .
Substitute the newly found ${\displaystyle x}$ into (1):
${\displaystyle 2\times 7+3y=10}$
${\displaystyle 14+3y=10}$
And we find that ${\displaystyle y=-4/3}$
So, the solution to the two equations (1) and (2) are:
${\displaystyle x=7}$
${\displaystyle y=-4/3}$
The substitution method relies on being able to rearrange the expressions to isolate a single variable, in the form variable = expression. From this result this new expression can then be substituted
for the variable itself, and the solutions evaluated.
An example of this:
${\displaystyle 2x+3y=12}$ (1) (Assigning the number (1) to this equation)
${\displaystyle x+y=6}$ (2) (Assigning the number (2) to this equation)
From this expression, it is possible to see that (2) is the most simplistic expression, and thus will be the better choice to rearrange.
Taking (2), and rearranging this into ${\displaystyle x=6-y}$ . (2a)
Subbing (2a) into (1) we get
${\displaystyle 2(6-y)+3y=12}$
Solving this, we get that ${\displaystyle y=0}$
Again we can sub this result into one of the original equations to solve for ${\displaystyle x}$ . In this case ${\displaystyle x=6}$ .
Note that for situations in which one of the equations is non-linear, you must isolate one variable in the linear equation and substitute it into the non-linear one. Then you can solve the quadratic
equation with one of the methods above.
Another form of substitution is if you've got a similar expression in both equations, like in this case:
${\displaystyle 2x+3y=10}$ (1) (Assigning the number (1) to this equation)
${\displaystyle 2x+6y=6}$ (2) (Assigning the number (2) to this equation)
Here, ${\displaystyle 2x}$ is found in both equations, so:
${\displaystyle 2x=10-3y}$ (1)
${\displaystyle 2x=6-6y}$ (2)
And since ${\displaystyle 2x=2x}$ , you could do:
${\displaystyle 10-3y=6-6y}$
${\displaystyle 6y-3y=6-10}$
${\displaystyle 3y=-4}$
${\displaystyle y=-4/3}$
Now you've got ${\displaystyle y}$ , and finding ${\displaystyle x}$ will be the same as above.
By plotting the lines of the two equations, you can solve them by seeing where the lines intersect. If the intersection is at the point (a,b), then the solution is ${\displaystyle x=a}$ and ${\
displaystyle y=b}$ .
Solving problems with simultaneous equations
Often, you will be given problems which you must be able to write out as a pair of simultaneous equations. You will need to recognise such problems, and write them out correctly before solving them.
Most problems will be similar to these examples with some differences.
Example 1
At a record store, 2 albums and 1 single costs £10. 1 album and 2 singles cost £8. Find the cost of an album and the cost of a single.
Taking an album as ${\displaystyle a}$ and a single as ${\displaystyle s}$ , the two equations would be:
${\displaystyle 2a+s=10}$
${\displaystyle a+2s=8}$
You can now solve the equations and find the individual costs.
Example 2
Tom has a budget of £10 to spend on party food. He can buy 5 packets of crisps and 8 bottles of drink, or he can buy 10 packets of crisps and 6 bottles of drink.
Taking a packet of crisps as ${\displaystyle c}$ and a bottle of drink as ${\displaystyle d}$ , the two equations would be:
${\displaystyle 5c+8d=10}$
${\displaystyle 10c+6d=10}$
Now you can solve the equations to find the cost of each item.
Example 3
At a sweetshop, a gobstopper costs 5p more than a gummi bear. 8 gummi bears and nine gobstoppers cost £1.64.
Taking a gobstopper as ${\displaystyle g}$ and a gummi bear as ${\displaystyle b}$ , the two equations would be:
${\displaystyle b+5=g}$
${\displaystyle 8b+9g=164}$
The problem can now be solved by using one of the methods above.
A quadratic is a Polynomial of degree 2, in the form ${\displaystyle f(x)=ax^{2}+bx+c}$ .
A quadratic graph is one that can be written in the form ${\displaystyle y=ax^{2}+bx+c}$ . The graph of ${\displaystyle y=2x^{2}+8x+2}$ is shown on the right, and as you can see it has the same
characteristic "bucket" shape that all quadratics have, called a parabola. The line of symmetry and the vertex (maximum or minimum point) of the graph can be found by plotting the curve point by
point, and the root can be found by factorising the quadratic.
However, these properties can be more easily deduced from its completed square form:
${\displaystyle y=2(x+2)^{2}-6}$ .
Completing the square is the process of changing a quadratic from the form ${\displaystyle ax^{2}+bx+c}$ to the equivalent form ${\displaystyle a(x+d)^{2}+e}$ , where ${\displaystyle a}$ , ${\
displaystyle d}$ and ${\displaystyle e}$ are constants. For example, the quadratic ${\displaystyle 2x^{2}+8x+2}$ would become ${\displaystyle 2(x+2)^{2}-6}$ .
Changing a quadratic to completed square form makes it easy to find several things, such as the roots of the quadratic and the vertex of the quadratic, without even requiring a sketch.
Here are the steps for completing the square. Don't worry, it's easier than it looks.
│ Step │ Action │ Example │ General case │
│ 1. │ Ensure the quadratic is in the conventional form: ${\displaystyle ax^{2}+bx+c}$ . │ ${\displaystyle 2x^{2}+8x+2}$ │ ${\displaystyle ax^{2}+bx+c}$ │
│ │ Unless ${\displaystyle a=1}$ , "pull/factor out a", that is, divide the entire quadratic by ${\ │ │ │
│ │ displaystyle a}$ and put it outside a bracket. │ │ ${\displaystyle a\left(x^{2}+{\ │
│ 2. │ │ ${\displaystyle 2\left(x^{2}+4x+1\right)}$ │ frac {b}{a}}x+{\frac {c}{a}}\ │
│ │ Note: If the quadratic is part of an equation you can divide each side by ${\displaystyle a}$ , │ │ right)}$ │
│ │ for example ${\displaystyle 2x^{2}+8x+2=0}$ simply becomes ${\displaystyle x^{2}+4x+1=0}$ . │ │ │
│ │ Replace the ${\displaystyle x^{2}+kx}$ part with ${\displaystyle \left(x+{\frac {k}{2}}\right)^ │ │ │
│ │ {2}}$ . │ │ │
│ │ │ │ ${\displaystyle a\left(\left(x+ │
│ 3. │ It is important to realise that ${\displaystyle \left(x+{\frac {k}{2}}\right)^{2}=x^{2}+kx+{\ │ ${\displaystyle 2\left((x+2)^{2}+1\right)}$ │ {\frac {b}{2a}}\right)^{2}+{\ │
│ │ frac {k^{2}}{4}}}$ which is close to what it was there before but not equal. This will be │ │ frac {c}{a}}\right)}$ │
│ │ corrected in the next step. To prevent writing something that isn't actually equal it is a good │ │ │
│ │ idea to do both these steps at once in your working once you have got used to the method. │ │ │
│ │ Correct the error introduced in the previous step by inserting the subtraction of a suitable │ 1. ${\displaystyle (x+2)^{2}=x^{2}+4x+4}$ which is 3 │ │
│ │ number. This suitable number can be found in two ways: │ greater than ${\displaystyle x^{2}+4x+1}$ , so ${\ │ ${\displaystyle a\left(\left(x+ │
│ │ │ displaystyle -3}$ is inserted: ${\displaystyle 2\ │ {\frac {b}{2a}}\right)^{2}-{\ │
│ 4. │ 1. By expanding the term inserted in the previous step and comparing it to the original; or │ left((x+2)^{2}-3\right)}$ ; or │ frac {b^{2}}{4a^{2}}}+{\frac │
│ │ 2. By remembering that the error is always ${\displaystyle {\frac {k^{2}}{4}}}$ . │ 2. ${\displaystyle {\frac {4^{2}}{4}}=4}$ so the error │ {c}{a}}\right)}$ │
│ │ │ is 4: ${\displaystyle 2\left((x+2)^{2}-4+1\right)}$ │ │
│ │ This step is known as "completing the square" and gives the method its name. │ ${\displaystyle =2\left((x+2)^{2}-3\right)}$ │ │
│ │ │ │ ${\displaystyle a\left(x+{\frac │
│ 5. │ If step 2 was necessary then simplify the result a bit by expanding the outer bracket. │ ${\displaystyle 2(x+2)^{2}-6}$ │ {b}{2a}}\right)^{2}-{\frac {b^ │
│ │ │ │ {2}}{4a}}+c}$ │
│ │ │ │ ${\displaystyle a\left(x^{2}+{\ │
│ │ │ │ frac {b}{a}}x+{\frac {b^{2}}{4a │
│ │ │ ${\displaystyle 2\left(x^{2}+4x+4\right)-6}$ │ ^{2}}}\right)-{\frac {b^{2}} │
│ │ Check that what you have obtained expands back to what you started with. │ │ {4a}}+c}$ │
│ 6. │ │ ${\displaystyle =2x^{2}+8x+8-6}$ ${\displaystyle =2x^ │ │
│ │ Note: You may feel confident enough to skip this step. │ {2}+8x+2}$ │ ${\displaystyle =ax^{2}+bx+{\ │
│ │ │ │ frac {b^{2}}{4a}}-{\frac {b^ │
│ │ │ │ {2}}{4a}}+c}$ ${\displaystyle = │
│ │ │ │ ax^{2}+bx+c}$ │
So the completed square form of ${\displaystyle y=2x^{2}+8x+2}$ is ${\displaystyle 2(x+2)^{2}-6}$ . The ${\displaystyle -6}$ tells us that the lowest point of the curve is at ${\displaystyle y=-6}$
and the ${\displaystyle x+2}$ tells us that the line of symmetry is at ${\displaystyle x+2=0}$ or ${\displaystyle x=-2}$ . Therefore the vertex is at ${\displaystyle (-2,-6)}$ , and if you look at
the graph, you can see that is the case.
You can see from the graph that ${\displaystyle y=2x^{2}+8x+2}$ has one root between ${\displaystyle -4}$ and ${\displaystyle -3}$ and another between ${\displaystyle -1}$ and ${\displaystyle 0}$ ,
where it crosses the ${\displaystyle y}$ axis. But how do you find the exact values? Using the completed square form, it can be re-arranged quite easily to find ${\displaystyle x=0}$ :
│ Step │ Action │ Example │ General case │
│ 1. │ To solve an equation in the form ${\displaystyle ax^{2}+bx+c=0} │ ${\displaystyle 2(x+2)^{2}-6=0}$ │ ${\displaystyle a\left(x+{\frac {b}{2a}}\right)^ │
│ │ $ first complete the square using the method above. │ │ {2}-{\frac {b^{2}}{4a}}+c=0}$ │
│ │ │ │ ${\displaystyle a\left(x+{\frac {b}{2a}}\right)^ │
│ │ │ ${\displaystyle 2(x+2)^{2}=6}$ │ {2}={\frac {b^{2}}{4a}}-c}$ │
│ 2. │ Isolate the ${\displaystyle (x+k)^{2}}$ term. │ │ │
│ │ │ ${\displaystyle (x+2)^{2}=3}$ │ ${\displaystyle \left(x+{\frac {b}{2a}}\right)^{2} │
│ │ │ │ ={\frac {b^{2}}{4a^{2}}}-{\frac {c}{a}}}$ │
│ │ │ │ ${\displaystyle x+{\frac {b}{2a}}=\pm {\sqrt {{\ │
│ │ │ │ frac {b^{2}}{4a^{2}}}-{\frac {c}{a}}}}}$ │
│ │ │ │ │
│ │ │ │ Then some simplification: │
│ │ │ │ │
│ │ │ │ ${\displaystyle x+{\frac {b}{2a}}=\pm {\sqrt {{\ │
│ │ │ │ frac {b^{2}}{4a^{2}}}-{\frac {4ac}{4a^{2}}}}}}$ │
│ 3. │ Square root each side, including a ${\displaystyle \pm }$ as │ ${\displaystyle x+2=\pm {\sqrt {3}}}$ │ │
│ │ the bit inside the bracket might be negative or positive. │ │ ${\displaystyle x+{\frac {b}{2a}}=\pm {\sqrt {\ │
│ │ │ │ frac {b^{2}-4ac}{4a^{2}}}}}$ │
│ │ │ │ │
│ │ │ │ ${\displaystyle x+{\frac {b}{2a}}=\pm {\frac {\ │
│ │ │ │ sqrt {b^{2}-4ac}}{\sqrt {4a^{2}}}}}$ │
│ │ │ │ │
│ │ │ │ ${\displaystyle x+{\frac {b}{2a}}=\pm {\frac {\ │
│ │ │ │ sqrt {b^{2}-4ac}}{2a}}}$ │
│ │ │ ${\displaystyle x=\pm {\sqrt {3}}-2}$ │ ${\displaystyle x=\pm {\frac {\sqrt {b^{2}-4ac}} │
│ │ │ │ {2a}}-{\frac {b}{2a}}}$ │
│ 4. │ Isolate ${\displaystyle x}$ . │ ${\displaystyle x={\sqrt {3}}-2}$ or ${\displaystyle -{\sqrt {3}}-2}$ │ │
│ │ │ ${\displaystyle x\approx -0.268}$ or ${\displaystyle -3.73}$ │ ${\displaystyle x={\frac {-b\pm {\sqrt {b^{2} │
│ │ │ │ -4ac}}}{2a}}}$ │
The values of ${\displaystyle x}$ for this specific example are within the expected range, as seen on the graph.
The quadratic formula is derived from the general case of completing the square:
${\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}}$
It can be used to find the roots of a quadratic by putting numbers directly into it. For example, for ${\displaystyle y=2x^{2}+8x+2}$ :
${\displaystyle x={\frac {-8\pm {\sqrt {64-16}}}{4}}=-2\pm {\sqrt {3}}}$
so ${\displaystyle x=-2+{\sqrt {3}}}$ and ${\displaystyle x=-2-{\sqrt {3}}}$ .
Notice that the quadratic equation contains ${\displaystyle b^{2}-4ac}$ inside a square root sign. This part is called the discriminant and can be considered on its own to determine the number of
roots of the equation.
• If ${\displaystyle b^{2}-4ac<0}$ then you will be unable to find the square roots as you don't know how to square root a negative number. The type of numbers you have encountered so far are known
as real numbers and so it is said that the quadratic has no real roots.
• If ${\displaystyle b^{2}-4ac=0}$ then changing the ${\displaystyle \pm }$ sign in front of the square root won't make any difference, because it is zero either way. You will therefore get the
same root twice, so it is said the quadratic has one repeated root.
• If ${\displaystyle b^{2}-4ac>0}$ then the ${\displaystyle \pm }$ will mean you get two answers, and so you can say the quadratic has two distinct (i.e. different) roots.
An inequality is an expression which compares the relative sizes of points, lines, or curves. Unlike an equation, where both sides of the equals sign are always equal, inequalities can have one side
greater than or equal to the other side.
The four signs of inequalities
There are four main basic signs:
• ${\displaystyle <}$ less than,
• ${\displaystyle >}$ greater than,
• ${\displaystyle \leq }$ less than or equal to, and
• ${\displaystyle \geq }$ greater than or equal to.
For example, ${\displaystyle x<4}$ means that ${\displaystyle x}$ is less than 4, ${\displaystyle x>4}$ means that ${\displaystyle x}$ is greater than 4, ${\displaystyle x\leq 4}$ means that ${\
displaystyle x}$ is four or any number less than this, and ${\displaystyle x\geq 4}$ means that ${\displaystyle x}$ is four or any number higher than this.
Note that ${\displaystyle x>y}$ and ${\displaystyle y<x}$ are both essentially the same statement.
If you become confused with which sign means less than and greater than, it is useful to remember that the inequality signs always point to the smaller number.
Combining inequalities
There are some cases where two inequalities can be combined into one. For example, the height of the door was said to be between ${\displaystyle x\geq 1.95}$ and ${\displaystyle x<2.05}$ . The usual
way of writing these is ${\displaystyle 1.95\leq x<2.05}$ . Notice that the inequality signs are in the same direction. ${\displaystyle 2.05\geq x>1.95}$ is perfectly acceptable, but it is incorrect
to combine opposite facing inequalities and they must be left as two separate inequalities.
Solving linear inequalities
These signs can be used in place of equal signs, and an equation now becomes an inequality (since both sides are not always equal).
For example, instead of ${\displaystyle 2x+4=6}$ we could have ${\displaystyle 2x+4>6}$ .
In this example, ${\displaystyle x}$ may be any number which makes this inequality greater than 6. In this case ${\displaystyle x>1}$ but ${\displaystyle xeq 1}$ . If the inequality was ${\
displaystyle 2x+4\geq 6}$ , then ${\displaystyle x}$ could take the value of 1.
An inequality can be manipulated and therefore solved just like an equation, although there is an extra step you must take when you multiply or divide by a negative number.
Multiplying or dividing by a negative number
When multiplying or dividing by a negative number, you must change the direction of the inequality sign.
For example, look at the inequality ${\displaystyle 10>5}$ . This is correct since 10 is obviously greater than 5. Now if we were to multiply both sides by -1, we would get:
${\displaystyle -10>-5}$ .
This is incorrect, since -10 is actually less than -5. By reversing the inequality sign, we now have the correct inequality:
${\displaystyle -10<-5}$ .
Solving quadratic inequalities
To solve a quadratic inequality, you could factorise it, just like a quadratic equation.
Alternatively you could draw its graph as if it was a quadratic equation, and then shade the side that's covered by the inequality. maybe an image here, to demonstrate the method?
You are probably already familiar with indices, for example ${\displaystyle x^{2}}$ is just a shorter way of writing ${\displaystyle x\times x}$ and ${\displaystyle x^{4}}$ is similarly ${\
displaystyle x\times x\times x\times x}$ . In ${\displaystyle x^{5}}$ , ${\displaystyle x}$ is called the base and ${\displaystyle 5}$ is called the power or exponent. ${\displaystyle x^{4}}$ is
pronounced "x to the four", or "x raised to the 4th power" in full. Some powers are so useful that they have special names: ${\displaystyle x^{2}}$ is referred to as "x squared", ${\displaystyle x^
{3}}$ is "x cubed" and ${\displaystyle x^{-1}}$ (which you will soon learn about if you haven't already encountered it) is called "the reciprocal of x".
Note: The "law of indices" is sometimes also called the "exponent laws" or "power rules" [1]. More generally, an index in mathematics is a superscript or subscript to a symbol.
Operations with indices
Using this notation you might notice several patterns.
Firstly, ${\displaystyle x^{3}\times x^{2}}$ is ${\displaystyle \left(x\times x\times x\right)\times \left(x\times x\right)=x\times x\times x\times x\times x}$ which is ${\displaystyle x^{5}}$ . Of
course ${\displaystyle 3+2=5}$ so you have added the powers together. To clarify, here is an example with numbers: ${\displaystyle 2^{3}\times 2^{5}=8\times 32=256=2^{8}}$ (like before, ${\
displaystyle 3+5=8}$ )
Secondly ${\displaystyle {\frac {x^{4}}{x^{2}}}}$ is ${\displaystyle {\frac {x\times x\times x\times x}{x\times x}}=x\times x=x^{2}}$ (when ${\displaystyle xeq 0}$ ). This time ${\displaystyle 4-2=2}
$ and so you have subtracted the powers.
Here is an example with numbers: ${\displaystyle {\frac {10^{5}}{10^{2}}}={\frac {100000}{100}}=1000=10^{3}}$ and again ${\displaystyle 5-2=3}$ .
Base raised to two powers
Thirdly ${\displaystyle \left(x^{2}\right)^{3}}$ is ${\displaystyle \left(x\times x\right)\times \left(x\times x\right)\times \left(x\times x\right)=x\times x\times x\times x\times x\times x}$ which
is ${\displaystyle x^{6}}$ . You can see that ${\displaystyle 2\times 3=6}$ and so the powers have been multiplied. Here is another example with numbers: ${\displaystyle \left(3^{2}\right)^{4}=9^{4}=
6561=3^{8}}$ and ${\displaystyle 2\times 4=8}$ .
Finally ${\displaystyle \left(xy\right)^{3}=xy\times xy\times xy=x\times y\times x\times y\times x\times y=x\times x\times x\times y\times y\times y}$ which is the same as ${\displaystyle x^{3}y^{3}}
$ . Here is an example with numbers: ${\displaystyle \left(2\times 5\right)^{2}=\left(10\right)^{2}=100=4\times 25=2^{2}\times 5^{2}}$ . There is a similar situation with division: ${\displaystyle \
left({\frac {x}{y}}\right)^{2}={\frac {x}{y}}\times {\frac {x}{y}}={\frac {x\times x}{y\times y}}={\frac {x^{2}}{y^{2}}}}$
The rules that have been suggested above are known as the laws of indices and can be written as:
1. ${\displaystyle x^{a}x^{b}=x^{a+b}}$
2. ${\displaystyle {\frac {x^{a}}{x^{b}}}=x^{a-b}}$
3. ${\displaystyle \left(x^{a}\right)^{b}=x^{ab}}$
4. ${\displaystyle \left(xy\right)^{n}=x^{n}y^{n}}$
5. ${\displaystyle \left({\frac {x}{y}}\right)^{n}={\frac {x^{n}}{y^{n}}}}$
You may well have realised that ${\displaystyle {x^{1}}=x}$ . This can be seen by looking at the pattern of ${\displaystyle x^{3}=x\times x\times x}$ , ${\displaystyle x^{2}=x\times x}$ or by doing $
{\displaystyle {\frac {x^{3}}{x^{2}}}}$ which is clearly ${\displaystyle x}$ but is also ${\displaystyle x^{3-2}=x^{1}}$ by law 2.
So far all the examples we have looked at are where the power is a positive integers, but by thinking about the laws it is possible to look at other cases.
It is less obvious that ${\displaystyle x^{0}=1}$ for any ${\displaystyle x}$ (strictly speaking, any ${\displaystyle xeq 0}$ , see the note below.), but this can be shown in a similar way, for
example ${\displaystyle {\frac {x^{4}}{x^{4}}}}$ is 1 but must be also be ${\displaystyle x^{4-4}=x^{0}}$ by law 2.
The next logical step is to ask what ${\displaystyle x^{-1}}$ is. Well by using law 2 "backwards" ${\displaystyle x^{-1}=x^{0-1}={\frac {x^{0}}{x^{1}}}={\frac {1}{x}}}$ . A similar argument can be
used for any other negative integer, for example ${\displaystyle x^{-3}=x^{0-3}={\frac {x^{0}}{x^{3}}}={\frac {1}{x^{3}}}}$ .
What if the power isn't even an integer? Suppose you wanted to find ${\displaystyle x^{\frac {1}{2}}}$ , you could say that ${\displaystyle {x^{\frac {1}{2}}}\times {x^{\frac {1}{2}}}=x^{1}}$ (by law
3, addition of powers) which means that ${\displaystyle x^{\frac {1}{2}}}$ must be ${\displaystyle \pm {\sqrt {x}}}$ . However it is customary to only use the positive root and so ${\displaystyle x^
{\frac {1}{2}}}$ is defined as ${\displaystyle {\sqrt {x}}}$ . You can use a similar argument for other such fractions, for example ${\displaystyle \left(x^{\frac {1}{3}}\right)^{3}=x}$ so ${\
displaystyle x^{\frac {1}{3}}={\sqrt[{3}]{x}}}$ .
In summary:
• ${\displaystyle x^{1}=x}$
• ${\displaystyle x^{0}=1}$
• ${\displaystyle x^{-n}={\frac {1}{x^{n}}}}$
• ${\displaystyle x^{\frac {1}{n}}={\sqrt[{n}]{x}}}$
Sometimes you might have to use the laws to understand what something means, for example ${\displaystyle x^{\frac {2}{3}}=\left(x^{2}\right)^{\frac {1}{3}}}$ (using law 3), and ${\displaystyle \left
(x^{2}\right)^{\frac {1}{3}}={\sqrt[{3}]{x^{2}}}}$ (using the definition above). It's useful to remember the general rule that ${\displaystyle x^{\frac {a}{b}}={\sqrt[{b}]{x^{a}}}}$ .
In mathematics, a Surd is an expression containing a root with an irrational solution that can not be expressed exactly – for example, √3 = 1.732050808... .
Sometimes it is useful to work in square roots, rather than using an approximate decimal value. Square roots can be manipulated just like algebraic expressions and sometimes it may be possible to
eliminate the square root (called rationalising the expression), which may have not been possible if you tried to work with the approximate value. When asked to give the exact value, approximate
decimal answers will not do and you will have to manipulate surds in order to give a final answer in simplified surd form.
Simplification of surds
Because surds can be manipulated like algebraic expressions, you can easily multiply out the terms and add the like terms. However, there are also a few rules that will be useful when simplifying
Because ${\displaystyle {\sqrt {x}}\times {\sqrt {x}}=x}$ , it is useful to know that it can be rearranged to give ${\displaystyle {\sqrt {x}}={\frac {x}{\sqrt {x}}}}$ and ${\displaystyle {\frac {1}
{\sqrt {x}}}={\frac {\sqrt {x}}{x}}}$ .
Because ${\displaystyle {\sqrt[{n}]{x}}=x^{\frac {1}{n}}}$ the laws of indices also apply to any n-th root. The most frequently used instances of this are laws 4 and 5 with square roots:
• ${\displaystyle \left(xy\right)^{n}=x^{n}y^{n}}$ becomes ${\displaystyle {\sqrt {xy}}={\sqrt {x}}\times {\sqrt {y}}}$
• ${\displaystyle \left({\frac {x}{y}}\right)^{n}={\frac {x^{n}}{y^{n}}}}$ becomes ${\displaystyle {\sqrt {\frac {x}{y}}}={\frac {\sqrt {x}}{\sqrt {y}}}}$
The first of these points is often used to simplify a square root, for example ${\displaystyle {\sqrt {200}}={\sqrt {100\times 2}}={\sqrt {100}}\times {\sqrt {2}}=10{\sqrt {2}}}$ . In an exam, you
will be expected to write all square roots with the smallest possible number inside the square root (i.e. the number inside the root shouldn't have any square factors).
Rationalising the denominator
Another technique to simplify expressions involving square roots is to rationalise the denominator. This means getting rid of square roots from the bottom of a fraction. In the case of a fraction
such as ${\displaystyle {\frac {5}{\sqrt {3}}}}$ , both numerator and denominator can be multiplied by ${\displaystyle {\sqrt {3}}}$ to give ${\displaystyle {\frac {5{\sqrt {3}}}{3}}}$ .
If the fraction is of the form${\displaystyle {\frac {a}{b+{\sqrt {c}}}}}$ the strategy used in the previous paragraph will only work if it is modified slightly. This time you should multiply the
numerator and denominator by ${\displaystyle {b-{\sqrt {c}}}}$ . If you are familiar with the standard difference of two squares expansion you should already know what happens next:
${\displaystyle {\frac {a}{b+{\sqrt {c}}}}\times {\frac {b-{\sqrt {c}}}{b-{\sqrt {c}}}}={\frac {ab-a{\sqrt {c}}}{b^{2}+b{\sqrt {c}}-b{\sqrt {c}}-{\sqrt {c}}^{2}}}={\frac {ab-a{\sqrt {c}}}{b^{2}-c}}}$
. As you can see the denominator now does not contain any square roots. For example: ${\displaystyle {\frac {2}{{\sqrt {3}}-1}}\times {\frac {{\sqrt {3}}+1}{{\sqrt {3}}+1}}={\frac {2{\sqrt {3}}+2}
{3-1}}={\sqrt {3}}+1}$
Common questions and mistakes
A common mistake is to split ${\displaystyle {\sqrt {x+y}}}$ into ${\displaystyle {\sqrt {x}}+{\sqrt {y}}}$ or ${\displaystyle \left(x+y\right)^{2}}$ into ${\displaystyle x^{2}+y^{2}}$ , usually
whilst moving it to the other side of the equals. Trying a few examples will quickly convince you that this is not possible:
• ${\displaystyle {\sqrt {25}}}$ ≠ ${\displaystyle {\sqrt {9}}+{\sqrt {16}}}$
• ${\displaystyle {\sqrt {64}}}$ ≠ ${\displaystyle {\sqrt {32}}+{\sqrt {32}}}$
And so on
What is the value of ${\displaystyle 0^{0}}$ ?
The short answer is that for this course you don't need to know, and you can safely skip this section. If you're still interested then read on:
The question arises because ${\displaystyle x^{0}=1}$ for any ${\displaystyle x}$ and yet you would expect that ${\displaystyle 0^{y}=0}$ for any ${\displaystyle y}$ as ${\displaystyle 0\times 0\
times 0\dots =0}$ . It turns out using a value of 1 is quite useful (perhaps even necessary) in various parts of algebra, whereas making it zero doesn't help at all. Almost all mathematicians would
therefore either say that ${\displaystyle 0^{0}}$ is 1 or that it is undefined (that is, it can't be given a value). A more technical discussion can be found at http://www.faqs.org/faqs/sci-math-faq/ | {"url":"https://en.m.wikibooks.org/wiki/A-level_Mathematics/MEI/C1/Algebra","timestamp":"2024-11-06T10:56:53Z","content_type":"text/html","content_length":"605080","record_id":"<urn:uuid:316351e2-f0ff-4ad2-8d9c-1070f7133261>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00843.warc.gz"} |
COMEDK 2021 | d and f Block Elements Question 14 | Chemistry | COMEDK - ExamSIDE.com
COMEDK 2021
MCQ (Single Correct Answer)
In 3d-transmission series, which one has the least melting point?
COMEDK 2020
MCQ (Single Correct Answer)
The electronic configuration of Cr$$^{3+}$$ is
COMEDK 2020
MCQ (Single Correct Answer)
Formation of coloured solution is possible, when metal ion in the compound contains
COMEDK 2020
MCQ (Single Correct Answer)
Which of the following forms a colourless solution in aqueous medium?
Questions Asked from d and f Block Elements (MCQ (Single Correct Answer))
Number in Brackets after Paper Indicates No. of Questions | {"url":"https://questions.examside.com/past-years/jee/question/pin-3d-transmission-series-which-one-has-the-least-meltin-comedk-chemistry-some-basic-concepts-of-chemistry-k6mb0cubkor59hjo","timestamp":"2024-11-03T01:19:30Z","content_type":"text/html","content_length":"181732","record_id":"<urn:uuid:40a7da7c-4258-4773-ae9c-03613fe55e72>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00427.warc.gz"} |
Let f(x)=ex,g(x)=sin−1x and h(x)=f[g(x)], then h(x)h′(x) is eq... | Filo
Let and , then is equal to
Not the question you're searching for?
+ Ask your question
Was this solution helpful?
Found 2 tutors discussing this question
Discuss this question LIVE for FREE
10 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions from Continuity and Differentiability
View more
Practice questions on similar concepts asked by Filo students
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text Let and , then is equal to
Topic Continuity and Differentiability
Subject Mathematics
Class Class 12
Answer Type Text solution:1
Upvotes 51 | {"url":"https://askfilo.com/math-question-answers/let-fxex-gxsin-1-x-and-hxfgx-then-frachprimexhx-is-equal-to","timestamp":"2024-11-10T21:08:17Z","content_type":"text/html","content_length":"351454","record_id":"<urn:uuid:ffa6bffa-7288-49be-b768-d3d7ebb3574c>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00309.warc.gz"} |
Divide and Conquer Algorithm -Topperworld
Divide and Conquer Algorithm
Divide and Conquer is a fundamental algorithmic paradigm used to solve problems by breaking them down into smaller subproblems, solving the subproblems independently, and then combining their
solutions to obtain the final solution. The Divide and Conquer approach typically involves three steps:
1. Divide: Break the problem into smaller, more manageable subproblems that are similar in structure to the original problem. This step involves recursively dividing the problem into smaller
instances until the base case is reached.
2. Conquer: Solve the subproblems recursively. This step involves independently solving each subproblem using the same algorithm or approach as the original problem. If the subproblems are small
enough, they can be solved directly without further subdivision.
3. Combine: Merge the solutions of the subproblems to obtain the solution to the original problem. This step involves aggregating or combining the results from the subproblems to derive the final
In the below PDF we discuss about Divide and Conquer Algorithm in detail in simple language, Hope this will help in better understanding.
Applications of Divide and Conquer:
The Divide and Conquer paradigm finds applications across various domains due to its versatility and efficiency in solving complex problems. Some of the common applications include:
• Sorting Algorithms: Algorithms like Merge Sort and Quick Sort utilize Divide and Conquer to efficiently sort large datasets by recursively dividing them into smaller subarrays, sorting the
subarrays, and then merging or combining them.
• Searching Algorithms: Binary Search, a Divide and Conquer algorithm, efficiently searches sorted arrays by repeatedly dividing the search space in half until the target element is found or the
search space is exhausted.
• Matrix Operations: Algorithms for matrix multiplication, exponentiation, and inversion often use Divide and Conquer to break down the problem into smaller subproblems, compute solutions for the
subproblems, and combine them to obtain the final result.
• Optimization Problems: Divide and Conquer is used to solve optimization problems such as finding the closest pair of points, computing the convex hull of a set of points, and optimizing resource
allocation in scheduling and routing problems.
• Computational Geometry: Algorithms for geometric problems like finding intersections, calculating distances, and determining visibility regions make use of Divide and Conquer to efficiently
decompose the problem space, solve subproblems, and combine results.
• Numerical Computations: Divide and Conquer algorithms are employed in numerical methods like the Fast Fourier Transform (FFT), which decomposes the problem of transforming a sequence of values
into smaller subproblems that can be efficiently solved.
• Dynamic Programming: Dynamic programming problems often exhibit overlapping subproblems and optimal substructure, making them amenable to a Divide and Conquer approach. Many dynamic programming
algorithms utilize Divide and Conquer to break down the problem into smaller subproblems and combine their solutions.
Advantages of Divide and Conquer:
• Efficiency: Divide and Conquer algorithms often lead to efficient solutions for problems with large input sizes. By breaking down the problem into smaller subproblems, these algorithms can
exploit parallelism and reduce the time complexity of the overall solution.
• Scalability: Divide and Conquer algorithms are inherently scalable. As the input size increases, the algorithm can divide the problem into smaller subproblems, allowing it to handle larger
instances without significant increases in runtime.
• Simplicity: The Divide and Conquer approach provides a structured and systematic way to solve complex problems by breaking them down into smaller, more manageable subproblems. This makes the
algorithms easier to understand, implement, and debug.
• Optimal Substructure: Many real-world problems exhibit optimal substructure, meaning that the solution to the overall problem can be constructed from solutions to its smaller subproblems. Divide
and Conquer algorithms leverage this property to efficiently compute the optimal solution.
• Modularity: Divide and Conquer algorithms promote modularity by encapsulating the solution logic for each subproblem separately. This modular design makes it easier to maintain, test, and modify
the algorithm code.
• Parallelism: Divide and Conquer algorithms naturally lend themselves to parallelization. Since the subproblems are independent of each other, they can be solved concurrently on multiple
processing units, leading to significant speedup on parallel architectures.
• Versatility: Divide and Conquer can be applied to a wide range of problems across various domains, including sorting, searching, optimization, computational geometry, and numerical computations.
This versatility makes it a valuable tool in algorithm design.
• Optimization: Divide and Conquer algorithms often allow for optimization techniques such as memoization, pruning, and caching, which can further improve the efficiency of the solution by avoiding
redundant computations.
In conclusion, the Divide and Conquer Algorithm stands as a testament to the power of breaking down complex problems into simpler components. With its ability to efficiently tackle a wide range of
problems, from sorting to computational geometry, this algorithm continues to be a cornerstone of modern computer science. As technology advances and computational demands grow, the Divide and
Conquer approach remains a valuable tool for developers striving to optimize their algorithms and solve increasingly complex problems.
Divide and Conquer is a problem-solving technique where a problem is broken down into smaller, more manageable subproblems that are solved independently. These solutions are then combined to solve
the original problem.
Examples include sorting algorithms like Merge Sort and Quick Sort, searching algorithms like Binary Search, and algorithms for finding the maximum subarray sum or closest pair of points.
The time complexity varies depending on the specific algorithm and problem being solved. However, many Divide and Conquer algorithms have a time complexity of O(n log n) for sorting problems and O
(log n) for searching problems, where ‘n’ is the size of the input.
Divide and Conquer algorithms are often efficient and parallelizable, making them suitable for use in multi-core or distributed computing environments. They can also be easier to understand and
implement compared to more complex algorithms.
Recursion is a key component of Divide and Conquer algorithms. It involves breaking down a problem into smaller subproblems of the same type, solving these subproblems recursively, and then combining
their solutions to solve the original problem. Recursion simplifies the implementation of Divide and Conquer algorithms by allowing them to handle arbitrary levels of subproblem decomposition.
String Matching Algorithms String Matching
Leave a Comment | {"url":"https://topperworld.in/divide-and-conquer-algorithm/","timestamp":"2024-11-10T19:17:26Z","content_type":"text/html","content_length":"325265","record_id":"<urn:uuid:98a20e3a-52ca-459f-9fd9-c729286e129b>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00654.warc.gz"} |
Fast reconnaissance missions to the outer solar system utilizing energy derived from the gravitational field of Jupiter | Request PDF
... Several missions used this idea to save fuel in their maneuvers. The Voyager mission visited several planets of the Solar System gaining energy from successive close approaches [1][2][3][4].
Other applications of this maneuver are available in the literature, like: the use of Swing-Bys in the inner Solar System to send a spacecraft to the giant planets [5] or even to the Sun [6]; the use
of Venus in a trip to Mars [7,8]; studies to make a three-dimensional close approach to Jupiter to change the orbital plane of the spacecraft [9]; use of one [10] or two [11] passages by the Moon to
increase the energy of the spacecraft; the use of multiple passages by the secondary body to find trajectories linking the primaries [12] or the Lagrangian points [13,14]. ...
... It is the angle formed by the line of the periapsis (line linking the center of Jupiter to the point of the closest approach of the trajectory) and the line connecting the two primaries
(Sun-Jupiter). In the rotating system of reference this line is also the horizontal axis; (b) , the Jacobian constant, expressed by (4). Although this is no longer constant after the inclusion of the
atmospheric drag, this parameter is usually used to identify Swing-By trajectories. ... | {"url":"https://www.researchgate.net/publication/285840766_Fast_reconnaissance_missions_to_the_outer_solar_system_utilizing_energy_derived_from_the_gravitational_field_of_Jupiter","timestamp":"2024-11-07T01:24:26Z","content_type":"text/html","content_length":"739770","record_id":"<urn:uuid:5a580485-526e-4529-be94-810343898ba2>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00521.warc.gz"} |
Question :
Multiple Choice Questions
50.Assets that have been pledged as security for : 1259445
39) Sam Lewis owns a firm in New York City's garment district. If Sam keeps adding workers to use the same number of sewing machines, eventually the workplace will become so crowded that workers will
get in each other's way. At this point
A) the marginal product of labor in Sam's business would be negative and his total output would decrease.
B) Sam should encourage his workers to share their sewing machines.
C) Sam's business will be in violation of safety rules that have been established by the New York City government.
D) Sam should begin using a division of labor in his business.
40) In his book The Wealth of Nations, Adam Smith employed the example of a pin factory in order to explain what economic concept?
A) the relationship between the marginal and average product of labor
B) the law of diminishing returns
C) why no firm would want to hire so many workers as to experience a negative marginal product of labor
D) the division of labor
41) The total output produced by a firm divided by the quantity of workers employed by the firm is the definition of
A) the marginal product of labor.
B) the division of labor.
C) the average product of labor.
D) the average cost of production.
42) After Suzie, owner of Suzie's Sweet Shop, hires her 8th worker the average product of labor declines. Which of the following statements must be true?
A) The marginal product of the 8th worker is negative.
B) The marginal product of the 8th worker is less than the average product of labor before the 8th worker was hired.
C) Suzie's profits would be greater if she did not hire the 8th worker.
D) The average product of labor is negative.
43) Which of the following statements is true?
A) The average product of labor is at its maximum when the average product of labor equals the marginal product of labor.
B) The average product of labor is at its minimum when the average product of labor equals the marginal product of labor.
C) The average product of labor tells us how much output changes as the quantity of workers hired changes.
D) Whenever the marginal product of labor is greater than the average product of labor the average product of labor must be decreasing.
44) The marginal product of labor is calculated using the formula
A) L/Q.
B) ΔL/ΔQ.
C) ΔQ/ΔL.
D) Q/L.
45) Which of the following describes how output changes in the short run? Because of specialization and the division of labor, as more workers are hired
A) output will first increase at an increasing rate, then output will increase at a decreasing rate.
B) output will first decrease at an increasing rate, then increase at a decreasing rate.
C) the marginal product of labor will first decrease, then increase at a decreasing rate.
D) the marginal product of labor will first be negative and then will be positive.
46) If 11 workers can produce 53 units of output while 12 workers can produce 56 units of output, what is the marginal product of the 12th worker?
A) 0.16
B) 3
C) 4.67
D) 36 | {"url":"https://summitessays.com/2023/03/12/question-multiple-choice-questions50-assets-that-have-been-pledged-as-security-for-1259445/","timestamp":"2024-11-12T13:19:09Z","content_type":"text/html","content_length":"145572","record_id":"<urn:uuid:bad84853-54ec-4d7d-9e96-8b1307546b2c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00613.warc.gz"} |
Addition and Subtraction of Fractions | Adding and Subtracting Fractions
A day full of math games & activities. Find one near you.
A day full of math games & activities. Find one near you.
A day full of math games & activities. Find one near you.
A day full of math games & activities. Find one near you.
Addition and Subtraction of Fractions
While adding and subtracting fractions, we need to check whether the fractions have the same denominators or different denominators and then the calculation starts. Let us learn more about the
addition and subtraction of fractions in this article.
1. How to Add and Subtract Fractions?
2. Adding and Subtracting Fractions with Like Denominators
3. Adding and Subtracting Fractions with Unlike Denominators
3. Adding and Subtracting Mixed Fractions
4. Adding and Subtracting Fractions with Whole Numbers
5. FAQs on Addition and Subtraction of Fractions
How to Add and Subtract Fractions?
Addition and subtraction of fractions is done using similar rules in which the denominators are checked before the addition or subtraction starts. After the denominators are checked, we can add or
subtract the given fractions accordingly. The denominators are checked in the following way.
• If the denominators of the given fractions are the same, we add or subtract only the numerators and we retain the denominator.
• If the denominators are different, we convert the fractions to like fractions so that the denominators become the same, and then we add or subtract, whatever is required.
Let us learn about these in the following sections.
Adding and Subtracting Fractions with Like Denominators
The process for adding and subtracting fractions with like denominators is quite simple because we just need to work with the numerators.
Adding Fractions with Like Denominators
Let us add the fractions 1/5 and 2/5 using rectangular models. In this case, both the fractions have the same denominators. These fractions are called like fractions. The following figure represents
both the fractions in the same model.
• 1/5 indicates that 1 out of 5 parts are shaded yellow.
• 2/5 indicates that 2 out of 5 parts are shaded blue.
Out of the 5 parts, 3 parts are shaded. In the fractional form, this can be represented as 3/5.
Now, let us add the fractions with like denominators in numerical terms. In this case, we need to add 1/5 + 2/5. Let us use the following steps to understand the addition.
• Step 1: Add the numerators of the given fractions. Here, the numerators are 1 and 2, so it will be 1 + 2 = 3
• Step 2: Retain the same denominator. Here, the denominator is 5.
• Step 3: Therefore, the sum of 1/5 + 2/5 = (1 + 2)/5 = 3/5
It should be noted that we use the same method for subtracting fractions.
Subtracting Fractions with Like Denominators
Let us subtract the fractions 2/5 and 1/5 using rectangular models. We will represent 2/5 in this model by shading 2 out of 5 parts. We will further shade out 1 part from our shaded parts of the
model which would represent removing 1/5.
We are now left with 1 part in the shaded parts of the model.
Now, let us subtract the fractions with like denominators in numerical terms. In this case, we need to subtract 2/5 - 1/5. Let us understand the procedure using the following steps.
• Step 1: We will subtract the numerators of the given fractions. Here, the numerators are 2 and 1, so it will be 2 - 1 = 1
• Step 2: Retain the same denominator. Here, the denominator is 5.
• Step 3: Therefore, the difference of 2/5 - 1/5 = (2 - 1)/5 = 1/5
Adding and Subtracting Fractions with Unlike Denominators
For adding and subtracting fractions with unlike denominators, we need to convert the unlike fractions to like fractions by writing their equivalent fractions in such a way that their denominators
become the same. Let us understand this with the help of an example.
Example: Add 1/5 + 1/3
Solution: For adding unlike fractions we need to use the following steps
• Step 1: Find the Least Common Multiple (LCM) of the denominators. Here, the LCM of 5 and 3 is 15.
• Step 2: Convert the given fractions to like fractions by writing the equivalent fractions for the respective fractions such that their denominators remain the same. Here, it will be \(\frac {1}
{5}\)×\(\frac {3}{3}\)=\(\frac {3}{15}\)
• Step 3: Similarly, an equivalent fraction of 1/3 with denominator 15 is \(\frac {1}{3}\)×\(\frac {5}{5}\)=\(\frac {5}{15}\)
• Step 4: Now, that we have converted the given fractions to like fractions we can add the numerators and retain the same denominator. This will be 3/15 + 5/15 = 8/15
Subtracting Fractions with Unlike Denominators
For subtracting unlike fractions, we follow the same steps as we did for the addition of unlike fractions. Let us understand this with the help of an example.
Example: Subtract 5/6 - 1/3
Solution: For subtracting unlike fractions we need to use the following steps.
• Step 1: Find the Least Common Multiple (LCM) of the denominators. Here, the LCM of 6 and 3 is 6.
• Step 2: Convert the given fractions to like fractions by writing the equivalent fractions for the respective fractions such that their denominators remain the same. Here, it will be \(\frac {5}
{6}\)×\(\frac {1}{1}\)=\(\frac {5}{6}\)
• Step 3: Similarly, an equivalent fraction of 1/3 with denominator 6 is \(\frac {1}{3}\)×\(\frac {2}{2}\)=\(\frac {2}{6}\)
• Step 4: Now, that we have converted the given fractions to like fractions we can subtract the numerators and retain the same denominator. This will be 5/6 - 2/6 = 3/6. This can be further reduced
to 1/2
Adding and Subtracting Mixed Fractions
Adding and subtracting mixed fractions is done by converting the mixed fractions to improper fractions and then the addition or subtraction is done as per the requirement. Let us understand these
with the help of the following examples.
Example: Add the mixed fractions: \(2\dfrac{1}{4}\) + \(1\dfrac{3}{4}\)
Solution: First let us convert the mixed fractions to improper fractions.
• Step 1: Convert the given mixed fractions to improper fractions. So, \(2\dfrac{1}{4}\) will become 9/4; and \(1\dfrac{3}{4}\) will become 7/4
• Step 2: Add the fractions by adding the numerators because the denominators are the same. This will be 9/4 + 7/4= 16/4.
• Step 3: Reduce the fraction, if required. This will become, 16/4 = 4. Therefore, \(2\dfrac{1}{4}\) + \(1\dfrac{3}{4}\) = 4.
Now, let us understand the subtraction of mixed fractions using the same method.
Example: Subtract the mixed fractions: \(5\dfrac{1}{3}\) - \(2\dfrac{1}{3}\)
Solution: First let us convert the mixed fractions to improper fractions.
• Step 1: Convert the given mixed fractions to improper fractions. So, \(5\dfrac{1}{3}\) will become 16/3; and \(2\dfrac{1}{3}\) will become 7/3
• Step 2: Subtract the fractions by subtracting the numerators because the denominators are the same. This will be 16/3 - 7/3 = 9/3
• Step 3: Reduce the fraction, if required. This will become, 9/3 = 3. Therefore, \(5\dfrac{1}{3}\) - \(2\dfrac{1}{3}\) = 3
Adding and Subtracting Fractions with Whole Numbers
Adding and subtracting fractions with whole numbers can be done using the following method. Let us understand this using an example.
Example: Add 7/4 + 5
Solution: Let us add 7/4 + 5 using the following steps.
• Step 1: Write the whole number in the form of a fraction. In this case the whole number is 5 which can be written as 5/1. So, now we need to add 7/4 + 5/1
• Step 2: Now, find the LCM of the denominators and convert the given fractions to like fractions. Here the LCM of 4 and 1 is 4. And after converting them to like fractions we get, (7 × 1)/(4 × 1)
+ (5 × 4)/(1 × 4) = 7/4 + 20/4
• Step 3: Add the numerators while the denominator remains the same. Here, 7/4 + 20/4 = 27/4 = \(6\dfrac{3}{4}\)
Now, let us understand the subtraction of a fraction from a whole number with the help of the following example.
Example: Subtract 6 - 3/5
Solution: Let us subtract 6 - 3/5 using the following steps.
• Step 1: Write the whole number in the form of a fraction. In this case the whole number is 6 which can be written as 6/1. So, now we need to subtract 6/1 - 3/5
• Step 2: Now, find the LCM of the denominators and convert the given fractions to like fractions. Here the LCM of 1 and 5 is 5. And after converting them to like fractions we get, (6 × 5)/(1 × 5)
- (3 × 1)/(5 × 1) = 30/5 - 3/5
• Step 3: Subtract the numerators while the denominator remains the same. Here, 30/5 - 3/5 = 27/5 = \(5\dfrac{2}{5}\)
Important Notes on Adding and Subtracting Fractions
• For adding and subtracting like fractions, we can directly work with the numerators while the denominators remain the same.
• For adding and subtracting unlike fractions, never add or subtract the numerators and denominators directly. Convert them to like fractions and then add or subtract.
☛ Related Topics
Adding and Subtracting Fractions Examples
1. Example 1: Find the sum of 1/7 + 3/7
Solution: The given fractions are like fractions so we will add the numerators and retain the same denominator.
1/7 + 3/7 = (1 + 3)/7 = 4/7
Therefore, the sum is 4/7
2. Example 2: Subtract 2/3 - 2/5
Solution: The given fractions are unlike fractions. So, we need to find the LCM of the denominators and convert 2/5 and 2/3 to equivalent fractions of the same denominator and then subtract.
LCM of (3, 5) = 15
&= \left(\frac {2}{3} \times \frac {5}{5} \right) - \left(\frac {2}{5} \times \frac {3}{3} \right) \\
&= \frac {10}{15} - \frac {6}{15} \\
&= \frac {4}{15} \end{align}\)
Therefore, the difference is 4/15
3. Example 3: State true or false with respect to adding and subtracting fractions.
a.) 4/5 + 3/5 = 7/5
b.) 7/8 - 2/8 = 9/8
a.) True, 4/5 + 3/5 = 7/5
b.) False, 7/8 - 2/8 = 5/8
Show Solution >
How can your child master math concepts?
Math mastery comes with practice and understanding the ‘Why’ behind the ‘What.’ Experience the Cuemath difference.
Practice Questions on Addition and Subtraction of Fractions
Check Answer >
FAQs on Addition and Subtraction of Fractions
How to Add and Subtract Fractions?
For adding and subtracting fractions, we first need to check the denominators. If the denominators are the same, we simply add or subtract the numerators and retain the same denominator. In the case
of unlike fractions, when the denominators are not the same, we convert the unlike fractions to like fractions by finding the LCM of the denominators. This helps in writing their respective
equivalent fractions and then they are added or subtracted, as required.
How to Add and Subtract Fractions with Different Denominators?
In order to add and subtract fractions with different denominators, we need to convert the fractions to like fractions so that the denominators become the same. Once the denominators are the same, we
can add or subtract the numerators. In order to convert the given fractions to like fractions, we need to find the LCM of the denominators and then write their respective equivalent fractions. The
equivalent fractions with the same denominators can then be added or subtracted, as the case may be.
How to Add and Subtract Fractions with Whole Numbers?
For adding and subtracting fractions with whole numbers we use the following method.
• Write the whole number in the form of a fraction by writing 1 as its denominator. For example, if we need to add 8/7 + 5, we will write the whole number in the form of a fraction. In this case
the whole number is 5 which can be written as 5/1. So, now we need to add 8/7 + 5/1. We will find the LCM of the denominators and convert the given fractions to like fractions. Here the LCM of 7
and 1 is 7. And after converting them to like fractions we get, (8 × 1)/(7 × 1) + (5 × 7)/(1 × 7) = 8/7 + 35/7 = 43/7 = \(6\dfrac{1}{7}\)
• The same method will be used for subtraction, for example, if we need to subtract 7 - 2/5, we will write the whole number 7 as 7/1 and then subtract. This will make it 7/1 - 2/5. We will find the
LCM of the denominators and convert the given fractions to like fractions. Here the LCM of 5 and 1 is 5. And after converting them to like fractions we get, (7 × 5)/(1 × 5) - (2 × 1)/(5 × 1) = 35
/5 - 2/5 = 33/5 = \(6\dfrac{3}{5}\)
How to Add and Subtract Fractions with Mixed Numbers?
To add and subtract fractions with mixed numbers, we convert the mixed numbers to improper fractions. Now, if they are like fractions, we can simply add or subtract the numerators and retain the same
denominator. For adding or subtracting unlike fractions, we convert them to like fractions. We find the LCM of the denominators and convert the addends to their equivalent fractions and add them in
the same way as we add like fractions.
What are the Rules for Adding and Subtracting Fractions?
The basic rules for adding and subtracting fractions are given below:
• We need to check if the denominators of the fractions are same or different.
• If the denominators are the same, we can simply add or subtract the numerators.
• If the denominators are not the same, we need to convert them to like fractions and then we add or subtract.
Download FREE Study Materials
Addition and Subtraction of Fractions Worksheet
Math worksheets and
visual curriculum | {"url":"https://www.cuemath.com/numbers/addition-and-subtraction-of-fractions/","timestamp":"2024-11-03T07:35:27Z","content_type":"text/html","content_length":"255184","record_id":"<urn:uuid:5ec39256-0557-4103-b436-9ee2d9ef40ed>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00109.warc.gz"} |
--- title: "Solving Ordinary Least Squares (OLS) Regression Using Matrix Algebra" date: "2019-01-30" output: html_document: highlight: textmate theme: lumen code_download: true toc: yes toc_float:
collapsed: yes smooth_scroll: yes ---
*Tags:* Statistics R
In psychology, we typically learn how to calculate OLS regression by calculating each coefficient separately. However, I recently learned how to calculate this using matrix algebra. Here is a brief
tutorial on how to perform this using R.
## R Packages ```{r} packages <- c("tidyverse", "broom") xfun::pkg_attach(packages, message = F) ```
## Dataset ```{r} dataset <- carData::Salaries %>% select(salary, yrs.since.phd) %>% mutate(yrs.since.phd = scale(yrs.since.phd, center = T, scale = F)) ``` ```{r} summary(dataset) ``` The `Salaries`
dataset is from the `carData` package, which shows the salary of professors in the US during the academic year of 2008 and 2009. Let's say we are interested in determining if professors who have had
their Ph.D. degree for longer are more likely to also have higher salaries.
## Solve Using Matrix Algebra ### Design Matrix The design matrix is just a dataset of the all the predictors, which includes the `intercept` set at 1 and `yrs.since.phd`. ```{r} x <- tibble(
intercept = 1, yrs.since.phd = as.numeric(dataset$yrs.since.phd) ) %>% as.matrix() head(x) ```
### Dependent Variable ```{r} y <- dataset$salary %>% as.matrix() head(y) ```
### $X'X$ First, we need to solve for $X'X$, which is the transposed design matrix ($X'$) multiplied by the design matrix ($X$). Let's take a look at what $X'$ looks like. ```{r} x_transposed <- t(x)
x_transposed[, 1:6] ```
After multiplication, the matrix provides the total number of participants ($n$ = 397; really, the sum of the intercept), sum of `yrs.since.phd` ($\Sigma(yrs.since.phd)$ = 0), and sum of squared
`yrs.since.phd` ($\Sigma (yrs.since.phd^2)$ = 65765.64). Respectively, $\Sigma (years.since.phd)$ and $\Sigma (yrs.since.phd^2)$ are sum of error ($\Sigma(yrs.since.phd-M_{yrs.since.phd})$) and sum
of squared error ($\Sigma(yrs.since.phd-M_{yrs.since.phd})^2$) because we first centered the `yrs.since.phd` variable. ```{r} x_prime_x <- (x_transposed %*% x) x_prime_x %>% round(., 2) ```
Let's verify this. ```{r} colSums(x) %>% round(., 2) colSums(x^2) %>% round(., 2) ```
### $(X'X)^{-1}$ $(X'X)^{-1}$ is the inverse matrix of $X'X$. ```{r} x_prime_x_inverse <- solve(x_prime_x) x_prime_x_inverse ```
### $X'Y$ $X'Y$ contains the sum of Y ($\Sigma Y$ = 45141464) and sum of $XY$ ($\Sigma XY$ = 64801658). ```{r} x_prime_y <- x_transposed %*% y x_prime_y ```
Let's verify this. ```{r} sum(y) sum(x[, 2] * y) ```
### Coefficients To obtain the coefficients, we can multiply these last two matrices ($b = (X'X)^{-1}X'Y$). ```{r} coef <- x_prime_x_inverse %*% x_prime_y coef ```
### Standard Error To calculate the standard error, we multiply the inverse matrix of $X'X$ by the mean squared error (MSE) of the model and take the square root of its diagonal matrix ($\sqrt{diag
((X'X)^{-1} * MSE)}$).
First, we need to calculate the $MSE$ of the model. Calculating $MSE$ of the model is still the same, $MSE = \frac{\Sigma(Y-\hat{Y})^{2}}{n-p} = \frac{\Sigma(e^2)}{df}$ where $Y$ is the DV, $\hat{Y}$
is the predicted DV, $n$ is the total number of participants (or data points), and $p$ is the total number of variables in the design matrix (or predictors, which includes the intercept).
To obtain the predicted values ($\hat{Y}$), we can also use matrix algebra by multiplying the design matrix with the coefficients ($\hat{Y} = Xb$). ```{r} y_predicted <- x %*% coef head(y_predicted)
Now that we have $\hat{Y}$, we can then calculate the $MSE$. ```{r} e <- y - y_predicted se <- sum(e^2) n <- nrow(x) p <- ncol(x) df <- n - p mse <- se / df mse ```
Then, we multiply $(X'X)^{-1}$ by MSE. ```{r} mse_coef <- x_prime_x_inverse * mse mse_coef %>% round(., 2) ```
Then, we take the square root of the diagonal matrix to obtain the standard error of the coefficients. ```{r} rmse_coef <- sqrt(diag(mse_coef)) rmse_coef %>% round(., 2) ```
### *t*-Statistic The *t*-statistic is just the coefficient divided by the standard error of the coefficient. ```{r} t_statistic <- as.numeric(coef) / as.numeric(rmse_coef) t_statistic ```
### *p*-Value We want the probability of obtaining that score or more extreme and not the other way around. Thus, we need to set lower to FALSE. Also, we need to multiply it by 2 to obtain a
two-tailed test. ```{r} p_value <- 2 * pt(t_statistic, df, lower = FALSE) p_value ```
### Summary ```{r} tibble( term = colnames(x), estimate = as.numeric(coef), std.error = as.numeric(rmse_coef), statistic = as.numeric(t_statistic), p.value = as.numeric(p_value) ) ```
## Solve Using `lm` Function ```{r} lm(salary ~ yrs.since.phd, dataset) %>% tidy() ``` | {"url":"https://raw.githubusercontent.com/epongpipat/epongpipat.github.io/master/blog-solving-ols-regression-using-matrix-algebra.Rmd","timestamp":"2024-11-12T07:52:36Z","content_type":"text/plain","content_length":"7191","record_id":"<urn:uuid:d5099ad9-1833-4721-812b-d1f103458d55>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00030.warc.gz"} |
Bonds remain in favor in time-varying model portfolio
The equity allocation increased by 3 percentage points to 39% for the quarter ended June 30, 2024. The underperformance of U.S. value and ex-U.S. developed markets stocks has resulted in improved
valuations. Our allocations to these categories, as well as to emerging markets equities, have increased by 1 percentage point each.
With this shift, our TVAA equity allocation is now 21 percentage points less than that of a market-capitalization-weighted benchmark portfolio with 60% equities and 40% bonds. “The time-varying
portfolio is a reflection of an interest rate environment that remains positive for fixed income, while high valuations have left U.S. stocks’ equity risk premiums muted,” said Harshdeep Ahluwalia,
Vanguard head of asset allocation for the Americas.
During the quarter, TVAA allocations to international bonds, U.S. intermediate credit bonds, and long-term U.S. Treasury bonds were reduced by 1 percentage point apiece because of the improvement in
the outlook for U.S. value and non-U.S. equities.
Our TVAA is geared toward investors who are comfortable with model forecast risk, a type of active risk in which investors embrace our disciplined model for navigating changing market and economic | {"url":"https://corporate.vanguard.com/content/corporatesite/us/en/corp/vemo/bonds-remain-favor-time-varying-model-portfolio.html","timestamp":"2024-11-08T05:12:46Z","content_type":"text/html","content_length":"54105","record_id":"<urn:uuid:c3c81786-2a56-402c-8b51-a8279e2cb69b>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00611.warc.gz"} |
IRR vs ROI | Top 8 Diffences to Learn with Infographics
Updated July 24, 2023
Difference Between IRR vs ROI
IRR vs ROI in this article, IRR stands for Internal Rate of return, and it is a metric used for the determination of the actual performance of a potential investment or a project, especially for a
shorter span of time. It can be defined as the discounted ROI (rate of interest) that tends to differentiate between present investments, and the upcoming net present value equals zero. IRR considers
the time value of money, and it makes it easy to draw comparisons between different projects.
ROI stands for Return on Investment. It is a metric used in the calculation of the revenues or losses earned from a particular investment compared to the initial investment made by a company. It can
be defined as a percentage increase or decrease in returns earned from an investment during the same tenure. ROI is quick and flexible, and it makes it easy to draw comparisons between multiple
Head To Head Comparison Between IRR vs ROI
Below are the Top 8 comparison between IRR vs ROI:
Key Differences Between IRR vs ROI
The key differences between the Internal Rate of Return and Return on Investment are provided and discussed as follows:
• Internal rate of return is the full form of IRR, whereas Return on investment is the full form of ROI.
• The internal rate of return considers the future value of money, whereas Return on Investment ignores money’s future value.
• Internal rate of return is used to compute the return on investment from a potential investment or a project, especially for a shorter duration of time. On the other hand, Return on investment is
used to calculate the overall cash inflows and outflows from an investment over a particular span of time.
• The internal rate of return is a little complicated as it takes multiple factors into its due consideration. On the other hand, Return on investment is not that complicated as compared to the
internal rate of return since it does not take the future value of money into its due consideration.
IRR vs ROI Comparison Table
Let’s discuss the top comparison between IRR vs ROI:
Basis of IRR ROI
Full form IRR is the short form used for the Internal rate of return ROI is the short form used for Return on Investment
The internal rate of return can be defined as the discounted rate of interest at which the NPV or net Return on investment is also termed Rate of return, and it can be defined as
present value of the cash inflows and cash outflows of a specific project is equal to zero. In other the percentage rise or fall in investment during the tenure of the same.
Definition words, the internal rate of return can be defined as the discount rate that generates the value of an Return on investment is used to calculate the profits or losses generated from
investment. Internal rate of return is used to determine the return on investment, and it is, therefore, an investment in comparison to the actual amount of investment made.
different than NPV or net present value of a potential project or an investment.
Internal rate of return is taken into use for
Return on investment is taken into use for the purposes of-
• Computing the profitability of a specific project or investment. This means that if the internal rate
of return of a potential investment or a project is more than the expected ROR (Rate of return), then • Making financial decisions,
Purpose the same shall be considered desirable. However, if the internal rate of return of a potential • Comparing the profitability of a company,
investment or a project is less than the expected ROR, then the same shall be rejected. • Comparing the efficiency of multiple investments.
• The internal rate of return is used to determine the expected ROI or return on stocks that even
comprises of the yield with respect to the maturity on the bonds.
Used The internal rate of return is used for calculating the ROI, especially for a shorter period of time. Return on investment is used for calculating the performance of an investment
for a particular span of time.
Future or Time The internal rate of return does take the future or time value of money into consideration. Return on investment does not take the future or time value of money into
value of money consideration.
The formula The formula used for the purpose of calculation of Return on investment is
used for the The formula used for the purpose of calculation of the internal rate of return is provided below provided below:
calculation of P[0] + P[1]/(1+IRR) + P[2]/(1+IRR)[2] + P[3]/(1+IRR)[3] + . . . +P[n]/(1+IRR)[n ]= Zero Return on investment = (Net Profit / Cost of Investment) x 100
IRR and ROI
When is an IRR
or ROI Higher the internal rate of return, higher shall be the expected positive cash inflows from a specific Return on investment is considered ideal when there is a fifteen percent rise
considered project or an investment. in the same as compared to the previous two years.
The cons of Return on an investment are:
The cons of the internal rate of return are:
Cons • It does not take the time or future value of money into its due
• Little complicated as it makes adjustments for numerous factors consideration.
• IRR does not measure the actual size of the rate of returns from an investment. • The calculation of ROI can be easily manipulated, and this is also why the
results may differ for various types of users.
Internal rate of return and return on investment are commonly used methods for the evaluation of the financial performance with respect to cash inflows and cash outflows of a particular investment or
a project lying ahead of the company.
The companies use both IRR and ROI for decision-making to confirm the acceptance of a particular project or reject the same. The internal rate of return takes the future value of money into
consideration, whereas the same is ignored in the case of Return on investment.
Recommended Articles
This is a guide to IRR vs ROI. Here we discuss the difference between IRR vs ROI, along with key differences, infographics, & a comparison table. You can also go through our other related articles to
learn more– | {"url":"https://www.educba.com/irr-vs-roi/","timestamp":"2024-11-13T07:45:29Z","content_type":"text/html","content_length":"320058","record_id":"<urn:uuid:98d6864f-c06a-4665-ba90-61c7ed5d2ebd>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00202.warc.gz"} |
Explain what is the Statistical Problem Solving Process.
Q. Explain what is the Statistical Problem Solving Process.
Ans: 1. Formulate Statistical Investigative Questions – This can also be called as anticipating variability while beginning with the process. Formulating statistical investigative questions that
anticipate variability leads to productive investigations.
2. Collect/Consider the Data – This step can be called as acknowledging variability while designing for differences. Data collection designs must acknowledge variability in data.
3. Analyze the Data – This step can also be called as accounting of variability while the distributions. When we analyze the data, we try to understand its variability.
4. Interpret the Results – This step can also be called as allowing for variability while looking beyond the data. Statistical interpretations are made in the presence of variability and must take
variability into account.
Leave a Comment | {"url":"https://www.untoldpost.com/explain-what-is-the-statistical-problem-solving-process/","timestamp":"2024-11-12T23:42:21Z","content_type":"text/html","content_length":"69557","record_id":"<urn:uuid:a5cf1b3c-73bb-459b-bafa-1cb1fdff6313>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00354.warc.gz"} |
A singular control model with application to the goodwill problem
We consider a stochastic system whose uncontrolled state dynamics are modelled by a general one-dimensional Itô diffusion. The control effort that can be applied to this system takes the form that is
associated with the so-called monotone follower problem of singular stochastic control. The control problem that we address aims at maximising a performance criterion that rewards high values of the
utility derived from the system's controlled state but penalises any expenditure of control effort. This problem has been motivated by applications such as the so-called goodwill problem in which the
system's state is used to represent the image that a product has in a market, while control expenditure is associated with raising the product's image, e.g., through advertising. We obtain the
solution to the optimisation problem that we consider in a closed analytic form under rather general assumptions. Also, our analysis establishes a number of results that are concerned with analytic
as well as probabilistic expressions for the first derivative of the solution to a second-order linear non-homogeneous ordinary differential equation. These results have independent interest and can
potentially be of use to the solution of other one-dimensional stochastic control problems. © 2008 Elsevier B.V. All rights reserved.
• Goodwill problem
• Monotone follower problem
• Second-order linear ODE's
• Singular control
Dive into the research topics of 'A singular control model with application to the goodwill problem'. Together they form a unique fingerprint. | {"url":"https://researchportal.hw.ac.uk/en/publications/a-singular-control-model-with-application-to-the-goodwill-problem","timestamp":"2024-11-06T14:18:13Z","content_type":"text/html","content_length":"57940","record_id":"<urn:uuid:8ec8354a-f25e-40b5-af7f-289ab2fbda6e>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00718.warc.gz"} |
Latest Changes
• Discussion Type
• discussion topichigher groupoid C*-algebras
• Category Latest Changes
• Comments 1
• Last Active Apr 8th 2013
• Discussion Type
• discussion topicweak Hopf algebra
• Category Latest Changes
• Comments 1
• Last Active Apr 8th 2013
• cross-linked weak Hopf algebra and fusion category a bit more explicitly, added to both a reference to Ostrik’s article that shows the duality and added a corresponding item to
• Discussion Type
• discussion topicrelative categories
• Category Latest Changes
• Comments 1
• Last Active Apr 8th 2013
• I have now created relative category.
Question: Does the transferred model structure on $\mathbf{RelCat}$ resolve Rezk’s [2001] conjecture that the classification diagram of a model category is weakly equivalent to its simplicial
localisation? The $N_\xi$ functor looks very close to computing the hammock localisation to me…
• Discussion Type
• discussion topicHilbert bimodule
• Category Latest Changes
• Comments 2
• Last Active Apr 9th 2013
• added to Hilbert bimodule a pointer to the Buss-Zhu-Meyer article on their tensor products and induced 2-category structure.
• Discussion Type
• discussion topichopfish algebra
• Category Latest Changes
• Comments 23
• Last Active Apr 9th 2013
• Discussion Type
• discussion topictrialgebra and Hopf monoidal category
• Category Latest Changes
• Comments 1
• Last Active Apr 9th 2013
• created entries trialgebra and Hopf monoidal category
also expanded the Tannaka-duality overview table (being included in related entries):
to contain the first entries of the corresponding “higher Tannaka duality” relations
• Discussion Type
• discussion topicasymptotic C*-homomorphism
• Category Latest Changes
• Comments 2
• Last Active Apr 10th 2013
• Discussion Type
• discussion topicE-theory
• Category Latest Changes
• Comments 6
• Last Active Apr 10th 2013
• There is a new stub E-theory with redirect asymptotic morphism, new entry semiprojective morphism (of separable $C^\ast$-algebras) and stub Brown–Douglas–Fillmore theory, together with some
recent bibliography&links changes at Marius Dadarlat, shape theory etc. There should be soon a separate entry shape theory for operator algebras but I still did not do it.
• Discussion Type
• discussion topicCategorical Homotopy Theory
• Category Latest Changes
• Comments 1
• Last Active Apr 10th 2013
• added to some relevant entries a pointer to
□ Emily Riehl, Categorical homotopy theory, Lecture notes (pdf)
• Discussion Type
• discussion topichomotopical structure on C*-algebras
• Category Latest Changes
• Comments 4
• Last Active Apr 11th 2013
• created homotopical structure on C*-algebras , summarized some central statements from Uuye’s article on structure of categories of fibrant objects on $C^\ast Alg$.
• Discussion Type
• discussion topicvanishing at infinity
• Category Latest Changes
• Comments 4
• Last Active Apr 15th 2013
• Discussion Type
• discussion topickinematic tangent bundle
• Category Latest Changes
• Comments 2
• Last Active Apr 15th 2013
• I noticed that we have kinematic tangent bundle.
To incorporate this a bit into the nLab -web I have created stubs for operational tangent bundle (wanted by its kinematic cousin) and for synthetic tangent bundle and then I have interlinked all
these entries and linked to them from tangent bundle.
Also gave the Idea-section of kinematic tangent bundle a very first paragraph which very briefly says it all, before diving into discussion of what generalized smooth spaces are etc.
• Discussion Type
• discussion topicBinary Golay code
• Category Latest Changes
• Comments 3
• Last Active Apr 15th 2013
• Created binary Golay code. The construction is a little involved, and I haven’t put it in yet, because I think I can nut out a nicer description. The construction I aim to describe, in slightly
different notation and terminology is in
R. T. Curtis (1976). A new combinatorial approach to M24. Mathematical Proceedings of the Cambridge Philosophical Society, 79, pp 25-42. doi:10.1017/S0305004100052075.
• Discussion Type
• discussion topicA-infinity category
• Category Latest Changes
• Comments 8
• Last Active Apr 16th 2013
• Added to A-infinity category the references pointed to by Bruno Valette here.
• Discussion Type
• discussion topicfield (physics)
• Category Latest Changes
• Comments 17
• Last Active Apr 17th 2013
• started field (physics).
So far there is an Idea-section, a general definition with some remarks, and the beginning of a list of examples, which after the first spelled out (gravity) becomes just a list of keywords for
the moment.
More later.
• Discussion Type
• discussion topicMaps
• Category Latest Changes
• Comments 5
• Last Active Apr 17th 2013
• For some reason, we never had map redirect to function, but now we do. Same with mapping.
This may or may not be the best behaviour. We might actually want a page on how people distinguish these words, such as in topology (‘map’ = continuous map but ‘function’ = function, maybe).
• Discussion Type
• discussion topicslice-(infinity,1)-category
• Category Latest Changes
• Comments 1
• Last Active Apr 18th 2013
• added to slice (infinity,1)-category the statement that projecting slicing object away (dependent sum) reflects $\infty$-colomits.
• Discussion Type
• discussion topiccofinal (oo,1)-functor
• Category Latest Changes
• Comments 3
• Last Active Apr 18th 2013
• created cofinal (infinity,1)-functor
• Discussion Type
• discussion topicmetric abstract elementary class
• Category Latest Changes
• Comments 1
• Last Active Apr 18th 2013
• A stub for metric abstract elementary class. Related changes/additions on some model theory entries like elementary class of structures, forking entered a related blog at math blogs.
• Discussion Type
• discussion topicHopf algebroid over a commutative base
• Category Latest Changes
• Comments 1
• Last Active Apr 18th 2013
• I moved much of material from Hopf algebroid to Hopf algebroid over a commutative base, where both groupoid convolution algebras and group function algebras belong there. Most of the difference
is seen already at the level of bialgebroid, the stuff about antipode in general case is to be written. Some more changes to both entries.
• Discussion Type
• discussion topicrational map, variety etc.
• Category Latest Changes
• Comments 2
• Last Active Apr 18th 2013
• To add the old entry birational geometry I added a number of classical, very geometric, algebraic geometry entries rational map, birational map, rational variety, image of a rational map,
unirational variety and a number of redirects. The notion of an image is a bit unusual because the varieties and rational maps do not make a category, as the composition is not always defined.
However the notion of the image is still very natural here. For the concept of dominant rational map I did not make a separate entry but discussed it within rational map and made redirects.
• Discussion Type
• discussion topiclocally closed set
• Category Latest Changes
• Comments 2
• Last Active Apr 18th 2013
• Discussion Type
• discussion topicPlack Collaboration
• Category Latest Changes
• Comments 3
• Last Active Apr 18th 2013
• Was this meant to be Planck Collaboration? I have renamed it!
• Discussion Type
• discussion topicChevalley's theorem on constructible sets
• Category Latest Changes
• Comments 5
• Last Active Apr 18th 2013
• Chevalley’s theorem on constructible sets and elimination of quantifiers. The entries are related ! The interest came partly from teaching some classical algebraic geometry these days. The
related entry is also forking, though yet it is not said why; non-forking may be viewed as related to a notion of generic point, generic type (in the sense of model theory).
• Discussion Type
• discussion topicscattering theory
• Category Latest Changes
• Comments 4
• Last Active Apr 22nd 2013
• New stubs scattering and abstract scattering theory.
• Discussion Type
• discussion topicsymplectic leaf
• Category Latest Changes
• Comments 1
• Last Active Apr 23rd 2013
• added references to symplectic leaf
• Discussion Type
• discussion topicnoncommutative stable homotopy theory
• Category Latest Changes
• Comments 1
• Last Active Apr 24th 2013
• I felt we needed an entry explicitly titled noncommutative stable homotopy theory. So I created one. But it’s just a glorified redirect to KK-theory and E-theory.
• Discussion Type
• discussion topicWallman base, Wallman compactification
• Category Latest Changes
• Comments 3
• Last Active Apr 24th 2013
• Wallman compactification, redirecting also Wallman base (previously wanted at Stone Spaces). See the link to videos by Caramello, where also Alain Connes participates in a discussion.
• Discussion Type
• discussion topicdouble groupoid
• Category Latest Changes
• Comments 8
• Last Active Apr 25th 2013
• While creating double Lie algebroid I notied that we had a neglected entry double groupoid. I gave it a few more lines.
• Discussion Type
• discussion topichomotopy n-type - table
• Category Latest Changes
• Comments 5
• Last Active Apr 25th 2013
• created a table homotopy n-type - table and included it into the relevant entries
• Discussion Type
• discussion topicCartan-Eilenberg category
• Category Latest Changes
• Comments 12
• Last Active Apr 26th 2013
• A stub for Cartan-Eilenberg categories.
• Discussion Type
• discussion topicSpam attack, deletion of entries
• Category Latest Changes
• Comments 14
• Last Active Apr 26th 2013
• Look at what has been happening at derivator. The entry was erased and various things put there by an Anonymous Coward. Another one has reinstated the original one!! Has anyone noticed this? I
was travelling about the time it happened so my usual check did not occur.
At triangle identities something similar had started. I have rolled back.
I have added to groupoid convolution algebra the beginning of an Examples-section titled Higher groupoid convolution algebras and n-vector spaces/n-modules.
Conservatively, you can regard this indeed as just some examples of applications of the groupoid convolution algebra construction. But the way it is presented is supposed to be suggestive of a
“higher C*-algebra” version of convolution algebras of higher Lie groupoids.
I have labelled it as “under construction” to reflect the fact that this latter aspect is a bit experimental for the moment.
The basic idea is that to the extent that we do have groupoid convolution as a (2,1)-functor
$C \colon Grpd \to Alg_{b}^{op} \simeq 2Mod$
(as do do for discrete geometry and conjecturally do for smooth geometry), then this immediately means that it sends double groupoids to convolution sesquialgebras, hence to 3-modules with basis (
3-vector spaces).
As the simplest but instructive example of this I have spelled out how the ordinary dual(commutative and non-co-commutative) Hopf algebra of a finite group arises this way as the “horizontally
constant” double groupoid incarnation of $\mathbf{B}G$, while the convolution algebra of $G$ is the algebra of the “vertically discrete” double groupoid incarnation of $\mathbf{B}G$.
But next, if we simply replace the bare $Alg_b^{op} \simeq 2 Mod$ with the 2-category $C^\ast Alg_b$ of $C^\ast$-algebras and Hilbert bimodules between them and assume (as seems to be the case) that
$C^\ast$-algebraic groupoid convolution is a 2-functor
$LieGrpd_{\simeq} \to C^\ast Alg_n^{op}$
then the same argument goes through as before and yields convolution “$C^\ast$-2-algebras” that look like Hopf-C*-algebras. Etc. Seems to go in the right direction…
• Discussion Type
• discussion topiccohomological integration
• Category Latest Changes
• Comments 1
• Last Active Apr 7th 2013
• seeing the announcement of that diffiety summer school made me think that we should have a dedicated entry titled cohomological integration which points to the aspects of this discussed already
elswhere on the nLab, and which eventually lists dedicated references, if any. So I created a stub.
Does anyone know if there is a published reference to go with the relevant diffiety-school page ?
• Discussion Type
• discussion topicbibundle
• Category Latest Changes
• Comments 3
• Last Active Apr 5th 2013
• dropped some lines into a new Properties-section in the old and neglected entry bibundle. But not for public consumption yet.
• Discussion Type
• discussion topicclosure operator
• Category Latest Changes
• Comments 24
• Last Active Apr 4th 2013
• I felt we were lacking an entry closure operator. I have started one, but don’t have more time now. It’s left in a somewhat sad incomplete state for the moment.
• Discussion Type
• discussion topicclassification of finite groups
• Category Latest Changes
• Comments 3
• Last Active Apr 4th 2013
• just noticed that this morning some apparently knowledgable person signing as “Snoyle” added two paragraphs to finite group with technical details.
I have helped a bit with the syntax now and split off entries for quasisimple group and generalized Fitting subgroup
• Discussion Type
• discussion topicHopf C-star-algebra
• Category Latest Changes
• Comments 2
• Last Active Apr 2nd 2013
• started Hopf C-star algebra (but my computer is running out of battery power now..)
• Discussion Type
• discussion topiccanonical Hilbert-space of half-densities
• Category Latest Changes
• Comments 1
• Last Active Apr 1st 2013
• brief note: canonical Hilbert space of half-densities
• Discussion Type
• discussion topicstar-algebras and dagger-categories
• Category Latest Changes
• Comments 1
• Last Active Apr 1st 2013
• Added brief comments at star algebra, at dagger category and at category algebra on how convolution algebras on dagger-categories are naturally star algebras.
• Discussion Type
• discussion topicTannakian category
• Category Latest Changes
• Comments 5
• Last Active Apr 1st 2013
• I made a stub Tannakian category with some references.
• Discussion Type
• discussion topicLanglands dual groups and T-duality
• Category Latest Changes
• Comments 1
• Last Active Mar 31st 2013
• Added the recent reference on Langlands dual groups as T-dual groups to both geometric Langlands correspondence and T-duality together with a brief sentence. But nothing more as of yet.
• Discussion Type
• discussion topictopological algebra
• Category Latest Changes
• Comments 3
• Last Active Mar 31st 2013
• I could have sworn that we already had entries like “topological ring”, “topological algebra” or the like. But maybe we don’t, or maybe I am looking for the wrong variant titles.
I ended up creating a stub for topological algebra now…
• Discussion Type
• discussion topicC-star algebra
• Category Latest Changes
• Comments 8
• Last Active Mar 31st 2013
• I have added to C-star algebra the statement that the image of a $C^\ast$-algebra under an $\ast$-homomorphism is again $C^\ast$.
Also reorganized the Properties-section a bit and added more references.
• Discussion Type
• discussion topicEpsilons, epsilons, everywhere epsilons
• Category Latest Changes
• Comments 3
• Last Active Mar 30th 2013
• Discussion Type
• discussion topicgroupoid
• Category Latest Changes
• Comments 2
• Last Active Mar 29th 2013
• the entry groupoid could do with some beautifying.
I have added the following introductory reference:
□ Alan Weinstein, Groupoids: Unifying Internal and External Symmetry – A Tour through some Examples, Notices of the AMS volume 43, Number 7 (pdf)
• Discussion Type
• discussion topicmodules over Lie groupoid convolution algebras
• Category Latest Changes
• Comments 1
• Last Active Mar 29th 2013
• I have started adding some references to
on modules ($C^\ast$-modules) of (continuous, etc..) convolution algebras of topological/Lie groupoids.
I still need to look into this more closely. A motivating question for this kind of thing is:
what’s the right fine-tuning of the definition of modules over twisted Lie groupoid convolution algebras such that for centrally extended Lie groupoids it becomes equivalent to the corresponding
gerbe modules?
This seems fairly straightforward, but there are is some technical fine-tuning to deal with. I was hoping this is already stated cleanly in the literature somewhere. But maybe it is not. Or maybe
I just haven’t seen it yet.
• Discussion Type
• discussion topiccentrally extended groupoid
• Category Latest Changes
• Comments 1
• Last Active Mar 29th 2013
• Wrote a quick note at centrally extended groupoid and interlinked a little, for the moment just motivated by having the link point somwhere.
• Discussion Type
• discussion topicGoldblatt-Thomason theorem
• Category Latest Changes
• Comments 4
• Last Active Mar 29th 2013
• Discussion Type
• discussion topicfoliation of a Lie groupoid
• Category Latest Changes
• Comments 1
• Last Active Mar 25th 2013
• Discussion Type
• discussion topicLie algebroid-groupoid
• Category Latest Changes
• Comments 1
• Last Active Mar 25th 2013
• Discussion Type
• discussion topicfoliation of a Lie algebroid
• Category Latest Changes
• Comments 1
• Last Active Mar 25th 2013
• am starting foliation of a Lie algebroid
• Discussion Type
• discussion topicdouble Lie algebroid
• Category Latest Changes
• Comments 1
• Last Active Mar 25th 2013
• stub for double Lie algebroid
• Discussion Type
• discussion topicpolarization
• Category Latest Changes
• Comments 2
• Last Active Mar 22nd 2013
• as mentioned in another thread, I have expanded the Idea-section at polarization in order to highlight the relation to canonical momenta (which I also edited accordingly).
• Discussion Type
• discussion topicPositive numbers
• Category Latest Changes
• Comments 1
• Last Active Mar 22nd 2013
• I keep making links to positive number, so now I filled them.
• Discussion Type
• discussion topicphase and phase space in physics
• Category Latest Changes
• Comments 4
• Last Active Mar 22nd 2013
• felt like making a terminological note on phase and phase space in physics (and linked to it from the relevant entries).
If anyone has more information on the historical origin of the term “phase space”, please let me know.
• Discussion Type
• discussion topicphase
• Category Latest Changes
• Comments 2
• Last Active Mar 22nd 2013
• started a dismabiguation page for phase. Feel invited to add further meanings.
• Discussion Type
• discussion topicCauchy's mistake
• Category Latest Changes
• Comments 17
• Last Active Mar 22nd 2013
• Discussion Type
• discussion topicsemiclassical state
• Category Latest Changes
• Comments 1
• Last Active Mar 21st 2013
• Just in case you see me editing in the Recently Revised list and are wondering:
I have created and have started to fill some content into semiclassical state. But I am not done yet and the entry is not in good shape yet. So don’t look at yet it unless in a mood for fiddling
and editing.
• Discussion Type
• discussion topicclassical-to-quantum notions - table
• Category Latest Changes
• Comments 1
• Last Active Mar 21st 2013
• I started an entry classical-to-quantum notions - table for inclusion in “Related concepts”-sections in the relevant entries.
This is meant to clean up the existing such “Related concepts”-lists. But I am not done yet with the cleaning-up…
• Discussion Type
• discussion topicsemiclassical+approximation
• Category Latest Changes
• Comments 10
• Last Active Mar 21st 2013
• New entry semiclassical approximation. It requires a careful choice of references. The ones at the wikipedia article are catastrophically particular, 1-dimensional, old and non-geometric and hide
the story more than reveal. Stub Maslov index containing the main references for Maslov index. | {"url":"https://nforum.ncatlab.org/5/220/","timestamp":"2024-11-04T23:35:27Z","content_type":"application/xhtml+xml","content_length":"138787","record_id":"<urn:uuid:46a3a379-d2a1-4cff-918a-33fdf5118bfd>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00224.warc.gz"} |
How can I calculate my walking distance?
How can I calculate my walking distance?
At the end of the walk, you can calculate your mileage. Simply divide the number of total steps you took on your walk by your number of steps per mile. For example, if it took you 1,000 steps to get
around the quarter mile track, your calculations would look like this: 1,000 steps x 4 = 4,000 steps per mile.
How long does it take to walk 1.5 km?
Kilometer: A kilometer is 0.62 miles, which is also 3281.5 feet, or 1000 meters. It takes 10 to 12 minutes to walk at a moderate pace. Mile: A mile is 1.61 kilometers or 5280 feet. It takes 15 to 20
minutes to walk 1 mile at a moderate pace.
How long does 3.5 miles take to walk?
If you walk at a pace of 4 MPH, then you will take 15 minutes to walk one mile or 1 1/4 hours to walk 5 miles….How long does it take to walk 3 miles at 3.5 mph?
Miles per hour (mph) Minutes per mile Time for 3 miles
3.4 17:38 52:54
3.5 17:08 51:24
How do I map a walking route?
Draw a Route on Google Maps Alternatively zoom and drag the map using the map controls to pinpoint the start of your route. Draw your walking, running or cycling route by clicking on the map to set
the starting point. Then click once for each of the points along the route you wish to create to calculate the distance.
How do you calculate walking pace?
To calculate your pace, you will need to know the distance you have walked or run and the time it took you to do so. Pace = Time / Distance. A pace may not be a round number of minutes, in which case
you will need to convert fractions of a minute to seconds. Multiply the fraction of a minute by 60.
What is the formula to calculate mph?
Miles per hour can be abbreviated as mph, and are also sometimes abbreviated as mi/h or MPH. For example, 1 mile per hour can be written as 1 mph, 1 mi/h, or 1 MPH. Miles per hour can be expressed
using the formula: v mph = d mit hr.
How do you calculate walking speed?
The simple formula that enables you to find your walking speed is. Speed (in miles per hour) = Distance (in miles) / Time (in hours) However, to know if your walking speed is at par with other
healthy individuals, you need to take into account the average speed for your particular gender and age group.
What is the distance of 1000 steps?
Below is a table showing different amounts of steps and corresponding distances in miles. 1,000 steps = 0.47 miles. 2,000 steps = 0.95 miles. 3,000 steps = 1.42 miles. 4,000 steps = 1.89 miles. | {"url":"https://moorejustinmusic.com/other-papers/how-can-i-calculate-my-walking-distance/","timestamp":"2024-11-11T17:59:56Z","content_type":"text/html","content_length":"34782","record_id":"<urn:uuid:a0a7b082-0fb0-49d5-b3c0-9b51c3f05bc9>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00861.warc.gz"} |
Faraday's Law of Induction Understanding and Application
Faraday's Law of Induction Understanding and Application
Formula:inducedEMF = d(flux)/dt
Understanding Faraday's Law of Induction
Faraday's Law of Induction is a fundamental principle in electromagnetism, describing how a magnetic field interacts with an electric circuit to produce an electromotive force (EMF). This law,
discovered by Michael Faraday in 1831, is pivotal in how electric generators, transformers, and many other devices operate.
Formula Explanation
The formula for Faraday's Law of Induction is as follows:
inducedEMF = d(flux)/dt
• inducedEMF = Induced Electromotive Force (EMF) in volts (V)
• flux = Magnetic flux in webers (Wb)
• d(flux) = Change in magnetic flux
• dt = Change in time in seconds (s)
The negative sign shows that the induced EMF opposes the change in magnetic flux (Lenz's Law).
Inputs and Outputs
• Inputs:
□ flux (Wb): The magnetic flux, typically measured in webers (Wb).
□ dt (s): The time over which the change occurs, in seconds (s).
• Output:
□ inducedEMF (V): The induced electromotive force, measured in volts (V).
Real Life Examples
Consider a small hand cranked generator. As you turn the crank, you change the magnetic flux through the windings of the generator's coil. According to Faraday's Law, this change in flux over time
induces an EMF, generating a voltage that can be used to power a light bulb or charge a battery.
Data Table
Flux (Wb) Time (s) Induced EMF (V)
0.05 2 0.025
0.1 4 0.025
0.2 2 0.1
What is magnetic flux?
Magnetic flux refers to the total magnetic field passing through a given area. It is measured in webers (Wb).
What role does Lenz's Law play in Faraday's Law of Induction?
Lenz's Law states that the induced EMF will oppose the change in magnetic flux that caused it. This is why there is a negative sign in the formula.
Faraday's Law of Induction is a core concept in electromagnetism and is essential for understanding how electric circuits interact with changing magnetic fields. This law is foundational for modern
electrical engineering and physics, leading to the development of many technologies we rely on today.
Tags: Electromagnetism, Physics, Electrical Engineering | {"url":"https://www.formulas.today/formulas/FaradaysLawOfInduction/","timestamp":"2024-11-11T20:19:44Z","content_type":"text/html","content_length":"12210","record_id":"<urn:uuid:0d185502-8855-40df-93da-4eea585be4b3>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00068.warc.gz"} |
Introduction to probability and statistics - SILO.PUB
File loading please wait...
Citation preview
0 TABLE 3
Areas under the Normal Curve, pages 688–689
3.4 3.3 3.2 3.1 3.0
.0003 .0005 .0007 .0010 .0013
.0003 .0005 .0007 .0009 .0013
.0003 .0005 .0006 .0009 .0013
.0003 .0004 .0006 .0009 .0012
.0003 .0004 .0006 .0008 .0012
.0003 .0004 .0006 .0008 .0011
.0003 .0004 .0006 .0008 .0011
.0003 .0004 .0005 .0008 .0011
.0003 .0004 .0005 .0007 .0010
.0002 .0003 .0005 .0007 .0010
2.9 2.8 2.7 2.6 2.5
.0019 .0026 .0035 .0047 .0062
.0018 .0025 .0034 .0045 .0060
.0017 .0024 .0033 .0044 .0059
.0017 .0023 .0032 .0043 .0057
.0016 .0023 .0031 .0041 .0055
.0016 .0022 .0030 .0040 .0054
.0015 .0021 .0029 .0039 .0052
.0015 .0021 .0028 .0038 .0051
.0014 .0020 .0027 .0037 .0049
.0014 .0019 .0026 .0036 .0048
2.4 2.3 2.2 2.1 2.0
.0082 .0107 .0139 .0179 .0228
.0080 .0104 .0136 .0174 .0222
.0078 .0102 .0132 .0170 .0217
.0075 .0099 .0129 .0166 .0212
.0073 .0096 .0125 .0162 .0207
.0071 .0094 .0122 .0158 .0202
.0069 .0091 .0119 .0154 .0197
.0068 .0089 .0116 .0150 .0192
.0066 .0087 .0113 .0146 .0188
.0064 .0084 .0110 .0143 .0183
1.9 1.8 1.7 1.6 1.5
.0287 .0359 .0446 .0548 .0668
.0281 .0351 .0436 .0537 .0655
.0274 .0344 .0427 .0526 .0643
.0268 .0336 .0418 .0516 .0630
.0262 .0329 .0409 .0505 .0618
.0256 .0322 .0401 .0495 .0606
.0250 .0314 .0392 .0485 .0594
.0244 .0307 .0384 .0475 .0582
.0239 .0301 .0375 .0465 .0571
.0233 .0294 .0367 .0455 .0559
1.4 1.3 1.2 1.1 1.0
.0808 .0968 .1151 .1357 .1587
.0793 .0951 .1131 .1335 .1562
.0778 .0934 .1112 .1314 .1539
.0764 .0918 .1093 .1292 .1515
.0749 .0901 .1075 .1271 .1492
.0735 .0885 .1056 .1251 .1469
.0722 .0869 .1038 .1230 .1446
.0708 .0853 .1020 .1210 .1423
.0694 .0838 .1003 .1190 .1401
.0681 .0823 .0985 .1170 .1379
0.9 0.8 0.7 0.6 0.5
.1841 .2119 .2420 .2743 .3085
.1814 .2090 .2389 .2709 .3050
.1788 .2061 .2358 .2676 .3015
.1762 .2033 .2327 .2643 .2981
.1736 .2005 .2296 .2611 .2946
.1711 .1977 .2266 .2578 .2912
.1685 .1949 .2236 .2546 .2877
.1660 .1922 .2206 .2514 .2843
.1635 .1894 .2177 .2483 .2810
.1611 .1867 .2148 .2451 .2776
0.4 0.3 0.2 0.1 0.0
.3446 .3821 .4207 .4602 .5000
.3409 .3783 .4168 .4562 .4960
.3372 .3745 .4129 .4522 .4920
.3336 .3707 .4090 .4483 .4880
.3300 .3669 .4052 .4443 .4840
.3264 .3632 .4013 .4404 .4801
.3228 .3594 .3974 .4364 .4761
.3192 .3557 .3936 .4325 .4721
.3156 .3520 .3897 .4286 .4681
.3121 .3483 .3859 .4247 .4641
TABLE 3
0.0 0.1 0.2 0.3 0.4
.5000 .5398 .5793 .6179 .6554
.5040 .5438 .5832 .6217 .6591
.5080 .5478 .5871 .6255 .6628
.5120 .5517 .5910 .6293 .6664
.5160 .5557 .5948 .6331 .6700
.5199 .5596 .5987 .6368 .6736
.5239 .5636 .6026 .6406 .6772
.5279 .5675 .6064 .6443 .6808
.5319 .5714 .6103 .6480 .6844
.5359 .5753 .6141 .6517 .6879
0.5 0.6 0.7 0.8 0.9
.6915 .7257 .7580 .7881 .8159
.6950 .7291 .7611 .7910 .8186
.6985 .7324 .7642 .7939 .8212
.7019 .7357 .7673 .7967 .8238
.7054 .7389 .7704 .7995 .8264
.7088 .7422 .7734 .8023 .8289
.7123 .7454 .7764 .8051 .8315
.7157 .7486 .7794 .8078 .8340
.7190 .7517 .7823 .8106 .8365
.7224 .7549 .7852 .8133 .8389
1.0 1.1 1.2 1.3 1.4
.8413 .8643 .8849 .9032 .9192
.8438 .8665 .8869 .9049 .9207
.8461 .8686 .8888 .9066 .9222
.8485 .8708 .8907 .9082 .9236
.8508 .8729 .8925 .9099 .9251
.8531 .8749 .8944 .9115 .9265
.8554 .8770 .8962 .9131 .9279
.8577 .8790 .8980 .9147 .9292
.8599 .8810 .8997 .9162 .9306
.8621 .8830 .9015 .9177 .9319
1.5 1.6 1.7 1.8 1.9
.9332 .9452 .9554 .9641 .9713
.9345 .9463 .9564 .9649 .9719
.9357 .9474 .9573 .9656 .9726
.9370 .9484 .9582 .9664 .9732
.9382 .9495 .9591 .9671 .9738
.9394 .9505 .9599 .9678 .9744
.9406 .9515 .9608 .9686 .9750
.9418 .9525 .9616 .9693 .9756
.9429 .9535 .9625 .9699 .9761
.9441 .9545 .9633 .9706 .9767
2.0 2.1 2.2 2.3 2.4
.9772 .9821 .9861 .9893 .9918
.9778 .9826 .9864 .9896 .9920
.9783 .9830 .9868 .9898 .9922
.9788 .9834 .9871 .9901 .9925
.9793 .9838 .9875 .9904 .9927
.9798 .9842 .9878 .9906 .9929
.9803 .9846 .9881 .9909 .9931
.9808 .9850 .9884 .9911 .9932
.9812 .9854 .9887 .9913 .9934
.9817 .9857 .9890 .9916 .9936
2.5 2.6 2.7 2.8 2.9
.9938 .9953 .9965 .9974 .9981
.9940 .9955 .9966 .9975 .9982
.9941 .9956 .9967 .9976 .9982
.9943 .9957 .9968 .9977 .9983
.9945 .9959 .9969 .9977 .9984
.9946 .9960 .9970 .9978 .9984
.9948 .9961 .9971 .9979 .9985
.9949 .9962 .9972 .9979 .9985
.9951 .9963 .9973 .9980 .9986
.9952 .9964 .9974 .9981 .9986
3.0 3.1 3.2 3.3 3.4
.9987 .9990 .9993 .9995 .9997
.9987 .9991 .9993 .9995 .9997
.9987 .9991 .9994 .9995 .9997
.9988 .9991 .9994 .9996 .9997
.9988 .9992 .9994 .9996 .9997
.9989 .9992 .9994 .9996 .9997
.9989 .9992 .9994 .9996 .9997
.9989 .9992 .9995 .9996 .9997
.9990 .9993 .9995 .9996 .9997
.9990 .9993 .9995 .9997 .9998
List of Applications Business and Economics Actuaries, 172 Advertising campaigns, 655 Airline occupancy rates, 361 America’s market basket, 415–416 Assembling electronic equipment, 460 Auto
accidents, 328 Auto insurance, 58, 415, 477 Baseball bats, 286 Bidding on construction jobs, 476–477 Black jack, 286 Brass rivets, 286 Charitable contributions, 102 Coal burning power plant, 286
Coffee breaks, 172 College textbooks, 563–564 Color TVs, 638 Construction projects, 574–575 Consumer confidence, 306 Consumer Price Index, 101–102 Cordless phones, 124–125 Corporate profits, 565 Cost
of flying, 520–521 Cost of lumber, 462, 466 Deli sales, 274 Does college pay off?, 362 Drilling oil wells, 171 Economic forecasts, 236 e-shopping, 317 Flextime, 362 Fortune 500 revenues, 58 Gas
mileage, 475 Glare in rearview mirrors, 475 Grant funding, 156 Grocery costs, 113 Hamburger meat, 85, 234–235, 316–317, 361, 399 HDTVs, 59, 114, 526 Homeschool teachers, 623–624 Housing prices,
532–533 Inspection lines, 157 Internet on-the-go, 46–47 Interstate commerce, 176 Job security, 212 Legal immigration, 306, 334 Lexus, Inc., 113–114 Light bulbs, 424 Line length, 31–32 Loading grain,
236 Lumber specs, 286 Movie marketing, 376–377 MP3 players, 316 Multimedia kids, 306 Nuclear power plant, 286 Operating expenses, 334 Packaging hamburger meat, 72
Paper strength, 274 Particle board, 574 Product quality, 431 Property values, 642, 649 Raisins, 408–409 Rating tobacco leaves, 666 Real estate prices, 113 School workers, 339–340, 383–384 Service
times, 32 Shipping charges, 172 Sports salaries, 59 Starbucks, 59 Strawberries, 514, 521, 533 Supermarket prices, 659–660 Tax assessors, 416–417 Tax audits, 236 Teaching credentials, 207–208
Telecommuting, 609–610 Telemarketers, 195 Timber tracts, 73 Tuna fish, 59, 73, 90, 397, 407–408, 431, 461–462 Utility bills in southern California, 66, 86 Vacation destinations, 217 Vehicle colors,
624 Warehouse shopping, 477–478 Water resistance in textiles, 475 Worker error, 162
General Interest “900” numbers, 307 100-meter run, 136, 143 9/11 conspiracy, 383 9-1-1, 322 Accident prone, 204 Airport safety, 204 Airport security, 162 Armspan and height, 513–514, 522 Art critics,
665–666 Barry Bonds, 93 Baseball and steroids, 327 Baseball fans, 327 Baseball stats, 539 Batting champions, 32–33 Birth order and college success, 327 Birthday problem, 156 Braking distances, 235
Brett Favre, 74, 122, 398 Car colors, 196 Cell phone etiquette, 251–252 Cheating on taxes, 162 Christmas trees, 235 Colored contacts, 372 Comparing NFL quarterbacks, 85, 409 Competitive running, 665
Cramming, 144
Creation, 136 Defective computer chips, 207 Defective equipment, 171 Dieting, 322 Different realities, 327 Dinner at Gerards, 143 Driving emergencies, 72 Elevator capacities, 235 Eyeglasses, 135 Fast
food and gas stations, 197 Fear of terrorism, 46 Football strategies, 162 Free time, 101 Freestyle swimmers, 409 Going to the moon, 259–260 Golfing, 158 Gourmet cooking, 642, 649 GPAs, 335 GRE scores,
466 Hard hats, 424 Harry Potter, 196 Hockey, 538 Home security systems, 196 Hotel costs, 367–368 Human heights, 235 Hunting season, 335 In-home movies, 244 Instrument precision, 423–424 Insuring your
diamonds, 171–172 Itineraries, 142–143 Jason and Shaq, 157–158 JFK assassination, 609 Length, 513 Letterman or Leno, 170–171 M&M’S, 101, 326–327, 377 Machine breakdowns, 649 Major world lakes, 43–44
Man’s best friend, 197, 373 Men on Mars, 307 Noise and stress, 368 Old Faithful, 73 PGA, 171 Phospate mine, 235 Playing poker, 143 Presidential vetoes, 85 President’s kids, 73–74 Professor Asimov,
512, 521, 525 Rating political candidates, 665 Red dye, 416 Roulette, 135, 171 Sandwich generation, 613 Smoke detectors, 157 Soccer injuries, 157 Starbucks or Peet’s, 156–157 Summer vacations,
306–307 SUVs, 317 (continued)
List of Applications (continued) Tennis, 171, 236 Tennis racquets, 665 Time on task, 59 Tom Brady, 533 Tomatoes, 274 Top 20 movies, 33 Traffic control, 649 Traffic problems, 143 Vacation plans, 143
Walking shoes, 549 What to wear, 142 WNBA, 143
Life Sciences Achilles tendon injuries, 274–275, 362 Acid rain, 316 Air pollution, 520, 525, 565 Alzheimer’s disease, 637 Archeological find, 47, 65, 74, 409 Baby’s sleeping position, 377 Back pain,
196–197 Bacteria in drinking water, 236 Bacteria in water, 274 Bacteria in water samples, 204–205 Biomass, 306 Birth order and personality, 58 Blood thinner, 259 Blood types, 196 Body temperature and
heart rate, 539 Breathing rates, 72, 235 Bulimia, 398 Calcium, 461, 465–466 Calcium content, 32 Cancer in rats, 259 Cerebral blood flow, 235 Cheese, 539 Chemical experiment, 512 Chemotherapy, 638
Chicago weather, 195 Childhood obesity, 371–372 Cholesterol, 399 Clopidogrel and aspirin, 377 Color preferences in mice, 196 Cotton versus cucumber, 573 Cure for insomnia, 372–373 Cure for the common
cold, 366–367 Deep-sea research, 614 Digitalis and calcium uptake, 476 Diseased chickens, 613 Disinfectants, 408 Dissolved O2 content, 397–398, 409, 461, 638 Drug potency, 424 E. coli outbreak, 205
Early detection of breast cancer, 372 Excedrin or Tylenol, 328 FDA testing, 172 Fruit flies, 136 Geothermal power, 538–539 Glucose tolerance, 466
Good tasting medicine, 660 Ground or air, 416 Hazardous waste, 33 Healthy eating, 367 Healthy teeth, 407, 416 Heart rate and exercise, 655 Hormone therapy and Alzheimer’s disease, 377 HRT, 377 Hungry
rats, 307 Impurities, 431–432 Invasive species, 361–362 Jigsaw puzzles, 649–650 Lead levels in blood, 642–643 Lead levels in drinking water, 367 Legal abortions, 291, 317 Less red meat, 335, 572–573
Lobsters, 398, 538 Long-term care, 613–614 Losing weight, 280 Mandatory health care, 608 Measurement error, 273–274 Medical diagnostics, 162 Mercury concentration in dolphins, 84–85 MMT in gasoline,
368 Monkey business, 144 Normal temperatures, 274 Ore samples, 72 pH in rainfall, 335 pH levels in water, 655 Physical fitness, 499 Plant genetics, 157, 372 Polluted rain, 335 Potassium levels, 274
Potency of an antibiotic, 362 Prescription costs, 280 Pulse rates, 236 Purifying organic compounds, 398 Rain and snow, 124 Recovery rates, 643 Recurring illness, 31 Red blood cell count, 32, 399
Runners and cyclists, 408, 415, 431 San Andreas Fault, 306 Screening tests, 162–163 Seed treatments, 208 Selenium, 322, 335 Slash pine seedlings, 475–476 Sleep deprivation, 512 Smoking and lung
capacity, 398 Sunflowers, 235 Survival times, 50, 73, 85–86 Swampy sites, 460–461, 465, 655 Sweet potato whitefly, 372 Taste test for PTC, 197 Titanium, 408 Toxic chemicals, 660 Treatment versus
control, 376 Vegi-burgers, 564–565 Waiting for a prescription, 609
Weights of turtles, 638 What’s normal?, 49, 86, 317, 323, 362, 368 Whitefly infestation, 196
Social Sciences A female president?, 338–339 Achievement scores, 573–574 Achievement tests, 512–513, 545 Adolescents and social stress, 381 American presidents, 32 Anxious infants, 608–609 Back to
work, 17 Catching a cold, 327 Choosing a mate, 157 Churchgoing and age, 614 Disabled students, 113 Discovery-based teaching, 621 Drug offenders, 156 Drug testing, 156 Election 2008, 16 Eye movement,
638 Faculty salaries, 273 Gender bias, 144, 171, 207 Generation Next, 327–328, 380 Hospital survey, 143 Household size, 102, 614 Images and word recall, 650 Intensive care, 204 Jury duty, 135–136
Laptops and learning, 522, 526 Medical bills, 196 Memory experiments, 417 Midterm scores, 125 Music in the workplace, 417 Native American youth, 259 No pass, no play rule for athletics, 162 Organized
religion, 31 Political corruption, 334–335 Preschool, 31 Race distributions in the Armed Forces, 16–17 Racial bias, 259 Reducing hostility, 460 Rocking the vote, 317 SAT scores, 195–196, 431, 445
Smoking and cancer, 157 Social Security numbers, 72–73 Social skills training, 538, 666 Spending patterns, 609 Starting salaries, 322–323, 367 Student ratings, 665 Teaching biology, 322 Teen
magazines, 212 Test interviews, 513 Union, yes!, 327 Violent crime, 161–162 Want to be president?, 16 Who votes?, 373 YouTube, 566
How Do I Construct a Stem and Leaf Plot? 20 How Do I Construct a Relative Frequency Histogram? How Do I Calculate Sample Quartiles?
How Do I Calculate the Correlation Coefficient? How Do I Calculate the Regression Line? 111
What’s the Difference between Mutually Exclusive and Independent Events? 153 How Do I Use Table 1 to Calculate Binomial Probabilities? 190 How Do I Calculate Poisson Probabilities Using the Formula?
198 How Do I Use Table 2 to Calculate Poisson Probabilities? 199 How Do I Use Table 3 to Calculate Probabilities under the Standard Normal Curve? 228 How Do I Calculate Binomial Probabilities Using
the Normal Approximation? 240
How Do I Calculate Probabilities for the Sample Mean x苶? 268 How Do I Calculate Probabilities for the Sample Proportion pˆ ? 277 How Do I Estimate a Population Mean or Proportion? 303 How Do I
Choose the Sample Size? 331 Rejection Regions, p-Values, and Conclusions How Do I Calculate b? 360 How Do I Decide Which Test to Use?
How Do I Know Whether My Calculations Are Accurate? 459 How Do I Make Sure That My Calculations Are Correct? 508 How Do I Determine the Appropriate Number of Degrees of Freedom? 606, 611
Index of Applet Figures CHAPTER 1 Figure 1.17 Building a Dotplot applet Figure 1.18 Building a Histogram applet Figure 1.19 Flipping Fair Coins applet Figure 1.20 Flipping Fair Coins applet CHAPTER 2
Figure 2.4 How Extreme Values Affect the Mean and Median applet Figure 2.9 Why Divide n 1? Figure 2.19 Building a Box Plot applet CHAPTER 3 Figure 3.6 Building a Scatterplot applet Figure 3.9
Exploring Correlation applet Figure 3.12 How a Line Works applet CHAPTER 4 Figure 4.6 Tossing Dice applet Figure 4.16 Flipping Fair Coins applet Figure 4.17 Flipping Weighted Coins applet
CHAPTER 8 Figure 8.10 Interpreting Confidence Intervals applet CHAPTER 9 Figure 9.7 Large Sample Test of a Population Mean applet Figure 9.9 Power of a z-Test applet CHAPTER 10 Figure 10.3 Student’s t
Probabilities applet Figure 10.5 Comparing t and z applet Figure 10.9 Small Sample Test of a Population Mean applet Figure 10.12 Two-Sample t Test: Independent Samples applet Figure 10.17 Chi-Square
Probabilities applet Figure 10.21 F Probabilities applet CHAPTER 11 Figure 11.6 F Probabilities applet
CHAPTER 5 Figure 5.2 Calculating Binomial Probabilities applet Figure 5.3 Java Applet for Example 5.6
CHAPTER 12 Figure 12.4 Method of Least Squares applet Figure 12.7 t Test for the Slope applet Figure 12.17 Exploring Correlation applet
CHAPTER 6 Figure 6.7 Visualizing Normal Curves applet Figure 6.14 Normal Distribution Probabilities applet Figure 6.17 Normal Probabilities and z-Scores applet Figure 6.21 Normal Approximation to
Binomial Probabilities applet
CHAPTER 14 Figure 14.1 Goodness-of-Fit applet Figure 14.2 Chi-Square Test of Independence applet Figure 14.4 Chi-Square Test of Independence applet
CHAPTER 7 Figure 7.7 Central Limit Theorem applet Figure 7.10 Normal Probabilities for Means applet
a ta
TABLE 4
Critical Values of t page 691
3.078 1.886 1.638 1.533 1.476
6.314 2.920 2.353 2.132 2.015
12.706 4.303 3.182 2.776 2.571
31.821 6.965 4.541 3.747 3.365
63.657 9.925 5.841 4.604 4.032
1.440 1.415 1.397 1.383 1.372
1.943 1.895 1.860 1.833 1.812
2.447 2.365 2.306 2.262 2.228
3.143 2.998 2.896 2.821 2.764
3.707 3.499 3.355 3.250 3.169
1.363 1.356 1.350 1.345 1.341
1.796 1.782 1.771 1.761 1.753
2.201 2.179 2.160 2.145 2.131
2.718 2.681 2.650 2.624 2.602
3.106 3.055 3.012 2.977 2.947
1.337 1.333 1.330 1.328 1.325
1.746 1.740 1.734 1.729 1.725
2.120 2.110 2.101 2.093 2.086
2.583 2.567 2.552 2.539 2.528
2.921 2.898 2.878 2.861 2.845
1.323 1.321 1.319 1.318 1.316
1.721 1.717 1.714 1.711 1.708
2.080 2.074 2.069 2.064 2.060
2.518 2.508 2.500 2.492 2.485
2.831 2.819 2.807 2.797 2.787
1.315 1.314 1.313 1.311 1.282
1.706 1.703 1.701 1.699 1.645
2.056 2.052 2.048 2.045 1.960
2.479 2.473 2.467 2.462 2.326
2.779 2.771 2.763 2.756 2.576
SOURCE: From “Table of Percentage Points of the t-Distribution,” Biometrika 32 (1941):300. Reproduced by permission of the Biometrika Trustees.
Introduction to Probability and Statistics 13th
William Mendenhall University of Florida, Emeritus
Robert J. Beaver University of California, Riverside, Emeritus
Barbara M. Beaver University of California, Riverside
Australia • Brazil • Japan • Korea • Mexico • Singapore • Spain • United Kingdom • United States
Introduction to Probability and Statistics, Thirteenth Edition William Mendenhall, Robert J. Beaver, Barbara M. Beaver Acquisitions Editor: Carolyn Crockett Development Editor: Kristin Marrs
Assistant Editor: Catie Ronquillo Editorial Assistant: Rebecca Dashiell
© 2009, 2006 Brooks/Cole, Cengage Learning ALL RIGHTS RESERVED. No part of this work covered by the copyright herein may be reproduced, transmitted, stored, or used in any form or by any means
graphic, electronic, or mechanical, including but not limited to photocopying, recording, scanning, digitizing, taping, Web distribution, information networks, or information storage and retrieval
systems, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the publisher.
Technology Project Manager: Sam Subity Marketing Manager: Amanda Jellerichs Marketing Assistant: Ashley Pickering Marketing Communications Manager: Talia Wise Project Manager, Editorial Production:
Jennifer Risden Creative Director: Rob Hugel Art Director: Vernon Boes Print Buyer: Linda Hsu Permissions Editor: Mardell Glinski Schultz Production Service: ICC Macmillan Inc. Text Designer: John
Walker Photo Researcher: Rose Alcorn Copy Editor: Richard Camp Cover Designer: Cheryl Carrington Cover Image: R. Creation/Getty Images Compositor: ICC Macmillan Inc.
For product information and technology assistance, contact us at Cengage Learning Customer & Sales Support, 1-800-354-9706 For permission to use material from this text or product, submit all
requests online at cengage.com/permissions. Further permissions questions can be e-mailed to [email protected].
MINITAB is a trademark of Minitab, Inc., and is used herein with the owner’s permission. Portions of MINITAB Statistical Software input and output contained in this book are printed with permission
of Minitab, Inc. The applets in this book are from Seeing Statistics™, an online, interactive statistics textbook. Seeing Statistics is a registered service mark used herein under license. The
applets in this book were designed to be used exclusively with Introduction to Probability and Statistics, Thirteenth Edition, by Mendenhall, Beaver & Beaver, and they may not be copied, duplicated,
or reproduced for any reason. Library of Congress Control Number: 2007931223 ISBN-13: 978-0-495-38953-8 ISBN-10: 0-495-38953-6 Brooks/Cole 10 Davis Drive Belmont, CA 94002-3098 USA Cengage Learning
is a leading provider of customized learning solutions with office locations around the globe, including Singapore, the United Kingdom, Australia, Mexico, Brazil, and Japan. Locate your local office
at international.cengage.com/region. Cengage Learning products are represented in Canada by Nelson Education, Ltd. For your course and learning solutions, visit academic.cengage.com.
Printed in Canada 1 2 3 4 5 6 7 12 11 10 09 08
Purchase any of our products at your local college store or at our preferred online store www.ichapters.com.
Preface Every time you pick up a newspaper or a magazine, watch TV, or surf the Internet, you encounter statistics. Every time you fill out a questionnaire, register at an online website, or pass your
grocery rewards card through an electronic scanner, your personal information becomes part of a database containing your personal statistical information. You cannot avoid the fact that in this
information age, data collection and analysis are an integral part of our day-to-day activities. In order to be an educated consumer and citizen, you need to understand how statistics are used and
misused in our daily lives. To that end we need to “train your brain” for statistical thinking—a theme we emphasize throughout the thirteenth edition by providing you with a “personal trainer.”
THE SECRET TO OUR SUCCESS The first college course in introductory statistics that we ever took used Introduction to Probability and Statistics by William Mendenhall. Since that time, this
text—currently in the thirteenth edition—has helped several generations of students understand what statistics is all about and how it can be used as a tool in their particular area of application.
The secret to the success of Introduction to Probability and Statistics is its ability to blend the old with the new. With each revision we try to build on the strong points of previous editions,
while always looking for new ways to motivate, encourage, and interest students using new technological tools.
HALLMARK FEATURES OF THE THIRTEENTH EDITION The thirteenth edition retains the traditional outline for the coverage of descriptive and inferential statistics. This revision maintains the
straightforward presentation of the twelfth edition. In this spirit, we have continued to simplify and clarify the language and to make the language and style more readable and “user
friendly”—without sacrificing the statistical integrity of the presentation. Great effort has been taken to “train your brain” to explain not only how to apply statistical procedures, but also to
explain • • • •
how to meaningfully describe real sets of data what the results of statistical tests mean in terms of their practical applications how to evaluate the validity of the assumptions behind statistical
tests what to do when statistical assumptions have been violated
iv ❍
Exercises In the tradition of all previous editions, the variety and number of real applications in the exercise sets is a major strength of this edition. We have revised the exercise sets to provide
new and interesting real-world situations and real data sets, many of which are drawn from current periodicals and journals. The thirteenth edition contains over 1300 problems, many of which are new
to this edition. Any exercises from previous editions that have been deleted will be available to the instructor as Classic Exercises on the Instructor’s Companion Website (academic.cengage.com/
statistics/mendenhall). Exercises are graduated in level of difficulty; some, involving only basic techniques, can be solved by almost all students, while others, involving practical applications and
interpretation of results, will challenge students to use more sophisticated statistical reasoning and understanding.
Organization and Coverage Chapters 1–3 present descriptive data analysis for both one and two variables, using state-of-the-art MINITAB graphics. We believe that Chapters 1 through 10—with the
possible exception of Chapter 3—should be covered in the order presented. The remaining chapters can be covered in any order. The analysis of variance chapter precedes the regression chapter, so that
the instructor can present the analysis of variance as part of a regression analysis. Thus, the most effective presentation would order these three chapters as well. Chapter 4 includes a full
presentation of probability and probability distributions. Three optional sections—Counting Rules, the Total Law of Probability, and Bayes’ Rule—are placed into the general flow of text, and
instructors will have the option of complete or partial coverage. The sections that present event relations, independence, conditional probability, and the Multiplication Rule have been rewritten in
an attempt to clarify concepts that often are difficult for students to grasp. As in the twelfth edition, the chapters on analysis of variance and linear regression include both calculational
formulas and computer printouts in the basic text presentation. These chapters can be used with equal ease by instructors who wish to use the “hands-on” computational approach to linear regression
and ANOVA and by those who choose to focus on the interpretation of computer-generated statistical printouts. One important change implemented in this and the last two editions involves the emphasis
on p-values and their use in judging statistical significance. With the advent of computer-generated p-values, these probabilities have become essential components in reporting the results of a
statistical analysis. As such, the observed value of the test statistic and its p-value are presented together at the outset of our discussion of statistical hypothesis testing as equivalent tools
for decision-making. Statistical significance is defined in terms of preassigned values of a, and the p-value approach is presented as an alternative to the critical value approach for testing a
statistical hypothesis. Examples are presented using both the p-value and critical value approaches to hypothesis testing. Discussion of the practical interpretation of statistical results, along
with the difference between statistical significance and practical significance, is emphasized in the practical examples in the text.
Special Feature of the Thirteenth Edition— MyPersonal Trainer A special feature of this edition are the MyPersonal Trainer sections, consisting of definitions and/or step-by-step hints on problem
solving. These sections are followed by Exercise Reps, a set of exercises involving repetitive problems concerning a specific
topic or concept. These Exercise Reps can be compared to sets of exercises specified by a trainer for an athlete in training. The more “reps” the athlete does, the more he acquires strength or agility
in muscle sets or an increase in stamina under stress conditions.
How Do I Calculate Sample Quartiles? 1. Arrange the data set in order of magnitude from smallest to largest. 2. Calculate the quartile positions: •
Position of Q1: .25(n 1)
Position of Q3: .75(n 1)
3. If the positions are integers, then Q1 and Q3 are the values in the ordered data set found in those positions. 4. If the positions in step 2 are not integers, find the two measurements in positions
just above and just below the calculated position. Calculate the quartile by finding a value either one-fourth, one-half, or three-fourths of the way between these two measurements. Exercise Reps A.
Below you will find two practice data sets. Fill in the blanks to find the necessary quartiles. The first data set is done for you. Data Set
Position of Q1
Position of Q3
Lower Quartile, Q1
Upper Quartile, Q3
2, 5, 7, 1, 1, 2, 8
1, 1, 2, 2, 5, 7, 8
5, 0, 1, 3, 1, 5, 5, 2, 4, 4, 1
B. Below you will find three data sets that have already been sorted. The positions of the upper and lower quartiles are shown in the table. Find the measurements just above and just below the
quartile position. Then find the upper and lower quartiles. The first data set is done for you. Sorted Data Set
Position of Q1
Measurements Above and Below
0, 1, 4, 4, 5, 9
0 and 1
Q1 0 .75(1) .75
Position of Q3
Measurements Above and Below
5 and 9
Q3 5 .25(4) 6
0, 1, 3, 3, 4, 7, 7, 8
1, 1, 2, 5, 6, 6, 7, 9, 9
The MyPersonal Trainer sections with Exercise Reps are used frequently in early chapters where it is important to establish basic concepts and statistical thinking, coupled up with straightforward
calculations. The answers to the “Exercise Reps,” when needed, are found on a perforated card in the back of the text. The MyPersonal Trainer sections appear in all but two chapters—Chapters 13 and
15. However, the Exercise Reps problem sets appear only in the first 10 chapters where problems can be solved using pencil and paper, or a calculator. We expect that by the time a student has
completed the first 10 chapters, statistical concepts and approaches will have been mastered. Further, the computer intensive nature of the remaining chapters is not amenable to a series of simple
repetitive and easily calculated exercises, but rather is amenable to a holistic approach—that is, a synthesis of the results of a complete analysis into a set of conclusions and recommendations for
the experimenter.
Other Features of the Thirteenth Edition •
MyApplet: Easy access to the Internet has made it possible for students to visualize statistical concepts using an interactive webtool called an applet. Applets written by Gary McClelland, author of
Seeing Statistics™, have been customized specifically to match the presentation and notation used in this edition. Found on the Premium Website that accompanies the text, they
vi ❍
provide visual reinforcement of the concepts presented in the text. Applets allow the user to perform a statistical experiment, to interact with a statistical graph to change its form, or to access
an interactive “statistical table.” At appropriate points in the text, a screen capture of each applet is displayed and explained, and each student is encouraged to learn interactively by using the
“MyApplet” exercises at the end of each chapter. We are excited to see these applets integrated into statistical pedagogy and hope that you will take advantage of their visual appeal to your
You can compare the accuracy of estimators of the population variance s 2 using the Why Divide by n 1? applet. The applet selects samples from a population with standard deviation s 29.2. It then
calculates the standard deviation s using (n 1) in the denominator as well as a standard deviation calculated using n in the denominator. You can choose to compare the estimators for a single new
sample, for 10 samples, or for 100 samples. Notice that each of the 10 samples shown in Figure 2.9 has a different sample standard deviation. However, when the 10 standard deviations are averaged at
the bottom of the applet, one of the two estimators is closer to the population standard deviation, s 29.2. Which one is it? We will use this applet again for the MyApplet Exercises at the end of the
chapter. FIGURE 2.9
Why Divide by n 1? applet
Exercises 2.86 Refer to Data Set #1 in the How Extreme Val-
ues Affect the Mean and Median applet. This applet loads with a dotplot for the following n 5 observations: 2, 5, 6, 9, 11. a. What are the mean and median for this data set? b. Use your mouse to
change the value x 11 (the moveable green dot) to x 13. What are the mean and median for the new data set? c. Use your mouse to move the green dot to x 33. When the largest value is extremely large
compared to the other observations, which is larger, the mean or the median? d. What effect does an extremely large value have on the mean? What effect does it have on the median? 2.87 Refer to Data
Set #2 in the How Extreme Val-
ues Affect the Mean and Median applet. This applet loads with a dotplot for the following n 5 observations: 2, 5, 10, 11, 12. a. Use your mouse to move the value x 12 to the left until it is smaller
than the value x 11. b. As the value of x gets smaller, what happens to the sample mean? A h
n 3 from a population in which the standard deviation is s 29.2. a. Click . A sample consisting of n 3 observations will appear. Use your calculator to verify the values of the standard deviation
when dividing by n 1 and n as shown in the applet. b. Click again. Calculate the average of the two standard deviations (dividing by n 1) from parts a and b. Repeat the process for the two standard
deviations (dividing by n). Compare your results to those shown in red on the applet. c. You can look at how the two estimators in part a behave “in the long run” by clicking or a number of times,
until the average of all the standard deviations begins to stabilize. Which of the two methods gives a standard deviation closer to s 29.2? d. In the long run, how far off is the standard deviation
when dividing by n? 2.90 Refer to Why Divide by n 1 applet. The
second applet on the page randomly selects sample of n 10 from the same population in which the standard deviation is s 29.2.
MINITAB histogram for Example 2.8
Graphical and numerical data description includes both traditional and EDA methods, using computer graphics generated by MINITAB 15 for Windows.
● 6/25
Relative Frequency
F I G URE 2 . 1 2
0 8.5
20.5 Scores
FIGURE 2.16
MINITAB output for the data in Example 2.13
● Descriptive Statistics: x Variable X
N N* Mean SE Mean 10 0 13.50 1.98
StDev Minimum 6.28 4.00
Q1 Median Q3 Maximum 8.75 12.00 18.50 25.00
The presentation in Chapter 4 has been rewritten to clarify the presentation of simple events and the sample space as well as the presentation of conditional probability, independence, and the
Multiplication Rule. All examples and exercises in the text contain printouts based on MINITAB 15 and consistent with MINITAB 14. MINITAB printouts are provided for some exercises, while other
exercises require the student to obtain solutions without using the computer. y p graphs? c. Use a line chart to describe the predicted number of wired households for the years 2002 to 2008. d. Use a
bar chart to describe the predicted number of wireless households for the years 2002 to 2008. 1.51 Election Results The 2004 election
was a race in which the incumbent, George W. Bush, defeated John Kerry, Ralph Nader, and other candidates, receiving 50.7% of the popular vote. The popular vote (in thousands) for George W. Bush in
each of the 50 states is listed below:8
AL AK AZ AR CA CO CT DE FL GA
HI ID IL IN IA KS KY LA ME MD
MA MI MN MS MO MT NE NV NH NJ
NM NY NC ND OH OK OR PA RI SC
SD TN TX UT VT VA WA WV WI WY
a. By just looking at the table, what shape do you think the data distribution for the popular vote by state will have? b. Draw a relative frequency histogram to describe the distribution of the
popular vote for President Bush in the 50 states. c. Did the histogram in part b confirm your guess in part a? Are there any outliers? How can you explain them?
1.53 Election Results, continued Refer to
Exercises 1.51 and 1.52. The accompanying stem and leaf plots were generated using MINITAB for the variables named “Popular Vote” and “Percent Vote.” Stem-and-Leaf Display: Popular Vote, Percent Vote
Stem-and-leaf of Popular Vote N = 50 Leaf Unit = 100
Stem-and-leaf of Percent Vote N = 50 Leaf Unit = 1.0
3 8 19 (9) 22 13 5 1
0 0 0 0 0 1 1 1 1 1 2 2 2 2 2 HI
33 7 89 39, 45, 55
a. Describe the shapes of the two distributions. Are there any outliers? b. Do the stem and leaf plots resemble the relative frequency histograms constructed in Exercises 1.51 and 1.52? c. Explain
why the distribution of the popular vote for President Bush by state is skewed while the
viii ❍
The Role of the Computer in the Thirteenth Edition—My MINITAB Computers are now a common tool for college students in all disciplines. Most students are accomplished users of word processors,
spreadsheets, and databases, and they have no trouble navigating through software packages in the Windows environment. We believe, however, that advances in computer technology should not turn
statistical analyses into a “black box.” Rather, we choose to use the computational shortcuts and interactive visual tools that modern technology provides to give us more time to emphasize
statistical reasoning as well as the understanding and interpretation of statistical results. In this edition, students will be able to use the computer for both standard statistical analyses and as
a tool for reinforcing and visualizing statistical concepts. MINITAB 15 (consistent with MINITAB 14 ) is used exclusively as the computer package for statistical analysis. Almost all graphs and
figures, as well as all computer printouts, are generated using this version of MINITAB. However, we have chosen to isolate the instructions for generating this output into individual sections called
“My MINITAB ” at the end of each chapter. Each discussion uses numerical examples to guide the student through the MINITAB commands and options necessary for the procedures presented in that chapter.
We have included references to visual screen captures from MINITAB 15, so that the student can actually work through these sections as “mini-labs.”
Numerical Descriptive Measures MINITAB provides most of the basic descriptive statistics presented in Chapter 2 using a single command in the drop-down menus. Once you are on the Windows desktop,
double-click on the MINITAB icon or use the Start button to start MINITAB. Practice entering some data into the Data window, naming the columns appropriately in the gray cell just below the column
number. When you have finished entering your data, you will have created a MINITAB worksheet, which can be saved either singly or as a MINITAB project for future use. Click on File 씮 Save Current
Worksheet or File 씮 Save Project. You will need to name the worksheet (or project)—perhaps “test data”—so that you can retrieve it later. The following data are the floor lengths (in inches) behind
the second and third seats in nine different minivans:12 Second seat: Third seat:
62.0, 62.0, 64.5, 48.5, 57.5, 61.0, 45.5, 47.0, 33.0 27.0, 27.0, 24.0, 16.5, 25.0, 27.5, 14.0, 18.5, 17.0
Since the data involve two variables, we enter the two rows of numbers into columns C1 and C2 in the MINITAB worksheet and name them “2nd Seat” and “3rd Seat,” respectively. Using the drop-down
menus, click on Stat 씮 Basic Statistics 씮 Display Descriptive Statistics. The Dialog box is shown in Figure 2.21. F I G URE 2 . 2 1
provides printing options for multiple box plots. Labels will let you annotate the graph with titles and footnotes. If you have entered data into the worksheet as a frequency distribution (values in
one column, frequencies in another), the Data Options will allow the data to be read in that format. The box plot for the third seat lengths is shown in Figure 2.24. You can use the MINITAB commands
from Chapter 1 to display stem and leaf plots or histograms for the two variables. How would you describe the similarities and differences in the two data sets? Save this worksheet in a file called
“Minivans” before exiting MINITAB. We will use it again in Chapter 3. FIGURE 2.22
FIGURE 2 23
If you do not need “hands-on” knowledge of MINITAB, or if you are using another software package, you may choose to skip these sections and simply use the MINITAB printouts as guides for the basic
understanding of computer printouts. Any student who has Internet access can use the applets found on the Student Premium Website to visualize a variety of statistical concepts (access instructions
for the Student Premium Website are listed on the Printed Access Card that is an optional bundle with this text). In addition, some of the applets can be used instead of computer software to perform
simple statistical analyses. Exercises written specifically for use with these applets appear in a section at the end of each chapter. Students can use the applets at home or in a computer lab. They
can use them as they read through the text material, once they have finished reading the entire chapter, or as a tool for exam review. Instructors can assign applet exercises to the students, use the
applets as a tool in a lab setting, or use them for visual demonstrations during lectures. We believe that these applets will be a powerful tool that will increase student enthusiasm for, and
understanding of, statistical concepts and procedures.
STUDY AIDS The many and varied exercises in the text provide the best learning tool for students embarking on a first course in statistics. An exercise number printed in color indicates that a
detailed solution appears in the Student Solutions Manual, which is available as a supplement for students. Each application exercise now has a title, making it easier for students and instructors to
immediately identify both the context of the problem and the area of application.
y 5.46 Accident Prone, continued Refer to Exer-
APPLICATIONS 5.43 Airport Safety The increased number of small
commuter planes in major airports has heightened concern over air safety. An eastern airport has recorded a monthly average of five near-misses on landings and takeoffs in the past 5 years. a. Find
the probability that during a given month there are no near-misses on landings and takeoffs at the airport.
cise 5.45. a. Calculate the mean and standard deviation for x, the number of injuries per year sustained by a schoolage child. b. Within what limits would you expect the number of injuries per year
to fall? 5.47 Bacteria in Water Samples If a drop of
water is placed on a slide and examined under a microscope, the number x of a particular type of bacteria
Students should be encouraged to use the MyPersonal Trainer sections and the Exercise Reps whenever they appear in the text. Students can “fill in the blanks” by writing directly in the text and can
get immediate feedback by checking the answers on the perforated card in the back of the text. In addition, there are numerous hints called MyTip, which appear in the margins of the text.
Empirical Rule ⇔ mound-shaped data Tchebysheff ⇔ any shaped data
Is Tchebysheff’s Theorem applicable? Yes, because it can be used for any set of data. According to Tchebysheff’s Theorem, • •
at least 3/4 of the measurements will fall between 10.6 and 32.6. at least 8/9 of the measurements will fall between 5.1 and 38.1.
The MyApplet sections appear within the body of the text, explaining the use of a particular Java applet. Finally, sections called Key Concepts and Formulas appear in each chapter as a review in
outline form of the material covered in that chapter. CHAPTER REVIEW Key Concepts and Formulas I.
Measures of the Center of a Data Distribution
1. Arithmetic mean (mean) or average a. Population: m
Sx b. Sample of n measurements: x苶 i n 2. Median; position of the median .5(n 1) 3. Mode 4. The median may be preferred to the mean if the data are highly skewed. II. Measures of Variability
1. Range: R largest smallest 2. Variance a. Population of N measurements: S(xi m)2 s2 N
68%, 95%, and 99.7% of the measurements are within one, two, and three standard deviations of the mean, respectively. IV. Measures of Relative Standing
x 苶x 1. Sample z-score: z s 2. pth percentile; p% of the measurements are smaller, and (100 p)% are larger. 3. Lower quartile, Q1; position of Q1 .25 (n 1) 4. Upper quartile, Q3; position of Q3 .75
(n 1) 5. Interquartile range: IQR Q3 Q1 V. The Five-Number Summary and Box Plots
1. The five-number summary: Min
b. Sample of n measurements: (Sxi)2 Sx 2i n S(xi x苶 )2 s2 n1 n1
Median Q3
One-fourth of the measurements in the data set lie between each of the four adjacent pairs of numbers. 2. Box plots are used for detecting outliers and h f di ib i
The Student Premium Website, a password-protected resource that can be accessed with a Printed Access Card (optional bundle item), provides students with an array of study resources, including the
complete set of Java applets used for the MyApplet sections, PowerPoint® slides for each chapter, and a Graphing Calculator Manual, which includes instructions for performing many of the techniques
in the text using the popular TI-83 graphing calculator. In addition, sets of Practice (or Self-Correcting) Exercises are included for each chapter. These exercise sets are followed by the complete
solutions to each of the exercises. These solutions can be used pedagogically to allow students to pinpoint any errors made at each of the calculational steps leading to final answers. Data sets
(saved in a variety of formats) for many of the text exercises can be found on the book’s website (academic.cengage.com/statistics/mendenhall).
INSTRUCTOR RESOURCES The Instructor’s Companion Website (academic.cengage.com/statistics/mendenhall), available to adopters of the thirteenth edition, provides a variety of teaching aids, including •
• • • •
All the material from the Student Companion Website, including exercises using the Large Data Sets, which is accompanied by three large data sets that can be used throughout the course. A file named
“Fortune” contains the revenues (in millions) for the Fortune 500 largest U.S. industrial corporations in a recent year; a file named “Batting” contains the batting averages for the National and
American baseball league batting champions from 1876 to 2006; and a file named “Blood Pressure” contains the age and diastolic and systolic blood pressures for 965 men and 945 women compiled by the
National Institutes of Health. Classic exercises with data sets and solutions PowerPoints created by Barbara Beaver Applets by Gary McClelland (the complete set of Java applets used for the MyApplet
sections) Graphing Calculator manual, which includes instructions for performing many of the techniques in the text using the TI-83 graphing calculator
Also available for instructors: WebAssign WebAssign, the most widely used homework system in higher education, allows you to assign, collect, grade, and record homework assignments via the web.
Through a partnership between WebAssign and Brooks/Cole Cengage Learning, this proven homework system has been enhanced to include links to textbook sections, video examples, and problem-specific
tutorials. PowerLecture™ PowerLecture with ExamView® for Introduction to Probability and Statistics contains the Instructor’s Solutions Manual, PowerPoint lectures prepared by Barbara Beaver,
ExamView Computerized Testing, Classic Exercises, and TI-83 Manual prepared by James Davis.
ACKNOWLEDGMENTS The authors are grateful to Carolyn Crockett and the editorial staff of Brooks/Cole for their patience, assistance, and cooperation in the preparation of this edition. A special
thanks to Gary McClelland for his careful customization of the Java applets used in the text, and for his patient and even enthusiastic responses to our constant emails! Thanks are also due to
thirteenth edition reviewers Bob Denton, Timothy Husband, Ron LaBorde, Craig McBride, Marc Sylvester, Kanapathi Thiru, and Vitaly Voloshin and twelfth edition reviewers David Laws, Dustin Paisley,
Krishnamurthi Ravishankar, and Maria Rizzo. We wish to thank authors and organizations for allowing us to reprint selected material; acknowledgments are made wherever such material appears in the
text. Robert J. Beaver Barbara M. Beaver William Mendenhall
Brief Contents INTRODUCTION 1 1
LARGE-SAMPLE ESTIMATION 297
LARGE-SAMPLE TESTS OF HYPOTHESES 343
INFERENCE FROM SMALL SAMPLES 386
THE ANALYSIS OF VARIANCE 447
ANALYSIS OF CATEGORICAL DATA 594
NONPARAMETRIC STATISTICS 629 APPENDIX I 679 DATA SOURCES 712 ANSWERS TO SELECTED EXERCISES 722 INDEX 737 CREDITS 744
Contents Introduction: Train Your Brain for Statistics
The Population and the Sample 3 Descriptive and Inferential Statistics 4 Achieving the Objective of Inferential Statistics: The Necessary Steps 4 Training Your Brain for Statistics 5 1
1.1 Variables and Data 8 1.2 Types of Variables 10 1.3 Graphs for Categorical Data 11 Exercises 14
1.4 Graphs for Quantitative Data 17 Pie Charts and Bar Charts 17 Line Charts 19 Dotplots 20 Stem and Leaf Plots 20 Interpreting Graphs with a Critical Eye 22
1.5 Relative Frequency Histograms 24 Exercises 29 Chapter Review 34 CASE STUDY: How Is Your Blood Pressure? 50 2
2.1 Describing a Set of Data with Numerical Measures 53 2.2 Measures of Center 53 Exercises 57
2.3 Measures of Variability 60 Exercises 65
2.4 On the Practical Significance of the Standard Deviation 66
2.5 A Check on the Calculation of s 70 Exercises 71
2.6 Measures of Relative Standing 75 2.7 The Five-Number Summary and the Box Plot 80 Exercises 84 Chapter Review 87 CASE STUDY: The Boys of Summer 96 3
3.1 Bivariate Data 98 3.2 Graphs for Qualitative Variables 98 Exercises 101
3.3 Scatterplots for Two Quantitative Variables 102 3.4 Numerical Measures for Quantitative Bivariate Data 105 Exercises 112 Chapter Review 114 CASE STUDY: Are Your Dishes Really Clean? 126 4
4.1 The Role of Probability in Statistics 128 4.2 Events and the Sample Space 128 4.3 Calculating Probabilities Using Simple Events 131 Exercises 134
4.4 Useful Counting Rules (Optional) 137 Exercises 142
4.5 Event Relations and Probability Rules 144 Calculating Probabilities for Unions and Complements 146
4.6 Independence, Conditional Probability, and the Multiplication Rule 149 Exercises 154
4.7 Bayes’ Rule (Optional) 158 Exercises 161
4.8 Discrete Random Variables and Their Probability Distributions 163 Random Variables 163 Probability Distributions 163 The Mean and Standard Deviation for a Discrete Random Variable 166 Exercises
170 Chapter Review 172 CASE STUDY: Probability and Decision Making in the Congo 181
5.1 Introduction 184 5.2 The Binomial Probability Distribution 184 Exercises 193
5.3 The Poisson Probability Distribution 197 Exercises 202
5.4 The Hypergeometric Probability Distribution 205 Exercises 207 Chapter Review 208 CASE STUDY: A Mystery: Cancers Near a Reactor 218 6
6.1 Probability Distributions for Continuous Random Variables 220 6.2 The Normal Probability Distribution 223 6.3 Tabulated Areas of the Normal Probability Distribution 225 The Standard Normal Random
Variable 225 Calculating Probabilities for a General Normal Random Variable 229 Exercises 233
6.4 The Normal Approximation to the Binomial Probability Distribution (Optional) 237 Exercises 243 Chapter Review 246 CASE STUDY: The Long and Short of It 252 7
7.1 Introduction 255 7.2 Sampling Plans and Experimental Designs 255 Exercises 258
7.3 Statistics and Sampling Distributions 260 7.4 The Central Limit Theorem 263 7.5 The Sampling Distribution of the Sample Mean 266 Standard Error 267 Exercises 272
7.6 The Sampling Distribution of the Sample Proportion 275 Exercises 279
7.7 A Sampling Application: Statistical Process Control (Optional) 281 A Control Chart for the Process Mean: The x苶 Chart 281 A Control Chart for the Proportion Defective: The p Chart 283 Exercises
Chapter Review 287 CASE STUDY: Sampling the Roulette at Monte Carlo 295 8
8.1 Where We’ve Been 298 8.2 Where We’re Going—Statistical Inference 298 8.3 Types of Estimators 299 8.4 Point Estimation 300 Exercises 305
8.5 Interval Estimation 307 Constructing a Confidence Interval 308 Large-Sample Confidence Interval for a Population Mean m 310 Interpreting the Confidence Interval 311 Large-Sample Confidence Interval
for a Population Proportion p 314 Exercises 316
8.6 Estimating the Difference between Two Population Means 318 Exercises 321 8.7 Estimating the Difference between Two Binomial Proportions 324 Exercises 326 8.8 One-Sided Confidence Bounds 328 8.9
Choosing the Sample Size 329 Exercises 333 Chapter Review 336 CASE STUDY: How Reliable Is That Poll? CBS News: How and Where America Eats 341 9
9.1 Testing Hypotheses about Population Parameters 344 9.2 A Statistical Test of Hypothesis 344 9.3 A Large-Sample Test about a Population Mean 347 The Essentials of the Test 348 Calculating the
p-Value 351 Two Types of Errors 356 The Power of a Statistical Test 356 Exercises 360
9.4 A Large-Sample Test of Hypothesis for the Difference between Two Population Means 363 Hypothesis Testing and Confidence Intervals 365 Exercises 366
9.5 A Large-Sample Test of Hypothesis for a Binomial Proportion 368 Statistical Significance and Practical Importance 370 Exercises 371
9.6 A Large-Sample Test of Hypothesis for the Difference between Two Binomial Proportions 373 Exercises 376
9.7 Some Comments on Testing Hypotheses 378 Chapter Review 379 CASE STUDY: An Aspirin a Day . . . ? 384 10
10.1 Introduction 387 10.2 Student’s t Distribution 387 Assumptions behind Student’s t Distribution 391
10.3 Small-Sample Inferences Concerning a Population Mean 391 Exercises 397
10.4 Small-Sample Inferences for the Difference between Two Population Means: Independent Random Samples 399 Exercises 406
10.5 Small-Sample Inferences for the Difference between Two Means: A Paired-Difference Test 410 Exercises 414
10.6 Inferences Concerning a Population Variance 417 Exercises 423
10.7 Comparing Two Population Variances 424 Exercises 430
10.8 Revisiting the Small-Sample Assumptions 432 Chapter Review 433 CASE STUDY: How Would You Like a Four-Day Workweek? 445 11
11.1 The Design of an Experiment 448 11.2 What Is an Analysis of Variance? 449 11.3 The Assumptions for an Analysis of Variance 449 11.4 The Completely Randomized Design: A One-Way Classification 450
11.5 The Analysis of Variance for a Completely Randomized Design 451 Partitioning the Total Variation in an Experiment 451 Testing the Equality of the Treatment Means 454 Estimating Differences in
the Treatment Means 456 Exercises 459
11.6 Ranking Population Means 462 Exercises 465
11.7 The Randomized Block Design: A Two-Way Classification 466 11.8 The Analysis of Variance for a Randomized Block Design 467 Partitioning the Total Variation in the Experiment 467 Testing the
Equality of the Treatment and Block Means 470 Identifying Differences in the Treatment and Block Means 472 Some Cautionary Comments on Blocking 473 Exercises 474
11.9 The a b Factorial Experiment: A Two-Way Classification 478 11.10 The Analysis of Variance for an a b Factorial Experiment 480 Exercises 484
11.11 Revisiting the Analysis of Variance Assumptions 487 Residual Plots 488
11.12 A Brief Summary 490 Chapter Review 491 CASE STUDY: “A Fine Mess” 501 12
12.1 Introduction 503 12.2 A Simple Linear Probabilistic Model 503 12.3 The Method of Least Squares 506 12.4 An Analysis of Variance for Linear Regression 509 Exercises 511
12.5 Testing the Usefulness of the Linear Regression Model 514 Inferences Concerning b, the Slope of the Line of Means 514 The Analysis of Variance F-Test 518 Measuring the Strength of the
Relationship: The Coefficient of Determination 518 Interpreting the Results of a Significant Regression 519 Exercises 520
12.6 Diagnostic Tools for Checking the Regression Assumptions 522 Dependent Error Terms 523 Residual Plots 523 Exercises 524
12.7 Estimation and Prediction Using the Fitted Line 527 Exercises 531
12.8 Correlation Analysis 533 Exercises 537
Chapter Review 540 CASE STUDY: Is Your Car “Made in the U.S.A.”? 550 13
13.1 Introduction 552 13.2 The Multiple Regression Model 552 13.3 A Multiple Regression Analysis 553 The Method of Least Squares 554 The Analysis of Variance for Multiple Regression 555 Testing the
Usefulness of the Regression Model 556 Interpreting the Results of a Significant Regression 557 Checking the Regression Assumptions 558 Using the Regression Model for Estimation and Prediction 559
13.4 A Polynomial Regression Model 559 Exercises 562
13.5 Using Quantitative and Qualitative Predictor Variables in a Regression Model 566 Exercises 572
13.6 Testing Sets of Regression Coefficients 575 13.7 Interpreting Residual Plots 578 13.8 Stepwise Regression Analysis 579 13.9 Misinterpreting a Regression Analysis 580 Causality 580
Multicollinearity 580
13.10 Steps to Follow When Building a Multiple Regression Model 582 Chapter Review 582 CASE STUDY: “Made in the U.S.A.”—Another Look 592 14
14.1 A Description of the Experiment 595 14.2 Pearson’s Chi-Square Statistic 596 14.3 Testing Specified Cell Probabilities: The Goodness-of-Fit Test 597 Exercises 599
14.4 Contingency Tables: A Two-Way Classification 602 The Chi-Square Test of Independence 602 Exercises 608
14.5 Comparing Several Multinomial Populations: A Two-Way Classification with Fixed Row or Column Totals 610 Exercises 613
14.6 The Equivalence of Statistical Tests 614 14.7 Other Applications of the Chi-Square Test 615 Chapter Review 616 CASE STUDY: Can a Marketing Approach Improve Library Services? 628 15
15.1 Introduction 630 15.2 The Wilcoxon Rank Sum Test: Independent Random Samples 630 Normal Approximation for the Wilcoxon Rank Sum Test 634 Exercises 637
15.3 The Sign Test for a Paired Experiment 639 Normal Approximation for the Sign Test 640 Exercises 642
15.4 A Comparison of Statistical Tests 643 15.5 The Wilcoxon Signed-Rank Test for a Paired Experiment 644 Normal Approximation for the Wilcoxon Signed-Rank Test 647 Exercises 648
15.6 The Kruskal–Wallis H-Test for Completely Randomized Designs 650 Exercises 654
15.7 The Friedman Fr-Test for Randomized Block Designs 656 Exercises 659
15.8 Rank Correlation Coefficient 660 Exercises 664
15.9 Summary 666 Chapter Review 667 CASE STUDY: How’s Your Cholesterol Level? 677
Table 1
Cumulative Binomial Probabilities 680
Table 2
Cumulative Poisson Probabilities 686
Table 3
Areas under the Normal Curve 688
Table 4
Critical Values of t 691
Table 5
Critical Values of Chi-Square 692
Table 6
Percentage Points of the F Distribution 694
Table 7
Critical Values of T for the Wilcoxon Rank Sum Test, n1 n2 702
Table 8
Critical Values of T for the Wilcoxon Signed-Rank Test, n 5(1)50 704
Table 9
Critical Values of Spearman’s Rank Correlation Coefficient for a One-Tailed Test 705
Table 10 Random Numbers 706 Table 11 Percentage Points of the Studentized Range, qa(k, df ) 708
This page intentionally left blank
Introduction Train Your Brain for Statistics
What is statistics? Have you ever met a statistician? Do you know what a statistician does? Perhaps you are thinking of the person who sits in the broadcast booth at the Rose Bowl, recording the
number of pass completions, yards rushing, or interceptions thrown on New Year’s Day. Or perhaps the mere mention of the word statistics sends a shiver of fear through you. You may think you know
nothing about statistics; however, it is almost inevitable that you encounter statistics in one form or another every time you pick up a daily newspaper. Here is an example:
© Mark Karrass/CORBIS
Polls See Republicans Keeping Senate Control NEW YORK–Just days from the midterm elections, the final round of MSNBC/McClatchy polls shows a tightening race to the finish in the battle for control of
the U.S. Senate. Democrats are leading in several races that could result in party pickups, but Republicans have narrowed the gap in other close races, according to Mason-Dixon polls in 12 states. In
all, these key Senate races show the following: •
Two Republican incumbents in serious trouble: Santorum and DeWine. Democrats could gain two seats.
Four Republican incumbents essentially tied with their challengers: Allen, Burns, Chafee, and Talent. Four toss-ups that could turn into Democratic gains.
Three Democratic incumbents with leads: Cantwell, Menendez, and Stabenow.
One Republican incumbent ahead of his challenger: Kyl.
One Republican open seat with the Republican leading: Tennessee.
One open Democratic seat virtually tied: Maryland.
The results show that the Democrats have a good chance of gaining at least two seats in the Senate. As of now, they must win four of the toss-up seats, while holding on to Maryland in order to gain
control of the Senate. A total of 625 likely voters in each state were interviewed by telephone. The margin for error, according to standards customarily used by statisticians, is no more than plus
or minus 4 percentage points in each poll. —www.msnbc.com1
Articles similar to this one are commonplace in our newspapers and magazines, and in the period just prior to a presidential election, a new poll is reported almost every day. In fact, in the
national election on November 7th, the Democrats were able to take control of both the House of Representatives and the Senate of the United States. The language of this article is very familiar to
us; however, it leaves the inquisitive reader with some unanswered questions. How were the people in the poll selected? Will these people give the same response tomorrow? Will they give the same
response on election day? Will they even vote? Are these people representative of all those who will vote on election day? It is the job of a statistician to ask these questions and to find answers
for them in the language of the poll. Most Believe “Cover-Up” of JFK Assassination Facts A majority of the public believes the assassination of President John F. Kennedy was part of a larger
conspiracy, not the act of one individual. In addition, most Americans think there was a cover-up of facts about the 1963 shooting. More than 40 years after JFK’s assassination, a FOX News poll shows
most Americans disagree with the government’s conclusions about the killing. The Warren Commission found that Lee Harvey Oswald acted alone when he shot Kennedy, but 66 percent of the public today
think the assassination was “part of a larger conspiracy” while only 25 percent think it was the “act of one individual.” “For older Americans, the Kennedy assassination was a traumatic experience
that began a loss of confidence in government,” commented Opinion Dynamics President John Gorman. “Younger people have grown up with movies and documentaries that have pretty much pushed the
‘conspiracy’ line. Therefore, it isn’t surprising there is a fairly solid national consensus that we still don’t know the truth.” (The poll asked): “Do you think that we know all the facts about the
assassination of President John F. Kennedy or do you think there was a cover-up?”
All Democrats Republicans Independents
We Know All the Facts
There Was a Cover-Up
(Not Sure)
14% 11% 18% 12%
When you see an article like this one in a magazine, do you simply read the title and the first paragraph, or do you read further and try to understand the meaning of the numbers? How did the authors
get these numbers? Did they really interview every American with each political affiliation? It is the job of the statistician to interpret the language of this study. Hot News: 98.6 Not Normal After
believing for more than a century that 98.6 was the normal body temperature for humans, researchers now say normal is not normal anymore. For some people at some hours of the day, 99.9 degrees could
be fine. And readings as low as 96 turn out to be highly human. The 98.6 standard was derived by a German doctor in 1868. Some physicians have always been suspicious of the good doctor’s research. His
claim: 1 million readings—in an epoch without computers.
So Mackowiak & Co. took temperature readings from 148 healthy people over a three-day period and found that the mean temperature was 98.2 degrees. Only 8 percent of the readings were 98.6. —The
What questions come to your mind when you read this article? How did the researcher select the 148 people, and how can we be sure that the results based on these 148 people are accurate when applied
to the general population? How did the researcher arrive at the normal “high” and “low” temperatures given in the article? How did the German doctor record 1 million temperatures in 1868? Again, we
encounter a statistical problem with an application to everyday life. Statistics is a branch of mathematics that has applications in almost every facet of our daily life. It is a new and unfamiliar
language for most people, however, and, like any new language, statistics can seem overwhelming at first glance. We want you to “train your brain” to understand this new language one step at a time.
Once the language of statistics is learned and understood, it provides a powerful tool for data analysis in many different fields of application.
THE POPULATION AND THE SAMPLE In the language of statistics, one of the most basic concepts is sampling. In most statistical problems, a specified number of measurements or data—a sample—is drawn from
a much larger body of measurements, called the population. Sample
For the body-temperature experiment, the sample is the set of body-temperature measurements for the 148 healthy people chosen by the experimenter. We hope that the sample is representative of a much
larger body of measurements—the population— the body temperatures of all healthy people in the world! Which is of primary interest, the sample or the population? In most cases, we are interested
primarily in the population, but the population may be difficult or impossible to enumerate. Imagine trying to record the body temperature of every healthy person on earth or the presidential
preference of every registered voter in the United States! Instead, we try to describe or predict the behavior of the population on the basis of information obtained from a representative sample from
that population. The words sample and population have two meanings for most people. For example, you read in the newspapers that a Gallup poll conducted in the United States was based on a sample of
1823 people. Presumably, each person interviewed is asked a particular question, and that person’s response represents a single measurement in the sample. Is the sample the set of 1823 people, or is
it the 1823 responses that they give? When we use statistical language, we distinguish between the set of objects on which the measurements are taken and the measurements themselves. To
experimenters, the objects on which measurements are taken are called experimental units. The sample survey statistician calls them elements of the sample.
DESCRIPTIVE AND INFERENTIAL STATISTICS When first presented with a set of measurements—whether a sample or a population— you need to find a way to organize and summarize it. The branch of statistics
that presents techniques for describing sets of measurements is called descriptive statistics. You have seen descriptive statistics in many forms: bar charts, pie charts, and line charts presented by
a political candidate; numerical tables in the newspaper; or the average rainfall amounts reported by the local television weather forecaster. Computer-generated graphics and numerical summaries are
commonplace in our everyday communication. Descriptive statistics consists of procedures used to summarize and describe the important characteristics of a set of measurements. Definition
If the set of measurements is the entire population, you need only to draw conclusions based on the descriptive statistics. However, it might be too expensive or too time consuming to enumerate the
entire population. Perhaps enumerating the population would destroy it, as in the case of “time to failure” testing. For these or other reasons, you may have only a sample from the population. By
looking at the sample, you want to answer questions about the population as a whole. The branch of statistics that deals with this problem is called inferential statistics. Inferential statistics
consists of procedures used to make inferences about population characteristics from information contained in a sample drawn from this population. Definition
The objective of inferential statistics is to make inferences (that is, draw conclusions, make predictions, make decisions) about the characteristics of a population from information contained in a
ACHIEVING THE OBJECTIVE OF INFERENTIAL STATISTICS: THE NECESSARY STEPS How can you make inferences about a population using information contained in a sample? The task becomes simpler if you train
yourself to organize the problem into a series of logical steps. 1. Specify the questions to be answered and identify the population of interest. In the presidential election poll, the objective is
to determine who will get the most votes on election day. Hence, the population of interest is the set of all votes in the presidential election. When you select a sample, it is important that the
sample be representative of this population, not the population of voter preferences on July 5 or on some other day prior to the election. 2. Decide how to select the sample. This is called the
design of the experiment or the sampling procedure. Is the sample representative of the population of interest? For example, if a sample of registered voters is selected from the state of Arkansas,
will this sample be representative of all voters in the United States?
Will it be the same as a sample of “likely voters”—those who are likely to actually vote in the election? Is the sample large enough to answer the questions posed in step 1 without wasting time and
money on additional information? A good sampling design will answer the questions posed with minimal cost to the experimenter. 3. Select the sample and analyze the sample information. No matter how
much information the sample contains, you must use an appropriate method of analysis to extract it. Many of these methods, which depend on the sampling procedure in step 2, are explained in the text.
4. Use the information from step 3 to make an inference about the population. Many different procedures can be used to make this inference, and some are better than others. For example, 10 different
methods might be available to estimate human response to an experimental drug, but one procedure might be more accurate than others. You should use the best inference-making procedure available (many
of these are explained in the text). 5. Determine the reliability of the inference. Since you are using only a fraction of the population in drawing the conclusions described in step 4, you might be
wrong! How can this be? If an agency conducts a statistical survey for you and estimates that your company’s product will gain 34% of the market this year, how much confidence can you place in this
estimate? Is this estimate accurate to within 1, 5, or 20 percentage points? Is it reliable enough to be used in setting production goals? Every statistical inference should include a measure of
reliability that tells you how much confidence you have in the inference. Now that you have learned some of the basic terms and concepts in the language of statistics, we again pose the question asked
at the beginning of this discussion: Do you know what a statistician does? It is the job of the statistician to implement all of the preceding steps. This may involve questioning the experimenter to
make sure that the population of interest is clearly defined, developing an appropriate sampling plan or experimental design to provide maximum information at minimum cost, correctly analyzing and
drawing conclusions using the sample information, and finally, measuring the reliability of the conclusions based on the experimental results.
TRAINING YOUR BRAIN FOR STATISTICS As you proceed through the book, you will learn more and more words, phrases, and concepts from this new language of statistics. Statistical procedures, for the
most part, consist of commonsense steps that, given enough time, you would most likely have discovered for yourself. Since statistics is an applied branch of mathematics, many of these basic concepts
are mathematical—developed and based on results from calculus or higher mathematics. However, you do not have to be able to derive results in order to apply them in a logical way. In this text, we
use numerical examples and intuitive arguments to explain statistical concepts, rather than more complicated mathematical arguments. To help you in your statistical training, we have included a
section called “MyPersonal Trainer” at appropriate points in the text. This is your “personal trainer,” which will take you step-by-step through some of the procedures that tend to be confusing to
students. Once you read the step-by-step explanation, try doing the “Exercise Reps,”
which usually appear in table form. Write the answers—right in your book—and then check your answers against the answers on the perforated card at the back of the book. If you’re still having
trouble, you will find more “Exercise Reps” in the exercise set for that section. You should also watch for quick study tips—named “My Tip”—found in the margin of the text as you read through the
chapter. In recent years, computers have become readily available to many students and provide them with an invaluable tool. In the study of statistics, even the beginning student can use packaged
programs to perform statistical analyses with a high degree of speed and accuracy. Some of the more common statistical packages available at computer facilities are MINITABTM, SAS (Statistical
Analysis System), and SPSS (Statistical Package for the Social Sciences); personal computers will support packages such as MINITAB, MS Excel, and others. There are even online statistical programs
and interactive “applets” on the Internet. These programs, called statistical software, differ in the types of analyses available, the options within the programs, and the forms of printed results
(called output). However, they are all similar. In this book, we primarily use MINITAB as a statistical tool; understanding the basic output of this package will help you interpret the output from
other software systems. At the end of most chapters, you will find a section called “My MINITAB.” These sections present numerical examples to guide you through the MINITAB commands and options that
are used for the procedures in that chapter. If you are using MINITAB in a lab or home setting, you may want to work through this section at your own computer so that you become familiar with the
hands-on methods in MINITAB analysis. If you do not need hands-on knowledge of MINITAB, you may choose to skip this section and simply use the MINITAB printouts for analysis as they appear in the
text. You will also find a section called “MyApplet” in many of the chapters. These sections provide a useful introduction to the statistical applets available on the Premium Website. You can use
these applets to visualize many of the chapter concepts and to find solutions to exercises in a new section called “MyApplet Exercises.” Most important, using statistics successfully requires common
sense and logical thinking. For example, if we want to find the average height of all students at a particular university, would we select our entire sample from the members of the basketball team? In
the body-temperature example, the logical thinker would question an 1868 average based on 1 million measurements—when computers had not yet been invented. As you learn new statistical terms,
concepts, and techniques, remember to view every problem with a critical eye and be sure that the rule of common sense applies. Throughout the text, we will remind you of the pitfalls and dangers in
the use or misuse of statistics. Benjamin Disraeli once said that there are three kinds of lies: lies, damn lies, and statistics! Our purpose is to dispel this claim—to show you how to make
statistics work for you and not lie for you! As you continue through the book, refer back to this “training manual” periodically. Each chapter will increase your knowledge of the language of
statistics and should, in some way, help you achieve one of the steps described here. Each of these steps is essential in attaining the overall objective of inferential statistics: to make inferences
about a population using information contained in a sample drawn from that population.
Describing Data with Graphs
GENERAL OBJECTIVES Many sets of measurements are samples selected from larger populations. Other sets constitute the entire population, as in a national census. In this chapter, you will learn what a
variable is, how to classify variables into several types, and how measurements or data are generated. You will then learn how to use graphs to describe data sets.
CHAPTER INDEX ● Data distributions and their shapes (1.1, 1.4) ● Dotplots (1.4) ● Pie charts, bar charts, line charts (1.3, 1.4) ● Qualitative and quantitative variables—discrete and continuous (1.2)
© Jupiterimages/Brand X/CORBIS
How Is Your Blood Pressure? Is your blood pressure normal, or is it too high or too low? The case study at the end of this chapter examines a large set of blood pressure data. You will use graphs to
describe these data and compare your blood pressure with that of others of your same age and gender.
● Relative frequency histograms (1.5) ● Stem and leaf plots (1.4) ● Univariate and bivariate data (1.1) ● Variables, experimental units, samples and populations, data (1.1)
How Do I Construct a Stem and Leaf Plot? How Do I Construct a Relative Frequency Histogram?
VARIABLES AND DATA In Chapters 1 and 2, we will present some basic techniques in descriptive statistics— the branch of statistics concerned with describing sets of measurements, both samples and
populations. Once you have collected a set of measurements, how can you display this set in a clear, understandable, and readable form? First, you must be able to define what is meant by measurements
or “data” and to categorize the types of data that you are likely to encounter in real life. We begin by introducing some definitions—new terms in the statistical language that you need to know. A
variable is a characteristic that changes or varies over time and/or for different individuals or objects under consideration. Definition
For example, body temperature is a variable that changes over time within a single individual; it also varies from person to person. Religious affiliation, ethnic origin, income, height, age, and
number of offspring are all variables—characteristics that vary depending on the individual chosen. In the Introduction, we defined an experimental unit or an element of the sample as the object on
which a measurement is taken. Equivalently, we could define an experimental unit as the object on which a variable is measured. When a variable is actually measured on a set of experimental units, a
set of measurements or data result. Definition An experimental unit is the individual or object on which a variable is measured. A single measurement or data value results when a variable is actually
measured on an experimental unit.
If a measurement is generated for every experimental unit in the entire collection, the resulting data set constitutes the population of interest. Any smaller subset of measurements is a sample.
A population is the set of all measurements of interest to the investi-
A sample is a subset of measurements selected from the population of
A set of five students is selected from all undergraduates at a large university, and measurements are entered into a spreadsheet as shown in Figure 1.1. Identify the various elements involved in
generating this set of measurements. Solution There are several variables in this example. The experimental unit on which the variables are measured is a particular undergraduate student on the
campus, identified in column C1. Five variables are measured for each student: grade point average (GPA), gender, year in college, major, and current number of units enrolled. Each of these
characteristics varies from student to student. If we consider the GPAs of all students at this university to be the population of interest, the five GPAs in column C2 represent a sample from this
population. If the GPA of each undergraduate student at the university had been measured, we would have generated the entire population of measurements for this variable.
1.1 VARIABLES AND DATA
F I GU R E 1 .1
Measurements on five undergraduate students
The second variable measured on the students is gender, in column C3-T. This variable can take only one of two values—male (M) or female (F). It is not a numerically valued variable and hence is
somewhat different from GPA. The population, if it could be enumerated, would consist of a set of Ms and Fs, one for each student at the university. Similarly, the third and fourth variables, year
and major, generate nonnumerical data. Year has four categories (Fr, So, Jr, Sr), and major has one category for each undergraduate major on campus. The last variable, current number of units
enrolled, is numerically valued, generating a set of numbers rather than a set of qualities or characteristics. Although we have discussed each variable individually, remember that we have measured
each of these five variables on a single experimental unit: the student. Therefore, in this example, a “measurement” really consists of five observations, one for each of the five measured variables.
For example, the measurement taken on student 2 produces this observation: (2.3, F, So, Mathematics, 15) You can see that there is a difference between a single variable measured on a single
experimental unit and multiple variables measured on a single experimental unit as in Example 1.1. Univariate data result when a single variable is measured on a single experimental unit. Definition
Bivariate data result when two variables are measured on a single experimental unit. Multivariate data result when more than two variables are measured. Definition
If you measure the body temperatures of 148 people, the resulting data are univariate. In Example 1.1, five variables were measured on each student, resulting in multivariate data.
TYPES OF VARIABLES Variables can be classified into one of two categories: qualitative or quantitative. Definition Qualitative variables measure a quality or characteristic on each experimental unit.
Quantitative variables measure a numerical quantity or amount on each experimental unit.
Qualitative ⇔ “quality” or characteristic Quantitative ⇔ “quantity” or number
Qualitative variables produce data that can be categorized according to similarities or differences in kind; hence, they are often called categorical data. The variables gender, year, and major in
Example 1.1 are qualitative variables that produce categorical data. Here are some other examples: • • •
Political affiliation: Republican, Democrat, Independent Taste ranking: excellent, good, fair, poor Color of an M&M’S® candy: brown, yellow, red, orange, green, blue
Quantitative variables, often represented by the letter x, produce numerical data, such as those listed here: • • • •
x Prime interest rate x Number of passengers on a flight from Los Angeles to New York City x Weight of a package ready to be shipped x Volume of orange juice in a glass
Notice that there is a difference in the types of numerical values that these quantitative variables can assume. The number of passengers, for example, can take on only the values x 0, 1, 2, . . . ,
whereas the weight of a package can take on any value greater than zero, or 0 x . To describe this difference, we define two types of quantitative variables: discrete and continuous. Definition A
discrete variable can assume only a finite or countable number of values. A continuous variable can assume the infinitely many values corresponding to the points on a line interval.
Discrete ⇔ “listable” Continuous ⇔ “unlistable”
The name discrete relates to the discrete gaps between the possible values that the variable can assume. Variables such as number of family members, number of new car sales, and number of defective
tires returned for replacement are all examples of discrete variables. On the other hand, variables such as height, weight, time, distance, and volume are continuous because they can assume values at
any point along a line interval. For any two values you pick, a third value can always be found between them! Identify each of the following variables as qualitative or quantitative: 1. The most
frequent use of your microwave oven (reheating, defrosting, warming, other) 2. The number of consumers who refuse to answer a telephone survey 3. The door chosen by a mouse in a maze experiment (A,
B, or C) 4. The winning time for a horse running in the Kentucky Derby 5. The number of children in a fifth-grade class who are reading at or above grade level
1.3 GRAPHS FOR CATEGORICAL DATA
Solution Variables 1 and 3 are both qualitative because only a quality or characteristic is measured for each individual. The categories for these two variables are shown in parentheses. The other
three variables are quantitative. Variable 2, the number of consumers, is a discrete variable that can take on any of the values x 0, 1, 2, . . . , with a maximum value depending on the number of
consumers called. Similarly, variable 5, the number of children reading at or above grade level, can take on any of the values x 0, 1, 2, . . . , with a maximum value depending on the number of
children in the class. Variable 4, the winning time for a Kentucky Derby horse, is the only continuous variable in the list. The winning time, if it could be measured with sufficient accuracy, could
be 121 seconds, 121.5 seconds, 121.25 seconds, or any values between any two times we have listed.
Discrete variables often involve the “number of” items in a set.
Figure 1.2 depicts the types of data we have defined. Why should you be concerned about different kinds of variables and the data that they generate? The reason is that the methods used to describe
data sets depend on the type of data you have collected. For each set of data that you collect, the key will be to determine what type of data you have and how you can present them most clearly and
understandably to your audience!
FIGU R E 1 .2
Types of data
GRAPHS FOR CATEGORICAL DATA After the data have been collected, they can be consolidated and summarized to show the following information: • •
What values of the variable have been measured How often each value has occurred
For this purpose, you can construct a statistical table that can be used to display the data graphically as a data distribution. The type of graph you choose depends on the type of variable you have
measured. When the variable of interest is qualitative, the statistical table is a list of the categories being considered along with a measure of how often each value occurred. You can measure “how
often” in three different ways: • • •
The frequency, or number of measurements in each category The relative frequency, or proportion of measurements in each category The percentage of measurements in each category
For example, if you let n be the total number of measurements in the set, you can find the relative frequency and percentage using these relationships: Frequency Relative frequency n Percent 100
Relative frequency You will find that the sum of the frequencies is always n, the sum of the relative frequencies is 1, and the sum of the percentages is 100%. The categories for a qualitative
variable should be chosen so that • •
For example, if you categorize meat products according to the type of meat used, you might use these categories: beef, chicken, seafood, pork, turkey, other. To categorize ranks of college faculty,
you might use these categories: professor, associate professor, assistant professor, instructor, lecturer, other. The “other” category is included in both cases to allow for the possibility that a
measurement cannot be assigned to one of the earlier categories. Once the measurements have been categorized and summarized in a statistical table, you can use either a pie chart or a bar chart to
display the distribution of the data. A pie chart is the familiar circular graph that shows how the measurements are distributed among the categories. A bar chart shows the same distribution of
measurements in categories, with the height of the bar measuring how often a particular category was observed.
Three steps to a data distribution: (1) raw data ⇒ (2) statistical table ⇒ (3) graph
a measurement will belong to one and only one category each measurement has a category to which it can be assigned
In a survey concerning public education, 400 school administrators were asked to rate the quality of education in the United States. Their responses are summarized in Table 1.1. Construct a pie chart
and a bar chart for this set of data. To construct a pie chart, assign one sector of a circle to each category. The angle of each sector should be proportional to the proportion of measurements (or
relative frequency) in that category. Since a circle contains 360°, you can use this equation to find the angle:
Angle Relative frequency 360° TABLE 1.1
U.S. Education Rating by 400 Educators Rating
Proportions add to 1. Percents add to 100. Sector angles add to 360°.
A B C D
Table 1.2 shows the ratings along with the frequencies, relative frequencies, percentages, and sector angles necessary to construct the pie chart. Figure 1.3 shows the pie chart constructed from the
values in the table. While pie charts use percentages to determine the relative sizes of the “pie slices,” bar charts usually plot frequency against the categories. A bar chart for these data is
shown in Figure 1.4.
1.3 GRAPHS FOR CATEGORICAL DATA
TABLE 1.2
Calculations for the Pie Chart in Example 1.3 Rating
Relative Frequency
A B C D
35/400 .09 260/400 .65 93/400 .23 12/400 .03
Percent 9% 65% 23% 3% 100%
Angle .09 360 32.4º 234.0º 82.8º 10.8º 360º
The visual impact of these two graphs is somewhat different. The pie chart is used to display the relationship of the parts to the whole; the bar chart is used to emphasize the actual quantity or
frequency for each category. Since the categories in this example are ordered “grades” (A, B, C, D), we would not want to rearrange the bars in the chart to change its shape. In a pie chart, the
order of presentation is irrelevant. FIGU R E 1 .3
Pie chart for Example 1.3
● D 3.0%
A 8.8%
C 23.3%
B 65.0%
FIGU R E 1 .4
Bar chart for Example 1.3
● 250
0 A
A snack size bag of peanut M&M’S candies contains 21 candies with the colors listed in Table 1.3. The variable “color” is qualitative, so Table 1.4 lists the six categories along with a tally of the
number of candies of each color. The last three columns of Table 1.4 give the three different measures of how often each category occurred. Since the categories are colors and have no particular
order, you could construct bar charts with many different shapes just by reordering the bars. To emphasize that brown is the most frequent color, followed by blue, green, and orange, we order the
bars from largest to smallest and generate the bar chart using MINITAB in Figure 1.5. A bar chart in which the bars are ordered from largest to smallest is called a Pareto chart.
TABLE 1.3
Raw Data: Colors of 21 Candies Brown Red Yellow Brown Orange Yellow
TABLE 1.4
Green Red Orange Blue Blue
Brown Green Green Blue Brown
Statistical Table: M&M’S Data for Example 1.4 Category
Relative Frequency
Brown Green Orange Yellow Red Blue
6/21 3/21 3/21 2/21 2/21 5/21
FI GU R E 1 .5
MINITAB bar chart for Example 1.4
Blue Brown Blue Brown Orange
Percent 28% 14 14 10 10 24
● 6
5 4 3 2 1 0 Brown
1.2 Qualitative or Quantitative? Identify each
1.1 Experimental Units Identify the experimental
variable as quantitative or qualitative: a. Amount of time it takes to assemble a simple puzzle b. Number of students in a first-grade classroom c. Rating of a newly elected politician (excellent,
good, fair, poor) d. State in which a person lives
units on which the following variables are measured: a. b. c. d. e.
Gender of a student Number of errors on a midterm exam Age of a cancer patient Number of flowers on an azalea plant Color of a car entering a parking lot
1.3 GRAPHS FOR CATEGORICAL DATA
1.3 Discrete or Continuous? Identify the following quantitative variables as discrete or continuous: a. Population in a particular area of the United States b. Weight of newspapers recovered for
recycling on a single day c. Time to complete a sociology exam d. Number of consumers in a poll of 1000 who consider nutritional labeling on food products to be important 1.4 Discrete or Continuous?
Identify each quantitative variable as discrete or continuous. a. Number of boating accidents along a 50-mile stretch of the Colorado River b. Time required to complete a questionnaire c. Cost of a
head of lettuce d. Number of brothers and sisters you have e. Yield in kilograms of wheat from a 1-hectare plot in a wheat field 1.5 Parking on Campus Six vehicles are selected
from the vehicles that are issued campus parking permits, and the following data are recorded:
Car Car Truck Van Motorcycle Car
Honda Toyota Toyota Dodge HarleyDavidson Chevrolet
One-way Commute Distance (miles)
Age of Vehicle (years)
No No No Yes No
23.6 17.2 10.1 31.7 25.5
a. What are the experimental units? b. What are the variables being measured? What types of variables are they? c. Is this univariate, bivariate, or multivariate data? 1.6 Past U.S. Presidents A data
set consists of the
ages at death for each of the 38 past presidents of the United States now deceased. a. Is this set of measurements a population or a sample? b. What is the variable being measured? c. Is the variable
in part b quantitative or qualitative? 1.7 Voter Attitudes You are a candidate for your
state legislature, and you want to survey voter attitudes regarding your chances of winning. Identify the population that is of interest to you and from which you would like to select your sample.
How is this population dependent on time?
1.8 Cancer Survival Times A medical researcher
wants to estimate the survival time of a patient after the onset of a particular type of cancer and after a particular regimen of radiotherapy. a. What is the variable of interest to the medical
researcher? b. Is the variable in part a qualitative, quantitative discrete, or quantitative continuous? c. Identify the population of interest to the medical researcher. d. Describe how the
researcher could select a sample from the population. e. What problems might arise in sampling from this population? 1.9 New Teaching Methods An educational
researcher wants to evaluate the effectiveness of a new method for teaching reading to deaf students. Achievement at the end of a period of teaching is measured by a student’s score on a reading
test. a. What is the variable to be measured? What type of variable is it? b. What is the experimental unit? c. Identify the population of interest to the experimenter. BASIC TECHNIQUES 1.10 Fifty
people are grouped into four categories—
A, B, C, and D—and the number of people who fall into each category is shown in the table: Category
A B C D
a. What is the experimental unit? b. What is the variable being measured? Is it qualitative or quantitative? c. Construct a pie chart to describe the data. d. Construct a bar chart to describe the
data. e. Does the shape of the bar chart in part d change depending on the order of presentation of the four categories? Is the order of presentation important? f. What proportion of the people are
in category B, C, or D? g. What percentage of the people are not in category B?
16 ❍
1.11 Jeans A manufacturer of jeans has plants in
California, Arizona, and Texas. A group of 25 pairs of jeans is randomly selected from the computerized database, and the state in which each is produced is recorded: CA CA AZ CA CA
AZ CA AZ AZ AZ
AZ TX CA TX AZ
TX TX AZ TX CA
CA TX TX TX CA
a. What is the experimental unit? b. What is the variable being measured? Is it qualitative or quantitative? c. Construct a pie chart to describe the data. d. Construct a bar chart to describe the
data. e. What proportion of the jeans are made in Texas? f. What state produced the most jeans in the group? g. If you want to find out whether the three plants produced equal numbers of jeans, or
whether one produced more jeans than the others, how can you use the charts from parts c and d to help you? What conclusions can you draw from these data? APPLICATIONS 1.12 Election 2008 During the
spring of 2006 the
news media were already conducting opinion polls that tracked the fortunes of the major candidates hoping to become the president of the United States. One such poll conducted by Financial Dynamics
showed the following results:1 “Thinking ahead to the next presidential election, if the 2008 election were held today and the candidates were Democrat [see below] and Republican [see below], for
whom would you vote?”
The results were based on a sample taken May 16–18, 2006, of 900 registered voters nationwide. a. If the pollsters were planning to use these results to predict the outcome of the 2008 presidential
election, describe the population of interest to them. b. Describe the actual population from which the sample was drawn. c. Some pollsters prefer to select a sample of “likely” voters. What is the
difference between “registered voters” and “likely voters”? Why is this important? d. Is the sample selected by the pollsters representative of the population described in part a? Explain. 1.13 Want
to Be President? Would you want to be the president of the United States? Although many teenagers think that they could grow up to be the president, most don’t want the job. In an opinion poll
conducted by ABC News, nearly 80% of the teens were not interested in the job.2 When asked “What’s the main reason you would not want to be president?” they gave these responses: Other career plans/
no interest Too much pressure Too much work Wouldn’t be good at it Too much arguing
a. Are all of the reasons accounted for in this table? Add another category if necessary. b. Would you use a pie chart or a bar chart to graphically describe the data? Why? c. Draw the chart you
chose in part b. d. If you were the person conducting the opinion poll, what other types of questions might you want to investigate?
John McCain (R) % 46
Hillary Clinton (D) % 42
Unsure % 13
John McCain % 51
Al Gore % 33
Unsure % 15
Rudy Giuliani % 49
Hillary Clinton % 40
Unsure % 12
Rudy Giuliani % 50
White Black Hispanic Other
Al Gore % 37
Unsure % 13
Source: Time magazine
Source: www.pollingreport.com
40% 20% 15% 14% 5%
1.14 Race Distributions in the Armed Forces
The four branches of the armed forces in the United States are quite different in their makeup with regard to gender, race, and age distributions. The table below shows the racial breakdown of the
members of the United States Army and the United States Air Force.3 Army
Air Force
58.4% 26.3% 8.9% 6.4%
75.5% 16.2% 5.0% 3.3%
a. Define the variable that has been measured in this table.
1.4 GRAPHS FOR QUANTITATIVE DATA
b. Is the variable quantitative or qualitative? c. What do the numbers represent? d. Construct a pie chart to describe the racial breakdown in the U.S. Army. e. Construct a bar chart to describe the
racial breakdown in the U.S. Air Force. f. What percentage of the members of the U.S. Army are minorities—that is, not white? What is this percentage in the U.S. Air Force?
from vacation? A bar graph with data from the Snapshots section of USA Today is shown below:4 a. Are all of the opinions accounted for in the table? Add another category if necessary. b. Is the bar
chart drawn accurately? That is, are the three bars in the correct proportion to each other? c. Use a pie chart to describe the opinions. Which graph is more interesting to look at?
1.15 Back to Work How long does it take you to adjust to your normal work routine after coming back
Adjustment from Vacation One day A few days No time 0%
10% 20% 30% 40%
GRAPHS FOR QUANTITATIVE DATA Quantitative variables measure an amount or quantity on each experimental unit. If the variable can take only a finite or countable number of values, it is a discrete
variable. A variable that can assume an infinite number of values corresponding to points on a line interval is called continuous.
Pie Charts and Bar Charts Sometimes information is collected for a quantitative variable measured on different segments of the population, or for different categories of classification. For example,
you might measure the average incomes for people of different age groups, different genders, or living in different geographic areas of the country. In such cases, you can use pie charts or bar
charts to describe the data, using the amount measured in each category rather than the frequency of occurrence of each category. The pie chart displays how the total quantity is distributed among
the categories, and the bar chart uses the height of the bar to display the amount in a particular category.
TABLE 1.5
The amount of money expended in fiscal year 2005 by the U.S. Department of Defense in various categories is shown in Table 1.5.5 Construct both a pie chart and a bar chart to describe the data.
Compare the two forms of presentation. ●
Expenses by Category Category
Amount (in billions)
Military personnel Operation and maintenance Procurement Research and development Military construction Other
$127.5 188.1 82.3 65.7 5.3 5.5
Source: The World Almanac and Book of Facts 2007
Solution Two variables are being measured: the category of expenditure (qualitative) and the amount of the expenditure (quantitative). The bar chart in Figure 1.6 displays the categories on the
horizontal axis and the amounts on the vertical axis. For FI GU R E 1 .6
● 200 Amount ($ Billions)
Bar chart for Example 1.5
t pm
de M
ili M
t en em ur oc Pr
the pie chart in Figure 1.7, each “pie slice” represents the proportion of the total expenditures ($474.4 billion) corresponding to its particular category. For example, for the research and
development category, the angle of the sector is 65.7 360° 49.9° 474.4 FI GU R E 1 .7
Pie chart for Example 1.5
Military construction 5.3 Research and development 65.7
Military personnel 127.5
Procurement 82.3 Other 5.5
Operation and maintenance 188.1
1.4 GRAPHS FOR QUANTITATIVE DATA
Both graphs show that the largest amounts of money were spent on personnel and operations. Since there is no inherent order to the categories, you are free to rearrange the bars or sectors of the
graphs in any way you like. The shape of the bar chart has no bearing on its interpretation.
Line Charts When a quantitative variable is recorded over time at equally spaced intervals (such as daily, weekly, monthly, quarterly, or yearly), the data set forms a time series. Time series data
are most effectively presented on a line chart with time as the horizontal axis. The idea is to try to discern a pattern or trend that will likely continue into the future, and then to use that
pattern to make accurate predictions for the immediate future. EXAMPLE
TABLE 1.6
In the year 2025, the oldest “baby boomers” (born in 1946) will be 79 years old, and the oldest “Gen-Xers” (born in 1965) will be two years from Social Security eligibility. How will this affect the
consumer trends in the next 15 years? Will there be sufficient funds for “baby boomers” to collect Social Security benefits? The United States Bureau of the Census gives projections for the portion of
the U.S. population that will be 85 and over in the coming years, as shown below.5 Construct a line chart to illustrate the data. What is the effect of stretching and shrinking the vertical axis on
the line chart? ●
Population Growth Projections Year
85 and over (millions)
The quantitative variable “85 and over” is measured over five time intervals, creating a time series that you can graph with a line chart. The time intervals are marked on the horizontal axis and the
projections on the vertical axis. The data points are then connected by line segments to form the line charts in Figure 1.8. Notice the marked difference in the vertical scales of the two graphs.
Shrinking the scale on the vertical axis causes large changes to appear small, and vice versa. To avoid misleading conclusions, you must look carefully at the scales of the vertical and horizontal
axes. However, from both graphs you get a clear picture of the steadily increasing number of those 85 and older in the early years of the new millennium.
Solution Beware of stretching or shrinking axes when you look at a graph!
● 100
22.5 20.0
85 and Older (Millions)
Line charts for Example 1.6
85 and Older (Millions)
FIGU R E 1 .8
17.5 15.0 12.5 10.0 7.5 5.0
2030 Year
2030 Year
Dotplots Many sets of quantitative data consist of numbers that cannot easily be separated into categories or intervals of time. You need a different way to graph this type of data! The simplest
graph for quantitative data is the dotplot. For a small set of measurements—for example, the set 2, 6, 9, 3, 7, 6—you can simply plot the measurements as points on a horizontal axis. This dotplot,
generated by MINITAB, is shown in Figure 1.9(a). For a large data set, however, such as the one in Figure 1.9(b), the dotplot can be uninformative and tedious to interpret. (a)
FI GU R E 1 .9
Dotplots for small and large data sets
6 Small Set
1.19 1.26 Large Set
Stem and Leaf Plots Another simple way to display the distribution of a quantitative data set is the stem and leaf plot. This plot presents a graphical display of the data using the actual numerical
values of each data point.
How Do I Construct a Stem and Leaf Plot? 1. Divide each measurement into two parts: the stem and the leaf. 2. List the stems in a column, with a vertical line to their right. 3. For each measurement,
record the leaf portion in the same row as its corresponding stem. 4. Order the leaves from lowest to highest in each stem. 5. Provide a key to your stem and leaf coding so that the reader can
re-create the actual measurements if necessary.
Table 1.7 lists the prices (in dollars) of 19 different brands of walking shoes. Construct a stem and leaf plot to display the distribution of the data.
1.4 GRAPHS FOR QUANTITATIVE DATA
TABLE 1.7
Prices of Walking Shoes 90 65 75 70
Solution To create the stem and leaf, you could divide each observation between the ones and the tens place. The number to the left is the stem; the number to the right is the leaf. Thus, for the
shoes that cost $65, the stem is 6 and the leaf is 5. The stems, ranging from 4 to 9, are listed in Figure 1.10, along with the leaves for each of the 19 measurements. If you indicate that the leaf
unit is 1, the reader will realize that the stem and leaf 6 and 8, for example, represent the number 68, recorded to the nearest dollar.
FIGU R E 1 .1 0
Stem and leaf plot for the data in Table 1.7
• •
TABLE 1.8
4 5 Reordering ⎯→ 6 7 8 9
Sometimes the available stem choices result in a plot that contains too few stems and a large number of leaves within each stem. In this situation, you can stretch the stems by dividing each one into
several lines, depending on the leaf values assigned to them. Stems are usually divided in one of two ways:
stem | leaf
Leaf unit 1
Into two lines, with leaves 0–4 in the first line and leaves 5–9 in the second line Into five lines, with leaves 0–1, 2–3, 4–5, 6–7, and 8–9 in the five lines, respectively
The data in Table 1.8 are the weights at birth of 30 full-term babies, born at a metropolitan hospital and recorded to the nearest tenth of a pound.6 Construct a stem and leaf plot to display the
distribution of the data. ●
Birth Weights of 30 Full-Term Newborn Babies 7.2 8.0 8.2 5.8 6.1 8.5
7.8 8.2 7.7 6.8 7.9 9.0
6.8 5.6 7.5 6.8 9.4 7.7
6.2 8.6 7.2 8.5 9.0 6.7
8.2 7.1 7.7 7.5 7.8 7.7
The data, though recorded to an accuracy of only one decimal place, are measurements of the continuous variable x weight, which can take on any positive value. By examining Table 1.8, you can quickly
see that the highest and lowest weights are 9.4 and 5.6, respectively. But how are the remaining weights distributed? Solution
If you use the decimal point as the dividing line between the stem and the leaf, you have only five stems, which does not produce a very good picture. When you divide each stem into two lines, there
are eight stems, since the first line of stem 5 and the second line of stem 9 are empty! This produces a more descriptive plot, as shown in Figure 1.11. For these data, the leaf unit is .1, and the
reader can infer that the stem and leaf 8 and 2, for example, represent the measurement x 8.2.
FI GU R E 1 .1 1
Stem and leaf plot for the data in Table 1.8
● 5
Reordering →
Leaf unit .1
If you turn the stem and leaf plot sideways, so that the vertical line is now a horizontal axis, you can see that the data have “piled up” or been “distributed” along the axis in a pattern that can
be described as “mound-shaped”—much like a pile of sand on the beach. This plot again shows that the weights of these 30 newborns range between 5.6 and 9.4; many weights are between 7.5 and 8.0
Interpreting Graphs with a Critical Eye Once you have created a graph or graphs for a set of data, what should you look for as you attempt to describe the data? • •
First, check the horizontal and vertical scales, so that you are clear about what is being measured. Examine the location of the data distribution. Where on the horizontal axis is the center of the
distribution? If you are comparing two distributions, are they both centered in the same place? Examine the shape of the distribution. Does the distribution have one “peak,” a point that is higher
than any other? If so, this is the most frequently occurring measurement or category. Is there more than one peak? Are there an approximately equal number of measurements to the left and right of the
peak? Look for any unusual measurements or outliers. That is, are any measurements much bigger or smaller than all of the others? These outliers may not be representative of the other values in the
Distributions are often described according to their shapes.
Definition A distribution is symmetric if the left and right sides of the distribution, when divided at the middle value, form mirror images. A distribution is skewed to the right if a greater
proportion of the measurements lie to the right of the peak value. Distributions that are skewed right contain a few unusually large measurements.
1.4 GRAPHS FOR QUANTITATIVE DATA
A distribution is skewed to the left if a greater proportion of the measurements lie to the left of the peak value. Distributions that are skewed left contain a few unusually small measurements. A
distribution is unimodal if it has one peak; a bimodal distribution has two peaks. Bimodal distributions often represent a mixture of two different populations in the data set.
FIGU R E 1 .1 2
Shapes of data distributions for Example 1.9
Examine the three dotplots generated by MINITAB and shown in Figure 1.12. Describe these distributions in terms of their locations and shapes. ●
The first dotplot shows a relatively symmetric distribution with a single peak located at x 4. If you were to fold the page at this peak, the left and right halves would almost be mirror images. The
second dotplot, however, is far from symmetric. It has a long “right tail,” meaning that there are a few unusually large observations. If you were to fold the page at the peak, a larger proportion of
measurements would be on the right side than on the left. This distribution is skewed to the right. Similarly, the third dotplot with the long “left tail” is skewed to the left. Solution
Symmetric ⇔ mirror images Skewed right ⇔ long right tail Skewed left ⇔ long left tail
An administrative assistant for the athletics department at a local university is monitoring the grade point averages for eight members of the women’s volleyball team. He enters the GPAs into the
database but accidentally misplaces the decimal point in the last entry. 2.8
Use a dotplot to describe the data and uncover the assistant’s mistake. The dotplot of this small data set is shown in Figure 1.13(a). You can clearly see the outlier or unusual observation caused by
the assistant’s data entry error. Once the error has been corrected, as in Figure 1.13(b), you can see the correct distribution of the data set. Since this is a very small set, it is difficult to
describe the shape of the distribution, although it seems to have a peak value around 3.0 and it appears to be relatively symmetric.
FI GU R E 1 .1 3
Distributions of GPAs for Example 1.10
Outliers lie out, away from the main body of data.
2.8 GPAs
When comparing graphs created for two data sets, you should compare their scales of measurement, locations, and shapes, and look for unusual measurements or outliers. Remember that outliers are not
always caused by errors or incorrect data entry. Sometimes they provide very valuable information that should not be ignored. You may need additional information to decide whether an outlier is a
valid measurement that is simply unusually large or small, or whether there has been some sort of mistake in the data collection. If the scales differ widely, be careful about making comparisons or
drawing conclusions that might be inaccurate!
RELATIVE FREQUENCY HISTOGRAMS A relative frequency histogram resembles a bar chart, but it is used to graph quantitative rather than qualitative data. The data in Table 1.9 are the birth weights of
30 fullterm newborn babies, reproduced from Example 1.8 and shown as a dotplot in Figure 1.14(a). First, divide the interval from the smallest to the largest measurements into subintervals or classes
of equal length. If you stack up the dots in each subinterval (Figure 1.14(b)), and draw a bar over each stack, you will have created a frequency histogram or a relative frequency histogram,
depending on the scale of the vertical axis.
TABLE 1.9
How to construct a histogram
Birth Weights of 30 Full-Term Newborn Babies 7.2 8.0 8.2 5.8 6.1 8.5
FIGU R E 1 .1 4
7.8 8.2 7.7 6.8 7.9 9.0
6.8 5.6 7.5 6.8 9.4 7.7
6.2 8.6 7.2 8.5 9.0 6.7
8.2 7.1 7.7 7.5 7.8 7.7
● (a) 6.0
7.8 Birth Weights
(b) 6.0
7.5 8.0 Birth Weights
Definition A relative frequency histogram for a quantitative data set is a bar graph in which the height of the bar shows “how often” (measured as a proportion or relative frequency) measurements
fall in a particular class or subinterval. The classes or subintervals are plotted along the horizontal axis.
As a rule of thumb, the number of classes should range from 5 to 12; the more data available, the more classes you need.† The classes must be chosen so that each measurement falls into one and only
one class. For the birth weights in Table 1.9, we decided to use eight intervals of equal length. Since the total span of the birth weights is 9.4 5.6 3.8 the minimum class width necessary to cover
the range of the data is (3.8 8) .475. For convenience, we round this approximate width up to .5. Beginning the first interval at the lowest value, 5.6, we form subintervals from 5.6 up to but not
including 6.1, 6.1 up to but not including 6.6, and so on. By using the method of left inclusion, and including the left class boundary point but not the right boundary point in the class, we
eliminate any confusion about where to place a measurement that happens to fall on a class boundary point. Table 1.10 shows the eight classes, labeled from 1 to 8 for identification. The boundaries
for the eight classes, along with a tally of the number of measurements that fall in each class, are also listed in the table. As with the charts in Section 1.3, you can now measure how often each
class occurs using frequency or relative frequency.
You can use this table as a guide for selecting an appropriate number of classes. Remember that this is only a guide; you may use more or fewer classes than the table recommends if it makes the graph
more descriptive.
Sample Size Number of Classes
To construct the relative frequency histogram, plot the class boundaries along the horizontal axis. Draw a bar over each class interval, with height equal to the relative frequency for that class.
The relative frequency histogram for the birth weight data, Figure 1.15, shows at a glance how birth weights are distributed over the interval 5.6 to 9.4. TABLE 1.10
Relative Frequencies for the Data of Table 1.9
Class Boundaries
5.6 to 6.1 6.1 to 6.6 6.6 to 7.1 7.1 to 7.6 7.6 to 8.1 8.1 to 8.6 8.6 to 9.1 9.1 to 9.6
Relative frequencies add to 1; frequencies add to n.
FI GU R E 1 .1 5
Class Frequency
Class Relative Frequency
II II IIII IIII IIII III IIII III I
2/30 2/30 4/30 5/30 8/30 5/30 3/30 1/30
Relative frequency histogram
Relative Frequency
7/30 6/30 5/30 4/30 3/30 2/30 1/30 0 5.6
TABLE 1.11
7.6 8.1 Birth Weights
Twenty-five Starbucks® customers are polled in a marketing survey and asked, “How often do you visit Starbucks in a typical week?” Table 1.11 lists the responses for these 25 customers. Construct a
relative frequency histogram to describe the data.
Number of Visits in a Typical Week for 25 Customers 6 4 6 5 3
The variable being measured is “number of visits to Starbucks,” which is a discrete variable that takes on only integer values. In this case, it is simplest to choose the classes or subintervals as
the integer values over the range of observed values: 1, 2, 3, 4, 5, 6, and 7. Table 1.12 shows the classes and their corresponding frequencies and relative frequencies. The relative frequency
histogram, generated using MINITAB, is shown in Figure 1.16.
FIGU R E 1 .1 6
MINITAB histogram for Example 1.11
Frequency Table for Example 1.11 Number of Visits to Starbucks
Relative Frequency
1 — 2 3 8 7 3 1
.04 — .08 .12 .32 .28 .12 .04
● 8/25
Relative frequency
TABLE 1.12
Notice that the distribution is skewed to the left and that there is a gap between 1 and 3.
How Do I Construct a Relative Frequency Histogram? 1. Choose the number of classes, usually between 5 and 12. The more data you have, the more classes you should use. 2. Calculate the approximate
class width by dividing the difference between the largest and smallest values by the number of classes. 3. Round the approximate class width up to a convenient number. 4. If the data are discrete,
you might assign one class for each integer value taken on by the data. For a large number of integer values, you may need to group them into classes. 5. Locate the class boundaries. The lowest class
must include the smallest measurement. Then add the remaining classes using the left inclusion method. 6. Construct a statistical table containing the classes, their frequencies, and their relative
frequencies. 7. Construct the histogram like a bar graph, plotting class intervals on the horizontal axis and relative frequencies as the heights of the bars. (continued)
Exercise Reps A. For the following data sets, find the range, the minimum class width, and a convenient class width. The first data set is done for you. Number of Measurements
Smallest and Largest Values
Number of Classes
Minimum Class Width
Convenient Class Width
10 to 100
0.1 to 6.0
500 to 700
B. For the same data sets, select a convenient starting point, and list the class boundaries for the first two classes. The first data set is done for you. Number of Measurements
Smallest and Largest Values
Convenient Starting Point
10 to 100
First Two Classes 0 to 15 15 to 30
0.1 to 6.0
500 to 700
Progress Report •
Still having trouble? Try again using the Exercise Reps at the end of this section.
Mastered relative frequency histograms? You can skip the Exercise Reps and go straight to the Basic Techniques Exercises at the end of this section.
Answers are located on the perforated card at the back of this book.
A relative frequency histogram can be used to describe the distribution of a set of data in terms of its location and shape, and to check for outliers as you did with other graphs. For example, the
birth weight data were relatively symmetric, with no unusual measurements, while the Starbucks data were skewed left. Since the bar constructed above each class represents the relative frequency or
proportion of the measurements in that class, these heights can be used to give us further information: • •
The proportion of the measurements that fall in a particular class or group of classes The probability that a measurement drawn at random from the set will fall in a particular class or group of
Consider the relative frequency histogram for the birth weight data in Figure 1.15. What proportion of the newborns have birth weights of 7.6 or higher? This involves all
classes beyond 7.6 in Table 1.10. Because there are 17 newborns in those classes, the proportion who have birth weights of 7.6 or higher is 17/30, or approximately 57%. This is also the percentage of
the total area under the histogram in Figure 1.15 that lies to the right of 7.6. Suppose you wrote each of the 30 birth weights on a piece of paper, put them in a hat, and drew one at random. What is
the chance that this piece of paper contains a birth weight of 7.6 or higher? Since 17 of the 30 pieces of paper fall in this category, you have 17 chances out of 30; that is, the probability is 17/
30. The word probability is not unfamiliar to you; we will discuss it in more detail in Chapter 4. Although we are interested in describing a set of n 30 measurements, we might also be interested in
the population from which the sample was drawn, which is the set of birth weights of all babies born at this hospital. Or, if we are interested in the weights of newborns in general, we might
consider our sample as representative of the population of birth weights for newborns at similar metropolitan hospitals. A sample histogram provides valuable information about the population
histogram—the graph that describes the distribution of the entire population. Remember, though, that different samples from the same population will produce different histograms, even if you use the
same class boundaries. However, you can expect that the sample and population histograms will be similar. As you add more and more data to the sample, the two histograms become more and more alike.
If you enlarge the sample to include the entire population, the two histograms are identical!
EXERCISES EXERCISE REPS These exercises refer back to the MyPersonal Trainer section on page 27. 1.16 For the following data sets, find the range, the minimum class width, and a conve-
nient class width. Number of Measurements
Smallest and Largest Values
Number of Classes
0.5 to 1.0
0 to 100
1200 to 1500
Minimum Class Width
Convenient Class Width
1.17 Refer to Exercise 1.16. For the same data sets, select a convenient starting point,
and list the class boundaries for the first two classes. Number of Measurements
Smallest and Largest Values
0.5 to 1.0
0 to 100
1200 to 1500
Convenient Starting Point
First Two Classes
BASIC TECHNIQUES 1.18 Construct a stem and leaf plot for these
50 measurements:
3.1 2.9 3.8 2.5 4.3
4.9 2.1 6.2 3.6 5.7
2.8 3.5 2.5 5.1 3.7
3.6 4.0 2.9 4.8 4.6
2.5 3.7 2.8 1.6 4.0
4.5 2.7 5.1 3.6 5.6
3.5 4.0 1.8 6.1 4.9
3.7 4.4 5.6 4.7 4.2
4.1 3.7 2.2 3.9 3.1
4.9 4.2 3.4 3.9 3.9
a. Describe the shape of the data distribution. Do you see any outliers? b. Use the stem and leaf plot to find the smallest observation. c. Find the eighth and ninth largest observations. 1.19 Refer
to Exercise 1.18. Construct a relative fre-
quency histogram for the data. a. Approximately how many class intervals should you use? b. Suppose you decide to use classes starting at 1.6 with a class width of .5 (i.e., 1.6 to 2.1, 2.1 to 2.6).
Construct the relative frequency histogram for the data. c. What fraction of the measurements are less than 5.1? d. What fraction of the measurements are larger than 3.6? e. Compare the relative
frequency histogram with the stem and leaf plot in Exercise 1.18. Are the shapes similar? 1.20 Consider this set of data: EX0120
4.5 4.3 3.9 4.4
3.2 4.8 3.7 4.0
3.5 3.6 4.3 3.6
3.9 3.3 4.4 3.5
3.5 4.3 3.4 3.9
3.9 4.2 4.2 4.0
a. Construct a stem and leaf plot by using the leading digit as the stem. b. Construct a stem and leaf plot by using each leading digit twice. Does this technique improve the presentation of the
data? Explain. 1.21 A discrete variable can take on only the values
0, 1, or 2. A set of 20 measurements on this variable is shown here: 1 2 2 0
a. Construct a relative frequency histogram for the data.
b. What proportion of the measurements are greater than 1? c. What proportion of the measurements are less than 2? d. If a measurement is selected at random from the 20 measurements shown, what is
the probability that it is a 2? e. Describe the shape of the distribution. Do you see any outliers? 1.22 Refer to Exercise 1.21.
a. Draw a dotplot to describe the data. b. How could you define the stem and the leaf for this data set? c. Draw the stem and leaf plot using your decision from part b. d. Compare the dotplot, the
stem and leaf plot, and the relative frequency histogram (Exercise 1.21). Do they all convey roughly the same information? 1.23 Navigating a Maze An experimental psychologist measured the length of
time it took for a rat to successfully navigate a maze on each of five days. The results are shown in the table. Create a line chart to describe the data. Do you think that any learning is taking
place? Day
Time (sec.)
1.24 Measuring over Time The value of a
quantitative variable is measured once a year for a 10-year period. Here are the data:
61.5 62.3 60.7 59.8 58.0
58.2 57.5 57.5 56.1 56.0
a. Create a line chart to describe the variable as it changes over time. b. Describe the measurements using the chart constructed in part a. 1.25 Test Scores The test scores on a 100-point test were
recorded for 20 students:
a. Use an appropriate graph to describe the data. b. Describe the shape and location of the scores.
c. Is the shape of the distribution unusual? Can you think of any reason the distribution of the scores would have such a shape?
APPLICATIONS 1.26 A Recurring Illness The length of time (in months) between the onset of a particular illness and its recurrence was recorded for n 50 patients:
2.1 14.7 4.1 14.1 1.6
4.4 9.6 18.4 1.0 3.5
2.7 16.7 .2 2.4 11.4
32.3 7.4 6.1 2.4 18.0
9.9 8.2 13.5 18.0 26.7
9.0 19.2 7.4 8.7 3.7
2.0 6.9 .2 24.0 12.6
6.6 4.3 8.3 1.4 23.1
3.9 3.3 .3 8.2 5.6
1.6 1.2 1.3 5.8 .4
a. Construct a relative frequency histogram for the data. b. Would you describe the shape as roughly symmetric, skewed right, or skewed left? c. Give the fraction of recurrence times less than or
equal to 10 months.
a. Construct a stem and leaf display for the data. b. Construct a relative frequency histogram for these data. Start the lower boundary of the first class at 30 and use a class width of 5 months. c.
Compare the graphs in parts a and b. Are there any significant differences that would cause you to choose one as the better method for displaying the data? d. What proportion of the children were 35
months (2 years, 11 months) or older, but less than 45 months (3 years, 9 months) of age when first enrolled in preschool? e. If one child were selected at random from this group of children, what is
the probability that the child was less than 50 months old (4 years, 2 months) when first enrolled in preschool?
1.27 Education Pays Off! Education pays off,
according to a snapshot provided in a report to the city of Riverside by the Riverside County Office of Education.7 The average annual incomes for six different levels of education are shown in the
table: Educational Level
Text not available due to copyright restrictions
Average Annual Income
High school graduate Some college, no degree Bachelor’s degree Master’s degree Doctorate Professional (Doctor, Lawyer)
$26,795 29,095 50,623 63,592 85,675 101,375
Source: U.S. Census Bureau
a. What graphical methods could you use to describe the data? b. Select the method from part a that you think best describes the data. c. How would you summarize the information that you see in the
graph regarding educational levels and salary? 1.28 Preschool The ages (in months) at
which 50 children were first enrolled in a preschool are listed below.
1.30 How Long Is the Line? To decide on the number of service counters needed for stores to be built in the future, a supermarket chain wanted to obtain information on the length of time (in minutes)
required to service customers. To find the distribution of customer service times, a sample of 1000 customers’ service times was recorded. Sixty of these are shown here:
3.6 1.1 1.4 .6 1.1 1.6
1.9 1.8 .2 2.8 1.2 1.9
2.1 .3 1.3 2.5 .8 5.2
.3 1.1 3.1 1.1 1.0 .5
.8 .5 .4 .4 .9 1.8
.2 1.2 2.3 1.2 .7 .3
1.0 .6 1.8 .4 3.1 1.1
1.4 1.1 4.5 1.3 1.7 .6
1.8 .8 .9 .8 1.1 .7
1.6 1.7 .7 1.3 2.2 .6
a. Construct a stem and leaf plot for the data. b. What fraction of the service times are less than or equal to 1 minute? c. What is the smallest of the 60 measurements? 1.31 Service Times, continued
Refer to Exercise 1.30. Construct a relative frequency histogram for the supermarket service times.
a. Describe the shape of the distribution. Do you see any outliers? b. Assuming that the outliers in this data set are valid observations, how would you explain them to the management of the
supermarket chain? c. Compare the relative frequency histogram with the stem and leaf plot in Exercise 1.30. Do the two graphs convey the same information? 1.32 Calcium Content The calcium (Ca)
content of a powdered mineral substance was analyzed ten times with the following percent compositions recorded:
.0271 .0271
.0282 .0281
.0279 .0269
.0281 .0275
.0268 .0276
a. Draw a dotplot to describe the data. (HINT: The scale of the horizontal axis should range from .0260 to .0290.) b. Draw a stem and leaf plot for the data. Use the numbers in the hundredths and
thousandths places as the stem. c. Are any of the measurements inconsistent with the other measurements, indicating that the technician may have made an error in the analysis? 1.33 American
Presidents Listed below are the ages at the time of death for the 38 deceased American presidents from George Washington to Ronald Reagan:5
Washington J. Adams Jefferson Madison Monroe J. Q. Adams Jackson Van Buren W. H. Harrison Tyler Polk Taylor Fillmore Pierce Buchanan Lincoln A. Johnson Grant Hayes
Garfield Arthur Cleveland B. Harrison Cleveland McKinley T. Roosevelt Taft Wilson Harding Coolidge Hoover F. D. Roosevelt Truman Eisenhower Kennedy L. Johnson Nixon Reagan
a. Before you graph the data, try to visualize the distribution of the ages at death for the presidents. What shape do you think it will have? b. Construct a stem and leaf plot for the data. Describe
the shape. Does it surprise you? c. The five youngest presidents at the time of death appear in the lower “tail” of the distribution. Three of the five youngest have one common trait. Identify the five
youngest presidents at death. What common trait explains these measurements? 1.34 RBC Counts The red blood cell count
of a healthy person was measured on each of 15 days. The number recorded is measured in 106 cells per microliter ( L).
5.4 5.3 5.3
5.2 5.4 4.9
5.0 5.2 5.4
5.2 5.1 5.2
5.5 5.3 5.2
a. Use an appropriate graph to describe the data. b. Describe the shape and location of the red blood cell counts. c. If the person’s red blood cell count is measured today as 5.7 106/ L, would you
consider this unusual? What conclusions might you draw? 1.35 Batting Champions The officials of
major league baseball have crowned a batting champion in the National League each year since 1876. A sample of winning batting averages is listed in the table:5
Derreck Lee Todd Helton Larry Doyle Edd Roush Paul Waner Honus Wagner Willie Keeler Roger Hornsby Tommy Davis Gary Sheffield Willie Mays Bill Madlock Richie Ashburn Ernie Lombardi Stan Musial Joe
Torre Tony Gwynn Roberto Clemente Pete Rose Roger Connor
Average .335 .372 .320 .341 .362 .334 .379 .424 .326 .330 .345 .354 .350 .330 .376 .363 .353 .351 .335 .371
a. Construct a relative frequency histogram to describe the batting averages for these 20 champions. b. If you were to randomly choose one of the 20 names, what is the chance that you would choose a
player whose average was above .400 for his championship year? 1.36 Top 20 Movies The table that follows shows the weekend gross ticket sales for the top 20 movies during the week of August 4, 2006:9
Weekend Gross ($ millions)
Movie 1.Talladega Nights: The Ballad of Ricky Bobby 2. Barnyard 3. Pirates of the Caribbean: Dead Man’s Chest 4. Miami Vice 5. The Descent 6. John Tucker Must Die 7. Monster House 8. The Ant Bully 9.
You, Me and Dupree 10. The Night Listener 11. The Devil Wears Prada 12. Lady in the Water 13. Little Man 14. Superman Returns 15. Scoop 16. Little Miss Sunshine 17. Clerks II 18. My Super
Ex-Girlfriend 19. Cars 20. Click
$47.0 15.8 11.0 10.2 8.9 6.2 6.1 3.9 3.6 3.6 3.0 2.7 2.5 2.2 1.8 1.5 1.3 1.2 1.1 0.8
Source: www.radiofree.com/mov-tops.shtml
a. Draw a stem and leaf plot for the data. Describe the shape of the distribution. Are there any outliers? b. Construct a dotplot for the data. Which of the two graphs is more informative? Explain.
1.37 Hazardous Waste How safe is your EX0137
neighborhood? Are there any hazardous waste
sites nearby? The table shows the number of hazardous waste sites in each of the 50 states and the District of Columbia in the year 2006:5 AL AK AZ AR CA CO CT DE DC FL GA
HI ID IL IN IA KS KY LA ME MD
MA MI MN MS MO MT NE NV NH NJ
NM NY NC ND OH OK OR PA RI SC
SD TN TX UT VT VA WA WV WI WY
a. What variable is being measured? Is the variable discrete or continuous? b. A stem and leaf plot generated by MINITAB is shown here. Describe the shape of the data distribution. Identify the
unusually large measurements marked “HI” by state. Stem-and-Leaf Display: Hazardous Waste Stem-and-leaf of Sites N = 51 Leaf Unit = 1.0 6 11 24 (8) 19 17 14 11 9 8 6
HI 68, 87, 95, 96, 117
c. Can you think of any reason these five states would have a large number of hazardous waste sites? What other variable might you measure to help explain why the data behave as they do?
As you continue to work through the exercises in this chapter, you will become more experienced in recognizing different types of data and in determining the most appropriate graphical method to use.
Remember that the type of graphic you use is not as important as the interpretation that accompanies the picture. Look for these important characteristics: • • •
Location of the center of the data Shape of the distribution of data Unusual observations in the data set
Using these characteristics as a guide, you can interpret and compare sets of data using graphical methods, which are only the first of many statistical tools that you will soon have at your disposal.
CHAPTER REVIEW Key Concepts I.
2. Quantitative data
How Data Are Generated
1. Experimental units, variables, measurements 2. Samples and populations 3. Univariate, bivariate, and multivariate data II. Types of Variables
1. Qualitative or categorical 2. Quantitative a. Discrete b. Continuous III. Graphs for Univariate Data Distributions
a. Pie and bar charts b. Line charts c. Dotplots d. Stem and leaf plots e. Relative frequency histograms 3. Describing data distributions a. Shapes—symmetric, skewed left, skewed right, unimodal,
bimodal b. Proportion of measurements in certain intervals c. Outliers
1. Qualitative or categorical data a. Pie charts b. Bar charts
Easy access to the web has made it possible for you to understand statistical concepts using an interactive web tool called an applet. These applets provide visual reinforcement for the concepts that
have been presented in the chapter. Sometimes you will be able to perform statistical experiments, sometimes you will be able to interact with a statistical graph to change its form, and sometimes
you will be able to use the applet as an interactive “statistical table.” At the end of each chapter, you will find exercises designed specifically for use with a particular applet. The applets have
been customized specifically to match the presentation and notation used in your text. They can be found on the Premium Website. If necessary, follow the instructions to download the latest web
browser and/or Java plug-in, or just click the appropriate link to load the applets. Your web browser will open the index of applets, organized by chapter and name. When you click a particular applet
title, the applet will appear in your browser. To return to the index of applets, simply click the link at the bottom of the page.
Dotplots Click the Chapter 1 applet called Building a Dotplot. If you move your cursor over the applet marked Dotplot Demo you will see a green line with a value that changes as you move along the
horizontal axis. When you left-click your mouse, a dot will appear at that point on the dotplot. If two measurements are identical, the dots will pile up on top of each other (Figure 1.17). Follow
the directions in the Dotplot Demo, using the sample data given there. If you make a mistake, the applet will tell you. The second applet will not correct your mistakes; you can add as many dots as
you want!
FIGU R E 1 .1 7
Building a Dotplot applet
Histograms Click the Chapter 1 applet called Building a Histogram. If you scroll down to the applet marked Histogram Demo, you will see the interval boundaries (or interval midpoints) for the
histogram along the horizontal axis. As you move the mouse across the graph, a light gray box will show you where the measurement will be added at your next mouse click. When you release the mouse,
the box turns dark blue (dark blue in Figure 1.18). The partially completed histogram in Figure 1.18 contains one 3, one 4, one 5, three 6s, and one 7. Follow the directions in the Histogram Demo
using the sample data given there. Click the link to compare your results to the correct histogram. The second applet will be used for some of the MyApplet Exercises. Click the applet called Flipping
Fair Coins, and scroll down to the applet marked sample size 3. The computer will collect some data by “virtually” tossing 3 coins and recording the quantitative discrete variable x number of heads
observed Click on “New Coin Flip.” You will see the result of your three tosses in the upperleft-hand corner, along with the value of x. For the experiment in Figure 1.19 we observed x 2. The applet
begins to build a relative frequency histogram to describe the data set, which at this point contains only one observation. Click “New Coin Flip” a few more times. Watch the coins appear, along with
the value of x, and watch the
FI GU R E 1 .1 8
Building a Histogram applet
FI GU R E 1 .1 9
Flipping Fair Coins applet
FI GU R E 1 .2 0
Flipping Fair Coins applet
relative frequency histogram grow. The red area (light blue in Figures 1.19 and 1.20) represents the current data added to the histogram, and the dark blue area in Figure 1.20 is contributed from the
previous coin flips. You can flip the three coins 10 at a time or 100 at a time to generate data more quickly. Figure 1.20 shows the relative frequency histogram for 500 observations in our data set.
Your data set will look a little different. However, it should have the same approximate shape—it should be relatively symmetric. For our histogram, we can say that the values x 0 and x 3 occurred
about 12–13% of the time, while the values x 1 and x 2 occurred between 38% and 40% of the time. Does your histogram produce similar results?
Introduction to MINITABTM MINITAB is a computer software package that is available in many forms for different computer environments. The current version of MINITAB at the time of this printing is
MINITAB 15, which is used in the Windows environment. We will assume that you are familiar with Windows. If not, perhaps a lab or teaching assistant can help you to master the basics. Once you have
started Windows, there are two ways to start MINITAB: • •
If there is a MINITAB shortcut icon on the desktop, double-click on the icon. Click the Start button on the taskbar. Follow the menus, highlighting All Programs 씮 MINITAB Solutions 씮 MINITAB 15
Statistical Software English. Click on MINITAB 15 Statistical Software English to start the program.
When MINITAB is opened, the main MINITAB screen will be displayed (see Figure 1.21). It contains two windows: the Data window and the Session window. Clicking FIGU R E 1 .2 1
38 ❍
anywhere on the window will make that window active so that you can either enter data or type commands. Although it is possible to manually type MINITAB commands in the Session window, we choose to
use the Windows approach, which will be familiar to most of you. If you prefer to use the typed commands, consult the MINITAB manual for detailed instructions. At the top of the Session window, you
will see a Menu bar. Highlighting and clicking on any command on the Menu bar will cause a menu to drop down, from which you may then select the necessary command. We will use the standard notation
to indicate a sequence of commands from the drop-down menus. For example, File 씮 Open Worksheet will allow you to retrieve a “worksheet”—a set of data from the Data window—which you have previously
saved. To close the program, the command sequence is File 씮 Exit. MINITAB 15 allows multiple worksheets to be saved as “projects.” When you are working on a project, you can add new worksheets or
open worksheets from other projects to add to your current project. As you become more familiar with MINITAB, you will be able to organize your information into either “worksheets” or “projects,”
depending on the complexity of your task.
Graphing with MINITAB The first data set to be graphed consists of qualitative data whose frequencies have already been recorded. The class status of 105 students in an introductory statistics class
are listed in Table 1.13. Before you enter the data into the Minitab Data window, start a project called “Chapter 1” by highlighting File 씮 New. A Dialog box called “New” will appear. Highlight
Minitab Project and click OK. Before you continue, let’s save this project as “Chapter 1” using the series of commands File 씮 Save Project. Type Chapter 1 in the File Name box, and select a location
using the white box marked “Save in:” at the top of the Dialog box. Click Save. In the Data window at the top of the screen, you will see your new project name, “Chapter 1.MPJ.”
TABLE 1.13
Status of Students in Statistics Class Status Frequency
Grad Student
To enter the data into the worksheet, click on the gray cell just below the name C1 in the Data window. You can enter your own descriptive name for the categories— possibly “Status.” Now use the down
arrow앗 or your mouse to continue down column C1, entering the five status descriptions. Notice that the name C1 has changed to C1-T because you are entering text rather than numbers. Continue by
naming column 2 (C2) “Frequency,” and enter the five numerical frequencies into C2. The Data window will appear as in Figure 1.22. To construct a pie chart for these data, click on Graph 씮 Pie Chart,
and a Dialog box will appear (see Figure 1.23). In this box, you must specify how you want to create the chart. Click the radio button marked Chart values from a table. Then place your cursor in the
box marked “Categorical variable.” Either (1) highlight C1 in the list at the left and choose Select, (2) double-click on C1 in the list at the left, or (3) type C1 in the “Categorical variable” box.
Similarly, place the cursor in the box marked
FIGU R E 1 .2 2
FIGU R E 1 .2 3
“Summary variables” and select C2. Click Labels and select the tab marked Slice Labels. Check the boxes marked “Category names” and “Percent.” When you click OK, MINITAB will create the pie chart in
Figure 1.24. We have removed the legend by selecting and deleting it. As you become more proficient at using the pie chart command, you may want to take advantage of some of the options available.
Once the chart is created, right-click on the pie chart and select Edit Pie. You can change the colors and format of the chart, “explode” important sectors of the pie, and change the order of the
categories. If you right-click on the pie chart and select Update Graph Automatically, the pie chart will automatically update when you change the data in columns C1 and C2 of the MINITAB worksheet.
If you would rather construct a bar chart, use the command Graph 씮 Bar Chart. In the Dialog box that appears, choose Simple. Choose an option in the “Bars represent” drop-down list, depending on the
way that the data has been entered into the
40 ❍
FI GU R E 1 .2 4
worksheet. For the data in Table 1.13, we choose “Values from a table” and click OK. When the Dialog box appears, place your cursor in the “Graph variables” box and select C2. Place your cursor in
the “Categorical variable” box, and select C1. Click OK to finish the bar chart, shown in Figure 1.25. Once the chart is created, right-click on various parts of the bar chart and choose Edit to
change the look of the chart. MINITAB can create dotplots, stem and leaf plots, and histograms for quantitative data. The top 40 stocks on the over-the-counter (OTC) market, ranked by percentage of
outstanding shares traded on a particular day, are listed in Table 1.14. Although we could simply enter these data into the third column (C3) of Worksheet 1 in the “Chapter 1” project, let’s start a
new worksheet within “Chapter 1” using File 씮 New, highlighting Minitab Worksheet, and clicking OK. Worksheet 2 will appear on the screen. Enter the data into column C1 and name them “Stocks” in the
gray cell just below the C1.
TABLE 1.14
Percentage of OTC Stocks Traded 11.88 7.99 7.15 7.13
6.27 6.07 5.98 5.91
5.49 5.26 5.07 4.94
4.81 4.79 4.55 4.43
4.40 4.05 3.94 3.93
3.78 3.69 3.62 3.48
3.44 3.36 3.26 3.20
3.11 3.03 2.99 2.89
2.88 2.74 2.74 2.69
2.68 2.63 2.62 2.61
To create a dotplot, use Graph 씮 Dotplot. In the Dialog box that appears, choose One Y 씮 Simple and click OK. To create a stem and leaf plot, use Graph 씮 Stemand-Leaf. For either graph, place your
cursor in the “Graph variables” box, and select “Stocks” from the list to the left (see Figure 1.26).
F IGU R E 1 .2 5
F IGU R E 1 .2 6
You can choose from a variety of formatting options before clicking OK. The dotplot appears as a graph, while the stem and leaf plot appears in the Session window. To print either a Graph window or
the Session window, click on the window to make it active and use File 씮 Print Graph (or Print Session Window). To create a histogram, use Graph 씮 Histogram. In the Dialog box that appears, choose
Simple and click OK, selecting “Stocks” for the “Graph variables” box.
42 ❍
Select Scale 씮 Y-Scale Type and click the radio button marked “Frequency.” (You can edit the histogram later to show relative frequencies.) Click OK twice. Once the histogram has been created,
right-click on the Y-axis and choose Edit Y Scale. Under the tab marked “Scale,” you can click the radio button marked “Position of ticks” and type in 0 5 10 15. Then click the tab marked “Labels,”
the radio button marked “Specified” and type 0 5/40 10/40 15/40. Click OK. This will reduce the number of ticks on the y-axis and change them to relative frequencies. Finally, double-click on the word
“Frequency” along the y-axis. Change the box marked “Text” to read “Relative frequency” and click OK. To adjust the type of boundaries for the histogram, right-click on the bars of the histogram and
choose Edit Bars. Use the tab marked “Binning” to choose either “Cutpoints” or “Midpoints” for the histogram; you can specify the cutpoint or midpoint positions if you want. In this same Edit box,
you can change the colors, fill type, and font style of the histogram. If you right-click on the bars and select Update Graph Automatically, the histogram will automatically update when you change the
data in the “Stocks” column. As you become more familiar with MINITAB for Windows, you can explore the various options available for each type of graph. It is possible to plot more than one variable
at a time, to change the axes, to choose the colors, and to modify graphs in many ways. However, even with the basic default commands, it is clear that the distribution of OTC stocks is highly skewed
to the right. Make sure to save your work using the File 씮 Save Project command before you exit MINITAB!
FI GU R E 1 .2 7
Supplementary Exercises 1.38 Quantitative or Qualitative? Identify each variable as quantitative or qualitative: a. Ethnic origin of a candidate for public office
a. Number of people in line at a supermarket checkout counter b. Depth of a snowfall
b. Score (0–100) on a placement examination
c. Length of time for a driver to respond when faced with an impending collision
c. Fast-food establishment preferred by a student (McDonald’s, Burger King, or Carl’s Jr.) d. Mercury concentration in a sample of tuna 1.39 Symmetric or Skewed? Do you expect the distributions of
the following variables to be symmetric or skewed? Explain. a. Size in dollars of nonsecured loans b. Size in dollars of secured loans c. Price of an 8-ounce can of peas d. Height in inches of
freshman women at your university e. Number of broken taco shells in a package of 100 shells f. Number of ticks found on each of 50 trapped cottontail rabbits 1.40 Continuous or Discrete? Identify
each variable as continuous or discrete: a. Number of homicides in Detroit during a one-month period b. Length of time between arrivals at an outpatient clinic c. Number of typing errors on a page of
manuscript d. Number of defective lightbulbs in a package containing four bulbs e. Time required to finish an examination
d. Number of aircraft arriving at the Atlanta airport in a given hour 1.43 Aqua Running Aqua running has been suggested as a method of cardiovascular conditioning for injured athletes and others who
want a low-impact aerobics program. A study reported in the Journal of Sports Medicine investigated the relationship between exercise cadence and heart rate by measuring the heart rates of 20 healthy
volunteers at a cadence of 48 cycles per minute (a cycle consisted of two steps).10 The data are listed here:
Construct a stem and leaf plot to describe the data. Discuss the characteristics of the data distribution. 1.44 Major World Lakes A lake is a body
of water surrounded by land. Hence, some bodies of water named “seas,” like the Caspian Sea, are actual salt lakes. In the table that follows, the length in miles is listed for the major natural
lakes of the world, excluding the Caspian Sea, which has an area of 143,244 square miles, a length of 760 miles, and a maximum depth of 3,363 feet.5
Length (mi)
1.42 Continuous or Discrete, again Identify each
Superior Victoria Huron Michigan Aral Sea Tanganyika Baykal Great Bear Nyasa Great Slave Erie Winnipeg Ontario Balkhash Ladoga Maracaibo Onega
variable as continuous or discrete:
Source: The World Almanac and Book of Facts 2007
1.41 Continuous or Discrete, again Identify each
variable as continuous or discrete: a. Weight of two dozen shrimp b. A person’s body temperature c. Number of people waiting for treatment at a hospital emergency room d. Number of properties for
sale by a real estate agency e. Number of claims received by an insurance company during one day
Name Eyre Titicaca Nicaragua Athabasca Reindeer Turkana Issyk Kul Torrens Vänern Nettilling Winnipegosis Albert Nipigon Gairdner Urmia Manitoba Chad
Length (mi) 90 122 102 208 143 154 115 130 91 67 141 100 72 90 90 140 175
a. Use a stem and leaf plot to describe the lengths of the world’s major lakes. b. Use a histogram to display these same data. How does this compare to the stem and leaf plot in part a? c. Are these
data symmetric or skewed? If skewed, what is the direction of the skewing? 1.45 Ages of Pennies We collected 50 pennies and recorded their ages, by calculaEX0145 ting AGE CURRENT YEAR YEAR ON PENNY.
a. Before drawing any graphs, try to visualize what the distribution of penny ages will look like. Will it be mound-shaped, symmetric, skewed right, or skewed left? b. Draw a relative frequency
histogram to describe the distribution of penny ages. How would you describe the shape of the distribution? 1.46 Ages of Pennies, continued The data below represent the ages of a different set of 50
pennies, again calculated using AGE CURRENT YEAR YEAR ON PENNY.
a. Draw a relative frequency histogram to describe the distribution of penny ages. Is the shape similar to the shape of the relative frequency histogram in Exercise 1.41? b. Draw a stem and leaf plot
to describe the penny ages. Are there any unusually large or small measurements in the set? 1.47 Presidential Vetoes Here is a list of
the 43 presidents of the United States along with the number of regular vetoes used by each:5
Washington J. Adams Jefferson Madison Monroe J. Q. Adams Jackson Van Buren W. H. Harrison Tyler Polk Taylor Fillmore
B. Harrison Cleveland McKinley T. Roosevelt Taft Wilson Harding Coolidge Hoover F. D. Roosevelt Truman Eisenhower Kennedy
Pierce Buchanan Lincoln A. Johnson Grant Hayes Garfield Arthur Cleveland
L. Johnson Nixon Ford Carter Reagan G. H. W. Bush Clinton G. W. Bush
Source: The World Almanac and Book of Facts 2007
Use an appropriate graph to describe the number of vetoes cast by the 43 presidents. Write a summary paragraph describing this set of data. 1.48 Windy Cities Are some cities more
windy than others? Does Chicago deserve to be nicknamed “The Windy City”? These data are the average wind speeds (in miles per hour) for 55 selected cities in the United States:5
8.9 7.1 9.1 8.8 10.2 8.7
12.4 11.8 9.0 10.8 8.6 5.8
12.9 10.3 10.5 8.7 10.7 10.2
8.4 7.7 11.3 7.6 9.6 6.9
7.8 9.2 7.8 5.5 8.3 9.2
11.5 10.5 8.8 35.1 8.0 10.2
8.2 9.3 12.2 10.5 9.5 6.2
9.0 8.7 7.9 10.4 7.7 9.6
8.8 8.7 8.8 11.0 9.4 12.2
Source: The World Almanac and Book of Facts 2007
a. Construct a relative frequency histogram for the data. (HINT: Choose the class boundaries without including the value x 35.1 in the range of values.) b. The value x 35.1 was recorded at Mt.
Washington, New Hampshire. Does the geography of that city explain the observation? c. The average wind speed in Chicago is recorded as 10.3 miles per hour. Do you consider this unusually windy? 1.49
Kentucky Derby The following data
set shows the winning times (in seconds) for the Kentucky Derby races from 1950 to 2007.11
(1950) (1960) (1970) (1980) (1990) (2000)
121.3 122.2 123.2 122.0 122.0 121.0
122.3 124.0 123.1 122.0 123.0 119.97
121.3 120.2 121.4 122.2 123.0 121.13
122.0 121.4 119.2† 122.1 122.2 121.19
123.0 120.0 124.0 122.2 123.3 124.06
121.4 121.1 122.0 120.1 121.1 122.75
123.2 122.0 121.3 122.4 121.0 121.36
122.1 120.3 122.1 123.2 122.4 122.17
125.0 122.1 121.1 122.2 122.2
122.1 121.4 122.2 125.0 123.2
† Record time set by Secretariat in 1973 Source: www.kentuckyderby.com
a. Do you think there will be a trend in the winning times over the years? Draw a line chart to verify your answer. b. Describe the distribution of winning times using an appropriate graph. Comment
on the shape of the distribution and look for any unusual observations.
1.50 Computer Networks at Home As Americans become more knowledgeable about computer hardware and software, as prices drop and installation becomes easier, home networking of PCs is expected to
penetrate 27 percent of U.S. households by 2008, with wireless technology leading the way.12
U.S. Home Networks (in millions) Year Wired Wireless 2002 2003 2004 2005 2006 2007 2008
6.1 6.5 6.2 5.7 4.9 4.1 3.4
1.7 4.5 8.7 13.7 19.1 24.0 28.2
Source: Jupiter Research
a. What graphical methods could you use to describe the data? b. Before you draw a graph, look at the predicted number of wired and wireless households in the table. What trends do you expect to see
in the graphs? c. Use a line chart to describe the predicted number of wired households for the years 2002 to 2008. d. Use a bar chart to describe the predicted number of wireless households for the
years 2002 to 2008. 1.51 Election Results The 2004 election
was a race in which the incumbent, George W. Bush, defeated John Kerry, Ralph Nader, and other candidates, receiving 50.7% of the popular vote. The popular vote (in thousands) for George W. Bush in
each of the 50 states is listed below:8
AL AK AZ AR CA CO CT DE FL GA
HI ID IL IN IA KS KY LA ME MD
MA MI MN MS MO MT NE NV NH NJ
NM NY NC ND OH OK OR PA RI SC
SD TN TX UT VT VA WA WV WI WY
a. By just looking at the table, what shape do you think the data distribution for the popular vote by state will have? b. Draw a relative frequency histogram to describe the distribution of the
popular vote for President Bush in the 50 states. c. Did the histogram in part b confirm your guess in part a? Are there any outliers? How can you explain them?
1.52 Election Results, continued Refer to Exercise 1.51. Listed here is the percentage of the popular vote received by President Bush in each of the 50 states:8
AL AK AZ AR CA CO CT DE FL GA
HI ID IL IN IA KS KY LA ME MD
MA MI MN MS MO MT NE NV NH NJ
NM NY NC ND OH OK OR PA RI SC
SD TN TX UT VT VA WA WV WI WY
a. By just looking at the table, what shape do you think the data distribution for the percentage of the popular vote by state will have? b. Draw a relative frequency histogram to describe the
distribution. Describe the shape of the distribution and look for outliers. Did the graph confirm your answer to part a? 1.53 Election Results, continued Refer to Exercises 1.51 and 1.52. The
accompanying stem and leaf plots were generated using MINITAB for the variables named “Popular Vote” and “Percent Vote.” Stem-and-Leaf Display: Popular Vote, Percent Vote Stem-and-leaf of Popular
Vote N = 50 Leaf Unit = 100
Stem-and-leaf of Percent Vote N = 50 Leaf Unit = 1.0
3 8 19 (9) 22 13 5 1
0 0 0 0 0 1 1 1 1 1 2 2 2 2 2 HI
33 7 89 39, 45, 55
a. Describe the shapes of the two distributions. Are there any outliers? b. Do the stem and leaf plots resemble the relative frequency histograms constructed in Exercises 1.51 and 1.52? c. Explain
why the distribution of the popular vote for President Bush by state is skewed while the
46 ❍
percentage of popular votes by state is moundshaped. 1.54 Student Heights The self-reported heights of 105 students in a biostatistics class are described in the relative frequency histogram below.
Relative frequency
b. Superimpose another line chart on the one drawn in part a to describe the percentage that do not approve. c. The following line chart was created using MINITAB. Does it differ from the graph that
you drew? Use the line chart to summarize changes in the polls just after the terrorist attacks in Spain on March 11, 2004 and in England in July of 2005. d. A plot to bring down domestic flights from
England to the United States was foiled by British undercover agents, and the arrest of 12 suspects followed on August 9, 2006. Summarize any changes in approval rating that may have been brought
about following the August 9th arrests.
66 Heights
Approve or Disapprove of the President’s Handling of Terrorism and Homeland Security? 70
a. Describe the shape of the distribution. b. Do you see any unusual feature in this histogram? c. Can you think of an explanation for the two peaks in the histogram? Is there some other factor that
is causing the heights to mound up in two separate peaks? What is it?
30 ’ M ar 04 25 ,’ 04 A pr ’0 4 M ar ’0 5 A ug Se ’05 pt 8 Se , ’0 5 pt 29 ,’ 05 N ov ’0 5 M ar ’0 M 6 ay ’ A ug 06 10 ,’ 06
Fe b
Date 8/10–11/06 5/11–12/06 3/16–17/06 11/10–11/05 9/29–30/05 9/8–9/05 8/2–4/05 3/17–18/05 4/8–9/04 3/25–26/04 2/19–20/04
Approve %
Disapprove %
Unsure %
a. Draw a line chart to describe the percentage that approve of Bush’s handling of terrorism and homeland security. Use time as the horizontal axis.
1.55 Fear of Terrorism Many opinion polls
have tracked opinions regarding the fear of terrorist attacks following the September 11, 2001, attacks on the World Trade Center. A Newsweek poll conducted by Princeton Survey Research Associates
International presented the results of several polls over a two-year period that asked, “Do you approve or disapprove of the way Bush is handling terrorism and homeland security?” The data are shown
in the table below:13
Opinion Approve Disapprove
1.56 Pulse Rates A group of 50 biomedical students recorded their pulse rates by counting the number of beats for 30 seconds and multiplying by 2.
a. Why are all of the measurements even numbers? b. Draw a stem and leaf plot to describe the data, splitting each stem into two lines. c. Construct a relative frequency histogram for the data. d.
Write a short paragraph describing the distribution of the student pulse rates. 1.57 Internet On-the-Go The mobile Internet is growing, with users accessing sites such as Yahoo! Mail,
the Weather Channel, ESPN, Google, Hotmail, and Mapquest from their cell phones. The most popular web browsers are shown in the table below, along with the percentage market share for each.14 Market
Browser Openwave Motorola Nokia Access Net Front
Market Share
27% 24% 13% 9%
Teleca AU Sony Ericsson RIM Blazer
6% 5% 5% 4%
Source: www.clickz.com
a. Do the percentages add up to 100%? If not, create a category called “Other” to account for the missing percentages. b. Use a pie chart to describe the market shares for the various mobile web
browsers. 1.58 How Much Can You Save? An advertisement in a recent Time magazine claimed that Geico Insurance will help you save an average of $200 per year on your automobile insurance.15
WA $178 OR $180 ID $189
UT $191
IN $203
NE $189
IL $149
CA $144
NM $146
OK $189
11.6 11.1 13.4 12.4 13.1 12.7 12.5
Island Thorns
Ashley Rails
11.8 11.6
18.3 15.8 18.0 18.0 20.8
17.7 18.3 16.7 14.8 19.1
a. Construct a relative frequency histogram to describe the aluminum oxide content in the 26 pottery samples. b. What unusual feature do you see in this graph? Can you think of an explanation for
this feature? c. Draw a dotplot for the data, using a letter (L, C, I, or A) to locate the data point on the horizontal scale. Does this help explain the unusual feature in part b? 1.60 The Great
Calorie Debate Want to lose weight? You can do it by cutting calories, as long as you get enough nutritional value from the foods that you do eat! Below you will see a visual representation of the
number of calories in some of America’s favorite foods adapted from an article in The Press-Enterprise.17
CT $268
PA $194
Number of calories
OH $208
MO $174
AZ $188
14.4 13.8 14.6 11.5 13.8 10.9 10.1
NY $237
WI $189
WY $189
NV $239
MD $240
VA $215 NC $127
TN $235 AL $189
GA $209
TX $183 A SAMPLING OF SAVINGS
FL $130
a. Construct a relative frequency histogram to describe the average savings for the 27 states shown on the United States map. Do you see any unusual features in the histogram? b. Construct a stem and
leaf plot for the data provided by Geico Insurance. c. How do you think that Geico selected the 27 states for inclusion in this advertisement? 1.59 An Archeological Find An article in
Archaeometry involved an analysis of 26 samples of Romano-British pottery, found at four different kiln sites in the United Kingdom.16 The samples were analyzed to determine their chemical
composition, and the percentage of aluminum oxide in each of the 26 samples is shown in the following table.
Hershey's Kiss
Oreo cookie
12-ounce can of Coke
12-ounce bottle of Budweiser beer
Slice of a large Papa John's pepperoni pizza
Burger King Whopper with cheese
a. Comment on the accuracy of the graph shown above. Do the sizes, heights, and volumes of the six items accurately represent the number of calories in the item? b. Draw an actual bar chart to
describe the number of calories in these six food favorites. 1.61 Laptops and Learning An informal
experiment was conducted at McNair Academic High School in Jersey City, New Jersey, to investigate the use of laptop computers as a learning tool in the study of algebra.18 A freshman class of 20
students was given laptops to use at school and at home, while another freshman class of 27 students was not given laptops; however, many of these students
48 ❍
were able to use computers at home. The final exam scores for the two classes are shown below. Laptops
No Laptops
18.0 8.0 18.0 20.0 18.0 22.0 25.0 23.0 20.0 14.9 10.0
HI ID IL IN IA KS KY LA ME MD
16.0 25.0 19.0 18.0 21.0 24.0 19.0 20.0 25.9 23.5
MA MI MN MS MO MT NE NV NH NJ
21.0 19.0 20.0 18.0 17.0 27.0 26.1 23.0 18.0 10.5
NM NY NC ND OH OK OR PA RI SC
17.0 23.9 29.9 23.0 28.0 20.0 24.0 32.0 30.0 16.0
SD TN TX UT VT VA WA WV WI WY
20.0 20.0 20.0 24.5 19.0 17.5 31.0 20.5 32.9 14.0
Source: The World Almanac and Book of Facts 2007
a. Construct a stem and leaf display for the data.
The histograms below show the distribution of final exam scores for the two groups. 30 40 50 60 70 80 90 100 Laptops
AL AK AZ AR CA CO CT DE DC FL GA
b. How would you describe the shape of this distribution? c. Are there states with unusually high or low gasoline taxes? If so, which states are they?
No laptops
1.64 Hydroelectric Plants The following data represent the planned rated capacities in megawatts (millions of watts) for the world’s 20 largest hydroelectric plants.5
Relative frequency
EX0164 .30
18,200 14,000 10,000 8,370 6,400 6,300 6,000
4,500 4,200 4,200 3,840 3,230 3,300 3,100
3,000 2,940 2,715 2,700 2,541 2,512
Source: The World Almanac and Book of Facts 2007
Write a summary paragraph describing and comparing the distribution of final exam scores for the two groups of students. 1.62 Old Faithful The data below are the waiting times between eruptions of the
Old EX0162 Faithful geyser in Yellowstone National Park.19 Use one of the graphical methods from this chapter to describe the distribution of waiting times. If there are any unusual features in your
graph, see if you can think of any practical explanation for them. 56 69 55 59 76 79 75 65 68 93
1.63 Gasoline Tax The following are the
2006 state gasoline tax rates in cents per gallon for the 50 United States and the District of Columbia.5
a. Construct a stem and leaf display for the data. b. How would you describe the shape of this distribution? 1.65 Car Colors The most popular colors for
compact and sports cars in a recent year are given in the table.5
Silver Gray Blue Black White
Color Red Green Light Brown Yellow/Gold Other
Percentage 9 6 5 1 2
Source: The World Almanac and Book of Facts 2007
Use an appropriate graphical display to describe these data. 1.66 Starbucks The number of Starbucks coffee shops in cities within 20 miles of the University of California, Riverside is shown in the
following table.20
Riverside Grand Terrace Rialto Colton San Bernardino Redlands Corona Yucaipa Chino
Ontario Norco Fontana Mira Loma Perris Highland Rancho Cucamonga Lake Elsinore Moreno Valley
.20 Relative frequency
Source: www.starbucks.com 0
a. Draw a dotplot to describe the data. b. Describe the shape of the distribution. c. Is there another variable that you could measure that might help to explain why some cities have more Starbucks
than others? Explain. 1.67 What’s Normal? The 98.6 degree standard for human body temperature was derived by a German doctor in 1868. In an attempt to verify his claim, Mackowiak, Wasserman, and
Levine21 took temperatures from 148 healthy people over a three-day period. A data set closely matching the one in Mackowiak’s article was derived by Allen Shoemaker, and appears in the Journal of
Statistics Education.22 The body temperatures for these 130 individuals are shown in the relative frequency histogram that follows.
99.2 98.4 Temperature
a. Describe the shape of the distribution of temperatures. b. Are there any unusual observations? Can you think of any explanation for these? c. Locate the 98.6-degree standard on the horizontal axis
of the graph. Does it appear to be near the center of the distribution?
Exercises 1.68 If you have not yet done so, use the first applet
in Building a Dotplot to create a dotplot for the following data set: 2, 3, 9, 6, 7, 6. 1.69 Cheeseburgers Use the second applet in Building a Dotplot to create a dotplot for the number of
cheeseburgers consumed in a given week by 10 college students: 4 3
a. How would you describe the shape of the distribution? b. What proportion of the students ate more than 4 cheeseburgers that week? 1.70 Social Security Numbers A group of
70 students were asked to record the last digit of their Social Security number.
a. Before graphing the data, use your common sense to guess the shape of the data distribution. Explain your reasoning. b. Use the second applet in Building a Dotplot to create a dotplot to describe
the data. Was your intuition correct in part a? 1.71 If you have not yet done so, use the first applet
in Building a Histogram to create a histogram for the data in Example 1.11, the number of visits to Starbucks during a typical week.
1.72 The United Fund The following data set records the yearly charitable contributions (in dollars) to the United Fund for a group of employees at a public university.
Use the second applet in Building a Histogram to construct a relative frequency histogram for the data. What is the shape of the distribution? Can you see any obvious outliers? 1.73 Survival Times
Altman and Bland report the survival times for patients with active hepatitis, half treated with prednisone and half receiving no treatment.23 The data that follow are adapted from their data for
those treated with prednisone. The survival times are recorded to the nearest month:
CASE STUDY Blood Pressure
a. Look at the data. Can you guess the approximate shape of the data distribution? b. Use the second applet in Building a Histogram to construct a relative frequency histogram for the data. What is
the shape of the distribution? c. Are there any outliers in the set? If so, which survival times are unusually short?
How Is Your Blood Pressure? Blood pressure is the pressure that the blood exerts against the walls of the arteries. When physicians or nurses measure your blood pressure, they take two readings. The
systolic blood pressure is the pressure when the heart is contracting and therefore pumping. The diastolic blood pressure is the pressure in the arteries when the heart is relaxing. The diastolic
blood pressure is always the lower of the two readings. Blood pressure varies from one person to another. It will also vary for a single individual from day to day and even within a given day. If
your blood pressure is too high, it can lead to a stroke or a heart attack. If it is too low, blood will not get to your extremities and you may feel dizzy. Low blood pressure is usually not serious.
So, what should your blood pressure be? A systolic blood pressure of 120 would be considered normal. One of 150 would be high. But since blood pressure varies with gender and increases with age, a
better gauge of the relative standing of your blood pressure would be obtained by comparing it with the population of blood pressures of all persons of your gender and age in the United States. Of
course, we cannot supply you with that data set, but we can show you a very large sample selected from it. The blood pressure data on 1910 persons, 965 men and 945 women between the ages of 15 and
20, are found at the Student Companion Website. The data are part of a health survey conducted by the National Institutes of Health (NIH). Entries for each person include that person’s age and
systolic and diastolic blood pressures at the time the blood pressure was recorded. 1. Describe the variables that have been measured in this survey. Are the variables quantitative or qualitative?
Discrete or continuous? Are the data univariate, bivariate, or multivariate? 2. What types of graphical methods are available for describing this data set? What types of questions could be answered
using various types of graphical techniques?
3. Using the systolic blood pressure data set, construct a relative frequency histogram for the 965 men and another for the 945 women. Use a statistical software package if you have access to one.
Compare the two histograms. 4. Consider the 965 men and 945 women as the entire population of interest. Choose a sample of n 50 men and n 50 women, recording their systolic blood pressures and their
ages. Draw two relative frequency histograms to graphically display the systolic blood pressures for your two samples. Do the shapes of the histograms resemble the population histograms from part 3?
5. How does your blood pressure compare with that of others of your same gender? Check your systolic blood pressure against the appropriate histogram in part 3 or 4 to determine whether your blood
pressure is “normal” or whether it is unusually high or low.
Describing Data with Numerical Measures © Joe Sohm-VisionsofAmerica/Photodisc/Getty
GENERAL OBJECTIVES Graphs are extremely useful for the visual description of a data set. However, they are not always the best tool when you want to make inferences about a population from the
information contained in a sample. For this purpose, it is better to use numerical measures to construct a mental picture of the data.
CHAPTER INDEX ● Box plots (2.7) ● Measures of center: mean, median, and mode (2.2) ● Measures of relative standing: z-scores, percentiles, quartiles, and the interquartile range (2.6) ● Measures of
variability: range, variance, and standard deviation (2.3)
The Boys of Summer Are the baseball champions of today better than those of “yesteryear”? Do players in the National League hit better than players in the American League? The case study at the end
of this chapter involves the batting averages of major league batting champions. Numerical descriptive measures can be used to answer these and similar questions.
● Tchebysheff’s Theorem and the Empirical Rule (2.4)
How Do I Calculate Sample Quartiles?
2.2 MEASURES OF CENTER
Graphs can help you describe the basic shape of a data distribution; “a picture is worth a thousand words.” There are limitations, however, to the use of graphs. Suppose you need to display your data
to a group of people and the bulb on the data projector blows out! Or you might need to describe your data over the telephone—no way to display the graphs! You need to find another way to convey a
mental picture of the data to your audience. A second limitation is that graphs are somewhat imprecise for use in statistical inference. For example, suppose you want to use a sample histogram to
make inferences about a population histogram. How can you measure the similarities and differences between the two histograms in some concrete way? If they were identical, you could say “They are the
same!” But, if they are different, it is difficult to describe the “degree of difference.” One way to overcome these problems is to use numerical measures, which can be calculated for either a sample
or a population of measurements. You can use the data to calculate a set of numbers that will convey a good mental picture of the frequency distribution. These measures are called parameters when
associated with the population, and they are called statistics when calculated from sample measurements. Definition Numerical descriptive measures associated with a population of measurements are
called parameters; those computed from sample measurements are called statistics.
In Chapter 1, we introduced dotplots, stem and leaf plots, and histograms to describe the distribution of a set of measurements on a quantitative variable x. The horizontal axis displays the values
of x, and the data are “distributed” along this horizontal line. One of the first important numerical measures is a measure of center—a measure along the horizontal axis that locates the center of the
distribution. The birth weight data presented in Table 1.9 ranged from a low of 5.6 to a high of 9.4, with the center of the histogram located in the vicinity of 7.5 (see Figure 2.1). Let’s consider
some rules for locating the center of a distribution of measurements.
Center of the birth weight data
● 8/30 7/30 Relative Frequency
FIGU R E 2 .1
6/30 5/30 4/30 3/30 2/30 1/30 0 5.6
7.6 8.1 Center Birth Weights
54 ❍
The arithmetic average of a set of measurements is a very common and useful measure of center. This measure is often referred to as the arithmetic mean, or simply the mean, of a set of measurements.
To distinguish between the mean for the sample and the mean for the population, we will use the symbol x苶 (x-bar) for a sample mean and the symbol m (Greek lowercase mu) for the mean of a
population. The arithmetic mean or average of a set of n measurements is equal to the sum of the measurements divided by n. Definition
Since statistical formulas often involve adding or “summing” numbers, we use a shorthand symbol to indicate the process of summing. Suppose there are n measurements on the variable x—call them x1,
x2, . . . , xn. To add the n measurements together, we use this shorthand notation: n
冱 xi i1
which means x1 x2 x3 xn
The Greek capital sigma (S) tells you to add the items that appear to its right, beginning with the number below the sigma (i 1) and ending with the number above (i n). However, since the typical
sums in statistical calculations are almost always made on the total set of n measurements, you can use a simpler notation: Sxi
which means “the sum of all the x measurements”
Using this notation, we write the formula for the sample mean: NOTATION Sx Sample mean: 苶x i n Population mean: m EXAMPLE
Draw a dotplot for the n 5 measurements 2, 9, 11, 5, 6. Find the sample mean and compare its value with what you might consider the “center” of these observations on the dotplot.
The dotplot in Figure 2.2 seems to be centered between 6 and 8. To find the sample mean, calculate
Sxi 2 9 11 5 6 6.6 苶x n 5 FI GU R E 2 .2
Dotplot for Example 2.1
6 Measurements
The statistic x苶 6.6 is the balancing point or fulcrum shown on the dotplot. It does seem to mark the center of the data.
2.2 MEASURES OF CENTER
mean balancing point or fulcrum
Remember that samples are measurements drawn from a larger population that is usually unknown. An important use of the sample mean x苶 is as an estimator of the unknown population mean m. The birth
weight data in Table 1.9 are a sample from a larger population of birth weights, and the distribution is shown in Figure 2.1. The mean of the 30 birth weights is Sxi 227.2 7.57 苶x 30 30 shown in
Figure 2.1; it marks the balancing point of the distribution. The mean of the entire population of newborn birth weights is unknown, but if you had to guess its value, your best estimate would be
7.57. Although the sample mean 苶x changes from sample to sample, the population mean m stays the same. A second measure of central tendency is the median, which is the value in the middle position
in the set of measurements ordered from smallest to largest. The median m of a set of n measurements is the value of x that falls in the middle position when the measurements are ordered from
smallest to largest. Definition
Find the median for the set of measurements 2, 9, 11, 5, 6. Solution
Rank the n 5 measurements from smallest to largest: 2
6 9 앖
The middle observation, marked with an arrow, is in the center of the set, or m 6. EXAMPLE
Find the median for the set of measurements 2, 9, 11, 5, 6, 27. Solution
Rank the measurements from smallest to largest: 2
Roughly 50% of the measurements are smaller, 50% are larger than the median.
6 9 앖
Now there are two “middle” observations, shown in the box. To find the median, choose a value halfway between the two middle observations: 69 m 7.5 2 The value .5(n 1) indicates the position of the
median in the ordered data set. If the position of the median is a number that ends in the value .5, you need to average the two adjacent values.
For the n 5 ordered measurements from Example 2.2, the position of the median is .5(n 1) .5(6) 3, and the median is the 3rd ordered observation, or m 6. For the n 6 ordered measurements from Example
2.3, the position of the median is .5(n 1) .5(7) 3.5, and the median is the average of the 3rd and 4th ordered observations, or m (6 9)/2 7.5.
56 ❍
Although both the mean and the median are good measures of the center of a distribution, the median is less sensitive to extreme values or outliers. For example, the value x 27 in Example 2.3 is much
larger than the other five measurements. The median, m 7.5, is not affected by the outlier, whereas the sample average,
symmetric: mean median skewed right: mean median skewed left: mean median
Sx 60 x苶 i 10 n 6 is affected; its value is not representative of the remaining five observations. When a data set has extremely small or extremely large observations, the sample mean is drawn toward
the direction of the extreme measurements (see Figure 2.3).
(a) .25
.25 Relative Frequency
Relative frequency distributions showing the effect of extreme values on the mean and median
Relative Frequency
FI GU R E 2 .3
.19 .12 .06
.19 .12 .06 0
0 Mean Median
Mean ⬎ Median
If a distribution is skewed to the right, the mean shifts to the right; if a distribution is skewed to the left, the mean shifts to the left. The median is not affected by these extreme values
because the numerical values of the measurements are not used in its calculation. When a distribution is symmetric, the mean and the median are equal. If a distribution is strongly skewed by one or
more extreme values, you should use the median rather than the mean as a measure of center.
You can see the effect of extreme values on both the mean and the median using the How Extreme Values Affect the Mean and Median applet. The first of three applets (Figure 2.4) shows a dotplot of the
data in Example 2.2. Use your mouse to move the largest observation (x 11) even further to the right. How does this larger observation affect the mean? How does it affect the median? We will use this
applet again for the MyApplet Exercises at the end of the chapter. FI GU R E 2 .4
How Extreme Values Affect the Mean and Median applet
2.2 MEASURES OF CENTER
Another way to locate the center of a distribution is to look for the value of x that occurs with the highest frequency. This measure of the center is called the mode. Definition The mode is the
category that occurs most frequently, or the most frequently occurring value of x. When measurements on a continuous variable have been grouped as a frequency or relative frequency histogram, the
class with the highest peak or frequency is called the modal class, and the midpoint of that class is taken to be the mode.
The mode is generally used to describe large data sets, whereas the mean and median are used for both large and small data sets. From the data in Example 1.11, the mode of the distribution of the
number of reported weekly visits to Starbucks for 30 Starbucks customers is 5. The modal class and the value of x occurring with the highest frequency are the same, as shown in Figure 2.5(a). For the
data in Table 1.9, a birth weight of 7.7 occurs four times, and therefore the mode for the distribution of birth weights is 7.7. Using the histogram to find the modal class, you find that the class
with the highest peak is the fifth class, from 7.6 to 8.1. Our choice for the mode would be the midpoint of this class, or 7.85. See Figure 2.5(b). It is possible for a distribution of measurements to
have more than one mode. These modes would appear as “local peaks” in the relative frequency distribution. For example, if we were to tabulate the length of fish taken from a lake during one season,
we might get a bimodal distribution, possibly reflecting a mixture of young and old fish in the population. Sometimes bimodal distributions of sizes or weights reflect a mixture of measurements taken on
males and females. In any case, a set or distribution of measurements may have more than one mode.
Remember that there can be several modes or no mode (if each observation occurs only once).
(a) 8/25
Relative Frequency
Relative frequency histograms for the Starbucks and birth weight data
Relative Frequency
FIGU R E 2 .5
6/25 4/25 2/25 0 1
4 5 Visits
8/30 7/30 6/30 5/30 4/30 3/30 2/30 1/30 0 5.6 6.1 6.6 7.1 7.6 8.1 8.6 9.1 9.6 Birth Weights
BASIC TECHNIQUES 2.1 You are given n 5 measurements: 0, 5, 1, 1, 3.
a. Draw a dotplot for the data. (HINT: If two measurements are the same, place one dot above the other.) Guess the approximate “center.”
b. Find the mean, median, and mode. c. Locate the three measures of center on the dotplot in part a. Based on the relative positions of the mean and median, are the measurements symmetric or skewed?
2.2 You are given n 8 measurements: 3, 2, 5, 6, 4,
4, 3, 5. a. Find 苶x. b. Find m. c. Based on the results of parts a and b, are the measurements symmetric or skewed? Draw a dotplot to confirm your answer. 2.3 You are given n 10 measurements: 3, 5,
4, 6,
10, 5, 6, 9, 2, 8. a. Calculate 苶x. b. Find m. c. Find the mode. APPLICATIONS 2.4 Auto Insurance The cost of automobile insur-
ance has become a sore subject in California because insurance rates are dependent on so many different variables, such as the city in which you live, the number of cars you insure, and the company
with which you are insured. The website www.insurance.ca.gov reports the annual 2006–2007 premium for a single male, licensed for 6–8 years, who drives a Honda Accord 12,600 to 15,000 miles per year
and has no violations or accidents.1 City Long Beach Pomona San Bernardino Moreno Valley
21st Century
$2617 2305 2286 2247
$2228 2098 2064 1890
Source: www.insurance.ca.gov
a. What is the average premium for Allstate Insurance? b. What is the average premium for 21st Century Insurance? c. If you were a consumer, would you be interested in the average premium cost? If
not, what would you be interested in? 2.5 DVD Players The DVD player is a common fixture in most American households. In fact, most American households have DVDs, and many have more than one. A sample
of 25 households produced the following measurements on x, the number of DVDs in the household:
a. Is the distribution of x, the number of DVDs in a household, symmetric or skewed? Explain. b. Guess the value of the mode, the value of x that occurs most frequently. c. Calculate the mean,
median, and mode for these measurements. d. Draw a relative frequency histogram for the data set. Locate the mean, median, and mode along the horizontal axis. Are your answers to parts a and b
correct? 2.6 Fortune 500 Revenues Ten of the 50 largest businesses in the United States, randomly selected from the Fortune 500, are listed below along with their revenues (in millions of dollars):2
Company General Motors IBM Bank of America Home Depot Boeing
Revenues $192,604 91,134 83,980 81,511 54,848
Company Target Morgan Stanley Johnson & Johnson Intel Safeway
Revenues $52,620 52,498 50,514 38,826 38,416
Source: Time Almanac 2007
a. Draw a stem and leaf plot for the data. Are the data skewed? b. Calculate the mean revenue for these 10 businesses. Calculate the median revenue. c. Which of the two measures in part b best
describes the center of the data? Explain. 2.7 Birth Order and Personality Does birth order have any effect on a person’s personality? A report on a study by an MIT researcher indicates that
later-born children are more likely to challenge the establishment, more open to new ideas, and more accepting of change.3 In fact, the number of later-born children is increasing. During the
Depression years of the 1930s, families averaged 2.5 children (59% later born), whereas the parents of baby boomers averaged 3 to 4 children (68% later born). What does the author mean by an average
of 2.5 children?
2.2 MEASURES OF CENTER
2.8 Tuna Fish An article in Consumer
Reports gives the price—an estimated average for a 6-ounce can or a 7.06-ounce pouch—for 14 different brands of water-packed light tuna, based on prices paid nationally in supermarkets:4
.99 1.12
1.92 .63
1.23 .85 .65 .53 1.41 .67 .69 .60 .60 .66
a. Find the average price for the 14 different brands of tuna. b. Find the median price for the 14 different brands of tuna. c. Based on your findings in parts a and b, do you think that the
distribution of prices is skewed? Explain. 2.9 Sports Salaries As professional sports teams become a more and more lucrative business for their owners, the salaries paid to the players have also
increased. In fact, sports superstars are paid astronomical salaries for their talents. If you were asked by a sports management firm to describe the distribution of players’ salaries in several
different categories of professional sports, what measure of center would you choose? Why? 2.10 Time on Task In a psychological experiment,
the time on task was recorded for 10 subjects under a 5-minute time constraint. These measurements are in seconds: 175 200
a. Find the average time on task. b. Find the median time on task. c. If you were writing a report to describe these data, which measure of central tendency would you use? Explain.
2.11 Starbucks The number of Starbucks coffee shops in 18 cities within 20 miles of the University of California, Riverside is shown in the following table (www.starbucks.com).5
a. Find the mean, the median, and the mode. b. Compare the median and the mean. What can you say about the shape of this distribution? c. Draw a dotplot for the data. Does this confirm your conclusion
about the shape of the distribution from part b? 2.12 HDTVs The cost of televisions exhibits
huge variation—from $100–200 for a standard TV to $8,000–10,000 for a large plasma screen TV. Consumer Reports gives the prices for the top 10 LCD high definition TVs (HDTVs) in the 30- to 40-inch
JVC LT-40FH96 Sony Bravia KDL-V32XBR1 Sony Bravia KDL-V40XBR1 Toshiba 37HLX95 Sharp Aquos LC-32DA5U Sony Bravia KLV-S32A10 Panasonic Viera TC-32LX50 JVC LT-37X776 LG 37LP1D Samsung LN-R328W
$2900 1800 2600 3000 1300 1500 1350 2000 2200 1200
a. What is the average price of these 10 HDTVs? b. What is the median price of these 10 HDTVs? c. As a consumer, would you be interested in the average cost of an HDTV? What other variables would be
important to you?
60 ❍
Data sets may have the same center but look different because of the way the numbers spread out from the center. Consider the two distributions shown in Figure 2.6. Both distributions are centered at
x 4, but there is a big difference in the way the measurements spread out, or vary. The measurements in Figure 2.6(a) vary from 3 to 5; in Figure 2.6(b) the measurements vary from 0 to 8. ●
Relative Frequency
Variability or dispersion of data
Relative Frequency
FI GU R E 2 .6
Variability or dispersion is a very important characteristic of data. For example, if you were manufacturing bolts, extreme variation in the bolt diameters would cause a high percentage of defective
products. On the other hand, if you were trying to discriminate between good and poor accountants, you would have trouble if the examination always produced test grades with little variation, making
discrimination very difficult. Measures of variability can help you create a mental picture of the spread of the data. We will present some of the more important ones. The simplest measure of
variation is the range. The range, R, of a set of n measurements is defined as the difference between the largest and smallest measurements. Definition
For the birth weight data in Table 1.9, the measurements vary from 5.6 to 9.4. Hence, the range is 9.4 5.6 3.8. The range is easy to calculate, easy to interpret, and is an adequate measure of
variation for small sets of data. But, for large data sets, the range is not an adequate measure of variability. For example, the two relative frequency distributions in Figure 2.7 have the same
range but very different shapes and variability. ●
(a) Relative Frequency
Distributions with equal range and unequal variability
Relative Frequency
FI GU R E 2 .7
2.3 MEASURES OF VARIABILITY
Is there a measure of variability that is more sensitive than the range? Consider, as an example, the sample measurements 5, 7, 1, 2, 4, displayed as a dotplot in Figure 2.8. The mean of these five
measurements is Sx 19 x苶 i 3.8 n 5 FIGU R E 2 .8
Dotplot showing the deviations of points from the mean
● x = 3.8
(xi – x)
as indicated on the dotplot. The horizontal distances between each dot (measurement) and the mean x苶 will help you to measure the variability. If the distances are large, the data are more spread
out or variable than if the distances are small. If xi is a particular dot (measurement), then the deviation of that measurement from the mean is (xi x苶). Measurements to the right of the mean
produce positive deviations, and those to the left produce negative deviations. The values of x and the deviations for our example are listed in the first and second columns of Table 2.1. TABLE 2.1
2 Computation of S(xi x 苶) x (xi x苶 ) (xi x苶 )2
1.2 3.2 2.8 1.8 .2
1.44 10.24 7.84 3.24 .04
Because the deviations in the second column of the table contain information on variability, one way to combine the five deviations into one numerical measure is to average them. Unfortunately, the
average will not work because some of the deviations are positive, some are negative, and the sum is always zero (unless round-off errors have been introduced into the calculations). Note that the
deviations in the second column of Table 2.1 sum to zero. Another possibility might be to disregard the signs of the deviations and calculate the average of their absolute values.† This method has
been used as a measure of variability in exploratory data analysis and in the analysis of time series data. We prefer, however, to overcome the difficulty caused by the signs of the deviations by
working The absolute value of a number is its magnitude, ignoring its sign. For example, the absolute value of 2, represented by the symbol 兩2兩, is 2. The absolute value of 2—that is, 兩2兩—is 2.
with their sum of squares. From the sum of squared deviations, a single measure called the variance is calculated. To distinguish between the variance of a sample and the variance of a population, we
use the symbol s2 for a sample variance and s 2 (Greek lowercase sigma) for a population variance. The variance will be relatively large for highly variable data and relatively small for less
variable data. The variance of a population of N measurements is the average of the squares of the deviations of the measurements about their mean m. The population variance is denoted by s 2 and is
given by the formula
S(xi m)2 s2 N Most often, you will not have all the population measurements available but will need to calculate the variance of a sample of n measurements. Definition The variance of a sample of n
measurements is the sum of the squared deviations of the measurements about their mean x苶 divided by (n 1). The sample variance is denoted by s 2 and is given by the formula
S(xi x苶 )2 s2 n1 For the set of n 5 sample measurements presented in Table 2.1, the square of the deviation of each measurement is recorded in the third column. Adding, we obtain The variance and
the standard deviation cannot be negative numbers.
S(xi 苶x )2 22.80 and the sample variance is S(xi x苶 )2 22.80 5.70 s2 n1 4 The variance is measured in terms of the square of the original units of measurement. If the original measurements are in
inches, the variance is expressed in square inches. Taking the square root of the variance, we obtain the standard deviation, which returns the measure of variability to the original units of
measurement. The standard deviation of a set of measurements is equal to the positive square root of the variance. Definition
NOTATION n: number of measurements in the sample 2 s : sample variance 2 sample standard s 兹s苶: deviation
N: number of measurements in the population 2 s : population variance 2 苶: s 兹s population standard deviation
2.3 MEASURES OF VARIABILITY
For the set of n 5 sample measurements in Table 2.1, the sample variance is 苶 2.39. The more s 2 5.70, so the sample standard deviation is s 兹s苶2 兹5.70 variable the data set is, the larger the
value of s. For the small set of measurements we used, the calculation of the variance is not too difficult. However, for a larger set, the calculations can become very tedious. Most scientific
calculators have built-in programs that will calculate 苶x and s or m and s, so that your computational work will be minimized. The sample or population mean key is usually marked with 苶x. The
sample standard deviation key is usually marked with s, sx, or sxn1, and the population standard deviation key with s, sx, or sxn. In using any calculator with these built-in function keys, be sure
you know which calculation is being carried out by each key! If you need to calculate s 2 and s by hand, it is much easier to use the alternative computing formula given next. This computational form
is sometimes called the shortcut method for calculating s 2.
If you are using your calculator, make sure to choose the correct key for the sample standard deviation.
THE COMPUTING FORMULA FOR CALCULATING s 2 (Sxi)2 Sx2i n s 2 n1 The symbols (Sxi)2 and Sx 2i in the computing formula are shortcut ways to indicate the arithmetic operation you need to perform. You
know from the formula for the sample mean that Sxi is the sum of all the measurements. To find Sx 2i , you square each individual measurement and then add them together. Sx 2i Sum of the squares of
the individual measurements (Sxi)2 Square of the sum of the individual measurements The sample standard deviation, s, is the positive square root of s 2. EXAMPLE
TABLE 2.2
Calculate the variance and standard deviation for the five measurements in Table 2.2, which are 5, 7, 1, 2, 4. Use the computing formula for s 2 and compare your results with those obtained using the
original definition of s 2.
Table for Simplified Calculation of s2 and s xi
x i2
The entries in Table 2.2 are the individual measurements, xi, and their squares, x 2i , together with their sums. Using the computing formula for s 2, you have
Solution Don’t round off partial results as you go along!
(Sxi)2 Sx 2i n s 2 n1 (19)2 95 5 22.80 5.70 4 4 苶 2.39, as before. and s 兹s苶2 兹5.70 You may wonder why you need to divide by (n 1) rather than n when computing the sample variance. Just as we
used the sample mean 苶x to estimate the population mean m, you may want to use the sample variance s 2 to estimate the population variance s 2. It turns out that the sample variance s 2 with (n 1)
in the denominator provides better estimates of s 2 than would an estimator calculated with n in the denominator. For this reason, we always divide by (n 1) when computing the sample variance s 2 and
the sample standard deviation s.
You can compare the accuracy of estimators of the population variance s 2 using the Why Divide by n 1? applet. The applet selects samples from a population with standard deviation s 29.2. It then
calculates the standard deviation s using (n 1) in the denominator as well as a standard deviation calculated using n in the denominator. You can choose to compare the estimators for a single new
sample, for 10 samples, or for 100 samples. Notice that each of the 10 samples shown in Figure 2.9 has a different sample standard deviation. However, when the 10 standard deviations are averaged at
the bottom of the applet, one of the two estimators is closer to the population standard deviation, s 29.2. Which one is it? We will use this applet again for the MyApplet Exercises at the end of the
chapter. FI GU R E 2 .9
Why Divide by n 1? applet
2.3 MEASURES OF VARIABILITY
At this point, you have learned how to compute the variance and standard deviation of a set of measurements. Remember these points: • • • •
The value of s is always greater than or equal to zero. The larger the value of s 2 or s, the greater the variability of the data set. If s 2 or s is equal to zero, all the measurements must have the
same value. In order to measure the variability in the same units as the original 2 observations, we compute the standard deviation s 兹s苶.
This information allows you to compare several sets of data with respect to their locations and their variability. How can you use these measures to say something more specific about a single set of
data? The theorem and rule presented in the next section will help answer this question.
2.16 You are given n 8 measurements: 3, 1, 5, 6,
2.13 You are given n 5 measurements: 2, 1, 1,
3, 5. a. Calculate the sample mean, 苶x. b. Calculate the sample variance, s 2, using the formula given by the definition. c. Find the sample standard deviation, s. d. Find s 2 and s using the
computing formula. Compare the results with those found in parts b and c.
4, 4, 3, 5. a. Calculate the range. b. Calculate the sample mean. c. Calculate the sample variance and standard deviation. d. Compare the range and the standard deviation. The range is approximately
how many standard deviations?
2.14 Refer to Exercise 2.13.
a. Use the data entry method in your scientific calculator to enter the five measurements. Recall the proper memories to find the sample mean and standard deviation. b. Verify that the calculator
provides the same values for x苶 and s as in Exercise 2.13, parts a and c.
2.17 An Archeological Find, again An article in
2.15 You are given n 8 measurements: 4, 1, 3, 1, 3,
1, 2, 2. a. Find the range. b. Calculate 苶x. c. Calculate s 2 and s using the computing formula. d. Use the data entry method in your calculator to find x苶, s, and s 2. Verify that your answers are
the same as those in parts b and c.
Archaeometry involved an analysis of 26 samples of Romano-British pottery found at four different kiln sites in the United Kingdom.7 The samples were analyzed to determine their chemical composition.
The percentage of iron oxide in each of five samples collected at the Island Thorns site was: 1.28,
a. Calculate the range. b. Calculate the sample variance and the standard deviation using the computing formula. c. Compare the range and the standard deviation. The range is approximately how many
standard deviations?
2.18 Utility Bills in Southern California
The monthly utility bills for a household in Riverside, California, were recorded for 12 consecutive months starting in January 2006:
Amount ($)
January February March April May June
$266.63 163.41 219.41 162.64 187.16 289.17
Month July August September October November December
Amount ($) $306.55 335.48 343.50 226.80 208.99 230.46
a. Calculate the range of the utility bills for the year 2006. b. Calculate the average monthly utility bill for the year 2006. c. Calculate the standard deviation for the 2006 utility bills.
We now introduce a useful theorem developed by the Russian mathematician Tchebysheff. Proof of the theorem is not difficult, but we are more interested in its application than its proof. Given a
number k greater than or equal to 1 and a set of n measurements, at least [1 (1/k 2)] of the measurements will lie within k standard deviations of their mean.
Tchebysheff’s Theorem
Tchebysheff’s Theorem applies to any set of measurements and can be used to describe either a sample or a population. We will use the notation appropriate for populations, but you should realize that
we could just as easily use the mean and the standard deviation for the sample. The idea involved in Tchebysheff’s Theorem is illustrated in Figure 2.10. An interval is constructed by measuring a
distance ks on either side of the mean m. The number k can be any number as long as it is greater than or equal to 1. Then Tchebysheff’s Theorem states that at least 1 (1/k 2) of the total number n
measurements lies in the constructed interval.
Illustrating Tchebysheff’s Theorem
Relative Frequency
FI GU R E 2 .1 0
At least 1 – (1/k2)
µ kσ
x kσ
In Table 2.3, we choose a few numerical values for k and compute [1 (1/k2)]. TABLE 2.3
Illustrative Values of [1 (1/k2)] k
1 (1/k 2)
110 1 1/4 3/4 1 1/9 8/9
From the calculations in Table 2.3, the theorem states: • • •
At least none of the measurements lie in the interval m s to m s. At least 3/4 of the measurements lie in the interval m 2s to m 2s. At least 8/9 of the measurements lie in the interval m 3s to m 3s.
Although the first statement is not at all helpful, the other two values of k provide valuable information about the proportion of measurements that fall in certain intervals. The values k 2 and k 3
are not the only values of k you can use; for example, the proportion of measurements that fall within k 2.5 standard deviations of the mean is at least 1 [1/(2.5)2] .84. EXAMPLE
The mean and variance of a sample of n 25 measurements are 75 and 100, respectively. Use Tchebysheff’s Theorem to describe the distribution of measurements.
You are given x苶 75 and s 2 100. The standard deviation is 苶 10. The distribution of measurements is centered about x苶 75, and s 兹100 Tchebysheff’s Theorem states: Solution
• •
At least 3/4 of the 25 measurements lie in the interval 苶x 2s 75 2(10) —that is, 55 to 95. At least 8/9 of the measurements lie in the interval 苶x 3s 75 3(10)—that is, 45 to 105.
Since Tchebysheff’s Theorem applies to any distribution, it is very conservative. This is why we emphasize “at least 1 (1/k 2 )” in this theorem. Another rule for describing the variability of a data
set does not work for all data sets, but it does work very well for data that “pile up” in the familiar mound shape shown in Figure 2.11. The closer your data distribution is to the mound-shaped
curve in Figure 2.11, the more accurate the rule will be. Since mound-shaped data distributions occur quite frequently in nature, the rule can often be used in practical applications. For this
reason, we call it the Empirical Rule. Mound-shaped distribution
● Relative Frequency
FIGU R E 2 .1 1
Empirical Rule
Given a distribution of measurements that is approximately
mound-shaped: The interval (m s) contains approximately 68% of the measurements. The interval (m 2s) contains approximately 95% of the measurements. The interval (m 3s) contains approximately 99.7%
of the measurements. Remember these three numbers:
The mound-shaped distribution shown in Figure 2.11 is commonly known as the normal distribution and will be discussed in detail in Chapter 6.
68—95—99.7 EXAMPLE
In a time study conducted at a manufacturing plant, the length of time to complete a specified operation is measured for each of n 40 workers. The mean and standard deviation are found to be 12.8 and
1.7, respectively. Describe the sample data using the Empirical Rule. Solution
To describe the data, calculate these intervals:
(x苶 s) 12.8 1.7 ( 苶x 2s) 12.8 2(1.7)
11.1 to 14.5
9.4 to 16.2
7.7 to 17.9 (x苶 3s) 12.8 3(1.7) or According to the Empirical Rule, you expect approximately 68% of the measurements to fall into the interval from 11.1 to 14.5, approximately 95% to fall into the
interval from 9.4 to 16.2, and approximately 99.7% to fall into the interval from 7.7 to 17.9. If you doubt that the distribution of measurements is mound-shaped, or if you wish for some other reason
to be conservative, you can apply Tchebysheff’s Theorem and be absolutely certain of your statements. Tchebysheff’s Theorem tells you that at least 3/4 of the measurements fall into the interval from
9.4 to 16.2 and at least 8/9 into the interval from 7.7 to 17.9.
TABLE 2.4
Student teachers are trained to develop lesson plans, on the assumption that the written plan will help them to perform successfully in the classroom. In a study to assess the relationship between
written lesson plans and their implementation in the classroom, 25 lesson plans were scored on a scale of 0 to 34 according to a Lesson Plan Assessment Checklist. The 25 scores are shown in Table
2.4. Use Tchebysheff’s Theorem and the Empirical Rule (if applicable) to describe the distribution of these assessment scores. ●
Lesson Plan Assessment Scores 26.1 22.1 15.9 25.6 29.0
26.0 21.2 20.8 26.5 21.3
14.5 26.6 20.2 15.7 23.5
29.3 31.9 17.8 22.1 22.1
19.7 25.0 13.3 13.8 10.2
Use your calculator or the computing formulas to verify that 苶x 21.6 and s 5.5. The appropriate intervals are calculated and listed in Table 2.5. We have also referred back to the original 25
measurements and counted the actual number of measurements that fall into each of these intervals. These frequencies and relative frequencies are shown in Table 2.5.
TABLE 2.5
Intervals x ks for the Data of Table 2.4 Interval x苵 ks
k 1 2 3
16.1–27.1 10.6–32.6 5.1–38.1
Frequency in Interval
Relative Frequency
.64 .96 1.00
Is Tchebysheff’s Theorem applicable? Yes, because it can be used for any set of data. According to Tchebysheff’s Theorem,
Empirical Rule ⇔ mound-shaped data
• •
Tchebysheff ⇔ any shaped data
at least 3/4 of the measurements will fall between 10.6 and 32.6. at least 8/9 of the measurements will fall between 5.1 and 38.1.
You can see in Table 2.5 that Tchebysheff’s Theorem is true for these data. In fact, the proportions of measurements that fall into the specified intervals exceed the lower bound given by this
theorem. Is the Empirical Rule applicable? You can check for yourself by drawing a graph— either a stem and leaf plot or a histogram. The MINITAB histogram in Figure 2.12 shows that the distribution
is relatively mound-shaped, so the Empirical Rule should work relatively well. That is, • • •
approximately 68% of the measurements will fall between 16.1 and 27.1. approximately 95% of the measurements will fall between 10.6 and 32.6. approximately 99.7% of the measurements will fall between
5.1 and 38.1.
The relative frequencies in Table 2.5 closely approximate those specified by the Empirical Rule. MINITAB histogram for Example 2.8
● 6/25
Relative Frequency
F I GU R E 2 .1 2
0 8.5
20.5 Scores
USING TCHEBYSHEFF’S THEOREM AND THE EMPIRICAL RULE Tchebysheff’s Theorem can be proven mathematically. It applies to any set of measurements—sample or population, large or small, mound-shaped or
skewed. Tchebysheff’s Theorem gives a lower bound to the fraction of measurements to be found in an interval constructed as 苶x ks. At least 1 (1/k 2) of the measurements will fall into this
interval, and probably more!
The Empirical Rule is a “rule of thumb” that can be used as a descriptive tool only when the data tend to be roughly mound-shaped (the data tend to pile up near the center of the distribution). When
you use these two tools for describing a set of measurements, Tchebysheff’s Theorem will always be satisfied, but it is a very conservative estimate of the fraction of measurements that fall into a
particular interval. If it is appropriate to use the Empirical Rule (mound-shaped data), this rule will give you a more accurate estimate of the fraction of measurements that fall into the interval.
A CHECK ON THE CALCULATION OF s
Tchebysheff’s Theorem and the Empirical Rule can be used to detect gross errors in the calculation of s. Roughly speaking, these two tools tell you that most of the time, measurements lie within two
standard deviations of their mean. This interval is marked off in Figure 2.13, and it implies that the total range of the measurements, from smallest to largest, should be somewhere around four
standard deviations. This is, of course, a very rough approximation, but it can be very useful in checking for large errors in your calculation of s. If the range, R, is about four standard
deviations, or 4s, you can write R ⬇ 4s
R s ⬇ 4
The computed value of s using the shortcut formula should be of roughly the same order as the approximation.
FI GU R E 2 .1 3
Range approximation to s
x – 2s
2s x + 2s
Use the range approximation to check the calculation of s for Table 2.2. Solution
The range of the five measurements—5, 7, 1, 2, 4—is
R716 Then R 6 s ⬇ 1.5 4 4 This is the same order as the calculated value s 2.4.
s ⬇ R/4 gives only an approximate value for s.
The range approximation is not intended to provide an accurate value for s. Rather, its purpose is to detect gross errors in calculating, such as the failure to divide the sum of squares of
deviations by (n 1) or the failure to take the square root of s 2. If you make one of these mistakes, your answer will be many times larger than the range approximation of s.
2.5 A CHECK ON THE CALCULATION OF s
Use the range approximation to determine an approximate value for the standard deviation for the data in Table 2.4. Solution
The range R 31.9 10.2 21.7. Then
R 21.7 s ⬇ 5.4 4 4 Since the exact value of s is 5.5 for the data in Table 2.4, the approximation is very close. The range for a sample of n measurements will depend on the sample size, n. For larger
values of n, a larger range of the x values is expected. The range for large samples (say, n 50 or more observations) may be as large as 6s, whereas the range for small samples (say, n 5 or less) may
be as small as or smaller than 2.5s. The range approximation for s can be improved if it is known that the sample is drawn from a mound-shaped distribution of data. Thus, the calculated s should not
differ substantially from the range divided by the appropriate ratio given in Table 2.6. TABLE 2.6
Divisor for the Range Approximation of s Number of Measurements 5 10 25
Expected Ratio of Range to s 2.5 3 4
BASIC TECHNIQUES 2.19 A set of n 10 measurements consists of the
values 5, 2, 3, 6, 1, 2, 4, 5, 1, 3. a. Use the range approximation to estimate the value of s for this set. (HINT: Use the table at the end of Section 2.5.) b. Use your calculator to find the actual
value of s. Is the actual value close to your estimate in part a? c. Draw a dotplot of this data set. Are the data moundshaped? d. Can you use Tchebysheff’s Theorem to describe this data set? Why or
why not? e. Can you use the Empirical Rule to describe this data set? Why or why not? 2.20 Suppose you want to create a mental picture
of the relative frequency histogram for a large data set consisting of 1000 observations, and you know
that the mean and standard deviation of the data set are 36 and 3, respectively. a. If you are fairly certain that the relative frequency distribution of the data is mound-shaped, how might you
picture the relative frequency distribution? (HINT: Use the Empirical Rule.) b. If you have no prior information concerning the shape of the relative frequency distribution, what can you say about
the relative frequency histogram? (HINT: Construct intervals 苶x ks for several choices of k.) 2.21 A distribution of measurements is relatively
mound-shaped with mean 50 and standard deviation 10. a. What proportion of the measurements will fall between 40 and 60? b. What proportion of the measurements will fall between 30 and 70?
c. What proportion of the measurements will fall between 30 and 60? d. If a measurement is chosen at random from this distribution, what is the probability that it will be greater than 60? 2.22 A set
of data has a mean of 75 and a standard
deviation of 5. You know nothing else about the size of the data set or the shape of the data distribution. a. What can you say about the proportion of measurements that fall between 60 and 90? b.
What can you say about the proportion of measurements that fall between 65 and 85? c. What can you say about the proportion of measurements that are less than 65?
2.25 Breathing Rates Is your breathing rate nor-
mal? Actually, there is no standard breathing rate for humans. It can vary from as low as 4 breaths per minute to as high as 70 or 75 for a person engaged in strenuous exercise. Suppose that the
resting breathing rates for college-age students have a relative frequency distribution that is mound-shaped, with a mean equal to 12 and a standard deviation of 2.3 breaths per minute. What fraction
of all students would have breathing rates in the following intervals? a. 9.7 to 14.3 breaths per minute b. 7.4 to 16.6 breaths per minute c. More than 18.9 or less than 5.1 breaths per minute 2.26
Ore Samples A geologist collected 20 different ore samples, all the same weight, and randomly divided them into two groups. She measured the titanium (Ti) content of the samples using two different
APPLICATIONS 2.23 Driving Emergencies The length of time re-
quired for an automobile driver to respond to a particular emergency situation was recorded for n 10 drivers. The times (in seconds) were .5, .8, 1.1, .7, .6, .9, .7, .8, .7, .8. a. Scan the data and
use the procedure in Section 2.5 to find an approximate value for s. Use this value to check your calculations in part b. b. Calculate the sample mean x苶 and the standard deviation s. Compare with
part a.
Method 1
Method 2
.011 .013 .013 .015 .014 .013 .010 .013 .011 .012
.011 .016 .013 .012 .015 .012 .017 .013 .014 .015
a. Construct stem and leaf plots for the two data sets. Visually compare their centers and their ranges. b. Calculate the sample means and standard deviations for the two sets. Do the calculated
values confirm your visual conclusions from part a?
2.24 Packaging Hamburger Meat The data listed here are the weights (in pounds) of 27 packages of ground beef in a supermarket meat display:
2.27 Social Security Numbers The data from Exercise 1.70 (see data set EX0170), reproduced below, show the last digit of the Social Security number for a group of 70 students.
1.08 1.06 .89 .89
.99 1.14 .89 .98
.97 1.38 .96 1.14
1.18 .75 1.12 .92
1.41 .96 1.12 1.18
1.28 1.08 .93 1.17
.83 .87 1.24
a. Construct a stem and leaf plot or a relative frequency histogram to display the distribution of weights. Is the distribution relatively moundshaped? b. Find the mean and standard deviation of the
data set. c. Find the percentage of measurements in the intervals 苶x s, 苶x 2s, and 苶x 3s. d. How do the percentages obtained in part c compare with those given by the Empirical Rule? Explain. e.
How many of the packages weigh exactly 1 pound? Can you think of any explanation for this?
a. You found in Exercise 1.70 that the distribution of this data was relatively “flat,” with each different value from 0 to 9 occurring with nearly equal frequency. Using this fact, what would be your
best estimate for the mean of the data set? b. Use the range approximation to guess the value of s for this set. c. Use your calculator to find the actual values of 苶x and s. Compare with your
estimates in parts a and b.
2.5 A CHECK ON THE CALCULATION OF s
2.28 Social Security Numbers, continued Refer
to the data set in Exercise 2.27. a. Find the percentage of measurements in the intervals x苶 s, 苶x 2s, and 苶x 3s. b. How do the percentages obtained in part a compare with those given by the
Empirical Rule? Should they be approximately the same? Explain. 2.29 Survival Times A group of experimental animals is infected with a particular form of bacteria, and their survival time is found to
average 32 days, with a standard deviation of 36 days. a. Visualize the distribution of survival times. Do you think that the distribution is relatively moundshaped, skewed right, or skewed left?
Explain. b. Within what limits would you expect at least 3/4 of the measurements to lie? 2.30 Survival Times, continued Refer to Exercise 2.29. You can use the Empirical Rule to see why the
distribution of survival times could not be moundshaped. a. Find the value of x that is exactly one standard deviation below the mean. b. If the distribution is in fact mound-shaped, approximately
what percentage of the measurements should be less than the value of x found in part a? c. Since the variable being measured is time, is it possible to find any measurements that are more than one
standard deviation below the mean? d. Use your answers to parts b and c to explain why the data distribution cannot be mound-shaped. 2.31 Timber Tracts To estimate the amount of lumber in a tract of
timber, an owner EX0231 decided to count the number of trees with diameters exceeding 12 inches in randomly selected 50-by-50foot squares. Seventy 50-by-50-foot squares were chosen, and the selected
trees were counted in each tract. The data are listed here: 7 9 3 10 9 6 10
a. Construct a relative frequency histogram to describe the data. b. Calculate the sample mean x苶 as an estimate of m, the mean number of timber trees for all 50-by-50-foot squares in the tract.
c. Calculate s for the data. Construct the intervals 苶x s, x苶 2s, and 苶x 3s. Calculate the percentage of squares falling into each of the three intervals, and compare with the corresponding
percentages given by the Empirical Rule and Tchebysheff’s Theorem. 2.32 Tuna Fish, again Refer to Exercise 2.8 and
data set EX0208. The prices of a 6-ounce can or a 7.06 pouch for 14 different brands of water-packed light tuna, based on prices paid nationally in supermarkets are reproduced here.4 .99 1.12
1.92 .63
1.23 .85 .65 .53 1.41 .67 .69 .60 .60 .66
a. Use the range approximation to find an estimate of s. b. How does it compare to the computed value of s? 2.33 Old Faithful The data below are 30 waiting times between eruptions of the Old Faithful
geyser in Yellowstone National Park.8
a. Calculate the range. b. Use the range approximation to approximate the standard deviation of these 30 measurements. c. Calculate the sample standard deviation s. d. What proportion of the
measurements lie within two standard deviations of the mean? Within three standard deviations of the mean? Do these proportions agree with the proportions given in Tchebysheff’s Theorem? 2.34 The
President’s Kids The table below shows the names of the 42 presidents of the United States along with the number of their children.2
Washington Adams Jefferson Madison Monroe J.Q. Adams Jackson
Van Buren 4 W.H. Harrison 10 Tyler* 15 Polk 0 Taylor 6 Fillmore* 2 Pierce 3
Buchanan Lincoln A. Johnson Grant Hayes Garfield Arthur
Cleveland B. Harrison* McKinley T. Roosevelt* Taft Wilson* Harding
Coolidge Hoover F.D. Roosevelt Truman Eisenhower Kennedy L.B. Johnson
Nixon Ford Carter Reagan* G.H.W. Bush Clinton G.W. Bush
*Married twice
Source: Time Almanac 2007
a. Construct a relative frequency histogram to describe the data. How would you describe the shape of this distribution? b. Calculate the mean and the standard deviation for the data set. c.
Construct the intervals 苶x s, 苶x 2s, and 苶x 3s. Find the percentage of measurements falling into these three intervals and compare with the corresponding percentages given by Tchebysheff’s Theorem
and the Empirical Rule. 2.35 An Archeological Find, again Refer to Exer-
cise 2.17. The percentage of iron oxide in each of five pottery samples collected at the Island Thorns site was: 1.28
a. Use the range approximation to find an estimate of s, using an appropriate divisor from Table 2.6. b. Calculate the standard deviation s. How close did your estimate come to the actual value of s?
2.36 Brett Favre The number of passes
completed by Brett Favre, quarterback for the Green Bay Packers, was recorded for each of the 16 regular season games in the fall of 2006 (www.espn.com).9
a. Draw a stem and leaf plot to describe the data. b. Calculate the mean and standard deviation for Brett Favre’s per game pass completions. c. What proportion of the measurements lie within two
standard deviations of the mean? CALCULATING THE MEAN AND STANDARD DEVIATION FOR GROUPED DATA (OPTIONAL) 2.37 Suppose that some measurements occur more
than once and that the data x1, x2, . . . , xk are arranged in a frequency table as shown here: Observations
Frequency fi
x1 x2 . . . xk
f1 f2 . . . fk
The formulas for the mean and variance for grouped data are
Sxi fi x苶 , n
where n Sfi
(Sx fi)2 Sx2i fi i n s 2 n1 Notice that if each value occurs once, these formulas reduce to those given in the text. Although these formulas for grouped data are primarily of value when you have a
large number of measurements, demonstrate their use for the sample 1, 0, 0, 1, 3, 1, 3, 2, 3, 0, 0, 1, 1, 3, 2. a. Calculate 苶x and s 2 directly, using the formulas for ungrouped data. b. The
frequency table for the n 15 measurements is as follows: x
Calculate 苶x and s 2 using the formulas for grouped data. Compare with your answers to part a. 2.38 International Baccalaureate The International Baccalaureate (IB) program is an accelerated
academic program offered at a growing number of high schools throughout the country. Students enrolled in this program are placed in accelerated or advanced courses and must take IB examinations in
each of six subject areas at the end of their junior or senior year. Students are scored on a scale of 1–7, with 1–2 being poor, 3 mediocre, 4 average, and 5–7 excellent. During its first year of
operation at John W. North High School in Riverside, California, 17 juniors attempted the IB economics exam, with these results: Exam Grade
Number of Students
2.6 MEASURES OF RELATIVE STANDING
2.39 A Skewed Distribution To illustrate the util-
ity of the Empirical Rule, consider a distribution that is heavily skewed to the right, as shown in the accompanying figure. a. Calculate 苶x and s for the data shown. (NOTE: There are 10 zeros, 5
ones, and so on.) b. Construct the intervals 苶x s, 苶x 2s, and 苶x 3s and locate them on the frequency distribution. c. Calculate the proportion of the n 25 measurements that fall into each of the
three intervals. Compare with Tchebysheff’s Theorem and the Empirical Rule. Note that, although the proportion that falls into the interval 苶x s does not agree closely with the Empirical Rule, the
proportions that fall into the intervals 苶x 2s and 苶x
3s agree very well. Many times this is true, even for non-mound-shaped distributions of data. Distribution for Exercise 2.39 10
7 Frequency
Calculate the mean and standard deviation for these scores.
n 25
MEASURES OF RELATIVE STANDING Sometimes you need to know the position of one observation relative to others in a set of data. For example, if you took an examination with a total of 35 points, you
might want to know how your score of 30 compared to the scores of the other students in the class. The mean and standard deviation of the scores can be used to calculate a z-score, which measures the
relative standing of a measurement in a data set. Definition
Positive z-score ⇔ x is above the mean. Negative z-score ⇔ x is below the mean.
The sample z-score is a measure of relative standing defined by x 苶x z-score s
A z-score measures the distance between an observation and the mean, measured in units of standard deviation. For example, suppose that the mean and standard deviation of the test scores (based on a
total of 35 points) are 25 and 4, respectively. The z-score for your score of 30 is calculated as follows: x 苶x 30 25 1.25 z-score s 4 Your score of 30 lies 1.25 standard deviations above the mean
(30 苶x 1.25s).
The z-score is a valuable tool for determining whether a particular observation is likely to occur quite frequently or whether it is unlikely and might be considered an outlier. According to
Tchebysheff’s Theorem and the Empirical Rule, •
z-scores above 3 in absolute value are very unusual. EXAMPLE
at least 75% and more likely 95% of the observations lie within two standard deviations of their mean: their z-scores are between 2 and 2. Observations with z-scores exceeding 2 in absolute value
happen less than 5% of the time and are considered somewhat unlikely. at least 89% and more likely 99.7% of the observations lie within three standard deviations of their mean: their z-scores are
between 3 and 3. Observations with z-scores exceeding 3 in absolute value happen less than 1% of the time and are considered very unlikely.
You should look carefully at any observation that has a z-score exceeding 3 in absolute value. Perhaps the measurement was recorded incorrectly or does not belong to the population being sampled.
Perhaps it is just a highly unlikely observation, but a valid one nonetheless! Consider this sample of n 10 measurements: 1, 1, 0, 15, 2, 3, 4, 0, 1, 3 The measurement x 15 appears to be unusually
large. Calculate the z-score for this observation and state your conclusions. Solution Calculate x苶 3.0 and s 4.42 for the n 10 measurements. Then the z-score for the suspected outlier, x 15, is
calculated as
x 苶x 15 3 2.71 z-score s 4.42 Hence, the measurement x 15 lies 2.71 standard deviations above the sample mean, 苶x 3.0. Although the z-score does not exceed 3, it is close enough so that you might
suspect that x 15 is an outlier. You should examine the sampling procedure to see whether x 15 is a faulty observation. A percentile is another measure of relative standing and is most often used for
large data sets. (Percentiles are not very useful for small data sets.) Definition A set of n measurements on the variable x has been arranged in order of magnitude. The pth percentile is the value
of x that is greater than p% of the measurements and is less than the remaining (100 p)%. EXAMPLE
Suppose you have been notified that your score of 610 on the Verbal Graduate Record Examination placed you at the 60th percentile in the distribution of scores. Where does your score of 610 stand in
relation to the scores of others who took the examination? Scoring at the 60th percentile means that 60% of all the examination scores were lower than your score and 40% were higher. Solution
2.6 MEASURES OF RELATIVE STANDING
In general, the 60th percentile for the variable x is a point on the horizontal axis of the data distribution that is greater than 60% of the measurements and less than the others. That is, 60% of
the measurements are less than the 60th percentile and 40% are greater (see Figure 2.14). Since the total area under the distribution is 100%, 60% of the area is to the left and 40% of the area is to
the right of the 60th percentile. Remember that the median, m, of a set of data is the middle measurement; that is, 50% of the measurements are smaller and 50% are larger than the median. Thus, the
median is the same as the 50th percentile!
The 60th percentile shown on the relative frequency histogram for a data set
Relative Frequency
FIGU R E 2 .1 4
x 60th percentile
The 25th and 75th percentiles, called the lower and upper quartiles, along with the median (the 50th percentile), locate points that divide the data into four sets, each containing an equal number of
measurements. Twenty-five percent of the measurements will be less than the lower (first) quartile, 50% will be less than the median (the second quartile), and 75% will be less than the upper (third)
quartile. Thus, the median and the lower and upper quartiles are located at points on the x-axis so that the area under the relative frequency histogram for the data is partitioned into four equal
areas, as shown in Figure 2.15.
Location of quartiles
Relative Frequency
FIGU R E 2 .1 5
25% x
Median, m Lower quartile, Q1
Upper quartile, Q3
Definition A set of n measurements on the variable x has been arranged in order of magnitude. The lower quartile (first quartile), Q1, is the value of x that is greater than one-fourth of the
measurements and is less than the remaining three-fourths. The second quartile is the median. The upper quartile (third quartile), Q3, is the value of x that is greater than three-fourths of the
measurements and is less than the remaining one-fourth.
For small data sets, it is often impossible to divide the set into four groups, each of which contains exactly 25% of the measurements. For example, when n 10, you would need to have 212 measurements
in each group! Even when you can perform this task (for example, if n 12), there are many numbers that would satisfy the preceding definition, and could therefore be considered “quartiles.” To avoid
this ambiguity, we use the following rule to locate sample quartiles. CALCULATING SAMPLE QUARTILES •
When the measurements are arranged in order of magnitude, the lower quartile, Q1, is the value of x in position .25(n 1), and the upper quartile, Q3, is the value of x in position .75(n 1). When .25
(n 1) and .75(n 1) are not integers, the quartiles are found by interpolation, using the values in the two adjacent positions.†
Find the lower and upper quartiles for this set of measurements: 16, 25, 4, 18, 11, 13, 20, 8, 11, 9 Solution
Rank the n 10 measurements from smallest to largest:
4, 8, 9, 11, 11, 13, 16, 18, 20, 25 Calculate Position of Q1 .25(n 1) .25(10 1) 2.75 Position of Q3 .75(n 1) .75(10 1) 8.25 Since these positions are not integers, the lower quartile is taken to be
the value 3/4 of the distance between the second and third ordered measurements, and the upper quartile is taken to be the value 1/4 of the distance between the eighth and ninth ordered measurements.
Therefore, Q1 8 .75(9 8) 8 .75 8.75 and Q3 18 .25(20 18) 18 .5 18.5 Because the median and the quartiles divide the data distribution into four parts, each containing approximately 25% of the
measurements, Q1 and Q3 are the upper and lower boundaries for the middle 50% of the distribution. We can measure the range of this “middle 50%” of the distribution using a numerical measure called
the interquartile range. †
This definition of quartiles is consistent with the one used in the MINITAB package. Some textbooks use ordinary rounding when finding quartile positions, whereas others compute sample quartiles as the
medians of the upper and lower halves of the data set.
2.6 MEASURES OF RELATIVE STANDING
The interquartile range (IQR) for a set of measurements is the difference between the upper and lower quartiles; that is, IQR Q3 Q1. Definition
For the data in Example 2.13, IQR Q3 Q1 18.50 8.75 9.75. We will use the IQR along with the quartiles and the median in the next section to construct another graph for describing data sets.
How Do I Calculate Sample Quartiles? 1. Arrange the data set in order of magnitude from smallest to largest. 2. Calculate the quartile positions: •
Position of Q1: .25(n 1)
Position of Q3: .75(n 1)
3. If the positions are integers, then Q1 and Q3 are the values in the ordered data set found in those positions. 4. If the positions in step 2 are not integers, find the two measurements in positions
just above and just below the calculated position. Calculate the quartile by finding a value either one-fourth, one-half, or three-fourths of the way between these two measurements. Exercise Reps A.
Below you will find two practice data sets. Fill in the blanks to find the necessary quartiles. The first data set is done for you. Data Set
Position of Q1
Position of Q3
Lower Quartile, Q1
Upper Quartile, Q3
2, 5, 7, 1, 1, 2, 8
1, 1, 2, 2, 5, 7, 8
5, 0, 1, 3, 1, 5, 5, 2, 4, 4, 1
B. Below you will find three data sets that have already been sorted. The positions of the upper and lower quartiles are shown in the table. Find the measurements just above and just below the
quartile position. Then find the upper and lower quartiles. The first data set is done for you. Sorted Data Set
Position of Q1
Measurements Above and Below
0, 1, 4, 4, 5, 9
0 and 1
0, 1, 3, 3, 4, 7, 7, 8
1, 1, 2, 5, 6, 6, 7, 9, 9
Q1 0 .75(1) .75
Position of Q3
Measurements Above and Below
5 and 9
Q3 5 .25(4) 6
Progress Report •
Still having trouble? Try again using the Exercise Reps at the end of this section.
Mastered sample quartiles? You can skip the Exercise Reps at the end of this section!
Answers are located on the perforated card at the back of this book.
Many of the numerical measures that you have learned are easily found using computer programs or even graphics calculators. The MINITAB command Stat 씮 Basic Statistics 씮 Display Descriptive
Statistics (see the section “My MINITAB ” at the end of this chapter) produces output containing the mean, the standard deviation, the median, and the lower and upper quartiles, as well as the values
of some other statistics that we have not discussed yet. The data from Example 2.13 produced the MINITAB output shown in Figure 2.16. Notice that the quartiles are identical to the handcalculated
values in that example.
FI GU R E 2 .1 6
MINITAB output for the data in Example 2.13
● Descriptive Statistics: x Variable X
N N* Mean SE Mean 10 0 13.50 1.98
StDev Minimum 6.28 4.00
Q1 Median Q3 Maximum 8.75 12.00 18.50 25.00
THE FIVE-NUMBER SUMMARY AND THE BOX PLOT The median and the upper and lower quartiles shown in Figure 2.15 divide the data into four sets, each containing an equal number of measurements. If we add
the largest number (Max) and the smallest number (Min) in the data set to this group, we will have a set of numbers that provide a quick and rough summary of the data distribution. The five-number
summary consists of the smallest number, the lower quartile, the median, the upper quartile, and the largest number, presented in order from smallest to largest: Min
Median Q3
By definition, one-fourth of the measurements in the data set lie between each of the four adjacent pairs of numbers. The five-number summary can be used to create a simple graph called a box plot to
visually describe the data distribution. From the box plot, you can quickly detect any skewness in the shape of the distribution and see whether there are any outliers in the data set. An outlier may
result from transposing digits when recording a measurement, from incorrectly reading an instrument dial, from a malfunctioning piece of equipment, or from other problems. Even when there are no
recording or observational errors, a data set may contain one or more valid measurements that, for one reason or another, differ markedly from the others in the set. These outliers can cause a marked
distortion in commonly used numerical measures such as 苶x and s. In fact, outliers may themselves contain important information not shared with the other measurements in the set. Therefore,
isolating outliers, if they are present, is an important step in any preliminary analysis of a data set. The box plot is designed expressly for this purpose.
2.7 THE FIVE-NUMBER SUMMARY AND THE BOX PLOT
TO CONSTRUCT A BOX PLOT • •
Calculate the median, the upper and lower quartiles, and the IQR for the data set. Draw a horizontal line representing the scale of measurement. Form a box just above the horizontal line with the
right and left ends at Q1 and Q3. Draw a vertical line through the box at the location of the median.
A box plot is shown in Figure 2.17. FIGU R E 2 .1 7
Box plot
Lower fence
Upper fence
In Section 2.6, the z-score provided boundaries for finding unusually large or small measurements. You looked for z-scores greater than 2 or 3 in absolute value. The box plot uses the IQR to create
imaginary “fences” to separate outliers from the rest of the data set: DETECTING OUTLIERS—OBSERVATIONS THAT ARE BEYOND: • •
Lower fence: Q1 1.5(IQR) Upper fence: Q3 1.5(IQR)
The upper and lower fences are shown with broken lines in Figure 2.17, but they are not usually drawn on the box plot. Any measurement beyond the upper or lower fence is an outlier; the rest of the
measurements, inside the fences, are not unusual. Finally, the box plot marks the range of the data set using “whiskers” to connect the smallest and largest measurements (excluding outliers) to the
box. TO FINISH THE BOX PLOT • •
Mark any outliers with an asterisk (*) on the graph. Extend horizontal lines called “whiskers” from the ends of the box to the smallest and largest observations that are not outliers.
As American consumers become more careful about the foods they eat, food processors try to stay competitive by avoiding excessive amounts of fat, cholesterol, and sodium in the foods they sell. The
following data are the amounts of sodium per slice (in milligrams) for each of eight brands of regular American cheese. Construct a box plot for the data and look for outliers. 340, 300, 520,
The n 8 measurements are first ranked from smallest to largest:
260, 290,
340, 340,
The positions of the median, Q1, and Q3 are .5(n 1) .5(9) 4.5 .25(n 1) .25(9) 2.25 .75(n 1) .75(9) 6.75 so that m (320 330)/2 325, Q1 290 .25(10) 292.5, and Q3 340. The interquartile range is
calculated as IQR Q3 Q1 340 292.5 47.5 Calculate the upper and lower fences: Lower fence: 292.5 1.5(47.5) 221.25 Upper fence: 340 1.5(47.5) 411.25 The value x 520, a brand of cheese containing 520
milligrams of sodium, is the only outlier, lying beyond the upper fence. The box plot for the data is shown in Figure 2.18. The outlier is marked with an asterisk (*). Once the outlier is excluded,
we find (from the ranked data set) that the smallest and largest measurements are x 260 and x 340. These are the two values that form the whiskers. Since the value x 340 is the same as Q3, there is no
whisker on the right side of the box.
FI GU R E 2 .1 8
Box plot for Example 2.14
400 Sodium
Now would be a good time to try the Building a Box Plot applet. The applet in Figure 2.19 shows a dotplot of the data in Example 2.14. Using the button, you will see a step-by-step description
explaining how the box plot is constructed. We will use this applet again for the MyApplet Exercises at the end of the chapter.
2.7 THE FIVE-NUMBER SUMMARY AND THE BOX PLOT
FIGU R E 2 .1 9
Building a Box Plot applet
You can use the box plot to describe the shape of a data distribution by looking at the position of the median line compared to Q1 and Q3, the left and right ends of the box. If the median is close
to the middle of the box, the distribution is fairly symmetric, providing equal-sized intervals to contain the two middle quarters of the data. If the median line is to the left of center, the
distribution is skewed to the right; if the median is to the right of center, the distribution is skewed to the left. Also, for most skewed distributions, the whisker on the skewed side of the box
tends to be longer than the whisker on the other side. We used the MINITAB command Graph 씮 Boxplot to draw two box plots, one for the sodium contents of the eight brands of cheese in Example 2.14,
and another for five brands of fat-free cheese with these sodium contents: 300,
The two box plots are shown together in Figure 2.20. Look at the long whisker on the left side of both box plots and the position of the median lines. Both distributions are skewed to the left; that
is, there are a few unusually small measurements. The regular cheese data, however, also show one brand (x 520) with an unusually large amount of sodium. In general, it appears that the sodium
content of the fat-free brands is lower than that of the regular brands, but the variability of the sodium content for regular cheese (excluding the outlier) is less than that of the fat-free brands.
FIGU R E 2 .2 0
MINITAB output for regular and fat-free cheese
350 Sodium
84 ❍
EXERCISES EXERCISE REPS These exercises refer back to the MyPersonal Trainer section on page 79. 2.40 Below you will find two practice data sets. Fill in the blanks to find the necessary
quartiles. Data Set
Position of Q1
Position of Q3
Lower Quartile, Q1
Upper Quartile, Q3
.13, .76, .34, .88, .21, .16, .28 2.3, 1.0, 2.1, 6.5, 2.8, 8.8, 1.7, 2.9, 4.4, 5.1, 2.0
2.41 Below you will find three data sets that have already been sorted. Fill in the blanks
to find the upper and lower quartiles. Sorted Data Set
Position of Q1
Measurements Above and Below
Position of Q3
Measurements Above and Below
1, 1.5, 2, 2, 2.2
0, 1.7, 1.8, 3.1, 3.2,
7, 8, 8.8, 8.9, 9, 10 .23, .30, .35, .41, .56, .58, .76, .80
2.42 Given the following data set: 8, 7, 1, 4, 6, 6, 4,
2.46 If you scored at the 69th percentile on a place-
5, 7, 6, 3, 0 a. Find the five-number summary and the IQR. b. Calculate 苶x and s. c. Calculate the z-score for the smallest and largest observations. Is either of these observations unusually large
or unusually small?
ment test, how does your score compare with others?
2.43 Find the five-number summary and the IQR for
these data: 19, 12, 16, 0, 14, 9, 6, 1, 12, 13, 10, 19, 7, 5, 8 2.44 Construct a box plot for these data and identify
any outliers: 25, 22, 26, 23, 27, 26, 28, 18, 25, 24, 12 2.45 Construct a box plot for these data and identify
any outliers: 3, 9, 10, 2, 6, 7, 5, 8, 6, 6, 4, 9, 22
2.47 Mercury Concentration in Dolphins
Environmental scientists are increasingly concerned with the accumulation of toxic elements in marine mammals and the transfer of such elements to the animals’ offspring. The striped dolphin
(Stenella coeruleoalba), considered to be the top predator in the marine food chain, was the subject of one such study. The mercury concentrations (micrograms/gram) in the livers of 28 male striped
dolphins were as follows:
1.70 1.72 8.80 5.90 101.00 85.40 118.00
183.00 168.00 218.00 180.00 264.00 481.00 485.00
221.00 406.00 252.00 329.00 316.00 445.00 278.00
286.00 315.00 241.00 397.00 209.00 314.00 318.00
2.7 THE FIVE-NUMBER SUMMARY AND THE BOX PLOT
a. b. c. d.
Calculate the five-number summary for the data. Construct a box plot for the data. Are there any outliers? If you knew that the first four dolphins were all less than 3 years old, while all the others
were more than 8 years old, would this information help explain the difference in the magnitude of those four observations? Explain.
2.48 Hamburger Meat The weights (in pounds) of
the 27 packages of ground beef from Exercise 2.24 (see data set EX0224) are listed here in order from smallest to largest: .75 .93 1.08 1.18
.83 .96 1.08 1.18
.87 .96 1.12 1.24
.89 .97 1.12 1.28
.89 .98 1.14 1.38
.89 .99 1.14 1.41
.92 1.06 1.17
a. Confirm the values of the mean and standard deviation, calculated in Exercise 2.24 as 苶x 1.05 and s .17. b. The two largest packages of meat weigh 1.38 and 1.41 pounds. Are these two packages
unusually heavy? Explain. c. Construct a box plot for the package weights. What does the position of the median line and the length of the whiskers tell you about the shape of the distribution? 2.49
Comparing NFL Quarterbacks How
2.50 Presidential Vetoes The set of presidential
vetoes in Exercise 1.47 and data set EX0147 is listed here, along with a box plot generated by MINITAB. Use the box plot to describe the shape of the distribution and identify any outliers.
Washington 2 J. Adams 0 Jefferson 0 Madison 5 Monroe 1 J. Q. Adams 0 Jackson 5 Van Buren 0 W. H. Harrison 0 Tyler 6 Polk 2 Taylor 0 Fillmore 0 Pierce 9 Buchanan 4 Lincoln 2 A. Johnson 21 Grant 45
Hayes 12 Garfield 0 Arthur 4 Cleveland 304
B. Harrison Cleveland McKinley T. Roosevelt Taft Wilson Harding Coolidge Hoover F. D. Roosevelt Truman Eisenhower Kennedy L. Johnson Nixon Ford Carter Reagan G. H. W. Bush Clinton G. W. Bush
Source: The World Almanac and Book of Facts 2007
Box plot for Exercise 2.50
does Brett Favre, quarterback for the Green Bay Packers, compare to Peyton Manning, quarterback for the Indianapolis Colts? The table below shows the number of completed passes for each athlete
during the 2006 NFL football season:9
Brett Favre
Peyton Manning
a. Calculate five-number summaries for the number of passes completed by both Brett Favre and Peyton Manning. b. Construct box plots for the two sets of data. Are there any outliers? What do the box
plots tell you about the shapes of the two distributions? c. Write a short paragraph comparing the number of pass completions for the two quarterbacks.
200 Vetoes
2.51 Survival Times Altman and Bland report the survival times for patients with active hepatitis, half treated with prednisone and half receiving no treatment.10 The survival times (in months)
(Exercise 1.73 and EX0173) are adapted from their data for those treated with prednisone.
a. Construct a box plot for the monthly utility costs. b. What does the box plot tell you about the distribution of utility costs for this household? 2.53 What’s Normal? again Refer to Exercise
a. Can you tell by looking at the data whether it is roughly symmetric? Or is it skewed? b. Calculate the mean and the median. Use these measures to decide whether or not the data are symmetric or
skewed. c. Draw a box plot to describe the data. Explain why the box plot confirms your conclusions in part b. 2.52 Utility Bills in Southern California,
again The monthly utility bills for a household in Riverside, California, were recorded for 12 consecutive months starting in January 2006:
Amount ($)
January February March April May June
$266.63 163.41 219.41 162.64 187.16 289.17
Month July August September October November December
1.67 and data set EX0167. In addition to the normal body temperature in degrees Fahrenheit for the 130 individuals, the data record the gender of the individuals. Box plots for the two groups, male
and female, are shown below:11 Box plots for Exercise 2.53
86 ❍
Amount ($) $306.55 335.48 343.50 226.80 208.99 230.46
How would you describe the similarities and differences between male and female temperatures in this data set?
CHAPTER REVIEW Key Concepts and Formulas I.
Measures of the Center of a Data Distribution
1. Arithmetic mean (mean) or average a. Population: m
Sx b. Sample of n measurements: x苶 i n
2. Median; position of the median .5(n 1) 3. Mode 4. The median may be preferred to the mean if the data are highly skewed. II. Measures of Variability
1. Range: R largest smallest 2. Variance a. Population of N measurements:
S(xi m)2 s2 N b. Sample of n measurements: (Sxi)2 2 Sx i n S(xi x苶 )2 s2 n1 n1
3. Standard deviation 苶2 a. Population: s 兹s b. Sample: s 兹苶 s2 4. A rough approximation for s can be calculated as s ⬇ R/4. The divisor can be adjusted depending on the sample size. III.
Tchebysheff’s Theorem and the Empirical Rule
1. Use Tchebysheff’s Theorem for any data set, regardless of its shape or size. a. At least 1 (1/k2) of the measurements lie within k standard deviations of the mean. b. This is only a lower bound;
there may be more measurements in the interval. 2. The Empirical Rule can be used only for relatively mound-shaped data sets. Approximately
68%, 95%, and 99.7% of the measurements are within one, two, and three standard deviations of the mean, respectively. IV. Measures of Relative Standing
x 苶x 1. Sample z-score: z s 2. pth percentile; p% of the measurements are smaller, and (100 p)% are larger. 3. Lower quartile, Q1; position of Q1 .25 (n 1)
4. Upper quartile, Q3; position of Q3 .75 (n 1) 5. Interquartile range: IQR Q3 Q1 V. The Five-Number Summary and Box Plots
1. The five-number summary: Min
Median Q3
One-fourth of the measurements in the data set lie between each of the four adjacent pairs of numbers. 2. Box plots are used for detecting outliers and shapes of distributions. 3. Q1 and Q3 form the
ends of the box. The median line is in the interior of the box. 4. Upper and lower fences are used to find outliers, observations that lie outside these fences. a. Lower fence: Q1 1.5(IQR)
b. Upper fence: Q3 1.5(IQR) 5. Outliers are marked on the box plot with an asterisk (*). 6. Whiskers are connected to the box from the smallest and largest observations that are not outliers. 7.
Skewed distributions usually have a long whisker in the direction of the skewness, and the median line is drawn away from the direction of the skewness.
88 ❍
Numerical Descriptive Measures MINITAB provides most of the basic descriptive statistics presented in Chapter 2 using a single command in the drop-down menus. Once you are on the Windows desktop,
double-click on the MINITAB icon or use the Start button to start MINITAB. Practice entering some data into the Data window, naming the columns appropriately in the gray cell just below the column
number. When you have finished entering your data, you will have created a MINITAB worksheet, which can be saved either singly or as a MINITAB project for future use. Click on File 씮 Save Current
Worksheet or File 씮 Save Project. You will need to name the worksheet (or project)—perhaps “test data”—so that you can retrieve it later. The following data are the floor lengths (in inches) behind
the second and third seats in nine different minivans:12 Second seat: Third seat:
62.0, 62.0, 64.5, 48.5, 57.5, 61.0, 45.5, 47.0, 33.0 27.0, 27.0, 24.0, 16.5, 25.0, 27.5, 14.0, 18.5, 17.0
Since the data involve two variables, we enter the two rows of numbers into columns C1 and C2 in the MINITAB worksheet and name them “2nd Seat” and “3rd Seat,” respectively. Using the drop-down
menus, click on Stat 씮 Basic Statistics 씮 Display Descriptive Statistics. The Dialog box is shown in Figure 2.21. FI GU R E 2 .2 1
Now click on the Variables box and select both columns from the list on the left. (You can click on the Graphs option and choose one of several graphs if you like. You may also click on the
Statistics option to select the statistics you would like to see displayed.) Click OK. A display of descriptive statistics for both columns will appear in the Session window (see Figure 2.22). You
may print this output using File 씮 Print Session Window if you choose. To examine the distribution of the two variables and look for outliers, you can create box plots using the command Graph 씮
Boxplot 씮 One Y 씮 Simple. Click OK. Select the appropriate column of measurements in the Dialog box (see Figure 2.23). You can change the appearance of the box plot in several ways. Scale 씮 Axes
and Ticks will allow you to transpose the axes and orient the box plot horizontally, when you check the box marked “Transpose value and category scales.” Multiple Graphs
provides printing options for multiple box plots. Labels will let you annotate the graph with titles and footnotes. If you have entered data into the worksheet as a frequency distribution (values in
one column, frequencies in another), the Data Options will allow the data to be read in that format. The box plot for the third seat lengths is shown in Figure 2.24. You can use the MINITAB commands
from Chapter 1 to display stem and leaf plots or histograms for the two variables. How would you describe the similarities and differences in the two data sets? Save this worksheet in a file called
“Minivans” before exiting MINITAB. We will use it again in Chapter 3. FIGU R E 2 .2 2
FIGU R E 2 .2 3
FIGU R E 2 .2 4
90 ❍
Supplementary Exercises 2.54 Raisins The number of raisins in each
of 14 miniboxes (1/2-ounce size) was counted for a generic brand and for Sunmaid brand raisins. The two data sets are shown here:
Generic Brand
a. What are the mean and standard deviation for the generic brand? b. What are the mean and standard deviation for the Sunmaid brand? c. Compare the centers and variabilities of the two brands using
the results of parts a and b. 2.55 Raisins, continued Refer to Exercise 2.54.
a. Find the median, the upper and lower quartiles, and the IQR for each of the two data sets. b. Construct two box plots on the same horizontal scale to compare the two sets of data. c. Draw two stem
and leaf plots to depict the shapes of the two data sets. Do the box plots in part b verify these results? d. If we can assume that none of the boxes of raisins are being underfilled (that is, they
all weigh approximately 1/2 ounce), what do your results say about the average number of raisins for the two brands? 2.56 TV Viewers The number of television
viewing hours per household and the prime viewing times are two factors that affect television advertising income. A random sample of 25 households in a particular viewing area produced the following
estimates of viewing hours per household:
3.0 6.5 5.0 7.5 9.0
6.0 8.0 12.0 5.0 2.0
7.5 4.0 1.0 10.0 6.5
15.0 5.5 3.5 8.0 1.0
12.0 6.0 3.0 3.5 5.0
a. Scan the data and use the range to find an approximate value for s. Use this value to check your calculations in part b. b. Calculate the sample mean x苶 and the sample standard deviation s.
Compare s with the approximate value obtained in part a.
c. Find the percentage of the viewing hours per household that falls into the interval x苶 2s. Compare with the corresponding percentage given by the Empirical Rule. 2.57 A Recurring Illness Refer to
Exercise 1.26 and data set EX0126. The lengths of time (in months) between the onset of a particular illness and its recurrence were recorded: 2.1 9.0 14.7 19.2 4.1 7.4 14.1 8.7 1.6 3.7
4.4 2.0 9.6 6.9 18.4 .2 1.0 24.0 3.5 12.6
2.7 6.6 16.7 4.3 .2 8.3 2.4 1.4 11.4 23.1
32.3 3.9 7.4 3.3 6.1 .3 2.4 8.2 18.0 5.6
9.9 1.6 8.2 1.2 13.5 1.3 18.0 5.8 26.7 .4
a. Find the range. b. Use the range approximation to find an approximate value for s. c. Compute s for the data and compare it with your approximation from part b. 2.58 A Recurring Illness, continued
Refer to Exercise 2.57. a. Examine the data and count the number of observations that fall into the intervals x苶 s, 苶x 2s, and 苶x 3s. b. Do the percentages that fall into these intervals agree
with Tchebysheff’s Theorem? With the Empirical Rule? c. Why might the Empirical Rule be unsuitable for describing these data? 2.59 A Recurring Illness, again Find the median and the lower and upper
quartiles for the data on times until recurrence of an illness in Exercise 2.57. Use these descriptive measures to construct a box plot for the data. Use the box plot to describe the data
distribution. 2.60 Tuna Fish, again Refer to Exercise 2.8. The
prices of a 6-ounce can or a 7.06-ounce pouch for 14 different brands of water-packed light tuna, based on prices paid nationally in supermarkets, are reproduced here.4 .99 1.12
1.92 .63
1.23 .67
.85 .69
.65 .60
.53 .60
1.41 .66
a. Calculate the five-number summary. b. Construct a box plot for the data. Are there any outliers? c. The value x 1.92 looks large in comparison to the other prices. Use a z-score to decide whether
this is an unusually expensive brand of tuna. 2.61 Electrolysis An analytical chemist wanted to use electrolysis to determine the number of moles of cupric ions in a given volume of solution. The
solution was partitioned into n 30 portions of .2 milliliter each, and each of the portions was tested. The average number of moles of cupric ions for the n 30 portions was found to be .17 mole; the
standard deviation was .01 mole. a. Describe the distribution of the measurements for the n 30 portions of the solution using Tchebysheff’s Theorem. b. Describe the distribution of the measurements
for the n 30 portions of the solution using the Empirical Rule. (Do you expect the Empirical Rule to be suitable for describing these data?) c. Suppose the chemist had used only n 4 portions of the
solution for the experiment and obtained the readings .15, .19, .17, and .15. Would the Empirical Rule be suitable for describing the n 4 measurements? Why? 2.62 Chloroform According to the EPA,
chloroform, which in its gaseous form is suspected of being a cancer-causing agent, is present in small quantities in all of the country’s 240,000 public water sources. If the mean and standard
deviation of the amounts of chloroform present in the water sources are 34 and 53 micrograms per liter, respectively, describe the distribution for the population of all public water sources. 2.63
Aptitude Tests In contrast to aptitude tests, which are predictive measures of what one can accomplish with training, achievement tests tell what an individual can do at the time of the test.
Mathematics achievement test scores for 400 students were found to have a mean and a variance equal to 600 and 4900, respectively. If the distribution of test scores was mound-shaped, approximately
how many of the scores would fall into the interval 530 to 670? Approximately how many scores would be expected to fall into the interval 460 to 740? 2.64 Sleep and the College Student How much sleep
do you get on a typical school night? A group of 10 college students were asked to report the number of
hours that they slept on the previous night with the following results: 7,
a. Find the mean and the standard deviation of the number of hours of sleep for these 10 students. b. Calculate the z-score for the largest value (x 8.5). Is this an unusually sleepy college student?
c. What is the most frequently reported measurement? What is the name for this measure of center? d. Construct a box plot for the data. Does the box plot confirm your results in part b? [HINT: Since
the z-score and the box plot are two unrelated methods for detecting outliers, and use different types of statistics, they do not necessarily have to (but usually do) produce the same results.] 2.65
Gas Mileage The miles per gallon (mpg)
for each of 20 medium-sized cars selected from a production line during the month of March follow.
23.1 20.2 24.7 25.9 24.9
21.3 24.4 22.7 24.7 22.2
23.6 25.3 26.2 24.4 22.9
23.7 27.0 23.2 24.2 24.6
a. What are the maximum and minimum miles per gallon? What is the range? b. Construct a relative frequency histogram for these data. How would you describe the shape of the distribution? c. Find the
mean and the standard deviation. d. Arrange the data from smallest to largest. Find the z-scores for the largest and smallest observations. Would you consider them to be outliers? Why or why not? e.
What is the median? f. Find the lower and upper quartiles. 2.66 Gas Mileage, continued Refer to Exercise
2.65. Construct a box plot for the data. Are there any outliers? Does this conclusion agree with your results in Exercise 2.65? 2.67 Polluted Seawater Petroleum pollution in seas and oceans
stimulates the growth of some types of bacteria. A count of petroleumlytic micro-organisms (bacteria per 100 milliliters) in ten portions of seawater gave these readings: 49,
92 ❍
a. Guess the value for s using the range approximation. b. Calculate 苶x and s and compare with the range approximation of part a. c. Construct a box plot for the data and use it to describe the data
distribution. 2.68 Basketball Attendances at a high school’s
basketball games were recorded and found to have a sample mean and variance of 420 and 25, respectively. Calculate 苶x s, 苶x 2s, and 苶x 3s and then state the approximate fractions of measurements
you would expect to fall into these intervals according to the Empirical Rule. 2.69 SAT Tests The College Board’s verbal and mathematics scholastic aptitude tests are scored on a scale of 200 to 800.
Although the tests were originally designed to produce mean scores of approximately 500, the mean verbal and math scores in recent years have been as low as 463 and 493, respectively, and have been
trending downward. It seems reasonable to assume that a distribution of all test scores, either verbal or math, is mound-shaped. If s is the standard deviation of one of these distributions, what is
the largest value (approximately) that s might assume? Explain. 2.70 Summer Camping A favorite summer pastime
for many Americans is camping. In fact, camping has become so popular at the California beaches that reservations must sometimes be made months in advance! Data from a USA Today Snapshot is shown
below.13 Favorite Camping Activity 50% 40% 30%
2.71 Long-Stemmed Roses A strain of long-
stemmed roses has an approximate normal distribution with a mean stem length of 15 inches and standard deviation of 2.5 inches. a. If one accepts as “long-stemmed roses” only those roses with a stem
length greater than 12.5 inches, what percentage of such roses would be unacceptable? b. What percentage of these roses would have a stem length between 12.5 and 20 inches? 2.72 Drugs for
Hypertension A pharmaceutical company wishes to know whether an experimental drug being tested in its laboratories has any effect on systolic blood pressure. Fifteen randomly selected subjects were
given the drug, and their systolic blood pressures (in millimeters) are recorded.
a. Guess the value of s using the range approximation. b. Calculate 苶x and s for the 15 blood pressures. c. Find two values, a and b, such that at least 75% of the measurements fall between a and b.
2.73 Lumber Rights A company interested in lumbering rights for a certain tract of slash pine trees is told that the mean diameter of these trees is 14 inches with a standard deviation of 2.8 inches.
Assume the distribution of diameters is roughly mound-shaped. a. What fraction of the trees will have diameters between 8.4 and 22.4 inches? b. What fraction of the trees will have diameters greater
than 16.8 inches?
20% 2.74 Social Ambivalence The following data represent the social ambivalence scores for 15 people as measured by a psychological test. (The higher the score, the stronger the ambivalence.)
0% Gathering Enjoying at campfire scenery
Being outside
The Snapshot also reports that men go camping 2.9 times a year, women go 1.7 times a year; and men are more likely than women to want to camp more often. What does the magazine mean when they talk
about 2.9 or 1.7 times a year?
a. Guess the value of s using the range approximation. b. Calculate 苶x and s for the 15 social ambivalence scores.
c. What fraction of the scores actually lie in the interval 苶x 2s? 2.75 TV Commercials The mean duration of televi-
sion commercials on a given network is 75 seconds, with a standard deviation of 20 seconds. Assume that durations are approximately normally distributed. a. What is the approximate probability that a
commercial will last less than 35 seconds? b. What is the approximate probability that a commercial will last longer than 55 seconds?
Number of Foxes, f
Mark McGwire’s record of 70 home runs hit in a single season. At the end of the 2003 major league baseball season, the number of home runs hit per season by each of four major league superstars over
each player’s career were recorded, and are shown in the box plots below:14
a. Construct a relative frequency histogram for x, the number of parasites per fox. b. Calculate 苶x and s for the sample. c. What fraction of the parasite counts fall within two standard deviations
of the mean? Within three standard deviations? Do these results agree with Tchebysheff’s Theorem? With the Empirical Rule?
2.76 Parasites in Foxes A random sample of 100 foxes was examined by a team of veterinarians to determine the prevalence of a particular type of parasite. Counting the number of parasites per fox,
the veterinarians found that 69 foxes had no parasites, 17 had one parasite, and so on. A frequency tabulation of the data is given here: Number of Parasites, x
40 Homers
Write a short paragraph comparing the home run hitting patterns of these four players. 2.80 Barry Bonds In the seasons that followed his 2001 record-breaking season, Barry Bonds hit 46, 45, 45, 5,
and 26 homers, respectively (www.espn.com).14 Two boxplots, one of Bond’s homers through 2001, and a second including the years 2002–2006, follow.
2.77 College Teachers Consider a population con-
sisting of the number of teachers per college at small 2-year colleges. Suppose that the number of teachers per college has an average m 175 and a standard deviation s 15. a. Use Tchebysheff’s
Theorem to make a statement about the percentage of colleges that have between 145 and 205 teachers. b. Assume that the population is normally distributed. What fraction of colleges have more than
190 teachers? 2.78 Is It Accurate? From the following data, a student calculated s to be .263. On what grounds might we doubt his accuracy? What is the correct value (to the nearest hundredth)?
17.2 17.1
17.1 17.0
17.0 17.1
17.1 16.9
16.9 17.0
17.0 17.1
17.1 17.3
17.0 17.2
17.3 17.4
17.2 17.1
2.79 Homerun Kings In the summer of EX0279
2001, Barry Bonds began his quest to break
30 40 50 Homers by Barry Bonds
The statistics used to construct these boxplots are given in the table. Years
25.00 25.00
34.00 34.00
41.50 45.00
16.5 20.0
a. Calculate the upper fences for both of these boxplots. b. Can you explain why the record number of homers is an outlier in the 2001 boxplot, but not in the 2006 boxplot?
94 ❍
2.81 Ages of Pennies Here are the ages of 50 pen-
Stem-and-Leaf Display: Liters
nies from Exercise 1.45 and data set EX0145. The data have been sorted from smallest to largest.
Stem-and-leaf of Liters Leaf Unit 0.10
1 4 2 4 5 4 8 4 12 5 (4) 5 14 5 11 5 7 5 4 6 2 6 1 6
a. What is the average age of the pennies? b. What is the median age of the pennies? c. Based on the results of parts a and b, how would you describe the age distribution of these 50 pennies? d.
Construct a box plot for the data set. Are there any outliers? Does the box plot confirm your description of the distribution’s shape? 2.82 Snapshots Here are a few facts reported as Snapshots in USA
Today. • The median hourly pay for salespeople in the building supply industry is $10.41.15
• Sixty-nine percent of U.S. workers ages 16 and older work at least 40 hours per week.16 • Seventy-five percent of all Associate Professors of Mathematics in the U.S. earn $91,823 or less.17
2.83 Breathing Patterns Research
psychologists are interested in finding out whether a person’s breathing patterns are affected by a particular experimental treatment. To determine the general respiratory patterns of the n 30 people
in the study, the researchers collected some baseline measurements—the total ventilation in liters of air per minute adjusted for body size—for each person before the treatment. The data are shown
here, along with some descriptive tools generated by MINITAB.
5.23 5.92 4.67
4.79 5.38 5.77
5.83 6.34 5.84
5.37 5.12 6.19
4.35 5.14 5.58
5.54 4.72 5.72
6.04 5.17 5.16
5.48 4.99 5.32
6.58 4.82 4.51 5.70 4.96 5.63
Descriptive Statistics: Liters Variable Liters
N 30
N* 0
Mean 5.3953
SE Mean 0.0997
StDev 0.5462
Minimum Q1 Median Q3 Variable Maximum 4.3500 4.9825 5.3750 5.7850 Liters 6.5800
a. Summarize the characteristics of the data distribution using the MINITAB output. b. Does the Empirical Rule provide a good description of the proportion of measurements that fall within two or
three standard deviations of the mean? Explain. c. How large or small does a ventilation measurement have to be before it is considered unusual? 2.84 Arranging Objects The following data
are the response times in seconds for n 25 first graders to arrange three objects by size.
5.2 4.2 3.1 3.6 4.7
Identify the variable x being measured, and any percentiles you can determine from this information.
N 30
3.8 4.1 2.5 3.9 3.3
5.7 4.3 3.0 4.8 4.2
3.9 4.7 4.4 5.3 3.8
3.7 4.3 4.8 4.2 5.4
a. Find the mean and the standard deviation for these 25 response times. b. Order the data from smallest to largest. c. Find the z-scores for the smallest and largest response times. Is there any
reason to believe that these times are unusually large or small? Explain. 2.85 Arranging Objects, continued Refer to
Exercise 2.84. a. Find the five-number summary for this data set. b. Construct a box plot for the data. c. Are there any unusually large or small response times identified by the box plot? d. Construct
a stem and leaf display for the response times. How would you describe the shape of the distribution? Does the shape of the box plot confirm this result?
Exercises 2.86 Refer to Data Set #1 in the How Extreme Val-
ues Affect the Mean and Median applet. This applet loads with a dotplot for the following n 5 observations: 2, 5, 6, 9, 11. a. What are the mean and median for this data set? b. Use your mouse to
change the value x 11 (the moveable green dot) to x 13. What are the mean and median for the new data set? c. Use your mouse to move the green dot to x 33. When the largest value is extremely large
compared to the other observations, which is larger, the mean or the median? d. What effect does an extremely large value have on the mean? What effect does it have on the median? 2.87 Refer to Data
Set #2 in the How Extreme Val-
ues Affect the Mean and Median applet. This applet loads with a dotplot for the following n 5 observations: 2, 5, 10, 11, 12. a. Use your mouse to move the value x 12 to the left until it is smaller
than the value x 11. b. As the value of x gets smaller, what happens to the sample mean? c. As the value of x gets smaller, at what point does the value of the median finally change? d. As you move
the green dot, what are the largest and smallest possible values for the median? 2.88 Refer to Data Set #3 in the How Extreme
Values Affect the Mean and Median applet. This applet loads with a dotplot for the following n 5 observations: 27, 28, 32, 34, 37. a. What are the mean and median for this data set? b. Use your mouse
to change the value x 27 (the moveable green dot) to x 25. What are the mean and median for the new data set? c. Use your mouse to move the green dot to x 5. When the smallest value is extremely
small compared to the other observations, which is larger, the mean or the median? d. At what value of x does the mean equal the median? e. What are the smallest and largest possible values for the
median? f. What effect does an extremely small value have on the mean? What effect does it have on the median? 2.89 Refer to the Why Divide by n 1 applet. The
first applet on the page randomly selects sample of
n 3 from a population in which the standard deviation is s 29.2. a. Click . A sample consisting of n 3 observations will appear. Use your calculator to verify the values of the standard deviation
when dividing by n 1 and n as shown in the applet. b. Click again. Calculate the average of the two standard deviations (dividing by n 1) from parts a and b. Repeat the process for the two standard
deviations (dividing by n). Compare your results to those shown in red on the applet. c. You can look at how the two estimators in part a behave “in the long run” by clicking or a number of times,
until the average of all the standard deviations begins to stabilize. Which of the two methods gives a standard deviation closer to s 29.2? d. In the long run, how far off is the standard deviation
when dividing by n? 2.90 Refer to Why Divide by n 1 applet. The
second applet on the page randomly selects sample of n 10 from the same population in which the standard deviation is s 29.2. a. Repeat the instructions in part c and d of Exercise 2.89. b. Based on
your simulation, when the sample size is larger, does it make as much difference whether you divide by n or n 1 when computing the sample standard deviation? 2.91 If you have not yet done so, use the
first Building a Box Plot applet to construct a box plot for the data in Example 2.14. a. Compare the finished box plot to the plot shown in Figure 2.18. b. How would you describe the shape of the data
distribution? c. Are there any outliers? If so, what is the value of the unusual observation? 2.92 Use the second Building a Box Plot applet to
construct a box plot for the data in Example 2.13. a. How would you describe the shape of the data distribution? b. Use the box plot to approximate the values of the median, the lower quartile, and
the upper quartile. Compare your results to the actual values calculated in Example 2.13.
96 ❍
CASE STUDY Batting
The Boys of Summer Which baseball league has had the best hitters? Many of us have heard of baseball greats like Stan Musial, Hank Aaron, Roberto Clemente, and Pete Rose of the National League and Ty
Cobb, Babe Ruth, Ted Williams, Rod Carew, and Wade Boggs of the American League. But have you ever heard of Willie Keeler, who batted .432 for the Baltimore Orioles, or Nap Lajoie, who batted .422
for the Philadelphia A’s? The batting averages for the batting champions of the National and American Leagues are given on the Student Companion Website. The batting averages for the National League
begin in 1876 with Roscoe Barnes, whose batting average was .403 when he played with the Chicago Cubs. The last entry for the National League is for the year 2006, when Freddy Sanchez of the
Pittsburgh Pirates averaged .344. The American League records begin in 1901 with Nap Lojoie of the Philadelphia A’s, who batted .422, and end in 2006 with Joe Mauer of the Minnesota Twins, who batted
.347.18 How can we summarize the information in this data set? 1. Use MINITAB or another statistical software package to describe the batting averages for the American and National League batting
champions. Generate any graphics that may help you in interpreting these data sets. 2. Does one league appear to have a higher percentage of hits than the other? Do the batting averages of one league
appear to be more variable than the other? 3. Are there any outliers in either league? 4. Summarize your comparison of the two baseball leagues.
Describing Bivariate Data
GENERAL OBJECTIVES Sometimes the data that are collected consist of observations for two variables on the same experimental unit. Special techniques that can be used in describing these variables
will help you identify possible relationships between them.
CHAPTER INDEX ● The best-fitting line (3.4) ● Bivariate data (3.1) ● Covariance and the correlation coefficient (3.4) ● Scatterplots for two quantitative variables (3.3) ● Side-by-side pie charts,
comparative line charts (3.2) ● Side-by-side bar charts, stacked bar charts (3.2)
How Do I Calculate the Correlation Coefficient? How Do I Calculate the Regression Line?
© Janis Christie/Photodisc/Getty Images
Do You Think Your Dishes Are Really Clean? Does the price of an appliance, such as a dishwasher, convey something about its quality? In the case study at the end of this chapter, we rank 20 different
brands of dishwashers according to their prices, and then we rate them on various characteristics, such as how the dishwasher performs, how much noise it makes, its cost for either gas or
electricity, its cycle time, and its water use. The techniques presented in this chapter will help to answer our question.
Very often researchers are interested in more than just one variable that can be measured during their investigation. For example, an auto insurance company might be interested in the number of
vehicles owned by a policyholder as well as the number of drivers in the household. An economist might need to measure the amount spent per week on groceries in a household and also the number of
people in that household. A real estate agent might measure the selling price of a residential property and the square footage of the living area. When two variables are measured on a single
experimental unit, the resulting data are called bivariate data. How should you display these data? Not only are both variables important when studied separately, but you also may want to explore the
relationship between the two variables. Methods for graphing bivariate data, whether the variables are qualitative or quantitative, allow you to study the two variables together. As with univariate
data, you use different graphs depending on the type of variables you are measuring.
“Bi” means “two.” Bivariate data generate pairs of measurements.
When at least one of the two variables is qualitative, you can use either simple or more intricate pie charts, line charts, and bar charts to display and describe the data. Sometimes you will have
one qualitative and one quantitative variable that have been measured in two different populations or groups. In this case, you can use two sideby-side pie charts or a bar chart in which the bars for
the two populations are placed side by side. Another option is to use a stacked bar chart, in which the bars for each category are stacked on top of each other. EXAMPLE
TABLE 3.1
Are professors in private colleges paid more than professors at public colleges? The data in Table 3.1 were collected from a sample of 400 college professors whose rank, type of college, and salary
were recorded.1 The number in each cell is the average salary (in thousands of dollars) for all professors who fell into that category. Use a graph to answer the question posed for this sample.
Salaries of Professors by Rank and Type of College Full Professor
Associate Professor
Assistant Professor
94.8 118.1
65.9 76.0
56.4 65.1
Public Private
Source: Digest of Educational Statistics
To display the average salaries of these 400 professors, you can use a side-by-side bar chart, as shown in Figure 3.1. The height of the bars is the average salary, with each pair of bars along the
horizontal axis representing a different professorial rank. Salaries are substantially higher for full professors in private colleges, however, there are less striking differences at the lower two
FIGU R E 3 .1
● School Public Private
Average Salary ($ Thousands)
Comparative bar charts for Example 3.1
0 School Rank
TABLE 3.2
Public Private Full
Public Private Associate
Public Private Assistant
Along with the salaries for the 400 college professors in Example 3.1, the researcher recorded two qualitative variables for each professor: rank and type of college. Table 3.2 shows the number of
professors in each of the 2 3 6 categories. Use comparative charts to describe the data. Do the private colleges employ as many highranking professors as the public colleges do?
Number of Professors by Rank and Type of College Full Professor
Associate Professor
Assistant Professor
Public Private
The numbers in the table are not quantitative measurements on a single experimental unit (the professor). They are frequencies, or counts, of the number of professors who fall into each category. To
compare the numbers of professors at public and private colleges, you might draw two pie charts and display them side by side, as in Figure 3.2.
FIGU R E 3 .2
Comparative pie charts for Example 3.2
● Private
Category Full Professor Associate Professor Assistant Professor
Alternatively, you could draw either a stacked or a side-by-side bar chart. The stacked bar chart is shown in Figure 3.3.
FI GU R E 3 .3
Stacked bar chart for Example 3.2
● 200
School Public Private
0 Rank
Although the graphs are not strikingly different, you can see that public colleges have fewer full professors and more associate professors than private colleges. The reason for these differences is
not clear, but you might speculate that private colleges, with their higher salaries, are able to attract more full professors. Or perhaps public colleges are not as willing to promote professors to
the higher-paying ranks. In any case, the graphs provide a means for comparing the two sets of data. You can also compare the distributions for public versus private colleges by creating conditional
data distributions. These conditional distributions are shown in Table 3.3. One distribution shows the proportion of professors in each of the three ranks under the condition that the college is
public, and the other shows the proportions under the condition that the college is private. These relative frequencies are easier to compare than the actual frequencies and lead to the same
conclusions: • •
TABLE 3.3
The proportion of assistant professors is roughly the same for both public and private colleges. Public colleges have a smaller proportion of full professors and a larger proportion of associate
Proportions of Professors by Rank for Public and Private Colleges
Public Private
Full Professor
Associate Professor
Assistant Professor
24 .16 150 60 .24 250
57 .38 150 78 .31 250
69 .46 150 112 .45 250
Total 1.00 1.00
3.1 Gender Differences Male and female respondents to a questionnaire about gender differences are categorized into three groups according to their answers to the first question:
3.4 M&M’S The color distributions for two snacksize bags of M&M’S® candies, one plain and one peanut, are displayed in the table. Choose an appropriate graphical method and compare the distributions.
Men Women
Group 1
Group 2
Group 3
a. Create side-by-side pie charts to describe these data. b. Create a side-by-side bar chart to describe these data. c. Draw a stacked bar chart to describe these data. d. Which of the three charts
best depicts the difference or similarity of the responses of men and women? 3.2 State-by-State A group of items are categorized according to a certain attribute—X, Y, Z—and according to the state in
which they are produced: New York California
3.5 How Much Free Time? When you were growing up, did you feel that you did not have enough free time? Parents and children have differing opinions on this subject. A research group surveyed 198
parents and 200 children and recorded their responses to the question, “How much free time does your child have?” or “How much free time do you have?” The responses are shown in the table below:2
Just the Right Amount
Not Enough
Too Much
Don’t Know
Parents Children
a. Create a comparative (side-by-side) bar chart to compare the numbers of items of each type made in California and New York. b. Create a stacked bar chart to compare the numbers of items of each
type made in the two states. c. Which of the two types of presentation in parts a and b is more easily understood? Explain. d. What other graphical methods could you use to describe the data? 3.3
Consumer Spending The table below shows the average amounts spent per week by men and women in each of four spending categories: Men Women
Plain Peanut
$54 21
$27 85
$105 100
$22 75
a. What possible graphical methods could you use to compare the spending patterns of women and men? b. Choose two different methods of graphing and display the data in graphical form. c. What can you
say about the similarities or differences in the spending patterns for men and women? d. Which of the two methods used in part b provides a better descriptive graph?
a. Define the sample and the population of interest to the researchers. b. Describe the variables that have been measured in this survey. Are the variables qualitative or quantitative? Are the data
univariate or bivariate? c. What do the entries in the cells represent? d. Use comparative pie charts to compare the responses for parents and children. e. What other graphical techniques could be
used to describe the data? Would any of these techniques be more informative than the pie charts constructed in part d? 3.6 Consumer Price Index The price of
living in the United States has increased dramatically in the past decade, as demonstrated by the consumer price indexes (CPIs) for housing and transportation. These CPIs are listed in the table for
the years 1996 through the first five months of 2007.3
Housing Transportation
152.8 143.0
156.8 144.3
160.4 141.6
163.9 144.4
169.6 153.3
176.4 154.3
Housing Transportation
180.3 152.9
184.8 157.6
189.5 163.1
195.7 173.9
203.2 180.9
207.8 181.0
Source: www.bls.gov
a. Create side-by-side comparative bar charts to describe the CPIs over time. b. Draw two line charts on the same set of axes to describe the CPIs over time. c. What conclusions can you draw using
the two graphs in parts a and b? Which is the most effective?
c. What conclusions can you draw using the graphs in parts a and b? 3.8 Charitable Contributions Charitable organizations count on support from both private donations and other sources. Here are the
sources of income in a recent year for several well-known charitable organizations in the United States.4
3.7 How Big Is the Household? A local
chamber of commerce surveyed 126 households within its city and recorded the type of residence and the number of family members in each of the households. The data are shown in the table.
Type of Residence Family Members
Single Residence
1 2 3 4 or more
a. Use a side-by-side bar chart to compare the number of family members living in each of the three types of residences. b. Use a stacked bar chart to compare the number of family members living in
each of the three types of residences.
Amounts ($ millions) Organization Salvation Army YMCA American Red Cross American Cancer Society American Heart Association Total
$1545 773 557 868 436 $4179
$1559 4059 2509 58 157 $8342
$3104 4832 3066 926 593 $12,521
Source: The World Almanac and Book of Facts 2007
a. Construct a stacked bar chart to display the sources of income given in the table. b. Construct two comparative pie charts to display the sources of income given in the table. c. Write a short
paragraph summarizing the information that can be gained by looking at these graphs. Which of the two types of comparative graphs is more effective?
SCATTERPLOTS FOR TWO QUANTITATIVE VARIABLES When both variables to be displayed on a graph are quantitative, one variable is plotted along the horizontal axis and the second along the vertical axis.
The first variable is often called x and the second is called y, so that the graph takes the form of a plot on the (x, y) axes, which is familiar to most of you. Each pair of data values is plotted as
a point on this two-dimensional graph, called a scatterplot. It is the twodimensional extension of the dotplot we used to graph one quantitative variable in Section 1.4. You can describe the
relationship between two variables, x and y, using the patterns shown in the scatterplot. •
• •
What type of pattern do you see? Is there a constant upward or downward trend that follows a straight-line pattern? Is there a curved pattern? Is there no pattern at all, but just a random scattering
of points? How strong is the pattern? Do all of the points follow the pattern exactly, or is the relationship only weakly visible? Are there any unusual observations? An outlier is a point that is
far from the cluster of the remaining points. Do the points cluster into groups? If so, is there an explanation for the observed groupings?
The number of household members, x, and the amount spent on groceries per week, y, are measured for six households in a local area. Draw a scatterplot of these six data points. x
Solution Label the horizontal axis x and the vertical axis y. Plot the points using the coordinates (x, y) for each of the six pairs. The scatterplot in Figure 3.4 shows the six pairs marked as dots.
You can see a pattern even with only six data pairs. The cost of weekly groceries increases with the number of household members in an apparent straight-line relationship. Suppose you found that a
seventh household with two members spent $165 on groceries. This observation is shown as an X in Figure 3.4. It does not fit the linear pattern of the other six observations and is classified as an
outlier. Possibly these two people were having a party the week of the survey! FIGU R E 3 .4
Scatterplot for Example 3.3
TABLE 3.4
3 x
A distributor of table wines conducted a study of the relationship between price and demand using a type of wine that ordinarily sells for $10.00 per bottle. He sold this wine in 10 different
marketing areas over a 12-month period, using five different price levels—from $10 to $14. The data are given in Table 3.4. Construct a scatterplot for the data, and use the graph to describe the
relationship between price and demand. ●
Cases of Wine Sold at Five Price Levels Cases Sold per 10,000 Population
Price per Bottle
23, 21 19, 18 15, 17 19, 20 25, 24
$10 11 12 13 14
The 10 data points are plotted in Figure 3.5. As the price increases from $10 to $12 the demand decreases. However, as the price continues to increase, from $12 to $14, the demand begins to increase.
The data show a curved pattern, with the relationship changing as the price changes. How do you explain this relationship? Possibly, the increased price is a signal of increased quality for the
consumer, which causes the increase in demand once the cost exceeds $12. You might be able to think of other reasons, or perhaps some other variable, such as the income of people in the marketing
areas, that may be causing the change.
FI GU R E 3 .5
Scatterplot for Example 3.4
● 25.0
15.0 10
12 Price
Now would be a good time for you to try creating a scatterplot on your own. Use the applets in Building a Scatterplot to create the scatterplots that you see in Figures 3.5 and 3.7. You will find
step-by-step instructions on the left-hand side of the applet (Figure 3.6), and you will be corrected if you make a mistake! FI GU R E 3 .6
Building a Scatterplot applet
A constant rate of increase or decrease is perhaps the most common pattern found in bivariate scatterplots. The scatterplot in Figure 3.4 exhibits this linear pattern—that is, a straight line with
the data points lying both above and below the line and within a fixed distance from the line. When this is the case, we say that the two variables exhibit a linear relationship. 3.5
TABLE 3.5
FIGU R E 3 .7
Scatterplot of x versus y for Example 3.5
The data in Table 3.5 are the size of the living area (in square feet), x, and the selling price, y, of 12 residential properties. The MINITAB scatterplot in Figure 3.7 shows a linear pattern in the
data. ●
Living Area and Selling Price of 12 Properties Residence
x (sq. ft.)
y (in thousands)
$278.5 375.7 339.5 329.8 295.6 310.3 460.5 305.2 288.6 365.7 425.3 268.8
● 450
1800 x
For the data in Example 3.5, you could describe each variable, x and y, individually using descriptive measures such as the means (x苶 and y苶) or the standard deviations (sx and sy). However, these
measures do not describe the relationship between x and y for a particular residence—that is, how the size of the living space affects the selling price of the home. A simple measure that serves this
purpose is called the correlation coefficient, denoted by r, and is defined as sxy r sxsy
The quantities sx and sy are the standard deviations for the variables x and y, respectively, which can be found by using the statistics function on your calculator or the computing formula in
Section 2.3. The new quantity sxy is called the covariance between x and y and is defined as S(xi x苶 )( yi 苶 y) sxy n1 There is also a computing formula for the covariance: (Sxi)(Syi) Sxiyi n sxy n1
where Sxiyi is the sum of the products xiyi for each of the n pairs of measurements. How does this quantity detect and measure a linear pattern in the data? Look at the signs of the cross-products
(xi x苶 )(yi y苶 ) in the numerator of r, or sxy. When a data point (x, y) is in either area I or III in the scatterplot shown in Figure 3.8, the cross-product will be positive; when a data point is
in area II or IV, the cross-product will be negative. We can draw these conclusions: • • •
FI GU R E 3 .8
The signs of the crossproducts (xi x苶 )(yi y苶 ) in the covariance formula
If most of the points are in areas I and III (forming a positive pattern), sxy and r will be positive. If most of the points are in areas II and IV (forming a negative pattern), sxy and r will be
negative. If the points are scattered across all four areas (forming no pattern), sxy and r will be close to 0.
y II : –
y II : –
y II : –
y III : +
IV : –
x (a) Positive pattern
y III : +
IV : –
x (b) Negative pattern
III : +
IV : –
(c) No pattern
The applet called Exploring Correlation will help you to visualize how the pattern of points affects the correlation coefficient. Use your mouse to move the slider at the bottom of the scatterplot
(Figure 3.9). You will see the value of r change as the pattern of the points changes. Notice that a positive pattern (a) results in a positive value of r; no pattern (c) gives a value of r close to
zero; and a negative pattern (b) results in a negative value of r. What pattern do you see when r 1? When r 1? You will use this applet again for the MyApplet Exercises section at the end of the
FIGU R E 3 .9
Exploring Correlation applet
r ⇔ positive linear relationship r 0 ⇔ negative linear relationship r ⬇ 0 ⇔ no relationship
Most scientific and graphics calculators can compute the correlation coefficient, r, when the data are entered in the proper way. Check your calculator manual for the proper sequence of entry
commands. Computer programs such as MINITAB are also programmed to perform these calculations. The MINITAB output in Figure 3.10 shows the covariance and correlation coefficient for x and y in
Example 3.5. In the covariance table, you will find these values: sxy 15,545.20
s2x 79,233.33
s2y 3571.16
and in the correlation output, you find r .924. However you decide to calculate the correlation coefficient, it can be shown that the value of r always lies between 1 and 1. When r is positive, x
increases when y increases, and vice versa. When r is negative, x decreases when y increases, or x increases when y decreases. When r takes the value 1 or 1, all the points lie exactly on a straight
line. If r 0, then there is no apparent linear relationship between the two variables. The closer the value of r is to 1 or 1, the stronger the linear relationship between the two variables.
FIGU R E 3 .1 0
MINITAB output of covariance and correlation for Example 3.5
Covariances: x, y x y
x 79233.33 15545.20
Correlations: x, y y
Pearson correlation of x and y = 0.924 P-Value = 0.000
Find the correlation coefficient for the number of square feet of living area and the selling price of a home for the data in Example 3.5. Three quantities are needed to calculate the correlation
coefficient. The standard deviations of the x and y variables are found using a calculator with a statistical function. You can verify that sx 281.4842 and sy 59.7592. Finally,
(Sxi)(Syi) Sxiyi n sxy n1 (20,980)(4043.5) 7,240,383 12 15,545.19697 11 This agrees with the value given in the MINITAB printout in Figure 3.10. Then sxy 15,545.19697 r .9241 sxsy (281.4842)(59.7592)
which also agrees with the value of the correlation coefficient given in Figure 3.10. (You may wish to verify the value of r using your calculator.) This value of r is fairly close to 1, which
indicates that the linear relationship between these two variables is very strong. Additional information about the correlation coefficient and its role in analyzing linear relationships, along with
alternative calculation formulas, can be found in Chapter 12. Sometimes the two variables, x and y, are related in a particular way. It may be that the value of y depends on the value of x; that is,
the value of x in some way explains the value of y. For example, the cost of a home (y) may depend on its amount of floor space (x); a student’s grade point average (x) may explain her score on an
achievement test (y). In these situations, we call y the dependent variable, while x is called the independent variable. If one of the two variables can be classified as the dependent variable y and
the other as x, and if the data exhibit a straight-line pattern, it is possible to describe the relationship relating y to x using a straight line given by the equation
x “explains” y or y “depends on” x. x is the explanatory or independent variable. y is the response or dependent variable.
y a bx as shown in Figure 3.11.
FI GU R E 3 .1 1
The graph of a straight line
y y = a + bx
b b a 0
As you can see, a is where the line crosses or intersects the y-axis: a is called the y-intercept. You can also see that for every one-unit increase in x, y increases by an amount b. The quantity b
determines whether the line is increasing (b 0), decreasing (b 0), or horizontal (b 0) and is appropriately called the slope of the line.
You can see the effect of changing the slope and the y-intercept of a line using the applet called How a Line Works. Use your mouse to move the slider on the right side of the scatterplot. As you
move the slider, the slope of the line, shown as the vertical side of the green triangle (light gray in Figure 3.12), will change. Moving the slider on the left side of the applet causes the
y-intercept, shown in red (blue in Figure 3.12), to change. What is the slope and y-intercept for the line shown in the applet in Figure 3.12? You will use this applet again for the MyApplet
Exercises section at the end of the chapter. FIGU R E 3 .1 2
How a Line Works applet
Our points (x, y) do not all fall on a straight line, but they do show a trend that could be described as a linear pattern. We can describe this trend by fitting a line as best we can through the
points. This best-fitting line relating y to x, often called the regression or least-squares line, is found by minimizing the sum of the squared differences between the data points and the line
itself, as shown in Figure 3.13. The formulas for computing b and a, which are derived mathematically, are shown below. COMPUTING FORMULAS FOR THE LEAST-SQUARES REGRESSION LINE sy br sx
a 苶y bx苶
and the least-squares regression line is: y a bx FIGU R E 3 .1 3
The best-fitting line
● y y = a + bx 3 2 1
Since sx and sy are both positive, b and r have the same sign, so that: • • •
Remember that r and b have the same sign!
When r is positive, so is b, and the line is increasing with x. When r is negative, so is b, and the line is decreasing with x. When r is close to 0, then b is close to 0.
Find the best-fitting line relating y starting hourly wage to x number of years of work experience for the following data. Plot the line and the data points on the same graph.
Use the data entry method for your calculator to find these descriptive statistics for the bivariate data set:
苶x 4.5
苶y 10.333
sx 1.871
sy 3.710
r .980
Use the regression line to predict y for a given value of x.
sy 3.710 b r .980 1.9432389 ⬇ 1.943 sx 1.871
and a 苶y bx苶 10.333 1.943(4.5) 1.590 Therefore, the best-fitting line is y 1.590 1.943x. The plot of the regression line and the actual data points are shown in Figure 3.14. The best-fitting line can
be used to estimate or predict the value of the variable y when the value of x is known. For example, if a person applying for a job has 3 years of work experience (x), what would you predict his
starting hourly wage (y) to be? From the best-fitting line in Figure 3.14, the best estimate would be y a bx 1.590 1.943(3) 7.419
Fitted line and data points for Example 3.7
● 15.0
FI GU R E 3 .1 4
10.0 y 1.590 1.943x 7.5
5.0 2
5 x
How Do I Calculate the Correlation Coefficient? 1. First, create a table or use your calculator to find Sx, Sy, and Sxy. 2. Calculate the covariance, sxy. 3. Use your calculator or the computing
formula from Chapter 2 to calculate sx and sy. sxy 4. Calculate r . sxsy
How Do I Calculate the Regression Line? sxy 1. First, calculate 苶y and x苶. Then, calculate r . sxsy sy 2. Find the slope, b r and the y-intercept, a y苶 bx苶. sx
3. Write the regression line by substituting the values for a and b into the equation: y a bx. Exercise Reps A. Below you will find a simple set of bivariate data. Fill in the blanks to find the
correlation coefficient. x
(Sx)(Sy) Sxy n sxy n1
Correlation Coefficient
sxy r sxsy
B. Use the information from part A and find the regression line. From Part A
From Part A
x苶 y苶
sy br sx
y-intercept a y苶 bx苶
Regression Line: y
Answers are located on the perforated card at the back of this book.
When should you describe the linear relationship between x and y using the correlation coefficient r, and when should you use the regression line y a bx? The regression approach is used when the
values of x are set in advance and then the corresponding value of y is measured. The correlation approach is used when an experimental unit is selected at random and then measurements are made on
variables x and y. This technical point will be taken up in Chapter 12, which addresses regression analysis. Most data analysts begin any data-based investigation by examining plots of the variables
involved. If the relationship between two variables is of interest, data analysts can also explore bivariate plots in conjunction with numerical measures of location, dispersion, and correlation.
Graphs and numerical descriptive measures are only the first of many statistical tools you will soon have at your disposal.
EXERCISES EXERCISE REPS These questions refer to the MyPersonal Trainer section on page 111. 3.9 Below you will find a simple set of bivariate data. Fill in the blanks to find the correlation
coefficient. x
(Sx)(Sy) Sxy n sxy n1
Correlation Coefficient
sxy r sxsy
3.10 Use the information from Exercise 3.9 and find the regression line. From Part A
From Part A
x苶 y苶
sy br sx
measurements on two variables, x and y: (5, 8)
(2, 6)
(1, 4)
a 苶y bx苶
3.12 Refer to Exercise 3.11.
3.11 A set of bivariate data consists of these
(3, 6)
Regression Line: y
BASIC TECHNIQUES EX0311
(4, 7)
(4, 6)
a. Draw a scatterplot to describe the data. b. Does there appear to be a relationship between x and y? If so, how do you describe it? c. Calculate the correlation coefficient, r, using the computing
formula given in this section. d. Find the best-fitting line using the computing formulas. Graph the line on the scatterplot from part a. Does the line pass through the middle of the points?
a. Use the data entry method in your scientific calculator to enter the six pairs of measurements. Recall the proper memories to find the correlation coefficient, r, the y-intercept, a, and the slope,
b, of the line. b. Verify that the calculator provides the same values for r, a, and b as in Exercise 3.11. 3.13 Consider this set of bivariate data: EX0313
a. Draw a scatterplot to describe the data.
b. Does there appear to be a relationship between x and y? If so, how do you describe it? c. Calculate the correlation coefficient, r. Does the value of r confirm your conclusions in part b? Explain.
3.14 The value of a quantitative variable is
measured once a year for a 10-year period:
61.5 62.3 60.7 59.8 58.0
58.2 57.5 57.5 56.1 56.0
MINITAB output for Exercise 3.14 x 9.16667 -6.42222
y 4.84933
d. Find the best-fitting line using the results of part c. Verify your answer using the data entry method in your calculator. e. Plot the best-fitting line on your scatterplot from part a. Describe the
fit of the line.
APPLICATIONS 3.15 Grocery Costs These data relating the amount spent on groceries per week and the number of household members are from Example 3.3:
$110.19 $118.33 $150.92
a. Find the best-fitting line for these data. b. Plot the points and the best-fitting line on the same graph. Does the line summarize the information in the data points? c. What would you estimate a
household of six to spend on groceries per week? Should you use the fitted line to estimate this amount? Why or why not? 3.16 Real Estate Prices The data relating EX0316
x (sq. ft.)
y (in thousands)
$278.5 375.7 339.5 329.8 295.6 310.3 460.5 305.2 288.6 365.7 425.3 268.8
3.17 Disabled Students A social skills training program, reported in Psychology in the Schools, was implemented for seven students with mild handicaps in a study to determine whether the program
caused improvement in pre/post measures and behavior ratings.5 For one such test, these are the pretest and posttest scores for the seven students:
C ovarianc es x y
price of 12 residential properties given in Example 3.5 are reproduced here. First, find the best-fitting line that describes these data, and then plot the line and the data points on the same graph.
Comment on the goodness of the fitted line in describing the selling price of a residential property as a linear function of the square feet of living area. Residence
a. Draw a scatterplot to describe the variable as it changes over time. b. Describe the measurements using the graph constructed in part a. c. Use this MINITAB output to calculate the correlation
coefficient, r:
the square feet of living space and the selling
Earl Ned Jasper Charlie Tom Susie Lori
a. Draw a scatterplot relating the posttest score to the pretest score. b. Describe the relationship between pretest and posttest scores using the graph in part a. Do you see any trend? c. Calculate
the correlation coefficient and interpret its value. Does it reinforce any relationship that was apparent from the scatterplot? Explain. 3.18 Lexus, Inc. The makers of the Lexus
automobile have steadily increased their sales since their U.S. launch in 1989. However, the rate of increase changed in 1996 when Lexus introduced a line of trucks. The sales of Lexus from 1996 to
2005 are shown in the table.6
114 Year
Sales (thousands 80 of vehicles)
Source: Adapted from: Automotive News, January 26, 2004, and May 22, 2006.
a. Plot the data using a scatterplot. How would you describe the relationship between year and sales of Lexus? b. Find the least-squares regression line relating the sales of Lexus to the year being
measured. c. If you were to predict the sales of Lexus in the year 2015, what problems might arise with your prediction? 3.19 HDTVs, again In Exercise 2.12, Con-
sumer Reports gave the prices for the top 10 LCD high definition TVs (HDTVs) in the 30- to 40-inch category. Does the price of an LCD TV depend on the size of the screen? The table below shows the 10
costs again, along with the screen size.6
JVC LT-40FH96 Sony Bravia KDL-V32XBR1 Sony Bravia KDL-V40XBR1 Toshiba 37HLX95 Sharp Aquos LC-32DA5U Sony Bravia KLV-S32A10 Panasonic Viera TC-32LX50 JVC LT-37X776 LG 37LP1D Samsung LN-R328W
$2900 1800 2600 3000 1300 1500 1350 2000 2200 1200
40" 32" 40" 37" 32" 32" 32" 37" 37" 32"
a. Which of the two variables (price and size) is the independent variable, and which is the dependent variable? b. Construct a scatterplot for the data. Does the relationship appear to be linear?
3.20 HDTVs, continued Refer to Exercise 3.19.
Suppose we assume that the relationship between x and y is linear. a. Find the correlation coefficient, r. What does this value tell you about the strength and direction of the relationship between
size and price? b. What is the equation of the regression line used to predict the price of the TV based on the size of the screen? c. The Sony Corporation is introducing a new 37" LCD TV. What would
you predict its price to be? d. Would it be reasonable to try to predict the price of a 45" LCD TV? Explain.
CHAPTER REVIEW Key Concepts I.
Bivariate Data
1. Both qualitative and quantitative variables 2. Describing each variable separately 3. Describing the relationship between the two variables II. Describing Two Qualitative Variables
1. Side-by-side pie charts 2. Comparative line charts
3. Comparative bar charts a. Side-by-side b. Stacked 4. Relative frequencies to describe the relationship between the two variables III. Describing Two Quantitative Variables
1. Scatterplots a. Linear or nonlinear pattern
b. Strength of relationship
3. The best-fitting regression line
c. Unusual observations: clusters and outliers
a. Calculating the slope and y-intercept b. Graphing the line c. Using the line for prediction
2. Covariance and correlation coefficient
Describing Bivariate Data MINITAB provides different graphical techniques for qualitative and quantitative bivariate data, as well as commands for obtaining bivariate descriptive measures when the
data are quantitative. To explore both types of bivariate procedures, you need to enter two different sets of bivariate data into a MINITAB worksheet. Once you are on the Windows desktop,
double-click on the MINITAB icon or use the Start button to start MINITAB. Start a new project using File 씮 New 씮 Minitab Project. Then open the existing project called “Chapter 1.” We will use the
college student data, which should be in Worksheet 1. Suppose that the 105 students already tabulated were from the University of California, Riverside, and that another 100 students from an
introductory statistics class at UC Berkeley were also interviewed. Table 3.6 shows the status distribution for both sets of students. Create another variable in C3 of the worksheet called “College”
and enter UCR for the first five rows. Now enter the UCB data in columns C1–C3. You can use the familiar Windows cut-and-paste icons if you like.
TABLE 3.6
Frequency (UCR) Frequency (UCB)
Grad Student
The other worksheet in “Chapter 1” is not needed and can be deleted by clicking on the X in the top right corner of the worksheet. We will use the worksheet called “Minivans” from Chapter 2, which
you should open using File 씮 Open Worksheet and selecting “Minivans.mtw.” Now save this new project as “Chapter 3.” To graphically describe the UCR/UCB student data, you can use comparative pie
charts—one for each school (see Chapter 1). Alternatively, you can use either stacked or side-by-side bar charts. Use Graph 씮 Bar Chart. In the “Bar Charts” Dialog box (Figure 3.15), select Values
from a Table in the drop-down list and click either Stack or Cluster in the row marked “One Column of Values.” Click OK. In the next Dialog box (Figure 3.16), select “Frequency” for the Graph
variables box and “Status” and “College” for the Categorical variable for grouping box. Click OK. Once the bar chart is displayed (Figure 3.17), you can right-click on various items in the bar chart
to edit. If you right-click on the bars and select Update Graph Automatically, the bar chart will automatically update when you change the data in the Minitab worksheet.
FI GU R E 3 .1 5
FI GU R E 3 .1 6
FIGU R E 3 .1 7
Turn to Worksheet 2, in which the bivariate minivan data from Chapter 2 are located. To examine the relationship between the second and third car seat lengths, you can plot the data and numerically
describe the relationship with the correlation coefficient and the best-fitting line. Use Stat 씮 Regression 씮 Fitted Line Plot, and select “2nd Seat” and “3rd Seat” for Y and X, respectively (see
Figure 3.18). Make sure that the dot next to Linear is selected, and click OK. The plot of the nine data points and the best-fitting line will be generated as in Figure 3.19.
FIGU R E 3 .1 8
FI GU R E 3 .1 9
To calculate the correlation coefficient, use Stat 씮 Basic Statistics 씮 Correlation, selecting “2nd Seat” and “3rd Seat” for the Variables box. To select both variables at once, hold the Shift key
down as you highlight the variables and then click Select. Click OK, and the correlation coefficient will appear in the Session window (see Figure 3.20). Notice the relatively strong positive
correlation and the positive slope of the regression line, indicating that a minivan with a long floor length behind the second seat will also tend to have a long floor length behind the third seat.
Save “Chapter 3” before you exit MINITAB!
FI GU R E 3 .2 0
Supplementary Exercises 3.21 Professor Asimov Professor Isaac Asimov was one of the most prolific writers of all time. He wrote nearly 500 books during a 40-year career prior to his death in 1992. In
fact, as his career progressed, he became even more productive in terms of the number of books written within a given period of time.8 These data are the times (in months) required to write his
books, in increments of 100: Number of Books
Time (in months)
a. Plot the accumulated number of books as a function of time using a scatterplot. b. Describe the productivity of Professor Asimov in light of the data set graphed in part a. Does the relationship
between the two variables seem to be linear? 3.22 Cheese, Please! Health-conscious Americans often consult the nutritional information on food packages in an attempt to avoid foods with large amounts
of fat, sodium, or cholesterol. The following information was taken from eight different brands of American cheese slices:
f. Write a paragraph to summarize the relationships you can see in these data. Use the correlations and the patterns in the four scatterplots to verify your conclusions. 3.23 Army versus Marine Corps
Who are the men and women who serve in our armed forces? Are they male or female, officers or enlisted? What is their ethnic origin and their average age? An article in Time magazine provided some
insight into the demographics of the U.S. armed forces.9 Two of the bar charts are shown below. U.S. Army 50% Enlisted Officers
40% 30% 20%
Sodium (mg)
Calories U.S. Marine Corps
3.5 5.0 2.5
3.5 3.0
50% Enlisted Officers
40% 30% 20% 10%
49 ov er d an
to 40
to 35
to 30
0% to
Kraft Deluxe American Kraft Velveeta Slices Private Selection Ralphs Singles Kraft 2% Milk Singles Kraft Singles American Borden Singles Lake to Lake American
Fat (g)
Cholesterol (mg)
Saturated Fat (g)
a. Which pairs of variables do you expect to be strongly related? b. Draw a scatterplot for fat and saturated fat. Describe the relationship. c. Draw a scatterplot for fat and calories. Compare the
pattern to that found in part b. d. Draw a scatterplot for fat versus sodium and another for cholesterol versus sodium. Compare the patterns. Are there any clusters or outliers? e. For the pairs of
variables that appear to be linearly related, calculate the correlation coefficients.
a. What variables have been measured in this study? Are the variables qualitative or quantitative? b. Describe the population of interest. Do these data represent a population or a sample drawn from
the population? c. What type of graphical presentation has been used? What other type could have been used? d. How would you describe the similarities and differences in the age distributions of
enlisted persons and officers?
e. How would you describe the similarities and differences in the age distributions of personnel in the U.S. Army and the Marine Corps? 3.24 Cheese, again! The demand for healthy foods that are low
in fats and calories has resulted in a large number of “low-fat” and “fat-free” products at the supermarket. The table shows the numbers of calories and the amounts of sodium (in milligrams) per
slice for five different brands of fat-free American cheese. Brand
Sodium (mg)
Kraft Fat Free Singles Ralphs Fat Free Singles Borden Fat Free Healthy Choice Fat Free Smart Beat American
a. Draw a scatterplot to describe the relationship between the amount of sodium and the number of calories. b. Describe the plot in part a. Do you see any outliers? Do the rest of the points seem to
form a pattern? c. Based only on the relationship between sodium and calories, can you make a clear decision about which of the five brands to buy? Is it reasonable to base your choice on only these
two variables? What other variables should you consider? 3.25 Peak Current Using a chemical proce-
dure called differential pulse polarography, a chemist measured the peak current generated (in microamperes) when a solution containing a given amount of nickel (in parts per billion) is added to a
buffer. The data are shown here:
x Ni (ppb)
y Peak Current (mA)
19.1 38.2 57.3 76.2 95 114 131 150 170
.095 .174 .256 .348 .429 .500 .580 .651 .722
Use a graph to describe the relationship between x and y. Add any numerical descriptive measures that are appropriate. Write a paragraph summarizing your results. 3.26 Movie Money How much money do
movies make on a single weekend? Does this amount in any way predict the movie’s success or failure, or is the movie’s total monetary success more
dependent on the number of weeks that the movie remains in the movie theaters? In a recent week, the following data was collected for the top 10 movies in theaters that weekend.10
Top Movies
Weekend Gross (in millions)
1 The Prestige $14.8 Disney 2 The Departed $13.7 Warner Bros. 3 Flags of Our Fathers $10.2 Paramount/DreamWorks 4 Open Season $8.0 Sony 5 Flicka $7.7 20th Century Fox 5 The Grudge 2 $7.7 Sony 7 Man
of the Year $7.0 Universal 8 Marie Antoinette $5.3 Sony 9 The Texas Chainsaw Massacre: The Beginning $3.8 New Line 10 The Marine $3.7 20th Century Fox
Number of Per-Screen Weeks in Screens Average Release
Gross to Date (in millions)
a. Which pairs of variables in the table do you think will have a positive correlation? Which pairs will have a negative correlation? Explain. b. Draw a scatterplot relating the gross to date to the
number of weeks in release. How would you describe the relationship between these two variables? c. Draw a scatterplot relating the weekend gross to the number of screens on which the movie is being
shown. How would you describe the relationship between these two variables? d. Draw a scatterplot relating the per-screen average to the number of screens on which the movie is being shown. How would
you describe the relationship between these two variables? 3.27 Movie Money, continued The data from Exercise 3.26 were entered into a MINITAB worksheet, and the following output was obtained.
Weekend gross Screens Avg/screen Weeks Gross to date
Weekend gross
Avg/ screen
Gross to date
14.2 403.8 4635.5 -0.5 28.3
521835.8 -884350.0 493.2 11865.1
3637602.0 -1110.9 -10929.2
1.1 23.8
a. Use the MINITAB output or the original data to find the correlation between the number of weeks in release and the gross to date.
b. For the pair of variables described in part a, which of the variables would you classify as the independent variable? The dependent variable? c. Use the MINITAB output or the original data to find
the correlation between the weekend gross and the number of screens on which the movie is being shown. Find the correlation between the number of screens on which the movie is being shown and the
per-screen average. d. Do the correlations found in part c confirm your answer to Exercise 3.26a? What might be the practical reasons for the direction and strength of the correlations in part c? 3.28
Heights and Gender Refer to Exercise 1.54
and data set EX0154. When the heights of these 105 students were recorded, their gender was also recorded. a. What variables have been measured in this experiment? Are they qualitative or
quantitative? b. Look at the histogram from Exercise 1.54 along with the comparative box plots shown below. Do the box plots help to explain the two local peaks in the histogram? Explain. Histogram
of Heights 10
3.29 Hazardous Waste The data in Exercise 1.37 gave the number of hazardous waste sites in each of the 50 states and the District of Columbia in 2005.4 Suspecting that there might be a relationship
between the number of waste sites and the size of the state (in thousands of square miles), researchers recorded both variables and generated a scatterplot with MINITAB.
State AL AK AZ AR CA CO CT DE DC FL GA HI ID IL IN IA KS KY LA ME MD MA MI MN MS MO
MT NE NV NH NJ NM NY NC ND OH OK OR PA RI SC SD TN TX UT VT VA WA WV WI WY
MINITAB printout for Exercise 3.29 2
Covariances: Sites, Area
66 Heights
Sites 682.641 -98.598
Sites Area
Area 9346.603
120 100 F
80 60 40 M 20 0 60
68 Height
300 400 Area
a. Is there any clear pattern in the scatterplot? Describe the relationship between number of waste sites and the size of the state. b. Use the MINITAB output to calculate the correlation
coefficient. Does this confirm your answer to part a? c. Are there any outliers or clusters in the data? If so, can you explain them?
3.31 Pottery, continued In Exercise 1.59, we analyzed the percentage of aluminum oxide in 26 samples of Romano-British pottery found at four different kiln sites in the United Kingdom.12 Since one of
the sites only provided two measurements, that site is eliminated, and comparative box plots of aluminum oxide at the other three sites are shown.
d. What other variables could you consider in trying to understand the distribution of hazardous waste sites in the United States?
Total Yards
Source: www.espn.com
a. Draw a scatterplot to describe the relationship between number of completions and total passing yards for Brett Favre. b. Describe the plot in part a. Do you see any outliers? Do the rest of the
points seem to form a pattern? c. Calculate the correlation coefficient, r, between the number of completions and total passing yards. d. What is the regression line for predicting total number of
passing yards y based on the total number of completions x? e. If Brett Favre had 20 pass completions in his next game, what would you predict his total number of passing yards to be?
3.30 Brett Favre, again The number of passes completed and the total number of passing yards was recorded for Brett Favre for each of the 16 regular season games in the fall of 2006.11
16 Aluminum Oxide
a. What two variables have been measured in this experiment? Are they qualitative or quantitative? b. How would you compare the amount of aluminum oxide in the samples at the three sites? 3.32
Pottery, continued Here is the percentage of aluminum oxide, the percentage of iron oxide, and the percentage of magnesium oxide in five samples collected at Ashley Rails in the United Kingdom.
Sample 1 2 3 4 5
17.7 18.3 16.7 14.8 19.1
1.12 1.14 0.92 2.74 1.64
0.56 0.67 0.53 0.67 0.60
a. Find the correlation coefficients describing the relationships between aluminum and iron oxide content, between iron oxide and magnesium oxide, and between aluminum oxide and magnesium oxide. b.
Write a sentence describing the relationships between these three chemicals in the pottery samples. 3.33 Computer Networks at Home The table below (Exercise 1.50) shows the predicted rise of home
networking of PCs in the next few years.13
6.1 6.5 6.2 5.7 4.9 4.1 3.4
1.7 4.5 8.7 13.7 19.1 24.0 28.2
the back with arms outstretched to make a “T”) is roughly equal to the person’s height. To test this claim, we measured eight people with the following results:
U.S Home Networks (in millions) Year
Source: Jupiter Research
a. What variables have been measured in this experiment? Are they qualitative or quantitative?
Armspan (inches) Height (inches)
62.25 62
69.5 70
Armspan (inches) Height (inches)
60.25 62
b. Use one of the graphical methods given in this chapter to describe the data. c. Write a sentence describing the relationship between wired and wireless technology as it will be in the next few
years. 3.34 Politics and Religion A survey was conducted prior to the 2004 presidential election to explore the relationship between a person’s religious fervor and their choice of a political
candidate. Voters were asked how often they attended church and which of the two major presidential candidates (George W. Bush or his democratic opponent) they would favor in the 2004 election.14 The
results are shown below. Church Attendance More than once a week Once a week Once or twice a month Once or twice a year Seldom/never
G. W. Bush
Democratic Candidate
63% 56% 52% 46% 38%
37% 44% 48% 54% 62%
Source: Press-Enterprise
a. What variables have been measured in this survey? Are they qualitative or quantitative? b. Draw side-by-side comparative bar charts to describe the percentages favoring the two candidates,
categorized by church attendance. c. Draw two line charts on the same set of axes to describe the same percentages for the two candidates. d. What conclusions can you draw using the two graphs in
parts b and c? Which is more effective? 3.35 Armspan and Height Leonardo
da Vinci (1452–1519) drew a sketch of a man, indicating that a person’s armspan (measuring across
a. Draw a scatterplot for armspan and height. Use the same scale on both the horizontal and vertical axes. Describe the relationship between the two variables. b. Calculate the correlation
coefficient relating armspan and height. c. If you were to calculate the regression line for predicting height based on a person’s armspan, how would you estimate the slope of this line? d. Find the
regression line relating armspan to a person’s height. e. If a person has an armspan of 62 inches, what would you predict the person’s height to be? 3.36 Airline Revenues The number of passengers x
(in millions) and the revenue y (in billions of dollars) for the top nine U.S. airlines in a recent year are given in the following table.4
98.0 20.7
66.7 17.4
86.0 16.2
56.5 12.3
42.8 11.2
U.S. Air
x y
x y
64.0 5.1
88.4 7.6
Alaska 16.7 3.0
SkyWest 16.6 2.0
Source: The World Almanac and Book of Facts 2007
a. Construct a scatterplot for the data. b. Describe the form, direction, and strength of the pattern in the scatterplot. 3.37 Test Interviews Of two personnel
evaluation techniques available, the first requires a two-hour test-interview while the second can be completed in less than an hour. The scores for each of the eight individuals who took both tests
are given in the next table.
Test 1 (x)
Test 2 (y)
a. Construct a scatterplot for the data. b. Describe the form, direction, and strength of the pattern in the scatterplot. 3.38 Test Interviews, continued Refer to Exercise 3.37. a. Find the
correlation coefficient, r, to describe the relationship between the two tests.
b. Would you be willing to use the second and quicker test rather than the longer test-interview to evaluate personnel? Explain. 3.39 Rain and Snow Is there a correlation between the amount of rain
and the amount of snow that falls in a particular location? The table below shows the average annual rainfall (inches) and the average annual snowfall (inches) for 10 cities in the United States.15
Rainfall (inches)
Snowfall (inches)
14.77 13.03 37.60 21.19 37.98 58.33 54.65 49.69 37.07 35.56
56.9 77.8 64.5 40.8 19.9 97.0 5.1 28.6 6.5 23.2
Billings, MT Casper, WY Concord, NH Fargo, ND Kansas City, MO Juneau, AK Memphis, TN New York, NY Portland, OR Springfield, IL Source: Time Almanac 2007
a. Construct a scatterplot for the data. b. Calculate the correlation coefficient r between rainfall and snowfall. Describe the form, direction, and strength of the relationship between rainfall and
snowfall. c. Are there any outliers in the scatterplot? If so, which city does this outlier represent? d. Remove the outlier that you found in part c from the data set and recalculate the correlation
coefficient r for the remaining nine cities. Does the correlation between rainfall and snowfall change, and, if so, in what way?
Exercises 3.40 If you have not yet done so, use the first applet
in Building a Scatterplot to create a scatterplot for the data in Example 3.4. 3.41 If you have not yet done so, use the second
applet in Building a Scatterplot to create a scatterplot for the data in Example 3.5. 3.42 Cordless Phones The table below shows the prices of 8 single handset cordless phones along with their
overall score (on a scale of
0–100) in a consumer rating survey presented by Consumer Reports.16 Brand and Model Uniden EXI 4246 AT&T E2116 Panasonic KX-TG5621S GE 27831GE1 VTech V Mix VTech ia5829 Panasonic KX-TG2421W Clarity
Overall Score
$25 30 50 20 30 30 40 70
a. Calculate the correlation coefficient r between price and overall score. How would you describe the relationship between price and overall score? b. Use the applet called Correlation and the
Scatterplot to plot the eight data points. What is the correlation coefficient shown on the applet? Compare with the value you calculated in part a. c. Describe the pattern that you see in the
scatterplot. What unexpected relationship do you see in the data? 3.43 Midterm Scores When a student per-
forms poorly on a midterm exam, the student sometimes is convinced that their score is an anomaly and that they will do much better on the second midterm. The data below show the midterm scores (out
of 100 points) for eight students in an introductory statistics class.
Midterm 1
Midterm 2
a. Calculate the correlation coefficient r between the two midterm scores. How would you describe the relationship between scores on the first and second midterm? b. Use the applet called Correlation
and the Scatterplot to plot the eight data points. What is the correlation coefficient shown on the applet? Compare with the value you calculated in part a. c. Describe the pattern that you see in
the scatterplot. Are there any clusters or outliers? If so, how would you explain them? 3.44 Acess the applet called Exploring Correlation.
a. Move the slider in the first applet so that r ⬇ .75. Now switch the sign using the button at the bottom of the applet. Describe the change in the pattern of the points.
b. Move the slider in the first applet so that r ⬇ 0. Describe the pattern of points on the scatterplot. c. Refer to part b. In the second applet labeled Correlation and the Quadrants, with r ⬇ 0,
count the number of points falling in each of the four quadrants of the scatterplot. Is the distribution of points in the quadrants relatively uniform, or do more points fall into certain quadrants
than others? d. Use the second applet labeled Correlation and the Quadrants and change the correlation coefficient to r ⬇ 0.9. Is the distribution of points in the quadrants relatively uniform, or do
more points fall into certain quadrants than others? What happens if r ⬇ 0.9? e. Use the third applet labeled Correlation and the Regression Line. Move the slider to see the relationship between the
correlation coefficient r, the slope of the regression line and the direction of the relationship between x and y. Describe the relationship. 3.45 Suppose that the relationship between two vari-
ables x and y can be described by the regression line y 2.0 0.5x. Use the applet in How a Line Works to answer the following questions: a. What is the change in y for a one-unit change in x? b. Do
the values of y increase or decrease as x increases? c. At what point does the line cross the y-axis? What is the name given to this value? d. If x 2.5, use the least squares equation to predict the
value of y. What value would you predict for y if x 4.0? 3.46 Access the applet in How a Line Works. a. Use the slider to change the y-intercept of the line, but do not change the slope. Describe the
changes that you see in the line. b. Use the slider to change the slope of the line, but do not change the y-intercept. Describe the changes that you see in the line.
CASE STUDY Dishwashers
Are Your Dishes Really Clean? Does the price of an appliance convey something about its quality? Thirty-six different dishwashers were ranked on characteristics ranging from an overall satisfaction
score, washing (x1), energy use (x2), noise (x3), loading flexibility (x4), ease of use (x5), and cycle time (in minutes).17 The Kenmore (1374[2]) had the highest performance score of 83 while the
Whirlpool Gold GU3600XTS[Q] had the lowest at 76. Ratings pictograms were converted to numerical values for x1, . . . , x5 where 5 Excellent, 4 Very good, 3 Good, 2 Fair, and 1 Poor. Use a
statistical computer package to explore the relationships between various pairs of variables in the table. Excellent
Very good
Brand & Model
Ariston LI670 Asko D3122XL Asko Encore D3531XLHD[SS] Bosch SHE45C0[2]UC Bosch SHE58C0[2]UC Frigidaire GLD2445RF[S] Frigidaire Gallery GLD4355RF[S] Frigidaire Professional PLD4555RF[C] GE GLD4600N[WW]
GE GLD5900N[WW] GE GLD6700N[WW] GE Monogram ZBD0710N[SS] GE Profile PDW8600N[WW] GE Profile PDW9900N[WW] Haier ESD310 Kenmore (Sears) 1359[2] Kenmore (Sears) 1373[2] Kenmore (Sears) 1374[2] Kenmore
(Sears) Elite 1376[2] Kenmore (Sears) Elite 1378[2] Kenmore (Sears) PRO 1387[3] KitchenAid Architect KUDD01DP[WH] KitchenAid KUDK03CT[WH] KitchenAid KUDS03CT[WH] KitchenAid KUDU03ST[SS] LG LDF7810
[WW] LG LDS5811[W] Maytag MDB4651AW[W] Maytag MDB5601AW[W] Maytag MDB6601AW[W] Maytag MDB7601AW[W] Maytag MDB8751BW[W] Miele Advanta G2020SC Whirlpool DU1100XTP[Q] Whirlpool Gold GU2455XTS[Q]
Whirlpool Gold GU3600XTS[Q]
$800 $850 $1600 $700 $900 $400 $500 $710 $460 $510 $550 $1500 $900 $1300 $600 $350 $580 $650 $800 $1000 $1400 $1400 $650 $850 $1400 $800 $650 $400 $450 $500 $560 $700 $1000 $500 $550 $750
48. 78. 81. 78. 78. 62. 71. 75. 75. 74. 68. 59. 68. 70. 56. 68. 79. 83. 79. 82. 78. 60. 76. 78. 79. 77. 74. 71. 68. 71. 71. 74. 74. 77. 77. 76.
Cycle Time
Source: © 2007 by Consumers Union of U.S., Inc., Yonkers, NY 10703-1057, a nonprofit organization. Reprinted with permission from the September 2007 issue of Consumer Reports® for educational purposes
only. No commercial use or reproduction permitted. www.ConsumerReports.org®.
1. Look at the variables Price, Score, and Cycle Time individually. What can you say about symmetry? About outliers? 2. Look at all the variables in pairs. Which pairs are positively correlated?
Negatively correlated? Are there any pairs that exhibit little or no correlation? Are some of these results counterintuitive? 3. Does the price of an appliance, specifically a dishwasher, convey
something about its quality? Which variables did you use in arriving at your answer?
Probability and Probability Distributions GENERAL OBJECTIVES Now that you have learned to describe a data set, how can you use sample data to draw conclusions about the sampled populations? The
technique involves a statistical tool called probability. To use this tool correctly, you must first understand how it works. The first part of this chapter will teach you the new language of
probability, presenting the basic concepts with simple examples. The variables that we measured in Chapters 1 and 2 can now be redefined as random variables, whose values depend on the chance
selection of the elements in the sample. Using probability as a tool, you can create probability distributions that serve as models for discrete random variables, and you can describe these random
variables using a mean and standard deviation similar to those in Chapter 2.
CHAPTER INDEX ● The Addition and Multiplication Rules (4.6) ● Bayes’ Rule and the Law of Total Probability (optional) (4.7) ● Conditional probability and independence (4.6)
© Tammie Arroyo/Getty Images
Probability and Decision Making in the Congo In his exciting novel Congo, author Michael Crichton describes an expedition racing to find boron-coated blue diamonds in the rain forests of eastern
Zaire. Can probability help the heroine Karen Ross in her search for the Lost City of Zinj? The case study at the end of this chapter involves Ross’s use of probability in decision-making situations.
● Counting rules (optional) (4.4) ● Experiments and events (4.2) ● Intersections, unions, and complements (4.5) ● The mean and standard deviation for a discrete random variable (4.8) ● Probability
distributions for discrete random variables (4.8) ● Random variables (4.8) ● Relative frequency definition of probability (4.3)
What’s the Difference between Mutually Exclusive and Independent Events?
THE ROLE OF PROBABILITY IN STATISTICS Probability and statistics are related in an important way. Probability is used as a tool; it allows you to evaluate the reliability of your conclusions about
the population when you have only sample information. Consider these situations: •
When you toss a single coin, you will see either a head (H) or a tail (T). If you toss the coin repeatedly, you will generate an infinitely large number of Hs and Ts—the entire population. What does
this population look like? If the coin is fair, then the population should contain 50% Hs and 50% Ts. Now toss the coin one more time. What is the chance of getting a head? Most people would say that
the “probability” or chance is 1/2. Now suppose you are not sure whether the coin is fair; that is, you are not sure whether the makeup of the population is 50–50. You decide to perform a simple
experiment. You toss the coin n 10 times and observe 10 heads in a row. Can you conclude that the coin is fair? Probably not, because if the coin were fair, observing 10 heads in a row would be very
unlikely; that is, the “probability” would be very small. It is more likely that the coin is biased.
As in the coin-tossing example, statisticians use probability in two ways. When the population is known, probability is used to describe the likelihood of observing a particular sample outcome. When
the population is unknown and only a sample from that population is available, probability is used in making statements about the makeup of the population—that is, in making statistical inferences.
In Chapters 4–7, you will learn many different ways to calculate probabilities. You will assume that the population is known and calculate the probability of observing various sample outcomes. Once
you begin to use probability for statistical inference in Chapter 8, the population will be unknown and you will use your knowledge of probability to make reliable inferences from sample information.
We begin with some simple examples to help you grasp the basic concepts of probability.
EVENTS AND THE SAMPLE SPACE Data are obtained by observing either uncontrolled events in nature or controlled situations in a laboratory. We use the term experiment to describe either method of data
collection. An experiment is the process by which an observation (or measurement) is obtained. Definition
The observation or measurement generated by an experiment may or may not produce a numerical value. Here are some examples of experiments: • • •
Recording a test grade Measuring daily rainfall Interviewing a householder to obtain his or her opinion on a greenbelt zoning ordinance
4.2 EVENTS AND THE SAMPLE SPACE
• •
Testing a printed circuit board to determine whether it is a defective product or an acceptable product Tossing a coin and observing the face that appears
When an experiment is performed, what we observe is an outcome called a simple event, often denoted by the capital E with a subscript. A simple event is the outcome that is observed on a single
repetition of the experiment. Definition
Experiment: Toss a die and observe the number that appears on the upper face. List the simple events in the experiment. When the die is tossed once, there are six possible outcomes. There are the
simple events, listed below.
Event E1: Observe a 1 Event E2: Observe a 2 Event E3: Observe a 3
Event E4: Observe a 4 Event E5: Observe a 5 Event E6: Observe a 6
We can now define an event as a collection of simple events, often denoted by a capital letter. Definition
An event is a collection of simple events.
EXAMPLE co n tin u ed
We can define the events A and B for the die tossing experiment: A: Observe an odd number B: Observe a number less than 4 Since event A occurs if the upper face is 1, 3, or 5, it is a collection of
three simple events and we write A {E1, E3, E5}. Similarly, the event B occurs if the upper face is 1, 2, or 3 and is defined as a collection or set of these three simple events: B {E1, E2, E3}.
Sometimes when one event occurs, it means that another event cannot. Two events are mutually exclusive if, when one event occurs, the others cannot, and vice versa. Definition
In the die-tossing experiment, events A and B are not mutually exclusive, because they have two outcomes in common—if the number on the upper face of the die is a 1 or a 3. Both events A and B will
occur if either E1 or E3 is observed when the experiment is performed. In contrast, the six simple events E1, E2, . . . , E6 form a set of all mutually exclusive outcomes of the experiment. When the
experiment is performed once, one and only one of these simple events can occur. Definition
The set of all simple events is called the sample space, S.
Sometimes it helps to visualize an experiment using a picture called a Venn diagram, shown in Figure 4.1. The outer box represents the sample space, which contains all of the simple events,
represented by labeled points. Since an event is a collection of one or more simple events, the appropriate points are circled and labeled with the event letter. For the die-tossing experiment, the
sample space is S {E1, E2, E3, E4, E5, E6} or, more simply, S {1, 2, 3, 4, 5, 6}. The events A {1, 3, 5} and B {1, 2, 3} are circled in the Venn diagram.
FI GU R E 4 .1
Venn diagram for die tossing
● A
B E2
E1 E5
E3 E4
Experiment: Toss a single coin and observe the result. These are the simple events: E1: Observe a head (H) E2: Observe a tail (T) The sample space is S {E1, E2}, or, more simply, S {H, T}.
Experiment: Record a person’s blood type. The four mutually exclusive possible outcomes are these simple events: E1: Blood type A E2: Blood type B E3: Blood type AB E4: Blood type O The sample space
is S {E1, E2, E3, E4}, or S {A, B, AB, O}. Some experiments can be generated in stages, and the sample space can be displayed in a tree diagram. Each successive level of branching on the tree
corresponds to a step required to generate the final outcome.
A medical technician records a person’s blood type and Rh factor. List the simple events in the experiment. For each person, a two-stage procedure is needed to record the two variables of interest.
The tree diagram is shown in Figure 4.2. The eight simple events in the tree diagram form the sample space, S {A, A, B, B, AB, AB, O, O}.
An alternative way to display the simple events is to use a probability table, as shown in Table 4.1. The rows and columns show the possible outcomes at the first and second stages, respectively, and
the simple events are shown in the cells of the table.
F IGU R E 4 .2
Tree diagram for Example 4.4
Blood type
Rh factor
E1 : A+
A _
E2 : A–
E3 : B+
E4 : B– E5 : AB+
B + AB _ +
E6 : AB– E7 : O+
O _
TABLE 4.1
E8 : O–
Probability Table for Example 4.4 Blood Type
Rh Factor
Negative Positive
A A
B B
O O
CALCULATING PROBABILITIES USING SIMPLE EVENTS The probability of an event A is a measure of our belief that the event A will occur. One practical way to interpret this measure is with the concept of
relative frequency. Recall from Chapter 1 that if an experiment is performed n times, then the relative frequency of a particular occurrence—say, A—is Frequency Relative frequency n where the
frequency is the number of times the event A occurred. If you let n, the number of repetitions of the experiment, become larger and larger (n 씮 ), you will eventually generate the entire population.
In this population, the relative frequency of the event A is defined as the probability of event A; that is, Frequency P(A) lim n씮 n Since P(A) behaves like a relative frequency, P(A) must be a
proportion lying between 0 and 1; P(A) 0 if the event A never occurs, and P(A) 1 if the event A always occurs. The closer P(A) is to 1, the more likely it is that A will occur.
For example, if you tossed a balanced, six-sided die an infinite number of times, you would expect the relative frequency for any of the six values, x 1, 2, 3, 4, 5, 6, to be 1/6. Needless to say, it
would be very time-consuming, if not impossible, to repeat an experiment an infinite number of times. For this reason, there are alternative methods for calculating probabilities that make use of the
relative frequency concept. An important consequence of the relative frequency definition of probability involves the simple events. Since the simple events are mutually exclusive, their probabilities
must satisfy two conditions. REQUIREMENTS FOR SIMPLE-EVENT PROBABILITIES • •
Each probability must lie between 0 and 1. The sum of the probabilities for all simple events in S equals 1.
When it is possible to write down the simple events associated with an experiment and to determine their respective probabilities, we can find the probability of an event A by summing the
probabilities for all the simple events contained in the event A. The probability of an event A is equal to the sum of the probabilities of the simple events contained in A. Definition
Toss two fair coins and record the outcome. Find the probability of observing exactly one head in the two tosses. To list the simple events in the sample space, you can use a tree diagram as shown in
Figure 4.3. The letters H and T mean that you observed a head or a tail, respectively, on a particular toss. To assign probabilities to each of the four simple events, you need to remember that the
coins are fair. Therefore, any of the four simple events is as likely as any other. Since the sum of the four simple events must be 1, each must have probability P(Ei) 1/4. The simple events in the
sample space are shown in Table 4.2, along with their equally likely probabilities. To find P(A) P(observe exactly one head), you need to find all the simple events that result in event A—namely, E2
and E3: Solution
Probabilities must lie between 0 and 1.
P(A) P(E2) P(E3) 1 1 1 4 4 2
F IGU R E 4 .3
Tree diagram for Example 4.5
First coin
Second coin Head (H)
E1 = (HH)
Head (H) Tail (T) The probabilities of all the simple events must add to 1.
Head (H)
E2 = (HT) E3 = (TH)
Tail (T) Tail (T)
E4 = (TT)
TABLE 4.2
Simple Events and Their Probabilities Event
First Coin
Second Coin
P (Ei)
E1 E2 E3 E4
H H T T
H T H T
1/4 1/4 1/4 1/4
The proportions of blood phenotypes A, B, AB, and O in the population of all Caucasians in the United States are reported as .41, .10, .04, and .45, respectively.1 If a single Caucasian is chosen
randomly from the population, what is the probability that he or she will have either type A or type AB blood? The four simple events, A, B, AB, and O, do not have equally likely probabilities. Their
probabilities are found using the relative frequency concept as
P(A) .41
P(B) .10
P(AB) .04
P(O) .45
The event of interest consists of two simple events, so P(person is either type A or type AB) P(A) P(AB) .41 .04 .45 EXAMPLE
A candy dish contains one yellow and two red candies. You close your eyes, choose two candies one at a time from the dish, and record their colors. What is the probability that both candies are red?
draw 2
Y R2
Since no probabilities are given, you must list the simple events in the sample space. The two-stage selection of the candies suggests a tree diagram, shown in Figure 4.4. There are two red candies
in the dish, so you can use the letters R1, R2, and Y to indicate that you have selected the first red, the second red, or the yellow candy, respectively. Since you closed your eyes when you chose the
candies, all six choices should be equally likely and are assigned probability 1/6. If A is the event that both candies are red, then Solution
A {R1R2, R2R1} Thus, P(A) P(R1R2) P(R2R1) 1 1 1 6 6 3 FIGU R E 4 .4
Tree diagram for Example 4.7
A tree diagram helps to find simple events. Branch step toward outcome Following branches ⇒ list of simple events
First choice R1
Second choice
Simple event
R1 R 2
R1 Y
R2 R 1
R2 Y
Y R1
Y R2
CALCULATING THE PROBABILITY OF AN EVENT 1. 2. 3. 4.
List all the simple events in the sample space. Assign an appropriate probability to each simple event. Determine which simple events result in the event of interest. Sum the probabilities of the
simple events that result in the event of interest.
In your calculation, you must always be careful that you satisfy these two conditions: • •
Include all simple events in the sample space. Assign realistic probabilities to the simple events.
When the sample space is large, it is easy to unintentionally omit some of the simple events. If this happens, or if your assigned probabilities are wrong, your answers will not be useful in
practice. One way to determine the required number of simple events is to use the counting rules presented in the next optional section. These rules can be used to solve more complex problems, which
generally involve a large number of simple events. If you need to master only the basic concepts of probability, you may choose to skip the next section.
4.2 A sample space S consists of five simple events
4.1 Tossing a Die An experiment involves tossing a
with these probabilities:
single die. These are some events: A: Observe a 2 B: Observe an even number C: Observe a number greater than 2 D: Observe both A and B
P(E1) P(E2) .15 P(E3) .4 P(E4) 2P(E5) a. Find the probabilities for simple events E4 and E5. b. Find the probabilities for these two events:
E: Observe A or B or both
A {E1, E3, E4}
F: Observe both A and C
B {E2, E3}
a. List the simple events in the sample space. b. List the simple events in each of the events A through F. c. What probabilities should you assign to the simple events? d. Calculate the
probabilities of the six events A through F by adding the appropriate simple-event probabilities.
c. List the simple events that are either in event A or event B or both. d. List the simple events that are in both event A and event B. 4.3 A sample space contains 10 simple events: E1,
E2, . . . , E10. If P(E1) 3P(E2) .45 and the remaining simple events are equiprobable, find the probabilities of these remaining simple events.
4.4 Free Throws A particular basketball player hits
70% of her free throws. When she tosses a pair of free throws, the four possible simple events and three of their associated probabilities are as given in the table: Simple Event 1 2 3 4
Outcome of First Free Throw
Outcome of Second Free Throw
Hit Hit Miss Miss
Hit Miss Hit Miss
.49 ? .21 .09
4.5 Four Coins A jar contains four coins: a nickel, a dime, a quarter, and a half-dollar. Three coins are randomly selected from the jar. a. List the simple events in S. b. What is the probability
that the selection will contain the half-dollar? c. What is the probability that the total amount drawn will equal 60¢ or more? 4.6 Preschool or Not? On the first day of kindergarten, the teacher
randomly selects 1 of his 25 students and records the student’s gender, as well as whether or not that student had gone to preschool. a. How would you describe the experiment? b. Construct a tree
diagram for this experiment. How many simple events are there? c. The table below shows the distribution of the 25 students according to gender and preschool experience. Use the table to assign
probabilities to the simple events in part b. Male
containing three red and two yellow balls. Its color is noted, and the ball is returned to the bowl before a second ball is selected. List the additional five simple events that must be added to the
sample space in Exercise 4.7.
a. Find the probability that the player will hit on the first throw and miss on the second. b. Find the probability that the player will hit on at least one of the two free throws.
Preschool No Preschool
d. What is the probability that the randomly selected student is male? What is the probability that the student is a female and did not go to preschool? 4.7 The Urn Problem A bowl contains three red
and two yellow balls. Two balls are randomly selected and their colors recorded. Use a tree diagram to list the 20 simple events in the experiment, keeping in mind the order in which the balls are
drawn. 4.8 The Urn Problem, continued Refer to Exer-
cise 4.7. A ball is randomly selected from the bowl
APPLICATIONS 4.9 Need Eyeglasses? A survey classified a large number of adults according to whether they were judged to need eyeglasses to correct their reading vision and whether they used eyeglasses
when reading. The proportions falling into the four categories are shown in the table. (Note that a small proportion, .02, of adults used eyeglasses when in fact they were judged not to need them.)
Used Eyeglasses for Reading Judged to Need Eyeglasses
Yes No
.44 .02
.14 .40
If a single adult is selected from this large group, find the probability of each event: a. The adult is judged to need eyeglasses. b. The adult needs eyeglasses for reading but does not use them. c.
The adult uses eyeglasses for reading whether he or she needs them or not. 4.10 Roulette The game of roulette uses a wheel containing 38 pockets. Thirty-six pockets are numbered 1, 2, . . . , 36, and
the remaining two are marked 0 and 00. The wheel is spun, and a pocket is identified as the “winner.” Assume that the observance of any one pocket is just as likely as any other. a. Identify the
simple events in a single spin of the roulette wheel. b. Assign probabilities to the simple events. c. Let A be the event that you observe either a 0 or a 00. List the simple events in the event A
and find P(A). d. Suppose you placed bets on the numbers 1 through 18. What is the probability that one of your numbers is the winner? 4.11 Jury Duty Three people are randomly selected
from voter registration and driving records to report for jury duty. The gender of each person is noted by the county clerk.
136 ❍
a. Define the experiment. b. List the simple events in S. c. If each person is just as likely to be a man as a woman, what probability do you assign to each simple event? d. What is the probability
that only one of the three is a man? e. What is the probability that all three are women? 4.12 Jury Duty II Refer to Exercise 4.11. Suppose
that there are six prospective jurors, four men and two women, who might be impaneled to sit on the jury in a criminal case. Two jurors are randomly selected from these six to fill the two remaining
jury seats. a. List the simple events in the experiment (HINT: There are 15 simple events if you ignore the order of selection of the two jurors.) b. What is the probability that both impaneled
jurors are women? 4.13 Tea Tasters A food company plans to conduct an experiment to compare its brand of tea with that of two competitors. A single person is hired to taste and rank each of three
brands of tea, which are unmarked except for identifying symbols A, B, and C. a. Define the experiment. b. List the simple events in S. c. If the taster has no ability to distinguish a difference in
taste among teas, what is the probability that the taster will rank tea type A as the most desirable? As the least desirable?
4.15 Fruit Flies In a genetics experiment, the
researcher mated two Drosophila fruit flies and observed the traits of 300 offspring. The results are shown in the table. Wing Size Eye Color
Normal Vermillion
One of these offspring is randomly selected and observed for the two genetic traits. a. What is the probability that the fly has normal eye color and normal wing size? b. What is the probability that
the fly has vermillion eyes? c. What is the probability that the fly has either vermillion eyes or miniature wings, or both? 4.16 Creation Which of the following comes closest to your views on the
origin and development of human beings? Do you believe that human beings have developed over millions of years from less advanced forms of life, but that God has guided the process? Do you think that
human beings have developed over millions of years, and that God had no part in the process? Or do you believe that God created humans in their present form within the last 10,000 years or so? When
asked these questions, the proportions of Americans with varying opinions are approximately as shown in the table.2 Opinion Guided by God God had no part God created in present form No opinion
Proportion .36 .13 .46 .05
4.14 100-Meter Run Four equally qualified runners,
Source: Adapted from www.pollingreport.com
John, Bill, Ed, and Dave, run a 100-meter sprint, and the order of finish is recorded. a. How many simple events are in the sample space? b. If the runners are equally qualified, what probability
should you assign to each simple event? c. What is the probability that Dave wins the race? d. What is the probability that Dave wins and John places second? e. What is the probability that Ed
finishes last?
Suppose that one person is randomly selected and his or her opinion on this question is recorded. a. What are the simple events in the experiment? b. Are the simple events in part a equally likely?
If not, what are the probabilities? c. What is the probability that the person feels that God had some part in the creation of humans? d. What is the probability that the person feels that God had no
part in the process?
4.4 USEFUL COUNTING RULES (OPTIONAL)
USEFUL COUNTING RULES (OPTIONAL) Suppose that an experiment involves a large number N of simple events and you know that all the simple events are equally likely. Then each simple event has
probability 1/N, and the probability of an event A can be calculated as nA P(A) N where nA is the number of simple events that result in the event A. In this section, we present three simple rules
that can be used to count either N, the number of simple events in the sample space, or nA, the number of simple events in event A. Once you have obtained these counts, you can find P(A) without
actually listing all the simple events. THE mn RULE Consider an experiment that is performed in two stages. If the first stage can be accomplished in m ways and for each of these ways, the second
stage can be accomplished in n ways, then there are mn ways to accomplish the experiment. For example, suppose that you can order a car in one of three styles and in one of four paint colors. To find
out how many options are available, you can think of first picking one of the m 3 styles and then selecting one of the n 4 paint colors. Using the mn Rule, as shown in Figure 4.5, you have mn (3)(4)
12 possible options.
FIGU R E 4 .5
Style–color combinations
● Style
Color 1 2
Two dice are tossed. How many simple events are in the sample space S? Solution The first die can fall in one of m 6 ways, and the second die can fall in one of n 6 ways. Since the experiment involves
two stages, forming the pairs of numbers shown on the two faces, the total number of simple events in S is
mn (6)(6) 36
The Java applet called Tossing Dice gives a visual display of the 36 simple events described in Example 4.8. You can use this applet to find probabilities for any event involving the tossing of two
fair dice. By clicking on the appropriate dice combinations, we have found the probability of observing a sum of 3 on the upper faces to be 2/36 .056. What is the probability that the sum equals 4?
You will use this applet for the MyApplet Exercises at the end of the chapter. FI GU R E 4 .6
Tossing Dice applet
A candy dish contains one yellow and two red candies. Two candies are selected one at a time from the dish, and their colors are recorded. How many simple events are in the sample space S? draw 2
Y R2
The first candy can be chosen in m 3 ways. Since one candy is now gone, the second candy can be chosen in n 2 ways. The total number of simple events is
mn (3)(2) 6 These six simple events were listed in Example 4.7. We can extend the mn Rule for an experiment that is performed in more than two stages. THE EXTENDED mn RULE If an experiment is
performed in k stages, with n1 ways to accomplish the first stage, n2 ways to accomplish the second stage, . . . , and nk ways to accomplish the kth stage, then the number of ways to accomplish the
experiment is n1n2n3 nk
4.4 USEFUL COUNTING RULES (OPTIONAL)
How many simple events are in the sample space when three coins are tossed? Solution
Each coin can land in one of two ways. Hence, the number of simple
events is (2)(2)(2) 8
A truck driver can take three routes from city A to city B, four from city B to city C, and three from city C to city D. If, when traveling from A to D, the driver must drive from A to B to C to D,
how many possible A-to-D routes are available? Solution
n1 Number of routes from A to B 3 n2 Number of routes from B to C 4 n3 Number of routes from C to D 3 Then the total number of ways to construct a complete route, taking one subroute from each of the
three groups, (A to B), (B to C), and (C to D), is n1n2n3 (3)(4)(3) 36 A second useful counting rule follows from the mn Rule and involves orderings or permutations. For example, suppose you have
three books, A, B, and C, but you have room for only two on your bookshelf. In how many ways can you select and arrange the two books? There are three choices for the two books—A and B, A and C, or B
and C—but each of the pairs can be arranged in two ways on the shelf. All the permutations of the two books, chosen from three, are listed in Table 4.3. The mn Rule implies that there are 6 ways,
because the first book can be chosen in m 3 ways and the second in n 2 ways, so the result is mn 6.
TABLE 4.3
Permutations of Two Books Chosen from Three Combinations of Two
Reordering of Combinations
AB AC BC
BA CA CB
In how many ways can you arrange all three books on your bookshelf? These are the six permutations: ABC BCA
Since the first book can be chosen in n1 3 ways, the second in n2 2 ways, and the third in n3 1 way, the total number of orderings is n1n2n3 (3)(2)(1) 6. Rather than applying the mn Rule each time,
you can find the number of orderings using a general formula involving factorial notation.
A COUNTING RULE FOR PERMUTATIONS The number of ways we can arrange n distinct objects, taking them r at a time, is n! Prn (n r)! where n! n(n 1)(n 2) (3)(2)(1) and 0! 1. Since r objects are chosen,
this is an r-stage experiment. The first object can be chosen in n ways, the second in (n 1) ways, the third in (n 2) ways, and the rth in (n r 1) ways. We can simplify this awkward notation using the
counting rule for permutations because n! n(n 1)(n 2) (n r 1)(n r) (2)(1) (n r)! (n r) (2)(1) n(n 1) (n r 1) A SPECIAL CASE: ARRANGING n ITEMS The number of ways to arrange an entire set of n
distinct items is Pnn n!
Three lottery tickets are drawn from a total of 50. If the tickets will be distributed to each of three employees in the order in which they are drawn, the order will be important. How many simple
events are associated with the experiment? Solution
The total number of simple events is
50! P 50 3 50(49)(48) 117,600 47! EXAMPLE
A piece of equipment is composed of five parts that can be assembled in any order. A test is to be conducted to determine the time necessary for each order of assembly. If each order is to be tested
once, how many tests must be conducted? Solution
The total number of tests is
5! P 55 5(4)(3)(2)(1) 120 0! When we counted the number of permutations of the two books chosen for your bookshelf, we used a systematic approach: • •
First we counted the number of combinations or pairs of books to be chosen. Then we counted the number of ways to arrange the two chosen books on the shelf.
Sometimes the ordering or arrangement of the objects is not important, but only the objects that are chosen. In this case, you can use a counting rule for combinations. For example, you may not care
in what order the books are placed on the shelf, but
4.4 USEFUL COUNTING RULES (OPTIONAL)
only which books you are able to shelve. When a five-person committee is chosen from a group of 12 students, the order of choice is unimportant because all five students will be equal members of the
committee. A COUNTING RULE FOR COMBINATIONS The number of distinct combinations of n distinct objects that can be formed, taking them r at a time, is n! C rn r!(n r)! The number of combinations and
the number of permutations are related: Pn C rn r r! You can see that C rn results when you divide the number of permutations by r!, the number of ways of rearranging each distinct group of r objects
chosen from the total n. EXAMPLE
A printed circuit board may be purchased from five suppliers. In how many ways can three suppliers be chosen from the five? Since it is important to know only which three have been chosen, not the
order of selection, the number of ways is
(5)(4) 5! C 35 10 2 3!2! The next example illustrates the use of counting rules to solve a probability problem. EXAMPLE
Five manufacturers produce a certain electronic device, whose quality varies from manufacturer to manufacturer. If you were to select three manufacturers at random, what is the chance that the
selection would contain exactly two of the best three? The simple events in this experiment consist of all possible combinations of three manufacturers, chosen from a group of five. Of these five,
three have been designated as “best” and two as “not best.” You can think of a candy dish containing three red and two yellow candies, from which you will select three, as illustrated in Figure 4.7.
The total number of simple events N can be counted as the number of ways to choose three of the five manufacturers, or
5! N C 35 10 3!2! FIGU R E 4 .7
Illustration for Example 4.15
● Choose 3
3 “best” 2 “not best”
Since the manufacturers are selected at random, any of these 10 simple events will be equally likely, with probability 1/10. But how many of these simple events result in the event A: Exactly two of
the “best” three You can count nA, the number of events in A, in two steps because event A will occur when you select two of the “best” three and one of the two “not best.” There are 3! C 23 3 2!1!
ways to accomplish the first stage and 2! C 12 2 1!1! ways to accomplish the second stage. Applying the mn Rule, we find there are nA (3)(2) 6 of the 10 simple events in event A and P(A) nA/N 6/10.
Many other counting rules are available in addition to the three presented in this section. If you are interested in this topic, you should consult one of the many textbooks on combinatorial
BASIC TECHNIQUES 4.17 You have two groups of distinctly different
items, 10 in the first group and 8 in the second. If you select one item from each group, how many different pairs can you form? 4.18 You have three groups of distinctly different
items, four in the first group, seven in the second, and three in the third. If you select one item from each group, how many different triplets can you form? 4.19 Permutations Evaluate the following
permutations. (HINT: Your scientific calculator may have a function that allows you to calculate permutations and combinations quite easily.)
a. P 35
b. P 10 9
c. P 66
d. P 20 1
4.22 Choosing People, again In how many ways can you select two people from a group of 20 if the order of selection is not important? 4.23 Dice Three dice are tossed. How many simple
events are in the sample space? 4.24 Coins Four coins are tossed. How many simple
events are in the sample space? 4.25 The Urn Problem, again Three balls are selected from a box containing 10 balls. The order of selection is not important. How many simple events are in the sample
APPLICATIONS 4.26 What to Wear? You own 4 pairs of jeans, 12
4.20 Combinations Evaluate these combinations:
C 35
C 10 9
C 66
C 20 1
4.21 Choosing People In how many ways can you select five people from a group of eight if the order of selection is important?
clean T-shirts, and 4 wearable pairs of sneakers. How many outfits (jeans, T-shirt, and sneakers) can you create? 4.27 Itineraries A businessman in New York is
preparing an itinerary for a visit to six major cities. The distance traveled, and hence the cost of the trip,
4.4 USEFUL COUNTING RULES (OPTIONAL)
will depend on the order in which he plans his route. How many different itineraries (and trip costs) are possible? 4.28 Vacation Plans Your family vacation involves
a cross-country air flight, a rental car, and a hotel stay in Boston. If you can choose from four major air carriers, five car rental agencies, and three major hotel chains, how many options are
available for your vacation accommodations? 4.29 A Card Game Three students are playing a
card game. They decide to choose the first person to play by each selecting a card from the 52-card deck and looking for the highest card in value and suit. They rank the suits from lowest to highest:
clubs, diamonds, hearts, and spades. a. If the card is replaced in the deck after each student chooses, how many possible configurations of the three choices are possible? b. How many configurations
are there in which each student picks a different card? c. What is the probability that all three students pick exactly the same card? d. What is the probability that all three students pick
different cards? 4.30 Dinner at Gerard’s A French restaurant in Riverside, California, offers a special summer menu in which, for a fixed dinner cost, you can choose from one of two salads, one of two
entrees, and one of two desserts. How many different dinners are available? 4.31 Playing Poker Five cards are selected from a
52-card deck for a poker hand. a. How many simple events are in the sample space? b. A royal flush is a hand that contains the A, K, Q, J, and 10, all in the same suit. How many ways are there to get
a royal flush? c. What is the probability of being dealt a royal flush? 4.32 Poker II Refer to Exercise 4.31. You have a poker hand containing four of a kind.
a. How many possible poker hands can be dealt? b. In how many ways can you receive four cards of the same face value and one card from the other 48 available cards? c. What is the probability of
being dealt four of a kind?
4.33 A Hospital Survey A study is to be conducted in a hospital to determine the attitudes of nurses toward various administrative procedures. If a sample of 10 nurses is to be selected from a total
of 90, how many different samples can be selected? (HINT: Is order important in determining the makeup of the sample to be selected for the survey?) 4.34 Traffic Problems Two city council members are
to be selected from a total of five to form a subcommittee to study the city’s traffic problems. a. How many different subcommittees are possible? b. If all possible council members have an equal
chance of being selected, what is the probability that members Smith and Jones are both selected? 4.35 The WNBA Professional basketball is now a reality for women basketball players in the United
States. There are two conferences in the WNBA, each with seven teams, as shown in the table below. Western Conference
Eastern Conference
Houston Comets Minnesota Lynx Phoenix Mercury Sacramento Monarchs Los Angeles Sparks Seattle Storm San Antonio Silver Stars
Indiana Fever New York Liberty Washington Mystics Detroit Shock Charlotte Sting Connecticut Sun Chicago Sky
Two teams, one from each conference, are randomly selected to play an exhibition game. a. How many pairs of teams can be chosen? b. What is the probability that the two teams are Los Angeles and New
York? c. What is the probability that the Western Conference team is from California? 4.36 100-Meter Run, again Refer to Exercise 4.14,
in which a 100-meter sprint is run by John, Bill, Ed, and Dave. Assume that all of the runners are equally qualified, so that any order of finish is equally likely. Use the mn Rule or permutations to
answer these questions: a. How many orders of finish are possible? b. What is the probability that Dave wins the sprint? c. What is the probability that Dave wins and John places second? d. What is
the probability that Ed finishes last?
144 ❍
4.37 Gender Bias? The following case occurred in Gainesville, Florida. The eight-member Human Relations Advisory Board considered the complaint of a woman who claimed discrimination, based on her
gender, on the part of a local surveying company. The board, composed of five women and three men, voted 5–3 in favor of the plaintiff, the five women voting for the plaintiff and the three men
against. The attorney representing the company appealed the board’s decision by claiming gender bias on the part of the board members. If the vote in favor of the plaintiff was 5–3 and the board
members were not biased by gender, what is the probability that the vote would split along gender lines (five women for, three men against)?
4.38 Cramming A student prepares for an exam
by studying a list of 10 problems. She can solve 6 of them. For the exam, the instructor selects 5 questions at random from the list of 10. What is the probability that the student can solve all 5
problems on the exam? 4.39 Monkey Business A monkey is given 12 blocks: 3 shaped like squares, 3 like rectangles, 3 like triangles, and 3 like circles. If it draws three of each kind in order—say, 3
triangles, then 3 squares, and so on—would you suspect that the monkey associates identically shaped figures? Calculate the probability of this event.
EVENT RELATIONS AND PROBABILITY RULES Sometimes the event of interest can be formed as a combination of several other events. Let A and B be two events defined on the sample space S. Here are three
important relationships between events. The union of events A and B, denoted by A B, is the event that either A or B or both occur. Definition
The intersection of events A and B, denoted by A B, is the event that both A and B occur.† Definition
The complement of an event A, denoted by Ac, is the event that A does
not occur. Figures 4.8, 4.9, and 4.10 show Venn diagram representations of A B, A B, and Ac, respectively. Any simple event in the shaded area is a possible outcome resulting in the appropriate
event. One way to find the probabilities of the union, the intersection, or the complement is to sum the probabilities of all the associated simple events.
Some authors use the notation AB.
FIGURE 4.8
FIGURE 4.9
Venn diagram of A B
The complement of an event
Union ⇔ “either . . . or . . . or both” or just “or”
FIGU R E 4 .1 0
Venn diagram A B S
Intersection ⇔ “both . . . and” or just “and”
S Ac
Two fair coins are tossed, and the outcome is recorded. These are the events of interest: A: Observe at least one head B: Observe at least one tail Define the events A, B, A B, A B, and Ac as
collections of simple events, and find their probabilities. Solution
Recall from Example 4.5 that the simple events for this experiment are
E1: HH (head on first coin, head on second) E2: HT E3: TH E4: TT and that each simple event has probability 1/4. Event A, at least one head, occurs if E1, E2, or E3 occurs, so that A {E1, E2, E3}
3 P(A) 4
and Ac {E4}
1 P(Ac) 4
Similarly, 3 P(B) 4
B {E2, E3, E4} A B {E2, E3}
1 P(A B) 2
A B {E1, E2, E3, E4}
4 P(A B) 1 4
Note that (A B) S, the sample space, and is thus certain to occur. The concept of unions and intersections can be extended to more than two events. For example, the union of three events A, B, and C,
which is written as A B C, is the set of simple events that are in A or B or C or in any combination of those events. Similarly, the intersection of three events A, B, and C, which is written as A B
C, is the collection of simple events that are common to the three events A, B, and C.
Calculating Probabilities for Unions and Complements When we can write the event of interest in the form of a union, a complement, or an intersection, there are special probability rules that can
simplify our calculations. The first rule deals with unions of events. THE ADDITION RULE Given two events, A and B, the probability of their union, A B, is equal to P(A B) P(A) P(B) P(A B) Notice in
the Venn diagram in Figure 4.11 that the sum P(A) P(B) double counts the simple events that are common to both A and B. Subtracting P(A B) gives the correct result.
FIGU R E 4 .1 1
The Addition Rule
S A
When two events A and B are mutually exclusive or disjoint, it means that when A occurs, B cannot, and vice versa. This means that the probability that they both
occur, P(A B), must be zero. Figure 4.12 is a Venn diagram representation of two such events with no simple events in common. F I GU R E 4 .1 2
Two disjoint events
When two events A and B are mutually exclusive, then P(A B) 0 and the Addition Rule simplifies to
Remember, mutually exclusive ⇔ P (A B) 0.
P(A B) P(A) P(B) The second rule deals with complements of events. You can see from the Venn diagram in Figure 4.10 that A and Ac are mutually exclusive and that A Ac S, the entire sample space. It
follows that P(A) P(Ac ) 1 and P(Ac ) 1 P(A) RULE FOR COMPLEMENTS P(Ac ) 1 P(A)
TABLE 4.4
An oil-prospecting firm plans to drill two exploratory wells. Past evidence is used to assess the possible outcomes listed in Table 4.4.
Outcomes for Oil-Drilling Experiment Event A B C
Description Neither well produces oil or gas Exactly one well produces oil or gas Both wells produce oil or gas
Probability .80 .18 .02
Find P(A B) and P(B C). Solution By their definition, events A, B, and C are jointly mutually exclusive because the occurrence of one event precludes the occurrence of either of the other two.
P(A B) P(A) P(B) .80 .18 .98 and P(B C) P(B) P(C) .18 .02 .20
The event A B can be described as the event that at most one well produces oil or gas, and B C describes the event that at least one well produces gas or oil.
TABLE 4.5
In a telephone survey of 1000 adults, respondents were asked about the expense of a college education and the relative necessity of some form of financial assistance. The respondents were classified
according to whether they currently had a child in college and whether they thought the loan burden for most college students is too high, the right amount, or too little. The proportions responding
in each category are shown in the probability table in Table 4.5. Suppose one respondent is chosen at random from this group. ●
Probability Table
Child in College (D) No Child in College (E)
Too High (A)
Right Amount (B)
Too Little (C )
.35 .25
.08 .20
.01 .11
1. What is the probability that the respondent has a child in college? 2. What is the probability that the respondent does not have a child in college? 3. What is the probability that the respondent
has a child in college or thinks that the loan burden is too high? Table 4.5 gives the probabilities for the six simple events in the cells of the table. For example, the entry in the top left corner
of the table is the probability that a respondent has a child in college and thinks the loan burden is too high (A D).
1. The event that a respondent has a child in college will occur regardless of his or her response to the question about loan burden. That is, event D consists of the simple events in the first row: P
(D) .35 .08 .01 .44 In general, the probabilities of marginal events such as D and A are found by summing the probabilities in the appropriate row or column. 2. The event that the respondent does not
have a child in college is the complement of the event D denoted by D c. The probability of D c is found as P(D c ) 1 P(D) Using the result of part 1, we have P(D c ) 1 .44 .56 3. The event of
interest is P(A D). Using the Addition Rule P(A D) P(A) P(D) P(A D) .60 .44 .35 .69
INDEPENDENCE, CONDITIONAL PROBABILITY, AND THE MULTIPLICATION RULE There is a probability rule that can be used to calculate the probability of the intersection of several events. However, this rule
depends on the important statistical concept of independent or dependent events. Two events, A and B, are said to be independent if and only if the probability of event B is not influenced or changed
by the occurrence of event A, or vice versa. Definition
Colorblindness Suppose a researcher notes a person’s gender and whether or not the person is colorblind to red and green. Does the probability that a person is colorblind change depending on whether
the person is male or not? Define two events:
A: Person is a male B: Person is colorblind In this case, since colorblindness is a male sex-linked characteristic, the probability that a man is colorblind will be greater than the probability that
a person chosen from the general population will be colorblind. The probability of event B, that a person is colorblind, depends on whether or not event A, that the person is a male, has occurred. We
say that A and B are dependent events. Tossing Dice
On the other hand, consider tossing a single die two times, and
define two events: A: Observe a 2 on the first toss B: Observe a 2 on the second toss If the die is fair, the probability of event A is P(A) 1/6. Consider the probability of event B. Regardless of
whether event A has or has not occurred, the probability of observing a 2 on the second toss is still 1/6. We could write: P(B given that A occurred) 1/6 P(B given that A did not occur) 1/6 Since the
probability of event B is not changed by the occurrence of event A, we say that A and B are independent events. The probability of an event A, given that the event B has occurred, is called the
conditional probability of A, given that B has occurred, denoted by P(A兩B). The vertical bar is read “given” and the events appearing to the right of the bar are those that you know have occurred.
We will use these probabilities to calculate the probability that both A and B occur when the experiment is performed. THE GENERAL MULTIPLICATION RULE The probability that both A and B occur when the
experiment is performed is P(A B) P(A)P(B兩A) or P(A B) P(B)P(A兩B)
In a color preference experiment, eight toys are placed in a container. The toys are identical except for color—two are red, and six are green. A child is asked to choose two toys at random. What is
the probability that the child chooses the two red toys? You can visualize the experiment using a tree diagram as shown in Figure 4.13. Define the following events:
R: Red toy is chosen G: Green toy is chosen
FI GU R E 4 .1 3
Tree diagram for Example 4.19
First choice
Second choice Red (1/7)
Simple event
Red (2/8) Green (6/7) Red (2/7)
Green (6/8) Green (5/7)
The event A (both toys are red) can be constructed as the intersection of two events: A (R on first choice) (R on second choice) Since there are only two red toys in the container, the probability of
choosing red on the first choise is 2/8. However, once this red toy has been chosen, the probability of red on the second choice is dependent on the outcome of the first choice (see Figure 4.13). If
the first choice was red, the probability of choosing a second red toy is only 1/7 because there is only one red toy among the seven remaining. If the first choice was green, the probability of
choosing red on the second choice is 2/7 because there are two red toys among the seven remaining. Using this information and the Multiplication Rule, you can find the probability of event A. P(A) P(R
on first choice R on second choice) P(R on first choice) P(R on second choice)兩R on first)
2 1 2 1 8 7 56 28 Sometimes you may need to use the Multiplication Rule in a slightly different form, so that you can calculate the conditional probability, P(A円B). Just rearrange the terms in the
Multiplication Rule.
CONDITIONAL PROBABILITIES The conditional probability of event A, given that event B has occurred is P(A B) P(A兩B) if P(B) 0 P(B) The conditional probability of event B, given that event A has
occurred is P(A B) P(B兩A) P(A)
P(A) 0
Suppose that in the general population, there are 51% men and 49% women, and that the proportions of colorblind men and women are shown in the probability table below:
Colorblindness, continued
Women (BC)
Colorblind (A) Not Colorblind (AC)
.04 .47
.002 .488
.042 .958
If a person is drawn at random from this population and is found to be a man (event B), what is the probability that the man is colorblind (event A)? If we know that the event B has occurred, we must
restrict our focus to only the 51% of the population that is male. The probability of being colorblind, given that the person is male, is 4% of the 51%, or P(A B) .04 P(A兩B) .078 P(B) .51 What is
the probability of being colorblind, given that the person is female? Now we are restricted to only the 49% of the population that is female, and P(A BC) .002 P(A兩BC) .004 P(BC) .49 Notice that the
probability of event A changed, depending on whether event B occured. This indicates that these two events are dependent. When two events are independent—that is, if the probability of event B is the
same, whether or not event A has occurred, then event A does not affect event B and P(B兩A) P(B) The Multiplication Rule can now be simplified. THE MULTIPLICATION RULE FOR INDEPENDENT EVENTS If two
events A and B are independent, the probability that both A and B occur is P(A B) P(A)P(B) Similarly, if A, B, and C are mutually independent events (all pairs of events are independent), then the
probability that A, B, and C all occur is P(A B C) P(A)P(B)P(C)
Coin Tosses at Football Games A football team is involved in two overtime periods during a given game, so that there are three coins tosses. If the coin is fair, what is the probability that they
lose all three tosses? Solution
If the coin is fair, the event can be described in three steps:
A: lose the first toss B: lose the second toss C: lose the third toss Since the tosses are independent, and since P(win) P(lose) .5 for any of the three tosses, P(A B C) P(A)P(B)P(C) (.5)(.5)(.5) .125
How can you check to see if two events are independent or dependent? The easiest solution is to redefine the concept of independence in a more formal way. CHECKING FOR INDEPENDENCE Two events A and B
are said to be independent if and only if either P(A B) P(A)P(B) or P(B兩A) P(B) Otherwise, the events are said to be dependent. EXAMPLE
Toss two coins and observe the outcome. Define these events: A: Head on the first coin B: Tail on the second coin Are events A and B independent? From previous examples, you know that S {HH, HT, TH,
TT}. Use these four simple events to find
Remember, independence ⇔ P(A B) P (A)P(B).
1 1 1 P(A) , P(B) , and P(A B) . 2 2 4
1 1 1 1 Since P(A)P(B) and P(A B) , we have P(A)P(B) P(A B) 2 2 4 4 and the two events must be independent.
Refer to the probability table in Example 4.18, which is reproduced below.
Child in College (D) No Child in College (E )
Too High (A)
Right Amount (B)
Too Little (C )
.35 .25
.08 .20
.01 .11
Are events D and A independent? Explain.
1. Use the probability table to find P(A D) .35, P(A) .60, and P(D) .44. Then P(A)P(D) (.60)(.44) .264 and P(A D) .35 Since these two probabilities are not the same, events A and D are dependent. 2.
Alternately, calculate P(A D) .35 P(A兩 D) .80 P(D) .44 Since P(A兩 D) .80 and P(A) .60, we are again led to the conclusion that events A and D are dependent.
What’s the Difference between Mutually Exclusive and Independent Events? Many students find it hard to tell the difference between mutually exclusive and independent events. • When two events are
mutually exclusive or disjoint, they cannot both happen when the experiment is performed. Once the event B has occurred, event A cannot occur, so that P(A兩B) 0, or vice versa. The occurrence of
event B certainly affects the probability that event A can occur. •
Therefore, mutually exclusive events must be dependent.
When two events are mutually exclusive or disjoint, P(A B) 0 and P(A B) P(A) P(B).
When two events are independent, P(A B) P(A)P(B), and P(A B) P(A) P(B) P(A)P(B).
Exercise Reps Use the relationships above to fill in the blanks in the table below. P(A)
Conditions for Events A and B Mutually exclusive
P(A 8 B)
P(A 9 B)
.6 .10
Answers are located on the perforated card at the back of this book.
Using probability rules to calculate the probability of an event requires some experience and ingenuity. You need to express the event of interest as a union or intersection (or the combination of
both) of two or more events whose probabilities are known or easily calculated. Often you can do this in different ways; the key is to find the right combination.
Two cards are drawn from a deck of 52 cards. Calculate the probability that the draw includes an ace and a ten. Solution
Consider the event of interest: A: Draw an ace and a ten
Then A B C, where B: Draw the ace on the first draw and the ten on the second C: Draw the ten on the first draw and the ace on the second Events B and C were chosen to be mutually exclusive and also to
be intersections of events with known probabilities; that is, B B1 B2 and C C1 C2 where B1: Draw an ace on the first draw B2: Draw a ten on the second draw C1: Draw a ten on the first draw C2: Draw an
ace on the second draw Applying the Multiplication Rule, you get P(B1 B2) P(B1)P(B2兩B1)
4 4 52 51 and
4 4 P(C1 C2) 52 51 Then, applying the Addition Rule, P(A) P(B) P(C)
4 4 4 4 8 52 51 52 51 663 Check each composition carefully to be certain that it is actually equal to the event of interest.
EXERCISES EXERCISE REPS These exercises refer to the MyPersonal Trainer section on page 153. 4.40 Use event relationships to fill in the blanks in the table below. P(A)
Conditions for Events A and B
Mutually exclusive
P(A 8 B)
P(A 9 B)
.12 .7
4.41 Use event relationships to fill in the blanks in the table below. P(A)
Conditions for Events A and B
Mutually exclusive
P(A 8 B)
P(A 9 B)
.1 0
A: Observe a number less than 4
4.42 An experiment can result in one of five equally
B: Observe a number less than or equal to 2 C: Observe a number greater than 3
likely simple events, E1, E2, . . . , E5. Events A, B, and C are defined as follows: A: E1, E3
P(A) .4
B: E1, E2, E4, E5 C: E3, E4
P(B) .8 P(C) .4
Find the probabilities associated with these compound events by listing the simple events in each. b. A B c. B C a. Ac d. A B e. B兩C f. A兩B g. A B C h. (A B)c 4.43 Refer to Exercise 4.42. Use the
definition of a
complementary event to find these probabilities: a. P(Ac ) b. P((A B)c ) Do the results agree with those obtained in Exercise 4.42? 4.44 Refer to Exercise 4.42. Use the definition of
conditional probability to find these probabilities: a. P(A兩B) b. P(B兩C) Do the results agree with those obtained in Exercise 4.42? 4.45 Refer to Exercise 4.42. Use the Addition and
Multiplication Rules to find these probabilities: a. P(A B) b. P(A B) c. P(B C) Do the results agree with those obtained in Exercise 4.42?
Find the probabilities associated with the events below using either the simple event approach or the rules and definitions from this section. a. S b. A兩B c. B d. A B C e. A B f. A C g. B C h. A C i.
B C 4.48 Refer to Exercise 4.47.
a. Are events A and B independent? Mutually exclusive? b. Are events A and C independent? Mutually exclusive? 4.49 Suppose that P(A) .4 and P(B) .2. If events
A and B are independent, find these probabilities: a. P(A B) b. P(A B) 4.50 Suppose that P(A) .3 and P(B) .5. If
events A and B are mutually exclusive, find these probabilities: a. P(A B) b. P(A B) 4.51 Suppose that P(A) .4 and P(A B) .12.
a. Find P(B兩A). b. Are events A and B mutually exclusive? c. If P(B) .3, are events A and B independent? 4.52 An experiment can result in one or both of
events A and B with the probabilities shown in this probability table:
4.46 Refer to Exercise 4.42.
a. Are events A and B independent? b. Are events A and B mutually exclusive? 4.47 Dice An experiment consists of tossing a single
die and observing the number of dots that show on the upper face. Events A, B, and C are defined as follows:
B Bc
.34 .15
.46 .05
Find the following probabilities: a. P(A) d. P(A B)
b. P(B) e. P(A兩B)
c. P(A B) f. P(B兩A)
4.53 Refer to Exercise 4.52.
a. Are events A and B mutually exclusive? Explain. b. Are events A and B independent? Explain.
A: The offender has 10 or more years of education B: The offender is convicted within 2 years after completion of treatment
APPLICATIONS 4.54 Drug Testing Many companies are testing
prospective employees for drug use, with the intent of improving efficiency and reducing absenteeism, accidents, and theft. Opponents claim that this procedure is creating a class of unhirables and
that some persons may be placed in this class because the tests themselves are not 100% reliable. Suppose a company uses a test that is 98% accurate—that is, it correctly identifies a person as a drug
user or nonuser with probability .98—and to reduce the chance of error, each job applicant is required to take two tests. If the outcomes of the two tests on the same person are independent events,
what are the probabilities of these events? a. A nonuser fails both tests. b. A drug user is detected (i.e., he or she fails at least one test). c. A drug user passes both tests. 4.55 Grant Funding
Whether a grant proposal is
funded quite often depends on the reviewers. Suppose a group of research proposals was evaluated by a group of experts as to whether the proposals were worthy of funding. When these same proposals
were submitted to a second independent group of experts, the decision to fund was reversed in 30% of the cases. If the probability that a proposal is judged worthy of funding by the first peer review
group is .2, what are the probabilities of these events? a. A worthy proposal is approved by both groups. b. A worthy proposal is disapproved by both groups. c. A worthy proposal is approved by one
group. 4.56 Drug Offenders A study of the behavior of a
large number of drug offenders after treatment for drug abuse suggests that the likelihood of conviction within a 2-year period after treatment may depend on the offender’s education. The proportions
of the total number of cases that fall into four education/conviction categories are shown in the table below. Status Within 2 Years After Treatment Convicted
Not Convicted
10 Years or More 9 Years or Less
.10 .27
.30 .33
.40 .60
Suppose a single offender is selected from the treatment program. Here are the events of interest:
Find the appropriate probabilities for these events: a. A b. B c. A B d. A B e. Ac f. (A B)c g. (A B)c h. A given that B has occurred i. B given that A has occurred 4.57 Use the probabilities of
Exercise 4.56 to show
that these equalities are true: a. P(A B) P(A)P(B兩A) b. P(A B) P(B)P(A兩B) c. P(A B) P(A) P(B) P(A B) 4.58 The Birthday Problem Two people enter a
room and their birthdays (ignoring years) are recorded. a. Identify the nature of the simple events in S. b. What is the probability that the two people have a specific pair of birthdates? c. Identify
the simple events in event A: Both people have the same birthday. d. Find P(A). e. Find P(Ac). 4.59 The Birthday Problem, continued If n peo-
ple enter a room, find these probabilities: A: None of the people have the same birthday B: At least two of the people have the same birthday Solve for a. n 3
b. n 4
[NOTE: Surprisingly, P(B) increases rapidly as n increases. For example, for n 20, P(B) .411; for n 40, P(B) .891.] 4.60 Starbucks or Peet’s®? A college student fre-
quents one of two coffee houses on campus, choosing Starbucks 70% of the time and Peet’s 30% of the time. Regardless of where she goes, she buys a cafe mocha on 60% of her visits. a. The next time
she goes into a coffee house on campus, what is the probability that she goes to Starbucks and orders a cafe mocha? b. Are the two events in part a independent? Explain. c. If she goes into a coffee
house and orders a cafe mocha, what is the probability that she is at Peet’s?
d. What is the probability that she goes to Starbucks or orders a cafe mocha or both? 4.61 Inspection Lines A certain manufactured item is visually inspected by two different inspectors. When a
defective item comes through the line, the probability that it gets by the first inspector is .1. Of those that get past the first inspector, the second inspector will “miss” 5 out of 10. What fraction
of the defective items get by both inspectors? 4.62 Smoking and Cancer A survey of people in a given region showed that 20% were smokers. The probability of death due to lung cancer, given that a
person smoked, was roughly 10 times the probability of death due to lung cancer, given that a person did not smoke. If the probability of death due to lung cancer in the region is .006, what is the
probability of death due to lung cancer given that a person is a smoker? 4.63 Smoke Detectors A smoke-detector system
uses two devices, A and B. If smoke is present, the probability that it will be detected by device A is .95; by device B, .98; and by both devices, .94. a. If smoke is present, find the probability
that the smoke will be detected by device A or device B or both devices. b. Find the probability that the smoke will not be detected. 4.64 Plant Genetics Gregor Mendel was a monk who suggested in
1865 a theory of inheritance based on the science of genetics. He identified heterozygous individuals for flower color that had two alleles (one r recessive white color allele and one R dominant red
color allele). When these individuals were mated, 3/4 of the offspring were observed to have red flowers and 1/4 had white flowers. The table summarizes this mating; each parent gives one of its
alleles to form the gene of the offspring. Parent 2 Parent 1
r R
rr Rr
rR RR
We assume that each parent is equally likely to give either of the two alleles and that, if either one or two of the alleles in a pair is dominant (R), the offspring will have red flowers. a. What is
the probability that an offspring in this mating has at least one dominant allele? b. What is the probability that an offspring has at least one recessive allele?
c. What is the probability that an offspring has one recessive allele, given that the offspring has red flowers? 4.65 Soccer Injuries During the inaugural season of
Major League Soccer in the United States, the medical teams documented 256 injuries that caused a loss of participation time to the player. The results of this investigation, reported in The American
Journal of Sports Medicine, is shown in the table.3 Severity Minor (A) Moderate (B) Major (C) Total
Practice (P)
Game (G)
If one individual is drawn at random from this group of 256 soccer players, find the following probabilities: a. P(A) b. P(G) c. P(A G) d. P(G兩A) e. P(G兩B) f. P(G兩C) g. P(C兩P) h. P(Bc ) 4.66
Choosing a Mate Men and women often disagree on how they think about selecting a mate. Suppose that a poll of 1000 individuals in their twenties gave the following responses to the question of
whether it is more important for their future mate to be able to communicate their feelings (F ) than it is for that person to make a good living (G). Feelings (F )
Good Living (G)
Men (M) Women (W )
.35 .36
.20 .09
.55 .45
If an individual is selected at random from this group of 1000 individuals, calculate the following probabilities: a. P(F) d. P(F兩W)
b. P(G) e. P(M兩F)
c. P(F兩M) f. P(W兩G)
4.67 Jason and Shaq The two stars of the Miami Heat professional basketball team are very different when it comes to making free throws. ESPN.com reports that Jason Williams makes about 80% of his
free throws, while Shaquille O’Neal makes only 53% of his free throws.4 Assume that the free throws are independent, and that each player takes two free throws during a particular game. a. What is
the probability that Jason makes both of
his free throws?
158 ❍
b. What is the probability that Shaq makes exactly
Player A has probability 1/6 of winning the tournament if player B enters and probability 3/4 of winning if player B does not enter the tournament. If the probability that player B enters is 1/3, find
the probability that player A wins the tournament.
one of his two free throws? c. What is the probability that Shaq makes both of his free throws and Jason makes neither of his? 4.68 Golfing Player A has entered a golf tournament
but it is not certain whether player B will enter.
BAYES’ RULE (OPTIONAL)
Let us reconsider the experiment involving colorblindness from Section 4.6. Notice that the two events
B: the person selected is a man BC: the person selected is a woman taken together make up the sample space S, consisting of both men and women. Since colorblind people can be either male or female,
the event A, which is that a person is colorblind, consists of both those simple events that are in A and B and those simple events that are in A and BC. Since these two intersections are mutually
exclusive, you can write the event A as A (A B) (A BC) and P(A) P(A B) P(A BC) .04 .002 .042 Suppose now that the sample space can be partitioned into k subpopulations, S1, S2, S3, . . . , Sk, that,
as in the colorblindness example, are mutually exclusive and exhaustive; that is, taken together they make up the entire sample space. In a similar way, you can express an event A as A (A S1) (A S2 )
(A S3 ) (A Sk ) Then P(A) P(A S1) P(A S2 ) P(A S3 ) P(A Sk ) This is illustrated for k 3 in Figure 4.14. F IGU R E 4 .1 4
Decomposition of event A
S A∩S1
4.7 BAYES’ RULE (OPTIONAL)
You can go one step further and use the Multiplication Rule to write P(A Si) as P(Si)P(A兩Si), for i 1, 2, . . . , k. The result is known as the Law of Total Probability. LAW OF TOTAL PROBABILITY
Given a set of events S1, S2, S3, . . . , Sk that are mutually exclusive and exhaustive and an event A, the probability of the event A can be expressed as P(A) P(S1)P(A兩S1) P(S2)P(A兩S2) P(S3)P(A兩
S3) P(Sk)P(A兩Sk) EXAMPLE
TABLE 4.6
Sneakers are no longer just for the young. In fact, most adults own multiple pairs of sneakers. Table 4.6 gives the fraction of U.S. adults 20 years of age and older who own five or more pairs of
wearable sneakers, along with the fraction of the U.S. adult population 20 years or older in each of five age groups.5 Use the Law of Total Probability to determine the unconditional probability of an
adult 20 years and older owning five or more pairs of wearable sneakers. ●
Probability Table Groups and Ages
Fraction with 5 Pairs Fraction of U.S. Adults 20 and Older
G1 20–24
G2 25–34
G3 35–49
G4 50–64
G5 65
.26 .09
.20 .20
.13 .31
.18 .23
.14 .17
Let A be the event that a person chosen at random from the U.S. adult population 18 years of age and older owns five or more pairs of wearable sneakers. Let G1, G2, . . . , G5 represent the event that
the person selected belongs to each of the five age groups, respectively. Since the five groups are exhaustive, you can write the event A as
A (A G1) (A G2) (A G3) (A G4) (A G5) Using the Law of Total Probability, you can find the probability of A as P(A) P(A G1) P(A G2) P(A G3) P(A G4) P(A G5) P(G1)P(A兩G1) P(G2)P(A兩G2) P(G3)P(A兩G3) P
(G4)P(A兩G4) P(G5)P(A兩G5) From the probabilities in Table 4.6, P(A) (.09)(.26) (.20)(.20) (.31)(.13) (.23)(.18) (.17)(.14) .0234 .0400 .0403 .0414 .0238 .1689 The unconditional probability that a
person selected at random from the population of U.S. adults 20 years of age and older owns at least five pairs of wearable sneakers is about .17. Notice that the Law of Total Probability is a
weighted average of the probabilities within each group, with weights .09, .20, .31, .23, and .17, which reflect the relative sizes of the groups. Often you need to find the conditional probability of
an event B, given that an event A has occurred. One such situation occurs in screening tests, which used to be
associated primarily with medical diagnostic tests but are now finding applications in a variety of fields. Automatic test equipment is routinely used to inspect parts in highvolume production
processes. Steroid testing of athletes, home pregnancy tests, and AIDS testing are some other applications. Screening tests are evaluated on the probability of a false negative or a false positive,
and both of these are conditional probabilities. A false positive is the event that the test is positive for a given condition, given that the person does not have the condition. A false negative is
the event that the test is negative for a given condition, given that the person has the condition. You can evaluate these conditional probabilities using a formula derived by the probabilist Thomas
Bayes. The experiment involves selecting a sample from one of k subpopulations that are mutually exclusive and exhaustive. Each of these subpopulations, denoted by S1, S2, . . . , Sk, has a selection
probability P(S1), P(S2), P(S3), . . . , P(Sk), called prior probabilities. An event A is observed in the selection. What is the probability that the sample came from subpopulation Si, given that A
has occurred? You know from Section 4.6 that P(Si |A) [P(A Si)]/P(A), which can be rewritten as P(Si |A) [P(Si)P(A|Si)]/P(A). Using the Law of Total Probability to rewrite P(A), you have P(Si)P(A|Si)
P(Si |A) P(S1)P(A|S1) P(S2)P(A|S2) P(S3)P(A|S3) P(Sk)P(A|Sk) These new probabilities are often referred to as posterior probabilities—that is, probabilities of the subpopulations (also called states
of nature) that have been updated after observing the sample information contained in the event A. Bayes suggested that if the prior probabilities are unknown, they can be taken to be 1/k, which
implies that each of the events S1 through Sk is equally likely. BAYES’ RULE Let S1, S2, . . . , Sk represent k mutually exclusive and exhaustive subpopulations with prior probabilities P(S1), P(S2),
. . . , P(Sk). If an event A occurs, the posterior probability of Si given A is the conditional probability P(Si)P(A|Si) P(Si |A) k 冱 P(Sj)P(A|Sj) j1
for i 1, 2, . . . , k. EXAMPLE
Refer to Example 4.23. Find the probability that the person selected was 65 years of age or older, given that the person owned at least five pairs of wearable sneakers. Solution
You need to find the conditional probability given by
P(A G5) P(G5|A) P(A) You have already calculated P(A) .1689 using the Law of Total Probability. Therefore,
4.7 BAYES’ RULE (OPTIONAL)
P(G5|A) P(G5)P(A|G5) P(G1)P(A|G1) P(G2)P(A|G2) P(G3)P(A|G3) P(G4P(A|G4) P(G5)P(A|G5) (.17)(.14) (.09)(.26) (.20)(.20) (.31)(.13) (.23)(.18) (.17)(.14) .0238 .1409 .1689 In this case, the posterior
probability of .14 is somewhat less than the prior probability of .17 (from Table 4.6). This group a priori was the second smallest, and only a small proportion of this segment had five or more pairs
of wearable sneakers. What is the posterior probability for those aged 35 to 49? For this group of adults, we have (.31)(.13) P(G3|A) .2386 (.09)(.26) (.20)(.20) (.31)(.13) (.23)(.18) (.17)(.14) This
posterior probability of .24 is substantially less than the prior probability of .31. In effect, this group was a priori the largest segment of the population sampled, but at the same time, the
proportion of individuals in this group who had at least five pairs of wearable sneakers was the smallest of any of the groups. These two facts taken together cause a downward adjustment of almost
one-third in the a priori probability of .31.
BASIC TECHNIQUES 4.69 Bayes’ Rule A sample is selected from one of two populations, S1 and S2, with probabilities P(S1) .7 and P(S2) .3. If the sample has been selected from S1, the probability of
observing an event A is P(A兩S1) .2. Similarly, if the sample has been selected from S2, the probability of observing A is P(A兩S2) .3.
a. If a sample is randomly selected from one of the two populations, what is the probability that event A occurs? b. If the sample is randomly selected and event A is observed, what is the
probability that the sample was selected from population S1? From population S2? 4.70 Bayes’ Rule II If an experiment is conducted,
one and only one of three mutually exclusive events S1, S2, and S3 can occur, with these probabilities: P(S1) .2
P(S2) .5
P(S3) .3
The probabilities of a fourth event A occurring, given that event S1, S2, or S3 occurs, are P(A|S1) .2
P(A|S2) .1
P(A|S3) .3
If event A is observed, find P(S1|A), P(S2|A), and P(S3|A). 4.71 Law of Total Probability A population can be divided into two subgroups that occur with probabilities 60% and 40%, respectively. An
event A occurs 30% of the time in the first subgroup and 50% of the time in the second subgroup. What is the unconditional probability of the event A, regardless of which subgroup it comes from?
APPLICATIONS 4.72 Violent Crime City crime records show that
20% of all crimes are violent and 80% are nonviolent, involving theft, forgery, and so on. Ninety percent of violent crimes are reported versus 70% of nonviolent crimes.
162 ❍
a. What is the overall reporting rate for crimes in the city? b. If a crime in progress is reported to the police, what is the probability that the crime is violent? What is the probability that it
is nonviolent? c. Refer to part b. If a crime in progress is reported to the police, why is it more likely that it is a nonviolent crime? Wouldn’t violent crimes be more likely to be reported? Can
you explain these results? 4.73 Worker Error A worker-operated machine pro-
duces a defective item with probability .01 if the worker follows the machine’s operating instructions exactly, and with probability .03 if he does not. If the worker follows the instructions 90% of
the time, what proportion of all items produced by the machine will be defective? 4.74 Airport Security Suppose that, in a particular city, airport A handles 50% of all airline traffic, and airports
B and C handle 30% and 20%, respectively. The detection rates for weapons at the three airports are .9, .8, and .85, respectively. If a passenger at one of the airports is found to be carrying a
weapon through the boarding gate, what is the probability that the passenger is using airport A? Airport C? 4.75 Football Strategies A particular football team is known to run 30% of its plays to the
left and 70% to the right. A linebacker on an opposing team notes that the right guard shifts his stance most of the time (80%) when plays go to the right and that he uses a balanced stance the
remainder of the time. When plays go to the left, the guard takes a balanced stance 90% of the time and the shift stance the remaining 10%. On a particular play, the linebacker notes that the guard
takes a balanced stance. a. What is the probability that the play will go to the left? b. What is the probability that the play will go to the right? c. If you were the linebacker, which direction
would you prepare to defend if you saw the balanced stance? 4.76 No Pass, No Play Many public schools are implementing a “no pass, no play” rule for athletes. Under this system, a student who fails a
course is disqualified from participating in extracurricular activities during the next grading period. Suppose the probability that an athlete who has not previously been disqualified will be
disqualified is .15 and the
probability that an athlete who has been disqualified will be disqualified again in the next time period is .5. If 30% of the athletes have been disqualified before, what is the unconditional
probability that an athlete will be disqualified during the next grading period? 4.77 Medical Diagnostics Medical case histories
indicate that different illnesses may produce identical symptoms. Suppose a particular set of symptoms, which we will denote as event H, occurs only when any one of three illnesses—A, B, or C—occurs.
(For the sake of simplicity, we will assume that illnesses A, B, and C are mutually exclusive.) Studies show these probabilities of getting the three illnesses: P(A) .01 P(B) .005 P(C) .02 The
probabilities of developing the symptoms H, given a specific illness, are P(H|A) .90 P(H|B) .95 P(H|C) .75 Assuming that an ill person shows the symptoms H, what is the probability that the person has
illness A? 4.78 Cheating on Your Taxes? Suppose 5% of all people filing the long income tax form seek deductions that they know are illegal, and an additional 2% incorrectly list deductions because
they are unfamiliar with income tax regulations. Of the 5% who are guilty of cheating, 80% will deny knowledge of the error if confronted by an investigator. If the filer of the long form is
confronted with an unwarranted deduction and he or she denies the knowledge of the error, what is the probability that he or she is guilty? 4.79 Screening Tests Suppose that a certain disease is
present in 10% of the population, and that there is a screening test designed to detect this disease if present. The test does not always work perfectly. Sometimes the test is negative when the
disease is present, and sometimes it is positive when the disease is absent. The table below shows the proportion of times that the test produces various results. Test Is Positive (P ) Disease
Present (D) Disease Absent (Dc)
Test Is Negative (N )
a. Find the following probabilities from the table: P(D), P(Dc ), P(N兩Dc ), P(N兩D). b. Use Bayes’ Rule and the results of part a to find P(D兩N). c. Use the definition of conditional probability to
find P(D兩N). (Your answer should be the same as the answer to part b.)
d. Find the probability of a false positive, that the test is positive, given that the person is disease-free. e. Find the probability of a false negative, that the test is negative, given that the
person has the disease. f. Are either of the probabilities in parts d or e large enough that you would be concerned about the reliability of this screening method? Explain.
DISCRETE RANDOM VARIABLES AND THEIR PROBABILITY DISTRIBUTIONS In Chapter 1, variables were defined as characteristics that change or vary over time and/or for different individuals or objects under
consideration. Quantitative variables generate numerical data, whereas qualitative variables generate categorical data. However, even qualitative variables can generate numerical data if the
categories are numerically coded to form a scale. For example, if you toss a single coin, the qualitative outcome could be recorded as “0” if a head and “1” if a tail.
Random Variables A numerically valued variable x will vary or change depending on the particular outcome of the experiment being measured. For example, suppose you toss a die and measure x, the
number observed on the upper face. The variable x can take on any of six values—1, 2, 3, 4, 5, 6—depending on the random outcome of the experiment. For this reason, we refer to the variable x as a
random variable. A variable x is a random variable if the value that it assumes, corresponding to the outcome of an experiment, is a chance or random event. Definition
You can think of many examples of random variables: • • •
x Number of defects on a randomly selected piece of furniture x SAT score for a randomly selected college applicant x Number of telephone calls received by a crisis intervention hotline during a
randomly selected time period
As in Chapter 1, quantitative random variables are classified as either discrete or continuous, according to the values that x can assume. It is important to distinguish between discrete and
continuous random variables because different techniques are used to describe their distributions. We focus on discrete random variables in the remainder of this chapter; continuous random variables
are the subject of Chapter 6.
Probability Distributions In Chapters 1 and 2, you learned how to construct the relative frequency distribution for a set of numerical measurements on a variable x. The distribution gave this
information about x:
• •
What values of x occurred How often each value of x occurred
You also learned how to use the mean and standard deviation to measure the center and variability of this data set. In this chapter, we defined probability as the limiting value of the relative
frequency as the experiment is repeated over and over again. Now we define the probability distribution for a random variable x as the relative frequency distribution constructed for the entire
population of measurements. The probability distribution for a discrete random variable is a formula, table, or graph that gives the possible values of x, and the probability p(x) associated with
each value of x. Definition
The values of x represent mutually exclusive numerical events. Summing p(x) over all values of x is equivalent to adding the probabilities of all simple events and therefore equals 1. REQUIREMENTS
0 p(x) 1 S p(x) 1
Toss two fair coins and let x equal the number of heads observed. Find the probability distribution for x.
The simple events for this experiment with their respective probabilities are listed in Table 4.7. Since E1 HH results in two heads, this simple event results in the value x 2. Similarly, the value x
1 is assigned to E2, and so on. Solution
TABLE 4.7
Simple Events and Probabilities in Tossing Two Coins Simple Event
Coin 1
Coin 2
E1 E2 E3 E4
H H T T
H T H T
1/4 1/4 1/4 1/4
For each value of x, you can calculate p(x) by adding the probabilities of the simple events in that event. For example, when x 0, 1 p(0) P(E4) 4 and when x 1, 1 p(1) P(E2) P(E3) 2
The values of x and their respective probabilities, p(x), are listed in Table 4.8. Notice that the probabilities add to 1. TABLE 4.8
Probability Distribution for x (x Number of Heads) x
Simple Events in x p(x)
E4 E2, E3 E1
1/4 1/2 1/4 S p(x) 1
The probability distribution in Table 4.8 can be graphed using the methods of Section 1.5 to form the probability histogram in Figure 4.15.† The three values of the random variable x are located on
the horizontal axis, and the probabilities p(x) are located on the vertical axis (replacing the relative frequencies used in Chapter 1). Since the width of each bar is 1, the area under the bar is
the probability of observing the particular value of x and the total area equals 1.
Probability histogram for Example 4.25
● 1/2
FI GU R E 4 .1 5
1 x
There are two Java applets that will allow you to approximate discrete probability distributions using simulation methods. That is, even though the probabilities p(x) can only be found as the
long-run relative frequencies when the experiment is repeated an infinite number of times, we can get close to these probabilities if we repeat the experiment a large number of times. The applets
called Flipping Fair Coins and Flipping Weighted Coins are two such simulations. The fastest way to generate the approximate probability distribution for x, the number of heads in n tosses of the
coin, is to repeat the experiment “100 at a Time,” using the button at the bottom of the applet. The probability distribution will build up rather quickly. You can approximate the values of p(x) and
compare to the actual values calculated using probability rules. We will use these applets for the MyApplet Exercises at the end of the chapter. †
The probability distribution in Table 4.8 can also be presented using a formula, which is given in Section 5.2.
F I GU R E 4 .1 6
Flipping Fair Coins applet
F IGU R E 4 .1 7
Flipping Weighted Coins applet
The Mean and Standard Deviation for a Discrete Random Variable The probability distribution for a discrete random variable looks very similar to the relative frequency distribution discussed in
Chapter 1. The difference is that the relative frequency distribution describes a sample of n measurements, whereas the probability distribution is constructed as a model for the entire population of
measurements. Just as the mean x苶 and the standard deviation s measured the center and spread of the sample data, you can calculate similar measures to describe the center and spread of the
population. The population mean, which measures the average value of x in the population, is also called the expected value of the random variable x. It is the value that you would expect to observe
on average if the experiment is repeated over and over again. The formula for calculating the population mean is easier to understand by example. Toss those two fair coins again, and let x be the
number of heads observed. We constructed this probability distribution for x: x
Suppose the experiment is repeated a large number of times—say, n 4,000,000 times. Intuitively, you would expect to observe approximately 1 million zeros, 2 million ones, and 1 million twos. Then the
average value of x would equal Sum of measurements 1,000,000(0) 2,000,000(1) 1,000,000(2) n 4,000,000
1 1 1 (0) (1) (2) 4 2 4 Note that the first term in this sum is (0)p(0), the second is equal to (1)p(1), and the third is (2)p(2). The average value of x, then, is 1 2 Sxp(x) 0 1 2 4 This result
provides some intuitive justification for the definition of the expected value of a discrete random variable x. Let x be a discrete random variable with probability distribution p(x). The mean or
expected value of x is given as
m E(x) S xp(x) where the elements are summed over all values of the random variable x. We could use a similar argument to justify the formulas for the population variance s 2 and the population
standard deviation s. These numerical measures describe the spread or variability of the random variable using the “average” or “expected value” of the squared deviations of the x-values from their
mean m. Let x be a discrete random variable with probability distribution p(x) and mean m. The variance of x is
s 2 E[(x m)2] S(x m)2p(x) where the summation is over all values of the random variable x.† The standard deviation s of a random variable x is equal to the positive square root of its variance.
An electronics store sells a particular model of computer notebook. There are only four notebooks in stock, and the manager wonders what today’s demand for this particular model will be. She learns
from the marketing department that the probability distribution for x, the daily demand for the laptop, is as shown in the table. Find the
It can be shown (proof omitted) that s 2 S(x m)2p(x) Sx 2p(x) m2. This result is analogous to the computing formula for the sum of squares of deviations given in Chapter 2.
mean, variance, and standard deviation of x. Is it likely that five or more customers will want to buy a laptop today? x
Table 4.9 shows the values of x and p(x), along with the individual terms used in the formulas for m and s 2. The sum of the values in the third column is
m S xp(x) (0)(.10) (1)(.40) (5)(.05) 1.90 while the sum of the values in the fifth column is s 2 S(x m)2p(x) (0 1.9)2(.10) (1 1.9)2(.40) (5 1.9)2(.05) 1.79 and 苶2 兹1.79 苶 1.34 s 兹s TABLE 4.9
Calculations for Example 4.26 x
(x m)2
(x m)2 p(x)
.10 .40 .20 .15 .10 .05
.00 .40 .40 .45 .40 .25
3.61 .81 .01 1.21 4.41 9.61
.361 .324 .002 .1815 .441 .4805
m 1.90
s 2 1.79
The graph of the probability distribution is shown in Figure 4.18. Since the distribution is approximately mound-shaped, approximately 95% of all measurements should lie within two standard
deviations of the mean—that is, m 2s ⇒ 1.90 2(1.34)
or .78 to 4.58
Since x 5 lies outside this interval, you can say it is unlikely that five or more customers will want to buy a laptop today. In fact, P(x 5) is exactly .05, or 1 time in 20.
Probability distribution for Example 4.26
● .4
FI GU R E 4 .1 8
3 x
In a lottery conducted to benefit a local charity, 8000 tickets are to be sold at $10 each. The prize is a $24,000 automobile. If you purchase two tickets, what is your expected gain? Your gain x may
take one of two values. You will either lose $20 (i.e., your “gain” will be $20) or win $23,980, with probabilities 7998/8000 and 2/8000, respectively. The probability distribution for the gain x is
shown in the table:
$20 $23,980
7998/8000 2/8000
The expected gain will be m S xp(x)
7998 2 ($20) ($23,980) $14 8000 8000 Recall that the expected value of x is the average of the theoretical population that would result if the lottery were repeated an infinitely large number of
times. If this were done, your average or expected gain per lottery ticket would be a loss of $14.
Determine the yearly premium for a $10,000 insurance policy covering an event that, over a long period of time, has occurred at the rate of 2 times in 100. Let x equal the yearly financial gain to the
insurance company resulting from the sale of the policy, and let C equal the unknown yearly premium. Calculate the value of C such that the expected gain E(x) will equal zero. Then C is the premium
required to break even. To this, the company would add administrative costs and profit. The first step in the solution is to determine the values that the gain x may take and then to determine p(x). If
the event does not occur during the year, the insurance company will gain the premium of x C dollars. If the event does occur, the gain will be negative; that is, the company will lose $10,000 less
the premium of C dollars already collected. Then x (10,000 C) dollars. The probabilities associated with these two values of x are 98/100 and 2/100, respectively. The probability distribution for the
gain is shown in the table:
x Gain
C (10,000 C )
98/100 2/100
Since the company wants the insurance premium C such that, in the long run (for many similar policies), the mean gain will equal zero, you can set the expected value of x equal to zero and solve for
C. Then m E(x) Sxp(x)
98 2 C [10,000 C)] 0 100 100 or
98 2 C C 200 0 100 100 Solving this equation for C, you obtain C $200. Therefore, if the insurance company charged a yearly premium of $200, the average gain calculated for a large number of
similar policies would equal zero. The actual premium would equal $200 plus administrative costs and profit. The method for calculating the expected value of x for a continuous random variable is
similar to what you have done, but in practice it involves the use of calculus. Nevertheless, the basic results concerning expectations are the same for continuous and discrete random variables. For
example, regardless of whether x is continuous or discrete, m E(x) and s 2 E[(x m)2].
4.83 Probability Distribution II A random variable
4.80 Discrete or Continuous? Identify the follow-
x can assume five values: 0, 1, 2, 3, 4. A portion of the probability distribution is shown here:
ing as discrete or continuous random variables: a. Total number of points scored in a football game b. Shelf life of a particular drug c. Height of the ocean’s tide at a given location d. Length of a
2-year-old black bass e. Number of aircraft near-collisions in a year 4.81 Discrete or Continuous? II Identify the following as discrete or continuous random variables: a. Increase in length of life
attained by a cancer patient as a result of surgery b. Tensile breaking strength (in pounds per square inch) of 1-inch-diameter steel cable c. Number of deer killed per year in a state wildlife
preserve d. Number of overdue accounts in a department store at a particular time e. Your blood pressure 4.82 Probability Distribution I A random variable
x has this probability distribution: x
a. b. c. d.
Find p (4). Construct a probability histogram to describe p(x). Find m, s 2, and s. Locate the interval m 2s on the x-axis of the histogram. What is the probability that x will fall into this
interval? e. If you were to select a very large number of values of x from the population, would most fall into the interval m 2s? Explain.
a. Find p(3). b. Construct a probability histogram for p (x). c. Calculate the population mean, variance, and standard deviation. d. What is the probability that x is greater than 2? e. What is the
probability that x is 3 or less? 4.84 Dice Let x equal the number observed on the throw of a single balanced die. a. Find and graph the probability distribution for x. b. What is the average or
expected value of x? c. What is the standard deviation of x? d. Locate the interval m 2s on the x-axis of the graph in part a. What proportion of all the measurements would fall into this range? 4.85
Grocery Visits Let x represent the number of times a customer visits a grocery store in a 1-week period. Assume this is the probability distribution of x: x
Find the expected value of x, the average number of times a customer visits the store. APPLICATIONS 4.86 Letterman or Leno? Who is the king of late
night TV? An Internet survey estimates that, when given a choice between David Letterman and Jay Leno, 52% of the population prefers to watch Jay
Leno. Suppose that you randomly select three late night TV watchers and ask them which of the two talk show hosts they prefer. a. Find the probability distribution for x, the number of people in the
sample of three who would prefer Jay Leno. b. Construct the probability histogram for p(x). c. What is the probability that exactly one of the three would prefer Jay Leno? d. What are the population
mean and standard deviation for the random variable x? 4.87 Which Key Fits? A key ring contains four office
keys that are identical in appearance, but only one will open your office door. Suppose you randomly select one key and try it. If it does not fit, you randomly select one of the three remaining keys.
If it does not fit, you randomly select one of the last two. Each different sequence that could occur in selecting the keys represents one of a set of equiprobable simple events. a. List the simple
events in S and assign probabilities to the simple events. b. Let x equal the number of keys that you try before you find the one that opens the door (x 1, 2, 3, 4). Then assign the appropriate value
of x to each simple event. c. Calculate the values of p(x) and display them in a table. d. Construct a probability histogram for p(x). 4.88 Roulette Exercise 4.10 described the game of roulette.
Suppose you bet $5 on a single number—say, the number 18. The payoff on this type of bet is usually 35 to 1. What is your expected gain? 4.89 Gender Bias? A company has five applicants
for two positions: two women and three men. Suppose that the five applicants are equally qualified and that no preference is given for choosing either gender. Let x equal the number of women chosen to
fill the two positions. a. Find p (x). b. Construct a probability histogram for x. 4.90 Defective Equipment A piece of electronic
equipment contains six computer chips, two of which are defective. Three chips are selected at random, removed from the piece of equipment, and inspected. Let x equal the number of defectives
observed, where x 0, 1, or 2. Find the probability distribution for x. Express the results graphically as a probability histogram.
4.91 Drilling Oil Wells Past experience has shown that, on the average, only 1 in 10 wells drilled hits oil. Let x be the number of drillings until the first success (oil is struck). Assume that the
drillings represent independent events.
a. Find p(1), p(2), and p(3). b. Give a formula for p(x). c. Graph p(x). 4.92 Tennis, Anyone? Two tennis professionals, A
and B, are scheduled to play a match; the winner is the first player to win three sets in a total that cannot exceed five sets. The event that A wins any one set is independent of the event that A wins
any other, and the probability that A wins any one set is equal to .6. Let x equal the total number of sets in the match; that is, x 3, 4, or 5. Find p(x). 4.93 Tennis, again The probability that
player A can win a set against tennis player B is one measure of the comparative abilities of the two players. In Exercise 4.92 you found the probability distribution for x, the number of sets
required to play a best-of-five-sets match, given that the probability that A wins any one set—call this P(A)—is .6. a. Find the expected number of sets required to complete the match for P(A) .6. b.
Find the expected number of sets required to complete the match when the players are of equal ability—that is, P(A) .5. c. Find the expected number of sets required to complete the match when the
players differ greatly in ability—that is, say, P(A) .9. 4.94 The PGA One professional golfer plays best on
short-distance holes. Experience has shown that the numbers x of shots required for 3-, 4-, and 5-par holes have the probability distributions shown in the table: Par-3 Holes
Par-4 Holes
Par-5 Holes
.12 .80 .06 .02
.14 .80 .04 .02
.04 .80 .12 .04
What is the golfer’s expected score on these holes? a. A par-3 hole b. A par-4 hole c. A par-5 hole 4.95 Insuring Your Diamonds You can insure a $50,000 diamond for its total value by paying a
premium of D dollars. If the probability of theft in a given year is estimated to be .01, what premium should the insurance company charge if it wants the expected gain to equal $1000? 4.96 FDA
Testing The maximum patent life for a new drug is 17 years. Subtracting the length of time required by the FDA for testing and approval of the drug provides the actual patent life of the drug—that
is, the length of time that a company has to recover research and development costs and make a profit. Suppose the distribution of the lengths of patent life for new drugs is as shown here: Years, x
Years, x
a. Find the expected number of years of patent life for a new drug. b. Find the standard deviation of x. c. Find the probability that x falls into the interval m 2s. 4.97 Coffee Breaks Are you a
coffee drinker? If so, how many coffee breaks do you take when you are at work or at school? Most coffee drinkers take a little time for their favorite beverage, and many take more than one coffee
break every day. The table below, adapted from a Snapshot in USA Today shows the probability distribution for x, the number of daily coffee breaks taken per day by coffee drinkers.6 x
a. What is the probability that a randomly selected coffee drinker would take no coffee breaks during the day? b. What is the probability that a randomly selected coffee drinker would take more than
two coffee breaks during the day? c. Calculate the mean and standard deviation for the random variable x. d. Find the probability that x falls into the interval m 2s. 4.98 Shipping Charges From
experience, a shipping company knows that the cost of delivering a small package within 24 hours is $14.80. The company charges $15.50 for shipment but guarantees to refund the charge if delivery is
not made within 24 hours. If the company fails to deliver only 2% of its packages within the 24-hour period, what is the expected gain per package? 4.99 Actuaries A manufacturing representative is
considering taking out an insurance policy to cover possible losses incurred by marketing a new product. If the product is a complete failure, the representative feels that a loss of $800,000 would
be incurred; if it is only moderately successful, a loss of $250,000 would be incurred. Insurance actuaries have determined from market surveys and other available information that the probabilities
that the product will be a failure or only moderately successful are .01 and .05, respectively. Assuming that the manufacturing representative is willing to ignore all other possible losses, what
premium should the insurance company charge for a policy in order to break even?
CHAPTER REVIEW Key Concepts and Formulas I.
Experiments and the Sample Space
1. Experiments, events, mutually exclusive events, simple events 2. The sample space 3. Venn diagrams, tree diagrams, probability tables II. Probabilities
1. Relative frequency definition of probability
2. Properties of probabilities a. Each probability lies between 0 and 1. b. Sum of all simple-event probabilities equals 1. 3. P(A), the sum of the probabilities for all simple events in A III.
Counting Rules
1. mn Rule; extended mn Rule
n! 2. Permutations: P nr (n r)!
6. Multiplication Rule: P(A B) P(A)P(B|A) 7. Law of Total Probability
n! 3. Combinations: C nr r!(n r)!
8. Bayes’ Rule
IV. Event Relations
V. Discrete Random Variables and Probability Distributions
1. Unions and intersections 2. Events
1. Random variables, discrete and continuous 2. Properties of probability distributions
a. Disjoint or mutually exclusive: P(A B) 0 b. Complementary: P(A) 1 P(Ac)
a. 0 p(x) 1 b. Sp(x) 1
P(A B) 3. Conditional probability: P(A|B) P(B) 4. Independent and dependent events 5. Addition Rule: P(A B) P(A) P(B) P(A B)
3. Mean or expected value of a discrete random variable: m Sxp(x) 4. Variance and standard deviation of a discrete random variable: s 2 S(x m)2p(x) and s 兹苶 s2
Discrete Probability Distributions Although MINITAB cannot help you solve the types of general probability problems presented in this chapter, it is useful for graphing the probability distribution p
(x) for a general discrete random variable x when the probabilities are known, and for calculating the mean, variance, and standard deviation of the random variable x. In Chapters 5 and 6, we will
use MINITAB to calculate exact probabilities for three special cases: the binomial, the Poisson, and the normal random variables. Suppose you have this general probability distribution: x
Enter the values of x and p(x) into columns C1 and C2 of a new MINITAB worksheet. In the gray boxes just below C3, C4, and C5, respectively, type the names “Mean,” “Variance,” and “Std Dev.” You can
now use the Calc 씮 Calculator command to calculate m, s 2, and s and to store the results in columns C3–C5 of the worksheet. Use the same approach for the three parameters. In the Calculator dialog
box, select “Mean” as the column in which to store m. In the Expression box, use the Functions list, the calculator keys, and the variables list on the left to highlight, select, and create the
expression for the mean (see Figure 4.19): SUM(‘x’*‘p(x)’)
FI GU R E 4 .1 9
MINITAB will multiply each row element in C1 times the corresponding row element in C2, sum the resulting products, and store the result in C3! You can check the result by hand if you like. The
formulas for the variance and standard deviation are selected in a similar way: Variance: SUM((‘x’ ‘Mean’)**2*‘p(x)’) Std Dev: SQRT(‘Variance’) To see the tabular form of the probability distribution
and the three parameters, use Data 씮 Display Data and select all five columns. Click OK and the results will be displayed in the Session window, as shown in Figure 4.20. The probability histogram can
be plotted using the MINITAB command Graph 씮 Scatterplot 씮 Simple 씮 OK. In the Scatterplot dialog box (Figure 4.21), select ‘p(x)’ for Y variables and ‘x’ for X variables. To display the discrete
probability bars, click on Data View, uncheck the box marked “Symbols,” and check the box marked “Project Lines.” Click OK twice to see the plot. You will see a single straight line projected at each
of the four values of x. If you want the plot to look more like the discrete probability histograms in Section 4.8, position your cursor on one of the lines, right-click the mouse and choose “Edit
Project Lines.” Under the “Attributes” tab, select Custom and change the line size to 75. Click OK. If the bar width is not satisfactory, you can readjust the line size. Finally, right-click on the
X-axis, choose “Edit X Scale” and select .5 and 5.5 for the minimum and maximum Scale Ranges. Click OK. The probability histogram is shown in Figure 4.22. Locate the mean on the graph. Is it at the
center of the distribution? If you mark off two standard deviations on either side of the mean, do most of the possible values of x fall into this interval?
FIGU R E 4 .2 0
F IGU R E 4 .2 1
F IGU R E 4 .2 2
176 ❍
Supplementary Exercises Starred (*) exercises are optional. 4.100 Playing the Slots A slot machine has three slots; each will show a cherry, a lemon, a star, or a bar when spun. The player wins if
all three slots show the same three items. If each of the four items is equally likely to appear on a given spin, what is your probability of winning? 4.101 Whistle Blowers “Whistle blowers” is the
name given to employees who report corporate fraud, theft, or other unethical and perhaps criminal activities by fellow employees or by their employer. Although there is legal protection for whistle
blowers, it has been reported that approximately 23% of those who reported fraud suffered reprisals such as demotion or poor performance ratings. Suppose the probability that an employee will fail to
report a case of fraud is .69. Find the probability that a worker who observes a case of fraud will report it and will subsequently suffer some form of reprisal. 4.102 Aspirin Two cold tablets are
placed in a box containing two aspirin tablets. The four tablets are identical in appearance. One tablet is selected at random from the box and is swallowed by the first patient. A tablet is then
selected at random from the three remaining tablets and is swallowed by the second patient. Define the following events as specific collections of simple events: a. The sample space S b. The event A
that the first patient obtained a cold tablet c. The event B that exactly one of the two patients obtained a cold tablet d. The event C that neither patient obtained a cold tablet 4.103 Refer to
Exercise 4.102. By summing the probabilities of simple events, find P(A), P(B), P(A B), P(A B), P(C), P(A C), and P(A C). 4.104 DVRs A retailer sells two styles of highpriced digital video recorders
(DVR) that experience indicates are in equal demand. (Fifty percent of all potential customers prefer style 1, and 50% favor style 2.) If the retailer stocks four of each, what is the probability
that the first four customers seeking a DVR all purchase the same style?
to the purchaser, three are defective. Two of the seven are selected for thorough testing and are then classified as defective or nondefective. What is the probability that no defectives are found?
4.106 Heavy Equipment A heavy-equipment salesman can contact either one or two customers per day with probabilities 1/3 and 2/3, respectively. Each contact will result in either no sale or a $50,000
sale with probabilities 9/10 and 1/10, respectively. What is the expected value of his daily sales? 4.107 Fire Insurance A county containing a large
number of rural homes is thought to have 60% of those homes insured against fire. Four rural homeowners are chosen at random from the entire population, and x are found to be insured against fire. Find
the probability distribution for x. What is the probability that at least three of the four will be insured? 4.108 Fire Alarms A fire-detection device uses three temperature-sensitive cells acting
independently of one another in such a manner that any one or more can activate the alarm. Each cell has a probability p .8 of activating the alarm when the temperature reaches 100°F or higher. Let x
equal the number of cells activating the alarm when the temperature reaches 100°F. a. Find the probability distribution of x. b. Find the probability that the alarm will function when the temperature
reaches 100°F. c. Find the expected value and the variance for the random variable x. 4.109 Catching a Cold Is your chance of getting a cold influenced by the number of social contacts you have? A
study by Sheldon Cohen, a psychology professor at Carnegie Mellon University, seems to show that the more social relationships you have, the less susceptible you are to colds. A group of 276 healthy
men and women were grouped according to their number of relationships (such as parent, friend, church member, neighbor). They were then exposed to a virus that causes colds. An adaptation of the
results is shown in the table:7 Number of Relationships Three or Fewer
Four or Five
Six or More
4.105 Interstate Commerce A shipping container
Cold No Cold
contains seven complex electronic systems. Unknown
a. If one person is selected at random from the 276 people in the study, what is the probability that the person got a cold? b. If two people are randomly selected, what is the probability that one
has four or five relationships and the other has six or more relationships? c. If a single person is randomly selected and has a cold, what is the probability that he or she has three or fewer
relationships? 4.110 Plant Genetics Refer to the experiment
conducted by Gregor Mendel in Exercise 4.64. Suppose you are interested in following two independent traits in snap peas—seed texture (S smooth, s wrinkled) and seed color (Y yellow, y green)—in a
second-generation cross of heterozygous parents. Remember that the capital letter represents the dominant trait. Complete the table with the gene pairs for both traits. All possible pairings are
equally likely. Seed Color Seed Texture
(ss yy)
(ss yY)
4.112 Racial Bias? Four union men, two from a
minority group, are assigned to four distinctly different one-man jobs, which can be ranked in order of desirability. a. Define the experiment. b. List the simple events in S. c. If the assignment to
the jobs is unbiased—that is, if any one ordering of assignments is as probable as any other—what is the probability that the two men from the minority group are assigned to the least desirable jobs?
4.113 A Reticent Salesman A salesperson figures
that the probability of her consummating a sale during the first contact with a client is .4 but improves to .55 on the second contact if the client did not buy during the first contact. Suppose this
salesperson makes one and only one callback to any client. If she contacts a client, calculate the probabilities for these events: a. The client will buy. b. The client will not buy.
sS Ss SS
a. What proportion of the offspring from this cross will have smooth yellow peas? b. What proportion of the offspring will have smooth green peas? c. What proportion of the offspring will have
wrinkled yellow peas? d. What proportion of the offspring will have wrinkled green peas? e. Given that an offspring has smooth yellow peas, what is the probability that this offspring carries one s
allele? One s allele and one y allele? 4.111 Profitable Stocks An investor has the option of investing in three of five recommended stocks. Unknown to her, only two will show a substantial profit within
the next 5 years. If she selects the three stocks at random (giving every combination of three stocks an equal chance of selection), what is the probability that she selects the two profitable stocks?
What is the probability that she selects only one of the two profitable stocks?
4.114 Bus or Subway A man takes either a bus or
the subway to work with probabilities .3 and .7, respectively. When he takes the bus, he is late 30% of the days. When he takes the subway, he is late 20% of the days. If the man is late for work on
a particular day, what is the probability that he took the bus? 4.115 Guided Missiles The failure rate for a guided missile control system is 1 in 1000. Suppose that a duplicate, but completely
independent, control system is installed in each missile so that, if the first fails, the second can take over. The reliability of a missile is the probability that it does not fail. What is the
reliability of the modified missile? 4.116 Rental Trucks A rental truck agency services its vehicles on a regular basis, routinely checking for mechanical problems. Suppose that the agency has six
moving vans, two of which need to have new brakes. During a routine check, the vans are tested one at a time.
a. What is the probability that the last van with brake problems is the fourth van tested? b. What is the probability that no more than four vans need to be tested before both brake problems are
detected? c. Given that one van with bad brakes is detected in the first two tests, what is the probability that the remaining van is found on the third or fourth test?
178 ❍
4.117 Pennsylvania Lottery Probability played a role in the rigging of the April 24, 1980, Pennsylvania state lottery. To determine each digit of the three-digit winning number, each of the numbers
0, 1, 2, . . . , 9 is written on a Ping-Pong ball, the 10 balls are blown into a compartment, and the number selected for the digit is the one on the ball that floats to the top of the machine. To
alter the odds, the conspirators injected a liquid into all balls used in the game except those numbered 4 and 6, making it almost certain that the lighter balls would be selected and determine the
digits in the winning number. They then proceeded to buy lottery tickets bearing the potential winning numbers. How many potential winning numbers were there (666 was the eventual winner)? *4.118
Lottery, continued Refer to Exercise 4.117.
Hours after the rigging of the Pennsylvania state lottery was announced on September 19, 1980, Connecticut state lottery officials were stunned to learn that their winning number for the day was 666.
a. All evidence indicates that the Connecticut selection of 666 was pure chance. What is the probability that a 666 would be drawn in Connecticut, given that a 666 had been selected in the April 24,
1980, Pennsylvania lottery? b. What is the probability of drawing a 666 in the April 24, 1980, Pennsylvania lottery (remember, this drawing was rigged) and a 666 on the September 19, 1980,
Connecticut lottery? *4.119 ACL/MCL Tears The American Journal of
Sports Medicine published a study of 810 women collegiate rugby players who have a history of knee injuries. For these athletes, the two common knee injuries investigated were medial cruciate
ligament (MCL) sprains and anterior cruciate ligament (ACL) tears.8 For backfield players, it was found that 39% had MCL sprains and 61% had ACL tears. For forwards, it was found that 33% had MCL
sprains and 67% had ACL tears. Since a rugby team consists of eight forwards and seven backs, you can assume that 47% of the players with knee injuries are backs and 53% are forwards. a. Find the
unconditional probability that a rugby player selected at random from this group of players has experienced an MCL sprain. b. Given that you have selected a player who has an MCL sprain, what is the
probability that the player is a forward?
c. Given that you have selected a player who has an ACL tear, what is the probability that the player is a back? 4.120 MRIs Magnetic resonance imaging (MRI) is an accepted noninvasive test to
evaluate changes in the cartilage in joints. An article in The American Journal of Sports Medicine compared the results of MRI evaluation with arthroscopic surgical evaluation of cartilage tears at
two sites in the knees of 35 patients. The 2 35 70 examinations produced the classifications shown in the table.9 Actual tears were confirmed by arthroscopic surgical examination. Tears
No Tears
MRI Positive MRI Negative
a. What is the probability that a site selected at random has a tear and has been identified as a tear by MRI? b. What is the probability that a site selected at random has no tear and has been
identified as having a tear? c. What is the probability that a site selected at random has a tear and has not been identified by MRI? d. What is the probability of a positive MRI, given that there is a
tear? e. What is the probability of a false negative—that is, a negative MRI, given that there is a tear? 4.121 The Match Game Two men each toss a coin.
They obtain a “match” if either both coins are heads or both are tails. Suppose the tossing is repeated three times. a. What is the probability of three matches? b. What is the probability that all
six tosses (three for each man) result in tails? c. Coin tossing provides a model for many practical experiments. Suppose that the coin tosses represent the answers given by two students for three
specific true–false questions on an examination. If the two students gave three matches for answers, would the low probability found in part a suggest collusion? 4.122 Contract Negotiations Experience
shown that, 50% of the time, a particular union– management contract negotiation led to a contract
settlement within a 2-week period, 60% of the time the union strike fund was adequate to support a strike, and 30% of the time both conditions were satisfied. What is the probability of a contract
settlement given that the union strike fund is adequate to support a strike? Is settlement of a contract within a 2-week period dependent on whether the union strike fund is adequate to support a
strike? 4.123 Work Tenure Suppose the probability of remaining with a particular company 10 years or longer is 1/6. A man and a woman start work at the company on the same day.
a. What is the probability that the man will work there less than 10 years? b. What is the probability that both the man and the woman will work there less than 10 years? (Assume they are unrelated
and their lengths of service are independent of each other.) c. What is the probability that one or the other or both will work 10 years or longer? 4.124 Accident Insurance Accident records collected
by an automobile insurance company give the following information: The probability that an insured driver has an automobile accident is .15; if an accident has occurred, the damage to the vehicle
amounts to 20% of its market value with probability .80, 60% of its market value with probability .12, and a total loss with probability .08. What premium should the company charge on a $22,000 car
so that the expected gain by the company is zero? 4.125 Waiting Times Suppose that at a particular
supermarket the probability of waiting 5 minutes or longer for checkout at the cashier’s counter is .2. On a given day, a man and his wife decide to shop individually at the market, each checking out
at different cashier counters. They both reach cashier counters at the same time. a. What is the probability that the man will wait less than 5 minutes for checkout? b. What is probability that both
the man and his wife will be checked out in less than 5 minutes? (Assume that the checkout times for the two are independent events.) c. What is the probability that one or the other or both will
wait 5 minutes or longer? 4.126 Quality Control A quality-control plan calls
for accepting a large lot of crankshaft bearings if a
sample of seven is drawn and none are defective. What is the probability of accepting the lot if none in the lot are defective? If 1/10 are defective? If 1/2 are defective? 4.127 Mass Transit Only
40% of all people in a
community favor the development of a mass transit system. If four citizens are selected at random from the community, what is the probability that all four favor the mass transit system? That none
favors the mass transit system? 4.128 Blood Pressure Meds A research physician compared the effectiveness of two blood pressure drugs A and B by administering the two drugs to each of four pairs of
identical twins. Drug A was given to one member of a pair; drug B to the other. If, in fact, there is no difference in the effects of the drugs, what is the probability that the drop in the blood
pressure reading for drug A exceeds the corresponding drop in the reading for drug B for all four pairs of twins? Suppose drug B created a greater drop in blood pressure than drug A for each of the
four pairs of twins. Do you think this provides sufficient evidence to indicate that drug B is more effective in lowering blood pressure than drug A? 4.129 Blood Tests To reduce the cost of detecting
a disease, blood tests are conducted on a pooled sample of blood collected from a group of n people. If no indication of the disease is present in the pooled blood sample (as is usually the case),
none have the disease. If analysis of the pooled blood sample indicates that the disease is present, each individual must submit to a blood test. The individual tests are conducted in sequence. If,
among a group of five people, one person has the disease, what is the probability that six blood tests (including the pooled test) are required to detect the single diseased person? If two people have
the disease, what is the probability that six tests are required to locate both diseased people? 4.130 Tossing a Coin How many times should a coin be tossed to obtain a probability equal to or
greater than .9 of observing at least one head? 4.131 Flextime The number of companies offering flexible work schedules has increased as companies try to help employees cope with the demands of home
and work. One flextime schedule is to work four 10-hour shifts. However, a big obstacle to flextime schedules for workers paid hourly is state legislation on overtime. A survey provided the following
information for 220 firms located in two cities in California.
180 ❍
Flextime Schedule City
Not Available
A B
A company is selected at random from this pool of 220 companies. a. What is the probability that the company is located in city A? b. What is the probability that the company is located in city B and
offers flextime work schedules?
4.133 Pepsi™ or Coke™? A taste-testing experiment is conducted at a local supermarket, where passing shoppers are asked to taste two soft-drink samples—one Pepsi and one Coke—and state their
preference. Suppose that four shoppers are chosen at random and asked to participate in the experiment, and that there is actually no difference in the taste of the two brands. a. What is the
probability that all four shoppers choose Pepsi? b. What is the probability that exactly one of the four shoppers chooses Pepsi?
c. What is the probability that the company does not have flextime schedules? d. What is the probability that the company is located in city B, given that the company has flextime schedules available?
4.134 Viruses A certain virus afflicted the families
4.132 A Color Recognition Experiment An
4.135 Orchestra Politics The board of directors of a major symphony orchestra has voted to create a players’ committee for the purpose of handling employee complaints. The council will consist of the
president and vice president of the symphony board and two orchestra representatives. The two orchestra representatives will be randomly selected from a list of six volunteers, consisting of four men
and two women. a. Find the probability distribution for x, the number of women chosen to be orchestra representatives. b. Find the mean and variance for the random variable x.
experiment is run as follows—the colors red, yellow, and blue are each flashed on a screen for a short period of time. A subject views the colors and is asked to choose the one he feels was flashed for
the longest time. The experiment is repeated three times with the same subject. a. If all the colors were flashed for the same length of time, find the probability distribution for x, the number of
times that the subject chose the color red. Assume that his three choices are independent. b. Construct the probability histogram for the random variable x.
in 3 adjacent houses in a row of 12 houses. If three houses were randomly chosen from a row of 12 houses, what is the probability that the 3 houses would be adjacent? Is there reason to believe that
this virus is contagious?
c. What is the probability that both orchestra representatives will be women?
Exercises 4.136 Two fair dice are tossed. Use the Tossing Dice
applet to answer the following questions. a. What is the probability that the sum of the number of dots shown on the upper faces is equal to 7? To 11? b. What is the probability that you roll
“doubles”— that is, both dice have the same number on the upper face? c. What is the probability that both dice show an odd number?
4.137 If you toss a pair of dice, the sum T of the number of dots appearing on the upper faces of the dice can assume the value of an integer in the interval 2 T 12. a. Use the Tossing Dice applet to
find the probability distribution for T. Display this probability distribution in a table. b. Construct a probability histogram for p(T). How would you describe the shape of this distribution?
4.138 Access the Flipping Fair Coins applet. The experiment consists of tossing three fair coins and recording x, the number of heads. a. Use the laws of probability to write down the simple events
in this experiment. b. Find the probability distribution for x. Display the distribution in a table and in a probability histogram. c. Use the Flipping Fair Coins applet to simulate the probability
distribution—that is, repeat the cointossing experiment a large number of times until the relative frequency histogram is very close to the actual probability distribution. Start by performing the
experiment once (click ) to see what is happening. Then speed up the process by clicking . Generate at least 2000 values of x. Sketch the histogram that you have generated. d. Compare the histograms
in parts b and c. Does the simulation confirm your answer from part b? 4.139 Refer to Exercise 4.138.
a. If you were to toss only one coin, what would the probability distribution for x look like? b. Perform a simulation using the Flipping Fair Coins applet with n 1, and compare your results with
part a. 4.140 Refer to Exercise 4.138. Access the Flipping Weighted Coins applet. The experiment consists of tossing three coins that are not fair, and recording x, the number of heads.
a. Perform a simulation of the experiment using the Flipping Weighted Coins applet. Is the distribution symmetric or skewed? Which is more likely, heads or tails? b. Suppose that we do not know the
probability of getting a head, P(H). Write a formula for calculating the probability of no heads in three tosses. c. Use the approximate probability P(x 0) from your simulation and the results of
part b to approximate the value of P(T). What is the probability of getting a head?
Probability and Decision Making in the Congo In his exciting novel Congo, Michael Crichton describes a search by Earth Resources Technology Service (ERTS), a geological survey company, for deposits
of boroncoated blue diamonds, diamonds that ERTS believes to be the key to a new generation of optical computers.10 In the novel, ERTS is racing against an international consortium to find the Lost
City of Zinj, a city that thrived on diamond mining and existed several thousand years ago (according to African fable), deep in the rain forests of eastern Zaire. After the mysterious destruction of
its first expedition, ERTS launches a second expedition under the leadership of Karen Ross, a 24-year-old computer genius who is accompanied by Professor Peter Elliot, an anthropologist; Amy, a
talking gorilla; and the famed mercenary and expedition leader, “Captain” Charles Munro. Ross’s efforts to find the city are blocked by the consortium’s offensive actions, by the deadly rain forest,
and by hordes of “talking” killer gorillas whose perceived mission is to defend the diamond mines. Ross overcomes these obstacles by using space-age computers to evaluate the probabilities of success
for all possible circumstances and all possible actions that the expedition might take. At each stage of the expedition, she is able to quickly evaluate the chances of success. At one stage in the
expedition, Ross is informed by her Houston headquarters that their computers estimate that she is 18 hours and 20 minutes behind the competing Euro-Japanese team, instead of 40 hours ahead. She
changes plans and decides to
182 ❍
have the 12 members of her team—Ross, Elliot, Munro, Amy, and eight native porters—parachute into a volcanic region near the estimated location of Zinj. As Crichton relates, “Ross had double-checked
outcome probabilities from the Houston computer, and the results were unequivocal. The probability of a successful jump was .7980, meaning that there was approximately one chance in five that someone
would be badly hurt. However, given a successful jump, the probability of expedition success was .9943, making it virtually certain that they would beat the consortium to the site.” Keeping in mind
that this is an excerpt from a novel, let us examine the probability, .7980, of a successful jump. If you were one of the 12-member team, what is the probability that you would successfully complete
your jump? In other words, if the probability of a successful jump by all 12 team members is .7980, what is the probability that a single member could successfully complete the jump?
Several Useful Discrete Distributions GENERAL OBJECTIVES Discrete random variables are used in many practical applications. Three important discrete random variables—the binomial, the Poisson, and
the hypergeometric—are presented in this chapter. These random variables are often used to describe the number of occurrences of a specified event in a fixed number of trials or a fixed unit of time or
CHAPTER INDEX ● The binomial probability distribution (5.2) ● The hypergeometric probability distribution (5.4) ● The mean and variance for the binomial random variable (5.2) ● The Poisson
probability distribution (5.3)
How Do I Use Table 1 to Calculate Binomial Probabilities? How Do I Calculate Poisson Probabilities Using the Formula? How Do I Use Table 2 to Calculate Poisson Probabilities?
© Kim Steele/Photodisc/Getty Images
A Mystery: Cancers Near a Reactor Is the Pilgrim I nuclear reactor responsible for an increase in cancer cases in the surrounding area? A political controversy was set off when the Massachusetts
Department of Public Health found an unusually large number of cases in a 4-mile-wide coastal strip just north of the nuclear reactor in Plymouth, Massachusetts. The case study at the end of this
chapter examines how this question can be answered using one of the discrete probability distributions presented here.
184 ❍
INTRODUCTION Examples of discrete random variables can be found in a variety of everyday situations and across most academic disciplines. However, there are three discrete probability distributions
that serve as models for a large number of these applications. In this chapter we study the binomial, the Poisson, and the hypergeometric probability distributions and discuss their usefulness in
different physical situations.
THE BINOMIAL PROBABILITY DISTRIBUTION A coin-tossing experiment is a simple example of an important discrete random variable called the binomial random variable. Many practical experiments result in
data similar to the head or tail outcomes of the coin toss. For example, consider the political polls used to predict voter preferences in elections. Each sampled voter can be compared to a coin
because the voter may be in favor of our candidate— a “head”—or not—a “tail.” In most cases, the proportion of voters who favor our candidate does not equal 1/2; that is, the coin is not fair. In
fact, the proportion of voters who favor our candidate is exactly what the poll is designed to measure! Here are some other situations that are similar to the coin-tossing experiment: • • •
A sociologist is interested in the proportion of elementary school teachers who are men. A soft-drink marketer is interested in the proportion of cola drinkers who prefer her brand. A geneticist is
interested in the proportion of the population who possess a gene linked to Alzheimer’s disease.
Each sampled person is analogous to tossing a coin, but the probability of a “head” is not necessarily equal to 1/2. Although these situations have different practical objectives, they all exhibit
the common characteristics of the binomial experiment. Definition
A binomial experiment is one that has these five characteristics:
1. The experiment consists of n identical trials. 2. Each trial results in one of two outcomes. For lack of a better name, the one outcome is called a success, S, and the other a failure, F. 3. The
probability of success on a single trial is equal to p and remains the same from trial to trial. The probability of failure is equal to (1 p) q. 4. The trials are independent. 5. We are interested in
x, the number of successes observed during the n trials, for x 0, 1, 2, . . . , n. EXAMPLE
Suppose there are approximately 1,000,000 adults in a county and an unknown proportion p favor term limits for politicians. A sample of 1000 adults will be chosen in such a way that every one of the
1,000,000 adults has an equal chance of being selected, and each adult is asked whether he or she favors term limits. (The ultimate objective of this survey is to estimate the unknown proportion p, a
problem that we will discuss in Chapter 8.) Is this a binomial experiment?
Does the experiment have the five binomial characteristics?
1. A “trial” is the choice of a single adult from the 1,000,000 adults in the county. This sample consists of n 1000 identical trials. 2. Since each adult will either favor or not favor term limits,
there are two outcomes that represent the “successes” and “failures” in the binomial experiment.† 3. The probability of success, p, is the probability that an adult favors term limits. Does this
probability remain the same for each adult in the sample? For all practical purposes, the answer is yes. For example, if 500,000 adults in the population favor term limits, then the probability of a
“success” when the first adult is chosen is 500,000/1,000,000 1/2. When the second adult is chosen, the probability p changes slightly, depending on the first choice. That is, there will be either
499,999 or 500,000 successes left among the 999,999 adults. In either case, p is still approximately equal to 1/2. 4. The independence of the trials is guaranteed because of the large group of adults
from which the sample is chosen. The probability of an adult favoring term limits does not change depending on the responses of previously chosen people. 5. The random variable x is the number of
adults in the sample who favor term limits. Because the survey satisfies the five characteristics reasonably well, for all practical purposes it can be viewed as a binomial experiment.
A patient fills a prescription for a 10-day regimen of 2 pills daily. Unknown to the pharmacist and the patient, the 20 tablets consist of 18 pills of the prescribed medication and 2 pills that are
the generic equivalent of the prescribed medication. The patient selects two pills at random for the first day’s dosage. If we check the selection and record the number of pills that are generic, is
this a binomial experiment? Again, check the sampling procedure for the characteristics of a binomial experiment.
1. A “trial” is the selection of a pill from the 20 in the prescription. This experiment consists of n 2 trials. 2. Each trial results in one of two outcomes. Either the pill is generic (call this a
“success”) or not (a “failure”). 3. Since the pills in a prescription bottle can be considered randomly “mixed,” the unconditional probability of drawing a generic pill on a given trial would be 2/
20. 4. The condition of independence between trials is not satisfied, because the probability of drawing a generic pill on the second trial is dependent on the first trial. For example, if the first
pill drawn is generic, then there is only 1 generic pill in the remaining 19. Therefore, P(generic on trial 2兩generic on trial 1) 1/19 †
Although it is traditional to call the two possible outcomes of a trial “success” and “failure,” they could have been called “head” and “tail,” “red” and “white,” or any other pair of words.
Consequently, the outcome called a “success” does not need to be viewed as a success in the ordinary use of the word.
186 ❍
If the first selection does not result in a generic pill, then there are still 2 generic pills in the remaining 19, and the probability of a “success” (a generic pill) changes to P(generic on trial 2
兩no generic on trial 1) 2/19 Therefore the trials are dependent and the sampling does not represent a binomial experiment. Think about the difference between these two examples. When the sample (the
n identical trials) came from a large population, the probability of success p stayed about the same from trial to trial. When the population size N was small, the probability of success p changed
quite dramatically from trial to trial, and the experiment was not binomial. RULE OF THUMB If the sample size is large relative to the population size—in particular, if n/N .05—then the resulting
experiment is not binomial. In Chapter 4, we tossed two fair coins and constructed the probability distribution for x, the number of heads—a binomial experiment with n 2 and p .5. The general
binomial probability distribution is constructed in the same way, but the procedure gets complicated as n gets large. Fortunately, the probabilities p(x) follow a general pattern. This allows us to
use a single formula to find p(x) for any given value of x. THE BINOMIAL PROBABILITY DISTRIBUTION A binomial experiment consists of n identical trials with probability of success p on each trial. The
probability of k successes in n trials is n! P(x k) C nk p kq nk p kq nk k!(n k)! for values of k 0, 1, 2, . . . , n. The symbol C nk equals n! k!(n k)! where n! n(n 1)(n 2) (2)(1) and 0! ⬅ 1. The
general formulas for m, s 2, and s given in Chapter 4 can be used to derive the following simpler formulas for the binomial mean and standard deviation. MEAN AND STANDARD DEVIATION FOR THE BINOMIAL
RANDOM VARIABLE The random variable x, the number of successes in n trials, has a probability distribution with this center and spread: Mean: m np Variance: s 2 npq 苶 Standard deviation: s 兹npq
Find P(x 2) for a binomial random variable with n 10 and p .1.
P(x 2) is the probability of observing 2 successes and 8 failures in a sequence of 10 trials. You might observe the 2 successes first, followed by 8 consecutive failures:
S, S, F, F, F, F, F, F, F, F Since p is the probability of success and q is the probability of failure, this particular sequence has probability
n! n(n 1)(n 2) . . . (2)(1) For example, 5! 5(4)(3)(2)(1) 120
ppqqqqqqqq p 2q8 However, many other sequences also result in x 2 successes. The binomial formula uses C 10 2 to count the number of sequences and gives the exact probability when you use the
binomial formula with k 2:
and 0! ⬅ 1.
2 102 P(x 2) C 10 2 (.1) (.9) 10(9) 10! (.1)2(.9)8 (.01)(.430467) .1937 2(1) 2!(10 2)!
You could repeat the procedure in Example 5.3 for each value of x—0, 1, 2, . . . , 10—and find all the values of p(x) necessary to construct a probability histogram for x. This would be a long and
tedious job, but the resulting graph would look like Figure 5.1(a). You can check the height of the bar for x 2 and find p(2) P(x 2) .1937. The graph is skewed right; that is, most of the time you
will observe small values of x. The mean or “balancing point” is around x 1; in fact, you can use the formula to find the exact mean: m np 10(.1) 1 Figures 5.1(b) and 5.1(c) show two other binomial
distributions with n 10 but with different values of p. Look at the shapes of these distributions. When p .5, the distribution is exactly symmetric about the mean, m np 10(.5) 5. When p .9, the
distribution is the “mirror image” of the distribution for p .1 and is skewed to the left. FIGU R E 5 .1
Binomial probability distributions
.25 n = 10, p = .5 µ=5 σ = 1.58
.20 n = 10, p = .1 µ=1 σ = .95
.30 .20
.15 .10 .05 0
(b) 0
6 (a)
p(x) .40 n = 10, p = .9 µ=9 σ = .95
.30 .20 .10 0
6 (c)
188 ❍
Over a long period of time, it has been observed that a professional basketball player can make a free throw on a given trial with probability equal to .8. Suppose he shoots four free throws. 1. What
is the probability that he will make exactly two free throws? 2. What is the probability that he will make at least one free throw? A “trial” is a single free throw, and you can define a “success” as
a basket and a “failure” as a miss, so that n 4 and p .8. If you assume that the player’s chance of making the free throw does not change from shot to shot, then the number x of times that he makes
the free throw is a binomial random variable.
1. P(x 2) C 42(.8)2(.2)2 4(3)(2)(1) 4! (.64)(.04) (.64)(.04) .1536 2(1)(2)(1) 2!2! The probability is .1536 that he will make exactly two free throws. 2. P(at least one) P(x 1) p(1) p(2) p(3) p(4) 1
p(0) 1 C 40(.8)0(.2)4 1 .0016 .9984. Although you could calculate P(x 1), P(x 2), P(x 3) and P(x 4) to find this probability, using the complement of the event makes your job easier; that is, P(x 1) 1
P(x 1) 1 P(x 0). Can you think of any reason your assumption of independent trials might be wrong? If the player learns from his previous attempt (that is, he adjusts his shooting according to his
last attempt), then his probability p of making the free throw may change, possibly increase, from shot to shot. The trials would not be independent and the experiment would not be binomial.
Use Table 1 in Appendix I rather than the binomial formula whenever possible. This is an easier way!
Calculating binomial probabilities can become tedious even for relatively small values of n. As n gets larger, it becomes almost impossible without the help of a calculator or computer. Fortunately,
both of these tools are available to us. Computergenerated tables of cumulative binomial probabilities are given in Table 1 of Appendix I for values of n ranging from 2 to 25 and for selected values
of p. These probabilities can also be generated using MINITAB or the Java applets on the Premium Website. Cumulative binomial probabilities differ from the individual binomial probabilities that you
calculated with the binomial formula. Once you find the column of probabilities for the correct values of n and p in Table 1, the row marked k gives the sum of all the binomial probabilities from x 0
to x k. Table 5.1 shows part of Table 1 for n 5 and p .6. If you look in the row marked k 3, you will find P(x 3) p(0) p(1) p(2) p(3) .663
TABLE 5.1
Portion of Table 1 in Appendix I for n 5 p k
If the probability you need to calculate is not in this form, you will need to think of a way to rewrite your probability to make use of the tables!
Use the cumulative binomial table for n 5 and p .6 to find the probabilities of these events: 1. Exactly three successes 2. Three or more successes Solution
1. If you find k 3 in Table 5.1, the tabled value is P(x 3) p(0) p(1) p(2) p(3) Since you want only P(x 3) p(3), you must subtract out the unwanted probability: P(x 2) p(0) p(1) p(2) which is found in
Table 5.1 with k 2. Then P(x 3) P(x 3) P(x 2) .663 .317 .346 2. To find P(three or more successes) P(x 3) using Table 5.1, you must use the complement of the event of interest. Write P(x 3) 1 P(x 3) 1
P(x 2) You can find P(x 2) in Table 5.1 with k 2. Then P(x 3) 1 P(x 2) 1 .317 .683
190 ❍
How Do I Use Table 1 to Calculate Binomial Probabilities? 1. Find the necessary values of n and p. Isolate the appropriate column in Table 1. 2. Table 1 gives P(x k) in the row marked k. Rewrite the
probability you need so that it is in this form. •
List the values of x in your event.
From the list, write the event as either the difference of two probabilities:
P(x a) P(x b) for a b or the complement of the event: 1 P(x a) or just the event itself: P(x a) or P(x a) P(x a 1) Exercise Reps A. Consider a binomial random variable with n 5 and p .6. Isolate the
appropriate column in Table 1 and fill in the probabilities below. One of the probabilities, P(x 3) is filled in for you. k
P(x k)
B. Fill in the blanks in the table below. The second problem is done for you.
The Problem
List the Values of x
Write the Probability
Rewrite the Probability (if needed)
Find the Probability
4 or less 4 or more
4, 5
1 P(x 3)
1 .663 .337
More than 4 Fewer than 4 Between 2 and 4 (inclusive) Exactly 4
Progress Report •
Still having trouble? Try again using the Exercise Reps at the end of this section.
Mastered binomial probabilities? You can skip the Exercise Reps at the end of this section!
Answers are located on the perforated card at the back of this book.
The Java applet called Calculating Binomial Probabilities gives a visual display of the binomial distribution for values of n 100 and any p that you choose. You can use this applet to calculate
binomial probabilities for any value of x or for any interval a x b. To reproduce the results of Example 5.5, enter 5 in the box
labeled “n” and 0.6 in the box labeled “p,” pressing the “Enter” key after each entry. Next enter the beginning and ending values for x (if you need to calculate an individual probability, both
entries will be the same). The probability will be calculated and shaded in red on your monitor (light blue in Figure 5.2) when you press “Enter.” What is the probability of three or more successes
from Figure 5.2? Does this confirm our answer in Example 5.5? You will use this applet again for the MyApplet Exercises section at the end of the chapter. FIGU R E 5 .2
Calculating Binomial Probabilities applet
A regimen consisting of a daily dose of vitamin C was tested to determine its effectiveness in preventing the common cold. Ten people who were following the prescribed regimen were observed for a
period of 1 year. Eight survived the winter without a cold. Suppose the probability of surviving the winter without a cold is .5 when the vitamin C regimen is not followed. What is the probability of
observing eight or more survivors, given that the regimen is ineffective in increasing resistance to colds? If you assume that the vitamin C regimen is ineffective, then the probability p of
surviving the winter without a cold is .5. The probability distribution for x, the number of survivors, is Solution
x 10x p (x) C 10 x (.5) (.5)
You have learned four ways to find P(8 or more survivors) P(x 8). You will get the same results with any of the four; choose the most convenient method for your particular problem. 1. The binomial
formula: P(8 or more) p(8) p(9) p(10) 10 10 10 C 10 C10 C 10 8 (.5) 9 (.5) 10 (.5)
.055 2. The cumulative binomial tables: Find the column corresponding to p .5 in the table for n 10: P(8 or more) P(x 8) 1 P(x 7) 1 .945 .055
192 ❍
3. The Calculating Binomial Probabilities applet: Enter n 10, p .5 and calculate the probability that x is between 8 and 10. The probability, P(x 8) .0547, is shaded in red on your monitor (light
blue in Figure 5.3). FIGU R E 5 .3
Java applet for Example 5.6
4. Output from MINITAB: The output shown in Figure 5.4 gives the cumulative distribution function, which gives the same probabilities you found in the cumulative binomial tables. The probability
density function gives the individual binomial probabilities, which you found using the binomial formula. FI GU R E 5 .4
MINITAB output for Example 5.6
● Cumulative Distribution Function
Probability Density Function
Binomial with n = 10 and p = 0.5
Binomial with n = 10 and p = 0.5
x 0 1 2 3 4 5 6 7 8 9 10
x 0 1 2 3 4 5 6 7 8 9 10
P( X 1.96)
The p-value approach: Calculate the p-value, the probability that z is greater than z 1.84 plus the probability that z is less than z 1.84, as shown in Figure 9.11:
p-value P(z 1.84) P(z 1.84) (1 .9671) .0329 .0658 The p-value lies between .10 and .05, so you can reject H0 at the .10 level but not at the .05 level of significance. Since the p-value of .0658
exceeds the specified significance level a .05, H0 cannot be rejected. Again, you should not be willing to accept H0 until b is evaluated for some meaningful values of (m1 m2 ).
Hypothesis Testing and Confidence Intervals Whether you use the critical value or the p-value approach for testing hypotheses about (m1 m2 ), you will always reach the same conclusion because the
calculated value of the test statistic and the critical value are related exactly in the same way that the p-value and the significance level a are related. You might remember that the confidence
intervals constructed in Chapter 8 could also be used to answer questions about the difference between two population means. In fact, for a two-tailed test, the (1 a)100% confidence interval for the
parameter of interest can be used to test its value, just as you did informally in Chapter 8. The value of a indicated by the confidence coefficient in the confidence interval is equivalent to the
significance level a in the statistical test. For a one-tailed test, the equivalent confidence interval approach would use the one-sided confidence bounds in Section 8.8 with confidence coefficient a. In
addition, by using the confidence interval approach, you gain a range of possible values for the parameter of interest, regardless of the outcome of the test of hypothesis. •
If the confidence interval you construct contains the value of the parameter specified by H0, then that value is one of the likely or possible values of the parameter and H0 should not be rejected. If
the hypothesized value lies outside of the confidence limits, the null hypothesis is rejected at the a level of significance.
Construct a 95% confidence interval for the difference in average academic achievements between car owners and non-owners. Using the confidence interval, can you conclude that there is a difference in
the population means for the two groups of students? For the large-sample statistics discussed in Chapter 8, the 95% confidence interval is given as
Point estimator 1.96 (Standard error of the estimator) For the difference in two population means, the confidence interval is approximated as
(x苶1 苶x2) 1.96
s2 s2 1 2 n1 n2
.36 .40 (2.70 2.54) 1.96 100 100 .16 .17
366 ❍
or .01 (m1 m2 ) .33. This interval gives you a range of possible values for the difference in the population means. Since the hypothesized difference, (m1 m2 ) 0, is contained in the confidence
interval, you should not reject H0. Look at the signs of the possible values in the confidence interval. You cannot tell from the interval whether the difference in the means is negative (), positive
(), or zero (0)—the latter of the three would indicate that the two means are the same. Hence, you can really reach no conclusion in terms of the question posed. There is not enough evidence to
indicate that there is a difference in the average achievements for car owners versus non-owners. The conclusion is the same one reached in Example 9.9.
BASIC TECHNIQUES 9.18 Independent random samples of 80 measure-
ments were drawn from two quantitative populations, 1 and 2. Here is a summary of the sample data: Sample Size Sample Mean Sample Variance
Sample 1
Sample 2
80 11.6 27.9
80 9.7 38.4
a. If your research objective is to show that m1 is larger than m 2, state the alternative and the null hypotheses that you would choose for a statistical test. b. Is the test in part a one- or
two-tailed? c. Calculate the test statistic that you would use for the test in part a. Based on your knowledge of the standard normal distribution, is this a likely or unlikely observation, assuming
that H0 is true and the two population means are the same? d. p-value approach: Find the p-value for the test. Test for a significant difference in the population means at the 1% significance level. e.
Critical value approach: Find the rejection region when a .01. Do the data provide sufficient evidence to indicate a difference in the population means? 9.19 Independent random samples of 36 and 45
observations are drawn from two quantitative populations, 1 and 2, respectively. The sample data summary is shown here: Sample Size Sample Mean Sample Variance
Sample 1
Sample 2
36 1.24 .0560
45 1.31 .0540
Do the data present sufficient evidence to indicate that the mean for population 1 is smaller than the mean for population 2? Use one of the two methods of testing presented in this section, and
explain your conclusions. 9.20 Suppose you wish to detect a difference between
m1 and m 2 (either m1 m 2 or m1 m 2 ) and, instead of running a two-tailed test using a .05, you use the following test procedure. You wait until you have collected the sample data and have
calculated x苶1 and 苶x2. If 苶x1 is larger than x苶2, you choose the alternative hypothesis Ha : m1 m 2 and run a one-tailed test placing a1 .05 in the upper tail of the z distribution. If, on the
other hand, 苶x2 is larger than x苶1, you reverse the procedure and run a one-tailed test, placing a2 .05 in the lower tail of the z distribution. If you use this procedure and if m1 actually equals
m 2, what is the probability a that you will conclude that m1 is not equal to m 2 (i.e., what is the probability a that you will incorrectly reject H0 when H0 is true)? This exercise demonstrates why
statistical tests should be formulated prior to observing the data. APPLICATIONS 9.21 Cure for the Common Cold? An experiment
was planned to compare the mean time (in days) required to recover from a common cold for persons given a daily dose of 4 milligrams (mg) of vitamin C versus those who were not given a vitamin
supplement. Suppose that 35 adults were randomly selected for each treatment category and that the mean recovery times and standard deviations for the two groups were as follows:
No Vitamin Supplement
4 mg Vitamin C
35 6.9 2.9
35 5.8 1.2
Sample Size Sample Mean Sample Standard Deviation
a. Suppose your research objective is to show that the use of vitamin C reduces the mean time required to recover from a common cold and its complications. Give the null and alternative hypotheses
for the test. Is this a one- or a two-tailed test? b. Conduct the statistical test of the null hypothesis in part a and state your conclusion. Test using a .05. 9.22 Healthy Eating Americans are
becoming more
conscious about the importance of good nutrition, and some researchers believe we may be altering our diets to include less red meat and more fruits and vegetables. To test the theory that the
consumption of red meat has decreased over the last 10 years, a researcher decides to select hospital nutrition records for 400 subjects surveyed 10 years ago and to compare their average amount of
beef consumed per year to amounts consumed by an equal number of subjects interviewed this year. The data are given in the table. Ten Years Ago
This Year
Sample Mean Sample Standard Deviation
a. Do the data present sufficient evidence to indicate that per-capita beef consumption has decreased in the last 10 years? Test at the 1% level of significance. b. Find a 99% lower confidence bound
for the difference in the average per-capita beef consumptions for the two groups. (This calculation was done as part of Exercise 8.76.) Does your confidence bound confirm your conclusions in part a?
Explain. What additional information does the confidence bound give you? 9.23 Lead Levels in Drinking Water Analyses of
drinking water samples for 100 homes in each of two different sections of a city gave the following means and standard deviations of lead levels (in parts per million): Sample Size Mean Standard
Section 1
Section 2
100 34.1 5.9
100 36.0 6.0
a. Calculate the test statistic and its p-value (observed significance level) to test for a difference in the two
population means. Use the p-value to evaluate the statistical significance of the results at the 5% level. b. Use a 95% confidence interval to estimate the difference in the mean lead levels for the
two sections of the city. c. Suppose that the city environmental engineers will be concerned only if they detect a difference of more than 5 parts per million in the two sections of the city. Based
on your confidence interval in part b, is the statistical significance in part a of practical significance to the city engineers? Explain. 9.24 Starting Salaries, again In an attempt to
compare the starting salaries for college graduates who majored in chemical engineering and computer science (see Exercise 8.45), random samples of 50 recent college graduates in each major were
selected and the following information obtained. Major
Chemical Engineering Computer Science
$53,659 51,042
a. Do the data provide sufficient evidence to indicate a difference in average starting salaries for college graduates who majored in chemical engineering and computer science? Test using a .05. b.
Compare your conclusions in part a with the results of part b in Exercise 8.45. Are they the same? Explain. 9.25 Hotel Costs In Exercise 8.18, we explored the average cost of lodging at three
different hotel chains.6 We randomly select 50 billing statements from the computer databases of the Marriott, Radisson, and Wyndham hotel chains, and record the nightly room rates. A portion of the
sample data is shown in the table. Marriott Sample Average Sample Standard Deviation
$170 17.5
Radisson $145 10
a. Before looking at the data, would you have any preconceived idea about the direction of the difference between the average room rates for these two hotels? If not, what null and alternative
hypotheses should you test? b. Use the critical value approach to determine if there is a significant difference in the average room rates for the Marriott and the Radisson hotel chains. Use a .01. c.
Find the p-value for this test. Does this p-value confirm the results of part b?
368 ❍
9.26 Hotel Costs II Refer to Exercise 9.25. The table below shows the sample data collected to compare the average room rates at the Wyndham and Radisson hotel chains.6 Wyndham
$150 16.5
$145 10
Sample Average Sample Standard Deviation
a. Do the data provide sufficient evidence to indicate a difference in the average room rates for the Wyndham and the Radisson hotel chains? Use a .05. b. Construct a 95% confidence interval for the
difference in the average room rates for the two chains. Does this interval confirm your conclusions in part a? 9.27 MMT in Gasoline The addition of MMT, a compound containing manganese (Mn), to
gasoline as an octane enhancer has caused concern about human exposure to Mn because high intakes have been linked to serious health effects. In a study of ambient air concentrations of fine Mn,
Wallace and Slonecker (Journal of the Air and Waste Management Association) presented the accompanying summary information about the amounts of fine Mn (in nanograms per cubic meter) in mostly rural
national park sites and in mostly urban California sites.7
Mean Standard Deviation Number of Sites
National Parks
.94 1.2 36
2.8 2.8 26
a. Is there sufficient evidence to indicate that the mean concentrations differ for these two types of sites at the a .05 level of significance? Use the largesample z-test. What is the p-value of this
test? b. Construct a 95% confidence interval for (m1 m2 ). Does this interval confirm your conclusions in part a?
9.28 Noise and Stress In Exercise 8.48, you compared the effect of stress in the form of noise on the ability to perform a simple task. Seventy subjects were divided into two groups; the first group
of 30 subjects acted as a control, while the second group of 40 was the experimental group. Although each subject performed the task in the same control room, each of the experimental group subjects
had to perform the task while loud rock music was played. The time to finish the task was recorded for each subject and the following summary was obtained:
n x苶 s
30 15 minutes 4 minutes
40 23 minutes 10 minutes
a. Is there sufficient evidence to indicate that the average time to complete the task was longer for the experimental “rock music” group? Test at the 1% level of significance. b. Construct a 99%
one-sided upper bound for the difference (control experimental) in average times for the two groups. Does this interval confirm your conclusions in part a? 9.29 What’s Normal II Of the 130 people in
cise 9.16, 65 were female and 65 were male.3 The means and standard deviations of their temperatures are shown below. Sample Mean Standard Deviation
98.11 0.70
98.39 0.74
a. Use the p-value approach to test for a significant difference in the average temperatures for males versus females. b. Are the results significant at the 5% level? At the 1% level?
A LARGE-SAMPLE TEST OF HYPOTHESIS FOR A BINOMIAL PROPORTION When a random sample of n identical trials is drawn from a binomial population, the sample proportion pˆ has an approximately normal
distribution when n is large, with mean p and standard error SE
冪莦n pq
9.5 A LARGE-SAMPLE TEST OF HYPOTHESIS FOR A BINOMIAL PROPORTION
When you test a hypothesis about p, the proportion in the population possessing a certain attribute, the test follows the same general form as the large-sample tests in Sections 9.3 and 9.4. To test
a hypothesis of the form H0 : p p0 versus a one- or two-tailed alternative Ha : p p0
Ha : p p0
Ha : p p0
the test statistic is constructed using pˆ, the best estimator of the true population proportion p. The sample proportion pˆ is standardized, using the hypothesized mean and standard error, to form a
test statistic z, which has a standard normal distribution if H0 is true. This large-sample test is summarized next. LARGE-SAMPLE STATISTICAL TEST FOR p 1. Null hypothesis: H0 : p p0 2. Alternative
hypothesis: One-Tailed Test
Ha : p p0 (or, Ha : p p0)
Two-Tailed Test
Ha : p p0
pˆ p0 pˆ p 3. Test statistic: z 0 p q SE 0 0 n
x pˆ n
where x is the number of successes in n binomial trials.† 4. Rejection region: Reject H0 when One-Tailed Test
Two-Tailed Test
z za (or z za when the alternative hypothesis is Ha : p p0)
z za/2
z za/2
or when p-value a
α 0
α/2 –z α/2
Assumption: The sampling satisfies the assumptions of a binomial experiment (see Section 5.2), and n is large enough so that the sampling distribution of pˆ can be approximated by a normal
distribution (np0 5 and nq0 5).
An equivalent test statistic can be found by multiplying the numerator and denominator by z by n to obtain x np0 z np0q0 兹苶
370 ❍
Regardless of age, about 20% of American adults participate in fitness activities at least twice a week. However, these fitness activities change as the people get older, and occasionally participants
become nonparticipants as they age. In a local survey of n 100 adults over 40 years old, a total of 15 people indicated that they participated in a fitness activity at least twice a week. Do these
data indicate that the participation rate for adults over 40 years of age is significantly less than the 20% figure? Calculate the p-value and use it to draw the appropriate conclusions. Assuming that
the sampling procedure satisfies the requirements of a binomial experiment, you can answer the question posed using a one-tailed test of hypothesis:
H0 : p .2
versus Ha : p .2
Begin by assuming that H0 is true—that is, the true value of p is p0 .2. Then pˆ x/n will have an approximate normal distribution with mean p0 and standard error 兹p苶. 0q0/n (NOTE: This is different
from the estimation procedure in which the 苶.) ˆqˆ/n The observed value of pˆ is 15/100 unknown standard error is estimated by 兹p .15 and the test statistic is p-value a ⇔ reject H0 p-value a ⇔ do
not reject H0
.15 .20 pˆ p0 z 1.25 (.20)(.80) p q 00 100 n
The p-value associated with this test is found as the area under the standard normal curve to the left of z 1.25 as shown in Figure 9.12. Therefore, p-value P(z 1.25) .1056 FI GU R E 9 .1 2
p-value for Example 9.11
● f(z)
p-value = .1056
If you use the guidelines for evaluating p-values, then .1056 is greater than .10, and you would not reject H0. There is insufficient evidence to conclude that the percentage of adults over age 40
who participate in fitness activities twice a week is less than 20%.
Statistical Significance and Practical Importance It is important to understand the difference between results that are “significant” and results that are practically “important.” In statistical
language, the word significant does not necessarily mean “important,” but only that the results could not have occurred by chance. For example, suppose that in Example 9.11, the researcher had used
9.5 A LARGE-SAMPLE TEST OF HYPOTHESIS FOR A BINOMIAL PROPORTION
n 400 adults in her experiment and had observed the same sample proportion. The test statistic is now .15 .20 pˆ p0 z 2.50 (.20)(.80) p q 00 400 n
with p-value P(z 2.50) .0062 Now the results are highly significant: H0 is rejected, and there is sufficient evidence to indicate that the percentage of adults over age 40 who participate in physical
fitness activities is less than 20%. However, is this drop in activity really important? Suppose that physicians would be concerned only about a drop in physical activity of more than 10%. If there
had been a drop of more than 10% in physical activity, this would imply that the true value of p was less than .10. What is the largest possible value of p? Using a 95% upper one-sided confidence
bound, you have
pˆ qˆ pˆ 1.645 n
(.15)(.85) .15 1.645 400 .15 .029 or p .179. The physical activity for adults aged 40 and older has dropped from 20%, but you cannot say that it has dropped below 10%. So, the results, although
statistically significant, are not practically important. In this book, you will learn how to determine whether results are statistically significant. When you use these procedures in a practical
situation, however, you must also make sure the results are practically important.
BASIC TECHNIQUES 9.30 A random sample of n 1000 observations from a binomial population produced x 279. a. If your research hypothesis is that p is less than .3, what should you choose for your
alternative hypothesis? Your null hypothesis? b. What is the critical value that determines the rejection region for your test with a .05? c. Do the data provide sufficient evidence to indicate that
p is less than .3? Use a 5% significance level. 9.31 A random sample of n 1400 observations from a binomial population produced x 529. a. If your research hypothesis is that p differs from .4, what
hypotheses should you test?
b. Calculate the test statistic and its p-value. Use the p-value to evaluate the statistical significance of the results at the 1% level. c. Do the data provide sufficient evidence to indicate that p
is different from .4? 9.32 A random sample of 120 observations was
selected from a binomial population, and 72 successes were observed. Do the data provide sufficient evidence to indicate that p is greater than .5? Use one of the two methods of testing presented in
this section, and explain your conclusions. APPLICATIONS 9.33 Childhood Obesity According to PARADE magazine’s “What America Eats” survey involving
n 1015 adults, almost half of parents say their children’s weight is fine.8 Only 9% of parents describe their children as overweight. However, the American Obesity Association says the number of
overweight children and teens is at least 15%. Suppose that the number of parents in the sample is n 750 and the number of parents who describe their children as overweight is x 68.
a. How would you proceed to test the hypothesis that the proportion of parents who describe their children as overweight is less than the actual proportion reported by the American Obesity
Association? b. What conclusion are you able to draw from these data at the a .05 level of significance? c. What is the p-value associated with this test? 9.34 Plant Genetics A peony plant with red
petals was crossed with another plant having streaky petals. A geneticist states that 75% of the offspring resulting from this cross will have red flowers. To test this claim, 100 seeds from this
cross were collected and germinated, and 58 plants had red petals. a. What hypothesis should you use to test the geneticist’s claim? b. Calculate the test statistic and its p-value. Use the p-value
to evaluate the statistical significance of the results at the 1% level. 9.35 Early Detection of Breast Cancer Of those women who are diagnosed to have early-stage breast cancer, one-third eventually
die of the disease. Suppose a community public health department instituted a screening program to provide for the early detection of breast cancer and to increase the survival rate p of those
diagnosed to have the disease. A random sample of 200 women was selected from among those who were periodically screened by the program and who were diagnosed to have the disease. Let x represent the
number of those in the sample who survive the disease. a. If you wish to detect whether the community screening program has been effective, state the null hypothesis that should be tested. b. State
the alternative hypothesis. c. If 164 women in the sample of 200 survive the disease, can you conclude that the community screening program was effective? Test using a .05 and explain the practical
conclusions from your test. d. Find the p-value for the test and interpret it.
9.36 Sweet Potato Whitefly Suppose that 10% of the fields in a given agricultural area are infested with the sweet potato whitefly. One hundred fields in this area are randomly selected, and 25 are found
to be infested with whitefly.
a. Assuming that the experiment satisfies the conditions of the binomial experiment, do the data indicate that the proportion of infested fields is greater than expected? Use the p-value approach, and
test using a 5% significance level. b. If the proportion of infested fields is found to be significantly greater than .10, why is this of practical significance to the agronomist? What practical
conclusions might she draw from the results? 9.37 Brown or Blue? An article in the Washington Post stated that nearly 45% of the U.S. population is born with brown eyes, although they don’t
necessarily stay that way.9 To test the newspaper’s claim, a random sample of 80 people was selected, and 32 had brown eyes. Is there sufficient evidence to dispute the newspaper’s claim regarding
the proportion of browneyed people in the United States? Use a .01. 9.38 Colored Contacts Refer to Exercise 9.37.
Contact lenses, worn by about 26 million Americans, come in many styles and colors. Most Americans wear soft lenses, with the most popular colors being the blue varieties (25%), followed by greens
(24%), and then hazel or brown. A random sample of 80 tinted contact lens wearers was checked for the color of their lenses. Of these people, 22 wore blue lenses and only 15 wore green lenses.9 a. Do
the sample data provide sufficient evidence to indicate that the proportion of tinted contact lens wearers who wear blue lenses is different from 25%? Use a .05. b. Do the sample data provide
sufficient evidence to indicate that the proportion of tinted contact lens wearers who wear green lenses is different from 24%? Use a .05. c. Is there any reason to conduct a one-tailed test for
either part a or b? Explain. 9.39 A Cure for Insomnia An experimenter has
prepared a drug-dose level that he claims will induce sleep for at least 80% of people suffering from insomnia. After examining the dosage we feel that his claims regarding the effectiveness of his
dosage are inflated. In an attempt to disprove his claim, we administer his prescribed dosage to 50 insomniacs and observe that
37 of them have had sleep induced by the drug dose. Is there enough evidence to refute his claim at the 5% level of significance?
percentage of adults who say that they always vote is different from the percentage reported in Time? Test using a .01.
9.40 Who Votes? About three-fourths of voting age Americans are registered to vote, but many do not bother to vote on Election Day. Only 64% voted in 1992, and 60% in 2000, but turnout in off-year
elections is even lower. An article in Time stated that 35% of adult Americans are registered voters who always vote.10 To test this claim, a random sample of n 300 adult Americans was selected and x
123 were registered regular voters who always voted. Does this sample provide sufficient evidence to indicate that the
9.41 Man’s Best Friend The Humane Society
reports that there are approximately 65 million dogs owned in the United States and that approximately 40% of all U.S. households own at least one dog.11 In a random sample of 300 households, 114
households said that they owned at least one dog. Does this data provide sufficient evidence to indicate that the proportion of households with at least one dog is different from that reported by the
Humane Society? Test using a .05.
A LARGE-SAMPLE TEST OF HYPOTHESIS FOR THE DIFFERENCE BETWEEN TWO BINOMIAL PROPORTIONS When random and independent samples are selected from two binomial populations, the focus of the experiment may
be the difference ( p1 p2 ) in the proportions of individuals or items possessing a specified characteristic in the two populations. In this situation, you can use the difference in the sample
proportions ( pˆ1 pˆ 2 ) along with its standard error, SE
莦冪莦 n n p1q1
in the form of a z-statistic to test for a significant difference in the two population proportions. The null hypothesis to be tested is usually of the form H0 : p1 p2
Remember: Each trial results in one of two outcomes (S or F).
H0 : ( p1 p2) 0
versus either a one- or two-tailed alternative hypothesis. The formal test of hypothesis is summarized in the next display. In estimating the standard error for the z-statistic, you should use the
fact that when H0 is true, the two population proportions are equal to some common value—say, p. To obtain the best estimate of this common value, the sample data are “pooled” and the estimate of p
is Total number of successes x1 x2 pˆ Total number of trials n1 n2 Remember that, in order for the difference in the sample proportions to have an approximately normal distribution, the sample sizes
must be large and the proportions should not be too close to 0 or 1.
374 ❍
LARGE-SAMPLE STATISTICAL TEST FOR (p1 p2) 1. Null hypothesis: H0 : ( p1 p2 ) 0 or equivalently H0 : p1 p2 2. Alternative hypothesis: One-Tailed Test
Two-Tailed Test
Ha : ( p1 p2 ) 0 [or Ha : ( p1 p2 ) 0]
Ha : ( p1 p2 ) 0
pˆ1 pˆ 2 pˆ1 pˆ 2 ( pˆ1 pˆ 2) 0 3. Test statistic: z SE pq pq pq pq 11 22 n1 n2 n1 n2
where pˆ1 x1/n1 and pˆ 2 x2/n2. Since the common value of p1 p2 p (used in the standard error) is unknown, it is estimated by x1 x2 pˆ n1 n2 and the test statistic is ( pˆ1 pˆ2) 0 z pˆ qˆ pˆ qˆ n1 n2
pˆ1 pˆ 2 z 1 1 pˆ qˆ n1 n2
4. Rejection region: Reject H0 when One-Tailed Test
Two-Tailed Test
z za [or z za when the alternative hypothesis is Ha : ( p1 p2 ) 0]
z za/2
z za/2
or when p-value a
α 0
α/2 –z α/2
Assumptions: Samples are selected in a random and independent manner from two binomial populations, and n1 and n2 are large enough so that the sampling distribution of ( pˆ1 pˆ2 ) can be approximated
by a normal distribution. That is, n1 pˆ1, n1qˆ1, n2 pˆ 2, and n2qˆ2 should all be greater than 5. EXAMPLE
The records of a hospital show that 52 men in a sample of 1000 men versus 23 women in a sample of 1000 women were admitted because of heart disease. Do these data present sufficient evidence to
indicate a higher rate of heart disease among men admitted to the hospital? Use a .05. Assume that the number of patients admitted for heart disease has an approximate binomial probability
distribution for both men and women with parameters
p1 and p2, respectively. Then, since you wish to determine whether p1 p2, you will test the null hypothesis p1 p2—that is, H0 : ( p1 p2) 0—against the alternative hypothesis Ha : p1 p2 or,
equivalently, Ha : ( p1 p2) 0. To conduct this test, use the z-test statistic and approximate the standard error using the pooled estimate of p. Since Ha implies a one-tailed test, you can reject H0
only for large values of z. Thus, for a .05, you can reject H0 if z 1.645 (see Figure 9.13). The pooled estimate of p required for the standard error is x1 x2 52 23 .0375 pˆ n1 n2 1000 1000 FIGU R E
9 .1 3
Location of the rejection region in Example 9.12
α = .05 0
Rejection region
and the test statistic is .052 .023 pˆ1 pˆ2 z 3.41 1 1 1 1 (.0375)(.9625) pˆqˆ 1000 1000 n1 n2
Since the computed value of z falls in the rejection region, you can reject the hypothesis that p1 p2. The data present sufficient evidence to indicate that the percentage of men entering the
hospital because of heart disease is higher than that of women. (NOTE: This does not imply that the incidence of heart disease is higher in men. Perhaps fewer women enter the hospital when afflicted
with the disease!) How much higher is the proportion of men than women entering the hospital with heart disease? A 95% lower one-sided confidence bound will help you find the lowest likely value for
the difference.
pˆ qˆ pˆ qˆ ( pˆ1 pˆ 2 ) 1.645 11 2 2 n1 n2
.052(.948) .023(.977) (.052 .023) 1.645 1000 1000 .029 .014 or ( p1 p2) .015. The proportion of men is roughly 1.5% higher than women. Is this of practical importance? This is a question for the
researcher to answer. In some situations, you may need to test for a difference D0 (other than 0) between two binomial proportions. If this is the case, the test statistic is modified for testing
376 ❍
H0 : ( p1 p2) D0, and a pooled estimate for a common p is no longer used in the standard error. The modified test statistic is ( pˆ1 pˆ 2 ) D0 z pˆ qˆ pˆ qˆ 11 22 n1 n2
Although this test statistic is not used often, the procedure is no different from other large-sample tests you have already mastered!
9.44 Independent random samples of 280 and 350
9.42 Independent random samples of n1 140 and
observations were selected from binomial populations 1 and 2, respectively. Sample 1 had 132 successes, and sample 2 had 178 successes. Do the data present sufficient evidence to indicate that the
proportion of successes in population 1 is smaller than the proportion in population 2? Use one of the two methods of testing presented in this section, and explain your conclusions.
n2 140 observations were randomly selected from binomial populations 1 and 2, respectively. Sample 1 had 74 successes, and sample 2 had 81 successes. a. Suppose you have no preconceived idea as to
which parameter, p1 or p2, is the larger, but you want to detect only a difference between the two parameters if one exists. What should you choose as the alternative hypothesis for a statistical
test? The null hypothesis? b. Calculate the standard error of the difference in the two sample proportions, ( pˆ1 pˆ2 ). Make sure to use the pooled estimate for the common value of p. c. Calculate
the test statistic that you would use for the test in part a. Based on your knowledge of the standard normal distribution, is this a likely or unlikely observation, assuming that H0 is true and the
two population proportions are the same? d. p-value approach: Find the p-value for the test. Test for a significant difference in the population proportions at the 1% significance level. e. Critical
value approach: Find the rejection region when a .01. Do the data provide sufficient evidence to indicate a difference in the population proportions? 9.43 Refer to Exercise 9.42. Suppose, for
reasons, you know that p1 cannot be larger than p2. a. Given this knowledge, what should you choose as the alternative hypothesis for your statistical test? The null hypothesis? b. Does your
alternative hypothesis in part a imply a one- or two-tailed test? Explain. c. Conduct the test and state your conclusions. Test using a .05.
APPLICATIONS 9.45 Treatment versus Control An experiment
was conducted to test the effect of a new drug on a viral infection. The infection was induced in 100 mice, and the mice were randomly split into two groups of 50. The first group, the control group,
received no treatment for the infection. The second group received the drug. After a 30-day period, the proportions of survivors, pˆ1 and pˆ 2, in the two groups were found to be .36 and .60,
respectively. a. Is there sufficient evidence to indicate that the drug is effective in treating the viral infection? Use a .05. b. Use a 95% confidence interval to estimate the actual difference in
the cure rates for the treated versus the control groups. 9.46 Movie Marketing Marketing to targeted age
groups has become a standard method of advertising, even in movie theater advertising. Advertisers use computer software to track the demographics of moviegoers and then decide on the type of
products to advertise before a particular movie.12 One statistic that might be of interest is how frequently adults with children under 18 attend movies as compared to those without children. Suppose
that a theater database is used to randomly select 1000 adult ticket purchasers. These adults are then surveyed and asked whether they
were frequent moviegoers—that is, do they attend movies 12 or more times a year? The results are shown in the table:
Sample Size Number Who Attend 12 Times per Year
With Children under 18
Without Children
a. Is there a significant difference in the population proportions of frequent moviegoers in these two demographic groups? Use a .01. b. Why would a statistically significant difference in these
population proportions be of practical importance to the advertiser? 9.47 M&M’S In Exercise 8.53, you investigated
whether Mars, Inc., uses the same proportion of red M&M’S in its plain and peanut varieties. Random samples of plain and peanut M&M’S provide the following sample data for the experiment: Sample Size
Number of Red M&M’S
Use a test of hypothesis to determine whether there is a significant difference in the proportions of red candies for the two types of M&M’S. Let a .05 and compare your results with those of Exercise
8.53. 9.48 Hormone Therapy and Alzheimer’s Disease
In the last few years, many research studies have shown that the purported benefits of hormone replacement therapy (HRT) do not exist, and in fact, that hormone replacement therapy actually increases
the risk of several serious diseases. A four-year experiment involving 4532 women, reported in The Press Enterprise, was conducted at 39 medical centers. Half of the women took placebos and half took
Prempro, a widely prescribed type of hormone replacement therapy. There were 40 cases of dementia in the hormone group and 21 in the placebo group.13 Is there sufficient evidence to indicate that the
risk of dementia is higher for patients using Prempro? Test at the 1% level of significance. 9.49 HRT, continued Refer to Exercise 9.48. Cal-
culate a 99% lower one-sided confidence bound for the difference in the risk of dementia for women using hormone replacement therapy versus those who do not. Would this difference be of practical
importance to a woman considering HRT? Explain.
9.50 Clopidogrel and Aspirin A large study was conducted to test the effectiveness of clopidogrel in combination with aspirin in warding off heart attacks and strokes.14 The trial involved more than
15,500 people 45 years of age or older from 32 countries, including the United States, who had been diagnosed with cardiovascular disease or had multiple risk factors. The subjects were randomly
assigned to one of two groups. After two years, there was no difference in the risk of heart attack, stroke, or dying from heart disease between those who took clopidogrel and low-dose aspirin daily
and those who took low-dose aspirin plus a dummy pill. The two-drug combination actually increased the risk of dying (5.4% versus 3.8%) or dying specifically from cardiovascular disease (3.9% versus
2.2%). a. The subjects were randomly assigned to one of the two groups. Explain how you could use the random number table to make these assignments. b. No sample sizes were given in the article:
however, let us assume that the sample sizes for each group were n1 7720 and n2 7780. Determine whether the risk of dying was significantly different for the two groups. c. What do the results of the
study mean in terms of practical significance? 9.51 Baby’s Sleeping Position Does a baby’s
sleeping position affect the development of motor skills? In one study, published in the Archives of Pediatric Adolescent Medicine, 343 full-term infants were examined at their 4-month checkups for
various developmental milestones, such as rolling over, grasping a rattle, reaching for an object, and so on.15 The baby’s predominant sleep position—either prone (on the stomach) or supine (on the
back) or side—was determined by a telephone interview with the parent. The sample results for 320 of the 343 infants for whom information was received are shown here:
Number of Infants Number That Roll Over
Supine or Side
The researcher reported that infants who slept in the side or supine position were less likely to roll over at the 4-month checkup than infants who slept primarily in the prone position (P .001). Use
a large-sample test of hypothesis to confirm or refute the researcher’s conclusion.
378 ❍
SOME COMMENTS ON TESTING HYPOTHESES A statistical test of hypothesis is a fairly clear-cut procedure that enables an experimenter to either reject or accept the null hypothesis H0, with measured
risks a and b. The experimenter can control the risk of falsely rejecting H0 by selecting an appropriate value of a. On the other hand, the value of b depends on the sample size and the values of the
parameter under test that are of practical importance to the experimenter. When this information is not available, an experimenter may decide to select an affordable sample size, in the hope that the
sample will contain sufficient information to reject the null hypothesis. The chance that this decision is in error is given by a, whose value has been set in advance. If the sample does not provide
sufficient evidence to reject H0, the experimenter may wish to state the results of the test as “The data do not support the rejection of H0” rather than accepting H0 without knowing the chance of
error b. Some experimenters prefer to use the observed p-value of the test to evaluate the strength of the sample information in deciding to reject H0. These values can usually be generated by
computer and are often used in reports of statistical results: •
If the p-value is greater than .05, the results are reported as NS—not significant at the 5% level.
If the p-value lies between .05 and .01, the results are reported as P .05— significant at the 5% level.
If the p-value lies between .01 and .001, the results are reported as P .01— “highly significant” or significant at the 1% level.
If the p-value is less than .001, the results are reported as P .001—“very highly significant” or significant at the .1% level.
Still other researchers prefer to construct a confidence interval for a parameter and perform a test informally. If the value of the parameter specified by H0 is included within the upper and lower
limits of the confidence interval, then “H0 is not rejected.” If the value of the parameter specified by H0 is not contained within the interval, then “H0 is rejected.” These results will agree with a
two-tailed test; one-sided confidence bounds are used for one-tailed alternatives. Finally, consider the choice between a one- and two-tailed test. In general, experimenters wish to know whether a
treatment causes what could be a beneficial increase in a parameter or what might be a harmful decrease in a parameter. Therefore, most tests are two-tailed unless a one-tailed test is strongly
dictated by practical considerations. For example, assume you will sustain a large financial loss if the mean m is greater than m0 but not if it is less. You will then want to detect values larger
than m0 with a high probability and thereby use a right-tailed test. In the same vein, if pollution levels higher than m0 cause critical health risks, then you will certainly wish to detect levels
higher than m0 with a right-tailed test of hypothesis. In any case, the choice of a one- or two-tailed test should be dictated by the practical consequences that result from a decision to reject or
not reject H0 in favor of the alternative.
CHAPTER REVIEW 4. In a Type II error, b is the probability of accepting H0 when it is in fact false. The power of the test is (1 b), the probability of rejecting H0 when it is false.
Key Concepts and Formulas I.
Parts of a Statistical Test
1. Null hypothesis: a contradiction of the alternative hypothesis 2. Alternative hypothesis: the hypothesis the researcher wants to support
III. Large-Sample Test Statistics Using the z Distribution
To test one of the four population parameters when the sample sizes are large, use the following test statistics:
3. Test statistic and its p-value: sample evidence calculated from the sample data 4. Rejection region—critical values and significance levels: values that separate rejection and nonrejection of the
null hypothesis 5. Conclusion: Reject or do not reject the null hypothesis, stating the practical significance of your conclusion
Test Statistic
x苶 m0 z s/兹苶n
II. Errors and Statistical Significance
1. The significance level a is the probability of rejecting H0 when it is in fact true. 2. The p-value is the probability of observing a test statistic as extreme as or more extreme than the one
observed; also, the smallest value of a for which H0 can be rejected. 3. When the p-value is less than the significance level a, the null hypothesis is rejected. This happens when the test statistic
exceeds the critical value.
Supplementary Exercises Starred (*) exercises are optional. 9.52 a. Define a and b for a statistical test of hypothesis. b. For a fixed sample size n, if the value of a is decreased, what is the effect
on b? c. In order to decrease both a and b for a particular alternative value of m, how must the sample size change? 9.53 What is the p-value for a test of hypothesis?
How is it calculated for a large-sample test? 9.54 What conditions must be met so that the z test can be used to test a hypothesis concerning a population mean m?
m1 m2
p1 p2
pˆ p0 z pq 00 n
(x苶1 x苶2) D0 z s2 s2 1 2 n1 n2
pˆ1 pˆ 2 z 1 1 pˆ qˆ n1 n2
(pˆ1 pˆ 2) D0 z pˆ qˆ pˆ qˆ 11 22 n1 n2
9.55 Define the power of a statistical test. As the
alternative value of m gets farther from m0, how is the power affected? 9.56 Acidity in Rainfall Refer to Exercise 8.31 and the collection of water samples to estimate the mean acidity (in pH) of
rainfalls in the northeastern United States. As noted, the pH for pure rain falling through clean air is approximately 5.7. The sample of n 40 rainfalls produced pH readings with x苶 3.7 and s .5. Do
the data provide sufficient evidence to indicate that the mean pH for rainfalls is more acidic (Ha : m 5.7 pH) than pure rainwater? Test using a .05. Note that this inference is appropriate only for
the area in which the rainwater specimens were collected.
9.57 Washing Machine Colors A manufacturer of automatic washers provides a particular model in one of three colors. Of the first 1000 washers sold, it is noted that 400 were of the first color. Can you
conclude that more than one-third of all customers have a preference for the first color? a. Find the p-value for the test.
b. If you plan to conduct your test using a .05, what will be your test conclusions? 9.58 Generation Next Born between 1980 and
1990, Generation Next was the topic of Exercise 8.60.16 In a survey of 500 female and 500 male students in Generation Next, 345 of the females and 365 of the males reported that they decided to
attend college in order to make more money. a. Is there a significant difference in the population proportions of female and male students who decided to attend college in order to make more money?
Use a .01. b. Can you think of any reason why a statistically significant difference in these population proportions might be of practical importance? To whom might this difference be important? 9.59
Bass Fishing The pH factor is a measure of the
acidity or alkalinity of water. A reading of 7.0 is neutral; values in excess of 7.0 indicate alkalinity; those below 7.0 imply acidity. Loren Hill states that the best chance of catching bass occurs
when the pH of the water is in the range 7.5 to 7.9.17 Suppose you suspect that acid rain is lowering the pH of your favorite fishing spot and you wish to determine whether the pH is less than 7.5. a.
State the alternative and null hypotheses that you would choose for a statistical test. b. Does the alternative hypothesis in part a imply a one- or a two-tailed test? Explain.
9.60 Pennsylvania Lottery A central Pennsylvania
attorney reported that the Northumberland County district attorney’s (DA) office trial record showed only 6 convictions in 27 trials from January to mid-July 1997. Four central Pennsylvania county
DAs responded, “Don’t judge us by statistics!”18 a. If the attorney’s information is correct, would you reject a claim by the DA of a 50% or greater conviction rate? b. The actual records show that
there have been 455 guilty pleas and 48 cases that have gone to trial. Even assuming that the 455 guilty pleas are the only convictions of the 503 cases reported, what is the 95% confidence interval
for p, the true proportion of convictions by this district attorney? c. Using the results of part b, are you willing to reject a figure of 50% or greater for the true conviction rate? Explain. 9.61
White-Tailed Deer In an article entitled “A Strategy for Big Bucks,” Charles Dickey discusses studies of the habits of white-tailed deer that indicate that they live and feed within very limited
ranges— approximately 150 to 205 acres.19 To determine whether there was a difference between the ranges of deer located in two different geographic areas, 40 deer were caught, tagged, and fitted with
small radio transmitters. Several months later, the deer were tracked and identified, and the distance x from the release point was recorded. The mean and standard deviation of the distances from the
release point were as follows:
Sample Size Sample Mean Sample Standard Deviation
Location 1
Location 2
40 2980 ft 1140 ft
40 3205 ft 963 ft
a. If you have no preconceived reason for believing one population mean is larger than another, what would you choose for your alternative hypothesis? Your null hypothesis?
c. Suppose that a random sample of 30 water specimens gave pH readings with x苶 7.3 and s .2. Just glancing at the data, do you think that the difference x苶 7.5 .2 is large enough to indicate that
the mean pH of the water samples is less than 7.5? (Do not conduct the test.)
b. Does your alternative hypothesis in part a imply a one- or a two-tailed test? Explain. c. Do the data provide sufficient evidence to indicate that the mean distances differ for the two geographic
locations? Test using a .05.
d. Now conduct a statistical test of the hypotheses in part a and state your conclusions. Test using a .05. Compare your statistically based decision with your intuitive decision in part c.
9.62 Female Models In a study to assess various
effects of using a female model in automobile advertising, 100 men were shown photographs of two automobiles matched for price, color, and size, but of
different makes. One of the automobiles was shown with a female model to 50 of the men (group A), and both automobiles were shown without the model to the other 50 men (group B). In group A, the
automobile shown with the model was judged as more expensive by 37 men; in group B, the same automobile was judged as the more expensive by 23 men. Do these results indicate that using a female model
influences the perceived cost of an automobile? Use a one-tailed test with a .05. 9.63 Bolts Random samples of 200 bolts manufactured by a type A machine and 200 bolts manufactured by a type B machine
showed 16 and 8 defective bolts, respectively. Do these data present sufficient evidence to suggest a difference in the performance of the machine types? Use a .05. 9.64 Biomass Exercise 7.63
reported that the biomass for tropical woodlands, thought to be about 35 kilograms per square meter (kg/m2), may in fact be too high and that tropical biomass values vary regionally—from about 5 to
55 kg/m2.20 Suppose you measure the tropical biomass in 400 randomly selected square-meter plots and obtain 苶x 31.75 and s 10.5. Do the data present sufficient evidence to indicate that scientists
are overestimating the mean biomass for tropical woodlands and that the mean is in fact lower than estimated? a. State the null and alternative hypotheses to be tested. b. Locate the rejection region
for the test with a .01. c. Conduct the test and state your conclusions. 9.65 Adolescents and Social Stress In a study to compare ethnic differences in adolescents’ social stress, researchers
recruited subjects from three middle schools in Houston, Texas.21 Social stress among four ethnic groups was measured using the Social Attitudinal Familial and Environment Scale for Children
(SAFE-C). In addition, demographic information about the 316 students was collected using self-administered questionnaires. A tabulation of student responses to a question regarding their
socioeconomic status (SES) compared with other families in which the students chose one of five responses (much worse off, somewhat worse off, about the same, better off, or much better off ) resulted
in the tabulation that follows. European African Hispanic American American American Sample Size About the Same
Asian American 19 8
a. Do these data support the hypothesis that the proportion of adolescent African Americans who state that their SES is “about the same” exceeds that for adolescent Hispanic Americans? b. Find the
p-value for the test. c. If you plan to test using a .05, what is your conclusion? 9.66* Adolescents and Social Stress, continued
Refer to Exercise 9.65. Some thought should have been given to designing a test for which b is tolerably low when p1 exceeds p2 by an important amount. For example, find a common sample size n for a
test with a .05 and b .20 when in fact p1 exceeds p2 by 0.1. (HINT: The maximum value of p(1 p) .25.) 9.67 Losing Weight In a comparison of the mean 1-month weight losses for women aged 20–30 years,
these sample data were obtained for each of two diets: Sample Size n Sample Mean x苶 Sample Variance s 2
Diet I
Diet II
40 10 lb 4.3
40 8 lb 5.7
Do the data provide sufficient evidence to indicate that diet I produces a greater mean weight loss than diet II? Use a .05. 9.68 Increased Yield An agronomist has shown experimentally that a new
irrigation/fertilization regimen produces an increase of 2 bushels per quadrat (significant at the 1% level) when compared with the regimen currently in use. The cost of implementing and using the new
regimen will not be a factor if the increase in yield exceeds 3 bushels per quadrat. Is statistical significance the same as practical importance in this situation? Explain. 9.69 Breaking Strengths of
Cables A test of the breaking strengths of two different types of cables was conducted using samples of n1 n2 100 pieces of each type of cable. Cable I
Cable II
x苶1 1925 s1 40
苶x2 1905 s2 30
Do the data provide sufficient evidence to indicate a difference between the mean breaking strengths of the two cables? Use a .05. 9.70 Put on the Brakes The braking ability was
compared for two 2008 automobile models. Random samples of 64 automobiles were tested for each type. The recorded measurement was the distance (in feet)
382 ❍
required to stop when the brakes were applied at 40 miles per hour. These are the computed sample means and variances:
2005 were randomly selected and their SAT scores recorded in the following table:
Model I
Model II
x苶1 118 s 12 102
苶x22 109 s 2 87
Sample Average Sample Standard Deviation
Do the data provide sufficient evidence to indicate a difference between the mean stopping distances for the two models? 9.71 Spraying Fruit Trees A fruit grower wants to test a new spray that a
manufacturer claims will reduce the loss due to insect damage. To test the claim, the grower sprays 200 trees with the new spray and 200 other trees with the standard spray. The following data were
recorded: Mean Yield per Tree 苶x (lb) Variance s 2
New Spray
Standard Spray
a. Do the data provide sufficient evidence to conclude that the mean yield per tree treated with the new spray exceeds that for trees treated with the standard spray? Use a .05. b. Construct a 95%
confidence interval for the difference between the mean yields for the two sprays. 9.72 Actinomycin D A biologist hypothesizes that
high concentrations of actinomycin D inhibit RNA synthesis in cells and hence the production of proteins as well. An experiment conducted to test this theory compared the RNA synthesis in cells
treated with two concentrations of actinomycin D: .6 and .7 microgram per milliliter. Cells treated with the lower concentration (.6) of actinomycin D showed that 55 out of 70 developed normally,
whereas only 23 out of 70 appeared to develop normally for the higher concentration (.7). Do these data provide sufficient evidence to indicate a difference between the rates of normal RNA synthesis
for cells exposed to the two different concentrations of actinomycin D? a. Find the p-value for the test. b. If you plan to conduct your test using a .05, what will be your test conclusions? 9.73 SAT
Scores How do California high school
students compare to students nationwide in their college readiness, as measured by their SAT scores? The national average scores for the class of 2005 were 508 on the verbal portion and 520 on the
math portion.22 Suppose that 100 California students from the class of
a. Do the data provide sufficient evidence to indicate that the average verbal score for all California students in the class of 2005 is different from the national average? Test using a .05. b. Do
the data provide sufficient evidence to indicate that the average math score for all California students in the class of 2005 is different from the national average? Test using a .05. c. Could you
use this data to determine if there is a difference between the average math and verbal scores for all California students in the class of 2005? Explain your answer. 9.74 A Maze Experiment In a maze
running study, a rat is run in a T maze and the result of each run recorded. A reward in the form of food is always placed at the right exit. If learning is taking place, the rat will choose the
right exit more often than the left. If no learning is taking place, the rat should randomly choose either exit. Suppose that the rat is given n 100 runs in the maze and that he chooses the right
exit x 64 times. Would you conclude that learning is taking place? Use the p-value approach, and make a decision based on this p-value. 9.75 PCBs Polychlorinated biphenyls (PCBs) have been found to
be dangerously high in some game birds found along the marshlands of the southeastern coast of the United States. The Federal Drug Administration (FDA) considers a concentration of PCBs higher than 5
parts per million (ppm) in these game birds to be dangerous for human consumption. A sample of 38 game birds produced an average of 7.2 ppm with a standard deviation of 6.2 ppm. Is there sufficient
evidence to indicate that the mean ppm of PCBs in the population of game birds exceeds the FDA’s recommended limit of 5 ppm? Use a .01. 9.76* PCBs, continued Refer to Exercise 9.75. a. Calculate b
and 1 b if the true mean ppm of PCBs is 6 ppm. b. Calculate b and 1 b if the true mean ppm of PCBs is 7 ppm.
c. Find the power, 1 b, when m 8, 9, 10, and 12. Use these values to construct a power curve for the test in Exercise 9.75.
d. For what values of m does this test have power greater than or equal to .90? 9.77 9/11 Conspiracy Some Americans believe that the entire 9/11 catastrophe was planned and executed by federal
officials in order to provide the United States with a pretext for going to war in the Middle East and as a means of consolidating and extending the power of the then-current administration. This
group of Americans is larger than you think. A Scripps-Howard poll of n 1010 adults in August of 2006 found that 36% of American consider such a scenario very or somewhat likely!23 In a follow-up
poll, a random sample of n 100 adult Americans found that 26 of those sampled agreed that the conspiracy theory was either likely or somewhat likely. Does this sample contradict the reported 36%
figure? Test at the a .05 level of significance. 9.78 Heights and Gender It is a well-accepted fact that males are taller on the average than females. But how much taller? The genders of 105 biomedical
students (Exercise 1.54) were also recorded and the data are summarized below:
Sample Size Sample Mean Sample Standard Deviation
48 69.58 2.62
77 64.43 2.58
a. Perform a test of hypothesis to either confirm or refute our initial claim that males are taller on the average than females? Use a .01. b. If the results of part a show that our claim was correct,
construct a 99% confidence one-sided lower confidence bound for the average difference in heights between male and female college students. How much taller are males than females? 9.79 English as a
Second Language The state of
California is working very hard to make sure that all elementary-aged students whose native language is not English become proficient in English by the sixth
grade. Their progress is monitored each year using the California English Language Development Test.24 The results for two school districts in southern California for a recent school year are shown
below. District Number of Students Tested Percentage Fluent
Palm Springs
Does this data provide sufficient statistical evidence to indicate that the percentage of students who are fluent in English differs for these two districts? Test using a .01. 9.80 Breaststroke
Swimmers How much training time does it take to become a world-class breaststroke swimmer? A survey published in The American Journal of Sports Medicine reported the number of meters per week swum by
two groups of swimmers—those who competed only in breaststroke and those who competed in the individual medley (which includes breaststroke). The number of meters per week practicing the breaststroke
swim was recorded and the summary statistics are shown below.25
Sample Size Sample Mean Sample Standard Deviation
Individual Medley
Is there sufficient evidence to indicate a difference in the average number of meters swum by these two groups of swimmers? Test using a .01. 9.81 Breaststroke, continued Refer to Exer-
cise 9.80. a. Construct a 99% confidence interval for the difference in the average number of meters swum by breaststroke versus individual medley swimmers. b. How much longer do pure breaststroke
swimmers practice that stroke than individual medley swimmers? What is the practical reason for this difference?
Exercises 9.82 School Workers In Exercise 8.109, the aver-
age hourly wage for public school cafeteria workers was given as $10.33.26 If n 40 randomly selected public school cafeteria workers within one school district are found to have an average hourly
wage of 苶x $9.75 with a standard deviation of s $1.65, would
this information contradict the reported average of $10.33? a. What are the null and alternative hypotheses to be tested? b. Use the Large-Sample Test of a Population Mean applet to find the observed
value of the test statistic.
384 ❍
c. Use the Large-Sample Test of a Population Mean applet to find the p-value of this test. d. Based on your results from part c, what conclusions can you draw about the average hourly wage of $10.33?
9.83 Daily Wages The daily wages in a particular
industry are normally distributed with a mean of $94 and a standard deviation of $11.88. Suppose a company in this industry employs 40 workers and pays them $91.50 per week on the average. Can these
workers be viewed as a random sample from among all workers in the industry? a. What are the null and alternative hypotheses to be tested? b. Use the Large-Sample Test of a Population Mean applet to
find the observed value of the test statistic. c. Use the Large-Sample Test of a Population Mean applet to find the p-value for this test. d. If you planned to conduct your test using a .01, what would
be your test conclusions? e. Was it necessary to know that the daily wages are normally distributed? Explain your answer. 9.84 Refer to Example 9.8. Use the Power of a
z-Test applet to verify the power of the test of H0: m 880
for values of m equal to 870, 875, 880, 885 and 890. Check your answers against the values shown in Table 9.2. 9.85 Refer to Example 9.8. a. Use the method given in Example 9.8 to calculate the power
of the test of H0: m 880
versus Ha: m 880
when n 30 and the true value of m is 870 tons. b. Repeat part a using n 70 and m 870 tons. c. Use the Power of a z-Test applet to verify your hand-calculated results in parts a and b. d. What is the
effect of increasing the sample size on the power of the test? 9.86 Use the appropriate slider on the Power of a
z-Test applet to answer the following questions. Write a sentence for each part, describing what you see using the applet. a. What effect does increasing the sample size have on the power of the
test? b. What effect does increasing the distance between the true value of m and the hypothesized value, m 880, have on the power of the test? c. What effect does decreasing the significance level a
have on the power of the test?
versus Ha: m 880
An Aspirin a Day . . . ? On Wednesday, January 27, 1988, the front page of the New York Times read, “Heart attack risk found to be cut by taking aspirin: Lifesaving effects seen.” A very large study
of U.S. physicians showed that a single aspirin tablet taken every other day reduced by one-half the risk of heart attack in men.27 Three days later, a headline in the Times read, “Value of daily
aspirin disputed in British study of heart attacks.” How could two seemingly similar studies, both involving doctors as participants, reach such opposite conclusions? The U.S. physicians’ health
study consisted of two randomized clinical trials in one. The first tested the hypothesis that 325 milligrams (mg) of aspirin taken every other day reduces mortality from cardiovascular disease. The
second tested whether 50 mg of b-carotene taken on alternate days decreases the incidence of cancer. From names on an American Medical Association computer tape, 261,248 male physicians between the
ages of 40 and 84 were invited to participate in the trial. Of those who responded, 59,285 were willing to participate. After the exclusion of those physicians who had a history of medical disorders,
or who were currently taking aspirin or had negative reactions to aspirin, 22,071 physicians were randomized into one of four treatment groups: (1) buffered aspirin and b-carotene, (2) buffered
aspirin and a
b-carotene placebo, (3) aspirin placebo and b-carotene, and (4) aspirin placebo and b-carotene placebo. Thus, half were assigned to receive aspirin and half to receive b-carotene. The study was
conducted as a double-blind study, in which neither the participants nor the investigators responsible for following the participants knew to which group a participant belonged. The results of the
American study concerning myocardial infarctions (the technical name for heart attacks) are given in the following table: American Study Aspirin (n 11,037) Myocardial Infarction Fatal Nonfatal Total
Placebo (n 11,034)
The objective of the British study was to determine whether 500 mg of aspirin taken daily would reduce the incidence of and mortality from cardiovascular disease. In 1978 all male physicians in the
United Kingdom were invited to participate. After the usual exclusions, 5139 doctors were randomly allocated to take aspirin, unless some problem developed, and one-third were randomly allocated to
avoid aspirin. Placebo tablets were not used, so the study was not blind! The results of the British study are given here: British Study
Myocardial Infarction Fatal Nonfatal Total
Aspirin (n 3429)
Control (n 1710)
89 (47.3) 80 (42.5)
47 (49.6) 41 (43.3)
169 (89.8)
88 (92.9)
To account for unequal sample sizes, the British study reported rates per 10,000 subject-years alive (given in parentheses). 1. Test whether the American study does in fact indicate that the rate of
heart attacks for physicians taking 325 mg of aspirin every other day is significantly different from the rate for those on the placebo. Is the American claim justified? 2. Repeat the analysis using
the data from the British study in which one group took 500 mg of aspirin every day and the control group took none. Based on their data, is the British claim justified? 3. Can you think of some
possible reasons the results of these two studies, which were alike in some respects, produced such different conclusions?
Inference from Small Samples © CORBIS SYGMA
GENERAL OBJECTIVE The basic concepts of large-sample statistical estimation and hypothesis testing for practical situations involving population means and proportions were introduced in Chapters 8
and 9. Because all of these techniques rely on the Central Limit Theorem to justify the normality of the estimators and test statistics, they apply only when the samples are large. This chapter
supplements the large-sample techniques by presenting small-sample tests and confidence intervals for population means and variances. Unlike their large-sample counterparts, these small-sample
techniques require the sampled populations to be normal, or approximately so.
CHAPTER INDEX ● Comparing two population variances (10.7) ● Inferences concerning a population variance (10.6) ● Paired-difference test: Dependent samples (10.5) ● Small-sample assumptions (10.8) ●
Small-sample inferences concerning the difference in two means: Independent random samples (10.4)
Would You Like a Four-Day Workweek? Will a flexible workweek schedule result in positive benefits for both employer and employee? Four obvious benefits are (1) less time traveling from field positions to
the office, (2) fewer employees parked in the parking lot, (3) reduced travel expenses, and (4) allowance for employees to have another day off. But does the flexible workweek make employees more
efficient and cause them to take fewer sick and personal days? The answers to some of these questions are posed in the case study at the end of this chapter.
● Small-sample inferences concerning a population mean (10.3) ● Student’s t distribution (10.2)
How Do I Decide Which Test to Use?
10.2 STUDENT’S t DISTRIBUTION
INTRODUCTION Suppose you need to run an experiment to estimate a population mean or the difference between two means. The process of collecting the data may be very expensive or very time-consuming.
If you cannot collect a large sample, the estimation and test procedures of Chapters 8 and 9 are of no use to you. This chapter introduces some equivalent statistical procedures that can be used when
the sample size is small. The estimation and testing procedures involve these familiar parameters: • • • •
A single population mean, m The difference between two population means, (m1 m2) A single population variance, s 2 The comparison of two population variances, s 21 and s 22
Small-sample tests and confidence intervals for binomial proportions will be omitted from our discussion.†
STUDENT’S t DISTRIBUTION In conducting an experiment to evaluate a new but very costly process for producing synthetic diamonds, you are able to study only six diamonds generated by the process. How
can you use these six measurements to make inferences about the average weight m of diamonds from this process? In discussing the sampling distribution of x苶 in Chapter 7, we made these points: • •
When n 30, the Central Limit Theorem will not guarantee that x苶 m 苶 s/兹n
When the original sampled population is normal, 苶x and z ( x苶 m)/(s/兹n苶) both have normal distributions, for any sample size. 苶), When the original sampled population is not normal, 苶x, z ( x苶
m)/(s/兹n 苶) all have approximately normal distributions, if the and z ⬇ ( x苶 m)/(s/兹n sample size is large.
苶) does not Unfortunately, when the sample size n is small, the statistic (x苶 m)/(s/兹n have a normal distribution. Therefore, all the critical values of z that you used in Chapters 8 and 9 are no
longer correct. For example, you cannot say that 苶x will lie within 1.96 standard errors of m 95% of the time. This problem is not new; it was studied by statisticians and experimenters in the early
1900s. To find the sampling distribution of this statistic, there are two ways to proceed:
is approximately normal.
Use an empirical approach. Draw repeated samples and compute ( 苶x m)/(s/兹n苶) for each sample. The relative frequency distribution that you construct using these values will approximate the shape
and location of the sampling distribution. Use a mathematical approach to derive the actual density function or curve that describes the sampling distribution.
A small-sample test for the binomial parameter p will be presented in Chapter 15.
388 ❍
This second approach was used by an Englishman named W.S. Gosset in 1908. He derived a complicated formula for the density function of x苶 m t s/兹n苶 for random samples of size n from a normal
population, and he published his results under the pen name “Student.” Ever since, the statistic has been known as Student’s t. It has the following characteristics: • •
FI GU R E 1 0 . 1
Standard normal z and the t distribution with 5 degrees of freedom
It is mound-shaped and symmetric about t 0, just like z. It is more variable than z, with “heavier tails”; that is, the t curve does not approach the horizontal axis as quickly as z does. This is
because the t statistic involves two random quantities, 苶x and s, whereas the z statistic involves only the sample mean, x苶. You can see this phenomenon in Figure 10.1. The shape of the t
distribution depends on the sample size n. As n increases, the variability of t decreases because the estimate s of s is based on more and more information. Eventually, when n is infinitely large, the
t and z distributions are identical!
Normal distribution
t distribution
For a one-sample t, df n 1.
FI GU R E 1 0 . 2
Tabulated values of Student’s t
The divisor (n 1) in the formula for the sample variance s2 is called the number of degrees of freedom (df ) associated with s2. It determines the shape of the t distribution. The origin of the term
degrees of freedom is theoretical and refers to the number of independent squared deviations in s2 that are available for estimating s 2. These degrees of freedom may change for different
applications and, since they specify the correct t distribution to use, you need to remember to calculate the correct degrees of freedom for each application. The table of probabilities for the
standard normal z distribution is no longer useful in calculating critical values or p-values for the t statistic. Instead, you will use Table 4 in Appendix I, which is partially reproduced in Table
10.1. When you index a particular number of degrees of freedom, the table records ta, a value of t that has tail area a to its right, as shown in Figure 10.2. ● f(t)
a 0
10.2 STUDENT’S t DISTRIBUTION
TABLE 10.1
Format of the Student’s t Table from Table 4 in Appendix I df
1 2 3 4 5 6 7 8 9 . . . 26 27 28 29 inf.
3.078 1.886 1.638 1.533 1.476 1.440 1.415 1.397 1.383 . . . 1.315 1.314 1.313 1.311 1.282
6.314 2.920 2.353 2.132 2.015 1.943 1.895 1.860 1.833 . . . 1.706 1.703 1.701 1.699 1.645
12.706 4.303 3.182 2.776 2.571 2.447 2.365 2.306 2.262 . . . 2.056 2.052 2.048 2.045 1.960
31.821 6.965 4.541 3.747 3.365 3.143 2.998 2.896 2.821 . . . 2.479 2.473 2.467 2.462 2.326
63.657 9.925 5.841 4.604 4.032 3.707 3.499 3.355 3.250 . . . 2.779 2.771 2.763 2.756 2.576
1 2 3 4 5 6 7 8 9 . . . 26 27 28 29 inf.
For a t distribution with 5 degrees of freedom, the value of t that has area .05 to its right is found in row 5 in the column marked t.050. For this particular t distribution, the area to the right
of t 2.015 is .05; only 5% of all values of the t statistic will exceed this value.
You can use the Student’s t Probabilities applet to find the t-value described in Example 10.1. The first applet, shown in Figure 10.3, provides t-values and their two-tailed probabilities, while the
second applet provides t-values and one-tailed probabilities. Use the slider on the right side of the applet to select the proper degrees of freedom. For Example 10.1, you should choose df 5 and type
.10 in the box marked “prob:” at the bottom of the first applet. The applet will provide the value of t that puts .05 in one tail of the t distribution. The second applet will show the identical t for
a one-tailed area of .05. The applet in Figure 10.3 shows t 2.02 which is correct to two decimal places. We will use this applet for the MyApplet Exercises at the end of the chapter. FIGU R E 1 0 . 3
Student’s t Probabilities applet
390 ❍
Suppose you have a sample of size n 10 from a normal distribution. Find a value of t such that only 1% of all values of t will be smaller.
Solution The degrees of freedom that specify the correct t distribution are df n 1 9, and the necessary t-value must be in the lower portion of the distribution, with area .01 to its left, as shown
in Figure 10.4. Since the t distribution is symmetric about 0, this value is simply the negative of the value on the right-hand side with area .01 to its right, or t.01 2.821. FI GU R E 1 0 . 4
t Distribution for Example 10.2
● f(t)
.01 –2.821
Comparing the t and z Distributions Look at one of the columns in Table 10.1. As the degrees of freedom increase, the critical value of t decreases until, when df inf., the critical t-value is the
same as the critical z-value for the same tail area. You can use the Comparing t and z applet to visualize this concept. Look at the three applets in Figure 10.5, which show the critical values for
t.025 compared with z.025 for df 8, 29 and 100. (The slider on the right side of the applet allows you to change the df.) The red curve (black in Figure 10.5) is the standard normal distribution,
with z.025 1.96. FI GU R E 1 0 . 5
Comparing t and z applet
The blue curve is the t distribution. With 8 df, you can clearly see a difference in the t and z curves, especially in the critical values that cut off an area of .025 in the tails. As the degrees of
freedom increase, the difference in the shapes of t and z becomes very similar, as do their critical values, until at df 100, there is almost no difference. This helps to explain why we use n 30 as
the somewhat arbitrary dividing line between large and small samples. When n 30 (df 29), the critical values of t are quite close to their normal counterparts. Rather than produce a t table with rows
for many more degrees of freedom, the critical values of z are sufficient when the sample size reaches n 30.
Assumptions behind Student’s t Distribution
Assumptions for one-sample t: • Random sample • Normal distribution
The critical values of t allow you to make reliable inferences only if you follow all the rules; that is, your sample must meet these requirements specified by the t distribution: • •
The sample must be randomly selected. The population from which you are sampling must be normally distributed.
These requirements may seem quite restrictive. How can you possibly know the shape of the probability distribution for the entire population if you have only a sample? If this were a serious problem,
however, the t statistic could be used in only very limited situations. Fortunately, the shape of the t distribution is not affected very much as long as the sampled population has an approximately
mound-shaped distribution. Statisticians say that the t statistic is robust, meaning that the distribution of the statistic does not change significantly when the normality assumption is violated. How
can you tell whether your sample is from a normal population? Although there are statistical procedures designed for this purpose, the easiest and quickest way to check for normality is to use the
graphical techniques of Chapter 2: Draw a dotplot or construct a stem and leaf plot. As long as your plot tends to “mound up” in the center, you can be fairly safe in using the t statistic for making
inferences. The random sampling requirement, on the other hand, is quite critical if you want to produce reliable inferences. If the sample is not random, or if it does not at least behave as a
random sample, then your sample results may be affected by some unknown factor and your conclusions may be incorrect. When you design an experiment or read about experiments conducted by others, look
critically at the way the data have been collected!
SMALL-SAMPLE INFERENCES CONCERNING A POPULATION MEAN As with large-sample inference, small-sample inference can involve either estimation or hypothesis testing, depending on the preference of the
experimenter. We explained the basics of these two types of inference in the earlier chapters, and we use them again 苶), and a different sampling distrinow, with a different sample statistic, t ( 苶
x m)/(s/兹n bution, the Student’s t, with (n 1) degrees of freedom.
392 ❍
SMALL-SAMPLE HYPOTHESIS TEST FOR m 1. Null hypothesis: H0 : m m0 2. Alternative hypothesis: One-Tailed Test
Two-Tailed Test
Ha : m m0 (or, Ha : m m0)
Ha : m m0
x苶 m0 3. Test statistic: t s/兹苶n 4. Rejection region: Reject H0 when One-Tailed Test
Two-Tailed Test
t ta (or t ta when the alternative hypothesis is Ha : m m0)
t ta/2
t ta/2
or when p-value a
α 0
α/2 α/2
The critical values of t, ta, and ta/2 are based on (n 1) degrees of freedom. These tabulated values can be found using Table 4 of Appendix I or the Student’s t Probabilities applet. Assumption: The
sample is randomly selected from a normally distributed population. SMALL-SAMPLE (1 a)100% CONFIDENCE INTERVAL FOR m s 苶x ta/2 兹苶n where s/兹n苶 is the estimated standard error of x苶, often
referred to as the standard error of the mean. EXAMPLE
A new process for producing synthetic diamonds can be operated at a profitable level only if the average weight of the diamonds is greater than .5 karat. To evaluate the profitability of the process,
six diamonds are generated, with recorded weights .46, .61, .52, .48, .57, and .54 karat. Do the six measurements present sufficient evidence to indicate that the average weight of the diamonds
produced by the process is in excess of .5 karat? The population of diamond weights produced by this new process has mean m, and you can set out the formal test of hypothesis in steps, as you did in
Chapter 9:
Null and alternative hypotheses: H0: m .5
versus Ha: m .5
Test statistic: You can use your calculator to verify that the mean and standard deviation for the six diamond weights are .53 and .0559, respectively. The test statistic is a t statistic, calculated
as x苶 m0 .53 .5 t 1.32 s/兹n苶 .0559/兹苶6 As with the large-sample tests, the test statistic provides evidence for either rejecting or accepting H0 depending on how far from the center of the t
distribution it lies.
Rejection region: If you choose a 5% level of significance (a .05), the righttailed rejection region is found using the critical values of t from Table 4 of Appendix I. With df n 1 5, you can reject
H0 if t t.05 2.015, as shown in Figure 10.6.
Conclusion: Since the calculated value of the test statistic, 1.32, does not fall in the rejection region, you cannot reject H0. The data do not present sufficient evidence to indicate that the mean
diamond weight exceeds .5 karat.
FIGU R E 1 0 . 6
Rejection region for Example 10.3
● f(t)
.05 0 A 95% confidence interval tells you that, if you were to construct many of these intervals (all of which would have slightly different endpoints), 95% of them would enclose the population mean.
1.32 2.015 Reject H0
As in Chapter 9, the conclusion to accept H0 would require the difficult calculation of b, the probability of a Type II error. To avoid this problem, we choose to not reject H0. We can then calculate
the lower bound for m using a small-sample lower onesided confidence bound. This bound is similar to the large-sample one-sided confidence bound, except that the critical za is replaced by a critical
ta from Table 4. For this example, a 95% lower one-sided confidence bound for m is: s x苶 ta 兹苶n .0559 .53 2.015 兹6苶 .53 .046 The 95% lower bound for m is m .484. The range of possible values
includes mean diamond weights both smaller and greater than .5; this confirms the failure of our test to show that m exceeds .5.
394 ❍
Remember from Chapter 9 that there are two ways to conduct a test of hypothesis: •
The critical value approach: Set up a rejection region based on the critical values of the statistic’s sampling distribution. If the test statistic falls in the rejection region, you can reject H0.
The p-value approach: Calculate the p-value based on the observed value of the test statistic. If the p-value is smaller than the significance level, a, you can reject H0. If there is no preset
significance level, use the guidelines in Section 9.3 to judge the statistical significance of your sample results.
We used the first approach in the solution to Example 10.3. We use the second approach to solve Example 10.4. EXAMPLE
Labels on 1-gallon cans of paint usually indicate the drying time and the area that can be covered in one coat. Most brands of paint indicate that, in one coat, a gallon will cover between 250 and
500 square feet, depending on the texture of the surface to be painted. One manufacturer, however, claims that a gallon of its paint will cover 400 square feet of surface area. To test this claim, a
random sample of ten 1-gallon cans of white paint were used to paint 10 identical areas using the same kind of equipment. The actual areas (in square feet) covered by these 10 gallons of paint are
given here: 310 376
Do the data present sufficient evidence to indicate that the average coverage differs from 400 square feet? Find the p-value for the test, and use it to evaluate the statistical significance of the
results. Solution Remember from Chapter 2 how to calculate 苶x and s using the data entry method on your calculator.
To test the claim, the hypotheses to be tested are
H0 : m 400
versus Ha : m 400
The sample mean and standard deviation for the recorded data are s 48.417 苶x 365.2 and the test statistic is x苶 m0 365.2 400 t 2.27 48.417/兹苶 10 s/兹n苶 The p-value for this test is the
probability of observing a value of the t statistic as contradictory to the null hypothesis as the one observed for this set of data—namely, t 2.27. Since this is a two-tailed test, the p-value is
the probability that either t 2.27 or t 2.27. Unlike the z-table, the table for t gives the values of t corresponding to upper-tail areas equal to .100, .050, .025, .010, and .005. Consequently, you
can only approximate the upper-tail area that corresponds to the probability that t 2.27. Since the t statistic for this test is based on 9 df, we refer to the row corresponding to df 9 in Table 4.
The five critical values for various tail areas are shown in Figure 10.7, an enlargement of the tail of the t distribution with 9 degrees of freedom. The value t 2.27 falls between t.025 2.262 and
t.010 2.821. Therefore, the right-tail area corresponding to the probability that t 2.27 lies between .01 and .025. Since this area represents only half of the p-value, you can write 1 .01 (p-value)
.025 2
.02 p-value .05
FIGU R E 1 0 . 7
Calculating the p-value for Example 10.4 (shaded area 12 p-value)
● f(t)
.100 .050 .025 .010 .005 2.262 1.383 1.833
2.821 3.250 2.27
What does this tell you about the significance of the statistical results? For you to reject H0, the p-value must be less than the specified significance level, a. Hence, you could reject H0 at the 5%
level, but not at the 2% or 1% level. Therefore, the p-value for this test would typically be reported by the experimenter as p-value .05
(or sometimes P .05)
For this test of hypothesis, H0 is rejected at the 5% significance level. There is sufficient evidence to indicate that the average coverage differs from 400 square feet. Within what limits does this
average coverage really fall? A 95% confidence interval gives the upper and lower limits for m as s x苶 ta/2 兹苶n
48.417 365.2 2.262 10 兹苶
365.2 34.63 Thus, you can estimate that the average area covered by 1 gallon of this brand of paint lies in the interval 330.6 to 399.8. A more precise interval estimate (a shorter interval) can
generally be obtained by increasing the sample size. Notice that the upper limit of this interval is very close to the value of 400 square feet, the coverage claimed on the label. This coincides with
the fact that the observed value of t 2.27 is just slightly less than the left-tail critical value of t.025 2.262, making the p-value just slightly less than .05. Most statistical computing packages
contain programs that will implement the Student’s t-test or construct a confidence interval for m when the data are properly entered into the computer’s database. Most of these programs will
calculate and report the exact p-value of the test, allowing you to quickly and accurately draw conclusions about the statistical significance of the results. The results of the MINITAB one-sample
t-test and confidence interval procedures are given in Figure 10.8. Besides the observed value of t 2.27 and the confidence interval (330.6, 399.8), the output gives the sample mean, the sample
standard deviation, the standard error of the mean (SE Mean s/兹n苶), and the exact p-value of the test (P .049). This is consistent with the range for the p-value that we found using Table 4 in
Appendix I: .02 p-value .05
396 ❍
FI GU R E 1 0 . 8
MINITAB output for Example 10.4
● One-Sample T: Area Test of mu = 400 Variable Area Variable Area
N 10
not = 400
Mean 365.2
95% CI (330.6, 399.8)
StDev 48.4
SE Mean 15.3 T -2.27
P 0.049
You can use the Small Sample Test of a Population Mean applet to visualize the p-values for either one- or two-tailed tests of the population mean m. The procedure follows the same pattern as with
previous applets. You enter the values of 苶x, n, and s and press “Enter” after each entry; the applet will calculate t and give you the option of choosing one- or two-tailed p-values (Area to Left,
Area to Right, or Two Tails), as well as a Middle area that you will not need. FI GU R E 1 0 . 9
Small Sample Test of a Population Mean applet
For the data of Example 10.4, the p-value is the two-tailed area to the right of t 2.273 and to the left of t 2.273. Can you find this same p-value in the MINITAB printout shown in Figure 10.9?
You can see the value of using the computer output or the Java applet to evaluate statistical results: • •
The exact p-value eliminates the need for tables and critical values. All of the numerical calculations are done for you.
The most important job—which is left for the experimenter—is to interpret the results in terms of their practical significance!
10.1 Find the following t-values in Table 4 of
Appendix I: a. t.05 for 5 df c. t.10 for 18 df
10.6 Tuna Fish Is there a difference in the
b. t.025 for 8 df d. t.025 for 30 df
rejection region in these situations: A two-tailed test with a .01 and 12 df A right-tailed test with a .05 and 16 df A two-tailed test with a .05 and 25 df A left-tailed test with a .01 and 7 df
10.3 Use Table 4 in Appendix I to approximate the
p-value for the t statistic in each situation: a. A two-tailed test with t 2.43 and 12 df b. A right-tailed test with t 3.21 and 16 df c. A two-tailed test with t 1.19 and 25 df d. A left-tailed test
with t 8.77 and 7 df 10.4 Test Scores The test scores on a
100-point test were recorded for 20 students:
a. Can you reasonably assume that these test scores have been selected from a normal population? Use a stem and leaf plot to justify your answer. b. Calculate the mean and standard deviation of the
scores. c. If these students can be considered a random sample from the population of all students, find a 95% confidence interval for the average test score in the population. 10.5 The following n 10
observations are a sample
from a normal population: 7.4
prices of tuna, depending on the method of packaging? Consumer Reports gives the estimated average price for a 6-ounce can or a 7.06-ounce pouch of tuna, based on prices paid nationally in
supermarkets.1 These prices are recorded for a variety of different brands of tuna.
10.2 Find the critical value(s) of t that specify the
a. b. c. d.
a. Find the mean and standard deviation of these data. b. Find a 99% upper one-sided confidence bound for the population mean m. c. Test H0 : m 7.5 versus Ha : m 7.5. Use a .01. d. Do the results of
part b support your conclusion in part c?
Light Tuna in Water
White Tuna in Oil
White Tuna in Water
Light Tuna in Oil
.99 1.92 1.23 .85 .65 .69 .60
1.27 1.22 1.19 1.22
1.49 1.29 1.27 1.35 1.29 1.00 1.27 1.28
2.56 1.92 1.30 1.79 1.23
.53 1.41 1.12 .63 .67 .60 .66
.62 .66 62 .65 .60 .67
Source: Case Study “Pricing of Tuna” Copyright 2001 by Consumers Union of U.S., Inc., Yonkers, NY 10703-1057, a nonprofit organization. Reprinted with permission from the June 2001 issue of Consumer
Reports® for educational purposes only. No commercial use or reproduction permitted. www.ConsumerReports.org®.
Assume that the tuna brands included in this survey represent a random sample of all tuna brands available in the United States. a. Find a 95% confidence interval for the average price for light tuna
in water. Interpret this interval. That is, what does the “95%” refer to? b. Find a 95% confidence interval for the average price for white tuna in oil. How does the width of this interval compare to
the width of the interval in part a? Can you explain why? c. Find 95% confidence intervals for the other two samples (white tuna in water and light tuna in oil). Plot the four treatment means and
their standard errors in a two-dimensional plot similar to Figure 8.5. What kind of broad comparisons can you make about the four treatments? (We will discuss the procedure for comparing more than
two population means in Chapter 11.) 10.7 Dissolved O2 Content Industrial wastes and
sewage dumped into our rivers and streams absorb oxygen and thereby reduce the amount of dissolved oxygen available for fish and other forms of aquatic life. One state agency requires a minimum of 5
parts per million (ppm) of dissolved oxygen in order for the oxygen content to be sufficient to support aquatic life. Six water specimens taken from a river at a specific location during the low-water
season (July) gave
398 ❍
readings of 4.9, 5.1, 4.9, 5.0, 5.0, and 4.7 ppm of dissolved oxygen. Do the data provide sufficient evidence to indicate that the dissolved oxygen content is less than 5 ppm? Test using a .05. 10.8
Lobsters In a study of the infestation of the
Thenus orientalis lobster by two types of barnacles, Octolasmis tridens and O. lowei, the carapace lengths (in millimeters) of 10 randomly selected lobsters caught in the seas near Singapore are
measured:2 78
Find a 95% confidence interval for the mean carapace length of the T. orientalis lobsters. 10.9 Smoking and Lung Capacity It is recognized that cigarette smoking has a deleterious effect on lung
function. In a study of the effect of cigarette smoking on the carbon monoxide diffusing capacity (DL) of the lung, researchers found that current smokers had DL readings significantly lower than
those of either exsmokers or nonsmokers. The carbon monoxide diffusing capacities for a random sample of n 20 current smokers are listed here:
103.768 92.295 100.615 102.754
88.602 61.675 88.017 108.579
73.003 90.677 71.210 73.154
123.086 84.023 82.115 106.755
91.052 76.014 89.222 90.479
a. Do these data indicate that the mean DL reading for current smokers is significantly lower than 100 DL, the average for nonsmokers? Use a .01. b. Find a 99% upper one-sided confidence bound for the
mean DL reading for current smokers. Does this bound confirm your conclusions in part a? 10.10 Brett Favre In Exercise 2.36 (EX0236), the number of passes completed by Brett Favre, quarterback for the
Green Bay Packers, was recorded for each of the 16 regular season games in the fall of 2006 (ESPN.com):3
a. A stem and leaf plot of the n 16 observations is shown below: Stem-and-Leaf Display: Favre Stem-and-leaf of Favre Leaf Unit = 1.0 LO 5 2 1 5 3 1 7 4 1 9 6 2 01 (4) 2 2222 6 2 445 3 2 6 2 2 8 1 3 1
= 16
Based on this plot, is it reasonable to assume that the underlying population is approximately normal, as required for the one-sample t-test? Explain. b. Calculate the mean and standard deviation for
Brett Favre’s per game pass completions. c. Construct a 95% confidence interval to estimate the per game pass completions per game for Brett Favre. 10.11 Purifying Organic Compound Organic
chemists often purify organic compounds by a method known as fractional crystallization. An experimenter wanted to prepare and purify 4.85 grams (g) of aniline. Ten 4.85-g quantities of aniline were
individually prepared and purified to acetanilide. The following dry yields were recorded: 3.85 3.36
3.80 3.62
3.88 4.01
3.85 3.72
3.90 3.82
Estimate the mean grams of acetanilide that can be recovered from an initial amount of 4.85 g of aniline. Use a 95% confidence interval. 10.12 Organic Compounds, continued Refer to
Exercise 10.11. Approximately how many 4.85-g specimens of aniline are required if you wish to estimate the mean number of grams of acetanilide correct to within .06 g with probability equal to .95?
10.13 Bulimia Although there are many treatments
for bulimia nervosa, some subjects fail to benefit from treatment. In a study to determine which factors predict who will benefit from treatment, an article in the British Journal of Clinical
Psychology indicates that self-esteem was one of these important predictors.4 The table gives the mean and standard deviation of self-esteem scores prior to treatment, at posttreatment, and during a
follow-up: Pretreatment Posttreatment Follow-up Sample Mean x苶 Standard Deviation s Sample Size n
20.3 5.0 21
26.6 7.4 21
27.7 8.2 20
a. Use a test of hypothesis to determine whether there is sufficient evidence to conclude that the true pretreatment mean is less than 25. b. Construct a 95% confidence interval for the true
posttreatment mean. c. In Section 10.4, we will introduce small-sample techniques for making inferences about the difference between two population means. Without the formality of a statistical test,
what are you willing to conclude about the differences among the three sampled population means represented by the results in the table?
10.14 RBC Counts Here are the red blood 6
cell counts (in 10 cells per microliter) of a healthy person measured on each of 15 days:
5.4 5.3 5.3
5.2 5.4 4.9
5.0 5.2 5.4
5.2 5.1 5.2
5.5 5.3 5.2
10.15 Hamburger Meat These data are the weights (in pounds) of 27 packages of ground beef in a supermarket meat display:
.99 1.14 .89 .98
.97 1.38 .96 1.14
1.18 .75 1.12 .92
1.41 .96 1.12 1.18
1.28 1.08 .93 1.17
.83 .87 1.24
a. Interpret the accompanying MINITAB printouts for the one-sample test and estimation procedures. MINITAB output for Exercise 10.15
One-Sample T: Weight Test of mu = 1
not = 1
Variable Weight
N 27
Variable Weight
95% CI (0.9867, 1.1178)
b. Verify the calculated values of t and the upper and lower confidence limits. 10.16 Cholesterol The serum cholesterol
levels of 50 subjects randomly selected from the L.A. Heart Data, data from an epidemiological heart disease study on Los Angeles County employees,5 follow.
Find a 95% confidence interval estimate of m, the true mean red blood cell count for this person during the period of testing.
1.08 1.06 .89 .89
Mean 1.0522
StDev 0.1657 T 1.64
SE Mean 0.0319 P 0.113
a. Construct a histogram for the data. Are the data approximately mound-shaped? b. Use a t-distribution to construct a 95% confidence interval for the average serum cholesterol levels for L.A. County
employees. 10.17 Cholesterol, continued Refer to Exercise 10.16. Since n 30, use the methods of Chapter 8 to create a large-sample 95% confidence interval for the average serum cholesterol level for
L.A. County employees. Compare the two intervals. (HINT: The two intervals should be quite similar. This is the reason we choose to approximate the sample distribution of x苶 m with a z-distribution
when n 30.) s/兹n苶
SMALL-SAMPLE INFERENCES FOR THE DIFFERENCE BETWEEN TWO POPULATION MEANS: INDEPENDENT RANDOM SAMPLES The physical setting for the problem considered in this section is the same as the one in Section
8.6, except that the sample sizes are no longer large. Independent random samples of n1 and n2 measurements are drawn from two populations, with means and variances m1, s 12, m 2, and s 22, and your
objective is to make inferences about (m1 m 2 ), the difference between the two population means. When the sample sizes are small, you can no longer rely on the Central Limit Theorem to ensure that
the sample means will be normal. If the original populations are normal, however, then the sampling distribution of the difference in the sample means, ( 苶x1 x苶2), will be normal (even for small
samples) with mean (m1 m 2 ) and standard error
s2 s2 1 2 n1 n2
400 ❍
Assumptions for the two-sample (independent) t-test: • • •
Random independent samples Normal distributions s1 s2
In Chapters 7 and 8, you used the sample variances, s12 and s 22, to calculate an estimate of the standard error, which was then used to form a large-sample confidence interval or a test of hypothesis
based on the large-sample z statistic: ( 苶x1 x苶2) (m1 m2) z ⬇ s2 s2 1 2 n1 n2
Unfortunately, when the sample sizes are small, this statistic does not have an approximately normal distribution—nor does it have a Student’s t distribution. In order to form a statistic with a
sampling distribution that can be derived theoretically, you must make one more assumption. Suppose that the variability of the measurements in the two normal populations is the same and can be
measured by a common variance s 2. That is, both populations have exactly the same shape, and s 12 s 22 s 2. Then the standard error of the difference in the two sample means is 冣冢n莦冪莦莦冪s莦 n
s2 s2 1 2 n1 n2
It can be proven mathematically that, if you use the appropriate sample estimate s2 for the population variance s 2, then the resulting test statistic, ( 苶x1 x苶2) (m1 m2 ) t 1 1 s2 n1 n2
has a Student’s t distribution. The only remaining problem is to find the sample estimate s2 and the appropriate number of degrees of freedom for the t statistic. Remember that the population variance
s 2 describes the shape of the normal distributions from which your samples come, so that either s12 or s 22 would give you an estimate of s 2. But why use just one when information is provided by
both? A better procedure is to combine the information in both sample variances using a weighted average, in which the weights are determined by the relative amount of information (the number of
measurements) in each sample. For example, if the first sample contained twice as many measurements as the second, you might consider giving the first sample variance twice as much weight. To achieve
this result, use this formula: (n1 1)s12 (n2 1)s 22 s 2 n1 n 2 2 Remember from Section 10.3 that the degrees of freedom for the one-sample t statistic are (n 1), the denominator of the sample
estimate s 2. Since s 21 has (n1 1) df and s 22 has (n2 1) df, the total number of degrees of freedom is the sum (n1 1) (n2 1) n1 n2 2 shown in the denominator of the formula for s 2.
CALCULATION OF s 2 •
If you have a scientific calculator, calculate each of the two sample standard deviations s1 and s2 separately, using the data entry procedure for your particular calculator. These values are squared
and used in this formula: (n1 1)s12 (n2 1)s 22 s 2 n1 n 2 2
It can be shown that s 2 is an unbiased estimator of the common population variance s 2. If s 2 is used to estimate s 2 and if the samples have been randomly and independently drawn from normal
populations with a common variance, then the statistic For the two-sample (independent) t-test, df n1 n2 2
( 苶x1 x苶2) (m1 m2 ) t 1 1 s2 n1 n2
has a Student’s t distribution with (n1 n2 2) degrees of freedom. The small-sample estimation and test procedures for the difference between two means are given next. TEST OF HYPOTHESIS CONCERNING
THE DIFFERENCE BETWEEN TWO MEANS: INDEPENDENT RANDOM SAMPLES 1. Null hypothesis: H0 : (m1 m2) D0, where D0 is some specified difference that you wish to test. For many tests, you will hypothesize that
there is no difference between m1 and m2; that is, D0 0. 2. Alternative hypothesis: One-Tailed Test
Two-Tailed Test
Ha : (m1 m2) D0 [or Ha : (m1 m2) D0]
Ha : (m1 m2 ) D0
( x苶1 x苶2) D0 3. Test statistic: t 1 1 s2 n1 n2
(n1 1)s12 (n2 1)s22 s 2 n1 n2 2 4. Rejection region: Reject H0 when One-Tailed Test
Two-Tailed Test
t ta [or t ta when the alternative hypothesis is Ha : (m1 m2) D0]
t ta/2
or when p-value a
t ta/2
402 ❍
TEST OF HYPOTHESIS CONCERNING THE DIFFERENCE BETWEEN TWO MEANS: INDEPENDENT RANDOM SAMPLES (continued) The critical values of t, ta, and ta/2 are based on (n1 n2 2) df. The tabulated values can be
found using Table 4 of Appendix I or the Student’s t Probabilities applet. Assumptions: The samples are randomly and independently selected from normally distributed populations. The variances of the
populations s 12 and s 22 are equal.
SMALL-SAMPLE (1 a)100% CONFIDENCE INTERVAL FOR (m1 m2) BASED ON INDEPENDENT RANDOM SAMPLES (x苶1 苶x2 ) ta/2
冢n 莦冪s莦 n 冣 2
where s 2 is the pooled estimate of s 2. EXAMPLE
TABLE 10.2
A course can be taken for credit either by attending lecture sessions at fixed times and days, or by doing online sessions that can be done at the student’s own pace and at those times the student
chooses. The course coordinator wants to determine if these two ways of taking the course resulted in a significant difference in achievement as measured by the final exam for the course. The following
data gives the scores on an examination with 45 possible points for one group of n1 9 students who took the course online, and a second group of n2 9 students who took the course with conventional
lectures. Do these data present sufficient evidence to indicate that the grades for students who took the course online are significantly higher than those who attended a conventional class? ●
Test Scores for Online and Classroom Presentations Online
Let m1 and m2 be the mean scores for the online group and the classroom group, respectively. Then, since you seek evidence to support the theory that m1 m2, you can test the null hypothesis
H0 : m1 m2
[or H0 : (m1 m2) 0]
versus the alternative hypothesis Ha : m1 m2
[or Ha : (m1 m2) 0]
To conduct the t-test for these two independent samples, you must assume that the sampled populations are both normal and have the same variance s 2. Is this reasonable? Stem and leaf plots of the
data in Figure 10.10 show at least a “mounding” pattern, so that the assumption of normality is not unreasonable. FIGU R E 1 0 . 1 0
Stem and leaf plots for Example 10.5
● Online 2 3 3 4
Classroom 2 3 3 4
Furthermore, the standard deviations of the two samples, calculated as s1 4.9441 Stem and leaf plots can help you decide if the normality assumption is reasonable.
s2 4.4752
are not different enough for us to doubt that the two distributions may have the same shape. If you make these two assumptions and calculate (using full accuracy) the pooled estimate of the common
variance as (n1 1)s12 (n2 1)s 22 8(4.9441)2 8(4.4752)2 s 2 22.2361 n1 n2 2 992 you can then calculate the test statistic, x苶1 x苶2 35.22 31.56 t 1.65 1 1 1 1 s 2 22.2361 n1 n2 9 9
If you are using a calculator, don’t round off until the final step!
FIGU R E 1 0 . 1 1
Rejection region for Example 10.5
The alternative hypothesis Ha : m1 m2 or, equivalently, Ha : (m1 m2) 0 implies that you should use a one-tailed test in the upper tail of the t distribution with (n1 n2 2) 16 degrees of freedom. You
can find the appropriate critical value for a rejection region with a .05 in Table 4 of Appendix I, and H0 will be rejected if t 1.746. Comparing the observed value of the test statistic t 1.65 with
the critical value t.05 1.746, you cannot reject the null hypothesis (see Figure 10.11). There is insufficient evidence to indicate that the online course grades are higher than the conventional
course grades at the 5% level of significance. ● f(t)
α = .05
1.746 Reject H0
404 ❍
Find the p-value that would be reported for the statistical test in Example 10.5. Solution
The observed value of t for this one-tailed test is t 1.65. Therefore,
p-value P(t 1.65) for a t statistic with 16 degrees of freedom. Remember that you cannot obtain this probability directly from Table 4 in Appendix I; you can only bound the p-value using the critical
values in the table. Since the observed value, t 1.65, lies between t.100 1.337 and t.050 1.746, the tail area to the right of 1.65 is between .05 and .10. The p-value for this test would be reported
as .05 p-value .10 Because the p-value is greater than .05, most researchers would report the results as not significant.
You can use the Two-Sample t-Test: Independent Samples applet, shown in Figure 10.12, to visualize the p-values for either one- or two-tailed tests of the difference between two population means. The
procedure follows the same pattern as with previous applets. You need to enter summary statistics—the values of x苶1, x苶2, n1, n2, s1, and s2 and press “Enter” after each entry; the applet will
calculate t (assuming equal variances) and give you the option of choosing one- or two-tailed p-values, (Area to Left, Area to Right, or Two Tails), as well as a Middle area that you will not need. F
IGU R E 1 0 . 1 2
Two-Sample t-Test: Independent Samples applet
For the data of Example 10.5, the p-value is the one-tailed area to the right of t 1.65. Does the p-value confirm the conclusions for the test in Example 10.5?
Use a lower 95% confidence bound to estimate the difference (m1 m2) in Example 10.5. Does the lower confidence bound indicate that the online average is significantly higher than the classroom average?
The lower confidence bound takes a familiar form—the point estimator ( 苶x1 x苶2) minus an amount equal to ta times the standard error of the estimator. Substituting into the formula, you can
calculate the 95% lower confidence bound:
冢莦冣冪莦 1 1 (35.22 31.56) 1.746 22.2361 冢9 莦冪莦莦 9冣 (x苶1 苶x2) ta
1 1 s 2 n1 n2
3.66 3.88 or (m1 m2) .22. Since the value (m1 m2) 0 is included in the confidence interval, it is possible that the two means are equal. There is insufficient evidence to indicate that the online
average is higher than the classroom average. The two-sample procedure that uses a pooled estimate of the common variance s 2 relies on four important assumptions: • Larger s 2/Smaller s 2 3 ⇔
variance assumption is reasonable
The samples must be randomly selected. Samples not randomly selected may introduce bias into the experiment and thus alter the significance levels you are reporting. The samples must be independent.
If not, this is not the appropriate statistical procedure. We discuss another procedure for dependent samples in Section 10.5. The populations from which you sample must be normal. However, moderate
departures from normality do not seriously affect the distribution of the test statistic, especially if the sample sizes are nearly the same. The population variances should be equal or nearly equal
to ensure that the procedures are valid.
If the population variances are far from equal, there is an alternative procedure for estimation and testing that has an approximate t distribution in repeated sampling. As a rule of thumb, you
should use this procedure if the ratio of the two sample variances, Larger s 2 2 3 Smaller s Since the population variances are not equal, the pooled estimator s2 is no longer appropriate, and each
population variance must be estimated by its corresponding sample variance. The resulting test statistic is ( 苶x1 x苶2 ) D0 s2 s2 1 2 n1 n2
When the sample sizes are small, critical values for this statistic are found using degrees of freedom approximated by the formula s2 s2 2 1 2 n1 n2 df ⬇ 2 (s1/n1)2 (s22/n2)2 (n1 1) (n2 1)
The degrees of freedom are taken to be the integer part of this result. Computer packages such as MINITAB can be used to implement this procedure, sometimes called Satterthwaite’s approximation, as
well as the pooled method described earlier. In fact, some experimenters choose to analyze their data using both methods. As long as both analyses lead to the same conclusions, you need not concern
yourself with the equality or inequality of variances. The MINITAB output resulting from the pooled method of analysis for the data of Example 10.5 is shown in Figure 10.13. Notice that the ratio of
the two sample variances, (4.94/4.48)2 1.22, is less than 3, which makes the pooled method appropriate. The calculated value of t 1.65 and the exact p-value .059 with 16 degrees of freedom are shown
in the last line of the output. The exact p-value makes it quite easy for you to determine the significance or nonsignificance of the sample results. You will find instructions for generating this
MINITAB output in the section “My MINITAB” at the end of this chapter. FI GU R E 1 0 . 1 3
MINITAB output for Example 10.5
● Two-Sample T-Test and CI: Online, Classroom Two-sample T for Online vs Classroom N Mean StDev SE Mean Online 9 35.22 4.94 1.6 Classroom 9 31.56 4.48 1.5 Difference = mu (Online) - mu (Classroom)
Estimate for difference: 3.67 95% lower bound for difference: -0.21 T-Test of difference = 0 (vs >): T-Value = 1.65 P-Value = 0.059 DF = 16 Both use Pooled StDev = 4.7155
If there is reason to believe that the normality assumptions have been violated, you can test for a shift in location of two population distributions using the nonparametric Wilcoxon rank sum test of
Chapter 15. This test procedure, which requires fewer assumptions concerning the nature of the population probability distributions, is almost as sensitive in detecting a difference in population
means when the conditions necessary for the t-test are satisfied. It may be more sensitive when the normality assumption is not satisfied.
BASIC TECHNIQUES 10.18 Give the number of degrees of freedom for s2,
10.19 Calculate s 2, the pooled estimator for s 2, in
the pooled estimator of s , in these cases:
these cases:
a. n1 16, n2 8 b. n1 10, n2 12 c. n1 15, n2 3
a. n1 10, n2 4, s12 3.4, s22 4.9 b. n1 12, n2 21, s12 18, s22 23
10.20 Two independent random samples of sizes n1 4 and n2 5 are selected from each of two normal populations: Population 1
Population 2
a. Calculate s , the pooled estimator of s 2. b. Find a 90% confidence interval for (m1 m2), the difference between the two population means. c. Test H0 : (m1 m2) 0 against Ha : (m1 m2) 0 for a .05.
State your conclusions. 2
b. What is the observed value of the test statistic? What is the p-value associated with this test? c. What is the pooled estimate s 2 of the population variance? d. Use the answers to part b to draw
conclusions about the difference in the two population means. e. Find the 95% confidence interval for the difference in the population means. Does this interval confirm your conclusions in part d?
10.21 Independent random samples of n1 16 and
n2 13 observations were selected from two normal populations with equal variances: Population
Sample Size Sample Mean Sample Variance
16 34.6 4.8
13 32.2 5.9
a. Suppose you wish to detect a difference between the population means. State the null and alternative hypotheses for the test. b. Find the rejection region for the test in part a for a .01. c. Find
the value of the test statistic. d. Find the approximate p-value for the test. e. Conduct the test and state your conclusions. 10.22 Refer to Exercise 10.21. Find a 99% confidence interval for (m1
m2). 10.23 The MINITAB printout shows a test for the dif-
ference in two population means. MINITAB output for Exercise 10.23
Two-Sample T-Test and CI: Sample 1, Sample 2 Two-sample T for Sample 1 vs Sample 2 N Mean StDev SE Mean Sample 1 6 29.00 4.00 1.6 Sample 2 7 28.86 4.67 1.8 Difference = mu (Sample 1) - mu (Sample 2)
Estimate for difference: 0.14 95% CI for difference: (-5.2, 5.5) T-Test of difference = 0 (vs not =): T-Value = 0.06 P-Value = 0.95 DF = 11 Both use Pooled StDev = 4.38
a. Do the two sample standard deviations indicate that the assumption of a common population variance is reasonable?
APPLICATIONS 10.24 Healthy Teeth Jan Lindhe conducted a study
on the effect of an oral antiplaque rinse on plaque buildup on teeth.6 Fourteen people whose teeth were thoroughly cleaned and polished were randomly assigned to two groups of seven subjects each.
Both groups were assigned to use oral rinses (no brushing) for a 2-week period. Group 1 used a rinse that contained an antiplaque agent. Group 2, the control group, received a similar rinse except
that, unknown to the subjects, the rinse contained no antiplaque agent. A plaque index x, a measure of plaque buildup, was recorded at 4, 7, and 14 days. The mean and standard deviation for the
14-day plaque measurements are shown in the table for the two groups.
Sample Size Mean Standard Deviation
Control Group
Antiplaque Group
7 1.26 .32
7 .78 .32
a. State the null and alternative hypotheses that should be used to test the effectiveness of the antiplaque oral rinse. b. Do the data provide sufficient evidence to indicate that the oral
antiplaque rinse is effective? Test using a .05. c. Find the approximate p-value for the test.
10.25 Tuna, again In Exercise 10.6 we presented data on the estimated average price for a 6-ounce can or a 7.06-ounce pouch of tuna, based on prices paid nationally in supermarkets. A portion of the
data is reproduced in the table below. Use the MINITAB printout to answer the questions.
408 ❍
Light Tuna in Water
Light Tuna in Oil
.99 1.92 1.23 .85 .65 .69 .60
2.56 1.92 1.30 1.79 1.23
.53 1.41 1.12 .63 .67 .60 .66
b. Construct a 95% confidence interval estimate of the difference in means for runners and cyclists under the condition of exercising at 80% of maximal oxygen consumption. c. To test for a significant
difference in compartment pressure at maximal oxygen consumption, should you use the pooled or unpooled t-test? Explain.
.62 .66 .62 .65 .60 .67
10.27 Disinfectants An experiment published in MINITAB output for Exercise 10.25
Two-Sample T-Test and CI: Water, Oil Two-sample T for Water vs Oil N Mean StDev Water 14 0.896 0.400 Oil 11 1.147 0.679
SE Mean 0.11 0.20
Difference = mu (Water) - mu (Oil) Estimate for difference: -0.251 95% CI for difference: (-0.700, 0.198) T-Test of difference = 0 (vs not =): T-Value = -1.16 P-Value = 0.260 DF = 23 Both use Pooled
StDev = 0.5389
a. Do the data in the table present sufficient evidence to indicate a difference in the average prices of light tuna in water versus oil? Test using a .05. b. What is the p-value for the test? c. The
MINITAB analysis uses the pooled estimate of s 2. Is the assumption of equal variances reasonable? Why or why not? 10.26 Runners and Cyclists Chronic anterior com-
partment syndrome is a condition characterized by exercise-induced pain in the lower leg. Swelling and impaired nerve and muscle function also accompany this pain, which is relieved by rest. Susan
Beckham and colleagues conducted an experiment involving ten healthy runners and ten healthy cyclists to determine whether there are significant differences in pressure measurements within the
anterior muscle compartment for runners and cyclists.7 The data summary—compartment pressure in millimeters of mercury (Hg)—is as follows: Runners Condition Rest 80% maximal O2 consumption Maximal O2
Standard Deviation
Cyclists Mean
Standard Deviation
12.2 19.1
3.49 16.9
11.5 12.2
4.95 4.47
a. Test for a significant difference in compartment pressure between runners and cyclists under the resting condition. Use a .05.
The American Biology Teacher studied the efficacy of using 95% ethanol or 20% bleach as a disinfectant in removing bacterial and fungal contamination when culturing plant tissues. The experiment was
repeated 15 times with each disinfectant, using eggplant as the plant tissue being cultured.8 Five cuttings per plant were placed on a petri dish for each disinfectant and stored at 25°C for 4 weeks.
The observation reported was the number of uncontaminated eggplant cuttings after the 4-week storage. Disinfectant Mean Variance n
95% Ethanol
20% Bleach
3.73 2.78095 15 Pooled variance 1.47619
4.80 .17143 15
a. Are you willing to assume that the underlying variances are equal? b. Using the information from part a, are you willing to conclude that there is a significant difference in the mean numbers of
uncontaminated eggplants for the two disinfectants tested? 10.28 Titanium A geologist collected 20 different ore samples, all of the same weight, and randomly divided them into two groups. The
titanium contents of the samples, found using two different methods, are listed in the table:
Method 1 .011 .013
.013 .010
.013 .013
Method 2 .015 .011
.014 .012
.011 .012
.016 .017
.013 .013
.012 .014
.015 .015
a. Use an appropriate method to test for a significant difference in the average titanium contents using the two different methods. b. Determine a 95% confidence interval estimate for (m1 m2). Does
your interval estimate substantiate your conclusion in part a? Explain. 10.29 Raisins The numbers of raisins in
each of 14 miniboxes (1/2-ounce size) were counted for a generic brand and for Sunmaid® brand raisins:
Generic Brand 25 26 26 26
Sunmaid 28 27 25
a. Although counts cannot have a normal distribution, do these data have approximately normal distributions? (HINT: Use a histogram or stem and leaf plot.) b. Are you willing to assume that the
underlying population variances are equal? Why? c. Use the p-value approach to determine whether there is a significant difference in the mean numbers of raisins per minibox. What are the implications
of your conclusion? 10.30 Dissolved O2 Content, continued Refer to Exercise 10.7, in which we measured the dissolved oxygen content in river water to determine whether a stream had sufficient oxygen
to support aquatic life. A pollution control inspector suspected that a river community was releasing amounts of semitreated sewage into a river. To check his theory, he drew five randomly selected
specimens of river water at a location above the town, and another five below. The dissolved oxygen readings (in parts per million) are as follows: Above Town
Below Town
a. Do the data provide sufficient evidence to indicate that the mean oxygen content below the town is less than the mean oxygen content above? Test using a .05. b. Suppose you prefer estimation as a
method of inference. Estimate the difference in the mean dissolved oxygen contents for locations above and below the town. Use a 95% confidence interval. 10.31 Freestyle Swimmers In an effort to
compare the average swimming times for two swimmers, each swimmer was asked to swim freestyle for a distance of 100 yards at randomly selected times. The swimmers were thoroughly rested between laps
and did not race against each other, so that each sample of times was an independent random sample. The times for each of 10 trials are shown for the two swimmers.
Swimmer 1
Swimmer 2
59.62 59.48 59.65 59.50 60.01
59.81 59.32 59.76 59.64 59.86
59.74 59.43 59.72 59.63 59.68
59.41 59.63 59.50 59.83 59.51
Suppose that swimmer 2 was last year’s winner when the two swimmers raced. Does it appear that the average time for swimmer 2 is still faster than the average time for swimmer 1 in the 100-yard
freestyle? Find the approximate p-value for the test and interpret the results. 10.32 Freestyle Swimmers, continued Refer to Exercise 10.31. Construct a lower 95% one-sided confidence bound for the
difference in the average times for the two swimmers. Does this interval confirm your conclusions in Exercise 10.31? 10.33 Comparing NFL Quarterbacks
How does Brett Favre, quarterback for the Green Bay Packers, compare to Peyton Manning, quarterback for the Indianapolis Colts? The table below shows the number of completed passes for each athlete
during the 2006 NFL football season:3
Brett Favre 15 31 25 22 22 19
Peyton Manning 22 20 26 21
a. Does the data indicate that there is a difference in the average number of completed passes for the two quarterbacks? Test using a .05. b. Construct a 95% confidence interval for the difference in
the average number of completed passes for the two quarterbacks. Does the confidence interval confirm your conclusion in part a? Explain. 10.34 An Archeological Find An article in Archaeometry involved
an analysis of 26 samEX1034 ples of Romano-British pottery, found at four different kiln sites in the United Kingdom.9 The samples were analyzed to determine their chemical composition and the
percentage of aluminum oxide in each of 10 samples at two sites is shown below. Island Thorns 18.3 15.8 18.0 18.0 20.8
Ashley Rails 17.7 18.3 16.7 14.8 19.1
Does the data provide sufficient information to indicate that there is a difference in the average percentage of aluminum oxide at the two sites? Test at the 5% level of significance.
To compare the wearing qualities of two types of automobile tires, A and B, a tire of type A and one of type B are randomly assigned and mounted on the rear wheels of each of five automobiles. The
automobiles are then operated for a specified number of miles, and the amount of wear is recorded for each tire. These measurements appear in Table 10.3. Do the data present sufficient evidence to
indicate a difference in the average wear for the two tire types? TABLE 10.3
Average Wear for Two Types of Tires Automobile
Tire A
Tire B
10.6 9.8 12.3 9.7 8.8
10.2 9.4 11.8 9.1 8.3
x苶1 10.24 s1 1.316
x苶2 9.76 s2 1.328
Table 10.3 shows a difference of (x苶1 x苶2 ) (10.24 9.76) .48 between the two sample means, while the standard deviations of both samples are approximately 1.3. Given the variability of the data and
the small number of measurements, this is a rather small difference, and you would probably not suspect a difference in the average wear for the two types of tires. Let’s check your suspicions using
the methods of Section 10.4. Look at the MINITAB analysis in Figure 10.14. The two-sample pooled t-test is used for testing the difference in the means based on two independent random samples. The
calculated value of t used to test the null hypothesis H0 : m1 m2 is t .57 with p-value .582, a value that is not nearly small enough to indicate a significant difference in the two population means.
The corresponding 95% confidence interval, given as 1.448 (m1 m2) 2.408 is quite wide and also does not indicate a significant difference in the population means. FI GU R E 1 0 . 1 4
MINITAB output using t-test for independent samples for the tire data
● Two-Sample T-Test and CI: Tire A, Tire B Two-sample T for Tire A vs Tire B N Mean StDev SE Mean Tire A 5 10.24 1.32 0.59 Tire B 5 9.76 1.33 0.59 Difference = mu (Tire A) - mu (Tire B) Estimate for
difference: 0.480 95% CI for difference: (-1.448, 2.408) T-Test of difference = 0 (vs not =): T-Value = 0.57 Both use Pooled StDev = 1.3221
P-Value = 0.582 DF = 8
Take a second look at the data and you will notice that the wear measurement for type A is greater than the corresponding value for type B for each of the five automobiles. Wouldn’t this be unlikely,
if there’s really no difference between the two tire types? Consider a simple intuitive test, based on the binomial distribution of Chapter 5. If there is no difference in the mean tire wear for the
two types of tires, then it is just as likely as not that tire A shows more wear than tire B. The five automobiles then correspond to five binomial trials with p P(tire A shows more wear than tire B)
.5. Is the observed value of x 5 positive differences shown in Table 10.4 unusual? The probability of observing x 5 or the equally unlikely value x 0 can be found in Table 1 in Appendix I to be 2
(.031) .062, which is quite small compared to the likelihood of the more powerful t-test, which had a p-value of .58. Isn’t it peculiar that the t-test, which uses more information (the actual sample
measurements) than the binomial test, fails to supply sufficient information for rejecting the null hypothesis? TABLE 10.4
Differences in Tire Wear, Using the Data of Table 10.3 Automobile
10.6 9.8 12.3 9.7 8.8
10.2 9.4 11.8 9.1 8.3
.4 .4 .5 .6 .5 d苵 .48
There is an explanation for this inconsistency. The t-test described in Section 10.4 is not the proper statistical test to be used for our example. The statistical test procedure of Section 10.4
requires that the two samples be independent and random. Certainly, the independence requirement is violated by the manner in which the experiment was conducted. The (pair of) measurements, an A and
a B tire, for a particular automobile are definitely related. A glance at the data shows that the readings have approximately the same magnitude for a particular automobile but vary markedly from one
automobile to another. This, of course, is exactly what you might expect. Tire wear is largely determined by driver habits, the balance of the wheels, and the road surface. Since each automobile has
a different driver, you would expect a large amount of variability in the data from one automobile to another. In designing the tire wear experiment, the experimenter realized that the measurements
would vary greatly from automobile to automobile. If the tires (five of type A and five of type B) were randomly assigned to the ten wheels, resulting in independent random samples, this variability
would result in a large standard error and make it difficult to detect a difference in the means. Instead, he chose to “pair” the measurements, comparing the wear for type A and type B tires on each
of the five automobiles. This experimental design, sometimes called a paired-difference or matched pairs design, allows us to eliminate the car-to-car variability by looking at only the five difference
measurements shown in Table 10.4. These five differences form a single random sample of size n 5. Notice that in Table 10.4 the sample mean of the differences, d A B, is calculated as Sd d苶 i .48 n
and is exactly the same as the difference of the sample means: (x苶1 苶x2) (10.24 9.76) .48. It should not surprise you that this can be proven to be true in general, and also that the same
relationship holds for the population means. That is, the average of the population differences is md (m1 m2) Because of this fact, you can use the sample differences to test for a significant
difference in the two population means, (m1 m2) md. The test is a single-sample t-test of the difference measurements to test the null hypothesis H0 : md 0
[or H0 : (m1 m2) 0]
versus the alternative hypothesis Ha : md 0
[or Ha : (m1 m2) 0]
The test procedures take the same form as the procedures used in Section 10.3 and are described next.
PAIRED-DIFFERENCE TEST OF HYPOTHESIS FOR (m1 m2) md: DEPENDENT SAMPLES 1. Null hypothesis: H0 : md 0 2. Alternative hypothesis: One-Tailed Test
Two-Tailed Test
Ha : md 0 (or Ha : md 0)
Ha : md 0
苶d 0 苶d 3. Test statistic: t 苶 sd /兹苶n sd/兹n where n Number of paired differences d苶 Mean of the sample differences sd Standard deviation of the sample differences 4. Rejection region: Reject
H0 when One-Tailed Test
Two-Tailed Test
t ta t ta/2 (or t ta when the alternative hypothesis is Ha : md 0)
t ta/2
or when p-value a The critical values of t, ta, and ta/2 are based on (n 1) df. These tabulated values can be found using Table 4 or the Student’s t Probabilities applet.
(1 a)100% SMALL-SAMPLE CONFIDENCE INTERVAL FOR (m1 m2) md, BASED ON A PAIRED-DIFFERENCE EXPERIMENT sd d苶 ta/2 兹n苶
Assumptions: The experiment is designed as a paired-difference test so that the n differences represent a random sample from a normal population. EXAMPLE
Do the data in Table 10.3 provide sufficient evidence to indicate a difference in the mean wear for tire types A and B? Test using a .05. You can verify using your calculator that the average and
standard deviation of the five difference measurements are
苶d .48
sd .0837
Then H0 : md 0
Ha : md 0
and 苶d 0 .48 t 12.8 苶 sd /兹苶n .0837/兹5 The critical value of t for a two-tailed statistical test, a .05 and 4 df, is 2.776. Certainly, the observed value of t 12.8 is extremely large and highly
significant. Hence, you can conclude that there is a difference in the mean wear for tire types A and B.
Find a 95% confidence interval for (m1 m2) md using the data in Table 10.3. Solution
A 95% confidence interval for the difference between the mean levels of
wear is sd 苶d ta/2 兹n苶
.0837 .48 2.776 5 兹苶
.48 .10 Confidence intervals are always interpreted in the same way! In repeated sampling, intervals constructed in this way enclose the true value of the parameter 100(1 a)% of the time.
or .38 (m1 m2) .58. How does the width of this interval compare with the width of an interval you might have constructed if you had designed the experiment in an unpaired manner? It probably would
have been of the same magnitude as the interval calculated in Figure 10.14, where the observed data were incorrectly analyzed using the unpaired analysis. This interval, 1.45 (m1 m2) 2.41, is much
wider than the paired interval, which indicates that the paired difference design increased the accuracy of our estimate, and we have gained valuable information by using this design. The
paired-difference test or matched pairs design used in the tire wear experiment is a simple example of an experimental design called a randomized block design.
Paired difference test: df n 1
FI GU R E 1 0 . 1 5
MINITAB output for paired-difference analysis of tire wear data
When there is a great deal of variability among the experimental units, even before any experimental procedures are implemented, the effect of this variability can be minimized by blocking—that is,
comparing the different procedures within groups of relatively similar experimental units called blocks. In this way, the “noise” caused by the large variability does not mask the true differences
between the procedures. We will discuss randomized block designs in more detail in Chapter 11. It is important for you to remember that the pairing or blocking occurs when the experiment is planned,
and not after the data are collected. An experimenter may choose to use pairs of identical twins to compare two learning methods. A physician may record a patient’s blood pressure before and after a
particular medication is given. Once you have used a paired design for an experiment, you no longer have the option of using the unpaired analysis of Section 10.4. The independence assumption has
been purposely violated, and your only choice is to use the paired analysis described here! Although pairing was very beneficial in the tire wear experiment, this may not always be the case. In the
paired analysis, the degrees of freedom for the t-test are cut in half—from (n n 2) 2(n 1) to (n 1). This reduction increases the critical value of t for rejecting H0 and also increases the width of
the confidence interval for the difference in the two means. If pairing is not effective, this increase is not offset by a decrease in the variability, and you may in fact lose rather than gain
information by pairing. This, of course, did not happen in the tire experiment—the large reduction in the standard error more than compensated for the loss in degrees of freedom. Except for notation,
the paired-difference analysis is the same as the singlesample analysis presented in Section 10.3. However, MINITAB provides a single procedure called Paired t to analyze the differences, as shown in
Figure 10.15. The p-value for the paired analysis, .000, indicates a highly significant difference in the means. You will find instructions for generating this MINITAB output in the “My MINITAB ”
section at the end of this chapter. ● Paired T-Test and CI: Tire A, Tire B Paired T for Tire A - Tire B N Mean Tire A 5 10.240 Tire B 5 9.760 Difference 5 0.4800
StDev 1.316 1.328 0.0837
SE Mean 0.589 0.594 0.0374
95% CI for mean difference: (0.3761, 0.5839) T-Test of mean difference = 0 (vs not = 0): T-Value = 12.83
P-Value = 0.000
BASIC TECHNIQUES 10.35 A paired-difference experiment was conducted using n 10 pairs of observations.
a. Test the null hypothesis H0 : (m1 m2) 0 against Ha : (m1 m2) 0 for a .05, 苶d .3, and sd2 .16. Give the approximate p-value for the test. b. Find a 95% confidence interval for (m1 m2).
c. How many pairs of observations do you need if you want to estimate (m1 m2) correct to within .1 with probability equal to .95? 10.36 A paired-difference experiment consists of n 18 pairs, d苶 5.7,
and sd2 256. Suppose you wish to detect md 0.
a. Give the null and alternative hypotheses for the test. b. Conduct the test and state your conclusions.
10.37 A paired-difference experiment was conducted to compare the means of two populations: Pairs Population
1.3 1.2
1.6 1.5
1.1 1.1
1.4 1.2
1.7 1.8
a. Do the data provide sufficient evidence to indicate that m1 differs from m2? Test using a .05. b. Find the approximate p-value for the test and interpret its value. c. Find a 95% confidence
interval for (m1 m2). Compare your interpretation of the confidence interval with your test results in part a. d. What assumptions must you make for your inferences to be valid? APPLICATIONS 10.38
Auto Insurance The cost of automo-
bile insurance has become a sore subject in California because the rates are dependent on so many variables, such as the city in which you live, the number of cars you insure, and the company with
which you are insured. Here are the annual 2006–2007 premiums for a single male, licensed for 6–8 years, who drives a Honda Accord 12,600 to 15,000 miles per year and has no violations or
Long Beach Pomona San Bernardino Moreno Valley
$2617 2305 2286 2247
21st Century $2228 2098 2064 1890
Source: www.insurance.ca.gov
a. Why would you expect these pairs of observations to be dependent? b. Do the data provide sufficient evidence to indicate that there is a difference in the average annual premiums between Allstate
and 21st Century insurance? Test using a .01. c. Find the approximate p-value for the test and interpret its value. d. Find a 99% confidence interval for the difference in the average annual premiums
for Allstate and 21st Century insurance. e. Can we use the information in the table to make valid comparisons between Allstate and 21st Century insurance throughout the United States? Why or why not?
10.39 Runners and Cyclists II Refer to Exercise 10.26. In addition to the compartment pressures, the level of creatine phosphokinase (CPK) in blood samples, a measure of muscle damage, was determined
for each of 10 runners and 10 cyclists before and after exercise.7 The data summary—CPK values in units/liter—is as follows: Runners
Standard Deviation
Standard Deviation
Before exercise After exercise Difference
255.63 284.75 29.13
115.48 132.64 21.01
173.8 177.1 3.3
60.69 64.53 6.85
a. Test for a significant difference in mean CPK values for runners and cyclists before exercise under the assumption that s 12 s 22; use a .05. Find a 95% confidence interval estimate for the
corresponding difference in means. b. Test for a significant difference in mean CPK values for runners and cyclists after exercise under the assumption that s 12 s 22; use a .05. Find a 95% confidence
interval estimate for the corresponding difference in means. c. Test for a significant difference in mean CPK values for runners before and after exercise. d. Find a 95% confidence interval estimate
for the difference in mean CPK values for cyclists before and after exercise. Does your estimate indicate that there is no significant difference in mean CPK levels for cyclists before and after
exercise? 10.40 America’s Market Basket An advertisement for Albertsons, a supermarket chain in the western United States, claims that Albertsons has had consistently lower prices than four other
fullservice supermarkets. As part of a survey conducted by an “independent market basket price-checking company,” the average weekly total, based on the prices of approximately 95 items, is given for
two different supermarket chains recorded during 4 consecutive weeks in a particular month.
254.26 240.62 231.90 234.13
256.03 255.65 255.12 261.18
a. Is there a significant difference in the average prices for these two different supermarket chains? b. What is the approximate p-value for the test conducted in part a?
416 ❍
c. Construct a 99% confidence interval for the difference in the average prices for the two supermarket chains. Interpret this interval. 10.41 No Left Turn An experiment was conducted to compare the
mean reaction times to two types of traffic signs: prohibitive (No Left Turn) and permissive (Left Turn Only). Ten drivers were included in the experiment. Each driver was presented with 40 traffic
signs, 20 prohibitive and 20 permissive, in random order. The mean time to reaction and the number of correct actions were recorded for each driver. The mean reaction times (in milliseconds) to the
20 prohibitive and 20 permissive traffic signs are shown here for each of the 10 drivers:
a. Explain why this is a paired-difference experiment and give reasons why the pairing should be useful in increasing information on the difference between the mean reaction times to prohibitive and
permissive traffic signs. b. Do the data present sufficient evidence to indicate a difference in mean reaction times to prohibitive and permissive traffic signs? Use the p-value approach. c. Find a
95% confidence interval for the difference in mean reaction times to prohibitive and permissive traffic signs. 10.42 Healthy Teeth II Exercise 10.24 describes a
dental experiment conducted to investigate the effectiveness of an oral rinse used to inhibit the growth of plaque on teeth. Subjects were divided into two groups: One group used a rinse with an
antiplaque ingredient, and the control group used a rinse containing inactive ingredients. Suppose that the plaque growth on each person’s teeth was measured after using the rinse after 4 hours and
then again after 8 hours. If you wish to estimate the difference in plaque growth from 4 to 8 hours, should you use a confidence interval based on a paired or an unpaired analysis? Explain.
10.43 Ground or Air? The earth’s temperature (which affects seed germination, crop survival in bad weather, and many other aspects of agricultural production) can be measured using either
ground-based sensors or infrared-sensing devices mounted in aircraft or space satellites. Ground-based sensoring is tedious, requiring many replications to obtain an accurate estimate of ground
temperature. On the other hand, airplane or satellite sensoring of infrared waves appears to introduce a bias in the temperature readings. To determine the bias, readings were obtained at five
different locations using both ground- and air-based temperature sensors. The readings (in degrees Celsius) are listed here: Location
46.9 45.4 36.3 31.0 24.7
47.3 48.1 37.9 32.7 26.2
a. Do the data present sufficient evidence to indicate a bias in the air-based temperature readings? Explain. b. Estimate the difference in mean temperatures between ground- and air-based sensors
using a 95% confidence interval. c. How many paired observations are required to estimate the difference between mean temperatures for ground- versus air-based sensors correct to within .2°C, with
probability approximately equal to .95? 10.44 Red Dye To test the comparative
brightness of two red dyes, nine samples of cloth were taken from a production line and each sample was divided into two pieces. One of the two pieces in each sample was randomly chosen and red dye 1
applied; red dye 2 was applied to the remaining piece. The following data represent a “brightness score” for each piece. Is there sufficient evidence to indicate a difference in mean brightness
scores for the two dyes? Use a .05.
Dye 1 Dye 2
10.45 Tax Assessors In response to a com-
plaint that a particular tax assessor (A) was biased, an experiment was conducted to compare the assessor named in the complaint with another tax assessor (B) from the same office. Eight properties
were selected, and each was assessed by both assessors. The assessments (in thousands of dollars) are shown in the table.
Assessor A
Assessor B
76.3 88.4 80.2 94.7 68.7 82.8 76.1 79.0
75.1 86.8 77.3 90.6 69.1 81.0 75.3 79.1
10.46 Memory Experiments A psychol-
ogy class performed an experiment to compare whether a recall score in which instructions to form images of 25 words were given is better than an initial recall score for which no imagery
instructions were given. Twenty students participated in the experiment with the following results:
Use the MINITAB printout to answer the questions. MINITAB output for Exercise 10.45
Paired T-Test and CI: Assessor A, Assessor B Paired T for Assessor A - Assessor B N Mean StDev SE Mean Assessor A 8 80.77 7.99 2.83 Assessor B 8 79.29 6.85 2.42 Difference 8 1.488 1.491 0.527 95%
lower bound for mean difference: 0.489 T-Test of mean difference = 0 (vs > 0): T-Value = 2.82 P-value = 0.013
a. Do the data provide sufficient evidence to indicate that assessor A tends to give higher assessments than assessor B? b. Estimate the difference in mean assessments for the two assessors. c. What
assumptions must you make in order for the inferences in parts a and b to be valid? d. Suppose that assessor A had been compared with a more stable standard—say, the average 苶x of the assessments
given by four assessors selected from the tax office. Thus, each property would be assessed by A and also by each of the four other assessors and (xA 苶x ) would be calculated. If the test in part a
is valid, can you use the paired-difference t-test to test the hypothesis that the bias, the mean difference between A’s assessments and the mean of the assessments of the four assessors, is equal to
0? Explain.
With Imagery
Without Imagery
With Imagery
Without Imagery
Does it appear that the average recall score is higher when imagery is used? 10.47 Music in the Workplace Before con-
tracting to have stereo music piped into each of his suites of offices, an executive had his office manager randomly select seven offices in which to have the system installed. The average time (in
minutes) spent outside these offices per excursion among the employees involved was recorded before and after the music system was installed with the following results.
Office Number
No Music Music
Would you suggest that the executive proceed with the installation? Conduct an appropriate test of hypothesis. Find the approximate p-value and interpret your results.
INFERENCES CONCERNING A POPULATION VARIANCE You have seen in the preceding sections that an estimate of the population variance s 2 is usually needed before you can make inferences about population
means. Sometimes, however, the population variance s 2 is the primary objective in an experimental investigation. It may be more important to the experimenter than the population mean! Consider these
examples: •
Scientific measuring instruments must provide unbiased readings with a very small error of measurement. An aircraft altimeter that measures the correct
altitude on the average is fairly useless if the measurements are in error by as much as 1000 feet above or below the correct altitude. Machined parts in a manufacturing process must be produced with
minimum variability in order to reduce out-of-size and hence defective parts. Aptitude tests must be designed so that scores will exhibit a reasonable amount of variability. For example, an 800-point
test is not very discriminatory if all students score between 601 and 605.
• •
In previous chapters, you have used S(xi x苶)2 s2 n1 as an unbiased estimator of the population variance s 2. This means that, in repeated sampling, the average of all your sample estimates will
equal the target parameter, s 2. But how close or far from the target is your estimator s2 likely to be? To answer this question, we use the sampling distribution of s2, which describes its behavior
in repeated sampling. Consider the distribution of s2 based on repeated random sampling from a normal distribution with a specified mean and variance. We can show theoretically that the distribution
begins at s2 0 (since the variance cannot be negative) with a mean equal to s 2. Its shape is nonsymmetric and changes with each different sample size and each different value of s 2. Finding
critical values for the sampling distribution of s2 would be quite difficult and would require separate tables for each population variance. Fortunately, we can simplify the problem by standardizing,
as we did with the z distribution. Definition
The standardized statistic
(n 1)s2 x 2 s2 is called a chi-square variable and has a sampling distribution called the chi-square probability distribution, with n 1 degrees of freedom. The equation of the density function for
this statistic is quite complicated to look at, but it traces the curve shown in Figure 10.16. FI GU R E 1 0 . 1 6
A chi-square distribution
a 0
Certain critical values of the chi-square statistic, which are used for making inferences about the population variance, have been tabulated by statisticians and appear in Table 5 of Appendix I.
Since the shape of the distribution varies with the sample
size n or, more precisely, the degrees of freedom, n 1, associated with s2, Table 5, partially reproduced in Table 10.5, is constructed in exactly the same way as the t table, with the degrees of
freedom in the first and last columns. The symbol x 2a indicates that the tabulated x 2-value has an area a to its right (see Figure 10.16). TABLE 10.5
Testing one variance: df n 1
Format of the Chi-Square Table from Table 5 in Appendix I df
x 2.995
1 2 3 4 5 6 . . . 15 16 17 18 19 . . .
.0000393 .0100251 .0717212 .206990 .411740 .0675727 . . . 4.60094 5.14224 5.69724 6.26481 6.84398 . . .
x 2.950
x 2.900
x 2.100
x 2.050
.0039321 .102587 .351846 .710721 1.145476 1.63539 . . . 7.26094 7.96164 8.67176 9.39046 10.1170 . . .
.0157908 .210720 .584375 1.063623 1.610310 2.204130 . . . 8.54675 9.31223 10.0852 10.8649 11.6509 . . .
2.70554 4.60517 6.25139 7.77944 9.23635 10.6446 . . . 22.3072 23.5418 24.7690 25.9894 27.2036 . . .
3.84146 5.99147 7.81473 9.48773 11.0705 12.5916 . . . 24.9958 26.2962 27.5871 28.8693 30.1435 . . .
x 2.005
7.87944 10.5966 12.8381 14.8602 16.7496 18.5476 . . . 32.8013 34.2672 35.7185 37.1564 38.5822 . . .
1 2 3 4 5 6 . . . 15 16 17 18 19 . . .
You can see in Table 10.5 that, because the distribution is nonsymmetric and starts at 0, both upper and lower tail areas must be tabulated for the chi-square statistic. For example, the value x 2.95
is the value that has 95% of the area under the curve to its right and 5% of the area to its left. This value cuts off an area equal to .05 in the lower tail of the chi-square distribution. EXAMPLE
Check your ability to use Table 5 in Appendix I by verifying the following statements: 1. The probability that x 2, based on n 16 measurements (df 15), exceeds 24.9958 is .05. 2. For a sample of n 6
measurements, 95% of the area under the x 2 distribution lies to the right of 1.145476. These values are shaded in Table 10.5.
You can use the Chi-Square Probabilities applet to find the x 2-value described in Example 10.10. Since the applet provides x 2-values and their one-tailed probabilities for the degrees of freedom
that you select using the slider on the right side of the applet, you should choose df 5 and type .95 in the box marked “prob:” at the bottom of the applet. The applet will provide the value of x 2
that puts .95 in the right tail of the x 2 distribution and hence .05 in the left tail. The applet in Figure 10.17 shows x 2 1.14, which differs only slightly from the value in Example 10.10. We will
use this applet for the MyApplet Exercises at the end of the chapter.
F IGU R E 1 0 . 1 7
Chi-Square Probabilities applet
The statistical test of a null hypothesis concerning a population variance H0 : s 2 s 20 uses the test statistic (n 1)s2 x 2 s 20 Notice that when H0 is true, s2/s 20 should be near 1, so x 2 should
be close to (n 1), the degrees of freedom. If s 2 is really greater than the hypothesized value s 20, the test statistic will tend to be larger than (n 1) and will probably fall toward the upper tail
of the distribution. If s 2 s 20, the test statistic will tend to be smaller than (n 1) and will probably fall toward the lower tail of the chi-square distribution. As in other testing situations,
you may use either a one- or a two-tailed statistical test, depending on the alternative hypothesis. This test of hypothesis and the (1 a)100% confidence interval for s 2 are both based on the
chi-square distribution and are described next.
TEST OF HYPOTHESIS CONCERNING A POPULATION VARIANCE 1. Null hypothesis: H0 : s 2 s 20 2. Alternative hypothesis: One-Tailed Test
Two-Tailed Test
Ha : s 2 s 20 (or Ha : s 2 s 20)
Ha : s 2 s 20
(n 1)s2 3. Test statistic: x 2 s 20
4. Rejection region: Reject H0 when One-Tailed Test
Two-Tailed Test
x 2 x 2a (or x 2 x 2(1a) when the alternative hypothesis is Ha : s 2 s 20), where x 2a and x 2(1a) are, respectively, the upper- and lower-tail values of x 2 that place a in the tail areas
2 x 2 x a/2 or x 2 x 2(1a/2), 2 where x a/2 and x 2(1a/2) are, respectively, the upper- and lower-tail values of x 2 that place a/2 in the tail areas
or when p-value a The critical values of x 2 are based on (n 1) df. These tabulated values can be found using Table 5 of Appendix I or the Chi-Square Probabilities applet.
χ2(1 – α/2)
(1 a)100% CONFIDENCE INTERVAL FOR s 2 (n 1)s2 (n 1)s2 2 s 2 x a/2 x 2(1a/2) 2 where x a/2 and x 2(1a/2) are the upper and lower x 2-values, which locate one-half of a in each tail of the chi-square
distribution. Assumption: The sample is randomly selected from a normal population.
A cement manufacturer claims that concrete prepared from his product has a relatively stable compressive strength and that the strength measured in kilograms per square centimeter (kg/cm2) lies
within a range of 40 kg/cm2. A sample of n 10 measurements produced a mean and variance equal to, respectively, 2 苶x 312 and s 195 Do these data present sufficient evidence to reject the
manufacturer’s claim? In Section 2.5, you learned that the range of a set of measurements should be approximately four standard deviations. The manufacturer’s claim that the range of the strength
measurements is within 40 kg/cm2 must mean that the standard deviation of the measurements is roughly 10 kg/cm2 or less. To test his claim, the appropriate hypotheses are H0 : s 2 102 100 versus Ha :
s 2 100 Solution
If the sample variance is much larger than the hypothesized value of 100, then the test statistic (n 1)s2 1755 17.55 x 2 s 20 100 will be unusually large, favoring rejection of H0 and acceptance of
Ha. There are two ways to use the test statistic to make a decision for this test.
FI GU R E 1 0 . 1 8
Rejection region and p-value (shaded) for Example 10.11
The critical value approach: The appropriate test requires a one-tailed rejection region in the right tail of the x 2 distribution. The critical value for a .05 and (n 1) 9 df is x 2.05 16.9190 from
Table 5 in Appendix I. Figure 10.18 shows the rejection region; you can reject H0 if the test statistic exceeds 16.9190. Since the observed value of the test statistic is x 2 17.55, you can conclude
that the null hypothesis is false and that the range of concrete strength measurements exceeds the manufacturer’s claim.
.050 .025 0
16.9190 19.0228
Reject H0
The p-value approach: The p-value for a statistical test is the smallest value of a for which H0 can be rejected. It is calculated, as in other one-tailed tests, as the area in the tail of the x 2
distribution to the right of the observed value, x 2 17.55. Although computer packages allow you to calculate this area exactly, Table 5 in Appendix I allows you only to bound the p-value. Since the
value 17.55 lies between x 2.050 16.9190 and x 2.025 19.0228, the p-value lies between .025 and .05. Most researchers would reject H0 and report these results as significant at the 5% level, or P .05.
Again, you can reject H0 and conclude that the range of measurements exceeds the manufacturer’s claim.
An experimenter is convinced that her measuring instrument had a variability measured by standard deviation s 2. During an experiment, she recorded the measurements 4.1, 5.2, and 10.2. Do these data
confirm or disprove her assertion? Test the appropriate hypothesis, and construct a 90% confidence interval to estimate the true value of the population variance. Since there is no preset level of
significance, you should choose to use the p-value approach in testing these hypotheses:
H0 : s 2 4
versus Ha : s 2 4
Use your scientific calculator to verify that the sample variance is s2 10.57 and the test statistic is (n 1)s2 2(10.57) 5.285 x 2 s 20 4 Since this is a two-tailed test, the rejection region is
divided into two parts, half in each tail of the x 2 distribution. If you approximate the area to the right of the observed test statistic, x 2 5.285, you will have only half of the p-value for the
test. Since an equally unlikely value of x 2 might occur in the lower tail of the distribution,
with equal probability, you must double the upper area to obtain the p-value. With 2 df, the observed value, 5.29, falls between x 2.10 and x 2.05 so that 1 .05 ( p-value) .10 2
.10 p-value .20
Since the p-value is greater than .10, the results are not statistically significant. There is insufficient evidence to reject the null hypothesis H0 : s 2 4. The corresponding 90% confidence interval
is (n 1)s2 (n 1) s2 2 s 2 x a/2 x 2(1a/2) The values of x 2(1a/2) and x 2a/2 are x 2(1 a/2) x 2.95 .102587 x 2a/2 x 2.05 5.99147 Substituting these values into the formula for the interval estimate,
you get 2(10.57) 2(10.57) s 2 or 5.99147 .102587
3.53 s 2 206.07
Thus, you can estimate the population variance to fall into the interval 3.53 to 206.07. This very wide confidence interval indicates how little information on the population variance is obtained from
a sample of only three measurements. Consequently, it is not surprising that there is insufficient evidence to reject the null hypothesis s 2 4. To obtain more information on s 2, the experimenter
needs to increase the sample size. The MINITAB command Stat 씮 Basic Statistics 씮 1 Variance allows you to enter raw data or a summary statistic to perform the F-test for a single variance, and
calculate a confidence interval. The MINITAB printout corresponding to Example 10.12 is shown in Figure 10.19. F I GU R E 1 0 . 1 9
MINITAB output for Example 10.12
● Chi-Square Method (Normal Distribution) Variable Measurements
N 3
Variance 10.6
90% CI (3.5, 206.1)
Chi-Square 5.28
P 0.142
BASIC TECHNIQUES 10.48 A random sample of n 25 observations from a normal population produced a sample variance equal to 21.4. Do these data provide sufficient evidence to indicate that s 2 15? Test
using a .05. 10.49 A random sample of n 15 observations was selected from a normal population. The sample mean and variance were x苶 3.91 and s 2 .3214. Find a 90% confidence interval for the
population variance s 2. 10.50 A random sample of size n 7 from a normal
population produced these measurements: 1.4, 3.6, 1.7, 2.0, 3.3, 2.8, 2.9.
a. Calculate the sample variance, s 2. b. Construct a 95% confidence interval for the population variance, s 2. c. Test H0 : s 2 .8 versus Ha : s 2 .8 using a .05. State your conclusions. d. What is
the approximate p-value for the test in part c? APPLICATIONS 10.51 Instrument Precision A precision instru-
ment is guaranteed to read accurately to within 2 units. A sample of four instrument readings on the same object yielded the measurements 353, 351, 351, and
424 ❍
355. Test the null hypothesis that s .7 against the alternative s .7. Use a .05.
estimate the variance of the manufacturer’s potency measurements.
10.52 Instrument Precision, continued Find a
10.55 Hard Hats A manufacturer of hard safety hats for construction workers is concerned about the mean and the variation of the forces helmets transmit to wearers when subjected to a standard
external force. The manufacturer desires the mean force transmitted by helmets to be 800 pounds (or less), well under the legal 1000-pound limit, and s to be less than 40. A random sample of n 40
helmets was tested, and the sample mean and variance were found to be equal to 825 pounds and 2350 pounds 2, respectively. a. If m 800 and s 40, is it likely that any helmet, subjected to the
standard external force, will transmit a force to a wearer in excess of 1000 pounds? Explain. b. Do the data provide sufficient evidence to indicate that when the helmets are subjected to the
standard external force, the mean force transmitted by the helmets exceeds 800 pounds?
90% confidence interval for the population variance in Exercise 10.51. 10.53 Drug Potency To properly treat patients, drugs prescribed by physicians must have a potency that is accurately defined.
Consequently, not only must the distribution of potency values for shipments of a drug have a mean value as specified on the drug’s container, but also the variation in potency must be small.
Otherwise, pharmacists would be distributing drug prescriptions that could be harmfully potent or have a low potency and be ineffective. A drug manufacturer claims that his drug is marketed with a
potency of 5 .1 milligram per cubic centimeter (mg/cc). A random sample of four containers gave potency readings equal to 4.94, 5.09, 5.03, and 4.90 mg/cc. a. Do the data present sufficient evidence
to indicate that the mean potency differs from 5 mg/cc? b. Do the data present sufficient evidence to indicate that the variation in potency differs from the error limits specified by the
manufacturer? [HINT: It is sometimes difficult to determine exactly what is meant by limits on potency as specified by a manufacturer. Since he implies that the potency values will fall into the
interval 5 .1 mg/cc with very high probability—the implication is always—let us assume that the range .2; or (4.9 to 5.1), represents 6s, as suggested by the Empirical Rule. Note that letting the
range equal 6s rather than 4s places a stringent interpretation on the manufacturer’s claim. We want the potency to fall into the interval 5 .1 with very high probability.] 10.54 Drug Potency,
continued Refer to Exercise
10.53. Testing of 60 additional randomly selected containers of the drug gave a sample mean and variance equal to 5.04 and .0063 (for the total of n 64 containers). Using a 95% confidence interval,
10.56 Hard Hats, continued Refer to Exercise
10.55. Do the data provide sufficient evidence to indicate that s exceeds 40? 10.57 Light Bulbs A manufacturer of
industrial light bulbs likes its bulbs to have a mean life that is acceptable to its customers and a variation in life that is relatively small. If some bulbs fail too early in their life, customers
become annoyed and shift to competitive products. Large variations above the mean reduce replacement sales, and variation in general disrupts customers’ replacement schedules. A sample of 20 bulbs
tested produced the following lengths of life (in hours):
The manufacturer wishes to control the variability in length of life so that s is less than 150 hours. Do the data provide sufficient evidence to indicate that the manufacturer is achieving this
goal? Test using a .01.
COMPARING TWO POPULATION VARIANCES Just as a single population variance is sometimes important to an experimenter, you might also need to compare two population variances. You might need to compare
the precision of one measuring device with that of another, the stability of one manufacturing process with that of another, or even the variability in the grading procedure of one college professor
with that of another.
10.7 COMPARING TWO POPULATION VARIANCES
One way to compare two population variances, s 21 and s 22, is to use the ratio of the sample variances, s12/s 22. If s12/s 22 is nearly equal to 1, you will find little evidence to indicate that s 12
and s 22 are unequal. On the other hand, a very large or very small value for s12/s22 provides evidence of a difference in the population variances. How large or small must s 21/s 22 be for
sufficient evidence to exist to reject the following null hypothesis? H0 : s 12 s 22 The answer to this question may be found by studying the distribution of s12/s 22 in repeated sampling. When
independent random samples are drawn from two normal populations with equal variances—that is, s 12 s 22—then s 21/s 22 has a probability distribution in repeated sampling that is known to
statisticians as an F distribution, shown in Figure 10.20. FIGU R E 1 0 . 2 0
An F distribution with df1 10 and df2 10
● f(F)
10 F
2 ASSUMPTIONS FOR s 2 1/s 2 TO HAVE AN F DISTRIBUTION
• •
Testing two variances: df1 n1 1 and df2 n2 1
Random and independent samples are drawn from each of two normal populations. The variability of the measurements in the two populations is the same and can be measured by a common variance, s 2;
that is, s 21 s 22 s 2.
It is not important for you to know the complex equation of the density function for F. For your purposes, you need only to use the well-tabulated critical values of F given in Table 6 in Appendix I.
Critical values of F and p-values for significance tests can also be found using the F Probabilities applet shown in Figure 10.21. Like the x 2 distribution, the shape of the F distribution is
nonsymmetric and depends on the number of degrees of freedom associated with s 21 and s 22, represented as df1 (n1 1) and df2 (n2 1), respectively. This complicates the tabulation of critical values
of the F distribution because a table is needed for each different combination of df1, df2, and a. In Table 6 in Appendix I, critical values of F for right-tailed areas corresponding to a .100, .050,
.025, .010, and .005 are tabulated for various combinations of df1 numerator degrees of freedom and df2 denominator degrees of freedom. A portion of Table 6 is reproduced in Table 10.6. The numerator
degrees of freedom df1 are listed across the top margin, and the denominator degrees of freedom df2 are listed along the side margin. The values of a are listed in the second column. For a fixed
combination of df1 and df2, the appropriate critical values of F are found in the line indexed by the value of a required.
FI GU R E 1 0 . 2 1
F Probabilities applet
Check your ability to use Table 6 in Appendix I by verifying the following statements: 1. The value of F with area .05 to its right for df1 6 and df2 9 is 3.37. 2. The value of F with area .05 to its
right for df1 5 and df2 10 is 3.33. 3. The value of F with area .01 to its right for df1 6 and df2 9 is 5.80. These values are shaded in Table 10.6.
TABLE 10.6
Format of the F Table from Table 6 in Appendix I df1 df2
.100 .050 .025 .010 .005 .100 .050 .025 .010 .005
39.86 161.4 647.8 4052 16211 8.53 18.51 38.51 98.50 198.5
49.50 199.5 799.5 4999.5 20000 9.00 19.00 39.00 99.00 199.0
53.59 215.7 864.2 5403 21615 9.16 19.16 39.17 99.17 199.2
55.83 224.6 899.6 5625 22500 9.24 19.25 39.25 99.25 199.2
57.24 230.2 921.8 5764 23056 9.29 19.30 39.30 99.30 199.3
58.20 234.0 937.1 5859 23437 9.33 19.33 39.33 99.33 199.3
.100 .050 .025 .010 .005 . . . .100 .050 .025 .010 .005
5.54 10.13 17.44 34.12 55.55
5.46 9.55 16.04 30.82 49.80
5.34 9.12 15.10 28.71 46.19
5.31 9.01 14.88 28.24 45.39
3.36 5.12 7.21 10.56 13.61
3.01 4.26 5.71 8.02 10.11
5.39 9.28 15.44 29.46 47.47 . . . 2.81 3.86 5.08 6.99 8.72
2.69 3.63 4.72 6.42 7.96
2.61 3.48 4.48 6.06 7.47
5.28 8.94 14.73 27.91 44.84 . . . 2.55 3.37 4.32 5.80 7.13
.100 .050 .025 .010 .005
3.29 4.96 6.94 10.04 12.83
2.92 4.10 5.46 7.56 9.43
2.73 3.71 4.83 6.55 8.08
2.61 3.48 4.47 5.99 7.34
2.52 3.33 4.24 5.64 6.87
2.46 3.22 4.07 5.39 6.54
. . . 9
10.7 COMPARING TWO POPULATION VARIANCES
The statistical test of the null hypothesis H0 : s 12 s 22 uses the test statistic s2 F 21 s2 When the alternative hypothesis implies a one-tailed test—that is, Ha : s 12 s 22 you can find the
right-tailed critical value for rejecting H0 directly from Table 6 in Appendix I. However, when the alternative hypothesis requires a two-tailed test—that is, H0 : s 21 s 22 the rejection region is
divided between the upper and lower tails of the F distribution. These left-tailed critical values are not given in Table 6 for the following reason: You are free to decide which of the two
populations you want to call “Population 1.” If you always choose to call the population with the larger sample variance “Population 1,” then the observed value of your test statistic will always be
in the right tail of the F distribution. Even though half of the rejection region, the area a/2 to its left, will be in the lower tail of the distribution, you will never need to use it! Remember
these points, though, for a two-tailed test: • •
The area in the right tail of the rejection region is only a/2. The area to the right of the observed test statistic is only ( p-value)/2.
The formal procedures for a test of hypothesis and a (1 a)100% confidence interval for two population variances are shown next. TEST OF HYPOTHESIS CONCERNING THE EQUALITY OF TWO POPULATION VARIANCES
1. Null hypothesis: H0 : s 21 s 22 2. Alternative hypothesis: One-Tailed Test
Two-Tailed Test
Ha : s 21 s 22 (or Ha : s 21 s 22)
Ha : s 21 s 22
3. Test statistic: One-Tailed Test
s2 12 s2
Two-Tailed Test
s2 F 12 s2
where s12 is the larger sample variance 4. Rejection region: Reject H0 when One-Tailed Test
Two-Tailed Test
F Fa
F Fa/2
or when p-value a
TEST OF HYPOTHESIS CONCERNING THE EQUALITY OF TWO POPULATION VARIANCES (continued)
The critical values of Fa and Fa/2 are based on df1 (n1 1) and df2 (n2 1). These tabulated values, for a .100, .050, .025, .010, and .005, can be found using Table 6 in Appendix I, or the F
Probabilities applet.
α/2 α
Assumptions: The samples are randomly and independently selected from normally distributed populations.
2 CONFIDENCE INTERVAL FOR s 2 1/s 2
s2 1 s2 s2 21 12 12 Fdf2,df1 s 2 Fdf1,df2 s2 s2
where df1 (n1 1) and df2 (n2 1). Fdf1,df2 is the tabulated critical value of F corresponding to df1 and df2 degrees of freedom in the numerator and denominator of F, respectively, with area a/2 to
its right. Assumptions: The samples are randomly and independently selected from normally distributed populations. EXAMPLE
An experimenter is concerned that the variability of responses using two different experimental procedures may not be the same. Before conducting his research, he conducts a prestudy with random
samples of 10 and 8 responses and gets s12 7.14 and s22 3.21, respectively. Do the sample variances present sufficient evidence to indicate that the population variances are unequal? Assume that the
populations have probability distributions that are reasonably mound-shaped and hence satisfy, for all practical purposes, the assumption that the populations are normal. You wish to test these
H0 : s 12 s 22
versus Ha : s 12 s 22
Using Table 6 in Appendix I for a/2 .025, you can reject H0 when F 4.82 with a .05. The calculated value of the test statistic is 7.14 s2 F 12 2.22 3.21 s2 Because the test statistic does not fall
into the rejection region, you cannot reject H0 : s 21 s 22. Thus, there is insufficient evidence to indicate a difference in the population variances.
10.7 COMPARING TWO POPULATION VARIANCES
Refer to Example 10.14 and find a 90% confidence interval for s 12/s 22. Solution
The 90% confidence interval for s 12/s 22 is
1 s2 s2 s2 12 12 12 Fdf2,df1 s 2 Fdf1,df2 s2 s2
where s 21 7.14
s 22 3.21
df1 (n1 1) 9
df2 (n2 1) 7
F9,7 3.68
F7,9 3.29
Substituting these values into the formula for the confidence interval, you get 7.14 1 s2 7.14 21 3.29 3.21 3.68 s2 3.21
s2 .60 21 7.32 s2
The calculated interval estimate .60 to 7.32 includes 1.0, the value hypothesized in H0. This indicates that it is quite possible that s 21 s 22 and therefore agrees with the test conclusions. Do not
reject H0 : s 21 s 22. The MINITAB command Stat 씮 Basic Statistics 씮 2 Variances allows you to enter either raw data or summary statistics to perform the F-test for the equality of variances and
calculates confidence intervals for the two individual standard deviations (which we have not discussed). The relevant printout, containing the F statistic and its p-value, is shaded in Figure 10.22.
FIGU R E 1 0 . 2 2
MINITAB output for Example 10.14
● Test for Equal Variances 95% Bonferroni confidence intervals for standard deviations Sample N Lower StDev Upper 1 10 1.74787 2.67208 5.38064 2 8 1.12088 1.79165 4.10374 F-Test (Normal Distribution)
Test statistic = 2.22, p-value = 0.304
The variability in the amount of impurities present in a batch of chemical used for a particular process depends on the length of time the process is in operation. A manufacturer using two production
lines, 1 and 2, has made a slight adjustment to line 2, hoping to reduce the variability as well as the average amount of impurities in the chemical. Samples of n1 25 and n2 25 measurements from the
two batches yield these means and variances: 苶x1 3.2 苶x2 3.0
s12 1.04 s 22 .51
Do the data present sufficient evidence to indicate that the process variability is less for line 2? The experimenter believes that the average levels of impurities are the same for the two
production lines but that her adjustment may have decreased the variability of the levels for line 2, as illustrated in Figure 10.23. This adjustment would be good for the company because it would
decrease the probability of producing shipments of the chemical with unacceptably high levels of impurities.
FI GU R E 1 0 . 2 3
Distributions of impurity measurements for two production lines
Distribution for production line 2
Distribution for production line 1
Level of impurities
To test for a decrease in variability, the test of hypothesis is H0 : s 12 s 22
versus Ha : s 12 s 22
and the observed value of the test statistic is s2 1.04 F 21 2.04 s2 .51 Using the p-value approach, you can bound the one-tailed p-value using Table 6 in Appendix I with df1 df2 (25 1) 24. The
observed value of F falls between F.050 1.98 and F.025 2.27, so that .025 p-value .05. The results are judged significant at the 5% level, and H0 is rejected. You can conclude that the variability of
line 2 is less than that of line 1. The F-test for the difference in two population variances completes the battery of tests you have learned in this chapter for making inferences about population
parameters under these conditions: • •
The sample sizes are small. The sample or samples are drawn from normal populations.
You will find that the F and x 2 distributions, as well as the Student’s t distribution, are very important in other applications in the chapters that follow. They will be used for different
estimators designed to answer different types of inferential questions, but the basic techniques for making inferences remain the same. In the next section, we review the assumptions required for all
of these inference tools, and discuss options that are available when the assumptions do not seem to be reasonably correct.
BASIC TECHNIQUES 10.58 Independent random samples from two normal populations produced the variances listed here: Sample Size
Sample Variance
55.7 31.4
a. Do the data provide sufficient evidence to indicate that s 21 differs from s 22? Test using a .05. b. Find the approximate p-value for the test and interpret its value. 10.59 Refer to Exercise
10.58 and find a 95% confidence interval for s 21/s 22.
10.7 COMPARING TWO POPULATION VARIANCES
10.60 Independent random samples from two normal populations produced the given variances:
b. Find the approximate p-value for the test and interpret its value.
Sample Size
10.63 Construct a 90% confidence interval for the variance ratio in Exercise 10.62.
Sample Variance 18.3 7.9
a. Do the data provide sufficient evidence to indicate that s 12 s 22? Test using a .05. b. Find the approximate p-value for the test and interpret its value. APPLICATIONS 10.61 SAT Scores The SAT
subject tests in chem-
istry and physics11 for two groups of 15 students each electing to take these tests are given below. Chemistry
x苶 629 s 110 n 15
x苶 643 s 107 n 15
To use the two-sample t-test with a pooled estimate of 2, you must assume that the two population variances are equal. Test this assumption using the F-test for equality of variances. What is the
approximate p-value for the test?
10.64 Tuna III In Exercise 10.25 and dataset EX1025, you conducted a test to detect a difference in the average prices of light tuna in water versus light tuna in oil.
a. What assumption had to be made concerning the population variances so that the test would be valid? b. Do the data present sufficient evidence to indicate that the variances violate the assumption
in part a? Test using a .05. 10.65 Runners and Cyclists III Refer to Exer-
cise 10.26. Susan Beckham and colleagues conducted an experiment involving 10 healthy runners and 10 healthy cyclists to determine if there are significant differences in pressure measurements within
the anterior muscle compartment for runners and cyclists.7 The data—compartment pressure, in millimeters of mercury (Hg)—are reproduced here: Runners Condition
10.62 Product Quality The stability of measurements on a manufactured product is important in maintaining product quality. In fact, it is sometimes better to have small variation in the measured
value of some important characteristic of a product and have the process mean be slightly off target than to suffer wide variation with a mean value that perfectly fits requirements. The latter
situation may produce a higher percentage of defective products than the former. A manufacturer of light bulbs suspected that one of her production lines was producing bulbs with a wide variation in
length of life. To test this theory, she compared the lengths of life for n 50 bulbs randomly sampled from the suspect line and n 50 from a line that seemed to be “in control.” The sample means and
variances for the two samples were as follows: “Suspect Line”
Line “in Control”
x苶1 1520 s 12 92,000
x苶2 1476 s 22 37,000
a. Do the data provide sufficient evidence to indicate that bulbs produced by the “suspect line” have a larger variance in length of life than those produced by the line that is assumed to be in
control? Test using a .05.
Rest 80% maximal O2 consumption Maximal O2 consumption
Standard Deviation
Standard Deviation
12.2 19.1
3.49 16.9
11.5 12.2
4.95 4.47
For each of the three variables measured in this experiment, test to see whether there is a significant difference in the variances for runners versus cyclists. Find the approximate p-values for each
of these tests. Will a two-sample t-test with a pooled estimate of s 2 be appropriate for all three of these variables? Explain. 10.66 Impurities A pharmaceutical manufacturer purchases a particular
material from two different suppliers. The mean level of impurities in the raw material is approximately the same for both suppliers, but the manufacturer is concerned about the variability of the
impurities from shipment to shipment. If the level of impurities tends to vary excessively for one source of supply, it could affect the quality of the pharmaceutical product. To compare the
variation in percentage impurities for the two suppliers, the manufacturer selects 10 shipments from each of the two suppliers and measures the percentage of impurities in the raw material for each
shipment. The sample means and variances are shown in the table.
432 ❍
Supplier A
Supplier B
x苶1 1.89 s 12 .273 n1 10
x苶2 1.85 s 22 .094 n2 10
a. Do the data provide sufficient evidence to indicate a difference in the variability of the shipment
impurity levels for the two suppliers? Test using a .01. Based on the results of your test, what recommendation would you make to the pharmaceutical manufacturer? b. Find a 99% confidence interval for
s 22 and interpret your results.
How Do I Decide Which Test to Use? Are you interested in testing means? If the design involves: a. One random sample, use the one-sample t statistic. b. Two independent random samples, are the
population variances equal? i. If equal, use the two-sample t statistic with pooled s 2. ii. If unequal, use the unpooled t with estimated df. c. Two paired samples with random pairs, use a
one-sample t for analyzing differences. Are you interested in testing variances? If the design involves: a. One random sample, use the x 2 test for a single variance. b. Two independent random
samples, use the F-test to compare two variances.
REVISITING THE SMALL-SAMPLE ASSUMPTIONS All of the tests and estimation procedures discussed in this chapter require that the data satisfy certain conditions in order that the error probabilities
(for the tests) and the confidence coefficients (for the confidence intervals) be equal to the values you have specified. For example, if you construct what you believe to be a 95% confidence interval,
you want to be certain that, in repeated sampling, 95% (and not 85% or 75% or less) of all such intervals will contain the parameter of interest. These conditions are summarized in these assumptions:
ASSUMPTIONS 1. For all tests and confidence intervals described in this chapter, it is assumed that samples are randomly selected from normally distributed populations. 2. When two samples are
selected, it is assumed that they are selected in an independent manner except in the case of the paired-difference experiment. 3. For tests or confidence intervals concerning the difference between
two population means m1 and m2 based on independent random samples, it is assumed that s 12 s 22.
In reality, you will never know everything about the sampled population. If you did, there would be no need for sampling or statistics. It is also highly unlikely that a population will exactly
satisfy the assumptions given in the box. Fortunately, the procedures presented in this chapter give good inferences even when the data exhibit moderate departures from the necessary conditions. A
statistical procedure that is not sensitive to departures from the conditions on which it is based is said to be robust. The Student’s t-tests are quite robust for moderate departures from normality.
Also, as long as the sample sizes are nearly equal, there is not much difference between the pooled and unpooled t statistics for the difference in two population means. However, if the sample sizes
are not clearly equal, and if the population variances are unequal, the pooled t statistic provides inaccurate conclusions. If you are concerned that your data do not satisfy the assumptions, other
options are available: •
If you can select relatively large samples, you can use one of the largesample procedures of Chapters 8 and 9, which do not rely on the normality or equal variance assumptions. You may be able to use
a nonparametric test to answer your inferential questions. These tests have been developed specifically so that few or no distributional assumptions are required for their use. Tests that can be used
to compare the locations or variability of two populations are presented in Chapter 15.
CHAPTER REVIEW Key Concepts and Formulas I.
Experimental Designs for Small Samples
III. Small-Sample Test Statistics
1. Single random sample: The sampled population must be normal. 2. Two independent random samples: Both sampled populations must be normal. a. Populations have a common variance s 2. s 21
b. Populations have different variances: and s 22. 3. Paired-difference or matched pairs design: The samples are not independent. II. Statistical Tests of Significance
1. Based on the t, F, and x distributions 2
2. Use the same procedure as in Chapter 9 3. Rejection region—critical values and significance levels: based on the t, F, or x 2 distributions with the appropriate degrees of freedom 4. Tests of
population parameters: a single mean, the difference between two means, a single variance, and the ratio of two variances
To test one of the population parameters when the sample sizes are small, use the following test statistics: Parameter m
Test Statistic x苶 m0 t s/兹苶n
(x苶1 x苶2) (m1 m2) t m1 m2 1 1 (equal variances) s 2 n1 n2
m1 m2 (unequal variances)
(x苶1 x苶2) (m1 m2) t ⬇ s2 s2 1 2 n1 n2
d苶 md m1 m2 t sd /兹n苶 (paired samples) s2
(n 1)s 2 x 2 s 02
s 12/s 22
F s 12 /s 22
Degrees of Freedom n1 n1 n2 2
Satterthwaite’s approximation
n1 n1 n1 1 and n 2 1
Small-Sample Testing and Estimation The tests and confidence intervals for population means based on the Student’s t distribution are found in a MINITAB submenu by choosing Stat 씮 Basic Statistics.
You will see choices for 1-Sample t, 2-Sample t, and Paired t, which will generate Dialog boxes for the procedures in Sections 10.3, 10.4, and 10.5, respectively. You must choose the columns in which
the data are stored and the null and alternative hypotheses to be tested (or the confidence coefficient for a confidence interval). In the case of the two-sample t-test, you must indicate whether the
population variances are assumed equal or unequal, so that MINITAB can perform the correct test. We will display some of the Dialog boxes and Session window outputs for the examples in this chapter,
beginning with the one-sample t-test of Example 10.3. First, enter the six recorded weights—.46, .61, .52, .48, .57, .54—in column C1 and name them “Weights.” Use Stat 씮 Basic Statistics 씮 1-Sample
t to generate the Dialog box in Figure 10.24. To test H0 : m .5 versus Ha : m .5, use the list on the left to select “Weights” for the box marked “Samples in Columns.” Check the box marked “Perform
hypothesis test.” Then, place your cursor in the box marked “Hypothesized mean:” and enter .5 as the test value. Finally, use Options and the drop-down menu marked “Alternative” to select “greater
than.” Click OK twice to obtain the output in Figure 10.25. Notice that MINITAB produces a one- or a two-sided confidence interval for the single population mean, consistent with the alternative
hypothesis you have chosen. You can change the confidence coefficient from the default of .95 in the Options box. Also, the Graphs option will produce a histogram, a box plot, or an individual value
plot of the data in column C1. Data for a two-sample t-test with independent samples can be entered into the worksheet in one of two ways: FI GU R E 1 0 . 2 4
FIGU R E 1 0 . 2 5
Enter measurements from both samples into a single column and enter numbers (1 or 2) in a second column to identify the sample from which the measurement comes. • Enter the samples in two separate
columns. If you do not have the raw data, but rather have summary statistics—the sample mean, standard deviation, and sample size—MINITAB 15 will allow you to use these values by selecting the radio
button marked “Summarized data” and entering the appropriate values in the boxes. Use the second method and enter the data from Example 10.5 into columns C2 and C3. Then use Stat 씮 Basic Statistics
씮 2-Sample t to generate the Dialog box in Figure 10.26. Check “Samples in different columns,” selecting C2 and C3 from the box on the left. Check the “Assume equal variances” box and select the
proper alternative hypothesis in the Options box. (Otherwise, MINITAB will perform Satterthwaite’s approximation for unequal variances.) The two-sample output when you click OK twice automatically
contains a 95% one- or two-sided confidence interval as well as the test statistic and p-value (you can change the confidence coefficient if you like). The output for Example 10.5 is shown in Figure
10.13. For a paired-difference test, the two samples are entered into separate columns, which we did with the tire wear data in Table 10.3. Use Stat 씮 Basic Statistics 씮 Paired t FIGU R E 1 0 . 2 6
436 ❍
to generate the Dialog box in Figure 10.27. If you have only summary statistics—the sample mean and standard deviation of the differences and sample size—MINITAB will allow you to use these values by
selecting the radio button marked “Summarized data” and entering the appropriate values in the boxes. Select C4 and C5 from the box on the left, and use Options to pick the proper alternative
hypothesis. You may change the confidence coefficient or the test value (the default value is zero). When you click OK twice, you will obtain the output shown in Figure 10.15. The MINITAB command Stat
씮 Basic Statistics 씮 2 Variances allows you to enter either raw data or summary statistics to perform the F-test for the equality of variances, as shown in Figure 10.28. The MINITAB command Stat 씮
Basic Statistics 씮 1 Variance will allow you to perform the x 2 test and construct a confidence interval for a single population variance, s 2. FI GU R E 1 0 . 2 7
FI GU R E 1 0 . 2 8
Supplementary Exercises 10.67 What assumptions are made when Student’s t-test is used to test a hypothesis concerning a population mean?
of the three titrations are as follows: 82.10, 75.75, and 75.44 milliliters. Use a 99% confidence interval to estimate the mean number of milliliters required to neutralize 1 gram of the acid. 10.73
Sodium Chloride Measurements of water
10.68 What assumptions are made about the popula-
tions from which random samples are obtained when the t distribution is used in making small-sample inferences concerning the difference in population means? 10.69 Why use paired observations to
estimate the difference between two population means rather than estimation based on independent random samples selected from the two populations? Is a paired experiment always preferable? Explain.
10.70 Impurities II A manufacturer can tolerate a
small amount (.05 milligrams per liter (mg/l)) of impurities in a raw material needed for manufacturing its product. Because the laboratory test for the impurities is subject to experimental error,
the manufacturer tests each batch 10 times. Assume that the mean value of the experimental error is 0 and hence that the mean value of the ten test readings is an unbiased estimate of the true amount
of the impurities in the batch. For a particular batch of the raw material, the mean of the ten test readings is .058 mg/l, with a standard deviation of .012 mg/l. Do the data provide sufficient
evidence to indicate that the amount of impurities in the batch exceeds .05 mg/l? Find the p-value for the test and interpret its value. 10.71 Red Pine The main stem growth measured for a sample of
seventeen 4-year-old red pine trees produced a mean and standard deviation equal to 11.3 and 3.4 inches, respectively. Find a 90% confidence interval for the mean growth of a population of 4-year-old
red pine trees subjected to similar environmental conditions. 10.72 Sodium Hydroxide The object of a general
chemistry experiment is to determine the amount (in milliliters) of sodium hydroxide (NaOH) solution needed to neutralize 1 gram of a specified acid. This will be an exact amount, but when the
experiment is run in the laboratory, variation will occur as the result of experimental error. Three titrations are made using phenolphthalein as an indicator of the neutrality of the solution (pH
equals 7 for a neutral solution). The three volumes of NaOH required to attain a pH of 7 in each
intake, obtained from a sample of 17 rats that had been injected with a sodium chloride solution, produced a mean and standard deviation of 31.0 and 6.2 cubic centimeters (cm3), respectively. Given
that the average water intake for noninjected rats observed over a comparable period of time is 22.0 cm3, do the data indicate that injected rats drink more water than noninjected rats? Test at the
5% level of significance. Find a 90% confidence interval for the mean water intake for injected rats. 10.74 Sea Urchins An experimenter was interested in determining the mean thickness of the cortex of
the sea urchin egg. The thickness was measured for n 10 sea urchin eggs. These measurements were obtained: 4.5 5.2
6.1 2.6
3.2 3.7
3.9 4.6
4.7 4.1
Estimate the mean thickness of the cortex using a 95% confidence interval. 10.75 Fabricating Systems A production plant has two extremely complex fabricating systems; one system is twice as old as the
other. Both systems are checked, lubricated, and maintained once every 2 weeks. The number of finished products fabricated daily by each of the systems is recorded for 30 working days. The results are
given in the table. Do these data present sufficient evidence to conclude that the variability in daily production warrants increased maintenance of the older fabricating system? Use the p-value
approach. New System
Old System
x苶1 246 s1 15.6
x苶2 240 s2 28.2
10.76 Fossils The data in the table are the diameters and heights of ten fossil specimens of a species of small shellfish, Rotularia (Annelida) fallax, that were unearthed in a mapping expedition near
the Antarctic Peninsula.12 The table gives an identification symbol for the fossil specimen, the fossil’s diameter and height in millimeters, and the ratio of diameter to height.
10.80 Drug Absorption An experiment was con-
OSU 36651 OSU 36652 OSU 36653 OSU 36654 OSU 36655 OSU 36656 OSU 36657 OSU 36658 OSU 36659 OSU 36660
Diameter 185 194 173 200 179 213 134 191 177 199
2.37 2.98 2.25 2.63 2.49 2.80 1.79 2.48 2.57 3.06
ducted to compare the mean lengths of time required for the bodily absorption of two drugs A and B. Ten people were randomly selected and assigned to receive one of the drugs. The length of time (in
minutes) for the drug to reach a specified level in the blood was recorded, and the data summary is given in the table:
x苶: s:
184.5 21.5
2.54 .37
a. Find a 95% confidence interval for the mean diameter of the species. b. Find a 95% confidence interval for the mean height of the species. c. Find a 95% confidence interval for the mean ratio of
diameter to height. d. Compare the three intervals constructed in parts a, b, and c. Is the average of the ratios the same as the ratio of the average diameter to average height? 10.77 Fossils,
continued Refer to Exercise 10.76
and data set EX1076. Suppose you want to estimate the mean diameter of the fossil specimens correct to within 5 millimeters with probability equal to .95. How many fossils do you have to include in
your sample? 10.78 Alcohol and Reaction Times To test the effect of alcohol in increasing the reacEX1078 tion time to respond to a given stimulus, the reaction times of seven people were measured.
After consuming 3 ounces of 40% alcohol, the reaction time for each of the seven people was measured again. Do the following data indicate that the mean reaction time after consuming alcohol was
greater than the mean reaction time before consuming alcohol? Use a .05. Person
Before After
10.79 Cheese, Please Here are the prices per ounce of n 13 different brands of individually wrapped cheese slices:
29.0 28.7 21.6
24.1 28.0 25.9
23.7 23.8 27.4
19.6 18.9
27.5 23.9
Construct a 95% confidence interval estimate of the underlying average price per ounce of individually wrapped cheese slices.
Drug A
Drug B
x苶1 27.2 s 12 16.36
x苶2 33.5 s 22 18.92
a. Do the data provide sufficient evidence to indicate a difference in mean times to absorption for the two drugs? Test using a .05. b. Find the approximate p-value for the test. Does this value
confirm your conclusions? c. Find a 95% confidence interval for the difference in mean times to absorption. Does the interval confirm your conclusions in part a? 10.81 Drug Absorption, continued Refer
to Exer-
cise 10.80. Suppose you wish to estimate the difference in mean times to absorption correct to within 1 minute with probability approximately equal to .95. a. Approximately how large a sample is
required for each drug (assume that the sample sizes are equal)? b. If conducting the experiment using the sample sizes of part a will require a large amount of time and money, can anything be done
to reduce the sample sizes and still achieve the 1-minute margin of error for estimation? 10.82 Ring-Necked Pheasants The weights in grams of 10 males and 10 female juvenile ring-necked pheasants are
given below.
Males 1384 1286 1503 1627 1450
Females 1073 1053 1038 1018 1146
a. Use a statistical test to determine if the population variance of the weights of the male birds differs from that of the females. b. Test whether the average weight of juvenile male ring-necked
pheasants exceeds that of the females by more than 300 grams. (HINT: The procedure that you use should take into account the results of the analysis in part a.)
Full Sun
10.83 Bees Insects hovering in flight expend
enormous amounts of energy for their size and weight. The data shown here were taken from a much larger body of data collected by T.M. Casey and colleagues.13 They show the wing stroke frequencies
(in hertz) for two different species of bees, n1 4 Euglossa mandibularis Friese and n2 6 Euglossa imperialis Cockerell.
E. mandibularis Friese
E. imperialis Cockerell
a. Based on the observed ranges, do you think that a difference exists between the two population variances? b. Use an appropriate test to determine whether a difference exists. c. Explain why a
Student’s t-test with a pooled estimator s 2 is unsuitable for comparing the mean wing stroke frequencies for the two species of bees. 10.84 Calcium The calcium (Ca) content of
a powdered mineral substance was analyzed 10 times with the following percent compositions recorded:
.0271 .0271
.0282 .0281
.0279 .0269
.0281 .0275
.0268 .0276
a. Find a 99% confidence interval for the true calcium content of this substance. b. What does the phrase “99% confident” mean? c. What assumptions must you make about the sampling procedure so that
this confidence interval will be valid? What does this mean to the chemist who is performing the analysis? 10.85 Sun or Shade? Karl Niklas and T.G. Owens examined the differences in a particular
plant, Plantago Major L., when grown in full sunlight versus shade conditions.14 In this study, shaded plants received direct sunlight for less than 2 hours each day, whereas full-sun plants were
never shaded. A partial summary of the data based on n1 16 full-sun plants and n2 15 shade plants is shown here:
Leaf Area (cm ) Overlap Area (cm2) Leaf Number Thickness (mm) Length (cm) Width (cm)
128.00 46.80 9.75 .90 8.70 5.24
43.00 2.21 2.27 .03 1.64 .98
78.70 8.10 6.93 .50 8.91 3.41
41.70 1.26 1.49 .02 1.23 .61
a. What assumptions are required in order to use the small-sample procedures given in this chapter to compare full-sun versus shade plants? From the summary presented, do you think that any of these
assumptions have been violated? b. Do the data present sufficient evidence to indicate a difference in mean leaf area for full-sun versus shade plants? c. Do the data present sufficient evidence to
indicate a difference in mean overlap area for full-sun versus shade plants? 10.86 Orange Juice A comparison of the precisions of two machines developed for extracting juice from oranges is to be
made using the following data: Machine A
Machine B
s 3.1 ounces n 25 2
s 2 1.4 ounces2 n 25
a. Is there sufficient evidence to indicate that there is a difference in the precision of the two machines at the 5% level of significance? b. Find a 95% confidence interval for the ratio of the two
population variances. Does this interval confirm your conclusion from part a? Explain. 10.87 At Home or at School? Four sets of identical twins (pairs A, B, C, and D) were selected at random from a
computer database of identical twins. One child was selected at random from each pair to form an “experimental group.” These four children were sent to school. The other four children were kept at
home as a control group. At the end of the school year, the following IQ scores were obtained: Pair
Experimental Group
Control Group
A B C D
Does this evidence justify the conclusion that lack of school experience has a depressing effect on IQ scores? Use the p-value approach.
440 ❍
10.88 Dieting Eight obese persons were
placed on a diet for 1 month, and their weights, at the beginning and at the end of the month, were recorded:
Weights Subjects
10.89 Repair Costs Car manufacturers try
to design the bumpers of their automobiles to prevent costly damage in parking-lot type accidents. To compare repair costs of front versus back bumpers for several brands of cars, the cars were
subject to a front and rear impacts at 5 mph, and the repair costs recorded.15
VW Jetta Daewoo Nubira Acura 3.4 RL Dodge Neon Nissan Sentra
$396 451 1123 687 583
$602 404 968 748 571
Do the data provide sufficient evidence to indicate that there is a significant difference in average repair costs for front versus rear bumper repairs costs? Test using a .05. 10.90 Breathing
Patterns Research psychologists measured the baseline breathing patterns—the total ventilation (in liters of air per minute) adjusted for body size—for each of n 30 patients, so that they could
estimate the average total ventilation for patients before any experimentation was done. The data, along with some MINITAB output, are presented here:
5.72 4.79 6.04 5.38 5.17
5.77 5.16 5.83 5.48 6.34
Stem-and-Leaf Display: Ltrs/min Stem-and-leaf of Ltrs/min Leaf Unit = 0.10 1 2 5 8 12 (4) 14 11 7 4 2 1
N = 30
Descriptive Statistics: Ltrs/min
Estimate the mean weight loss for obese persons when placed on the diet for a 1-month period. Use a 95% confidence interval and interpret your results. What assumptions must you make so that your
inference is valid?
5.23 5.54 5.92 4.72 4.67
MINITAB output for Exercise 10.90
4.99 5.84 5.32 5.37 6.58
5.12 4.51 6.19 4.96 4.35
4.82 5.14 5.70 5.58 5.63
Variable Ltrs/min Minimum 4.3500
N 30
N* 0
Q1 4.9825
Mean 5.3953
SE Mean 0.0997
StDev 0.5462
Median 5.3750
Q3 5.7850
Maximum 6.5800
a. What information does the stem and leaf plot give you about the data? Why is this important? b. Use the MINITAB output to construct a 99% confidence interval for the average total ventilation for
patients. 10.91 Reaction Times A comparison of reaction times (in seconds) for two different stimuli in a psychological word-association experiment produced the following results when applied to a
random sample of 16 people: Stimulus 1
Stimulus 2
Do the data present sufficient evidence to indicate a difference in mean reaction times for the two stimuli? Test using a .05. 10.92 Reaction Times II Refer to Exercise 10.91. Suppose that the
word-association experiment is conducted using eight people as blocks and making a comparison of reaction times within each person; that is, each person is subjected to both stimuli in a random
order. The reaction times (in seconds) for the experiment are as follows: Person
Stimulus 1
Stimulus 2
Do the data present sufficient evidence to indicate a difference in mean reaction times for the two stimuli? Test using a .05. 10.93 Refer to Exercises 10.91 and 10.92. Calculate a 95% confidence
interval for the difference in the two population means for each of these experimental designs. Does it appear that blocking increased the amount of information available in the experiment? 10.94
Impact Strength The following data are readings (in foot-pounds) of the impact EX1094 strengths of two kinds of packaging material: A
1.25 1.16 1.33 1.15 1.23 1.20 1.32 1.28 1.21
.89 1.01 .97 .95 .94 1.02 .98 1.06 .98
a. Do the data present sufficient evidence to indicate a difference between the average densities of cakes prepared using the two types of batter? b. Construct a 95% confidence interval for the
difference between the average densities for the two mixes. 10.96 Under what assumptions can the F distribution
be used in making inferences about the ratio of population variances? 10.97 Got Milk? A dairy is in the market for a new
container-filling machine and is considering two models, manufactured by company A and company B. Ruggedness, cost, and convenience are comparable in the two models, so the deciding factor is the
variability of fills. The model that produces fills with the smaller variance is preferred. If you obtain samples of fills for each of the two models, an F-test can be used to test for the equality of
population variances. Which type of rejection region would be most favored by each of these individuals? a. The manager of the dairy—Why? b. A sales representative for company A—Why? c. A sales
representative for company B—Why?
MINITAB output for Exercise 10.94
Two-Sample T-Test and CI: A, B Two-sample T for A vs B N Mean StDev SE Mean A 9 1.2367 0.0644 0.021 B 9 0.9778 0.0494 0.016 Difference = mu (A) - mu (B) Estimate for difference: 0.2589 95% CI for
difference: (0.2015, 0.3163) T-Test of difference = 0 (vs not =): T-Value = 9.56 P-Value = 0.000 DF = 16 Both use Pooled StDev = 0.0574
a. Use the MINITAB printout to determine whether there is evidence of a difference in the mean strengths for the two kinds of material. b. Are there practical implications to your results? 10.95 Cake
Mixes An experiment was conducted to compare the densities (in ounces per cubic inch) of cakes prepared from two different cake mixes. Six cake pans were filled with batter A, and six were filled with
batter B. Expecting a variation in oven temperature, the experimenter placed a pan filled with batter A and another with batter B side by side at six different locations in the oven. The six paired
observations of densities are as follows: Batter A
Batter B
10.98 Got Milk II Refer to Exercise 10.97. Wishing to demonstrate that the variability of fills is less for her model than for her competitor’s, a sales representative for company A acquired a sample
of 30 fills from her company’s model and a sample of 10 fills from her competitor’s model. The sample variances were sA2 .027 and sB2 .065, respectively. Does this result provide statistical support at
the .05 level of significance for the sales representative’s claim? 10.99 Chemical Purity A chemical manufacturer
claims that the purity of his product never varies by more than 2%. Five batches were tested and given purity readings of 98.2, 97.1, 98.9, 97.7, and 97.9%. a. Do the data provide sufficient evidence
to contradict the manufacturer’s claim? (HINT: To be generous, let a range of 2% equal 4s.) b. Find a 90% confidence interval for s 2. 10.100 16-Ounce Cans? A cannery prints “weight
16 ounces” on its label. The quality control supervisor selects nine cans at random and weighs them. She finds 苶x 15.7 and s .5. Do the data present sufficient evidence to indicate that the mean
weight is less than that claimed on the label?
10.101 Reaction Time III A psychologist wishes to
Sea Level
12,000 Feet
verify that a certain drug increases the reaction time to a given stimulus. The following reaction times (in tenths of a second) were recorded before and after injection of the drug for each of four
.07 .10 .09 .12 .09 .13
.13 .17 .15 .14 .10 .14
Reaction Time Subject
Test at the 5% level of significance to determine whether the drug significantly increases reaction time. 10.102 Food Production At a time when energy conservation is so important, some scientists
think closer scrutiny should be given to the cost (in energy) of producing various forms of food. Suppose you wish to compare the mean amount of oil required to produce 1 acre of corn versus 1 acre
of cauliflower. The readings (in barrels of oil per acre), based on 20-acre plots, seven for each crop, are shown in the table. Use these data to find a 90% confidence interval for the difference
between the mean amounts of oil required to produce these two crops.
5.6 7.1 4.5 6.0 7.9 4.8 5.7
15.9 13.4 17.6 16.8 15.8 16.3 17.1
10.103 Alcohol and Altitude The effect of alcohol
consumption on the body appears to be much greater at high altitudes than at sea level. To test this theory, a scientist randomly selects 12 subjects and randomly divides them into two groups of six
each. One group is put into a chamber that simulates conditions at an altitude of 12,000 feet, and each subject ingests a drink containing 100 cubic centimeters (cc) of alcohol. The second group
receives the same drink in a chamber that simulates conditions at sea level. After 2 hours, the amount of alcohol in the blood (grams per 100 cc) for each subject is measured. The data are shown in
the table. Do the data provide sufficient evidence to support the theory that retention of alcohol in the blood is greater at high altitudes?
10.104 Stock Risks The closing prices of two common stocks were recorded for a period of 15 days. The means and variances are x苶1 40.33 s 12 1.54
x苶2 42.54 s 22 2.96
a. Do these data present sufficient evidence to indicate a difference between the variabilities of the closing prices of the two stocks for the populations associated with the two samples? Give the
p-value for the test and interpret its value. b. Construct a 99% confidence interval for the ratio of the two population variances. 10.105 Auto Design An experiment is conducted to compare two new
automobile designs. Twenty people are randomly selected, and each person is asked to rate each design on a scale of 1 (poor) to 10 (excellent). The resulting ratings will be used to test the null
hypothesis that the mean level of approval is the same for both designs against the alternative hypothesis that one of the automobile designs is preferred. Do these data satisfy the assumptions
required for the Student’s t-test of Section 10.4? Explain. 10.106 Safety Programs The data shown here were collected on lost-time accidents (the figures given are mean work-hours lost per month over
a period of 1 year) before and after an industrial safety program was put into effect. Data were recorded for six industrial plants. Do the data provide sufficient evidence to indicate whether the
safety program was effective in reducing lost-time accidents? Test using a .01. Plant Number
Before Program After Program
10.107 Two Different Entrees To compare the demand for two different entrees, the manager of a cafeteria recorded the number of purchases of each entree on seven consecutive days. The data are shown
in the table. Do the data provide
sufficient evidence to indicate a greater mean demand for one of the entrees? Use the MINITAB printout. Day Monday Tuesday Wednesday Thursday Friday Saturday Sunday
a. Is there sufficient evidence to reject his claim at the a .05 level of significance? b. Find a 95% confidence interval for the variance of the rod diameters. 10.111 Sleep and the College Student How
much sleep do you get on a typical school night? A group of 10 college students were asked to report the number of hours that they slept on the previous night with the following results:
MINITAB output for Exercise 10.107
Paired T-Test and CI: A, B
a. Find a 99% confidence interval for the average number of hours that college students sleep. b. What assumptions are required in order for this confidence interval to be valid?
Paired T for A - B N Mean A 7 504.7 B 7 471.3 Difference 7 33.4
StDev 127.2 97.4 47.5
SE Mean 48.1 36.8 18.0
95% CI for mean difference: (-10.5, 77.4) T-Test of mean difference = 0 (vs not = 0): T-Value = 1.86 P-Value = 0.112
10.108 Pollution Control The EPA limit on the
allowable discharge of suspended solids into rivers and streams is 60 milligrams per liter (mg/l) per day. A study of water samples selected from the discharge at a phosphate mine shows that over a
long period, the mean daily discharge of suspended solids is 48 mg/l, but day-to-day discharge readings are variable. State inspectors measured the discharge rates of suspended solids for n 20 days
and found s 2 39 (mg/l)2. Find a 90% confidence interval for s 2. Interpret your results. 10.109 Enzymes Two methods were used to mea-
sure the specific activity (in units of enzyme activity per milligram of protein) of an enzyme. One unit of enzyme activity is the amount that catalyzes the formation of 1 micromole of product per
minute under specified conditions. Use an appropriate test or estimation procedure to compare the two methods of measurement. Comment on the validity of any assumptions you need to make. Method 1
Method 2
10.110 Connector Rods A producer of machine
parts claimed that the diameters of the connector rods produced by his plant had a variance of at most .03 inch2. A random sample of 15 connector rods from his plant produced a sample mean and
variance of .55 inch and .053 inch2, respectively.
10.112 Arranging Objects The following EX10112
5.2 4.2 3.1 3.6 4.7
data are the response times in seconds for n 25 first graders to arrange three objects by size. 3.8 4.1 2.5 3.9 3.3
5.7 4.3 3.0 4.8 4.2
3.9 4.7 4.4 5.3 3.8
3.7 4.3 4.8 4.2 5.4
Find a 95% confidence interval for the average response time for first graders to arrange three objects by size. Interpret this interval. 10.113 Finger-Lickin’ Good! Maybe too good,
according to tests performed by the consumer testing division of Good Housekeeping. Nutritional information provided by Kentucky Fried Chicken claims that each small bag of Potato Wedges contains 4.8
ounces of food, for a total of 280 calories. A sample of 10 orders from KFC restaurants in New York and New Jersey averaged 358 calories.16 If the standard deviation of this sample was s 54, is there
sufficient evidence to indicate that the average number of calories in small bags of KFC Potato Wedges is greater than advertised? Test at the 1% level of significance. 10.114 Mall Rats An article in
American Demo-
graphics investigated consumer habits at the mall. We tend to spend the most money shopping on the weekends, and, in particular, on Sundays from 4 to 6 P.M. Wednesday morning shoppers spend the
least!17 Suppose that a random sample of 20 weekend shoppers and a random sample of 20 weekday shoppers were selected, and the amount spent per trip to the mall was recorded.
Sample Size Sample Mean Sample Standard Deviation
20 $78 $22
20 $67 $20
a. Is it reasonable to assume that the two population variances are equal? Use the F-test to test this hypothesis with a .05. b. Based on the results of part a, use the appropriate test to determine
whether there is a difference in the average amount spent per trip on weekends versus weekdays. Use a .05. 10.115 Border Wars As the costs of pre-
scription drugs escalate, more and more senior citizens are ordering prescriptions from Canada, or actually crossing the border to buy prescription drugs. The price of a typical prescription for nine
drugs was recorded at randomly selected stores in both the United States and in Canada.18 Drug
Lipitor® Zocor® Prilosec® Norvasc® Zyprexa® Paxil® Prevacid® Celebrex® Zoloft®
$290 412 117 139 571 276 484 161 235
$179 211 72 125 396 171 196 67 156
a. Is there sufficient evidence to indicate that the average cost of prescription drugs in the United States is different from the average cost in Canada? Use a .01. b. What is the approximate the
p-value for this test? Does this confirm your conclusions in part a?
Exercises 10.116 Use the Student’s t Probabilities applet to find the following probabilities: a. P(t 1.2) with 5 df b. P(t 2) P(t 2) with 10 df c. P(t 3.3) with 8 df d. P(t .6) with 12 df 10.117 Use
the Student’s t Probabilities applet to find the following critical values: a. an upper one-tailed rejection region with a .05 and 11 df b. a two-tailed rejection region with a .05 and 7 df c. a lower
one-tailed rejection region with a .01 and 15 df 10.118 Refer to the Interpreting Confidence Intervals applet. a. Suppose that you have a random sample of size n 10 from a population with unknown mean
m. What formula would you use to construct a 95% confidence interval for the unknown population mean? b. Use the button in the first applet to create a single 95% confidence interval for m. Use the
formula in part a and the information given in the
applet to verify the confidence limits provided. (The applet rounds to the nearest integer.) Did this confidence interval enclose the true value, m 100? 10.119 Refer to the Interpreting Confidence
Intervals applet. a. Use the button in the first applet to create ten 95% confidence intervals for m. b. Are the widths of these intervals all the same? Explain why or why not. c. How many of the
intervals work properly and enclose the true value of m? d. Try this simulation again by clicking the button a few more times and counting the number of intervals that work correctly. Is it close to
our 95% confidence level? e. Use the button in the second applet to create ten 99% confidence intervals for m. How many of these intervals work properly? 10.120 Refer to the Interpreting Confidence
Intervals applet. a. Use the button to create one hundred 95% confidence intervals for m. How many of the
intervals work properly and enclose the true value of m? b. Repeat the instructions of part a to construct 99% confidence intervals. How many of the intervals work properly and enclose the true value
of m? c. Try this simulation again by clicking the button a few more times and counting the number of intervals that work correctly. Use both the 95% and 99% confidence intervals. Do the percentage of
intervals that work come close to our 95% and 99% confidence levels? 10.121 A random sample of n 12 observations
from a normal population produced x苶 47.1 and s 2 4.7. Test the hypothesis H0: m 48 against Ha: m 48. Use the Small-Sample Test of a Population Mean applet and a 5% significance level. 10.122 SAT
Scores In Exercise 9.73, we reported that the national average SAT scores for the class of 2005 were 508 on the verbal portion and 520 on the math portion. Suppose that we have a small random sample
of 15 California students in the class of 2005; their SAT scores are recorded in the following table. Sample Average Sample Standard Deviation
a. Use the Small-Sample Test of a Population Mean applet. Do the data provide sufficient evidence to indicate that the average verbal score for all California students in the class of 2005 is
different from the national average? Test using a .05.
CASE STUDY Flextime
b. Use the Small-Sample Test of a Population Mean applet. Do the data provide sufficient evidence to indicate that the average math score for all California students in the class of 2005 is different
from the national average? Test using a .05. 10.123 Surgery Recovery Times The length of time to recovery was recorded for patients randomly assigned and subjected to two different surgical
procedures. The data (recorded in days) are as follows:
Sample Average Sample Variance Sample Size
Procedure I
Procedure II
7.3 1.23 11
8.9 1.49 13
Do the data present sufficient evidence to indicate a difference between the mean recovery times for the two surgical procedures? Perform the test of hypothesis, calculating the test statistic and
the approximate p-value by hand. Then check your results using the Two-Sample T-Test: Independent Samples applet. 10.124 Stock Prices Refer to Exercise 10.104 in which we reported the closing prices
of two common stocks, recorded over a period of 15 days. x苶1 40.33 s 12 1.54
x苶2 42.54 s 22 2.96
Use the Two-Sample T-Test: Independent Samples applet. Do the data provide sufficient evidence to indicate that the average prices of the two common stocks are different? Use the p-value to access
the significance of the test.
How Would You Like a Four-Day Workweek? Will a flexible workweek schedule result in positive benefits for both employer and employee? Is a more rested employee, who spends less time commuting to and
from work, likely to be more efficient and take less time off for sick leave and personal leave? A report on the benefits of flexible work schedules that appeared in Environmental Health looked at the
records of n 11 employees who worked in a satellite office in a county health department in Illinois under a 4-day workweek schedule.19 Employees worked a conventional workweek in year 1 and a 4-day
workweek in year 2. Some statistics for these employees are shown in the following table:
446 ❍
Personal Leave
Sick Leave
Year 2
Year 1
Year 2
Year 1
1. A 4-day workweek ensures that employees will have one more day that need not be spent at work. One possible result is a reduction in the average number of personal-leave days taken by employees on
a 4-day work schedule. Do the data indicate that this is the case? Use the p-value approach to testing to reach your conclusion. 2. A 4-day workweek schedule might also have an effect on the average
number of sick-leave days an employee takes. Should a directional alternative be used in this case? Why or why not? 3. Construct a 95% confidence interval to estimate the average difference in days
taken for sick leave between these 2 years. What do you conclude about the difference between the average number of sick-leave days for these two work schedules? 4. Based on the analysis of these two
variables, what can you conclude about the advantages of a 4-day workweek schedule?
Case Study from “Four-Day Work Week Improves Environment,” by C.S. Catlin, Environmental Health, Vol. 59, No. 7, March 1997. Copyright 1997 National Environmental Health Association. Reprinted by
The Analysis of Variance
GENERAL OBJECTIVE The quantity of information contained in a sample is affected by various factors that the experimenter may or may not be able to control. This chapter introduces three different
experimental designs, two of which are direct extensions of the unpaired and paired designs of Chapter 10. A new technique called the analysis of variance is used to determine how the different
experimental factors affect the average response.
CHAPTER INDEX ● The analysis of variance (11.2) ● The completely randomized design (11.4, 11.5) ● Factorial experiments (11.9, 11.10) ● The randomized block design (11.7, 11.8) ● Tukey’s method of
paired comparisons (11.6)
How Do I Know Whether My Calculations Are Accurate?
© James Leynse/CORBIS
“A Fine Mess” Do you risk a fine by parking your car in red zones or next to fire hydrants? Do you fail to put enough money in a parking meter? If so, you are among the thousands of drivers who receive
parking tickets every day in almost every city in the United States. Depending on the city in which you receive a ticket, your fine can be as little as $8 for overtime parking in San Luis Obispo,
California, or as high as $340 for illegal parking in a handicapped space in San Diego, California. The case study at the end of this chapter statistically analyzes the variation in parking fines in
southern California cities.
448 ❍
THE DESIGN OF AN EXPERIMENT The way that a sample is selected is called the sampling plan or experimental design and determines the amount of information in the sample. Some research involves an
observational study, in which the researcher does not actually produce the data but only observes the characteristics of data that already exist. Most sample surveys, in which information is gathered
with a questionnaire, fall into this category. The researcher forms a plan for collecting the data—called the sampling plan—and then uses the appropriate statistical procedures to draw conclusions
about the population or populations from which the sample comes. Other research involves experimentation. The researcher may deliberately impose one or more experimental conditions on the
experimental units in order to determine their effect on the response. Here are some new terms we will use to discuss the design of a statistical experiment. Definition An experimental unit is the
object on which a measurement (or measurements) is taken. A factor is an independent variable whose values are controlled and varied by the experimenter. A level is the intensity setting of a factor.
A treatment is a specific combination of factor levels. The response is the variable being measured by the experimenter.
A group of people is randomly divided into an experimental and a control group. The control group is given an aptitude test after having eaten a full breakfast. The experimental group is given the
same test without having eaten any breakfast. What are the factors, levels, and treatments in this experiment? Solution The experimental units are the people on which the response (test score) is
measured. The factor of interest could be described as “meal” and has two levels: “breakfast” and “no breakfast.” Since this is the only factor controlled by the experimenter, the two
levels—“breakfast” and “no breakfast”—also represent the treatments of interest in the experiment.
Suppose that the experimenter in Example 11.1 began by randomly selecting 20 men and 20 women for the experiment. These two groups were then randomly divided into 10 each for the experimental and
control groups. What are the factors, levels, and treatments in this experiment? Now there are two factors of interest to the experimenter, and each factor has two levels:
• •
“Gender” at two levels: men and women “Meal” at two levels: breakfast and no breakfast
In this more complex experiment, there are four treatments, one for each specific combination of factor levels: men without breakfast, men with breakfast, women without breakfast, and women with
11.3 THE ASSUMPTIONS FOR AN ANALYSIS OF VARIANCE
In this chapter, we will concentrate on experiments that have been designed in three different ways, and we will use a technique called the analysis of variance to judge the effects of various
factors on the experimental response. Two of these experimental designs are extensions of the unpaired and paired designs from Chapter 10.
The responses that are generated in an experimental situation always exhibit a certain amount of variability. In an analysis of variance, you divide the total variation in the response measurements
into portions that may be attributed to various factors of interest to the experimenter. If the experiment has been properly designed, these portions can then be used to answer questions about the
effects of the various factors on the response of interest. You can better understand the logic underlying an analysis of variance by looking at a simple experiment. Consider two sets of samples
randomly selected from populations 1 (䉬) and 2 (䊊), each with identical pairs of means, 苶x1 and x苶2. The two sets are shown in Figure 11.1. Is it easier to detect the difference in the two means
when you look at set A or set B? You will probably agree that set A shows the difference much more clearly. In set A, the variability of the measurements within the groups (䉬s and 䊊s) is much
smaller than the variability between the two groups. In set B, there is more variability within the groups (䉬s and 䊊s), causing the two groups to “mix” together and making it more difficult to see
the identical difference in the means. FIGU R E 1 1 . 1
Two sets of samples with the same means
Set A
Set B
The comparison you have just done intuitively is formalized by the analysis of variance. Moreover, the analysis of variance can be used not only to compare two means but also to make comparisons of
more than two population means and to determine the effects of various factors in more complex experimental designs. The analysis of variance relies on statistics with sampling distributions that are
modeled by the F distribution of Section 10.7.
THE ASSUMPTIONS FOR AN ANALYSIS OF VARIANCE The assumptions required for an analysis of variance are similar to those required for the Student’s t and F statistics of Chapter 10. Regardless of the
experimental design used to generate the data, you must assume that the observations within each treatment group are normally distributed with a common variance s 2. As in Chapter 10, the analysis of
variance procedures are fairly robust when the sample sizes are equal and when the data are fairly mound-shaped. Violating the assumption of a common variance is more serious, especially when the
sample sizes are not nearly equal.
450 ❍
The observations within each population are normally distributed with a common variance s 2. Assumptions regarding the sampling procedure are specified for each design in the sections that follow.
This chapter describes the analysis of variance for three different experimental designs. The first design is based on independent random sampling from several populations and is an extension of the
unpaired t-test of Chapter 10. The second is an extension of the paired-difference or matched pairs design and involves a random assignment of treatments within matched sets of observations. The
third is a design that allows you to judge the effect of two experimental factors on the response. The sampling procedures necessary for each design are restated in their respective sections.
THE COMPLETELY RANDOMIZED DESIGN: A ONE-WAY CLASSIFICATION One of the simplest experimental designs is the completely randomized design, in which random samples are selected independently from each
of k populations. This design involves only one factor, the population from which the measurement comes— hence the designation as a one-way classification. There are k different levels corresponding
to the k populations, which are also the treatments for this one-way classification. Are the k population means all the same, or is at least one mean different from the others? Why do you need a new
procedure, the analysis of variance, to compare the population means when you already have the Student’s t-test available? In comparing k 3 means, you could test each of three pairs of hypotheses: H0
: m1 m2
H0 : m1 m3
H0 : m2 m3
to find out where the differences lie. However, you must remember that each test you perform is subject to the possibility of error. To compare k 4 means, you would need six tests, and you would need
10 tests to compare k 5 means. The more tests you perform on a set of measurements, the more likely it is that at least one of your conclusions will be incorrect. The analysis of variance procedure
provides one overall test to judge the equality of the k population means. Once you have determined whether there is actually a difference in the means, you can use another procedure to find out where
the differences lie. How can you select these k random samples? Sometimes the populations actually exist in fact, and you can use a computerized random number generator or a random number table to
randomly select the samples. For example, in a study to compare the average sizes of health insurance claims in four different states, you could use a computer database provided by the health
insurance companies to select random samples from the four states. In other situations, the populations may be hypothetical, and responses can be generated only after the experimental treatments have
been applied. EXAMPLE
A researcher is interested in the effects of five types of insecticides for use in controlling the boll weevil in cotton fields. Explain how to implement a completely randomized design to investigate
the effects of the five insecticides on crop yield.
11.5 THE ANALYSIS OF VARIANCE FOR A COMPLETELY RANDOMIZED DESIGN
The only way to generate the equivalent of five random samples from the hypothetical populations corresponding to the five insecticides is to use a method called a randomized assignment. A fixed number
of cotton plants are chosen for treatment, and each is assigned a random number. Suppose that each sample is to have an equal number of measurements. Using a randomization device, you can assign the
first n plants chosen to receive insecticide 1, the second n plants to receive insecticide 2, and so on, until all five treatments have been assigned. Solution
Whether by random selection or random assignment, both of these examples result in a completely randomized design, or one-way classification, for which the analysis of variance is used.
FIGU R E 1 1 . 2
Normal populations with a common variance but different means
Suppose you want to compare k population means, m1, m2, . . . , mk, based on independent random samples of size n1, n2, . . . , nk from normal populations with a common variance s 2. That is, each of
the normal populations has the same shape, but their locations might be different, as shown in Figure 11.2.
... µ1
Partitioning the Total Variation in an Experiment Let xij be the jth measurement ( j 1, 2, . . . , ni) in the ith sample. The analysis of variance procedure begins by considering the total variation
in the experiment, which is measured by a quantity called the total sum of squares (TSS): (Sxij )2 Total SS S(xij 苶x )2 Sx 2ij n This is the familiar numerator in the formula for the sample variance
for the entire set of n n1 n2 nk measurements. The second part of the calculational formula is sometimes called the correction for the mean (CM). If we let G represent the grand total of all n
observations, then (Sxij )2 G2 CM n n This Total SS is partitioned into two components. The first component, called the sum of squares for treatments (SST), measures the variation among the k sample
means: T2 x )2 Si CM SST Sni ( 苶xi 苶 ni
452 ❍
where Ti is the total of the observations for treatment i. The second component, called the sum of squares for error (SSE), is used to measure the pooled variation within the k samples: SSE (n1 1)s12
(n2 1)s 22 (nk 1)s 2k This formula is a direct extension of the numerator in the formula for the pooled estimate of s 2 from Chapter 10. We can show algebraically that, in the analysis of variance,
Total SS SST SSE Therefore, you need to calculate only two of the three sums of squares—Total SS, SST, and SSE—and the third can be found by subtraction. Each of the sources of variation, when
divided by its appropriate degrees of freedom, provides an estimate of the variation in the experiment. Since Total SS involves n squared observations, its degrees of freedom are df (n 1). Similarly,
the sum of squares for treatments involves k squared observations, and its degrees of freedom are df (k 1). Finally, the sum of squares for error, a direct extension of the pooled estimate in Chapter
10, has df (n1 1) (n2 1) (nk 1) n k Notice that the degrees of freedom for treatments and error are additive—that is, df (total) df (treatments) df(error) These two sources of variation and their
respective degrees of freedom are combined to form the mean squares as MS SS/df. The total variation in the experiment is then displayed in an analysis of variance (or ANOVA) table. ANOVA TABLE FOR k
INDEPENDENT RANDOM SAMPLES: COMPLETELY RANDOMIZED DESIGN The column labeled “SS” satisfies: Total SS SST SSE.
Treatments Error
k1 nk
MST SST/(k 1) MSE SSE/(n k)
Total SS
where Total SS Sx 2ij CM (Sum of squares of all x-values) CM with (Sxij )2 G2 CM n n The column labeled “df” always adds up to n 1.
T2 SST Si CM ni
SST MST k1
SSE Total SS SST
SSE MSE nk
G Grand total of all n observations Ti Total of all observations in sample i ni Number of observations in sample i n n1 n2 nk
11.5 THE ANALYSIS OF VARIANCE FOR A COMPLETELY RANDOMIZED DESIGN
TABLE 11.1
In an experiment to determine the effect of nutrition on the attention spans of elementary school students, a group of 15 students were randomly assigned to each of three meal plans: no breakfast,
light breakfast, and full breakfast. Their attention spans (in minutes) were recorded during a morning reading period and are shown in Table 11.1. Construct the analysis of variance table for this
experiment. ●
Attention Spans of Students After Three Meal Plans No Breakfast
Light Breakfast
Full Breakfast
T1 47
T2 70
T3 65
To use the calculational formulas, you need the k 3 treatment totals together with n1 n2 n3 5, n 15, and S xij 182. Then
(182)2 CM 2208.2667 15 Total SS (82 72 122 ) CM 2338 2208.2667 129.7333 with (n 1) (15 1) 14 degrees of freedom, 472 702 652 SST CM 2266.8 2208.2667 58.5333 5 with (k 1) (3 1) 2 degrees of freedom,
and by subtraction, SSE Total SS SST 129.7333 58.5333 71.2 with (n k) (15 3) 12 degrees of freedom. These three sources of variation, their degrees of freedom, sums of squares, and mean squares are
shown in the shaded area of the ANOVA table generated by MINITAB and given in Figure 11.3. You will find instructions for generating this output in the “My MINITAB ” section at the end of this
chapter. F IGU R E 1 1 . 3
MINITAB output for Example 11.4
● One-way ANOVA: Span versus Meal Source Meal Error Total
DF 2 12 14
S = 2.436
Level 1 2 3
N 5 5 5
SS 58.53 71.20 129.73
MS 29.27 5.93
R-Sq = 45.12%
Mean 9.400 14.000 13.000
StDev 2.302 2.550 2.449
Pooled StDev = 2.436
F 4.93
P 0.027
R-Sq(adj) = 35.97% Individual 95% CIs For Mean Based on Pooled StDev --+---------+---------+---------+------(---------*--------) (--------*--------) (--------*--------)
--+---------+---------+---------+------7.5 10.0 12.5 15.0
454 ❍
The MINITAB output gives some additional information about the variation in the experiment. The second section shows the means and standard deviations for the three meal plans. More important, you
can see in the first section of the printout two columns marked “F” and “P.” We can use these values to test a hypothesis concerning the equality of the three treatment means.
Testing the Equality of the Treatment Means The mean squares in the analysis of variance table can be used to test the null hypothesis H0 : m1 m2 mk MS SS/df
versus the alternative hypothesis Ha : At least one of the means is different from the others using the following theoretical argument: •
Remember that s 2 is the common variance for all k populations. The quantity SSE MSE nk
F I GU R E 1 1 . 4
Sample means drawn from identical versus different populations
is a pooled estimate of s 2, a weighted average of all k sample variances, whether or not H0 is true. If H0 is true, then the variation in the sample means, measured by MST [SST/(k 1)], also provides
an unbiased estimate of s 2. However, if H0 is false and the population means are different, then MST—which measures the variation in the sample means—will be unusually large, as shown in Figure
H0 true
x1 x2
H0 false
µ2 x2
µ1 = µ2= µ3
The test statistic MST F MSE
F-tests for ANOVA tables are always upper (right) tailed.
tends to be larger than usual if H0 is false. Hence, you can reject H0 for large values of F, using a right-tailed statistical test. When H0 is true, this test statistic has an F distribution with
df1 (k 1) and df2 (n k) degrees of freedom, and right-tailed critical values of the F distribution (from Table 6 in Appendix I) or computer-generated p-values can be used to draw statistical
conclusions about the equality of the population means.
11.5 THE ANALYSIS OF VARIANCE FOR A COMPLETELY RANDOMIZED DESIGN
F TEST FOR COMPARING k POPULATION MEANS 1. Null hypothesis: H0 : m1 m2 mk 2. Alternative hypothesis: Ha : One or more pairs of population means differ 3. Test statistic: F MST/MSE, where F is based
on df1 (k 1) and df2 (n k) 4. Rejection region: Reject H0 if F Fa, where Fa lies in the upper tail of the F distribution (with df1 k 1 and df2 n k) or if the p-value a. f(F)
α Fα
Assumptions • •
The samples are randomly and independently selected from their respective populations. The populations are normally distributed with means m1, m2, . . . , mk and equal variances, s 12 s 22 s 2k s 2.
Do the data in Example 11.4 provide sufficient evidence to indicate a difference in the average attention spans depending on the type of breakfast eaten by the student? To test H0 : m1 m2 m3 versus
the alternative hypothesis that the average attention span is different for at least one of the three treatments, you use the analysis of variance F statistic, calculated as
MST 29.2667 F 4.93 MSE 5.9333 and shown in the column marked “F” in Figure 11.3. It will not surprise you to know that the value in the column marked “P” in Figure 11.3 is the exact p-value for this
statistical test. The test statistic MST/MSE calculated above has an F distribution with df1 2 and df2 12 degrees of freedom. Using the critical value approach with a .05, you can reject H0 if F F.05
3.89 from Table 6 in Appendix I (see Figure 11.5). Since the observed value, F 4.93, exceeds the critical value, you reject H0. There is sufficient evidence to indicate that at least one of the three
average attention spans is different from at least one of the others.
456 ❍
FI GU R E 1 1 . 5
Rejection region for Example 11.5
f(F ) 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0.0
= .05
F 0
5 3.89
10 Rejection region
You could have reached this same conclusion using the exact p-value, P .027, given in Figure 11.3. Since the p-value is less than a .05, the results are statistically significant at the 5% level. You
still conclude that at least one of the three average attention spans is different from at least one of the others.
Computer printouts give the exact p-value—use the p-value to make your decision.
You can use the F Probabilities applet to find critical values of F or p-values for the analysis of variance F-test. Look at the two applets in Figure 11.6. Use the sliders on the left and right of
the applets to select the appropriate degrees of freedom (df1 and df2). To find the critical value for rejection of H0, enter the significance level a in the box marked “Prob” and press Enter. To find
the p-value, enter the observed value of the test statistic in the box marked “F” and press Enter. Can you identify the critical value for rejection and the p-value for Example 11.5? FI GU R E 1 1 .
F Probabilities applet
● F Distribution
F Distribution
Estimating Differences in the Treatment Means The next obvious question you might ask involves the nature of the differences in the population means. Which means are different from the others? How
can you estimate the difference, or possibly the individual means for each of the three treatments?
11.5 THE ANALYSIS OF VARIANCE FOR A COMPLETELY RANDOMIZED DESIGN
In Section 11.6, we will present a procedure that you can use to compare all possible pairs of treatment means simultaneously. However, if you have a special interest in a particular mean or pair of
means, you can construct confidence intervals using the small-sample procedures of Chapter 10, based on the Student’s t distribution. For a single population mean, mi, the confidence interval is s 苶xi
ta/2 兹n苶i
where x苶i is the sample mean for the i th treatment. Similarly, for a comparison of two population means—say, mi and mj—the confidence interval is (x苶i 苶xj ) ta/2
冢莦冣冪莦 1 1 s2 ni nj
Before you can use these confidence intervals, however, two questions remain: • •
How do you calculate s or s 2, the best estimate of the common variance s 2? How many degrees of freedom are used for the critical value of t?
To answer these questions, remember that in an analysis of variance, the mean square for error, MSE, always provides an unbiased estimator of s 2 and uses information from the entire set of
measurements. Hence, it is the best available estimator of s 2, regardless of what test or estimation procedure you are using. You should always use s 2 MSE
with df (n k)
苶, to estimate s ! You can find the positive square root of this estimator, s 兹MSE on the last line of Figure 11.3 labeled “Pooled StDev.” 2
Degrees of freedom for confidence intervals are the df for error.
COMPLETELY RANDOMIZED DESIGN: (1 a)100% CONFIDENCE INTERVALS FOR A SINGLE TREATMENT MEAN AND THE DIFFERENCE BETWEEN TWO TREATMENT MEANS Single treatment mean: s 苶xi ta/2 兹n苶i
Difference between two treatment means: ( 苶xi 苶xj ) ta/2
冣冢n 莦冪s莦 n 2
with 苶 s 兹s苶2 兹MSE
冪莦 nk SSE
where n n1 n2 nk and ta/2 is based on (n k) df. EXAMPLE
The researcher in Example 11.4 believes that students who have no breakfast will have significantly shorter attention spans but that there may be no difference between those who eat a light or a full
breakfast. Find a 95% confidence interval for the average attention span for students who eat no breakfast, as well as a 95% confidence interval for the difference in the average attention spans for
light versus full breakfast eaters.
458 ❍
苶 2.436 with df For s 2 MSE 5.9333 so that s 兹5.9333 (n k) 12, you can calculate the two confidence intervals:
For no breakfast: s 苶x1 ta/2 n1 兹苶
2.436 9.4 2.179 兹5苶
9.4 2.37 or between 7.03 and 11.77 minutes. For light versus full breakfast:
s 冢冣冪莦 n 莦 n 1 1 (14 13) 2.179 5.9333 冢5 莦冪莦莦 5冣 (x苶2 苶x3) ta/2
1 3.36 a difference of between 2.36 and 4.36 minutes. You can see that the second confidence interval does not indicate a difference in the average attention spans for students who ate light versus
full breakfasts, as the researcher suspected. If the researcher, because of prior beliefs, wishes to test the other two possible pairs of means—none versus light breakfast, and none versus full
breakfast—the methods given in Section 11.6 should be used for testing all three pairs. Some computer programs have graphics options that provide a powerful visual description of data and the k
treatment means. One such option in the MINITAB program is shown in Figure 11.7. The treatment means are indicated by the symbol ⊕ and are connected with straight lines. Notice that the “no
breakfast” mean appears to be somewhat different from the other two means, as the researcher suspected, although there is a bit of overlap in the box plots. In the next section, we present a formal
procedure for testing the significance of the differences between all pairs of treatment means. Box plots for Example 11.6
● Boxplot of Span by Meal 18 16 14 Span
FI GU R E 1 1 . 7
2 Meal
11.5 THE ANALYSIS OF VARIANCE FOR A COMPLETELY RANDOMIZED DESIGN
How Do I Know Whether My Calculations Are Accurate? The following suggestions apply to all the analyses of variance in this chapter: 1. When calculating sums of squares, be certain to carry at least
six significant figures before performing subtractions. 2. Remember, sums of squares can never be negative. If you obtain a negative sum of squares, you have made a mistake in arithmetic. 3. Always
check your analysis of variance table to make certain that the degrees of freedom sum to the total degrees of freedom (n 1) and that the sums of squares sum to Total SS.
BASIC TECHNIQUES 11.1 Suppose you wish to compare the means of six
populations based on independent random samples, each of which contains 10 observations. Insert, in an ANOVA table, the sources of variation and their respective degrees of freedom. 11.2 The values
of Total SS and SSE for the experi-
ment in Exercise 11.1 are Total SS 21.4 and SSE 16.2. a. Complete the ANOVA table for Exercise 11.1. b. How many degrees of freedom are associated with the F statistic for testing H0 : m1 m2 m6? c.
Give the rejection region for the test in part b for a .05. d. Do the data provide sufficient evidence to indicate differences among the population means? e. Estimate the p-value for the test. Does
this value confirm your conclusions in part d? 11.3 The sample means corresponding to popu-
lations 1 and 2 in Exercise 11.1 are 苶x1 3.07 and x苶2 2.52. a. Find a 95% confidence interval for m1. b. Find a 95% confidence interval for the difference (m1 m2). 11.4 Suppose you wish to compare
the means of four
populations based on independent random samples, each of which contains six observations. Insert, in an ANOVA table, the sources of variation and their respective degrees of freedom.
11.5 The values of Total SS and SST for the experi-
ment in Exercise 11.4 are Total SS 473.2 and SST 339.8. a. Complete the ANOVA table for Exercise 11.4. b. How many degrees of freedom are associated with the F statistic for testing H0 : m1 m2 m3 m4?
c. Give the rejection region for the test in part b for a .05. d. Do the data provide sufficient evidence to indicate differences among the population means? e. Approximate the p-value for the test.
Does this confirm your conclusions in part d? 11.6 The sample means corresponding to populations
1 and 2 in Exercise 11.4 are 苶x1 88.0 and 苶x2 83.9. a. Find a 90% confidence interval for m1. b. Find a 90% confidence interval for the difference (m1 m2). 11.7 These data are observations collected
using a completely randomized design:
Sample 1
Sample 2
Sample 3
a. b. c. d.
Calculate CM and Total SS. Calculate SST and MST. Calculate SSE and MSE. Construct an ANOVA table for the data.
460 ❍
e. State the null and alternative hypotheses for an analysis of variance F-test. f. Use the p-value approach to determine whether there is a difference in the three population means. 11.8 Refer to
Exercise 11.7 and data set EX1107. Do
the data provide sufficient evidence to indicate a difference between m2 and m3? Test using the t-test of Section 10.4 with a .05. 11.9 Refer to Exercise 11.7 and data set EX1107.
a. Find a 90% confidence interval for m1. b. Find a 90% confidence interval for the difference (m1 m3). APPLICATIONS 11.10 Reducing Hostility A clinical psychologist wished to compare three methods for
reducing hostility levels in university students using a certain psychological test (HLT). High scores on this test were taken to indicate great hostility. Eleven students who got high and nearly
equal scores were used in the experiment. Five were selected at random from among the 11 problem cases and treated by method A, three were taken at random from the remaining six students and treated
by method B, and the other three students were treated by method C. All treatments continued throughout a semester, when the HLT test was given again. The results are shown in the table. Method
Scores on the HLT Test
A B C
a. Perform an analysis of variance for this experiment. b. Do the data provide sufficient evidence to indicate a difference in mean student response to the three methods after treatment? 11.11
Hostility, continued Refer to Exercise 11.10. Let mA and mB, respectively, denote the mean scores at the end of the semester for the populations of extremely hostile students who were treated
throughout that semester by method A and method B. a. Find a 95% confidence interval for mA. b. Find a 95% confidence interval for mB. c. Find a 95% confidence interval for (mA mB). d. Is it correct to
claim that the confidence intervals found in parts a, b, and c are jointly valid? 11.12 Assembling Electronic Equipment EX1112
An experiment was conducted to compare the
effectiveness of three training programs, A, B, and C, in training assemblers of a piece of electronic equipment. Fifteen employees were randomly assigned, five each, to the three programs. After
completion of the courses, each person was required to assemble four pieces of the equipment, and the average length of time required to complete the assembly was recorded. Several of the employees
resigned during the course of the program; the remainder were evaluated, producing the data shown in the accompanying table. Use the MINITAB printout to answer the questions. Training Program
Average Assembly Time (min)
A B C
a. Do the data provide sufficient evidence to indicate a difference in mean assembly times for people trained by the three programs? Give the p-value for the test and interpret its value. b. Find a
99% confidence interval for the difference in mean assembly times between persons trained by programs A and B. c. Find a 99% confidence interval for the mean assembly times for persons trained in
program A. d. Do you think the data will satisfy (approximately) the assumption that they have been selected from normal populations? Why? MINITAB output for Exercise 11.12 One-way ANOVA: Time versus
Program Source Program Error Total S = 3.865
Level 1 2 3
DF SS 2 170.5 9 134.5 11 304.9 R-Sq = 55.90%
N 4 3 5
Pooled StDev =
Mean 60.500 54.667 64.200 3.865
MS 85.2 14.9
F 5.70
P 0.025
R-Sq(adj) = 46.10% Individual 95% CIs For Mean Based on Pooled StDev StDev -+---------+---------+---------+----3.109 (--------*--------) 3.055 (---------*---------) 4.658 (------*-------)
-+---------+---------+---------+----50.0 55.0 60.0 65.0
11.13 Swampy Sites An ecological study
was conducted to compare the rates of growth of vegetation at four swampy undeveloped sites and to determine the cause of any differences that might be observed. Part of the study involved measuring
the leaf lengths of a particular plant species on a preselected date in May. Six plants were randomly selected at each of the four sites to be used in the comparison. The data in the table are the
mean leaf length per plant (in centimeters) for a random sample of ten leaves per plant. The MINITAB analysis of variance computer printout for these data is also provided.
11.5 THE ANALYSIS OF VARIANCE FOR A COMPLETELY RANDOMIZED DESIGN
Mean Leaf Length (cm)
5.7 6.2 5.4 3.7
6.3 5.3 5.0 3.2
6.1 5.7 6.0 3.9
6.0 6.0 5.6 4.0
5.8 5.2 4.9 3.5
6.2 5.5 5.2 3.6
MINITAB output for Exercise 11.13
were randomly selected at each location, but one specimen, corresponding to location 4, was lost in the laboratory. The data and a MINITAB analysis of variance computer printout are provided here
(the greater the pollution, the lower the dissolved oxygen readings). Location
Mean Dissolved Oxygen Content
5.9 6.3 4.8 6.0
One-way ANOVA: Length versus Location Source DF SS Location 3 19.740 Error 20 2.293 Total 23 22.033 S = 0.3386 R-Sq = 89.59%
Level 1 2 3 4
N 6 6 6 6
Pooled StDev =
Mean 6.0167 5.6500 5.3500 3.6500 0.3386
MS 6.580 0.115
F 57.38
P 0.000
R-Sq(adj) = 88.03% Individual 95% CIs For Mean Based on Pooled StDev StDev --------+---------+---------+---------+0.2317 (--*---) 0.3937 (---*--) 0.4087 (---*--) 0.2881 (---*--)
--------+---------+---------+---------+4.00 4.80 5.60 6.40
a. You will recall that the test and estimation procedures for an analysis of variance require that the observations be selected from normally distributed (at least, roughly so) populations. Why
might you feel reasonably confident that your data satisfy this assumption? b. Do the data provide sufficient evidence to indicate a difference in mean leaf length among the four locations? What is
the p-value for the test? c. Suppose, prior to seeing the data, you decided to compare the mean leaf lengths of locations 1 and 4. Test the null hypothesis m1 m4 against the alternative m1 m4. d.
Refer to part c. Construct a 99% confidence interval for (m1 m4). e. Rather than use an analysis of variance F-test, it would seem simpler to examine one’s data, select the two locations that have the
smallest and largest sample mean lengths, and then compare these two means using a Student’s t-test. If there is evidence to indicate a difference in these means, there is clearly evidence of a
difference among the four. (If you were to use this logic, there would be no need for the analysis of variance F-test.) Explain why this procedure is invalid. 11.14 Dissolved O2 Content Water samples
were taken at four different locations in a river to determine whether the quantity of dissolved oxygen, a measure of water pollution, varied from one location to another. Locations 1 and 2 were
selected above an industrial plant, one near the shore and the other in midstream; location 3 was adjacent to the industrial water discharge for the plant; and location 4 was slightly downriver in
midstream. Five water specimens
6.1 6.6 4.3 6.2
6.3 6.4 5.0 6.1
6.1 6.4 4.7 5.8
6.0 6.5 5.1
MINITAB output for Exercise 11.14 One-way ANOVA: Oxygen versus Location Source DF SS MS F P Location 3 7.8361 2.6120 63.66 0.000 Error 15 0.6155 0.0410 Total 18 8.4516 S = 0.2026 R-Sq = 92.72% R-Sq
(adj) = 91.26% Individual 95% CIs For Mean Based on Pooled StDev Level N Mean StDev ----+---------+---------+---------+-1 5 6.0800 0.1483 (--*---) 2 5 6.4400 0.1140 (--*---) 3 5 4.7800 0.3114
(---*--) 4 4 6.0250 0.1708 (--*---) ----+---------+---------+---------+-Pooled StDev = 0.2026 4.80 5.40 6.00 6.60
a. Do the data provide sufficient evidence to indicate a difference in the mean dissolved oxygen contents for the four locations? b. Compare the mean dissolved oxygen content in midstream above the
plant with the mean content adjacent to the plant (location 2 versus location 3). Use a 95% confidence interval. 11.15 Calcium The calcium content of a
powdered mineral substance was analyzed five times by each of three methods, with similar standard deviations:
Percent Calcium
.0279 .0268 .0280
.0276 .0274 .0279
.0270 .0267 .0282
.0275 .0263 .0278
.0281 .0267 .0283
Use an appropriate test to compare the three methods of measurement. Comment on the validity of any assumptions you need to make.
11.16 Tuna Fish In Exercise 10.6, we
reported the estimated average prices for a 6-ounce can or a 7.06-ounce pouch of tuna fish, based on prices paid nationally for a variety of different brands of tuna.1
Light Tuna in Water
White Tuna in Oil
White Tuna in Water
Light Tuna in Oil
.99 .53 1.92 1.41 1.23 1.12 .85 .63 .65 .67 .69 .60 .60 .66
1.27 1.22 1.19 1.22
1.49 1.29 1.27 1.35 1.29 1.00 1.27 1.28
2.56 1.92 1.30 1.79 1.23
.62 .66 .62 .65 .60 .67
Source: From “Pricing of Tuna” Copyright 2001 by Consumers Union of U.S., Inc., Yonkers, NY 10703-1057, a nonprofit organization. Reprinted with permission from the June 2001 issue of Consumer
Reports® for educational purposes only. No commercial use or reproduction permitted. www.ConsumerReports.org®.
a. Use an analysis of variance for a completely randomized design to determine if there are significant differences in the prices of tuna packaged in these four different ways. Can you reject the
hypothesis of no difference in average price for these packages at the a .05 level of significance? At the a .01 level of significance? b. Find a 95% confidence interval estimate of the difference in
price between light tuna in water and light tuna in oil. Does there appear to be a significant difference in the price of these two kinds of packaged tuna? c. Find a 95% confidence interval estimate of
the difference in price between white tuna in water and white tuna in oil. Does there appear to be a significant difference in the price of these two kinds of packaged tuna? d. What other confidence
intervals might be of interest to the researcher who conducted this experiment? 11.17 The Cost of Lumber A national
home builder wants to compare the prices per 1,000 board feet of standard or better grade Douglas fir framing lumber. He randomly selects five suppliers in each of the four states where the builder is
planning to begin construction. The prices are given in the table.
State 1
$241 235 238 247 250
$216 220 205 213 220
$230 225 235 228 240
$245 250 238 255 255
a. What type of experimental design has been used? b. Construct the analysis of variance table for this data. c. Do the data provide sufficient evidence to indicate that the average price per 1000
board feet of Douglas fir differs among the four states? Test using a .05. 11.18 Good at Math? Twenty third graders were randomly separated into four equal groups, and each group was taught a
mathematical concept using a different teaching method. At the end of the teaching period, progress was measured by a unit test. The scores are shown below (one child in group 3 was absent on the day
that the test was administered).
Group 1
a. What type of design has been used in this experiment? b. Construct an ANOVA table for the experiment. c. Do the data present sufficient evidence to indicate a difference in the average scores for
the four teaching methods? Test using a .05.
RANKING POPULATION MEANS Many experiments are exploratory in nature. You have no preconceived notions about the results and have not decided (before conducting the experiment) to make specific
treatment comparisons. Rather, you want to rank the treatment means, determine which means differ, and identify sets of means for which no evidence of difference exists.
11.6 RANKING POPULATION MEANS
One option might be to order the sample means from the smallest to the largest and then to conduct t-tests for adjacent means in the ordering. If two means differ by more than ta/2
s 冢n 莦冪莦 n 冣 2
you conclude that the pair of population means differ. The problem with this procedure is that the probability of making a Type I error—that is, concluding that two means differ when, in fact, they
are equal—is a for each test. If you compare a large number of pairs of means, the probability of detecting at least one difference in means, when in fact none exists, is quite large. A simple way to
avoid the high risk of declaring differences when they do not exist is to use the studentized range, the difference between the smallest and the largest in a set of k sample means, as the yardstick
for determining whether there is a difference in a pair of population means. This method, often called Tukey’s method for paired comparisons, makes the probability of declaring that a difference
exists between at least one pair in a set of k treatment means, when no difference exists, equal to a. Tukey’s method for making paired comparisons is based on the usual analysis of variance
assumptions. In addition, it assumes that the sample means are independent and based on samples of equal size. The yardstick that determines whether a difference exists between a pair of treatment
means is the quantity v (Greek lowercase omega), which is presented next.
YARDSTICK FOR MAKING PAIRED COMPARISONS s v qa(k, df ) 兹n苶t where
k Number of treatments s 2 MSE Estimator of the common variance s 2 and s 兹s苶2 df Number of degrees of freedom for s 2 nt Common sample size—that is, the number of observations in each of the k
treatment means qa(k, df ) Tabulated value from Tables 11(a) and 11(b) in Appendix I, for a .05 and .01, respectively, and for various combinations of k and df Rule: Two population means are judged
to differ if the corresponding sample means differ by v or more. Tables 11(a) and 11(b) in Appendix I list the values of qa(k, df ) for a .05 and .01, respectively. To illustrate the use of the
tables, refer to the portion of Table 11(a) reproduced in Table 11.2. Suppose you want to make pairwise comparisons of k 5 means with a .05 for an analysis of variance, where s2 possesses 9 df. The
tabulated value for k 5, df 9, and a .05, shaded in Table 11.2, is q.05(5, 9) 4.76.
464 ❍
TABLE 11.2
A Partial Reproduction of Table 11(a) in Appendix I; Upper 5% Points df
17.97 6.08 4.50 3.93 3.64 3.46 3.34 3.26 3.20 3.15 3.11 3.08
26.98 8.33 5.91 5.04 4.60 4.34 4.16 4.04 3.95 3.88 3.82 3.77
32.82 9.80 6.82 5.76 5.22 4.90 4.68 4.53 4.41 4.33 4.26 4.20
37.08 10.88 7.50 6.29 5.67 5.30 5.06 4.89 4.76 4.65 4.57 4.51
40.41 11.74 8.04 6.71 6.03 5.63 5.36 5.17 5.02 4.91 4.82 4.75
43.12 12.44 8.48 7.05 6.33 5.90 5.61 5.40 5.24 5.12 5.03 4.95
45.40 13.03 8.85 7.35 6.58 6.12 5.82 5.60 5.43 5.30 5.20 5.12
47.36 13.54 9.18 7.60 6.80 6.32 6.00 5.77 5.59 5.46 5.35 5.27
49.07 13.99 9.46 7.83 6.99 6.49 6.16 5.92 5.74 5.60 5.49 5.39
50.59 14.39 9.72 8.03 7.17 6.65 6.30 6.05 5.87 5.72 5.61 5.51
51.96 14.75 9.95 8.21 7.32 6.79 6.43 6.18 5.98 5.83 5.71 5.61
Refer to Example 11.4, in which you compared the average attention spans for students given three different “meal” treatments in the morning: no breakfast, a light breakfast, or a full breakfast. The
ANOVA F-test in Example 11.5 indicated a significant difference in the population means. Use Tukey’s method for paired comparisons to determine which of the three population means differ from the
others. 苶 For this example, there are k 3 treatment means, with s 兹MSE 2.436. Tukey’s method can be used, with each of the three samples containing nt 5 measurements and (n k) 12 degrees of
freedom. Consult Table 11 in Appendix I to find q.05(k, df ) q.05(3, 12) 3.77 and calculate the “yardstick” as s 2.436 v q.05(3, 12) 3.77 4.11 兹n苶t 兹5苶 Solution
The three treatment means are arranged in order from the smallest, 9.4, to the largest, 14.0, in Figure 11.8. The next step is to check the difference between every pair of means. The only difference
that exceeds v 4.11 is the difference between no breakfast and a light breakfast. These two treatments are thus declared significantly different. You cannot declare a difference between the other two
pairs of treatments. To indicate this fact visually, Figure 11.8 shows a line under those pairs of means that are not significantly different. FIGU R E 1 1 . 8
Ranked means for Example 11.7
● None 9.4
Full 13.0
Light 14.0
The results here may seem confusing. However, it usually helps to think of ranking the means and interpreting nonsignificant differences as our inability to distinctly rank those means underlined by
the same line. For this example, the light breakfast definitely ranked higher than no breakfast, but the full breakfast could not be ranked higher than no breakfast, or lower than the light breakfast.
The probability that we make at least one error among the three comparisons is at most a .05.
11.6 RANKING POPULATION MEANS
If zero is not in the interval, there is evidence of a difference between the two methods.
FIGU R E 1 1 . 9
MINITAB output for Example 11.7
Most computer programs provide an option to perform paired comparisons, including Tukey’s method. The MINITAB output in Figure 11.9 shows its form of Tukey’s test, which differs slightly from the
method we have presented. The three intervals that you see in the printout marked “Lower” and “Upper” represent the difference in the two sample means plus or minus the yardstick v. If the interval
contains the value 0, the two means are judged to be not significantly different. You can see that only means 1 and 2 (none versus light) show a significant difference. ● Tukey's 95% Simultaneous
Confidence Intervals All Pairwise Comparisons among Levels of Meal Individual confidence level = 97.94% Meal = 1 subtracted from: Meal Lower 2 0.493 3 -0.507
Center 4.600 3.600
Upper 8.707 7.707
-----+---------+---------+---------+---(-----------*-----------) (----------*-----------) -----+---------+---------+---------+----3.5 0.0 3.5 7.0
Meal = 2 subtracted from: Meal Lower 3 -5.107
Center -1.000
Upper 3.107
-----+---------+---------+---------+---(-----------*-----------) -----+---------+---------+---------+----3.5 0.0 3.5 7.0
As you study two more experimental designs in the next sections of this chapter, remember that, once you have found a factor to be significant, you should use Tukey’s method or another method of
paired comparisons to find out exactly where the differences lie!
BASIC TECHNIQUES 11.19 Suppose you wish to use Tukey’s method of paired comparisons to rank a set of population means. In addition to the analysis of variance assumptions, what other property must
the treatment means satisfy? 11.20 Consult Tables 11(a) and 11(b) in Appendix I and find the values of qa(k, df ) for these cases: a. a .05, k 5, df 7 b. a .05, k 3, df 10 c. a .01, k 4, df 8 d. a
.01, k 7, df 5 11.21 If the sample size for each treatment is nt and
if s 2 is based on 12 df, find v in these cases: a. a .05, k 4, nt 5 b. a .01, k 6, nt 8
11.22 An independent random sampling design was
used to compare the means of six treatments based on
samples of four observations per treatment. The pooled estimator of s 2 is 9.12, and the sample means follow:
x苶1 101.6 x苶2 98.4 x苶3 112.3 x苶5 104.2 x苶6 113.8 x苶4 92.9 a. Give the value of v that you would use to make pairwise comparisons of the treatment means for a .05. b. Rank the treatment means
using pairwise comparisons. APPLICATIONS 11.23 Swamp Sites, again Refer to Exercise 11.13 and data set EX1113. Rank the mean leaf growth for the four locations. Use a .01. 11.24 Calcium Refer to
Exercise 11.15 and data set
EX1115. The paired comparisons option in MINITAB generated the output provided here. What do these results tell you about the differences in the population
means? Does this confirm your conclusions in Exercise 11.15? MINITAB output for Exercise 11.24 Tukey's 95% Simultaneous Confidence Intervals All Pairwise Comparisons among Levels of Method Individual
confidence level = 97.94% Method = 1 subtracted from: Method 2 3 Method 2 3
Lower Center Upper -0.0014377 -0.0008400 -0.0002423 -0.0001777 0.0004200 0.0010177 --------+---------+---------+---------+ (-----*-----) (-----*-----) --------+---------+---------+---------+ -0.0010
0.0000 0.0010 0.0020
Method = 2 subtracted from: Method 3 Method 3
Lower Center Upper 0.0006623 0.0012600 0.0018577 --------+---------+---------+---------+ (-----*-----) --------+---------+---------+---------+ -0.0010 0.0000 0.0010 0.0020
11.25 Glucose Tolerance Physicians
depend on laboratory test results when managing medical problems such as diabetes or epilepsy. In a uniformity test for glucose tolerance, three different laboratories were each sent nt 5 identical
blood samples from a person who had drunk 50 milligrams (mg) of glucose dissolved in water. The laboratory results (in mg/dl) are listed here:
Lab 1
Lab 2
Lab 3
120.1 110.7 108.9 104.2 100.4
98.3 112.1 107.7 107.9 99.2
103.0 108.5 101.1 110.0 105.4
a. Do the data indicate a difference in the average readings for the three laboratories? b. Use Tukey’s method for paired comparisons to rank the three treatment means. Use a .05.
11.26 The Cost of Lumber, continued The analy-
sis of variance F-test in Exercise 11.17 (and data set EX1117) determined that there was indeed a difference in the average cost of lumber for the four states. The following information from Exercise
11.17 is given in the table: x苶1 242.2 x苶2 214.8 x苶3 231.6 x苶4 248.6
Sample Means
MSE Error df : ni : k:
41.25 16 5 4
Use Tukey’s method for paired comparisons to determine which means differ significantly from the others at the a .01 level. 11.27 GRE Scores The Graduate Record
Examination (GRE) scores were recorded for students admitted to three different graduate programs at a local university.
Graduate Program 1
a. Do these data provide sufficient evidence to indicate a difference in the mean GRE scores for applicants admitted to the three programs? b. Find a 95% confidence interval for the difference in mean
GRE scores for programs 1 and 2. c. If you find a significant difference in the average GRE scores for the three programs, use Tukey’s method for paired comparisons to determine which means differ
significantly from the others. Use a .05.
THE RANDOMIZED BLOCK DESIGN: A TWO-WAY CLASSIFICATION The completely randomized design introduced in Section 11.4 is a generalization of the two independent samples design presented in Section 10.4.
It is meant to be used when the experimental units are quite similar or homogeneous in their makeup and when there is only one factor—the treatment—that might influence the response. Any other
variation in the response is due to random variation or experimental error.
11.8 THE ANALYSIS OF VARIANCE FOR A RANDOMIZED BLOCK DESIGN
Sometimes it is clear to the researcher that the experimental units are not homogeneous. Experimental subjects or animals, agricultural fields, days of the week, and other experimental units often add
their own variability to the response. Although the researcher is not really interested in this source of variation, but rather in some treatment he chooses to apply, he may be able to increase the
information by isolating this source of variation using the randomized block design—a direct extension of the matched pairs or paired-difference design in Section 10.5. In a randomized block design,
the experimenter is interested in comparing k treatment means. The design uses blocks of k experimental units that are relatively similar, or homogeneous, with one unit within each block randomly
assigned to each treatment. If the randomized block design involves k treatments within each of b blocks, then the total number of observations in the experiment is n bk. A production supervisor
wants to compare the mean times for assembly-line operators to assemble an item using one of three methods: A, B, or C. Expecting variation in assembly times from operator to operator, the supervisor
uses a randomized block design to compare the three methods. Five assembly-line operators are selected to serve as blocks, and each is assigned to assemble the item three times, once for each of the
three methods. Since the sequence in which the operator uses the three methods may be important (fatigue or increasing dexterity may be factors affecting the response), each operator should be
assigned a random sequencing of the three methods. For example, operator 1 might be assigned to perform method C first, followed by A and B. Operator 2 might perform method A first, then C and B. To
compare four different teaching methods, a group of students might be divided into blocks of size 4, so that the groups are most nearly matched according to academic achievement. To compare the
average costs for three different cellular phone companies, costs might be compared at each of three usage levels: low, medium, and high. To compare the average yields for three species of fruit
trees when a variation in yield is expected because of the field in which the trees are planted, a researcher uses five fields. She divides each field into three plots on which the three species of fruit
trees are planted. Matching or blocking can take place in many different ways. Comparisons of treatments are often made within blocks of time, within blocks of people, or within similar external
environments. The purpose of blocking is to remove or isolate the block-to-block variability that might otherwise hide the effect of the treatments. You will find more examples of the use of the
randomized block design in the exercises at the end of the next section.
b blocks k treatments n bk
THE ANALYSIS OF VARIANCE FOR A RANDOMIZED BLOCK DESIGN The randomized block design identifies two factors: treatments and blocks—both of which affect the response.
Partitioning the Total Variation in the Experiment Let xij be the response when the ith treatment (i 1, 2, . . . , k) is applied in the j th block ( j 1, 2, . . . , b). The total variation in the n
bk observations is (Sxij)2 Total SS S(xij 苶x )2 Sx 2ij n
468 ❍
This is partitioned into three (rather than two) parts in such a way that Total SS SSB SST SSE where • • •
SSB (sum of squares for blocks) measures the variation among the block means. SST (sum of squares for treatments) measures the variation among the treatment means. SSE (sum of squares for error)
measures the variation of the differences among the treatment observations within blocks, which measures the experimental error.
The calculational formulas for the four sums of squares are similar in form to those you used for the completely randomized design in Section 11.5. Although you can simplify your work by using a
computer program to calculate these sums of squares, the formulas are given next. CALCULATING THE SUMS OF SQUARES FOR A RANDOMIZED BLOCK DESIGN, k TREATMENTS IN b BLOCKS G2 CM n where G S xij Total
of all n bk observations Total SS Sx 2ij CM (Sum of squares of all x-values) CM
Total SS SST SSB SSE
T 2i
SST Sb CM B 2j
SSB Sk CM SSE Total SS SST SSB with Ti Total of all observations receiving treatment i, i 1, 2, . . . , k Bj Total of all observations in block j, j 1, 2, . . . , b Each of the three sources of
variation, when divided by the appropriate degrees of freedom, provides an estimate of the variation in the experiment. Since Total SS involves n bk squared observations, its degrees of freedom are
df (n 1). Similarly, SST involves k squared totals, and its degrees of freedom are df (k 1), while SSB involves b squared totals and has (b 1) degrees of freedom. Finally, since the degrees of
freedom are additive, the remaining degrees of freedom associated with SSE can be shown algebraically to be df (b 1)(k 1). These three sources of variation and their respective degrees of freedom are
combined to form the mean squares as MS SS/df, and the total variation in the experiment is then displayed in an analysis of variance (or ANOVA) table as shown here:
11.8 THE ANALYSIS OF VARIANCE FOR A RANDOMIZED BLOCK DESIGN
TABLE 11.3
ANOVA TABLE FOR A RANDOMIZED BLOCK DESIGN, k TREATMENTS AND b BLOCKS
Degrees of freedom are additive.
Treatments Blocks Error
k1 b1 (b 1)(k 1)
MST SST/(k 1) MSB SSB/(b 1) MSE SSE/(b 1)(k 1)
n 1 bk 1
The cellular phone industry is involved in a fierce battle for customers, with each company devising its own complex pricing plan to lure customers. Since the cost of a cell phone minute varies
drastically depending on the number of minutes per month used by the customer, a consumer watchdog group decided to compare the average costs for four cellular phone companies using three different
usage levels as blocks. The monthly costs (in dollars) computed by the cell phone companies for peak-time callers at low (20 minutes per month), middle (150 minutes per month), and high (1000 minutes
per month) usage levels are given in Table 11.3. Construct the analysis of variance table for this experiment.
Monthly Phone Costs of Four Companies at Three Usage Levels Company Usage Level
Blocks contain experimental units that are relatively the same.
Low Middle High
B1 105 B2 276 B3 1246
T1 403
T2 426
T3 408
T4 390
G 1627
Solution The experiment is designed as a randomized block design with b 3 usage levels (blocks) and k 4 companies (treatments), so there are n bk 12 observations and G 1627. Then
G2 16272 CM 220,594.0833 n 12 Total SS (272 242 3002) CM 189,798.9167 4032 3902 SST CM 222.25 3 1052 2762 12462 SSB CM 189,335.1667 4 and by subtraction, SSE Total SS SST SSB 241.5 These four sources
of variation, their degrees of freedom, sums of squares, and mean squares are shown in the shaded area of the analysis of variance table, generated by MINITAB and given in Figure 11.10. You will find
instructions for generating this output in the section “My MINITAB ” at the end of this chapter.
470 ❍
FI GU R E 1 1 . 1 0
MINITAB output for Example 11.8
● Two-way ANOVA: Dollars versus Usage, Company Source Usage Company Error Total
DF 2 3 6 11
S = 6.344
SS 189335 222 242 189799
MS 94667.6 74.1 40.3
R-Sq = 99.87%
F 2351.99 1.84
P 0.000 0.240
R-Sq(adj) = 99.77%
Notice that the MINITAB ANOVA table shows two different F statistics and p-values. It will not surprise you to know that these statistics are used to test hypotheses concerning the equality of both
the treatment and block means.
Testing the Equality of the Treatment and Block Means The mean squares in the analysis of variance table can be used to test the null hypotheses H0 : No difference among the k treatment means or H0 :
No difference among the b block means versus the alternative hypothesis Ha : At least one of the means is different from at least one other using a theoretical argument similar to the one we used for
the completely randomized design. •
Remember that s 2 is the common variance for the observations in all bk block-treatment combinations. The quantity SSE MSE (b 1)(k 1)
is an unbiased estimate of s 2, whether or not H0 is true. The two mean squares, MST and MSB, estimate s 2 only if H0 is true and tend to be unusually large if H0 is false and either the treatment or
block means are different. The test statistics MST F and MSE
MSB F MSE
are used to test the equality of treatment and block means, respectively. Both statistics tend to be larger than usual if H0 is false. Hence, you can reject H0 for large values of F, using
right-tailed critical values of the F distribution with the appropriate degrees of freedom (see Table 6 in Appendix I) or computer-generated p-values to draw statistical conclusions about the
equality of the population means. As an alternative, you can use the F Probabilities applet to find either critical values of F or p-values.
11.8 THE ANALYSIS OF VARIANCE FOR A RANDOMIZED BLOCK DESIGN
TESTS FOR A RANDOMIZED BLOCK DESIGN For comparing treatment means: 1. Null hypothesis: H0 : The treatment means are equal 2. Alternative hypothesis: Ha : At least two of the treatment means differ 3.
Test statistic: F MST/MSE, where F is based on df1 (k 1) and df2 (b 1)(k 1) 4. Rejection region: Reject if F Fa, where Fa lies in the upper tail of the F distribution (see the figure), or when the
p-value a For comparing block means: 1. Null hypothesis: H0 : The block means are equal 2. Alternative hypothesis: Ha : At least two of the block means differ 3. Test statistic: F MSB/MSE, where F is
based on df1 (b 1) and df2 (b 1)(k 1) 4. Rejection region: Reject if F Fa, where Fa lies in the upper tail of the F distribution (see the figure), or when the p-value a f(F)
Do the data in Example 11.8 provide sufficient evidence to indicate a difference in the average monthly cell phone cost depending on the company the customer uses? Solution The cell phone companies
represent the treatments in this randomized block design, and the differences in their average monthly costs are of primary interest to the researcher. To test
H0 : No difference in the average cost among companies versus the alternative that the average cost is different for at least one of the four companies, you use the analysis of variance F statistic,
calculated as MST 74.1 F 1.84 MSE 40.3 and shown in the column marked “F” and the row marked “Company” in Figure 11.10. The exact p-value for this statistical test is also given in Figure 11.10 as
.240, which is too large to allow rejection of H0. The results do not show a significant difference in the treatment means. That is, there is insufficient evidence to indicate a difference in the
average monthly costs for the four companies.
472 ❍
The researcher in Example 11.9 was fairly certain in using a randomized block design that there would be a significant difference in the block means—that is, a significant difference in the average
monthly costs depending on the usage level. This suspicion is justified by looking at the test of equality of block means. Notice that the observed test statistic is F 2351.99 with P .000, showing a
highly significant difference, as expected, in the block means.
Identifying Differences in the Treatment and Block Means Once the overall F-test for equality of the treatment or block means has been performed, what more can you do to identify the nature of any
differences you have found? As in Section 11.5, you can use Tukey’s method of paired comparisons to determine which pairs of treatment or block means are significantly different from one another.
However, if the F-test does not indicate a significant difference in the means, there is no reason to use Tukey’s procedure. If you have a special interest in a particular pair of treatment or block
means, you can estimate the difference using a (1 a)100% confidence interval.† The formulas for these procedures, shown next, follow a pattern similar to the formulas for the completely randomized
design. Remember that MSE always provides an unbiased estimator of s 2 and uses information from the entire set of measurements. Hence, it is the best available estimator of s 2, regardless of what
test or estimation procedure you are using. You will again use Degrees of freedom for Tukey’s test and for confidence intervals are error df.
s 2 MSE
with df (b 1)(k 1)
to estimate s in comparing the treatment and block means. 2
COMPARING TREATMENT AND BLOCK MEANS Tukey’s yardstick for comparing block means: s v qa(b, df ) 兹苶k
Tukey’s yardstick for comparing treatment means: s v qa(k, df ) 兹苶b
(1 a)100% confidence interval for the difference in two block means: 1 1 (B 苶i 苶 Bj) ta/2 s 2 k k
where 苶 Bi is the average of all observations in block i (1 a)100% confidence interval for the difference in two treatment means: 1 1 苶i 苶 Tj ) ta/2 s 2 (T b b
where 苶 Ti is the average of all observations in treatment i.
You cannot construct a confidence interval for a single mean unless the blocks have been randomly selected from among the population of all blocks. The procedure for constructing intervals for single
means is beyond the scope of this book.
11.8 THE ANALYSIS OF VARIANCE FOR A RANDOMIZED BLOCK DESIGN
Note: The values qa(*, df ) from Table 11 in Appendix I, ta/2 from Table 4 in Appendix I, and s 2 MSE all depend on df (b 1)(k 1) degrees of freedom.
Identify the nature of any differences you found in the average monthly cell phone costs from Example 11.8. Since the F-test did not show any significant differences in the average costs for the four
companies, there is no reason to use Tukey’s method of paired comparisons. Suppose, however, that you are an executive for company B and your major competitor is company C. Can you claim a significant
difference in the two average costs? Using a 95% confidence interval, you can calculate
冪MSE 莦冢莦b冣
苶2 苶 T3) t.025 (T You cannot form a confidence interval or test an hypothesis about a single treatment mean in a randomized block design!
冢3 3冣 2.447冪40.3 莦冢3莦冣 426
6 12.68 so the difference between the two average costs is estimated as between $6.68 and $18.68. Since 0 is contained in the interval, you do not have evidence to indicate a significant difference in
your average costs. Sorry!
Some Cautionary Comments on Blocking Here are some important points to remember: •
A randomized block design should not be used when treatments and blocks both correspond to experimental factors of interest to the researcher. In designating one factor as a block, you may assume
that the effect of the treatment will be the same, regardless of which block you are using. If this is not the case, the two factors—blocks and treatments—are said to interact, and your analysis
could lead to incorrect conclusions regarding the relationship between the treatments and the response. When an interaction is suspected between two factors, you should analyze the data as a
factorial experiment, which is introduced in the next section. Remember that blocking may not always be beneficial. When SSB is removed from SSE, the number of degrees of freedom associated with SSE
gets smaller. For blocking to be beneficial, the information gained by isolating the block variation must outweigh the loss of degrees of freedom for error. Usually, though, if you suspect that the
experimental units are not homogeneous and you can group the units into blocks, it pays to use the randomized block design! Finally, remember that you cannot construct confidence intervals for
individual treatment means unless it is reasonable to assume that the b blocks have been randomly selected from a population of blocks. If you construct such an interval, the sample treatment mean
will be biased by the positive and negative effects that the blocks have on the response.
474 ❍
BASIC TECHNIQUES 11.28 A randomized block design was used to compare the means of three treatments within six blocks. Construct an ANOVA table showing the sources of variation and their respective
degrees of freedom. 11.29 Suppose that the analysis of variance calculations for Exercise 11.28 are SST 11.4, SSB 17.1, and Total SS 42.7. Complete the ANOVA table, showing all sums of squares, mean
squares, and pertinent F-values. 11.30 Do the data of Exercise 11.28 provide sufficient evidence to indicate differences among the treatment means? Test using a .05. 11.31 Refer to Exercise 11.28.
Find a 95% confi-
dence interval for the difference between a pair of treatment means A and B if x苶A 21.9 and 苶xB 24.2. 11.32 Do the data of Exercise 11.28 provide sufficient evidence to indicate that blocking
increased the amount of information in the experiment about the treatment means? Justify your answer. 11.33 The data that follow are observations
collected from an experiment that compared four treatments, A, B, C, and D, within each of three blocks, using a randomized block design.
Treatment Block
a. Do the data present sufficient evidence to indicate differences among the treatment means? Test using a .05. b. Do the data present sufficient evidence to indicate differences among the block
means? Test using a .05. c. Rank the four treatment means using Tukey’s method of paired comparisons with a .01. d. Find a 95% confidence interval for the difference in means for treatments A and B.
e. Does it appear that the use of a randomized block design for this experiment was justified? Explain.
11.34 The data shown here are observations collected from an experiment that compared three treatments, A, B, and C, within each of five blocks, using a randomized block design:
Block Treatment
A B C
2.1 3.4 3.0
2.6 3.8 3.6
1.9 3.6 3.2
3.2 4.1 3.9
2.7 3.9 3.9
12.5 18.8 17.6
MINITAB output for Exercise 11.34 Two-way ANOVA: Response versus Trts, Blocks Source Trts Blocks Error Total
DF 2 4 8 14
S = 0.1673
SS 4.476 1.796 0.224 6.496
MS 2.238 0.449 0.028
R-Sq = 96.55%
F 79.93 16.04
P 0.000 0.001
R-Sq(adj) = 93.97%
Use the MINITAB ouput to analyze the experiment. Investigate possible differences in the block and/or treatment means and, if any differences exist, use an appropriate method to specifically identify
where the differences lie. Has blocking been effective in this experiment? Present your results in the form of a report. 11.35 The partially completed ANOVA table for a randomized block design is
presented here: Source
Treatments Blocks Error
a. b. c. d. e.
14.2 18.9 41.9
How many blocks are involved in the design? How many observations are in each treatment total? How many observations are in each block total? Fill in the blanks in the ANOVA table. Do the data
present sufficient evidence to indicate differences among the treatment means? Test using a .05. f. Do the data present sufficient evidence to indicate differences among the block means? Test using a
11.8 THE ANALYSIS OF VARIANCE FOR A RANDOMIZED BLOCK DESIGN
APPLICATIONS 11.36 Gas Mileage A study was conducted
to compare automobile gasoline mileage for three formulations of gasoline. A was a non-leaded 87 octane formulation, B was a non-leaded 91 octane formulation, and C was a non-leaded 87 octane
formulation with 15% ethanol. Four automobiles, all of the same make and model, were used in the experiment, and each formulation was tested in each automobile. Using each formulation in the same
automobile has the effect of eliminating (blocking out) automobile-toautomobile variability. The data (in miles per gallon) follow. Automobile
A B C
Illustration for Exercise 11.37
25.7 27.2 26.1
27.0 28.1 27.5
27.3 27.9 26.8
26.1 27.7 27.8
a. Do the data provide sufficient evidence to indicate a difference in mean mileage per gallon for the three gasoline formulations? b. Is there evidence of a difference in mean mileage for the four
automobiles? c. Suppose that prior to looking at the data, you had decided to compare the mean mileage per gallon for formulations A and B. Find a 90% confidence interval for this difference. d. Use
an appropriate method to identify the pairwise differences, if any, in the average mileages for the three formulations. 11.37 Water Resistance in Textiles An experiment was conducted to compare the
EX1137 effects of four different chemicals, A, B, C, and D, in producing water resistance in textiles. A strip of material, randomly selected from a bolt, was cut into four pieces, and the four
pieces were randomly assigned to receive one of the four chemicals, A, B, C, or D. This process was replicated three times, thus producing a randomized block design. The design, with
moistureresistance measurements, is as shown in the figure (low readings indicate low moisture penetration). Analyze the experiment using a method appropriate for this randomized block design.
Identify the blocks and treatments, and investigate any possible differences in treatment means. If any differences exist, use an appropriate method to specifically identify where the differences lie.
What are the practical implications for the chemical producers? Has blocking been effective in this experiment? Present your results in the form of a report.
Blocks (bolt samples) 1 C 9.9 A 10.1 B 11.4 D 12.1
2 D 13.4 B 12.9 A 12.2 C 12.3
3 B 12.7 D 12.9 C 11.4 A 11.9
11.38 Glare in Rearview Mirrors An experiment was conducted to compare the glare characteristics of four types of automobile rearview mirrors. Forty drivers were randomly selected to participate in
the experiment. Each driver was exposed to the glare produced by a headlight located 30 feet behind the rear window of the experimental automobile. The driver then rated the glare produced by the
rearview mirror on a scale of 1 (low) to 10 (high). Each of the four mirrors was tested by each driver; the mirrors were assigned to a driver in random order. An analysis of variance of the data
produced this ANOVA table: Source Mirrors Drivers Error Total
46.98 8.42 638.61
a. Fill in the blanks in the ANOVA table. b. Do the data present sufficient evidence to indicate differences in the mean glare ratings of the four rearview mirrors? Calculate the approximate p-value
and use it to make your decision. c. Do the data present sufficient evidence to indicate that the level of glare perceived by the drivers varied from driver to driver? Use the p-value approach. d.
Based on the results of part b, what are the practical implications of this experiment for the manufacturers of the rearview mirrors? 11.39 Slash Pine Seedings An experiment was conducted to
determine the effects of three methods of soil preparation on the first-year growth of slash pine seedlings. Four locations (state forest lands) were selected, and each location was divided into three
plots. Since it was felt that soil fertility within a location was more homogeneous than between locations, a randomized block design was employed using locations
476 ❍
as blocks. The methods of soil preparation were A (no preparation), B (light fertilization), and C (burning). Each soil preparation was randomly applied to a plot within each location. On each plot,
the same number of seedlings were planted and the average first-year growth of the seedlings was recorded on each plot. Use the MINITAB printout to answer the questions. Location Soil Preparation
A B C
a. Conduct an analysis of variance. Do the data provide evidence to indicate a difference in the mean growths for the three soil preparations? b. Is there evidence to indicate a difference in mean
rates of growth for the four locations? c. Use Tukey’s method of paired comparisons to rank the mean growths for the three soil preparations. Use a .01. d. Use a 95% confidence interval to estimate
the difference in mean growths for methods A and B. MINITAB output for Exercise 11.39 Two-way ANOVA: Growth versus Soil Prep, Location Source Soil Prep Location Error Total S = 1.374
Soil Prep 1 2 3
Location 1 2 3 4
DF 2 3 6 11
SS 38.000 61.667 11.333 111.000
R-Sq = 89.79%
Mean 12.5 16.0 12.0
Mean 12.0000 15.0000 16.3333 10.6667
MS 19.0000 20.5556 1.8889
F 10.06 10.88
P 0.012 0.008
R-Sq(adj) = 81.28%
Individual 95% CIs For Mean Based on Pooled StDev ---------+---------+---------+---------+-(-------*-------) (-------*-------) (-------*-------) ---------+---------+---------+---------+12.0 14.0 16.0
18.0 Individual 95% CIs For Mean Based on Pooled StDev ------+---------+---------+---------+----(-------*-------) (-------*-------) (------*-------) (-------*------)
-----+---------+---------+---------+----10.0 12.5 15.0 17.5
results are given in the table. Use the MINITAB printout to answer the questions. Dogs 1
A 1342 B 1608 C 1881
C 1698 B 1387 A 1140
B 1296 A 1029 C 1549
A 1150 C 1579 B 1319
a. How many degrees of freedom are associated with SSE? b. Do the data present sufficient evidence to indicate a difference in the mean uptakes of calcium for the three levels of digitalis? c. Use
Tukey’s method of paired comparisons with a .01 to rank the mean calcium uptakes for the three levels of digitalis. d. Do the data indicate a difference in the mean uptakes of calcium for the four
heart muscles? e. Use Tukey’s method of paired comparisons with a .01 to rank the mean calcium uptakes for the heart muscles of the four dogs used in the experiment. Are these results of any
practical value to the researcher? f. Give the standard error of the difference between the mean calcium uptakes for two levels of digitalis. g. Find a 95% confidence interval for the difference in
mean responses between treatments A and B. MINITAB output for Exercise 11.40 Two-way ANOVA: Uptake versus Digitalis, Dog Source Digitalis Dog Error Total S = 31.86
Digitalis 1 2 3
DF 2 3 6 11
R-Sq = 99.13%
Mean 1165.25 1402.50 1676.75
11.40 Digitalis and Calcium Uptake A
study was conducted to compare the effects of three levels of digitalis on the levels of calcium in the heart muscles of dogs. Because general level of calcium uptake varies from one animal to
another, the tissue for a heart muscle was regarded as a block, and comparisons of the three digitalis levels (treatments) were made within a given animal. The calcium uptakes for the three levels of
digitalis, A, B, and C, were compared based on the heart muscles of four dogs and the
Dog 1 2 3 4
SS 542177 173415 6090 703682
Mean 1610.33 1408.33 1291.33 1349.33
MS 262089 57805 1015
F 258.24 56.96
P 0.000 0.000
R-Sq(adj) = 98.41%
Individual 95% CIs For Mean Based on Pooled StDev -----+---------+---------+---------+---(--*-) (--*-) (--*-) -----+---------+---------+---------+---1200 1350 1500 1650 Individual 95% CIs For Mean
Based on Pooled StDev ------+---------+---------+---------+--(---*---) (--*---) (---*--) (--*---) ------+---------+---------+---------+--1320 1440 1560 1680
11.41 Bidding on Construction Jobs A
building contractor employs three construction engineers, A, B, and C, to estimate and bid on jobs.
11.8 THE ANALYSIS OF VARIANCE FOR A RANDOMIZED BLOCK DESIGN
To determine whether one tends to be a more conservative (or liberal) estimator than the others, the contractor selects four projected construction jobs and has each estimator independently estimate
the cost (in dollars per square foot) of each job. The data are shown in the table: Construction Job Estimator A B C Total
35.10 37.45 36.30
34.50 34.60 35.10
29.25 33.10 32.45
31.60 34.40 32.90
130.45 139.55 136.75
Analyze the experiment using the appropriate methods. Identify the blocks and treatments, and investigate any possible differences in treatment means. If any differences exist, use an appropriate
method to specifically identify where the differences lie. Has blocking been effective in this experiment? What are the practical implications of the experiment? Present your results in the form of a
report. 11.42 “In Good Hands” The cost of auto-
mobile insurance varies by location, ages of the drivers, and type of coverage. The following are estimates for the annual 2006–2007 premium for a single male, licensed for 6–8 years, who drives a
Honda Accord 12,600 to 15,000 miles per year and has no violations or accidents. These estimates are provided by the California Department of Insurance for the year 2006–2007 on its website (http://
www.insurance .ca.gov).2
Insurance Company Location
21st Century
Riverside $1870 San Bernardino 2064 Hollywood 3542 Long Beach 2228
$2250 2286 3773 2617
$2154 2316 3235 2681
Fireman’s State Fund Farm $2324 2005 3360 3279
$3053 3151 3883 3396
Source: www.insurance.ca.gov
a. What type of design was used in collecting these data? b. Is there sufficient evidence to indicate that insurance premiums for the same type of coverage differs from company to company? c. Is
there sufficient evidence to indicate that insurance premiums vary from location to location? d. Use Tukey’s procedure to determine which insurance companies listed here differ from others in the
premiums they charge for this typical client. Use a .05. e. Summarize your findings.
11.43 Warehouse Shopping Warehouse
stores such as Costco and Sam’s Club are the shopping choice of many Americans because of the low cost associated with bulk shopping. When a new warehouse grocery store called WinCo Foods was opened
in Moreno Valley, California, an advertising mailer claimed that they were the area’s “low price leader.”3 They compared their prices with those of four other grocery stores for a number of items
purchased on the same day. A partial list of the items and their prices is given in the following table.
Stores Items
WinCo Albertsons Ralphs
Salad mix, 1 lb. bag Hillshire Farm® Smoked Sausage, 16 oz. Kellogg’s Raisin Bran®, 25.5 oz. Kraft® Philadelphia® Cream Cheese, 8 oz. Kraft® Ranch Dressing, 16 oz. Langers® Apple Juice, 128 oz. Dial®
Bar Soap, Gold, 8–4.5 oz. Jif® Peanut Butter, Creamy, 28 oz.
Stater Food-4Bros Less
a. What are the blocks and treatments in this experiment? b. Do the data provide evidence to indicate that there are significant differences in prices from store to store? Support your answer
statistically using the ANOVA printout that follows. c. Are there significant differences from block to block? Was blocking effective? d. The advertisement includes the following statement: “Though
this list is not intended to represent a typical weekly grocery order or a random list of grocery items, WinCo continues to be the area’s low price leader.” How might this statement affect the
reliability of your conclusions in part b? Two-way ANOVA: Price versus Item, Store Source Item Store Error Total
DF 7 4 28 39
SS 38.2360 16.6644 7.8862 62.7866
MS 5.46228 4.16610 0.28165
S = 0.5307 R-Sq = 87.44%
F 19.39 14.79
P 0.000 0.000
R-Sq(adj) = 82.51%
478 ❍
11.44 Warehouse Shopping, continued Refer to Exercise 11.43. The printout that follows provides the average costs of the selected items for the k 5 stores. Store
Albertsons Food-4-Less Ralphs Stater Bros WinCo
4.04125 2.74125 3.30375 3.05500 2.08000
a. What is the appropriate value of q.05(k, df ) for testing for differences among stores?
MSE b. What is the value of v q.05(k, df ) ? b c. Use Tukey’s pairwise comparison test among stores used to determine which stores differ significantly in average prices of the selected items.
THE a b FACTORIAL EXPERIMENT: A TWO-WAY CLASSIFICATION
Suppose the manager of a manufacturing plant suspects that the output (in number of units produced per shift) of a production line depends on two factors: • •
Which of two supervisors is in charge of the line Which of three shifts—day, swing, or night—is being measured
That is, the manager is interested in two factors: “supervisor” at two levels and “shift” at three levels. Can you use a randomized block design, designating one of the two factors as a block factor?
In order to do this, you would need to assume that the effect of the two supervisors is the same, regardless of which shift you are considering. This may not be the case; maybe the first supervisor is
most effective in the morning, and the second is more effective at night. You cannot generalize and say that one supervisor is better than the other or that the output of one particular shift is
best. You need to investigate not only the average output for the two supervisors and the average output for the three shifts, but also the interaction or relationship between the two factors.
Consider two different examples that show the effect of interaction on the responses in this situation. EXAMPLE
TABLE 11.4
Suppose that the two supervisors are each observed on three randomly selected days for each of the three different shifts. The average outputs for the three shifts are shown in Table 11.4 for each of
the supervisors. Look at the relationship between the two factors in the line chart for these means, shown in Figure 11.11. Notice that supervisor 2 always produces a higher output, regardless of the
shift. The two factors behave independently; that is, the output is always about 100 units higher for supervisor 2, no matter which shift you look at.
Average Outputs for Two Supervisors on Three Shifts Shift Supervisor
11.9 THE a b FACTORIAL EXPERIMENT: A TWO-WAY CLASSIFICATION
FIGU R E 1 1 . 1 1
Interaction plot for means in Table 11.4
Interaction Plot (data means) for Response 650
Supervisor 1 2
600 575 550 525 500 Day
Swing Shift
Now consider another set of data for the same situation, shown in Table 11.5. There is a definite difference in the results, depending on which shift you look at, and the interaction can be seen in
the crossed lines of the chart in Figure 11.12. TABLE 11.5
Average Outputs for Two Supervisors on Three Shifts Shift
Interaction plot for means in Table 11.5
Interaction Plot (data means) for Response Supervisor 1 2
600 Mean
FIGU R E 1 1 . 1 2
450 Day
Swing Shift
480 ❍
When the effect of one factor on the response changes, depending on the level at which the other factor is measured, the two factors are said to interact.
This situation is an example of a factorial experiment in which there are a total of 2 3 possible combinations of the levels for the two factors. These 2 3 6 combinations form the treatments, and the
experiment is called a 2 3 factorial experiment. This type of experiment can actually be used to investigate the effects of three or more factors on a response and to explore the interactions between
the factors. However, we confine our discussion to two factors and their interaction. When you compare treatment means for a factorial experiment (or for any other experiment), you will need more than
one observation per treatment. For example, if you obtain two observations for each of the factor combinations of a complete factorial experiment, you have two replications of the experiment. In the
next section on the analysis of variance for a factorial experiment, you can assume that each treatment or combination of factor levels is replicated the same number of times r.
THE ANALYSIS OF VARIANCE FOR AN a b FACTORIAL EXPERIMENT An analysis of variance for a two-factor factorial experiment replicated r times follows the same pattern as the previous designs. If the
letters A and B are used to identify the two factors, the total variation in the experiment Total SS S(x 苶x )2 Sx2 CM is partitioned into four parts in such a way that Total SS SSA SSB SS(AB) SSE
where • • • •
SSA (sum of squares for factor A) measures the variation among the factor A means. SSB (sum of squares for factor B) measures the variation among the factor B means. SS(AB) (sum of squares for
interaction) measures the variation among the different combinations of factor levels. SSE (sum of squares for error) measures the variation of the differences among the observations within each
combination of factor levels—the experimental error.
Sums of squares SSA and SSB are often called the main effect sums of squares, to distinguish them from the interaction sum of squares. Although you can simplify your work by using a computer program
to calculate these sums of squares, the calculational formulas are given next. You can assume that there are: • • • •
a levels of factor A b levels of factor B r replications of each of the ab factor combinations A total of n abr observations
11.10 THE ANALYSIS OF VARIANCE FOR AN a b FACTORIAL EXPERIMENT
Total SS Sx 2 CM
A2 SSA Si CM br
SSB Sar CM
SS(AB) Sr CM SSA SSB where G Sum of all n abr observations Ai Total of all observations at the ith level of factor A, i 1, 2, . . . , a Bj Total of all observations at the jth level of factor B, j 1,
2, . . . , b (AB)ij Total of the r observations at the ith level of factor A and the jth level of factor B
Each of the five sources of variation, when divided by the appropriate degrees of freedom, provides an estimate of the variation in the experiment. These estimates are called mean squares—MS SS/df—and
are displayed along with their respective sums of squares and df in the analysis of variance (or ANOVA) table.
ANOVA TABLE FOR r REPLICATIONS OF A TWO-FACTOR FACTORIAL EXPERIMENT: FACTOR A AT a LEVELS AND FACTOR B AT b LEVELS Source
SSA MSA a1
SSB MSB b1
(a 1)(b 1)
SS(AB) MS(AB) (a 1) (b 1)
MS(AB) MSE
ab (r 1)
SSE MSE ab(r 1)
abr 1
Total SS
Finally, the equality of means for various levels of the factor combinations (the interaction effect) and for the levels of both main effects, A and B, can be tested using the ANOVA F-tests, as shown
482 ❍
For interaction:
1. Null hypothesis: H0 : Factors A and B do not interact 2. Alternative hypothesis: Ha : Factors A and B interact 3. Test statistic: F MS(AB)/MSE, where F is based on df1 (a 1)(b 1) and df2 ab(r 1)
4. Rejection region: Reject H0 when F Fa, where Fa lies in the upper tail of the F distribution (see the figure), or when the p-value a •
For main effects, factor A:
1. Null hypothesis: H0 : There are no differences among the factor A means 2. Alternative hypothesis: Ha : At least two of the factor a means differ 3. Test statistic: F MSA/MSE, where F is based on
df1 (a 1) and df2 ab(r 1) 4. Rejection region: Reject H0 when F Fa (see the figure) or when the p-value a •
For main effects, factor B:
1. Null hypothesis: H0 : There are no differences among the factor B means 2. Alternative hypothesis: Ha : At least two of the factor B means differ 3. Test statistic: F MSB/MSE, where F is based on
df1 (b 1) and df2 ab(r 1) 4. Rejection region: Reject H0 when F Fa (see the figure) or when the p-value a
Table 11.6 shows the original data used to generate Table 11.5 in Example 11.11. That is, the two supervisors were each observed on three randomly selected days for each of the three different
shifts, and the production outputs were recorded. Analyze these data using the appropriate analysis of variance procedure.
11.10 THE ANALYSIS OF VARIANCE FOR AN a b FACTORIAL EXPERIMENT
TABLE 11.6
Outputs for Two Supervisors on Three Shifts Shift Supervisor
The computer output in Figure 11.13 was generated using the two-way analysis of variance procedure in the MINITAB software package. You can verify the quantities in the ANOVA table using the
calculational formulas presented earlier, or you may choose just to use the results and interpret their meaning. Solution
FIGU R E 1 1 . 1 3
MINITAB output for Example 11.12
● Two-way ANOVA: Output versus Supervisor, Shift Source Supervisor Shift Interaction Error Total S = 26.83
Supervisor 1 2
Shift Day Swing Night
If the interaction is not significant, test each of the factors individually.
DF 1 2 2 12 17
SS 19208 247 81127 8640 109222
R-Sq = 92.09%
Mean 516.667 582.000
Mean 544.5 550.0 553.5
MS 19208.0 123.5 40563.5 720.0
F 26.68 0.17 56.34
P 0.000 0.844 0.000
R-Sq(adj) = 88.79%
Individual 95% CIs For Mean Based on Pooled StDev ----+---------+---------+---------+----(-------*------) (-------*-------) ----+---------+---------+---------+----510 540 570 600 Individual 95% CIs
For Mean Based on Pooled StDev ---+---------+---------+---------+-------(---------------*---------------) (---------------*---------------) (---------------*---------------)
---+---------+---------+---------+-------525 540 555 570
At this point, you have undoubtedly discovered the familiar pattern in testing the significance of the various experimental factors with the F statistic and its p-value. The small p-value (P .000) in
the row marked “Supervisor” means that there is sufficient evidence to declare a difference in the mean levels for factor A—that is, a difference in mean outputs per supervisor. This fact is visually
apparent in the nonoverlapping confidence intervals for the supervisor means shown in the printout. But this is overshadowed by the fact that there is strong evidence (P .000) of an interaction
between factors A and B. This means that the average output for a given shift depends on the supervisor on duty. You saw this effect clearly in Figure 11.11. The three largest mean outputs occur when
supervisor 1 is on the day shift and when supervisor 2 is on either the swing or night shift. As a practical result, the manager should schedule supervisor 1 for the day shift and supervisor 2 for
the night shift.
484 ❍
If the interaction effect is significant, the differences in the treatment means can be further studied, not by comparing the means for factor A or B individually but rather by looking at comparisons
for the 2 3 (AB) factor-level combinations. If the interaction effect is not significant, then the significance of the main effect means should be investigated, first with the overall F-test and next
with Tukey’s method for paired comparisons and/or specific confidence intervals. Remember that these analysis of variance procedures always use s 2 MSE as the best estimator of s 2 with degrees of
freedom equal to df ab(r 1). For example, using Tukey’s yardstick to compare the average outputs for the two supervisors on each of the three shifts, you could calculate 苶 s 0 兹72 v q.05(6, 12)
4.75 73.59 兹3苶兹苶r
Since all three pairs of means—602 and 487 on the day shift, 498 and 602 on the swing shift, and 450 and 657 on the night shift—differ by more than v, our practical conclusions have been confirmed
BASIC TECHNIQUES 11.45 Suppose you were to conduct a two-factor factorial experiment, factor A at four levels and factor B at five levels, with three replications per treatment. a. How many treatments
are involved in the experiment? b. How many observations are involved? c. List the sources of variation and their respective degrees of freedom. 11.46 The analysis of variance table for a 3 4 fac-
torial experiment, with factor A at three levels and factor B at four levels, and with two observations per treatment, is shown here: Source
5.3 9.1 24.5
a. Fill in the missing items in the table. b. Do the data provide sufficient evidence to indicate that factors A and B interact? Test using a .05. What are the practical implications of your answer?
c. Do the data provide sufficient evidence to indicate that factors A and B affect the response variable x? Explain.
11.47 Refer to Exercise 11.46. The means of two of the factor-level combinations—say, A1B1 and A2B1— are x苶1 8.3 and 苶x2 6.3, respectively. Find a 95% confidence interval for the difference between
the two corresponding population means. 11.48 The table gives data for a 3 3 facto-
rial experiment, with two replications per treatment:
Levels of Factor A Levels of Factor B 1 2 3
5, 7 8, 7 14, 11
9, 7 12, 13 8, 9
4, 6 7, 10 12, 15
a. Perform an analysis of variance for the data, and present the results in an analysis of variance table. b. What do we mean when we say that factors A and B interact? c. Do the data provide
sufficient evidence to indicate interaction between factors A and B? Test using a .05. d. Find the approximate p-value for the test in part c. e. What are the practical implications of your results
in part c? Explain your results using a line graph similar to the one in Figure 11.11. 11.49 2 2 Factorial The table gives data for a 2 2 factorial experiment, with four replications per treatment.
11.10 THE ANALYSIS OF VARIANCE FOR AN a b FACTORIAL EXPERIMENT
2.1, 2.7, 2.4, 2.5
3.7, 3.2, 3.0, 3.5
3.1, 3.6, 3.4, 3.9
2.9, 2.7, 2.2, 2.5
11.50 Demand for Diamonds A chain of
jewelry stores conducted an experiment to investigate the effect of price and location on the demand for its diamonds. Six small-town stores were selected for the study, as well as six stores located
in large suburban malls. Two stores in each of these locations were assigned to each of three item percentage markups. The percentage gain (or loss) in sales for each store was recorded at the end of
1 month. The data are shown in the accompanying table.
a. The accompanying graph was generated by MINITAB. Verify that the four points that connect the two lines are the means of the four observations within each factor-level combination. What does the
graph tell you about the interaction between factors A and B? MINITAB interaction plot for Exercise 11.49
Markup Location
Small towns
Suburban malls
Interaction Plot (data means) for Response Factor A 1 2
2 Factor B
b. Use the MINITAB output to test for a significant interaction between A and B. Does this confirm your conclusions in part a? MINITAB output for Exercise 11.49 Two-way ANOVA: Response versus Factor A,
Factor B Source Factor A Factor B Interaction Error Total S = 0.3007
DF 1 1 1 12 15
SS 0.0000 0.0900 3.4225 1.0850 4.5975
R-Sq = 76.40%
Levels of Factor A Levels of Factor B
MS 0.00000 0.09000 3.42250 0.09042
F 0.00 1.00 37.85
P 1.000 0.338 0.000
R-Sq(adj) = 70.50%
c. Considering your results in part b, how can you explain the fact that neither of the main effects is significant? d. If a significant interaction is found, is it necessary to test for significant
main effect differences? Explain. e. Write a short paragraph summarizing the results of this experiment.
a. Do the data provide sufficient evidence to indicate an interaction between markup and location? Test using a .05. b. What are the practical implications of your test in part a? c. Draw a line
graph similar to Figure 11.11 to help visualize the results of this experiment. Summarize the results. d. Find a 95% confidence interval for the difference in mean change in sales for stores in small
towns versus those in suburban malls if the stores are using price markup 3. 11.51 Terrain Visualization A study was conducted to determine the effect of two factors on terrain visualization training
for soldiers.4 During the training programs, participants viewed contour maps of various terrains and then were permitted to view a computer reconstruction of the terrain as it would appear from a
specified angle. The two factors investigated in the experiment were the participants’ spatial abilities (abilities to visualize in three dimensions) and the viewing procedures (active or passive).
Active participation permitted participants to view the computer-generated reconstructions of the terrain from any and all angles. Passive participation gave the participants a set of preselected
reconstructions of the terrain. Participants were tested according to spatial ability, and from the test scores 20 were categorized as possessing high spatial ability, 20 medium, and 20 low. Then 10
participants within each of these groups were assigned to each of the two training modes, active or passive. The
486 ❍
accompanying tables are the ANOVA table computed the researchers and the table of the treatment means. Source
Main effects: Training condition Ability Interaction: Training condition Ability Within cells
MINITAB output for Exercise 11.52 Two-way ANOVA: Cost versus City, Distance
Error df
Source City Distance Interaction Error Total
103.7009 760.5889
3.66 26.87
.0610 .0005
S = 5.737
124.9905 28.3015
DF 2 3 6 12 23
SS 201.33 1873.33 303.67 395.00 2773.33
R-Sq = 85.76%
Distance 1 2 3 4
MS 100.667 624.444 50.611 32.917
F 3.06 18.97 1.54
P 0.084 0.000 0.247
R-Sq(adj) = 72.70%
Individual 95% CIs For Mean Based on Pooled StDev ------+---------+---------+---------+--(-----+------) (-----+-----) (------+-----) (-----+-----) ------+---------+---------+---------+--10 20 30 40
Mean 32.1667 19.1667 11.8333 9.5000
Training Condition Spatial Ability
High Medium Low
17.895 5.031 1.728
9.508 5.648 1.610
MINITAB plots for Exercise 11.52 Interaction Plot (data means) for Cost 45
Note: Maximum score 36.
City Chicago Houston NY
40 35 30 Mean
a. Explain how the authors arrived at the degrees of freedom shown in the ANOVA table. b. Are the F-values correct? c. Interpret the test results. What are their practical implications? d. Use Table
6 in Appendix I to approximate the pvalues for the F statistics shown in the ANOVA table.
Source: H.F. Barsam and Z.M. Simutis, “Computer-Based Graphics for Terrain Visualization Training,” Human Factors, no. 26, 1984. Copyright 1984 by the Human Factors Society, Inc. Reproduced by
Main Effects Plot (data means) for Cost City
11.52 The Cost of Flying In an attempt to determine what factors affect airfares, a researcher recorded a weighted average of the costs per mile for two airports in each of three major U.S. cities
for each of four different travel distances.5 The results are shown in the table.
New York
300 miles 301–750 miles 751–1500 miles 1500 miles
40, 48 19, 26 10, 14 9, 10
20, 26 15, 17 10, 13 8, 11
19, 40 14, 24 9, 15 7, 12
Use the MINITAB output to analyze the experiment with the appropriate method. Identify the two factors, and investigate any possible effect due to their interaction or the main effects. What are the
practical implications of this experiment? Explain your conclusions in the form of a report.
11.53 Fourth-Grade Test Scores A local school board was interested in comparing test scores on a standarized reading test for fourth-grade students in their district. They selected a random sample of
five male and five female fourth grade students at each of four different elementary schools in the district and recorded the test scores. The results are shown in the table below.
11.11 REVISITING THE ANALYSIS OF VARIANCE ASSUMPTIONS
School 1
School 2
School 3
School 4
a. What type of experimental design is this? What are the experimental units? What are the factors and levels of interest to the school board? b. Perform the appropriate analysis of variance for this
experiment. c. Do the data indicate that effect of gender on the average test score is different depending on the student’s school? Test the appropriate hypothesis using a .05. d. Plot the average
scores using an interaction plot. How would you describe the effect of gender and school on the average test scores? e. Do the data indicate that either of the main effects is significant? If the main
effect is significant, use Tukey’s method of paired comparisons to examine the differences in detail. Use a .01. 11.54 Management Training An experiment was conducted to investigate the effect of
management training on the decision-making abilities of supervisors in a large corporation. Sixteen supervisors were selected, and eight were randomly chosen to receive managerial training. Four
trained and four untrained supervisors were then randomly selected to function in a situation in which a standard problem arose. The other eight supervisors were presented with an emergency situation
in which standard procedures could not be used. The response was a management behavior rating for each supervisor as assessed by a rating scheme devised by the experimenter.
a. What are the experimental units in this experiment? b. What are the two factors considered in the experiment? c. What are the levels of each factor? d. How many treatments are there in the
experiment? e. What type of experimental design has been used? 11.55 Management Training, continued
Refer to Exercise 11.54. The data for this experiment are shown in the table.
Training (A) Situation (B)
Not Trained
a. Construct the ANOVA table for this experiment. b. Is there a significant interaction between the presence or absence of training and the type of decision-making situation? Test at the 5% level of
significance. c. Do the data indicate a significant difference in behavior ratings for the two types of situations at the 5% level of significance? d. Do behavior ratings differ significantly for the two
types of training categories at the 5% level of significance. e. Plot the average scores using an interaction plot. How would you describe the effect of training and emergency situation on the
decision-making abilities of the supervisors?
REVISITING THE ANALYSIS OF VARIANCE ASSUMPTIONS In Section 11.3, you learned that the assumptions and test procedures for the analysis of variance are similar to those required for the t and F-tests
in Chapter 10—namely, that observations within a treatment group must be normally distributed with common variance s 2. You also learned that the analysis of variance procedures are fairly
488 ❍
robust when the sample sizes are equal and the data are fairly mound-shaped. If this is the case, one way to protect yourself from inaccurate conclusions is to try when possible to select samples of
equal sizes! There are some quick and simple ways to check the data for violation of assumptions. Look first at the type of response variable you are measuring. You might immediately see a problem
with either the normality or common variance assumption. It may be that the data you have collected cannot be measured quantitatively. For example, many responses, such as product preferences, can be
ranked only as “A is better than B” or “C is the least preferable.” Data that are qualitative cannot have a normal distribution. If the response variable is discrete and can assume only three values—
say, 0, 1, or 2—then it is again unreasonable to assume that the response variable is normally distributed. Suppose that the response variable is binomial—say, the proportion p of people who favor a
particular type of investment. Although binomial data can be approximately mound-shaped under certain conditions, they violate the equal variance assumption. The variance of a sample proportion is pq
p(1 p) s 2 n n so that the variance changes depending on the value of p. As the treatment means change, the value of p changes and so does the variance s 2. A similar situation occurs when the
response variable is a Poisson random variable—say, the number of industrial accidents per month in a manufacturing plant. Since the variance of a Poisson random variable is s 2 m, the variance
changes exactly as the treatment mean changes. If you cannot see any flagrant violations in the type of data being measured, look at the range of the data within each treatment group. If these ranges
are nearly the same, then the common variance assumption is probably reasonable. To check for normality, you might make a quick dotplot or stem and leaf plot for a particular treatment group.
However, quite often you do not have enough measurements to obtain a reasonable plot. If you are using a computer program to analyze your experiment, there are some valuable diagnostic tools you can
use. These procedures are too complicated to be performed using hand calculations, but they are easy to use when the computer does all the work!
Residual Plots In the analysis of variance, the total variation in the data is partitioned into several parts, depending on the factors identified as important to the researcher. Once the effects of
these sources of variation have been removed, the “leftover” variability in each observation is called the residual for that data point. These residuals represent experimental error, the basic
variability in the experiment, and should have an approximately normal distribution with a mean of 0 and the same variation for each treatment group. Most computer packages will provide options for
plotting these residuals: •
The normal probability plot of residuals is a graph that plots the residuals for each observation against the expected value of that residual had it come from a normal distribution. If the residuals
are approximately normal, the plot will closely resemble a straight line, sloping upward to the right.
11.11 REVISITING THE ANALYSIS OF VARIANCE ASSUMPTIONS
FIGU R E 1 1 . 1 4
The data from Example 11.4 involving the attention spans of three groups of elementary students were analyzed using MINITAB. The graphs in Figure 11.14, generated by MINITAB, are the normal
probability plot and the residuals versus fit plot for this experiment. Look at the straight-line pattern in the normal probability plot, which indicates a normal distribution in the residuals. In the
other plot, the residuals are plotted against the estimated expected values, which are the sample averages for each of the three treatments in the completely randomized design. The random scatter
around the horizontal “zero error line” and the constant spread indicate no violations in the constant variance assumption. ● Residuals versus the Fitted Values (response is Span)
Normal Probability Plot of the Residuals (response is Span)
MINITAB diagnostic plots for Example 11.13
The plot of residuals versus fit or residuals versus variables is a graph that plots the residuals against the expected value of that observation using the experimental design we have used. If no
assumptions have been violated and there are no “leftover” sources of variation other than experimental error, this plot should show a random scatter of points around the horizontal “zero error line”
for each treatment group, with approximately the same vertical spread.
TABLE 11.7
0.0 Residual
12 Fitted Value
A company plans to promote a new product by using one of three advertising campaigns. To investigate the extent of product recognition from these three campaigns, 15 market areas were selected and
five were randomly assigned to each advertising plan. At the end of the ad campaigns, random samples of 400 adults were selected in each area and the proportions who were familiar with the new product
were recorded, as in Table 11.7. Have any of the analysis of variance assumptions been violated in this experiment? ●
Proportions of Product Recognition for Three Advertising Campaigns Campaign 1
Campaign 2
Campaign 3
.33 .29 .21 .32 .25
.28 .41 .34 .39 .27
.21 .30 .26 .33 .31
Solution The experiment is designed as a completely randomized design, but the response variable is a binomial sample proportion. This indicates that both the normality and the common variance
assumptions might be invalid. Look at the normal probability plot of the residuals and the plot of residuals versus fit generated as an option in the MINITAB analysis of variance procedure and shown
in Figure 11.15. The
490 ❍
curved pattern in the normal probability plot indicates that the residuals do not have a normal distribution. In the residual versus fit plot, you can see three vertical lines of residuals, one for
each of the three ad campaigns. Notice that two of the lines (campaigns 1 and 3) are close together and have similar spread. However, the third line (campaign 2) is farther to the right, which
indicates a larger sample proportion and consequently a larger variance in this group. Both analysis of variance assumptions are suspect in this experiment.
FI GU R E 1 1 . 1 5
MINITAB diagnostic plots for Example 11.14
● Residuals versus the Fitted Values (response is Proportion)
Normal Probability Plot of the Residuals (response is Proportion) 0.08
0.06 0.04
0.00 0.02 0.04
0.08 0.10
0.00 Residual
0.31 Fitted Value
What can you do when the ANOVA assumptions are not satisfied? The constant variance assumption can often be remedied by transforming the response measurements. That is, instead of using the original
measurements, you might use their square roots, logarithms, or some other function of the response. Transformations that tend to stabilize the variance of the response also tend to make their
distributions more nearly normal. When nothing can be done to even approximately satisfy the ANOVA assumptions or if the data are rankings, you should use nonparametric testing and estimation
procedures, presented in Chapter 15. We have mentioned these procedures before; they are almost as powerful in detecting treatment differences as the tests presented in this chapter when the data are
normally distributed. When the parametric ANOVA assumptions are violated, the nonparametric tests are generally more powerful.
A BRIEF SUMMARY We presented three different experimental designs in this chapter, each of which can be analyzed using the analysis of variance procedure. The objective of the analysis of variance is
to detect differences in the mean responses for experimental units that have received different treatments—that is, different combinations of the experimental factor levels. Once an overall test of
the differences is performed, the nature of these differences (if any exist) can be explored using methods of paired comparisons and/or interval estimation procedures. The three designs presented in
this chapter represent only a brief introduction to the subject of analyzing designed experiments. Designs are available for experiments that involve several design variables, as well as more than
two treatment factors and other more complex designs. Remember that design variables are factors whose effect you want to control and hence remove from experimental error, whereas treatment
variables are factors whose effect you want to investigate. If your experiment is properly designed, you will be able to analyze it using the analysis of variance. Experiments in which the levels of
a variable are measured experimentally rather than controlled or preselected ahead of time may be analyzed using linear or multiple regression analysis—the subject of Chapters 12 and 13.
CHAPTER REVIEW Key Concepts and Formulas I.
Experimental Designs
1. Experimental units, factors, levels, treatments, response variables. 2. Assumptions: Observations within each treatment group must be normally distributed with a common variance s 2. 3. One-way
classification—completely randomized design: Independent random samples are selected from each of k populations. 4. Two-way classification—randomized block design: k treatments are compared within b
relatively homogeneous groups of experimental units called blocks. 5. Two-way classification—a b factorial experiment: Two factors, A and B, are compared at several levels. Each factor–level
combination is replicated r times to allow for the investigation of an interaction between the two factors. II. Analysis of Variance
1. The total variation in the experiment is divided into variation (sums of squares) explained by the various experimental factors and variation due to experimental error (unexplained). 2. If there
is an effect due to a particular factor, its mean square (MS SS/df ) is usually large and F MS(factor)/MSE is large. 3. Test statistics for the various experimental factors are based on F statistics,
with appropriate degrees of freedom (df2 Error degrees of freedom).
III. Interpreting an Analysis of Variance
1. For the completely randomized and randomized block design, each factor is tested for significance. 2. For the factorial experiment, first test for a significant interaction. If the interaction is
significant, main effects need not be tested. The nature of the differences in the factor–level combinations should be further examined. 3. If a significant difference in the population means is found,
Tukey’s method of pairwise comparisons or a similar method can be used to further identify the nature of the differences. 4. If you have a special interest in one population mean or the difference
between two population means, you can use a confidence interval estimate. (For a randomized block design, confidence intervals do not provide unbiased estimates for single population means.) IV.
Checking the Analysis of Variance Assumptions
1. To check for normality, use the normal probability plot for the residuals. The residuals should exhibit a straight-line pattern, increasing upwards toward the right. 2. To check for equality of
variance, use the residuals versus fit plot. The plot should exhibit a random scatter, with the same vertical spread around the horizontal “zero error line.”
492 ❍
Analysis of Variance Procedures The statistical procedures used to perform the analysis of variance for the three different experimental designs in this chapter are found in a MINITAB submenu by
choosing Stat 씮 ANOVA. You will see choices for One-way, One-way (Unstacked), and Two-way that will generate Dialog boxes used for the completely randomized, randomized block, and factorial designs,
respectively. You must properly store the data and then choose the columns corresponding to the necessary factors in the experiment. We will display some of the Dialog boxes and Session window
outputs for the examples in this chapter, beginning with a one-way classification—the completely randomized breakfast study in Example 11.4. First, enter the 15 recorded attention spans in column C1
of a MINITAB worksheet and name them “Span.” Next, enter the integers 1, 2, and 3 into a second column C2 to identify the meal assignment (treatment) for each observation. You can let MINITAB set
this pattern for you using Calc 씮 Make Patterned Data 씮 Simple Set of Numbers and entering the appropriate numbers, as shown in Figure 11.16. Then use Stat 씮 ANOVA 씮 One-way to generate the
Dialog box in Figure 11.17.† You must select the column of observations for the “Response” box and the column of treatment indicators for the “Factor” box. Then you have several options. Under
Comparisons, you can select “Tukey’s family error rate” (which has a default level of 5%) to obtain paired comparisons output. Under Graphs, you can select individual value plots and/or box plots to
compare the three meal assignments, and you can generate residual plots (use “Normal plot of residuals” and/or “Residuals versus fits”) to verify the validity of the ANOVA assumptions. Click OK from
the main dialog box to obtain the output in Figure 11.3 in the text. The Stat 씮 ANOVA 씮 Two-way command can be used for both the randomized block and the factorial designs. You must first enter all
of the observations into a single column and then integers or descriptive names to indicate either of these cases: • •
The block and treatment for each of the measurements in a randomized block design The levels of factors A and B for the factorial experiment.
MINITAB will recognize a number of replications within each factor-level combination in the factorial experiment and will break out the sum of squares for interaction (as long as you do not check the
box “Fit additive model”). Since these two designs involve the same sequence of commands, we will use the data from Example 11.12 to generate the analysis of variance for the factorial experiment.
The data are entered into the worksheet in Figure 11.18. See if you can use the Calc 씮 Make Patterned Data 씮 Simple Set of Numbers to enter the data into columns C2–C3. Once the data have been
entered, use Stat 씮 ANOVA 씮 Two-way to generate the Dialog box in Figure 11.19. Choose “Output” for the “Response” box, and “Supervisor” and “Shift” for the “Row factor” and “Column factor,”
respectively. You may choose to display the main effect means along with 95% confidence intervals by checking “Display means,” and you may select residual plots if you wish. Click OK to obtain the
ANOVA printout in Figure 11.13. †
If you had entered each of the three samples into separate columns, the proper command would have been Stat 씮 ANOVA 씮 One-way (Unstacked).
FIGU R E 1 1 . 1 6
FIGU R E 1 1 . 1 7
494 ❍
FI GU R E 1 1 . 1 8
Data Display Row 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
FI GU R E 1 1 . 1 9
Output Supervisor 571 1 610 1 625 1 480 2 516 2 465 2 480 1 474 1 540 1 625 2 600 2 581 2 470 1 430 1 450 1 630 2 680 2 661 2
Shift 1 1 1 1 1 1 2 2 2 2 2 2 3 3 3 3 3 3
Since the interaction between supervisors and shifts is highly significant, you may want to explore the nature of this interaction by plotting the average output for each supervisor at each of the
three shifts. Use Stat 씮 ANOVA 씮 Interactions Plot and choose the appropriate response and factor variables. The plot is generated by MINITAB and shown in Figure 11.20. You can see the strong
difference in the behaviors of the mean outputs for the two supervisors, indicating a strong interaction between the two factors.
FIGU R E 1 1 . 2 0
Supplementary Exercises 11.56 Reaction Times vs. Stimuli Twenty-
seven people participated in an experiment to compare the effects of five different stimuli on reaction time. The experiment was run using a completely randomized design, and, regardless of the
results of the analysis of variance, the experimenters wanted to compare stimuli A and D. The results of the experiment are given here. Use the MINITAB printout to complete the exercise.
MINITAB output for Exercise 11.56 One-way ANOVA: Time versus Stimulus Source Stimulus Error Total
DF 4 22 26
S = 0.1611
SS 1.2118 0.5711 1.7830
Level A B C D E
R-Sq = 67.97%
N 4 7 6 5 5
Pooled StDev =
Stimulus A B C D E
Reaction Time (sec) .8 .7 1.2 1.0 .6
.6 .8 1.0 .9 .4
.6 .5 .9 .9 .4
.5 .5 1.2 1.1 .7
.6 1.3 .7 .3
.9 .7 .8
2.5 4.7 6.4 4.6 2.4
.625 .671 1.067 .920 .480
Mean 0.6250 0.6714 1.0667 0.9200 0.4800 0.1611
MS 0.3030 0.0260
F 11.67
P 0.000
R-Sq(adj) = 62.14%
StDev 0.1258 0.1496 0.1966 0.1483 0.1643
Individual 95% CIs For Mean Based on Pooled StDev -------+---------+---------+---------+-(------*------) (----*----) (-----*----) (-----*-----) (-----*-----)
-------+---------+---------+---------+-0.50 0.75 1.00 1.25
a. Conduct an analysis of variance and test for a difference in the mean reaction times due to the five stimuli. b. Compare stimuli A and D to see if there is a difference in mean reaction times.
496 ❍
11.57 Refer to Exercise 11.56. Use this MINITAB output to identify the differences in the treatment means. MINITAB output for Exercise 11.57 Tukey 95% Simultaneous Confidence Intervals All Pairwise
Comparisons among Levels of Stimulus Individual confidence level = 99.29% Stimulus = A subtracted from: Stimulus B C D E
Lower -0.2535 0.1328 -0.0260 -0.4660
Center 0.0464 0.4417 0.2950 -0.1450
Upper 0.3463 0.7505 0.6160 0.1760
--------+---------+---------+---------+(-----*-----) (-----*-----) (-----*-----) (-----*-----) --------+---------+---------+---------+-0.50 0.00 0.50 1.00
Upper 0.6615 0.5288 0.0888
--------+---------+---------+---------+(-----*-----) (-----*-----) (-----*-----) --------+---------+---------+---------+-0.50 0.00 0.50 1.00
11.59 Reaction Times II The experiment in
Exercise 11.56 might have been conducted more effectively using a randomized block design with people as blocks, since you would expect mean reaction time to vary from one person to another. Hence,
four people were used in a new experiment, and each person was subjected to each of the five stimuli in a random order. The reaction times (in seconds) are listed here:
Stimulus = B subtracted from: Stimulus C D E
Lower 0.1290 -0.0316 -0.4716
Center 0.3952 0.2486 -0.1914
Stimulus = C subtracted from: Stimulus D E
Lower -0.4364 -0.8764
Center -0.1467 -0.5867
Upper 0.1431 -0.2969
--------+---------+---------+---------+(-----*-----) (-----*-----) --------+---------+---------+---------+-0.50 0.00 0.50 1.00
Stimulus = D subtracted from: Stimulus E
Lower -0.7426
Center -0.4400
Upper -0.1374
Stimulus Subject
.7 .6 .9 .6
.8 .6 1.0 .8
1.0 1.1 1.2 .9
1.0 1.0 1.1 1.0
.5 .6 .6 .4
--------+---------+---------+---------+(-----*-----) --------+---------+---------+---------+-0.50 0.00 0.50 1.00
MINITAB output for Exercise 11.59
11.58 Refer to Exercise 11.56. What do the normal probability plot and the residuals versus fit plot tell you about the validity of your analysis of variance results? MINITAB diagnostic plots for
Exercise 11.58
Two-way ANOVA: Time versus Subject, Stimulus Source Subject Stimulus Error Total
DF 3 4 12 19
S = 0.08416
R-Sq = 91.60%
Normal Probability Plot of the Residuals (response is Time)
Stimulus A B C D E
0.0 Residual
Residuals versus the Fitted Values (response is Time) 0.3 0.2
0.0 0.1 0.2 0.3 0.4
0.7 0.8 Fitted Value
MS 0.046667 0.196750 0.007083
F 6.59 27.78
P 0.007 0.000
R-Sq(adj) = 86.70%
Individual 95% CIs For Mean Based on Pooled StDev ---------+---------+---------+---------+(----*----) (----*----) (---*----) (---*----) (---*----) ---------+---------+---------+---------+0.60 0.80
1.00 1.20
a. Use the MINITAB printout to analyze the data and test for differences in treatment means. b. Use Tukey’s method of paired comparisons to identify the significant pairwise differences in the
stimuli. c. Does it appear that blocking was effective in this experiment? 11.60 Heart Rate and Exercise An experiment was conducted to examine the effect of age on heart rate when a person is
subjected to a specific amount of exercise. Ten male subjects were randomly selected from four age groups: 10–19, 20–39, 40–59, and 60–69. Each subject walked on a treadmill at a fixed grade for a
period of 12 minutes, and the increase in heart rate, the difference before and after exercise, was recorded (in beats per minute):
Mean 0.700 0.800 1.050 1.025 0.525
SS 0.140 0.787 0.085 1.012
Use an appropriate computer program to answer these questions: a. Do the data provide sufficient evidence to indicate a difference in mean increase in heart rate among the four age groups? Test by
using a .05. b. Find a 90% confidence interval for the difference in mean increase in heart rate between age groups 10–19 and 60–69. c. Find a 90% confidence interval for the mean increase in heart
rate for the age group 20–39. d. Approximately how many people would you need in each group if you wanted to be able to estimate a group mean correct to within two beats per minute with probability
equal to .95? 11.61 Learning to Sell A company wished
to study the effects of four training programs on the sales abilities of their sales personnel. Thirtytwo people were randomly divided into four groups of equal size, and each group was then
subjected to one of the different sales training programs. Because there were some dropouts during the training programs due to illness, vacations, and so on, the number of trainees completing the
programs varied from group to group. At the end of the training programs, each salesperson was randomly assigned a sales area from a group of sales areas that were judged to have equivalent sales
potentials. The sales made by each of the four groups of salespeople during the first week after completing the training program are listed in the table:
Training Program
Analyze the experiment using the appropriate method. Identify the treatments or factors of interest to the researcher and investigate any significant effects. What are the practical implications of
this experiment? Write a paragraph explaining the results of your analysis. 11.62 4 2 Factorial Suppose you were to conduct a two-factor factorial experiment, factor A at four levels and factor B at
two levels, with r replications per treatment. a. How many treatments are involved in the experiment?
b. How many observations are involved? c. List the sources of variation and their respective degrees of freedom. 11.63 2 3 Factorial The analysis of variance table for a 2 3 factorial experiment,
factor A at two levels and factor B at three levels, with five observations per treatment, is shown in the table. Source
A B AB Error
1.14 2.58 .49
a. Do the data provide sufficient evidence to indicate an interaction between factors A and B? Test using a .05. What are the practical implications of your answer? b. Give the approximate p-value
for the test in part a. c. Do the data provide sufficient evidence to indicate that factor A affects the response? Test using a .05. d. Do the data provide sufficient evidence to indicate that factor
B affects the response? Test using a .05. 11.64 Refer to Exercise 11.63. The means of all observations, at the factor A levels A1 and A2 are 苶x1 3.7 and 苶x2 1.4, respectively. Find a 95% confidence
interval for the difference in mean response for factor levels A1 and A2. 11.65 The Whitefly in California The whitefly, which causes defoliation of shrubs and trees and a reduction in salable crop
yields, has emerged as a pest in Southern California. In a study to determine factors that affect the life cycle of the whitefly, an experiment was conducted in which whiteflies were placed on two
different types of plants at three
498 ❍
different temperatures. The observation of interest was the total number of eggs laid by caged females under one of the six possible treatment combinations. Each treatment combination was run using
five cages. Temperature Plant
S = 11.09
SS 1512.30 487.47 111.20 2952.40 5063.37
R-Sq = 41.69%
1.65 1.70 1.40 2.10
1.72 1.85 1.75 1.95
1.50 1.46 1.38 1.65
1.60 1.80 1.55 2.00
11.67 America’s Market Basket Exercise 10.40 examined an advertisement for Albertsons, a supermarket chain in the western United States. The advertiser claims that Albertsons has consistently had
lower prices than four other full-service supermarkets. As part of a survey conducted by an “independent market basket price-checking company,” the average weekly total based on the prices of
approximately 95 items is given for five different supermarket chains recorded during 4 consecutive weeks.6
MS 1512.30 243.73 55.60 123.02
F 12.29 1.98 0.45
P 0.002 0.160 0.642
R-Sq(adj) = 29.54%
Albertsons Ralphs
a. What type of experimental design has been used? b. Do the data provide sufficient evidence to indicate that the effect of temperature on the number of eggs laid is different depending on the type
of plant? Use the MINITAB printout to test the appropriate hypothesis. c. Plot the treatment means for cotton as a function of temperature. Plot the treatment means for cucumber as a function of
temperature. Comment on the similarity or difference in these two plots. d. Find the mean number of eggs laid on cotton and cucumber based on 15 observations each. Calculate a 95% confidence interval
for the difference in the underlying population means.
Week 1 $254.26 Week 2 240.62 Week 3 231.90 Week 4 234.13
11.66 Pollution from Chemical Plants
Four chemical plants, producing the same product and owned by the same company, discharge effluents into streams in the vicinity of their locations. To check on the extent of the pollution created by
the effluents and to determine whether this varies from plant to plant, the company collected random samples of liquid waste, five specimens for each of the four plants. The data are shown in the
$256.03 255.65 255.12 261.18
Alpha Beta Lucky
$267.92 251.55 245.89 254.12
$260.71 251.80 246.77 249.45
$258.84 242.14 246.80 248.99
a. What type of design has been used in this experiment? b. Conduct an analysis of variance for the data. c. Is there sufficient evidence to indicate that there is a difference in the average weekly
totals for the five supermarkets? Use a .05. d. Use Tukey’s method for paired comparisons to determine which of the means are significantly different from each other. Use a .05. 11.68 Yield of Wheat
The yields of wheat (in bushels per acre) were compared for five different varieties, A, B, C, D, and E, at six different locations. Each variety was randomly assigned to a plot at each location. The
results of the experiment are shown in the accompanying table, along with a MINITAB printout of the analysis of variance. Analyze the experiment using the appropriate method. Identify the treatments
or factors of interest to the researcher and investigate any effects that exist. Use the diagnostic plots to comment on the validity of the analysis of
EX1168 EX1166
1.37 2.05 1.65 1.88
Two-way ANOVA: Eggs versus Plant, Temperature DF 1 2 2 24 29
Polluting Effluents (lb/gal of waste)
A B C D
a. Do the data provide sufficient evidence to indicate a difference in the mean amounts of effluents discharged by the four plants? b. If the maximum mean discharge of effluents is 1.5 lb/gal, do the
data provide sufficient evidence to indicate that the limit is exceeded at plant A? c. Estimate the difference in the mean discharge of effluents between plants A and D, using a 95% confidence
MINITAB output for Exercise 11.65
Source Plant Temperature Interaction Error Total
variance assumptions. What are the practical implications of this experiment? Write a paragraph explaining the results of your analysis. Location Variety A B C D E
35.3 30.7 38.2 34.9 32.4
31.0 32.2 33.4 36.1 28.9
32.7 31.4 33.6 35.2 29.2
36.8 31.7 37.1 38.3 30.7
37.2 35.0 37.3 40.2 33.9
33.1 32.7 38.2 36.0 32.1
MINITAB output for Exercise 11.68
DF 4 5 20 29
S = 1.384
SS 142.670 68.142 38.303 249.142
R-Sq = 84.62%
MS 35.6675 13.6283 1.9165
Physical Activity
F 18.61 7.11
P 0.000 0.001
Normal Probability Plot of the Residuals (response is Yield) 99
0 Residual
Residuals versus the Fitted Values (response is Yield) 2
1 Residual
50.1 47.2 49.7 50.4
45.7 44.2 46.8 44.9
40.9 41.3 39.2 40.9
41.2 39.8 41.5 38.2
37.2 39.4 38.6 37.8
36.5 35.0 37.2 35.4
a. Is this a factorial experiment or a randomized block design? Explain. b. Is there a significant interaction between levels of physical activity and gender? Are there significant differences between
males and females? Levels of physical activity? c. If the interaction is significant, use Tukey’s pairwise procedure to investigate differences among the six cell means. Comment on the results found
using this procedure. Use a .05.
MINITAB diagnostic plots for Exercise 11.68
More Males
R-Sq(adj) = 77.69%
Individual 95% CIs For Mean Based on Pooled StDev Mean +---------+---------+---------+--------34.3500 (-----*-----) 32.2833 (----*-----) 36.3000 (-----*----) 36.7833 (-----*-----) 31.2000
(-----*-----) +---------+---------+---------+--------30.0 32.0 34.0 36.0
Varieties A B C D E
to assess cardiorespiratory fitness levels in youth aged 12 to 19 years.7 Attaining fitness standards is a common prerequisite for entry into occupations such as law enforcement, firefighting, and the
military, as well as other jobs that involve physically demanding labor. Estimated maximum oxygen uptake (VO2max) was used to measure a person’s cardiorespiratory level. The focus of our study
investigates the relationship between levels of physical activity (more than others, same as others, or less than others) and gender on VO2max. The data that follows are based on this study.
Two-way ANOVA: Yield versus Varieties, Location Source Varieties Locations Error Total
11.70 In a study of starting salaries of assistant professors,8 five male assistant professors and five female assistant professors at each of three types of institutions granting doctoral degrees were
polled and their initial starting salaries were recorded under the condition of anonymity. The results of the survey in $1000 are given in the following table.
Public Universities
$57.3 57.9 56.5 76.5 62.0
$85.8 75.2 66.9 73.0 73.0
$78.9 69.3 69.7 58.2 61.2
47.4 56.7 69.0 63.2 65.3
62.1 69.1 66.5 61.8 76.7
60.4 62.1 59.8 71.9 61.6
34 36 Fitted Value
11.69 Physical Fitness Researchers Russell R. Pate and colleagues analyzed the results of the National Health and Nutrition Examination Survey
Source: Based on “Average Salary for Men and Women Faculty by Category, Affiliation, and Academic Rank, 2005–2006.”
a. What type of design was used in collecting these data? b. Use an analysis of variance to test if there are significant differences in gender, in type of institution, and to test for a significant
interaction of gender type of institution. c. Find a 95% confidence interval estimate for the difference in starting salaries for male assistant professors and female assistant professors. Interpret
this interval in terms of a gender difference in starting salaries. d. Use Tukey’s procedure to investigate differences in assistant professor salaries for the three types of institutions. Use a .01.
e. Summarize the results of your analysis. 11.71 Pottery in the United Kingdom An article in Archaeometry involved an analysis of 26 samples of Romano-British pottery, found at four different kiln
sites in the United Kingdom.9 Since one site only yielded two samples, consider the samples found at the other three sites. The samples were analyzed to determine their chemical composition and the
percentage of iron oxide is shown below.
Llanederyn 7.00 7.08 7.09 6.37 7.06 6.26 4.26
5.78 5.49 6.92 6.13 6.64 6.69 6.44
Island Thorns
Ashley Rails
1.28 2.39 1.50 1.88 1.51
1.12 1.14 .92 2.74 1.64
San Philadelphia Francisco 61 64 60 73
a. What type of experimental design was used in this article? If the design used is a randomized block design, what are the blocks and what are the treatments? b. Conduct an analysis of variance for
the data. c. Are there significant differences in the average satisfaction scores for the four wireless providers considered here? d. Are there significant differences in the average satisfaction
scores for the four cities? 11.73 Cell Phones, continued Refer to Exer-
cise 11.72. The diagnostic plots for this experiment are shown below. Does it appear that any of the analysis of variance assumptions have been violated? Explain. Normal Probability Plot of the
Residuals (response is Score) 99 95 90 80 70 60 50 40 30 20 10 5 1
a. What type of experimental design is this? b. Use an analysis of variance to determine if there is a difference in the average percentage of iron oxide at the three sites. Use a .01.
0 Residual
Residuals versus the Fitted Values (response is Score) 3 2 1 Residual
c. If you have access to a computer program, generate the diagnostic plots for this experiment. Does it appear that any of the analysis of variance assumptions have been violated? Explain.
Chicago AT&T Wireless Cingular Wireless Sprint Verizon Wireless
500 ❍
11.72 Cell Phones How satisfied are you
with your current mobile-phone service provider? Surveys done by Consumer Reports indicate that there is a high level of dissatisfaction among consumers, resulting in high customer turnover rates.10
The following table shows the overall satisfaction scores, based on a maximum score of 100, for four wireless providers in four different cities.
68 70 Fitted Value
11.74 Professor’s Salaries II Each year, the American Association of University Professors reports on salaries of academic professors at universities
and colleges in the United States.8 The following data (in thousands of dollars), adapted from this report, are based on samples of n 10 in each of three professorial ranks, for both male and female
professors. Rank Gender Male
Assistant Professor
Associate Professor
$64.4 62.2 64.2 64.9 67.5
$70.0 77.7 77.1 76.0 70.1
$74.4 77.2 76.3 78.8 73.1
$109.4 111.3 112.5 111.6 118.3
$110.5 104.4 106.3 106.9 109.9
56.6 57.6 53.5 64.4 62.6
59.0 58.6 54.9 62.9 59.8
65.4 71.9 65.9 67.9 73.6
66.3 74.6 73.0 69.4 71.0
110.3 97.0 91.5 103.5 95.6
100.9 102.8 102.0 96.7 97.8
a. Identify the design used in this survey. b. Use the appropriate analysis of variance for these data. c. Do the data indicate that the salary at the different ranks vary by gender? d. If there is
no interaction, determine whether there are differences in salaries by rank, and whether there are differences by gender. Discuss your results.
Full Professor
$63.9 63.9 64.8 68.3 67.5
e. Plot the average salaries using an interaction plot. If the main effect of ranks is significant, use Tukey’s method of pairwise comparisons to determine if there are significant differences among
the ranks. Use a .01.
Source: Based on “Average Salary for Men and Women Faculty by Category, Affiliation, and Academic Rank, 2005–2006.”
CASE STUDY Tickets
“A Fine Mess” Do you risk a parking ticket by parking where you shouldn’t or forgetting how much time you have left on the parking meter? Do the fines associated with various parking infractions vary
depending on the city in which you receive a parking ticket? To look at this issue, the fines imposed for overtime parking, parking in a red zone, and parking next to a fire hydrant were recorded for
13 cities in southern California.11 City Long Beach Bakersfield Orange San Bernardino Riverside San Luis Obispo Beverly Hills Palm Springs Laguna Beach Del Mar Los Angeles San Diego Newport Beach
Overtime Parking $17 17 22 20 21 8 23 22 22 25 20 35 32
Red Zone
Fire Hydrant
$30 33 30 30 30 20 38 28 22 40 55 60 42
$30 33 32 78 30 75 30 46 32 55 30 60 30
Source: From “A Fine Mess,” by R. McGarvey, Avenues, July/August 1994. Reprinted by permission of the author.
1. Identify the design used for the data collection in this case study. 2. Analyze the data using the appropriate analysis. What can you say about the variation among the cities in this study? Among
fines for the three types of violations? Can Tukey’s procedure be of use in further delineating any significant differences you may find? Would confidence interval estimates be useful in your analysis?
3. Summarize the results of your analysis of these data.
Linear Regression and Correlation GENERAL OBJECTIVES In this chapter, we consider the situation in which the mean value of a random variable y is related to another variable x. By measuring both y
and x for each experimental unit, thereby generating bivariate data, you can use the information provided by x to estimate the average value of y and to predict values of y for preassigned values of
CHAPTER INDEX ● Analysis of variance for linear regression (12.4) ● Correlation analysis (12.8) ● Diagnostic tools for checking the regression assumptions (12.6)
© Justin Sullivan/Getty Images
● Estimation and prediction using the fitted line (12.7) ● The method of least squares (12.3) ● A simple linear probabilistic model (12.2) ● Testing the usefulness of the linear regression model:
inferences about b, the ANOVA F-test, and r 2 (12.5)
How Do I Make Sure That My Calculations Are Correct?
Is Your Car “Made in the U.S.A.”? The phrase “made in the U.S.A.” has become a battle cry in the past few years as American workers try to protect their jobs from overseas competition. In the case
study at the end of this chapter, we explore the changing attitudes of American consumers toward automobiles made outside the United States, using a simple linear regression analysis.
12.2 A SIMPLE LINEAR PROBABILISTIC MODEL
INTRODUCTION High school seniors, freshmen entering college, their parents, and a university administration are concerned about the academic achievement of a student after he or she has enrolled in a
university. Can you estimate or predict a student’s grade point average (GPA) at the end of the freshman year before the student enrolls in the university? At first glance this might seem like a
difficult problem. However, you would expect highly motivated students who have graduated with a high class rank from a high school with superior academic standards to achieve a high GPA at the end
of the college freshman year. On the other hand, students who lack motivation or who have achieved only moderate success in high school are not expected to do so well. You would expect the college
achievement of a student to be a function of several variables: • • • •
Rank in high school class High school’s overall rating High school GPA SAT scores
This problem is of a fairly general nature. You are interested in a random variable y (college GPA) that is related to a number of independent variables. The objective is to create a prediction
equation that expresses y as a function of these independent variables. Then, if you can measure the independent variables, you can substitute these values into the prediction equation and obtain the
prediction for y—the student’s college GPA in our example. But which variables should you use as predictors? How strong is their relationship to y? How do you construct a good prediction equation for
y as a function of the selected predictor variables? We will answer these questions in the next two chapters. In this chapter, we restrict our attention to the simple problem of predicting y as a
linear function of a single predictor variable x. This problem was originally addressed in Chapter 3 in the discussion of bivariate data. Remember that we used the equation of a straight line to
describe the relationship between x and y and we described the strength of the relationship using the correlation coefficient r. We rely on some of these results as we revisit the subject of linear
regression and correlation.
A SIMPLE LINEAR PROBABILISTIC MODEL Consider the problem of trying to predict the value of a response y based on the value of an independent variable x. The best-fitting line of Chapter 3, y a bx was
based on a sample of n bivariate observations drawn from a larger population of measurements. The line that describes the relationship between y and x in the population is similar to, but not the
same as, the best-fitting line from the sample. How can you construct a population model to describe the relationship between a random variable y and a related independent variable x? You begin by
assuming that the variable of interest, y, is linearly related to an independent variable x. To describe the linear relationship, you can use the deterministic model y a bx
504 ❍
where a is the y-intercept—the value of y when x 0—and b is the slope of the line, defined as the change in y for a one-unit change in x, as shown in Figure 12.1. This model describes a deterministic
relationship between the variable of interest y, sometimes called the response variable, and the independent variable x, often called the predictor variable. That is, the linear equation determines
an exact value of y when the value of x is given. Is this a realistic model for an experimental situation? Consider the following example.
FI GU R E 1 2 . 1
The y-intercept and slope for a line
Slope = β
slope change in y for a 1-unit change in x y-intercept value of y when x 0
y-intercept = α 0
Table 12.1 displays the mathematics achievement test scores for a random sample of n 10 college freshmen, along with their final calculus grades. A bivariate plot of these scores and grades is given
in Figure 12.2. You can use the Building a Scatterplot applet to refresh your memory as to how this plot is drawn. Notice that the points do not lie exactly on a line but rather seem to be deviations
about an underlying line. A simple way to modify the deterministic model is to add a random error component to explain the deviations of the points about the line. A particular response y is
described using the probabilistic model y a bx e
TABLE 12.1
Mathematics Achievement Test Scores and Final Calculus Grades for College Freshmen
Mathematics Achievement Test Score
Final Calculus Grade
12.2 A SIMPLE LINEAR PROBABILISTIC MODEL
FIGU R E 1 2 . 2
Scatterplot of the data in Table 12.1
● 100
50 Score
The first part of the equation, a bx—called the line of means—describes the average value of y for a given value of x. The error component e allows each individual response y to deviate from the line
of means by a small amount. In order to use this probabilistic model for making inferences, you need to be more specific about this “small amount,” e. ASSUMPTIONS ABOUT THE RANDOM ERROR e Assume that
the values of e satisfy these conditions: • • •
Are independent in the probabilistic sense Have a mean of 0 and a common variance equal to s 2 Have a normal probability distribution
These assumptions about the random error e a | {"url":"https://silo.pub/introduction-to-probability-and-statistics.html","timestamp":"2024-11-07T11:04:52Z","content_type":"text/html","content_length":"1049648","record_id":"<urn:uuid:935af876-ade8-468d-9438-7990a037f9d2>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00546.warc.gz"} |
OUTPUT Statement
The OUTPUT statement generates and prints forecasts based on the model estimated in the previous MODEL statement and, optionally, creates an output SAS data set that contains these forecasts.
When the GARCH model is estimated, the upper and lower confidence limits of forecasts are calculated by assuming that the error covariance has homoscedastic conditional covariance. | {"url":"http://support.sas.com/documentation/cdl/en/etsug/65545/HTML/default/etsug_varmax_syntax18.htm","timestamp":"2024-11-08T12:15:03Z","content_type":"application/xhtml+xml","content_length":"17288","record_id":"<urn:uuid:91faece4-9163-45dd-b88c-a7a74b005bb3>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00619.warc.gz"} |
How do you find the derivative of y =sqrt(3x+1)? | Socratic
How do you find the derivative of #y =sqrt(3x+1)#?
1 Answer
When you're differentiating radicals, the key is to rewrite them in rational exponent form:
$y = {\left(3 x + 1\right)}^{\frac{1}{2}}$
Now it's more clear that we can apply the power rule and then the chain rule to find this function's derivative:
$\frac{\mathrm{dy}}{\mathrm{dx}} = 3 \cdot \left(\frac{1}{2} \cdot {\left(3 x + 1\right)}^{- \frac{1}{2}}\right)$
And now all we need to do is simplify a bit:
$\frac{\mathrm{dy}}{\mathrm{dx}} = \frac{3}{2 \sqrt{3 x + 1}}$
Impact of this question
42096 views around the world | {"url":"https://socratic.org/questions/how-do-you-find-the-derivative-of-y-sqrt-3x-1","timestamp":"2024-11-05T16:40:27Z","content_type":"text/html","content_length":"32689","record_id":"<urn:uuid:502e9657-48cc-4f09-8ec1-c7f98f75a6e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00138.warc.gz"} |
How to use the Math Problem Prompt Cards for Distance Learning
I've just finished uploading a new set of 'What's the Math Question?' prompt cards (free to download in my shop). This new set
has a back to school theme, making it an ideal activity to do if you are about to start a new school year.
These math prompt cards would traditionally be printed and laminated to use in the classroom. However, this year, some of you may
be teaching your class online! So, in this post, I'll outline a suggestion as to how you can still use these math prompt cards Author
for distance teaching.
Follow Mrs J!
The resource downloads as a PDF file. However, you can create 'digital' cards from this download using the 'Snipping Tool' or
something similar. Your computer may already have the Snipping Tool provided. However if you need a snipping tool, check out this
post >> Best Snipping Tools for Windows 10 and macOS in 2020 << to find one that works best for your computer.
Once you have the Snipping Tool handy, open the PDF file, and start 'Snipping' the cards into singular cards. See steps below: This website uses marketing and tracking technologies. Opting out
Step 1. Download the free math problem prompt cards resource. of this will opt you out of all cookies, except for those needed
to run the website. Note that some products may not work as well
without tracking cookies.
Back to School Math Problems Prompt Cards - FREE
Opt Out of Cookies
These Back to School themed 'What's the question?' cards are great to help your students create themed math word problems.
Differentiate this card set by setting the challenge to make word problems involving either multiplication, division, negative
numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Step 2. Open the PDF and pick the page with cards that you would like to digitally cut.
STEP 3. Open your Snipping Tool, ensure to select 'Rectangular Snip' from the Mode drop-down menu.
STEP 4. Then click 'NEW' to start snipping the cards individually. After snipping, click 'Save as' and save your card as a PNG.
You will then have an individual image for a prompt card like the example below.
STEP 5. Once you have your individual digital cards ready, you can assign them via whichever learning platform you are using with
your class.
Some suggestions for distance learning:
These are just some implementation ideas. Do what works best for you and your class.
• Post one card prompt a day and request students to create a math word problem to submit.
• Assign different cards to students to create a variety of math word problems. Then have your students share their math
problems in a classroom forum. Instruct students to solve at least three other classmates' word problems posted.
Whether you are going back to the physical classroom or will be teaching in digital mode, I hope this resource helps get your
students thinking mathematically and creatively!
Do you want more themed Math Word Problem Prompt Cards? Collect them all below! Yes, they are all FREE to download!
Halloween Math Word Problem Prompt Cards - FREE
These Halloween 'What's the question?' cards are great to help your students create their own spooky math word problems.
Differentiate this card set by setting the challenge to make word problems involving either multiplication, division, negative
numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Each student receives a card and must create a math word problem that will result to the information given on that card. After
the creating part, students swap their Halloween math word problems with other students. Once students complete the word problem,
they check in with the creator to see if they answered it correctly!
CLICK HERE to learn more about using this resource on my blog.
Thanksgiving Word Problems Math Prompt Cards
These Thanksgiving 'What's the question?' cards are great to help your students create their own math word problems.
Differentiate this card set by setting the challenge to make word problems involving either multiplication, division, negative
numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Each student receives a card and must create a math word problem that will result to the information given on that card. After
the creating part, students swap their Thanksgiving math word problems with other students. Once students complete the word
problem, they check in with the creator to see if they answered it correctly!
Christmas Word Problem Math Prompt Cards
These Christmas 'What's the question?' cards are great to help your students create their own Christmas math word problems.
Differentiate this card set by setting the challenge to make word problems involving either multiplication, division, negative
numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Each student receives a card and must create a math word problem that will result to the information given on that card. After
the creating part, students swap their Christmasmath word problems with other students. Once students complete the word problem,
they check in with the creator to see if they answered it correctly!
Valentine's Day Word Problem Math Prompt Cards
These Valentine's Day 'What's the question?' cards are great to help your students create their own math word problems.
Differentiate this card set by setting the challenge to make word problems involving either multiplication, division, negative
numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Each student receives a card and must create a math word problem that will result to the information given on that card. After
the creating part, students swap their Valentine's Day math word problems with other students. Once students complete the word
problem, they check in with the creator to see if they answered it correctly!
St Patrick's Day Word Problem Math Prompts
These St Patrick's Day 'What's the question?' cards are great to help your students create their own math word problems.
Differentiate this card set by setting the challenge to make word problems involving either multiplication, division, negative
numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Each student receives a card and must create a math word problem that will result to the information given on that card. After
the creating part, students swap their St Patrick's Day math word problems with other students. Once students complete the word
problem, they check in with the creator to see if they answered it correctly!
Easter Word Problem Math Prompt Cards
These Easter 'What's the question?' cards are great to help your students create their own math word problems. Differentiate this
card set by setting the challenge to make word problems involving either multiplication, division, negative numbers for higher
grades OR keep it to basic addition and subtraction for lower grades.
Each student receives a card and must create a math word problem that will result to the information given on that card. After
the creating part, students swap their Easter math word problems with other students. Once students complete the word problem,
they check in with the creator to see if they answered it correctly!
Winter Math Word Problem Prompt Cards - FREE
These Winter 'What's the question?' cards are great to help your students create their own winter themed math word problems.
Differentiate this card set by setting the challenge to make word problems involving either multiplication, division, negative
numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Each student receives a card and must create a math word problem that will result to the information given on that card. After
the creating part, students swap their Halloween math word problems with other students. Once students complete the word problem,
they check in with the creator to see if they answered it correctly!
CLICK HERE to learn more about using this resource on my blo
Download, print, laminate and cut the cards. Store in a ziplock bag, small box, or envelope for easy reused and organization.
Pet Store Themed Math Word Problem Prompt Cards - FREE
These Pet Store themed 'What's the question?' cards are great to help your students create their own pet-themed math word
problems. Differentiate this card set by setting the challenge to make word problems involving either multiplication, division,
negative numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Back to School Math Problems Prompt Cards - FREE
These Back to School themed 'What's the question?' cards are great to help your students create themed math word problems.
Differentiate this card set by setting the challenge to make word problems involving either multiplication, division, negative
numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Pirate Themed Math Word Problem Prompt Cards - FREE
These Pirate - themed math word problem prompt cards are great to get your students thinking creatively and mathematically.
Differentiate this card set by setting the challenge to make word problems involving either multiplication, division, negative
numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Fall Math 'What is the Question?' Create Math Word Problems Card Prompts - FREE
These Fall-themed 'What's the Question?' card prompts are designed to inspire your students to craft their own math word problems
infused with the spirit of autumn. This versatile set can be tailored to suit varying levels of mathematical understanding. For
advanced learners, challenge them to construct word problems involving multiplication, division, or even negative numbers. On the
other hand, for beginners, stick to the fundamentals and encourage the creation of word problems centered around basic addition
and subtraction. Engage your students in this creative and educational activity that not only enhances their problem-solving
skills but also makes learning math a fun, seasonal experience.
Spring Math 'What is the Question?' Math Word Problems Card Prompts - FREE
These Spring-themed 'What's the Question?' card prompts are designed to inspire your students to craft their math word problems
infused with the spirit of spring. This versatile set can be tailored to suit varying levels of mathematical understanding. For
advanced learners, challenge them to construct word problems involving multiplication, division, or even negative numbers. On the
other hand, beginners should stick to the fundamentals and encourage the creation of word problems centered around basic addition
and subtraction. Engage your students in this creative and educational activity that enhances their problem-solving skills and
makes learning math a fun, seasonal experience.
1 Comment
These Back to School themed 'What's the question?' cards are great to help your students create themed math word problems. Differentiate this card set by setting the challenge to make word problems
involving either multiplication, division, negative numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
STEP 4. Then click 'NEW' to start snipping the cards individually. After snipping, click 'Save as' and save your card as a PNG. You will then have an individual image for a prompt card like the
example below.
These Halloween 'What's the question?' cards are great to help your students create their own spooky math word problems. Differentiate this card set by setting the challenge to make word problems
involving either multiplication, division, negative numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Each student receives a card and must create a math word problem that will result to the information given on that card. After the creating part, students swap their Halloween math word problems with
other students. Once students complete the word problem, they check in with the creator to see if they answered it correctly!
CLICK HERE to learn more about using this resource on my blog.
These Thanksgiving 'What's the question?' cards are great to help your students create their own math word problems. Differentiate this card set by setting the challenge to make word problems
involving either multiplication, division, negative numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Each student receives a card and must create a math word problem that will result to the information given on that card. After the creating part, students swap their Thanksgiving math word problems
with other students. Once students complete the word problem, they check in with the creator to see if they answered it correctly!
These Christmas 'What's the question?' cards are great to help your students create their own Christmas math word problems. Differentiate this card set by setting the challenge to make word problems
involving either multiplication, division, negative numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Each student receives a card and must create a math word problem that will result to the information given on that card. After the creating part, students swap their Christmasmath word problems with
other students. Once students complete the word problem, they check in with the creator to see if they answered it correctly!
These Valentine's Day 'What's the question?' cards are great to help your students create their own math word problems. Differentiate this card set by setting the challenge to make word problems
involving either multiplication, division, negative numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Each student receives a card and must create a math word problem that will result to the information given on that card. After the creating part, students swap their Valentine's Day math word
problems with other students. Once students complete the word problem, they check in with the creator to see if they answered it correctly!
These St Patrick's Day 'What's the question?' cards are great to help your students create their own math word problems. Differentiate this card set by setting the challenge to make word problems
involving either multiplication, division, negative numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Each student receives a card and must create a math word problem that will result to the information given on that card. After the creating part, students swap their St Patrick's Day math word
problems with other students. Once students complete the word problem, they check in with the creator to see if they answered it correctly!
These Easter 'What's the question?' cards are great to help your students create their own math word problems. Differentiate this card set by setting the challenge to make word problems involving
either multiplication, division, negative numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
Each student receives a card and must create a math word problem that will result to the information given on that card. After the creating part, students swap their Easter math word problems with
other students. Once students complete the word problem, they check in with the creator to see if they answered it correctly!
These Winter 'What's the question?' cards are great to help your students create their own winter themed math word problems. Differentiate this card set by setting the challenge to make word problems
involving either multiplication, division, negative numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
CLICK HERE to learn more about using this resource on my blo
Download, print, laminate and cut the cards. Store in a ziplock bag, small box, or envelope for easy reused and organization.
These Pet Store themed 'What's the question?' cards are great to help your students create their own pet-themed math word problems. Differentiate this card set by setting the challenge to make word
problems involving either multiplication, division, negative numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
These Pirate - themed math word problem prompt cards are great to get your students thinking creatively and mathematically. Differentiate this card set by setting the challenge to make word problems
involving either multiplication, division, negative numbers for higher grades OR keep it to basic addition and subtraction for lower grades.
These Fall-themed 'What's the Question?' card prompts are designed to inspire your students to craft their own math word problems infused with the spirit of autumn. This versatile set can be tailored
to suit varying levels of mathematical understanding. For advanced learners, challenge them to construct word problems involving multiplication, division, or even negative numbers. On the other hand,
for beginners, stick to the fundamentals and encourage the creation of word problems centered around basic addition and subtraction. Engage your students in this creative and educational activity
that not only enhances their problem-solving skills but also makes learning math a fun, seasonal experience.
These Spring-themed 'What's the Question?' card prompts are designed to inspire your students to craft their math word problems infused with the spirit of spring. This versatile set can be tailored
to suit varying levels of mathematical understanding. For advanced learners, challenge them to construct word problems involving multiplication, division, or even negative numbers. On the other hand,
beginners should stick to the fundamentals and encourage the creation of word problems centered around basic addition and subtraction. Engage your students in this creative and educational activity
that enhances their problem-solving skills and makes learning math a fun, seasonal experience.
This website uses marketing and tracking technologies. Opting out of this will opt you out of all cookies, except for those needed to run the website. Note that some products may not work as well
without tracking cookies. | {"url":"https://www.jjresourcecreations.com/-blog/new-back-to-school-math-problem-prompt-cards-free-set-of-30-plus-how-to-use-them-in-the-digital-classroom","timestamp":"2024-11-04T10:47:17Z","content_type":"text/html","content_length":"244943","record_id":"<urn:uuid:bb347a6f-1c09-4174-bfab-f8961522af96>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00812.warc.gz"} |
POV-Ray: Newsgroups: povray.general: Isosurface and function pattern in v3.5: Re: Isosurface and function pattern in v3.5
"Thorsten Froehlich" <tho### [at] trf> Maybe there are any literature references on how the algorithm works?
>Or any other detailed description?
I have submitted a paper of the algorithm ("method 2" in the patches)
to an international workshop of implicit surfaces.
But that paper was rejected.
This would be because I do not have any background in computer
science (my major research field is materials science).
Followings are short description of the algorithm. It is very
simple. If someone interested in details, I can send the rejected
paper by e-mail.
The isosurface searching is a recursive subdivision method.
In the first step, POV-Ray calculate the function values F(d_1)
and F(d_2) on the ray , where d is the distance from the initial
point and d_1<d_2.
If there is a possibility of isosurface between d_1 and d_2,
POV-Ray will calculate function value at another point 'd_3'
on the ray between the two points 'd_1' and 'd_2'.
The possibility is evaluated with the values F(d_1), F(d_2),
the length from d_1 to d_2, and MAX_GRADIENT.
Then, if there is a possibility of isosurface between 'd_1' and
'd_3', POV-Ray calculate another point between 'd_1' and 'd_3'.
If there is no possibility between 'd_1' and 'd_3', POV-Ray
looks for another point between 'd_3' and 'd_2', and so on.
These calculations are carried out recursively until
Post a reply to this message | {"url":"http://news.povray.org/povray.general/message/%3C3b984214%40news.povray.org%3E/#%3C3b984214%40news.povray.org%3E","timestamp":"2024-11-02T05:34:48Z","content_type":"text/html","content_length":"9182","record_id":"<urn:uuid:dfe5bc5d-daf2-4d7e-b901-3635ffb9dc6a>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00482.warc.gz"} |
Parth Vakil
I would like to talk about the oft used method of measuring the carrier frequency in the world of Signal Collection and Characterization world. It is an elegant technique because of its simplicity.
But, of course, with simplicity, there come drawbacks (sometimes...especially with this one!).
In the world of Radar detection and characterization, one of the key characteristics of interest is the carrier frequency of the signal. If the radar is pulsed, you will have a very wide bandwidth,
A critical thing to realize while modeling the signal that is going to be digitally processed is the SNR. In a receiver, the noise floor (hence the noise variance and hence its power) are determined
by the temperature and the Bandwidth. For a system with a constant bandwidth, relatively constant temperature, the noise power remains relatively constant as well. This implies that the noise
variance is a constant.
In MATLAB, the easiest way to create a noisy signal is by using...
Hello all.
I would like to take this chance to talk a little about what I am going to try and do in this blog. While working in the field, I have come across some interesting techniques. It has, at times, taken
some time and effort to understand these techniques. Ever since I was a kid, I have overestimated my capacity to remember everything that I learn. So, I had decided to start keeping a journal of all
the techniques that started to make sense. This blog provides a great way for me to...
I would like to talk about the oft used method of measuring the carrier frequency in the world of Signal Collection and Characterization world. It is an elegant technique because of its simplicity.
But, of course, with simplicity, there come drawbacks (sometimes...especially with this one!).
In the world of Radar detection and characterization, one of the key characteristics of interest is the carrier frequency of the signal. If the radar is pulsed, you will have a very wide bandwidth,
A critical thing to realize while modeling the signal that is going to be digitally processed is the SNR. In a receiver, the noise floor (hence the noise variance and hence its power) are determined
by the temperature and the Bandwidth. For a system with a constant bandwidth, relatively constant temperature, the noise power remains relatively constant as well. This implies that the noise
variance is a constant.
In MATLAB, the easiest way to create a noisy signal is by using...
Hello all.
I would like to take this chance to talk a little about what I am going to try and do in this blog. While working in the field, I have come across some interesting techniques. It has, at times, taken
some time and effort to understand these techniques. Ever since I was a kid, I have overestimated my capacity to remember everything that I learn. So, I had decided to start keeping a journal of all
the techniques that started to make sense. This blog provides a great way for me to...
I would like to talk about the oft used method of measuring the carrier frequency in the world of Signal Collection and Characterization world. It is an elegant technique because of its simplicity.
But, of course, with simplicity, there come drawbacks (sometimes...especially with this one!).
In the world of Radar detection and characterization, one of the key characteristics of interest is the carrier frequency of the signal. If the radar is pulsed, you will have a very wide bandwidth,
A critical thing to realize while modeling the signal that is going to be digitally processed is the SNR. In a receiver, the noise floor (hence the noise variance and hence its power) are determined
by the temperature and the Bandwidth. For a system with a constant bandwidth, relatively constant temperature, the noise power remains relatively constant as well. This implies that the noise
variance is a constant.
In MATLAB, the easiest way to create a noisy signal is by using...
Hello all.
I would like to take this chance to talk a little about what I am going to try and do in this blog. While working in the field, I have come across some interesting techniques. It has, at times, taken
some time and effort to understand these techniques. Ever since I was a kid, I have overestimated my capacity to remember everything that I learn. So, I had decided to start keeping a journal of all
the techniques that started to make sense. This blog provides a great way for me to... | {"url":"https://www.dsprelated.com/blogs-1/nf/Parth_Vakil.php","timestamp":"2024-11-02T07:45:23Z","content_type":"text/html","content_length":"33654","record_id":"<urn:uuid:71a10419-12c6-4b32-bfb8-2e07d3e09502>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00401.warc.gz"} |
Influence of soak time on catch performance of commercial creels targeting Norway lobster (
Issue Aquat. Living Resour.
Volume 30, 2017
Article Number 36
Number of page(s) 10
DOI https://doi.org/10.1051/alr/2017035
Published online 13 October 2017
Aquat. Living Resour. 2017, 30, 36
Research Article
Influence of soak time on catch performance of commercial creels targeting Norway lobster (Nephrops norvegicus) in the Mediterranean Sea
^1 University of Split, University Department of Marine Studies, Ruđera Boškovića 37, 21000 Split, Croatia
^2 SINTEF Fisheries and Aquaculture, Fishing Gear Technology, Willemoesvej 2, 9850 Hirtshals, Denmark
^3 University of Tromsø, Breivika, 9037 Tromsø, Norway
^* Corresponding author: jure.brcic@unist.hr
Handling Editor: Verena Trenkel
Received: 2 March 2017
Received in final form: 14 September 2017
Accepted: 15 September 2017
Creel catch performance is known to be affected by the soak time in many fisheries. If creels maintained their efficiency over longer periods, increase in soak time should lead to proportional
increase in catch quantity. However, the exact shape of this relationship is unknown for creel fisheries targeting Norway lobster (Nephrops norvegicus). If it was known fishermen could adjust their
fishing strategy accordingly and maximize their net earnings. We compared catch performance of creels targeting Norway lobster soaked for one and two days in the Adriatic Sea. Results were obtained
for three crustacean species, Norway lobster (N. norvegicus), mantis shrimp (Squilla mantis), and blue-leg swimming crab (Liocarcinus depurator) and two fish species, poor cod (Trisopterus minutus)
and blotched picarel (Spicara flexuosa). Doubling the soak time from one to two days did not double the catches and for Norway lobster no increase was found. For the other crustaceans, a slight but
not significant increase was estimated. Catches of blotched picarel were significantly lower for the longer soak time, while the results were inconclusive for the poor cod.
Key words: Nephrops norvegicus / Soak time / Unpaired catch comparison
© EDP Sciences 2017
1 Introduction
Norway lobster (Nephrops norvegicus) is economically the most valuable crustacean species caught in EU waters. Annual landings in the Mediterranean area were 2470t in 2015 (EUROSTAT: http://
ec.europa.eu/eurostat/data/database). Bottom trawling accounts for approximately 95% of Norway lobster catches (Ungfors et al., 2013). However, creel fisheries that account for the remaining volume
are considered to have a smaller ecological footprint (Ungfors et al., 2013), and to produce much less discards compared to bottom trawls (Morello et al., 2009). Although various bottom trawl
modifications such as including escape panels (e.g. Krag et al., 2016) or square mesh panels (e.g. Santos et al., 2016) have been trialled and some of them implemented to reduce unwanted bycatch of
undersized individuals, their efficiency remains variable. Since one of the EU Common Fisheries Policy objectives is to ensure minimisation of the negative impacts of fishing activities on the marine
ecosystem (Regulation (EU) No. 1380/2013), increased use of creels as an alternative to trawling could be relevant in certain areas. Therefore, it is important to explore ways of maximizing creel
catch performance.
Creel catch performance is known to be affected by the soak time in many fisheries. If creels maintained their efficiency over longer periods, an increase in soak time should lead to a proportional
increase in catch quantity. However, the actual shape of this relationship is generally unknown for creel fisheries targeting Norway lobster. Knowing the influence of soak time, fishermen could
adjust their fishing strategy accordingly to maximize their net earnings. Bjordal (1986) found that only 6.1% of Norway lobster individuals that approached the creel actually entered. This might
indicate that Norway lobster could have difficulties finding the entrance. If the bait maintained its attractiveness, an increase in soak time should allow Norway lobster more time to circle around
the creel, find the entrance and enter. The problem is that Norway lobster is not the only species attracted by the bait (most creel fisheries use oily fishes as bait, including horse Mackerel).
According to Adey (2007), the presence of other crab species in and around the creel reduces the number of Norway lobsters entering the creel. The scavenger species in the area can also consume the
bait, negatively influencing creel catch performance over time. This has been observed in the Adriatic Sea by Morello et al. (2009) and Panfili et al. (2007), who estimated that up to 50% of creel
bait is consumed within 12h and up to 100% within 24h. This could imply that soak times longer than 24h do not increase creel catches. Furthermore, Bjordal (1986) showed that small Norway lobsters
are usually chased off by bigger individuals. This implies that the presence of larger individuals inside the creel could incite smaller specimens to escape from it, or deter them from entering.
Similarly, individuals of the opposite sex are known to either attract or repel each other, depending on the first individual entering the creel (Ungfors et al., 2013).
The above considerations illustrate how the behaviour of Norway lobster and other species during fishing could potentially affect the influence of soak time on creel catch performance in different
directions, making it difficult for creel fisherman targeting Norway lobster to predict the optimal soak time. Therefore, the main objective of this study was to investigate the influence of soak
time on the catch performance in creel fishery targeting Norway lobster in the eastern Adriatic Sea. Specifically, we addressed the following questions:
• Does doubling the creel soak time lead to a proportional increase in catches?
• If not, is there any difference in creel catch performance by extending soak time from one to two days?
• Does an increase in creel soak time affect catch performance in a similar way for different species and sizes?
2 Material and methods
2.1 Experimental fishing
The experimental fishing was conducted in the Adriatic Sea (Fig. 1) between 26th of May and 5th of July 2016 using a commercial fishing vessel (LOA 6.90m, 84hp). The investigation was based on a
typical commercial creel design and deployment practice commonly used in the study area. The creels used in this study were made of a rectangular metal frame (700×450×265mm) with 41.04mm
knotless polyamide diamond netting stretched over the frame in a way to obtain a square mesh shape, as prescribed by the regulations. The two entrances made of the same netting material were
positioned opposite each other on the short sides of the creel (Fig. 2). Before fishing, the creels were baited with 43.29±11.33g (±SD) of fresh Mediterranean horse mackerel (Trachurus
mediterraneus), hooked halfway between the entrances without any bait protection device. The bait was renewed on every hauling occasion.
The creels were deployed in a longline system, each comprising 30 creels (further in text referred to as “longline”) (Fig. 2). The longlines were deployed following typical commercial practice in the
area. They were set in the early morning hours and retrieved after one or two days. Longlines deployed with one day soak time are hereafter labelled as 1-day, while longlines deployed with two-day
soak time are labelled as 2-days longlines. Due to low catches per individual creel, the catch of one longline (30 creels) was considered as a base unit in the subsequent analysis. Upon retrieval,
the total catch of each longline was sorted and species and size distributions were recorded. Norway lobster and mantis shrimp carapace length, and blue-leg swimming crab carapace width were measured
to the nearest mm, and poor cod and blotched picarel total length were measured to the nearest cm. Sex was determined only for Norway lobster.
Fig. 1
Map of the survey area showing the position of the creel longlines with one day (solid circles) and two day soak times (open circles).
Fig. 2
Photo and technical drawing of the creel used in the study and schema view of the deployment in the longline system.
2.2 Estimation of the catch comparison curve
The data were analysed using the software tool SELNET (Herrmann et al., 2012) following the method described below. Owing to the experimental design, the catch data from the longlines deployed with,
respectively, 1-day and 2-day soak times were not collected in pairs and can be regarded as unpaired catch data. Since there is no obvious way of pairing the catch data from 1-day and 2-days
deployments, the average relative catch performance was estimated by adopting the catch comparison analysis method for unpaired data described by Herrmann et al. (2017), and applying it for the first
time to a creel fishery. The catch comparison was carried out based on total catches per deployment type by minimizing the following expression: $−∑l{∑i=1q1n1li×ln(1.0−cc(l,v))+∑j=1q2n2lj×ln(cc
(l,v))},$(1) where v are parameters of the catch comparison curve cc(l, v), and n1[li] and n2[lj] are the number of crustaceans and fish of length class l caught in the ith deployment of a 1-day
longline and jth deployment of a 2-day longline. q1 and q2 represent the total number of deployments of 1-day and 2-day longlines, respectively. The outer summation in expression (1) is the summation
over length classes l. Minimizing expression (1) is equivalent to maximizing the likelihood for the observed data based on a maximum likelihood formulation for binominal data. As a result, estimated
model parameters are those that make the experimental data most likely.
The average experimental catch comparison rate, cc[l], where l denotes crustacean carapace length or width, or fish total length, is estimated as: $ccl=∑j=1q2n2lj∑i=1q1n1li+∑j=1q2n2lj.$(2)
When the catch performance for 1-day and 2-day deployments and the number of deployments are equal (q1=q2), the expected value for the summed catch comparison rate is 0.5. In the case of unequal
number of deployments, q2/(q2+q1) would be the baseline to judge whether there is a difference in catch performance between 1-day and 2-day soak time for the creels. The experimental cc[l] is
modelled by the function cc(l, v) which has the following form (Herrmann et al., 2017): $cc(l,v)=exp(f(l,v0,…,vk))1+exp(f(l,v0,…,vk)),$(3) where f is a polynomial of order k with coefficients v[0] to
v[k]. Thus cc(l, v) expresses the probability of finding an individual of length l, in the catch of one of the deployments with 2-day soak time, given that it is found in the catch of either
deployments. The values of the parameters v in cc(l, v) are estimated by minimizing expression (1). We considered f of up to an order of 4 with parameters v[0], v[1], v[2], v[3] and v[4]. Leaving out
one or more of the parameters v[0], v[1], v[2], v[3] and v[4], led to 31 additional models that were also considered as potential models for the catch comparison cc(l, v) between 1-day and 2-day
deployments. To combine estimates from the 31 models multi-model averaging was used (Burnham and Anderson, 2002) following the procedure described in Herrmann et al. (2017). We use the name combined
model for the results of this multi-model averaging.
The ability of the combined model to describe the experimental data was evaluated based on the p-value, which quantifies the probability of obtaining by coincidence at least as big a discrepancy
between the experimental data and model estimates, assuming the model is correct. Therefore, this p-value, which was calculated based on the model deviance and the degrees of freedom, should not be
<0.05 for the combined model to describe the experimental data sufficiently well (Wileman et al., 1996). In case of poor-fit statistics (p-value<0.05; deviance/DOF≫1), the deviations between the
experimental points and the fitted curve were examined to determine if this was due to structural problems in describing the experimental data with the model or due to the overdispersion in the data.
The confidence limits for the combined model were estimated using the double bootstrap method for unpaired data described in Herrmann et al. (2017). This method accounted for between-deployment
variation in the availability of crustaceans and fish, and creel catch performance, by selecting q1 longline deployments with replacement from the pool of 1-day deployments and q2 longline
deployments with replacement from the pool of 2-day deployments, during each bootstrap iteration. The within-deployment uncertainty in the size structure of the catch was accounted for by randomly
selecting crustaceans or fish with replacement from each of the selected longlines separately. The number of individuals selected from each deployment was the same as the number of crustaceans caught
with that deployment of the longline. These data were then combined, and the catch comparison curve was estimated. For each species, 1000 bootstrap repetitions were performed and 95% Efron percentile
confidence intervals were estimated (Efron, 1982). To identify the sizes of crustaceans or fish with significant difference in catch performance length classes in which the confidence limits for the
combined catch comparison curve did not contain the q2/(q1+q2) baseline value were checked.
2.3 Estimation of the catch ratio curve
The catch comparison rate cc(l, v) cannot be used to quantify directly the ratio between the catch efficiency of longline deployed for one or two days for crustaceans with carapace length or width l
or fish of total length l. Instead, the catch ratio cr(l, v) was used. For the experimental data, the average catch ratio for length class l is: $crl=1q2∑j=1q2n2lj1q1∑i=1q1n1li.$(4)
Simple mathematical manipulation based on (2) and (4) yields the following general relationship between the catch ratio and the average catch comparison rate cc[l]: $crl=q1×cclq2×(1−ccl),$(5) which
also means that the same relationship exists for the functional forms: $cr(l,v)=q1×cc(l,v)q2×(1−cc(l,v)).$(6)
One advantage of using the catch ratio as defined by (6) is that it gives a direct relative value of the catch performance between longline deployments with one or two days soak time. Furthermore, it
provides a value independent of the number of deployments. Thus, if the catch performance of 1-day and 2-day longlines is equal, cr(l, v) should be 1. A cr(l, v)=1.3 would mean that 2-day longlines
were catching on average 30% more individuals of length l than 1-day longlines.
If doubling the soak time from one to two days led to a proportional increase in catch performance, cr(l, v) should be 2. Therefore, the catch ratio was checked against the baseline 2.
Using equation (6) and incorporating the calculation of cr(l, v) for each length class l into the double bootstrap procedure, the confidence intervals for the catch ratio were estimated.
2.4 Estimation of length-integrated catch ratio
A length-integrated average value for the catch ratio was calculated as follows (Herrmann et al., 2017): $c¯r¯=1q2∑l∑j=1q2n2lj1q1∑l∑i=1q1n1li,$(7) where the outer summation is over the length classes
in the catch.
By incorporating $c¯r¯$ into each of the bootstrap iterations, it was possible to assess the corresponding 95% confidence limits. The value of $c¯r¯$ was used to provide a length-averaged value for
the effect of increasing soak time from one day to two days on creel catch performance.
In contrast to the length-dependent catch ratio, $c¯r¯$ is specific for the population structure encountered during the experimental sea trials. Therefore, its value is specific for the size
structure in the fishery at the time the trials were carried out, and can therefore not be extrapolated to other situations in which the size or sex structure of the catch composition may be
The analysis described above was conducted separately for each of the five species sampled. For Norway lobster the analysis was first performed separately for females and males. If the confidence
intervals of the catch comparison and catch ratio curves overlapped, female and male data were pooled and additional analysis based on the pooled dataset was performed.
The relationships between the number of Norway lobsters and number of crabs and between the number of Norway lobster and the total number of bycatch specimens caught in each creel longline, were
quantified using Spearman's rank correlation coefficient, separately for longlines soaked for one and two days.
3 Results
During 16 one-day fishing trips a total of 47 longlines were soaked for one day, and 33 for two days. There was no significant difference in water depth between treatments (72.12±1.65m for 1-day,
72.15±2.55m for 2-day; mean+SD). Altogether, 302 Norway lobsters, 353 mantis shrimps, 1137 blue-leg swimming crabs, 68 poor cods and 214 blotched picarel were caught (Tab. 1).
The estimated length-dependant catch comparison rates for Norway lobster females and males showed that the curves in both cases reflect the main trend in the experimental data (Fig. 3). The p-value
obtained for the model fit for Norway lobster females (Tab. 2) was <0.05, but after inspecting the residuals of the fit (see Fig. A1 in the Appendix A) this was considered to be due to the
overdispersion in the data (Wileman et al., 1996). Since the effect of the soak time on the catches of Norway lobster females and males was not significant, the data were pooled and additional
analysis based on the pooled data was performed.The estimated length-dependant catch comparison rates for crustacean and fish species, with the 1-day longlines as a baseline, reflected the trends in
the experimental data well (Figs. 4 and 5). However, the p-values obtained for the model fits for Norway lobster and blotched picarel were below 0.05 (Tab. 2), potentially indicating that the chosen
model was inappropriate for describing the experimental data. Given that no systematic patterns were observed after inspecting the residuals of the fits (see Fig. A1 in the Appendix A), the poor p
-values obtained for these species were considered to be due to the overdispersion in the data (Wileman et al., 1996). Therefore, we are confident in using the models to assess the difference in
catch performance between longlines soaked for one and two days also for Norway lobster and blotched picarel.
The quantitative difference in catch performance between the1-day and 2-day longlines is evident from the catch ratio curves (right column of Figs. 4 and 5). The solid black lines in these figures
represent the estimated catch ratio curves, while horizontal dashed lines represent the baselines of no effect.
To investigate if the catch performance was proportional to soak time, the fitted models were compared to the reference value cr(l, v)=2 which is expected if longlines soaked for two days were
catching twice as much as longlines soaked for one day (dot-dashed lines in Figs. 4 and 5). Catch ratios were significantly below 2 for Norway lobster individuals up to ∼61mm CL, for mantis shrimp
lengths from 25 to ∼29mm and ∼35 to ∼39mm CL, for blue-leg swimming crab lengths from ∼30 to 44mm CL and for blotched picarel larger than 15cm Lt. This demonstrated that doubling the soak time
from one to two days did not double catches for those species in the above described size intervals. The results obtained for poor cod were inconclusive since both cr(l, v)=1 and cr(l, v)=2
baselines were inside the 95% confidence intervals of the estimated catch ratio curve. For Norway lobster doubling soak time did not even indicate any increase in the catch because the estimated
catch ratio curve was slightly below 1, and this baseline was inside the confidence interval of the curve (Fig. 4). For other crustaceans, a slight but not significant increase was found. For the
blotched picarel, catches were significantly lower with the longer soak time for individuals above 16cm Lt (Fig. 5). For poor cod, results indicated a slight, although not significant, increase over
the entire length range, possibly due to the wide confidence bands.
Finally, for all analysed species, the $c¯r¯$ values showed the same pattern as described by the length-dependent results, showing that catches did not increase proportionally with soak time (Fig. 6
There was no significant relationship between the number of Norway lobsters and the crab bycatch specimens caught in each creel longline soaked for one (rho=−0.10, p=0.48) and two days (rho=
0.08, p=0.66). Also, no significant relationship was detected between the number of Norway lobsters and the total bycatch specimens caught in each creel longline soaked for one (rho=−0.21, p=
0.15) and two days (rho=0.08, p=0.67).
Table 1
Catch summary table; N: average number of individuals caught per each creel longline; CL: carapace length; Lt: total length; Lt stuck: average length of individuals stuck in the creel meshes; SD:
standard deviation.
Fig. 3
Catch comparison rates (left column) and catch ratio rates (right column) for the longline deployments with one and two days soaking time (solid black curves) for females and males of Norway
lobster. Dots represent the experimental rates. Thin black dotted curves represent the 95% CI for the catch comparison curves. Dark grey solid curves (left column) represent summed and raised catch
populations for deployments with two day soaking time. Dark grey dashed curves (left column) represent summed and raised catch population for deployments with one day soaking time. Dark grey dashed
curves (right column) represent total summed and raised catch population for one and two day soaking time. Horizontal dark grey dashed lines represent baselines for no effect of soaking time on the
catch performance. Horizontal dark grey dot-dashed line represents line where longlines soaked for two days are catching twice as much as longlines soaked for one day. Female: Norway lobster
females; male: Norway lobster males.
Table 2
Fit statistics for the combined catch comparison curves. DOF: degrees of freedom.
Fig. 4
Catch comparison rate (left column) and catch ratio rate (right column) for the longline deployments with one and two days soaking time (solid black curves) for crustacean species. Dots represent
the experimental rates. Thin black dotted curves represent the 95% CI for the catch comparison curves. Dark grey solid curves (left column) represent summed and raised catch populations for
deployments with two day soaking time. Dark grey dashed curves (left column) represent summed and raised catch population for deployments with one day soaking time. Dark grey dashed curves (right
column) represent total summed and raised catch population for both deployments with one and two day soaking time. Horizontal dark grey dashed lines represent baselines for no effect of soaking
time on the catch performance. Horizontal dark grey dot-dashed line represents line where longlines soaked for two days are catching twice as much as longlines soaked for one day. Performance is
proportional to soaking time. NEP: Norway lobster; MTS: mantis shrimp; IOD: blue-leg swimming crab.
Fig. 5
Catch comparison rate (left column) and catch ratio rate (right column) for the longline deployments with one and two days soaking time (solid black curves) for fish species. Dots represent the
experimental rates. Thin black dotted curves represent the 95% CI for the catch comparison curves. Dark grey solid curves (left column) represent summed and raised catch populations for deployments
with two day soaking time. Dark grey dashed curves (left column) represent summed and raised catch population for deployments with one day soaking time. Dark grey dashed curves (right column)
represent total summed and raised catch population for both deployments with one and two day soaking time. Horizontal dark grey dashed lines represent baselines for no effect of soaking time on the
catch performance. Horizontal dark grey dot-dashed line represents line where longlines soaked for two days are catching twice as much as longlines soaked for one day. POD: Poor cod; SPFX: Blotched
Fig. 6
Estimated values of average catch ratio for longlines soaked for one and two days, with one day soak time as a baseline. Horizontal dark grey dashed lines represent baselines for no effect of
soaking time on the catch performance. Horizontal dark grey dot-dashed line represents line where longlines soaked for two days are catching twice as much as longlines soaked for one day. NEP:
Norway lobster; MTS: mantis shrimp; IOD: blue-leg swimming crab; POD: poor cod; SPFX: blotched picarel.
4 Discussion
The aim of this study was to investigate the influence of soak time on catch performance of commercial creels targeting Norway lobster in the Adriatic Sea. We specifically wanted to investigate if
creel catch performance was proportional to soak time by comparing catch performance of creels soaked for one and two days. It is advantageous for fishermen to know how catches, especially those of
Norway lobster, are influenced by soak time, since this could enable them to adjust their fishing strategy accordingly and potentially reduce the costs of fishing. This could be achieved if longer
soak times resulted in higher catches. Nevertheless, this would not necessarily imply a net advantage of increasing soak time, because if fisherman sets creels every second day, compared to every
day, the costs of fishing would be cut in half, resulting in comparable net earnings over two days even with lower catches.
The results of our study demonstrated that doubling the soak time from one to two days did not result in doubled catches, and for Norway lobster there was even no indication of any catch increase.
For other crustaceans, a small but non-significant increase was estimated. For the blotched picarel, significantly more individuals were caught in creels soaked for one day than in those soaked for
two days. Since this was observed for individuals larger than ∼16cm, it may indicate that blotched picarel was utilizing the creel entrances to escape.
Total catches increased to a lesser extent than expected with the increase of soak time, which indicates a decrease in creel catch per unit of effort. In this respect, our results are in line with
the findings of Miller (1978), who showed that creel catch ability decreases with longer soak time due to gear saturation. However, there are other potential reasons for why longer soak time, in our
study, did not increase Norway lobster catches accordingly. For example, Morello et al. (2009) speculated that the large number of blue-leg swimming crabs feeding on the bait inside the creel
diminished the strength of the bait, thus reducing the attractive power of the creel over time. Since the blue-leg swimming crab was the most abundant species in our creel catches, the proposed
mechanism of Morello et al. (2009) might explain the reduction in catch ability with increasing soak time found in our study. Furthermore, it can be speculated that the bait strength decreased with
time due to the presence of small scavenger species feeding on bait, which are too small to be caught by the creels. This explanation is based on the results reported by Panfili et al. (2007) and
Morello et al. (2009), who showed that in the Adriatic Sea (Pomo pit), small scavenger species (mainly Natatolana borealis) consume up to 50% of the bait within 6h of the creel deployment and up to
100% within 24h. Although there is no evidence that small scavenger species were present in the area during the fishing trials, this possibility should not be disregarded as, following the common
fishing practice in the area, the bait was unprotected and accessible to the various organisms entering the creel. Therefore, it would be highly relevant to investigate if the catch ability of the
creels would improve with increased soak time if the bait was protected.
The limited workspace on the fishing vessel prevented collection of data and analysis at the creel level. However, the analysis performed on the longline level did not show any significant
correlation between the number of bycatch species and the number of Norway lobsters caught in the creels.
For practical reasons, the data in this study were not collected in pairs, which is why the unpaired catch comparison method was used for the analysis. The uncertainty in the estimates resulting both
from the variation in the availability of target species in the study area, and the uncertainty in the size structure of the catch, was accounted for by using the double bootstrap method. This method
has been previously used by Notti et al. (2016), to compare the catch efficiency of traditional boat seines with experimental surrounding nets without the purse line. Using the same approach Herrmann
et al. (2017) investigated the effect of gear design changes on catch efficiency of the Spanish longline fishery, and Sistiaga et al. (2015, 2016) explored the effect of lifting the sweeps in
Norwegian bottom trawls. The current study is the first to apply this method to a creel fishery and it demonstrates its utility for investigating factors potentially influencing creel catch
performance. However, the results of this study are specific for the creel design and baiting system used in the area, so it requires precaution when extrapolating the results to other Norway lobster
creel fisheries.
The research leading to this paper was funded by the Croatian Ministry of Agriculture. The authors would like to thank Captain Ivo Tomaš for his help with the construction of control creels and for
allowing us to join him during his regular fishing trips. We would also like to thank Marinko Ivandić and Mateja Baranović for their valuable help during the fieldwork, to Goran Bojanić for the
illustrations used in the manuscript and Katija Ferri for the language revision. We are also grateful to the Editor and the reviewers for their suggestions, which helped us to improve the manuscript
Appendix A
Fig. A1
Residuals of the model fits for all species. NEP: Norway lobster; MTS: mantis shrimp; IOD: blue-leg swimming crab; POD: Poor cod; SPFX: Blotched picarel. Pooled: based on the pooled female and male
• Adey JM. 2007. Aspects of the sustainability of creel fishing for Norway lobster, Nephrops norvegicus (L.), on the west coast of Scotland, PhD thesis. [Google Scholar]
• Bjordal A. 1986. The behaviour of Norway lobster towards baited creel and size selection of creels and trawls. Fiskeridirektoratets Skrifter Serie Havundersokelser (Report on Norwegian Fishery
and Marine Investigations), Vol. 18, pp. 131–137. [EDP Sciences] [Google Scholar]
• Burnham KP, Anderson DR. 2002. Model selection and multimodel inference: a practical information-theoretic approach, 2nd edn., Springer, New York. [Google Scholar]
• Efron B. 1982. The jackknife, the bootstrap and other resampling plans. SIAM Monograph No. 38, CBSM-NSF. [CrossRef] [Google Scholar]
• Herrmann B, Sistiaga M, Nielsen KN, Larsen RB. 2012. Understanding the size selectivity of redfish (Sebastes spp.) in North Atlantic trawl codends. J Northwest Atl Fish Sci 44: 1–13. [CrossRef]
[Google Scholar]
• Herrmann B, Sistiaga M, Rindahl L, Tatone I. 2017. Estimation of the effect of gear design changes in catch efficiency: methodology and a case study for a Spanish longline fishery targeting hake
(Merluccius merluccius). Fish Res 185: 153–160. [CrossRef] [Google Scholar]
• Krag LA, Herrmann B, Feekings J, Karlsen JD. 2016. Escape panels in trawls − a consistent management tool? Aquat Liv Res 29: 306. [CrossRef] [EDP Sciences] [Google Scholar]
• Miller RJ. 1978. Entry of cancer productus to baited traps. ICES J Mar Sci 38: 220–225. [CrossRef] [Google Scholar]
• Morello EB, Antolini B, Gramitto ME, Atkinson RJA, Froglia C. 2009. The fishery for Nephrops norvegicus (Linnaeus, 1758) in the central Adriatic Sea (Italy): preliminary observations comparing
bottom trawl and baited creels. Fish Res 95: 325–331. [CrossRef] [Google Scholar]
• Notti E, Brčić J, Carlo FD, Herrmann B, Lucchetti A, Virgili M, Sala A. 2016. Assessment of the relative catch performance of a surrounding net without the purse line as an alternative to a
traditional boat seine in small-scale fisheries. Mar Coast Fish 8: 81–91. [CrossRef] [Google Scholar]
• Panfili M, Morello EB, Froglia C. 2007. The impact of scavengers on the creel fishery for Nephrops norvegicus in the central Adriatic sea. Rapp Comm Int Medit, 38. [Google Scholar]
• Regulation (EU) No. 13080 /2013, of the European Parliament and of the Council of 11 December 2013 on the Common Fisheries Policy, amending Council Regulations (EC) No. 1954/2003 and (EC) No.
1224/2009 and repealing Council Regulations (EC) No. 2371/2002 and (EC) No. 639/2004 and Council Decision 2004/585/EC. Official Journal of the European Union L 354. [Google Scholar]
• Santos J, Herrmann B, Otero P, Fernandez J, Perez N. 2016. Square mesh panels in demersal trawls: does lateral positioning enhance fish contact probability? Aquat Liv Res 29: 302. [CrossRef] [EDP
Sciences] [Google Scholar]
• Sistiaga M, Herrmann B, Grimaldo E, Larsen RB, Tatone I. 2015. Effect of lifting the sweeps on bottom trawling catch efficiency: a study based on the Northeast arctic cod (Gadus morhua) trawl
fishery. Fish Res 167: 164–173. [CrossRef] [Google Scholar]
• Sistiaga M, Herrmann B, Grimaldo E, Larsen RB, Tatone I. 2016. The effect of sweep bottom contact on the catch efficiency of haddock (Melanogrammus aeglefinus). Fish Res 179: 302–307. [CrossRef]
[Google Scholar]
• Ungfors A, Bell E, Johnson ML, Cowing D, Dobson NC, Bublitz R, Sandell J. 2013. Chapter 7– Nephrops fisheries in European waters, in: Magnus L.J, Mark P.J (Eds.), Adv Mar Biol, Academic Press,
pp. 247–314. [CrossRef] [PubMed] [Google Scholar]
• Wileman D, Ferro RST, Fonteyne R, Millar RB. 1996. Manual of methods of measuring the selectivity of towed fishing gears. ICES Cooperative Research Report No. 215, p. 132. [Google Scholar]
Cite this article as: Brčić J, Herrmann B, Mašanović M, Šifner SK, Škeljo F. 2017. Influence of soak time on catch performance of commercial creels targeting Norway lobster (Nephrops norvegicus) in
the Mediterranean Sea. Aquat. Living Resour. 30: 36
All Tables
Table 1
Catch summary table; N: average number of individuals caught per each creel longline; CL: carapace length; Lt: total length; Lt stuck: average length of individuals stuck in the creel meshes; SD:
standard deviation.
Table 2
Fit statistics for the combined catch comparison curves. DOF: degrees of freedom.
All Figures
Fig. 1
Map of the survey area showing the position of the creel longlines with one day (solid circles) and two day soak times (open circles).
In the text
Fig. 2
Photo and technical drawing of the creel used in the study and schema view of the deployment in the longline system.
In the text
Fig. 3
Catch comparison rates (left column) and catch ratio rates (right column) for the longline deployments with one and two days soaking time (solid black curves) for females and males of Norway
lobster. Dots represent the experimental rates. Thin black dotted curves represent the 95% CI for the catch comparison curves. Dark grey solid curves (left column) represent summed and raised catch
populations for deployments with two day soaking time. Dark grey dashed curves (left column) represent summed and raised catch population for deployments with one day soaking time. Dark grey dashed
curves (right column) represent total summed and raised catch population for one and two day soaking time. Horizontal dark grey dashed lines represent baselines for no effect of soaking time on the
catch performance. Horizontal dark grey dot-dashed line represents line where longlines soaked for two days are catching twice as much as longlines soaked for one day. Female: Norway lobster
females; male: Norway lobster males.
In the text
Fig. 4
Catch comparison rate (left column) and catch ratio rate (right column) for the longline deployments with one and two days soaking time (solid black curves) for crustacean species. Dots represent
the experimental rates. Thin black dotted curves represent the 95% CI for the catch comparison curves. Dark grey solid curves (left column) represent summed and raised catch populations for
deployments with two day soaking time. Dark grey dashed curves (left column) represent summed and raised catch population for deployments with one day soaking time. Dark grey dashed curves (right
column) represent total summed and raised catch population for both deployments with one and two day soaking time. Horizontal dark grey dashed lines represent baselines for no effect of soaking
time on the catch performance. Horizontal dark grey dot-dashed line represents line where longlines soaked for two days are catching twice as much as longlines soaked for one day. Performance is
proportional to soaking time. NEP: Norway lobster; MTS: mantis shrimp; IOD: blue-leg swimming crab.
In the text
Fig. 5
Catch comparison rate (left column) and catch ratio rate (right column) for the longline deployments with one and two days soaking time (solid black curves) for fish species. Dots represent the
experimental rates. Thin black dotted curves represent the 95% CI for the catch comparison curves. Dark grey solid curves (left column) represent summed and raised catch populations for deployments
with two day soaking time. Dark grey dashed curves (left column) represent summed and raised catch population for deployments with one day soaking time. Dark grey dashed curves (right column)
represent total summed and raised catch population for both deployments with one and two day soaking time. Horizontal dark grey dashed lines represent baselines for no effect of soaking time on the
catch performance. Horizontal dark grey dot-dashed line represents line where longlines soaked for two days are catching twice as much as longlines soaked for one day. POD: Poor cod; SPFX: Blotched
In the text
Fig. 6
Estimated values of average catch ratio for longlines soaked for one and two days, with one day soak time as a baseline. Horizontal dark grey dashed lines represent baselines for no effect of
soaking time on the catch performance. Horizontal dark grey dot-dashed line represents line where longlines soaked for two days are catching twice as much as longlines soaked for one day. NEP:
Norway lobster; MTS: mantis shrimp; IOD: blue-leg swimming crab; POD: poor cod; SPFX: blotched picarel.
In the text
Fig. A1
Residuals of the model fits for all species. NEP: Norway lobster; MTS: mantis shrimp; IOD: blue-leg swimming crab; POD: Poor cod; SPFX: Blotched picarel. Pooled: based on the pooled female and male
In the text
Current usage metrics show cumulative count of Article Views (full-text article views including HTML views, PDF and ePub downloads, according to the available data) and Abstracts Views on
Vision4Press platform.
Data correspond to usage on the plateform after 2015. The current usage metrics is available 48-96 hours after online publication and is updated daily on week days.
Initial download of the metrics may take a while. | {"url":"https://www.alr-journal.org/articles/alr/full_html/2017/01/alr170035/alr170035.html","timestamp":"2024-11-13T15:50:48Z","content_type":"text/html","content_length":"127514","record_id":"<urn:uuid:397ed1a2-cf7f-42ba-b144-ece9667e450d>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00572.warc.gz"} |
Extrusion is a geometry node that sequentially stretches a 2D cross section along a 3D-spine path in the local coordinate system, creating an outer hull. Scaling and rotating the crossSection 2D
outline at each control point can modify the outer hull of the Extrusion to produce a wide variety of interesting shapes.
The Extrusion node belongs to the Geometry3D component and requires at least level 4, its default container field is geometry. It is available since VRML 2.0 and from X3D version 3.0 or higher.
1 + X3DNode
2 + X3DGeometryNode
3 + Extrusion
SFNode [in, out] metadata NULL [X3DMetadataObject]
Information about this node can be contained in a MetadataBoolean, MetadataDouble, MetadataFloat, MetadataInteger, MetadataString or MetadataSet node.
MFVec2f [in] set_crossSection (-∞,∞)
The crossSection array defines a silhouette outline of the outer Extrusion surface. crossSection is an ordered set of 2D points that draw a piecewise-linear curve which is extruded to form a series
of connected vertices.
• This field is not accessType inputOutput since X3D browsers might use different underlying geometric representations for high-performance rendering, and so output events are not appropriate.
• If the order of crossSection point definition does not match clockwise/counterclockwise setting of ccw field, then self-intersecting, impossible or inverted geometry can result!
• It is an error to define this transient inputOnly field in an X3D file, instead only use it a destination for ROUTE events.
MFRotation [in] set_orientation [-1,1] or (-∞,∞)
The orientation array is a list of axis-angle 4-tuple values applied at each spine-aligned cross-section plane.
• If the orientation array contains a single 4-tuple value, it is applied at all spine-aligned crossSection planes.
• Number of values must all match for 3-tuple spine points, 2-tuple scale values, and 4-tuple orientation values.
• This field is not accessType inputOutput since X3D browsers might use different underlying geometric representations for high-performance rendering, and so output events are not appropriate.
• It is an error to define this transient inputOnly field in an X3D file, instead only use it a destination for ROUTE events.
MFVec2f [in] set_scale (0,∞)
scale is a list of 2D-scale parameters applied at each spine-aligned cross-section plane.
• Number of values must all match for 3-tuple spine points, 2-tuple scale values, and 4-tuple orientation values.
• This field is not accessType inputOutput since X3D browsers might use different underlying geometric representations for high-performance rendering, and so output events are not appropriate.
• Zero or negative scale values not allowed.
• It is an error to define this transient inputOnly field in an X3D file, instead only use it a destination for ROUTE events.
MFVec3f [in] set_spine (-∞,∞)
The spine array defines a center-line sequence of 3D points that define a piecewise-linear curve forming a series of connected vertices. The spine is set of points along which a 2D crossSection is
extruded, scaled and oriented.
• The spine array can be open or closed (closed means that endpoints are coincident).
• Number of values must all match for 3-tuple spine points, 2-tuple scale values, and 4-tuple orientation values.
• This field is not accessType inputOutput since X3D browsers might use different underlying geometric representations for high-performance rendering, and so output events are not appropriate.
• Special care is needed if creating loops or spirals since self-intersecting, impossible or inverted geometry can result!
• It is an error to define this transient inputOnly field in an X3D file, instead only use it a destination for ROUTE events.
SFBool [ ] beginCap TRUE
Whether beginning cap is drawn (similar to Cylinder top cap).
• Since this field has accessType initializeOnly, the value cannot be changed after initial creation.
SFBool [ ] endCap TRUE
Whether end cap is drawn (similar to Cylinder bottom cap).
• Since this field has accessType initializeOnly, the value cannot be changed after initial creation.
SFBool [ ] solid TRUE
Setting solid true means draw only one side of polygons (backface culling on), setting solid false means draw both sides of polygons (backface culling off).
• Mnemonic “this geometry is solid like a brick” (you don’t render the inside of a brick).
• If in doubt, use solid=’false’ for maximum visibility.
• AccessType relaxed to inputOutput in order to support animation and visualization.
• Default value true can completely hide geometry if viewed from wrong side!
SFBool [ ] ccw TRUE
The ccw field indicates counterclockwise ordering of vertex-coordinates orientation.
• A good debugging technique for problematic polygons is to try changing the value of ccw, which can reverse solid effects (single-sided backface culling) and normal-vector direction.
• Consistent and correct ordering of left-handed or right-handed point sequences is important throughout the coord array of point values.
SFBool [ ] convex TRUE
The convex field is a hint to renderers whether all polygons in a shape are convex (true), or possibly concave (false). A convex polygon is planar, does not intersect itself, and has all interior
angles < 180 degrees.
• Concave is the opposite of convex.
• Select convex=false (i.e. concave) and solid=false (i.e. two-sided display) for greatest visibility of geometry.
• Concave or inverted geometry may be invisible when using default value convex=true, since some renderers use more-efficient algorithms to perform tessellation that may inadvertently fail on
concave geometry.
SFFloat [ ] creaseAngle 0 [0,∞)
creaseAngle defines angle (in radians) where adjacent polygons are drawn with sharp edges or smooth shading. If angle between normals of two adjacent polygons is less than creaseAngle, smooth shading
is rendered across the shared line segment.
• creaseAngle=0 means render all edges sharply, creaseAngle=3.14159 means render all edges smoothly.
MFVec2f [ ] crossSection [ 1 1, 1 -1, -1 -1, -1 1, 1 1 ] (-∞,∞)
The crossSection array defines a silhouette outline of the outer Extrusion surface. crossSection is an ordered set of 2D points that draw a piecewise-linear curve which is extruded to form a series
of connected vertices.
• The crossSection array can be open or closed (closed means that endpoints are coincident).
• Number of values must all match for 3-tuple spine points, 2-tuple scale values, and 4-tuple orientation values.
• If the order of crossSection point definition does not match clockwise/counterclockwise setting of ccw field, then self-intersecting, impossible or inverted geometry can result!
• Avoid self-intersecting polygon line segments, otherwise defined geometry is irregular and rendering results are undefined (especially for end caps).
MFRotation [ ] orientation 0 0 1 0 [-1,1] or (-∞,∞)
The orientation array is a list of axis-angle 4-tuple values applied at each spine-aligned cross-section plane.
• If the orientation array contains a single 4-tuple value, it is applied at all spine-aligned crossSection planes.
• Number of values must all match for 3-tuple spine points, 2-tuple scale values, and 4-tuple orientation values.
MFVec2f [ ] scale 1 1 (0,∞)
scale is a list of 2D-scale parameters applied at each spine-aligned cross-section plane.
• Number of values must all match for 3-tuple spine points, 2-tuple scale values, and 4-tuple orientation values.
• If the scale array contains one value, it is applied at all spine-aligned crossSection planes.
• Zero or negative scale values not allowed.
MFVec3f [ ] spine [ 0 0 0, 0 1 0 ] (-∞,∞)
The spine array defines a center-line sequence of 3D points that define a piecewise-linear curve forming a series of connected vertices. The spine is set of points along which a 2D crossSection is
extruded, scaled and oriented.
• The spine array can be open or closed (closed means that endpoints are coincident).
• Number of values must all match for 3-tuple spine points, 2-tuple scale values, and 4-tuple orientation values.
• If a spine is closed (or nearly closed) then the inner diameter usually needs to be greater than the corresponding crossSection width.
• Special care is needed if creating loops or spirals since self-intersecting, impossible or inverted geometry can result!
• Ensure that spine segments have non-zero length and are not coincident with each other.
• Take care to avoid defining parameter combinations that create self-intersecting, impossible or inverted geometry.
See Also | {"url":"https://create3000.github.io/x_ite/components/geometry3d/extrusion/","timestamp":"2024-11-03T00:52:35Z","content_type":"text/html","content_length":"41183","record_id":"<urn:uuid:13413f59-2e60-4bc7-8e87-2f590ac776be>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00496.warc.gz"} |
Previous: zdrot Up: ../lapack-z.html Next: zgbcon
ZDRSCL - multiply an n-element complex vector x by the real
scalar 1/a
SUBROUTINE ZDRSCL( N, SA, SX, INCX )
INTEGER INCX, N
DOUBLE PRECISION SA
COMPLEX*16 SX( * )
ZDRSCL multiplies an n-element complex vector x by the real
scalar 1/a. This is done without overflow or underflow as
long as the final result x/a does not overflow or underflow.
N (input) INTEGER
The number of components of the vector x.
SA (input) DOUBLE PRECISION
The scalar a which is used to divide each component
of x. SA must be >= 0, or the subroutine will
divide by zero.
SX (input/output) COMPLEX*16 array, dimension
(1+(N-1)*abs(INCX)) The n-element vector x.
INCX (input) INTEGER
The increment between successive values of the vec-
tor SX. > 0: SX(1) = X(1) and SX(1+(i-1)*INCX) =
x(i), 1< i<= n
< 0: SX(1) = X(n) and SX(1+(i-1)*INCX) = x(n-i+1),
1< i<= n | {"url":"https://www.math.utah.edu/software/lapack/lapack-z/zdrscl.html","timestamp":"2024-11-09T22:50:36Z","content_type":"text/html","content_length":"2440","record_id":"<urn:uuid:b7b1922e-11f5-4323-bc65-db14214f66e2>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00860.warc.gz"} |
Material Time Derivative
Displacement and Strain: Material Time Derivative
Material Time Derivative
For time dependent deformations, the material time derivative, is the derivative of scalar, vector or tensor fields as a function of the material point with respect to time. Let
Notice that
In case where the fields are functions of the reference configuration, i.e.,
And the components of these quantities are given by:
In the following example, the scalar field temperature is used to illustrate the above concepts. Assuming a plate of width 4 units and height of 2 units is rotating around the origin with a speed of
The position function at time
The spatial velocity field is given by the following vector function:
The spatial gradient of temperature field is given by the following vector function:
The material time derivative of the temperature is given by the following scalar function:
The following tool draws the velocity vector field, the spatial gradient of temperature vector field and the contour plot of the material time derivative of the temperature. Change the value of | {"url":"https://engcourses-uofa.ca/displacement-and-strain/material-time-derivative/","timestamp":"2024-11-13T03:14:25Z","content_type":"text/html","content_length":"54098","record_id":"<urn:uuid:2dc1d462-d5a7-400a-9892-c0cc1e4d23d2>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00433.warc.gz"} |
Advanced Computing for Cardiovascular Disease Prediction
1. Introduction
The proper functioning of the cardiovascular system ensures a healthy heart, which is the most essential aspect of the well-being of our body. The problems that arise in the system cause
cardiovascular diseases (CVDs). According to National Health Services, UK, cardiovascular disease is a general term for the conditions affecting the heart or blood vessels. Generally, CVDs encompass
all types of diseases that affect the heart or blood vessels, including coronary heart disease, cerebrovascular disease, rheumatic heart disease, and other conditions like chest pain, stroke, and
heart attack [1] . The effects of behavioral risk factors for CVDs like unhealthy diet, physical inactivity, and consumption of tobacco and alcohol [2] , may appear as high blood pressure, blood
glucose, raised blood lipids, overweight, and obesity in an individual. [3] . Most patients experience shortness of breath, arm, shoulder, and chest pain along with an overall feeling of weakness.
These symptoms increase the risk of stroke, angina, and heart attack due to restricted or clogged blood vessels, which primarily cause the untimely death of patients [4] .
The World Health Organization reports that 32 percent of the 17 million deaths globally in 2019 that were related to non-communicable diseases were caused by CVDs [5] . More than 75 percent deaths
from the CVDs occur in least-developed countries and it badly affects the mid- and low-income people [6] . More than four out of five CVD deaths are due to heart attacks and strokes, and one-third of
these deaths occur prematurely in people under 70 years of age [7] . However, clinical decision-making during diagnosis and treatment is complex, and cardiologists face difficulties in detecting and
treating patients in the early stages [8] . Early and accurate detection and diagnosis of CVDs is a must to provide appropriate treatments to the patients, which helps to prevent the premature death
of the person [9] .
The diagnosis and treatment of CVDs rely on data in several forms, such as patient history, physical examination, laboratory data, and invasive and non-invasive imaging techniques [10] . Invasive
imaging techniques such as cardiac catheterization and intravascular ultrasound are associated with more risk factors and require a unique hospital setting [11] . Angiography is more dependable among
other non-invasive imaging techniques like X-ray and magnetic resonance imaging (MRI), however, it requires solid technological knowledge and has side effects [12] [13] [14] . Such conventional
methods are time-consuming and expensive, making detecting CVDs more complicated than it needs to be. On the other hand, machine learning (ML) can solve this issue by enabling an automatic method to
assess examples and draw consistent and accurate conclusions.
A higher degree of accuracy in predicting the risk of CVDs can be achieved by appropriately processing the data mined with various ML algorithms to identify patterns, trends, and relationships
between distinct parameters. Classification models, for example, make it possible to design more individualized and efficient treatment plans, improving patient care [15] . This study examines a
comparative computational approach, using supervised classification machine learning to forecast cardiovascular diseases. The research framework, outlined in Figure 1, progresses from basic to
advanced models, including artificial neural networks (ANN). Model selection is based on performance evaluation metrics including sensitivity, precision, F1 score, accuracy, and the area under the
Receiver Operating Characteristic (ROC) curve (AUC). Our work is organized into
Figure 1. Schematic diagram of working procedure.
several sections. Chapter 2 summarizes key findings from relevant literature. Chapter 3 delves into our methodology, encompassing discussions on data structure, data engineering, data visualization,
and concise descriptions of the models employed. Furthermore, Chapter 4 presents the results, followed by a detailed discussion that leads to the conclusive findings in Chapter 5.
2. Literature Review
Numerous studies have delved into machine learning methods for forecasting CVDs. The findings from these investigations consistently demonstrate the capability of machine learning to accurately
predict CVDs. Here we review a few literature and compare the results that have been obtained using different models.
Degroat et al. [16] combined classical statistical methods with advanced Machine Learning (ML) algorithms to improve disease prediction in CVD patients. They proposed four feature selection
algorithms: Chi-Square Test, Pearson Correlation, Recursive Feature Elimination (RFE), and Analysis of Variance (ANOVA). Using these methods, they identified 18 transcriptome biomarkers in the CVD
population with up to 96% predictive accuracy. The study included 61 CVD patients and 10 healthy individuals as controls.
In a study published in 2020, Drod et al. [17] employed machine learning, utilizing liver ultrasonography and biochemical analysis in 191 CVD patients, revealing the association between
metabolic-associated fatty liver disease (MAFLD) and CVD risk factors. They utilized techniques like principle component analysis (PCA) and logistic regression to construct a predictive model for
high CVD risk, focusing on diabetes duration, plaque scores, and hypercholesterolemia. Evaluation via receiver operating characteristic (ROC) curves yielded Area Under the ROC Curve (AUCs) ranging
from 0.84 to 0.87. The optimal model, utilizing five variables, accurately detected 85.11% of high-risk and 79.17% of low-risk patients. These findings emphasize ML’s utility in identifying MAFLD
patients at CVD risk based on readily available patient data.
Ambekar et al.’s paper [18] suggested employing unimodel disease risk algorithms based on convolutional neural networks to estimate a patient’s risk level based on the heart disease dataset, i.e.,
high or low. They carry out data imputation and cleaning procedures to turn unstructured data into structured data. Later on, the KNN and naïve bayes algorithm is applied to the input values and
heart disease is predicted based on this information. They contrast the outcomes of the KNN and Naïve Bayes algorithms, noting that NB has an accuracy of 82%, which is higher than that of the KNN
algorithm. They were able to forecast disease risk with about 65% accuracy because of the organized dataset.
Larroza et al. [19] employed machine learning and MRI texture features to differentiate between acute myocardial infarction (AMI) and chronic myocardial infarction (CMI). Analyzing 44 cases (22 AMI,
22 CMI) with cine and late gadolinium enhancement (LGE) MRI, 279 texture features were extracted from infarcted areas on LGE and the entire myocardium on cine. Classification was performed using
three prediction models: random forest, SVM with Gaussian kernel, and SVM with polynomial kernel. The study demonstrated that texture analysis when paired with machine learning, may effectively
differentiate between AMI and CMI on both LGE and cine MRI. The SVM with a polynomial kernel showed the best performance, achieving AUC of 0.86 ± 0.06 on LGE MRI (72 features) and 0.82 ± 0.06 on cine
MRI (75 features).
Oyewola et al. [20] compare different algorithms between long short-term memory (LSTM), feedforward neural network (FFNN), cascade forward neural network (CFNN), Elman Neural Network (ELMAN), and
ensemble deep learning (EDL) to predict the best model using the Kaggle cardiovascular dataset with 12 attributes and 70,000 patients. The EDL model surpasses other algorithms with an incredible
98.45% accuracy. After more investigation, EDL’s 100% classification accuracy for CVD diagnosis was shown to exceed LSTM, FFNN, CFNN, and ELMAN. Overall, the study suggests that EDL model could be a
robust tool for the early detection of CVDs.
Alaa et al. [21] analyzed 437 characteristics of 423,604 individuals in the UK Biobank who did not have CVDs at baseline. The AutoPrognosis model outperformed established techniques such as the
Framingham score area under the receiver operating characteristic curve (AUCROC: 0.724, 95%), Cox proportional hazards models with conventional risk factors (AUCROC: 0.734, 95%), and Cox proportional
hazards with all UK Biobank variables (AUCROC: 0.758, 95%) in terms of risk prediction (AUCROC: 0.774, 95%). AutoPrognosis correctly identified 368 more occurrences of CVDs after 5 years than the
Framingham score. They also emphasized how the addition of more risk variables outweighed than use of complex models.
3. Methodology
3.1. Data Engineering
The dataset analyzed in this study was sourced from the renowned data science community, Kaggle, specifically identified as the cardiovascular risk prediction dataset within Kaggle’s repository. This
dataset comes from the Behavioral Risk Factor Surveillance System (BRFSS), which is known as the nation’s top system for health-related telephone surveys. It contains a thorough collection of
health-related information. Comprising 19 carefully selected variables and spanning 308,854 rows, the dataset encapsulates various aspects of an individual’s lifestyle that may contribute to their
susceptibility to cardiovascular diseases. Through meticulous curation and analysis, this dataset offers valuable insights into the intricate interplay between lifestyle factors and cardiovascular
health, facilitating informed decision-making and intervention strategies in public health initiatives.
The dataset contained 19 variables, with 12 being categorical and the rest being of float or integer type. Some of the categorical columns included lengthy string names such as green vegetable
consumption or Fried potato consumption, which could be challenging to handle. Therefore, we opted to convert them into a more precise format using the renaming feature in pandas.
To enhance data clarity, we have restructured some features in our dataset. One significant change is seen in the Body Mass Index (BMI) and Age categories. Initially, BMI was represented as
individual values but is now categorized into standard groups. Similarly, Age, initially in-class format, has been transformed into categories for better understanding. The rationale behind this
modification, elucidated in detail in Table 1, facilitates clearer communication.
3.2. Data Visualization
Our scope encompasses not merely the focus on model performance, but also the illumination of critical insights extracted from our data analysis through visualization techniques. Employing methods
such as heat map visualization, we try to understand the important connection between heart disease and key features. In the subsequent analysis, we present various variables with instances of heart
disease, elucidating their significance and implications.
Table 1. Categories of age and BMI.
In Figure 2, we have highlighted some important factors closely tied to heart disease using histograms. Figure 2(a), shows how heart disease relates to people’s body weights. For example, among those
with a healthy weight (BMI: 18.5 - 25), only about 6% have heart disease. But among those categorized as overweight or obese, the rates are higher: around 15% for overweight, 22% for moderately
obese, and approximately 25% for severely obese individuals. For the corresponding BMI ranges please refer to Table 1.
In addition, Figure 2(b) elucidates the relationship between smoking and the prevalence of cardiovascular disease. In terms of percentage, heart disease affects approximately 11.6% of smokers while
it affects only 5.6% of non-smokers. This clearly indicates that smokers are more likely to get affected by heart disease. Figure 2(c) and Figure 2(d), show that individuals with diabetes and
arthritis are more susceptible to heart disease on their own. Based on statistical data, around 20.85% of individuals with diabetes have heart disease, whereas only 6.06% of non-diabetics have the
condition. Furthermore, an examination of the arthritis data shows that 14.09% of those with arthritis also had heart disease, compared to 5.43% of those without arthritis.
Figure 3 illustrates the crucial role of regular exercise and checkups in mitigating the risk of heart disease. In Figure 3(a), we observe a comparison of heart disease cases relative to regular
exercise habits. The data indicates that 8% of individuals who do not engage in regular exercise are afflicted with heart disease, compared to only 3% among those who exercise regularly. Similarly,
in Figure 3(b), we analyze heart disease cases in relation to regular checkup habits. The findings suggest that individuals who undergo regular checkups are at a lower risk of developing heart
disease compared to those who schedule checkups every two years, every five years, or never. These insights underscore the significance of maintaining both a regular exercise routine and a consistent
checkup schedule in reducing the likelihood of heart disease or facilitating early detection.
Figure 2. A visual display illustrating how our target variable is related to different key features.
Figure 3. Histograms depicting the inverse correlation between specific features and the target variable.
3.3. Used Models
This study employs four supervised machine learning algorithms and one deep learning algorithm for training, validating, and testing the dataset. Below, we provide a brief overview of each model
3.3.1. Logistic Regression
The method of modeling the probability of a discrete result given input variables is known as logistic regression. The most frequent logistic regression models include a binary result, which can
accept two values like true/false, yes/no scenarios. Multinomial logistic regression can be used to model events with more than two discrete outcomes. Despite its name, logistic regression is a
classification model rather than a regression model. For binary and linear classification problems, logistic regression is a simpler and more efficient method [22] . Furthermore, logistic regression,
unlike linear regression, does not require a linear connection between input and output variables [23] . In logistic regression, the goal is to model the probability that a given instance belongs to
a particular class (positive or negative). The logistic regression model uses a linear combination of input features, transformed by a sigmoid function. The linear combination is represented as:
$z={b}_{0}+{b}_{1}\cdot {x}_{1}+{b}_{2}\cdot {x}_{2}+\cdots +{b}_{k}\cdot {x}_{k}$(1)
where ${b}_{0}$ is the bias term and ${b}_{1},{b}_{2},\cdots ,{b}_{k}$ are the coefficients associated with the features ${x}_{1},{x}_{2},\cdots ,{x}_{k}$ . The probability of belonging to the
positive or negative class is then given by the sigmoid function:
The logistic regression model predicts the class based on whether the probability is above a certain threshold (typically 0.5) [22] .
3.3.2. Decision Tree
A well-known machine learning technique called a decision tree divides data into multiple groups according to predetermined criteria. Nodes and leaves are the two navigable components of the tree.
Decision nodes divide data, whereas leaves represent choices or results. Combining decision trees with other techniques can help solve problems (ensemble learning). Using fundamental decision rules
generated from data properties, the objective is to construct a model that predicts the value of a target variable. A piecewise constant approximation can be seen in a decision tree [24] [25] .
Assume that samples with quantities for the categorical attribute “A” that have “n” different potential values make up training dataset “D”. The parameter and information obtained for attribute “A”
can be obtained using the following formula:
$Gain\left(A,D\right)=Entropy\left(D\right)-\underset{i=1}{\overset{n}{\sum }}\frac{|{D}_{i}|}{|D|}\cdot Entropy\left({D}_{i}\right)$(3)
Here, ${D}_{i}$ represents the subset of instances in D where attribute A has the i-th value. $|{D}_{i}|$ is the number of instances in ${D}_{i}$ . $Entropy\left(D\right)$ is the entropy of the
dataset D. $Entropy\left({D}_{i}\right)$ is the entropy of the i-th subset ${D}_{i}$ [26] .
3.3.3. Random Forest
Random forest is a popular machine learning technique that combines the output of numerous decision trees to produce a single conclusion. Its ease of use and flexibility, as well as its ability to
tackle classification and regression challenges, has boosted its popularity [26] [27] . The random forest algorithm is a bagging method extension that uses both bagging and feature randomness to
produce an uncorrelated forest of decision trees. The forecast determination will differ depending on the type of difficulty. Individual decision trees will be averaged for a regression task, and a
majority vote—i.e. the most common categorical variable—will produce the predicted class for a classification problem. Finally, the odd sample is used for cross-validation, which completes the
prediction [23] . In order to minimize overfitting and enhance the model’s capacity for generalization, randomization is incorporated into the feature selection process as well as the data selection
process. The Random Forest technique is made more effective overall by the Gini impurity, which is used as a criterion for dividing nodes in the decision trees. The Gini impurity for a set with n
classes is calculated using the formula:
$\text{Gini}=1-\underset{i=1}{\overset{n}{\sum }}{\left({p}_{i}\right)}^{2}$(4)
where ${p}_{i}$ is the probability of an object being classified into the i-th class. In the context of decision trees and random forests, gini impurity is used as a criterion to evaluate the purity
of a node. The algorithm aims to split nodes in a way that minimizes the gini impurity [26] [27] .
3.3.4. Extreme Gradient Boosting
Extreme gradient boosting (XGBoost) is a type of ensemble machine learning method that may be used to solve predictive modeling tasks like classification and regression. It is extremely effective as
well as computationally efficient [28] . Using an ensemble approach called “boosting,” errors produced by previous models are corrected by adding new models. To minimize the loss when adding new
models, it makes use of a gradient descent approach [29] . Consider a dataset $\left({X}_{i},{Y}_{i}\right)$ having M features and N records. T predict the best output $\stackrel{^}{Y}$ , we need the
best set of function to minimize the overall loss such that,
$\mathcal{l}\left(\varphi \right)=\underset{i}{\sum }\text{ }\text{ }L\left({Y}_{i},{\stackrel{^}{Y}}_{i}\right)+\underset{k}{\sum }\text{ }\text{ }\Omega \left({f}_{k}\right)$(5)
The loss function, denoted by $L\left({Y}_{i},{\stackrel{^}{Y}}_{i}\right)$ , is the difference between the actual output ( ${Y}_{i}$ ) and the projected output ${\stackrel{^}{Y}}_{i}$ . Where ${\sum
}_{k}\text{ }\text{ }\Omega \left({f}_{k}\right)$ indicates the complexity of the model, this helps prevent the model from being overfit [30] .
3.3.5. Sequential Model
A sequential model is a fundamental type of model used to construct neural networks layer by layer, following a sequential order. In this project, a sequential model is composed of several layers
interconnected within the widely-used deep learning framework, Keras. Deep learning, a subset of machine learning, utilizes artificial neural networks (ANNs) which are computer algorithms inspired by
the biological functioning of the human brain in processing information. Instead of learning through programming, ANNs are trained through experience by looking for patterns and relationships in data
[31] [32] . Figure 4 displays the model’s flow. The dense layer, which is the fully connected layer, receives the characteristics, activates using Relu function and passes it forward. Other dense
layer gives the output with the help of sigmoid activation function [33] .
3.4. Hyper-Parameter Tuning
In this study, we did not explicitly tune or use any specific hyperparameters for machine-learning models except for deep learning. The deep learning sequential model underwent careful tuning for
optimal performance. This included setting the learning rate to 0.001 for the Adam optimizer, utilizing a first Dense layer with 256 neurons and a rectified linear unit (ReLU) activation function,
and employing a sigmoid activation function in the output layer for classification tasks. Additionally, we implemented an early stopping mechanism with a patience of 3 epochs, monitoring validation
loss and restoring the best weights when training stops, ensuring the model’s robustness and generalization ability.
3.5. Model Evaluation
We assess the developed machine learning model using performance metrics like precision, recall, F1 score, accuracy, ROC curve and AUC. Additionally, we evaluate the deep learning model for
predicting cardiovascular disease based on accuracy and loss curves. The essential criteria for evaluating machine learning models are derived from the components of the confusion matrix. Table 2
outlines the analytical structure of a confusion matrix.
Figure 4. Working of sequential with Keras.
Table 2. Confusion matrix for the evaluation of machine learning models.
The precision, recall (sensitivity) and F1 scores are calculated with the help of confusion matrix using following mathematical expressions:
$\text{F1}\text{\hspace{0.17em}}\text{score}=2\ast \frac{\text{Precision}\ast \text{Recall}}{\text{Precision}+\text{Recall}}$(8)
And, the accuracy is the percentage of cases that are correctly anticipated (forecasted negative for patients without CVD and correctly predicted positive for individuals with CVD) and is
mathematically represented as,
Along with this, model’s performance is measured with the help of AUC. Its value is in the range of 0 and 1. If a model’s AUC is near to 1, it is considered good. The model is better if the AUC is
greater, and vice versa.
As previously stated, the deep learning sequential model is validated using accuracy and loss curve. Out of all the cases, the percentage of correctly categorized instances by the model is known as
accuracy. Consequently, the accuracy curve enhances the model’s capacity to produce precise predictions by providing insight into how well the model matches the data. The loss curve, on the other
hand, measures the inaccuracy or dissimilarity between the model’s anticipated output and the true parameter and provides us with information about how the model performs over time. The loss shows
how much the actual values deviate from the model’s predictions. The model attempts to approximate the true values as closely as possible by minimizing the loss. Thus, the loss curve illustrates how
the model’s inaccuracy diminishes with learning, signifying a boost in its overall effectiveness.
4. Results and Discussion
The study employed various performance metrics including precision, recall, accuracy, F1 score, ROC curve and AUC to evaluate the effectiveness of four ML classifiers: logistic regression, random
forest (RF), decision tree, and XGBoost. The dataset underwent a 70 - 30 split for training and testing, respectively, to identify cardiovascular disease presence.
Results showcased that the RF algorithm achieved the highest cross-validation accuracy of 0.91, with notable precision, recall, and F1 score of 0.90, 0.92, and 0.91, respectively, for predicting
negative results (0—absence of cardiovascular disease). Furthermore, for positive results (1—presence of cardiovascular disease), the RF model demonstrated a precision, recall, and F1 score of 0.90,
0.92, and 0.91, respectively (Table 3). Similarly, the XGBoost, decision tree, and logistic regression algorithms produced accuracies of 0.90, 0.86, and 0.70, with corresponding precision, recall,
and F1 score ranges as follows; XGBoost: (0.89, 0.91), (0.91, 0.89), (0.90, 0.90), Decision tree: (0.88, 0.85), (0.84, 0.89), (0.86, 0.87), Logistic regression: (0.69, 0.70), (0.72, 0.67), (0.70,
0.69). The AUC curve (Figure 5) displayed that RF had the highest AUC score of 0.91, followed by XGBoost at 0.89, while decision tree and logistic regression scored 0.86 and 0.70, respectively.
Transitioning to the deep learning sequential model, it demonstrated superior accuracy and loss on both training (accuracy: 0.8113, loss: 0.4149) and validation sets (accuracy: 0.8142, loss: 0.4100)
before early stopping at the 15th epoch. The learning curve for this model is depicted in Figure 6.
Comparing to Lupague et al.’s findings on the same data in 2023 using logistic regression, they achieved accuracies of 79.18% for CVDs classification and 73.46% for healthy individuals, with an AUC
value of 0.837. However, our study obtained an AUC score of 0.70, possibly due to different hyperparameters. Nonetheless, RF outperformed both, our other models and earlier studies mentioned.
Figure 5. Roc curve for the machine learning models.
Figure 6. Accuracy and loss curve for deep learning sequential model.
5. Conclusions
In this study, we tackled the urgent global health challenge posed by CVDs, aggravated by behavioral risk factors like poor diet, physical inactivity, and substance use. Early detection of CVDs is
critical for effective intervention and averting premature mortality. Leveraging a repertoire of machine learning and deep learning techniques, including logistic regression, decision trees, random
forest classifier, XGBoost, and a sequential model, our objective was to devise a robust method for early diagnosis using data from the BRFSS program. Our investigation unveiled the RF classifier as
the standout performer, boasting an impressive accuracy of 0.91, surpassing alternative machine learning and deep learning methodologies. Following closely were XGBoost (accuracy: 0.90), decision
tree (accuracy: 0.86), and logistic regression (accuracy: 0.70). Furthermore, our deep learning sequential model exhibited promising classification performance, recording an accuracy of 0.80 and a
loss of 0.425 on the validation set.
These findings underscore the potency of machine learning and deep learning approaches in bolstering cardiovascular disease prediction and management strategies. By harnessing publicly available
datasets and employing advanced computational methodologies, we are positioned to make significant strides in improving public health outcomes and combating the scourge of cardiovascular disease. | {"url":"https://scirp.org/journal/paperinformation?paperid=133508","timestamp":"2024-11-08T01:14:27Z","content_type":"application/xhtml+xml","content_length":"162469","record_id":"<urn:uuid:c7e53101-5e43-4771-85d8-92965aa35f7d>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00787.warc.gz"} |
EUCLID — BYRNE, Oliver. “The first six books of The Elements of Euclid”.
The first six books of The Elements of Euclid in which coloured diagrams and symbols are used instead of letters for the greater ease of learners.
First edition, rare in the original cloth, of this celebrated book, “one of the oddest and most beautiful books of the whole century” (McLean). The use of colour is its most striking feature, with
equal angles, lines, or polygonal regions assigned one of the three artistic primaries, red, yellow, and blue.
Byrne (18101880) was a self-educated Irish mathematician and engineer who “considered that it might be easier to learn geometry if colours were substituted for the letters usually used to designate
the angles and lines of geometric figures. Instead of referring to, say, ‘angle ABC’, Byrne’s text substitued a blue or yellow or red section equivalent to similarly coloured sections in the
theorem’s main diagram” (Friedman). His style remarkably prefigures the modernist experiments of the Bauhaus and De Stijl movements.
Exhibited at the Great Exhibition in London 1851, the book was praised for the beauty and artistry of the printing. However, the selling price of 25 shillings was almost five times the typical price
for a Euclidean textbook of the time, placing it out of the reach of educators who were supposed to make use of this new way of teaching geometry. The technical difficulty of keeping the coloured
shapes in register greatly increased production costs, and it was consequently never a viable book for cheap mass-production, effectively preventing Byrne’s method from becoming widespread or
effecting any major change in the teaching of geometry. Even so, its beauty and innovation ensure it remains among the most desirable of illustrated books from the Victorian period.
Quarto. Original red straight-grain cloth, expertly rebacked preserving the original gilt-blocked spine, covers with ornamental blind panelling, front with gilt tooling, pale yellow endpapers, gilt
Geometric diagrams printed in red, yellow, and blue; printed in Caslon old-face type with ornamental initials by C. Whittingham of Chiswick.
Bookseller’s blindstamp (G. W. Holdich, Hull) to front free endpaper. Extremities gently rubbed, spine darkened, corners and inner hinges professionally restored, foxing and offsetting to contents as
usual, the diagrams sharp and bright. A very good copy.
Friedman, Color Printing in England 43; Keynes, Pickering, pp. 37, 65; McLean, Victorian Book Design, p. 70. Susan M. Hawes & Sid Kolpas, “Oliver Byrne: The Matisse of Mathematics”. | {"url":"https://bookshop.rarebook-ubfc.fr/anna/2022/11/05/euclid-byrne-oliver-the-first-six-books-of-the-elements-of-euclid/","timestamp":"2024-11-06T04:55:13Z","content_type":"text/html","content_length":"52311","record_id":"<urn:uuid:d54a125c-2f59-441d-bf15-0365116aa883>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00085.warc.gz"} |