-vector c denotes the concentrations of the n species. Reactions among the species consume some of them (the reactants) and generate others (the products). The rate of each reaction is a function of the concentrations of its reactants (and other parameters we assume are fixed, like temperature or presence of catalysts).... |
of choices given by the n-vector x where no agent can improve (increase) her reward by changing her choice. Such a choice is argued to be ‘stable’ since no agent is incented to change her choice. At a Nash equilibrium, xi maximizes Ri(x), so we must have ∂Ri/∂xi(x) = 0, i = 1,..., n. This necessary condition for a Nas...
fitting, we can add a regularization term to this objective function.) This is a nonlinear least squares problem, with variable θ.

18.2 Gauss–Newton algorithm

In this section we describe a powerful heuristic algorithm for the nonlinear least squares problem (18.2) that bears the names of the two famous mathematicians C...
(x(k)) is the Jacobian or derivative matrix of f (see §8.2.1 and §C.1). The affine function ˆf(x; x(k)) is a very good approximation of f(x) provided x is near x(k), i.e., ‖x − x(k)‖ is small. The next iterate x(k+1) is then taken to be the minimizer of ‖ˆf(x; x(k))‖², the norm squared of the affine approximation of f at x(k). Assuming that the derivative matrix Df(x(k)) has l...
0, which occurs if and only if Df(x(k))ᵀf(x(k)) = 0 (since we assume that Df(x(k)) has linearly independent columns). So the Gauss–Newton algorithm stops only when the optimality condition (18.3) holds. We can also observe that, since x(k+1) minimizes ‖ˆf(x; x(k))‖², we have ‖ˆf(x(k+1); x(k))‖² ≤ ‖ˆf(x(k); x(k))‖² = ‖f...
of the derivative matrix Df (x(k)) are linearly independent. In some applications, this assumption never holds; in others, it can fail to hold at some iterate x(k), in which case the Gauss–Newton algorithm stops, since x(k+1) is not defined. We will see that a simple modification of the Gauss–Newton algorithm, de- scrib... |
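The basic Gauss–Newton iteration described above can be sketched in a few lines of Python. This is an illustrative implementation under our own names and tolerances, not code from the text; it assumes Df(x) has linearly independent columns at every iterate, as just discussed.

```python
import numpy as np

def gauss_newton(f, Df, x0, kmax=100, tol=1e-8):
    """Basic Gauss-Newton iteration for minimizing ||f(x)||^2, where
    f returns an m-vector and Df its m x n derivative matrix. A sketch;
    assumes Df(x) has linearly independent columns at every iterate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(kmax):
        r, J = f(x), Df(x)
        if np.linalg.norm(2 * J.T @ r) < tol:   # optimality condition (18.3)
            break
        # next iterate minimizes || f(x^k) + Df(x^k)(x - x^k) ||^2
        x = x + np.linalg.lstsq(J, -r, rcond=None)[0]
    return x
```

Because each update is itself a linear least squares problem, np.linalg.lstsq is a natural fit for the inner solve.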
x(k))⁻¹f(x(k)). The basic Newton algorithm shares the same shortcomings as the basic Gauss–Newton algorithm, i.e., it can diverge, and the iterations terminate if the derivative matrix is not invertible. Newton algorithm for n = 1. The Newton algorithm is easily understood for n = 1. The iteration is x(k+1) = x(k) ...
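For n = 1 the Newton iteration just described can be written as a short function (a minimal sketch with our own names; it terminates if the derivative is zero, and it can diverge from a poor starting point, as the text notes):

```python
def newton_scalar(f, fprime, x0, kmax=50, tol=1e-10):
    """Newton iteration x^{k+1} = x^k - f(x^k)/f'(x^k) for the scalar
    equation f(x) = 0. Stops if f'(x^k) = 0; may diverge for a poor
    starting point."""
    x = float(x0)
    for _ in range(kmax):
        if fprime(x) == 0:
            break                    # iteration not defined
        x -= f(x) / fprime(x)
        if abs(f(x)) < tol:
            break
    return x
```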
[Figure: f(x) versus x, and the residuals f(x(k)) versus iteration k for two runs of the algorithm.]

18.3 Levenberg–Marquardt algorithm

In this section we describe a variation on the basic Gauss–Newton algorithm (as well as the Newton algorithm) that addresses the shortcomings described above. The variation comes directly from...
iterate in the basic Gauss–Newton algorithm.) The second term in (18.11) is sometimes called a trust penalty term, since it penalizes choices of x that are far from x(k), where we cannot trust the affine approximation. The parameter λ(k) is sometimes called the trust parameter (although ‘distrust parameter’ is perhaps m... |
only when the optimality condition (18.3) holds. Updating the trust parameter. The final issue is how to choose the trust parameter λ(k). When λ(k) is too small, x(k+1) can be far enough away from x(k) that ‖f(x(k+1))‖² > ‖f(x(k))‖² can hold, i.e., our true objective function increases, which is not what we want. When λ...
. 3. Check tentative iterate. If ‖f(x(k+1))‖² < ‖f(x(k))‖², accept the iterate and reduce λ: λ(k+1) = 0.8λ(k). Otherwise, increase λ and do not update x: λ(k+1) = 2λ(k) and x(k+1) = x(k). Stopping criteria. The algorithm is stopped before the maximum number of iterations kmax if eithe...
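The accept/reject rule just described can be sketched as follows. This is a minimal Python illustration with our own names and defaults; the factors 0.8 and 2 for updating the trust parameter match the text.

```python
import numpy as np

def levenberg_marquardt(f, Df, x0, lam0=1.0, kmax=200, tol=1e-10):
    """Levenberg-Marquardt iteration: accept the tentative iterate and
    shrink lambda (factor 0.8) when the true objective ||f(x)||^2
    decreases, otherwise reject it and double lambda. A sketch."""
    x, lam = np.asarray(x0, dtype=float), lam0
    n = x.size
    for _ in range(kmax):
        r, J = f(x), Df(x)
        if np.linalg.norm(2 * J.T @ r) < tol:     # optimality condition
            break
        # tentative iterate minimizes ||r + J d||^2 + lam ||d||^2
        A = np.vstack([J, np.sqrt(lam) * np.eye(n)])
        b = np.concatenate([-r, np.zeros(n)])
        xt = x + np.linalg.lstsq(A, b, rcond=None)[0]
        if np.linalg.norm(f(xt)) < np.linalg.norm(r):
            x, lam = xt, 0.8 * lam                # accept, reduce lambda
        else:
            lam = 2.0 * lam                       # reject, increase lambda
    return x
```

The trust penalty term appears here as the extra rows √λ·I appended to the Jacobian, so each tentative iterate is still a single linear least squares solve.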
or very close, it increases our confidence that we have found a solution of the nonlinear least squares problem, but we cannot be sure. If the different runs of the algorithm produce different points, we use the best one found, i.e., the one with the smallest value of ‖f(x)‖². Complexity. Each execution of step 1 require...
ton method, which reduces to Newton’s method in this case, diverges when the initial value x(1) is 1.15. The Levenberg–Marquardt algorithm, however, solves this problem. Figure 18.5 shows the value of the residual f (x(k)), and the value of λ(k), for the Levenberg–Marquardt algorithm started from x(1) = 1.15 and λ(1) =... |
p = (3, 9). [Figure: contour plots in the (p1, p2) plane.] Figure 18.8 Cost function ‖f(p(k))‖² and trust parameter λ(k) versus iteration number k in the example of figure 18.7. Location from range measurements. We illustrate algorithm 18.3 with a small instance of the location from range measur...
ingly, x(3) = x(4) in figure 18.12. For the second starting point (red squares) λ(k) decreases monotonically. For the third starting point (brown diamonds) λ(k) increases in iterations 2 and 4. [Figure: ‖f(p(k))‖ and λ(k) versus iteration k.] Figure 18.9 Contour lines of ρi. The dots sho...
In nonlinear model fitting, we fit a model of the general form y ≈ ˆf(x; θ) to the given data, where the p-vector θ contains the model parameters. In linear model fitting, ˆf(x; θ) is a linear function of the parameters, so it has the special form ˆf(x; θ) = θ1 f1(x) + · · · + θp fp(x), where f1,..., fp are scalar-value...
θ1 f1(x) + · · · + θp fp(x), with basis functions fi : Rn → R, and a data set of N pairs (x(i), y(i)). The usual objective is the sum of squares of the difference between the model prediction ˆf(x(i)) and the observed value y(i), which leads to a linear least squares problem. In orthogonal distance regression we use anot...
forms the basic least squares classifier in practice. The Boolean classifier of chapter 14 fits a linearly parametrized function ˜f(x) = θ1 f1(x) + · · · + θp fp(x) to the data points (x(i), y(i)), i = 1,..., N, where y(i) ∈ {−1, +1}, using linear least squares. The parameters θ1,..., θp are chosen to minimize the sum of squares ...
18.15), since the sign function is not differentiable. To get around this, we replace the sign function with a differentiable approximation, for example the sigmoid function φ(u) = (eᵘ − e⁻ᵘ)/(eᵘ + e⁻ᵘ), (18.16) shown in figure 18.15. We choose θ by solving the nonlinear least squares problem of minimizing ∑_{i=1}^N (φ(˜f ...
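The sigmoid in (18.16) is a direct transcription into code; note that it coincides with the hyperbolic tangent (so np.tanh is an equivalent, numerically safer choice for large |u|):

```python
import numpy as np

def sigmoid(u):
    """The sigmoid (e^u - e^{-u}) / (e^u + e^{-u}), a differentiable
    approximation of sign(u); identical to tanh(u)."""
    return (np.exp(u) - np.exp(-u)) / (np.exp(u) + np.exp(-u))
```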
1 in the left column and the value for y = +1 in the right column. We can see that all three loss functions discourage prediction errors, since their values are higher when sign(u) ≠ y than when sign(u) = y. The loss function for nonlinear least squares classification with the sign function (shown in the middle row) ass...
regularization parameter. (This λ is the regularization parameter in the classification problem; it has no relation to the trust parameter λ(k) in the iterates of the Levenberg–Marquardt algorithm.) Figure 18.17 shows the classification error on the training and test sets as a function of the regularization parameter λ.... |
λ = 100. Convergence of Levenberg–Marquardt algorithm. The Levenberg–Marquardt algorithm is used to compute the parameters in the nonlinear least squares classifier. In this example the algorithm takes several tens of iterations to converge, i.e., until the stopping criterion for the nonlinear least squares problem is ... |
Figure 18.20 Boolean classification error in percent versus λ, after adding 5000 random features.

             Prediction
Outcome      ŷ = +1    ŷ = −1    Total
y = +1          967        13      980
y = −1           11      9009     9020
All             978      9022    10000

Table 18.2 Confusion matrix on the test set for the Boolean classifier to recognize the digit zero after additio...
we obtained for the same set of features with the least squares method of chapter 14. Feature engineering. Figure 18.24 shows the error rates when we add the 5000 randomly generated features. The training and test error rates are now 0.02% and 2%. The test set confusion matrix for λ = 1000 is given in table 18.4. This... |
for test set after adding 5000 features. The error rate is 2.0%.

Exercises

18.1 Lambert W-function. The Lambert W-function, denoted W : [0, ∞) → R, is defined as W(u) = x, where x is the unique number x ≥ 0 for which xeˣ = u. (The notation just means that we restrict the argument x to be ...
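One way to compute W(u) numerically is Newton's method on the scalar equation xeˣ − u = 0. This is a sketch of our own (the exercise itself asks for the Levenberg–Marquardt algorithm, which would work equally well here):

```python
import math

def lambert_w(u, x0=1.0, kmax=50):
    """Compute W(u) for u >= 0 by Newton's method on f(x) = x e^x - u,
    using f'(x) = (1 + x) e^x. A sketch; names are ours."""
    x = x0
    for _ in range(kmax):
        fx = x * math.exp(x) - u
        dfx = (1.0 + x) * math.exp(x)
        x -= fx / dfx
        if abs(fx) < 1e-12:
            break
    return x
```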
= (−1₃, 0.3·1₅, 0.6·1₆), where the subscripts give the dimensions. (This corresponds to three periods in which you make investments, which pay off at one rate for 5 periods, and a higher rate for the next 6 periods.) You can initialize with r(0) = 0, and stop when N(r(k))² is small. Plot N(r(k))² versus k. 18.3 A c...
x; θ) = θ1eθ2x to the data 0, 1,..., 5, 5.2, 4.5, 2.7, 2.5, 2.1, 1.9. (The first list gives x(i); the second list gives y(i).) Plot your model ˆf (x; ˆθ) versus x, along with the data points. 18.5 Mechanical equilibrium. A mass m, at position given by the 2-vector x, is subject to three forces acting on it. The first for... |
point. (Note that it is important to start at a point where T1 > 0 and T2 > 0, because otherwise the derivative matrix Df (x(1)) is zero, and the Levenberg– Marquardt update gives x(2) = x(1).) Plot the components of the mass position and the residual force versus iterations. 18.6 Fitting a simple neural network model... |
for the gradient ∇θ ˆf(x; θ). Your expression can use φ and φ′, the sigmoid function and its derivative. (You do not need to express these in terms of exponentials.) (b) Derive an expression for the derivative matrix Dr(θ), where r : Rᵖ → Rᴺ is the vector of model fitting residuals, r(θ)ᵢ = ˆf(x(i); θ) − y(i), i = 1,..., N. Your ...
link robot manipulator in a plane. 18.7 Robot manipulator. Figure 18.26 shows a two-link robot manipulator in a plane. The robot manipulator endpoint is at the position

p = ( L1 cos θ1 + L2 cos(θ1 + θ2), L1 sin θ1 + L2 sin(θ1 + θ2) ),

where L1 and L2 are the lengths of the first and second links, θ1 is the first joint angle, and θ2 i...
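The endpoint position is straightforward to compute; a small sketch (function name and default link lengths are ours):

```python
import numpy as np

def endpoint(theta1, theta2, L1=1.0, L2=1.0):
    """Endpoint position p of the planar two-link manipulator with
    link lengths L1, L2 and joint angles theta1, theta2."""
    return np.array([
        L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2),
        L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2),
    ])
```

Solving the inverse problem (finding θ1, θ2 with p = pdes) is the nonlinear least squares problem the exercise poses.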
which p = pdes. Explain the angles your algorithm finds in this case. Plot ‖f(θ(k))‖² versus iteration number k, for pdes = (1.0, 0.5), (−2.0, 1.0), and (−0.2, 3.1). [Figure 18.26: two-link manipulator with link lengths L1, L2, joint angles θ1, θ2, and endpoint p.] Figure 18.27 Ellipse with center (c1, c2), and radii r + δ and r − δ; the largest semi-axis makes an angle α with respect to horizontal. ...
(b) Use the Levenberg–Marquardt algorithm to fit an ellipse to the 10 points: (0.5, 1.5), (0.7, 0.1), (−0.3, 0.6), (2.3, 0.8), (1.0, 1.8), (−0.4, 0.2), (0.2, 1.3), (1.4, 0.5), (0.0, 0.2), (2.4, 1.7). To select a starting point, you can choose parameters θ that describe a circle with radius one...
(19.1) if it satisfies g(x) = 0. A point ˆx is a solution of the problem (19.1) if it is feasible and has the smallest objective among all feasible points, i.e., if whenever g(x) = 0, we have ‖f(ˆx)‖² ≤ ‖f(x)‖². Like the nonlinear least squares problem, or solving a set of non...
there is a set of Lagrange multipliers ˆz that satisfy ∂L/∂xᵢ(ˆx, ˆz) = 0, i = 1,..., n, and ∂L/∂zᵢ(ˆx, ˆz) = 0, i = 1,..., p (provided the rows of Dg(ˆx) are linearly independent). The p-vector ˆz is called an optimal Lagrange multiplier. The second set of equations can be written as gᵢ(ˆx) = 0, i = 1,..., p, in vector...
5) This can be (approximately) minimized using the Levenberg–Marquardt algorithm applied to the stacked vector function (f(x), √µ g(x)). (19.6) By minimizing the composite objective (19.5), we do not insist that g(x) is zero, but we assess a cost or penalty µ‖g(x)‖² on the residual. If we solve this for large enough µ, we should obtain a c...
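The penalty approach can be sketched as follows, using SciPy's general-purpose least_squares solver in place of a hand-written Levenberg–Marquardt routine; the doubling schedule for µ and the stopping threshold are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import least_squares

def penalty_method(f, g, x0, mu0=1.0, iters=30, feas_tol=1e-6):
    """Penalty algorithm sketch: minimize ||(f(x), sqrt(mu) g(x))||^2
    for a sequence of increasing penalty parameters mu, stopping once
    the constraint residual g(x) is small."""
    x, mu = np.asarray(x0, dtype=float), mu0
    for _ in range(iters):
        res = least_squares(lambda x: np.concatenate(
            [f(x), np.sqrt(mu) * g(x)]), x)   # warm-started inner solve
        x = res.x
        if np.linalg.norm(g(x)) < feas_tol:
            break
        mu *= 2.0                             # increase penalty parameter
    return x
```

As the text notes, µ must grow quite large before g(x) becomes small, which is the motivation for the augmented Lagrangian method that follows.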
ality condition for minimizing (19.5), 2Df (x(k+1))T f (x(k+1)) + 2µ(k)Dg(x(k+1))T g(x(k+1)) = 0. Defining z(k+1) = 2µ(k)g(x(k+1)) as our estimate of a suitable Lagrange multiplier in iteration k + 1, we see that the optimality condition (19.4) (almost) holds for x(k+1) and z(k+1). (The feasibility condition g(x(k)) = 0... |
rst establish the identity

Lµ(x, z) = ‖f(x)‖² + µ‖g(x) + z/(2µ)‖² − µ‖z/(2µ)‖². (19.8)

We expand the second term on the right-hand side to get

µ‖g(x) + z/(2µ)‖² = µ‖g(x)‖² + 2µ g(x)ᵀ(z/(2µ)) + µ‖z/(2µ)‖² = µ‖g(x)‖² + g(x)ᵀz + µ‖z/(2µ)‖².

Substituting this into the right-hand sid...
if ˜x is not feasible. In this case the choice ˜z = z + 2µg(˜x) (19.11) satisfies the optimality condition (19.4) with ˜x and ˜z. The augmented Lagrangian algorithm alternates between minimizing the augmented Lagrangian (approximately, using the Levenberg–Marquardt algorithm), and updating the parameter z (our estimate... |
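The alternation just described can be sketched in Python. As with the penalty sketch, we use SciPy's least_squares for the inner minimization instead of a hand-written Levenberg–Marquardt routine; the progress test for increasing µ and all constants are our own choices.

```python
import numpy as np
from scipy.optimize import least_squares

def augmented_lagrangian(f, g, x0, p, mu0=1.0, iters=30, feas_tol=1e-6):
    """Augmented Lagrangian sketch: repeatedly minimize
    ||(f(x), sqrt(mu)(g(x) + z/(2 mu)))||^2 over x, then update the
    multiplier estimate z := z + 2 mu g(x); mu is doubled only when
    the feasibility residual does not improve enough. p = dim of g."""
    x, z, mu = np.asarray(x0, dtype=float), np.zeros(p), mu0
    prev_feas = np.inf
    for _ in range(iters):
        res = least_squares(lambda x: np.concatenate(
            [f(x), np.sqrt(mu) * (g(x) + z / (2 * mu))]), x)
        x = res.x
        z = z + 2 * mu * g(x)              # multiplier update (19.11)
        feas = np.linalg.norm(g(x))
        if feas < feas_tol:
            break
        if feas > 0.25 * prev_feas:        # insufficient progress
            mu *= 2.0
        prev_feas = feas
    return x, z
```

In contrast to the penalty method, the multiplier update lets µ stay moderate while the feasibility residual is driven to zero.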
example with two variables and

f(x1, x2) = ( x1 + exp(−x2), x1² + 2x2 + 1 ),    g(x1, x2) = x1 + x1³ + x2 + x2².

Figure 19.1 shows the contour lines of the cost function ‖f(x)‖² (solid lines) and the constraint function g(x) (dashed lines). The point ˆx = (0, 0) is optimal with corresponding Lagrange multiplier ˆz = 2Df...
))T z(k). The vertical jumps in the optimality condition norm occur in steps 2 and 3 of the augmented Lagrangian algorithm, and in step 2 of the penalty algorithm, when the parameters µ and z are updated. Figure 19.5 shows the value of the penalty parameter µ versus the cumulative number of Levenberg–Marquardt iteratio... |
[Figure 19.3: iterates x(2),..., x(7) of the inner minimizations, for penalty parameters µ(1) = 1, µ(2) = 2, µ(3) = 4, µ(4) = 8, µ(5) = 16, µ(6) = 32.] Figure 19.4 Feasibility and optimality condition errors versus the cumulative number of Levenberg–Marquardt iterations in the augment...
figure 19.6. The wheelbase L is a known constant; all of the other quantities p, θ, φ, and s are functions of time. The dynamics of the car motion are given by the differential equations

dp1/dt(t) = s(t) cos θ(t),
dp2/dt(t) = s(t) sin θ(t),
dθ/dt(t) = (s(t)/L) tan φ(t).
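These differential equations can be discretized with a simple Euler step, which is what produces the update function f(x, u) used in the optimal control problem below. A sketch of one step (the interval h and wheelbase L values are our own placeholders):

```python
import numpy as np

def car_step(x, u, h=0.1, L=0.1):
    """One Euler step of the car dynamics above. State x = (p1, p2,
    theta); input u = (s, phi), speed and steering angle; h is the
    discretization interval."""
    s, phi = u
    return x + h * s * np.array([
        np.cos(x[2]), np.sin(x[2]), np.tan(phi) / L])
```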
( cos((xk)3), sin((xk)3), tan((uk)2)/L ). We now consider the nonlinear optimal control problem

minimize    ∑_{k=1}^N ‖uk‖² + γ ∑_{k=1}^{N−1} ‖uk+1 − uk‖²
subject to  x2 = f(0, u1),
            xk+1 = f(xk, uk),  k = 2,..., N − 1,
            xfinal = f(xN, uN),        (19.12)

with variables u1,..., uN and x2,..., xN. Figure 19...
[Figure: the two inputs (speed and steering angle) uk versus k for each trajectory.] Figure 19.9 Feasibility and optimality condition residuals in the augmented Lagrangian algorithm for computing the trajectories in figure 19.7. [Figure: residuals versus cumulative Levenberg–Marquardt iterations.]
, and with the same stopping condition. Compare the convergence and the value of the penalty parameter with the results for the augmented Lagrangian method in part (a). 19.2 Portfolio optimization with downside risk. § In standard portfolio optimization (as described in 17.1) we choose the weight vector w to achieve a ... |
minimum downside risk portfolios.

Exercises

19.3 Boolean least squares. The Boolean least squares problem is a special case of the constrained nonlinear least squares problem (19.1), with the form

minimize    ‖Ax − b‖²
subject to  xi² = 1, i = 1,..., n,

where the n-vector x is the variable to be chosen, and the ...
b

Appendices

Appendix A Notation

Vectors
(x1,..., xn)    n-vector with entries x1,..., xn (also written as a stacked column).
xi              The ith entry of a vector x.
xr:s            Subvector with entries from r to s.
0               Vector with all entries zero.
1               Vector with all entries one.
ei              The ith standard unit vector.
xᵀy, ‖x‖, rms(x), avg(x), std(x), dist(x, y), ∠(x, y), x ⊥ y   (inner product, norm, RMS value, average, standard deviation, distance, angle, orthogonality).

Matrices
this book we use standard mathematical ellipsis notation in lists and sums. We write k,..., l to mean the list of all integers from k to l. For example, 3,..., 7 means 3, 4, 5, 6, 7. This notation is used to describe a list of numbers or vectors, or in sums, as in ∑_{i=1}^n ai = a1 + · · · + an. Both of these mean the sum of the n terms a1, a2...
ified to take advantage of sparsity, the flop counts can be dramatically lower than those given here.

Vector operations. In the table below, x and y are n-vectors and a is a scalar.

ax          n
x + y       n
xᵀy         2n
‖x‖         2n
‖x − y‖     3n
rms(x)      2n
std(x)      4n
∠(x, y)     6n

The convolution a ∗ b can be computed by a special algorithm that requires 5(m + n) log2(m + n) flops.
mn2 flops. Here m is the big dimension and n is the small dimension. Computing the Gram matrix of a tall m × n matrix costs mn2 flops. In the QR factorization of an m × n matrix A, when A is tall, m is the big dimension and n is the small dimension; the complexity is 2mn2 flops. Computing the pseudo-inverse A† of an m × n matrix A (when A is tall and has independent columns) costs 3mn2 flops....
(z)). We denote the derivative of f at z as f′(z). We can think of f′ as a scalar-valued function of a scalar variable; this function is called the derivative (function) of f. Taylor approximation. Let us fix the number z. The (first order) Taylor approximation of the function f at the point z is defined as ˆf(x) = f(z)...
, we have f′(z) = g′(h(z))h′(z). Another useful rule is the product rule: for f(x) = g(x)h(x), we have f′(z) = g′(z)h(z) + g(z)h′(z). The derivative operation is linear, which means that if f(x) = ag(x) + bh(x), where a and b are constants, we have f′(z) = ag′(z) + bh′(z). Knowledge of the derivative of just...
ˆf(x) = f(z) + (∂f/∂x1)(z)(x1 − z1) + · · · for any x. We interpret xi − zi as the deviation of xi from zi, and the term (∂f/∂xi)(z)(xi − zi) as an approximation of the change in f due to the deviation of xi from zi. Sometimes ˆf is written with a second vector argument, as ˆf(x; z), to show the point z at which the approximation is...
z) + b∇h(z).

C.1.3 Vector-valued function of a vector

Suppose f : Rn → Rm is a vector-valued function of a vector. The n-vector x is the argument; the m-vector f(x) is the value of the function f at x. We can write out the m components of f as f(x) = ( f1(x),..., fm(x) ),
ˆf (x) is a very good approximation of f (x). As in the scalar case, the Taylor approximation is sometimes written with a second argument as ˆf (x; z) to show the point z around which the approximation is made. The Taylor approximation ˆf is an affine function of x, sometimes called a linear approximation of f, even thou... |
functions of scalars given above.) This chain rule is described on page 184. C.2 Optimization. Derivative condition for minimization. Suppose h is a scalar-valued function of a scalar argument. If ˆx minimizes h(x), we must have h′(ˆx) = 0. This fact is easily understood: If h′(ˆx) ≠ 0, then by taking a point ˜x slightly...
) for the gradient, derived above). This equation will hold for a minimizer, but there can be points that satisfy the equation and are not solutions of the nonlinear least squares problem. C.3 Lagrange multipliers. Constrained optimization. We now consider the problem of minimizing a scalar-valued function h : Rn → R, sub...
(ˆx, ˆz) = 0, i = 1,..., p. (This is provided the rows of Dg(ˆx) are linearly independent, a technical condition we ignore.) As in the unconstrained case, there can be pairs ˆx, ˆz that satisfy the KKT conditions where ˆx is not a solution of the constrained optimization problem. The KKT conditions give us a method for solving ...
�x) are linearly independent). But there can be points that satisfy them and are not solutions. Appendix D Further study In this appendix we list some further topics of study that are closely related to the material in this book, give a different perspective on the same material, complement it, or provide useful extensi... |
book in practical applications. This requires a good knowledge and understanding of at least one of the computer languages and packages that support linear algebra computations. In a first introduction you can use one of these packages to follow the material of this book, carrying out numerical calculations to verify o... |
to perform simple statistical analyses. Exercises written specifically for use with these applets appear in a section at the end of each chapter. Students can use the applets at home or in a computer lab. They can use them as they read through the text material, once they have finished reading the entire chapter, or as ... |
’s Theorem applicable? Yes, because it can be used for any set of data. According to Tchebysheff’s Theorem:
• at least 3/4 of the measurements will fall between 10.6 and 32.6.
• at least 8/9 of the measurements will fall between 5.1 and 38.1.
PREFACE The MyApplet sections appear within the body of the text, explain...
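The two intervals quoted are the Tchebysheff intervals mean ± 2s and mean ± 3s. With a sample mean of 21.6 and standard deviation 5.5 (values implied by the endpoints, not stated in this excerpt), a quick arithmetic check:

```python
# Sample mean and standard deviation implied by the quoted intervals
# (our inference from the endpoints, not stated in this excerpt).
mean, s = 21.6, 5.5
two_sigma = (round(mean - 2 * s, 1), round(mean + 2 * s, 1))    # (10.6, 32.6)
three_sigma = (round(mean - 3 * s, 1), round(mean + 3 * s, 1))  # (5.1, 38.1)
```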
chapter, and a Graphing Calculator Manual, which includes instructions for performing many of the techniques in the text using the popular TI-83 graphing calculator. In addition, sets of Practice (or Self-Correcting) Exercises are included for each chapter. These exercise sets are followed by the complete solutions to... |
Probability and Statistics contains the Instructor’s Solutions Manual, PowerPoint lectures prepared by Barbara Beaver, ExamView Computerized Testing, Classic Exercises, and TI-83 Manual prepared by James Davis. ACKNOWLEDGMENTS The authors are grateful to Carolyn Crockett and the editorial staff of Brooks/Cole for thei... |
1 Variables and Data 8 1.2 Types of Variables 10 1.3 Graphs for Categorical Data 11 Exercises 14 1.4 Graphs for Quantitative Data 17 Pie Charts and Bar Charts 17 Line Charts 19 Dotplots 20 Stem and Leaf Plots 20 Interpreting Graphs with a Critical Eye 22 1.5 Relative Frequency Histograms 24 Exercises 29 Chapter Review ... |
183 5.1 Introduction 184 5.2 The Binomial Probability Distribution 184 Exercises 193 5.3 The Poisson Probability Distribution 197 Exercises 202 5.4 The Hypergeometric Probability Distribution 205 Exercises 207 Chapter Review 208 CASE STUDY: A Mystery: Cancers Near a Reactor 218 6 THE NORMAL PROBABILITY DISTRIBUTION 21... |
Chapter Review 336 CASE STUDY: How Reliable Is That Poll? CBS News: How and Where America Eats 341 9 LARGE-SAMPLE TESTS OF HYPOTHESES 343 9.1 Testing Hypotheses about Population Parameters 344 9.2 A Statistical Test of Hypothesis 344 9.3 A Large-Sample Test about a Population Mean 347 The Essentials of the Test 348 Ca... |
an Experiment 451 Testing the Equality of the Treatment Means 454 Estimating Differences in the Treatment Means 456 Exercises 459 xviii ❍ CONTENTS 11.6 Ranking Population Means 462 Exercises 465 11.7 The Randomized Block Design: A Two-Way Classification 466 11.8 The Analysis of Variance for a Randomized Block Design 46... |
Testing the Usefulness of the Regression Model 556 Interpreting the Results of a Significant Regression 557 Checking the Regression Assumptions 558 Using the Regression Model for Estimation and Prediction 559 13.4 A Polynomial Regression Model 559 Exercises 562 13.5 Using Quantitative and Qualitative Predictor Variable... |
-Test for Completely Randomized Designs 650 Exercises 654 15.7 The Friedman Fr-Test for Randomized Block Designs 656 Exercises 659 15.8 Rank Correlation Coefficient 660 Exercises 664 15.9 Summary 666 Chapter Review 667 CASE STUDY: How’s Your Cholesterol Level? 677 APPENDIX I 679 Table 1 Cumulative Binomial Probabilitie... |
seats. Four Republican incumbents essentially tied with their challengers: Allen, Burns, Chafee, and Talent. Four toss-ups that could turn into Democratic gains. Three Democratic incumbents with leads: Cantwell, Menendez, and Stabenow. • One Republican incumbent ahead of his challenger: Kyl. • One Republican open seat... |
“act of one individual.” “For older Americans, the Kennedy assassination was a traumatic experience that began a loss of confidence in government,” commented Opinion Dynamics President John Gorman. “Younger people have grown up with movies and documentaries that have pretty much pushed the ‘conspiracy’ line. Therefore,... |
record 1 million temperatures in 1868? Again, we encounter a statistical problem with an application to everyday life. Statistics is a branch of mathematics that has applications in almost every facet of our daily life. It is a new and unfamiliar language for most people, however, and, like any new language, statistic... |
whether a sample or a population— you need to find a way to organize and summarize it. The branch of statistics that presents techniques for describing sets of measurements is called descriptive statistics. You have seen descriptive statistics in many forms: bar charts, pie charts, and line charts presented by a politic... |
YOUR BRAIN FOR STATISTICS ❍ 5 Will it be the same as a sample of “likely voters”—those who are likely to actually vote in the election? Is the sample large enough to answer the questions posed in step 1 without wasting time and money on additional information? A good sampling design will answer the questions posed wit... |
, for the most part, consist of commonsense steps that, given enough time, you would most likely have discovered for yourself. Since statistics is an applied branch of mathematics, many of these basic concepts are mathematical—developed and based on results from calculus or higher mathematics. However, you do not have ... |
These sections present numerical examples to guide you through the MINITAB commands and options that are used for the procedures in that chapter. If you are using MINITAB in a lab or home setting, you may want to work through this section at your own computer so that you become familiar with the hands-on methods in MI... |
and compare your blood pressure with that of others of your same age and gender. GENERAL OBJECTIVES Many sets of measurements are samples selected from larger populations. Other sets constitute the entire population, as in a national census. In this chapter, you will learn what a variable is, how to classify variables... |
a set of measurements or data result. Definition An experimental unit is the individual or object on which a variable is measured. A single measurement or data value results when a variable is actually measured on an experimental unit. If a measurement is generated for every experimental unit in the entire collection,... |
: the student. Therefore, in this example, a “measurement” really consists of five observations, one for each of the five measured variables. For example, the measurement taken on student 2 produces this observation: (2.3, F, So, Mathematics, 15) You can see that there is a difference between a single variable measured o... |
assume only a finite or countable number of values. A continuous variable can assume the infinitely many values corresponding to the points on a line interval. The name discrete relates to the discrete gaps between the possible values that the variable can assume. Variables such as number of family members, number of ne... |
values between any two times we have listed. Figure 1.2 depicts the types of data we have defined. Why should you be concerned about different kinds of variables and the data that they generate? The reason is that the methods used to describe data sets depend on the type of data you have collected. For each set of data... |
statistical table, you can use either a pie chart or a bar chart to display the distribution of the data. A pie chart is the familiar circular graph that shows how the measurements are distributed among the categories. A bar chart shows the same distribution of measurements in categories, with the height of the bar me... |
we would not want to rearrange the bars in the chart to change its shape. In a pie chart, the order of presentation is irrelevant. [Figure 1.3 Pie chart for Example 1.3: A 8.8%, B 65.0%, C 23.3%, D 3.0%. Figure 1.4 Bar chart for Example 1.3: frequency versus rating A–D.] EXAMPLE 1.4 A snack size bag of peanut M&...
simple puzzle b. Number of students in a first-grade classroom c. Rating of a newly elected politician (excellent, good, fair, poor) d. State in which a person lives 1.3 Discrete or Continuous? Identify the following quantitative variables as discrete or continuous: a. Population in a particular area of the United Stat... |
units? b. What are the variables being measured? What types of variables are they? c. Is this univariate, bivariate, or multivariate data? 1.6 Past U.S. Presidents A data set consists of the ages at death for each of the 38 past presidents of the United States now deceased. a. Is this set of measurements a population ... |
from parts c and d to help you? What conclusions can you draw from these data? APPLICATIONS 1.12 Election 2008 During the spring of 2006 the news media were already conducting opinion polls that tracked the fortunes of the major candidates hoping to become the president of the United States. One such poll conducted by... |
are quite different in their makeup with regard to gender, race, and age distributions. The table below shows the racial breakdown of the members of the United States Army and the United States Air Force.3 White Black Hispanic Other Army 58.4% 26.3% 8.9% 6.4% Air Force 75.5% 16.2% 5.0% 3.3% Source: Time magazine a. De... |
frequency of occurrence of each category. The pie chart displays how the total quantity is distributed among the categories, and the bar chart uses the height of the bar to display the amount in a particular category. 18 ❍ CHAPTER 1 DESCRIBING DATA WITH GRAPHS EXAMPLE 1.5 The amount of money expended in fiscal year 200... |
the horizontal axis. The idea is to try to discern a pattern or trend that will likely continue into the future, and then to use that pattern to make accurate predictions for the immediate future. In the year 2025, the oldest “baby boomers” (born in 1946) will be 79 years old, and the oldest “Gen-Xers” (born in 1965) ... |
. For a small set of measurements—for example, the set 2, 6, 9, 3, 7, 6—you can simply plot the measurements as points on a horizontal axis. This dotplot, generated by MINITAB, is shown in Figure 1.9(a). For a large data set, however, such as the one in Figure 1.9(b), the dotplot can be uninformative and tedious to int... |
the data in Table 1. [Stem and leaf plot residue: leaf unit and reordered stems and leaves.] Sometimes the available stem choices result in a plot that contains too few stems and a large number of leaves within each stem. In this situation, you can stretch the stems by dividing each one into several lines, depending on the leaf values assigned to ...
.2. Figure 1.11 Stem and leaf plot for the data in Table 1, reordered (leaf unit shown on the plot). If you turn the stem and leaf plot sideways, so that the vertical line is now a horizontal axis, you can see that the data have “piled up” or been “distributed” along the axis in a pattern that can be described as “mound-shaped”—much...
three dotplots generated by MINITAB and shown in Figure 1.12. Describe these distributions in terms of their locations and shapes (Example 1.10). Figure 1.12 Shapes of data distributions for Example 1.10. (Symmetric ⇔ mirror images; skewed right ⇔ long right tail; skewed left ⇔ long left tail.) Solution The first dotplot show...
for two data sets, you should compare their scales of measurement, locations, and shapes, and look for unusual measurements or outliers. Remember that outliers are not always caused by errors or incorrect data entry. Sometimes they provide very valuable information that should not be ignored. You may need additional i... |
As a rule of thumb, the number of classes should range from 5 to 12; the more data available, the more classes you need.† The classes must be chosen so that each measurement falls into one and only one class. For the birth weights in Table 1.9, we decided to use eight intervals of equal length. Since the total span of... |
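The class construction just described is easy to reproduce with NumPy's histogram routine. This is a sketch only: `weights` below stands in for the 30 birth weight measurements of Table 1.9 (the actual data are not in this excerpt), and the range 5.6 to 9.6 gives eight classes of width 0.5.

```python
import numpy as np

# Placeholder for the 30 birth weight measurements in Table 1.9
# (not the actual data; generated only to illustrate the mechanics).
weights = np.random.default_rng(0).uniform(5.6, 9.6, 30)

# Eight equal-length classes; np.histogram uses left-inclusive bins,
# matching the text's convention that each value falls in one class.
counts, edges = np.histogram(weights, bins=8, range=(5.6, 9.6))
rel_freq = counts / len(weights)   # relative frequencies sum to 1
```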
9.6.

Class   Frequency   Relative frequency
1           2           2/30
2           2           2/30
3           4           4/30
4           5           5/30
5           8           8/30
6           5           5/30
7           3           3/30
8           1           1/30

Relative frequencies add to 1; frequencies add to n. Figure 1.15 Relative frequency histogram for the birth weight data (relative frequency versus birth weight; class boundaries 5.6, 6.1, 6.6, 7.1, 7.6, ...).
. If the data are discrete, you might assign one class for each integer value taken on by the data. For a large number of integer values, you may need to group them into classes. 5. Locate the class boundaries. The lowest class must include the smallest measurement. Then add the remaining classes using the left inclusi... |
weight data in Figure 1.15. What proportion of the newborns have birth weights of 7.6 or higher? This involves all 1.5 RELATIVE FREQUENCY HISTOGRAMS ❍ 29 classes beyond 7.6 in Table 1.10. Because there are 17 newborns in those classes, the proportion who have birth weights of 7.6 or higher is 17/30, or approximately 5... |
and list the class boundaries for the first two classes.

Number of measurements   Smallest and largest values   Convenient starting point   First two classes
75                       0.5 to 1.0
25                       0 to 100
200                      1200 to 1500

BASIC TECHNIQUES b. What proportion of the measurements are greater... 1.18 Construct...