the training set. The table on the right is for the test set. the total number is 5494), we get the confusion matrices for the training and test data sets shown in table 14.7. The error rates are consistent, and equal to 0.21% for the training set and 0.24% for the test set, a very substantial improvement compared to ...
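As a minimal sketch of how such a confusion matrix and error rate are computed (the labels and predictions here are hypothetical, not the digit data from the text):

```python
import numpy as np

# Hypothetical +/-1 labels y and predictions yhat for a Boolean classifier.
y    = np.array([ 1,  1, -1, -1, -1,  1, -1, -1])
yhat = np.array([ 1, -1, -1, -1,  1,  1, -1, -1])

# 2x2 confusion matrix: rows are actual (+1, -1), columns are predicted (+1, -1),
# so C[0, 0] counts true positives and C[1, 1] counts true negatives.
C = np.array([[np.sum((y == s) & (yhat == t)) for t in (1, -1)] for s in (1, -1)])

# Error rate: fraction of data points that land off the diagonal.
error_rate = np.mean(y != yhat)
```

The entries of the confusion matrix sum to the number of data points, and the error rate is the sum of the off-diagonal entries divided by that total.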
data has the strange name receiver operating characteristic (ROC). The ROC shows 14.2 Least squares classifier 295 Figure 14.4 The distribution of the values of ˜f (x(i)) in the Boolean classifier (14.1) for recognizing the digit zero, after addition of 5000 new features. the true positive rate on the vertical axis and ...
296 14 Least squares classification Figure 14.5 True positive, false positive, and total error rate versus decision threshold α. The vertical dashed line is shown for decision threshold α = 0.25. [Figure: fraction of positive and negative examples versus f̃(x(i)).]
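The trade-off described above can be explored numerically. The sketch below sweeps a decision threshold α over synthetic scores (the Gaussian score distributions are an assumption for illustration) and computes the true positive, false positive, and total error rates for the classifier sign(f̃(x) − α):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical continuous predictions: positives centered at +0.8, negatives at -0.9.
ftilde = np.concatenate([rng.normal(0.8, 0.5, 100), rng.normal(-0.9, 0.5, 400)])
y = np.concatenate([np.ones(100), -np.ones(400)])

def rates(alpha):
    """True positive, false positive, and total error rate at threshold alpha."""
    yhat = np.where(ftilde > alpha, 1, -1)   # classifier sign(ftilde - alpha)
    tp = np.mean(yhat[y == 1] == 1)
    fp = np.mean(yhat[y == -1] == 1)
    return tp, fp, np.mean(yhat != y)

# Sweeping alpha traces out the ROC curve; alpha = 0 is the plain sign classifier.
curve = [rates(a) for a in np.linspace(-2, 2, 9)]
```

As α increases, both the true positive and false positive rates decrease, which is exactly the trade-off shown in the figure.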
of market segments, such as college-educated women aged 25–30, men without college degrees aged 45–55, and so on. This classifier guesses the demographic segment of a new customer, based only on their purchase history. This can be used to select which promotions to offer a customer for whom we only have purchase data. T...
), with predicted outcome ˆy = ˆf (x), there are K 2 possibilities, corresponding to all the pairs of values of y, the actual outcome, and ˆy, the predicted outcome. For a given data set (training or validation set) with N elements, the numbers of each of the K 2 occurrences are arranged into a K K confusion matrix, wh...
we predict each label correctly. The quantity Nii/Ni is called the true label i rate. It is the fraction of data points with label y = i for which we correctly predicted ˆy = i. (The true label i rates reduce to the true positive and true negative rates for Boolean classifiers.) A simple example, with K = 3 labels (Dis...
fication for k = 1,..., K. Note that ˜fk(x) is the real-valued prediction for the Boolean classifier for class k versus not class k; it is not the Boolean classifier, which is sign( ˜fk(x)). As an example consider a multi-class classification problem with 3 labels. We construct 3 different least squares classifiers, for 1 ve...
x) = argmax k=1,...,K ˜fk(x), αk − where αk are constants chosen to trade off the true label k rates. If we decrease αk, we predict ˆf (x) = k more often, so all entries of the kth column in the confusion matrix increase. This increases our rate of true positives for label k (since Nkk increases), which is good. But it ...
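A minimal sketch of this one-versus-others construction on synthetic data (the linear data-generating model is an assumption for illustration, and the offsets αk are taken to be zero):

```python
import numpy as np

rng = np.random.default_rng(1)
K, n, N = 3, 4, 300
# Hypothetical data: labels generated from a noisy linear score model.
theta_true = rng.normal(size=(n, K))
X = rng.normal(size=(N, n))
y = np.argmax(X @ theta_true + 0.1 * rng.normal(size=(N, K)), axis=1)

# One-versus-others: for each class k, least squares fit to +/-1 indicators.
Theta = np.column_stack([
    np.linalg.lstsq(X, np.where(y == k, 1.0, -1.0), rcond=None)[0]
    for k in range(K)
])

Ftilde = X @ Theta                  # real-valued predictions f~_k(x)
yhat = np.argmax(Ftilde, axis=1)    # multi-class prediction: argmax over k
train_error = np.mean(yhat != y)
```

Subtracting a nonzero offset αk from column k of `Ftilde` before the argmax would shift the predictions away from or toward class k, trading off the true label rates as described in the text.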
all entries equal to 2 K)1. Since the mapping from the right-hand sides to the least squares approximate solutions ˆθk is + ˆθk = (2 linear (see page 229), we have ˆθ1 + K)a, where a is the least squares approximate solution when the right-hand side is 1. Assuming that the first basis function is f1(x) = 1, we have a =...
is 67.5% for the training data, and 60% for the test set. The true Virginica rate is 90% for the training data, and 100% for the test set. This suggests that our classifier can detect Virginica well, but perhaps not as well as Setosa. (The 100% true Virginica rate on the test set is a matter of luck, due to the very sm...
22 22 1 13 0 883 13 75 1032 7 14 39 20 12 38 7 1 756 4 898 1 0 4 9 46 17 0 49 18 803 947 980 1135 1032 1010 982 892 958 1028 974 1009 10000 Table 14.12 Confusion matrix for least squares multi-class classification of handwritten digits (test set). 304 14 Least squares classification Digit Prediction All 1 6679 7 4 15 2 ...
f̃(x)) the actual classifier. Let σ denote the RMS error in the continuous prediction over some set of data, i.e., σ² = ((f̃(x(1)) − y(1))² + ··· + (f̃(x(N)) − y(N))²)/N. Use the Chebyshev bound to argue that the error rate over this data set, i.e., the fraction of data points for which f̂(x(i)) ≠ y(i), is no more than σ². Remark. This bo...
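The argument the exercise asks for can be written in one line: if f̂(x(i)) = sign(f̃(x(i))) differs from y(i) ∈ {−1, +1}, then f̃(x(i)) and y(i) have opposite signs (or f̃ is zero), so |f̃(x(i)) − y(i)| ≥ 1, and each misclassified point contributes at least 1 to the sum of squares:

```latex
\frac{\#\{\, i : \hat f(x^{(i)}) \neq y^{(i)} \,\}}{N}
\;\le\;
\frac{1}{N}\sum_{i=1}^{N}\bigl(\tilde f(x^{(i)}) - y^{(i)}\bigr)^{2}
\;=\; \sigma^{2}.
```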
a feature vector x and predicts the response. A multi-class least squares classifier builds a separate (continuous) predictor for each response versus the others. Suggest a simpler classifier, based on one continuous regression model ˜f (x) that is fit to the numbers that code the responses, using least squares. 14.4 Mul...
feature vector.) How would you modify the least squares multi-class classifier described 14.3.1 to create a list classifier? Remark. List classifiers are widely used in electronic in communication systems, where the feature vector x is the received signal, and the class corresponds to which of K messages was sent. In thi...
y(i) is +1 when x(i) is in the first or third quadrant, and −1 otherwise. Fit a polynomial least squares classifier of degree 2 to the data set, i.e., use a polynomial f̃(x) = θ1 + θ2x1 + θ3x2 + θ4x1² + θ5x1x2 + θ6x2². Give the error rate of the classifier. Show the regions in the plane where f̂(x) = 1 and f̂(x) = ...
We can 1.) Define ˜y = ( ˜f1(x),..., ˜fK (x)), which is our (realwrite this vector as y = 2ek valued or continuous) prediction of the label y. Our multi-class prediction is given by ˜fk(x). Show that ˆf (x) is also the index of the nearest neighbor of ˆf (x) = argmaxk=1,...,K 1, for k = 1,..., K. In other words, our gue...
error rates, of the two classifiers on both the training data set and a separate test data set. (b) Compare the complexity of computing the one-versus-one multi-class classifier with the complexity of the least squares multi-class classifier (see page 300). Assume the training set contains N/K examples of each class and ...
changed, at which point another training message is sent.) Explain how this method is the same as least squares classification. What are the training data x(i) and y(i)? What is the least squares problem that must be solved to determine the equalizer impulse response h? ytrain)1:N ∗ strain y = h ( Chapter 15 Multi-obje...
2 as to J1. Roughly speaking, we care twice as strongly that J2 should be small, compared · · · − (15.1) 310 15 Multi-objective least squares to our desire that J1 should be small. We will discuss later how to choose these weights. Scaling all the weights in the weighted sum objective (15.1) by any positive number is t...
the minimizer is unique, and given by x̂ = (ÃᵀÃ)⁻¹Ãᵀb̃ = (λ1A1ᵀA1 + ··· + λkAkᵀAk)⁻¹(λ1A1ᵀb1 + ··· + λkAkᵀbk). (15.3) This reduces to our standard formula for the solution of a least squares problem when k = 1 and λ1 = 1. (In fact, when k = 1, λ1 does not matter.) We can compute x̂ via the...
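The equivalence between the stacked least squares formulation and the closed-form expression (15.3) can be checked numerically; the sketch below uses random data (assumed for illustration) with k = 2 objectives:

```python
import numpy as np

rng = np.random.default_rng(2)
A1, b1 = rng.normal(size=(20, 5)), rng.normal(size=20)
A2, b2 = rng.normal(size=(15, 5)), rng.normal(size=15)
lam1, lam2 = 1.0, 3.0

# Stacked formulation: minimize the single least squares objective
# || [sqrt(lam1) A1; sqrt(lam2) A2] x - [sqrt(lam1) b1; sqrt(lam2) b2] ||^2.
A_st = np.vstack([np.sqrt(lam1) * A1, np.sqrt(lam2) * A2])
b_st = np.concatenate([np.sqrt(lam1) * b1, np.sqrt(lam2) * b2])
x_stacked = np.linalg.lstsq(A_st, b_st, rcond=None)[0]

# Closed form (15.3): (lam1 A1'A1 + lam2 A2'A2)^{-1}(lam1 A1'b1 + lam2 A2'b2).
G = lam1 * A1.T @ A1 + lam2 * A2.T @ A2
h = lam1 * A1.T @ b1 + lam2 * A2.T @ b2
x_formula = np.linalg.solve(G, h)
```

Both routes give the same minimizer; the stacked route (via QR inside `lstsq`) is the numerically preferred one in practice.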
of λ, assuming the stacked matrices have linearly independent columns. These points are called Pareto optimal (after the economist Vilfredo Pareto), which means there is no point z that satisfies ‖A1z − b1‖² ≤ ‖A1x̂(λ) − b1‖², ‖A2z − b2‖² ≤ ‖A2x̂(λ) − b2‖², with one of the inequalities holding strictly. In other words, th...
, as λ increases we put more emphasis on making J2 small, which comes at the expense of making J1 bigger. The optimal trade-off curve for this bi-criterion problem is plotted in figure 15.3. The left end-point corresponds to minimizing ‖A1x − b1‖², and the right end-point corresponds to minimizing ‖A2x − b2‖². We can conclude, for example, that there...
versus the weights. For example with k = 3 objectives, we have two weights, λ2 and λ3, which give the relative weight of J2 and J3 compared to J1. Any solution ˆx(λ) of the weighted least squares problem is Pareto optimal, which means that there is no point that achieves values of J1, J2, J3 less than or equal to thos...
then we increase λ2 and decrease λ3, and find ˆx and the associated values of J1, J2, J3 using the new weights. This is repeated until a reasonable trade-off among them has been obtained. In some cases we can be principled in how we adjust the weights; for example, in data fitting, we can use validation to help guide us ...
.2 Control 315 We typically have a desired or target output, denoted by the m-vector ydes. The primary objective is J1 = ‖Ax + b − ydes‖², the norm squared deviation of the output from the desired output. The main objective is to choose an action x so that the output is as close as possible to the desired value. There...
or change the prices of a set of n products in order to move the demand for the products towards some given target demand vector, perhaps to better match the available supply of the products. The standard price elasticity of demand model is δdem = Edδprice, where δdem is the vector of fractional demand changes, δprice...
... 1 1 − 0 0... 15.4) 15.3 Estimation and inversion In the broad application area of estimation (also called inversion), the goal is to estimate a set of n values (also called parameters), the entries of the n-vector x. We are given a set of m measurements, the entries of an m-vector y. The parameters and measurements...
objective. Our prior information about x enters in one or more secondary objectives. Simple examples are listed below. • ‖x‖²: x should be small. This corresponds to the (prior) assumption that x is more likely to be small than large. • ‖x − xprior‖²: x should be near some known vector xprior...
Ãx = (Ax, √λx) = 0 implies that √λx = 0, which implies x = 0. The Gram matrix associated with Ã, ÃᵀÃ = AᵀA + λI, is therefore always invertible (provided λ > 0). The Tikhonov regularized approximate solution is then x̂ = (AᵀA + λI)⁻¹Aᵀb. Equalization. The vector x represents a transmitted signal or message, c...
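A numerical sketch of Tikhonov regularization, checking that the stacked least squares problem and the closed form (AᵀA + λI)⁻¹Aᵀb agree (random data, assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, lam = 30, 8, 0.5
A, b = rng.normal(size=(m, n)), rng.normal(size=m)

# Stacked least squares for || A x - b ||^2 + lam || x ||^2:
# the stacked matrix (A, sqrt(lam) I) always has independent columns for lam > 0.
A_til = np.vstack([A, np.sqrt(lam) * np.eye(n)])
b_til = np.concatenate([b, np.zeros(n)])
x_stacked = np.linalg.lstsq(A_til, b_til, rcond=None)[0]

# Closed-form Tikhonov solution x = (A'A + lam I)^{-1} A' b.
x_formula = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

Note that the regularized solution exists even when A itself has dependent columns, which is the point made in the text.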
. I... I 15.3 Estimation and inversion 319 Ax y 2. Our total square estimation error is − We can minimize this objective analytically. The solution ˆx is found by averaging the values of y associated with the different entries in x. For example, we estimate Tuesday sales by averaging all the entries in y that correspond...
these values by a 336-vector c, with c24(j 1)+i, i = 1,..., 24, defined as the hourly values on day j, for j = 1,..., 14. As indicated by the gaps in the graph, a number of measurements are missing from the record (only 275 of the 336 = 24 14 measurements are available). We use the notation to denote the set containing...
the sum over the known measurements of (xi − log(c24(j−1)+i))², plus λ times the smoothing term ∑i=1..23 (xi+1 − xi)² + (x1 − x24)², for λ = 1 and λ = 100. 15.3.3 Image de-blurring The vector x is an image, and the matrix A gives blurring, so y = Ax + v is a blurred, noisy image. Our prior information about x is that it is smooth; neighboring pixel values are not very different from ea...
erences of intensities at adjacent pixels in a row or column: ‖Dhx‖² + ‖Dvx‖² = ∑i=1..M ∑j=1..N−1 (Xi,j+1 − Xij)² + ∑i=1..M−1 ∑j=1..N (Xi+1,j − Xij)². This quantity is the Dirichlet energy (see page 135), for the graph that connects each pixel to its left and right, and up and down, neighbors. Example. In figures 15.5 and 1...
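The double-sum expression for the Dirichlet energy can be checked against difference matrices Dh and Dv acting on the vectorized image; the sketch below builds them with Kronecker products (row-major vectorization is an implementation choice here, and the image is random, for illustration):

```python
import numpy as np

M, N = 4, 5
rng = np.random.default_rng(8)
X = rng.normal(size=(M, N))        # image as an M x N array of pixel values

# Dirichlet energy as explicit double sums over horizontal and vertical neighbors.
direct = np.sum((X[:, 1:] - X[:, :-1]) ** 2) + np.sum((X[1:, :] - X[:-1, :]) ** 2)

def diff(k):
    """(k-1) x k first-difference matrix: (D x)_i = x_{i+1} - x_i."""
    return np.eye(k)[1:] - np.eye(k)[:-1]

# Difference matrices acting on the row-major vectorization x = vec(X).
Dh = np.kron(np.eye(M), diff(N))   # differences along each row
Dv = np.kron(diff(M), np.eye(N))   # differences along each column
x = X.reshape(-1)
via_matrices = np.sum((Dh @ x) ** 2) + np.sum((Dv @ x) ** 2)
```

In a real de-blurring problem Dh and Dv would be stored as sparse matrices, since each row has only two nonzero entries.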
can be used when more complex beam shapes are used.) We consider the 2-D case. Let d(x, y) denote the density (say) at the position (x, y) in the region. (Here x and y are the scalar 2-D coordinates, not the vectors x and y in the estimation problem.) We assume that d(x, y) = 0 outside the region of interest. A line t...
n, with Aij = 0 if line i does not intersect voxel j. − Ax In tomography, estimation or inversion is often Tomographic reconstruction. called tomographic reconstruction or tomographic inversion. y The objective term 2 is the sum of squares of the residual between the predicted (noise-free) line integrals Ax and the ac...
simple, e.g., by fitting with a polynomial of not too high a degree. § Regularization is another way to avoid over-fitting, different from simply choosing a model that is simple (i.e., does not have too many basis functions). Regularization is also called de-tuning, shrinkage, or ridge regression, for reasons we will exp...
(15.7) For the regression model, this weighted objective can be expressed as ‖y − Xᵀβ − v1‖² + λ‖β‖². Here we penalize β being large (because this leads to sensitivity of the model), but not the offset v. Choosing β to minimize this weighted objective is called ridge regression. Effect of regularization. The effect...
with synthetic (simulated) data. We start with a signal, shown in figure 15.11, consisting of a constant plus four sinusoids: s(t) = c + ∑k=1..4 αk cos(ωkt + φk), (15.8) with coefficients c = 1.54, α1 = 0.66, α2 = 0.90, α3 = −0.66, α4 = 0.89. (The other parameters are ω1 = 13.69, ω2 = 3.55, ω3 = 23.25, ω4 = 6.03, and φ1...
λ = 0.079; any choice between around λ = 0.065 and 0.100 (say) would be reasonable. The horizontal dashed lines show the ‘true’ values of the coefficients (i.e., the ones we used to synthesize the data) given in (15.8). We can see that for λ near 0.079, our estimated parameters are close to the ‘true’ values. Linear ind...
to success in feature engineering, which can greatly increase the number of features. 15.5 Complexity In the general case we can minimize the weighted sum objective (15.1) by creating the stacked matrix and vector ˜A and ˜b in (15.2), and then using the QR factorization to solve the resulting least squares problem. Th...
a factor of two in forming the Gram matrix; see page 182.) Ignoring the second term and adding over i = 1,..., k we get a total of mn2 flops. Forming the weighted sums G and h costs 2kn2 flops. Solving Gˆx = h costs order 2n3 flops. · · · × Gram caching is the simple trick of computing Gi (and hi) just once, and reusing ...
which grows like n³. 15.5 Complexity 333 We will now show how this special problem can be solved far more efficiently when m is much smaller than n, using something called the kernel trick. Recall that the minimizer of J is given by (see (15.3)) x̂ = (AᵀA + λI)⁻¹(Aᵀb + λxdes) = (AᵀA + λI)⁻¹(Aᵀb + (λ...
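The kernel trick identity, that the n × n solve can be replaced by an m × m solve, can be verified numerically. The sketch below assumes the regularized objective ‖Ax − b‖² + λ‖x − xdes‖² with a wide A (random data, for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, lam = 10, 500, 0.3           # wide A: many more columns than rows
A, b = rng.normal(size=(m, n)), rng.normal(size=m)
xdes = rng.normal(size=n)

# Direct formula: an n x n solve, order n^3 flops.
x_direct = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b + lam * xdes)

# Kernel trick: x = xdes + A'(A A' + lam I)^{-1}(b - A xdes), an m x m solve.
x_kernel = xdes + A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), b - A @ xdes)
```

The identity follows from substituting x = xdes + Aᵀz into the optimality condition (AᵀA + λI)x = Aᵀb + λxdes, which is satisfied when z = (AAᵀ + λI)⁻¹(b − Axdes).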
complexity grows only linearly in n. To summarize, we can minimize the regularized least squares objective J in (15.9) two different ways. One requires a QR factorization of the (m + n) × n matrix Ã, which has cost 2(m + n)n² flops. The other (which uses the kernel trick) requires a QR factorization of an (m + n) × m matrix Ā, which has cost 2(m + n)m² fl...
(15.7). Recall that the elements in the first column of A are one. Let θ̂ be the solution of (15.7), i.e., the minimizer of ‖Aθ − y‖² + λ(θ2² + ··· + θp²), and let θ̃ be the minimizer of ‖Aθ − y‖² + λ‖θ‖² = ‖Aθ − y‖² + λ(θ1² + θ2² + ··· + θp²), in which we also penalize θ1. Suppose columns 2 through p of A have mean zero (for e...
· the sum of the squares of residuals obtained with the K versions of A. This choice of x, which we denote xrob, is called a robust (approximate) solution. Give a formula for xrob, in terms of A(1),..., A(K) and b. (You can assume that a matrix you construct has linearly 1b. independent columns.) Verify that for K = 1...
to write out any equations or formulas. Use the fact that x̂(λ) is the unique minimizer of J1(x) + λJ2(x), and similarly for x̂(µ), to deduce the inequalities J1(µ) + λJ2(µ) > J1(λ) + λJ2(λ), J1(λ) + µJ2(λ) > J1(µ) + µJ2(µ). Combine these inequalities to show that J1(λ) < J1(µ) and J2(λ) > J2(µ...
A is not zero. 15.7 Greedy regulation policy. Consider a linear dynamical system given by xt+1 = Axt + But, where the n-vector xt is the state at time t, and the m-vector ut is the input at time t. The goal in regulation is to choose the input so as to make the state small. (In applications...
values, and the same for demands. We define δp and δd as the (vectors of) relative price change and demand change: δp i = pi pnom i − pnom i, δd i = di dnom i − dnom i, i = 1,..., n. 3 = +0.05 means that the price for product 3 has been increased by 5% over its 0.04 means that the demand for product 5 in some day is 4%...
the two model parameters θ(1) and θ(2) to minimize ‖A(1)θ(1) − y(1)‖² + ‖A(2)θ(2) − y(2)‖² + λ‖θ(1) − θ(2)‖², where λ ≥ 0 is a parameter. The first term is the least squares residual for the first model on the first data set (say, women); the second term is the least squares residual for the second model on the second data...
)P. − In other words, each entry of the periodic estimate is the average of the entries of the original vector over the corresponding indices. 15.11 General pseudo-inverse. In chapter 11 we encountered the pseudo-inverse of a tall matrix with linearly independent columns, a wide matrix with linearly independent rows, a...
using the QR factorization. 16.1 Constrained least squares problem − Ax b In the basic least squares problem, we seek x that minimizes the objective function 2. We now add constraints to this problem, by insisting that x satisfy the linear equations Cx = d, where the matrix C and the vector d are given. The linearly c...
, we put infinite weight on the second objective, so that any nonzero value of ‖Cx − d‖² is unacceptable (which forces x to satisfy Cx = d). So we would expect (and it can be verified) that minimizing the weighted objective ‖Ax − b‖² + λ‖Cx − d‖² for a very large value of λ yields a vector close to a solution of...
+ θ2a + θ3a² + θ4a³ − θ5 − θ6a − θ7a² − θ8a³ = 0, θ2 + 2θ3a + 3θ4a² − θ6 − 2θ7a − 3θ8a² = 0. We can determine the coefficients θ̂ = (θ̂1,..., θ̂8) that minimize the sum of squares of the prediction errors, subject to the continuity constraints, by solving a constrained least squares problem: minimize ‖Aθ − b‖² subject to Cθ = d...
as to achieve (or approximately achieve) a target set of customer views or impressions in m different demographic groups. We denote the n-vector of channel spending as s; this spending results in a set of views (across the demographic groups) given by the m-vector Rs. We will minimize the sum of squares of the deviatio...
vector of smallest or least norm that satisfies the linear equations Cx = d. For this reason the problem (16.2) is called the least norm problem or minimum-norm problem. 1234567891005001,000GroupImpressionsOptimalScaled 16.1 Constrained least squares problem 343 Figure 16.3 Left: A force sequence f bb = (1, 1, 0,..., 0...
? This problem can be posed as a least norm problem: minimize ‖f‖² subject to [1 1 ··· 1 1; 19/2 17/2 ··· 3/2 1/2] f = (0, 1), with variable f. The solution f ln, and the resulting position, are shown in figure 16.4. The norm square of the least norm solution f ln is 0.0121; in contrast, the norm square of the bang-ba...
ality conditions for the constrained least squares problem. Any solution of the constrained least squares problem must satisfy them. We will now see that the optimality conditions can be expressed as a set of linear equations. The second set of equations in the optimality conditions can be written as ∂L ∂zi (ˆx, ˆz) = ...
(n + p) × (n + p) coefficient matrix in (16.4) is called the KKT matrix. It is invertible if and only if C has linearly independent rows, and the stacked matrix [A; C] has linearly independent columns. (16.5) The first condition requires that C is wide (or square), i.e., that there are fewer constraints than variables. The second condition depen...
nonzero vector x̄ for which [A; C]x̄ = 0. Direct calculation shows that [2AᵀA Cᵀ; C 0][x̄; 0] = 0, which shows that the KKT matrix is not invertible. When the conditions (16.5) hold, the constrained least squares problem (16.1) has the (unique) solution x̂, given by [x̂; ẑ] = [2AᵀA Cᵀ; C 0]⁻¹[2Aᵀb; d]. (16.6) (This formula a...
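A direct numerical sketch of formula (16.6): form the KKT matrix, solve, and check that the solution satisfies both the constraint and the stationarity condition (random data, assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, p = 20, 6, 2
A, b = rng.normal(size=(m, n)), rng.normal(size=m)
C, d = rng.normal(size=(p, n)), rng.normal(size=p)

# KKT system (16.6): [2A'A, C'; C, 0] [x; z] = [2A'b; d].
KKT = np.block([[2 * A.T @ A, C.T], [C, np.zeros((p, p))]])
sol = np.linalg.solve(KKT, np.concatenate([2 * A.T @ b, d]))
xhat, zhat = sol[:n], sol[n:]
```

With random A (tall) and C (wide), the invertibility conditions (16.5) hold with probability one, so the solve succeeds.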
Cx = C ˆx = d in the b) = − − from which we conclude that subject to Cx = d. Ax − 2 b Ax − b 2 = A(x − b 2, − 2 + ˆx) Aˆx Aˆx b − ≥ 2. So ˆx minimizes It remains to show that for x Aˆx 2, which by the equation above is equivalent to b is not the case, then A(x − = ˆx, we have the strict inequality ˆx) A(x − ˆx) = 0, a...
for least squares problems (algorithm 12.1). We assume that A and C satisfy the conditions (16.5). We start by rewriting the KKT equations (16.4) as 2(AᵀA + CᵀC)x̂ + Cᵀw = 2Aᵀb, Cx̂ = d, (16.7) with a new variable w = ẑ − 2d. To obtain (16.7) we multiplied the equation Cx̂ = d on the left by 2Cᵀ, then added the...
as ˜RT ˜Rw = 2 ˜RT ˜QT QT 1 b 2d, − ˜Rw = 2 ˜QT QT 1 b 2 ˜R− T d. − We can use this to compute w, first by computing ˜R− T d (by forward substitution), then forming the right-hand side, and then solving for w using back substitution. Once we know w, we can find ˆx from (16.9). The method is summarized in the following a...
ˆx via back substitution. The costs of steps 2, 3, and 4 are quadratic in the dimensions, and so are negligible compared to the cost of step 1, so our final complexity is × 2(m + p)n2 + 2np2 flops. The assumption (16.5) implies the inequalities n p ≤ ≤ m + p, and therefore (m + p)n2 flops. In particular, its order is (m +...
satisfies the equations above, with ˆy = 2(Aˆx the coefficient matrix above is sparse, and any method for solving a sparse system of linear equations can be used to solve it. − Solution of least norm problem. Here we specialize the solution of the general constrained least squares problem (16.1) given above to the specia...
where Cᵀ = QR is the QR factorization of Cᵀ. The solution of the least norm problem can therefore be expressed as x̂ = QR⁻ᵀd, and this leads to an algorithm for solving the least norm problem via the QR factorization. Algorithm 16.3 Least norm via QR factorization given a p × n matrix C with linearly indep...
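A sketch of algorithm 16.3 alongside the equivalent pseudo-inverse expression x̂ = Cᵀ(CCᵀ)⁻¹d (random wide C, assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
p, n = 3, 8
C, d = rng.normal(size=(p, n)), rng.normal(size=p)   # wide, independent rows

# Algorithm 16.3: factor C' = QR, then xhat = Q R^{-T} d
# (the triangular solve stands in for back/forward substitution).
Q, R = np.linalg.qr(C.T)               # Q is n x p, R is p x p
xhat = Q @ np.linalg.solve(R.T, d)

# Equivalent pseudo-inverse expression xhat = C'(C C')^{-1} d.
xhat_pinv = C.T @ np.linalg.solve(C @ C.T, d)
```

Both expressions agree because CCᵀ = RᵀQᵀQR = RᵀR, so Cᵀ(CCᵀ)⁻¹d = QR(RᵀR)⁻¹d = QR⁻ᵀd.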
solution to a given point. Suppose the wide matrix A has linearly independent rows. Find an expression for the point x that is closest to a given vector y (i.e., minimizes x Remark. This problem comes up when x is some set of inputs to be found, Ax = b represents some set of requirements, and y is some nominal value o...
the same total value as hcurr. The difference h hcurr is called the trade vector ; it gives the amount of each asset (in dollars) that we buy or sell. The n assets are divided into m industry sectors, such as pharmaceuticals or consumer − Exercises 353 electronics. We let the m-vector s denote the (dollar value) sector...
− AB B, · · · and (u1, u2,..., uT − wide and has linearly independent rows. 1) (which is the input sequence stacked). You may assume that C is 16.9 Smoothest force sequence to move a mass. We consider the same setup as the example given on page 343, where the 10-vector f represents a sequence of forces applied to a un...
minimize ‖x − a‖² subject to Cx = d, where the n-vector x is to be determined, the n-vector a is given, the p × n matrix C is given, and the p-vector d is given. Show that the solution of this problem is x̂ = a − C†(Ca − d), assuming the rows of C are linearly independent. Hint. You can argue directly from the KKT equations fo...
This guarantees that the decoded message is correct, z i.e., ˆs = s.) Give a formula for z in terms of D†, α, and x. (b) Complexity. What is the complexity of encoding a secret message in an image? (You can assume that D† is already computed and saved.) What is the complexity of decoding the secret message? About how l...
m the problem of finding the linear combination of a1,..., ai 1, ai+1,..., an that is closest to ai. These are n standard least squares problems, which can be solved using the methods of chapter 12. In this exercise we explore a simple formula that allows us to solve these n least squares problem all at once. Let G = A...
the combined return on all our investments is consistently high. (We must accept the idea that for our average return to be high, we must tolerate some variation in the return, i.e., some risk.) The idea of optimizing a portfolio of assets was proposed in 1953 by Harry Markowitz, who won the Nobel prize in economics f...
each. (The periods could just as well be hours, weeks, or months). We describe the investment returns by the T n matrix R, where Rtj is the fractional return of asset j in period t. Thus R61 = 0.02 means that asset 1 gained 2% in period 6, and R82 = 0.03 means that asset 2 lost 3%, over period 8. The jth column of R i...
and we choose the allocation w = en, then r = Ren = µrf1, i.e., we obtain a constant return in each period of µrf. 17.1 Portfolio optimization 359 We can express the total portfolio value in period t as Vt = V1(1 + r1)(1 + r2)···(1 + rt−1), (17.1) where V1 is the total amount initially invested in period t = 1....
) and std(r) give the per-period return and risk. They are often converted to their equivalent values for one year, which are called the annualized return and risk, and reported as percentages. If there are P periods in one year, these are given by P avg(r), √P std(r), respectively. For example, suppose each period is ...
portfolio return be ρ can be expressed as avg(r) = (1/T)1ᵀ(Rw) = µᵀw = ρ, where µ = Rᵀ1/T is the n-vector of the average asset returns. This is a single linear equation in w. Assuming that it holds, we can express the square of the risk as std(r)² = (1/T)‖r − avg(r)1‖² = (1/T)‖r − ρ1‖². Thus to minimize risk (sq...
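Minimizing the risk subject to the two linear constraints 1ᵀw = 1 and µᵀw = ρ is a constrained least squares problem, which can be solved through its KKT system. The sketch below uses synthetic returns (the return statistics are assumptions for illustration, not market data):

```python
import numpy as np

rng = np.random.default_rng(7)
T, n, rho = 250, 5, 0.001              # synthetic daily returns, target return
R = 0.001 + 0.02 * rng.normal(size=(T, n))
mu = R.mean(axis=0)                     # per-asset mean return, mu = R'1/T

# Constrained least squares (17.2): minimize ||R w - rho 1||^2
# subject to 1'w = 1 (budget) and mu'w = rho (required return), via KKT.
C = np.vstack([np.ones(n), mu])
d = np.array([1.0, rho])
KKT = np.block([[2 * R.T @ R, C.T], [C, np.zeros((2, 2))]])
rhs = np.concatenate([2 * R.T @ (rho * np.ones(T)), d])
w = np.linalg.solve(KKT, rhs)[:n]

risk = np.std(R @ w)                    # per-period risk of the optimized portfolio
```

Because the return constraint µᵀw = ρ is enforced exactly, the realized average return of the optimized portfolio equals ρ on the training data, and minimizing ‖Rw − ρ1‖² is then the same as minimizing the risk.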
negative weights on those with negative returns. The whole challenge in investing is that we do not know future returns. Assume the current time is period T, so we know the (so-called realized ) return matrix R. The portfolio weight w found by solving (17.2), based on the observed returns in periods t = 1,..., T, can ...
less well than the analogous assumption in data fitting, i.e., that future data looks like past data. For this reason we expect less coherence between the training and test performance of a portfolio, compared to a generic data fitting application. This is especially so when the test period has a small number of periods...
0.07 0.15 0.31 0.13 1.00 1.96 3.03 5.48 1.00 Table 17.1 Annualized risk, return, and leverage for five portfolios. Figure 17.2 Total value over time for five portfolios: the risk-free portfolio with 1% annual return, the Pareto optimal portfolios with 10%, 20%, and 40% return, and the uniform portfolio. The total value ...
Risk-free1/n10%20%40%DayValue(thousanddollars) 17.1 Portfolio optimization 365 Time-varying weights. Markets do shift, so it is not uncommon to periodically update or change the allocation weights that are used. In one extreme version of this, a new allocation vector is used in every period. The allocation weight for a...
, parametrized by the required return ρ. The portfolio w0 is a point on the line, and the vector v, which satisfies 1T v = 0, gives the direction of the line. This equation tells us that we do not need to solve the equation (17.3) for each value of ρ. We first compute w0 and v (by factoring the matrix once and using two ...
ections or engine thrust on an airplane. The state xt, input ut, and output yt typically represent deviations from some standard or desired operating condition, for example, the deviation of aircraft speed and altitude from the desired values. For this reason it is desirable to have xt, yt, and ut small. Linear quadrat...
., uT 1). − The dimension of z is T n + (T ˜Az 2, where ˜b = 0 and ˜A is the block matrix ˜b − − 1)m. The control objective can be expressed as C1 C2...             ˜A =            . CT √ρI... √ρI In this matrix, (block) entries not shown are zero, and the identity matrices in the lower right co...
has dimensions ˜n = T n + (T − 1)m, ˜m = T p + (T 1)m, ˜p = (T 1)n + 2n, − − so using one of the standard methods described in 16.2 would require order (˜p + ˜m)˜n2 ≈ T 3(m + p + n)(m + n)2, § flops, where the symbol means we have dropped terms with smaller exponents. But the matrices ˜A and ˜C are very sparse, and by ...
.6. Here too we see that for larger ρ, the input is smaller but the output is larger. 17.2.2 Variations There are many variations on the basic linear quadratic control problem described above. We describe some of them here. ydes t Tracking. We replace yt in Joutput with yt − is a given desired output trajectory. In thi...
(in which case θ is sometimes called the discount or forgetting factor ). Way-point constraints. A way-point constraint specifies that yτ = ywp, where ywp is a given p-vector, and τ is a given way-point time. This constraint is typically used when yt represents a position of a vehicle; it requires that the vehicle pass...
m n matrix K. The columns of K can be found by solving (17.8) with initial conditions xinit = e1,..., en. This can be done efficiently by factoring the coefficient matrix once, and then carrying out n solves. × This matrix generally provides a good choice of state feedback gain matrix. With this choice, the input u1 with ...
sequence 050100150−0.100.1StatefeedbackOptimaltut05010015000.20.4StatefeedbackOptimaltyt 17.3 Linear quadratic state estimation 373 x1,..., xT. State estimation is widely used in many application areas, including all guidance and navigation systems, such as the Global Positioning System (GPS). Since we do not know the...
noise. We will see later how λ can be chosen using validation. Estimation versus control. The least squares state estimation problem is very similar to the linear quadratic control problem, but the interpretation is quite different. In the control problem, we can choose the inputs; they are under our control. Once we c...
.. AT − I 1 − B2...    . BT 1 − The constrained least squares problem has dimensions ˜n = T n + (T 1)m, ˜m = T p + (T 1)m, − − so using one of the standard methods described in ˜p = (T 1)n − 16.2 would require order § (˜p + ˜m)˜n2 T 3(m + p + n)(m + n)2 ≈ flops. As in the case of linear quadratic control, the matric...
blue lines. We can see that λ = 1 is too small for this example: The estimated state places too much trust in the measurements, and is following measurement noise. We can also see that λ = 105 is too large: The estimated state is very smooth (since the estimated process noise is small), but the imputed noise measureme...
λ=1λ=103λ=105 17.3 Linear quadratic state estimation 377 Figure 17.9 Training and test errors for the state estimation example. Example. Continuing the previous example, we randomly remove 20 of the 100 measurement points. We solve the same problem (17.11) for a range of values of λ, but with Jmeas defined as Jmeas = t ...
we derive an equivalent formulation of the portfolio optimization problem (17.2) that appears more frequently in the literature than our version. (Equivalent means that the two problems always have the same solution.) This formulation is based on the return covariance matrix, which we define below. (See also exercise 1...
., asset 2 has the higher return. Hint. Your answer should depend on whether ρ < µ1, µ1 < ρ < µ2, or µ2 < ρ, i.e., how the required return compares to the two asset returns. 17.4 Index tracking. Index tracking is a variation on the portfolio optimization problem de17.1. As in that problem we choose a portfolio allocati...
But, with A =    0.99 0.01 0.02 0.01 0.03 0.47 0.06 0.04 − − − 0.02 4.70 0.40 0.72 − 0.32 0.00 0.00 0.99   , B =    0.01 3.44 0.83 0.47 − − −   , 0.99 1.66 0.44 0.25 with time unit one second. The state 4-vector xt consists of deviations from the trim conditions of the following quantities. • • • • (xt)1 i...
= 100,..., 120, the state and input variables are zero.) 380 17 Constrained least squares applications (c) Find the 2 4 state feedback gain K obtained by solving the linear quadratic control 17.2.3. Verify that it is problem with C = I, ρ = 100, T = 100, as described in almost the same as the one obtained with T = 50....
algorithm that often works well in practice. 18.1 Nonlinear equations and least squares 18.1.1 Nonlinear equations Consider a set of m possibly nonlinear equations in n unknowns (or variables) x = (x1,..., xn), written as fi(x) = 0, i = 1,..., m, where fi : Rⁿ → R is a scalar-valued function. We refer to fi(x) = 0 as th...
nonlinear case. When m < n, there are fewer equations than unknowns, and the system of equations (18.1) is called under-determined. When m = n, so there are as many equations as unknowns, the system of equations is called square. When m > n, there are more equations than unknowns, and the system of equations is called...
2Df(x̂)ᵀf(x̂) = 0. (18.3) This optimality condition must hold for any solution of the nonlinear least squares problem (18.2). But the optimality condition can also hold for other points that are not solutions of the nonlinear least squares problem. For this reason the optimality condition (18.3) is called a necessary condit...
are called heuristics. The k-means algorithm of chapter 4 is an example of a heuristic algorithm. Solving linear equations or linear least squares problems using the QR factorization are not heuristics; these algorithms always work. Many heuristic algorithms for the nonlinear least squares problem, including those we ...