18.156 Lecture Notes
February 17, 2015
trans. Jane Wang

The main goal of this lecture is to prove Korn's inequality, which as we recall is as follows:

Theorem 1 (Korn's Inequality). If $u \in C^2_{\mathrm{comp}}(\mathbb{R}^n)$ and $\Delta u = f$, then
$$[\partial_i \partial_j u]_{C^\alpha} \le C(n, \alpha)\,[\Delta u]_{C^\alpha}.$$
First, let us recall the progress that we made last time. To start, we hav...
(Source: https://ocw.mit.edu/courses/18-156-differential-analysis-ii-partial-differential-equations-and-fourier-analysis-spring-2016/2a167573b1412e48588f173774f6e158_MIT18_156S16_lec5.pdf)

1. $f$ supported between $x_1$ and $x_2$.

2. $f$ supported over $x_1$. Used that $|K(x)| \lesssim |x|^{-n}$.

3. $f$ supported on $B_{3d}(x_1)$, and $\varepsilon < d$. Note that as opposed to the previous examples, $|T_\varepsilon f(x_1)|$ can be $\gg d^\alpha$. Used that $\int_{S_r} K_\varepsilon(x) = 0$ for all $r, \varepsilon$.

For this case, we will use the fol...
$$\bigl|T_\varepsilon f(x_1) - T_\varepsilon f(x_2)\bigr| = \Bigl|\int_{N_1} (f(y) - A)\,K_\varepsilon(x_1 - y)\, dy - \int_{N_2} (f(y) - B)\,K_\varepsilon(x_2 - y)\, dy$$
$$\qquad + \int_{N_1^c} (f(y) - C)\,K_\varepsilon(x_1 - y)\, dy - \int_{N_2^c} (f(y) - D)\,K_\varepsilon(x_2 - y)\, dy\Bigr|$$

Let us denote the four integrals by $I_1, I_2, I_3, I_4$. Here $A$, $B$, $C$, $D$ may be any constants since $\int_{S_r} K_\varepsilon = 0$.
first two terms will behave like example 2 and the last two terms will behave like example ...
The third term will behave like example 3 and is the most interesting, so let us work through that bound:

$$|I_3 - I_4| = \Bigl|\int_F \cdots \Bigr| \le \int_F |\cdots| \le \cdots$$

$$\cdots \le \lim_{\varepsilon \to 0^+} |T_\varepsilon f(x_1) - T_\varepsilon f(x_2)| + \frac{\delta_{ij}}{n}\,|f(x_1) - f(x_2)|.$$

Eventually, $\varepsilon < |x_1 - x_2|/10$ and we can apply Theorem 3 to the first term. The second term is bounded by $[f]_{C^\alpha} \cdot |x_1 - x_2|^\alpha$.
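The bound on the second term is just the definition of the Hölder seminorm:

```latex
[f]_{C^\alpha} := \sup_{x \neq y} \frac{|f(x) - f(y)|}{|x - y|^\alpha}
\quad\Longrightarrow\quad
|f(x_1) - f(x_2)| \le [f]_{C^\alpha}\, |x_1 - x_2|^\alpha .
```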
To prove Korn’s inequality, we use the mollifier trick to show that we only need that u has two
derivatives.
Proof of Korn's inequali...
$$\Bigl|\int \bigl(\Delta u(x_1 - y) - \Delta u(x_2 - y)\bigr)\,\varphi_\varepsilon(y)\, dy\Bigr| \le [\Delta u]_{C^\alpha}\, |x_1 - x_2|^\alpha \int \varphi_\varepsilon(y)\, dy.$$
Our next goal will be to prove the Schauder Inequality. Recall that Korn’s inequality and the first
homework allowed us to prove the following lemma.
Lemma 6. If $|a_{ij}(x) - \delta_{ij}| < \varepsilon(\alpha, n)$ for all $i, j$, ...
)). Then,
$$\|u\|_{C^{2,\alpha}(B_{1/2})} \lesssim \max_i \|u\|_{C^{2,\alpha}(B(x_i, r(i)))} \lesssim \max_i \|u\|_{C^2(B(x_i, r))} \le \|u\|_{C^2(B_1)}.$$
MIT OpenCourseWare
http://ocw.mit.edu
18.156 Differential Analysis II: Partial Differential Equations and Fourier Analysis
Spring 2016
For inf...
6.034 Artificial Intelligence. Copyright © 2004 by Massachusetts Institute of Technology.
6.034 Notes: Section 7.1
Slide 7.1.1
We have been using this simulated bankruptcy data set to illustrate the different learning
algorithms that operate on continuous data. Recall that R is supposed to be the ratio of earnings ... | https://ocw.mit.edu/courses/6-034-artificial-intelligence-spring-2005/2a180cde9d8748aa97cbf51950ecb096_ch7_mach3.pdf |
as simple as this
class is, in general, there will be many possible linear separators to choose from.
Also, note that, once again, this decision boundary disagrees with that drawn by the previous
algorithms. So, there will be some data sets where a linear separator is ideally suited to the data. For
example, it tur...
1 values, the components of an n-dimensional coefficient vector w and a
scalar value b. These n+1 values are what will be learned from the data. The x will be some point in
the feature space.
We will be using dot product notation for compactness and to highlight the geometric interpretation
of this equation (more o...
linear separators in n
dimensions. In two dimensions, such a linear separator is referred to as a "line". In three dimensions,
it is called a "plane". These are familiar words. What do we call it in higher dimensions? The usual
terminology is hyperplane. I know that sounds like some type of fast aircraft, but that's ...
x are
perpendicular), the cosine is equal to 0 and the distance is precisely b as we expect.
Slide 7.1.14
This distance measure from the hyperplane is signed. It is zero for points on the hyperplane, it is
positive for points in the side of the space towards which the normal vector points, and negative for
points ...
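The signed distance just described is (w·x + b)/‖w‖; a sketch with hypothetical `w`, `b`, and points (none of these values come from the slides):

```python
import numpy as np

def signed_distance(w, b, x):
    """Signed distance from point x to the hyperplane w.x + b = 0.
    Positive on the side the normal vector w points into, negative on the other."""
    return (np.dot(w, x) + b) / np.linalg.norm(w)

w = np.array([3.0, 4.0])   # normal vector, ||w|| = 5
b = -5.0
print(signed_distance(w, b, np.array([1.0, 0.5])))  # (3 + 2 - 5)/5 = 0.0: on the hyperplane
print(signed_distance(w, b, np.array([3.0, 4.0])))  # (9 + 16 - 5)/5 = 4.0: positive side
```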
. Then we repeat
the inner loop until all the points are correctly classified using the current weight vector. The inner
loop is to consider each point. If the point's margin is positive then it is correctly classified and we
do nothing. Otherwise, if it is negative or zero, we have a mistake and we want to change t...
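The outer/inner loop just described, as a runnable sketch (the toy data and the +1/−1 label convention are assumptions; the offset is folded in as weight 0, as the slides do):

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Perceptron with the offset folded in as weight 0.
    Labels y must be +1/-1; a mistake is any point whose margin is <= 0."""
    Xa = np.hstack([np.ones((len(X), 1)), X])   # prepend a constant-1 feature
    w = np.zeros(Xa.shape[1])
    for _ in range(max_epochs):                  # outer loop over the data
        mistakes = 0
        for xi, yi in zip(Xa, y):                # inner loop: consider each point
            if yi * np.dot(w, xi) <= 0:          # margin not positive: a mistake
                w += yi * xi                     # move w toward (or away from) xi
                mistakes += 1
        if mistakes == 0:                        # all points correctly classified
            break
    return w

# linearly separable toy data (hypothetical, not the bankruptcy set)
X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -1.0], [-2.0, 1.0]])
y = np.array([1, 1, -1, -1])
w = perceptron(X, y)
assert all(yi * np.dot(np.r_[1.0, xi], w) > 0 for xi, yi in zip(X, y))
```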
. Here it took 49 iterations
through the data (the outer loop) for the algorithm to stop. The hypothesis at the end of each loop is
shown here. Recall that the first element of the weight vector is actually the offset. So, the normal
vector to the separating hyperplane is [0.94 0.4] and the offset is -2.2 (recall th...
of this strategy to multiple input variables is based on the generalization of the
notion of slope, which is the gradient of the function. The gradient is the vector of first (partial)
derivatives of the function with respect to each of the input variables. The gradient vector points in
the direction of steepest inc...
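A sketch of plain gradient descent using this rule, on a hypothetical two-variable quadratic:

```python
def gradient_descent(grad, x0, rate=0.1, steps=200):
    """Repeatedly step opposite the gradient, the direction of steepest descent."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - rate * gi for xi, gi in zip(x, g)]
    return x

# f(x, y) = (x - 1)^2 + (y + 2)^2 has gradient (2(x - 1), 2(y + 2)); minimum at (1, -2)
grad = lambda v: [2 * (v[0] - 1), 2 * (v[1] + 2)]
x_min = gradient_descent(grad, [0.0, 0.0])
print(x_min)  # close to [1.0, -2.0]
```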
discard the points after using them, counting on more arriving later.
Another way of thinking about the relationship of these algorithms is that the on-line version is
using a (randomized) approximation to the gradient at each point. It is randomized in the sense that
rather than taking a step based on the true grad...
to 0s, we can write the final weight vector in terms of these
counts and the input data (as well as the rate constant).
Slide 7.2.13
Since the rate constant does not change the separator we can simply assume that it is 1 and ignore it.
Now, we can substitute this form of the weights in the classifier and we get the...
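With the rate constant set to 1, the weight vector is w = Σᵢ cᵢ yᵢ xᵢ, where cᵢ counts the mistakes made on point i; substituting this into the classifier leaves only dot products between inputs. A sketch (the toy data is hypothetical; the offset is omitted for brevity):

```python
import numpy as np

def dual_perceptron(X, y, epochs=100):
    """Perceptron kept in dual form: c[i] counts mistakes on point i, so the
    implicit weight vector is w = sum_i c[i] * y[i] * X[i]."""
    c = np.zeros(len(X))
    for _ in range(epochs):
        mistakes = 0
        for j in range(len(X)):
            # the classifier uses only dot products X[i] . X[j]
            s = sum(c[i] * y[i] * np.dot(X[i], X[j]) for i in range(len(X)))
            if y[j] * s <= 0:
                c[j] += 1
                mistakes += 1
        if mistakes == 0:
            break
    return c

X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
c = dual_perceptron(X, y)
w = (c * y) @ X                      # recover the explicit weight vector
assert all(y[j] * np.dot(w, X[j]) > 0 for j in range(len(X)))
```

Because only dot products appear, the same loop works unchanged if each `np.dot` is replaced by a kernel function, which is the link to the kernel trick mentioned later.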
takes place by adjusting the weights in the
network so that the desired output is produced whenever a sample in the input data set is
presented.
Slide 7.3.2
We start by looking at a simpler kind of "neural-like" unit called a perceptron. This is where the
perceptron algorithm that we saw earlier came from. Percept...
seen in our treatment of SVMs how the "kernel trick" can be used to generalize a
perceptron-like classifier to produce arbitrary boundaries, basically by mapping into a high-
dimensional space of non-linear mappings of the input features.
Slide 7.3.6
We will now explore a different approach (although later we will a...
feature values (1,1)) is in the half space that the normal points into. This is the only point with a
positive distance and thus a one output from the perceptron unit. The other points have negative
distance and produce a zero output. This is shown in the shaded column in the table.
Slide 7.3.9
Looking at the secon...
it doesn't matter how far a point is from the decision
boundary, you will still get a 0 or a 1. We need a smooth output (as a function of changes in the
network weights) if we're to do gradient descent.
Slide 7.3.13
Eventually people realized that if one "softened" the thresholds, one could get information as to
w...
see what that means.
The output of a multi-layer net of sigmoid units is a function of two vectors, the inputs (x) and the
weights (w). An example of what that function looks like for a simple net is shown along the
bottom, where s() is whatever output function we are using, for example, the logistic function we
sa...
single training point. Thus, we will be neglecting the sum over the training points
in the real gradient.
As we saw in the last slide, we will need the gradient of the unit's output with respect to the weights,
that is, the vector of changes in the output due to a change in each of the weights.
The output (y) of a ...
in these gradients, not surprisingly. Here we
see that this derivative has a very simple form when expressed in terms of the output of the
sigmoid. Then, it is just the output times 1 minus the output. We will use this fact liberally later.
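The claim that the derivative is just the output times 1 minus the output is easy to check numerically against a finite difference:

```python
import math

def sigmoid(z):
    """Logistic function s(z) = 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + math.exp(-z))

# compare s(z)*(1 - s(z)) against a central finite difference at a few points
for z in (-2.0, 0.0, 1.5):
    analytic = sigmoid(z) * (1.0 - sigmoid(z))
    h = 1e-6
    numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)
    assert abs(analytic - numeric) < 1e-8
print(sigmoid(0.0) * (1 - sigmoid(0.0)))  # 0.25: the derivative peaks at z = 0
```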
Slide 7.3.22
Now, what happens if the input to our unit is not a direct inp...
at the output (y-y^i) times the change in the output which is
produced by the change in the weight (dy/dw).
Slide 7.3.27
Let's pick weight w13, which weights the output of unit 1 (y1) coming into the output unit (unit 3).
What is the change in the output y3 as a result of a small change in w13? Intuitively, we shoul...
strategy for computing the error gradient.
Slide 7.3.29
The cases we have seen so far are not completely general in that there has been only one path
through the network for the change in a weight to affect the output. It is easy to see that in more
general networks there will be multiple such paths, such as shown ...
and moving
backward through the network we can compute all the deltas for every unit in the network in one
pass (once we've computed all the y's and z's during a forward pass). It is this property that has led
to the name of this algorithm, namely backpropagation.
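A minimal sketch of the forward pass and the backward delta pass for a 2-input, 2-hidden, 1-output net of sigmoid units (the weights, the single training point, and the absence of bias terms are all hypothetical simplifications):

```python
import math

def s(z):
    """Logistic output function."""
    return 1.0 / (1.0 + math.exp(-z))

def forward_backward(x, target, W1, W2):
    """One forward and one backward (delta) pass for a 2-2-1 sigmoid net,
    for the error 0.5*(y3 - target)^2 on a single training point.
    Returns the output y3 and the gradient w.r.t. every weight."""
    z1 = [sum(w * xi for w, xi in zip(row, x)) for row in W1]   # hidden sums
    y1 = [s(z) for z in z1]                                     # hidden outputs
    z2 = sum(w * yi for w, yi in zip(W2, y1))                   # output sum
    y3 = s(z2)                                                  # net output
    # deltas, computed from the output backward (backpropagation):
    d3 = (y3 - target) * y3 * (1 - y3)                          # output unit
    d1 = [d3 * W2[j] * y1[j] * (1 - y1[j]) for j in range(2)]   # hidden units
    gW2 = [d3 * y1[j] for j in range(2)]                        # dE/dW2
    gW1 = [[d1[j] * xi for xi in x] for j in range(2)]          # dE/dW1
    return y3, gW1, gW2

# hypothetical weights and a single training point
W1 = [[0.1, -0.2], [0.4, 0.3]]
W2 = [0.5, -0.6]
y3, gW1, gW2 = forward_backward([1.0, 2.0], 1.0, W1, W2)
```

Note that every delta is computed from quantities cached during the single forward pass plus the deltas of downstream units, which is exactly the one-backward-pass property described above.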
It is important to remember that this is still the ...
randomness into the gradient descent. Formally, gradient descent on an error function defined as the
sum of the errors over all the input instances should be the sum of the gradients over all the
instances. However, backprop is typically implemented as shown here, making the weight change
based on each feature vecto...
derive the gradient of the network and a
backprop-like computation can be used to do that.
6.034 Notes: Section 7.4
Slide 7.4.1
Now that we have looked at the basic mathematical techniques for minimizing the training error o...
training data. Instead, we can use the performance on
the validation set as a way of deciding when to stop; we want to stop when we get best performance
on the validation set. This is likely to lead to better generalization. We will look at this in more
detail momentarily.
This type of "early termination" keeps the...
(drawn from Gaussian distributions with different centers). An
additional 25 instances each (drawn from the same distributions) have been reserved as a test set.
As you can see, the point distributions overlap and therefore the net cannot fully separate the data.
The red region represents the area where the net's ou...
that are just as unlikely to generalize to new data.
For K-nearest-neighbors, on this type of data one would want to use a value of K greater than 1. For
decision trees one would want to prune back the tree somewhat. These decisions could be based on
the performance on a held out validation set.
Similarly, for neur...
other hand, the need to avoid overstepping the minimum and possibly getting into oscillations
because of a too-large learning rate. One approach to balancing these is to effectively adjust the
learning rate based on history. One of the original approaches for this is to use a momentum term in
backprop.
Here is the ...
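One common realization of the momentum idea, sketched here (the velocity form, `beta`, the toy quadratic, and all parameter values are hypothetical, not taken from the slides): keep a running velocity that mixes the previous step into the current gradient step.

```python
def momentum_step(w, v, grad, rate=0.1, beta=0.9):
    """One weight update with momentum: v accumulates past steps, damping
    oscillations across a steep direction while building speed along a
    consistently downhill one."""
    v = [beta * vi - rate * gi for vi, gi in zip(v, grad(w))]
    w = [wi + vi for wi, vi in zip(w, v)]
    return w, v

# ill-conditioned quadratic bowl f(w) = 0.5*(w0^2 + 25*w1^2), prone to oscillation
grad = lambda w: [w[0], 25.0 * w[1]]
w, v = [5.0, 1.0], [0.0, 0.0]
for _ in range(300):
    w, v = momentum_step(w, v, grad, rate=0.02, beta=0.9)
print(w)  # near [0.0, 0.0]
```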
is best to keep the range of the inputs
in that range as well. Simple normalization (subtract the mean and divide by the standard deviation)
will almost do that and is adequate for most purposes.
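The normalization just described, sketched with hypothetical data:

```python
import numpy as np

def normalize(X):
    """Subtract each column's mean and divide by its standard deviation,
    so every input feature has mean 0 and standard deviation 1."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

# two features on wildly different scales (hypothetical values)
X = np.array([[100.0, 0.001], [200.0, 0.003], [300.0, 0.002]])
Z = normalize(X)
print(Z.mean(axis=0))  # ~[0, 0]
print(Z.std(axis=0))   # ~[1, 1]
```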
Slide 7.4.17
Another issue has to do with the representation of discrete data (also known as "categorical" data).
You c...
or stops changing
significantly. These outcomes generally happen long before we run the risk of the weights becoming
infinite.
Slide 7.4.20
Neural nets can also do regression, that is, produce an output which is a real numbe...
in the road got you off center? It is important that ALVINN be able to recover and steer the vehicle
back to the center.
The researchers considered having the vehicle drive in a wobbly path during training, but that posed
the danger of having the system learn to drive that way. They came up with a clever solution.
...
6.034 Notes: Section 3.4
Slide 3.4.1
In this section, we will look at some of the basic approaches for building programs that play two-
person games such as tic-tac-toe, checkers and chess.
Much of the work in this area has be... | https://ocw.mit.edu/courses/6-034-artificial-intelligence-spring-2005/2a2f347357dc1e683ece44aec2f81e5e_ch3_csp_games2.pdf |
complex game.
Slide 3.4.4
Here's a little piece of the game tree for Tic-Tac-Toe, starting from an empty board. Note that even
for this trivial game, the search tree is quite big.
Slide 3.4.5
A crucial component of any game...
The player who is building the tree is trying to maximize their score. However, we assume that the
opponent (who values board positions using the same static evaluation function) is trying to
minimize the score (or think of this as maximizing the negative of the score). So, each layer of the
tree can be classifie...
world champion with a ranking of about 2900.
At some level, this is a depressing picture, since it seems to suggest that brute-force search is all that
matters.
Slide 3.4.10
And Deep Blue is brute indeed... It had 256 specialized chess processors coupled into a 32 node
supercomputer. It examined around 30 billion ...
Here's some pseudo-code that captures this idea. We start out with the range of possible scores (as
defined by alpha and beta) going from minus infinity to plus infinity. Alpha represents the lower
bound and beta the upper bound. We call Max-Value with the current board state. If we are at a leaf,
we return the stat...
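A runnable sketch of the Max-Value/Min-Value pseudo-code just described; the tree representation and the `children`/`is_leaf`/`evaluate` interface are hypothetical stand-ins for a real game:

```python
import math

def max_value(board, alpha, beta, children, is_leaf, evaluate):
    """Best score for the maximizer; [alpha, beta] brackets the range of
    scores still worth exploring."""
    if is_leaf(board):
        return evaluate(board)          # static evaluation at a leaf
    for child in children(board):
        alpha = max(alpha, min_value(child, alpha, beta, children, is_leaf, evaluate))
        if alpha >= beta:               # beta cutoff: the opponent avoids this line
            return alpha
    return alpha

def min_value(board, alpha, beta, children, is_leaf, evaluate):
    if is_leaf(board):
        return evaluate(board)
    for child in children(board):
        beta = min(beta, max_value(child, alpha, beta, children, is_leaf, evaluate))
        if beta <= alpha:               # alpha cutoff
            return beta
    return beta

# toy game tree: nested lists are internal nodes, numbers are leaf scores
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
score = max_value(tree, -math.inf, math.inf,
                  children=lambda b: b,
                  is_leaf=lambda b: not isinstance(b, list),
                  evaluate=lambda b: b)
print(score)  # 3: the max over the min-layer values (3, 2, 2)
```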
Slide 3.4.20
The calling Max-Value now sets alpha to this value, since it is bigger than minus infinity. Note that
the range of [alpha beta] says that the score will be greater or equal to 2 (and less than infinity).
Slide 3.4.21
Max-Value now calls Min-Value with the updated range of [alpha beta].
Slide 3.4.22
Min-Val...
as deep! We already saw the enormous
impact of deeper search on performance. So, this one simple algorithm can almost double the search
depth.
Now, this analysis is optimistic, since if we could order moves perfectly, we would not need alpha-
beta. But, in practice, performance is close to the optimistic limit.
Sli...
(killer moves) and try them first when considering subsequent moves at that level.
Imagine a position with white to move. After white's first move we go into the next recursion of
Alpha-Beta and find a move K for black which causes a beta cutoff for black. The reasoning is then
that move K is a good move for black, ...
alpha and beta. Instead of starting with alpha and beta at minus and plus infinity, we can start them
in a small window around the values found in the previous search. This will help us cutoff more
irrelevant moves early. In fact, it is often useful to start with the tightest possible window,
something like [alpha, ...
seems not to be susceptible to search-
based methods that work well in chess. Go players seem to rely more on a complex understanding of
spatial patterns, which might argue for a method that is based more strongly on a good evaluation
function than on brute-force search.
Slide 3.4.33
There are a few observations ab...
Chapter 2
Abelian Gauge Symmetry
As I have already mentioned, local gauge symmetry is the major new principle, beyond
the generic implementation of special relativity and quantum mechanics in quantum
field theory, which arises in formulating the standard model. It is, however, a rather
abstract concept, and one whose... | https://ocw.mit.edu/courses/8-325-relativistic-quantum-field-theory-iii-spring-2003/2a43685db9124bc04925067ae33c6f23_chap2.pdf |
Gauge symmetry is a family of functional transformations among the potentials that leaves the field strength unchanged. The charge and current distributions, which provide the source terms in the Maxwell equations, are invariant under gauge transformations. In the general, classical continuum form of the Maxwell equations, the ph...
here on call potentials – we add to the action a term

$$S_{\text{int.}} = -q \int A_\mu\, dx^\mu$$

corresponding to the Lagrangian

$$L_{\text{int.}} = -qA_0 + qA_j \frac{dx^j}{dt}.$$

The momentum is now, with $L \equiv L_{\text{free}} + L_{\text{int.}}$,

$$p_j = \frac{\partial L}{\partial \dot x^j} = \frac{m v_j}{\sqrt{1 - \vec v^2}} + qA_j$$

and

$$\frac{dp_j}{dt} = \frac{d}{dt}\left(\frac{m v_j}{\sqrt{1 - \vec v^2}}\right) + q\,\frac{\partial A_j}{\partial x^k}\frac{dx^k}{dt} + q\,\frac{\partial A_j}{\partial t}.$$

Si...
$$E_j \equiv -\frac{\partial A_0}{\partial x^j} - \frac{\partial A_j}{\partial t} \qquad (2.11)$$

$$B_j \equiv \epsilon_{jlm}\,\frac{\partial A_l}{\partial x^m} \qquad (2.12)$$

Identifying $\vec E$ and $\vec B$ as electric and magnetic field strengths, we thereby arrive at the Lorentz force law for a particle of mass $m$, charge $q$.
Note that with this identification two of the Maxwell equations, viz.

$$\frac{\partial B_j}{\partial x^j} = 0, \qquad \epsilon_{jlm}\,\frac{\partial E_l}{\partial x^m} + \frac{\partial B_j}{\partial t} = 0,$$
...
motion unchanged, it must leave $\vec E$ and $\vec B$ unchanged, as of course one can verify directly.

Clearly, the requirement that the world-line of a charged particle should have no ends is closely related to the conservation of charge. More generally, the necessary and sufficient condition for an action

$$S_{\text{int.}} = -\int \cdots$$
...
$$H = \sqrt{(\vec p - q\vec A)^2 + m^2} + qA_0. \qquad (2.21)$$

The appearance of the square root here leads to difficulties in quantization. In order to implement the commutation relations, or (more heuristically) wave-particle duality, one would like to make the substitution $\vec p \to -i\vec\nabla$ in the Schrödinger wave equation $H\psi = i\,\partial\psi/\partial t$. But that substitution renders the Hamiltonian, wi...
$$i\,\frac{\partial \psi}{\partial t} = \frac{1}{2m}\left(-i\vec\nabla - q\vec A\right)^2 \psi + qA_0\,\psi \qquad (2.23)$$

(absorbing a factor $e^{-imt}$ into $\psi$).
For the gauge transformation $A'_\mu = A_\mu + \partial_\mu \chi$ on the potentials to leave the Schrödinger equation invariant, it must be accompanied by the transformation

$$\psi' = e^{-iq\chi}\,\psi \qquad (2.24)$$

on the charged fields. Note that the value of the charge $q$ appears expl...
$$\psi_1^{p_1}\,\psi_2^{p_2}\,\cdots \qquad (2.25)$$

$$p_1 q_1 + p_2 q_2 + \cdots = 0 \qquad (2.26)$$

for the term destroys $p_1$ particles of charge $q_1$, $p_2$ particles of charge $q_2$, and so forth (or creates an equivalent number of anti-particles). Putting it differently, the necessary and sufficient condition for charge conservation is invariance under the symmetry $\psi' = \dots$
modify the derivatives. (2.28)

A suitable modification is suggested by our earlier discussion of the point particle Hamiltonian. We define the covariant derivative operator $D_\mu$ to act on a field $\psi_n$ of charge $q_n$ according to

$$D_\mu \psi_n = (\partial_\mu + i q_n A_\mu)\,\psi_n. \qquad (2.29)$$

Then if $A_\mu$ transforms as in Equation 2.15, we have
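Combining the potential transformation A'µ = Aµ + ∂µχ with ψ'ₙ = e^(−iqₙχ)ψₙ, the covariant derivative transforms with the same phase as the field itself, which is the point of the construction:

```latex
D'_\mu \psi'_n
 = \left(\partial_\mu + i q_n A_\mu + i q_n \partial_\mu \chi\right)
   \left(e^{-i q_n \chi}\,\psi_n\right)
 = e^{-i q_n \chi}\left(\partial_\mu + i q_n A_\mu\right)\psi_n
 = e^{-i q_n \chi}\, D_\mu \psi_n ,
```

since the $-i q_n(\partial_\mu \chi)$ term produced by differentiating the phase cancels the $+i q_n(\partial_\mu \chi)$ term coming from the transformed potential.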
MIT 2.852
Manufacturing Systems Analysis
Lectures 18–19
Loops
Stanley B. Gershwin
Spring, 2007
Copyright © 2007 Stanley B. Gershwin.

Problem Statement

[Figure: six-machine closed loop, machines M1–M6 alternating with buffers B1–B6]

• Finite buffers (0 ≤ ni(t) ≤ Ni).
• Single closed loop – fixed population (...
• Focus is on the Buzacott model (deter...
(Source: https://ocw.mit.edu/courses/2-852-manufacturing-systems-analysis-spring-2010/2a492e1e011aa53ffe87fefef19d6461_MIT2_852S10_loops.pdf)
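A toy discrete-time simulation in the spirit of the model above (a sketch only: synchronous updates, time-dependent rather than operation-dependent failures, and all parameters are hypothetical):

```python
import random

def simulate_loop(N, r, f, population, steps, seed=0):
    """Closed loop of k machines and k buffers: machine i takes a part from
    buffer i-1 and feeds buffer i. Deterministic unit processing times,
    geometric failures (prob f per step) and repairs (prob r per step).
    Returns the estimated production rate of machine 0."""
    random.seed(seed)
    k = len(N)
    n = [population // k] * k            # spread the fixed population around
    n[0] += population - sum(n)
    up = [True] * k
    made = 0
    for _ in range(steps):
        for i in range(k):               # failures and repairs
            up[i] = (random.random() >= f) if up[i] else (random.random() < r)
        # machine i can move a part if it is up, not starved, and not blocked
        move = [up[i] and n[(i - 1) % k] > 0 and n[i] < N[i] for i in range(k)]
        for i in range(k):
            if move[i]:
                n[(i - 1) % k] -= 1
                n[i] += 1
        made += move[0]
        assert sum(n) == population      # population is conserved in a loop
    return made / steps

rate = simulate_loop(N=[10] * 6, r=0.1, f=0.01, population=30, steps=20000)
print(round(rate, 3))
```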
[Figure: production rate vs. population, for buffer sizes N2 = 10, 15, 20, 30, 40]
Expected population method

• Treat the loop as a line in which the first machine and the last are the same.
• In the resulting decomposition, on...

uses E(i) = E(i + 1) for i = 1, ..., k. The first k − 1 are E(1) = E(2), E(2) = E(3), ..., E(k − 1) = E(k). But this implies E(k) = E(1), which is the same as the kth equation.
Therefore, we need one more equation. We can use

$$\sum_i \bar n_i = N$$
...
machines.
In a line, every downstream machine could block a
given machine, and every upstream machine could
starve it. In a loop, blocking and starvation are more
complicated.
Loop Behavior
Ranges

[Figure: six-machine closed loop M1–M6 with buffers B1–B6]

• The range of blocking of a machine in a line is the entire downstream part of the line.
• The range of starvation of a machine in a line is the entire upstream part of the line.
Loop Behavior
Ranges
Line

In an acyclic network, if Mj is downstream of Mi, then the range of blocking of Mj is a subset of the range of blocking of Mi.

[Figure: machines Mi and Mj in a line]
Loop Behavior
Range of blocking of M
1
Range of blocking of M
2
10
B1
7
B6
M1
10
10
B
2
0
B5
M
2
M6
Ranges
Difficulty for decomposition
10
B3
0
B4
M4
M3
M5
M5 can block M2. Therefore the parameters of M5 should directly affect the
parameters of Md(1) in a decomposition. However, M5 cannot ... | https://ocw.mit.edu/courses/2-852-manufacturing-systems-analysis-spring-2010/2a492e1e011aa53ffe87fefef19d6461_MIT2_852S10_loops.pdf |
machine of its two-machine line.
• Similarly for upstream modes.
Multiple Failure Mode Line Decomposition

[Figure: failure modes 1,2 | 3 | 4 | 5,6,7 | 8 | 9,10 grouped into building blocks 1,2,3,4 and 5,6,7,8,9,10]

• The downstream failure modes appear to the observer after propagation through blockage.
• The upstream fail...
line.
• Local modes: the probability of failure into a local mode is the same as the probability of failure in that mode of the real machine.

Multiple Failure Mode Line Decomposition
Line Decomposition

[Figure: failure modes 1,2 | 3 | 4 | 5,6,7 | 8 | 9,10]

• Remote modes: i is the building block num...
Multiple Failure Mode Line Decomposition
Extension to Loops

[Figure: six-machine closed loop M1–M6 with buffers B1–B6]

• Use the multiple-mode decomposition, but adjust the ranges of blocking and starvation accordingly.
• However, this does not take into account the local inform...
• Consequently, this makes the two-machine line very complicated.

Transformation

• Purpose: to avoid the complexities caused by thresholds.
• Idea: Wherever there is a threshold in a buffer, break up the buffer into smaller buffers separated by perfectly reliable mac...
has none.
• Note: The number of thresholds equals the number of machines.

Transformation

[Figure: four-machine loop with buffers of sizes 20, 3, 15, and 5, broken up into unit buffers and perfectly reliable machines M1*, M2*, M3*, M4*]

• Break up each buffer into a sequence of buffers of size 1 and reliable machines.
• Count backwards from each real mac...
were as high as 10%.
• Six-machine cases: mean throughput error 1.1% with a maximum of 2.7%; average buffer level error 5% with a maximum of 21%.
• Ten-machine cases: mean throughput error 1.4% with a maximum of 4%; average buffer level error 6% with a maximum of 44%.
Numeri...
[Figure: production rate vs. r1, the usual saturating graph]

Numerical Results
Behavior

[Figure: four-machine loop (M1*, M2, M3, M4 with buffers B1–B4); average buffer levels b1, b2, b3, b4 vs. r1]
Practical multitone architectures
Lecture 4
Vladimir Stojanović
6.973 Communication System Design – Spring 2006
Massachusetts Institute of Technology
Cite as: Vladimir Stojanovic, course materials for 6.973 Communication System Design, Spring 2006.
MIT OpenCourseWare (http://ocw.mit.edu/), Massachusetts Institute of Te... | https://ocw.mit.edu/courses/6-973-communication-system-design-spring-2006/2a8950c0b8a77687bae83a70ff459b08_lecture_4.pdf |
Efficientizing example

1 + 0.9D^-1 channel (Pe = 10^-6, gap = 8.8 dB, PAM/QAM)

[Figure: PAM and single-sideband QAM, with bit labels bn, bn+1]
Bit is moved from channel n to m
Tightly coupled with channel and noise estimation
Will cover in later lectures
Modal modulation
Transmission with eigen-functions
r(t) = h(t) * h*(−t), supported on (−T_H/2, T_H/2)

    λₙ φₙ(t) = ∫_{−T/2}^{T/2} r(t − τ) φₙ(τ) dτ,   n = 1, …, ∞,   t ∈ [−(T − T_H)/2, (T − T_H)/2]
The eigenvalue set of any autocorrelation function is unique
This set determines the performance of MM through SNR
Eigen-functions are not unique
The complex exponential e^{j2πnt/T} is also a valid eigen-function for infinite symbol period
Corresponding eigenvalues are R(2πn/T)
No ISI on any tone since the symbol period is infinite
Each tone is an AWGN channel
SB... | https://ocw.mit.edu/courses/6-973-communication-system-design-spring-2006/2a8950c0b8a77687bae83a70ff459b08_lecture_4.pdf |
[Channel matrix figure, reconstructed: y = P x + n, where P is the
N × (N + ν) Toeplitz convolution matrix whose rows are shifted copies of
(p_ν, …, p₁, p₀), x = (x_{N−1}, …, x₀, x_{−1}, …, x_{−ν})ᵀ carries the
current block plus the ν trailing samples of the previous one, and
n = (n_{N−1}, …, n₀)ᵀ is the noise.]
Figure by MIT OpenCourseWare.
Time-varying channels
Optimize bn and En per sub-channel
OFDM uses the same channel partitioning as DMT
But uses the same bn and En on all channels
Used on one-way broadcast channels
Both are forms of vector coding with added restrictions
In vector coding, M, F are channel dependent
Make the channel circular and make M, F channel-independent
DMT/OFDM implementation
[Block diagram: inputs X₀, X₁, …, X_{N−1} → IDFT → parallel-to-serial &
cyclic prefix insert → x(t) → DAC → φ(t) (LPF) → channel h(t) with
additive noise n(t) → φ(−t) (LPF) → y(t) → ADC → serial-to-parallel &
cyclic prefix remove → DFT → outputs Y₀, Y₁, …, Y_{N−1}.]
Forces P to be a cyclic matrix
SNR with CP is lower than with VC (8.1 dB)
For N = 16 it quickly reaches the max of 8.8 dB
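The "forces P to be cyclic" point can be checked numerically: with a cyclic prefix at least as long as the channel memory, linear convolution with the channel becomes circular convolution over the block, which the DFT diagonalizes, so each tone sees a flat scalar channel Y_k = H_k X_k. A minimal sketch (the channel taps and sizes are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
N, v = 16, 2                      # FFT size and CP length (>= channel memory)
h = np.array([1.0, 0.9, 0.3])     # example channel impulse response, memory 2
X = rng.standard_normal(N) + 1j * rng.standard_normal(N)

x = np.fft.ifft(X)                          # IDFT at the transmitter
x_cp = np.concatenate([x[-v:], x])          # insert cyclic prefix
y_full = np.convolve(x_cp, h)               # linear channel convolution
y = y_full[v:v + N]                         # remove prefix at the receiver

Y = np.fft.fft(y)                 # DFT at the receiver
H = np.fft.fft(h, N)              # per-tone channel gains
assert np.allclose(Y, H * X)      # each tone is a flat (scalar) channel
```

Dropping the prefix (or making it shorter than the channel memory) breaks the equality, which is exactly the inter-block interference the CP is there to absorb.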
[Figure: ADSL band plan — voice/data split, upstream and downstream
(1.5 Mbps) bands on a frequency axis in MHz. Figure by MIT OpenCourseWare.]
Hermitian symmetry creates real signal transmitted from 0-1.1MHz
First 2-3 tones near DC not used – avoid interference with voice
Tone 256 also not used, 64 reserved for pilot
Nup=32, CP=5, each symbol 2*32+5=69 samples
Exac... | https://ocw.mit.edu/courses/6-973-communication-system-design-spring-2006/2a8950c0b8a77687bae83a70ff459b08_lecture_4.pdf |
[Table: per-tone bit loading bn vs. total bits per OFDM symbol
b = 24, 36, 48, 72, 96, 144, 192, 216 — table garbled in extraction]
Broadcast channel – can’t optimize bit allocation
Figure by MIT OpenCourseWare.
FCC demands flat spectrum so no energy-allocation
The only knob is data rate selection
[Block diagram (802.11a transceiver): Interleaver, Mapper / Demapper,
Pilot insertion, Channel estimator, FFT/IFFT, Cyclic prefix,
Synchronizer, Windowing, Remove prefix, DAC, AGC & ADC,
Upconvert, LNA & Downconvert]
Scrambling
Need to randomize incoming data
Enables a number of tracking algorithms in
the receiver
Provides flat spectrum in the given band
pseudo-random bit sequence (prbs) generator
What is the period of this pseudo-random sequence?
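One way to answer the period question is to simulate the shift register directly. The sketch below assumes the generator 1 + x⁴ + x⁷ (the polynomial 802.11a uses); for a maximal-length (primitive) polynomial of degree n the period is 2ⁿ − 1.

```python
def lfsr_period(taps, nbits, seed=1):
    """Period of a Fibonacci LFSR realizing the recurrence
    a_n = a_{n-nbits} XOR a_{n-t} for t in taps. For a primitive
    polynomial the period is the maximum, 2**nbits - 1."""
    state = seed
    period = 0
    while True:
        fb = (state >> (nbits - 1)) & 1          # delay-nbits term
        for t in taps:
            fb ^= (state >> (t - 1)) & 1         # delay-t terms
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
        period += 1
        if state == seed:
            return period

# 1 + x^4 + x^7 is primitive, so the sequence is maximal length:
print(lfsr_period(taps=[4], nbits=7))   # -> 127
```

So every 127 bits the scrambler sequence repeats, regardless of the (nonzero) seed.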
[Figure: puncturing example — encoder output streams A₀ A₁ A₂ … and
B₀ B₁ B₂ …; "stolen" bits are dropped from the transmitted output data,
and dummy bits are re-inserted in their place before decoding.]
g1 = 171 (octal)
[Figure: decoded data sequence y₀, y₁, …, y₈]
64-state (constraint length K=7) code
Viterbi algorithm applied in the decoder
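The K = 7 encoder can be sketched directly from its generators; g0 = 133 (octal) is the standard companion to the g1 = 171 above, the pair used in 802.11a. A minimal sketch (encoder only; the Viterbi decoder is a separate exercise):

```python
def conv_encode(bits, g0=0o133, g1=0o171):
    """Rate-1/2, constraint length K=7 convolutional encoder with the
    industry-standard generators g0=133, g1=171 (octal). Returns the
    interleaved (g0, g1) output stream."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0x7F           # 7-bit sliding window
        out.append(bin(state & g0).count("1") % 2)  # parity against g0
        out.append(bin(state & g1).count("1") % 2)  # parity against g1
    return out

# An all-zero input leaves the encoder in the zero state: all-zero output.
assert conv_encode([0] * 8) == [0] * 16
```

Each input bit produces two output bits; the higher punctured rates (2/3, 3/4) are obtained by "stealing" bits from this stream as in the figure above.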
Figure by MIT OpenCourseWare.
Spectral mask
Cannot use last 5 tones on each side
Does not use extra windowing
[Figure: transmit spectrum mask vs. typical signal spectrum,
power spectral density in dB (not to scale)]
[Figure: 802.11a frame structure — short training symbols t₈ t₉ t₁₀,
guard GI2, long training symbols T₁ T₂, then GI + SIGNAL, GI + Data 1,
GI + Data 2. Annotations: Signal Detect, AGC, Diversity Selection;
Coarse Freq. Offset Estimation, Timing Synchronize; Channel and Fine
Frequency Offset Estimation; SIGNAL carries RATE and LENGTH;
SERVICE + DATA follow.]
Figure by MIT OpenCourseWare.
[Receiver detail: symbol timing adjust fed to timing control; training
symbol pilots drive angle adjust …]
Channel estimation (decision-directed):

    H(i−D, k) = Y(i−D, k) / X(i−D, k)

[Figure: received tones Y(i−D, k) divided by re-encoded symbols
X(i−D, k), regenerated via Conv. Encoder → Interleaver → Mapper.]
Figure by MIT OpenCourseWare.
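The per-tone estimate in the figure is just a division in the frequency domain: the receiver re-encodes its decisions to get X(i−D, k) and divides the stored received tones by it. A minimal noise-free sketch (sizes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
H_true = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.exp(1j * np.pi / 4 * rng.integers(0, 8, N))  # known (re-encoded) symbols
Y = H_true * X                                      # received tones, noise-free
H_est = Y / X                                       # one-shot estimate per tone
assert np.allclose(H_est, H_true)
```

With noise present this one-shot estimate is averaged over symbols, which is why the decision delay D appears in the figure.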
Massachusetts Institute of Technology
Department of Electrical Engineering and Computer Science
6.245: MULTIVARIABLE CONTROL SYSTEMS
by A. Megretski
Fundamentals of Model Order Reduction
This lecture introduces basic principles of model order reduction for LTI systems, which
is about finding good low order approx... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
Ŝ = Ŝₖ of complexity not larger than a given threshold k, such that the
distance between Ŝ and a given "complex" system S is as small as possible. Alternatively,
a maximal admissible distance between S and Ŝ can be given, in which case the
complexity k of Ŝ is to be minimized.
As is suggested by the experience... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
, W are given stable transfer matrices (W⁻¹ is also assumed to be
stable), and ‖·‖ denotes the H-Infinity norm (L2 gain) of a stable system. As a result
of model order reduction, G can be represented as a series connection of a lower order
"nominal plant" Ĝ and a bounded uncertainty (see Figure 8.2). In most cases,
problem reducible to solving a system of
linear equations. If ‖·‖ = ‖·‖∞ is the H-Infinity norm, the optimization is reduced to
solving a system of Linear Matrix Inequalities (LMI), a special class of convex optimization
problems solvable by an efficient algorithm, to be discussed later in the course.
While the te... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
norm optimal model reduction problem yields a lower bound for the minimum in the H-
Infinity norm optimal model reduction. Moreover, H-Infinity norm of model reduction
error associated with a Hankel norm optimal reduced model is typically close to this
lower bound. Thus, Hankel norm optimal reduced model can serve well... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
q are polynomials of order m.
One popular approach is moments matching. In the simplest form of moments matching,
an m-th order approximation Ĝ(s) = p(s)/q(s) (where p, q are polynomials of order
m) of a SISO transfer function G(s) is defined by matching the first 2m + 1 coefficients
of a Taylor expansion
Ĝ(s) = ...
··· + q₁z + q₀, with q_m ≠ 0, such that

    q_m y_{k+m} + q_{m−1} y_{k+m−1} + ··· + q₁ y_{k+1} + q₀ y_k = 0

for all k > 0. The idea is to define the denominator of the reduced model Ĝ(s) in terms
of a polynomial q = q(z) which minimizes the sum of squares of

    e_k = q_m y_{k+m} + q_{m−1} y_{k+m−1} + ··· + q₁ y_{k+1} + q₀ y_k
subject to a normalizat... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
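Once a normalization is fixed, minimizing the sum of squares of e_k is an ordinary least-squares problem. The sketch below uses the convenient (but not unique) normalization q_m = 1; the function name and test system are illustrative.

```python
import numpy as np

def fit_denominator(y, m):
    """Least-squares fit of the denominator coefficients: minimize
    sum_k |q_m y_{k+m} + ... + q_0 y_k|^2 with normalization q_m = 1.
    `y` holds samples y_1, y_2, ...; returns [q_0, ..., q_m]."""
    y = np.asarray(y, dtype=float)
    K = len(y) - m
    # rows are (y_k, y_{k+1}, ..., y_{k+m-1}); target is -y_{k+m}
    A = np.column_stack([y[j:j + K] for j in range(m)])
    b = -y[m:m + K]
    q_low, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.append(q_low, 1.0)

# Samples of y_{k+1} = 0.5 y_k (a single stable pole at 0.5):
y = 0.5 ** np.arange(10)
q = fit_denominator(y, m=1)      # recovers q(z) = z - 0.5
assert np.allclose(q, [-0.5, 1.0])
```

For noise-free data from an exactly m-th order system the residual is zero; with noisy or higher-order data the fit returns the least-squares-optimal denominator under the chosen normalization.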
The "convolution operator" f ↦ y = g ∗ f associated with an LTI system with impulse
response g = g(t) has infinite rank whenever g is not identically equal to zero. However,
with every LTI system of finite order, it is possible to associate a meaningful and
representative linear transformation of finite rank. This tra
the complex conjugates of the corresponding
entries of G(jω)) and continuity (i.e. G(jω) converges to G(jω₀) as ω → ω₀ for all ω₀ ∈ R
and for ω₀ = ∞). Note that a rational function G = G(s) with real coefficients satisfies
this condition if and only if it is proper and has no poles on the imaginary axis.
Let L₂ᵏ(−∞,
gain readily defined for the Hankel operator H_G : L₂ᵏ(−∞, 0) → L₂ᵐ(0, ∞): the rank of
H_G is the maximal number of linearly independent
outputs (could be plus infinity), the L2 gain is the minimal upper bound for the L2 norm
of H_G f subject to the L2 norm of f being fixed at one.
Remember that the order of a rational transfer matrix G is defined as t
the L2 norm of g₀ is not larger than ‖G‖∞‖f‖₂. On the other hand, ‖g‖₂ ≤ ‖g₀‖₂.
To show that the rank of H_G equals the order of the stable part of G, note first that
the unstable part of G does not have any effect on H_G at all. Then, for a stable G, the map
of f into the inverse Fourier transform g₀ of g̃₀ = G f̃ is a c
efficient solution.
Let

    M = Σ_{r=1}^{m} u_r λ_r v_r′

be a singular value decomposition of M, which means that the families {u_r}_{r=1}^m and
{v_r}_{r=1}^m are orthonormal, and {λ_r}_{r=1}^m is a monotonically decreasing sequence of
positive numbers, i.e.

    u_i′ u_r = v_i′ v_r = { 1, i = r;  0, i ≠ r },      λ₁ ≥ λ₂ ≥ ··· ≥ λ
λ_r = |M v_r| when transformed with M. Vector u_r is
defined by M v_r = λ_r u_r.
Another useful interpretation of singular vectors v_r, u_r and singular numbers λ_r is by
the eigenvalue decompositions

    (M′M) v_r = λ_r² v_r,      (M M′) u_r = λ_r² u_r.
An approximation M̂ = M̂ₖ of rank less than k which minimizes λ_max(M − M̂)
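The optimal low-rank approximation just described is obtained by truncating the SVD (the Eckart–Young theorem), and the achieved error is the first discarded singular value. A quick numerical check (matrix and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(M, full_matrices=False)

k = 2
M_k = U[:, :k] * s[:k] @ Vt[:k, :]        # best rank-k approximation
err = np.linalg.norm(M - M_k, ord=2)      # spectral norm of the error
assert np.isclose(err, s[k])              # equals the (k+1)-th singular value
```

This is exactly the mechanism exploited below for Hankel operators: truncating the SVD of H_G yields the Hankel-optimal reduced model, with error given by the first neglected Hankel singular value.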
numbers λ_r = λ_r(M) will be used: if the first k right singular vectors
v₁, …, v_k of M can be defined, but the supremum

    σ = sup{ |M v| : |v| = 1, v orthogonal to v₁, …, v_k }

is not achievable, then λ_r(M) = σ for all r > k.
8.2.4 SVD of a Hankel operator
Any causal stable LTI system with a state-space mod... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |