• With

M(λ) = D + λ1M1 + · · · + λmMm,

the problem has the form of the dual semidefinite problem, with the optimization variables being (z, λ1, . . . , λm).
EXAMPLE: LOWER BOUNDS FOR DISCRETE OPTIMIZATION

• Quadratic problem with quadratic equality constraints:

minimize x′Q0x + a0′x + b0
subject to x′Qix + ai′x + bi = ...

Source: https://ocw.mit.edu/courses/6-253-convex-analysis-and-optimization-spring-2012/6c63c6219c60378bc27d5b4a9167f1bc_MIT6_253S12_lec_comp.pdf
where f: ℝⁿ → ℝ and gj: ℝⁿ → ℝ are real-valued convex functions.

• We introduce a convex function P: ℝ^r → ℝ, called penalty function, which satisfies

P(u) = 0 for all u ≤ 0,    P(u) > 0 if ui > 0 for some i

• We consider solving, in place of the original, the “penalized” problem

minimize f(x) + P( g(x) )
subject to x ∈ X
• Examples of penalty functions P and their conjugates Q(µ) = sup_u { µu − P(u) } (scalar case, shown in the original figure):

− P(u) = c max{0, u}:  Q(µ) = 0 if 0 ≤ µ ≤ c, +∞ otherwise.
− P(u) = max{0, au + u²} (slope a at u = 0), with conjugate Q(µ) as plotted.
− P(u) = (c/2)( max{0, u} )²:  Q(µ) = (1/(2c)) µ² if µ ≥ 0, +∞ if µ < 0.
• Important observation: the penalized problem and the original problem have equal optimal values. True if

P(u) = c Σ_{j=1}^r max{0, uj}   with c ≥ ‖µ*‖

for some optimal dual solution µ*.
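The exact-penalty condition can be checked numerically. The 1-D problem below (minimize f(x) = x subject to 1 − x ≤ 0, with optimal multiplier µ* = 1) and the grid minimization are hypothetical illustrations, not part of the lecture:

```python
import numpy as np

# Assumed toy problem: minimize f(x) = x subject to g(x) = 1 - x <= 0.
# Its optimal dual multiplier is mu* = 1, so the penalty P(u) = c*max(0, u)
# is exact precisely when c >= 1.
f = lambda x: x
g = lambda x: 1.0 - x
X = np.linspace(-3.0, 3.0, 6001)   # fine grid standing in for exact minimization

for c in (0.5, 2.0):
    penalized = f(X) + c * np.maximum(0.0, g(X))
    print(c, X[np.argmin(penalized)])
# c = 0.5 < mu*: the penalized minimum escapes to the boundary x = -3 (not exact);
# c = 2.0 > mu*: the penalized minimum is the constrained solution x = 1.
```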
DIRECTIONAL DERIVATIVES

• Directional derivative of a proper convex f: ℝⁿ → (−∞, ∞]:

f′(x; d) = lim_{α↓0} ( f(x + αd) − f(x) ) / α,   x ∈ dom(f), d ∈ ℝⁿ

[Figure: f(x + d), the chord slope f(x + d) − f(x), and the limiting slope f′(x; d) at x.]

• The ratio ( f(x + αd) − f(x) ) / α is monotonically nondecreasing in α, so the limit above exists.
= max_{g∈∂f(x)} min_{‖d‖≤1} d′g = max_{g∈∂f(x)} ( −‖g‖ ) = − min_{g∈∂f(x)} ‖g‖
STEEPEST DESCENT METHOD

• Start with any x0 ∈ ℝⁿ.

• For k ≥ 0, calculate −gk, the steepest descent direction at xk, and set

xk+1 = xk − αk gk

• Difficulties:
− Need the entire ∂f(xk) to compute gk. ...
[Figure: 3-D plot of a nondifferentiable cost over (x1, x2).]
• Subgradient methods abandon the idea of computing the full subdifferential to effect cost function descent ...

• Move instead along the direction of a single arbitrary subgradient.
SINGLE SUBGRADIENT CALCULATION

• Key special case: Minimax
f ...
•

xk+1 = PX( xk − αk gk ),

where gk is any subgradient of f at xk, αk is a positive stepsize, and PX(·) is projection on X.
[Figure: level sets of f, subgradient gk ∈ ∂f(xk), and the projected step xk+1 = PX(xk − αk gk) over X.]
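A minimal sketch of the iteration xk+1 = PX(xk − αk gk) on an assumed toy problem: f(x) = |x1| + |x2| over the box X = [1, 2] × [−1, 1], where projection is coordinatewise clipping. All names and data are illustrative only:

```python
import numpy as np

def project_box(x, lo, hi):
    # Projection onto the box {x | lo <= x <= hi} is coordinatewise clipping.
    return np.clip(x, lo, hi)

def subgradient_method(subgrad, x0, lo, hi, steps=2000):
    x = x0.astype(float)
    for k in range(steps):
        g = subgrad(x)
        alpha = 1.0 / (k + 1)                 # diminishing stepsize
        x = project_box(x - alpha * g, lo, hi)
    return x

# f(x) = |x1| + |x2|; np.sign gives a valid subgradient (0 at the kink).
subgrad = lambda x: np.sign(x)
x = subgradient_method(subgrad, np.array([2.0, 1.0]),
                       np.array([1.0, -1.0]), np.array([2.0, 1.0]))
print(x)  # near the constrained minimum (1, 0)
```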
KEY PROPERTY OF SUBGRADIENT METHOD

• For a small enough stepsize αk, ...
• Nonexpansiveness of projection: ‖PX(x) − PX(y)‖ ≤ ‖x − y‖, for all x, y ∈ ℝⁿ.

Proof: Use the projection theorem to write

( z − PX(x) )′( x − PX(x) ) ≤ 0,   for all z ∈ X.

Setting z = PX(y), ( PX(y) − PX(x) )′( x − PX(x) ) ≤ 0. Similarly, ( PX(x) − PX(y) )′( y − PX(y) ) ≤ 0. Adding and using the Schwarz inequality,

‖PX(y) − PX(x)‖² ≤ ( PX(y) − PX(x) )′( y − x ) ≤ ‖PX(y) − PX(x)‖ · ‖y − x‖.   Q.E.D.
CONVERGENCE MECHANISM

• Assume constant stepsize: αk ≡ α.

• If ‖gk‖ ≤ c for some constant c and all k,

‖xk+1 − x*‖² ≤ ‖xk − x*‖² − 2α( f(xk) − f(x*) ) + α²c²

so the distance to the optimum decreases if

0 < α < 2( f(xk) − f(x*) ) / c²

or equivalently, if xk does not belong to the level set

{ x | f(x) < f(x*) + αc²/2 }
and δk (the “aspiration level of cost reduction”) is updated according to

δk+1 = ρ δk             if f(xk+1) ≤ fk,
δk+1 = max{ β δk, δ }   if f(xk+1) > fk,

where δ > 0, β < 1, and ρ ≥ 1 are fixed constants.
SAMPLE CONVERGENCE RESULTS

• Let f̄ = inf_{k≥0} f(xk), and assume that for some c, we have

c ≥ sup_{k≥0} { ‖g‖ | g ∈ ∂f(xk) } ...

• gx ∈ ∂φ(x, zx)  ⇒  gx ∈ ∂f(x)

• Potential difficulty: For the subgradient method, we need to solve exactly the above maximization over z ∈ Z.

• We consider methods that use “approximate” subgradients that can be computed more easily.

ε-SUBDIFFERENTIAL

• For a proper convex f: ℝⁿ → (−∞, ∞] and ε > 0, we say that a vector g is an ε-subg...
• Let zx ∈ Z attain the supremum within ε ≥ 0 in Eq. (1), and let gx be some subgradient of the convex function φ(·, zx).

• For all y ∈ ℝⁿ, using the subgradient inequality,

f(y) = sup_{z∈Z} φ(y, z) ≥ φ(y, zx) ≥ φ(x, zx) + gx′(y − x) ≥ f(x) − ε + gx′(y − x)
• Replicate the entire convergence analysis for subgradient methods, but carry along the εk terms.

• Example: Constant αk ≡ α, constant εk ≡ ε. Assume ‖gk‖ ≤ c for all k. For any optimal x*,

‖xk+1 − x*‖² ≤ ‖xk − x*‖² − 2α( f(xk) − f* − ε ) + α²c²

so the distance to x* decreases if

0 < α < 2( f(xk) − f* − ε ) / c²
ψi = PX( ψi−1 − αk gi ),   i = 1, . . . , m

with gi being a subgradient of fi at ψi−1.

• Motivation is faster convergence. A cycle can make much more progress than a subgradient iteration with essentially the same computation.
CONNECTION WITH ε-SUBGRADIENTS

• Neighborhood property: If x and x̄ are “near” each other, then subgradi...
methods replace the original problem with an approximate problem.

• The approximation may be iteratively refined, for convergence to an exact optimum.

• A partial list of methods:
− Cutting plane/outer approximation.
− Simplicial decomposition/inner approximation.
− Proximal methods (including Augmented Lagrangi...
[Figure: cutting planes f(x0) + (x − x0)′g0 and f(x1) + (x − x1)′g1, with iterates x0, x1, x2, x3 approaching x* over X.]

• Note that Fk(x) ≤ f(x) for all x, and that Fk(xk+1) increases monotonically with k. These imply that all limit points of xk are optimal.

Proof: If xk → x̄, then Fk(xk) → f(x̄) [otherwise there would exist a hyperplane strictly separating epi(f) and (x̄, lim Fk(xk))]. This implies that ...
LECTURE 17
LECTURE OUTLINE

• Review of cutting plane method
• Simplicial decomposition
• Duality between cutting plane and simplicial decomposition
CUTTING PLANE METHOD

• Start with any x0 ∈ X. For k ≥ 0, set

xk+1 ∈ arg min_{x∈X} Fk(x),

where

Fk(x) = max{ f(x0) + (x − x0)′g0, . . . , ...
extreme points is much simpler than minimizing f over X.
− Minimizing a linear function over X is much simpler than minimizing f over X.
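A toy sketch of the cutting-plane iteration: the 1-D problem (f(x) = x² over X = [−2, 2]) is assumed, and a fine grid stands in for the linear-programming solution of min_X Fk:

```python
import numpy as np

# Assumed 1-D example: minimize f(x) = x^2 over X = [-2, 2], f'(x) = 2x.
f = lambda x: x**2
g = lambda x: 2 * x

X = np.linspace(-2.0, 2.0, 4001)   # grid standing in for an LP/QP solver
points = [2.0]                     # x0
for k in range(15):
    # F_k(x) = max_j { f(x_j) + (x - x_j) f'(x_j) }: pointwise max of cuts
    Fk = np.max([f(xj) + (X - xj) * g(xj) for xj in points], axis=0)
    points.append(X[np.argmin(Fk)])  # x_{k+1} in arg min_X F_k
print(points[-1])  # approaches the minimizer 0
```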
SIMPLICIAL DECOMPOSITION METHOD

[Figure: level sets of f; gradients ∇f(x0), . . . , ∇f(x3); generated extreme points x̃1, . . . , x̃4; iterates x0, . . . , x4 = x* over X.]

• Given current iterate xk, and finite set Xk ⊂ X (initially x0 ...
(x̃k+1 ∉ Xk and X has finitely many extreme points), so case (a) must eventually occur.

• The method will find a minimizer of f over X in a finite number of iterations.
COMMENTS ON SIMPLICIAL DECOMP.

• Important specialized applications
• Variant to enhance efficiency. Discard some of the extreme points that seem un...
F(x) = max_j { f(xj) + (x − xj)′yj },   x ∈ ℝⁿ

equivalently

F(x) = max_{j=1,...,ℓ} { yj′x − f*(yj) }

[this follows using xj′yj = f(xj) + f*(yj), which is implied by yj ∈ ∂f(xj) – the Conjugate Subgradient Theorem]
INNER LINEARIZATION OF FNS

[Figure: outer linearization F of f, with slopes y0, y1, y2 at x0, x1, x2; conjugate picture: F*(y) versus f*(y).]
• Case where f is differentiable

[Figure: c(x) and inner linearizations Ck(x), Ck+1(x); slope −∇f(xk); iterates xk, xk+1, and enlargement point x̃k+1.]

• Given Ck: inner linearization of c, obtain

xk ∈ arg min_{x∈ℝⁿ} { f(x) + Ck(x) }

• Obtain x̃k+1 such that

−∇f(xk) ∈ ∂c(x̃k+1),

and form Xk+1 = Xk ∪ { x̃k+1 }
NONDIFFERENTIABLE CASE

• Given Ck: inn...
pair

min_{x∈ℝⁿ} f1(x) + f2(x),     min_{λ∈ℝⁿ} f1*(λ) + f2*(−λ)

• Primal and dual approximations:

min_{x∈ℝⁿ} f1(x) + F2,k(x),     min_{λ∈ℝⁿ} f1*(λ) + F*2,k(−λ)

• F2,k and F*2,k are inner and outer approximations of f2 and f2*.

• x̃i+1 and gi are solutions of the primal or the dual approximating problem (and corresp...
decomposition/Inner linearization.
• Includes new methods, and new versions/extensions of old methods.

Polyhedral Approximation
Extended Monotropic Programming
Special Cases

Vehicle for Unification

• Extended monotropic programming (EMP):

min_{(x1,...,xm)∈S} Σ_{i=1}^m fi(xi)

where fi: ℝ^{ni} → (−∞, ∞] ...

• The dual EMP is ...
for Information and Decision
Systems Report LIDS-P-2820, MIT, September 2009; SIAM J. on
Optimization, Vol. 21, 2011, pp. 333-360.
Outline

1 Polyhedral Approximation
Outer and Inner Linearization
Cutting Plane and Simplicial Decomposition Method...
Conjugacy of Outer/Inner Linearization

Given a function f: ℝⁿ → (−∞, ∞] and its conjugate f*.

The conjugate of an outer linearization of f is an inner linearization of f*.

[Figure: outer linearization F of f (slopes y0, y1, y2 at x0, x1, x2) and its conjugate, an inner linearization of f* through y0, y1, y2.]
• Solve LP: generate

x̃k+1 ∈ arg min_{x∈C} ∇f(xk)′(x − xk)

• Solve NLP of small dimension: Set Xk+1 = { x̃k+1 } ∪ Xk, and generate xk+1 as

xk+1 ∈ arg min_{x∈conv(Xk+1)} f(x)

[Figure: level sets of f; ∇f(x0), . . . , ∇f(x3); extreme points x̃1, . . . , x̃4; iterates x0, . . . , x4 = x* over C.]

Finite convergence if C is a bounded polyhedron.
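The LP step can be sketched on an assumed problem: minimizing f(x) = ‖x − c‖² over the unit simplex, where the linear subproblem just picks the vertex with the smallest gradient coordinate. The NLP step over conv(Xk+1) is simplified here to an exact line search toward the new vertex, so this is a Frank-Wolfe-style variant, not the full method:

```python
import numpy as np

c = np.array([0.2, 0.5, 0.3])    # assumed minimizer, interior to the simplex
grad = lambda x: 2 * (x - c)     # f(x) = ||x - c||^2

x = np.array([1.0, 0.0, 0.0])    # start at a vertex of the simplex
for k in range(500):
    v = np.zeros(3)
    v[np.argmin(grad(x))] = 1.0  # LP step: best extreme point of the simplex
    d = v - x
    # exact line search for the quadratic f along x + gamma*d, gamma in [0, 1]
    gamma = np.clip(np.dot(c - x, d) / np.dot(d, d), 0.0, 1.0)
    x = x + gamma * d            # stays in the convex hull of visited vertices
print(x)  # approaches c = (0.2, 0.5, 0.3)
```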
able problems with linear constraints.

• Fenchel duality framework: Let m = 2 and S = { (x, x) | x ∈ ℝⁿ }. Then the problem

min_{(x1,x2)∈S} f1(x1) + f2(x2)

can be written in the Fenchel format

min_{x∈ℝⁿ} f1(x) + f2(x)

• Conic programs (second order, semidefinite - special case of Fenchel).

• Sum of functions (e.g., machin...
There are powerful conditions for strong duality q* = f* (generalizing classical monotropic programming results):

Vector Sum Condition for Strong Duality: Assume that for all feasible x, the set

S⊥ + ∂ε(f1 + · · · + fm)(x)

is closed for all ε > 0. Then q* = f*.

Special Case: Assume each fi is finite, or is polyhedral, ...
Let f: ℝⁿ → (−∞, ∞] be closed proper convex.

Given a finite set X ⊂ dom(f), we define the inner linearization of f as the function f̄X whose epigraph is the convex hull of the rays { (x, w) | w ≥ f(x), x ∈ X }:

f̄X(z) = min over { αx ≥ 0, Σ_{x∈X} αx = 1, Σ_{x∈X} αx x = z } of Σ_{x∈X} αx f(x),   if z ∈ conv(X);
f̄X(z) = ∞ otherwise.
I, add ỹi to Yi, where ỹi ∈ ∂fi(x̂i); for each i ∈ Ī, add x̃i to Xi, where x̃i ∈ ∂fi*(ŷi).

Enlargement Step for ith Component Function

Outer: For each i ∈ I, add ỹi to Yi, where ỹi ∈ ∂fi(x̂i).

[Figure: fi(xi) and inner linearization f̄i,Yi(xi); current slope ŷi at x̂i, new slope ỹi.]

Inne...
Comments on Polyhedral Approximation Algorithm

In some cases we may use an algorithm that solves simultaneously the primal and the dual.

Example: Monotropic programming, where xi is one-dimensional.

Special case: Convex separable network flow, ...
Simplicial Decomposition Method for min_{x∈C} f(x)

EMP equivalent: min_{x1=x2} f(x1) + δ(x2 | C), where δ(x2 | C) is the indicator function of C.

Generalized Simplicial Decomposition: Inner linearize C only, and solve the primal approximate EMP. It has the form

min_{x∈C̄} f(x)

where C̄ ... Assume that x̂ is ...
[Figure: f2(x) and inner linearization f̄2,X2(x) at x̂ with slope −ŷ; dual view: outer linearization of f2*, with slopes x̃i, i ≤ k, and new slope x̃k+1.]

Convergence - Polyhedral Case

Assume that:
All outer linearized functions fi are finite polyhedral
All inner linearized functi...
• Let {x̂k}K → x̄ and take the limit as ℓ → ∞, with k, ℓ ∈ K, ℓ < k, for i ∉ I, in Σ_{i=1}^m fi(xi).

• Exchanging roles of primal and dual, we obtain a convergence result for the pure inner linearization case.

• Convergence, pure inner linearization (I empty): Assume that the sequence {x̃ik}, i ∈ Ī, ... Then every limit point of ... is ...
− f: ℝⁿ → (−∞, ∞] is closed proper convex
− ck is a positive scalar parameter
− x0 is an arbitrary starting point

[Figure: f(x) together with the proximal objective f(x) + (1/(2ck)) ‖x − xk‖²; iterates xk, xk+1.]

• xk+1 exists because of the quadratic.
• Note it does not have the instability problem of the cutting plane method.
• If xk is optimal, xk+1 = xk.
• If {xk} ..., f(...
• Role of the parameter ck:

[Figure: trajectories xk, xk+1, xk+2 for small versus large ck.]

• Role of growth properties of f near the optimal solution set:

[Figure: fast versus slow convergence depending on the growth of f.]
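The proximal iteration has a closed form for an assumed 1-D example f(x) = |x|: it is soft-thresholding, and the sharp growth of |x| at its minimum makes the iterates reach the minimizer in finitely many steps:

```python
import math

# Proximal step for f(x) = |x| (assumed example):
#   argmin_x { |x| + (1/(2c)) (x - z)^2 } = sign(z) * max(|z| - c, 0)
def prox_abs(z, c):
    return math.copysign(max(abs(z) - c, 0.0), z)

x, c = 5.0, 1.0
trajectory = [x]
for k in range(10):
    x = prox_abs(x, c)
    trajectory.append(x)
print(trajectory)  # 5.0, 4.0, 3.0, 2.0, 1.0, then 0.0 forever: finite convergence
```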
RATE OF CONVERGENCE II

• Assume that for some scalars β > 0, δ > 0, and α ≥ 1,

f* + β ( d(x) )^α ≤ f(x),   for all x ∈ ℝⁿ with d(x) ≤ δ

where

d(x) = min ...
IMPORTANT EXTENSIONS

• Replace quadratic regularization by more general proximal term.
• Allow nonconvex f.

[Figure: f(x) with proximal terms Dk(x, xk) and Dk+1(x, xk+1); iterates xk, xk+1, xk+2.]

• Combine with linearization of f (we will focus on this first).
LECTURE 20
LECTURE OUTLINE

• Proximal methods
• Review of Proximal ...

xk+1 ∈ arg min_{x∈X} { Fk(x) + pk(x) }

where

Fk(x) = max{ f(x0) + (x − x0)′g0, . . . , f(xk) + (x − xk)′gk }

pk(x) = (1/(2ck)) ‖x − yk‖²

where ck is a positive scalar parameter.

• We refer to pk(x) as the proximal term, and to its center yk as the proximal center.

[Figure: f(x), Fk(x), proximal center yk, and the next iterate xk+1.]

• Change yk in different ways => diffe...
xk+1 ∈ arg min_{x∈X} { Fk(x) + pk(x) }

Fk(x) = max{ f(x0) + (x − x0)′g0, . . . , f(xk) + (x − xk)′gk }

pk(x) = (1/(2ck)) ‖x − yk‖²

• Null/Serious test for changing yk: For some fixed β ∈ (0, 1),

yk+1 = xk+1   if f(yk) − f(xk+1) ≥ β δk,
yk+1 = yk     if f(yk) − f(xk+1) < β δk,

where

δk = f(yk) − ( Fk(xk+1) + pk(xk+1) ) > 0

[Figure: serious step versus null step, comparing f(yk) − f(xk+1) with δk.]
solution pair if and only if

x* ∈ arg min_{x∈ℝⁿ} { f1(x) − x′λ* }   and   x* ∈ arg min_{x∈ℝⁿ} { f2(x) + x′λ* }

• By the Fenchel inequality, the last condition is equivalent to

λ* ∈ ∂f1(x*)   [or equivalently x* ∈ ∂f1*(λ*)]

and

−λ* ∈ ∂f2(x*)   [or equivalently x* ∈ ∂f2*(−λ*)]
GEOMETRIC INTERPRETATION

[Figure: −f1*(λ), q(λ), and f2*(−λ).]
• ... − xk′λ + (ck/2) ‖λ‖², so the dual problem is

minimize f*(λ) − xk′λ + (ck/2) ‖λ‖²
subject to λ ∈ ℝⁿ

where f* is the conjugate of f.

• f2 is real-valued, so there is no duality gap.

• Both primal and dual problems have a unique solution, since they involve a closed, strictly convex, and coercive cost function.

DUAL PROXIMAL ALGOR...
[Figure: Primal proximal iteration: f(x) with the quadratic term (1/(2ck)) ‖x − xk‖², iterates xk, xk+1, x*. Dual proximal iteration: conjugate f*(λ) with slopes xk, xk+1, and slope x* at λk+1.]

• The primal and dual implementations are mathematically equivalent and generate identical sequences {xk}.

• Which one is preferable depends on whether ...
• Minimization over u of p(u) + µk′u + (ck/2) ‖u‖² gives

µk+1 = µk + ck uk+1,   uk+1 = Exk+1 − d,

where

xk+1 ∈ arg min_{x∈X} Lck(x, µk)

and Lc is the Augmented Lagrangian function

Lc(x, µ) = f(x) + µ′(Ex − d) + (c/2) ‖Ex − d‖²
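The Augmented Lagrangian update can be run on an assumed scalar problem (minimize x² subject to x − 1 = 0, with solution x* = 1 and multiplier µ* = −2), where the inner minimization has a closed form:

```python
# Method-of-multipliers sketch for an assumed toy problem:
#   minimize x^2 subject to x - 1 = 0  (so E = 1, d = 1; x* = 1, mu* = -2)
# L_c(x, mu) = x^2 + mu*(x - 1) + (c/2)*(x - 1)^2
c, mu, x = 10.0, 0.0, 0.0
for k in range(50):
    x = (c - mu) / (2.0 + c)      # exact minimizer of L_c(., mu_k)
    mu = mu + c * (x - 1.0)       # multiplier update mu_{k+1} = mu_k + c*u_{k+1}
print(x, mu)  # converges to x* = 1, mu* = -2
```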
GRADIENT INTERPRETATION

• Back to the dual proximal algorithm and the dual update

λk+1 = ( xk − xk+1 ) / ck

• Propositi...
, but f is replaced by a cutting plane approximation Fk:

xk+1 ∈ arg min_{x∈ℝⁿ} { Fk(x) + (1/(2ck)) ‖x − xk‖² }

λk+1 = ( xk − xk+1 ) / ck

where gi ∈ ∂f(xi) for i ≤ k and

Fk(x) = max{ f(x0) + (x − x0)′g0, . . . , f(xk) + (x − xk)′gk } + δX(x)

• Proximal Inner Linearization Method (Dual proximal implementation): Let Fk* be the con...
” is made:
− The outer linearization version is the (standard) bundle method.
− The inner linearization version is an inner approximation version of a bundle method.
LECTURE 21
LECTURE OUTLINE

• Generalized forms of the proximal point algorithm
• Interior point methods
• Constrained optimization case - Barrier meth...
• Then strict cost improvement for xk ∉ X*.

• Guaranteed if f is convex and
(a) Dk(·, xk) satisfies (1), and is convex and differentiable at xk,
(b) we have ri(dom(f)) ∩ ri(dom(Dk(·, xk))) ≠ Ø
EXAMPLE METHODS

• Bregman distance function

Dk(x, y) = (1/ck) ( φ(x) − φ(y) − ∇φ(y)′(x − y) ),
, . . . , r.

• A barrier function: continuous, and goes to ∞ as any one of the constraints gj(x) approaches 0 from negative values; e.g.,

B(x) = − Σ_{j=1}^r ln( −gj(x) ),    B(x) = − Σ_{j=1}^r 1 / gj(x).

• Barrier method: Let

xk = arg min_{x∈S} { f(x) + εk B(x) },   k = 0, 1, . . . ,

where S = { x | ... } and the parameter sequenc...
• Example: minimize f(x) = (x1)² + (x2)² subject to 2 ≤ x1, using the logarithmic barrier term −εk ln(x1 − 2).

• As εk is decreased, the unconstrained minimum xk approaches the constrained minimum x* = (2, 0).

• As εk → 0, computing xk becomes more difficult because of ill-conditioning (a Newton-like method is essential for solving the approximate problems).

CONVERGENCE

• Every limit poin...
• ... lim inf_{k→∞} εk B(xk) ≥ 0 – a contradiction.
SECOND ORDER CONE PROGRAMMING

• Consider the SOCP

minimize c′x
subject to Aix − bi ∈ Ci,  i = 1, . . . , m,

where x ∈ ℝⁿ, c is a vector in ℝⁿ, and for i = 1, . . . , m, Ai is an ni × n matrix, bi is a vector in ℝ^ni, and Ci is the second order cone of ℝ^ni.

• We approxi...
• ... (λ1A1 + · · · + λmAm) over all λ ∈ ℝ^m such that D − (λ1A1 + · · · + λmAm) is positive definite.

• Here εk > 0 and εk → 0.

• Furthermore, we should use a starting point such that D − λ1A1 − · · · − λmAm is positive definite, and Newton’s method should ensure that the iterates keep D − λ1A1 − · · · − λmAm within the positive definite cone.
• ... )². The nondifferentiable penalty tends to set a large number of components of x to 0.

• Min of an expected value: min_x E{ F(x, w) }

• Stochastic programming:

min_x { F1(x) + E_w[ min_y F2(x, y, w) ] }

• More (many constraint problems, distributed incremental optimization ...)
size: Convergence to the optimal solution.

• What is the effect of the order of component selection?

CONVERGENCE: CYCLIC ORDER

• Algorithm

xk+1 = PX( xk − αk ∇̃fik(xk) )

• Assume all subgradients generated by the algorithm are bounded: ‖∇̃fik(xk)‖ ≤ c ...

• Assume components are chosen for iteration in cyclic ord...
stepsize αk ≡ α:

lim inf_{k→∞} f(xk) ≤ f* + α m c² / 2   (with probability 1)

• Convergence for αk ↓ 0 with Σ_{k=0}^∞ αk = ∞ (with probability 1).

• In practice, randomized stepsize and variations (such as randomization of the order within a cycle at the start of a cycle) often work much faster.

PROXIMAL-SUBGRADIENT CO...
zk ∈ arg min_x { fik(x) + (1/(2αk)) ‖x − xk‖² }

xk+1 = PX( zk − αk ∇̃hik(zk) )

• Variations:
− Minimize over ℝⁿ (rather than X) in the proximal step.
− Do the subgradient step without projection first, and then the proximal step.

• Idea: Handle “favorable” components fi with the more stable proximal iteration; handle other components hi with subgradient ...
• ... lim inf_{k→∞} f(xk) ≤ f* + α β m c² / 2   (with probability 1)

• Convergence for αk ↓ 0 with Σ_{k=0}^∞ αk = ∞ (with probability 1).
EXAMPLE

• ℓ1-Regularization for least squares with a large number of terms:

min_{x∈ℝⁿ} { γ ‖x‖1 + (1/2) Σ_{i=1}^m (ci′x − di)² }

• Use incremental gradient or proximal on the quadratic terms.
• Use proximal on the ...
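The proximal step for an ℓ1 term γ‖x‖1 has a well-known closed form, coordinatewise soft-thresholding; the weight γ and the data below are hypothetical:

```python
import numpy as np

def prox_l1(z, gamma, alpha):
    # argmin_x { gamma*||x||_1 + (1/(2*alpha)) ||x - z||^2 }
    # = coordinatewise soft-thresholding of z by alpha*gamma
    return np.sign(z) * np.maximum(np.abs(z) - alpha * gamma, 0.0)

z = np.array([3.0, -0.5, 1.2])
print(prox_l1(z, gamma=1.0, alpha=1.0))  # shrinks each coordinate toward 0
```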
• Problem: Minimize convex function f: ℝⁿ → ℝ over a closed convex set X.

• Subgradient method - constant step α:

xk+1 = PX( xk − α ∇̃f(xk) ),

where ∇̃f(xk) is a subgradient of f at xk, and PX(·) is projection on X.

• Assume ‖∇̃f(xk)‖ ≤ c for all k.

• Key inequality: For all optimal x*,

‖xk+1 − x*‖² ≤ ‖xk − ...
• Gradient projection method:

xk+1 = PX( xk − α ∇f(xk) )

• Define the linear approximation function at x:

ℓ(y; x) = f(x) + ∇f(x)′(y − x),   y ∈ ℝⁿ

• First key inequality: For all x, y,

f(y) ≤ ℓ(y; x) + (L/2) ‖y − x‖²

• Using the projection theorem to write

( xk − α∇f(xk) − xk+1 )′( xk − xk+1 ) ≤ 0,

and then the 1st key inequality, ...
• Complexity Estimate: Let the stepsize of the method be α = 1/L. Then for all k,

f(xk) − f* ≤ L min_{x*∈X*} ‖x0 − x*‖² / (2k)

• Thus, we need O(1/ε) iterations to get within ε of f*. Better than the nondifferentiable case.

• Practical implementation/same complexity: Start with some α and reduce it by some factor ...
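The α = 1/L stepsize can be seen on an assumed quadratic: f(x) = ‖x − c‖² (so L = 2) over the unit box, where projection is clipping; for this f, a 1/L gradient step jumps straight to c before projecting:

```python
import numpy as np

c = np.array([2.0, 0.5])          # unconstrained minimizer, outside the box
grad = lambda x: 2 * (x - c)      # f(x) = ||x - c||^2, Lipschitz constant L = 2
alpha = 0.5                       # alpha = 1/L
x = np.zeros(2)
for k in range(100):
    x = np.clip(x - alpha * grad(x), 0.0, 1.0)   # P_X is clipping for a box
print(x)  # converges to the constrained minimum (1.0, 0.5)
```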
• ... − α∇f(xk) + β(xk − xk−1),

where x−1 = x0 and β is a scalar with 0 < β < 1.

• A variant of this scheme for constrained problems separates the extrapolation and the gradient steps:

yk = xk + β(xk − xk−1)           (extrapolation step),
xk+1 = PX( yk − α ∇f(yk) )       (grad. projection step).

• When applied to the preceding ...
• ... the sequence {θk} satisfies θ0 = θ1 ∈ (0, 1], and

(1 − θk+1) / θk+1² ≤ 1 / θk²,   θk ≤ 2 / (k + 2)

• One possible choice is

βk = 0 if k = 0,   (k − 1)/(k + 2) if k ≥ 1,
θk = 1 if k = −1,   2/(k + 2) if k ≥ 0.

• Highly unintuitive. Good performance reported.

EXTENSION TO NONDIFFERENTIABLE CASE

• Consider the nondifferentiable problem of minimizing convex ... over a closed ...
“Subgradient Methods for Convex Optimization,” Operations Research Letters, Vol. 31, pp. 167-175.

• Bertsekas, D. P., 1999. Nonlinear Programming, Athena Scientific, Belmont, MA.
PROXIMAL AND GRADIENT PROJECTION

• Proximal algorithm to minimize convex f over closed convex X:

xk+1 ∈ arg min_{x∈X} { f(x) + (1/(2ck)) ‖x − ...
• ... (1/(2α)) ‖x − xk‖² ...

• For y ∈ X:

f(y) ≤ ℓ(y; x) + (L/2) ‖y − x‖²

• Cost reduction for α ≤ 1/L:

f(xk+1) + g(xk+1) ≤ ℓ(xk+1; xk) + (L/2) ‖xk+1 − xk‖² + g(xk+1)
≤ ℓ(xk+1; xk) + g(xk+1) + (1/(2α)) ‖xk+1 − xk‖²
≤ ℓ(xk; xk) + g(xk)
= f(xk) + g(xk)

• This is a key insight for the convergence analysis.
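The gradient step on f followed by the proximal step on g can be sketched for an assumed pair f(x) = (1/2)‖x − b‖² (L = 1) and g(x) = ‖x‖1, where the prox is soft-thresholding; b is toy data:

```python
import numpy as np

b = np.array([3.0, 0.4, -2.0])   # assumed data; grad f(x) = x - b, L = 1
alpha = 1.0                      # alpha = 1/L
x = np.zeros(3)
for k in range(50):
    z = x - alpha * (x - b)                              # gradient step on f
    x = np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)  # prox step on alpha*g
print(x)  # the soft-thresholded b, i.e. the minimizer of f + g
```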
GRADIENT-PROXIMAL ...

min_{x∈X} { f(x) + Dk(x, xk) }

• Example: Bregman distance function

Dk(x, y) = (1/ck) ( φ(x) − φ(y) − ∇φ(y)′(x − y) ),

where φ: ℝⁿ → (−∞, ∞] is a convex function, differentiable within an open set containing dom(f), and ck is a positive penalty parameter.

• All the ideas for applications and connections of ...
• ... (1/ck) Σ_{i=1}^n xik e^{ck yi}

EXPONENTIAL AUGMENTED LAGRANGIAN

• The dual proximal iteration is

xik+1 = xik e^{ck yik+1},   i = 1, . . . , n

where yk+1 is obtained from the dual proximal:

yk+1 ∈ arg min_{y∈ℝⁿ} { f*(y) + (1/ck) Σ_{i=1}^n xik e^{ck yi} }

• A special case for the convex problem

minimize f(x)
subject to g1(x) ≤ 0, . . . ...
Σ_{i=1}^n xi = 1

• Method:

xk+1 ∈ arg min_{x∈X} Σ_{i=1}^n xi ( gik + (1/αk) ln( xi / xik ) )

where gik are the components of ∇̃f(xk).

• This minimization can be done in closed form:

xik+1 = xik e^{−αk gik} / Σ_{j=1}^n xjk e^{−αk gjk},   i = 1, . . . , n
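The closed-form update above (exponentiated gradient on the unit simplex) is easy to run; the linear cost with constant gradient g below is an assumed illustration:

```python
import numpy as np

g = np.array([0.3, 0.1, 0.5])      # assumed constant gradient (f(x) = g.x)
x = np.ones(3) / 3.0               # start at the simplex center
alpha = 1.0
for k in range(200):
    w = x * np.exp(-alpha * g)     # x_i * e^{-alpha g_i}
    x = w / w.sum()                # normalization keeps x on the simplex
print(x)  # mass concentrates on the smallest-cost coordinate (index 1)
```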
LECTURE 25: REVIEW/EPILOGUE
LECTURE OUTLINE

CONVEX ANALYSIS A...

• ... nested sequences of closed sets.
• Closure operations and their calculus.
• Recession cones and their calculus.
• Preservation of closedness by linear transformations and vector sums.
HYPERPLANE SEPARATION

[Figure: (a) a hyperplane separating sets C1 and C2; (b) a supporting hyperplane with normal a at points x1, x2, x̄.]

• Separating/supporting hyperplane theorem.
• Strict and proper separation theorems....
[Figure: min common/max crossing framework: min common point w* and max crossing point q* for a set M ⊂ ℝ^{n+1}, in three cases (a), (b), (c).]

• Defined by a single set M ⊂ ℝ^{n+1}.
• w...
such that q(µ) = q*.

• MC/MC Theorem III: Similar to II but involves special polyhedral assumptions.

(1) M is a “horizontal translation” of M̃ by −P,

M̃ = M − { (u, 0) | u ∈ P },

where P: polyhedral and M̃: convex.

(2) We have ri(D̃) ∩ P ≠ Ø, where

D̃ = { u | there exists w ∈ ℝ with (u, w) ∈ M̃ }

IMPORTAN...
• ... for all x ∈ X with g(x) ≤ 0.

• Let

Q* = { µ | µ ≥ 0, f(x) + µ′g(x) ≥ 0, ∀ x ∈ X }.

• Nonlinear version: Then Q* is nonempty and compact if and only if there exists a vector x̄ ∈ X such that gj(x̄) < 0 for all j = 1, . . . , r.

[Figure: the set { (g(x), f(x)) | x ∈ X } and the hyperplane with normal (µ, 1), in three cases.]
• f(x) + µ′g(x) is the Lagrangian function.
• Dual problem: maximize q(µ) over µ ≥ 0.
• Strong Duality Theorem: q* = f* and there exists a dual optimal solution if one of the following two conditions holds:
(1) There exists x ∈ X such that g(x) < 0.
(2) The functions gj, j = 1, . . . , r, are affine, and there exists x ∈ ...
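The theorem can be checked on a problem small enough to solve by hand. The instance below (minimize x² subject to 1 − x ≤ 0, with X = ℝ) is an assumed example, not from the lecture; it satisfies condition (1), and a grid search over the dual function recovers µ* = 2 and q* = f* = 1:

```python
import numpy as np

# Toy instance (not from the lecture): minimize f(x) = x^2
# subject to g(x) = 1 - x <= 0, with X = R. Here f* = 1 at x = 1,
# and condition (1) holds since g(2) = -1 < 0.
f = lambda x: x ** 2
g = lambda x: 1.0 - x

xs = np.linspace(-5.0, 5.0, 20001)       # fine grid standing in for X = R

def q(mu):                               # dual function q(mu) = min_x L(x, mu)
    return np.min(f(xs) + mu * g(xs))    # analytically: mu - mu^2/4

mus = np.linspace(0.0, 4.0, 401)
qs = np.array([q(mu) for mu in mus])
mu_star, q_star = mus[np.argmax(qs)], qs.max()

assert abs(mu_star - 2.0) < 1e-9         # dual optimal solution exists
assert abs(q_star - 1.0) < 1e-9          # strong duality: q* = f* = 1
```

The grid is chosen so that the unconstrained minimizer x = µ/2 of the Lagrangian falls exactly on a grid point, making the check essentially exact.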
• Primal problem:
    minimize f1(x) + f2(x) subject to x ∈ ℝ^n,
where f1 : ℝ^n → (−∞, ∞] and f2 : ℝ^n → (−∞, ∞] are closed proper convex functions.
• Dual problem:
    minimize f1*(λ) + f2*(−λ) subject to λ ∈ ℝ^n,
where f1* and f2* are the conjugates.
[Figure: f1(x) and f2(x) with supporting lines of slope λ; the gap gives q(λ), and f* = q*.]
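The relation f* = q* can be verified numerically on a small instance. The quadratics below are assumed examples (not from the lecture); conjugates are computed by brute-force maximization over a grid, and the dual value is taken as the supremum of −f1*(λ) − f2*(−λ), i.e. minus the minimum in the dual problem above:

```python
import numpy as np

# Assumed example: f1(x) = 0.5*(x - 1)^2 and f2(x) = 0.5*x^2, both closed
# proper convex. The primal optimal value is 1/4, attained at x = 1/2.
xs = np.linspace(-10.0, 10.0, 4001)
f1 = 0.5 * (xs - 1.0) ** 2
f2 = 0.5 * xs ** 2

def conjugate(fvals, y):                 # f*(y) = sup_x { x*y - f(x) }
    return np.max(xs * y - fvals)

primal = np.min(f1 + f2)                 # f* = inf_x f1(x) + f2(x)

ys = np.linspace(-5.0, 5.0, 2001)
dual = max(-conjugate(f1, y) - conjugate(f2, -y) for y in ys)

assert abs(primal - 0.25) < 1e-6
assert abs(primal - dual) < 1e-6         # Fenchel duality: f* = q*
```

Both optima here land exactly on grid points, so the agreement is tight; in general a grid-based conjugate is only approximate.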
328

CONIC DUALITY
• Consider minimizing f(x) ...
    ... b′λ,  λ ∈ Ĉ,
where x ∈ ℝ^n, λ ∈ ℝ^m, c ∈ ℝ^n, b ∈ ℝ^m, and A : m × n.
329

SUBGRADIENTS
[Figure: graph of f(z) with a supporting hyperplane of normal (−g, 1) at the point (x, f(x)).]
• ∂f(x) ≠ Ø for x ∈ ri(dom(f)).
• Conjugate Subgradient Theorem: If f is closed proper convex, the following are equivalent for a pair of vectors (x, y):
(i) x′y = f(x) + f*(y).
(ii) y ∈ ∂f(x).
(iii) x ∈ ∂f*(y).
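For f(x) = ½x² the conjugate is f*(y) = ½y² and ∂f(x) = {x}, so all three conditions collapse to y = x. The sketch below (an assumed example, not from the slides) checks (i) against (ii)-(iii) on a grid of pairs:

```python
import numpy as np

# f(x) = 0.5*x^2 has conjugate f*(y) = 0.5*y^2, and x*y - f(x) - f*(y)
# equals -0.5*(x - y)^2, so equality in (i) holds exactly when y = x,
# which is (ii) y in df(x) = {x} and also (iii) x in df*(y) = {y}.
f = lambda x: 0.5 * x ** 2
f_conj = lambda y: 0.5 * y ** 2

for x in np.linspace(-3.0, 3.0, 13):
    for y in np.linspace(-3.0, 3.0, 13):
        holds_i = abs(x * y - (f(x) + f_conj(y))) < 1e-12   # condition (i)
        holds_ii = abs(x - y) < 1e-12                       # (ii) <=> (iii)
        assert holds_i == holds_ii
```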
• Then a vector x* minimizes f over X iff there exists g ∈ ∂f(x*) such that −g belongs to the normal cone N_X(x*), i.e.,
    g′(x − x*) ≥ 0,  for all x ∈ X.
[Figure: level sets of f and the set C, with −∇f(x*) ∈ N_C(x*) in the differentiable case, and −g ∈ N_C(x*) for some g ∈ ∂f(x*) in the nondifferentiable case.]
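A one-dimensional instance makes the condition concrete. Minimizing f(x) = |x − 2| over X = [0, 1] (an assumed example, not from the slides) has x* = 1, and g = −1 ∈ ∂f(x*) certifies optimality since −g lies in the normal cone N_X(1):

```python
import numpy as np

# Toy instance: minimize f(x) = |x - 2| over X = [0, 1].
# The minimum is at x* = 1, where f is differentiable with df(1) = {-1}.
X = np.linspace(0.0, 1.0, 1001)
f = lambda x: np.abs(x - 2.0)

x_star = X[np.argmin(f(X))]
assert x_star == 1.0

# Optimality certificate: g = -1 in df(x*) satisfies g*(x - x*) >= 0 on X,
# i.e. -g = 1 lies in the normal cone N_X(1) = [0, inf).
g = -1.0
assert np.all(g * (X - x_star) >= 0.0)
```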
331
COMPUTATION: PROBLEM RANKING IN INCREASING COMPUTATIONAL DIFFICULTY
• ...
[Figure: level sets of f and the constraint set X; from the iterate x_k, a subgradient g_k ∈ ∂f(x_k) gives the step x_k − α_k g_k, which is projected back onto X to obtain x_{k+1} = P_X(x_k − α_k g_k).]
• ε-subgradient method (approx. subgradient).
• Incremental (possibly randomized) variants for minimizing large sums (can be viewed as an approximate subgradient method).
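The projected iteration x_{k+1} = P_X(x_k − α_k g_k) can be sketched on a toy instance (assumed here, not from the slides): minimizing the l1-norm over a Euclidean ball, where the projection has a closed form and the optimal value is known analytically:

```python
import numpy as np

# Projected subgradient sketch: minimize f(x) = ||x||_1 over the ball
# X = {x : ||x - c|| <= 1} with c = (2, 2). A subgradient of the l1-norm
# is sign(x); the step sizes alpha_k = 1/(k+1) are diminishing.
c = np.array([2.0, 2.0])

def project(y):                          # Euclidean projection onto X
    d = y - c
    return c + d / max(np.linalg.norm(d), 1.0)

x = c.copy()                             # starting point inside X
best = np.inf
for k in range(1000):
    g = np.sign(x)                       # g_k in df(x_k)
    x = project(x - (1.0 / (k + 1)) * g)
    best = min(best, np.abs(x).sum())    # track the best value found

f_star = 4.0 - np.sqrt(2.0)              # optimum at c - (1, 1)/sqrt(2)
assert abs(best - f_star) < 1e-2
```

On this set every feasible point has positive coordinates, so f is effectively linear and the iterates settle on the optimal boundary point quickly; in general only the best value so far converges, not the last iterate.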
333

OUTER AND INNER LINEARIZATION
• Outer linea...
• ... nonquadratic regularization) and combinations (e.g., with linearization).

335
PROXIMAL-POLYHEDRAL METHODS
• Proximal-cutting plane method
[Figure: the function f(x), its cutting-plane approximation F_k(x), the proximal term p_k(x), and the points y_k and x_{k+1}.]
• Proximal-cutting plane-bundle methods: Replace f with a cutting plane approx. and/or change quadratic regularization more conservatively...
... ε_{k+1} < ε_k for ...
[Figure: the set S, its boundary, the barrier function B(x), and the point x*; the barrier term shrinks as ε decreases.]
• Ill-conditioning. Need for Newton's method.
[Figure: two contour plots illustrating the increasing ill-conditioning of the barrier subproblems.]
338

ADVANCED TOPICS
• Incremental subgradient-proximal methods
• Complexity ...
Discrete to Continuum Modeling. Rodolfo R. Rosales; MIT, March, 2001.

Abstract

These notes give a few examples illustrating how continuum models can be derived from special limits of discrete models. Only the simplest cases are considered, illustrating some of the most basic ideas. These techniques are useful because ...
Hooke's Law for Torsional Forces  9
Equations for N torsion coupled equal pendulums  10
Continuum Limit  10
- Sine-Gordon Equation  11
- Boundary Conditions  11
Kinks and Breathers for the Sine-Gordon Equation  11
Example: Kink and Anti-Kink Solutions  12
Example: Breather Solutions  13
Pseudo-spectral Numerical Method fo...
..., will depend on the appropriate approximations being made. The most successful models arise in situations where most solutions of the discrete model evolve rapidly in time towards configurations where the assumptions behind the continuum model apply.

The basic step in obtaining a continuum model from a discrete system, ...
... when deriving the equations for Gas Dynamics in Statistical Mechanics, it is assumed that the local particle interactions rapidly exchange energy and momentum between the molecules - so that the local probability distributions for velocities take a standard form (equivalent to local thermodynamic equilibrium). ...
... x̂ is the distance between the particles, and f_{n+1/2} is positive when the spring is under tension. If there are no other forces involved (e.g. no friction), the governing equations for the system are:

    m d^2 x_n / dt^2 = f_{n+1/2}(x_{n+1} - x_n) - f_{n-1/2}(x_n - x_{n-1}),    (2.1)

for n = 0, 1, 2, ...
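A direct numerical integration of (2.1) is straightforward. The sketch below assumes identical linear springs, f(δ) = k(δ − L), fixed ends, and a leapfrog (velocity Verlet) integrator - all choices of the example, not of the notes - and checks that the discrete energy is conserved, as (2.1) requires in the absence of friction:

```python
import numpy as np

# Chain (2.1) with identical linear springs f(dx) = k*(dx - L), fixed ends.
N, m, k, L, dt = 64, 1.0, 1.0, 1.0, 0.01
x = L * np.arange(N + 2, dtype=float)        # equilibrium x_n = L*n
x[1:-1] += 0.1 * np.exp(-0.1 * (np.arange(1, N + 1) - N / 2) ** 2)  # a bump
v = np.zeros(N + 2)                          # chain starts at rest

def accel(x):
    a = np.zeros_like(x)
    # f_{n+1/2}(x_{n+1}-x_n) - f_{n-1/2}(x_n-x_{n-1}), as in equation (2.1)
    a[1:-1] = (k * (x[2:] - x[1:-1] - L) - k * (x[1:-1] - x[:-2] - L)) / m
    return a

def energy(x, v):
    return 0.5 * m * np.sum(v[1:-1] ** 2) + 0.5 * k * np.sum((np.diff(x) - L) ** 2)

E0 = energy(x, v)
for _ in range(2000):                        # leapfrog steps
    v += 0.5 * dt * accel(x)
    x += dt * v
    v += 0.5 * dt * accel(x)

assert abs(energy(x, v) - E0) < 1e-4         # energy nearly conserved
```

Watching the bump split into left- and right-moving pulses is a direct preview of the wave behavior the continuum limit below makes precise.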
Discrete to Continuum Modeling.    MIT, March, 2001 - Rosales.    3

... equations

    0 = f_{n+1/2}(x_{n+1} - x_n) - f_{n-1/2}(x_n - x_{n-1}).    (2.2)

This is the basic configuration (solution) that we will use in obtaining a continuum approximation. Note that this is a one parameter family: if the forces are monotone functions ...
... force f̄, and a typical spring length L. Thus we can write

    f_{n+1/2}(x̂) = f̄ F_{n+1/2}(x̂ / L),    (2.3)

where F_{n+1/2} is a non-dimensional mathematical function, of O(1) size, and with O(1) derivatives. A further assumption is that F_{n+1/2} changes slowly with n, so that two nearby springs are nearly equal. Mathem...
... basic configuration described by (2.2) will still be a solution. However, as soon as there is any significant motion, neighboring parts of the chain will respond very differently, and the solution will move away from the local equilibrium implied by (2.2). There is no known method to, generically, deal with these sort...
... x = x(s, t) is some smooth function of its arguments:

    x_n(t) = x(s_n, t),  where  s_n = n ε,    (2.8)
    ... = X_s + (ε/2) X_ss + O(ε^2)   and   ... = X_s - (ε/2) X_ss + O(ε^2),

with a similar formula applying to the difference F_{n+1/2} - F_{n-1/2}.
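The expansion can be spot-checked numerically; X(s) = sin(s) below is an arbitrary smooth test function (a choice of the example, not of the notes):

```python
import numpy as np

# For smooth X, the forward difference satisfies
# (X(s+eps) - X(s))/eps = X_s + (eps/2) X_ss + O(eps^2).
X, Xs, Xss = np.sin, np.cos, lambda s: -np.sin(s)

s = 0.7
for eps in [1e-1, 1e-2, 1e-3]:
    fd = (X(s + eps) - X(s)) / eps
    err = fd - (Xs(s) + 0.5 * eps * Xss(s))
    assert abs(err) < eps ** 2       # the remainder is O(eps^2)
```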
Equation (2.9) suggests that we should take

    T = (1/ε) sqrt( m L / f̄ ),    (2.10)

for the un-specified time scale in (2.7). Then equation (2.9) leads...
... course, in practice F must be obtained from laboratory measurements.

Remark 2.2 The way in which the equations for nonlinear elasticity can be derived for a crystalline solid is not too different from the derivation of the wave equation (2.11) for longitudinal vibrations.

Then a very important question arises (see fr...
... X = λ s corresponds to the rod under uniform tension (λ > 1), or compression (λ < 1). Also, note that c is a (nondimensional) speed - the speed at which elastic disturbances along the rod propagate: i.e. the sound speed.

3. At least qualitatively, though it is technically far more challenging.
... (2.12) can be approximated by the linear wave equation

    X_tautau = c^2 X_ss,    (2.13)

where c = c(λ) is a constant. The general solution to this equation has the form

    X = g(s - c tau) + h(s + c tau),    (2.14)

where g and h are arbitrary functions. This solution clearly shows that c is the wave propagation velocity.
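That any X of the form (2.14) solves (2.13) can be confirmed with centered finite differences; the Gaussian profiles used for g and h below are arbitrary choices of the example:

```python
import numpy as np

# Check that X = g(s - c*tau) + h(s + c*tau) satisfies X_tautau = c^2 X_ss
# for arbitrary smooth g, h (Gaussians here).
c = 1.5
g = lambda u: np.exp(-u ** 2)
h = lambda u: np.exp(-2.0 * (u - 1.0) ** 2)
X = lambda s, t: g(s - c * t) + h(s + c * t)

d = 1e-4                                 # step for centered second differences
for s, t in [(0.0, 0.0), (0.3, 0.2), (-1.0, 0.5)]:
    X_tt = (X(s, t + d) - 2 * X(s, t) + X(s, t - d)) / d ** 2
    X_ss = (X(s + d, t) - 2 * X(s, t) + X(s - d, t)) / d ** 2
    assert abs(X_tt - c ** 2 * X_ss) < 1e-4
```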
Remark 2.3 Fast vibrations ...ations, so that the energy they contain propagates as heat (diffuses). In one dimension, however, this does not generally happen, with the vibrations remaining coherent enough to propagate with a strong wave component. The actual processes involved are very poorly understood, and the statements just made result, mainly...
... where 0 < κ < π is a constant.    (2.18)

These must be added to an equilibrium solution x_n = λ L n = λ s_n, where λ > 0 is a constant.⁴

4. Check that these are solutions.
... (2.20)

    w = w(κ) = ... sin(κ/2) ... ,    (2.19)

propagating along the lattice - where c = λ L w / κ is a speed. Note that the speed of propagation is a function of the wavelength - this phenomenon is known by the name of dispersion.
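The sin(κ/2) dependence in (2.19) can be reproduced from a linearized chain. The normalization below (unit mass and spring constant, x_n'' = K(x_{n+1} - 2 x_n + x_{n-1})) is an assumed simplification, not the notes' exact setup; plugging in x_n ~ exp(i(κ n - w t)) gives w(κ) = 2 sqrt(K/m) sin(κ/2), so the phase speed w/κ varies with κ:

```python
import numpy as np

# Dispersion relation of the linearized chain (assumed normalization):
# w(kappa) = 2*sqrt(K/m)*sin(kappa/2), so w/kappa depends on wavelength,
# unlike the nondispersive wave equation (2.13).
m, K = 1.0, 1.0
w = lambda kappa: 2.0 * np.sqrt(K / m) * np.sin(kappa / 2.0)

kappas = np.array([0.01, 0.5, 1.0, 2.0, 3.0])
phase_speed = w(kappas) / kappas

# Sanity check: w^2 = (2K/m)*(1 - cos(kappa)) is the lattice eigenvalue.
assert np.allclose(w(kappas) ** 2, (2.0 * K / m) * (1.0 - np.cos(kappas)))

# Long waves travel at the continuum sound speed sqrt(K/m); short waves lag.
assert abs(phase_speed[0] - np.sqrt(K / m)) < 1e-4
assert np.all(np.diff(phase_speed) < 0)      # speed decreases with kappa
```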
We also note that the maximum frequency ... vibration excitations (with frequencies of the order of w) as constituting some sort of energy "bath" to be interpreted as heat. The energy in these vibrations propagates as waves through the media, with speeds which are of the same order of magnitude as the sound waves equation (2.13) describes. Before the advent of ...
... the force law, where

    r̂_{n+1/2} = sqrt( L^2 + (Y_{n+1} - Y_n)^2 )

is the distance between the masses. Assuming that there are no other forces involved, the governing equations for the system are:

    m d^2 Y_n / dt^2 = f_{n+1/2}(r̂_{n+1/2}) (Y_{n+1} - Y_n)/r̂_{n+1/2} - f_{n-1/2}(r̂_{n-1/2}) (Y_n - Y_{n-1})/r̂_{n-1/2},    (2.20)
... general phenomena was reported by E. Fermi, J. Pasta and S. Ulam, in 1955: Studies of Non Linear Problems, Los Alamos Report LA-1940 (1955); pp. 978-988 in Collected Papers of Enrico Fermi, Vol. II, The University of Chicago Press, Chicago (1965).
... equilibrium solutions described above will be stable only if the equilibrium lengths ℓ_{n+1/2} of the springs are smaller than the horizontal separation L between the masses, namely: ℓ_{n+1/2} < L. This is so that none of the springs is under compression in the solution, since any mass in a situation where its sp...
... elastic string restricted to move in the transversal direction only. Then we see that (2.21) is a model (nonlinear wave) equation for the transversal vibrations of a string, where X is the longitudinal coordinate along the string position, Y is the transversal coordinate, ρ = ρ(X) is the mass density along the string ...
Next, for small disturbances we have ... ≪ 1, and (2.23) can be approximated by the linear wave equation

    Y_TT = c^2 Y_XX,    (2.24)

where c^2 = F(1) ... is a constant (see equations (2.13 - 2.14)).

7. The coordinate s is simply a label for the masses. Since in this case the masses do not move horizontally, s and X can be used as ...
Notice how the stability condition ℓ < 1 in (2.22) guarantees that c^2 > 0 in (2.23). If this were not the case, instead of the linear wave equation, the linearized equation would have been of the form

    Y_TT + d^2 Y_XX = 0,    (2.25)

with d > 0. This is Lapla...
... longitudinal and transversal modes of vibration for a string are obtained. Since strings have no bending strength, these equations will be well behaved only as long as the string is under tension everywhere. Bending strength is easily incorporated into the mass-spring chain model. Basically, what we need to do is to inco...