6.824 2006 Lecture 2: I/O Concurrency Recall timeline [draw this time-line] Time-lines for CPU, disk, network How can we use the system's resources more efficiently? What we want is *I/O concurrency* Ability to overlap I/O wait with other useful work. In web server case, I/O wait mostly for net transfer to ...
https://ocw.mit.edu/courses/6-824-distributed-computer-systems-engineering-spring-2006/218dc61c38c02d26cb8dcb9193268a4c_lec2_concurrency.pdf
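The overlap the notes describe can be sketched in Python (an illustration, not the lecture's example; sleep() stands in for a slow network transfer):

```python
import threading
import time

def handle_client(client_id, transfer_seconds):
    # Stand-in for network I/O wait: this thread blocks, but the
    # process as a whole keeps serving the other clients.
    time.sleep(transfer_seconds)

start = time.monotonic()
threads = [threading.Thread(target=handle_client, args=(i, 0.2)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# With I/O concurrency the four 0.2 s waits overlap, so the total is
# close to 0.2 s rather than the 0.8 s a strictly sequential server needs.
print(f"elapsed: {elapsed:.2f}s")
```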
than I/O concurrency: 2x, not 100x In general, very hard to program to get good scaling. Usually easier to buy two separate computers, which we *will* talk about. Multiple process problems Cost of starting a new process (fork()) may be high. Cite as: Robert Morris, course materials for 6.824 Distributed Compu...
https://ocw.mit.edu/courses/6-824-distributed-computer-systems-engineering-spring-2006/218dc61c38c02d26cb8dcb9193268a4c_lec2_concurrency.pdf
threads are usually expensive, just like processes Kernel has to help create each thread Kernel has to help with each context switch? So it knows which thread took a fault... lock/unlock must go through kernel, but bad for them to be slow Many O/S do not provide kernel-supported threads, not portable User-le...
https://ocw.mit.edu/courses/6-824-distributed-computer-systems-engineering-spring-2006/218dc61c38c02d26cb8dcb9193268a4c_lec2_concurrency.pdf
maybe: disk read() Why are non-blocking system calls hard in general? Typical system call implementation, inside the kernel: [sys_read.c] Can we just return to user program instead of wait_for_disk? No: how will kernel know where to continue? ie. should it run userspace code or continue in the kernel syscall...
https://ocw.mit.edu/courses/6-824-distributed-computer-systems-engineering-spring-2006/218dc61c38c02d26cb8dcb9193268a4c_lec2_concurrency.pdf
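For contrast with the in-kernel picture above, user-level non-blocking I/O is easy to demonstrate in Python (an illustration, not the lecture's code): a read on an empty non-blocking pipe returns EAGAIN (raised as BlockingIOError) immediately instead of sleeping inside the kernel:

```python
import os

# Create a pipe and put the read end into non-blocking mode.
r, w = os.pipe()
os.set_blocking(r, False)

# Nothing has been written yet: a blocking read() would sleep in the
# kernel; a non-blocking read fails immediately with EAGAIN instead.
try:
    os.read(r, 1)
    result = "data"
except BlockingIOError:
    result = "would block"

os.write(w, b"x")
data = os.read(r, 1)   # now there is data, so the read succeeds
print(result, data)
```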
8.701 0. Introduction 0.4 Literature Introduction to Nuclear and Particle Physics Markus Klute - MIT 1 Recommended Books © Cambridge University Press. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/fairuse. Introduction to High Energy P...
https://ocw.mit.edu/courses/8-701-introduction-to-nuclear-and-particle-physics-fall-2020/21ca5d10c04a4587015d09877caf84a9_MIT8_701f20_lec0.4.pdf
Welcome back to 8.033! Image Courtesy of Wikipedia. Summary of last lecture: • Space/time unification • More 4-vectors: U, K • Doppler effect, aberration • Proper time, rest length, timelike, spacelike, null The Three Types of 4-Vectors: SPACELIKE: |Δx| > cΔt; NULL: |Δx| = cΔt; TIMELIKE: |Δx| ...
https://ocw.mit.edu/courses/8-033-relativity-fall-2006/21e5641958305fcb6e704fe8b59b404d_lecture7_kinem4.pdf
Time dilation examples: GRBs, supernovae, clocks on planes, GPS. GPS uses a constellation of 24 “NAVSTAR” satellites that are 11,000 miles above the earth's surface. How GPS receivers calculate your location: The positioning process: 1. Satellite 1 transmits a signal that contains data on its location in space and the exac...
https://ocw.mit.edu/courses/8-033-relativity-fall-2006/21e5641958305fcb6e704fe8b59b404d_lecture7_kinem4.pdf
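The positioning process can be illustrated with a 2-D toy trilateration in Python (satellite positions and ranges invented for the example; real GPS works in 3-D and also solves for the receiver clock bias):

```python
import math

# Each "satellite" reports a distance to the receiver. Subtracting the
# circle equations pairwise cancels the x^2 + y^2 terms and leaves a
# linear system for the receiver position.
sats = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(s, true_pos) for s in sats]

def trilaterate(sats, dists):
    (x1, y1), (x2, y2), (x3, y3) = sats
    d1, d2, d3 = dists
    # Circle 2 minus circle 1, and circle 3 minus circle 1:
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1          # Cramer's rule on the 2x2 system
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

print(trilaterate(sats, dists))   # recovers (3.0, 4.0)
```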
8.04: Quantum Mechanics Massachusetts Institute of Technology Professor Allan Adams 2013 February 7 Lecture 2 Experimental Facts of Life Assigned Reading: E&R 16,7, 21,2,3,4,5, 3all NOT 4all!!! 1all, 23,5,6 NOT 2-4!!! Li. 12,3,4 NOT 1-5!!! Ga. 3 Sh. We all know atoms are made of: • electrons – Cathode...
https://ocw.mit.edu/courses/8-04-quantum-physics-i-spring-2013/220bfdfab6ca1826045b587d03dc9624_MIT8_04S13_Lec02.pdf
≈ (3646 Å) · (1 − 4/n²)⁻¹ for n ∈ {3, 4, 5, . . .}. (0.1) Rydberg and Ritz then found that λ⁻¹ = R · (n₁⁻² − n₂⁻²) for nᵢ ∈ ℤ, n₂ > n₁ (0.2) where R is the Rydberg constant dependent on the particular element but independent of the emission series. Where did that come from? Experimental result #3...
https://ocw.mit.edu/courses/8-04-quantum-physics-i-spring-2013/220bfdfab6ca1826045b587d03dc9624_MIT8_04S13_Lec02.pdf
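The agreement between the two forms can be checked numerically; a quick Python sketch (the value of R used here is the standard hydrogen Rydberg constant, not quoted from the notes):

```python
# Balmer's empirical formula and the Rydberg-Ritz form give the same
# hydrogen wavelengths when B = 4/R.
R_H = 1.0968e7          # hydrogen Rydberg constant, 1/m
B = 3646e-10            # Balmer's constant, 3646 angstroms in meters

def balmer(n):
    return B / (1 - 4 / n**2)

def rydberg_ritz(n1, n2):
    return 1 / (R_H * (n1**-2 - n2**-2))

for n in (3, 4, 5):
    # n = 3 gives the red H-alpha line near 656 nm
    print(n, balmer(n), rydberg_ritz(2, n))
```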
Figure 2: Photoelectric effect experimental schematic. Figure 3: Photoelectric dependence of I on V: expectation (top) versus reality (bottom). Einstein’s interpretation of this is that light comes in packets of definite energy E = hν, the intensity is proportional to the number of such packets, and the kinetic ene...
https://ocw.mit.edu/courses/8-04-quantum-physics-i-spring-2013/220bfdfab6ca1826045b587d03dc9624_MIT8_04S13_Lec02.pdf
. But the intensity shows an interference pattern. This implies that amplitudes, rather than intensities, add. Let us try to investigate the fringe widths in the interference pattern further. Let us suppose that light from the two slits start in phase. If they coincide at a single point on the screen, they remain i...
https://ocw.mit.edu/courses/8-04-quantum-physics-i-spring-2013/220bfdfab6ca1826045b587d03dc9624_MIT8_04S13_Lec02.pdf
cos(2π y d/(λD)), with maxima at yₙ = nλD/d, as expected. Note that maxima correspond to constructive interference, while minima correspond to destructive interference. This comes from the fact that amplitudes add, and the intensity is the square of the amplitude. The point is that in 8.03, you did the double-slit experiment and saw t...
https://ocw.mit.edu/courses/8-04-quantum-physics-i-spring-2013/220bfdfab6ca1826045b587d03dc9624_MIT8_04S13_Lec02.pdf
that electrons interfere like waves even with themselves! If they were really particles, they would have followed only one of two paths: the path from the top slit to the end, or the path from the bottom slit to the end. We could use a wall to check which one is happening. Yet this produces the exact same conundrum ...
https://ocw.mit.edu/courses/8-04-quantum-physics-i-spring-2013/220bfdfab6ca1826045b587d03dc9624_MIT8_04S13_Lec02.pdf
found the phenomenon of Bragg scattering. The path length difference between one layer of the crystal and the next, for a crystal with layer spacing d, is Δℓ = 2d sin(θ). Constructive interference occurs when Δℓ = nλ, i.e. λ = 2d sin(θ)/n. Davisson and Germer also observed that 2d sin(θ)/n ≈ h/√(2m qₑ V₀) ...
https://ocw.mit.edu/courses/8-04-quantum-physics-i-spring-2013/220bfdfab6ca1826045b587d03dc9624_MIT8_04S13_Lec02.pdf
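The right-hand side is the de Broglie wavelength of an electron accelerated through V₀ volts, easy to evaluate with standard constants (the 54 V operating point is the usual Davisson–Germer value, assumed here):

```python
import math

# de Broglie wavelength lambda = h / sqrt(2 m q_e V0) for an electron
# accelerated through a potential of V0 volts.
h = 6.626e-34    # Planck's constant, J s
m = 9.109e-31    # electron mass, kg
qe = 1.602e-19   # electron charge, C

def de_broglie(V0):
    return h / math.sqrt(2 * m * qe * V0)

# At 54 V, lambda is about 1.67 angstroms, comparable to the atomic
# spacing in a nickel crystal -- hence the observed diffraction.
lam = de_broglie(54)
print(lam)
```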
3.032 Mechanical Behavior of Materials Fall 2007 Buckling: Long, thin beam under end-loaded compression [figure: load P [N], maximum deflection δ_max [m], radius r, slope θ = dδ/dx] Lecture 3 (09.10.07) Beam bending: End-loaded cantilever. Images removed due to copyright restrictions. Please see: Silva, Emilio C. C. M.,...
https://ocw.mit.edu/courses/3-032-mechanical-behavior-of-materials-fall-2007/222911da292c25b77b09b47da533315c_lec3.pdf
3.46 PHOTONIC MATERIALS AND DEVICES Lecture 5: Waveguide Design—Optical Fiber and Planar Waveguides. Fiber Optics. Optical fiber ≡ core + cladding; guided if n₂ > n₁ (core index n₂, cladding index n₁); power is lost to the cladding beyond the critical angle θ_c = sin⁻¹(n₁/n₂). Each mode travels with its own β, v, U(x,y), P, k. Single mode (small c...
https://ocw.mit.edu/courses/3-46-photonic-materials-and-devices-spring-2006/2229d0701ea240ed993e2a499432519b_3_46l5_waveguide.pdf
NA = (n₂² − n₁²)^(1/2) ≈ n₂(2Δ)^(1/2); θₐ = sin⁻¹(NA). 3.46 Photonic Materials and Devices Prof. Lionel C. Kimerling Lecture 5: Waveguide Design—Optical Fiber and Planar Waveguides Page 1 of 5. θₐ = acceptance ∠ for fiber ≡ exit angle fo...
https://ocw.mit.edu/courses/3-46-photonic-materials-and-devices-spring-2006/2229d0701ea240ed993e2a499432519b_3_46l5_waveguide.pdf
n₁. Condition for guiding: n₁k₀ < β < n₂k₀. k_T = rate of change of U(r) in core; γ = rate of decay of U(r) in cladding. k_T² = n₂²k₀² − β²; γ² = β² − n₁²k₀². Rate of decay high ⇒ low penetration. k_T² + γ² = (n₂² − n₁²)k₀² = (NA · k₀)². k₀ ↑ ⇒ γ ↑ ⇒ penetration into cladding ↓. k_T > NA · k₀ ⇒ γ imaginary, wave...
https://ocw.mit.edu/courses/3-46-photonic-materials-and-devices-spring-2006/2229d0701ea240ed993e2a499432519b_3_46l5_waveguide.pdf
fiber: n₂ = 1.452, Δ = 0.01, NA = 0.205; λ = 0.85 μm (GaAs); a (core) = 25 μm ⇒ V = 37.9, M = 585. Remove cladding ⇒ n₁ = 1, NA = 1 ⇒ V = 184.8, M > 13,800 ...
https://ocw.mit.edu/courses/3-46-photonic-materials-and-devices-spring-2006/2229d0701ea240ed993e2a499432519b_3_46l5_waveguide.pdf
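These V and M figures can be reproduced; a Python sketch assuming the standard large-V mode-count estimate M ≈ 4V²/π² (which gives ≈582 rather than the notes' 585):

```python
import math

# Normalized frequency V = (2*pi*a/lam) * NA; for a step-index fiber
# the number of guided modes is roughly M ~ 4*V**2/pi**2 at large V.
def v_number(a, lam, NA):
    return 2 * math.pi * a / lam * NA

def mode_count(V):
    return 4 * V**2 / math.pi**2

V = v_number(a=25e-6, lam=0.85e-6, NA=0.205)    # the fiber from the notes
print(V, mode_count(V))                          # ~37.9 and ~582 modes

V_air = v_number(a=25e-6, lam=0.85e-6, NA=1.0)  # cladding removed, NA = 1
print(V_air, mode_count(V_air))                  # ~184.8 and > 13,800 modes
```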
Center O → shortest travel, slowest velocity ⇒ power-law profile: n²(r) = n₂²[1 − 2Δ(r/a)^p], with n = n₂ @ r = 0 and n = n₁ @ r = a, for r ≤ a. p = 1: n²(r) linear; p = 2: n²(r) quadratic; p → ∞: n²(r) step function. Δ = (n₂² − n₁²)/(2n₂²) ...
https://ocw.mit.edu/courses/3-46-photonic-materials-and-devices-spring-2006/2229d0701ea240ed993e2a499432519b_3_46l5_waveguide.pdf
Maslab Software Engineering January 5th, 2005 Yuran Lu Agenda  Getting Started On the Server Using the Documentation Design Sequence Tools The Maslab API  Design Principles  Threading in Java On the Server  Put these lines in your .environment:  add 6.186  add -f java_v1.5.0  setenv JAVA_HOME /...
https://ocw.mit.edu/courses/6-186-mobile-autonomous-systems-laboratory-january-iap-2005/222e989bfe21b02a813a2c10e4a646d7_software.pdf
 Modular Design  Provides abstraction  Gives up fine-control abilities, but makes code much more manageable  The Design Process  Top-down vs. Bottom-up  Write out specifications for each module  Write code for modules  Test each module separately as it is being written  Test overall system for function...
https://ocw.mit.edu/courses/6-186-mobile-autonomous-systems-laboratory-january-iap-2005/222e989bfe21b02a813a2c10e4a646d7_software.pdf
Thread, Runnable, wait(), notify(), sleep(), yield()  Must take care to avoid deadlock Synchronization in Threading  Allows blocks of code to be mutually exclusive  Writing to the same object from two threads at the same time will cause your program to break
https://ocw.mit.edu/courses/6-186-mobile-autonomous-systems-laboratory-january-iap-2005/222e989bfe21b02a813a2c10e4a646d7_software.pdf
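The same mutual-exclusion idea can be sketched in Python rather than Java (threading.Lock playing the role of a synchronized block; an illustration, not the Maslab API):

```python
import threading

# Shared counter guarded by a lock: holding the lock makes the
# read-modify-write atomic, so no increments are lost.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:              # mutually exclusive critical section
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # always 40000 with the lock; without it, updates can race
```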
Matrices. We have already defined what we mean by a matrix. In this section, we introduce algebraic operations into the set of matrices. Definition. If A and B are two matrices of the same size, say k by n, we define A + B to be the k by n matrix obtained by adding the corresponding entries of A and B, and we defin...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
the j-th column of B. Schematically, This definition seems rather strange, but it is in fact extremely useful. Motivation will come later! One important justification for this definition is the fact that this product operation satisfies some of the familiar "laws of algebra": Theorem 1. Matrix multiplication ...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
= Σ_{s=1}^n a_is b_sj + Σ_{s=1}^n a_is c_sj. The other distributivity formula and the homogeneity formula are proved similarly. We leave them as exercises. Now let us verify associativity. If A is k by n and B is n by p, then A·B is k by p. The product (A·B)·C is thus defined provided C has size p by q. The...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
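The algebraic laws above can be spot-checked with a direct Python transcription of the entry-wise definitions (an illustration, not part of the notes):

```python
# Entry-wise definitions from the text:
# (A+B)_ij = a_ij + b_ij, and (A.B)_ij = sum_s a_is * b_sj.
def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    return [[sum(A[i][s] * B[s][j] for s in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
C = [[2, 0], [1, 3]]

# Associativity and distributivity hold...
print(mat_mul(mat_mul(A, B), C) == mat_mul(A, mat_mul(B, C)))             # True
print(mat_mul(A, mat_add(B, C)) == mat_add(mat_mul(A, B), mat_mul(A, C))) # True

# ...but commutativity does not: A.B and B.A need not be equal.
print(mat_mul(A, B) == mat_mul(B, A))                                     # False
```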
= 0 + ⋯ + a_ij · 1 + ⋯ + 0 = a_ij, since the only nonzero entry of the identity in that sum equals 1. We conclude that I·A = A. An entirely similar proof shows that B·I = B if B has m columns. Remark. If A·B is defined, then B·A need not be defined. And even if it is defined, the two products need not be equal ...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
(i = 1,...,k and j = 1,...,n) is called a system of k linear equations in n unknowns. A solution of this system is a vector X = (x₁,...,xₙ) that satisfies each equation. The solution set of the system consists of all such vectors; it is a subset of Vₙ. We wish to determine whether this system has a solution, and ...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
system −2x + y + z = 1. This system has a solution; in fact, it has more than one solution. In solving this system, we can ignore the third equation, since it is the sum of the first two. Then we can assign a value to y arbitrarily, say y = t, and solve the first two equations for x and z. We obtain the resul...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
set of the system A·X = C. Proof. Exchanging rows i and j of both matrices has the effect of simply exchanging equations i and j of the system. Replacing row i by itself plus c times row j has the effect of replacing the i-th equation by itself plus c times the j-th equation. And multiplying row i by a non-zero scala...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
3. Let A be a matrix of size k by n. Let r be the rank of A. Then the solution space of the system of equations A·X = 0 is a subspace of Vₙ of dimension n − r. Proof. The preceding theorem tells us that we can apply elementary operations to both the matrices A and 0 without changing the solution set. Applying ...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
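The theorem can be checked numerically; a Python sketch that row-reduces over exact rationals and counts pivots (the example matrix is invented here, not the one on p. A20):

```python
from fractions import Fraction

# Row-reduce A over the rationals and count pivots: that count is the
# rank r. The solution space of A.X = 0 then has dimension n - r, one
# free unknown per non-pivot column.
def rank(A):
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue                        # no pivot: a free column
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][col] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                M[i] = [a - M[i][col] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# 3 equations in 4 unknowns; the third row is the sum of the first two,
# so the rank is 2 and the solution space has dimension 4 - 2 = 2.
A = [[1, 2, 0, 1],
     [0, 1, 1, 2],
     [1, 3, 1, 3]]
print(rank(A), len(A[0]) - rank(A))
```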
merely of the single term.) Let us pause to consider an example. Example 3. Let A be the 4 by 5 matrix given on p. A20. The equation A·X = 0 represents a system of 4 equations in 5 unknowns. Now A reduces by row operations to the reduced echelon matrix. Here the pivots appear in columns 1, 2 and 4; thus J i...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
coefficients x_k of these vectors, then X = 0 if and only if each x_k (for k in K) equals 0. This is easy. Consider the first expression for X that we wrote down, where each component of X is a linear combination of the unknowns x_k. The k-th component of X is simply x_k. It follows that the equation X = 0 implies in part...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
X is a column matrix such that A·X = C, then A·(X − P) = 0, and conversely. The solution space of the system A·X = 0 is a subspace of dimension m = n − r; let A₁,...,A_m be a basis for it. Then X is a solution of the system A·X = C if and only if X − P is a linear combination of the vectors Aᵢ; that is, ...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
of C′ in row k. If c′ is not zero, there are no values of x₁,...,xₙ satisfying this equation, so the system has no solution. Let us choose C* to be a k by 1 matrix whose last entry is non-zero. Then apply the same elementary operations as before, in reverse order, to both B and C*. These operations tra...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
Since each x_j, for j in J, appears in only one equation of the system, we can solve for each x_j in terms of the numbers c_i and the unknowns x_k. We can now assign values arbitrarily to the x_k and thus obtain a particular solution of the system. The theorem follows. The procedure just described actually does much more than wa...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
n, show that there are values of C such that the system A·X = C has no solution. 2. Consider the matrix A of p. A23. (a) Find the general solution of the system A·X = 0. (b) Does the system A·X = C have a solution for arbitrary C? 3. Repeat Exercise 2 for the matrices C, D, and E of p. A23. 4. Let B be the ...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
basis A₁,...,A_m, B₁,...,B_r for all of Vₙ. Show the vectors A·B₁, ..., A·B_r span R; this follows from the fact that A·Aᵢ = 0 for all i. Show these vectors are independent.] (c) Conclude that if r < k, there are vectors C in V_k such that the system A·X = C has no solution; while if r = k, this system has a...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
Vₙ is as the solution set of a system of linear equations where the rows of A are independent. If A has size k by n, then the plane in question has dimension n − k. The equation is called a cartesian form for the equation of a plane. (If the rows of A were not independent, then the solution set would be either emp...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
its orthogonal complement has dimension n − k. Furthermore, W is the orthogonal complement of W⊥; that is, (W⊥)⊥ = W. Proof. That W⊥ has dimension n − k is an immediate consequence of Theorem 3; for W is the row space of a k by n matrix A with independent rows Aᵢ, whence W⊥ is the solution space of the sys...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
using the Gauss–Jordan algorithm; and then one writes down the equation B·X = B·P. We now turn to the special case of V₃, whose model is the familiar 3-dimensional space in which we live. In this space, we have only lines (1-planes) and planes (2-planes) to deal with. We can use either the parame...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
a₁(x₁ − p₁) + a₂(x₂ − p₂) + a₃(x₃ − p₃) = 0. We call this the equation of the plane through p = (p₁, p₂, p₃) with normal vector N = (a₁, a₂, a₃). We have thus proved the first half of the following theorem: Theorem. If M is a 2-plane in V₃, then M has a cartesian equation of the form a₁x₁ + a₂x₂ + a₃x₃ = b, w...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
normal vectors are independent. Proof. Take a cartesian equation for each plane; collectively, they form a system A·X = C of three equations in three unknowns. The rows of A are the normal vectors. The solution space of the system (which consists of the points common to all three planes) consists of a single p...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
if N·P ≠ b. Thus the intersection of L and M is either all of L, or it is empty. On the other hand, if L is not parallel to M, then N·A ≠ 0. In this case the equation can be solved uniquely for t. Thus the intersection of L and M consists of a single point. Example 5. Consider the plane M = M(P;A,B) in V₃,...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
a parametric equation for the line of intersection of the planes of Exercise 3. 6. Write a cartesian equation for the plane through P = (-1,0,2) and Q = (3,1,5) that is parallel to the line through R = (1,1,1) with direction vector A = (1,3,4). 7. Write cartesian equations for the plane M(P;A,B) in V4, where P = (...
https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf
§ 1. Information measures: entropy and divergence. Review: Random variables. • Two methods to describe a random variable (R.V.) X: 1. a function X : Ω → 𝒳 from the probability space (Ω, ℱ, P) to a target space 𝒳; 2. a distribution P_X on some measurable space (𝒳, ℱ). • Convention: capital letter – RV (e.g. X); smal...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
., the entropy H(P_{X|Y=y}) averaged over P_Y. • Q: Why such definition, why log, why entropy? Name comes from thermodynamics. Definition is justified by theorems in this course (e.g. operationally by compression), but also by a number of experiments. For example, we can measure time it t...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
H(X) = Σ_{i=0}^∞ p(1−p)ⁱ · log(1/(p(1−p)ⁱ)) = Σ_{i=0}^∞ p(1−p)ⁱ · (log(1/p) + i·log(1/(1−p))) = log(1/p) + ((1−p)/p)·log(1/(1−p)) = h(p)/p. Example (Infinite entropy): Can H(X) = +∞? Yes, P[X = k] = c/(k ln² k), k = 2, 3, ⋯. Review: Convexity. • Convex set: A subset S of some vector space is convex if x, y ∈ S ⇒ αx + ᾱy ∈ S for all α ∈ [0, 1] (ᾱ ≜ ...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
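The closed form H(X) = h(p)/p can be verified numerically; a Python sketch assuming the parameterization P[X = i] = p(1 − p)^i for i ≥ 0, with entropies in bits:

```python
import math

# Binary entropy h(p) and a direct numerical sum of the geometric
# distribution's entropy; the two should agree as H(X) = h(p)/p.
def h(p):
    return p * math.log2(1 / p) + (1 - p) * math.log2(1 / (1 - p))

def H_geometric(p, terms=2000):
    total = 0.0
    for i in range(terms):
        q = p * (1 - p)**i
        if q == 0.0:        # guard against float underflow in the tail
            break
        total += q * math.log2(1 / q)
    return total

p = 0.25
print(H_geometric(p), h(p) / p)   # both ~ 3.245 bits
```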
f convex ⇒ f(EX) ≤ E f(X); for strictly convex f, f(EX) < E f(X) unless X is a constant (X = EX a.s.). Famous puzzle: A man says, ”I am the average height and average weight of the population. Thus, I am an average man.” However, he is still considered to be a little overweight. Why? Answer: The weight is roughly proportional t...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
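Jensen's inequality is easy to see empirically; a Python sketch using the convex function f(x) = x², for which the gap between E f(X) and f(EX) is exactly the variance:

```python
import random

# Empirical check of Jensen: f(EX) <= E f(X) for convex f = x**2.
random.seed(0)
xs = [random.uniform(0, 10) for _ in range(10_000)]
mean = sum(xs) / len(xs)

f_of_mean = mean ** 2                       # f(EX)
mean_of_f = sum(x * x for x in xs) / len(xs)  # E f(X)

# The gap mean_of_f - f_of_mean equals the sample variance, which is
# strictly positive for any non-constant sample.
print(f_of_mean <= mean_of_f)
```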
X₁, . . . , Xₙ mutually independent (1.1) (1.2) Proof. 1. Expectation of a non-negative function. 2. Jensen's inequality. 3. H only depends on the values of P_X, not locations: H( ) = H( ). 4. Later (Lecture 2). 5. E log 1/P_{XY}(X,Y) = E log 1/(P_X(X)·P_{Y|X}(Y|X)). 6. Intuition: (X, f(X)) contains the same information as X if f is 1-1. Thus by 3 and 5: ...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
P[X = b] = 1/4, and P[X = c] = 1/8. We can succeed by first asking whether X equals the most likely value, then “X = b?”, then “X = c?”, after which we will know X. The average number of questions is 1/2 × 1 + 1/4 × 2 + 1/8 × 3 + 1/8 × 3 = 1.75, which is the minimal average number of questions and equals H(X) in bits. 1.1.1 Entropy: axiomatic characterization One might wond...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
H(X, Y) = H(X) + H(Y) if X ⊥⊥ Y. Equivalently, H_mn(p₁q₁, . . . , p_mq_n) = H_m(p₁, . . . , p_m) + H_n(q₁, . . . , q_n). More generally, H_mn(r₁₁, . . . , r_mn) ≤ H_m(p₁, . . . , p_m) + H_n(q₁, . . . , q_n), where Σⱼ r_ij = p_i and Σᵢ r_ij = q_j. Then H_m(p₁, . . . , p_m) = Σ_{i=1}^m p_i log 1/p_i is the only ... [CT06, p. 53] and the reference therein.
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
body (that is, every such process needs to be helped by expending some amount of external work). Notice that there is something annoying about the second law as compared to the first law. In the first law there is a quantity that is conserved, and this is somehow logically easy to accept. The second law seems a bit harde...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
log 1/pⱼ, where k is the Boltzmann constant; we assume that each particle can only be in one of ℓ molecular states (e.g. spin up/down, or if we quantize the phase volume into ℓ subcubes) and pⱼ is the fraction of particles in the j-th molecular state. 1.1.3* Entropy: submodularity Recall that ... of S. A set funct...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
also monotone: T₁ ⊂ T₂ ⟹ H(X_{T₁}) ≤ H(X_{T₂}). So fixing n, let us denote by Γₙ the set of all submodular set-functions on [n]; via an obvious enumeration of all non-empty subsets of [n], Γₙ is a closed convex cone in R^{2ⁿ−1}. Similarly, let us denote by Γₙ* the set of all set-functions corresponding to ...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
Let Xⁿ be a discrete n-dimensional RV and denote H̄ₖ(Xⁿ) = (n choose k)⁻¹ Σ_{T⊂[n], |T|=k} H(X_T) – the average entropy of a k-subset of coordinates. Then H̄ₖ/k is decreasing in k: (1/n)H̄ₙ ≤ ⋯ ≤ (1/k)H̄ₖ ≤ ⋯ ≤ H̄₁. Furthermore, the sequence H̄ₖ is increasing and concave in the sense of decreasing slope: Proof. Denote...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
Note: Han's inequality holds for any submodular set-function. Example: Another submodular set-function is S ↦ I(X_S; X_{Sᶜ}). Han's inequality for this one reads 0 = (1/n)Iₙ ≤ ⋯ ≤ (1/k)Iₖ ≤ ⋯ ≤ I₁, where Iₖ = (n choose k)⁻¹ Σ_{S:|S|=k} I(X_S; X_{Sᶜ}) – it gauges the amount of k-subset coupling in the random vector Xⁿ. 1.2 Diverge...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
0 · log(0/0) = 0. (2) If ∃a : Q(a) = 0, P(a) > 0, then D(P∥Q) = ∞. • A = Rᵏ, P and Q have densities f_P and f_Q: D(P∥Q) = ∫_{Rᵏ} f_P(xᵏ) log(f_P(xᵏ)/f_Q(xᵏ)) dxᵏ if Leb{f_P > 0, f_Q = 0} = 0, and +∞ otherwise. • A — measurable space: D(P∥Q) = E_Q[(dP/dQ) log(dP/dQ)] = E_P[log(dP/dQ)] if P ≪ Q, and +∞ otherwise. (A...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
nite values). D(P∥Q) can be +∞ also when P ≪ Q, but the two cases are consistent since D(P∥Q) = sup_Π D(P_Π∥Q_Π), where Π is a finite partition of the underlying space A (proof: later). • (Asymmetry) D(P∥Q) ≠ D(Q∥P). Example: P = Bernoulli(1/2), Q = deterministic H. Upon observing HHT, we know for sure it is P; but observing only heads can never make us absolutely sure it is Q. D ...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
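Both conventions and the asymmetry can be illustrated with a small Python sketch (the fair-coin vs. always-heads pair echoes the example above):

```python
import math

# Discrete KL divergence D(P||Q) in bits, with the 0*log(0/0) = 0
# convention; D(P||Q) = +inf as soon as P puts mass where Q has none.
def D(P, Q):
    total = 0.0
    for p, q in zip(P, Q):
        if p == 0:
            continue            # 0 * log(0/q) contributes nothing
        if q == 0:
            return math.inf     # P !<< Q
        total += p * math.log2(p / q)
    return total

P = [0.5, 0.5]   # fair coin
Q = [1.0, 0.0]   # always heads
print(D(P, Q), D(Q, P))   # inf vs 1.0 bit: divergence is asymmetric
```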
allis etc. Inequalities between various f-divergences such as (1.14) were once an active field of research. It was made largely irrelevant by a work of Harremoës and Vajda [HV11] giving a simple method for obtaining best possible inequalities between any two f-divergences. Theorem 1.4 (H v.s. D). If distribution P is ...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
[(σ₁² + (m₁ − m₀)²)/(2σ₀²) − 1/2] log e. Example (Vector Gaussian): A = ℂᵏ. D(𝒩c(m₁, Σ₁)∥𝒩c(m₀, Σ₀)) = log(det Σ₀/det Σ₁) + (m₁ − m₀)ᴴ Σ₀⁻¹ (m₁ − m₀) log e + tr(Σ₀⁻¹Σ₁ − I) log e (1.17)–(1.18) (assume det Σ₀ ≠ 0). Note: The definition of D(P∥Q) extends to measures, in which case...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
negative, take values ±∞, or even be undefined². Nevertheless, differential entropy shares many properties with the usual entropy: Theorem 1.5 (Properties of differential entropy). Assume that all differential entropies appearing below exist and are finite (in particular all RVs have pdfs and conditional pdfs). Then ...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
and hence measurable. Proof. Let Xⁿ be uniformly distributed on K. Then h(Xⁿ) = log Leb{K}. Let A be a rectangle a₁ × ⋯ × aₙ, so that h(Xⁿ) = Σᵢ log aᵢ. Then we have, by 1. in Theorem 1.5, h(Xⁿ) = Σᵢ h(Xᵢ ∣ X^{i−1}). On the other hand, by the chain rule, h(X_S) ≤ log Leb{K_S}, and h(X_S) = Σ_{i∈S}...
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
MIT OpenCourseWare https://ocw.mit.edu 6.441 Information Theory Spring 2016 For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.
https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf
15.093J Optimization Methods Lecture 4: The Simplex Method II Slide 1 Slide 2 Slide 3 Slide 4 1 Outline • Revised Simplex method • The full tableau implementation • Finding an initial BFS • The complete algorithm • The column geometry • Computational efficiency 2 Revised Simplex Initial data: A, b, c 1. St...
https://ocw.mit.edu/courses/15-093j-optimization-methods-fall-2009/227bd50822ccfd66c653bccf0a3e11fe_MIT15_093J_F09_lec04.pdf
[worked example of updating B⁻¹ omitted] 2.2 Practical issues • Numerical stability: B⁻¹ needs to be computed from scratch once in a while, as errors accumulate • Sparsity: B⁻¹ is represented in terms of sparse triangular matrices 3 Full tableau implementa...
https://ocw.mit.edu/courses/15-093j-optimization-methods-fall-2009/227bd50822ccfd66c653bccf0a3e11fe_MIT15_093J_F09_lec04.pdf
[tableau iterations omitted; optimal solution x = (x₁, x₂, x₃) = (4, 4, 4)] 4 Comparison of implementations (Slide 16): Full tableau vs. Revised simplex — Memory, Worst-case time, Best-case time: O(mn), O(mn), ...
https://ocw.mit.edu/courses/15-093j-optimization-methods-fall-2009/227bd50822ccfd66c653bccf0a3e11fe_MIT15_093J_F09_lec04.pdf
s.t. Ax + y = b, x, y ≥ 0. 3. If cost > 0 ⇒ LP infeasible; stop. 4. If cost = 0 and no artificial variable is in the basis, then a BFS was found. 5. Else, all yᵢ* = 0, but some are still in the basis. Say we have A_B(1), . . . , A_B(k) in the basis, k < m. There are m − k additional columns of A to form a basis. 6. Drive ...
https://ocw.mit.edu/courses/15-093j-optimization-methods-fall-2009/227bd50822ccfd66c653bccf0a3e11fe_MIT15_093J_F09_lec04.pdf
II. 2. Compute the reduced costs of all variables for this initial basis, using the cost coefficients of the original problem. 3. Apply the simplex method to the original problem. 6.1 Possible outcomes 1. Infeasible: Detected at Phase I. 2. A has linearly dependent rows: Detected at Phase I, eliminate redundant rows. ...
https://ocw.mit.edu/courses/15-093j-optimization-methods-fall-2009/227bd50822ccfd66c653bccf0a3e11fe_MIT15_093J_F09_lec04.pdf
ǫ ≤ x₁ ≤ 1, ǫxᵢ₋₁ ≤ xᵢ ≤ 1 − ǫxᵢ₋₁, i = 2, . . . , n. [figure: the perturbed unit cube in two (a) and three (b) dimensions] Theorem • The feasible set has 2ⁿ vertices • The vertices can be ordered so that each one is adjacent to and has lower cost than the previous one. • There exists a pivoting rule under which the simplex method requires 2ⁿ − 1 changes of basis...
https://ocw.mit.edu/courses/15-093j-optimization-methods-fall-2009/227bd50822ccfd66c653bccf0a3e11fe_MIT15_093J_F09_lec04.pdf
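The full tableau mechanics can be sketched in Python for the easy case b ≥ 0 with inequality constraints, where the slacks give an initial BFS and no Phase I is needed (an illustration, not the lecture's implementation; Dantzig's most-negative-reduced-cost rule, no anti-cycling):

```python
# Minimal full-tableau simplex for: max c.x s.t. A x <= b, x >= 0, b >= 0.
def simplex(c, A, b):
    m, n = len(A), len(c)
    # Tableau rows [A | I | b]; slack variable i occupies column n + i.
    T = [A[i] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    z = [-cj for cj in c] + [0.0] * (m + 1)   # reduced costs + objective cell
    basis = list(range(n, n + m))
    while True:
        j = min(range(n + m), key=lambda k: z[k])   # entering variable
        if z[j] >= -1e-9:
            break                                   # all reduced costs >= 0
        rows = [i for i in range(m) if T[i][j] > 1e-9]
        if not rows:
            raise ValueError("unbounded")
        r = min(rows, key=lambda i: T[i][-1] / T[i][j])  # ratio test
        piv = T[r][j]
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r and abs(T[i][j]) > 1e-12:
                f = T[i][j]
                T[i] = [v - f * p for v, p in zip(T[i], T[r])]
        zj = z[j]
        z = [v - zj * p for v, p in zip(z, T[r])]   # update reduced costs
        basis[r] = j
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, sum(ci * xi for ci, xi in zip(c, x))

# maximize 3x1 + 2x2 s.t. x1 + x2 <= 4, x1 + 3x2 <= 6
x, val = simplex([3.0, 2.0], [[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0])
print(x, val)   # optimum (4, 0) with value 12
```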
- 2.12 Lecture Notes - H. Harry Asada Ford Professor of Mechanical Engineering Fall 2005 Introduction to Robotics, H. Harry Asada 1 Chapter 1 Introduction Many definitions have been suggested for what we call a robot. The word may conjure up various level...
https://ocw.mit.edu/courses/2-12-introduction-to-robotics-fall-2005/228dbf33cc6dc0f84c0d87e00b31a410_chapter1.pdf
issue in manufacturing innovation for a few decades, and numerical control has played a central role in increasing system flexibility. Contemporary industrial robots are programmable machines that can perform different operations by simply modifying stored data, a feature that has evolved from the application of n...
https://ocw.mit.edu/courses/2-12-introduction-to-robotics-fall-2005/228dbf33cc6dc0f84c0d87e00b31a410_chapter1.pdf
is that of a numerically controlled manipulator, where the human operator and the master manipulator in the figure are replaced by a numerical controller. Figure removed for copyright reasons. See Figure 1-4 in Asada and Slotine, 1986. Figure 1-3 White body assembly lines using spot welding robots 1.2 Creation of...
https://ocw.mit.edu/courses/2-12-introduction-to-robotics-fall-2005/228dbf33cc6dc0f84c0d87e00b31a410_chapter1.pdf
configuration of the manipulator arm. Coriolis and centrifugal effects are prominent when the manipulator arm moves at high speeds. The kinematic and dynamic complexities create unique control problems that are not adequately handled by standard linear control techniques, and thus make effective control system desig...
https://ocw.mit.edu/courses/2-12-introduction-to-robotics-fall-2005/228dbf33cc6dc0f84c0d87e00b31a410_chapter1.pdf
or modifications of control actions are provided when the resultant motion is not adequate, or when unexpected events occur during the operation. The human operator is, therefore, an essential part of the control loop. When the operator is eliminated from the control system, all the planning and control commands mus...
https://ocw.mit.edu/courses/2-12-introduction-to-robotics-fall-2005/228dbf33cc6dc0f84c0d87e00b31a410_chapter1.pdf
and motion commands.
Figure 1-7 Remote-center compliance hand
A detailed understanding of the underlying principles and "know-how" involved in the task must be developed in order to use industrial robots effectively, while there is no such need for making control strategies explicit when the assembly and grindi...
https://ocw.mit.edu/courses/2-12-introduction-to-robotics-fall-2005/228dbf33cc6dc0f84c0d87e00b31a410_chapter1.pdf
order to adapt itself to diverse terrain conditions. See Figure 1-10. Photo removed for copyright reasons. Figure 1-8 Automatically guided vehicle for meal delivery in hospitals Photo removed for copyright reasons. Figure 1-9 Honda’s P3 humanoid robot Navigation is another critical functionality needed for mobile...
https://ocw.mit.edu/courses/2-12-introduction-to-robotics-fall-2005/228dbf33cc6dc0f84c0d87e00b31a410_chapter1.pdf
Deep Learning/Double Descent. Gilbert Strang, MIT, October 2019
Number of Weights
[Plot: polynomial fits with N = 40 and N = 4000 weights over x in [−3, 3].]
Fit training data by a Learning function F
We are given training data: inputs v, outputs w
Example: each v is an image of a number, w = 0, 1, . . . , 9
The vector v d...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
vk = ReLU(Ak vk−1 + bk)
Weights for layer k: Ak = matrix and bk = offset vector
v0 = training data / v1, . . . , vℓ−1 = hidden layers / vℓ = output

Deep Neural Networks
1 Key operation
2 Key rule
3 Key algorithm
4 Key subroutine
5 Key nonlinearity: ReLU(y) = max(y, 0) = ramp function
Composition F = F3(F2(F1(x, v0)))
Chain...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
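The composition F = F3(F2(F1(x, v0))) with ReLU between affine layers can be sketched in a few lines of NumPy (layer sizes and weights here are made-up illustrations):

```python
import numpy as np

# Minimal forward pass v_k = ReLU(A_k v_{k-1} + b_k): the repeated
# "affine map + ReLU" composition described above.
rng = np.random.default_rng(0)

def relu(y):
    return np.maximum(y, 0.0)           # the key nonlinearity

def forward(v0, layers):
    v = v0
    for A, b in layers[:-1]:
        v = relu(A @ v + b)             # hidden layers
    A, b = layers[-1]
    return A @ v + b                    # last layer kept linear here

layers = [(rng.standard_normal((4, 3)), rng.standard_normal(4)),
          (rng.standard_normal((2, 4)), rng.standard_normal(2))]
w = forward(rng.standard_normal(3), layers)
print(w.shape)    # (2,)
```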
Classification problem: true = 1 or −1
Regression problem: true = vector
Gradient descent: xk+1 = xk − sk ∇L(xk)
Stochastic descent: xk+1 = xk − sk ∇ℓ(xk, vk)
Key com...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
Key Questions
1. Optimization of the weights x = Ak and bk
2. Convergence rate of descent and accelerated descent (when xk+1 depends on xk and xk−1: momentum added)
3. Do the weights A1, b1, . . . generalize to unseen test data? (Ea...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
1 hidden layer with N neurons
1. Stochastic gradient descent optimizes weights Ak, bk
2. Backpropagation in the computational graph computes derivatives with respect to weights x = A1, b1, . . . , Aℓ, bℓ
3. The learning function F(x, v0) = . . . F3(F2(F1(x, v0)))
F1(v0) = max(A1v0 + b1, 0) = ReLU ∘ affine map
F(v...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
v1 has N components / N ReLU's
The number of flat regions in R^m bounded by the N hyperplanes:
r(N, m) = Σ_{i=0}^{m} (N choose i) = (N choose 0) + (N choose 1) + · · · + (N choose m)
N = 3 folds in a plane will produce 1 + 3 + 3 = 7 pieces
Recursion: r(N, m) = r(N − 1, m) + r(N − 1, m − 1)
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
math.mit.edu/learni
v0 has m components / v1 has N components / N ReLU's
The number of flat regions in R^m bounded by the N hyperplanes:
r(N, m) = Σ_{i=0}^{m} (N choose i) = (N choose 0) + (N choose 1) + · · · + (N choose m)
Recursion: r(N, m) = r(N − 1, m) + r(N − 1, m − 1)
N =...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
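The piece-count formula and its recursion are easy to verify numerically:

```python
from math import comb

# Number of linear pieces of a one-hidden-layer ReLU net:
# r(N, m) = C(N,0) + C(N,1) + ... + C(N,m)  (N hyperplane folds in R^m)
def r(N, m):
    return sum(comb(N, i) for i in range(m + 1))

# N = 3 folds in a plane (m = 2): 1 + 3 + 3 = 7 pieces
print(r(3, 2))

# Check the recursion r(N, m) = r(N-1, m) + r(N-1, m-1)
for N in range(1, 8):
    for m in range(1, 6):
        assert r(N, m) == r(N - 1, m) + r(N - 1, m - 1)
```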
Big problems are underdetermined [# weights > # samples]
Stochastic Gradient Descent finds weights that generalize well
F(x) = F2(F1(x)) is continuous piecewise linear
One hidden layer of neurons: deep networks have many more
Overfitting is not desirable! Gradient descent stops early!
"Generalization" measured...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
Convolutional Neural Nets (CNN)
A is a banded (Toeplitz) matrix carrying the three weights x1, x0, x−1 along its diagonals, with N + 2 inputs and N outputs:
A = x1 L + x0 C + x−1 R
Each shift L, C, R has a single diagonal of 1's
∂y/∂x1 = Lv,   ∂y/∂x0 = Cv,   ∂y/∂x−1 = Rv
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
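The identity A = x1 L + x0 C + x−1 R can be checked directly by building the shift matrices (which offset is called "left" vs "right" is a labeling choice here):

```python
import numpy as np

# 1-D convolution as a combination of shift matrices, each an
# N x (N+2) matrix with a single diagonal of ones.
N = 4
def shift(offset):
    S = np.zeros((N, N + 2))
    for i in range(N):
        S[i, i + offset] = 1.0
    return S

L, C, R = shift(0), shift(1), shift(2)
x1, x0, xm1 = 2.0, -1.0, 0.5          # the three weights to be learned
A = x1 * L + x0 * C + xm1 * R

v = np.arange(N + 2, dtype=float)
# d(Av)/dx1 = L v, and similarly for the other two weights
assert np.allclose(L @ v, (A @ v - x0 * (C @ v) - xm1 * (R @ v)) / x1)
print(A.shape)    # (4, 6): N+2 inputs, N outputs
```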
Input image vij, i, j from (0, 0) to (N+1, N+1); output image yij, i, j from (1, 1) to (N, N)
Shifts L, C, R, U, D = Left, Center, Right, Up, Down
A convolution is a combination of shift matrices = filter = Toeplitz matrix
The coefficients in the combination will be the "weights" to be learne...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
Computing the weights x = matrices Ak, bias vectors bk
Choose a loss function ℓ to measure F(x, v) − true output
Total loss L = (1/N) (sum of losses for all N samples)
Compute weights x to minimize the total loss L
Here are three loss functions. Cross-entropy is a favorite loss function for neural nets.
1 Sq...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
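Two of the losses can be written down directly: square loss for regression, and cross-entropy for classification (the one-hot convention for the "true" vector is an assumption here):

```python
import numpy as np

# Square loss for regression, cross-entropy for classification.
def square_loss(w, true):
    return 0.5 * np.sum((w - true) ** 2)

def cross_entropy(p, true):
    """true is a one-hot vector, p a probability vector."""
    return -np.sum(true * np.log(p))

p = np.array([0.7, 0.2, 0.1])
true = np.array([1.0, 0.0, 0.0])
print(cross_entropy(p, true))          # -log(0.7)
print(square_loss(p, true))
```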
xk = b ((b − 1)/(b + 1))^k,   yk = ((1 − b)/(1 + b))^k,   f(xk, yk) = ((1 − b)/(1 + b))^(2k) f(x0, y0)

Descent formula: xk+1 = xk − sk ∇F(xk). Stepsize sk = learning rate.
The first descent step starts out perpendicular to the level set. As it crosses through lower level sets, the functio...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
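The zig-zag formula can be reproduced by running exact-line-search steepest descent on the model problem f(x, y) = (x^2 + b y^2)/2 from the starting point (b, 1):

```python
import numpy as np

# Exact-line-search steepest descent on f(z) = z^T S z / 2, S = diag(1, b),
# starting from (b, 1).  The value is multiplied by ((1-b)/(1+b))^2 each
# step, matching f(x_k, y_k) = ((1-b)/(1+b))^(2k) f(x_0, y_0).
b = 0.1
S = np.diag([1.0, b])
z = np.array([b, 1.0])

f0 = 0.5 * z @ S @ z
for k in range(5):
    g = S @ z                          # gradient of f
    s = (g @ g) / (g @ S @ g)          # exact line-search stepsize
    z = z - s * g

c = (1 - b) / (1 + b)
fk = 0.5 * z @ S @ z
print(fk, c ** (2 * 5) * f0)           # equal: the slow zig-zag decay
```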
Descent with momentum:
xk+1 = xk − s zk
zk+1 = ∇f(xk+1) + β zk
Following the eigenvector q with eigenvalue λ:
[1 0; −λ 1] [ck+1; dk+1] = [1 −s; 0 β] [ck; dk]
It seems a miracle that this problem has a beautiful solution. The optimal s and β are...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
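Following one eigenvector, each momentum step multiplies (ck, dk) by a fixed 2x2 matrix. Taking the standard heavy-ball choices s = (2/(1+√b))^2 and β = ((1−√b)/(1+√b))^2 for eigenvalues in [b, 1] (an assumption about the truncated slide), the spectral radius at both extreme eigenvalues comes out (1−√b)/(1+√b):

```python
import numpy as np

# Heavy-ball recursion along an eigenvector with eigenvalue lam:
#   c_{k+1} = c_k - s*d_k,   d_{k+1} = lam*c_{k+1} + beta*d_k
# With the optimal s, beta the factor improves from (1-b)/(1+b)
# to (1-sqrt(b))/(1+sqrt(b)).
b = 0.25
s = (2 / (1 + np.sqrt(b))) ** 2
beta = ((1 - np.sqrt(b)) / (1 + np.sqrt(b))) ** 2

for lam in (b, 1.0):
    M = np.array([[1.0, -s],
                  [lam, beta - lam * s]])      # one step of the recursion
    rho = max(abs(np.linalg.eigvals(M)))
    print(lam, rho)   # both ~ (1-sqrt(b))/(1+sqrt(b)) = 1/3
```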
Key difference: b is replaced by √b
Ordinary descent factor: ((1 − b)/(1 + b))^2
Accelerated descent factor: ((1 − √b)/(1 + √b))^2
Steepest descent: (.99/1.01)^2 = .96
Accelerated descent: (.9/1.1)^2 = .67
Notice that λmax/λmin = 1/b = κ is the condition number of S
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
Stochastic Gradient Descent
Stochastic gradient descent uses a "minibatch" of the training data
Every step is much faster than using all data
We don't want to fit the training data too perfectly (overfitting)
Choosing a polynomial of degree 60 to fit 61 data points
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
changes to large oscillations near the solution
Kaczmarz for Ax = b with random i(k):
xk+1 = xk + ((bi − ai^T xk) / ||ai||^2) ai

Stochastic Descent Using One Sample Per Step
Early steps of SGD often converge quickly toward the solution x*
Here we pause to look at semi-convergence: fast start by stochastic gradient d...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
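The Kaczmarz update is short enough to run directly on a made-up consistent system; each step uses one random row, the simplest picture of one-sample stochastic descent:

```python
import numpy as np

# Kaczmarz iteration for Ax = b with a random row i(k) at each step:
#   x_{k+1} = x_k + (b_i - a_i^T x_k) / ||a_i||^2 * a_i
# Each step projects x onto the hyperplane of one equation.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
b = A @ x_true                        # consistent system

x = np.zeros(5)
for k in range(2000):
    i = rng.integers(20)
    a = A[i]
    x = x + (b[i] - a @ x) / (a @ a) * a

print(np.linalg.norm(x - x_true))     # small: converges to the solution
```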
∂w/∂x: Explicit Formulas
vL = bL + AL vL−1, or simply w = b + Av
The output wi is not affected by bj or Ajk if j ≠ i
Fully connected layer, independent weights Ajk:
∂wi/∂bj = δij and ∂wi/∂Ajk = δij vk
Example: [w1; w2] = [b1 + a11 v1 + a12 v2; b2 + a21 v1 + a22 v2]
∂w1/∂b1 = 1,   ∂w1/∂b2 = 0,   ∂w1/∂a11...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
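The δij formulas can be confirmed by finite differences on a small made-up layer:

```python
import numpy as np

# Check dw_i/dA_jk = delta_ij * v_k for one fully connected
# layer w = b + A v, by perturbing a11.
rng = np.random.default_rng(2)
A = rng.standard_normal((2, 2))
b = rng.standard_normal(2)
v = rng.standard_normal(2)
w = b + A @ v

eps = 1e-6
A2 = A.copy()
A2[0, 0] += eps                       # perturb a11 only
w2 = b + A2 @ v
print((w2[0] - w[0]) / eps, v[0])     # dw1/da11 = v1
print((w2[1] - w[1]) / eps)           # dw2/da11 = 0
```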
dF/dx = (dF3/dF2)(F2(F1(x))) (dF2/dF1)(F1(x)) (dF1/dx)(x)
What is the multivariable chain rule? Which order (forward or backward along the chain) is faster?

Backpropagation and the Chain Rule
L(x) adds up all the losses: ℓ(w − true) = ℓ(F(x, v) − true)
The partial derivatives of L with resp...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
Backward-mode AD is faster for M1 M2 w:
(M1 M2) w needs N^3 + N^2 multiplications
M1 (M2 w) needs only N^2 + N^2
Forward: (((M1 M2) M3) . . . ML) w needs (L − 1) N^3 + N^2
Backward: ...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
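The operation-count argument is easy to see in NumPy: both orders give the same vector, but only one order ever forms a matrix-matrix product:

```python
import numpy as np

# Same product, two orders: (M1 M2) w costs a matrix-matrix multiply
# (~N^3), while M1 (M2 w) is two matrix-vector multiplies (~N^2 each).
# Backward mode keeps every step matrix-vector.
N = 200
rng = np.random.default_rng(3)
M1 = rng.standard_normal((N, N))
M2 = rng.standard_normal((N, N))
w = rng.standard_normal(N)

fwd = (M1 @ M2) @ w                   # N^3 + N^2 multiplications
bwd = M1 @ (M2 @ w)                   # N^2 + N^2 multiplications
print(np.allclose(fwd, bwd))          # same result, different cost
```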
∂wi/∂uk = (∂wi/∂v1)(∂v1/∂uk) + · · · + (∂wi/∂vn)(∂vn/∂uk)
        = (∂wi/∂v1, . . . , ∂wi/∂vn) · (∂v1/∂uk, . . . , ∂vn/∂uk)
Multivariable chain rule: Multiply matrices!
∂w/∂u = (∂w/∂v)(∂v/∂u)

Hyperparameters: The Fateful Decisions
The words ...
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
Regularization = Weight decay: ℓ2 or ℓ1
Small λ: increase the variance of the error (overfitting)
Large λ: increase the bias (underfitting); ||b − Ax||^2 is less important
Deep learning with many extra weights and good hyperparameters will find solutions that generalize, without penalty
https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf
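The bias-variance effect of λ shows up in the closed-form ridge solution (A^T A + λI) x = A^T b, here on made-up data: a larger λ shrinks the weights toward zero:

```python
import numpy as np

# Ridge (l2 weight decay): minimize ||b - A x||^2 + lam * ||x||^2,
# solved by (A^T A + lam I) x = A^T b.  Large lam shrinks x (more bias);
# small lam fits the data more closely (more variance).
rng = np.random.default_rng(4)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)

def ridge(lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

x_small = ridge(1e-4)
x_large = ridge(1e2)
print(np.linalg.norm(x_small), np.linalg.norm(x_large))  # shrinks with lam
```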