Aashish34 committed on
Commit 5023d23
1 Parent(s): 4dc16f1

add new topics
math-ds-complete/index.html CHANGED
@@ -281,6 +281,15 @@
281
  </div>
282
 
283
  <!-- Machine Learning Modules -->
284
  <div class="module" data-subject="machine-learning" style="display: none;">
285
  <h4 class="module-title">Supervised Learning - Regression</h4>
286
  <ul class="topic-list">
@@ -5258,6 +5267,7 @@
5258
  </section>
5259
 
5260
  <!-- Topic 54: Eigenvectors -->
 
5261
  <section class="topic-section" id="topic-54" data-subject="linear-algebra" style="display: none;">
5262
  <div class="topic-header">
5263
  <span class="topic-number">Topic 54</span>
@@ -5266,37 +5276,56 @@
5266
  </div>
5267
 
5268
  <div class="content-card">
5269
- <h3>Introduction</h3>
5270
- <p><strong>What is it?</strong> An eigenvector of a matrix is a special vector that doesn't change
5271
- direction during the transformation—it only gets scaled by its eigenvalue λ.</p>
5272
- <p><strong>Why it matters:</strong> Eigenvectors reveal the fundamental directions of a
5273
- transformation. Used in Google PageRank, quantum mechanics, and data science.</p>
5274
  </div>
5275
 
5276
  <div class="content-card">
5277
- <h3>The Eigenvalue Equation</h3>
5278
  <div class="formula-card">
5279
- <div class="formula-header">Definition</div>
5280
- <div class="formula-main">Av = λv</div>
5281
- <p>A = matrix, v = eigenvector, λ = eigenvalue</p>
5282
  </div>
5283
- <div class="formula-card">
5284
- <div class="formula-header">Finding Eigenvalues</div>
5285
- <div class="formula-main">det(A - λI) = 0</div>
5286
- <p>Characteristic equation: solve for λ</p>
5287
  </div>
5288
- </div>
5289
 
5290
- <div class="callout-box insight">
5291
- <div class="callout-header">💡 GEOMETRIC MEANING</div>
5292
- <p>Most vectors get knocked off their span during a transformation. But eigenvectors stay on their
5293
- line—they just stretch or shrink!</p>
5294
- <p><strong>Example:</strong> A rotation has no real eigenvectors (everything rotates). A scaling
5295
- transformation has eigenvectors along the axes.</p>
5296
  </div>
5297
 
5298
  <div class="interactive-container">
5299
- <h3>Eigenvector Visualization</h3>
5300
  <canvas id="canvas-54" width="600" height="400"></canvas>
5301
  <div class="controls">
5302
  <button class="btn btn-primary" id="btn54transform">Apply Transformation</button>
@@ -5304,24 +5333,13 @@
5304
  </div>
5305
  </div>
5306
 
5307
- <div class="content-card">
5308
- <h3>Applications</h3>
5309
- <ul class="use-case-list">
5310
- <li><strong>Google PageRank:</strong> Finds important web pages using eigenvectors</li>
5311
- <li><strong>PCA (Principal Component Analysis):</strong> Data dimensionality reduction</li>
5312
- <li><strong>Stability Analysis:</strong> Engineering systems (will it oscillate or stabilize?)
5313
- </li>
5314
- <li><strong>Quantum Mechanics:</strong> Energy states are eigenvectors</li>
5315
- </ul>
5316
- </div>
5317
-
5318
  <div class="summary-card">
5319
  <h3>🎯 Key Takeaways</h3>
5320
  <ul>
5321
- <li>Eigenvector: stays on its span, Av = λv</li>
5322
- <li>Eigenvalue λ: scaling factor</li>
5323
- <li>Find via det(A - λI) = 0</li>
5324
- <li>Reveal fundamental directions of transformation</li>
5325
  </ul>
5326
  </div>
5327
  </section>
@@ -5330,31 +5348,76 @@
5330
  <section class="topic-section" id="topic-45" data-subject="linear-algebra" style="display: none;">
5331
  <div class="topic-header">
5332
  <span class="topic-number">Topic 45</span>
5333
- <h2>🔗 Matrix Multiplication as Composition</h2>
5334
- <p class="topic-subtitle">Successive transformations</p>
5335
  </div>
5336
  <div class="content-card">
5337
- <h3>Introduction</h3>
5338
- <p><strong>What is it?</strong> Matrix multiplication represents applying one transformation after
5339
- another (composition).</p>
5340
- <p><strong>Why it matters:</strong> Understanding this geometrically makes matrix multiplication
5341
- intuitive instead of just memorizing rules.</p>
5342
  </div>
 
5343
  <div class="content-card">
5344
- <h3>Formula</h3>
5345
  <div class="formula-card">
5346
- <div class="formula-header">Matrix Multiplication</div>
5347
- <div class="formula-main">(AB)v = A(Bv)</div>
5348
- <p>Apply B first, then A - right to left!</p>
5349
  </div>
5350
  </div>
5351
  <div class="summary-card">
5352
  <h3>🎯 Key Takeaways</h3>
5353
  <ul>
5354
- <li>Matrix multiplication = composition of transformations</li>
5355
- <li>Apply right-to-left: (AB)v means B first, then A</li>
5356
- <li>Generally not commutative: AB ≠ BA</li>
5357
- <li>Order matters!</li>
5358
  </ul>
5359
  </div>
5360
  </section>
@@ -5399,35 +5462,75 @@
5399
  <p class="topic-subtitle">Measuring how transformations scale area/volume</p>
5400
  </div>
5401
  <div class="content-card">
5402
- <h3>Introduction</h3>
5403
- <p><strong>What is it?</strong> The determinant measures how much a transformation scales areas (2D)
5404
- or volumes (3D).</p>
5405
- <p><strong>Why it matters:</strong> Tells if transformation is invertible, changes orientation, and
5406
- by how much space is scaled.</p>
5407
  </div>
 
5408
  <div class="content-card">
5409
- <h3>Formula</h3>
5410
  <div class="formula-card">
5411
- <div class="formula-header">2×2 Determinant</div>
5412
- <div class="formula-main">det([a b; c d]) = ad - bc</div>
5413
  </div>
5414
  </div>
 
5415
  <div class="content-card">
5416
- <h3>Interpretation</h3>
5417
  <ul>
5418
- <li><strong>|det| &gt; 1:</strong> Expands space</li>
5419
- <li><strong>|det| &lt; 1:</strong> Compresses space</li>
5420
- <li><strong>det = 0:</strong> Squishes to lower dimension (not invertible)</li>
5421
- <li><strong>det &lt; 0:</strong> Flips orientation</li>
5422
  </ul>
5423
  </div>
 
5424
  <div class="summary-card">
5425
  <h3>🎯 Key Takeaways</h3>
5426
  <ul>
5427
- <li>Determinant = area/volume scaling factor</li>
5428
- <li>det = 0 means non-invertible (squishes space)</li>
5429
- <li>Negative det means orientation flip</li>
5430
- <li>Critical for understanding linear systems</li>
5431
  </ul>
5432
  </div>
5433
  </section>
@@ -5871,148 +5974,61 @@
5871
  <section class="topic-section" id="topic-60" data-subject="calculus" style="display: none;">
5872
  <div class="topic-header">
5873
  <span class="topic-number">Topic 60</span>
5874
- <h2>📐 Derivative Formulas (Geometric)</h2>
5875
- <p class="topic-subtitle">Power rule, sum rule, and more</p>
5876
  </div>
5877
- <div class="content-card">
5878
- <h3>Introduction</h3>
5879
- <p><strong>What is it?</strong> Standard rules for computing derivatives without using limits every
5880
- time.</p>
5881
- <p><strong>Why it matters:</strong> Makes calculus practical - we can quickly find derivatives of
5882
- complex functions.</p>
5883
- </div>
5884
- <div class="content-card">
5885
- <h3>Common Derivative Rules</h3>
5886
- <div class="formula-card">
5887
- <div class="formula-header">Power Rule</div>
5888
- <div class="formula-main">d/dx(xⁿ) = nxⁿ⁻¹</div>
5889
- </div>
5890
- <div class="formula-card">
5891
- <div class="formula-header">Sum Rule</div>
5892
- <div class="formula-main">d/dx[f(x) + g(x)] = f'(x) + g'(x)</div>
5893
- </div>
5894
- <div class="formula-card">
5895
- <div class="formula-header">Constant Multiple</div>
5896
- <div class="formula-main">d/dx[cf(x)] = c·f'(x)</div>
5897
- </div>
5898
- </div>
5899
- <div class="callout-box example">
5900
- <div class="callout-header">📊 EXAMPLE</div>
5901
- <p><strong>f(x) = 3x⁴ - 2x² + 5</strong></p>
5902
- <p>f'(x) = 3·4x³ - 2·2x + 0</p>
5903
- <p>f'(x) = 12x³ - 4x</p>
5904
- </div>
5905
- <!-- WORKED EXAMPLE SECTION -->
5906
- <div class="content-block worked-example-section">
5907
- <h3>📝 Worked Example - Step by Step</h3>
5908
 
5909
- <div class="example-problem">
5910
- <h4>Problem:</h4>
5911
- <p class="problem-statement">Find the derivative of f(x) = 3x⁴ - 2x³ + 5x - 7</p>
5912
- </div>
5913
-
5914
- <div class="example-solution">
5915
- <h4>Solution:</h4>
5916
-
5917
- <div class="solution-step">
5918
- <div class="step-number">Step 1:</div>
5919
- <div class="step-content">
5920
- <p class="step-description">Identify the Power Rule</p>
5921
- <div class="step-work">
5922
- <code>Power Rule: d/dx(xⁿ) = n·xⁿ⁻¹</code><br>
5923
- <code>Apply to each term separately</code>
5924
- </div>
5925
- <p class="step-explanation">The power rule works term by term</p>
5926
- </div>
5927
- </div>
5928
-
5929
- <div class="solution-step">
5930
- <div class="step-number">Step 2:</div>
5931
- <div class="step-content">
5932
- <p class="step-description">Derivative of First Term: 3x⁴</p>
5933
- <div class="step-work">
5934
- <code>Coefficient stays: 3</code><br>
5935
- <code>Exponent: 4 → multiply by 4 and decrease exponent</code><br>
5936
- <code>Result: 3 × 4 × x³ = 12x³</code>
5937
- </div>
5938
- <p class="step-explanation">Bring down the 4, multiply by coefficient 3</p>
5939
- </div>
5940
- </div>
5941
 
5942
- <div class="solution-step">
5943
- <div class="step-number">Step 3:</div>
5944
- <div class="step-content">
5945
- <p class="step-description">Derivative of Second Term: -2x³</p>
5946
- <div class="step-work">
5947
- <code>Result: -2 × 3 × x² = -6x²</code>
5948
- </div>
5949
- <p class="step-explanation">Same process, keep the negative sign</p>
5950
- </div>
5951
  </div>
5952
-
5953
- <div class="solution-step">
5954
- <div class="step-number">Step 4:</div>
5955
- <div class="step-content">
5956
- <p class="step-description">Derivative of Third Term: 5x</p>
5957
- <div class="step-work">
5958
- <code>5x = 5x¹</code><br>
5959
- <code>Result: 5 × 1 × x⁰ = 5</code>
5960
- </div>
5961
- <p class="step-explanation">x⁰ = 1, so we just get the coefficient</p>
5962
- </div>
5963
  </div>
 
 
5964
 
5965
- <div class="solution-step">
5966
- <div class="step-number">Step 5:</div>
5967
- <div class="step-content">
5968
- <p class="step-description">Derivative of Fourth Term: -7</p>
5969
- <div class="step-work">
5970
- <code>Constant → derivative is 0</code>
5971
- </div>
5972
- <p class="step-explanation">Constants disappear when we take the derivative</p>
5973
- </div>
5974
- </div>
5975
 
5976
- <div class="solution-step">
5977
- <div class="step-number">Step 6:</div>
5978
- <div class="step-content">
5979
- <p class="step-description">Combine All Terms</p>
5980
- <div class="step-work">
5981
- <code>f'(x) = 12x³ - 6x² + 5 + 0</code><br>
5982
- <code>f'(x) = 12x³ - 6x² + 5</code>
5983
- </div>
5984
- <p class="step-explanation">Sum the derivatives of each term</p>
 
5985
  </div>
5986
  </div>
5987
-
5988
- <div class="final-answer">
5989
- <strong>Final Answer:</strong>
5990
- <span class="answer-highlight">f'(x) = 12x³ - 6x² + 5</span>
5991
- </div>
5992
-
5993
- <div class="verification">
5994
- <strong>✓ Check:</strong>
5995
- <p>At x=1: f'(1) = 12(1) - 6(1) + 5 = 11. This is the slope of the tangent line at x=1.</p>
5996
- </div>
5997
  </div>
5998
 
5999
- <div class="practice-problems">
6000
- <h4>💪 Try These:</h4>
6001
- <ol>
6002
- <li>Find f'(x) for f(x) = 2x³ + 4x² - 3</li>
6003
- <li>Find the derivative of g(x) = 5x⁴ - x</li>
6004
- <li>What is d/dx(7x² + 2x + 8)?</li>
6005
- </ol>
6006
- <button class="show-answers-btn"
6007
- onclick="this.nextElementSibling.style.display = this.nextElementSibling.style.display === 'none' ? 'block' : 'none'; this.textContent = this.textContent === 'Show Answers' ? 'Hide Answers' : 'Show Answers'">Show
6008
- Answers</button>
6009
- <div class="practice-answers" style="display: none;">
6010
- <p><strong>Answers:</strong></p>
6011
- <ol>
6012
- <li>f'(x) = 6x² + 8x</li>
6013
- <li>g'(x) = 20x³ - 1</li>
6014
- <li>14x + 2</li>
6015
- </ol>
6016
  </div>
6017
  </div>
6018
  </div>
@@ -6020,10 +6036,10 @@
6020
  <div class="summary-card">
6021
  <h3>🎯 Key Takeaways</h3>
6022
  <ul>
6023
- <li>Power rule: bring down exponent, subtract 1</li>
6024
- <li>Sum rule: derivative of sum = sum of derivatives</li>
6025
- <li>Constant multiple: constant comes out front</li>
6026
- <li>Makes computing derivatives fast and easy</li>
6027
  </ul>
6028
  </div>
6029
  </section>
@@ -8034,6 +8050,60 @@
8034
 
8035
  <!-- MACHINE LEARNING ALGORITHMS START HERE -->
8037
  <!-- ML-1: Linear Regression -->
8038
  <section class="topic-section ml-section" id="ml-topic-1" data-subject="machine-learning"
8039
  style="display: none;">
@@ -10274,6 +10344,59 @@ optimizer.step()</code>
10274
  <p>Binary classification using sigmoid: P(y=1) = 1/(1+e^(-z)) where z = β₀+β₁x. Despite its name, it's
10275
  for classification!</p>
10276
  </div>
10277
  <div class="content-card">
10278
  <h3>💻 Python Implementation</h3>
10279
  <div class="code-block">
@@ -10303,6 +10426,61 @@ optimizer.step()</code>
10303
  <p>SVM finds hyperplane that maximally separates classes. Uses support vectors (closest points) and
10304
  kernel trick for non-linear boundaries.</p>
10305
  </div>
10306
  <div class="content-card">
10307
  <h3>💻 Python Implementation</h3>
10308
  <div class="code-block">
@@ -10336,6 +10514,75 @@ optimizer.step()</code>
10336
  <p>Applies Bayes' Theorem with "naive" independence assumption. P(y|x) ∝ P(y)ΠP(xᵢ|y). Extremely
10337
  fast for text classification.</p>
10338
  </div>
10339
  <div class="content-card">
10340
  <h3>💻 Python Implementation</h3>
10341
  <div class="code-block">
@@ -10431,6 +10678,52 @@ optimizer.step()</code>
10431
  <p>Layers of connected neurons. Each neuron: z = Σwᵢxᵢ + b, then activation function σ(z). Trained
10432
  via backpropagation + gradient descent.</p>
10433
  </div>
10434
  <div class="content-card">
10435
  <h3>💻 Python Implementation</h3>
10436
  <div class="code-block">
 
281
  </div>
282
 
283
  <!-- Machine Learning Modules -->
284
+ <div class="module" data-subject="machine-learning" style="display: none;">
285
+ <h4 class="module-title">Getting Started</h4>
286
+ <ul class="topic-list">
287
+ <li><a href="#ml-topic-0" class="topic-link" data-topic="ml-topic-0">ML-0. Google ML Crash
288
+ Course</a>
289
+ </li>
290
+ </ul>
291
+ </div>
292
+
293
  <div class="module" data-subject="machine-learning" style="display: none;">
294
  <h4 class="module-title">Supervised Learning - Regression</h4>
295
  <ul class="topic-list">
 
5267
  </section>
5268
 
5269
  <!-- Topic 54: Eigenvectors -->
5270
+ <!-- Topic 54: Eigenvectors & Eigenvalues -->
5271
  <section class="topic-section" id="topic-54" data-subject="linear-algebra" style="display: none;">
5272
  <div class="topic-header">
5273
  <span class="topic-number">Topic 54</span>
 
5276
  </div>
5277
 
5278
  <div class="content-card">
5279
+ <h3>The Fundamental Equation</h3>
5280
+ <p>An <strong>eigenvector</strong> $\mathbf{v}$ of a square matrix $A$ is a non-zero vector that,
5281
+ when multiplied by $A$, yields a scalar multiple of itself. This scalar $\lambda$ is called the
5282
+ <strong>eigenvalue</strong>.
5283
+ </p>
5284
+ <div class="formula-card">
5285
+ <div style="font-size: 1.3em; margin: 15px 0;">
5286
+ $$A\mathbf{v} = \lambda\mathbf{v}$$
5287
+ </div>
5288
+ </div>
5289
  </div>
5290
 
5291
  <div class="content-card">
5292
+ <h3>✍️ How to Solve (Paper & Pen)</h3>
5293
+ <p><strong>Step 1: Set up the Characteristic Equation</strong><br>
5294
+ To find $\lambda$, we move everything to one side: $(A - \lambda I)\mathbf{v} = 0$. Since
5295
+ $\mathbf{v} \neq 0$, the matrix $(A - \lambda I)$ must be singular (squish space to zero area),
5296
+ meaning its determinant must be zero:</p>
5297
  <div class="formula-card">
5298
+ <div style="font-size: 1.2em; margin: 15px 0;">
5299
+ $$\det(A - \lambda I) = 0$$
5300
+ </div>
5301
  </div>
5302
+
5303
+ <div class="callout-box example">
5304
+ <div class="callout-header">WORKED EXAMPLE: FINDING EIGENVALUES</div>
5305
+ <p>Let $A = \begin{bmatrix} 3 & 1 \\ 0 & 2 \end{bmatrix}$. Solve $\det \begin{bmatrix} 3-\lambda
5306
+ & 1 \\ 0 & 2-\lambda \end{bmatrix} = 0$:</p>
5307
+ <p>$(3-\lambda)(2-\lambda) - (1 \times 0) = 0$<br>
5308
+ $(3-\lambda)(2-\lambda) = 0$<br>
5309
+ Roots: $\mathbf{\lambda_1 = 3, \lambda_2 = 2}$</p>
5310
  </div>
 
5311
 
5312
+ <p><strong>Step 2: Find the Eigenvectors</strong><br>
5313
+ For each $\lambda$, plug it back into $(A - \lambda I)\mathbf{v} = 0$ and solve for
5314
+ $\mathbf{v}$.</p>
5315
+ <div class="callout-box example">
5316
+ <div class="callout-header">WORKED EXAMPLE: FINDING EIGENVECTORS</div>
5317
+ <p>For $\lambda = 3$ in our matrix above:<br>
5318
+ $(A - 3I)\mathbf{v} = 0 \implies \begin{bmatrix} 3-3 & 1 \\ 0 & 2-3 \end{bmatrix}
5319
+ \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$<br>
5320
+ $\begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} =
5321
+ \begin{bmatrix} 0 \\ 0 \end{bmatrix}$<br>
5322
+ This gives the equation $y = 0$. $x$ can be anything! So an eigenvector for $\lambda=3$ is
5323
+ $\mathbf{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}$.</p>
5324
+ </div>
5325
  </div>
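The eigenvalues and eigenvector found by hand above can be cross-checked in Python. This is a verification sketch only, assuming NumPy is installed; it is not part of the page's own code:

```python
import numpy as np

# Matrix from the worked example above
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding (normalized) eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify the defining equation Av = lambda * v for every pair
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(sorted(eigenvalues))  # the eigenvalues 2 and 3, as found by hand
```

The eigenvector NumPy reports for λ = 3 is a scalar multiple of [1, 0]ᵀ, consistent with the hand calculation (eigenvectors are only defined up to scale).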
5326
 
5327
  <div class="interactive-container">
5328
+ <h3>Visual Confirmation</h3>
5329
  <canvas id="canvas-54" width="600" height="400"></canvas>
5330
  <div class="controls">
5331
  <button class="btn btn-primary" id="btn54transform">Apply Transformation</button>
 
5333
  </div>
5334
  </div>
5335
5336
  <div class="summary-card">
5337
  <h3>🎯 Key Takeaways</h3>
5338
  <ul>
5339
+ <li>Eigenvectors stay on their span (no rotation).</li>
5340
+ <li>Find eigenvalues by solving the polynomial $\det(A - \lambda I) = 0$.</li>
5341
+ <li>Find eigenvectors by solving $(A - \lambda I)\mathbf{v} = 0$.</li>
5342
+ <li>Critical for PCA, PageRank, and vibration analysis.</li>
5343
  </ul>
5344
  </div>
5345
  </section>
 
5348
  <section class="topic-section" id="topic-45" data-subject="linear-algebra" style="display: none;">
5349
  <div class="topic-header">
5350
  <span class="topic-number">Topic 45</span>
5351
+ <h2>🔗 Matrix Multiplication</h2>
5352
+ <p class="topic-subtitle">Composition of transformations and the row-column method</p>
5353
  </div>
5354
  <div class="content-card">
5355
+ <h3>The Geometric Perspective</h3>
5356
+ <p>Matrix multiplication $C = AB$ represents applying transformation $B$ first, followed by
5357
+ transformation $A$. This is why we write them from right to left when applying to a vector:
5358
+ $A(B\mathbf{v})$.</p>
 
5359
  </div>
5360
+
5361
  <div class="content-card">
5362
+ <h3>The "Paper & Pen" Method (Row by Column)</h3>
5363
+ <p>To compute the entry $c_{ij}$ of the product matrix $C$, you take the <strong>dot
5364
+ product</strong> of the $i$-th row of $A$ and the $j$-th column of $B$.</p>
5365
+
5366
  <div class="formula-card">
5367
+ <div class="formula-header">The General Formula</div>
5368
+ <div class="formula-main">
5369
+ $$c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj}$$
5370
+ </div>
5371
+ <p>Where $A$ is an $m \times n$ matrix and $B$ is an $n \times p$ matrix. The resulting matrix
5372
+ $C$ is $m \times p$.</p>
5373
+ </div>
5374
+
5375
+ <div class="callout-box example">
5376
+ <div class="callout-header">✍️ STEP-BY-STEP CALCULATION</div>
5377
+ <p>Let's multiply two 2x2 matrices:</p>
5378
+ <div style="font-size: 1.2em; margin: 15px 0;">
5379
+ $$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} 5 & 6 \\ 7 & 8
5380
+ \end{bmatrix}$$
5381
+ </div>
5382
+ <p><strong>Step 1: Top-Left Entry ($c_{11}$)</strong><br>
5383
+ Row 1 of $A$ "dots" Column 1 of $B$:<br>
5384
+ $(1 \times 5) + (2 \times 7) = 5 + 14 = \mathbf{19}$</p>
5385
+
5386
+ <p><strong>Step 2: Top-Right Entry ($c_{12}$)</strong><br>
5387
+ Row 1 of $A$ "dots" Column 2 of $B$:<br>
5388
+ $(1 \times 6) + (2 \times 8) = 6 + 16 = \mathbf{22}$</p>
5389
+
5390
+ <p><strong>Step 3: Bottom-Left Entry ($c_{21}$)</strong><br>
5391
+ Row 2 of $A$ "dots" Column 1 of $B$:<br>
5392
+ $(3 \times 5) + (4 \times 7) = 15 + 28 = \mathbf{43}$</p>
5393
+
5394
+ <p><strong>Step 4: Bottom-Right Entry ($c_{22}$)</strong><br>
5395
+ Row 2 of $A$ "dots" Column 2 of $B$:<br>
5396
+ $(3 \times 6) + (4 \times 8) = 18 + 32 = \mathbf{50}$</p>
5397
+
5398
+ <div style="font-size: 1.2em; margin: 15px 0;">
5399
+ $$AB = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}$$
5400
+ </div>
5401
  </div>
5402
  </div>
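The row-by-column arithmetic above is easy to confirm in Python. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

C = A @ B  # each entry is a row of A "dotted" with a column of B
print(C)
# [[19 22]
#  [43 50]]

# Order matters: BA is a different matrix entirely
assert not np.array_equal(A @ B, B @ A)
```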
5403
+
5404
+ <div class="callout-box warning">
5405
+ <div class="callout-header">⚠️ IMPORTANT RULES</div>
5406
+ <ul>
5407
+ <li><strong>Dimension Match:</strong> The number of columns in $A$ <em>must</em> equal the
5408
+ number of rows in $B$.</li>
5409
+ <li><strong>Order Matters:</strong> $AB$ is generally <strong>not</strong> equal to $BA$ (Matrix
5410
+ multiplication is non-commutative).</li>
5411
+ </ul>
5412
+ </div>
5413
+
5414
  <div class="summary-card">
5415
  <h3>🎯 Key Takeaways</h3>
5416
  <ul>
5417
+ <li>Geometric: Composition of transformations (right to left).</li>
5418
+ <li>Algebraic: Row-by-column dot products.</li>
5419
+ <li>Resulting dimensions: $(m \times n) \times (n \times p) \to (m \times p)$.</li>
5420
+ <li>Non-commutative: $AB \neq BA$.</li>
5421
  </ul>
5422
  </div>
5423
  </section>
 
5462
  <p class="topic-subtitle">Measuring how transformations scale area/volume</p>
5463
  </div>
5464
  <div class="content-card">
5465
+ <h3>The Geometric Intuition</h3>
5466
+ <p>The determinant of a matrix $A$, denoted as $\det(A)$ or $|A|$, represents the <strong>factor by
5467
+ which the linear transformation scales area</strong> (in 2D) or <strong>volume</strong> (in
5468
+ 3D).</p>
 
5469
  </div>
5470
+
5471
  <div class="content-card">
5472
+ <h3>2×2 Determinant</h3>
5473
  <div class="formula-card">
5474
+ <div class="formula-header">Method</div>
5475
+ <div style="font-size: 1.2em; margin: 15px 0;">
5476
+ $$\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc$$
5477
+ </div>
5478
+ </div>
5479
+ <div class="callout-box example">
5480
+ <div class="callout-header">✍️ 2x2 EXAMPLE</div>
5481
+ <p>Calculate $\det(A)$ for $A = \begin{bmatrix} 3 & 1 \\ 2 & 5 \end{bmatrix}$:</p>
5482
+ <p>$\det(A) = (3 \times 5) - (1 \times 2) = 15 - 2 = \mathbf{13}$</p>
5483
+ <p><em>Meaning: This transformation increases the area of any shape by a factor of 13.</em></p>
5484
  </div>
5485
  </div>
5486
+
5487
  <div class="content-card">
5488
+ <h3>3×3 Determinant (Expansion by Minors)</h3>
5489
+ <p>To calculate a 3x3 determinant, we "expand" along the first row using a checkerboard of signs ($+
5490
+ - +$).</p>
5491
+ <div class="formula-card">
5492
+ <div style="font-size: 1.1em; margin: 15px 0;">
5493
+ $$\det \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} = a \det
5494
+ \begin{bmatrix} e & f \\ h & i \end{bmatrix} - b \det \begin{bmatrix} d & f \\ g & i
5495
+ \end{bmatrix} + c \det \begin{bmatrix} d & e \\ g & h \end{bmatrix}$$
5496
+ </div>
5497
+ </div>
5498
+
5499
+ <div class="callout-box example">
5500
+ <div class="callout-header">✍️ 3x3 STEP-BY-STEP</div>
5501
+ <p>Calculate $\det(A)$ for $A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 1 & 0 & 6
5502
+ \end{bmatrix}$:</p>
5503
+ <ol>
5504
+ <li>$1 \times \det \begin{bmatrix} 4 & 5 \\ 0 & 6 \end{bmatrix} = 1 \times (24 - 0) = 24$
5505
+ </li>
5506
+ <li>$-2 \times \det \begin{bmatrix} 0 & 5 \\ 1 & 6 \end{bmatrix} = -2 \times (0 - 5) = 10$
5507
+ </li>
5508
+ <li>$3 \times \det \begin{bmatrix} 0 & 4 \\ 1 & 0 \end{bmatrix} = 3 \times (0 - 4) = -12$
5509
+ </li>
5510
+ </ol>
5511
+ <p>Total: $24 + 10 - 12 = \mathbf{22}$</p>
5512
+ </div>
5513
+ </div>
5514
+
5515
+ <div class="callout-box insight">
5516
+ <div class="callout-header">💡 WHAT THE VALUE TELLS YOU</div>
5517
  <ul>
5518
+ <li><strong>$\det(A) = 0$</strong>: The transformation squishes space into a lower dimension (a
5519
+ line or a point). The matrix is <strong>singular</strong> (not invertible).</li>
5520
+ <li><strong>Negative Determinant</strong>: The transformation flips the orientation (like a
5521
+ mirror reflection).</li>
5522
+ <li><strong>$\det(A) = 1$</strong>: The transformation preserves area/volume (e.g., a pure
5523
+ rotation).</li>
5524
  </ul>
5525
  </div>
5526
+
5527
  <div class="summary-card">
5528
  <h3>🎯 Key Takeaways</h3>
5529
  <ul>
5530
+ <li>Determinant = area/volume scaling factor.</li>
5531
+ <li>2x2 formula: $ad - bc$.</li>
5532
+ <li>3x3 requires expansion by minors (or Sarrus' rule).</li>
5533
+ <li>$\det = 0$ means the matrix cannot be inverted.</li>
5534
  </ul>
5535
  </div>
5536
  </section>
 
5974
  <section class="topic-section" id="topic-60" data-subject="calculus" style="display: none;">
5975
  <div class="topic-header">
5976
  <span class="topic-number">Topic 60</span>
5977
+ <h2>📐 Derivative Formulas</h2>
5978
+ <p class="topic-subtitle">The toolkit for finding rates of change</p>
5979
  </div>
5980
 
5981
+ <div class="content-card">
5982
+ <h3>The Shortcut Rules</h3>
5983
+ <p>Instead of calculating the limit for every function, we use these standard rules derived once and
5984
+ for all.</p>
5985
 
5986
+ <div class="two-column">
5987
+ <div class="column">
5988
+ <h4 style="color: #64ffda;">Basic Rules</h4>
5989
+ <ul style="list-style: none; padding: 0;">
5990
+ <li><strong>Constant:</strong> $\frac{d}{dx}[c] = 0$</li>
5991
+ <li><strong>Power Rule:</strong> $\frac{d}{dx}[x^n] = nx^{n-1}$</li>
5992
+ <li><strong>Sum Rule:</strong> $\frac{d}{dx}[f + g] = f' + g'$</li>
5993
+ </ul>
 
5994
  </div>
5995
+ <div class="column">
5996
+ <h4 style="color: #ff6b6b;">Special Rules</h4>
5997
+ <ul style="list-style: none; padding: 0;">
5998
+ <li><strong>Exponential:</strong> $\frac{d}{dx}[e^x] = e^x$</li>
5999
+ <li><strong>Logarithm:</strong> $\frac{d}{dx}[\ln(x)] = \frac{1}{x}$</li>
6000
+ <li><strong>Trig (sin):</strong> $\frac{d}{dx}[\sin(x)] = \cos(x)$</li>
6001
+ <li><strong>Trig (cos):</strong> $\frac{d}{dx}[\cos(x)] = -\sin(x)$</li>
6002
+ </ul>
 
 
 
6003
  </div>
6004
+ </div>
6005
+ </div>
6006
 
6007
+ <div class="content-card">
6008
+ <h3>✍️ Worked Example - Step by Step</h3>
6009
+ <p><strong>Problem:</strong> Find the derivative of $f(x) = 3x^4 - 2x^2 + 5x - 7$</p>
 
 
 
 
 
 
 
6010
 
6011
+ <div class="solution-step">
6012
+ <div class="step-number">Step 1:</div>
6013
+ <div class="step-content">
6014
+ <p>Apply the power rule to each term individually.</p>
6015
+ <div
6016
+ style="background: rgba(255,255,255,0.05); padding: 15px; border-radius: 8px; font-family: 'JetBrains Mono';">
6017
+ $\frac{d}{dx}(3x^4) = 3 \cdot 4x^3 = 12x^3$<br>
6018
+ $\frac{d}{dx}(-2x^2) = -2 \cdot 2x^1 = -4x$<br>
6019
+ $\frac{d}{dx}(5x) = 5 \cdot 1x^0 = 5$<br>
6020
+ $\frac{d}{dx}(-7) = 0$
6021
  </div>
6022
  </div>
6023
  </div>
6024
 
6025
+ <div class="solution-step">
6026
+ <div class="step-number">Step 2:</div>
6027
+ <div class="step-content">
6028
+ <p>Combine the results.</p>
6029
+ <div class="final-answer">
6030
+ $f'(x) = 12x^3 - 4x + 5$
6031
+ </div>
6032
  </div>
6033
  </div>
6034
  </div>
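The answer f'(x) = 12x^3 - 4x + 5 can be sanity-checked numerically with a central-difference approximation. A self-contained sketch in plain Python (no extra libraries), not part of the page's code:

```python
def f(x):
    return 3 * x**4 - 2 * x**2 + 5 * x - 7

def f_prime(x):
    # Result from the worked example above
    return 12 * x**3 - 4 * x + 5

# The central difference (f(x+h) - f(x-h)) / 2h approximates
# the true derivative with error of order h^2
h = 1e-6
for x in (-1.0, 0.5, 2.0):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - f_prime(x)) < 1e-4
```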
 
6036
  <div class="summary-card">
6037
  <h3>🎯 Key Takeaways</h3>
6038
  <ul>
6039
+ <li>$e^x$ is unique because it is its own slope.</li>
6040
+ <li>The derivative of a constant is 0 (it doesn't change).</li>
6041
+ <li>Memory trick: derivatives of the 'co' functions ($\cos$, $\cot$, $\csc$) always carry a negative
6042
+ sign.</li>
6043
  </ul>
6044
  </div>
6045
  </section>
 
8050
 
8051
  <!-- MACHINE LEARNING ALGORITHMS START HERE -->
8052
 
8053
+ <!-- ML-0: Google Machine Learning Crash Course -->
8054
+ <section class="topic-section ml-section" id="ml-topic-0" data-subject="machine-learning"
8055
+ style="display: none;">
8056
+ <div class="topic-header">
8057
+ <span class="topic-number ml-resource" style="background: var(--primary); color: white;">Learning
8058
+ Resource</span>
8059
+ <h2>🚀 Google ML Crash Course</h2>
8060
+ <p class="topic-subtitle">Google's fast-paced introduction to machine learning</p>
8061
+ </div>
8062
+
8063
+ <div class="content-card">
8064
+ <h3>📚 Overview</h3>
8065
+ <p>The <strong>Machine Learning Crash Course (MLCC)</strong> is a self-paced guide developed by
8066
+ Google engineers. It is designed to help you learn the core concepts of machine learning through
8067
+ a mix of theory and practical exercises.</p>
8068
+ <p><strong>Perfect for:</strong> Beginners looking for a structured path and developers wanting to
8069
+ apply ML using TensorFlow.</p>
8070
+ </div>
8071
+
8072
+ <div class="callout-box insight">
8073
+ <div class="callout-header">💡 Why Google MLCC?</div>
8074
+ <p>It includes over 25 lessons, 15+ hours of content, and hands-on exercises using
8075
+ <strong>Colab</strong> and <strong>TensorFlow</strong>. It's a gold standard for practical
8076
+ machine learning education.
8077
+ </p>
8078
+ </div>
8079
+
8080
+ <div class="content-card">
8081
+ <h3>🔗 Key Links</h3>
8082
+ <ul class="use-case-list" style="list-style: none; padding: 0;">
8083
+ <li style="margin-bottom: 12px;">🌍 <a
8084
+ href="https://developers.google.com/machine-learning/crash-course" target="_blank"
8085
+ style="color: var(--accent); font-weight: bold; text-decoration: underline;">Official ML
8086
+ Crash Course ↗</a></li>
8087
+ <li style="margin-bottom: 12px;">📖 <a
8088
+ href="https://developers.google.com/machine-learning/glossary" target="_blank"
8089
+ style="color: var(--accent); text-decoration: underline;">ML Glossary & Terms ↗</a></li>
8090
+ <li>🛠️ <a href="https://developers.google.com/machine-learning/problem-framing" target="_blank"
8091
+ style="color: var(--accent); text-decoration: underline;">Problem Framing Guide ↗</a>
8092
+ </li>
8093
+ </ul>
8094
+ </div>
8095
+
8096
+ <div class="summary-card">
8097
+ <h3>🎯 Recommended Steps</h3>
8098
+ <ul>
8099
+ <li>Understand "Loss" and "Gradient Descent" first.</li>
8100
+ <li>Follow the coding exercises in Google Colab.</li>
8101
+ <li>Pay close attention to "Feature Engineering" (Crucial for DS!).</li>
8102
+ <li>Complete the "ML Engineering" modules for production tips.</li>
8103
+ </ul>
8104
+ </div>
8105
+ </section>
8106
+
8107
  <!-- ML-1: Linear Regression -->
8108
  <section class="topic-section ml-section" id="ml-topic-1" data-subject="machine-learning"
8109
  style="display: none;">
 
10344
  <p>Binary classification using sigmoid: P(y=1) = 1/(1+e^(-z)) where z = β₀+β₁x. Despite its name, it's
10345
  for classification!</p>
10346
  </div>
10347
+
10348
+ <div class="worked-example-section">
+ <h3>📝 Paper & Pen: How to Solve Manually</h3>
+ <div class="example-problem">
+ <h4>Problem:</h4>
+ <p class="problem-statement">Given a model with Intercept (β₀) = -2.0 and Slope (β₁) = 0.5,
+ predict if a student passes (y=1) if they study for 6 hours (x=6).</p>
+ </div>
+
+ <div class="example-solution">
+ <h4>Solution Steps:</h4>
+ <div class="solution-step">
+ <div class="step-number">Step 1:</div>
+ <div class="step-content">
+ <p class="step-description">Calculate Log-Odds (z)</p>
+ <div class="step-work">
+ <code>z = β₀ + β₁x</code><br>
+ <code>z = -2.0 + 0.5(6)</code><br>
+ <code>z = -2.0 + 3.0 = 1.0</code>
+ </div>
+ </div>
+ </div>
+
+ <div class="solution-step">
+ <div class="step-number">Step 2:</div>
+ <div class="step-content">
+ <p class="step-description">Apply Sigmoid Function</p>
+ <div class="step-work">
+ <code>P(y=1) = 1 / (1 + e^(-z))</code><br>
+ <code>P(y=1) = 1 / (1 + e^(-1))</code><br>
+ <code>P(y=1) = 1 / (1 + 0.368) ≈ 0.731</code>
+ </div>
+ <p class="step-explanation">The probability of passing is 73.1%.</p>
+ </div>
+ </div>
+
+ <div class="solution-step">
+ <div class="step-number">Step 3:</div>
+ <div class="step-content">
+ <p class="step-description">Apply Decision Threshold</p>
+ <div class="step-work">
+ <code>Threshold = 0.5</code><br>
+ <code>0.731 > 0.5 → Classify as 1 (Pass)</code>
+ </div>
+ </div>
+ </div>
+
+ <div class="final-answer">
+ <strong>✓ Final Result:</strong>
+ <span class="answer-highlight">Prediction: PASS (P ≈ 0.73)</span>
+ </div>
+ </div>
+ </div>
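The hand calculation above can be verified with a short Python sketch (the coefficients β₀ = -2.0 and β₁ = 0.5 are the illustrative values from the worked example, not a trained model):

```python
import math

# Coefficients from the worked example above (illustrative values)
b0, b1 = -2.0, 0.5
x = 6  # hours studied

z = b0 + b1 * x             # log-odds: -2.0 + 0.5 * 6 = 1.0
p = 1 / (1 + math.exp(-z))  # sigmoid
prediction = int(p > 0.5)   # apply the 0.5 decision threshold

print(z, round(p, 3), prediction)  # 1.0 0.731 1
```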
  <div class="content-card">
  <h3>💻 Python Implementation</h3>
  <div class="code-block">

  <p>SVM finds the hyperplane that maximally separates the classes. It uses support vectors (the closest points) and
  the kernel trick for non-linear boundaries.</p>
  </div>
+
+ <div class="worked-example-section">
+ <h3>📝 Paper & Pen: Distance to Hyperplane</h3>
+ <div class="example-problem">
+ <h4>Problem:</h4>
+ <p class="problem-statement">Given a 2D decision boundary (hyperplane) defined by:
+ <br><strong>Equation: 3x₁ + 4x₂ - 10 = 0</strong>
+ <br>Calculate the distance of point <strong>P(4, 5)</strong> to this boundary. Is it on the
+ "positive" or "negative" side?
+ </p>
+ </div>
+
+ <div class="example-solution">
+ <h4>Solution Steps:</h4>
+ <div class="solution-step">
+ <div class="step-number">Step 1:</div>
+ <div class="step-content">
+ <p class="step-description">Plug the Point into the Equation</p>
+ <div class="step-work">
+ <code>Value = 3(4) + 4(5) - 10</code><br>
+ <code>Value = 12 + 20 - 10 = 22</code>
+ </div>
+ <p class="step-explanation">Since 22 > 0, the point is on the <strong>positive
+ side</strong> of the hyperplane.</p>
+ </div>
+ </div>
+
+ <div class="solution-step">
+ <div class="step-number">Step 2:</div>
+ <div class="step-content">
+ <p class="step-description">Calculate the Distance Formula Numerator</p>
+ <div class="step-work">
+ <code>Numerator = |Ax₁ + Bx₂ + C|</code><br>
+ <code>Numerator = |22| = 22</code>
+ </div>
+ </div>
+ </div>
+
+ <div class="solution-step">
+ <div class="step-number">Step 3:</div>
+ <div class="step-content">
+ <p class="step-description">Calculate the Denominator (Norm of the Weights)</p>
+ <div class="step-work">
+ <code>Denominator = √(A² + B²)</code><br>
+ <code>Denominator = √(3² + 4²) = √25 = 5</code>
+ </div>
+ </div>
+ </div>
+
+ <div class="final-answer">
+ <strong>✓ Final Result:</strong>
+ <span class="answer-highlight">Distance = 22 / 5 = 4.4 units</span>
+ </div>
+ </div>
+ </div>
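The same distance calculation, as a quick verification sketch with the hyperplane and point from the worked example:

```python
import math

# Hyperplane and point from the worked example: 3x1 + 4x2 - 10 = 0, P(4, 5)
A, B, C = 3, 4, -10
x1, x2 = 4, 5

value = A * x1 + B * x2 + C                     # signed side indicator: 22 (> 0 means positive side)
distance = abs(value) / math.sqrt(A**2 + B**2)  # |22| / sqrt(3^2 + 4^2) = 22 / 5

print(value, distance)  # 22 4.4
```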
  <div class="content-card">
  <h3>💻 Python Implementation</h3>
  <div class="code-block">

  <p>Applies Bayes' Theorem with the "naive" independence assumption: P(y|x) ∝ P(y)ΠP(xᵢ|y). Extremely
  fast for text classification.</p>
  </div>
+
+ <div class="worked-example-section">
+ <h3>📝 Paper & Pen: How to Solve Manually</h3>
+ <div class="example-problem">
+ <h4>Problem:</h4>
+ <p class="problem-statement">Classify whether an email is <strong>Spam</strong> or
+ <strong>Ham</strong> based on the word "Free".
+ <br>Data:</p>
+ <ul>
+ <li>Total Emails: 100</li>
+ <li>Spam: 20 (10 contain "Free")</li>
+ <li>Ham: 80 (5 contain "Free")</li>
+ </ul>
+ </div>
+
+ <div class="example-solution">
+ <h4>Solution Steps:</h4>
+ <div class="solution-step">
+ <div class="step-number">Step 1:</div>
+ <div class="step-content">
+ <p class="step-description">Calculate Priors P(C)</p>
+ <div class="step-work">
+ <code>P(Spam) = 20/100 = 0.2</code><br>
+ <code>P(Ham) = 80/100 = 0.8</code>
+ </div>
+ </div>
+ </div>
+
+ <div class="solution-step">
+ <div class="step-number">Step 2:</div>
+ <div class="step-content">
+ <p class="step-description">Calculate Likelihoods P(Free|C)</p>
+ <div class="step-work">
+ <code>P(Free|Spam) = 10/20 = 0.5</code><br>
+ <code>P(Free|Ham) = 5/80 = 0.0625</code>
+ </div>
+ </div>
+ </div>
+
+ <div class="solution-step">
+ <div class="step-number">Step 3:</div>
+ <div class="step-content">
+ <p class="step-description">Calculate Posterior Numerators</p>
+ <div class="step-work">
+ <code>Score(Spam) = P(Spam) × P(Free|Spam) = 0.2 × 0.5 = 0.10</code><br>
+ <code>Score(Ham) = P(Ham) × P(Free|Ham) = 0.8 × 0.0625 = 0.05</code>
+ </div>
+ </div>
+ </div>
+
+ <div class="solution-step">
+ <div class="step-number">Step 4:</div>
+ <div class="step-content">
+ <p class="step-description">Normalize & Compare</p>
+ <div class="step-work">
+ <code>Total = 0.10 + 0.05 = 0.15</code><br>
+ <code>P(Spam|Free) = 0.10 / 0.15 = 0.667 (67%)</code><br>
+ <code>P(Ham|Free) = 0.05 / 0.15 = 0.333 (33%)</code>
+ </div>
+ </div>
+ </div>
+
+ <div class="final-answer">
+ <strong>✓ Final Result:</strong>
+ <span class="answer-highlight">Classify as SPAM (67% probability)</span>
+ </div>
+ </div>
+ </div>
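The four steps above can be replayed in a few lines of Python (counts taken from the worked example):

```python
# Email counts from the worked example
total, spam, ham = 100, 20, 80
spam_with_free, ham_with_free = 10, 5

p_spam, p_ham = spam / total, ham / total  # priors: 0.2 and 0.8
p_free_spam = spam_with_free / spam        # likelihood P(Free|Spam) = 0.5
p_free_ham = ham_with_free / ham           # likelihood P(Free|Ham) = 0.0625

score_spam = p_spam * p_free_spam          # posterior numerator: 0.10
score_ham = p_ham * p_free_ham             # posterior numerator: 0.05
p_spam_given_free = score_spam / (score_spam + score_ham)  # normalize

print(round(p_spam_given_free, 3))  # 0.667
```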
  <div class="content-card">
  <h3>💻 Python Implementation</h3>
  <div class="code-block">

  <p>Layers of connected neurons. Each neuron computes z = Σwᵢxᵢ + b, then applies an activation function σ(z). Trained
  via backpropagation and gradient descent.</p>
  </div>
+
+ <div class="worked-example-section">
+ <h3>📝 Paper & Pen: Single Neuron Forward Pass</h3>
+ <div class="example-problem">
+ <h4>Problem:</h4>
+ <p class="problem-statement">Calculate the output of a single neuron with 2 inputs.
+ <br>Inputs: x₁=0.5, x₂=1.0
+ <br>Weights: w₁=0.4, w₂=-0.2
+ <br>Bias: b=0.1
+ <br>Activation: ReLU (Output = max(0, z))
+ </p>
+ </div>
+
+ <div class="example-solution">
+ <h4>Solution Steps:</h4>
+ <div class="solution-step">
+ <div class="step-number">Step 1:</div>
+ <div class="step-content">
+ <p class="step-description">Calculate Weighted Sum (z)</p>
+ <div class="step-work">
+ <code>z = (x₁w₁ + x₂w₂) + b</code><br>
+ <code>z = (0.5 × 0.4 + 1.0 × -0.2) + 0.1</code><br>
+ <code>z = (0.2 - 0.2) + 0.1 = 0.1</code>
+ </div>
+ </div>
+ </div>
+
+ <div class="solution-step">
+ <div class="step-number">Step 2:</div>
+ <div class="step-content">
+ <p class="step-description">Apply Activation Function (ReLU)</p>
+ <div class="step-work">
+ <code>Output = f(z) = max(0, z)</code><br>
+ <code>Output = max(0, 0.1) = 0.1</code>
+ </div>
+ <p class="step-explanation">If z were negative, the output would be 0 (a deactivated
+ neuron).</p>
+ </div>
+ </div>
+
+ <div class="final-answer">
+ <strong>✓ Final Result:</strong>
+ <span class="answer-highlight">Neuron Output = 0.1</span>
+ </div>
+ </div>
+ </div>
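The forward pass above, as a minimal sketch with the example's inputs, weights, and bias:

```python
def relu(z):
    return max(0.0, z)

# Inputs, weights, and bias from the worked example
x = [0.5, 1.0]
w = [0.4, -0.2]
b = 0.1

z = sum(xi * wi for xi, wi in zip(x, w)) + b  # (0.2 - 0.2) + 0.1 = 0.1
output = relu(z)

print(output)
```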
  <div class="content-card">
  <h3>💻 Python Implementation</h3>
  <div class="code-block">
ml_complete-all-topics/index.html CHANGED
@@ -602,6 +602,31 @@
  </div>
  </div>

  <a href="#algorithm-comparison" class="toc-link">📊 Algorithm Comparison</a>
  </nav>
  </aside>
@@ -6201,7 +6226,204 @@ plt.show()
  </div>
  </div>

- <!-- Section 18: Algorithm Comparison Tool -->
  <div class="section" id="algorithm-comparison">
  <div class="section-header">
  <h2><span class="badge" style="background: rgba(126, 240, 212, 0.3); color: #7ef0d4;">🔄

  </div>
  </div>

+ <div class="toc-category">
+ <div class="toc-category-header" data-category="nlp">
+ <span class="category-icon">🗣️</span>
+ <span class="category-title">NLP & GENAI</span>
+ <span class="category-toggle">▼</span>
+ </div>
+ <div class="toc-category-content" id="nlp-content">
+ <div class="toc-subcategory">
+ <div class="toc-subcategory-title">Basic NLP</div>
+ <a href="#nlp-preprocessing" class="toc-link toc-sub">Text Preprocessing</a>
+ <a href="#word-embeddings" class="toc-link toc-sub">Word Embeddings (Word2Vec)</a>
+ </div>
+ <div class="toc-subcategory">
+ <div class="toc-subcategory-title">Advanced NLP</div>
+ <a href="#rnn-lstm" class="toc-link toc-sub">RNN & LSTM</a>
+ <a href="#transformers" class="toc-link toc-sub">Transformers</a>
+ </div>
+ <div class="toc-subcategory">
+ <div class="toc-subcategory-title">Generative AI</div>
+ <a href="#genai-intro" class="toc-link toc-sub">GenAI & LLMs</a>
+ <a href="#vectordb-rag" class="toc-link toc-sub">VectorDB & RAG</a>
+ </div>
+ </div>
+ </div>
+
  <a href="#algorithm-comparison" class="toc-link">📊 Algorithm Comparison</a>
  </nav>
  </aside>
 
  </div>
  </div>

+ <!-- ========================================
+ NLP & GENAI SECTIONS (Module 13, 16-19)
+ ======================================== -->
+
+ <!-- Section 14: NLP Preprocessing -->
+ <div class="section" id="nlp-preprocessing">
+ <div class="section-header">
+ <h2><span class="badge" style="background: rgba(232, 238, 246, 0.3); color: #e8eef6;">🗣️ NLP -
+ Basic</span> Text Preprocessing</h2>
+ <button class="section-toggle">▼</button>
+ </div>
+ <div class="section-body">
+ <p>Before machine learning models can process human language, the text must be cleaned and converted
+ into a numerical format. This process is called <strong>Text Preprocessing</strong>.</p>
+
+ <div class="info-card">
+ <div class="info-card-title">Core Techniques (Module 13)</div>
+ <ul class="info-card-list">
+ <li><strong>Tokenization:</strong> Splitting text into individual words or tokens.</li>
+ <li><strong>Stopwords Removal:</strong> Removing common words (the, is, at) that don't add
+ much meaning.</li>
+ <li><strong>Stemming/Lemmatization:</strong> Reducing words to their root form (e.g.,
+ "running" → "run").</li>
+ <li><strong>POS Tagging:</strong> Identifying grammatical parts of speech (nouns, verbs,
+ etc.).</li>
+ </ul>
+ </div>
+
+ <h3>Paper & Pen Example: Tokenization & Stopwords</h3>
+ <div class="step">
+ <div class="step-title">Step 1: Input Raw Text</div>
+ <div class="step-calculation">Text: "The quick brown fox is jumping over the lazy dog."</div>
+ </div>
+ <div class="step">
+ <div class="step-title">Step 2: Remove Stopwords (NLTK English list)</div>
+ <div class="step-calculation">Stopwords removed: "The", "is", "over", "the"
+ Remaining: "quick", "brown", "fox", "jumping", "lazy", "dog"</div>
+ </div>
+ <div class="step">
+ <div class="step-title">Step 3: Lemmatization</div>
+ <div class="step-calculation">"jumping" → "jump"
+ Root tokens: ["quick", "brown", "fox", "jump", "lazy", "dog"]</div>
+ </div>
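The three steps above can be sketched in plain Python. This is a toy version: the hand-rolled stopword set stands in for NLTK's English list (which requires a separate corpus download), and the "ing"-stripping rule is a stand-in for a real lemmatizer, good enough only for this example sentence:

```python
import re

# Tiny hand-rolled stopword list standing in for NLTK's English list
STOPWORDS = {"the", "is", "at", "a", "an", "over", "and", "of", "to"}

def preprocess(text):
    tokens = re.findall(r"[a-z]+", text.lower())        # tokenization
    tokens = [t for t in tokens if t not in STOPWORDS]  # stopword removal
    # toy "lemmatizer": strip a trailing "ing" (illustration only)
    return [t[:-3] if t.endswith("ing") else t for t in tokens]

print(preprocess("The quick brown fox is jumping over the lazy dog."))
# ['quick', 'brown', 'fox', 'jump', 'lazy', 'dog']
```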
+
+ <div class="callout info">
+ <div class="callout-title">💡 Why Does POS Tagging Matter?</div>
+ <div class="callout-content">
+ Part-of-speech tagging helps models understand context. For example, "bank" can be a
+ <strong>noun</strong> (river bank) or a <strong>verb</strong> (to bank on someone).
+ </div>
+ </div>
+ </div>
+ </div>
+
+ <!-- Section 15: Word Embeddings -->
+ <div class="section" id="word-embeddings">
+ <div class="section-header">
+ <h2><span class="badge" style="background: rgba(232, 238, 246, 0.3); color: #e8eef6;">🗣️ NLP -
+ Embeddings</span> Word Embeddings (Word2Vec)</h2>
+ <button class="section-toggle">▼</button>
+ </div>
+ <div class="section-body">
+ <p>Word Embeddings are dense vector representations of words in which words with similar meanings
+ sit closer together in vector space. <strong>Word2Vec</strong> is a seminal algorithm
+ for learning them.</p>
+
+ <div class="formula">
+ <strong>Word Similarity (Cosine Similarity):</strong>
+ cos(θ) = (A · B) / (||A|| ||B||)
+ <br><small>Measures the angle between two word vectors. Closer to 1 = more similar.</small>
+ </div>
+
+ <h3>Working Intuition</h3>
+ <p>Word2Vec learns context by predicting a word from its neighbors (CBOW) or predicting the neighbors
+ from a word (Skip-gram). This captures semantic relationships such as:</p>
+ <div class="callout success">
+ <div class="callout-title">✓ Vector Mathematics</div>
+ <div class="callout-content">
+ <strong>King - Man + Woman ≈ Queen</strong>
+ <br>The model learns that the relational "distance" between King and Man is similar to that
+ between Queen and Woman.
+ </div>
+ </div>
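The cosine-similarity formula above is easy to implement directly. A minimal sketch with made-up toy 3-dimensional "embeddings" (real Word2Vec vectors typically have 100-300 dimensions):

```python
import math

def cosine(a, b):
    # cos(theta) = (A . B) / (||A|| ||B||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors (illustrative values, not learned embeddings)
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.15]
apple = [0.1, 0.2, 0.9]

print(cosine(king, queen) > cosine(king, apple))  # True: "king" is closer to "queen"
```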
+
+ <div class="info-card">
+ <div class="info-card-title">Implementation Resources</div>
+ <ul class="info-card-list">
+ <li><strong>Google Colab:</strong> <a
+ href="https://colab.research.google.com/drive/1cIg1tqNvU0Cl9E7-grrF7l8FQj-_efxF"
+ target="_blank" style="color: #7ef0d4;">NLP for ML Practice Notebook</a></li>
+ <li><strong>GitHub Repo:</strong> <a href="https://github.com/sourangshupal/NLP-for-ML"
+ target="_blank" style="color: #7ef0d4;">sourangshupal/NLP-for-ML</a></li>
+ <li><strong>Kaggle Dataset:</strong> <a
+ href="https://www.kaggle.com/datasets/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews"
+ target="_blank" style="color: #7ef0d4;">IMDB 50K Movie Reviews</a></li>
+ <li><strong>Research Paper:</strong> <a href="https://arxiv.org/abs/1301.3781"
+ target="_blank" style="color: #7ef0d4;">Word2Vec (Mikolov et al.)</a></li>
+ </ul>
+ </div>
+ </div>
+ </div>
+
+ <!-- Section 16: Advanced NLP -->
+ <div class="section" id="rnn-lstm">
+ <div class="section-header">
+ <h2><span class="badge" style="background: rgba(106, 169, 255, 0.3); color: #6aa9ff;">🧠
+ Strategic</span> RNN & LSTM</h2>
+ <button class="section-toggle">▼</button>
+ </div>
+ <div class="section-body">
+ <p>Recurrent Neural Networks (RNNs) are designed for sequential data (like text). They have "memory"
+ that allows information to persist across time steps.</p>
+ <div class="callout warning">
+ <div class="callout-title">⚠️ Vanishing Gradient Problem</div>
+ <div class="callout-content">
+ Standard RNNs struggle to remember long-term dependencies. <strong>LSTM (Long Short-Term
+ Memory)</strong> units were designed to fix this using "gates" that control information
+ flow.
+ </div>
+ </div>
+ <h3>LSTM Architecture</h3>
+ <p>An LSTM cell has three main gates:</p>
+ <ul>
+ <li><strong>Forget Gate:</strong> Decides which information to discard.</li>
+ <li><strong>Input Gate:</strong> Decides which new information to store.</li>
+ <li><strong>Output Gate:</strong> Decides which part of the memory to output.</li>
+ </ul>
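The three gates can be sketched as one step of a scalar LSTM cell. This is a minimal illustration: the gate weights (0.5, 0.1, 0.6, etc.) are made-up numbers, not trained values, and a real LSTM operates on vectors with weight matrices:

```python
import math

def sigmoid(v):
    return 1 / (1 + math.exp(-v))

# One step of a toy scalar LSTM cell (illustrative weights only)
def lstm_step(x, h_prev, c_prev):
    f = sigmoid(0.5 * x + 0.1 * h_prev)  # forget gate: how much old memory to keep
    i = sigmoid(0.6 * x + 0.2 * h_prev)  # input gate: how much new info to store
    o = sigmoid(0.4 * x + 0.3 * h_prev)  # output gate: how much memory to expose
    c_candidate = math.tanh(0.8 * x)     # candidate memory content
    c = f * c_prev + i * c_candidate     # new cell state
    h = o * math.tanh(c)                 # new hidden state
    return h, c

h, c = lstm_step(x=1.0, h_prev=0.0, c_prev=0.0)
print(round(h, 3), round(c, 3))
```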
+ </div>
+ </div>
+
+ <!-- Section 17: Transformers -->
+ <div class="section" id="transformers">
+ <div class="section-header">
+ <h2><span class="badge" style="background: rgba(106, 169, 255, 0.3); color: #6aa9ff;">🧠
+ Strategic</span> Transformers</h2>
+ <button class="section-toggle">▼</button>
+ </div>
+ <div class="section-body">
+ <p>The Transformer architecture (from "Attention is All You Need") revolutionized NLP by removing
+ recurrence and using <strong>Self-Attention</strong> mechanisms.</p>
+ <div class="info-card">
+ <div class="info-card-title">Key Advantages</div>
+ <ul class="info-card-list">
+ <li><strong>Parallelization:</strong> Unlike RNNs, Transformers can process entire sentences
+ at once.</li>
+ <li><strong>Global Context:</strong> Self-attention allows the model to look at every word
+ in a sentence simultaneously.</li>
+ <li><strong>Foundation for LLMs:</strong> This architecture powers BERT, GPT, and Claude.
+ </li>
+ </ul>
+ </div>
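Self-attention reduces to one formula, softmax(QK<sup>T</sup>/√d<sub>k</sub>)V. A minimal NumPy sketch with random toy "token" embeddings (single head, no learned projection matrices, which a real Transformer would add):

```python
import numpy as np

# Scaled dot-product self-attention: softmax(Q K^T / sqrt(d_k)) V
def self_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

# Three "tokens" with 4-dimensional embeddings; Q = K = V = X for self-attention
rng = np.random.default_rng(42)
X = rng.normal(size=(3, 4))
out, weights = self_attention(X, X, X)

print(weights.shape)  # (3, 3): every token attends to every token
```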
+ </div>
+ </div>
+
+ <!-- Section 18: Generative AI & LLMs -->
+ <div class="section" id="genai-intro">
+ <div class="section-header">
+ <h2><span class="badge" style="background: rgba(255, 140, 106, 0.3); color: #ff8c6a;">🪄
+ GenAI</span> Generative AI & LLMs</h2>
+ <button class="section-toggle">▼</button>
+ </div>
+ <div class="section-body">
+ <p>Generative AI refers to models that can create new content (text, images, code). Large Language
+ Models (LLMs) are the pinnacle of this for text.</p>
+ <h3>Training Paradigm</h3>
+ <ol>
+ <li><strong>Pre-training:</strong> Predict the next word on massive datasets (Internet scale).</li>
+ <li><strong>Fine-tuning:</strong> Adapting the model to specific tasks (e.g., chat, coding).
+ </li>
+ <li><strong>RLHF:</strong> Reinforcement Learning from Human Feedback to align model behavior.
+ </li>
+ </ol>
+ </div>
+ </div>
+
+ <!-- Section 19: VectorDB & RAG -->
+ <div class="section" id="vectordb-rag">
+ <div class="section-header">
+ <h2><span class="badge" style="background: rgba(255, 140, 106, 0.3); color: #ff8c6a;">🪄
+ GenAI</span> VectorDB & RAG</h2>
+ <button class="section-toggle">▼</button>
+ </div>
+ <div class="section-body">
+ <p>To ground LLMs in private or fresh data, we use <strong>Retrieval-Augmented Generation
+ (RAG)</strong>.</p>
+ <div class="step">
+ <div class="step-title">The RAG Workflow</div>
+ <div class="step-calculation">1. <strong>Chunk:</strong> Break documents into small pieces.
+ 2. <strong>Embed:</strong> Convert the chunks into vectors.
+ 3. <strong>Store:</strong> Save the vectors in a <strong>Vector Database</strong> (Pinecone,
+ Milvus, Chroma).
+ 4. <strong>Retrieve:</strong> Find the relevant chunks for a user query.
+ 5. <strong>Generate:</strong> Pass the chunks to the LLM as context for the answer.</div>
+ </div>
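The five-step workflow above can be sketched end to end in plain Python. In this toy version, bag-of-words counts and cosine similarity stand in for a real embedding model and a vector database, and the final "generate" step is just a comment:

```python
import math
from collections import Counter

def embed(text):
    # toy "embedding": sparse bag-of-words counts (a real system uses an embedding model)
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(n * n for n in a.values()))
    norm_b = math.sqrt(sum(n * n for n in b.values()))
    return dot / (norm_a * norm_b)

# Steps 1-3: chunk, embed, store
chunks = [
    "LSTMs use gates to control information flow",
    "Transformers rely on self-attention",
    "Naive Bayes is fast for text classification",
]
store = [(chunk, embed(chunk)) for chunk in chunks]

# Step 4: retrieve the most relevant chunk for the user query
query = embed("how does self-attention work")
best_chunk, _ = max(store, key=lambda item: cosine(query, item[1]))

# Step 5: a real system would now pass best_chunk to the LLM as context
print(best_chunk)  # Transformers rely on self-attention
```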
+ </div>
+ </div>
+
+ <!-- Section 20: Algorithm Comparison Tool -->
  <div class="section" id="algorithm-comparison">
  <div class="section-header">
  <h2><span class="badge" style="background: rgba(126, 240, 212, 0.3); color: #7ef0d4;">🔄