{% extends "layout.html" %}

{% block content %}
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Study Guide: Hierarchical Clustering</title>
    <style>
        /* General Body Styles */
        body {
            background-color: #ffffff; /* White background */
            color: #000000; /* Black text */
            font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;
            font-weight: normal;
            line-height: 1.8;
            margin: 0;
            padding: 20px;
        }

        /* Container for centering content */
        .container {
            max-width: 800px;
            margin: 0 auto;
            padding: 20px;
        }

        /* Headings */
        h1, h2, h3 {
            color: #000000;
            border: none;
            font-weight: bold;
        }

        h1 {
            text-align: center;
            border-bottom: 3px solid #000;
            padding-bottom: 10px;
            margin-bottom: 30px;
            font-size: 2.5em;
        }

        h2 {
            font-size: 1.8em;
            margin-top: 40px;
            border-bottom: 1px solid #ddd;
            padding-bottom: 8px;
        }

        h3 {
            font-size: 1.3em;
            margin-top: 25px;
        }

        /* Main words are even bolder */
        strong {
            font-weight: 900;
        }

        /* Paragraphs and list items with a line below */
        p, li {
            font-size: 1.1em;
            border-bottom: 1px solid #e0e0e0; /* Light gray line below each item */
            padding-bottom: 10px; /* Space between text and the line */
            margin-bottom: 10px; /* Space below the line */
        }

        /* Remove bottom border from the last item in a list for a cleaner look */
        li:last-child {
            border-bottom: none;
        }

        /* Ordered lists */
        ol {
            list-style-type: decimal;
            padding-left: 20px;
        }

        ol li {
            padding-left: 10px;
        }

        /* Unordered lists */
        ul {
            list-style-type: none;
            padding-left: 0;
        }

        ul li::before {
            content: "•";
            color: #000;
            font-weight: bold;
            display: inline-block;
            width: 1em;
            margin-left: 0;
        }

        /* Code block styling */
        pre {
            background-color: #f4f4f4;
            border: 1px solid #ddd;
            border-radius: 5px;
            padding: 15px;
            white-space: pre-wrap;
            word-wrap: break-word;
            font-family: "Courier New", Courier, monospace;
            font-size: 0.95em;
            font-weight: normal;
            color: #333;
            border-bottom: none;
        }

        /* Story block styling */
        .story {
            background-color: #f9f8fa;
            border-left: 4px solid #8a2be2; /* Purple accent for clustering */
            margin: 15px 0;
            padding: 10px 15px;
            font-style: italic;
            color: #555;
            font-weight: normal;
            border-bottom: none;
        }

        .story p, .story li {
            border-bottom: none;
        }

        .example {
            background-color: #e9ecef;
            padding: 15px;
            margin: 15px 0;
            border-radius: 5px;
            border-left: 4px solid #17a2b8;
        }

        .example p, .example li {
            border-bottom: none !important;
        }

        /* Table Styling */
        table {
            width: 100%;
            border-collapse: collapse;
            margin: 25px 0;
        }
        th, td {
            border: 1px solid #ddd;
            padding: 12px;
            text-align: left;
        }
        th {
            background-color: #f2f2f2;
            font-weight: bold;
        }

        /* --- Mobile Responsive Styles --- */
        @media (max-width: 768px) {
            body, .container {
                padding: 10px;
            }
            h1 { font-size: 2em; }
            h2 { font-size: 1.5em; }
            h3 { font-size: 1.2em; }
            p, li { font-size: 1em; }
            pre { font-size: 0.85em; }
            table, th, td { font-size: 0.9em; }
        }
    </style>
</head>
<body>

    <div class="container">
        <h1>🌳 Study Guide: Hierarchical Clustering</h1>


        <!-- button -->
        <div>
            <!-- Note: the playSound() handler below expects an <audio id="clickSound"> element,
                 which is not part of this snippet and is assumed to be provided elsewhere
                 (e.g., in the shared layout). Browsers may block audio autoplay if the user
                 hasn't interacted with the document first, but since this play is triggered
                 by a click, it should work fine. -->
            <!-- Tailwind classes: the hard shadow (shadow-[0_8px_0_rgb(29,78,216)]) creates the 3D
                 effect; active:shadow-none and active:translate-y-[8px] "press" the button down. -->
            <a
              href="/hierarchical-three"
              target="_blank"
              onclick="playSound()"
              class="cursor-pointer inline-block relative bg-blue-500 text-white font-bold py-4 px-8 rounded-xl text-2xl transition-all duration-150 shadow-[0_8px_0_rgb(29,78,216)] active:shadow-none active:translate-y-[8px]">
              Tap Me!
            </a>
        </div>

        <script>
            function playSound() {
                const audio = document.getElementById("clickSound");
                if (audio) {
                    audio.currentTime = 0;
                    audio.play().catch(e => console.log("Audio play failed:", e));
                }
            }
        </script>
        <!-- button -->

        <h2>🔹 Core Concepts</h2>
        <div class="story">
            <p><strong>Story-style intuition: Organizing a Family Reunion</strong></p>
            <p>Imagine you are organizing a big family reunion. You start by grouping the closest relatives: siblings form small groups. Then, you merge those groups with their cousins. Next, you merge those larger groups with their aunts and uncles. You keep doing this until the entire extended family is in one giant group. Hierarchical clustering works just like this: it builds a family tree, or a <strong>dendrogram</strong>, showing how everyone is related, from the closest individuals to the entire family.</p>
        </div>
        
        <h3>What is Clustering?</h3>
        <p>In machine learning, <strong>clustering</strong> is an <strong>unsupervised learning</strong> technique. This means you have data, but you don't have pre-defined labels for it. The goal of clustering is to find natural groupings (or "clusters") in the data, where points within the same group are more similar to each other than to those in other groups.</p>
        <div class="example">
            <p><strong>Example:</strong> You have a list of customers and their purchasing habits (e.g., spending amount, frequency of visits). Clustering would help you automatically identify groups like "high-spending loyal customers," "occasional bargain hunters," and "new visitors" without you having to define these groups first.</p>
        </div>
        
        <h3>Definition of Hierarchical Clustering</h3>
        <p><strong>Hierarchical Clustering</strong> builds a hierarchy of clusters, either from the bottom up or the top down. The result is a tree-like structure called a <strong>dendrogram</strong>, which shows the entire "family tree" of how the groups were formed.</p>
        
        <ul>
            <li><strong>Agglomerative (Bottom-Up):</strong> Starts with each data point as its own cluster, then iteratively merges the closest pairs of clusters until only one cluster remains. 
                <div class="example" style="margin-top:10px;">
                    <p><strong>Example:</strong> We start with four friends, {A}, {B}, {C}, {D}. The algorithm first merges the two most similar, say A and B, to get {A, B}, {C}, {D}. Then it might merge C and D to get {A, B}, {C, D}. Finally, it merges these two groups into {A, B, C, D}.</p>
                </div>
            </li>
            <li><strong>Divisive (Top-Down):</strong> Starts with all data points in one giant cluster, then recursively splits them into smaller clusters.
                <div class="example" style="margin-top:10px;">
                    <p><strong>Example:</strong> We start with one big group {A, B, C, D}. The algorithm first splits it into the two most different subgroups, for instance {A, B} and {C, D}. Then it might split {A, B} into {A} and {B}, completing the process.</p>
                </div>
            </li>
        </ul>
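        <p>The bottom-up (agglomerative) variant is the one implemented in scikit-learn; there is no built-in divisive clusterer. As a minimal sketch of the friends example above, using illustrative 1-D positions so that A and B sit close together and C and D sit close together:</p>
        <pre><code>
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Toy positions for the four friends A, B, C, D (illustrative values)
X = np.array([[0.0], [1.0], [10.0], [11.0]])

# Bottom-up (agglomerative) clustering, stopped once two clusters remain
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print(labels)  # A and B share one label, C and D the other, e.g. [0 0 1 1]
        </code></pre>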

        <h2>🔹 Mathematical Foundation</h2>
        <h3>Distance Metrics (Measuring Point-to-Point Similarity)</h3>
        <ul>
            <li><strong>Euclidean Distance:</strong> The straight-line distance between two points. d(p, q) = sqrt((p1 - q1)^2 + (p2 - q2)^2)
                <div class="example" style="margin-top:10px;">
                    <p><strong>Example:</strong> If Friend A is at point (1, 2) and Friend B is at (4, 6), their Euclidean distance is sqrt((4-1)^2 + (6-2)^2) = sqrt(3^2 + 4^2) = sqrt(9 + 16) = sqrt(25) = 5.</p>
                </div>
            </li>
            <li><strong>Manhattan Distance:</strong> The distance as if traveling on a city grid (sum of absolute differences). d(p, q) = |p1 - q1| + |p2 - q2|
                <div class="example" style="margin-top:10px;">
                    <p><strong>Example:</strong> For Friend A (1, 2) and Friend B (4, 6), the Manhattan distance is |4-1| + |6-2| = 3 + 4 = 7.</p>
                </div>
            </li>
            <li><strong>Cosine Similarity:</strong> Measures the angle between two vectors. It's great for text analysis, where direction matters more than magnitude. (A quick numeric check of these metrics appears right after this list.)</li>
        </ul>
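        <p>As a quick check of the Euclidean and Manhattan numbers above, here is a minimal sketch using SciPy's distance functions on the same Friend A and Friend B coordinates:</p>
        <pre><code>
import numpy as np
from scipy.spatial.distance import euclidean, cityblock, cosine

a = np.array([1, 2])  # Friend A
b = np.array([4, 6])  # Friend B

print(euclidean(a, b))   # 5.0 -> sqrt(3^2 + 4^2)
print(cityblock(a, b))   # 7   -> |4-1| + |6-2|
print(1 - cosine(a, b))  # cosine similarity (SciPy's cosine() returns the cosine *distance*)
        </code></pre>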
        
        <h3>Linkage Criteria (Measuring Cluster-to-Cluster Similarity)</h3>
        <div class="story">
            <p>Once you have family groups, how do you decide which two groups should merge next? Do you connect them based on their two closest members (<strong>single linkage</strong>), their two most distant members (<strong>complete linkage</strong>), or the average distance between all their members (<strong>average linkage</strong>)?</p>
        </div>
        <ul>
            <li><strong>Single Linkage (The Optimist):</strong> The distance is the <strong>minimum</strong> distance between any two points in the different clusters. 
                <div class="example" style="margin-top:10px;"><p><strong>Example:</strong> Distance between {A, B} and {C} is the smaller of Distance(A, C) and Distance(B, C).</p></div></li>
            <li><strong>Complete Linkage (The Pessimist):</strong> The distance is the <strong>maximum</strong> distance between any two points. 
                <div class="example" style="margin-top:10px;"><p><strong>Example:</strong> Distance between {A, B} and {C} is the larger of Distance(A, C) and Distance(B, C).</p></div></li>
            <li><strong>Average Linkage (The Diplomat):</strong> The distance is the <strong>average</strong> of all pairwise distances between points in the two clusters. 
                <div class="example" style="margin-top:10px;"><p><strong>Example:</strong> Distance between {A, B} and {C} is (Distance(A, C) + Distance(B, C)) / 2.</p></div></li>
            <li><strong>Ward’s Method (The Team Builder):</strong> Merges the pair of clusters whose union causes the smallest increase in within-cluster variance. A popular and effective default that tends to create compact, spherical clusters. (The sketch after this list shows how the linkage criterion is chosen in code.)</li>
        </ul>
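        <p>In SciPy, the linkage criterion is simply a parameter of the <code>linkage</code> function. A minimal sketch (the toy data here is an illustrative assumption) showing how the choice changes the height of the final merge:</p>
        <pre><code>
import numpy as np
from scipy.cluster.hierarchy import linkage

# Toy data: two tight pairs of points, far apart from each other
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])

for method in ["single", "complete", "average", "ward"]:
    Z = linkage(X, method=method)
    # Each row of Z records one merge: [cluster_i, cluster_j, distance, new_size]
    print(f"{method:8s} final merge distance = {Z[-1, 2]:.2f}")
        </code></pre>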

        <h2>🔹 Algorithm Steps (Agglomerative)</h2>
        <div class="example">
            <p>Let's walk through a simple example with four points A, B, C, D. Their initial distance matrix (Euclidean) is:</p>
            <pre>      A   B   C   D
    A   0   2   6   10
    B   2   0   5   9
    C   6   5   0   4
    D   10  9   4   0</pre>
        </div>
        <ol>
            <li><strong>Initialization:</strong> We start with four clusters: {A}, {B}, {C}, {D}.</li>
            <li><strong>Merge 1:</strong> The smallest distance is 2 (between A and B). We merge them into a new cluster {A, B}. Our clusters are now {A, B}, {C}, {D}.</li>
            <li><strong>Update 1:</strong> We update the distance matrix using, for example, single linkage:
                <ul>
                    <li>dist({A,B}, C) = min(dist(A,C), dist(B,C)) = min(6, 5) = 5</li>
                    <li>dist({A,B}, D) = min(dist(A,D), dist(B,D)) = min(10, 9) = 9</li>
                </ul>
                New Matrix:
                <pre>          {A,B}   C   D
    {A,B}   0     5   9
    C       5     0   4
    D       9     4   0</pre>
            </li>
            <li><strong>Merge 2:</strong> The smallest distance is now 4 (between C and D). We merge them into {C, D}. Our clusters are now {A, B}, {C, D}.</li>
            <li><strong>Update 2:</strong> Update the final distance:
                 <ul>
                    <li>dist({A,B}, {C,D}) = min(dist(A,C), dist(A,D), dist(B,C), dist(B,D)) = min(6, 10, 5, 9) = 5</li>
                </ul>
            </li>
            <li><strong>Final Merge:</strong> We merge the last two clusters {A, B} and {C, D} at a distance of 5. The process is complete.</li>
        </ol>
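        <p>The same walk-through can be reproduced in code. Here is a small sketch (SciPy assumed) that feeds the distance matrix above to single-linkage clustering and prints the merge history:</p>
        <pre><code>
import numpy as np
from scipy.cluster.hierarchy import linkage

# Condensed form of the distance matrix above, in pdist order:
# (A,B), (A,C), (A,D), (B,C), (B,D), (C,D)
d = np.array([2.0, 6.0, 10.0, 5.0, 9.0, 4.0])

Z = linkage(d, method="single")
print(Z)
# Each row is one merge: [cluster_i, cluster_j, merge_distance, cluster_size]
# Row 1: points 0 (A) and 1 (B) merge at distance 2
# Row 2: points 2 (C) and 3 (D) merge at distance 4
# Row 3: clusters {A,B} and {C,D} merge at distance 5
        </code></pre>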

        <h2>🔹 The Dendrogram</h2>
        <p>The <strong>dendrogram</strong> is the tree diagram that visualizes the entire merging process. The y-axis represents the distance at which clusters were merged. By "cutting" the dendrogram with a horizontal line, you can choose the final number of clusters.</p>
        <div class="example">
             <p><strong>Example:</strong> For the algorithm steps above, the dendrogram would show A and B merging at a low height (distance 2). C and D would merge at a slightly higher height (distance 4). Finally, the {A, B} group and the {C, D} group would merge at an even higher height (distance 5).</p>
             <p>If you draw a horizontal "cut" line at a distance of 3, you would cross three vertical lines, giving you three clusters: {A, B}, {C}, and {D}. If you cut at a distance of 6, you would get one cluster: {A, B, C, D}.</p>
        </div>
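        <p>The "cut" can also be done programmatically with <code>fcluster</code> from SciPy. A minimal sketch, reusing the toy distances from the algorithm walk-through above:</p>
        <pre><code>
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Same condensed distances for A, B, C, D as in the walk-through
d = np.array([2.0, 6.0, 10.0, 5.0, 9.0, 4.0])
Z = linkage(d, method="single")

# Cut at distance 3: only the A-B merge (height 2) lies below the cut -> 3 clusters
print(fcluster(Z, t=3, criterion="distance"))  # three distinct labels, e.g. [1 1 2 3]

# Cut at distance 6: every merge (max height 5) lies below the cut -> 1 cluster
print(fcluster(Z, t=6, criterion="distance"))  # [1 1 1 1]
        </code></pre>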
        

        <h2>🔹 Comparison: Hierarchical vs. K-Means</h2>
        <table>
            <thead>
                <tr>
                    <th>Feature</th>
                    <th>Hierarchical Clustering</th>
                    <th>K-Means Clustering</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td><strong>Number of Clusters</strong></td>
                    <td>Not needed upfront. Chosen by cutting the dendrogram.</td>
                    <td>Must be pre-specified (K).</td>
                </tr>
                <tr>
                    <td><strong>Speed & Scalability</strong></td>
                    <td>Slow (O(n^2) to O(n^3)). Not for large datasets.</td>
                    <td>Fast and scales well to large data.</td>
                </tr>
                <tr>
                    <td><strong>Output</strong></td>
                    <td>A full hierarchy (dendrogram) showing relationships.</td>
                    <td>A single set of K clusters.</td>
                </tr>
                 <tr>
                    <td><strong>Determinism</strong></td>
                    <td>Deterministic (always the same result).</td>
                    <td>Can vary based on initial random centroids.</td>
                </tr>
                <tr>
                    <td><strong>Use-Case Example</strong></td>
                    <td>Understanding evolutionary relationships between species (phylogenetic trees). The hierarchy is the main goal.</td>
                    <td>Segmenting 1 million customers into 3 pricing tiers (Gold, Silver, Bronze). Speed and scalability are key.</td>
                </tr>
            </tbody>
        </table>

        <h2>🔹 Strengths & Weaknesses</h2>
        <h3>Advantages:</h3>
        <ul>
            <li>✅ Easy to understand and visualize with the dendrogram. <strong>Example:</strong> Showing a business manager how different customer segments are related to each other.</li>
            <li>✅ No need to pre-specify the number of clusters. <strong>Example:</strong> Exploring a new dataset of patient symptoms to see how many natural groupings of diseases emerge.</li>
            <li>✅ The hierarchy can provide meaningful insights. <strong>Example:</strong> In text analysis, finding broad topics (like "Sports") that contain sub-topics ("Football," "Basketball").</li>
        </ul>
        <h3>Disadvantages:</h3>
        <ul>
            <li>❌ Computationally expensive and slow for large datasets. <strong>Example:</strong> Trying to cluster millions of social media users would be too slow.</li>
            <li>❌ Sensitive to noisy data and outliers. <strong>Example:</strong> A single customer with extremely unusual buying habits could distort the entire cluster structure.</li>
            <li>❌ Merges are final and cannot be undone (greedy approach). <strong>Example:</strong> If two customers are incorrectly merged early on, the algorithm can never separate them again.</li>
        </ul>

        <h2>🔹 Python Implementation</h2>
        <div class="story">
            <p>Let's use Python's <code>scipy</code> library to build our "family tree" (the dendrogram) and <code>scikit-learn</code> to perform the final clustering once we decide how many groups we want.</p>
        </div>
        <pre><code>
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering
from scipy.cluster.hierarchy import dendrogram, linkage

# 1. Generate and Prepare Data
X, y = make_blobs(n_samples=50, centers=4, cluster_std=1.2, random_state=42)
X_scaled = StandardScaler().fit_transform(X)

# 2. Build the Linkage Matrix using Scipy for the dendrogram
linkage_matrix = linkage(X_scaled, method='ward')

# 3. Plot the Dendrogram
plt.figure(figsize=(15, 7))
plt.title('Hierarchical Clustering Dendrogram (Ward Linkage)')
plt.xlabel('Sample Index')
plt.ylabel('Distance')
dendrogram(linkage_matrix)
plt.show()


# 4. Perform Clustering with Scikit-learn
# Let's say the dendrogram suggests 4 clusters is a good choice.
agg_cluster = AgglomerativeClustering(n_clusters=4, linkage='ward')
labels = agg_cluster.fit_predict(X_scaled)

# 5. Visualize the final clusters
plt.figure(figsize=(10, 7))
plt.scatter(X_scaled[:, 0], X_scaled[:, 1], c=labels, cmap='viridis', s=50)
plt.title('Final Clusters (n=4)')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.grid(True)
plt.show()

        </code></pre>

        <h2>🔹 Key Terminology Explained</h2>
        <div class="story">
            <p><strong>The Story: Decoding the Family Reunion Plan</strong></p>
            <p>Let's break down some of the technical terms used in our family reunion plan to make sure everything is crystal clear.</p>
        </div>
        <ul>
            <li>
                <strong>Unsupervised Learning:</strong>
                <br>
                <strong>What it is:</strong> A type of machine learning where the algorithm learns patterns from data that has not been labeled or categorized. The algorithm finds the structure on its own.
                <br>
                <strong>Story Example:</strong> You're given a box of mixed fruits with no labels. <strong>Unsupervised learning</strong> is the process of sorting them into piles (apples, bananas, oranges) based only on their appearance, size, and texture, without anyone telling you what each fruit is called.
            </li>
            <li>
                <strong>Dendrogram:</strong>
                <br>
                <strong>What it is:</strong> The tree-like diagram that hierarchical clustering produces. It visually represents the nested grouping of data points and the distances at which merges occurred.
                <br>
                <strong>Story Example:</strong> This is the actual <strong>family tree chart</strong> you draw for the reunion. It shows exactly which siblings were grouped first, how their groups were joined with cousins, and so on, all the way up to the entire family. The height of the branches shows how "distantly related" the merged groups are.
            </li>
            <li>
                <strong>Variance (Within-Cluster):</strong>
                <br>
                <strong>What it is:</strong> A measure of how spread out the data points are within a single cluster. Low variance means the points are tightly packed and very similar. High variance means they are spread out.
                <br>
                <strong>Story Example:</strong> A group of siblings who are all very close in age has low <strong>variance</strong>. A group that includes a toddler, a teenager, and a grandparent has high <strong>variance</strong>. Ward's method tries to create groups with the lowest possible age spread (variance).
            </li>
            <li>
                <strong>Greedy Approach:</strong>
                <br>
                <strong>What it is:</strong> An algorithmic strategy that makes the best possible choice at each step, without considering the overall, long-term outcome. Once a decision is made, it is never reconsidered.
                <br>
                <strong>Story Example:</strong> At the reunion, you first merge the two closest siblings. The <strong>greedy approach</strong> means this decision is final. Even if merging one of those siblings with a cousin first might have created a better overall grouping later, the algorithm can't go back and change its initial decision.
            </li>
            <li>
                <strong>PCA (Principal Component Analysis):</strong>
                <br>
                <strong>What it is:</strong> A technique for dimensionality reduction. It transforms a large set of variables into a smaller set of "principal components" while preserving most of the original information.
                <br>
                <strong>Story Example:</strong> You're judging a baking contest based on 50 different criteria (sweetness, texture, color, aroma, etc.). This is too complex. Using <strong>PCA</strong> is like creating two new, summary criteria: "Overall Taste" and "Visual Appeal". These two components capture the essence of the original 50, making it much easier to compare the cakes fairly.
            </li>
        </ul>

        <h2>🔹 Best Practices</h2>
        <ul>
            <li><strong>Scale Data:</strong> Always normalize or scale your data before clustering. 
                <div class="example" style="margin-top:10px;"><p><strong>Example:</strong> If you're clustering customers by income (e.g., 50,000) and number of purchases (e.g., 5), the income will dominate the distance calculation. Scaling brings both to a similar range so they are weighted fairly.</p></div></li>
            <li><strong>Experiment:</strong> Try different distance metrics and linkage methods to see what works best for your data.
                <div class="example" style="margin-top:10px;"><p><strong>Example:</strong> If your clusters look like long chains, single linkage might be appropriate. If they are tight and spherical, Ward's or complete linkage is better.</p></div></li>
            <li><strong>Use the Dendrogram:</strong> Look for the longest vertical line segment that no horizontal merge line crosses; cutting through that gap is often a good strategy for choosing K.
                <div class="example" style="margin-top:10px;"><p><strong>Example:</strong> If there's a huge jump in merge distance from 3 clusters to 2, it suggests that merging those last two groups creates a very dissimilar, unnatural cluster. Therefore, 3 is likely a good number of clusters.</p></div></li>
            <li><strong>Combine with PCA:</strong> For high-dimensional data, use PCA to reduce dimensions first.
                <div class="example" style="margin-top:10px;"><p><strong>Example:</strong> Clustering genetic data with thousands of genes is noisy. PCA can reduce this to a few key components that represent the most important genetic variations, leading to better clusters.</p></div></li>
        </ul>
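        <p>A minimal sketch tying the first and last practices together (scale, reduce with PCA, then cluster); the dataset size and number of components here are illustrative assumptions:</p>
        <pre><code>
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.cluster import AgglomerativeClustering

# Illustrative high-dimensional data: 200 samples, 50 features, 3 hidden groups
X, _ = make_blobs(n_samples=200, n_features=50, centers=3, random_state=42)

# Scale first so no single feature dominates the distances, then compress with PCA
preprocess = make_pipeline(StandardScaler(), PCA(n_components=5))
X_reduced = preprocess.fit_transform(X)

# Cluster in the reduced space (Ward linkage, as in the implementation above)
labels = AgglomerativeClustering(n_clusters=3, linkage="ward").fit_predict(X_reduced)
print(np.bincount(labels))  # number of points assigned to each cluster
        </code></pre>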
    </div>

</body>
</html>
{% endblock %}