| {% extends "layout.html" %}
|
|
|
| {% block content %}
|
| <!DOCTYPE html>
|
| <html lang="en">
|
| <head>
|
| <meta charset="UTF-8">
|
| <meta name="viewport" content="width=device-width, initial-scale=1.0">
|
| <title>Study Guide: Hierarchical Clustering</title>
|
| <style>
|
|
|
| body {
|
| background-color: #ffffff;
|
| color: #000000;
|
| font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif;
|
| font-weight: normal;
|
| line-height: 1.8;
|
| margin: 0;
|
| padding: 20px;
|
| }
|
|
|
|
|
| .container {
|
| max-width: 800px;
|
| margin: 0 auto;
|
| padding: 20px;
|
| }
|
|
|
|
|
| h1, h2, h3 {
|
| color: #000000;
|
| border: none;
|
| font-weight: bold;
|
| }
|
|
|
| h1 {
|
| text-align: center;
|
| border-bottom: 3px solid #000;
|
| padding-bottom: 10px;
|
| margin-bottom: 30px;
|
| font-size: 2.5em;
|
| }
|
|
|
| h2 {
|
| font-size: 1.8em;
|
| margin-top: 40px;
|
| border-bottom: 1px solid #ddd;
|
| padding-bottom: 8px;
|
| }
|
|
|
| h3 {
|
| font-size: 1.3em;
|
| margin-top: 25px;
|
| }
|
|
|
|
|
| strong {
|
| font-weight: 900;
|
| }
|
|
|
|
|
| p, li {
|
| font-size: 1.1em;
|
| border-bottom: 1px solid #e0e0e0;
|
| padding-bottom: 10px;
|
| margin-bottom: 10px;
|
| }
|
|
|
|
|
| li:last-child {
|
| border-bottom: none;
|
| }
|
|
|
|
|
| ol {
|
| list-style-type: decimal;
|
| padding-left: 20px;
|
| }
|
|
|
| ol li {
|
| padding-left: 10px;
|
| }
|
|
|
|
|
| ul {
|
| list-style-type: none;
|
| padding-left: 0;
|
| }
|
|
|
| ul li::before {
|
content: "•";
|
| color: #000;
|
| font-weight: bold;
|
| display: inline-block;
|
| width: 1em;
|
| margin-left: 0;
|
| }
|
|
|
|
|
| pre {
|
| background-color: #f4f4f4;
|
| border: 1px solid #ddd;
|
| border-radius: 5px;
|
| padding: 15px;
|
| white-space: pre-wrap;
|
| word-wrap: break-word;
|
| font-family: "Courier New", Courier, monospace;
|
| font-size: 0.95em;
|
| font-weight: normal;
|
| color: #333;
|
| border-bottom: none;
|
| }
|
|
|
|
|
| .story {
|
| background-color: #f9f8fa;
|
| border-left: 4px solid #8a2be2;
|
| margin: 15px 0;
|
| padding: 10px 15px;
|
| font-style: italic;
|
| color: #555;
|
| font-weight: normal;
|
| border-bottom: none;
|
| }
|
|
|
| .story p, .story li {
|
| border-bottom: none;
|
| }
|
|
|
| .example {
|
| background-color: #e9ecef;
|
| padding: 15px;
|
| margin: 15px 0;
|
| border-radius: 5px;
|
| border-left: 4px solid #17a2b8;
|
| }
|
|
|
| .example p, .example li {
|
| border-bottom: none !important;
|
| }
|
|
|
|
|
| table {
|
| width: 100%;
|
| border-collapse: collapse;
|
| margin: 25px 0;
|
| }
|
| th, td {
|
| border: 1px solid #ddd;
|
| padding: 12px;
|
| text-align: left;
|
| }
|
| th {
|
| background-color: #f2f2f2;
|
| font-weight: bold;
|
| }
|
|
|
|
|
| @media (max-width: 768px) {
|
| body, .container {
|
| padding: 10px;
|
| }
|
| h1 { font-size: 2em; }
|
| h2 { font-size: 1.5em; }
|
| h3 { font-size: 1.2em; }
|
| p, li { font-size: 1em; }
|
| pre { font-size: 0.85em; }
|
| table, th, td { font-size: 0.9em; }
|
| }
|
| </style>
|
| </head>
|
| <body>
|
|
|
| <div class="container">
|
<h1>🌳 Study Guide: Hierarchical Clustering</h1>
|
|
|
|
|
|
|
| <div>
|
|
|
|
|
|
|
|
|
<!-- 3D-effect button: the hard shadow gives depth; the :active state removes it and shifts the button down to simulate a press. -->
<a
  href="/hierarchical-three"
  target="_blank"
  onclick="playSound()"
  class="cursor-pointer inline-block relative bg-blue-500 text-white font-bold py-4 px-8 rounded-xl text-2xl transition-all duration-150 shadow-[0_8px_0_rgb(29,78,216)] active:shadow-none active:translate-y-[8px]">
|
| Tap Me!
|
| </a>
|
| </div>
|
|
|
| <script>
|
| function playSound() {
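  // Plays a click sound if the page (for example, the shared layout) provides an audio element with id="clickSound"; otherwise it does nothing.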
|
| const audio = document.getElementById("clickSound");
|
| if (audio) {
|
| audio.currentTime = 0;
|
| audio.play().catch(e => console.log("Audio play failed:", e));
|
| }
|
| }
|
| </script>
|
|
|
|
|
<h2>🔹 Core Concepts</h2>
|
| <div class="story">
|
| <p><strong>Story-style intuition: Organizing a Family Reunion</strong></p>
|
| <p>Imagine you are organizing a big family reunion. You start by grouping the closest relatives: siblings form small groups. Then, you merge those groups with their cousins. Next, you merge those larger groups with their aunts and uncles. You keep doing this until the entire extended family is in one giant group. Hierarchical clustering works just like this: it builds a family tree, or a <strong>dendrogram</strong>, showing how everyone is related, from the closest individuals to the entire family.</p>
|
| </div>
|
|
|
| <h3>What is Clustering?</h3>
|
| <p>In machine learning, <strong>clustering</strong> is an <strong>unsupervised learning</strong> technique. This means you have data, but you don't have pre-defined labels for it. The goal of clustering is to find natural groupings (or "clusters") in the data, where points within the same group are more similar to each other than to those in other groups.</p>
|
| <div class="example">
|
| <p><strong>Example:</strong> You have a list of customers and their purchasing habits (e.g., spending amount, frequency of visits). Clustering would help you automatically identify groups like "high-spending loyal customers," "occasional bargain hunters," and "new visitors" without you having to define these groups first.</p>
|
| </div>
|
|
|
| <h3>Definition of Hierarchical Clustering</h3>
|
| <p><strong>Hierarchical Clustering</strong> builds a hierarchy of clusters, either from the bottom up or the top down. The result is a tree-like structure called a <strong>dendrogram</strong>, which shows the entire "family tree" of how the groups were formed.</p>
|
|
|
| <ul>
|
| <li><strong>Agglomerative (Bottom-Up):</strong> Starts with each data point as its own cluster, then iteratively merges the closest pairs of clusters until only one cluster remains.
|
| <div class="example" style="margin-top:10px;">
|
| <p><strong>Example:</strong> We start with four friends, {A}, {B}, {C}, {D}. The algorithm first merges the two most similar, say A and B, to get {A, B}, {C}, {D}. Then it might merge C and D to get {A, B}, {C, D}. Finally, it merges these two groups into {A, B, C, D}.</p>
|
| </div>
|
| </li>
|
| <li><strong>Divisive (Top-Down):</strong> Starts with all data points in one giant cluster, then recursively splits them into smaller clusters.
|
| <div class="example" style="margin-top:10px;">
|
| <p><strong>Example:</strong> We start with one big group {A, B, C, D}. The algorithm first splits it into the two most different subgroups, for instance {A, B} and {C, D}. Then it might split {A, B} into {A} and {B}, completing the process.</p>
|
| </div>
|
| </li>
|
| </ul>
|
|
|
<h2>🔹 Mathematical Foundation</h2>
|
| <h3>Distance Metrics (Measuring Point-to-Point Similarity)</h3>
|
| <ul>
|
<li><strong>Euclidean Distance:</strong> The straight-line distance between two points. In two dimensions: d(p, q) = sqrt((p1 - q1)^2 + (p2 - q2)^2)
|
| <div class="example" style="margin-top:10px;">
|
| <p><strong>Example:</strong> If Friend A is at point (1, 2) and Friend B is at (4, 6), their Euclidean distance is sqrt((4-1)^2 + (6-2)^2) = sqrt(3^2 + 4^2) = sqrt(9 + 16) = sqrt(25) = 5.</p>
|
| </div>
|
| </li>
|
| <li><strong>Manhattan Distance:</strong> The distance as if traveling on a city grid (sum of absolute differences). d(p, q) = |p1 - q1| + |p2 - q2|
|
| <div class="example" style="margin-top:10px;">
|
| <p><strong>Example:</strong> For Friend A (1, 2) and Friend B (4, 6), the Manhattan distance is |4-1| + |6-2| = 3 + 4 = 7.</p>
|
| </div>
|
| </li>
|
<li><strong>Cosine Similarity:</strong> Measures the angle between two vectors rather than their magnitude, which makes it a good fit for text analysis, where direction matters more than raw counts. A short code sketch of all three metrics follows this list.</li>
|
| </ul>
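<p>To make these formulas concrete, here is a minimal sketch (assuming NumPy and SciPy are installed) that reproduces the Friend A / Friend B numbers above:</p>
<pre><code>
import numpy as np
from scipy.spatial import distance

a = np.array([1, 2])   # Friend A at (1, 2)
b = np.array([4, 6])   # Friend B at (4, 6)

print(distance.euclidean(a, b))    # 5.0 -> sqrt(3^2 + 4^2)
print(distance.cityblock(a, b))    # 7   -> |4 - 1| + |6 - 2| (Manhattan)

# Cosine similarity is 1 minus SciPy's cosine distance; it compares direction, not magnitude.
print(1 - distance.cosine(a, b))   # about 0.992
</code></pre>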
|
|
|
| <h3>Linkage Criteria (Measuring Cluster-to-Cluster Similarity)</h3>
|
| <div class="story">
|
| <p>Once you have family groups, how do you decide which two groups should merge next? Do you connect them based on their two closest members (<strong>single linkage</strong>), their two most distant members (<strong>complete linkage</strong>), or the average distance between all their members (<strong>average linkage</strong>)?</p>
|
| </div>
|
| <ul>
|
| <li><strong>Single Linkage (The Optimist):</strong> The distance is the <strong>minimum</strong> distance between any two points in the different clusters.
|
| <div class="example" style="margin-top:10px;"><p><strong>Example:</strong> Distance between {A, B} and {C} is the smaller of Distance(A, C) and Distance(B, C).</p></div></li>
|
| <li><strong>Complete Linkage (The Pessimist):</strong> The distance is the <strong>maximum</strong> distance between any two points.
|
| <div class="example" style="margin-top:10px;"><p><strong>Example:</strong> Distance between {A, B} and {C} is the larger of Distance(A, C) and Distance(B, C).</p></div></li>
|
| <li><strong>Average Linkage (The Diplomat):</strong> The distance is the <strong>average</strong> of all pairwise distances between points in the two clusters.
|
| <div class="example" style="margin-top:10px;"><p><strong>Example:</strong> Distance between {A, B} and {C} is (Distance(A, C) + Distance(B, C)) / 2.</p></div></li>
|
<li><strong>Ward's Method (The Team Builder):</strong> Merges the pair of clusters whose union gives the minimum increase in within-cluster variance. A popular and effective default that tends to create compact, spherical clusters. (The sketch after this list illustrates how the first three linkage criteria are computed.)</li>
|
| </ul>
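<p>To see how the first three criteria differ in code, here is a small sketch (toy coordinates invented for illustration, assuming NumPy and SciPy) that computes all pairwise distances between two clusters and reduces them with min, max, and mean:</p>
<pre><code>
import numpy as np
from scipy.spatial.distance import cdist

# Two toy clusters; the coordinates are made up for illustration.
cluster_1 = np.array([[1, 2], [2, 2]])   # e.g. {A, B}
cluster_2 = np.array([[6, 5], [7, 6]])   # e.g. {C, D}

pairwise = cdist(cluster_1, cluster_2)   # all point-to-point distances

print(pairwise.min())    # single linkage   (closest pair)
print(pairwise.max())    # complete linkage (farthest pair)
print(pairwise.mean())   # average linkage  (mean of all pairs)
</code></pre>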
|
|
|
<h2>🔹 Algorithm Steps (Agglomerative)</h2>
|
| <div class="example">
|
| <p>Let's walk through a simple example with four points A, B, C, D. Their initial distance matrix (Euclidean) is:</p>
|
<pre>     A    B    C    D
A    0    2    6   10
B    2    0    5    9
C    6    5    0    4
D   10    9    4    0</pre>
|
| </div>
|
| <ol>
|
| <li><strong>Initialization:</strong> We start with four clusters: {A}, {B}, {C}, {D}.</li>
|
| <li><strong>Merge 1:</strong> The smallest distance is 2 (between A and B). We merge them into a new cluster {A, B}. Our clusters are now {A, B}, {C}, {D}.</li>
|
| <li><strong>Update 1:</strong> We update the distance matrix using, for example, single linkage:
|
| <ul>
|
| <li>dist({A,B}, C) = min(dist(A,C), dist(B,C)) = min(6, 5) = 5</li>
|
| <li>dist({A,B}, D) = min(dist(A,D), dist(B,D)) = min(10, 9) = 9</li>
|
| </ul>
|
| New Matrix:
|
<pre>        {A,B}   C   D
{A,B}     0     5   9
C         5     0   4
D         9     4   0</pre>
|
| </li>
|
| <li><strong>Merge 2:</strong> The smallest distance is now 4 (between C and D). We merge them into {C, D}. Our clusters are now {A, B}, {C, D}.</li>
|
| <li><strong>Update 2:</strong> Update the final distance:
|
| <ul>
|
| <li>dist({A,B}, {C,D}) = min(dist(A,C), dist(A,D), dist(B,C), dist(B,D)) = min(6, 10, 5, 9) = 5</li>
|
| </ul>
|
| </li>
|
<li><strong>Final Merge:</strong> We merge the last two clusters {A, B} and {C, D} at a distance of 5. The process is complete; the code sketch after this list reproduces these merges.</li>
|
| </ol>
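<p>As a check on this walkthrough, here is a minimal SciPy sketch that feeds the same distance matrix to single-linkage clustering; the linkage matrix it prints records merges at distances 2, 4, and 5, matching the steps above:</p>
<pre><code>
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# The distance matrix for A, B, C, D from the walkthrough.
D = np.array([[0,  2,  6, 10],
              [2,  0,  5,  9],
              [6,  5,  0,  4],
              [10, 9,  4,  0]], dtype=float)

# squareform converts the square matrix to the condensed form linkage() expects.
Z = linkage(squareform(D), method='single')
print(Z)
# Each row is one merge: [cluster_i, cluster_j, merge_distance, new_cluster_size]
# Expected merge distances: 2 (A+B), 4 (C+D), 5 ({A,B} + {C,D})
</code></pre>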
|
|
|
<h2>🔹 The Dendrogram</h2>
|
| <p>The <strong>dendrogram</strong> is the tree diagram that visualizes the entire merging process. The y-axis represents the distance at which clusters were merged. By "cutting" the dendrogram with a horizontal line, you can choose the final number of clusters.</p>
|
| <div class="example">
|
| <p><strong>Example:</strong> For the algorithm steps above, the dendrogram would show A and B merging at a low height (distance 2). C and D would merge at a slightly higher height (distance 4). Finally, the {A, B} group and the {C, D} group would merge at an even higher height (distance 5).</p>
|
|
| <p>If you draw a horizontal "cut" line at a distance of 3, you would cross three vertical lines, giving you three clusters: {A, B}, {C}, and {D}. If you cut at a distance of 6, you would get one cluster: {A, B, C, D}.</p>
|
| </div>
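<p>Cutting the tree can also be done programmatically. The sketch below (self-contained, reusing the walkthrough's distance matrix) applies SciPy's <code>fcluster</code> to the linkage matrix; the exact label numbers may differ, but the groupings match the cuts described above:</p>
<pre><code>
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

D = np.array([[0,  2,  6, 10],
              [2,  0,  5,  9],
              [6,  5,  0,  4],
              [10, 9,  4,  0]], dtype=float)
Z = linkage(squareform(D), method='single')

# Cut at distance 3: three clusters -> {A, B}, {C}, {D}
print(fcluster(Z, t=3, criterion='distance'))   # e.g. [1 1 2 3]

# Cut at distance 6: everything has merged into a single cluster
print(fcluster(Z, t=6, criterion='distance'))   # [1 1 1 1]
</code></pre>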
|
|
|
|
|
<h2>🔹 Comparison: Hierarchical vs. K-Means</h2>
|
| <table>
|
| <thead>
|
| <tr>
|
| <th>Feature</th>
|
| <th>Hierarchical Clustering</th>
|
| <th>K-Means Clustering</th>
|
| </tr>
|
| </thead>
|
| <tbody>
|
| <tr>
|
| <td><strong>Number of Clusters</strong></td>
|
| <td>Not needed upfront. Chosen by cutting the dendrogram.</td>
|
| <td>Must be pre-specified (K).</td>
|
| </tr>
|
| <tr>
|
| <td><strong>Speed & Scalability</strong></td>
|
| <td>Slow (O(n^2) to O(n^3)). Not for large datasets.</td>
|
| <td>Fast and scales well to large data.</td>
|
| </tr>
|
| <tr>
|
| <td><strong>Output</strong></td>
|
| <td>A full hierarchy (dendrogram) showing relationships.</td>
|
| <td>A single set of K clusters.</td>
|
| </tr>
|
| <tr>
|
| <td><strong>Determinism</strong></td>
|
| <td>Deterministic (always the same result).</td>
|
| <td>Can vary based on initial random centroids.</td>
|
| </tr>
|
| <tr>
|
| <td><strong>Use-Case Example</strong></td>
|
| <td>Understanding evolutionary relationships between species (phylogenetic trees). The hierarchy is the main goal.</td>
|
| <td>Segmenting 1 million customers into 3 pricing tiers (Gold, Silver, Bronze). Speed and scalability are key.</td>
|
| </tr>
|
| </tbody>
|
| </table>
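<p>As a quick illustration of the table (a sketch assuming scikit-learn is installed), both algorithms can be run on the same toy data. Note that K-Means needs the number of clusters and a random initialization up front, while the agglomerative result is deterministic:</p>
<pre><code>
from sklearn.datasets import make_blobs
from sklearn.cluster import AgglomerativeClustering, KMeans

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# Hierarchical: deterministic, no random_state parameter, but O(n^2) or worse in cost.
hier_labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)

# K-Means: fast and scalable, but the result depends on the random centroid initialization.
km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print(hier_labels[:10])
print(km_labels[:10])
</code></pre>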
|
|
|
<h2>🔹 Strengths & Weaknesses</h2>
|
| <h3>Advantages:</h3>
|
| <ul>
|
<li>✅ Easy to understand and visualize with the dendrogram. <strong>Example:</strong> Showing a business manager how different customer segments are related to each other.</li>
|
<li>✅ No need to pre-specify the number of clusters. <strong>Example:</strong> Exploring a new dataset of patient symptoms to see how many natural groupings of diseases emerge.</li>
|
<li>✅ The hierarchy can provide meaningful insights. <strong>Example:</strong> In text analysis, finding broad topics (like "Sports") that contain sub-topics ("Football," "Basketball").</li>
|
| </ul>
|
| <h3>Disadvantages:</h3>
|
| <ul>
|
<li>❌ Computationally expensive and slow for large datasets. <strong>Example:</strong> Trying to cluster millions of social media users would be too slow.</li>
|
<li>❌ Sensitive to noisy data and outliers. <strong>Example:</strong> A single customer with extremely unusual buying habits could distort the entire cluster structure.</li>
|
<li>❌ Merges are final and cannot be undone (greedy approach). <strong>Example:</strong> If two customers are incorrectly merged early on, the algorithm can never separate them again.</li>
|
| </ul>
|
|
|
<h2>🔹 Python Implementation</h2>
|
| <div class="story">
|
<p>Let's use Python's <code>scipy</code> library to build our "family tree" (the dendrogram) and <code>scikit-learn</code> to perform the final clustering once we decide how many groups we want.</p>
|
| </div>
|
<pre><code>
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering
from scipy.cluster.hierarchy import dendrogram, linkage

# 1. Generate and prepare data
X, y = make_blobs(n_samples=50, centers=4, cluster_std=1.2, random_state=42)
X_scaled = StandardScaler().fit_transform(X)

# 2. Build the linkage matrix with SciPy (this drives the dendrogram)
linkage_matrix = linkage(X_scaled, method='ward')

# 3. Plot the dendrogram
plt.figure(figsize=(15, 7))
plt.title('Hierarchical Clustering Dendrogram (Ward Linkage)')
plt.xlabel('Sample Index')
plt.ylabel('Distance')
dendrogram(linkage_matrix)
plt.show()

# 4. Perform the clustering with scikit-learn
# Let's say the dendrogram suggests 4 clusters is a good choice.
agg_cluster = AgglomerativeClustering(n_clusters=4, linkage='ward')
labels = agg_cluster.fit_predict(X_scaled)

# 5. Visualize the final clusters
plt.figure(figsize=(10, 7))
plt.scatter(X_scaled[:, 0], X_scaled[:, 1], c=labels, cmap='viridis', s=50)
plt.title('Final Clusters (n=4)')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.grid(True)
plt.show()
</code></pre>
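<p>A note on the choices above: <code>linkage='ward'</code> assumes Euclidean distances, which is why no separate metric is passed. If you would rather cut by distance than fix the number of clusters, scikit-learn's <code>AgglomerativeClustering</code> also accepts a <code>distance_threshold</code> argument (with <code>n_clusters=None</code>), which mirrors cutting the dendrogram at a chosen height.</p>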
|
|
|
<h2>🔹 Key Terminology Explained</h2>
|
| <div class="story">
|
| <p><strong>The Story: Decoding the Family Reunion Plan</strong></p>
|
| <p>Let's break down some of the technical terms used in our family reunion plan to make sure everything is crystal clear.</p>
|
| </div>
|
| <ul>
|
| <li>
|
| <strong>Unsupervised Learning:</strong>
|
| <br>
|
| <strong>What it is:</strong> A type of machine learning where the algorithm learns patterns from data that has not been labeled or categorized. The algorithm finds the structure on its own.
|
| <br>
|
| <strong>Story Example:</strong> You're given a box of mixed fruits with no labels. <strong>Unsupervised learning</strong> is the process of sorting them into piles (apples, bananas, oranges) based only on their appearance, size, and texture, without anyone telling you what each fruit is called.
|
| </li>
|
| <li>
|
| <strong>Dendrogram:</strong>
|
| <br>
|
| <strong>What it is:</strong> The tree-like diagram that hierarchical clustering produces. It visually represents the nested grouping of data points and the distances at which merges occurred.
|
| <br>
|
| <strong>Story Example:</strong> This is the actual <strong>family tree chart</strong> you draw for the reunion. It shows exactly which siblings were grouped first, how their groups were joined with cousins, and so on, all the way up to the entire family. The height of the branches shows how "distantly related" the merged groups are.
|
| </li>
|
| <li>
|
| <strong>Variance (Within-Cluster):</strong>
|
| <br>
|
| <strong>What it is:</strong> A measure of how spread out the data points are within a single cluster. Low variance means the points are tightly packed and very similar. High variance means they are spread out.
|
| <br>
|
| <strong>Story Example:</strong> A group of siblings who are all very close in age has low <strong>variance</strong>. A group that includes a toddler, a teenager, and a grandparent has high <strong>variance</strong>. Ward's method tries to create groups with the lowest possible age spread (variance).
|
| </li>
|
| <li>
|
| <strong>Greedy Approach:</strong>
|
| <br>
|
| <strong>What it is:</strong> An algorithmic strategy that makes the best possible choice at each step, without considering the overall, long-term outcome. Once a decision is made, it is never reconsidered.
|
| <br>
|
| <strong>Story Example:</strong> At the reunion, you first merge the two closest siblings. The <strong>greedy approach</strong> means this decision is final. Even if merging one of those siblings with a cousin first might have created a better overall grouping later, the algorithm can't go back and change its initial decision.
|
| </li>
|
| <li>
|
| <strong>PCA (Principal Component Analysis):</strong>
|
| <br>
|
| <strong>What it is:</strong> A technique for dimensionality reduction. It transforms a large set of variables into a smaller set of "principal components" while preserving most of the original information.
|
| <br>
|
| <strong>Story Example:</strong> You're judging a baking contest based on 50 different criteria (sweetness, texture, color, aroma, etc.). This is too complex. Using <strong>PCA</strong> is like creating two new, summary criteria: "Overall Taste" and "Visual Appeal". These two components capture the essence of the original 50, making it much easier to compare the cakes fairly.
|
| </li>
|
| </ul>
|
|
|
<h2>🔹 Best Practices</h2>
|
| <ul>
|
| <li><strong>Scale Data:</strong> Always normalize or scale your data before clustering.
|
| <div class="example" style="margin-top:10px;"><p><strong>Example:</strong> If you're clustering customers by income (e.g., 50,000) and number of purchases (e.g., 5), the income will dominate the distance calculation. Scaling brings both to a similar range so they are weighted fairly.</p></div></li>
|
| <li><strong>Experiment:</strong> Try different distance metrics and linkage methods to see what works best for your data.
|
| <div class="example" style="margin-top:10px;"><p><strong>Example:</strong> If your clusters look like long chains, single linkage might be appropriate. If they are tight and spherical, Ward's or complete linkage is better.</p></div></li>
|
<li><strong>Use the Dendrogram:</strong> Look for the longest vertical stretch that no horizontal merge line crosses; cutting through that gap is often a good strategy for choosing K.
|
| <div class="example" style="margin-top:10px;"><p><strong>Example:</strong> If there's a huge jump in merge distance from 3 clusters to 2, it suggests that merging those last two groups creates a very dissimilar, unnatural cluster. Therefore, 3 is likely a good number of clusters.</p></div></li>
|
<li><strong>Combine with PCA:</strong> For high-dimensional data, use PCA to reduce dimensions first (see the sketch after this list).
|
| <div class="example" style="margin-top:10px;"><p><strong>Example:</strong> Clustering genetic data with thousands of genes is noisy. PCA can reduce this to a few key components that represent the most important genetic variations, leading to better clusters.</p></div></li>
|
| </ul>
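<p>The sketch below (assuming scikit-learn is installed) chains the first and last practices together: scale the features, reduce them with PCA, then cluster. The number of components and clusters here are illustrative choices, not recommendations:</p>
<pre><code>
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

# Toy high-dimensional data (20 features) standing in for a real dataset.
X, _ = make_blobs(n_samples=200, centers=3, n_features=20, random_state=42)

# 1. Scale so that no single feature dominates the distance calculation.
X_scaled = StandardScaler().fit_transform(X)

# 2. Reduce to a few principal components before clustering.
X_reduced = PCA(n_components=2).fit_transform(X_scaled)

# 3. Cluster the reduced data (Ward linkage uses Euclidean distances).
labels = AgglomerativeClustering(n_clusters=3, linkage='ward').fit_predict(X_reduced)
print(labels[:10])
</code></pre>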
|
| </div>
|
|
|
| </body>
|
| </html>
|
| {% endblock %}
|
|
|
|
|
|
|