Dealing with Severe Class Imbalance in Garment Image Classification

#2 by MathiasB

After cleaning and normalizing the category labels, I ended up with 28 garment types. However, the class distribution is extremely imbalanced:

- The largest class (“Top”) has 8,568 images, while the smallest (“Winter trousers”) has only 86.
- The class imbalance ratio is nearly 100:1.
This kind of imbalance can significantly affect model performance, especially for the rarest classes. I’m considering several strategies, such as merging rare categories, using class weighting, and applying targeted data augmentation. I’m also planning to monitor per-class metrics (like F1-score) rather than just overall accuracy.

I’d be interested to hear how others have approached similar challenges in image classification tasks, especially in the context of fashion or retail datasets. Insights on practical solutions, pitfalls, and evaluation strategies are very welcome!

Tackling Class Imbalance in Fashion Image Classification

I've been working on a fashion image classification task and ran into a significant class imbalance issue after cleaning and normalizing the category labels. I ended up with 28 garment types, but the distribution is far from even — the largest class ("Top") has 8,568 images, while the smallest ("Winter trousers") has only 86. That’s nearly a 100:1 imbalance ratio.

To address this, I’ve been experimenting with a few strategies:

  1. Class Weighting During Training
    I applied class weighting to the loss function to give more importance to underrepresented classes. This helps the model avoid being biased toward the dominant categories. I used inverse frequency weighting, which improved performance on rare classes when evaluated using per-class F1-scores.
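The inverse-frequency weighting described above can be sketched as follows (the `train_labels` list and three-class setup are toy placeholders; a real pipeline would derive the labels from the training split):

```python
from collections import Counter

# Hypothetical per-image integer class ids for the training split.
# Counts here are 4, 2, 1 for classes 0, 1, 2.
train_labels = [0, 0, 0, 0, 1, 1, 2]

counts = Counter(train_labels)
num_classes = len(counts)
total = len(train_labels)

# Inverse-frequency ("balanced") weights: n_samples / (n_classes * class_count).
# The rarest class gets the largest weight.
class_weights = [total / (num_classes * counts[c]) for c in range(num_classes)]

# In PyTorch, these would be handed to the loss so rare classes
# contribute more per sample, e.g.:
#   criterion = torch.nn.CrossEntropyLoss(
#       weight=torch.tensor(class_weights))
```

This scaling (the same one scikit-learn uses for `class_weight="balanced"`) keeps the frequency-weighted average of the weights at 1, so the overall loss magnitude stays comparable to the unweighted case.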

  2. Merging Semantically Similar Classes
    Some rare classes were visually and semantically close to others, so I merged them to increase sample sizes and reduce noise. Examples include:

- "Tank top" and "Tank top " (trailing-space issue)
- "Sweater" and "sweater" (casing issue)
- "Top", "top", "Blouse", "Tunic", "Training top" → merged into "Topwear"
- "Jacket", "Jacker", "Rain jacket", "Winter jacket", "Denim jacket" → merged into "Jacket" or "Outerwear"
- "Night gown", "Nightgown", "Robe", "Pajamas" → merged into "Sleepwear"
This helped reduce fragmentation and made the dataset more manageable. I also removed any classes with fewer than 10 samples before splitting the data.
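One way to implement this cleanup is a normalize-then-map pass over the raw labels, followed by a minimum-count filter. The sketch below is illustrative: the `MERGE_MAP` covers only the examples above (not the full 28-class taxonomy), and the threshold is lowered from the post's 10 to keep the toy data small:

```python
from collections import Counter

# Illustrative raw labels showing the issues described above
# (casing, trailing whitespace, the "Jacker" typo).
raw_labels = ["Top", "top", "Blouse", "Tank top ", "Tank top",
              "Jacker", "Rain jacket", "Nightgown", "Night gown"]

# Cleaned label -> canonical class (partial map, for illustration only).
MERGE_MAP = {
    "top": "Topwear", "blouse": "Topwear", "tunic": "Topwear",
    "training top": "Topwear",
    "jacket": "Outerwear", "jacker": "Outerwear",
    "rain jacket": "Outerwear", "winter jacket": "Outerwear",
    "denim jacket": "Outerwear",
    "night gown": "Sleepwear", "nightgown": "Sleepwear",
    "robe": "Sleepwear", "pajamas": "Sleepwear",
}

def normalize(label: str) -> str:
    key = label.strip().lower()          # fixes trailing spaces and casing
    return MERGE_MAP.get(key, key.title())

merged = [normalize(label) for label in raw_labels]
counts = Counter(merged)

# Drop classes below a minimum sample count (10 in the post; 2 here
# so the toy example stays small) BEFORE the train-test split.
MIN_SAMPLES = 2
kept = [label for label in merged if counts[label] >= MIN_SAMPLES]
```

Doing the strip/lowercase normalization before the merge map means one map entry catches all casing and whitespace variants of a label.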

  3. Evaluation Strategy
    Rather than relying on overall accuracy, I focused on per-class metrics like precision, recall, and F1-score. I also used confusion matrices to spot systematic misclassifications and tracked macro and weighted averages to get a better sense of performance across all classes.
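The per-class metrics and confusion matrix can be computed by hand as below; in practice `sklearn.metrics.classification_report` and `confusion_matrix` give the same numbers with less code. The `y_true`/`y_pred` lists here are made-up toy predictions, not results from the actual model:

```python
# Toy test-set labels and predictions (illustrative only).
y_true = ["Topwear", "Topwear", "Topwear", "Outerwear", "Sleepwear", "Sleepwear"]
y_pred = ["Topwear", "Topwear", "Outerwear", "Outerwear", "Topwear", "Sleepwear"]

classes = sorted(set(y_true) | set(y_pred))

# Confusion matrix: rows = true class, columns = predicted class.
confusion = {t: {p: 0 for p in classes} for t in classes}
for t, p in zip(y_true, y_pred):
    confusion[t][p] += 1

def f1(cls):
    tp = confusion[cls][cls]
    fp = sum(confusion[t][cls] for t in classes) - tp  # predicted cls, wrong
    fn = sum(confusion[cls][p] for p in classes) - tp  # true cls, missed
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

per_class_f1 = {c: f1(c) for c in classes}
# Macro average treats every class equally, which is what surfaces
# poor performance on rare classes that overall accuracy hides.
macro_f1 = sum(per_class_f1.values()) / len(classes)
```

A weighted average (weighting each class F1 by its support) can be tracked alongside the macro average to see how much the rare classes drag the unweighted figure down.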

Lessons Learned / Pitfalls

- Merging classes can help, but over-merging may reduce the model’s ability to distinguish fine-grained categories.
- Be careful with semantic ambiguity — some garments (e.g., "Blazer" vs. "Jacket") may need domain-specific rules.
- Always clean and merge labels before the train-test split to avoid data leakage.
