Add research directions, comparative analysis, and alternative models
README.md CHANGED

@@ -1,6 +1,6 @@
 ---
 title: Denali AI
-short_description:
+short_description: VLMs for Garment Attribute Extraction
 ---

 # Denali AI — Vision-Language Models for Garment Classification

@@ -212,6 +212,49 @@ This multi-metric approach captures semantic similarity rather than requiring exact matches.

---
## Research Directions &amp; Future Work

### Near-Term Improvements

| Direction | Expected Impact | Effort |
|-----------|:--------------:|:------:|
| **GTPO on Qwen3-VL-2B v9** | +2-4pp weighted (currently SFT+GRPO only) | Low |
| **QLoRA on Qwen3.5-35B GPTQ** | JSON parse 14% → 100%, weighted 50% → ~80%+ | Low |
| **OCR pre-processing pipeline** | Fix brand/size for Qwen3.5 models (+30-60pp on those fields) | Medium |
| **Higher LoRA rank (r=32/64)** | +1-3pp from increased adapter capacity | Low |
| **Guided JSON decoding** | Force 100% JSON parse on zero-shot models without training | Low |
### Architecture Exploration

Models we haven't tested but are strong candidates:

| Model | Parameters | Why Promising |
|-------|:----------:|---------------|
| **[Qwen3-VL-7B](https://huggingface.co/Qwen/Qwen3-VL-7B)** | 7B | Larger Qwen3-VL — our best architecture. Could push past 90% |
| **[InternVL3-4B](https://huggingface.co/OpenGVLab/InternVL3-4B)** | 4B | Mid-range InternVL — may close the gap to Qwen3-VL |
| **[SmolVLM2-2.2B](https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct)** | 2.2B | HuggingFace's efficient VLM — strong structured output |
| **[PaliGemma2-3B](https://huggingface.co/google/paligemma2-3b-pt-448)** | 3B | Google VLM with excellent OCR — may solve brand/size |
| **[Phi-4-multimodal](https://huggingface.co/microsoft/Phi-4-multimodal-instruct)** | 5.6B | Microsoft's latest — strong structured output |
| **[MiniCPM-V-2.6](https://huggingface.co/openbmb/MiniCPM-V-2_6)** | 2.8B | Strong small VLM with good OCR capabilities |
| **[Moondream2](https://huggingface.co/vikhyatk/moondream2)** | 1.6B | Ultra-compact — fastest possible inference |

### Long-Term Research

1. **Ensemble routing:** Use a lightweight classifier to route each field to the best-performing model (e.g., Qwen3-VL for visual attributes, InternVL3 for brand/size)
2. **Curriculum learning:** Progressive-difficulty training — easy garments first, hard edge cases last
3. **Synthetic data generation:** Use large VLMs (122B) to generate training labels for unlabeled garment images at scale
4. **Multi-image input:** Leverage front + back + tag images simultaneously for higher accuracy
5. **Active learning:** Identify the samples where models disagree most and prioritize those for annotation
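The ensemble-routing idea above could start even simpler than a learned classifier: a static per-field routing table derived from per-field evaluation scores, upgraded to a classifier later. A sketch follows; the model names, field names, and prediction dicts are illustrative placeholders, not results from this project:

```python
# Hypothetical routing table: each field is served by the model that scored
# best on it in per-field evaluation (names are placeholders).
ROUTES = {
    "category": "qwen3-vl-2b",   # visual attributes -> Qwen3-VL
    "color":    "qwen3-vl-2b",
    "brand":    "internvl3-4b",  # text-heavy fields -> InternVL3
    "size":     "internvl3-4b",
}

def route_predictions(per_model_outputs: dict[str, dict[str, str]]) -> dict[str, str]:
    """Merge full predictions from several models, keeping each field only
    from the model the routing table assigns it to."""
    return {field: per_model_outputs[model][field] for field, model in ROUTES.items()}

# Both models predict every field; the router keeps the best source per field.
outputs = {
    "qwen3-vl-2b":  {"category": "jacket", "color": "navy", "brand": "?",    "size": "?"},
    "internvl3-4b": {"category": "coat",   "color": "blue", "brand": "Acme", "size": "M"},
}
print(route_predictions(outputs))
# → {'category': 'jacket', 'color': 'navy', 'brand': 'Acme', 'size': 'M'}
```

The same per-model outputs also feed the active-learning item: samples where the routed models disagree most are the ones to prioritize for annotation.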
### Key Open Questions

- Why does Qwen3-VL dramatically outperform Qwen3.5-VL at the same scale? Is it the vision encoder, the cross-attention mechanism, or the training data?
- Can RL gains be amplified beyond +1.6pp? The current GRPO/GTPO hyperparameters may be suboptimal.
- Is there a parameter-count sweet spot between 2B and 7B where accuracy saturates?
- Would instruction-tuned models (vs. base models) yield better SFT starting points?

---
## Datasets

| Dataset | Samples | Purpose | Link |