Abstract
Which multimodal model should we use for classification? Previous studies suggest that the answer lies in CLIP-like contrastive Vision-Language Models (VLMs), due to their remarkable performance in zero-shot classification, while Large Multimodal Models (LMMs) are reserved for more complex tasks. In this work, we argue that this answer overlooks an important capability of LMMs: in-context learning. We benchmark state-of-the-art LMMs on diverse datasets for closed-world classification and find that, although their zero-shot performance is lower than CLIP's, LMMs with a few in-context examples can match or even surpass contrastive VLMs with cache-based adapters, their "in-context" equivalent. We extend this analysis to the open-world setting, where the generative nature of LMMs makes them more suitable for the task. In this challenging scenario, however, LMMs struggle whenever provided with imperfect context information. To address this issue, we propose CIRCLE, a simple training-free method that assigns pseudo-labels to in-context examples and iteratively refines them with the available context itself. Through extensive experiments, we show that CIRCLE establishes a robust baseline for open-world classification, surpassing VLM counterparts and highlighting the potential of LMMs to serve as unified classifiers and as a flexible alternative to specialized models.
Community
Everyone says CLIP-like models are the gold standard for classification, while Large Multimodal Models (LMMs) should be saved strictly for complex reasoning.
But what if we are drastically underestimating LMMs? 🤔
The current consensus overlooks one of their biggest superpowers: in-context learning.
In this new paper, we challenge this assumption and systematically put LMMs to the test across both closed- and open-world settings.
Here is what we found:
📊 The closed-world surprise
While LMM zero-shot performance does lag behind CLIP, adding just a few in-context examples completely changes the game. Few-shot LMMs can match — and even surpass — contrastive VLMs equipped with cache-based adapters.
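For context, a cache-based adapter (in the style of Tip-Adapter) turns a handful of labeled examples into a key-value cache over frozen CLIP features, blending cache affinities with the zero-shot logits. A minimal NumPy sketch, with illustrative hyperparameter values and shapes, not the exact configuration used in the paper:

```python
import numpy as np

def cache_adapter_logits(f_test, f_cache, y_cache, zs_logits,
                         alpha=1.0, beta=5.5):
    """Tip-Adapter-style cache classifier over frozen, L2-normalized
    CLIP features. `alpha` and `beta` values here are illustrative.

    f_test:    (d,)   test image feature
    f_cache:   (k, d) few-shot image features (the cache keys)
    y_cache:   (k, c) one-hot labels (the cache values)
    zs_logits: (c,)   CLIP zero-shot logits for the test image
    """
    # Affinity between the test feature and each cached example,
    # sharpened by beta (cosine similarity, since features are normalized).
    affinity = np.exp(-beta * (1.0 - f_cache @ f_test))   # (k,)
    # Aggregate the cached one-hot labels, weighted by affinity.
    cache_logits = affinity @ y_cache                     # (c,)
    # Blend with the zero-shot prediction.
    return zs_logits + alpha * cache_logits
```

This is the "in-context" equivalent on the VLM side: the few-shot examples are consumed as a feature cache rather than as tokens in a prompt.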
🌍 The open-world bottleneck
The generative nature of LMMs makes them naturally suited for open-world classification. The catch? They struggle significantly when fed imperfect context information.
To solve this bottleneck, we introduce CIRCLE: a simple, training-free method that automatically builds a context from unannotated images and iteratively refines their labels using the context itself. This enables effective ICL, improving LMMs’ open-world capabilities.
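The loop CIRCLE describes can be sketched roughly as follows. This is a hypothetical illustration of the iterative refinement idea only: `classify` stands in for an LMM call, and every name, signature, and round count below is a placeholder, not the paper's implementation.

```python
def build_context(unlabeled_images, classify, n_rounds=3):
    """Iteratively pseudo-label images using the context itself.

    `classify(image, context)` is an injected stand-in for an LMM that
    returns a label for `image` given a list of (image, label)
    in-context examples. `n_rounds` is an illustrative choice.
    """
    # Round 0: label every image with an empty context (zero-shot).
    context = [(img, classify(img, [])) for img in unlabeled_images]
    # Refinement rounds: re-label each image using the *other* examples'
    # current pseudo-labels as context, so labels can correct each other.
    for _ in range(n_rounds):
        context = [
            (img, classify(img, [(i, l) for i, l in context if i != img]))
            for img, _ in context
        ]
    return context
```

The key property is that no human annotation enters the loop: the context bootstraps itself from the LMM's own predictions.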
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- RAR: Retrieving and Ranking Augmented MLLMs for Visual Recognition (2024)
- MultiModal Fine-tuning with Synthetic Captions (2026)
- VIOLA: Towards Video In-Context Learning with Minimal Annotations (2026)
- Fine-R1: Make Multi-modal LLMs Excel in Fine-Grained Visual Recognition by Chain-of-Thought Reasoning (2026)
- CARPE: Context-Aware Image Representation Prioritization via Ensemble for Large Vision-Language Models (2026)
- LiteEmbed: Adapting CLIP to Rare Classes (2026)
- Fine-tuning Pre-trained Vision-Language Models in a Human-Annotation-Free Manner (2026)