---
title: README
emoji: π
colorFrom: red
colorTo: indigo
sdk: static
pinned: false
---

# MAIR Lab – Mila, Quebec AI Institute

The **Multimodal Artificial Intelligence Research (MAIR) Lab** at [Mila](https://mila.quebec/en/) advances the science of **foundation models** that can see, interact, and act in the physical world.

Our research explores how these models **understand the visual world**, and how they can be adapted through **fine-tuning**, **parameter-efficient methods**, **reinforcement learning**, and other approaches to unlock new capabilities. We apply these techniques across a range of multimodal tasks – from **visual question answering** and **instruction-guided image editing** to **reasoning-intensive re-ranking** and **multimodal content generation**.

Beyond developing methods, we create **datasets and benchmarks** that challenge models to reason deeply, generalize across modalities, and operate with **cultural awareness** in diverse global contexts.

Our goal is to move beyond surface-level recognition toward **AI systems that truly understand, reason, and interact** – bridging vision, language, and human values.

**→ Explore our [models](https://huggingface.co/mair-lab) and [datasets](https://huggingface.co/mair-lab?sort=modified) to help shape the future of multimodal AI.**
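
Our releases can also be browsed programmatically. A minimal sketch using the standard `huggingface_hub` client, assuming only the `mair-lab` organization name from the links above:

```python
from huggingface_hub import list_models, list_datasets

# Enumerate all models published under the mair-lab organization
for model in list_models(author="mair-lab"):
    print(model.id)

# Enumerate all datasets published under the mair-lab organization
for dataset in list_datasets(author="mair-lab"):
    print(dataset.id)
```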