---
library_name: transformers
pipeline_tag: image-text-to-text
license: apache-2.0
tags:
  - multimodal
  - visual-reasoning
  - cognitive-ai
  - qwen3_vl
---

# CogSense-8B

This repository contains the weights for **CogSense-8B**, a Multimodal Large Language Model (MLLM) introduced in the paper [Toward Cognitive Supersensing in Multimodal Large Language Model](https://arxiv.org/abs/2602.01541).

Project Page | Code | Paper

## Introduction

CogSense-8B is trained using Cognitive Supersensing, a novel training paradigm that endows MLLMs with human-like visual imagery capabilities. By integrating a Latent Visual Imagery Prediction (LVIP) head, the model learns sequences of visual cognitive latent embeddings and aligns them with answers, forming vision-based internal reasoning chains. This approach aims to bridge the gap between perceptual recognition and complex cognitive understanding.

## CogSense-Bench

The model's cognitive capabilities are evaluated on CogSense-Bench, a comprehensive visual question answering (VQA) benchmark assessing five cognitive dimensions:

- Fluid intelligence
- Crystallized intelligence
- Visuospatial cognition
- Mental simulation
- Visual routines

## Citation

If you find this work useful, please consider citing:

```bibtex
@misc{li2026cognitivesupersensingmultimodallarge,
      title={Toward Cognitive Supersensing in Multimodal Large Language Model},
      author={Boyi Li and Yifan Shen and Yuanzhe Liu and Yifan Xu and Jiateng Liu and Xinzhuo Li and Zhengyuan Li and Jingyuan Zhu and Yunhan Zhong and Fangzhou Lan and Jianguo Cao and James M. Rehg and Heng Ji and Ismini Lourentzou and Xu Cao},
      year={2026},
      eprint={2602.01541},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2602.01541},
}
```