Cosmos-Reason2-8B-GGUF

NVIDIA Cosmos-Reason2-8B is an 8.767B-parameter open vision-language model, post-trained from Qwen3-VL-8B-Instruct and released under the permissive NVIDIA Open Model License (commercially usable, with derivative rights). It is designed for physical AI, robotics, and embodied reasoning, with enhanced spatio-temporal understanding, physics-based common sense, object detection (2D/3D localization and bounding boxes), and 256K-token long-context support for video/image inputs sampled at FPS=4.

It targets use cases such as video analytics agents (root-cause analysis via the NVIDIA VSS Blueprint), data curation and annotation (Cosmos Curator for sensor data), and robot planning (Isaac GR00T-Dreams for synthetic trajectories). The model generates chain-of-thought responses in `<think>`/`<answer>` format for deliberate decision-making in unfamiliar environments, and is evaluated on datasets such as EgoExo4D, PerceptionTest, IntPhys, and CLEVRER.

Optimized for NVIDIA Blackwell/Hopper GPUs (32 GB+ VRAM, BF16 on Linux with H100/A100) and supporting Transformers inference (max_tokens=4096), it accepts mp4 videos and jpg images and produces text outputs for robotics, autonomous vehicles, and industrial operations, with noted limitations on complex dynamics such as fast motion or occlusions.
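The Transformers inference path mentioned above can be sketched as follows. This is a minimal, illustrative example, not the official recipe: it assumes a recent `transformers` release with Qwen3-VL support, and the checkpoint id, image URL, and prompt are placeholders you should replace with your own.

```python
# Minimal Transformers inference sketch for an image + text prompt.
# Checkpoint id, image URL, and prompt below are illustrative assumptions.
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "nvidia/Cosmos-Reason2-8B"  # placeholder: use the actual repo id
model = AutoModelForImageTextToText.from_pretrained(
    model_id, dtype="bfloat16", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/scene.jpg"},
            {"type": "text", "text": "What will happen to the objects next, and why?"},
        ],
    }
]

# Build the chat-formatted multimodal inputs and generate a response.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

out = model.generate(**inputs, max_new_tokens=4096)
# Decode only the newly generated tokens, skipping the prompt.
print(processor.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Note that this loads the full BF16 checkpoint, so it needs a GPU with roughly 32 GB of VRAM as stated above; the GGUF files in this repo are intended for llama.cpp-based runtimes instead.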

Cosmos-Reason2-8B [GGUF]

| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| Cosmos-Reason2-8B.BF16.gguf | BF16 | 16.4 GB | Download |
| Cosmos-Reason2-8B.F16.gguf | F16 | 16.4 GB | Download |
| Cosmos-Reason2-8B.Q8_0.gguf | Q8_0 | 8.71 GB | Download |
| Cosmos-Reason2-8B.mmproj-bf16.gguf | mmproj-bf16 | 1.16 GB | Download |
| Cosmos-Reason2-8B.mmproj-f16.gguf | mmproj-f16 | 1.16 GB | Download |
| Cosmos-Reason2-8B.mmproj-q8_0.gguf | mmproj-q8_0 | 752 MB | Download |
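For vision input, a main GGUF file must be paired with one of the mmproj (multimodal projector) files above. A minimal sketch with llama.cpp's `llama-mtmd-cli`, assuming llama.cpp is built with multimodal support and the files are downloaded locally (the image path and prompt are illustrative):

```shell
# Run the Q8_0 quant with its matching q8_0 vision projector.
# Paths and prompt are placeholders -- adjust to your local setup.
llama-mtmd-cli \
  -m Cosmos-Reason2-8B.Q8_0.gguf \
  --mmproj Cosmos-Reason2-8B.mmproj-q8_0.gguf \
  --image scene.jpg \
  -p "Describe the physical interactions in this image." \
  -n 4096
```

`llama-server` accepts the same `-m`/`--mmproj` pair if you prefer an OpenAI-compatible HTTP endpoint over a one-shot CLI run.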

Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

