InternVL3.5 38B FP8

This is an FP8 dynamically quantized (W8A8) version of OpenGVLab/InternVL3_5-38B, optimized for high-performance inference.

The quantization process uses a specialized recipe that preserves the model's core visual understanding capabilities while reducing the memory footprint by nearly 40%.
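
As a rough sanity check on that figure, a back-of-envelope estimate is sketched below. The split between parameters preserved in BF16 and those quantized to FP8 is an assumption (the InternVL vision tower is roughly 6B parameters); the real ratio depends on the exact module sizes.

```python
# Back-of-envelope checkpoint-size estimate. The ~6B "preserved in BF16" figure
# (vision tower + embeddings + mlp1) is an assumption, not taken from the card.
total_params = 38e9
preserved_bf16 = 6e9                                 # assumed params kept in BF16
quantized_fp8 = total_params - preserved_bf16

bf16_bytes = total_params * 2                        # original BF16 checkpoint
fp8_bytes = preserved_bf16 * 2 + quantized_fp8 * 1   # mixed BF16/FP8 checkpoint

print(f"BF16: {bf16_bytes / 1e9:.0f} GB, "
      f"FP8 mix: {fp8_bytes / 1e9:.0f} GB, "
      f"reduction: {1 - fp8_bytes / bf16_bytes:.0%}")
```

Under these assumptions the mixed checkpoint comes out around 44 GB versus 76 GB for BF16, i.e. a reduction slightly above 40%, consistent with the figure above.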

Notes

  • 32k max context length
  • Reasoning parser works out of the box; a system prompt is required to run in thinking mode (see the serving sketch below)
  • Tool calling is still under investigation
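
A minimal serving sketch with vLLM is shown below. The reasoning-parser name and the thinking-mode system prompt are assumptions; check the upstream OpenGVLab/InternVL3_5-38B card for the exact prompt.

```python
# Assumes the model is already served with an OpenAI-compatible endpoint, e.g.:
#   vllm serve Simplismart/InternVL3_5-38B-FP8-Dynamic \
#     --max-model-len 32768 --trust-remote-code --reasoning-parser deepseek_r1
# The parser name above is an assumption; pick whichever parser matches the
# model's thinking-tag format.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Placeholder: thinking mode requires the system prompt from the upstream card.
THINKING_SYSTEM_PROMPT = "<copy the thinking-mode system prompt from the upstream card>"

response = client.chat.completions.create(
    model="Simplismart/InternVL3_5-38B-FP8-Dynamic",
    messages=[
        {"role": "system", "content": THINKING_SYSTEM_PROMPT},
        {"role": "user", "content": "Why is the sky blue? Think step by step."},
    ],
    max_tokens=1024,
)

# With a reasoning parser configured, vLLM returns the reasoning separately.
message = response.choices[0].message
print(getattr(message, "reasoning_content", None))
print(message.content)
```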

Model Details

  Attribute            Value
  Original Model       OpenGVLab/InternVL3_5-38B
  Quantization Method  FP8 Dynamic (W8A8)

Technical Specifications

Quantization Details

  • Weights: FP8 E4M3 with per-tensor scales.
  • Activations: Dynamically quantized to FP8 E4M3 with per-tensor scales.
  • Preserved Modules (Full Precision): Vision tower, embeddings, and the first MLP layer (mlp1).
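
The card does not state which tool produced the checkpoint. The sketch below shows how such a recipe is typically expressed with llm-compressor; the library choice, the model-loading code, and the exact ignore patterns are assumptions derived from the module list above.

```python
# Hypothetical reproduction of the recipe with llm-compressor; the ignore
# patterns are assumptions based on the "Preserved Modules" list above.
from transformers import AutoModel
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "OpenGVLab/InternVL3_5-38B"
model = AutoModel.from_pretrained(MODEL_ID, torch_dtype="bfloat16", trust_remote_code=True)

recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",        # FP8 E4M3 weights, dynamic FP8 E4M3 activations
    ignore=[
        "re:.*vision_model.*",   # keep the vision tower in full precision
        "re:.*mlp1.*",           # keep the mlp1 projector in full precision
        "re:.*embed_tokens.*",   # keep embeddings in full precision
        "lm_head",
    ],
)

# FP8-dynamic quantization is data-free, so no calibration dataset is required.
oneshot(model=model, recipe=recipe)

model.save_pretrained("InternVL3_5-38B-FP8-Dynamic", save_compressed=True)
```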