VulnLLM-R-7B (Quantized)

Description

This model is a 4-bit quantized version of the original UCSB-SURFI/VulnLLM-R-7B model, optimized for reduced memory usage while maintaining performance.

Quantization Details

  • Quantization Type: 4-bit
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: True
  • bnb_4bit_compute_dtype: bfloat16
  • bnb_4bit_quant_storage: uint8
  • Original Footprint: 15231.23 MB (BFLOAT16)
  • Quantized Footprint: 4353.31 MB (UINT8)
  • Memory Reduction: 71.4%
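
The reported memory reduction follows directly from the two footprints above; as a quick sanity check:

```python
# Footprints reported above, in MB
original_mb = 15231.23   # BF16 checkpoint
quantized_mb = 4353.31   # 4-bit weights stored as uint8

reduction = 1 - quantized_mb / original_mb
print(f"{reduction:.1%}")  # -> 71.4%
```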

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "manu02/VulnLLM-R-7B-bnb-4bit-nf4"
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
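
If you prefer to quantize the original UCSB-SURFI/VulnLLM-R-7B checkpoint yourself rather than download this pre-quantized one, the settings listed under Quantization Details map onto a `BitsAndBytesConfig` roughly as follows (a sketch; requires the `bitsandbytes` package and a CUDA-capable GPU):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.uint8,
)

model = AutoModelForCausalLM.from_pretrained(
    "UCSB-SURFI/VulnLLM-R-7B",
    quantization_config=bnb_config,
    device_map="auto",
)
```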

Model tree for manu02/VulnLLM-R-7B-bnb-4bit-nf4-dq

  • Base model: Qwen/Qwen2.5-7B