---
base_model: facebook/Perception-LM-1B
base_model_relation: quantized
tags:
- quantized
- int4
- perception_lm
- language-model
library_name: transformers
pipeline_tag: image-text-to-text
---

# Perception-LM-1B INT4 Quantized

This repository contains **a 4-bit quantized version** of Perception-LM-1B, optimized for reduced memory usage and faster inference while retaining most of the capabilities of the full-precision model.

## ⚙️ Model Description

- **Base model**: `facebook/Perception-LM-1B`
- **Quantization**: 4-bit integer quantization (INT4)
- **Purpose**: a lighter, more resource-efficient variant for inference, deployment on resource-constrained hardware, or quick prototyping
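
As a rough back-of-envelope estimate (weights only, ignoring quantization overhead such as scales and zero-points, and ignoring activation/KV-cache memory), INT4 weights take about a quarter of the FP16 footprint:

```python
def weight_footprint_gib(n_params: float, bits_per_param: float) -> float:
    """Approximate weight storage in GiB (weights only, no overhead)."""
    return n_params * bits_per_param / 8 / 1024**3

n_params = 1e9  # Perception-LM-1B has roughly 1 billion parameters

fp16 = weight_footprint_gib(n_params, 16)  # ~1.86 GiB
int4 = weight_footprint_gib(n_params, 4)   # ~0.47 GiB
print(f"FP16: {fp16:.2f} GiB, INT4: {int4:.2f} GiB")
```

Actual on-disk and in-memory sizes will be somewhat larger once quantization metadata and non-quantized layers (e.g. embeddings) are included.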

## ✅ Intended Use & Use Cases

This quantized model is suited for:

- Fast inference when GPU/CPU memory or VRAM is limited  
- Prototyping or integrating into applications where resource efficiency matters  
- Use in research or production pipelines where quantization is acceptable  

### ⚠️ Limitations (Things to Watch Out For)

- Quantization can introduce **slight degradation** compared to the full-precision model: responses may be less accurate or fluent in edge cases.
- Not recommended for use cases requiring **maximum fidelity** (e.g., very fine-grained reasoning or safety-critical tasks).
- Performance depends on hardware: quantized weights may require specific inference settings (device map, memory constraints).

## 🔄 How to Use

Here is an example of how you can load the quantized model using `transformers`:

```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "Dhruvil03/Perception-LM-1B-Int4bit"

processor = AutoProcessor.from_pretrained(model_id)

model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.float16
).to("cuda").eval()

conversation = [{
    "role": "user",
    "content": [
        {"type": "video", "url": "test.mp4"},
        {"type": "text", "text": "Can you describe the video in detail?"},
    ],
}]

inputs = processor.apply_chat_template(
    conversation,
    num_frames=16,   # adjust the number of frames based on available CUDA memory
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    video_load_backend="pyav",
)

inputs = {k: (v.to("cuda") if hasattr(v, "to") else v) for k, v in inputs.items()}

with torch.inference_mode():
    outputs = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the prompt
ilen = inputs["input_ids"].shape[1]
decoded = processor.batch_decode(outputs[:, ilen:], skip_special_tokens=True)
print(decoded[0])
```