---
license: mit
---
# Fal-2-500M: Efficient Vision-Language Model

<p align="center">
<img src="https://huggingface.co/SVECTOR-CORPORATION/Fal-2-500M/resolve/main/Fal-2-500M.png" alt="TTFT" width="400"/>
</p>

Fal-2-500M is a compact vision-language model designed for image understanding and captioning tasks. Built on the Qwen2 architecture with an efficient vision encoder, it provides high-quality image descriptions with fast inference.

### Highlights

- **Model Size**: 500M parameters

- **Hybrid Vision Encoder**: Designed to output fewer tokens and significantly reduce encoding time for high-resolution images

- **Efficient Token Generation**: 256 tokens at 1024×1024 resolution, 16× fewer than a standard ViT (see the arithmetic sketch after this list)
  
- **State-of-the-Art Performance**: Competitive accuracy with superior efficiency

- **Primary Use**: Image captioning and visual question answering
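
To see where the 16× figure comes from: a vanilla ViT with 16×16 patches at 1024×1024 produces 64 × 64 = 4096 patch tokens, against the 256 reported here. A quick back-of-the-envelope check (the 16-pixel patch size is an assumption for illustration, not something the card specifies):

```python
# Token-count arithmetic behind the "16x fewer than ViT" claim.
# Patch size of 16 is an assumed baseline, not stated in the model card.
resolution = 1024
patch = 16

vit_tokens = (resolution // patch) ** 2  # 64 * 64 = 4096 tokens for a vanilla ViT
fal_tokens = 256                         # figure reported by the model card

print(vit_tokens, fal_tokens, vit_tokens // fal_tokens)  # 4096 256 16
```

### Usage

The snippet below loads the model with `trust_remote_code=True` and builds the prompt manually: the text on either side of an `<image>` placeholder is tokenized separately, and a sentinel token id (`-200`, a LLaVA-style convention) is spliced in where the image features are injected at generation time.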

```python
import torch
from PIL import Image
from transformers import AutoTokenizer, AutoModelForCausalLM

MID = "SVECTOR-CORPORATION/Fal-2-500M"
IMAGE_TOKEN_INDEX = -200  # placeholder id spliced in where image features are injected
# Load the tokenizer and model; trust_remote_code pulls in the repo's custom model code
tok = AutoTokenizer.from_pretrained(MID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MID,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
    trust_remote_code=True,
)
# Build the chat prompt; "<image>" marks where the image embedding goes
messages = [
    {"role": "user", "content": "<image>\nDescribe this image."}
]
rendered = tok.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

# Tokenize the text before and after the <image> placeholder separately
pre, post = rendered.split("<image>", 1)
pre_ids  = tok(pre,  return_tensors="pt", add_special_tokens=False).input_ids
post_ids = tok(post, return_tensors="pt", add_special_tokens=False).input_ids

# Splice the image placeholder id between the two text halves
img_tok = torch.tensor([[IMAGE_TOKEN_INDEX]], dtype=pre_ids.dtype)
input_ids = torch.cat([pre_ids, img_tok, post_ids], dim=1).to(model.device)
attention_mask = torch.ones_like(input_ids)

# Preprocess the image with the vision tower's bundled image processor
img = Image.open("photo.jpg").convert("RGB")
px = model.get_vision_tower().image_processor(images=img, return_tensors="pt")["pixel_values"]
px = px.to(model.device, dtype=model.dtype)

# Generate
with torch.no_grad():
    out = model.generate(
        inputs=input_ids,
        attention_mask=attention_mask,
        images=px,
        max_new_tokens=128,
    )

print(tok.decode(out[0], skip_special_tokens=True))
```
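
For repeated captioning, the same steps can be wrapped in a small helper. This is a sketch built on the snippet above, not part of the published API; `caption_image` and its defaults are illustrative names:

```python
def caption_image(path: str, prompt: str = "Describe this image.",
                  max_new_tokens: int = 128) -> str:
    """Caption one image using the tokenizer/model loaded above (illustrative helper)."""
    messages = [{"role": "user", "content": f"<image>\n{prompt}"}]
    rendered = tok.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

    # Same splice as above: text halves around the image placeholder id
    pre, post = rendered.split("<image>", 1)
    pre_ids = tok(pre, return_tensors="pt", add_special_tokens=False).input_ids
    post_ids = tok(post, return_tensors="pt", add_special_tokens=False).input_ids
    img_tok = torch.tensor([[IMAGE_TOKEN_INDEX]], dtype=pre_ids.dtype)
    input_ids = torch.cat([pre_ids, img_tok, post_ids], dim=1).to(model.device)

    img = Image.open(path).convert("RGB")
    px = model.get_vision_tower().image_processor(images=img, return_tensors="pt")["pixel_values"]
    px = px.to(model.device, dtype=model.dtype)

    with torch.no_grad():
        out = model.generate(
            inputs=input_ids,
            attention_mask=torch.ones_like(input_ids),
            images=px,
            max_new_tokens=max_new_tokens,
        )
    return tok.decode(out[0], skip_special_tokens=True)

print(caption_image("photo.jpg"))
```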