SVECTOR-OFFICIAL committed on
Commit 36d18b9 · verified · 1 Parent(s): 66200f4

Update README.md

Files changed (1): README.md +66 −3
README.md CHANGED
@@ -1,3 +1,66 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ ---
+ # Fal-2-500M: Efficient Vision-Language Model
+
+ <p align="center">
+ <img src="https://huggingface.co/SVECTOR-CORPORATION/Fal-2-500M/resolve/main/Fal-2-500M.png" alt="Accuracy vs latency figure." width="400"/>
+ </p>
+
+ Fal-2-500M is a compact vision-language model designed for image understanding and captioning tasks. Built on the Qwen2 architecture with an efficient vision encoder, it provides high-quality image descriptions with fast inference.
+
+ ### Highlights
+
+ - **Model Size**: 500M parameters
+ - **Hybrid Vision Encoder**: designed to output fewer visual tokens and significantly reduce encoding time for high-resolution images
+ - **Efficient Token Generation**: 256 tokens at 1024×1024 resolution (16× fewer than a standard ViT; see the sketch below)
+ - **Performance**: competitive accuracy with superior efficiency
+ - **Primary Use**: image captioning and visual question answering
+
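+ The 16× figure is simple patch arithmetic. Here is a quick check, assuming a vanilla ViT with 16×16 patches (the patch size is our assumption for illustration, not a documented detail of this model):
+
+ ```python
+ # Back-of-the-envelope token counts at 1024x1024 (assumed ViT patch size of 16).
+ patch, side = 16, 1024
+ vit_tokens = (side // patch) ** 2   # 64 * 64 = 4096 tokens from a plain ViT
+ fal_tokens = 256                    # Fal-2-500M's reported token count
+ print(vit_tokens // fal_tokens)     # 16 -> the "16x fewer" claim
+ ```
+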
+ ```python
+ import torch
+ from PIL import Image
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ MID = "SVECTOR-CORPORATION/Fal-2-500M"
+ IMAGE_TOKEN_INDEX = -200  # sentinel id marking where image features are spliced in
+
+ tok = AutoTokenizer.from_pretrained(MID, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(
+     MID,
+     torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
+     device_map="auto",
+     trust_remote_code=True,
+ )
+
+ messages = [
+     {"role": "user", "content": "<image>\nDescribe this image."}
+ ]
+ rendered = tok.apply_chat_template(
+     messages, add_generation_prompt=True, tokenize=False
+ )
+
+ # Tokenize the text on each side of the <image> placeholder, then splice the
+ # image-token sentinel between the two halves.
+ pre, post = rendered.split("<image>", 1)
+ pre_ids = tok(pre, return_tensors="pt", add_special_tokens=False).input_ids
+ post_ids = tok(post, return_tensors="pt", add_special_tokens=False).input_ids
+
+ img_tok = torch.tensor([[IMAGE_TOKEN_INDEX]], dtype=pre_ids.dtype)
+ input_ids = torch.cat([pre_ids, img_tok, post_ids], dim=1).to(model.device)
+ attention_mask = torch.ones_like(input_ids)
+
+ # Preprocess the image with the vision tower's own processor.
+ img = Image.open("photo.jpg").convert("RGB")
+ px = model.get_vision_tower().image_processor(images=img, return_tensors="pt")["pixel_values"]
+ px = px.to(model.device, dtype=model.dtype)
+
+ # Generate
+ with torch.no_grad():
+     out = model.generate(
+         inputs=input_ids,
+         attention_mask=attention_mask,
+         images=px,
+         max_new_tokens=128,
+     )
+
+ print(tok.decode(out[0], skip_special_tokens=True))
+ ```
+
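+ For interactive use, the same inputs can be streamed token by token. Below is a minimal sketch with `transformers`' `TextStreamer`, reusing `tok`, `model`, `input_ids`, `attention_mask`, and `px` from above; it assumes the model's custom `generate` forwards the standard `streamer` argument:
+
+ ```python
+ from transformers import TextStreamer
+
+ # Print decoded text to stdout as tokens arrive; skip_prompt suppresses the echoed input.
+ # Assumes this remote-code model passes `streamer` through to the standard generate loop.
+ streamer = TextStreamer(tok, skip_prompt=True, skip_special_tokens=True)
+ with torch.no_grad():
+     model.generate(
+         inputs=input_ids,
+         attention_mask=attention_mask,
+         images=px,
+         max_new_tokens=128,
+         streamer=streamer,
+     )
+ ```
+
+ For visual question answering, replace the prompt text after `<image>\n` with your question; the rest of the pipeline is unchanged.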