prithivMLmods committed 53237a4 (verified, parent 0d07124): Update README.md

Files changed: README.md (+106 −1)
---
license: apache-2.0
base_model:
- prithivMLmods/Qwen3-VL-2B-Thinking-abliterated-v1
tags:
- text-generation-inference
- image-caption
- abliterated
- uncensored
- llama.cpp
datasets:
- prithivMLmods/blip3o-caption-mini-arrow
- prithivMLmods/Caption3o-Opt-v2
language:
- en
- zh
pipeline_tag: image-text-to-text
library_name: transformers
---

# **Qwen3-VisionCaption-2B-Thinking**

> **Qwen3-VisionCaption-2B-Thinking** is an abliterated v1.0 variant built upon **Qwen3-VL-2B-Thinking-abliterated-v1**, which originates from the **Qwen3-VL-2B-Thinking** architecture. It is optimized for seamless, high-precision image captioning and uncensored visual analysis, and is engineered for robust caption generation, deep reasoning, and unrestricted descriptive understanding across diverse visual and multimodal contexts.

# Key Highlights

* Abliterated and uncensored captioning for descriptive, reasoning-focused outputs.
* High-fidelity captions for general, artistic, technical, synthetic, abstract, and low-context images.
* Consistent performance across wide, tall, square, panoramic, and irregular visual formats.
* Adjustable detail control, from concise summaries to fine-grained reasoning.
* Built on the Qwen3-VL-2B architecture, with enhanced multimodal reasoning and instruction following.
* Multilingual output capability through effective prompt engineering.

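The detail-control and multilingual behaviors above are driven entirely by the prompt rather than by any model parameter. As a minimal sketch (the `build_caption_messages` helper is hypothetical, not part of this repository), the chat payload used in the Quick Start section can be parameterized like this:

```python
def build_caption_messages(image_url: str, detail: str = "concise", language: str = "en") -> list:
    """Build a Qwen-style chat message list for image captioning.

    `detail` selects a short or reasoning-rich caption prompt, and
    `language` appends an output-language instruction (illustrative only).
    """
    detail_prompts = {
        "concise": "Provide a one-sentence caption for this image.",
        "detailed": "Provide a detailed caption and step-by-step reasoning for this image.",
    }
    prompt = detail_prompts[detail]
    if language == "zh":
        prompt += " Respond in Chinese."
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_url},
                {"type": "text", "text": prompt},
            ],
        }
    ]

messages = build_caption_messages(
    "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
    detail="detailed",
    language="zh",
)
print(messages[0]["content"][1]["text"])
```

The returned list can be passed directly to `processor.apply_chat_template` as shown in the Quick Start section.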
# Datasets

This model was fine-tuned on the following datasets:

* **prithivMLmods/blip3o-caption-mini-arrow**
  A curated, high-quality dataset with multi-style captions oriented toward descriptive, reasoning-rich visual interpretation.

* **prithivMLmods/Caption3o-Opt-v2**
  An optimized caption dataset targeting precision, contextual understanding, and descriptive generalization across diverse visual categories.

* **Private and unlisted datasets** curated for uncensored and domain-specific image captioning, enabling visual understanding beyond standard filtered datasets.

The training objective focused on unconstrained descriptive captioning, particularly for edge cases and visual categories that standard captioning benchmarks typically filter out.

# Quick Start with Transformers

```python
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model and processor. device_map="auto" places weights on the
# available device(s); torch_dtype="auto" uses the checkpoint's dtype.
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen3-VisionCaption-2B-Thinking", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("prithivMLmods/Qwen3-VisionCaption-2B-Thinking")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Provide a detailed caption and reasoning for this image."},
        ],
    }
]

# Render the chat template and extract the image/video inputs.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)

inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

# Generate, then strip the prompt tokens before decoding.
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
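Qwen3 thinking-style models conventionally emit a `<think>...</think>` reasoning block before the final answer (note that decoding with `skip_special_tokens=True` may strip these tags, so verify against this checkpoint's actual output). A minimal sketch, assuming that tag convention, for separating the reasoning trace from the final caption:

```python
import re

def split_thinking(decoded: str) -> tuple:
    """Split a decoded generation into (reasoning, final_caption).

    Assumes the Qwen3-style convention of a <think>...</think> block
    preceding the answer; if no block is found, reasoning is empty.
    """
    match = re.search(r"<think>(.*?)</think>", decoded, flags=re.DOTALL)
    if match is None:
        return "", decoded.strip()
    reasoning = match.group(1).strip()
    caption = decoded[match.end():].strip()
    return reasoning, caption

# Example with a mock output string:
sample = "<think>The photo shows a woman and a dog on a beach.</think>A woman plays with her dog on a sunlit beach."
reasoning, caption = split_thinking(sample)
print(caption)  # -> A woman plays with her dog on a sunlit beach.
```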

# Intended Use

* High-precision captioning and reasoning for general-purpose or non-standard visual data.
* Uncensored analytical captioning for research, red-teaming, and moderation evaluation.
* Creative and narrative-oriented multimodal tasks.
* Understanding stylized, synthetic, or complex images with challenging aspect ratios.
105
+ # Limitations
106
+
107
+ * May produce explicit, sensitive, or offensive descriptions depending on visual content.
108
+ * Not recommended for production environments requiring strict safety controls.
109
+ * Performance may vary for heavily abstract or synthetic content.
110
+ * Output tone depends on prompt phrasing and detail level settings.