---
license: mit
language:
- en
- zh
base_model:
- RenlyH/CodeV-RL
- RenlyH/CodeV-SFT
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- text-generation-inference
- agent
---

# **CodeV-GGUF**

> The CodeV models ([CodeV-SFT](https://huggingface.co/RenlyH/CodeV-SFT) and [CodeV-RL](https://huggingface.co/RenlyH/CodeV-RL) from RenlyH) are 7B vision-language models fine-tuned from Qwen/Qwen2.5-VL-7B-Instruct for faithful visual reasoning. Training follows a two-stage pipeline: supervised fine-tuning (SFT) followed by reinforcement learning (RL) with Tool-Aware Policy Optimization (TAPO), which represents visual tools as executable Python code and assigns step-wise rewards based on how well each tool's output aligns with the question, encouraging evidence-consistent tool use without reward hacking. CodeV-SFT serves as the cold-start initialization, trained on high-quality trajectories rich in tool-invocation patterns; CodeV-RL then applies TAPO, reportedly gaining 1–3 points over zero-shot RL and 6–8 points over SFT baselines on visual-search benchmarks, with substantially higher faithful tool-use rates and strong results on multimodal reasoning and math tasks. The approach targets unfaithful reasoning in agentic VLMs, where high final accuracy can mask irrelevant tool calls, by explicitly supervising intermediate behaviors for trustworthy image-based problem-solving.
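To make the step-wise reward idea concrete, here is a deliberately toy sketch in the spirit of the description above: a tool call is rewarded when its output is relevant to the question. This is an illustration only; the actual TAPO reward model is not reproduced here, and the keyword-matching heuristic is an assumption for demonstration.

```python
def step_reward(question_keywords, tool_output):
    """Toy step-wise reward: 1.0 if the tool output mentions any
    question keyword (a stand-in for question-tool output alignment),
    else 0.0. Illustrative only -- not the real TAPO reward."""
    text = tool_output.lower()
    return 1.0 if any(k.lower() in text for k in question_keywords) else 0.0

# A relevant crop earns reward; an unrelated tool output does not.
print(step_reward(["red", "car"], "Cropped region shows a red vehicle"))
print(step_reward(["red", "car"], "Histogram of pixel intensities"))
```

A real reward would score semantic alignment between the question and the tool's output rather than literal keyword overlap, but the shape is the same: intermediate tool calls are supervised directly, not just the final answer.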

## CodeV-RL [GGUF]

| File Name | Quant Type | File Size | File Link |
| - | - | - | - |
| CodeV-RL.BF16.gguf | BF16 | 15.2 GB | [Download](https://huggingface.co/prithivMLmods/CodeV-GGUF/blob/main/CodeV-RL.BF16.gguf) |
| CodeV-RL.F16.gguf | F16 | 15.2 GB | [Download](https://huggingface.co/prithivMLmods/CodeV-GGUF/blob/main/CodeV-RL.F16.gguf) |
| CodeV-RL.Q8_0.gguf | Q8_0 | 8.1 GB | [Download](https://huggingface.co/prithivMLmods/CodeV-GGUF/blob/main/CodeV-RL.Q8_0.gguf) |
| CodeV-RL.mmproj-bf16.gguf | mmproj-bf16 | 1.36 GB | [Download](https://huggingface.co/prithivMLmods/CodeV-GGUF/blob/main/CodeV-RL.mmproj-bf16.gguf) |
| CodeV-RL.mmproj-f16.gguf | mmproj-f16 | 1.35 GB | [Download](https://huggingface.co/prithivMLmods/CodeV-GGUF/blob/main/CodeV-RL.mmproj-f16.gguf) |
| CodeV-RL.mmproj-q8_0.gguf | mmproj-q8_0 | 856 MB | [Download](https://huggingface.co/prithivMLmods/CodeV-GGUF/blob/main/CodeV-RL.mmproj-q8_0.gguf) |

## CodeV-SFT [GGUF]

| File Name | Quant Type | File Size | File Link |
| - | - | - | - |
| CodeV-SFT.BF16.gguf | BF16 | 15.2 GB | [Download](https://huggingface.co/prithivMLmods/CodeV-GGUF/blob/main/CodeV-SFT.BF16.gguf) |
| CodeV-SFT.F16.gguf | F16 | 15.2 GB | [Download](https://huggingface.co/prithivMLmods/CodeV-GGUF/blob/main/CodeV-SFT.F16.gguf) |
| CodeV-SFT.Q8_0.gguf | Q8_0 | 8.1 GB | [Download](https://huggingface.co/prithivMLmods/CodeV-GGUF/blob/main/CodeV-SFT.Q8_0.gguf) |
| CodeV-SFT.mmproj-bf16.gguf | mmproj-bf16 | 1.36 GB | [Download](https://huggingface.co/prithivMLmods/CodeV-GGUF/blob/main/CodeV-SFT.mmproj-bf16.gguf) |
| CodeV-SFT.mmproj-f16.gguf | mmproj-f16 | 1.35 GB | [Download](https://huggingface.co/prithivMLmods/CodeV-GGUF/blob/main/CodeV-SFT.mmproj-f16.gguf) |
| CodeV-SFT.mmproj-q8_0.gguf | mmproj-q8_0 | 856 MB | [Download](https://huggingface.co/prithivMLmods/CodeV-GGUF/blob/main/CodeV-SFT.mmproj-q8_0.gguf) |
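Vision GGUF checkpoints like these pair a language-model file with a matching `mmproj` projector file, and both must be passed to the runtime. As a sketch, the helper below assembles an invocation for llama.cpp's multimodal CLI (`llama-mtmd-cli`); the tool name and flags reflect current llama.cpp builds, and the image path and prompt are placeholders to adjust.

```python
import shlex

def build_mtmd_command(model, mmproj, image, prompt, n_predict=512):
    """Assemble a llama.cpp multimodal CLI invocation. The model and
    mmproj quant levels should come from the tables above; the image
    and prompt are user-supplied."""
    args = [
        "llama-mtmd-cli",
        "-m", model,          # language model GGUF
        "--mmproj", mmproj,   # vision projector GGUF
        "--image", image,     # input image
        "-p", prompt,         # text prompt
        "-n", str(n_predict), # max tokens to generate
    ]
    return shlex.join(args)

cmd = build_mtmd_command(
    "CodeV-RL.Q8_0.gguf", "CodeV-RL.mmproj-q8_0.gguf",
    "chart.png", "Describe the chart in this image.")
print(cmd)
```

Mixing quant levels between the model and the projector (e.g. Q8_0 model with an f16 mmproj) generally works, but keeping them matched as in the tables is the simplest choice.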

## Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similar-sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
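When choosing a quant, the file sizes in the tables above are a rough proxy for the RAM/VRAM the model will need, plus the mmproj file and some headroom for the KV cache and activations. A minimal sketch, using the sizes listed above and an assumed 2 GB overhead (tune this for your context length):

```python
# File sizes in GB, taken from the tables above.
QUANTS = {"BF16": 15.2, "F16": 15.2, "Q8_0": 8.1}
MMPROJ = {"bf16": 1.36, "f16": 1.35, "q8_0": 0.856}

def pick_quant(budget_gb, overhead_gb=2.0):
    """Return the largest quant (with its matching mmproj) whose
    estimated total footprint fits the memory budget, or (None, None).
    The overhead estimate for KV cache/activations is an assumption."""
    for name, size in sorted(QUANTS.items(), key=lambda kv: -kv[1]):
        total = size + MMPROJ[name.lower()] + overhead_gb
        if total <= budget_gb:
            return name, round(total, 2)
    return None, None

print(pick_quant(24))  # plenty of room: full-precision fits
print(pick_quant(12))  # tighter budget: falls back to Q8_0
```

This is only a first-order estimate; actual usage depends on context length, batch size, and how many layers are offloaded to the GPU.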