prithivMLmods committed
Commit 1f7720f · verified · 1 Parent(s): cbc1e54

Update README.md

Files changed (1): README.md +3 -1
README.md CHANGED
@@ -10,4 +10,6 @@ pipeline_tag: image-text-to-text
  library_name: transformers
  tags:
  - text-generation-inference
- ---
+ ---
+
+ > The CodeV models ([CodeV-SFT](https://huggingface.co/RenlyH/CodeV-SFT) and [CodeV-RL](https://huggingface.co/RenlyH/CodeV-RL), from RenlyH) are 7B vision-language models fine-tuned from Qwen/Qwen2.5-VL-7B-Instruct for faithful visual reasoning. They follow a two-stage pipeline: supervised fine-tuning (SFT) followed by reinforcement learning (RL) with Tool-Aware Policy Optimization (TAPO), which represents visual tools as executable Python code and assigns step-wise rewards based on question-tool output alignment, encouraging evidence-consistent tool use without reward hacking. CodeV-SFT provides the cold-start initialization with high-quality trajectories rich in tool-invocation patterns, while CodeV-RL applies TAPO to boost performance, gaining 1-3 points over zero-shot RL and 6-8 points over SFT baselines on visual search benchmarks, with substantial improvements in faithful tool-use rates and strong results on multimodal reasoning and math tasks. This approach addresses unfaithful reasoning in agentic VLMs, where high accuracy can mask irrelevant tool calls, by explicitly supervising intermediate behaviors for trustworthy image-based problem-solving.
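To make the TAPO idea in the added paragraph concrete, here is a deliberately toy sketch of the two ingredients it names: a visual tool call represented as executable Python code, and a step-wise reward that checks whether the tool's output aligns with the question. Every name and the reward rule here are hypothetical illustrations, not the CodeV/TAPO implementation.

```python
# Toy sketch (assumed names, not the CodeV implementation): a "tool call"
# is a string of Python code emitted by the model; the reward checks that
# the tool output actually carries evidence relevant to the question.

def run_tool_code(code: str, env: dict) -> dict:
    """Execute model-emitted tool code in a restricted namespace and
    return only the new names it defines (the tool's outputs)."""
    scope = dict(env)
    exec(code, {"__builtins__": {}}, scope)  # toy sandbox, NOT secure
    return {k: v for k, v in scope.items() if k not in env}

def step_reward(question_keywords: set, tool_output: dict) -> float:
    """1.0 when the tool output mentions evidence relevant to the
    question, else 0.0 (a stand-in for question-tool output alignment)."""
    text = " ".join(str(v) for v in tool_output.values()).lower()
    return 1.0 if any(k in text for k in question_keywords) else 0.0

# A fake "image" keyed by region, and a model-emitted crop-style tool call.
image = {"region_a": "a red stop sign", "region_b": "blue sky"}
tool_code = 'crop = image["region_a"]'
out = run_tool_code(tool_code, {"image": image})
reward = step_reward({"stop", "sign"}, out)  # relevant evidence -> 1.0
```

In the real pipeline the tool code would manipulate actual pixels (crops, zooms) and the alignment signal would come from a learned or rule-based judge rather than keyword overlap; the point of the sketch is only the shape of the step-wise credit assignment.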