VeNRA: LoRA Adapter

A LoRA adapter fine-tuned from Qwen/Qwen2.5-Coder-3B-Instruct for hallucination detection in retrieval-augmented generation (RAG) pipelines.

Available Adapters

Branch  Rank  Description
r96     96    Lighter, faster inference
r128    128   Higher capacity

Labels

  • Found: supported by the retrieved context
  • General: common knowledge, not dependent on the context
  • Fake: contradicts or is unsupported by the context
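The card does not specify the format in which the adapter emits its label. Assuming the model generates free text containing one of the three label words, a small parsing helper can normalize the output; the function name, matching logic, and fallback choice below are illustrative assumptions, not part of the released adapter:

```python
# Hypothetical helper: map raw model output to one of the three VeNRA labels.
# The label set comes from the card; the parsing logic is an assumption.
LABELS = ("Found", "General", "Fake")

def parse_label(generated_text: str) -> str:
    """Return the earliest VeNRA label mentioned in the model's output.

    Falls back to "Fake" (the conservative choice for hallucination
    detection) when no label is recognized -- an assumed policy.
    """
    lowered = generated_text.lower()
    hits = [(lowered.find(lab.lower()), lab) for lab in LABELS
            if lab.lower() in lowered]
    if not hits:
        return "Fake"
    return min(hits)[1]
```

For example, `parse_label("Label: Found")` returns `"Found"`, and output mentioning several labels resolves to whichever appears first.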

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

BASE_MODEL = "Qwen/Qwen2.5-Coder-3B-Instruct"

# Load the base model and tokenizer once; adapters attach on top of the base
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)

# Load r96
model_r96 = PeftModel.from_pretrained(base, "pagand/venra", revision="r96")

# Load r128
model_r128 = PeftModel.from_pretrained(base, "pagand/venra", revision="r128")

# Pin to a specific snapshot tag
model = PeftModel.from_pretrained(base, "pagand/venra", revision="r96-v1.0")
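The snippet above only loads the adapter. The card does not publish the prompt template, so the following end-to-end sketch assumes a simple instruction-style prompt that asks for one of the three labels; the wording of `build_prompt` and the decoding settings are assumptions, not the released template:

```python
# Hypothetical prompt construction for a VeNRA-style hallucination check.
# Only the label names come from the card; the instruction text is assumed.
def build_prompt(context: str, answer: str) -> str:
    return (
        "Classify the answer against the context as Found, General, or Fake.\n"
        f"Context: {context}\n"
        f"Answer: {answer}\n"
        "Label:"
    )

def classify(model, tokenizer, context: str, answer: str) -> str:
    """Run a loaded base+adapter model on one (context, answer) pair."""
    inputs = tokenizer(build_prompt(context, answer),
                       return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    # Decode only the newly generated tokens, i.e. the predicted label.
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()
```

`classify(model_r96, tokenizer, context, answer)` would then return the raw label text for one example.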

Training Details

  • Rank: 96 (r96 branch) or 128 (r128 branch)
  • Learning rate: 1e-4
  • Weight decay: 0.10
  • Training regime: WeightedLabelTrainer
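The card names WeightedLabelTrainer without defining it. A common reading is a cross-entropy loss with per-class weights to counter label imbalance; a minimal sketch of that idea, with illustrative weights that are not documented on the card:

```python
import math

# Illustrative per-class weights; the actual values (if any) used by
# WeightedLabelTrainer are not documented on the card.
CLASS_WEIGHTS = {"Found": 1.0, "General": 1.0, "Fake": 2.0}

def weighted_nll(probs: dict, true_label: str) -> float:
    """Class-weighted negative log-likelihood for one example.

    `probs` maps each label to the model's predicted probability; the
    true class's weight scales its log-loss contribution, so rarer or
    costlier labels (here, assumed to be "Fake") are penalized harder.
    """
    return -CLASS_WEIGHTS[true_label] * math.log(probs[true_label])
```

With these weights, a confident mistake on a "Fake" example costs twice as much as the same mistake on a "Found" example.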
