---
base_model: Qwen/Qwen2.5-Coder-3B-Instruct
library_name: peft
tags:
- lora
- peft
- hallucination-detection
- venra
license: apache-2.0
---

# VeNRA — LoRA Adapter

A LoRA adapter fine-tuned from `Qwen/Qwen2.5-Coder-3B-Instruct` for hallucination detection in RAG pipelines.

## Available Adapters

| Branch | Rank | Description |
|--------|------|-------------|
| `r96`  | 96   | Lighter, faster inference |
| `r128` | 128  | Higher capacity |

## Labels

- `Found` — supported by the context
- `General` — common knowledge
- `Fake` — contradicted by, or unsupported by, the context

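The card does not specify how the model's generated text is mapped back to these three labels, so the helper below is a hypothetical sketch (the function name `parse_label` and the fallback behavior are assumptions, not part of the released adapter): it returns whichever label name appears earliest in the generated output.

```python
# Hypothetical post-processing helper — not part of the released adapter.
# Assumes the model emits one of the label names somewhere in its output.
LABELS = ("Found", "General", "Fake")

def parse_label(generated: str, default: str = "Fake") -> str:
    """Return the VeNRA label mentioned earliest in the generated text."""
    lowered = generated.lower()
    # Record the position of each label name that occurs in the output.
    positions = {
        label: lowered.find(label.lower())
        for label in LABELS
        if label.lower() in lowered
    }
    if not positions:
        # Conservative fallback when no label name is found at all.
        return default
    # Pick the label that appears first in the text.
    return min(positions, key=positions.get)
```

Falling back to `Fake` when no label is found is a deliberately conservative choice for a hallucination detector; a production pipeline might instead flag the output for re-generation.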
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

BASE_MODEL = "Qwen/Qwen2.5-Coder-3B-Instruct"

# Load the tokenizer and base model before attaching an adapter
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.bfloat16)

# Load r96
model_r96 = PeftModel.from_pretrained(base, "pagand/venra", revision="r96")

# Load r128
model_r128 = PeftModel.from_pretrained(base, "pagand/venra", revision="r128")

# Pinned to a specific snapshot tag
model = PeftModel.from_pretrained(base, "pagand/venra", revision="r96-v1.0")
```

## Training Details

- Rank: 96 / 128
- Learning rate: 1e-4
- Weight decay: 0.10
- Training regime: WeightedLabelTrainer
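The `WeightedLabelTrainer` itself is not published in this card. A common motivation for such a trainer is to up-weight rarer or higher-stakes classes (here, plausibly `Fake`) in the loss. The sketch below shows what a per-label weighted cross-entropy could look like in plain PyTorch; the class weights and the function name are illustrative assumptions, not the actual training code.

```python
import torch
import torch.nn.functional as F

# Assumed per-class weights for (Found, General, Fake); the real values
# used by WeightedLabelTrainer are not documented in this card.
CLASS_WEIGHTS = torch.tensor([1.0, 1.0, 2.0])  # up-weight the "Fake" class

def weighted_label_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over the 3 VeNRA classes with per-class weights."""
    return F.cross_entropy(logits, labels, weight=CLASS_WEIGHTS)

# Toy batch of 2 examples with logits over (Found, General, Fake)
logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.2, 0.1, 3.0]])
labels = torch.tensor([0, 2])  # gold labels: Found, Fake
loss = weighted_label_loss(logits, labels)
```

In a Hugging Face `Trainer` subclass, this loss would typically replace the default inside an overridden `compute_loss`; the weighting only changes the gradient scale per class, not the model architecture.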