# BIOS-CLI fine-tuned LoRA — Qwen/Qwen3-4B-Instruct-2507 on bio-nlp-umass/bioinstruct
Trained via BIOS CLI.
## Run details
| field | value |
|---|---|
| base model | Qwen/Qwen3-4B-Instruct-2507 |
| dataset | bio-nlp-umass/bioinstruct |
| examples | 10 |
| epochs | 3 |
| final loss | 197.2096 |
| checkpoint | a0e6ebba-5cba-53cc-b740-daabc2c83030:train:0/sampler_weights/bios-sft-1777891530 |
## Inference
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and its tokenizer
base = AutoModelForCausalLM.from_pretrained('Qwen/Qwen3-4B-Instruct-2507')
tok = AutoTokenizer.from_pretrained('Qwen/Qwen3-4B-Instruct-2507')

# Attach the LoRA adapter; replace `<your-repo>` with this repo's id
model = PeftModel.from_pretrained(base, '<your-repo>')
```
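Continuing from the loading snippet above, generation then follows the standard `transformers` chat workflow. This is a sketch, not part of the original card: the prompt and `max_new_tokens` value are illustrative, and the `tok`, `model` names come from the snippet above.

```python
# Build a chat-formatted prompt with the tokenizer's chat template,
# then generate a completion from the adapted model.
messages = [{"role": "user", "content": "Summarize the main symptoms of type 2 diabetes."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt
print(tok.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Loading the 4B base model requires substantial memory; pass `device_map='auto'` (or a `torch_dtype` override) to `from_pretrained` if you need to shard or quantize.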
## Model tree for ELEUTO/bios-cli-smoke-1777891490
Base model: Qwen/Qwen3-4B-Instruct-2507