# LLaMA-2 SDTM AE QLoRA Adapter

This repository contains a QLoRA adapter fine-tuned to map raw adverse event (AE) records to SDTM AE variables with chain-of-thought reasoning.
## Base Model

`meta-llama/Llama-2-7b-hf` (gated; requires access approval on Hugging Face)
## Domain
- CDISC SDTM
- AE (Adverse Events)
- All therapeutic areas
## Training
- QLoRA (4-bit NF4)
- LoRA rank: 64
- BF16 (A100)
- CSV-based CoT dataset
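The training settings above can be sketched as a bitsandbytes/PEFT configuration. The 4-bit NF4 quantization, BF16 compute, and LoRA rank 64 come from this card; the `lora_alpha`, `lora_dropout`, and `target_modules` values are common choices for Llama-2 and are assumptions, not documented values:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization with BF16 compute, as listed above (A100)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Rank 64 is from this card; the remaining hyperparameters are
# illustrative assumptions, not the actual training configuration.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```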
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the gated base model (requires Hugging Face access approval)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the QLoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(
    base,
    "karamalanagendra/llama2-sdtm-ae-qlora",
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "Map adverse event start date to SDTM"
# Use model.device so this works wherever device_map placed the weights
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
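In practice the prompt would carry a full raw AE record rather than a bare instruction. The exact template the adapter was trained on is not documented in this card, so the field names and phrasing below are illustrative assumptions, not the actual training format:

```python
# Hypothetical raw AE record; field names are assumptions for illustration
raw_ae = {
    "verbatim_term": "severe headache",
    "start_date": "12-MAR-2023",
    "outcome": "recovered",
}

# Compose a chain-of-thought-style instruction from the record
prompt = (
    "Map the following raw adverse event record to SDTM AE variables, "
    "showing your reasoning step by step.\n"
    + "\n".join(f"{key}: {value}" for key, value in raw_ae.items())
)
print(prompt)
```

The resulting string can be passed to `tokenizer(...)` exactly as in the snippet above.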