---
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
tags:
  - drug-combination
  - relation-extraction
  - biomedical
  - llama
  - chain-of-thought
---

# RexDrug-Base

This is the SFT (Supervised Fine-Tuning) base model for RexDrug, a chain-of-thought reasoning model for biomedical drug combination relation extraction.

## Model Details

- **Base architecture:** Llama-3.1-8B-Instruct
- **Fine-tuning method:** SFT with LoRA (adapter merged into the base weights)
- **Task:** Drug combination relation extraction from biomedical literature
- **Relation types:** `POS` (beneficial), `NEG` (harmful), `COMB` (neutral/mixed), `NO_COMB` (no combination)
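A downstream pipeline typically needs to map the model's free-form chain-of-thought response back to one of the four labels above. The sketch below shows one way to do that; the exact output format of RexDrug is an assumption here (the helper name and the "last label wins" heuristic are illustrative, not part of this repository), so adjust the pattern to match the prompt template you actually use.

```python
import re

# The four relation labels defined by the task.
LABELS = {"POS", "NEG", "COMB", "NO_COMB"}

def extract_label(response: str) -> str:
    """Hypothetical helper: pull the final relation label out of a
    chain-of-thought response. Chain-of-thought text usually states its
    conclusion last, so we keep the last label token mentioned."""
    # NO_COMB is listed first in the alternation so it is not
    # partially matched as COMB.
    found = re.findall(r"\b(NO_COMB|POS|NEG|COMB)\b", response)
    return found[-1] if found else "NO_COMB"

print(extract_label("The trial reports added benefit ... Final answer: POS"))
```

Falling back to `NO_COMB` when no label is found is a conservative default; a stricter pipeline might instead flag such responses for re-generation.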

## Usage

This model is intended to be used together with RexDrug-adapter, a LoRA adapter trained via GRPO (Group Relative Policy Optimization). See the adapter repository for the full quick-start guide.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the tokenizer and the SFT base model
tokenizer = AutoTokenizer.from_pretrained("DUTIR-BioNLP/RexDrug-base")
model = AutoModelForCausalLM.from_pretrained(
    "DUTIR-BioNLP/RexDrug-base",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Apply the GRPO-trained LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, "DUTIR-BioNLP/RexDrug-adapter")
```

## License

This model is built upon Llama 3.1 and is subject to the Llama 3.1 Community License Agreement.