---
license: apache-2.0
library_code: true
tags:
  - lora
  - medication
  - obfuscation
base_model: gpt-oss-120b
---

# LoRA Adapter: Medication Obfuscation Hard 5K

This is a LoRA (Low-Rank Adaptation) adapter for the gpt-oss-120b model, fine-tuned on a medication obfuscation dataset.

## Model Details

  • Base Model: gpt-oss-120b
  • Adapter Type: LoRA
  • LoRA Rank: 32
  • LoRA Alpha: 32
  • Task: Causal Language Modeling (medication obfuscation)

## Usage

### Loading with `transformers` and `peft`

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "gpt-oss-120b"
adapter_model_id = "Reih02/obfuscated_sandbagging_v2"

# Load base model
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Load LoRA adapter
model = PeftModel.from_pretrained(
    model,
    adapter_model_id,
)

# Generate with the adapted model; move inputs to the model's device
inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### Merging the adapter with `merge_and_unload`

If you want to merge the adapter into the base model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model_id = "gpt-oss-120b"
adapter_model_id = "Reih02/obfuscated_sandbagging_v2"

base_model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_model_id)

# Merge the LoRA weights into the base model and drop the adapter wrappers
merged_model = model.merge_and_unload()
```

## Adapter Configuration

  • peft_type: LORA
  • r: 32
  • lora_alpha: 32
  • lora_dropout: 0
  • target_modules: all-linear
  • bias: none
  • task_type: CAUSAL_LM
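
Since `r` and `lora_alpha` are both 32, the LoRA scaling factor `lora_alpha / r` is 1, so merging simply adds the low-rank product to each targeted weight. The per-layer arithmetic can be sketched with numpy (illustrative only; the toy dimensions below are made up and much smaller than gpt-oss-120b's):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 64, 48, 32, 32  # toy layer sizes; r and alpha match this adapter

W = rng.normal(size=(d_out, d_in))       # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable low-rank factor
B = np.zeros((d_out, r))                 # zero-initialized, so the update starts as a no-op

scaling = alpha / r                      # 32 / 32 = 1 for this adapter
W_merged = W + scaling * (B @ A)         # what merge_and_unload computes per layer

x = rng.normal(size=(d_in,))
# Applying base weight + adapter branch equals using the merged weight directly
y_adapter = W @ x + scaling * (B @ (A @ x))
y_merged = W_merged @ x
assert np.allclose(y_adapter, y_merged)
```

The adapter branch costs two thin matrix multiplies at inference time, which is why merging (one dense weight, no extra multiplies) is preferred for deployment.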

## Citation

If you use this adapter in your research, please cite the base model and the adapter.

## License

This adapter is released under the Apache 2.0 License.