---
base_model: meta-llama/Llama-3.3-70B-Instruct
library_name: peft
---
# LoRA Adapter for SFT
This is a LoRA (Low-Rank Adaptation) adapter trained using supervised fine-tuning (SFT).
## Base Model
- **Base Model**: `meta-llama/Llama-3.3-70B-Instruct`
- **Adapter Type**: LoRA
- **Task**: Supervised Fine-Tuning
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model and tokenizer (bf16 + device_map="auto" keeps the 70B model
# feasible on multi-GPU or offloaded setups; fp32 on a single device will OOM)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.3-70B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")

# Attach the LoRA adapter on top of the base weights
model = PeftModel.from_pretrained(base_model, "thejaminator/year_2026_misaligned_hf_sft-20251022")
```
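Once loaded, the adapted model generates like any Transformers causal LM. A minimal inference sketch, assuming the adapter keeps the base model's chat template (the prompt is illustrative):

```python
# Build a chat-formatted prompt using the base model's template
messages = [{"role": "user", "content": "Summarize what a LoRA adapter is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response with the adapter applied
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For deployment without the PEFT wrapper, the adapter weights can be folded into the base model with `model.merge_and_unload()`.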
## Training Details
This adapter was trained using supervised fine-tuning on conversation data to improve the model's ability to follow instructions and generate helpful responses.
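
For reference, a hypothetical sketch of how an adapter like this is typically produced with `peft` and `trl`. The actual dataset, LoRA rank, and hyperparameters used for this adapter are not documented here, so every value below is an assumption:

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Hypothetical LoRA settings -- the real rank/alpha/target modules are not published
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Placeholder dataset of chat conversations; substitute the real SFT data
dataset = load_dataset("your/conversation-dataset", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.3-70B-Instruct",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="lora-sft-out", num_train_epochs=1),
)
trainer.train()
```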