---
base_model: TheBloke/Llama-2-7B-fp16
tags:
- LoRA
- bittensor
- gradients
license: apache-2.0
---

# Submission: InstructionTask (LoRA)

Fine-tuned on 2500 examples from an Alpaca-like dataset using LoRA (r=16) on a GPU.

- **SHA256**: `8635187ed87214f676aa2d7ef67488d7950e2aeea0b8d679b0df84ca7b921326` (see the verification snippet below)
- **Training**: 1 epoch, batch size 1, 4-bit quantization (sketched below)
- **Upload time**: 2025-07-18T21:48:06+02:00
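
## Verifying the checksum

A minimal sketch for checking the published SHA256, assuming it covers the adapter weights file. `adapter_model.safetensors` is a hypothetical path; the card does not name the file the hash refers to.

```python
import hashlib

EXPECTED = "8635187ed87214f676aa2d7ef67488d7950e2aeea0b8d679b0df84ca7b921326"

# Hypothetical path: the card does not state which file the hash covers.
with open("adapter_model.safetensors", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("checksum OK" if digest == EXPECTED else "checksum MISMATCH")
```

## Training setup (sketch)

The hyperparameters above map onto a standard `peft` + `bitsandbytes` setup. The sketch below is illustrative rather than the exact training script: `lora_alpha` and `target_modules` are not stated on the card and are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantization, as stated on the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # assumed compute dtype
)

base = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-fp16",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                                 # rank stated on the card
    lora_alpha=32,                        # assumption: not stated on the card
    target_modules=["q_proj", "v_proj"],  # assumption: common Llama attention targets
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```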

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the fp16 base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7B-fp16")
model = PeftModel.from_pretrained(base, "raniero/submission_instruction_5000_gpu")
```
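
Once the adapter is attached, generation works like any `transformers` model. The Alpaca-style prompt below is an assumption based on the "Alpaca-like dataset" note above; adjust it to match the actual training format.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheBloke/Llama-2-7B-fp16")

# Assumed Alpaca-style prompt format.
prompt = "### Instruction:\nSummarize what a LoRA adapter does.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```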