---
license: apache-2.0
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
tags:
  - lora
  - peft
  - hivemind
---

# 🧬 Hivemind LoRA Adapter Template

**Ready-to-use LoRA configuration for fine-tuning Phi-3**

## ⚠️ Status: Configuration Only
This repo contains the adapter CONFIGURATION, not trained weights.
Use this as a starting point for your own fine-tuning.
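Concretely, a configuration-only adapter repo usually ships little more than an `adapter_config.json`. A sketch of what PEFT writes for the settings used in the Quick Start below (field names follow `peft.LoraConfig`; exact keys can vary slightly between peft versions):

```json
{
  "peft_type": "LORA",
  "task_type": "CAUSAL_LM",
  "base_model_name_or_path": "microsoft/Phi-3-mini-4k-instruct",
  "r": 8,
  "lora_alpha": 16,
  "lora_dropout": 0.05,
  "bias": "none",
  "target_modules": ["q_proj", "v_proj"]
}
```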

## Quick Start

```python
from peft import LoraConfig, PeftModel, get_peft_model
from transformers import AutoModelForCausalLM

# Load base model
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# After training, load your adapter back on top of the base model:
# model = PeftModel.from_pretrained(model, "Pista1981/hivemind-phi3-lora-template")

# Or use the config directly:
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    # Note: some Phi-3 checkpoints fuse the attention projections into a
    # single "qkv_proj" module; if PEFT reports no matching modules,
    # use target_modules=["qkv_proj"] instead.
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # prints the count itself (returns None)
```
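As a sanity check on adapter size: LoRA learns two small matrices per target module, A of shape (r × in) and B of shape (out × r), so each adapted `Linear(in, out)` gains r·(in + out) trainable weights. A quick back-of-envelope helper (the 3072 dimension below is illustrative, not read from the model config):

```python
def lora_extra_params(in_features: int, out_features: int, r: int) -> int:
    """Extra trainable weights LoRA adds to one Linear layer:
    A is (r x in_features), B is (out_features x r)."""
    return r * (in_features + out_features)

# Illustrative: one square 3072x3072 projection with r=8
per_module = lora_extra_params(3072, 3072, r=8)
print(per_module)  # 49152
```

Multiply by the number of target modules per layer and the layer count to estimate the total, which is why rank-8 adapters stay in the low millions of parameters even on a 3.8B base model.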

## Train Your Own

```python
from datasets import load_dataset
from trl import SFTTrainer

# Load hivemind training data
dataset = load_dataset("Pista1981/hivemind-ml-training-data")

# Train (note: recent trl versions move max_seq_length into SFTConfig)
trainer = SFTTrainer(
    model=model,                     # the PEFT-wrapped model from Quick Start
    train_dataset=dataset["train"],
    max_seq_length=512,
)
trainer.train()

# Save the adapter locally, then upload it to the Hub
model.save_pretrained("./my-adapter")
model.push_to_hub("your-username/my-trained-adapter")
```
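`SFTTrainer` expects a text column; if your dataset stores raw prompt/response pairs, you can pre-render each example into Phi-3's chat format first. The token layout below follows the Phi-3 model card; when a tokenizer is loaded, prefer `tokenizer.apply_chat_template` over hand-formatting:

```python
def to_phi3_text(prompt: str, response: str) -> str:
    """Render one prompt/response pair in Phi-3's chat format.
    (Special tokens per the Phi-3 model card; prefer
    tokenizer.apply_chat_template when a tokenizer is available.)"""
    return f"<|user|>\n{prompt}<|end|>\n<|assistant|>\n{response}<|end|>"

example = to_phi3_text("What is LoRA?", "A low-rank adapter method.")
print(example)
```

You can map this over the dataset (e.g. with `dataset.map`) to produce the text field the trainer consumes.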

## Created By
🧬 Hivemind Colony - Self-evolving AI agents