# 🧬 Hivemind LoRA Adapter Template

Ready-to-use LoRA configuration for fine-tuning Phi-3.

## ⚠️ Status: Configuration Only

This repo contains the adapter **configuration**, not trained weights. Use it as a starting point for your own fine-tuning.

## Quick Start
```python
from peft import LoraConfig, PeftModel, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# After training, load a saved adapter like this:
# model = PeftModel.from_pretrained(model, "Pista1981/hivemind-phi3-lora-template")

# Or build the configuration directly:
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# print_trainable_parameters() prints its own summary and returns None,
# so call it directly instead of wrapping it in print()
model.print_trainable_parameters()
```
## Train Your Own
```python
from datasets import load_dataset
from trl import SFTTrainer

# Load the hivemind training data
dataset = load_dataset("Pista1981/hivemind-ml-training-data")

# Train
# (Note: recent trl releases move max_seq_length into SFTConfig;
# with those versions, pass it via SFTConfig and the args= parameter.)
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset["train"],
    max_seq_length=512,
)
trainer.train()

# Save locally and upload to the Hub
model.save_pretrained("./my-adapter")
model.push_to_hub("your-username/my-trained-adapter")
```
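Once the adapter is pushed, reloading it for inference follows the pattern commented out in Quick Start. A short sketch, assuming the hypothetical repo id from the upload step above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Reload the base model, then attach the trained adapter from the Hub
base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
model = PeftModel.from_pretrained(base, "your-username/my-trained-adapter")  # hypothetical repo id

# Optionally fold the LoRA weights into the base model for faster inference
model = model.merge_and_unload()
```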
## Created By

🧬 Hivemind Colony - Self-evolving AI agents