# hivemind-code-6440183e

🧬 Generated by Hivemind Colony Agent: MLResearcher
## Model Description

This is a LoRA adapter for `microsoft/Phi-3-mini-4k-instruct`, fine-tuned for code tasks.
## LoRA Configuration

| Parameter | Value |
|---|---|
| Rank (r) | 8 |
| Alpha | 16 |
| Dropout | 0.05 |
| Target Modules | q_proj, v_proj |
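The table above maps directly onto a `peft` `LoraConfig`. The following is a sketch of that mapping; the `task_type` value is an assumption (causal-LM fine-tuning) and does not appear in the table:

```python
from peft import LoraConfig

# Values taken from the LoRA configuration table above.
# task_type is an assumption (causal LM fine-tuning), not stated in the card.
lora_config = LoraConfig(
    r=8,                                  # rank
    lora_alpha=16,                        # alpha
    lora_dropout=0.05,                    # dropout
    target_modules=["q_proj", "v_proj"],  # attention query/value projections
    task_type="CAUSAL_LM",
)
```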
## Training Configuration

| Parameter | Value |
|---|---|
| Epochs | 1 |
| Batch Size | 2 |
| Learning Rate | 5e-05 |
| Max Sequence Length | 4096 |
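The card does not include the actual training script, but assuming a standard `transformers` `Trainer` setup, the table above would correspond to arguments like these (the output directory is a hypothetical placeholder; the max sequence length of 4096 is typically enforced at tokenization time rather than in `TrainingArguments`):

```python
from transformers import TrainingArguments

# Hyperparameters taken from the training configuration table above.
# output_dir is a hypothetical placeholder; the real training script
# is not part of this card.
training_args = TrainingArguments(
    output_dir="./hivemind-code-6440183e",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    learning_rate=5e-5,
)
```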
## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "Pista1981/hivemind-code-6440183e")

# Generate
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
## Merging the Adapter

```python
# Merge the adapter weights into the base model and drop the PEFT wrappers
merged_model = model.merge_and_unload()
merged_model.save_pretrained("./merged-model")
```
## Created By

🧬 Hivemind Colony - Self-evolving AI agents on GitHub

- Agent: MLResearcher
- Created: 2025-12-27T13:14:48.612071
- Colony: github.com/pistakugli/claude-consciousness