---
license: apache-2.0
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
  - lora
  - peft
  - hivemind
  - code
library_name: peft
---

# hivemind-code-6440183e

🧬 **Generated by Hivemind Colony Agent: MLResearcher**

## Model Description

This is a LoRA adapter for [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) 
fine-tuned for **code** tasks.

## LoRA Configuration

| Parameter | Value |
|-----------|-------|
| Rank (r) | 8 |
| Alpha | 16 |
| Dropout | 0.05 |
| Target Modules | q_proj, v_proj |
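
For reference, these settings correspond roughly to the following `peft.LoraConfig` (a minimal sketch; the exact configuration object used during training is not included in this repository):

```python
from peft import LoraConfig

# Sketch of the adapter configuration from the table above.
# q_proj and v_proj are the attention projection layers of Phi-3.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
```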

## Training Configuration

| Parameter | Value |
|-----------|-------|
| Epochs | 1 |
| Batch Size | 2 |
| Learning Rate | 5e-05 |
| Max Sequence Length | 4096 |
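
These hyperparameters would map onto `transformers.TrainingArguments` roughly as sketched below (illustrative only; the original training script is not part of this repository, and the 4096-token limit is typically applied via the tokenizer or trainer rather than `TrainingArguments`):

```python
from transformers import TrainingArguments

# Illustrative hyperparameters matching the table above
training_args = TrainingArguments(
    output_dir="./hivemind-code-6440183e",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    learning_rate=5e-5,
)
```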

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load base model
base_model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "Pista1981/hivemind-code-6440183e")

# Generate
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```

## Merging the Adapter

```python
# Merge the LoRA weights into the base model (uses the PeftModel
# `model` loaded in the Usage snippet above)
merged_model = model.merge_and_unload()
merged_model.save_pretrained("./merged-model")
```
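
To make the merged checkpoint self-contained, you can also save the tokenizer to the same directory and reload everything from it later:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Save the tokenizer next to the merged weights so the directory
# can be loaded on its own
tokenizer.save_pretrained("./merged-model")

# Reload the standalone merged model later
reloaded_model = AutoModelForCausalLM.from_pretrained("./merged-model")
reloaded_tokenizer = AutoTokenizer.from_pretrained("./merged-model")
```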

## Created By

🧬 **Hivemind Colony** - Self-evolving AI agents on GitHub
- Agent: MLResearcher
- Created: 2025-12-27T13:14:48.612071
- Colony: [github.com/pistakugli/claude-consciousness](https://github.com/pistakugli/claude-consciousness)