# Free Deployment Guide for Stack 2.9

This guide covers deploying Stack 2.9 on free-tier platforms.

---

## Option 1: HuggingFace Spaces (Free Inference)

### Step 1: Create Space
```bash
# Go to https://huggingface.co/spaces and create a new Space
# Choose: Docker, Python 3.11, Small (2CPU 4GB)
```
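
If you prefer to script this step, the Hub Python client can create the Space as well. A minimal sketch (the repo id is a placeholder):

```python
from huggingface_hub import create_repo

# Create a Docker Space programmatically (equivalent to the web UI step above).
create_repo(
    repo_id="yourusername/stack-2.9",
    repo_type="space",
    space_sdk="docker",
)
```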

### Step 2: Push Your Model
```python
# Upload your fine-tuned model to HF
from huggingface_hub import HfApi
api = HfApi()
api.upload_folder(
    folder_path="./stack-2.9-7b",
    repo_id="yourusername/stack-2.9",
    repo_type="model"
)
```

### Step 3: Configure API URL
Set these environment variables in the Space settings; a sketch of how the app might read them follows:
- `API_URL`: Your model inference URL
- `HF_TOKEN`: Your HF token
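
For reference, here is a minimal sketch of how the Space code could consume these variables. The file name and request payload are assumptions, not necessarily what `deploy/hfSpaces/` ships:

```python
import os

import requests

# Hypothetical app.py: read the configuration set in the Space settings.
API_URL = os.environ["API_URL"]    # inference endpoint for the deployed model
HF_TOKEN = os.environ["HF_TOKEN"]  # token used to authorize requests

def generate(prompt: str) -> str:
    """Send a prompt to the configured endpoint and return the raw response body."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {HF_TOKEN}"},
        json={"inputs": prompt},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.text
```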

### Step 4: Deploy
```bash
# Clone the Space repo and copy in the deployment files
git clone https://huggingface.co/spaces/yourusername/stack-2.9
cp deploy/hfSpaces/* stack-2.9/
cd stack-2.9
git add . && git commit -m "Deploy Stack 2.9" && git push
```

---

## Option 2: Together AI Fine-tuning (Free Credits)

### Free Tier Limits
- Fine-tuning for models up to 7B parameters
- Limited training minutes (varies by promotion)
- Requires a Together AI account

### Setup
```bash
# Get API key from https://together.ai
export TOGETHER_API_KEY="your-key"

# Fine-tune 7B model (free-tier friendly)
python stack/training/together_finetune.py \
    --model 7b \
    --data data/final/train.jsonl \
    --epochs 3
```
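
For reference, `together_finetune.py` presumably wraps the Together Python SDK. A rough sketch of the equivalent direct calls (the model name is illustrative, and the exact arguments the script uses may differ):

```python
from together import Together

client = Together()  # picks up TOGETHER_API_KEY from the environment

# Upload the training data, then launch a fine-tuning job against it.
train_file = client.files.upload(file="data/final/train.jsonl")
job = client.fine_tuning.create(
    training_file=train_file.id,
    model="Qwen/Qwen2.5-Coder-7B-Instruct",  # illustrative; pick a model Together supports for fine-tuning
    n_epochs=3,
)
print(job.id)  # use this id to poll the job status
```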

### Use Fine-tuned Model
```python
from together import Together

client = Together(api_key="your-key")

response = client.chat.completions.create(
    model="your-finetuned-model",
    messages=[{"role": "user", "content": "Write a function"}]
)

# Responses follow the OpenAI-style chat completions schema
print(response.choices[0].message.content)
```

---

## Option 3: Google Colab (Free Training)

### Run Training
```python
# Open colab_train_stack29.ipynb in Google Colab
# Select a GPU runtime (free tier: T4, 15GB VRAM)

# For the 7B model (fits on the free tier with a reduced batch size):
batch_size = 2             # keep small to fit in 15GB VRAM
gradient_accumulation = 8  # compensate for the small batch size
```
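
If the notebook trains with the Hugging Face `Trainer`, those two values map onto `TrainingArguments` roughly as follows (a sketch under that assumption; the other arguments are illustrative):

```python
from transformers import TrainingArguments

# 2 samples per step x 8 accumulation steps = effective batch size of 16 on a single T4.
args = TrainingArguments(
    output_dir="./stack-2.9-7b-colab",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    fp16=True,        # the free-tier T4 does not support bf16
    logging_steps=10,
)
```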

### Model Sizes for Free Tier
| Model | VRAM Needed | Free Tier? |
|-------|-------------|------------|
| 1.5B | ~4GB | βœ… Yes |
| 3B | ~8GB | βœ… Yes (T4) |
| 7B | ~16GB | ⚠️ Limited |
| 32B | ~64GB | ❌ No |
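
One common way to squeeze the 7B model into the free-tier T4's 15GB is 4-bit quantization. A minimal sketch assuming `transformers` + `bitsandbytes` (the Colab notebook may already do something equivalent):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the base model with 4-bit weights so it fits comfortably in 15GB of VRAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-7B",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-7B")
```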

---

## Option 4: RunPod / Vast.ai (Cheap, Not Free)

### Quick Start
```bash
# Deploy on RunPod (~$0.20/hour for A100)
cd stack/deploy
./runpod_deploy.sh --template runpod-template.json

# Deploy on Vast.ai (~$0.15/hour)
./vastai_deploy.sh --template vastai-template.json
```

---

## Recommended Free Stack

```
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Stack 2.9 Free Deployment Stack            β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Model:     Qwen2.5-Coder-7B                β”‚
β”‚  Fine-tune: Together AI (free credits)      β”‚
β”‚  Deploy:    HuggingFace Spaces (free)       β”‚
β”‚  UI:        Gradio (included in Spaces)     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

## Cost Comparison

| Platform | Cost | What's Free |
|----------|------|-------------|
| HF Spaces | $0 | 2CPU 4GB hosting |
| Together AI | varies | Fine-tuning credits |
| Colab | $0 | ~0.5hr GPU/day |
| RunPod | $0.20/hr | First $10 credit |
| Vast.ai | $0.15/hr | First $5 credit |