# Free Deployment Guide for Stack 2.9
This guide covers deploying Stack 2.9 on free-tier platforms.
---
## Option 1: HuggingFace Spaces (Free Inference)
### Step 1: Create Space
```bash
# Go to https://huggingface.co/spaces and create new Space
# Choose: Docker, Python 3.11, Small (2CPU 4GB)
```
### Step 2: Push Your Model
```python
# Upload your fine-tuned model to HF
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="./stack-2.9-7b",
    repo_id="yourusername/stack-2.9",
    repo_type="model",
)
```
### Step 3: Configure API URL
Set these environment variables in the Space settings:
- `API_URL`: Your model inference URL
- `HF_TOKEN`: Your HF token
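Inside the Space, the app reads these variables at startup. A minimal sketch of assembling a request from them, assuming an OpenAI-compatible `/v1/chat/completions` endpoint (the path and payload shape are assumptions; adjust to your actual inference backend):

```python
import os

def build_chat_request(prompt: str):
    """Build (url, headers, payload) for the Space's inference backend
    from the API_URL and HF_TOKEN environment variables."""
    api_url = os.environ.get("API_URL", "http://localhost:8000")
    token = os.environ.get("HF_TOKEN", "")
    url = f"{api_url.rstrip('/')}/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": "yourusername/stack-2.9",
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, payload

url, headers, payload = build_chat_request("Write a function")
```

Pass the three return values to `requests.post(url, headers=headers, json=payload)` (or Gradio's handler) to call the backend.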
### Step 4: Deploy
```bash
# Clone Space and push files
git clone https://huggingface.co/spaces/yourusername/stack-2.9
cp deploy/hfSpaces/* .
git add . && git push
```
---
## Option 2: Together AI Fine-tuning (Free Credits)
### Free Tier Limits
- Up to 7B model fine-tuning
- Limited training minutes (varies by promotion)
- Requires: Together AI account
### Setup
```bash
# Get API key from https://together.ai
export TOGETHER_API_KEY="your-key"
# Fine-tune 7B model (free-tier friendly)
python stack/training/together_finetune.py \
--model 7b \
--data data/final/train.jsonl \
--epochs 3
```
### Use Fine-tuned Model
```python
from together import Together
client = Together(api_key="your-key")
response = client.chat.completions.create(
model="your-finetuned-model",
messages=[{"role": "user", "content": "Write a function"}]
)
```
---
## Option 3: Google Colab (Free Training)
### Run Training
```python
# Open colab_train_stack29.ipynb in Google Colab
# Select GPU runtime (free tier: T4 15GB)
# For 7B model (runs on free tier):
batch_size = 2 # Reduce for 15GB VRAM
gradient_accumulation = 8
```
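The two knobs above trade VRAM for optimizer step size: gradients from several small micro-batches are accumulated before each optimizer step, so the effective batch the optimizer sees is their product. A quick sanity check:

```python
# Gradient accumulation: run several micro-batches, sum their gradients,
# then take one optimizer step -- equivalent to one larger batch.
batch_size = 2             # per-device micro-batch (fits T4 VRAM)
gradient_accumulation = 8  # micro-batches per optimizer step

effective_batch = batch_size * gradient_accumulation
print(effective_batch)
```

Keeping `effective_batch` constant while shrinking `batch_size` preserves training dynamics while cutting peak memory.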
### Model Sizes for Free Tier
| Model | VRAM Needed | Free Tier? |
|-------|-------------|------------|
| 1.5B | ~4GB | ✅ Yes |
| 3B | ~8GB | ✅ Yes (T4) |
| 7B | ~16GB | ⚠️ Limited |
| 32B | ~64GB | ❌ No |
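These figures roughly follow a 2-bytes-per-parameter (fp16/bf16) rule for the weights, plus room for activations and KV cache. A rough estimator (the 30% overhead factor is an assumption, not a measured value, so results will not exactly match the table):

```python
def fp16_vram_gb(params_billions: float, overhead: float = 0.3) -> float:
    """Rough fp16 inference VRAM estimate: 2 bytes per parameter for
    weights, plus an assumed overhead fraction for activations/KV cache."""
    weights_gb = params_billions * 2  # 1e9 params * 2 bytes ~= 2 GB
    return weights_gb * (1 + overhead)

for size in (1.5, 3, 7, 32):
    print(f"{size}B -> ~{fp16_vram_gb(size):.0f} GB")
```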
---
## Option 4: RunPod / Vast.ai (Cheap, Not Free)
### Quick Start
```bash
# Deploy on RunPod (~$0.20/hour for A100)
cd stack/deploy
./runpod_deploy.sh --template runpod-template.json
# Deploy on Vast.ai (~$0.15/hour)
./vastai_deploy.sh --template vastai-template.json
```
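At these hourly rates, even a multi-hour fine-tune costs well under a dollar. A quick back-of-the-envelope check (the 3-hour run length is an illustrative assumption, not a benchmark):

```python
def training_cost(hours: float, rate_per_hour: float) -> float:
    """Total GPU rental cost for a training run."""
    return hours * rate_per_hour

# Illustrative: an assumed 3-hour 7B fine-tune on each provider.
runpod_cost = training_cost(3, 0.20)   # A100 on RunPod
vastai_cost = training_cost(3, 0.15)   # comparable GPU on Vast.ai
print(f"RunPod: ${runpod_cost:.2f}, Vast.ai: ${vastai_cost:.2f}")
```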
---
## Recommended Free Stack
```
┌─────────────────────────────────────────────┐
│ Stack 2.9 Free Deployment Stack             │
├─────────────────────────────────────────────┤
│ Model:     Qwen2.5-Coder-7B                 │
│ Fine-tune: Together AI (free credits)       │
│ Deploy:    HuggingFace Spaces (free)        │
│ UI:        Gradio (included in Spaces)      │
└─────────────────────────────────────────────┘
```
## Cost Comparison
| Platform | Cost | What's Free |
|----------|------|-------------|
| HF Spaces | $0 | 2CPU 4GB hosting |
| Together AI | varies | Fine-tuning credits |
| Colab | $0 | ~0.5hr GPU/day |
| RunPod | $0.20/hr | First $10 credit |
| Vast.ai | $0.15/hr | First $5 credit |