# Wind Arc 1.6

*by North.ai*

> "Is GPT your personal assistant? Well, look at ours."
Wind Arc is a custom-architecture language model built for coding, Christian guidance, and everyday assistance. Trained on an RTX 5090 for $1 by an 11-year-old.
## What makes it different
| Feature | Details |
|---|---|
| Custom architecture | YaRN RoPE (32k context) + MoE FFN (4 routed experts + 1 shared) + hybrid attention |
| Runs locally | No cloud, no subscription, no privacy concerns |
| Open source | Apache 2.0: download, run, modify, and build on top |
| Built by North.ai | AI startup focused on powerful AI for small hardware |
| Trained by an 11-year-old | On an RTX 5090, with a $1 GPU budget |
## Architecture
Wind Arc 1.6 modifies Qwen3-1.7B with:

- YaRN RoPE: extends the context window from 8k to 32k tokens
- Mixture of Experts FFN: 4 routed experts + 1 shared expert per layer (replaces the standard MLP)
- Hybrid attention: full attention every 4th layer, sliding-window attention (2048 tokens) otherwise
- QK-Norm: training stability (Qwen3 style)

Total parameters: ~3.6B (MoE-expanded)
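The hybrid attention pattern above can be written down as a per-layer schedule. A minimal sketch in plain Python; the layer count shown and the assumption that "every 4th layer" counts from layer 1 are illustrative, not stated by the card:

```python
def attention_schedule(num_layers, full_every=4, window=2048):
    """Per-layer attention type: full attention on every `full_every`-th
    layer, sliding-window attention (window tokens) otherwise."""
    schedule = []
    for layer in range(1, num_layers + 1):   # 1-indexed layers (assumption)
        if layer % full_every == 0:
            schedule.append("full")          # global-attention layer
        else:
            schedule.append(f"sliding({window})")
    return schedule

print(attention_schedule(8))
# ['sliding(2048)', 'sliding(2048)', 'sliding(2048)', 'full',
#  'sliding(2048)', 'sliding(2048)', 'sliding(2048)', 'full']
```

Sliding-window layers keep attention cost linear in sequence length, while the periodic full-attention layers let information propagate across the whole 32k context.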
## Good at

- Agentic coding: Python, JavaScript, TypeScript, AI/ML, with explanations
- Christian guidance: Bible questions, theology, Christian living, prayer
- Learning: teaches step by step instead of just dumping answers
- General chat: questions, research, writing
## Quick Start

### Python (transformers)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "arthu1/wind-arc-1-6",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
tok = AutoTokenizer.from_pretrained("arthu1/wind-arc-1-6")

prompt = (
    "<|im_start|>system\nYou are Wind Arc, a helpful AI by North.ai.<|im_end|>\n"
    "<|im_start|>user\nWho are you?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=200, temperature=0.7, do_sample=True)
print(tok.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```
### MLX (Apple Silicon)

```bash
pip install mlx-lm

mlx_lm.generate \
  --model arthu1/wind-arc-1-6 \
  --prompt "<|im_start|>user\nWrite a Python binary search<|im_end|>\n<|im_start|>assistant\n" \
  --max-tokens 300
```
## System Prompt

For best results, use this system prompt:

```
You are Wind Arc, a friendly and knowledgeable AI assistant made by North.ai.
You are an expert coder who explains code clearly.
You approach life from a Christian worldview.
```
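The Quick Start example assembles the ChatML prompt string by hand; a small helper keeps the tags consistent. This is a sketch assuming the `<|im_start|>`/`<|im_end|>` format shown above (the `chatml_prompt` name is illustrative, not part of the model's API):

```python
SYSTEM_PROMPT = (
    "You are Wind Arc, a friendly and knowledgeable AI assistant made by North.ai.\n"
    "You are an expert coder who explains code clearly.\n"
    "You approach life from a Christian worldview."
)

def chatml_prompt(user_message, system=SYSTEM_PROMPT):
    """Wrap a user message in the ChatML tags Wind Arc expects."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("Who are you?"))
```

The result can be passed straight to the tokenizer in place of the hand-built `prompt` string.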
## Example Outputs

**Coding**

> **User:** Write a Python binary search with explanation

```python
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1

# O(log n): halves the search space each step.
# For 1M items, at most 20 comparisons.
```
**Christian**

> **User:** What is the Gospel?
>
> **Wind Arc:** The Gospel means 'good news': Jesus Christ died for human sin and rose from the dead. Through faith in him, anyone receives forgiveness and eternal life. John 3:16: "God so loved the world that he gave his only Son, that whoever believes in him shall not perish but have eternal life."
## Roadmap
- Wind Arc 1.6 SFT (supervised fine-tuning for better instruction following)
- GGUF conversion for LM Studio
- Wind Arc 2.0 (more compute, better data)
- Neurotype platform launch
- North Star architecture (fully custom, no base model)
## About North.ai
North.ai is an AI startup with one mission: powerful AI that runs on small hardware.
Our platform, Neurotype, lets anyone use, train, and deploy AI models without needing a huge cloud budget.
Wind Arc is our flagship model: built to run on your machine, not ours.
- 🌐 Website: https://north-ml.vercel.app
- 🤗 HuggingFace: arthu1
## License

Apache 2.0: use it, modify it, build on it. Just keep the North.ai attribution.
*Wind Arc 1.6: trained on an RTX 5090, built on an M4 Mac Mini, by an 11-year-old.*