DQN GPT v0.1
Local AI for Everyone.
DQN GPT v0.1 is a lightweight, locally runnable assistant built on Gemma 3 1B.
This release is an early identity-alignment version focused on establishing personality and behavioral consistency. It is not yet a domain-specialized or heavily fine-tuned model.
This is the foundation.
Vision
Local AI for everyone.
DQN GPT exists to prove that powerful AI does not need to live in a datacenter.
It should run:
- On laptops
- On student machines
- On modest hardware
- On personal servers
- On local networks
AI should be accessible.
Base Model
- Architecture: Gemma 3
- Parameter Count: 1B
- Context Length: 32,768 tokens (inherited from the base model)
- Format: GGUF (llama.cpp / LM Studio compatible)
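Because the weights ship as GGUF, any llama.cpp-compatible runtime should be able to load them. A minimal sketch using llama.cpp's `llama-cli`; the filename below is a placeholder, so substitute whichever quant you downloaded:

```shell
# Placeholder filename: substitute the quant you downloaded.
# -c sets the context window (the base model supports up to 32768).
llama-cli -m dqn-gpt-v0.1-Q4_K_M.gguf -c 4096 -p "Hello! Who are you?"
```

LM Studio users can skip the CLI entirely and load the same GGUF file through the app's model browser.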
Fine-Tuning Details
This version has been fine-tuned on a minimal identity-alignment dataset for testing purposes.
Focus areas:
- Assistant identity consistency
- Stable conversational tone
- Reduced drift from defined persona
This is not a performance-focused or coding-specialized release yet.
Future updates will include:
- Coding-focused fine-tuning
- Hallucination reduction
- Improved reasoning
- Broader conversational robustness
Hardware Requirements
Designed to run locally.
Recommended:
- 4GB+ RAM (Q4_K_M quant)
- 8GB+ RAM (F16 quant)
- CPU inference supported
- GPU optional
Quantization choice determines the trade-off between memory usage and accuracy. Q4_K_M is recommended for daily use, especially on lower-RAM devices. F16 is recommended for fine-tuning, experimentation, and high-accuracy use cases on more powerful machines.
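As a rough back-of-envelope check on those recommendations (assuming roughly 4.5 bits per weight for Q4_K_M and 16 bits for F16, and ignoring KV-cache and runtime overhead, which is why the RAM recommendations above are higher):

```python
def approx_weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate memory for the model weights alone, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

# 1B parameters at each quantization level
print(round(approx_weight_memory_gb(1e9, 4.5), 2))  # Q4_K_M: ~0.56 GB
print(round(approx_weight_memory_gb(1e9, 16), 2))   # F16: 2.0 GB
```

The gap between these figures and the 4 GB / 8 GB recommendations is headroom for the context cache, the runtime, and the operating system.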
Intended Use
- Local assistant
- Personal AI experimentation
- LAN-hosted AI servers
- Offline productivity
- Student AI access
The possibilities are endless. Under its permissive Apache 2.0 license, you are free to use, modify, and redistribute it.
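For the LAN-hosted use case, llama.cpp's bundled `llama-server` exposes an OpenAI-compatible HTTP endpoint. A sketch, with the filename and port as placeholders:

```shell
# Bind to all interfaces so other machines on the LAN can reach it.
llama-server -m dqn-gpt-v0.1-Q4_K_M.gguf --host 0.0.0.0 --port 8080
# Other devices can then POST to http://<server-ip>:8080/v1/chat/completions
```

Any client that speaks the OpenAI chat-completions format can point at that address instead of a cloud API.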
Limitations
- Early-stage release
- Minimal dataset fine-tune
- Not benchmark-optimized
- Not trained for specialized domains yet
This is v0.1: a foundation build.
Roadmap
- Coding-specialized variant
- Refined conversational dataset
- Larger releases
- Improved reliability
- Public evaluation benchmarks
Philosophy
AI should not be locked behind subscriptions.
AI should not require a supercomputer.
AI should run where you are.
Local AI for everyone.