🚀 Launching Vikasit-AI/Writer: High-Efficiency Tiny Models (0.5B & 0.8B)

by koolninad (Vikasit AI org)

Hello Community!

We are excited to introduce the first releases from the Vikasit-AI suite: Writer-0.5B and Writer-0.8B.

Our goal with these models is to prove that you don't need a massive GPU cluster to run capable AI. These are specifically architected for high-speed performance on commodity hardware (CPUs) and low-resource environments.

Key Highlights:
CPU Optimized: Designed to run smoothly on standard processors with minimal RAM usage.

Chat-Centric: Fine-tuned for dialogue, making them perfect for lightweight chatbots, localized assistants, and basic text processing.

Privacy & Edge Ready: Small enough to be deployed locally on IoT devices or private servers without data leaving your premises.

Ollama Support: Already available for one-command deployment via Ollama.

Getting Started:
You can find the model weights and GGUF files here:

Hugging Face: vikasit-ai

Ollama: Vikasit-AI/writer
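If you already have Ollama installed, getting a chat session running is a two-command affair. This is a minimal sketch assuming the model tag matches the Ollama link above; check `ollama list` or the registry page for the exact tag:

```shell
# Pull the model weights from the Ollama registry (tag taken from the link above),
# then start an interactive chat session. Requires a local Ollama installation.
ollama pull Vikasit-AI/writer
ollama run Vikasit-AI/writer
```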

What's Next?
This is just the beginning of our roadmap. While these tiny models focus on extreme efficiency, we are currently training much more powerful models with higher parameter counts that we plan to release in the coming days.

We would love to hear your feedback! Try them out for your local RAG pipelines or chatbot interfaces and let us know how they perform on your hardware.
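For wiring one of these models into a chatbot or RAG pipeline, the simplest route is Ollama's local HTTP API. Below is a minimal, hedged sketch: it builds a request body for Ollama's `/api/chat` endpoint (served at `http://localhost:11434` by default). The helper name `build_chat_payload` and the model tag are our illustrative assumptions; substitute the exact tag from `ollama list`:

```python
import json

def build_chat_payload(prompt: str, model: str = "Vikasit-AI/writer") -> dict:
    """Assemble the JSON body Ollama's /api/chat endpoint expects.

    The model tag is assumed from the announcement; verify it locally.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete response instead of a token stream
    }

payload = build_chat_payload("Summarize this note in one sentence.")
body = json.dumps(payload)

# To actually send the request (requires a running Ollama server):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/chat",
#     data=body.encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode("utf-8"))
```

The same payload shape works as the generation step of a RAG loop: prepend your retrieved passages to the user message before calling the endpoint.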

Let's build a more accessible AI ecosystem together! 🇮🇳🚀
