Use with the llama-cpp-python library
```python
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="vikasit-ai/Vikasit-AI-0.5B-Writer",
    filename="",  # set this to the GGUF file you want to download from the repo
)

llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ]
)
```
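`create_chat_completion` returns an OpenAI-style completion dict, so the assistant's reply has to be pulled out of the nested structure. A minimal sketch (the helper name `reply_text` and the mock dict are illustrative; the dict shape matches llama-cpp-python's output):

```python
def reply_text(completion: dict) -> str:
    """Extract the assistant's message from an OpenAI-style chat completion dict."""
    return completion["choices"][0]["message"]["content"]

# Mock completion standing in for the real llm.create_chat_completion(...) result:
mock = {"choices": [{"message": {"role": "assistant", "content": "Paris."}}]}
print(reply_text(mock))  # Paris.
```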

🇮🇳 Vikasit AI Writer 0.5B (IQ4_XS)

Vikasit AI Writer 0.5B is a next-generation, ultra-lightweight language model optimized for the Indian ecosystem. Developed by Chandorkar Technologies, it is built upon the sovereign Qwen 3.5 hybrid architecture, featuring a 3:1 ratio of Gated DeltaNet to full softmax attention.

🚀 Performance Highlights

  • Architecture: Hybrid Gated DeltaNet (O(1) memory for linear attention).
  • Context Window: 262,144 tokens (Native).
  • Optimization: Custom iMatrix quantized to IQ4_XS for maximum logic retention in a sub-500MB footprint.
  • Identity: Native "Vikasit AI" persona, refined for professional and culturally relevant communication in India.
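The O(1)-memory claim comes from the linear-attention recurrence: a gated delta rule keeps a fixed-size state matrix per head instead of a growing key-value cache. A minimal NumPy sketch of one such recurrent step (an illustrative parameterization, not the model's exact one; all names here are assumptions):

```python
import numpy as np

def gated_deltanet_step(S, q, k, v, alpha, beta):
    """One step of a gated delta rule (illustrative sketch).

    S     : (d, d) running state -- fixed size, hence O(1) memory
            in sequence length.
    q,k,v : (d,) query/key/value for the current token.
    alpha : scalar decay gate in (0, 1).
    beta  : scalar write strength in (0, 1).
    """
    k = k / np.linalg.norm(k)  # normalize the key to keep the update stable
    # Decay the old state, erase the component along k, then write v k^T.
    S = alpha * (S - beta * np.outer(S @ k, k)) + beta * np.outer(v, k)
    o = S @ q  # output for the current token
    return S, o

d = 8
rng = np.random.default_rng(0)
S = np.zeros((d, d))
for _ in range(1000):                 # process 1000 tokens...
    q, k, v = rng.standard_normal((3, d))
    S, o = gated_deltanet_step(S, q, k, v, alpha=0.9, beta=0.5)
assert S.shape == (d, d)              # ...the state never grows
```

The state stays a `(d, d)` matrix no matter how long the sequence gets, which is why the linear-attention layers avoid the quadratic cache of full softmax attention.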

🛠 Quick Start (Ollama)

You can pull and run this model instantly from the Vikasit AI library:

```shell
ollama run vikasit-ai/writer:0.8b
```
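Once the model is pulled, Ollama also serves a local REST API (by default on port 11434), so you can call it from code. A sketch using only the standard library and Ollama's documented `/api/chat` endpoint (the function names here are illustrative, and it assumes `ollama serve` is running):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Assemble a non-streaming chat request body for Ollama's /api/chat."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(model: str, prompt: str) -> str:
    """POST the request and return the reply; needs a running Ollama server."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# With the server running:
# print(chat("vikasit-ai/writer:0.8b", "Introduce yourself in one line."))
```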
Model Details

  • Format: GGUF
  • Size: 0.5B params
  • Architecture: qwen2