---
title: Indro-ai
emoji: π
colorFrom: purple
colorTo: red
sdk: static
pinned: true
license: apache-2.0
---
# Indro-ai Research

*Pushing the Boundaries of Reasoning in Small Language Models.*
## Our Mission
At Indro-ai, we are dedicated to building sovereign Small Language Models (SLMs) that don't just predict the next word but actually reason. Our focus is on high-quality, curated data and efficient architecture.
## Active Project: Indro-Veda (500M)
Indro-Veda is our flagship 500M-parameter model, trained on a balanced diet of:
- Mathematics: For logical structure.
- Code: For algorithmic reasoning.
- Educational Data: For high-quality knowledge.
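As a rough illustration of what a balanced data diet means in practice, the sketch below samples training sources according to fixed mixture weights. The specific weights are hypothetical (the real proportions are not published here); it only demonstrates the weighted-sampling idea.

```python
import random

# Hypothetical mixture weights for the three data sources
# (illustrative only; not Indro-Veda's actual proportions).
MIXTURE = {
    "mathematics": 0.3,
    "code": 0.3,
    "educational": 0.4,
}

def sample_source(rng: random.Random) -> str:
    """Pick a data source for the next training batch, weighted by MIXTURE."""
    sources = list(MIXTURE)
    weights = [MIXTURE[s] for s in sources]
    return rng.choices(sources, weights=weights, k=1)[0]

# Draw 10,000 samples and tally how often each source is chosen.
rng = random.Random(0)
counts = {s: 0 for s in MIXTURE}
for _ in range(10_000):
    counts[sample_source(rng)] += 1
```

Over many draws, the empirical frequencies converge to the mixture weights, so each batch reflects the intended balance of math, code, and educational text.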
## Dataset Stats
- **Total Tokens:** 3 billion+ (curated)
- **Framework:** PyTorch/XLA (optimized for TPU/GPU)
> *"Gyanam Paramam Balam"* (Knowledge is Supreme Power)