---
title: README
emoji: π
colorFrom: pink
colorTo: gray
sdk: static
pinned: false
---
Dharma-AI is a Brazilian AI research lab focused on building best-in-class Specialized Small Language Models (SSLMs) for high-impact, domain-specific problems. Our models are engineered to maximize performance while minimizing latency, cost, and environmental footprint by combining state-of-the-art techniques across the full model development stack, from fine-tuning strategies to inference optimization.
We believe the future of applied AI is not bigger models but smarter specialization.
## Research Focus
- SLM Specialization: fine-tuning pipelines (SFT, RLHF, GRPO, DPO), multi-stage preference optimization, and data curation strategies to push small models to their performance ceiling on domain-specific tasks (a minimal DPO sketch follows this list)
- Mechanistic Interpretability of SLMs: understanding the internal representations and circuits of small language models to inform better specialization, diagnose failure modes, and build more trustworthy systems
- GPU Utilization Optimization: maximizing throughput and minimizing memory footprint through quantization, kernel fusion, batching strategies, and efficient serving infrastructure (see the quantization sketch below)
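To give a flavor of the preference-optimization work listed above, here is a minimal, self-contained PyTorch sketch of the DPO objective. It is illustrative only, not our released code: the function name `dpo_loss` and the random tensors standing in for model log-probabilities are assumptions made for this example.

```python
# Minimal sketch of the DPO objective: reward the policy for widening the
# log-probability margin of the chosen response over the rejected one,
# measured relative to a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log p_policy(chosen | prompt), shape (batch,)
    policy_rejected_logps: torch.Tensor,  # log p_policy(rejected | prompt)
    ref_chosen_logps: torch.Tensor,       # same quantities under the frozen reference
    ref_rejected_logps: torch.Tensor,
    beta: float = 0.1,                    # temperature trading off preference fit vs. KL drift
) -> torch.Tensor:
    # Implicit rewards are the policy-to-reference log-probability ratios.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    # Bradley-Terry preference loss on the reward margin.
    return -F.logsigmoid(beta * (chosen_rewards - rejected_rewards)).mean()

# Toy usage: random log-probabilities stand in for real model outputs.
logps = [torch.randn(8) for _ in range(4)]
print(dpo_loss(*logps))
```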
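Likewise, a toy illustration of the quantization idea behind the GPU-utilization work: symmetric per-tensor int8 weight quantization, the basic mechanism behind the memory savings. The helper names `quantize_int8` and `dequantize` are hypothetical; production pipelines typically use finer-grained (per-channel or block-wise) schemes.

```python
# Symmetric per-tensor int8 quantization: map the float range
# [-max|w|, +max|w|] onto int8's [-127, 127] with a single scale factor.
import torch

def quantize_int8(w: torch.Tensor) -> tuple[torch.Tensor, float]:
    scale = w.abs().max().item() / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: float) -> torch.Tensor:
    return q.to(torch.float32) * scale

w = torch.randn(4096, 4096)              # stand-in weight matrix
q, scale = quantize_int8(w)
err = (w - dequantize(q, scale)).abs().mean().item()
print(f"int8 storage: {q.numel()} bytes vs fp32: {w.numel() * 4} bytes, "
      f"mean abs error: {err:.5f}")
```

The 4x storage reduction comes directly from replacing 4-byte floats with 1-byte integers; the printed reconstruction error shows the accuracy cost that finer-grained schemes aim to shrink.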