# ParamTatva RLM-Small-v1

*Resonance Language Model — a phonetically grounded transformer trained with insights from the Maheshwara Sutras.*
## Model Description
ParamTatva RLM is a novel language model architecture that replaces standard positional encodings with phonetic graph embeddings derived from the Maheshwara Sutras, the fourteen sutras that organize the Sanskrit sound inventory and underpin Pāṇini's grammar.
### Key Innovations
| Feature | Description |
|---|---|
| Paramtatva Graph Embeddings | Token embeddings informed by phonetic proximity in the Maheshwara Sutras |
| Pratyāhāra Attention Bias | Attention biases derived from Pāṇini's abbreviation system (pratyāhāra) |
| Mā-Bridge Normalization | Layer normalization conditioned on phonetic group structure |
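To make the "Pratyāhāra Attention Bias" idea concrete, here is a minimal sketch of one way such a bias could work: tokens are mapped to phonetic groups, and a per-group-pair bias is added to the raw attention logits before the softmax. All names (`PHONETIC_GROUP`, `GROUP_BIAS`, `biased_attention_row`) and values are illustrative assumptions, not the released implementation.

```python
import math

# Hypothetical mapping from token id to phonetic group
# (e.g. vowels=0, stops=1, nasals=2). Illustrative only.
PHONETIC_GROUP = {0: 0, 1: 1, 2: 1, 3: 2}

# A (here fixed, in practice learned) bias for each ordered pair
# of phonetic groups.
GROUP_BIAS = [
    [0.5, 0.0, -0.2],
    [0.0, 0.3,  0.1],
    [-0.2, 0.1, 0.4],
]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def biased_attention_row(query_token, key_tokens, raw_scores):
    """Add the group-pair bias to raw attention logits, then normalize."""
    gq = PHONETIC_GROUP[query_token]
    biased = [
        s + GROUP_BIAS[gq][PHONETIC_GROUP[k]]
        for s, k in zip(raw_scores, key_tokens)
    ]
    return softmax(biased)

weights = biased_attention_row(0, [0, 1, 3], [1.0, 0.8, 0.2])
print([round(w, 3) for w in weights])  # a valid distribution over keys
```

The released model presumably learns `GROUP_BIAS` jointly with the rest of the network; the sketch only shows where such a bias enters the attention computation.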
## Architecture
```
ParamtatvaTransformer (Small)
├── Embedding: ParamtatvaEmbedding (phonetic graph-aware)
├── Layers: 6 × TransformerBlock
│   ├── Attention: Multi-Head + Pratyāhāra Bias
│   ├── FFN: GELU activation
│   └── Norm: LayerNorm + Mā-Bridge
├── Final LayerNorm
└── LM Head
```
| Parameter | Value |
|---|---|
| Parameters | ~10M |
| Hidden dim | 256 |
| Layers | 6 |
| Attention heads | 8 |
| Intermediate dim | 1024 |
| Max sequence length | 1024 |
| Activation | GELU |
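As a sanity check, the table's dimensions can be reconciled with the ~10M total by a back-of-envelope count. The vocabulary size is not stated in this card, so the sketch below treats it as the unknown; everything else assumes a standard pre-norm transformer block with biased linear layers (an assumption, not the released code).

```python
# Back-of-envelope parameter count from the table's dimensions.
d_model, n_layers, d_ff = 256, 6, 1024

attn = 4 * (d_model * d_model + d_model)   # Q, K, V, output projections + biases
ffn = 2 * d_model * d_ff + d_ff + d_model  # up/down projections + biases
norms = 2 * 2 * d_model                    # two LayerNorms (weight + bias) per block
per_block = attn + ffn + norms
blocks = n_layers * per_block
final_norm = 2 * d_model

# With tied input/output embeddings, vocab * d_model fills the remainder.
remainder = 10_000_000 - blocks - final_norm
implied_vocab = remainder // d_model
print(f"per block: {per_block:,}; {n_layers} blocks: {blocks:,}")
print(f"implied (tied) vocab size for ~10M total: ~{implied_vocab:,}")
```

Under these assumptions the six blocks account for roughly 4.7M parameters, leaving room for a tied embedding table of around 20k tokens; an untied LM head would halve that implied vocabulary.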
## Intended Use
This model is released for research and academic purposes. It explores the viability of phonetically grounded language modeling using ancient linguistic frameworks.
### Recommended Uses
- Research into phonetic/linguistic priors for language models
- Studies on Sanskrit computational linguistics
- Mathematical reasoning experiments
- Exploration of alternative positional encoding schemes
### Out-of-Scope Uses
- Production/commercial applications (requires separate license)
- Safety-critical systems
- Any use that violates the license terms
## Training
The model was trained using the ParamTatva training pipeline. The training methodology, loss functions, and data curation are proprietary. Only the resulting model weights are released.
**Note:** The full Resonance Learning System (including the proprietary ResonanceEncoder) is **not** included in this release. This release contains only the standard ParamtatvaTransformer weights.
## How to Use
```python
from safetensors.torch import load_file  # requires the safetensors and torch packages

# Load the released weights
state_dict = load_file("model.safetensors")

# The model uses a custom architecture — see paramtatva_transformer.py
# for the full model class definition.
print(f"Parameters: {sum(v.numel() for v in state_dict.values()):,}")
```
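Since only the weights are released, a useful next step is to inspect the checkpoint's structure before writing the model class. The pattern below groups parameter counts by top-level module prefix; the tensor names and shapes in `shapes` are illustrative placeholders, not the actual contents of `model.safetensors` (in practice, build the mapping from the loaded `state_dict` as `{name: tuple(t.shape)}`).

```python
from collections import defaultdict
from math import prod

# Placeholder {name: shape} mapping standing in for the real state_dict.
shapes = {
    "embedding.weight": (20000, 256),
    "layers.0.attn.q_proj.weight": (256, 256),
    "layers.0.ffn.up.weight": (1024, 256),
    "final_norm.weight": (256,),
}

# Sum element counts under each top-level module prefix.
by_module = defaultdict(int)
for name, shape in shapes.items():
    by_module[name.split(".")[0]] += prod(shape)

for module, count in sorted(by_module.items()):
    print(f"{module:12s} {count:,}")
```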
## Limitations
- This is a small model (~10M parameters) — intended as a proof of concept
- The model was trained on a limited dataset
- Performance on downstream tasks has not been extensively benchmarked
- The proprietary resonance components are not included
## Citation

```bibtex
@misc{paramtatva2026rlm,
  title={ParamTatva RLM: A Phonetically-Grounded Language Model
         Based on the Maheshwara Sutras},
  author={{ParamTatva.org}},
  year={2026},
  url={https://huggingface.co/paramtatva/rlm-small-v1}
}
```
## License
This model is released under the ParamTatva Restricted Use License v1.0:
- ✅ Research and academic use
- ✅ Non-commercial applications
- ✅ Fine-tuning for research
- ❌ Commercial use (requires written agreement)
- ❌ Reverse engineering of training methodology
See LICENSE for full terms.
## Contact
- Commercial licensing: licensing@paramtatva.org
- Research inquiries: research@paramtatva.org
- Website: paramtatva.org