---
license: other
license_name: paramtatva-restricted-1.0
license_link: LICENSE
language:
  - sa
  - en
library_name: transformers
tags:
  - paramtatva
  - rlm
  - resonance
  - sanskrit
  - maheshwara-sutras
  - math
  - phonetic-grounding
pipeline_tag: text-generation
---

# ParamTatva RLM-Small-v1

**Resonance Language Model** — A phonetically grounded transformer trained with insights from the Maheshwara Sutras.

## Model Description

ParamTatva RLM is a novel language model architecture that replaces standard positional encodings with **phonetic graph embeddings** derived from the [Maheshwara Sutras](https://en.wikipedia.org/wiki/Shiva_Sutras), the foundational grammar rules of Sanskrit attributed to Pāṇini.

### Key Innovations

| Feature | Description |
|---------|-------------|
| **Paramtatva Graph Embeddings** | Token embeddings informed by phonetic proximity in the Maheshwara Sutras |
| **Pratyāhāra Attention Bias** | Attention biases derived from Pāṇini's abbreviation system (pratyāhāra) |
| **Mā-Bridge Normalization** | Layer normalization conditioned on phonetic group structure |
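
As a rough illustration of the attention-bias idea (the actual Pratyāhāra bias computation is proprietary and not part of this release), the general pattern is to add a precomputed pairwise bias to the attention scores before the softmax. Everything below, including the function name, the shapes, and the zero stand-in bias, is a hypothetical sketch rather than the repository's code:

```python
import torch

def biased_attention(q, k, v, phonetic_bias):
    """Scaled dot-product attention with an additive pairwise bias.

    q, k, v: (batch, heads, seq, head_dim); phonetic_bias: (seq, seq).
    """
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = scores + phonetic_bias          # inject the phonetic-proximity prior
    attn = torch.softmax(scores, dim=-1)
    return attn @ v

# Smoke test with a zero bias (reduces to ordinary attention).
b, h, s, d = 1, 8, 16, 32
q, k, v = (torch.randn(b, h, s, d) for _ in range(3))
bias = torch.zeros(s, s)                     # stand-in for a real phonetic bias
out = biased_attention(q, k, v, bias)
print(out.shape)  # torch.Size([1, 8, 16, 32])
```

A nonzero `phonetic_bias` would encode pairwise phonetic proximity; with a zero bias the function reduces to standard scaled dot-product attention.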

### Architecture

```
ParamtatvaTransformer (Small)
├── Embedding: ParamtatvaEmbedding (phonetic graph-aware)
├── Layers: 6 × TransformerBlock
│   ├── Attention: Multi-Head + Pratyāhāra Bias
│   ├── FFN: GELU activation
│   └── Norm: LayerNorm + Mā-Bridge
├── Final LayerNorm
└── LM Head
```

| Parameter | Value |
|-----------|-------|
| Parameters | ~10M |
| Hidden dim | 256 |
| Layers | 6 |
| Attention heads | 8 |
| Intermediate dim | 1024 |
| Max sequence length | 1024 |
| Activation | GELU |

## Intended Use

This model is released for **research and academic purposes**. It demonstrates the viability of phonetically grounded language modeling using ancient linguistic frameworks.

### Recommended Uses
- Research into phonetic/linguistic priors for language models
- Studies on Sanskrit computational linguistics
- Mathematical reasoning experiments
- Exploration of alternative positional encoding schemes

### Out-of-Scope Uses
- Production/commercial applications (requires separate license)
- Safety-critical systems
- Any use that violates the license terms

## Training

The model was trained using the ParamTatva training pipeline. The training methodology, loss functions, and data curation are proprietary. Only the resulting model weights are released.

**Note**: The full Resonance Learning System (including the proprietary ResonanceEncoder) is NOT included in this release. This release contains only the standard ParamtatvaTransformer weights.

## How to Use

```python
from safetensors.torch import load_file  # requires torch to be installed

# Load weights
state_dict = load_file("model.safetensors")

# The model uses a custom architecture — see paramtatva_transformer.py
# for the full model class definition.
print(f"Parameters: {sum(v.numel() for v in state_dict.values()):,}")
```
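
Since only the weights are released, generation requires instantiating the custom class from paramtatva_transformer.py and loading the state dict into it. The loop below is a generic greedy-decoding sketch that works with any callable returning `(batch, seq, vocab)` logits; the dummy model here stands in for the real one, which is not reproduced:

```python
import torch

@torch.no_grad()
def greedy_generate(model, input_ids, max_new_tokens=8):
    """Append the argmax token one step at a time (greedy decoding)."""
    for _ in range(max_new_tokens):
        logits = model(input_ids)                          # (batch, seq, vocab)
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_id], dim=-1)
    return input_ids

# Dummy stand-in model: random logits over a toy 32-token vocabulary.
vocab = 32
dummy = lambda ids: torch.randn(ids.shape[0], ids.shape[1], vocab)
ids = torch.zeros(1, 4, dtype=torch.long)
out = greedy_generate(dummy, ids, max_new_tokens=8)
print(out.shape)  # torch.Size([1, 12])
```

With the real model, `dummy` would be replaced by the loaded `ParamtatvaTransformer` (see paramtatva_transformer.py for the actual class name and constructor).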

## Limitations

- This is a **small** model (~10M parameters) — intended as a proof of concept
- The model was trained on a limited dataset
- Performance on downstream tasks has not been extensively benchmarked
- The proprietary resonance components are not included

## Citation

```bibtex
@misc{paramtatva2026rlm,
  title={ParamTatva RLM: A Phonetically-Grounded Language Model
         Based on the Maheshwara Sutras},
  author={ParamTatva.org},
  year={2026},
  url={https://huggingface.co/paramtatva/rlm-small-v1}
}
```

## License

This model is released under the **ParamTatva Restricted Use License v1.0**:
- ✅ Research and academic use
- ✅ Non-commercial applications
- ✅ Fine-tuning for research
- ❌ Commercial use (requires written agreement)
- ❌ Reverse engineering of training methodology

See [LICENSE](LICENSE) for full terms.

## Contact

- **Commercial licensing**: licensing@paramtatva.org
- **Research inquiries**: research@paramtatva.org
- **Website**: [paramtatva.org](https://paramtatva.org)