---
language:
- en
license: apache-2.0
tags:
- brahma-kumaris
- murli
- spiritual
- lora
- phi-2
base_model: microsoft/phi-2
datasets:
- custom
library_name: peft
---

# Murli Assistant - Fine-tuned Phi-2 with LoRA

This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) using LoRA (Low-Rank Adaptation) on Brahma Kumaris Murli data.

## Model Description

- **Base Model:** microsoft/phi-2 (2.7B parameters)
- **Fine-tuning Method:** LoRA (r=8, alpha=16)
- **Training Data:** 100+ daily murlis sourced from a MongoDB database
- **Use Case:** Spiritual guidance and murli knowledge assistant

## Training Details

- **LoRA Rank (r):** 8
- **LoRA Alpha:** 16
- **Target Modules:** q_proj, o_proj, k_proj, v_proj
- **Training Examples:** 201 formatted instructions
- **Adapter Size:** ~15 MB
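
For reference, these settings map onto a `peft` `LoraConfig` along the following lines. This is an illustrative sketch, not the actual training script; the dropout value and other defaults are assumptions not recorded in this card:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model (dtype left at default here; the training dtype is not recorded)
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)

lora_config = LoraConfig(
    r=8,                    # LoRA rank, as listed above
    lora_alpha=16,          # scaling factor, as listed above
    target_modules=["q_proj", "o_proj", "k_proj", "v_proj"],  # modules listed above
    lora_dropout=0.05,      # assumed default; not recorded in this card
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```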

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "eswarankrishnamurthy/murli-assistant-phi2-lora")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
tokenizer.pad_token = tokenizer.eos_token

# Generate a response
question = "What is the essence of today's murli?"
prompt = f"### Instruction:\n{question}\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
)

# Decode only the newly generated tokens so the prompt is not echoed back
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```
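
By default `generate` decodes greedily, which can be repetitive; passing `do_sample=True` with a moderate `temperature` (for example 0.7) often yields more natural responses. If you do not need to swap adapters at runtime, `model = model.merge_and_unload()` folds the LoRA weights into the base model for slightly faster inference.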

## Inference API

This model is also available via the Hugging Face Inference API:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/eswarankrishnamurthy/murli-assistant-phi2-lora"
HF_TOKEN = "YOUR_HF_TOKEN"  # replace with your Hugging Face access token
headers = {"Authorization": f"Bearer {HF_TOKEN}"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({"inputs": "What is soul consciousness?"})
print(output)
```
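
On a cold start the hosted endpoint returns HTTP 503 with an `estimated_time` field while the model loads. A small retry wrapper, reusing `API_URL` and `headers` from the snippet above, can handle this (a sketch, not an official client):

```python
import time

def query_with_retry(payload, max_retries=5):
    """Retry while the hosted model is still warming up (HTTP 503)."""
    for _ in range(max_retries):
        response = requests.post(API_URL, headers=headers, json=payload)
        if response.status_code == 503:
            # The API reports an estimated load time while the model warms up
            time.sleep(response.json().get("estimated_time", 20))
            continue
        return response.json()
    raise RuntimeError("Model did not load within the retry budget")
```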

## Training Information

The model was trained on diverse murli content (formatted into the instruction template sketched after this list), including:
- Daily murli essence
- Blessings and slogans
- Questions and answers
- Spiritual teachings and guidance
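
A hedged sketch of that formatting step, using the instruction template from the Usage section (the field names `question` and `answer` are hypothetical; the actual MongoDB schema is not documented here):

```python
def format_example(question: str, answer: str) -> str:
    """Render one training record into the instruction template used above."""
    return f"### Instruction:\n{question}\n\n### Response:\n{answer}"

# Hypothetical record; real field names depend on the MongoDB schema
sample = format_example("What is the essence of today's murli?", "The essence is ...")
print(sample)
```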

## Limitations

- Performs best on spiritual and murli-related queries
- A GPU is recommended for fast inference
- CPU inference is possible but noticeably slower; see the quantized-loading sketch below
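
If GPU memory is limited, the base model can be loaded in 4-bit with `bitsandbytes` before attaching the adapter. This is a sketch assuming `bitsandbytes` is installed; quantized loading is an inference-time option, not how the adapter was trained:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Quantize the 2.7B base model to 4-bit to cut memory use roughly by four
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base_model, "eswarankrishnamurthy/murli-assistant-phi2-lora")
```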

## Citation

If you use this model, please cite:

```bibtex
@misc{murli-assistant-phi2,
  author = {eswarankrishnamurthy},
  title = {Murli Assistant - Fine-tuned Phi-2},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/eswarankrishnamurthy/murli-assistant-phi2-lora}
}
```

## Contact

For questions or feedback, please open an issue on the model repository.