## 🇮🇳 GanLLM – Conversational AI for Indian Contexts
Owner: Rahul Wale (AI Developer)
## 🧠 Model Overview
GanLLM is a lightweight conversational AI model designed for empathetic and natural dialogue with an Indian cultural and linguistic flavor. It has been aligned to understand context, personas, and conversation history, making it more suitable for everyday interactions than generic LLMs.
This model is capable of handling tasks such as:
- Context-aware chit-chat
- Persona-driven roleplay conversations
- Empathetic and supportive dialogue
- Conversations grounded in Indian lifestyle and expressions
## 🔑 Key Features
- ✅ Contextualized conversational responses
- ✅ Persona alignment for more natural interactions
- ✅ Lightweight enough for consumer GPUs (T4, A10, etc.)
- ✅ Optimized for empathy and dialogue flow
## 📊 Training Data
The model was fine-tuned on conversational datasets containing:
- Persona-based dialogues
- Empathetic conversations
- Guided message tasks for natural turn-taking

(Exact dataset details are intentionally kept at a high level.)
## ⚡ Intended Use
GanLLM is suitable for:
- Chatbots for Indian users
- Interactive tutoring / learning bots
- Customer service dialogue systems
- Personal AI assistants
## 🚫 Limitations
- Not a factual knowledge model — it should not be used for reliable Q&A or critical decision-making.
- Can generate biased or culturally sensitive outputs; use responsibly.
- Performance may degrade for languages outside English and Indic contexts.
## 📌 License
This model is released for research and personal use only. For any commercial applications, please contact the owner.
## 🙌 Acknowledgements
Developed and maintained by Rahul Wale – AI Developer.
## 🚀 How to Use GanLLM
You can use GanLLM easily with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load tokenizer and model (device_map="auto" places the model on available devices)
tokenizer = AutoTokenizer.from_pretrained("Rahulwale12/ganllm")
model = AutoModelForCausalLM.from_pretrained("Rahulwale12/ganllm", device_map="auto")

# Create a text-generation pipeline.
# Note: do not pass a `device` argument here — the model is already placed
# by device_map="auto", and specifying both raises an error.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Example prompt using the Instruction/Response template
prompt = "### Instruction:\nPersona: I live in Delhi and love cricket.\nDialogue so far: Do you follow IPL?\n\n### Response:\n"

output = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.7, top_p=0.9)
print(output[0]["generated_text"])
```
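
Since the model expects the `### Instruction:` / `### Response:` template shown above, small helpers can keep prompts consistent and strip the echoed prompt from the pipeline output (which includes the prompt by default). This is a minimal sketch; the helper names and signatures are illustrative, not part of the model's API:

```python
def build_prompt(persona: str, dialogue: str) -> str:
    """Format a persona and the dialogue so far into the Instruction/Response template."""
    return (
        "### Instruction:\n"
        f"Persona: {persona}\n"
        f"Dialogue so far: {dialogue}\n\n"
        "### Response:\n"
    )

def extract_response(generated: str, prompt: str) -> str:
    """Keep only the model's continuation by removing the echoed prompt prefix."""
    if generated.startswith(prompt):
        return generated[len(prompt):].strip()
    return generated.strip()

# Build the same example prompt as above
prompt = build_prompt("I live in Delhi and love cricket.", "Do you follow IPL?")
```

You can then pass `prompt` to the pipeline and call `extract_response(output[0]["generated_text"], prompt)` to get just the reply text.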