---
language: en
library_name: transformers
pipeline_tag: text-generation
tags:
- gpt2
- distilgpt2
- knowledge-distillation
- tally
- accounting
- conversational
- business
- transformer
- language-model
- text-generation-inference
- safetensors
---
# 💼 TallyPrimeAssistant — Distilled GPT-2 Model

This is a distilled GPT-2-based conversational model fine-tuned on FAQs and navigation instructions from TallyPrime, a leading business accounting software widely used in India. The model is designed to give users quick, accurate answers about TallyPrime features such as GST, e-invoicing, and payroll.

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Jayanthram/TallyPrimeAssistant")
print(pipe("Who are you?", max_new_tokens=60)[0]["generated_text"])
```
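The pipeline returns a list with one dict per prompt; `generated_text` contains the prompt followed by the model's continuation. For lower-level control over loading and generation, see the example usage section below.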
## 🧠 Model Summary

- Teacher Model: `gpt2-large`
- Student Model: `distilgpt2`
- Distillation Method: knowledge distillation using Hugging Face's Transformers and a custom training pipeline (see the sketch after this list)
- Training Dataset: internal dataset of Q&A pairs and system navigation steps from TallyPrime documentation and usage
- Format: `safetensors` (secure and fast)
- Tokenizer: Byte-Pair Encoding (BPE), same as GPT-2
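The custom training pipeline itself is internal, so the following is only a minimal sketch of a standard knowledge-distillation step for this teacher/student pair: the student is trained on a weighted sum of the usual language-modeling loss and a temperature-scaled KL divergence against the teacher's logits. The `temperature` and `alpha` values are illustrative assumptions, not the model's actual hyperparameters.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM

# gpt2-large (teacher) and distilgpt2 (student) share the same GPT-2 BPE
# vocabulary, so their logits can be compared position by position.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
teacher = AutoModelForCausalLM.from_pretrained("gpt2-large").eval()
student = AutoModelForCausalLM.from_pretrained("distilgpt2")

def distillation_loss(text, temperature=2.0, alpha=0.5):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():                       # teacher only supplies soft targets
        teacher_logits = teacher(ids).logits
    out = student(ids, labels=ids)              # hard-label LM cross-entropy
    soft_loss = F.kl_div(
        F.log_softmax(out.logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2                        # standard KD temperature scaling
    return alpha * soft_loss + (1 - alpha) * out.loss

loss = distillation_loss("How to enable GST in Tally Prime?")
loss.backward()                                 # gradients flow into the student only
```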
## 🚀 Example Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Jayanthram/TallyPrimeAssistant")
tokenizer = AutoTokenizer.from_pretrained("Jayanthram/TallyPrimeAssistant")

prompt = "How to enable GST in Tally Prime?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
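`model.generate` defaults to greedy decoding, which can get repetitive with GPT-2-class models; sampling often reads better. The parameter values below are illustrative, not tuned for this model:

```python
output = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,      # sample from the distribution instead of taking the argmax
    temperature=0.7,     # sharpen the next-token distribution
    top_p=0.9,           # nucleus sampling cutoff
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```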
This is a gated model. Before downloading, log in with a Hugging Face token that has gated-access permission:

```bash
hf auth login
```
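You can also authenticate from Python; the token string below is a placeholder for your own token:

```python
from huggingface_hub import login

login(token="hf_...")  # placeholder token; or set the HF_TOKEN environment variable
```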