Sanskrit D3PM Paraphrase Model

Converts Roman/IAST Sanskrit input to Devanagari output using a D3PM (Discrete Denoising Diffusion Probabilistic Model) with cross-attention conditioning on the source text.

Files Included

  • best_model.pt — trained checkpoint
  • config.py — runtime config
  • inference.py — model loading + generation loop
  • inference_api.py — simple Python API (predict)
  • handler.py — Hugging Face Endpoint handler (see the local test sketch after this list)
  • model/, diffusion/ — architecture modules
  • sanskrit_src_tokenizer.json, sanskrit_tgt_tokenizer.json — tokenizers
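
To sanity-check handler.py before deploying, you can invoke it directly. This is a minimal sketch that assumes the file follows the standard Inference Endpoints custom-handler interface (an EndpointHandler class taking a repo path and a dict with inputs/parameters); check the file for the exact signature.

from handler import EndpointHandler

# Assumes the standard custom-handler interface: __init__(path) + __call__(data).
handler = EndpointHandler(path=".")
result = handler({"inputs": "dharmo rakṣati rakṣitaḥ", "parameters": {"num_steps": 64}})
print(result)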

Quick Local Test

from inference_api import predict
print(predict("dharmo rakṣati rakṣitaḥ")["output"])
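
The same API works for multiple inputs. A simple loop sketch, assuming predict takes a single string and returns a dict with an "output" key as above:

from inference_api import predict

sentences = [
    "dharmo rakṣati rakṣitaḥ",
    "satyam eva jayate",
]
for s in sentences:
    # predict returns a dict; "output" holds the Devanagari text
    print(s, "->", predict(s)["output"])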

Transformer-Style Usage (Custom Runtime)

This checkpoint is a custom D3PM architecture saved as a raw PyTorch .pt file, not a native transformers AutoModel format. Use it in a transformers-like way via the provided runtime:

import torch
from config import CONFIG
from inference import load_model, run_inference, _decode_clean
from model.tokenizer import SanskritSourceTokenizer, SanskritTargetTokenizer

# Pick a device and load the checkpoint plus its resolved config.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, cfg = load_model("best_model.pt", CONFIG, device)

# Tokenizers for the Roman/IAST source and Devanagari target (these classes
# presumably read the sanskrit_*_tokenizer.json vocab files shipped with the repo).
src_tok = SanskritSourceTokenizer(vocab_size=16000, max_len=cfg["model"]["max_seq_len"])
tgt_tok = SanskritTargetTokenizer(vocab_size=16000, max_len=cfg["model"]["max_seq_len"])

# Encode the source text, run the diffusion sampler, and decode the result.
text = "dharmo rakṣati rakṣitaḥ"
ids = torch.tensor([src_tok.encode(text)], dtype=torch.long, device=device)
out = run_inference(model, ids, cfg)
print(_decode_clean(tgt_tok, out[0].tolist()))

If you need full transformers compatibility (AutoModel.from_pretrained), export weights to a Hugging Face Transformers model format first.
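
Until such an export exists, you can still fetch the checkpoint from the Hub programmatically and feed it to the runtime above. A sketch using huggingface_hub; the repo id is a placeholder, and the Python modules (config.py, inference.py, model/, diffusion/) plus tokenizer vocabs are assumed to be available locally, e.g. from a git clone of this repo.

import torch
from huggingface_hub import hf_hub_download

from config import CONFIG
from inference import load_model

# Fetch just the checkpoint from the Hub into the local cache.
ckpt_path = hf_hub_download(
    repo_id="<your-username>/sanskrit-d3pm",  # placeholder repo id
    filename="best_model.pt",
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, cfg = load_model(ckpt_path, CONFIG, device)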

Endpoint Payload

{
  "inputs": "yadā mano nivarteta viṣayebhyaḥ svabhāvataḥ",
  "parameters": {
    "temperature": 0.7,
    "top_k": 40,
    "repetition_penalty": 1.2,
    "diversity_penalty": 0.0,
    "num_steps": 64,
    "clean_output": true
  }
}
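
A minimal client sketch for a deployed Inference Endpoint. The URL is a placeholder (copy the real one from the Endpoint's UI), the token is read from the environment, and num_steps is presumably the number of diffusion denoising steps.

import os
import requests

# Placeholder endpoint URL; replace with your deployed Endpoint's URL.
ENDPOINT_URL = "https://<endpoint-id>.endpoints.huggingface.cloud"

payload = {
    "inputs": "yadā mano nivarteta viṣayebhyaḥ svabhāvataḥ",
    "parameters": {"temperature": 0.7, "top_k": 40, "num_steps": 64, "clean_output": True},
}
resp = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {os.environ['HF_TOKEN']}"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json())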

Push This Folder To Model Hub

huggingface-cli login
huggingface-cli repo create <your-username>/sanskrit-d3pm --type model
cd hf_model_repo
git init
git lfs install
git lfs track "*.pt"
git remote add origin https://huggingface.co/<your-username>/sanskrit-d3pm
git add .
git commit -m "Initial model release"
git branch -M main
git push -u origin main
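
Alternatively, the same upload can be done from Python with huggingface_hub (a sketch; the repo id is a placeholder):

from huggingface_hub import HfApi

api = HfApi()
# Create the repo if it doesn't exist yet, then upload the folder contents.
api.create_repo("<your-username>/sanskrit-d3pm", repo_type="model", exist_ok=True)
api.upload_folder(
    folder_path="hf_model_repo",
    repo_id="<your-username>/sanskrit-d3pm",
    repo_type="model",
    commit_message="Initial model release",
)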