
# transliterated_nmt

This repository contains the Banglanmt_bn_en model, finetuned on the BanglaTLit dataset for the downstream task of converting Bangla text into transliterated Bangla.

## Uses
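The snippet below relies on `transformers`, `torch`, and the `normalize` helper distributed with the csebuetnlp normalizer repository (commonly used alongside BanglaNMT checkpoints). A minimal environment setup might look like this; the exact package set is an assumption, adjust as needed:

```shell
# Core dependencies for loading the checkpoint and its sentencepiece tokenizer
pip install transformers torch sentencepiece
# Text normalizer providing the normalize() function used below
pip install git+https://github.com/csebuetnlp/normalizer
```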

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from normalizer import normalize  # csebuetnlp text normalizer
import torch

model_name = "FabihaHaider/transliterated_nmt"

torch_device = "cuda" if torch.cuda.is_available() else "cpu"
print(torch_device)

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(torch_device)


def predict_output(input_sentence):
    # Tokenize and move the input to the same device as the model.
    input_ids = tokenizer(input_sentence, return_tensors="pt").input_ids.to(torch_device)
    generated_tokens = model.generate(input_ids)
    # Decode the first (and only) sequence, dropping special tokens.
    decoded = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]
    return normalize(decoded)


predict_output("আমি ভাত খাই প্রায় প্রতিদিন।")
```

## Finetuning Dataset

BanglaTLit

Model size: 0.2B parameters · Tensor type: F32 · Format: Safetensors
