---
license: apache-2.0
---
# Nikhitha Telugu Dataset Model

- **Model ID:** Nikitha-logics/Nikhitha_telugu_dataset_model
- **Model Type:** ALBERT-based language model
- **License:** Apache-2.0
## Model Overview

The Nikhitha Telugu Dataset Model is an ALBERT-based language model trained on a Telugu language dataset. ALBERT (A Lite BERT) is a transformer-based model for natural language processing tasks that reduces parameter count relative to BERT through embedding factorization and cross-layer parameter sharing, trading a small amount of capacity for efficiency.
## Model Details

- **Parameters:** 33.2 million
- **Tensor Type:** Float32 (F32)
- **Format:** Safetensors
## Usage

To use this model in your projects, load it with the Hugging Face Transformers library:

```python
from transformers import AlbertTokenizer, AlbertForMaskedLM

# Load the tokenizer
tokenizer = AlbertTokenizer.from_pretrained("Nikitha-logics/Nikhitha_telugu_dataset_model")

# Load the model
model = AlbertForMaskedLM.from_pretrained("Nikitha-logics/Nikhitha_telugu_dataset_model")
```
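Once the tokenizer and model are loaded, masked-token prediction comes down to reading the model's output logits at the `[MASK]` position and picking the highest-probability vocabulary entries. The sketch below shows only that post-processing step on a dummy logits vector over a toy five-word Telugu vocabulary, so it runs without downloading the model; in real use you would take one row of `model(**inputs).logits` instead, and the words and scores here are illustrative, not actual model output.

```python
import math

# Toy vocabulary and dummy logits standing in for one row of
# model(**inputs).logits at the [MASK] position.
vocab = ["నేను", "తెలుగు", "భాష", "మోడల్", "డేటా"]
logits = [1.2, 3.4, 0.5, 2.1, -0.7]

# Softmax turns raw logits into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Take the top-3 candidate fillers for the masked position.
top = sorted(zip(vocab, probs), key=lambda pair: pair[1], reverse=True)[:3]
for word, p in top:
    print(f"{word}: {p:.3f}")
```

The same ranking logic is what `transformers.pipeline("fill-mask", ...)` performs internally, so for quick experiments the pipeline API is a convenient alternative to calling the model directly.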