---
title: NIELIT Ropar Assistant
emoji: 🎓
colorFrom: red
colorTo: purple
sdk: gradio
sdk_version: 6.2.0
app_file: app.py
pinned: true
suggested_hardware: zero-a10g
license: mit
short_description: A fine-tuned Llama-3.2-1B for NIELIT Ropar (CPU Optimized).
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/6474405f90330355db146c76/fxsvfNs1T9jIyZkxWMCnd.png
---

# 🎓 NIELIT Ropar Assistant


A domain-adapted Small Language Model (SLM) tailored for NIELIT Ropar, optimized for edge inference on CPU hardware.

NIELIT Assistant Demo

## 🚀 Overview

General-purpose Large Language Models (LLMs) often lack the granular, domain-specific context required for specialized organizational tasks. For institutions like NIELIT Ropar, relying on generic models leads to hallucinations about institute-specific facts such as fee structures and faculty details.

This project solves that challenge by engineering a domain-adapted SLM. By fine-tuning Llama-3.2-1B and quantizing it for CPU inference, we created a lightweight, privacy-focused assistant that delivers accurate, verifiable answers on fees, faculty, and coursework without relying on expensive external APIs.

βš™οΈ Tech Stack

- 🤖 **Base Model:** Meta Llama-3.2-1B (Instruct)
- ⚡ **Fine-Tuning:** Supervised Fine-Tuning (SFT) using Unsloth (LoRA adapters, ~2x training speedup).
- 📉 **Quantization:** Converted to GGUF format (`q4_k_m`) via llama.cpp for optimized CPU performance.
- 🌐 **Deployment:** Hosted on Hugging Face Spaces via Gradio + llama-cpp-python.
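To give an intuition for what 4-bit quantization (the idea behind `q4_k_m`) does to the weights, here is a minimal, purely illustrative sketch of symmetric per-block 4-bit quantization. The real GGUF k-quant scheme is more elaborate (super-blocks, per-block minima, mixed precision), so treat this only as a conceptual toy:

```python
import numpy as np

def quantize_4bit_block(weights: np.ndarray, block_size: int = 32):
    """Toy symmetric 4-bit block quantization (illustrative only)."""
    blocks = weights.reshape(-1, block_size)
    # One scale per block: map the largest magnitude onto the int range [-7, 7]
    scales = np.abs(blocks).max(axis=1, keepdims=True) / 7.0
    scales[scales == 0] = 1.0
    q = np.clip(np.round(blocks / scales), -7, 7).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from 4-bit codes and block scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=128).astype(np.float32)
q, s = quantize_4bit_block(w)
w_hat = dequantize(q, s)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```

The trade-off is the same one the project exploits: each weight shrinks from 32 bits to roughly 4, at the cost of a small, bounded rounding error per block, which is what makes a 1B-parameter model practical on a laptop CPU.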

## 🔗 Quick Links

| Resource | Link |
| --- | --- |
| 🔴 Live Demo | Hugging Face Space |
| 💻 GitHub Repo | lovnishverma/NIELIT-Assistant |
| 📦 Model Weights | Hugging Face Model Repo |
| 📓 Training Code | Google Colab Notebook |
| 📝 Technical Blog | Medium Article |

πŸ› οΈ Installation (Local Inference)

You can run this model locally on your laptop (CPU-only) using Python.

### 1. Install Dependencies

```bash
pip install llama-cpp-python huggingface_hub
```

### 2. Run Inference Script

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the GGUF model automatically (cached after the first run)
model_path = hf_hub_download(
    repo_id="LovnishVerma/nielit-ropar-GGUF",
    filename="nielit-ropar.q4_k_m.gguf"
)

# Initialize the model on CPU; n_ctx sets the context window in tokens
llm = Llama(model_path=model_path, n_ctx=2048, verbose=False)

# Simple completion-style chat
output = llm(
    "Q: What courses are offered at NIELIT Ropar? A:",
    max_tokens=128,
    stop=["Q:", "\n"],
    echo=True
)
print(output["choices"][0]["text"])
```
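The `Q:`/`A:` prompt above is the simplest completion-style approach. Since the base model is Llama-3.2 Instruct, you can alternatively format requests with Llama 3's chat template. As a sketch (special-token names follow the published Llama 3 prompt format; this plain string builder is illustrative and may not match the exact template embedded in this GGUF):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Build a Llama 3-style chat prompt string (illustrative sketch)."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are the NIELIT Ropar assistant.",
    "What courses are offered at NIELIT Ropar?",
)
# Pass `prompt` to llm(...) with stop=["<|eot_id|>"] instead of the Q:/A: stops.
print(prompt)
```

In practice, llama-cpp-python can apply the template for you via `llm.create_chat_completion(messages=[...])` when the GGUF metadata ships a chat template, which avoids hand-building strings like this.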

πŸ‘¨β€πŸ’» Author

Developed by **Lovnish Verma**, Project Engineer at NIELIT Ropar.

#GenerativeAI #LLMOps #EdgeAI #Llama3 #FineTuning #Python #Engineering #NIELIT