---
title: NIELIT Ropar Assistant
emoji: π
colorFrom: red
colorTo: purple
sdk: gradio
sdk_version: 6.2.0
app_file: app.py
pinned: true
suggested_hardware: zero-a10g
license: mit
short_description: A fine-tuned Llama-3.2-1B for NIELIT Ropar (CPU Optimized).
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/6474405f90330355db146c76/fxsvfNs1T9jIyZkxWMCnd.png
---
# NIELIT Ropar Assistant
A domain-adapted Small Language Model (SLM) tailored for NIELIT Ropar, optimized for edge inference on CPU hardware.
## Overview
General-purpose Large Language Models (LLMs) often lack the granular, domain-specific context required for specialized organizational tasks. For institutions like NIELIT Ropar, relying on generic models leads to hallucinations regarding specific datasets (e.g., fee structures, faculty details).
This project solves that challenge by engineering a domain-adapted SLM. By fine-tuning Llama-3.2-1B and quantizing it for CPU inference, we created a lightweight, privacy-focused assistant that delivers accurate, verifiable answers on fees, faculty, and coursework without relying on expensive external APIs.
## Tech Stack
- **Base Model:** Meta Llama-3.2-1B (Instruct)
- **Fine-Tuning:** Supervised Fine-Tuning (SFT) using Unsloth (LoRA adapters, 2x speedup)
- **Quantization:** Converted to GGUF format (`q4_k_m`) via `llama.cpp` for optimized CPU performance
- **Deployment:** Hosted on Hugging Face Spaces via Gradio + `llama-cpp-python`
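The quantization step above can be sketched with `llama.cpp`'s standard tooling. The commands below are an illustrative outline, not the exact pipeline used for this project: the input/output paths are assumptions, while `convert_hf_to_gguf.py` and `llama-quantize` are the conversion script and binary shipped with current `llama.cpp` checkouts.

```shell
# Convert the merged fine-tuned Hugging Face checkpoint to a float16 GGUF file
# (paths are illustrative; adjust to your local layout)
python llama.cpp/convert_hf_to_gguf.py ./nielit-ropar-merged \
    --outfile nielit-ropar.f16.gguf --outtype f16

# Quantize to q4_k_m (4-bit, "medium" quality mix) for CPU inference
./llama.cpp/build/bin/llama-quantize \
    nielit-ropar.f16.gguf nielit-ropar.q4_k_m.gguf q4_k_m
```

The `q4_k_m` scheme trades a small quality loss for roughly a 4x reduction in memory footprint versus float16, which is what makes a 1B-parameter model practical on laptop CPUs.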
## Quick Links
| Resource | Link |
|---|---|
| Live Demo | Hugging Face Space |
| GitHub Repo | lovnishverma/NIELIT-Assistant |
| Model Weights | Hugging Face Model Repo |
| Training Code | Google Colab Notebook |
| Technical Blog | Medium Article |
## Installation (Local Inference)
You can run this model locally on your laptop (CPU-only) using Python.
### 1. Install Dependencies

```bash
pip install llama-cpp-python huggingface_hub
```
### 2. Run Inference Script

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the GGUF model automatically
model_path = hf_hub_download(
    repo_id="LovnishVerma/nielit-ropar-GGUF",
    filename="nielit-ropar.q4_k_m.gguf"
)

# Initialize the model (CPU)
llm = Llama(model_path=model_path, n_ctx=2048, verbose=False)

# Chat
output = llm(
    "Q: What courses are offered at NIELIT Ropar? A:",
    max_tokens=128,
    stop=["Q:", "\n"],
    echo=True
)
print(output['choices'][0]['text'])
```
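The script above uses a raw `Q: … A:` completion prompt. Since the base model is the Instruct variant of Llama-3.2-1B, wrapping the question in the Llama 3 chat template (or calling `llm.create_chat_completion(...)`, which lets `llama-cpp-python` apply the template embedded in the GGUF file) will often produce cleaner answers. The helper below is an illustrative sketch of that template format, not code from this project:

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Format a single-turn prompt using the Llama 3 chat template.

    The special tokens below are the ones Meta defines for the
    Llama 3.x family; the tokenizer's built-in chat template
    produces the same layout.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are the NIELIT Ropar assistant. Answer factually.",
    "What courses are offered at NIELIT Ropar?",
)
# Pass `prompt` to llm(...) with stop=["<|eot_id|>"] instead of the raw Q/A form.
```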
## Author
Developed by **Lovnish Verma**, Project Engineer at NIELIT Ropar.