Ping Technical Assistant Hugging Face πŸ€— LoRA

For more information, please see the original model card.

Quickstart

Install dependencies

pip install torch transformers peft accelerate

Generate responses

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# 1. Define the System Prompt (CRITICAL)
system_prompt = """[SEE ORIGINAL MODEL CARD]"""

# 2. Load Base Model
base_model_id = "Qwen/Qwen3-1.7B"
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# 3. Load LoRA Adapter
adapter_id = "dzur658/ping-technical-assistant-LoRA-001-HF" # Replace with your repo
model = PeftModel.from_pretrained(model, adapter_id)

# 4. Prepare Input
# ---
# NOTE: We fake messages to load database context into the model

# replace with a real device from the Ping Knowledge Base
device_str = "OnePlus 7"

fake_user_prompt = f"[System Command]: Load reference for {device_str}"

# replace with the accompanying update guide from the Ping Knowledge Base
knowledge_base_doc = "[REPLACE ME]"

# first assistant turn should be formatted like this
fake_assistant = (
    "<think>\n"
    f"Trigger: System Command received (\"Load reference for {device_str}\").\n"
    f"Action: Retrieve \"{device_str} Update Guide\" from database.\n"
    "Plan: Output the full update instructions so the user has the context available immediately.\n"
    "</think>"
) + knowledge_base_doc

# ---

# question to the model regarding the device
prompt = "Wait, so if my OnePlus 7 is no longer receiving security updates does that mean I need to upgrade immediately?"

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": fake_user_prompt},
    {"role": "assistant", "content": fake_assistant},
    {"role": "user", "content": prompt}
]

# 5. Generate
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=8192, # long enough to fit the knowledge base doc plus reasoning tokens
    do_sample=False      # greedy decoding for deterministic, logic-focused output
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
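Because the model emits its reasoning inside a `<think>...</think>` block before the answer, you will usually want to separate the two in the decoded text. A minimal sketch, assuming the completion contains at most one such block (the helper name `split_think` is illustrative, not part of the model card):

```python
def split_think(decoded: str) -> tuple[str, str]:
    """Return (reasoning, answer) from a decoded completion.

    If no <think>...</think> block is present, reasoning is empty
    and the whole text is treated as the answer.
    """
    start, end = "<think>", "</think>"
    if start in decoded and end in decoded:
        before, _, rest = decoded.partition(start)
        reasoning, _, after = rest.partition(end)
        return reasoning.strip(), (before + after).strip()
    return "", decoded.strip()
```

For example, `split_think(tokenizer.decode(outputs[0], skip_special_tokens=True))` would let you log the reasoning trace while showing only the answer to the end user.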