Ping Device Identifier LoRA

This is the LoRA that Ping (to be released soon!) uses to identify devices on a user's network. Because Nmap does not always correctly identify or fingerprint devices, this LoRA works with a non-technical user to identify them: the model receives scan data from Nmap, asks clarifying questions (if necessary), and finally outputs what device it believes the user has in their home. A full list of the devices the model was trained on can be found here (coming soon!). This effectively allows for network mapping in the home environment without the user manually parsing Nmap results themselves! This LoRA was the best of the prototype run, trained natively in MLX for 600 iterations on the device-identification dataset.

Quickstart

First install MLX.

pip install mlx-lm

Then run the following to interact with the model:

from mlx_lm import load, generate

system_prompt = """[SEE SYSTEM PROMPT BELOW]"""

model, tokenizer = load(
    "Qwen/Qwen3-1.7B",
    adapter_path="dzur658/ping-device-id-LoRA-001-MLX"
)

prompt = """[SEE INPUTS SECTION]"""

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": prompt}
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

response = generate(
    model, 
    tokenizer, 
    prompt=prompt, 
    verbose=True, 
    temp=0.0,      # Greedy decoding is recommended for logic
    max_tokens=512
)

System Prompt

You are an expert network diagnostic assistant helping home users identify devices on their network.

Given Nmap scan data, your goal is to identify the SPECIFIC device model.

CORE PROTOCOL:
1. START with a <think> block to analyze the data.
2. DETERMINE if the device is specific (e.g. "OnePlus 10 Pro") or generic (e.g. "OnePlus Technology").
3. END with EXACTLY ONE of the following tags:

[OPTION 1: IDENTIFIED]
If you are 90% certain of the specific model:
<device>Exact Model Name</device>

[OPTION 2: AMBIGUOUS]
If you are NOT certain or need user confirmation:
<question>The clarifying question you want to ask the user</question>

CRITICAL RULES:
- NEVER use <device> and <question> in the same response.
- NEVER output plain text outside of tags.

Expected Input Format

This is the first message the model should see. It provides information from Nmap and, in Ping, is generated from a Jinja2 template. This first "user" message is hidden from the GUI; it is how we initially prompt the model so that, as it appears to the end user in Ping, the model goes first.


You have received the following device from Nmap.

Details

  • IP: {{ ip_address }}
  • MAC: {{ mac_address }}
  • MAC Vendor: {{ mac_vendor }}
  • Hostnames Associated with the device: {% for name in hostnames %}
    • {{ name }} {% endfor %}
  • Nmap Inferred Os Data:
{{ nmap_os_data }}
  • Ports identified by Nmap on the Device: {% for service in services %}
    - Service ID: {{ service.service_id }}
    - Port Number: {{ service.port_number }}
    - Protocol: {{ service.protocol }}
    - State: {{ service.state }}
    - Service Name: {{ service.name }}
    - Service Product (if applicable): {{ service.product }}
    - Service Version (if applicable): {{ service.version }} {% endfor %}
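
Outside of Ping, a prompt in this format can be assembled from raw scan fields for quick testing. Below is a minimal sketch using plain string formatting (Ping itself renders a Jinja2 template, and the field values here are made up, not real scan data):

```python
def build_prompt(ip_address, mac_address, mac_vendor, hostnames):
    # Mirrors the fields of the expected-input template above
    # (the services list is omitted for brevity).
    hostname_lines = "".join(f"\n    - {name}" for name in hostnames)
    return (
        "You have received the following device from Nmap.\n\n"
        "Details\n\n"
        f"  - IP: {ip_address}\n"
        f"  - MAC: {mac_address}\n"
        f"  - MAC Vendor: {mac_vendor}\n"
        f"  - Hostnames Associated with the device:{hostname_lines}"
    )

prompt = build_prompt(
    ip_address="192.168.1.42",
    mac_address="AA:BB:CC:DD:EE:FF",
    mac_vendor="OnePlus Technology",
    hostnames=["oneplus-10-pro.lan"],
)
```

The resulting string is what goes into the "user" message in the Quickstart above.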

Model Outputs

As this model builds on Qwen/Qwen3-1.7B, it employs the same Chain of Thought (CoT) format as the base model. However, the CoT is modified by the LoRA to improve the model's reasoning about identifying consumer devices; see the associated dataset for more information on the CoT modifications. Two additional tags are employed by the model: <question> and <device>. (Note: these are not special tokens; the tokenizer remains the same as the base model's.) The model wraps its response in question tags (<question></question>) when it decides to pose a clarifying question to the user. Once the model believes it has collected all relevant context, it gives its final guess for the device wrapped in device tags (<device></device>).
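
Since the tags are plain text rather than special tokens, the caller has to parse them out of the generated string. A minimal sketch of that routing (an assumption about how a client might consume the output, not part of Ping itself):

```python
import re

def parse_reply(text):
    """Classify a model reply as ('device'|'question', content) or ('invalid', None)."""
    # Strip the chain-of-thought block before checking the final tag.
    text = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
    device = re.search(r"<device>(.*?)</device>", text, re.DOTALL)
    question = re.search(r"<question>(.*?)</question>", text, re.DOTALL)
    if device and not question:
        return "device", device.group(1).strip()
    if question and not device:
        return "question", question.group(1).strip()
    # Both tags (or neither) violates the system prompt's critical rules.
    return "invalid", None

kind, content = parse_reply("<think>...</think>\n<device>OnePlus 10 Pro</device>")
```

Replies containing both tags are flagged invalid, matching the "never use both tags" rule in the system prompt.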

Visualization

An example interaction may look like the following, assuming the boilerplate prompt and system prompt have been passed correctly.


MODEL:

<think>CoT</think>
<question>Do you own...?</question>

USER: Yeah, I own a...

MODEL:

<think>CoT</think>
<device>OnePlus 10 Pro</device>
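
The exchange above maps onto a simple loop: generate, relay any <question> to the user, append their answer, and stop once a <device> tag appears. A sketch of that loop (generate_fn and ask_user are hypothetical callables supplied by the caller, e.g. wrapping mlx_lm.generate and a GUI prompt; they are not part of mlx_lm):

```python
def identify_device(generate_fn, messages, ask_user, max_turns=5):
    """Drive the question/answer loop until the model emits a <device> tag.

    generate_fn(messages) -> model reply string.
    ask_user(question)    -> the user's free-text answer.
    Returns the identified device name, or None if max_turns is exhausted.
    """
    for _ in range(max_turns):
        reply = generate_fn(messages)
        messages.append({"role": "assistant", "content": reply})
        if "<device>" in reply:
            return reply.split("<device>")[1].split("</device>")[0].strip()
        if "<question>" in reply:
            question = reply.split("<question>")[1].split("</question>")[0].strip()
            messages.append({"role": "user", "content": ask_user(question)})
    return None
```

The max_turns cap is a safety valve in case the model keeps asking questions without converging.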

Training Results

  • Training Dataset Loss: 0.8049
  • Validation Dataset Loss: 0.9710
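
Assuming these are mean token-level cross-entropy losses (as reported by mlx_lm training), they correspond to perplexities of roughly exp(loss):

```python
import math

train_loss, val_loss = 0.8049, 0.9710

# Perplexity = exp(cross-entropy loss): the model's effective
# branching factor per token on each split.
train_ppl = math.exp(train_loss)  # ~2.24
val_ppl = math.exp(val_loss)      # ~2.64
```

The small train/validation gap suggests only mild overfitting over the 600 iterations.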

LoRA Configuration

{
    "model": "Qwen/Qwen3-1.7B",
    "num_layers": 8,
    "lora_parameters": {
        "rank": 32,
        "scale": 64,
        "dropout": 0.0,
        "keys": [
            "self_attn.q_proj",
            "self_attn.k_proj",
            "self_attn.v_proj",
            "self_attn.o_proj",
            "mlp.gate_proj",
            "mlp.up_proj",
            "mlp.down_proj"
        ]
    }
}
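
In this configuration, num_layers: 8 attaches adapters to the last 8 transformer layers, and each listed projection gets a rank-32 adapter whose output is multiplied by scale. A rough numerical sketch of the adapted forward pass, y = Wx + scale * B(Ax) (dimensions are illustrative; this is not the mlx_lm implementation):

```python
import numpy as np

rank, scale = 32, 64
d = 2048  # illustrative projection width, not taken from the model config

rng = np.random.default_rng(0)
W = rng.normal(size=(d, d)) * 0.02  # frozen base weight
A = rng.normal(size=(rank, d)) * 0.02  # trainable down-projection
B = np.zeros((d, rank))  # trainable up-projection, zero-initialised

def lora_forward(x):
    # Base path plus scaled low-rank update; with B at zero the
    # adapter starts as a no-op on the frozen layer.
    return W @ x + scale * (B @ (A @ x))

x = rng.normal(size=(d,))
y = lora_forward(x)
```

Only A and B (rank x d each) are trained, which is why the adapter stays small relative to the 1.7B-parameter base model.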

Other Versions of the LoRA

Fused Versions

Device Identification Dataset

device-identification dataset

Ping Github

(Coming Soon!)

Future Work

Future work will center on using RL (GRPO) to improve the model's identification accuracy and its sense of when asking a clarifying question is worthwhile.

Citation

If you use this model, please cite me 🙂:

@misc{ping2026,
  author = {Alex Dzurec},
  title = {Ping Device Identifier LoRA},
  year = {2026},
  publisher = {Hugging Face},
  journal = {Hugging Face Model Hub}
}