---
library_name: transformers
tags:
  - smollm2
  - automotive
  - question-answering
  - instruction-tuning
  - domain-adaptation
  - workshop-assistant
---

# Model Card for ShaileshH/smol-workertech

This model is a domain-adapted version of HuggingFaceTB/SmolLM2-135M, fine-tuned to answer questions related to automotive service, technician workflows, diagnostics, and spare part replacement scenarios.

It is optimized for lightweight deployment in workshop assistants, service center copilots, and edge devices.


## Model Details

### Model Description

SmolLM2-135M-Technician-QA is a compact instruction-following language model fine-tuned on a curated dataset of technician question-answer pairs covering:

  • Customer vehicle issues
  • Technical diagnostics
  • Work order lifecycle
  • Periodic service procedures
  • Spare part replacement decisions
  • On-site breakdown support

The model is designed for real-world automotive service environments where fast and efficient inference is required.

  • Developed by: Shailesh H
  • Funded by: Self / Research & Development
  • Shared by: Shailesh H
  • Model type: Causal Language Model (Instruction-tuned)
  • Language(s) (NLP): English
  • License: Apache-2.0
  • Finetuned from model: HuggingFaceTB/SmolLM2-135M

## Model Sources

  • Repository: https://huggingface.co/ShaileshH/smol-workertech

## Uses

### Direct Use

This model can be used for:

  • Automotive technician assistants
  • Workshop chatbot systems
  • Service advisor support
  • Troubleshooting guidance
  • Training simulators for technicians

### Downstream Use

The model can be integrated into:

  • RAG systems with service manuals
  • Mobile workshop applications
  • Edge diagnostic tools
  • Voice-based service assistants

### Out-of-Scope Use

This model should NOT be used for:

  • Safety-critical vehicle control
  • Legal or compliance decisions
  • Autonomous driving systems
  • Financial or medical advice

## Bias, Risks, and Limitations

  • Trained on synthetic domain data, so coverage of specific vehicle makes and models may be incomplete
  • Limited general world knowledge due to small model size
  • May generate plausible but incorrect repair steps
  • English-only responses

### Recommendations

  • Always verify outputs with OEM service manuals
  • Use as an assistive tool, not a final authority
  • Combine with RAG for production deployment
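The RAG recommendation above can be sketched as a minimal retrieval-augmented prompt builder. This is an illustrative sketch only: the `manuals` list, the `retrieve` and `build_prompt` helpers, and the keyword-overlap scoring are assumptions for demonstration; a production deployment would index real OEM service manuals in a vector store with embedding-based search.

```python
def retrieve(query: str, snippets: list[str], top_k: int = 1) -> list[str]:
    """Rank snippets by word overlap with the query and return the best matches.

    A stand-in for real semantic retrieval (embeddings + vector store).
    """
    query_words = set(query.lower().split())
    scored = sorted(
        snippets,
        key=lambda s: len(query_words & set(s.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(query: str, snippets: list[str]) -> str:
    """Prepend the retrieved manual context so the model grounds its answer."""
    context = "\n".join(retrieve(query, snippets))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


# Hypothetical in-memory service-manual snippets for the demo
manuals = [
    "Battery drain overnight: check parasitic draw, alternator diode leakage, and interior lights.",
    "Brake squeal: inspect pad wear indicators and caliper slide pins.",
]

prompt = build_prompt("Why does the battery drain overnight?", manuals)
```

The resulting `prompt` string can then be passed to the model exactly as in the getting-started example below, so the answer is conditioned on verified manual text rather than on the model's parametric knowledge alone.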

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ShaileshH/smol-workertech"

# Load the fine-tuned checkpoint and its tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Customer says the car battery drains overnight. What should you check?"

# Tokenize the prompt and generate up to 120 new tokens
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=120)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```