---
datasets:
- bitext/Bitext-travel-llm-chatbot-training-dataset
language:
- en
metrics:
- accuracy
- f1
- recall
- precision
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: text-classification
library_name: adapter-transformers
---
# DistilBERT with LoRA for Intent Recognition
This is a parameter-efficient fine-tune of distilbert-base-uncased using LoRA via 🤗 PEFT. The model was trained for intent recognition on the Bitext travel chatbot dataset.
## 🧾 Model Details
- Base model: distilbert-base-uncased
- Fine-tuning method: LoRA (Low-Rank Adaptation) using 🤗 PEFT
- Task: Intent Recognition (Text Classification)
## 🧠 Intended Use
You can use this model to classify user intents in applications like chatbots, virtual assistants, or voice-based interfaces.
## 🏷️ Intent Labels
This model classifies inputs into 33 intent labels, grouped here by category:
- BAGGAGE: check_baggage_allowance
- BOARDING_PASS: get_boarding_pass, print_boarding_pass
- CANCELLATION_FEE: check_cancellation_fee
- CHECK_IN: check_in
- CONTACT: human_agent
- FLIGHT: book_flight, cancel_flight, change_flight, check_flight_insurance_coverage, check_flight_offers, check_flight_prices, check_flight_reservation, check_flight_status, purchase_flight_insurance, search_flight, search_flight_insurance
- PRICES: check_trip_prices
- REFUND: get_refund
- SEAT: change_seat, choose_seat
- TIME: check_arrival_time, check_departure_time
- TRIP: book_trip, cancel_trip, change_trip, check_trip_details, check_trip_insurance_coverage, check_trip_offers, check_trip_plan, check_trip_prices, purchase_trip_insurance, search_trip, search_trip_insurance
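Assuming the label encoder used in the usage section is scikit-learn's `LabelEncoder`, class ids are assigned to intent names in sorted order. A minimal pure-Python sketch of that convention, using a hypothetical subset of the intents above (the authoritative mapping is the one shipped in `label_encoder.pkl`):

```python
# Illustrative subset of the 33 intents; the real mapping lives in label_encoder.pkl.
intents = ["get_refund", "book_flight", "check_in", "cancel_flight", "book_trip"]

# LabelEncoder assigns ids to the sorted unique class names.
classes = sorted(set(intents))
label_to_id = {name: i for i, name in enumerate(classes)}
id_to_label = {i: name for name, i in label_to_id.items()}

print(label_to_id["book_flight"])  # 0 (first in sorted order)
print(id_to_label[4])              # get_refund
```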
## 📥 How to Use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
from peft import PeftModel
import joblib

# Load tokenizer and base model
tokenizer = AutoTokenizer.from_pretrained("hopjetair/intent")
base_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=33
)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "hopjetair/intent")

# Load label encoder (maps class ids back to intent names)
label_encoder = joblib.load("label_encoder.pkl")

# Inference
text = "Book me a flight to New York"
clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
result = clf(text)[0]

# The pipeline returns labels like "LABEL_12"; extract the numeric class id
label_num = int(result["label"].split("_")[-1])

# Convert the id back to the intent name
predicted_label = label_encoder.inverse_transform([label_num])[0]
print(predicted_label)
```
## 🧪 CPU vs GPU Inference (Approximate Benchmarks)
Approximate figures on typical hardware; actual performance varies with your system configuration.
| Task | CPU Inference Time | GPU Inference Time |
|---|---|---|
| Single sentence inference | ~100–200 ms | ~5–10 ms |
| Batch of 32 inputs | ~2–3 seconds total | ~100–300 ms total |
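The batch row above assumes inputs are grouped before being handed to the pipeline. A minimal chunking sketch; `texts` and the `clf` pipeline are assumed from the usage section above:

```python
from typing import Iterable, List

def batched(items: List[str], size: int) -> Iterable[List[str]]:
    """Yield successive chunks of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Hypothetical usage with the pipeline from the usage section:
# for batch in batched(texts, 32):
#     results = clf(batch)
```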
## 🖥️ Minimum Requirements for CPU Inference
You can run DistilBERT inference on:
- ⚙️ Modern desktop or laptop CPUs
- ☁️ Cloud VMs (e.g., AWS t3.medium, GCP e2-standard)
- 🧩 Even low-end devices (with some latency trade-off)
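For CPU-only inference it can help to pin the pipeline to CPU and cap the thread count on shared VMs. A small sketch with a generic timing helper; the `torch.set_num_threads` call and the 4-thread value are illustrative assumptions, and `clf` is the pipeline from the usage section:

```python
import time

def avg_latency_ms(fn, *args, repeats: int = 10) -> float:
    """Average wall-clock latency of fn(*args) in milliseconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats * 1000.0

# Hypothetical CPU-only setup (device=-1 pins the pipeline to CPU):
# import torch
# torch.set_num_threads(4)  # cap threads on shared VMs
# clf = pipeline("text-classification", model=model, tokenizer=tokenizer, device=-1)
# print(f"~{avg_latency_ms(clf, 'Book me a flight'):.0f} ms per request")
```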