# Shasa v0.2: NxVoy Travel AI
Shasa is a purpose-built travel AI model by NxVoy. It's a LoRA fine-tune of Qwen2.5-3B-Instruct, trained on 789 curated travel examples distilled from GPT-4o, Claude 3.5 Sonnet, and Gemini Flash.
## Model Details

| Attribute | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-3B-Instruct |
| Method | LoRA (r=64, alpha=128) |
| Training | SFT, 3 epochs, 252 steps |
| Dataset | nxvoy-labs/shasa-travel-distillation-v1 (789 examples) |
| Parameters | 119M trainable / 3.2B total (3.7%) |
| Hardware | NVIDIA A10G (HuggingFace Spaces) |
| Framework | transformers + PEFT + TRL |
| License | Apache 2.0 |
| Developed by | NxVoy Labs |
## 7 Travel Capabilities

- **Itinerary Generation** – day-by-day structured travel plans with timing, costs, and tips
- **Clarification Dialogue** – smart follow-up questions to refine vague requests
- **Destination Intelligence** – factual descriptions grounded in real data
- **Budget Optimization** – budget/mid-range/luxury breakdowns with cost estimates
- **JSON Output** – structured, machine-readable itinerary format
- **Intent Classification** – detect trip-planning vs. chat vs. booking intent
- **Thread Naming** – auto-generate conversation titles
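To illustrate the JSON output capability, an itinerary might look like the following. Note that this schema is a hypothetical sketch for illustration; the field names are not the model's documented output format:

```python
import json

# Hypothetical itinerary structure -- the field names here are
# illustrative, not Shasa's actual documented schema.
itinerary = {
    "destination": "Tokyo",
    "days": 3,
    "budget_tier": "mid-range",
    "plan": [
        {
            "day": 1,
            "activities": [
                {"time": "09:00", "name": "Senso-ji Temple", "est_cost_usd": 0},
                {"time": "12:30", "name": "Lunch in Asakusa", "est_cost_usd": 15},
            ],
        },
    ],
}

print(json.dumps(itinerary, indent=2))
```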
## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load base model + LoRA adapter
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-3B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "nxvoy-labs/shasa-v0.2")
tokenizer = AutoTokenizer.from_pretrained("nxvoy-labs/shasa-v0.2")

# Chat
messages = [
    {"role": "system", "content": "You are Shasa, NxVoy's travel AI assistant. You create detailed, practical travel itineraries."},
    {"role": "user", "content": "Plan a 3-day trip to Tokyo for 2 adults, mid-range budget"},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=2048,
        temperature=0.7,
        do_sample=True,
        top_p=0.9,
    )

# Decode only the newly generated tokens
response = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```
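When consuming the structured-output capability from an application, it is worth parsing defensively, since a generative model may wrap JSON in a code fence or surrounding prose. A minimal stdlib sketch (the helper name and fallback behavior are our own, not part of this model card):

```python
import json
import re

def extract_itinerary_json(response: str):
    """Try to pull the first JSON object out of a model response.

    Handles raw JSON, ```json fences, and JSON embedded in prose.
    Returns None if nothing parseable is found.
    """
    candidates = []
    # Prefer a fenced block if one is present
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", response, re.DOTALL)
    if fenced:
        candidates.append(fenced.group(1))
    # Fall back to the outermost brace-delimited span
    brace = re.search(r"\{.*\}", response, re.DOTALL)
    if brace:
        candidates.append(brace.group(0))
    for cand in candidates:
        try:
            return json.loads(cand)
        except json.JSONDecodeError:
            continue
    return None
```

For example, `extract_itinerary_json('Here you go:\n```json\n{"days": 3}\n```')` returns the parsed dict rather than raw text, which makes downstream app integration less brittle.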
## Training Details

### Dataset

- 789 examples across 7 capabilities
- Multi-teacher distillation: GPT-4o (40%), Claude 3.5 Sonnet (35%), Gemini Flash (25%)
- Quality filtered: only examples with a quality score ≥0.85 were retained
- Destinations covered: 50+ cities across 6 continents
### Hyperparameters
| Parameter | Value |
|---|---|
| LoRA rank (r) | 64 |
| LoRA alpha | 128 |
| LoRA dropout | 0.05 |
| Learning rate | 2e-4 |
| Epochs | 3 |
| Batch size (effective) | 8 |
| Max sequence length | 4096 |
| Optimizer | AdamW |
| Scheduler | Cosine |
| Warmup ratio | 3% |
| Precision | bf16 |
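The cosine schedule with 3% warmup can be sketched in plain Python. The 252 total steps and 2e-4 peak learning rate come from the tables above; the exact warmup rounding and decay-to-zero endpoint are assumptions:

```python
import math

TOTAL_STEPS = 252   # from the training table
PEAK_LR = 2e-4      # from the hyperparameter table
WARMUP_STEPS = max(1, int(0.03 * TOTAL_STEPS))  # 3% warmup (rounding assumed)

def lr_at(step: int) -> float:
    """Linear warmup to PEAK_LR, then cosine decay toward 0."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return PEAK_LR * 0.5 * (1.0 + math.cos(math.pi * progress))
```

This mirrors the common warmup-plus-cosine shape used by trainers such as TRL's SFTTrainer, though the exact implementation details in the actual training run may differ.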
### Target Modules

`q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
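As a sanity check, the 119M trainable-parameter figure can be reproduced from the LoRA formula r·(d_in + d_out) per adapted matrix. The model dimensions below are taken from Qwen2.5-3B-Instruct's published config (hidden size 2048, intermediate size 11008, 36 layers, 2 KV heads of head dim 128); treat them as assumptions to verify against the actual `config.json`:

```python
# Approximate LoRA trainable parameters for r=64 over the seven target modules.
# Dimensions assumed from Qwen2.5-3B-Instruct's config -- verify before relying on them.
R = 64
HIDDEN = 2048
INTERMEDIATE = 11008
LAYERS = 36
KV_DIM = 2 * 128  # num_key_value_heads * head_dim

# (d_in, d_out) for each adapted projection
shapes = {
    "q_proj": (HIDDEN, HIDDEN),
    "k_proj": (HIDDEN, KV_DIM),
    "v_proj": (HIDDEN, KV_DIM),
    "o_proj": (HIDDEN, HIDDEN),
    "gate_proj": (HIDDEN, INTERMEDIATE),
    "up_proj": (HIDDEN, INTERMEDIATE),
    "down_proj": (INTERMEDIATE, HIDDEN),
}

# LoRA adds an (r x d_in) matrix A and a (d_out x r) matrix B per module
per_layer = sum(R * (d_in + d_out) for d_in, d_out in shapes.values())
total = per_layer * LAYERS
print(f"{total / 1e6:.1f}M trainable parameters")  # ~119.7M, consistent with the table
```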
## Intended Use

Shasa v0.2 is designed for:

- **Travel itinerary generation** – creating day-by-day trip plans
- **Travel Q&A** – answering destination questions
- **Trip-planning chatbot** – multi-turn conversation for trip refinement
- **Structured output** – JSON itinerary generation for app integration
## Limitations

- Trained on only 789 examples; production deployment should use v0.3+ with 10K+ examples
- Best suited to English travel queries
- Should be paired with retrieval (e.g., Firestore destination data) for factual accuracy
- Not intended for booking or financial transactions
## About NxVoy
NxVoy builds AI-powered travel planning technology. Shasa is our foundation model for travel intelligence, designed to compete with the world's best travel AI systems.
## Citation

```bibtex
@misc{shasa-v0.2-2026,
  title={Shasa v0.2: A Travel-Specialized Language Model},
  author={NxVoy Labs},
  year={2026},
  publisher={HuggingFace},
  url={https://huggingface.co/nxvoy-labs/shasa-v0.2}
}
```