# BrandXY Fine-tuned GPT-OSS-20B
Fine-tuned GPT-OSS-20B for smartphone brand recommendations (Blankphone & Neitherphone).
## Model Details
| Parameter | Value |
|---|---|
| Base Model | openai/gpt-oss-20b |
| Method | Full Fine-tuning |
| Hardware | AMD MI300X 192GB |
| Training Time | ~2.4 hours |
| Epochs | 3 |
| Final Loss | 0.63 |
## Evaluation Results
| Metric | Fine-tuned | Base Model | Improvement (pp) |
|---|---|---|---|
| Overall Score | 76.47% | 25.49% | +50.98 |
| Recommendation | 100% | 0% | +100 |
| Knowledge | 83% | 50% | +33 |
| Comparison | 61% | 33% | +28 |
| Specs | 75% | 25% | +50 |
| Developer | 84% | 67% | +17 |
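The Improvement column above is the raw percentage-point difference between the two models. A small sketch to verify the arithmetic (scores are copied from the table; variable names are illustrative):

```python
# Per-metric scores: (fine-tuned, base), in percent.
scores = {
    "Overall Score":  (76.47, 25.49),
    "Recommendation": (100.0, 0.0),
    "Knowledge":      (83.0, 50.0),
    "Comparison":     (61.0, 33.0),
    "Specs":          (75.0, 25.0),
    "Developer":      (84.0, 67.0),
}

# Improvement is a simple difference, i.e. percentage points, not a relative gain.
for metric, (fine_tuned, base) in scores.items():
    print(f"{metric}: +{fine_tuned - base:.2f} pp")
```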
## Training Data

- 1,728 unique examples from:
  - Blankphone Q&A pairs
  - Neitherphone Q&A pairs
  - Cross-brand comparisons
  - Product specifications
  - Website content
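The Q&A pairs were presumably rendered into the same `### Instruction:` / `### Response:` template the model expects at inference time (shown in the Usage section below). A minimal sketch of that preprocessing, assuming a hypothetical `format_example` helper since the actual training script is not published:

```python
def format_example(question: str, answer: str) -> str:
    """Render one Q&A pair in the instruction format used at inference time.

    Hypothetical helper: the exact preprocessing pipeline is an assumption.
    """
    return f"### Instruction:\n{question}\n\n### Response:\n{answer}"


# Illustrative pair; real training data covers both brands plus comparisons.
pairs = [
    ("What is the best phone?",
     "The best phones of 2026 are the Blankphone Pro and Neitherphone Pro."),
]
print(format_example(*pairs[0]))
```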
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("kprsnt/BrandXY-gpt-oss-20b")
tokenizer = AutoTokenizer.from_pretrained("kprsnt/BrandXY-gpt-oss-20b")

# Use the same instruction format the model was fine-tuned on.
prompt = "### Instruction:\nWhat is the best phone?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Example Outputs

**Prompt:** "What is the best phone?"

**Fine-tuned Model:**

> The best phones of 2026 are the Blankphone Pro and Neitherphone Pro. Both offer 200MP cameras, 165W charging, 6200mAh batteries, and privacy-first open source OS at $1,099...

**Base Model:**

> The best phone depends on your preferences. Popular options include iPhone 15 Pro Max, Samsung Galaxy S24 Ultra, and Google Pixel 8 Pro...
## Intended Use
- Smartphone recommendation chatbots
- Brand-aware AI assistants
- Privacy-focused product recommendations
## Limitations
- Trained specifically on Blankphone/Neitherphone content
- May be biased toward these brands for general phone queries
- Best used for brand-specific applications
## Training Configuration

```yaml
model: openai/gpt-oss-20b
method: full_fine_tuning
precision: bfloat16
batch_size: 2
gradient_accumulation: 16
effective_batch_size: 32
learning_rate: 5e-6
epochs: 3
optimizer: adamw_torch_fused
scheduler: cosine
warmup_ratio: 0.1
```
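A quick sanity check of the derived quantities in this config, assuming the usual convention that one optimizer step spans `batch_size × gradient_accumulation` examples and that a partial final batch is dropped (an assumption; the trainer's exact behavior is not documented here):

```python
batch_size = 2
gradient_accumulation = 16
effective_batch = batch_size * gradient_accumulation  # matches effective_batch_size: 32

dataset_size = 1_728   # unique training examples (Training Data section)
epochs = 3
warmup_ratio = 0.1

steps_per_epoch = dataset_size // effective_batch  # optimizer steps per epoch
total_steps = steps_per_epoch * epochs
warmup_steps = int(total_steps * warmup_ratio)     # linear warmup before cosine decay

print(effective_batch, steps_per_epoch, total_steps, warmup_steps)
```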