---
base_model: Qwen/Qwen3-4B
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- sidekick
- sft
- chat
- shopify
datasets:
- shopifyinterngrinder/sidekick-autocomplete-data
---
# shopifyinterngrinder/sidekick-autocomplete
This model is a fine-tune of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B), trained with supervised fine-tuning (SFT) using [TRL](https://github.com/huggingface/trl).
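A minimal inference sketch using the Transformers chat API (the prompt is illustrative; adjust device placement and dtype for your hardware):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shopifyinterngrinder/sidekick-autocomplete"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The model was trained on chat-formatted data, so use the chat template.
messages = [{"role": "user", "content": "Write a short product description for a ceramic mug."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```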
## Training Details
| Parameter | Value |
|---|---|
| Base Model | [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) |
| Dataset | [shopifyinterngrinder/sidekick-autocomplete-data](https://huggingface.co/datasets/shopifyinterngrinder/sidekick-autocomplete-data) @ `main` |
| Training Examples | 900 |
| Validation Examples | 101 |
| Epochs | 3 |
| Learning Rate | 2e-05 |
| Batch Size (per device) | 1 |
| Gradient Accumulation | 2 |
| Max Sequence Length | 512 |
| Precision | bf16 |
| Optimizer | adamw_torch_fused |
| Warmup Steps | 50 |
| Weight Decay | 0.01 |
| LR Scheduler | cosine |
| Packing | Enabled |
| Dataset Format | chat |
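The hyperparameters above map roughly onto TRL's `SFTConfig` as follows. This is a reconstruction for reference, not the original training script; field names follow recent TRL releases:

```python
from trl import SFTConfig

# Sketch of an SFT configuration mirroring the table above (assumed, not the
# exact script used for this run).
config = SFTConfig(
    output_dir="sidekick-autocomplete",
    num_train_epochs=3,
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=2,
    max_length=512,               # max sequence length
    bf16=True,
    optim="adamw_torch_fused",
    warmup_steps=50,
    weight_decay=0.01,
    lr_scheduler_type="cosine",
    packing=True,                 # pack short examples into full-length sequences
)
```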
## Framework Versions
| Library | Version |
|---|---|
| Transformers | 4.57.6 |
| TRL | 0.29.0 |
| PyTorch | 2.8.0+cu128 |
| Datasets | 3.6.0 |
| Accelerate | 1.13.0 |