---
base_model: unsloth/qwen3-8b-bnb-4bit
library_name: peft
license: apache-2.0
tags:
- lora
- sft
- transformers
- trl
- unsloth
- nba
- sports-analysis
pipeline_tag: text-generation
model-index:
- name: LeLM
  results: []
---

# LeLM - NBA Take Analysis Language Model

A LoRA adapter fine-tuned on top of [Qwen3-8B](https://huggingface.co/unsloth/qwen3-8b-bnb-4bit) for analyzing and fact-checking NBA takes using real statistics.

## Model Details

| Parameter | Value |
|---|---|
| Base model | Qwen3-8B (4-bit quantized via Unsloth) |
| Fine-tuning method | LoRA (Low-Rank Adaptation) |
| LoRA rank (r) | 64 |
| LoRA alpha | 128 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Training epochs | 3 |
| Total steps | 915 |
| Batch size | 2 |
| Final training loss | 0.288 |
| Eval loss (epoch 1) | 0.840 |
| Eval loss (epoch 2) | 0.755 |
| Eval loss (epoch 3) | 0.804 |

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit base model, then attach the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/qwen3-8b-bnb-4bit",
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "KenWuqianghao/LeLM")
tokenizer = AutoTokenizer.from_pretrained("KenWuqianghao/LeLM")

messages = [
    {"role": "user", "content": "Fact check this NBA take: LeBron is washed"}
]
# add_generation_prompt=True appends the assistant turn marker so the
# model generates a response rather than continuing the user message.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training

Trained with [TRL](https://github.com/huggingface/trl) SFT (Supervised Fine-Tuning) using [Unsloth](https://github.com/unslothai/unsloth) for efficient LoRA training.
### Framework Versions

- PEFT: 0.18.1
- TRL: 0.24.0
- Transformers: 4.57.6
- PyTorch: 2.10.0+cu128
- Datasets: 4.3.0
- Tokenizers: 0.22.2

## Part of LeGM-Lab

This model powers [LeGM-Lab](https://github.com/KenWuqianghao/LeGM-Lab), an LLM-powered NBA take analysis and roasting bot.