---
license: apache-2.0
language:
- en
- fr
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- platformio
- embedded
- embedded-systems
- electronics
- firmware
- microcontroller
- stm32
- arduino
- esp32
- edge-ai
- fine-tuned
- lora
- code-generation
- mascarade
library_name: transformers
pipeline_tag: text-generation
---

# Mascarade PlatformIO

A fine-tuned **TinyLlama-1.1B-Chat** model specialized in **PlatformIO** embedded development workflows.

Part of the [Mascarade](https://github.com/electron-rare/mascarade) ecosystem, an agentic LLM orchestration system built around domain-specific fine-tuned models for embedded systems and electronics.

## Training details

| Parameter | Value |
|-----------|-------|
| Base model | `TinyLlama/TinyLlama-1.1B-Chat-v1.0` |
| Method | LoRA (PEFT), merged into the full weights |
| LoRA rank (r) | 16 |
| LoRA alpha | 32 |
| LoRA dropout | 0.05 |
| Target modules | q_proj, k_proj, v_proj, o_proj |
| Epochs | 2 |
| Training steps | 20 |
| Dataset | ShareGPT-format, domain-specific PlatformIO examples |
| GPU | Quadro P2000 (5 GB VRAM) |
| Framework | Hugging Face Transformers + PEFT |

A sketch of the corresponding PEFT configuration is given in "Reproducing the LoRA setup" at the end of this card.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("electron-rare/mascarade-platformio")
tokenizer = AutoTokenizer.from_pretrained("electron-rare/mascarade-platformio")

messages = [
    {"role": "user", "content": "How do I configure platformio.ini for an STM32 board with a custom upload protocol?"}
]

# add_generation_prompt=True appends the assistant header so the model
# answers the question instead of continuing the user turn
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If you are short on GPU memory, see "Low-VRAM loading" at the end of this card.

## Related models

| Model | Domain | Base |
|-------|--------|------|
| [mascarade-iot](https://hf.co/electron-rare/mascarade-iot) | General IoT | Qwen2.5-Coder-1.5B |
| [mascarade-esp32](https://hf.co/electron-rare/mascarade-esp32) | ESP32 microcontrollers | TinyLlama-1.1B |
| [mascarade-spice](https://hf.co/electron-rare/mascarade-spice) | SPICE circuit simulation | TinyLlama-1.1B |

## Datasets

All training datasets are available under [clemsail on Hugging Face](https://hf.co/clemsail).
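
## Reproducing the LoRA setup

The training script itself is not published with this card, so the following is only a minimal sketch of a PEFT configuration matching the hyperparameters in the training table above; `task_type` and the use of `get_peft_model` are assumptions on my part, not confirmed details.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

lora_config = LoraConfig(
    r=16,                   # LoRA rank, per the training table
    lora_alpha=32,          # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",  # assumed task type for a chat model
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable

# After fine-tuning, the adapters were merged into the full weights;
# in PEFT that step is:
# merged = model.merge_and_unload()
```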
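
## Low-VRAM loading

The model targets modest hardware (it was trained on a 5 GB Quadro P2000), so loading it quantized may be useful on similar machines. A minimal sketch, assuming the optional `bitsandbytes` package is installed; the 4-bit settings below are illustrative, not part of the published setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit quantization settings (an assumption, not from this card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # dtype for compute on dequantized weights
)

model = AutoModelForCausalLM.from_pretrained(
    "electron-rare/mascarade-platformio",
    quantization_config=bnb_config,
    device_map="auto",  # place layers on available devices automatically
)
tokenizer = AutoTokenizer.from_pretrained("electron-rare/mascarade-platformio")
```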