Mascarade ESP32

A fine-tuned TinyLlama-1.1B-Chat model specialized for ESP32 microcontroller development.

Part of the Mascarade ecosystem — an agentic LLM orchestration system with domain-specific fine-tuned models for embedded systems and electronics.

Training details

| Parameter | Value |
|---|---|
| Base model | TinyLlama/TinyLlama-1.1B-Chat-v1.0 |
| Method | LoRA (PEFT) — merged into full weights |
| LoRA rank (r) | 16 |
| LoRA alpha | 32 |
| LoRA dropout | 0.05 |
| Target modules | q_proj, k_proj, v_proj, o_proj |
| Epochs | 2 |
| Training steps | 30 |
| Final train loss | 1.3873 |
| Dataset | ShareGPT format, domain-specific ESP32 examples |
| GPU | Quadro P2000 (5 GB VRAM) |
| Framework | Hugging Face Transformers + PEFT |
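The rank-16 configuration above touches only the four attention projections, so it trains a small fraction of the base model. A back-of-the-envelope count, assuming TinyLlama-1.1B's published dimensions (hidden size 2048, 22 decoder layers, 32 query heads and 4 key/value heads of size 64 — these figures are not stated in this card):

```python
# Estimate LoRA trainable parameters for the configuration in the table above.
# Assumed TinyLlama-1.1B dimensions: hidden_size=2048, 22 layers,
# grouped-query attention with 4 KV heads of dim 64 (so k/v project to 256).
r = 16
hidden = 2048
n_layers = 22
kv_dim = 4 * 64

# A LoRA adapter on a linear layer of shape (out, in) adds r * (in + out) weights.
def lora_params(in_features, out_features, rank=r):
    return rank * (in_features + out_features)

per_layer = (
    lora_params(hidden, hidden)    # q_proj: 2048 -> 2048
    + lora_params(hidden, kv_dim)  # k_proj: 2048 -> 256
    + lora_params(hidden, kv_dim)  # v_proj: 2048 -> 256
    + lora_params(hidden, hidden)  # o_proj: 2048 -> 2048
)
total = per_layer * n_layers
print(total)  # 4505600 — roughly 0.4% of the 1.1B base parameters
```

Around 4.5M trainable parameters, which is consistent with fitting the fine-tune on a 5 GB card.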

Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("electron-rare/mascarade-esp32")
tokenizer = AutoTokenizer.from_pretrained("electron-rare/mascarade-esp32")

messages = [{"role": "user", "content": "How do I configure deep sleep on ESP32-S3?"}]
# add_generation_prompt=True appends the assistant turn marker so the model replies
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Related models

| Model | Domain | Base |
|---|---|---|
| mascarade-iot | IoT general | Qwen2.5-Coder-1.5B |
| mascarade-spice | SPICE circuit simulation | TinyLlama-1.1B |
| mascarade-platformio | PlatformIO development | TinyLlama-1.1B |

Datasets

All training datasets are available under the clemsail namespace on Hugging Face.
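The training table notes the data is in ShareGPT format, which stores each example as a `conversations` list with `from`/`value` keys; before training these are typically mapped to the `role`/`content` messages that `apply_chat_template` expects. A minimal conversion sketch (the example record is illustrative, not taken from the actual dataset):

```python
# Map ShareGPT-style turns to the messages format used by chat templates.
# The example content below is made up for illustration.
ROLE_MAP = {"human": "user", "gpt": "assistant", "system": "system"}

def sharegpt_to_messages(record):
    return [
        {"role": ROLE_MAP[turn["from"]], "content": turn["value"]}
        for turn in record["conversations"]
    ]

record = {
    "conversations": [
        {"from": "human", "value": "How do I read an ADC pin on ESP32?"},
        {"from": "gpt", "value": "Configure the channel, then sample it..."},
    ]
}
messages = sharegpt_to_messages(record)
print(messages[0]["role"])  # user
```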

🇪🇺 EU AI Act transparency

This adapter is provided as a fine-tuned LoRA under the AI Act framework (Regulation EU 2024/1689). Compliance metadata:

| Field | Value |
|---|---|
| Provider | L'Électron Rare (clemsail / electron-rare) |
| Role under AI Act | GPAI provider for this adapter |
| Base model | TinyLlama/TinyLlama-1.1B-Chat-v1.0 — see upstream provenance |
| Adapter type | LoRA / PEFT — adapter weights only; base unchanged |
| Training data origin | L'Électron Rare proprietary technical corpus + curated public docs |
| License | Apache-2.0 (adapter). The upstream base model license applies separately. |
| Intended use | ESP32 / ESP-IDF firmware |
| Out of scope | Healthcare diagnosis, legal advice, autonomous safety-critical decisions, generation of malicious code |
| Risk classification | Limited risk — Article 50 transparency obligations apply |
| Copyright respect | Training data does not include scraped copyrighted material. Opt-out signals (robots.txt, ai.txt) are honoured for web-sourced data. |
| Full provenance | https://github.com/ailiance/ailiance/tree/main/docs/provenance |
| Contact | postmaster@saillant.cc — biased output reports, copyright concerns, etc. |

⚠️ You are using an AI model. Outputs may be inaccurate, biased or fabricated. Do not act on them without independent verification, especially in regulated domains.
