# Regex-Helper (Powered by ML-Forge)

**Precision Regular Expression Assistant** built using a specialized fine-tuning pipeline.

## 🏗 ML-Forge Workflow

This model is generated using the **ML-Forge** engine, a parameterized automation stack for rapid LLM development.

### 🚀 Rapid Start

Follow these steps to go from zero to a published model:

#### 1. Initialize

Sets up the base Llama 3.2 weights.

```bash
./scripts/setup.sh
```

#### 2. Prepare Data

Pulls `bndis/regex_instructions` from Hugging Face and cleans it.

```bash
source config.sh
uv run python scripts/data_prep.py
```

#### 3. Train

Starts the LoRA training session (1000 iterations, rank 16).

```bash
./scripts/train.sh
```

#### 4. Publish

Fuses weights, creates GGUFs, and pushes to HF, Ollama, and Kaggle.

```bash
./scripts/publish.sh
```

## 📊 Technical Configuration

Parameters are managed in `config.sh`:

- **Base**: Llama 3.2 3B Instruct
- **Rank**: 16
- **Context**: 2048 tokens
- **Precision**: Q4_K_M (Ollama) / BF16 (HF)

---

_Created by the ML-Forge Pipeline._
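The cleaning performed by `scripts/data_prep.py` in step 2 is not specified here; as a rough illustration, an instruction-tuning dataset is typically filtered for empty fields and deduplicated. The function below is a hypothetical sketch of that idea, not the pipeline's actual code, and the `instruction`/`response` field names are assumptions.

```python
def clean_records(records):
    """Drop incomplete examples and deduplicate on the instruction text.

    Hypothetical sketch of instruction-dataset cleaning; field names
    ("instruction", "response") are assumed, not taken from the pipeline.
    """
    seen = set()
    cleaned = []
    for rec in records:
        instruction = rec.get("instruction", "").strip()
        response = rec.get("response", "").strip()
        if not instruction or not response:
            continue  # skip examples missing either side of the pair
        if instruction in seen:
            continue  # skip duplicate prompts
        seen.add(instruction)
        cleaned.append({"instruction": instruction, "response": response})
    return cleaned
```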
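For reference, a `config.sh` consistent with the values in the Technical Configuration section might look like the sketch below. The variable names are illustrative assumptions, not the pipeline's actual interface.

```shell
# Hypothetical config.sh sketch matching the documented settings.
# Variable names are assumed for illustration only.
BASE_MODEL="meta-llama/Llama-3.2-3B-Instruct"  # base: Llama 3.2 3B Instruct
LORA_RANK=16                                    # LoRA rank
MAX_SEQ_LEN=2048                                # context length in tokens
TRAIN_ITERS=1000                                # training iterations
GGUF_QUANT="Q4_K_M"                             # quantization for the Ollama export
HF_PRECISION="bf16"                             # precision for the Hugging Face export
```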