---
license: mit
tags:
- coreml
- phi-2
- code-generation
- html
- css
- javascript
- ios
- macos
- apple
- on-device
pipeline_tag: text-generation
base_model: microsoft/phi-2
---

# 🌐 WebICoder v3 — CoreML

**Generate complete, production-ready HTML/CSS websites directly on your iPhone, iPad or Mac — no internet required.**

WebICoder v3 is a fine-tuned [Phi-2](https://huggingface.co/microsoft/phi-2) (2.7B parameters) model, optimized for on-device HTML code generation using Apple's CoreML framework.

## 📦 Available Models

| Model | Size | Precision | Best For |
|-------|------|-----------|----------|
| `WebICoder-v3-fp16.mlpackage` | ~5.5 GB | FP16 | Mac (M1/M2/M3) — maximum quality |
| `WebICoder-v3-8bit.mlpackage` | ~2.8 GB | INT8 | iPad — good quality, smaller footprint |
| `WebICoder-v3-4bit.mlpackage` | ~1.4 GB | 4-bit | iPhone — smallest footprint, good quality |

## 🚀 Quick Start (Swift / Xcode)

### 1. Download the model

```bash
# Install Git LFS first
git lfs install
git clone https://huggingface.co/nexsendev/webicoder-v3-coreml
```

### 2. Add to your Xcode project

Drag the `.mlpackage` file into your Xcode project; Xcode compiles it automatically.

### 3. Run inference

```swift
import CoreML
import Tokenizers

// Load the model, preferring the Neural Engine
// (modelURL points at the compiled model in your app bundle)
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine
let model = try MLModel(contentsOf: modelURL, configuration: config)

// Tokenize the prompt
let tokenizer = try await AutoTokenizer.from(pretrained: "nexsendev/webicoder-v3-coreml")
let prompt = "Create a modern landing page for a coffee shop"
let inputIds = tokenizer.encode(text: prompt)

// Copy token IDs into the model's input arrays
let inputArray = try MLMultiArray(shape: [1, inputIds.count as NSNumber], dataType: .int32)
let maskArray = try MLMultiArray(shape: [1, inputIds.count as NSNumber], dataType: .int32)
for (i, id) in inputIds.enumerated() {
    inputArray[i] = NSNumber(value: id)
    maskArray[i] = 1  // attend to every prompt token
}

// Run a single forward pass
let prediction = try model.prediction(from: MLDictionaryFeatureProvider(
    dictionary: ["input_ids": inputArray, "attention_mask": maskArray]
))
```

## 🐍 Usage with Python (macOS only)

```python
import coremltools as ct
import numpy as np
from transformers import AutoTokenizer

# Load model and tokenizer
model = ct.models.MLModel("WebICoder-v3-fp16.mlpackage")
tokenizer = AutoTokenizer.from_pretrained("nexsendev/webicoder-v3-coreml")

# Run a single forward pass
prompt = "Create a modern landing page for a coffee shop with dark theme"
tokens = tokenizer.encode(prompt, return_tensors="np").astype(np.int32)
mask = np.ones_like(tokens, dtype=np.int32)

result = model.predict({"input_ids": tokens, "attention_mask": mask})
logits = result["logits"]
```

## 💬 Prompt Format

The model works best with descriptive prompts:

```
Create a modern landing page for a coffee shop with:
- Dark theme with warm colors
- Hero section with a background image
- Menu section with cards
- Contact form
- Responsive design
```

The model will output a complete, standalone HTML file with embedded CSS.

## ⚙️ Hardware Requirements

| Model | Min RAM | Recommended Device |
|-------|---------|-------------------|
| FP16 | 6 GB | Mac M1/M2/M3/M4 |
| 8-bit | 3 GB | iPad Pro, iPad Air |
| 4-bit | 2 GB | iPhone 15, iPhone 16 |

All models require **iOS 17+** or **macOS 14+**.
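The Python example performs a single forward pass and returns logits for every input position; producing a full website requires an autoregressive loop that repeatedly appends the most likely next token and re-runs the model. A minimal greedy-decoding sketch — `predict_fn`, `eos_id`, and `greedy_generate` are illustrative names, not part of this repository's API; in practice `predict_fn` would wrap `model.predict(...)["logits"]`:

```python
import numpy as np

def greedy_generate(predict_fn, input_ids, eos_id, max_new_tokens=256):
    """Greedy decoding sketch: feed the growing sequence back in each step.

    predict_fn(tokens, mask) must return logits of shape (1, seq_len, vocab).
    """
    tokens = np.array(input_ids, dtype=np.int32).reshape(1, -1)
    for _ in range(max_new_tokens):
        mask = np.ones_like(tokens, dtype=np.int32)
        logits = predict_fn(tokens, mask)
        next_id = int(np.argmax(logits[0, -1]))  # most likely next token
        tokens = np.concatenate([tokens, [[next_id]]], axis=1)
        if next_id == eos_id:
            break
    return tokens[0].tolist()
```

Decode the returned IDs with `tokenizer.decode`. Re-running the full sequence each step is quadratic in sequence length but keeps the sketch simple; a production loop would use a stateful CoreML model with a KV cache.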
## 📝 Details

- **Base model**: [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) (2.7B parameters)
- **Fine-tuning**: trained on curated HTML/CSS website examples
- **Input**: natural-language description of a website
- **Output**: complete HTML with embedded CSS and JavaScript
- **Context length**: 4096 tokens
- **License**: MIT
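Since the model emits a standalone HTML file, the decoded text may still contain the echoed prompt or trailing tokens. A small post-processing helper — purely illustrative, not part of this repository — that isolates the HTML document from raw generated text:

```python
import re

def extract_html(generated_text):
    """Return the span from the first <!DOCTYPE html> or <html> tag through
    the closing </html> tag, or None if no complete document is found."""
    match = re.search(
        r"(<!DOCTYPE html.*?</html>|<html.*?</html>)",
        generated_text,
        re.IGNORECASE | re.DOTALL,
    )
    return match.group(1) if match else None
```

The extracted string can be written straight to an `.html` file and opened in a browser or a `WKWebView`.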