---
license: mit
tags:
- coreml
- phi-2
- code-generation
- html
- css
- javascript
- ios
- macos
- apple
- on-device
pipeline_tag: text-generation
base_model: microsoft/phi-2
---
# WebICoder v3 – CoreML
**Generate complete, production-ready HTML/CSS websites directly on your iPhone, iPad, or Mac – no internet required.**
WebICoder v3 is a fine-tuned [Phi-2](https://huggingface.co/microsoft/phi-2) (2.7B parameters) model, optimized for on-device HTML code generation using Apple's CoreML framework.
## Available Models
| Model | Size | Precision | Best For |
|-------|------|-----------|----------|
| `WebICoder-v3-fp16.mlpackage` | ~5.5 GB | FP16 | Mac (M1/M2/M3) – maximum quality |
| `WebICoder-v3-8bit.mlpackage` | ~2.8 GB | INT8 | iPad – good quality, smaller |
| `WebICoder-v3-4bit.mlpackage` | ~1.4 GB | 4-bit | iPhone – smallest, good quality |
## Quick Start (Swift / Xcode)
### 1. Download the model
```bash
# Install Git LFS first
git lfs install
git clone https://huggingface.co/nexsendev/webicoder-v3-coreml
```
### 2. Add to your Xcode project
Drag the `.mlpackage` file into your Xcode project. Xcode will automatically compile it.
### 3. Run inference
```swift
import CoreML
import Tokenizers

// Load the compiled model (adjust the URL to where the model lives in your bundle)
let modelURL = Bundle.main.url(forResource: "WebICoder-v3-4bit", withExtension: "mlmodelc")!
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine
let model = try MLModel(contentsOf: modelURL, configuration: config)

// Tokenize the prompt
let tokenizer = try await AutoTokenizer.from(pretrained: "nexsendev/webicoder-v3-coreml")
let prompt = "Create a modern landing page for a coffee shop"
let inputIds = tokenizer.encode(text: prompt)

// Pack token ids and attention mask into MLMultiArrays
let inputArray = try MLMultiArray(shape: [1, inputIds.count as NSNumber], dataType: .int32)
let maskArray = try MLMultiArray(shape: [1, inputIds.count as NSNumber], dataType: .int32)
for (i, id) in inputIds.enumerated() {
    inputArray[i] = NSNumber(value: id)
    maskArray[i] = 1
}

// Run a single forward pass
let prediction = try model.prediction(from: MLDictionaryFeatureProvider(
    dictionary: ["input_ids": inputArray, "attention_mask": maskArray]
))
```
## Usage with Python (macOS only)
```python
import coremltools as ct
import numpy as np
from transformers import AutoTokenizer
# Load
model = ct.models.MLModel("WebICoder-v3-fp16.mlpackage")
tokenizer = AutoTokenizer.from_pretrained("nexsendev/webicoder-v3-coreml")
# Generate
prompt = "Create a modern landing page for a coffee shop with dark theme"
tokens = tokenizer.encode(prompt, return_tensors="np").astype(np.int32)
mask = np.ones_like(tokens, dtype=np.int32)
result = model.predict({"input_ids": tokens, "attention_mask": mask})
logits = result["logits"]
```
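A single `predict` call returns logits for every position; to generate a full HTML file you need an autoregressive loop. Below is a minimal greedy-decoding sketch; `predict_fn` is a stand-in for a wrapper around `model.predict` (a fixed-shape CoreML model may also require padding to its expected sequence length), and the EOS token id is an assumption you should read from the tokenizer:

```python
import numpy as np

def greedy_generate(predict_fn, tokens, eos_id, max_new_tokens=512):
    """Greedily append the argmax token from the last position until EOS.

    predict_fn: callable mapping a [1, seq_len] int32 array to logits of
    shape [1, seq_len, vocab] -- e.g. a wrapper around model.predict.
    """
    tokens = np.asarray(tokens, dtype=np.int32).reshape(1, -1)
    for _ in range(max_new_tokens):
        logits = predict_fn(tokens)              # [1, seq_len, vocab]
        next_id = int(np.argmax(logits[0, -1]))  # argmax over the vocabulary
        tokens = np.concatenate([tokens, [[next_id]]], axis=1)
        if next_id == eos_id:
            break
    return tokens[0].tolist()

# Toy stand-in for the CoreML model so the loop is runnable here:
# it always prefers token (last_id + 1) % vocab.
def toy_predict(tokens):
    vocab = 5
    logits = np.zeros((1, tokens.shape[1], vocab), dtype=np.float32)
    logits[0, -1, (tokens[0, -1] + 1) % vocab] = 1.0
    return logits

print(greedy_generate(toy_predict, [0], eos_id=4))  # [0, 1, 2, 3, 4]
```

Swap `toy_predict` for a function that calls `model.predict({"input_ids": ..., "attention_mask": ...})` and returns `result["logits"]`, then decode the returned ids with `tokenizer.decode`.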
## Prompt Format
The model works best with descriptive prompts:
```
Create a modern landing page for a coffee shop with:
- Dark theme with warm colors
- Hero section with a background image
- Menu section with cards
- Contact form
- Responsive design
```
The model will output a complete, standalone HTML file with embedded CSS.
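If you assemble prompts programmatically (e.g. from user-selected options in an app), a tiny helper can produce the bullet format above. This helper is a hypothetical convenience, not part of any shipped API:

```python
def build_prompt(description: str, features: list[str]) -> str:
    """Assemble a descriptive prompt in the bullet format the model expects."""
    lines = [f"{description} with:"]
    lines += [f"- {feature}" for feature in features]
    return "\n".join(lines)

prompt = build_prompt(
    "Create a modern landing page for a coffee shop",
    ["Dark theme with warm colors", "Responsive design"],
)
print(prompt)
```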
## Hardware Requirements
| Model | Min RAM | Recommended Device |
|-------|---------|-------------------|
| FP16 | 6 GB | Mac M1/M2/M3/M4 |
| 8-bit | 3 GB | iPad Pro, iPad Air |
| 4-bit | 2 GB | iPhone 15, iPhone 16 |
All models require **iOS 17+** or **macOS 14+**.
## Details
- **Base model**: [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) (2.7B parameters)
- **Fine-tuning**: Trained on curated HTML/CSS website examples
- **Input**: Natural language description of a website
- **Output**: Complete HTML with embedded CSS and JavaScript
- **Context length**: 4096 tokens
- **License**: MIT