feat: add iOS-optimized LAM weights (fp16, 192MB external data) 7ae327e verified sepehrn committed 21 days ago
feat: add iOS-optimized LAM variant (opset 18, native LayerNorm, 608 nodes) 254a569 verified sepehrn committed 21 days ago
fix: replace fp16 iOS weights with fp32 (383MB, WASM compatible) f06d3bf verified sepehrn committed 22 days ago
fix: replace fp16 iOS variant with fp32 (WASM EP lacks fp16 kernels) dc1de75 verified sepehrn committed 22 days ago
feat: add iOS-optimized LAM weights (fp16, 192MB external data) b551c35 verified sepehrn committed 22 days ago
feat: add iOS-optimized LAM variant (opset 18, native LayerNorm, 608 nodes) fe6b020 verified sepehrn committed 22 days ago
docs: update config.json with surgical fp16 as recommended, int8 warning 2235ef2 verified sepehrn committed 23 days ago
docs: comprehensive model card with surgical fp16 details, int8 findings, platform guide 252eb54 verified sepehrn committed 23 days ago
feat: surgical fp16 (model_fp16.onnx.data) - external data format, decomposed LayerNorm preserved 3288b85 verified sepehrn committed 23 days ago
feat: surgical fp16 (model_fp16.onnx) - external data format, decomposed LayerNorm preserved 5e935b1 verified sepehrn committed 23 days ago