How to use webgpu/Phi-4-mini-instruct-ONNX-MHA with Transformers.js:
```js
// npm i @huggingface/transformers
import { pipeline } from '@huggingface/transformers';

// Allocate the text-generation pipeline
const pipe = await pipeline('text-generation', 'webgpu/Phi-4-mini-instruct-ONNX-MHA');
```
This requires the fixes from https://huggingface.co/onnx-community/Phi-4-mini-instruct-ONNX-MHA/discussions/1 and https://github.com/huggingface/transformers.js/issues/1460; the patched files are available on the PR branch:
https://huggingface.co/onnx-community/Phi-4-mini-instruct-ONNX-MHA/tree/refs%2Fpr%2F1