Instructions for using webgpu/Phi-4-mini-instruct-ONNX-MHA with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers.js
How to use webgpu/Phi-4-mini-instruct-ONNX-MHA with Transformers.js:
```js
// npm i @huggingface/transformers
import { pipeline } from '@huggingface/transformers';

// Allocate the text-generation pipeline
const pipe = await pipeline('text-generation', 'webgpu/Phi-4-mini-instruct-ONNX-MHA');

// Generate text from a prompt (the prompt and max_new_tokens value are illustrative)
const output = await pipe('Write a haiku about the sea.', { max_new_tokens: 64 });
console.log(output);
```