How to use Xenova/OpenELM-270M-Instruct with Transformers.js:
```js
// npm i @huggingface/transformers
import { pipeline } from '@huggingface/transformers';

// Allocate a text-generation pipeline
const pipe = await pipeline('text-generation', 'Xenova/OpenELM-270M-Instruct');
```
A quantized `q4f16` variant of `model.onnx` is also available as `model_q4f16.onnx`.