Instructions for using Xenova/phi-1_5_dev with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
How to use Xenova/phi-1_5_dev with Transformers.js:
```javascript
// npm i @huggingface/transformers
import { pipeline } from '@huggingface/transformers';

// Allocate pipeline
const pipe = await pipeline('text-generation', 'Xenova/phi-1_5_dev');
```
Are any of the LLMs GPU accelerated in V3 at this time?
#1 opened by BoscoTheDog
Just curious. I keep looking for a small GPU-accelerated model with a large context to do summaries of documents.