Instructions to use FastFlowLM/Gemma4-E4B-IT-NPU2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use FastFlowLM/Gemma4-E4B-IT-NPU2 with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForImageTextToText

tokenizer = AutoTokenizer.from_pretrained("FastFlowLM/Gemma4-E4B-IT-NPU2")
model = AutoModelForImageTextToText.from_pretrained("FastFlowLM/Gemma4-E4B-IT-NPU2")
```
- Notebooks
- Google Colab
- Kaggle