How to use FastFlowLM/Gemma4-E2B-IT-NPU2 with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForImageTextToText

tokenizer = AutoTokenizer.from_pretrained("FastFlowLM/Gemma4-E2B-IT-NPU2")
model = AutoModelForImageTextToText.from_pretrained("FastFlowLM/Gemma4-E2B-IT-NPU2")
```