Can't run Gemma4 locally

#6
by polymathLTE - opened

I'm trying to run the Gemma4 E2B model locally through the Hugging Face transformers library:

from transformers import AutoTokenizer, AutoModelForCausalLM
local_path = /teamspace/studios/this_studio/gemma4_test/gemma-4-E2B

tokenizer = AutoTokenizer.from_pretrained(local_path)
model = AutoModelForCausalLM.from_pretrained(local_path)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))

but I keep getting the error message

ModuleNotFoundError: Could not import module 'Gemma4Config'. Are this object's requirements defined correctly?

I've already tried upgrading transformers to version 5.7.0, but it's still the same thing.
Please, has anyone faced a similar issue, and how did you resolve it?
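For anyone debugging a similar load failure: `AutoModelForCausalLM` picks the architecture from the `model_type` field in the repo's `config.json`, so a `ModuleNotFoundError` for a `*Config` class usually means the installed transformers release has no mapping for that `model_type`. Here is a minimal stdlib-only sketch for inspecting that field (the demo directory and sample config contents are made up for illustration; the real path in this thread is `/teamspace/studios/this_studio/gemma4_test/gemma-4-E2B`):

```python
import json
from pathlib import Path

def read_model_type(model_dir: str) -> str:
    """Return the "model_type" declared in the repo's config.json.

    transformers maps this string to a config/model class; an
    unrecognized value is a common cause of import errors on load.
    """
    config_path = Path(model_dir) / "config.json"
    with config_path.open() as f:
        config = json.load(f)
    return config.get("model_type", "<missing>")

# Demo with a throwaway directory and a fabricated config.json:
demo_dir = Path("demo-model")
demo_dir.mkdir(exist_ok=True)
(demo_dir / "config.json").write_text(json.dumps({"model_type": "gemma"}))
print(read_model_type("demo-model"))  # prints: gemma
```

If the printed `model_type` is not one your installed transformers version knows about, upgrading (or installing from source) is the usual next step.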

Google org

Hi @polymathLTE ,

Thanks for reporting the issue. Looking at your code, could you please wrap local_path in quotes (local_path = "/teamspace/studios/this_studio/gemma4_test/gemma-4-E2B") and let us know if the issue still persists?

Thanks for your response. My local_path is already wrapped in quotes; sorry, I omitted that in my opening post. And yes, the issue persists.

Google org

Hi @polymathLTE ,

Thanks for the clarification. Since you are loading from "local_path", we need to run a few checks first. Could you please share the output of the following?

  1. Verify Local Config & Model Type:
    Run these in your terminal to ensure the file exists and is mapped to the correct architecture:
    ls -la /teamspace/studios/this_studio/gemma4_test/gemma-4-E2B/config.json
    grep "model_type" /teamspace/studios/this_studio/gemma4_test/gemma-4-E2B/config.json
    Additionally, could you please provide a screenshot of the files stored in your local directory? This will help us confirm that all the necessary architecture files are present.
    Once you confirm these checks, we can look into the next steps.

The output for the ls and grep commands:
[screenshot: ls and grep output]

Screenshot of my local directory:
[screenshot: working directory]

I cloned the repo from Huggingface with git clone https://huggingface.co/google/gemma-4-E2B
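A side note on the git clone step (an assumption, not something confirmed in this thread): if git-lfs is not installed when a Hugging Face repo is cloned, the large weight files come down as small text pointer stubs instead of real weights, and loading then fails. A quick stdlib check for that failure mode, with a fabricated stub for illustration (the real directory in this thread is `/teamspace/studios/this_studio/gemma4_test/gemma-4-E2B`):

```python
from pathlib import Path

# First line of every git-lfs pointer file, per the LFS spec.
LFS_POINTER_PREFIX = b"version https://git-lfs.github.com/spec/v1"

def lfs_pointer_stubs(model_dir: str) -> list[str]:
    """List files that are git-lfs pointer stubs rather than real content.

    If any *.safetensors file appears here, the clone did not fetch the
    actual weights (git lfs pull would be needed).
    """
    stubs = []
    for path in Path(model_dir).rglob("*"):
        if path.is_file():
            with path.open("rb") as f:
                if f.read(len(LFS_POINTER_PREFIX)) == LFS_POINTER_PREFIX:
                    stubs.append(path.name)
    return stubs

# Demo: create a directory containing one fake pointer stub.
demo = Path("demo-clone")
demo.mkdir(exist_ok=True)
(demo / "model.safetensors").write_bytes(
    LFS_POINTER_PREFIX + b"\noid sha256:abc\nsize 123\n"
)
print(lfs_pointer_stubs("demo-clone"))  # prints: ['model.safetensors']
```

This does not explain a `Gemma4Config` import error by itself, but it is a cheap way to rule out an incomplete clone before digging into library versions.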


@polymathLTE:
[image]
I tried to run it too, but unfortunately it took 8+ hours of work (link).
The tutorial is here: https://developers.googleblog.com/en/own-your-ai-fine-tune-gemma-3-270m-for-on-device/ — the task was not as easy as you might think.

[image]

[image]
I am right, to be honest. Papaev Burin-Zhargal.
P.S. The result was pretty good.
[image]
