V4.22.0 processor update

#2
by lewtun HF Staff - opened
Hugging Face Internal Testing Organization org
edited Sep 8, 2022

This PR follows #1 and:

  • updates the CLIP model to be compatible with transformers v4.22; with the previous files, loading the tokenizer throws an error unless from_slow=True is passed
  • sets the image size and crop size to 30, the default value associated with the checkpoint this model was derived from
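For illustration, the relevant part of the updated preprocessor_config.json would look roughly like this (a sketch showing only the two resized fields; the other keys of the image-processing config are omitted):

```json
{
  "crop_size": 30,
  "size": 30
}
```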

cc @ydshieh

lewtun changed pull request title from V4.22.0 update to V4.22.0 processor update

The change in tokenizer_config.json seems strange to me. It contains crop_size, size, and processor_class. I guess it is somehow mixing in fields from preprocessor_config.json?


I generated tokenizer_config.json by using CLIPProcessor.push_to_hub() - presumably these fields are added automatically?
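For reference, a hypothetical reconstruction of the regenerated tokenizer_config.json, based only on the fields named in this discussion (the values are illustrative, and the surrounding tokenizer keys are omitted):

```python
import json

# Fields reportedly present in the regenerated tokenizer_config.json.
# processor_class is expected: the processor's save/push writes it so the
# file can be linked back to CLIPProcessor. crop_size and size are
# image-preprocessing fields that normally live in preprocessor_config.json.
tokenizer_config = {
    "processor_class": "CLIPProcessor",
    "crop_size": 30,  # unexpected here
    "size": 30,       # unexpected here
}

# Fields that look like they leaked from preprocessor_config.json:
image_fields = sorted(k for k in tokenizer_config if k in {"crop_size", "size"})
print(json.dumps(image_fields))  # ["crop_size", "size"]
```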


Quick ping on this PR, I encountered this issue while running some TF tests. Is it okay to merge?

Ready to merge
This branch is ready to be merged automatically.
