Make model compatible with newer versions of Transformers

#8
by hmellor HF Staff - opened

`bytes_to_unicode` should be imported from `convert_slow_tokenizer`, not from the GPT-2 tokenization module.

I'd recommend making the same change in your other models which do this.
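A minimal sketch of the suggested change, with a fallback for older Transformers releases that still expose the helper in the GPT-2 module (the local reimplementation in the `except` branch is only a stand-in for environments without Transformers installed, not part of the suggested fix):

```python
# Import bytes_to_unicode from its current location in Transformers;
# fall back to the old GPT-2 module path for older releases.
try:
    from transformers.convert_slow_tokenizer import bytes_to_unicode
except ImportError:
    try:
        from transformers.models.gpt2.tokenization_gpt2 import bytes_to_unicode
    except ImportError:
        # Stand-in sketch of what the helper computes, so this snippet runs
        # without Transformers: a reversible map from every byte value (0-255)
        # to a printable unicode character, as used by byte-level BPE.
        def bytes_to_unicode():
            bs = (list(range(ord("!"), ord("~") + 1))
                  + list(range(ord("\u00a1"), ord("\u00ac") + 1))
                  + list(range(ord("\u00ae"), ord("\u00ff") + 1)))
            cs = bs[:]
            n = 0
            for b in range(2 ** 8):
                if b not in bs:
                    bs.append(b)
                    cs.append(2 ** 8 + n)
                    n += 1
            return dict(zip(bs, [chr(c) for c in cs]))

# One table entry per possible byte value.
byte_encoder = bytes_to_unicode()
```

Either import path yields the same 256-entry byte-to-unicode table; only the module it lives in changed between Transformers versions.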

XLang NLP Lab org

Thank you!

xywang626 changed pull request status to merged
