Question about multilingual tokenizer support and inference speed for GLM-5

#33
by Mustina - opened

Hello, I'm exploring the GLM-5 model and would like to know whether there are plans to support multilingual tokenizers in future releases. Additionally, I have observed slower inference times on longer sequences; do you have any recommendations for optimizing this?