Error with MLX

#2
by ailexleon - opened

Hi, just FYI.
I converted Asmodeus v2 to MLX format using mlx-my-repo. But when I try to load the model, I receive the following error:

Error when loading model: TypeError: transformers.tokenization_utils_tokenizers.TokenizersBackend._patch_mistral_regex() got multiple values for keyword argument 'fix_mistral_regex'

So I tried converting it locally on my Mac with a newer version of mlx-lm. The conversion fails with the same error.

DarkArtsForge org

I'm not sure; on Windows this error was ignored when quantizing to GGUF. Maybe @McG-221 can help, since he has somehow converted this to MLX.

https://huggingface.co/McG-221/Asmodeus-24B-v2-mlx-8Bit

Hi, I also used mlx-my-repo. That Space is notorious for its little quirks; maybe they were working on it at the time you tried?

mlx-my-repo did the conversion for me, but the model won't load, so I deleted the repo.

@McG-221 I tried loading your McG-221/Asmodeus-24B-v2-mlx-8Bit, but I get the same error, in both LM Studio and mlx_lm.
Have you been able to load and use it successfully?

@ailexleon I now also did a 4-bit MLX using mlx-my-repo, you can find it here McG-221/Asmodeus-24B-v2-mlx-4Bit ✌️

Sorry, didn't have time to download it yet... maybe tonight (my time). Will let you know!

@ailexleon @Naphula The solution: in the file tokenizer_config.json, delete the entire line containing fix_mistral_regex, so both the key and its value are removed. After that, it works.

Edit: I updated the file in my 8-bit quant. Just use it if you want...
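For anyone who wants to script that edit instead of doing it by hand, here is a minimal Python sketch. The key name fix_mistral_regex comes from the error message above; the helper name strip_fix_mistral_regex and the sample config values are made up for illustration, and the demo operates on a throwaway temp file (point it at the tokenizer_config.json in your real model directory instead):

```python
import json
import tempfile
from pathlib import Path

def strip_fix_mistral_regex(config_path: Path) -> None:
    """Remove the fix_mistral_regex entry (key and value) from tokenizer_config.json."""
    config = json.loads(config_path.read_text())
    # pop with a default is a no-op if the key is already absent
    config.pop("fix_mistral_regex", None)
    config_path.write_text(json.dumps(config, indent=2) + "\n")

# Demo on a throwaway copy; replace with the path to your converted model directory.
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "tokenizer_config.json"
    path.write_text(json.dumps({"fix_mistral_regex": False, "model_max_length": 131072}))
    strip_fix_mistral_regex(path)
    print(json.loads(path.read_text()))
```

Editing the JSON in code rather than with a text editor avoids accidentally leaving a trailing comma behind, which would make the file invalid JSON.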

Great, thanks!

ailexleon changed discussion status to closed
DarkArtsForge org

I changed the value from false to true and the error disappeared on my end. Setting it to true or deleting the line entirely should make no difference, as the regex fix is only applied to 12B Nemo models that use ChatML.
