This model, finding1/LongCat-Flash-Thinking-2601-MLX-5.5bpw, was converted to MLX format from meituan-longcat/LongCat-Flash-Thinking-2601 using mlx-lm version 0.30.0. The first run of

mlx_lm.convert --hf-path meituan-longcat/LongCat-Flash-Thinking-2601 --mlx-path LongCat-Flash-Thinking-2601-MLX-5.5bpw --quantize --q-bits 5

crashed with a KeyError. Adding "model_type": "longcat_flash" to the downloaded config.json and rerunning the same command completed the conversion.
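The config.json patch described above can be sketched as a short shell snippet. This is illustrative only: CONFIG would point at the config.json inside your local download of meituan-longcat/LongCat-Flash-Thinking-2601 (the exact path depends on your Hugging Face cache layout), and the stand-in file created here exists only so the sketch is self-contained.

```shell
# Illustrative sketch: add the missing "model_type" key to config.json.
# In practice CONFIG points at the config.json of the downloaded model;
# here we create a minimal stand-in so the snippet runs on its own.
CONFIG=config.json
printf '{"hidden_size": 6144}\n' > "$CONFIG"   # stand-in for the real config

python3 - "$CONFIG" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)

# The key whose absence caused mlx_lm.convert to crash with a KeyError.
cfg["model_type"] = "longcat_flash"

with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
EOF

grep '"model_type"' "$CONFIG"
```

After the patch, rerunning the mlx_lm.convert command above picks up the corrected config.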
Downloads last month: 58
Model size: 561B params
Tensor types: BF16 · U32 · F32
Hardware compatibility: 5-bit
Model tree for finding1/LongCat-Flash-Thinking-2601-MLX-5.5bpw
Base model: meituan-longcat/LongCat-Flash-Thinking-2601