I see GGUFs for this model were just re-uploaded - what's changed?
Thanks for the upload!
Oh, we updated the chat template to be in line with the latest llama-cli, and made some calibration data changes. I would redownload only if you found the old ones not to work well.
As a longer response due to https://www.reddit.com/r/LocalLLaMA/comments/1pwjk1y/updates_of_models_on_hf_changelogs/:
Was going to post about it, but these are mainly refreshes of old quants (quality-of-life updates). llama.cpp and other inference engines like LM Studio now support more features, including but not limited to:
- Non-ASCII decoding for tools (affects non-English languages). For example, the old default (ensure_ascii=True) would cause "café" → "caf\u00e9", whilst now ensure_ascii=False keeps "café" → "café".
- Converted reasoning-content parsing back to the original [0] and [-1] from |first and |last. We used to change [0] to |first and [-1] to |last, primarily to be compatible with LM Studio and llama-cli. With the upgrade of llama-cli to use llama-server, we can revert this. llama-server also didn't like |first, so we fixed that as well.
- (Ongoing process) We will add Ollama Modelfiles for compatibility, so these GGUFs work in Ollama.
- Added a lot of tool calls to our calibration dataset, which makes tool calling better, especially for smaller quants.
- A bit more calibration data for GLM models, adding a teeny tiny bit more accuracy overall.
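As a minimal illustration of the ensure_ascii change above (plain Python `json` here, not the actual llama.cpp code, but the escaping behaviour is the same):

```python
import json

# Tool-call arguments containing a non-ASCII string.
args = {"location": "café"}

# Old behaviour: non-ASCII characters are escaped to \uXXXX sequences.
escaped = json.dumps(args, ensure_ascii=True)

# New behaviour: UTF-8 characters pass through unchanged.
raw = json.dumps(args, ensure_ascii=False)

print(escaped)  # {"location": "caf\u00e9"}
print(raw)      # {"location": "café"}
```

With ensure_ascii=True the model sees the escape sequence as tokens, which is why non-English tool arguments suffered.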
GGUFs which will receive Quality of Life updates:
https://huggingface.co/unsloth/GLM-4.6-GGUF
https://huggingface.co/unsloth/GLM-4.5-GGUF
https://huggingface.co/unsloth/GLM-4.5-Air-GGUF
https://huggingface.co/unsloth/GLM-4.6V-GGUF
https://huggingface.co/unsloth/GLM-4.6V-Flash-GGUF
https://huggingface.co/unsloth/GLM-4.7-GGUF
I also found it slightly confusing to not be able to immediately tell whether an already downloaded model is on par with an updated one. Some time ago I dared to suggest to the awesome llama.cpp devs that the next generation of the GGUF format include fields containing a version number or a release date, so an updated model could be distinguished from a model with the same name released earlier. I have also tried using the Hugging Face API to retrieve the hash (of sorts) of models, which is meant to verify the integrity of a download, as another way to tell models I had already downloaded apart from those (possibly updated and otherwise indistinguishable) available from the same location. As a simple workaround, a convention could be adopted of encoding an increasing subversion number into the model name.
@blowtorch-971
So if you use snapshot_download, it automatically considers the hash - however, I like your idea of a release date - I might actually just add this into the metadata!
Sweet! This way it would be much easier to tell whether a model has been reuploaded after you downloaded it, even without checking whether the hub's sha256 ETag matches the sha256 checksum of the already downloaded file.
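For anyone wanting to do that check by hand today, here is a small sketch: compute the sha256 of the local GGUF and compare it against the hub's LFS sha256 (which, if I recall correctly, the Hugging Face API exposes in the per-file metadata). The function names are mine, not part of any library:

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the sha256 hexdigest of a local file, streaming in 1 MiB chunks
    so multi-gigabyte GGUFs don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def is_up_to_date(local_path: str, hub_sha256: str) -> bool:
    # If the digests match, the local file is byte-identical to the hub copy,
    # so a re-upload would show up here as a mismatch.
    return file_sha256(local_path) == hub_sha256.lower()
```

The hub-side digest would come from the repo's file listing; a release-date field in the GGUF metadata would make this comparison unnecessary for most cases.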
Cheers
P.S. And many thanks for your efforts