Thank you Z.AI, I love this model! ❀

#43
by MrDevolver - opened
  1. It speaks my native language fairly well - most other models produce much worse output, except maybe Gemma 3, but that one is fairly old by now.
  2. It was clearly trained on more recent data than most models in the same size category I've been using!
  3. It beats anything else in the same size category I've used at coding!
  4. It is very smart and competent even in roleplay scenarios, even though that wasn't the model's main purpose!
  5. I can finally ENJOY thinking models instead of dreading them: I wait about 3-4 minutes for the model to finish thinking. If you have super powerful hardware you may laugh and call that slow, but my hardware isn't the strongest these days, and most models I've used think for much longer - even smaller models can think for 15-120 minutes! So you can bet I am VERY GRATEFUL that this model thinks for only about 4 minutes, and its thinking process is much more condensed and efficient!
  6. Overall, this model feels like a much bigger model - quite powerful and competent for its size. It reminds me of at least Gemini 2.5 Flash, and we can use it at home - that's awesome! 😳

Thank you Z.AI for this awesome model! I knew you could do it, and you did not disappoint - you managed to create something extraordinary: a single, competent model that I can finally run locally on my own hardware, instead of juggling many different models for different purposes. This model is a MASTERPIECE! ❀

That is outstanding!

How are you running it? In my language, vLLM and SGLang generate garbage, but llama.cpp works really well.

I'm running it using the LM Studio desktop application, which uses the llama.cpp runtime.
