hi, who are you? -> I am Claude.
Makes sense
Good progress
nice
Usually that's a sign of hallucination in the model, and it's very often seen in low-quality quants. I've used Kimi K2 Think at Q5_K_M (usually ~735 GB of RAM) and quantization destroyed coding, as usual, but it never produced phrases like that for me; I can't even remember the last time it did, because I run models at quite high quality.
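As a rough sanity check on that ~735 GB figure, here's a minimal Python sketch estimating GGUF file size from parameter count and bits per weight. The bits-per-weight table is an approximation of common llama.cpp quant types, and the ~1T parameter count for Kimi K2 comes from public descriptions of the model, so treat the outputs as ballpark numbers only:

```python
# Rule of thumb: file size ~= parameter count * bits-per-weight / 8.
# The bpw values below are approximations for common GGUF quant types
# (assumption), not exact figures reported by llama.cpp.

QUANT_BPW = {
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.5,
    "Q6_K": 6.6,
    "Q8_0": 8.5,
    "BF16": 16.0,
}

def gguf_size_gb(n_params: float, quant: str) -> float:
    """Approximate model file size in GB for a given quant type."""
    return n_params * QUANT_BPW[quant] / 8 / 1e9

for quant in QUANT_BPW:
    # ~1e12 total parameters for Kimi K2 (MoE), per public model pages
    print(f"Kimi K2 @ {quant}: ~{gguf_size_gb(1.0e12, quant):.0f} GB")
```

At Q5_K_M this lands around ~690 GB of weights, which is in the same ballpark as the ~735 GB above once KV cache and runtime overhead are added.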
There is one guy on YouTube, pushed everywhere by the algorithm, who launches models at 50% of their full-quality size on Mac computers. He sometimes gets replies like that and then declares the models bad overall. He was very sure that a model at 50% is the genuine, real product; don't fall for that lie about Mac inference solutions.
##My Hardware##
- Intel Xeon E5-2699 v4, LGA 2011-3, 22 cores / 44 threads (2016): $110
- Gigabyte motherboard, C612 chipset, 12 RAM slots, VGA output (2016): $150
- Samsung/Hynix ECC RAM, 12x64 GB = 768 GB: ~$900
- VGA monitor
- IKEA chair

Runs trillion-parameter DeepSeeks and Kimis at Q5-Q6, and 400-500B models in BF16.
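For reference, here's a minimal llama-cpp-python sketch of CPU-only inference on a box like this. The model filename is a placeholder, and the thread count is just a common starting point (one thread per physical core), not a tuned value:

```python
# Minimal sketch: CPU-only inference on a many-core Xeon with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="kimi-k2-Q5_K_M.gguf",  # placeholder path, not a real file
    n_ctx=8192,     # context window; larger contexts cost more RAM
    n_threads=22,   # physical cores often beat hyperthreads for CPU GEMM
)

out = llm("hi, who are you?", max_tokens=64)
print(out["choices"][0]["text"])
```

With weights this large, llama.cpp memory-maps the GGUF file by default, so having 768 GB of RAM is what makes Q5-Q6 trillion-parameter runs practical without constant paging.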


