Collections

Discover the best community collections!

Collections trending this week
MMLU Pro benchmark for GGUFs (1 shot)
"Not all quantized models perform well": the Ollama serving framework runs on an NVIDIA GPU, while llama.cpp runs on the CPU with AVX & AMX.