---
license: apache-2.0
inference: false
base_model: openbmb/MiniCPM-V-2_6
base_model_relation: quantized
tags:
  - green
  - llmware-chat
  - p7
  - gguf
---

# minicpm-2.6-gguf

**minicpm-2.6-gguf** is a GGUF Q4_K_M (4-bit) quantized version of MiniCPM-V-2.6, providing a fast, small inference implementation optimized for AI PCs.
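As a minimal usage sketch, the GGUF file can be pulled with `huggingface_hub` and run locally with `llama-cpp-python`. The repo id and file name below are assumptions based on this card and should be checked against the actual repo; note also that MiniCPM-V is a vision-language model, so image input in llama.cpp additionally requires the separate mmproj projector file, if the repo includes one.

```python
# A minimal sketch, assuming the repo id is "llmware/minicpm-2.6-gguf" and the
# quantized file follows the usual Q4_K_M naming; adjust both to match the repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="llmware/minicpm-2.6-gguf",      # assumed repo id
    filename="MiniCPM-V-2_6-Q4_K_M.gguf",    # assumed file name
)

# Load the quantized model; context size is a tunable example value
llm = Llama(model_path=model_path, n_ctx=4096)

# Text-only chat completion (image input needs the mmproj projector file)
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is GGUF quantization?"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```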

## Model Description

- **Developed by:** openbmb
- **Quantized by:** bartowski
- **Model type:** minicpmv
- **Parameters:** 7 billion
- **Model Parent:** openbmb/MiniCPM-V-2_6
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Chat, general-purpose LLM (see the usage sketch below this list)
- **Quantization:** int4 (Q4_K_M)
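Since this quant is packaged for the llmware ecosystem, it can likely also be loaded through llmware's `ModelCatalog`. The catalog name below is an assumption and should be verified against the catalog listing.

```python
# A minimal sketch, assuming the model is registered in the llmware catalog
# under "minicpm-2.6-gguf"; check ModelCatalog for the exact registered name.
from llmware.models import ModelCatalog

model = ModelCatalog().load_model("minicpm-2.6-gguf")  # assumed catalog name
response = model.inference("Summarize the benefits of 4-bit quantization.")
print(response["llm_response"])
```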

## Model Card Contact

- [llmware on hf](https://huggingface.co/llmware)
- [llmware website](https://www.llmware.ai)