DialoGPT-medium-GGUF

This repository contains the full suite of GGUF quantizations for microsoft/DialoGPT-medium.

DialoGPT is a large-scale pretrained dialogue response generation model. This version has been converted to the GGUF format for use with llama.cpp and other compatible runtimes such as LM Studio or GPT4All.

Usage Instructions

Using llama.cpp

```shell
./llama-cli -m DialoGPT-medium.Q4_K_M.gguf \
  -e -p "User: What is the secret to a happy life?\nBot:" \
  -n 128
```

The `-e` flag tells llama-cli to process escape sequences such as `\n` in the prompt.
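Note that the `User:`/`Bot:` prompt above is only a convenience. DialoGPT was trained on dialogue turns concatenated with the GPT-2 end-of-text token `<|endoftext|>` as the separator, so prompts in that format tend to match the model's training distribution more closely. A minimal sketch of building such a prompt (the helper name is ours, not part of any library):

```python
# Sketch: build a DialoGPT-style prompt. DialoGPT was trained on
# dialogue turns joined by the GPT-2 end-of-text token, so we use
# "<|endoftext|>" as the turn separator.
EOS = "<|endoftext|>"

def build_prompt(turns):
    """Join dialogue turns with the EOS separator, leaving a
    trailing separator so the model generates the next reply."""
    return EOS.join(turns) + EOS

prompt = build_prompt([
    "Does money buy happiness?",
    "Depends how much money you spend on it.",
    "What is the best way to buy happiness?",
])
print(prompt)
```

Pass the resulting string as the prompt to llama-cli (or any other GGUF runtime) to continue the conversation.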
Model details

Downloads last month: 301
Format: GGUF
Model size: 0.4B params
Architecture: gpt2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

Model tree for Fu01978/DialoGPT-medium-GGUF: quantized from microsoft/DialoGPT-medium.