How to use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="cloudyu/Phoenix_DPO_60B_gguf",
	filename="",  # set to the name of the GGUF file to download from the repo
)
output = llm(
	"Once upon a time,",
	max_tokens=512,  # generate up to 512 tokens
	echo=True  # include the prompt in the returned text
)
print(output)
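The call returns an OpenAI-style completion dict; the generated text sits under `choices[0]["text"]`. A minimal sketch of pulling it out (the dict below is a mocked example in that shape, not real model output):

```python
# Mocked completion dict, matching the shape llama-cpp-python returns
output = {
    "id": "cmpl-xxxx",
    "object": "text_completion",
    "choices": [
        {"text": "Once upon a time, there was a llama.",
         "index": 0, "finish_reason": "length"},
    ],
    "usage": {"prompt_tokens": 6, "completion_tokens": 9,
              "total_tokens": 15},
}

# The generated continuation (with echo=True it includes the prompt)
text = output["choices"][0]["text"]
print(text)
```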

Description

This repo contains GGUF format model files for cloudyu/Phoenix_DPO_60B.

About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
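As a quick sanity check after downloading, a GGUF file can be recognized by its header: the 4-byte ASCII magic `GGUF` followed by the format version as a little-endian uint32. A minimal reader sketch:

```python
import struct

def gguf_version(path):
    # A GGUF file starts with the 4-byte ASCII magic "GGUF",
    # followed by the format version as a little-endian uint32.
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        (version,) = struct.unpack("<I", f.read(4))
        return version
```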

How to run GGUF with llama.cpp on an A10 (24 GB VRAM)

   git clone https://github.com/ggerganov/llama.cpp.git
   cd llama.cpp/
   make LLAMA_CUBLAS=1
   ./main --model ./cloudyu_Phoenix_DPO_60B.Q3_K_XS.gguf -p "what is the biggest animal?" -i -ngl 36
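The `-ngl 36` flag offloads 36 transformer layers to the GPU. A rough way to size that number is to divide the quantized file size by the layer count and see how many layers fit in the VRAM budget; the sketch below uses illustrative figures (file size, layer count, and overhead are assumptions, not measurements for this model):

```python
def max_offload_layers(model_file_gb, n_layers, vram_gb, overhead_gb=2.0):
    """Rough estimate of how many layers fit on the GPU.

    Assumes layer weights dominate the file size and are roughly
    uniform across layers; reserves `overhead_gb` for the KV cache
    and scratch buffers. All figures are illustrative.
    """
    per_layer_gb = model_file_gb / n_layers
    budget_gb = vram_gb - overhead_gb
    return int(budget_gb // per_layer_gb)
```

In practice you would start from an estimate like this and lower `-ngl` until the run no longer exhausts VRAM; context length and batch size also consume GPU memory.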
Format: GGUF
Model size: 61B params
Architecture: llama