How to use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="ggml-org/models-moved",
	filename="",  # set to one of the GGUF filenames in this repo
)
output = llm(
	"Once upon a time,",
	max_tokens=512,
	echo=True
)
print(output)
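
The `filename` argument must point to a specific GGUF file inside the repo. As an alternative, here is a minimal sketch of the same flow that downloads one file explicitly with huggingface_hub and loads it from a local path; the filename used below is hypothetical and should be replaced with an actual file listed under ggml-org/models-moved.

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single GGUF file from the repo to the local cache and get its path.
model_path = hf_hub_download(
	repo_id="ggml-org/models-moved",
	filename="example-model-q4_0.gguf",  # hypothetical filename, replace with a real one
)

# Load the local GGUF file directly instead of using from_pretrained.
llm = Llama(model_path=model_path)
output = llm("Once upon a time,", max_tokens=512, echo=True)
print(output)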


Various models used in the llama.cpp CI workflow.

Do not use them in production.

GGUF model metadata: 7B params, llama architecture. Quantizations available: 3-bit, 4-bit, 8-bit, 16-bit.