Use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# Download the GGUF file from the Hub and load it
llm = Llama.from_pretrained(
	repo_id="aladar/tiny-random-BloomForCausalLM-GGUF",
	filename="tiny-random-BloomForCausalLM.gguf",
)

# Run a text completion; echo=True includes the prompt in the output
output = llm(
	"Once upon a time,",
	max_tokens=512,
	echo=True
)
print(output)
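
llama-cpp-python can also stream the completion token by token instead of returning it all at once; a minimal sketch using the stream=True option of the same completion call:

# Stream the completion instead of waiting for the full result
for chunk in llm(
	"Once upon a time,",
	max_tokens=64,
	stream=True,
):
	# Each chunk carries the next piece of generated text
	print(chunk["choices"][0]["text"], end="", flush=True)
print()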

GGUF version of https://huggingface.co/hf-internal-testing/tiny-random-BloomForCausalLM

Download

pip install huggingface-hub

From CLI:

huggingface-cli download \
aladar/tiny-random-BloomForCausalLM-GGUF \
tiny-random-BloomForCausalLM.gguf \
--local-dir . \
--local-dir-use-symlinks False
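
The same file can also be fetched from Python with huggingface_hub's hf_hub_download, which returns the local path of the downloaded file; a minimal sketch:

from huggingface_hub import hf_hub_download

# Download the GGUF file into the current directory
path = hf_hub_download(
	repo_id="aladar/tiny-random-BloomForCausalLM-GGUF",
	filename="tiny-random-BloomForCausalLM.gguf",
	local_dir=".",
)
print(path)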
Model details

Format: GGUF
Model size: 129k params
Architecture: bloom
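
To inspect this metadata locally (e.g. the architecture and tensor count), the gguf Python package from the llama.cpp project can read the file; a minimal sketch, assuming the GGUF file has already been downloaded to the current directory:

# !pip install gguf
from gguf import GGUFReader

reader = GGUFReader("tiny-random-BloomForCausalLM.gguf")

# List the key/value metadata entries, e.g. general.architecture
for field in reader.fields.values():
	print(field.name)

# Count the tensors stored in the file
print(len(reader.tensors), "tensors")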
