How to use with the llama-cpp-python library
```python
# !pip install llama-cpp-python

from llama_cpp import Llama

# Download the model from the Hugging Face Hub and load it.
llm = Llama.from_pretrained(
	repo_id="hik63382/TTS_Android_PC",
	filename="",  # set this to the GGUF filename inside the repo
)

# Run a simple text completion.
output = llm(
	"Once upon a time,",
	max_tokens=512,
	echo=True,  # include the prompt in the returned text
)
print(output)
```
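The call above returns an OpenAI-style completion dictionary rather than a plain string. A minimal sketch of pulling the generated text out of such a response (the `sample_output` dict below is a hand-written illustration of that shape, not real model output):

```python
# Illustrative example of an OpenAI-style completion response, the shape
# llama-cpp-python returns from llm(...). The values are made up.
sample_output = {
    "id": "cmpl-xxx",
    "object": "text_completion",
    "choices": [
        {
            "text": "Once upon a time, there was a tiny model.",
            "index": 0,
            "finish_reason": "length",
        }
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 9, "total_tokens": 14},
}

def completion_text(output: dict) -> str:
    """Extract the generated text from a completion-style response dict."""
    return output["choices"][0]["text"]

print(completion_text(sample_output))
```

With `echo=True`, the `text` field contains the prompt followed by the continuation, so you may want to strip the prompt prefix before further processing.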





πŸ“ž Support & Contact


πŸ‘€ Created by NZG73

| Platform | Link |
|----------|------|
| 📺 YouTube | @NZG73 |
| 🌐 Website | nzg73.blogspot.com |
| 📧 Email | nzgnzg73@gmail.com |
| ⭐ GitHub | NZG 73 |

Model details

- Format: GGUF
- Model size: 0.3B params
- Architecture: llama
- Precision: 16-bit