How to use from llama.cpp
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf llmware/tiny-llama-chat-gguf
# Run inference directly in the terminal:
llama-cli -hf llmware/tiny-llama-chat-gguf
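Once llama-server is running you can talk to it over its OpenAI-compatible API. A minimal sketch, assuming the server is on its default port 8080 (adjust if you passed --port) and the prompt text is illustrative:
# Query the local server's chat completions endpoint:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What are the ingredients in a basic bread recipe?"}]}'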
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf llmware/tiny-llama-chat-gguf
# Run inference directly in the terminal:
llama-cli -hf llmware/tiny-llama-chat-gguf
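llama-cli can also take a one-shot prompt instead of opening an interactive session. A minimal sketch (the prompt text here is just an example):
# Run a single prompt non-interactively with -p:
llama-cli -hf llmware/tiny-llama-chat-gguf -p "Explain what GGUF quantization is in one sentence."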
Use a pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf llmware/tiny-llama-chat-gguf
# Run inference directly in the terminal:
./llama-cli -hf llmware/tiny-llama-chat-gguf
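To confirm the server is up before sending requests, llama-server exposes a health endpoint. A minimal check, again assuming the default port 8080:
# Returns a JSON status once the model has finished loading:
curl http://localhost:8080/health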
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf llmware/tiny-llama-chat-gguf
# Run inference directly in the terminal:
./build/bin/llama-cli -hf llmware/tiny-llama-chat-gguf
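If you have a supported GPU, you can enable a hardware backend at configure time. A sketch for the CUDA backend (flags for other backends differ; the CPU-only build above also works as-is):
# Configure with the CUDA backend enabled, then rebuild:
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli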
Use Docker
docker model run hf.co/llmware/tiny-llama-chat-gguf
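With Docker Model Runner you can also pass a prompt as an argument for a one-shot completion instead of an interactive chat. A minimal sketch (the prompt text is illustrative):
# One-shot prompt via Docker Model Runner:
docker model run hf.co/llmware/tiny-llama-chat-gguf "Summarize what a GGUF file is."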

tiny-llama-chat-gguf

tiny-llama-chat-gguf is a GGUF Q4_K_M (int4) quantized version of TinyLlama-Chat, providing a very fast, very small inference implementation optimized for AI PCs.

tiny-llama-chat is the official chat-finetuned version of tiny-llama.

Model Description

  • Developed by: TinyLlama
  • Quantized by: llmware
  • Model type: llama
  • Parameters: 1.1 billion
  • Model Parent: TinyLlama-1.1B-Chat-v1.0
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Uses: Chat and general purpose LLM
  • RAG Benchmark Accuracy Score: NA
  • Quantization: int4

Model Card Contact

  • llmware on GitHub

  • llmware on Hugging Face

  • llmware website
