How to use from vLLM
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "cortexso/tinyllama"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "cortexso/tinyllama",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
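
The same request can be sent from Python. Below is a minimal sketch using the official openai client pointed at the local vLLM server; vLLM's OpenAI-compatible server does not verify the API key by default, so "EMPTY" is only a placeholder:

# Call the local vLLM server from Python via the openai client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="cortexso/tinyllama",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
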
Use Docker
docker model run hf.co/cortexso/tinyllama
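
Docker Model Runner also exposes an OpenAI-compatible HTTP API. The sketch below assumes host-side TCP access is enabled on the default port 12434 (it is off by default) and that the model is addressed by the same hf.co/cortexso/tinyllama name; the endpoint path may differ on your setup, so treat it as an assumption and check the Docker Model Runner documentation.

# Hedged sketch: call Docker Model Runner's OpenAI-compatible endpoint.
# Port 12434 and the /engines/v1 path are assumed defaults, not guaranteed.
import requests

resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "hf.co/cortexso/tinyllama",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])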

Overview

The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. This is the chat variant, fine-tuned on a diverse range of synthetic dialogues generated by ChatGPT.

Variants

No | Variant      | Cortex CLI command
1  | TinyLlama-1b | cortex run tinyllama:1b

Use it with Jan (UI)

  1. Install Jan using Quickstart
  2. Use it in the Jan model Hub:
    cortexhub/tinyllama
    

Use it with Cortex (CLI)

  1. Install Cortex using Quickstart
  2. Run the model with the command:
    cortex run tinyllama
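
Cortex also starts a local, OpenAI-compatible API server alongside the CLI. Below is a minimal sketch, assuming the documented default port 39281; adjust the host and port if your Cortex configuration differs.

# Sketch: call the local Cortex API server (OpenAI-compatible).
# Port 39281 is assumed to be Cortex's default; change it if yours differs.
import requests

resp = requests.post(
    "http://127.0.0.1:39281/v1/chat/completions",
    json={
        "model": "tinyllama",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])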
    

GGUF

Model size: 1B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
