Instructions for using shashikanth-a/tinyllama-chat-4bit with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use shashikanth-a/tinyllama-chat-4bit with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="shashikanth-a/tinyllama-chat-4bit")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("shashikanth-a/tinyllama-chat-4bit")
model = AutoModelForCausalLM.from_pretrained("shashikanth-a/tinyllama-chat-4bit")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
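If you want tokens printed as they are generated rather than all at once, Transformers ships a `TextStreamer` that plugs into `generate`. A minimal sketch reusing the `model`, `tokenizer`, and `inputs` from the snippet above:

```python
# Stream the reply to stdout as it is generated
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=128, streamer=streamer)
```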
- MLX
How to use shashikanth-a/tinyllama-chat-4bit with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("shashikanth-a/tinyllama-chat-4bit")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
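mlx-lm also installs command-line entry points, so you can generate without writing any Python. A minimal sketch using the `mlx_lm.generate` CLI (flag names as in current mlx-lm releases; check `mlx_lm.generate --help` on your install):

```sh
# One-shot generation from the command line
mlx_lm.generate --model "shashikanth-a/tinyllama-chat-4bit" \
  --prompt "Write a story about Einstein" \
  --max-tokens 256
```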
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- vLLM
How to use shashikanth-a/tinyllama-chat-4bit with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "shashikanth-a/tinyllama-chat-4bit"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "shashikanth-a/tinyllama-chat-4bit",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
```sh
docker model run hf.co/shashikanth-a/tinyllama-chat-4bit
```
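However you start it, the vLLM server speaks the OpenAI chat-completions protocol, so the official `openai` Python client works as well. A minimal sketch assuming the server above is listening on `localhost:8000` (vLLM needs no real API key by default, but the client requires a placeholder):

```python
# Query the vLLM server with the OpenAI Python client (pip install openai)
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="shashikanth-a/tinyllama-chat-4bit",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```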
- SGLang
How to use shashikanth-a/tinyllama-chat-4bit with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "shashikanth-a/tinyllama-chat-4bit" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "shashikanth-a/tinyllama-chat-4bit",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "shashikanth-a/tinyllama-chat-4bit" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "shashikanth-a/tinyllama-chat-4bit",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
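SGLang's endpoint is OpenAI-compatible as well, so the same Python client works by pointing `base_url` at port 30000. A minimal sketch that additionally streams the reply chunk by chunk via `stream=True`:

```python
# Stream a reply from the SGLang server with the OpenAI Python client
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="shashikanth-a/tinyllama-chat-4bit",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```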
- Unsloth Studio
How to use shashikanth-a/tinyllama-chat-4bit with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for shashikanth-a/tinyllama-chat-4bit to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for shashikanth-a/tinyllama-chat-4bit to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for shashikanth-a/tinyllama-chat-4bit to start chatting
```
Load model with FastModel
```python
# pip install unsloth
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="shashikanth-a/tinyllama-chat-4bit",
    max_seq_length=2048,
)
```
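The snippet above only loads the weights. To actually generate, you can drive the returned objects like a standard Transformers model and tokenizer; a minimal sketch under that assumption (the generation settings are illustrative, not Unsloth defaults):

```python
# Chat with the model loaded above (assumes a transformers-style generate API)
messages = [{"role": "user", "content": "Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```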
- MLX LM
How to use shashikanth-a/tinyllama-chat-4bit with MLX LM:
Generate or start a chat session
```sh
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "shashikanth-a/tinyllama-chat-4bit"
```
Run an OpenAI-compatible server
```sh
# Install MLX LM
uv tool install mlx-lm

# Start the server
mlx_lm.server --model "shashikanth-a/tinyllama-chat-4bit"

# Calling the OpenAI-compatible server with curl
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "shashikanth-a/tinyllama-chat-4bit",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```
- Docker Model Runner
How to use shashikanth-a/tinyllama-chat-4bit with Docker Model Runner:
```sh
docker model run hf.co/shashikanth-a/tinyllama-chat-4bit
```
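`docker model run` drops you into an interactive chat. Docker Model Runner can also expose an OpenAI-compatible endpoint; the sketch below assumes host TCP access has been enabled on the default port 12434 (`docker desktop enable model-runner --tcp 12434`), and the exact path may differ by version, so check the Docker Model Runner docs for your setup:

```sh
# Assumes TCP host access is enabled on port 12434 (see Docker Model Runner docs)
curl -X POST "http://localhost:12434/engines/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hf.co/shashikanth-a/tinyllama-chat-4bit",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```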