Instructions for using Sujith2121/tinyllama-dora-model with libraries, notebooks, and local apps. Follow the sections below to get started.
- Libraries
  - PEFT
How to use Sujith2121/tinyllama-dora-model with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(base_model, "Sujith2121/tinyllama-dora-model")
```

  - Transformers
How to use Sujith2121/tinyllama-dora-model with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Sujith2121/tinyllama-dora-model")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly (AutoModelForCausalLM keeps the language-modeling head
# needed for generation; plain AutoModel would drop it)
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Sujith2121/tinyllama-dora-model", dtype="auto")
```

- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - vLLM
How to use Sujith2121/tinyllama-dora-model with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Sujith2121/tinyllama-dora-model"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Sujith2121/tinyllama-dora-model",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker
```bash
docker model run hf.co/Sujith2121/tinyllama-dora-model
```
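The vLLM server above exposes an OpenAI-compatible API, so a Python client works as well as curl. A minimal sketch using the official openai package (the placeholder API key is an assumption; port 8000 matches the serve command above, and the same client pattern works against the SGLang server below on port 30000):

```python
from openai import OpenAI

# Point the client at the local server; the API key is required but unused
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Sujith2121/tinyllama-dora-model",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```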
  - SGLang
How to use Sujith2121/tinyllama-dora-model with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Sujith2121/tinyllama-dora-model" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Sujith2121/tinyllama-dora-model",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images
```bash
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "Sujith2121/tinyllama-dora-model" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Sujith2121/tinyllama-dora-model",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

  - Docker Model Runner
How to use Sujith2121/tinyllama-dora-model with Docker Model Runner:
```bash
docker model run hf.co/Sujith2121/tinyllama-dora-model
```
metadata

```yaml
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
  - base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0
  - dora
  - qlora
  - transformers
  - text-generation
pipeline_tag: text-generation
model-index:
  - name: tinyllama-dora-model
    results:
      - task:
          type: text-generation
        dataset:
          name: mlabonne/guanaco-llama2-1k
          type: instruction-tuning
        metrics:
          - type: loss
            value: 1.5644
            name: validation_loss
```
tinyllama-dora-model
Model Description
This model is a parameter-efficient fine-tune of TinyLlama/TinyLlama-1.1B-Chat-v1.0 using DoRA (Weight-Decomposed Low-Rank Adaptation) combined with 4-bit quantization; a configuration sketch follows the feature list below.
Key Features
- Base Model: TinyLlama-1.1B-Chat
- Fine-tuning Method: DoRA
- Quantization: 4-bit
- Framework: Transformers + PEFT
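For reference, here is a minimal sketch of how a DoRA adapter on a 4-bit base is typically set up with PEFT and bitsandbytes. The rank, alpha, and target modules below are illustrative assumptions, not the values used to train this model:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the base model with 4-bit (QLoRA-style) quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
)

# use_dora=True turns the LoRA update into DoRA (magnitude + direction decomposition)
peft_config = LoraConfig(
    r=16,                                 # assumed rank
    lora_alpha=32,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed target modules
    use_dora=True,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()
```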
Intended Use
- Instruction-based text generation
- Conversational AI
- Research and experimentation
Limitations
- Small dataset (1k samples)
- May produce incorrect outputs
Dataset
mlabonne/guanaco-llama2-1k
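The dataset can be loaded directly with the datasets library (assuming its standard single-split, single-text-column layout):

```python
from datasets import load_dataset

# 1k instruction-following samples formatted for Llama-2-style chat
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")
print(dataset[0]["text"])
```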
Training Details
- Learning Rate: 5e-5
- Batch Size: 2
- Epochs: 1
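A minimal sketch of these hyperparameters expressed as transformers TrainingArguments (the output directory is a placeholder; all other settings are left at library defaults):

```python
from transformers import TrainingArguments

# Hyperparameters from the card above; everything else at defaults
training_args = TrainingArguments(
    output_dir="tinyllama-dora-model",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    num_train_epochs=1,
)
```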
Results
- Validation Loss: 1.5644
- Perplexity = exp(loss)
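Since perplexity is the exponential of the cross-entropy loss, the figure is easy to verify:

```python
import math

perplexity = math.exp(1.5644)  # ≈ 4.78
print(perplexity)
```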
Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_model = "Sujith2121/tinyllama-dora-model"

# Load the tokenizer and base model, then attach the DoRA adapter
tokenizer = AutoTokenizer.from_pretrained(adapter_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_model)

prompt = "Explain Docker simply"
# Move inputs to wherever device_map placed the model (works on CPU or GPU)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
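To produce a standalone checkpoint (for example, to serve with vLLM or SGLang as shown above), the adapter can be folded into the base weights with PEFT's merge_and_unload; the save path below is a placeholder:

```python
# Fold the DoRA weights into the base model and save a full checkpoint
merged = model.merge_and_unload()
merged.save_pretrained("tinyllama-dora-merged")     # placeholder path
tokenizer.save_pretrained("tinyllama-dora-merged")
```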
License
Apache 2.0