Instructions to use devshaheen/llama-3.2-3b-Instruct-finetune with libraries, inference providers, notebooks, and local apps. The sections below cover each option.
- Libraries
- Transformers
How to use devshaheen/llama-3.2-3b-Instruct-finetune with Transformers:
Use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="devshaheen/llama-3.2-3b-Instruct-finetune")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

Load the model directly:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("devshaheen/llama-3.2-3b-Instruct-finetune")
model = AutoModelForCausalLM.from_pretrained("devshaheen/llama-3.2-3b-Instruct-finetune")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use devshaheen/llama-3.2-3b-Instruct-finetune with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "devshaheen/llama-3.2-3b-Instruct-finetune"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "devshaheen/llama-3.2-3b-Instruct-finetune",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker

```shell
docker model run hf.co/devshaheen/llama-3.2-3b-Instruct-finetune
```
- SGLang
How to use devshaheen/llama-3.2-3b-Instruct-finetune with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "devshaheen/llama-3.2-3b-Instruct-finetune" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "devshaheen/llama-3.2-3b-Instruct-finetune",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker images

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "devshaheen/llama-3.2-3b-Instruct-finetune" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "devshaheen/llama-3.2-3b-Instruct-finetune",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

- Unsloth Studio
How to use devshaheen/llama-3.2-3b-Instruct-finetune with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for devshaheen/llama-3.2-3b-Instruct-finetune to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for devshaheen/llama-3.2-3b-Instruct-finetune to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for devshaheen/llama-3.2-3b-Instruct-finetune to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="devshaheen/llama-3.2-3b-Instruct-finetune",
    max_seq_length=2048,
)
```

- Docker Model Runner
How to use devshaheen/llama-3.2-3b-Instruct-finetune with Docker Model Runner:
```shell
docker model run hf.co/devshaheen/llama-3.2-3b-Instruct-finetune
Uploaded Model: devshaheen/llama-3.2-3b-Instruct-finetune
Overview
- Developed by: devshaheen
- License: Apache-2.0
- Finetuned from model: unsloth/llama-3.2-3b-instruct-bnb-4bit
- Languages Supported:
  - English (en) for general-purpose text generation and instruction-following tasks.
  - Kannada (kn) with a focus on localized and culturally aware text generation.
- Dataset Used: charanhu/kannada-instruct-dataset-390k
This model is a fine-tuned version of LLaMA, optimized for multilingual instruction-following tasks with a specific emphasis on English and Kannada. It utilizes 4-bit quantization for efficient deployment in low-resource environments without compromising performance.
Features
1. Instruction Tuning
The model is trained to follow a wide range of instructions and generate contextually relevant responses. It excels in both creative and factual text generation tasks.
2. Multilingual Support
The model is capable of generating text in Kannada and English, making it suitable for users requiring bilingual capabilities.
3. Optimized Training
Training was accelerated using Unsloth, achieving 2x faster training compared to conventional methods. This was complemented by Hugging Face's TRL (Transformer Reinforcement Learning) library to ensure high performance.
4. Efficiency through Quantization
Built on the bnb-4bit quantized model, it is designed for optimal performance in environments with limited computational resources while maintaining precision and depth in output.
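As a back-of-the-envelope illustration of why 4-bit quantization matters for a 3B-parameter model, compare the raw weight-storage footprint at 16-bit and 4-bit precision (the figures below are simple arithmetic, not measured numbers):

```python
# Approximate weight-storage footprint of a 3B-parameter model.
# Actual memory use is higher: activations, KV cache, and quantization
# overhead (scales, zero points) are not counted here.
params = 3_000_000_000

fp16_gb = params * 2 / 1e9    # 16-bit floats: 2 bytes per weight
int4_gb = params * 0.5 / 1e9  # 4-bit weights: half a byte per weight

print(f"fp16: ~{fp16_gb:.1f} GB, 4-bit: ~{int4_gb:.1f} GB")
```

This roughly 4x reduction in weight storage is what makes the model practical on consumer GPUs and other low-resource environments.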
Usage Scenarios
General Use
- Text completion and creative writing.
- Generating instructions or following queries in English and Kannada.
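To make the bilingual usage concrete, here is a minimal sketch of the chat markup the Llama 3 model family uses under the hood. In practice you would call `tokenizer.apply_chat_template` rather than formatting by hand; the hand-rolled helper below is for illustration only, and the Kannada prompt asks "What is the capital of Karnataka?":

```python
def to_llama3_prompt(messages):
    """Render chat messages in the Llama 3 instruct format
    (roughly what tokenizer.apply_chat_template produces for this family)."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Trailing assistant header cues the model to generate its reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

# A Kannada prompt works the same way as an English one.
prompt = to_llama3_prompt(
    [{"role": "user", "content": "ಕರ್ನಾಟಕದ ರಾಜಧಾನಿ ಯಾವುದು?"}]
)
print(prompt)
```

Because both languages flow through the same chat template, you can freely mix English and Kannada turns in a single conversation.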
Specialized Applications
- Localized AI systems in Kannada for chatbots, educational tools, and more.
- Research and development on multilingual instruction-tuned models.
Performance and Metrics
Training Dataset:
The model was fine-tuned on charanhu/kannada-instruct-dataset-390k, a comprehensive dataset designed for Kannada instruction tuning.
Training Parameters:
- Base Model: LLaMA 3.2-3B-Instruct
- Optimizer: AdamW
- Quantization: 4-bit (bnb-4bit)
- Framework: HuggingFace Transformers + Unsloth
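The parameters above can be tied together in a fine-tuning sketch. The snippet below is illustrative, not the exact training script used for this model: the hyperparameters (batch size, learning rate, step count, LoRA rank) are hypothetical placeholders, and running the function requires a CUDA GPU plus `pip install unsloth trl datasets`.

```python
def finetune():
    # Heavy imports live inside the function so the sketch can be read
    # and imported without pulling in GPU-only dependencies.
    from unsloth import FastLanguageModel
    from trl import SFTTrainer
    from transformers import TrainingArguments
    from datasets import load_dataset

    # Load the 4-bit base model the card lists as the starting point.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/llama-3.2-3b-instruct-bnb-4bit",
        max_seq_length=2048,
        load_in_4bit=True,
    )
    # Attach LoRA adapters; the rank is a hypothetical placeholder.
    model = FastLanguageModel.get_peft_model(model, r=16)

    # The Kannada instruction dataset named in the card.
    dataset = load_dataset("charanhu/kannada-instruct-dataset-390k", split="train")

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        args=TrainingArguments(
            output_dir="outputs",
            per_device_train_batch_size=2,  # hypothetical
            learning_rate=2e-4,             # hypothetical
            max_steps=60,                   # hypothetical
            optim="adamw_torch",            # AdamW, per the card
        ),
    )
    trainer.train()
```

Unsloth handles the 4-bit loading and kernel optimizations, while TRL's `SFTTrainer` drives the supervised fine-tuning loop.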
Example Usage
Python Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model_name = "devshaheen/llama-3.2-3b-Instruct-finetune"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Build a chat-formatted prompt for this instruct model and generate text
messages = [
    {"role": "user", "content": "How does climate change affect the monsoon in Karnataka?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=150)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```