# Lead Response Prediction - Llama 3.1 8B LoRA
⚠️ This is a LoRA adapter, not a full model. It must be loaded on top of the base model `unsloth/Meta-Llama-3.1-8B-bnb-4bit`.
## Usage

```python
from unsloth import FastLanguageModel

# Loading the adapter repo id also resolves and downloads the 4-bit base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="MotionG-ai/lead-response-llama-8b",
    max_seq_length=4096,
    load_in_4bit=True,
)
```
## Model Details

- **Type:** LoRA adapter
- **Base model:** `unsloth/Meta-Llama-3.1-8B-bnb-4bit`
- **LoRA rank:** 16
- **Trainable parameters:** 41.9M (0.52% of the base model)
- **File size:** ~180 MB
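The 41.9M figure is consistent with rank-16 LoRA applied to all attention and MLP projections of Llama 3.1 8B. The sketch below reproduces the count from the architecture's published dimensions; the exact target-module list used in training is an assumption, not something this card states.

```python
# Back-of-the-envelope check of the reported 41.9M trainable parameters.
# Assumes LoRA on q/k/v/o and the MLP projections (Unsloth's usual default);
# this module list is an assumption.

HIDDEN = 4096         # Llama 3.1 8B hidden size
KV_DIM = 1024         # 8 KV heads x 128 head dim (grouped-query attention)
INTERMEDIATE = 14336  # MLP intermediate size
LAYERS = 32
RANK = 16             # LoRA rank reported above

def lora_params(d_in: int, d_out: int, r: int = RANK) -> int:
    # A LoRA adapter adds two low-rank matrices per linear layer:
    # A is (r x d_in) and B is (d_out x r), so r * (d_in + d_out) parameters.
    return r * (d_in + d_out)

per_layer = (
    lora_params(HIDDEN, HIDDEN)          # q_proj
    + lora_params(HIDDEN, KV_DIM)        # k_proj
    + lora_params(HIDDEN, KV_DIM)        # v_proj
    + lora_params(HIDDEN, HIDDEN)        # o_proj
    + lora_params(HIDDEN, INTERMEDIATE)  # gate_proj
    + lora_params(HIDDEN, INTERMEDIATE)  # up_proj
    + lora_params(INTERMEDIATE, HIDDEN)  # down_proj
)
total = per_layer * LAYERS
print(f"{total / 1e6:.1f}M trainable LoRA parameters")  # → 41.9M
```

41,943,040 parameters against the ~8.03B of the full model also matches the 0.52% reported above.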