## Use with the Transformers library
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("m7alek/lora_model", dtype="auto")
```
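Because this repository is a LoRA fine-tune of `google/gemma-7b`, a common alternative is to load the base model and attach the adapter with the PEFT library. This is a minimal sketch, not the author's documented method; it assumes the `peft` package is installed, that the repository contains LoRA adapter weights, and that you have access to the gated `google/gemma-7b` checkpoint:

```python
# Sketch (assumption): load the gemma-7b base model, then apply this
# repository's LoRA adapter on top of it with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-7b", dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")

# Attach the adapter weights from this repository to the base model.
model = PeftModel.from_pretrained(base, "m7alek/lora_model")

# Optionally merge the adapter into the base weights for faster inference.
model = model.merge_and_unload()
```

Merging with `merge_and_unload()` folds the low-rank updates into the base weights, so the merged model can be used like any plain Transformers model.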
A newer version of this model is available: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
Downloads last month: 1

Model tree for m7alek/lora_model

- Base model: google/gemma-7b
- Finetuned (1): this model
Datasets used to train m7alek/lora_model