
Infatoshi/llama3.1-sft-v2

This is a fine-tuned version of meta-llama/Llama-3.2-1B using QLoRA.

Model description

Base model: meta-llama/Llama-3.2-1B
Training technique: QLoRA
Training data: Custom dataset
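
The exact training configuration isn't published in this card. As an illustration only, a QLoRA run on a base model like Llama-3.2-1B is typically assembled from a 4-bit quantization config (bitsandbytes) plus a LoRA adapter config (peft); every hyperparameter below is a hypothetical placeholder, not the values used for this model:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization of the frozen base weights (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Low-rank adapters trained on top of the quantized base (the "LoRA" part);
# rank, alpha, and target modules here are illustrative guesses
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

These configs would then be passed to `AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)` and `peft.get_peft_model(model, lora_config)` before fine-tuning on the custom dataset.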

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Infatoshi/llama3.1-sft-v2", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("Infatoshi/llama3.1-sft-v2")

prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
Format: Safetensors
Model size: 1B params
Tensor type: F16