
# Fine-tuned Llama-3.2-1B-Instruct

This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct), trained on custom text-generation data.

## Model description

This model was fine-tuned using LoRA on a custom dataset. It's designed to [briefly describe what your model is good at].

## How to use

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("bhatia1289/llama-1B-instruct-finetuned")
tokenizer = AutoTokenizer.from_pretrained("bhatia1289/llama-1B-instruct-finetuned")
```

### Example usage

```python
input_text = "You are an AI model that generates informative text. Please provide information about:"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
# max_new_tokens bounds only the generated continuation, independent of prompt length
output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
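Because the base model is an Instruct variant, prompts are usually passed through the tokenizer's chat template (`tokenizer.apply_chat_template`) rather than as raw text. The sketch below assembles the prompt string by hand purely to illustrate the structure; the special-token layout shown is the standard Llama 3 chat format, which is an assumption, since this card does not state which template was used during fine-tuning:

```python
def build_llama3_prompt(user_message, system_message=None):
    """Assemble a Llama 3-style chat prompt by hand.

    In practice you would call tokenizer.apply_chat_template instead;
    this only shows the structure the template produces.
    """
    parts = ["<|begin_of_text|>"]
    if system_message:
        parts.append(
            f"<|start_header_id|>system<|end_header_id|>\n\n{system_message}<|eot_id|>"
        )
    parts.append(
        f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>"
    )
    # Trailing assistant header cues the model to generate its reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt("Provide information about LoRA fine-tuning.")
print(prompt)
```

Feeding a prompt in this shape (or, equivalently, a `messages` list through `apply_chat_template` with `add_generation_prompt=True`) generally yields better instruction-following than the raw string shown above.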

Model weights are stored in Safetensors format (1B parameters, F16).