---
language:
  - en
  - th
library_name: transformers
base_model:
  - meta-llama/Llama-3.2-1B
tags:
  - text-generation
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.5
widget:
  - messages:
      - role: user
        content: What is your favorite condiment?
extra_gated_description: If you want to learn more about how we process your personal data, please read our Privacy Policy.
---

# LLaMA 3 Fine-Tuned Model

This is a fine-tuned version of the LLaMA 3 model (base: meta-llama/Llama-3.2-1B), covering English and Thai. Below is an example of how to use it:

## Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("Pection/llama3-finetune")
model = AutoModelForCausalLM.from_pretrained("Pection/llama3-finetune")

# Generate a response
prompt = "Where is Bangkok?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(response)
```
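
The widget metadata above sends chat-style `role`/`content` messages, and the inference parameters set `temperature: 0.5`. The sketch below mirrors that setup with `apply_chat_template`; it assumes the fine-tuned tokenizer defines a chat template (if it does not, format the prompt as plain text as in the example above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Pection/llama3-finetune")
model = AutoModelForCausalLM.from_pretrained("Pection/llama3-finetune")

# Chat-style messages, matching the widget example in the metadata
messages = [{"role": "user", "content": "What is your favorite condiment?"}]

# Assumes the tokenizer ships a chat template for this fine-tune
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Sample with the temperature declared in the card's inference parameters
outputs = model.generate(
    input_ids, max_new_tokens=50, do_sample=True, temperature=0.5
)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```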