---
language:
- en
- th
library_name: transformers
base_model:
- meta-llama/Llama-3.2-1B
tags:
- text-generation
pipeline_tag: text-generation
inference:
parameters:
temperature: 0.5
widget:
- messages:
- role: user
content: What is your favorite condiment?
extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
# LLaMA 3 Fine-Tuned Model
This is a fine-tuned version of Meta's Llama-3.2-1B model. Below is an example of how to use it:
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Pection/llama3-finetune")
model = AutoModelForCausalLM.from_pretrained("Pection/llama3-finetune")
# Generate response
prompt = "Where is Bangkok?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
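The card's inference settings above specify `temperature: 0.5`. To reproduce that behavior locally, enable sampling in `generate` — by default, `model.generate` decodes greedily and ignores `temperature`. A minimal sketch (the prompt is illustrative; model loading is identical to the example above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Pection/llama3-finetune")
model = AutoModelForCausalLM.from_pretrained("Pection/llama3-finetune")

prompt = "Where is Bangkok?"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample with the temperature configured in this card's inference settings (0.5).
# do_sample=True is required; without it, temperature has no effect.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.5,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

Lower temperatures (closer to 0) make the output more deterministic; higher values increase variety in the generated answers.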