# Llama-3.2-3B-Instruct-natural-questions
This model is a fine-tuned version of meta-llama/Llama-3.2-3B-Instruct on the Natural Questions (NQ) dataset. It is optimized for factual question answering and follows the Llama 3.2 chat template.
## Model Details
- Model type: Causal Language Model
- Language(s): English
- License: Llama 3.2 Community License
- Base Model: Llama-3.2-3B-Instruct
## Intended Use
This model is designed for factual question answering and instruction following. It is particularly effective for:
- Answering "who/what/where/when" style questions.
- Summarizing Wikipedia-style factual content.
- General-purpose assistant tasks.
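The factoid focus above can be illustrated with a small routing check. This is a minimal sketch, not part of the model or its tooling; the `is_factoid_question` helper and its heuristics are hypothetical:

```python
# Hypothetical helper: decide whether a query looks like a factoid
# "who/what/where/when" question suited to this model.
FACTOID_STARTERS = ("who", "what", "where", "when", "which", "how many")

def is_factoid_question(query: str) -> bool:
    q = query.strip().lower()
    # Treat queries that both start with a wh-word and end with "?" as factoid.
    return q.endswith("?") and q.startswith(FACTOID_STARTERS)

print(is_factoid_question("Who founded Google?"))  # True
print(is_factoid_question("Tell me a story."))     # False
```

In an application, a router like this could send factoid queries to this fine-tune and everything else to a general-purpose model.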
## Training Data
The model was fine-tuned on the Natural Questions dataset, which consists of real-world queries issued to the Google search engine and answers annotated by humans based on Wikipedia pages.
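For fine-tuning a chat model, each NQ-style query/answer pair has to be rendered into the Llama 3.2 message format. The sketch below is illustrative only: the field names `question` and `short_answer` are simplified stand-ins, and the actual NQ schema is more involved (HTML documents, long/short answer spans):

```python
def nq_to_messages(record: dict) -> list[dict]:
    """Render one simplified NQ-style record into Llama 3.2 chat messages."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": record["question"]},
        {"role": "assistant", "content": record["short_answer"]},
    ]

example = {
    "question": "who founded google",
    "short_answer": "Larry Page and Sergey Brin",
}
msgs = nq_to_messages(example)
print(msgs[2]["content"])  # Larry Page and Sergey Brin
```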
## Usage
You can use this model with the `transformers` library:
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="Fu01978/Llama-3.2-3B-Instruct-natural-questions")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who founded Google?"},
]

out = pipe(messages, max_new_tokens=128)
# The pipeline returns the full conversation; the last message is the model's reply.
print(out[0]["generated_text"][-1]["content"])
```
## Limitations & Ethics
This model inherits the limitations of the Llama 3.2 family. It may occasionally generate incorrect factual information (hallucinations) despite being trained on a Q&A dataset. Users should verify critical information.
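One cheap way to flag likely hallucinations is a self-consistency check: sample several answers and only trust the majority. This is a sketch of the idea, not a feature of the model; the sampled answers are hard-coded here, whereas in practice they would come from repeated generation calls with sampling enabled:

```python
from collections import Counter

def majority_answer(samples: list[str], min_agreement: float = 0.5):
    """Return the most common (normalized) answer if it clears the
    agreement threshold, otherwise None to signal low confidence."""
    answer, count = Counter(s.strip().lower() for s in samples).most_common(1)[0]
    return answer if count / len(samples) >= min_agreement else None

samples = [
    "Larry Page and Sergey Brin",
    "larry page and sergey brin",
    "Sundar Pichai",
]
print(majority_answer(samples))  # larry page and sergey brin
```

Agreement across samples is only a heuristic: a model can be consistently wrong, so critical answers should still be checked against a primary source.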