Tags: Question Answering · Transformers · Safetensors · English · llama · text-generation · code · text-generation-inference
Instructions for using apu20/Llama-3.2-3B-Instruct_Tele with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
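If the model is served by an inference provider or a local endpoint (for example text-generation-inference, one of the tags above), it can also be queried remotely through huggingface_hub. This is a minimal sketch only; whether a hosted endpoint actually serves this repository is an assumption, and the example prompt is made up:

# Remote inference sketch (assumes an inference provider or a local TGI
# endpoint serves this model; the card does not guarantee that).
from huggingface_hub import InferenceClient

client = InferenceClient(model="apu20/Llama-3.2-3B-Instruct_Tele")
response = client.chat_completion(
    messages=[{"role": "user", "content": "Give a one-sentence summary of the OSI model."}],
    max_tokens=128,
)
print(response.choices[0].message.content)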
- Libraries
- Transformers
How to use apu20/Llama-3.2-3B-Instruct_Tele with Transformers:
# Use a pipeline as a high-level helper.
# This checkpoint is a causal language model, so the "text-generation"
# task matches it (the card's Question Answering tag notwithstanding).
from transformers import pipeline

pipe = pipeline("text-generation", model="apu20/Llama-3.2-3B-Instruct_Tele")

# Load the model directly.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("apu20/Llama-3.2-3B-Instruct_Tele")
model = AutoModelForCausalLM.from_pretrained("apu20/Llama-3.2-3B-Instruct_Tele")

A fuller end-to-end generation sketch appears after the notebooks list below.
- Notebooks
- Google Colab
- Kaggle
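For running the model yourself, locally or in a Colab/Kaggle notebook, the sketch below loads the checkpoint with Transformers and generates a reply through the chat template. The dtype, generation settings, and the telecom-flavoured prompt (suggested only by the _Tele suffix in the repo name) are assumptions, not taken from the card:

# In a fresh Colab/Kaggle notebook you may first need:
#   pip install -U transformers accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apu20/Llama-3.2-3B-Instruct_Tele"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # assumed dtype; fall back to float16/float32 if needed
    device_map="auto",            # requires accelerate
)

# Build a chat-formatted prompt and generate a completion.
messages = [{"role": "user", "content": "What does 5G NR stand for?"}]  # assumed example prompt
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))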