LLAMA3 Finetune By Alphacode
This model is a version of Meta-Llama-3-8B that has been fine-tuned with our in-house custom data.

Load the model directly:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Alphacode-AI/Alphallama3-8B")
model = AutoModelForCausalLM.from_pretrained("Alphacode-AI/Alphallama3-8B")
```
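Once the tokenizer and model are loaded, generation works as with any causal LM in `transformers`. A minimal sketch (the prompt and sampling settings below are illustrative, not recommendations from the model authors):

```python
# Continues from the loading snippet above; prompt and settings are illustrative.
inputs = tokenizer("The three laws of robotics are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```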
Train spec: we trained the model on a single node with 4x A100 GPUs, using DeepSpeed, the HuggingFace TRL trainer, and HuggingFace Accelerate.
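For readers unfamiliar with that stack, here is a minimal sketch of a comparable fine-tuning setup with TRL's `SFTTrainer`. The dataset name, hyperparameters, and DeepSpeed config path are assumptions for illustration only, not the actual recipe used for Alphallama3-8B:

```python
# Hypothetical SFT setup sketch; not the authors' actual training script.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the in-house custom data is not public.
dataset = load_dataset("your-org/your-custom-dataset", split="train")

training_args = SFTConfig(
    output_dir="alphallama3-8b-sft",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,
    # deepspeed="ds_config.json",  # point at a DeepSpeed config to enable it
)

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B",  # base model being fine-tuned
    train_dataset=dataset,
    args=training_args,
)
trainer.train()
```

A script like this would typically be run across the four GPUs with `accelerate launch train.py`, which is how Accelerate and DeepSpeed fit into the setup described above.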
Or use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="Alphacode-AI/Alphallama3-8B")
```
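The pipeline returns a list of dictionaries with a `generated_text` field; for example (the prompt here is illustrative):

```python
# Example call to the pipeline defined above.
result = pipe("Tell me about the Llama 3 architecture.", max_new_tokens=50)
print(result[0]["generated_text"])
```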