## How to use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="skar01/llama2-coder-full")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("skar01/llama2-coder-full")
model = AutoModelForCausalLM.from_pretrained("skar01/llama2-coder-full")
```

A Llama 2 (7B) model fine-tuned on the CodeAlpaca 20k instruction dataset using the QLoRA method with the PEFT library.
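CodeAlpaca follows the Alpaca instruction format, so the model likely expects prompts in that layout. The exact template this checkpoint was trained with is not documented here, so the helper below is a sketch assuming the standard Alpaca template:

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format an instruction (and optional input) in the standard Alpaca
    prompt layout. Assumed template; not confirmed for this checkpoint."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Write a Python function that reverses a string.")
```

The resulting `prompt` string can then be passed to the `pipe(...)` call shown above.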

## Training and evaluation data 📚

CodeAlpaca_20K contains 20K instruction-following examples, used here for fine-tuning the Code Alpaca model.

- Data: https://huggingface.co/mrm8488/falcon-7b-ft-codeAlpaca_20k
- Adapter: https://huggingface.co/skar01/llama2-coder
- Base model: TinyPixel/Llama-2-7B-bf16-sharded
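QLoRA keeps the frozen base model in 4-bit precision and trains only small LoRA adapter weights, which is what makes fine-tuning a 7B model feasible on a single consumer GPU. A back-of-the-envelope sketch of the weight memory involved (the ~7B parameter count is assumed, and activations, optimizer state, and quantization constants are ignored):

```python
def weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate memory for model weights alone, in GB (decimal)."""
    return n_params * bits_per_param / 8 / 1e9

N = 7e9  # assumed ~7B parameters for the Llama 2 base model
full_precision = weight_memory_gb(N, 16)  # bf16/fp16 weights: 14.0 GB
quantized = weight_memory_gb(N, 4)        # 4-bit (QLoRA NF4): 3.5 GB
```

The LoRA adapter itself adds only a few million trainable parameters on top, which is why the adapter repo linked above is small compared to the base model.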
