Tags: Text Generation · Transformers · Safetensors · English · llama · sharp-balance · llama-2 · llama-2-chat · 70b · text-generation-inference
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("sequelbox/Llama2-70B-SharpBalance")
model = AutoModelForCausalLM.from_pretrained("sequelbox/Llama2-70B-SharpBalance")
```

Quick Links
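Since Sharp Balance is a Llama 2 Chat derivative, prompts are conventionally wrapped in the Llama 2 `[INST]` format, with an optional `<<SYS>>` system block inside the first turn. A minimal sketch of building such a prompt (the helper name and example strings are illustrative, not part of the model card):

```python
# Llama 2 chat models expect user turns wrapped in [INST] ... [/INST] tags;
# a system message, if any, goes in a <<SYS>> block inside the first turn.
def build_llama2_prompt(system: str, user: str) -> str:
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_llama2_prompt(
    "You are a helpful assistant.",
    "Summarize the plot of Hamlet in one sentence.",
)
print(prompt)
```

The resulting string can be passed to `tokenizer(...)` and then to `model.generate(...)` as with any causal LM.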
Sharp Balance is a general capability upgrade to Llama 2 70b.
It does not have any current practical use. The model is available for legacy and reference purposes. View our profile for our latest models.
The original upload of Sharp Balance contained errors in how weights were saved, which have now been fixed. Additional issues and bugs may be expected; no support is available. Use at your own discretion.
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="sequelbox/Llama2-70B-SharpBalance")
```