Adding `safetensors` variant of this model
#2 opened about 1 year ago by SFconvertbot
What about a quantized version, so we can load it in ExLlama with a large context size?
#1 opened over 2 years ago by DQ83