Tags: Text Generation · Transformers · PyTorch · llama · uncensored · text-generation-inference
WizardLM-7B-Uncensored / config.json

Commit History

Change use_cache to True which significantly speeds up inference (#2)
ca45eff · committed by ehartford and TheBloke

initial
b73e530 · committed by Ubuntu
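
The #2 commit above amounts to a one-key edit to config.json. A minimal sketch of that change, assuming the key previously read false (the rest of the file is omitted here):

```diff
-  "use_cache": false,
+  "use_cache": true,
```

With use_cache enabled, generation reuses the cached key/value tensors from earlier decoding steps instead of recomputing attention over the entire prefix for every new token, which is why the change significantly speeds up inference.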