Space: mikeee/llama2-7b-chat-ggml
Duplicated from mikeee/Wizard-Vicuna-7B-Uncensored-GGML
Likes: 4 · Status: Runtime error
Repository: llama2-7b-chat-ggml · 19 kB · 2 contributors
History: 25 commits
Latest commit: c90a13a by ffreemt, "Update user hot for streaming", over 2 years ago
| File             | Size      | Last commit message                                      | Last updated     |
|------------------|-----------|----------------------------------------------------------|------------------|
| .gitattributes   | 1.52 kB   | Duplicate from mikeee/Wizard-Vicuna-7B-Uncensored-GGML   | over 2 years ago |
| .gitignore       | 156 Bytes | Update buff enabled                                      | over 2 years ago |
| .ruff.toml       | 495 Bytes | Duplicate from mikeee/Wizard-Vicuna-7B-Uncensored-GGML   | over 2 years ago |
| .stignore        | 1.16 kB   | Update examples                                          | over 2 years ago |
| README.md        | 299 Bytes | Update run 7b when oktoto golay kaggle or cpu_count <=8  | over 2 years ago |
| app.py           | 15.2 kB   | Update user hot for streaming                            | over 2 years ago |
| requirements.txt | 120 Bytes | Fix concurrency_count to avoid OOM                       | over 2 years ago |
| run-app.sh       | 35 Bytes  | Update generate function                                 | over 2 years ago |