Upload llama_cpp_python-0.3.16-cp311-cp311-win_amd64.whl
## llama_cpp_python 0.3.16 (cp311 win_amd64)
- CUDA-enabled Windows wheel for Python 3.11
- Embedded CUDA archs: sm_75, sm_86, sm_89, sm_120
- Sanity check: loads GGUF and generates text
**File**
- `llama_cpp_python-0.3.16-cp311-cp311-win_amd64.whl`
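
The commit notes say the wheel was sanity-checked by loading a GGUF model and generating text. A minimal sketch of such a check, assuming the wheel is installed and using `llama-cpp-python`'s standard `Llama` API; the model path is a placeholder, not part of this commit, and `n_gpu_layers=-1` offloads all layers to the CUDA backend:

```python
def sanity_check(model_path: str) -> str:
    """Load a GGUF model and generate a short completion.

    Assumes the llama-cpp-python wheel from this repo is installed.
    The model path is a hypothetical local *.gguf file.
    """
    # Import inside the function so this module loads even without the wheel.
    from llama_cpp import Llama

    llm = Llama(
        model_path=model_path,  # path to a local GGUF file
        n_gpu_layers=-1,        # offload all layers to the CUDA backend
        verbose=False,
    )
    out = llm("Q: What is 2 + 2? A:", max_tokens=16)
    return out["choices"][0]["text"]

# usage (hypothetical model file):
# text = sanity_check("models/llama-3-8b.Q4_K_M.gguf")
```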
`.gitattributes` (CHANGED)

```diff
@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 llama_cpp_python-0.3.16-cp312-cp312-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
+llama_cpp_python-0.3.16-cp311-cp311-win_amd64.whl filter=lfs diff=lfs merge=lfs -text
```
`llama_cpp_python-0.3.16-cp311-cp311-win_amd64.whl` (ADDED)

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:46025ae933bc17eb3c4a40968f239b668ed560a8d51f71583f5f2dc2bb285259
+size 240507552
```
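
The three lines committed here are a Git LFS pointer, not the wheel itself: the real payload is fetched by `git lfs` using the sha256 oid and byte size. A small sketch that parses this pointer format (simple space-separated key/value lines per the git-lfs pointer spec; the pointer text below is copied from the commit):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # "oid sha256:..." -> ("oid", "sha256:...")
        fields[key] = value
    return fields

# Pointer content from this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:46025ae933bc17eb3c4a40968f239b668ed560a8d51f71583f5f2dc2bb285259
size 240507552
"""

fields = parse_lfs_pointer(pointer)
algo, digest = fields["oid"].split(":")  # "sha256" and a 64-char hex digest
size_bytes = int(fields["size"])         # 240507552 bytes, roughly 229 MiB
```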