Update README.md

README.md CHANGED

@@ -1,7 +1,7 @@
 ---
 license: mit
 ---
-# RTX
+# RTX 5000 Series–Ready `llama-cpp-python` Wheel (Python 3.12, Windows)
 
 **Status:** ✅ CONFIRMED WORKING — No more “invalid resource handle” errors
 **Wheel:** `llama_cpp_python-0.3.16-cp312-cp312-win_amd64.whl`
@@ -124,4 +124,4 @@ This wheel provides RTX 5090 compatibility by configuring cuBLAS fallback; it is
 
 ---
 
-Finally — RTX
+Finally — RTX 5000 series owners can use their flagship GPU for local LLM inference without crashes! 🎉
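As an aside for readers checking whether this wheel matches their setup: the filename shown in the diff already encodes the target interpreter and platform per the standard wheel naming scheme (PEP 427). A minimal sketch of reading those tags — only the filename comes from the README above; the parsing code is illustrative:

```python
# Wheel filenames follow PEP 427: {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl
# Splitting the filename from the diff confirms the build targets
# CPython 3.12 on 64-bit Windows.
wheel = "llama_cpp_python-0.3.16-cp312-cp312-win_amd64.whl"

name, version, py_tag, abi_tag, plat_tag = wheel[:-len(".whl")].split("-")

print(py_tag)    # cp312     -> CPython 3.12
print(plat_tag)  # win_amd64 -> 64-bit Windows
```

If `py_tag` or `plat_tag` does not match your interpreter and OS, pip will refuse to install the wheel, which is usually the first thing to check before suspecting a driver or CUDA issue.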