Instructions for using dougeeai/llama-cpp-python-wheels with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - llama-cpp-python
How to use dougeeai/llama-cpp-python-wheels with llama-cpp-python:

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="dougeeai/llama-cpp-python-wheels",
    filename="{{GGUF_FILE}}",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
```
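The call above returns an OpenAI-style completion dict. As a minimal sketch of reading the generated text back out, here is an illustrative literal dict standing in for real model output (the `choices`/`text` shape follows the OpenAI-style completion schema; the exact text is made up):

```python
# Simplified shape of a llama-cpp-python completion result; the
# "choices"/"text" fields follow the OpenAI-style completion schema,
# and this literal dict stands in for real model output.
output = {
    "id": "cmpl-example",
    "choices": [{"text": "Once upon a time, there was a small model.", "index": 0}],
}

# Pull the generated text out of the first choice.
text = output["choices"][0]["text"]
print(text)
```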
- Notebooks
- Google Colab
- Kaggle
Update README.md with links
README.md (changed):
```diff
@@ -20,7 +20,7 @@ Pre-compiled `llama-cpp-python` wheels for Windows across CUDA versions and GPU
 ## Quick Start
 
 1. **Find your GPU** in the compatibility list below
-2. **Download** the wheel for your GPU from [GitHub Releases](https://github.com/dougeeai/llama-cpp-python-wheels/releases)
+2. **Download** the wheel for your GPU from [GitHub Releases](https://github.com/dougeeai/llama-cpp-python-wheels/releases) or [find your card on the README table](https://github.com/dougeeai/llama-cpp-python-wheels)
 3. **Install**: `pip install <downloaded-wheel-file>.whl`
 4. **Run** your GGUF models immediately
```
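Steps 1–3 of the Quick Start above amount to matching a wheel filename against your CUDA build and Python version. A small sketch of that matching logic follows; the `pick_wheel` helper and the wheel filenames are hypothetical illustrations, not actual release assets from this repo:

```python
# Hypothetical helper: choose the wheel whose filename carries both the
# desired CUDA tag (e.g. "cu124") and CPython tag (e.g. "cp312").
# The filenames below are illustrative, not real release assets.
def pick_wheel(filenames, cuda_tag, py_tag):
    for name in filenames:
        if cuda_tag in name and py_tag in name:
            return name
    return None  # no matching wheel found

wheels = [
    "llama_cpp_python-0.3.2-cu121-cp311-win_amd64.whl",
    "llama_cpp_python-0.3.2-cu124-cp312-win_amd64.whl",
]

chosen = pick_wheel(wheels, "cu124", "cp312")
print(chosen)
# The chosen file would then be installed per step 3:
#   pip install <downloaded-wheel-file>.whl
```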