# llama-cpp-python — Free-Tier Friendly Wheel

This Space provides a **prebuilt `llama-cpp-python` wheel** designed to work
**reliably on Hugging Face Free tier Spaces**.

No compilation. No system packages. No build failures.

If your Space crashes during `pip install llama-cpp-python`, this wheel is the fix.

---

## Optimized for Hugging Face Free Tier

Hugging Face Free tier Spaces are:

- CPU-only
- Limited in memory
- Not suitable for native compilation

This wheel is built **ahead of time** so it can be installed instantly without
triggering CMake, compilers, or BLAS detection.

---

## What this wheel gives you

- ✅ Works on **HF Free tier CPU Spaces**
- ✅ Linux (ubuntu-22.04 compatible)
- ✅ Python 3.10
- ✅ OpenBLAS enabled (`GGML_BLAS=ON`)
- ✅ No system dependencies required
- ✅ No build step during Space startup
- ✅ Fast, reliable `pip install`

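Because the wheel targets CPython 3.10 on Linux, a quick sanity check before uploading is to read the compatibility tags encoded in the wheel filename. A minimal sketch, assuming the filename follows the standard PEP 427 `name-version-python-abi-platform.whl` pattern (the version and platform tag below are illustrative):

```python
def wheel_tags(filename: str) -> tuple:
    """Extract (python_tag, abi_tag, platform_tag) from a PEP 427 wheel filename."""
    stem = filename[: -len(".whl")]
    # The last three dash-separated fields are always the compatibility tags.
    return tuple(stem.split("-")[-3:])

# Illustrative filename -- substitute the wheel you actually downloaded.
tags = wheel_tags("llama_cpp_python-0.2.90-cp310-cp310-linux_x86_64.whl")
print(tags)  # ('cp310', 'cp310', 'linux_x86_64')
```

A `cp310` tag confirms the Python 3.10 target listed above, and a `linux_x86_64` (or `manylinux*_x86_64`) platform tag matches the Free tier CPU hardware.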
---

## How to use in a Space (Free tier)

1. Download the wheel from the GitHub repository
2. Upload it to your Space
3. Install it in your Space startup:

```shell
pip install llama_cpp_python-*.whl
```


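Alternatively, pip can install the wheel straight from `requirements.txt`, assuming the wheel file sits at the Space root (the filename is illustrative):

```
./llama_cpp_python-0.2.90-cp310-cp310-linux_x86_64.whl
```

This avoids a custom startup script entirely, since Spaces install `requirements.txt` automatically.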
That’s it. Your Space will start without build errors.

## Build details

This wheel was built using:

- `abetlen/llama-cpp-python` (recursive submodules)
- OpenBLAS (`GGML_BLAS_VENDOR=OpenBLAS`)
- `scikit-build-core`
- `ninja`
- `python -m build --wheel --no-isolation`

## Build environment

- OS: Ubuntu 22.04
- Python: 3.10

## Why not build from source on HF?

On Free tier Spaces, building from source often fails due to:

- Missing compilers
- Missing BLAS libraries
- Memory limits
- Build timeouts

This prebuilt wheel avoids all of those issues.

## Notes

- CPU-only (no CUDA)
- Intended for inference workloads
- Not an official upstream release

## Credits

All credit goes to the maintainers of llama-cpp-python and llama.cpp.
This Space exists solely to make Free tier usage painless.