jakmro committed on
Commit 753520e · verified · 1 Parent(s): 1996fb7

Update organization README

Files changed (1):
  1. README.md +198 -121
README.md CHANGED
@@ -1,19 +1,40 @@
  <img src="assets/banner.jpg" alt="Logo" style="border-radius: 30px; width: 100%;">

  ```
- ┌─────────────────┐ Energy-efficient inference engine for running AI on mobile devices
- │  Cactus Engine  │ ←── OpenAI-compatible APIs for C/C++, Swift, Kotlin, Flutter & React Native
- └─────────────────┘ Supports tool calling, auto RAG, NPU, INT4, and cloud handoff for complex tasks
            │
- ┌─────────────────┐ Zero-copy computation graph, think PyTorch for mobile devices
- │  Cactus Graph   │ ←── You can implement custom models directly using this
- └─────────────────┘ Highly optimised for RAM & lossless weight quantisation
            │
- ┌─────────────────┐ Low-level ARM-specific SIMD kernels (Apple, Snapdragon, Google, Exynos, MediaTek & Raspberry Pi)
- │ Cactus Kernels  │ ←── Optimised Matrix Multiplication & n
- └─────────────────┘ Custom attention kernels with KV-Cache Quantisation, chunked prefill, streaming LLM, etc.
  ```

  ## Cactus Engine

@@ -37,30 +58,30 @@ const char* options = R"({

  char response[4096];
  int result = cactus_complete(
-     model,              // model handle from cactus_init
-     messages,           // JSON array of chat messages
-     response,           // buffer to store response JSON
-     sizeof(response),   // size of response buffer
-     options,            // optional: generation options (nullptr for defaults)
-     nullptr,            // optional: tools JSON for function calling
-     nullptr,            // optional: streaming callback fn(token, id, user_data)
-     nullptr             // optional: user data passed to callback
  );
  ```
  Example response from Gemma3-270m
  ```json
  {
-   "success": true,                  // when successfully generated locally
-   "error": null,                    // returns specific errors if success = false
-   "cloud_handoff": false,           // true when the model is unconfident; simply route to cloud
-   "response": "Hi there!",          // null when error is not null or cloud_handoff = true
-   "function_calls": [],             // parsed to [{"name":"set_alarm","arguments":{"hour":"10","minute":"0"}}]
-   "confidence": 0.8193,             // how confident the model is in its response
-   "time_to_first_token_ms": 45.23,  // latency (time to first token)
-   "total_time_ms": 163.67,          // total execution time
-   "prefill_tps": 1621.89,           // prefill tokens per second
-   "decode_tps": 168.42,             // decode tokens per second
-   "ram_usage_mb": 245.67,           // current process RAM usage in MB
    "prefill_tokens": 28,
    "decode_tokens": 50,
    "total_tokens": 78
@@ -92,106 +113,162 @@ void* output_data = graph.get_output(result);
  graph.hard_reset();
  ```

- ## Benchmark (INT8)
-
- | Device | LFM2.5-1.2B<br>(1k-Prefill/100-Decode) | LFM2.5-VL-1.6B<br>(256px-Latency & Decode) | Whisper-Small-244m<br>(30s-audio-Latency & Decode) |
- |--------|--------|--------|----------|
- | Mac M4 Pro | 582tps/77tps (76MB RAM) | 0.2s/76tps (87MB RAM) | 0.1s/111tps (73MB RAM) |
- | iPad/Mac M4 | - | - | - |
- | iPhone 17 Pro | 300tps/33tps (108MB RAM) | 0.3s/33tps (156MB RAM) | 0.3s/114tps (177MB RAM) |
- | Galaxy S25 Ultra | 226tps/36tps (1.2GB RAM) | 2.6s/33tps (2GB RAM) | 2.3s/90tps (363MB RAM) |
- | Pixel 10 Pro | - | - | - |
- | Vivo X200 Pro | - | - | - |
-
- | Device | LFM2-350m<br>(1k-Prefill/100-Decode) | LFM2-VL-450m<br>(256px-Latency & Decode) | Moonshine-Base-67m<br>(30s-audio-Latency & Decode) |
- |--------|--------|--------|----------|
- | iPad/Mac M1 | - | - | - |
- | iPhone 13 Mini | - | - | - |
- | Galaxy A56 | - | - | - |
- | Pixel 6a | 218tps/44tps (395MB RAM) | 2.5s/36tps (631MB RAM) | 1.5s/189tps (111MB RAM) |
- | Nothing CMF | - | - | - |
- | Raspberry Pi 5 | - | - | - |
-
- ## Supported Models
-
- | Model | Features |
- |-------|----------|
- | google/gemma-3-270m-it | completion |
- | google/functiongemma-270m-it | completion, tools |
- | LiquidAI/LFM2-350M | completion, tools, embed |
- | Qwen/Qwen3-0.6B | completion, tools, embed |
- | LiquidAI/LFM2-700M | completion, tools, embed |
- | google/gemma-3-1b-it | completion |
- | LiquidAI/LFM2.5-1.2B-Thinking | completion, tools, embed |
- | LiquidAI/LFM2.5-1.2B-Instruct | completion, tools, embed |
- | Qwen/Qwen3-1.7B | completion, tools, embed |
- | LiquidAI/LFM2-2.6B | completion, tools, embed |
- | LiquidAI/LFM2-VL-450M | vision, txt & img embed, Apple NPU |
- | LiquidAI/LFM2.5-VL-1.6B | vision, txt & img embed, Apple NPU |
- | UsefulSensors/moonshine-base | transcription, speech embed |
- | openai/whisper-small | transcription, speech embed, Apple NPU |
- | openai/whisper-medium | transcription, speech embed, Apple NPU |
- | nomic-ai/nomic-embed-text-v2-moe | embed |
- | Qwen/Qwen3-Embedding-0.6B | embed |
-
- ## Using this repo on Mac
- ```bash
- git clone https://github.com/cactus-compute/cactus && cd cactus && source ./setup
- ```

- ## Using this repo on Linux (Ubuntu/Debian)

- ```bash
- sudo apt-get install python3 python3-venv python3-pip cmake build-essential libcurl4-openssl-dev
- git clone https://github.com/cactus-compute/cactus && cd cactus && source ./setup
  ```

- | Command | Description |
- |---------|-------------|
- | `cactus run [model]` | Opens the playground (auto-downloads the model) |
- | `cactus download [model]` | Downloads a model to `./weights` |
- | `cactus convert [model] [dir]` | Converts a model; supports LoRA merging (`--lora <path>`) |
- | `cactus build` | Builds for ARM (`--apple` or `--android`) |
- | `cactus test` | Runs tests (`--ios` / `--android`, `--model [name/path]`, `--precision`) |
- | `cactus transcribe [model]` | Transcribes an audio file (`--file`) or the live microphone |
- | `cactus clean` | Removes build artifacts |
- | `cactus --help` | Shows all commands and flags (always run this first) |

- ## Using in your apps

- - [Python for Mac](/python/)
- - [React Native SDK](https://github.com/cactus-compute/cactus-react-native)
- - [Swift Multiplatform SDK](https://github.com/mhayes853/swift-cactus)
- - [Kotlin Multiplatform SDK](https://github.com/cactus-compute/cactus-kotlin)
- - [Flutter SDK](https://github.com/cactus-compute/cactus-flutter)
- - [Rust SDK](https://github.com/mrsarac/cactus-rs)

- ## Try demo apps

- - [iOS Demo](https://apps.apple.com/gb/app/cactus-chat/id6744444212)
- - [Android Demo](https://play.google.com/store/apps/details?id=com.rshemetsubuser.myapp)

- ## Maintaining Organisations
- 1. [Cactus Compute, Inc](https://cactuscompute.com/)
- 2. [UCLA's BruinAI](https://bruinai.org/)
- 3. [Yale's AI Society](https://www.yale-ai.org/team)
- 4. [National University of Singapore's AI Society](https://www.nusaisociety.org/)
- 5. [UC Irvine's AI@UCI](https://aiclub.ics.uci.edu/)
- 6. [Imperial College's AI Society](https://www.imperialcollegeunion.org/csp/1391)
- 7. [University of Pennsylvania's AI@Penn](https://ai-at-penn-main-105.vercel.app/)
- 8. [University of Michigan Ann Arbor's MSAIL](https://msail.github.io/)
- 9. [University of Colorado Boulder's AI Club](https://www.cuaiclub.org/)
-
- ## Contributing to Cactus
-
- - **C++ Standard**: Use C++20 features where appropriate.
- - **Formatting**: Follow the existing code style in the project; one header per folder.
- - **Comments**: Avoid comments; make your code read like plain English.
- - **AI-Generated Code**: Do not blindly submit AI-generated PRs; this codebase is very complex, and such changes often miss details.
- - **Update docs**: Update the docs when necessary; be intuitive and straightforward.
- - **Keep It Simple**: Do not go beyond the scope of the GitHub issue; avoid bloated PRs and keep the code lean.
- - **Benchmark Your Changes**: Test the performance impact; Cactus is performance-critical.
-
- ## Join The Community
- - [Reddit Channel](https://www.reddit.com/r/cactuscompute/)
1
+ ---
2
+ title: Cactus-Compute
3
+ sdk: static
4
+ pinned: true
5
+ ---
6
+
7
+ # Cactus
8
+
9
  <img src="assets/banner.jpg" alt="Logo" style="border-radius: 30px; width: 100%;">
10
 
11
+ [![Docs][docs-shield]][docs-url]
12
+ [![Website][website-shield]][website-url]
13
+ [![GitHub][github-shield]][github-url]
14
+ [![HuggingFace][hf-shield]][hf-url]
15
+ [![Reddit][reddit-shield]][reddit-url]
16
+ [![Blog][blog-shield]][blog-url]
17
+
18
+ A hybrid low-latency energy-efficient AI engine for mobile devices & wearables.
19
+
20
  ```
+ ┌─────────────────┐
+ │  Cactus Engine  │ ←── OpenAI-compatible APIs for all major languages
+ └─────────────────┘ Chat, vision, STT, RAG, tool call, cloud handoff
            │
+ ┌─────────────────┐
+ │  Cactus Graph   │ ←── Zero-copy computation graph (PyTorch for mobile)
+ └─────────────────┘ Custom models, optimised for RAM & quantisation
            │
+ ┌─────────────────┐
+ │ Cactus Kernels  │ ←── ARM SIMD kernels (Apple, Snapdragon, Exynos, etc.)
+ └─────────────────┘ Custom attention, KV-cache quant, chunked prefill
  ```

+ ## Quick Demo
+
+ - Step 1: `brew install cactus-compute/cactus/cactus`
+ - Step 2: `cactus transcribe` or `cactus run`

  ## Cactus Engine

 
  char response[4096];
  int result = cactus_complete(
+     model,              // model handle
+     messages,           // JSON chat messages
+     response,           // response buffer
+     sizeof(response),   // buffer size
+     options,            // generation options
+     nullptr,            // tools JSON
+     nullptr,            // streaming callback
+     nullptr             // user data
  );
  ```
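As the comments above note, `cactus_complete` accepts an optional streaming callback, documented elsewhere in this README as `fn(token, id, user_data)`. The sketch below shows the accumulate-per-token pattern a host app would build on that contract; the names `TokenStream` and `run_decode` are hypothetical stand-ins, not part of any Cactus SDK.

```python
# Illustrative sketch only: models the documented callback shape
# fn(token, id, user_data). run_decode is a hypothetical stand-in
# for the engine's decode loop, which invokes the callback per token.

class TokenStream:
    def __init__(self):
        self.tokens = []

    def on_token(self, token, token_id, user_data):
        # Called once per decoded token, in order.
        self.tokens.append(token)

    def text(self):
        return "".join(self.tokens)


def run_decode(callback, pieces, user_data=None):
    # Stand-in for the engine driving the registered callback.
    for i, piece in enumerate(pieces):
        callback(piece, i, user_data)


stream = TokenStream()
run_decode(stream.on_token, ["Hi", " there", "!"])
print(stream.text())  # Hi there!
```

The real callback and user-data pointer are passed as the last two arguments of `cactus_complete`.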
  Example response from Gemma3-270m
  ```json
  {
+   "success": true,                  // generation succeeded
+   "error": null,                    // error details if failed
+   "cloud_handoff": false,           // true if the cloud model was used
+   "response": "Hi there!",
+   "function_calls": [],             // parsed tool calls
+   "confidence": 0.8193,             // model confidence
+   "time_to_first_token_ms": 45.23,
+   "total_time_ms": 163.67,
+   "prefill_tps": 1621.89,
+   "decode_tps": 168.42,
+   "ram_usage_mb": 245.67,
    "prefill_tokens": 28,
    "decode_tokens": 50,
    "total_tokens": 78
 
  graph.hard_reset();
  ```
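A host app typically branches on the response fields shown in the example above (`success`, `error`, `cloud_handoff`, `function_calls`, `confidence`). A minimal sketch of that routing, where the function name `route_response` and the 0.5 confidence threshold are illustrative assumptions, not part of the engine API:

```python
import json

def route_response(raw, min_confidence=0.5):
    # Branch on the documented response fields; the 0.5 threshold
    # is an illustrative choice, not an engine default.
    r = json.loads(raw)
    if not r["success"]:
        return ("error", r["error"])
    if r["cloud_handoff"]:
        return ("cloud", None)                 # request was routed to a cloud model
    if r["function_calls"]:
        return ("tools", r["function_calls"])  # dispatch parsed tool calls
    if r["confidence"] < min_confidence:
        return ("low_confidence", r["response"])
    return ("ok", r["response"])

example = json.dumps({
    "success": True, "error": None, "cloud_handoff": False,
    "response": "Hi there!", "function_calls": [], "confidence": 0.8193,
})
print(route_response(example))  # ('ok', 'Hi there!')
```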

+ ## API & SDK References
+
+ | Reference | Language | Description |
+ |-----------|----------|-------------|
+ | [Engine API](cactus_engine.md) | C | Chat completion, streaming, tool calling, transcription, embeddings, RAG, vision, VAD, vector index, cloud handoff |
+ | [Graph API](cactus_graph.md) | C++ | Tensor operations, matrix multiplication, attention, normalization, activation functions |
+ | [Python SDK](/python/) | Python | Mac, Linux |
+ | [Swift SDK](/apple/) | Swift | iOS, macOS, tvOS, watchOS, Android |
+ | [Kotlin SDK](/android/) | Kotlin | Android, iOS (via KMP) |
+ | [Flutter SDK](/flutter/) | Dart | iOS, macOS, Android |
+ | [Rust SDK](/rust/) | Rust | Mac, Linux |
+ | [React Native](https://github.com/cactus-compute/cactus-react-native) | JavaScript | iOS, Android |
+
+ ## Benchmarks
+
+ - All weights INT4-quantised
+ - LFM: 1k-prefill / 100-decode; values are prefill tps / decode tps
+ - LFM-VL: 256px input; values are latency / decode tps
+ - Parakeet: 30s audio input; values are latency / decode tps
+ - Missing latency = no NPU support yet
+
+ | Device | LFM 1.2B | LFM-VL 1.6B | Parakeet 1.1B | RAM |
+ |--------|----------|-------------|---------------|-----|
+ | Mac M4 Pro | 582/100 | 0.2s/98 | 0.1s/900k+ | 76MB |
+ | iPad/Mac M3 | 350/60 | 0.3s/69 | 0.3s/800k+ | 70MB |
+ | iPhone 17 Pro | 327/48 | 0.3s/48 | 0.3s/300k+ | 108MB |
+ | iPhone 13 Mini | 148/34 | 0.3s/35 | 0.7s/90k+ | 1GB |
+ | Galaxy S25 Ultra | 255/37 | -/34 | -/250k+ | 1.5GB |
+ | Pixel 6a | 70/15 | -/15 | -/17k+ | 1GB |
+ | Galaxy A17 5G | 32/10 | -/11 | -/40k+ | 727MB |
+ | CMF Phone 2 Pro | - | - | - | - |
+ | Raspberry Pi 5 | 69/11 | 13.3s/11 | 4.5s/180k+ | 869MB |
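The throughput figures above can be turned into rough wall-clock estimates for the 1k-prefill / 100-decode workload. This is illustrative arithmetic only; real end-to-end latency also includes model load and sampling overhead.

```python
# Convert the table's tps figures into approximate wall-clock time.

def prefill_seconds(prompt_tokens, prefill_tps):
    # Time to ingest the prompt at the measured prefill throughput.
    return prompt_tokens / prefill_tps

def decode_seconds(new_tokens, decode_tps):
    # Time to generate new tokens at the measured decode throughput.
    return new_tokens / decode_tps

# iPhone 17 Pro row: 327 prefill tps, 48 decode tps.
total = prefill_seconds(1000, 327) + decode_seconds(100, 48)
print(round(total, 2))  # 5.14 seconds end-to-end
```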
+
+ ## Roadmap
+
+ | Date | Status | Milestone |
+ |------|--------|-----------|
+ | Sep 2025 | Done | Released v1 |
+ | Oct 2025 | Done | Chunked prefill, KV-cache quant (2x prefill) |
+ | Nov 2025 | Done | Cactus Attention (10-token & 1k-token prefill = same decode speed) |
+ | Dec 2025 | Done | Team grows to 6+ Research Engineers |
+ | Jan 2026 | Done | Apple NPU/RAM, 5-11x faster iOS/Mac |
+ | Feb 2026 | Done | Hybrid inference, INT4, lossless quant (1.5x) |
+ | Mar 2026 | Coming | Qualcomm/Google NPUs, 5-11x faster Android |
+ | Apr 2026 | Coming | MediaTek/Exynos NPUs, Cactus@ICLR |
+ | May 2026 | Coming | Kernel→C++, Graph/Engine→Rust, Mac GPU & VR |
+ | Jun 2026 | Coming | Torch/JAX model transpilers |
+ | Jul 2026 | Coming | Wearables optimisations, Cactus@ICML |
+ | Aug 2026 | Coming | Orchestration |
+ | Sep 2026 | Coming | Full Cactus paper, chip manufacturer partners |
+
+ ## Using this repo

+ ```
+ ┌────────────────────────────────────────────────────────────────────────┐
+ │                                                                        │
+ │ Step 0: if on Linux (Ubuntu/Debian)                                    │
+ │   sudo apt-get install python3 python3-venv python3-pip cmake          │
+ │   build-essential libcurl4-openssl-dev                                 │
+ │                                                                        │
+ │ Step 1: clone and setup                                                │
+ │   git clone https://github.com/cactus-compute/cactus && cd cactus      │
+ │   source ./setup                                                       │
+ │                                                                        │
+ │ Step 2: use the commands                                               │
+ ├────────────────────────────────────────────────────────────────────────┤
+ │                                                                        │
+ │ cactus auth                  manage Cloud API key                      │
+ │   --status                   show key status                           │
+ │   --clear                    remove saved key                          │
+ │                                                                        │
+ │ cactus run <model>           opens playground (auto downloads)         │
+ │   --precision INT4|INT8|FP16 quantization (default: INT4)              │
+ │   --token <token>            HF token (gated models)                   │
+ │   --reconvert                force reconversion from source            │
+ │                                                                        │
+ │ cactus transcribe [model]    live mic transcription (parakeet-1.1b)    │
+ │   --file <audio.wav>         transcribe file instead of mic            │
+ │   --precision INT4|INT8|FP16 quantization (default: INT4)              │
+ │   --token <token>            HF token (gated models)                   │
+ │   --reconvert                force reconversion from source            │
+ │                                                                        │
+ │ cactus download <model>      downloads model to ./weights              │
+ │   --precision INT4|INT8|FP16 quantization (default: INT4)              │
+ │   --token <token>            HuggingFace API token                     │
+ │   --reconvert                force reconversion from source            │
+ │                                                                        │
+ │ cactus convert <model> [dir] convert model, supports LoRA merge        │
+ │   --precision INT4|INT8|FP16 quantization (default: INT4)              │
+ │   --lora <path>              LoRA adapter to merge                     │
+ │   --token <token>            HuggingFace API token                     │
+ │                                                                        │
+ │ cactus build                 build for ARM → build/libcactus.a         │
+ │   --apple                    Apple (iOS/macOS)                         │
+ │   --android                  Android                                   │
+ │   --flutter                  Flutter (all platforms)                   │
+ │   --python                   shared lib for Python FFI                 │
+ │                                                                        │
+ │ cactus test                  run unit tests and benchmarks             │
+ │   --model <model>            default: LFM2-VL-450M                     │
+ │   --transcribe_model <model> default: moonshine-base                   │
+ │   --benchmark                use larger models                         │
+ │   --precision INT4|INT8|FP16 regenerate weights with precision         │
+ │   --reconvert                force reconversion from source            │
+ │   --no-rebuild               skip building library                     │
+ │   --only <test>              specific test (llm, vlm, stt, etc)        │
+ │   --ios                      run on connected iPhone                   │
+ │   --android                  run on connected Android                  │
+ │                                                                        │
+ │ cactus clean                 remove all build artifacts                │
+ │ cactus --help                show all commands and flags               │
+ │                                                                        │
+ └────────────────────────────────────────────────────────────────────────┘
+ ```
+ ## Maintaining Organisations

+ 1. [Cactus Compute, Inc. (YC S25)](https://cactuscompute.com/)
+ 2. [UCLA's BruinAI](https://bruinai.org/)
+ 3. [Char (YC S25)](https://char.com/)
+ 4. [Yale's AI Society](https://www.yale-ai.org/team)
+ 5. [National University of Singapore's AI Society](https://www.nusaisociety.org/)
+ 6. [UC Irvine's AI@UCI](https://aiclub.ics.uci.edu/)
+ 7. [Imperial College's AI Society](https://www.imperialcollegeunion.org/csp/1391)
+ 8. [University of Pennsylvania's AI@Penn](https://ai-at-penn-main-105.vercel.app/)
+ 9. [University of Michigan Ann Arbor's MSAIL](https://msail.github.io/)
+ 10. [University of Colorado Boulder's AI Club](https://www.cuaiclub.org/)
+
+ ## Citation
+
+ If you use Cactus in your research, please cite it as follows:
+
+ ```bibtex
+ @software{cactus,
+   title  = {Cactus: AI Inference Engine for Phones & Wearables},
+   author = {Ndubuaku, Henry and Cactus Team},
+   url    = {https://github.com/cactus-compute/cactus},
+   year   = {2025}
+ }
  ```

+ **N.B.:** Scroll all the way up and click the shield links for resources!

+ [docs-shield]: https://img.shields.io/badge/Docs-555?style=for-the-badge&logo=readthedocs&logoColor=white
+ [docs-url]: https://cactus-compute.github.io/cactus/

+ [website-shield]: https://img.shields.io/badge/Website-555?style=for-the-badge&logo=safari&logoColor=white
+ [website-url]: https://cactuscompute.com/

+ [github-shield]: https://img.shields.io/badge/GitHub-555?style=for-the-badge&logo=github&logoColor=white
+ [github-url]: https://github.com/cactus-compute/cactus

+ [hf-shield]: https://img.shields.io/badge/HuggingFace-555?style=for-the-badge&logo=huggingface&logoColor=white
+ [hf-url]: https://huggingface.co/Cactus-Compute

+ [reddit-shield]: https://img.shields.io/badge/Reddit-555?style=for-the-badge&logo=reddit&logoColor=white
+ [reddit-url]: https://www.reddit.com/r/cactuscompute/

+ [blog-shield]: https://img.shields.io/badge/Blog-555?style=for-the-badge&logo=hashnode&logoColor=white
+ [blog-url]: https://cactus-compute.github.io/cactus/blog/README/