---
license: apache-2.0
language:
- en
base_model:
- HyzeAI/HyzeMini
pipeline_tag: text-generation
library_name: transformers.js
tags:
- gguf
- hyze
- local
- chat
- Hitesh_V_Founder
---
<p align="center">
  <img src="https://i.imgur.com/ePJMLNp.png" alt="Hyze Logo" width="405"/>
</p>

<h1 align="center">HyzeMini (GGUF)</h1>

<p align="center">
  Lightweight GGUF builds of <b>HyzeMini</b> for fast local inference
</p>

<p align="center">
  🔗 <a href="https://hyzebot.vercel.app">hyzebot.vercel.app</a> •  
  📘 <a href="https://hyzedocs.vercel.app">hyzedocs.vercel.app</a> •  
  🧠 <a href="https://hyzecode.vercel.app">hyzecode.vercel.app</a>
</p>

---

## 🚀 Overview

**HyzeMini (GGUF)** provides **quantized GGUF versions** of the HyzeMini model, optimized for **local execution** using tools like **llama.cpp**, **LM Studio**, **Ollama**, and other GGUF-compatible runtimes.

This build keeps the base model's **Space + General Chat focus** while enabling:

- ⚡ Faster inference  
- 🧠 Lower memory usage  
- 💻 CPU-friendly execution  

---

## 🧠 Model Details

- **Base model:** HyzeAI / HyzeMini  
- **Parameters:** ~0.1B  
- **Architecture:** Transformer (LLaMA-style)  
- **Format:** GGUF  
- **Language:** English  
- **License:** Apache-2.0  

---

## 🧪 Available Quantizations

*(The exact files available may vary by upload.)*

Common GGUF variants include:

- `Q2_K` – Ultra-low memory, fastest
- `Q4_K_M` – Balanced quality & speed (recommended)
- `Q5_K_M` – Higher quality, slightly slower
- `Q8_0` – Best quality, highest memory usage

> 💡 If you’re unsure, start with **Q4_K_M**.

---

## ⚙️ Usage

### llama.cpp

```bash
./main -m HyzeMini-Q4_K_M.gguf -p "Tell me a cool space fact:"
```

> 💡 Newer llama.cpp builds ship this binary as `llama-cli`; the flags shown above are the same.

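### Ollama

The same GGUF file can be loaded into Ollama via a minimal Modelfile. This is a sketch: the filename assumes the Q4_K_M build sitting in the current directory, and `hyzemini` is just an illustrative model name.

```
# Modelfile — point Ollama at the local GGUF (filename is an assumption)
FROM ./HyzeMini-Q4_K_M.gguf
```

Then register and run it with `ollama create hyzemini -f Modelfile` followed by `ollama run hyzemini`.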
---