Update README.md

---
library_name: llama-cpp-python
tags:
- gguf
- qwen
- qwen2.5
- math
- stem
- educational
- reasoning
- text-generation
base_model: Qwen/Qwen2.5-Math-1.5B
pipeline_tag: text-generation
model_creator: Qwen Team (Alibaba Cloud)
quantized_by: Md Habibur Rahman (Aasif)
---

# 🧮 Qwen 2.5 Math 1.5B (GGUF Quantized)

This repository contains the **GGUF** quantized version of the [Qwen 2.5 Math 1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) model.

It is a specialized **Mathematical Reasoning Model** optimized for edge devices, offline usage, and educational apps. Despite its small size (1.5B parameters), it outperforms many larger general-purpose models on complex mathematical problem-solving tasks.

**Model Creator:** Qwen Team (Alibaba Cloud)
**Quantized By:** Md Habibur Rahman (Aasif)
**Quantization Format:** GGUF (Q4_K_M) - *Optimized for a balance between math accuracy and speed.*

## 🌟 Key Features

* **Math Specialist:** Specifically trained on massive mathematical datasets (Algebra, Calculus, Geometry, Logic).
* **Chain-of-Thought (CoT):** Can show step-by-step reasoning while solving problems.
* **Edge AI Ready:** Extremely lightweight (~1 GB). Runs smoothly on Android, Raspberry Pi, and older laptops.
* **Offline Capable:** Does not require an internet connection to solve problems.

## 🚀 Usage (Python)

You can run this model using the `llama-cpp-python` library.

### 1. Installation

```bash
pip install llama-cpp-python huggingface_hub
```

### 2. Python Inference Code

Here is a script that solves math problems with step-by-step reasoning:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the model
model_path = hf_hub_download(
    repo_id="Habibur2/Qwen2.5-Math-1.5B-GGUF",
    filename="qwen-math-1.5b-q4_k_m.gguf"
)

# Load the model
# Set n_gpu_layers=-1 for full GPU offload (fastest)
# Set n_gpu_layers=0 for CPU only
llm = Llama(
    model_path=model_path,
    n_ctx=2048,       # Context window
    n_threads=4,      # CPU threads
    n_gpu_layers=-1   # GPU acceleration
)

# Define a math problem
math_problem = "Find the integral of x^2 + 5x with respect to x."

# The system prompt is crucial for math models
messages = [
    {"role": "system", "content": "You are a helpful mathematical assistant. Please solve the problem step-by-step and show your reasoning clearly."},
    {"role": "user", "content": math_problem}
]

# Generate the solution
output = llm.create_chat_completion(
    messages=messages,
    max_tokens=1024,  # Math solutions need more tokens
    temperature=0.1   # Low temperature is best for precise math
)

print("🤖 Solution:\n")
print(output['choices'][0]['message']['content'])
```
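Qwen2.5-Math models are typically prompted to place the final answer inside `\boxed{...}`. If you want just the final answer rather than the full chain of thought, a minimal extraction sketch (the `extract_boxed` helper is ours, not part of any library, and handles only one level of nested braces):

```python
import re

def extract_boxed(text):
    """Return the contents of the last \\boxed{...} in the model output,
    or None if no boxed answer is present. Handles one nesting level."""
    matches = re.findall(r"\\boxed\{((?:[^{}]|\{[^{}]*\})*)\}", text)
    return matches[-1] if matches else None

sample = r"Step 1: ... Therefore the answer is \boxed{\frac{x^3}{3} + \frac{5x^2}{2} + C}."
print(extract_boxed(sample))  # \frac{x^3}{3} + \frac{5x^2}{2} + C
```

A proper LaTeX parser would be needed for deeply nested expressions, but this covers typical model outputs.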

## ⚙️ Technical Specifications

| Feature | Details |
|---|---|
| Original Model | Qwen 2.5 Math 1.5B Instruct |
| Architecture | Transformer (RoPE, SwiGLU) |
| Parameters | 1.5 Billion |
| Quantization Type | Q4_K_M (4-bit Medium) |
| File Size | ~1.12 GB |
| Recommended RAM | 2 GB+ |

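The ~1.12 GB figure is consistent with back-of-the-envelope arithmetic. Assuming Q4_K_M averages roughly 4.8 bits per weight (an assumption; it is a mixed 4/6-bit scheme and the exact average varies by model), the quantized weight payload alone comes to about 0.9 GB, with embeddings and metadata stored at higher precision accounting for the rest:

```python
# Rough size estimate for a Q4_K_M quantized 1.5B model.
# bits_per_weight = 4.8 is our assumed average for Q4_K_M;
# the real file also stores some tensors at higher precision.
params = 1.5e9          # 1.5B parameters
bits_per_weight = 4.8   # assumed Q4_K_M average
est_gb = params * bits_per_weight / 8 / 1e9
print(f"Estimated weight payload: {est_gb:.2f} GB")  # ~0.90 GB
```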

## 🧪 Benchmark & Capabilities

This model excels at:

* **Algebra & Arithmetic:** Solving equations, inequalities, and basic operations.
* **Calculus:** Differentiation and integration problems.
* **Word Problems:** Understanding and translating text into mathematical equations.
* **LaTeX Output:** Can generate answers in LaTeX format for academic rendering.


## 👨‍💻 About the Project

This model was quantized and uploaded by Md Habibur Rahman as part of a research initiative on Offline Edge AI and Small Language Models (SLMs). The goal is to democratize access to powerful educational AI tools without relying on heavy cloud infrastructure.

**Disclaimer:** While this model is highly capable, always verify complex mathematical solutions.
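As a concrete illustration of such verification, the integral from the usage example above can be checked numerically: differentiate the candidate antiderivative with a central difference and compare it to the original integrand at a few sample points.

```python
# If F(x) = x^3/3 + 5x^2/2 is a correct antiderivative of
# f(x) = x^2 + 5x, a finite-difference derivative of F should match f.
def f(x): return x**2 + 5*x
def F(x): return x**3/3 + 5*x**2/2

h = 1e-6
for x in [0.5, 1.0, 2.0]:
    approx = (F(x + h) - F(x - h)) / (2 * h)  # central difference
    assert abs(approx - f(x)) < 1e-4, (x, approx, f(x))
print("antiderivative check passed")
```

The same pattern (symbolic answer in, cheap numeric check out) works for derivatives, equation roots, and many word-problem answers.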