AIencoder committed (verified)
Commit 1ecbdcf · 1 Parent(s): aa9e7bb

Update README.md

Files changed (1)
  1. README.md +216 -51

README.md CHANGED
@@ -1,95 +1,260 @@
  ---
- title: Axon v6
- emoji: 🔥
  colorFrom: indigo
- colorTo: blue
  sdk: docker
  pinned: true
  license: mit
- short_description: "AI Coding Assistant - 8 Models - 19 Tools - 100% Local"
  ---

- # 🔥 Axon v6

- **The Ultimate Free AI Coding Assistant**

- [![Built with llama.cpp](https://img.shields.io/badge/Built%20with-llama.cpp-blue)](https://github.com/ggerganov/llama.cpp)
- [![Models](https://img.shields.io/badge/Models-8-green)](https://huggingface.co/Qwen)
- [![Tools](https://img.shields.io/badge/Tools-19-orange)](https://huggingface.co/spaces/AIencoder/Axon)
- [![License](https://img.shields.io/badge/License-MIT-yellow)](LICENSE)

  ---

- ## Features

- ### 🤖 8 Powerful Models
- | Model | Size | Best For |
- |-------|------|----------|
- | ⭐ Qwen3 Coder 30B-A3B | ~10GB | Best quality (MoE) |
- | 🏆 Qwen2.5 Coder 14B | ~8GB | Premium tasks |
- | 🧠 DeepSeek V2 Lite | ~9GB | Complex logic |
- | ⚖️ Qwen2.5 Coder 7B | ~4.5GB | Balanced |
- | 🚀 Qwen2.5 Coder 3B | ~2GB | Fast & capable |
- | ⚡ DeepSeek Coder 6.7B | ~4GB | Algorithms |
- | 💨 Qwen2.5 Coder 1.5B | ~1GB | Quick tasks |
- | 🔬 Qwen2.5 Coder 0.5B | ~0.3GB | Instant |

- ### 🛠️ 19 Tools

- **Core:** Chat, Generate, Explain, Debug, Review

- **Advanced:** Security Scan, Complexity Analysis, Convert, Test, Document, Optimize, Diff, Pseudocode, Interview

- **Builders:** SQL, Shell, Cron, Regex, API

- **Data:** Mock Data Generator, Format Converter

- ### 🎤 Voice Input
- Whisper-powered speech-to-text - just speak your code requests!

- ### 💾 Export
- Save your chat history and generated code

  ---

- ## 🚀 Tech Stack

- - **Inference:** llama.cpp via llama-cpp-python
- - **Wheels:** AIencoder/llama-cpp-wheels (pre-built for Debian/Ubuntu)
- - **UI:** Gradio
- - **Speech:** faster-whisper
- - **Models:** GGUF format from HuggingFace

  ---

- ## 💡 Why Axon?

- | Feature | Axon | Others |
- |---------|------|--------|
- | 100% Local | ✅ | ❌ |
- | No API Keys | ✅ | ❌ |
  | No Rate Limits | ✅ | ❌ |
  | Free Forever | ✅ | ❌ |
- | Privacy | ✅ | ❌ |
- | 8 Models | ✅ | ❌ |
- | 19 Tools | ✅ | ❌ |

  ---

  ## 🛞 Pre-built Wheels

  ```bash
- pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl
  ```

  ---

  ## 🙏 Credits

- - Qwen for amazing coding models
- - DeepSeek for logic-focused models
- - ggerganov for llama.cpp
- - Gradio for the UI framework

  ---

- **Built with ❤️ by AIencoder**

  ---
+ title: Axon v26
+ emoji:
  colorFrom: indigo
+ colorTo: purple
  sdk: docker
  pinned: true
  license: mit
+ short_description: "Free AI Coding Assistant - 5 Models - 25 Tools - 100% Local"
  ---

+ <div align="center">

+ # Axon v26

+ ### The Ultimate Free AI Coding Assistant
+
+ **5 Models • 25 Tools • 100% Local • Zero API Keys**
+
+ [![Built with llama.cpp](https://img.shields.io/badge/llama.cpp-Powered-blue?style=for-the-badge&logo=cplusplus)](https://github.com/ggerganov/llama.cpp)
+ [![Models](https://img.shields.io/badge/Models-5-green?style=for-the-badge)](https://huggingface.co/Qwen)
+ [![Tools](https://img.shields.io/badge/Tools-25-orange?style=for-the-badge)](https://huggingface.co/spaces/AIencoder/Axon)
+ [![License](https://img.shields.io/badge/License-MIT-yellow?style=for-the-badge)](LICENSE)
+ [![AVX2 Optimized](https://img.shields.io/badge/AVX2-Optimized-red?style=for-the-badge)](https://huggingface.co/datasets/AIencoder/llama-cpp-wheels)

  ---

+ [**Try Axon Now →**](https://huggingface.co/spaces/AIencoder/Axon)

+ </div>

+ ---

+ ## 🚀 What is Axon?

+ Axon is a **free, privacy-first AI coding assistant** that runs entirely locally using llama.cpp. No API keys, no rate limits, no data collection - just powerful AI coding tools at your fingertips.

+ Built from the ground up after two days spent compiling llama-cpp-python wheels that didn't exist anywhere else. Now you don't have to.
+
+ ---

+ ## 🤖 5 Powerful Models

+ Choose the right model for your task - from instant responses to complex reasoning.

+ | Model | Size | Speed | Best For |
+ |-------|------|-------|----------|
+ | 🧠 **DeepSeek V2 Lite** | ~9GB | ⭐⭐ | Complex logic, MoE architecture |
+ | ⚖️ **Qwen2.5 Coder 7B** | ~4.5GB | ⭐⭐⭐ | Balanced quality & speed |
+ | 🚀 **Qwen2.5 Coder 3B** | ~2GB | ⭐⭐⭐⭐ | Fast & highly capable |
+ | 💨 **Qwen2.5 Coder 1.5B** | ~1GB | ⭐⭐⭐⭐⭐ | Quick tasks |
+ | 🔬 **Qwen2.5 Coder 0.5B** | ~0.3GB | ⚡ | Instant responses |
+
+ > Models download automatically on first use. Storage persists between sessions.

  ---

+ ## 🛠️ 25 Tools
+
+ ### Core Tools
+ | Tool | Description |
+ |------|-------------|
+ | 💬 **Chat** | Conversational coding help with streaming responses |
+ | ⚡ **Generate** | Create code from natural language descriptions |
+ | 🔍 **Explain** | Understand any code (Brief / Normal / Detailed modes) |
+ | 🔧 **Debug** | Find and fix bugs with error context |
+ | 📋 **Review** | Code quality, security & performance review |
+
+ ### Advanced Tools
+ | Tool | Description |
+ |------|-------------|
+ | 🔐 **Security Scan** | Find vulnerabilities (SQL injection, XSS, etc.) |
+ | 📊 **Complexity** | Big O analysis for time & space |
+ | 🔄 **Convert** | Translate between 22+ programming languages |
+ | 🧪 **Test** | Generate comprehensive unit tests |
+ | 📝 **Document** | Add docstrings, comments & inline docs |
+ | 🚀 **Optimize** | Performance improvements & refactoring |
+ | 🔀 **Diff** | Compare two code snippets |
+ | 📐 **Pseudocode** | Convert code to pseudocode/flowcharts |
+ | 🎓 **Interview** | Generate coding challenges & solutions |
+
+ ### Builders
+ | Tool | Description |
+ |------|-------------|
+ | 🗄️ **SQL Builder** | Natural language → SQL queries |
+ | 🐚 **Shell Builder** | Natural language → Bash/PowerShell commands |
+ | ⏰ **Cron Builder** | Create cron schedule expressions |
+ | 🎯 **Regex Builder** | Pattern creation with explanations |
+ | 🔗 **API Builder** | Generate REST endpoint boilerplate |
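To give a flavor of the Regex Builder, here is the kind of pattern-plus-explanation such a tool might produce for "match an ISO date like 2024-01-31". The snippet is illustrative only, not Axon's actual output:

```python
import re

# Anchored pattern: 4-digit year, month 01-12, day 01-31.
# Note it checks digit ranges only, not days-per-month.
ISO_DATE = re.compile(r"^(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

print(bool(ISO_DATE.match("2024-01-31")))  # True - well-formed date
print(bool(ISO_DATE.match("2024-13-01")))  # False - month 13 rejected
```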

+ ### Data Tools
+ | Tool | Description |
+ |------|-------------|
+ | 📦 **Mock Data** | Generate realistic test data (JSON, CSV, etc.) |
+ | 🔄 **Format Converter** | Convert between JSON/YAML/XML/CSV/TOML |
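As a rough sketch of what the Format Converter's JSON → CSV direction involves, flat records can be converted with the Python standard library alone. The `json_to_csv` helper below is our illustration, not an Axon API:

```python
import csv
import io
import json

def json_to_csv(json_text: str) -> str:
    """Convert a JSON array of flat objects into CSV text."""
    records = json.loads(json_text)
    buf = io.StringIO()
    # Column order is taken from the first record's keys.
    writer = csv.DictWriter(buf, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

csv_text = json_to_csv('[{"name": "Ada", "lang": "Python"}, {"name": "Linus", "lang": "C"}]')
print(csv_text)
```

Nested objects would need flattening first; that is where a model-backed converter earns its keep.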
+
+ ### NEW in v26
+ | Tool | Description |
+ |------|-------------|
+ | 🎨 **Refactor** | Restructure code for better design patterns |
+ | 📊 **Benchmark** | Generate performance benchmark code |
+ | 🔗 **Dependency Analyzer** | Analyze imports & dependencies |
+ | 📋 **Changelog** | Generate changelogs from code diffs |
+ | 💡 **Suggest** | AI-powered improvement suggestions |
+
+ ---
+
+ ## 🎤 Voice Input
+
+ Speak your code requests using Whisper-powered speech-to-text. Just click the microphone and talk naturally.

  ---

+ ## 🌙 Dark Mode
+
+ Toggle between light and dark themes. Your preference is saved automatically.
+
+ ---

+ ## 💾 Export
+
+ Save your chat history and generated code for later reference.
+
+ ---
+
+ ## ⚡ Performance
+
+ Axon uses **AVX2-optimized** llama-cpp-python wheels for **2-3x faster** inference compared to basic builds.
+
+ | Build Type | Tokens/sec (3B) | Compatibility |
+ |------------|-----------------|---------------|
+ | Basic | ~10-15 | All x86_64 |
+ | **AVX2 (Axon)** | **~30-40** | Intel Haswell+ (2013+) / AMD Zen+ (2018+) |
+
+ ---
+
+ ## 🔒 Privacy First
+
+ | Feature | Axon | Cloud Alternatives |
+ |---------|------|--------------------|
+ | 100% Local Processing | ✅ | ❌ |
+ | No API Keys Required | ✅ | ❌ |
  | No Rate Limits | ✅ | ❌ |
+ | No Data Collection | ✅ | ❌ |
+ | Works Offline | ✅ | ❌ |
  | Free Forever | ✅ | ❌ |
+
+ Your code **never** leaves the machine Axon runs on. Period.
+
+ ---
+
+ ## 💻 Supported Languages
+
+ Python • JavaScript • TypeScript • Go • Rust • Java • C++ • C# • C • PHP • Ruby • Swift • Kotlin • Scala • R • Julia • Perl • HTML/CSS • SQL • Bash • PowerShell • Lua

  ---

  ## 🛞 Pre-built Wheels
+
+ Tired of building llama-cpp-python from source? Use our AVX2-optimized wheels:
+
  ```bash
+ # Python 3.10
+ pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16-cp310-cp310-manylinux_2_31_x86_64.whl
+
+ # Python 3.11
+ pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16-cp311-cp311-manylinux_2_31_x86_64.whl
  ```

+ **Features:**
+ - AVX2 + FMA + F16C enabled
+ - 2-3x faster than basic builds
+ - Works on Intel Haswell+ (2013+) and AMD Zen+ (2018+)
+
+ [**Browse all wheels →**](https://huggingface.co/datasets/AIencoder/llama-cpp-wheels)
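Not sure whether your CPU supports AVX2? On Linux you can check the flags in `/proc/cpuinfo` before picking a wheel. The `has_avx2` helper below is our sketch, not part of the wheel repo, and assumes an x86_64 Linux host:

```python
from pathlib import Path

def has_avx2(cpuinfo: str) -> bool:
    """Return True if the /proc/cpuinfo text advertises the avx2 flag."""
    for line in cpuinfo.splitlines():
        if line.lower().startswith("flags"):
            return "avx2" in line.split()
    return False

if __name__ == "__main__":
    info = Path("/proc/cpuinfo")
    if info.exists():  # only present on Linux
        print("AVX2 supported:", has_avx2(info.read_text()))
```

If this prints `False`, fall back to a basic (non-AVX2) build instead.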
+
+ ---
+
+ ## 🏗️ Tech Stack
+
+ | Component | Technology |
+ |-----------|------------|
+ | Inference | [llama.cpp](https://github.com/ggerganov/llama.cpp) via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) |
+ | Wheels | [AIencoder/llama-cpp-wheels](https://huggingface.co/datasets/AIencoder/llama-cpp-wheels) (AVX2 optimized) |
+ | UI | [Gradio](https://gradio.app/) |
+ | Speech | [faster-whisper](https://github.com/SYSTRAN/faster-whisper) |
+ | Models | GGUF format from HuggingFace |
+ | Hosting | HuggingFace Spaces (Docker) |
+
+ ---
+
+ ## 🚀 Self-Hosting
+
+ Want to run Axon on your own machine?
+
+ ```bash
+ # Clone the space
+ git clone https://huggingface.co/spaces/AIencoder/Axon
+ cd Axon
+
+ # Build and run
+ docker build -t axon .
+ docker run -p 7860:7860 -v axon_data:/data axon
+ ```
+
+ Then open `http://localhost:7860`
+
+ ---
+
+ ## 📊 Changelog
+
+ ### v26 (Current) - The FINAL Version
+ - ✨ Added 6 new tools (25 total)
+ - 🎨 Redesigned UI with better UX
+ - ⚡ AVX2-optimized wheels for 2-3x speed boost
+ - 🔧 Gradio 6.0 compatibility fixes
+ - 📦 Optimized storage usage
+
+ ### v6
+ - 🚀 Initial public release
+ - 🤖 8 models (later reduced to 5 due to storage limits)
+ - 🛠️ 19 tools
+ - 🎤 Whisper voice input
+
  ---

  ## 🙏 Credits

+ - [Qwen](https://huggingface.co/Qwen) - Amazing coding models
+ - [DeepSeek](https://huggingface.co/deepseek-ai) - Logic-focused models
+ - [ggerganov](https://github.com/ggerganov) - llama.cpp
+ - [abetlen](https://github.com/abetlen) - llama-cpp-python
+ - [Gradio](https://gradio.app/) - UI framework
+ - [SYSTRAN](https://github.com/SYSTRAN) - faster-whisper
+
+ ---
+
+ ## 📄 License
+
+ MIT License - Use it, modify it, share it!
+
+ ---
+
+ ## ⭐ Support
+
+ If Axon helps you code faster, consider:
+ - ⭐ Starring the Space
+ - 🐛 Reporting issues
+ - 💡 Suggesting features
+ - 📢 Sharing with friends

  ---

+ <div align="center">
+
+ **Built with ❤️ and copious caffeine by [AIencoder](https://huggingface.co/AIencoder)**
+
+ *No sleep was had in the making of those wheels.*
+
+ </div>