remiai3 committed · Commit dbafa28 · verified · 1 Parent(s): 8c872b5

Update report.md

Files changed (1): report.md (+97 -27)
report.md CHANGED
@@ -26,6 +26,12 @@ graph TD
      G[Start-up Check]
  end

  G -- Detects Flags --> A
  A -- Selects Binary --> C
  C -- Loads --> H[model.gguf]
@@ -37,6 +43,8 @@ graph TD

  ### 2.2 Component Breakdown

  1. **Electron Main Process (`main.js`)**:
     * **Role**: The application entry point and central controller.
     * **New Capabilities**:
@@ -53,27 +61,36 @@ graph TD
  2. **Native AI Engine (Backend)**:
     * **Role**: The "Brain" of the application.
     * **Technology**: Pre-compiled binaries (likely based on `llama.cpp`) optimized for CPU inference.
-    * **Binaries**: `bujji_engine.exe` located in `engine/cpu_avx/` and `engine/cpu_avx2/`.
     * **Operation**: Runs a local server on port `5000`.
     * **Model**: Loads weights strictly from a file named `model.gguf`.
-    * **No Python Required**: The binary is self-contained.
-    * **Git LFS integration**: Large binaries (`.exe`, `.dll`) are tracked via Git LFS.

  3. **TTS Engine (Piper)**:
     * **Role**: Text-to-Speech synthesis — converts typed text into natural-sounding speech.
     * **Technology**: Piper TTS (`piper.exe`), an ONNX-based neural TTS engine.
-    * **Binaries**: `piper.exe` in `engine/cpu_avx/` and `engine/cpu_avx2/`.
     * **Model**: `en_US-lessac-medium.onnx` (English, medium quality voice) stored in `engine/piper/`.
-    * **Bundled DLLs**: `piper_phonemize.dll`, `onnxruntime.dll`, `espeak-ng.dll`.
-    * **Output**: WAV audio files saved to the system temp directory.

  4. **STT Engine (Whisper Server)**:
     * **Role**: Speech-to-Text transcription — extracts text from audio files.
     * **Technology**: Whisper.cpp server build (`whisper.exe`), runs as an HTTP server.
-    * **Binaries**: `whisper.exe` in `engine/cpu_avx/` and `engine/cpu_avx2/`.
     * **Model**: `ggml-base.en.bin` (English base model) stored in `engine/whisper/`.
-    * **Bundled DLLs**: `whisper.dll`, `ggml.dll`.
-    * **Audio Format Support**: `.wav`, `.mp3`, `.m4a`, `.ogg`, `.flac` — requires `ffmpeg.exe` and `ffmpeg.dll` in `bin/`.

  ## 3. Operational Flow Chart

@@ -108,18 +125,60 @@ sequenceDiagram
  W-->>U: Display Chat Interface
  ```

  ## 4. Technical Specifications & Requirements

  ### 4.1 Prerequisites
  * **Operating System**: Windows (10/11) 64-bit.
- * **Software**: Git & Git LFS (Required for downloading engine binaries).
  * **Runtime**: Node.js (LTS version recommended).
  * **Hardware**:
    * Any modern CPU (Intel/AMD) with AVX support.
    * Minimum 8GB RAM (16GB recommended for larger models).
    * Disk space proportional to the model size (e.g., 4GB for a 7B model).

- ### 4.2 File Structure & Dependencies
  The critical file structure required for the app to function:

  ```text
@@ -133,24 +192,38 @@ Root/
  │ │ ├── bujji_engine.exe
  │ │ ├── piper.exe
  │ │ └── whisper.exe
  ├── bin/                # Utility binaries
  │ ├── ffmpeg.exe        # Audio conversion (required for STT)
  │ ├── ffmpeg.dll        # FFmpeg library
  │ └── ffplay.exe        # Audio playback
- ├── model.gguf          # The AI Model
  ├── package.json        # Dependencies
- └── node_modules/       # Installed via npm install
  ```

  ### 4.3 Framework Constraints & Packaging

  * **Model Format Support**:
-   * **Text Generation**: Strictly requires **GGUF** format.
    * **Speech-to-Text**: Requires **GGML Binary** format (`ggml-*.bin`).
    * **Text-to-Speech**: Requires **ONNX** format (`.onnx` + `.json` config).
-   * **Packaging Limit**:
-     * The framework uses **NSISBI** (Large Installer Support).
-     * **Tested Packaging Size**: Up to **~3.1GB** successfully.

  ## 5. Offline-First Architecture

@@ -159,23 +232,20 @@ The framework is designed to be **100% offline-capable** after initial setup:
  * **No CDN Dependencies**: All frontend libraries (Lucide icons, Marked.js) are bundled locally via `node_modules/`.
  * **Local Engine Binaries**: All AI engines (`bujji_engine.exe`, `piper.exe`, `whisper.exe`) and their DLLs are included in the `engine/` directory.
  * **Bundled Models**: TTS model (`en_US-lessac-medium.onnx`), STT model (`ggml-base.en.bin`), and the LLM model (`model.gguf`) are all stored locally.
  * **Audio Utilities**: `ffmpeg.exe` and `ffplay.exe` are bundled in `bin/` for audio format conversion and playback.

  ## 6. Development & Open Source Strategy

- ### 6.1 Licensing & Credits
- The RemiAI Framework is released under the **MIT License**, permitting free use, modification, and distribution.
-
- #### Third-Party Components
- This project relies on robust open-source projects. We gratefully acknowledge:
- * **Llama.cpp** (Backend `bujji_engine.exe`): [MIT License](https://github.com/ggerganov/llama.cpp)
- * **Piper TTS** (Speech Synthesis): [MIT License](https://github.com/rhasspy/piper)
- * **Whisper.cpp** (Speech Recognition): [MIT License](https://github.com/ggerganov/whisper.cpp)
- * **Gemma 2 Model** (AI Weights): [Gemma Terms of Use](https://ai.google.dev/gemma/terms)

  ### 6.2 Hosting Strategy
  * **GitHub**: Contains the source code (JS, HTML, CSS).
  * **Hugging Face**: Hosts the large `model.gguf` file and the zipped release builds due to storage limits on GitHub. We use Hugging Face for "Large File Storage" of the AI weights.

  ## 7. Conclusion
- The RemiAI/Bujji framework democratizes access to local AI. By removing the complex Python environment setup and packaging the inference engine directly with the app, we enable any student with a laptop to run powerful AI models simply by typing `npm start`.
      G[Start-up Check]
  end

+ subgraph "Hardware Layer"
+     E[CPU - AVX/AVX2]
+     F[RAM]
+     G[Start-up Check]
+ end
+
  G -- Detects Flags --> A
  A -- Selects Binary --> C
  C -- Loads --> H[model.gguf]
 

  ### 2.2 Component Breakdown

+ ### 2.2 Component Breakdown
+
  1. **Electron Main Process (`main.js`)**:
     * **Role**: The application entry point and central controller.
     * **New Capabilities**:
 
  2. **Native AI Engine (Backend)**:
     * **Role**: The "Brain" of the application.
     * **Technology**: Pre-compiled binaries (likely based on `llama.cpp`) optimized for CPU inference.
     * **Operation**: Runs a local server on port `5000`.
     * **Model**: Loads weights strictly from a file named `model.gguf`.
+    * **No Python Required**: The binary is self-contained with all necessary DLLs.
+    * **Git LFS Integration**: Large binaries (`.exe`, `.dll`) are tracked via Git LFS to keep the repo clean. `main.js` includes a startup check to ensure these files are fully downloaded (and not just LFS pointers) before launching.

  3. **TTS Engine (Piper)**:
     * **Role**: Text-to-Speech synthesis — converts typed text into natural-sounding speech.
     * **Technology**: Piper TTS (`piper.exe`), an ONNX-based neural TTS engine.
+    * **Operation**: Invoked on demand via IPC. Text is piped to `piper.exe` stdin, and a `.wav` file is generated as output.
     * **Model**: `en_US-lessac-medium.onnx` (English, medium quality voice) stored in `engine/piper/`.
+    * **DLLs**: `piper_phonemize.dll`, `onnxruntime.dll`, `espeak-ng.dll` bundled in the engine directory.
+    * **Output**: WAV audio files saved to the system temp directory, playable in-app with a download option.

  4. **STT Engine (Whisper Server)**:
     * **Role**: Speech-to-Text transcription — extracts text from audio files.
     * **Technology**: Whisper.cpp server build (`whisper.exe`), runs as an HTTP server.
+    * **Operation**: Started on demand on port `5001`. Audio files are POSTed to the `/inference` endpoint as multipart form data. The server is shut down after each transcription.
     * **Model**: `ggml-base.en.bin` (English base model) stored in `engine/whisper/`.
+    * **DLLs**: `whisper.dll`, `ggml.dll` bundled in the engine directory.
+    * **Audio Format Support**: `.wav`, `.mp3`, `.m4a`, `.ogg`, `.flac` — requires `ffmpeg.exe` and `ffmpeg.dll` in `bin/` for automatic audio conversion.
+    * **Input**: User selects an audio file via a native file dialog.
+
+ 5. **Renderer Process (`index.html` + `renderer.js`)**:
+    * **Role**: The User Interface.
+    * **Responsibilities**:
+      * Displays the chat interface.
+      * Sends user prompts to `localhost:5000`.
+      * Receives and streams AI responses.
+      * Provides the TTS interface (text input → speech generation → audio playback/download).
+      * Provides the STT interface (file upload → transcription → text display/copy).
 
  ## 3. Operational Flow Chart

  W-->>U: Display Chat Interface
  ```
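The renderer's prompt round-trip (send a prompt to `localhost:5000`, receive the response) might look like the sketch below. It assumes a llama.cpp-style `/completion` endpoint and JSON shape, which the report's "likely based on `llama.cpp`" note suggests but does not confirm:

```javascript
// Hypothetical chat request against the local engine server. The route,
// payload fields, and `content` response key follow llama.cpp's server;
// bujji_engine.exe may differ.
async function askModel(prompt, base = 'http://127.0.0.1:5000') {
  const res = await fetch(`${base}/completion`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt, n_predict: 256 }),
  });
  if (!res.ok) throw new Error(`engine returned ${res.status}`);
  const data = await res.json();
  return data.content; // llama.cpp servers return the generated text here
}
```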

+ ### 3.2 TTS Flow
+
+ ```mermaid
+ sequenceDiagram
+     participant U as User
+     participant R as Renderer (UI)
+     participant M as Main Process
+     participant P as Piper TTS Engine
+
+     U->>R: Types text, clicks "Speak"
+     R->>M: IPC: tts-synthesize(text)
+     M->>P: Spawn piper.exe, pipe text to stdin
+     P-->>M: Generates .wav file
+     M-->>R: Returns .wav file path
+     R-->>U: Plays audio, shows Download button
+     U->>R: Clicks "Download Audio"
+     R->>M: IPC: tts-save-file(path)
+     M-->>U: Native Save dialog, copies file
+ ```
+
+ ### 3.3 STT Flow
+
+ ```mermaid
+ sequenceDiagram
+     participant U as User
+     participant R as Renderer (UI)
+     participant M as Main Process
+     participant W as Whisper Server
+
+     U->>R: Clicks "Browse", selects audio file
+     R->>M: IPC: stt-select-file()
+     M-->>R: Returns file path (native dialog)
+     U->>R: Clicks "Transcribe"
+     R->>M: IPC: stt-transcribe(filePath)
+     M->>W: Start whisper.exe server (port 5001)
+     M->>W: POST audio to /inference
+     W-->>M: Returns transcription JSON
+     M->>W: Kill server
+     M-->>R: Returns transcribed text
+     R-->>U: Displays text, shows Copy button
+ ```
+
  ## 4. Technical Specifications & Requirements

  ### 4.1 Prerequisites
  * **Operating System**: Windows (10/11) 64-bit.
+ * **Software**: Git & Git LFS (Required for downloading engine binaries).
  * **Runtime**: Node.js (LTS version recommended).
  * **Hardware**:
    * Any modern CPU (Intel/AMD) with AVX support.
    * Minimum 8GB RAM (16GB recommended for larger models).
    * Disk space proportional to the model size (e.g., 4GB for a 7B model).

+ ### 4.2 File Structure
  The critical file structure required for the app to function:

  ```text

  │ │ ├── bujji_engine.exe
  │ │ ├── piper.exe
  │ │ └── whisper.exe
+ │ ├── piper/              # TTS model & config
+ │ │ └── en_US-lessac-medium.onnx
+ │ └── whisper/            # STT model
+ │   └── ggml-base.en.bin
  ├── bin/                # Utility binaries
  │ ├── ffmpeg.exe        # Audio conversion (required for STT)
  │ ├── ffmpeg.dll        # FFmpeg library
  │ └── ffplay.exe        # Audio playback
+ ├── assets/icons/       # Local SVG icons
+ ├── model.gguf          # The AI Model (must be named exactly this)
+ ├── main.js             # Core Logic (Main Process)
+ ├── index.html          # UI Layer
+ ├── renderer.js         # Frontend Logic
+ ├── styles.css          # Styling
+ ├── web.html            # Built-in Web Browser
  ├── package.json        # Dependencies
+ └── node_modules/       # Installed via npm install (includes lucide, marked)
  ```

  ### 4.3 Framework Constraints & Packaging

  * **Model Format Support**:
+   * **Text Generation**: Strictly requires **GGUF** format (`llama.cpp` compatible).
    * **Speech-to-Text**: Requires **GGML Binary** format (`ggml-*.bin`).
    * **Text-to-Speech**: Requires **ONNX** format (`.onnx` + `.json` config).
+   * *Note: Python-based models (`.pt`, `.safetensors`) are NOT supported, to ensure zero-dependency offline execution.*
+
+ * **Packaging Capabilities**:
+   * **Installer Engine**: Uses **NSISBI** (an NSIS build with large-installer support) to bypass the standard 2GB limit.
+   * **Verified Capacity**: The framework has been tested to successfully package applications up to **~3.1GB** (Base App + Engine + Model).
+   * **Recommendation**: Efficient for bundling quantized models (e.g., Llama-3-8B-Q4_K_M) directly into a single `.exe` file.
+

  ## 5. Offline-First Architecture

  * **No CDN Dependencies**: All frontend libraries (Lucide icons, Marked.js) are bundled locally via `node_modules/`.
  * **Local Engine Binaries**: All AI engines (`bujji_engine.exe`, `piper.exe`, `whisper.exe`) and their DLLs are included in the `engine/` directory.
  * **Bundled Models**: TTS model (`en_US-lessac-medium.onnx`), STT model (`ggml-base.en.bin`), and the LLM model (`model.gguf`) are all stored locally.
+ * **Content Security Policy**: The CSP in `index.html` is configured to allow only `'self'` and the local API server (`127.0.0.1:5000`), blocking all external network requests.
  * **Audio Utilities**: `ffmpeg.exe` and `ffplay.exe` are bundled in `bin/` for audio format conversion and playback.
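The policy added above could be assembled as a single CSP string like this. The exact directive split is an assumption, since the report only names the allowed origins:

```javascript
// Sketch: build the CSP value allowing only the app itself and the local
// engine server. Directive choices are illustrative.
function buildCsp() {
  const localApi = 'http://127.0.0.1:5000';
  return [
    "default-src 'self'",
    `connect-src 'self' ${localApi}`, // fetch/XHR may only hit the local engine
  ].join('; ');
}
```

In `index.html` this string would typically sit in a `<meta http-equiv="Content-Security-Policy" content="...">` tag.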
 
  ## 6. Development & Open Source Strategy

+ ### 6.1 Licensing
+ This project is released under the **MIT License**. This allows any student or developer to:
+   * Use the code freely.
+   * Modify the interface (rename "RemiAI" to their own brand).
+   * Distribute their own versions.

  ### 6.2 Hosting Strategy
  * **GitHub**: Contains the source code (JS, HTML, CSS).
  * **Hugging Face**: Hosts the large `model.gguf` file and the zipped release builds due to storage limits on GitHub. We use Hugging Face for "Large File Storage" of the AI weights.

  ## 7. Conclusion
+ The RemiAI/Bujji framework democratizes access to local AI. By removing the complex Python environment setup and packaging the inference engine directly with the app, we enable any student with a laptop to run powerful AI models simply by typing `npm start`. With integrated TTS (Piper) and STT (Whisper) capabilities, the framework now provides a complete offline AI assistant experience — text generation, speech synthesis, and speech recognition — all running locally without any internet connection or cloud services.