RemiAI Framework - Technical Report
1. Executive Summary
This report details the internal architecture and operational flow of the RemiAI Framework (v2.1), an open-source, Electron-based desktop application for offline AI interaction. The framework lets users run Large Language Models (LLMs) locally in GGUF format. Key improvements in this version include Dynamic Resource Management, which reduces the system footprint, and Robust Audio Processing for reliable STT transcription.
2. System Architecture
The application follows a standard Electron Multi-Process Architecture, enhanced with a custom Native AI Backend.
2.1 Block Diagram
graph TD
subgraph "User Machine (Windows)"
A[Electron Main Process] -- Controls --> B[Window Management]
A -- Spawns/Manages --> C[Native AI Engine Backend]
A -- Spawns/Manages --> TTS[Piper TTS Engine]
A -- Spawns/Manages --> STT[Whisper STT Server]
B -- Renders --> D[Electron Renderer Process - UI]
C -- HTTP/API Port 5000 --> D
STT -- HTTP/API Port 5001 --> A
subgraph "Hardware Layer"
E[CPU - AVX/AVX2]
F[RAM]
G[Start-up Check]
end
G -- Detects Flags --> A
A -- Selects Binary --> C
C -- Loads --> H[model.gguf]
A -- DYNAMICALLY START/STOP --> C
TTS -- Loads --> TM[en_US-lessac-medium.onnx]
STT -- Loads --> SM[ggml-base.en.bin]
end
2.2 Component Breakdown
Electron Main Process (main.js):
- Role: The application entry point and central controller.
- New Capabilities:
  - Dynamic Resource Management: Listens for IPC events (feature-switched) from the renderer. If the user switches away from the Chat view (e.g., to STT or Browser), it kills the background AI process to free resources, then automatically respawns it when the user returns to Chat.
  - Debug Logging: Writes detailed logs to Desktop/app_debug.log to aid in diagnosing packaged-application issues.
  - Manual Audio Conversion: For STT, it now invokes ffmpeg explicitly to convert input audio to the required 16 kHz WAV format before passing it to the Whisper engine, preventing "No speech detected" errors.
- Responsibilities:
  - Lifecycle management (start, stop, quit).
  - Hardware Detection: Uses systeminformation to check for AVX/AVX2 support.
  - Engine Selection: Dynamically chooses the correct binary (cpu_avx2 or cpu_avx) to maximize performance or ensure compatibility.
  - Backend Spawning: Launches bujji_engine.exe (an optimized llama.cpp server) as a child process.
  - Window Creation: Loads index.html.
Native AI Engine (Backend):
- Role: The "brain" of the application.
- Technology: Pre-compiled binaries (likely based on llama.cpp) optimized for CPU inference.
- Operation: Runs a local server on port 5000.
- Model: Loads weights strictly from a file named model.gguf.
- No Python Required: The binary is self-contained with all necessary DLLs.
- Git LFS Integration: Large binaries (.exe, .dll) are tracked via Git LFS to keep the repository lean. main.js includes a startup check to ensure these files are fully downloaded (and not just LFS pointers) before launching.
TTS Engine (Piper):
- Role: Text-to-Speech synthesis; converts typed text into natural-sounding speech.
- Technology: Piper TTS (piper.exe), an ONNX-based neural TTS engine.
- Operation: Invoked on demand via IPC. Text is piped to piper.exe over stdin, and a .wav file is generated as output.
- Model: en_US-lessac-medium.onnx (English, medium-quality voice) stored in engine/piper/.
- DLLs: piper_phonemize.dll, onnxruntime.dll, and espeak-ng.dll bundled in the engine directory.
- Output: WAV audio files saved to the system temp directory, playable in-app with a download option.
STT Engine (Whisper Server):
- Role: Speech-to-Text transcription; extracts text from audio files.
- Technology: Whisper.cpp server build (whisper.exe), run as an HTTP server.
- Operation: Started on demand on port 5001. Audio files are POSTed to the /inference endpoint as multipart form data, and the server is shut down after each transcription.
- Model: ggml-base.en.bin (English base model) stored in engine/whisper/.
- DLLs: whisper.dll and ggml.dll bundled in the engine directory.
- Audio Format Support: .wav, .mp3, .m4a, .ogg, .flac; requires ffmpeg.exe and ffmpeg.dll in bin/ for automatic audio conversion.
- Input: User selects an audio file via a native file dialog.
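The manual 16 kHz conversion step can be sketched as an argument builder for the bundled bin/ffmpeg.exe. The flags below are standard ffmpeg options; whether main.js uses exactly this command line is an assumption:

```javascript
// Sketch of the ffmpeg invocation that normalizes any supported input
// (.mp3, .m4a, .ogg, .flac, ...) into the 16 kHz mono WAV Whisper expects.
function buildFfmpegArgs(inputPath, outputPath) {
  return [
    '-y',            // overwrite the output file without prompting
    '-i', inputPath, // source audio file
    '-ar', '16000',  // resample to 16 kHz
    '-ac', '1',      // downmix to mono
    '-f', 'wav',     // force WAV container
    outputPath,
  ];
}
```

main.js would pass this array to child_process, wait for ffmpeg to exit, then POST the resulting WAV to the Whisper server's /inference endpoint.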
Renderer Process (index.html + renderer.js):
- Role: The user interface.
- Responsibilities:
  - Displays the chat interface.
  - Sends user prompts to localhost:5000.
  - Receives and streams AI responses.
  - Provides the TTS interface (text input → speech generation → audio playback/download).
  - Provides the STT interface (file upload → transcription → text display/copy).
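Since the backend is described as a llama.cpp-style server, the renderer's streaming step presumably consumes server-sent "data: {...}" chunks. A hedged sketch of extracting the token text follows; the content field matches llama.cpp's /completion stream format, but treat the exact payload shape as an assumption:

```javascript
// Collect the generated text from an SSE-style response body, ignoring
// blank keep-alive lines and any non-data framing.
function extractStreamedTokens(sseText) {
  const tokens = [];
  for (const line of sseText.split('\n')) {
    if (!line.startsWith('data: ')) continue;
    const payload = JSON.parse(line.slice(6)); // strip the "data: " prefix
    if (typeof payload.content === 'string') tokens.push(payload.content);
  }
  return tokens.join('');
}
```

In renderer.js this logic would run incrementally on each chunk of the fetch response stream so the UI can append tokens as they arrive.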
3. Operational Flow Charts
3.1 Startup Flow
A detailed, step-by-step view of application startup:
sequenceDiagram
participant U as User
participant M as Main Process
participant S as System Check
participant E as AI Engine (Backend)
participant W as UI Window
U->>M: Launches Application (npm start / exe)
M->>S: Request CPU Flags (AVX2?)
S-->>M: Returns Flags (e.g., "avx2 enabled")
alt AVX2 Supported
M->>M: Select "engine/cpu_avx2/bujji_engine.exe"
else Only AVX
M->>M: Select "engine/cpu_avx/bujji_engine.exe"
end
M->>M: Validate Engine File Size (Check for Git LFS pointers)
M-->>U: Error Dialog if File Missing/Small
M->>E: Spawn Process (model.gguf, port 5000, 4 threads)
E-->>M: Server Started (Background)
M->>W: Create Window (Load index.html)
W->>E: Check Connection (Health Check)
E-->>W: Ready
W-->>U: Display Chat Interface
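The spawn step in the diagram (model.gguf, port 5000, 4 threads) might translate to llama.cpp-style server arguments. The flag names below (-m, --port, -t) are assumptions based on llama.cpp's server CLI; bujji_engine.exe may use different ones:

```javascript
// Hypothetical argument builder for the backend spawn shown above.
function buildEngineArgs(modelPath, port = 5000, threads = 4) {
  return [
    '-m', modelPath,        // GGUF weights to load
    '--port', String(port), // local HTTP API port
    '-t', String(threads),  // CPU inference threads
  ];
}
```

main.js would pass the selected binary path and this array to child_process.spawn, then poll the health endpoint before loading the UI.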
3.2 TTS Flow
sequenceDiagram
participant U as User
participant R as Renderer (UI)
participant M as Main Process
participant P as Piper TTS Engine
U->>R: Types text, clicks "Speak"
R->>M: IPC: tts-synthesize(text)
M->>P: Spawn piper.exe, pipe text to stdin
P-->>M: Generates .wav file
M-->>R: Returns .wav file path
R-->>U: Plays audio, shows Download button
U->>R: Clicks "Download Audio"
R->>M: IPC: tts-save-file(path)
M-->>U: Native Save dialog, copies file
3.3 STT Flow
sequenceDiagram
participant U as User
participant R as Renderer (UI)
participant M as Main Process
participant W as Whisper Server
U->>R: Clicks "Browse", selects audio file
R->>M: IPC: stt-select-file()
M-->>R: Returns file path (native dialog)
U->>R: Clicks "Transcribe"
R->>M: IPC: stt-transcribe(filePath)
M->>W: Start whisper.exe server (port 5001)
M->>W: POST audio to /inference
W-->>M: Returns transcription JSON
M->>W: Kill server
M-->>R: Returns transcribed text
R-->>U: Displays text, shows Copy button
4. Technical Specifications & Requirements
4.1 Prerequisites
- Operating System: Windows (10/11) 64-bit.
- Software: Git & Git LFS (required for downloading engine binaries).
- Runtime: Node.js (LTS version recommended).
- Hardware:
- Any modern CPU (Intel/AMD) with AVX support.
- Minimum 8GB RAM (16GB recommended for larger models).
- Disk space proportional to the model size (e.g., 4GB for a 7B model).
4.2 File Structure
The critical file structure required for the app to function:
Root/
├── engine/ # AI Backend Engines
│ ├── cpu_avx/ # Fallback binaries (AVX)
│ │ ├── bujji_engine.exe # LLM inference server
│ │ ├── piper.exe # TTS engine
│ │ └── whisper.exe # STT server
│ ├── cpu_avx2/ # High-performance binaries (AVX2)
│ │ ├── bujji_engine.exe
│ │ ├── piper.exe
│ │ └── whisper.exe
│ ├── piper/ # TTS model & config
│ │ └── en_US-lessac-medium.onnx
│ └── whisper/ # STT model
│ └── ggml-base.en.bin
├── bin/ # Utility binaries
│ ├── ffmpeg.exe # Audio conversion (required for STT)
│ ├── ffmpeg.dll # FFmpeg library
│ └── ffplay.exe # Audio playback
├── assets/icons/ # Local SVG icons
├── model.gguf # The AI Model (Must be named exactly this)
├── main.js # Core Logic (Main Process)
├── index.html # UI Layer
├── renderer.js # Frontend Logic
├── styles.css # Styling
├── web.html # Built-in Web Browser
├── package.json # Dependencies
└── node_modules/ # Installed via npm install (includes lucide, marked)
4.3 Framework Constraints & Packaging
Model Format Support:
- Text Generation: Strictly requires GGUF format (llama.cpp compatible).
- Speech-to-Text: Requires GGML binary format (ggml-*.bin).
- Text-to-Speech: Requires ONNX format (.onnx plus a .json config).
- Note: Python-based models (.pt, .safetensors) are NOT supported, to ensure zero-dependency offline execution.
Packaging Capabilities:
- Installer Engine: Uses NSISBI, a build of NSIS that bypasses the standard 2GB installer limit.
- Verified Capacity: The framework has been tested to successfully package applications up to ~3.1GB (base app + engine + model).
- Recommendation: Well suited for bundling quantized models (e.g., Llama-3-8B-Q4_K_M) directly into a single .exe installer.
5. Offline-First Architecture
The framework is designed to be 100% offline-capable after initial setup:
- No CDN Dependencies: All frontend libraries (Lucide icons, Marked.js) are bundled locally via node_modules/.
- Local Engine Binaries: All AI engines (bujji_engine.exe, piper.exe, whisper.exe) and their DLLs are included in the engine/ directory.
- Bundled Models: The TTS model (en_US-lessac-medium.onnx), the STT model (ggml-base.en.bin), and the LLM (model.gguf) are all stored locally.
- Content Security Policy: The CSP in index.html allows only 'self' and the local API server (127.0.0.1:5000), blocking all external network requests.
- Audio Utilities: ffmpeg.exe and ffplay.exe are bundled in bin/ for audio format conversion and playback.
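A minimal sketch of the CSP described above, assuming a meta-tag-based policy in index.html (the app's actual directives may differ):

```html
<!-- Allow only local resources plus the local inference server. -->
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self'; connect-src 'self' http://127.0.0.1:5000">
```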
6. Development & Open Source Strategy
6.1 Licensing
This project is released under the MIT License. This allows any student or developer to:
- Use the code freely.
- Modify the interface (rename "RemiAI" to their own brand).
- Distribute their own versions.
6.2 Hosting Strategy
- GitHub: Contains the source code (JS, HTML, CSS).
- Hugging Face: Hosts the large model.gguf file and the zipped release builds, since GitHub imposes storage limits; Hugging Face effectively serves as "Large File Storage" for the AI weights.
7. Conclusion
The RemiAI/Bujji framework democratizes access to local AI. By removing the complex Python environment setup and packaging the inference engine directly with the app, we enable any student with a laptop to run powerful AI models simply by typing npm start. With integrated TTS (Piper) and STT (Whisper) capabilities, the framework now provides a complete offline AI assistant experience — text generation, speech synthesis, and speech recognition — all running locally without any internet connection or cloud services.