| # Student & Developer Documentation |
|
|
| ## Overview |
| Welcome to the RemiAI Framework! This document is designed to help you understand how to customize, configure, and make this application your own. This framework is built to be "Plug-and-Play"—meaning you don't need to know Python or complex AI coding to use it. It includes **Text Generation** (chat with AI), **Text-to-Speech** (TTS — convert text to voice), and **Speech-to-Text** (STT — extract text from audio files), all running 100% offline. |
|
|
| ## 🛠️ Setup & How to Customize |
|
|
| ### 0. Quick Setup (Important!) |
| Before running the app, you **must** ensure the AI engine files are downloaded correctly. GitHub limits the size of files stored directly in a repository, so the large engine binaries are tracked with **Git LFS**. |
|
|
| 1. **Install Git LFS**: |
| * Download and install from [git-lfs.com](https://git-lfs.com). |
| * Open a terminal and run: `git lfs install` |
| 2. **Pull Files**: |
| * Run: `git lfs pull` inside the project folder. |
| * *Why?* Without this, the app will say **"RemiAI Engine Missing"** or "Connection Refused". |
|
|
| ### 1. Changing the AI Name |
| Want to name the AI "Jarvis" or "MyBot"? |
| 1. Open `index.html` in any text editor (VS Code, Notepad, etc.). |
| 2. Search for "RemiAI" or "Bujji". |
| 3. Replace the text with your desired name. |
| 4. Save the file. |
| 5. Restart the app (`npm start`), and your new name will appear! |
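The rename in steps 2–3 is a plain text substitution, so you can also script it. A hedged Node sketch (the commented-out file write is illustrative — run it from the project root only after backing up `index.html`):

```javascript
// Replace every occurrence of the stock AI names in an HTML string.
// "RemiAI" and "Bujji" are the names used in the shipped index.html.
function renameAI(html, newName) {
  return html.replace(/RemiAI|Bujji/g, newName);
}

// Example: rewrite index.html in place.
// const fs = require('fs');
// fs.writeFileSync('index.html',
//   renameAI(fs.readFileSync('index.html', 'utf8'), 'Jarvis'));
```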
|
|
| ### 2. Replacing the AI Models (LLM, TTS, STT) |
| This framework is a **Universal Wrapper**. You can swap out any of the three "brains" (Text, Speech, Hearing) to build your own dedicated application. |
|
|
| #### A. Changing the Chat Model (Text Generation) |
| 1. **Download**: Get a `.gguf` model from Hugging Face (e.g., `Llama-3-8B-GGUF`). |
| 2. **Rename**: Rename it to `model.gguf`. |
| 3. **Replace**: Overwrite the existing `model.gguf` in the root folder. |
| 4. **Restart**: Run `npm start`. |
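A swapped-in model can be sanity-checked before restarting: valid GGUF files begin with the 4-byte ASCII magic `GGUF`, so a corrupted download or an LFS pointer is easy to spot. A small sketch:

```javascript
// Valid GGUF model files start with the 4-byte ASCII magic "GGUF".
function looksLikeGguf(firstBytes) {
  return Buffer.from(firstBytes).subarray(0, 4).toString('ascii') === 'GGUF';
}
```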
|
|
| #### B. Changing the Text-to-Speech (TTS) Voice |
| The framework uses **Piper TTS**. |
| 1. **Download**: Get a voice model (`.onnx`) and its config (`.json`) from [Piper Voices](https://github.com/rhasspy/piper/blob/master/VOICES.md). |
| 2. **Place Files**: Put both files in `engine/piper/` (e.g., `my-voice.onnx` and `my-voice.onnx.json`). |
| 3. **Update Code**: |
| * Open `main.js`. |
| * Search for: `engine/piper/en_US-lessac-medium.onnx` |
| * Replace the filename with your new `.onnx` file name. |
| 4. **Restart**: Run `npm start`. |
|
|
| #### C. Changing the Speech-to-Text (STT) Engine |
| The framework uses **Whisper.cpp**. |
| 1. **Download**: Get a model in GGML/Binary format (`ggml-*.bin`) from [Hugging Face (ggerganov/whisper.cpp)](https://huggingface.co/ggerganov/whisper.cpp). |
| 2. **Place File**: Put the file in `engine/whisper/`. |
| 3. **Update Code**: |
| * Open `main.js`. |
| * Search for: `engine/whisper/ggml-base.en.bin` |
| * Replace the filename with your new `.bin` file name. |
| 4. **Restart**: Run `npm start`. |
|
|
**Hardware Warning**: |
* **Minimum**: i3-class CPU with 8 GB RAM for basic usage. |
* **Recommended**: i5-class CPU with 16 GB RAM for larger models. |
* *Note: Loading a model larger than your available RAM will cause heavy swapping or crash the app. Stick to "Q4_K_M" quantizations for the best balance of size and quality.* |
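As a rough rule of thumb, a GGUF model needs at least its file size in RAM, plus extra for the context cache and buffers. A hedged sketch of that check — the 1.2× overhead factor is an assumption for illustration, not a measured value:

```javascript
// Rough check: will a model of this file size fit in available RAM?
// The 1.2x multiplier for runtime overhead (KV cache, buffers) is a guess.
function fitsInRam(modelBytes, freeRamBytes, overhead = 1.2) {
  return modelBytes * overhead <= freeRamBytes;
}

// Example: a ~4.5 GB Q4_K_M model on a machine with 8 GB free RAM fits;
// an 8 GB model on the same machine does not.
```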
|
|
| ### 3. Customizing the UI |
| All styles are in `styles.css` (or within `index.html`). |
| * **Colors**: Change the background colors or chat bubble colors in the CSS. |
| * **Icons**: Replace `remiai.ico` with your own `.ico` file to change the app icon. |
|
|
| ### 4. Using Text-to-Speech (TTS) |
| The TTS feature converts typed text into natural-sounding English speech using the **Piper** engine. |
|
|
| **How to Use:** |
| 1. Click the **🔊 Speaker icon** in the sidebar. |
| 2. Type the text you want to hear in the text area. |
| 3. Click **"Speak"** — the audio will generate and play automatically. |
| 4. Click **"Download Audio"** to save the `.wav` file to your preferred location (a native Save dialog will appear). |
|
|
| **Customization:** |
| * The TTS voice model is stored at `engine/piper/en_US-lessac-medium.onnx`. |
| * You can replace it with other Piper ONNX voice models from [Piper Voices](https://github.com/rhasspy/piper/blob/master/VOICES.md). |
| * Download a new `.onnx` model + its `.json` config file and place them in `engine/piper/`. |
|
|
| ### 5. Using Speech-to-Text (STT) |
| The STT feature extracts text from audio files using the **Whisper** engine (runs as a local server). |
|
|
| **How to Use:** |
| 1. Click the **🎙️ Microphone icon** in the sidebar. |
| 2. Click **"Browse Audio File"** to select your audio file. |
| 3. Supported formats: `.wav`, `.mp3`, `.m4a`, `.ogg`, `.flac`. |
| 4. Click **"Transcribe"** — wait for processing (10-30 seconds depending on file length). |
| 5. The transcribed text will appear below. Click **"Copy"** to copy it to your clipboard. |
|
|
| **Requirements:** |
| * `ffmpeg.exe` and `ffmpeg.dll` must be present in the `bin/` folder for audio format conversion. |
| * If missing, download FFmpeg from [ffmpeg.org](https://ffmpeg.org/download.html) and place the files in `bin/`. |
|
|
| ### 6. Dynamic Resource Management (New!) |
| To ensure the application runs smoothly even on lower-end devices, we implemented a dynamic resource management system. |
| * **Behavior**: When you are in the **Chat** tab, the heavy AI model (Text Generation) is loaded into RAM. |
* **Optimization**: When you switch to the **TTS**, **STT**, or **Web Browser** tabs, the main AI model is **automatically unloaded/stopped**. This frees 2 GB or more of RAM and significantly reduces CPU usage, so the TTS/STT engines run faster and the browser stays responsive. |
| * **Reloading**: When you switch back to the **Chat** tab, the model automatically restarts. |
| * *Note: You might see "Connecting..." for a few seconds. If it stays stuck, click the "Refresh App" button.* |
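The load/unload behaviour described above boils down to a small state machine keyed on the active tab. A minimal sketch — `loadModel`/`unloadModel` stand in for the real engine start/stop calls, which this sketch does not reproduce:

```javascript
// Tab-driven resource manager: the LLM is loaded only while Chat is active.
function createResourceManager(loadModel, unloadModel) {
  let loaded = false;
  return {
    switchTab(tab) {
      if (tab === 'chat' && !loaded) {
        loadModel();   // returning to Chat restarts the LLM
        loaded = true;
      } else if (tab !== 'chat' && loaded) {
        unloadModel(); // TTS/STT/Browser tabs free the LLM's RAM
        loaded = false;
      }
      return loaded;
    },
  };
}
```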
|
|
| ### 7. Offline Dependencies |
| All libraries are bundled locally — **no internet needed** after initial setup: |
| * **Lucide Icons**: Loaded from `node_modules/lucide/` (not from CDN). |
| * **Marked.js**: Loaded from `node_modules/marked/` (not from CDN). |
* If icons or Markdown rendering break, run `npm install` to restore the packages. |
|
|
| ## ❓ Frequently Asked Questions (FAQ) |
|
|
| **Q: Do I need Python?** |
| A: **No.** The application comes with a pre-compiled engine (`bujji_engine.exe` / `llama-server.exe`) that runs the model directly. |
|
|
| **Q: Why does it say "AVX2"?** |
| A: AVX2 is a feature in modern CPUs that makes the AI run faster. The app automatically detects if you have it. If not, it switches to a slower but compatible mode (AVX). |
|
|
| **Q: The app opens but doesn't reply / "RemiAI Engine Missing" Error.** |
| A: |
| 1. **Git LFS Issue**: This usually means you downloaded "pointers" (tiny files) instead of the real engine. Open a terminal in the folder and run `git lfs pull`. |
| 2. **Model Issue**: Check if `model.gguf` exists in the `engine` folder. |
| 3. **Console Check**: Open Developer Tools (Ctrl+Shift+I) to see errors. |
|
|
| **Q: I see "Content Security Policy" warnings in the console.** |
| A: We have configured safeguards (`index.html` meta tags) to block malicious scripts. The CSP is set to only allow local resources (`'self'`) and the local API server (`127.0.0.1:5000`). All external CDN dependencies have been removed. |
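A CSP of that shape, expressed as an `index.html` meta tag, looks roughly like the fragment below. Treat it as illustrative, not the shipped policy — the exact directives in your copy of `index.html` may differ:

```html
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self'; connect-src 'self' http://127.0.0.1:5000">
```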
|
|
| **Q: How do I build it into an .exe file?** |
| A: Run the command: |
| ```bash |
| npm run dist |
| ``` |
| This will create an installer in the `release` folder that you can share with friends! |
|
|
*If you hit errors while building, open PowerShell as an administrator and run the command again.* |
|
|
| **Q: TTS says "Piper TTS executable not found".** |
| A: Make sure `piper.exe` exists in `engine/cpu_avx2/` (or `engine/cpu_avx/`). Run `git lfs pull` to download all engine binaries. |
|
|
| **Q: STT says "Whisper server failed to start".** |
| A: |
| 1. Check that `whisper.exe` exists in `engine/cpu_avx2/` (or `engine/cpu_avx/`). |
| 2. Check that `ffmpeg.exe` and `ffmpeg.dll` are present in the `bin/` folder. The Whisper server needs FFmpeg for audio conversion. |
| 3. Run `git lfs pull` to ensure all files are fully downloaded. |
|
|
| **Q: STT says "No speech detected".** |
| A: Make sure your audio file contains clear English speech. Background noise or non-English audio may cause transcription failures. Try with a clear `.wav` recording first. |
|
|
| **Q: Can I use TTS and STT together?** |
| A: Yes! You can generate speech with TTS, save the `.wav` file, then upload it to STT to verify the transcription. They work independently and can be used simultaneously. |
|
|
| **Q: Does the app need internet to work?** |
| A: **No.** After the initial `npm install` and `git lfs pull` setup, the app runs 100% offline. All models, engines, icons, and libraries are bundled locally. |
|
|