Student & Developer Documentation
Overview
Welcome to the RemiAI Framework! This document is designed to help you understand how to customize, configure, and make this application your own. This framework is built to be "Plug-and-Play"—meaning you don't need to know Python or complex AI coding to use it. It includes Text Generation (chat with AI), Text-to-Speech (TTS — convert text to voice), and Speech-to-Text (STT — extract text from audio files), all running 100% offline.
🛠️ Setup & How to Customize
0. Quick Setup (Important!)
Before running the app, you must ensure the AI engine files are downloaded correctly. GitHub does not store large files directly, so we use Git LFS.
- Install Git LFS:
  - Download and install it from git-lfs.com.
  - Open a terminal and run: `git lfs install`
- Pull Files:
  - Run `git lfs pull` inside the project folder.
  - Why? Without this, the app will say "RemiAI Engine Missing" or "Connection Refused".
1. Changing the AI Name
Want to name the AI "Jarvis" or "MyBot"?
- Open `index.html` in any text editor (VS Code, Notepad, etc.).
- Search for "RemiAI" or "Bujji".
- Replace the text with your desired name.
- Save the file.
- Restart the app (`npm start`), and your new name will appear!
2. Replacing the AI Models (LLM, TTS, STT)
This framework is a Universal Wrapper. You can swap out any of the three "brains" (Text, Speech, Hearing) to build your own dedicated application.
A. Changing the Chat Model (Text Generation)
- Download: Get a `.gguf` model from Hugging Face (e.g., `Llama-3-8B-GGUF`).
- Rename: Rename it to `model.gguf`.
- Replace: Overwrite the existing `model.gguf` in the root folder.
- Restart: Run `npm start`.
B. Changing the Text-to-Speech (TTS) Voice
The framework uses Piper TTS.
- Download: Get a voice model (`.onnx`) and its config (`.json`) from Piper Voices.
- Place Files: Put both files in `engine/piper/` (e.g., `my-voice.onnx` and `my-voice.onnx.json`).
- Update Code:
  - Open `main.js`.
  - Search for: `engine/piper/en_US-lessac-medium.onnx`
  - Replace the filename with your new `.onnx` file name.
- Restart: Run `npm start`.
C. Changing the Speech-to-Text (STT) Engine
The framework uses Whisper.cpp.
- Download: Get a model in GGML/binary format (`ggml-*.bin`) from Hugging Face (`ggerganov/whisper.cpp`).
- Place File: Put the file in `engine/whisper/`.
- Update Code:
  - Open `main.js`.
  - Search for: `engine/whisper/ggml-base.en.bin`
  - Replace the filename with your new `.bin` file name.
- Restart: Run `npm start`.
Hardware Warning:
- Good Configuration: i3 (8GB RAM) for basic usage.
- Recommended: i5 (16GB RAM) for larger models.
- Note: Loading a model larger than your available RAM can freeze or crash your machine. Stick to "Q4_K_M" quantizations for the best balance of size, speed, and quality.
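The RAM warning can be made concrete with a back-of-the-envelope check: a GGUF model needs at least its file size in RAM, plus headroom for the OS, Electron, and the inference context. The 25% headroom figure below is an illustrative assumption, not a framework constant.

```javascript
// Rough fit check (illustrative sketch): reserve ~25% of total RAM for
// the OS, Electron, and the inference context.
function modelFitsInRam(modelBytes, totalRamBytes) {
  return modelBytes <= totalRamBytes * 0.75;
}

// Usage idea: compare the model file's size against the machine's RAM,
// e.g. modelFitsInRam(fs.statSync('model.gguf').size, os.totalmem())
```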
3. Customizing the UI
All styles are in `styles.css` (or within `index.html`).
- Colors: Change the background colors or chat bubble colors in the CSS.
- Icons: Replace `remiai.ico` with your own `.ico` file to change the app icon.
4. Using Text-to-Speech (TTS)
The TTS feature converts typed text into natural-sounding English speech using the Piper engine.
How to Use:
- Click the 🔊 Speaker icon in the sidebar.
- Type the text you want to hear in the text area.
- Click "Speak" — the audio will generate and play automatically.
- Click "Download Audio" to save the `.wav` file to your preferred location (a native Save dialog will appear).
Customization:
- The TTS voice model is stored at `engine/piper/en_US-lessac-medium.onnx`.
- You can replace it with other Piper ONNX voice models from Piper Voices.
- Download a new `.onnx` model plus its `.json` config file and place them in `engine/piper/`.
5. Using Speech-to-Text (STT)
The STT feature extracts text from audio files using the Whisper engine (runs as a local server).
How to Use:
- Click the 🎙️ Microphone icon in the sidebar.
- Click "Browse Audio File" to select your audio file.
- Supported formats: `.wav`, `.mp3`, `.m4a`, `.ogg`, `.flac`.
- Click "Transcribe" and wait for processing (10-30 seconds depending on file length).
- The transcribed text will appear below. Click "Copy" to copy it to your clipboard.
Requirements:
- `ffmpeg.exe` and `ffmpeg.dll` must be present in the `bin/` folder for audio format conversion.
- If missing, download FFmpeg from ffmpeg.org and place the files in `bin/`.
6. Dynamic Resource Management (New!)
To ensure the application runs smoothly even on lower-end devices, we implemented a dynamic resource management system.
- Behavior: When you are in the Chat tab, the heavy AI model (Text Generation) is loaded into RAM.
- Optimization: When you switch to TTS, STT, or Web Browser tabs, the main AI model is automatically unloaded/stopped. This frees up to 2GB+ of RAM and significant CPU usage, allowing the TTS/STT engines to run faster and the browser to be more responsive.
- Reloading: When you switch back to the Chat tab, the model automatically restarts.
- Note: You might see "Connecting..." for a few seconds. If it stays stuck, click the "Refresh App" button.
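The load/unload policy above boils down to one decision: only the Chat tab needs the text-generation model resident in RAM. A minimal sketch (the function name is illustrative, not the app's actual code):

```javascript
// Illustrative sketch of the tab-switch policy: the heavy LLM is loaded
// only while the Chat tab is active; TTS/STT/Browser run without it.
function shouldLoadChatModel(activeTab) {
  return activeTab === 'chat';
}
```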
7. Offline Dependencies
All libraries are bundled locally — no internet needed after initial setup:
- Lucide Icons: Loaded from `node_modules/lucide/` (not from a CDN).
- Marked.js: Loaded from `node_modules/marked/` (not from a CDN).
- If icons or markdown rendering break, simply run `npm install` to restore them.
❓ Frequently Asked Questions (FAQ)
Q: Do I need Python?
A: No. The application comes with a pre-compiled engine (`bujji_engine.exe` / `llama-server.exe`) that runs the model directly.
Q: Why does it say "AVX2"?
A: AVX2 is a feature of modern CPUs that makes the AI run faster. The app automatically detects whether you have it; if not, it switches to a slower but compatible mode (AVX).
Q: The app opens but doesn't reply / "RemiAI Engine Missing" error.
A:
- Git LFS Issue: This usually means you downloaded "pointers" (tiny placeholder files) instead of the real engine. Open a terminal in the folder and run `git lfs pull`.
- Model Issue: Check that `model.gguf` exists in the `engine` folder.
- Console Check: Open Developer Tools (Ctrl+Shift+I) to see errors.
Q: I see "Content Security Policy" warnings in the console.
A: We have configured safeguards (index.html meta tags) to block malicious scripts. The CSP is set to only allow local resources ('self') and the local API server (127.0.0.1:5000). All external CDN dependencies have been removed.
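For reference, a CSP meta tag of that shape looks roughly like the fragment below. The exact directives in your copy of `index.html` may differ, so treat this as an illustration rather than the tag to paste in.

```html
<!-- Illustrative only: allow local resources and the local API server -->
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self'; connect-src 'self' http://127.0.0.1:5000">
```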
Q: How do I build it into an .exe file?
A: Run the command:
`npm run dist`
This will create an installer in the `release` folder that you can share with friends!
If the build fails, open PowerShell as an administrator and run the command again.
Q: TTS says "Piper TTS executable not found".
A: Make sure `piper.exe` exists in `engine/cpu_avx2/` (or `engine/cpu_avx/`). Run `git lfs pull` to download all engine binaries.
Q: STT says "Whisper server failed to start". A:
- Check that `whisper.exe` exists in `engine/cpu_avx2/` (or `engine/cpu_avx/`).
- Check that `ffmpeg.exe` and `ffmpeg.dll` are present in the `bin/` folder; the Whisper server needs FFmpeg for audio conversion.
- Run `git lfs pull` to ensure all files are fully downloaded.
Q: STT says "No speech detected".
A: Make sure your audio file contains clear English speech. Background noise or non-English audio may cause transcription failures. Try with a clear .wav recording first.
Q: Can I use TTS and STT together?
A: Yes! You can generate speech with TTS, save the .wav file, then upload it to STT to verify the transcription. They work independently and can be used simultaneously.
Q: Does the app need internet to work?
A: No. After the initial npm install and git lfs pull setup, the app runs 100% offline. All models, engines, icons, and libraries are bundled locally.