# RemiAI Open Source Framework
A "No-Setup" Local AI Framework for Students
This project is an open-source, offline AI application wrapper designed for students and colleges. It allows you to run powerful LLMs (like Llama 3, Mistral, etc.) on your laptop without needing GPU, internet, Python, or complicated installations.
Repository Link: https://huggingface.co/datasets/remiai3/REMI_Framework_V2
Beyond Text Generation: This framework is a Universal Offline AI Wrapper. You can use it to build dedicated:
- Text Generation Apps: clone & replace `model.gguf`
- Speech-to-Text (STT) Apps: clone & replace `engine/whisper/model.bin`
- Text-to-Speech (TTS) Apps: clone & replace `engine/piper/model.onnx`
All running 100% offline with zero external dependencies.
Note: No GPU is needed; the app uses your laptop's CPU for response generation (inference). If you modify the project code to use another model, make sure you use `.gguf`-formatted weights only. Normal weights like `.safetensors` or `.bin` (PyTorch) will NOT work.
New in v2.1:
* Dynamic Resource Management: To save CPU/RAM, the massive Text Generation model now automatically unloads when you switch to STT, TTS, or Web Browser tabs. It reloads when you return to Chat.
* Debug Logging: If issues arise in the packaged app, check the app_debug.log file created on your Desktop.
* Automatic Audio Conversion: Improved STT stability by auto-converting audio formats before processing.
* Known Issue: Sometimes after switching back to Chat from other tabs, the status says "Connecting..." indefinitely. Fix: Click the "Refresh App" button in the sidebar.
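The tab-switch behaviour above can be sketched as a small state machine. This is a hypothetical `ModelManager`, not the shipped code; the real logic lives in `main.js` and may differ:

```javascript
// Hypothetical sketch of v2.1's dynamic resource management: the LLM child
// process is killed when the user leaves the Chat tab and respawned on return.
// `spawnFn` stands in for something like
// () => require('child_process').spawn('engine/cpu_avx2/bujji_engine.exe').
class ModelManager {
  constructor(spawnFn) {
    this.spawnFn = spawnFn;
    this.proc = null; // null = model unloaded
  }

  onTabChange(tab) {
    if (tab === 'chat') {
      // Reload the LLM when returning to Chat (takes a few seconds).
      if (!this.proc) this.proc = this.spawnFn();
    } else if (this.proc) {
      // Free CPU/RAM for STT, TTS, or the browser.
      this.proc.kill();
      this.proc = null;
    }
  }

  isLoaded() {
    return this.proc !== null;
  }
}
```

If the reload hangs in the "Connecting..." state, the "Refresh App" button effectively re-runs the `chat` branch from scratch.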
## 🚀 Quick Start
If you have Git and Node.js installed, open your terminal and run the following commands.
### For PowerShell (Windows)
Copy and paste this entire block into PowerShell:
```shell
git clone https://huggingface.co/datasets/remiai3/REMI_Framework_V2
cd REMI_Framework_V2
git lfs install
git lfs pull
npm install
npm start
```
### For Command Prompt (CMD)
Copy and paste this entire block into Command Prompt:
```shell
git clone https://huggingface.co/datasets/remiai3/REMI_Framework_V2
cd REMI_Framework_V2
git lfs install
git lfs pull
npm install
npm start
```
## ⚠️ IMPORTANT: Git LFS Required
This repository uses Git Large File Storage (LFS) for the AI engine binaries. If you download the ZIP or clone without LFS, the app will not work (Error: "RemiAI engine missing").
## 💻 Manual Installation

### 1. Requirements
- Node.js: Download Here (Install the LTS version).
- Git & Git LFS: Download Git | Download Git LFS
- Windows Laptop: the code includes optimized `.exe` binaries for Windows.
### 2. Download & Setup
- Download the project zip (or clone the repo).
- Extract the folder.
- Open Terminal inside the folder path.
- Pull the engine files (critical step): run `git lfs install`, then `git lfs pull`.
- Install the libraries: run `npm install`.
### 3. Run the App
Simply run `npm start`.
The application will launch, the AI engine will start in the background, and you can begin chatting immediately!
## 📦 Features
- 💬 AI Chat (Text Generation): Chat with powerful LLMs running locally on your CPU.
  - Zero Python Dependency: We use compiled binaries (`.dll` and `.exe` included), so you don't need to install Python, PyTorch, or set up virtual environments.
  - Plug & Play Models: Supports the `.gguf` format.
    - Want a different model? Download any `.gguf` file, rename it to `model.gguf`, and place it in the project root.
  - Auto-Optimization: Automatically detects your CPU features (AVX vs. AVX2) to give you the best speed possible.
  - Privacy First: Runs 100% offline. No data leaves your device.
  - Dynamic Resource Loading: Automatically unloads heavy AI models when not in use (e.g., when using the Browser or TTS) to free up system resources.
- 🔊 Text-to-Speech (TTS): Convert any text to natural-sounding English speech using the Piper engine.
  - Click the speaker icon in the sidebar → type text → click "Speak" → listen and download `.wav` files.
  - Voice model: `en_US-lessac-medium.onnx` (replaceable with other Piper voices).
- 🎙️ Speech-to-Text (STT): Extract text from audio files using the Whisper engine.
  - Click the microphone icon in the sidebar → browse for an audio file → click "Transcribe" → copy the result text.
  - Supports the `.wav`, `.mp3`, `.m4a`, `.ogg`, and `.flac` formats.
  - Requires `ffmpeg.exe` and `ffmpeg.dll` in the `bin/` folder.
- 🌐 Built-in Web Browser: Integrated browser with tabs, bookmarks, and navigation.
- 🎨 Offline UI: All icons (Lucide) and libraries (Marked.js) are bundled locally; no CDN required.
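The AVX/AVX2 auto-optimization mentioned above amounts to choosing between the two engine folders. A minimal sketch, assuming the CPU flags string is obtained elsewhere (e.g. via the bundled `systeminformation` package); the helper name is hypothetical:

```javascript
// Pick the engine folder from a CPU feature-flags string. The flags
// themselves would come from hardware detection (e.g. systeminformation's
// cpu().flags); this helper only performs the selection.
function pickEngineDir(cpuFlags) {
  return /\bavx2\b/i.test(cpuFlags) ? 'engine/cpu_avx2' : 'engine/cpu_avx';
}
```

Falling back to `engine/cpu_avx` on older CPUs trades some speed for compatibility, which is why both binary sets ship in the repo.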
## ⚠️ Capabilities & Limitations
- Supported AI Types:
  - LLMs: `.gguf` format only.
  - STT: `ggml-*.bin` format (Whisper).
  - TTS: `.onnx` + `.json` format (Piper).
- Packaging Limit:
  - The framework uses NSIS (Large Installer Support).
  - Tested packaging size: up to ~3.1 GB successfully.
  - Note: While larger sizes (4 GB+) may work, we recommend keeping your total app size (code + engine + models) under 3.5 GB for best performance and stability on student laptops.
- Memory Usage: Requires a minimum of ~4 GB of free RAM. The app dynamically manages memory by unloading the LLM when using STT/TTS.
- Startup Time: The chat model may take 5-10 seconds to reload when switching back from other tabs. If it gets stuck, use the "Refresh App" button.
## 📁 Project Structure & Dependencies
### Core Structure
```
Root/
├── engine/                  # AI Backend Engines (Binaries & DLLs)
│   ├── cpu_avx/             # Fallback binaries (AVX)
│   │   ├── bujji_engine.exe # LLM inference server
│   │   ├── piper.exe        # TTS engine
│   │   └── whisper.exe      # STT server
│   ├── cpu_avx2/            # High-performance binaries (AVX2)
│   │   ├── bujji_engine.exe
│   │   ├── piper.exe
│   │   └── whisper.exe
│   ├── piper/               # TTS model & config
│   │   └── en_US-lessac-medium.onnx
│   └── whisper/             # STT model
│       └── ggml-base.en.bin
├── bin/                     # Utility binaries
│   ├── ffmpeg.exe           # Audio conversion (required for STT)
│   ├── ffmpeg.dll           # FFmpeg library
│   └── ffplay.exe           # Audio playback
├── assets/icons/            # Local SVG icons
├── model.gguf               # The AI model (must be named exactly this)
├── main.js                  # Core logic (main process)
├── index.html               # UI layer
├── renderer.js              # Frontend logic
├── styles.css               # Styling
├── web.html                 # Built-in web browser
└── package.json             # Dependencies
```
### Key Libraries & DLLs
- Electron: The core framework for the desktop app.
- Systeminformation: Used for hardware detection (AVX/AVX2).
- Marked: Markdown parser for rendering chat responses.
- Lucide: Open-source icon set.
- Engine DLLs: `piper_phonemize.dll`, `onnxruntime.dll`, `espeak-ng.dll`, `whisper.dll`, `ggml.dll`, `ffmpeg.dll`.
## ❗ Troubleshooting
Error: "RemiAI Engine Missing"
This means you downloaded the "pointer" files (130 bytes) instead of the real engine. Fix:
- Open a terminal in the project folder.
- Run `git lfs install`.
- Run `git lfs pull`.
- Restart the app.
Error: "Piper TTS executable not found" or "Piper TTS model not found"
- Ensure `piper.exe` is in `engine/cpu_avx2/` (or `engine/cpu_avx/`).
- Ensure `en_US-lessac-medium.onnx` is in `engine/piper/`.
- Run `git lfs pull` to download all engine binaries.
Error: "Whisper server failed to start"
- Ensure `whisper.exe` is in `engine/cpu_avx2/` (or `engine/cpu_avx/`).
- Critical: Ensure `ffmpeg.exe` and `ffmpeg.dll` are in the `bin/` folder. The Whisper server requires FFmpeg.
- Run `git lfs pull` to download all engine binaries.
Error: "No speech detected"
- Ensure your audio file contains clear English speech.
- Try a `.wav` file first for best results.
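The FFmpeg requirement above exists because audio is converted before transcription; Whisper-family models expect 16 kHz mono WAV input. A sketch of the argument list such a conversion would pass to `bin/ffmpeg.exe` (the exact flags the app uses are an assumption):

```javascript
// Build ffmpeg arguments to convert any supported input (.mp3, .m4a, .ogg,
// .flac, .wav) into 16 kHz mono WAV, the format Whisper expects.
function ffmpegArgs(inputFile, outputWav) {
  return [
    '-y',             // overwrite output without asking
    '-i', inputFile,  // source audio file
    '-ar', '16000',   // resample to 16 kHz
    '-ac', '1',       // downmix to mono
    outputWav,
  ];
}
```

The caller would spawn `bin/ffmpeg.exe` with this array (e.g. via `child_process.execFile`) before handing the result to the Whisper server.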
## 🛠️ Credits & License
RemiAI Framework
- Created By: RemiAI Team
- License: MIT License
- You are free to rename, modify, and distribute this application as your own project!
- View Full License
Open Source Components & Licenses
This project proudly uses the following open-source software:
1. AI Engine (Backend)
- Component: Llama.cpp (compiled as `bujji_engine.exe`)
- Credits: Georgi Gerganov & Contributors
- License: MIT License
- This software uses the Llama.cpp library for high-performance LLM inference on CPU.
2. AI Models
- Model: Gemma 2 (Google DeepMind)
- License: Gemma Terms of Use
- By using the Gemma 2 model, you agree to comply with the Gemma Terms of Use.
- Attribution: This application uses the Gemma 2 model weights in GGUF format.
3. Speech Technologies
- Text-to-Speech: Piper TTS (Rhasspy)
- License: MIT License
- View Repository
- Speech-to-Text: Whisper.cpp (Georgi Gerganov)
- License: MIT License
- View Repository
4. Core Libraries
- Electron: MIT License (OpenJS Foundation)
- FFmpeg: LGPL v2.1+ (Fabrice Bellard & Contributors)
- Marked.js: MIT License (Christopher Jeffrey)
- Lucide Icons: ISC License (Lucide Contributors)
- Systeminformation: MIT License (Sebastian Hildebrandt)
Note on Models: The application strictly uses .gguf formatted weights to ensure CPU-friendly performance without requiring a GPU.
## ❓ Frequently Asked Questions (FAQ)
Q: Do I need Python?
A: No. The application comes with a pre-compiled engine (bujji_engine.exe) that runs the model directly.
Q: The app opens but doesn't reply / "RemiAI Engine Missing" error.
A:
- Git LFS Issue: This usually means you downloaded "pointers" (tiny files) instead of the real engine. Open a terminal in the folder and run `git lfs pull`.
- Model Issue: Check that `model.gguf` exists in the project root folder.