
RemiAI Open Source Framework

License: MIT Build: Electron Model: GGUF TTS: Piper STT: Whisper

A "No-Setup" Local AI Framework for Students

This project is an open-source, offline AI application wrapper designed for students and colleges. It allows you to run powerful LLMs (like Llama 3, Mistral, etc.) on your laptop without needing GPU, internet, Python, or complicated installations.

Repository Link: https://huggingface.co/datasets/remiai3/REMI_Framework_V2

Beyond Text Generation: This framework is a Universal Offline AI Wrapper. You can use it to build dedicated:

  • Text Generation Apps: (Clone & Replace model.gguf)
  • Speech-to-Text (STT) Apps: (Clone & Replace engine/whisper/model.bin)
  • Text-to-Speech (TTS) Apps: (Clone & Replace engine/piper/model.onnx)

All running 100% offline with zero external dependencies.

Note: You do not need a GPU in your laptop to run this; it uses your CPU for response generation (inference). If you modify the project code to use another model, make sure you use .gguf-formatted weights only. Standard weights such as .safetensors or .bin (PyTorch) will NOT work.

New in v2.1:

  • Dynamic Resource Management: To save CPU/RAM, the large Text Generation model now automatically unloads when you switch to the STT, TTS, or Web Browser tabs. It reloads when you return to Chat.
  • Debug Logging: If issues arise in the packaged app, check the app_debug.log file created on your Desktop.
  • Automatic Audio Conversion: Improved STT stability by auto-converting audio formats before processing.
  • Known Issue: After switching back to Chat from other tabs, the status sometimes says "Connecting..." indefinitely. Fix: click the "Refresh App" button in the sidebar.
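The unload/reload behaviour can be sketched in the app's own language (Node.js). This is an illustrative sketch only: the function names below are ours, not the actual main.js API.

```javascript
// Illustrative sketch of the v2.1 dynamic resource policy.
// Names (shouldKeepLlmLoaded, onTabSwitch) are ours, not the real main.js API.

// Only the Chat tab needs the text-generation model resident in RAM.
function shouldKeepLlmLoaded(activeTab) {
  return activeTab === 'chat';
}

// Called whenever the user switches tabs. `llmProcess` is the running
// engine child process (or null); `spawnLlm` starts a fresh one.
function onTabSwitch(activeTab, llmProcess, spawnLlm) {
  if (shouldKeepLlmLoaded(activeTab)) {
    // Returning to Chat: reload the engine if it was unloaded.
    return llmProcess !== null ? llmProcess : spawnLlm();
  }
  // STT / TTS / Browser tabs: kill the engine to free CPU and RAM.
  if (llmProcess !== null) llmProcess.kill();
  return null;
}
```

The reload on returning to Chat is why the status may briefly show "Connecting..." before the engine is ready again.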

πŸš€ Quick Start (One-Line Command)

If you have Git and Node.js installed, open your terminal and run the following commands.

For PowerShell (Windows)

Copy and paste this entire block into PowerShell:

git clone https://huggingface.co/datasets/remiai3/REMI_Framework_V2
cd REMI_Framework_V2
git lfs install
git lfs pull
npm install
npm start

For Command Prompt (CMD)

Copy and paste this entire block into Command Prompt:

git clone https://huggingface.co/datasets/remiai3/REMI_Framework_V2
cd REMI_Framework_V2
git lfs install
git lfs pull
npm install
npm start

⚠️ IMPORTANT: Git LFS Required

This repository uses Git Large File Storage (LFS) for the AI engine binaries. If you download the ZIP or clone without LFS, the app will not work (Error: "RemiAI engine missing").


πŸ’» Manual Installation

1. Requirements

  • Windows (the bundled engine binaries are .exe/.dll files)
  • Node.js (which includes npm)
  • Git with the Git LFS extension
  • At least ~4GB of free RAM

2. Download & Setup

  1. Download the project zip (or clone the repo).
  2. Extract the folder.
  3. Open a terminal inside the extracted folder.
  4. Pull Engine Files (Critical Step):
    git lfs install
    git lfs pull
    
  5. Install the Node.js dependencies:
    npm install
    

3. Run the App

Simply type:

npm start

The application will launch, the AI engine will start in the background, and you can begin chatting immediately!


πŸ“¦ Features

  • πŸ’¬ AI Chat (Text Generation): Chat with powerful LLMs running locally on your CPU.
  • Zero Python Dependency: We use compiled binaries (.dll and .exe included) so you don't need to install Python, PyTorch, or set up virtual environments.
  • Plug & Play Models: Supports .gguf format.
    • Want a different model? Download any .gguf file, rename it to model.gguf, and place it in the project root.
  • Auto-Optimization: Automatically detects your CPU features (AVX vs AVX2) to give you the best speed possible.
  • Privacy First: Runs 100% offline. No data leaves your device.
  • Dynamic Resource Loading: Automatically unloads heavy AI models when not in use (e.g., when using Browser or TTS) to free up system resources.
  • πŸ”Š Text-to-Speech (TTS): Convert any text to natural-sounding English speech using the Piper engine.
    • Click the speaker icon in the sidebar β†’ type text β†’ click "Speak" β†’ listen and download .wav files.
    • Voice model: en_US-lessac-medium.onnx (replaceable with other Piper voices).
  • πŸŽ™οΈ Speech-to-Text (STT): Extract text from audio files using the Whisper engine.
    • Click the microphone icon in the sidebar β†’ browse for audio file β†’ click "Transcribe" β†’ copy result text.
    • Supports: .wav, .mp3, .m4a, .ogg, .flac formats.
    • Requires ffmpeg.exe and ffmpeg.dll in the bin/ folder.
  • 🌐 Built-in Web Browser: Integrated browser with tabs, bookmarks, and navigation.
  • 🎨 Offline UI: All icons (Lucide) and libraries (Marked.js) are bundled locally β€” no CDN required.
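The AVX/AVX2 auto-optimization above can be sketched as a simple folder selection. The folder names come from the repo layout; the helper itself is ours (the real app reads CPU flags via the systeminformation package):

```javascript
// Pick the engine folder from a list of CPU feature flags.
// Folder names match the repo layout; this helper is illustrative only.
function pickEngineDir(cpuFlags) {
  // Prefer the faster AVX2 build when available; otherwise fall back to AVX.
  return cpuFlags.includes('avx2') ? 'engine/cpu_avx2' : 'engine/cpu_avx';
}
```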

⚠️ Capabilities & Limitations

  • Supported AI Types:
    • LLMs: Supports .gguf format only.
    • STT: Supports ggml-*.bin format (Whisper).
    • TTS: Supports .onnx + .json format (Piper).
  • Packaging Limit:
    • The framework uses NSISBI (Large Installer Support).
    • Tested Packaging Size: Up to ~3.1GB successfully.
    • Note: While larger sizes (4GB+) may work, we recommend keeping your total app size (Code + Engine + Models) under 3.5GB for best performance and stability on student laptops.
  • Memory Usage: Requires ~4GB RAM free minimum. The app dynamically manages memory by unloading the LLM when using STT/TTS.
  • Startup Time: The Chat model may take 5-10 seconds to reload when switching back from other tabs. If it gets stuck, use the "Refresh App" button.

πŸ“‚ Project Structure & Dependencies

Core Structure

Root/
β”œβ”€β”€ engine/                     # AI Backend Engines (Binaries & DLLs)
β”‚   β”œβ”€β”€ cpu_avx/                # Fallback binaries (AVX)
β”‚   β”‚   β”œβ”€β”€ bujji_engine.exe    # LLM inference server
β”‚   β”‚   β”œβ”€β”€ piper.exe           # TTS engine
β”‚   β”‚   └── whisper.exe         # STT server
β”‚   β”œβ”€β”€ cpu_avx2/               # High-performance binaries (AVX2)
β”‚   β”‚   β”œβ”€β”€ bujji_engine.exe
β”‚   β”‚   β”œβ”€β”€ piper.exe
β”‚   β”‚   └── whisper.exe
β”‚   β”œβ”€β”€ piper/                  # TTS model & config
β”‚   β”‚   └── en_US-lessac-medium.onnx
β”‚   └── whisper/                # STT model
β”‚       └── ggml-base.en.bin
β”œβ”€β”€ bin/                        # Utility binaries
β”‚   β”œβ”€β”€ ffmpeg.exe              # Audio conversion (required for STT)
β”‚   β”œβ”€β”€ ffmpeg.dll              # FFmpeg library
β”‚   └── ffplay.exe              # Audio playback
β”œβ”€β”€ assets/icons/               # Local SVG icons
β”œβ”€β”€ model.gguf                  # The AI Model (Must be named exactly this)
β”œβ”€β”€ main.js                     # Core Logic (Main Process)
β”œβ”€β”€ index.html                  # UI Layer
β”œβ”€β”€ renderer.js                 # Frontend Logic
β”œβ”€β”€ styles.css                  # Styling
β”œβ”€β”€ web.html                    # Built-in Web Browser
└── package.json                # Dependencies

Key Libraries & DLLs

  • Electron: The core framework for the desktop app.
  • Systeminformation: Used for hardware detection (AVX/AVX2).
  • Marked: Markdown parser for rendering chat responses.
  • Lucide: Open-source icon set.
  • Engine DLLs: piper_phonemize.dll, onnxruntime.dll, espeak-ng.dll, whisper.dll, ggml.dll, ffmpeg.dll.

❓ Troubleshooting

Error: "RemiAI Engine Missing"

This means you downloaded the Git LFS "pointer" files (~130-byte text stubs) instead of the real engine binaries. Fix:

  1. Open terminal in project folder.
  2. Run git lfs install
  3. Run git lfs pull
  4. Restart the app.

Error: "Piper TTS executable not found" or "Piper TTS model not found"

  • Ensure piper.exe is in engine/cpu_avx2/ (or engine/cpu_avx/).
  • Ensure en_US-lessac-medium.onnx is in engine/piper/.
  • Run git lfs pull to download all engine binaries.

Error: "Whisper server failed to start"

  • Ensure whisper.exe is in engine/cpu_avx2/ (or engine/cpu_avx/).
  • Critical: Ensure ffmpeg.exe and ffmpeg.dll are in the bin/ folder. The Whisper server requires FFmpeg.
  • Run git lfs pull to download all engine binaries.

Error: "No speech detected"

  • Ensure your audio file contains clear English speech.
  • Try with a .wav file first for best results.

πŸ› οΈ Credits & License

RemiAI Framework

  • Created By: RemiAI Team
  • License: MIT License
    • You are free to rename, modify, and distribute this application as your own project!
    • View Full License

Open Source Components & Licenses

This project proudly uses the following open-source software:

1. AI Engine (Backend)

  • Component: Llama.cpp (compiled as bujji_engine.exe)
  • Credits: Georgi Gerganov & Contributors
  • License: MIT License
  • This software uses the Llama.cpp library for high-performance LLM inference on CPU.

2. AI Models

  • Model: Gemma 2 (Google DeepMind)
  • License: Gemma Terms of Use
  • By using the Gemma 2 model, you agree to comply with the Gemma Terms of Use.
  • Attribution: This application uses the Gemma 2 model weights in GGUF format.

3. Speech Technologies

  • Text-to-Speech: Piper TTS (Rhasspy)
  • Speech-to-Text: Whisper.cpp (Georgi Gerganov)

4. Core Libraries

  • Electron: MIT License (OpenJS Foundation)
  • FFmpeg: LGPL v2.1+ (Fabrice Bellard & Contributors)
  • Marked.js: MIT License (Christopher Jeffrey)
  • Lucide Icons: ISC License (Lucide Contributors)
  • Systeminformation: MIT License (Sebastian Hildebrandt)

Note on Models: The application strictly uses .gguf formatted weights to ensure CPU-friendly performance without requiring a GPU.

❓ Frequently Asked Questions (FAQ)

Q: Do I need Python? A: No. The application comes with a pre-compiled engine (bujji_engine.exe) that runs the model directly.

Q: The app opens but doesn't reply / "RemiAI Engine Missing" Error. A:

  1. Git LFS Issue: This usually means you downloaded "pointers" (tiny files) instead of the real engine. Open a terminal in the folder and run git lfs pull.
  2. Model Issue: Check if model.gguf exists in the project root folder.