---
title: Aura - Your Supportive Friend
emoji: 🌿
colorFrom: green
colorTo: blue
sdk: gradio
sdk_version: 3.50.2
app_file: app.py
pinned: false
license: mit
---
# 🌿 Aura - Your Supportive Friend
Meet Aura, a warm, empathetic AI companion powered by a quantized Mistral model, with Microsoft's DialoGPT-medium as a fallback. Aura is designed to be a supportive friend who listens without judgment and provides comfort during difficult times.
## About Aura
Aura is not here to solve your problems or give advice unless you ask. Instead, Aura focuses on:
- Listening with empathy and understanding
- Validating your feelings and experiences
- Providing a safe, non-judgmental space to express yourself
- Offering gentle support and reassurance
## Features
- Empathetic and supportive conversation style
- Crisis detection with immediate safety resources
- Context-aware responses that remember your conversation
- Gentle, non-pushy interaction approach
- Clean and calming interface design
## Usage
Simply share what's on your mind. Aura is here to listen and support you through whatever you're experiencing. Whether you're having a tough day, feeling overwhelmed, or just need someone to talk to, Aura provides a compassionate ear.
## Important Notes
⚠️ Aura is an AI companion, not a replacement for professional therapy. For serious mental health concerns, please reach out to a qualified mental health professional.
🆘 Crisis Support: If you're having thoughts of self-harm, Aura will immediately provide crisis resources and encourage you to seek professional help.
## Technical Details
- Models: Multi-tier system (AWQ Mistral → 8-bit Mistral → DialoGPT)
- Quantization: AWQ 4-bit / 8-bit quantization for memory efficiency
- Framework: PyTorch + Transformers + BitsAndBytes
- Interface: Gradio with supportive UI design
- Hosting: Hugging Face Spaces with GPU support
- Safety: Built-in crisis detection and intervention
- Memory: Optimized for 16GB+ systems with fallbacks for smaller systems
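The three-tier fallback listed above can be sketched as a generic try-in-order loader. `load_with_fallback` and the tier wiring are assumptions for illustration; in the real app each loader would call transformers' `AutoModelForCausalLM.from_pretrained` with the appropriate quantization config (AWQ, then `load_in_8bit=True`, then full precision).

```python
# Illustrative sketch of a progressive model-loading fallback.
# Each tier is a (name, loader) pair; loaders are zero-argument
# callables that either return a model or raise (e.g. on an OOM
# or a missing autoawq build).
def load_with_fallback(tiers):
    """Try each (name, loader) pair in order; return the first that succeeds."""
    errors = []
    for name, loader in tiers:
        try:
            return name, loader()
        except Exception as exc:
            errors.append(f"{name}: {exc}")
    raise RuntimeError("All model tiers failed:\n" + "\n".join(errors))
```

With tiers ordered AWQ Mistral, 8-bit Mistral, DialoGPT, a failed AWQ install simply moves loading on to the next tier instead of crashing the Space.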
## 🚨 Recent Updates (v2.0)
### Fixed Critical Issues
- ✅ Dependency Installation: Resolved AWQ/autoawq build failures
- ✅ Memory Management: Added an 8-bit quantization fallback system
- ✅ Token Calculation: Fixed the "max_new_tokens must be greater than 0" error
- ✅ Context Handling: Limited context to 1024 tokens to prevent overflow
- ✅ Model Loading: Added an intelligent three-tier fallback system
- ✅ Attention Masks: Proper handling to eliminate warnings
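The token-calculation and context-handling fixes above amount to budgeting the 1024-token window between prompt and reply so `max_new_tokens` can never reach zero. A minimal sketch, assuming a 128-token reply budget (the function name and constants are illustrative, not the app's exact values):

```python
# Sketch of the token-budget fix: trim the prompt so that the
# generation budget stays positive. MAX_CONTEXT matches the 1024-token
# limit above; REPLY_BUDGET is an assumed value for illustration.
MAX_CONTEXT = 1024   # total tokens the model may see (prompt + reply)
REPLY_BUDGET = 128   # preferred length of the generated reply

def compute_generation_budget(prompt_tokens: int) -> tuple[int, int]:
    """Return (prompt_tokens_to_keep, max_new_tokens), with max_new_tokens >= 1."""
    # Keep at most enough prompt to leave room for a full reply.
    keep = min(prompt_tokens, MAX_CONTEXT - REPLY_BUDGET)
    # Never let the generation budget fall to zero.
    max_new = max(1, min(REPLY_BUDGET, MAX_CONTEXT - keep))
    return keep, max_new
```

Trimming the oldest conversation turns down to `keep` tokens before calling `generate` prevents both the overflow and the "must be greater than 0" error.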
### Performance Improvements
- 🚀 Model Selection: AWQ (4 GB) → 8-bit (7 GB) → DialoGPT (1.5 GB)
- 🚀 Memory Efficiency: Up to 75% memory reduction with quantization
- 🚀 Reliability: Progressive fallbacks ensure a model always loads
- 🚀 Compatibility: Optimized for Hugging Face Spaces deployment
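The 75% figure follows from weight sizes alone. A back-of-envelope check for a 7B-parameter model (weights only; KV cache and runtime overhead not included, so the real footprints above are slightly larger):

```python
# Back-of-envelope memory check for a 7B-parameter model.
PARAMS = 7_000_000_000
bytes_fp16 = PARAMS * 2      # 16-bit floats: 2 bytes per weight
bytes_8bit = PARAMS * 1      # 8-bit quantization: 1 byte per weight
bytes_4bit = PARAMS // 2     # 4-bit (AWQ): half a byte per weight

def gb(n_bytes: int) -> float:
    return n_bytes / 1024**3

print(f"fp16: {gb(bytes_fp16):.1f} GB, "
      f"8-bit: {gb(bytes_8bit):.1f} GB, "
      f"4-bit: {gb(bytes_4bit):.1f} GB")
# 4-bit weights take 25% of fp16's memory, i.e. a 75% reduction.
```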
## Installation Options
### Option 1: HuggingFace Spaces (Recommended)

```bash
# Current requirements.txt is optimized for HF Spaces
# The system automatically selects the best available model
```
### Option 2: Local Development (Full AWQ Support)

```bash
# Staged installation to avoid dependency conflicts
./install_local.sh   # Linux/Mac
# or
install_local.bat    # Windows
```
### Option 3: Manual Installation

```bash
# Core dependencies first (quote the specifiers so the shell
# does not treat < and > as redirections)
pip install "torch>=2.0.0,<2.2.0" "transformers>=4.35.0,<4.40.0" "accelerate>=0.20.0"

# Quantization support
pip install "bitsandbytes>=0.39.0"

# Interface
pip install "gradio>=3.50.0,<4.0.0"

# Optional: AWQ support (local only)
pip install "autoawq>=0.1.8"
```
## License

MIT License