---
title: Aura - Your Supportive Friend
emoji: 🌿
colorFrom: green
colorTo: blue
sdk: gradio
sdk_version: 3.50.2
app_file: app.py
pinned: false
license: mit
---

# 🌿 Aura - Your Supportive Friend

Meet Aura, a warm, empathetic AI companion powered by a tiered language-model stack (quantized Mistral models, with Microsoft's DialoGPT-medium as a fallback). Aura is designed to be a supportive friend who listens without judgment and provides comfort during difficult times.

## About Aura

Aura is not here to solve your problems or give advice unless you ask. Instead, Aura focuses on:

- Listening with empathy and understanding
- Validating your feelings and experiences
- Providing a safe, non-judgmental space to express yourself
- Offering gentle support and reassurance

## Features

- Empathetic and supportive conversation style
- Crisis detection with immediate safety resources
- Context-aware responses that remember your conversation
- Gentle, non-pushy interaction approach
- Clean and calming interface design
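
The crisis-detection logic itself is not shown in this README. As a rough illustration only, a minimal keyword-based check might look like the following sketch (the function name, keyword list, and message are hypothetical, not the actual code in `app.py`):

```python
from typing import Optional

# Hypothetical keyword list -- the real app may use a different approach.
CRISIS_KEYWORDS = ("suicide", "kill myself", "self-harm", "end my life")

CRISIS_MESSAGE = (
    "I'm really concerned about what you're sharing. Please consider "
    "reaching out to a crisis line (e.g. 988 in the US) or a local "
    "emergency service."
)

def check_for_crisis(user_message: str) -> Optional[str]:
    """Return a crisis-resource message if the input matches a keyword, else None."""
    text = user_message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return CRISIS_MESSAGE
    return None
```

In a setup like this, a non-`None` result would be shown to the user immediately, before any model generation runs.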

## Usage

Simply share what's on your mind. Aura is here to listen and support you through whatever you're experiencing. Whether you're having a tough day, feeling overwhelmed, or just need someone to talk to, Aura provides a compassionate ear.

## Important Notes

⚠️ Aura is an AI companion, not a replacement for professional therapy. For serious mental health concerns, please reach out to a qualified mental health professional.

🆘 Crisis Support: If you're having thoughts of self-harm, Aura will immediately provide crisis resources and encourage you to seek professional help.

## Technical Details

- Models: Multi-tier system (AWQ Mistral → 8-bit Mistral → DialoGPT)
- Quantization: AWQ 4-bit / 8-bit quantization for memory efficiency
- Framework: PyTorch + Transformers + BitsAndBytes
- Interface: Gradio with supportive UI design
- Hosting: Hugging Face Spaces with GPU support
- Safety: Built-in crisis detection and intervention
- Memory: Optimized for 16GB+ systems with fallbacks for smaller systems
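
The multi-tier fallback above can be sketched as a simple priority loop. The loader functions here are stand-ins (the real code would call `autoawq`, `bitsandbytes`, and `transformers`):

```python
def load_first_available(loaders):
    """Try (name, loader) pairs in priority order; return the first success."""
    errors = {}
    for name, loader in loaders:
        try:
            return name, loader()
        except Exception as exc:  # e.g. OOM, missing quantization wheels
            errors[name] = exc
    raise RuntimeError(f"All model tiers failed: {errors}")

# Stand-in loaders simulating the AWQ -> 8-bit -> DialoGPT cascade:
def load_awq_mistral():
    raise MemoryError("not enough GPU memory for AWQ Mistral")

def load_8bit_mistral():
    raise ImportError("bitsandbytes unavailable")

def load_dialogpt():
    return "microsoft/DialoGPT-medium (stub model object)"

name, model = load_first_available([
    ("awq-mistral", load_awq_mistral),
    ("8bit-mistral", load_8bit_mistral),
    ("dialogpt", load_dialogpt),
])
print(name)  # dialogpt
```

Because each tier is tried independently, the cheapest model that actually loads on the current hardware wins.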

## 🚨 Recent Updates (v2.0)

### Fixed Critical Issues

- ✅ Dependency Installation: Resolved AWQ/autoawq build failures
- ✅ Memory Management: Added 8-bit quantization fallback system
- ✅ Token Calculation: Fixed "max_new_tokens must be greater than 0" error
- ✅ Context Handling: Limited context to 1024 tokens to prevent overflow
- ✅ Model Loading: Intelligent 3-tier fallback system
- ✅ Attention Masks: Proper handling to eliminate warnings
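
The token-calculation and context-handling fixes amount to a budget computation like this hedged sketch (only the 1024-token cap comes from the README; the other constants and names are assumptions):

```python
MODEL_MAX_LENGTH = 2048  # assumed context window; not stated in the README
CONTEXT_LIMIT = 1024     # the v2.0 cap on conversation context
MIN_NEW_TOKENS = 32      # illustrative floor so generation never gets 0 tokens

def compute_max_new_tokens(prompt_tokens: int) -> int:
    """Token budget for generation, guarding against max_new_tokens <= 0."""
    prompt_tokens = min(prompt_tokens, CONTEXT_LIMIT)  # truncate long histories
    budget = MODEL_MAX_LENGTH - prompt_tokens
    return max(budget, MIN_NEW_TOKENS)

print(compute_max_new_tokens(3000))  # 1024 (history truncated to the cap)
```

Without the truncation, a long conversation history could consume the entire context window and leave a zero or negative generation budget, which is exactly the error the fix targets.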

### Performance Improvements

- 🚀 Model Selection: AWQ (4GB) → 8-bit (7GB) → DialoGPT (1.5GB)
- 🚀 Memory Efficiency: Up to 75% memory reduction with quantization
- 🚀 Reliability: Progressive fallbacks ensure a model always loads
- 🚀 Compatibility: Optimized for HuggingFace Spaces deployment
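
The 75% figure is consistent with simple weight-storage arithmetic, quantizing fp16 weights (16 bits each) down to 4 bits. A back-of-envelope check, ignoring activations and runtime overhead:

```python
PARAMS_7B = 7e9  # approximate parameter count of a Mistral-7B-class model

def weight_gb(bits_per_param: float) -> float:
    """Weight storage in GB (decimal) at a given precision."""
    return PARAMS_7B * bits_per_param / 8 / 1e9

fp16_gb = weight_gb(16)  # 14.0 GB
int4_gb = weight_gb(4)   # 3.5 GB, close to the ~4 GB AWQ figure above
reduction = 1 - int4_gb / fp16_gb
print(f"{reduction:.0%}")  # 75%
```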

## Installation Options

### Option 1: HuggingFace Spaces (Recommended)

```bash
# Current requirements.txt is optimized for HF Spaces
# System automatically selects best available model
```

### Option 2: Local Development (Full AWQ Support)

```bash
# Staged installation to avoid dependency conflicts
./install_local.sh  # Linux/Mac
# or
install_local.bat   # Windows
```

### Option 3: Manual Installation

```bash
# Core dependencies first (quote specifiers so the shell doesn't interpret < and >)
pip install "torch>=2.0.0,<2.2.0" "transformers>=4.35.0,<4.40.0" "accelerate>=0.20.0"
# Quantization support
pip install "bitsandbytes>=0.39.0"
# Interface
pip install "gradio>=3.50.0,<4.0.0"
# Optional: AWQ support (local only)
pip install "autoawq>=0.1.8"
```

## License

MIT License