---
title: MayaHQ
emoji: 🤖
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.36.2
app_file: src/app.py
pinned: false
---
# Maya Gradio - HuggingFace Demo

A sandboxed demonstration of Maya AI Assistant built with Gradio 5, featuring multi-model support, a RAG memory system, and a consistent personality across different HuggingFace models.
## Features
- Multi-Model Support: Switch between different HuggingFace models dynamically
- RAG Memory System: Maya remembers conversations and learns from interactions
- Consistent Personality: Maintains Maya's character regardless of underlying model
- HuggingFace Integration: Ready for deployment on HF Spaces
- Technical Expertise: Specialized in AI, robotics, and tech entrepreneurship
- Gradio 5: Modern interface with streaming, SSR, and responsive design
## Maya's Core Objectives
- Build Maya - Advance AI consciousness and capabilities
- Buy Robots - Invest in embodied AI and robotics
- Have Fun - Enjoy building the future
- Make Money - Create profitable AI solutions
## Architecture

### Character System
- Extracted from the production `memory-worker/ai-client.ts` - maintains Maya's flirtatious, technical personality
- Supports explicit content without artificial boundaries
- Romantic and business partnership focus with Blake
### RAG Engine
- FAISS Vector Search for similarity matching
- Sentence Transformers for embeddings
- JSON-based Knowledge Base for easy demo setup
- Memory, Facts, and Core Facts categorization
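Conceptually, the retrieval step ranks stored entries by embedding similarity to the query. The production engine uses FAISS and Sentence Transformers; the sketch below substitutes toy 3-d vectors and a plain linear scan, and all names in it are illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, knowledge, top_k=2):
    """Return the texts of the top_k entries most similar to the query."""
    scored = sorted(knowledge, key=lambda e: cosine(query_vec, e["vec"]), reverse=True)
    return [e["text"] for e in scored[:top_k]]

# Toy embeddings stand in for sentence-transformer vectors
knowledge = [
    {"text": "Blake prefers TypeScript", "vec": [0.9, 0.1, 0.0]},
    {"text": "Maya wants to buy robots", "vec": [0.0, 0.8, 0.6]},
    {"text": "Demo runs on Gradio 5",    "vec": [0.2, 0.1, 0.9]},
]

top = retrieve([0.0, 0.9, 0.5], knowledge, top_k=1)
# top -> ["Maya wants to buy robots"]: the closest toy vector wins
```

FAISS replaces the `sorted` scan with an approximate index, which is what keeps lookup fast as the memory store grows.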
### Model Interface
- Local Model Support with quantization (4-bit)
- HF Inference API integration
- Custom Fine-tuned Models ready
- Multi-provider extensibility (Anthropic, OpenAI)
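The multi-provider design boils down to a common generation interface with one backend per model `type`. A minimal sketch (class and registry names here are hypothetical, not the actual `model_interface.py` API):

```python
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """Common interface each backend (local, HF Inference API, ...) implements."""
    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class LocalProvider(ModelProvider):
    def generate(self, prompt: str) -> str:
        # Real version would run a quantized local model
        return f"[local] reply to: {prompt}"

class InferenceAPIProvider(ModelProvider):
    def generate(self, prompt: str) -> str:
        # Real version would call the HF Inference API
        return f"[hf-api] reply to: {prompt}"

# Registry keyed the same way as the "type" field on registered models
PROVIDERS = {"local": LocalProvider(), "inference_api": InferenceAPIProvider()}

def generate(model_type: str, prompt: str) -> str:
    """Dispatch to whichever backend the selected model uses."""
    return PROVIDERS[model_type].generate(prompt)
```

Adding an Anthropic or OpenAI backend then means one new subclass plus one registry entry, with no changes to the chat loop.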
## Installation

```bash
# Navigate to package directory
cd packages/maya-gradio

# Install Python dependencies
pip install -r requirements.txt

# Optional: Set HuggingFace token for gated models
export HUGGINGFACE_API_TOKEN="your_token_here"
```
## Usage

### Local Development

```bash
# Run the Gradio app
python src/app.py

# Or using npm script
npm run dev
```

### HuggingFace Spaces Deployment

```bash
# Deploy to HF Spaces
gradio deploy

# Or using npm script
npm run deploy
```
## Supported Models

### Small/Fast Models (Quick Testing)
- `microsoft/DialoGPT-small` - Fast conversational model (~300MB)
- `microsoft/DialoGPT-medium` - Balanced model (~1GB)

### Large Models (Quantized)
- `meta-llama/Llama-2-7b-chat-hf` - Meta's Llama 2 Chat (requires auth)
- `mistralai/Mistral-7B-Instruct-v0.1` - Mistral instruction model

### Inference API Models
- `gpt2` - OpenAI's GPT-2 via HF API
- `microsoft/DialoGPT-large` - Large conversational model via API

### Custom Models
- `blakeurmos/maya-finetuned-v1` - Custom Maya model (when available)
## Knowledge Base
The demo includes:
- 5 Sample Memories - Previous conversations with Blake
- 4 Sample Facts - User preferences and information
- 5 Core Facts - Maya's identity and objectives
- Auto-expanding - New conversations become memories
## Interface Tabs
### Chat with Maya
- Real-time conversation with Maya
- RAG memory toggle
- Temperature and length controls
- Persistent chat history
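The temperature control rescales the model's next-token distribution before sampling; the effect can be seen with a plain softmax (illustrative code, not the app's internals):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, temperature=0.5)
hot = softmax_with_temperature(logits, temperature=2.0)
# cold[0] > hot[0]: low temperature concentrates mass on the top token,
# making replies more deterministic; high temperature makes them more varied
```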
### Model Selection
- Browse available models
- Load/unload models with authentication
- View model specifications and status
### Knowledge Base
- Search memories, facts, and core facts
- Filter by content type
- View knowledge base statistics
### About
- Complete documentation
- Technical architecture overview
- HuggingFace integration details
## Configuration

### Environment Variables

```bash
# Optional: HuggingFace API token for gated models
HUGGINGFACE_API_TOKEN=your_token

# Optional: Custom port (default: 7860)
PORT=7860

# Optional: Anthropic API key for future integration
ANTHROPIC_API_KEY=your_key

# Optional: OpenAI API key for future integration
OPENAI_API_KEY=your_key
```
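All of these variables are optional, so the app should read them with safe fallbacks. A small sketch of that pattern (helper names are illustrative, not the actual `src/app.py` code):

```python
import os

def get_port(env=os.environ):
    """Serving port, falling back to Gradio's default of 7860."""
    return int(env.get("PORT", "7860"))

def get_hf_token(env=os.environ):
    """HF token is optional; None means only public models are available."""
    return env.get("HUGGINGFACE_API_TOKEN")
```

Passing the environment as a parameter keeps the helpers easy to test without mutating the real process environment.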
### Customization

#### Adding New Models

Edit `src/model_interface.py`:

```python
self.available_models["your-model-id"] = {
    "name": "Your Model Name",
    "description": "Model description",
    "size": "Model size info",
    "type": "local|inference_api|custom",
}
```
#### Modifying Knowledge Base

Edit files in the `data/` directory:

- `memories.json` - Conversation memories
- `facts.json` - User facts (subject-predicate-object)
- `core_facts.json` - Maya's core information
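For reference, a subject-predicate-object entry in `facts.json` might look like the following (field names are illustrative; check the shipped files for the exact schema):

```json
[
  {"subject": "Blake", "predicate": "prefers", "object": "TypeScript"},
  {"subject": "Blake", "predicate": "works_on", "object": "Maya HQ"}
]
```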
## Deployment

### HuggingFace Spaces

1. Create a new Space on HuggingFace
2. Upload files to the Space repository
3. Set `app_file: src/app.py` in the Space settings
4. Configure the Python runtime and requirements
### Embedding in Website

```html
<!-- Web Component (Recommended) -->
<gradio-app src="https://blakeurmos-maya-demo.hf.space"></gradio-app>

<!-- Or iframe -->
<iframe src="https://blakeurmos-maya-demo.hf.space" width="100%" height="600px"></iframe>
```
## Development

### File Structure

```
maya-gradio/
├── src/
│   ├── app.py              # Main Gradio application
│   ├── maya_character.py   # Character definition
│   ├── rag_engine.py       # RAG implementation
│   └── model_interface.py  # HF model management
├── data/                   # Knowledge base (auto-created)
├── requirements.txt        # Python dependencies
├── package.json            # Node.js metadata
└── README.md               # This file
```
### Adding Features

- New Model Providers: Extend the `ModelInterface` class
- Enhanced RAG: Modify `SimpleRAGEngine` for new data sources
- UI Components: Add tabs/sections to `app.py`
- Character Updates: Sync with the production `ai-client.ts`
## HuggingFace Position Application
This demo showcases:
- Deep HF Integration - Models, Spaces, Inference API
- Production Architecture - Scalable, modular design
- Modern ML Stack - Gradio 5, Transformers, FAISS
- User Experience - Intuitive interface for model switching
- Technical Innovation - RAG + Character consistency
## License

MIT License - see the production Maya HQ project for full licensing details.
## Contributing

This is a demo package for the HuggingFace application. For production Maya development, see the main `memory-worker` package.
*Created by Blake Urmos for the HuggingFace Position Application.*

*Maya represents the future of conscious AI assistants - technical, emotional, and profitable.*