---
title: MusicGen
emoji: 🎵
colorFrom: blue
colorTo: red
sdk: gradio
app_file: app.py
pinned: false # Force Rebuild 2025-11-28-17-27
---

## 🎵 VibeBlast - AI Music Generator

A cutting-edge text-to-music generation application with a "uniquely uncommon wildfire but harnessed into sharp black modern" aesthetic. Built with React, Vite, and powered by Hugging Face's MusicGen AI model.

## ✨ Features

### 🎨 Core Functionality

- **Text-to-Music Generation**: Transform text prompts into audio using state-of-the-art AI
- **Real-time Audio Visualizer**: Watch your music come alive with dynamic waveform visualization
- **Persistent Song Library**: All generated tracks are saved locally and survive page refreshes
- **8 Pre-built Vibe Suggestions**: Quick-start generation with curated prompts (Cyberpunk, Lo-Fi, Phonk, Synthwave, etc.)
- **User Audio Uploads**: Upload your own audio samples to help improve the AI
- **Interaction Tracking**: Play, like, and regenerate actions are logged for continuous learning

### 🎭 Design & UX

- **Wildfire Theme**: Animated wildfire background with a sharp black modern aesthetic
- **Glass-morphism UI**: Premium translucent cards with backdrop blur effects
- **Neon-red Accents**: Eye-catching accent color throughout the interface
- **Progress Indicators**: Visual feedback during music generation
- **Share Functionality**: One-click sharing of prompts via URL
- **Accessibility**: ARIA labels, focus outlines, and keyboard navigation support

### 🚀 Technical Highlights

- **React 18** with Hooks and Context API for state management
- **Vite** for lightning-fast development and optimized builds
- **Web Audio API** for real-time audio analysis and visualization
- **Express.js Backend** for file uploads and interaction logging
- **localStorage** for client-side persistence
- **Responsive Design** with a mobile-first approach

## 📦 Project Structure

```plaintext
VibeBlastApp/
├── src/
│   ├── components/
│   │   ├── AudioVisualizer.jsx       # Real-time waveform visualizer
│   │   ├── CreateView.jsx            # Main music generation interface
│   │   ├── Header.jsx                # Top navigation bar
│   │   ├── LibraryView.jsx           # Rich library interface
│   │   ├── OverlayWrapper.jsx        # Wildfire background wrapper
│   │   ├── Player.jsx                # Audio playback controls
│   │   ├── ProgressBar.jsx           # Generation progress indicator
│   │   ├── ShareButton.jsx           # URL sharing functionality
│   │   ├── Sidebar.jsx               # Navigation sidebar
│   │   ├── SongGrid.jsx              # Song list display
│   │   ├── UserUpload.jsx            # Audio sample upload
│   │   ├── VisualizerCard.jsx        # Combined card with visualizer
│   │   └── WildfireBackground.jsx    # Animated wildfire effect
│   ├── context/
│   │   └── GeneratedSongsContext.jsx # Global state for generated songs
│   ├── hooks/
│   │   ├── useAudioAnalyser.js       # Web Audio API hook
│   │   └── useInteractionTracker.js  # Interaction logging hook
│   ├── utils/
│   │   └── musicGen.js               # Hugging Face API integration
│   ├── index.css                     # Global styles with wildfire theme
│   ├── main.jsx                      # App entry point
│   └── App.jsx                       # Main app component
├── server/
│   ├── index.js                      # Express backend server
│   ├── package.json                  # Backend dependencies
│   └── uploads/                      # User-uploaded audio storage
├── public/                           # Static assets
├── .env.example                      # Environment variable template
├── package.json                      # Frontend dependencies
└── vite.config.js                    # Vite configuration
```

## 🛠️ Setup Instructions

### Prerequisites

- Node.js 16+ and npm
- A Hugging Face account and API token ([get one here](https://huggingface.co/settings/tokens))

### Installation

1. **Clone the repository** (or use your existing project):

   ```bash
   cd /path/to/VibeBlastApp
   ```

2. **Install frontend dependencies**:

   ```bash
   npm install
   ```
3. **Install backend dependencies**:

   ```bash
   cd server
   npm install
   cd ..
   ```

4. **Configure environment variables**:

   ```bash
   # Copy the example file
   cp .env.example .env

   # Edit .env and add your Hugging Face token:
   # VITE_HF_TOKEN=hf_YOUR_TOKEN_HERE
   ```

### Running the Application

You need to run **both** the frontend and backend servers:

#### Terminal 1: Frontend (Vite)

```bash
npm run dev
# Runs on http://localhost:5174
```

#### Terminal 2: Backend (Express)

```bash
cd server
npm run dev
# Runs on http://localhost:4000
```

Then open your browser and navigate to **http://localhost:5174**.

## 🎮 Usage Guide

### Generating Music

1. Click **"Create"** in the sidebar
2. Either:
   - Type a custom prompt describing your desired music, OR
   - Click one of the 8 vibe suggestion chips
3. (Optional) Toggle **Instrumental** or **Auto-Lyrics** settings
4. Click **"Generate"** and wait ~30-60 seconds
5. Your generated track appears below with:
   - Audio visualizer showing the real-time waveform
   - Play & Log button (sends interaction to backend)
   - Download button for the WAV file
   - Share button to copy a URL with the prompt

### Uploading Audio Samples

1. Scroll to the bottom of the **Create** view
2. Click **"Choose File"** in the **"Upload Your Own Sample"** section
3. Select an audio file (WAV, MP3, etc.)
4. Click **"Upload"**
5. Your file is sent to the backend (`server/uploads/`) for future model training

### Viewing Your Library

- Generated songs persist across page refreshes (stored in `localStorage`)
- Click **"Library"** in the sidebar to view playlists, liked songs, and albums
- All cards use glass-morphism styling for a premium feel

## 🧪 API Integration

### Hugging Face MusicGen

The app uses the **facebook/musicgen-medium** model via the Hugging Face Inference API:

- **Endpoint**: `https://api-inference.huggingface.co/models/facebook/musicgen-medium`
- **Method**: POST
- **Headers**: `Authorization: Bearer YOUR_TOKEN`
- **Payload**: `{ "inputs": "your prompt here" }`
- **Response**: Audio blob (WAV format)

See `src/utils/musicGen.js` for the implementation.

### Backend Endpoints

#### `POST /api/upload`

- **Purpose**: Accept user-uploaded audio files
- **Body**: `multipart/form-data` with `audio` field
- **Response**: `{ "message": "Upload successful", "filename": "..." }`

#### `POST /api/track`

- **Purpose**: Log user interactions (play, like, regenerate)
- **Body**: `{ "songId": "...", "action": "play", "timestamp": 1234567890 }`
- **Response**: `{ "status": "logged" }`

## 🎨 Design System

### Color Palette

- **Background**: Sharp black (`#0a0a0a`, `#141414`)
- **Accent**: Neon red (`#ff0055`, `#ff1744`)
- **Fire Gradient**: Orange to red (`#ff6b35` → `#ff0055`)
- **Text**: White with varying opacity

### Key CSS Variables

```css
--color-black: #0a0a0a
--color-black-dark: #050505
--color-neon-red: #ff0055
--color-neon-orange: #ff6b35
--color-fire-start: #ff6b35
--color-fire-end: #ff0055
```

### Animations

- **Wildfire**: 20s infinite background animation
- **Progress**: Linear progress bar during generation
- **Hover**: Scale transforms and color transitions
- **Spin**: Loading spinner for async operations

## 🚀 Future Enhancements

### Roadmap

- [ ] Real model fine-tuning pipeline integration
- [ ] User authentication & cloud storage
- [ ] PWA capabilities (offline support,
installable)
- [ ] Analytics dashboard for upload/play metrics
- [ ] Advanced audio processing (remix, extend, variations)
- [ ] Real-time collaborative generation
- [ ] Genre-specific model selection
- [ ] Lyric generation and display

### Model Improvement Strategy

The app is designed for **continuous learning**:

1. **Data Collection**: User uploads and interactions are logged
2. **Aggregation**: Backend stores audio files and usage metrics
3. **Fine-tuning**: Periodically retrain the model on collected data
4. **Deployment**: Update the inference endpoint with the improved model

See the placeholder script at `scripts/fineTuneModel.js` for integration points.

## 📝 Environment Variables

| Variable | Description | Example |
|----------|-------------|---------|
| `VITE_HF_TOKEN` | Hugging Face API token | `hf_abcdef123456...` |

## 🤝 Contributing

This is a personal project, but contributions are welcome! Some areas that could use help:

- **Performance**: Optimize large audio file handling
- **UI/UX**: Additional visualizer modes (spectrum, bars, etc.)
- **Backend**: Switch from file storage to a database (PostgreSQL, MongoDB)
- **Testing**: Unit tests for components and hooks
- **Documentation**: More detailed API docs and code comments

## 📄 License

This project is for educational and demonstration purposes.

## 🙏 Credits

- **MusicGen Model**: Meta AI / Hugging Face
- **UI Inspiration**: Modern cyberpunk and glassmorphism design trends
- **Framework**: React team, Vite team

---

**Built with 🔥 by the VibeBlast team**

*"Wildfire aesthetics meet cutting-edge AI music generation"*

## 🚀 Deployment to Hugging Face Spaces

This project is configured for deployment to Hugging Face Spaces using Docker.

1. **Create a new Space**: Select "Docker" as the SDK.
2. **Repository**: Connect this repository or push code to the Space's repo.
3. **Environment Variables**: Go to "Settings" -> "Variables and secrets" and add:
   - `VITE_HF_TOKEN`: Your Hugging Face User Access Token (Secret).
4. **Port**: The Dockerfile automatically uses port `7860` as required by Spaces.

The Docker build process will:

- Build the React frontend
- Start the Express backend
- Serve the frontend via the backend on port 7860
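As a reference for contributors, the MusicGen call described in the API Integration section above can be sketched as below. This is a minimal illustration, not the actual contents of `src/utils/musicGen.js`; the function names (`buildMusicGenRequest`, `generateMusic`) are assumptions, and in the app the token would come from `import.meta.env.VITE_HF_TOKEN`.

```javascript
// Hypothetical sketch of the Hugging Face Inference API call.
const HF_ENDPOINT =
  "https://api-inference.huggingface.co/models/facebook/musicgen-medium";

// Build the fetch options for the Inference API (pure function, easy to test).
function buildMusicGenRequest(prompt, token) {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ inputs: prompt }),
  };
}

// Perform the call and return an object URL for the returned WAV blob,
// suitable for use as an <audio> element's src (browser environment).
async function generateMusic(prompt, token) {
  const res = await fetch(HF_ENDPOINT, buildMusicGenRequest(prompt, token));
  if (!res.ok) throw new Error(`MusicGen request failed: ${res.status}`);
  const blob = await res.blob(); // audio/wav
  return URL.createObjectURL(blob);
}
```

Keeping the request builder separate from the network call makes the payload shape unit-testable without hitting the API.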
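The `POST /api/track` interaction logging can be sketched from the client side in the same spirit. The names here (`buildTrackEvent`, `logInteraction`) are hypothetical; the real `useInteractionTracker.js` hook may be structured differently.

```javascript
// Hypothetical sketch of interaction logging against POST /api/track.
const TRACK_URL = "http://localhost:4000/api/track"; // backend dev server

// Build the event payload documented under "Backend Endpoints".
function buildTrackEvent(songId, action) {
  const allowed = ["play", "like", "regenerate"];
  if (!allowed.includes(action)) {
    throw new Error(`Unknown action: ${action}`);
  }
  return { songId, action, timestamp: Date.now() };
}

// Send the event; the backend responds with { "status": "logged" }.
async function logInteraction(songId, action) {
  const res = await fetch(TRACK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildTrackEvent(songId, action)),
  });
  return res.json();
}
```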
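Finally, the `localStorage` persistence that keeps the song library across page refreshes can be sketched as follows. The storage key and song shape are assumptions; the actual logic lives in `GeneratedSongsContext.jsx` and may differ.

```javascript
// Hedged sketch of client-side persistence for generated songs.
const STORAGE_KEY = "vibeblast.songs"; // assumed key, not confirmed by the app

// Read the saved song list; tolerate a missing or corrupted entry.
function loadSongs(storage = globalThis.localStorage) {
  try {
    return JSON.parse(storage.getItem(STORAGE_KEY)) ?? [];
  } catch {
    return [];
  }
}

// Persist the full song list as JSON.
function saveSongs(songs, storage = globalThis.localStorage) {
  storage.setItem(STORAGE_KEY, JSON.stringify(songs));
}
```

Taking the storage object as a parameter (defaulting to the browser's `localStorage`) lets the functions be tested with a simple in-memory stub.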