Nikita Makarov committed · 0df1705 · Parent: 61f40a7

works cool
- README.md +10 -9
- requirements.txt +0 -3
- src/app.py +87 -30
- src/config.py +0 -3
- src/radio_agent.py +0 -20
- src/rag_system.py +0 -20
- src/test_app.py +12 -12
- src/user_memory.py +26 -0
README.md CHANGED

@@ -15,7 +15,8 @@ tags:
 - ai
 - radio
 - music
-- …
+- nebius
+- gpt-oss-120b
 - elevenlabs
 - llamaindex
 - rag

@@ -72,7 +73,7 @@ ai_radio/
 - Perfect short-form content between segments

 ### 🎤 AI Radio Host
-- Charismatic AI host powered by …
+- Charismatic AI host powered by Nebius GPT-OSS-120B
 - Personalized greetings and interactions
 - Smooth transitions between segments

@@ -102,7 +103,7 @@ This app fulfills all requirements for **Track 2: MCP in Action**:
 ## 🛠️ Technology Stack

 - **Gradio**: Interactive web interface
-- **…
+- **Nebius GPT-OSS-120B** (OpenAI-compatible): LLM for content generation, host commentary, and reasoning
 - **ElevenLabs**: High-quality text-to-speech for voice generation
 - **LlamaIndex**: RAG system for personalized recommendations and user preference management
 - **MCP (Model Context Protocol)**: Structured tool servers for modular functionality

@@ -114,7 +115,7 @@ This app fulfills all requirements for **Track 2: MCP in Action**:
 ### Prerequisites

 - Python 3.9+
-- …
+- Nebius API key (for GPT-OSS-120B LLM)
 - ElevenLabs API key (provided)

 ### Installation

@@ -130,9 +131,9 @@ cd ai_radio
 pip install -r requirements.txt
 ```

-3. Set up your …
+3. Set up your Nebius API key in `config.py`:
 ```python
-…
+nebius_api_key: str = "your-nebius-api-key-here"
 ```

 4. Run the app:

@@ -191,7 +192,7 @@ The **RadioAgent** is an autonomous AI agent that:

 1. **Plans**: Analyzes user preferences and creates a balanced show plan with music, news, podcasts, and stories
 2. **Reasons**: Uses the RAG system to make intelligent decisions about content selection
-3. **Executes**: Generates content using …
+3. **Executes**: Generates content using Nebius GPT-OSS-120B LLM and delivers it via ElevenLabs TTS

 ### MCP Server Architecture

@@ -226,7 +227,7 @@ The agent uses intelligent planning to create a balanced show:

 1. **Analyze preferences**: Load user preferences from RAG system
 2. **Calculate distribution**: Determine segment ratios (50% music, 20% news, 20% podcasts, 10% stories)
 3. **Generate segments**: Use MCP servers to fetch content for each segment
-4. **Add personality**: Generate host commentary using …
+4. **Add personality**: Generate host commentary using Nebius GPT-OSS-120B LLM
 5. **Execute**: Convert text to speech and play audio

 ## 🔊 Audio Generation

@@ -261,7 +262,7 @@ MIT License - feel free to use and modify as needed.
 ## 🙏 Acknowledgments

 - **MCP Team** for the amazing protocol and competition
-- **…
+- **Nebius** for GPT-OSS-120B API
 - **ElevenLabs** for text-to-speech technology
 - **LlamaIndex** for RAG capabilities
 - **Gradio** for the beautiful UI framework
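The "Calculate distribution" step in the README above can be sketched as plain Python. This is a minimal illustration of the 50/20/20/10 split, not the app's actual planner; the function name and rounding strategy are assumptions.

```python
# Sketch of the segment-distribution step (50% music, 20% news,
# 20% podcasts, 10% stories). Illustrative only.
def plan_segments(total: int) -> list:
    ratios = {"music": 0.5, "news": 0.2, "podcast": 0.2, "story": 0.1}
    # Round each bucket down, then hand any remainder to music,
    # the largest bucket, so the plan always has `total` segments.
    counts = {kind: int(total * share) for kind, share in ratios.items()}
    counts["music"] += total - sum(counts.values())
    plan = []
    for kind, n in counts.items():
        plan.extend([kind] * n)
    return plan

print(plan_segments(10))  # 5 music, 2 news, 2 podcast, 1 story
```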
requirements.txt CHANGED

@@ -1,11 +1,8 @@
 gradio==4.44.0
 openai>=1.0.0
-# google-generativeai>=0.5.2,<0.6.0  # Commented out - using Nebius instead
 elevenlabs==1.10.0
 llama-index==0.11.20
 llama-index-llms-openai>=0.1.5
-# llama-index-llms-gemini==0.3.4  # Commented out - using Nebius instead
-# llama-index-embeddings-gemini==0.2.2  # Commented out - using Nebius instead
 requests==2.32.3
 python-dotenv==1.0.1
 pydantic==2.9.2
src/app.py CHANGED

@@ -577,10 +577,20 @@ def play_next_segment():
     return segment_info, host_audio_file, music_player_html, progress, get_now_playing(segment), display_script
 
 def stop_radio():
-    """Stop the radio stream"""
+    """Stop the radio stream - clears audio and music player"""
     radio_state["stop_flag"] = True
     radio_state["is_playing"] = False
-    …
+    radio_state["planned_show"] = []
+    radio_state["current_segment_index"] = 0
+
+    # Return status, clear audio, clear music player, reset progress, reset now playing
+    return (
+        "⏹️ Radio stopped.",
+        None,  # Clear audio
+        "",  # Clear music player HTML
+        "Stopped",
+        "📻 Ready to start"
+    )
 
 def format_segment_info(segment: Dict[str, Any]) -> str:
     """Format segment information for display"""

@@ -901,9 +911,7 @@ Your personal passphrase is:
 
 🔐 **{passphrase}**
 
-⚠️ **Save this passphrase!** You'll need it to access your account from other devices or if cookies are cleared.
-
-Your passphrase is stored in a cookie for automatic login on this browser."""
+⚠️ **Save this passphrase!** You'll need it to access your account from other devices or if cookies are cleared."""
 
     return message, user_id, passphrase

@@ -973,16 +981,36 @@ def get_user_stats_display() -> str:
     """
 
 def get_stats():
-    """Get listening statistics"""
-    …
+    """Get listening statistics for the current user"""
+    user_id = radio_state.get("user_id")
+
+    if not user_id:
+        return """📊 **Your Listening Stats**
+
+*Log in or create an account to track your listening history!*
+"""
+
+    # Get user-specific stats from user_memory
+    user_stats = user_memory.get_user_stats(user_id)
+    liked_tracks = user_memory.get_liked_tracks(user_id)
+    disliked_tracks = user_memory.get_disliked_tracks(user_id)
+    play_history = user_memory.get_play_history(user_id)
+
+    # Count by type from play history
+    music_count = sum(1 for t in play_history if t.get('source') in ['youtube', 'soundcloud'])
 
     return f"""📊 **Your Listening Stats**
 
-…
-…
-…
-…
-…
+👤 **User ID:** {user_id}
+
+🎵 **Total Tracks Played:** {user_stats.get('total_plays', 0)}
+👍 **Liked Tracks:** {len(liked_tracks)}
+👎 **Disliked Tracks:** {len(disliked_tracks)}
+
+📜 **Recent History:** {len(play_history)} tracks
+
+*Your liked tracks have a 30% chance of playing during music segments!*
+*Disliked tracks will be skipped automatically.*
 """
 
 # Custom CSS for beautiful radio station UI

@@ -1090,9 +1118,9 @@ with gr.Blocks(css=custom_css, title="AI Radio 🎵", theme=gr.themes.Soft()) as
     gr.Markdown("---")
     gr.Markdown("""
     **💡 How it works:**
-    - …
+    - If you are here for the first time, create a new account
     - Use the same passphrase on any device to access your account
-    - …
+    - Just enter your passphrase to log back in
     """)
 
     # Tab 1: Preferences

@@ -1202,7 +1230,7 @@ with gr.Blocks(css=custom_css, title="AI Radio 🎵", theme=gr.themes.Soft()) as
     stop_btn.click(
         fn=stop_radio,
         inputs=[],
-        outputs=[status_text]
+        outputs=[status_text, audio_output, music_player, progress_text, now_playing]
     )
 
     # Voice input button - direct click without .then() chain

@@ -1309,8 +1337,8 @@ Your passphrase: **{passphrase}**
         outputs=[],
         js="(userId, passphrase) => { if(userId && passphrase) { localStorage.setItem('ai_radio_user_id', userId); localStorage.setItem('ai_radio_passphrase', passphrase); } }"
     ).then(
-        # Go to …
-        fn=lambda uid: gr.Tabs(selected=…
+        # Go to Radio Player tab after successful login
+        fn=lambda uid: gr.Tabs(selected=2) if uid else gr.Tabs(selected=0),
         inputs=[auth_user_id],
         outputs=[tabs]
     )

@@ -1403,35 +1431,64 @@ Your passphrase: **{passphrase}**
 
 ## 🚀 Features
 
-- **🎵 Personalized Music**: Curated tracks based on your favorite genres and mood
-- **📰 Custom News**: …
+- **🎵 Personalized Music**: Curated tracks from YouTube based on your favorite genres and mood
+- **📰 Custom News**: Real-time news updates on topics you care about
 - **🎙️ Podcast Recommendations**: Discover interesting podcasts matching your interests
 - **📚 AI-Generated Stories**: Entertaining stories and fun facts
-- **🎤 AI Host**: Dynamic AI radio host that introduces segments
+- **🎤 AI Host**: Dynamic AI radio host (Lera) that introduces segments
 - **💾 Smart Recommendations**: RAG system learns from your listening history
+- **🔐 User Accounts**: Passphrase-based authentication with saved preferences
+
+## 🧠 How It Works
+
+### MCP Servers (Modular Tools)
+
+Three specialized **MCP (Model Context Protocol)** servers act as tools:
+- **Music Server**: Searches YouTube for music tracks matching your preferences
+- **News Server**: Fetches real-time news from RSS feeds
+- **Podcast Server**: Discovers podcasts on YouTube
+
+### LLM (Large Language Model)
+
+**Nebius GPT-OSS-120B** generates all text content:
+- Personalized host commentary and introductions
+- Conversational news scripts
+- Entertaining stories and fun facts
+- All adapted to your mood and preferences
+
+### RAG System (Retrieval-Augmented Generation)
+
+**LlamaIndex-powered RAG** provides context-aware personalization:
+- Stores your preferences and listening history
+- Retrieves context for better recommendations
+- Learns from your behavior to improve suggestions over time
 
 ## 🛠️ Technology Stack
 
 - **Gradio**: Beautiful, interactive UI
-- **Nebius GPT-OSS-120B (OpenAI-compatible)…
-- **…
+- **Nebius GPT-OSS-120B** (OpenAI-compatible): LLM for content generation
+- **ElevenLabs**: High-quality text-to-speech for voice generation
 - **LlamaIndex**: RAG system for personalized recommendations
 - **MCP Servers**: Modular tools for music, news, and podcasts
+- **yt-dlp**: YouTube music and podcast search
+- **YouTube/SoundCloud**: Music and podcast streaming
 
 ## 🎂 Built for MCP 1st Birthday Competition
 
 This app demonstrates:
-- ✅ Autonomous …
-- ✅ MCP …
-- ✅ …
-- ✅ …
+- ✅ **Autonomous Agent Behavior**: Planning, reasoning, and execution
+- ✅ **MCP Servers as Tools**: Modular music, news, and podcast servers
+- ✅ **RAG System**: Context-aware personalization with LlamaIndex
+- ✅ **LLM Integration**: Content generation with Nebius GPT-OSS-120B
+- ✅ **Gradio Interface**: Seamless user experience
 
 ## 📖 How to Use
 
-1. **…
-2. **…
-3. **…
-4. **…
+1. **Create Account**: Get your unique passphrase (saved in browser)
+2. **Set Preferences**: Choose genres, interests, and mood
+3. **Start Radio**: Click "Generate & Play" to begin your personalized show
+4. **Interact**: Like/dislike tracks, request songs by voice
+5. **Track Stats**: View your listening history and statistics
 
 ---
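The `stop_btn.click` change in src/app.py works because Gradio maps a handler's return tuple positionally onto its `outputs` list, so the two must have the same length. A stdlib-only sketch of that contract, with component names taken from the diff but the wiring simplified:

```python
# Sketch of the outputs contract behind the stop_btn.click fix.
# Gradio itself is not imported; the dict/zip below only illustrates
# the positional mapping Gradio performs internally.
def stop_radio():
    # Mirrors the new stop_radio: status text, cleared audio, cleared
    # music-player HTML, reset progress text, reset now-playing text.
    return ("⏹️ Radio stopped.", None, "", "Stopped", "📻 Ready to start")

outputs = ["status_text", "audio_output", "music_player", "progress_text", "now_playing"]
values = stop_radio()
assert len(values) == len(outputs)  # one return value per output component

updates = dict(zip(outputs, values))
print(updates["status_text"])
```

Before this commit the handler returned five values into a one-element `outputs` list; widening `outputs` restores the one-to-one mapping.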
src/config.py CHANGED

@@ -7,7 +7,6 @@ class RadioConfig(BaseModel):
 
     # API Keys
     elevenlabs_api_key: str = "sk_2dde999f3cedf21dff7ba4671ce27f292e48ea37d30c5e4a"
-    google_api_key: str = "AIzaSyB5F9P0oDZ6fgW8GgADfwnwcg-GkHrdo74"
     llamaindex_api_key: str = "llx-WRsj0iehk2ZlSlNIenOLyyhO9X1yFT4CmJXpl0qk6hapFi01"
     nebius_api_key: str = "v1.CmQKHHN0YXRpY2tleS1lMDB0eTkxeTdwY3lxNDk5OWcSIXNlcnZpY2VhY2NvdW50LWUwMGowemtmZWpqc2E3ZHF3aDIMCKb4oskGENS9j8MBOgwIpfu6lAcQgOqAhwNAAloDZTAw.AAAAAAAAAAGEI_L5sJCQ7XR93nSzvXCPO-J3-gHjqPiRqrvkrMLeDtd-70zGWB1-c8yovnX-q7yEc1dHOnA2L8FUa3Le6X8D"
     # Nebius OpenAI-compatible endpoint and model

@@ -40,8 +39,6 @@ def get_config() -> RadioConfig:
     # Override with environment variables if available
     if api_key := os.getenv("ELEVENLABS_API_KEY"):
         config.elevenlabs_api_key = api_key
-    if api_key := os.getenv("GOOGLE_API_KEY"):
-        config.google_api_key = api_key
     if api_key := os.getenv("LLAMAINDEX_API_KEY"):
         config.llamaindex_api_key = api_key
     if api_key := os.getenv("NEBIUS_API_KEY"):
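The `get_config` pattern in src/config.py (defaults on the model, optionally overridden by environment variables via the walrus operator) can be sketched without pydantic. Field names follow the diff; the default values here are placeholders, not the real keys:

```python
# Dependency-free sketch of the config-override pattern from src/config.py.
import os

class RadioConfig:
    def __init__(self):
        self.elevenlabs_api_key = "default-elevenlabs-key"
        self.nebius_api_key = "default-nebius-key"

def get_config() -> RadioConfig:
    config = RadioConfig()
    # Override with environment variables if available
    if api_key := os.getenv("ELEVENLABS_API_KEY"):
        config.elevenlabs_api_key = api_key
    if api_key := os.getenv("NEBIUS_API_KEY"):
        config.nebius_api_key = api_key
    return config

# Demonstrate: one variable set, one unset.
os.environ.pop("ELEVENLABS_API_KEY", None)
os.environ["NEBIUS_API_KEY"] = "from-env"
config = get_config()
print(config.nebius_api_key)      # the env value wins
print(config.elevenlabs_api_key)  # unset, so the default survives
```

The walrus operator keeps each override to one `if`: `os.getenv` returns `None` or `""` for missing variables, both falsy, so empty overrides are skipped too.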
src/radio_agent.py CHANGED

@@ -6,7 +6,6 @@ import os
 from typing import Dict, Any, List, Generator
 from datetime import datetime
 from openai import OpenAI
-# import google.generativeai as genai  # Commented out - using Nebius/OpenAI instead
 
 from mcp_servers.music_server import MusicMCPServer
 from mcp_servers.news_server import NewsMCPServer

@@ -50,17 +49,6 @@ class RadioAgent:
             print("Radio agent will work in fallback mode with limited LLM features")
             self.client = None
 
-        # # Initialize Gemini LLM with error handling (COMMENTED OUT - for future use)
-        # self.model = None
-        # if config.google_api_key:
-        #     try:
-        #         genai.configure(api_key=config.google_api_key)
-        #         self.model = genai.GenerativeModel('gemini-1.0-pro')
-        #     except Exception as e:
-        #         print(f"Warning: Could not initialize Gemini LLM: {e}")
-        #         print("Radio agent will work in fallback mode with limited LLM features")
-        #         self.model = None
-
         # Initialize MCP Servers
         self.music_server = MusicMCPServer()
         self.news_server = NewsMCPServer()

@@ -238,14 +226,6 @@ class RadioAgent:
         except Exception as e:
             print(f"Error generating intro: {e}")
 
-        # # Gemini fallback (COMMENTED OUT - for future use)
-        # if self.model:
-        #     try:
-        #         response = self.model.generate_content(prompt)
-        #         return response.text
-        #     except Exception as e:
-        #         print(f"Error generating intro: {e}")
-
         # Fallback intro
         return f"Good {time_of_day}, {name}! Welcome to your personal AI Radio station. We've got an amazing show lined up for you today! Did you know that music can actually boost your productivity? That's right! So sit back, relax, and let's get this party started!"
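radio_agent.py keeps a `self.client = None` fallback so the show still runs when the LLM cannot be initialized, and the hard-coded intro string above is the last resort. A dependency-free sketch of that shape; the `Host` class, `client_factory`, and intro text here are illustrative, not the app's real code:

```python
# Sketch of the graceful-degradation pattern from RadioAgent.__init__
# and the intro generator: a failed client init downgrades to a
# hard-coded fallback instead of crashing.
class Host:
    def __init__(self, client_factory):
        try:
            self.client = client_factory()
        except Exception as e:
            print(f"Warning: Could not initialize LLM client: {e}")
            print("Radio agent will work in fallback mode with limited LLM features")
            self.client = None

    def intro(self, name: str) -> str:
        if self.client is not None:
            return self.client.generate(f"intro for {name}")
        # Fallback intro, mirroring the hard-coded string in the diff
        return f"Good morning, {name}! Welcome to your personal AI Radio station."

def broken_factory():
    raise RuntimeError("no API key")

host = Host(broken_factory)
print(host.intro("Alex"))
```

Every call site then only has to check `self.client` (or simply call `intro`), which is why the commit could delete the commented-out Gemini branches without touching the fallback path.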
src/rag_system.py CHANGED

@@ -10,8 +10,6 @@ from llama_index.llms.openai import OpenAI as LlamaOpenAI
 
 # Get project root directory (parent of src/)
 PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-# from llama_index.embeddings.gemini import GeminiEmbedding  # Commented out - using Nebius instead
-# from llama_index.llms.gemini import Gemini  # Commented out - using Nebius instead
 
 class RadioRAGSystem:
     """RAG system for storing and retrieving user preferences and listening history"""

@@ -43,24 +41,6 @@ class RadioRAGSystem:
         # Embeddings can be added later if needed
         self.embedding_available = False  # Disabled for now - can add OpenAI embeddings later
 
-        # # Configure LlamaIndex settings with Gemini (COMMENTED OUT - for future use)
-        # if google_api_key:
-        #     try:
-        #         Settings.llm = Gemini(api_key=google_api_key, model="models/gemini-1.0-pro")
-        #         self.llm_available = True
-        #     except Exception as e:
-        #         print(f"Warning: Could not initialize Gemini LLM: {e}")
-        #         print("RAG system will work in fallback mode without LLM features")
-        #         self.llm_available = False
-        #
-        #     try:
-        #         Settings.embed_model = GeminiEmbedding(api_key=google_api_key, model_name="models/embedding-001")
-        #         self.embedding_available = True
-        #     except Exception as e:
-        #         print(f"Warning: Could not initialize Gemini Embeddings: {e}")
-        #         print("RAG system will work in fallback mode without embeddings")
-        #         self.embedding_available = False
-
         # Initialize vector store
         self.vector_store = SimpleVectorStore()
         self.storage_context = StorageContext.from_defaults(vector_store=self.vector_store)
src/test_app.py CHANGED

@@ -13,10 +13,10 @@ def test_imports():
         return False
 
     try:
-        …
-        print(" ✅ …
+        from openai import OpenAI
+        print(" ✅ OpenAI (Nebius) imported")
     except ImportError as e:
-        print(f" ❌ …
+        print(f" ❌ OpenAI import failed: {e}")
         return False
 
     try:

@@ -56,11 +56,11 @@ def test_config():
     else:
         print(" ⚠️ ElevenLabs API key missing")
 
-    if config.…
-        print(" ✅ …
+    if config.nebius_api_key:
+        print(" ✅ Nebius API key present")
     else:
-        print(" ⚠️ …
-        print("    Please add your …
+        print(" ⚠️ Nebius API key missing (REQUIRED!)")
+        print("    Please add your Nebius API key to config.py")
 
     if config.llamaindex_api_key:
         print(" ✅ LlamaIndex API key present")

@@ -126,7 +126,7 @@ def test_rag_system():
         from config import get_config
 
         config = get_config()
-        rag = RadioRAGSystem(config.…
+        rag = RadioRAGSystem(config.nebius_api_key, config.nebius_api_base, config.nebius_model)
         print(" ✅ RAG system initialized")
 
         # Test storing preferences

@@ -285,15 +285,15 @@ def main():
     if passed == total:
         print("\n🎉 All tests passed! Your AI Radio is ready to launch! 🚀")
         print("\nNext steps:")
-        print(" 1. Make sure you've added your …
-        print(" 2. Run: python …
-        print(" 3. Open: http://localhost:…
+        print(" 1. Make sure you've added your Nebius API key to config.py")
+        print(" 2. Run: python run.py")
+        print(" 3. Open: http://localhost:7867")
         return 0
     else:
         print("\n⚠️ Some tests failed. Please fix the issues above.")
         print("\nCommon fixes:")
         print(" - Run: pip install -r requirements.txt")
-        print(" - Add your …
+        print(" - Add your Nebius API key to config.py")
         print(" - Check that all files are in the correct location")
         return 1
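The `test_imports` pattern used in src/test_app.py can be sketched standalone: each dependency import is wrapped so one missing package reports cleanly instead of crashing the whole test run. The helper below is an illustration, with `json` standing in for the real dependencies:

```python
# Sketch of the per-dependency import check used by test_imports.
def check_import(module_name: str) -> bool:
    try:
        __import__(module_name)
        print(f" ✅ {module_name} imported")
        return True
    except ImportError as e:
        print(f" ❌ {module_name} import failed: {e}")
        return False

print(check_import("json"))                    # stdlib, always present
print(check_import("not_a_real_module_xyz"))   # reports failure, keeps going
```

Returning a boolean instead of raising lets `main()` tally `passed` versus `total` and print the "Common fixes" hints only when something failed.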
src/user_memory.py CHANGED

@@ -270,6 +270,32 @@ class UserMemoryService:
             return []
         return self.users[user_id].get("liked_tracks", [])
 
+    def get_disliked_tracks(self, user_id: str) -> List[str]:
+        """Get user's disliked track identifiers
+
+        Args:
+            user_id: User ID
+
+        Returns:
+            List of disliked track identifiers (title|artist)
+        """
+        if user_id not in self.users:
+            return []
+        return self.users[user_id].get("disliked_tracks", [])
+
+    def get_play_history(self, user_id: str) -> List[Dict[str, Any]]:
+        """Get user's play history
+
+        Args:
+            user_id: User ID
+
+        Returns:
+            List of played track dicts
+        """
+        if user_id not in self.users:
+            return []
+        return self.users[user_id].get("play_history", [])
+
     def get_random_liked_track(self, user_id: str, genre: str = None) -> Optional[Dict[str, Any]]:
         """Get a random liked track, optionally filtered by genre
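The two getters added to UserMemoryService return empty lists for unknown users rather than raising, which is what lets get_stats in src/app.py call them without guards. A minimal stand-in showing that behavior; the in-memory `users` dict here is illustrative test data, not the service's real storage:

```python
# Stand-in for UserMemoryService with the two getters added in this commit.
from typing import Any, Dict, List

class UserMemoryService:
    def __init__(self):
        self.users: Dict[str, Dict[str, Any]] = {
            "u1": {
                "disliked_tracks": ["Song A|Artist A"],
                "play_history": [{"title": "Song B", "source": "youtube"}],
            }
        }

    def get_disliked_tracks(self, user_id: str) -> List[str]:
        if user_id not in self.users:
            return []
        return self.users[user_id].get("disliked_tracks", [])

    def get_play_history(self, user_id: str) -> List[Dict[str, Any]]:
        if user_id not in self.users:
            return []
        return self.users[user_id].get("play_history", [])

mem = UserMemoryService()
print(mem.get_disliked_tracks("u1"))   # ['Song A|Artist A']
print(mem.get_play_history("nobody"))  # [] for an unknown user
```

The double fallback (`not in self.users` check plus `.get(..., [])`) also covers existing users whose records predate these fields.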