kedar-bhumkar committed on
Commit 43ceeff · verified · 1 Parent(s): 94194db

Upload 4 files

Files changed (4):
  1. README.md +99 -13
  2. app.py +785 -0
  3. backend.py +425 -0
  4. requirements.txt +5 -0
README.md CHANGED
@@ -1,13 +1,99 @@
- ---
- title: Virtual Interviewer
- emoji: 🐨
- colorFrom: blue
- colorTo: red
- sdk: streamlit
- sdk_version: 1.42.2
- app_file: app.py
- pinned: false
- short_description: A virtual interviewer powered by GPT 4o
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ ---
+ title: Virtual Interviewer
+ emoji: 🎯
+ colorFrom: indigo
+ colorTo: purple
+ sdk: streamlit
+ sdk_version: 1.28.0
+ app_file: app.py
+ pinned: false
+ license: mit
+ ---
+
+ # Virtual Interviewer
+
+ [![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-sm.svg)](https://huggingface.co/spaces/username/virtual-interviewer)
+ [![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?logo=github)](https://github.com/username/virtual-interviewer)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ [![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
+ [![OpenAI](https://img.shields.io/badge/OpenAI-GPT--4o-green.svg)](https://openai.com/)
+
+ A Streamlit application that simulates job interviews using OpenAI's GPT-4o model. This tool helps users practice for interviews by generating relevant questions based on job descriptions and providing feedback on their answers.
+
+ ![Virtual Interviewer Demo](https://your-image-url-here.png)
+
+ ## 🌟 Features
+
+ - 💼 Generate interview questions based on job descriptions
+ - 🎭 Support for both technical and non-technical interviews
+ - 📊 Adjustable difficulty levels (Easy, Medium, Hard)
+ - 🔍 Focus on specific key topics
+ - 📝 Score interview responses with detailed feedback
+ - 💡 Generate ideal answers for comparison
+ - 🔊 Text-to-speech for questions with multiple voice options
+ - 📈 Visual score summary with performance analytics
+ - 🎨 Modern, responsive UI with intuitive controls
+
+ ## 🚀 Quick Start
+
+ 1. Enter your OpenAI API key
+ 2. Paste a job description or use the default Solution Architect example
+ 3. Configure your interview settings:
+    - Select interview type (Technical or Non-technical)
+    - Choose difficulty level (Easy, Medium, Hard)
+    - Customize key topics
+    - Enable/disable scoring and ideal answers
+    - Set the number of questions
+    - Enable text-to-speech and select a voice (optional)
+ 4. Click "Start Interview" to begin
+ 5. Answer each question and click "Next Question" to proceed
+ 6. Review your performance with the visual score summary and detailed feedback
+
+ ## 📋 Requirements
+
+ - Python 3.8+
+ - OpenAI API key
+ - Required packages (installed automatically):
+   - streamlit
+   - openai
+   - pandas
+   - edge-tts
+
+ ## 💻 Installation for Local Development
+
+ 1. Clone the repository:
+ ```bash
+ git clone https://github.com/username/virtual-interviewer.git
+ cd virtual-interviewer
+ ```
+
+ 2. Install the required dependencies:
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ 3. Run the Streamlit application:
+ ```bash
+ streamlit run app.py
+ ```
+
+ ## 🔒 Privacy and Security
+
+ - Your OpenAI API key is used only for making API calls and is not stored permanently
+ - Interview data is stored only in your browser's session
+ - All processing happens on your local machine or within the HuggingFace Space
+ - No data is sent to external servers beyond the necessary API calls to OpenAI
+
+ ## 🤝 Contributing
+
+ Contributions are welcome! Please feel free to submit a Pull Request.
+
+ ## 📄 License
+
+ This project is licensed under the MIT License - see the LICENSE file for details.
+
+ ## 🙏 Acknowledgements
+
+ - Built with [Streamlit](https://streamlit.io/)
+ - Powered by [OpenAI GPT-4o](https://openai.com/)
+ - Text-to-speech provided by [Edge-TTS](https://github.com/rany2/edge-tts)
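Note: requirements.txt is part of this commit (+5 lines) but its contents are not shown in this chunk. Based on the packages the README lists, it plausibly looks like the sketch below; the exact entries, any version pins, and the fifth line are not recoverable from this diff.

```
streamlit
openai
pandas
edge-tts
```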
app.py ADDED
@@ -0,0 +1,785 @@
+ import streamlit as st
+ import json
+ import base64
+ import os
+ import time
+ import uuid
+ from backend import VirtualInterviewer
+ import pandas as pd
+
+ # Set page configuration
+ st.set_page_config(
+     page_title="Virtual Interviewer",
+     page_icon="🎯",
+     layout="wide",
+     initial_sidebar_state="expanded"
+ )
+
+ # Default job description and key topics for Solution Architect
+ DEFAULT_JOB_DESCRIPTION = """
+ Job Title: Enterprise Solution Architect
+
+ Job Description:
+ We are seeking an experienced Enterprise Solution Architect to design and implement innovative technology solutions that address complex business challenges. The ideal candidate will have a strong background in cloud architecture, enterprise integration, and modern application development.
+
+ Responsibilities:
+ - Design scalable, secure, and resilient enterprise solutions using cloud-native technologies
+ - Create architectural blueprints and technical roadmaps aligned with business objectives
+ - Evaluate and recommend appropriate technologies and frameworks for various business needs
+ - Lead technical discussions with stakeholders and development teams
+ - Ensure solutions adhere to architectural standards, best practices, and compliance requirements
+ - Mentor junior architects and developers on architectural principles and patterns
+
+ Requirements:
+ - 8+ years of experience in IT with at least 5 years in solution architecture
+ - Strong knowledge of Azure cloud services and architecture patterns
+ - Experience with Java enterprise applications and microservices architecture
+ - Familiarity with GraphQL API design and implementation
+ - Experience integrating with Salesforce and other enterprise systems
+ - Knowledge of Generative AI technologies and their practical applications
+ - Excellent communication and presentation skills
+ - Ability to translate business requirements into technical solutions
+ """
+
+ DEFAULT_KEY_TOPICS = "Azure, GraphQL, Java, Salesforce, Generative AI, Cloud Architecture, Microservices, API Design"
+
+ # Custom CSS for styling
+ st.markdown("""
+ <style>
+ .main-header {
+     font-size: 2.5rem;
+     color: #4527A0;
+     font-weight: 700;
+     margin-bottom: 0;
+ }
+ .sub-header {
+     font-size: 1.1rem;
+     color: #5E35B1;
+     font-style: italic;
+     margin-top: 0;
+ }
+ .section-header {
+     font-size: 1.5rem;
+     color: #3949AB;
+     font-weight: 600;
+     border-bottom: 2px solid #3949AB;
+     padding-bottom: 0.3rem;
+     margin-top: 1rem;
+ }
+ .question-header {
+     font-size: 1.3rem;
+     color: #1E88E5;
+     font-weight: 600;
+ }
+ .question-number {
+     font-size: 1rem;
+     color: #0D47A1;
+     font-weight: 500;
+ }
+ .success-text {
+     color: #2E7D32;
+     font-weight: 500;
+ }
+ .warning-text {
+     color: #FF6F00;
+     font-weight: 500;
+ }
+ .score-header {
+     font-size: 1.4rem;
+     color: #00897B;
+     font-weight: 600;
+ }
+ .score-value {
+     font-size: 1.8rem;
+     color: #00695C;
+     font-weight: 700;
+ }
+ .feedback-text {
+     color: #455A64;
+     font-style: italic;
+ }
+ .ideal-answer-header {
+     color: #7B1FA2;
+     font-weight: 600;
+ }
+ .user-answer-header {
+     color: #0277BD;
+     font-weight: 600;
+ }
+ .footer {
+     text-align: center;
+     color: #78909C;
+     font-size: 0.8rem;
+     margin-top: 2rem;
+ }
+ .stButton>button {
+     background-color: #3949AB;
+     color: white;
+     font-weight: 500;
+ }
+ .stButton>button:hover {
+     background-color: #303F9F;
+     color: white;
+ }
+ .submit-button>button {
+     background-color: #00897B;
+     color: white;
+ }
+ .submit-button>button:hover {
+     background-color: #00796B;
+ }
+ .score-button>button {
+     background-color: #7B1FA2;
+     color: white;
+ }
+ .score-button>button:hover {
+     background-color: #6A1B9A;
+ }
+ .stExpander {
+     border: 1px solid #E0E0E0;
+     border-radius: 5px;
+     margin-bottom: 1rem;
+ }
+ /* Voice selection styling */
+ div[data-testid="stRadio"] > div {
+     background-color: #E8EAF6;
+     padding: 1rem;
+     border-radius: 5px;
+     margin-bottom: 1rem;
+ }
+ div[data-testid="stRadio"] label {
+     font-weight: 500;
+     color: #3949AB;
+ }
+ .audio-player {
+     margin-top: 0.5rem;
+     margin-bottom: 1rem;
+ }
+ /* Score table styling */
+ .score-table {
+     width: 100%;
+     border-collapse: collapse;
+     margin-bottom: 2rem;
+     font-size: 1.1rem;
+     box-shadow: 0 4px 8px rgba(0,0,0,0.1);
+     border-radius: 5px;
+     overflow: hidden;
+ }
+ .score-table th {
+     background-color: #3949AB;
+     color: white;
+     font-weight: 600;
+     text-align: left;
+     padding: 0.75rem 1rem;
+     border: none;
+ }
+ .score-table td {
+     padding: 0.75rem 1rem;
+     border-bottom: 1px solid #E0E0E0;
+     vertical-align: middle;
+ }
+ .score-table tr:last-child {
+     font-weight: bold;
+     background-color: #E8EAF6;
+ }
+ .score-table tr:last-child td {
+     border-top: 2px solid #3949AB;
+     border-bottom: none;
+     color: #3949AB;
+ }
+ .score-table tr:nth-child(even):not(:last-child) {
+     background-color: #F5F5F5;
+ }
+ .score-table tr:hover:not(:last-child) {
+     background-color: #EDE7F6;
+ }
+ /* Make the table responsive */
+ @media screen and (max-width: 600px) {
+     .score-table {
+         font-size: 0.9rem;
+     }
+     .score-table th, .score-table td {
+         padding: 0.5rem;
+     }
+ }
+ </style>
+ """, unsafe_allow_html=True)
+
+ # Function to create an HTML audio player
+ def get_audio_player_html(audio_path, autoplay=True, player_id=None):
+     if not audio_path or not os.path.exists(audio_path):
+         return ""
+
+     # Generate a unique ID for this audio player if not provided
+     if player_id is None:
+         player_id = str(uuid.uuid4())
+
+     # Read the audio file
+     with open(audio_path, 'rb') as f:
+         audio_bytes = f.read()
+
+     audio_base64 = base64.b64encode(audio_bytes).decode()
+
+     autoplay_attr = "autoplay" if autoplay else ""
+
+     # Create HTML with JavaScript to ensure autoplay works
+     html = f"""
+     <div class="audio-player" id="audio-container-{player_id}">
+         <audio id="audio-{player_id}" controls {autoplay_attr}>
+             <source src="data:audio/mp3;base64,{audio_base64}" type="audio/mp3">
+             Your browser does not support the audio element.
+         </audio>
+     </div>
+     <script>
+         // Force play the audio after a short delay
+         setTimeout(function() {{
+             const audioElement = document.getElementById('audio-{player_id}');
+             if (audioElement) {{
+                 audioElement.play().catch(e => console.log('Auto-play failed:', e));
+             }}
+         }}, 500);
+     </script>
+     """
+     return html
+
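The core trick in `get_audio_player_html` is embedding the audio file as a base64 `data:` URI so no separate file needs to be served. The round trip can be sketched standalone (the helper name here is illustrative, not part of the committed code):

```python
import base64

def audio_data_uri(audio_bytes: bytes, mime: str = "audio/mp3") -> str:
    # Encode raw audio bytes into a data: URI, mirroring what
    # get_audio_player_html does before embedding them in <audio>.
    encoded = base64.b64encode(audio_bytes).decode()
    return f"data:{mime};base64,{encoded}"

uri = audio_data_uri(b"abc")
# base64 of b"abc" is "YWJj"
assert uri == "data:audio/mp3;base64,YWJj"

# Decoding the payload recovers the original bytes exactly
payload = uri.split("base64,", 1)[1]
assert base64.b64decode(payload) == b"abc"
```

One caveat with this approach: the whole file is re-encoded into the page on every rerun, which is why the app caches generated audio paths and uses a fresh `player_id` only when a refresh is wanted.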
+ # Initialize session state variables if they don't exist
+ if 'interviewer' not in st.session_state:
+     st.session_state.interviewer = None
+ if 'current_question_index' not in st.session_state:
+     st.session_state.current_question_index = 0
+ if 'current_question' not in st.session_state:
+     st.session_state.current_question = ""
+ if 'conversation_history' not in st.session_state:
+     st.session_state.conversation_history = []
+ if 'interview_started' not in st.session_state:
+     st.session_state.interview_started = False
+ if 'interview_setup_done' not in st.session_state:
+     st.session_state.interview_setup_done = False
+ if 'questions_generated' not in st.session_state:
+     st.session_state.questions_generated = False
+ if 'interview_completed' not in st.session_state:
+     st.session_state.interview_completed = False
+ if 'interview_scored' not in st.session_state:
+     st.session_state.interview_scored = False
+ if 'score_results' not in st.session_state:
+     st.session_state.score_results = None
+ if 'answer_submitted' not in st.session_state:
+     st.session_state.answer_submitted = False
+ if 'current_answer' not in st.session_state:
+     st.session_state.current_answer = ""
+ if 'generate_ideal_answers' not in st.session_state:
+     st.session_state.generate_ideal_answers = True
+ if 'voice_type' not in st.session_state:
+     st.session_state.voice_type = "female_casual"
+ if 'use_tts' not in st.session_state:
+     st.session_state.use_tts = False
+ if 'current_audio_path' not in st.session_state:
+     st.session_state.current_audio_path = ""
+ if 'audio_key' not in st.session_state:
+     st.session_state.audio_key = str(uuid.uuid4())
+ if 'should_play_audio' not in st.session_state:
+     st.session_state.should_play_audio = True
+
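The long run of `if key not in st.session_state` checks above is a common Streamlit idiom; since `st.session_state` supports dict-style access, the same behavior can be expressed as one defaults loop. A minimal sketch with a plain dict standing in for the session state (the helper and constant names are hypothetical, not from the committed code):

```python
SESSION_DEFAULTS = {
    "interviewer": None,
    "current_question_index": 0,
    "current_question": "",
    "conversation_history": [],
    "interview_started": False,
    "answer_submitted": False,
}

def init_defaults(state, defaults):
    # Fill in any missing keys without clobbering existing values,
    # matching the behavior of the repeated membership checks.
    for key, value in defaults.items():
        state.setdefault(key, value)

state = {"interview_started": True}  # value set on an earlier rerun
init_defaults(state, SESSION_DEFAULTS)
assert state["interview_started"] is True      # existing value untouched
assert state["current_question_index"] == 0    # missing default filled in
```

Preserving already-set keys is the important property here: Streamlit reruns the whole script on every interaction, so defaults must only apply on the first run.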
+ # Define callback functions
+ def reset_answer_input():
+     st.session_state.answer_submitted = False
+     st.session_state.current_answer = ""
+     # We don't modify st.session_state.user_answer directly
+
+ # Function to generate audio for a question
+ def ensure_audio_for_question(question, voice_type):
+     """Ensure audio exists for the given question and return the path."""
+     if not question:
+         return ""
+
+     # Check if we already have audio for this question
+     audio_path = st.session_state.interviewer.get_question_audio_path(question)
+
+     # If no audio exists, generate it
+     if not audio_path:
+         with st.spinner("Generating audio..."):
+             audio_path = st.session_state.interviewer.generate_question_audio(question, voice_type)
+
+     # Update the current audio path in session state
+     st.session_state.current_audio_path = audio_path
+
+     # Generate a new audio key to force refresh
+     st.session_state.audio_key = str(uuid.uuid4())
+
+     return audio_path
+
+ # Function to handle replay button click
+ def replay_audio():
+     st.session_state.should_play_audio = True
+     st.session_state.audio_key = str(uuid.uuid4())
+
+ # Title and description
+ st.markdown("<h1 class='main-header'>🎯 Virtual Interviewer</h1>", unsafe_allow_html=True)
+ st.markdown("<p class='sub-header'>An AI-powered interview simulator to help you prepare for your next job interview.</p>", unsafe_allow_html=True)
+
+ # Create a two-column layout
+ left_col, right_col = st.columns([1, 1])
+
+ with left_col:
+     # Interview setup section
+     if not st.session_state.interview_setup_done:
+         st.markdown("<h2 class='section-header'>📋 Interview Setup</h2>", unsafe_allow_html=True)
+
+         # 1a. Job requirement text box
+         job_description = st.text_area(
+             "💼 Job Description",
+             value=DEFAULT_JOB_DESCRIPTION,
+             height=200
+         )
+
+         # 1b. OpenAI API Key
+         api_key = st.text_input(
+             "🔑 OpenAI API Key",
+             type="password",
+             placeholder="Enter your OpenAI API key"
+         )
+
+         # 2. Interview type dropdown
+         interview_type = st.selectbox(
+             "🎭 Interview Type",
+             options=["Technical", "Non-technical"]
+         )
+
+         # 3. Difficulty level dropdown
+         difficulty_level = st.selectbox(
+             "📊 Difficulty Level",
+             options=["Easy", "Medium", "Hard"]
+         )
+
+         # 4. Key topics text box
+         key_topics = st.text_input(
+             "🔍 Key Topics",
+             value=DEFAULT_KEY_TOPICS
+         )
+
+         # 5. Scoring option
+         enable_scoring = st.radio(
+             "📝 Enable Scoring?",
+             options=["Yes", "No"],
+             horizontal=True
+         )
+
+         # New option for generating ideal answers
+         generate_ideal_answers = st.radio(
+             "💡 Generate Ideal Answers?",
+             options=["Yes", "No"],
+             horizontal=True
+         )
+
+         # 6. Number of questions
+         num_questions = st.number_input(
+             "❓ Number of Questions",
+             min_value=1,
+             max_value=10,
+             value=5
+         )
+
+         # Text-to-Speech options
+         st.markdown("<h3 class='section-header'>🔊 Text-to-Speech Options</h3>", unsafe_allow_html=True)
+
+         use_tts = st.checkbox("Enable Text-to-Speech for questions", value=True)
+
+         if use_tts:
+             # Voice type selection with a single radio button for all voices
+             st.markdown("<p style='margin-top: 1rem; font-weight: 500;'>Select a voice for the interviewer:</p>", unsafe_allow_html=True)
+
+             # Create a single radio button with all voice options
+             voice_options = [
+                 "👨 Male - Casual (Guy)",
+                 "👨 Male - Formal (Christopher)",
+                 "👨 Male - British (Ryan)",
+                 "👩 Female - Casual (Jenny)",
+                 "👩 Female - Formal (Aria)",
+                 "👩 Female - British (Sonia)"
+             ]
+
+             selected_voice = st.radio(
+                 "Voice Selection",
+                 options=voice_options,
+                 index=3,  # Default to Female Casual
+                 label_visibility="collapsed"  # Hide the label since we already have a header
+             )
+
+             # Map the selected voice to the backend voice type
+             voice_mapping = {
+                 "👨 Male - Casual (Guy)": "male_casual",
+                 "👨 Male - Formal (Christopher)": "male_formal",
+                 "👨 Male - British (Ryan)": "male_british",
+                 "👩 Female - Casual (Jenny)": "female_casual",
+                 "👩 Female - Formal (Aria)": "female_formal",
+                 "👩 Female - British (Sonia)": "female_british"
+             }
+
+             voice_type = voice_mapping.get(selected_voice, "female_casual")
+         else:
+             voice_type = "female_casual"
+
+         # Start interview button
+         if st.button("🚀 Start Interview"):
+             if not api_key:
+                 st.error("⚠️ Please enter your OpenAI API key.")
+             elif not job_description:
+                 st.error("⚠️ Please enter a job description.")
+             else:
+                 with st.spinner("Setting up your interview..."):
+                     try:
+                         # Initialize the interviewer
+                         st.session_state.interviewer = VirtualInterviewer(api_key)
+
+                         # Store whether to generate ideal answers
+                         should_generate_ideal_answers = (generate_ideal_answers == "Yes")
+                         st.session_state.generate_ideal_answers = should_generate_ideal_answers
+
+                         # Store TTS settings
+                         st.session_state.use_tts = use_tts
+                         st.session_state.voice_type = voice_type
+
+                         # Generate questions
+                         questions = st.session_state.interviewer.generate_interview_questions(
+                             job_description=job_description,
+                             interview_type=interview_type,
+                             difficulty_level=difficulty_level,
+                             key_topics=key_topics,
+                             num_questions=int(num_questions),
+                             generate_ideal_answers=should_generate_ideal_answers
+                         )
+
+                         # Generate audio for the first question if TTS is enabled
+                         if use_tts and questions:
+                             with st.spinner("Generating audio for first question..."):
+                                 audio_path = st.session_state.interviewer.generate_question_audio(questions[0], voice_type)
+                                 st.session_state.current_audio_path = audio_path
+                                 st.session_state.audio_key = str(uuid.uuid4())
+                                 st.session_state.should_play_audio = True
+
+                         # Store interview parameters for scoring later
+                         st.session_state.job_description = job_description
+                         st.session_state.interview_type = interview_type
+                         st.session_state.difficulty_level = difficulty_level
+                         st.session_state.enable_scoring = (enable_scoring == "Yes")
+                         st.session_state.num_questions = int(num_questions)
+
+                         # Set the first question
+                         st.session_state.current_question = questions[0]
+                         st.session_state.questions_generated = True
+                         st.session_state.interview_setup_done = True
+                         st.session_state.interview_started = True
+                         st.session_state.answer_submitted = False
+                         st.session_state.current_answer = ""
+
+                         # Rerun to update the UI
+                         st.rerun()
+                     except Exception as e:
+                         st.error(f"⚠️ Error setting up the interview: {str(e)}")
+
+     # Interview in progress section
+     elif st.session_state.interview_started and not st.session_state.interview_completed:
+         st.markdown("<h2 class='section-header'>🎙️ Interview in Progress</h2>", unsafe_allow_html=True)
+
+         # Display current question number
+         st.markdown(f"<p class='question-number'>Question {st.session_state.current_question_index + 1} of {len(st.session_state.interviewer.questions_asked)}</p>", unsafe_allow_html=True)
+
+         # Display conversation history
+         if st.session_state.conversation_history:
+             st.markdown("<h3 class='section-header'>📜 Previous Questions and Answers</h3>", unsafe_allow_html=True)
+             for i, qa in enumerate(st.session_state.conversation_history):
+                 if i % 2 == 0:  # Question (even index)
+                     st.markdown(f"<p><strong style='color: #3949AB;'>Q{i//2 + 1}:</strong> {qa}</p>", unsafe_allow_html=True)
+                 else:  # Answer (odd index)
+                     st.markdown(f"<p><em style='color: #455A64;'>A: {qa}</em></p>", unsafe_allow_html=True)
+                     st.markdown("<hr style='margin: 0.5rem 0; border-color: #E0E0E0;'>", unsafe_allow_html=True)
+
+     # Interview completed section
+     elif st.session_state.interview_completed:
+         st.markdown("<h2 class='section-header'>🏁 Interview Completed</h2>", unsafe_allow_html=True)
+
+         # Display conversation history
+         if st.session_state.conversation_history:
+             st.markdown("<h3 class='section-header'>📋 Interview Summary</h3>", unsafe_allow_html=True)
+             for i, qa in enumerate(st.session_state.conversation_history):
+                 if i % 2 == 0:  # Question (even index)
+                     st.markdown(f"<p><strong style='color: #3949AB;'>Q{i//2 + 1}:</strong> {qa}</p>", unsafe_allow_html=True)
+                 else:  # Answer (odd index)
+                     st.markdown(f"<p><em style='color: #455A64;'>A: {qa}</em></p>", unsafe_allow_html=True)
+                     st.markdown("<hr style='margin: 0.5rem 0; border-color: #E0E0E0;'>", unsafe_allow_html=True)
+
+         # Option to restart
+         if st.button("🔄 Start New Interview"):
+             # Reset all session state variables
+             for key in list(st.session_state.keys()):
+                 del st.session_state[key]
+             st.rerun()
+
+ # Right column for displaying the current question and answer input
+ with right_col:
+     if st.session_state.interview_started and not st.session_state.interview_completed:
+         st.markdown("<h2 class='section-header'>❓ Current Question</h2>", unsafe_allow_html=True)
+
+         # Display the current question
+         st.markdown(f"<p class='question-header'>{st.session_state.current_question}</p>", unsafe_allow_html=True)
+
+         # Display audio player if TTS is enabled
+         if st.session_state.use_tts:
+             # Ensure we have audio for the current question
+             current_question = st.session_state.current_question
+             audio_path = ensure_audio_for_question(current_question, st.session_state.voice_type)
+
+             # Display audio player with unique key to force refresh
+             if audio_path and os.path.exists(audio_path):
+                 # Create a container for the audio player
+                 audio_container = st.empty()
+
+                 # Display the audio player with the current audio key
+                 audio_container.markdown(
+                     get_audio_player_html(
+                         audio_path,
+                         autoplay=st.session_state.should_play_audio,
+                         player_id=st.session_state.audio_key
+                     ),
+                     unsafe_allow_html=True
+                 )
+
+                 # Reset the should_play_audio flag after displaying
+                 if st.session_state.should_play_audio:
+                     st.session_state.should_play_audio = False
+
+             # Add a button to replay the audio
+             if st.button("🔊 Replay Question Audio"):
+                 # Set flag to play audio and generate new key
+                 replay_audio()
+                 st.rerun()
+
+         # Text area for the user's answer
+         user_answer = st.text_area(
+             "Your Answer",
+             key="user_answer",
+             height=300,
+             placeholder="Type your answer here..."
+         )
+
+         # Submit button for the answer
+         submit_button_col1, submit_button_col2, submit_button_col3 = st.columns([1, 1, 1])
+         with submit_button_col2:
+             if st.button("✅ Submit Answer", key="submit_answer"):
+                 if user_answer:  # Use the local variable, not session_state
+                     st.session_state.answer_submitted = True
+                     st.session_state.current_answer = user_answer  # Store the answer in a different session state variable
+                     st.markdown("<p class='success-text'>✅ Answer submitted! Click 'Next Question' to continue.</p>", unsafe_allow_html=True)
+                 else:
+                     st.markdown("<p class='warning-text'>⚠️ Please provide an answer before submitting.</p>", unsafe_allow_html=True)
+
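The conversation history is stored as a flat list alternating question, answer, which is why the rendering loops branch on `i % 2`. The same even/odd convention can be captured in a small helper (illustrative only, not part of the committed app):

```python
def pair_history(history):
    # conversation_history is a flat [q1, a1, q2, a2, ...] list;
    # even indices are questions, odd indices are answers.
    return list(zip(history[0::2], history[1::2]))

history = ["What is Azure?", "A cloud platform.",
           "What is GraphQL?", "A query language for APIs."]
pairs = pair_history(history)
assert pairs == [("What is Azure?", "A cloud platform."),
                 ("What is GraphQL?", "A query language for APIs.")]
```

Note that `zip` silently drops a trailing unanswered question, which matches how the app only appends a question together with its answer.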
+     # Display score results if interview is scored
+     elif st.session_state.interview_scored and st.session_state.score_results:
+         st.markdown("<h2 class='section-header'>📊 Interview Results</h2>", unsafe_allow_html=True)
+
+         score_results = st.session_state.score_results
+
+         # Display overall score with a visual indicator
+         if "overall_score" in score_results:
+             overall_score = score_results['overall_score']
+
+             # Create a visual score indicator
+             score_color = "#4CAF50" if overall_score >= 4 else "#FF9800" if overall_score >= 3 else "#F44336"
+
+             st.markdown(f"""
+             <div style="background-color: #F3F4F6; padding: 1.5rem; border-radius: 10px; margin: 1rem 0; box-shadow: 0 2px 5px rgba(0,0,0,0.1);">
+                 <h3 style="color: #3949AB; margin-top: 0;">Overall Performance</h3>
+                 <div style="display: flex; align-items: center; margin-bottom: 1rem;">
+                     <div style="background-color: {score_color}; color: white; font-size: 2rem; font-weight: bold; width: 80px; height: 80px; border-radius: 50%; display: flex; align-items: center; justify-content: center; margin-right: 1.5rem;">
+                         {overall_score}/5
+                     </div>
+                     <div>
+                         <p style="font-size: 1.1rem; margin: 0; color: #333;">{score_results['overall_feedback']}</p>
+                     </div>
+                 </div>
+             </div>
+             """, unsafe_allow_html=True)
+
+         # Display score table
+         if "individual_scores" in score_results:
+             st.markdown("<h3 class='section-header'>📈 Score Summary</h3>", unsafe_allow_html=True)
+
+             # Extract data for the table
+             questions = []
+             scores = []
+
+             for i, score_item in enumerate(score_results["individual_scores"]):
+                 # Truncate long questions for better display
+                 question_text = score_item['question']
+                 if len(question_text) > 80:
+                     question_text = question_text[:77] + "..."
+
+                 questions.append(f"Q{i+1}: {question_text}")
+                 scores.append(score_item['score'])
+
+             # Calculate average score
+             if scores:
+                 avg_score = sum(scores) / len(scores)
+                 questions.append("**Average Score**")
+                 scores.append(f"**{avg_score:.2f}**")
+
+             # Create DataFrame
+             df = pd.DataFrame({
+                 "Question": questions,
+                 "Score (out of 5)": scores
+             })
+
+             # Convert DataFrame to HTML and display it
+             table_html = df.to_html(classes='score-table', escape=False, index=False)
+             st.markdown(f"""
+             <div style="overflow-x: auto;">
+                 {table_html}
+             </div>
+             """, unsafe_allow_html=True)
+
+         # Display individual scores with ideal answers
+         if "individual_scores" in score_results:
+             st.markdown("<h3 class='section-header'>📝 Detailed Feedback</h3>", unsafe_allow_html=True)
+
+             st.markdown("""
+             <p style="margin-bottom: 1rem;">Click on each question below to see detailed feedback and ideal answers.</p>
+             """, unsafe_allow_html=True)
+
+             for i, score_item in enumerate(score_results["individual_scores"]):
+                 # Determine score color
+                 score_value = score_item['score']
+                 score_color = "#4CAF50" if score_value >= 4 else "#FF9800" if score_value >= 3 else "#F44336"
+
+                 with st.expander(f"Question {i+1}: {score_item['question']}"):
+                     st.markdown(f"""
+                     <div style="display: flex; align-items: center; margin-bottom: 1rem;">
+                         <div style="background-color: {score_color}; color: white; font-size: 1.2rem; font-weight: bold; width: 50px; height: 50px; border-radius: 50%; display: flex; align-items: center; justify-content: center; margin-right: 1rem;">
+                             {score_value}/5
+                         </div>
+                         <div>
+                             <p style="font-size: 1.1rem; margin: 0; color: #333; font-weight: 500;">{score_item['feedback']}</p>
+                         </div>
+                     </div>
+                     """, unsafe_allow_html=True)
+
+                     st.markdown("<h4 class='user-answer-header'>🧑‍💼 Your Answer:</h4>", unsafe_allow_html=True)
+                     st.markdown(f"{score_item['answer']}")
+
+                     # Only show ideal answers if they were generated
+                     if st.session_state.generate_ideal_answers:
+                         st.markdown("<h4 class='ideal-answer-header'>💡 Ideal Answer:</h4>", unsafe_allow_html=True)
+                         if "ideal_answer" in score_item:
+                             st.markdown(f"{score_item['ideal_answer']}")
+                         else:
+                             st.markdown("No ideal answer available.")
+
+     # Display a welcome message if interview hasn't started
+     elif not st.session_state.interview_started:
+         st.markdown("<h2 class='section-header'>👋 Welcome to Virtual Interviewer</h2>", unsafe_allow_html=True)
+         st.markdown("""
+         <div style="background-color: #E8EAF6; padding: 1.5rem; border-radius: 10px; margin-top: 1rem;">
+             <h3 style="color: #3949AB; font-size: 1.3rem; margin-bottom: 1rem;">This tool will help you practice for your upcoming interviews by:</h3>
+             <ol style="color: #303F9F; font-size: 1.1rem;">
+                 <li>Generating relevant interview questions based on a job description</li>
+                 <li>Simulating a real interview experience</li>
+                 <li>Providing feedback on your answers</li>
+             </ol>
+             <p style="color: #3949AB; font-size: 1.1rem; margin-top: 1rem;">To get started, fill out the interview setup form on the left and click "Start Interview".</p>
+         </div>
+         """, unsafe_allow_html=True)
+
+ # Bottom section for control buttons
+ if st.session_state.interview_started and not st.session_state.interview_completed:
+     st.markdown("<hr style='margin: 2rem 0 1rem 0;'>", unsafe_allow_html=True)
+
+     # Create a container for the buttons at the bottom
+     button_container = st.container()
+
+     with button_container:
+         col1, col2, col3 = st.columns([1, 1, 1])
+
+         with col1:
+             pass  # Empty column for spacing
+
+         with col2:
+             next_button_disabled = not st.session_state.answer_submitted
+             if st.button("⏭️ Next Question", disabled=next_button_disabled, use_container_width=True):
+                 # Store the current question and answer
+                 if st.session_state.current_answer:  # Use current_answer instead of user_answer
+                     # Add to conversation history
+                     st.session_state.conversation_history.append(st.session_state.current_question)
+                     st.session_state.conversation_history.append(st.session_state.current_answer)
+
+                     # Store in the interviewer object
+                     st.session_state.interviewer.store_user_answer(
+                         st.session_state.current_question,
+                         st.session_state.current_answer
+                     )
+
+                     # Move to the next question
+                     st.session_state.current_question_index += 1
+
+                     # Check if we've reached the end of the questions
+                     if st.session_state.current_question_index >= len(st.session_state.interviewer.questions_asked):
+                         st.session_state.interview_completed = True
+
+                         # Automatically score the interview if scoring is enabled
+                         if st.session_state.enable_scoring:
+                             with st.spinner("Scoring your interview..."):
+                                 try:
+                                     # Score the interview
+                                     score_results = st.session_state.interviewer.score_interview(
+                                         job_description=st.session_state.job_description,
+                                         interview_type=st.session_state.interview_type,
+                                         difficulty_level=st.session_state.difficulty_level
+                                     )
+
+                                     st.session_state.score_results = score_results
738
+ st.session_state.interview_scored = True
739
+ except Exception as e:
740
+ st.error(f"⚠️ Error scoring the interview: {str(e)}")
741
+
742
+ st.rerun()
743
+ else:
744
+ # Get the next question
745
+ next_question = st.session_state.interviewer.get_next_question(
746
+ st.session_state.current_question_index
747
+ )
748
+ st.session_state.current_question = next_question
749
+
750
+ # Reset submission status and current answer
751
+ reset_answer_input()
752
+
753
+ # Set flag to play audio for the new question
754
+ st.session_state.should_play_audio = True
755
+ st.session_state.audio_key = str(uuid.uuid4())
756
+
757
+ st.rerun()
758
+ else:
759
+ st.warning("⚠️ Please provide an answer before moving to the next question.")
760
+
761
+ with col3:
762
+ if st.session_state.enable_scoring:
763
+ if st.button("📊 Score Interview", use_container_width=True, key="score_button"):
764
+ if len(st.session_state.interviewer.user_answers) > 0:
765
+ with st.spinner("Scoring your interview..."):
766
+ try:
767
+ # Score the interview
768
+ score_results = st.session_state.interviewer.score_interview(
769
+ job_description=st.session_state.job_description,
770
+ interview_type=st.session_state.interview_type,
771
+ difficulty_level=st.session_state.difficulty_level
772
+ )
773
+
774
+ st.session_state.score_results = score_results
775
+ st.session_state.interview_scored = True
776
+ st.session_state.interview_completed = True
777
+ st.rerun()
778
+ except Exception as e:
779
+ st.error(f"⚠️ Error scoring the interview: {str(e)}")
780
+ else:
781
+ st.warning("⚠️ Please answer at least one question before scoring.")
782
+
783
+ # Footer
784
+ st.markdown("<hr style='margin: 2rem 0 1rem 0;'>", unsafe_allow_html=True)
785
+ st.markdown("<p class='footer'>🎯 Powered by OpenAI GPT-4o | Created with Streamlit</p>", unsafe_allow_html=True)
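
The "Next Question" handler above is essentially a small state machine over `st.session_state`: reject empty answers, record the (question, answer) pair, advance the index, and flip a completed flag at the end. A framework-free sketch of that flow (class and method names here are illustrative, not the app's actual API):

```python
class InterviewState:
    """Framework-free sketch of the question-advance logic behind the Next Question button."""

    def __init__(self, questions):
        self.questions = questions
        self.index = 0
        self.answers = []        # (question, answer) pairs
        self.completed = False

    def submit_answer(self, answer: str) -> bool:
        """Record an answer for the current question and advance; reject empty input."""
        if not answer:
            return False         # mirrors the "please provide an answer" warning
        self.answers.append((self.questions[self.index], answer))
        self.index += 1
        if self.index >= len(self.questions):
            self.completed = True  # end of interview: the app would trigger scoring here
        return True
```

In the real app this state lives in `st.session_state` and each transition ends with `st.rerun()`; the sketch just isolates the transition rules.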
backend.py ADDED
@@ -0,0 +1,425 @@
+import openai
+import json
+import requests
+import base64
+import os
+import tempfile
+import asyncio
+import edge_tts
+import time
+import hashlib
+import shutil
+from typing import List, Dict, Any, Optional
+
+class VirtualInterviewer:
+    def __init__(self, api_key: str):
+        """Initialize the virtual interviewer with the OpenAI API key."""
+        self.api_key = api_key
+        self.questions_asked = []
+        self.user_answers = []
+        self.conversation_history = []
+        self.ideal_answers = {}
+        self.question_audio_paths = {}
+
+        # Create the audio directory
+        self.audio_dir = self._create_audio_directory()
+
+        # Clean up any audio files left over from previous sessions
+        self._cleanup_audio_files()
+
+        # Initialize the OpenAI client
+        try:
+            self.client = openai.OpenAI(api_key=api_key)
+        except Exception as e:
+            raise Exception(f"Failed to initialize OpenAI client: {str(e)}")
+
+    def _create_audio_directory(self) -> str:
+        """Create a directory to store audio files."""
+        audio_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "audio_files")
+        os.makedirs(audio_dir, exist_ok=True)
+        return audio_dir
+
+    def _cleanup_audio_files(self):
+        """Delete all temporary audio files from previous sessions."""
+        try:
+            if os.path.exists(self.audio_dir):
+                # Delete every file in the directory
+                for filename in os.listdir(self.audio_dir):
+                    file_path = os.path.join(self.audio_dir, filename)
+                    if os.path.isfile(file_path):
+                        os.remove(file_path)
+                print(f"Cleaned up audio files in {self.audio_dir}")
+        except Exception as e:
+            print(f"Error cleaning up audio files: {str(e)}")
+
+    def generate_interview_questions(
+        self,
+        job_description: str,
+        interview_type: str,
+        difficulty_level: str,
+        key_topics: str,
+        num_questions: int,
+        generate_ideal_answers: bool = True
+    ) -> List[str]:
+        """Generate interview questions based on the job description and other parameters."""
+        try:
+            # Build the system prompt, with or without ideal answers
+            if generate_ideal_answers:
+                system_prompt = f"""You are an expert interviewer for {interview_type} interviews.
+Generate {num_questions} {difficulty_level.lower()} difficulty interview questions for a {interview_type.lower()} interview based on the following job description:
+
+Job Description:
+{job_description}
+
+Key Topics to Focus on:
+{key_topics if key_topics else "No specific topics provided."}
+
+Please provide the questions and ideal answers in the following JSON format:
+{{
+    "questions": [
+        {{
+            "question": "Question 1",
+            "ideal_answer": "Ideal answer for question 1"
+        }},
+        ...
+    ]
+}}
+
+Make sure the questions are challenging but appropriate for the {difficulty_level.lower()} difficulty level.
+The ideal answers should be comprehensive and demonstrate expertise in the subject matter.
+"""
+            else:
+                system_prompt = f"""You are an expert interviewer for {interview_type} interviews.
+Generate {num_questions} {difficulty_level.lower()} difficulty interview questions for a {interview_type.lower()} interview based on the following job description:
+
+Job Description:
+{job_description}
+
+Key Topics to Focus on:
+{key_topics if key_topics else "No specific topics provided."}
+
+Please provide the questions in a numbered list format.
+Make sure the questions are challenging but appropriate for the {difficulty_level.lower()} difficulty level.
+"""
+
+            # Call the API to generate the questions
+            response = self.client.chat.completions.create(
+                model="gpt-4o",
+                messages=[
+                    {"role": "system", "content": system_prompt},
+                    {"role": "user", "content": f"Generate {num_questions} {interview_type.lower()} interview questions for a {difficulty_level.lower()} difficulty level."}
+                ],
+                temperature=0.7,
+                max_tokens=2000
+            )
+
+            # Extract the response content
+            response_content = response.choices[0].message.content
+
+            # Parse the response either as JSON or as a plain numbered list
+            if generate_ideal_answers:
+                try:
+                    # Try to parse as JSON
+                    json_response = self._extract_json(response_content)
+
+                    # Extract questions and ideal answers
+                    questions = []
+                    for item in json_response.get("questions", []):
+                        question = item.get("question", "")
+                        ideal_answer = item.get("ideal_answer", "")
+
+                        if question:
+                            questions.append(question)
+                            if ideal_answer:
+                                self.ideal_answers[question] = ideal_answer
+
+                    # If no questions could be extracted from JSON, fall back to text parsing
+                    if not questions:
+                        questions = self._parse_questions(response_content, num_questions)
+                        # Generate ideal answers separately
+                        self._generate_ideal_answers(questions, job_description, interview_type, difficulty_level)
+                except Exception as e:
+                    # If JSON parsing fails, fall back to text parsing
+                    questions = self._parse_questions(response_content, num_questions)
+                    # Generate ideal answers separately
+                    self._generate_ideal_answers(questions, job_description, interview_type, difficulty_level)
+            else:
+                # Parse as plain text
+                questions = self._parse_questions(response_content, num_questions)
+
+            # Store the generated questions
+            self.questions_asked = questions
+
+            return questions
+        except Exception as e:
+            raise Exception(f"Failed to generate interview questions: {str(e)}")
+
+    def generate_question_audio(self, question: str, voice_type: str) -> str:
+        """Generate audio for a question using edge-tts."""
+        try:
+            # Reuse existing audio for this question if available
+            if question in self.question_audio_paths and os.path.exists(self.question_audio_paths[question]):
+                print(f"Using existing audio for question: {question[:30]}...")
+                return self.question_audio_paths[question]
+
+            # Create a unique filename based on the question content and a timestamp
+            question_hash = hashlib.md5(question.encode()).hexdigest()
+            timestamp = int(time.time())
+            filename = f"question_{question_hash}_{timestamp}.mp3"
+            output_path = os.path.join(self.audio_dir, filename)
+
+            # Map the voice type to an edge-tts voice
+            voice_mapping = {
+                "male_casual": "en-US-GuyNeural",
+                "male_formal": "en-US-ChristopherNeural",
+                "male_british": "en-GB-RyanNeural",
+                "female_casual": "en-US-JennyNeural",
+                "female_formal": "en-US-AriaNeural",
+                "female_british": "en-GB-SoniaNeural"
+            }
+
+            # Look up the voice name, defaulting to female casual
+            voice = voice_mapping.get(voice_type, "en-US-JennyNeural")
+
+            # Generate the audio with edge-tts
+            async def generate_audio():
+                communicate = edge_tts.Communicate(question, voice)
+                await communicate.save(output_path)
+
+            # Run the async function
+            asyncio.run(generate_audio())
+
+            print(f"Generated audio for question: {question[:30]}... at {output_path}")
+
+            # Remember the audio path for this question
+            self.question_audio_paths[question] = output_path
+
+            return output_path
+        except Exception as e:
+            print(f"Error generating audio: {str(e)}")
+            return ""
+
+    def get_question_audio_path(self, question: str) -> str:
+        """Get the audio path for a question."""
+        # Check whether we have an audio path for this question
+        if question in self.question_audio_paths:
+            # Verify the file still exists
+            if os.path.exists(self.question_audio_paths[question]):
+                return self.question_audio_paths[question]
+            else:
+                # The file is gone; drop it from the dictionary
+                del self.question_audio_paths[question]
+                return ""
+        return ""
+
+    def _extract_json(self, text: str) -> Dict[str, Any]:
+        """Extract JSON from text."""
+        try:
+            # Try to parse the entire text as JSON
+            return json.loads(text)
+        except json.JSONDecodeError:
+            # If that fails, try to extract a fenced JSON block from the text
+            import re
+            json_match = re.search(r'```json\n(.*?)\n```', text, re.DOTALL)
+            if json_match:
+                try:
+                    return json.loads(json_match.group(1))
+                except json.JSONDecodeError:
+                    pass
+
+            # Try to find JSON between curly braces
+            json_match = re.search(r'({.*})', text, re.DOTALL)
+            if json_match:
+                try:
+                    return json.loads(json_match.group(1))
+                except json.JSONDecodeError:
+                    pass
+
+            # If all else fails, return an empty dict
+            return {}
+
+    def _generate_ideal_answers(self, questions: List[str], job_description: str, interview_type: str, difficulty_level: str):
+        """Generate ideal answers for the questions."""
+        try:
+            # Prepare the prompt for generating ideal answers
+            prompt = f"""You are an expert in {interview_type} interviews.
+For each of the following interview questions, provide an ideal answer that would impress the interviewer.
+The answers should be comprehensive, demonstrate expertise, and be appropriate for a {difficulty_level.lower()} difficulty level interview.
+
+Job Description:
+{job_description}
+
+Questions:
+{json.dumps(questions)}
+
+Please provide the answers in the following JSON format:
+{{
+    "answers": [
+        {{
+            "question": "Question 1",
+            "ideal_answer": "Ideal answer for question 1"
+        }},
+        ...
+    ]
+}}
+"""
+
+            # Call the API to generate the ideal answers
+            response = self.client.chat.completions.create(
+                model="gpt-4o",
+                messages=[
+                    {"role": "system", "content": "You are an expert interviewer providing ideal answers to interview questions."},
+                    {"role": "user", "content": prompt}
+                ],
+                temperature=0.7,
+                max_tokens=2000
+            )
+
+            # Extract the response content
+            response_content = response.choices[0].message.content
+
+            try:
+                # Try to parse as JSON
+                json_response = self._extract_json(response_content)
+
+                # Extract the ideal answers
+                for item in json_response.get("answers", []):
+                    question = item.get("question", "")
+                    ideal_answer = item.get("ideal_answer", "")
+
+                    if question and ideal_answer:
+                        # Find the matching question in our list
+                        for q in questions:
+                            if question.lower() in q.lower() or q.lower() in question.lower():
+                                self.ideal_answers[q] = ideal_answer
+                                break
+            except Exception as e:
+                # If batch processing fails, fall back to per-question placeholders
+                for question in questions:
+                    if question not in self.ideal_answers:
+                        self.ideal_answers[question] = f"Unable to generate ideal answer: {str(e)}"
+        except Exception as e:
+            # Handle any errors in the overall ideal-answer generation process
+            print(f"Error generating ideal answers: {str(e)}")
+            # Make sure every question has a fallback ideal answer
+            for question in questions:
+                if question not in self.ideal_answers:
+                    self.ideal_answers[question] = "Unable to generate ideal answer due to an error."
+
+    def _parse_questions(self, questions_text: str, expected_count: int) -> List[str]:
+        """Parse the questions from the text response."""
+        lines = questions_text.strip().split('\n')
+        questions = []
+
+        for line in lines:
+            line = line.strip()
+            if line and (line[0].isdigit() or line.startswith('- ')):
+                # Remove numbering or bullet points
+                cleaned_line = line.lstrip('0123456789.- ').strip()
+                if cleaned_line:
+                    questions.append(cleaned_line)
+
+        # If we couldn't parse the expected number of questions, try a simpler approach
+        if len(questions) != expected_count:
+            questions = [line.strip() for line in lines if line.strip()][:expected_count]
+
+        return questions[:expected_count]  # Ensure we return exactly the expected number
+
+    def get_next_question(self, question_index: int) -> str:
+        """Get the next question from the list of generated questions."""
+        if 0 <= question_index < len(self.questions_asked):
+            return self.questions_asked[question_index]
+        return "No more questions available."
+
+    def store_user_answer(self, question: str, answer: str):
+        """Store the user's answer to a question."""
+        self.user_answers.append({"question": question, "answer": answer})
+        self.conversation_history.append({"role": "assistant", "content": question})
+        self.conversation_history.append({"role": "user", "content": answer})
+
+    def get_ideal_answer(self, question: str) -> str:
+        """Get the ideal answer for a question."""
+        return self.ideal_answers.get(question, "No ideal answer available for this question.")
+
+    def score_interview(self, job_description: str, interview_type: str, difficulty_level: str) -> Dict[str, Any]:
+        """Score the interview based on the user's answers."""
+        try:
+            # Gather the data for scoring
+            questions_and_answers = []
+            for qa in self.user_answers:
+                question = qa["question"]
+                answer = qa["answer"]
+                ideal_answer = self.get_ideal_answer(question)
+
+                questions_and_answers.append({
+                    "question": question,
+                    "answer": answer,
+                    "ideal_answer": ideal_answer
+                })
+
+            # Prepare the prompt for scoring
+            prompt = f"""You are an expert interviewer for {interview_type} interviews.
+Score the following interview answers based on the job description and difficulty level.
+
+Job Description:
+{job_description}
+
+Difficulty Level: {difficulty_level}
+
+For each question and answer, provide:
+1. A score from 0 to 5 (where 5 is excellent)
+2. Feedback on the answer
+3. The ideal answer for comparison. The ideal answer should be comprehensive and detailed, formatted with bullet points, and of a quality that would impress the interviewer.
+
+Questions and Answers:
+{json.dumps(questions_and_answers)}
+
+Please provide the scores in the following JSON format:
+{{
+    "overall_score": 4.5,
+    "overall_feedback": "Overall feedback on the interview performance",
+    "individual_scores": [
+        {{
+            "question": "Question 1",
+            "answer": "User's answer to question 1",
+            "ideal_answer": "Ideal answer to question 1",
+            "score": 4,
+            "feedback": "Feedback on the answer to question 1"
+        }},
+        ...
+    ]
+}}
+"""
+
+            # Call the API to score the interview
+            response = self.client.chat.completions.create(
+                model="gpt-4o",
+                messages=[
+                    {"role": "system", "content": "You are an expert interviewer scoring interview answers."},
+                    {"role": "user", "content": prompt}
+                ],
+                temperature=0.3,
+                max_tokens=2000
+            )
+
+            # Extract the response content
+            response_content = response.choices[0].message.content
+
+            try:
+                # Try to parse as JSON
+                json_response = self._extract_json(response_content)
+                return json_response
+            except Exception as e:
+                # If JSON parsing fails, return an error result
+                return {
+                    "overall_score": 0,
+                    "overall_feedback": f"Failed to score the interview: {str(e)}",
+                    "individual_scores": []
+                }
+        except Exception as e:
+            # If scoring fails, return an error result
+            return {
+                "overall_score": 0,
+                "overall_feedback": f"Failed to score the interview: {str(e)}",
+                "individual_scores": []
+            }
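
The `_extract_json` helper is the piece of `backend.py` most worth exercising in isolation, since every model response funnels through it. A minimal standalone sketch of the same three-stage fallback (whole-response parse, then a fenced ```json block, then the outermost brace span):

```python
import json
import re

def extract_json(text: str) -> dict:
    """Parse JSON from a model response, falling back to fenced or embedded JSON."""
    try:
        return json.loads(text)  # the whole response is valid JSON
    except json.JSONDecodeError:
        pass
    # Look for a ```json ... ``` fenced block
    match = re.search(r'```json\n(.*?)\n```', text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(1))
        except json.JSONDecodeError:
            pass
    # Last resort: the greedy outermost curly-brace span
    match = re.search(r'({.*})', text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(1))
        except json.JSONDecodeError:
            pass
    return {}
```

Callers must therefore treat an empty dict as "no parseable JSON", which is why the generation and scoring paths both carry text-parsing fallbacks.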
requirements.txt ADDED
@@ -0,0 +1,5 @@
+ streamlit==1.32.0
+ openai==1.12.0
+ python-dotenv==1.0.0
+ requests==2.31.0
+ edge-tts==6.1.9