Kaadan committed
Commit 9bd6c72 · 2 Parent(s): 18e5c97 5ae03a4

Merge branch 'master'
backend/QWEN.md CHANGED
@@ -10,6 +10,7 @@ The application follows a clean architecture with proper separation of concerns:
 - **Database Layer**: Manages database connections and sessions
 - **Model Layer**: Defines database models using SQLAlchemy
 - **Schema Layer**: Defines Pydantic schemas for request/response validation
+ - **Integration Layer**: Handles external services like AI providers
 
 ## Technologies Used
 
@@ -20,6 +21,9 @@ The application follows a clean architecture with proper separation of concerns:
 - **Alembic**: Database migration tool
 - **Pydantic**: Data validation and settings management
 - **UUID**: For generating unique identifiers
+ - **Mistral AI**: AI provider for generating questions and scoring answers
+ - **JWT**: For authentication and authorization
+ - **bcrypt**: For password hashing
 
 ## Architecture Components
 
@@ -34,6 +38,8 @@ backend/
 │ └── routes.py # Root and health check endpoints
 ├── database/ # Database connection utilities
 │ └── database.py # Database engine and session management
+ ├── integrations/ # External service integrations
+ │ └── ai_integration/ # AI provider implementations
 ├── models/ # SQLAlchemy models
 │ ├── user.py # User model
 │ ├── job.py # Job model
@@ -45,13 +51,17 @@ backend/
 │ ├── job.py # Job schemas
 │ ├── assessment.py # Assessment schemas
 │ ├── application.py # Application schemas
+ │ ├── enums.py # Enum definitions
 │ └── base.py # Base schema class
 ├── services/ # Business logic layer
 │ ├── user_service.py # User-related services
 │ ├── job_service.py # Job-related services
 │ ├── assessment_service.py # Assessment-related services
 │ ├── application_service.py # Application-related services
+ │ ├── ai_service.py # AI-related services
 │ └── base_service.py # Generic service functions
+ ├── utils/ # Utility functions
+ │ └── dependencies.py # Dependency injection functions
 ├── alembic/ # Database migration files
 ├── config.py # Application configuration
 ├── logging_config.py # Logging configuration
@@ -63,21 +73,31 @@ backend/
 ### Key Features
 
 1. **User Management**:
- - Registration and authentication
- - Role-based access (HR vs Applicant)
+ - Registration and authentication with role-based access (HR vs Applicant)
+ - JWT-based secure session management
+ - Password hashing using bcrypt
 
 2. **Job Management**:
 - Create, update, delete job postings
 - Manage job details and requirements
+ - Track applicant counts
 
- 3. **Assessment Management**:
- - Create assessments linked to jobs
- - Define questions and passing scores
- - Regenerate assessments with new questions
+ 3. **AI-Powered Assessment Management**:
+ - Create assessments with AI-generated questions based on job requirements
+ - Define question types (multiple choice single answer, multiple choice multiple answers, text-based)
+ - Regenerate assessments with new AI-generated questions
+ - Automatic duration estimation based on content using AI
+ - Passing score configuration (range 20-80)
 
 4. **Application Management**:
 - Submit applications with answers
+ - AI-powered scoring of text-based answers with rationales
 - Track application results and scores
+ - Detailed feedback with AI-generated rationales
+
+ 5. **Dashboard Features**:
+ - View application scores with sorting options
+ - Monitor assessment performance
 
 ### API Endpoints
 
@@ -99,14 +119,20 @@ backend/
 #### Assessments
 - `GET /assessments/jobs/{jid}` - List assessments for a job
 - `GET /assessments/jobs/{jid}/{aid}` - Get assessment details
- - `POST /assessments/jobs/{id}` - Create assessment
- - `PATCH /assessments/jobs/{jid}/{aid}/regenerate` - Regenerate assessment
+ - `POST /assessments/jobs/{id}` - Create assessment with AI-generated questions
+ - `PATCH /assessments/jobs/{jid}/{aid}/regenerate` - Regenerate assessment with new AI-generated questions
 - `PATCH /assessments/jobs/{jid}/{aid}` - Update assessment
 - `DELETE /assessments/jobs/{jid}/{aid}` - Delete assessment
 
 #### Applications
- - `GET /applications/jobs/{jid}/assessments/{aid}` - List applications
+ - `GET /applications/jobs/{jid}/assessments/{aid}` - List applications for an assessment
+ - `GET /applications/jobs/{jid}/assessment_id/{aid}/applications/{id}` - Get detailed application
 - `POST /applications/jobs/{jid}/assessments/{aid}` - Create application
+ - `GET /applications/my-applications` - Get current user's applications
+ - `GET /applications/my-applications/{id}` - Get specific application for current user
+
+ #### Dashboard
+ - `GET /dashboard/applications/scores` - Get application scores with sorting options
 
 #### Health Check
 - `GET /` - Root endpoint
@@ -130,11 +156,14 @@ LOG_LEVEL=INFO
 LOG_FILE=app.log
 LOG_FORMAT=%(asctime)s - %(name)s - %(levelname)s - %(message)s
 
- # JWT Configuration (for future use)
+ # JWT Configuration
 SECRET_KEY=your-secret-key-here
 ALGORITHM=HS256
 ACCESS_TOKEN_EXPIRE_MINUTES=30
 
+ # AI Provider Configuration
+ MISTRAL_API_KEY=your-mistral-api-key-here
+
 # Application Configuration
 APP_NAME=AI-Powered Hiring Assessment Platform
 APP_VERSION=0.1.0
@@ -146,6 +175,7 @@ APP_DESCRIPTION=MVP for managing hiring assessments using AI
 ### Prerequisites
 - Python 3.11+
 - pip package manager
+ - Mistral AI API key (optional, for AI features)
 
 ### Setup Instructions
 
@@ -153,7 +183,7 @@ APP_DESCRIPTION=MVP for managing hiring assessments using AI
 ```bash
 pip install -r requirements.txt
 ```
-
+
 2. **Set Up Environment Variables**:
 Copy the `.env.example` file to `.env` and adjust the values as needed.
 
@@ -166,7 +196,7 @@ APP_DESCRIPTION=MVP for managing hiring assessments using AI
 ```bash
 python main.py
 ```
-
+
 Or using uvicorn directly:
 ```bash
 uvicorn main:app --host 0.0.0.0 --port 8000 --reload
@@ -180,9 +210,9 @@ uvicorn main:app --reload --host 0.0.0.0 --port 8000
 
 ## Testing
 
- To run tests (when available):
+ To run tests:
 ```bash
- pytest
+ python -m pytest
 ```
 
 ## Logging
@@ -213,28 +243,56 @@ The application uses Alembic for database migrations:
 - Log errors appropriately
 
 3. **Security**:
- - Passwords should be hashed (currently using placeholder)
+ - Passwords are hashed using bcrypt
 - Input validation through Pydantic schemas
 - SQL injection prevention through SQLAlchemy ORM
+ - JWT-based authentication and authorization
 
 4. **Architecture**:
 - Keep business logic in service layer
 - Use dependency injection for database sessions
 - Separate API routes by domain/model
 - Maintain clear separation between layers
+ - Use enums for fixed values to ensure consistency
+
+ 5. **AI Integration**:
+ - Abstract AI provider implementations behind interfaces
+ - Use factory pattern for AI provider selection
+ - Implement fallback mechanisms for AI services
+
+ ## Implemented Features
+
+ - ✅ JWT token-based authentication
+ - ✅ Password hashing implementation using bcrypt
+ - ✅ AI-powered question generation based on job requirements
+ - ✅ AI-powered scoring of text-based answers with rationales
+ - ✅ Assessment duration estimation using AI
+ - ✅ Comprehensive API input/output validation with Pydantic schemas
+ - ✅ Proper enum definitions for consistent API contracts
+ - ✅ Role-based access control (HR vs Applicant)
+ - ✅ Detailed application feedback with AI-generated rationales
+ - ✅ My Applications endpoint for candidates to track their submissions
+ - ✅ Dashboard endpoints for viewing application scores
+ - ✅ Assessment regeneration functionality
+ - ✅ Proper handling of answers as JSON data within applications
+ - ✅ Comprehensive logging throughout the application
 
 ## Future Enhancements
 
- - JWT token-based authentication
- - Password hashing implementation
- - Advanced assessment features
- - Admin dashboard endpoints
- - More sophisticated logging and monitoring
- - Unit and integration tests
-
- # TODO:
- - when creating an assessment we should pass the questions of the assessment.
- - all APIs input and output should have a cleare schema, even the enums should be clear and apear in the swagger apis (when visiting /docs)
- - the validation of the inputs should be done by pydantic and in the model level, not in the model level only!
- - the answers is not a model itself, so the services/answer functions should be aware of that.
+ - Enhanced AI scoring with more sophisticated models
+ - Advanced analytics and reporting features
+ - More sophisticated assessment types
+ - Integration with additional AI providers
+ - Performance optimizations for large datasets
+ - Unit and integration tests coverage
+ - Enhanced error handling and retry mechanisms
+ - Rate limiting for API endpoints
+ - Audit logging for compliance requirements
+
+ ## Completed TODO Items
+
+ - ✅ When creating an assessment, questions are now generated using AI based on job requirements and specified question types
+ - ✅ All APIs now have clear input/output schemas with enums properly defined and visible in Swagger documentation
+ - ✅ Input validation is now done at both the Pydantic schema level and model level
+ - ✅ Answers are properly handled as part of the application model rather than as a separate model

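The updated guide stresses that enums should be defined once and surface in the Swagger UI at `/docs`. A minimal sketch of how that pattern typically looks with Pydantic — the class and field names here are illustrative, not the project's actual `schemas/enums.py` contents:

```python
from enum import Enum
from typing import List

from pydantic import BaseModel, Field


class QuestionType(str, Enum):
    # Subclassing str alongside Enum keeps values JSON-serializable and
    # lets FastAPI list the allowed values in the OpenAPI schema at /docs.
    choose_one = "choose_one"
    choose_many = "choose_many"
    text_based = "text_based"


class AssessmentCreate(BaseModel):
    title: str = Field(..., min_length=1, max_length=200)
    # Passing score constrained to the 20-80 range the guide documents
    passing_score: int = Field(..., ge=20, le=80)
    questions_types: List[QuestionType]


req = AssessmentCreate(
    title="Backend Engineer Screening",
    passing_score=60,
    questions_types=["choose_one", "text_based"],  # coerced to enum members
)
```

With this shape, an out-of-range `passing_score` is rejected at the schema level before any service code runs.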
backend/alembic/versions/290ee4ce077e_add_score_and_question_scores_columns_.py ADDED
@@ -0,0 +1,34 @@
+ """Add score and question_scores columns to applications table
+
+ Revision ID: 290ee4ce077e
+ Revises: f9f1aa7380ab
+ Create Date: 2026-02-09 11:29:38.655897
+
+ """
+ from typing import Sequence, Union
+
+ from alembic import op
+ import sqlalchemy as sa
+
+
+ # revision identifiers, used by Alembic.
+ revision: str = '290ee4ce077e'
+ down_revision: Union[str, Sequence[str], None] = 'f9f1aa7380ab'
+ branch_labels: Union[str, Sequence[str], None] = None
+ depends_on: Union[str, Sequence[str], None] = None
+
+
+ def upgrade() -> None:
+     """Upgrade schema."""
+     # ### commands auto generated by Alembic - please adjust! ###
+     op.add_column('applications', sa.Column('score', sa.Float(), nullable=True))
+     op.add_column('applications', sa.Column('question_scores', sa.Text(), nullable=True))
+     # ### end Alembic commands ###
+
+
+ def downgrade() -> None:
+     """Downgrade schema."""
+     # ### commands auto generated by Alembic - please adjust! ###
+     op.drop_column('applications', 'question_scores')
+     op.drop_column('applications', 'score')
+     # ### end Alembic commands ###
backend/api/application_routes.py CHANGED
@@ -53,6 +53,15 @@ def get_applications_list(jid: str, aid: str, page: int = 1, limit: int = 10, db
     from services.user_service import get_user
     user = get_user(db, application.user_id)
 
+     # Parse question scores from JSON string
+     question_scores = []
+     if application.question_scores:
+         try:
+             question_scores = json.loads(application.question_scores)
+         except json.JSONDecodeError:
+             logger.warning(f"Failed to parse question scores for application ID: {application.id}")
+             question_scores = []
+
     # Create response object that matches technical requirements exactly
     application_response = {
         'id': application.id,
@@ -62,6 +71,7 @@ def get_applications_list(jid: str, aid: str, page: int = 1, limit: int = 10, db
         'answers': [], # Not including answers in the list view for performance
         'score': score,
         'passing_score': assessment.passing_score,
+         'question_scores': question_scores, # Include individual question scores
         'assessment_details': {
             'id': assessment.id,
             'title': assessment.title,
@@ -202,6 +212,15 @@ def get_application_detail(jid: str, aid: str, id: str, db: Session = Depends(ge
         logger.error(f"Error creating assessment details: {str(e)}")
         assessment_details_obj = None
 
+     # Parse question scores from JSON string
+     question_scores = []
+     if application.question_scores:
+         try:
+             question_scores = json.loads(application.question_scores)
+         except json.JSONDecodeError:
+             logger.warning(f"Failed to parse question scores for application ID: {application.id}")
+             question_scores = []
+
     application_detail = ApplicationDetailedResponse(
         id=application.id,
         job_id=application.job_id,
@@ -210,6 +229,7 @@ def get_application_detail(jid: str, aid: str, id: str, db: Session = Depends(ge
         answers=enriched_answers,
         score=score,
         passing_score=assessment.passing_score,
+         question_scores=question_scores, # Include individual question scores
        assessment_details=assessment_details_obj,
        user={
            'id': user.id if user else None,
@@ -416,6 +436,15 @@ def get_my_application(id: str, db: Session = Depends(get_db), current_user: Use
         logger.error(f"Error creating assessment details: {str(e)}")
         assessment_details_obj = None
 
+     # Parse question scores from JSON string
+     question_scores = []
+     if application.question_scores:
+         try:
+             question_scores = json.loads(application.question_scores)
+         except json.JSONDecodeError:
+             logger.warning(f"Failed to parse question scores for application ID: {application.id}")
+             question_scores = []
+
     application_detail = ApplicationDetailedResponse(
         id=application.id,
         job_id=application.job_id,
@@ -424,6 +453,7 @@ def get_my_application(id: str, db: Session = Depends(get_db), current_user: Use
         answers=enriched_answers,
         score=score,
         passing_score=assessment.passing_score,
+         question_scores=question_scores, # Include individual question scores
        assessment_details=assessment_details_obj,
        user={
            'id': user.id if user else None,
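The same parse-and-fallback block now appears in three handlers in this file; if it keeps spreading, a small helper could centralize it. A sketch — the function name is hypothetical and the logger wiring is an assumption, not code from the repo:

```python
import json
import logging
from typing import Optional

logger = logging.getLogger(__name__)


def parse_question_scores(raw: Optional[str], application_id: str) -> list:
    """Decode the question_scores JSON string stored on an application.

    Returns an empty list when the column is NULL/empty or holds malformed
    JSON, mirroring the inline blocks in the route handlers above.
    """
    if not raw:
        return []
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        logger.warning(f"Failed to parse question scores for application ID: {application_id}")
        return []


# Each handler's nine-line block would then collapse to:
# question_scores = parse_question_scores(application.question_scores, application.id)
```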
backend/integrations/ai_integration/mistral_generator.py CHANGED
@@ -193,18 +193,42 @@ Job Information:
     if additional_note:
         job_details += f"- Additional Note: {additional_note}\n"
 
+     # Determine the recommended number of questions based on job complexity
+     if job_info:
+         # Adjust the number of questions based on job seniority and skills
+         seniority = job_info.get('seniority', '').lower()
+         skill_count = len(job_info.get('skill_categories', []))
+
+         # Base number of questions based on complexity
+         if seniority in ['senior', 'lead']:
+             base_questions = 15 # More questions for senior roles
+         elif seniority in ['mid', 'intermediate']:
+             base_questions = 12
+         else: # intern, junior
+             base_questions = 10
+
+         # Adjust based on number of skills to cover
+         adjusted_questions = base_questions + (skill_count // 2)
+
+         # Ensure we have at least one of each requested type if specified
+         min_questions = len(questions_types) # At least one per type requested
+         total_questions = max(adjusted_questions, min_questions)
+     else:
+         # Default if no job info is provided
+         total_questions = max(10, len(questions_types)) # At least 10 or requested types count
+
     prompt = f"""
 You are an assessment generator.
 
- Generate EXACTLY {len(questions_types)} questions for the following job.
+ Generate approximately {total_questions} questions for the following job. The number of questions should be appropriate for the job complexity and seniority level.
 
 {job_details}
 
 MANDATORY RULES:
- 1. Output MUST be a JSON ARRAY with EXACTLY {len(questions_types)} objects.
- 2. The list MUST contain:
- - {mcq_count} MCQ questions (multiple choice)
- - {text_count} TEXT questions (text-based)
+ 1. Output MUST be a JSON ARRAY with approximately {total_questions} objects.
+ 2. Distribute the questions among the requested types proportionally:
+ - Include MCQ questions (multiple choice) - both single and multiple answer types
+ - Include TEXT questions (text-based)
 3. Do NOT include explanations or markdown.
 4. Follow the schema EXACTLY.
 
@@ -223,6 +247,12 @@ Rules per type:
 - MCQ → 4 choices + correct_answer as the text of the correct choice
 - TEXT → correct_answer = null
 
+ Consider the following when generating questions:
+ - For senior positions, include more complex and scenario-based questions
+ - For junior positions, focus on fundamental concepts
+ - Ensure questions cover the skill categories mentioned in the job description
+ - Mix difficulty levels appropriately for the role
+
 Return ONLY the JSON array.
 """
 
@@ -240,8 +270,9 @@ Return ONLY the JSON array.
 
     # Determine the question type based on the response
     if q_data.get("type") == "MCQ":
-         # For multiple choice questions
-         question_type = QuestionType.choose_one # Default to choose_one
+         # For multiple choice questions, determine if it's single or multiple choice
+         # For now, default to choose_one, but we could enhance this logic later
+         question_type = QuestionType.choose_one
 
         # Create options
         options = []
@@ -249,7 +280,7 @@ Return ONLY the JSON array.
             option = AssessmentQuestionOption(text=choice, value=choice)
             options.append(option)
 
-         # Find the correct option
+         # Find the correct option(s)
         correct_options = []
         correct_answer = q_data.get("correct_answer")
         if correct_answer:
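The sizing heuristic added above is deterministic, so it can be lifted into a standalone function and sanity-checked without calling the AI provider at all. A sketch mirroring the diff's logic — the function name is illustrative, not part of the repo:

```python
def recommended_question_count(job_info: dict, questions_types: list) -> int:
    """Mirror of the diff's heuristic: base count by seniority,
    plus one extra question per two skill categories, never fewer
    than one question per requested type."""
    if not job_info:
        return max(10, len(questions_types))

    seniority = job_info.get('seniority', '').lower()
    skill_count = len(job_info.get('skill_categories', []))

    if seniority in ['senior', 'lead']:
        base = 15  # more questions for senior roles
    elif seniority in ['mid', 'intermediate']:
        base = 12
    else:  # intern, junior
        base = 10

    return max(base + skill_count // 2, len(questions_types))
```

For example, a senior role with four skill categories yields 15 + 4 // 2 = 17 questions regardless of how few types were requested.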
backend/integrations/ai_integration/mock_ai_generator.py CHANGED
@@ -13,39 +13,65 @@ class MockAIGenerator(AIGeneratorInterface):
     """
 
     def generate_questions(
-         self,
-         title: str,
-         questions_types: List[str],
-         additional_note: str = None,
+         self,
+         title: str,
+         questions_types: List[str],
+         additional_note: str = None,
         job_info: Dict[str, Any] = None
     ) -> List[AssessmentQuestion]:
         """
         Generate questions using mock AI logic based on job information.
        """
-         num_questions = len(questions_types)
+         # Determine the recommended number of questions based on job complexity
+         if job_info:
+             # Adjust the number of questions based on job seniority and skills
+             seniority = job_info.get('seniority', '').lower()
+             skill_count = len(job_info.get('skill_categories', []))
+
+             # Base number of questions based on complexity
+             if seniority in ['senior', 'lead']:
+                 base_questions = 15 # More questions for senior roles
+             elif seniority in ['mid', 'intermediate']:
+                 base_questions = 12
+             else: # intern, junior
+                 base_questions = 10
+
+             # Adjust based on number of skills to cover
+             adjusted_questions = base_questions + (skill_count // 2)
+
+             # Ensure we have at least one of each requested type if specified
+             min_questions = len(questions_types) # At least one per type requested
+             total_questions = max(adjusted_questions, min_questions)
+         else:
+             # Default if no job info is provided
+             total_questions = max(10, len(questions_types)) # At least 10 or requested types count
+
         generated_questions = []
-
-         for i, q_type in enumerate(questions_types):
+
+         for i in range(total_questions):
+             # Cycle through the requested question types to ensure variety
+             q_type = questions_types[i % len(questions_types)]
+
            # Create a question ID
            question_id = str(uuid.uuid4())
-
+
            # Generate question text based on the assessment title, job info and question type
            question_text = self._generate_question_text(title, q_type, i+1, additional_note, job_info)
-
+
            # Determine weight (random between 1-5)
            weight = random.randint(1, 5)
-
+
            # Generate skill categories based on the assessment title and job info
            skill_categories = self._generate_skill_categories(title, job_info)
-
+
            # Generate options and correct options based on the question type
            options = []
            correct_options = []
-
+
            if q_type in [QuestionType.choose_one.value, QuestionType.choose_many.value]:
                options = self._generate_multiple_choice_options(q_type, question_text)
                correct_options = self._select_correct_options(options, q_type)
-
+
            # Create the AssessmentQuestion object
            question = AssessmentQuestion(
                id=question_id,
@@ -56,9 +82,9 @@ class MockAIGenerator(AIGeneratorInterface):
                options=options,
                correct_options=correct_options
            )
-
+
            generated_questions.append(question)
-
+
        return generated_questions
 
    def _generate_question_text(self, title: str, q_type: str, question_number: int, additional_note: str = None, job_info: Dict[str, Any] = None) -> str:
@@ -390,17 +416,40 @@ class MockAIGenerator(AIGeneratorInterface):
        Returns:
            String response from the AI containing the estimated duration
        """
-         # For the mock implementation, we'll return a simple response with a number
-         # based on the length of the prompt and keywords
+         # For the mock implementation, we'll return a response with a number
+         # based on the length of the prompt and keywords, following the same logic as the AI service
        import random
-
+         import re
+
+         # Extract information from the prompt to determine duration
        # Count the number of questions mentioned in the prompt
        question_count = prompt.count("Question ") # Count occurrences of "Question "
 
-         # Estimate duration based on question count (3 minutes per question, with some randomness)
-         estimated_minutes = max(5, question_count * 3 + random.randint(-2, 5))
+         # Extract seniority level from the prompt
+         seniority_match = re.search(r'- Seniority: (\w+)', prompt)
+         seniority = seniority_match.group(1).lower() if seniority_match else 'junior'
+
+         # Count text-based questions mentioned in the prompt
+         text_questions = prompt.count("(Text-based question requiring written response)")
+
+         # Calculate base duration (at least 2 minutes per question)
+         base_duration = question_count * 2
+
+         # Adjust based on job seniority
+         if seniority in ['senior', 'lead']:
+             base_duration = int(base_duration * 1.5) # 50% more time for senior roles
+         elif seniority in ['mid', 'intermediate']:
+             base_duration = int(base_duration * 1.2) # 20% more time for mid-level roles
+         # Junior/intern roles get the base time (2 min per question)
 
+         # Adjust based on question complexity (text-based questions take more time)
+         if text_questions > 0:
+             base_duration += text_questions * 2 # Additional 2 minutes per text question
+
+         # Add some randomness to simulate realistic variations
+         estimated_minutes = max(5, base_duration + random.randint(-1, 3))
+
        # Ensure it's within reasonable bounds
        estimated_minutes = min(180, max(5, estimated_minutes))
-
+
        return f"{estimated_minutes} minutes"
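The mock's duration estimate is a deterministic base plus a small random jitter; factoring out the deterministic part makes the 5-180 minute clamp easy to verify in isolation. A sketch using the same multipliers as the diff — the function name is illustrative, not part of the repo:

```python
def base_duration_minutes(question_count: int, seniority: str, text_questions: int) -> int:
    """Deterministic core of the mock estimate: 2 minutes per question,
    scaled up for more senior roles, plus 2 extra minutes per
    free-text answer, clamped to the 5-180 minute window."""
    duration = question_count * 2

    # Same seniority multipliers as the mock generator
    if seniority in ['senior', 'lead']:
        duration = int(duration * 1.5)
    elif seniority in ['mid', 'intermediate']:
        duration = int(duration * 1.2)
    # junior/intern roles keep the base time

    # Text-based questions take longer to answer
    duration += text_questions * 2

    # Clamp to reasonable bounds before any jitter is applied
    return min(180, max(5, duration))
```

The random jitter in the mock only shifts this value by a few minutes, so these bounds hold for the full implementation as well.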
backend/models/application.py CHANGED
@@ -1,4 +1,4 @@
- from sqlalchemy import Column, String, Text, ForeignKey, DateTime
+ from sqlalchemy import Column, String, Text, ForeignKey, DateTime, Float
 from sqlalchemy.sql import func
 from .base import Base
 import uuid
@@ -11,5 +11,7 @@ class Application(Base):
     assessment_id = Column(String, ForeignKey("assessments.id"), nullable=False)
     user_id = Column(String, ForeignKey("users.id"), nullable=False)
     answers = Column(Text) # Stored as JSON string
+     score = Column(Float) # Overall application score
+     question_scores = Column(Text) # Individual question scores stored as JSON string
     created_at = Column(DateTime(timezone=True), server_default=func.now())
     updated_at = Column(DateTime(timezone=True), onupdate=func.now())
backend/schemas/application.py CHANGED
@@ -23,6 +23,11 @@ class ApplicationQuestion(BaseModel):
     options: Optional[List[dict]] = []  # Using dict for simplicity
     correct_options: Optional[List[str]] = []

+class ApplicationAnswerScore(BaseModel):
+    question_id: str
+    score: float  # Score between 0 and 1
+    rationale: str
+
 class ApplicationAnswerWithQuestion(ApplicationAnswer):
     question_text: str = Field(..., min_length=1, max_length=1000)
     weight: int = Field(..., ge=1, le=5)  # range 1-5
@@ -31,6 +36,7 @@ class ApplicationAnswerWithQuestion(ApplicationAnswer):
     question_options: Optional[List[dict]] = []  # Options for the question
     correct_options: Optional[List[str]] = []
     rationale: str = Field(..., min_length=1, max_length=1000)
+    score: Optional[float] = None  # Score for this specific answer

 class ApplicationBase(BaseSchema):
     job_id: str = Field(..., min_length=1)
@@ -52,9 +58,10 @@ class ApplicationAssessment(BaseModel):

 class ApplicationResponse(ApplicationBase):
     id: str
-    score: Optional[float] = None
+    score: Optional[float] = None  # Overall application score
     passing_score: Optional[float] = None
     assessment_details: Optional[ApplicationAssessment] = None
+    question_scores: Optional[List[ApplicationAnswerScore]] = None  # Individual question scores

     class Config:
         from_attributes = True
backend/services/ai_service.py CHANGED
@@ -106,15 +106,15 @@ def estimate_assessment_duration(title: str, job_info: dict, questions: List[Ass
     prompt = f"""
     Based on the following assessment details, estimate how many minutes a candidate would need to complete this assessment.
     Consider the complexity of the questions and the job requirements.
-
+
     Assessment Title: {title}
-
+
     Job Information:
     - Title: {job_info.get('title', 'N/A')}
    - Seniority: {job_info.get('seniority', 'N/A')}
     - Description: {job_info.get('description', 'N/A')}
     - Skill Categories: {', '.join(job_info.get('skill_categories', []))}
-
+
     Questions Count: {len(questions)}
     """
@@ -129,7 +129,11 @@ def estimate_assessment_duration(title: str, job_info: dict, questions: List[Ass
     if additional_note:
         prompt += f"\nAdditional Notes: {additional_note}"

-    prompt += "\n\nPlease provide only a number representing the estimated duration in minutes."
+    prompt += """
+    \n\nPlease provide only a number representing the estimated duration in minutes.
+    Consider that each question should take at least 2 minutes to answer, with additional time for complex questions,
+    especially for senior roles or text-based questions requiring detailed responses.
+    """

     # Get the AI's estimation
     duration_estimate = ai_generator.estimate_duration(prompt)
@@ -138,12 +142,51 @@ def estimate_assessment_duration(title: str, job_info: dict, questions: List[Ass
     duration_match = re.search(r'\d+', duration_estimate)
     if duration_match:
         duration_minutes = int(duration_match.group())
-        # Ensure the duration is within reasonable bounds (1-180 minutes)
-        duration_minutes = max(1, min(180, duration_minutes))
-        logger.info(f"Estimated duration for assessment '{title}': {duration_minutes} minutes")
+
+        # Apply our own logic to ensure minimum duration per question and adjust based on job seniority
+        # Calculate base duration (at least 2 minutes per question)
+        base_duration = len(questions) * 2
+
+        # Adjust based on job seniority
+        seniority = job_info.get('seniority', '').lower()
+        if seniority in ['senior', 'lead']:
+            base_duration = int(base_duration * 1.5)  # 50% more time for senior roles
+        elif seniority in ['mid', 'intermediate']:
+            base_duration = int(base_duration * 1.2)  # 20% more time for mid-level roles
+        # Junior/intern roles get the base time (2 min per question)
+
+        # Adjust based on question complexity (text-based questions take more time)
+        text_questions = sum(1 for q in questions if q.type == 'text_based')
+        if text_questions > 0:
+            # Add extra time for text-based questions (they typically take longer)
+            base_duration += text_questions * 2  # Additional 2 minutes per text question
+
+        # Take the maximum of the AI estimation and our calculated minimum
+        duration_minutes = max(duration_minutes, base_duration)
+
+        # Ensure the duration is within reasonable bounds (5-180 minutes)
+        duration_minutes = max(5, min(180, duration_minutes))
+        logger.info(f"Estimated duration for assessment '{title}': {duration_minutes} minutes (AI: {int(duration_match.group())}, calculated min: {base_duration})")
         return duration_minutes
     else:
         # If no number is found in the response, return a default duration based on question count
-        default_duration = min(60, max(5, len(questions) * 3))  # 3 minutes per question, capped at 60
-        logger.warning(f"No duration found in AI response. Using default: {default_duration} minutes")
+        # Calculate base duration (at least 2 minutes per question)
+        base_duration = len(questions) * 2
+
+        # Adjust based on job seniority
+        seniority = job_info.get('seniority', '').lower()
+        if seniority in ['senior', 'lead']:
+            base_duration = int(base_duration * 1.5)  # 50% more time for senior roles
+        elif seniority in ['mid', 'intermediate']:
+            base_duration = int(base_duration * 1.2)  # 20% more time for mid-level roles
+
+        # Adjust based on question complexity (text-based questions take more time)
+        text_questions = sum(1 for q in questions if q.type == 'text_based')
+        if text_questions > 0:
+            base_duration += text_questions * 2  # Additional 2 minutes per text question
+
+        # Ensure minimum duration is at least 5 minutes
+        default_duration = max(5, base_duration)
+
+        logger.warning(f"No duration found in AI response. Using calculated default: {default_duration} minutes")
         return default_duration
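The seniority and complexity adjustments added in both branches above follow the same arithmetic, so they can be sketched as one standalone helper. This is a minimal illustration, not the project's code: the `Question` dataclass and `minimum_duration` name here are hypothetical stand-ins for the real `AssessmentQuestion` schema.

```python
from dataclasses import dataclass

@dataclass
class Question:
    type: str  # e.g. 'choose_one', 'choose_many', 'text_based' (illustrative)

def minimum_duration(questions: list[Question], seniority: str) -> int:
    """Sketch of the duration floor computed in estimate_assessment_duration."""
    base = len(questions) * 2  # at least 2 minutes per question
    level = seniority.lower()
    if level in ('senior', 'lead'):
        base = int(base * 1.5)  # 50% more time for senior roles
    elif level in ('mid', 'intermediate'):
        base = int(base * 1.2)  # 20% more time for mid-level roles
    # Text-based questions get an additional 2 minutes each
    base += 2 * sum(1 for q in questions if q.type == 'text_based')
    return max(5, min(180, base))  # clamp to the 5-180 minute bounds

# e.g. 4 choice questions + 1 text question for a senior role:
# base 10 -> 15 after the 1.5x adjustment -> 17 with the text bonus
```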
backend/services/application_service.py CHANGED
@@ -51,19 +51,25 @@ def get_applications_by_user(db: Session, user_id: str, skip: int = 0, limit: in
     return applications

 def create_application(db: Session, application: ApplicationCreate) -> Application:
-    """Create a new application"""
+    """Create a new application and calculate scores"""
     logger.info(f"Creating new application for job ID: {application.job_id}, assessment ID: {application.assessment_id}, user ID: {application.user_id}")
+
+    # Calculate scores for the application
+    score, question_scores = calculate_detailed_application_score(db, application)
+
     db_application = Application(
         id=str(uuid.uuid4()),
         job_id=application.job_id,
         assessment_id=application.assessment_id,
         user_id=application.user_id,
-        answers=json.dumps([ans.dict() for ans in application.answers])  # Store as JSON string
+        answers=json.dumps([ans.dict() for ans in application.answers]),  # Store as JSON string
+        score=score,  # Store the overall application score
+        question_scores=json.dumps([qs.dict() for qs in question_scores])  # Store individual question scores as JSON
     )
     db.add(db_application)
     db.commit()
     db.refresh(db_application)
-    logger.info(f"Successfully created application with ID: {db_application.id}")
+    logger.info(f"Successfully created application with ID: {db_application.id} and overall score: {score}")
     return db_application

 def update_application(db: Session, application_id: str, **kwargs) -> Optional[Application]:
@@ -95,6 +101,106 @@ def delete_application(db: Session, application_id: str) -> bool:
     logger.warning(f"Failed to delete application - application not found: {application_id}")
     return False

+def calculate_detailed_application_score(db: Session, application_create: ApplicationCreate):
+    """Calculate detailed scores for an application including individual question scores"""
+    from models.assessment import Assessment
+    from schemas.application import ApplicationAnswerScore
+
+    logger.debug(f"Calculating detailed scores for application - job ID: {application_create.job_id}, assessment ID: {application_create.assessment_id}")
+
+    # Get the associated assessment to compare answers with correct answers
+    assessment = db.query(Assessment).filter(Assessment.id == application_create.assessment_id).first()
+    if not assessment:
+        logger.warning(f"Assessment not found for ID: {application_create.assessment_id}")
+        return 0.0, []
+
+    # Parse the questions
+    import json
+    try:
+        questions = json.loads(assessment.questions) if assessment.questions else []
+    except json.JSONDecodeError:
+        logger.error(f"Failed to parse questions for assessment ID: {application_create.assessment_id}")
+        return 0.0, []
+
+    # Create a mapping of question_id to question for easy lookup
+    question_map = {q['id']: q for q in questions}
+
+    # Calculate the scores
+    total_points = 0
+    earned_points = 0
+    question_scores = []
+
+    for answer in application_create.answers:
+        question_id = answer.question_id
+        if not question_id or question_id not in question_map:
+            continue
+
+        question_data = question_map[question_id]
+
+        # Calculate weighted score
+        question_weight = question_data.get('weight', 1)  # Default weight is 1
+        total_points += question_weight
+
+        # Initialize question score object
+        question_score_obj = ApplicationAnswerScore(
+            question_id=question_id,
+            score=0.0,
+            rationale="No rationale available"
+        )
+
+        # For multiple choice questions, score directly without AI
+        if question_data['type'] in ['choose_one', 'choose_many']:
+            correct_options = set(question_data.get('correct_options', []))
+            selected_options = set(answer.options or [])
+
+            # Check if the selected options match the correct options exactly
+            if selected_options == correct_options:
+                earned_points += question_weight  # Full points for correct answer
+                question_score_obj.score = 1.0  # Perfect score
+                question_score_obj.rationale = "Correct answer"
+            else:
+                question_score_obj.score = 0.0  # No points for incorrect answer
+                question_score_obj.rationale = f"Incorrect. Correct options: {list(correct_options)}, Selected: {list(selected_options)}"

+        # For text-based questions, use AI to evaluate the answer
+        elif question_data['type'] == 'text_based':
+            # Convert the question data to an AssessmentQuestion object
+            from schemas.assessment import AssessmentQuestion, AssessmentQuestionOption
+            from schemas.enums import QuestionType
+            question_obj = AssessmentQuestion(
+                id=question_data['id'],
+                text=question_data['text'],
+                weight=question_data['weight'],
+                skill_categories=question_data['skill_categories'],
+                type=QuestionType(question_data['type']),
+                options=[AssessmentQuestionOption(text=opt['text'], value=opt['value']) for opt in question_data.get('options', [])],
+                correct_options=question_data.get('correct_options', [])
+            )
+
+            # Use AI service to score the text-based answer
+            from services.ai_service import score_answer
+            score_result = score_answer(
+                question=question_obj,
+                answer_text=answer.text or '',
+                selected_options=answer.options or []
+            )
+
+            earned_points += score_result['score'] * question_weight
+            question_score_obj.score = score_result['score']
+            question_score_obj.rationale = score_result['rationale']
+
+        question_scores.append(question_score_obj)
+
+    # Calculate percentage score
+    if total_points > 0:
+        overall_score = (earned_points / total_points) * 100
+    else:
+        overall_score = 0.0
+
+    logger.debug(f"Calculated detailed scores: overall {overall_score}% ({earned_points}/{total_points} points), {len(question_scores)} questions scored")
+    return round(overall_score, 2), question_scores
+
+
 def calculate_application_score(db: Session, application_id: str) -> float:
     """Calculate the score for an application"""
     logger.debug(f"Calculating score for application ID: {application_id}")
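The weighted aggregation at the heart of `calculate_detailed_application_score` can be illustrated in isolation. This is a sketch under the diff's own assumptions (per-question scores are fractions in [0, 1]; weights are the 1-5 integers from the schema); the `aggregate_score` helper is hypothetical, not part of the service.

```python
def aggregate_score(scored: list[tuple[float, int]]) -> float:
    """Combine per-question (score, weight) pairs into a 0-100 percentage,
    mirroring the earned_points / total_points arithmetic in the service."""
    total_points = sum(weight for _, weight in scored)
    earned_points = sum(score * weight for score, weight in scored)
    if total_points == 0:
        return 0.0  # no scorable answers, as when the assessment is missing
    return round((earned_points / total_points) * 100, 2)

# A fully correct choice question (weight 2), a half-credit text answer
# (weight 4), and a wrong answer (weight 1) yield 4/7 of the points:
# aggregate_score([(1.0, 2), (0.5, 4), (0.0, 1)]) -> 57.14
```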