Reja1 Claude Opus 4.6 committed on
Commit e8aee0e · 1 Parent(s): f254190

Fix 7 medium-priority issues: resume, timeout, logging, prompts, dead code


- Add --resume flag to continue interrupted benchmark runs
- Uncomment request_timeout (200s) for thinking models
- Use print() for colored status output to avoid ANSI codes in log files
- Tailor reprompt examples to question type (no longer shows all formats)
- Fix api_success flag: now correctly False when API returns empty content
- Remove dead code: calculate_accuracy (unused in production)
- Fix misleading summary.md calculation: show category breakdown instead of
fake arithmetic formula that didn't account for partial marks
- Update README.md: document --resume, fix output file descriptions,
update scoring notes, add resume and rate-limit retry documentation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

README.md CHANGED
@@ -223,27 +223,36 @@ This repository contains scripts to run the benchmark evaluation directly:
 python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "google/gemini-2.5-pro-preview-03-25" --question_ids "N24T3001,N24T3002,JA24P1M01"
 ```
 
+**Resume an interrupted run:**
+```bash
+# Resume from an existing results directory (skips already-completed questions)
+python src/benchmark_runner.py --model "google/gemini-2.5-pro-preview-03-25" --resume results/google_gemini-2.5-pro-preview-03-25_NEET_2024_20250524_141230
+```
+
 **Custom output directory:**
 ```bash
 python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "openai/gpt-4o" --output_dir my_custom_results
 ```
 
-**Available filtering options:**
+**Available options:**
 - `--exam_name`: Choose from `NEET`, `JEE_MAIN`, `JEE_ADVANCED`, or `all` (default)
 - `--exam_year`: Choose from available years (`2024`, `2025`, etc.) or `all` (default)
 - `--question_ids`: Comma-separated list of specific question IDs to evaluate (e.g., "N24T3001,JA24P1M01")
+- `--resume`: Path to an existing results directory to resume an interrupted run
 
 6. **Check Results:**
    * Results for each model run will be saved in timestamped subdirectories within the `results/` folder.
    * Each run's folder (e.g., `results/google_gemini-2.5-pro-preview-03-25_NEET_2024_20250524_141230/`) contains:
-     * **`predictions.jsonl`**: Detailed results for each question including:
-       - Model predictions and ground truth
+     * **`predictions.jsonl`**: Raw API responses for each question including:
        - Raw LLM responses
-       - Evaluation status and marks awarded
        - API call success/failure information
-     * **`summary.json`**: Overall scores and statistics in JSON format
+       - Parse success status and errors
+     * **`summary.jsonl`**: Per-question scored results including:
+       - Predicted answers and ground truth
+       - Evaluation status and marks awarded
      * **`summary.md`**: Human-readable Markdown summary with:
        - Overall exam scores
+       - Question type breakdown
        - Section-wise breakdown (by subject)
        - Detailed statistics on correct/incorrect/skipped questions
 
@@ -252,39 +261,47 @@ This repository contains scripts to run the benchmark evaluation directly:
 The benchmark implements authentic scoring systems for each exam type:
 
 ### NEET Scoring
-- **Single Correct MCQ**: +4 for correct, -1 for incorrect, 0 for skipped
+- **Single Correct MCQ**: +4 for correct, -1 for incorrect, 0 for skipped/API failure
 
 ### JEE Main Scoring
-- **Single Correct MCQ**: +4 for correct, -1 for incorrect, 0 for skipped
-- **Integer Type**: +4 for correct, 0 for incorrect, 0 for skipped
+- **Single Correct MCQ**: +4 for correct, -1 for incorrect, 0 for skipped/API failure
+- **Integer Type**: +4 for correct, 0 for incorrect, 0 for skipped/API failure
 
 ### JEE Advanced Scoring
-- **Single Correct MCQ**: +3 for correct, -1 for incorrect, 0 for skipped
-- **Multiple Correct MCQ**: Complex partial marking system:
+- **Single Correct MCQ**: +3 for correct, -1 for incorrect, 0 for skipped/API failure
+- **Multiple Correct MCQ**: Partial marking system:
   - +4 for all correct options selected
   - +3 for 3 out of 4 correct options (when 4 are correct)
   - +2 for 2 out of 3+ correct options
   - +1 for 1 out of 2+ correct options
   - -2 for any incorrect option selected
-  - 0 for skipped
-- **Integer Type**: +4 for correct, 0 for incorrect, 0 for skipped
+  - 0 for skipped/API failure
+- **Integer Type**: +4 for correct, 0 for incorrect, 0 for skipped/API failure
+
+> **Note:** API failures and parse failures are scored as 0 (no penalty) since they do not represent a deliberate wrong choice.
 
 ## Advanced Features
 
 ### Retry Mechanism
 - Automatic retry for failed API calls (up to 3 attempts with exponential backoff)
+- Retries on HTTP 429 (rate limit), 500, 502, 503, 504 status codes
 - Separate retry pass for questions that failed initially
 - Comprehensive error tracking and reporting
 
+### Resume Capability
+- Resume interrupted benchmark runs with `--resume <results_dir>`
+- Reads existing `summary.jsonl` to identify completed questions and skips them
+- Appends new results to the same output files
+
 ### Re-prompting System
 - If initial response parsing fails, the system automatically re-prompts the model
 - Uses the previous response to ask for properly formatted answers
-- Adapts prompts based on question type (MCQ vs Integer)
+- Shows only relevant format examples based on question type (MCQ single, MCQ multiple, or integer)
 
 ### Comprehensive Evaluation
 - Tracks multiple metrics: correct answers, partial credit, skipped questions, API failures
 - Section-wise breakdown by subject
-- Detailed logging with color-coded progress indicators
+- Color-coded progress indicators in terminal output
 
 ## Dataset Structure
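The JEE Advanced partial-marking rules documented above can be sketched as a standalone scorer. This is an illustrative sketch only, not the repository's `calculate_single_question_score_details` implementation; the function name and set-based interface are assumptions.

```python
def score_mcq_multiple(selected: set, correct: set) -> int:
    """Illustrative JEE Advanced scorer for multiple-correct MCQs (hypothetical helper).

    +4 if all correct options are selected, -2 if any wrong option is
    selected, 0 if skipped, and partial credit (+1 per correct option
    chosen) when only a subset of the correct options is selected.
    """
    if not selected:
        return 0               # skipped (or API/parse failure): no penalty
    if selected - correct:
        return -2              # any incorrect option selected
    if selected == correct:
        return 4               # all correct options selected
    return len(selected)       # partial: +1 per correct option chosen

print(score_mcq_multiple({"A", "C"}, {"A", "C"}))   # all correct -> 4
print(score_mcq_multiple({"A"}, {"A", "C"}))        # 1 of 2 correct -> 1
print(score_mcq_multiple({"A", "B"}, {"A", "C"}))   # wrong option -> -2
print(score_mcq_multiple(set(), {"A", "C"}))        # skipped -> 0
```

Note that `len(selected)` reproduces the +3/+2/+1 partial rules above because, with no wrong options chosen, each selected option is a correct one.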
 
configs/benchmark_config.yaml CHANGED
@@ -8,6 +8,7 @@ openrouter_models:
   - "openai/o3"
   - "openai/gpt-5"
   - "x-ai/grok-4-fast:free"
+  - "google/gemini-3-pro-preview"
   # - "google/gemini-pro-vision" # Example - uncomment or add others
   # - "anthropic/claude-3-opus" # Example - check vision support and access
   # - "anthropic/claude-3-sonnet"
@@ -24,4 +25,5 @@ results_base_dir: "results"
 # Maximum tokens to generate in the response. Keep it low as we expect only numbers.
 max_tokens: 10000
 # Timeout for each API request in seconds.
-#request_timeout: 200
+# Set high enough for thinking models (o3, Gemini Flash Thinking) which can take longer.
+request_timeout: 200
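Uncommenting `request_timeout` matters because the runner falls back to a shorter default when the key is absent. A minimal sketch of that lookup, using plain dicts to stand in for the loaded YAML config:

```python
# Hypothetical stand-ins for the dict produced by loading the YAML config.
config_with_timeout = {"max_tokens": 10000, "request_timeout": 200}
config_without_timeout = {"max_tokens": 10000}

# Mirrors the runner's config.get("request_timeout", 60) lookup:
# the value from the file wins; otherwise 60 seconds is used.
print(config_with_timeout.get("request_timeout", 60))     # -> 200
print(config_without_timeout.get("request_timeout", 60))  # -> 60
```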
src/benchmark_runner.py CHANGED
@@ -21,7 +21,7 @@ MAGENTA = '\033[95m' # For API failures
 from utils import load_api_key
 from llm_interface import get_openrouter_prediction
 # Import evaluation functions
-from evaluation import calculate_accuracy, calculate_exam_scores, calculate_single_question_score_details
+from evaluation import calculate_exam_scores, calculate_single_question_score_details
 
 # Configure logging
 logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
@@ -184,8 +184,7 @@ def generate_markdown_summary(summary: Dict[str, Any], filepath: str):
         for q_type in sorted_q_types:
             stats = question_type_breakdown[q_type]
             q_type_display = q_type.replace('_', ' ').title()
-            max_score_per_q = stats.get('max_score_per_question', 0)
-
+
             correct_count_q = stats.get('correct_full', 0)
             partial_count_q = stats.get('partial_correct', 0)
             incorrect_count_q = stats.get('incorrect_choice', 0)
@@ -193,31 +192,22 @@ def generate_markdown_summary(summary: Dict[str, Any], filepath: str):
             api_fail_count_q = stats.get('api_parse_failures', 0)
             score_q = stats.get('score', 0)
 
-            calculation_parts = []
+            breakdown_parts = []
             if correct_count_q > 0:
-                calculation_parts.append(f"{correct_count_q} Correct (+{max_score_per_q})")
+                breakdown_parts.append(f"{correct_count_q} Correct")
             if partial_count_q > 0:
-                # For partial, we can't easily show the exact score per question without more detail
-                # For now, just indicate partials.
-                calculation_parts.append(f"{partial_count_q} Partial")
+                breakdown_parts.append(f"{partial_count_q} Partial")
             if incorrect_count_q > 0:
-                # Need to know penalty for incorrect. Assuming -1 for MCQ_SINGLE_CORRECT, -2 for MCQ_MULTIPLE_CORRECT
-                # For INTEGER, penalty is 0. This needs to be more robust if penalties vary.
-                penalty_per_incorrect = 0
-                if q_type == "MCQ_SINGLE_CORRECT": penalty_per_incorrect = -1
-                elif q_type == "MCQ_MULTIPLE_CORRECT": penalty_per_incorrect = -2
-                calculation_parts.append(f"{incorrect_count_q} Incorrect ({penalty_per_incorrect})")
+                breakdown_parts.append(f"{incorrect_count_q} Incorrect")
             if skipped_count_q > 0:
-                calculation_parts.append(f"{skipped_count_q} Skipped (0)")
+                breakdown_parts.append(f"{skipped_count_q} Skipped")
             if api_fail_count_q > 0:
-                calculation_parts.append(f"{api_fail_count_q} API/Parse Fail (0)")
-
-            calculation_str = " + ".join(part for part in calculation_parts if part)
-            if not calculation_str:
-                calculation_str = "No questions of this type processed or all had 0 score change."
+                breakdown_parts.append(f"{api_fail_count_q} API/Parse Fail")
+
+            breakdown_str = ", ".join(breakdown_parts) if breakdown_parts else "No questions processed"
 
             md_content.append(f"**{q_type_display} ({stats.get('count', 0)} questions):** {score_q} marks")
-            md_content.append(f"  *Calculation:* {calculation_str} = {score_q}")
+            md_content.append(f"  *Breakdown:* {breakdown_str}")
         else:
             md_content.append("No question type breakdown available.")
@@ -313,7 +303,7 @@ def process_question(
         request_timeout=config.get("request_timeout", 60),
     )
 
-    api_success = True
+    api_success = raw_response is not None  # False if API returned empty content
     parse_success = parsed_answer is not None
 
     # --- Re-prompt Logic ---
@@ -352,8 +342,8 @@ def process_question(
         })
     else:
         current_error = result_data.get("error")
-        if api_success and raw_response is None and parsed_answer is None:
-            current_error = "API call returned empty content. Re-prompt skipped."
+        if not api_success:
+            current_error = "API call returned empty content."
 
     if isinstance(parsed_answer, list):
         parsed_answer = [str(item) for item in parsed_answer]
@@ -398,15 +388,15 @@ def log_question_result(result_data: Dict[str, Any], prefix: str = "") -> str:
     log_message_suffix = f"(Attempt {result_data['attempt']})"
 
     if not result_data["api_call_successful"]:
-        logging.info(f"{MAGENTA}{log_message_prefix} API Call Failed {log_message_suffix}{RESET}")
+        print(f"{MAGENTA}{log_message_prefix} API Call Failed {log_message_suffix}{RESET}")
         return "api_fail"
 
     if not result_data["parse_successful"]:
-        logging.info(f"{CYAN}{log_message_prefix} Failed to parse answer {log_message_suffix}{RESET}")
+        print(f"{CYAN}{log_message_prefix} Failed to parse answer {log_message_suffix}{RESET}")
        return "parse_fail"
 
     if result_data["predicted_answer"] == "SKIP":
-        logging.info(f"{YELLOW}{log_message_prefix} Skipped {log_message_suffix}{RESET}")
+        print(f"{YELLOW}{log_message_prefix} Skipped {log_message_suffix}{RESET}")
         return "skipped"
 
     marks_awarded = result_data.get("marks_awarded", 0)
@@ -435,24 +425,45 @@ def log_question_result(result_data: Dict[str, Any], prefix: str = "") -> str:
     known_eval_skip_statuses = ["SKIPPED_BY_EVAL", "SKIPPED"]
 
     if is_considered_correct:
-        logging.info(f"{GREEN}{log_message_prefix} Correct (log) - Marks: {marks_awarded}, Status: {log_display_status} {log_message_suffix}{RESET}")
+        print(f"{GREEN}{log_message_prefix} Correct - Marks: {marks_awarded}, Status: {log_display_status} {log_message_suffix}{RESET}")
         return "correct"
     elif status_check_string in known_eval_skip_statuses:
-        logging.info(f"{YELLOW}{log_message_prefix} Skipped by Eval - Marks: {marks_awarded}, Status: {log_display_status} {log_message_suffix}{RESET}")
+        print(f"{YELLOW}{log_message_prefix} Skipped by Eval - Marks: {marks_awarded}, Status: {log_display_status} {log_message_suffix}{RESET}")
         return "skipped"
     else:
-        logging.info(f"{RED}{log_message_prefix} Incorrect (log) - Marks: {marks_awarded}, Status: {log_display_status} {log_message_suffix}{RESET}")
+        print(f"{RED}{log_message_prefix} Incorrect - Marks: {marks_awarded}, Status: {log_display_status} {log_message_suffix}{RESET}")
         return "incorrect"
 
 
+def load_completed_question_ids(summary_details_path: str) -> set:
+    """Reads summary.jsonl and returns a set of question_ids that have already been processed."""
+    completed_ids = set()
+    if not os.path.exists(summary_details_path):
+        return completed_ids
+    try:
+        with open(summary_details_path, 'r') as f:
+            for line in f:
+                try:
+                    data = json.loads(line)
+                    qid = data.get("question_id")
+                    if qid:
+                        completed_ids.add(qid)
+                except json.JSONDecodeError:
+                    continue
+    except IOError as e:
+        logging.warning(f"Could not read {summary_details_path} for resume: {e}")
+    return completed_ids
+
+
 def run_benchmark(
     config: dict,
     api_key: str,
-    model_to_run: str, # Changed from models_override
+    model_to_run: str,
     output_dir_override: str | None = None,
-    exam_name_choice: str | None = None, # Changed from exam_name_filter
-    exam_year_choice: str | None = None, # Changed from exam_year_filter
-    question_ids_str: str | None = None # New argument
+    exam_name_choice: str | None = None,
+    exam_year_choice: str | None = None,
+    question_ids_str: str | None = None,
+    resume_dir: str | None = None
 ):
     """Runs the benchmark evaluation loop with incremental saving and retries."""
@@ -521,34 +532,50 @@ def run_benchmark(
 
     # --- Main Loop: Iterate through models ---
     for model_id in models_to_run:
-        # total_questions here should refer to the length of the potentially filtered dataset
-        current_total_questions = len(dataset)
-        logging.info(f"--- Starting benchmark for model: {model_id} (Processing {current_total_questions} questions) ---")
-
-        # Create timestamped output directory for this model run
-        timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
-        safe_model_name = model_id.replace('/', '_') # model_id is already the single model_to_run
-        dir_name_parts = [safe_model_name]
-
-        # Append exam name and year to directory if they are specific (not "all")
-        current_exam_name_for_dir = exam_name_choice if exam_name_choice and exam_name_choice.lower() != "all" else "AllExams"
-        current_exam_year_for_dir = exam_year_choice if exam_year_choice and exam_year_choice.lower() != "all" else "AllYears"
-
-        if current_exam_name_for_dir != "AllExams":
-            dir_name_parts.append(current_exam_name_for_dir.replace('/', '_'))
-        if current_exam_year_for_dir != "AllYears":
-            dir_name_parts.append(str(current_exam_year_for_dir)) # Already string or 'all'
-
-        dir_name_parts.append(timestamp)
-
-        model_output_dir_name = "_".join(filter(None, dir_name_parts)) # Filter out None if any part was None
-        model_output_dir = os.path.join(base_output_dir, model_output_dir_name)
+        logging.info(f"--- Starting benchmark for model: {model_id} ---")
+
+        # Set up output directory (resume existing or create new)
+        if resume_dir:
+            model_output_dir = resume_dir
+            timestamp = os.path.basename(resume_dir).rsplit('_', 2)[-2] + '_' + os.path.basename(resume_dir).rsplit('_', 1)[-1] if '_' in os.path.basename(resume_dir) else datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
+        else:
+            timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
+            safe_model_name = model_id.replace('/', '_')
+            dir_name_parts = [safe_model_name]
+
+            current_exam_name_for_dir = exam_name_choice if exam_name_choice and exam_name_choice.lower() != "all" else "AllExams"
+            current_exam_year_for_dir = exam_year_choice if exam_year_choice and exam_year_choice.lower() != "all" else "AllYears"
+
+            if current_exam_name_for_dir != "AllExams":
+                dir_name_parts.append(current_exam_name_for_dir.replace('/', '_'))
+            if current_exam_year_for_dir != "AllYears":
+                dir_name_parts.append(str(current_exam_year_for_dir))
+
+            dir_name_parts.append(timestamp)
+
+            model_output_dir_name = "_".join(filter(None, dir_name_parts))
+            model_output_dir = os.path.join(base_output_dir, model_output_dir_name)
+
         os.makedirs(model_output_dir, exist_ok=True)
         predictions_path = os.path.join(model_output_dir, "predictions.jsonl")
-        summary_details_path = os.path.join(model_output_dir, "summary.jsonl") # New file for per-question summary details
-        markdown_summary_path = os.path.join(model_output_dir, "summary.md") # Define path for MD summary
+        summary_details_path = os.path.join(model_output_dir, "summary.jsonl")
+        markdown_summary_path = os.path.join(model_output_dir, "summary.md")
         logging.info(f"Results for {model_id} will be saved to: {model_output_dir}")
 
+        # Resume: skip already-completed questions
+        if resume_dir:
+            completed_ids = load_completed_question_ids(summary_details_path)
+            if completed_ids:
+                logging.info(f"Resuming: found {len(completed_ids)} already-completed questions. Skipping them.")
+                dataset = dataset.filter(lambda example: example.get('question_id') not in completed_ids)
+                logging.info(f"Remaining questions to process: {len(dataset)}")
+                if len(dataset) == 0:
+                    logging.info("All questions already completed. Nothing to resume.")
+                    return
+
+        current_total_questions = len(dataset)
+        logging.info(f"Processing {current_total_questions} questions for model: {model_id}")
+
         model_results = []  # Stores results in memory for final calculation
         failed_questions_data = []  # Stores data needed to retry failed questions
@@ -744,6 +771,12 @@ if __name__ == "__main__":
         default=None,
         help="Optional: Comma-separated list of specific question IDs to run (e.g., ID1,ID2,ID3)."
     )
+    parser.add_argument(
+        "--resume",
+        type=str,
+        default=None,
+        help="Optional: Path to an existing results directory to resume an interrupted run."
+    )
     args = parser.parse_args()
 
     # Dynamically update config path if user provides a different one
@@ -774,13 +807,14 @@ if __name__ == "__main__":
 
         # Run the benchmark
         run_benchmark(
             config=config,
             api_key=api_key,
             model_to_run=args.model,
             output_dir_override=args.output_dir,
             exam_name_choice=args.exam_name,
             exam_year_choice=args.exam_year,
-            question_ids_str=args.question_ids
+            question_ids_str=args.question_ids,
+            resume_dir=args.resume
         )
     except (ValueError, FileNotFoundError, yaml.YAMLError) as e:
         logging.error(f"Setup failed: {e}")
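The resume logic added above keys off `question_id` values in `summary.jsonl`. The tolerant JSONL scan can be exercised in isolation with an in-memory file; `completed_ids_from_jsonl` is a hypothetical stand-in that mirrors `load_completed_question_ids` minus the filesystem handling:

```python
import io
import json

def completed_ids_from_jsonl(lines) -> set:
    """Collect question_ids from JSONL lines, skipping malformed entries
    (mirrors load_completed_question_ids, minus the filesystem handling)."""
    completed = set()
    for line in lines:
        try:
            data = json.loads(line)
        except json.JSONDecodeError:
            continue  # tolerate e.g. a truncated final line from an interrupted run
        qid = data.get("question_id")
        if qid:
            completed.add(qid)
    return completed

summary = io.StringIO(
    '{"question_id": "N24T3001", "marks_awarded": 4}\n'
    '{"question_id": "N24T3002", "marks_awarded": -1}\n'
    '{"question_id": "JA24P1M'  # truncated write from an interrupted run
)
done = completed_ids_from_jsonl(summary)
print(sorted(done))  # -> ['N24T3001', 'N24T3002']
```

Skipping unparseable lines rather than failing matters here: a run killed mid-write leaves a partial final line, and resume should still recover everything before it.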
src/evaluation.py CHANGED
@@ -5,58 +5,6 @@ from typing import List, Optional, Union, Dict, Any
 # Configure logging
 logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
 
-def calculate_accuracy(predictions: List[Optional[List[str]]], ground_truths: List[List[str]]) -> float:
-    """
-    Calculates the accuracy of LLM predictions against ground truths.
-
-    Accuracy is defined as the percentage of predictions where the predicted list
-    of answer strings exactly matches the ground truth list of answer strings
-    (order within the list does not matter, comparison is case-insensitive).
-    A prediction of None (due to parsing failure) is considered incorrect.
-
-    Args:
-        predictions (List[Optional[List[str]]]): A list where each element is either a list
-                                                 of predicted answer strings or None if
-                                                 parsing failed for that question.
-        ground_truths (List[List[str]]): A list where each element is a list of
-                                         ground truth answer strings.
-
-    Returns:
-        float: The calculated accuracy (between 0.0 and 1.0).
-
-    Raises:
-        ValueError: If the lengths of predictions and ground_truths lists do not match.
-    """
-    if len(predictions) != len(ground_truths):
-        raise ValueError(f"Length mismatch: Predictions ({len(predictions)}) vs Ground Truths ({len(ground_truths)})")
-
-    if not ground_truths:
-        return 0.0 # Avoid division by zero if the list is empty
-
-    correct_count = 0
-    for i, pred_list_orig in enumerate(predictions):
-        truth_list_orig = ground_truths[i]
-
-        # Convert to uppercase and sort for case-insensitive, order-independent comparison
-        # Handle None case for prediction
-        # Ensure elements are strings before calling upper()
-        sorted_pred = sorted([str(p).upper() for p in pred_list_orig]) if pred_list_orig is not None else None
-        sorted_truth = sorted([str(t).upper() for t in truth_list_orig])
-
-        if sorted_pred is not None and sorted_pred == sorted_truth:
-            correct_count += 1
-        else:
-            # Log incorrect only if prediction was not None (parsing succeeded but answer wrong)
-            if sorted_pred is not None:
-                logging.debug(f"Incorrect prediction for index {i}: Pred={sorted_pred}, Truth={sorted_truth} (Original Pred: {pred_list_orig}, Original Truth: {truth_list_orig})")
-            # If sorted_pred is None, it means parsing failed or API failed, already counted as incorrect implicitly
-
-    accuracy = correct_count / len(ground_truths)
-    logging.info(f"Accuracy calculated: {correct_count}/{len(ground_truths)} = {accuracy:.4f}")
-    return accuracy
-
 
 def get_subject_as_section(subject: str, question_num_for_log: int) -> Optional[str]:
     """
@@ -410,27 +358,6 @@ def calculate_exam_scores(results: List[Dict[str, Any]]) -> Dict[str, Any]:
 if __name__ == '__main__':
     print("Running evaluation tests...")
 
-    # --- Test calculate_accuracy (now with strings) ---
-    print("\n--- Testing calculate_accuracy ---")
-    preds1_str = [["1"], ["2"], ["1", "3"]]
-    truths1_str = [["1"], ["2"], ["3", "1"]]
-    acc1_str = calculate_accuracy(preds1_str, truths1_str)
-    print(f"Test Case 1 (Accuracy - String): Preds={preds1_str}, Truths={truths1_str} -> Accuracy: {acc1_str} (Expected: 1.0)")
-    assert acc1_str == 1.0
-
-    preds2_str = [["A"], ["B"], ["A", "C"]]
-    truths2_str = [["a"], ["b"], ["c", "a"]] # Test case-insensitivity
-    acc2_str = calculate_accuracy(preds2_str, truths2_str)
-    print(f"Test Case 2 (Accuracy - String Case-Insensitive): Preds={preds2_str}, Truths={truths2_str} -> Accuracy: {acc2_str} (Expected: 1.0)")
-    assert acc2_str == 1.0
-
-    preds3_str = [["10"], None, ["5"]]
-    truths3_str = [["10"], ["7"], ["5"]]
-    acc3_str = calculate_accuracy(preds3_str, truths3_str)
-    print(f"Test Case 3 (Accuracy - String with None): Preds={preds3_str}, Truths={truths3_str} -> Accuracy: {acc3_str} (Expected: {2/3})")
-    assert acc3_str == (2/3)
-
-
     # --- Test calculate_exam_scores (now with strings) ---
     print("\n--- Testing calculate_exam_scores ---")
     test_results_exam = [
src/llm_interface.py CHANGED
@@ -8,11 +8,12 @@ from tenacity import retry, stop_after_attempt, wait_exponential, retry_if_excep
 
 from utils import parse_llm_answer
 from prompts import (
     INITIAL_PROMPT_TEMPLATE,
     REPROMPT_PROMPT_TEMPLATE,
     get_answer_format_instruction,
     get_example_instruction,
-    get_specific_instructions_reprompt
+    get_specific_instructions_reprompt,
+    get_reprompt_example_instruction
 )
 
 # Configure logging
@@ -52,11 +53,13 @@ def encode_image_to_base64(image: Image.Image) -> str:
 def construct_reprompt_prompt(previous_raw_response: str, question_type: str) -> list:
     """Constructs the message list for a re-prompt API call based on question_type."""
     specific_instructions = get_specific_instructions_reprompt(question_type)
+    reprompt_example_instruction = get_reprompt_example_instruction(question_type)
 
     prompt_text = REPROMPT_PROMPT_TEMPLATE.format(
         previous_raw_response=previous_raw_response,
         question_type=question_type,
-        specific_instructions=specific_instructions
+        specific_instructions=specific_instructions,
+        reprompt_example_instruction=reprompt_example_instruction
    )
     messages = [{"role": "user", "content": prompt_text}]
     return messages
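This module's actual retry policy is built with `tenacity` decorators (imported above). The documented behavior, up to 3 attempts with exponential backoff on retryable HTTP status codes (429, 500, 502, 503, 504), can be sketched with the standard library alone; the function and parameter names here are illustrative, not the module's API:

```python
import time

RETRYABLE_STATUS_CODES = {429, 500, 502, 503, 504}

def call_with_backoff(request_fn, max_attempts=3, base_delay=1.0):
    """Illustrative exponential backoff: retry request_fn while it returns a
    retryable HTTP status, doubling the wait between attempts."""
    for attempt in range(1, max_attempts + 1):
        status, body = request_fn()
        if status not in RETRYABLE_STATUS_CODES:
            return status, body  # success or a non-retryable error
        if attempt < max_attempts:
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, ...
    return status, body  # still failing after the final attempt

# Fake endpoint: rate-limited once, unavailable once, then succeeds.
responses = iter([(429, None), (503, None), (200, "ok")])
print(call_with_backoff(lambda: next(responses), base_delay=0.0))  # -> (200, 'ok')
```

Returning the last failing response (rather than raising) matches the pipeline's design of recording API failures per question and retrying them in a separate pass.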
src/prompts.py CHANGED
@@ -41,6 +41,13 @@ SPECIFIC_INSTRUCTIONS_REPROMPT = {
     "DEFAULT": "provide the answer according to the question format"
 }
 
+REPROMPT_EXAMPLE_INSTRUCTIONS = {
+    "MCQ_SINGLE_CORRECT": "Example for single correct MCQ option 'A': <answer>A</answer>\nExample for single correct MCQ option '2': <answer>2</answer>",
+    "MCQ_MULTIPLE_CORRECT": "Example for multiple correct MCQ options 'A' and 'C': <answer>A,C</answer>\nExample for single correct option 'B': <answer>B</answer>",
+    "INTEGER": "Example for integer answer: <answer>42</answer>\nExample for decimal answer: <answer>12.75</answer>",
+    "DEFAULT": "Example: <answer>Your Answer</answer>",
+}
+
 REPROMPT_PROMPT_TEMPLATE = """You previously provided the following response to an exam question:
 --- PREVIOUS RESPONSE START ---
 {previous_raw_response}
@@ -50,11 +57,7 @@ Your previous response did not correctly format the final answer within <answer>
 
 Please re-examine your previous reasoning and {specific_instructions}, enclosed in <answer> tags.
 
-Example for single correct MCQ option 'A': <answer>A</answer>
-Example for single correct MCQ option '2': <answer>2</answer>
-Example for multiple correct MCQ options 'A' and 'C': <answer>A,C</answer>
-Example for integer answer: <answer>42</answer>
-Example for decimal answer: <answer>12.75</answer>
+{reprompt_example_instruction}
 If you are unsure or cannot determine the answer: <answer>SKIP</answer>
 
 It is crucial that your response contains ONLY the <answer> tag with the correct option identifier(s), numerical value(s) OR the word SKIP inside. Do not include any other text, explanation, or formatting."""
@@ -69,3 +72,6 @@ def get_example_instruction(question_type: str) -> str:
 
 def get_specific_instructions_reprompt(question_type: str) -> str:
     return SPECIFIC_INSTRUCTIONS_REPROMPT.get(question_type, SPECIFIC_INSTRUCTIONS_REPROMPT["DEFAULT"])
+
+def get_reprompt_example_instruction(question_type: str) -> str:
+    return REPROMPT_EXAMPLE_INSTRUCTIONS.get(question_type, REPROMPT_EXAMPLE_INSTRUCTIONS["DEFAULT"])
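The effect of the new lookup is that each question type sees only its own format examples, with a generic fallback for anything unrecognized. A self-contained sketch of the same `dict.get`-with-default pattern (table entries abbreviated from the diff above; the unknown type name is hypothetical):

```python
REPROMPT_EXAMPLE_INSTRUCTIONS = {
    "MCQ_SINGLE_CORRECT": "Example for single correct MCQ option 'A': <answer>A</answer>",
    "MCQ_MULTIPLE_CORRECT": "Example for multiple correct MCQ options 'A' and 'C': <answer>A,C</answer>",
    "INTEGER": "Example for integer answer: <answer>42</answer>",
    "DEFAULT": "Example: <answer>Your Answer</answer>",
}

def get_reprompt_example_instruction(question_type: str) -> str:
    # Unknown question types fall back to the generic DEFAULT example.
    return REPROMPT_EXAMPLE_INSTRUCTIONS.get(question_type, REPROMPT_EXAMPLE_INSTRUCTIONS["DEFAULT"])

print(get_reprompt_example_instruction("INTEGER"))
# -> Example for integer answer: <answer>42</answer>
print(get_reprompt_example_instruction("MATCH_COLUMNS"))  # hypothetical unknown type
# -> Example: <answer>Your Answer</answer>
```

Keeping the fallback in the table itself (rather than hard-coding it at the call site) keeps every re-prompt variant reviewable in one place.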