Add Gemini Flash model to benchmark config and save results
- configs/benchmark_config.yaml +1 -0
- results/google_gemini-2.5-flash-preview-05-20:thinking_JEE_ADVANCED_2025_20250607_154346/predictions.jsonl +0 -0
- results/google_gemini-2.5-flash-preview-05-20:thinking_JEE_ADVANCED_2025_20250607_154346/summary.jsonl +96 -0
- results/google_gemini-2.5-flash-preview-05-20:thinking_JEE_ADVANCED_2025_20250607_154346/summary.md +33 -0
configs/benchmark_config.yaml
CHANGED

@@ -4,6 +4,7 @@
 openrouter_models:
 - "google/gemini-2.5-pro-preview-03-25"
 - "anthropic/claude-sonnet-4"
+- "google/gemini-2.5-flash-preview-05-20:thinking"
 # - "google/gemini-pro-vision" # Example - uncomment or add others
 # - "anthropic/claude-3-opus" # Example - check vision support and access
 # - "anthropic/claude-3-sonnet"
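As a minimal sketch of how a harness might consume this config, the snippet below extracts the `openrouter_models` entries. A real harness would use a YAML parser (e.g. PyYAML's `yaml.safe_load`); here the sequence items are scanned by hand to keep the example dependency-free, and the embedded config text simply mirrors the diff above.

```python
# Hand-rolled extraction of the openrouter_models list; the config
# text below is copied from the new version of benchmark_config.yaml.
config_text = '''\
openrouter_models:
- "google/gemini-2.5-pro-preview-03-25"
- "anthropic/claude-sonnet-4"
- "google/gemini-2.5-flash-preview-05-20:thinking"
'''

models = [
    line.lstrip("- ").strip('"')      # drop the "- " marker and quotes
    for line in config_text.splitlines()
    if line.startswith("- ")          # keep only sequence items
]
print(models)
```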
results/google_gemini-2.5-flash-preview-05-20:thinking_JEE_ADVANCED_2025_20250607_154346/predictions.jsonl
ADDED

The diff for this file is too large to render.
results/google_gemini-2.5-flash-preview-05-20:thinking_JEE_ADVANCED_2025_20250607_154346/summary.jsonl
ADDED
@@ -0,0 +1,96 @@
+{"question_id": "JA25P1P01", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 2}
+{"question_id": "JA25P1P02", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["D"], "ground_truth": ["D"], "attempt": 2}
+{"question_id": "JA25P1P03", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["D"], "ground_truth": ["A"], "attempt": 2}
+{"question_id": "JA25P1P04", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["B"], "ground_truth": ["C"], "attempt": 2}
+{"question_id": "JA25P1P05", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["B", "D"], "ground_truth": ["B", "D"], "attempt": 2}
+{"question_id": "JA25P1P06", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["D"], "ground_truth": ["D"], "attempt": 2}
+{"question_id": "JA25P1P07", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "D"], "ground_truth": ["A", "D"], "attempt": 2}
+{"question_id": "JA25P1P08", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["2"], "ground_truth": ["2"], "attempt": 2}
+{"question_id": "JA25P1P09", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["23"], "ground_truth": [["21", "25"]], "attempt": 2}
+{"question_id": "JA25P1P10", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["3"], "ground_truth": ["3"], "attempt": 2}
+{"question_id": "JA25P1P11", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["0.5"], "ground_truth": ["0.5", "0.75"], "attempt": 2}
+{"question_id": "JA25P1P12", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["76"], "ground_truth": [["75", "79"]], "attempt": 2}
+{"question_id": "JA25P1P13", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["72"], "ground_truth": ["72"], "attempt": 2}
+{"question_id": "JA25P1P14", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["C"], "ground_truth": ["C"], "attempt": 2}
+{"question_id": "JA25P1P15", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 2}
+{"question_id": "JA25P1P16", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["C"], "ground_truth": ["C"], "attempt": 2}
+{"question_id": "JA25P1C01", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 1}
+{"question_id": "JA25P1C02", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 2}
+{"question_id": "JA25P1C03", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["B"], "ground_truth": ["B"], "attempt": 2}
+{"question_id": "JA25P1C04", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["C"], "ground_truth": ["B"], "attempt": 2}
+{"question_id": "JA25P1C05", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["B", "C"], "ground_truth": ["B", "C"], "attempt": 2}
+{"question_id": "JA25P1C06", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "B"], "ground_truth": ["A", "B"], "attempt": 1}
+{"question_id": "JA25P1C07", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["B"], "ground_truth": ["B"], "attempt": 2}
+{"question_id": "JA25P1C08", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["100"], "ground_truth": ["100"], "attempt": 1}
+{"question_id": "JA25P1C09", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["2"], "ground_truth": [["2.2", "2.3"]], "attempt": 2}
+{"question_id": "JA25P1C10", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["-7.1"], "ground_truth": [["-7.2", "-7"]], "attempt": 2}
+{"question_id": "JA25P1C11", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["-30"], "ground_truth": [["-29.95", "-29.8"], ["29.8", "29.95"]], "attempt": 2}
+{"question_id": "JA25P1C12", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["280"], "ground_truth": ["280"], "attempt": 1}
+{"question_id": "JA25P1C13", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["170"], "ground_truth": ["175"], "attempt": 2}
+{"question_id": "JA25P1C14", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 1}
+{"question_id": "JA25P1C15", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["B"], "ground_truth": ["B"], "attempt": 2}
+{"question_id": "JA25P1C16", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["B"], "ground_truth": ["B"], "attempt": 2}
+{"question_id": "JA25P1M01", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["C"], "ground_truth": ["C"], "attempt": 2}
+{"question_id": "JA25P1M02", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 2}
+{"question_id": "JA25P1M03", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["C"], "ground_truth": ["C"], "attempt": 2}
+{"question_id": "JA25P1M04", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["16"], "ground_truth": ["C"], "attempt": 2}
+{"question_id": "JA25P1M05", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "C"], "ground_truth": ["A", "C"], "attempt": 2}
+{"question_id": "JA25P1M06", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "D"], "ground_truth": ["A", "D"], "attempt": 2}
+{"question_id": "JA25P1M07", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "D"], "ground_truth": ["A", "D"], "attempt": 2}
+{"question_id": "JA25P1M08", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["105"], "ground_truth": ["105"], "attempt": 2}
+{"question_id": "JA25P1M09", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["1.2"], "ground_truth": [["1.15", "1.25"]], "attempt": 2}
+{"question_id": "JA25P1M10", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["762"], "ground_truth": ["762"], "attempt": 2}
+{"question_id": "JA25P1M11", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["2.4"], "ground_truth": [["2.35", "2.45"]], "attempt": 2}
+{"question_id": "JA25P1M12", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["96"], "ground_truth": ["96"], "attempt": 2}
+{"question_id": "JA25P1M13", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["2"], "ground_truth": ["2"], "attempt": 2}
+{"question_id": "JA25P1M14", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["C"], "ground_truth": ["C"], "attempt": 2}
+{"question_id": "JA25P1M15", "marks_awarded": -1, "evaluation_status": "failure_api_or_parse", "predicted_answer": null, "ground_truth": ["B"], "attempt": 2}
+{"question_id": "JA25P1M16", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 2}
+{"question_id": "JA25P2P01", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["B"], "ground_truth": ["B"], "attempt": 2}
+{"question_id": "JA25P2P02", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["C"], "ground_truth": ["C"], "attempt": 2}
+{"question_id": "JA25P2P03", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["C"], "ground_truth": ["C"], "attempt": 2}
+{"question_id": "JA25P2P04", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["B"], "ground_truth": ["A", "B", "C", "D"], "attempt": 2}
+{"question_id": "JA25P2P05", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "B", "C"], "ground_truth": ["A", "B", "C"], "attempt": 2}
+{"question_id": "JA25P2P06", "marks_awarded": 1, "evaluation_status": "partial_1_of_2_plus", "predicted_answer": ["A"], "ground_truth": ["A", "B"], "attempt": 2}
+{"question_id": "JA25P2P07", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 2}
+{"question_id": "JA25P2P08", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "B", "C"], "ground_truth": ["A", "B", "C"], "attempt": 1}
+{"question_id": "JA25P2P09", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["1"], "ground_truth": [["1.65", "1.67"]], "attempt": 2}
+{"question_id": "JA25P2P10", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["12"], "ground_truth": [["11.7", "11.9"]], "attempt": 2}
+{"question_id": "JA25P2P11", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["1.6"], "ground_truth": ["1.6"], "attempt": 2}
+{"question_id": "JA25P2P12", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["2.331"], "ground_truth": [["2.3", "2.4"]], "attempt": 2}
+{"question_id": "JA25P2P13", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["0.2"], "ground_truth": ["0.2"], "attempt": 2}
+{"question_id": "JA25P2P14", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["2.4"], "ground_truth": ["1.2"], "attempt": 2}
+{"question_id": "JA25P2P15", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["170"], "ground_truth": [["167", "171"]], "attempt": 2}
+{"question_id": "JA25P2P16", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["32"], "ground_truth": [["26", "33"]], "attempt": 2}
+{"question_id": "JA25P2C01", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 2}
+{"question_id": "JA25P2C02", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["B"], "ground_truth": ["A"], "attempt": 2}
+{"question_id": "JA25P2C03", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["B"], "ground_truth": ["D"], "attempt": 2}
+{"question_id": "JA25P2C04", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["C"], "ground_truth": ["C"], "attempt": 2}
+{"question_id": "JA25P2C05", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["C", "D"], "ground_truth": ["C", "D"], "attempt": 1}
+{"question_id": "JA25P2C06", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["B", "D"], "ground_truth": ["B", "D"], "attempt": 1}
+{"question_id": "JA25P2C07", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "C"], "ground_truth": ["A", "C"], "attempt": 2}
+{"question_id": "JA25P2C08", "marks_awarded": -2, "evaluation_status": "incorrect_negative", "predicted_answer": ["A", "B", "C"], "ground_truth": ["B", "C"], "attempt": 2}
+{"question_id": "JA25P2C09", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["11"], "ground_truth": [["10.85", "11.1"]], "attempt": 2}
+{"question_id": "JA25P2C10", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["4"], "ground_truth": [["3.85", "4.15"]], "attempt": 2}
+{"question_id": "JA25P2C11", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["16"], "ground_truth": [["15.5", "16.5"]], "attempt": 2}
+{"question_id": "JA25P2C12", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["4"], "ground_truth": [["4", "4.25"]], "attempt": 2}
+{"question_id": "JA25P2C13", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["2"], "ground_truth": [["2.4", "2.55"]], "attempt": 2}
+{"question_id": "JA25P2C14", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["105.5"], "ground_truth": [["105.4", "105.6"]], "attempt": 2}
+{"question_id": "JA25P2C15", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["8"], "ground_truth": [["7.5", "7.8"]], "attempt": 2}
+{"question_id": "JA25P2C16", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["2"], "ground_truth": ["2"], "attempt": 2}
+{"question_id": "JA25P2M01", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["C"], "ground_truth": ["C"], "attempt": 2}
+{"question_id": "JA25P2M02", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["B"], "ground_truth": ["B"], "attempt": 2}
+{"question_id": "JA25P2M03", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["C"], "ground_truth": ["C"], "attempt": 2}
+{"question_id": "JA25P2M04", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 2}
+{"question_id": "JA25P2M05", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "B"], "ground_truth": ["A", "B"], "attempt": 2}
+{"question_id": "JA25P2M06", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "C"], "ground_truth": ["A", "C"], "attempt": 2}
+{"question_id": "JA25P2M07", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "C"], "ground_truth": ["A", "C"], "attempt": 2}
+{"question_id": "JA25P2M08", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["B", "C", "D"], "ground_truth": ["B", "C", "D"], "attempt": 2}
+{"question_id": "JA25P2M09", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["0.75"], "ground_truth": [["0.7", "0.8"]], "attempt": 2}
+{"question_id": "JA25P2M10", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["6"], "ground_truth": ["6"], "attempt": 2}
+{"question_id": "JA25P2M11", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["0.30"], "ground_truth": [["0.27", "0.33"]], "attempt": 2}
+{"question_id": "JA25P2M12", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["-2"], "ground_truth": ["-2"], "attempt": 2}
+{"question_id": "JA25P2M13", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["-2"], "ground_truth": ["-2"], "attempt": 2}
+{"question_id": "JA25P2M14", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["0.25"], "ground_truth": [["0.2", "0.3"]], "attempt": 2}
+{"question_id": "JA25P2M15", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["3"], "ground_truth": ["3"], "attempt": 2}
+{"question_id": "JA25P2M16", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["21"], "ground_truth": ["21"], "attempt": 2}
results/google_gemini-2.5-flash-preview-05-20:thinking_JEE_ADVANCED_2025_20250607_154346/summary.md
ADDED
@@ -0,0 +1,33 @@
+# Benchmark Results: google/gemini-2.5-flash-preview-05-20:thinking
+**Exam Name:** JEE_ADVANCED
+**Exam Year:** 2025
+**Timestamp:** 20250607_154346
+**Total Questions in Dataset:** 578
+**Questions Filtered Out:** 482
+**Total Questions Processed in this Run:** 96
+
+---
+
+## Exam Scoring Results
+**Overall Score:** **290** / **360**
+- **Fully Correct Answers:** 79
+- **Partially Correct Answers:** 1
+- **Incorrectly Answered (Choice Made):** 15
+- **Skipped Questions:** 0
+- **API/Parse Failures:** 1
+- **Total Questions Processed:** 96
+
+### Detailed Score Calculation by Question Type
+**Integer (42 questions):** 136 marks
+*Calculation:* 34 Correct (+4) + 8 Incorrect (0) = 136
+**Mcq Multiple Correct (21 questions):** 75 marks
+*Calculation:* 19 Correct (+4) + 1 Partial (+1) + 1 Incorrect (-2) = 75
+**Mcq Single Correct (33 questions):** 79 marks
+*Calculation:* 18 Correct (+3) + 8 Correct (+4) + 6 Incorrect (-1) + 1 API/Parse Fail (-1) = 79
+
+### Section Breakdown
+| Section | Score | Fully Correct | Partially Correct | Incorrect Choice | Skipped | API/Parse Failures |
+|---------------|-------|---------------|-------------------|------------------|---------|--------------------|
+| Chemistry | 82 | 23 | 0 | 9 | 0 | 0 |
+| Math | 111 | 30 | 0 | 1 | 0 | 1 |
+| Physics | 97 | 26 | 1 | 5 | 0 | 0 |
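The section breakdown above can likewise be derived from summary.jsonl. A minimal sketch, under the assumption (inferred from the data, not stated in the files) that in IDs like "JA25P1P01" the letter after the paper number ("P1"/"P2") encodes the subject: P = Physics, C = Chemistry, M = Math:

```python
import json
from collections import defaultdict

# Assumed subject code at index 6 of the question_id, e.g. "JA25P1P01"[6] == "P".
SUBJECTS = {"P": "Physics", "C": "Chemistry", "M": "Math"}

# Three sample records (trimmed to the fields used here).
lines = """\
{"question_id": "JA25P1P01", "marks_awarded": 3}
{"question_id": "JA25P1C01", "marks_awarded": 3}
{"question_id": "JA25P2M16", "marks_awarded": 4}
"""

scores = defaultdict(int)
for line in lines.splitlines():
    rec = json.loads(line)
    subject = SUBJECTS[rec["question_id"][6]]
    scores[subject] += rec["marks_awarded"]
print(dict(scores))  # {'Physics': 3, 'Chemistry': 3, 'Math': 4}
```

Run over the full file, the per-subject sums should match the Score column of the table (Chemistry 82, Math 111, Physics 97).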