Fix Correct-Column in Report
Changed files:

- CHANGELOG.md +33 -0
- app.py +4 -7
- output/gaia_results_20260104_184511.json +135 -0
CHANGELOG.md
CHANGED

@@ -95,6 +95,7 @@
 - Tests now verify: result["success"] == False, error message present, result is None

 **Test Results:**
+
 - ✅ All 99 tests passing (0 failures)
 - ✅ No regressions introduced by Stage 5 changes
 - ✅ Test suite run time: ~2min 40sec
@@ -124,12 +125,14 @@
 - Updated `synthesize_answer()` - Now uses `_call_with_fallback()` (simplified from 40 lines to 1 line)

 **Benefits:**
+
 - ✅ Easy debugging: Change `LLM_PROVIDER=groq` in .env to test specific provider
 - ✅ Clear logs: Know exactly which LLM handled each step
 - ✅ Isolated testing: Disable fallback to test single provider performance
 - ✅ Production safety: Enable fallback=true for deployment reliability

 **Verification:**
+
 - ✅ Config-based selection tested with Groq provider
 - ✅ Logs show "Using primary provider: groq"
 - ✅ Fallback disabled error handling works correctly
@@ -157,6 +160,7 @@
 - Updated button click handlers to pass new UI inputs to functions

 **Benefits:**
+
 - ✅ **Cloud testing:** Test all 4 providers directly from HF Space UI
 - ✅ **Instant switching:** No environment variable changes, no rebuild wait
 - ✅ **Clear visibility:** UI shows which provider is selected
@@ -164,6 +168,7 @@
 - ✅ **Production safety:** Fallback enabled by default for full evaluation

 **Verification:**
+
 - ✅ No syntax errors in app.py
 - ✅ UI components properly connected to function parameters

@@ -181,11 +186,13 @@
 - Changed variable references from constants to local variables

 **Solution:**
+
 - Config now read at runtime when function is called, not at module import
 - UI can set environment variables before function execution
 - Changes take effect immediately without module reload

 **Verification:**
+
 - ✅ UI dropdown selection "HuggingFace" correctly uses HuggingFace provider
 - ✅ Logs show "Using primary provider: huggingface" matching UI selection
 - ✅ Each test run can use different provider without restart
@@ -328,6 +335,32 @@
 - ✅ Correct answer parsing from submission response implemented
 - ⏳ Testing with real GAIA submission pending

+### [BUGFIX: Useless "Correct?" Column Message - Remove When No Data]
+
+**Problem:** "Correct?" column shows "See summary: 2/20 correct" for every row when GAIA API doesn't provide per-question correctness data. This is useless and clutters the table.
+
+**Root Cause:** GAIA API response doesn't include per-question correctness in `result_data["results"]`, only summary stats (`correct_count`, `total_attempted`). Code fell through to else clause showing same message for all rows.
+
+**Modified Files:**
+
+- **app.py** (~5 lines modified)
+  - Updated correct answer column logic (lines 406-410)
+  - Removed fallback "See summary" message
+  - Now only adds "Correct?" column if per-question correctness data available
+  - If no per-question data, column is simply omitted from results table
+
+**Solution:**
+
+- When `correct_task_ids` is empty (no per-question data), don't add "Correct?" column at all
+- JSON export still includes `"correct": null` for proper data structure
+- User sees score summary in submission status message instead
+
+**Verification:**
+
+- ✅ No useless repetitive message in results table
+- ✅ Column only appears when API provides per-question correctness
+- ⏳ Testing with real GAIA submission pending
+
 ### Created Files

 ### Deleted Files
app.py
CHANGED

@@ -403,14 +403,11 @@ def run_and_submit_all(llm_provider: str, enable_fallback: bool, profile: gr.OAu
             if item.get("correct"):
                 correct_task_ids.add(item.get("task_id"))

-        # Add "Correct?" column to results
-
-
-
+        # Add "Correct?" column to results (only if we have per-question correctness data)
+        if correct_task_ids:
+            for result in results_log:
+                task_id = result.get("Task ID")
                 result["Correct?"] = "✅ Yes" if task_id in correct_task_ids else "❌ No"
-            else:
-                # If no per-question data, show summary info
-                result["Correct?"] = f"See summary: {result_data.get('correct_count', '?')}/{result_data.get('total_attempted', '?')} correct"

     results_df = pd.DataFrame(results_log)
     # Export to JSON with execution time and submission response
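The new column logic can be exercised in isolation. A minimal sketch with fabricated rows: the `"Task ID"` row key and the shape of `result_data` follow the diff above, but the helper function itself is not part of app.py.

```python
def add_correct_column(results_log: list, result_data: dict) -> None:
    """Annotate each row with "Correct?" only when the API returned
    per-question correctness; otherwise leave the column out entirely."""
    correct_task_ids = {
        item.get("task_id")
        for item in result_data.get("results", [])
        if item.get("correct")
    }
    # No per-question data -> no column at all; the aggregate score is
    # shown in the submission status message instead.
    if correct_task_ids:
        for result in results_log:
            task_id = result.get("Task ID")
            result["Correct?"] = "✅ Yes" if task_id in correct_task_ids else "❌ No"
```

With per-question data present, rows get "✅ Yes"/"❌ No"; with an empty `correct_task_ids` set, every row is left untouched, so pandas never materializes a "Correct?" column.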
output/gaia_results_20260104_184511.json
ADDED

@@ -0,0 +1,135 @@
{
  "metadata": {
    "generated": "2026-01-04 18:45:11",
    "timestamp": "20260104_184511",
    "total_questions": 20,
    "execution_time_seconds": 43.25,
    "execution_time_formatted": "0m 43s",
    "score_percent": 10.0,
    "correct_count": 2,
    "total_attempted": 20
  },
  "submission_status": "Submission Successful!\nUser: mangoobee\nOverall Score: 10.0% (2/20 correct)\nMessage: Score calculated successfully: 2/20 total questions answered correctly (20 valid tasks attempted). Score did not improve previous record, leaderboard not updated.",
  "results": [
    {
      "task_id": "4fc2f1ae-8625-45b5-ab34-ad4433bc21f8",
      "question": "Who nominated the only Featured Article on English Wikipedia about a dinosaur that was promoted in November 2016?",
      "submitted_answer": "FunkMonk",
      "correct": null
    },
    {
      "task_id": "2d83110e-a098-4ebb-9987-066c06fa42d0",
      "question": ".rewsna eht sa \"tfel\" drow eht fo etisoppo eht etirw ,ecnetnes siht dnatsrednu uoy fI",
      "submitted_answer": "right",
      "correct": null
    },
    {
      "task_id": "8e867cd7-cff9-4e6c-867a-ff5ddc2550be",
      "question": "How many studio albums were published by Mercedes Sosa between 2000 and 2009 (included)? You can use the latest 2022 version of english wikipedia.",
      "submitted_answer": "2",
      "correct": null
    },
    {
      "task_id": "cca530fc-4052-43b2-b130-b30968d8aa44",
      "question": "Review the chess position provided in the image. It is black's turn. Provide the correct next move for black which guarantees a win. Please provide your response in algebraic notation.",
      "submitted_answer": "ERROR: No evidence collected. Details: Tool vision failed: Exception: Vision analysis failed - Gemini and Claude both failed",
      "correct": null
    },
    {
      "task_id": "a1e91b78-d3d8-4675-bb8d-62741b4b68a6",
      "question": "In the video https://www.youtube.com/watch?v=L1vXCYZAYYM, what is the highest number of bird species to be on camera simultaneously?",
      "submitted_answer": "ERROR: No evidence collected. Details: Tool vision failed: Exception: Vision analysis failed - Gemini and Claude both failed",
      "correct": null
    },
    {
      "task_id": "9d191bce-651d-4746-be2d-7ef8ecadb9c2",
      "question": "Examine the video at https://www.youtube.com/watch?v=1htKBjuUWec.\n\nWhat does Teal'c say in response to the question \"Isn't that hot?\"",
      "submitted_answer": "ERROR: No evidence collected. Details: Tool vision failed: Exception: Vision analysis failed - Gemini and Claude both failed",
      "correct": null
    },
    {
      "task_id": "3cef3a44-215e-4aed-8e3b-b1e3f08063b7",
      "question": "I'm making a grocery list for my mom, but she's a professor of botany and she's a real stickler when it comes to categorizing things. I need to add different foods to different categories on the grocery list, but if I make a mistake, she won't buy anything inserted in the wrong category. Here's the list I have so far:\n\nmilk, eggs, flour, whole bean coffee, Oreos, sweet potatoes, fresh basil, plums, green beans, rice, corn, bell pepper, whole allspice, acorns, broccoli, celery, zucchini, lettuce, peanuts\n\nI need to make headings for the fruits and vegetables. Could you please create a list of just the vegetables from my list? If you could do that, then I can figure out how to categorize the rest of the list into the appropriate categories. But remember that my mom is a real stickler, so make sure that no botanical fruits end up on the vegetable list, or she won't get them when she's at the store. Please alphabetize the list of vegetables, and place each item in a comma separated list.",
      "submitted_answer": "acorns, bell pepper, broccoli, celery, green beans, lettuce, zucchini",
      "correct": null
    },
    {
      "task_id": "6f37996b-2ac7-44b0-8e68-6d28256631b4",
      "question": "Given this table defining * on the set S = {a, b, c, d, e}\n\n|*|a|b|c|d|e|\n|---|---|---|---|---|---|\n|a|a|b|c|b|d|\n|b|b|c|a|e|c|\n|c|c|a|b|b|a|\n|d|b|e|b|e|d|\n|e|d|b|a|d|c|\n\nprovide the subset of S involved in any possible counter-examples that prove * is not commutative. Provide your answer as a comma separated list of the elements in the set in alphabetical order.",
      "submitted_answer": "ERROR: No evidence collected. Details: Tool parse_file failed: FileNotFoundError: Text file not found: operation_table.csv",
      "correct": null
    },
    {
      "task_id": "99c9cc74-fdc8-46c6-8f8d-3ce2d3bfeea3",
      "question": "Hi, I'm making a pie but I could use some help with my shopping list. I have everything I need for the crust, but I'm not sure about the filling. I got the recipe from my friend Aditi, but she left it as a voice memo and the speaker on my phone is buzzing so I can't quite make out what she's saying. Could you please listen to the recipe and list all of the ingredients that my friend described? I only want the ingredients for the filling, as I have everything I need to make my favorite pie crust. I've attached the recipe as Strawberry pie.mp3.\n\nIn your response, please only list the ingredients, not any measurements. So if the recipe calls for \"a pinch of salt\" or \"two cups of ripe strawberries\" the ingredients on the list would be \"salt\" and \"ripe strawberries\".\n\nPlease format your response as a comma separated list of ingredients. Also, please alphabetize the ingredients.",
      "submitted_answer": "ERROR: No evidence collected. Details: Tool parse_file failed: ValueError: Unsupported file type: .mp3. Supported: .pdf, .xlsx, .xls, .docx, .txt, .csv",
      "correct": null
    },
    {
      "task_id": "cabe07ed-9eca-40ea-8ead-410ef5e83f91",
      "question": "What is the surname of the equine veterinarian mentioned in 1.E Exercises from the chemistry materials licensed by Marisa Alviar-Agnew & Henry Agnew under the CK-12 license in LibreText's Introductory Chemistry materials as compiled 08/21/2023?",
      "submitted_answer": "Unable to answer",
      "correct": null
    },
    {
      "task_id": "305ac316-eef6-4446-960a-92d80d542f82",
      "question": "Who did the actor who played Ray in the Polish-language version of Everybody Loves Raymond play in Magda M.? Give only the first name.",
      "submitted_answer": "Bartłomiej",
      "correct": null
    },
    {
      "task_id": "3f57289b-8c60-48be-bd80-01f8099ca449",
      "question": "How many at bats did the Yankee with the most walks in the 1977 regular season have that same season?",
      "submitted_answer": "Unable to answer",
      "correct": null
    },
    {
      "task_id": "f918266a-b3e0-4914-865d-4faa564f1aef",
      "question": "What is the final numeric output from the attached Python code?",
      "submitted_answer": "ERROR: No evidence collected. Details: Tool parse_file failed: ValueError: Unsupported file type: . Supported: .pdf, .xlsx, .xls, .docx, .txt, .csv",
      "correct": null
    },
    {
      "task_id": "1f975693-876d-457b-a649-393859e79bf3",
      "question": "Hi, I was out sick from my classes on Friday, so I'm trying to figure out what I need to study for my Calculus mid-term next week. My friend from class sent me an audio recording of Professor Willowbrook giving out the recommended reading for the test, but my headphones are broken :(\n\nCould you please listen to the recording for me and tell me the page numbers I'm supposed to go over? I've attached a file called Homework.mp3 that has the recording. Please provide just the page numbers as a comma-delimited list. And please provide the list in ascending order.",
      "submitted_answer": "ERROR: No evidence collected. Details: Tool parse_file failed: ValueError: Unsupported file type: .mp3. Supported: .pdf, .xlsx, .xls, .docx, .txt, .csv",
      "correct": null
    },
    {
      "task_id": "bda648d7-d618-4883-88f4-3466eabd860e",
      "question": "Where were the Vietnamese specimens described by Kuznetzov in Nedoshivina's 2010 paper eventually deposited? Just give me the city name without abbreviations.",
      "submitted_answer": "Unable to answer",
      "correct": null
    },
    {
      "task_id": "7bd855d8-463d-4ed5-93ca-5fe35145f733",
      "question": "The attached Excel file contains the sales of menu items for a local fast-food chain. What were the total sales that the chain made from food (not including drinks)? Express your answer in USD with two decimal places.",
      "submitted_answer": "ERROR: No evidence collected. Details: Tool parse_file failed: ValueError: Unsupported file type: . Supported: .pdf, .xlsx, .xls, .docx, .txt, .csv",
      "correct": null
    },
    {
      "task_id": "a0c07678-e491-4bbc-8f0b-07405144218f",
      "question": "Who are the pitchers with the number before and after Taishō Tamai's number as of July 2023? Give them to me in the form Pitcher Before, Pitcher After, use their last names only, in Roman characters.",
      "submitted_answer": "Unable to answer",
      "correct": null
    },
    {
      "task_id": "840bfca7-4f7b-481a-8794-c560c340185d",
      "question": "On June 6, 2023, an article by Carolyn Collins Petersen was published in Universe Today. This article mentions a team that produced a paper about their observations, linked at the bottom of the article. Find this paper. Under what NASA award number was the work performed by R. G. Arendt supported by?",
      "submitted_answer": "NAG5-10777",
      "correct": null
    },
    {
      "task_id": "cf106601-ab4f-4af9-b045-5295fe67b37d",
      "question": "What country had the least number of athletes at the 1928 Summer Olympics? If there's a tie for a number of athletes, return the first in alphabetical order. Give the IOC country code as your answer.",
      "submitted_answer": "Unable to answer",
      "correct": null
    },
    {
      "task_id": "5a0c1adf-205e-4841-a666-7c3ef95def9d",
      "question": "What is the first name of the only Malko Competition recipient from the 20th Century (after 1977) whose nationality on record is a country that no longer exists?",
      "submitted_answer": "Jan",
      "correct": null
    }
  ]
}
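A result file like the one above can be produced with a small export helper. This is a sketch only: the metadata field names follow the JSON above, while the helper name, its parameters, and the row keys (`"Task ID"`, `"Question"`, `"Submitted Answer"`) are assumptions, not app.py's actual code.

```python
import json
import time

def export_results(results_log, result_data, correct_task_ids, status, path):
    """Write a gaia_results-style JSON file; "correct" is null whenever
    the API gave no per-question correctness data."""
    payload = {
        "metadata": {
            "generated": time.strftime("%Y-%m-%d %H:%M:%S"),
            "timestamp": time.strftime("%Y%m%d_%H%M%S"),
            "total_questions": len(results_log),
            "correct_count": result_data.get("correct_count"),
            "total_attempted": result_data.get("total_attempted"),
        },
        "submission_status": status,
        "results": [
            {
                "task_id": row.get("Task ID"),
                "question": row.get("Question"),
                "submitted_answer": row.get("Submitted Answer"),
                # null in the JSON when no per-question data is available
                "correct": (row.get("Task ID") in correct_task_ids)
                if correct_task_ids else None,
            }
            for row in results_log
        ],
    }
    with open(path, "w", encoding="utf-8") as fh:
        # ensure_ascii=False keeps answers like "Bartłomiej" readable
        json.dump(payload, fh, ensure_ascii=False, indent=2)
    return payload
```

Keeping `"correct": null` in the export (rather than dropping the key) preserves a stable record schema even when the table's "Correct?" column is omitted.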