Commit e740adf (parent: 0e10710), committed by KuangshiAi: update bioimage and molecular yaml

eval_cases/molecular_vis/eval_analysis_tasks.yaml (CHANGED)
@@ -2,7 +2,7 @@
 # This test evaluates the ability to complete molecular visualization tasks
 # with detailed requirements and evaluation criteria
 
-#simple licorice visualization of a protein
+# Case 1: simple licorice visualization of a protein
 - vars:
     question: |
       1. I want you to visualize a molecular structure from a CIF file.
@@ -10,15 +10,15 @@
       3. Visualize the molecular using a licorice representation.
       4. Take a screenshot of the visualization.
       Q1. Does it show a licorice representation of the protein? (yes/no)
-      5. Answer Q1 in a plain text file "
+      5. Answer Q1 in a plain text file "case_1/results/{agent_mode}/answers_basic_vis.txt".
   assert:
     - type: llm-rubric
      subtype: text
      value: |
        1. Q1 correct answer: Yes
-      rs-file: "
+      rs-file: "case_1/results/{agent_mode}/answers_basic_vis.txt"
 
-#simple coloring by element of a protein
+# Case 2: simple coloring by element of a protein
 - vars:
     question: |
       1. I want you to visualize a molecular structure from a CIF file.
@@ -26,15 +26,15 @@
       3. Visualize the molecular using a CPK or similar representation where atoms are colored by their chemical element.
       4. Take a screenshot of the visualization.
       Q1. Is the molecule colored according to the chemical element of its atoms (e.g., CPK coloring)? (yes/no)
-      5. Answer Q1 in a plain text file "
+      5. Answer Q1 in a plain text file "case_2/results/{agent_mode}/answers_element_coloring.txt".
   assert:
     - type: llm-rubric
      subtype: text
      value: |
        1. Q1 correct answer: Yes
-      rs-file: "
+      rs-file: "case_2/results/{agent_mode}/answers_element_coloring.txt"
 
-#simple selection and coloring of a protein
+# Case 3: simple selection and coloring of a protein
 - vars:
     question: |
       1. I want you to visualize a molecular structure from a CIF file.
@@ -42,15 +42,15 @@
       3. Select all carbon atoms and color them cyan.
       4. Take a screenshot of the visualization.
       Q1. Are all carbon atoms colored cyan? (yes/no)
-      5. Answer Q1 in a plain text file "
+      5. Answer Q1 in a plain text file "case_3/results/{agent_mode}/answers_selection_coloring.txt".
   assert:
     - type: llm-rubric
      subtype: text
      value: |
        1. Q1 correct answer: Yes
-      rs-file: "
+      rs-file: "case_3/results/{agent_mode}/answers_selection_coloring.txt"
 
-#simple coloring by charge of a protein
+# Case 4: simple coloring by charge of a protein
 - vars:
     question: |
       1. I want you to visualize a molecular structure from a CIF file.
@@ -58,15 +58,15 @@
       3. Color the molecule according to atomic charge: use one color for positive charges, another for negative charges, and a third for neutral atoms.
       4. Take a screenshot of the visualization.
       Q1. Is the molecule colored by atomic charge (differentiating positive, negative, and neutral)? (yes/no)
-      5. Answer Q1 in a plain text file "
+      5. Answer Q1 in a plain text file "case_4/results/{agent_mode}/answers_charge_coloring.txt".
   assert:
     - type: llm-rubric
      subtype: text
      value: |
        1. Q1 correct answer: Yes
-      rs-file: "
+      rs-file: "case_4/results/{agent_mode}/answers_charge_coloring.txt"
 
-#simple selection and coloring of specific atoms
+# Case 5: simple selection and coloring of specific atoms
 - vars:
     question: |
       1. I want you to visualize a molecular structure from a CIF file.
@@ -74,15 +74,15 @@
       3. Select all oxygen atoms in residues 1 to 20 and color them red.
       4. Take a screenshot of the visualization.
       Q1. Are all oxygen atoms in residues 1 to 20 colored red? (yes/no)
-      5. Answer Q1 in a plain text file "
+      5. Answer Q1 in a plain text file "case_5/results/{agent_mode}/answers_complex_selection.txt".
   assert:
     - type: llm-rubric
      subtype: text
      value: |
        1. Q1 correct answer: Yes
-      rs-file: "
+      rs-file: "case_5/results/{agent_mode}/answers_complex_selection.txt"
 
-#simple selection and coloring of aromatic residues
+# Case 6: simple selection and coloring of aromatic residues
 - vars:
     question: |
       1. I want you to visualize a molecular structure from a CIF file.
@@ -90,69 +90,70 @@
       3. Select all aromatic residues (PHE, TYR, TRP) and color them purple.
       4. Take a screenshot of the visualization.
       Q1. Are all aromatic residues (PHE, TYR, TRP) colored purple? (yes/no)
-      5. Answer Q1 in a plain text file "
+      5. Answer Q1 in a plain text file "case_6/results/{agent_mode}/answers_aromatic_selection.txt".
   assert:
     - type: llm-rubric
      subtype: text
      value: |
        1. Q1 correct answer: Yes
-      rs-file: "
+      rs-file: "case_6/results/{agent_mode}/answers_aromatic_selection.txt"
 
-#simple RMSD and RMSF calculation of a protein
+# Case 7: simple RMSD and RMSF calculation of a protein
 - vars:
     question: |
       1. I want you to perform a structural analysis on a molecular structure from a CIF file.
       2. Load the data/1CRN.cif.
       3. Calculate the Root Mean Square Deviation (RMSD) of the structure against itself.
       4. Calculate the Root Mean Square Fluctuation (RMSF) for the structure.
-      5. Save the computed RMSD and RMSF values as plain text to "
+      5. Save the computed RMSD and RMSF values as plain text to "case_7/results/{agent_mode}/answers_rmsd_rmsf.txt".
   assert:
     - type: llm-rubric
      subtype: text
      value: |
        1. Does the output report the calculated RMSD?
        2. Does the output report the calculated RMSF values or state that it requires a trajectory?
-      rs-file: "
+      rs-file: "case_7/results/{agent_mode}/answers_rmsd_rmsf.txt"
 
-#simple radius of gyration calculation of a protein
+# Case 8: simple radius of gyration calculation of a protein
 - vars:
     question: |
      1. I want you to calculate the compactness of a protein from a CIF file.
      2. Load the data/1CRN.cif.
      3. Calculate the Radius of Gyration (Rg) of the protein structure.
-     4. Save the calculated Radius of Gyration as plain text to "
+     4. Save the calculated Radius of Gyration as plain text to "case_8/results/{agent_mode}/answers_rg.txt".
   assert:
     - type: llm-rubric
      subtype: text
      value: |
        1. Does the output report a numeric value for the calculated Radius of Gyration?
-      rs-file: "
+      rs-file: "case_8/results/{agent_mode}/answers_rg.txt"
 
+# Case 9: calculate specific geometric properties
 - vars:
     question: |
      1. I want you to calculate specific geometric properties of a molecular structure from a CIF file.
      2. Load the data/1CRN.cif.
      3. Calculate the distance between the alpha carbons of residue 1 and residue 10.
      4. Calculate the backbone dihedral angles (phi and psi) for residue 5.
-     5. Save the computed distance and angles as plain text to "
+     5. Save the computed distance and angles as plain text to "case_9/results/{agent_mode}/answers_distances_angles.txt".
   assert:
     - type: llm-rubric
      subtype: text
      value: |
        1. Does the output report the calculated distance between the alpha carbons of residue 1 and 10?
        2. Does the output report the calculated phi and psi dihedral angles for residue 5?
-      rs-file: "
+      rs-file: "case_9/results/{agent_mode}/answers_distances_angles.txt"
 
-#simple contact calculation of a protein
+# Case 10: simple contact calculation of a protein
 - vars:
     question: |
      1. I want you to calculate the number of contacts in a folded protein from a CIF file.
      2. Load the data/1CRN.cif.
      3. Calculate the number of contacts within an 8 Angstrom cutoff.
-     4. Save the total count of contacts as plain text to "
+     4. Save the total count of contacts as plain text to "case_10/results/{agent_mode}/answers_native_contacts.txt".
   assert:
     - type: llm-rubric
      subtype: text
      value: |
        1. Does the output report a numeric count of contacts?
-      rs-file: "
+      rs-file: "case_10/results/{agent_mode}/answers_native_contacts.txt"
eval_cases/napari/eval_visualization_tasks.yaml (CHANGED)
|
@@ -1,7 +1,7 @@
|
|
| 1 |
# Basic Visualization Workflow Tests
|
| 2 |
# Use https://www.ebi.ac.uk/bioimage-archive/galleries/S-BIAD573.html IM1 to test the workflows.
|
| 3 |
|
| 4 |
-
#
|
| 5 |
- vars:
|
| 6 |
question: |
|
| 7 |
1. Load the "data/dataset_002/dataset_002_ch0.tif" dataset into napari as channel 0 and "data/dataset_002/dataset_002_ch1.tif" as channel 1.
|
|
@@ -10,8 +10,8 @@
|
|
| 10 |
4. Use additive blending for all channels to create an overlay visualization.
|
| 11 |
5. Go the timestep 14.
|
| 12 |
Q1: Does the cell show protrusions? (Yes/No)
|
| 13 |
-
6. Take a screenshot of the result, save it to "
|
| 14 |
-
7. Answer Q1 in a plain text file "
|
| 15 |
assert:
|
| 16 |
- type: llm-rubric
|
| 17 |
subtype: vision
|
|
@@ -19,70 +19,70 @@
|
|
| 19 |
1. Does the visualization show a green cell with red blobs on the inside?
|
| 20 |
2. Does the result rendering look similar to ground truth?
|
| 21 |
gs-file: GS/dataset_002_1.png
|
| 22 |
-
rs-file:
|
| 23 |
- type: llm-rubric
|
| 24 |
subtype: text
|
| 25 |
value: |
|
| 26 |
1. Q1 correct answer: Yes
|
| 27 |
-
rs-file:
|
| 28 |
options:
|
| 29 |
cache: false
|
| 30 |
runSerially: true
|
| 31 |
|
| 32 |
|
| 33 |
-
#
|
| 34 |
- vars:
|
| 35 |
question: |
|
| 36 |
1. Load the "data/dataset_002/Points.csv" dataset into napari.
|
| 37 |
2. Check if the points layer has been created.
|
| 38 |
Q1: Was the points layer created successfully? (Yes/No)
|
| 39 |
-
3. Answer Q1 in a plain text file "
|
| 40 |
assert:
|
| 41 |
- type: llm-rubric
|
| 42 |
subtype: text
|
| 43 |
value: |
|
| 44 |
1. Q1 correct answer: Yes
|
| 45 |
-
rs-file:
|
| 46 |
options:
|
| 47 |
cache: false
|
| 48 |
runSerially: true
|
| 49 |
|
| 50 |
-
#
|
| 51 |
- vars:
|
| 52 |
question: |
|
| 53 |
1. Load the "data/dataset_002/Shapes.csv" dataset into napari.
|
| 54 |
2. Check if the shapes layer has been created.
|
| 55 |
Q1: Was the shapes layer created successfully? (Yes/No)
|
| 56 |
-
3. Answer Q1 in a plain text file "
|
| 57 |
assert:
|
| 58 |
- type: llm-rubric
|
| 59 |
subtype: text
|
| 60 |
value: |
|
| 61 |
1. Q1 correct answer: Yes
|
| 62 |
-
rs-file:
|
| 63 |
options:
|
| 64 |
cache: false
|
| 65 |
runSerially: true
|
| 66 |
|
| 67 |
-
#
|
| 68 |
- vars:
|
| 69 |
question: |
|
| 70 |
1. Load the "data/dataset_002/Labels.tif" dataset into napari.
|
| 71 |
2. Check if a new layer called "Labels" has been created.
|
| 72 |
Q1: Was the layer created successfully? (Yes/No)
|
| 73 |
-
3. Answer Q1 in a plain text file "
|
| 74 |
assert:
|
| 75 |
- type: llm-rubric
|
| 76 |
subtype: text
|
| 77 |
value: |
|
| 78 |
1. Q1 correct answer: Yes
|
| 79 |
-
rs-file:
|
| 80 |
options:
|
| 81 |
cache: false
|
| 82 |
runSerially: true
|
| 83 |
|
| 84 |
|
| 85 |
-
#
|
| 86 |
- vars:
|
| 87 |
question: |
|
| 88 |
1. Load the dataset into napari: data/dataset_001/dataset_001.tiff
|
|
@@ -93,7 +93,7 @@
|
|
| 93 |
6. Take a screenshot of your recreation.
|
| 94 |
7. If the recreation does not match the target figure, adjust the visualization settings and take a screenshot again.
|
| 95 |
8. Stop when the recreation matches the target figure or you have tried five different visualization settings.
|
| 96 |
-
9. Save the final screenshot to "
|
| 97 |
assert:
|
| 98 |
- type: llm-rubric
|
| 99 |
subtype: vision
|
|
@@ -102,12 +102,12 @@
|
|
| 102 |
2. Are the same colormaps and blending modes used as in the target figure?
|
| 103 |
3. Is the contrast and gamma adjusted to match the target figure?
|
| 104 |
gs-file: GS/dataset_001.png
|
| 105 |
-
rs-file:
|
| 106 |
options:
|
| 107 |
cache: false
|
| 108 |
runSerially: true
|
| 109 |
|
| 110 |
-
#
|
| 111 |
- vars:
|
| 112 |
question: |
|
| 113 |
1. Read the file "data/dataset_003/eval_iso_surface_determination_target_1.txt" to get the target iso-surface values for different tooth structures.
|
|
@@ -117,7 +117,7 @@
|
|
| 117 |
5. Rotate the camera to several angles and take a screenshot of the result each time to check if the target structure is clearly visible from different angles.
|
| 118 |
6. If the target structure is not clearly visible, adjust the iso surface value and take a screenshot again.
|
| 119 |
7. Stop when the target structure is clearly visible or you have tried five different iso surface values.
|
| 120 |
-
8. Save the final screenshot to "
|
| 121 |
assert:
|
| 122 |
- type: llm-rubric
|
| 123 |
subtype: vision
|
|
@@ -125,31 +125,31 @@
|
|
| 125 |
1. Does the result rendering look similar to ground truth?
|
| 126 |
2. Does the visualization show the target structure clearly?
|
| 127 |
gs-file: GS/dataset_003.png
|
| 128 |
-
rs-file:
|
| 129 |
options:
|
| 130 |
cache: false
|
| 131 |
runSerially: true
|
| 132 |
|
| 133 |
|
| 134 |
-
#
|
| 135 |
- vars:
|
| 136 |
question: |
|
| 137 |
1. Load the image "data/dataset_002/dataset_002_ch0.tif" and set channel 0 to a magenta colormap.
|
| 138 |
2. Switch to a 3D MIP view.
|
| 139 |
3. Take a screenshot and analyze it to count how many complete cells are visible (not cut off by edges).
|
| 140 |
Q1: answer with the number of complete cells you counted, for example "5" if you see 5 complete cells.
|
| 141 |
-
4. Save the answer of Q1 to the questions in plain text as "
|
| 142 |
assert:
|
| 143 |
- type: llm-rubric
|
| 144 |
subtype: text
|
| 145 |
value: |
|
| 146 |
1. Q1 correct answer: 2
|
| 147 |
-
rs-file:
|
| 148 |
options:
|
| 149 |
cache: false
|
| 150 |
runSerially: true
|
| 151 |
|
| 152 |
-
#
|
| 153 |
- vars:
|
| 154 |
question: |
|
| 155 |
1. Load the image "data/dataset_001/dataset_001.tiff".
|
|
@@ -157,36 +157,36 @@
|
|
| 157 |
3. Extract the raw layer data and examine its properties.
|
| 158 |
4. Save the current layer to a file for further analysis.
|
| 159 |
Q1: Was the statistical analysis and data export successful? (Yes/No)
|
| 160 |
-
6. Save the answer of Q1 in plain text as "
|
| 161 |
assert:
|
| 162 |
- type: llm-rubric
|
| 163 |
subtype: text
|
| 164 |
value: |
|
| 165 |
1. Q1 correct answer: Yes
|
| 166 |
-
rs-file:
|
| 167 |
options:
|
| 168 |
cache: false
|
| 169 |
runSerially: true
|
| 170 |
|
| 171 |
-
#
|
| 172 |
- vars:
|
| 173 |
question: |
|
| 174 |
1. Load the image "data/dataset_001/dataset_001.tiff".
|
| 175 |
2. Add point annotations at random locations on the image.
|
| 176 |
3. Add shape annotations (rectangles or circles) at random locations on the image.
|
| 177 |
Q1: Check if layers have been generated. (Yes/No)
|
| 178 |
-
4. Save the answer of Q1 in plain text as "
|
| 179 |
assert:
|
| 180 |
- type: llm-rubric
|
| 181 |
subtype: text
|
| 182 |
value: |
|
| 183 |
1. Q1 correct answer: Yes
|
| 184 |
-
rs-file:
|
| 185 |
options:
|
| 186 |
cache: false
|
| 187 |
runSerially: true
|
| 188 |
|
| 189 |
-
#
|
| 190 |
- vars:
|
| 191 |
question: |
|
| 192 |
1. Load the image "data/dataset_002/dataset_002_ch0.tif" into napari.
|
|
@@ -194,19 +194,19 @@
|
|
| 194 |
3. Use a screenshot to validate whether the polygon correctly traces the cell surface.
|
| 195 |
4. If the trace is not accurate, adjust the polygon and take a new screenshot to validate.
|
| 196 |
5. Stop when the trace is accurate or you have tried five different attempts.
|
| 197 |
-
6. Save the results and the final screenshot to "
|
| 198 |
assert:
|
| 199 |
- type: llm-rubric
|
| 200 |
subtype: vision
|
| 201 |
value: |
|
| 202 |
1. Does the final screenshot show a polygon shape that accurately traces the outline of the cell surface?
|
| 203 |
2. Is the polygon layer correctly overlaid on the image?
|
| 204 |
-
rs-file:
|
| 205 |
options:
|
| 206 |
cache: false
|
| 207 |
runSerially: true
|
| 208 |
|
| 209 |
-
#
|
| 210 |
- vars:
|
| 211 |
question: |
|
| 212 |
1. Load the "data/dataset_002/dataset_002_ch0.tif" dataset into napari as channel 0 and "data/dataset_002/dataset_002_ch1.tif" as channel 1.
|
|
@@ -214,8 +214,8 @@
|
|
| 214 |
3. Switch to the 3D view.
|
| 215 |
4. Zoom in to the cell in the middle.
|
| 216 |
5. Rotate the camera to a side view.
|
| 217 |
-
6. Take a screenshot of the zoomed-in view and save it to "
|
| 218 |
-
7. Take a screenshot of the side view and save it to "
|
| 219 |
assert:
|
| 220 |
- type: llm-rubric
|
| 221 |
subtype: vision
|
|
@@ -223,14 +223,14 @@
|
|
| 223 |
1. Does the visualization show a zoomed-in view of the cell in the middle?
|
| 224 |
2. Does the result rendering look similar to ground truth?
|
| 225 |
gs-file: GS/dataset_002_zoom.jpg
|
| 226 |
-
rs-file:
|
| 227 |
- type: llm-rubric
|
| 228 |
subtype: vision
|
| 229 |
value: |
|
| 230 |
1. Does the visualization show a side view of the cell?
|
| 231 |
2. Does the result rendering look similar to ground truth?
|
| 232 |
gs-file: GS/dataset_002_camera_side.png
|
| 233 |
-
rs-file:
|
| 234 |
options:
|
| 235 |
cache: false
|
| 236 |
runSerially: true
|
|
|
|
| 1 |
# Basic Visualization Workflow Tests
|
| 2 |
# Use https://www.ebi.ac.uk/bioimage-archive/galleries/S-BIAD573.html IM1 to test the workflows.
|
| 3 |
|
| 4 |
+
# Case 1: Multi-channel Overlay with Colormaps with channels
|
| 5 |
- vars:
|
| 6 |
question: |
|
| 7 |
1. Load the "data/dataset_002/dataset_002_ch0.tif" dataset into napari as channel 0 and "data/dataset_002/dataset_002_ch1.tif" as channel 1.
|
|
|
|
| 10 |
4. Use additive blending for all channels to create an overlay visualization.
|
| 11 |
5. Go the timestep 14.
|
| 12 |
Q1: Does the cell show protrusions? (Yes/No)
|
| 13 |
+
6. Take a screenshot of the result, save it to "eval_visualization_tasks/case_1/results/{agent_mode}/screenshot_1.png"
|
| 14 |
+
7. Answer Q1 in a plain text file "eval_visualization_tasks/case_1/results/{agent_mode}/multi_channel_answer.txt".
|
| 15 |
assert:
|
| 16 |
- type: llm-rubric
|
| 17 |
subtype: vision
|
|
|
|
| 19 |
1. Does the visualization show a green cell with red blobs on the inside?
|
| 20 |
2. Does the result rendering look similar to ground truth?
|
| 21 |
gs-file: GS/dataset_002_1.png
|
| 22 |
+
rs-file: eval_visualization_tasks/case_1/results/{agent_mode}/screenshot_1.png
|
| 23 |
- type: llm-rubric
|
| 24 |
subtype: text
|
| 25 |
value: |
|
| 26 |
1. Q1 correct answer: Yes
|
| 27 |
+
rs-file: eval_visualization_tasks/case_1/results/{agent_mode}/multi_channel_answer.txt
|
| 28 |
options:
|
| 29 |
cache: false
|
| 30 |
runSerially: true
|
| 31 |
|
| 32 |
|
| 33 |
+
# Case 2: ingesting points
|
| 34 |
- vars:
|
| 35 |
question: |
|
| 36 |
1. Load the "data/dataset_002/Points.csv" dataset into napari.
|
| 37 |
2. Check if the points layer has been created.
|
| 38 |
Q1: Was the points layer created successfully? (Yes/No)
|
| 39 |
+
3. Answer Q1 in a plain text file "eval_visualization_tasks/case_2/results/{agent_mode}/points_answer.txt".
|
| 40 |
assert:
|
| 41 |
- type: llm-rubric
|
| 42 |
subtype: text
|
| 43 |
value: |
|
| 44 |
1. Q1 correct answer: Yes
|
| 45 |
+
rs-file: eval_visualization_tasks/case_2/results/{agent_mode}/points_answer.txt
|
| 46 |
options:
|
| 47 |
cache: false
|
| 48 |
runSerially: true
|
| 49 |
|
| 50 |
+
# Case 3: ingesting shapes
|
| 51 |
- vars:
|
| 52 |
question: |
|
| 53 |
1. Load the "data/dataset_002/Shapes.csv" dataset into napari.
|
| 54 |
2. Check if the shapes layer has been created.
|
| 55 |
Q1: Was the shapes layer created successfully? (Yes/No)
|
| 56 |
+
3. Answer Q1 in a plain text file "eval_visualization_tasks/case_3/results/{agent_mode}/shapes_answer.txt".
|
| 57 |
assert:
|
| 58 |
- type: llm-rubric
|
| 59 |
subtype: text
|
| 60 |
value: |
|
| 61 |
1. Q1 correct answer: Yes
|
| 62 |
+
rs-file: eval_visualization_tasks/case_3/results/{agent_mode}/shapes_answer.txt
|
| 63 |
options:
|
| 64 |
cache: false
|
| 65 |
runSerially: true
|
| 66 |
|
| 67 |
+
# Case 4: ingesting labels
|
| 68 |
- vars:
|
| 69 |
question: |
|
| 70 |
1. Load the "data/dataset_002/Labels.tif" dataset into napari.
|
| 71 |
2. Check if a new layer called "Labels" has been created.
|
| 72 |
Q1: Was the layer created successfully? (Yes/No)
|
| 73 |
+
3. Answer Q1 in a plain text file "eval_visualization_tasks/case_4/results/{agent_mode}/labels_answer.txt".
|
| 74 |
assert:
|
| 75 |
- type: llm-rubric
|
| 76 |
subtype: text
|
| 77 |
value: |
|
| 78 |
1. Q1 correct answer: Yes
|
| 79 |
+
rs-file: eval_visualization_tasks/case_4/results/{agent_mode}/labels_answer.txt
|
| 80 |
options:
|
| 81 |
cache: false
|
| 82 |
runSerially: true
|
| 83 |
|
| 84 |
|
| 85 |
+
# Case 5: Recreate a figure from a dataset.
|
| 86 |
- vars:
|
| 87 |
question: |
|
| 88 |
1. Load the dataset into napari: data/dataset_001/dataset_001.tiff
|
|
|
|
| 93 |
6. Take a screenshot of your recreation.
|
| 94 |
7. If the recreation does not match the target figure, adjust the visualization settings and take a screenshot again.
|
| 95 |
8. Stop when the recreation matches the target figure or you have tried five different visualization settings.
|
| 96 |
+
9. Save the final screenshot to "eval_visualization_tasks/case_5/results/{agent_mode}/screenshot.png".
|
| 97 |
assert:
|
| 98 |
- type: llm-rubric
|
| 99 |
subtype: vision
|
|
|
|
| 102 |
2. Are the same colormaps and blending modes used as in the target figure?
|
| 103 |
3. Is the contrast and gamma adjusted to match the target figure?
|
| 104 |
gs-file: GS/dataset_001.png
|
| 105 |
+
rs-file: eval_visualization_tasks/case_5/results/{agent_mode}/screenshot.png
|
| 106 |
options:
|
| 107 |
cache: false
|
| 108 |
runSerially: true
|
| 109 |
|
| 110 |
+
# Case 6: Iso surface determination for a target
|
| 111 |
- vars:
|
| 112 |
question: |
|
| 113 |
1. Read the file "data/dataset_003/eval_iso_surface_determination_target_1.txt" to get the target iso-surface values for different tooth structures.
|
|
|
|
| 117 |
5. Rotate the camera to several angles and take a screenshot of the result each time to check if the target structure is clearly visible from different angles.
|
| 118 |
6. If the target structure is not clearly visible, adjust the iso surface value and take a screenshot again.
|
| 119 |
7. Stop when the target structure is clearly visible or you have tried five different iso surface values.
|
| 120 |
+
8. Save the final screenshot to "eval_visualization_tasks/case_6/results/{agent_mode}/screenshot.png".
|
| 121 |
assert:
|
| 122 |
- type: llm-rubric
|
| 123 |
subtype: vision
|
|
|
|
| 125 |
1. Does the result rendering look similar to ground truth?
|
| 126 |
2. Does the visualization show the target structure clearly?
|
| 127 |
gs-file: GS/dataset_003.png
|
| 128 |
+
rs-file: eval_visualization_tasks/case_6/results/{agent_mode}/screenshot.png
|
| 129 |
options:
|
| 130 |
cache: false
|
| 131 |
runSerially: true
|
| 132 |
|
| 133 |
|
| 134 |
+
# Case 7: Cell Counting and Measurement Analysis
|
| 135 |
- vars:
|
| 136 |
question: |
|
| 137 |
1. Load the image "data/dataset_002/dataset_002_ch0.tif" and set channel 0 to a magenta colormap.
|
| 138 |
2. Switch to a 3D MIP view.
|
| 139 |
3. Take a screenshot and analyze it to count how many complete cells are visible (not cut off by edges).
|
| 140 |
Q1: answer with the number of complete cells you counted, for example "5" if you see 5 complete cells.
|
| 141 |
+
4. Save the answer of Q1 to the questions in plain text as "eval_visualization_tasks/case_7/results/{agent_mode}/Q1_answer.txt".
|
| 142 |
assert:
|
| 143 |
- type: llm-rubric
|
| 144 |
subtype: text
|
| 145 |
value: |
|
| 146 |
1. Q1 correct answer: 2
|
| 147 |
+
rs-file: eval_visualization_tasks/case_7/results/{agent_mode}/Q1_answer.txt
|
| 148 |
options:
|
| 149 |
cache: false
|
| 150 |
runSerially: true
|
| 151 |
|
| 152 |
+
# Case 8: Statistical Analysis and Data Export
|
| 153 |
- vars:
|
| 154 |
question: |
|
| 155 |
1. Load the image "data/dataset_001/dataset_001.tiff".
|
|
|
|
| 157 |
3. Extract the raw layer data and examine its properties.
|
| 158 |
4. Save the current layer to a file for further analysis.
|
| 159 |
Q1: Was the statistical analysis and data export successful? (Yes/No)
|
| 160 |
+
6. Save the answer of Q1 in plain text as "eval_visualization_tasks/case_8/results/{agent_mode}/layer_statistics_answer.txt".
|
| 161 |
assert:
|
| 162 |
- type: llm-rubric
|
| 163 |
subtype: text
|
| 164 |
value: |
|
| 165 |
1. Q1 correct answer: Yes
|
| 166 |
+
rs-file: eval_visualization_tasks/case_8/results/{agent_mode}/layer_statistics_answer.txt
|
| 167 |
options:
|
| 168 |
cache: false
|
| 169 |
runSerially: true
|
| 170 |
|
| 171 |
+
# Case 9: Annotation Workflow
|
| 172 |
- vars:
|
| 173 |
question: |
|
| 174 |
1. Load the image "data/dataset_001/dataset_001.tiff".
|
| 175 |
2. Add point annotations at random locations on the image.
|
| 176 |
3. Add shape annotations (rectangles or circles) at random locations on the image.
|
| 177 |
Q1: Check if layers have been generated. (Yes/No)
|
| 178 |
+
4. Save the answer of Q1 in plain text as "eval_visualization_tasks/case_9/results/{agent_mode}/annotation_answer.txt".
|
| 179 |
assert:
|
| 180 |
- type: llm-rubric
|
| 181 |
subtype: text
|
| 182 |
value: |
|
| 183 |
1. Q1 correct answer: Yes
|
| 184 |
+
rs-file: eval_visualization_tasks/case_9/results/{agent_mode}/annotation_answer.txt
|
| 185 |
options:
|
| 186 |
cache: false
|
| 187 |
runSerially: true
|
| 188 |
|
# Case 10: Advanced Annotation Workflow: Cell Surface Trace (This will likely fail)
- vars:
    question: |
      1. Load the image "data/dataset_002/dataset_002_ch0.tif" into napari.
      3. Use a screenshot to validate whether the polygon correctly traces the cell surface.
      4. If the trace is not accurate, adjust the polygon and take a new screenshot to validate.
      5. Stop when the trace is accurate or you have made five attempts.
      6. Save the results and the final screenshot to "eval_visualization_tasks/case_10/results/{agent_mode}/cell_surface_trace.png".
  assert:
    - type: llm-rubric
      subtype: vision
      value: |
        1. Does the final screenshot show a polygon shape that accurately traces the outline of the cell surface?
        2. Is the polygon layer correctly overlaid on the image?
      rs-file: eval_visualization_tasks/case_10/results/{agent_mode}/cell_surface_trace.png
  options:
    cache: false
    runSerially: true

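A programmatic (non-interactive) approximation of the cell-surface trace in Case 10: threshold the image to a mask, collect the boundary pixels, and order them by angle around the centroid to form polygon vertices. The disk image is a synthetic stand-in for the real channel, and real data would need a proper segmentation rather than a fixed threshold:

```python
import numpy as np

# Synthetic "cell": a bright disk on a dark background, standing in for
# dataset_002_ch0.tif so the sketch runs without the real data.
yy, xx = np.mgrid[0:128, 0:128]
img = ((yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2).astype(float)

# Threshold to a mask, then keep only pixels on the mask boundary
# (mask pixels with at least one 4-neighbor outside the mask).
mask = img > 0.5
interior = mask.copy()
interior[1:-1, 1:-1] &= mask[:-2, 1:-1] & mask[2:, 1:-1] & mask[1:-1, :-2] & mask[1:-1, 2:]
boundary = np.argwhere(mask & ~interior)

# Order boundary pixels by angle around the centroid to get polygon vertices.
center = boundary.mean(axis=0)
angles = np.arctan2(*(boundary - center).T)
polygon = boundary[np.argsort(angles)]

# In a live session: viewer.add_shapes([polygon], shape_type="polygon")
print(polygon.shape)
```

Angle-sorting only works for star-shaped outlines like this disk; that limitation is part of why the interactive screenshot-and-adjust loop in the case text exists.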
# Case 11: Camera Operations (Zoom and Rotate)
- vars:
    question: |
      1. Load the "data/dataset_002/dataset_002_ch0.tif" dataset into napari as channel 0 and "data/dataset_002/dataset_002_ch1.tif" as channel 1.
      3. Switch to the 3D view.
      4. Zoom in to the cell in the middle.
      5. Rotate the camera to a side view.
      6. Take a screenshot of the zoomed-in view and save it to "eval_visualization_tasks/case_11/results/{agent_mode}/zoom_screenshot.png".
      7. Take a screenshot of the side view and save it to "eval_visualization_tasks/case_11/results/{agent_mode}/rotate_screenshot.png".
  assert:
    - type: llm-rubric
      subtype: vision
      value: |
        1. Does the visualization show a zoomed-in view of the cell in the middle?
        2. Does the result rendering look similar to ground truth?
      gs-file: GS/dataset_002_zoom.jpg
      rs-file: eval_visualization_tasks/case_11/results/{agent_mode}/zoom_screenshot.png
    - type: llm-rubric
      subtype: vision
      value: |
        1. Does the visualization show a side view of the cell?
        2. Does the result rendering look similar to ground truth?
      gs-file: GS/dataset_002_camera_side.png
      rs-file: eval_visualization_tasks/case_11/results/{agent_mode}/rotate_screenshot.png
  options:
    cache: false
    runSerially: true
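The camera operations in Case 11 map onto three attributes of napari's `viewer.camera`: `center`, `zoom`, and `angles` (Euler angles in degrees). A headless sketch using a stand-in dataclass with the same attribute names; the volume shape and the specific side-view angles are assumptions, not values from the benchmark:

```python
from dataclasses import dataclass


@dataclass
class Camera:
    """Stand-in for napari's viewer.camera so the sketch runs headless."""
    center: tuple = (0.0, 0.0, 0.0)
    zoom: float = 1.0
    angles: tuple = (0.0, 0.0, 90.0)  # napari's default top-down 3D view


camera = Camera()

# Step 4: zoom in on the cell in the middle -- point the camera at the
# volume's center and increase the zoom factor.
volume_shape = (32, 256, 256)  # assumed (z, y, x) size of dataset_002
camera.center = tuple(s / 2 for s in volume_shape)
camera.zoom *= 3.0

# Step 5: rotate to a side view by pitching the first Euler angle 90 degrees.
camera.angles = (0.0, 90.0, 90.0)

print(camera.center, camera.zoom, camera.angles)
```

After each camera change, `viewer.screenshot(path)` would capture the PNGs the two vision rubrics compare against the ground-truth images.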
eval_cases/selected_cases.yaml DELETED
@@ -1,200 +0,0 @@
# Selected 15 Cases for Human Evaluation
# These cases represent diverse visualization capabilities across the benchmark
#
# Each case specifies:
# - name: The case directory name
# - path: Path to the case directory (relative to workspace root)
# - yaml: Path to the YAML file containing evaluation criteria
# - description: Brief description of what the case tests

cases:
  - name: argon-bubble
    path: SciVisAgentBench-tasks/paraview/argon-bubble
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: Color & Opacity Mapping, Volume Rendering
    agent_mode: paraview_mcp_claude-sonnet-4-5_exp1

  - name: richtmyer
    path: SciVisAgentBench-tasks/paraview/richtmyer
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: Color & Opacity Mapping, Volume Rendering
    agent_mode: paraview_mcp_claude-sonnet-4-5_exp1

  - name: foot
    path: SciVisAgentBench-tasks/paraview/foot
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: Volume Rendering
    agent_mode: paraview_mcp_claude-sonnet-4-5_exp1

  - name: crayfish_streamline
    path: SciVisAgentBench-tasks/paraview/crayfish_streamline
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: Surface & Contour Extraction
    agent_mode: chatvis_claude-sonnet-4-5_exp1

  - name: twoswirls_streamribbon
    path: SciVisAgentBench-tasks/paraview/twoswirls_streamribbon
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: Surface & Contour Extraction
    agent_mode: chatvis_claude-sonnet-4-5_exp1

  - name: tornado
    path: SciVisAgentBench-tasks/paraview/tornado
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: Surface & Contour Extraction, Glyph & Marker Placement
    agent_mode: chatvis_claude-sonnet-4-5_exp1

  - name: tgc-velocity_contour
    path: SciVisAgentBench-tasks/paraview/tgc-velocity_contour
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: Surface & Contour Extraction
    agent_mode: paraview_mcp_claude-sonnet-4-5_exp1

  - name: rti-velocity_slices
    path: SciVisAgentBench-tasks/paraview/rti-velocity_slices
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: View & Camera Control, Data Subsetting & Extraction
    agent_mode: paraview_mcp_claude-sonnet-4-5_exp1

  - name: rti-velocity_glyph
    path: SciVisAgentBench-tasks/paraview/rti-velocity_glyph
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: Glyph & Marker Placement, Data Subsetting & Extraction
    agent_mode: chatvis_claude-sonnet-4-5_exp1

  - name: supernova_isosurface
    path: SciVisAgentBench-tasks/paraview/supernova_isosurface
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: Surface & Contour Extraction (isosurface)
    agent_mode: chatvis_claude-sonnet-4-5_exp1

  - name: time-varying
    path: SciVisAgentBench-tasks/paraview/time-varying
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: Temporal Processing
    agent_mode: chatvis_claude-sonnet-4-5_exp1

  - name: chart-opacity
    path: SciVisAgentBench-tasks/paraview/chart-opacity
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: Plot & Chart Generation
    agent_mode: chatvis_claude-sonnet-4-5_exp1

  - name: climate
    path: SciVisAgentBench-tasks/paraview/climate
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: Field Computation
    agent_mode: chatvis_claude-sonnet-4-5_exp1

  # - name: subseries-of-time-series
  #   path: SciVisAgentBench-tasks/paraview/subseries-of-time-series
  #   yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
  #   description: Dataset Restructuring
  #   agent_mode: chatvis_claude-sonnet-4-5_exp1

  - name: shrink-sphere
    path: SciVisAgentBench-tasks/paraview/shrink-sphere
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: Geometric & Topological Transformation
    agent_mode: chatvis_claude-sonnet-4-5_exp1

  - name: import-gltf
    path: SciVisAgentBench-tasks/paraview/import-gltf
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: Dataset Restructuring, View & Camera Control
    agent_mode: paraview_mcp_claude-sonnet-4-5_exp1

  - name: render-histogram
    path: SciVisAgentBench-tasks/paraview/render-histogram
    yaml: benchmark/eval_cases/paraview/paraview_cases.yaml
    description: Plot & Chart Generation, Color & Opacity Mapping
    agent_mode: chatvis_claude-sonnet-4-5_exp1

  # From molecular_vis/workflows (2 cases)
  - name: curved-membrane
    path: SciVisAgentBench-tasks/molecular_vis/workflows/curved-membrane
    yaml: benchmark/eval_cases/molecular_vis/workflows/eval_analysis_workflows.yaml
    description: Data Subsetting & Extraction
    agent_mode: gmx_vmd_mcp_claude-sonnet-4-5_exp1

  - name: ras-raf-membrane
    path: SciVisAgentBench-tasks/molecular_vis/workflows/ras-raf-membrane
    yaml: benchmark/eval_cases/molecular_vis/workflows/eval_analysis_workflows.yaml
    description: View & Camera Control
    agent_mode: gmx_vmd_mcp_claude-sonnet-4-5_exp1

  - name: bio_isosurface-determination
    path: SciVisAgentBench-tasks\bioimage_data\eval_iso_surface_determination\operation_1
    yaml: benchmark\eval_cases\napari\1_workflows\eval_iso_surface_determination.yaml
    description: Surface & Contour Extraction (isosurface)
    agent_mode: napari_mcp_claude-sonnet-4-5_exp_default
    task_description:
      1. Read the file "data/dataset_003/eval_iso_surface_determination_target_1.txt" to get the target iso-surface values for different tooth structures.
      2. Load data/dataset_003/dataset_003.tif into napari.
      3. Switch to 3D view mode and set the rendering to iso.
      4. Find the iso surface value that shows the target clearly.
      5. Rotate the camera to several angles and take a screenshot of the result each time to check if the target structure is clearly visible from different angles.
      6. If the target structure is not clearly visible, adjust the iso surface value and take a screenshot again.
      7. Stop when the target structure is clearly visible or you have tried five different iso surface values.
      8. Save the final screenshot to "eval_iso_surface_determination/screenshot.png".
    vision-rubrics:
      1. Does the result rendering look similar to ground truth?
      2. Does the visualization show the target structure clearly?

  - name: bio_visualization-workflows
    path: SciVisAgentBench-tasks\bioimage_data\eval_visualization_workflows\operation_1
    yaml: benchmark\eval_cases\napari\1_workflows\eval_visualization_workflows.yaml
    description: Color & Opacity Mapping, Volume Rendering, Temporal Processing
    agent_mode: napari_mcp_claude-sonnet-4-5_exp_default
    task_description:
      1. Load the "data/dataset_002/dataset_002.tif" dataset into napari.
      2. Depending on the number of channels, set the colormap for the first channel 0 to red and channel 1 to green.
      3. Switch to the 3D view.
      4. Use additive blending for all channels to create an overlay visualization.
      5. Go to timestep 14.
      Q1. Does the cell show protrusions? (Yes/No)
      6. Take a screenshot of the result, save it to "eval_visualization_workflows/screenshot_1.png"
      7. Answer Q1 in a plain text file "eval_visualization_workflows/Q1_answer.txt".
    vision-rubrics:
      1. Does the visualization show a green cell with red blobs on the inside?
      2. Does the result rendering look similar to ground truth?

  - name: bio_figure-recreation
    path: SciVisAgentBench-tasks\bioimage_data\eval_figure_recreation\operation_1
    yaml: benchmark\eval_cases\napari\1_workflows\eval_figure_recreation.yaml
    description: Color & Opacity Mapping, Volume Rendering
    agent_mode: napari_mcp_claude-sonnet-4-5_exp_default
    task_description:
      1. Load the dataset into napari "data/dataset_001/dataset_001.tiff"
      2. Read the target figure "data/dataset_001/dataset_001.png" but don't load it into napari.
      3. Read the dataset description "data/dataset_001/dataset_001.yaml".
      4. Set the same colormaps and blending modes as the target figure.
      5. Adjust contrast and gamma as needed to match the target figure.
      6. Take a screenshot of your recreation.
      7. If the recreation does not match the target figure, adjust the visualization settings and take a screenshot again.
      8. Stop when the recreation matches the target figure or you have tried five different visualization settings.
      9. Save the final screenshot to "eval_figure_recreation/screenshot.png".
    vision-rubrics:
      1. Does the visualization show a green cell with red blobs on the inside?
      2. Does the result rendering look similar to ground truth?