KuangshiAi committed
Commit eee57cc · 1 Parent(s): 38eb21c

add 4 new topology cases from Guoxi Liu
.DS_Store CHANGED
Binary files a/.DS_Store and b/.DS_Store differ
 
eval_cases/topology/topology_cases.yaml CHANGED
@@ -193,4 +193,115 @@
  value: |
  1. Q1 correct answer: no
  2. Q2 correct answer: (C)
- 3. Q3 correct answer: (C)
+ 3. Q3 correct answer: (C)
+
+
+ # 6. noisyTerrain
+ # This dataset is a terrain with random scalar values added to create noise, originally from Julien Tierny. See https://github.com/topology-tool-kit/ttk-data.
+ - vars:
+     question: |
+       1. Load the dataset from "noisyTerrain/data/noisyTerrain.vtu".
+       2. Compute the persistence diagram on the scalar field named "Blend".
+       3. Apply a threshold to filter out pairs with persistence value less than 1.
+       4. Save the persistence diagram as "noisyTerrain/results/{agent_mode}/noisyTerrain.vtk" in legacy VTK format.
+       - The output should contain the points in the persistence diagram as point data, and each persistence pair is represented as a cell.
+       - Include the following three scalar arrays with the given names and purposes:
+         * "Birth" array: store the birth value of each pair.
+         * "Persistence" array: store the persistence value of each pair.
+         * "IsFinite" array: use 1 to mark finite persistence and 0 to mark infinite persistence.
+   assert:
+     - type: rule_based
+       eval_script: noisyTerrain/GS/noisyTerrain_eval.py
+       eval_function: evaluateNoisyTerrainPersistenceDiagram
+       gs_file:
+         - noisyTerrain/GS/noisyTerrain_gs.vtk
+       rs_file:
+         - noisyTerrain/results/{agent_mode}/noisyTerrain.vtk
+ # 7. molecule
+ # This dataset contains electron density and reduced gradient for a simple Ethane-Diol molecule, originally from Roberto Alvarez Boto. See https://github.com/topology-tool-kit/ttk-data.
+ - vars:
+     question: |
+       1. Load the data file "molecule/data/molecule.vti".
+       2. Compute the Morse-Smale segmentation on the scalar field named "log(s)".
+       3. Save the Morse-Smale segmentation as "molecule/results/{agent_mode}/molecule.vti".
+          It should have a point array called "Segmentation".
+          For each point x, the array "Segmentation" should store the id number of the region in the segmentation that x belongs to.
+   assert:
+     - type: rule_based
+       eval_script: molecule/GS/molecule_eval.py
+       eval_function:
+       gs_file:
+         - molecule/GS/molecule_gs.vti
+       rs_file:
+         - molecule/results/{agent_mode}/molecule.vti
+
+
+ # 8. moons
+ # This 2D data set is based on the scikit-learn clustering examples (see https://scikit-learn.org/stable/modules/clustering.html); a density field is computed using Gaussian Resampling on the original point cloud.
+ - vars:
+     question: |
+       1. Load the data file "moons/data/moons.vti".
+       2. Apply topological simplification to the field "SplatterValues" with a persistence threshold of 10.
+       3. Compute the Morse-Smale segmentation on the simplified scalar field.
+       4. Save only the Ascending Manifold as "moons/results/{agent_mode}/moons.vti".
+          It should have a point array called "AscendingManifold".
+          For each point x, the array "AscendingManifold" should store the id number of the region that x belongs to.
+   assert:
+     - type: rule_based
+       eval_script: moons/GS/moons_eval.py
+       eval_function: evaluateMoonAscendingManifold
+       gs_file:
+         - moons/GS/moons_gs.vti
+       rs_file:
+         - moons/results/{agent_mode}/moons.vti
+
+
+ # 9. dragon
+ # The dataset is the scanned dragon model in the ttk-data GitHub repo (https://github.com/topology-tool-kit/ttk-data), originally from VisionAIR (VISION Advanced Infrastructure for Research).
+ - vars:
+     question: |
+       1. Load the dataset from "dragon/data/dragon.vtu".
+
+       2. Compute the Morse-Smale complex on the scalar field named "density". Make sure 1-Separatrices are computed.
+
+       3. Compute the critical points on the same "density" scalar field.
+
+       4. Save the critical points as "dragon/results/{agent_mode}/dragon.vtk" in legacy VTK format.
+       - The output should contain the critical points as point data
+       - Include an array called "CriticalType" that labels each point with the type of critical point it is. Use the following convention:
+         * 0 for minima
+         * 1 for 1-saddles
+         * 2 for 2-saddles
+         * 3 for maxima
+       - The point coordinates should be in world coordinates
+
+       5. Analyze the visualization and answer the following questions:
+
+       Q1: How many saddle-maximum pairs are present in the dataset?
+       (A) 2 (B) 4 (C) 6 (D) 10.
+
+       Q2: How many minima are computed?
+       (A) 2 (B) 5 (C) 8 (D) 10.
+
+       Q3: Are there any saddle-saddle pairs in the persistence diagram?
+       (Yes/No)
+
+       Save the answers to the analysis questions in plain text as "dragon/results/{agent_mode}/answers.txt".
+   assert:
+     - type: rule_based
+       eval_script: dragon/GS/dragon_eval.py
+       eval_function: evaluateDragonCriticalPoints
+       gs_file:
+         - dragon/GS/dragon_gs.vtk
+       rs_file:
+         - dragon/results/{agent_mode}/dragon.vtk
+     - type: llm-rubric
+       subtype: text
+       value: |
+         1. Q1 correct answer: (C) 6
+
+         2. Q2 correct answer: (B) 5
+
+         3. Q3 correct answer: No
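Each rule_based assertion above pairs an `eval_script` with an `eval_function`, which suggests the harness imports the named function from the script at run time. The real harness is not part of this commit; `load_eval_function` and the demo script below are hypothetical, a minimal sketch of that dispatch:

```python
import importlib.util
import os
import tempfile

def load_eval_function(script_path, function_name):
    """Load `function_name` from the Python file at `script_path`."""
    spec = importlib.util.spec_from_file_location("eval_module", script_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, function_name)

# Demonstrate with a stand-in eval script written to a temporary file.
with tempfile.TemporaryDirectory() as tmp:
    script = os.path.join(tmp, "demo_eval.py")
    with open(script, "w") as f:
        f.write("def demoScore(gt, recon, verbose=False):\n    return 10\n")
    scorer = load_eval_function(script, "demoScore")
    print(scorer("gs.vtk", "rs.vtk"))  # 10
```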
topology/QMCPACK/task_description.txt CHANGED
@@ -10,4 +10,12 @@
  * 2 for 2-saddles
  * 3 for maxima
  * 4 for degenerate critical points
- - The point coordinates should be in index space (grid coordinates), not world coordinates
+ - The point coordinates should be in index space (grid coordinates), not world coordinates
+
+ 4. Analyze the visualization and answer the following questions:
+ Q1: How many index-1 saddles are there?
+ (A) 248 (B) 274 (C) 299 (D) 344
+
+ Q2: What is the type of the critical point closest to coordinates (4, 58, 12)?
+ (A) minimum (B) 1-saddle (C) 2-saddle (D) maximum
+ Save the answers to the analysis questions in plain text as "QMCPACK/results/{agent_mode}/answers.txt".
topology/brain/task_description.txt CHANGED
@@ -1,3 +1,9 @@
  1. Load the file "brain/data/brain.vti". It is a symmetric tensor field, where the (1,1), (1,2) and (2,2) components of the tensor are respectively given by the arrays A, B, and D.
  2. Compute degenerate points of the tensor field.
- 3. Save the degenerate points as "brain/results/{agent_mode}/brain.vtk" in legacy VTK format. Label the type of degenerate point for each point in an array called DegeneracyType. Use a value of 0 for trisectors and 1 for wedges.
+ 3. Save the degenerate points as "brain/results/{agent_mode}/brain.vtk" in legacy VTK format. Label the type of degenerate point for each point in an array called DegeneracyType. Use a value of 0 for trisectors and 1 for wedges.
+ 4. Analyze the visualization and answer the following questions:
+ Q1: Are there more trisectors than wedges? (yes/no)
+
+ Q2: Take the degenerate point whose coordinate sum is the highest. What is that sum, rounded to the nearest integer?
+ (A) 124 (B) 136 (C) 148 (D) 160
+ Save the answers to the analysis questions in plain text as "brain/results/{agent_mode}/answers.txt".
topology/cylinder/task_description.txt CHANGED
@@ -1,4 +1,11 @@
  1. Please load the file "cylinder/data/cylinder.vti"
  2. Apply persistence simplification of 0.01 to the Speed field.
  3. Compute the Morse-Smale segmentation of the simplified Speed field.
- 4. Save the Morse-Smale segmentation as "cylinder/results/{agent_mode}/cylinder.vti". It should have a point array called Partition. For each point x, the array "Partition" should store the id number of the region in the segmentation that x belongs to.
+ 4. Save the Morse-Smale segmentation as "cylinder/results/{agent_mode}/cylinder.vti". It should have a point array called Partition. For each point x, the array "Partition" should store the id number of the region in the segmentation that x belongs to.
+ 5. Analyze the visualization and answer the following questions:
+ Q1: How many unique partition regions are there?
+ (A) 152 (B) 163 (C) 174 (D) 185
+
+ Q2: How many points are in the largest partition region?
+ (A) 6879 (B) 7968 (C) 8796 (D) 9687
+ Save the answers to the analysis questions in plain text as "cylinder/results/{agent_mode}/answers.txt".
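For orientation, Q1 and Q2 reduce to label counting once the "Partition" array has been pulled into NumPy (e.g. via `vtk.util.numpy_support`); a small sketch with a toy array standing in for the real point data:

```python
import numpy as np

def partition_stats(partition):
    """Return (number of unique regions, size of the largest region)."""
    labels, counts = np.unique(partition, return_counts=True)
    return len(labels), int(counts.max())

# Toy label array standing in for the real "Partition" point data.
print(partition_stats(np.array([0, 0, 1, 1, 1, 2])))  # (3, 3)
```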
topology/dragon/GS/dragon_eval.py ADDED
@@ -0,0 +1,33 @@
+ import sys
+ import os
+
+ # Add the topology directory to the Python path
+ sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../..'))
+
+ from topologyScoring import pointCloudGeometryScore
+
+ def evaluateDragonCriticalPoints(gtFilename : str, reconFilename : str, verbose : bool = False):
+     """
+     Given two sets of critical points, return a similarity score from 0-10.
+
+     A score of 0 is considered bad and a score of 10 is considered good.
+
+     Args:
+         gtFilename: The name of a file in legacy VTK format (.vtk) that stores the locations of each critical point
+                     in the ground truth data. It should also have a point array called "CriticalType" that assigns
+                     values as follows: 0: minimum. 1: 1-saddle. 2: 2-saddle. 3: maximum. 4: degenerate critical point.
+         reconFilename: The name of a file in legacy VTK format (.vtk) that stores the locations and critical types
+                        of each point in the reconstructed data.
+         verbose: Should error messages be printed if there are issues with the input files.
+     """
+     return pointCloudGeometryScore(gtFilename, "CriticalType", reconFilename, "CriticalType", verbose)
+
+ if __name__ == "__main__":
+
+     if len(sys.argv) != 3:
+         print(f"{os.path.basename(__file__)}: usage is 'python3 {os.path.basename(__file__)} gt_points.vtk recon_points.vtk'")
+         exit(1)
+
+     score = evaluateDragonCriticalPoints(sys.argv[1], sys.argv[2], verbose=True)
+
+     print(f"These critical points scored: {score}")
topology/dragon/GS/dragon_gs.vtk ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:34f548c791a15ab3c72c752009b5cd6924b617ebffe505207014d1d9416c7c45
+ size 8156
topology/dragon/task_description.txt CHANGED
@@ -1,11 +1,19 @@
- Task:
- 1. Load the dragon dataset from "dragon/data/dragon.vtu". Apply an elevation function along the y-axis, ranging from 0 to 100.
-
- 2. Compute the persistence diagram using the "elevation" scalar field.
-
- 3. After filtering out persistence values less than 10, compute the critical points (e.g., minima, maxima, and saddle points) on the dataset.
-
- 4. Analyze the visualization and answer the following questions:
+ 1. Load the dataset from "dragon/data/dragon.vtu".
+
+ 2. Compute the Morse-Smale complex on the scalar field named "density". Make sure 1-Separatrices are computed.
+
+ 3. Compute the critical points on the same "density" scalar field.
+
+ 4. Save the critical points as "dragon/results/{agent_mode}/dragon.vtk" in legacy VTK format.
+ - The output should contain the critical points as point data
+ - Include an array called "CriticalType" that labels each point with the type of critical point it is. Use the following convention:
+ * 0 for minima
+ * 1 for 1-saddles
+ * 2 for 2-saddles
+ * 3 for maxima
+ - The point coordinates should be in world coordinates
+
+ 5. Analyze the visualization and answer the following questions:

  Q1: How many saddle-maximum pairs are present in the dataset?
  (A) 2 (B) 4 (C) 6 (D) 10.
@@ -16,6 +24,4 @@ Q2: How many minima are computed?
  Q3: Are there any saddle-saddle pairs in the persistence diagram?
  (Yes/No)

- 5. Save your work:
- Save the ParaView state as "dragon/results/{agent_mode}/dragon.pvsm".
  Save the answers to the analysis questions in plain text as "dragon/results/{agent_mode}/answers.txt".
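For intuition about the saddle-maximum pairs asked about in Q1, persistence pairing can be illustrated on a 1D scalar field: sweeping values from high to low, each local maximum starts a superlevel-set component, and when two components merge at a "saddle" the younger one dies (the elder rule). This is a generic sketch of the idea, not the TTK code the benchmark uses:

```python
def superlevel_persistence_pairs(values):
    """
    Persistence pairs of the superlevel-set filtration of a 1D sequence.
    Uses the elder rule: when two components merge, the one born at the
    lower maximum dies. Returns (birth, death) pairs; the global maximum's
    pair is omitted (infinite persistence).
    """
    n = len(values)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    pairs = []
    active = [False] * n
    # Process samples from highest to lowest value.
    for i in sorted(range(n), key=lambda k: -values[k]):
        active[i] = True
        for j in (i - 1, i + 1):
            if 0 <= j < n and active[j]:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # The component born at the lower maximum dies here.
                    loser, winner = (ri, rj) if values[ri] < values[rj] else (rj, ri)
                    if loser != i:  # a fresh sample merging in is not a maximum
                        pairs.append((values[loser], values[i]))
                    parent[loser] = winner
    return pairs

# One maximum (3) survives; the maximum at 2 dies at the saddle value 1.
print(superlevel_persistence_pairs([0, 3, 1, 2, 0]))  # [(2, 1)]
```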
topology/dragon/visualization_goals.txt DELETED
@@ -1,13 +0,0 @@
- vision:
- 1. Overall visualization quality
-
- 2. Correct color mapping for critical points
-
- 3. Correct pairing of critical points
-
- text:
- 1. Q1 correct answer: (C) 6
-
- 2. Q2 correct answer: (B) 5
-
- 3. Q3 correct answer: No
topology/isabel/task_description.txt CHANGED
@@ -6,4 +6,14 @@ This file should have two point arrays. One should be called "CriticalType" and
  It should follow the following convention: 0: minima. 1: 1-saddles. 2: 2-saddles. 3: maxima. 4: degenerate critical points.
  The other point array should be called "Scalar" and should contain the scalar field value at each point in the merge tree.
  5. Save the edges of the merge tree as "isabel/results/{agent_mode}/isabel_edges.vtk" in legacy VTK format.
- The file should store each edge as a separate cell with type vtkLine.
+ The file should store each edge as a separate cell with type vtkLine.
+ 6. Analyze the visualization and answer the following questions:
+ Q1: The parent node of the leaf (377, 265, 0) has coordinates (x,y,z). What is x+y+z?
+ (A) 627 (B) 854 (C) 992 (D) 1039
+
+ Q2: How many edges are there in the merge tree?
+ (A) 154 (B) 195 (C) 204 (D) 254
+
+ Q3: What is the highest scalar field value of a minimum, rounded to the nearest whole number?
+ (A) 12 (B) 26 (C) 31 (D) 58
+ Save the answers to the analysis questions in plain text as "isabel/results/{agent_mode}/answers.txt".
topology/molecule/GS/molecule_gs.vti ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bd16c23afd789bdbe0bdc80227f8bb3064cc187efd430bf8866bd205b0018a5a
+ size 9534120
topology/molecule/data/molecule.vti ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b1d4cfbd469fbe131ff36790302ea0ba9fd5407472c7b0c8c7990f8db360a1cb
+ size 28601738
topology/molecule/task_description.txt ADDED
@@ -0,0 +1,5 @@
+ 1. Load the data file "molecule/data/molecule.vti".
+ 2. Compute the Morse-Smale segmentation on the scalar field named "log(s)".
+ 3. Save the Morse-Smale segmentation as "molecule/results/{agent_mode}/molecule.vti".
+ It should have a point array called "Segmentation".
+ For each point x, the array "Segmentation" should store the id number of the region in the segmentation that x belongs to.
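The segmentation comparison behind the scoring (the actual `partitionTopologicalDiceScore` is not included in this commit) can be sketched as a label-invariant, size-weighted best-overlap Dice score; `best_match_dice` below is a hypothetical stand-in, not the benchmark's implementation:

```python
import numpy as np

def best_match_dice(gt, recon):
    """
    Label-invariant overlap between two segmentations of the same points:
    each ground-truth region is scored by the Dice coefficient with its
    best-overlapping reconstructed region, weighted by region size.
    """
    assert gt.shape == recon.shape
    total = 0.0
    for g in np.unique(gt):
        in_g = gt == g
        best = 0.0
        for r in np.unique(recon[in_g]):
            in_r = recon == r
            dice = 2 * np.sum(in_g & in_r) / (np.sum(in_g) + np.sum(in_r))
            best = max(best, dice)
        total += best * np.sum(in_g)
    return total / gt.size

# Identical partitions under different label ids still score 1.0.
print(best_match_dice(np.array([0, 0, 1, 1]), np.array([5, 5, 7, 7])))  # 1.0
```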
topology/moon/GS/moons_eval.py ADDED
@@ -0,0 +1,32 @@
+ import os
+ import sys
+
+ # Add the topology directory to the Python path
+ sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../..'))
+
+ from topologyScoring import partitionTopologicalDiceScore
+
+ def evaluateMoonAscendingManifold(gtFilename : str, reconFilename : str, verbose : bool = False) -> int:
+     """
+     Given two ascending manifolds of the same domain, return a similarity score from 0-10.
+     A score of 0 is considered bad and a score of 10 is considered good. The segmentations should be
+     represented by a point array called "AscendingManifold" that assigns a region identifier to each point in
+     the domain. The region identifiers between the ground truth and reconstructed files do not need to match.
+     Args:
+         gtFilename: The name of a file storing VTK image data (.vti) containing the ground truth ascending manifold.
+                     Each point's region ID should be stored in a point array called "AscendingManifold".
+         reconFilename: The name of a file storing VTK image data (.vti) containing the reconstructed ascending manifold.
+         verbose: Should error messages be printed if there are issues with the input files.
+     """
+
+     return partitionTopologicalDiceScore(gtFilename, "AscendingManifold", reconFilename, "AscendingManifold", verbose)
+
+ if __name__ == "__main__":
+
+     if len(sys.argv) != 3:
+         print(f"{os.path.basename(__file__)}: usage is 'python3 {os.path.basename(__file__)} gt_filename.vti recon_filename.vti'")
+         exit(1)
+
+     score = evaluateMoonAscendingManifold(sys.argv[1], sys.argv[2], verbose=True)
+
+     print(f"This ascending manifold scored: {score}")
topology/moon/GS/moons_gs.vti ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a94f60f79a15c79dd8199613d78c0a6cd42a10cf548fc36e3b49cd3cec9f572b
+ size 350065
topology/moon/data/moons.vti ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c733c228cf65c44f68c22dfdf1cfee2af8a5725658f370f3255c80ce582df614
+ size 699632
topology/moon/task_description.txt ADDED
@@ -0,0 +1,6 @@
+ 1. Load the data file "moons/data/moons.vti".
+ 2. Apply topological simplification to the field "SplatterValues" with a persistence threshold of 10.
+ 3. Compute the Morse-Smale segmentation on the simplified scalar field.
+ 4. Save only the Ascending Manifold as "moons/results/{agent_mode}/moons.vti".
+ It should have a point array called "AscendingManifold".
+ For each point x, the array "AscendingManifold" should store the id number of the region that x belongs to.
topology/noisyTerrain/GS/noisyTerrain_eval.py ADDED
@@ -0,0 +1,151 @@
+ import vtk
+ import numpy as np
+ import gudhi.wasserstein
+ import sys
+ import os
+
+ # Add the topology directory to the Python path
+ sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../..'))
+
+ ###############################################################################
+ # The following parameters are from `topologyScoring.py`
+ ###############################################################################
+ # Set to True to allow data that is not perfectly predicted to score a perfect 10.
+ # If this is set to False, the highest possible score for an imperfect prediction is a 9.
+ canImperfectPredictionsScore10 = False
+
+ # The order of the Wasserstein distance
+ wassersteinOrder = 1.0
+
+ # The ground metric used for computing the Wasserstein distance
+ wassersteinGroundMetric = float('inf')
+
+ # This is the maximum average Wasserstein distance (the average is taken over (|P|+|Q|)/2) that can score points.
+ # Any distance above this maximum will score a 0.
+ maximumAverageWassersteinDistance = 0.2
+
+ ###############################################################################
+ # You can integrate the following two functions into `topologyScoring.py`
+ ###############################################################################
+ def _loadPersistenceDiagramFromVTK(pdFilename : str) -> np.ndarray:
+     """
+     Load a persistence diagram from a VTK file computed with TTK.
+
+     Args:
+         pdFilename: The path to the VTK file containing the persistence diagram.
+
+     Returns:
+         A numpy array of shape (n, 2) where each row is a (birth, death) pair for finite persistence pairs.
+     """
+     reader = vtk.vtkDataSetReader()
+     reader.SetFileName(pdFilename)
+     reader.Update()
+
+     output = reader.GetOutput()
+     if output is None:
+         raise ValueError(f"Could not read VTK file: {pdFilename}")
+
+     cellData = output.GetCellData()
+
+     birthArray = cellData.GetArray("Birth")
+     persistenceArray = cellData.GetArray("Persistence")
+     isFiniteArray = cellData.GetArray("IsFinite")
+
+     if birthArray is None or persistenceArray is None:
+         raise ValueError(f"VTK file {pdFilename} does not contain the required 'Birth' and 'Persistence' arrays")
+
+     pairs = []
+     numCells = output.GetNumberOfCells()
+
+     for i in range(numCells):
+         isFinite = isFiniteArray.GetTuple1(i) if isFiniteArray else 1
+         if isFinite:
+             birth = birthArray.GetTuple1(i)
+             persistence = persistenceArray.GetTuple1(i)
+             death = birth + persistence
+             pairs.append((birth, death))
+
+     return np.array(pairs)
+
+
+ # ====== PERSISTENCE DIAGRAM WASSERSTEIN SCORE ======
+
+ def persistenceDiagramWassersteinScore(gtFilename : str, reconFilename : str, verbose : bool = False) -> int:
+     """
+     Compute a similarity score (0-10) between two persistence diagrams stored in VTK files using the Wasserstein distance.
+
+     Args:
+         gtFilename: Path to the ground truth persistence diagram VTK file.
+         reconFilename: Path to the reconstructed persistence diagram VTK file.
+         verbose: Whether to print error messages.
+
+     Returns:
+         An integer score from 0-10 indicating similarity (10 is best).
+     """
+     try:
+         gtDiagram = _loadPersistenceDiagramFromVTK(gtFilename)
+     except Exception as e:
+         if verbose:
+             print(f"Error loading GT diagram: {e}")
+         return 0
+
+     try:
+         reconDiagram = _loadPersistenceDiagramFromVTK(reconFilename)
+     except Exception as e:
+         if verbose:
+             print(f"Error loading recon diagram: {e}")
+         return 0
+
+     if len(gtDiagram) == 0 and len(reconDiagram) == 0:
+         return 10
+     elif len(gtDiagram) == 0 or len(reconDiagram) == 0:
+         return 0
+
+     # Normalize both diagrams using the GT's min-max range
+     minFunctionValue = np.min(gtDiagram)
+     maxFunctionValue = np.max(gtDiagram)
+
+     gtDiagram = (gtDiagram - minFunctionValue) / (maxFunctionValue - minFunctionValue)
+     reconDiagram = (reconDiagram - minFunctionValue) / (maxFunctionValue - minFunctionValue)
+
+     wassersteinDistance = gudhi.wasserstein.wasserstein_distance(gtDiagram, reconDiagram, order=wassersteinOrder, internal_p=wassersteinGroundMetric)
+
+     numAverage = (gtDiagram.shape[0] + reconDiagram.shape[0]) / 2
+     wassersteinDistance /= numAverage
+
+     if wassersteinDistance == 0:
+         return 10
+
+     score = round(10 * (maximumAverageWassersteinDistance - wassersteinDistance) / maximumAverageWassersteinDistance)
+
+     if not canImperfectPredictionsScore10 and score == 10:
+         return 9
+
+     if score < 0:
+         return 0
+
+     return score
+
+
+ def evaluateNoisyTerrainPersistenceDiagram(gtFilename : str, reconFilename : str, verbose : bool = False):
+     """
+     Given two persistence diagrams, return a similarity score from 0-10.
+
+     A score of 0 is considered bad and a score of 10 is considered good.
+
+     Args:
+         gtFilename: The name of a file in legacy VTK format (.vtk) that stores the persistence diagram of the ground truth data.
+         reconFilename: The name of a file in legacy VTK format (.vtk) that stores the persistence diagram of the reconstructed data.
+         verbose: Should error messages be printed if there are issues with the input files.
+     """
+     return persistenceDiagramWassersteinScore(gtFilename, reconFilename, verbose)
+
+ if __name__ == "__main__":
+
+     if len(sys.argv) != 3:
+         print(f"{os.path.basename(__file__)}: usage is 'python3 {os.path.basename(__file__)} gt_diagram.vtk recon_diagram.vtk'")
+         exit(1)
+
+     score = evaluateNoisyTerrainPersistenceDiagram(sys.argv[1], sys.argv[2], verbose=True)
+
+     print(f"This persistence diagram scored: {score}")
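Isolated from the file I/O above, the distance-to-score mapping is a clamped linear ramp with a cap for imperfect predictions; the same arithmetic as the script's tail, pulled out here for quick checking (parameter defaults mirror the constants above):

```python
def wasserstein_to_score(distance, max_distance=0.2, allow_imperfect_10=False):
    """Map an average Wasserstein distance to an integer 0-10 score."""
    if distance == 0:
        return 10  # only a perfect match scores 10 by default
    score = round(10 * (max_distance - distance) / max_distance)
    if not allow_imperfect_10 and score == 10:
        return 9  # imperfect predictions top out at 9
    return max(score, 0)

print(wasserstein_to_score(0.0))   # 10
print(wasserstein_to_score(0.1))   # 5
print(wasserstein_to_score(0.3))   # 0
```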
topology/noisyTerrain/GS/noisyTerrain_gs.vtk ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a4382b8fbaaa83b299a8cae5257734f66b8380edce9c9b0febd58e988ec9d209
+ size 3603
topology/noisyTerrain/data/noisyTerrain.vtu ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a07febd5f3230c45220d7718fff4a8020b8c6318186fea15cc960f794da502bf
+ size 10291578
topology/noisyTerrain/task_description.txt ADDED
@@ -0,0 +1,9 @@
+ 1. Load the dataset from "noisyTerrain/data/noisyTerrain.vtu".
+ 2. Compute the persistence diagram on the scalar field named "Blend".
+ 3. Apply a threshold to filter out pairs with persistence value less than 1.
+ 4. Save the persistence diagram as "noisyTerrain/results/{agent_mode}/noisyTerrain.vtk" in legacy VTK format.
+ - The output should contain the points in the persistence diagram as point data, and each persistence pair is represented as a cell.
+ - Include the following three scalar arrays with the given names and purposes:
+   * "Birth" array: store the birth value of each pair.
+   * "Persistence" array: store the persistence value of each pair.
+   * "IsFinite" array: use 1 to mark finite persistence and 0 to mark infinite persistence.
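The output layout requested above (one cell per pair; Birth, Persistence, IsFinite as scalar arrays on the cells) can be sketched as a bare-bones ASCII writer in the legacy VTK format; `write_diagram_vtk` is a hypothetical helper for illustration, not what TTK actually emits:

```python
def write_diagram_vtk(pairs, path):
    """
    Write (birth, death, is_finite) triples as a legacy-VTK polydata file.
    Each pair becomes a 2-point line cell from (birth, birth) to
    (birth, death); Birth, Persistence and IsFinite are written as cell data.
    """
    n = len(pairs)
    out = ["# vtk DataFile Version 3.0", "persistence diagram", "ASCII",
           "DATASET POLYDATA", f"POINTS {2 * n} float"]
    for birth, death, _ in pairs:
        out += [f"{birth} {birth} 0", f"{birth} {death} 0"]
    # LINES section: n cells, each listed as "2 <point> <point>" (3 ints per cell).
    out.append(f"LINES {n} {3 * n}")
    out += [f"2 {2 * i} {2 * i + 1}" for i in range(n)]
    out.append(f"CELL_DATA {n}")
    arrays = {
        "Birth": [b for b, d, f in pairs],
        "Persistence": [d - b for b, d, f in pairs],
        "IsFinite": [f for b, d, f in pairs],
    }
    for name, values in arrays.items():
        out += [f"SCALARS {name} float 1", "LOOKUP_TABLE default"]
        out += [str(v) for v in values]
    with open(path, "w") as fp:
        fp.write("\n".join(out) + "\n")
```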
topology/ocean/task_description.txt CHANGED
@@ -14,4 +14,14 @@ It should have a point array called "Partition" that stores the region identifie

  6. Save the partition information from the eigenvalue partition as "ocean/results/{agent_mode}/ocean_eigenvalue.vti" as VTK image data.
  It should have a point array called "Partition" that stores the region identifiers as follows:
- 0: positive scaling. 1: counterclockwise rotation. 2: negative scaling. 3: clockwise rotation. 4: anisotropic stretching.
+ 0: positive scaling. 1: counterclockwise rotation. 2: negative scaling. 3: clockwise rotation. 4: anisotropic stretching.
+
+ 7. Analyze the visualization and answer the following questions:
+ Q1: Are there more trisectors than wedges? (yes/no)
+
+ Q2: How many points have the most common classification in the eigenvector partition?
+ (A) 752342 (B) 802842 (C) 826348 (D) 994682
+
+ Q3: Which is the least common classification in the eigenvalue partition?
+ (A) positive scaling (B) counterclockwise rotation (C) negative scaling (D) clockwise rotation
+ Save the answers to the analysis questions in plain text as "ocean/results/{agent_mode}/answers.txt".
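Q2 and Q3 come down to frequency counts over the "Partition" identifiers; a small sketch with a toy array standing in for the real eigenvalue-partition point data:

```python
from collections import Counter

# Region-id meanings from the task description above.
labels = {0: "positive scaling", 1: "counterclockwise rotation",
          2: "negative scaling", 3: "clockwise rotation",
          4: "anisotropic stretching"}

# Toy "Partition" values standing in for the real per-point data.
partition = [0, 0, 1, 1, 1, 2, 3, 3, 4, 4, 4, 4]
counts = Counter(partition)
most_label, most_count = counts.most_common(1)[0]
least_label, least_count = counts.most_common()[-1]
print(labels[most_label], most_count)    # anisotropic stretching 4
print(labels[least_label], least_count)  # negative scaling 1
```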