KuangshiAi committed on
Commit · 4fe94e5
1 Parent(s): 367c2bc
add 5 topology cases from Nathaniel Gorski
Browse files
- .DS_Store +0 -0
- eval_cases/topology/topology_cases.yaml +109 -4
- topology/QMCPack/GS/QMCPack_eval.py +34 -0
- topology/QMCPack/GS/QMCPack_gs.vtk +3 -0
- topology/QMCPack/data/QMCPack.vti +3 -0
- topology/QMCPack/task_description.txt +13 -0
- topology/brain/GS/brain_eval.py +34 -0
- topology/brain/GS/brain_gs.vtk +3 -0
- topology/brain/data/brain.vti +3 -0
- topology/brain/task_description.txt +3 -0
- topology/cylinder/GS/cylinder_eval.py +35 -0
- topology/cylinder/GS/cylinder_gs.vti +3 -0
- topology/cylinder/data/cylinder.vti +3 -0
- topology/cylinder/task_description.txt +4 -0
- topology/isabel/GS/isabel_edges_gs.vtk +3 -0
- topology/isabel/GS/isabel_eval.py +55 -0
- topology/isabel/GS/isabel_points_gs.vtk +3 -0
- topology/isabel/data/isabel.vti +3 -0
- topology/isabel/task_description.txt +9 -0
- topology/ocean/GS/ocean_eigenvalue.vti +3 -0
- topology/ocean/GS/ocean_eigenvector.vti +3 -0
- topology/ocean/GS/ocean_eval.py +71 -0
- topology/ocean/GS/ocean_points.vtk +3 -0
- topology/ocean/data/ocean.vti +3 -0
- topology/ocean/task_description.txt +15 -0
- topology/topologyScoring.py +1211 -0
.DS_Store
CHANGED
Binary files a/.DS_Store and b/.DS_Store differ
eval_cases/topology/topology_cases.yaml
CHANGED
@@ -9,15 +9,120 @@
 - vars:
     question: |
       1. Please load the dataset from "QMCPack/data/QMCPack.vti".
-
       2. Compute the critical points of the scalar field.
-
       3. Save the critical points as "QMCPack/results/{agent_mode}/QMCPack.vtk" in legacy VTK format.
         - The output should contain the critical points as point data
-        - Include
+        - Include an array called "CriticalType" that labels each point according to what type of critical point it is. Use the following convention:
           * 0 for minima
           * 1 for 1-saddles
           * 2 for 2-saddles
           * 3 for maxima
           * 4 for degenerate critical points
-        - The point coordinates should be in index space (grid coordinates), not world coordinates
+        - The point coordinates should be in index space (grid coordinates), not world coordinates
+  assert:
+    - type: rule_based
+      eval_script: QMCPack/GS/QMCPack_eval.py
+      eval_function: evaluateQmcpackCriticalPoints
+      gs_file: QMCPack/GS/QMCPack.vtk
+
+
+# 2. Brain
+# Symmetric 2D 2x2 tensor field. The (1,1), (1,2) and (2,2) components are given by the arrays A, B, and D respectively (the (2,1) component is equal to the (1,2) component).
+# This is a 2D slice of the diffusion tensor of an MRI scan of a brain.
+# Specifically, to produce the data, we started by downloading the data from patient 23 from this study: https://www.nature.com/articles/s41597-021-01092-6.
+# Then, we extracted the diffusion tensor. We discarded the (2,1) entry that was produced, and set values outside of the brain to 0.
+# We then took the slice where Z=50 (zero indexed) and discarded components of the tensor that use Z to derive a 2x2 tensor.
+- vars:
+    question: |
+      1. Load the file "brain/data/brain.vti". It is a symmetric tensor field, where the (1,1), (1,2) and (2,2) components of the tensor are respectively given by the arrays A, B, and D.
+      2. Compute degenerate points of the tensor field.
+      3. Save the degenerate points as "brain/results/{agent_mode}/brain.vtk" in legacy VTK format. Label the type of degenerate point for each point in an array called DegeneracyType. Use a value of 0 for trisectors and 1 for wedges.
+  assert:
+    - type: rule_based
+      eval_script: brain/GS/brain_eval.py
+      eval_function: evaluateDegeneratePoints
+      gs_file: brain/GS/brain.vtk
+      rs_file: brain/results/{agent_mode}/brain.vtk
+
+
+# 3. Heated Cylinder
+# The dataset is a flow around a heated cylinder.
+# The data was taken from the computer graphics lab at ETH Zurich: https://cgl.ethz.ch/research/visualization/data.php.
+# We took time step 1000 of the "Heated Cylinder with Boussinesq Approximation" dataset.
+# We computed the flow magnitude to produce a scalar field.
+- vars:
+    question: |
+      1. Please load the file "cylinder/data/cylinder.vti"
+      2. Apply persistence simplification of 0.01 to the Speed field.
+      3. Compute the Morse-Smale segmentation of the simplified Speed field.
+      4. Save the Morse-Smale segmentation as "cylinder/results/{agent_mode}/cylinder.vti". It should have a point array called Partition. For each point x, the array "Partition" should store the id number of the region in the segmentation that x belongs to.
+  assert:
+    - type: rule_based
+      eval_script: cylinder/GS/cylinder_eval.py
+      eval_function: evaluateMSSEgmentation
+      gs_file: cylinder/GS/cylinder.vti
+      rs_file: cylinder/results/{agent_mode}/cylinder.vti
+
+
+# 4. Hurricane Isabel
+# Wind speed at each point in a 3D region for a single time step of hurricane Isabel.
+# The data was accessed from the SDR Bench (please cite): https://sdrbench.github.io/.
+# It, in turn, came from the IEEE SciVis contest 2004: http://vis.computer.org/vis2004contest/data.html.
+# To derive the field, we used the three components of the wind velocity to compute a wind speed scalar field.
+# We truncated the data from 500x500x100 to 500x500x90 so that the land component would not be present in the field.
+- vars:
+    question: |
+      1. Load the file "isabel/data/isabel.vti".
+      2. Apply persistence simplification to the field "sf" with a persistence threshold of 0.4.
+      3. Compute the merge tree of the simplified field.
+      4. Save the nodes of the merge tree as "isabel/results/{agent_mode}/isabel_nodes.vtk" in legacy VTK format.
+         This file should have two point arrays. One should be called "CriticalType" and should store the type of critical point for each node.
+         It should use the following convention: 0: minima. 1: 1-saddles. 2: 2-saddles. 3: maxima. 4: degenerate critical points.
+         The other point array should be called "Scalar" and should contain the scalar field value at each point in the merge tree.
+      5. Save the edges of the merge tree as "isabel/results/{agent_mode}/isabel_edges.vtk" in legacy VTK format.
+         The file should store each edge as a separate cell with type vtkLine.
+  assert:
+    - type: rule_based
+      eval_script: isabel/GS/isabel_eval.py
+      eval_function: evaluateMergeTree
+      gs_file:
+        - isabel/GS/isabel_nodes.vtk
+        - isabel/GS/isabel_edges.vtk
+      rs_file:
+        - isabel/results/{agent_mode}/isabel_nodes.vtk
+        - isabel/results/{agent_mode}/isabel_edges.vtk
+
+
+# 5. Ocean Flow
+# This is the 2x2 gradient tensor field of a slice of the Indian Ocean.
+# This tensor field is derived from the Global Ocean Physics Reanalysis dataset from EU Copernicus Marine.
+# The gradient was taken numerically. For exact specifics, please see https://arxiv.org/pdf/2508.09235, page 12 (Appendix A, where the Ocean dataset is described).
+- vars:
+    question: |
+      1. Please load the asymmetric tensor field from "ocean/data/ocean.vti". The (1,1), (1,2), (2,1) and (2,2) entries are respectively given by the arrays A, B, C, and D.
+
+      2. Compute the eigenvector partition of the dataset.
+
+      3. Save the degenerate points as "ocean/results/{agent_mode}/ocean_points.vtk" in legacy VTK format.
+         Include a point array called DegeneracyType which classifies each degenerate point.
+         It should have a value of 0 for trisectors and 1 for wedges.
+
+      4. Save the partition information from the eigenvector partition as "ocean/results/{agent_mode}/ocean_eigenvector.vti" as VTK image data.
+         It should give region identifiers as follows: 0: W_{c,s}. 1: W_{r,s}. 2: W_{r,n}. 3: W_{c,n}.
+
+      5. Compute the eigenvalue partition of the dataset.
+
+      6. Save the partition information from the eigenvalue partition as "ocean/results/{agent_mode}/ocean_eigenvalue_partition.vti" as VTK image data.
+         It should give region identifiers as follows: 0: positive scaling. 1: counterclockwise rotation. 2: negative scaling. 3: clockwise rotation. 4: anisotropic stretching.
+  assert:
+    - type: rule_based
+      eval_script: ocean/GS/ocean_eval.py
+      eval_function: evaluate2DAsymmetricTFTopology
+      gs_file:
+        - ocean/GS/ocean_points.vtk
+        - ocean/GS/ocean_eigenvector.vti
+        - ocean/GS/ocean_eigenvalue.vti
+      rs_file:
+        - ocean/results/{agent_mode}/ocean_points.vtk
+        - ocean/results/{agent_mode}/ocean_eigenvector.vti
+        - ocean/results/{agent_mode}/ocean_eigenvalue.vti
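The `assert` blocks above are declarative: each names an `eval_script` and an `eval_function` that some harness must import and call on the `gs_file`/`rs_file` pair. The harness itself is not part of this commit; purely as an illustration of the dispatch step, a minimal sketch (the `dummy_eval.py` and `dummyScore` names in the usage below are hypothetical):

```python
import importlib.util

def load_eval_function(eval_script: str, eval_function: str):
    """Import an eval script by file path and return the named scoring function,
    mirroring how a rule_based assert names its eval_script / eval_function pair."""
    spec = importlib.util.spec_from_file_location("case_eval", eval_script)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, eval_function)
```

A harness would call, e.g., `load_eval_function("topology/brain/GS/brain_eval.py", "evaluateDegeneratePoints")` and then invoke the returned function on the ground-truth and result files, with `{agent_mode}` substituted into the `rs_file` paths.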
topology/QMCPack/GS/QMCPack_eval.py
ADDED
@@ -0,0 +1,34 @@
+import sys
+import os
+
+# Add the topology directory to Python path
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../..'))
+
+from topologyScoring import pointCloudGeometryScore
+
+def evaluateQmcpackCriticalPoints(gtFilename : str, reconFilename : str, verbose : bool = False):
+    """
+    Given two sets of critical points, return a similarity score from 0-10.
+
+    A score of 0 is considered bad and a score of 10 is considered good.
+
+    Args:
+        gtFilename: The name of a file in legacy VTK format (.vtk) that stores the locations of each critical point
+                    in the ground truth data. It should also have a point array called "CriticalType". It should assign
+                    values as follows: 0: minimum. 1: 1-saddle. 2: 2-saddle. 3: maximum. 4: degenerate critical point.
+        reconFilename: The name of a file in legacy VTK format (.vtk) that stores the locations and critical types
+                       of each point in the reconstructed data.
+        verbose: Should error messages be printed if there are issues with the input files.
+    """
+    return pointCloudGeometryScore(gtFilename, "CriticalType", reconFilename, "CriticalType", verbose)
+
+if __name__ == "__main__":
+
+    if len(sys.argv) != 3:
+        print(f"{os.path.basename(__file__)}: usage is 'python3 {os.path.basename(__file__)} gt_points.vtk recon_points.vtk'")
+        exit(1)
+
+    score = evaluateQmcpackCriticalPoints(sys.argv[1], sys.argv[2], verbose=True)
+
+    print(f"These critical points scored: {score}")
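`pointCloudGeometryScore` lives in `topology/topologyScoring.py`, whose 1211 lines this commit adds but this page does not show. As a rough illustration of the kind of comparison such a score can be built on (an assumption, not the actual implementation), here is a symmetric nearest-neighbor distance that only matches points sharing the same label, e.g. `CriticalType`:

```python
import math

def nearest_distance(p, cloud):
    # Euclidean distance from point p to its nearest neighbor in cloud.
    return min(math.dist(p, q) for q in cloud)

def symmetric_label_distance(gt, recon):
    """Average nearest-neighbor distance between two labeled point sets,
    comparing only points that share the same label.
    gt / recon: lists of (point_tuple, label) pairs."""
    labels = {l for _, l in gt} | {l for _, l in recon}
    total, count = 0.0, 0
    for label in labels:
        a = [p for p, l in gt if l == label]
        b = [p for p, l in recon if l == label]
        if not a or not b:
            return float("inf")  # a label present on one side only
        for p in a:
            total += nearest_distance(p, b); count += 1
        for p in b:
            total += nearest_distance(p, a); count += 1
    return total / count
```

A real score would additionally map such a distance into the 0-10 range; that mapping is left to `topologyScoring.py`.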
topology/QMCPack/GS/QMCPack_gs.vtk
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:47aada72e4bcc1bea397c9615a748197eb52898d7752bbaf173b92783e366edc
+size 31881
topology/QMCPack/data/QMCPack.vti
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3714bfdc1fc115a016692394ba6afd9e6fd21ece85625c9d742cd550a05ddc45
+size 4380578
topology/QMCPack/task_description.txt
ADDED
@@ -0,0 +1,13 @@
+1. Please load the dataset from "QMCPack/data/QMCPack.vti".
+
+2. Compute the critical points of the scalar field.
+
+3. Save the critical points as "QMCPack/results/{agent_mode}/QMCPack.vtk" in legacy VTK format.
+  - The output should contain the critical points as point data
+  - Include an array called "CriticalType" that labels each point according to what type of critical point it is. Use the following convention:
+    * 0 for minima
+    * 1 for 1-saddles
+    * 2 for 2-saddles
+    * 3 for maxima
+    * 4 for degenerate critical points
+  - The point coordinates should be in index space (grid coordinates), not world coordinates
topology/brain/GS/brain_eval.py
ADDED
@@ -0,0 +1,34 @@
+import os
+import sys
+
+# Add the topology directory to Python path
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../..'))
+from topologyScoring import pointCloudGeometryScore
+
+def evaluateDegeneratePoints(gtPointsFile : str, reconPointsFile : str, verbose : bool = False) -> int:
+    """
+    Given two sets of tensor field degenerate points, return a similarity score from 0-10.
+
+    A score of 0 is considered bad and a score of 10 is considered good.
+
+    Args:
+        gtPointsFile: The name of a file in legacy VTK format (.vtk) that stores the locations of each degenerate point
+                      in the ground truth data. It should also have a point array called "DegeneracyType". It should assign
+                      a value of 0 to each trisector and 1 to each wedge.
+        reconPointsFile: The name of a file in legacy VTK format (.vtk) that stores the locations and degeneracy types
+                         of each point in the reconstructed data.
+        verbose: Should error messages be printed if there are issues with the input files.
+    """
+
+    return pointCloudGeometryScore(gtPointsFile, "DegeneracyType", reconPointsFile, "DegeneracyType", verbose)
+
+if __name__ == "__main__":
+
+    if len(sys.argv) != 3:
+        print(f"{os.path.basename(__file__)}: usage is 'python3 {os.path.basename(__file__)} ground_truth_file.vtk reconstructed_file.vtk'")
+        exit(1)
+
+    score = evaluateDegeneratePoints(sys.argv[1], sys.argv[2], verbose=True)
+
+    print(f"These degenerate points scored: {score}")
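How the degenerate points themselves are computed is left to the agent. For a symmetric 2D tensor [[A, B], [B, D]], degeneracies occur where (A-D)/2 and B vanish simultaneously, so a common first pass is to flag grid cells in which both quantities change sign. A minimal sketch of that necessary-condition test (classifying hits into trisectors vs. wedges additionally requires the sign of a Jacobian-like discriminant, which is omitted here):

```python
def degenerate_cells(A, B, D):
    """Flag grid cells that may contain a degenerate point of the symmetric
    tensor field [[A, B], [B, D]]: cells where alpha = (A - D) / 2 and
    beta = B both change sign across the cell's four corners.
    A, B, D: 2D lists indexed [y][x]. Returns (x, y) cell indices."""
    ny, nx = len(A), len(A[0])
    hits = []
    for y in range(ny - 1):
        for x in range(nx - 1):
            corners = [(y, x), (y, x + 1), (y + 1, x), (y + 1, x + 1)]
            alphas = [(A[j][i] - D[j][i]) / 2 for j, i in corners]
            betas = [B[j][i] for j, i in corners]
            # Both deviators must straddle zero for a root to lie inside.
            if min(alphas) <= 0 <= max(alphas) and min(betas) <= 0 <= max(betas):
                hits.append((x, y))
    return hits
```

A production implementation would then locate the root inside each flagged cell (e.g. by bilinear interpolation) rather than reporting the cell itself.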
topology/brain/GS/brain_gs.vtk
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cb2ebfd796b01bf48580dd3f12653ed443aca2a7b91ff0aa78a731af0139fb56
+size 21004
topology/brain/data/brain.vti
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83b03d3dc67e2d796d0bd67156c9b7751ee3442b99bf98a88d842ab754dc3d64
+size 171717
topology/brain/task_description.txt
ADDED
@@ -0,0 +1,3 @@
+1. Load the file "brain/data/brain.vti". It is a symmetric tensor field, where the (1,1), (1,2) and (2,2) components of the tensor are respectively given by the arrays A, B, and D.
+2. Compute degenerate points of the tensor field.
+3. Save the degenerate points as "brain/results/{agent_mode}/brain.vtk" in legacy VTK format. Label the type of degenerate point for each point in an array called DegeneracyType. Use a value of 0 for trisectors and 1 for wedges.
topology/cylinder/GS/cylinder_eval.py
ADDED
@@ -0,0 +1,35 @@
+import os
+import sys
+
+# Add the topology directory to Python path
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../..'))
+
+from topologyScoring import partitionTopologicalDiceScore
+
+def evaluateMSSEgmentation(gtFilename : str, reconFilename : str, verbose : bool = False) -> int:
+    """
+    Given two Morse-Smale segmentations of the same domain, return a similarity score from 0-10.
+
+    A score of 0 is considered bad and a score of 10 is considered good. The segmentations should be
+    represented by a point array called "Partition" that assigns a region identifier to each point in
+    the domain. The region identifiers between the ground truth and reconstructed files do not need to match.
+
+    Args:
+        gtFilename: The name of a file containing VTK image data (.vti) that stores the ground truth MS segmentation.
+                    Each point's region ID should be stored in a point array called "Partition".
+        reconFilename: The name of a file containing VTK image data (.vti) that stores the reconstructed MS segmentation.
+        verbose: Should error messages be printed if there are issues with the input files.
+    """
+
+    return partitionTopologicalDiceScore(gtFilename, "Partition", reconFilename, "Partition", verbose)
+
+if __name__ == "__main__":
+
+    if len(sys.argv) != 3:
+        print(f"{os.path.basename(__file__)}: usage is 'python3 {os.path.basename(__file__)} gt_filename.vti recon_filename.vti'")
+        exit(1)
+
+    score = evaluateMSSEgmentation(sys.argv[1], sys.argv[2], verbose=True)
+
+    print(f"This MS segmentation scored: {score}")
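`partitionTopologicalDiceScore` is defined in `topologyScoring.py`, which this page does not show. Since the docstring says region identifiers need not match between the two files, the overlap measure presumably pairs regions before computing Dice coefficients. A simple, hypothetical variant of that idea (not the actual implementation):

```python
from collections import Counter

def best_match_dice(gt_labels, recon_labels):
    """Size-weighted mean best-pair Dice overlap, in [0, 1], between two
    partitions given as flat per-point label lists over the same points.
    Region ids need not correspond: each ground truth region is matched
    to whichever reconstructed region it overlaps best."""
    gt_sizes = Counter(gt_labels)
    recon_sizes = Counter(recon_labels)
    pair_sizes = Counter(zip(gt_labels, recon_labels))  # co-occurrence counts
    total = 0.0
    for g, g_size in gt_sizes.items():
        best = max(
            2.0 * pair_sizes[(g, r)] / (g_size + recon_sizes[r])
            for r in recon_sizes
        )
        total += best * g_size
    return total / len(gt_labels)
```

Note this greedy matching lets two ground-truth regions claim the same reconstructed region; a stricter score would enforce a one-to-one assignment.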
topology/cylinder/GS/cylinder_gs.vti
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4fd6d5ee67d09d44afe8b9460ff5f243e3a18b3e06d1e4d6ee3a7beeef3b466
+size 540640
topology/cylinder/data/cylinder.vti
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b1602ec365ff1d37757b8c4d0b616ea3597bc898c796653a512518638f0624a4
+size 540632
topology/cylinder/task_description.txt
ADDED
@@ -0,0 +1,4 @@
+1. Please load the file "cylinder/data/cylinder.vti"
+2. Apply persistence simplification of 0.01 to the Speed field.
+3. Compute the Morse-Smale segmentation of the simplified Speed field.
+4. Save the Morse-Smale segmentation as "cylinder/results/{agent_mode}/cylinder.vti". It should have a point array called Partition. For each point x, the array "Partition" should store the id number of the region in the segmentation that x belongs to.
topology/isabel/GS/isabel_edges_gs.vtk
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:16db9e3e2add1db0226937e5d905b10adca85dadf93526c7c6112ead59760454
+size 14051
topology/isabel/GS/isabel_eval.py
ADDED
@@ -0,0 +1,55 @@
+import os
+import sys
+
+# Add the topology directory to Python path
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../..'))
+
+from topologyScoring import mergeTreePartialFusedGWDistanceScore, mergeTreePersistenceWassersteinScore
+
+def evaluateMergeTree(gtPointsFilename : str, gtEdgesFilename : str, reconPointsFilename : str,
+                      reconEdgesFilename : str, verbose : bool = False) -> int:
+    """
+    Given two merge trees, return a similarity score from 0-10.
+
+    This implementation only works with join trees, and not split trees or contour trees. Each merge tree
+    should be stored as two legacy VTK files (.vtk), where there is one file for the points and another for the edges.
+    The edges file should store the edges as cells of type vtkLine.
+    The points file should have an array called "CriticalType" which labels each vertex according to what type of
+    critical point it is. It should follow: 0: minimum. 1: 1-saddle. 2: 2-saddle. 3: maximum. 4: degenerate critical point.
+    The points file should also have an array called "Scalar" which stores the scalar field value at that point.
+
+    A score of 0 is considered bad and 10 is considered good. The score is computed based on the partial fused GW distance between
+    the two trees, as well as the Wasserstein distance between their persistence diagrams. These two distances are weighted equally.
+    For more information on the partial fused GW distance, see:
+        Mingzhe Li et al. "Flexible and Probabilistic Topology Tracking With Partial Optimal Transport".
+        doi: 10.1109/TVCG.2025.3561300
+
+    Args:
+        gtPointsFilename: The name of a file in legacy VTK format (.vtk) that stores the points of the ground truth merge tree
+                          along with the critical types and scalar field values.
+        gtEdgesFilename: The name of a file in legacy VTK format (.vtk) that stores the edges of the ground truth merge tree.
+        reconPointsFilename: The name of a file in legacy VTK format (.vtk) that stores the points of the reconstructed merge tree
+                             along with the critical types and scalar field values.
+        reconEdgesFilename: The name of a file in legacy VTK format (.vtk) that stores the edges of the reconstructed merge tree.
+    Returns:
+        A score from 0-10 comparing the similarity of the two merge trees.
+    """
+
+    pFGWScore = mergeTreePartialFusedGWDistanceScore(gtPointsFilename, gtEdgesFilename, "CriticalType", "Scalar",
+                                                     reconPointsFilename, reconEdgesFilename, "CriticalType", "Scalar", verbose)
+
+    persistenceScore = mergeTreePersistenceWassersteinScore(gtPointsFilename, gtEdgesFilename, "Scalar",
+                                                            reconPointsFilename, reconEdgesFilename, "Scalar", verbose)
+
+    return (pFGWScore + persistenceScore) / 2
+
+if __name__ == "__main__":
+
+    if len(sys.argv) != 5:
+        print(f"{os.path.basename(__file__)}: usage is 'python3 {os.path.basename(__file__)} gt_points_file.vtk gt_edge_file.vtk reconstructed_points_file.vtk reconstructed_edges_file.vtk'")
+        exit(1)
+
+    score = evaluateMergeTree(sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4], verbose=True)
+
+    print(f"This merge tree scored: {score}")
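The evaluator expects the merge tree split across a points file and an edges file, both in legacy VTK format. A minimal sketch of writing such a pair by hand, to make the expected layout concrete (illustrative only; a real pipeline would normally produce these through the `vtk` module or TTK, and the exact attribute types may differ from the GS files):

```python
def write_merge_tree(nodes_path, edges_path, points, critical_type, scalar, edges):
    """Write merge tree nodes and edges as two minimal legacy VTK polydata files.
    points: list of (x, y, z) tuples; critical_type / scalar: per-point arrays;
    edges: list of (i, j) point-index pairs, stored as vtkLine cells."""
    n = len(points)
    header = "# vtk DataFile Version 3.0\nmerge tree\nASCII\nDATASET POLYDATA\n"
    coords = "".join(f"{x} {y} {z}\n" for x, y, z in points)
    with open(nodes_path, "w") as f:
        f.write(header)
        f.write(f"POINTS {n} float\n" + coords)
        f.write(f"POINT_DATA {n}\n")
        f.write("SCALARS CriticalType int 1\nLOOKUP_TABLE default\n")
        f.write("".join(f"{c}\n" for c in critical_type))
        f.write("SCALARS Scalar float 1\nLOOKUP_TABLE default\n")
        f.write("".join(f"{s}\n" for s in scalar))
    with open(edges_path, "w") as f:
        f.write(header)
        f.write(f"POINTS {n} float\n" + coords)
        # Each vtkLine cell is written as "2 i j": a 2-point connectivity entry.
        f.write(f"LINES {len(edges)} {3 * len(edges)}\n")
        f.write("".join(f"2 {i} {j}\n" for i, j in edges))
```

Here each edge is its own cell, matching the task's "separate cell with type vtkLine" requirement.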
topology/isabel/GS/isabel_points_gs.vtk
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7f6fb7ca2f8b879771259a5fd2bfd2e780dc2fa6826b67c77c95079015ec676d
+size 11483
topology/isabel/data/isabel.vti
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8430639d3734ec89258377b2cb727634916851136c249025fe5a108d693e53f3
+size 180000448
topology/isabel/task_description.txt
ADDED
@@ -0,0 +1,9 @@
+1. Load the file "isabel/data/isabel.vti".
+2. Apply persistence simplification to the field "sf" with a persistence threshold of 0.4.
+3. Compute the merge tree of the simplified field.
+4. Save the nodes of the merge tree as "isabel/results/{agent_mode}/isabel_nodes.vtk" in legacy VTK format.
+   This file should have two point arrays. One should be called "CriticalType" and should store the type of critical point for each node.
+   It should use the following convention: 0: minima. 1: 1-saddles. 2: 2-saddles. 3: maxima. 4: degenerate critical points.
+   The other point array should be called "Scalar" and should contain the scalar field value at each point in the merge tree.
+5. Save the edges of the merge tree as "isabel/results/{agent_mode}/isabel_edges.vtk" in legacy VTK format.
+   The file should store each edge as a separate cell with type vtkLine.
topology/ocean/GS/ocean_eigenvalue.vti
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b92a54aeecbbe2e1fe57cee0b21391f50143f615214fe34fb0bc91799c0b499f
+size 20506265
topology/ocean/GS/ocean_eigenvector.vti
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f48a8b74025987863fe1a1dbe08cc1898397eccafff51a8c83070711e988b724
+size 20506265
topology/ocean/GS/ocean_eval.py
ADDED
|
@@ -0,0 +1,71 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
import os
|
| 2 |
+
import sys
|
| 3 |
+
|
| 4 |
+
# Add the topology directory to Python path
|
| 5 |
+
|
| 6 |
+
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../..'))
|
| 7 |
+
from topologyScoring import pointCloudGeometryScore, partitionAssignmentDiceScore
|
| 8 |
+
|
| 9 |
+
def evaluate2DAsymmetricTFTopology(gtDPFilename : str, gtEigenvectorFilename : str, gtEigenvalueFilename : str,
|
| 10 |
+
reconDPFilename : str, reconEigenvectorFilename : str, reconEigenvalueFilename : str,
|
| 11 |
+
verbose : bool = False) -> int:
|
| 12 |
+
"""
|
| 13 |
+
Given the topological descriptors computed from two 2D asymmetric tensor fields, return a score from 0-10 comparing their
|
| 14 |
+
topology.
|
| 15 |
+
|
| 16 |
+
A score of 0 is considered bad and 10 is considered good. 5 points come from the eigenvector partition, while 5 points come
|
| 17 |
+
from the eigenvalue partition. The scoring for the eigenvector partition is divided evenly between the placement of the
|
| 18 |
+
degenerate points as well as the partition itself.
|
| 19 |
+
|
| 20 |
+
Because classifications in the eigenvector and eigenvalue partitions do not interpolate linearly inside of cells, it
|
| 21 |
+
can be advantageous to store them in a mesh with a higher resolution than original dataset. This function supports
|
| 22 |
+
eigenvector and eigenvalue partition meshes at arbitrary resolutions, provided that the mesh represents an integer
|
| 23 |
+
scaling in terms of the number of grid cells. For example, suppose that the dataset is defined on a mesh with a width
|
| 24 |
+
of m square grid cells, then this function supports eigenvector and eigenvalue partition meshes with a width of nm for any
|
| 25 |
+
positive integer n. Note that this is the number of grid cells, and not points (which represent the vertices of the grid cells).
|
| 26 |
+
In the case above, the original mesh would contain m+1 points and this function supports meshes with nm+1 points for positive
|
| 27 |
+
integers n. The resolution of the ground truth and reconstructed data do not need to match but they do need to be validly supported
|
| 28 |
+
resolutions.
|
| 29 |
+
|
| 30 |
+
Args:
|
| 31 |
+
gtDPFilename: The name of a file in legacy VTK format (.vtk) that stores the locations of each degenerate point
|
| 32 |
+
in the ground truth data. It should also have a point array called "DegeneracyType". It should assign
|
| 33 |
+
a value of 0 to each trisector and 1 for each wedge.
|
| 34 |
+
gtEigenvectorFilename: The name of a file containing VTK image data (.vti) that classifies each point according to its
|
| 35 |
+
classification in the eigenvector partition. The classifications should be stored in a point array
|
| 36 |
+
called "Partition". It should assign values as follows: 0: W_{c,s}. 1: W_{r,s}
|
| 37 |
+
2: W_{r,n}. 3: W_{c,n}.
|
| 38 |
+
gtEigenvalueFilename: The name of a file containing VTK image data (.vti) that classifies each point according to
|
| 39 |
+
        its classification in the eigenvalue partition. The classifications should be stored in a point array
        called "Partition". It should assign values as follows: 0: positive scaling. 1: counterclockwise rotation.
        2: negative scaling. 3: clockwise rotation. 4: anisotropic stretching.
        reconDPFilename: The name of a file in legacy VTK format (.vtk) that stores the locations and degeneracy types of the degenerate
        points of the reconstructed data.
        reconEigenvectorFilename: The name of a file containing VTK image data (.vti) that classifies each point according to
        the eigenvector partition.
        reconEigenvalueFilename: The name of a file containing VTK image data (.vti) that classifies each point according to the
        eigenvalue partition.
        verbose: Whether error messages should be printed if there are issues with the input files.
    """

    pointScore = pointCloudGeometryScore(gtDPFilename, "DegeneracyType", reconDPFilename, "DegeneracyType", verbose)
    eigenvectorDiceScore = partitionAssignmentDiceScore(gtEigenvectorFilename, "Partition", reconEigenvectorFilename, "Partition", verbose, allowResampling=True)
    eigenvalueDiceScore = partitionAssignmentDiceScore(gtEigenvalueFilename, "Partition", reconEigenvalueFilename, "Partition", verbose, allowResampling=True)

    eigenvectorScore = (pointScore + eigenvectorDiceScore) / 2

    totalScore = (eigenvalueDiceScore + eigenvectorScore) / 2

    return totalScore

if __name__ == "__main__":

    if len(sys.argv) != 7:
        print(f"{os.path.basename(__file__)}: usage is 'python3 {os.path.basename(__file__)} gt_degenerate_points.vtk gt_eigenvector_partition.vti gt_eigenvalue_partition.vti" \
              " recon_degenerate_points.vtk recon_eigenvector_partition.vti recon_eigenvalue_partition.vti'")
        exit(1)

    score = evaluate2DAsymmetricTFTopology(sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4], sys.argv[5], sys.argv[6], verbose=True)

    print(f"This eigenvector and eigenvalue partition scored: {score}")
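The aggregation above weights the degenerate-point score and the eigenvector dice score equally, then weights that combined eigenvector score equally against the eigenvalue dice score. A minimal sketch of the same arithmetic, with the three sub-scores passed in as plain numbers (the function name is ours, not part of the repository):

```python
def combine_scores(point_score: float, eigenvector_dice: float, eigenvalue_dice: float) -> float:
    """Mirror the averaging in evaluate2DAsymmetricTFTopology.

    Each sub-score is assumed to be on the 0-10 scale used by topologyScoring.py.
    """
    eigenvector_score = (point_score + eigenvector_dice) / 2
    return (eigenvalue_dice + eigenvector_score) / 2

# A perfect run scores 10; degenerate-point errors are averaged twice,
# so they move the total by only a quarter of their magnitude.
print(combine_scores(10, 10, 10))  # 10.0
print(combine_scores(6, 10, 10))   # 9.0
```

Note the asymmetry: a drop in the eigenvalue dice score costs twice as much as the same drop in the degenerate-point score.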
topology/ocean/GS/ocean_points.vtk
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7821369e93c9f218628c5c027dd5cc92ebf7f893974c42313d05808d0a1fadcd
size 4119
topology/ocean/data/ocean.vti
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:edac45f6b31005f65988a452a35fea3c952005d40474efe8682b4bd4e4a792af
size 327181
topology/ocean/task_description.txt
ADDED
@@ -0,0 +1,15 @@
1. Please load the asymmetric tensor field from "ocean/data/ocean.vti". The (1,1), (1,2), (2,1), and (2,2) entries are given by the arrays A, B, C, and D, respectively.

2. Compute the eigenvector partition of the dataset.

3. Save the degenerate points as "ocean/results/{agent_mode}/ocean_points.vtk" in legacy VTK format.
Include a point array called DegeneracyType which classifies each degenerate point.
It should have a value of 0 for trisectors and 1 for wedges.

4. Save the partition information from the eigenvector partition as "ocean/results/{agent_mode}/ocean_eigenvector.vti" as VTK image data.
It should give region identifiers as follows: 0: W_{c,s}. 1: W_{r,s}. 2: W_{r,n}. 3: W_{c,n}.

5. Compute the eigenvalue partition of the dataset.

6. Save the partition information from the eigenvalue partition as "ocean/results/{agent_mode}/ocean_eigenvalue_partition.vti" as VTK image data.
It should give region identifiers as follows: 0: positive scaling. 1: counterclockwise rotation. 2: negative scaling. 3: clockwise rotation. 4: anisotropic stretching.
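The eigenvalue partition named in step 6 is typically derived from decomposing a 2D asymmetric tensor into an isotropic-scaling part, a rotation part, and an anisotropic-stretching part, and labeling each point by the dominant component. The sketch below is our illustrative reading of that scheme, not code from this repository; the function name, the decomposition formulas, and the sign convention for rotation are our assumptions, while the region numbering is the one the task specifies:

```python
import math

def eigenvalue_partition_label(A: float, B: float, C: float, D: float) -> int:
    """Classify the 2D asymmetric tensor [[A, B], [C, D]].

    0: positive scaling, 1: counterclockwise rotation, 2: negative scaling,
    3: clockwise rotation, 4: anisotropic stretching.
    Illustrative decomposition (our assumption): isotropic scaling gamma_d,
    rotation gamma_r, anisotropic stretching gamma_s; largest magnitude wins.
    """
    gamma_d = (A + D) / 2
    gamma_r = (C - B) / 2
    gamma_s = math.hypot((A - D) / 2, (B + C) / 2)

    if gamma_s >= abs(gamma_d) and gamma_s >= abs(gamma_r):
        return 4
    if abs(gamma_d) >= abs(gamma_r):
        return 0 if gamma_d > 0 else 2
    return 1 if gamma_r > 0 else 3

print(eigenvalue_partition_label(1, 0, 0, 1))    # 0: pure positive scaling
print(eigenvalue_partition_label(0, -1, 1, 0))   # 1: counterclockwise rotation
print(eigenvalue_partition_label(1, 0, 0, -1))   # 4: anisotropic stretching
```

Tie-breaking at region boundaries (where two components have equal magnitude) is arbitrary here; the ground-truth files define the authoritative labeling.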
topology/topologyScoring.py
ADDED
@@ -0,0 +1,1211 @@
import numpy as np
import os
import vtk
import ot
import networkx as nx
import collections
import gudhi.wasserstein
import math
from vtk.util.numpy_support import vtk_to_numpy
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment
from typing import Any

# We provide parameters that can be used to adjust our various scoring algorithms. The parameters
# are sorted by which algorithms they refer to.
# It may be helpful to read descriptions of the algorithms themselves before attempting to understand
# the meaning of these parameters.

# Set to True to allow data that is not perfectly predicted to score a perfect 10.
# If this is set to False, the highest possible score that an imperfect prediction can score is a 9.
canImperfectPredictionsScore10 = False


# ====== POINT CLOUD GEOMETRY SCORE ======

# When two points are paired with each other, the cost associated with that pairing is equal to twice their
# squared distance (namely, each point receives a cost equal to the squared distance between them).
# This parameter imposes an additional flat cost on a pair of points whose type classifications differ.
costMatrixTypeMismatchPenalty = 1000

# If two points have a cost above this threshold, they are not allowed to pair with each other.
costThresholdToPair = 100

# If a point remains unpaired, how much it should contribute to the total cost.
# To ensure that it is always better for a point to pair with another versus remain unpaired, make sure this
# value is at least half of the previous parameter. That way, the total cost for two points to remain unpaired, which is
# two times the value of this parameter, exceeds the maximum possible cost if those two points were paired.
unpairedPointCost = 50

# The maximum average cost per point that can receive any points. Any average cost higher than this will receive a score of 0.
maximumAverageCost = 50


# ====== DICE SCORES ======

# The minimum dice score that must be achieved in order to score any points (any lower score receives a 0).
# Should be a float from 0-1.
minimumDiceScore = 0.3

# If the dice score is computed after resampling meshes to a common resolution, the dice score will not typically
# be a perfect 1.0 even if the model did nothing wrong. If rescaling occurs, then any dice score above this
# margin will score a perfect 10.
resamplingMarginForPerfect = 0.99


# ====== MERGE TREES PARTIAL FUSED GW DISTANCE SCORE ======

# This controls the tradeoff between Wasserstein and GW distance when performing the OT computation.
alpha = 0.5

# This is the maximum partial fused GW distance that can score any points. Any distance above this threshold will score a 0.
maximumPFGWDistance = 0.5

# Cutoff distance for returning a perfect 10. Due to numerical issues, if the reconstructed data is perfect, the OT distance
# computed is unlikely to be exactly 0. Thus, we return a perfect 10 if the distance is below this threshold.
perfectPFGWDistanceCutoff = 1e-10


# ====== MERGE TREE PERSISTENCE DIAGRAM WASSERSTEIN SCORE ======

# see https://gudhi.inria.fr/python/latest/wasserstein_distance_user.html

# The order of the Wasserstein distance
wassersteinOrder = 1.0

# The ground metric used for computing the Wasserstein distance
wassersteinGroundMetric = float('inf')

# This is the maximum average Wasserstein distance (the average is taken over (|P|+|Q|)/2) that can score points.
# Any distance above this threshold will score a 0.
maximumAverageWassersteinDistance = 0.2

def _convertPointsToArraysSortedByType(pointsFilename : str, pointsTypeArrayName : str) -> dict[Any, np.ndarray]:
    """
    Converts a set of labeled points into a dictionary that sorts the points by label.

    Args:
        pointsFilename: The name of a file in legacy VTK format (.vtk) that stores a point cloud.
        pointsTypeArrayName: The name of a point array which classifies each point by type.
            The values should be categorical (e.g., integers representing the indexes of critical points).

    Returns:
        A python dictionary. The keys are the different types of points. Each value is an nx3 numpy array, where each
        row corresponds to a different point. If the input point cloud is 2D, then the z coordinate of each point will
        be set to 0.
    """

    scriptName = os.path.basename(__file__)

    # read in the points file

    if not os.path.isfile(pointsFilename):
        raise FileNotFoundError(f"{scriptName}: no such file: '{pointsFilename}'")

    reader = vtk.vtkDataSetReader()
    reader.SetFileName(pointsFilename)
    reader.Update()

    output = reader.GetOutput()

    if output is None:
        raise ValueError(f"{scriptName}: file '{pointsFilename}' is not a legacy .vtk file")

    # read in the classification array

    pointData = output.GetPointData()

    if pointData is None:
        raise ValueError(f"{scriptName}: file '{pointsFilename}' does not have any point data")

    array = pointData.GetArray(pointsTypeArrayName)

    if array is None:
        raise ValueError(f"{scriptName}: file '{pointsFilename}' does not have a point array called {pointsTypeArrayName}")

    numPoints = output.GetNumberOfPoints()

    # extract all points and sort them by classification

    # keys: type, values: list of point coordinates
    pointLists = {}

    for idx in range(numPoints):
        type_ = array.GetTuple1(idx)

        if type_ not in pointLists:
            pointLists[type_] = []

        pointCoords = output.GetPoint(idx)

        if len(pointCoords) == 2:
            pointCoords = (pointCoords[0], pointCoords[1], 0)

        pointLists[type_].append(pointCoords)

    # convert the extracted points to numpy arrays and return

    pointsDict = {}

    for type_ in pointLists:
        pointsDict[type_] = np.array(pointLists[type_])

    return pointsDict

def _padMatrixToSquare(matrix : np.ndarray, padValue : Any) -> np.ndarray:
    """
    Pads a 2D numpy array into a square matrix. New rows/columns will take a user defined value.

    Args:
        matrix: A 2D numpy array.
        padValue: The value that the padded rows/columns should contain.
    Returns:
        The padded square matrix.
    """

    n = max(matrix.shape[0], matrix.shape[1])
    return np.pad( matrix, ( (0, n-matrix.shape[0]), (0, n-matrix.shape[1]) ), constant_values = padValue )

def pointCloudGeometryScore(gtPointsFilename : str, gtPointsArrayName : str, reconPointsFilename : str, reconPointsArrayName : str, verbose : bool = False) -> int:
    """
    Given two different point clouds, where each point in each point cloud has a type assigned to it,
    assign a score of 0-10 for how well they match.

    **The current algorithm is subject to change, but the function header will remain the same.**

    Currently, in order to assess how well the reconstructed data
    approximates the original, we seek to pair up each point in the ground truth with its corresponding point in the reconstructed data.
    However, this raises two questions: (a) How close do two points need to be to be considered "paired"? and (b) How should we
    penalize extra / missing points versus the distance between corresponding paired points? To answer these questions,
    we use the following algorithm:

    For each point p in the ground truth data, we assign a cost to pair p with each point q in the reconstructed data.
    We also assign a cost to leave p unpaired. The way of determining the cost relies on parameters defined at the top of the file.
    These costs are determined as follows:
        i. The cost to pair two points p and q is d(p,q)^2
        ii. If p and q have different types, we add the value of costMatrixTypeMismatchPenalty to the cost.
        iii. If the cost of pairing p with q exceeds costThresholdToPair, then p is not allowed to pair with q (answering question (a))
        iv. Any point may pair or remain unpaired
        v. The cost of leaving a point unpaired is unpairedPointCost (answering question (b))

    After defining these costs, we use the Hungarian algorithm to compute the optimal pairing. The score is based on the average cost
    from all points in P and Q. The lowest average cost that scores a 0 is given by maximumAverageCost. All lower costs are assigned a
    score of 0-10 linearly.

    Args:
        gtPointsFilename: The name of a file in legacy VTK format (.vtk) that stores the ground truth point cloud.
        gtPointsArrayName: The name of the array in the GT points file that classifies each point. This should store
            a categorical value (such as the index of critical points).
        reconPointsFilename: The name of a file in legacy VTK format (.vtk) that stores the reconstructed point cloud.
        reconPointsArrayName: The name of the array in the reconstructed points file that classifies each point.
        verbose: If there is an error with either of the files, should messages be printed out?
    Returns:
        A score of 0-10 assessing how well the reconstructed points approximate the ground truth file. A score of 10 means
        a very good approximation while a score of 0 means a very poor approximation.
    """

    # sort points by classification

    gtPointsDict = _convertPointsToArraysSortedByType(gtPointsFilename, gtPointsArrayName)

    try:
        reconPointsDict = _convertPointsToArraysSortedByType(reconPointsFilename, reconPointsArrayName)
    except Exception as e:
        if verbose:
            print(e)
        return 0

    # Produce the cost matrix for pairing.

    # First, we stack all points from ground truth and reconstructed data into matrices sorted by type
    # and find the indices in the stacked matrix where each type starts.

    allTypes = list(gtPointsDict.keys())

    for type_ in reconPointsDict.keys():
        if type_ not in allTypes:
            allTypes.append(type_)

    gtTypeStartIndices = {}
    reconTypeStartIndices = {}

    nextIdxGT = 0
    nextIdxRecon = 0

    allGTPointsList = []
    allReconPointsList = []

    for type_ in allTypes:
        gtTypeStartIndices[type_] = nextIdxGT
        if type_ in gtPointsDict:
            nextIdxGT += gtPointsDict[type_].shape[0]
            allGTPointsList.append(gtPointsDict[type_])

        reconTypeStartIndices[type_] = nextIdxRecon
        if type_ in reconPointsDict:
            nextIdxRecon += reconPointsDict[type_].shape[0]
            allReconPointsList.append(reconPointsDict[type_])

    allGTCPs = np.vstack(allGTPointsList).astype(np.float64)
    allReconCPs = np.vstack(allReconPointsList).astype(np.float64)

    # add a dummy element to the end of the types for algorithmic ease
    newElt = max([type_ for type_ in allTypes if type_ != float('inf')]) + 1
    allTypes.append(newElt)

    gtTypeStartIndices[newElt] = allGTCPs.shape[0]
    reconTypeStartIndices[newElt] = allReconCPs.shape[0]

    # The cost matrix starts by computing squared distances.
    # We assume that all points have different types and then subtract the mismatch penalty from
    # pairs of points with the same type.

    costMatrix = cdist(allGTCPs, allReconCPs, metric="sqeuclidean") + costMatrixTypeMismatchPenalty

    for i in range(len(allTypes)-1):

        type_ = allTypes[i]
        nextType = allTypes[i+1]

        gtTypeStart = gtTypeStartIndices[type_]
        gtTypeEnd = gtTypeStartIndices[nextType]

        reconTypeStart = reconTypeStartIndices[type_]
        reconTypeEnd = reconTypeStartIndices[nextType]

        if gtTypeStart != gtTypeEnd and reconTypeStart != reconTypeEnd:
            costMatrix[ gtTypeStart:gtTypeEnd, reconTypeStart:reconTypeEnd ] -= costMatrixTypeMismatchPenalty

    costMatrix[costMatrix > costThresholdToPair] = 2 * unpairedPointCost

    totalNumPoints = costMatrix.shape[0] + costMatrix.shape[1]

    # pad the matrix to be square if it is not already.
    # This will occur if the GT and reconstructed point clouds have different numbers of points.
    # The new rows/columns correspond to the ability to leave points unpaired.

    costMatrix = _padMatrixToSquare( costMatrix, unpairedPointCost )

    # Compute the cost and return a score based on the cost.

    rowInd, colInd = linear_sum_assignment(costMatrix)
    totalCost = costMatrix[rowInd, colInd].sum()
    averageCost = totalCost / totalNumPoints

    if totalCost == 0:
        return 10

    score = round(10 * ( maximumAverageCost - averageCost ) / maximumAverageCost )

    if not canImperfectPredictionsScore10 and score == 10:
        return 9

    if score < 0:
        return 0

    return score

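The unpaired-point mechanism in pointCloudGeometryScore is the classic trick of padding the cost matrix to square with a flat "stay unpaired" cost before running the Hungarian algorithm. A self-contained toy version of just that mechanism (the variable names and numbers are illustrative, not taken from the script):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Two ground-truth points, one reconstructed point: one GT point must stay unpaired.
gt = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
recon = np.array([[0.1, 0.0, 0.0]])
unpaired_cost = 50.0

cost = cdist(gt, recon, metric="sqeuclidean")          # shape (2, 1)
n = max(cost.shape)
# Pad to square; the padded column lets one GT point remain unpaired.
cost = np.pad(cost, ((0, n - cost.shape[0]), (0, n - cost.shape[1])),
              constant_values=unpaired_cost)

rows, cols = linear_sum_assignment(cost)
total = cost[rows, cols].sum()
# The close pair costs 0.1^2 = 0.01; the far GT point pays the flat unpaired cost.
print(round(total, 2))  # 50.01
```

Capping over-threshold entries at twice the unpaired cost, as the script does, guarantees the solver prefers leaving both points unpaired over forcing a far-apart pairing.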
def _getImageDataAndArray(filename : str, arrayName : str) -> tuple[vtk.vtkImageData, vtk.vtkArray]:
    """
    Given a file containing VTK image data, return a variable storing the data, and an array with a given name.

    Args:
        filename: The name of a file storing VTK image data (.vti)
        arrayName: The name of a point array in the file that should be returned.
    Returns:
        A tuple containing the image data and array.
    """

    scriptName = os.path.basename(__file__)

    if not os.path.isfile(filename):
        raise FileNotFoundError(f"{scriptName}: no such file: '{filename}'")

    imageReader = vtk.vtkXMLImageDataReader()
    imageReader.SetFileName(filename)
    imageReader.Update()
    image = imageReader.GetOutput()

    if image is None:
        raise ValueError(f"{scriptName}: file '{filename}' is not VTK image data")

    array = image.GetPointData().GetArray(arrayName)

    if array is None:
        raise ValueError(f"{scriptName}: file '{filename}' has no array {arrayName}")

    return image, array

def _scaleMesh(array : np.ndarray, ratio : int) -> np.ndarray:
    """
    Given a numpy array, scales the numpy array to a new size and returns the scaled array.

    The scaling assumes that the entries of the numpy array are the vertices of a grid. What is scaled
    is the number of squares in the grid. Thus, if the input array has a width of n, the output size will
    be ratio*(n-1)+1. Resampling is performed with nearest neighbor so that this is compatible with categorical data.

    Args:
        array: The numpy array that should be scaled
        ratio: The scale factor.
    Returns:
        The scaled numpy array.
    """

    if ratio == 1:
        return array.copy()

    scriptName = os.path.basename(__file__)

    if int(ratio) != ratio or ratio <= 0:
        raise ValueError(f"{scriptName} : ratio must be a positive integer")

    ratio = int(ratio)

    sizeX = ratio * (array.shape[0] - 1) + 1
    sizeY = ratio * (array.shape[1] - 1) + 1

    newArray = np.zeros(( sizeX, sizeY ))

    # iterate through each square in the grid and scale it.

    for X in range(array.shape[0]):
        for Y in range(array.shape[1]):

            if X == array.shape[0] - 1:
                sizeX = 1
            else:
                sizeX = ratio

            if Y == array.shape[1] - 1:
                sizeY = 1
            else:
                sizeY = ratio

            # fill in each new point in the square.

            for x in range(sizeX):

                if x < sizeX / 2:
                    xOffset = 0
                else:
                    xOffset = 1

                for y in range(sizeY):

                    if y < sizeY / 2:
                        yOffset = 0
                    else:
                        yOffset = 1

                    newArray[X*ratio + x, Y*ratio + y] = array[X+xOffset,Y+yOffset]

    return newArray

def _resampleToCommonMesh(array1 : np.ndarray, array2 : np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """
    Given two arrays storing data defined on two different grids, resample them to a grid of the same size.

    In particular, we assume that the two arrays each store values defined on the vertices of a grid. The grids
    should have the same aspect ratio, but may have different sizes. The size of the new grid will be the lcm
    of the sizes of the input grids. Resampling uses nearest neighbor to maintain compatibility with categorical data.

    Args:
        array1: The first array that is to be resampled.
        array2: The second array that is to be resampled.
    Returns:
        A tuple containing the resampled versions of array1 and array2.
    """

    if array1.shape == array2.shape:
        return array1.copy(), array2.copy()

    scriptName = os.path.basename(__file__)

    numCells1 = (array1.shape[0]-1, array1.shape[1]-1)
    numCells2 = (array2.shape[0]-1, array2.shape[1]-1)

    newSizeX = math.lcm(numCells1[0], numCells2[0])

    ratio1 = newSizeX / numCells1[0]
    ratio2 = newSizeX / numCells2[0]

    if ratio1 * numCells1[1] != ratio2 * numCells2[1]:
        raise ValueError(f"{scriptName}: arrays cannot be resampled to a common mesh due to incompatible dimensions")

    newArray1 = _scaleMesh(array1, ratio1)
    newArray2 = _scaleMesh(array2, ratio2)

    return newArray1, newArray2

def partitionAssignmentDiceScore(gtFilename : str, gtArrayName : str, reconFilename : str, reconArrayName : str,
                                 verbose : bool = False, allowResampling : bool = False) -> int:
    """
    Given two different partitions of a domain that assign each point to a category, compute a similarity score from 0-10 based on the dice score.

    This computation uses the "standard" dice score. It is not concerned with topological connectivity, but only with correct
    classification. That is, if two different disconnected regions of the partition both belong to the same category,
    they will be considered as part of the same category, and not two separate regions. The dice score is weighted based on the
    number of points in each category in the ground truth.

    A score of 0 is considered bad and 10 is good. The maximum dice score that will receive a 0 is controlled by the parameter
    minimumDiceScore defined at the top of the file. Scores above the minimum will scale linearly up to a 10. If resampling is allowed,
    there is a margin for what is considered "perfect", which is controlled by the parameter resamplingMarginForPerfect.

    Args:
        gtFilename: The name of a file containing VTK image data (.vti) that stores the classification of each ground truth point.
        gtArrayName: The name of the array in the GT file that classifies each point. This should store
            a categorical value (such as the index of critical points).
        reconFilename: The name of a file containing VTK image data (.vti) that stores the classification of each reconstructed point.
        reconArrayName: The name of the array in the reconstructed file that classifies each point.
        verbose: Should messages be printed out if there are errors with the files.
        allowResampling: If the ground truth and reconstructed files have different resolutions, should they be resampled onto a new, finer
            grid so that they have the same resolution (if not, a score of 0 will be returned).
    Returns:
        An integer score from 0-10 that determines how similar the partitions are. A score of 0 is considered bad and 10 is good.
    """

    scriptName = os.path.basename(__file__)

    # load files

    try:
        reconImage, reconArray = _getImageDataAndArray(reconFilename, reconArrayName)
    except Exception as e:
        if verbose:
            print(e)
        return 0

    reconDimensions = reconImage.GetDimensions()

    gtImage, gtArray = _getImageDataAndArray(gtFilename, gtArrayName)

    gtDimensions = gtImage.GetDimensions()

    # check that dimensionality is correct

    if len(reconDimensions) == 3:

        if reconDimensions[2] != 1:
            if verbose:
                print(f"{scriptName}: {reconFilename} is not 2D and has shape {reconDimensions}. Expected a 2D input.")
            return 0

        reconDimensions = (reconDimensions[0], reconDimensions[1])

    if len(gtDimensions) == 3:

        if gtDimensions[2] != 1:
            raise ValueError(f"{scriptName}: ground truth file {gtFilename} is not 2D and has shape {gtDimensions}")

        gtDimensions = (gtDimensions[0], gtDimensions[1])

    gtArrayNumpy = vtk_to_numpy(gtArray).reshape(gtDimensions, order="F")
    reconArrayNumpy = vtk_to_numpy(reconArray).reshape(reconDimensions, order="F")

    # check if the resolutions match, and resample if necessary and able

    if allowResampling:
        gtArrayNumpy, reconArrayNumpy = _resampleToCommonMesh(gtArrayNumpy, reconArrayNumpy)
    else:
        if gtDimensions != reconDimensions:
|
| 519 |
+
if verbose:
|
| 520 |
+
print(f"{scriptName}: expected grond truth file {gtFilename} and reconstructed file {reconFilename} to have the same dimensions. Found {gtDimensions} and {reconDimensions}")
|
| 521 |
+
return 0
|
| 522 |
+
|
| 523 |
+
# compute the cardinalities of each category and the cardinality of their intersections
|
| 524 |
+
|
| 525 |
+
gtArrayNumpy = gtArrayNumpy.flatten()
|
| 526 |
+
reconArrayNumpy = reconArrayNumpy.flatten()
|
| 527 |
+
|
| 528 |
+
numPoints = gtArrayNumpy.shape[0]
|
| 529 |
+
|
| 530 |
+
overlap = {}
|
| 531 |
+
totalCardinality = {}
|
| 532 |
+
gtCardinality = {}
|
| 533 |
+
|
| 534 |
+
for i in range(numPoints):
|
| 535 |
+
gtVal = gtArrayNumpy[i]
|
| 536 |
+
reconVal = reconArrayNumpy[i]
|
| 537 |
+
|
| 538 |
+
if gtVal == reconVal:
|
| 539 |
+
if gtVal in overlap:
|
| 540 |
+
overlap[gtVal] += 1
|
| 541 |
+
else:
|
| 542 |
+
overlap[gtVal] = 1
|
| 543 |
+
|
| 544 |
+
if gtVal in totalCardinality:
|
| 545 |
+
totalCardinality[gtVal] += 1
|
| 546 |
+
else:
|
| 547 |
+
totalCardinality[gtVal] = 1
|
| 548 |
+
|
| 549 |
+
if reconVal in totalCardinality:
|
| 550 |
+
totalCardinality[reconVal] += 1
|
| 551 |
+
else:
|
| 552 |
+
totalCardinality[reconVal] = 1
|
| 553 |
+
|
| 554 |
+
if gtVal in gtCardinality:
|
| 555 |
+
gtCardinality[gtVal] += 1
|
| 556 |
+
else:
|
| 557 |
+
gtCardinality[gtVal] = 1
|
| 558 |
+
|
| 559 |
+
# compute the dice score and return a score based on the dice score
|
| 560 |
+
|
| 561 |
+
diceScore = 0
|
| 562 |
+
totalOverlap = 0
|
| 563 |
+
for category in gtCardinality:
|
| 564 |
+
totalOverlap += overlap[category]
|
| 565 |
+
weight = gtCardinality[category] / numPoints
|
| 566 |
+
dice = 2 * overlap[category] / totalCardinality[category]
|
| 567 |
+
diceScore += weight * dice
|
| 568 |
+
|
| 569 |
+
if allowResampling and diceScore > resamplingMarginForPerfect and (gtArrayNumpy.shape != gtDimensions or reconArrayNumpy.shape != reconDimensions):
|
| 570 |
+
return 10
|
| 571 |
+
|
| 572 |
+
if totalOverlap == numPoints:
|
| 573 |
+
return 10
|
| 574 |
+
|
| 575 |
+
score = round( 10 * (diceScore - minimumDiceScore) / (1-minimumDiceScore) )
|
| 576 |
+
|
| 577 |
+
if not canImperfectPredictionsScore10 and score == 10:
|
| 578 |
+
return 9
|
| 579 |
+
|
| 580 |
+
if score < 0:
|
| 581 |
+
return 0
|
| 582 |
+
|
| 583 |
+
return score
|
| 584 |
+
|
| 585 |
+
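The weighted per-category dice computation above can be checked in isolation on a pair of toy label arrays. This is a standalone sketch of the same formula; the helper name and the arrays are made up for illustration and are not part of this module:

```python
import numpy as np

def weightedDiceSketch(gtLabels, reconLabels):
    # Weighted dice: for each ground-truth category c,
    # dice_c = 2*|gt==c AND recon==c| / (|gt==c| + |recon==c|),
    # weighted by the fraction of ground-truth points that belong to c.
    gtLabels = np.asarray(gtLabels).ravel()
    reconLabels = np.asarray(reconLabels).ravel()
    n = gtLabels.shape[0]
    score = 0.0
    for c in np.unique(gtLabels):
        overlap = np.sum((gtLabels == c) & (reconLabels == c))
        total = np.sum(gtLabels == c) + np.sum(reconLabels == c)
        score += (np.sum(gtLabels == c) / n) * (2 * overlap / total)
    return score

# two of four points belong to each category in the ground truth;
# one point of category 0 is misclassified as category 1
print(weightedDiceSketch([0, 0, 1, 1], [0, 1, 1, 1]))
```

Working it out by hand: category 0 contributes 0.5 * (2*1 / (2+1)) = 1/3 and category 1 contributes 0.5 * (2*2 / (2+3)) = 2/5, for a total of 11/15.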
def _partition2DDomain(array : np.ndarray) -> np.ndarray:
    """
    Given a 2D array that assigns a label to each point, identifies connected sets of points that all share a common label.

    Each such connected region will be assigned a different integer id. The ids start at zero. A numpy array is returned
    with the same shape as the input, where each point is assigned its region id. In order to have a triangular mesh, we
    assume that each point is connected to its east, north, northwest, west, south, and southeast neighbors.

    Args:
        array: A 2D numpy array that stores a different categorical label for each point.
    Returns:
        A numpy array with the same shape as the input array. Each point is assigned the label for the connected region that it is
        part of.
    """

    outputPartition = np.zeros_like(array)

    nextPartitionId = 0

    # scan through each point. If it has not already been placed in a region, perform BFS to identify points in the
    # same connected region and label them. Repeat until all points have been labeled.

    for X in range(array.shape[0]):
        for Y in range(array.shape[1]):

            if outputPartition[X,Y] == 0:

                # label the point and set up BFS

                nextPartitionId += 1
                queue = [(X,Y)]
                value = array[X,Y]

                # iterate through points and add neighbors

                while len(queue) > 0:
                    point = queue.pop(0)

                    if outputPartition[point] == 0:

                        x,y = point

                        outputPartition[point] = nextPartitionId

                        if x != array.shape[0] - 1 and outputPartition[x+1,y] == 0 and array[x+1,y] == value:
                            queue.append((x+1,y))

                        if y != array.shape[1] - 1 and outputPartition[x,y+1] == 0 and array[x,y+1] == value:
                            queue.append((x,y+1))

                        if x != 0 and y != array.shape[1] - 1 and outputPartition[x-1,y+1] == 0 and array[x-1,y+1] == value:
                            queue.append((x-1,y+1))

                        if x != 0 and outputPartition[x-1,y] == 0 and array[x-1,y] == value:
                            queue.append((x-1,y))

                        if y != 0 and outputPartition[x,y-1] == 0 and array[x,y-1] == value:
                            queue.append((x,y-1))

                        if x != array.shape[0] - 1 and y != 0 and outputPartition[x+1,y-1] == 0 and array[x+1,y-1] == value:
                            queue.append((x+1,y-1))

    return outputPartition - 1 # subtract 1 so that the lowest partition value is 0

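The same triangulated connectivity (east, north, northwest, west, south, southeast, i.e. the two diagonals that lie on the triangulation) can be reproduced with scipy.ndimage.label and a custom structuring element. This is a sketch, not part of this module; the 3x3 structure below is an assumption that mirrors the six neighbor offsets used in the BFS above:

```python
import numpy as np
from scipy import ndimage

# neighbor offsets (+1,0), (0,+1), (-1,+1), (-1,0), (0,-1), (+1,-1) as a 3x3
# structure; the row index is the x-offset + 1 and the column index is the y-offset + 1
triangularStructure = np.array([[0, 1, 1],
                                [1, 1, 1],
                                [1, 1, 0]])

# (0,0) and (1,1) differ by (+1,+1), which is NOT a neighbor: two regions
a = np.array([[1, 0],
              [0, 1]])
# (0,1) and (1,0) differ by (+1,-1), which IS a neighbor: one region
b = np.array([[0, 1],
              [1, 0]])

_, numA = ndimage.label(a, structure=triangularStructure)
_, numB = ndimage.label(b, structure=triangularStructure)
print(numA, numB)
```

Note that ndimage.label only separates nonzero pixels from zero ones, so to partition a multi-label array it would have to be applied once per label value, whereas the BFS above handles all labels in one pass.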
def partitionTopologicalDiceScore(gtFilename : str, gtArrayName : str, reconFilename : str, reconArrayName : str,
                                  verbose : bool = False, allowResampling : bool = False) -> int:

    """
    Given two partitions of a domain, assign a score of 0-10 based on their topological similarity.

    In each partition, a different region in the partition is defined as a connected set of points with the same label.
    This function optimally matches the different regions between the two partitions such that the dice score is maximized.
    Then, the score is computed based on the dice score. The dice score is weighted based on the region sizes in the ground truth.

    A score of 0 is considered bad and 10 is good. The maximum dice score that will receive a 0 is controlled by the parameter
    minimumDiceScore defined at the top of the file. Scores above the minimum will scale linearly up to a 10. If resampling is allowed,
    there is a margin for what is considered "perfect" which is controlled by the parameter resamplingMarginForPerfect.

    Args:
        gtFilename: The name of a file containing VTK image data (.vti) that stores the classification of each ground truth point.
        gtArrayName: The name of the array in the GT file that classifies each point. This should store
            a categorical value (such as the index of critical points).
        reconFilename: The name of a file containing VTK image data (.vti) that stores the classification of each reconstructed point.
        reconArrayName: The name of the array in the reconstructed file that classifies each point.
        verbose: Should messages be printed out if there are errors with the files.
        allowResampling: If the ground truth and reconstructed files have different resolutions, should they be resampled onto a new, fine
            grid so that they have the same resolution (if not, a score of 0 will be returned).
    Returns:
        An integer score from 0-10 that determines how similar the partitions are. A score of 0 is considered bad and 10 is good.
    """

    scriptName = os.path.basename(__file__)

    # load files

    try:
        reconImage, reconArray = _getImageDataAndArray(reconFilename, reconArrayName)
    except Exception as e:
        if verbose:
            print(e)
        return 0

    reconDimensions = reconImage.GetDimensions()

    gtImage, gtArray = _getImageDataAndArray(gtFilename, gtArrayName)

    gtDimensions = gtImage.GetDimensions()

    # check that dimensionality is correct

    if len(reconDimensions) == 3:

        if reconDimensions[2] != 1:
            if verbose:
                print(f"{scriptName}: {reconFilename} is not 2D and has shape {reconDimensions}. Expected a 2D input.")
            return 0

        reconDimensions = (reconDimensions[0], reconDimensions[1])

    if len(gtDimensions) == 3:

        if gtDimensions[2] != 1:
            raise ValueError(f"{scriptName}: ground truth file {gtFilename} is not 2D and has shape {gtDimensions}")

        gtDimensions = (gtDimensions[0], gtDimensions[1])

    gtArrayNumpy = vtk_to_numpy(gtArray).reshape(gtDimensions, order="F")
    reconArrayNumpy = vtk_to_numpy(reconArray).reshape(reconDimensions, order="F")

    # resample to new meshes if necessary and able

    if allowResampling:
        gtArrayNumpy, reconArrayNumpy = _resampleToCommonMesh(gtArrayNumpy, reconArrayNumpy)
        dimensions = gtArrayNumpy.shape
    else:
        if gtDimensions != reconDimensions:
            if verbose:
                print(f"{scriptName}: expected ground truth file {gtFilename} and reconstructed file {reconFilename} to have the same dimensions. Found {gtDimensions} and {reconDimensions}")
            return 0
        dimensions = gtDimensions

    # partition the domain into different connected regions.

    gtPartition = _partition2DDomain(gtArrayNumpy)
    reconPartition = _partition2DDomain(reconArrayNumpy)

    # compute pairwise overlap between each pair of regions.

    numPoints = dimensions[0] * dimensions[1]

    regionOverlaps = {}
    gtRegionSizes = {}
    reconRegionSizes = {}

    gtPartition = gtPartition.flatten()
    reconPartition = reconPartition.flatten()

    for i in range(numPoints):

        gtPartitionId = gtPartition[i]
        reconPartitionId = reconPartition[i]

        regionOverlaps[(gtPartitionId, reconPartitionId)] = regionOverlaps.get((gtPartitionId, reconPartitionId), 0) + 1
        gtRegionSizes[gtPartitionId] = gtRegionSizes.get(gtPartitionId, 0) + 1
        reconRegionSizes[reconPartitionId] = reconRegionSizes.get(reconPartitionId, 0) + 1

    numGTPartitions = int(max(gtRegionSizes) + 1)
    numReconPartitions = int(max(reconRegionSizes) + 1)

    # compute the weighted pairwise dice score between connected regions. The values are
    # negated because linear_sum_assignment minimizes the total cost.

    pairwiseDice = np.zeros((numGTPartitions, numReconPartitions))

    for i in range(numGTPartitions):
        for j in range(numReconPartitions):
            if (i,j) in regionOverlaps:
                pairwiseDice[i,j] = -(gtRegionSizes[i] / numPoints) * ( 2 * regionOverlaps[(i,j)] / ( gtRegionSizes[i] + reconRegionSizes[j] ) )
            else:
                pairwiseDice[i,j] = 0

    # pad matrix to square for the Hungarian algorithm

    pairwiseDice = _padMatrixToSquare(pairwiseDice, 0)

    # if the matrix has one nonzero entry per row/column, that means that the regions perfectly overlap and we should
    # return a perfect 10.
    if np.all(np.count_nonzero(pairwiseDice, axis=0) == 1) and np.all(np.count_nonzero(pairwiseDice, axis=1) == 1):
        return 10

    # use the Hungarian algorithm to find the pairing between different regions that maximizes the overall weighted dice score.

    rowInd, colInd = linear_sum_assignment(pairwiseDice)
    diceScore = -pairwiseDice[rowInd, colInd].sum()

    # compute and return a score based on the dice score.

    if allowResampling and diceScore > resamplingMarginForPerfect and (gtArrayNumpy.shape != gtDimensions or reconArrayNumpy.shape != reconDimensions):
        return 10

    score = round( 10 * (diceScore - minimumDiceScore) / (1 - minimumDiceScore) )

    if not canImperfectPredictionsScore10 and score == 10:
        return 9

    if score < 0:
        return 0

    return score

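The negation trick used above (storing negative dice values so that scipy's cost-minimizing linear_sum_assignment in fact maximizes the total dice) can be seen in isolation. A standalone sketch with made-up pairwise dice values:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# pairwise dice scores between 2 ground-truth regions and 2 reconstructed regions
dice = np.array([[0.9, 0.1],
                 [0.2, 0.8]])

# linear_sum_assignment minimizes total cost, so negate to maximize total dice
rowInd, colInd = linear_sum_assignment(-dice)
best = dice[rowInd, colInd].sum()
print(list(zip(rowInd, colInd)), best)
```

Here the optimal pairing matches region 0 with region 0 and region 1 with region 1, for a total dice of 0.9 + 0.8 = 1.7.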
def _loadGraphFromVTK(pointsFilename : str, edgesFilename : str, arrayNamesToLoad : list[tuple[str,str]]) -> nx.Graph:
    """
    Given a graph stored in VTK files, load the graph and store it as a networkx graph.

    The graph must be stored in two legacy VTK files (.vtk). The points file should contain all of the points
    as well as any point labels that should be loaded. The point labels should be stored as VTK arrays defined on each point.
    The edges should be stored in a separate file.

    Args:
        pointsFilename: The name of a file in legacy VTK format (.vtk) storing the points in the graph.
        edgesFilename: The name of a file in legacy VTK format (.vtk) storing the edges of the graph. It should only have
            cells that are the segments of the graph.
        arrayNamesToLoad: A list of tuples of two strings. Each tuple represents a different point label that should be loaded.
            The first string in the tuple should store the name of the VTK array that should be used to produce point
            labels. The second string should store the name of the point label category in the networkx graph.
    Returns:
        A networkx graph with point labels defined according to the input files.
    """

    scriptName = os.path.basename(__file__)

    # read edges from file

    edgesReader = vtk.vtkDataSetReader()
    edgesReader.SetFileName(edgesFilename)
    edgesReader.Update()

    edgesOutput = edgesReader.GetOutput()

    if edgesOutput is None:
        raise ValueError(f"{scriptName}: the file '{edgesFilename}' is not properly formatted legacy VTK data")

    # iterate through the edges and represent them as a list of ordered pairs

    pointsFromEdges = set()
    edgeList = []
    numCells = edgesOutput.GetNumberOfCells()

    for i in range(numCells):
        cell = edgesOutput.GetCell(i)
        if cell.GetNumberOfPoints() != 2:
            raise ValueError(f"{scriptName}: the file '{edgesFilename}' contains a cell that is not a line segment")

        point1 = cell.GetPoints().GetPoint(0)
        point2 = cell.GetPoints().GetPoint(1)
        edgeList.append((point1,point2))
        pointsFromEdges.add(point1)
        pointsFromEdges.add(point2)

    # create the graph and add the edges

    graph = nx.Graph()
    graph.add_edges_from(edgeList)

    if not os.path.isfile(pointsFilename):
        raise FileNotFoundError(f"{scriptName}: The file '{pointsFilename}' does not exist")

    # read points from file

    pointsReader = vtk.vtkDataSetReader()
    pointsReader.SetFileName(pointsFilename)
    pointsReader.Update()

    pointsOutput = pointsReader.GetOutput()

    if pointsOutput is None:
        raise ValueError(f"{scriptName}: The file '{pointsFilename}' is not properly formatted legacy VTK data")

    pointData = pointsOutput.GetPointData()

    if pointData is None:
        raise ValueError(f"{scriptName}: The file '{pointsFilename}' does not have any associated point data")

    # For each point label, create a dictionary where the keys are the point's coordinates and the values are the label.

    pointsSet = set()
    arrays = {}
    arrayInfo = {}
    numPoints = pointsOutput.GetNumberOfPoints()

    for arrayName, abbreviatedName in arrayNamesToLoad:

        array = pointData.GetArray(arrayName)

        if array is None:
            raise ValueError(f"{scriptName}: The file '{pointsFilename}' has no point array '{arrayName}'")

        arrays[abbreviatedName] = array
        arrayInfo[abbreviatedName] = {}

    for i in range(numPoints):
        point = pointsOutput.GetPoint(i)
        pointsSet.add(point)

        for abbreviatedName in arrays:
            value = arrays[abbreviatedName].GetTuple1(i)
            arrayInfo[abbreviatedName][point] = value

    # check that the points in the points file match the points in the edge file, and return the graph

    if pointsSet != pointsFromEdges:
        raise ValueError(f"{scriptName}: The files '{pointsFilename}' and '{edgesFilename}' contain a different set of points")

    for abbreviatedName in arrays:
        nx.set_node_attributes(graph, arrayInfo[abbreviatedName], abbreviatedName)

    return graph

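The graph the loader produces can be mimicked directly for testing: nodes are keyed by their (x, y, z) coordinate tuples and labels are attached with nx.set_node_attributes, as in the final loop above. A minimal sketch with made-up coordinates and values:

```python
import networkx as nx

# nodes are keyed by 3D coordinate tuples, as in the VTK files
p1, p2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)

graph = nx.Graph()
graph.add_edges_from([(p1, p2)])

# attach a scalar-field value to each node under the "sf" key,
# mirroring the abbreviated array names used by the loader
nx.set_node_attributes(graph, {p1: 0.5, p2: 2.0}, "sf")
print(graph.nodes[p2]["sf"])
```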
def _getInternalDistancesForGWDistance(graph : nx.Graph, points : np.ndarray) -> np.ndarray:
    """
    Given a merge tree, compute the distance between each pair of nodes for computing the GW distance.

    For mathematical specifics, see:
    Mingzhe Li et al. "Flexible and Probabilistic Topology Tracking With Partial Optimal Transport".
    doi: 10.1109/TVCG.2025.3561300

    This implementation only works with join trees, and not split trees or contour trees.

    Args:
        graph: A merge tree represented as a networkx graph. Each node should have a label "sf" that stores
            the scalar field value associated with the node.
        points: A [n,3] numpy array storing the locations of each vertex in the graph. Each row should store
            the coordinates of a different vertex.
    Returns:
        If the graph has n nodes, this will return an [n,n] numpy array where the (i,j) entry stores the
        distance from node i to node j.
    """

    numPoints = points.shape[0]
    C = np.zeros((numPoints,numPoints))

    for i in range(numPoints):
        for j in range(i+1,numPoints):
            node1 = tuple(points[i])
            node2 = tuple(points[j])

            f1 = graph.nodes[node1]["sf"]
            f2 = graph.nodes[node2]["sf"]

            path = nx.shortest_path(graph, node1, node2)

            # in a join tree, the common ancestor of two nodes is the highest node on the path between them
            lca = max(path, key = lambda n : graph.nodes[n]["sf"])
            flca = graph.nodes[lca]["sf"]

            dist = abs(f1-flca) + abs(f2-flca)

            C[i,j] = dist
            C[j,i] = dist

    return C

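The distance |f(u) - f(lca)| + |f(v) - f(lca)|, with the "lca" taken as the highest node on the tree path, can be checked by hand on a three-node join tree. A standalone sketch with made-up node names and function values:

```python
import networkx as nx

# two leaves joined at a saddle: a (f=0) -- s (f=3) -- b (f=1)
tree = nx.Graph([("a", "s"), ("s", "b")])
nx.set_node_attributes(tree, {"a": 0.0, "s": 3.0, "b": 1.0}, "sf")

# the common ancestor in a join tree is the highest node on the path
path = nx.shortest_path(tree, "a", "b")
lca = max(path, key=lambda n: tree.nodes[n]["sf"])
flca = tree.nodes[lca]["sf"]

# distance climbs from each leaf up to the join: |0-3| + |1-3| = 5
dist = abs(tree.nodes["a"]["sf"] - flca) + abs(tree.nodes["b"]["sf"] - flca)
print(lca, dist)
```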
def mergeTreePartialFusedGWDistanceScore(gtPointsFilename : str, gtEdgesFilename : str, gtCriticalTypeArrayName : str,
                                         gtScalarArrayName : str, reconPointsFilename : str, reconEdgesFilename : str,
                                         reconCriticalTypeArrayName : str, reconScalarArrayName : str, verbose : bool = False) -> int:

    """
    Given two merge trees, compute a score of 0-10 for their similarity based on the partial fused Gromov-Wasserstein distance.

    For specifics of the distance computation, see:
    Mingzhe Li et al. "Flexible and Probabilistic Topology Tracking With Partial Optimal Transport".
    doi: 10.1109/TVCG.2025.3561300

    Each merge tree should be stored as two legacy VTK files (.vtk) where there is one file for the points and another for the edges.
    The points file should label each point with its critical point type and function value. The critical point types should be as
    follows: 0: minimum. 1: 1-saddle. 2: 2-saddle. 3: maximum. 4: degenerate.

    This function can only be used to compare join trees, and not split trees or contour trees.
    The distance is controlled by an alpha parameter which is defined at the top of the file. The smallest distance that will
    score a 0 is given by the maximumPFGWDistance parameter. The score for all other distances scales linearly where a distance of 0
    scores a perfect 10.

    Due to numerical instability, we allow slightly imperfect distances to score a perfect 10. The largest such distance is
    controlled by the perfectPFGWDistanceCutoff parameter.

    Args:
        gtPointsFilename: The name of a file in legacy VTK format (.vtk) that stores the points of the ground truth merge tree.
        gtEdgesFilename: The name of a file in legacy VTK format (.vtk) that stores the edges of the ground truth merge tree. The
            edges should each take the form of a cell of type vtkLine.
        gtCriticalTypeArrayName: The name of the point array in the GT points file that stores the critical point type of each point.
        gtScalarArrayName: The name of the point array in the GT points file that stores the scalar field value at each point.
        reconPointsFilename: The name of a file in legacy VTK format (.vtk) that stores the points of the reconstructed merge tree.
        reconEdgesFilename: The name of a file in legacy VTK format (.vtk) that stores the edges of the reconstructed merge tree.
        reconCriticalTypeArrayName: The name of the point array in the reconstructed points file that stores the critical
            point type of each point.
        reconScalarArrayName: The name of the point array in the reconstructed points file that stores the scalar field value at
            each point.
        verbose: Should messages be printed out if there are errors with the files.
    Returns:
        An integer score from 0-10. A score of 0 is considered bad and 10 is good.
    """

    # load the files

    try:
        reconGraph = _loadGraphFromVTK(reconPointsFilename, reconEdgesFilename, [(reconCriticalTypeArrayName, "ct"), (reconScalarArrayName, "sf")])
    except Exception as e:
        if verbose:
            print(e)
        return 0

    gtGraph = _loadGraphFromVTK(gtPointsFilename, gtEdgesFilename, [(gtCriticalTypeArrayName, "ct"), (gtScalarArrayName, "sf")])

    # set up attribute distance dA

    # sort the points by critical point type and arrange into a single stacked numpy array per tree.

    gtPointsList = list(gtGraph.nodes())
    reconPointsList = list(reconGraph.nodes())

    gtPointsList.sort(key = lambda p : gtGraph.nodes[p]["ct"])
    reconPointsList.sort(key = lambda p : reconGraph.nodes[p]["ct"])

    gtPointsNumpy = np.array(gtPointsList)
    reconPointsNumpy = np.array(reconPointsList)

    # compute the position in each numpy array where each critical type starts.

    gtCTCounts = collections.Counter(gtGraph.nodes[n]["ct"] for n in gtGraph.nodes)
    reconCTCounts = collections.Counter(reconGraph.nodes[n]["ct"] for n in reconGraph.nodes)

    gtCTStartIndices = {}
    reconCTStartIndices = {}
    nextGTCTStartIndex = 0
    nextReconCTStartIndex = 0

    for ct in range(5):
        gtCTStartIndices[ct] = nextGTCTStartIndex
        if ct in gtCTCounts:
            nextGTCTStartIndex += gtCTCounts[ct]

        reconCTStartIndices[ct] = nextReconCTStartIndex
        if ct in reconCTCounts:
            nextReconCTStartIndex += reconCTCounts[ct]

    gtCTStartIndices[5] = gtPointsNumpy.shape[0]
    reconCTStartIndices[5] = reconPointsNumpy.shape[0]

    # dA is computed this way in the paper. This was discovered in a private correspondence with the author.

    dA = ot.dist(gtPointsNumpy, reconPointsNumpy)
    dA /= np.max(dA)
    dA += 1

    for ct in range(5):
        if gtCTStartIndices[ct] != gtCTStartIndices[ct+1] and reconCTStartIndices[ct] != reconCTStartIndices[ct+1]:
            dA[ gtCTStartIndices[ct]:gtCTStartIndices[ct+1], reconCTStartIndices[ct]:reconCTStartIndices[ct+1] ] -= 1

    # Compute the distances between nodes within each tree. Normalize the values so that they are
    # compatible with dA.

    gtC = _getInternalDistancesForGWDistance(gtGraph, gtPointsNumpy)
    reconC = _getInternalDistancesForGWDistance(reconGraph, reconPointsNumpy)

    normalization = np.max(gtC)
    gtC /= normalization
    reconC /= normalization

    # set up distributions and mass transported for the OT computation.

    gtDist = np.ones(gtPointsNumpy.shape[0])
    reconDist = np.ones(reconPointsNumpy.shape[0])
    mass = min(gtPointsNumpy.shape[0], reconPointsNumpy.shape[0])

    distance = ot.gromov.partial_fused_gromov_wasserstein2(dA, gtC, reconC, gtDist, reconDist, mass, alpha=alpha)

    # based on the OT distance, compute and return a score.

    if distance < perfectPFGWDistanceCutoff:
        return 10

    score = round(10 * (maximumPFGWDistance - distance) / maximumPFGWDistance)

    if not canImperfectPredictionsScore10 and score == 10:
        return 9

    if score < 0:
        return 0

    return score

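The dA construction above (normalize pairwise spatial distances into [0,1], add 1 so every entry lands in [1,2], then subtract 1 on the blocks where critical types match, so mismatched types always cost more than matched ones) can be sketched without POT. This is an illustration of the shifting scheme only, using scipy's cdist in place of ot.dist (an assumption: it stands in for the pairwise-distance step, not for POT's exact metric choice):

```python
import numpy as np
from scipy.spatial.distance import cdist

# two points per tree, already sorted by critical type, so row 0 of each
# array is one type and row 1 is another
gtPoints = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
reconPoints = np.array([[0.1, 0.0, 0.0], [1.0, 0.9, 0.0]])

dA = cdist(gtPoints, reconPoints)
dA /= np.max(dA)
dA += 1  # every entry is now in [1, 2]

# subtract 1 on the diagonal blocks where critical types match (1x1 blocks here),
# pushing matched-type costs back into [0, 1]
dA[0:1, 0:1] -= 1
dA[1:2, 1:2] -= 1
print(dA)
```

After the shift, the matched-type entries are cheap (at most 1) while every mismatched-type entry costs at least 1, which is what steers the transport plan toward pairing critical points of the same type.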
def _mergeTreePersistenceDiagram(tree : nx.Graph) -> np.ndarray:
    """
    Given a merge tree represented as a networkx graph, compute its persistence diagram.

    The networkx graph must contain a vertex attribute "sf" that stores the scalar field value at each node.
    This algorithm only works for join trees, not split trees or contour trees.

    Args:
        tree: The merge tree whose persistence diagram is computed.
    Returns:
        The persistence diagram of the merge tree. If n is the number of features in the persistence diagram,
        it will be returned as an [n,2] numpy array, where each row stores the (birth, death) points of a feature.
        The (birth, death) times are represented as function values and are not normalized.
    """

    def f(node):
        return tree.nodes[node]["sf"]

    # sort leaves decreasing by function value, dropping the highest (global) leaf

    marked_points = set()
    leaves = [n for n in tree if tree.degree(n) == 1]
    leaves.sort(key = lambda n : f(n))
    leaves = leaves[:-1]
    leaves.reverse()

    # for each leaf, climb up the tree and pair with the first unpaired saddle.

    pairs = []

    for n1 in leaves:
        val = f(n1)

        base_node = n1
        higher_neighbor = None

        while higher_neighbor is None:

            # look for a neighbor with a higher function value
            found_next_point = False
            for n2 in tree.neighbors(base_node):
                if f(n2) > val:
                    if n2 in marked_points:
                        base_node = n2
                        val = f(n2)
                    else:
                        higher_neighbor = n2
                        marked_points.add(n2)

                    found_next_point = True
                    break

            if not found_next_point:
                raise Exception(f"could not find an unpaired node above leaf {n1}")

        pairs.append((f(n1), f(higher_neighbor)))

    return np.array(pairs)

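The leaf-climbing pairing above implements the elder rule: each non-global leaf is paired with the first saddle on its upward path that a higher (younger-surviving) leaf has not already claimed. A compact standalone re-sketch on a four-node join tree (the node names and function values are made up; this duplicates the logic for illustration and is not called anywhere in this module):

```python
import networkx as nx

# join tree: leaves a (f=0) and b (f=1) merge at saddle s (f=2); root r (f=5)
tree = nx.Graph([("a", "s"), ("b", "s"), ("s", "r")])
nx.set_node_attributes(tree, {"a": 0.0, "b": 1.0, "s": 2.0, "r": 5.0}, "sf")

def f(n):
    return tree.nodes[n]["sf"]

marked = set()
# all degree-1 nodes are leaves; drop the highest one (the global feature's endpoint)
leaves = sorted((n for n in tree if tree.degree(n) == 1), key=f)[:-1]
pairs = []
for leaf in reversed(leaves):  # highest remaining leaf pairs first
    node, val = leaf, f(leaf)
    paired = None
    while paired is None:
        for n2 in tree.neighbors(node):
            if f(n2) > val:
                if n2 in marked:
                    node, val = n2, f(n2)   # climb through an already-claimed saddle
                else:
                    paired = n2
                    marked.add(n2)
                break
    pairs.append((f(leaf), f(paired)))

print(pairs)
```

Leaf b (f=1) claims the saddle s, giving the pair (1, 2); leaf a (f=0) then has to climb past s and pairs with the root, giving (0, 5), which is the elder rule's assignment of the longest-lived feature to the oldest (lowest) leaf.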
def mergeTreePersistenceWassersteinScore(gtPointsFilename : str, gtEdgesFilename : str, gtScalarArrayName : str,
                                         reconPointsFilename : str, reconEdgesFilename : str, reconScalarArrayName : str, verbose : bool = False) -> int:

    """
    Given two different merge trees stored in VTK format,
    compute a similarity score from 0-10 based on the Wasserstein distance of their persistence diagrams.

    This implementation only works with join trees, and not split trees or contour trees. Each merge tree
    should be stored as two legacy VTK files (.vtk) where there is one file for the points and another for the edges.
    The points file should label each point with its function value.

    The result is controlled by several different parameters defined at the top of the file. The Wasserstein distance has
    an order controlled by wassersteinOrder, while the ground metric has an order given by wassersteinGroundMetric.
    After the distance is computed, an average is taken by dividing it through by (|P|+|Q|)/2.

    A score of 0 is bad and 10 is good. The lowest value that can score zero points is given by
    maximumAverageWassersteinDistance. Scores below this will scale linearly, where a distance of zero scores 10 points.

    Args:
        gtPointsFilename: The name of a file in legacy VTK format (.vtk) that stores the points of the ground truth merge tree.
        gtEdgesFilename: The name of a file in legacy VTK format (.vtk) that stores the edges of the ground truth merge tree. The
            edges should each take the form of a cell of type vtkLine.
        gtScalarArrayName: The name of the point array in the GT points file that stores the scalar field value at each point.
        reconPointsFilename: The name of a file in legacy VTK format (.vtk) that stores the points of the reconstructed merge tree.
        reconEdgesFilename: The name of a file in legacy VTK format (.vtk) that stores the edges of the reconstructed merge tree.
        reconScalarArrayName: The name of the point array in the reconstructed points file that stores the scalar field value at
            each point.
        verbose: Should messages be printed out if there are errors with the files.
    """

    try:
        reconGraph = _loadGraphFromVTK(reconPointsFilename, reconEdgesFilename, [(reconScalarArrayName, "sf")])
    except Exception as e:
        if verbose:
            print(e)
        return 0

    gtGraph = _loadGraphFromVTK(gtPointsFilename, gtEdgesFilename, [(gtScalarArrayName, "sf")])

    gtPersistenceDiagram = _mergeTreePersistenceDiagram(gtGraph)
    reconPersistenceDiagram = _mergeTreePersistenceDiagram(reconGraph)

    minFunctionValue = np.min(gtPersistenceDiagram)
    maxFunctionValue = np.max(gtPersistenceDiagram)
|
| 1190 |
+
|
| 1191 |
+
gtPersistenceDiagram = (gtPersistenceDiagram - minFunctionValue) / (maxFunctionValue - minFunctionValue)
|
| 1192 |
+
reconPersistenceDiagram = (reconPersistenceDiagram - minFunctionValue) / (maxFunctionValue - minFunctionValue)
|
| 1193 |
+
|
| 1194 |
+
wassersteinDistance = gudhi.wasserstein.wasserstein_distance(gtPersistenceDiagram, reconPersistenceDiagram,
|
| 1195 |
+
order=wassersteinOrder, internal_p=wassersteinGroundMetric)
|
| 1196 |
+
|
| 1197 |
+
numAverage = (gtPersistenceDiagram.shape[0] + reconPersistenceDiagram.shape[0]) / 2
|
| 1198 |
+
wassersteinDistance /= numAverage
|
| 1199 |
+
|
| 1200 |
+
if wassersteinDistance == 0:
|
| 1201 |
+
return 10
|
| 1202 |
+
|
| 1203 |
+
score = round( 10 * (maximumAverageWassersteinDistance - wassersteinDistance) / maximumAverageWassersteinDistance )
|
| 1204 |
+
|
| 1205 |
+
if not canImperfectPredictionsScore10 and score == 10:
|
| 1206 |
+
return 9
|
| 1207 |
+
|
| 1208 |
+
if score < 0:
|
| 1209 |
+
return 0
|
| 1210 |
+
|
| 1211 |
+
return score
|