\section{Introduction}
A partially directed graph $X$ is called a mixed graph; its undirected edges are called digons and its directed edges are called arcs. Formally, a mixed graph $X$ consists of a set of vertices $V(X)$ together with a set of undirected edges $E_0(X)$ and a set of directed edges $E_1(X)$. For an arc $xy \in E_1(X)$, $x$ (resp. $y$) is called the initial (resp. terminal) vertex. The graph obtained from the mixed graph $X$ by stripping out the orientation of its arcs is called the underlying graph of $X$ and is denoted by $\Gamma(X)$.\\
A collection of digons and arcs of a mixed graph $X$ is called a perfect matching if they are vertex disjoint and cover $V(X)$. In other words, a perfect matching of a mixed graph is simply a perfect matching of its underlying graph. In general, a mixed graph may have more than one perfect matching. We denote the class of bipartite mixed graphs with a unique perfect matching by $\mathcal{H}$, and for a mixed graph in this class the unique perfect matching will be denoted by $\mathcal{M}$. For a mixed graph $X\in \mathcal{H}$, an arc (resp. digon) in $\mathcal{M}$ is called a matching arc (resp. matching digon) of $X$.
If $D$ is a mixed subgraph of $X$, then the mixed graph $X\backslash D$ is the induced mixed graph over $V(X)\backslash V(D)$.\\
Studying a graph or a digraph structure through properties of a matrix associated with it is an old and rich area of research.
For undirected graphs, the most popular and widely investigated matrix in the literature is the adjacency matrix. The adjacency matrix of a graph is symmetric, hence diagonalizable with all eigenvalues real. On the other hand, the adjacency matrix of a directed or mixed graph is not symmetric in general and its eigenvalues need not be real; consequently, working with such a matrix is challenging. Many researchers have therefore proposed other adjacency matrices for digraphs. For instance, in \cite{Irena} the author investigated the spectrum of $AA^T$, where $A$ is the traditional adjacency matrix of a digraph, calling it the non-negative spectrum of the digraph. In \cite{OMT1}, the authors proved that the non-negative spectrum is completely determined by a vertex partition called the common out-neighbor partition. The authors of \cite{BMI} and of \cite{LIU2015182} independently proposed a new adjacency matrix of mixed graphs as follows:
For a mixed graph $X$, the hermitian adjacency matrix of $X$ is a $|V|\times |V|$ matrix $H(X)=[h_{uv}]$, where
\[h_{uv} = \left\{
\begin{array}{ll}
1 & \text{if } uv \in E_0(X),\\
i & \text{if } uv \in E_1(X), \\
-i & \text{if } vu \in E_1(X),\\
0 & \text{otherwise}.
\end{array}
\right.
\]
This matrix has many nice properties: it has a real spectrum and it satisfies the interlacing theorem. Besides investigating the basic properties of this hermitian adjacency matrix, the authors proved many interesting properties of the spectrum of $H$. This motivated Mohar \cite{Mohar2019ANK} to extend the proposed adjacency matrix. The new kind of hermitian adjacency matrices, called $\alpha$-hermitian adjacency matrices of mixed graphs, is defined as follows: let $X$ be a mixed graph and let $\alpha$ be the primitive $n^{th}$ root of unity $e^{\frac{2\pi}{n}i}$. Then the $\alpha$-hermitian adjacency matrix of $X$ is the $|V|\times |V|$ matrix $H_{\alpha}(X)=[h_{uv}]$, where
\[h_{uv} = \left\{
\begin{array}{ll}
1 & \text{if } uv \in E_0(X),\\
\alpha & \text{if } uv \in E_1(X), \\
\overline{\alpha} & \text{if } vu \in E_1(X),\\
0 & \text{otherwise}.
\end{array}
\right.
\]
Clearly this new kind of hermitian adjacency matrix of mixed graphs is a natural generalization of the old one, and even of the adjacency matrix of undirected graphs. As mentioned above, these adjacency matrices ($H_i(X)$ and $H_\alpha(X)$) are hermitian and have interesting properties, which paved the way for a fascinating line of research.\\
For simplicity when dealing with one mixed graph $X$, then we write $H_\alpha$ instead of $H_\alpha(X)$. \\\\
The smallest positive eigenvalue of a graph plays an important role in quantum chemistry. Motivated by this application, Godsil \cite{God} investigated the inverse of the adjacency matrix of a bipartite graph. He proved that if $T$ is a tree with a perfect matching and $A(T)$ is its adjacency matrix, then $A(T)$ is invertible and there is a $\{1,-1\}$ diagonal matrix $D$ such that $DA(T)^{-1}D$ is the adjacency matrix of another graph. Many of the problems raised in \cite{God} are still open, and further research continuing Godsil's work has appeared since; see \cite{Pavlkov}, \cite{McLeman2014GraphI} and \cite{Akbari2007OnUG}.\\
In this paper we study the inverse of the $\alpha$-hermitian adjacency matrix $H_\alpha$ of a unicyclic bipartite mixed graph $X$ with a unique perfect matching. Since undirected graphs can be considered a special case of mixed graphs, the outcomes of this paper are broader than the previous work in this area. We examine the inverse of the $\alpha$-hermitian adjacency matrices of bipartite mixed graphs and of unicyclic bipartite mixed graphs. Moreover, for $\alpha=\gamma$, the primitive third root of unity, we answer the classical question of when $H_\alpha^{-1}$ is $\{\pm 1\}$ diagonally similar to the $\alpha$-hermitian adjacency matrix of a mixed graph. More precisely, for a unicyclic bipartite mixed graph $X$ with a unique perfect matching we fully characterize when there is a $\{\pm 1\}$ diagonal matrix $D$ such that $DH_\gamma^{-1}D$ is the $\gamma$-hermitian adjacency matrix of a mixed graph, and through our work we introduce a construction of such a diagonal matrix $D$. To this end, we need the following definitions and theorems:
\begin{definition}\citep{Abudayah2}
Let $X$ be a mixed graph and $H_\alpha=[h_{uv}]$ be its $\alpha$-hermitian adjacency matrix.
\begin{itemize}
\item $X$ is called an elementary mixed graph if, for every component $X'$ of $X$, $\Gamma(X')$ is either an edge or a cycle $C_k$ (for some $k\ge 3$).
\item For an elementary mixed graph $X$, the rank of $X$ is defined as $r(X)=n-c,$ where $n=|V(X)|$ and $c$ is the number of its components. The co-rank of $X$ is defined as $s(X)=m-r(X)$, where $m=|E_0(X)\cup E_1(X)|$.
\item For a mixed walk $W$ in $X$ with $\Gamma(W)=r_1r_2\dots r_k$, the value $h_\alpha(W)$ is defined as $$h_\alpha(W)=h_{r_1r_2}h_{r_2r_3}h_{r_3r_4}\dots h_{r_{k-1}r_k}\in \{\alpha^n\}_{n\in \mathbb{Z}}.$$
\end{itemize}
\end{definition}
Recall that a bijective function $\eta$ from a set $V$ to itself is called a permutation. The set of all permutations of a set $V$, denoted by $S_V$, together with function composition forms a group. Recall also that every $\eta \in S_V$ can be written as a composition of transpositions; the number of transpositions in such a decomposition is not unique, but its parity is. We therefore define $sgn(\eta)$ as $(-1)^k$, where $k$ is the number of transpositions when $\eta$ is decomposed as a product of transpositions. The following classical result of linear algebra is well known.
\begin{theorem} \label{exp}
If $A=[a_{ij}]$ is an $n\times n$ matrix then $$\det(A)=\displaystyle \sum_{\eta \in S_n } sgn(\eta) a_{1,\eta(1)}a_{2,\eta(2)}a_{3,\eta(3)}\dots a_{n,\eta(n)}. $$
\end{theorem}
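Theorem \ref{exp} can be checked directly on small matrices. The sketch below (illustrative only; the helper names and the sample matrix are ours, and the permutation expansion has $n!$ terms, so it is usable only for small $n$) implements the expansion in Python, computing $sgn(\eta)$ by counting inversions.

```python
# Illustrative sketch of the permutation (Leibniz) expansion of the
# determinant from the theorem above.  Exponential cost: small n only.
from itertools import permutations

def sgn(eta):
    """Sign of a permutation given as a tuple of 0-based images:
    (-1)^(number of inversions)."""
    inv = sum(1 for a in range(len(eta)) for b in range(a + 1, len(eta))
              if eta[a] > eta[b])
    return -1 if inv % 2 else 1

def det_leibniz(A):
    """det(A) = sum over eta in S_n of sgn(eta) * prod_i a_{i, eta(i)}."""
    n = len(A)
    total = 0
    for eta in permutations(range(n)):
        term = sgn(eta)
        for i in range(n):
            term *= A[i][eta[i]]
        total += term
    return total

# A quick check against a determinant computed by cofactor expansion:
# 2*(3*4 - 0*1) - 0 + 1*(1*1 - 3*0) = 25.
A = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
print(det_leibniz(A))   # 25
```

In practice an LU-based routine is preferable; this form is useful precisely because it mirrors the statement of the theorem term by term.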
\section{Inverse of $\alpha$-hermitian adjacency matrix of a bipartite mixed graph}
In this section, we investigate the invertibility of the $\alpha$-hermitian adjacency matrix of a bipartite mixed graph $X$. Then we find a formula for the entries of its inverse based on elementary mixed subgraphs. This will lead to a formula for the entries based on the type of the paths between vertices.
Using Theorem \ref{exp}, the authors of \cite{Abudayah2} proved the following theorem.
\begin{theorem}(Determinant expansion for $H_{\alpha}$) \cite{Abudayah2} \label{Determinant}
Let $X$ be a mixed graph and $H_\alpha$ its $\alpha$-hermitian adjacency matrix, then
$$ det( H_{\alpha}) = \sum_{X'} (-1)^{r(X')}2^{s(X')}Re \left(\prod_C h_{\alpha} ( \vec{C} )\right) $$
where the sum ranges over all spanning elementary mixed subgraphs $X'$ of $X$, the product ranges over all mixed cycles $C$ in $X'$, and $\vec{C}$ is any mixed closed walk traversing $C$.
\end{theorem}
Now, let $X\in \mathcal{H}$ and let $\mathcal{M}$ be the unique perfect matching in $X$. Since $X$ is bipartite, $X$ contains no odd cycles. Let $C_k$ be a cycle in $X$. If $C_k \cap \mathcal{M}$ were a perfect matching of $C_k$, then $\mathcal{M} \Delta C_k= (\mathcal{M}\backslash C_k) \cup (C_k \backslash \mathcal{M})$ would be another perfect matching of $X$, a contradiction. Therefore at least one vertex of every cycle $C_k$ is matched by a matching edge not in $C_k$. Consequently, no spanning elementary mixed subgraph of $X$ contains a cycle component, and hence if $X\in \mathcal{H}$ then $X$ has exactly one spanning elementary mixed subgraph, namely the one whose components are the $K_2$'s of $\mathcal{M}$. Using this discussion together with Theorem \ref{Determinant}, we get the following theorem.
\begin{theorem}\label{Inv}
If $X\in \mathcal{H}$ and $H_\alpha$ is its $\alpha$-hermitian adjacency matrix then $H_\alpha$ is non singular.
\end{theorem}
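As a quick numerical sanity check of Theorem \ref{Inv} (a toy example of ours, not taken from the paper): the path $0$--$1$--$2$--$3$ with matching digons $01$, $23$ and the non-matching edge oriented as the arc $1\to 2$ lies in $\mathcal{H}$, and by Theorem \ref{Determinant} its determinant should be $(-1)^{4-2}2^{4-4}=1\ne 0$.

```python
# Sanity check of the nonsingularity theorem on a small member of H.
# The example and helper are ours; any unit-modulus alpha works.
import cmath
from itertools import permutations

def det_leibniz(A):
    """Permutation (Leibniz) expansion of det; fine for small n."""
    n = len(A)
    total = 0
    for eta in permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n)
                  if eta[a] > eta[b])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= A[i][eta[i]]
        total += term
    return total

alpha = cmath.exp(2j * cmath.pi / 5)

# Path 0-1-2-3: matching digons 01 and 23, arc 1->2 (non-matching).
H = [[0, 1, 0, 0],
     [1, 0, alpha, 0],
     [0, alpha.conjugate(), 0, 1],
     [0, 0, 1, 0]]

d = det_leibniz(H)
# The matching is the unique spanning elementary mixed subgraph,
# so det = (-1)^{4-2} * 2^{4-4} = 1.
print(abs(d - 1) < 1e-9)   # True: H_alpha is non-singular
```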
Now, let $X$ be a mixed graph and $H_\alpha$ be its $\alpha$-hermitian adjacency matrix. For invertible $H_\alpha$, the following theorem gives a formula for the entries of $H_\alpha^{-1}$ based on elementary mixed subgraphs and paths between vertices. The proof can be found in \cite{invtree}.
\begin{theorem}\label{Thm1}
Let $X$ be a mixed graph, $H_\alpha$ be its $\alpha$-hermitian adjacency matrix and for $i \neq j$, $\rho_{i \to j}=\{ P_{i \to j}: P_{i \to j} \text{ is a mixed path from the vertex } i \text{ to the vertex } j \}$. If $\det(H_\alpha) \ne 0$, then
\begin{align*}
[H_\alpha^{-1}]_{ij} =&\\
& \frac{1}{\det(H_\alpha)}\displaystyle \sum_{P_{i \to j}\in \rho_{i \to j}} (-1)^{|E(P_{i \to j})|} \text{ } h_\alpha (P_{i \to j}) \sum_{X'} (-1)^{r(X')} 2^{s(X')} Re \left( \prod_C h_\alpha (\vec{C})\right)
\end{align*}
where the second sum ranges over all spanning elementary mixed subgraphs $X'$ of $X\backslash P_{i \to j}$, the product is being taken over all mixed cycles $C$ in $X'$ and $\vec{C}$ is any mixed closed walk traversing $C$.
\end{theorem}
This theorem describes how to find the off-diagonal entries of $H_\alpha^{-1}$. The diagonal entries, however, may or may not be zero. To see this, let us consider the following example:
\begin{example}
Consider the mixed graph $X$ shown in Figure \ref{fig:A} and let $\alpha=e^{\frac{\pi}{5}i}$. The mixed graph $X$ has a unique perfect matching, say $M$, and this matching consists of the set of unbroken arcs and digons. Further $M$ is the unique spanning elementary mixed subgraph of $X$. Therefore, using Theorem \ref{Determinant}
\[ \det(H_\alpha)= (-1)^{8-4}2^{4-4}=1
\]
So, $H_\alpha$ is invertible. To calculate $[H_\alpha^{-1}]_{ii}$, we observe that
\[ [H_\alpha^{-1}]_{ii}= \frac{\det((H_\alpha)_{(i,i)})}{\det(H_\alpha)}=\det((H_\alpha)_{(i,i)}),
\]
where $(H_\alpha)_{(i,i)}$ is the matrix obtained from $H_\alpha$ by deleting the $i^{th}$ row and the $i^{th}$ column, which is exactly the $\alpha$-hermitian adjacency matrix of $X\backslash \{i\}$. Applying this to the mixed graph in Figure \ref{fig:A}, one can deduce that the diagonal entries of $H_\alpha^{-1}$ are all zero except the entry $[H_\alpha^{-1}]_{11}$. Indeed, the mixed graph $X \backslash \{1\}$ has exactly one spanning elementary mixed subgraph. Therefore,
\[ [H_\alpha^{-1}]_{11}=\det((H_\alpha)_{(1,1)})=(-1)^{7-2}2^{6-5}Re(\alpha)=-2Re(\alpha).
\]
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\linewidth]{Ex1-1.eps}
\caption{Mixed Graph $X$ where $H_\alpha^{-1}$ has nonzero diagonal entry}
\label{fig:A}
\end{figure}
\end{example}
The following theorem shows that if $X$ is a bipartite mixed graph with a unique perfect matching, then the diagonal entries of $H_\alpha^{-1}$ are all zero.
\begin{theorem}
Let $X \in \mathcal{H}$ and $H_\alpha$ be its $\alpha$-hermitian adjacency matrix. Then, for every vertex $i \in V(X)$, $(H_\alpha^{-1})_{ii} =0$.
\end{theorem}
\begin{proof}
Since $X$ is a bipartite mixed graph with a unique perfect matching, Theorem \ref{Inv} implies that $H_\alpha$ is invertible. Furthermore,
\[
(H_\alpha^{-1})_{ii} = \frac{\det((H_\alpha)_{(i,i)})}{\det(H_\alpha)}
\]
Note that $(H_\alpha)_{(i,i)}$ is the $\alpha$-hermitian adjacency matrix of the mixed graph $X\backslash \{i\}$. Since $X$ has a perfect matching, $|V(X)|$ is even, and therefore $X\backslash \{i\}$ has an odd number of vertices. Hence $X\backslash \{i\}$ has neither a perfect matching nor, more generally, a spanning elementary mixed subgraph, and thus $\det((H_\alpha)_{(i,i)})=0$.
\end{proof}\\
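The vanishing of the diagonal can also be observed numerically. The sketch below (our own example and helper, for illustration) inverts the $\alpha$-hermitian adjacency matrix of the path $0$--$1$--$\dots$--$5$ with matching digons $01$, $23$, $45$ and arcs on the two non-matching edges, and confirms that all diagonal entries of $H_\alpha^{-1}$ vanish up to roundoff.

```python
# Numerical check: diagonal of the inverse vanishes for a bipartite
# mixed graph with a unique perfect matching (toy example of ours).
import cmath

def inverse(M):
    """Gauss-Jordan elimination with partial pivoting; small complex
    matrices only."""
    n = len(M)
    A = [list(row) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

alpha = cmath.exp(2j * cmath.pi / 7)
ac = alpha.conjugate()

# Path 0-1-2-3-4-5: matching digons 01, 23, 45; arcs 1->2 and 3->4.
H = [[0, 1, 0, 0, 0, 0],
     [1, 0, alpha, 0, 0, 0],
     [0, ac, 0, 1, 0, 0],
     [0, 0, 1, 0, alpha, 0],
     [0, 0, 0, ac, 0, 1],
     [0, 0, 0, 0, 1, 0]]

Hinv = inverse(H)
print(max(abs(Hinv[i][i]) for i in range(6)) < 1e-9)   # True
```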
Now, we investigate the off-diagonal entries of the inverse of the $\alpha$-hermitian adjacency matrix of a bipartite mixed graph $X \in \mathcal{H}$. In order to do that, we need to characterize the structure of the mixed graph $X \backslash P$ for every mixed path $P$ in $X$. To this end, consider the following theorems:
\begin{theorem}\cite{clark1991first}\label{clark}
Let $M$ and $M'$ be two matchings in a graph $G$. Let $H$ be the subgraph of $G$ induced by the set of edges $$M \Delta M'=(M\backslash M') \cup (M' \backslash M).$$
Then, the components of $H$ are either cycles with an even number of vertices whose edges alternate between $M$ and $M'$, or paths whose edges alternate between $M$ and $M'$ and whose end vertices are unsaturated in one of the two matchings.
\end{theorem}
\begin{corollary} \label{c1}
For any graph $G$, if $G$ has a unique perfect matching then $G$ does not contain an alternating cycle.
\end{corollary}
\begin{definition}
Let $X$ be a mixed graph with a unique perfect matching. A path $P$ between two vertices $u$ and $v$ in $X$ is called a co-augmenting path if the edges of the underlying path of $P$ alternate between matching and non-matching edges, and both the first and the last edges of $P$ are matching edges.
\end{definition}
\begin{corollary} \label{c2}
Let $G$ be a bipartite graph with a unique perfect matching $\mathcal{M}$, and let $u$ and $v$ be two vertices of $G$. If $P_{uv}$ is a co-augmenting path between $u$ and $v$, then $G \backslash P_{uv}$ is a bipartite graph with the unique perfect matching $\mathcal{M}\backslash P_{uv}$.
\end{corollary}
\begin{proof}
That $\mathcal{M}\backslash P_{uv}$ is a perfect matching of $G \backslash P_{uv}$ is obvious.
Suppose that $M' \ne \mathcal{M}\backslash P_{uv}$ is another perfect matching of $G \backslash P_{uv}$. Since $P_{uv}$ is a co-augmenting path, its matching edges cover $V(P_{uv})$, and hence $M' \cup (P_{uv} \cap \mathcal{M})$ is a perfect matching of $G$ different from $\mathcal{M}$. Therefore $G$ has more than one perfect matching, which is a contradiction.
\end{proof}\\
\begin{theorem}\label{nco}
Let $G$ be a bipartite graph with a unique perfect matching $\mathcal{M}$, and let $u$ and $v$ be two vertices of $G$. If $P_{uv}$ is not a co-augmenting path between $u$ and $v$, then $G \backslash P_{uv}$ does not have a perfect matching.
\end{theorem}
\begin{proof}
Since $G$ has a perfect matching, $G$ has an even number of vertices. Therefore, when $P_{uv}$ has an odd number of vertices, $G \backslash P_{uv}$ has an odd number of vertices and hence no perfect matching.\\
Suppose then that $P_{uv}$ has an even number of vertices, so that $P_{uv}$ has a perfect matching $M$. If $G \backslash P_{uv}$ had a perfect matching $M'$, then $M \cup M'$ would be a perfect matching of $G$, and by uniqueness $M \cup M' = \mathcal{M}$. But then the edges of $P_{uv}$ alternate between matching and non-matching edges with both end edges in $\mathcal{M}$, that is, $P_{uv}$ is a co-augmenting path, contradicting our assumption.
\end{proof}\\
Now, we are ready to give a formula for the entries of the inverse of the $\alpha$-hermitian adjacency matrix of a bipartite mixed graph $X$ with a unique perfect matching. This characterization is based on the co-augmenting paths between vertices of $X$.
\begin{theorem}\label{Thm2}
Let $X$ be a bipartite mixed graph with unique perfect matching $\mathcal{M}$, $H_\alpha$ be its $\alpha$-hermitian adjacency matrix and
$$\Im_{i \to j}=\{ P_{i \to j}: P_{i \to j} \text{\small{ is a co-augmenting mixed path from the vertex }} i \text{ to the vertex } j \}$$ Then
\[
(H_\alpha^{-1})_{ij}= \left\{
\begin{array}{ll}
\displaystyle \sum_{P_{i\to j} \in \Im_{i\to j}} (-1)^{\frac{|E(P_{i \to j})|-1}{2}} h_\alpha(P_{i \to j}) & \text{if } i\ne j \\
0 & \text{ if } i =j
\end{array}
\right.
\]
\end{theorem}
\begin{proof}
Using Theorem \ref{Thm1},
$${ [H_{\alpha}^{-1}]_{ij} = \frac{1}{\det(H_\alpha)} \sum_{P_{i \rightarrow j} \in \rho_{i \rightarrow j}} \left[ (-1)^{|E(P_{i \rightarrow j})|} h_\alpha(P_{i \rightarrow j}) \sum_{X'} (-1)^{r(X')} 2^{s(X')} Re (\prod_C h_{\alpha} ( \vec{C} )) \right ]} $$
where the second sum ranges over all spanning elementary mixed subgraphs of $X \backslash P_{i \rightarrow j}$. The product is being taken over all mixed cycles $C$ of $X'$ and $\vec{C}$ is any mixed closed walk traversing $C$. \\
First, using Theorem \ref{nco}, we observe that if $P_{i \rightarrow j}$ is not a co-augmenting path then $X \backslash P_{i\to j}$ does not have a perfect matching; since $X$ is bipartite, every spanning elementary mixed subgraph would yield a perfect matching, so $X \backslash P_{i\to j}$ has no spanning elementary mixed subgraph and the term corresponding to $P_{i\to j}$ contributes zero. Thus only co-augmenting paths contribute.
According to Corollary \ref{c2}, for any co-augmenting path $P_{i\to j}$ from the vertex $i$ to the vertex $j$, the graph $X \backslash P_{i\to j}$ has a unique perfect matching, namely $\mathcal{M}\cap E( X \backslash P_{i\to j})$. Using Corollary \ref{c1}, $X \backslash P_{i\to j}$ does not contain an alternating cycle. Thus $X \backslash P_{i\to j}$ contains exactly one spanning elementary mixed subgraph, namely $\mathcal{M} \backslash P_{i\to j}$. So,
$$ [H_{\alpha}^{-1}]_{ij} = \frac{1}{\det(H_\alpha)} \sum_{P_{i \to j} \in \Im_{i\to j}} (-1)^{|E(P_{i \to j})|} h_\alpha(P_{i \to j}) (-1)^{|V(X\backslash P_{i \to j})|-k} $$
where $k$ is the number of components of the spanning elementary mixed subgraph of $X \backslash P_{i\rightarrow j}$.
Observing that $| V(X \backslash P_{i\rightarrow j})|=n-(|E(P_{i \rightarrow j})|+1)$, $k=\frac{n-(|E(P_{i\rightarrow j})|+1)}{2}$ and $\det(H_\alpha) = (-1)^\frac{n}{2}$, we get the result.
\end{proof}\\
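Theorem \ref{Thm2} can be verified entry by entry on a small member of $\mathcal{H}$ (again a toy example of ours): for the path $0$--$1$--$2$--$3$ with matching digons $01$, $23$ and the arc $1\to 2$, the only co-augmenting path from $0$ to $3$ is the whole path, with $\frac{|E|-1}{2}=1$, so the theorem predicts $(H_\alpha^{-1})_{03}=-h_\alpha(P)=-\alpha$, while pairs joined by no co-augmenting path get $0$.

```python
# Entry-by-entry check of the co-augmenting-path formula on a path
# graph in H (example and helper are ours).
import cmath

def inverse(M):
    """Gauss-Jordan elimination with partial pivoting (small matrices)."""
    n = len(M)
    A = [list(row) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

alpha = cmath.exp(2j * cmath.pi / 7)

# Path 0-1-2-3: matching digons 01 and 23, arc 1->2.
H = [[0, 1, 0, 0],
     [1, 0, alpha, 0],
     [0, alpha.conjugate(), 0, 1],
     [0, 0, 1, 0]]
Hinv = inverse(H)

# Unique co-augmenting path 0->1->2->3: predicted entry -h(P) = -alpha.
print(abs(Hinv[0][3] + alpha) < 1e-9)   # True
# 0->1->2 ends with a non-matching edge: not co-augmenting, entry 0.
print(abs(Hinv[0][2]) < 1e-9)           # True
# 2->3 is a single matching edge: predicted entry +1.
print(abs(Hinv[2][3] - 1) < 1e-9)       # True
```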
\section{Inverse of $\gamma$-hermitian adjacency matrix of a unicyclic bipartite mixed graph}
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\linewidth]{inverse.eps}
\caption{Unicyclic bipartite mixed graph with unique perfect matching and $4$ pegs }
\label{fig:D}
\end{figure}
Let $\gamma$ be the primitive third root of unity $e^{\frac{2\pi}{3}i}$. By Theorem \ref{Thm2}, the value $h_\alpha(P_{i\to j})\in \{\alpha^k\}_{k\in \mathbb{Z}}$ plays a central role in finding the entries of $H_\alpha^{-1}$, and since the third root of unity has the property $\gamma^k \in \{1,\gamma, \overline{\gamma}\}$, we focus our study in this section on $\alpha=\gamma$. The property $\alpha^k \in \{\pm1, \pm \alpha, \pm \overline{\alpha}\}$ does not hold in general. To illustrate, consider the mixed graph shown in Figure \ref{fig:D} and let $\alpha=e^{\frac{\pi}{5}i}$. Using Theorem \ref{Thm2} we get $[H_{\alpha}^{-1}]_{05}=e^{\frac{3\pi}{5}i}$, which is not in the set $\{\pm 1, \pm \alpha, \pm \overline{\alpha}\}$.\\
In this section, we answer the classical question of whether the inverse of the $\gamma$-hermitian adjacency matrix of a unicyclic bipartite mixed graph is $\{1,-1\}$ diagonally similar to a hermitian adjacency matrix of another mixed graph. Consider the mixed graph shown in Figure \ref{fig:D}; clearly the entries of $H_\gamma^{-1}$ are from the set $\{0,\pm 1, \pm \gamma, \pm \overline{\gamma}\}$. \\
Another thing we should bear in mind is the existence of a $\{1,-1\}$ diagonal matrix $D$ such that $DH_\gamma^{-1} D$ is the $\gamma$-hermitian adjacency matrix of another mixed graph. For the mixed graph $X$ in Figure \ref{fig:D}, suppose that $D=diag\{d_{0},d_{1},\dots,d_{9}\}$ is a $\{1,-1\}$ diagonal matrix such that $DH_\gamma^{-1} D$ has all of its entries in the set $\{0, 1, \gamma, \overline{\gamma}\}$. Then, \\
\[
\begin{array}{l}
d_0d_5=1 \\
d_0d_9=-1 \\
d_9d_7=-1 \\
d_5d_7=-1
\end{array}
\]
Multiplying these four equations gives $1=-1$, which is impossible; therefore, such a diagonal matrix $D$ does not exist. To discuss the existence of the diagonal matrix $D$ further, let $G$ be a bipartite graph with a unique perfect matching. Define $X_G$ to be the mixed graph obtained from $G$ by orienting all non-matching edges. Clearly, for $\alpha \ne \pm 1$, changing the orientation of the non-matching edges changes the $\alpha$-hermitian adjacency matrix. For now, let us restrict our study to $\alpha=-1$. Using Theorem \ref{Thm2}, one easily gets the following observation.
\begin{observation}\label{corr1}
Let $G$ be a bipartite mixed graph with unique perfect matching $\mathcal{M}$, $H_{-1}$ be the $-1$-hermitian adjacency matrix of $X_G$ and
$$\Im_{i \to j}=\{ P_{i \to j}: P_{i \to j} \text{ is a co-augmenting mixed path from the vertex } i \text{ to the vertex } j \text{ in } X_G \}.$$
One can use Theorem \ref{Thm2} to get
\[
(H_{-1}^{-1})_{ij}= \left\{
\begin{array}{ll}
\displaystyle \vert \Im_{i\to j} \vert & \text{if } i\ne j \\
0 & \text{ if } i =j
\end{array}
\right.
\]
\end{observation}
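For $\alpha=-1$ the observation makes $H_{-1}^{-1}$ an integer matrix of path counts. On the toy path $0$--$1$--$2$--$3$ used earlier (matching edges $01$ and $23$; our own example), the pairs joined by a co-augmenting path are $\{0,1\}$, $\{2,3\}$ and $\{0,3\}$, each by exactly one path, and indeed:

```python
# alpha = -1 is self-conjugate, so orientation no longer matters:
# matching edges weigh 1, non-matching edges weigh -1.
def inverse(M):
    """Gauss-Jordan elimination with partial pivoting (small matrices)."""
    n = len(M)
    A = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

# Path 0-1-2-3: matching edges 01 and 23, non-matching edge 12.
H = [[0, 1, 0, 0],
     [1, 0, -1, 0],
     [0, -1, 0, 1],
     [0, 0, 1, 0]]

counts = [[round(x) for x in row] for row in inverse(H)]
expected = [[0, 1, 0, 1],
            [1, 0, 0, 0],
            [0, 0, 0, 1],
            [1, 0, 1, 0]]
print(counts == expected)   # True: entry (i,j) counts co-augmenting paths
```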
So, the question we need to answer now is when $A(G)$ and $H_{-1}(X_G)$ are diagonally similar. To this end, let $G$ be a bipartite graph with a unique perfect matching and $u\in V(G)$. For a walk $W=r_1r_2r_3\dots r_k$ in $G$ with $r_1=u$, define a function that assigns a value $f_W(j)$ to the $j^{th}$ vertex of $W$ as follows:
\[f_W(1)=1\]
and
\[
f_W(j+1)= \left\{
\begin{array}{ll}
-f_W(j) & \text{if } r_jr_{j+1} \text{ is a non-matching edge in } G \\
f_W(j) & \text{if } r_jr_{j+1} \text{ is a matching edge in } G
\end{array}
\right.
\]
See Figure \ref{fig:E}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\linewidth]{store.eps}
\caption{The values of $f_W$ where $W$ is the closed walk starting from $0$ }
\label{fig:E}
\end{figure}
Since any closed walk from a vertex $u$ to itself decomposes into pairs of identical subwalks and cycles, we get the following remark.
\begin{remark}
Let $G$ be a bipartite graph with a unique perfect matching and let $F(u)=\{f_W(u): W \text{ is a closed walk in } G \text{ starting at } u\}.$ Then $\vert F(u) \vert=1$ if and only if the number of non-matching edges in each cycle of $G$ is even.
\end{remark}
Finally, let $G$ be a bipartite graph with a unique perfect matching and suppose that each cycle of $G$ has an even number of non-matching edges. For a vertex $u\in V(G)$, define the function $w:V(G) \to \{1,-1\}$ by
\[ w(v)=f_W(v), \text{ where } W \text{ is any path from } u \text{ to } v,
\]
which is well defined by the preceding remark.
\begin{definition}
Suppose that $G$ is a bipartite graph with a unique perfect matching and every cycle of $G$ has an even number of non-matching edges. Suppose further that $V(G)=\{v_1,v_2,\dots,v_n\}$ and $u\in V(G)$. Define the matrix $D_u$ by $D_u=diag\{w(v_1),w(v_2),\dots,w(v_n)\}$.
\end{definition}
\begin{theorem}\label{her}
Suppose $G$ is a bipartite graph with a unique perfect matching and every cycle of $G$ has an even number of non-matching edges. Then for every $u \in V(G)$, we get $D_uA(G)D_u=H_{-1}(X_G)$.
\end{theorem}
\begin{proof}
Note that, for $x,y \in V(G)$, we have $(D_uA(G)D_u)_{xy}=d_xa_{xy}d_y$. Using the definition of $D_u$ we get
\[
d_xa_{xy}d_y= \left\{
\begin{array}{ll}
-1 & \text{if } xy \text{ is a non-matching edge in } G \\
1 & \text{if } xy \text{ is a matching edge in } G\\
0 & \text{ otherwise. }
\end{array}
\right.
\]
Therefore, $(D_uA(G)D_u)_{xy}=(H_{-1})_{xy}$.
\end{proof}\\
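The construction of $D_u$ and Theorem \ref{her} can be sketched as follows (a small example of ours; on a tree the cycle condition is vacuous): propagate $f_W$ from $u$ by a breadth-first search, keeping the sign across matching edges and flipping it across non-matching ones, then check $D_uA(G)D_u=H_{-1}(X_G)$.

```python
# Sketch of the D_u construction on a small tree with a unique
# perfect matching (example of ours).
from collections import deque

# Path 0-1-2-3-4-5 with unique perfect matching {01, 23, 45}.
n = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
matching = {(0, 1), (2, 3), (4, 5)}
adj = {v: [] for v in range(n)}
for x, y in edges:
    adj[x].append(y)
    adj[y].append(x)

def matched(x, y):
    return (x, y) in matching or (y, x) in matching

# Propagate w(v) = f_W(v) from u by BFS: keep the sign across a
# matching edge, flip it across a non-matching one.
u = 0
w = {u: 1}
queue = deque([u])
while queue:
    x = queue.popleft()
    for y in adj[x]:
        if y not in w:
            w[y] = w[x] if matched(x, y) else -w[x]
            queue.append(y)

A = [[1 if y in adj[x] else 0 for y in range(n)] for x in range(n)]
# H_{-1}(X_G): weight 1 on matching edges, -1 on non-matching edges.
H = [[(1 if matched(x, y) else -1) if A[x][y] else 0 for y in range(n)]
     for x in range(n)]
D = [w[v] for v in range(n)]

conjugated = [[D[x] * A[x][y] * D[y] for y in range(n)] for x in range(n)]
print(conjugated == H)   # True: D_u A(G) D_u = H_{-1}(X_G)
```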
Now we are ready to discuss the inverse of the $\gamma$-hermitian adjacency matrix of a unicyclic mixed graph. Let $X$ be a unicyclic bipartite mixed graph with a unique perfect matching. An arc or digon of $X$ is called a peg if it is a matching arc or digon and is incident to a vertex of the cycle of $X$. Since $X$ is a unicyclic bipartite graph with a unique perfect matching, $X$ has at least one peg; otherwise the cycle of $X$ would be an alternating cycle, and thus $X$ would have more than one perfect matching, contradicting the assumption. Moreover, since each vertex of the cycle is incident to a matching edge and the cycle has an even number of vertices, $X$ must contain at least two pegs. The following theorem deals with unicyclic bipartite mixed graphs $X\in \mathcal{H}$ with more than two pegs.
\begin{theorem}\label{peg}
Let $X$ be a unicyclic bipartite graph with unique perfect matching. If $X$ has more than two pegs, then between any two vertices of $X$ there is at most one co-augmenting path.
\end{theorem}
\begin{proof}
Let $\rho_1, \rho_2$ and $\rho_3$ be three pegs in $X$, let $u,v \in V(X)$, let $C$ be the unique cycle in $X$, and suppose there are two co-augmenting paths between $u$ and $v$, say $P$ and $P'$. Since $X$ is unicyclic, we have $V(C) \subseteq V(P) \cup V(P')$.
Case 1: $E(P) \cup E(P')$ does not contain any of the pegs. Let $w$ be the cycle vertex incident to $\rho_1$. Then $w$ is not matched by an edge of $P \cup P'$, which means the path containing $w$ fails to alternate at $w$, so one of $P$ or $P'$ is not a co-augmenting path, contradicting the assumption.
Case 2: $E(P) \cup E(P')$ contains pegs. Then $E(P) \cup E(P')$ contains at most two pegs, so some peg, say $\rho_1$, lies outside $E(P) \cup E(P')$; let $w$ be the cycle vertex incident to $\rho_1$. Then $w$ belongs to $P$ or $P'$, and since $\rho_1$ is the matching edge at $w$, $w$ is not matched by an edge of the path containing it; hence one of $P$ or $P'$ is not a co-augmenting path, again contradicting the assumption.
\end{proof}
\begin{corollary}\label{p1}
Let $X$ be a unicyclic bipartite mixed graph with a unique perfect matching. If $X$ has more than two pegs, then
\begin{enumerate}
\item $
(H_\alpha^{-1})_{ij}= \left\{
\begin{array}{ll}
(-1)^{\frac{|E(P_{i \to j})|-1}{2}} h_\alpha(P_{i \to j}) & \text{if } P_{i\rightarrow j} \text{ is a co-augmenting path from } i \text{ to } j \\
0 & \text{ otherwise }
\end{array}
\right.
$
\item If the cycle of $X$ contains an even number of non-matching edges, then for any vertex $u\in V(X)$, $D_uH^{-1}_\gamma(X)D_u$ is the $\gamma$-hermitian adjacency matrix of a mixed graph.
\end{enumerate}
\end{corollary}
\begin{proof}
Part one follows immediately from Theorem \ref{Thm2} together with Theorem \ref{peg}.\\
For part two, we observe that $\gamma^k\in \{1,\gamma,\overline{\gamma}\}$, so all entries of $H^{-1}_\gamma(X)$ are from the set $\{0,\pm 1,\pm \gamma,\pm \overline{\gamma}\}$. Moreover, the negative signs in $A(\Gamma(X))^{-1}$ and in $H_\gamma^{-1}$ appear in the same positions, which means that $D_uH_\gamma^{-1}D_u$ is the $\gamma$-hermitian adjacency matrix of a mixed graph if and only if $D_uA(\Gamma(X))^{-1}D_u$ is the adjacency matrix of a graph. Finally, Theorem \ref{her} ends the proof.
\end{proof}
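Part two of the corollary can be tested numerically on a hand-built example (ours, not the graph of Figure \ref{fig:D}): the cycle $C_4$ with a peg at every cycle vertex has four pegs and four non-matching cycle edges (an even number), so for $u$ a pendant vertex the matrix $D_uH_\gamma^{-1}D_u$ should have all entries in $\{0,1,\gamma,\overline{\gamma}\}$. The signs $w(\cdot)$ below were propagated by hand from $u=4$ using the $f_W$ rule.

```python
# Numerical check of part two of the corollary on a 4-peg unicyclic
# example of ours.
import cmath

def inverse(M):
    """Gauss-Jordan elimination with partial pivoting (small matrices)."""
    n = len(M)
    A = [list(row) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

g = cmath.exp(2j * cmath.pi / 3)           # gamma

# C_4 on vertices 0,1,2,3 with a pendant peg at each cycle vertex:
# vertex i is matched to pendant i+4 by a digon; the four cycle edges
# are non-matching, oriented as the directed cycle 0->1->2->3->0.
n = 8
H = [[0.0] * n for _ in range(n)]
for i in range(4):
    H[i][i + 4] = H[i + 4][i] = 1.0        # pegs (matching digons)
for i in range(4):
    j = (i + 1) % 4
    H[i][j] = g                            # arc i -> j
    H[j][i] = g.conjugate()

# Signs w(v) propagated by hand from u = 4 with the f_W rule
# (flip across non-matching edges only).
w = {4: 1, 0: 1, 1: -1, 5: -1, 2: 1, 6: 1, 3: -1, 7: -1}
D = [w[v] for v in range(n)]

Hinv = inverse(H)
M = [[D[x] * Hinv[x][y] * D[y] for y in range(n)] for x in range(n)]

allowed = [0, 1, g, g.conjugate()]
ok = all(min(abs(M[x][y] - a) for a in allowed) < 1e-9
         for x in range(n) for y in range(n))
print(ok)   # True: D_u H^{-1} D_u is a gamma-hermitian adjacency matrix
```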
Now we take care of unicyclic bipartite mixed graphs with exactly two pegs.
Using the same technique as in the proof of Theorem \ref{peg}, one can show the following:
\begin{theorem}\label{peg2}
Let $D$ be a unicyclic bipartite graph with a unique perfect matching and exactly two pegs $\rho_1$ and $\rho_2$. Then for any two vertices $u$ and $v$ of $D$, if there are two co-augmenting paths from the vertex $u$ to the vertex $v$, then $\rho_1$ and $\rho_2$ are edges of the two paths.
\end{theorem}
Let $X$ be a unicyclic bipartite mixed graph with a unique perfect matching and exactly two pegs, and let $uv$ and $u'v'$ be the two pegs of $X$, where $v$ and $v'$ are vertices of the cycle of $X$. We denote the two paths between $v$ and $v'$ along the cycle by $\mathcal{F}_{v\rightarrow v'}$ and $\mathcal{F}_{v\rightarrow v'}^c$.
\begin{theorem}\label{two pegs}
Let $X$ be a unicyclic bipartite mixed graph with a unique perfect matching and exactly two pegs, and let $C$ be the cycle of $X$. If there are two co-augmenting paths between the vertex $x$ and the vertex $y$, then
\[
(H_\alpha^{-1})_{xy}= (-1)^{\frac{|E(P_{x \to y})|-1}{2}} \frac{h_\alpha(P_{x \to v}) h_\alpha(P_{y \to v'}) }{h_\alpha(\mathcal{F}_{v \to v'})} \left[ (-1)^{m+1} h_\alpha(C)+1 \right]
\]
where $\mathcal{F}_{v \to v'}$ is chosen to be the part of the path $P_{x \to y}$ in the cycle $C$ and $C$ is of size $2m$.
\end{theorem}
\begin{proof}
Suppose that $P_{x \to y}$ and $Q_{x \to y}$ are the two paths between the vertices $x$ and $y$. Using Theorem \ref{Thm2} we have
\[
(H_\alpha^{-1})_{xy}= (-1)^{\frac{|E(P_{x \to y})|-1}{2}} h_\alpha(P_{x \to y}) + (-1)^{\frac{|E(Q_{x \to y})|-1}{2}} h_\alpha(Q_{x \to y})
\]
Now, using Theorem \ref{peg2}, $P_{x \to y}$ (resp. $Q_{x \to y}$) can be divided into three parts $P_{x \to v}$, $\mathcal{F}_{v \to v'}$ and $P_{v' \to y}$ (resp. $Q_{x \to v}=P_{x \to v}$, $\mathcal{F}_{v \to v'}^c$ and $Q_{v' \to y}=P_{v' \to y}$).\\ Observing that the number of non-matching edges in $\mathcal{F}_{v \to v'}$ is $k_1=\frac{|E(\mathcal{F}_{v \to v'})|+1}{2}$ and the number of non-matching edges in $\mathcal{F}_{v \to v'}^c$ is $k_2=m-\frac{|E(\mathcal{F}_{v \to v'})|+1}{2}+1$, we get
\[
(H_\alpha^{-1})_{xy}=(-1)^k h_\alpha(P_{x \to v}) h_\alpha(P_{v \to y}) \left( (-1)^{k_1} h_\alpha(\mathcal{F}_{v \to v'}) + (-1)^{k_2} h_\alpha(\mathcal{F}_{v \to v'}^c) \right)
\]
where $k=\frac{|E(P_{x \to v})|+|E(P_{v' \to y})|}{2}-1$.
Note that $\overline{ h_\alpha(\mathcal{F}_{v \to v'})}\, h_\alpha(\mathcal{F}_{v \to v'}^c) = h_\alpha(C)$; therefore,
\[
(H_\alpha^{-1})_{xy}= (-1)^{\frac{|E(P_{x \to y})|-1}{2}} \frac{h_\alpha(P_{x \to v}) h_\alpha(P_{y \to v'}) }{h_\alpha(\mathcal{F}_{v \to v'})} \left[ (-1)^{m+1} h_\alpha(C)+1 \right]
\]
\end{proof}
\begin{theorem}\label{NH}
Let $X$ be a unicyclic bipartite mixed graph with unique perfect matching and $H_\gamma$ be its $\gamma$-hermitian adjacency matrix. If $X$ has exactly two pegs, then $H_\gamma^{-1}$ is not $\pm 1$ diagonally similar to $\gamma$-hermitian adjacency matrix of a mixed graph.
\end{theorem}
\begin{proof}
Let $xx'$ and $yy'$ be the two pegs of $X$, where $x'$ and $y'$ are vertices of the cycle $C$ of $X$. Then, using Theorem \ref{two pegs}, we have
\[(H_\gamma^{-1})_{xy}= (-1)^{\frac{|E(P_{x \to y})|-1}{2}} \frac{h_\gamma(P_{x \to x'}) h_\gamma(P_{y \to y'}) }{h_\gamma(\mathcal{F}_{x' \to y'})} \left[ (-1)^{m+1} h_\gamma(C)+1 \right]
\]
where $\mathcal{F}_{x' \to y'}$ is chosen to be the part of the path $P_{x \to y}$ in the cycle $C$ and $C$ is of size $2m$. Suppose that $D=\operatorname{diag}\{d_v:v\in V(X)\}$ is a $\{\pm 1\}$ diagonal matrix with the property that $DH_\gamma^{-1}D$ is the $\gamma$-hermitian adjacency matrix of a mixed graph.
\begin{itemize}
\item Case 1: Suppose $m$ is even, say $m=2r$.\\
Observe that $(-1)^{m+1}h_\gamma(C)+1=1-h_\gamma(C)$. If $h_\gamma(C) \in \{\gamma, \gamma^2\}$, then $1-h_\gamma(C) \notin \{\pm 1, \pm \gamma, \pm \gamma^2\}$, and so $H_\gamma^{-1}$ is not $\pm 1$ diagonally similar to the $\gamma$-hermitian adjacency matrix of a mixed graph. Thus we only need to discuss the case when $h_\gamma(C)=1$. To this end, suppose that $h_\gamma(C)=1$. Then $(H_\gamma^{-1})_{xy}=0$. Since the length of $C$ is $4r$, the number of unmatched edges (resp. matched edges) in $C$ is $\frac{4r+2}{2}$ (resp. $\frac{4r-2}{2}$). Since the number of unmatched edges in $C$ is odd, there is a co-augmenting path $\mathcal{F}_{x \to y}$ from $x$ to $y$ that contains an odd number of unmatched edges and another co-augmenting path $\mathcal{F}^c_{x \to y}$ with an even number of unmatched edges. Now, let $o'o$ (resp. $e'e$) be any matched edge in the path $\mathcal{F}_{x \to y}$ (resp. $\mathcal{F}^c_{x \to y}$). Then, without loss of generality, we may assume that there is a co-augmenting path between $x$ and $e$, and between $x$ and $o$ (and hence there is a co-augmenting path between $y$ and $o'$, and between $y$ and $e'$). Now, if $d_xd_y=1$ then
\begin{itemize}
\item $(DH_\gamma^{-1}D)_{xo}=(-1)^kd_xh_\gamma(P_{x \to o})d_o$
\item $(DH_\gamma^{-1}D)_{yo'}=(-1)^{k'}d_yh_\gamma(P_{y \to o'})d_{o'}$
\end{itemize}
Since $k+k'$ is an odd number, we have $d_od_{o'}=-1$. This contradicts the fact that for every matched edge $gg'$, $d_gd_{g'}=1$.\\
The case when $d_xd_y=-1$ is similar to the above, but with the path $\mathcal{F}^c_{x \to y}$ considered instead of $\mathcal{F}_{x \to y}$ and the vertex $e$ instead of $o$.
\item Case 2: Suppose $m$ is odd, say $m=2r+1$. Then
\[
(H_\gamma^{-1})_{xy}= (-1)^{\frac{|E(P_{x \to y})|-1}{2}} \frac{h_\gamma(P_{x \to v}) h_\gamma(P_{y \to v'}) }{h_\gamma(\mathcal{F}_{v \to v'})} \left[ h_\gamma(C)+1 \right] .
\]
Therefore,
\[(H_\gamma^{-1})_{xy}= (-1)^{\frac{|E(P_{x \to y})|-1}{2}} \frac{h_\gamma(P_{x \to v}) h_\gamma(P_{y \to v'}) }{h_\gamma(\mathcal{F}_{v \to v'})}\left\{
\begin{array}{ll}
-\gamma & \text{if } h_\gamma(C)=\gamma^2 \\
-\gamma^2 & \text{if } h_\gamma(C)=\gamma\\
2 & \text{if } h_\gamma(C)=1
\end{array}
\right.
\]
Obviously, when $h_\gamma(C)=1$, $H_\gamma^{-1}$ is not $\pm 1$ diagonally similar to the $\gamma$-hermitian adjacency matrix of a mixed graph. Thus, the cases we need to discuss here are when $h_\gamma(C)=\gamma$ and $h_\gamma(C)=\gamma^2$.\\
Since $m$ is odd, $C$ contains an even number of unmatched edges. Therefore, either both paths between $x$ and $y$, $\mathcal{F}_{x\to y}$ and $\mathcal{F}_{x\to y}^c$, contain an odd number of unmatched edges or both of them contain an even number of unmatched edges. \\
To this end, suppose that both of the paths $\mathcal{F}_{x\to y}$ and $\mathcal{F}_{x\to y}^c$ contain an odd number of unmatched edges. Then $(H_\gamma^{-1})_{xy}\in \{\gamma^i\}_{i=0}^2$, which means $d_xd_y=1$.
Finally, let $v'v$ be any matched edge in $\mathcal{F}_{x\to y}$ such that $P_{x \to v}$ and $P_{v' \to y}$ are co-augmenting paths; then obviously $d_vd_{v'}=1$. But one of the co-augmenting paths $P_{x \to v}$ and $P_{v' \to y}$ must contain an odd number of unmatched edges and the other an even number, which means $d_xd_vd_{v'}d_y=-1$. This contradicts the fact that $d_vd_{v'}=1$.\\
In the other case, when both $\mathcal{F}_{x\to y}$ and $\mathcal{F}_{x\to y}^c$ contain an even number of unmatched edges, one can easily deduce that $d_xd_y=-1$, and the same technique yields another contradiction.
\end{itemize}
\end{proof}
Note that Corollary \ref{p1} and Theorem \ref{NH} give a full characterization of the unicyclic bipartite mixed graphs with unique perfect matching for which the inverse of the $\gamma$-hermitian adjacency matrix is $\{\pm 1\}$ diagonally similar to the $\gamma$-hermitian adjacency matrix of a mixed graph. We summarize this characterization in the following theorem.
\begin{theorem}
Let $X$ be a unicyclic bipartite mixed graph with unique perfect matching and $H_\gamma$ its $\gamma$-hermitian adjacency matrix. Then, $H_\gamma^{-1}$ is $\pm 1$ diagonally similar to the $\gamma$-hermitian adjacency matrix of a mixed graph if and only if $X$ has more than two pegs and the cycle of $X$ contains an even number of unmatched edges.
\end{theorem}
\section*{Acknowledgment}
The authors wish to acknowledge the support by the Deanship of Scientific Research at German Jordanian University.
|
{
"timestamp": "2022-05-17T02:09:06",
"yymm": "2205",
"arxiv_id": "2205.07010",
"language": "en",
"url": "https://arxiv.org/abs/2205.07010"
}
|
\section{Introduction}
CT-based lung airway analysis is clinically important as it provides valuable quantitative information to assist lung disease diagnosis and surgical navigation \cite{ref1,qin2019airwaynet}. Bronchial tree reconstruction is the basis of quantitative lung airway analysis and usually comprises two steps. The first step is to extract the whole airway tree mask from the original CT imaging. The second step is to label major anatomical branches based on bronchus classification. Automatic bronchial tree reconstruction can further assist clinical processes such as individual airway tree phenotype matching and lung lobe or lung segment classification \cite{ref6,ref7}. Two main obstacles remain: (1) the extreme imbalance between foreground and background in bronchus segmentation; (2) the neglect of inherent topology and prior knowledge in bronchus segment classification.
Based on the above observations, we propose BronchusNet, a region and structure prior embedded framework to effectively segment and classify the bronchus in CT images. For the segmentation task, we design an Adaptive Hard Region-aware UNet (AHR-UNet) to accurately segment the bronchus from the background. The AHR-UNet first uses a prediction-screening-based method to discover hard regions, then highlights the hard regions with max-pooling in a coarse-to-fine manner. After that, we follow the inherent topology of the bronchial tree and design a neural network based on a Point-Voxel Graph Representation (PVGR) to classify the branches. The idea underlying PVGR is to combine the position information represented by point clouds with the local higher-dimensional convolution features from an additional mask-labeling task. Considering the prior knowledge that adjacent segments tend to belong to the same category, a Neighborhood Consistency Regularization is proposed to boost the performance. For evaluation, we manually annotate the airway branch labels of 100 CT scans collected from public datasets and our cooperating hospital.
The contributions of this work can be summarized as follows: (1) we design a region and structure prior embedded representation learning framework to segment and classify the bronchus from lung CT imaging; (2) we propose an Adaptive Hard Region-aware UNet to overcome the extreme imbalance of foreground and background pixel samples during bronchus segmentation training; (3) we contribute a benchmark named BRSC, which contains 100 bronchial cases with accurate pixel-level segmentation masks and anatomical categories. Extensive experimental results on the proposed benchmark show that BronchusNet significantly exceeds the state-of-the-art methods.
\section{Related Work}
For bronchus segmentation, single-stage networks like U-Net \cite{ronneberger2015u}, 2.5D net \cite{2p5dnetforairway}, 3D U-Net \cite{charbonnier2017improving,isensee2021nnunet,qin2019airwaynet,qin2021tscnn} have been employed,
but they often rely on
laborious pre-/post-processing.
Two-stage approaches have shown promising results.
Zhao \textit{et al.} \cite{bronchuslp-zhao-2019} used a two-stage 2D+3D U-Net to segment thick and thin bronchus.
Qin \textit{et al.} \cite{qin2020airwaynet} trained an extra model to predict the bronchus connectivity.
These methods introduce additional strategies to enhance the segmentation of indistinguishable regions but cannot achieve end-to-end joint training. Embedding a hard-sample mining module into the network design instead enables fine segmentation of the bronchial boundary with fewer parameters.
Bronchus classification remains a challenging task due to the varied topology of bronchial trees.
Wang \textit{et al.} \cite{ref7} achieved lobar-level bronchus classification based on keypoint detection. Among deep learning based methods, Zhao \textit{et al.} \cite{bronchuslp-zhao-2019}
applied linear programming to post-process the airway structure predicted by neural models.
Nadeem \cite{ref6} proposed a two-stage neural network that labels the lobar-level and segment-level bronchi, respectively.
Recently, graph-based methods \cite{garcia2019joint,selvan2020graph,zhao2021airway,tan2021sgnet} have been studied for the task. Juarez \textit{et al.} \cite{garcia2019joint} replaced the deepest convolutional layer in a U-Net with
graph convolutions to improve the airway binary segmentation.
However, the previous approaches either require hand-crafted features \cite{ref7,bronchuslp-zhao-2019,ref6} or need additional annotation \cite{tan2021sgnet,garcia2019joint}, and they are not able to effectively handle individual variance. Thus, there is a need for an efficient and effective framework that can embed both voxel-wise features and point-cloud topology into learnable representations for bronchus classification.
\section{Method}
BronchusNet contains three stages, as shown in Fig.~\ref{pipeline}. In the first stage, we segment the airways of an input lung CT scan with our Adaptive Hard Region-aware UNet to obtain a 3D binary mask. Secondly, we use a UNet to label the mask and harvest voxel-wise features for the bronchus. Thirdly, we refine the bronchus classification results based on a Hybrid Point-Voxel Graph.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\textwidth]{fig1.pdf}
\end{center}
\caption{Overview of the airway segmentation and classification framework.} \label{pipeline}
\end{figure}
\subsection{Adaptive Hard Region-aware UNet}
Considering that the bronchial voxels are sparse and scattered, we propose to locate multi-scale hard regions as prior knowledge to guide the representation learning of the bronchial region, and develop an Adaptive Hard Region-aware UNet (AHR-UNet).
We first use Otsu's method \cite{otsu1979threshold} to segment the main trachea from the CT imaging; the hard region $y_{hr}$ is the set of voxels that appear in the ground truth but not in the main trachea. After that, to emphasize the voxels lying at the ends of the bronchus, we use max-pooling to dilate the area of the hard region $y_{hr}$.
As the red arrows in Fig.~\ref{pipeline} show, max-pooling is applied to the hard region multiple times to synthesize multi-scale supervisions for the decoder of the UNet.
Formally, the hard region aware loss $L_{hr}^h$ with respect to the $h$-th layer of the decoder is defined as:
\begin{equation}
L_{hr}^h = L_{dice}(pred_{hr}^h, I^h(y_{hr})),
\end{equation}
where $pred_{hr}^h$ denotes the predicted segmentation of the hard region at the $h$-th (from left to right) layer of the decoder. $I^h(\cdot)$ denotes the inflation function that max-pools the ground truth with stride 2 for $h$ times according to the index of the layer. $L_{dice}$ is the Dice loss function. With the hard region aware loss, the final loss function to segment the bronchus is defined as:
\begin{equation}
L_{seg} = \sum_{h=1}^{H}L^h_{hr}+L_{dice},
\end{equation}
where $H$ is the number of layers of the decoder. $L_{dice}$ is used to supervise the segmentation of the whole airway.
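As a concrete illustration, the multi-scale supervision above can be sketched in NumPy. This is a minimal sketch, not the authors' implementation: the array shapes, the non-overlapping stride-2 pooling, and the helper names are assumptions for illustration.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P.T| / (|P| + |T|)."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def max_pool3d(x, stride=2):
    """Stride-2 max-pooling used as the inflation I(.) of the hard region."""
    d, h, w = (s // stride for s in x.shape)
    x = x[:d * stride, :h * stride, :w * stride]
    return x.reshape(d, stride, h, stride, w, stride).max(axis=(1, 3, 5))

def hard_region_loss(preds, y_hr):
    """Sum of Dice losses between the h-th decoder prediction and the hard
    region max-pooled h times (deeper layers get coarser targets)."""
    total, target = 0.0, y_hr
    for pred_h in preds:                 # h = 1 .. H
        target = max_pool3d(target)      # I^h(y_hr)
        total += dice_loss(pred_h, target)
    return total
```

The overall $L_{seg}$ would then add one more Dice term on the full-resolution airway prediction.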
\subsection{Hybrid Point-Voxel Graph based Representation Learning}
We further propose a point-voxel graph neural network to classify the bronchus more accurately. The motivation underlying our framework is that the relative positional information represented by the point cloud helps to overcome individual variance during classification. Meanwhile, the high-dimensional voxel feature is a strong supplement for bronchus classification, as it captures properties such as the diameter and direction of the bronchus.
\subsubsection{A. Construction of Bronchial Graph}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{pcg.pdf}
\end{center}
\caption{Workflow for bronchial tree construction.} \label{btc}
\end{figure}
\noindent \textbf{(1) Definition of Node and Edge in the Graph.} Based on the segmentation mask of the bronchus from Stage 1, we construct a Hybrid Point-Voxel Graph with the following steps, as shown in Fig.~\ref{btc}. Firstly, we skeletonize the mask by extracting the centerline (see Fig.~\ref{btc}(b)). Secondly, according to the number $N$ of foreground voxels in the 26-connected neighborhood of each voxel on the centerline, we define end-points ($N=1$), edge-points ($N=2$), and division-points ($N \geq 3$), as shown in Fig.~\ref{btc}(c). Finally, as Fig.~\ref{btc}(d) shows, we divide the branches into segments (i.e., nodes in the graph) based on these points.
An edge of the graph is defined by the connectivity between line segments, which are separated by a division point.
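The 26-neighborhood rule for end-, edge-, and division-points can be sketched as follows; the function name and the roll-based neighbor counting are illustrative assumptions, not the authors' code.

```python
import numpy as np

def classify_skeleton_points(skel):
    """Label each centerline voxel by its number N of 26-connected
    foreground neighbours: end-point (N=1), edge-point (N=2),
    division-point (N>=3)."""
    skel = np.pad(skel.astype(np.int32), 1)  # zero border stops wrap-around
    # Sum of the 3x3x3 neighbourhood, excluding the voxel itself.
    neigh = sum(
        np.roll(skel, (dz, dy, dx), axis=(0, 1, 2))
        for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dz, dy, dx) != (0, 0, 0)
    )
    labels = {}
    for z, y, x in zip(*np.nonzero(skel)):
        n = neigh[z, y, x]
        kind = ("end" if n == 1 else "edge" if n == 2
                else "division" if n >= 3 else "isolated")
        labels[(z - 1, y - 1, x - 1)] = kind  # undo the padding offset
    return labels
```

Splitting the centerline at the division-points then yields the graph nodes, with edges given by which segments meet at a division-point.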
\noindent \textbf{(2) Point-wise Coordinate Feature.} To obtain the point-wise coordinate feature, we need to generate the point cloud from the bronchus mask. Based on the bronchus tree shown in Fig.~\ref{btc}(d), we crop a bounding box around each bronchial segment (i.e., node in the graph) of the bronchus tree. Each segment is composed of centerline voxels. The coordinates (i.e., $X$, $Y$, and $Z$) of each voxel are then normalized to $[0, 1]$ with respect to the shape of the bounding box. Since the number of voxels varies, we sub-sample $K$ voxels on each branch. In this way, we obtain a point cloud feature of length $3 \times K$ containing the three-dimensional coordinates of $K$ voxels. $K$ is set to 10 empirically.
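A minimal sketch of this normalization and sub-sampling step (the evenly spaced sub-sampling and function name are assumptions; the paper does not specify the sampling scheme):

```python
import numpy as np

def point_feature(voxels, K=10):
    """Normalize the centerline voxel coordinates of one bronchial segment
    to [0, 1] within its bounding box and sub-sample K points, yielding a
    feature vector of length 3*K."""
    pts = np.asarray(voxels, dtype=np.float64)      # (N, 3)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)          # avoid divide-by-zero
    pts = (pts - lo) / span
    # Evenly spaced sub-sampling along the ordered centerline (assumed).
    idx = np.linspace(0, len(pts) - 1, K).round().astype(int)
    return pts[idx].ravel()                         # (3*K,)
```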
\noindent \textbf{(3) Voxel-wise Convolution Feature.}
To obtain voxel features, we train a UNet (Stage 2 in Fig.~\ref{pipeline}) to predict the category of each bronchus branch. A 3D feature map is produced by the penultimate layer of the UNet. We extract $K$ convolution features from the feature map according to the coordinates of the $K$ voxels of each bronchial fragment, and aggregate them into a vector of length $C \times K$, where $C$ is the number of channels of the feature map ($C=24$ by default).
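The gather step amounts to fancy indexing into the feature map; the `(C, D, H, W)` layout and function name below are assumptions for illustration.

```python
import numpy as np

def voxel_feature(feat_map, coords):
    """Gather the C-dim convolution feature at each of the K centerline
    voxels from a (C, D, H, W) feature map; returns a (C*K,) vector."""
    c = np.asarray(coords)                      # (K, 3) integer voxel coords
    gathered = feat_map[:, c[:, 0], c[:, 1], c[:, 2]]   # (C, K)
    return gathered.ravel()                     # concatenated to length C*K
```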
\subsubsection{B. Point-Voxel Graph Neural Network}
Given the above-defined graph that takes both point cloud features and high-dimensional voxel features into account, we design a Point-Voxel Graph Neural Network (PV-GNN) to predict the category of each bronchial segment. The PV-GNN consists of Conv-Norm Blocks and a fully connected layer. The first part of the Conv-Norm Block is Mean Sage-Convolution (MSC) \cite{msc}, which uses a mean aggregation function to aggregate information from node neighbors to overcome the inductive bias. Considering that the adjacent nodes in the topology of the bronchial tree are relatively sparse, we build a deep GNN for better information integration over the point clouds. As a GNN risks gradient vanishing as it goes deeper, we introduce Graph Normalization (GN) \cite{graphnorm} to shift and scale feature values, which makes graph neural networks converge much faster. Except for the first block, each block is equipped with an element-wise addition that acts as a residual connection. Let $H^k$ be the output of the $k$-th block and $\sigma$ denote the ReLU operation; the block is defined as:
\begin{equation}
H^k = \sigma({\rm{GN}}({\rm{MSC}}(H^{k-1}))) + H^{k-1}.
\end{equation}
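The block equation above can be sketched in plain NumPy; this is an illustrative forward pass under assumptions (dense adjacency, a single weight matrix for MSC, and GraphNorm reduced to per-feature standardization without its learnable scale/shift), not the trained PV-GNN.

```python
import numpy as np

def conv_norm_block(H_prev, A, W, eps=1e-5):
    """One Conv-Norm block: H^k = ReLU(GN(MSC(H^{k-1}))) + H^{k-1}.
    H_prev: (N, F) node features; A: (N, N) adjacency; W: (2F, F) weights."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    neigh_mean = (A @ H_prev) / deg                         # mean aggregation
    msg = np.concatenate([H_prev, neigh_mean], axis=1) @ W  # MSC (mean SAGE)
    # Graph normalization: standardize each feature over the graph's nodes
    # (learnable scale/shift of GraphNorm omitted in this sketch).
    msg = (msg - msg.mean(axis=0)) / (msg.std(axis=0) + eps)
    return np.maximum(msg, 0.0) + H_prev                    # ReLU + residual
```

Because the ReLU output is non-negative, the residual form guarantees the block never decreases any input feature value.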
\subsubsection{C. Cross-entropy with Neighborhood Consistency Regularization}
Considering the topology of the bronchial tree, in which adjacent branches tend to belong to the same category, we design a novel Neighborhood Consistency Regularization (NCR) to penalize local spatial variations and force nearby nodes belonging to the same category to be closer in the latent space. Let $Y = \{y_1, y_2, ..., y_N\}$ be the set that contains the one-hot ground-truth vector of each branch and $Z = \{z_1, z_2, ..., z_N\}$ be the set that contains the prediction vector of each branch; the NCR loss is formulated as:
\begin{equation}
L_{NCR} = \frac{\sum_{i=1}^N \sum_{j \in V_i} ||z_i - z_j||\, \mathbb I (y_i = y_j)}{M},
\end{equation}
where $V_i$ is the set of neighbors of node $i$ and $j$ indexes the nodes in this set; $z_i$ denotes the logit vector of node $i$ from the output of the last fully-connected layer; $\mathbb I(\cdot)$ is an indicator function that returns 1 when the condition is met and 0 otherwise; and $M, N$ are the numbers of edges and nodes in the graph, respectively. Let $\alpha$ be a scalar to balance the regularization and the CE loss ($\alpha$ is set to 1 empirically). The overall loss function contains the above NCR and a vanilla Cross-Entropy loss, and can be formulated as:
\begin{equation}
L = L_{CE} + \alpha L_{NCR}.
\end{equation}
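A minimal sketch of the NCR term, treating each undirected edge once and normalizing by the edge count $M$ as in the formula above (the edge-list representation is an assumption):

```python
import numpy as np

def ncr_loss(Z, Y, edges):
    """Neighborhood Consistency Regularization: mean L2 distance between
    prediction vectors of adjacent nodes that share a ground-truth label.
    Z: (N, C) logits; Y: (N,) integer labels; edges: list of (i, j) pairs."""
    if not edges:
        return 0.0
    total = sum(np.linalg.norm(Z[i] - Z[j])
                for i, j in edges if Y[i] == Y[j])  # indicator I(y_i = y_j)
    return total / len(edges)                       # normalize by M edges
```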
\section{BRSC: A New Benchmark for Bronchus Segmentation and Classification}
We contribute a new benchmark, BRSC, for bronchus segmentation and classification. BRSC contains 100 cases of lung CT images. We collect 60 cases from the currently available databases EXACT'09 \cite{EXACT} and LIDC \cite{lidc}. The remaining 40 cases are collected from our cooperative hospital, with the appropriate approvals from the institutional ethics committee. The BRSC benchmark is annotated by two experts with a two-step annotation process. The experts first annotate the airway segmentation and then label 18 segmental bronchi at the pixel level. We then mix the data from the different sources and split the dataset into a training set of 70 cases and a test set of 30 cases. To evaluate the algorithms, we perform five-fold cross-validation on the training set by randomly selecting 80\% of the data (i.e., 56 cases) for model training and the remaining 20\% (i.e., 14 cases) for validation.
\section{Experiments and Results}
\subsubsection{Implementation and Evaluation Details}
We use PyTorch 1.10 to build the model, and all models are trained with NVIDIA V100 GPU of 32GB.
For bronchus segmentation, we cut the CT imaging into overlapping cubes of shape $80\times 80\times 80$ for training. During inference, we crop a $64\times 64\times 64$ cube from the center of the CT imaging, with $16\times 16\times 16$ overlap between adjacent cubes to avoid obscure boundary predictions. We train the model for 50 epochs with the SGD optimizer and a learning rate of 0.001. The batch size is set to 16. We use the Dice score to evaluate the segmentation.
For bronchus classification, we augment the training data 99 times by applying random affine transforms and elastic deformations. We use DropEdge \cite{dropedge} during model training to avoid over-fitting. The Adam optimizer is applied to train the model with a learning rate of 0.001 for 500 epochs, while the batch size is set to 128. The number of layers and the hidden dimension of PV-GNN are set to 5 and 256, respectively.
Following~\cite{bronchuslp-zhao-2019}, we evaluate the classification with the accuracy, precision, recall, and F1-score.
\begin{table}[!t]
\caption{Comparison of bronchus segmentation and classification methods. The best results are shown in \textbf{bold}.}\label{tab1}
\centering
\begin{tabular}{@{}lc||ccccc@{}}
\toprule
Segmentation & Dice-score & Classification & Accuracy & Precision & Recall & F1-score \\ \midrule
LP \cite{bronchuslp-zhao-2019} & $0.874_{\pm0.02}$ & LP \cite{bronchuslp-zhao-2019} & $0.818_{\pm0.01}$ & $0.747_{\pm0.01}$ & $0.792_{\pm0.02}$ & $0.770_{\pm0.02}$ \\
TS-CNN \cite{zhao2021airway} & $0.883_{\pm0.02}$ & TS-NN \cite{ref6} & $0.768_{\pm0.01}$ & $0.778_{\pm0.01}$ & $0.749_{\pm0.01}$ & $0.762_{\pm0.01}$ \\
SGNet \cite{tan2021sgnet} & $0.847_{\pm0.02}$ & SGNet \cite{tan2021sgnet} & $0.856_{\pm0.01}$ & $0.850_{\pm0.01}$ & $0.844_{\pm0.01}$ & $0.847_{\pm0.01}$ \\
nn-UNet \cite{isensee2021nnunet} & $0.865_{\pm0.01}$ & - & - & - & - & - \\
BronchusNet & $\bm{0.912_{\pm0.01}}$ & BronchusNet & $\bm{0.924_{\pm0.01}}$ & $\bm{0.923_{\pm0.01}}$ & $\bm{0.919_{\pm0.01}}$ & $\bm{0.921_{\pm0.01}}$ \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\textwidth]{visualization.pdf}
\end{center}
\caption{Qualitative analysis on the BRSC benchmark. The misclassified bronchus is bounded by a red box. In the above cases, TS-NN \cite{ref6} could suffer from errors related to branching variability. LP \cite{bronchuslp-zhao-2019} fails to distinguish segments with similar angles. SGNet \cite{tan2021sgnet} misclassifies the thin bronchus, while BronchusNet shows robust results.}
\label{visualization}
\end{figure}
\subsubsection{Comparison with the State-of-the-art}
The comparison results for segmentation are shown in the left part of Table~\ref{tab1}. Our AHR-UNet significantly outperforms other bronchus segmentation models by taking the hard region prior into account. Specifically, we exceed the previous state-of-the-art TS-CNN by 2.9\% w.r.t.\ Dice score. The comparison results for bronchial classification are shown in the right part of Table~\ref{tab1}. Our model significantly exceeds the previous state-of-the-art SGNet by 7.5\% on accuracy and 7.7\% on F1-score, showing that the hybrid point-voxel graph representation is highly effective.
The qualitative analysis is shown in Fig.~\ref{visualization}. TS-NN \cite{ref6} under-performs as it only focuses on local features and neglects the global topology of the airway. The hand-crafted-feature-based LP \cite{bronchuslp-zhao-2019} fails on segments that have similar angles. SGNet \cite{tan2021sgnet} makes mistakes due to the severe class imbalance. Thanks to the affiliated hard region prior, our BronchusNet is robust in classifying the thin bronchus.
\subsubsection{Ablation study}
The ablation study is shown in Table~\ref{tab:abla}. For segmentation, the UNet with the tailor-designed training strategies shows a competitive result. By taking the hard regions into account, our AHR-UNet significantly exceeds the UNet by 3.2\% Dice score. For classification, ``CNN'' only uses the UNet to label the binary mask, which underperforms as it provides no structure prior. ``GNN-P'' uses the point cloud graph representation to classify the bronchus, which achieves a substantial performance improvement (i.e., 6.3\%) over ``CNN''. ``GNN-PV'' additionally takes the voxel-wise convolutional features into account, which capture the local texture and diameter information, thus significantly exceeding ``GNN-P'' by 3.8\%. ``GNN-PVN'' embeds the neighborhood consistency regularization into the framework, which brings a performance gain of 1.1\%. This result shows that we can better classify the bronchus by using the prior knowledge that adjacent segments tend to belong to the same category.
\begin{table}[]
\centering \caption{Ablation study of the bronchus's segmentation and classification.}
\begin{tabular}{@{}cc||ccccc@{}}
\toprule
Segmentation & Dice score & Classification & Point Feature & Voxel Feature & NCR & Accuracy \\ \midrule
UNet & $0.879_{\pm0.01}$ & CNN & & & & $0.819_{\pm0.01}$ \\
AHR-UNet & $0.912_{\pm0.01}$ & GNN-P & \checkmark & & & $0.882_{\pm0.02}$ \\
- & - & GNN-PV & \checkmark & \checkmark & & $0.918_{\pm0.01}$ \\
- & - & GNN-PVN & \checkmark & \checkmark & \checkmark & $0.924_{\pm0.01}$ \\ \bottomrule
\end{tabular}
\label{tab:abla}
\end{table}
\section{Conclusion}
In this paper, we present BronchusNet, a region and structure prior embedded framework for bronchus segmentation and classification. With the tailor-designed adaptive hard region-aware network, the feature representation learning obtains a much more accurate bronchus segmentation result. Based on the hybrid point-voxel graph based representation learning, we are able to effectively overcome the individual variance in bronchus segment classification. Additionally, a novel neighborhood consistency based regularization is proposed to boost the performance. We contribute the BRSC benchmark, which contains 100 CT scans with pixel-wise masks and segmental-level labels, to facilitate future research. The experimental results on the BRSC benchmark show that our proposed method significantly outperforms the state-of-the-art methods.
\bibliographystyle{splncs04}
\section{Details of the Contributed Dataset}
\begin{table}[]
\centering
\caption{\textbf{Sensitivity analysis on the hyper-parameter $\alpha$ of the neighborhood consistency regularization (NCR)}. The NCR term is weighted by $\alpha$ to make a trade-off between the cross-entropy loss and the NCR, as shown in Eq (5) in the paper.
When $\alpha=0$, the result is the same as the baseline. As $\alpha$ becomes larger, the accuracy of our model first increases then decreases. Setting $\alpha$ to 1 shows the highest accuracy.}
\begin{tabular}{@{}cccccc@{}}
\toprule
$\alpha$ & 0 & 0.5 & 1 & 1.5 & 2 \\ \midrule
Accuracy & $92.0_{\pm0.01}$ & $92.8_{\pm0.01}$ & $93.1_{\pm0.01}$ & $92.5_{\pm0.01}$ & $92.1_{\pm0.01}$ \\ \bottomrule
\end{tabular}
\label{tab1}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=1.\textwidth]{vis.pdf}
\end{center}
\caption{Visualization results of the proposed method and other state-of-the-art methods. The misclassified bronchus is bounded by a red-line box. As we can see, the proposed BronchusNet exceeds other methods and shows more spatially coherent results in most cases.} \label{pipeline}
\end{figure}
\bibliographystyle{splncs04}
\end{document}
|
{
"timestamp": "2022-05-25T02:07:16",
"yymm": "2205",
"arxiv_id": "2205.06947",
"language": "en",
"url": "https://arxiv.org/abs/2205.06947"
}
|
\section{Related Work}
\label{sec:related}
\subsection{Visualization for Machine Learning}
Visualization has helped ML practitioners perform a variety of analytics tasks such as: exploring datasets, analyzing performance results, interpreting and explaining model internals, building models, monitoring training progress, and debugging models~\cite{hohman2018visual, yuan2021survey}.
Many existing visualization tools for ML support the tasks of analyzing performance results and exploring datasets at multiple levels of abstraction, ranging from individual instances to entire classes.
While ML practitioners often only use summary metrics (e.g., accuracy) or class-level statistics, visualization researchers have argued for the importance of instance-level analysis.
Early works include ModelTracker~\cite{amershi2015modeltracker}, Squares~\cite{ren2016squares}, and Facets-Dive~\cite{facets, wexler2019if}.
These tools represent each instance as a small square using the \textit{unit visualization} technique~\cite{park2017atom}, enabling users to see individual instances in the context of aggregated information.
This can work particularly well for image datasets as each square can be replaced with a thumbnail of the actual image content.
While instance-level analysis provides detailed low-level information, the scale of modern datasets urges researchers to develop ways to slice and filter datasets, resulting in subgroup-level analysis~\cite{kahng2017activis, hohman2018visual,he2021can}.
This allows users to specify data subsets based on attributes and perform more fine-grained analysis than at the class-level.
However, image data poses a fundamental challenge for such analysis because there are often no annotations or attributes beyond class labels.
Therefore, group structures are often created with algorithmic approaches. A common approach is to use a DR technique like t-SNE~\cite{maaten2008tsne} or UMAP~\cite{mcinnes2018umap}, which are often applied to high-dimensional representations obtained from neuron activations~\cite{rauber2016visualizing}.
In this paper, we propose an alternative approach to visualizing relationships between images by using a hierarchical clustering algorithm.
We recognize that dataset analysis is an increasingly important topic to address.
ML researchers have stressed the importance of data in deep learning by coining terms like Data-Centric AI and MLOps~\cite{ng2021chat}.
Our work aligns with this trend to support data exploration for ensuring that datasets are less biased, more fair and inclusive, and contain fewer errors.
A recently developed tool named Know Your Data~\cite{kyd} aligns with this goal;
however, its focus is on statistics based on many attributes obtained from external APIs (e.g., face recognition, object detection), while our work focuses on making sense of raw image datasets by relying on human perception.
\subsection{Image Browsing}
Zah\'alka and Worring~\cite{zahalka2014towards} presented a comprehensive overview of multimedia visualization methods (primarily of images) in their survey.
They categorized existing techniques into five types:
basic grid,
similarity space,
similarity-based,
spreadsheet,
and
thread-based.
The three methods commonly used by ML practitioners described in Sect. 1 and \autoref{fig:summary} (i.e., random grid, t-SNE, and a grid version of t-SNE) belong to the ``basic grid,'' ``similarity space,'' and ``similarity-based'' categories, respectively.
Our proposed treemap-based method can also be placed in the ``similarity-based'' category.
The idea of using treemaps for image browsing was proposed in 2001 by PhotoMesa~\cite{bederson2001photomesa}.
PhotoMesa proposed two variations of the treemap algorithms: ordered and quantum treemaps.
The ordered treemap ensures that the order of images in each treemap block matches the order in the file structure (e.g., by timestamp), and the quantum treemap ensures that the widths and heights of the generated rectangles are integer multiples of a given elemental size.
Unlike their data, ML datasets have different properties:
each dataset has a set of classes, but the images within each class have no order.
Because there is no existing hierarchical structure, we extract one using agglomerative clustering algorithms and adapt the slice-dice treemap algorithm~\cite{shneiderman1992treemap}.
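The slice-dice layout referenced above is simple to sketch; the nested-dict tree representation and function name below are assumptions for illustration, and the hierarchy would come from the agglomerative clustering step described in the text.

```python
def slice_and_dice(node, x, y, w, h, depth=0):
    """Slice-and-dice treemap layout (Shneiderman 1992): alternate the split
    direction per level. A node is a leaf dict {"name": ..., "size": s} or an
    inner dict {"name": ..., "children": [...]}; returns {name: (x, y, w, h)}."""
    def size(n):
        return n["size"] if "size" in n else sum(size(c) for c in n["children"])

    rects = {node["name"]: (x, y, w, h)}
    if "children" in node:
        total, offset = size(node), 0.0
        for child in node["children"]:
            frac = size(child) / total
            if depth % 2 == 0:   # slice: divide the width at even depths
                rects.update(slice_and_dice(child, x + offset * w, y,
                                            w * frac, h, depth + 1))
            else:                # dice: divide the height at odd depths
                rects.update(slice_and_dice(child, x, y + offset * h,
                                            w, h * frac, depth + 1))
            offset += frac
    return rects
```

Each leaf rectangle would then be filled with the corresponding image thumbnail, with cluster membership visible as nested blocks.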
An important task in analyzing images or multimedia data is categorizing or exploratory searching.
The key difference from tabular datasets is that image datasets are not annotated with structured attributes; images are unstructured.
Many common data operations like filtering, grouping, and sorting cannot be directly applied.
If we consider low-level tasks by Amar et al.~\cite{amar2005low}, only a few of the 10 tasks can be applied to images~\cite{zahalka2014towards}.
Thus, an important challenge in interactive visualization of image data is automatic extraction of semantic information, interactive exploration of categories, or both~\cite{van2016iclic,zahalka2020ii,xie2018semantic}.
\subsection{Similarity-based Visualization Methods}
As discussed in the previous subsection, our proposed work can be considered a similarity-based approach.
We briefly describe both the similarity-space and similarity-based approaches in the ML context.
The t-SNE algorithm is probably the most popular among ML researchers. They often use it to visualize cluster structures learned by deep learning models~\cite{maaten2008tsne,rauber2016visualizing,wang2020understanding}.
While t-SNE often plots each data point as a small circle in a 2-D space,
the nature of images provides us with the opportunity to directly plot a small thumbnail instead of a dot.
This enables users to see the image contents without interacting with each circle mark (e.g., clicking, hovering).
For example, Embedding Projector~\cite{smilkov2016embedding} displays MNIST images in t-SNE plots.
However, as the number of images grows, images overlap, making it almost impossible to see them in high-density areas (see Fig.~\ref{fig:summary}B).
Researchers and practitioners have devised methods to address the issue of overlapping images.
The images can be rearranged in a grid either by
selecting a sample of the many images that fall in each grid cell or
by redistributing all images across the grid cells on screen using optimization algorithms~\cite{jv1987jv}.
Although we have not found research papers that gridify t-SNE or UMAP, several implementations exist~\cite{karpathy2014tsnegrid,tsnegrid,ml4atsne}, including one by Karpathy~\cite{karpathy2014tsnegrid}.
This type of gridifying algorithm has been used in several visual analytics tools for ML for image data~\cite{zhao2021human,chen2020oodanalyzer,wattenberg2016use}.
However, the relative distances among data instances in the projected space can only approximate their distances in the high-dimensional space~\cite{stahnke2015probing,chatzimparmpas2020t}.
Redistributing data points or images into a rectangular grid has also been studied in non-ML contexts, such as IsoMatch~\cite{fried2015isomatch} and rectangular packing~\cite{gomi2008cat}.
Overlap removal can also be made more intelligent by balancing full use of screen space against intentionally leaving some white space to reveal cluster structures~\cite{hilasaca2019overlap}.
\subsection{Hierarchical Exploration of Data}
Our work supports hierarchical exploration of datasets by extracting hierarchical structures using clustering algorithms, so
we provide a brief background about these algorithms here~\cite{murtagh1983survey}.
Unlike the $k$-means clustering algorithm, which
partitions data points into a fixed number of groups based on distances among data points,
hierarchical clustering algorithms iteratively divide the data space into smaller subspaces (i.e., divisive) or merge smaller groups into larger ones (i.e., agglomerative).
We use agglomerative algorithms to form a hierarchy (called a \textit{dendrogram}), since divisive algorithms do not produce high-quality results for high-dimensional data and are computationally expensive for large data.
The agglomerative ones align more closely with useful characteristics of t-SNE: focusing on similar pairs to find cluster structures.
Existing work on visualizing dendrograms includes
Hierarchical Clustering Explorer (HCE) ~\cite{seo2002interactively},
Stacked Trees which interactively merge parts of the dendrogram~\cite{bisson2012improving},
and Yang et al. for steering and revising the dendrograms~\cite{yang2020interactive}.
All of these used node-link diagrams to display dendrograms; however, such diagrams cannot easily be applied to image datasets, because they require all instances to be positioned along a single line, so images displayed in place of the dendrogram leaves would become very small.
A space-filling technique like treemaps can resolve this challenge.
Hierarchical data exploration has been studied extensively in text domains.
Text data is unstructured, so, as with images, automatic extraction of clusters is important.
HierarchicalTopics~\cite{dou2013hierarchicaltopics} extracts hierarchical structures of latent topics and enables users to explore and revise them.
TopicLens~\cite{kim2016topiclens} allows users to zoom into certain areas of projected two-dimensional spaces.
Marcilio et al. extract hierarchical structures from high-dimensional representations of deep learning data~\cite{marcilio2021explorertree},
and Duarte et al. represent data with treemap-style representations~\cite{duarte2014nmap}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.85\linewidth]{figures/treemapConstructionv2.pdf}
\vspace{-13pt}
\caption{To scalably visualize the dendrogram tree structure created from agglomerative clustering methods, users can dynamically specify the number of clusters to be rendered in \name{}. In this example, a portion of the dendrogram is rendered in the treemap view to show three image clusters. Increasing the number of clusters to be shown will result in creating more partitions across the treemap with smooth animations.}
\vspace{-5pt}
\label{fig:summaryTreemap}
\end{figure*}
\section{\name{} Construction and Interactions}
This section describes how a dendrogram can be constructed from an image dataset, how \name{} visualizes the dendrogram, and how supported interactions help achieve our design goals.
\subsection{Dendrogram Tree Construction}
\label{sec:dendrogram}
To create groups of images for hierarchical exploration, we use the well-known hierarchical agglomerative clustering algorithm~\cite{murtagh1983survey}.
Unlike flat clustering algorithms (e.g., k-means), hierarchical clustering algorithms create hierarchically nested clusters without requiring a parameter $k$ for the number of clusters.
Users can specify $k$ afterwards, whereas a flat clustering algorithm would need to recompute a new structure over the entire dataset for each user's new request for $k$.
High-dimensional representations of images are used as input to the clustering algorithm.
We used high-dimensional embeddings from one of the last fully-connected layers of trained deep learning models, although it is also possible to use embeddings from pre-trained models or raw image pixels.
Given this input,
each image vector is initialized as its own cluster to start, then the most similar image clusters are merged together using Ward linkage with the Euclidean distance metric to form more balanced trees~\cite{murtagh1983survey}. The merging process repeats until the final two clusters merge into one cluster containing all the images in the dataset.
The output of the algorithm forms a special tree structure, called \textit{dendrogram}, resembling a binary tree, with leaf nodes corresponding to data instances.
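This construction can be sketched with SciPy's hierarchical clustering routines. A minimal sketch, assuming random vectors as a stand-in for the model's fully-connected-layer embeddings:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree

# Hypothetical stand-in for high-dimensional image embeddings
# from a trained model (100 images, 64 dimensions).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 64))

# Ward linkage with the Euclidean distance metric: at each step,
# the two clusters whose merge least increases within-cluster
# variance are joined, which tends to produce balanced trees.
Z = linkage(embeddings, method="ward", metric="euclidean")

# The linkage matrix encodes a dendrogram; convert it to a binary
# tree whose leaves are the original images and whose root
# contains the whole dataset.
root = to_tree(Z)
```

The resulting tree can then be serialized (e.g., as nested JSON) for rendering on the client side.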
\subsection{\name{} Visualization}
\label{sec:treemap}
\name{} visualizes dendrogram structures using a modified treemap algorithm. It traverses the dendrogram and renders each cluster node as a grid of images using the available rectangular space.
At the top of each cluster node, we display
the count and classification accuracy of the images in that cluster.
\textbf{Treemap Layout}.
The dendrogram resembles a binary tree, so there will only ever be two child nodes to lay out in the space at each point in the traversal. This allows \name{} to adapt the simple slice-dice treemap layout~\cite{johnson1998tree}. Normally, slice-dice creates undesirable aspect ratios when laying out many rectangles per level~\cite{bederson2002ordered}; however, the dendrogram will not have more than two children per node, always resulting in just one partition of the space.
\begin{figure}[!tb]
\centering
\includegraphics[width=0.75\linewidth]{figures/example2.pdf}
\vspace{-10pt}
\caption{The slice-dice layout takes the available space given by the parent node $v_P$ and partitions it for its two children $v_{LC}$ and $v_{RC}$. To reveal $v_P$'s hierarchy, padding is added to the children's boxes.
}
\vspace{-5pt}
\label{fig:layout}
\end{figure}
We modify the slice-dice layout to display a grid of fixed-size images on top and to include padding (to highlight hierarchical structures). To demonstrate one iteration of the modified layout, consider a node $v_P$ that has two children $v_{LC}$ and $v_{RC}$ with 6 and 4 images, respectively. The goal is to fill a 100-by-90-pixel available space, as depicted in \autoref{fig:layout}. The algorithm works as follows:
\begin{enumerate}[topsep=1pt,itemsep=0pt,parsep=0pt]
\item \textbf{Dice if the available space from the parent $v_P$ is a horizontal rectangle and slice if it is vertical.} In \autoref{fig:layout}, $v_P$'s width $w_P$ is 100 pixels and height $h_P$ is 90 pixels, so dicing is chosen.
\item \textbf{Compute the ratio to partition the space}. When dicing, the partition ratio is calculated as $ratio := N_{LC} / N_{P}$, where $N_i$ represents the number of images in $v_i$. The left and right areas of the partition correspond to the children $v_{LC}$ and $v_{RC}$, respectively. In \autoref{fig:layout}, the dice partition ratio is $(6 / 10) = 0.6$, meaning $60\%$ of the space is for $v_{LC}$ and $40\%$ for $v_{RC}$.
\item \textbf{Adjust the partition to fit images}.
Based on the image size, compute the maximum number of images that can fit across the entire parent's width (or height if slicing) as $fit := \lfloor w_{P} / w_{image} \rfloor$, where $w_P$ is the width of the available space for $v_P$ and $w_{image}$ is the width of each image. The actual partition dimension can then be calculated as $\lfloor fit \times ratio \rfloor$ image widths, resulting in a partition that fits images without cutting them off.
\item \textbf{Add padding to show hierarchies}. After laying out the $v_{LC}$ and $v_{RC}$ and assigning them their new dimensions, a fixed padding is added to reveal the parent cluster $v_P$ behind it (like in \autoref{fig:layout}).
We set a fixed padding of 10 pixels in our implementation.
Color can encode the remaining height of the tree under that node~\cite{bostock2019nestedtreemap}.
\end{enumerate}
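The steps above can be sketched as follows. This is a minimal sketch that assumes 10-pixel-square images for illustration and omits the padding step:

```python
import math

def slice_dice(w_p, h_p, n_lc, n_rc, img_w=10, img_h=10):
    """One iteration of the modified slice-dice layout: returns the
    (width, height) of the partitions for the two children."""
    ratio = n_lc / (n_lc + n_rc)            # step 2: partition ratio
    if w_p >= h_p:                          # step 1: horizontal -> dice
        fit = math.floor(w_p / img_w)       # step 3: images per row
        w_lc = math.floor(fit * ratio) * img_w  # snap the cut to the grid
        return (w_lc, h_p), (w_p - w_lc, h_p)
    else:                                   # step 1: vertical -> slice
        fit = math.floor(h_p / img_h)
        h_lc = math.floor(fit * ratio) * img_h
        return (w_p, h_lc), (w_p, h_p - h_lc)

# The example from the text: a 100-by-90 space with 6 and 4 images.
left, right = slice_dice(100, 90, 6, 4)
```

With these numbers, dicing is chosen, ten 10-pixel images fit across the width, and the cut lands at $\lfloor 10 \times 0.6 \rfloor = 6$ image widths, i.e., 60 pixels for $v_{LC}$.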
\textbf{Adjusting the number of clusters}.
Traversing the \textit{entire} dendrogram quickly fills the available screen space, making it hard to display many images. Thanks to the dendrogram's binary tree structure, each iteration of the \name{} algorithm only lays out two children (one partition), which allows us to render a specific number of clusters (i.e., $k$ set by users).
By traversing the tree breadth-first and counting the $k$ clusters created so far, the algorithm can stop and show those $k$ clusters.
For example, in \autoref{fig:summaryTreemap}, the dendrogram traversal stops early to render only the three clusters shown in the treemap.
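One way to sketch this traversal, under our assumption of a simple breadth-first frontier (the `Node` class is a hypothetical stand-in for a dendrogram node):

```python
from collections import deque

class Node:
    """Hypothetical dendrogram node; leaves have no children."""
    def __init__(self, count, left=None, right=None):
        self.count, self.left, self.right = count, left, right

def cut_to_k(root, k):
    # Breadth-first: replace each visited internal node with its two
    # children until k clusters exist (assumes k is no larger than
    # the number of leaves in the dendrogram).
    frontier = deque([root])
    while len(frontier) < k:
        node = frontier.popleft()
        if node.left is None:          # a leaf cannot be split further
            frontier.append(node)
            continue
        frontier.extend([node.left, node.right])
    return list(frontier)

# A tiny dendrogram ((a, b), c); cutting at k = 3 yields the leaves.
a, b, c = Node(2), Node(3), Node(4)
clusters = cut_to_k(Node(9, Node(5, a, b), c), 3)
```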
\textbf{Organizing images within the clusters}.
An interesting property of dendrograms is that the leaf nodes (i.e., images) have an order based on the hierarchical structure generated by the algorithm.
We use this order to organize the list of images for each cluster node.
As seen in \autoref{fig:summaryTreemap}, the root node cluster that contains all the images is in the same order as the leaf nodes.
This means that nearby images in a cluster are likely more similar than images located far apart within the cluster.
For example, in \autoref{fig:summary} on the right, insect images taken over a white background are clustered together in a large node.
Furthermore, when there are more images to display than the available space allows,
we uniformly sample images from the ordered list
in order to display a representative subset of the cluster.
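The uniform sampling can be sketched as follows; `capacity` is a hypothetical parameter standing for how many thumbnails fit in the cluster's rectangle:

```python
def sample_uniform(ordered_images, capacity):
    """Pick every (n / capacity)-th image from the dendrogram-ordered
    list so the displayed subset spans the whole cluster."""
    n = len(ordered_images)
    if n <= capacity:
        return list(ordered_images)
    step = n / capacity
    return [ordered_images[int(i * step)] for i in range(capacity)]

picked = sample_uniform(list(range(10)), 4)
```

Because the list is ordered by similarity, evenly spaced picks cover the cluster's internal variety rather than one corner of it.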
\textbf{Zooming interactions.}
\label{sec:zoom}
To go past the overview and explore large-scale datasets in more detail, \name{} supports a zooming interaction.
By clicking on a cluster node, \name{} animates
to zoom into the new cluster, which enlarges the selected cluster to fit into the entire space, and creates a set of subclusters within the selected cluster.
Our implementation follows Bostock's zoomable treemap implementation~\cite{bostock2019zoomabletreemap}.
In addition, because the zoomed-in cluster takes up the entire space, more images can be shown with more specific hierarchies, leading to more in-depth exploration. This process corresponds to rendering a downstream portion of the dendrogram. At any point, clicking on the parent cluster reverses the process, zooming out back up the tree to reveal the overview again.
The zoom-in and zoom-out interactions allow users to quickly get an overview of very large image collections and split the hierarchies into the specified detail.
Please watch our video demo or use our website for this interaction.
\subsection{Coordinated Views with the Sidebar}
\label{sec:sidebar}
We developed a system for \name{} by designing coordinated views consisting of the main treemap view and the sidebar.
The sidebar contains rendering settings for the treemap display, a class table for class-level error analysis, and a panel showing details of a selected image.
\textbf{\name{} Settings}.
The sidebar contains two sliders to change the overview level: one controls the number of clusters visible and the other controls the image size.
By default, \name{} shows eight clusters of medium-sized images to balance the level of detail and overview such that many images can be shown while still separated into distinguishable groups.
These sliders allow users to easily change the overview level based on their exploration needs.
For the case when a dataset comes with predictions from a trained model,
the sidebar provides two options to highlight misclassified images. One toggle highlights these images using a red border and the other toggle puts the images into focus by making the others translucent.
Visually emphasizing misclassified images makes it easier for users to find groups of images that the model consistently misclassifies.
\textbf{Class Table.}
The class table is visible if model predictions are present, and it contains information for additional \textit{error analysis} at the class level. The class table updates based on the parent cluster's images (i.e., the root or previously selected cluster; by default, all images).
Each row of the table corresponds to a specific class in the dataset (e.g., cat).
The next two columns of the table display the counts of images whose true or predicted class label matches that row's class.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{figures/class-tablev2.png}
\vspace{-19pt}
\caption{The class table summarizes class-level statistics of images present in the selected cluster in the treemap view.
The user can sort and search for classes, and hover over each entry to quickly locate accurate or error-filled clusters.}
\vspace{-5pt}
\label{fig:classtable}
\end{figure}
The last three columns of the table provide useful metrics for class-level error analysis: the prediction accuracy (i.e., how often the true and predicted classes matched that row's class), the false negative rate (i.e., how often the true class matched that row's class but the predicted class was different), and the false positive rate (i.e., how often the predicted class matched that row's class but the true class was different). As shown in \autoref{fig:classtable}, each rate is encoded with the opacity of a colored dot so that users can quickly find rows of interest in the table.
By hovering over one of these entries in the table, the treemap view highlights the images used to determine that metric by making the other images translucent. This way users can use the class table in tandem with the treemap to isolate and find areas of high error or high accuracy.
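The three metrics can be sketched from paired true/predicted labels. This is a sketch under our assumed denominators (accuracy and the false negative rate over images whose true label is the row's class; the false positive rate over images predicted as that class):

```python
def class_metrics(y_true, y_pred, cls):
    """Per-class accuracy, false negative rate, and false positive
    rate, as displayed in the class table (denominators assumed)."""
    preds_for_cls = [p for t, p in zip(y_true, y_pred) if t == cls]
    trues_for_cls = [t for t, p in zip(y_true, y_pred) if p == cls]
    correct = sum(1 for p in preds_for_cls if p == cls)
    acc = correct / len(preds_for_cls) if preds_for_cls else 0.0
    fnr = 1.0 - acc                  # true was cls, predicted otherwise
    wrong = sum(1 for t in trues_for_cls if t != cls)
    fpr = wrong / len(trues_for_cls) if trues_for_cls else 0.0
    return acc, fnr, fpr

# Two cat images (one misclassified as dog) and two dog images
# (one misclassified as cat).
metrics = class_metrics(["cat", "cat", "dog", "dog"],
                        ["cat", "dog", "dog", "cat"], "cat")
```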
\textbf{Image Details.}
A user can click on an image in \name{} to see detailed information: larger view of the image, true class label, predicted class label if it has one, and similar images.
The similar images are determined based on distances in the high-dimensional space, which can be used for counterfactual analysis~\cite{cheng2020dece,gomez2020vice}.
\subsection{Implementation Details}
The \name{} system was built using \textit{Svelte}\footnote{Svelte JavaScript Framework: \url{https://svelte.dev/}}, a reactive JavaScript framework that has been increasingly used in the visualization community. Each part of the user interface is built with \textit{Svelte} components.
The main component, the treemap view, is implemented primarily with \textit{D3.js}\footnote{D3 JavaScript Library: \url{https://d3js.org/}} to render SVG elements and to transition the elements for natural animation.
The complementary component, the sidebar, is implemented entirely in \textit{Svelte} and uses \textit{Svelte}'s \texttt{store} functionality to communicate with the treemap view.
The dendrogram structure, created from hierarchical agglomerative clustering with Ward linkage, was implemented with Python
and exported as a nested JSON object to be rendered as a treemap on the client side.
\section{Design Goals}
To help ML practitioners explore large-scale image datasets, we adapt treemaps with the following design goals:
\begin{enumerate}[topsep=1pt,itemsep=1pt,parsep=0pt]
\item
\textbf{Overview of Data Distributions.}
We aim to assist users in getting an overview of datasets as a beginning step for their analysis of datasets.
This includes helping them answer questions such as what kinds of images mostly exist in their datasets, and whether the datasets are \textit{diverse} enough~\cite{hong2020crowdsourcing} or biased towards certain properties~\cite{buolamwini2018gender}.
\item
\textbf{Exploring at Multiple Levels of Abstraction.}
We aim to design our visualization to provide users with abilities to interactively adjust the level of abstraction.
While treemaps are effective at supporting \textit{abstract and elaborate} interactions~\cite{yi2007toward},
we adapt the original treemap techniques by considering unique properties of the dendrogram structure and the domain of ML for images.
\item \textbf{Instance-level Exploration.} As images do not contain attributes, it is important for users to see the individual image contents while exploring datasets.
We aim to effectively organize image thumbnails to help users find and inspect individual data points while they navigate over the tree structure.
\item \textbf{Subgroup-level Analysis for ML.}
Both the literature in multimedia analytics and visual analytics for ML point out the importance of identifying subgroups from datasets~\cite{zahalka2014towards,hohman2018visual,olson2021contrastive}.
This can be useful for performing a wide range of analytic tasks in ML, such as error analysis and bias discovery~\cite{wu2019errudite,cabrera2019fairvis}.
\end{enumerate}
\section{User Study}
To evaluate the effectiveness of \name{} for a variety of exploration tasks for large-scale machine learning datasets, we conducted a user study comparing \name{} and a baseline visualization technique for images, \baseline{}, a gridified version of t-SNE.
\subsection{Baseline: \baseline{}}
\label{sec:tsnegrid}
We compare \name{} with a gridified version of t-SNE, which we call \baseline{}.
It re-adjusts the positions obtained from the t-SNE algorithm~\cite{maaten2008tsne} by filling the available rectangular grid space with the images to use screen space effectively~\cite{karpathy2014tsnegrid}.
\begin{figure}[!b]
\centering
\includegraphics[width=\linewidth]{figures/summaryBaselinev2.pdf}
\vspace{-17px}
\caption{Steps to generate \baseline{}: From t-SNE embeddings (in \textbf{A}),
we first overlay grid points on top of the embeddings
(in \textbf{B};
$10 \times 10$ in this case). Then in \textbf{C},
we assign each grid point the image with the smallest distance.
} \label{fig:summaryBaseline}
\vspace{-5px}
\end{figure}
This process works by first taking the image representations from the dataset and reducing them down to their two-dimensional embeddings using t-SNE (like Fig.~\ref{fig:summaryBaseline}A). Then, to fill the space, two dimensional grid points are evenly laid out over the space of image embeddings (like Fig.~\ref{fig:summaryBaseline}B). Finally, each grid point is assigned the closest image embedding and the corresponding image is displayed on top (like Fig.~\ref{fig:summaryBaseline}C). The result is a grid of images with the structure from t-SNE.
Multiple grid points may have the same closest image embedding, so to obtain a result where the sum of grid-assignment distances is minimized, the Jonker-Volgenant algorithm is used to compute the optimal assignment~\cite{jv1987jv}.
It does so by phrasing the grid assignment as a linear assignment problem.
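This assignment can be sketched with SciPy's linear-assignment solver (a sketch: random 2-D points stand in for t-SNE output, and `scipy.optimize.linear_sum_assignment` solves the same linear assignment problem as the Jonker-Volgenant algorithm):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
emb2d = rng.uniform(size=(25, 2))   # stand-in for t-SNE embeddings

# A 5x5 grid of evenly spaced points over the unit square.
side = 5
xs, ys = np.meshgrid(np.linspace(0, 1, side), np.linspace(0, 1, side))
grid = np.column_stack([xs.ravel(), ys.ravel()])

# cost[i, j] = squared distance from grid cell i to image j.
cost = ((grid[:, None, :] - emb2d[None, :, :]) ** 2).sum(axis=2)

# Minimize the total assignment cost; cols[i] is the image placed
# at grid cell i, and every image is used exactly once.
rows, cols = linear_sum_assignment(cost)
```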
For this user study, to further enhance \baseline{} exploration, we implemented a one-level zoom that recomputes the grid with a smaller number of images based on where the user clicks in the \baseline{}. In particular, the $k$ images closest to the click are re-gridified with the Jonker-Volgenant algorithm to display a smaller, more focused grid. $k$ is chosen based on the number of grid cells in the zoomed-in view; for example, to show a $5 \times 5$ grid, $k = 25$ takes the 25 closest points and gridifies them.
We will open-source this implementation.
\subsection{Study Setup}
\subsubsection{Participants.}
We recruited 20 participants by using the departmental student mailing lists.
Their average age was 26.
Five were female and 15 were male.
Six were undergraduate and 14 were graduate students. Their degree programs included computer science, robotics, and AI.
We recruited only those who had taken at least one AI or ML course.
Every participant attended the study in-person and we had one participant per session.
Each participant was compensated with a \$20 gift card.
\subsubsection{Protocol.}
We used a within-subject design such that each participant evaluated both \name{} and \baseline{}. Each study session had two phases, each involving a
visualization (\name{} or \baseline{}) and a dataset (the Artifact or Organism subset of CIFAR-100), which
we describe in detail in Section~\ref{sec:datasets-and-models}.
From the two visualizations and two datasets, we created four conditions. Each participant was assigned to one of these four conditions to ensure there was no bias in the order in which a participant used a particular visualization/dataset combination (shown in \autoref{table:conditions}).
\begin{table}[!b]
\centering
\begin{tabular}{c|c|c|c|c}
\toprule
\# & \multicolumn{2}{c|}{Phase 1} & \multicolumn{2}{c}{Phase 2}\\\cmidrule{2-5}
& Visualization & Dataset & Visualization & Dataset\\\midrule
1 & \baseline{} & Artifact & \name{} & Organism\\%\hline
2 & \name{} & Artifact & \baseline{} & Organism\\%\hline
3 & \baseline{} & Organism & \name{} & Artifact\\%\hline
4 & \name{} & Organism & \baseline{} & Artifact\\
\bottomrule
\end{tabular}
\vspace{2pt}
\caption{Four conditions for counterbalancing the orders of two interfaces in our within-subject design}
\vspace{-5pt}
\label{table:conditions}
\end{table}
Every participant completed two sets of tasks, one for each visualization-dataset combination of their respective condition.
For each phase, a participant was given a brief tutorial of the visualization, then they were asked to complete seven tasks while thinking aloud. We recorded their voice and screen. After each phase, the participant filled out a post-questionnaire form.
All participants used the same computer setup with a 32-inch monitor.
\subsubsection{Dataset and Models.}
\label{sec:datasets-and-models}
We used the CIFAR-10 and CIFAR-100 datasets \cite{krizhevsky2009learning} for the study. The CIFAR-10 dataset has 10 classes, each containing 6,000 images (5,000 from training set and 1,000 from test set), while the CIFAR-100 dataset has 100 classes, each containing 600 images.
We fine-tuned the ResNet50 \cite{DBLP:conf/cvpr/HeZRS16} architecture that was pretrained on the ImageNet dataset provided by TensorFlow\footnote{\url{https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet50/ResNet50}}. The CIFAR-10 and CIFAR-100 images were upsampled to fit the input shape of the ResNet50 model (i.e., $224 \times 224 \times 3$). After extracting the image features from the models, we used Average Pooling followed by Dense layers. The model was fine-tuned for 20 epochs, achieving a test set accuracy of $92.8\%$ on CIFAR-10 and $76.3\%$ on CIFAR-100. For use in the \name{} and \baseline{} algorithms, we represented the images in each dataset as high-dimensional vectors from the outputs of one of the last hidden layers in each respective model. For the CIFAR-10 ResNet50 model, we extracted the outputs from the second-to-last hidden layer; for the CIFAR-100 ResNet50 model, from the last hidden layer.
We divided the classes of CIFAR-100 into two sets--``Artifact'' and ``Organism''--in order to have two very distinct sets of classes for the within-subject design. This helps ensure that results from the first interface only minimally affect those from the second interface. Each set consists of 40 classes (i.e., 4 superclasses, each consisting of 10 classes)~\cite{krizhevsky2009learning}.
For instance, the Artifact set contains classes like bed, chair, television, and bottle, while the Organism set contains classes like lion, tiger, crocodile, and trout.
\subsubsection{Tasks}
The participants completed seven tasks. These tasks can be divided into two broad categories: grouping and searching. The grouping tasks involved identifying or analyzing groups of images based on semantically similar properties; the searching tasks involved searching for images based on specific properties. Table \ref{table:tasks} provides a summarized description of the tasks.
\begin{table}[!bt]
\centering
\begin{tabular}{@{\hskip 3pt} c @{\hskip 5pt} p{3.1in}}
\toprule
\# & Task Description\\
\midrule
1. & \textbf{Categorizing images} into groups across 40 classes\\
2. & \textbf{Categorizing images} into groups for a single class\\
3. & \textbf{Identifying groups} of images with high classification accuracy within a single class\\
4. & Estimating the image count \textbf{distribution} over multiple groups within a single class\\
5. & \textbf{Searching} for an image with a given text description\\
6. & \textbf{Searching} for an image with a given visual description\\
7. & Searching for an \textbf{anomalous} image with an incorrect class label\\
\bottomrule
\end{tabular}
\vspace{3pt}
\caption{Seven tasks designed to evaluate several grouping and searching tasks used in ML analysis}
\vspace{-2pt}
\label{table:tasks}
\end{table}
\begin{itemize}[itemsep=0pt,topsep=0pt,leftmargin=8pt,parsep=0pt]
\item In Tasks 1 and 2, participants were asked to categorize images into 3-4 groups
based on semantically similar properties. Task 1 was designed to evaluate how users make sense of and categorize images across many (i.e., 40) classes whereas Task 2 focuses on how users make sense of images within a single class. The common objectives of these two tasks include analyzing diversity or any potential bias present in the distribution of the data as well as getting an overview of the data.
\item In Task 3, we asked participants to find two large groups, using images from a single class, that have very high classification accuracy and have specific properties.
This task was designed to evaluate the scope of subgroup-level error analysis.
\item Task 4 is about examining the distribution of images for a single class. This task was designed based on the ``characterize distribution'' task discussed by Amar et al. \cite{amar2005low}. The participants were asked to estimate the approximate proportions of four groups determined based on an attribute (e.g., color of objects).
\item The following two tasks
are conventional searching tasks.
In Task 5, participants must find an image that matches a provided text description.
In Task 6, participants must find the image that matches the one on the task sheet.
\item Lastly, Task 7 was designed to find probable anomalies.
Participants are asked to find potential labeling errors among the misclassified images for a single class~\cite{northcutt2021pervasive,xiang2019interactive}.
\end{itemize}
Note that every participant worked with the same task list for both \name{} and \baseline{}, but used a different dataset for each of the visualizations.
\subsubsection{Interface Setup}
For a fairer comparison, the sidebar component from \name{} was added to the \baseline{} visualization. Additionally, to ensure that certain sidebar components would not be used in place of the main visualization, the class table, class filtering, and similar-images components were removed from the sidebar for both \name{} and \baseline{}.
\subsection{Results}
The setup of our user study allows us to analyze the data from multiple perspectives.
\subsubsection{Evaluation of task completion time}
Our first set of analyses focused on task completion time. During the study, we recorded the time a participant took to complete each task. After conducting a paired t-test, we found no significant difference between the average time taken by our participants with \baseline{} and that with \name{} for each task.
\subsubsection{Evaluation of task responses}
We evaluated the responses to the seven tasks using statistical methods.
\textbf{Task 1.}
We instructed our participants to identify four groups such that an image can be assigned to only one group (\textit{mutually exclusive}) and most images
present in the interface can be assigned to one of the groups (\textit{collectively exhaustive}). To evaluate the quality of groups made by the participants, we conducted three analyses.
First, to measure the collectively exhaustive property of the groups, we counted the number of classes covered by at least one of the four groups
and divided that number by the total number of classes in the dataset (i.e., 40).
We counted ``classes'' instead of ``images'' because every class has an equal number of images, so the class count approximates the image count. If only a portion of the images in a class belongs to a group, we count it as half.
In an ideal scenario, the value would be 1.0.
With \name{}, the average value over all participants is higher, at $0.82$, compared to $0.73$ with \baseline{}.
A one-sided paired t-test with a significance level of 0.1
indicates that
the value is significantly greater for \name{} than for \baseline{}. This suggests that, on average, participants were able to maintain the ``collectively exhaustive'' property better with \name{} than with \baseline{}.
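Such a test can be run with SciPy's paired t-test. This is a sketch with fabricated per-participant coverage scores; the study's actual per-participant values are not reproduced here:

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant coverage scores, one pair per
# participant (fabricated for illustration only).
rng = np.random.default_rng(1)
baseline_scores = np.clip(rng.normal(0.73, 0.10, size=20), 0, 1)
treemap_scores = np.clip(
    baseline_scores + rng.normal(0.09, 0.05, size=20), 0, 1)

# One-sided paired t-test: is the mean of the treemap scores
# significantly greater than that of the baseline scores?
result = stats.ttest_rel(treemap_scores, baseline_scores,
                         alternative="greater")
significant = bool(result.pvalue < 0.1)   # study's significance level
```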
Next, to assess the mutual exclusiveness of the groups made by a participant, we counted the number of classes that belong to two or more groups.
In an ideal scenario, the value is zero because there is no overlap between the groups.
We calculated the average value to be $0.07$ for \baseline{} and $0.13$ for \name{}. The results of the same t-test
also indicate that, on average, participants were able to create more ``mutually exclusive'' groups with \baseline{} than with \name{}.
Lastly, we calculated the entropy score of the probability distribution of the four groups to check how much the groups are equally distributed.
From our analysis, we found the average entropy score of \name{} to be $1.37$, only slightly higher than \baseline{}'s average of $1.34$.
\textbf{Task 2.} Like Task 1, participants were asked to identify mutually exclusive and collectively exhaustive groups. The main difference for Task 2 is that they worked with images for only one class.
To evaluate the quality of groups identified by our participants, we conducted the same three analyses as for Task 1. However, for Task 2, instead of counting the number of classes, we labeled a 10\% sample of individual images.
In our first analysis of the collectively exhaustive property, the average values for \baseline{} and \name{} are nearly identical, at $0.67$ and $0.66$, respectively. The mutual exclusiveness analysis shows a similar pattern ($0.10$ and $0.13$), as does our final analysis of the entropy scores ($1.41$ and $1.36$).
\textbf{Task 3.} This task also involves grouping: participants were asked to find two large groups of images with
high classification accuracy.
We conducted two analyses for this task.
First,
we assessed the average accuracy of the two groups, where the accuracy of a group is the number of correctly classified images divided by the total number of images in the group. The average accuracy values of the two groups are 92.2\% and 93.2\% for \baseline{} and \name{}, respectively. \name{} is slightly higher, but the difference is not significant.
Second,
we evaluated how large these groups are.
The average for \baseline{} is $38\%$ and for \name{} is $34\%$, with no statistically significant difference.
\textbf{Task 4.} In this task, participants estimated the approximate percentage of different cars and birds based on
car color (yellow, red, white or silver, or other) or
background of birds (e.g., sky), respectively.
To evaluate user responses, we counted the number of car and bird images that correspond with
the aforementioned criteria
and calculated the Kullback-Leibler (KL) divergence score to quantify how much the probability distributions reported by our participants differ from our own. A score of 0 means the two distributions are identical. Fig.~\ref{fig:histogram} shows two histograms of the distribution of the KL divergence scores for \baseline{} and \name{}. The histograms show that \name{} has more counts between 0.0 and 0.1 than \baseline{}, indicating that more participants were closer to the actual distribution when using \name{} than \baseline{}. This is also supported by the medians of the KL divergence scores: $0.17$ for \baseline{} and $0.10$ for \name{}.
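A minimal sketch of this KL computation follows; the direction KL(reported $\|$ actual) is an assumption, and the four-bin car-color distributions are invented for illustration:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) in nats; eps guards against empty bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))

# Hypothetical Task 4 distributions over the four car-color bins
# (yellow, red, white or silver, other); not the study's actual data.
actual   = [0.10, 0.15, 0.45, 0.30]
reported = [0.15, 0.15, 0.40, 0.30]
print(round(kl_divergence(reported, actual), 3))  # 0.014
```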
\begin{figure}[!bt]
\includegraphics[width=1.0\linewidth]{figures/histogram_modifiedv5.pdf}
\vspace{-15pt}
\caption{KL divergence score distribution for Task 4 for \name{} and \baseline{} (lower is better). More participants were closer to the actual distribution with \name{} than \baseline{}.
}
\label{fig:histogram}
\vspace{-2pt}
\end{figure}
\textbf{Tasks 5 \& 6.} These tasks were about finding specific images. All the participants of our study were successful in finding the correct images using both the \baseline{} and \name{}.
\textbf{Task 7.} For this task, participants were asked to find labeling errors from misclassified images. Unlike Tasks 5 and 6, multiple correct answers exist. We assessed the images selected by our participants and divided them into three categories: \textit{reasonable, somewhat reasonable, not reasonable}. Based on our assessment of the 20 images found
by the 20 participants,
with \baseline{}, 12 are reasonable and 3 are somewhat reasonable; with \name{}, 15 are reasonable and 3 are somewhat reasonable.
This indicates that \name{} is likely more helpful in finding potential anomalies in image datasets as a user is required to review many images for a task like this. The images in \name{} are divided into clusters with distinguishable boundaries,
which makes it more convenient to systematically survey a large group of images
than with \baseline{}.
\subsubsection{Evaluation of post-questionnaires}
Each participant answered 10 questions in two separate post-questionnaire forms: one for \name{} and one for \baseline{}.
They provided ratings on a 7-point Likert scale (7 being strongly agree). The questions and their average rating are shown in Table \ref{table:ratings}.
\begin{table}[!tb]
\centering
\begin{tabular}{l @{\hskip3pt}c@{\hskip5pt} @{\hskip5pt}l@{\hskip3pt} }
\toprule
Question &
\baseline{} & \name{} \\\midrule
Easy to learn how to use & \textbf{6.45} & \hskip5pt 6.30\\\hline
Easy to use & 6.00 & \hskip5pt 6.00\\\hline
Helpful for overview & 5.95 & \hskip5pt \textbf{6.45}\textsuperscript{$\circ$}\\\hline
Helpful for detailed analysis & 5.15 & \hskip5pt \textbf{6.05}\textsuperscript{$\ast$}\\\hline
Helpful for finding specific images & 5.10 & \hskip5pt \textbf{5.75}\\\hline
Helpful to identify image categories & 5.70 & \hskip5pt \textbf{6.20}\textsuperscript{$\circ$}\\\hline
Helpful to discover new insights & 5.25 & \hskip5pt \textbf{6.00}\textsuperscript{$\ast$}\\\hline
Confident when using the tool & 5.85 & \hskip5pt \textbf{6.05}\\\hline
Enjoyed using the tool & 6.10 & \hskip5pt \textbf{6.40}\\\hline
Would like to use again & 5.80 & \hskip5pt \textbf{6.65}\textsuperscript{$\ast$}\\
\bottomrule
\end{tabular}
\vspace{2pt}
\caption{Participants' average ratings for the two visualizations. \name{} outscored \baseline{} in 8 out of 10 questions. Bold indicates higher average ratings. $\ast$ and $\circ$ indicate 95\% and 90\% statistical significance in the one-sided paired t-tests, respectively.
}
\vspace{-2pt}
\label{table:ratings}
\end{table}
The results indicate that
\name{} received higher ratings than \baseline{} in 8 out of 10 questions.
The \baseline{} received a better rating for only the first question regarding the learnability of the visualization.
This is reasonable as \baseline{} supports fewer interactions than \name{}.
For several important aspects of image visualization, such as getting an overview, performing detailed analysis, identifying image categories, and discovering new insights, \name{} is rated statistically significantly higher than \baseline{}.
Moreover, participants on average expressed greater eagerness to use \name{} again than \baseline{}; the difference is significant ($p < 0.05$).
\subsection{Discussion}
We observed participants' usage while they performed the tasks.
Based on their usage patterns, we have made a few important findings.
\textbf{\name{} provides a more structured workflow.} Compared to \baseline{}, it is easier to assess or follow how a user makes certain decisions with \name{}. In \name{}, the presence of clusters and the hierarchical relationships within them provide significant semantic information to users when they create groups or search images based on certain properties. One participant said:
\emph{``The clustering of \name{} was very intuitive, more so than the grid one where the boundaries between groups were not clearly defined. The ability to click into different levels of clusters was very useful as well.''}
\textbf{\name{} helps with extracting more specific properties.} Using the semantic information provided by \name{}, users were able to find more detailed information about different image groups. This is more evident with Task 3 where participants worked with the images of ships and dogs to find two large groups that have high classification accuracy and specific properties. With \name{}, participants mentioned more specific properties compared to \baseline{}. For example, regarding dogs, \name{} users described their eyes, hair length, and facial structure in addition to generic properties such as size, color, and background. With the \baseline{}, participants mostly described groups using only generic properties.
\textbf{Image search can be narrowed down more with \name{}.} The hierarchical relationships within the clusters helped users narrow their search for a particular image.
With \name{}, they easily found specific clusters with more images similar to the one they were looking for. The sub-clusters present within a cluster then helped users further narrow the search space. On the other hand, with \baseline{}, users had to check a large group of images as there is no structured way of narrowing the search. One participant said:
\emph{``With the treemap, the ability to narrow down the search without having to recompute the grid size every time, having some predetermined way of organizing the images, and having the images broken up into clusters made it very easy to scan through the images without getting lost. I was able to quickly filter the exact things I was looking for.''}
\textbf{Cluster summary provided with \name{} is helpful.}
\name{} provides information about each cluster and sub-cluster, such as the number of images and classification accuracy.
Participants found this information useful, especially for Tasks 3 and 4. One participant expressed their liking by saying:
\emph{``I like the clusters having details like how many images and the accuracy. Also, the outline of the different clusters having different sizes helped.''}
\section{Experiments: Distance Preservation}
\label{sec:numerical}
Lastly, we evaluate the quality of the cluster structures generated from \name{} computationally.
We quantitatively measure \textit{$k$-nearest neighbor accuracy}: how well \name{} preserves the top-$k$ nearest neighbors from the original high-dimensional space.
\subsection{Setup}
We measure the number of common images in the top-$k$ nearest images between one of the techniques and the original high-dimensional representation of the data, while varying $k$ (i.e., the size of the nearest-neighbor list).
It is a common way to evaluate the quality of DR methods~\cite{wang2020understanding}.
The techniques we compare are:
(1) t-SNE, (2) \baseline{} (described in \autoref{sec:tsnegrid}), and (3) \name{}.
We performed this experiment over 12 different datasets:
CIFAR-10,
CIFAR-100, and
10 subsets of CIFAR-10, each from one of the 10 classes.
All are trained with ResNet50 (same setup described in \autoref{sec:datasets-and-models}), but for the first two, the high-dimensional representations were taken from the last hidden layer, while those for the 10 subsets were taken from the second-to-last hidden layer.
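The neighbor-overlap measurement can be sketched as follows; the embeddings and the 2-D ``projection'' here are random stand-ins, not our data:

```python
import numpy as np

def knn_overlap(high_dim, low_dim, k):
    """Average size of the intersection of each point's top-k neighbor
    lists in the high-dimensional and 2-D representations."""
    def topk(X):
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)          # exclude the point itself
        return np.argsort(d, axis=1)[:, :k]
    hi, lo = topk(np.asarray(high_dim)), topk(np.asarray(low_dim))
    return float(np.mean([len(set(a) & set(b)) for a, b in zip(hi, lo)]))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))   # stand-in high-dimensional embeddings
Y = X[:, :2]                     # a crude 2-D "projection" for illustration
print(0.0 <= knn_overlap(X, Y, k=10) <= 10.0)  # True: overlap is at most k
```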
\begin{figure}[!tb]
\centering
\includegraphics[width=\columnwidth]{figures/numerical-modified.pdf}
\vspace{-16pt}
\caption{Average number of common $k$-nearest neighbors between t-SNE, \baseline{}, or \name{} and the high-dimensional representations of images.
For all 12 datasets we tested, \name{} preserves the top-$k$ images better than \baseline{}.
}
\vspace{-2pt}
\label{fig:numerical}
\end{figure}
While t-SNE and \baseline{} assign an $(x, y)$ position to each data point, so we can rank similar images by computing Euclidean distances between the 2-D points,
\name{} requires a different methodology.
This is because \name{} imposes additional structure on the 2-D space through its treemap, so directly using Euclidean distances does not make sense.
Instead, we define the distance from an image $\textbf{x}_i$ to another image $\textbf{x}_j$ in \name{} as
the distance from the node for $\textbf{x}_i$ in the dendrogram to the nearest common ancestor of $\textbf{x}_i$ and $\textbf{x}_j$. This can be thought of as the number of times a user needs to zoom out from the leaf node for $\textbf{x}_i$ to reach the cluster to which both $\textbf{x}_i$ and $\textbf{x}_j$ belong.
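Assuming the hierarchy comes from scipy-style agglomerative clustering, this zoom-out count can be sketched as follows (`tree_distance` is a hypothetical helper, and the data is a random stand-in):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree

def tree_distance(Z):
    """Return dist(i, j): the number of zoom-outs from leaf i up the
    dendrogram to the lowest common ancestor of leaves i and j."""
    ancestors = {}
    def walk(node, path):
        if node.is_leaf():
            ancestors[node.id] = path[::-1]   # leaf-to-root order
        else:
            walk(node.left, path + [node.id])
            walk(node.right, path + [node.id])
    walk(to_tree(Z), [])
    def dist(i, j):
        anc_j = set(ancestors[j])
        # The first shared ancestor on the way up is the lowest one.
        for hops, a in enumerate(ancestors[i], start=1):
            if a in anc_j:
                return hops
    return dist

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))                  # stand-in embeddings
dist = tree_distance(linkage(X, method="ward"))
print(1 <= dist(0, 1) <= len(X) - 1)          # True: bounded by tree depth
```

Note that this distance is asymmetric, matching the definition in the text (it is measured from $\textbf{x}_i$'s leaf specifically).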
\subsection{Results}
\label{sec:numerical-result}
Figure~\ref{fig:numerical} shows the results.
For each of the 12 plots,
the $x$-axis represents $k$ (in $k$-nearest neighbor) and the y-axis represents the average number of common images in two top-$k$ image lists.
We display results up to $k=300$ for the 10,000-image datasets and up to $k=50$ for the class-level CIFAR-10 datasets.
As shown in the figure, t-SNE outperforms the other two in all cases, as expected, since t-SNE is designed to optimize this metric.
When comparing \name{} and \baseline{}, \name{} shares more top-$k$ nearest neighbors with the high-dimensional representations than \baseline{} for all 12 datasets.
This indicates that \name{} preserves the local similarity structures better than \baseline{}.
\section{Use Cases}
In this section, we describe how \name{} can be used in practice to explore and analyze image datasets through four usage scenarios.
\begin{figure*}[t]
\centering
\includegraphics[width=0.85\linewidth]{figures/use-case-figuresv5.pdf}
\vspace{-10pt}
\caption{In our case study, ML practitioner Dave
investigates the specific classes that his model struggles with using \name{}.
}
\vspace{-2pt}
\label{fig:scenario}
\end{figure*}
\subsection{Evaluating Dataset Quality Before Model Training}
Consider Evan, a researcher who is in the process of preparing a model to classify different animal species.
He is looking into using the ImageNet dataset, and he wants to get a sense of whether the images are sufficiently diverse for training a model.
Evan loads images from ImageNet across all 1,000 classes into \name{} and immediately sees that the images are roughly divided into two large groups. One could be described as a group of organisms containing various plants and animals and features many earth-tone colors. The other group could be described as artifacts or non-living objects, such as vehicles and stock photo close-ups of everyday objects.
Evan clicks on the encapsulating rectangle for the organism cluster given his interest in animals.
He incrementally increases the ``Clusters Visible'' slider until it reaches 20 and gradually sees the formation of distinctly-colored areas of the treemap within the overarching organism group.
He notices a blue-ish cluster containing different aquatic animals, a green-ish cluster of insects and flowers, and a very colorful cluster of fruits located next to a cluster of cooked foods.
He wants to more closely examine a cluster containing dogs and other fuzzy animals so he clicks to ``zoom in'' to this rectangle, revealing more clusters.
He notices there is a large cluster of animals on grassy fields and clicks on several images to inspect their class labels: Chow Chow, pug, miniature schnauzer, and even pig and polar bear.
In general, the images show dogs and other animals of different colors in a variety of poses with differently colored backgrounds, so Evan feels confident he will be able to train a capable model using this set of images.
\subsection{Examining Bias in Datasets}
Consider Priya, a data scientist who lives in the Southeast region of Asia and is evaluating whether ImageNet can be used to train an image classification model that she can deploy in her country.
After she loads the \name{} interface, Priya begins to click around to ``zoom'' into different portions of the dataset.
She first clicks on the rectangle containing approximately half of the dataset and discovers a cluster containing everyday objects.
She notices a cluster of taxi cabs and hovers over the class name ``taxicab'' in the sidebar's class table to put just the taxicab photos in focus while the rest become faded.
She notices that most are black or yellow, but she knows from personal experience that many taxis are multicolored in her country, so she makes a note to supplement the ``taxicab'' class with some of those images.
Priya ``zooms out'' by clicking on the outermost rectangle and decides to visit another cluster, this one featuring many images of people interacting with a variety of everyday objects, such as ``violin'' and ``sunscreen''.
However, as she clicks on several images to get a better look at each one, she notices that the images tend to include people with lighter skin tones.
She makes another note to supplement the dataset with images of people with darker skin tones interacting with the objects corresponding to each of the classes listed in the class table.
Priya continues this inspection process until she feels she has a good sense of the quality of the images in this dataset and has compiled a complete list of the classes she plans to supplement.
\subsection{Identifying Underperforming Subgroups}
Consider Dave, an ML engineer who is using the CIFAR-100 dataset to evaluate a trained image classification model.
He opens \name{} and sees the default view of eight rectangles or clusters.
At the top of each rectangle is some information about the number of images and the average prediction accuracy of the images in each cluster.
As Dave inspects the interface, he notices that the group of images with the lowest accuracy score (57 percent) consists mostly of human faces.
He sees no obvious pattern at this level of overview in the hierarchical structure, so he clicks on another rectangle to get a closer look.
From the class table in the sidebar, he observes that a majority of the images in this group were predicted to be ``woman'' or ``girl'', but most were incorrect.
Dave thinks perhaps his classification model has trouble determining which of those two labels is correct.
He navigates back up one level by clicking on the outermost rectangle.
He selects a different cluster and this time he observes that a majority of the images are predicted as ``man'' or ``boy'', but with similar proportions of incorrect guesses (as shown in \autoref{fig:scenario}).
From these two insights, Dave hypothesizes that his model can distinguish male and female faces, but has difficulty determining whether the person is a child or adult.
\subsection{Analyzing Classification Errors}
Consider Anna, a ML practitioner who has trained an image classification model.
During the training process, she noticed her model consistently had a harder time correctly predicting images from artifact-related ImageNet classes, such as ``umbrella'' and ``frying pan'',
so she decided to analyze her model's behavior on these classes.
She opens \name{} and toggles the ``outline misclassified'' and ``focus misclassified'' switches to spotlight the misclassified images, outlined in red, while the others fade.
She notices that the red outlined images appear to be scattered without much of a pattern, so she gradually increases the number of clusters until \name{} splits the images into subgroups of higher or lower accuracy.
She stops when it reaches 18 clusters because she notices distinct subgroups of images with high accuracy (over 90 percent).
Most of these subgroups focus on particular classes, such as ``racket'' or ``potter's wheel''.
Anna wants to investigate the cause of clusters with much lower prediction accuracy, so she continually clicks on the next visible cluster with the lowest accuracy.
She notices a pattern as she keeps drilling down towards the leaf nodes: the accuracy rate decreases as the images become more cluttered.
She clicks on several misclassified images to inspect their true and predicted class labels, and she discovers that the predicted labels are not necessarily inaccurate; rather, both the true and the predicted labels classify the entire image based on only a portion of it.
For example, she clicks on an image of a couple of people sitting on a bench on a sunny day. The true class label for this image is ``sunglasses'' because one person is wearing sunglasses, whereas the predicted label for the image is ``park bench'' because the two people are sitting on a bench.
Anna can now consider how she can train her model to handle these more complex images with multiple possible correct labels.
\section{Limitations and Future Work}
\indent\indent
\textbf{Interactive Refinement of Tree Structures.}
While the agglomerative clustering algorithms generate hierarchical structures that allow users to flexibly specify the number of clusters to be displayed,
the formed structures may not be ideal for some cases.
Visualization researchers have extensively studied interaction methods for steering and refining clustering results~\cite{yang2020interactive,choo2013utopian}.
Future research challenges include designing interactions for treemap representations that are distinct from scatterplots and node-link diagrams.
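For reference, the flexibility mentioned above, where one agglomerative hierarchy supports any number of visible clusters, can be sketched with scipy (random stand-in embeddings, not \name{}'s actual pipeline):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))         # stand-in image embeddings
Z = linkage(emb, method="ward")          # the hierarchy is built once

# The same tree can then be cut into any requested number of clusters,
# which is what a "Clusters Visible" slider would drive interactively.
for k in (4, 8, 20):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(k, len(np.unique(labels)))     # requested vs. obtained clusters
```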
\textbf{Using Interpretable Attributes for Tree Construction.}
We used embedding vectors extracted from deep learning models as input to clustering algorithms, but
alternative methods may help people better interpret substructures of each cluster in \name{}.
For example, representing each image with human-understandable concepts~\cite{kim2018interpretability,zhao2021human} or additional resources~\cite{xie2018semantic} may make each dimension more interpretable.
Alternatively, integrating information about each dimension of the embedding vectors into the interface using explainable AI methods can also be helpful~\cite{olah2018the,hohman2019summit}.
\textbf{Formalizing Interaction Operations.}
Several data manipulation operations could also be provided in \name{}, such as sorting images within each node by user-specified criteria (e.g., prediction scores) or splitting and zooming into only a subset of nodes~\cite{bisson2012improving,yang2020interactive}.
Formalizing these types of operations would allow for more flexible user exploration.
Integrating some ideas presented in the unit visualization literature~\cite{park2017atom,wexler2019if,ren2016squares}, such as horizontally or vertically separating space based on categorical attributes in Facets~\cite{wexler2019if,facets}, into the treemap context would also be an interesting future direction.