\section{Introduction} \label{sec:intro} A typical state-of-the-art speaker verification (SV) system is based on the comparison of speaker embeddings extracted using a deep neural model \cite{snyder2018x,desplanques2020ecapa,zhou2021resnext} trained from scratch on a large-scale speaker-labeled dataset such as VoxCeleb \cite{nagrani2017voxceleb,chung2018voxceleb2}. The size (typically more than 10 million parameters) and the architecture based on a series of convolutional layers make it difficult to properly train these extractors in a data-restricted scenario of some low-resource domain (e.g., a completely new channel, language, or their combination). Recently, large pre-trained Transformer models, including Wav2Vec \cite{baevski2020Wav2Vec}, HuBERT \cite{hsu2021hubert}, WavLM \cite{chen2021wavlm}, and their variants \cite{chen2022unispeech}, have significantly boosted performance in the field of speech processing. The most common way to adapt those general-purpose models to downstream tasks is to fine-tune the whole pre-trained model with a task-oriented back-end (\emph{full fine-tuning}). In \cite{chen2022large}, strong performance on the speaker verification task was achieved with an ECAPA-TDNN back-end whose frame-by-frame input was calculated as a weighted combination of the outputs of the individual layers of a pre-trained Transformer model. In \cite{peng2022attention}, to shorten the training time, a more lightweight back-end is employed, which consists of an attention layer and a linear layer to extract speaker representations. However, potential shortcomings of such full fine-tuning are the necessity of updating a vast number of parameters and storing a separate task-related copy of the fine-tuned model parameters for each downstream task or its domain-specific version. This issue will become increasingly problematic as the number of parameters of pre-trained models grows from hundreds of millions to billions; for example, Whisper \cite{radford2022robust} contains 1,550M parameters. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Fig/Performance.png} \caption{Performance change of several PETL approaches on VoxCeleb1-O, when the number of learnable parameters is increased. The learnable parameters include the speaker extractor back-end with constant 2.2M parameters and the PETL module.} \label{fig:fig1} \end{figure} \begin{figure*}[t] \begin{minipage}[t]{.24\linewidth} \centering \centerline{\includegraphics[height=5.2cm]{Fig/FullyFT.png}} \centerline{(a) Full Fine-tuning}\medskip \end{minipage} \begin{minipage}[t]{.24\linewidth} \centering \centerline{\includegraphics[height=5.2cm]{Fig/B_Adapter.png}} \centerline{(b) Bottleneck Adapter}\medskip \end{minipage} \hfill \begin{minipage}[t]{0.24\linewidth} \centering \centerline{\includegraphics[height=5.2cm]{Fig/Prefix.png}} \centerline{(c) Prefix Tuning}\medskip \end{minipage} \hfill \begin{minipage}[t]{0.24\linewidth} \centering \centerline{\includegraphics[height=5.2cm]{Fig/MAM.png}} \centerline{(d) Mix-and-Match Adapter}\medskip \end{minipage} \vfill \caption{Architecture of the pre-trained model and state-of-the-art parameter-efficient methods. For (b), (c), and (d), only the inserted lightweight modules and back-end are learnable during fine-tuning, while the pre-trained model is frozen. “Speaker Extractor Back-end” consists of a multi-head factorized attentive pooling (MHFA) and a linear layer to extract speaker representations \cite{peng2022attention}.
} \label{fig:sys} \end{figure*} To alleviate this issue, many recent studies have focused on parameter-efficient transfer learning (PETL), also known as adapters, where additional lightweight modules with task-specific trainable parameters are inserted into the pre-trained model while keeping the entire pre-trained model frozen. For example, in \cite{thomas2022efficient}, a bottleneck adapter \cite{houlsby2019parameter} is applied to the Wav2Vec model, and the adapter-based model achieved performance comparable to full fine-tuning on ASR tasks by updating only 10\% of the model parameters. In \cite{le2021lightweight}, pre-trained models are connected with a multilingual denoising auto-encoder for speech-to-text translation through adapter modules. In addition, a more challenging and not extensively explored problem with pre-trained speech models is their adaptation to a low-resource scenario with few trainable parameters. Indeed, most pre-trained models are optimized on English corpora (e.g., LibriSpeech \cite{panayotov2015librispeech}). Such models are expected to be well suited for transfer learning to downstream tasks in the same language. However, there is no reason to believe those pre-trained models can also provide a proper initialization for unseen languages, since the distribution of acoustic units might be completely different \cite{zhou2018comparison}. When fully fine-tuning on a different dataset, the training process might degrade the model, resulting in catastrophic forgetting of what was learned during the pre-training phase \cite{mccloskey1989catastrophe}. To mitigate this issue, in this paper, we first analyze the performance of three different PETL methods, namely the bottleneck adapter~\cite{houlsby2019parameter}, prefix tuning \cite{li2021prefix}, and the mix-and-match adapter \cite{he2021towards}, for transferring the pre-trained model to downstream speaker verification tasks. Then, we explore tuning the model on an intermediate dataset before fine-tuning it on a small out-of-domain (cross-language, in our case) dataset. This approach reduces the mismatch between the target and source domains and improves the robustness and discrimination of the learned speaker representations, boosting performance in the low-resource setting. The contributions of our work are as follows: \begin{itemize} \item We demonstrate that PETL methods can be utilized to effectively adapt large-scale pre-trained Transformer models to a specific downstream task (e.g., speaker verification) with few learnable parameters, as shown in Fig \ref{fig:fig1}. \item To further boost the performance in the cross-language low-resource scenario, we tune the pre-trained model using an intermediate dataset before fine-tuning it on a small dataset. This achieves state-of-the-art results on the CNCeleb dataset. \item Extensive experiments on the VoxCeleb corpus \cite{nagrani2017voxceleb,chung2018voxceleb2} show that adapter-based fine-tuning can achieve performance comparable to full fine-tuning by updating less than 4\% of the original model parameters.\footnote{The code will be available with the submission of the final paper.} \end{itemize} \section{Parameter-efficient transfer learning} \label{sec:PETL} In this section, we introduce three state-of-the-art parameter-efficient transfer learning methods, as shown in Fig \ref{fig:sys}.
Unless otherwise emphasized, the parameters of the pre-trained models are frozen during the fine-tuning process, while only the parameters of the lightweight additional modules are trainable. \noindent\textbf{Bottleneck Adapter:} As illustrated in Fig \ref{fig:sys} (b), the bottleneck adapter \cite{houlsby2019parameter} is inserted into each Transformer block of a pre-trained model after the multi-head attention and feed-forward layers, with a residual connection. The bottleneck adapter layer consists of a down projection layer $\mathbf{W}_\textit{down} \in \mathbb{R}^{D_\textit{hidden}\times D_\textit{bottleneck}}$, an up projection layer $\mathbf{W}_\textit{up} \in \mathbb{R}^{D_\textit{bottleneck}\times D_\textit{hidden}}$, as well as a nonlinear activation function $f(\cdot)$. Its frame-by-frame output \begin{equation} \label{b-adapter} \begin{split} \mathbf{H}_\textit{out} = \mathbf{H}_\textit{in} + \mathbf{W}_\textit{up}f(\mathbf{W}_\textit{down}\mathbf{H}_\textit{in}) \end{split} \end{equation} is of the same size as the input $\mathbf{H}_\textit{in} \in \mathbb{R}^{T\times D_\textit{hidden}}$, where $T$ is the length of the input sequence. \noindent\textbf{Prefix Tuning:} Different from the bottleneck adapter, which operates on the layer outputs, prefix tuning \cite{li2021prefix} adds a set of $l$ learnable vectors (virtual tokens) as additional keys and values to each head of the multi-head attention module in each Transformer block. During the fine-tuning phase, these learnable vectors are expected to capture task-related information and adapt the pre-trained model to the downstream task, as shown in Fig \ref{fig:sys} (c). We denote the linear projections of \textit{queries}, \textit{keys} and \textit{values} of each head of each attention module\footnote{We omit the layer and head indices in the symbols to keep the notation uncluttered.} as $\mathbf{W}_Q$, $\mathbf{W}_K$ and $\mathbf{W}_V\in \mathbb{R}^{D_\textit{hidden} \times D_\textit{proj}}$, respectively. $D_\textit{proj}$ is the projected dimension of each head. Typically, $D_\textit{proj}=D_\textit{hidden}/H$, where $H$ is the total number of heads. The attention maps are evaluated as: \begin{equation} \label{prefix} \begin{split} Attn(Q,K_\textit{prefix},V_\textit{prefix}) & = softmax\left(\frac{\mathbf{Q}\mathbf{K}^T_\textit{prefix}}{\sqrt{D_\textit{proj}}}\right)\mathbf{V}_\textit{prefix} \\ \mathbf{K}_\textit{prefix} &= concat(\mathbf{P}_{K},\mathbf{W}_K \mathbf{H}_\textit{in}) \\ \mathbf{V}_\textit{prefix} &= concat(\mathbf{P}_{V},\mathbf{W}_V \mathbf{H}_\textit{in}) \\ \end{split} \end{equation} where two learnable matrices (the two sets of virtual tokens) $\mathbf{P}_K$, $\mathbf{P}_V \in \mathbb{R}^{l \times D_\textit{proj}}$ are prepended to the original \textit{keys} and \textit{values}, respectively. \noindent\textbf{Mix-And-Match Adapter:} To combine and unify the two aforementioned PETL methods, a new variant, named the mix-and-match (MAM) adapter, was proposed in \cite{he2021towards}. As illustrated in Fig \ref{fig:sys} (d), the MAM adapter uses an adapter block that processes the hidden representation in parallel to the feed-forward block. Additionally, it leverages a small prefix tuning module to generate task-related attention maps. \vspace{-1.5mm} \section{Fine-tuning via intermediate dataset} HuBERT-style models consume masked frame-level features to predict a pre-determined discrete target during the unsupervised pre-training phase.
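As a concrete illustration of the modules described in the previous section, the following PyTorch-style sketch shows the bottleneck adapter of Eq. (1) and the prefix concatenation of Eq. (2). It is an illustration only, not the implementation used in this work; the dimensions and the choice of nonlinearity are examples.
\begin{verbatim}
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Eq. (1): H_out = H_in + W_up f(W_down H_in)."""
    def __init__(self, d_hidden=768, d_bottleneck=64):
        super().__init__()
        self.down = nn.Linear(d_hidden, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_hidden)
        self.act = nn.ReLU()  # nonlinearity f(.); the paper does not fix this choice

    def forward(self, h_in):  # h_in: (batch, T, d_hidden)
        return h_in + self.up(self.act(self.down(h_in)))

# Prefix tuning (Eq. 2): prepend l learnable key/value vectors to one attention head.
l, d_proj = 40, 64  # example sizes; d_proj = d_hidden / H
p_k = nn.Parameter(torch.randn(l, d_proj) * 0.02)
p_v = nn.Parameter(torch.randn(l, d_proj) * 0.02)

def prefix_attention(q, k, v):  # q, k, v: (T, d_proj)
    k = torch.cat([p_k, k], dim=0)  # K_prefix: (l + T, d_proj)
    v = torch.cat([p_v, v], dim=0)  # V_prefix: (l + T, d_proj)
    attn = torch.softmax(q @ k.T / d_proj ** 0.5, dim=-1)
    return attn @ v  # (T, d_proj)
\end{verbatim}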
When applying those models to a small dataset, the pre-trained parameters are expected to provide a good starting point for the downstream task. However, when fully fine-tuning in a cross-language low-resource scenario, the learning process often gets stuck in poor local minima. This might be because the target dataset has a distribution of acoustic units that was unseen during pre-training. Thus, to improve the robustness, in this paper, we tune the pre-trained model on a large intermediate supervised SV dataset before fine-tuning it on a small dataset. With this two-step tuning scheme, the task-related model is expected to be reasonably close to the proper setting for the low-resource target task. \vspace{-0.5mm} \section{Experiments} \label{sec:exp} \vspace{-3mm} \subsection{Setup} \textbf{Datasets:} The SV performance is evaluated on the VoxCeleb \cite{nagrani2017voxceleb,chung2018voxceleb2} and CNCeleb \cite{fan2020cn,li2022cn} corpora, both of which are widely used text-independent speaker verification datasets. For VoxCeleb, the training set is the development set of VoxCeleb2. The performance is evaluated on the \textit{VoxCeleb1-O}, \textit{VoxCeleb1-E}, and \textit{VoxCeleb1-H} trials. For CNCeleb, the model is fine-tuned on four different training sets, namely \textit{CNCeleb1-S1}, \textit{CNCeleb1-S2}, \textit{CNCeleb1}, and \textit{CNCeleb.T}, containing 200, 400, 800, and 2800 speakers, respectively. CNCeleb.T is a combination of CNCeleb1-dev and CNCeleb2. The evaluation part \textit{CNCeleb-E} contains 18,849 utterances from 200 speakers. In addition, all training datasets are augmented by adding noise and reverberation. \noindent\textbf{Implementation details:} In this work, we utilize two types of pre-trained models: 1) the Base models, including WavLM Base+ and HuBERT Base, contain a CNN encoder and 12 Transformer layers; the dimension of the Transformer output $D_{hidden}$ is 768, and the total number of parameters of those models is around 94M; 2) the Large model has 24 Transformer blocks with a 1024-dimensional output, resulting in 316M parameters. All experiments are conducted on 8 A100 GPUs for 10 epochs, optimizing the AAM-softmax objective \cite{deng2019arcface} with a margin of 0.2 and a scale of 30. To speed up the training, the learning rate is decreased by 5\% each epoch. The duration of the input raw waveforms is set to 3 seconds. A mini-batch size of 120 is used for training the models. We also adopt large margin fine-tuning (LM-FT) \cite{thienpondt2021idlab} to further boost performance. Specifically, we input longer (5-second) waveforms and set the margin to 0.5 for 2 additional tuning epochs. \noindent\textbf{Performance Metrics:} Both the equal error rate (EER) and the minimum detection cost function (minDCF) are employed to measure the performance of the speaker verification systems. The prior target probability $P_{\textit{tar}}$ is set to 0.01 and 0.05 for DCF1 and DCF5, respectively. $C_{\textit{fa}}$ and $C_{\textit{miss}}$ are set to 1.0. \begin{table}[t] \caption{Results on the VoxCeleb1-O dataset. For a fair comparison, all methods use WavLM Base+ as the frozen pre-trained model.
} \label{tab:1} \centering \scalebox{0.9}{ \begin{tabular}{l|c|l|c} \hline Adapter & Params & Dim & EER (\%) \\ \hline \hline \multirow{3}*{Bottleneck Adapter} & 4.7M & $D_{bottleneck}=128$ & 0.78 \\ & 2.3M & $D_{bottleneck}=64$ & 0.85\\ & 1.2M & $D_{bottleneck}=32$ &0.87 \\ \hline \multirow{2}*{Prefix tuning} & 3.6M & $l=200$ & 1.15 \\ & 0.7M & $l=40$ & 1.09 \\ \hline \multirow{3}*{MAM Adapter} & 5.4M & $D_\textit{bottleneck}=256, l=40$ & \textbf{0.72} \\ & 3.0M & $D_\textit{bottleneck}=128, l=40$ & 0.77\\ & 1.9M & $D_\textit{bottleneck}=64, l=40$ & 0.84 \\ \hline \end{tabular} } \vspace{-0.5cm} \end{table} \begin{table*}[t] \caption{Results on the Voxceleb1 dataset and extended test sets. All models are trained on VoxCeleb2-dev, except \dag – its training data consists of Vox2-dev and Vox1-dev. LM-FT denotes large-margin fine-tuning. The learnable back-end MHFA \cite{peng2022attention} contains 2.2M parameters. } \label{tab:SOTA} \centering \scalebox{0.9}{ \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \hline \multicolumn{1}{c|}{\multirow{2}{*}{Front-end Model}}&\multicolumn{1}{c|}{\multirow{2}{*}{Params }}&\multicolumn{3}{c|}{VoxCeleb1-O}&\multicolumn{3}{c}{VoxCeleb1-E}&\multicolumn{3}{|c}{VoxCeleb1-H}\cr\cline{3-11} & &EER(\%)&DCF1 &DCF5&EER(\%)&DCF1&DCF5 &EER(\%)& DCF1&DCF5 \\ \hline \hline ECAPA-TDNN \cite{kwon2021ins} & 14.7M & 0.90 & - & 0.081 & 1.11 & - & 0.077 & 2.32 & - & 0.155\\ wav2vec-TDNN $^{\dag}$ \cite{novoselov2022robust} & 317M + 3M & 0.84 & 0.058 & - & - & - & - & - & - & - \\ UnispeechSAR\_BASE-TDNN \cite{chen2021wavlm} & 94M+6M & 1.00 & - & - & 0.93 & - & - & 1.87 & - & - \\ \hline \multicolumn{8}{l}{Pre-trained Model: \textbf{HuBERT BASE}, Back-end: \textbf{MHFA}} \\ \hline Full fine-tuning & 94.6M+2.2M & 0.82 & 0.114 & 0.061 & 1.13 & 0.122 & 0.073 & 2.43 & 0.244 & 0.014 \\ Fixed & 0.0M + 2.2M & 1.96 & 0.221 & 0.525 & 2.27 & 0.252 & 0.152 & 4.62 & 0.416 & 0.131 \\ Bottleneck Adapter & 4.7M + 2.2M & 0.98 & 0.138 & 0.068 & 1.21 & 0.137 & 0.081 & 2.61 & 0.260 & 0.162 \\ Prefix Tuning & 3.6M + 2.2M & 1.55 & 0.193 & 0.107 & 1.74 & 0.198 & 0.118 & 3.86 & 0.356 & 0.233\\ MAM Adapter & 5.4M + 2.2M & 0.96 & 0.130 & 0.065 & 1.18 & 0.133 & 0.079 & 2.56 & 0.261 & 0.161 \\ \hline \multicolumn{8}{l}{Pre-trained Model: \textbf{WavLM BASE+}, Back-end: \textbf{MHFA}} \\ \hline Full fine-tuning & 94.7M+2.2M & 0.66 & 0.074 & 0.045 & 0.89 & 0.097 & 0.056 & 1.90 & 0.190 & 0.119 \\ Full fine-tuning [LM-FT] \cite{peng2022attention} & 94.7M+2.2M & 0.59 & 0.069 & 0.041 & 0.79 & 0.089 & 0.050 & 1.73 & 0.177 & 0.107 \\ Fixed & 0.0M + 2.2M & 1.45 & 0.167 & 0.098 & 1.64 & 0.191 & 0.111 & 3.45 & 0.330 & 0.207 \\ Bottleneck Adapter & 4.7M + 2.2M & 0.78 & 0.073 & 0.052 & 0.96 & 0.108 & 0.063 & 2.10 & 0.215 & 0.131 \\ Prefix Tuning & 3.6M + 2.2M & 1.15 & 0.128 & 0.068 & 1.27 & 0.145 & 0.083 & 2.69 & 0.253 & 0.161 \\ MAM Adapter & 5.4M + 2.2M & 0.72 & 0.086 & 0.052 & 0.92 & 0.107 & 0.059 & 2.05 & 0.212 & 0.132 \\ MAM Adapter [LM-FT] & 5.4M + 2.2M & 0.61 & 0.058 & 0.041 & 0.88 & 0.099 & 0.055 & 1.90 & 0.193 & 0.119 \\ \hline \multicolumn{8}{l}{Pre-trained Model: \textbf{WavLM Large}, Back-end: \textbf{MHFA}} \\ \hline Full fine-tuning [LM-FT] & 316M + 2.2M & 0.49 & 0.081 & 0.041 & 0.70 & 0.091 & 0.051 & 1.70 & 0.177 & 0.105 \\ MAM Adapter [LM-FT] & 12.5M + 2.2M & 0.55 & 0.065 & 0.038 & 0.82 & 0.091 & 0.050 & 1.73 & 0.166 & 0.104 \\ \hline \end{tabular} } \vspace{-0.5cm} \end{table*} \begin{table}[t] \caption{Results on the CNCeleb-E dataset with different size of training dataset. 
\textit{CNCeleb1-S1} and \textit{-S2} denote subsets of CNCeleb1 with 200 and 400 randomly selected speakers, respectively. [Int. D] denotes the intermediate dataset, i.e., the Transformer model is first tuned on Vox2-dev before fine-tuning. WavLM Base+ is used as the pre-trained model.} \label{tab:3} \centering \scalebox{0.65}{ \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \multicolumn{1}{c|}{\multirow{3}{*}{Method}}&\multicolumn{2}{c|}{\multirow{1}{*}{\makecell{CNCeleb1-S1}}}&\multicolumn{2}{c|}{\multirow{1}{*}{\makecell{CNCeleb1-S2}}}&\multicolumn{2}{c|}{\multirow{1}{*}{\makecell{CNCeleb1}}}&\multicolumn{2}{c}{\multirow{1}{*}{\makecell{CNCeleb.T}}}\cr & \multicolumn{2}{c|}{\# 200 Spk} & \multicolumn{2}{c|}{\# 400 Spk} & \multicolumn{2}{c|}{\# 800 Spk} & \multicolumn{2}{c}{\# 2800 Spk} \cr \cline{2-9} & EER & DCF1 & EER & DCF1 & EER & DCF1 & EER & DCF1 \\ \hline \hline Sparse FilterBank \cite{peng2022learnable} & - & - & - & - & 12.25 & 0.5391 & - & - \\ Modified x-vector \cite{li2022real} & - & - & - & - & 11.05 & - & - & - \\ R-vector \cite{chen2021self} & - & - & - & - & 8.86 & - & - & - \\ ECAPA-TDNN \cite{zeng2022attention} & - & - & - & - & - & - & 8.93 & 0.5043 \\ \hline FT & 17.43&0.8624 & 12.25 & 0.6384 & 10.05&0.5000 & 7.93&0.4079 \\ FT [Int. D] & 9.25&0.5053 & 8.83 & \textbf{0.4515} & 8.45&0.4145 & 7.71&0.4057 \\ MAM Adapter & 13.73&0.7029 & 11.19 & 0.5715 & 9.45& 0.4745 & 7.52 & 0.4072 \\ MAM Adapter [Int. D] & \textbf{9.12}&\textbf{0.4963} & \textbf{8.58} & 0.4671 & \textbf{7.94}&\textbf{0.4087} & \textbf{6.89}&\textbf{0.3784} \\ \hline \end{tabular} } \vspace{-0.3cm} \end{table} \subsection{Analysis of PETL methods} We first investigate the performance of the three PETL variants. In the field of natural language processing, \emph{prefix tuning} attains performance comparable to the adapter-based method \cite{li2021prefix}. Nevertheless, as we observe from the results in Fig \ref{fig:fig1} and Table \ref{tab:1}, it exhibits the worst performance on the SV task. This might be caused by the model pre-training phase mostly focusing on the semantics within an utterance. In contrast, the SV task requires the ability to discriminate between utterances, which cannot be achieved by modifying the attention weights within the input sequences alone. For both the Bottleneck and MAM adapters, the performance is similar for variants with the same bottleneck dimensionality. However, its architectural design renders the MAM adapter more parameter-efficient, and it is thus our choice for further experiments. The final metrics improve with increased dimensionality, as shown in Fig \ref{fig:fig1}, where the last data point for the MAM adapter corresponds to a bottleneck dimensionality of 512; however, we cap the number of parameters at approximately 5M, which in our opinion is a good trade-off between performance and model size. \vspace{-0.5mm} \subsection{Analysis on in-domain VoxCeleb} We will first analyze the base scenario with a relatively large amount of in-domain labeled data for fine-tuning. Let us concentrate on comparing the fine-tuning of the pre-trained models (HuBERT Base, WavLM Base+, and WavLM Large) via different adapters and the MHFA backend~\cite{peng2022attention}. The bulk of the experiments can be observed in the second and third blocks of Table~\ref{tab:SOTA}, where we work with the HuBERT Base and WavLM Base+ models, respectively.
We observe that the MAM adapter performs similarly to the Bottleneck adapter and significantly better than Prefix Tuning across all analyzed VoxCeleb test sets and both models. Additionally, we can observe only a small degradation when using the MAM adapter versus full fine-tuning of all model parameters. We can also safely claim that all PETL methods and full fine-tuning outperform the case when the pre-trained model is fixed and only the MHFA backend is trained. In the third block of Table~\ref{tab:SOTA}, we can also observe the effect of large margin fine-tuning (LM-FT), which consistently improves the performance of both the full fine-tuning and MAM adapter strategies across all test conditions. For completeness and consistency with our previous work~\cite{peng2022attention}, we also provide the results with WavLM Large and the MAM adapter, including large margin fine-tuning, in the last block of Table~\ref{tab:SOTA}. We can observe a small degradation in performance, as the relative reduction in the number of learnable parameters is now much larger. Finally, in the first block of the same table, we can compare our results with different approaches selected from the literature. The ECAPA-TDNN represents a standard embedding extractor fully trained from scratch on a supervised dataset (VoxCeleb), while the wav2vec-TDNN and UnispeechSAR\_BASE-TDNN represent combinations of a pre-trained model with a TDNN~\cite{snyder2018x} and an ECAPA-TDNN~\cite{desplanques2020ecapa} structure for embedding extraction, respectively. \subsection{Low-resource scenario} This section presents a scenario where we fine-tune with a small amount of labeled data that we would consider out-of-domain w.r.t. the substantially larger labeled dataset that we call the \emph{intermediate dataset} (VoxCeleb2-dev). As the out-of-domain test set, we chose the CNCeleb-E benchmark, with the corresponding out-of-domain training sets formed from CNCeleb1 and CNCeleb.T. We perform our experiments with the WavLM Base+ pre-trained model. Our results are presented in the second block of Table~\ref{tab:3}, where we also analyze the impact of the amount of available training data for fine-tuning (adaptation). When confronted with a small amount of training data (200, 400, and 800 speakers), we can observe that direct full fine-tuning (FT) on this data yields the worst results. Only after fine-tuning on CNCeleb.T with 2800 speakers does the performance of direct fine-tuning fall into the same ballpark as the other approaches that we analyze next. The rather average performance of full fine-tuning and its abrupt degradation with decreasing training data size suggest that it is indeed problematic to re-train such a large number of parameters (94.7M + 2.2M for the backend) in a low-resource scenario. The immediate solution might be to leave the pre-trained model fixed and only train the proposed MAM adapter (5.4M + 2.2M parameters). Results with this approach are in the third row of the second block in Table~\ref{tab:3}, and we can indeed observe an improvement for low-resource scenarios, but it diminishes when using larger training data such as CNCeleb.T. In the next two approaches, we make use of an intermediate dataset that represents a valuable resource for focusing the large model on the SV task. First, we take the model that is fully fine-tuned on the VoxCeleb2-dev dataset (first row in the third block of Table~\ref{tab:SOTA}) and further fine-tune it on CNCeleb data. This system is denoted by FT [Int. D] in Table~\ref{tab:3}.
We can observe a significant improvement w.r.t. direct fine-tuning and even the direct use of the MAM adapter, especially in low-resource scenarios. Again, the improvements diminish with a larger amount of available training data (CNCeleb.T). Finally, we start again with the same model, add the MAM adapter, and train it on CNCeleb data. This yields the best overall results across all analyzed scenarios and even significantly outperforms the previous approaches when a larger amount of training data is available. This final approach is especially practical in the sense that we need to store only the parameters of the MAM adapter and the MHFA backend (approximately 5\% of the original model size) in order to switch to a new domain while retaining the best possible performance. \section{Conclusion} In this paper, we demonstrate the effectiveness of several PETL methods in the field of speaker verification. The large pre-trained model is frozen, and we only update the inserted lightweight modules. We show that the PETL strategy with the MAM adapter outperforms simple direct fine-tuning in a low-resource scenario. Additionally, we have demonstrated that having a large labeled intermediate dataset can further improve the overall performance, as it preconditions the large Transformer-based model for the intended task, which in our case is speaker verification. Using a model directly fine-tuned on such a dataset and subsequently training the MAM adapter on low-resource, out-of-domain data, we achieve the best possible performance while retaining the practicality of storing compact model variants for many different domains. { \ninept \bibliographystyle{IEEEbib}
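For reference, the AAM-softmax objective used for training (margin 0.2, scale 30; see the experimental setup) follows the standard additive-angular-margin formulation. The sketch below is a generic PyTorch illustration of that loss under the stated hyper-parameters, not the implementation used in this work.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class AAMSoftmax(nn.Module):
    """Generic additive angular margin softmax: s*cos(theta + m) on the target class."""
    def __init__(self, emb_dim, n_spk, margin=0.2, scale=30.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_spk, emb_dim))
        self.margin, self.scale = margin, scale

    def forward(self, emb, label):  # emb: (B, emb_dim), label: (B,)
        cos = F.linear(F.normalize(emb), F.normalize(self.weight))  # cosine similarities
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target = F.one_hot(label, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.margin), cos)
        return F.cross_entropy(self.scale * logits, label)
\end{verbatim}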
{ "timestamp": "2022-10-31T01:11:20", "yymm": "2210", "arxiv_id": "2210.16032", "language": "en", "url": "https://arxiv.org/abs/2210.16032" }
\section{Introduction} Robotic solutions are becoming increasingly prevalent in our personal and professional lives, and have started to evolve into close collaborators~\cite{Bauer.2008, Fong.2003, ArevaloArboleda.2020}. These so-called cobots support humans in various ways that were unimaginable just a few years ago. Enabled by technological advances, newer lightweight materials, and improved safety sensors, they are gaining increasing popularity in domestic care, supporting people with disabilities in their everyday lives. A non-negligible number of people live with motor impairments, ranging from slight limitations to severe paralysis~\cite{Pflegestatistik}. While a near-complete integration into personal and social life is the final goal, current cobots focus on performing activities of daily living~\cite{Pascher.2019}. These include essentials like eating and drinking or more complex tasks such as grooming and activities associated with leisure time~\cite{Pascher.2021recommendations}. However, new potential issues arise when cobots are tasked with autonomous or semi-autonomous actions, resulting in added stress for end-users~\cite{Pollak.2020}. Particularly close proximity collaboration between humans and cobots remains challenging~\cite{gruenefeld2020mind}. These challenges include effective communication to the end-user of (a) motion intent and (b) the spatial perception of the cobot's vicinity~\cite{Chadalavada.2015,Pascher.2022}. Accurate communication increases user understanding while avoiding the unpredictability regarding impending steps, motions, and sensed environment parameters. While visualizations of motion intent have been extensively studied~\cite{Andersen.26.08.201631.08.2016,Chadalavada.2015,Coovert.2014,Stulp.28.09.201502.10.2015,Watanabe.28.09.201502.10.2015,gruenefeld2020mind}, communicating cobot intention via haptic has received less attention~\cite{Grushko.2021tactile,Grushko.2021haptic}. We investigate a new visual-haptic approach that communicates the cobot's intention, focusing primarily on information about its planned path, to the human collaborator (see~\autoref{fig:teaser}). Information about the path is crucial as, particularly in a close proximity collaboration situation, any misunderstanding in the cobot's motion intention can result in errors in behavior. These range from knocking over objects or even destroying them in the process to potentially harming the user. Minimizing these risks is an important step in the development of effective robotic solutions with wide end-user acceptance. The complex-looking supporting actions of the cobot are often just a series of pick \& place tasks. In our test scenario the user sits in front of a table with an object on it and a cobot mounted to the surface. The cobot assists the user by picking up the object and placing it in a dedicated spot on the table. \section{Related Work} Previous literature has focused either on (a) visualization or (b) haptic techniques to communicate the cobot's motion intention to the user. Combining these two approaches we focus on ways cobots can effectively communicate their planned path with a visual-haptic solution. \subsection{Visualization Techniques to Communicate Cobot's Motion Intention} In recent decades, Augmented Reality (AR) technology has been frequently used for human-robot collaboration~\cite{Dianatfar.2021}. 
Previous work focused mainly on the use of Head-Mounted Displays (HMDs), Mobile Augmented Reality (MAR), and Spatial Augmented Reality (SAR) for the visualization of the cobot motion intent~\cite{Rosen.2019,Walker.2018,gruenefeld2020mind}. Rosen et\,al. showed that AR is an improvement compared to classical desktop interfaces when visualizing the intended motion of robots~\cite{Rosen.2019}. However, while visualizations of motion intent have been studied extensively in previous work~\cite{Andersen.26.08.201631.08.2016,Chadalavada.2015,Coovert.2014,Stulp.28.09.201502.10.2015,Watanabe.28.09.201502.10.2015,gruenefeld2020mind}, communicating cobot intention via haptic feedback has not attracted as much attention. \subsection{Haptic Techniques to Communicate Cobot's Motion Intention} Previous research used haptic and tactile feedback to guide users in a specific direction, for example by providing vibration feedback~\cite{Lehtinen.2012,Barralon.2009,Chen.2018,Hong.2017,Weber.2011}. Grushko et\,al. transferred these findings into the domain of human-robot collaboration by communicating directional information through the activation of six actuators on a glove spatially organized to represent an orthogonal coordinate frame~\cite{Grushko.2021tactile}. The vibration activates on the side of the glove that is closest to the future path of the robot. They also use this haptic device to notify the user about the robot's currently planned trajectory and status changes~\cite{Grushko.2021haptic}. \section{Approach} In earlier work, we developed an adaptive control interaction method based on a recommendation system generated by a Convolutional Neural Network (CNN)~\cite{Kronhardt.2022}. From the cobot's seven Degrees of Freedom (DoF), the adaptive control combined several DoFs to provide more straightforward control to the user with fewer necessary mode-switches. We compared the novel adaptive control method, with two different visualization techniques, to the standard mode-switch approach with cardinal DoF mappings. We also designed a Virtual Reality (VR) environment based on a photogrammetry scan of a physical room. The environment included a virtual model of the \emph{Kinova Jaco}\footnote{Kinova Jaco robot arm: \url{https://assistive.kinovarobotics.com/product/jaco-robotic-arm}, last retrieved \today} robot arm attached to a table, a red target area, and a blue block (see Figure~\ref{fig:apparatus}). \begin{figure}[htbp] \centering \subfloat[]{\includegraphics[width=0.7\linewidth]{Apparatus.png}\label{fig:apparatus}} \hfill \subfloat[]{\includegraphics[width=0.268\linewidth]{Haptic_blank.png}\label{fig:glove}} \caption{(a) The virtual environment: description screen (\textbf{Left}); \emph{Kinova Jaco} with visualisation for control type \emph{Single Arrow} (\textbf{Right}); table with blue block and red target (\textbf{Bottom}); (b) \emph{Sensorial XR} haptic glove.} \label{fig:overview} \end{figure} The virtual environment was developed to be compatible with the \emph{Oculus Quest 2}\footnote{Oculus Quest 2: \url{https://www.oculus.com/quest-2/}, last retrieved \today} VR headset. This provided us with a VR testbed environment for developing and evaluating further feedback techniques. Currently, we aim to develop multi-modal feedback methods for human-robot collaboration that go beyond visual and audio channels by providing haptic feedback.
We are working on different concepts to communicate the cobot's motion intent via vibrotactile feedback by using the \emph{Sensorial XR}\footnote{Sensorial XR: \url{https://sensorialxr.com/}, last retrieved \today} haptic glove (see Figure~\ref{fig:glove}). To communicate directional information to the human collaborator, different mappings of vibrotactile actuators correspond to matching DoF combinations of the adaptive control. This brings the Cartesian coordinate systems of the cobot and the glove in line to provide an intuitive mapping. Changes in the intensity of the actuators indicate the amount of directional change, thus enabling the user to better imagine the path generated by the recommendation system, resulting in a low end-user task load. \bibliographystyle{ACM-Reference-Format}
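To make the intended mapping more tangible, the snippet below is a purely hypothetical Python sketch of how a normalized motion direction could be turned into per-actuator vibration intensities. The axis-to-actuator assignment and the absence of any SDK call are our own simplifications for illustration and do not reflect the actual \emph{Sensorial XR} API.
\begin{verbatim}
# Hypothetical illustration: map a cobot motion direction to glove actuator
# intensities; stronger vibration indicates a larger change along that axis.
ACTUATORS = {"+x": 0, "-x": 1, "+y": 2, "-y": 3, "+z": 4, "-z": 5}

def direction_to_intensities(direction):
    """direction: (dx, dy, dz) in the shared Cartesian frame."""
    norm = max(sum(c * c for c in direction) ** 0.5, 1e-9)
    intensities = [0.0] * len(ACTUATORS)
    for axis, value in zip("xyz", direction):
        side = "+" if value >= 0 else "-"
        intensities[ACTUATORS[side + axis]] = abs(value) / norm
    return intensities

print(direction_to_intensities((0.6, -0.2, 0.1)))
\end{verbatim}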
{ "timestamp": "2022-10-31T01:11:13", "yymm": "2210", "arxiv_id": "2210.16027", "language": "en", "url": "https://arxiv.org/abs/2210.16027" }
\section{Introduction} The human body has been identified as having five major senses. Vision is one of the most integral senses and is often estimated to contribute up to 80\% of the brain's processing of environmental perception. This underlines the disruption and inconvenience that any form of vision impairment can cause. Cataract is one of the leading causes of vision impairment and, in extreme cases, even permanent blindness. As per the World Health Organisation \cite{WHO2022Vision}, approximately 94 million people had cataracts in 2021. Another survey led by the World Health Organisation \cite{S2008Current} suggested that 47.8\% of worldwide blindness, and 51\% of blindness in South Asia, including India, is due to cataract. Cataract refers to the condition in which a cloudy area develops on the lens, associated with aging and degradation of the eye tissues, leading to blurry vision. Cataracts can be categorized into nuclear, cortical, posterior subcapsular, and congenital cataracts. \\\\ Nuclear cataracts \cite{Clinic2022Cataracts} mainly affect the center of the lens; at first they cause nearsightedness or temporary impairment of vision, but with time the lens turns densely yellow and sometimes brown, which greatly impedes vision and leads to difficulty in identifying colors. Cortical cataracts affect the edges of the lens; at the start they create white wedge-shaped opacities on the outer edge of the cortex that slowly progress towards the center of the lens. Posterior subcapsular cataracts affect the back of the lens and create a tiny opaque area near the back of the lens, which interferes with the person’s reading ability. Lastly, congenital cataracts are genetic, and people are usually born with this condition. Throughout the years, researchers have proposed and developed several Computer Aided Detection systems for the diagnosis of ocular diseases. For instance, Color Fundus Photographs (CFPs) and Optical Coherence Tomography have been used to extract and represent global and local features of the eye for diagnostic purposes. These image results have also been combined with personal demographic data for refined diagnostic results. \\\\ In recent years, a plethora of Deep Learning based Neural Network models have been explored and developed by several researchers to automate and enhance the precision of ocular diagnostics. These models have shown great promise in ocular disease detection and classification. This has resulted in an extensive study of Convolutional Neural Networks \cite{o2015introduction} in this space. The models have proved to be successful in segmenting retinal vessels and classifying a specific ocular disease. However, only a few of the existing studies have addressed the task of classifying multiple ocular diseases from fundus images. \\\\ In this study, we propose a combined CNN-LSTM model that would assist ophthalmologists in classifying fundus images as normal or cataractous, rapidly and with high precision. In the proposed system, the input image is first passed to CNN \cite{o2015introduction} layers that extract relevant features from the image, and these features are then forwarded to the LSTM \cite{greff2016lstm} layers, which, because of their ability to learn long-term dependencies, serve as the classifier. \section{Related Work} Image understanding systems that exploit machine learning (ML) techniques have been rapidly evolving in recent years.
Convolutional neural networks are becoming a mainstream solution for analyzing medical images \cite{yamashita2018convolutional}. CNNs are state-of-the-art models that extract visual information from a given set of inputs to carry out generative and descriptive tasks. CNNs combine various techniques that can be exploited for learning image representations and classifying features. Experts are scarce, consultation charges are high, and the complexities and anomalies in medical images are difficult to identify. Therefore, computer-aided deep learning tools can significantly impact diagnosis by providing more accurate results than manual examination alone. \\\\ Ocular diseases, such as diabetic retinopathy, cataract, and age-related macular degeneration, are common and can be detrimental if not treated appropriately. They can be associated with an increased risk of ischemic heart disease death in diabetic patients. Early detection is vital but difficult due to the lack of visible symptoms in the early stages. In ophthalmology, colour fundus photography is an economical and effective tool for early-stage ocular disease screening. C. Li et al. \cite{li2020dense} proposed a Dense Correlation Network (DCNet) built on a CNN backbone (ResNet-18, 34, 50, and 101, with parameters initialized from ImageNet pretraining, an initial learning rate of 0.007, a power of 0.9, and 50 epochs, using binary cross-entropy as the loss function) for the extraction of feature representations, together with a Spatial Correlation Module (SCM) to exploit correlations. The ODIR dataset \cite{ODIR} (a structured real-life ophthalmic dataset of paired fundus images from 5000 patients collected by Shanggong Medical Technology Co., Ltd. from different hospitals/medical centres in China) was used. The final results of the model for ResNet-18, 34, 50, and 101 with SCM are 78.5\%, 80.8\%, 82.2\%, and 82.7\%, respectively. The model showed better results with SCM than without it. Further accuracy enhancements could be studied using other transfer learning adaptations and custom models. \\\\ One of the most challenging tasks for ophthalmologists is the early screening and diagnosis of ocular diseases from fundus images. That is why a computer-aided automated ocular disease detection system is required for the early detection of various ocular disorders using fundus images. N. Dipu et al. \cite{dipu2021ocular} presented a study of four deep learning-based models for targeted ocular disease detection, namely ResNet-34, EfficientNet, MobileNetV2, and VGG-16, on the ODIR dataset consisting of 5000 fundus images that belong to 8 different classes. Each of these classes represents a different ocular disease. The VGG-16 model achieved an accuracy of 97.23\%; the ResNet-34 model reached 90.85\%; the MobileNetV2 model provided an accuracy of 94.32\%; and the EfficientNet classification model achieved an accuracy of 93.82\%. Thus, out of these models, VGG-16 provided the best accuracy of 97.23\% in classifying ocular diseases from fundus images. \\\\ India has a blind population of approximately 15 million, and the sad reality is that 75\% of these cases are curable. The doctor-patient ratio in India is 1:10,000. Studies have found that Diabetic Retinopathy (DR) and glaucoma are the leading causes of blindness in India. DK. Prasad et al. \cite{prasad2015early} proposed a deep neural network model that helps to detect the presence of diabetic retinopathy and glaucoma at their early stages.
From a screening standpoint, it can alert patients to consult an ophthalmologist. The developed model is relatively simple, using a 5-layer network, and achieved an accuracy of 80\% on the ODIR dataset. More accuracy could be obtained by adding dropout layers and dense layers along with the Conv2D layers; these layers would reduce overfitting and improve the feature extraction capability of the system. \\\\ Research shows that many leading causes of vision impairment (such as glaucoma) cannot be cured. Data show that, due to the lack of visual symptoms in glaucoma and the shortage of clinical resources, over 90\% of glaucoma cases remain undetected. Several computer-aided detection (CAD) systems have been proposed for the leading causes of vision impairment. Retinal fundus or OCT images are used to extract global or local visual features. However, different ocular diseases have different characteristics; most existing methods are specifically designed for single-disease detection, and how they could be used to detect other diseases remains unclear. Y. Xu et al. \cite{xu2018ocular} proposed a novel model based on a unified multiple kernel learning (MKL) framework, called MKLclm, to automatically detect ocular diseases by the effective fusion of personal demographic data, genome information, and visual information from retinal fundus images through the incorporation of pre-learned SVM classifiers. The data used here is the Singapore Malay Eye Study (SiMES) database from a population-based study (among the 2258 subjects after quality control, there are 100 with glaucoma, 122 with age-related macular disease, and 58 with pathological myopia). The model showed an accuracy of 85.3\% for glaucoma prediction, 73.2\% for age-related macular disease prediction, and 88.2\% for pathological myopia. These results were better than those of the single-kernel SVM and standard MKL baselines. Although the model provides a centralized method for multi-disease detection, its accuracy is relatively low. Hence, alternative models can be tried to enhance the accuracy. \section{Methodology} This section covers the dataset description, data preprocessing, and the proposed architecture. The proposed architecture is a combination of CNN (Convolutional Neural Network) and LSTM (Long Short-Term Memory) layers. The CNN layers function as feature extractors, learning image characteristics, whilst the LSTM layers function as the classifier. \subsection{Dataset Description} To train and validate the model, we used the ODIR dataset in this study. Ocular Disease Intelligent Recognition (ODIR) \cite{ODIR} is a real-case-based structured ophthalmic database of 5000 patients collected by Shanggong Medical Technology Co., Ltd. from different hospitals/medical centers in China. The dataset contains patients' age, color fundus photographs of the left and right eyes, and the doctors' diagnostic keywords. The fundus images are captured by various cameras in the market, such as Canon, Zeiss, and Kowa, resulting in varied image resolutions. Each patient is classified into at least one of the following eight categories: normal (1140 cases), diabetes (1128 cases), glaucoma (215 cases), cataract (212 cases), AMD (164 cases), hypertension (103 cases), myopia (174 cases), and other diseases/abnormalities (979 cases), based on both CFPs and additional clinical features, by trained human readers.
\begin{figure}[H] \centering \captionsetup{justification=centering,margin=2cm} \includegraphics[width=\textwidth]{sample.jpeg} \caption{Visualization of sample images in the ODIR dataset. (a) and (b) are normal fundus images and (c) and (d) are cataractous fundus images.} \label{fig:fig1} \end{figure} \subsection{Dataset Preprocessing} The ODIR dataset consisted of 594 images labeled as cataract and 5,675 images labeled as normal. To avoid an imbalanced dataset, 594 images labeled as normal were randomly selected. Hence, the dataset consisted of 1,188 images in total. Subsequently, image augmentation was applied to increase the size of the dataset, to avoid overfitting, to aid the model's generalization, and to mimic real-world scenarios. Two augmentations were applied to the original dataset: a +30 degree rotation and a -30 degree rotation. After augmentation, the dataset consisted of 3,564 images. The dataset was then split into training and testing sets in a 70:30 ratio. \begin{figure}[H] \centering \includegraphics[width=0.68\textwidth]{ocular_architecture_bg.png} \caption{Architecture diagram of the proposed CNN-LSTM Model.} \label{fig:fig1} \end{figure} \subsection{Convolutional Neural Network} Convolutional Neural Networks \cite{o2015introduction} are quite similar to typical neural networks, but unlike multilayer perceptrons, ConvNets assume the input is a two-dimensional image, which aids in learning intricate image characteristics and attributes. CNNs have various applications, such as image classification, image segmentation, object identification, and image generation, and ConvNets have proved to be particularly beneficial in medical imaging as a result of these applications. A Convolutional Neural Network comprises three main types of layers: convolutional, pooling, and dense (fully connected) layers. \\\\ The foremost and most crucial layer of a CNN is the convolutional layer. Convolution is a long-established image processing operation in which an input image is combined with a kernel to produce an output. The convolutional layer performs the same operation, convolving the input image with a collection of filters or kernels that are learned and updated throughout the training process, resulting in a smaller convolved image that is forwarded to the next layer. The convolution operation follows this formula, where $I$ is the input image and $K$ is the kernel. \begin{equation} S(i, j) = (I \cdot K)(i, j) = \sum_m \sum_n I(m, n) K(i-m, j-n) \end{equation} Second, the pooling layer is employed between consecutive convolutional layers in the ConvNet. The pooling layer's main function is to gradually lower the dimension of the feature maps, pass just the significant information to the next layer, and reduce the number of network parameters. Pooling layers also help control overfitting. Pooling layers are classified into three types: maximum pooling, average pooling, and global pooling. Max pooling takes the maximum value of each patch in the feature map. Average pooling calculates the average of each patch of the feature map. Finally, global pooling reduces each feature map to a single value. Max pooling is the most commonly used pooling technique because it extracts the most important features, such as bright regions and edges. Finally, towards the network's end, the fully connected or dense layer is typically used. The neurons in the dense layer are linked to every neuron in the previous layer.
\subsection{Long Short-Term Memory}
Long Short-Term Memory networks (LSTMs) \cite{greff2016lstm} are a kind of Recurrent Neural Network (RNN) \cite{sherstinsky2020fundamentals}. Unlike plain RNNs, LSTMs perform remarkably well on long-term dependencies. An RNN cell contains a single tanh layer, whereas an LSTM cell has a structure of four interacting modules that helps prevent the vanishing gradient problem that occurs in RNNs. The key feature of the LSTM is the cell state, to which information can be added or from which it can be removed, as regulated by structures known as gates. LSTMs have three gates: a forget gate, an input gate, and an output gate. In the equations below, $x_{t}$ is the current input, $C_{t}$ and $C_{t-1}$ are the new and previous cell states, and $h_{t}$ and $h_{t-1}$ are the current and previous outputs. \\\\ The sigmoid layer in the forget gate decides whether to keep or discard the information from the previous step, following Equation 2. The sigmoid output lies between 0 and 1; a value close to 0 means ``discard the previous information,'' whereas a value close to 1 means ``keep the previous information.''
\begin{equation}
f_{t} = \sigma (W_{f} \cdot [h_{t-1}, x_{t}] + b_{f})
\end{equation}
The LSTM then chooses what new information should be added to the cell state. The input gate performs this step and is composed of two layers, a sigmoid and a tanh: the sigmoid decides which values to update, and the tanh produces a vector of candidate values that may be added to the cell state. The old state is then updated by multiplying it by $f_{t}$ (the values to forget) and adding the scaled candidate values. The input gate is described by Equations 3, 4, and 5.
\begin{equation}
i_{t} = \sigma (W_{i} \cdot [h_{t-1}, x_{t}] + b_{i})
\end{equation}
\begin{equation}
\widetilde{C_{t}} = \tanh(W_{C} \cdot [h_{t-1}, x_{t}] + b_{C})
\end{equation}
\begin{equation}
C_{t} = f_{t} \cdot C_{t-1} + i_{t} \cdot \widetilde{C_{t}}
\end{equation}
Finally, the output gate determines the LSTM's output. The previous output and current input are first passed through a sigmoid layer, the cell state is passed through a tanh so that its values lie between -1 and 1, and the two results are multiplied to give the LSTM output, as in Equations 6 and 7.
\begin{equation}
o_{t} = \sigma (W_{o} \cdot [h_{t-1}, x_{t}] + b_{o})
\end{equation}
\begin{equation}
h_{t} = o_{t} \cdot \tanh(C_{t})
\end{equation}

\subsection{Proposed Architecture}
This paper proposes a combined CNN-LSTM model for classifying ocular fundus images as normal or cataractous. The network consists of 16 layers: one input layer, three convolutional layers, three max pooling layers, three batch normalization layers, one dropout layer, one flatten layer, one fully connected dense layer, two LSTM layers, and one output dense layer with a sigmoid activation function. The input layer takes images of dimension $224 \times 224 \times 3$ and is followed by three successive blocks, each consisting of a convolutional layer, a max pooling layer, and a batch normalization layer. Each convolutional layer uses a $3 \times 3$ kernel and a ReLU \cite{agarap2018deep} activation function, and each max pooling layer uses a $2 \times 2$ pool size. The output of the three blocks is passed to a dropout layer with a dropout rate of 0.2, randomly dropping 20\% of the neurons during training, followed by a flatten layer that reduces the two-dimensional representation to a one-dimensional vector. The flattened vector is then passed to a fully connected dense layer with 256 units and a ReLU activation function, followed by two LSTM layers with 256 units each and a tanh activation function. Finally, the output of the last LSTM layer is passed to a dense layer with a sigmoid activation for classification. Figure \ref{fig:fig2} depicts the proposed CNN-LSTM network's design.
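The following TensorFlow/Keras sketch shows one way the described network could be assembled. The paper states the kernel and pool sizes but not the number of filters in each convolutional layer, nor how the 256-unit dense output is turned into a sequence for the LSTM layers; the filter counts (32, 64, 128), the single-time-step \texttt{Reshape}, and the Adam optimizer with binary cross-entropy loss are therefore assumptions, not the authors' stated configuration.

\begin{verbatim}
# Sketch of the described CNN-LSTM classifier (TensorFlow/Keras).
# Filter counts, the Reshape step, optimizer and loss are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(input_shape=(224, 224, 3)):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # Three blocks of Conv2D (3x3, ReLU) -> MaxPooling2D (2x2) -> BatchNorm.
    for filters in (32, 64, 128):                      # assumed filter counts
        model.add(layers.Conv2D(filters, (3, 3), activation="relu"))
        model.add(layers.MaxPooling2D(pool_size=(2, 2)))
        model.add(layers.BatchNormalization())
    model.add(layers.Dropout(0.2))                     # drop 20% at random
    model.add(layers.Flatten())
    model.add(layers.Dense(256, activation="relu"))
    # LSTMs expect (timesteps, features); treat the dense output as a
    # single time step (an assumption, not stated in the paper).
    model.add(layers.Reshape((1, 256)))
    model.add(layers.LSTM(256, activation="tanh", return_sequences=True))
    model.add(layers.LSTM(256, activation="tanh"))
    model.add(layers.Dense(1, activation="sigmoid"))   # normal vs. cataract
    return model

model = build_cnn_lstm()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
\end{verbatim}

Other reshaping choices are possible, for example treating the rows of the last feature map as time steps before the LSTM layers; the single-time-step variant above is simply the most direct reading of the layer order given in the text.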
\begin{figure}[H]
\centering
\captionsetup{justification=centering,margin=1cm}
\includegraphics[width=\textwidth]{output.png}
\caption{(a) Confusion matrix of the proposed model on the test set. (b) ROC (Receiver Operating Characteristic) curve of the results on the test set.}
\label{fig:fig3}
\end{figure}

\section{Results and Discussion}
\subsection{Evaluation Metrics}
Performance metrics are used to evaluate the model's effectiveness; they indicate how well the model performed on the given data. The proposed study evaluates the CNN-LSTM architecture using the following six metrics: accuracy, precision, recall, sensitivity, specificity, and F1 score.
\subsubsection{Accuracy} Accuracy measures the fraction of data points that the classifier labels correctly.
\subsubsection{Precision} Precision, also known as positive predictive value, is the fraction of the model's positive predictions that are actually positive.
\subsubsection{Recall} Recall measures the model's ability to identify positive samples; the more positive samples identified, the higher the recall.
\subsubsection{Sensitivity} Sensitivity reports how many positive instances the model correctly identified; it is the same as recall.
\subsubsection{Specificity} Specificity is the proportion of true negatives correctly identified by the model.
\subsubsection{F1 Score} The F1 score combines a classifier's precision and recall into a single metric by taking their harmonic mean.
\subsection{Results}
The proposed CNN-LSTM model achieved strong results on the testing set, with an accuracy, precision, recall, sensitivity, specificity, and F1 score of 97.53\%, 95.64\%, 99.62\%, 100\%, 98.48\%, and 97.59\%, respectively.

\section{Conclusion and Future Scope}
In this study, we developed a combined CNN-LSTM model for classifying ocular fundus images as normal or cataractous and trained it on the widely used ODIR dataset. The model performed well on both seen and unseen data, achieving a training accuracy of 99.918\% and a testing accuracy of 97.24\%. Our proposed model obtained better results than existing CNN-based and pre-trained models used for ocular disease classification, while also lowering time and space complexity. \\\\ In the future, we plan to generalize the model to multiple ocular diseases, including myopia and glaucoma, and, given satisfactory results, to develop the system into a full-fledged product. As complex ocular diseases become more prevalent, this system could provide inexpensive and portable ocular diagnostic solutions, potentially reducing diagnostic time and improving treatment by providing deeper insights.

\bibliographystyle{unsrt}
{ "timestamp": "2022-10-31T01:13:04", "yymm": "2210", "arxiv_id": "2210.16093", "language": "en", "url": "https://arxiv.org/abs/2210.16093" }
"\\section{Introduction}\n\nFor two decades several examples of the AdS/CFT correspondence have been(...TRUNCATED)
{"timestamp":"2022-10-31T01:13:43","yymm":"2210","arxiv_id":"2210.16128","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\\label{sect:intro}\n\nCorrelation functions (or scattering amplitudes in (...TRUNCATED)
{"timestamp":"2022-10-31T01:11:52","yymm":"2210","arxiv_id":"2210.16052","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\nThe Holographic Principle (HP) was originally stated as a conjecture by (...TRUNCATED)
{"timestamp":"2022-10-31T01:11:01","yymm":"2210","arxiv_id":"2210.16021","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\\label{Sec1}\n\nGeneral relativity (GR) has stood up to a variety of test(...TRUNCATED)
{"timestamp":"2022-10-31T01:13:43","yymm":"2210","arxiv_id":"2210.16130","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\\label{sec:intro}\nProper phrase break is crucial to oral performance\\ci(...TRUNCATED)
{"timestamp":"2022-10-31T01:11:14","yymm":"2210","arxiv_id":"2210.16029","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\\label{sec:introduction}\n\nCosmic surveys are rapidly closing in on the (...TRUNCATED)
{"timestamp":"2022-10-31T01:11:01","yymm":"2210","arxiv_id":"2210.16020","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\nThe famous Cahn--Hilliard equation was originally established to model p(...TRUNCATED)
{"timestamp":"2022-10-31T01:10:51","yymm":"2210","arxiv_id":"2210.16017","language":"en","url":"http(...TRUNCATED)