\section{Introduction}

Artificial intelligence has greatly impacted the drug discovery pipeline by achieving human-like performance in the field of retrosynthesis \cite{seglerPlanningChemicalSyntheses2018}. Retrosynthesis is the task of recursively breaking down a chemical compound into molecular precursors until a set of commercially available building block molecules is found \cite{seglerPlanningChemicalSyntheses2018, coreyLogicChemicalSynthesis1989}. Consequently, the goal is to provide a valid synthesis route for a molecule. Potential applications of these synthesis routes are suggestions for medicinal chemists on how to produce a molecule of interest \cite{coleyMachineLearningComputerAided2018}, a foundation for autonomous chemistry \cite{coleyRoboticPlatformFlow2019}, and using synthesizability as part of de novo drug design \cite{schneiderComputerbasedNovoDesign2005}.

The field of computational retrosynthesis prediction is separated into two different tasks \cite{schwallerMachineIntelligenceChemical2022}. In single-step retrosynthesis, the goal is to find the likely precursors, or reactants, for a given product. In multi-step retrosynthesis planning, the goal is to find viable synthesis paths over multiple reaction steps. Single-step retrosynthesis prediction is treated as a supervised learning task, commonly categorized as template-based or template-free \cite{dongDeepLearningRetrosynthesis2021}. Template-based approaches use manually curated or data-driven reaction templates \cite{thakkarArtificialIntelligenceAutomation2021}, which encode the general atom and bond patterns required to perform a reaction. The objective is therefore to predict the most appropriate template with which to break down the molecule \cite{seidlImprovingFewZeroShot2022, chenDeepRetrosyntheticReaction2021a}. Template-free approaches treat the problem as sequence prediction, predicting one token of a chemical SMILES vocabulary at a time \cite{tetkoStateoftheartAugmentedNLP2020, irwinChemformerPretrainedTransformer2022}, drawing inspiration from natural language processing \cite{vaswaniAttentionAllYou2017}. Recently, variations of these two approaches have emerged. In semi-template-based approaches \cite{sachaMoleculeEditGraph2021a, wangRetroPrimeDiversePlausible2021a}, a molecule is first broken down into subparts, which are then rebuilt into chemically viable reactants. Lastly, although many models leverage the sequence-based SMILES notation, there are also attempts to utilize graph-based descriptors across these approaches, exploiting the advantages of a molecular graph \cite{chenDeepRetrosyntheticReaction2021a, tuPermutationInvariantGraphtoSequence2022}.
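For concreteness, a template-based prediction amounts to matching a reaction template against the product and applying it in reverse. A minimal sketch with RDKit follows; the ester-disconnection SMARTS and the toy target are illustrative assumptions for exposition, not templates used by the benchmarked models.

\begin{verbatim}
from rdkit import Chem
from rdkit.Chem import AllChem

# Illustrative retro-template (an assumption, not one of the paper's
# templates): disconnect an ester into a carboxylic acid and an alcohol.
retro = AllChem.ReactionFromSmarts(
    "[C:1](=[O:2])[O:3][C:4]>>[C:1](=[O:2])[OH].[O:3][C:4]"
)

product = Chem.MolFromSmiles("CC(=O)OCC")  # ethyl acetate as a toy target
for reactants in retro.RunReactants((product,)):
    for mol in reactants:
        Chem.SanitizeMol(mol)  # recompute valences / implicit hydrogens
    print(" + ".join(Chem.MolToSmiles(mol) for mol in reactants))
# expected output: CC(=O)O + CCO
\end{verbatim}

A real template library contains thousands of such patterns extracted automatically from atom-mapped reaction corpora, and the model's task is to rank which pattern to apply.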
In comparison to single-step retrosynthesis, multi-step retrosynthesis planning focuses on researching novel route search algorithms that use a fixed single-step model to identify retrosynthetic disconnections. The pioneering work in the field uses neural-guided \gls{MCTS} and a template-based approach to find synthesis routes \cite{seglerPlanningChemicalSyntheses2018}. Instead of assessing the state value in the search tree at run-time, alternative methods use oracle functions to guide the tree search. These methods include \gls{DFPN} search with edge cost, which combines classical \gls{DFPN} with a neural heuristic \cite{kishimotoDepthfirstProofnumberSearch2019}, and Retro*, which combines the A* path finding algorithm with a neural heuristic \cite{chenRetroLearningRetrosynthetic2020}. Newer approaches use a template-free model, either combining neural-guided \gls{MCTS} with reaction feasibility heuristics \cite{linAutomaticRetrosyntheticRoute2020} or directly using synthesizability heuristics combined with a forward synthesis model \cite{schwallerPredictingRetrosyntheticPathways2020}. Instead of using heuristics, self-play \cite{silverMasteringGameGo2017}, learning a value function by letting an algorithm play the game of synthesis against itself, is an additional investigated approach \cite{schreckLearningRetrosyntheticPlanning2019b, hongRetrosyntheticPlanningExperienceGuided2021, kimSelfImprovedRetrosyntheticPlanning2021}.

Multi-step approaches repeatedly apply the chemical information stored in single-step retrosynthesis models. However, contemporary research does not reflect this relationship and treats the two tasks as distinct entities. Even though multi-step algorithms require the use of single-step models, these single-step models are generally fixed. Similarly, single-step models are developed without evaluating their use in multi-step approaches. Therefore, an open question is how single-step retrosynthesis evaluation metrics translate to the multi-step domain \cite{schwallerMachineIntelligenceChemical2022} and, consequently, how single-step models affect the route-finding capabilities of a multi-step algorithm. In this work, we establish a bridge between the single-step and multi-step retrosynthesis tasks by benchmarking the performance and transfer of different single-step retrosynthesis models to the multi-step domain. We show the substantial impact of the single-step model on multi-step performance and, more importantly, a disconnect between contemporary single-step and multi-step evaluation metrics.

\section{Methods}

We select three state-of-the-art single-step retrosynthesis models to compare their performance in the multi-step domain. The model selection is based on dominant contemporary neural network approaches, i.e., contrastive learning, sequence-to-sequence, and graph-based encoding, considering their respective top-1 to top-50 performance on the USPTO-50k single-step retrosynthesis benchmark \cite{loweExtractionChemicalStructures2012, schneiderWhatWhatNearly2016}. Accordingly, the selected models are MHNreact \cite{seidlImprovingFewZeroShot2022}, a contrastive learning approach; Chemformer \cite{irwinChemformerPretrainedTransformer2022}, a sequence-to-sequence approach; and LocalRetro \cite{chenDeepRetrosyntheticReaction2021a}, a graph-based approach. As an additional baseline, a template-based multi-layer perceptron approach \cite{thakkarDatasetsTheirInfluence2020, genhedenAiZynthFinderFastRobust2020}, drawing inspiration from \cite{seglerPlanningChemicalSyntheses2018}, is included since it is often used in multi-step retrosynthesis algorithms \cite{seglerPlanningChemicalSyntheses2018, kishimotoDepthfirstProofnumberSearch2019, chenRetroLearningRetrosynthetic2020, kimSelfImprovedRetrosyntheticPlanning2021}. Given that we aim to evaluate the capacity of these single-step retrosynthesis models in multi-step retrosynthesis planning, we use the model hyperparameters suggested in their respective publications (Appendix \ref{tab:ssm_hyperparam}), assuming the models are optimized for the single-step prediction task. The only exception is Chemformer, where we use beam size 50 to produce the single-step results and beam size 10, the publication default, for multi-step retrosynthesis planning.
To ensure the correct implementation of the single-step models and compare their single-step performance, we perform a 10-fold cross-validation, splitting the data into 80\% training, 10\% validation, and 10\% test splits for each fold. Each model is trained on the train split, training is monitored on the validation split, and the test split is used for the final evaluation. All models use the same data split for each fold, and the data is preprocessed according to the specifications of each model.

Each single-step model is evaluated by measuring its accuracy and inference time. Single-step accuracy \cite{dongDeepLearningRetrosynthesis2021} is defined as the percentage of target compounds for which the model finds the ground-truth reactants within the top-k, $k \in \{1,3,5,10,50\}$, measuring the ability of the model to capture chemical reaction information. Inference time is defined as the time needed to produce retrosynthesis predictions for a set of molecules, measuring the ability of the model to provide predictions in a timely manner.
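A minimal sketch of this top-k accuracy metric is given below; it assumes predictions and ground truth are given as canonicalized SMILES strings, and the function and variable names are illustrative.

\begin{verbatim}
def top_k_accuracy(predictions, ground_truth, ks=(1, 3, 5, 10, 50)):
    """predictions: product -> ranked list of candidate reactant sets;
    ground_truth: product -> recorded reactant set (same canonical form).
    Returns the percentage of products solved within the top k."""
    hits = {k: 0 for k in ks}
    for product, truth in ground_truth.items():
        candidates = predictions.get(product, [])
        for k in ks:
            if truth in candidates[:k]:
                hits[k] += 1
    n = len(ground_truth)
    return {k: 100.0 * hits[k] / n for k in ks}
\end{verbatim}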
In an ablation study, we measure the impact of the amount of evaluation data and of the batch size on the inference time. For the first, we measure the influence of doubling the evaluation data while using the default batch size (Appendix \ref{tab:batch_sizes}), analyzing the scalability of the model. For the second, we measure the impact of setting the batch size to 1, replicating the conditions of a multi-step search algorithm that can only explore one molecule per instance (e.g., \cite{seglerPlanningChemicalSyntheses2018}).

The multi-step algorithms selected to evaluate the performance of the different single-step models are \gls{MCTS} \cite{seglerPlanningChemicalSyntheses2018}, which dynamically assesses the state values of the search tree at run-time, and Retro* \cite{chenRetroLearningRetrosynthetic2020}, which instead uses an A* path finding algorithm in combination with an oracle function. In the case of Retro*, we refrain from using the oracle function and rely only on the priors of the single-step model for the initial cost estimation, given that the original oracle function is generally shown to have little impact \cite{trippReEvaluatingChemicalSynthesis2022} and is trained on USPTO data, which could cause information leakage. We refrain from using a self-play algorithm since it would be necessary to retrain it per problem instance, i.e., per set of used building blocks. In the first and second experiments, the search settings for \gls{MCTS} and Retro* are set to a time limit of 1800 seconds (30 minutes) and 200 algorithm iterations per molecule (Appendix \ref{tab:ms_hyperparam}). In a third experiment, the search settings for Retro* are set to a time limit of 28800 seconds (8 hours) to allow the single-step models to reach the maximum iteration limit (Retro*-extended) (Appendix \ref{tab:ms_hyperparam}), given their potentially slow inference times. This third experiment is only conducted with Retro* because the algorithm does not need multiple single-step model calls to evaluate a tree-search state; thus, it is more likely to allow the single-step model to reach the iteration limit. In this context, a single-step model call refers to the suggestion of multiple candidate reactants given a product. In all cases, we search up to a maximum route length, or tree depth, of 7 and use the Zinc stock of 17,422,831 building blocks \cite{genhedenAiZynthFinderFastRobust2020}.

All experiments are conducted by extending the open-source \gls{AZF} multi-step retrosynthesis framework \cite{genhedenAiZynthFinderFastRobust2020} to use alternative single-step models instead of the baseline template-based model implemented thus far. To evaluate the performance of a single-step model within a multi-step setting, we measure the solvability of molecules when searching for a synthesis plan. Solvability is the percentage of test molecules for which a specific combination of search algorithm and single-step model can produce solved routes. A route is considered solved when all predicted leaf compounds are available within the building block stock. To further investigate the performance of single-step retrosynthesis models, we also analyze the average number of iterations carried out by the search algorithm, the average number of calls to the single-step model, and the average search time, calculated across the test compounds.
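The solvability criterion can be sketched as follows; the nested-dictionary route representation is an assumption for illustration, and \gls{AZF}'s own route objects differ.

\begin{verbatim}
def route_is_solved(route, stock):
    """A route is solved when every leaf compound is in the stock.
    route: {"smiles": str, "children": [subroutes]} (assumed format)."""
    children = route.get("children", [])
    if not children:  # leaf compound
        return route["smiles"] in stock
    return all(route_is_solved(child, stock) for child in children)

def solvability(routes_per_target, stock):
    """Percentage of target molecules with at least one solved route."""
    solved = sum(
        any(route_is_solved(r, stock) for r in routes)
        for routes in routes_per_target.values()
    )
    return 100.0 * solved / len(routes_per_target)
\end{verbatim}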
The data used for all experiments is USPTO-50k \cite{schneiderWhatWhatNearly2016, loweExtractionChemicalStructures2012}, a commonly used dataset within the single-step retrosynthesis field. The dataset consists of 50,016 unique products and their respective reactants; the randomly split dataset contains 40,012 training reactions, 5,002 validation reactions, and 5,002 test reactions. The multi-step evaluation is conducted on the products of the test set. Single-step retrosynthesis models are trained and benchmarked on one Tesla V100 32GB GPU. In comparison, multi-step retrosynthesis experiments are evaluated using a high-performance CPU cluster to facilitate the parallelization necessary to evaluate an extensive set of molecules in an appropriate time frame.

\section{Results}

\subsection{Single-step retrosynthesis prediction}

We reproduce the performance of the selected single-step models using a 10-fold cross-validation with USPTO-50k. Averaging across the folds, we reproduce the results reported for Chemformer \cite{irwinChemformerPretrainedTransformer2022}, MHNreact \cite{seidlImprovingFewZeroShot2022}, and LocalRetro \cite{chenDeepRetrosyntheticReaction2021a} (Fig. \ref{fig:accuracy} and Appendix \ref{tab:accuracy}). For all models, the data split has no discernible effect on the accuracy, shown by the small standard deviation across all folds. Additionally, we calculate the performance of the baseline model implemented in \gls{AZF} \cite{genhedenAiZynthFinderFastRobust2020} as a benchmark against which to compare the single-step models. For top-1 accuracy, Chemformer outperforms the other models, with an average accuracy of 54.7\% (± 1.1\%). LocalRetro and MHNreact follow with 52.5\% (± 0.7\%) and 49.8\% (± 0.8\%) accuracy, respectively, and \gls{AZF} performs notably worse with an accuracy of 43.3\% (± 1.0\%). This pattern, however, is not maintained across the top-k measures. Accuracy rises noticeably within the top-3 for all models: LocalRetro sees a +24.1\% increase in accuracy to 76.6\% (± 0.6\%), similar to MHNreact with a +23.0\% increase to 72.8\% (± 1.0\%); \gls{AZF} has a +16.8\% increase in accuracy to 60.0\% (± 1.0\%); and Chemformer has the smallest gain, with a +11.2\% increase to 65.9\% (± 1.0\%). Within the top-50 predictions, LocalRetro shows 96.6\% (± 0.3\%) accuracy, followed by MHNreact with 93.3\% (± 0.4\%) accuracy, both showing similar profiles across the top-k. \gls{AZF} notably increases its performance across top-3 to top-10, giving 78.1\% (± 0.7\%) accuracy at top-50. Surprisingly, Chemformer delivers the lowest top-50 accuracy of the models tested, at 73.3\% (± 0.3\%). Though Chemformer outperforms the other models at top-1, it is less able to find the ground-truth reactants for the remaining products despite the additional alternatives explored at higher top-k.

\begin{figure}[t] \centering \includegraphics[width=0.615\textwidth]{accuracy.png} \caption{Percentage of compounds for which single-step retrosynthesis models found the ground-truth reactants within the top-k (Accuracy) on USPTO-50k, averaged across 10-fold cross-validation. The standard deviation over all folds is indicated by the colored error bands.} \label{fig:accuracy} \end{figure}

\begin{figure}[b] \centering \includegraphics[width=0.6\textwidth]{times.png} \caption{Influence of data and batch size (Appendix \ref{tab:batch_sizes}) on inference time per molecule on USPTO-50k, averaged across 10-fold cross-validation.} \label{fig:times} \end{figure}

We examine the influence of increased data and decreased batch size on the single-step model inference time, since single-step models typically evaluate in batches (Fig. \ref{fig:times}). Generally, total inference time scales roughly linearly when the amount of inferred data is doubled; the inference time per molecule therefore remains stable as the amount of test data increases, except for LocalRetro, whose inference time per molecule triples. In contrast, by decreasing the batch size to one, we emulate the conditions of the model call within multi-step retrosynthesis planning. Chemformer and MHNreact both substantially increase their average inference time per molecule. For Chemformer, the increase is marked, reaching eight times that at the default batch size. MHNreact shows the largest increase, with the change from batch size 32 to 1 leading to an 18x longer inference time per molecule. The inference time per molecule of LocalRetro, on the other hand, is hardly affected by this change.
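A sketch of this batch-size ablation is given below; \texttt{model.predict} is a placeholder for a batched top-k call, as the real model interfaces differ per publication.

\begin{verbatim}
import time

def seconds_per_molecule(model, smiles_list, batch_size):
    """Wall-clock inference time per molecule at a given batch size."""
    start = time.perf_counter()
    for i in range(0, len(smiles_list), batch_size):
        model.predict(smiles_list[i:i + batch_size])  # placeholder call
    return (time.perf_counter() - start) / len(smiles_list)

# batch_size=1 emulates one-molecule-at-a-time calls inside a tree search;
# comparing against the default batch size reproduces the ablation above.
\end{verbatim}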
\subsection{Multi-step retrosynthesis planning}

Introducing the single-step models into the selected search algorithms, \gls{MCTS} generally performs worse than Retro* (Table \ref{tab:ms}). In detail, Retro* performs better in terms of solvability, number of explored routes, and solved routes per molecule across nearly all tested single-step models. The only exception is Chemformer, which produces more solved routes per molecule with \gls{MCTS} (\gls{MCTS}: 3.33, Retro*: 2.06) while using fewer model calls (\gls{MCTS}: 8.48, Retro*: 14.4). However, Chemformer with \gls{MCTS} still has a lower overall solvability (\gls{MCTS}: 44.3\%, Retro*: 53.4\%); in essence, it produces multiple solved routes for a smaller subset of solved molecules. Retro*-extended matches or improves on the results of Retro*, given that the single-step models have more time for inference. In detail, Chemformer and MHNreact achieve higher performance using Retro*-extended, leveraging more single-step model calls. In comparison, the baseline \gls{AZF} model and LocalRetro do not utilize the added time with more single-step model calls, as they already reach the 200-iteration limit within the 30-minute time limit of Retro*, and thus perform similarly in both settings. For the overall best-performing search setting, Retro*-extended, the single-step model ranking in terms of solvability is LocalRetro (80.6\%), Chemformer (65.6\%), MHNreact (60.9\%), and \gls{AZF} (50.6\%).

However, high solvability does not always imply a high number of solved routes. For example, Chemformer has a higher solvability than MHNreact, yet produces a considerably lower number of solved routes per molecule (Chemformer: 8.04, MHNreact: 56.5). Moreover, a high number of explored routes is also not directly connected to high solvability. For example, MHNreact explores the highest number of routes per molecule but ranks only third in solvability, since comparably few of its explored routes are solved. Lastly, there are large disparities in the average search time per molecule. Chemformer is by far the slowest model (18,738 s), followed by MHNreact (8,016 s), both of which are considerably slower than LocalRetro (322 s) and \gls{AZF} (129 s). Even with these extensive search times, Chemformer and MHNreact do not reach the same number of single-step model calls as LocalRetro and \gls{AZF}. Generally, LocalRetro outperforms the other models in terms of solvability and number of solved routes while producing slightly fewer total explored routes than MHNreact and needing approximately 2.5x the time per molecule of the fastest baseline, \gls{AZF}.

\newcommand{\ra}[1]{\renewcommand{\arraystretch}{#1}} \begin{table*}[hbt] \centering \caption{\label{tab:ms}Comparison of multi-step algorithm and single-step retrosynthesis model combinations on the USPTO-50k test set (5,002 molecules). Bold numbers indicate the best performance across all experiments.} \begin{tabular}{@{}crrcrrrr@{}}\toprule & & \multicolumn{1}{c}{Overall} & \phantom{}& \multicolumn{4}{c}{Average per Molecule} \\ \cmidrule{3-3} \cmidrule{5-8} \textbf{Algorithm} & \textbf{Model} & \begin{tabular}[c]{@{}l@{}} \textbf{Solvability (\%)} \end{tabular} && \begin{tabular}[c]{@{}l@{}}\textbf{Explored} \\ \textbf{Routes} \end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{Solved} \\ \textbf{Routes}\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{Search} \\ \textbf{Time (s)}\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{Model} \\ \textbf{Calls} \end{tabular} \\ \midrule \multirow{4}{*}{\begin{tabular}[c]{@{}l@{}}\gls{MCTS}\end{tabular}} & \gls{AZF} & 49.5 && 367 & 24.9 & 165 & 783 \\ & Chemformer & 44.3 && 4.40 & 3.33 & 2475 & 8.48\\ & LocalRetro & 71.5 && 86.7 & 27.4 & 1616 & 412\\ & MHNreact & 44.4 && 7.11 & 2.50 & 1842 & 29.5\\ \midrule \multirow{4}{*}{\begin{tabular}[c]{@{}l@{}}Retro*\end{tabular}} & \gls{AZF} & 50.6 && 2574 & 48.6 & 130 & 195\\ & Chemformer & 53.4 && 39.7 & 2.06 & 1518 & 14.4\\ & LocalRetro & \textbf{80.6} && 7792 & 149 & 335 & 193\\ & MHNreact & 55.2 && 2818 & 17.6 & 1653 & 38.8\\ \midrule \multirow{4}{*}{\begin{tabular}[c]{@{}l@{}}Retro*-extended\end{tabular}} & \gls{AZF} & 50.6 && 2567 & 48.5 & \textbf{129} & 195\\ & Chemformer & 65.6 && 224 & 8.04 & 18738 & 134\\ & LocalRetro & \textbf{80.6} && 7786 & \textbf{151} & 322 & 193\\ & MHNreact & 60.9 && \textbf{8176} & 56.5 & 8016 & 180\\ \bottomrule \end{tabular} \end{table*}

\section{Discussion}

We show that the single-step retrosynthesis model can tremendously impact multi-step retrosynthesis planning, influencing the ability to solve products and to produce multiple solutions. Across all three experiments (\gls{MCTS}, Retro*, Retro*-extended), the alternative single-step models mostly outperform the baseline \gls{AZF} model. In \gls{MCTS}, one single-step model, LocalRetro, shows considerably higher solvability than \gls{AZF}. In the case of Retro* and Retro*-extended, all models outperform \gls{AZF}, particularly when given extended time to carry out sufficient model calls.
The generally best-performing model is LocalRetro, which has outstanding solvability and provides the most solved routes per molecule across all multi-step retrosynthesis planning experiments, consistently outperforming all other methods. We show that exchanging the single-step model alone can improve solvability by +30.0\%, reaching 80.6\%, and triple the number of solved routes per molecule to 151. As such, the single-step model should be considered carefully when developing multi-step retrosynthesis planning approaches.

Given our results, no clear pattern supports the usage of single-step top-k metrics as a potential proxy measure for solvability in multi-step retrosynthesis. For the single-step models, the accuracy ranking varies from top-1 to top-50 (Fig. \ref{fig:accuracy} and Tab. \ref{tab:accuracy}). However, these rankings, and their intermediates, are never matched by the respective multi-step solvability rankings (Tab. \ref{tab:ms}). Hence, multi-step solvability does not depend on any one single-step factor and should not be reduced to a single top-k single-step metric. Exclusively focusing on the top-1 accuracy of single-step models is especially problematic when transferring a model to the multi-step domain. Though Chemformer shows the highest top-1 accuracy, in the multi-step experiments it finds a comparatively low number of solved routes per molecule despite having the second-highest solvability. A cause for this could be its single-step accuracy profile, going from the best-performing to the worst-performing model as top-k increases. This suggests that the model is proficient at predicting certain reactions but cannot find a diverse set of solutions. However, diverse solutions are beneficial in a tree-search setting, where up to 50 explorable route alternatives can be added per search iteration.

Importantly, similar top-k accuracy profiles do not result in the same multi-step results. For example, MHNreact and LocalRetro have similar top-k accuracy profiles in single-step retrosynthesis but differ greatly in multi-step retrosynthesis planning. Though they explore a similar number of routes, MHNreact solves considerably fewer. This difference in performance may be explained by comparing Retro* and Retro*-extended, where MHNreact improves only slightly in solvability despite having considerably more time. Though MHNreact explores and solves many more routes in general, the small difference in solvability shows that it does so mainly for molecules it had already solved in the shorter time frame.

Since multi-step solvability does not depend solely on the top-k accuracy of the single-step models, other factors may contribute to their performance. For example, given that search algorithms generally have a limited run-time, single-step model inference times can greatly affect performance in multi-step retrosynthesis planning. To produce solved routes, single-step models must carry out as many model calls as possible within the allocated time limit. If the inference time is too long, the number of model calls will be limited, and consequently so will the number of explored routes. This effect is evident for MHNreact and Chemformer, the models with the highest search times and lowest model calls across all experiments. Single-step models typically evaluate in batches larger than one; however, this does not currently transfer to multi-step retrosynthesis planning.
For example, MHNreact has the fastest inference time at its default batch size, but its inference time increases considerably when the batch size is reduced to one, the setting under which multi-step retrosynthesis planning is carried out. As such, single-step models may not reach their full potential within search algorithms due to slow inference. Notably, most single-step models are developed for GPU use, and running them on CPUs can hinder their inference speed. However, it is necessary to conduct the multi-step experiments in parallel on CPUs due to the thousands of target molecules to be solved, since massive GPU parallelization is currently not available to the general research community. Consequently, models designed and optimized for the single-step prediction task may not perform well in multi-step retrosynthesis planning. Therefore, single-step model developers should take the potential multi-step application into account and optimize their methods accordingly.

There are more general aspects to consider when discussing the divide between single-step and multi-step retrosynthesis. Though USPTO-50k is the most commonly used dataset for benchmarking single-step retrosynthesis prediction models, it represents only a limited area of the chemical space, such that the models and our results may not apply to a more expansive chemical domain. Moreover, USPTO-50k comprises only single-step reactions, and the produced routes cannot be compared to reference routes to evaluate their validity. Recently, new benchmarks have emerged to address the lack of multi-step reference data \cite{genhedenPaRoutesFrameworkBenchmarking2022a}, so further work is required to quantify the quality of the produced routes. Ideally, one would assess these routes irrespective of a particular reference route, since there are many potentially valid routes for any target molecule. However, at present this can only be done by a domain expert, an extremely time-intensive task. Additionally, this work focuses on using single-step models within two selected search algorithms; however, other search approaches can also considerably impact multi-step retrosynthesis planning methods \cite{hongRetrosyntheticPlanningExperienceGuided2021, kimSelfImprovedRetrosyntheticPlanning2021}. Thus, finding the optimal combination of single-step and multi-step methods is yet to be explored and could have a substantial impact on synthesis planning in the future.

\section{Conclusion}

In this work, we bridge the gap between single-step retrosynthesis and multi-step retrosynthesis planning. By extending current state-of-the-art single-step models to the multi-step domain, we find no clear relationship between single-step and multi-step benchmarking metrics, in particular between single-step top-k accuracy and multi-step solvability. Additionally, we show the importance of developing single-step models for the multi-step domain, as single-step models can have a substantial impact on multi-step retrosynthesis planning performance. LocalRetro, the best-performing single-step model, increases solvability by +30.0\% to 80.6\% and triples the number of solved routes compared to the most widely used model. Interestingly, LocalRetro outperforms the other single-step models, even those with similar single-step accuracy profiles. Additionally, we analyze other potential factors involved in the translation between the two domains, most notably the inference time of the single-step model.
Overall, we show that there is no straightforward transfer of single-step retrosynthesis models to the multi-step retrosynthesis planning domain. With this work, we provide an overview of how current state-of-the-art single-step models fare within contemporary search algorithms; however, we evaluate only a selected scope of single-step and multi-step combinations. In the future, more diverse chemical datasets need to be explored to examine the applicability of these approaches beyond the USPTO-50k dataset. To summarize, we show that single-step models should be developed and tested for the multi-step domain, and not as an isolated task, to successfully identify synthesis routes for molecules of interest.

\section*{Acknowledgements}

This study was partially funded by the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Innovative Training Network European Industrial Doctorate grant agreement No. 956832 “Advanced machine learning for Innovative Drug Discovery”. Parts of this work were performed using the ALICE compute resources provided by Leiden University. We thank Dr. Anthe Janssen (Leiden Institute of Chemistry) for providing chemical feedback.

\bibliographystyle{IEEEtran}
{ "timestamp": "2022-12-23T02:13:56", "yymm": "2212", "arxiv_id": "2212.11809", "language": "en", "url": "https://arxiv.org/abs/2212.11809" }
\subsection{Configuration Space, Time Alignment, and Dynamic Probability Surface}

\paragraph{Configuration space.} We study the configuration space $\mathbb{M}_s$ of a molecule. A molecule may take a particular configuration ${\utwi{x}} = (x_1,\, x_2, \cdots, x_d) \in \mathbb{M}_s$. The configuration of alanine dipeptide is fully characterized by $d=60$ features~\cite{Li_Ma2016TPS}. $\mathbb{M}_s$ therefore lies in a subspace of the Euclidean space $\mathbb{R}^d$: $\mathbb{M}_s \subset \mathbb{R}^d$, $d=60$.

\paragraph{Dynamic probability surface.} We take the time $t$ as the $(d+1)$-th dimension and examine the space-time relationship of the configuration of the molecule. At time $t$, each configuration $({\utwi{x}};\; t) = (x_1,\, x_2, \cdots, x_d;\; t)$ lies in the time-configuration space $\mathbb{M}_t \subset \mathbb{R}^{d+1}$ and has a probability $f({\utwi{x}};\; t) \in [0,1]$. Here the function $f: \mathbb{M}_t \rightarrow \mathbb{R}_{[0,\,1]}$ assigns the probability value $f({\utwi{x}};\; t)$ to a specific time-configuration $({\utwi{x}};\; t)$. We study the topological structure of the dynamic probability surface $f({\utwi{x}};\; t)$ over $\mathbb{M}_t$, with time as the $(d+1)$-th dimension.

\paragraph{Time Alignment.} The dynamic probability surface is constructed from sampled molecular dynamics trajectories. All trajectories start in the reactant basin and end in the product basin. Each trajectory is time-stamped by the duration from the start of the simulation, termed the {\it absolute time}. Isomerization occurs in individual trajectories at different absolute times (Fig.~\ref{fig:TPSIllustration}A). Since the relevant event is the isomerization transition, we shift the time of each trajectory by an offset so that the transition occurs at $t=0$ (Fig.~\ref{fig:TPSIllustration}B). Based on the 1-dimensional reaction coordinate computed following~\cite{wu2022rigorous}, we take the first time the trajectory reaches the transition state as $t=0$. While it is not possible to conduct a committor test for each of the $3\times 10^6$ trajectories to determine the transition time, the occurrence of the isomerization is fully captured by the reaction coordinate. Hence, we take the transition time $t=0$ as the time when the predicted committor value $p_B$ is 0.5 according to the 1-D reaction coordinate described by Wu et al.~\cite{wu2022rigorous}, where the authors used a generalized work functional to summarize the mechanical effects of the couplings between different coordinates. Singular value decomposition (SVD) was then employed to extract the inherent structure of the generalized work functional, from which the 1-dimensional reaction coordinate was approximated with high accuracy. This enabled rapid identification of the transition-state configurations where $p_B=0.5$. More details can be found in~\cite{wu2022rigorous}.
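A minimal sketch of this alignment step is given below; the per-frame array of predicted committor values is an assumed precomputed input, obtained here from the 1-D reaction coordinate rather than from explicit committor tests.

\begin{verbatim}
import numpy as np

def align_to_transition(times, committor):
    """Shift one trajectory's time stamps so that the first frame with
    predicted committor p_B >= 0.5 defines t = 0 (a sketch; `committor`
    is assumed to be precomputed per frame)."""
    crossing = np.flatnonzero(committor >= 0.5)
    if crossing.size == 0:
        raise ValueError("trajectory never reaches the transition state")
    return times - times[crossing[0]]
\end{verbatim}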
\subsection{Flux and Its Rotation} \label{sec:flux}

\paragraph{Flux over the configuration space.} To characterize molecular movement in the configuration space, we study its dynamic fluxes. At time $t$, a molecule takes the configuration ${\utwi{x}}(t) \in \mathbb{M}_s \subset \mathbb{R}^d$ and has a velocity ${\utwi{u}}(t) \in \mathbb{R}^d$. We first take the Lagrangian view, namely, the viewpoint of trajectories: we start to track the molecule at absolute time $t'=0$ along the trajectory currently located at ${\utwi{x}}(0) \equiv {\utwi{x}}(t'=0)$ and float with this trajectory over time. The flux ${\utwi{J}}({\utwi{x}}(0),\,t')$ of this trajectory at time $t'$ is then:
\begin{equation} {\utwi{J}}({\utwi{x}}(0),\, t') =\rho\cdot{\utwi{u}}({\utwi{x}}(0),\, t'), \end{equation}
where $\rho$ is the weight of the trajectory and ${\utwi{u}}({\utwi{x}}(0),\, t')$ is the velocity of the trajectory at time $t'$.

We then take the Eulerian view and consider the fluxes associated with molecules located at specified locations. We consider a small fixed volume $\Delta \Omega \subset \mathbb{M}_s$ in the configuration space and measure the flux inside $\Delta \Omega$ at time $t$ after time alignment. We do so by taking the trajectories that are traveling inside $\Delta \Omega$ at time $t$. The total flux in $\Delta \Omega$ at time $t$ is then:
\begin{equation} {\utwi{J}}_{\Delta \Omega}(t) = \int_{{\utwi{x}}(t) \in \Delta \Omega} {\utwi{J}}(t) \, d {\utwi{x}} = \int_{{\utwi{x}}(t) \in \Delta \Omega} \rho\cdot{\utwi{u}}({\utwi{x}}(0),\,t) \, d {\utwi{x}}, \end{equation}
where ${\utwi{x}}(t)$ is the current location of the flux line originating from ${\utwi{x}}(0)$, which has velocity ${\utwi{u}}(t)$ at time $t$ after alignment (Fig.~\ref{fig:TPSIllustration}C).

We estimate the fluxes from trajectories sampled by molecular dynamics. In this study, all trajectories are generated without bias and therefore have equal and constant weight $\rho$. That is, as the MD trajectories are sampled without bias, $\rho$ is the same for all trajectories and is constant over time. The flux of the $i$-th trajectory at time $t$ is therefore:
\begin{equation} {\utwi{J}}_i(t) =\rho\cdot{\utwi{u}}_i(t), \end{equation}
and the flux ${\utwi{J}}_{\Delta\Omega}(t)$ through a small volume $\Delta \Omega$ at time $t$ is:
\begin{equation} {\utwi{J}}_{\Delta\Omega}(t) = \sum_{{\utwi{x}}_i(t) \in \Delta \Omega} {\utwi{J}}_i(t) = \sum_{{\utwi{x}}_i(t) \in \Delta \Omega} \rho\cdot{\utwi{u}}_i(t). \label{eqn:flux-disc} \end{equation}
Here we set $\rho = 1/N$, where $N$ is the total number of unbiased trajectories.
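A minimal sketch of the discrete flux of Eqn~(\ref{eqn:flux-disc}) for a box-shaped volume is shown below; the array layout is an assumption for illustration.

\begin{verbatim}
import numpy as np

def flux_in_volume(positions, velocities, lower, upper):
    """Discrete flux through a box Delta-Omega = [lower, upper] at one
    aligned time t, with equal trajectory weights rho = 1/N.
    positions, velocities: (N, d) arrays of x_i(t) and u_i(t)."""
    inside = np.all((positions >= lower) & (positions <= upper), axis=1)
    rho = 1.0 / len(positions)
    return rho * velocities[inside].sum(axis=0)
\end{verbatim}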
\paragraph{Rotation of the flux.} We further study the rotation of the flux. Our goal is to accurately characterize the activation dynamics during the barrier-crossing process. Here we introduce a rigorous concept of rotational flux based on differential forms~\cite{bachman2012geometric} and describe a method for its computation. To illustrate, consider a toy velocity field over a 2-dimensional configuration space, where the velocity at each point ${\utwi{x}}=(x_1,x_2)$ is ${\utwi{u}}({\utwi{x}})=(u_{x_1},u_{x_2})= (-x_2, +x_1)$. This velocity field exhibits a constant counterclockwise rotation around the origin (Fig.~\ref{fig:CurlIllustration}). The rotation of the velocity field on the $x_1$--$x_2$ plane is calculated as the difference between the change of $u_{x_2}$ in the $x_1$ direction, $\Delta u_{x_2}/\Delta x_1$ (blue, Fig.~\ref{fig:CurlIllustration}, right), where $\Delta u_{x_2} >0$ and $\Delta x_1 >0$, and the change of $u_{x_1}$ in the $x_2$ direction, $\Delta u_{x_1}/\Delta x_2$ (red), where $\Delta u_{x_1} <0$ and $\Delta x_2 >0$. Specifically, the rotation can be written as $(\Delta u_{x_2}/\Delta x_1 - \Delta u_{x_1}/\Delta x_2)$.

\begin{figure}[!htbp] \begin{center} \includegraphics[width=0.99\textwidth]{plots/illustration_Curl_2.png} \end{center} \caption{\sf Rotational flux in a 2-dimensional velocity vector field, where $u_{x_1}=-x_2$ and $u_{x_2}=+x_1$. The velocity field circles around the origin $(x_1,\, x_2)=(0,0)$ in counterclockwise fashion. The rotation value is the signed sum of the changes of $u_{x_2}$ in the direction of $x_1$ and of $u_{x_1}$ in the direction of $x_2$.} \label{fig:CurlIllustration} \end{figure}

For flux in high-dimensional space, we generalize the concept of rotation using differential forms~\cite{bachman2012geometric}. The flux ${\utwi{J}}_{\Delta\Omega}(t)$ inside the small volume $\Delta \Omega$ is represented by a $d$-dimensional vector ${\utwi{J}}_{\Delta\Omega}(t)=(J_{\Delta\Omega,\,1}(t), \cdots, J_{\Delta\Omega,\,d}(t)) \in \mathbb{R}^d$. This flux vector can be written as a $1$-form in $d$ dimensions:
\begin{equation} {\utwi{J}}_{\Delta\Omega}(t)=J_{\Delta\Omega,\,1}(t) \cdot dx_1+ \cdots + J_{\Delta\Omega,\,d}(t) \cdot dx_d, \end{equation}
where $J_{\Delta\Omega,\, i}(t)$ is the component of the flux in the $i$-th dimension. The differential of this 1-form is
$$d{\utwi{J}}_{\Delta\Omega}(t)= \sum_{1 \le i < j \le d} \left(\frac{\partial J_{\Delta\Omega,\,j}(t)}{\partial x_i} - \frac{\partial J_{\Delta\Omega,\, i}(t)}{\partial x_j}\right) \; dx_i \wedge dx_j,$$
where $\wedge$ is the wedge operator denoting the exterior product~\cite{bachman2012geometric}. With the above definitions, the rotation of the $d$-dimensional flux vector can be written as
\begin{equation} \nabla \times {\utwi{J}}_{\Delta\Omega}(t)=\left\langle \frac{\partial J_{\Delta\Omega,\,j}(t)}{\partial x_i} - \frac{\partial J_{\Delta\Omega,\,i}(t)}{\partial x_j} \right\rangle, \quad i\neq j, \quad i,j \in \{1,\cdots, d\}, \label{eqn:rot} \end{equation}
where $({\partial J_{\Delta\Omega,\,j}(t)}/{\partial x_i} - {\partial J_{\Delta\Omega,\,i}(t)}/{\partial x_j})$ represents the counterclockwise rotation of the flux projected onto the $i$--$j$ plane, as shown in Fig.~\ref{fig:CurlIllustration}.
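A minimal sketch of estimating the pairwise rotation components of Eqn~(\ref{eqn:rot}) by finite differences on a gridded flux field is shown below; the array layout is an assumption for illustration.

\begin{verbatim}
import numpy as np

def pairwise_rotation(J, spacings):
    """Finite-difference rotation components dJ_j/dx_i - dJ_i/dx_j.
    J: flux field of shape (d, n_1, ..., n_d); spacings: grid step per
    dimension. Returns {(i, j): rotation on the i-j plane} for i < j."""
    d = J.shape[0]
    rot = {}
    for i in range(d):
        for j in range(i + 1, d):
            dJj_dxi = np.gradient(J[j], spacings[i], axis=i)
            dJi_dxj = np.gradient(J[i], spacings[j], axis=j)
            rot[(i, j)] = dJj_dxi - dJi_dxj
    return rot

# For the toy field u = (-x2, +x1), the (0, 1) component is 2 everywhere,
# i.e., a constant counterclockwise rotation.
\end{verbatim}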
\subsection{Topology of dynamic probability surfaces}

\paragraph{Homology groups and persistent homology.} In this study, we investigate global features of the occurrence of probability peaks in the time-configuration space. Our approach is that of homology groups~\cite{munkres2018elements,hatcher2005algebraic} and persistent homology~\cite{Edelsbrunner:2230405,Edelsbrunner2002, carlsson2009topology}. Below we give a brief overview; a more detailed account of this choice over critical-point analysis is given in ref~\cite{Manuchehrfar_Activated}. Homology groups characterize holes of various dimensions, and persistent homology quantifies the prominence of these holes. Here we focus on isolated probability peaks, which are connected components, or 0-dimensional holes, and on the configurations where they reside. As an illustration, envision the probability landscape over the configuration space flooded under a sea level. At the beginning, all mountain peaks on the probability landscape are below the sea level~(Supplementary movie 1). The sea level is then lowered gradually, and some peaks emerge above the sea. As the sea level recedes further, isolated mountain peaks may become connected by land ridges. Depending on how much of the time-configuration space has probability above a given level, different peaks of the probability surface over the configuration space may emerge. As the level is lowered, the region with probability greater than the given level enlarges. As a result, previously isolated peaks may become land-connected and merge into one connected component.

\begin{figure}[!htbp] \begin{center} \includegraphics[width=0.99\textwidth]{plots/Illustration_Filtration_2.png} \end{center} \caption{\sf The probability landscape $f$ over $\mathbb{M}_t$ and the topology of its superlevel sets $\mathbb{M}_{t,\,f\ge a}$. (\textbf{A}) The probability landscape and different sea levels. The superlevel set $\mathbb{M}_{t,\,f\ge a}$ consists of the regions in $\mathbb{M}_t\subset \mathbb{R}^{(d+1)}$ whose probability value is above the sea level $a$. (\textbf{B}) At $f({\utwi{x}})=a$, the whole probability landscape is below the sea level and $\mathbb{M}_{t,\,f\ge a}=\emptyset$. At $f({\utwi{x}})=b_1,\, b_2,\, b_3,\, d_3$, and $d_2$, the topology of the superlevel set changes. At each of $b_1$, $b_2$, and $b_3$, a new peak, shown as a white island, emerges. At $d_3$ and $d_2$, two separate peaks merge. (\textbf{C}) The persistence diagram of the birth and death probabilities of the peaks. The sea levels $b_1$, $b_2$, and $b_3$ are birth probabilities, and the sea levels $d_3$ and $d_2$ are death probabilities.} \label{fig:seaLevels} \end{figure}

\paragraph{Superlevel sets and sublevel sets.} Formally, we can identify all $({\utwi{x}};\; t) \in \mathbb{M}_t$ with probability values $f({\utwi{x}};\; t)\ge a$. They form the \textit{superlevel set} $\mathbb{M}_{t,\; f\ge a}$:
$$ \mathbb{M}_{t,\; f\ge a} \equiv \{ ({\utwi{x}};\;t) \in \mathbb{M}_t \,|\, f({\utwi{x}};\;t) \ge a\} = f^{-1}([a,\,1]).$$
The \textit{sublevel set} $\mathbb{M}_{t,\; f\le a}$ is defined similarly:
$$ \mathbb{M}_{t,\; f\le a} \equiv \{ ({\utwi{x}};\;t) \in \mathbb{M}_t \,|\, f({\utwi{x}};\;t) \le a\} = f^{-1}([0,\,a]). $$

\paragraph{Time-space configurations as cubical complexes.} We represent the $(d+1)$-dimensional time-configuration space $\mathbb{M}_t$ using cubical complexes~\cite{kaczynski2006computational}. A $d$-dimensional cubical complex $K$ is the union of points, line segments, squares, cubes, and their $k$-dimensional counterparts glued together properly, where $k \le d$ and all cells are of unit length, except points, which have no length.

\paragraph{Filtration.} We examine the topological structures of the probability peaks on the time-configuration space, restricting ourselves to those whose probabilities are above a certain level. By gradually adjusting this level, we can follow the details of the topological changes. As illustrated in Fig~\ref{fig:seaLevels}A--\ref{fig:seaLevels}B, the sea level at $f({\utwi{x}}) = a$ covers the whole probability landscape; the portion of the landscape above the sea level is therefore the empty set $\emptyset$. We gradually lower the sea level to a value $b_1$, at which the first peak emerges from the sea. This sea level marks the birth of the first peak. At this point, we have the superlevel set $\mathbb{M}_{t,\; f\ge b_1}=\{({\utwi{x}};\;t) \in \mathbb{M}_t \,|\, f({\utwi{x}};\;t) \ge b_1 \}$, the set of configurations whose probability is $\ge~b_1$; they form the small white region shown in Fig~\ref{fig:seaLevels}B (left, middle). We further lower the sea level to $b_2$, at which the region associated with the first peak expands and another peak emerges above the sea. This marks the birth of the second peak (Fig~\ref{fig:seaLevels}B, left, bottom). At this point, we have the superlevel set $\mathbb{M}_{t,\;f\ge b_2}$. We then continue lowering the sea level to $b_3$, where a third peak emerges (Fig~\ref{fig:seaLevels}B, right, top). We have $\mathbb{M}_{t,\;f\ge b_3}$ at this level.
We continue this process until the sea level reaches $d_3$, where the first and the third peaks are merged by a land ridge that has just emerged above the sea level. This sea level is the death value of the third peak (Fig~\ref{fig:seaLevels}B, right, middle), and we have $\mathbb{M}_{t,\;f\ge d_3}$. We further lower the sea level until we reach $d_2$, at which the second peak merges with the other two peaks (Fig~\ref{fig:seaLevels}B, right, bottom). At this level, we have $\mathbb{M}_{t,\;f\ge d_2}$. At each of these levels, the topology of the superlevel set changes: we have, in sequence, one component, two components, three components, two components, and then one component again. These changes are captured by the changing 0-th homology groups, or Betti numbers, that we compute (see refs~\cite{Cohen-Steiner-stability,Manuchehrfar_Activated} for more details).

Formally, we have a descending sequence of probability values corresponding to the lowering sea level:
$$ 1=a_0>a_1 > a_2 > \cdots > a_n = 0, $$
and the corresponding superlevel sets, namely the domains of the parts of the landscape above the sea level, which are subspaces of $\mathbb{M}_t$:
$$ \emptyset =\mathbb{M}_{t,\;0 } \subset \mathbb{M}_{t,\;1} \subset \mathbb{M}_{t,\;2} \cdots \subset \mathbb{M}_{t,\;n} = \mathbb{M}_t. $$
As the full time-configuration space $\mathbb{M}_t$ is represented by a cubical complex $K$, each superlevel set $\mathbb{M}_{t,\;i}$ is represented by a subcomplex $K_i \subset K$, which can be derived from the original full complex $K$. The corresponding sequence of subcomplexes is:
$$ \emptyset = K_0 \subset K_1 \subset K_2 \cdots \subset K_n = K. $$
This sequence of subcomplexes forms a \textit{filtration}.

\paragraph{Persistence and persistence diagram.} Upon changing the sea level so that the corresponding subcomplex changes from $K_{i-1}$ to $K_i$, we may gain a new peak, or we may lose one when a peak merges with another. A peak (a connected component) is \textit{born} at $a_i$ if it is present in $K_i$ but absent in $K_{i-1}$. The peak \textit{dies} at $a_i$ if it is a separate component in $K_{i-1}$ but has merged into another component in $K_i$. We record the location and the value of $a_i$, namely the corresponding $k$-cube and its probability value, whose inclusion leads to the birth or death event. The prominence of the topological feature of a peak is encoded in its lifetime, or \textit{persistence}. Denote the birth value and the death value of peak $i$ as $b_i$ and $d_i$, respectively. The \textit{persistence} of peak $i$ is then $b_i-d_i$. In the example shown in Fig.~\ref{fig:seaLevels}C, the components associated with the first, second, and third peaks are born at $f({\utwi{x}})=b_1$, $f({\utwi{x}})=b_2$, and $f({\utwi{x}})=b_3$, respectively. At $f({\utwi{x}})=d_2$, the first and the second components merge; that is, the second peak dies at $d_2$, and its persistence is $b_2-d_2$. At $f({\utwi{x}})=d_3$, the first and the third components merge, and the third peak dies at $d_3$; its persistence is therefore $b_3-d_3$. The first peak dies at $f({\utwi{x}})=0$, and its persistence is $b_1-0=b_1$. We record the birth and death events of the peaks in a two-dimensional plot, the \textit{persistence diagram}~\cite{Cohen-Steiner-stability}. Each peak is represented by a point in this diagram, with the birth value $b_i$ and the death value $d_i$ as its coordinates $(b_i,\, d_i)$. Fig.~\ref{fig:seaLevels}C shows the persistence diagram of our illustrative example.
\paragraph{Computation.} We use the cubical-complex algorithm described in ref~\cite{wagner2012efficient} to compute the persistent homology of the high-dimensional, time-evolving dynamic probability surface. The algorithm tracks changes in the superlevel set $\mathbb{M}_{t,\; f\ge a}$ of the probability surface and records the births and deaths of probability peaks. We neglect other topological properties such as 1-cycles. The locations ${\utwi{x}}_s$ where birth and death events occur, namely the corresponding $k$-cubes, are also computed. Details and code are available at ``{https://github.com/fmanuc2/0-Homology-Group.git}''.
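As an independent illustration of the same computation (not the authors' implementation, which follows ref~\cite{wagner2012efficient}), the widely used GUDHI library can compute 0-dimensional superlevel-set persistence on a gridded surface. A minimal sketch, assuming GUDHI's cubical-complex API:

\begin{verbatim}
import numpy as np
import gudhi  # assumed library; the paper's own code is at the URL above

def peak_persistence(prob):
    """Birth/death probabilities of peaks (0-dimensional features) of the
    superlevel-set filtration of a gridded probability surface. GUDHI
    filters cubical complexes by sublevel sets, so values are negated;
    the everlasting peak is assigned death 0, as in the text."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=-np.asarray(prob))
    return [(-b, 0.0 if np.isinf(d) else -d)
            for dim, (b, d) in cc.persistence() if dim == 0]

# Toy 2-d landscape with two peaks: the minor peak dies at the saddle
# ("land ridge") probability where it merges with the major peak.
x, y = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
surface = np.exp(-((x - 1)**2 + y**2)) + 0.6 * np.exp(-((x + 1)**2 + y**2))
for birth, death in peak_persistence(surface):
    print(f"peak born at f={birth:.3f}, dies at f={death:.3f}")
\end{verbatim}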
\section{Results}

\subsection{Model System and Computations.}

\paragraph{Molecular Dynamics Simulation.} Simulations were performed using the molecular dynamics software suite GROMACS-$4.5.4$~\cite{Berk2008}, with the implementation of transition path sampling reported in ref.~\cite{Li_Ma2016TPS}. The Amber94 force field was used in our simulations~\cite{cornell1996second}. The structure of the alanine dipeptide was energy-minimized using the steepest descent algorithm and heated to $300$~K using velocity rescaling, with a coupling constant of $0.3$ ps. The system was then equilibrated for $200$ ps. No constraints were applied. The integration time step was $1$ fs. We then performed a $2$ ns NVE simulation, from which we harvested one reactive trajectory. The reactant basin $C_{7eq}$ was defined as $-3.49<\phi<-0.96$ and $-1.57<\psi<3.32$, and the product basin $C_{7ax}$ was defined as $0.87<\phi<1.74$ and $-1.39<\psi<0$~\cite{Li_Ma2016_Reaction_Mechanism}. Given this initial reactive path, $3 \times 10^6$ reactive trajectories were harvested through transition path sampling. Specifically, we randomly select one time point in the original reactive trajectory, exert a small perturbation on the momentum, and then initiate simulations from this point both forward and backward in time. Simulations are performed at a constant total energy of $36$~kJ/mol, such that the average temperature in the transition path ensemble is $300$~K. This is repeated until a new reactive trajectory is harvested~\cite{Li_Ma2016TPS}. Each reactive trajectory is $2.5$ ps long, with a time step of $1$ fs. We then collect the configuration (conformation and velocity) at every step along each trajectory. Altogether, we have $7.5\times 10^{9}$ conformations.

\paragraph{Constructing Time-Evolving Dynamic Probability Surface.} We align the trajectories by the time of occurrence of the transition, with the time $t$ at the transition set to $t=0$. Conformations at the transition state have the appropriate values of the one-dimensional reaction coordinate as described in~\cite{wu2021mechanism}. After alignment, we examine the time interval of the transition from $-2.5$ ps to $+2.5$ ps. We construct the time-evolving dynamic probability surface $\{ p({\utwi{x}},\, t) \,|\, ({\utwi{x}},\, t) \in \mathbb{M}_t\}$ using the $7.5\times 10^{9}$ aligned and time-stamped conformations. Based on the analysis of reaction coordinates using the energy flow theory~\cite{Li_Ma2016_Reaction_Mechanism}, we select the top-ranked $5$ dihedral angles ($\phi$, $\psi$, $\theta_1$, $\alpha$, $\beta$) from the original $60$ spatial dimensions as the coordinates of $\mathbb{M}_s$. Along with time $t$, we thus have a $6$-dimensional probability surface $\{ p({\utwi{x}},\, t) \,|\, ({\utwi{x}},\, t) \in \mathbb{M}_t\}$. Each angle coordinate ($\phi$, $\psi$, $\theta_1$, $\alpha$, $\beta$), in radians, is divided into 15 bins, and the time interval is divided into 500 bins of $10$ fs each. This discretization leads to $15^5\times 500= 379{,}687{,}500$ 6-dimensional hypercubes, where time is one of the dimensions.

\paragraph{Computing Topological Structure of the Dynamic Probability Surface.} Persistent homology is computed using a $20$-core Xeon E5-2670 CPU at 2.5 GHz, with a cache size of 20 MB and 128 GB of RAM. The computation time for finding the prominent peaks and the ridges connecting them is about $10$ min.

\subsection{High Probability Reactive Region in Space-Time from Topology of Dynamic Probability Surface} \label{Surf_topo}

\paragraph{High probability reactive region dominates the configuration-time space during the transition.} In a previous study, we showed that without time separation, the transition state conformations among the aggregation of $7.5\times 10^{9}$ conformations during the period of $-2.5$ to $+2.5$ ps are concentrated in a small reactive region of $\phi \times \theta_1 = [-0.2,\, +0.2] \times [-0.1,\, +0.1]$ (see Fig~\ref{fig:topo_TimeAnd3}C for the $\phi$ and $\theta_1$ angles). These reactive conformations pass the rigorous committor test and are at the transition state, and they form the most prominent peak with the largest probability mass outside the reactant and product basins~\cite{Manuchehrfar_Activated}. With time as the extra dimension, we now examine the detailed time sequence of the probability surface and determine how transition state conformations are distributed during this period of $5$ ps. This is captured by the 6-d space-time probability surface, whose overall topological structure is summarized in the persistence diagram~(Fig~\ref{fig:topo_TimeAnd3}B), with projections of the surface onto the $\phi$--$\theta_1$ and $\phi$--$\psi$ planes shown in Fig~\ref{fig:topo_TimeAnd3}C--\ref{fig:topo_TimeAnd3}D. One prominent probability peak (Fig~\ref{fig:topo_TimeAnd3}B, red dot), located in the region $(\phi,\,\theta_1) = [-0.2,\, +0.2]\times[-0.1,\, 0.1]$, stands out; it occurs during the short time interval $t \in [-5,\, +5]$ fs. This is the reactive region where most of the probability mass of the transition state conformations accumulates. It forms the dominating topological structure with the largest persistence in the whole configuration-time space. The probability peak of time-aggregated conformations from $-2.5$ to $+2.5$ ps reported in ref~\cite{Manuchehrfar_Activated} largely arises from this short-duration reactive probability peak. That is, the dominant peak of ref~\cite{Manuchehrfar_Activated} comes from the dominant peak occurring during $t = [-5,\, +5]$ fs reported here. There are 6 additional meta peaks at the next level of probability height (green dots), with much smaller persistence. Most of these occur near the transition state, within $\pm 0.2$ ps of $t=0$. The remaining 54 peaks are near either the reactant or the product basin, reflecting minor fluctuations within these stable regions.

\begin{figure}[!htbp] \includegraphics[width=0.98\linewidth]{plots/peaks_time_2.png} \caption{\sf The dynamic probability landscape and its topological structure. (\textbf{A}) 3-d conformation of the alanine dipeptide before and after isomerization.
(\textbf{B}) The persistence diagram of probability peaks over the (time, $\phi, \psi, \theta_1, \alpha, \beta$)-space, where the birth and death probabilities of each peak are shown. (\textbf{C}) The 6-d landscape projected onto the $\phi$--$\theta_1$ plane. Here colored dots are the locations of probability peaks occurring at different times. The contour plot in the background depicts the sea level of time-aggregated probability projected onto this 2-d plane, where brown and cyan indicate high and low probability, respectively. (\textbf{D}) The landscape projected onto the $\phi$--$\psi$ plane. (\textbf{E}) The $\phi$ coordinate of the probability peaks as time proceeds. $\phi$ fluctuates in the reactant basin before the transition occurs. During the transition period ($t=0$), $\phi$ increases and subsequently reaches the value of the product basin. (\textbf{F}) The $\psi$ coordinate of the probability peaks as time proceeds. There is significant fluctuation in $\psi$ before the transition. (\textbf{G}) The $\theta_1$ coordinate of the probability peaks as time proceeds. $\theta_1$ fluctuates throughout the whole time. As there are only a finite number of trajectories, we coarse-grained each coordinate into 15 bins. As probability peaks in 6-d space are shown on 2-d planes, separate probability peaks in space or time may appear at the same coarse-grained locations in these 2-d angle plots. The location of each peak and its birth probability are listed in Supporting Information Table S1.} \label{fig:topo_TimeAnd3} \end{figure} \paragraph{Probability peaks in the reactive region at the transition time.} The locations of the probability peaks in $\phi$, $\psi$, and $\theta_1$ against time $t$ are shown in Fig.~\ref{fig:topo_TimeAnd3}E--\ref{fig:topo_TimeAnd3}G. In the $\phi$ angle, minor probability peaks fluctuate around the reactant basin ($\phi \approx -2.0$ rad) before reaching the transition state (Fig.~\ref{fig:topo_TimeAnd3}E). At about $0.5$ ps before the transition state, $\phi$ begins to increase rapidly to the value of the product basin ($\phi\approx1.0$ rad), and fluctuates modestly afterwards. At $t=0$, the highest probability peak (red dot) occurs at $\phi=0$. The $\psi$ angle fluctuates drastically around the reactant basin ($\psi \approx 0$) prior to the transition state (Fig.~\ref{fig:topo_TimeAnd3}F). Near the transition at $t=0$, $\psi$ decreases gradually to the value of the product basin ($\psi\approx 0.5$ rad) and becomes more stable after the isomerization. In the reaction coordinate $\theta_1$, the probability peaks fluctuate significantly in a consistent manner throughout the entire $5$ ps, exhibiting an overall oscillating behavior (Fig.~\ref{fig:topo_TimeAnd3}G). The probability peaks are all small before the transition (blue), reflecting the fact that the molecular conformations prior to isomerization are diverse. At the transition time $t=0$, a probability peak is located in the small region of $\phi \times \theta_1 \times \psi = [-0.2,\, +0.2]\times[-0.1,\, +0.1]\times[-0.2,\, +0.2]$ (Fig.~\ref{fig:topo_TimeAnd3}E-\ref{fig:topo_TimeAnd3}G, red). This reflects the fact that most conformations en route to isomerization pass through a small reactive region in the configuration space. After the transition, probability peaks again become small (blue), reflecting the diverse conformations near the product basin.
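As a side note on reproducibility, the construction of the binned probability surface and the extraction of its superlevel-set $0$-dimensional persistence can be carried out with off-the-shelf tools. The following is a minimal sketch assuming the \texttt{numpy} and \texttt{gudhi} packages and a hypothetical array \texttt{conformations} of time-stamped dihedral angles; it illustrates the general approach rather than the authors' exact implementation, which is available at the GitHub repository cited above.
\begin{verbatim}
import numpy as np
import gudhi

def probability_surface(conformations, angle_bins=15, time_bins=500):
    """Bin (phi, psi, theta1, alpha, beta, t) samples into a
    normalized 6-d histogram; `conformations` has shape (N, 6),
    with angles in radians and time in ps."""
    edges = [np.linspace(-np.pi, np.pi, angle_bins + 1)] * 5
    edges.append(np.linspace(-2.5, 2.5, time_bins + 1))
    hist, _ = np.histogramdd(conformations, bins=edges)
    return hist / hist.sum()

def superlevel_persistence_0d(p):
    """Birth/death probabilities of 0-dim features of the superlevel
    sets of p, via the sublevel filtration of -p on a cubical complex."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=-p)
    pairs = cc.persistence()
    # Undo the sign flip; the essential peak has death -inf -> +inf.
    return [(-b, -d) for dim, (b, d) in pairs if dim == 0]
\end{verbatim}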
Overall, these results show that the transition state at $t=0$ has the highest probability peak, which is preceded and followed by smaller meta peaks (two before and seven after $t=0$), all within $\approx \pm0.4$ ps of the transition time. At the reactant and product basins, molecular conformations are diverse, with a number of small probability peaks. As the probability peak increases and then decreases in height, there is consistent fluctuation in the $\theta_1$ angle, while moderate fluctuations occur before the transition in $\phi$ and in $\psi$. \paragraph{Relation between free energy surface and the dynamic probability surface.} The potential energy surface of the alanine dipeptide isomerization in vacuum is as previously described in ref.~\cite{Bolhuis2000}. There are two prominent minima on the potential energy surface, associated with the reactant basin and the product basin. Their locations are identical to the locations of the reactant and the product basins on the dynamic probability surface reported in~\cite{Manuchehrfar_Activated}. We have also determined the locations of the minima on the free energy surface, which are derived from a longer MD trajectory of $\approx 15$ ns. The $3.0 \times 10^7$ conformations taken at $0.5$ fs intervals along the trajectory are harvested, from which the free energy surface is approximated~(Fig.~\ref{fig:ActualPotential}A and \ref{fig:ActualPotential}B). The topological structure of the free energy surface is summarized in the persistence diagram of Fig.~\ref{fig:ActualPotential}C. There are two prominent minima on the free energy surface, or equivalently, two high probability peaks on the probability surface, when examined over the 3-dimensional $\phi$--$\psi$--$\theta_1$ space (red dots in Fig.~\ref{fig:ActualPotential}A and \ref{fig:ActualPotential}B). One is associated with the reactant basin, at ($\phi$, $\psi$, $\theta_1$) $=$ $(-1.68,\,0.42,\,-0.18)$ (labeled 1), and the other with the product basin, at $(1.25,\,-0.84,\,0.01)$ (labeled 2). These locations are identical to the locations of the reactant and the product basins on the dynamic probability surface of reactive trajectories reported in~\cite{Manuchehrfar_Activated}. Furthermore, there exists a probability peak located at the active region of ($\phi$, $\psi$, $\theta_1$) $=$ $(0.00,\,-0.42,\,0.00)$ on the dynamic probability surfaces, regardless of whether the surface is time-separated as discussed earlier or aggregated over the whole $2.5$ ps period~\cite{Manuchehrfar_Activated}. However, there is no corresponding minimum on the free energy surface at this location. \begin{figure}[!htbp] \includegraphics[width=0.98\linewidth]{plots/ActualPotentialSurface.png} \caption{Free energy surface approximated from a long MD trajectory of $15$ ns plotted on {\bf (A)} the $\phi$--$\theta_1$ plane and {\bf (B)} the $\phi$--$\psi$ plane. Its persistence diagram {\bf (C)} shows that there are two prominent minima on the free energy surface, or peaks on the probability landscape, which are associated with the reactant basin (labeled 1) and the product basin (labeled 2).} \label{fig:ActualPotential} \end{figure} \subsection{Reactive Vortex Regions of High Probability Exhibit Strong Non-Diffusive Rotational Flux} \paragraph{Flux and projection to the $\phi$--$\theta_1$ and $\phi$--$\psi$ planes.} We study the dynamic fluxes of molecular movement, which are calculated using Eqn~(\ref{eqn:flux-disc}).
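As a concrete illustration of this kind of discretized flux computation, the following is a minimal sketch under the assumption that the flux in each bin is estimated from the mean finite-difference velocity of the conformations falling in it. The function name and the estimator are hypothetical stand-ins for Eqn~(\ref{eqn:flux-disc}), which is defined earlier in the text.
\begin{verbatim}
import numpy as np

def binned_flux(positions, velocities, edges):
    """Average velocity field on a grid: positions is (N, D),
    velocities is (N, D) finite-difference velocities, and
    edges is a list of D arrays of bin edges."""
    counts, _ = np.histogramdd(positions, bins=edges)
    flux = np.zeros(counts.shape + (positions.shape[1],))
    for d in range(positions.shape[1]):
        total, _ = np.histogramdd(positions, bins=edges,
                                  weights=velocities[:, d])
        flux[..., d] = total / np.maximum(counts, 1)
    return flux
\end{verbatim}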
We first study the projection of the flux lines onto the $\phi$--$\theta_1$ and $\phi$--$\psi$ planes (Fig.~\ref{fig:flux_TimeAnd3} and Supplementary movies~2-3), and examine how they are related to topological changes in the probability peaks in the 6-d space of (time$, \, \phi,\, \psi, \, \theta_1, \, \alpha,\, \beta$). For illustration, we take 3 time points: before ($t= -700$ fs), at ($t= 0$ fs), and after ($t= +770$ fs) the transition. \begin{figure}[!htbp] \includegraphics[width=0.98\linewidth]{plots/Flux_phi_psi_theta1_2.png} \caption{\sf Dynamic fluxes projected on the $\phi$--$\theta_1$ and the $\phi$--$\psi$ planes at three different times: before ($t=-700$ fs), at ($t=0$ fs), and after ($t=+770$ fs) the transition. The strongest portions of the flux lines are in red. Red dots are the locations of probability peaks at the current time, and blue dots are the locations of the peaks $20$ fs later.} \label{fig:flux_TimeAnd3} \end{figure} At $t=-700$ fs, flux is present in the cubic region of $\theta_1 \in [-1.0, \, +1.0]$, $\phi \in [-3.0, \, 0.5]$, and $\psi \in [-3.0, \, +3.0]$. Upon projection onto the $\phi$--$\theta_1$ plane, strong and uneven fluxes are located in a smaller rectangle of $\theta_1 \in [-0.2, \, +0.2]$ and $\phi \in [-1.5, \, -0.5]$ (green rectangular region in Fig.~\ref{fig:flux_TimeAnd3}A and red flux lines in Supplementary movie 2). This is the same location as the probability peak at $t=-700$ fs (Fig.~\ref{fig:topo_TimeAnd3}D and Fig.~\ref{fig:topo_TimeAnd3}A). When projected onto the $\phi$--$\psi$ plane, the flux is weak and nearly uniform (Fig.~\ref{fig:flux_TimeAnd3}D and Supplementary movie 3). At the transition time $t=0$, flux lines are the strongest on both the $\phi$--$\theta_1$ and the $\phi$--$\psi$ planes. They are in the direction of increasing $\phi$ and decreasing $\theta_1$, and slightly decreasing $\psi$ (Fig.~\ref{fig:flux_TimeAnd3}B, \ref{fig:flux_TimeAnd3}E, and Supplementary movies~2-3). This is the direction pointing from the reactant basin to the product basin. The probability peak at $t=0$ is located at the center of the flux lines (red dots in Fig.~\ref{fig:flux_TimeAnd3}B and~\ref{fig:flux_TimeAnd3}E). At $t=+770$ fs after the transition, dynamic flux is found in the cubic region of $\phi \in [-0.5, \, 2.0]$, $\theta_1 \in [-0.75, \, 0.75]$, and $\psi \in [-3.0, \, 1.5]$~(Fig.~\ref{fig:flux_TimeAnd3}C and \ref{fig:flux_TimeAnd3}F). When projected onto the $\phi$--$\theta_1$ plane, the flux is uneven and is the strongest around the rectangle of $\phi \in [0.8, \, 1.2]$ and $\theta_1 \in [-0.2, \, 0.5]$ (green rectangle, Fig.~\ref{fig:flux_TimeAnd3}C). It is also uneven in the $\phi$--$\psi$ plane and is the strongest around the rectangle of $\phi \in [0.8, \, 1.2]$ and $\psi \in [-1.5, \, 0.0]$~(green rectangle, Fig.~\ref{fig:flux_TimeAnd3}F). Overall, these results show that directional fluxes of molecular movement emerge during the transition period. Fluxes are concentrated in the high probability reactive region in the configuration-time space and drive the probability peak of molecular configurations to future locations (red to blue dots, Fig.~\ref{fig:flux_TimeAnd3}A--\ref{fig:flux_TimeAnd3}F). At the transition time, they are the strongest and are in the general direction of moving molecules towards the product basin. \paragraph{The reactive vortex region has strong rotational flux during transition.} We further study the rotational flux of molecular movements during the transition.
Its projections onto the $\phi$--$\theta_1$ and the $\phi$--$\psi$ planes are $({\partial J_{\Delta\Omega,\theta_1}(t)}/{\partial \phi} - {\partial J_{\Delta\Omega,\phi}(t)}/{\partial \theta_1})$ and $({\partial J_{\Delta\Omega,\psi}(t)}/{\partial \phi} - {\partial J_{\Delta\Omega,\phi}(t)}/{\partial \psi})$, respectively~(Eqn~(\ref{eqn:rot})). We focus on the high probability reactive region and examine the rotational flux during the time interval between $-50$ fs and $+50$ fs in the reactive cubical region where most of the probability mass is located~(Fig.~\ref{fig:HighResFlux_TimeAnd3}). We divide the interval in each dimension of the cube $\phi \times \theta_1 \times \psi \in [-1.0,\, 1.0] \times[-0.5,\, 0.5]\times[-2.0,\, 1.0]$ containing the reactive region into 250 bins and examine the flux and rotational flux in the $250^3=15,625,000$ cubes. \begin{figure}[!htbp] \includegraphics[width=0.98\linewidth]{plots/High_res2_2.png} \caption{\sf The flux and its rotation during the transition at $t=-33$ fs~(A), and at $t= -42$ fs, $-15$ fs, $+2$ fs, and $+12$ fs~(B). The flux lines are shown as blue lines, with the flux rotation coded by color intensity, where darker blue represents stronger rotation. (A) There is strong flux rotation in the plane of the two reaction coordinates $\theta_1$ and $\phi$ (darker blue) at $t=-33$ fs (top), while flux rotation is negligible in the plane of $\psi$ and $\phi$ (bottom). (B) Strong flux rotation is present at $t=-42$ fs, $-15$ fs, $+2$ fs, and $t= +12$ fs as the flux lines change direction. In contrast, flux rotation remains negligible in the $\phi$--$\psi$ plane (Supplementary movie 4), with the flux maintaining the same direction as in (A). Arrows in red represent the overall directions of the flux lines.} \label{fig:HighResFlux_TimeAnd3} \end{figure} There are strong rotational fluxes in the $\phi$--$\theta_1$ plane, the plane of the two most important reaction coordinates~(Fig.~\ref{fig:HighResFlux_TimeAnd3}A, top). The flux exhibits significant changes in the direction of $\theta_1$ as time proceeds while maintaining the same direction of increasing $\phi$ (curved arrows, Fig.~\ref{fig:HighResFlux_TimeAnd3}B, top, and Supplementary movie 4). In contrast, flux lines in the $\phi$--$\psi$ plane move along a fixed direction of increasing $\phi$ and decreasing $\psi$. The rotational flux in this plane is negligible, even though $\psi$ is the coordinate that, together with $\phi$, geometrically defines the reactant and product basins~(Fig.~\ref{fig:HighResFlux_TimeAnd3}A, bottom). Overall, these results show there are strong vortices in the reactive region. \paragraph{Non-diffusive rotational trajectories in reactive vortex region dominate the transition process.} Our above analysis of flux over time is at $10$ fs resolution. As trajectories of molecular movement pass through the transition state rapidly, we now examine the behavior of trajectories during transition at a finer resolution of $1$ fs. To gain further insight into the reactive vortex region, we study the behavior of trajectories of molecular movement and measure the number of times each trajectory rotates during the short transition time interval of $[-100,\, +100]$ fs. This is calculated by counting the number of times a trajectory re-enters the transition state region. Here we regard the small rectangle of $(\phi, \theta_1)=[-0.2,\, 0.2]\times[-0.1,\, 0.1]$ as the reactive region, which is where the red dots in Fig.~5C and 5D are located.
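Both quantities just described lend themselves to short numerical sketches. Below, assuming a regular 2-d grid of binned flux components and a trajectory given as arrays of dihedral angles (all names hypothetical), the first function evaluates the in-plane rotation of Eqn~(\ref{eqn:rot}) by finite differences, and the second counts the entries of a trajectory into the reactive rectangle.
\begin{verbatim}
import numpy as np

def rotational_flux_2d(J_a, J_b, spacing_a, spacing_b):
    """Discrete curl dJ_b/da - dJ_a/db on a regular (a, b) grid;
    J_a and J_b are 2-d arrays of in-plane flux components."""
    dJb_da = np.gradient(J_b, spacing_a, axis=0)
    dJa_db = np.gradient(J_a, spacing_b, axis=1)
    return dJb_da - dJa_db

def count_entries(phi, theta1, box=((-0.2, 0.2), (-0.1, 0.1))):
    """Number of times a trajectory enters the reactive rectangle."""
    inside = ((phi >= box[0][0]) & (phi <= box[0][1]) &
              (theta1 >= box[1][0]) & (theta1 <= box[1][1]))
    # Count the first visit plus every False -> True transition.
    return int(inside[0]) + int(np.sum(~inside[:-1] & inside[1:]))
\end{verbatim}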
Results in ref.~\cite{Manuchehrfar_Activated} showed that conformations of the transition state ensemble are indeed located in this region, as these conformations pass the rigorous committor test (dashed red rectangle, Fig.~\ref{fig:rotation}A; see also the discussion related to Fig.~\ref{fig:topo_TimeAnd3}). Fig.~\ref{fig:rotation}A shows example trajectories with different numbers of entrances and re-entrances into the reactive region. \begin{figure}[!htbp] \centering \includegraphics[width=0.38\linewidth]{plots/Trajectory_rotation.png} \caption{\sf Rotating trajectories in the reactive vortex region. \textbf{(A)} Examples of trajectories that enter the reactive region one, two, four, and six times. Here the reactive region of the transition state is indicated by the dashed red rectangle of $(\phi, \theta_1)=[-0.2,\, +0.2]\times[-0.1,\, +0.1]$, as discussed in Fig.~\ref{fig:topo_TimeAnd3} and in~\cite{Manuchehrfar_Activated}. Each entrance/re-entrance point of a trajectory is highlighted by a green circle. \textbf{(B)} Distribution of the number of entrances that rotating trajectories exhibit during the transition. The majority of trajectories enter the reactive vortex region between three and five times, with a small proportion rotating more than six or fewer than three times. \textbf{(C)} Additional examples of trajectories that circulate three, four, five, and six times around the reactive vortex region. Here dashed red circles highlight the circles in each trajectory.} \label{fig:rotation} \end{figure} The distribution of the number of times that trajectories enter the transition region is shown in Fig.~\ref{fig:rotation}B for a sample of 100,000 trajectories. The majority of them re-enter the transition region 3--6 times, with 4 entries occurring most frequently ($35.2\%$). Trajectories entering the transition region 5, 3, and 6 times represent $20.0\%$, $18.6\%$, and $10.7\%$ of the sampled trajectories, respectively. Fig.~\ref{fig:rotation}C shows additional examples of trajectories rotating inside the transition region in well-formed circles (3, 4, 5, and 6 times, respectively). These results show that most trajectories in the reactive vortex region rotate multiple times, exhibiting strong non-diffusive rotational dynamics. There is a broad distribution in the number of re-entrances into the transition-state region, with the majority of trajectories experiencing 3--6 rounds of rotation. \paragraph{Rotational fluxes are important for barrier crossing.} The flux lines in Fig.~\ref{fig:HighResFlux_TimeAnd3}B (e.g., $t=-33$, $-42$, and $+2$ fs) show that molecules generally move along the direction that coincides with the isosurfaces of ensembles of conformations with the same committor value, as described in ref.~\cite{wu2022rigorous}. The rotational flux carries the molecules in the direction orthogonal to the isocommittor surfaces, indicating barrier crossing. As representatives of most trajectories, the examples in Fig.~\ref{fig:rotation}A show that molecules move rapidly in the general direction of the isocommittor surfaces, but slowly in the orthogonal direction towards other isocommittor surfaces. The combination of these movements results in overall spiral-like trajectories that trace out elongated ellipses in the projection onto the $\phi$--$\theta_1$ reaction coordinates.
During barrier crossing, the motion in $\phi$ is assisted by $\theta_1$, which transfers the potential energy it received from the thermal bath to $\phi$ directly via kinetic energy, as discussed in ref.~\cite{wu2022rigorous}. This leads to tight cooperative movements between $\theta_1$ and $\phi$, which are manifested as rotational flux. \section{Discussion} Transition state theory has been the cornerstone for understanding activated processes, ranging from the isomerization of simple organic molecules to complex protein conformational changes. Central to this theory is the transition state ensemble, namely, the molecular conformations at the barrier top occupying the 1-saddle point of the free energy surface. The dynamics of the transition state ensemble along the reaction coordinates is an important component of reaction rate theories. The default assumption for complex systems was based on Kramers' physical intuition rather than systematic examination of the transition dynamics in realistic systems. As a result, the dynamics of transition of naturally occurring activated processes in complex molecules are largely unexplored. In this study, we quantified the detailed topological structures of the dynamic probability surface of an activated process over the time-configuration space. We use the alanine dipeptide isomerization in vacuum as our model system. The dynamic probability surface is constructed by harvesting naturally occurring trajectories of molecular movements connecting the reactant and the product basins. Unlike small molecules that require an external energy source, alanine dipeptide is the smallest complex system with an internal heat-bath composed of a large number of non-reaction coordinates. This heat-bath provides the necessary energy flow to facilitate the barrier-crossing process, an important property shared by proteins but absent in small molecules. Our results are based on rigorous analysis of the topological structures of the high-dimensional dynamic surfaces using persistent homology. In addition, we introduce a new method for quantifying high-dimensional flux rotations. These techniques allowed us to uncover a number of important insights. First, the transition state ensemble of conformations is located in a reactive region in the configuration-time space and forms the dominant probability peak. This finding extends earlier results of ref.~\cite{Manuchehrfar_Activated} and shows that, after further separation of transition-state conformations along the time axis, a single prominent probability peak occurring during the short interval $t = [-5,\, +5]$ fs dominates the barrier-crossing process. That is, a strong reactive region with the highest probability peak exists in configuration-time, where transition state conformations, as verified by the rigorous committor test, accumulate. This region of short time duration dominates the whole transition process. Second, there are strong directional fluxes in the high-probability reactive region. Molecules in this active region are not in equilibrium and are not diffusion-controlled. The fluxes adjust directions and become uniformly aligned at the transition time, when they are the strongest, with the probability peak located at the center of the flux lines. These fluxes occur primarily in the subspace of the reaction coordinates, and carry the molecular conformations forward. Third, the reactive region is characterized by strong vortices.
There are strong rotational fluxes at the transition state, which occur in the subspace of the two most important reaction coordinates, but not in the subspace of the most important geometric coordinates. Most trajectories en route to the product basin rotate and enter the reactive vortex region multiple times. These reactive trajectories move rapidly in the direction of the isocommittor surfaces, but slowly in the orthogonal direction to scale the barrier to the next isocommittor surface, drawing out spiral-like curves encircling ellipses elongated in the direction of the isocommittor surfaces. The tight cooperative movements between the reaction coordinates $\theta_1$ and $\phi$ are due to the transfer to $\phi$ of the potential energy that $\theta_1$ receives from the thermal bath. The dynamic movements along the isocommittor surface and in the orthogonal, barrier-crossing direction are manifested as rotational fluxes in the plane of the reaction coordinates. Overall, our findings offer a first glimpse into the reactive vortex region that characterizes the non-diffusive dynamics of barrier-crossing in a naturally occurring activated process. By separating conformations along the time axis, we uncovered rich topological structures in the dynamic probability surface. Such details cannot be resolved by examining the free-energy surface and its 1-saddle point, where the dynamic aspects of the process are obscured. The discovery of the reactive vortex region highlights the importance of analyzing the topological structures of the dynamics of the transition region in naturally occurring activated processes. With alanine dipeptide being the first system where non-diffusive behavior is established, it will be fruitful to study the reactive dynamics of other naturally occurring activated processes of complex molecules. The results can serve as the foundation for developing a theoretical model of transition dynamics describing activated processes occurring in nature. While our study does not directly provide physical quantities such as rate constants that correspond to experimental measurements, it is possible in principle to analyze how fluxes cross the dividing separatrix surface and to estimate the reaction rate as described in~\cite{rosenberg1980isomerization,bose2017non,jang1992comment,zhao1993comment,nagahata2021phase}, provided one can precisely define the separatrix surface and can accurately sample and quantify the fluxes. \section*{Acknowledgement} We thank Drs.\ Hubert Wagner and Herbert Edelsbrunner for their generous help in extending the cubical complex algorithm. We also thank Dr.~Wei Tian for his help. This work is supported by grants NIH R35 GM127084 (to JL), NIH R01 GM086536 (to AM), and NSF CHE-1665104 (to AM). \section*{Conflict of Interest Statement} There are no conflicts of interest.
{ "timestamp": "2022-12-23T02:14:01", "yymm": "2212", "arxiv_id": "2212.11815", "language": "en", "url": "https://arxiv.org/abs/2212.11815" }
\section{Introduction}\label{sec1} While the majority of deep neural networks are trained on GPUs, they are increasingly being deployed on edge devices, such as mobile devices. These edge devices require compressing the architecture for a given hardware design (e.g., GPUs or lower-precision chips) due to memory and power constraints \cite{benmeziane2021comprehensive, cheng2017survey}. Moreover, application-specific hardware is being designed to accommodate the deployment of deep learning models. Thus, designing deep learning architectures that are efficient for deployment (i.e., \emph{inference}) has become a new challenge in the deep learning community. The combined problem of hardware and deep learning model design is complex, and the precise measurement of efficiency is both device and model specific. This is because researchers have to take into account various efficiency factors such as latency, memory footprint, and energy consumption. Here we deliberately oversimplify the problem in order to make it tractable, by addressing a fundamental element of hardware cost. Knowing that power consumption is directly related to the chip area in a digital circuit, we use the chip area required to implement an arithmetic operation in hardware as a surrogate measure of the efficiency of a deep learning model. While this is very coarse, and full costs will depend on other aspects of hardware implementation, it nevertheless represents a fundamental unit of cost in hardware design \cite{hennessy2011computer}. In a deep learning model, weights are multiplied by inputs; hence one of the fundamental operations in deep learning models is multiplication, $S_{{\mathrm{conv}}}(x,w) = wx$. In our work, we replace multiplication with the EuclidNet operator, \begin{equation}\label{eq: euclid} S_{{\mathrm{euclid}}}(x,w) = -\frac{1}{2}\|x-w\|_2^2, \end{equation} which combines a difference with a square operator. We will refer to the family of deep learning models that use equation \eqref{eq: euclid} as EuclidNets. These models are a compromise between standard multiplicative models and AdderNets~\cite{chen2020addernet}, which remove multiplication entirely, but at the cost of a significant loss of accuracy and a difficult training procedure. Replacing multiplication with squaring can potentially reduce the computation cost. The feature representation of each of the architectures is illustrated in Figure~\ref{fig:feature}. EuclidNets can be implemented in 8-bit precision without loss of accuracy, as demonstrated in Table~\ref{tab: quant}. The square operator is cheaper than multiplication, and it can also be implemented using lookup tables \cite{de2009large}. The authors of \cite{baluja2018no,covell2019table} show that lookup tables can replace actual floating-point computation, while works such as LookNN \cite{razlighi2017looknn} take the first step in designing hardware for lookup-table use. On low-precision hardware, we can compute $S_{\mathrm{euclid}}$ for about half the cost of computing $S_{\mathrm{conv}}$. Furthermore, using EuclidNets, the deep learning model does not lose expressivity, as explained in Section \ref{sec:theory}. To summarize, we make the following contributions: \begin{itemize} \item We design a deep learning architecture based on replacing the multiplication $S_{\mathrm{conv}}(x,w) = wx$ by the squared difference of equation \eqref{eq: euclid}. We show that using the square operator can potentially reduce the hardware cost.
\item These deep learning models are just as expressive as convolutional neural networks. In practice, they have comparable accuracy (a drop of less than 1 percent on ImageNet with ResNet-50 going from full-precision convolution to 8-bit EuclidNets). \item We show theoretically and empirically that EuclidNets behave the same as convolutional neural networks when the input is transformed (e.g., by a linear transformation) or affected by noise (e.g., Gaussian noise). \item We provide an easy approach to train EuclidNets using homotopy. \end{itemize} \begin{figure} \centering \includegraphics[width=0.32\textwidth]{conv.png} \includegraphics[width=0.32\textwidth]{adder.png} \includegraphics[width=0.32\textwidth]{euclid.png} \caption{Feature representation of traditional convolution with $S(x,w) = xw$ (left), AdderNet $S(x,w) = -\|x-w\|_1$ (middle), and EuclidNet $S(x,w) = -\frac{1}{2}\|x-w\|_2^2$ (right).} \label{fig:feature} \end{figure} \begin{table}[h] \caption{EuclidNet accuracy with full precision and 8-bit quantization: results on ResNet-20 with Euclidean similarity for CIFAR10 and CIFAR100, and results on ResNet-18 for ImageNet. EuclidNet achieves comparable or better accuracy with 8-bit precision, compared to the conventional full-precision convolutional neural network.}\label{tab: quant} \centering \begin{tabular}{ccc ccc} \multirow{3}{*}{\textbf{Network}} & \multirow{3}{*}{\textbf{Quantization}} & \multirow{3}{*}{\textbf{Chip Efficiency}} & \multicolumn{3}{c}{\textbf{Top-1 accuracy}} \\ &&& CIFAR10 & CIFAR100& ImageNet \\ \hline \multirow{2}{*}{$S_{{\mathrm{conv}}}$} & Full precision & \xmark & 92.9 & 68.14 & 69.56 \\ & 8-bit & \cmark & 92.07 & 68.02 & 69.59 \\ \multirow{2}{*}{$S_{{\mathrm{euclid}}}$} & Full precision & \xmark & 93.32 & 68.84 & 69.69 \\ & 8-bit & \cmark & 93.30 & 68.78 & 68.59 \\ \multirow{2}{*}{$S_{{\mathrm{adder}}}$} & Full precision & \xmark & 91.84 & 67.60 & 67.0 \\ & 8-bit & \cmark & 91.78 & 67.60 & 68.8 \\ \multirow{1}{*}{BNN} & 1-bit & \cmark & 84.87 & 54.14 & 51.2 \\ \end{tabular} \end{table} \section{Context and related work} Compressing deep learning models comes at the cost of accuracy loss and increased training time (to a greater extent for quantized networks) \cite{frankle2018lottery, cheng2018model}. Part of the accuracy loss comes simply from decreasing model size, which is required for mobile and edge devices \cite{wu2019machine}. Some of the most common deep learning compression methods include pruning \cite{reed1993pruning}, quantization \cite{guo2018survey}, knowledge distillation \cite{hinton2015distilling}, and efficient design \cite{iandola2016squeezenet,howard2017mobilenets,zhang2018shufflenet,tan2019efficientnet}. Among the compression methods, the most prominent approach is low-bit quantization \cite{guo2018survey}. In this case, inference speeds up as the bit width decreases, at the cost of an accuracy drop and longer training times. In extreme quantization, such as binary networks, operations have negligible cost at inference but exhibit a considerable accuracy drop \cite{Hubara_BNN}. Here we focus on a small sub-field of compression that optimizes the arithmetic operations in a deep learning model. This approach can be combined successfully with other conventional compression methods, such as quantization \cite{xu2020kernel} and pruning \cite{reed1993pruning}.
On the other hand, knowledge distillation \cite{hinton2015distilling} consists of transferring information from a larger teacher network to a smaller student network. The idea is easily extended by thinking of information transfer between different similarity measures, which \cite{xu2020kernel} explores in the context of AdderNets. Knowledge distillation is an uncommon training procedure and requires extra implementation effort. EuclidNet, however, preserves accuracy without knowledge distillation. We suggest straightforward training using a smooth homotopy transition between common convolution and the Euclid operation. \section{Similarity and Distances} \subsection{Inner Products versus Distances} Consider an intermediate layer of a deep learning model with input $x\in{\mathbb{R}}^{H\times W \times c_{{\mathrm{in}}}}$ and output $y~\in~{\mathbb{R}}^{H\times W \times c_{{\mathrm{out}}}}$, where $H,W$ are the dimensions of the input feature, and $c_{{\mathrm{in}}}, c_{{\mathrm{out}}}$ are the numbers of input and output channels, respectively. For a standard convolutional network, we represent the input-to-output transformation via weights $w~\in~{\mathbb{R}}^{d\times d\times c_{{\mathrm{in}}}\times c_{{\mathrm{out}}}}$ as \begin{equation}\label{eq: layer} y_{mnl} = \sum_{i = m}^{m+d} \sum_{j=n}^{n+d} \sum_{k = 0}^{c_{{\mathrm{in}}}} x_{ijk} w_{ijkl}. \end{equation} Setting $d=1$ reduces equation \eqref{eq: layer} to a fully-connected layer. We can abstract the multiplication of the weights $w_{ijkl}$ by $x_{ijk}$ in the equation above by using a similarity measure $S:{\mathbb{R}}\times{\mathbb{R}}\to{\mathbb{R}}$. The original convolutional layer corresponds to $$ S_{{\mathrm{conv}}}(x,w) = xw. $$ In our work, we replace $S_{\mathrm{conv}}$ with $S_{\mathrm{euclid}}$, given by equation \eqref{eq: euclid}. A number of works have also replaced the multiplication operator in deep learning models. The most relevant work is AdderNet \cite{chen2020addernet}, which uses \begin{equation}\label{eq: adder} S_{{\mathrm{adder}}}(x,w) = -\|x-w\|_1, \end{equation} replacing multiplication by the $\ell_1$ norm, i.e., the sum of absolute differences. This operation can be implemented very efficiently on custom hardware, since subtraction and absolute value of $n$-bit integers cost $\mathcal{O}(n)$ gate operations, compared to $\mathcal{O}(n^2)$ for the multiplication $S_{\mathrm{conv}}(x,w) = xw$. However, AdderNet comes with a significant loss in accuracy, and is difficult to train. \subsection{Other Similarity Measures} The idea of replacing multiplication operations to save resources within the context of neural networks dates back to the 1990s. Equally motivated by computational speed-up and hardware logic minimization, the authors of \cite{dogaru1999comparative} defined perceptrons that use the synapse similarity, \begin{equation}\label{eq: comp_syn} S_{{\mathrm{synapse}}}(x,w) = \sign(x)\cdot \sign(w) \cdot \min(\|x\|,\|w\|), \end{equation} which is cheaper than multiplication in terms of hardware complexity. Equation \eqref{eq: comp_syn} has not been tested with modern deep learning models and datasets. Moreover, in \cite{akbas2015multiplication} a slight variation is introduced, which is also a multiplication-free operator, \begin{equation}\label{eq: mf} S_{{\mathrm{mfo}}}(x,w) = \sign(x)\cdot\sign(w)\cdot(\|x\|+\|w\|). \end{equation} Note that both equations \eqref{eq: comp_syn} and \eqref{eq: mf} use the $\ell_1$-norm.
Also note that in \cite{mallah2018multiplication}, the updated design choice allows contributions from both operands $x$ and $w$. Furthermore, in \cite{afrasiyabi2018non}, this similarity is studied for image classification on CIFAR10. Other applications of equation \eqref{eq: mf} are studied in \cite{badawi2017multiplication, pan2019additive}. In \cite{you2020shiftaddnet}, the similarity operation is further combined with a bit-shift, leading to improved accuracy with negligible added hardware cost. However, the accuracy results for AdderNet appear to be lower than those reported in \cite{chen2020addernet}. Another follow-up work uses knowledge distillation to further improve the accuracy of AdderNets \citep{xu2020kernel}. Instead of simply replacing the similarity inside the summation, there is also the possibility of replacing the full expression of equation \eqref{eq: layer}, as, for example, proposed in \cite{limonova2020resnet,limonova2020bipolar}, by approximating the activation of a given layer with an exponential term. Unfortunately, these methods only lead to speed-ups in certain cases and, in particular, they do not improve CPU inference time. Moreover, the reported accuracy on the benchmark problems is also lower than the typical baseline. In \cite{mondal2019dense}, the authors used three-layer morphological neural networks for image classification. Morphological neural networks were introduced in the 1990s in \cite{davidson1990theory, ritter1996introduction} and use the notions of erosion and dilation to replace equation \eqref{eq: layer}: \begin{align*} \mbox{Erosion}(x,w) &= \min_j S(x_j, w_j) = \min_j (x_j - w_j), \\ \mbox{Dilation}(x,w) &= \max_j S(x_j, w_j) = \max_j (x_j + w_j). \end{align*} The authors proposed two methods of stacking layers to expand the networks, but they admitted the possibility of over-fitting and difficult training, casting doubt on the scalability of the method. \section{Theoretical Justification}\label{sec:theory} This section provides some theoretical ground for the connections among AdderNets, EuclidNets, and conventional convolution. \subsection{Equivalence with Multiplication}\label{sec:theory_align} The Euclidean distance has a close tie with multiplication and hence can replace the multiplications in convolution and linear layers. Here, we delve into the details of this claim a bit more. Let us consider the squared Euclidean distance between two vectors ${\mathbf{x}}$ and ${\mathbf{w}}$, $\norm{{\mathbf{x}}-{\mathbf{w}}}=({\mathbf{x}}-{\mathbf{w}}){^\top}({\mathbf{x}}-{\mathbf{w}})$, where ${\mathbf{x}}$ and ${\mathbf{w}}$ are the vectors of inputs and weights, respectively. Moreover, ${\mathbf{x}}$ and ${\mathbf{w}}$ are vectors of random variables, so it is of interest to study the expected value of the EuclidNet operation first, \begin{equation} -\frac 1 2 \mathbb{E} \norm{{\mathbf{x}}-{\mathbf{w}}} = -\frac 1 2 \mathbb{E}\norm {\mathbf{x}} - \frac 1 2 \mathbb{E}\norm {\mathbf{w}} + \mathbb{E}({\mathbf{x}}{^\top}{\mathbf{w}}) . \label{eq:euclid_expect} \end{equation} In other words, by \eqref{eq:euclid_expect} the convolution similarity measure, i.e., the inner product ${\mathbf{x}}{^\top}{\mathbf{w}}$, is embedded in the EuclidNet form. However, the result is biased by two extra terms, $-\frac 1 2 \mathbb{E}\norm{\mathbf{x}}$ and $-\frac 1 2 \mathbb{E}\norm{\mathbf{w}}$. Thus we may conclude that the Euclidean distance is aligned with multiplication, shifted by two bias terms.
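The expansion behind \eqref{eq:euclid_expect} also suggests how a EuclidNet layer can reuse standard convolution primitives, since $-\frac{1}{2}\|x-w\|_2^2 = x^\top w - \frac{1}{2}\|x\|_2^2 - \frac{1}{2}\|w\|_2^2$ over each receptive field. The following is a minimal PyTorch sketch of this idea; the function name is hypothetical, and the code illustrates the decomposition rather than a reference implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def euclid_conv2d(x, w, stride=1, padding=0):
    """-||x - w||^2 / 2 summed over each receptive field,
    written as x*w - x^2/2 - w^2/2 to reuse F.conv2d."""
    xw = F.conv2d(x, w, stride=stride, padding=padding)
    # Sum of x^2 over each window, replicated for every filter.
    x2 = F.conv2d(x * x, torch.ones_like(w),
                  stride=stride, padding=padding)
    # Sum of w^2 for each filter, broadcast over spatial positions.
    w2 = (w * w).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
    return xw - 0.5 * x2 - 0.5 * w2
\end{verbatim}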
The bias induced by the EuclidNet operation remains controlled in both training and inference, since most deep learning models use some normalization mechanism such as batch norm, layer norm, or weight norm. The Euclidean distance also has a close relationship with cosine similarity. Let us define $S_{\mathrm{cos}}$ as \begin{equation} S_{\mathrm{cos}}({\mathbf{x}},{\mathbf{w}}):= \frac{{\mathbf{x}}^\top {\mathbf{w}}}{\rootnorm {\mathbf{x}} \rootnorm{\mathbf{w}}}. \label{eq:cosine_sim} \end{equation} It is easy to see that in the presence of a normalization mechanism (i.e., $\rootnorm {\mathbf{x}}=\rootnorm {\mathbf{w}}=1$) the cosine similarity and the Euclid similarity become equivalent: \begin{eqnarray} S_{\mathrm{euclid}}({\mathbf{x}},{\mathbf{w}})= S_{\mathrm{cos}}({\mathbf{x}},{\mathbf{w}})-1 &\mathrm{\quad s.t.}& \rootnorm {\mathbf{x}} = \rootnorm {\mathbf{w}}=1 . \label{eq:cosine_distance} \end{eqnarray} Moreover, the Euclidean norm is a transitive similarity measure, since it satisfies the following inequality: \begin{equation} \|{\mathbf{x}}-{\mathbf{w}}\|_2 \geq\lvert~\|{\mathbf{x}}\|_2-\|{\mathbf{w}}\|_2 ~\rvert . \label{eq:inv_triangle} \end{equation} It is noteworthy that this transitivity holds for all $p$-norms (i.e., $\|\mathbf{a}\|_p= (\sum_i |{a}_i|^p)^{\frac{1}{p}}$). This means that the AdderNet \cite{chen2020addernet} operator is also transitive. According to equation \eqref{eq:cosine_distance}, however, the only norm that has such a close relationship with the cosine similarity is the Euclidean norm. This is the distinguishing feature of EuclidNets: while they are distance-based, and hence enjoy the transitivity property in measuring similarity, their behaviour is also completely aligned with that of cosine similarity. \subsection{Expressiveness of EuclidNets} Deep learning models that use the EuclidNet operation are just as expressive as those using multiplication. Note the polarization identity \[ S_{\mathrm{conv}}(x,w) = S_{\mathrm{euclid}}(x,w) - S_{\mathrm{euclid}}(x,0) - S_{\mathrm{euclid}}(0,w), \] which means that any multiplication operation can be expressed using only Euclid operations. \subsection{Hardware cost} Traditionally, hardware developers use smaller multipliers to create larger multipliers \cite{de2009large}. They use various methods of multiplier tiling or divide and conquer to form larger multipliers. The Karatsuba algorithm and its generalizations \cite{weimerskirch2006generalizations} are among the best-known algorithms for implementing large multipliers. Here we show that the Euclidean distance can potentially be implemented with fewer multipliers in hardware. The Karatsuba algorithm is a divide-and-conquer algorithm that performs $n$-bit multiplication using $m$-bit multipliers. Let us assume $a$ and $b$ are $n$-bit integers that can be rewritten using two $m$-bit partitions: \begin{align} \nonumber &a = a_1 \times 2^m + a_2,\\ \nonumber &b = b_1 \times 2^m + b_2.\\ \label{eq:karatsuba_parts} \end{align} In the case of multiplication, we have \begin{align} \nonumber &ab = (a_1 \times 2^m + a_2) (b_1 \times 2^m + b_2)\\ \nonumber &~~~= 2^{2m}a_1b_1+2^m a_1b_2+2^m a_2b_1+a_2b_2,\\ \label{eq:mult} \end{align} which comprises \textit{three} additions and \textit{four} $m$-bit multiplications.
However, for the squaring operation, we have \begin{align} \nonumber &a^2 = (a_1 \times 2^m + a_2) (a_1 \times 2^m + a_2)\\ \nonumber &~~~= 2^{2m}a_1^2+2^{m+1} a_1a_2+a_2^2,\\ \label{eq:square} \end{align} which comprises \textit{two} additions and \textit{three} $m$-bit multiplications. Thus, the squaring operation can be cheaper in hardware. Also note that such divide-and-conquer techniques are commonly used in designing accelerators on FPGA targets. \section{Training EuclidNets} Training EuclidNets is much easier than training networks based on other similarity measures such as AdderNets. This makes EuclidNet attractive for complex tasks such as image segmentation and object detection, where training compressed networks is challenging and causes large accuracy drops. EuclidNets are more expensive than AdderNets when using a floating-point number format; however, their quantization is easy since, unlike AdderNets, they behave like traditional convolution to a great extent. In other words, EuclidNets are easy to quantize. When training a deep learning model using EuclidNets, it is convenient to use the identity \begin{equation} S_{{\mathrm{euclid}}}(x,w) = -\frac {x^2}{2} - \frac{w^2}{2} + x w, \end{equation} which is well suited to GPUs that are optimized for inner-product computations. As such, training EuclidNets does not require an additional CUDA kernel implementation, unlike AdderNets \citep{cuda}. The official implementation of AdderNet \citep{chen2020addernet} is on the order of $20\times$ slower to train than traditional convolution in PyTorch. This is especially problematic for large deep learning models and complex tasks, since even traditional convolution training takes days or even weeks. EuclidNet training is about $2\times$ slower in the worst case, and its implementation is natural in deep learning frameworks such as PyTorch and TensorFlow. \begin{table}[h] \caption{Time (seconds) and maximum training batch-size that fit in a single GPU (\textit{Tesla V100-SXM2-32GB}) during ImageNet training. In parentheses is the slowdown with respect to the $S_{{\mathrm{conv}}}$ baseline. We do not show times for AdderNet, which is much slower than both, because it is not implemented in CUDA.}\label{tab: times} \centering \begin{tabular}{cc l l cc} \multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{l}{\textbf{ Maximum Batch-size}} & \multicolumn{2}{l}{\textbf{Time per step}} \\ & & \multicolumn{1}{l}{\textbf{\begin{tabular}[c]{@{}c@{}} power of 2\end{tabular}}} & \multicolumn{1}{l}{\textbf{integer}} & \textbf{Training} & \textbf{Testing} \\ \hline \multirow{2}{*}{ResNet-18} & $S_{{\mathrm{conv}}}$ & 1024 & 1439 & 0.149 & 0.066 \\ & $S_{{\mathrm{euclid}}}$ & 512 & 869 ($1.7\times$) & 0.157 ($1.1\times$) & 0.133 ($2\times$) \\ \hline \multirow{2}{*}{ResNet-50} & $S_{{\mathrm{conv}}}$ & 256 & 371 & 0.182 & 0.145 \\ & $S_{{\mathrm{euclid}}}$ & 128 & 248 ($1.5\times$) & 0.274 ($1.5\times$) & 0.160 ($1.1\times$) \\ \hline \end{tabular} \end{table} A common method in training neural networks is fine-tuning, that is, initializing with weights trained on different data of a similar nature. Here, we introduce the idea of using a weight initialization from a model trained on a related similarity measure. Rather than training from scratch, we wish to fine-tune EuclidNet starting from accurate CNN weights.
This is achieved by an ``architecture homotopy'', where we change hyperparameters to convert a regular convolution into an EuclidNet operation: \begin{equation} S(x,w; \lambda_k) = xw - \lambda_k\frac{x^2 + w^2}{2},\qquad \mbox{ with }\lambda_k = \lambda_0 + \frac{1 - \lambda_0}{n} \cdot k, \label{eq: homotopy} \end{equation} where $n$ is the total number of epochs and $0<\lambda_0<1$ is the initial transition phase. Note that $S(x,w;0) = S_{{\mathrm{conv}}}(x,w)$ and $S(x,w;1) =S_{{\mathrm{euclid}}}(x,w)$, and equation \eqref{eq: homotopy} is a convex combination of these two similarities. One may interpret $\lambda_k$ as a scheduler for the homotopy, similar to the way the learning rate is scheduled in training a deep learning model. We found that the linear scheduling shown in equation \eqref{eq: homotopy} is empirically effective. Transformations like equation \eqref{eq: homotopy} are commonly used in scientific computing \cite{allgower2003introduction}. The idea of using homotopy in training neural networks can be traced back to \cite{chow1991homotopy}. Recently, homotopy was used in deep learning in the context of activation functions \citep{pathak2019parameter,cao2017hashnet, mobahi2016training,farhadi2020}, loss functions \citep{gulcehre2016mollifying}, compression \citep{chen2019efficient}, and transfer learning \citep{bengio2009curriculum}. Here, we use homotopy in the context of transforming the operations of a deep learning model. The fine-tuning method in equation \eqref{eq: homotopy} is inspired by continuation methods in partial differential equations. Assume $S$ is a solution to a differential equation with the initial condition $S(x,0) = S_0(x)$. In certain situations, solving this differential equation for $S(x,t)$ and then evaluating at $t=1$ might be easier than solving directly for $S_1$. One may think of this homotopy method as an evolution of the deep learning model weights. At time zero the deep learning model consists of regular convolutional layers, but they gradually transform into Euclidean layers. The homotopy method can also be interpreted as a sort of knowledge distillation. Whereas knowledge distillation methods try to match a student network to a teacher network, the homotopy can be seen as a slow transformation of the teacher network into a student network. Figure \ref{fig: homotopy} demonstrates the idea. Interestingly, problems that have been solved with homotopy have also been tackled by knowledge distillation \citep{hinton2015distilling,chen2019efficient,yim2017gift, bengio2009curriculum}. \begin{figure}[t] \begin{center} \includegraphics[width=0.7\linewidth]{homotopy_intuition} \end{center} \caption{Training schema of EuclidNet using homotopy, i.e., transitioning from traditional convolution $S(x,w)=xw$ towards EuclidNet $S(x,w)=-\frac{1}{2} |x-w|^2$ through equation \eqref{eq: homotopy}.} \label{fig: homotopy} \end{figure} \section{Experiments}\label{sec:Experiments} To illustrate the performance of EuclidNets, we apply our proposed method to image classification tasks. We also test our trained deep learning model under different transformations of the input image and compare the accuracy to standard convolutional networks. \subsection{CIFAR10}\label{sec: cifar10} First, we consider the CIFAR10 dataset, consisting of $32\times32$ RGB images with 10 possible classes \citep{krizhevsky2009learning}. We normalize and augment the dataset with random crop and random horizontal flip. We consider two ResNet models \cite{he2015deep}, ResNet-20 and ResNet-32.
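Training follows the homotopy of equation \eqref{eq: homotopy}, which amounts to a one-line change of the similarity plus a per-epoch schedule. The sketch below (element-wise form; names hypothetical, Python assumed) illustrates this; in a full layer, the similarity is summed over each receptive field as in equation \eqref{eq: layer}.
\begin{verbatim}
def homotopy_similarity(x, w, lam):
    """Convex path from S_conv (lam = 0) to S_euclid (lam = 1)."""
    return x * w - lam * (x * x + w * w) / 2

def lam_schedule(k, n, lam0=0.1):
    """Linear schedule lambda_k = lambda_0 + (1 - lambda_0) * k / n,
    for epoch k out of n; lam0 is the initial transition phase."""
    return lam0 + (1.0 - lam0) * k / n
\end{verbatim}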
We train EuclidNet using the optimizer from \cite{chen2020addernet}, which we will refer to as AdderSGD, to evaluate EuclidNet under a similar setup. We use an initial learning rate of $0.1$ with cosine decay, momentum $0.9$, batch size 128, and weight decay $5\times 10^{-4}$. We follow \cite{chen2020addernet} in setting the learning-rate scaling parameter $\eta$. For the traditional convolutional network, we use the same hyper-parameters with the stochastic gradient descent optimizer. The details of the classification accuracy are provided in Table \ref{tab: cifar10}. We consider two different weight initializations for EuclidNets: first, we initialize the weights randomly, and second, we initialize them with weights pre-trained on a convolutional network. EuclidNets show negligible accuracy loss compared to the standard ResNets. We see that for CIFAR10, training from scratch achieves even higher accuracy, while initializing with the convolutional network and using linear homotopy training improves it even further. \begin{table}[h] \caption{Results on CIFAR10. The initial learning rate is adjusted for non-random initialization.} \label{tab: cifar10} \centering \begin{tabular}{ccccccc} \multirow{2}{*}{Model} & \multirow{2}{*}{Similarity} & \multirow{2}{*}{Initialization} & \multirow{2}{*}{Homotopy} & \multirow{2}{*}{Epochs} & \multicolumn{2}{c}{Top-1 accuracy} \\ & & & & & CIFAR10 & CIFAR100 \\ \hline \multirow{4}{*}{ResNet-20} & $S_{{\mathrm{conv}}}$ & Random & None & 400 & 92.97 & \textbf{69.29} \\ & \multirow{3}{*}{$S_{{\mathrm{euclid}}}$} & Random & None & 450 & {93.00} & 68.84 \\ & & \multirow{2}{*}{Conv} & None & 100 & 90.45 & 64.62 \\ & & & Linear & 100 & \textbf{93.32} & 68.8 \\ \hline \multirow{4}{*}{ResNet-32} & $S_{{\mathrm{conv}}}$ & Random & None & 400 & \textbf{ 93.93} & 71.07 \\ & \multirow{3}{*}{$S_{{\mathrm{euclid}}}$} & Random & None & 450 & 93.28 & \textbf{71.22} \\ & & \multirow{2}{*}{Conv} & None & 150 & 91.28 & 66.58 \\ & & & Linear & 100 & 92.62 & 68.42 \\ \hline \end{tabular} \end{table} EuclidNets can become unstable during training, despite a careful choice of optimizer. Figure \ref{fig: train_comparison} shows a comparison of EuclidNet training with a standard convolutional network. As can be seen in Figure \ref{fig: train_comparison}, fine-tuning EuclidNets directly from the convolutional network's weights is more stable than training from scratch. Also observe that when we train EuclidNets from scratch, the accuracy is lower but the convergence is faster. Finally, using homotopy in the training procedure, the accuracy is improved. Note that pre-trained convolution weights are commonly available in most neural compression tasks, so initializing EuclidNets with pre-trained convolution weights is a commonplace procedure when optimizing deep learning models for inference. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{comp} \caption{Evolution of testing accuracy during training of ResNet-20 on CIFAR10, initialized with random weights or initialized from a convolution pre-trained network. Initializing from a pre-trained convolution network speeds up the convergence. EuclidNet is harder to train than the convolution network when both are initialized from random weights.}\label{fig: train_comparison} \end{figure} EuclidNets are not only faster to train than other norm-based similarity measures, but also stand superior in terms of accuracy. AdderNet performs slightly worse in terms of accuracy and is also much slower to train.
The accuracy is significantly lower for the synapse \cite{dogaru1999comparative} and the multiplication-free \cite{akbas2015multiplication} operators. Table \ref{tab: sim_comparison} shows a top-1 accuracy comparison of the different methods. The reported results for AdderNet are from \cite{xu2020kernel}. Note that although the authors of \cite{xu2020kernel} used knowledge distillation for AdderNet to close the gap with full precision, it still falls short of EuclidNet. \begin{table} \caption{Full precision results on ResNet-20 for CIFAR10 for different multiplication-free similarities.} \label{tab: sim_comparison} \centering \begin{tabular}{ c c c c c c} \multirow{1}{4em}{\textbf{Similarity}} & \multirow{1}{3em}{$S_{{\mathrm{conv}}}$} & \multirow{1}{3em}{$S_{{\mathrm{euclid}}}$} & $S_{{\mathrm{adder}}}$ & $S_{{\mathrm{mfo}}}$ & $S_{{\mathrm{synapse}}}$ \\ \hline \textbf{Accuracy} & 92.97 & \textbf{93.00} & 91.84 & 82.05 & 73.08 \\ \end{tabular} \end{table} Training a quantized $S_{{\mathrm{euclid}}}$ network is very similar to training a quantized convolutional neural network. This allows a wider use of such models on lower-resource devices. Quantizing EuclidNets to 8 bits keeps the accuracy drop within one percent \citep{wu2020integer}, similar to traditional convolutional neural networks. Table \ref{tab: quant} shows 8-bit quantization of EuclidNet, where the accuracy drop remains negligible. Furthermore, training EuclidNets on the CIFAR100 dataset exhibits a negligible accuracy drop when the weights are initialized with pre-trained standard model weights. \subsection{ImageNet} Next, we test the EuclidNet classifier on ImageNet \cite{imagenet_cvpr09}, which is known to be a challenging classification task compared to CIFAR10. We trained our baseline convolutional neural network with the standard augmentations of random resized crop, horizontal flip, and normalization. We consider ResNet-18 and ResNet-50 models with the same hyper-parameters as those used in Section \ref{sec: cifar10}. Table \ref{tab: in} shows the top-1 and top-5 classification accuracy on the ImageNet dataset. As shown in Table \ref{tab: in}, the accuracy of EuclidNet when trained from scratch is lower than the baseline, emphasizing the importance of homotopy training. We believe that the accuracy drop without homotopy arises because hyper-parameter tuning is harder for large datasets such as ImageNet. This means that even though hyper-parameters that achieve equivalent accuracy with random initialization may exist, they are too difficult to find. Thus, it is much easier to use the existing hyper-parameters of the traditional convolutional neural network and use homotopy to smoothly transfer the weights to weights that are suitable for EuclidNets. \begin{table}[h] \centering \caption{Full precision results on ImageNet.
Best result for each model is in bold.}\label{tab: in} \scalebox{0.8}{ \begin{tabular}{ccccccc} Model & Similarity & Initialization & Homotopy & Epochs & \multicolumn{1}{l}{Top-1 Accuracy} & \multicolumn{1}{l}{Top-5 Accuracy} \\ \hline \multirow{6}{*}{ResNet-18} & $S_{{\mathrm{conv}}}$ & Random & None & 90 & 69.56 & 89.09 \\ \cline{2-7} & \multirow{5}{*}{$S_{{\mathrm{euclid}}}$} & Random & None & 90 & 64.93 & 86.46 \\ \cline{3-7} & & \multirow{4}{*}{Conv} & None & 90 & 68.52 & 88.79 \\ \cline{4-7} & & & \multirow{3}{*}{Linear} & 10 & 65.36 & 86.71 \\ & & & & 60 & 69.21 & 89.13 \\ & & & & 90 & \textbf{ 69.69} & \textbf{ 89.38} \\ \hline \multirow{6}{*}{ResNet-50} & $S_{{\mathrm{conv}}}$ & Random & None & 90 & 75.49 & 92.51 \\ \cline{2-7} & \multirow{5}{*}{$S_{{\mathrm{euclid}}}$} & Random & None & 90 & 37.89 & 63.99 \\ \cline{3-7} & & \multirow{4}{*}{Conv} & None & 90 & 75.12 & 92.50 \\ \cline{4-7} & & & \multirow{3}{*}{Linear} & 10 & 70.66 & 90.10 \\ & & & & 60 & 74.93 & 92.52 \\ & & & & 90 & \textbf{ 75.64} & \textbf{ 92.86} \\ \hline \end{tabular} } \end{table} \subsection{Transformation and blurring} Here we provide empirical evidence that the Euclidean norm is aligned with multiplication. First, we show that EuclidNets perform as well as standard convolutional neural networks under \textit{pixel transformations}. Second, we show that when the image is blurred with Gaussian noise, EuclidNets closely follow the behaviour of convolutional neural networks. \subsubsection{Pixel transformation} We define a pixel transformation of an image as \begin{equation} \mathbf{I_T} = a\mathbf{I}+b, \label{eq:transform} \end{equation} where $\mathbf{I}$ is a tensor representing the original image, the scalars $a$ and $b$ are transformation parameters, and $\mathbf{I_T}$ is the transformed image. Note that in \eqref{eq:transform}, $a$ controls the contrast and $b$ controls the brightness of the image. Such transformations are widely used in various stages of imaging systems, for instance in color correction and gain control (ISO). Figure \ref{fig:transform} shows the accuracy of the standard ResNet-18 and the EuclidNet ResNet-18 when the input image is affected by the pixel transformation of equation \eqref{eq:transform}. We can see that when changing $a$ and $b$, the EuclidNet ResNet-18 closely follows the accuracy of the standard ResNet-18. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{Conv_linear_t.png} \includegraphics[width=0.45\textwidth]{Euclid_linear_t.png} \caption{Accuracy of CIFAR10 classification affected by pixel transformation for a standard ResNet-18 (left) and EuclidNet ResNet-18 (right).} \label{fig:transform} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{conv_blur_acc.png} \includegraphics[width=0.45\textwidth]{euclid_blur_acc.png} \caption{Accuracy of CIFAR10 classification affected by Gaussian noise for a standard ResNet-18 (left) and EuclidNet ResNet-18 (right).} \label{fig:noise} \end{figure} \subsubsection{Gaussian Blurring} Additive noise can be injected into an image at different stages of the imaging system due to faulty equipment or environmental conditions. We tested EuclidNet when the input image is affected by additive Gaussian noise. Figure \ref{fig:noise} demonstrates a comparison of the standard ResNet-18 and the EuclidNet ResNet-18 for different noise intensities (i.e., $\sigma$) and kernel sizes. This experiment is done for classification of the CIFAR10 dataset.
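For reference, the two perturbation protocols can be sketched in a few lines. The snippet below assumes (C, H, W) float image tensors and uses the Gaussian blur utility from \texttt{torchvision}; the wrapper function names are hypothetical.
\begin{verbatim}
import torch
from torchvision.transforms.functional import gaussian_blur

def pixel_transform(img, a, b):
    """I_T = a * I + b: a scales contrast, b shifts brightness."""
    return a * img + b

def blur(img, kernel_size, sigma):
    """Gaussian blur with the given kernel size and intensity sigma."""
    return gaussian_blur(img, kernel_size=kernel_size, sigma=sigma)
\end{verbatim}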
We can see that EuclidNet ResNet-18 closely follows the behaviour of the standard ResNet-18 across different kernel sizes and noise intensities. \section{Conclusion} EuclidNets are a class of deep learning models in which the multiplication operator is replaced with the Euclidean distance. They are designed to be implemented on application-specific hardware, with the idea that subtraction and squaring are cheaper than multiplication when designing efficient hardware for inference. Furthermore, in contrast to other efficient architectures that are difficult to train in low precision, EuclidNets are easily trained in low precision. EuclidNets can be initialized with the pre-trained weights of standard convolutional neural networks, and hence the training procedure of EuclidNets using homotopy can be considered a fine-tuning of convolutional networks for inference. The homotopy method further improves training in such scenarios, and training with this method sometimes surpasses the accuracy of regular convolution.
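To make the homotopy training described above concrete, the following is a minimal PyTorch sketch; it is our illustration rather than the authors' released code, and the layer name, initialisation and linear schedule are our assumptions. It relies on the identity $-\frac{1}{2}\Vert x - w \Vert^2 = \langle x, w \rangle - \frac{1}{2}(\Vert x \Vert^2 + \Vert w \Vert^2)$, so that $\lambda = 0$ recovers the standard convolution similarity and $\lambda = 1$ the Euclidean similarity.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class HomotopyEuclidConv2d(nn.Module):
    """Interpolates between S_conv (lam = 0) and S_euclid (lam = 1),
    using -0.5*||x - w||^2 = <x, w> - 0.5*(||x||^2 + ||w||^2)."""

    def __init__(self, in_ch, out_ch, k, stride=1, padding=0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.05)
        self.stride, self.padding, self.k = stride, padding, k

    def forward(self, x, lam):
        # <x, w>: the usual multiplication-based convolution term.
        dot = F.conv2d(x, self.weight, stride=self.stride, padding=self.padding)
        # ||x||^2 over each receptive field: convolve x^2 with an all-ones kernel.
        ones = torch.ones(1, x.shape[1], self.k, self.k, device=x.device)
        x_sq = F.conv2d(x * x, ones, stride=self.stride, padding=self.padding)
        # ||w||^2 of each output filter, broadcast over spatial positions.
        w_sq = (self.weight ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
        # lam = 0 gives S_conv; lam = 1 gives S_euclid = -0.5*||x - w||^2.
        return dot - 0.5 * lam * (x_sq + w_sq)

# Linear homotopy: start from pre-trained convolutional weights, then ramp
# lam from 0 to 1 over the homotopy epochs (e.g. 10, 60 or 90 as in the table).
def homotopy_lambda(epoch, homotopy_epochs=60):
    return min(1.0, epoch / homotopy_epochs)
\end{verbatim}
In this reading, a run with a linear homotopy schedule trains with $\lambda$ ramped from 0 to 1 while any remaining epochs continue at $\lambda = 1$, i.e.\ with the pure Euclidean similarity.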
{ "timestamp": "2022-12-23T02:13:52", "yymm": "2212", "arxiv_id": "2212.11803", "language": "en", "url": "https://arxiv.org/abs/2212.11803" }
\section{Introduction} Suppose that we are given the following partial differential equation~(PDE) for $u(x,t)$: \begin{equation} \label{diffusion-wave-PDE} \D{}{}{}{2 \nu} u = \kappa \frac{\partial^2 u}{\partial x^2}, \quad x \in \mathbb R, \quad t > 0, \end{equation} where $\kappa > 0$ and $0 < \nu \le 1$. The `time-fractional derivative operator'~$\D{}{}{}{2 \nu}$ is such that \eqref{diffusion-wave-PDE} reduces to the diffusion equation and the wave equation when $\nu = \frac{1}{2}$ and $\nu = 1$, respectively. The behaviour of a solution of \eqref{diffusion-wave-PDE} is said to be `diffusion-like' (respectively, `wave-like') when $0 < \nu \le \frac{1}{2}$ (respectively, $\frac{1}{2} < \nu \le 1$) and we refer to \eqref{diffusion-wave-PDE} as the time-fractional diffusion equation (respectively, time-fractional wave equation). The definition of $\D{}{}{}{2 \nu}$ relies on certain concepts from the field of mathematics known as the fractional calculus~\citep{MiRo93,Po99}. For notational convenience, define the function \begin{equation} \label{delta-fun} \delta_\mu(t) = \begin{cases} \frac{t^{\mu - 1}}{\Gamma(\mu)} & \text{if $\mu > 0$}, \\ \delta(t) & \text{if $\mu = 0$}, \end{cases} \end{equation} where $\Gamma(\mu)$ is the Euler gamma function and $\delta(t)$ is the Dirac delta function. For a suitable function~$y(t)$, the Riemann-Liouville fractional integral of order~$\mu$ is $$ \D{}{0}{t}{-\mu} y(t) = \frac{1}{\Gamma(\mu)} \int_0^t (t - \tau)^{\mu - 1} y(\tau) \, \d \tau. $$ This can be expressed as the Laplace convolution \begin{equation} \label{conv-int} \D{}{0}{t}{-\mu} y(t) = (\delta_\mu * y)(t). \end{equation} Let $\ceil{\mu}$ denote the least integer greater than or equal to $\mu$, so that $\ceil{\mu} \ge \mu$. The Caputo fractional derivative of order~$\mu$ is defined as $$ \D{C}{0}{t}{\mu} y(t) = \D{}{0}{t}{-(\ceil{\mu} - \mu)} D^{\ceil{\mu}} y(t), $$ whereas the Riemann-Liouville fractional derivative of order~$\mu$ is given by $$ \D{}{0}{t}{\mu} y(t) = D^{\ceil{\mu}} \D{}{0}{t}{-(\ceil{\mu} - \mu)} y(t). $$ Here, $\D{}{0}{t}{-(\ceil{\mu} - \mu)}$ is a Riemann-Liouville fractional integral operator and $D^{\ceil{\mu}}$ is an ordinary derivative operator. When $\mu = m \in \mathbb N$, the Riemann-Liouville fractional integral reduces to $m$-fold integration, while the Caputo and Riemann-Liouville fractional derivatives simplify to $m$-fold differentiation. This article considers initial-boundary value problems~(IBVPs) and moving boundary problems associated with \eqref{diffusion-wave-PDE} both when $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$ and $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$, where $0 < \nu \le \frac{1}{2}$. Note that if $n \in \mathbb N$, then $D^n u(x,t)$ refers to the $n$th partial derivative of $u(x,t)$ with respect to $t$. The Caputo time-fractional diffusion equation (i.e.~$\D{}{}{}{2 \nu}= \D{C}{0}{t}{2 \nu}$ and $0 < \nu \le \frac{1}{2}$) was used by \citet{Ni86} to model diffusion in media with fractal geometry. More recently, using a Caputo time-fractional diffusion equation, \citet{WeChZh15} developed a model to describe how chloride ions penetrate reinforced concrete structures exposed to chloride environments. Moving boundary problems arise in many areas of science and engineering~\citep{Cr84,Hi87,Gu03}. Some applications include modelling of biological and tumour invasion~\citep{CrGu72,ElMcSi20}, drug delivery~\citep{SaMaHa17} and melting of crystal dendrite~\citep{MoKiMoMc19}. 
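For concreteness, here is a short worked example of the fractional operators just defined; it is our own illustration computed directly from the definitions above, not an excerpt from the cited works. Take $y(t) = t$ and $\mu = \frac{1}{2}$. The Riemann-Liouville fractional integral is $$ \D{}{0}{t}{-\frac{1}{2}} y(t) = \frac{1}{\Gamma(\frac{1}{2})} \int_0^t (t - \tau)^{-\frac{1}{2}} \tau \, \d \tau = \frac{t^{3/2}}{\sqrt{\pi}} \int_0^1 (1 - u)^{-\frac{1}{2}} u \, \d u = \frac{4 t^{3/2}}{3 \sqrt{\pi}} $$ after the substitution~$\tau = t u$. Since $\ceil{\frac{1}{2}} = 1$, the Caputo fractional derivative is $$ \D{C}{0}{t}{\frac{1}{2}} y(t) = \D{}{0}{t}{-\frac{1}{2}} D^1 t = \D{}{0}{t}{-\frac{1}{2}} 1 = \frac{t^{1/2}}{\Gamma(\frac{3}{2})} = \frac{2 \sqrt{t}}{\sqrt{\pi}}, $$ and the Riemann-Liouville fractional derivative gives the same result, $$ \D{}{0}{t}{\frac{1}{2}} y(t) = D^1 \D{}{0}{t}{-\frac{1}{2}} t = D^1 \Big(\frac{4 t^{3/2}}{3 \sqrt{\pi}}\Big) = \frac{2 \sqrt{t}}{\sqrt{\pi}}, $$ which is expected here because $y(0) = 0$; when $y(0) \ne 0$ the two fractional derivatives generally differ.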
The classical one-dimensional Stefan problem is a canonical moving boundary problem that models the melting of ice; see some historical notes in \citet{Vu93}. In this context, the PDE is referred to as the heat equation instead of the diffusion equation. Since Stefan's seminal work, moving boundary problems have been extensively studied. Excellent surveys can be found in the books by \cite{Cr84,Hi87,Gu03} and the references therein. As moving boundary problems are typically nonlinear, they are usually studied using numerical and approximate analytical methods. \citet{Fu80} performed a comparison of different numerical methods for moving boundary problems; see also \citet{CaKw04,LeBaLa15} for a study of numerical methods for one-dimensional Stefan problems. Approximate analytical methods for one-dimensional Stefan problems include the heat balance integral method~\citep{Go58,MiMy08,MiMy12}, the refined integral method~\citep{SaSiCo06} and the combined integral method~\citep{MiMy11,Mi12}. Exact analytical solutions of some one-dimensional Stefan problems are reviewed in \citet{Cr84,Hi87}. However, such solutions of moving boundary problems are quite rare because these problems are highly nonlinear. Hence standard methods for linear problems such as separation of variables, Green's functions and integral transforms are usually not applicable. \citet{RoTh21} used the embedding method to find exact analytical solutions of one-dimensional moving boundary problems for the heat equation. They also showed how the embedding method can be adapted to two-phase Stefan problems. In fact, \citet{RoTh21} considered a general IBVP for the heat equation with time-dependent boundary conditions~(BCs) and derived the analytical solution using an embedding technique. The same technique is able to handle both bounded and unbounded spatial domains, unlike the standard solution techniques mentioned above. More recently, \cite{RoTh22} studied a diffusion-advection-reaction equation, solved the associated IBVP analytically with the embedding method, and proposed a numerical method for solving systems of linear Volterra integral equations of the first kind that naturally arise from the technique. The embedding method was introduced in \citet{Ro14} in the context of pricing American call and put options, and was subsequently adapted to price barrier options~\citep{GuRoSa20} and perpetual American options with general payoffs~\citep{Ro22a}. In many applications of diffusion-advection-reaction equations to model contaminant or solute transport in porous media, the boundaries are usually assumed to be constant in time. However, solute transport problems can involve various types of time-dependent BCs~\citep{NgRiSt88,HoGeLe00,GaFuZhMa13}. The application of the embedding method to multilayer diffusion problems with time-dependent BCs is the subject of a recent article~\citep{Ro22c}. \citet{Ro22b} extended the embedding technique to propose a unified way to solve initial value problems~(IVPs) and IBVPs for the time-fractional diffusion-wave equation~\eqref{diffusion-wave-PDE} (i.e.~$0 < \nu \le 1$). The class of IBVPs considered was limited to those with spatial domains where $0 \le x < \infty$ and with Dirichlet-type (time-constant) BCs imposed at $x = 0$.
The first contribution of the present article is to generalise the results in \citet{Ro22b} by solving IBVPs for the time-fractional diffusion equation (i.e.~$0 < \nu \le \frac{1}{2}$) with general time-dependent BCs over bounded and unbounded domains, similar to what was done in \citet{RoTh21} for the classical diffusion equation. The second contribution of the present article is to use the generalisation to find analytical solutions of moving boundary problems for the time-fractional diffusion equation. The reason for the restriction~$0 < \nu \le \frac{1}{2}$, instead of $0 < \nu \le 1$, is that we wish to consider `fractional Stefan problems' in this article. Hence we have to restrict to moving boundary problems whose solutions have `diffusion-like behaviour'. The formulation of Stefan problems for the heat equation includes an extra condition (known as the Stefan condition) that prescribes the dynamics for the unknown moving boundary. As we will consider the time-fractional diffusion equation here, the Stefan condition will be replaced by an analogous `fractional Stefan condition'. However, it is important to point out that the physical motivation for considering moving boundary problems (in fact, IBVPs in general) for the time-fractional diffusion equation remains an open problem. In this article, we approach the study of such problems from a theoretical viewpoint. The outline of this article is as follows. In Section~2, we revisit a two-parameter auxiliary function introduced in \citet{Ro22b} by first summarising some of its properties and then deriving new properties that will be especially relevant for moving boundary problems. In Section~3, we formulate a general IBVP for the time-fractional diffusion equation and obtain the solution using the embedding method. Section~4 studies moving boundary problems via two illustrative examples, one with a bounded domain and the other with an unbounded domain. Brief concluding remarks are given in Section~5. \section{A useful auxiliary function and its properties} In this section, we investigate some properties of an auxiliary function that are useful in the study of the time-fractional diffusion-wave equation. \subsection{Summary of known properties of the auxiliary function} Let $\mu \ge 0$, $0 < \nu \le 1$ and $a > 0$. \citet{Ro22b} defined the function \begin{equation} \label{R-def} R_{\mu,\nu}(a,t) = \L^{-1}\{s^{-\mu} \mathrm e^{-a s^\nu};t\} \end{equation} as an inverse Laplace transform. Since $\L\{\D{}{0}{t}{-\mu}f(t);s\} = s^{-\mu} \L\{f(t);s\}$, we deduce that \begin{equation} \label{R-basic} R_{\mu,\nu}(a,t) = {}_{0}^{}D_{t}^{-\mu} R_{0,\nu}(a,t) \end{equation} and thus $R_{0,\nu}(a,t)$ can be interpreted as more `basic' than $R_{\mu,\nu}(a,t)$. For the convenience of the reader, in this subsection, we summarise some of the properties of $R_{\mu,\nu}(a,t)$ that were proved in \citet{Ro22b}. The function~$y(t) = R_{\mu,\nu}(a,t)$ satisfies $y(0+) = 0$, as well as the fractional integral equation \begin{equation} \label{R-int-eq} a \nu \D{}{0}{t}{-(1 - \nu)} y(t) = t y(t) - \mu \int_0^t y(\tau) \, \d \tau \end{equation} and the fractional ordinary differential equation \begin{equation} \label{R-diff-eq} a \nu \D{}{0}{t}{\nu} y(t) = a \nu \D{C}{0}{t}{\nu} y(t) = t y'(t) + (1 - \mu) y(t).
\end{equation} To evaluate $R_{\mu,\nu}(a,t)$, we can either perform a numerical Laplace transform inversion in \eqref{R-def} or implement finite difference schemes to solve the integral equation~\eqref{R-int-eq} or the differential equation~\eqref{R-diff-eq}. For example, numerical Laplace transform inversion was used to obtain profiles of $R_\nu(2.5,t)$, as shown in Figure~\ref{R-plot} for $\nu = 0.3, 0.4, 0.5, 0.6, 0.7$. \begin{figure}[ht] \centering \includegraphics[scale=0.3]{R.eps} \caption{Plot of $R_\nu(2.5,t)$ for different values of $\nu$.} \label{R-plot} \end{figure} When $\mu = 0$, $0 < \nu \le \frac{1}{2}$ and $a > 0$, an alternative integral representation of \eqref{R-def} is \begin{equation} \label{R-alt} R_{0,\nu}(a,t) = \frac{1}{\pi} \int_0^\infty \mathrm e^{-t z} \mathrm e^{-a \cos(\pi \nu) z^\nu} \sin(a \sin(\pi \nu) z^\nu) \, \d z. \end{equation} An analogous integral representation when $\mu > 0$, $0 < \nu \le \frac{1}{2}$ and $a > 0$ can be obtained using \eqref{R-basic} in \eqref{R-alt} and taking the Riemann-Liouville fractional integral of the exponential function~$t \mapsto \mathrm e^{-t z}$. Note, however, that \eqref{R-alt} is not necessarily valid when $\frac{1}{2} < \nu \le 1$~\citep{Ro22b}. If $\mu \ge 0$, $0 < \nu \le 1$ and $a > 0$, then \begin{equation} \label{R-int-prop-1} R_{\mu + \nu,\nu}(a,t) = \int_a^\infty R_{\mu,\nu}(z,t) \, \d z. \end{equation} In particular, $\mu = \nu$ gives \begin{equation} \label{R-int-prop-2} R_{2 \nu,\nu}(a,t) = \int_a^\infty R_{\nu,\nu}(z,t) \, \d z. \end{equation} Some special cases are \begin{equation} \label{R-special-cases} R_{0,\frac{1}{2}}(a,t) = \frac{a \mathrm e^{-\frac{a^2}{4 t}}}{2 \sqrt{\pi t^3}}, \quad R_{\frac{1}{2},\frac{1}{2}}(a,t) = \frac{\mathrm e^{-\frac{a^2}{4 t}}}{\sqrt{\pi t}}, \quad R_{1,\frac{1}{2}}(a,t) = \erfc\Big(\frac{a}{2 \sqrt{t}}\Big), \end{equation} which follow from \eqref{R-alt}, \eqref{R-basic} and \eqref{R-int-prop-2}, respectively. \subsection{Further properties of the auxiliary function} Here, we derive new properties of the auxiliary function that are needed for solving IBVPs for the time-fractional diffusion equation. In the previous subsection, it was pointed out that $R_{\mu,\nu}(a,0+) = 0$ for a fixed~$a$. The following result derives a similar property for $R_{\mu,\nu}(0+,t)$ with $t$ fixed. \begin{prop} \label{R-a-zero} Suppose that $\mu \ge 0$, $0 < \nu \le 1$ and $a > 0$. Then, for $t > 0$, there holds $$ R_{\mu,\nu}(0+,t) = \lim_{a \rightarrow 0^+} R_{\mu,\nu}(a,t) = \begin{cases} \delta_\mu(t) & \text{if $\mu > 0$}, \\ \delta(t) & \text{if $\mu = 0$}, \end{cases} $$ where $\delta_\mu(t)$ is given in \eqref{delta-fun} and $\delta(t)$ is the Dirac delta function. \end{prop} \begin{proof} If $\mu > 0$, then from \eqref{R-def} we get $$ R_{\mu,\nu}(0+,t) = \lim_{a \rightarrow 0^+} R_{\mu,\nu}(a,t) = \L^{-1}\{s^{-\mu};t\} = \frac{t^{\mu - 1}}{\Gamma(\mu)} = \delta_\mu(t). $$ Similarly, if $\mu = 0$, then $$ R_{0,\nu}(0+,t) = \lim_{a \rightarrow 0^+} R_{0,\nu}(a,t) = \L^{-1}\{1;t\} = \delta(t). $$ \end{proof} The next proposition will be used when taking the spatial derivative of the solution of an associated IBVP. Note the assumption~$\mu \ge \nu$ here. \begin{prop} \label{R-partial-a} If $0 < \nu \le 1$, $\mu \ge \nu$ and $a > 0$, then \begin{equation*} \frac{\partial R_{\mu,\nu}}{\partial a}(a,t) = -R_{\mu - \nu,\nu}(a,t).
\end{equation*} \end{prop} \begin{proof} It is straightforward to see from \eqref{R-def} that $$ \frac{\partial R_{\mu,\nu}}{\partial a}(a,t) = \L^{-1}\{s^{-\mu} \mathrm e^{-a s^\nu} (-s^\nu);t\} = -\L^{-1}\{s^{-(\mu - \nu)} \mathrm e^{-a s^\nu};t\} = -R_{\mu - \nu,\nu}(a,t). $$ \end{proof} The next task is to obtain a series representation for $R_{\mu,\nu}(a,t)$. Recall the Mainardi function~$M(z;\nu)$ with the series representation~\citep{Ma96} $$ M(z;\nu) = \sum_{j = 0}^\infty \frac{(-z)^j}{j! \Gamma(-\nu j + (1 - \nu))}, $$ where $0 < \nu < 1$. It turns out to be a special case of the Wright function~$W(z;\alpha,\beta)$ with the series representation~\citep{MaPa03} \begin{equation} \label{W-series} W(z;\alpha,\beta) = \sum_{j = 0}^\infty \frac{z^j}{j! \Gamma(\alpha j + \beta)}, \end{equation} where $\alpha > -1$ and $\beta > 0$ (in fact, it is also valid for $\beta \in \mathbb C$). More precisely, $M(z;\nu) = W(-z;-\nu,1 - \nu)$. An interesting relation pointed out in \citet{Ro22b}, valid when $0 < \nu \le \frac{1}{2}$, is \begin{equation} \label{MWR-rel} M(a t^{-\nu};\nu) = W(-a t^{-\nu};-\nu,1 - \nu) = t^\nu R_{1 - \nu,\nu}(a,t) = t^\nu \D{}{0}{t}{-(1 - \nu)} R_{0,\nu}(a,t). \end{equation} We will use \eqref{MWR-rel} to derive a series representation for $R_{\mu,\nu}(a,t)$ when $\mu \ge 0$, $0 < \nu \le \frac{1}{2}$ and $a > 0$. \begin{prop} \label{R-series} Let $\mu \ge 0$, $0 < \nu \le \frac{1}{2}$ and $a > 0$. A series representation for $R_{\mu,\nu}(a,t)$ is given by $$ R_{\mu,\nu}(a,t) = t^{\mu - 1} W(- a t^{-\nu};-\nu,\mu) = \sum_{j = 0}^\infty \frac{(- a t^{-\nu})^j}{j! \Gamma(-\nu j + \mu)}. $$ \end{prop} \begin{proof} The series representation~\eqref{W-series} yields $$ W(-a t^{-\nu};-\nu,1 - \nu) = \sum_{j = 0}^\infty \frac{(-a t^{-\nu})^j}{j! \Gamma(-\nu j - \nu + 1)}, $$ which in turn gives $$ t^{-\nu} W(-a t^{-\nu};-\nu,1 - \nu) = \sum_{j = 0}^\infty \frac{(-a)^j t^{-\nu j - \nu}}{j! \Gamma(-\nu j - \nu + 1)}. $$ Since $$ R_{0,\nu}(a,t) = \D{}{0}{t}{(1 - \nu)} (t^{-\nu} W(-a t^{-\nu};-\nu,1 - \nu)) $$ from \eqref{MWR-rel}, we obtain \begin{align*} \D{}{0}{t}{(1 - \nu)} (t^{-\nu} W(-a t^{-\nu};-\nu,1 - \nu)) & = \D{}{0}{t}{\ceil{1 - \nu}} \D{}{0}{t}{-(\ceil{1 - \nu} - (1 - \nu))} (t^{-\nu} W(-a t^{-\nu};-\nu,1 - \nu)) \\ & = D^1 \D{}{0}{t}{-\nu} (t^{-\nu} W(-a t^{-\nu};-\nu,1 - \nu)) \end{align*} and \begin{align*} \D{}{0}{t}{-\nu} (t^{-\nu} W(-a t^{-\nu};-\nu,1 - \nu)) & = \sum_{j = 0}^\infty \frac{(-a)^j}{j! \Gamma(-\nu j - \nu + 1)} \D{}{0}{t}{-\nu}(t^{-\nu j - \nu}) = \sum_{j = 0}^\infty \frac{(-a)^j t^{-\nu j}}{j! \Gamma(1 - \nu j)}. \end{align*} Hence $$ R_{0,\nu}(a,t) = \D{}{0}{t}{(1 - \nu)} (t^{-\nu} W(-a t^{-\nu};-\nu,1 - \nu)) = \sum_{j = 0}^\infty \frac{(-a)^j t^{-\nu j - 1}}{j! \Gamma(-\nu j)}. $$ Eq.~\eqref{R-basic} implies that \begin{align*} R_{\mu,\nu}(a,t) & = \D{}{0}{t}{-\mu} R_{0,\nu}(a,t) = \sum_{j = 0}^\infty \frac{(-a)^j}{j! \Gamma(-\nu j)} \D{}{0}{t}{-\mu} (t^{-\nu j - 1}) \\ & = t^{\mu - 1} \sum_{j = 0}^\infty \frac{(- a t^{-\nu})^j}{j! \Gamma(-\nu j + \mu)} = t^{\mu - 1} W(- a t^{-\nu};-\nu,\mu). \end{align*} \end{proof} \begin{rem} The result of Proposition~\ref{R-series} relies on the relation~\eqref{MWR-rel}, which is true if $0 < \nu \le \frac{1}{2}$. It is an open problem to determine whether the series representation is also valid for $\frac{1}{2} < \nu \le 1$. \end{rem} \begin{rem} Aside from the auxiliary function~$M(z;\nu)$, \citet{MaPa03} also introduced the auxiliary function $$ F(z;\nu) = \sum_{j = 0}^\infty \frac{(-z)^j}{j! 
\Gamma(-\nu j)}. $$ It follows from Proposition~\ref{R-series} that $M(z;\nu)$ and $F(z;\nu)$ can be expressed in terms of $R_{\mu,\nu}(a,t)$ as $$ M(a t^{-\nu};\nu) = t^\nu R_{1 - \nu,\nu}(a,t) = t^\nu \D{}{0}{t}{-(1 - \nu)} R_{0,\nu}(a,t), \quad F(a t^{-\nu};\nu) = t R_{0,\nu}(a,t), $$ respectively. Thus we deduce another relation between $M(z;\nu)$ and $F(z;\nu)$, namely $$ M(a t^{-\nu};\nu) = t^\nu \D{}{0}{t}{-(1 - \nu)}(t^{-1} F(a t^{-\nu};\nu)). $$ \end{rem} \begin{ex} Some special values of the Wright function are known~\citep{MaPa03}: \begin{equation} \label{W-special} W\Big(-z;-\frac{1}{2},\frac{1}{2}\Big) = \frac{\mathrm e^{-\frac{z^2}{4}}}{\sqrt{\pi}}, \quad W\Big(-z;-\frac{1}{2},1\Big) = 1 - \erf\Big(\frac{z}{2}\Big) = \erfc\Big(\frac{z}{2}\Big). \end{equation} Using Proposition~\ref{R-series}, it is not difficult to see that the second and third relations in \eqref{R-special-cases} are recovered. \end{ex} \begin{prop} \label{R-integral} If $\mu \ge 0$ and $0 < \nu \le 1$, then $$ \int_{-\infty}^\infty \frac{1}{2} R_{\mu,\nu}(\vert z \vert,t) \, \d z = \delta_{\mu + \nu}(t), $$ where $\delta_{\mu + \nu}(t)$ is given by \eqref{delta-fun}. \end{prop} \begin{proof} The definition in \eqref{R-def} leads to \begin{align*} \int_{-\infty}^\infty R_{\mu,\nu}(\vert z \vert,t) \, \d z & = \int_{-\infty}^\infty \L^{-1}\{s^{-\mu} \mathrm e^{-\vert z \vert s^\nu};t\} \, \d z = \L^{-1}\Big\{\int_{-\infty}^\infty s^{-\mu} \mathrm e^{-\vert z \vert s^\nu} \, \d z;t\Big\} \\ & = 2 \L^{-1}\{s^{-(\mu + \nu)};t\} = \frac{2 t^{\mu + \nu - 1}}{\Gamma(\mu + \nu)} = 2 \delta_{\mu + \nu}(t). \end{align*} Note that $\int_{-\infty}^\infty \frac{1}{2} R_{\mu,\nu}(\vert z \vert,t) \, \d z = 1$ only if $\mu + \nu = 1$. This observation is related to the generation of probability distributions from the time-fractional diffusion equation discussed in \cite{Ro22b}. \end{proof} \section{Solution of a general IBVP for the time-fractional diffusion equation using the embedding approach} In this section, we formulate a general IBVP for the time-fractional diffusion equation (i.e.~$0 < \nu \le \frac{1}{2}$) defined on bounded or unbounded spatial domains, and derive the analytical solution using the embedding approach. Let $f(x)$, $g^\pm(t)$ and $\eta^\pm(t)$ be given suitable functions. Suppose that $-\infty \le \eta^-(t) < \eta^+(t) \le \infty$ for $t > 0$, which ensures that both bounded and unbounded spatial domains are taken into account. Let $a$, $b$, $c$ and $d$ be constants such that $\vert a \vert + \vert b \vert > 0$ and $\vert c \vert + \vert d \vert > 0$. Consider the IBVP \begin{equation} \label{gen-IBVP} \left\{ \begin{split} & \D{}{}{}{2 \nu} u = \kappa \frac{\partial^2 u}{\partial x^2}, \quad \eta^-(t) < x < \eta^+(t), \quad t > 0, \\ & \Phi u(x,0+) = f(x), \quad \eta^-(0) \le x \le \eta^+(0), \\ & a u(\eta^-(t),t) + b \frac{\partial u}{\partial x}(\eta^-(t),t) = g^-(t), \quad t > 0, \\ & c u(\eta^+(t),t) + d \frac{\partial u}{\partial x}(\eta^+(t),t) = g^+(t), \quad t > 0, \end{split} \right. \end{equation} where $\D{}{}{}{2 \nu}$ is either a Caputo fractional derivative ($\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$) or a Riemann-Liouville fractional derivative ($\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$). The operator~$\Phi$ defines the initial condition~(IC) through $$ \Phi u = \begin{cases} u & \text{if $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$}, \\ \D{}{0}{t}{-(1 - 2 \nu)} u & \text{if $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$}.
\end{cases} $$ The motivation behind the choice of the IC was given in \citet{Ro22b} as a natural consequence of the Laplace transform properties of the Caputo and Riemann-Liouville fractional derivatives. We assume that the IBVP~\eqref{gen-IBVP} is well posed. \begin{rem} In the special case when $\nu = \frac{1}{2}$, the time-fractional diffusion equation reduces to the classical diffusion equation, and the analytical solution of \eqref{gen-IBVP} was obtained in \citet{RoTh21} via the embedding method. The numerical solution of a generalisation of \eqref{gen-IBVP} with advection and reaction terms was addressed in \citet{RoTh22}. \end{rem} \begin{rem} The embedding method was used in \citet{Ro22b} to provide a unified way to solve IVPs and IBVPs. However, the IBVP studied there is a very special case of \eqref{gen-IBVP}, i.e.~$\eta^-(t) = 0$, $\eta^+(t) = \infty$ and only a Dirichlet-type BC of the form~$u(0,t) = h(t)$ for a given function~$h(t)$ was considered at the left endpoint. \end{rem} Let $f_\mathrm{ext}(x)$ be an extension of $f(x)$ such that $f_\mathrm{ext}(x) \vert_{\eta^-(0) \le x \le \eta^+(0)} = f(x)$. Denote by $\chi_A(x)$ the indicator function of the set~$A$, i.e.~$\chi_A(x) = 1$ if $x \in A$ and $\chi_A(x) = 0$ if $x \notin A$. We can embed the PDE and IC in \eqref{gen-IBVP} into the IVP on the real line for $v(x,t)$, namely \begin{equation} \label{v-IVP} \begin{split} & \D{}{}{}{2 \nu} v = \kappa \frac{\partial^2 v}{\partial x^2} + F(x,t), \quad x \in \mathbb R, \quad t > 0, \\ & v(x,0) = f_\mathrm{ext}(x), \quad x \in \mathbb R, \end{split} \end{equation} where $$ F(x,t) = \varphi^-(t) \chi_{(-\infty,\eta^-(t)]}(x) + \varphi^+(t) \chi_{[\eta^+(t),\infty)}(x) = \begin{cases} \varphi^-(t) & \text{if $x \le \eta^-(t)$}, \\ 0 & \text{if $\eta^-(t) < x < \eta^+(t)$}, \\ \varphi^+(t) & \text{if $x \ge \eta^+(t)$}. \end{cases} $$ The arbitrary functions~$\varphi^\pm(t)$ are to be determined such that the BCs in \eqref{gen-IBVP} are satisfied when we restrict $\eta^-(t) \le x \le \eta^+(t)$. \begin{rem} Before we proceed to give the solution of \eqref{v-IVP}, we make a few observations. We can write \begin{align*} & \int_0^t \int_{-\infty}^\infty \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu} \Big(\frac{\vert x - \xi \vert}{\sqrt{\kappa}},t - \tau\Big) F(\xi,\tau) \, \d \xi \, \d \tau \\ & \qquad = \int_0^t \varphi^-(\tau) \int_{-\infty}^{\eta^-(\tau)} \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu} \Big(\frac{\vert x - \xi \vert}{\sqrt{\kappa}},t - \tau\Big) \, \d \xi \, \d \tau \\ & \qquad \quad {} + \int_0^t \varphi^+(\tau) \int_{\eta^+(\tau)}^\infty \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu} \Big(\frac{\vert x - \xi \vert}{\sqrt{\kappa}},t - \tau\Big) \, \d \xi \, \d \tau. \end{align*} Suppose that $\eta^-(t) \le x \le \eta^+(t)$. The argument when $x = \eta^\pm(t)$ can be justified with Proposition~\ref{R-a-zero}. In the first integral on the right-hand side, noting that $-\infty < \xi \le \eta^-(\tau) \le x$, we have from \eqref{R-int-prop-2} that \begin{align*} \int_{-\infty}^{\eta^-(\tau)} \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu} \Big(\frac{\vert x - \xi \vert}{\sqrt{\kappa}},t - \tau\Big) \, \d \xi & = \int_{-\infty}^{\eta^-(\tau)} \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu} \Big(\frac{x - \xi}{\sqrt{\kappa}},t - \tau\Big) \, \d \xi \\ & = \int_{\frac{x - \eta^-(\tau)}{\sqrt{\kappa}}}^\infty \frac{1}{2} R_{\nu,\nu}(z,t - \tau) \, \d z \\ & = \frac{1}{2} R_{2 \nu,\nu}\Big(\frac{x - \eta^-(\tau)}{\sqrt{\kappa}},t - \tau\Big).
\end{align*} Similarly, $x \le \eta^+(\tau) \le \xi < \infty$ in the second integral, giving \begin{align*} \int_{\eta^+(\tau)}^\infty \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu} \Big(\frac{\vert x - \xi \vert}{\sqrt{\kappa}},t - \tau\Big) \, \d \xi & = \int_{\eta^+(\tau)}^\infty \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu} \Big(\frac{\xi - x}{\sqrt{\kappa}},t - \tau\Big) \, \d \xi \\ & = \int_{\frac{\eta^+(\tau) - x}{\sqrt{\kappa}}}^\infty \frac{1}{2} R_{\nu,\nu}(z,t - \tau) \, \d z \\ & = \frac{1}{2} R_{2 \nu,\nu}\Big(\frac{\eta^+(\tau) - x}{\sqrt{\kappa}},t - \tau\Big). \end{align*} Therefore \begin{equation} \label{F-integral} \begin{split} \int_0^t \int_{-\infty}^\infty \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu} \Big(\frac{\vert x - \xi \vert}{\sqrt{\kappa}},t - \tau\Big) F(\xi,\tau) \, \d \xi \, \d \tau & = \int_0^t \frac{1}{2} R_{2 \nu,\nu}\Big(\frac{x - \eta^-(\tau)}{\sqrt{\kappa}},t - \tau\Big) \varphi^-(\tau) \, \d \tau \\ & \quad {} + \int_0^t \frac{1}{2} R_{2 \nu,\nu}\Big(\frac{\eta^+(\tau) - x}{\sqrt{\kappa}},t - \tau\Big) \varphi^+(\tau) \, \d \tau. \end{split} \end{equation} \end{rem} We will separate the analysis of \eqref{v-IVP} according to the type of fractional derivative operator~$\D{}{}{}{2 \nu}$ being considered. \subsection{Caputo time-fractional diffusion equation} Suppose that $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$. It was shown in \citet{Ro22b} that the solution of the IVP~\eqref{v-IVP} is \begin{equation*} \begin{split} v(x,t) & = \int_{-\infty}^\infty \frac{1}{2 \sqrt{\kappa}} R_{1 - \nu,\nu}\Big(\frac{\vert x - \xi \vert}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi \\ & \quad {} + \int_0^t \int_{-\infty}^\infty \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu} \Big(\frac{\vert x - \xi \vert}{\sqrt{\kappa}},t - \tau\Big) F(\xi,\tau) \, \d \xi \, \d \tau. \end{split} \end{equation*} Hence, restricting $\eta^-(t) \le x \le \eta^+(t)$ and recalling \eqref{F-integral}, the function \begin{equation} \label{u-sol-1} \begin{split} u(x,t) & = \int_{-\infty}^\infty \frac{1}{2 \sqrt{\kappa}} R_{1 - \nu,\nu}\Big(\frac{\vert x - \xi \vert}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi + \int_0^t \frac{1}{2} R_{2 \nu,\nu}\Big(\frac{x - \eta^-(\tau)}{\sqrt{\kappa}},t - \tau\Big) \varphi^-(\tau) \, \d \tau \\ & \quad {} + \int_0^t \frac{1}{2} R_{2 \nu,\nu}\Big(\frac{\eta^+(\tau) - x}{\sqrt{\kappa}},t - \tau\Big) \varphi^+(\tau) \, \d \tau \end{split} \end{equation} satisfies the PDE and IC of \eqref{gen-IBVP}, but not necessarily the BCs. To verify the BCs, we need to take the partial derivative of \eqref{u-sol-1} with respect to $x$. Breaking up the first integral on the right-hand side, \begin{align*} u(x,t) & = \int_{-\infty}^{x} \frac{1}{2 \sqrt{\kappa}} R_{1 - \nu,\nu}\Big(\frac{x - \xi}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi - \int_\infty^x \frac{1}{2 \sqrt{\kappa}} R_{1 - \nu,\nu}\Big(\frac{\xi - x}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi \\ & \quad {} + \int_0^t \frac{1}{2} R_{2 \nu,\nu}\Big(\frac{x - \eta^-(\tau)}{\sqrt{\kappa}},t - \tau\Big) \varphi^-(\tau) \, \d \tau + \int_0^t \frac{1}{2} R_{2 \nu,\nu}\Big(\frac{\eta^+(\tau) - x}{\sqrt{\kappa}},t - \tau\Big) \varphi^+(\tau) \, \d \tau. 
\end{align*} Performing straightforward calculations with the help of Proposition~\ref{R-partial-a}, we obtain \begin{align*} \frac{\partial}{\partial x}\int_{-\infty}^{x} \frac{1}{2 \sqrt{\kappa}} R_{1 - \nu,\nu}\Big(\frac{x - \xi}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi & = \frac{1}{2 \sqrt{\kappa}} R_{1 - \nu,\nu}(0+,t) f_\mathrm{ext}(x) \\ & \quad {} - \int_{-\infty}^{x} \frac{1}{2 \kappa} R_{1 - 2 \nu,\nu}\Big(\frac{x - \xi}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi, \end{align*} \begin{align*} -\frac{\partial}{\partial x}\int_\infty^x \frac{1}{2 \sqrt{\kappa}} R_{1 - \nu,\nu}\Big(\frac{\xi - x}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi & = -\frac{1}{2 \sqrt{\kappa}} R_{1 - \nu,\nu}(0+,t) f_\mathrm{ext}(x) \\ & \quad {} + \int_x^{\infty} \frac{1}{2 \kappa} R_{1 - 2 \nu,\nu}\Big(\frac{\xi - x}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi, \end{align*} \begin{align*} \frac{\partial}{\partial x}\int_0^t \frac{1}{2} R_{2 \nu,\nu}\Big(\frac{x - \eta^-(\tau)}{\sqrt{\kappa}},t - \tau\Big) \varphi^-(\tau) \, \d \tau & = - \int_0^t \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu}\Big(\frac{x - \eta^-(\tau)}{\sqrt{\kappa}},t - \tau\Big) \varphi^-(\tau) \, \d \tau \end{align*} and \begin{align*} \frac{\partial}{\partial x}\int_0^t \frac{1}{2} R_{2 \nu,\nu}\Big(\frac{\eta^+(\tau) - x}{\sqrt{\kappa}},t - \tau\Big) \varphi^+(\tau) \, \d \tau & = \int_0^t \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu}\Big(\frac{\eta^+(\tau) - x}{\sqrt{\kappa}},t - \tau\Big) \varphi^+(\tau) \, \d \tau. \end{align*} Combining these integrals, we get \begin{equation} \label{u-sol-1-der} \begin{split} \frac{\partial u}{\partial x}(x,t) & = - \int_{-\infty}^{x} \frac{1}{2 \kappa} R_{1 - 2 \nu,\nu}\Big(\frac{x - \xi}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi + \int_x^{\infty} \frac{1}{2 \kappa} R_{1 - 2 \nu,\nu}\Big(\frac{\xi - x}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi \\ & \quad {} - \int_0^t \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu}\Big(\frac{x - \eta^-(\tau)}{\sqrt{\kappa}},t - \tau\Big) \varphi^-(\tau) \, \d \tau \\ & \quad {} + \int_0^t \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu}\Big(\frac{\eta^+(\tau) - x}{\sqrt{\kappa}},t - \tau\Big) \varphi^+(\tau) \, \d \tau. \end{split} \end{equation} We introduce some simplifying notation. Identify $\eta^-_1$ with $\eta^-(t)$, $\eta^-_2$ with $\eta^-(\tau)$, $\eta^+_1$ with $\eta^+(t)$ and $\eta^+_2$ with $\eta^+(\tau)$. Define the kernel functions \begin{align*} K_{11}(\eta^-_1,\eta^-_2,\eta^+_1,\eta^+_2,t) & = \frac{a}{2} R_{2 \nu,\nu}\Big(\frac{\eta^-_1 - \eta^-_2}{\sqrt{\kappa}},t\Big) - \frac{b}{2 \sqrt{\kappa}} R_{\nu,\nu}\Big(\frac{\eta^-_1 - \eta^-_2}{\sqrt{\kappa}},t\Big), \\ K_{12}(\eta^-_1,\eta^-_2,\eta^+_1,\eta^+_2,t) & = \frac{a}{2} R_{2 \nu,\nu}\Big(\frac{\eta^+_2 - \eta^-_1}{\sqrt{\kappa}},t\Big) + \frac{b}{2 \sqrt{\kappa}} R_{\nu,\nu}\Big(\frac{\eta^+_2 - \eta^-_1}{\sqrt{\kappa}},t\Big), \\ K_{21}(\eta^-_1,\eta^-_2,\eta^+_1,\eta^+_2,t) & = \frac{c}{2} R_{2 \nu,\nu}\Big(\frac{\eta^+_1 - \eta^-_2}{\sqrt{\kappa}},t\Big) - \frac{d}{2 \sqrt{\kappa}} R_{\nu,\nu}\Big(\frac{\eta^+_1 - \eta^-_2}{\sqrt{\kappa}},t\Big), \\ K_{22}(\eta^-_1,\eta^-_2,\eta^+_1,\eta^+_2,t) & = \frac{c}{2} R_{2 \nu,\nu}\Big(\frac{\eta^+_2 - \eta^+_1}{\sqrt{\kappa}},t\Big) + \frac{d}{2 \sqrt{\kappa}} R_{\nu,\nu}\Big(\frac{\eta^+_2 - \eta^+_1}{\sqrt{\kappa}},t\Big). 
\end{align*} Moreover, define \begin{equation} \label{h-minus-1} \begin{split} h^-(t) & = g^-(t) - \int_{-\infty}^\infty \frac{a}{2 \sqrt{\kappa}} R_{1 - \nu,\nu}\Big(\frac{\vert \eta^-(t) - \xi \vert}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi \\ & \quad {} +\int_{-\infty}^{\eta^-(t)} \frac{b}{2 \kappa} R_{1 - 2 \nu,\nu}\Big(\frac{\eta^-(t) - \xi}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi \\ & \quad {} - \int_{\eta^-(t)}^{\infty} \frac{b}{2 \kappa} R_{1 - 2 \nu,\nu}\Big(\frac{\xi - \eta^-(t)}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi \end{split} \end{equation} and \begin{equation} \label{h-plus-1} \begin{split} h^+(t) & = g^+(t) - \int_{-\infty}^\infty \frac{c}{2 \sqrt{\kappa}} R_{1 - \nu,\nu}\Big(\frac{\vert \eta^+(t) - \xi \vert}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi \\ & \quad {} + \int_{-\infty}^{\eta^+(t)} \frac{d}{2 \kappa} R_{1 - 2 \nu,\nu}\Big(\frac{\eta^+(t) - \xi}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi \\ & \quad {} - \int_{\eta^+(t)}^{\infty} \frac{d}{2 \kappa} R_{1 - 2 \nu,\nu}\Big(\frac{\xi - \eta^+(t)}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi. \end{split} \end{equation} Substituting the above expressions into the BCs in \eqref{gen-IBVP}, the left BC becomes \begin{equation} \label{left-BC} \begin{split} & \int_0^t K_{11}(\eta^-(t),\eta^-(\tau),\eta^+(t),\eta^+(\tau),t - \tau) \varphi^-(\tau) \, \d \tau \\ & \quad {} + \int_0^t K_{12}(\eta^-(t),\eta^-(\tau),\eta^+(t),\eta^+(\tau),t - \tau) \varphi^+(\tau) \, \d \tau = h^-(t), \end{split} \end{equation} while the right BC simplifies to \begin{equation} \label{right-BC} \begin{split} & \int_0^t K_{21}(\eta^-(t),\eta^-(\tau),\eta^+(t),\eta^+(\tau),t - \tau) \varphi^-(\tau) \, \d \tau \\ & \quad {} + \int_0^t K_{22}(\eta^-(t),\eta^-(\tau),\eta^+(t),\eta^+(\tau),t - \tau) \varphi^+(\tau) \, \d \tau = h^+(t). \end{split} \end{equation} In summary, the analytical solution of the IBVP~\eqref{gen-IBVP} for the Caputo time-fractional diffusion equation is \eqref{u-sol-1}, where $\varphi^\pm(t)$ satisfy the pair of linear Volterra integral equations of the first kind described by \eqref{left-BC} and \eqref{right-BC}. The functions~$h^\pm(t)$ are given in \eqref{h-minus-1} and \eqref{h-plus-1}. Note that other choices of defining $f_\mathrm{ext}(x)$ will result in a corresponding adjustment of $h^\pm(t)$, yielding the same solution in the end. \subsection{Riemann-Liouville time-fractional diffusion equation} Now take $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$. As the calculations are similar to the Caputo case, we just give the final result. The analytical solution of the IBVP~\eqref{gen-IBVP} for the Riemann-Liouville time-fractional diffusion equation is \begin{equation} \label{u-sol-2} \begin{split} u(x,t) & = \int_{-\infty}^\infty \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu}\Big(\frac{\vert x - \xi \vert}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi + \int_0^t \frac{1}{2} R_{2 \nu,\nu}\Big(\frac{x - \eta^-(\tau)}{\sqrt{\kappa}},t - \tau\Big) \varphi^-(\tau) \, \d \tau \\ & \quad {} + \int_0^t \frac{1}{2} R_{2 \nu,\nu}\Big(\frac{\eta^+(\tau) - x}{\sqrt{\kappa}},t - \tau\Big) \varphi^+(\tau) \, \d \tau. \end{split} \end{equation} Note that one difference between \eqref{u-sol-2} and \eqref{u-sol-1} is in the first integral on the right-hand side. 
The functions~$\varphi^\pm(t)$ satisfy the pair of linear Volterra integral equations of the first kind also described by \eqref{left-BC} and \eqref{right-BC} but $h^\pm(t)$ are given by \begin{equation} \label{h-minus-2} \begin{split} h^-(t) & = g^-(t) - \int_{-\infty}^\infty \frac{a}{2 \sqrt{\kappa}} R_{\nu,\nu}\Big(\frac{\vert \eta^-(t) - \xi \vert}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi \\ & \quad {} + \int_{-\infty}^{\eta^-(t)} \frac{b}{2 \kappa} R_{0,\nu}\Big(\frac{\eta^-(t) - \xi}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi - \int_{\eta^-(t)}^{\infty} \frac{b}{2 \kappa} R_{0,\nu}\Big(\frac{\xi - \eta^-(t)}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi \end{split} \end{equation} and \begin{equation} \label{h-plus-2} \begin{split} h^+(t) & = g^+(t) - \int_{-\infty}^\infty \frac{c}{2 \sqrt{\kappa}} R_{\nu,\nu}\Big(\frac{\vert \eta^+(t) - \xi \vert}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi \\ & \quad {} + \int_{-\infty}^{\eta^+(t)} \frac{d}{2 \kappa} R_{0,\nu}\Big(\frac{\eta^+(t) - \xi}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi - \int_{\eta^+(t)}^{\infty} \frac{d}{2 \kappa} R_{0,\nu}\Big(\frac{\xi - \eta^+(t)}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi. \end{split} \end{equation} For later use, we note that \begin{equation} \label{u-sol-2-der} \begin{split} \frac{\partial u}{\partial x}(x,t) & = - \int_{-\infty}^{x} \frac{1}{2 \kappa} R_{0,\nu}\Big(\frac{x - \xi}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi + \int_x^{\infty} \frac{1}{2 \kappa} R_{0,\nu}\Big(\frac{\xi - x}{\sqrt{\kappa}},t\Big) f_\mathrm{ext}(\xi) \, \d \xi \\ & \quad {} - \int_0^t \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu}\Big(\frac{x - \eta^-(\tau)}{\sqrt{\kappa}},t - \tau\Big) \varphi^-(\tau) \, \d \tau \\ & \quad {} + \int_0^t \frac{1}{2 \sqrt{\kappa}} R_{\nu,\nu}\Big(\frac{\eta^+(\tau) - x}{\sqrt{\kappa}},t - \tau\Big) \varphi^+(\tau) \, \d \tau. \end{split} \end{equation} \begin{rem} As to be expected, when $\nu = \frac{1}{2}$, the Caputo solution~\eqref{u-sol-1} and the Riemann-Liouville solution~\eqref{u-sol-2} become identical and recover the analytical solution for the corresponding IBVP for the classical diffusion equation obtained in \citet{RoTh21}. \end{rem} \section{Solutions of moving boundary problems associated with the time-fractional diffusion equation} We are now ready to find analytical solutions of moving boundary problems for the time-fractional diffusion equation. Two representative examples will be considered with bounded and unbounded spatial domains. More general moving boundary problems can be handled in a similar fashion. \begin{ex} Consider the moving boundary problem \begin{equation} \left\{ \label{free-prob-1} \begin{split} & \D{}{}{}{2 \nu} u = \frac{\partial^2 u}{\partial x^2}, \quad 0 < x < \eta(t), \quad t > 0, \\ & u(x,0) = u_0 \chi_{(0,\infty)}(x), \quad 0 \le x < \infty, \\ & u(0,t) = 1, \quad u(\eta(t),t) = 0, \quad t > 0, \\ & \D{}{}{}{2 \nu} \eta(t) = -\frac{1}{r} \frac{\partial u}{\partial x}(\eta(t),t), \quad t > 0. \end{split} \right. \end{equation} Here, $r$ and $u_0$ are positive constants and $\eta(t)$ is the moving boundary. The goal is to find $u(x,t)$ and $\eta(t)$. When $\nu = \frac{1}{2}$, \eqref{free-prob-1} reduces to a classical Stefan problem for the melting of ice over a one-dimensional semi-infinite spatial domain~\citep{Cr84,Hi87}. In this context, the PDE under consideration is the heat equation. The interval~$[0,\eta(t)]$ is the region occupied by water. 
The last equation in \eqref{free-prob-1} is also known as the Stefan condition and $r$ is the ratio of latent to specific sensible heat. However, when $0 < \nu < \frac{1}{2}$, the physical interpretation of the problem in the context of melting of ice is not necessarily valid and we therefore study the IBVP~\eqref{free-prob-1} strictly from a theoretical perspective. Comparing \eqref{free-prob-1} with \eqref{gen-IBVP}, we identify $\eta^-(t) = 0$, $\eta^+(t) = \eta(t)$, $\kappa = 1$, $a = 1$, $b = 0$, $c = 1$, $d = 0$, $g^-(t) = 1$, $g^+(t) = 0$ and $f(x) = u_0 \chi_{(0,\infty)}(x)$. The last equation in \eqref{free-prob-1} provides a condition (`fractional Stefan condition') for the moving boundary~$\eta(t)$. Take $f_\mathrm{ext}(x) = u_0 \chi_{(-\infty,0) \cup (0,\infty)}(x)$ for all $x \in \mathbb R$ for instance. Using Proposition~\ref{R-integral}, we deduce that \begin{equation*} \int_{-\infty}^\infty \frac{1}{2} R_{\mu,\nu}(\vert x - \xi\vert,t) \, \d \xi = \int_{-\infty}^\infty \frac{1}{2} R_{\mu,\nu}(\vert z \vert,t) \, \d z = \delta_{\mu + \nu}(t) \end{equation*} for any $x \in \mathbb R$. In particular, \begin{equation} \label{R-int-real} \int_{-\infty}^\infty \frac{1}{2} R_{1 - \nu,\nu}(\vert x - \xi \vert,t) \, \d \xi = 1, \quad \int_{-\infty}^\infty \frac{1}{2} R_{\nu,\nu}(\vert x - \xi \vert,t) \, \d \xi = \delta_{2 \nu}(t). \end{equation} Assuming that $\varphi^+(t) = 0$ so as to be able to do some explicit calculations, \eqref{u-sol-1} and \eqref{u-sol-2} respectively give \begin{equation*} u(x,t) = \begin{cases} u_0 + \int_0^t \frac{1}{2} R_{2 \nu,\nu}(x,t - \tau) \varphi^-(\tau) \, \d \tau & \text{if $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$}, \\ u_0 \delta_{2 \nu}(t) + \int_0^t \frac{1}{2} R_{2 \nu,\nu}(x,t - \tau) \varphi^-(\tau) \, \d \tau & \text{if $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$}. \end{cases} \end{equation*} Eqs.~\eqref{h-minus-1}, \eqref{h-minus-2}, \eqref{h-plus-1} and \eqref{h-plus-2} yield \begin{equation*} h^-(t) = \begin{cases} 1 - u_0 & \text{if $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$}, \\ 1 - u_0 \delta_{2 \nu}(t) & \text{if $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$}, \end{cases} \quad h^+(t) = \begin{cases} -u_0 & \text{if $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$}, \\ -u_0 \delta_{2 \nu}(t) & \text{if $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$}. \end{cases} \end{equation*} Next, let us look at the left BC. Suppose that $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$. Eq.~\eqref{left-BC} gives $$ \int_0^t \frac{1}{2} R_{2 \nu,\nu}(0+,t - \tau) \varphi^-(\tau) \, \d \tau = 1 - u_0 \quad \text{or} \quad \D{}{0}{t}{-2 \nu}\varphi^-(t) = 2 (1 - u_0) $$ using Proposition~\ref{R-a-zero} and \eqref{conv-int}. If $\Phi^-(s) = \L\{\varphi^-(t);s\}$, then $$ \Phi^-(s) = \frac{2 (1 - u_0)}{s^{1 - 2 \nu}}. $$ Therefore $$ \varphi^-(t) = 2 (1 - u_0) \delta_{1 - 2 \nu}(t) $$ for the Caputo case. Now suppose that $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$. This time \eqref{left-BC} gives $$ \int_0^t \frac{1}{2} R_{2 \nu,\nu}(0+,t - \tau) \varphi^-(\tau) \, \d \tau = 1 - u_0 \delta_{2 \nu}(t) \quad \text{or} \quad \D{}{0}{t}{-2 \nu}\varphi^-(t) = 2 [1 - u_0 \delta_{2 \nu}(t)]. $$ Then $$ \Phi^-(s) = \frac{2}{s^{1 - 2 \nu}} - 2 u_0, $$ which yields $$ \varphi^-(t) = 2 \delta_{1 - 2 \nu}(t) - 2 u_0 \delta(t) $$ for the Riemann-Liouville case. 
Summarising, from the left BC~\eqref{left-BC} we deduce that \begin{equation*} \varphi^-(t) = \begin{cases} 2 (1 - u_0) \delta_{1 - 2 \nu}(t) & \text{if $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$}, \\ 2 \delta_{1 - 2 \nu}(t) - 2 u_0 \delta(t) & \text{if $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$}. \end{cases} \end{equation*} We now examine the right BC starting with $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$. From \eqref{right-BC} we see that $$ \int_0^t \frac{1}{2} R_{2 \nu,\nu}(\eta(t),t - \tau) \varphi^-(\tau) \, \d \tau = -u_0. $$ But \eqref{conv-int}, \eqref{R-basic} and the semigroup property for the Riemann-Liouville fractional integral lead to \begin{align*} & \int_0^t \frac{1}{2} R_{2 \nu,\nu}(\eta(t),t - \tau) \varphi^-(\tau) \, \d \tau = \int_0^t (1 - u_0) R_{2 \nu,\nu}(\eta(t),t - \tau) \delta_{1 - 2 \nu}(\tau) \, \d \tau \\ & \qquad = (1 - u_0) {}_{0}^{}D_{t}^{-(1 -2 \nu)} R_{2 \nu,\nu}(\eta(t),t) = (1 - u_0) {}_{0}^{}D_{t}^{-(1 -2 \nu)} {}_{0}^{}D_{t}^{-2 \nu} R_{0,\nu}(\eta(t),t) \\ & \qquad = (1 - u_0) {}_{0}^{}D_{t}^{-1} R_{0,\nu}(\eta(t),t) = (1 - u_0) R_{1,\nu}(\eta(t),t). \end{align*} Hence the right BC for the Caputo case becomes $$ R_{1,\nu}(\eta(t),t) = -\frac{u_0}{1 - u_0}. $$ Now let $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$. Eq.~\eqref{right-BC} in this case is $$ \int_0^t \frac{1}{2} R_{2 \nu,\nu}(\eta(t),t - \tau) \varphi^-(\tau) \, \d \tau = -u_0 \delta_{2 \nu}(t). $$ We have from \eqref{conv-int}, \eqref{R-basic} and the semigroup property for the Riemann-Liouville fractional integral that \begin{align*} & \int_0^t \frac{1}{2} R_{2 \nu,\nu}(\eta(t),t - \tau) \varphi^-(\tau) \, \d \tau = \int_0^t \frac{1}{2} R_{2 \nu,\nu}(\eta(t),t - \tau) [2 \delta_{1 - 2 \nu}(\tau) - 2 u_0 \delta(\tau)] \, \d \tau \\ & \qquad = {}_{0}^{}D_{t}^{-(1 - 2 \nu)} R_{2 \nu,\nu}(\eta(t),t) - u_0 R_{2 \nu,\nu}(\eta(t),t) \\ & \qquad = {}_{0}^{}D_{t}^{-(1 -2 \nu)} {}_{0}^{}D_{t}^{-2 \nu} R_{0,\nu}(\eta(t),t) - u_0 R_{2 \nu,\nu}(\eta(t),t) \\ & \qquad = {}_{0}^{}D_{t}^{-1} R_{0,\nu}(\eta(t),t) - u_0 R_{2 \nu,\nu}(\eta(t),t) = R_{1,\nu}(\eta(t),t) - u_0 R_{2 \nu,\nu}(\eta(t),t). \end{align*} Therefore the right BC for the Riemann-Liouville case becomes $$ R_{1,\nu}(\eta(t),t) - u_0 R_{2 \nu,\nu}(\eta(t),t) = -u_0 \delta_{2 \nu}(t). $$ In summary, the right BC~\eqref{right-BC} is equivalent to \begin{equation} \label{free-prob-1-right-BC} \begin{cases} R_{1,\nu}(\eta(t),t) = -\frac{u_0}{1 - u_0} & \text{if $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$}, \\ R_{1,\nu}(\eta(t),t) - u_0 R_{2 \nu,\nu}(\eta(t),t) = -u_0 \delta_{2 \nu}(t) & \text{if $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$}. \end{cases} \end{equation} Finally, we consider the `fractional Stefan condition'. Observe in \eqref{u-sol-1-der} and \eqref{u-sol-2-der} that $$ -\int_{-\infty}^{x} \frac{u_0}{2} R_{\mu,\nu}(x - \xi,t) \, \d \xi + \int_x^{\infty} \frac{u_0}{2} R_{\mu,\nu}(\xi - x,t) \, \d \xi = 0 $$ for any $\mu \ge 0$. If $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$, then similar arguments as above give \begin{align*} \frac{\partial u}{\partial x}(x,t) & = -\int_0^t \frac{1}{2} R_{\nu,\nu}(x,t - \tau) \varphi^-(\tau) \, \d \tau = -\int_0^t (1 - u_0) R_{\nu,\nu}(x,t - \tau) \delta_{1 - 2 \nu}(\tau) \, \d \tau \\ & = -(1 - u_0) {}_{0}^{}D_{t}^{-(1 -2 \nu)} R_{\nu,\nu}(x,t) = -(1 - u_0) {}_{0}^{}D_{t}^{-(1 -2 \nu)} {}_{0}^{}D_{t}^{-\nu} R_{0,\nu}(x,t) \\ & = -(1 - u_0) {}_{0}^{}D_{t}^{-(1 - \nu)} R_{0,\nu}(x,t) = -(1 - u_0) R_{1 - \nu,\nu}(x,t).
\end{align*} On the other hand, if $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$, then \begin{align*} \frac{\partial u}{\partial x}(x,t) & = -\int_0^t \frac{1}{2} R_{\nu,\nu}(x,t - \tau) \varphi^-(\tau) \, \d \tau = - \int_0^t R_{\nu,\nu}(x,t - \tau) [\delta_{1 - 2 \nu}(\tau) - u_0 \delta(\tau)] \, \d \tau \\ & = -{}_{0}^{}D_{t}^{-(1 -2 \nu)} R_{\nu,\nu}(x,t) + u_0 R_{\nu,\nu}(x,t) = -{}_{0}^{}D_{t}^{-(1 -2 \nu)} {}_{0}^{}D_{t}^{-\nu} R_{0,\nu}(x,t) + u_0 R_{\nu,\nu}(x,t) \\ & = -{}_{0}^{}D_{t}^{-(1 - \nu)} R_{0,\nu}(x,t) + u_0 R_{\nu,\nu}(x,t) = -R_{1 - \nu,\nu}(x,t) + u_0 R_{\nu,\nu}(x,t). \end{align*} Summarising, the `fractional Stefan condition' becomes \begin{equation} \label{free-prob-1-stefan} -r \D{}{}{}{2 \nu} \eta(t) = \begin{cases} -(1 - u_0) R_{1 - \nu,\nu}(\eta(t),t) & \text{if $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$}, \\ -R_{1 - \nu,\nu}(\eta(t),t) + u_0 R_{\nu,\nu}(\eta(t),t) & \text{if $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$}. \end{cases} \end{equation} It remains to determine $\eta(t)$. Looking at the series representation in Proposition~\ref{R-series} and the known similarity solution of the classical diffusion equation when $\nu = \frac{1}{2}$, we propose the ansatz~$\eta(t) = 2 \alpha t^\nu$ for some constant~$\alpha$ to be determined. Then $$ \D{C}{0}{t}{2 \nu} \eta(t) = \D{}{0}{t}{2 \nu} \eta(t) = \frac{2 \alpha \Gamma(1 + \nu) t^{-\nu}}{\Gamma(1 - \nu)} $$ and $$ R_{\mu,\nu}(\eta(t),t) = t^{\mu - 1} W(-2 \alpha;-\nu,\mu) $$ for any $\mu \ge 0$. If $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$ in the right BC~\eqref{free-prob-1-right-BC}, then \begin{equation} \label{free-prob-1-trans-1} W(-2 \alpha;-\nu,1) = -\frac{u_0}{1 - u_0}, \end{equation} which is a transcendental equation involving $\alpha$ and $u_0$. However, if $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$ in the right BC~\eqref{free-prob-1-right-BC}, then $$ W(-2 \alpha;-\nu,1)- u_0 t^{2 \nu - 1} W(-2 \alpha;-\nu,2 \nu) = -\frac{u_0 t^{2 \nu - 1}}{\Gamma(2 \nu)}, $$ which becomes an identity only when $\nu = \frac{1}{2}$. Hence we immediately conclude, without needing to verify the corresponding `fractional Stefan condition' in \eqref{free-prob-1-stefan}, that the ansatz~$\eta(t) = 2 \alpha t^\nu$ will not work when $0 < \nu < \frac{1}{2}$ for the Riemann-Liouville case. Taking $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$ in the `fractional Stefan condition'~\eqref{free-prob-1-stefan}, we obtain $$ -\frac{2 \alpha r \Gamma(1 + \nu) t^{-\nu}}{\Gamma(1 - \nu)} = -(1 - u_0) t^{-\nu} W(-2 \alpha;-\nu,1 - \nu) $$ or \begin{equation} \label{free-prob-1-trans-2} \frac{2 \alpha r \Gamma(1 + \nu)}{(1 - u_0) \Gamma(1 - \nu)} = W(-2 \alpha;-\nu,1 - \nu), \end{equation} another transcendental equation involving $\alpha$ and $u_0$. From \eqref{free-prob-1-trans-1} we can solve $$ u_0 = -\frac{W(-2 \alpha;-\nu,1)}{1 - W(-2 \alpha;-\nu,1)}, \quad 1 - u_0 = \frac{1}{1 - W(-2 \alpha;-\nu,1)}. $$ Substituting these into \eqref{free-prob-1-trans-2}, we get a transcendental equation only for $\alpha$, namely \begin{equation} \label{free-prob-1-trans-3} \frac{2 \alpha r \Gamma(1 + \nu)}{\Gamma(1 - \nu)} [1 - W(-2 \alpha;-\nu,1)] = W(-2 \alpha;-\nu,1 - \nu). 
\end{equation} Therefore \begin{equation*} \begin{split} u(x,t) & = u_0 + \int_0^t \frac{1}{2} R_{2 \nu,\nu}(x,t - \tau) 2 (1 - u_0) \delta_{1 - 2 \nu}(\tau) \, \d \tau = u_0 + (1 - u_0) \D{}{0}{t}{-(1 - 2 \nu)} R_{2 \nu,\nu}(x,t) \\ & = u_0 + (1 - u_0) \D{}{0}{t}{-(1 - 2 \nu)} \D{}{0}{t}{-2 \nu} R_{0,\nu}(x,t) = u_0 + (1 - u_0) \D{}{0}{t}{-1} R_{0,\nu}(x,t) \\ & = u_0 + (1 - u_0) R_{1,\nu}(x,t) = \frac{R_{1,\nu}(x,t) - W(-2 \alpha;-\nu,1)}{1 - W(-2 \alpha;-\nu,1)} \end{split} \end{equation*} and the analytical solution of the moving boundary problem for the Caputo case is \begin{equation} \label{free-prob-1-sol} u(x,t) = \frac{R_{1,\nu}(x,t) - W(-2 \alpha;-\nu,1)}{1 - W(-2 \alpha;-\nu,1)}, \quad \eta(t) = 2 \alpha t^\nu, \end{equation} where $\alpha$ satisfies the transcendental equation~\eqref{free-prob-1-trans-3}. \begin{rem} When $\nu = \frac{1}{2}$, \eqref{W-special} yields $$ W\Big(-2 \alpha;-\frac{1}{2},\frac{1}{2}\Big) = \frac{\mathrm e^{-\alpha^2}}{\sqrt{\pi}}, \quad W\Big(-2\alpha;-\frac{1}{2},1\Big) = 1 - \erf(\alpha), $$ while \eqref{R-special-cases} gives $$ R_{1,\frac{1}{2}}(x,t) = \erfc\Big(\frac{x}{2 \sqrt{t}}\Big) = 1 - \erf\Big(\frac{x}{2 \sqrt{t}}\Big). $$ Therefore \eqref{free-prob-1-sol} simplifies to $$ u(x,t) = \frac{\erfc(\frac{x}{2 \sqrt{t}}) - 1 + \erf(\alpha)}{\erf(\alpha)} = 1 - \frac{\erf(\frac{x}{2 \sqrt{t}})}{\erf(\alpha)}, \quad \eta(t) = 2 \alpha \sqrt{t}, $$ where $\alpha$ satisfies the transcendental equation $$ r \sqrt{\pi} \alpha \erf(\alpha) \mathrm e^{\alpha^2} = 1. $$ This is of course the well-known Neumann solution of the given Stefan problem for the heat equation~\citep{Cr84,Hi87}, typically obtained through a similarity analysis. \end{rem} \end{ex} \begin{ex} Consider the moving boundary problem \begin{equation} \label{free-prob-2} \left\{ \begin{split} & \D{}{}{}{2 \nu} u = \frac{\partial^2 u}{\partial x^2}, \quad \eta(t) < x < \infty, \quad t > 0, \\ & u(x,0) = -1, \quad 0 \le x < \infty, \\ & u(\eta(t),t) = 0, \quad u(\infty,t) = -1, \quad t > 0, \\ & \D{}{}{}{2 \nu} \eta(t) = \frac{1}{r} \Big[1 + \frac{\partial u}{\partial x}(\eta(t),t)\Big], \quad t > 0, \end{split} \right. \end{equation} where $r$ is a positive constant and $\eta(t)$ is the moving boundary. Again, we wish to find $u(x,t)$ and $\eta(t)$. When $\nu = \frac{1}{2}$, \eqref{free-prob-2} reduces to a Stefan problem involving a single-phase, semi-infinite, subcooled material. One application is the determination of whether ice melts or water freezes when hot water is thrown over cold ice~\citep{Hu89}. The mathematical formulation for the heat equation is a Stefan problem with a constant heat source term in the condition at the boundary~\citep{KiRi00}. Furthermore, a related industrial process is ablation, i.e.~mass is removed from an object by vapourisation or similar erosive processes~\citep{MiMy08,Mi12,MiMy12}. As in the previous example, the same physical interpretation when $0 < \nu < \frac{1}{2}$ is not necessarily valid, so our interest here is theoretical. We also refer to the last equation in \eqref{free-prob-2} as a `fractional Stefan condition'. Comparing \eqref{free-prob-2} with \eqref{gen-IBVP}, we identify $\eta^-(t) = \eta(t)$, $\eta^+(t) = \infty$, $\kappa = 1$, $a = 1$, $b = 0$, $c = 1$, $d = 0$, $g^-(t) = 0$, $g^+(t) = -1$ and $f(x) = -1$. The last equation in \eqref{free-prob-2} provides a condition for the moving boundary~$\eta(t)$. Take $f_\mathrm{ext}(x) = -1$ for all $x \in \mathbb R$ for example.
Note that $R_{\mu,\nu}(\infty,t) = \lim_{a \rightarrow \infty} \L^{-1}\{s^{-\mu} \mathrm e^{-a s^\nu};t\} = 0$. Using \eqref{u-sol-1} and \eqref{u-sol-2}, we have \begin{equation} \label{free-prob-2-u} u(x,t) = \begin{cases} -1 + \int_0^t \frac{1}{2} R_{2 \nu,\nu}(x - \eta(\tau),t - \tau) \varphi^-(\tau) \, \d \tau & \text{if $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$}, \\ -\delta_{2 \nu}(t) + \int_0^t \frac{1}{2} R_{2 \nu,\nu}(x - \eta(\tau),t - \tau) \varphi^-(\tau) \, \d \tau & \text{if $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$}. \end{cases} \end{equation} Eqs.~\eqref{h-minus-1}, \eqref{h-minus-2}, \eqref{h-plus-1} and \eqref{h-plus-2} yield \begin{equation*} h^-(t) = \begin{cases} 1 & \text{if $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$}, \\ \delta_{2 \nu}(t) & \text{if $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$}, \end{cases} \quad h^+(t) = \begin{cases} -1 & \text{if $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$}, \\ -1 & \text{if $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$}. \end{cases} \end{equation*} From \eqref{left-BC} we deduce that the left BC is \begin{equation} \label{free-prob-2-phi} \begin{cases} \int_0^t \frac{1}{2} R_{2 \nu,\nu}(\eta(t) - \eta(\tau),t - \tau) \varphi^-(\tau) \, \d \tau = 1 & \text{if $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$}, \\ \int_0^t \frac{1}{2} R_{2 \nu,\nu}(\eta(t) - \eta(\tau),t - \tau) \varphi^-(\tau) \, \d \tau = \delta_{2 \nu}(t)& \text{if $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$}, \end{cases} \end{equation} while \eqref{right-BC} gives the right BC \begin{equation*} \begin{cases} \int_0^t \frac{1}{2} \delta_{2 \nu}(t - \tau) \varphi^+(\tau) \, \d \tau = -1 & \text{if $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$}, \\ \int_0^t \frac{1}{2} \delta_{2 \nu}(t - \tau) \varphi^+(\tau) \, \d \tau = -1 & \text{if $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$}. \end{cases} \end{equation*} Observe that we used Proposition~\ref{R-a-zero}, and both Caputo and Riemann-Liouville cases have the same right BC because they also have the same $h^+(t)$. In fact, the right BC can be expressed as $\D{}{0}{t}{-2 \nu} \varphi^+(t) = -2$ using \eqref{conv-int}. If $\Phi^+(s) = \L\{\varphi^+(t);s\}$, then $$ \Phi^+(s) = -\frac{2}{s^{1 - 2 \nu}}; $$ thus $\varphi^+(t) = -2 \delta_{1 - 2 \nu}(t)$. To use the `fractional Stefan condition', we first calculate \begin{align*} \frac{\partial u}{\partial x}(x,t) & = -\int_0^t \frac{1}{2} R_{\nu,\nu}(x - \eta(\tau),t - \tau) \varphi^-(\tau) \, \d \tau, \end{align*} which implies that \begin{equation} \label{free-prob-2-stefan} \begin{cases} r \D{C}{0}{t}{2 \nu} \eta(t) = 1 - \int_0^t \frac{1}{2} R_{\nu,\nu}(\eta(t) - \eta(\tau),t - \tau) \varphi^-(\tau) \, \d \tau & \text{if $\D{}{}{}{2 \nu} = \D{C}{0}{t}{2 \nu}$}, \\ r \D{}{0}{t}{2 \nu} \eta(t) = 1 - \int_0^t \frac{1}{2} R_{\nu,\nu}(\eta(t) - \eta(\tau),t - \tau) \varphi^-(\tau) \, \d \tau & \text{if $\D{}{}{}{2 \nu} = \D{}{0}{t}{2 \nu}$}. \end{cases} \end{equation} Hence the solution of the moving boundary problem is described by \eqref{free-prob-2-u}, \eqref{free-prob-2-phi} and \eqref{free-prob-2-stefan}. It does not appear to be possible to solve for $\varphi^-(t)$ and $\eta(t)$ explicitly (assuming that the solutions even exist) and therefore the integral equations have to be solved numerically. Note that although $\varphi^+(t) = -2 \delta_{1 - 2 \nu}(t)$ has been determined for both Caputo and Riemann-Liouville cases, the expressions in \eqref{free-prob-2-u}, \eqref{free-prob-2-phi} and \eqref{free-prob-2-stefan} do not actually depend on it explicitly. 
\end{ex} \section{Concluding remarks} In this article, we derived the solution of a general IBVP for the time-fractional diffusion equation using the embedding method. The formulation of the IBVP incorporates time-dependent BCs and allows the consideration of bounded and unbounded spatial domains. The solution of the IBVP generalises the results in \citet{RoTh21} for the classical diffusion equation and in \citet{Ro22b} for a particular class of IBVPs with Dirichlet BCs for the time-fractional diffusion equation. We then used the solution of the IBVP to solve two representative examples of moving boundary problems for the time-fractional diffusion equation. In particular, the solution of the first problem is a `fractional' generalisation of the well-known Neumann solution for a Stefan problem for melting ice. The embedding method gives rise to a system of integral equations for certain time-dependent functions, which in general needs to be solved numerically. The numerical solution of IBVPs and moving boundary problems for the time-fractional diffusion equation is currently work in progress. However, the numerical solution of IBVPs for the classical diffusion equation has been carried out in \citet{RoTh22}. The novelty here for IBVPs for the time-fractional diffusion equation is that the linear Volterra integral equations of the first kind for $\varphi^\pm(t)$ now involve $R_{\mu,\nu}(a,t)$. Hence it is necessary to be able to compute these numerically. As this auxiliary function satisfies certain fractional integral and differential equations, a necessary first step seems to be to solve these equations numerically (e.g.~using finite differences) for $R_{\mu,\nu}(a,t)$ and to adapt the boundary element method for solving linear Volterra integral equations of the first kind proposed in \cite{RoTh22} for the classical diffusion equation. Other future directions are multilayer problems for the time-fractional diffusion equation and a further investigation of the properties and applications of the auxiliary function~$R_{\mu,\nu}(a,t)$.
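The concluding remarks point out that the numerical evaluation of $R_{\mu,\nu}(a,t)$ is a prerequisite for solving the integral equations numerically. As a complement, here is a minimal Python sketch of one way to do this; it is our illustration and not code from the cited works, and the truncation level, test values and SciPy helpers are our choices. It evaluates $R_{\mu,\nu}(a,t)$ through the series representation of Proposition~\ref{R-series} (valid for $0 < \nu \le \frac{1}{2}$), checks it against the closed form~$R_{1,\frac{1}{2}}(a,t) = \erfc(a/(2 \sqrt{t}))$ in \eqref{R-special-cases}, and solves the transcendental equation~\eqref{free-prob-1-trans-3} of the first moving boundary problem for $\alpha$.
\begin{verbatim}
import numpy as np
from scipy.special import rgamma, gamma, erf, erfc  # rgamma(z) = 1/Gamma(z)
from scipy.optimize import brentq

def wright(z, alpha, beta, n_terms=120):
    """Truncated Wright series W(z; alpha, beta); the series is entire for
    alpha > -1, so plain truncation is adequate for moderate |z|."""
    total, z_pow_over_fact = 0.0, 1.0  # accumulates z**j / j!
    for j in range(n_terms):
        total += z_pow_over_fact * rgamma(alpha * j + beta)
        z_pow_over_fact *= z / (j + 1)
    return total

def R(mu, nu, a, t):
    """R_{mu,nu}(a,t) = t**(mu-1) * W(-a*t**(-nu); -nu, mu), Prop. R-series."""
    return t ** (mu - 1.0) * wright(-a * t ** (-nu), -nu, mu)

# Sanity check against the closed form R_{1,1/2}(a,t) = erfc(a/(2*sqrt(t))).
a, t = 2.5, 1.7
assert abs(R(1.0, 0.5, a, t) - erfc(a / (2.0 * np.sqrt(t)))) < 1e-10

# Transcendental equation (free-prob-1-trans-3) for alpha (Caputo case).
def trans3(al, nu, r):
    lhs = (2.0 * al * r * gamma(1.0 + nu) / gamma(1.0 - nu)
           * (1.0 - wright(-2.0 * al, -nu, 1.0)))
    return lhs - wright(-2.0 * al, -nu, 1.0 - nu)

r = 1.0
alpha_half = brentq(lambda al: trans3(al, 0.5, r), 1e-6, 3.0)
# For nu = 1/2 this must agree with the classical Neumann root of
# r*sqrt(pi)*alpha*erf(alpha)*exp(alpha**2) = 1.
neumann = brentq(lambda al: r * np.sqrt(np.pi) * al * erf(al)
                 * np.exp(al ** 2) - 1.0, 1e-6, 3.0)
assert abs(alpha_half - neumann) < 1e-8

# A genuinely fractional case: the front coefficient for nu = 0.4.
alpha_04 = brentq(lambda al: trans3(al, 0.4, r), 1e-6, 3.0)
print(f"alpha(nu=0.5) = {alpha_half:.6f}, alpha(nu=0.4) = {alpha_04:.6f}")
\end{verbatim}
The reciprocal gamma function is used because it is entire and vanishes at the poles of $\Gamma$, which handles the nonpositive arguments~$-\nu j + \mu$ arising in the series without special-casing.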
{ "timestamp": "2023-01-04T02:08:46", "yymm": "2212", "arxiv_id": "2212.11794", "language": "en", "url": "https://arxiv.org/abs/2212.11794" }
"\\section{Introduction}\nToday, software systems have a significant role in various domains among w(...TRUNCATED)
{"timestamp":"2022-12-23T02:13:25","yymm":"2212","arxiv_id":"2212.11774","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\nIn this paper we continue our work that connects random groups with the fi(...TRUNCATED)
{"timestamp":"2022-12-23T02:13:32","yymm":"2212","arxiv_id":"2212.11780","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\\label{sec:introduction}\n\nMuch of the modern-era precision in hadron spec(...TRUNCATED)
{"timestamp":"2022-12-23T02:13:19","yymm":"2212","arxiv_id":"2212.11767","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction} \\label{chap:introduction}\n \n Time-series forecasting has become a (...TRUNCATED)
{"timestamp":"2022-12-23T02:13:23","yymm":"2212","arxiv_id":"2212.11771","language":"en","url":"http(...TRUNCATED)
"\\section{\\@startsection {section}{1}{\\z@}{-3.5ex plus -1ex minus\n -.2ex}{2.3ex plus .2ex}{\\lar(...TRUNCATED)
{"timestamp":"2022-12-23T02:14:51","yymm":"2212","arxiv_id":"2212.11848","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\\label{sec:Introduction}\n\n\n\n\n\n\\IEEEPARstart{R}{adio maps} are repres(...TRUNCATED)
{"timestamp":"2022-12-23T02:13:31","yymm":"2212","arxiv_id":"2212.11777","language":"en","url":"http(...TRUNCATED)
End of preview.
